Dataset columns:
  image_url: string (length 113 to 131) or null
  tags: sequence of strings
  discussion: list of posts (code, text, username)
  title: string (length 8 to 254)
  created_at: string (length 24)
  fancy_title: string (length 8 to 396)
  views: int64 (73 to 422k)
null
[ "atlas-functions", "stitch" ]
[ { "code": "found more than one node_modules archive in the '<MY-DIR>' directoryexec: \"transpiler\": executable file not found in %PATH%", "text": "So I tried following this guide to install and import external dependencies via the CLI.So what I did is:This resulted in the following error:\nfound more than one node_modules archive in the '<MY-DIR>' directorySo naturally I tried deleting either the node_modules folder and later the archive, but both resulted in this error:\nexec: \"transpiler\": executable file not found in %PATH%So at this point I gave up and just uploaded the tar to the UI which worked.\nHowever, I’d love to get this to work in my CI/CD workflow.Any ideas what causes this error and how I can get around this?", "username": "Nico_Rogers" }, { "code": "", "text": "Hey Nico - do you mind sharing what CLI version you’re using and if it’s the most updated version?", "username": "Sumedha_Mehta1" } ]
Trouble importing external dependencies
2020-04-19T00:06:32.292Z
Trouble importing external dependencies
2,908
null
[ "queries" ]
[ { "code": "db.tracks.update(\n {\"_id\" : ObjectId(\"5d91dabf0413c90008b39c3e\")},\n {$set : { \"name\" : \"This is sample text\" }},\n {multi : true}\n)\n", "text": "I need help in mongodb to update record as explained below.original text in “name” field of a document is “This is sample text (with some text into bracket)” and\nI need to update it to “This is sample text”.\nI want to remove everything inside bracket including bracket.I am using following query in mongodb studio 3T intellisenseI want to use substr to get text upto opening bracket position, but question is - how to find variable position of opening bracket “(” for substr function and how to use substr.Regards,", "username": "Rushi_Pandya" }, { "code": "$indexOfBytesdb.tracks.update( {\"_id\" : ObjectId(\"5d91dabf0413c90008b39c3e\")},\n [{$set : {name : \n { $substr: [\"$text\",0,{ $indexOfBytes : [\"$text\", \"(\"] }]}\n }}],{multi : true});\n", "text": "Hi @Rushi_Pandya ,Welcome to MongoDB Community.I think you can use $indexOfBytes aggregation operation with a 4.2 feature called pipeline updates:If your version is prior to 4.2 you will need to query with aggregate and update in a second statement.Best regards,\nPavel", "username": "Pavel_Duchovny" } ]
Find character position in string
2020-10-17T06:44:22.711Z
Find character position in string
3,694
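A minimal PyMongo sketch of the pipeline-style update worked out in the thread above (it is not taken from the thread itself). The connection string and database name are placeholders, the field name follows the original question, and the document is assumed to contain an opening bracket.

```python
# Hypothetical connection details; MongoDB 4.2+ is required for
# pipeline-style updates.
from pymongo import MongoClient
from bson import ObjectId

tracks = MongoClient("mongodb://localhost:27017")["mydb"]["tracks"]

# The update is a pipeline, so $indexOfBytes and $substr can read the
# document's own "name" field. Assumes the name contains "(";
# $indexOfBytes returns -1 when the substring is absent.
tracks.update_one(
    {"_id": ObjectId("5d91dabf0413c90008b39c3e")},
    [
        {
            "$set": {
                "name": {
                    "$substr": ["$name", 0, {"$indexOfBytes": ["$name", "("]}]
                }
            }
        }
    ],
)
```

If the original text has a space before the bracket, wrapping the $substr expression in $trim would drop the trailing space as well.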
null
[]
[ { "code": "const ShirtSchema = new Schema({\n {...}\n images: {\n type: [{\n type: Schema.Types.ObjectId,\n ref: 'Image'\n }]\n },\n {...}\n},\n);\n .populate({path: 'spells', options: { sort: [['damages', 'asc']] }})\n", "text": "Hi, this is my first post here, I’m glad I found this community. I’ve been working with mongo for a couple of years and I implemented it in a new app, now in production. (using mongoose). (My question is basically about mongoose, if this is not the correct place, I apologise.)I am now refactoring and trying to get the best of mongo. One problem I found was trying to sort a populated field, which is an array of objects. I need to sort inside the array by createdAt field.I have the models Shirt and Image (yes, I now know that probably it would’ve been better to embed image into shirt), and Shirt has an array of Image objectId. Those images can be up to three, and it is very important for the app to have the images sorted asc by date.I’ve seen options like this in stackoverflowbut didn’t work.I wanted to ask what would be the best approach (if there is one). I tried to use aggregate but also didn’t get the result. Do you think that I should change the structure of the database, and embed image instead of making reference. I basically did it because I am using the model image also for other things, and I think is more clear, but now I am really asking myself if it was necessary.Some advice would be really good for me, not only for this issue in particular, but also for the future.Thank you very much.", "username": "Juan_Ignacio_Benito" }, { "code": "", "text": "Hi @Juan_Ignacio_Benito,Welcome to MongoDB community!It sounds like embedding might be a good option but you will still need to push the images in an ordered way.Please note I am not that familiar with mongoose but MongoDB allows you to push in a sorted manner based on an array object field when using $each and $sort with $push.If you query this data why can’t you sort it by createDate? 
Is that a $lookup or 2 seperate queries?Can you provide sample docs from both collections and the current queries used?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": " const shirts = await _shirt\n .aggregate([\n {\n $lookup: {\n from: _team.collection.name,\n localField: 'team',\n foreignField: '_id',\n as: 'team',\n },\n },\n {\n $lookup: {\n from: _comment.collection.name,\n localField: 'comments',\n foreignField: '_id',\n as: 'comments',\n },\n },\n {\n $lookup: {\n from: _image.collection.name,\n localField: 'images',\n foreignField: '_id',\n as: 'images',\n },\n },\n {\n $addFields: { totalLikes: { \"$size\": \"$likes\" }, totalComments: { \"$size\": \"$comments\" } }\n },\n {\n \"$project\": {\n \"_id\": \"$$ROOT\", \"images\": \"$images\"\n }\n },\n {\n $unwind:\n {\n path: \"$images\",\n preserveNullAndEmptyArrays: true\n }\n },\n {\n \"$sort\": { \"images.imageName\": 1 }\n },\n {\n \"$group\": {\n \"_id\": \"$_id\",\n \"_images\": { \"$push\": \"$images\" }\n }\n },\n {\n \"$project\": {\n \"_id\": \"$_id._id\",\n \"totalLikes\": \"$_id.totalLikes\",\n \"totalComments\": \"$_id.totalComments\",\n \"images\": \"$_images\",\n \"team\": \"$_id.team\",\n \"title\": \"$_id.title\",\n \"comments\": \"$_id.comments\",\n \"statusType\": \"$_id.statusType\",\n \"size\": \"$_id.size\",\n \"year\": \"$_id.year\",\n \"brand\": \"$_id.brand\",\n \"code\": \"$_id.code\",\n \"description\": \"$_id.description\",\n \"isHome\": \"$_id.isHome\",\n \"isFan\": \"$_id.isFan\",\n \"isNewShirt\": \"$_id.isNewShirt\",\n \"shirtUser\": \"$_id.shirtUser\",\n \"likes\": \"$_id.likes\",\n \"isSoftDeleted\": \"$_id.isSoftDeleted\",\n \"createdAt\": \"$_id.createdAt\",\n \"updatedAt\": \"$_id.updatedAt\"\n }\n },\n {\n $unwind:\n {\n path: \"$team\",\n preserveNullAndEmptyArrays: true\n }\n },\n {\n $match: filters\n },\n {\n $sort: sorted\n },\n {\n $skip: skips\n },\n {\n $limit: pageSize\n }\n ])\n .collation({ locale: \"es\" })\n return shirts\n };\n{\n \"shirts\": [\n {\n \"_id\": \"5f7d36a151b64500193f3a6e\",\n \"totalLikes\": 2,\n \"totalComments\": 2,\n \"images\": [\n {\n \"_id\": \"5f7d36a151b64500193f3a6c\",\n \"cloudImage\": \"xxx.amazonaws.com/03181272-b320-4269-98c0-2ecbf5397597\",\n \"imageName\": \"image0\",\n \"createdAt\": \"2020-10-07T03:31:45.414Z\",\n \"updatedAt\": \"2020-10-07T03:31:45.414Z\",\n \"__v\": 0\n },\n {\n \"_id\": \"5f7d36a151b64500193f3a6d\",\n \"cloudImage\": \"xxx.amazonaws.com/d971ac08-fe97-45e1-bbff-37d83e939dc4\",\n \"imageName\": \"image1\",\n \"createdAt\": \"2020-10-07T03:31:45.414Z\",\n \"updatedAt\": \"2020-10-07T03:31:45.414Z\",\n \"__v\": 0\n }\n ],\n \"team\": {\n \"_id\": \"5f031e9bed4851001a8aceb6\",\n \"name\": \"Vasco Da Gama\",\n \"unique_id\": 159,\n \"country\": \"5f031e9aed4851001a8ace0d\",\n \"__v\": 0,\n \"createdAt\": \"2020-07-06T12:52:43.516Z\",\n \"updatedAt\": \"2020-07-06T12:52:43.516Z\"\n },\n \"title\": \"Vasco da Gama\",\n \"comments\": [\n {\n \"_id\": \"5f7e15370afb6d002063132e\",\n \"commentUser\": {\n \"image\": {\n \"_id\": \"5f01ffd3a5790ba8dcb2de16\",\n \"cloudImage\": \"linktoimage\",\n \"imageName\": \"userImage17\"\n },\n \"userId\": \"5eab70a2a368f33b2ba0d4e0\",\n \"username\": \"tomasm\",\n \"isVerified\": false\n },\n \"isSoftDeleted\": false,\n \"text\": \"Holaaa\",\n \"createdAt\": \"2020-10-07T19:21:27.730Z\",\n \"updatedAt\": \"2020-10-07T19:21:27.730Z\",\n \"__v\": 0\n },\n {\n \"_id\": \"5f7f85d19c46020020801e33\",\n \"commentUser\": {\n \"image\": {\n \"_id\": \"5f0212c0a5790b63e7b2de2a\",\n \"cloudImage\": 
\"linktoimage\",\n \"imageName\": \"userImage17\"\n },\n \"userId\": \"5eab7597a368f3eefea0d505\",\n \"username\": \"museo.g\",\n \"isVerified\": false\n },\n \"isSoftDeleted\": false,\n \"text\": \"Me da un miedo ver las casacas de los usuarios acá ja\",\n \"createdAt\": \"2020-10-08T21:34:09.287Z\",\n \"updatedAt\": \"2020-10-08T21:34:09.287Z\",\n \"__v\": 0\n }\n ],\n \"statusType\": 0,\n \"size\": \"P\",\n \"year\": 1999,\n \"brand\": \"Kappa\",\n \"code\": \"\",\n \"description\": \"Alternativa, manga larga, 99-00. Horrible la publicidad de ACE\",\n \"isHome\": true,\n \"isFan\": true,\n \"isNewShirt\": false,\n \"shirtUser\": {\n \"image\": {\n \"_id\": \"5f60ace02fec53001b444bb9\",\n \"cloudImage\": \"linktoimage\",\n \"imageName\": \"userImage17\"\n },\n \"deviceToken\": \"xxx\",\n \"userId\": \"5f237f550e1545b19c8cf94c\",\n \"username\": \"xxx\",\n \"isVerified\": false\n },\n \"likes\": [\n \"5eab70a2a368f33b2ba0d4e0\",\n \"5eab7597a368f3eefea0d505\"\n ],\n \"isSoftDeleted\": false,\n \"createdAt\": \"2020-10-07T03:31:45.421Z\",\n \"updatedAt\": \"2020-10-08T21:34:09.299Z\"\n }\n ],\n}\n", "text": "Hi Pavel, thank you very much for your response.Sounds like pushing the images in an ordered way might work. So it would be when saving the images, doing it using $each and $sort with $push?I am now using lookup…this is the query…So far, the array of images is being sorted in client side…This sample doc has a “shirt” with 2 images. What I am seeing now is that they are created in the exact same time, so that could be the possible problem. Anyway, it would be great for me if you could advise on how to perform this operation the best possible way.Thank you again!", "username": "Juan_Ignacio_Benito" }, { "code": "", "text": "Hi @Juan_Ignacio_Benito,Your query has 3 $lookups which is by design an antipattern for MongoDB.If data is queried together it needs to be stored together.This is the main design problem in your schemaI would say that further using skip and limit for pagination is also a bad idea.Read following blogs for better methods:A look at how the Bucket Pattern can speed up paging for usersSo it would be when saving the images, doing it using $each and $sort with $push?It would help but very slightly. Please read more on antipattern and design recommendations hereA summary of all the patterns we've looked at in this serieshttps://www.mongodb.com/article/schema-design-anti-pattern-summaryBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel, thank you very much for your response. Is it being really helpful.\nI’ve been analysing carefully my design I will start to make changes (as long as production database allow me).\nI wanted to ask you what would be the best approach to update the current data. For example, now I have in my model “shirt” a reference to model “image”. I want/need to embed image into shirt. I have 1000 documents in shirt collection. Creating a script that loops over the collection would be a good idea?.\nI wanted to ask you also if you have some program in which you can take a look at my code, and I can get some personal advise.\nThank you very much again.", "username": "Juan_Ignacio_Benito" }, { "code": "", "text": "Hi @Juan_Ignacio_Benito,First we offer consulting packages to help you with schema tuning and migrations of your data. Please let me know if you are interested and I will make sure you be contacted.Get help from the makers of MongoDB. 
Our professional services team provides the expertise to accelerate the success of your most important projects.Now regarding the way to migrate, as the amount of data sounds small but the changes are drastic probably you can do the migration as part of you code when you lunch the new version you will query and bulk replace all documents with new format and then your application release should be able to work with the migrated model.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,\nThank you for your response!. Yes please, I am interested, if it is possible to get in touch with you regarding the consulting packages would be great.", "username": "Juan_Ignacio_Benito" } ]
Sorting a populated field
2020-10-07T18:04:04.917Z
Sorting a populated field
11,731
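For the thread above, a hedged PyMongo sketch of the "$push with $each and $sort" suggestion, assuming the images end up embedded in the shirt document rather than referenced; all names and values here are illustrative only.

```python
# Illustrative names and values; not taken verbatim from the thread.
from datetime import datetime, timezone
from bson import ObjectId
from pymongo import MongoClient

shirts = MongoClient("mongodb://localhost:27017")["mydb"]["shirts"]

new_image = {
    "cloudImage": "https://example.com/some-image",
    "imageName": "image2",
    "createdAt": datetime.now(timezone.utc),
}

# $each is required in order to use the $sort modifier; $sort re-orders the
# whole embedded array by createdAt ascending on every push, so reads no
# longer need a client-side sort or an extra $lookup.
shirts.update_one(
    {"_id": ObjectId("5f7d36a151b64500193f3a6e")},
    {"$push": {"images": {"$each": [new_image], "$sort": {"createdAt": 1}}}},
)
```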
null
[]
[ { "code": "", "text": "Hi community and @Manuel_Meyer, Waste2Go arrived here, too.", "username": "Frederico_Cacador" }, { "code": "", "text": "Hey @Frederico_Cacador", "username": "Nabeel_Raza" }, { "code": "", "text": "Hi Nabeel, how are you?", "username": "Frederico_Cacador" }, { "code": "", "text": "Al Hamdulliah I am good, Thanks to Allah Al-Mighty.", "username": "Nabeel_Raza" } ]
Hello Community from Waste2Go
2020-10-13T19:37:08.526Z
Hello Community from Waste2Go
4,756
null
[ "app-services-user-auth", "graphql" ]
[ { "code": "authentication via 'custom-token' is unsupportedimport requests\n\nurl = 'https://realm.mongodb.com/api/client/v2.0/app/app-id/graphql'\n\nheaders = {\n 'jwtTokenString': jwt_token\n}\n\nr = requests.post(url, json={'query': query}, headers=headers)\nprint(r.json())\n", "text": "Issue\nWhen accessing the GraphQL endpoint of realm via the code below. Error authentication via 'custom-token' is unsupported is always returned via the endpoint.Steps Done", "username": "Marnell_James_Montev" }, { "code": "authentication via 'custom-token' is unsupported", "text": "Hey Marnell, generally when we see the authentication via 'custom-token' is unsupported it means that the changes for your Custom JWT authentication were not deployed or you did not toggle the provider to ‘Enabled’. If that’s not the case, I can take a closer look if you link your app", "username": "Sumedha_Mehta1" }, { "code": "application-0-suxcm\n", "text": "Hi Sumdedha, I did toggle it (see the screenshot).[Uploading: image.png…]If you need the application id, this is the application idNOTE: I’m just using the free tier for now since I am just experimenting of our use cases. But we will soon upgrade to premium.", "username": "Marnell_James_Montev" }, { "code": "", "text": "Hey Marnell - After toggling it, make sure you hit “Save” and then “Deploy” in the Blue banner that appears on the top. Looking at your app, it seems like the JWT authentication is turned off and you didn’t add a signing Key.image1530×224 21.1 KBimage1555×74 8.8 KB", "username": "Sumedha_Mehta1" }, { "code": "review and deploy", "text": "After clicking the review and deploy its Now okay Sumedha. Thank you very much for the support!", "username": "Marnell_James_Montev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problem using custom JWT Token when using GraphQL
2020-10-15T22:41:57.009Z
Problem using custom JWT Token when using GraphQL
3,400
null
[ "golang" ]
[ { "code": "// GlobalKeypair represents a single entry in the globals DB\ntype GlobalKeypair struct {\n\tKey string `json:\"key\" bson:\"key\"`\n\tValue interface{} `json:\"value\" bson:\"value\"`\n}\nfilter := bson.M{\"key\": key}\nkp := GlobalKeypair{\n\t\tKey: key,\n\t\tValue: value,\n\t}\ninfo, err := coll.ReplaceOne(\n\t\tcontext.Background(), filter, kp, options.Replace().SetUpsert(true))\nGlobalKeypair{Key: \"some-key\",\n Value: map[string]interface{}{\n\t\t\t\"lid-brain-test-1\": map[string]interface{}{\n\t\t\t\t\"brain\": \"StreamingBrain\",\n\t\t\t\t\"floor\": 0.65,\n\t\t\t},\n\t\t\t\"lid-brain-test-2\": map[string]interface{}{\n\t\t\t\t\"brain\": \"BulkBrain\",\n\t\t\t\t\"floor\": 0.65,\n\t\t\t},\n\t\t}\n}\n\"some-key\": [\n {\n \"Key\": \"lid-brain-test-1\",\n \"Value\": [\n {\n \"Key\": \"brain\",\n \"Value\": \"StreamingBrain\"\n },\n {\n \"Key\": \"floor\",\n \"Value\": 0.65\n }\n ]\n },\n {\n \"Key\": \"lid-brain-test-2\",\n \"Value\": [\n {\n \"Key\": \"brain\",\n \"Value\": \"BulkBrain\"\n },\n {\n \"Key\": \"floor\",\n \"Value\": 0.65\n }\n ]\n }\n ],\n", "text": "A long story about why, but we need to be able to store a freeform document a user provides.I take a payload and I store it in a database using a Key/Value approach. e.g.The issue is when it comes back out of the database, it’s getting garbled. I’m thinking due to the freeform nature of the interface.\nSo I store this:And when I pull it out, I get this:I think something is happening under the covers where it’s structuring my interface into a bson.E, but I’m at the end of my rope. I can’t figure out what’s going on.", "username": "TopherGopher" }, { "code": "// GlobalKeypair represents a single entry in the globals DB\ntype GlobalKeypair struct {\n\tKey string `json:\"key\" bson:\"key\"`\n\tValue []byte `json:\"value\" bson:\"value\"`\n}\n// value is an interface{} - get it to an []bytes\nvalBytes, err := json.Marshal(value)\nif err != nil {\n\treturn fmt.Errorf(\"Could not Marshal global into JSON: %v\", err)\n}\nfilter := bson.M{\"key\": key}\nkp := GlobalKeypair{\n\t\tKey: key,\n\t\tValue: valBytes,\n\t}\ninfo, err := coll.ReplaceOne(\n\t\tcontext.Background(), filter, kp, options.Replace().SetUpsert(true))\n\tkp := GlobalKeypair{}\n\tif err != nil {\n\t\treturn value, err\n\t}\n\terr = result.Decode(&kp)\n\tif len(kp.Value) == 0 {\n\t\t// No value - not found\n\t\treturn value, db.ErrNotFound\n\t}\n\tif err = json.Unmarshal(kp.Value, &value); err != nil {\n\t\treturn value, fmt.Errorf(\"Could not pack global into JSON: %v\", err)\n\t}\n", "text": "I found a way around and I’m posting here for others.Essentially, rather than using an interface{}, I use an bytes now, this bypasses whatever weirdness is going on when mongo unpacks an object.Then you can get it back out with:I was still hoping someone could help me understand why interface{} isn’t preserved and is instead unpacked as a nested slice.", "username": "TopherGopher" }, { "code": "bson.Dbson.DKeyValuetype GlobalKeypair struct {\n Key string\n Value bson.M\n}\nbson.Mmap[string]interface{}json", "text": "Hi @TopherGopher,The reason this happens is because the driver unpacks BSON documents in to a bson.D object, which doesn’t support converting to/from JSON. The bson.D type is internally represented as a slice to structs with fields named Key and Value, so that explains why the JSON is structured that way. 
If you know that the user-provided value will always be a document, you can change your type toUnlike bson.D, the bson.M type is simply map[string]interface{} so the standard library json functions can handle it. Can you let me know if this works for you? I’ve also opened https://jira.mongodb.org/browse/GODRIVER-1765 to make this actually work for bson.D as well.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "Hey @Divjot_Arora -\nThanks for the response and thank you for making the ticket - much obliged. \nUnfortunately, we can’t use a map for the value because it’s freeform.Our use case:Unfortunately, the bson.M as a Value technique was only compatible with one of our values. The byte however has been working awesome. I get out exactly what I put in, which is what I would expect when using an interface{}.This might be something to consider as part of ease-of-use improvements. For example, if you get a freeform interface{} as a type, don’t coerce it to a bson.D because we lose the original structure. Instead, store it as a byte. It seems to be reliable and works for all types.", "username": "TopherGopher" }, { "code": "bson.Unmarshal(myBytes, &myInterface)376 bytes14406ns/opinterface{}1107ns/opfunc convertToGoConcreteTypes(val interface{}) interface{} {\n\tif document, ok := val.(primitive.D); ok {\n\t\tresult := make(map[string]interface{})\n\t\tfor _, element := range document {\n\t\t\tresult[element.Key] = convertToGoConcreteTypes(element.Value)\n\t\t}\n\t\treturn result\n\t}\n\tif array, ok := val.(primitive.A); ok {\n\t\tresult := make([]interface{}, 0)\n\t\tfor _, i := range array {\n\t\t\tresult = append(result, convertToGoConcreteTypes(i))\n\t\t}\n\t\treturn result\n\t}\n\tif primitiveMap, ok := val.(primitive.M); ok {\n\t\tresult := make(map[string]interface{}, 0)\n\t\tfor k, v := range primitiveMap {\n\t\t\tresult[k] = convertToGoConcreteTypes(v)\n\t\t}\n\t\treturn result\n\t}\n\treturn val\n}\n", "text": "The following code can convert everything back to go concrete types, but the real problem is that the benchmarks for bson.Unmarshal(myBytes, &myInterface) are eye-popping. Unmarshalling 376 bytes of bson took 14406ns/op in my benchmarks. By comparison, the same json can be unmarshaled to an interface{} in 1107ns/op on my box. The benchmark did not include the code below.", "username": "Dustin_Currie" }, { "code": "16650ns/op15000ns", "text": "Also, if I add the conversion code to the bson.Unmarshal above I get 16650ns/op. This begs the question: Can I convert 376 bytes of bson to json in less than 15000ns? The answer has to be yes.", "username": "Dustin_Currie" } ]
Storing deeply nested data
2020-10-05T19:51:08.428Z
Storing deeply nested data
5,053
null
[ "python" ]
[ { "code": "mylist = [\n {\"_id\":1, \"set\":randomlist},\n { \"_id\":3, \"set\":randomlist2},\n \n]\n\nmycol.insert_many(mylist)\na=mydb.customer.aggregate([\n { \"$match\": { \"_id\": { \"$in\": [1, 3] } } },\n {\n \"$group\": {\n \"_id\": 0,\n \"sets\": { \"$push\": \"$set\" },\n \"initialSet\": { \"$first\": \"$set\" }\n }\n },\n {\n \"$project\": {\n \"commonSets\": {\n \"$reduce\": {\n \"input\": \"$sets\",\n \"initialValue\": \"$initialSet\",\n \"in\": { \"$setIntersection\": [\"$$value\", \"$$this\"] }\n }\n }\n }\n }\n])\nprint(type(cursor))\n\nfor doc in mydb.customer.aggregate(list(a)):\n print(doc)\n", "text": "Hi every one. I’m new with mongo and pymongo. i want to extract the subscription between two array : randomlist and randomlist2. how can i see the result of my query? (it is a cursor type and i wanna see the result in a list form). thank you for your tips in advancemy code:", "username": "elmira_naseh" }, { "code": "pprint( list(mydb.customer.aggregate([\n { \"$match\": { \"_id\": { \"$in\": [1, 3] } } },\n {\n \"$group\": {\n \"_id\": 0,\n \"sets\": { \"$push\": \"$set\" },\n \"initialSet\": { \"$first\": \"$set\" }\n }\n },\n {\n \"$project\": {\n \"commonSets\": {\n \"$reduce\": {\n \"input\": \"$sets\",\n \"initialValue\": \"$initialSet\",\n \"in\": { \"$setIntersection\": [\"$value\", \"$this\"] }\n }\n }\n }\n }\n])));\n", "text": "Hi @elmira_nasehWelcome to MongoDB community!I guess you can use pprint and list methods just as mentioned here :\nhttps://api.mongodb.com/python/current/examples/aggregation.html#aggregation-frameworkThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "pprint( list(mydb.customer.aggregate([\n { \"$match\": { \"_id\": { \"$in\": [1, 3] } } },\n {\n \"$group\": {\n \"_id\": 0,\n \"sets\": { \"$push\": \"$set\" },\n \"initialSet\": { \"$first\": \"$set\" }\n }\n },\n {\n \"$project\": {\n \"commonSets\": {\n \"$reduce\": {\n \"input\": \"$sets\",\n \"initialValue\": \"$initialSet\",\n \"in\": { \"$setIntersection\": [\"$value\", \"$this\"] }\n }\n }\n }\n }\n])))\n", "text": "thanks for your response. it does not work, it just return . but this two lists have common items", "username": "elmira_naseh" }, { "code": "", "text": "Hi @elmira_naseh,I cannot comment on the correctness of the query.If you wish me to comment on that please provide the data set you run this query on and the driver version.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "import pymongo\nimport random\nimport pprint\nmyclient = pymongo.MongoClient(\"mongodb://localhost:27017/\")\nmydb = myclient[\"test\"]\nuser_contacts = mydb[\"user_contacts\"]\nrandomlist2 = random.sample(range(10000000000, 99999999999), 500)\nc=[12345678910,123456, 65824985291, 80787154324, 34935414117, 46032157504]\nrandomlist2.extend(c)\nfinal=[]\n\nfor i in range(len(randomlist2)):\n \n result_dict2={\"tel1\":randomlist2[i]}\n final.append(result_dict2)\nuser_contacts.insert_many(final)\nall_users = mydb[\"all_users\"]\nrandomlist = random.sample(range(10000000000, 99999999999), 1000000)\nfinal2=[]\nfor i in range(len(randomlist)):\n \n result_dict={\"tel2\":randomlist[i]}\n final2.append(result_dict)\nall_users.insert_many(final2)\n", "text": "driver versionit is 3.8.0all of my data is here, i want to know the intersection between allusers and user_contacts collection:", "username": "elmira_naseh" } ]
How to see the result of query?
2020-10-13T05:46:55.348Z
How to see the result of query?
2,697
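For the intersection question above, one workable approach is a $lookup between the two collections created by the script; note as well that the $setIntersection variant quoted in the reply needs the aggregation variables written as $$value and $$this, with two dollar signs. A PyMongo sketch, with connection details as placeholders and field names (tel1/tel2) following the script in the thread:

```python
# Placeholder connection details; tel1/tel2 match the thread's script.
from pprint import pprint
import pymongo

mydb = pymongo.MongoClient("mongodb://localhost:27017/")["test"]

pipeline = [
    {
        "$lookup": {
            "from": "all_users",
            "localField": "tel1",
            "foreignField": "tel2",
            "as": "matches",
        }
    },
    # Keep only the contacts that also exist in all_users.
    {"$match": {"matches": {"$ne": []}}},
    {"$project": {"_id": 0, "tel1": 1}},
]

common = [doc["tel1"] for doc in mydb.user_contacts.aggregate(pipeline)]
pprint(common)
```

An index on all_users.tel2 keeps the $lookup from scanning the million-document collection once per contact.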
null
[ "swift" ]
[ { "code": "", "text": "I have an app I am developing . The first read to a Realm db is looking for a username stored in primary key db with four properties. This works like a charm on the Simulator. Here is the line of codenameLabel.text = users?[0].nameIt exceptions with and out of bounds on the 0. The exception also says it must be less than 0. I am running 4.4 and have not done the effort to upgrade to 5.3.4 yet.", "username": "Michael_Granberry" }, { "code": "if users?.count > 0 {\n nameLabel.text = users?[0].name\n}\n", "text": "@Michael_Granberry As a fellow programmer getting acquainted with this system, I would suggest making sure that users actually contains data before dereferencing it with an array subscript.I suspect that the difference between the simulator and the actual phone is the Realm data for users has not finished downloading to the phone at the time you are dereferencing it.Typically, one has to set up a notification handler on the query on the Realm in question, to check when the data has actually downloaded to the device. It is explained here.I hope this was useful.Richard Krueger", "username": "Richard_Krueger" }, { "code": "users", "text": "@Richard_KruegerThat’s a very good call and your explanation sounds correct, but unfortunately that code won’t work as is. If it’s and array, the Value of optional type ‘Int?’ must be unwrapped to a value of type ‘Int’.If it’s a results then you cannot use optional chaining on non-optional value of type ‘Results’But more importantly, it would be good to understand what users is in the first place as an optional array is little odd. Is it actually an array? Or a results object? Or something else?The first read to a Realm db is looking for a username stored in primary key db with four properties.If that’s correct and you’ve loaded a single object via it’s primary key them the array index (e.g. [0]) would not apply.", "username": "Jay" }, { "code": "", "text": "@Jay I am assuming that @Michael_Granberry intended users to be of type Results.", "username": "Richard_Krueger" }, { "code": "let realm = **try** ! Realm()\nvar users: Results<PersonalData>?\n\n users = realm.objects(PersonalData.self)\n nameLabel.text = users?[0].name //Realm retrieved [0] name property\nlet realm = try! Realm()\nvar users: Results<PersonalData>?\n\n // Realm retrieved [0] properties from PersonalData.swift\n users = realm.objects(PersonalData.self)\n screenNameTextField.text = users?[0].name\n emailAddressTextField.text = users?[0].email\n hireDateTextField.text = users?[0].hireDate\n**import** Foundation\n\n**import** RealmSwift\n\n**class** PersonalData: Object {\n\n**@objc** **dynamic** **var** personalID: Int = 0\n\n**@objc** **dynamic** **var** name: String = \"\" // Mike Granberry\n\n**@objc** **dynamic** **var** email: String = \"\" // [email protected]\n\n**@objc** **dynamic** **var** hireDate: String = \"\" // mm/dd/yyyy of first year\n\n**let** businessYear = List<YearData>()\n\n**let** anniversaryYear = List<AnniversaryData>()\n\n**override** **class** **func** primaryKey() -> String? {\n\nreturn \"personalID\"\n\n}\n\n}\n", "text": "Let me try to clarify for Richard and Jay. In the first ViewController that is launched after the Launch Screen. I want to access the PersonalData.swift data model and Realm File to grab the user name that is already in the database. This is for a hello greeting on the first screenThis works fine in the first access on the simulator. 
But when I try to run it on the iPhone, I get an out of bounds message on the nameLabel.text = line. Robert mentioned that the reason could be because the Realm data may not have actually loaded on the iPhone at the time that I try to load it into nameLabel.text to make it visible on the view that is tied to that ViewController.On that View, I have a Setup barbutton that looks like a gear. That will segue to the SetupViewController. I have 5 buttons on that View which segue to specific functionally to be maintained. The top button says Personal. By selecting it, I segue to the PersonalViewController and View that contains three TextFields that are loaded from the PersonalViewControllers access of the PersonalData db.In that view controller I load three properties from the 0 record. There is only one object in the PersonData db. The three that I load are name, email and hireDate. They come in just fine in the simulator.These load just fine. Richard gave me some suggestions about ways to watch for a notification that the data has actually loaded on the iPhone before I try to use them. I have not done that development yet but that is next on my agenda to make sure I can load the original request as works in the simulator.Here is my PersonalData.swift file. Please excuse the * that get loaded in the cut&paste.I hope this helps the conversation. I am very appreciative. As you can tell I am very new at this language and trying to do some pretty bold object database access as the app grows. Thanks.", "username": "Michael_Granberry" }, { "code": "", "text": "@Michael_Granberry my first question is this. Are you trying to just use Realm as a local object data base, or use Realm as synced data base that is connected to a shared Atlas cluster? In the first case, you would be using Realm to cache data locally for your application. In the second case, you would be using Realm to sync data with other devices running the same application but sharing a common set of data stored on the server.I may have just assumed that you were working on the second scenario. If this is the case, you must first create a RealmApp object, you must then authenticate the user, and lastly you must issue a Realm.asyncOpen call with a user configuration to actually prep a Realm for downloading data from the server. Once you have completed these tasks, you then set up a query on the realm along with a notification call back.I have detailed how to do this in a Medium article that I wrote a few weeks back. There is some open source code in a Github repo that you are free to download.I have been a Realm Cloud developer since early 2018, shortly after the Realm company introduced its Realm Cloud upgrade to its native…\nReading time: 12 min read\nRichard Krueger", "username": "Richard_Krueger" }, { "code": "", "text": "I put a more extensive description and flow on the Realm forum. Sorry if that was not correct. I am trying to use a local object database for persistence and the opportunity to grow it on a cloud solution if the user has more transactions each year than the iPhone should store. The user will enter small labor transactions daily or weekly and then query or filter some of the history to check his data. I hope that helps.", "username": "Michael_Granberry" }, { "code": "", "text": "I did a ton of research on the Build Process System. I realized I was just going to have to dig further and invest more time. I had another project that used Realm that was working fine. 
I next compared all the Build parameters in the Project and felt pretty certain that was not the issue based on that review. I looked at the two errors which centered around not being able to find Realm.framework and RealmSwift.framework in my project but I could see they were there. When I looked in the project and all looked the same in the left hand pane for the Project assets, the Products, the Pods, but one difference in the Frameworks. for some reason the Realm.frame and the RealmSwift.frame work were sitting above the file listed as Pods_projectname.framework. I moved them below the Pods-projectname.framework. I did a clean up with Shift-Cmd-K and then did a Command-B buid. I compiled , linked and built with no errors. Look at the attached notice where the red arrow is. This is the correct position Frameworks to get the Build to work. I have no idea how this was out of whack.realm369×592 140 KB", "username": "Michael_Granberry" }, { "code": "usersusersif users?.count > 0 {\n nameLabel.text = users?[0].name\n}\nvar users: Results<PersonalData>?usersoverride class func primaryKey() -> String? {\n return \"personalID\"\n}\n override static func primaryKey() -> String? {\n return \"personalID\"\n }\n", "text": "Let me re-pose my questions so we understand more of what you’re trying to do. It’s critical to know what platform you’re using and in which way you’re using itTL;DRIn a nutshell, you’re populating a var users but it’s not the same users you’re trying to access when reading data from it.Moreor are you using the BETA MongoDB Realm SDK 10.x as shown hereI believe you mentioned your data is stored locally and not sync’ing. Is that correct?Your message above states this code is not working on a real device and crasheslet realm = try! Realm()\nvar users: Results?\nusers = realm.objects(PersonalData.self)\nnameLabel.text = users?[0].nameIf that’s correct, it shouldn’t be crashing as long as data exists in the database. There are a few issues with the code though and this code is suspectas if users is an optional as shown in your questionvar users: Results<PersonalData>?, then that code will throw an error in XCode and won’t compile (Value of optional type ‘Int?’ must be unwrapped to a value of type ‘Int’). That tells me you have another var users floating around in your code - possibly a local one within a function which is not the same as a class var. I am a big proponent of addressing class vars with self and not naming vars with the same name.If this is the code in your object it’s not correctPlease replace it withI looked at the two errors which centered around not being able to find Realm.framework and RealmSwift.framework in my projectThat’s a different issue than what was originally asked (about crashing on a real device with an out of bounds error). Can you clarify how that ties in?", "username": "Jay" }, { "code": "", "text": "Jason, thank you for the suggestions. You have given me some good things to look at. I am waiting on a new power supply for my MacBook but will get back with you after it arrives and I can get back to my code. Thanks again. I will put updates on the Realm forum also.", "username": "Michael_Granberry" }, { "code": " if users!.count > 0 {\n print(users!.count)\n nameLabel.text = users?[0].name\n } else {\n print(users!.count)\n nameLabel.text = \"Sample Data\"\n }\n", "text": "I have resolved the two errors about not being able to find the Realm.framework and the RealmSwift.framework. 
I had to move them in my workspace to be after the the definition of the Pods_appname.framework in the Frameworks folder. For some reason they were sitting above the Pods-appname.framework. That resolved the missing frameworks. I also went back to verson 4.4.0 in the podfile with the statement: pod ‘RealmSwift’, ‘4.4.0’I have been unable to find a variable of users that was different from the use of the Results I had defined with :var users: Results?\nlet realm = try! Realm()at the top of the class where I try to setup up the use of the Realm.Later I want to grab the user name from the PersonalData swift file with:loadUserName()The loadUserName is as follows:\n// MARK : Data Load and Save from Realm\nfunc loadUserName() {\nusers = realm.objects(PersonalData.self)\n}When I run the code on the simulator I get the correct value loaded in to the nameLabel.text . When I run it on the iphone device, I get “Sample Data” as shown in the Else statement. On the simulator the count is equal to 1. On the iphone the count is equal to 0.", "username": "Michael_Granberry" }, { "code": "", "text": "Well, it could be the case where the Realm file on the simulator has been populated with data but on the device was not.", "username": "Jay" }, { "code": "", "text": "Would you think that is a bug in the Realm code? I am not sure how to proceed, if it looks good on the simulator but will now work when I download from Xcode to the actual device. I am kind of at a standstill until the folks at Realm show some interest in working with this issue.", "username": "Michael_Granberry" }, { "code": "@IBAction func myButtonAction( _ sender: Any ) {\n let realm = try! Realm()\n let results = realm.objects(PersonalData.self )\n print(results.count) //output to a text field\n}\nvar users: Results?var personResults: Results<PersonalData>? = nilfunc loadPeople() {\n let realm = try! Realm()\n self.personResults = realm....\n}", "text": "I will not speak for the MongoDB Realm folks but it’s not likely a bug in Realm. It’s likely a bug in your code - implementation or perhaps even overlooking something obvious. It happens!Diagnosing an issue is very challenging here as without a solid troubleshooting path and code visibility, it’s hard to isolate the issue. There are some things you can do to help narrow it down.I always have a button in my UI that allows me to isolate and test code. So in this situation I would add this code to my button actionObviously when you are running it on the device you don’t have a console so perhaps at a temporary text field in your UI where you can send output that would normally go to the console in the simulator.Does that code output 1 or 0? If it’s 0 that tells us the Realm has no data. If 1 then it has data so then look further into how the data is being read. For example, this is not correctvar users: Results?but this isvar personResults: Results<PersonalData>? = nilThen inside functions to access the class var personResults, use self (for clarity) as in", "username": "Jay" }, { "code": "", "text": "Jay, I like your idea about the button and the textfield to help with the debug. I will try that. By the way, you mentioned that the code on the line that read … var users: Results? was incorrect. That was not my line of code. The cut & paste and preview function in the Realm forum actually changed my code. My code reads like this … var users: Results? Therefore your suggestion of … var personResults: Results? = nil except for the default value of nil. 
I will make that change but it seems similar to my code. If it is because I used the term users then I will change it to something that is unique for sure.", "username": "Michael_Granberry" }, { "code": "", "text": "Once again, the preview changed my code. I will have to start posting screen images because the text editing of the forum seems to change my text entries. Sorry about that.", "username": "Michael_Granberry" }, { "code": "to make it stand out as wellvar personResults: Results? = nilvar personResults: Results<PersonalData>? = nil", "text": "The text editing shouldn’t be changing your code.There is a small symbol </> in the formatting bar specifically for code and any code you put into the question, please use that to format it. That makes the code readable and sets it apart from the rest of the text.You can also include code in tick marks to make it stand out as well.You can go back and edit your own post so I suggest doing that to correct errors when you see them.To be clear, this var personResults: Results? = nil was not my suggestion (I put ticks around that). My code is (using the code format button)var personResults: Results<PersonalData>? = nilStylistically my code suggestion makes it clear that a personResults objects is a Realm Results object containing PersonalData objects. It could be nil, and initially it is.", "username": "Jay" }, { "code": " if usersArray!.count > 0 {\n print(usersArray!.count)\n nameLabel.text = usersArray?[0].name\n } else {\n print(usersArray!.count)\n nameLabel.text = \"Sample Data\"\n }\n", "text": "After an absence from programming for a couple weeks and a hardware problem, I have returned to this issue of being unable to discover why my Realm data files are not visible on my iPhone device. Since I had no experience in Primary Keys, I converted my 7 .swift data files to not include Primary Keys. I have 7 entirely new files with different names and I deleted the App from my iPhone. I got the same error basically when I tested the array following a load I still have no data on the iPhone. However, I do have the App and have the data structures as seen in the Realm Browser.I discovered a process that would allow me to inspect the data files that reside on the iphone by connecting the iPhone to my Mac, and loading the application from a fresh start after deleting the App from the iPhone. I then rebuilt the app with the iPhone device as the target, it was connected and my test for good data still follows the exception process because the data files were empty.This is how I looked at the data from Xcode on the Mac to the connected iPhone.Xcode - Windows - Devices and Simulators . Choose your desired device and navigate to the downloaded app in the Installed App section. With the correct app highlighted, select the gear icon underneath and choose Download Container. There I found the Default Realm file and my swift file entries (all seven of the new named files). When I did this I was expecting to see an object in the file. I move the PersonData.swift file contents to an array (usersArray) and access the first entry. 
When I test the the count value I take the Else path and load to the variables the Sample Data and not the .name property value.Here is the test of the data file:\nusersArray = realm.objects(PersonData.self)When I run on the simulator the usersArray!.count value is returned as 1 and my .name data displays on the simulated iPhone view with the textfield loaded with the .name value from the db.\nWhen I run on the iPhone device the usersArray!.count value is returned as 0 and the “Sample Data” appears on the view in the appropriate textfield.Does anyone have any thoughts on why the simulator finds the data and the device has the structures of the data.swift files (showing it’s name and the properties defined) but the data does appear to be on the device?What else can I show to the community. I am using the same swift statements I used in the Todoey application that I wrote in the London App Brewery course cirriculm.", "username": "Michael_Granberry" }, { "code": "", "text": "After having a very gratifying and informative conversation with one of the community resources and talking through my code, the most obvious of all answers was apparent for my problem. Though I had initiated the database in the simulator, I had forgotten to do the same process with my teathered iPhone to initiate the values there. Of course I could have updated the sample data that was hard coded, I really wanted to understand the real problem. And the real problem was me. I believe this is solved but now I need to further discuss and understand whether to have primary keys in each Realm or not. This will take more study. But thank you Richard…", "username": "Michael_Granberry" } ]
Realm on iPhone Exception on read
2020-08-20T21:04:05.204Z
Realm on iPhone Exception on read
3,551
https://www.mongodb.com/…167e28afc8e1.png
[ "compass", "connecting" ]
[ { "code": "use my_database\n\ndb.createUser(\n {\n user: \"some_user\",\n pwd: \"some_password\",\n roles: [{ role: \"readWrite\", db: \"my_database\" }]\n }\n)\nError creating SSH Tunnel: connect EADDRINUSE some_ip:22 - Local (0.0.0.0:29353)\n#Port 22\n#AddressFamily any\n#ListenAddress 0.0.0.0\n#ListenAddress ::\n\nHostKey /etc/ssh/ssh_host_rsa_key\n#HostKey /etc/ssh/ssh_host_dsa_key\nHostKey /etc/ssh/ssh_host_ecdsa_key\nHostKey /etc/ssh/ssh_host_ed25519_key\n\n# Ciphers and keying\n#RekeyLimit default none\n\n# Logging\n#SyslogFacility AUTH\nSyslogFacility AUTHPRIV\n#LogLevel INFO\n\n# Authentication:\n\n#LoginGraceTime 2m\n#PermitRootLogin yes\n#StrictModes yes\n#MaxAuthTries 6\n#MaxSessions 10\n\n#PubkeyAuthentication yes\n\n# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2\n# but this is overridden so installations will only check .ssh/authorized_keys\nAuthorizedKeysFile .ssh/authorized_keys\n\n#AuthorizedPrincipalsFile none\n\n\n# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts\n#HostbasedAuthentication no\n# Change to yes if you don't trust ~/.ssh/known_hosts for\n# HostbasedAuthentication\n#IgnoreUserKnownHosts no\n# Don't read the user's ~/.rhosts and ~/.shosts files\n#IgnoreRhosts yes\n\n# To disable tunneled clear text passwords, change to no here!\nPasswordAuthentication yes\nPermitEmptyPasswords no\n#PasswordAuthentication no\n\n# Change to no to disable s/key passwords\nChallengeResponseAuthentication yes\n#ChallengeResponseAuthentication no\n\n# Kerberos options\n#KerberosAuthentication no\n#KerberosOrLocalPasswd yes\n#KerberosTicketCleanup yes\n#KerberosGetAFSToken no\n#KerberosUseKuserok yes\n\n# GSSAPI options\nGSSAPIAuthentication yes\nGSSAPICleanupCredentials no\n#GSSAPIStrictAcceptorCheck yes\n#GSSAPIKeyExchange no\n#GSSAPIEnablek5users no\n\n# Set this to 'yes' to enable PAM authentication, account processing,\n# and session processing. If this is enabled, PAM authentication will\n# be allowed through the ChallengeResponseAuthentication and\n# PasswordAuthentication. 
Depending on your PAM configuration,\n# PAM authentication via ChallengeResponseAuthentication may bypass\n# the setting of \"PermitRootLogin without-password\".\n# If you just want the PAM account and session checks to run without\n# PAM authentication, then enable this but set PasswordAuthentication\n# and ChallengeResponseAuthentication to 'no'.\n# WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several\n# problems.\nUsePAM yes\n\n#AllowAgentForwarding yes\nAllowTcpForwarding yes\n#GatewayPorts no\nX11Forwarding yes\n#X11DisplayOffset 10\n#X11UseLocalhost yes\n#PermitTTY yes\n#PrintMotd yes\n#PrintLastLog yes\n#TCPKeepAlive yes\n#UseLogin no\n#UsePrivilegeSeparation sandbox\n#PermitUserEnvironment no\n#Compression delayed\n#ClientAliveInterval 0\n#ClientAliveCountMax 3\n#ShowPatchLevel no\n#UseDNS yes\n#PidFile /var/run/sshd.pid\n#MaxStartups 10:30:100\n#PermitTunnel no\n#ChrootDirectory none\n#VersionAddendum none\n\n# no default banner path\n#Banner none\n\n# Accept locale-related environment variables\nAcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES\nAcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT\nAcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE\nAcceptEnv XMODIFIERS\n\n# override default of no subsystems\nSubsystem sftp\t/usr/libexec/openssh/sftp-server\n\n# Example of overriding settings on a per-user basis\n#Match User anoncvs\n#\tX11Forwarding no\n#\tAllowTcpForwarding no\n#\tPermitTTY no\n#\tForceCommand cvs server\n\nAuthorizedKeysCommand /opt/aws/bin/eic_run_authorized_keys %u %f\nAuthorizedKeysCommandUser ec2-instance-connect\n", "text": "I have setup my mongodb on AWS Linux 2 EC2 instance.I have associated inbound rule as - SSH | TCP | 22 | to the instance.I was able to SSH into it through MongoDB Compass by using following settings:However as soon as I added a username password to my database using following method:And tried to access it using following parameters:I got following error:Here is my /etc/ssh/sshd_config file content:Am I missing anything over here?", "username": "Devarshi_Kulshreshth" }, { "code": "", "text": "Hi, did you solve it?\nAny suggestion for this problem?\nThank you!", "username": "Juan_Ignacio_Benito" } ]
MongoDB Compass error creating SSH Tunnel: connect EADDRINUSE, after setting username / pwd on database, AWS Linux 2 (EC2)
2020-08-23T20:56:50.448Z
MongoDB Compass error creating SSH Tunnel: connect EADDRINUSE, after setting username / pwd on database, AWS Linux 2 (EC2)
5,795
null
[ "graphql" ]
[ { "code": "", "text": "Volkhard_VogelerHello,\ni want to implement a paging on queries like this:query { addresses(query: “Street = ‘myStreet’ AND City = myCity”) {Id, Street, City}}For the paging i want to know in advance how many records this query will return.\nIs there any way for this ìn GraphQL (e.g. Count function)?best regardsvolkhard", "username": "Volkhard_Vogeler" }, { "code": "findlimit", "text": "You would need to write a custom resolver to implement pagination, and our suggestion is to use find and limit for your logic", "username": "Sumedha_Mehta1" } ]
GraphQL: Count of records in advance for paging
2020-03-26T16:29:38.898Z
GraphQL: Count of records in advance for paging
3,740
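The custom resolver itself would be an Atlas/Realm function, but the underlying database logic it wraps is just a count plus a limited find. A rough PyMongo sketch of that logic, with all names assumed; note that skip/limit paging gets expensive for deep pages.

```python
# All names here are assumptions; a real resolver would run this server-side.
from pymongo import MongoClient

addresses = MongoClient("mongodb://localhost:27017")["mydb"]["addresses"]

query = {"Street": "myStreet", "City": "myCity"}
page_size = 20
page = 0

total = addresses.count_documents(query)   # how many records the query matches
page_docs = list(
    addresses.find(query, {"Id": 1, "Street": 1, "City": 1})
    .sort("_id", 1)                         # a stable sort keeps pages consistent
    .skip(page * page_size)
    .limit(page_size)
)
print(total, len(page_docs))
```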
null
[ "dot-net", "transactions" ]
[ { "code": "public async Task<BsonDocument> GetNextAsyncTest(CancellationToken cancellationToken = default)\n { \n try\n {\n ClientSessionOptions sessionOptions = new ClientSessionOptions()\n { \n };\n var session = await mongoCollection.Database.Client.StartSessionAsync(sessionOptions, cancellationToken);\n \n session.StartTransaction();\n\n FilterDefinition<BsonDocument> lockedNotExist = Builders<BsonDocument>.Filter.Exists(\"locked\", false);\n\n var filter = Builders<BsonDocument>.Filter.And(lockedNotExist);\n\n var update = Builders<BsonDocument>.Update \n .Set(\"locked\", true); \n\n var item = await mongoCollection.FindOneAndUpdateAsync<BsonDocument>(session, filter, update, default, cancellationToken);\n if (item != null)\n {\n return item;\n } \n }\n catch (Exception ex)\n {\n\n } \n\n return null;\n }", "text": "I have (for example) 2 documents in db and 2 processes running in parallel\neach process selects a single doc for update.\nthe select is done as a transaction.in FindOneAndUpdateAsync the\nfilter filters documents with field named ‘locked’\nand the update adds a field ‘locked’ to the docthe first process (transaction) succeeds and the second fails with write violation.for test purposes the transaction is never committed", "username": "Tago" }, { "code": "", "text": "Hi @Tago - welcome to the community!I’m not sure what your question is. Can you clarify what you’re trying to accomplish or what is going wrong?One thing to point out when working with transactions, the docs state: \" When using the drivers, each operation in the transaction must be associated with the session. Refer to your driver specific documentation for details.\" Check out the docs more details and examples.", "username": "Lauren_Schaefer" } ]
How do I read uncommitted data inside a transaction?
2020-10-14T19:16:56.180Z
How do I read uncommitted data inside a transaction?
2,186
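A PyMongo sketch of the point in the quoted docs: reads only see a transaction's own uncommitted writes when they are passed the same session. The connection string and names are placeholders, and transactions require a replica set or Atlas cluster.

```python
# Placeholder connection string; transactions need a replica set or Atlas.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["items"]

coll.delete_many({})
coll.insert_one({"_id": 1, "locked": False})

with client.start_session() as session:
    with session.start_transaction():
        coll.update_one({"_id": 1}, {"$set": {"locked": True}}, session=session)

        # Same session: this read runs inside the transaction and sees the
        # uncommitted update (locked: True).
        inside = coll.find_one({"_id": 1}, session=session)

        # No session: this read runs outside the transaction and still sees
        # the pre-transaction state (locked: False).
        outside = coll.find_one({"_id": 1})

        print(inside, outside)
```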
null
[ "aggregation", "atlas" ]
[ { "code": "", "text": "Hi Team,\nI was just checking graphLookup to retrieve all links.\nFor example I have this data\nA1->A2->A3->A4->A5 where A1 is in one collection and link between Ai to Aj is in another collection.\nSo when I ran the graphLookup queries It returns me all the links but did not return me in the above mentioned order It returned inthis order [A1, A3, A2, A5, A4]\nI can write the logic to bring this into correct order but if there is any settings to return in correct order would be helpful.Thanks,\nGheri.", "username": "Gheri_Rupchandani" }, { "code": "", "text": "Hi @Gheri_Rupchandani,Can you please share the query and aome sample documents?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hey Pavel,\nI have two collections one objects and other links.\nobjects have { _id: “”, name: …}\nAnd Links have {_id: , sourceObjectId:“id of object”, targetObjectId: “id of object”}\nSummary objects collections storing domain objects and links storing relationships between objects\nThanks,\nGheri.", "username": "Gheri_Rupchandani" }, { "code": "", "text": "Hi @Gheri_Rupchandani,I will still need the graphlookup code and the database version you are running on…Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "db.objects.aggregate(\n{$match: {\"_id\": “0FbpzNkY6N48PYXBDZOCD” }},\n{ graphLookup: { \n from: \"links\", \n startWith: \"_id\",\nconnectFromField: “targetId”,\nconnectToField: “sourceId”,\nas: “linksData” } }).pretty()Database version–> version() --> 4.4.1", "username": "Gheri_Rupchandani" }, { "code": "asdepthField", "text": "Hi @Gheri_Rupchandani,I see what you mean now. $graphLookup gurantee that the documents returned the treversed documents by your start and following the connection rules.However it does not guarantee that returned in as field following that order or any order.See the “as” comment.If you wish to get it back in the wanted order sort them in next stage by the added depthField you need to specify.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Sounds great,\nIt works\ndb.assets.aggregate({$match: {\"_id\": “0FbpzNkY6N48PYXBDZOCD” }},{ graphLookup: { from: \"links\", startWith: \"_id\", connectFromField: “targetId”, connectToField: “sourceId”, as: “linksData”, depthField: “level” } }, {$unwind: “$linksData”}, {$sort: {“linksData.level”:1}}, {group: {_id:\"_id\", linksData: {$push:\"$linksData\"}}}).pretty()\nI guess it would work with 50 level deep graph assuming I have considered all memory limitations of graphLookup\nThanks,\nGheri.", "username": "Gheri_Rupchandani" } ]
graphLookup does not return documents in order
2020-10-15T15:56:07.226Z
graphLookup does not return documents in order
3,948
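A PyMongo version of the final pipeline from the thread above, for reference; the dollar signs that the forum formatting stripped from the operators and from startWith are restored here, and connection details are placeholders.

```python
# Placeholder connection details; collection and field names mirror the thread.
from pprint import pprint
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]

pipeline = [
    {"$match": {"_id": "0FbpzNkY6N48PYXBDZOCD"}},
    {
        "$graphLookup": {
            "from": "links",
            "startWith": "$_id",
            "connectFromField": "targetId",
            "connectToField": "sourceId",
            "as": "linksData",
            "depthField": "level",
        }
    },
    # $graphLookup does not promise any order inside "as", so sort by the
    # recorded depth and rebuild the array.
    {"$unwind": "$linksData"},
    {"$sort": {"linksData.level": 1}},
    {"$group": {"_id": "$_id", "linksData": {"$push": "$linksData"}}},
]

for doc in db.objects.aggregate(pipeline):
    pprint(doc)
```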
null
[ "java" ]
[ { "code": "DAOpublic User update(User user) {\n\n\tMongoCollection<User> userTbl = database.getCollection(\"User\", User.class);\n\t\n\tuserTbl.updateOne(eq(\"_id\", \"id\"),combine(set(\"email\", user.getEmail())));\n\n\treturn user;\n}\nServicepublic void updateUser() throws ServletException, IOException {\n\tObject id = request.getParameter(\"userId\");\nString email = request.getParameter(\"email\"); // attr is name = \"email\" from the input field\nString fullName = request.getParameter(\"fullname\");\nString password = request.getParameter(\"password\");\n\nSystem.out.println(\"NEW ID = \" + id + \", NEW EMAIL = \" + email + \", NEW NAME = \" + fullName\n+ \", NEW PASSWORD = \" + password);\n\nUser user = new User(email, fullName, password);\n\nuserDAO.update(user);\n\n}\n", "text": "I following this example to make a method for a single person to save email from input instead of fix value. Sadly it does not change anything at all. I wonder if I misunderstood the tutorial.Here is the DAO class to hold the user objectHere is the Service to interact with the browser,", "username": "Pat_Yue" }, { "code": "eq(\"_id\", \"id\")eq(\"_id\", user.getId())combineemailuserTbl.updateOne(\n eq(\"_id\", user.getId()), \n set(\"email\", user.getEmail())\n);", "text": "Hello @Pat_Yue,userTbl.updateOne(eq(\"_id\", “id”),combine(set(“email”, user.getEmail())));In this eq(\"_id\", \"id\"), you should be using something like: eq(\"_id\", user.getId()). Also, you don’t need to use the combine method as you are updating only one field, the email.So, your update could be:", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya Thanks for your reply. I’ve tried it, but still can’t update the email for a user in my return statement. Should I need to iterate the collection to update the document of single person, per said?", "username": "Pat_Yue" }, { "code": "User user = new User(email, fullName, password);iduserupdateUser()User user = new User(id, email, fullName, password);\n// or\nUser user = new User(email, fullName, password);\nuser.setId(id)", "text": "User user = new User(email, fullName, password);I think you missed to include the id field when constructing the user object in the updateUser() method. You can do this:", "username": "Prasad_Saya" }, { "code": "\tpublic void updateUser() throws ServletException, IOException {\n\n\t\tObject id = (String) request.getParameter(\"id\");\n\t\tString email = request.getParameter(\"email\"); // attr is name = \"email\" from the input field\n\t\tString fullName = request.getParameter(\"fullname\");\n\t\tString password = request.getParameter(\"password\");\n\t\t\n\t\tSystem.out.println(\"NEW ID = \" + id + \", or NEW EMAIL = \" + email + \", or NEW NAME = \" + fullName\n\t\t\t\t+ \", NEW PASSWORD = \" + password);\n\n\t\tUser user = new User((ObjectId) id, email, fullName, password);\n\n\t\tuserDAO.update(user);\n\n\t\tString updateMsg = \"User update done!\";\n\t\tlistUser(updateMsg);\n\n\t}\npublic class User {\n\n\tprivate ObjectId id;\n\n\t@BsonProperty(value = \"user_id\")\n\tprivate String userId;\n\tprivate String email;\n\tprivate String fullName;\n\tprivate String password;\n\n\tpublic User() {\n\t}\n\n\tpublic User(ObjectId id, String email, String fullName, String password) {\n\t\tsuper();\n\t\tthis.id = id;\n\t\tthis.email = email;\n\t\tthis.fullName = fullName;\n\t\tthis.password = password;\n\t}\n // getter and setter\n}", "text": "@Prasad_Saya Thanks for your reply. 
I did and I’m in doubt with using this Object Id that I cast it to the constructor like below, or else I can’t pass the _id as it said:The method setId(ObjectId) in the type User is not applicable for the arguments (Object)If I could convert string to objectId could solve the problem.I using POJOs by the way.The User entity", "username": "Pat_Yue" }, { "code": "String idStr = (String) request.getParameter(\"id\");\nObjectId objId = new ObjectId(idStr);\n\n// ...\n\nUser user = new User(objId, email, fullName, password);\n", "text": "If I could convert string to objectId could solve the problem.Try using the ObjectId class to build it.", "username": "Prasad_Saya" }, { "code": " The server encountered an unexpected condition that prevented it from fulfilling the request.", "text": "@Prasad_Saya really thanks for your support. Somehow it has an exception with the ObjectId. I wonder if the ObjectId in the entity can use a simple type like string could retrieve the same result, which means that not using POJOs mapping at all? howvever, it will break everything of my other CRUD operation. The server encountered an unexpected condition that prevented it from fulfilling the request.java.lang.IllegalArgumentException org.bson.types.ObjectId.isValid(ObjectId.java:86) org.bson.types.ObjectId.parseHexString(ObjectId.java:528) org.bson.types.ObjectId.(ObjectId.java:205) com.smartcard.service.UserService.updateUser(UserService.java:97) com.smartcard.controller.admin.UpdateUserServlet.doPost(UpdateUserServlet.java:29) javax.servlet.http.HttpServlet.service(HttpServlet.java:652) javax.servlet.http.HttpServlet.service(HttpServlet.java:733) org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)", "username": "Pat_Yue" }, { "code": "\tpublic void updateUser() throws ServletException, IOException {\n\n\t\tObject id = (String) request.getParameter(\"id\");\n\t\tString email = request.getParameter(\"email\"); // attr is name = \"email\" from the input field\n\t\tString fullName = request.getParameter(\"fullname\");\n\t\tString password = request.getParameter(\"password\");\n\t\t\n\t\tSystem.out.println(\"NEW ID = \" + id + \", or NEW EMAIL = \" + email + \", or NEW NAME = \" + fullName\n\t\t\t\t+ \", NEW PASSWORD = \" + password);\nSystem.out.println(...", "text": "Please tell what is the value printed from the above System.out.println(....", "username": "Prasad_Saya" }, { "code": "_idNEW ID = null, or NEW EMAIL = [email protected], or NEW NAME = Pepe Ronaldo , NEW PASSWORD = pepe123@WebServlet(\"/update_user\")\npublic class UpdateUserServlet extends HttpServlet {\n\tprivate static final long serialVersionUID = 1L;\n\n\tprotected void doPost(HttpServletRequest request, HttpServletResponse response)\n\t\t\tthrows ServletException, IOException {\n\n\t\tresponse.setContentType(\"text/html\");\n\n\t\tdBUtils.getMongoDB();\n\n\t\tUserService userService = new UserService(request, response);\n\n\t\tuserService.updateUser();\n\n\t}\n}\n", "text": "It shows the correct value, but the _id of the user in the console, e.g.\nNEW ID = null, or NEW EMAIL = [email protected], or NEW NAME = Pepe Ronaldo , NEW PASSWORD = pepe123Here is the Servlets for update user I just call it from the Service class.Sorry for the pain.", "username": "Pat_Yue" }, { "code": "request", "text": "requestWhat is passed to the request?", "username": "Prasad_Saya" }, { "code": "userIdidNEW ID = 5f8172544236e93fcf86c6e1, or NEW EMAIL = [email protected], or NEW NAME = Pepe Ronaldo , NEW PASSWORD = pepe123", "text": "My apology!!! 
I made a typo to the JSP page and this is why I can’t get the parameter. I left userId instead of id while learning POJO few days ago (my bad)!\nit now working like charm!\nNEW ID = 5f8172544236e93fcf86c6e1, or NEW EMAIL = [email protected], or NEW NAME = Pepe Ronaldo , NEW PASSWORD = pepe123THANK YOU !!!", "username": "Pat_Yue" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
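A minimal mongosh sketch of the update the thread above converges on (match one user by its ObjectId and set only the email field); the collection name and email value are assumptions, the id is the one echoed in the thread:
  // Convert the hex string received from the request into an ObjectId,
  // then update just the email field of that single document.
  const idStr = "5f8172544236e93fcf86c6e1";
  db.User.updateOne(
    { _id: ObjectId(idStr) },                  // match the user by _id
    { $set: { email: "[email protected]" } }   // change only the email
  );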
Update a document from single person
2020-10-15T07:41:02.513Z
Update a document from single person
3,989
null
[]
[ { "code": "{\"_id\":{\"$oid\":\"mongoid\"},\n\"id\":\"myid\",\n\"shops\":[{\"number\":200,\"d\":0,\"a\":0,\"p\":0},{\"number\":15,\"d\":0,\"a\":0,\"p\":0},{\"number\":16,\"d\":0,\"a\":0,\"p\":0},{\"number\":17,\"d\":0,\"a\":0,\"p\":0},{\"number\":18,\"d\":0,\"a\":0,\"p\":0}]}\n", "text": "Hi, I’m starting with mongoDB now. And trying to make a GraphQL query with Insomnia for my DB to search records, where n:200 and a_gte:5 in one Object, but query give me records, where I have n:200 and a_gte:5 in any other Object in shops array. How can I specify to look for a_gte:5 only where n:200 are placed?Structure is:", "username": "Oleg_Kobeliatskyi" }, { "code": "", "text": "I am not too sure but I would look at https://docs.mongodb.com/manual/reference/operator/query/elemMatch/. This ensure that the query is satisfied within the same element of the array.", "username": "steevej" }, { "code": "", "text": "Yes, but I can’t use it (or don’t know how) in query GraphQL in Insomnia. With pymongo I made it work yesterday, but DB will be connected to bubble.io.", "username": "Oleg_Kobeliatskyi" } ]
Query search Object in Array with couple parameters
2020-10-15T10:48:26.853Z
Query search Object in Array with couple parameters
2,425
null
[]
[ { "code": "", "text": "G’day MongoDB Community members!We’ve made some category & tag adjustments for Realm discussion based on community feedback.Realm has always been a popular discussion category, but the introduction of MongoDB Realm & the Realm Sync Beta in June has significantly expanded the scope of topics.The Realm Database is an open source embedded database accessed via the Realm SDKs (currently Java, JavaScript, Kotlin, Objective C, Swift, and C#). MongoDB Realm provides Realm Sync (data synchronisation between applications using Realm Database and MongoDB Atlas) as well as a growing number of Application Development Services that can be called from web applications.The Realm Database & MongoDB Realm work together as an integrated solution, but also have substantial standalone use cases. To help facilitate discussion, we have created a new Realm SDKs category focused on mobile development topics including the Realm SDKs and Realm Studio. Topics in the MongoDB Realm category will be focused on web development and MongoDB Realm features including Realm Sync and Application Development Services.For an overview of topics within each of these categories (as well as links to some additional relevant resources including Documentation and Feature Requests), please see:The category and tags for existing discussions in the MongoDB Realm category are being updated to reflect this change, so you may notice some discussion updates over the next few days as the MongoDB Community team reviews existing topics and makes adjustments.Please let us know what you think of the changes, or if you have any additional suggestions.Thanks,\nThe MongoDB Community Team", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Realm category and tag adjustments October 2020
2020-10-16T05:37:37.489Z
Realm category and tag adjustments October 2020
4,393
https://www.mongodb.com/…591dc03b81eb.png
[ "java" ]
[ { "code": "", "text": "code:\nerror:\nimage1140×632 18 KBHope you can get help", "username": "111113" }, { "code": "", "text": "The problean has bean resolved.\nThanks to the mongo technical forum, use MongoConverter to convert all your own types into types supported by the Mongo library.Such as\nmongoConverter.convertToMongoType(value)", "username": "111113" } ]
Batch update error
2020-10-16T01:32:06.605Z
Batch update error
1,518
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hello,\nI’m facing the decision of choosing the tech stack for my new app. I’m wondering about the state of MongoDB/Realm Sync (Beta). Is it ready for production apps? If not, then what is the estimated release date?", "username": "Stanislaw_Baranski" }, { "code": "", "text": "Welcome to the MongoDB Community @Stanislaw_Baranski!There have been several recent discussions on this. In particular, please see: Why the Realm for React Native is still Beta? for more context on the beta label.Comment from @Shane_McAllister in reference to the beta label for the React Native SDK:In this instance, it’s safe to assume that the version is production ready, as in we’re unaware of major issues that will corrupt or break when out of beta and in production. It’s still a beta version, so we do reserve the “right” to make breaking changes to the API, but we don’t expect to.A more general comment on “production ready” from @Ian_Ward:Production ready is really a call that only you, as the product owner of your app, can make. There are a myriad of apps on the Apple/Play store that are compiled with beta libraries and MongoDB Realm has several customers already in production even though it is beta. So it is “production-ready” for those customers but perhaps its not for you - which is why we apply the beta tag, to serve as a warning.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the status of Realm Sync (Beta)
2020-10-16T00:00:02.648Z
What is the status of Realm Sync (Beta)
2,457
null
[ "php" ]
[ { "code": "", "text": "I apologize in advance if this question has been asked or the answer to my problem is simple and can be found easily.I couldn’t find any help for connecting to MongoDB through a site.\nBasically, what I have, is a site, and I want to connect to MongoDB and check my database, but I don’t know what I need to do.Again, sorry if this problem can be solved easily and I’m just being dumb.", "username": "Jan5106" }, { "code": "", "text": "@Jan5106 if I understand, you are composing a website in PHP and wish to connect to MongoDB from inside your PHP code?If so, the information you want is here: MongoDB PHP Library — PHP Library Manual upcoming", "username": "Jack_Woehr" } ]
Connecting to MongoDB through a site with PHP
2020-10-15T19:04:47.698Z
Connecting to MongoDB through a site with PHP
1,410
null
[ "java", "connecting" ]
[ { "code": "com.mongodb.MongoSocketOpenException: Exception opening socket\n at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.0.5.jar:na]\n at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:127) ~[mongodb-driver-core-4.0.5.jar:na]\n at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-4.0.5.jar:na]\n at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]\n", "text": "I had mongodb instance running on AWS EC2 instance, and I was able to interact with it from java code. Then I had to stop the instance and restart it. I modified the connection details in code. But now when I try to run my application, I get:Though I can connect to same mongodb instance via mongo compass.", "username": "Manish_Ghildiyal" }, { "code": "", "text": "There should be a nested exception that the MongoSocketOpenException wraps. Can you provide the stack trace for that one too?", "username": "Jeffrey_Yemin" } ]
Unable to connect to mongodb ec2 instance in java after ec2 restart
2020-10-13T11:03:55.497Z
Unable to connect to mongodb ec2 instance in java after ec2 restart
3,170
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production", "xamarin" ]
[ { "code": "# .NET Driver Version 2.11.3 Release Notes\n\nThis is a patch release that addresses an issue reported since 2.11.2 was released.\n\nAn online version of these release notes is available at:\n\nhttps://github.com/mongodb/mongo-csharp-driver/blob/master/Release%20Notes/Release%20Notes%20v2.11.3.md\n\nThe list of JIRA tickets resolved in this release is available at:\n\nhttps://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.3%20ORDER%20BY%20key%20ASC\n\nDocumentation on the .NET driver can be found at:\n\nhttps://mongodb.github.io/mongo-csharp-driver/\n\n## Upgrading\n\nSince the only change in this patch release is CSHARP-3218 and that issue is specific to Xamarin you only need\nto upgrade from 2.11.2 to 2.11.3 if you are using the driver on Xamarin.\n", "text": "This is a patch release that addresses an issue reported since 2.11.2 was released.An online version of these release notes is available at:The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.3%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:Since the only change in this patch release is CSHARP-3218 and that issue is specific to Xamarin you only need\nto upgrade from 2.11.2 to 2.11.3 if you are using the driver on Xamarin.There are no known backwards breaking changes in this release.", "username": "Robert_Stam" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver 2.11.3 Released
2020-10-15T22:27:25.941Z
.NET Driver 2.11.3 Released
3,043
null
[ "swift", "change-streams" ]
[ { "code": "configure.swiftassertion failed: cursor or change stream wasn't closed before it went out of scope: file MongoSwift/CursorCommon.swift, line 247.hop(to: app.client.eventLoop)let testInventory = app.testingDatabase.collection(\"playerratings\", withType: PlayerRating.self)\ntestInventory.watch().flatMap { stream -> EventLoopFuture<ChangeStream<ChangeStreamEvent<MongoCollection<PlayerRating>.CollectionType>>> in\n let resumeToken = stream.resumeToken\n return testInventory.watch(options: ChangeStreamOptions(resumeAfter: resumeToken))\n}\n.whenFailure { _ in }\n", "text": "Hi EveryoneI am trying to implement change streams in the Swift Driver using Vapor.Unfortunately I am unable to work out how to implement it having looked at the Swift Changes Streams docs. I have placed my code in configure.swift but I am currently getting a crash with this error:assertion failed: cursor or change stream wasn't closed before it went out of scope: file MongoSwift/CursorCommon.swift, line 247Below is the code I have tried starting out with which causes the crash. I have tried using .hop(to: app.client.eventLoop) in various places but I have been unable to prevent my app from crashing on start up using change streams.Any idea what I am doing wrong?Thanks", "username": "Piers_Ebdon" }, { "code": "kill()deinitkill()EventLoopFuture", "text": "Hi @Piers_Ebdon!Change streams must be explicitly closed (by calling the kill() method) before they go out of scope to avoid leaking resources. The assertion failure is happening in a deinit method which confirms the stream has already been killed and fails otherwise.In the example above, since you are returning the second stream, I think the issue is likely the first stream you are creating. Once you are done using the stream, you should call the kill() method which will asynchronously clean up the resources and return an EventLoopFuture.I realize this is not clear from the examples in our change streams guide so I’ve opened a Jira ticket so we remember to correct that.", "username": "kmahar" }, { "code": "kill()", "text": "Hi @kmaharSo after using a change stream, I then need to close it by calling kill(), I assume that the observer watching the collection is still active right?In regards to the example I provided, I was told that I should make sure that I use a resume stream for when the connection closes and starts up again. Still trying to get my head around when best to call a second watch method with the resume token, I guess this would be best placed when an error occurs from what I am doing with the change stream, such as inserting a new document into another collection,.Thanks again for getting back so quickly. 
I’ll have another attempt at this tomorrow and will provide an update hopefully with some working code.", "username": "Piers_Ebdon" }, { "code": " let testInventory = app.testingDatabase.collection(\"playerratings\", withType: PlayerRating.self)\n _ = testInventory.watch(withEventType: PlayerRating.self).flatMap { stream -> EventLoopFuture<Void> in\n stream.next().map { playerRating in\n if let playerRating = playerRating {\n let matchPlayerFilter: BSONDocument = [\n \"matchID\": .objectID(playerRating.matchID),\n \"playerID\": .objectID(playerRating.playerID)\n ]\n \n app.testingDatabase.collection(\"matchplayers\", withType: MatchPlayer.self).findOne(matchPlayerFilter)\n .hop(to: app.client.eventLoop)\n .map { matchPlayer -> EventLoopFuture<UpdateResult?> in\n let matchPlayer = matchPlayer!\n let newTotalRatings = matchPlayer.totalRatings + 1\n let averageRating = matchPlayer.averageRating * Double(matchPlayer.totalRatings) + playerRating.rating / Double(newTotalRatings)\n let updateMatchPlayer: BSONDocument = [\"$set\": [\"totalRatings\": .init(integerLiteral: newTotalRatings),\n \"averageRating\": .double(averageRating)]]\n return app.testingDatabase.collection(\"matchplayers\", withType: MatchPlayer.self).updateOne(filter: matchPlayerFilter, update: updateMatchPlayer)\n }\n }\n }\n return stream.kill()\n }\n", "text": "Hey @kmaharSo this is my updated attempt to handle using a change stream. I am trying to use it to update a few rolling averages but so far I am just updating one.I’m not entirely sure how to handle any potential error cases and have used a force unwrap which I would never do in production code.Secondly, I haven’t used a resume token which I was told I should do but I don’t know where / when to use them.Also, another grey area is whether the way I handle the change stream would ensure that the integrity of the data is maintained, as I’m thinking that whilst one call is finishing. another call (change stream) might come in before the previous one has finished.", "username": "Piers_Ebdon" }, { "code": "PlayerRatingMatchPlayers// Group together documents in playerratings by player and match ID and calculate averages.\nlet groupStage: BSONDocument = [\n \"$group\": [\n // group together documents where the player and match ID values match.\n\t\t\"_id\": [\"playerID\": \"$playerID\", \"matchID\": \"$matchID\"],\n // add a totalRatings field that adds 1 value for each matching doc.\n\t\t\"totalRatings\": [\"$sum\": 1],\n // add an average field that averages the \"rating\" field for each doc.\n\t\t\"averageRating\": [\"$avg\": \"$rating\"]\n\t]\n]\n\n// Restructure the output of the previous stage to look like a MatchPlayer.\nlet projectStage: BSONDocument = [\n\t\"$project\": [\n // these IDs are nested under _id, flatten out the structure.\n\t\t\"playerID\": \"$_id.playerID\",\n \"matchID\": \"$_id.matchID\",\n // we already projected both fields so don't need to include this too (0 means omit it).\n \"_id\": 0,\n // we want to pass through these two fields as-is, so use 1.\n \"totalRatings\": 1,\n \"averageRating\": 1\n\t]\n]\nvar collectionOptions = CreateCollectionOptions()\ncollectionOptions.viewOn = \"playerratings\"\ncollectionOptions.pipeline = [groupStage, projectStage]\n\ndb.createCollection(\n\t\"matchplayers\",\n\toptions: collectionOptions,\n\twithType: MatchPlayer.self\n).flatMap { view in\n\t// ... 
use view like a normal collection (read-only)\n}\nChangeStream.forEachEventTypeChangeStreamChangeStreamEventPlayerRatingspipelinewatchwithEventType", "text": "Hello!Just to make sure I understand what you are trying to do correctly: you are watching one collection for changes (player ratings) and on certain types of events (perhaps whenever a new document is inserted to the PlayerRating collection?) you’d like to update a corresponding collection MatchPlayers which is tracking aggregated data. is that correct?Based on what you’ve said you may want to consider using a MongoDB view instead of change streams here. This allows you to create something that behaves very similarly to a collection but has results calculated on-demand via applying an aggregation pipeline to another collection. This would enable you to only store playerRatings and not also matchRatings.This has the downside that you always have to compute the results when you need them, but on the other hand saves you from maintaining multiple collections and recomputing every time a new player rating is inserted.E.g. applying the following pipeline to your playerRatings collection generates the same data as that you store in matchRatings:You could create such a view by doing the following:I think that is likely the simplest way to accomplish what you are doing here.If that doesn’t work for your use case for whatever reason, just to clear up some things about change streams:The way you’ve written this now only processes a single change. I’m not sure how you are calling this code, but typically a change stream is something you have open and running for a long period of time, and the stream will collect any changes that have occurred since it was opened, or in the case of a resume token, since that resume token. This approach appears to possibly create a new change stream each time you need it. You can register a callback for a change stream via ChangeStream.forEach that will execute on each event in the stream as it arrives and then just keep the stream open long term to let it handle processing events until it is killed.I assume that the observer watching the collection is still active right?No, once you kill the change stream no observer exists anymore. If you need to keep the stream around long-term you should store it somewhere and not close it until you are finished using it.The purpose of the resume token is to allow you to create a new change stream that “resumes” wherever a previous change stream left off. Note that the driver will in many cases automatically resume a change stream for you upon e.g. a network error.Also, another grey area is whether the way I handle the change stream would ensure that the integrity of the data is maintained, as I’m thinking that whilst one call is finishing. another call (change stream) might come in before the previous one has finished.If it’s possible that the code here is being called concurrently (or if you were to switch to using forEach), then yes, you could run into issues where the callbacks chained onto the change stream event could be executing at the same time and lead to issues due from lack of synchronization.A couple more notes about your code sample in particular:", "username": "kmahar" }, { "code": "configure.swift", "text": "Hi @kmaharI had no idea about MongoDB views and how they work. They sound cool. 
Perhaps I should try and expand a bit on what I am doing to provide more context about the problem I was attempting to solve with change streams.I am creating an app that connects certain football (soccer) YouTube channels with their subscribers. The first feature I am looking to provide is to allow the subscribers to rate the players in their team after a match has been played.When a subscriber (fan) submits their rating for a player, it will be used to calculate 3 rolling averages:The average for that player in a match, to be used by the channel and calculated from all their subscriber ratingsThe long term average for each player, to be used by the channel and calculated from all the ratings for that player, from all their subscribers, for every match that player has played.The long term average of that player, to be used by the subscriber, from all the ratings that the subscriber has created for that player.The predicament I have is how to update these averages, particularly for the channel, without blocking anything, whilst ensuring the data maintains it’s integrity and ideally a solution whose performance can scale regardless of whether a channel had 10 subscribers or a million subscribers.I was told change streams could be a solution for this but I have found them rather tricky to get my head around and implement as demonstrated by my attempts above. I was only looking to react to when a player rating was inserted. having to worry about closing the connection, running the code from configure.swift (Vapor project) and not from an api call, somehow handling error etc adds to the complexity for me.views sound good but I wonder whether they would work when a rolling average is needed to be updated in 3 different places when a player rating is inserted?I was thinking that a simple potential solution could be to use the $inc operator to increase the total number of ratings and the total rating value in each related document and then calculate the averages client side? I was thinking that using the $inc operator would have a minimal impact in blocking a document from being written or read to? However this solution doesn’t seem optimal but may be best considering my limited MongoDB skills Apologies for the long reply and hopefully I have made some sense. Please let me know If I haven’t, which is most likely!!I keep sounding like a broken record but I really appreciate all the help and feedback. I usually don’t like asking for help but your colleagues and yourself make the experience of doing so a breeze!", "username": "Piers_Ebdon" } ]
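A minimal mongosh sketch of the view idea described in the reply above (the original shows it through the Swift driver); field names follow the thread, the resulting view is read-only and computed on demand:
  db.createView("matchplayers", "playerratings", [
    { $group: {
        _id: { playerID: "$playerID", matchID: "$matchID" },
        totalRatings: { $sum: 1 },            // one per rating document
        averageRating: { $avg: "$rating" }    // mean of the rating field
    } },
    { $project: {
        _id: 0,
        playerID: "$_id.playerID",
        matchID: "$_id.matchID",
        totalRatings: 1,
        averageRating: 1
    } }
  ]);
  // Query it like a normal (read-only) collection:
  db.matchplayers.find().limit(5);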
Change Streams - Swift Driver + Vapor
2020-10-10T07:39:38.943Z
Change Streams - Swift Driver + Vapor
3,919
null
[ "compass" ]
[ { "code": "{\n instantClone: true, status: \"PASS\",\n leaseStart:{$gte : ISODate(\"2020-10-9T07:00:00Z\"), $lt: ISODate(\"2020-10-10T07:00:00Z\")}\n}\n", "text": "I am using Compass and I am in the aggregation section. I am using the dropbox where I am using $match. I insert my query for example:I am trying to use the .count() or .countDocuments() at the end of the curly braces. It says \"Stage must be a properly formatted document. How can I get the total number of documents if the sample only shows 20 max.", "username": "Jessica_Rios" }, { "code": "", "text": "Hello @Jessica_Rios welcome to the Community!When you use the aggregation builder than you need to add the stages individually. So first a $match than you will see the results on the right side. After this pls click “add stage” and add a $count stage.\nThat’s it Cheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "So I have one stage where the $match is and the data is displayed. Shouldn’t I save the data somewhere so I can do the $count? For the $count it’s asking for a string and I don’t know what to insert.", "username": "Jessica_Rios" }, { "code": "", "text": "Hello @Jessica_Riosit will look somewhat like shown in the screenshot. You should see the results of every stage on the right side when the stage definition fits\n\ngrafik622×517 22.9 KB\nThe string with the $count is just the filed name. Here is the link to the documentation were I too the screenshot: $count\n\ngrafik913×181 14.8 KB\nYou can add a https://docs.mongodb.com/manual/reference/operator/aggregation/out/ stage at the end. The string will be a collection name in which the results are written. In case the collection exists thee existing one is overwritten! To save the data to the collection named in $out you need to click the green button “save documents” which you find further right of the $out stage, you may need to scroll to the right.Shouldn’t I save the data somewhere so I can do the $count?you do not need to, the result of one stage is passed to the next stage on which a further stage can act, you can look on it as a filter which narrows down with every stage.\n\ngrafik852×499 80 KB\nHope this helps,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "It’s more of a saved variable. Ok I read more about it and found information. Thank you.", "username": "Jessica_Rios" }, { "code": "", "text": "As further resource I can reommed the free MongoDB class: M121: The MongoDB Aggregation FrameworkMichael", "username": "michael_hoeller" } ]
Using Aggregation in Compass
2020-10-15T19:58:54.552Z
Using Aggregation in Compass
10,491
null
[ "aggregation", "field-encryption" ]
[ { "code": "", "text": "An aggregation pipeline $lookup fails even if the 2 involved collections are not encrypted.\nWe are using a connection pool with 10 connections. Each connection in the pool is created with csfle options enabled. The same connection is used to work on the collections with the encrypted fields.\nThe understanding as per mongo official documentation is that behaviour should be based on which collections is being queried and if there is no encryption on any fields in the collection, there should not be any challenges on the queries w.r.t. those collections.\nThe collections which have encrypted fields will have the query constraints as documented in the official documentation.\nBut, aggregation with lookup, does not work which is not expected.\nException received:\nException in encryption library: Command failed with error 51204 (Location51204): ‘Pipeline over an encrypted collection cannot reference additional collections.’ on server localhost:27020. The full response is{“ok”: 0.0, “errmsg”: “Pipeline over an encrypted collection cannot reference additional collections.”, “code”: 51204, “codeName”: “Location51204”}However, the aggregation works fine if the connection is created without csfle options.Has anyone come across this and has a work around or a fix?Thanks,\nAnu", "username": "Anu_Madan" }, { "code": "", "text": "Hi @Anu_Madan,This is an aggregation bug reported by way of previous discussion on Automatic Client Side Field Level Encryption (CSFLE) Restricts Operations On Unencrypted Collections.For updates, please watch/upvote SERVER-50092: [FLE] with encryption on collection and $lookup with two non-encrypted collections fails in the MongoDB issue tracker.The only suggested workaround at the moment is to use a client connection without CSFLE options (which you’ve already discovered).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks @Stennie_X\nI have upvoted for the bug as suggested.\nIs there a way to find out by when the bug will be resolved? This is a big blocker on the path of adopting CSFLE.Regards,\nAnu", "username": "Anu_Madan" }, { "code": "", "text": "Is there a way to find out by when the bug will be resolved? This is a big blocker on the path of adopting CSFLE.Hi @Anu_Madan,Thanks for upvoting the issue - that is a helpful signal for the development team’s planning and prioritisation. Watching the issue in Jira (i.e. logging in and clicking on “Start watching this issue”) is the best way to follow progress.This issue currently has a Fix Version of “Backlog” and an assignee of “Backlog - Query Team”, which means the issue has been assigned to the query team’s work backlog but has not been planned for a development sprint yet. As such, there is no further ETA available at the moment.For some more information on the general development workflow, please see my comment on When will SERVER-25023 be released? - #4 by Stennie_X.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
CSFLE - Java Driver - aggregation with lookup fails with 2 non-encrypted collections
2020-10-15T11:53:20.980Z
CSFLE - Java Driver - aggregation with lookup fails with 2 non-encrypted collections
3,837
null
[]
[ { "code": "", "text": "Hi,\nWe need to store in mongo around 20B documents, where each document is tied into a specific id.\nAlso, each document has an expiration and needs to be deleted after a configurable amount of time.We came up with this model for storing the documents:\n{\n_id: uuid,\ndata: Array\n}\nThe documents are small, and from our testing we can fit around 190,000 documents per _id, but we only expect around 5000.\nThe document are queried by the _id and a date for the nested documents. We use a simple $unwind aggregation to get the results.My only fear is the deletion time. Each document has a date, but from what I understand one can’t create TTL index on nested document, so we needed to create a cron for it.Can you suggest a better solution?", "username": "Sason_Braha" }, { "code": "{\n_id : ObjectID,\nuuid: ... ,\ncreatedDate: ...,\ndata: ...,\n...\n}\n", "text": "Hi @Sason_Braha,I would say that having 5000 nested documents in one array is also very high. It may impact many operations as MongoDB has to serelize and desirialize those arrays in many operations against the documents (cpu/ memory overhead).Also it impose a high risk on expiration mechanism as constantly pulling and pushing array elements to large arrays is unadvisable…I would consider keeping each document in a seperate one and indexing a field named uuid. If your HW will not be able to operate with this design consider sharding the environment on hash shard key for this uuid if you only have to query based on the uuid.This way you can have a TTL index as your createDate will be on the main level of a document.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny, thank you for the response.\nCurrently sharding between clusters is not an option for us, but from what I understand from your comment, it’ll be better performance wise to insert the documents into single big collection and index the id and date?\nI’ll note that the documents are never updated, only $push and $pull are used.", "username": "Sason_Braha" }, { "code": "coll_20200101_20200107\ncoll_20200107_20200114\n...\ncoll_xxxxxx_yyyyyy\ncoll_yyyyy_zzzzz\n...\n", "text": "Hi @Sason_Braha,Well $pull and $push on existing docs are updates.If you can’t shard the environment please consider splitting the collections into partitioned ones. Having single documents with up to 5000 array objects which you constantly pull or push might mean trouble…For example based on a time range, where you can do a daily/weekly collections by integrating the date in their names . This way you will have smaller collections and your application will need to do some mapping to understand what collections to query in real time.On the other hand you can use a hash value to store based on your UUID’s where you will have a collection based on 2 hashes (lower and upper limit):However, your application will still need to store some mapping to which collection you should go when looking for this uuid hash.If you wish to keep it all in one collection note that you will need to scale your HW or shard as you grow.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you.\nWe’ll not use the nested document approach, we decided to follow your recommendation and create partitioned collections by week and create expire index for documents.", "username": "Sason_Braha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Storing ~20 Billion documents
2020-10-13T19:37:01.928Z
Storing ~20 Billion documents
4,657
null
[ "indexes" ]
[ { "code": "", "text": "Hi Team,\nI have one collection in production env. Due to feature enhancements, I am going to add new indices.\nWhats the best practice we should follow for this ??\nDo I have to clear and re-enter existing data again after creating indexes??\nAlso I m going to add two new fields in existing production data. Is there any feature mongo atlas support which will make this addition super simple ??Thanks,\nGheri.", "username": "Gheri_Rupchandani" }, { "code": "", "text": "For index creation look at https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/.For adding fields I am a big fan of Building with Patterns: The Document Versioning Pattern | MongoDB Blog.", "username": "steevej" }, { "code": "", "text": "Thanks for pointing to this.\nI think we have already using document versioning pattern so it would be easy for me to update the documents with new fields.\nNow we can afford some performance degradation so can i rely on createIndex() to take care of existing data.Thanks,\nGheri.", "username": "Gheri_Rupchandani" }, { "code": "", "text": "Hi Team,\nThe Reason to update existing records with new fields becuase I would be creating indices on one of the fields and somewhere I had read that there would be performance degradation if some fields are null.\nJust wanted to confirm this performance cost??\nIf its not performance degration in above case, then i can keep these existing records as they are for some time.Thanks,\nGheri.", "username": "Gheri_Rupchandani" }, { "code": "", "text": "Any updates on above query ??", "username": "Gheri_Rupchandani" } ]
Adding new index to production collection
2020-10-07T18:54:27.184Z
Adding new index to production collection
2,306
null
[ "node-js" ]
[ { "code": "", "text": "HelloI’ve read some key features about MongoDB stitch. I believe it allows us querying the MongoDB from the client’s browser using HTTP. However, I’m looking for an open source solution to connect to a MongoDB instance from my browser.What is the current status of this? Is it possible to deploy an open source solution to connect to a MongoDB instance using HTTP or browserify?", "username": "Alex_Dunphy" }, { "code": "", "text": "Hi @Alex_Dunphy,Welcome to MongoDB community!MongoDB Realm js sdk should allow you to access your Atlas cluster via your web browser code.However, with the use of package like webpack you might be able to use the MongoDB nodejs driver in your web code. Having said that its not recommended as Realm acts like a backend for those queries optimizing connection pools behind while using a driver directly might be insufficient. And expose security hazards…Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Dear Pavel,Thank you for your reply \nYes, I understand the security implications but the service I’m building is not for the public and is for individual use. One user = One Database. With React, I believe that the page will not be refreshed and the connection can be kept running.It is interesting to see if it’s possible to have a subset of the features of the MongoDB Driver working in the browser.Are there any goals to provide a self-hosted version (open source) of MongoDB stitch? It would be similar to Hasura and Space Cloud.", "username": "Alex_Dunphy" }, { "code": "", "text": "Hi @Alex_Dunphy,MongoDB Stitch is now called MongoDB realm.Have you explored the possibility to use realm as local\nLight wieght database without syncing it at all to AtlasJust use it as your storeBest\nPavel", "username": "Pavel_Duchovny" } ]
Using MongoDB Node.js Driver to connect to an instance from the browser
2020-10-14T06:06:12.670Z
Using MongoDB Node.js Driver to connect to an instance from the browser
3,024
null
[ "security" ]
[ { "code": "", "text": "Hi,I login to mongo DB atlas or compass through valid credentials and the database data is visible to me and it is clearly understood that it performs disk level encryption and data get decrypts on login ( data at transit).I need to understand what needs to be done for Data to be encrypted at store?\nI have azure key vault and my concept is when i login to the atlas or compass the database data should not be visible to me.\nIs there any configuration that needs to be done between mongo DB m10 cluster and azure key vault.\nI have already configured the azure key vault wth mongo DB cluster.\nfollowed the below mentioned document :\nCustomer Key Management with Azure Key VaultStill encryption at store is not working, and on login data from database is visible to me.Thank you.", "username": "Aniket_Godase" }, { "code": "", "text": "Hi @Aniket_Godase,Please see my comment hereBest\nPavel", "username": "Pavel_Duchovny" } ]
Enable MongoDB database level encryption
2020-10-15T06:46:14.884Z
Enable MongoDB database level encryption
2,929
null
[ "atlas-functions" ]
[ { "code": "", "text": "Hi,I’m looking see if we can request to increase the ext. dependencies import limit or request for the npm “puppeteer” module, to be part of your built-in ones. Thanks.– Alex", "username": "AlexOnTheGrind_N_A" }, { "code": "", "text": "Welcome to the community Alex!Do you mind sharing the total size of your imported dependencies? This will help us understand what kind of limits users are hitting and raise them appropriately. Also feel free to add the request here Realm: Top (70 ideas) – MongoDB Feedback Engine as we’re actively tracking this.", "username": "Sumedha_Mehta1" } ]
Importing External Dependencies
2020-10-08T19:49:44.472Z
Importing External Dependencies
1,824
null
[]
[ { "code": "", "text": "I think flow control is very important function to database, which can prevent database from crashing or OOM by unconscious or malicious clients.Any design in mongo source code such as delay requests or fail requests intentionally ?", "username": "Lewis_Chan" }, { "code": "", "text": "Here is some internal documentation around the design of the server’s flow control mechanism, released in v4.2.Here is the public documentation.", "username": "Daniel_Pasette" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there any flow-control mechanism in mongodb?
2020-08-12T13:14:19.029Z
Is there any flow-control mechanism in mongodb?
3,893
null
[ "devops" ]
[ { "code": "", "text": "Hey Peeps,We’ve had conversations previously about how to setup CI/CD pipelines for Realm Serverless apps, so I wanted to provide an update on my latest pipeline changes.I’ve just finished updating my CI/CD pipeline so that code for each stage is in a branch in a single repo. Previously, the code for each stage was in its own repo, because Realm auto-deploys only worked from the master branch. Now that Realm auto-deploys can be configured to work from any branch, I’ve moved the code to a single repo. This should make it easier to manage moving code between the stages.I’ve updated the Readme of my app to explain how I’ve configured the pipeline: SocialStats/README.md at master · mongodb-developer/SocialStats · GitHub", "username": "Lauren_Schaefer" }, { "code": "", "text": "@Lukas_deConantseszn1 I wanted to let you know about the updates to my Readme in case you found them helpful ^^How is your pipeline going? Have you learned anything along the way? Is your pipeline similar to mine, or have you found another approach to work better?", "username": "Lauren_Schaefer" }, { "code": "", "text": "Hi @Lauren_Schaefer! Thank you so much for your message!I did find them super helpful. I ended up changing a few things. For instance, I decided to use multiple clusters per your own architecture. And I decided to use the same DB name in each cluster so I didn’t have to change that up.I still had to use the script from @kraenhansen to replace things like the cluster name, app_d domiain information like custom_domain and app_default_domain. I don’t know if using github auto deploy would remove the need for some of this search and replace, but I haven’t explored that enough. I’m still using github actions to run tests and deploy code. One thing I noticed with using the realm CLI to deploy, is sometimes it can mess up your data cluster link and database triggers will start failingAnyway, I still want to implement some of the cool stuff from your setup like function tests and UI tests. Hopefully soon!Love the README! ", "username": "Lukas_deConantseszn1" }, { "code": "{\n \"id\": \"reallylongappid\",\n \"name\": \"mongodb-atlas\",\n \"type\": \"mongodb-atlas\",\n \"config\": {\n \"clusterName\": \"Cluster0\",\n \"readPreference\": \"primary\",\n \"wireProtocolEnabled\": false\n },\n \"version\": 1\n}\n{\n \"name\": \"mongodb-atlas\",\n \"type\": \"mongodb-atlas\",\n \"config\": {\n \"readPreference\": \"primary\",\n \"wireProtocolEnabled\": false\n },\n \"version\": 1\n}\n{\n \"app_id\": \"myappid\",\n \"config_version\": 20200603,\n \"name\": \"SocialStats-Staging\",\n \"location\": \"US-VA\",\n \"deployment_model\": \"GLOBAL\",\n \"security\": {},\n \"hosting\": {\n \"enabled\": true,\n \"app_default_domain\": \"mydomain.mongodbstitch.com\"\n },\n \"custom_user_data_config\": {\n \"enabled\": false\n },\n \"sync\": {\n \"development_mode_enabled\": false\n }\n}\n{\n \"config_version\": 20180301,\n \"security\": {},\n \"custom_user_data_config\": {\n \"enabled\": false\n },\n \"realm_config\": {\n \"development_mode_enabled\": false\n }\n}", "text": "@Lukas_deConantseszn1 I’m happy to hear you were able to get your pipeline working!Multiple clusters makes a lot of sense if you’re not using the free clusters.FWIW, I just deleted the app specific stuff in my config files…and everything kept working. I don’t know if there are any consequences to that. @kraenhansen Is that problematic? 
For example, if I export my Realm app in the Realm web UI, my services/mongodb-atlas/config.json file looks like:The services/mongodb-atlas/config.json file in my repo looks like this:The config.json in the root directory of my exported app looks like:The stitch.json (I haven’t renamed the file yet since the rebrand from Stitch to Realm) looks like:", "username": "Lauren_Schaefer" } ]
CI/CD Pipelines for Realm - Keeping the Code in a Single Repo
2020-10-06T12:48:05.050Z
CI/CD Pipelines for Realm - Keeping the Code in a Single Repo
2,924
null
[ "atlas-search" ]
[ { "code": "", "text": "How to search for similar char inside a string, such as Avenger and Scavenger\nex:-\nAvenger will return both History and story\nHistory will return History", "username": "Sagar942150" }, { "code": "nGram", "text": "For this type of search you should look at the autocomplete docs. I think you’ll want to choose the nGram tokenization in order to be able to find partial text within the word being indexed.", "username": "Daniel_Pasette" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb atlas autocomplete
2020-10-14T19:16:38.773Z
Mongodb atlas autocomplete
2,485
null
[ "sharding" ]
[ { "code": "", "text": "From MongoDB documentation I see that only if the database has sharding enabled, a collection in it can be sharded.\nSay if I have a database with 10 collections and I want to enable sharding on one collection in it. If I enable sharding for the database and that one collection, will the other 9 collections be sharded too?\nAlso if I add few more collections to that database, will the newly added collections be sharded too?Thanks,\nAkshaya Srinivasan", "username": "Akshaya_Srinivasan" }, { "code": "", "text": "Only collections for which you enable sharding with sh.shardCollection() will be sharded.When you shard you must specify a shard key. There is no way for the system to know which shard key should be use. So by default collections are not sharded.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query on sharding the collection and database
2020-10-15T10:00:32.472Z
Query on sharding the collection and database
1,749
null
[ "atlas-search" ]
[ { "code": "", "text": "I’ve been testing a little with searchBeta stage and must say it’s pretty great. It solves a lot of search problems we’ve had up until now with great ease. The only problem (other than it not being on M10 yet :-p) that I see is how do we develop with this Atlas-only feature locally? We develop in MeteorJS and this starts a local mongod which the app connects to which obviously won’t have this aggregation stage available. We could check if we’re running locally and code a fallback query I guess. Any better ideas ?", "username": "Mark_Lynch" }, { "code": "", "text": "@Mark_Lynch Thanks for your feedback on Full Text Search. It is still beta, but the team has been making improvements based on user feedback and an ambitious backlog.The Full Text Search beta is currently Atlas-only, however we have recently added support to the free and shared tier (M0, M2, M5) clusters and are working on bringing this to the M10/M20 tiers. For some context on what’s involved, please see Text Search in M10/M20 and watch/upvote the associated suggestion to Support M10/M20 on the MongoDB Feedback site.There is also a suggestion to add ability to test Atlas Search locally that you can watch and upvote on the MongoDB Feedback site.We could check if we’re running locally and code a fallback query I guess. Any better ideas ?Outside of testing on a free/shared Atlas cluster (which doesn’t meet your requirement for fully local testing), coding a fallback query is probably the best option at the moment.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Now that Altas Search is out of beta, is there a plan to offer Full Text Search for locally deployed Mongo instances for local development use, or is using a free tier hosted solution the way to go?", "username": "Louis_Byers" }, { "code": "", "text": "Hi @Louis_Byers,The relevant feature request to watch and upvote is still add ability to test Atlas Search locally as discussed above.There currently isn’t an on-prem equivalent of Atlas Search, so the solutions are also unchanged: use an Atlas cluster (Atlas Search is now available on all clusters tiers running MongoDB 4.2 or later) or implement a fallback query for local testing.For a local deployment/fallback you could perhaps use the more basic features of Text Indexes.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search Local Development
2020-02-14T16:00:28.876Z
Atlas Search Local Development
7,221
null
[ "atlas-search" ]
[ { "code": "", "text": "How can I perform search operations on type Array of numbers?", "username": "Sagar942150" }, { "code": "test:PRIMARY> db.c.insert({array:[1,2,3]})\nWriteResult({ \"nInserted\" : 1 })\ntest:PRIMARY> db.c.find({array:1})\n{ \"_id\" : ObjectId(\"5f84717525fa89f1696c3941\"), \"array\" : [ 1, 2, 3 ] }\n", "text": "Hi @Sagar942150 and welcome in the MongoDB Community !Please provide an example of your document and expected result if you want a detailed answer. It’s too vague.Here is an example based on what I understand from your question:Here are more resources that hopefully will help you:I hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "[1,2,3] => ['1','2','3']\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"ArrayNumbersField\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n }\n }\n }\n}\n[\n {\n \"$search\": {\n \"index\": \"SearchIndex1\",\n \"compound\": {\n \"filter\": [\n {\n \"compound\": {\n \"should\": [\n {\n \"phrase\": {\n \"path\": \"ArrayNumbersField\",\n \"query\": \"1\"\n }\n },\n {\n \"phrase\": {\n \"path\": \"ArrayNumbersField\",\n \"query\": \"2\"\n }\n }\n ],\n \"minimumShouldMatch\": 1\n }\n }\n ]\n }\n }\n }\n]\n", "text": "@MaBeuLux88 He speaking about Atlas Search not MongoDB @Sagar942150 Today we cannot search in array of number in $search.\nYou have to convert your number array to string array.Like this.Make a mappings for search like string:Make $search with filter for my example and every number you want to search add new filter:This query will find all documents with as minimum 1 or 2 in “ArrayNumbersField” ( You can update minimumShouldMatch if you want all number required for your search :I hope this will help you ", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi, Thanks for replying. Actually my question is for mongodb Atlas search query as per there documentation we can’t apply search on array of number type is there a way we can deal with such scenario.", "username": "Sagar942150" }, { "code": "", "text": "It didn’t work for me my collection having a property as grades:[1, 2] it won’t return anything", "username": "Sagar942150" }, { "code": "[1,2] => ['1','2']\n", "text": "@Sagar942150Just convert this colum of array of number, by array of string. ( with all documents in your collection )\nAnd when you insert new document dont forget to convert your array of number !It’s same just need to convert ( String Value) when you make your query to get numbers ( Numeric Value ).And for make search and get array of numeric value, you can use aggregation pipeline to make your query -", "username": "Jonathan_Gautier" }, { "code": "", "text": "Oops! 
\nLooks like I got tricked and I read too fast here !", "username": "MaBeuLux88" }, { "code": "", "text": "Actually, the search should return something with reference to the above code.\n$search with $compound and $phrase — It is not working", "username": "Sagar942150" }, { "code": "", "text": "Wait 1 hour i will make you a full example ", "username": "Jonathan_Gautier" }, { "code": "", "text": "Sure, will wait for your response!!!", "username": "Sagar942150" }, { "code": "{\"name\":\"document1\",\"array\":[\"1\",\"2\",\"3\"]} \n{\"name\":\"document2\",\"array\":[\"3\",\"4\",\"5\"]}\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"array\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n }\n }\n }\n}\n[{$search: {\n index: 'default',\n compound: {\n filter: [\n {\n compound: {\n should: [\n {\n phrase: {\n path: 'array',\n query: '2'\n }\n },\n {\n phrase: {\n path: 'array',\n query: '4'\n }\n }\n ],\n minimumShouldMatch: 1\n }\n }\n ]\n }\n}}]\n[{$search: {\n index: 'default',\n compound: {\n filter: [\n {\n compound: {\n should: [\n {\n phrase: {\n path: 'array',\n query: '2'\n }\n },\n {\n phrase: {\n path: 'array',\n query: '4'\n }\n }\n ],\n minimumShouldMatch: 1\n }\n }\n ]\n }\n}}, {\n $project: { \n name: 1,\n array:\n {\n $map:\n {\n input: \"$array\",\n as: \"grade\",\n in: { $toInt: \"$$grade\" }\n }\n }\n }\n}]\n\n", "text": "Sorry for time I have created new collections with two documents:image1364×546 20.4 KBMake search Index mappings:image1364×306 20 KBAnd make an aggregation with searching 2 and 4:This return my two documents:\nimage1172×420 22.4 KBAnd to get Int value in result array of number you can use $project in your pipeline like this image1209×319 18 KBI cant do more to help you now ", "username": "Jonathan_Gautier" }, { "code": "", "text": "It’s ok things take time Thank you for your help but actually as I previously said my collection is having array of number array = [1,2,3,4,5] and I have to perform an operation on this array", "username": "Sagar942150" }, { "code": "array = [1,2,3,4,5] => array = ['1','2','3','4','5']\nquery = {}\nprint(lots.count(query))\n\n\ndef idlimit(page_size, skip=None, query=None):\n \"\"\"Function returns `page_size` number of documents after last_id\n and the new last_id.\n \"\"\"\n if query is None:\n query = {}\n if skip is None:\n cursor = lots.find(query).limit(\n page_size)\n else:\n cursor = lots.find(query).skip(skip).limit(page_size)\n\n data = [x for x in cursor]\n\n if not data:\n return None, None\n\n if skip is None:\n skip = page_size\n else:\n skip += page_size\n\n return data, skip\n\n\ndata, skip = idlimit(100, query=query)\ncount = 0\n\nwhile skip is not None:\n if data is not None:\n for item in data:\n if len(item['array']) > 0:\n array = list(map(str, item['array']))\n lots.update_one({\"_id\": item['_id']},\n {'$set': {'array': array}})\n count += len(data)\n data, skip = idlimit(100, skip=skip, query=query)\n print(count)\n", "text": "You juste have to convert all your documents like this:I will give you my scrypt python to do this ( You can change query object to apply on specific documents ! )", "username": "Jonathan_Gautier" }, { "code": "", "text": "Thanks !!! I have added a duplicate field with array of string", "username": "Sagar942150" }, { "code": "", "text": "Yes this was another solution also \nNo problem ! ", "username": "Jonathan_Gautier" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
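The thread above settles on keeping a duplicate string version of the numeric array. A mongosh sketch of doing that server-side with a pipeline update (MongoDB 4.2+); the collection and field names are assumptions:
  // Adds arrayStr alongside the numeric array, e.g. [1,2,3] -> ["1","2","3"].
  db.items.updateMany({}, [
    { $set: {
        arrayStr: { $map: { input: "$array", as: "n", in: { $toString: "$$n" } } }
    } }
  ]);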
MongoDB Atlas search with Array of Numbers
2020-10-12T12:29:40.173Z
MongoDB Atlas search with Array of Numbers
8,087
null
[ "atlas-functions" ]
[ { "code": "exports = function(amounts, colls){\n \n const query = { \"amount\": { $all: amounts } };\n const projection = { \"name\": 1 };\n \n // Accessing a mongodb service:\n var db = context.services.get(\"mongodb-atlas\").db(\"App\");\n \n var doc = [];\n\n var i;\n for(i of colls){\n doc.push(db.collection(i).find(query,projection).toArray());\n }\n\n // return the names \n return doc;\n};\n[\n [{\"_id\":{\"$oid\":\"5f7b4ad6513f671c7e11a350\"},\"name\":\"Marmor\"},\n {\"_id\":{\"$oid\":\"5f7b4ae1513f671c7e11aa05\"},\"name\":\"Sand\"}],\n\n [{\"_id\":{\"$oid\":\"5f7b4aaa513f671c7e118bc3\"},\"name\":\"Kaese\"},\n {\"_id\":{\"$oid\":\"5f7b4ab7513f671c7e11922e\"},\"name\":\"Mandel\"}]\n]\n[\n {\"_id\":{\"$oid\":\"5f7b4ad6513f671c7e11a350\"},\"name\":\"Marmor\"},\n {\"_id\":{\"$oid\":\"5f7b4ae1513f671c7e11aa05\"},\"name\":\"Sand\"},\n {\"_id\":{\"$oid\":\"5f7b4aaa513f671c7e118bc3\"},\"name\":\"Kaese\"},\n {\"_id\":{\"$oid\":\"5f7b4ab7513f671c7e11922e\"},\"name\":\"Mandel\"}\n]\n", "text": "Hey everyone, I want to search through several collections on Mongo Realm and output the result in an array. I used the following function:With exports([“e”, “b”, “z”, “m”], [“CollA”, “CollB”]) in the console I get arrays in an array:How do I have to change the code to get only one array like below? So far all my attempts have failed.Any help on this one would be greatly appreciated.Regards,\nAxel", "username": "Axel_Ligon" }, { "code": "concatpushvar i;\n for(i of colls){\n doc.concat(db.collection(i).find(query,projection).toArray());\n }\n", "text": "Hi @Axel_Ligon,Have you tried using concat instead of push:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yes, I did. I get a response in the console:\n> result:\n\n> result (JavaScript):\nEJSON.parse(’’)", "username": "Axel_Ligon" }, { "code": "exports = async function(amounts, colls){\n \n const query = { \"amount\": { $all: amounts } };\n const projection = { \"name\": 1 };\n \n // Accessing a mongodb service:\n var db = context.services.get(\"mongodb-atlas\").db(\"App\");\n \n var doc = [];\n\n var i;\n for(i of colls){\n doc = doc.concat(await db.collection(i).find(query,projection).toArray());\n }\n\n // return the names \n return doc;\n};\n", "text": "Hi @Axel_Ligon,Sorry my mistake in the code. You will need to use an Async approach that worked for me:Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Test was successful. \nThe combination of “async function(…)” and “doc = doc.concat(await …)” was the solution.Thanks for your help.", "username": "Axel_Ligon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Search through several collections -> result in one array
2020-10-13T11:03:14.801Z
Search through several collections -&gt; result in one array
1,504
null
[ "ops-manager", "licensing" ]
[ { "code": "", "text": "Can anyone enlighten me as to what MongoDB specify as the Enterprise Advanced licensing requirements for using Ops Manager?", "username": "Rick_Johnson" }, { "code": "2.Subscriptions", "text": "Welcome to the community @Rick_Johnson!Outside of evaluation and development purposes, Ops Manager is licensed as part of an Enterprise Advanced subscription. You can use Ops Manager for all servers covered by your subscription.The specific details are in the Customer Agreement under the 2.Subscriptions section:(b) Free Evaluation and Development. MongoDB grants you a royalty-free, nontransferable and nonexclusive license to use and reproduce the Software in your internal environment for evaluation and development purposes. You will not use the Software for any other purpose, including testing, quality assurance or production purposes without purchasing an Enterprise Advanced Subscription. We provide the free evaluation and development license of our Software on an “AS-IS” basis without any warranty.(c) Enterprise Advanced Subscription. MongoDB grants you a nontransferable and nonexclusive license during the term of the Subscription to use and reproduce the Software in your internal environment for the purposes and on the number of Servers stated on the Order Form. You will cover each Server used by an application with an Enterprise Advanced Subscription.If you have further questions on licensing or costs, please contact the MongoDB Sales team.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Licensing requirements for Ops Manager
2020-10-12T18:36:19.003Z
Licensing requirements for Ops Manager
4,311
https://www.mongodb.com/…_2_1024x465.jpeg
[]
[ { "code": "", "text": "\nArt Credit and Special Thanks to Lo HarrisHello friends!MongoDB has extended our community fundraiser for not-for-profit organizations fighting for racial justice and economic advancement of the Black community. Donate Now! >>>We are committed to matching donations up to $250,000 this year. We have already raised nearly $140,000 including MongoDB’s matching contribution! We need your help to reach our goal. Join the other 300+ donors who have contributed!Once you’ve donated, you can send me (@Jamie) a direct message right here in the forums with a screenshot of your donation confirmation message. You’ll then be rewarded with a special forum badge to declare your commitment to fighting systemic oppression. Stand with us >>>Donations to this fund will support the work of the following organizations:\n1574×612 92.1 KB\n\n1574×646 81.9 KB\n\n1574×564 84.2 KB\n\n1574×564 67.3 KB\n", "username": "Jamie" }, { "code": "", "text": "", "username": "Stennie_X" } ]
We Stand United Against Oppression
2020-10-14T21:44:55.672Z
We Stand United Against Oppression
3,879
null
[ "java", "production" ]
[ { "code": "", "text": "The 4.1.1 MongoDB Java & JVM Drivers release is a patch to the 4.1.0 release.The documentation hub includes extensive documentation of the 4.1 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.1/apidocs/ ", "username": "Jeffrey_Yemin" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Java Driver 4.1.1 Released
2020-10-14T20:17:24.280Z
MongoDB Java Driver 4.1.1 Released
3,158
null
[ "dot-net", "atlas-device-sync" ]
[ { "code": "", "text": "Is it possible to change the local folder of a realm with full sync?\nAt the moment on macOS platform and .net the local storage is:\n/Users/username/realm.object-serverIs there a list where the default storage is on all supported platforms?Regards", "username": "Per_Eriksson" }, { "code": "Environment.GetFolderPath(SpecialFolder.Personal)FullSyncConfigurationSyncConfigurationBase.InitializebasePath", "text": "It’s stored in Environment.GetFolderPath(SpecialFolder.Personal). If you want to change the location of just one Realm file, you can pass in a path argument to the FullSyncConfiguration constructor. If you want to change where the base folder for all synchronized Realms is, you can call SyncConfigurationBase.Initialize and pass in a basePath argument.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Full sync change local folder
2020-10-14T15:25:43.729Z
Realm Full sync change local folder
2,095
null
[]
[ { "code": "", "text": "I want to know about mongodb performance", "username": "Purti_Kushwaha" }, { "code": "", "text": "Hi @Purti_KushwahaThis is a good course for mongodb performance: MongoDB Courses and Trainings | MongoDB University", "username": "chris" } ]
Mongodb performance
2020-10-14T09:05:57.619Z
Mongodb performance
1,264
null
[ "legacy-realm-cloud" ]
[ { "code": "", "text": "Hi all i am new to this site.As a production database admin for my company we hosted a cloud database on a couple of realm object servers. Our production server starts out of nowhere to be unable to create new users on the authentication. The two other (dev and staging) works fine with the exact same method as the production server cannot handle.On the web platform for realm cloud, I have tried to switch username/password authentication on/off again, to see if it just needed a refresh. However, I am now unable to enable username/password authentication again with the error being : “failed to create provider”. I have tried both with and without “send email to users…” . Tried to look for it on the internet without success… Any ideas? Appreciate the help!/Kasper", "username": "Kasper_Daugaard" }, { "code": "", "text": "If this is affecting your production environment, you should file a ticket and have the support team fix it. Forums are way too slow to fix production issues.", "username": "nirinchev" } ]
Realm Cloud Authentication failed to create provider
2020-10-13T17:35:22.798Z
Realm Cloud Authentication failed to create provider
3,297
null
[ "dot-net", "legacy-realm-cloud", "xamarin" ]
[ { "code": "public class JobResponse : RealmObject\n{\n\t[Required]\n\t[PrimaryKey]\n\t[MapTo(\"id\")]\n\tpublic string Id { get; set; }\n\t[Required]\n\t[MapTo(\"jobId\")]\n\tpublic string JobId { get; set; }\n\t[Required]\n\t[MapTo(\"runId\")]\n\tpublic string RunId { get; set; }\n\t[Required]\n\t[MapTo(\"driverId\")]\n\tpublic string DriverId { get; set; }\n\t[MapTo(\"sequence\")]\n\tpublic int Sequence { get; set; }\n\t[MapTo(\"completed\")]\n\tpublic int Completed { get; set; }\n\t[MapTo(\"reasonId\")]\n\tpublic int? ReasonId { get; set; }\n\t[MapTo(\"reason\")]\n\tpublic string Reason { get; set; }\n\t[MapTo(\"driverNote\")]\n\tpublic string DriverNote { get; set; }\n\t[MapTo(\"clientSignature\")]\n\tpublic string ClientSignature { get; set; }\n\t[MapTo(\"clientSignatureImage\")]\n\tpublic byte[] ClientSignatureImage { get; set; }\n\t[MapTo(\"driverSignature\")]\n\tpublic string DriverSignature { get; set; }\n\t[MapTo(\"driverSignatureImage\")]\n\tpublic byte[] DriverSignatureImage { get; set; }\n\t[MapTo(\"signedBy\")]\n\tpublic string SignedBy { get; set; }\n\t[MapTo(\"latitude\")]\n\tpublic float Latitude { get; set; }\n\t[MapTo(\"longitude\")]\n\tpublic float Longitude { get; set; }\n\t[MapTo(\"image\")]\n\tpublic byte[] Image { get; set; }\n\t[MapTo(\"responseDtTm\")]\n\tpublic DateTimeOffset ResponseDtTm { get; set; }\n\t[MapTo(\"processedDtTm\")]\n\tpublic DateTimeOffset? ProcessedDtTm { get; set; }\n}\npublic class JobResponse : RealmObject\n{\n\t[Required]\n\t[PrimaryKey]\n\t[MapTo(\"id\")]\n\tpublic string Id { get; set; }\n\t[Required]\n\t[MapTo(\"jobId\")]\n\tpublic string JobId { get; set; }\n\t[Required]\n\t[MapTo(\"runId\")]\n\tpublic string RunId { get; set; }\n\t[Required]\n\t[MapTo(\"driverId\")]\n\tpublic string DriverId { get; set; }\n\t[MapTo(\"sequence\")]\n\tpublic int Sequence { get; set; }\n\t[MapTo(\"completed\")]\n\tpublic int Completed { get; set; }\n\t[MapTo(\"reasonId\")]\n\tpublic int? ReasonId { get; set; }\n\t[MapTo(\"reason\")]\n\tpublic string Reason { get; set; }\n\t[MapTo(\"driverNote\")]\n\tpublic string DriverNote { get; set; }\n\t[MapTo(\"clientSignatureImage\")]\n\tpublic byte[] ClientSignature { get; set; }\n\t[MapTo(\"driverSignatureImage\")]\n\tpublic byte[] DriverSignature { get; set; }\n\t[MapTo(\"signedBy\")]\n\tpublic string SignedBy { get; set; }\n\t[MapTo(\"latitude\")]\n\tpublic float Latitude { get; set; }\n\t[MapTo(\"longitude\")]\n\tpublic float Longitude { get; set; }\n\t[MapTo(\"image\")]\n\tpublic byte[] Image { get; set; }\n\t[MapTo(\"responseDtTm\")]\n\tpublic DateTimeOffset ResponseDtTm { get; set; }\n\t[MapTo(\"processedDtTm\")]\n\tpublic DateTimeOffset? ProcessedDtTm { get; set; }\n\t[MapTo(\"jobStart\")]\n\tpublic DateTimeOffset? JobStart { get; set; }\n\t[MapTo(\"jobFinish\")]\n\tpublic DateTimeOffset? 
JobFinish { get; set; }\n}\n", "text": "I currently have a Realm class as follows;The ClientSignature and DriverSignature properties were used early in the project to store a JSON string of points from the signature pad but these were later replaced by the ClientSignatureImage and DriverSignatureImage properties.Recently we added two nullable fileds, JobStart and JobFinish, and decided to remove the the original ClientSignature and DriverSignature fields as follows;From what I understand from the documentation this is an additive change however the application now throws the following exception;Realms.Exceptions.RealmMigrationNeededException: Migration is required due to the following errors: - Property ‘JobResponse.clientSignature’ has been removed. - Property ‘JobResponse.driverSignature’ has been removed. - Property ‘JobResponse.jobStart’ has been added. - Property ‘JobResponse.jobFinish’ has been added.Since this is using a Full Sync Configuration I can’t use a Migration callback to update the model and there would be a large amount of work to code the changes required to update the Xamarin and Server code to migrate the users to new Realm files as not all users will be updated at the same time.Can someone explain why this is not an additive change and what I could possibly do change the model so it is.", "username": "Raymond_Brack" }, { "code": "FullSyncConfiguration", "text": "You’re right - this is an additive change and the exception should not be thrown. My only guess would be that you’re accidentally opening a Realm without a FullSyncConfiguration. Can you post the section of the code where the exception is thrown along with the version of the SDK you’re using?", "username": "nirinchev" }, { "code": "private readonly string _realmName = AppConfig.RealmName;\n\nprivate Realm OpenRealm()\n{\n\tRealm realm = null;\n\t...\n\t_realmFile = $\"~/{_realmName}\";\n\t...\n\t_config = ConnectionServices.GetRealmConfiguration(_realmFile, _user);\n\t...\n\trealm = ConnectionServices.ConnectToSyncServer(_config);\n\treturn realm;\n}\n\npublic static FullSyncConfiguration GetRealmConfiguration(string realmName, User user)\n{\n\tFullSyncConfiguration config;\n\tvar serverUrl = new Uri(realmName, UriKind.Relative);\n\tconfig = new FullSyncConfiguration(serverUrl, user)\n\t{\n\t\tSchemaVersion = 1,\n\t\tObjectClasses = new[] { typeof(Driver), typeof(Run), typeof(Job), typeof(Service), typeof(JobResponse), typeof(ServiceResponse), typeof(Option), typeof(AppLog) }\n\t};\n\treturn config;\n}\n\npublic static Realm ConnectToSyncServer(FullSyncConfiguration config)\n{\n\tRealm realm;\n\t...\n\trealm = Realm.GetInstance(config);\n\t...\n\treturn realm;\n}\nNativeException.ThrowIfNecessary (System.Func`2[T,TResult] overrider)\nSharedRealmHandle.Open (Realms.Native.Configuration configuration, Realms.Schema.RealmSchema schema, System.Byte[] encryptionKey)\nRealmConfiguration.CreateRealm (Realms.Schema.RealmSchema schema)\nRealm.GetInstance (Realms.RealmConfigurationBase config, Realms.Schema.RealmSchema schema)\nRealm.GetInstance (Realms.RealmConfigurationBase config)\nConnectionServices.ConnectToSyncServer (Realms.Sync.FullSyncConfiguration config)\n\nConnectionServices.ConnectToSyncServer (Realms.Sync.FullSyncConfiguration config)\nAppConfig.LogEntry (Realms.Sync.FullSyncConfiguration config, RealmSx.Shared.Enums+LoggingLevels loggingLevel, RealmSx.Shared.Enums+LoggingLevels logLevel, System.String logEntry, System.String stackTrace)\nSrxRunSheet.Views.ServicePage..ctor 
(Realms.Sync.FullSyncConfiguration config, RealmSx.Shared.Enums+LoggingLevels loggingLevel, SrxRunSheet.ViewModels.ServiceVm service) [0x00027] in <9a65769ae7ba4c0fbecda6e40a12d3c1>:0\nJobPage.OnServiceSelected (System.Object sender, Xamarin.Forms.ItemTappedEventArgs e)\nAsyncMethodBuilderCore+<>c.<ThrowAsync>b__7_0 (System.Object state)\nSyncContext+<>c__DisplayClass2_0.<Post>b__0 ()\nThread+RunnableImplementor.Run ()\nIRunnableInvoker.n_Run (System.IntPtr jnienv, System.IntPtr native__this)\n(wrapper dynamic-method) Android.Runtime.DynamicMethodNameCounter.50(intptr,intptr)", "text": "Following is the code to open the Realm;The schema version has not been changed in the sync config nor have any of the object classes other than the JobResponse class.I recently upgraded Realm to v5.1.1. I also forgot to mention the same code is used by two services used to process the data from Realm to the mobile devices and to process the data from the devices. Neither of these services threw the error.This is the stack trace from App Center;", "username": "Raymond_Brack" }, { "code": "FullSyncConfigurationCreateRealm\n \n /// </remarks>\n public static void EnableSessionMultiplexing()\n {\n SharedRealmHandleExtensions.EnableSessionMultiplexing();\n }\n \n internal override Realm CreateRealm(RealmSchema schema)\n {\n var configuration = CreateConfiguration();\n \n var srHandle = SharedRealmHandleExtensions.OpenWithSync(configuration, ToNative(), schema, EncryptionKey);\n if (IsDynamic && !schema.Any())\n {\n srHandle.GetSchema(nativeSchema => schema = RealmSchema.CreateFromObjectStoreSchema(nativeSchema));\n }\n \n return new Realm(srHandle, this, schema);\n }\n \n [SuppressMessage(\"Reliability\", \"CA2000:Dispose objects before losing scope\", Justification = \"The Realm instance will own its handle\")]\n internal override async Task<Realm> CreateRealmAsync(RealmSchema schema, CancellationToken cancellationToken)\n \n SharedRealmHandleExtensions.OpenWithSyncRealmConfiguration.CreateRealm\n \n if (ShouldCompactOnLaunch != null)\n {\n var handle = GCHandle.Alloc(ShouldCompactOnLaunch);\n configuration.should_compact_callback = ShouldCompactOnLaunchCallback;\n configuration.managed_should_compact_delegate = GCHandle.ToIntPtr(handle);\n }\n \n var srPtr = IntPtr.Zero;\n try\n {\n srPtr = SharedRealmHandle.Open(configuration, schema, EncryptionKey);\n }\n catch (ManagedExceptionDuringMigrationException)\n {\n throw new AggregateException(\"Exception occurred in a Realm migration callback. See inner exception for more details.\", migration?.MigrationException);\n }\n \n var srHandle = new SharedRealmHandle(srPtr);\n if (IsDynamic && !schema.Any())\n {\n srHandle.GetSchema(nativeSchema => schema = RealmSchema.CreateFromObjectStoreSchema(nativeSchema));\n \n RealmConfigurationConnectionServices.ConnectToSyncServerRealm.GetInstance()configRealmConfigurationRealm.GetInstanceConnectToSyncServerconfig: nullJobPage.OnServiceSelectedServicePagenullRealm.GetInstance", "text": "That’s suuuper interesting. FullSyncConfiguration will call its base CreateRealm method:As you can see, this one calls SharedRealmHandleExtensions.OpenWithSync. 
Instead, your stacktrace points to RealmConfiguration.CreateRealm:RealmConfiguration is the class for opening a local Realm, which implies that somewhere in ConnectionServices.ConnectToSyncServer you’re either doing Realm.GetInstance() - without passing in the config parameter - or explicitly getting a RealmConfiguration instance and passing it to Realm.GetInstance.Note that this may also be a subtle bug where ConnectToSyncServer is called with config: null. My guess would be that JobPage.OnServiceSelected is invoking an event that constructs ServicePage with null config, which then flows all the way to Realm.GetInstance.", "username": "nirinchev" }, { "code": "ConnectToSyncServerconfig: null", "text": "Your prognosis was spot on. I had moved a line of code in the page constructor above the point where the configuration was instantiated. Have placed a guard statement in my code to ensure this doesn’t happen again.From Realm’s perspective would it make sense to raise an exception if ConnectToSyncServer is called with config: null? The current exception sends you looking in totally the wrong direction ", "username": "Raymond_Brack" }, { "code": "ConnectToSyncServerRealm.GetInstanceRealm.GetInstance()Realm.GetInstance(null)", "text": "ConnectToSyncServer is your method, so we can’t raise an exception there The first method we could throw one in is Realm.GetInstance. Unfortunately, due to the way we chose to implement it - with optional arguments instead of overloads, we can’t distinguish between Realm.GetInstance() and Realm.GetInstance(null) - those are identical from the compiler perspective, so we assume you were trying to open the default local Realm and go for it.", "username": "nirinchev" } ]
Schema Changes - Additive
2020-10-13T05:48:40.257Z
Schema Changes - Additive
5,511
null
[ "legacy-realm-cloud", "legacy-realm-server" ]
[ { "code": "", "text": "Can I access and manipulate data in Realm Cloud through .net or other language the same way that the program Realm Studio does?I would for example like to update data in a users realm in the server, but I don’t want to use a FullSync to do it locally, and then sync it to she server.\nIs this possible, or do I always have to use FullSync to access Realm data?", "username": "Per_Eriksson" }, { "code": "", "text": "Studio uses full sync to read the data locally and then syncs any changes to the server. You can use the GraphQL API though - it allows you to read and write data via an http API without synchronizing it first.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Working with Realm Object Server without syncing
2020-10-13T19:01:28.265Z
Working with Realm Object Server without syncing
3,610
null
[ "atlas-device-sync" ]
[ { "code": "{ \"%%user.custom_data.readable_contents\" : \"%%partition\" }{ \"%OR\" : { \"%%user.custom_data.readable_contents\" : \"%%partition\", \"%%user.id\" : \"%%partition\", } }sync: {\n user: user,\n partitionValue: **AND NOW?***,\n} \n", "text": "Hello guys!I’m finally starting to move my app from Realm.io to MongoDBRealm and I’d like to discuss with you about my goals and how to reach them in a proper way.GOALDesign a mongo db schema and a Realm configuration for a react-native and GraphQL webapp.DATA MODELUserDataOnly the owner can read/write.UserPermissionOnly the system can read/write. It’s used for user_custom_data.ContentEach content has an id. The user can’t create/update/delete any document. The user can read a document only if it’s in a list. This list could be stored in user_custom_data and should be not editable by the user.A content contain also the total number of likes that is updated by a trigger on every “Like” (see below) insertion/deletion.LikeUser can insert document and delete it’s own documents.WHAT I DID UNTIL NOWI started creating a new react-native app and installing realm@beta. After struggling a little bit I setted up everything and tried some CRUD operation with some random models. Everything fine. On Realm cloud I used the query-based sync model so I could reach every specifications using a single realm and the fine grained permissions system. Here it seems a little bit different.I created a collection of Content. Each document has a specific _pid (partition_id).After I setted up the user_custom_data creating the collection UserPermissions on Atlas UI and associating that to user in Realm UI. In UserPermissions I have a list (the array “readable_contens”) of ids that are the id of the document I can read of the class Content.I setted up the Sync to use _-pid as partition key. So, as the documentation says, I should be able to use something like this in the Sync rules:{ \"%%user.custom_data.readable_contents\" : \"%%partition\" }\n*in the read rulesPerfect! It should works. I have to insert an “or” in the rule to include also every content in the class UserData and Like the user own and I should be good (ok I have to write also the trigger for synching the likes to the “total likes” field in the corrispondent “Content” document.So at the end should be something like this{ \"%OR\" : { \"%%user.custom_data.readable_contents\" : \"%%partition\", \"%%user.id\" : \"%%partition\", } }\n*in the read rulesI don’t need even a function for this. But here is the big problem. When I went back to my react-native app to see this:Now that I use a rule like that on the Realm Sync panel, what partition value should I use? It seems it gives an error if I omit that but it accept only “numbers, strings and objectId” as value so I can’t use neither the user.id nor the user_custom_data.readable_contents (that is an array). 
Anyway I can’t think of what partitionValue rapresents in this Sync rules scenario.What am I doing wrong?", "username": "Aurelio_Petrone" }, { "code": "", "text": "@Aurelio_Petrone You can only open a realm with a single partitionKey value - however, you can open multiple realms in your client side app - each with a different partitionKey value", "username": "Ian_Ward" }, { "code": "{ \"%%user.custom_data.readable_contents\" : \"%%document.id\" }", "text": "@Ian_Ward so this is not the good way of approaching the problem.I tought I could just do something like this.Create just two partition values: one is the user id and another one could be public.I still need a way to prevent the user to access to all the “public content”. So, if I understand well how it works, i’d. need a rule like this:{ \"%%user.custom_data.readable_contents\" : \"%%document.id\" }\n*in the read rulesI tried to use %%root and %%this but in both cases I get an error (that I read from the logs in the admin panel) that say “Don’t know how to expand %%root (or %%this)”Thanks for the feedback", "username": "Aurelio_Petrone" }, { "code": "", "text": "@Aurelio_Petrone You are approaching this problem by thinking about this in the lens of legacy Realm Sync with query-based sync. MongoDB Realm does not QBS, the partitioning strategy of MongoDB Realm is analogous to “full-sync” in legacy Realm.", "username": "Ian_Ward" }, { "code": "", "text": "So it’s impossible to prevent users to read just a part of the content if the partition key is the same? How should I do in this case? There are like 3k different contents and I need a different “part of partition” for each user (and there are thousands of them so a lot of possible “partitions”).@Ian_Ward", "username": "Aurelio_Petrone" }, { "code": "", "text": "Well that it depends on what your public schema looks like and which queries you are using to seed QBS client with the legacy realm model. If you’d like to share them I’d be happy to comment. There are workarounds to migrating to the new partition model which I am happy to suggest", "username": "Ian_Ward" }, { "code": "UserData{\n id : string, // user id\n partitionId : string, // it's the user id again since I can chose only one partition key and I still want access to id\n readContents: string[], // when an user. read a content, the id will go here to track the read contents\n name: string // just the user name\n}\nUserCustomData {\n _id : string //the user id for linking the user custom data to this document. \n canReadDocument: string[] // a list of content id i have access to\n}\nContent {\n id : string // The id of the content\n partitionId : string // Partition id, in this case it's \"public\"\n title: string // Just the title\n}\n", "text": "This is the schema I have in mind (at least part of it, but it believe it’s enough for this example).User can read and write documents in this collectionUser can only read documents in this collectionUser can only read some of the documents. I mean that there should be a realm on the server but user should have only access to part of it (like QBS, hope to see it soon also on MongoDBRealm).The content ids user have access to are written in user_custom_dataThis way I’d be. 
able to use just two parttion values: “public” and “user_id”Thanks @Ian_Ward", "username": "Aurelio_Petrone" }, { "code": "Lead {\n _id : string // The id of the lead\n salespersonId : string // The id of the salesperson who owns this Lead\n managerId: string // The id of the manager whom the salesperson reports to\n}\n", "text": "So there are two workarounds that support a more flexible syncing method in the new MongoDB Realm partition sync model.One of these is that you can use a different partitionKey field for two separate realms that are opened on the client side. A MongoDB Realm cloud app only allows for a single partitionKey field when you enable sync, however, there is nothing preventing you from creating a second cloud app with a partitionKey field and connecting it to the same Atlas backend. You would need to clone your configuration over and you would have to auth twice but this enables some more flexibility. For instance, you can imagine an app where there are salespeople in the field and you only want salespeople to see their own leads and contacts. However their manager should see an amalgamation of all the salespeople’s leads and contacts that report to them. A manager would login to the managerCloudRealmApp and use the managerId as the partitionKey where as a salesperson would login to the salespersonCloudRealmApp and use the salespersonId as the partitionKey. For example your Lead document could look like this -Another option is to denormalize the data and create copies of the data in each individual user’s realm. You can use Database Triggers - https://docs.mongodb.com/realm/triggers/database-triggers/To copy changes from one content document that receives sync changes to any other user’s realm which also has that content document. You can use a contentId or similar as stable identifier - when changes occur, you query for any other documents that also have that contentId and apply the same changes to the document. You can imagine optimizations to this, such as a lookup table or embedding metadata in the content document if needed.I hope this helps", "username": "Ian_Ward" }, { "code": "", "text": "I think I need both of the workarounds. I think I need1 Realm read-ony for the content (it contain a synchedd copy of the root content stored in a separate collection) partitioned by used_id\n1 Realm read/write for the user_data partitioned by user_idSo, just to confirm. Since I have two different behaviours (read - read/write) I need two Realm App.I have also a setting object for the common app settings that I used to store in a collection with a single document. I think is better to use some kind of service like Firebase Remote Config since we are already using firebase.", "username": "Aurelio_Petrone" }, { "code": "partitionId : \"user=Ian'partitionId : \"content=Ian\"", "text": "@Aurelio_Petrone I don’t think you need two different Realm Apps you can just use partitionId for a single cloud app. For your UserData collection you would have partitionId : \"user=Ian' and for the Content collection you would have partitionId : \"content=Ian\" both are different partitions but they correspond to the same user. You would then duplicate the content as needed. 
You can see more about this strategy here -\nhttps://docs.mongodb.com/realm/sync/partitioning/#example-partitioning-data-across-groups", "username": "Ian_Ward" }, { "code": "class LeadClass: Object {\n @objc dynamic var some_data = \"\" //whatever data for this lead object\n @objc dynamic var salesperson_partition_key = \"\"\n @objc dynamic var manager_partition_key = \"\"\n}\n", "text": "@Ian_WardMay I ask for a bit of clarity on option #1 a different partitionKey field for two separate realmsIs the suggestion to create two different apps in the console and have them both point to the same Atlas dataset? The Salesperson App has a salesperson_partition_key and the Manager App has a manger_partition_key. So then all the Lead objects would have a associated propertiesThen when a salesperson logs in the salesperson app using their salesperson_partition_key, they only see their data.When a manager logs into the manager app with their partition key, they can see all of the data from all the salespeople.Or is there another component whereas each salesperson has their own distinct Realm? Or something else?", "username": "Jay" }, { "code": "", "text": "@Jay You got it. It’s the same data just accessed through separate cloud apps based on permissions. So depending on the user’s role, you would have different code paths that injected the correct appId and partitionKey value when opening the realm.", "username": "Ian_Ward" }, { "code": "// read rules\n\n{ \n \"%%true\": {\n \"%function\": {\n \"name\": \"canRead\",\n \"arguments\": [\n \"%%partition\"\n ]\n }\n }\n}\n\n// write rules\n\n{\n \"%%true\": {\n \"%function\": {\n \"name\": \"canWrite\",\n \"arguments\": [\n \"%%partition\"\n ]\n }\n }\n}\n// canWrite\n\nexports = async canRead (partition) => {\n \n console.log(\"Valuto i permessi di lettura\")\n \n if(partition == 'owner='+context.user.id || partition == 'reader='+context.user.id){\n return true\n }else{\n return false\n }\n \n}\n\n// canWrite\n\nexports = async (partition) => {\n console.log(\"Valuto i permessi di lettura\")\n \n if(partition == 'owner='+context.user.id){\n return true\n }else{\n return false\n }\n \n}\n exports = function (changeEvent) {\n\n\n let collection, doc;\n const docId = changeEvent.documentKey._id;\n\n\n\n switch (changeEvent.operationType) {\n\n // If you make an update from the MongoDb Atlas Web GUI the system considers it a \"replace\" operation.\n case \"replace\":\n\n\n doc = changeEvent.fullDocument;\n\n collection = context.services.get(\"eDojo\").db(\"Project_1\").collection(\"contents*\");\n\n const query1 = { \"_eid\": docId, \"sync\": true };\n const update1 = { $set : { title : changeEvent.fullDocument.title, sync: true } };\n\n\n collection.updateMany(query1, update1).then(({matchedCount})=>{\n console.log(matchedCount)\n });\n\n\n break;\n \n case \"update\":\n\n\n doc = changeEvent.fullDocument;\n collection = context.services.get(\"eDojo\").db(\"Project_1\").collection(\"contents*\");\n\n const query3 = { \"_eid\": docId, \"sync\": true };\n const update3 = { $set : { title : changeEvent.fullDocument.title, sync: true } };\n\n\n collection.updateMany(query3, update3).then(({matchedCount})=>{\n console.log(matchedCount)\n });\n\n\n break;\n\n case \"delete\":\n\n collection = context.services.get(\"eDojo\").db(\"Project_1\").collection(\"contents*\");\n\n const query2 = { \"_eid\": docId };\n const update2 = {\n $set: { \"sync\": false }\n };\n\n collection.updateMany(query2, update2);\n\n\n\n\n break;\n }\n\n\n};\nconst credentials = 
Realm.Credentials.emailPassword(\"xxx\", \"xxxx\");\n\nconst user = await Db.app.logIn(credentials);\n\n\nconst config = {\n\n schema: [contentsSchema],\n\n sync: {\n\n user: user,\n\n partitionValue: \"reader=\" + user.id\n\n },\n\n};\n\nconst config2 = {\n\n schema: [userSchema],\n\n sync: {\n\n user: user,\n\n partitionValue: \"owner=\" + user.id\n\n },\n\n};\n\ntry {\n\n Db.realm = await Realm.open(config);\n\n Db.realm2 = await Realm.open(config2);\n\n} catch (error) {\n\n console.log(\"Error:\", error.message)\n\n}}\n export const contentsSchema = {\n\n name: 'contents*',\n\n properties: {\n\n _id: 'objectId?',\n\n _eid: 'string?',\n\n _partition: 'string?',\n\n sync: 'bool?',\n\n title: 'string?',\n\n },\n\n primaryKey: '_id',\n\n};\n\nexport const userSchema = {\n\n name: 'user',\n\n properties: {\n\n _id: 'string?',\n\n _partition: 'string?',\n\n privateData: 'string?',\n\n },\n\n primaryKey: '_id',\n\n};\n", "text": "Thanks, at the end I did just 1 App and 1 partitionKey.I used a field called “_partition” that could be:*where xxxx is the user id.So the Sync rules are nowThese are the two functionIn my example I have different collections:The collections “contents” and “contents*” are synched by a Database Trigger. This is the code I used to sync it. (that maybe could be written a little better ). This is a simple behavior but works for me. When a content document is deleted the synched contents just lose their “sync” instead of being deleted too.Now there shouldn’t be big problems also with the trigger. I read somewhere that the maximum number of triggers that can be executed is 1000/s. Is this true? Can I extend this number by upgrading my plan?If so, let’s do this example. I could store the number of “likes” directly in the “content” document. If a user puts likes I execute a cloud function that updates the number of likes in the document. So if 10000 users have access to that content it means that I have to update 10000 documents in “contents*” to sync that like. If 1000 triggers/s is right this means 10 seconds. But if 1000 users do that at the same time? It will generate a very long cue. Should I worry about that or Atlas is capable of doing that?So two questions please @Ian_Ward :Btw, now I can connect to two realms using this code (where you can see also the schema I’m using). It’s partial.The schema:Hope this helps somebody.", "username": "Aurelio_Petrone" }, { "code": "", "text": "@Ian_Ward @kraenhansen any feedback? Could this sync via triggers mechanism work?", "username": "Aurelio_Petrone" }, { "code": "", "text": "We currently cap Triggers at 1k ops/s but we can raise this for an individual app and are looking into raising the overall limit in the future. But one thing you should consider is that you can update a high number of documents in a Trigger, so a single Trigger should be able to update 10k documents.We think that a single trigger should likely be able to update all the documents. It might be simpler to do this if you kept a metadata document for each piece of content that stored a list of users who had it saved. Those metadata documents would only need to be updated/read by Triggers so partitioning wouldn’t be an issue. 
This wouldn’t cut down on the number of Triggers – It’s more for simplicity/efficiency of code if you had a single trigger updating all documents based on a new like OR if the data was structured in such a way that it was time consuming/not scalable to get all the users who had liked a certain piece of content within the Trigger.Depending on volume of likes you expect you may want to have a user’s like add +1 to their copy of the content and then add/increment a separate document tracking pending likes for a document and only fire a Trigger to update all users likes when the document counting hits certain thresholds. That would cut down on the write amplification quite a bit. And this would cut down on the actual number of triggers by firing less frequently, but the cutdown is only meaningful if there are lots of likes to the same content within a certain time period.", "username": "Ian_Ward" } ]
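A rough sketch of the batching idea from the reply above, assuming a database trigger on the likes collection: each like only bumps a small per-content counter document, and the expensive fan-out over every user's copy in "contents*" runs once the counter crosses a threshold. The service, database and copy-collection names (eDojo, Project_1, contents*) and the _eid/sync fields are reused from the trigger posted earlier in the thread; the pendingLikes collection, the likes and contentId fields and the threshold value are illustrative assumptions, and the sketch ignores races between concurrent trigger executions.

```javascript
exports = async function (changeEvent) {
  const FANOUT_THRESHOLD = 100; // flush to all copies every 100 likes (arbitrary)

  const db = context.services.get("eDojo").db("Project_1");
  const pending = db.collection("pendingLikes"); // hypothetical counter collection
  const copies = db.collection("contents*");

  // Assumes an insert trigger on the likes collection, where each like
  // document carries the id of the content it belongs to.
  const contentId = changeEvent.fullDocument.contentId;

  // Count this like against the content's pending counter.
  const counter = await pending.findOneAndUpdate(
    { _id: contentId },
    { $inc: { count: 1 } },
    { upsert: true, returnNewDocument: true }
  );

  if (counter && counter.count >= FANOUT_THRESHOLD) {
    // Flush the batched likes into every synced copy, then reset the counter.
    await copies.updateMany(
      { _eid: contentId, sync: true },
      { $inc: { likes: counter.count } }
    );
    await pending.updateOne({ _id: contentId }, { $set: { count: 0 } });
  }
};
```

The trade-off, as noted above, is that other users' counts lag by up to the threshold; the liking user's own copy can still be incremented immediately on the client.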
How to setup and config Realm MongoDB? What partition_value should I use on the client?
2020-10-05T23:08:46.750Z
How to setup and config Realm MongoDB? What partition_value should I use on the client?
4,573
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi,\nI’m trying to design a simple social network data model.\nI found this repo https://github.com/DimiMikadze/create-social-network/blob/master/api/resolvers/follow.js as a reference.My question is in this node.js app, a “Follow” action contains 3 document updates (1 relationship document, and 2 user documents) .\nIf the app crashes or the network issue happened, the state might be in the middle of the whole state (ex: 1 updated, 2 not updated).I think it’s a very common pattern, but I don’t know how to handle it. Is there any design pattern to avoid it?\nAlthough transaction is introduced from 4.0, I would like to know the best practice of this common situation.Thanks,\nJo", "username": "Jo_Huang" }, { "code": "", "text": "Hi @Jo_Huang,You have found one of the main concern for designing relationship in MongoDB where if you have many collections to update/query you will endup in not optimized and reliably harder schema to maintain.One of the rules is try to use as less collections as possible and data that will be queried updated should be stored together.Therefore, I am not sure why do you need a relationship document. I am not seeing how a relationship will be queried without a user context (either follower or following) . For this reason I think there are 2 documents to update :Now doing those 2 documents can be done with ACID transaction if you have to ensure data consistency across documents. However, it could also be done in an async way where one of your services (or atlas trigger) is listening to follow requests and update the followed user together with other application logic like sending a push notifications to that user.Now the problem of keeping all followers in one array as you will have unbounded arrays which is a known mongo antipattern. Therefore you should look into the outlier pattern design for heavy users.Of course I recommend using all the baked in mechanisms like retrayble writes and causal consistency to improve failure writes. 
Perhaps also add a retry logic of your own and use $addToSet to push relationship to have no impact if operations are done multiple times to the data logic.https://www.mongodb.com/article/schema-design-anti-pattern-summaryThe Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data setThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi, Pavel\nI found this reference\n\n\nOn page 18, it seems suggesting the relationship (edge collection in the page) collection.Could you help me understand more?\nIs there any recommended social network data model design reference?Thanks", "username": "Jo_Huang" }, { "code": "", "text": "Hi @Jo_Huang,I see this presentation nis based on socialite which is our community project mimic of social network to test MongoDB workloads.I think the relationship collection is in a way an outlier collection.One of the consideration when having lots of arrays and possibly index them for searches might introduce an overhead maintaining them or will need large ram to keep the hot working set in memory (best practices).I would like to emphasize that this project and design was initially based on very old MongoDB versions where the storage engines and compression as well as index optimization was different… I am not saying most of the consideration don’t apply but I would relay on more up to date content like our pattern blogs and performance new blog series … I will link them here for you to read!A summary of all the patterns we've looked at in this seriesBest practices for delivering performance at scale with MongoDB. Learn about the best way to evaluate performance with benchmarks.\n( Linked last article as it has all others )Let me know if that makes it clearBest\nPabel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for the references. I’ll check it!", "username": "Jo_Huang" } ]
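To make the two-document "follow" update from the first reply concrete, here is a minimal sketch using a multi-document transaction and $addToSet with the Node.js driver. The "social" database, "users" collection and the followers/following field names are assumptions for illustration; transactions require a replica set or sharded cluster.

```javascript
const { MongoClient } = require("mongodb");

async function follow(uri, followerId, followedId) {
  const client = new MongoClient(uri);
  await client.connect();
  const users = client.db("social").collection("users");
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      // Both writes commit or abort together.
      await users.updateOne(
        { _id: followerId },
        { $addToSet: { following: followedId } }, // $addToSet keeps retries idempotent
        { session }
      );
      await users.updateOne(
        { _id: followedId },
        { $addToSet: { followers: followerId } },
        { session }
      );
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```

If strict atomicity is not required, the same $addToSet updates can instead be applied asynchronously (for example from a trigger), as also suggested above, since $addToSet makes the operation safe to repeat.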
Multiple documents updates
2020-10-12T18:36:39.206Z
Multiple documents updates
2,691
null
[ "dot-net" ]
[ { "code": "", "text": "Hi\nI’m new to Realm and trying to write a .Net application that works with media data stored in a database. Standard stuff like title, artists etc.My first few tries looked good. Than I added additional properties, mostly lists of primitives, and since then the size is completely out of proportion. At first it was less than 100Mb for less than 5000 entries. But after some refactoring it was suddenly more than 10Gb. I removed one list containing more than 100k of another RealmObject type, and the size for 1k elements was still more than 1Gb. I opened the database in Realm Studio, but there are no duplicates or anything.I tried writing that reduced type to a LiteDB database for comparison, containing exactly the same data. After 20k elements it was less than 20Mb!For some reason the Realm databse for the same type, containing 7 primitive properties and 14 lists of primitive types, half of which have only one or two entries at the moment, the other half has less than 10, is a thousand times bigger than the LiteDb database.Since I’m new to Realm I don’t know what might cause this, so I don’t know what to look for. Does anyone have some helpfull ideas?RegardsThorsten", "username": "Thorsten_Schmitz" }, { "code": "", "text": "Realm is a bit funky when it comes to disk space. There’s some explanation in the docs but you will need to take a look at compacting your realm to remove the unused space. Take a look atCompacting RealmsThat’s the Swift docs.", "username": "Jay" } ]
Size of Realm Database completely out of proportion
2020-10-13T11:03:27.073Z
Size of Realm Database completely out of proportion
3,553
null
[ "swift" ]
[ { "code": "func loginAnon() {\n let anonymousCredentials = Credentials.anonymous()\n realmApp.login(credentials: anonymousCredentials) { (user, error) in\n DispatchQueue.main.sync {\n guard error == nil else {\n print(\"Login failed: \\(error!)\")\n return\n }\n \n Realm.asyncOpen(configuration: user!.configuration(partitionValue: user!.id!)) { [weak self](realm, error) in\n guard let realm = realm else {\n fatalError(\"Failed to open realm: \\(error!.localizedDescription)\")\n }\n\n // This simply passes realm instance to next view controller and loads next view controller\n self!.delegate?.loginAnonDidComplete(self!, realm: realm)\n }\n }\n }\n}", "text": "I’m working on an app for iOS. I was using email/password auth so far, but wanted to let user to get started with the app before registering with help of anonymous auth. In the first run of the app everything seemed to be working fine, but after running it for the second time it throws “Thread 1: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)” at the line with DispatchQueue.main.sync. The code is very simple and based on the example from tutorials, so I wonder if that’s a fault with the MongoDB Realm itself? Here is the code:", "username": "Michael_Macutkiewicz" }, { "code": "", "text": "I’ve noticed another thing. I can run this code fine, but only once on each MacBook.", "username": "Michael_Macutkiewicz" }, { "code": "", "text": "Seems like a bug in the Cocoa SDK - can you file an issue and we’ll have an engineer take a look.", "username": "nirinchev" }, { "code": "", "text": "We’re seeing the exact same error with 10.0.0-beta.6. It was working with beta.4. I filed an issue.## Goals\nRealm should not crash when signing in with Anonymous sign in. The cra…sh is occurring with 10.0.0-beta.6, however I believe it did not crash with beta.4\n\n## Expected Results\nSee above\n\n## Actual Results\nWhen signing in with anonymous sign in, Realm Crashes\n\nThread 1: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)\n\n## Steps for others to Reproduce\nUse anonymous authentication\n\n## Code Sample\n```\nMyRealmApp.login(credentials: Credentials.anonymous()) { user, error in\n DispatchQueue.main.sync { //<------- this is the line that crashes\n guard error == nil else {\n print(\"Login failed: \\(error!)\")\n return\n }\n print(\"anon login success\") // Now logged in, do something with user\n }\n}\n```\n\n## Version of Realm and Tooling\nMongoDB Realm\npod 'RealmSwift', '=10.0.0-beta.6'\nmacOS 10.14 & 10.15\nXCode 11.3", "username": "Jay" } ]
Anonymous login works only once, after re-running the app it crashes
2020-10-02T18:49:04.741Z
Anonymous login works only once, after re-running the app it crashes
2,206
https://www.mongodb.com/…a_2_1024x610.png
[ "o-fish" ]
[ { "code": "realmBaseUrl", "text": "Hi all,This is the first time I’m contributing, and I’m currently looking at Field Officers Edit Profile · Issue #210 · WildAid/o-fish-web · GitHubUnfortunately, I haven’t gotten past the setup.I’m trying to build the o-fish-web using the sandbox, but I keep getting this error.\n\nimage1730×1032 110 KB\nAlso tried building my own instance, but got the same error\n\nimage2178×858 100 KB\nSandbox: https://wildaidsandbox-mxgfy.mongodbstitch.com/\nadmin user: [email protected]\nRealm App ID: wildaidapp-tqiouWhere do I get the realmBaseUrl ?\nAny workarounds?Thanks", "username": "Lenmor_LD" }, { "code": "", "text": "Hi @Lenmor_LD!I’m sorry you’re having struggles. The realmAppIdcomes from src/config.js.Things to check:\nDid you copy src/config.js.tmpl to src/config.js?\nDid you replace the line:\nrealmAppId: ‘’,\nwith\nrealmAppId: ‘wildaidsandbox-mxgfy’, # for the sandbox\nor\nrealmAppId: ‘wildaidapp-tqiou’, # for the build you did\n?", "username": "Sheeri_Cabral" }, { "code": "realmAppIdrealmBaseUrlconfig.js.tmpl\n \n const config = {\n appName: 'WildAid O-FISH',\n realmServiceName: 'mongodb-atlas',\n realmAppId: '',\n database: 'wildaid',\n chartsConfig: {\n baseUrl: \"https://charts.mongodb.com/charts-wildaid-xxxxx\",\n \"boardings\": {\n chartId: \"chart-id\"\n },\n \"boarding-compliance\":{\n chartId: \"chart-id\"\n },\n \"patrol-hours\":{\n \n \n \n \n get database() {\n if (!this._database) {\n throw new Error(\"You are not logged in! Please, login first.\"); //Show correct error, if token expired, or something wrong with Auth.\n }\n return this._database; //For use the database from another services\n }\n \n constructor() {\n //Basic stitch client initializing\n if (\"realmBaseUrl\" in config) {\n //Check if URL is correctly filled\n const stitchConfig = config.realmBaseUrl\n ? new StitchAppClientConfiguration({\n baseUrl: config.realmBaseUrl,\n })\n : new StitchAppClientConfiguration();\n this._localStitchClient = Stitch.initializeDefaultAppClient(\n config.realmAppId,\n stitchConfig\n );\n \n ", "text": "Hi @Sheeri_Cabral,Thanks for the response.I have the realmAppId.\nBut it keeps looking for a realmBaseUrl, which I don’t have in my config.\nIt’s also not in the provided config.js.tmplI tried removing the code that uses it in stitch.service.js,…but then I get a different error\n\nimage1208×768 60 KB\nShould I post this as an issue in case anyone else can repro?Thanks,\nLenny", "username": "Lenmor_LD" }, { "code": "", "text": "Hi Lenny,I was able to reproduce the issue if I checked out the code, copied src/config.js.tmpl to src/config.js, and built, without adding any custom information to the src/config.js file:Screen Shot 2020-10-11 at 2.18.02 PM1330×1168 300 KBWhen I change the realmAppId line to have the sandbox, the build works fine…\nso I changed\nrealmAppId: ‘’,\nto\nrealmAppId: ‘wildaidsandbox-mxgfy’,\nand ran npm start again.Here’s what my config file looks like after the change:\nScreen Shot 2020-10-11 at 2.20.04 PM1218×1168 244 KBHope that helps!", "username": "Sheeri_Cabral" }, { "code": "", "text": "Thanks @Sheeri_Cabral\nThat definitely helps.\nI examined by config copy, and realized that I had the wrong file extension on my config.js, which was definitely an overlook on my part. I’m able to build it now, and ready to work on my issue. 
Thanks a lot for your help!", "username": "Lenmor_LD" }, { "code": "", "text": "I’m so glad you got it working!", "username": "Sheeri_Cabral" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
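For anyone hitting the same setup problem, the working configuration from this thread boils down to a src/config.js (note the .js extension, which was the actual culprit here) with the Realm App ID filled in. The sketch below only repeats the fields shown in the template excerpt earlier in the thread; the chart IDs stay as placeholders, and the export statement at the end is an assumption - mirror whatever src/config.js.tmpl in the repo actually does.

```javascript
// src/config.js (copied from src/config.js.tmpl and edited)
const config = {
  appName: 'WildAid O-FISH',
  realmServiceName: 'mongodb-atlas',
  realmAppId: 'wildaidsandbox-mxgfy', // sandbox app ID from this thread, or your own
  database: 'wildaid',
  chartsConfig: {
    baseUrl: "https://charts.mongodb.com/charts-wildaid-xxxxx",
    "boardings": { chartId: "chart-id" },
  },
};

export default config; // assumed export; check the template for the exact shape
```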
O-fish web error: Cannot use 'in' operator to search for 'realmBaseUrl'
2020-10-10T03:44:18.025Z
O-fish web error: Cannot use 'in' operator to search for 'realmBaseUrl'
4,576
https://www.mongodb.com/…9_2_1024x535.png
[ "atlas-device-sync" ]
[ { "code": "class User: Object {\n @objc dynamic var _id = \"\"\n @objc dynamic var _partition = \"\"\n @objc dynamic var username = \"\"\n @objc dynamic var image = \"\"\n @objc dynamic var email = \"\"\n @objc dynamic var exp:Int = 0\n @objc dynamic var level:Int = 0\n \n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n\ntry! realm.write {\n\n let appUser = User(value: [\n \"_id\" : (user!.id)!,\n \"_partition\" : (user!.id)!,\n \"username\" : \"NoNameMan\",\n \"image\" : \"https://◯◯◯.jpg\",\n \"email\" : \"[email protected]\",\n \"exp\" : Int(0),\n \"level\" : Int(0)\n ]\n )\n\n realm.add(appUser, update: .modified)\n}\n{\n \"_id\":\"aaaaaa11bbbbbb22ccccc33\",\n \"_partition\":\"aaaaaa11bbbbbb22ccccc33\",\n \"email\":\"[email protected]\",\n \"image\":\"https://◯◯◯.jpg\",\n \"username\":\"NoNameMan\"\n}\n@objc dynamic var aaa = \"\"{\n \"_id\":\"aaaaaa11bbbbbb22ccccc33\",\n \"_partition\":\"aaaaaa11bbbbbb22ccccc33\",\n \"aaa\":\"aaaaa\",\n \"email\":\"[email protected]\",\n \"image\":\"https://◯◯◯.jpg\",\n \"username\":\"NoNameMan\"\n}\n", "text": "I have a User model in client ios application.I turned dev mode ON, enabled Sync and open realm from client application.\nHowever, the atlas document automatically inserted was incorrect.\nUser didn’t have “exp” and “level”.・Creating an object in a realm(I used asyncOpen)・Result in Atlsa ClusterI tried to add @objc dynamic var aaa = \"\" to User model, then opened realm and add a User object.\nThen inserted document had “aaa”.・Result in remote Realm\nスクリーンショット 2020-10-08 18.09.332535×1326 392 KB\n\nスクリーンショット 2020-10-08 18.07.121343×1285 106 KB\n・Result in Atlsa ClusterWhy did only datas at Int property fail to synchronize?", "username": "Shi_Miya" }, { "code": "@objc dynamic var sample = \"\"", "text": "There could be a number of explanations. Your objects look fine.It could be a coding issue - for example the partition value could be incorrect or maybe the objects are not populated correctly. But since that code wasn’t included we don’t know for sureIt could be the server and the client are out of sync - for example if the objects were created without those properties and then they were added. Did you try deleting the local files/clearing the simulator?I tried to add @objc dynamic var sample = \"\"Not sure what that correlates to because the first object shown does not have ‘aaa’ and the second does but neither has ‘sample’Can you clarify the question as it’s a bit confusing.", "username": "Jay" }, { "code": "", "text": "Sorry, I made a type.\nI added some code which will clarify my question.\nDid you try deleting the local files/clearing the simulator?I tried clearing local simulator and Atlas cluster. Is this all I should do?\nI don’t know how to delete local files (and caches?)", "username": "Shi_Miya" }, { "code": "class TaskClass: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _partitionKey: TaskProjectId = \"\"\n @objc dynamic var name = \"\"\n}\nclass TaskClass: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _partitionKey: TaskProjectId = \"\"\n @objc dynamic var name = \"\"\n @objc dynamic var exp:Int = 0 //added this property\n}\n", "text": "I attempted to duplicate the issue but it worked correctly me. 
I started with a sync’d realm objectcreated some objects which sync’d correctly.Then I added an Int propertyThen ran the app again and added a couple more objects and everything wrote, read and sync’d correctly.There’s still a bit of ambiguity in the question as the first half shows an object in the Atlas cluster that does not contain “exp” and “level” whereas the screen shots show the object schema does include those properties.Perhaps it’s how you opening your sync’d realm? Or maybe you have multiple clusters or apps?", "username": "Jay" }, { "code": "", "text": "I’m not very familiar with MongoDB Realm, but my understating is that my architecture consists of three parts; local realm, remote realm and atlas.\nhttps___qiita-image-store.s3.ap-northeast-1.amazonaws.com_0_610167_f49f70fc-98bc-4f8d-c15f-ab1e1bee14611754×828 79 KB\nLocal realm’s object schema includes “exp” and “level”. Remote realm’s schema has them, too. However, atlas doesn’t have them.I think screenshots show remote realm successfully synchronized with local realm.", "username": "Shi_Miya" }, { "code": "", "text": "I am sure the Mongo folks will jump in but no, that’s not exactly accurate. If you are in this forum, we assume you are using MongoDB Realm which is the product that’s replacing Realm.At a high level MongoDB Realm stores it’s data in a local container called a Realm and then on the server the data is stored in Atlas. There really isn’t middleman storage as such.If you log into the online MongoDB Realm Console, the Atlas tab is where your data is and the Realm tab just shows the schema of the objects stored in Atlas. This is why you can have multiple applications access the same Atlas data.When we attempted to duplicate your issue by creating objects, storing them and then adding properties, that the existing objects that were stored did not have new properties added to them, only the newly added objects. Perhaps that’s what you’re seeing?", "username": "Jay" }, { "code": "", "text": "@Shi_Miya Are there any error logs in the Logs tab when you attempt to insert the object with the new schema? As Jay said, clearing the local simulator and then terminating sync and re-enabling should reset your state back to 0 and then you should be able to sync with your new schema.", "username": "Ian_Ward" }, { "code": "Error:\n\n\nFailed to convert MongoDB document to configured schema during initial sync\n\nSource:\n\n\nError syncing MongoDB write\n\nLogs:\n\n[\n \"Namespace: PrivateRealm.User\",\n \"Document ID: 5f7ed493b03d9dde55b49d4d\",\n \"Detailed Error: error encoding document during initial sync: document is missing required field(s): [exp level]\"\n", "text": "Thank you for your explanation, Jay and lan. That was very educational.Are there any error logs in the Logs tab when you attempt to insert the object with the new schema?This is error logs.\n]", "username": "Shi_Miya" }, { "code": ".modifiedrealm.addexplevel", "text": "Looking at your code, I believe this is because you’re using .modified in your realm.add call. Since the default value for int properties is 0, creating a user with 0s for exp and level will not set these properties. Since they’re not sent to the server, translating them to Atlas will also omit setting them, which this time results in the fields missing altogether. This is likely a bug in the Realm -> Atlas translation layer that we’ll need to address, but can you just confirm if this is the case? 
Setting the fields to non-zero values will be an easy check.", "username": "nirinchev" }, { "code": ".modifiedAttempting to create an object of type 'User' with an existing primary key value 'jfwieo2t489fw2jfl'\n", "text": "Thank you so much. I removed .modified and succeeded in synchronization.\nThen I modified my code in order not to cause a below error.", "username": "Shi_Miya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I cannot synchronize data between MongoDB Atlas cluster and client application
2020-10-07T14:40:16.985Z
I cannot synchronize data between MongoDB Atlas cluster and client application
4,774
null
[ "aggregation" ]
[ { "code": "[\n {\n \"_id\": \"alice\",\n \"type\": \"staff\"\n },\n {\n \"_id\": \"john\",\n \"type\": \"staff\"\n },\n {\n \"id\": \"janice\",\n \"type\": \"patient\",\n \"careTeam\": [\n {\n \"_id\": \"alice\",\n \"primary\": true\n }\n ]\n },\n {\n \"id\": \"michael\",\n \"type\": \"patient\",\n \"careTeam\": [\n {\n \"_id\": \"john\",\n \"primary\": true\n },\n {\n \"_id\": \"alice\"\n }\n ]\n }\n]\n{\n $lookup: {\n from: \"Members\",\n localField: \"_id\",\n foreignField: \"careTeam._id\",\n as: \"caseLoad\"\n }\n}\n{\n $lookup: {\n from: \"Members\",\n let: { primary: \"_id\" },\n pipeline: [{$unwind:\"$careTeam\"}, \n {\n $match: {\n $expr: {\n $and: [{ $eq: [\"$careTeam._id\",\"$$primary\"] }],\n },\n },\n }],\n }\n}\n{\n $lookup: {\n from: \"Members\",\n let: { primary: \"_id\" },\n pipeline: [{$unwind:\"$careTeam\"}, \n {\n $match: {\n $expr: {\n $and: [{ $eq: [\"$careTeam._id\",\"$$primary\"] }],\n },\n },\n }],\n }\n}\n", "text": "Hi,I have a collection Members and I am trying to make a join with the same collection to find all the patients of a staff member.I have the following members:The issue is I need to add two conditions but I can’t. I need careTeam._id = $$mainId and careTeam.primary=trueI only want to receive the primary caseload of the staff not the whole caseloadIf I do a lookup like this I receive both janice and michael but I only need janiceI triedbut nothing I get empty caseLoad, I also tried with $unwind but still nothing.I know I still have to add { $eq: [\"$careTeam.primary\", true] } inside the mathc expr but first I need to receive back something in the caseLoad.Can anyone help me out in what am I doing wrong?Thank you for your help in advance,\nBogi", "username": "Bogi_Niczuly" }, { "code": "[\n {\n \"_id\": \"alice\",\n \"type\": \"staff\"\n },\n {\n \"_id\": \"john\",\n \"type\": \"staff\"\n },\n {\n \"_id\": \"janice\",\n \"type\": \"patient\",\n \"careTeam\": [\n {\n \"_id\": \"alice\",\n \"primary\": true\n },\n {\n \"_id\": \"john\"\n }\n ]\n },\n {\n \"_id\": \"george\",\n \"type\": \"patient\",\n \"careTeam\": [\n {\n \"_id\": \"alice\",\n \"primary\": true\n },\n {\n \"_id\": \"john\"\n }\n ]\n },\n {\n \"_id\": \"michael\",\n \"type\": \"patient\",\n \"careTeam\": [\n {\n \"_id\": \"john\",\n \"primary\": true\n },\n {\n \"_id\": \"alice\"\n }\n ]\n }\n]\n{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n \"$type\",\n \"staff\"\n ]\n }\n }\n },\n {\n \"$lookup\": {\n \"from\": \"testcoll\",\n \"let\": {\n \"staffid\": \"$_id\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n \"$type\",\n \"patient\"\n ]\n }\n }\n },\n {\n \"$addFields\": {\n \"primaryCareTeam\": {\n \"$reduce\": {\n \"input\": \"$careTeam\",\n \"initialValue\": [],\n \"in\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$this.primary\",\n true\n ]\n },\n {\n \"$concatArrays\": [\n \"$$value\",\n [\n \"$$this._id\"\n ]\n ]\n },\n \"$$value\"\n ]\n }\n }\n }\n }\n },\n {\n \"$match\": {\n \"$expr\": {\n \"$in\": [\n \"$$staffid\",\n \"$primaryCareTeam\"\n ]\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 1\n }\n }\n ],\n \"as\": \"primaryPatients\"\n }\n }\n ],\n \"maxTimeMS\": 1200000,\n \"cursor\": {}\n}\n{\n \"_id\": \"alice\",\n \"type\": \"staff\",\n \"primaryPatients\": [\n {\n \"_id\": \"janice\"\n },\n {\n \"_id\": \"george\"\n }\n ]\n },\n {\n \"_id\": \"john\",\n \"type\": \"staff\",\n \"primaryPatients\": [\n {\n \"_id\": \"michael\"\n }\n ]\n }\n", "text": "Hello : )If you need for all staff,their primary 
patients,look up can do it.\nIf you need for a specific staff,it could be much more simple without look up.\nMaybe there is a better way in general but this does what you want i think.Mongodb has also $graphLookup but here is only 1 level the relation,so lookup works also.\nIf you have 1 relation(like reports to) ,and many connections look that also.Data in (added george also)Query\nLeft aggregation = keep the staff only\nRight aggregation = keep the patients,from the careTeam,keep only the names of the primary care takers(reduce),join only is the staff id is on the array of the primary caretakers,and keep\nonly the id of that patient.Result", "username": "Takis" }, { "code": " {\n $lookup: {\n from: \"Members\",\n let: { primary: \"_id\" },\n pipeline: [ \n {\n $match: {\n $expr: {\n $and: [{ $eq: [\"$careTeam._id\",\"$$primary\"] }],\n },\n },\n }],\n }\n }\n``\n\nIs that because the careTeam is an array? But the localField, foreignField solution is working.\n\nBogi", "text": "Hi Takis,Thank you for your answer I will try it out, although I think it still should be a more optimal solution for this, something with matching on the first pipeline state.Do you know why this version is not working?", "username": "Bogi_Niczuly" }, { "code": "", "text": "HelloYes $careTeam._id is an array,of all the _ids in careTeam array.\nThe above code assumes that maybe they are many primary careTakers.\nThats why i did reduce (keep only the primary) and then $in.\nIf there is only one primary careTake you can do filter ,then get first element and $eq.But if each patient have only 1 primary,i guess one extra field,primary_caretaker would be better,and all the secondaries in the array.\nAnd then you can use the $eq without filter or reduce needed.", "username": "Takis" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
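A note on why the original attempt returns an empty caseLoad: in the $lookup, let: { primary: "_id" } binds the literal string "_id", not the field value; referencing the outer document's _id requires "$_id". Also, once inside the sub-pipeline, "$careTeam._id" resolves to an array of all ids in careTeam, so the comparison has to be array-aware. A minimal sketch along those lines, reusing the collection and field names from this thread (untested against the real data, so treat it as a starting point rather than a drop-in fix):

```js
db.Members.aggregate([
  { $match: { type: "staff" } },
  {
    $lookup: {
      from: "Members",
      let: { staffId: "$_id" },            // "$_id", not "_id"
      pipeline: [
        {
          $match: {
            $expr: {
              // Keep patients with a careTeam entry where
              // _id == staffId AND primary == true.
              $gt: [
                {
                  $size: {
                    $filter: {
                      input: { $ifNull: ["$careTeam", []] },
                      cond: {
                        $and: [
                          { $eq: ["$$this._id", "$$staffId"] },
                          { $eq: ["$$this.primary", true] }
                        ]
                      }
                    }
                  }
                },
                0
              ]
            }
          }
        },
        { $project: { _id: 1 } }
      ],
      as: "primaryCaseLoad"
    }
  }
]);
```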
Changing foreignField with pipeline not working
2020-10-09T10:13:10.692Z
Changing foreignField with pipeline not working
7,119
null
[]
[ { "code": "", "text": "I’m running the following from the IDE terminalmongo “mongodb+srv://sandbox.bdun6.mongodb.net/sample_airbnb” --username m001-studentGetting the following error3 total, 0 passed, 0 skipped:\n[FAIL] “Successfully connected to the Atlas Cluster”Did you use the right command with the Atlas provided connection string?[FAIL] “The cluster name is Sandbox”Did you name your Atlas cluster Sandbox?[FAIL] “The username is m001-student”Did you create a username m001-student?I checked the cluster name is sandbox . I’ve network access set . I’ve the user created .\nIs my db name correct ? Where do i get the dbname from ?anything wrong with my above connection", "username": "Dinesh_Boara" }, { "code": "", "text": "Please show screenshot where you connected to your cluster", "username": "Ramachandra_Tummala" }, { "code": "", "text": "image1920×1080 293 KB", "username": "Dinesh_Boara" }, { "code": "", "text": "It’s probably case sensitive naming. So you probably have “sandbox” but it needs to be “Sandbox” with a capital S.", "username": "Brandon_Barclay" }, { "code": "editor section", "text": "HI @Dinesh_Boara,You have entered the connection string in the wrong section i.e. editor section.Please take a look at this thread for the resolution of the issue.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "actually i entered in the terminal . The editor section i just pasted what i had copied .\nWhats the db name supposed to be ? I put sample_airbnb but thats a collection not dbname . Any idea", "username": "Dinesh_Boara" }, { "code": "", "text": "Brandon , i had tried both small s and capital S . Same issue .", "username": "Dinesh_Boara" }, { "code": "", "text": "Can someone tell me whats the dbname for this ? Could not find it anywhere . Is it test ?", "username": "Dinesh_Boara" }, { "code": "", "text": "cluster_error885×636 22.1 KBI also can’t seem to connect.", "username": "akhil_guntur" }, { "code": "", "text": "Yes you can use test\nOnce connected you can switch the DBAre you still facing issues?\nI can connect to your clusterbtw:sample_airbnb is a db not collection", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Please check status of your Sandbox cluster in Atlas\nDo you see any errors?\nHave you whitelisted your ip?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hey @Ramachandra_37567\nI am having the same issue. I tried using test and sample_airbnb as DB yet I was unable to connect my Atlas Cluster. Could you help me figure it out please?\nThanks", "username": "Harshita_Kaur" }, { "code": "", "text": "Please post a screenshot of what your are doing that shows the error you are getting.And please do it in a new thread as it simplify thing when one thread is one problem from one person.", "username": "steevej" }, { "code": "", "text": "I’m able to connect now , not sure what changed . Was not earlier", "username": "Dinesh_Boara" }, { "code": "", "text": "Yeah. The status of Sandbox is up. I whitelisted my ip too.I am using the following command:\nmongo “mongodb+srv://sandbox.brdnv.mongodb.net/test” --username m001-student", "username": "akhil_guntur" }, { "code": "", "text": "I can connect to your cluster\nsample_weatherdata 0.002GB\nMongoDB Enterprise atlas-10ghqx-shard-0:PRIMARY> exit\nbyeCould be quotes issue?\nUse straight quotes or try without quotes", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Dinesh_Boara,Brandon , i had tried both small s and capital S . 
Same issue .We have fixed the issue related to the name of the cluster.", "username": "Shubham_Ranjan" }, { "code": "mongo \"mongodb+srv://sandbox.brdnv.mongodb.net/test\" --username m001-student -p m001-mongodb-basics", "text": "mongo “mongodb+srv://sandbox.brdnv.mongodb.net/test” --username m001-studentHi @akhil_guntur,Just like @Ramachandra_37567, I’m also able to connect to you cluster using this connection string.mongo \"mongodb+srv://sandbox.brdnv.mongodb.net/test\" --username m001-student -p m001-mongodb-basicsScreenshot 2020-10-13 at 4.10.27 PM1388×578 63.1 KBPlease try this connection string and let us know if you are still getting any error.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi @Harshita_Kaur,I am having the same issue. I tried using test and sample_airbnb as DB yet I was unable to connect my Atlas Cluster. Could you help me figure it out please?\nThanksPlease share a screenshot of the error message that you are getting and the connection string that you are using.", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Can't connect to atlas cluster from IDE
2020-10-09T02:51:02.504Z
Can’t connect to atlas cluster from IDE
3,198
null
[ "aggregation" ]
[ { "code": " {\n _id: 123,\n asset: [\n {\n id: \"asset1\",\n data: \"file data\"\n },\n {\n id: \"asset2\",\n data: \"file data 2\"\n }\n ],\n tree: [\n {\n id: \"a\",\n path: \"home\",\n parent: null,\n asset: \"asset1\"\n },\n {\n id: \"b\",\n path: \"test\",\n parent: \"a\",\n asset: \"asset2\"\n {\n ]\n }\n", "text": "Hello, I would like some help figuring out how I should structure an aggregate query.I have a document structured like this:I would like to be able to provide a string such as “home/test” and get back the associated asset subdocument.I believe I would have to split “home/test” using “/” and follow that array through the document.tree recursively (can I use $graphLookup?). I need to verify that that document.tree.parent always matches incase of path name duplicates at different levels of the tree. Then I would take the last item in the path (“test” in this example) and return the associates asset subdocument by id.I’m new to aggregate queries and would like some direction. Your help is greatly appreciated.Thank you!", "username": "Aaron_Osborn" }, { "code": "$match : { \"tree.path\" : {$in : [ \"home\", \"test\" ] }}{tree.path : 1, tree.id : 1}$unwind", "text": "Hi @Aaron_Osborn,Welcome to MongoDB community!Not sure why you can’t split the input on application side and provide MongoDB a $match : { \"tree.path\" : {$in : [ \"home\", \"test\" ] }}Of course you should optimse by indexing {tree.path : 1, tree.id : 1}Later stages could be $unwind and perform a graph lookup based on id or use $sort and $group functions to construct the tree.Having said that, it might be beneficial to store a hierarchical structure in a document nested by the hierarchy or store pointers in the correct\nhierarchical order and not traverse the documents.It might be challenging as the graphlookup is based on main document and not subdocuments. Also you need to provide a from collection name and you cannot use a previous stage as this input.If you need further guidance I will try to write na sample aggregation for you.Best\nPavel", "username": "Pavel_Duchovny" } ]
Need Help Structuring Aggregate Query
2020-10-11T22:11:29.484Z
Need Help Structuring Aggregate Query
1,727
null
[ "compass", "connecting", "mongodb-shell" ]
[ { "code": "", "text": "I am currently using the latest mongodb shell and compass .I tried to connect to atlas with the latest string but it fails and only works with the deprecated edition.\nNew string(4.4) - mongo “mongodb+srv://sandbox..mongodb.net/\"\nOld String(3.4 or earlier) - mongo \"mongodb://sandbox-shard-00-00..mongodb.net:27017,sandbox-shard-00-01..mongodb.net:27017,sandbox-shard-00-02.*.mongodb.net:27017/?replicaSet=atlas-8mfuzi-shard-0” --ssl --authenticationDatabase admin\nPlease help me why is it not connecting with the new string. I have enabled public acess and whitelisted ip as well.", "username": "smiraldr" }, { "code": "", "text": "What error are you getting with SRV string?\nPlease show screenshot\nDoes it work from shell?\nAre you using the correct string which you got from your Atlas", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Capture1111343×179 6.71 KB\nIt doesn’t work from shell or compass. I am using the deprecated one to connect as the new srv throws the error above even after I have already whitelisted all IPs.", "username": "smiraldr" }, { "code": "", "text": "Capture2221355×187 9.69 KB\nHere I use the deprecated one", "username": "smiraldr" }, { "code": "", "text": " Compass timesout with the following output", "username": "smiraldr" }, { "code": "", "text": "All these SRV Strings are generated by mongo atlas website by using the connect button and selecting the versions of client. I am using the latest versions of shell and compass yet unable to connect with latest srv strings. Please give me an idea about the error so I may fix it.", "username": "smiraldr" }, { "code": "", "text": "It may be firewall issue\nDid you try from alternate location or different internet like your mobile hotspot\nMake sure no invalid characters while pasting the command", "username": "Ramachandra_Tummala" }, { "code": "", "text": "All these SRV Strings are generated by mongo atlas website by using the connect button and selecting the versions of client.Looking at your first post the connect string does not appear to be of proper SRV format\nWhen you click new connection in Compass it is populated with an example connect string\nYour connect string should be something similar to that\nTry this:\nmongodb+srv://m001-student:[email protected]/testAlso from your connect string it appears you are using mongodb university Sandbox cluster\nTry to post in University forum.You can get more help there\nhttps://www.mongodb.com/community/forums/", "username": "Ramachandra_Tummala" } ]
Atlas cluster connection string New and Deprecated Issue
2020-10-08T20:57:19.214Z
Atlas cluster connection string New and Deprecated Issue
2,001
null
[ "java", "transactions" ]
[ { "code": ".flatMap(someRepository.commitTransaction(sessionReference)) // commit\n.doOnError(throwable -> sessionReference.get().abortTransaction()) // rollback\n.doFinally(() -> sessionReference.get().close()) \n", "text": "I have posted this question on stackoverflow, however haven’t received any good answers so far: java - generic transaction/error handling with rxjava (mongodb) - Stack OverflowBasically, I find myself pretty much handling all transaction in the same way:as copying all that code every time isn’t such a nice idea, I was looking for a generic way to handle this.Additionally a user suggested that I should be using the api described here: https://docs.mongodb.com/manual/core/transactions/ as it would provide additional retry logic, handle write conflicts etc.I haven’t found anything in the docs, that there would be such a difference between those apis. Additionally there are no async/reactive examples, and I couldn’t find this api in the reactive driver. Could someone please clarify, if these claims are true? If so, what does one need to implement to have the reactive driver deal with those situations in a similar way?", "username": "st-h" }, { "code": "", "text": "bump.Is there any official information available on details about transaction, retry behaviour and the reactive driver?", "username": "st-h" }, { "code": "", "text": "bump again. Is there anyone on this forum with more in-depth knowledge about transactions and the reactive driver? Is there probably a better place to ask these type of questions?", "username": "st-h" }, { "code": "doOnError()await ObservableSubscriber<ClientSession> sessionSubscriber = new OperationSubscriber<>();\n mongoClient.startSession().subscribe(sessionSubscriber);\n sessionSubscriber.await(5, SECONDS);\n \n try (ClientSession clientSession = sessionSubscriber.getReceived().get(0)) {\n clientSession.startTransaction(TransactionOptions.builder()\n .writeConcern(WriteConcern.MAJORITY).build());\n\n // operations .. \n\n ObservableSubscriber<Void> commitSubscriber = new OperationSubscriber<>();\n clientSession.commitTransaction().subscribe(commitSubscriber);\n commitSubscriber.await();\n}\nwithTransactions()", "text": "Hi @st-h,as copying all that code every time isn’t such a nice idea, I was looking for a generic way to handle this.Would you be able to abstract that away in a convenient wrapper or method? Having said that, sometimes you do have to handle them differently, i.e. doOnError() you may not want to only abort.Depending on your use case and code structure, maybe you could use await to synchronise the transaction part for a different code structure ? for example:Additionally there are no async/reactive examples, and I couldn’t find this api in the reactive driver. Could someone please clarify, if these claims are true?Unfortunately the convenient withTransactions() API is available on the synchronous version currently (v4.1). There is an open ticket to track this work JAVA-3539, feel free to upvote/watch the ticket to receive notifications on progress.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to efficiently handle transactions with the reactive java driver / clarifications on api
2020-07-24T19:03:22.604Z
How to efficiently handle transactions with the reactive java driver / clarifications on api
5,687
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "My application has a flow in which a user does not need an account, but can link an account in the future to sync between devices. The way I am going about this right now is logging in the user to a synced realm with anonymous credentials and then linking google/apple credentials down the line when they authenticate. From the documentation, it seems as though anonymous user data may be deleted, which is a problem since there is a case for my app where users remain anonymous forever without authenticating. Is there a better way to handle this flow? Perhaps starting the user on a non-synced realm and then manually migrating to a synced realm once they authenticate? Or sending an arbitrary JWT as a workaround?", "username": "Peter_Stakoun" }, { "code": "", "text": "Anonymous user data is deleted after some period of inactivity (I believe it’s 30 days). If your app needs to support a scenario where the user will be offline for more than 30 days, then perhaps using a JWT would be the best approach. Also feel free to file a feature request to expose controls to configure the retention period for anonymous users - I can see that being useful for almost everyone who would want to use sync + anonymous auth.", "username": "nirinchev" } ]
Persisting anonymous users
2020-10-11T22:11:45.546Z
Persisting anonymous users
2,248
https://www.mongodb.com/…8b723acdc14.jpeg
[ "java", "atlas-functions" ]
[ { "code": "", "text": "I initiate Realm and App globally through the Application class:\n\nrealm761×450 31.2 KB\nThen I check if a user is logged in.\nIf there is no user logged in the application sends you to the login screen so you can log in. After logging in, Mongodb Realm functions are called but after a while, when calling a function, it throws the Invalid Session exception.\n\nimagen862×254 14.1 KB\nTo be able to call a function again I have to uninstall the application and log in again.At the moment I’m not using a synchronized Realm, I just interact with the functions through the App.\nWhy is this exception thrown? Doesn’t the SDK update the user’s session automatically? Does it have something to do with the fact that he doesn’t use a synchronised Realm?", "username": "Juan_Conejo" }, { "code": "", "text": "Hey, thanks for the detailed report. I too encountered this and reported it here: https://github.com/realm/realm-object-store/issues/1119. We’re working on a fix (see associated PR) and hope to have it released very soon.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Invalid Session Exception when time passes after logging in
2020-10-11T22:11:20.756Z
Invalid Session Exception when time passes after logging in
3,273
null
[ "database-tools" ]
[ { "code": "", "text": "I am using mongorestore, and having this setup:there shards (1 primary and 2 secondary each)Inserting data from mongos, using mongorestore, steps:I am getting new Primary and it is up for shard 3, failover worked well, but mongorestore unable to detect new primary.Please ecplain the reson or we can do anything for this situation?", "username": "Aayushi_Mangal" }, { "code": "", "text": "Hi @Aayushi_Mangal,Which version of MongoDB are you using? I found an internal ticket that says this might have been resolved in the latest versions of MongoDB and mongorestore.Can you please confirm you are still experiencing the same issue withIf that’s not the case, please just make sure that your cluster topology stays the same during your restore operations.Also, please note that mongodump/mongorestore cannot be part of a backup strategy for 4.2+ sharded clusters that have sharded transactions in progress. More details here.I hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hello @MaBeuLux88,Thank you for explanation. I am using Mongo 4.2 version and same mongorestore version.\nI am not very much sure how to use mongorestore 100.1.1.We have one job that is using mongodump and restore, else we are not using this for backup.", "username": "Aayushi_Mangal" }, { "code": "", "text": "Upgrading to the versions I mentioned would probably solve the issue from what I saw. Let me know if you can confirm this.MongoDB Tools are now released independently from the main MongoDB releases. Their live cycles are now independent.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Mongorestore is not able to handle failover in replica set
2020-10-12T13:44:21.471Z
Mongorestore is not able to handle failover in replica set
2,415
https://www.mongodb.com/…4_2_1024x687.png
[ "compass", "connecting" ]
[ { "code": "", "text": "I have installed mongodb-org-4.4/7 in a VPS (CentOS Linux 7 Core ) and everything works fine so far.\nI am trying to connect to Mongodb in VPS with Compass via SSH .\nI have already whitelist my IP in VPS firewall by running csf -a MY_IP_ADDRESS .\nAlthough, I am not sure what settings I have to fill in Compass .\nI am usingHostname : localhost\nPort : 27017\nSRV Record : Off\nAuthentication : NoneRead Preference : Primary\nSSL : None\nSSH Tunnel : Use Password\nSSH Hostname : vm582.tmdcloud.eu // I am not sure if it needs server’s IP (107.6.182.130)\nSSH Tunnel Port : 16969 // My VPS doesn’t use 22 as default\nSSH Username : root\nSSH Password : my_ssh_passwordWith the above setting I get the following errors at attached images .Στιγμιότυπο 2020-10-03, 19.46.211988×1334 212 KB Στιγμιότυπο 2020-10-03, 20.01.451988×1196 221 KBThank you in advance .", "username": "Evangelos_Stamos" }, { "code": "", "text": "Many ssh setups disallow login by root expecting users with sudo authorization to log in under their own userid and go sudo. Maybe you are in such a situation.", "username": "Jack_Woehr" } ]
Connect MongoDB Compass to VPS
2020-10-03T21:52:04.775Z
Connect MongoDB Compass to VPS
5,618
null
[ "realm-web" ]
[ { "code": "const mongo = _app.services.mongodb(\"mongodb-atlas\");\nconst mongoCollection = mongo.db(\"**project**\").collection(\"**collection**\").watch();\n\nfor await (const event of mongoCollection) {\n console.log(\"There is an event!\", event)\n}\n", "text": "Hello!I’m trying to use the watch() method on the web SDK but it seems that it’s not documented anywhere yet. I made it working using this syntax but I still need to understand how to “close” the stream. Is there a standard way of closing these async iterators or it’s just not implemented yet?It seems that @kraenhansen is the one working on it. Thank you.", "username": "Aurelio_Petrone" }, { "code": "watchwatchreturn()", "text": "I totally agree, we’re still missing a good example on using this.Calling watch on a collection returns an AsyncIterable (an object with the Symbol.asyncIterator property). More specifically watch is an async generator function, which means that the iterable it returns is also a Generator: Calling the return() method on this will break the loop and close the underlying connection.If you have opinions about this API, I encourage you to voice it on Alternative MongoDB Watch API · Issue #3259 · realm/realm-js · GitHub and https://twitter.com/kraenhansen/status/1313136226246053890.", "username": "kraenhansen" }, { "code": "class DB {\n \n static stream = async (myEventEmitter:any, mongoCollectionWatch:any, mongoCollectionFind:any) => {\n\n \n \n for await (const event of mongoCollectionWatch) {\n myEventEmitter.emit('testEvent', {\n event : event,\n data: await mongoCollectionFind.find()\n }); // Was fired: hi\n }\n\n}\n\nstatic watchLikes = async () => {\n\n const mongo = J.Auth._app.services.mongodb(\"mongodb-atlas\");\n const mongoCollectionFind = mongo.db(\"Project_1\").collection(\"likes\");\n const mongoCollectionWatch = mongo.db(\"Project_1\").collection(\"likes\").watch();\n const myEventEmitter = new MyEventEmitter();\n\n Db.stream(myEventEmitter, mongoCollectionWatch, mongoCollectionFind)\n\n return { on : async (callback:any)=>{ \n var on = myEventEmitter.on('testEvent', callback) \n myEventEmitter.emit('testEvent', {\n event : \"FIRST_CALL\",\n data: await mongoCollectionFind.find()\n });\n return on\n }, removeListener : (callback:any) => { \n mongoCollectionWatch.return() \n myEventEmitter.removeListener(\"testEvent\", callback ) \n } }\n\n}\n\n ...\n}\nclass MyEventEmitter {\n\n _events:any\n\n constructor() {\n\n this._events = {};\n\n }\n\n \n\n on(name:string, listener:any) {\n\n if (!this._events[name]) {\n\n this._events[name] = [];\n\n }\n\n \n\n this._events[name].push(listener);\n\n }\n\n \n\n removeListener(name:string, listenerToRemove:any) {\n\n if (!this._events[name]) {\n\n throw new Error(`Can't remove a listener. Event \"${name}\" doesn't exits.`);\n\n }\n\n \n\n const filterListeners = (listener:any) => listener !== listenerToRemove;\n\n \n\n this._events[name] = this._events[name].filter(filterListeners);\n\n }\n\n \n\n emit(name:string, data:any) {\n\n if (!this._events[name]) {\n\n throw new Error(`Can't emit an event. 
Event \"${name}\" doesn't exits.`);\n\n }\n\n \n\n const fireCallbacks = (callback:any) => {\n\n callback(data);\n\n };\n\n \n\n this._events[name].forEach(fireCallbacks);\n\n }\n\n }\n<button onClick={async ()=>{\n\n likes = await J.Db.watchLikes()\n\n likes.handle = (data:any)=>{\n\n console.log(\"Event\", data)\n\n }\n\n likes.on(likes.handle)\n}}>\n\n Watch likes\n\n</button>\n\n<button onClick={async ()=>{\n\n try {\n\n likes.removeListener(likes.handle)\n\n } catch (error) {\n\n console.log(\"Errore\", error)\n\n}\n\n}}>\n\n Unwatch likes\n", "text": "@kraenhansen I read that post. In my opinion, it should be more similar to the realm react-native implementation (so just call subscribe( callback ) ) but if it works it works.UPDATEI’m using something like that right nowEvent emitter (found here How to Create Your Own Event Emitter in JavaScript | by Oleh Zaporozhets | Better Programming)This way I can easily do this:I’m sure this could be written in a simpler way but in my current stage it’s “ok”. Of course it could be nice to have this “event emitter” implementation in the SDK", "username": "Aurelio_Petrone" } ]
How to use watch() on web SDK?
2020-10-12T12:00:59.754Z
How to use watch() on web SDK?
6,090
null
[ "golang" ]
[ { "code": "// SetBackground sets value for the Background field.\n//\n// Deprecated: This option has been deprecated in MongoDB version 4.2.\n", "text": "Hello\nIs it possible to follow Semantic Versioning because with version v1.4.2, SetBackground has been deprecated. It was not with v1.4.1. And that doesn’t compliance with Semantic Versioning.", "username": "Jerome_LAFORGE" }, { "code": "", "text": "Hi Jerome,Deprecation doesn’t represent a breaking change and therefore it aligns with the semantic versioning constraints. It’s a warning to users that this feature may be replaced or removed in a future major release.", "username": "Joe_Drumgoole" }, { "code": "SA1019: options.Index().SetBackground is deprecated: This option has been deprecated in MongoDB version 4.2. (staticcheck)\n", "text": "Hello Joe,\nThanks for your reply. But this kind of modification forces us to modify our code because linter (e.g. golangci-lint) breaks our CI. We don’t expect this kind behavior just on patch revision (v1.4.1 -> v1.4.2).Best,", "username": "Jerome_LAFORGE" }, { "code": "", "text": "Hello Joe,\nBy the way, please find the way that is compliance with Semantic Versioning to deprecate an API:Semantic Versioning spec and websiteBest,\nJérôme", "username": "Jerome_LAFORGE" }, { "code": "", "text": "That makes sense. I will flag back to the engineering team. Good catch.", "username": "Joe_Drumgoole" }, { "code": "", "text": "See https://jira.mongodb.org/browse/CSHARP-3221", "username": "Joe_Drumgoole" }, { "code": "IndexOptions.BackgroundstaticcheckstaticcheckSetBackground", "text": "Hi @Jerome_LAFORGE,This is related to the change in GODRIVER-1736. In driver version 1.4.1, the IndexOptions.Background struct field was marked as deprecated, but the deprecation notice did not follow the correct format required by linters such as staticcheck. As part of GODRIVER-1736, we fixed all of those notices. However, staticcheck also does not currently detect use of deprecated struct fields due to staticcheck: missed deprecated references within struct initializer · Issue #607 · dominikh/go-tools · GitHub, so as part of GODRIVER-1736, we also deprecated the corresponding setter functions (e.g. SetBackground). Apologies if this had any unintended consequences for your application. We will do our best to make sure our deprecation notices comply with the correct format when they are first released going forward so we do not have to make these kinds of changes.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "Hi @Divjot_Arora,\nThanks for your explanation and for your great driver.\nThanks for your work.\nBest\nJérôme", "username": "Jerome_LAFORGE" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Semantic Versioning v1.4.1 & v1.4.2
2020-10-08T09:34:14.211Z
Semantic Versioning v1.4.1 &amp; v1.4.2
2,906
null
[ "realm-web" ]
[ { "code": "", "text": "Hi again,I’ve noticed that realmapp.services is undefined (it’s missing from the App constructor) if u install the package via npm (1.0.0-rc.1).It works if you use this script https://unpkg.com/[email protected]/dist/bundle.iife.js cause services are present here.Is it intended? This way you can’t access MongoDB service if you install via npm install.", "username": "Aurelio_Petrone" }, { "code": "Userconst mongo = user.mongoClient(\"<atlas service name>\");\nconst db = mongo.db(\"<my-database-name>\");\nconst collection = db.collection(\"<my-collection-name>\");\n", "text": "This is intended. There’s a breaking change in Realm Web 1.0.0-rc.1 altering the API to align with the Realm JS API where services and functions are available from the User instance:See Class: User", "username": "kraenhansen" }, { "code": "", "text": "My bad, I’m sorry ", "username": "Aurelio_Petrone" }, { "code": "", "text": "No worries As Realm Web approaches 1.0.0 I don’t expect breaking changes like this in the near future.", "username": "kraenhansen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Services are not included in realm-web code if it's installed via npm
2020-10-12T14:00:02.630Z
Services are not included in realm-web code if it’s installed via npm
2,391
null
[ "graphql" ]
[ { "code": "", "text": "Hello guys,as the title says, since I tried to use those with no success, I was just wondering that maybe they are just not supported yet. Is it that?", "username": "Aurelio_Petrone" }, { "code": "watch()", "text": "We do not at the moment - however we’re actively tracking our feedback board and you can submit your request here:Our current suggested workaround is to use watch() which your’re already aware of! How to use watch() on web SDK? - #3 by Aurelio_Petrone", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks, I’ve just voted", "username": "Aurelio_Petrone" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does MongoDB Realm GRAPHQL supports subscriptions?
2020-10-12T12:03:13.465Z
Does MongoDB Realm GRAPHQL supports subscriptions?
4,822
null
[ "kafka-connector" ]
[ { "code": "", "text": "Hi everyone,\nwe have just deployed our first Kafka Source connecter which is configured to listen to all database change events.\nI have noticed that at least 50% of changes are coming from one specific collection which I would not want to listen to. Is there a way to specifically configure the connector to listen to all database, excluding this one collection?", "username": "Miks_Lusitis" }, { "code": "", "text": "You can define a regex expression within the pipeline parameter to fine tune the collections you’d like to include/exclude:To exclude, use a negative lookhead in your regex, something like\npipeline=[{“$match”: {“ns.coll”: {“$regex”: ^((?!name_of_collection_to_skip).*)$ }}}]", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Set source connector NOT to listen to some specific collection
2020-10-09T21:10:16.792Z
Set source connector NOT to listen to some specific collection
2,459
https://www.mongodb.com/…80ebb492f4a8.png
[ "atlas-search" ]
[ { "code": "", "text": "I tried using the near operator to find company near certain coordinates however I am not receiving any documents.\nThe query i am using:\nScreen Shot 2020-07-19 at 1.40.41 PM766×956 61.9 KBI created the index also. Like that:{\n“mappings”: {\n“dynamic”: false,\n“fields”: {\n“about”: {\n“analyzer”: “lucene.standard”,\n“type”: “string”\n},\n“address”: {\n“fields”: {\n“location”: {\n“indexShapes”: true,\n“type”: “geo”\n}\n},\n“type”: “document”\n},\n“keywords”: {\n“analyzer”: “lucene.standard”,\n“type”: “string”\n},\n“name”: {\n“analyzer”: “lucene.standard”,\n“type”: “string”\n}\n}\n}\n}I also created this:Screen Shot 2020-07-19 at 1.45.00 PM1798×412 48.9 KBAlso my schema:The $geoNear is working in the aggregation and i am getting the correct documents, however as you can see in the aggregation query, i need to do a search on other fields also.Because $geoNear needs to be the first stage in the aggregation pipeline i cannot do a $search with fuzzy on other fields like ‘name’, because the $search also needs to be the first stage in the aggregation pipeline. SO i cannot use $geoNear and $search in the same aggregation pipeline.This is where the near operator comes in. However even though it is the correct syntax i am not getting any documents. The above aggregation pipeline uses the near operator.What am I missing here?", "username": "Abram_Thau" }, { "code": "", "text": "Hi @Abram_Thau ,Looking into a similar example in our documentation I see that there is another level specifying “should” or “must” on the geo search part but is absent in your example:Learn how to search near a numeric, date, or GeoJSON point value.Can you try to specify this level?Additionally, can you clarify how many search indexes you have?Where do you run this query? Compass or data explorer?Have you tried a mongo shell to run it?Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "termnearindex: 'CustomIndex'", "text": "Hi @Abram_Thau,It would be helpful if you try to run similar query, with one of term and near operators and without the other. This way you’ll find out which one of those matches documents and which one does not.Also note that the index: 'CustomIndex' must match the name of your defined index. If it doesn’t you might not get any results.", "username": "Oren_Ovadia" }, { "code": "", "text": "@Abram_Thau Did the responses from my colleagues resolve this issue for you?", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use the "near operator" in Atlas Search
2020-07-19T21:20:28.069Z
How to use the &ldquo;near operator&rdquo; in Atlas Search
3,603
null
[ "production", "golang" ]
[ { "code": "staticcheckgo-yaml", "text": "The MongoDB Go Driver Team is pleased to announce the release of version 1.4.2 of the MongoDB Go Driver.This release contains several bug fixes, a fix to our existing deprecation notices to ensure they follow the format required by linters such as staticcheck , and an upgrade to our go-yaml dependency so the driver is not susceptible to CVE-2019-11254. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.4.2 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "Divjot_Arora" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.4.2 Released
2020-10-07T13:56:11.000Z
MongoDB Go Driver 1.4.2 Released
1,577
null
[]
[ { "code": "", "text": "I have an application built using Realm fully on-device. I am now working on switching to Realm Sync, but haven’t been able to find a good way to preserve the on-device data. From what I understand there are no migrations for Realm Sync, which makes it difficult to do something like populate an owner_id field for use as a partition value. Can anyone recommend a good way to automatically populate all the data on device so as to make it available in the user’s realm?", "username": "Peter_Stakoun" }, { "code": "", "text": "@Peter_Stakoun So sync realms and non-synced realms are different. Synced realms contain a history which is all the operations the user performed in order to generate the current state - preserving the intent of this history is necessary to guard the semantics of our conflict resolution. If you are looking to migrate from non-synced to synced you will need to have code that copies the objects from non-synced to a synced realm.", "username": "Ian_Ward" }, { "code": "nonSyncedRealm.objects('Obj').forEach((o) => syncedRealm.create('Obj', o))", "text": "Thanks for the clarification @Ian_Ward. Would I then have to open both the non-synced and synced realms at the same time?Something like nonSyncedRealm.objects('Obj').forEach((o) => syncedRealm.create('Obj', o))?Is there a way to check if the synced realm is being opened for the first time?", "username": "Peter_Stakoun" }, { "code": "", "text": "@Peter_Stakoun Correct, you would open both realms at the same time, providing them different Configuration structs, and copy data between them. Generally, the way to tell if the synced realm is the first time is to see if app.currentUser is null or not - because if you have a valid user then you have logged in before and likely opened a synced realm.", "username": "Ian_Ward" } ]
Migrating non-Sync data to a Synced Realm
2020-10-09T03:22:47.362Z
Migrating non-Sync data to a Synced Realm
1,975
null
[]
[ { "code": "", "text": "I have been using the old Stitch SDK with my Vue application. I am looking to migrate into the Realm SDK. Which library should I use with VueJS. Will it be the Web SDK or NodeJS SDK?", "username": "Salman_Alam" }, { "code": "", "text": "Hi @Salman_Alam,\nI think it mostly depands what features you need and how do you write your application clients.For example if you look into implementing the offline first approach with Realm Sync you should use a node or js sdks since realm-web does not support this capability.Thanks,\nPavel", "username": "Pavel_Duchovny" } ]
Which Realm Libary to use for VueJS
2020-10-11T09:49:45.229Z
Which Realm Libary to use for VueJS
2,667
https://www.mongodb.com/…7_2_1024x284.png
[]
[ { "code": "{\"where\":{\"or\":[{\"inputMessage.SENDERPHONE\":{\"contains\":\"0968xxxxxx8\"}},{\"inputMessage.RECEIVERPHONE\":{\"contains\":\"0968xxxxxx8\"}}]}}\n", "text": "Dear.I have some problems with my query, i have a table with index created, this table is to store log of my app, and many logs are stored. when i call a query with limit is 10, the average response time is around 1.6s, but the CPU Usage is very high, i don’t know why.Pls help me to find out the root cause.Here is my query:Screen Shot 2020-10-10 at 10.46.35 AM2278×632 68 KB", "username": "Chuong_LA" }, { "code": "", "text": "Hi @Chuong_LA,Your query is using a $where clause? Please note that $where or map reduce are known to wrok poorly with indexes.Moreover, regex type queries with un-anchored searches are known to be poor when utelizing indexes.Notice that the query is probably running a full index scan as it tries to find first 10 records , index keys examine is 300k.Are you using atlas? If yes please consider using atlas search as it has support for regex indexing for text which should be far more sufficient.Use MongoDB Atlas Search to customize and embed a full-text search engine in your app.Learn how to use a regular expression in your Atlas Search query.Best\nPavel", "username": "Pavel_Duchovny" } ]
Mongo High CPU Usage when query
2020-10-10T05:53:24.794Z
Mongo High CPU Usage when query
3,724
null
[]
[ { "code": "class Item: Object {\n@objc dynamic var title: String?\n@objc dynamic var done: Bool = false\n}\nvar toDoItems: Results<Item>!\n\nfunc tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {\n let cell = tableView.dequeueReusableCell(withIdentifier: \"cell\", for: indexPath)\n \n cell.textLabel?.text = toDoItems[indexPath.row].title\n \n cell.accessoryType = toDoItems[indexPath.row].done ? .checkmark : .none\n \n return cell \n}\n\nfunc tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {\n try! realm.write {\n toDoItems![indexPath.row].done = !toDoItems![indexPath.row].done\n }\n \n tableView.reloadData()\n tableView.deselectRow(at: indexPath, animated: true)\n}\n", "text": "Hey guys so I developed an NoteApp, the wanted behavior is that whenever a user taps on one of the cells a checkmark appears and if tapped again the checkmark disappears . Here is what my code looks like.Item.swiftViewController.swiftSo the problem is that whenever I tap on a cell the checkmark does not appear and my realm done property remains unchanged in Realm Studio . Please help!!", "username": "Stephano_Luis_Felipe" }, { "code": "", "text": "Cross post to Stackoverflow. It’s a good idea to keep questions in one place so we aren’t doing double duty in trying to answer.My Realm property isn’t updating whenever I run my app", "username": "Jay" }, { "code": "\n \n \n func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {\n return items.count\n }\n \n func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {\n let cell = tableView.dequeueReusableCell(withIdentifier: \"Cell\") ?? UITableViewCell(style: .default, reuseIdentifier: \"Cell\")\n cell.selectionStyle = .none\n let item = items[indexPath.row]\n cell.textLabel?.text = item.body\n cell.accessoryType = item.isDone ? UITableViewCell.AccessoryType.checkmark : UITableViewCell.AccessoryType.none\n return cell\n }\n \n func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {\n let item = items[indexPath.row]\n try! realm.write {\n item.isDone = !item.isDone\n }\n }\n \n \n ", "text": "@Stephano_Luis_Felipe The old legacy realm demo app leveraged the checkmark in a tableView - you can see the code here -Perhaps this can help you", "username": "Ian_Ward" } ]
My Realm property isn't updating whenever I run my app
2020-10-09T21:12:50.719Z
My Realm property isn’t updating whenever I run my app
1,309
null
[ "node-js", "change-streams" ]
[ { "code": "[\n {\n \"$match\": {\n \"operationType\": \"insert\",\n \"fullDocument.client_id\": \"5f6f69b96783940464d0ae1c\"\n }\n }\n]\n[\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"fullDocument.client_id\": \"5f6f69b96783940464d0ae1c\"\n }\n }\n]\n", "text": "MongoDB Atlas v3.6, Node Driver v3.6.2I have a db to which items are added / updated.I’m using collection.watch(pipeline)The pipeline has $match on client_id ( string of o_id )The ‘insert’ watchand this works fine - ie, only inserts with the matching client_id are included in the stream.However I also haveWhich never matches any document updates ??Any ideas ?", "username": "Peter_Alderson" }, { "code": "updateLookup=true", "text": "Hi @Peter_Alderson,I believe this is as you have to specify updateLookup=true to get q fullDocument field in the event with update.This is not a default behaviour therefore cause you to miss match the events:Change Streams — MongoDB ManualBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "[\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"fullDocument\": \"updateLookup\"\n }\n }\n]\n", "text": "Hi @Pavel_Duchovny,I’ve now tried…This is as suggested in the link for nodejs, but still there are no updates received on this watch !Best\nPeter", "username": "Peter_Alderson" }, { "code": "\"fullDocument\": \"updateLookup\"collection.watch(pipeline,options).watch([\n {\n \"$match\": {\n \"operationType\": \"update\",\n \"fullDocument.client_id\": \"5f6f69b96783940464d0ae1c\"\n }\n }\n],\n{ \"fullDocument\": \"updateLookup\"})\n", "text": "Hi @Peter_Alderson,This is wrongly done. The \"fullDocument\": \"updateLookup\" is not part of the $match stage.Its an option document provided as part of Watch\n: collection.watch(pipeline,options)Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks! That solved the issue.", "username": "Peter_Alderson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Watch with pipeline insert / update
2020-10-09T10:08:07.152Z
Watch with pipeline insert / update
3,059
null
[ "replication" ]
[ { "code": "", "text": "Is it possible to remove a node from a replica set and start a new replica set from it ?I haven’t seen any topics, article, docs or posts about this, but I believe it would only to change the replica set configuration on the isolated node by removing the other nodes. ", "username": "Lucas" }, { "code": "2701127012", "text": "Hello @Lucas,Is it possible to remove a node from a replica set and start a new replica set from it ?Yes you can.Assume you have a replica-set with three nodes: “localhost:27011”, “localhost:27012” and “localhost:27013”. And, the node with port 27011 is the primary and you want to remove the node on port 27012.Now, remove the replica-set node:Connect to the secondary member to be removed from the shell, and stop the instance.use admin\ndb.shutdownServer()Connect to the primary and remove the member from the replica-set:rs.remove(“localhost:27012”)\nrs.status() // or rs.isMaster(), and verify both before and after the remove.Delete the data directory of the removed member (“localhost:27012”).Now you have “localhost:27012” to create a new standalone member or a new replica-set from it.Also see: Remove Members from Replica Set", "username": "Prasad_Saya" }, { "code": "", "text": "Awesome @Prasad_Saya !Thank you very much for the quick reply ", "username": "Lucas" }, { "code": "", "text": "But can I not remove the data from the removed node and still make it a stand alone or new replica set?The main goal I’m looking for is to remove one node and making it as a point in time copy of the existent replica set", "username": "Lucas" }, { "code": "", "text": "Hi @Lucas,Then take a look at the Delayed Replica Set Members.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Lucas ,Once you remove it from a replica set you can treat this as a recovery operation to create a new replicaset. Start at step 2.", "username": "chris" }, { "code": "", "text": "I’m aware of delayed replica set members @Prasad_Saya , but in my case I want to end up with two replica sets and not just a delayed instance.To get a better picture of what I’m planning to do:By doing this I expect to endup with 2 Replica Sets, RS-0 and RS-1, which RS-1 started with a copy from RS-0. In a git analogy, RS-0 is my master branch and I want to create a branch RS-1 from it I’m just not sure about step 4, if it is possible.", "username": "Lucas" }, { "code": "dbPathreplicationnewConfig.cfgstorage:\n dbPath: C:\\mongo\\data\\replset\\rs3\\db\nnet:\n bindIp: localhost\n port: 27012\nsystemLog:\n destination: file\n path: C:\\mongo\\data\\replset\\rs3\\mongod.log\n logAppend: true\n# replication:\n# replSetName: myrset\nmongod--replSet--dbpath> mongod -f newConfig.cfgmongolocallocaluse local\ndb.dropDatabase()\n\nuse admin\ndb.shutdownServer()\n--replSetnewConfig.cfgreplication:\n replSetName: RS-1\n> mongod -f newConfig.cfgmongors.initiate()rs.isMaster()", "text": "I’m just not sure about step 4, if it is possible.Yes, it is possible.In my last post see this following step:Connect to the primary and remove the member from the replica-set:rs.remove(“localhost:27012”)\nrs.status() // or rs.isMaster(), and verify both before and after the remove.From this point you can do the following steps:Use the configuration for the member “localhost:27012” with the same dbPath, etc., but without the replication option. 
For example, newConfig.cfg:If you specifying the parameters at command-line, start the mongod without the --replSet parameter, but pointing to the same data directory (the --dbpath parameter).> mongod -f newConfig.cfgConnect to it from mongo shell, and:use local\nshow collections\noplog.rs\n…These collections in local database are related to replication.Now, specify the --replSet parameter, but with a new replica set name. Or in the configuration file newConfig.cfg specify the options with new replica set name. For example:> mongod -f newConfig.cfgConnect to it from mongo shell.3.1. Initiate the new replica-set.rs.initiate()This member becomes the primary of the newly created replica-set. Verify:rs.isMaster()", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to split an existent ReplicaSet?
2020-10-08T22:24:20.771Z
Is it possible to split an existent ReplicaSet?
4,402
null
[ "security" ]
[ { "code": "", "text": "hi,I wanted to understand how auto encryption works in enterprise edition.I have m10 cluster and azure key management on mongo.How will it automatically encrypt the entire database present on cluster.Will it require manual interaction for encryption and decryption of particular collection. I assumed it will get automatically encrypted and decrypted.I have read a document regarding client side field level encryption it means i need to define schema for every entity manually and declare the encryption type.\nAnd what i want is database level encryption.I have created a database on m10 cluster and enabled the encryption. Now my spring application can access the database. How can i encrypt the entire database.Every document says mongodb enterprise edition do the automatic encryption of data.Thank you in advance.", "username": "Aniket_Godase" }, { "code": "", "text": "Hi Aniket,I assume you’re referring to the Customer Key Management with Azure Key Vault in Atlas. However I want to note that from a baseline perspective Atlas always uses storage level encryption underneath the data files. What we’re talking about here is encryption of the files themselves as they’re written to the backing filesystem.Re (1) for each node in your cluster, a node-level master key will be created via envelope encryption, derived from your Azure Key Vault key: then a database-level key will be created derived from that node level key for each database in that replica. This all happens transparently to you and allows you to do online key rotation without having to re-write your data.Re (2) No it’s automatically encrypted at this point and the MongoDB process decrypts it before returning data to a client.Re (3) If you want to separately add another layer of encryption on top you may want to explore MongoDB’s Client-Side Field Level Encryption for the subset of your schema that has the highest data classification level where you’re willing to trade off some queryability for the fact that the data is never decrypted outside your systems: in this model you can do point queries but not range queries. The MongoDB drivers can be configured to automatically decrypt. You do not need to do this if you’re just trying to control the cluster-level key e.g. in (1) above.Re (4) If you’ve enabled MongoDB’s Encrypted Storage Engine with Customer Key Management on this cluster then you’re good to go (that would automatically be set if you had configured your Azure Key Vault).Re (5) Correct you get that automatically in Atlas as long as you’re using your own key management (whether Azure Key Vault, AWS KMS, or GCP KMS)", "username": "Andrew_Davidson" } ]
MongoDB Atlas encryption and Azure Key management
2020-10-09T09:59:57.817Z
MongoDB Atlas encryption and Azure Key management
2,421
null
[]
[ { "code": "@Builder\n@Data\npublic class CaseDocument {\n @BsonId\n private long caseId;\n private String customerId; //Nullable\n private String addressId; //Nullable\n\n @BsonCreator\n public CaseDocument(...) {\n // Constructor\n }\n}\nFindIterable<CaseDocument> documents = mongoCollection.find(request.getQuery());\ndocuments.projection(Projections.include(\"caseId\", \"customerId\")); //Not including addressId\nreturn documents.into(new ArrayList<>());\n", "text": "Hi,I’m dealing with below POJO for automatic serialization and deserialization using Mongo Java Driver:Insert works fine. Retrieving the document by primary key and deserialization it into CaseDocument POJO works fine. Now I’m working on a query where I want to use projection to limit the number of fields returned but it throws an error. Below is how my query looks like:This gives me below error:\n“org.bson.codecs.configuration.CodecConfigurationException: Could not construct new instance of: CaseDocument. Missing the following properties: [addressId]”I can’t use @BsonIgnore on addressId because I want it to be persisted in the DB and retrieved for another query.I saw there is BsonIgnoreExtraElements for C# for nothing similar for Java driver.Hence my question is how can I achieve the desired result where Mongo Java Driver constructs the CaseDocument object without addressId field set?I’m using 3.12 version of Mongo Java Driver.", "username": "Shubham_Gupta" }, { "code": "List<Document>org.bson.DocumentDocumentCaseDocument// Maps an input Document to a CaseDocument and returns it.\nstatic CaseDocument mapToCaseDocument(Document doc) {\n\tCaseDocument caseDoc = new CaseDocument();\n\tcaseDoc.setCaseId(caseDoc.getLong(\"caseId\"));\n\tcaseDoc.setCustomerId(caseDoc.getString(\"customerId\"));\n\t// Map other fields as needed (with any conversions, validations, etc.).\n\t// Throw a runtime exception in case could not be mapped correctly.\n\treturn caseDoc;\n}\nMongoCollection<Document> collection = db.getCollection(\"caseCollection\");\n\nList<CaseDocument> result = collection.find(request.getQuery())\n\t\t .projection(Projections.include(\"caseId\", \"customerId\"))\n\t\t .into(new ArrayList<Document>())\n\t\t .stream()\n\t\t .map(doc -> mapToCaseDocument(doc))\n\t\t .collect(Collectors.toList());\n", "text": "Hello @Shubham_Gupta, welcome to the community.Here is an appraoch.You can get the result of the projection as List<Document> (of org.bson.Document) and use a mapper to map the fields from the Document to the CaseDocument POJO as shown below.The mapper method can be like this:So your modified code:Note, the code uses Java driver v3.12 and MongoDB v4.2.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks for replying Prasad. I understand that there are other ways to achieve it but I was looking for an out of the box solution but looks like that support is not there.I myself achieved it by handling the serialization and deserialization using Jackson (Created a class DocumentUtils with two functions to convert Document to CaseDocument and vice versa).A custom mapper though looks good here but it becomes hard to handle when there are 10+ fields in the POJO. Also the JSON returned by org.bson.Document contains bson identifiers like $numberLong which again requires extra handling.I’ll raise an issue for it on Jira.", "username": "Shubham_Gupta" } ]
How to Ignore some fields during deserialization using MongoDB Java Driver?
2020-10-08T10:43:37.559Z
How to Ignore some fields during deserialization using MongoDB Java Driver?
19,360
null
[]
[ { "code": "", "text": "Can anybody explain to me what’s the difference between Firebase and Realm?\nWhen should I use Firebase or Realm??", "username": "Stephano_Luis_Felipe" }, { "code": "", "text": "TL;DR - see the bottom of the post.Firebase is a suite of products which includes both Firebase Realtime Database and Firestore which are Online First databases that are NoSQL based. They also offer Firebase Storage, Notifications and a bunch of other products.Firebase (databases) are very strong, scaleable, well supported databases with incredible performance. It has a large development community (and/but) is owned by Google (take that as you will). The SDK and API documentation has been some of the best I’ve come across and there’s a LOT of coding examples on Stackoverflow as well as dozens of online videos, tutorials and training courses.Realm is an Offline First database with a similar feel to SQL in its queries but it’s more of an Object store than table based like SQL, and it’s ‘totally different’ than Firebase (NoSQL)MongoDB Realm is the next gen product bringing Realm into a (IMO) much more mature and supported platform. It is also highly scaleable like Firebase but has a much smaller community to fall back on. The documentation is a work in progress but is a heck of a lot better than it used to be, and improving every day (thank you).Both Realm and Firebase ‘store data’ so they could be used in many scenarios.Neither product provides very good multi-tenant support, which has been my gripe for about 8 years now. But I know firebase has a multi-tenant product in the works. Not sure about Realm.There is no pre-defined time or situation of when to use one or the other so the question is impossible to answer as it depends on your use case and more importantly, what YOU feel is the best for the task.If you want a case for using one or the other, I will give you ours.All of our projects are desktop, macOS based (a requirement) with some having a portable iOS component which is far less important.Background: Firebase, suddenly and without any warning dropped macOS support somewhere around 5 years ago. Because of that we had to migrate to something that provided macOS Support, and Realm was that solution. (note that Firebase can run on macOS, it’s just not company supported). So the take-away is that we didn’t initially create the use case for switching to Realm, it was forced upon us. Note that we would probably still be Firebased if it wasn’t for that change - however, with MongoDB Realm, it’s really got a lot to offer.In our case, the change forced us to re-model our data and re-think our apps from more of an online app to more offline. Because our data sets are very large, we also needed to account for the change as instead of the data being ‘in the cloud’ it was also on the device (hence the need for macOS). Takeaway: Realm apps consume more space.Creating relationships between data in Realm was/is far ‘easier’ than in Firebase so that speeded development. However, the extra work it takes in Firebase is offset by the blistering performance it offers getting to that data. Also, Firebase Authentication is mature and rock solid. Realm is getting there.Oh - one other use case thing; user persistence and record ‘locking’. Neither platform offers a feature similar to SQL Record Locking. Firebase Realtime Database has user persistence. 
I’ve ‘ranted’ about that elsewhere here on the forums but just wanted to mention it (again) as it could affect your implementation.TL;DRMy suggestion is to create a simple To Do app with both platforms to get some experience and see the difference.Again, IMO, if you are new to coding and/or don’t have any idea what ‘NoSQL’ is, you should take a good look at Realm to start with as Firebase will take quite a bit longer to wrap your brain around, and modeling your data is way different between the two.One easy point to consider; is your app Offline First or Online First?Realm offers offline first with a syncing component for online. Firebase is pretty much online only with a offline persistance for situations where the device temporarily looses connection.A final note is that we feel both platforms are incredible and encourage you to investigate both, narrow down and clarify your use case then write some code.", "username": "Jay" }, { "code": "", "text": "Hi @Stephano_Luis_Felipe,My takeaway of working with both platforms is that Firebase and Realm have very similar concepts. However, Firebase is very coupled to Google services and technology.\nIt gives you limited access to your database and depend on google projects.Realm on the other hand bases its main data store on MongoDB Atlas which is a cross cloud versatile database. With Realm you are not coupled with specific services but can integrate lots of 3rd party services including easy integration to AWS services.Since your data is in Atlas and/or data lake you can write application with regular MongoDB drivers for many languages and may also utelize MongoDB charts to visualise your data quickly.Realm also provide event based triggers and data robust functionality like integrating Atlas text search and Atlas data lake. It allows to take advantage of all MongoDB known crud and aggregations as well as support for transactions.Now also offering graphql and mobile sync as mentioned.Its fair to say I have much more experience with Realm but I tend to like its concepts better.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I was a Firebase programmer for three years before Realm Cloud came out. Firebase is a great backend as a service product, and it provides a very rich set of server side functions including Push Notifications, backend functions, and a number of authentication providers. What it lacked is a good client side data base. Any decent Firebase application has to combine the use of Realm for that purpose. The problem is that if you use Firebase along side Realm, you must write a set of translation code between the two models. Firebase sees the world as server side JSON objects, while Realm sees the world as client side Swift objects. Ultimately, the Realm data model is closer to what you actually want to program when you are writing client side. For this reason, I believe that Realm Cloud obviates the need to use Firebase.As far as the security models are concerned, I wrote an article on Medium comparing the two of them. 
This article however is slightly deprecated since the release of the new MongoDB Realm product in June 2020.This article compares the security models of Realm.io Platform to the rules based language of Google Firebase.\nReading time: 11 min read\n", "username": "Richard_Krueger" }, { "code": "", "text": "Some great points!To add some clarity to the above I would pose how different they are both conceptually and well as functionally; other than they both ‘store data’.Keep in mind that Google is to Firebase what MongoDB is to Realm. Google and Mongo are the parent companies that own their respective particular product. Both can be accessed in a variety of ways from multiple platforms.What it lacked is a good client side data baseNoting that Firebase is not supposed to be a client side database. It’s Online First, cloud driven which is why making a direct comparison near impossible. The takeaway here is that if you’re looking to store data locally on the device, Firebase is not for you.Realm sees the world as client side Swift objects.Actually Realm local objects are Objective-C. Firebase doesn’t have storage objects at all! (lol) and data stored in MongoDB Realm is neither Objective-C nor swift. Aren’t we in JSON land on the server as a data store?@Richard_Krueger thanks for the article post - I remember reading that back in the day. As you mentioned, a lot has changed since then; both platforms now offer powerful cloud functions (firebase is a bit more mature here), executing server side code and now with Cloud Firestore and MongoDB Realm, rules and security is way more flexible.", "username": "Jay" }, { "code": "", "text": "@Jay thanks for elucidating. Yes, the offline first capability is really the killer feature for MongoDB Realm. The fact that an app can work without being connected is huge. This is a form of asynchronous capability at the Internet connectivity level. I can really see this having major implications for building apps that breach the digital divide. Reliable high speed connectivity cannot be assumed, especially in developing countries.", "username": "Richard_Krueger" }, { "code": "", "text": "I will add that the bigger problem with using a hybrid solution that involves Firebase as the backend server and Realm as the local object database is getting the offline-first solution implemented. So to get this working, you have set up a bunch of notification listeners on Firebase to syncrhonize cloud changes. And then on the client side, you need to set up an operation queue that pushes up your local changes to the server. The operation queue needs to block if there is no connection to the internet. And send up changes accordingly. To be frank, I never managed to really get this working properly. With MongoDB Realm on the other hand, you really don’t need to worry about his at all. As fas as the client is concerned, it is reading and writing a local database, which automagically gets synced in the background with the server copy when Internet connectivity is present. There is no translation code from JSON to Realm objects and no operation queue. For me personally, this was a huge productivity saver.", "username": "Richard_Krueger" }, { "code": "", "text": "Thanks to all the Realm community for helping me on this problem, I just became an iOS developer and needed some help, all responses where great and easy to understand.\nOnce again big thanks to everybody!", "username": "Stephano_Luis_Felipe" } ]
Firebase vs Realm
2020-10-03T06:46:32.336Z
Firebase vs Realm
17,829
null
[ "atlas-search" ]
[ { "code": "{\n name: \"test\",\n description: \"It's a glorious day!\",\n thematic: [9, 3, 2, 33]\n}\nint{\n name: \"test2\",\n description: \"It's a glorious night!\",\n thematic: [9, 3, 6, 22]\n}\n93textcompound.should.term", "text": "I am working on a function that helps me find similar documents, sorted by score, using the full-text search feature of MongoDB Atlas.I set my collection index as “dynamic”.I am looking for similarities in text fields, such as “name” or “description”, but I also want to look in another field, “thematic”, that stores integer values (ids) of thematics.Example:Let say that I have a reference document as follows:I want my search to match these int in the thematic field and include their weight in the score calculation.For instance, if I compare my reference document with :I want to increase the score since the thematic field shares the 9 and 3 values with the reference document.Question:What search operator should I use to achieve this? I can input array of strings as queries with a text operator but I don’t know how to proceed with integers.Should I go for another approach? Like splitting the array to compare into several compound.should.term queries?Note that I also posted this question on SO here: How can I search in arrays of integers with a compound MongoDB Atlas search query? - Stack Overflow", "username": "Antoine_Cordelois" }, { "code": "", "text": "Than problem than you ! I want to search int in array I hope new feature soon or any solution !", "username": "Jonathan_Gautier" }, { "code": "", "text": "I solved it by adding a trigger to my collection. Each time a document is inserted or updated, I update the thematic and other fields counterpart, e.g. _thematic, where I store the string value of the integers. I then use this _thematic field for search.Another way to solve this is to make a custom analyser with character mapping that will replace each digit with its string counterpart. I haven’t tried this one tho.", "username": "Antoine_Cordelois" }, { "code": "", "text": "Can you show me an example ? Thanks", "username": "Jonathan_Gautier" }, { "code": "exports = function (changeEvent) {\n\nconst fullDocument = changeEvent.fullDocument;\nconst format = (itemSet) => {\n let rst = [];\n Object.keys(itemSet).forEach(item => rst.push(itemSet[item].toString()));\n return rst;\n};\nlet setter = { \n _thematic: fullDocument.thematic ? format(fullDocument.thematic) : [], \n};\nconst docId = changeEvent.documentKey._id;\n\nconst collection = context.services.get(\"my-cluster).db(\"dev\").collection(\"projects\");\n\nconst doc = collection.findOneAndUpdate({ _id: docId },\n { $set: setter });\n\nreturn;\n};\n", "text": "Here you go (trigger code):I am not sure it is the right way to write this kind of trigger, but it works. Alternatives or advice on how to improve this are welcome", "username": "Antoine_Cordelois" } ]
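A minimal sketch of the workaround described in the thread above — matching the stringified ids kept in _thematic alongside the text fields. It assumes a default dynamic search index and an illustrative collection name (items); the reference values and query text are made up:
db.items.aggregate([
  { $search: {
      compound: {
        should: [
          // stringified thematic ids maintained by the trigger shown above
          { text: { query: ["9", "3", "2", "33"], path: "_thematic" } },
          // plain text similarity on the existing string fields
          { text: { query: "glorious day", path: ["name", "description"] } }
        ],
        minimumShouldMatch: 1
      }
  } },
  { $project: { name: 1, score: { $meta: "searchScore" } } },
  { $limit: 10 }
])
Documents sharing more thematic ids with the reference document satisfy more should clauses and therefore score higher.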
How can I include arrays of integers in my compound search?
2020-10-02T18:49:06.849Z
How can I include arrays of integers in my compound search?
3,316
null
[]
[ { "code": "", "text": "Actually, we need to migrate from SQL Server to MongoDB. Can we access all the procedures and functions that are available in SQL Server? If yes, please guide me on how to implement those things in MongoDB.", "username": "jetti_Naresh" }, { "code": "", "text": "MongoDB is not a drop-in replacement as it is a NoSQL document-oriented database. It is actually much more flexible. I recommend that you take some courses at https://university.mongodb.com/. There is one course that goes over the differences between MongoDB and SQL.", "username": "steevej" }, { "code": "", "text": "Take a look at MongoSyphon, which was written by a MongoDB employee (John Page). It provides the capability to load data from an SQL database.", "username": "Joe_Drumgoole" } ]
Do the migration from SQL Server to Mongodb
2020-10-07T12:06:22.035Z
Do the migration from SQL Server to Mongodb
1,665
null
[ "field-encryption" ]
[ { "code": "", "text": "MongoDB provides “Client-Side Field Level Encryption” for encrypting and decrypting specific field in collection.\nMongoDB FLE implementation does not perform any encryption and decryption operations on the database server. Instead, these operations are performed by the MongoDB client library, also known as the driver. Now for supporting sorting operation on the encrypted field, all data from the MongoDB has to brought to driver and after decrypting the field sorting can only be applied. Which leads to performance degradation.\nPlease suggest how to perform sorting on a field level encrypted column?", "username": "Vikrant_Tiwari" }, { "code": "", "text": "Hi @Vikrant_Tiwari, and welcome to the forum!Please suggest how to perform sorting on a field level encrypted column?Depending on your use case, you could encrypt only fields containing sensitive information. Applications can still query and sort the result on the server using other unencrypted non-sensitive information on the document.Now for supporting sorting operation on the encrypted field, all data from the MongoDB has to brought to driver and after decrypting the field sorting can only be applied. Which leads to performance degradation.The general notion of MongoDB Client-Side Field Level Encryption is that the server never sees the unencrypted values.Maybe what you are looking for is MongoDB Encryption At Rest ? This feature allows MongoDB server to encrypt data files such that only parties with the decryption key can decode and read the data.Regards,\nWan.", "username": "wan" } ]
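A minimal sketch of the suggestion above — keep only the sensitive field encrypted and let the server filter and sort on plaintext fields; collection and field names are illustrative, not from the thread:
db.employees.find(
  { department: "finance" },         // plaintext field: the server can filter on it
  { name: 1, hireDate: 1, ssn: 1 }   // ssn is the encrypted field; the driver decrypts it after the results return
).sort({ hireDate: -1 })             // the sort runs server-side on a plaintext field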
Sorting on Field Level Encrypted column
2020-06-26T19:29:20.820Z
Sorting on Field Level Encrypted column
3,176
null
[ "monitoring" ]
[ { "code": "", "text": "The db.serverStatus().asserts command returns{ “regular” : 0, “warning” : 1, “msg” : 0, “user” : 41800, “rollovers” : 0 }The number of “user asserts” is very high, for this I have investigated and found, on the mongodb.log, this warning:2020-07-29T07:31:51.848+0200 W COMMAND [conn225] Use of the aggregate command without the ‘cursor’ option is deprecated. See http://dochub.mongodb.org/core/aggregate-without-cursor-deprecation.This warning is repeated about 131000 times in 6 days … this number is too high compared to 41800 assertions user (the uptime is equals to 71 days).Continuing, I also found:2020-07-28T09:36:40.256+0200 I ASIO [NetworkInterfaceASIO-RS-0] Failed to connect to mongo003:27017 - HostUnreachable: Connection refusedbut the frequency with which it occurs is very low.Finally, I found this error:2020-08-06T01:00:12.205+0200 E QUERY [conn58614] Plan executor error during find command: DEAD, stats: { stage: “COLLSCAN”, nReturned: 1, executionTimeMillisEstimate: 82, works: 3, advanced: 1, needTime: 1, needYield: 0, saveState: 1, restoreState: 1, isEOF: 0, invalidates: 0, direction: “forward”, docsExamined: 1 }but I don’t understand on which collection the plan executor failsCan you help me understand what to look for on the log file?Thanks!", "username": "Alaskent19" }, { "code": "", "text": "Hi @Alaskent19,Since the msg parameter is 0 I am not sure the assertions are written to the log.Does your application logs report on any errors or issues?What makes you believe there is an actual issue? Assertions may be transient and unimportant…Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,thanks for the reply.I don’t understand … why on the mongo log there are no user assertions? Or if there are, what should I look for?\nHow does mongodb increment the counter if they are not present in its log file?Can errors be raised by the application and not tracked by mongo?", "username": "Alaskent19" } ]
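For reference, the deprecation warning quoted above is raised when the aggregate command is sent without a cursor document; passing an explicit (even empty) cursor avoids it. A hedged sketch with an illustrative collection name:
db.runCommand({
  aggregate: "mycollection",
  pipeline: [ { $match: {} } ],
  cursor: {}          // omitting this is what triggers the deprecated-usage warning
})
// the assertion counters can then be tracked over time
db.serverStatus().asserts   // { regular, warning, msg, user, rollovers }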
Questions about "db.serverStatus()"
2020-10-08T07:45:25.493Z
Questions about &ldquo;db.serverStatus()&rdquo;
1,862
null
[]
[ { "code": "", "text": "Hello Team,I need help in mongo lookup query where while i using mongo lookup between 2 collections. I am getting result fine from first collection where i need active count from second collection as well. I am not finding any solution to adjust this count in query. I am using group by with first collection and now required count from second collection.", "username": "Varinder_Patwal" }, { "code": "", "text": "Please provide sample documents from both collections and the aggregation you are using.", "username": "steevej" }, { "code": "MongoDB Official Communitysample documentyour queryexpected output/issue", "text": "Welcome @Varinder_Patwal on MongoDB Official Community platform.\nIt is requested you to always share a sample document, your query, expected output/issue so that if anyone want to do some operation he/she will use your sample document. It’s a good way to ask questions.", "username": "Nabeel_Raza" }, { "code": "", "text": "Hi @Nabeel_Raza , Thanks for your suggestion. Will do the same next time.Attached is the query and expected output result.Hoping for help.Query_Document1314×530 20.1 KB", "username": "Varinder_Patwal" }, { "code": "", "text": "share it in text form not in pictorial.", "username": "Nabeel_Raza" }, { "code": "db.userlogs.findOne()\n{\n \"_id\" : ObjectId(\"5e6f9850bf1f7f22659c2051\"),\n \"userLogInTime\" : ISODate(\"2020-03-16T15:16:32.877Z\"),\n \"userId\" : ObjectId(\"5e6f9832bf1f7f22659c2050\"),\n \"userLogOutTime\" : null,\n \"createdAt\" : ISODate(\"2020-03-16T15:16:32.878Z\"),\n \"updatedAt\" : ISODate(\"2020-03-16T15:16:32.878Z\"),\n \"__v\" : 0\n}\ndb.users.findOne()\n{\n \"_id\" : ObjectId(\"5e7508b752eeffdd9c604fca\"),\n \"username\" : \"130123\",\n \"createdAt\" : ISODate(\"2020-03-20T18:17:27.646Z\"),\n \"createdBy\" : null,\n \"email\" : \"[email protected]\",\n \"isActive\" : true,\n \"updatedAt\" : ISODate(\"2020-06-08T10:12:21.557Z\"),\n}\ndb.userlogs.aggregate([ { '$match': {'userId': new ObjectId('5f27c8967655fc302b0a842d'), 'createdAt': { '$gte': new Date('Mon, 01 Jun 2020 00:00:00 GMT'), \n'$lte': new Date('Mon, 28 Sep 2020 00:00:00 GMT') }, '$or': [ { 'userLogOutTime': {'$ne': null } }, { 'lastHeartBeat': {'$ne': null } }] } }, \n{ '$group': {'_id': { '$dateToString': { 'format': '%Y-%m-%d', 'date': '$createdAt' }},\n'totalLoggedTime': { '$sum': { '$cond': [{ 'userLogOutTime': null}, { '$subtract': [ '$lastHeartBeat', '$userLogInTime' ]},\n{ '$subtract': [ '$userLogOutTime', '$userLogInTime' ]} ] }}, 'userId': { '$first': '$userId'\n}, 'userLogInTime': { '$first': '$userLogInTime'} } },\n{ '$lookup': {'from': 'users', \n'let': { 'user': '$userId'},\n 'pipeline': [ { '$match': {'$expr': { '$eq': [ '$_id', '$$user' ]}, \"processName\":{$eq:\"Samsung Chat\"}, \"isActive\" : true\t} }\n ] , 'as': 'userdetails' }},\n {$group :{_id :\"Totalloginseconds\" , \"totalloginsec\": { \"$sum\": \"$totalLoggedTime\" } \t}},\n {'$project': { '_id': 0, 'totalloginsec': 1 ,'activeHeadCounts':{\"$sum\":'$userdetails.userId'} } }, \n{'$sort': {'date': -1 } }]);\n", "text": "Hi Team - Below are the collections and using query.Collection 1 - userlogscollection 2 - usersUsing query -I need count from lookup collection users in result along with total login.Will be thankfull for this.", "username": "Varinder_Patwal" }, { "code": "", "text": "I executed your code and found this:image1110×435 28.9 KB", "username": "Nabeel_Raza" }, { "code": "", "text": "@Nabeel_Raza I shared you sample collections format. This will not give you result. 
I was trying to take help from someone that can fulfill my requirement.\nMy question was how to get active headcount from users collections in $project as used “activeHeadCount:{$sum:”$userdetails._userId\"}\". Need to confirm syntax and solution.", "username": "Varinder_Patwal" }, { "code": "", "text": "@steevej Please help as sample data with collection and query shared.", "username": "Varinder_Patwal" }, { "code": "", "text": "Yes, the methodology will be the same filter out the documents, then group them and then get the count form that. But make sure that you’ve a projection list where you can add new (additional field).", "username": "Nabeel_Raza" }, { "code": "db.user.aggregate([\n { \"$match\": { 'phoneInfo.verifiedFlag': true} },\n { \n \"$group\": {\n \"_id\": { \n \"day\": { \n \"$dateToString\": { \n \"format\": \"%Y-%m-%d\", \n \"date\": \"$createdOn\" \n } \n },\n \"status\": { \"$toLower\": \"$status\" }\n },\n \"count\": { \"$sum\": 1 }\n }\n },\n { \n \"$group\": {\n \"_id\": \"$_id.day\",\n \"counts\": {\n \"$push\": {\n \"status\": \"$_id.status\",\n \"count\": \"$count\"\n }\n }\n }\n },\n { \"$sort\": { \"_id\": 1 } }\n])\n", "text": "Here is the link of same type of question on stack overflow: Active and Total User Count Mongodb - Stack Overflow", "username": "Nabeel_Raza" } ]
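A hedged sketch of one way to return the active head count alongside the totals, using $size over the $lookup result. It reuses the userlogs/users shapes shown above but omits the date filtering and null-logout handling from the original pipeline:
db.userlogs.aggregate([
  { $group: {
      _id: "$userId",
      totalLoggedTime: { $sum: { $subtract: [ "$userLogOutTime", "$userLogInTime" ] } }
  } },
  { $lookup: { from: "users", localField: "_id", foreignField: "_id", as: "userdetails" } },
  { $addFields: {
      // keep only active user documents, then take the array length as the head count
      activeHeadCount: {
        $size: { $filter: { input: "$userdetails", as: "u", cond: { $eq: [ "$$u.isActive", true ] } } }
      }
  } },
  { $group: {
      _id: null,
      totalLoggedTime: { $sum: "$totalLoggedTime" },
      activeHeadCounts: { $sum: "$activeHeadCount" }
  } }
])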
Mongo lookup Query
2020-10-08T11:54:39.833Z
Mongo lookup Query
3,736
null
[]
[ { "code": "", "text": "I user RealmDB in my swift ios project. I have table with users which has online status flag. On my GUI I have UITableView with sections splited on online/offline rows.\nI need to make correct request to RealmDB to take users list and grouped by online flag. And same moment I want to use Result.observe closure for make update in UI.\nBut can’t find correct query request. for it.\nCurrently I solve this issue with 2 separated requests to database. But its make change async in same table and broke data consistent for UI, and my app will crash.\nPlease help me to understand how to use this realmDB in real iOs projects.\nBefore it, I used YapDatabase and this case solved very easily.", "username": "Maxim_Bunkov" }, { "code": "", "text": "@Maxim_Bunkov Are you able to share more code snippets so we can understand what is going on?", "username": "Ian_Ward" }, { "code": " conversationsResult = dbUsers.filter{$0.isOnline}\n\n conversationsNotificationToken = conversationsResult?.observe { [weak output] changes in\n guard let output = output else { return }\n output.receivedConversationsDataChanges(changes)\n }\n conversationsOfflineResult = dbUsers.filter{!$0.isOnline}\n\n conversationsNotificationTokenOffline = conversationsResultOffline?.observe { [weak output] changes in\n guard let output = output else { return }\n output.receivedConversationsDataChangesOffline(changes)\n }\n func applyChanges<T>(changes: RealmCollectionChange<T>, section: Int = 0) {\n switch changes {\n case .initial:\n reloadSections([section], with: .none)\n case .update(_, let deletions, let insertions, let updates):\n let fromRow = {(row: Int) in\n return IndexPath(row: row, section: section)}\n\n beginUpdates()\n deleteRows(at: deletions.map(fromRow), with: .automatic)\n insertRows(at: insertions.map(fromRow), with: .automatic)\n reloadRows(at: updates.map(fromRow), with: .none)\n endUpdates()\n default: break\n }\n }\n", "text": "I have UITableView with 2 separated sections of users.\n1 section - online users\n2 section - offline users\nWhen I try to make two separated realmdb request like that:For users who is online and for user who is offlineEach of changes change only specific section by change example method from documentation:But when user change status from offline to online or vice verse both sections update async and my app crashed. Because data array is wrong.\nCurrently I use reloadData instead method for update whole talbeView. But its ugly solution.\nHelp find correct solution for update UITableView/UICollectionView with multiply section based on Results of Realm db class.", "username": "Maxim_Bunkov" }, { "code": "", "text": "@Maxim_Bunkov Thank you - can I also see your schema of the object in question as well as the queries you are using to propagate data to the UI?", "username": "Ian_Ward" }, { "code": "class SLRConversation: Object {\n override public static func primaryKey() -> String? {\n return \"topic\"\n }\n\n override public static func ignoredProperties() -> [String] {\n return [\"groupName__\", \"pingInterval\"]\n }\n\n @objc dynamic var createdAt: Date?\n @objc dynamic var descUpdatedAt: Date?\n @objc dynamic var lastMessageAt: Date?\n @objc dynamic var subUpdatedAt: Date = Date(milliseconds: 0)\n @objc dynamic var isOnline: Bool = true\n @objc dynamic fileprivate var meta__: Data?\n @objc dynamic var conversationIcon: String?\n ...\n}\n", "text": "It’s a typical object, nothing new:I didn’t see how the scheme can help to you. 
For example, this is how grouping (like SQL's GROUP BY) works in YapDB: Views · yapstudios/YapDatabase Wiki · GitHub\nThat is what I need, with change notifications, so that moving a user from offline to online arrives as a single change event.", "username": "Maxim_Bunkov" }, { "code": "", "text": "@Maxim_Bunkov It looks like you are trying to use live objects to fill table cells' data. It would probably be a good thing to use frozen objects as a table's data source. I would try using our new Frozen Objects which we talk about in this blog post -\nhttps://www.mongodb.com/article/realm-cocoa-swiftui-combineAlso documented here -", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_Ward This looks like what I need. So if a user changes status from online to offline, for example, will there be only one Rx signal which we freeze and then use to reload the sections?", "username": "Maxim_Bunkov" }, { "code": "", "text": "And one more question: are data-array changes for UITableView sections not handled out of the box? Does the user have to control the data flow, i.e. when indexPath[0][12] moves to indexPath[1][3]?", "username": "Maxim_Bunkov" } ]
How to group data from realmDB
2020-10-02T14:12:22.727Z
How to group data from realmDB
3,927
null
[ "typescript" ]
[ { "code": "No changes in database schema were found - cannot generate a migration. To create a new empty migration use \"typeorm migration:create\" command\n", "text": "Hi, I am trying to use typeorm to create database migrations for mongo.It seems to work for me with every other database, but not mongo. No migrations are ever generated, and I always get the error:AAARRRRRGGHHHH!!I have asked about this in multiple places but have not received any answers:Issue on typeorm repo: No Migrations Are Generated For MongoDB · Issue #6695 · typeorm/typeorm · GitHubStack Overflow Q: mongodb - typeorm:migration create on New Project Does Not Recognize Entities - \"No changes in database schema were found - cannot generate a migration.\" - Stack Overflowrepo demonstrating the problem: GitHub - JimLynchCodes/barebones-mongo-typeorm-exampletypeORM gitter: https://gitter.im/typeorm/typeormAppreciate any help. Thanks! ", "username": "James_Lynch" }, { "code": "", "text": "bump anyone have any thoughts on this?", "username": "James_Lynch" }, { "code": "", "text": "… ", "username": "James_Lynch" }, { "code": "+50", "text": "Hi @James_Lynch,Welcome to the MongoDB Community!Unfortunately there don’t appear to be a lot of users with experience using TypeORM with MongoDB, but it does look like you’ve asked in all of the right places to try to reach relevant audiences. Thanks for including the other links here – that will be useful for anyone else with a similar question.I see that you have received at least one answer on your Stack Overflow question (although it looks like it took a +50 bounty to encourage responses). It seems that TypeORM currently ignores creating migrations for MongoDB, and the suggestion was to use a standalone migration tool instead.MongoDB does not have a fixed schema or catalog to reference, and there is no strict requirement for all documents in a collection to have identical schema. You can enforce optional schema validation on inserts and updates using JSON Schema validation, but changes in schema validation rules do not affecting existing documents.For larger deployments flexible schema can have significant performance benefits: you can change schema (or schema validation rules) without immediately rewriting all of the historical documents. However, the onus of handling schema variations is then pushed down to your application code.If your use case requires a strictly consistent schema for all documents, you can still create more traditional migration scripts. However, there is some potential flexibility in choosing how to migrate documents. For example, you could choose to migrate documents in batches (perhaps based on some criteria like newest to oldest created) in order to control the impact on your MongoDB deployment’s working set or performance.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you @Stennie_X, I hadn’t seen your response!Yes, it took a +50 bounty, and the answer was a similar “write your own migration scripts each time” strategy.Seems a bit error prone to me, but I guess you have the most control that way.The story for TypeORM is so appealing though- put some typescript decorators on your data models and everything just falls into place!Have some existing data you want to covered to a different data model? Just change the data model, generate and run a migration, and it all just works!I guess there are some issues with it though- for example if you rename 2 string fields then how does it know which is which? 
What if they have different indexes, other decorators, etc.?The batching idea sort of makes sense in that you don't have to bring the db server down at all, and it doesn't get a huge amount of traffic all at once, potentially bottlenecking things when added to the normal load. However, truly making your application(s) work correctly during this transition means you have to support both “versions” of the document, i.e. adding additional, arguably necessary, branching logic that each time makes the code just a bit more complex and difficult to read and understand.", "username": "James_Lynch" } ]
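Since TypeORM does not generate MongoDB migrations, the batched approach discussed above can be scripted directly against the database. A hedged sketch runnable from the mongo shell; the collection, fields, and versioning scheme are illustrative assumptions, not part of the thread:
const batchSize = 1000;
let migrated = 0;
while (true) {
  // pick documents that are still on the old shape (here: no schemaVersion marker yet)
  const batch = db.users.find({ schemaVersion: { $exists: false } }).limit(batchSize).toArray();
  if (batch.length === 0) break;
  const ops = batch.map(function (doc) {
    return { updateOne: {
      filter: { _id: doc._id },
      update: { $set: { schemaVersion: 2, fullName: doc.firstName + " " + doc.lastName } }
    } };
  });
  db.users.bulkWrite(ops);          // one round trip per batch keeps the load predictable
  migrated += batch.length;
}
print("migrated " + migrated + " documents");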
How To Use TypeORM with MongoDB?
2020-09-10T17:16:46.897Z
How To Use TypeORM with MongoDB?
15,359
null
[]
[ { "code": "", "text": "what am i doing wrong?\nmongo “mongodb + srv: //sandbox.8mxqm.mongodb.net/test” --username m001-student\nor mongo “mongodb + srv: //sandbox.8mxqm.mongodb.net/m001” --username m001-student\nconnection failed.", "username": "Vitalii_Kyba" }, { "code": "", "text": "Please revise the lesson where the IDE is presented. You have entered the mongo command in the file editing area rather than the terminal area.", "username": "steevej" }, { "code": "", "text": "Hi @Vitalii_Kyba,Did you whitelist the ip ?", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi @Shubham_Ranjan\nThanks for the help. After adding the IP address, I was able to connect to my AtlasCluster.", "username": "Vitalii_Kyba" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Connect to you AtlasCluster
2020-10-08T09:36:52.012Z
Connect to you AtlasCluster
1,362
null
[ "aggregation", "dot-net" ]
[ { "code": "", "text": "Hi,I have a large set of documents (>1.000.000) and they are tree-like connected. Now I wanted to use graphlookup aggregation to find parents/children. However, if I set maxDepth above a certain value (e.g. 10) I get: MongoDB.Driver.MongoCommandException: ‘Command aggregate failed: $graphLookup reached maximum memory consumption.’I already use .Aggregate(new AggregateOptions() { AllowDiskUse = true }) but this did not help. Any ideas? Or is it time to move to Neo4J or ArrangoDB now?Cheers\nSteffen", "username": "Steffen_Schutte" }, { "code": "", "text": "Hi @Steffen_Schutte,We certainly hope you won’t move to any other database.Unfortunately, the $graphLookup stage is not able to use disk for extending memory… Therefore it ignores allowDiskUse.Perhaps you can run several queries to limit the amount it traverse.Also consider trying a $facet with several $graphLookup stages (however, did not test if that would work)Thanks\nPavel", "username": "Pavel_Duchovny" } ]
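A hedged sketch of the "limit the traversal" suggestion above: prune branches with restrictSearchWithMatch and cap maxDepth, then let the application issue follow-up queries for deeper levels. Collection and field names are illustrative, not from the thread:
db.nodes.aggregate([
  { $match: { _id: "root-node" } },
  { $graphLookup: {
      from: "nodes",
      startWith: "$childIds",
      connectFromField: "childIds",
      connectToField: "_id",
      as: "descendants",
      maxDepth: 5,                                  // keep each traversal small enough for memory
      restrictSearchWithMatch: { archived: false }  // prune branches before they are expanded
  } }
], { allowDiskUse: true })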
C# GraphLookup Out of Memory
2020-10-08T10:43:47.987Z
C# GraphLookup Out of Memory
2,952
null
[]
[ { "code": "", "text": "I have a M10 cluster on Mongo Atlas and when I saw my recent bill it had 3x the charges for the M10. It shows 2160 hours, but I only have 1 cluster running, with 3 nodes.Documentation says there is no extra charge for the 3 nodes, so what could be the reason for 3x the chage for the cluster?", "username": "Faheem_Gill" }, { "code": "", "text": "Hi @Faheem_Gill,I believe what you saw is server Hours. Since the replica has 3 servers evry hour running is multiplied by 3.However, the price per hour/month you got when you deployed the cluster is for the 3 servers so you still should pay the same amount.https://docs.atlas.mongodb.com/billing/cluster-configuration-costs/#number-of-nodesBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,The price per hour is $0.08 which for the M10 I’m guess is okay ($0.026/hr * 3).But there is another issue, we closed all the database connections and blocked all IPs and the network out was still showing ~60kb/s and network in at ~20kb/s.This makes no sense if the IPs are blocked and no data is incoming and outgoing.We are averaging 20GB/day of network internet use and our database size is only 3GB.What could cause this?", "username": "Faheem_Gill" }, { "code": "", "text": "Hi @Faheem_Gill,I am not sure I can help out on this without having the Atlas link and investigate the cluster logs.Blocking IPS are relevant for any new external connections but already connected connections will work.Also there is internal replica set communication which write some dummy operations to progress optime for idle replicate as well.Do you use any triggers or realm apps?I also suggest that you open a support case or contact chat for specific cluster investigation.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Can I share the logs in an email?", "username": "Faheem_Gill" }, { "code": "", "text": "Hi @Faheem_Gill,Its better to open a case with our support or chat conversation.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Billing Question
2020-10-06T09:05:32.058Z
Billing Question
3,640
null
[ "golang" ]
[ { "code": "", "text": "Hello,\nWhat is release policy for driver mongoDB golang? (e.g. Release History - The Go Programming Language)Do you support only latest major version or two newer version?For example with CVE fixed with v1.4.2, will it be back port with 1.3 branch if any?Many thanks\nJérôme", "username": "Jerome_LAFORGE" }, { "code": "", "text": "Hi @Jerome_LAFORGE,We backport changes to the most current minor version. For example, once v1.4.0 has been released, changes will only be backported to the 1.4.x branch to make releases 1.4.1, 1.4.2, and so on. The CVE fixed with v1.4.2 will not be backported to the 1.3 branch. While that might be possible for this specific case, it’s not something we want to do regularly because our usage of any given dependency can change across minor versions and make backporting very risky.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "Thanks for the clarification.", "username": "Jerome_LAFORGE" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Release policy for MongoDB Go driver
2020-10-08T08:52:32.517Z
Release policy for MongoDB Go driver
2,012
null
[ "dot-net" ]
[ { "code": "", "text": "Hi there,My C# code uses the latest version of MongoDB.Driver and I'm trying to get data from a MongoDB secondary server.\nI wonder how I can set SetSlaveOK() in my C# code, or maybe run an rs.slaveOk() command before querying.\nTested with Robo 3T and it works just fine if I run rs.slaveOk() before any other command.\nI have no mongodb client on any machine, just credentials to connect to this secondary server.\nCan you guide me on how to solve it?Thank you,\nRadu.", "username": "Radu_Pozneac" }, { "code": "readPreference=secondaryreadPreference=secondaryPreferred", "text": "Hi @Radu_PozneacThis is also known as read preference. If you use a MongoDB URI you can append the option readPreference=secondary or readPreference=secondaryPreferred.Same thing can be set directly with the MongoClientSettings.ReadPreference property", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
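For reference, a hedged sketch of the read-preference replacement for slaveOk mentioned above; hosts, credentials, and the collection name are placeholders:
// on the connection string (works for the C# driver and any other driver):
//   mongodb://user:pass@host1:27017,host2:27017/mydb?replicaSet=rs0&readPreference=secondaryPreferred
// or, when experimenting from the mongo shell:
db.getMongo().setReadPref("secondaryPreferred")
db.mycollection.find({ status: "open" })   // this read may now be served by a secondary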
Set slaveOK with MongoDB.Driver
2020-10-08T10:43:00.679Z
Set slaveOK with MongoDB.Driver
2,997
null
[]
[ { "code": "$and$and", "text": "I have recently modified a complex multi-field query to use $and instead of simply assigning the fields as direct properties of the query. The idea being to sequence things so it fails fast, instead of applying every condition, even if we know it should not be, if the previous condition is already false.For example, if we have a collection of people with a schema looking as follows:Then a condition such as:My understanding, based on documentation, is that without the $and all the conditions will be evaluated, even if the previous conditions have already evaluated to false.I am now being challenged to show that there is a performance improvement in doing this, so I have started writing a test case for this, but can anyone confirm my understanding is correct and whether there is any existing documentation or test cases showing the performance gain?", "username": "Andre-John_Mas" }, { "code": "", "text": "Hi @Andre-John_Mas,I think that the difference between the $and and just comma separated values os just syntactic. Don’t believe there is any difference from execution planning.When the document is inspected for filtering the entire document is read therefore failing fast is really negligible…Anyway, to improve the described query you should build compound indexes for all equility fields this is what will definitely improve performance.In general, you should index fields based on the following rule: Equity, Sort , Range (ESR).Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "$and/* eslint-disable no-console */\nimport mongoose from 'mongoose';\n\nlet User;\n\nfunction generateRandomString (length = 6) {\n return Math.random().toString(20).substr(2, length)\n}\n\nasync function createEntries () {", "text": "I ended up writing the test scenario and did find indexed fields coupled with and $and as part of the query did speed things up. Sometimes we were talking a 20ms improvement, going from 22ms to 2ms. For a site with a lot of data and traffic this can add up. Also, since it is such as simple optimisation I’ll take it.The test scenario I wrote:\nFeel free to comment on it. I have made it public so it can be improved on.", "username": "Andre-John_Mas" }, { "code": "db.User.getIndexes()", "text": "Hi @Andre-John_Mas,I am not familiar how mongoose create the indexes? Is it compound index?Can you provide db.User.getIndexes()? 
Also I would like to see the explain plan from both attempts…I suspect that they are not using the same indexes therefore you see performance difference.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "console.log(await connection.models.User.collection.getIndexes());\n{\n _id_: [ [ '_id', 1 ] ],\n 'address.countryCode_1': [ [ 'address.countryCode', 1 ] ],\n 'address.town_1': [ [ 'address.town', 1 ] ]\n}\n$andfalse", "text": "For Mongoose index docs:I added the following line to my code:and got the following outputThe test is straight forward:The official documentation states: $and uses short-circuit logic: the operation stops evaluation after encountering the first false expression.You indicate that:I suspect that they are not using the same indexes therefore you see performance difference.Can you provide any official documentation, beyond the one I shared, that suggests there should be no performance difference as you stated?", "username": "Andre-John_Mas" }, { "code": "db.User.createIndex({'name': 1,\n 'address.town': 1,\n 'address.countryCode': 1})\n", "text": "Hi @Andre-John_Mas,I didn’t mean that the short circuit does not exist but when occuring on the same object its ngelegible as inspecting two additional comparison in a document we have to read is very fast. To speed this up we need to use index compound on all fields to not read the document at all if not all conditions are met.You have an index only seperately on countryCode or on town instead of creatingThis is what really should tune that query without relying on order of the and expression.Read more hereBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "AND> db.test.explain().find({a:1, b:1})\n....\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"a\" : {\n\t\t\t\t\t\t\"$eq\" : 1\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"b\" : {\n\t\t\t\t\t\t\"$eq\" : 1\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n....\n> db.test.explain().find({$and: [ {a:1}, {b:1} ]})\n...\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"a\" : {\n\t\t\t\t\t\t\"$eq\" : 1\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"b\" : {\n\t\t\t\t\t\t\"$eq\" : 1\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n....\n", "text": "Hi @Andre-John_MasI’d like to answer directly to this question:Can you provide any official documentation, beyond the one I shared, that suggests there should be no performance difference as you stated?It’s mentioned in the $and page:MongoDB provides an implicit AND operation when specifying a comma separated list of expressions.To see that this is the case:and using $and:Both queries shows identical parsed query explain output, so they’re the same query. In fact, the comma separated query was internally translated to use $and. I’m not sure why your test shows a difference between the two cases, but they should not differ.One way to check is currently in the test you posted, you use the comma-separated case first, then the $and case second. Try reversing their order to eliminate the possibility of cache warm up interfering with the timing.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "I’m not sure why your test shows a difference between the two cases, but they should not differ.One reason that might explain this is the cache behavior. Let assume you start with a cold server with the working set out of RAM. If you perform the implicit and first, you spend a couple of extra CPU cycle to bring everything in RAM. 
By running the explicit $and afterwards on the now-hot server, with the working set in RAM, you will definitely obtain better numbers.Not enough information was supplied to know if this was the case, but I think it is a possibility.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
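A hedged sketch of how to compare the two query forms while controlling for the cache warm-up effect described above: run each form more than once and compare executionStats rather than wall-clock time. Field values are illustrative and only loosely mirror the test scenario:
const implicitStats = db.users.find({ name: "alice", "address.town": "Springfield" })
                              .explain("executionStats").executionStats;
const explicitStats = db.users.find({ $and: [ { name: "alice" }, { "address.town": "Springfield" } ] })
                              .explain("executionStats").executionStats;
printjson({
  implicitMillis: implicitStats.executionTimeMillis,
  explicitMillis: explicitStats.executionTimeMillis,
  implicitDocsExamined: implicitStats.totalDocsExamined,
  explicitDocsExamined: explicitStats.totalDocsExamined
});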
Performance benefit of $and
2020-10-05T19:52:14.679Z
Performance benefit of $and
2,659
null
[]
[ { "code": "[\n {\n '$search': {\n 'compound': {\n 'must': [], \n 'mustNot': [\n {\n 'phrase': {\n 'query': 'bar', \n 'path': [\n 'title', 'description'\n ]\n }\n }\n ], \n 'should': [], \n 'filter': []\n }\n }\n }, {\n '$sort': {\n 'ratemin': 1\n }\n }\n]\n{\"errorCode\":\"OPERATION_ERROR\",\"message\":\"Reason: [23:47:46.227] Error running aggregation for '****.testlots' on process '****-cluster-shard-00-01-****.mongodb.net:27017' : [23:47:46.227] Error calling aggregation in coll (*****.testlots) for connParams=*****-cluster-shard-00-01-****.mongodb.net:27017 (local=false) partialRes:[[]] : [23:47:46.227] Error executing WithClientFor() for cp=****-cluster-shard-00-01-****.mongodb.net:27017 (local=false) connectMode=SingleConnect identityUsed=mms-automation@admin[[MONGODB-CR/SCRAM-SHA-1]][24] : (MaxTimeMSExpired) Remote error from mongot :: caused by :: operation exceeded time limit\",\"version\":\"1\",\"status\":\"ERROR\"}\n", "text": "Hi,I got problem with sort and MustNot in $search compoundI got this every time :But if i do same pipeline by changing mustNot by must it’s working.( 500k documents in collection ) ( M40 )\nAny idea ?Thanks.", "username": "Jonathan_Gautier" }, { "code": "near[\n {\n '$search': {\n 'compound': {\n 'must': [\n {\n 'near': {\n 'path': 'some _numeric_field',\n 'origin': 0, //0 for ascending || an integer larger than total results for descending\n 'pivot': 1\n }\n }\n ], \n 'mustNot': [\n {\n 'phrase': {\n 'query': 'bar', \n 'path': [\n 'title', 'description'\n ]\n }\n }\n ], \n 'should': [], \n 'filter': []\n }\n }\n }\n]", "text": "@Jonathan_Gautier one way to accomplish the sort would be to use the near operator. It will only work for numeric, date, or geo data types. Try it out and let me know if that helps.For the query you provided, it might look something like this:", "username": "Marcus" }, { "code": "", "text": "XD @MarcusLittle tricky but i will try !I hope we can make simple sort and count directly in $search in future ", "username": "Jonathan_Gautier" } ]
Cant do sort after $search MustNot compound
2020-09-25T23:53:34.450Z
Cant do sort after $search MustNot compound
2,708
null
[ "node-js", "change-streams" ]
[ { "code": "const stream = collection('mydb').watch([\n { $match: { \n operationType: { \n $in: ['insert','update'] }, \n 'fullDocument.client_id': client_id+'' \n } \n }\n])\nconst stream = collection('mydb').watch([\n { $match: { \n client_id: client_id+'' \n } \n }\n])\n", "text": "Hi,MongoDB Atlas Cluster v3.6I have a watch stram and want to apply a filter.My Watch starts with…Where client_id is object in the document ( sting of o_id );I’ve also tried…But in both cases the stream includes updates for all changes to ‘mydb’ irrespective of a match on client_id ?Any ideas where I’m going wrong ?", "username": "Peter_Alderson" }, { "code": "mydb", "text": "Hi @Peter_AldersonThe pipeline seems to be malformed. Why is there a double quote in the end of client_id field value?Where do you run this code, in node or mongo shell?Have you tried using a parameter and printing the match stage so you see its correct?Is the collection name mydb?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "[\n {\n \"$match\": {\n \"operationType\": {\n \"$in\": [\n \"insert\",\n \"update\"\n ]\n },\n \"fullDocument.client_id\": \"5f69cbbb01e0610ab651e146\"\n }\n }\n]\n", "text": "I think there may have been a typo in the copy, so here are the stringified versions of the actual pipelinesI’ve just re-tried the first pipeline, see below, and this now seems to be working OK.Not sure what I got wrong, but thanks for your guidance.Peter", "username": "Peter_Alderson" } ]
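One detail worth adding to the resolved pipeline above: for update events the fullDocument field is only populated when the stream is opened with the updateLookup option, so a $match on fullDocument.client_id would otherwise drop updates silently. A hedged sketch in the same Node.js style as the thread, with an illustrative client_id value:
const stream = collection('mydb').watch(
  [ { $match: {
        operationType: { $in: ['insert', 'update'] },
        'fullDocument.client_id': '5f69cbbb01e0610ab651e146'
  } } ],
  { fullDocument: 'updateLookup' }   // ask the server to attach the full document to update events
);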
MongoDB collection.watch $match
2020-10-08T09:30:57.487Z
MongoDB collection.watch $match
3,110
null
[]
[ { "code": "", "text": "I’m using KMongo, and I’m looking for a way to automatically update documents inside mongo db, every time a document is touched/retrieved/updated/… I was wondering if there is some built in functionallity or some specific way to do this. Thank you in advance!", "username": "Violeta_Costea" }, { "code": "", "text": "Hi @Violeta_Costea,Kmongo is a 3rd party project for kotlin. The java driver support change streams which is probably what you are looking for.MongoDB triggers, change streams, database triggers, real time\nI am not familiar with kmongo but if its based on the mongo java driver it might have this capability…MongoDB Java Driver documentationIf you can use Atlas as your deployment you can use build in trigger mechanism which is client agnostic:Hope this helps.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hey Pavel,Thank you for your response, it was really useful Kotlin is based on java and there is a function called watch(), that can listen to any changes that go to a db/colleection/deployment, same as shown in the java driver documentation from above.There is a thing that i couldn’t figure out, so while using a change stream, does it act as a middleware? and can I actually interfere with the mongo operation? or is this change stream acting only as a listener that returns all activity that already happened ( for example if I insert a document, will the change stream show me this activity after it was inserted in db or before it was actually inserted? )Hope this makes senseThank you!", "username": "Violeta_Costea" }, { "code": "", "text": "Hi @Violeta_Costea,The change stream listen to the oplog collection and. Filter the needed events after they happened. It cannot block ot interfere with any of the applied operations.This is why the oplog size and resume data are so coupled as the changestream seek for ita information there.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you very much @Pavel_Duchovny! I have a much clearer understanding on how the change stream works now ", "username": "Violeta_Costea" } ]
Autoupdate document field kmongo
2020-09-29T11:04:42.339Z
Autoupdate document field kmongo
2,430
null
[ "security", "configuration" ]
[ { "code": " systemLog:\n [...]\n\n net:\n port: XXXXX\n bindIp: XXXXX\n tls:\n mode: requireTLS\n certificateKeyFile: /mongo/tls/mongo01.pem\n CAFile: /mongo/tls/ca.pem\n allowConnectionsWithoutCertificates: true\n disabledProtocols: TLS1_0,TLS1_1\n\n replication:\n [...]\n\n security:\n authorization: enabled\n keyFile: /mongo/tls/member.key\n authenticationMechanisms: [SCRAM-SHA-1,SCRAM-SHA-256]\n clusterAuthMode: keyFile\n\n storage:\n [...]\n\n processManagement:\n [...]\n", "text": "Hi all, I’m new in this community, i hope I created this topic in the right category.I recently needed to deploy a 3-nodes ReplicaSet and my configuration file is something like this:I have also created a mongo0X.pem file for every server, as you can notice in the certificateKeyFile flag, every signed by the same internal root CA (ca.pem file).When I try to start my mongod instances, despite the presence of ‘clusterAuthMode: keyFile’ (that I supposed it should force in some way the usage of the keyfile ONLY), the servers still check each other’s certificate (that i only want to be used by clients to verify the servers’ identities).\nThis procedure fails because i did not insert the ‘TLS Web Client Authentication’ setting in the certificate itself (returning a SSL invalid certificate purpose).So my question is, why does MongoDB tries to validate the “between servers” certificates even though I told him not to?\nWhat am I doing wrong/misunderstanding?Any help is appreciated. ", "username": "Leonardo_Papini" }, { "code": "", "text": "No one can help me? If more specifications are needed let me know.", "username": "Leonardo_Papini" } ]
Unable to force clusterAuthMode: sendKeyFile between nodes
2020-09-22T08:30:08.752Z
Unable to force clusterAuthMode: sendKeyFile between nodes
1,799
null
[ "data-modeling" ]
[ { "code": "", "text": "Just wanted to understand if we have any reference data model patterns for Reporting & Analytics kind of applications something like Star schema having daily, weekly, monthly, qtrly, yearly fact table(s) & multiple dimension tables to help create business KPI dashboards/reports ?Thanks!", "username": "ManiK" }, { "code": "", "text": "Hi @ManiK,I think the following post will answer your questions:\nCan I use Mongo DB for Star Schema type of Data Model? - #2 by slavaAlso have a look at the bucket pattern over our patterns suggestion for time based analytics:Building with Patterns: A Summary | MongoDB BlogBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks @Pavel_Duchovny ! I’ve already gone through the links you’ve provided above. And, I agree that the pattern provided in the first link is very simple and makes much sense.So let me be more specific here, the goal is not to copy or migrate an existing star schema from a relational DB to NoSQL - MongoDB, instead to design a data model pattern in MongoDB that is as efficient as a star schema and which also enables to do “self-service” analytics kind of reporting.Let’s assume, if we want to process and store the historical data for reporting like a periodic snapshot fact tables like weekly, monthly, quarterly and yearly with some “conformed” dimension tables around like - Date, Geography, Customer, Products etc. There could be several dimensional attributes related to each dimensions and their associated measures, which could make a document very long and bulky (w.r.t document size) and at the same time we also need to ensure the hierarchies are maintained within each dimension like for example:- Date (i.e. Day, Week, Month, Qtr, Year) and Geography (i.e. Region, Country, State, City, County).Also, the user can try to analyze just the sales by customer and/or just the sales by product and time. But a single document would still fetch all the dimensional attributes of other dimensions everytime.Considering all these, would you still recommend to have all the dimensions, associated hierarchies etc embedded - like in the first link you provided ? Or, would it make sense to have separate collections created at different granularity level like - one for weekly, one for monthly and one for quarterly etc. However, in this approach then how do we handle the reporting context - meaning - if a user wants to see or analyze only the “monthly” sales, how do we change the context during runtime that instead of weekly collection, select/use the “monthly” collection for reporting. This might require it to be handled programmatically though, it seems.I guess for such situations instead of just a bucket pattern, what we may need could be a combination of computed and bucket (in separate collections for each period - week, month, qtr, year)?", "username": "ManiK" }, { "code": "", "text": "Just following up again on above topic - mainly, want to understand if I have a need for doing daily, weekly, monthly, quarterly “sales” kind of reporting/analysis. Should we create separate collections for each aggregation level (daily, weekly…) or have all the calculations at different aggregation levels in one single collections ? 
But if we have separate collections, how do we handle the Date “hierarchy” to ensure it properly drills up/down/across?", "username": "ManiK" }, { "code": "", "text": "Hi @ManiK,As you mentioned, the main consideration should be how the data is best accessed, and you should separate your data accordingly.The notion of a “star schema” belongs to relational databases more than to semi-structured ones.Please note that we recommend having as few collections as possible, since having many collections means many files on disk and presents OS limitations as well as memory and disk overhead.On the other hand, time-series data in partitioned collections based on day/week/month is a known pattern for MongoDB, so it might make sense if it respects the data access requirements as well as your system's available resources. Be aware that having many collections may introduce challenges when you decide to shard the environment, or might require much more expensive hardware to scale.I recommend reading all of our design patterns and schema anti-patterns to follow what is best for you (and avoid what is not).A summary of all the patterns we've looked at in this serieshttps://www.mongodb.com/article/schema-design-anti-pattern-summaryHope this helpsBest\nPavel", "username": "Pavel_Duchovny" } ]
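A hedged sketch of the computed/bucket idea discussed above: roll the raw facts up into a monthly summary collection with $merge (an on-demand materialized view) instead of maintaining one hand-built collection per grain. Collection and field names are illustrative assumptions:
db.sales.aggregate([
  { $group: {
      _id: {
        month: { $dateToString: { format: "%Y-%m", date: "$orderDate" } },
        country: "$country",
        productId: "$productId"
      },
      totalAmount: { $sum: "$amount" },
      orderCount:  { $sum: 1 }
  } },
  // upsert the rollups into a summary collection that reports can query directly
  { $merge: { into: "sales_monthly", whenMatched: "replace", whenNotMatched: "insert" } }
])
Quarterly or yearly views can be produced the same way from the monthly summaries, which keeps the date hierarchy a matter of re-aggregation rather than separate hand-maintained collections.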
Data Model Pattern for Analytics/Reporting (Star Schema or Materialized View)?
2020-09-18T18:06:10.404Z
Data Model Pattern for Analytics/Reporting (Star Schema or Materialized View)?
4,449
null
[ "text-search" ]
[ { "code": "", "text": "On text search, is text with trailing spaces handled not same as text without trailing spaces?On SQL Server, because it bases on SQL-92, text with trailing spaces hits text with no trailing space.\nhttps://support.microsoft.com/en-us/help/316626/inf-how-sql-server-compares-strings-with-trailing-spacesThe behavior of MongoDB seems to be obvious. But I can’t find the document that describe it.Our customer had a trouble by the behavior of SQL DB and requests us to express MongoDB’s spec clearly.Best Regards,\nKatsuhiro Mihara", "username": "Katsuhiro_Mihara" }, { "code": "", "text": "Hi Katsuhiro,At least with v4.0.18, the trailing text is significant and processed as part of a contains search on a field with a text index.Regards,\nSteve", "username": "Steve_Hand" }, { "code": "", "text": "Dear Steve,Thank you for declaring the spec. I can answer to the our customer.Best Regards,\nKatsuhiro Mihara", "username": "Katsuhiro_Mihara" }, { "code": "$textORdb.cuppa.insert(\n [\n { _id: 1, name: \"coffee\" },\n { _id: 2, name: \" coffee\" },\n { _id: 3, name: \"coffee \" },\n { _id: 4, name: \" coffee \" },\n { _id: 5, name: \"tea\" },\n ]\n)\ndb.cuppa.createIndex( { name: \"text\" } )\ncoffeedb.cuppa.find( { $text: { $search: \"coffee\" } } )\n{ \"_id\" : 4, \"name\" : \" coffee \" }\n{ \"_id\" : 3, \"name\" : \"coffee \" }\n{ \"_id\" : 2, \"name\" : \" coffee\" }\n{ \"_id\" : 1, \"name\" : \"coffee\" }\n> db.cuppa.find( { $text: { $search: \"coffee \" } } )\n{ \"_id\" : 4, \"name\" : \" coffee \" }\n{ \"_id\" : 3, \"name\" : \"coffee \" }\n{ \"_id\" : 2, \"name\" : \" coffee\" }\n{ \"_id\" : 1, \"name\" : \"coffee\" }\n> db.cuppa.find( { $text: { $search: \"\\\"coffee \\\"\" } } )\n{ \"_id\" : 4, \"name\" : \" coffee \" }\n{ \"_id\" : 3, \"name\" : \"coffee \" }\nexplain()parsedTextQuery> db.cuppa.find( { $text: { $search: \"coffee \" } } ).explain().queryPlanner.winningPlan.parsedTextQuery\n{\n\t\"terms\" : [\n\t\t\"coffe\"\n\t],\n\t\"negatedTerms\" : [ ],\n\t\"phrases\" : [ ],\n\t\"negatedPhrases\" : [ ]\n}\ncoffecoffee> db.cuppa.find( { $text: { $search: \"\\\"coffee \\\"\" } } ).explain().queryPlanner.winningPlan.parsedTextQuery\n{\n\t\"terms\" : [\n\t\t\"coffe\"\n\t],\n\t\"negatedTerms\" : [ ],\n\t\"phrases\" : [\n\t\t\"coffee \"\n\t],\n\t\"negatedPhrases\" : [ ]\n}\n", "text": "Welcome to the community @Katsuhiro_Mihara!Per the Text Search documentation, whitespace characters are not part of the search terms: $text will tokenize the search string using whitespace and most punctuation as delimiters, and perform a logical OR of all such tokens in the search string.This means the behaviour will be similar to SQL-92 by default, however you can achieve a stricter result using phrase matching if appropriate.You can confirm the behaviour by setting up some test data:A search on coffee will match all of these example documents:A search with trailing spaces will perform identically:You can match a trailing space by wrapping the search term in double quotes to perform a phrase match:If you want to understand differences in how these text search queries are processed, you can explain() the queries and look at the parsedTextQuery outcome for the winning plan:In this example, coffe is the stemmed version of coffee in English (according to the Snowball stemming algorithm), and whitespace characters have been removed by default.A query with phrase matching has the same stemming outcome but adds an additional phrase match filter:If you are performing a different type of text search 
(for example a regular expression match or Atlas Search), please provide a sample document, an example of your search query, and the version of your MongoDB server.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Dear Stennie,Thanks for the explanation. This information is helpful.I associate it with an Internet search engine: even if a user enters spaces, the engine searches documents by words only.Best Regards,\nKatsuhiro Mihara", "username": "Katsuhiro_Mihara" } ]
Trailing spaces on text search
2020-09-30T10:46:13.631Z
Trailing spaces on text search
5,864
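The thread above covers $text behaviour; Stennie also mentions regular expression matches as a different kind of search. As an illustrative sketch (not part of the original thread), a regex treats whitespace literally, so it is the simpler tool when a trailing space must be significant. This reuses the `cuppa` sample collection from the answer above; note that a regex query does not use the text index.

```javascript
// Illustrative only — reuses the cuppa sample collection from the thread above.
// Unlike $text, a regular expression treats whitespace literally.
db.cuppa.find( { name: /coffee/ } )     // _id 1, 2, 3, 4 ("coffee" anywhere)
db.cuppa.find( { name: /coffee / } )    // _id 3, 4 (a space directly after "coffee")
db.cuppa.find( { name: /^coffee $/ } )  // _id 3 only (exact value "coffee ")
```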
null
[]
[ { "code": "", "text": "If mongo has replication and I replicate the data on each shard 3x times, why do I need/want RAID to mirror data on the host? E.g., If I’m already redundant with that data, aren’t I just giving up performance? Assuming I would be using RAID-1, aren’t I just giving up half of my write performance and storage space? (I’m not optimizing for read but write)Aren’t 30 shards better than 15 for write performance? Of course losing all three replicas of one shard would be a catastrophe. Please advise.", "username": "Matthew_Zimmerman1" }, { "code": "", "text": "Hi @Matthew_Zimmerman1A comment on RAID. Without RAID when you drop a disk(the most commonly failed component) your node is offline and you will need sync from scratch when disk is replaced. Conversely with a RAID that node would remain online and suffer some performance degradation while the RAID resynchronises.RAID10 is the recommended level for IO intensive operations such as Databases and mongodb.As for sharding the recommendation I received was scale up then scale out. But this is going to be dependent on where your bottlenecks end up.", "username": "chris" }, { "code": "", "text": "Lack of write performance is my issue. Currently in order to max out the CPU/Disk for writing I have 29 shards (3 replicas) on 4 hosts with 22 disks each. So each shard is on one disk. I’m getting performance of about a million records inserted per hour on a collection with 34 indexes.True that yes I will have to restore/resync, but otherwise I’d only be able to have 15 shards as I’m “wasting” those disks in terms of write performance (and storage space in the db too…)I guess to ask the question another way, if I’m already replicated out to 3 hosts, why should I also replicate on the host too?", "username": "Matthew_Zimmerman1" }, { "code": "", "text": "Hi @Matthew_Zimmerman1I don’t think there is a right/wrong answer to your question, really. It’s a matter of tradeoffs, and what your priority is.As @chris mentioned, RAID would help availability on individual node. If there are any issue in the storage part of a node, you would not need to do maintenance from the database side. Thus having RAID helps keep the node from being offline or having to do initial sync which could be an expensive operation that your app can’t afford.On the other side, not using RAID may help with throughput, as you have mentioned. If this is the must-have feature of your app, then the tradeoff is not having redundancy within the individual nodes and would increase their chances of getting disrupted.In conclusion, if availability is your main concern, then I would say that you’re not wasting anything by using RAID, with the expense of speed. Conversely, if throughput is your main concern, sacrificing reliability for speed may be a good tradeoff. I don’t think there’s a single correct answer. It depends on what you need from the system.Best regards,\nKevin", "username": "kevinadi" } ]
Mongodb - replication - raid
2020-10-05T19:52:45.355Z
Mongodb - replication - raid
3,167
null
[]
[ { "code": "", "text": "Hi everyone. I am new using Mongo Atlas and I have some questions.Is it possible to transfer data between two databases in the same cluster. I have two databases called DOMUS-CLIENT and DOMUS-ADMIN, both have the collection User. When a new user is created in the DOMUS-ADMIN, I want that user to be also in the DOMUS-CLIENT database.Thanks in advance", "username": "Jose_Daniel_Oropeza" }, { "code": "", "text": "I think change stream can help.See https://docs.mongodb.com/manual/changeStreams/", "username": "steevej" } ]
Connection between two databases in the same cluster
2020-10-07T18:54:08.769Z
Connection between two databases in the same cluster
2,230
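A minimal sketch of the change-stream approach steevej points to in the thread above, using the database and collection names from the question (inserts into DOMUS-ADMIN.User mirrored into DOMUS-CLIENT.User). The connection string, error handling, and duplicate-key handling are omitted for brevity, and change streams require a replica set (an Atlas cluster already is one) — treat this as a starting point, not production code.

```javascript
// Sketch: mirror newly created users from DOMUS-ADMIN into DOMUS-CLIENT.
// MONGODB_URI is assumed to hold the Atlas connection string.
const { MongoClient } = require("mongodb");

async function mirrorNewUsers() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  const adminUsers = client.db("DOMUS-ADMIN").collection("User");
  const clientUsers = client.db("DOMUS-CLIENT").collection("User");

  // Watch only for newly inserted users on the admin database.
  const stream = adminUsers.watch([{ $match: { operationType: "insert" } }]);

  for await (const change of stream) {
    // Copy the full inserted document into the client database.
    await clientUsers.insertOne(change.fullDocument);
  }
}

mirrorNewUsers().catch(console.error);
```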
null
[]
[ { "code": "", "text": "Hi everyone!\nI’m looking for a solution to find the matching documents in a collection.I’ve got two types of arrays:Type A is a storage, whit an array field containing more elements, for example let’s say letters, but not necessary all possible.Docs of type B containing less elements in the array, also characters but in any variation, order and size. Every character appears just once in a document.I’m looking for a quick way to find the documents of type B which containing just elements showing up in A.Example:storage: [a,b,c,e,f,h,I,j,k,l,o,p,q,z]Type B docs: 1: [a,b,d]\n2: [a,b,c]\n3: [p,h,l,j,q]\n4: [a,b,d,g]result: 2,3Is there any way to define, how many differences (1,2,3…) are allowed to still showing up as a matching result? Any ideas?", "username": "Martin_Schneider" }, { "code": "", "text": "May be you want", "username": "steevej" }, { "code": "", "text": "Not exactly. I dot want to merge them or masking out elements or get an array back!Following example:A barkeeper or mixer has several drinks on hes shelf, but not all posible in the world. He is a document of type A.And there are also documents like recipes for longdriks and cocktails. Those are the type B s. And in the array fild you can find all the ingredients that are needed to mix one of them.I’m looking for a way, to find out, which one the barkeeper is able to mix from hes stock on the shelf. I need a function which returns these documents from the database.In the second step I wanna find out, which one he could mix, if he had one or two types of drinks more on hes self. (Ranking them?)", "username": "Martin_Schneider" }, { "code": "", "text": "The $setIntersectionA and drink 1 is [ a , b ] != size of drink 1 => cannot be mixed\nA and drink 2 is [ a , b , c ] == size of drink 2 => can be mixed\nA and drink 3 is [ p , h , l , j , q ] == size of drink 3 => can be mixed\nA and drink 4 is [ a , b ] != size of drink 4 => cannot be mixedAnd then you may sort on the size of the drink array ascending ( to get the simplest drink first ) or descending ( to get the most expensive drink first ).", "username": "steevej" }, { "code": "", "text": "Okay, I’m understanding what you’re doing, Steeven! And it’s a nice way. But I’m looking for a way to handle all the stuff inside Mongo.Example:A:\n“name”:“Barkeeper”,\n“store”:[“vodka”,“gin”,“soda”,“syrup”,“cherry”,“martini”,“olive”,“lemon”,“bacardi”,“coke”,“apple juice”,“orange”,“tonic”]B:\n“name”:“vodka soda”\n“ingredients”:[“vodka”,“soda”]“name”:“martini”\n“ingredients”:[“martini”,“olive”]“name”:“bacari coke”\n“ingredients”:[“bacardi”,“coke”,“lime”,“ice”]i found an ugly, but working solution for the first step (return just perfect matchs):var filter = [“vodka”,“gin”,“soda”,“syrup”,“cherry”,“martini”,“olive”,“lemon”,“bacardi”,“coke”,“apple juice”,“orange”,“tonic”]db.b.fing({ingredients: {\"$not\": {\"$elemMatch\": {\"$nin\" : filter }}}})This query wil only return “vodka soda” and “martini”, cause there is no ice.For the second part i’m also looking for a solution likely the upper, with theoportunity to define, how many differences i’m allowing.\nIn case it would be 1, “bacardi coke” would also appear as match, cause there is just one item missing.I know, it could be done “manualy” sorting them out and ranking them with a function like in your solution! 
But i don’t want to generate unnecessary traffic by reading out nearly all documents -there could be 1000+ matchs in many cases-, and serching on my own way in my server application for matching data and ranking them creating indexes and doing stuff thats normaly done by the DB.But i’ll keep your solution in mind as it’s the olny one I have yet!", "username": "Martin_Schneider" }, { "code": "", "text": "When publishing sample data, I would recommend that you post them as real JSON document as it is much more simpler and faster for people trying to help to enter the data into their own database.To match documents from one collection (A) to documents of the other one (B) you will need to use an aggregation pipeline. See https://docs.mongodb.com/manual/aggregation/\nYou would start by $match stage for the A document that gives you your source array. You then $lookup into B collection.\nBass", "username": "steevej" }, { "code": "testdata:\n{ \n \"_id\" : ObjectId(\"5f7c5d430d4d5883c0c4b278\"), \n \"name\" : \"A\", \n \"values\" : [\"x\",\"c\",\"b\",\"o\",\"m\"]\n}\n{ \n \"_id\" : ObjectId(\"5f7c5d630d4d5883c0c4b279\"), \n \"name\" : \"B\", \n \"values\" : [\"f\",\"c\",\"m\"]\n}\n{ \n \"_id\" : ObjectId(\"5f7c5d630d4d5883c0c4b27a\"), \n \"name\" : \"C\", \n \"values\" : [\"x\",\"c\",\"b\",\"5\"]\n}\nvar filter = [\"a\",\"b\",\"c\",\"e\",\"f\",\"k\",\"j\",\"l\",\"p\",\"s\",\"m\",\"x\",\"y\"]\n\ndb.test.aggregate([\n\n{$project:{id: \"$_id\",\nname:\"$name\",\nvalues: \"$values\",\noriginSize: {$size:\"$values\"}, \ncommon:{$setIntersection:[ filter,\"$values\"]}}},\n\n{$project:{id: \"$_id\",\n name:\"$name\",\n values: \"$values\",\n common:\"$common\",\n originSize: {$size:\"$values\"}, \n commonSize:{$size:{$setIntersection:[ filter,\"$values\"]}}}},\n \n{$project:{ \n id:\"$_id\", \n name: \"$name\", \n values:\"$values\", \n originSize: \"$originSize\",\n common:\"$common\",\n commonSize: \"$commonSize\", \n difference:{ $subtract:[\"$originSize\",\"$commonSize\"]}}},\n\n{$project:{ \n name: \"$name\", \n values:\"$values\", \n originSize: \"$originSize\",\n common:\"$common\", \n commonSize: \"$commonSize\",\n difference:\"$difference\", \n missing:{$setDifference:[\"$values\",\"$common\"]}}},\n \n{$sort:{difference:1,originSize:-1}},\n\n])\n{ \"_id\" : ObjectId(\"5f7c5d630d4d5883c0c4b27a\"), \n \"name\" : \"C\", \n \"values\" : [\"x\",\"c\",\"b\",\"5\"], \n \"originSize\" : NumberInt(4), \n \"common\" : [\"b\",\"c\",\"x\"], \n \"commonSize\" : NumberInt(3), \n \"difference\" : NumberInt(1), \n \"missing\" : [\"5\"]\n}\n", "text": "Hello!I figured it out, its ugly… it works! Thanks a lot for the help!The code im running on it:just one of the outputs:My question is: how can i store data generated inside the aggregate (var d= difference or something like this), so i don’t have to $project them every time, to use it later?Is it possible to declare functions from these $project sequences end nesting them into each other?Can i transfer data (result) from one to the next aggregate sequence?", "username": "Martin_Schneider" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Matching arrays
2020-10-05T19:52:21.775Z
Matching arrays
3,340
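The last two questions in the thread above (reusing values computed in one stage without re-projecting them, and passing results from one stage to the next) go unanswered. As a sketch: fields created with $addFields (or $set) in one stage simply become part of each document for every later stage, so each value only needs to be computed once. The version below condenses Martin's pipeline using his same `test` collection and `filter` array; the output fields should match the original, but it has not been run against his data.

```javascript
// Condensed version of the pipeline from the thread. Fields added by $addFields
// in one stage can be referenced by name in later stages, so nothing is re-projected.
var filter = ["a","b","c","e","f","k","j","l","p","s","m","x","y"];

db.test.aggregate([
  { $addFields: {
      originSize: { $size: "$values" },
      common: { $setIntersection: [ filter, "$values" ] }
  }},
  { $addFields: {
      commonSize: { $size: "$common" },
      missing: { $setDifference: [ "$values", "$common" ] }
  }},
  { $addFields: {
      difference: { $subtract: [ "$originSize", "$commonSize" ] }
  }},
  { $sort: { difference: 1, originSize: -1 } }
]);
```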
null
[ "mongoose-odm" ]
[ { "code": "ship: {\n type: mongoose.Schema.ObjectId,\n ref: 'Ship',\n required: [true, 'Review must belong to a ship.'],\n", "text": "Hello! I have a model in my code that includes the following:This attaches a “Ship ID” to individual reviews. While nice, I’m wondering if I can attach a specific field in the Ship model called “shipName”. Is there a way for me to do this? I wasn’t sure if I would use a different “type” in the example code. Any help would be appreciated. Thank you.", "username": "Christopher_Clark" }, { "code": "reviewSchema.pre(/^find/, function (next) {\n this.populate({\n path: 'ship',\n select: 'shipName',\n });\n next();\n});", "text": "I was able to accomplish the desired result with this code:", "username": "Christopher_Clark" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Question about referencing information in another DB
2020-10-07T14:51:42.933Z
Question about referencing information in another DB
1,543
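For reference, the same populate options can also be applied per query instead of globally in a pre-find hook — a reasonable alternative when only a few routes need the ship name. This is a small sketch using the path and field names from the thread; it assumes a `Review` model compiled from the `reviewSchema` shown above.

```javascript
// Per-query alternative to the global pre(/^find/) hook shown above.
const reviews = await Review.find()
  .populate({ path: 'ship', select: 'shipName' })
  .exec();
```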
null
[ "replication", "connecting", "containers" ]
[ { "code": "sudo docker exec mongo bash -c 'mongo --eval \"rs.status();\"'\nMongoDB shell version v4.4.1\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"057d296e-b1da-4983-ada0-7b7407860fb2\") }\nMongoDB server version: 4.4.1\n{\n \"set\" : \"rs0\",\n \"date\" : ISODate(\"2020-10-06T13:52:59.964Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(2),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 1,\n \"writeMajorityCount\" : 1,\n \"votingMembersCount\" : 1,\n \"writableVotingMembersCount\" : 1,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1601992370, 1),\n \"t\" : NumberLong(2)\n },\n \"lastCommittedWallTime\" : ISODate(\"2020-10-06T13:52:50.061Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1601992370, 1),\n \"t\" : NumberLong(2)\n },\n \"readConcernMajorityWallTime\" : ISODate(\"2020-10-06T13:52:50.061Z\"),\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1601992370, 1),\n \"t\" : NumberLong(2)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1601992370, 1),\n \"t\" : NumberLong(2)\n },\n \"lastAppliedWallTime\" : ISODate(\"2020-10-06T13:52:50.061Z\"),\n \"lastDurableWallTime\" : ISODate(\"2020-10-06T13:52:50.061Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1601992350, 1),\n \"electionCandidateMetrics\" : {\n \"lastElectionReason\" : \"electionTimeout\",\n \"lastElectionDate\" : ISODate(\"2020-10-06T13:48:40.043Z\"),\n \"electionTerm\" : NumberLong(2),\n \"lastCommittedOpTimeAtElection\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"lastSeenOpTimeAtElection\" : {\n \"ts\" : Timestamp(1601992117, 1),\n \"t\" : NumberLong(1)\n },\n \"numVotesNeeded\" : 1,\n \"priorityAtElection\" : 1,\n \"electionTimeoutMillis\" : NumberLong(10000),\n \"newTermStartDate\" : ISODate(\"2020-10-06T13:48:40.050Z\"),\n \"wMajorityWriteAvailabilityDate\" : ISODate(\"2020-10-06T13:48:40.105Z\")\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"127.0.0.1:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 261,\n \"optime\" : {\n \"ts\" : Timestamp(1601992370, 1),\n \"t\" : NumberLong(2)\n },\n \"optimeDate\" : ISODate(\"2020-10-06T13:52:50Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1601992120, 1),\n \"electionDate\" : ISODate(\"2020-10-06T13:48:40Z\"),\n \"configVersion\" : 1,\n \"configTerm\" : 2,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1601992370, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1601992370, 1)\n}\nsudo docker-compose up --build\nBuilding app\nStep 1/7 : FROM node:12\n ---> 28faf336034d\nStep 2/7 : WORKDIR /usr/src/app\n ---> Using cache\n ---> 528954c980ed\nStep 3/7 : COPY package*.json ./\n ---> Using cache\n ---> ac7231ed1e31\nStep 4/7 : RUN npm install\n ---> Using cache\n ---> 599bd8e45d0e\nStep 5/7 : COPY . .\n ---> Using cache\n ---> 7bef22e63d74\nStep 6/7 : EXPOSE 5000\n ---> Using cache\n ---> 419cba388cef\nStep 7/7 : CMD [\"node\", \"index.js\"]\n ---> Using cache\n ---> f45bf6120b98\n\nSuccessfully built f45bf6120b98\nSuccessfully tagged app_app:latest\nCreating app ... done\nAttaching to app\napp | Node app is running on port... 
5000!\napp | Connected to MongoDB...\nsudo docker-compose up --build\nBuilding app\nStep 1/7 : FROM node:12\n ---> 28faf336034d\nStep 2/7 : WORKDIR /usr/src/app\n ---> Using cache\n ---> 528954c980ed\nStep 3/7 : COPY package*.json ./\n ---> Using cache\n ---> ac7231ed1e31\nStep 4/7 : RUN npm install\n ---> Using cache\n ---> 599bd8e45d0e\nStep 5/7 : COPY . .\n ---> a7073f6db1f1\nStep 6/7 : EXPOSE 5000\n ---> Running in 301b943c791a\nRemoving intermediate container 301b943c791a\n ---> 98c67ab84a1b\nStep 7/7 : CMD [\"node\", \"index.js\"]\n ---> Running in 726ad8399f03\nRemoving intermediate container 726ad8399f03\n ---> 9e59e6e37f0e\n\nSuccessfully built 9e59e6e37f0e\nSuccessfully tagged app_app:latest\nCreating app ... done\nAttaching to app\napp | Node app is running on port... 5000!\napp | db error on connection MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\napp | at NativeConnection.Connection.openUri (/usr/src/app/node_modules/mongoose/lib/connection.js:800:32)\napp | at /usr/src/app/node_modules/mongoose/lib/index.js:341:10\napp | at /usr/src/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:5\napp | at new Promise (<anonymous>)\napp | at promiseOrCallback (/usr/src/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:30:10)\napp | at Mongoose.connect (/usr/src/app/node_modules/mongoose/lib/index.js:340:10)\napp | at Object.<anonymous> (/usr/src/app/index.js:6:4)\napp | at Module._compile (internal/modules/cjs/loader.js:1137:30)\napp | at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)\napp | at Module.load (internal/modules/cjs/loader.js:985:32)\napp | at Function.Module._load (internal/modules/cjs/loader.js:878:14)\napp | at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)\napp | at internal/main/run_main_module.js:17:47 {\napp | reason: TopologyDescription {\napp | type: 'ReplicaSetNoPrimary',\napp | setName: 'rs0',\napp | maxSetVersion: 1,\napp | maxElectionId: 7fffffff0000000000000002,\napp | servers: Map { '127.0.0.1:27017' => [ServerDescription] },\napp | stale: false,\napp | compatible: true,\napp | compatibilityError: null,\napp | logicalSessionTimeoutMinutes: null,\napp | heartbeatFrequencyMS: 10000,\napp | localThresholdMS: 15,\napp | commonWireVersion: 9\napp | }\napp | }\n", "text": "I use two docker containers: one for mongo and one for node.js. I was able to connect them both on the same docker network, but if I set replicaSet on a connection, I can’t connect, it fails. I’ve checked the mongo status and can confirm it’s set to replica, and I think I’ve tried everything - nothing helps.I’m aware of that I’m trying to set replica on a single instance.I’ve spend two days on this already…Here’s what I get when I init my mongo container:\n-deleted-You can see below that replica is indeed set:Here I’m establishing connection without replica(succesfuly):And here I’m actually trying to set the replica set and it fails:My connection string when connecting with replica set:\nmongoose\n.connect(“mongodb://mongo:27017/?replicaSet=rs0”, {\nuseNewUrlParser: true,\nuseUnifiedTopology: true,\nuseCreateIndex: true,\nuseFindAndModify: false\n})\n.then(() => console.log(“Connected to MongoDB…”))\n.catch((err) => console.log(“db error on connection”, err));", "username": "letoke8464_leto" }, { "code": "", "text": "Hi @letoke8464_leto and welcome in the MongoDB Community !There is nothing to really work with here to help you debug this. 
Would it be possible for you to share all that code in a public Github maybe? Especially all the docker & docker-compose files you are using?It’s possible that the MongoDB single node RS isn’t completely set up & ready yet when the NodeJS application is trying to connect to it.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "version: \"3.7\"\nservices:\n mongo:\n container_name: mongo\n hostname: mongo\n build: .\n expose:\n - \"27017\"\n ports:\n - \"27017:27017\"\n networks:\n - resolute\n\nnetworks:\n resolute:\n name: resolute\nversion: \"3.7\"\nservices:\n app:\n container_name: app\n hostname: app\n build: .\n expose:\n - \"5000\"\n ports:\n - \"5000:5000\"\n networks:\n - innerResolute\n\nnetworks:\n innerResolute:\n external:\n name: resolute\nFROM node:12\n\nWORKDIR /usr/src/app\n\nCOPY package*.json ./\n\nRUN npm install\n\nCOPY . .\n\nEXPOSE 5000\n\nCMD [\"node\", \"index.js\"]\nconst express = require(\"express\");\nconst mongoose = require(\"mongoose\");\nconst app = express();\n\nmongoose\n .connect(\"mongodb://mongo:27017/?replicaSet=rs0\", {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n useCreateIndex: true,\n useFindAndModify: false\n })\n .then(() => console.log(\"Connected to MongoDB...\"))\n .catch((err) => console.log(\"db error on connection\", err));\n\napp.get(\"/\", (req, res) => {\n res.send(\"Node app is up and running...\");\n});\n\napp.listen(5000, () => {\n console.log(\"Node app is running on port... 5000!\");\n});\n", "text": "Indeed, I probably not setting it right(despite it connects without replica set successfully). Hopefully we could resolve this together with your help.Regards the possibility that MongoDB isn’t completely ready at the moment NodeJS is trying to connect:\nI connect them manually, first the MongoDB container and only then NodeJS. Also, before I start NodeJS, I check if replica is indeed set on the mongo, I’ve posted log in my first message.I will post my code here, if you do need some more info, please tell me.Mongo Container (docker-compose):Mongo Container (Dockerfile):I’ve tried to connect to host mongo:27017 instead of 127.0.0.1, but then I can’t get replica set when I check it via terminal.FROM mongo\nRUN echo “rs.initiate({’_id’:‘rs0’, ‘members’:[{’_id’:0,‘host’:‘127.0.0.1:27017’}]});” > /docker-entrypoint-initdb.d/replica-init.js\nRUN cat /docker-entrypoint-initdb.d/replica-init.js\nCMD [ “–bind_ip_all”, “–replSet”, “rs0” ]NodeJS Container(docker-compose):NodeJS Container (Dockerfile):NodeJS Container (index.js ):Let me know if you need extra info on anything.", "username": "letoke8464_leto" }, { "code": "", "text": "Don’t use 127.0.0.1 for the replica name in the rs.init().It needs to be addressable. Make it mongo1 say, make sure the container name matches so other containers can resolve it.", "username": "chris" }, { "code": "", "text": "Thanks to both of you I was able to resolve this. Thanks a lot!", "username": "letoke8464_leto" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Able to connect to containerized DB, but only without replica set
2020-10-06T15:44:39.738Z
Able to connect to containerized DB, but only without replica set
9,439
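The fix itself is not shown in the thread above, so here is a sketch of what chris's advice implies: initiate the replica set with a hostname other containers can resolve (the compose service/hostname `mongo`) rather than 127.0.0.1. The exact commands the original poster ended up using may differ.

```javascript
// Run inside (or via docker exec against) the mongo container's shell.
// The member host must be resolvable by other containers on the shared network,
// so use the compose hostname "mongo" instead of 127.0.0.1.
rs.initiate({
  _id: "rs0",
  members: [ { _id: 0, host: "mongo:27017" } ]
});

// The Node.js container can then keep connecting with:
//   mongodb://mongo:27017/?replicaSet=rs0
```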
null
[]
[ { "code": "foobartag{_id: 1, tag: {bab: 123}}\n{_id: 2, tag: {bar: \"BAR\"}}\n{_id: 3, tag: {foo: \"FOO\"}}\n{tag: {$gte: {foo: MinKey()}, $lte: {foo: MaxKey()}} \n// Values aren't always Min/MaxKey, but in this example, I'm trying to match any documents with the key \"foo\"\n", "text": "SIMPLIFIED QUESTION\nWhy is this?bsonWoCompare({foo: true}, {bar: “BAR”})\n25\nbsonWoCompare({foo: “”}, {bar: “BAR”})\n1\nbsonWoCompare({foo: MinKey()}, {bar: “BAR”})\n-16According to https://docs.mongodb.com/manual/reference/bson-type-comparison-order/#objects both results should be positive, right? Field keys are compared first and foo > bar.MORE DETAILED QUESTION\nI have a single key-value pair embedded document, tag, that I’m treating as a tuple (basically). Example collection:I am trying to do a range query on the ENTIRE embedded document. For example:According to https://docs.mongodb.com/manual/reference/bson-type-comparison-order/#objects:\n1. Recursively compare key-value pairs in the order that they appear within the BSON object.\n2. Compare the key field names.\n3. If the key field names are equal, compare the field values.\n4. If the field values are equal, compare the next key/value pair (return to step 1). An object without further pairs is less than an object with further pairs.Other places I’ve been asking:\nhttps://jira.mongodb.org/browse/SERVER-51258", "username": "Scott_Crunkleton" }, { "code": "test:PRIMARY> db.c.find()\n{ \"_id\" : 1, \"tag\" : { \"bab\" : 123 } }\n{ \"_id\" : 2, \"tag\" : { \"bar\" : \"BAR\" } }\n{ \"_id\" : 3, \"tag\" : { \"foo\" : \"FOO\" } }\ntest:PRIMARY> db.c.find({\"tag.foo\": {$exists: 1}})\n{ \"_id\" : 3, \"tag\" : { \"foo\" : \"FOO\" } }\n", "text": "Hi @Scott_Crunkleton and welcome in the MongoDB Community !I’m not really answering your question but here is the right way to find documents with a specific key in a sub-document:What are you trying to do exactly?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "{_id: 1, name: \"foo\", tags: [{bpSystolic: 120}, {bpDiastolic: 80}, {bloodGlucose: 86.4}]}\n{_id: 2, name: \"bar\", tags: [{alert: true}, {bloodGlucose: 124.3}]}\n// Get patients with systolic blood pressure 150+.\n{tags: {$elemMatch: {$gte: {bpSystolic: 150}, $lt: {bpSystolic: MaxKey()}}}}\n\n// Get all patients that have a blood glucose reading.\n{tags : {$elemMatch: {$gte: {bloodGlucose: MinKey()}, $lte: {bloodGlucose: MaxKey()}}}}\n\n// Get all patients that have a blood glucose reading AND a systolic blood pressure reading.\n{$and: [{tags : {$elemMatch: {$gte: {bloodGlucose: MinKey()}, $lte: {bloodGlucose: MaxKey()}}}}, {tags : {$elemMatch: {$gte: {bpSystolic: MinKey()}, $lte: {bpSystolic: MaxKey()}}}}]}\n\n// Get patients with an alert.\n{tags: {alert: true}}\n\n// Sort patients by systolic blood pressure descending.\n{tags : {$elemMatch: {$gte: {bpSystolic: MinKey()}, $lte: {bpSystolic: MaxKey()}}}} (sorted by {tags: -1})\ntags", "text": "I’ll try to give a better example of what I’m trying to do. Here is a simplified collection of medical patients:With index:\n{tags: 1}As you can see, I have multiple tags (which I’m trying to treat as 2-tuples: key, value) on a patient. A tag key may or may not be present for a patient. Those that do exist, I would like to be sortable/range-queryable. Some example queries:A quick explanation for why I’m trying this route:\nWe have patients that need to be queried/ordered in a lot of different ways. I previously had created 30+ indexes to support those queries. 
I’m trying to merge these different data points into one indexed field, tags.", "username": "Scott_Crunkleton" } ]
Querying and sorting whole embedded documents
2020-10-05T22:54:18.970Z
Querying and sorting whole embedded documents
2,941
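The thread above does not reach a resolution. One common way to get what Scott describes — many optional, indexable key/value tags under a single index — is the attribute pattern from MongoDB's schema-design guidance: store each tag as a { k, v } sub-document and query with ordinary operators inside $elemMatch, which avoids whole-document BSON comparisons and MinKey/MaxKey bounds entirely. The `k`/`v` field names and the `patients` collection name below are illustrative, not from the thread.

```javascript
// Attribute-pattern sketch for the patient example above. Each tag becomes a
// { k, v } pair so one compound index covers every tag key.
db.patients.insertMany([
  { _id: 1, name: "foo", tags: [ { k: "bpSystolic", v: 120 }, { k: "bpDiastolic", v: 80 }, { k: "bloodGlucose", v: 86.4 } ] },
  { _id: 2, name: "bar", tags: [ { k: "alert", v: true }, { k: "bloodGlucose", v: 124.3 } ] }
]);
db.patients.createIndex({ "tags.k": 1, "tags.v": 1 });

// Patients with systolic blood pressure 150+.
db.patients.find({ tags: { $elemMatch: { k: "bpSystolic", v: { $gte: 150 } } } });

// Patients that have any blood glucose reading.
db.patients.find({ tags: { $elemMatch: { k: "bloodGlucose" } } });

// Patients with both a blood glucose and a systolic reading.
db.patients.find({ $and: [
  { tags: { $elemMatch: { k: "bloodGlucose" } } },
  { tags: { $elemMatch: { k: "bpSystolic" } } }
]});
```

Sorting by a particular tag's value would typically be done in an aggregation stage (extracting the matching element first), since sorting directly on a multikey field sorts by the array's extreme values.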
null
[]
[ { "code": "exports.getHighestRatings = catchAsync(async (req, res, next) => {\n const highestRatings = await Review.aggregate([\n {\n $sort: { rating: -1 },\n },\n ]);\n\n res.status(200).render('aggdisplay', {\n title: 'Fun Stats!',\n data: {\n highestRatings,\n },\n });\n console.log(highestRatings);\n});\n${highestRatings.rating}", "text": "So, I’m a little new to this… so bear with me. I was able to set up a route with a GET request that products aggregation pipeline results for me in JSON. It’s a pretty simple request that lists “review ratings” from highest to lowest. This is the code, and I have it in a viewController file:I’ll note that the console.log here displays the expected content. However, when I try to render anything from highestRatings in my Pug template file, it comes back as “undefined.” This is an example of the pug code:extends base\nblock content\narticle#mainArticle\neach rating in highestRatings.rating\nspan= ${highestRatings.rating}It was brought to my attention by some crazy guy with a Viking hat that my problem may not be with MongoDB as much as it is with displaying data in JSON format on my front end. So, I’m guessing I should be searching how to properly format and render JSON in my pug template file. Can anyone confirm, provide me with thoughts / feedback? Thank you!", "username": "Christopher_Clark" }, { "code": "dataextends base\nblock content\narticle#mainArticle\neach rating in data.highestRatings.rating\n span= ${data.highestRatings.rating}\nres.status(200).render('aggdisplay', {\n title: 'Fun Stats!',\n highestRatings,\n});\n", "text": "Hey @Christopher_Clark! Welcome to the MongoDB Community! So happy to see you made the jump from Twitter! It’s been a couple of years since I’ve used Pug. I believe that the second argument when rendering a Pug template is the data that you want to be sent to the template. Since you are assigning your MongoDB payload to the data key, your MongoDB data needs to be accessed through that key. What happens when you try the following?A better way to send to a payload to your template would be:Then you could use your original template with no modifications (plus it’s “cleaner”).This is an easy to miss issue (I’ve actually personally done this so many times), and you got 99.999% of the way there by yourself, which is so impressive. Let me know if this solution works for you.", "username": "JoeKarlsson" }, { "code": "exports.getTopShips = async (req, res, next) => {\n const topShips = await Ship.find().sort({ ratingsAverage: -1 });\n res.status(200).render('topRated', {\n title: `Top Ships`,\n topShips,\n });\n};", "text": "Thank you, Joe! So, since I’m really only sorting from highest to lowest, I was able to find a simpler way to go about this using the following approach in the controller file. The big different being the use of .sort vs. using the aggregation pipeline. Names of the variable have changed from the original example, but same principal.", "username": "Christopher_Clark" }, { "code": "", "text": "That’s great! So glad you got it working - your code looks great! Be sure to let me know when you’re done! I’d love to check it out Also, let me know if you are interested in joining me on Twitch to talk about your project!", "username": "JoeKarlsson" } ]
Displaying MongoDB aggregation pipeline results on the front end?
2020-10-06T15:44:35.880Z
Displaying MongoDB aggregation pipeline results on the front end?
2,468