Dataset schema: image_url (string), tags (sequence), discussion (list), title (string), created_at (string), fancy_title (string), views (int64).
null
[ "security" ]
[ { "code": "", "text": "I have noticed that in the settings of a webhook there are 2 options for the request validation methods.\n“Payload Signature Verification” seems to be pretty safe as it uses a secret signature inserted inside the header.\nAs for the “Secret as a Query Parameter” option, how secure can it be? Is inserting a secret key inside the url as a query parameter, e.g. in the HTTP POST method, highly insecure since it is very easy for a user to read the url and immediately get the secret key written inside it (e quindi superare con facilità la sicurezza della richiesta di validazione del webhook)?\nIs it highly inadvisable to use this validation method or is security still guaranteed?", "username": "Andrea" }, { "code": "", "text": "Hi @Andrea,Its more a question what the calling platform can provide. If the calling platform is a third party where you can’t control headers or calculate signeture providing at least a secret is more secure than not providing anything or running as System.This is why we have different authentication methods for webhooks as we already discussed (application, script etc…)Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Security request validation methods webhook
2020-09-18T17:46:10.334Z
Security request validation methods webhook
1,719
https://www.mongodb.com/…_2_1024x690.jpeg
[]
[ { "code": "", "text": "Hello fellow MongoDB’ers!My family and I are in stay-at-home isolation in the Pacific Northwest (Oregon, USA), wrapping up week three since our state governor declared a state of emergency due to COVID-19. We instantly went from three people at home full time (me, my husband, and our 18 month old) to five people at home full time. Now, I’m managing homeschool for our 10 and 11 year old girls, working on launching some cool digital initiatives to keep our community engaged and growing (more on this soon!), and getting ready to move houses. Needless to say, I’ve been staying busy!That said, I am also a veteran remote worker and I know the switch to remote work can be incredibly challenging for those who are used to their routines, regular engagement with coworkers, and just generally leaving their home environment on a daily basis. But enough about pants…Tell me about what you’ve been doing to stave off the inevitable cabin fever that comes with being in lock down all day. The first thing we did was download some new games on our Nintendo Switch, including the new Animal Crossing: New Horizons game (and to my dismay, replacing our lost Mario Kart 8 cartridge with a new $60 USD digital download). We’ve also started taking daily family walks around the block, led by the 18 month old, of course. I think we’ll keep this habit once the stay home order is lifted.How about you? How are you avoiding going stir crazy right now?P.S. here’s the view from the house I’m moving into tomorrow:\n\nScreenshot_20200326-234032_Chrome1078×727 492 KB\n", "username": "Jamie" }, { "code": "", "text": "Nice view!\nI am one of those that have been preparing for something like this all of their lives. Not really liking going outside, being in a crowded environment and having to socialize much.\nAnd now I get to hang out with the missus all day. I missed that, as we used to spend a lot of time together when we were younger.\nSo far, I am liking it!", "username": "DavidSol" }, { "code": "", "text": "I’m a bit of a hermit myself, which is why remote work fits so naturally for me. That’s fab that you’re finding and appreciating time spent with your lady in the midst of all this. I’ve seen a few posts lately around the internet about people struggling to cohabitate with their significant others right now and i just… don’t get it?", "username": "Jamie" }, { "code": "", "text": "I guess if you are used to spend time away, and suddenly you find in close quarters, it can be disconcerting. I am lucky to be enjoying it, as I missed it.", "username": "DavidSol" }, { "code": "", "text": "As life long nerd myself, used to spend extended periods of time in front of computer, this time doesn’t really change my lifestyle that much. As I started own company a bit over six month ago, and been working home since then (and had over six months of period of unemployment before that, so being at home), my habits are pretty same as they used to be.I’m one of those who has more friends online than offline, and I’ve been keeping contact with people as usual. Biggest difference is maybe some old friends surfacing in our gaming sessions once again, as they’ve been busy with work and families. And many games have massive influx of new players, or players playing extensive periods of time. Both are nice, and I welcome people to explore this lifestyle.So, how do I fight off cabin fever usually? Not trying to live in haste. Taking time to work, and also taking time to do other time. 
I have spent a lot of time cooking before, and that habit will continue. Gaming is still excellent way to escape from home, so that’s what I encourage to everyone to try. There are so many different kind of games that I’m sure everyone find something that sparks their interest ", "username": "kerbe" }, { "code": "", "text": "Online gaming is a great way to make connections virtually! I actually met my first husband playing World of Warcraft Which games are you enjoying most right now?", "username": "Jamie" }, { "code": "", "text": "Which games are you enjoying most right now?My GPU broke down in last November, and as my company is still starting up (and struggling, due these times), I have not been able to replace it. So my gaming choises are pretty limited unfortunately.I used to play a lot of EVE Online, but that’s one of those games I can’t really get into right now, partly because lack of GPU, but also I’d like to have subscription, and that’s luxury I’m cutting now. I also played quite much Black Desert Online, but again, lack of GPU… Luckily I love some alternative games too. Dwarf Fortress is maybe game I’d choose to take with me to isolated island, if only one is allowed. I was also lucky to be in Geforce NOW beta test, and as I lost my GPU, grabbed that Founder offer from them on release. So through that I am able to play Rust. If there would be people, some Destiny 2, Warframe, World of Tanks and such too. Previously I spent few thousand hours in ARK: Survival Evolved, but currently I prefer Rust over that.I had hoped that GFN would support GTA5, RDR2 and such… but publishers have been ***** and pulled those games from allowed games list. So I have saved money not to buy those games, as I couldn’t play them, but I would like to experience that story.It is pretty hard to balance how much time to devote in gaming, and how much of time devote to coding. That is best part of having own company, and no employees/co-workers. I can code few days straight through, then rest and sleep properly. Most likely after such coding sprint I want to unwind my brains, so I can then play few days, until that urge is filled (I get beaten so badly that I want to do something else), then I switch again to coding for few days… and cycle repeats. Kind of good that my online gaming partners ain’t that active these days, so they don’t get frustrated that there are times when I play days, then times when I disappear for days. ", "username": "kerbe" }, { "code": "", "text": "I get beaten so badly that I want to do something elseI’ve yet to find this limit I just keep going at it until I’m screaming at the TV. Part of why I’ve resorted to casual play games like Animal Crossing in the last 5 years. (The other part has to do with the tiny humans running around my house.) I recently tried out Horizon Zero Dawn but I haven’t had the attention span to devote to it lately. I miss PC gaming a lot.Tell me more about your company. What are you building? It sounds like you’re doing consulting but also maybe working on a side project?", "username": "Jamie" }, { "code": "", "text": "I don’t play games, although I sit in front of a computer most days. While I’ve got free time I enjoy looking out my window at views like this:\n20200404_1728121589×629 191 KB\n", "username": "Doug_Duncan" }, { "code": "", "text": "Beautiful view! (My elk-hunting husband is jealous.) We have yet to see if we get any herds coming through the new property We’ll have deer, for sure. 
At this point, I’m just hoping for sun!", "username": "Jamie" }, { "code": "", "text": "Yeah, we had a couple dozen deer come through later that same evening. We live on a wildlife preserve so the elk are quite safe coming through the property. They even ignore our heeler mix that runs along the fence line for the most part. Although there was a young buck about a year ago that was running back and forth with the heeler. It was a funny site to see.", "username": "Doug_Duncan" }, { "code": "", "text": "Hello Jamie,\nI am happy that you still find the time to work on new initiatives.Homeschool can be a challenge I am used to have remote projects as consultant since years, you get your routine, you have your habits and of course some flexibility, you seem to be settled and suddenly one of the kids pops in and asks you very detailed questions in Chemistry or IT. hmm, wait young padawan not so fast … you are in your last year and I made my masters in both but … after 20 years (at least for chemistry)… That can be a real challenge.\nAfter luckily passing the challenge the younger girl shows up just for a chat of cause, and , by the way, likes to get a “hint” for math… , no problem it was only my focus time… , didn’t you learned that if the door is closed this is a strong sign ?\nIt seems that we all have similar issues theses days. How do we compensate? We do a lot of stuff out side, being on the country side this is still possible. But I do no more sports with my 16 year “young” boy, this is too frustrating ____ \nAs @Doug_Duncan I do not play computer games. Just COVID-19 makes a difference, I found my old “Monkey Island” Disks (some may recall these things)! I played all night. No Link for Monkey Island (the not so young know it), if someone need a hint I am almost done with part three This is taken from the first scene:\n\ngrafik739×327 131 KB\nAll the best and have an easy move Jamie!\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Monkey Island!!https://media.giphy.com/media/TwKVbfwyZ0Zck/giphy.gif(image larger than 4 MB)", "username": "Jamie" }, { "code": "", "text": "I’ve yet to find this limit I just keep going at it until I’m screaming at the TV.I usually play games which most people consider to be too challenging. That’s how I get my Fun. This comic that is extended regarding Dwarf Fortress:\n\n750×2880 341 KB\n\nSo it doesn’t necessarily take that long. There are only certain amount of losses one can withstand until they need break. DF is good example of that. So is Rust, which I play these days with others. Death and destruction is around every corner there. So it is mostly matter of getting hard enough game. Tell me more about your company. What are you building? It sounds like you’re doing consulting but also maybe working on a side project?I’m in position currently where I would do anything that anyone is willing to pay me. I hope to get into consulting eventually. I would love to help small/medium sized IT companies to improve their development habits, introduce them to DevOps, help them plan their infrastructure and so on… I did that quite much for past companies where I worked, and I enjoy doing that, so hopefully I can scale it to be part of my business.\nAlso helping such IT teams to learn new technologies. From past experience, I know that if you need to study new technology, then plan how to implement it in your company, and then overcome resistance from stakeholders and fellow devs, it is easily 1-2 year personal project you’re doing. 
On the side of your regular job. And as I have already quite much knowledge and interest in these things, I can speed things up dramatically. Easily for technologies and ideologies that I am already familiar, but I would think that I’d be able to take assignment regarding some new tech, study that & then pass that knowledge to IT team. Again, speeding up their learning process dramatically.\nI’ve been in many courses and seen fellow devs take part on courses about new interesting tech… after few days once they return back to own company, they get asked what did you learn, how do we start using this new knowledge? Answer is pretty much always “It looked cool, but no idea how to do it in our company”. So I would love to be more hands on with things they would learn. Not just few days of powerpoints, but more of workshops, and individually planning how these things are taken into real use.That all is what I would love to do. But I’m not there yet. Currently I have one own mobile application idea that I am building. MongoDB is backend for that, operated through AWS Lambda. Actual application is done in React Native, admin site will be done with React, and it will use same API’s as application via Lambda’s. I like concept of serverless, and I see a lot of potential in that for these kinds of things.\nNow there are also handful of potential customers who are considering some sort of applications. Can’t say that they are customers yet, as I have been planning things with them, and not billed anything yet. Somehow word has got out that I can do apps, and sparring ideas with me is good thing, so I’ve been referred by many for these kind of things. Which isn’t bad, as helping companies to drive their digitization forward is one area which I would like to do.All in all, I am in so early stage with my business that it can become almost anything. I am open to all opportunities and time will show which of those refines into profitable business. As I was unemployed way over half an year before starting own company, living costs are driven to ground, so I can do only things I enjoy, and no need to worry about huge pile of bills every month.Once I get this own mobile application in such shape that it can be proudly showcased, I’ll definitely post about it in these forums, showing what kind of stuff can be done with MongoDB Atlas ", "username": "kerbe" }, { "code": "", "text": "Would absolutely love to have a chat about your process and what you built once you’re ready to share some details on your app It sounds like you have some great ideas and a drive to help people. I wish you the best in launching your business & let the community know how we can help. ", "username": "Jamie" }, { "code": "", "text": "Thank you Jamie! @Asya_Kamsky was great help already with my biggest problem. I’ve got quite good amount of insights by checking her answers to other people’s problems also. ", "username": "kerbe" }, { "code": "", "text": "Hello @Jamie , even i’m not working still stuck with dba toys In free times usually solve puzzles with my sons.\ndbatoys800×599 70.2 KB\nAll the best from Sao Paulo-Brazil,\nAlexandre Araujo", "username": "Alexandre_Araujo" }, { "code": "", "text": "Sounds like you’re having a good time @Alexandre_Araujo! What kinds of puzzles are you working on? How old are your kiddos?", "username": "Jamie" }, { "code": "", "text": "they are young 14 and 17 years old and we’re solving a 1000 pieces of London Tower Bridge.", "username": "Alexandre_Araujo" }, { "code": "", "text": "That’s fabulous! 
Would love to see it when you’re done!", "username": "Jamie" } ]
What are you doing to fight off cabin fever right now?
2020-03-27T06:14:42.047Z
What are you doing to fight off cabin fever right now?
9,695
null
[]
[ { "code": "This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.", "text": "Hello!On our application we have been updating our Mongo database from 3.X every now and then and we were at 4.2.8 when we decided to move to 4.4.1. The problem is that after installing 4.4.1 trying to run mongod crashes. When looking mongo.log I see the following error:This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.What could be the cause?Thanks", "username": "Moises_Bonilla" }, { "code": "", "text": "Hi @Moises_Bonilla and welcome in the MongoDB Community !How did you perform the upgrade? What’s your MDB configuration (standalone, replica set) ?Did you follow the documentation?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "featureCompatibilityVersion\"4.2\"featureCompatibilityVersion", "text": "Have you met this prerequisite ?The 4.2 instance must have featureCompatibilityVersion set to \"4.2\" . To check featureCompatibilityVersion :", "username": "chris" }, { "code": "featureCompatibilityVersion", "text": "Hi!It’s decided: I won’t try to update dev on Friday anymore. I was querying the wrong server ^^UIt was the featureCompatibilityVersion, once updated everything worked.Thanks both and sorry for bothering you!", "username": "Moises_Bonilla" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Crash "This version of MongoDB is too recent..." when updating from 4.2.8 to 4.4.1
2020-09-18T14:45:22.002Z
Crash “This version of MongoDB is too recent…” when updating from 4.2.8 to 4.4.1
5,786
null
[]
[ { "code": "", "text": "Hello Devs! Do you have an application that you’ve built that uses MongoDB that you want to share with the Developer Community at large? We are looking for developers and projects to showcase on the official MongoDB Twitch channel. We are looking for a diverse set of projects and developers to join us. No Twitch or professional coding experience is required! Our job is to make you and your project shine for the world!Wondering what our Community Showcase looks like? Checkout these recent broadcasts:Reply to this thread with a brief introduction of your project, and a link to your source code!", "username": "JoeKarlsson" }, { "code": "", "text": "and a link to your source code!Does this mean you primaly are looking for open source projects? Or projects that can showcase their codebase? Haven’t checked those previous Community Showcases, so not sure how code centric they have been. ", "username": "kerbe" }, { "code": "", "text": "Great question @kerbe! The project not being open source isn’t a deal breaker, be it is definitely preferred. If there is no code to discuss directly, the showcase can feel too abstract and “sales-y” to our viewers. However, if you are willing and able to discuss details about the code and project, then we would definitely consider bringing it on! ", "username": "JoeKarlsson" }, { "code": "", "text": "", "username": "Jamie" } ]
Want to be Featured on the MongoDB Community Showcase on Twitch?
2020-09-17T19:56:31.931Z
Want to be Featured on the MongoDB Community Showcase on Twitch?
1,732
null
[ "monitoring" ]
[ { "code": "", "text": "I’m trying to understand why the value of the serverStatus.opcounters will go down from time to time. According to the documentation here: https://docs.mongodb.com/manual/reference/command/serverStatus/#serverstatus.opcounters the opcounter is basically a tally of each operation that occurred since the mongo server restarted.So, if that’s truly the case, wouldn’t that number always be incremented (until it overflows I guess)? Or is there a time-frame that opstatus uses to determine how many operations occurred? Do they roll off after some time? Why would that number go down?The reason I ask is because I was attempting to monitor one of our dev servers, and I saw the numbers going up then when I ran it again 2-3 seconds later they go down, then up again. It seems inconsistent?Does anyone know what I should be seeing here?", "username": "Wyatt_Baggett" }, { "code": "db.serverStatus().opcounters", "text": "Hi @Wyatt_Baggett and welcome in the MongoDB Community !Which command did you use? How are you monitoring this?Also, which version of MongoDB are you using? Since MongoDB 4.2, these values are implemented using 64 bits instead of 32.Also, are you sure the devs didn’t restart your server in the meantime?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "db.serverStatus().opcounters", "text": "Hey @MaBeuLux88!It looks like the script I was using to run this command has an issue. I was comparing the previous values each time to determine the difference each time it ran the db.serverStatus().opcounters command. So that makes sense to me now why I’d see the values going down…", "username": "Wyatt_Baggett" }, { "code": "mongostat", "text": "Totally!\nI think this script is trying to reproduce what mongostat is doing and it would also be much easier to plot these values to get something like what we have built-in MongoDB Atlas.\nimage2285×417 78.2 KB\n", "username": "MaBeuLux88" } ]
Question about opcounters
2020-09-17T22:04:33.813Z
Question about opcounters
3,625
https://www.mongodb.com/…aabdcabf1d5e.png
[]
[ { "code": "", "text": "Hi,How we can Add Field ( I want to create calculated Field for filter my dashboard ) ?Thanks", "username": "Jonathan_Gautier" }, { "code": "[ { $addFields: { fieldName: 0 } } ]\n", "text": "Hey @Jonathan_Gautier - unfortunately this is a gap in the product at the moment - you can’t add dashboard filters on fields that are not part of the data source. We do plan on addressing this in the future, but it is possible to workaround it.You need to add a pipeline to your data source that contains the missing field. One option is to put the actual calculated field definition directly into the data source pipeline, which would mean you no longer need to add it explicitly to each chart.Alternatively if you’d prefer to keep the calculated fields as they are, you could create a dummy calculated field in the data source pipeline, e.g.That would make the field show in the data source with the correct type, and you could add a dashboard filter - but your calculated fields would overwrite the dummy value. Once the filter was added to the dashboard, you could remove the pipeline from the data source.Apologies for the need for a clumsy workaround, but hopefully that’s better than having no workaround at all Tom", "username": "tomhollander" }, { "code": "", "text": "Thanks for your help, i have find where to add my pipeline image1702×108 11.2 KB", "username": "Jonathan_Gautier" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Add Field in Filters Dashboard Charts
2020-09-16T23:44:18.319Z
Add Field in Filters Dashboard Charts
2,721
null
[ "golang" ]
[ { "code": "UnmarshalBSONUnmarshalBSONcursor.Decode()", "text": "I wrote a collection struct that pulls data from mongo in the standard way, and a data struct that represents the collections schema. I receive 3611 results from the cursor, and those decode fine. Then, I made an enum (int), and implemented UnmarshalBSON on that type. After replacing a string field in the schema type with that new custom enum type, I silently get 160 results. When I debug, errors are being returned by my UnmarshalBSON function, but cursor.Decode() never surfaces those errors. Is the UnmarshalBSON interface incompatible with the default registry or something?", "username": "Dustin_Currie" }, { "code": "cursor.Decode()Decoder.Decode()", "text": "Hi @Dustin_Currie - welcome to MongoDB Community!This does sound strange. cursor.Decode() ultimately delegates to Decoder.Decode() - and as far as I can tell that surfaces errors as it should.Would it be possible to create a small code sample that demonstrates this problem?", "username": "Mark_Smith" }, { "code": "UnmarshalBSONfunc ElementTypeFrom(s string) (ElementType, error) {\n\tfor i, e := range elementTypeNames {\n\t\tif e == s {\n\t\t\treturn ElementType(i), nil\n\t\t}\n\t}\n\treturn ElementType(-1), errors.Errorf(\"%v is not a valid ElementType\", s)\n}\n\nfunc (e *ElementType) UnmarshalBSON(data []byte) error {\n\ts, err := bsonrw.NewBSONValueReader(bsontype.String, data).ReadString()\n\tif err != nil {\n\t\treturn errors.Wrapf(err, \"failed to unmarshal %v into element type\", string(data))\n\t}\n\tet, err := ElementTypeFrom(s)\n\tif err != nil {\n\t\treturn errors.Wrap(err, \"failed to unmarshal ElementType\")\n\t}\n\t*e = et\n\treturn nil\n}\nfor cursor.Next(ctx) {\n\t\tq := types.MyTypeThatHasAnElementTypeField{}\n\t\tif err = cursor.Decode(&q); err != nil || cursor.Err() != nil {\n\t\t\treturn nil, errors.Wrapf(err, \"failed to decode %v\", cursor.Current())\n\t\t}\n\t\tresult = append(result, q)\n\t}\n", "text": "I haven’t been able to figure out the problem yet. But, my UnmarshalBSON function is only getting called. 101 times when the struct field is a custom enum. The cursor only outputs those 101 results, without error. When I change the field back to a *string the cursor returns 3662 records.The important cursor code looks like this:", "username": "Dustin_Currie" }, { "code": "", "text": "The was all due to a context cancellation, parallelized goroutines, and my worst copy pasta in the last serveral years. A failed query goroutine was wrapping the nil error of a successful goroutine. That’s why the errors weren’t coming through. Thanks for being a sounding board mongo community.", "username": "Dustin_Currie" }, { "code": "", "text": "Hi @Dustin_Currie,I’m glad to hear that you were able to figure out your issue with cursors and UnmarshalBSON. As always, if you have any driver bug reports or general feedback issues, please feel free to open a ticket in our Jira project (https://jira.mongodb.org/browse/GODRIVER) or a new topic on these forums.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Implementing UnmarshalBSON Truncated Results with no Errors
2020-09-16T04:04:11.829Z
Implementing UnmarshalBSON Truncated Results with no Errors
3,259
null
[]
[ { "code": "at makeError (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\execa\\index.js:174:9)\nat Function.module.exports.sync (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\execa\\index.js:338:15)\nat windowsRelease (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\windows-release\\index.js:39:19)\nat osName (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\os-name\\index.js:39:18)\nat Object.default (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\mongodb-js-metrics\\lib\\resources\\app.js:31:16)\nat result (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\mongodb-js-metrics\\node_modules\\lodash\\result.js:51:40)\nat child.getAttributes (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\mongodb-js-metrics\\node_modules\\ampersand-state\\ampersand-state.js:430:55)\nat child.serialize (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\mongodb-js-metrics\\node_modules\\ampersand-state\\ampersand-state.js:104:24)\nat child.<anonymous> (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\mongodb-js-metrics\\lib\\resources\\base.js:28:26)\nat arrayEach (C:\\Users\\siham\\Downloads\\mongodb-compass-1.22.1-win32-x64\\resources\\app.asar\\node_modules\\ampersand-collection-lodash-mixin\\node_modules\\lodash\\_arrayEach.js:15:9)", "text": "The problem is:\n‘powershell’ is not recognized as an internal or external command,\noperable program or batch file. When I change the path system variables to C:\\Windows\\System32\\WindowsPowerShell\\v1.0 , Compass worked.\nbut how to solve the problem by keeping the path: C:\\Program Files\\MongoDB\\Server\\4.2\\bin ?The full error:loading.js:29 Error: Command failed: powershell (Get-CimInstance -ClassName Win32_OperatingSystem).caption\n‘powershell’ is not recognized as an internal or external command,\noperable program or batch file.", "username": "Alaa_Nasr" }, { "code": "", "text": "You can have both mongodb/bin and powershell paths\nThats what i see on my Windows system", "username": "Ramachandra_Tummala" }, { "code": "", "text": "", "username": "system" } ]
Compass is freezing at "INITIALIZING"
2020-09-18T12:43:24.915Z
Compass is freezing at &ldquo;INITIALIZING&rdquo;
1,839
null
[]
[ { "code": "", "text": "Come and watch our new CTO Mark Porter speak at Techsylvania about “Building Modern Applications: The Data Evolution”.The virtual event Techsylvania 2020 is happening on September 22-23. It consists of over 70 keynotes, panels, workshops, satellite events, executive roundtables, pitching and Q&A sessions. If you’d like one of our free tickets please use the code MongoDBCmpgn when registering.Maybe see you there,\nNaomi ", "username": "Naomi_Pentrel" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Giveaway: Tickets for Techsylvania 2020
2020-09-18T13:57:42.617Z
Giveaway: Tickets for Techsylvania 2020
1,507
null
[ "node-js" ]
[ { "code": " app.post(\"/xx/yy/:item1/:item2\", function (request, response) {\n EndpointA = \"EndpointA\"\n EndpointB = \"EndpointB\"\n ItemLookup = database.collection(MongoInterpret(EndpointB))\n ItemUpdate = database.collection(MongoInterpret(EndpointA))\n apple = request.body.item1\n ItemLookup.find({\"_id\": apple}).toArray((error, result) => {\n if (error) {\n return response.status(500).send(error)\n }\n const tempJson = result[0]\n importObj = {}\n importObj['_id'] = tempJson.apple\n importObj['name'] = tempJson.pear\n ItemUpdate.updateOne(\n { \"listID\": request.params.item2},\n { $addToSet: { \"DocDetails\": importObj } },\n function(error, result) {\n if (error) {\n console.log(error)\n return response.status(500).send(error);\n };\n console.log(result)\n response.send(result);\n }\n )\n return\n })\n });\n result: {\n n: 0,\n nModified: 0,\n opTime: { ts: [Timestamp], t: 30 },\n electionId: 7fffffff000000000000001e,\n ok: 1,\n '$clusterTime': { clusterTime: [Timestamp], signature: [Object] },\n operationTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1598328571 }\n }.......\n\n result: {\n n: 1,\n nModified: 1,\n opTime: { ts: [Timestamp], t: 30 },\n electionId: 7fffffff000000000000001e,\n ok: 1,\n '$clusterTime': { clusterTime: [Timestamp], signature: [Object] },\n operationTime: Timestamp { _bsontype: 'Timestamp', low_: 3, high_: 1598331308 }\n }.......\n", "text": "I’m having a bit of an issue with my node js application where I’m hitting my server to run a nested updateOne query and getting a response back where mongo is telling me that it doesn’t find a document that matched my query to update.I’ve run the same query by hitting the API through a command line, and the document updates perfectly. So far I’ve tried changing the data types for all combinations of elements, and the mongo function to be “find” rather than “updateOne”. I’ve also tried changing the “addToSet” to “set”, and altering the parameters, but haven’t gotten anywhere.Been pulling my hair out over this for a few days, so any suggestions are much appreciatedCode:Response using application:Response using command line:", "username": "Kent_Buboltz" }, { "code": "", "text": "Hey Kent,Were you able to figure out this problem? Since n is 0, I’m wondering if MongoDB can’t find a document that matches your filter. Have you tried printing/debugging request.params.item2 to confirm it matches the listId of a document in your collection? You could also try running a findOne({}) on the ItemUpdate collection to make sure you are connecting and can retrieve anything from the collection.", "username": "Lauren_Schaefer" } ]
UpdateOne in Node
2020-08-25T05:02:09.994Z
UpdateOne in Node
3,742
null
[ "indexes" ]
[ { "code": "", "text": "since we can’t get explain plan on findOne. How does findOne retrieves records? Is it always based on a CollScan?or is it still smart enough to use indexes if exist?", "username": "Bluetoba" }, { "code": "findOne(...)find(...).limit(1).pretty()find()findOne()db.col.findOne({name:\"Max\"})\n_idfindOne()", "text": "Hi @Bluetoba,findOne(...) is just a wrapper for find(...).limit(1).pretty().So if find() would use an index, findOne() will too.You can confirm this by checking if the index was used or not.That’s a screenshot from MongoDB Compass before my query:image1015×341 29 KBNow I run my query:And I refresh in Compass:image1018×344 28.7 KBNote: I think the usage on the _id index comes from Compass , not my findOne().Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks Maxime. It’s good to understand it.", "username": "Bluetoba" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the explain plan behind findOne
2020-09-17T12:06:26.092Z
What is the explain plan behind findOne
1,872
null
[ "legacy-realm-cloud" ]
[ { "code": "", "text": "Hello,\nis it possible to delete realm path programmatically from cloud realm. I mean similar functionality as delete on Studio. Not asking how to remove all objects from realm path, want to nuke off path programmatically.thanks,\n-janne", "username": "Janne_Jussila" }, { "code": "", "text": "Hi JanneHave you tried the documentation HERE ? There’s a few methods described that might help.", "username": "Shane_McAllister" }, { "code": "", "text": "It would be helpful to know if you are using classic Realm 5.x or Beta MongoDB Realm.It would also be good to include if you are using Sync or not as that may change the answer.Oh, and what coding platform?", "username": "Jay" }, { "code": "", "text": "Hello,yes, have tried with documentation. Basically I get http response 503 when trying to run http “DELETE” to right cloud realm. Path I’m using is:", "username": "Janne_Jussila" }, { "code": "", "text": "Jay,we are on classic realm with full sync. We have both Swift for IOS and Javascript (node.JS) backend in use. Back end is having GraphQL access to cloud realm w/ admin rights.", "username": "Janne_Jussila" } ]
Delete path from Realm Cloud programmatically
2020-09-16T07:14:02.448Z
Delete path from Realm Cloud programmatically
3,699
null
[ "vscode" ]
[ { "code": "", "text": "Hello guys,New guy to MongoDB here. I wanted to ask here instead of giving a review on VS Marketplace:I found out about the VS Code extension for MongoDB today and it gave me 2 functionalities that I had wanted ever since I started learning Mongo 2 days ago: color coded statements and Intellisense(or auto-complete).I started trying playgrounds today and to my utter surprise the extension doesn’t seem to be able to do the most basic of tasks: switch databases.Using use(‘db_name’); AND use db_name: Both show that the db has been switched in the output but any commands that I execute AFTER use seem to be working on test(default db) only.Even using db(show active db) shows test even after running db just after use db_nameWhat’s up with the extension? Using cmd(I’m on windows) works fine", "username": "Pranjal_Jain" }, { "code": "use()", "text": "Is this how you are using playgrounds?By design, playgrounds don’t preserve the state of the previous run. Every run is purposely sandboxed. This means that you need to have a use() call to select the database before you run the other commands. This has the advantage that you can save your playground and come back to it later (or run it against another cluster) and always run against the same database that you have explicitly set.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Hello M.M.,Yes that was how I was using playgrounds. I thought to use it like mongo.exe cause that was the only way familiar to me. It makes sense now that you tell it. Thank you for this. One more question that I have: is it possible to assign a shortcut key to the (run playground) command?", "username": "Pranjal_Jain" }, { "code": "MongoDB: Run All From PlaygroundMongoDB: Run Selected Lines From PlaygroundCmd/Ctrl + Alt + RCmd/Ctrl + Alt + S", "text": "Yes, it’s possible. In fact, that’s what I did too.You just open the keyboard shortcuts editor in vs code (Visual Studio Code Key Bindings), search for MongoDB and assign your favorite shortcuts to MongoDB: Run All From Playground and MongoDB: Run Selected Lines From Playground.The new version we’ll ship next week will have default keybindings (Cmd/Ctrl + Alt + R and Cmd/Ctrl + Alt + S) but you’ll always be able to customize them to your preference.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB for VS Code won't change DB
2020-09-17T11:04:43.780Z
MongoDB for VS Code won&rsquo;t change DB
5,841
null
[ "aggregation" ]
[ { "code": "", "text": "Does $lookup has a restrictions on mognodb versions ?is bit tricky question to me, but one of my developer complaints $lookup is working on productions, having issues running on the staging environment, we are using same setup like productions on staging.Thanks’", "username": "Naresh_bavisetty" }, { "code": "$lookup$out$merge$lookup$lookupfrom", "text": "Hi @Naresh_bavisetty and welcome in the MongoDB Community !There are two considerations to be aware of in the $lookup operator.You cannot include the $out or the $merge stage in the $lookup stage.In the $lookup stage, the from collection cannot be sharded.You can find more information about these restrictions in the MongoDB Docs.", "username": "JoeKarlsson" }, { "code": "", "text": "Hi @JoeKarlsson, it really helps! Thank you!Thanks’\nNaresh", "username": "Naresh_bavisetty" }, { "code": "", "text": "Hello @Naresh_bavisettymay I add to Joe’s answer one word of warning?$lookup is NOT meant as replacement for a join. Very often I see folks modeling their data with a tabular (aka relational) mindset. This will not be fun in the end and you will not take advantage of the pros of the flexible datamodel MongoDB provides.\nSo whenever you think that you want to use $lookup - check your datamodel, check if embedding will help to avoid $lookup. Often this comes with a the notion of denormalized data and data duplication. This is not bad, you gain simplicity, and read speed - you pay with some more updates. It will be always a trade of and you will need to think a lot more than in SQL about your data moldel.\n$lookup can make sense e.g. when your model works best in case need to you use references.Cheers,\nMichael", "username": "michael_hoeller" } ]
Does $lookup have any restrictions
2020-09-17T18:48:34.333Z
Does $lookup have any restrictions
1,461
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi team ,Need help for usage of ‘DBref’ or $ref clause in documents. As we know these clauses used for referencing one document to other in mongodb.from some blogs and mongodb manuals I got to know this approach is not very helpful for query\nperformence and should be avoided.httpss://stackoverflow.com/questions/9412341/mongodb-is-dbref-necessary.Can you please suggest , instead of DBref what other alertnative approach we can use for better query performence.Kind regards\nGaurav", "username": "Gaurav_Gupta" }, { "code": "", "text": "Hi @Gaurav_Gupta, if you are not sharding your data and the collections are in the same database you can look into $lookup (available since MongoDB 3.2) or $graphLookup (available since MongoDB 3.4) depending on your needs.Also can you data be modeled in a way that doesn’t require joins between collections? If it can that will also give data retrieval a performance boost. Looking at the MongoDB Data Modeling and Memory Sizing and the Building with Patterns blog posts can help with looking into different ways to build out the data models for different use cases.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi DuncanThanks so much – I will check those options. I don think our environment uses sharding. From somewhere on web I got know ‘Manual linking’ could also be helpful over DBref …but it can only be used in same documents of collections. If we need to link documents in different collections .\nThen have to use DBref only.Does manual linking has any limitations ? How to fix this if we have documents on different collections.", "username": "Gaurav_Gupta" }, { "code": "$lookup{ \"$ref\" : <value>, \"$id\" : <value>, \"$db\" : <value> }\n$", "text": "From somewhere on web I got know ‘Manual linking’ could also be helpful over DBref …Hi @Gaurav_Gupta,As @Doug_Duncan mentioned, definitely consider whether referencing is the best path for your data model and use case. This approach often feels comfortable coming from an RDBMS data modelling background, but may not be taking advantage of some of the flexibility that MongoDB enables. For best performance and outcomes with MongoDB, you need to focus on modelling to support your common application usage rather than strictly normalising data.If you do want to relate documents (which can definitely be appropriate), I strongly recommend using manual referencing. This approach provides the most flexibility if you are likely to want to perform additional manipulation in future (such as using $lookup and other aggregation operators).DBRefs use an older convention which represents references using a document format:The DBRef convention uses $-prefixed keys and generally has limited support in modern drivers, tools, and aggregation queries. The convention isn’t officially deprecated, but isn’t a great choice for modern applications.In addition to the Building with Patterns series of blog posts mentioned in Doug’s earlier comment , I would also consider taking some of the free online courses at MongoDB University. 
There’s a MongoDB for Developers Learning Path with some course recommendations, or you could also dive into courses like M320: Data Modelling if you already have some MongoDB experience.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie ,Thanks so much for suggestions - Will certainly work on that, will get back if further any queries.Thanks again !\nkind regards\nG", "username": "Gaurav_Gupta" }, { "code": "", "text": "Hello, I’v read somewhere on this page https://docs.mongodb.com/manual/reference/database-references/#manual-references, some limitations of using manual referencing.In my case, I’m the modeling phase of my database. And many collections are related through documents. If I use manual reference by considering what I read on this page, the entire document isn’t conveyed to other collection. I’m still thinking if I should change my manual ref with DBRefs.", "username": "Ody" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
DBref usage in mongodb
2020-05-07T08:49:41.512Z
DBref usage in mongodb
8,547
null
[ "server" ]
[ { "code": "", "text": "I want to know the difference between setSlaveOK() and slaveOK().The results of slaveOK() and getMongo().setSlaveOK() were the same, but the results of slaveOK() and only setSlaveOK() were found to be different.I want to know the function of setSlaveOK().I also know that setSlaveOk() has been changed to setSecondaryOk() in 4.4.1 ver.", "username": "Kim_Hakseon" }, { "code": "show dbs\"errmsg\" : \"not master and slaveOk=false\"rs.slaveOk()db.getMongo().setSlaveOk()rs.slaveOkreplset:SECONDARY> rs.slaveOk\nfunction(value) {\n return db.getMongo().setSlaveOk(value);\n}\nreplset:SECONDARY> show dbs\n\"errmsg\" : \"not master and slaveOk=false\"\n\nreplset:SECONDARY> use testdb \t// this is an existing database with a collection testColl\nreplset:SECONDARY> show collections\ntestColl\nreplset:SECONDARY> db.testColl.find()\n{ \"_id\" : ObjectId(\"5f630147076a85deff34973a\") }\nrs.setSlaveOk()", "text": "Hello @Kim_Hakseon,In a replica set, you are connected to one of the secondary nodes with mongo shell. If you run command like show dbs there will be an error: \"errmsg\" : \"not master and slaveOk=false\". You can run read commands only after telling MongoDB so. The command to enable read operations on replica set’s secondary node is:rs.slaveOk()Alternatively, you can use the command:db.getMongo().setSlaveOk()Both the commands are the same (you can type rs.slaveOk when connected to a secondary node and see):About the db.setSlaveOk():The functionality of this command is limited to data within a database only, I see. For example:NOTE: There is no rs.setSlaveOk() command.", "username": "Prasad_Saya" }, { "code": "", "text": "Wow!.. Your answer surprised me.\nI’m so impressed that I want to shout “Eureka.”Thank you very much. ", "username": "Kim_Hakseon" }, { "code": "", "text": "Thanks. But, why did my answer surprise you?", "username": "Prasad_Saya" }, { "code": "", "text": "Knowing something new always made me so interesting that I meant I was surprised to know something new.Thank you ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Difference between setSlaveOK() and slaveOK()
2020-09-17T01:14:53.897Z
Difference between setSlaveOK() and slaveOK()
16,276
https://www.mongodb.com/…c687d5794f6.jpeg
[ "podcast" ]
[ { "code": "", "text": "Great chat with @Kieran_Peppiatt about his company Thena, and how they’re using MongoDB, and Realm. Hope you enjoy it!If you’re a startup using MongoDB and would like to explore talking about your product or service on an episode of The MongoDB Podcast, please let myself (Mike) or Nic (@nraboy ) know and we’ll set up a time to chat.Regards,\nMike", "username": "Michael_Lynn" }, { "code": "", "text": "First MongoDB Podcast that I’ve listened, and I liked it. It was short enough to get me started listening it, and stayed quite nicely on point whole time.Pointer to @Kieran_Peppiatt, Google Calendar has option to set meeting timezone. Easy on both computer and my Android phone, but as I don’t have iPhone I don’t know if it is more hidden there ", "username": "kerbe" }, { "code": "", "text": "Very cool! I love this Podcast!", "username": "JoeKarlsson" }, { "code": "", "text": "Thanks so much! Appreciate the kind words of feedback. We graciously accept reviews, ratings, thumbs-ups, and shares ", "username": "Michael_Lynn" } ]
MongoDB Podcast Ep. 18 Thena with Kieran Peppiatt
2020-09-17T00:25:17.040Z
MongoDB Podcast Ep. 18 Thena with Kieran Peppiatt
5,081
null
[]
[ { "code": "", "text": "Hi, I’m very new and I’m wondering what the syntax is to get a file from GridFS, using the mongofiles console. This doesn’t seem to work:get_id ‘ObjectId(“5f3f0010c5cd5cc7581728e4”)’ -d MyDatabaseI’ve also tried combinations of single/double quote usage for the ObjectId parameter, with no luck.Is this even possible?", "username": "Terry_Wray" }, { "code": "get_idmongofiles get_id '{ \"$oid\": \"5f3f0010c5cd5cc7581728e4\" }'\nget_id\"$oid\"'$oid'", "text": "Hi @Terry_Wray welcome to the community.For get_id, you need to use the Extended JSON ObjectId format. Using your example:Since get_id requires the parameter to be a well-formed extended JSON, please ensure that the quote types are correct, i.e. JSON only recognizes double quotes, so \"$oid\" will work while '$oid' will not.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "@kevinadi, thank you so much for your quick and concise answer!I’ve now used this command in the mongofiles console:\nmongofiles /uri:mongodb+srv://cluster01.myhost.com /username:blah /password:blah /db:myDB put_id “Stairwell logo.png” ‘{\"$oid\":“b78fe9a0cc83da37e410ac2f”}’The file was successfully uploaded, but I noticed that the ObjectId is different from a file I uploaded using just the “put” command (letting GridFS assign the ObjectId).\nI still seem to be doing something wrong.", "username": "Terry_Wray" }, { "code": "❯ ~/bin/mongofiles put image.jpg\n2020-08-25T13:46:43.480+1000\tconnected to: mongodb://localhost/\n2020-08-25T13:46:43.692+1000\tadded gridFile: image.jpg\n\n❯ ~/bin/mongofiles put_id image.jpg '{\"$oid\":\"ffffffffffffffffffffffff\"}'\n2020-08-25T13:47:52.378+1000\tconnected to: mongodb://localhost/\n2020-08-25T13:47:52.404+1000\tadded gridFile: image.jpg\nfs.files> db.fs.files.find()\n{ \"_id\" : ObjectId(\"5f4489a3c4819a2eaf45d7f5\"), \"length\" : NumberLong(6781), \"chunkSize\" : 261120, \"uploadDate\" : ISODate(\"2020-08-25T03:46:43.681Z\"), \"filename\" : \"image.jpg\", \"metadata\" : { } }\n{ \"_id\" : ObjectId(\"ffffffffffffffffffffffff\"), \"length\" : NumberLong(6781), \"chunkSize\" : 261120, \"uploadDate\" : ISODate(\"2020-08-25T03:47:52.392Z\"), \"filename\" : \"image.jpg\", \"metadata\" : { } }\n\"{^\"oid^\":^\"ffff....^\"}\"", "text": "Hi @Terry_WrayI can’t seem to reproduce what you’re seeing:Inside the fs.files collection, the ids seem to be as expected:I’m using mongofiles version 100.1.1, in OSX.Could you double check the version you have? Also, if you’re using Windows, note that the Windows cmd doesn’t recognize single quotes as delimiters. So in Windows the quotes character may need to be escaped like \"{^\"oid^\":^\"ffff....^\"}\"Best regards,\nKevin", "username": "kevinadi" }, { "code": "'{\"$oid\":\"ffffffffffffffffffffffff\"}'\nThank, you! That was the format that I needed!", "text": "'{\"$oid\":\"ffffffffffffffffffffffff\"}'@kevinadi This snip of your code made all the difference: ```\n‘{\"$oid\":“ffffffffffffffffffffffff”}’", "username": "Terry_Wray" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get_ID with ObjectID in mongofiles console
2020-08-21T00:07:05.324Z
Get_ID with ObjectID in mongofiles console
3,469
null
[ "node-js" ]
[ { "code": "Node.jsGraphQLJestTEST_collections", "text": "I’m working within a Node.js application with GraphQL endpoints. I’m mainly using Jest for unit tests, but have started to write integration tests for CRUD on MongoDB collections to cover complex, multi-collection, multi-db, cascading ops in my resolvers.I currently have a few strategies for writing integration tests, but want to have them vetted by the community and hear other options before I choose a strategy to run with. At the moment, I’m running these tests locally, but would like to run them continuously with something like Github actions sometime in the near future.Approaches tried so far:I’m also deciding on whether or not to setup a mongodb-memory-server for each test to allow parallel testing and faster overall performance.", "username": "Matthew_Van_Dyke" }, { "code": "", "text": "Hello @Matthew_Van_Dyke! Thanks for your awesome question!Now, it depends on your definition of what an “integration test” means to you (people have very strong opinions on this subject ) However, traditionally, when writing integration tests that touch a DB in some way or another, it is best practice to mock the actual database call. The purpose of mocking is to skip the complexity and unit test own code.There are mock libraries built into Jest. Using with MongoDB · JestDoes this help?", "username": "JoeKarlsson" } ]
Scaleable way to manage integration tests?
2020-07-28T19:59:51.325Z
Scaleable way to manage integration tests?
2,887
null
[]
[ { "code": "", "text": "I think that it should be very interesting to know the solutions that exist arround the world that are using mongoDB. We can share knowlwdge and bussiness.\nDo some one have a list and contact of these solutions??", "username": "Fernando_Elizaga" }, { "code": "", "text": "@Fernando_Elizaga There are many solutions using MongoDB. Are you interested in any specific use cases or examples?The MongoDB.com site has some interesting starting points:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie, I have reviewed what you send me and there are interesting things. What I am looking for is a partner that has created an attractive solution onMongo. The world of large IBM systems is very closed and they are in need of modernization, this is a good field, but any other new application would help me a lot to spread Mongo in my territory.", "username": "Fernando_Elizaga" }, { "code": "", "text": "Hello @Fernando_Elizaga,If you are looking for smaller demos and blogs walking through various MongoDB solutions, you should check out our Developer Hub. We have lots of great examples that you can try out today! https://www.mongodb.com/learn", "username": "JoeKarlsson" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Solutions on Mongo DB
2020-03-05T15:44:46.049Z
Solutions on Mongo DB
2,278
null
[]
[ { "code": "", "text": "Hello everyone,\nI’ve been using MongDB locally for 1 year now and it worked very fine for me as I only used in small local applicaitons, but this is my first experience to MongoAtlas\nnow I’m considering creating an app that will help tracking the virus using golcation, I already submit to the COVID-19 Battle Form and my project was approved.\nI made few tests and I can tell that tha app is working now (10 users, for 1 weak), but I want to know what are the contrainst that I ll be facing as tha app usage will grow up.the logic of the app is very simple it tracks its users using geolcation and if any of the users is declared covid+ the app send a warning message to all the users that may have encountered him in the past two weeks.\nfor my use case (10 users, 1 weak) it took over 3 minutes to get the result.\nI’m using Node.js Mongoose on Heroku to connect to the database.\nso can anybody help to get to know what are the best practices that I need to follow in order to make it working with few cost for a population of 23 Million users. (I am from Morocco)\nThanks in advance.", "username": "mehdi_wadoud" }, { "code": "", "text": "Hello @mehdi_wadoud! Your app sounds so cool! Wondering if you are still working on this? What kind of best practices are you looking for? Schema Design? System Design? Node Best Practices? Let us know if we can help!", "username": "JoeKarlsson" } ]
Tracking Covid19 using gelocation
2020-04-07T00:58:45.317Z
Tracking Covid19 using gelocation
1,834
null
[]
[ { "code": "", "text": "Hi, I have a Atlas service and created a test database (TestDB). Created a cluster with 2 documents in it. Nothing fancy. Trying to export the data from this cluster using mongoexport on my macbookpro. I am getting the below message:./mongoexport --uri mongodb+srv://Test:@/TestDB --collection=DataTypeSet --out=DTT.json\n2020-09-15T12:34:11.497-0400\tconnected to: mongodb+srv://[REDACTED]@******\n2020-09-15T12:34:11.540-0400\texported 0 recordsI don’t think its a permission issue. Been able to use mongodump to dump the cluster fully. Wanted to get a proper json export. Any ideas? Thanks much !.", "username": "Prekesh_Nduri" }, { "code": "mongoexportJD10Gen:~ jdrumgoole$ mongoexport --version\nmongoexport version: r4.2.7\ngit version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212\nGo version: go1.12.17\n os: darwin\n arch: amd64\n compiler: gc\nJD10Gen:~ jdrumgoole$\n", "text": "I have seen this happen when there is a mismatch between the mongoexport client and the server. To double check try doing an export with Compass which has an export function built in.What version of mongoexport and what version of the server are you running?Here is how to get the mongoexport version:", "username": "Joe_Drumgoole" }, { "code": "mongoexport --version", "text": "mongoexport --versionHi Joe:Here is the outputput from Mongoexport --version./mongoexport --version\nmongoexport version: 100.1.1\ngit version: 8bca136c2a0e0daa3947df31c0624c5615f9aa02\nGo version: go1.12.17\nos: darwin\narch: amd64\ncompiler: gcI have tried to use all the latest versions basically. For MongoDB, I am using the Atlas service. Do you think this could be an issue with compatibility between Mongo tools and Atlas Service? What version of tools/CLI would you recommend for working with Atlas.Thanks much for your guidance.", "username": "Prekesh_Nduri" }, { "code": "Command Line Toolsmongoexport --uri \"mongodb+srv://max:MySafe&[email protected]/test\" --collection col --type json --out col.json\nSecurity > Database AccessIP Access ListSecurity > Network Access", "text": "Looks like you have the latest version of the tools @Prekesh_Nduri.\nIs your MongoDB cluster in 4.4.0? I have also tried with a Free cluster in 4.2.9 and it’s working just fine with mongoexport 100.1.1. If your cluster is in an older version, I would update to make sure version numbers are aligned or use the appropriate mongoexport version.To avoid doing a mistake in the command line, please retrieve the command line from the Command Line Tools tab in Atlas:image1325×340 27.3 KBThen scroll down and you will find this section:image809×468 71.4 KBCopy the mongoexport command line and make sure to replace all the placeholders with the correct values.In the end, my command line looks like this:Note here that I have added double quotes around the URI to avoid an issue if the password contains a “&” for example or another special character that would break the command line logic.Also, make sure the user you are using has enough privileges on this collection in the Security > Database Access menu and make sure your current IP address is in the IP Access List in the Security > Network Access menu.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi, The version is 4.2 and its a MO instance. So I cannot upgrade to a different version it seems. Maybe I will have to delete the current cluster and create another free instance with version 4.2.9 or higher. I created this just recently. 
Didn’t realize its not the latest version.Thanks for pointing that out.", "username": "Prekesh_Nduri" }, { "code": "mongoexportmongoexport", "text": "M0, M2 and M5 instances are shared instances so you cannot upgrade their versions yourself.Deleting and recreating an M0 instance won’t upgrade it to a higher version, you will just be linked again to the same shared cluster. They will be upgraded to 4.4 automatically in a near future by the Atlas team.I tested mongoexport 100.1.1 withAnd mongoexport worked with both. Please double check your command line and your user & IP address as I explained above. It should work for you too.", "username": "MaBeuLux88" }, { "code": "", "text": "I will try this tonite and keep you posted. Thanks much !.", "username": "Prekesh_Nduri" }, { "code": "", "text": "Hi, Wanted to confirm that using a MongoDB version 4.4 fixed the problem and I see that 1 record is getting exported. Thanks for the pointer about version mismatch between the utilities and the MongoDB version. I will mark this thread as closed.", "username": "Prekesh_Nduri" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
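The fix in this thread was aligning the tool and server versions, but if a plain JSON export is all that is needed, the same result can be obtained from PyMongo directly. This is only a sketch; the connection string, database and collection names are placeholders taken from the thread.

```python
# Hedged alternative to mongoexport: stream a collection out as Extended JSON
# with PyMongo. URI, database and collection names are placeholders.
from pymongo import MongoClient
from bson.json_util import dumps

client = MongoClient("mongodb+srv://Test:<password>@<cluster>/TestDB")
collection = client["TestDB"]["DataTypeSet"]

with open("DTT.json", "w") as out:
    for doc in collection.find({}):
        out.write(dumps(doc) + "\n")  # one Extended JSON document per line
```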
Mongoexport producing 0 records for a collection (from Atlas)
2020-09-15T18:26:44.450Z
Mongoexport producing 0 records for a collection (from Atlas)
4,609
null
[ "queries", "python" ]
[ { "code": "for dbname in enumerate(myclient.list_databases()):\n print(\"database name is : \",dbname)\n dbc = myclient.dbname;\n cols =dbc.list_collection_names()\n\n for coll in cols:\n print(\"collection name is : \", coll)\n collection = dbc[coll]\n cursor = collection.find({})\n for document in cursor:\n pprint(document)\n", "text": "HiI am trying to get my hands dirty with PyMongo (latest version) with MongoDB 4.2 (Atlas). I am trying to list out all databases, collections and documents within the collections programmatically. What is the best way to achieve this? I tried the following but something is missing:Any pointers? Really appreciate it. I can loop through if I hard code the DB name . Things are not working when I want to loop through all DBs and then each collection within a DB and then get to the documents in a collection.", "username": "Prekesh_Nduri" }, { "code": "for db_name in conn.list_database_names():\n db = conn[db_name]\n for coll_name in db.list_collection_names():\n print(\"db: {}, collection:{}\".format(db_name, coll_name))\n for r in db[coll_name].find({}):\n print(r)\n print('\\n\\n')\n", "text": "", "username": "chris" }, { "code": "", "text": "Wow, that was so slick :). Really appreciate. I think I was trying to complicate it a bit. Didn’t know how to handle the db_name and coll_name properly. I will mark this as a solution.", "username": "Prekesh_Nduri" }, { "code": "for r in conn.test.foo.find({}):\n print(r)\n", "text": "It is even easier if you are not using variables for the database and collection name as you can just use the attribute style to access those:", "username": "chris" }, { "code": "", "text": "Yes that will be easier if you know the db & collection name ahead of time (say a parameter being passed to a function).", "username": "Prekesh_Nduri" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to iterate over all databases and collections in each db and then print the documents in PyMongo?
2020-09-16T23:17:55.607Z
How to iterate over all databases and collections in each db and then print the documents in PyMongo?
17,203
null
[ "sharding", "ops-manager" ]
[ { "code": "shard0:centos7:27117:anonymous:test:PRIMARY> show dbs\n5ea30f4d15f4800d42dced74 4.392GB\n5efca35993905fd511323efc_sync 0.053GB\n5efca35993905fd511323eff_sync 0.000GB\n5efca35993905fd511323f02_sync 0.000GB\n\nshard0:centos7:27117:anonymous:5ea30f4d15f4800d42dced74:PRIMARY> show collections\noplog_config-rset1-5ee9f32d93905fd5112d53fd\noplog_shard0\noplog_shard1\n\nshard0:centos7:27117:anonymous:5ea30f4d15f4800d42dced74:PRIMARY> db.oplog_shard0.stats().wiredTiger.uri\nstatistics:table:collection-6--3400962913216231357\n\n/data/replSet/1A> l collection-6--3400962913216231357*\n-rw------- 1 mongod mongod 4710215680 Sep 15 17:56 collection-6--3400962913216231357.wt\n", "text": "i don’t understand why have appeared some db’s with large names and taking huge space, i’m testing a couple of collections with few data, and has appeared some db’s like:is it normal?, i dont’ understand why that file is so big and is updated, thanks for the explanation, and is it possible to drop those databases?", "username": "Willy_Latorre" }, { "code": "", "text": "Hi @Willy_Latorre,The names of those databases is Ops Manager convention for oplog store and sync stores.Those are components created by the backup which are used for storing backups of Ops Manager.Have you used your backedup deployment as the target for those databses ? If so this is fundamentally wrong.Oplog stores needs to be a separate instance or replica set.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "test.testcolMongoDB Enterprise mongos> show dbs \nadmin 0.000GB\nconfig 0.003GB\ntest 0.000GB\nMongoDB Enterprise mongos> use admin\nswitched to db admin\nMongoDB Enterprise mongos> show collections\nsystem.keys\nsystem.version\nMongoDB Enterprise mongos> use config\nswitched to db config\nMongoDB Enterprise mongos> show collections\nactionlog\nchangelog\nchunks\ncollections\ndatabases\nlockpings\nlocks\nmigrations\nmongos\nshards\nsystem.indexBuilds\ntags\ntransactions\nversion\nMongoDB Enterprise mongos> use test\nswitched to db test\nMongoDB Enterprise mongos> show collections\ntestcol\nsh.status()test.testcolMongoDB Enterprise shard2:PRIMARY> show dbs \nadmin 0.000GB\nconfig 0.001GB\nlocal 0.001GB\ntest 0.000GB\nMongoDB Enterprise shard2:PRIMARY> use admin \nswitched to db admin\nMongoDB Enterprise shard2:PRIMARY> show collections\nsystem.version\nMongoDB Enterprise shard2:PRIMARY> use config \nswitched to db config\nMongoDB Enterprise shard2:PRIMARY> show collections\ncache.chunks.config.system.sessions\ncache.chunks.test.testcol\ncache.collections\ncache.databases\nrangeDeletions\nsystem.indexBuilds\nsystem.sessions\ntransactions\nMongoDB Enterprise shard2:PRIMARY> use local\nswitched to db local\nMongoDB Enterprise shard2:PRIMARY> show collections\noplog.rs\nreplset.election\nreplset.initialSyncId\nreplset.minvalid\nreplset.oplogTruncateAfterPoint\nstartup_log\nsystem.replset\nsystem.rollback.id\nMongoDB Enterprise shard2:PRIMARY> use test\nswitched to db test\nMongoDB Enterprise shard2:PRIMARY> show collections\ntestcol\nMongoDB Enterprise shard2:PRIMARY> db.getReplicationInfo()\n{\n\t\"logSizeMB\" : 15910.21054649353,\n\t\"usedMB\" : 1.5,\n\t\"timeDiff\" : 2833,\n\t\"timeDiffHours\" : 0.79,\n\t\"tFirst\" : \"Tue Sep 15 2020 21:04:28 GMT+0200 (CEST)\",\n\t\"tLast\" : \"Tue Sep 15 2020 21:51:41 GMT+0200 (CEST)\",\n\t\"now\" : \"Tue Sep 15 2020 21:51:44 GMT+0200 (CEST)\"\n}\n", "text": "Hi @Willy_Latorre,EDIT: @Pavel_Duchovny answered while I was typing this ! 
I hope you find what you need in our answers .Looks like you are connected on the shard0 of your MongoDB Sharded cluster.\nYou should not do manipulations directly on a shard in a sharded cluster. Your queries should go through the mongos.Here are the collections I can see from a fresh sharded cluster with just one sharded collection test.testcol from the mongos.In my case, sh.status() reports that my only chunk for test.testcol is on my shard2. Here is the content of my shard2 replica set.So the above is what you should expect in your cluster too.Now it looks like you are wondering what the oplog collection is from what I see above and it’s normal if this collection is a little bit big as it’s the collection that participate in the replication process in this shard / replica set. It contains the latest write operations this particular replica set did so far. The oplog is also a capped collection, meaning its size will depend on the size you chose to give it when you configured your mongod nodes. The bigger it is, the more history it can store. The oldest operations are overwritten by new ones.You can actually see the oplog size along with the first and last time entries with this command:In a prod cluster, the more you have, the merrier! That’s going to give more opportunities for a server to catch up if it is stopped and has to catch up.\nI hope this helps.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi, i’m not using backup, it’s disabled in ops manager, so, the big question is. can i drop those databases?", "username": "Willy_Latorre" }, { "code": "shard0:centos7:27117:anonymous:test:PRIMARY> db.getReplicationInfo()\n{\n\t\"logSizeMB\" : 990,\n\t\"usedMB\" : 986.48,\n\t\"timeDiff\" : 1280352,\n\t\"timeDiffHours\" : 355.65,\n\t\"tFirst\" : \"Wed Sep 02 2020 00:42:24 GMT+0200 (CEST)\",\n\t\"tLast\" : \"Wed Sep 16 2020 20:21:36 GMT+0200 (CEST)\",\n\t\"now\" : \"Wed Sep 16 2020 20:21:42 GMT+0200 (CEST)\"\n}\nMongoDB Enterprise mongos> show dbs\n5ea30f4d15f4800d42dced74 4.392GB\n5efca35993905fd511323efc_sync 0.053GB\n", "text": "it’s only 990Mb", "username": "Willy_Latorre" }, { "code": "", "text": "Hi @Willy_Latorre,If you confident that there is no Ops Manager deployment using this in your organisation you can drop those. Be aware that the related backup will b corrupted an unrecoverableBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel, i have Ops manager currently, and the backup is not enabled on it, i mostly use mongoshell", "username": "Willy_Latorre" } ]
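For completeness, the oplog window shown above with db.getReplicationInfo() can also be computed from a driver. The sketch below assumes a direct connection to one replica set / shard member; the URI is a placeholder.

```python
# Hedged sketch: compute the oplog window (first to last entry) with PyMongo.
from pymongo import MongoClient

client = MongoClient("mongodb://shard0-member:27117")  # placeholder, direct connection
oplog = client["local"]["oplog.rs"]

first = oplog.find_one(sort=[("$natural", 1)])
last = oplog.find_one(sort=[("$natural", -1)])
window = last["ts"].time - first["ts"].time  # Timestamp.time is seconds since epoch
print("oplog window: {:.2f} hours".format(window / 3600))
```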
Some databases have been created by sharding
2020-09-15T18:26:49.743Z
Some databases have been created by sharding
2,246
null
[ "golang" ]
[ { "code": "internal server error", "text": "Project Repo:I’m trying to create a route to filter results on the server before returning the json.Here’s a gist of my first attempt: Filter Functions with Filter Type\nI’ve removed the bson.M from the filter. Now the code creates an internal server error.\n@nraboy", "username": "awe_ful" }, { "code": "", "text": "Hi @awe_ful (nice nickname !) and welcome in the MongoDB Community !We have a Go code sample here that is using filters.Hopefully this helps.\nMore about the MongoDB Open Data COVID-19 project in the DevHub .Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "missing key in map literalcompilerFilterTransactionFilterTransactionfilterFilterTransactionbson.MfilterTranxomitempty// ...\n\nfilter := bson.M{}\n\nif filterTranx.BudgetID != nil {\n filter[\"budget_id\"] = *filterTranx.BudgetID\n}\n\n// ...\n", "text": "Hey @awe_ful,Thanks for taking our Twitter conversation to the forums. To fill in any missing context for users with similar questions, a digest of our conversation on Twitter is as follows:@nraboy\nI saw some of your #mongodb #golang tutorials. Have you tried doing any filter queries?What are you interested in?Gist: https://bit.ly/2ZGHbHK I’m trying filter transactions but I’m getting a missing key in map literalcompiler error. They’re financial tranx & I’d like to be able to filter by vendor, budget, etc. Curious if you or anyone else has tried it. I’ve done it in js before.Since your struct has the BSON annotations you don’t need to wrap the filter in bson.M. If that doesn’t work, create a thread on http://www.mongodb.com/community/forums/ and tag me. I’ll check it in more detail in the morning.So I had a look at your Gist. I believe the problem is because you’re using pointers for each of your FilterTransaction fields and are not properly dereferencing them when trying to use them. I’m also not sure BSON annotations will work on pointer variable types.So we have a few options:If you go with option #1, you don’t need to be checking if nil for each field because you have the omitempty on the field. It will be ignored anyways with that field so you are doing double the work.In regards to dereferencing in option #2, I mean something like this:I’d go with option #1 if you can.Let me know if you’re still stuck.Best,", "username": "nraboy" }, { "code": "", "text": "Thank you. I’ll check it out.\nI’m thinking that the issue isn’t with MongoDB, but with the decoder.", "username": "awe_ful" } ]
Utilizing Filters with a Golang REST API
2020-09-16T23:17:53.024Z
Utilizing Filters with a Golang REST API
4,704
null
[ "on-premises" ]
[ { "code": "stitch.logError flushing log item; error: an API RequestLogItem requires a domainIDHash {\"api_type\": \"client\", \"co_id\": \"5f62421f276f1bb679fefeww\", \"http_remote_addr\": \"127.0.0.1\", \"http_proto\": \"HTTP/1.0\", \"http_method\": \"POST\", \"http_path\": \"/api/client/v2.0/app/mongodb-charts-isaby/auth/providers/local-userpass/login\", \"http_pattern\": \"/app/:client_app_id/auth/providers/:auth_provider_name/login\", \"http_status\": 500", "text": "Hi All,I have created docker container from “Quay” and i am using mongodb version 3.4.26. I created user from mongo cli but when I am trying to login, I am getting below error in stitch.logError flushing log item; error: an API RequestLogItem requires a domainIDHash {\"api_type\": \"client\", \"co_id\": \"5f62421f276f1bb679fefeww\", \"http_remote_addr\": \"127.0.0.1\", \"http_proto\": \"HTTP/1.0\", \"http_method\": \"POST\", \"http_path\": \"/api/client/v2.0/app/mongodb-charts-isaby/auth/providers/local-userpass/login\", \"http_pattern\": \"/app/:client_app_id/auth/providers/:auth_provider_name/login\", \"http_status\": 500", "username": "VenkataNikhil_Thonda" }, { "code": "", "text": "Hi @VenkataNikhil_Thonda and welcome in the MongoDB Community !MongoDB 3.4 is VERY old now and MongoDB Charts needs at least 3.6. It’s specified in the documentation.While you are at it… I would suggest upgrading to MongoDB 4.4. MongoDB 3.4 was released in Nov 2016 and isn’t supported since January 2020.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo Charts not able to process request
2020-09-17T03:57:34.592Z
Mongo Charts not able to process request
3,326
null
[]
[ { "code": "", "text": "HelloI want to understand what would be the actual behaviour of the WIRED Tiger engine in the below caseWrite IOPS is very high, exceeding provisioned or expected capacity for several 5-mins intervals together, almost 2-3 times of expected capacity. For eg if its M10 or M20 the peak IOPS is 100. Lets say if the IOPS is going beyond 250 consistently for 30 minsWhat exactly happens during this sceario. Would the Wired Tiger engine queue all the operations and then keep clearing the queue based on Disk IOPS. If this is the case, assuming the relevant documnts are in memory set (cache) would it do the writes to the memory cache first and then write to the DB. So basically even if the write operation takes time to get completed (based on how far it is down the queue), future reads of the same document wont be affected since it will be read from cache.This is my guess. I would like to know how the WiredTiger engine works", "username": "Prasanna_Venkatesan" }, { "code": "", "text": "Hi @Prasanna_Venkatesan and welcome in the MongoDB Community !I will let someone answer your question about WiredTiger but I just wanted to mention that, usually, high IOPS are generated because too many documents are evicted from the RAM too soon.If these frequently accessed documents can’t stay in RAM long enough, you are consistently fetching the same documents over and over again from the disk and your RAM isn’t large enough to keep them loaded. Adding more RAM would reduce the evictions and your queries would find more often what they need in RAM directly without the need to fetch on disk.Usually high IOPS == your working set + indexes + queries and workload don’t fit in your RAM.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Over the top IOPS - Behaviour
2020-09-17T10:33:47.639Z
Over the top IOPS - Behaviour
1,796
null
[ "security" ]
[ { "code": "", "text": "How can I verify that perfect forward secrecy is in use? I have found documentation on pfs but nothing that really says how to verify it (unless I’ve overlooked it). I am running Enterprise Mongo 4.0.18 and 4.2.8 on rhel7. Also, I see that there could potentially be performance lag involved. Is anyone familiar with pfs that has seen any concerns with it? Are there any pearls of wisdom to share on pfs?", "username": "JamesT" }, { "code": "net:\n tls:\n disabledProtocols: TLS1_0,TLS1_1,TLS1_2\nTLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256", "text": "Hi @JamesTPFS is a function of the selected cipher suites. So restricting mongod to those is enough to enforce it.The easy button for this one is to use TLSv1.3 only as only PFS ciphers are used.Otherwise you will have to specify the cipher list using opensslCipherConfig\nThe OWASP cheat cheat identifies OWASP string B using TLS1.3 and TLS1.2 and only PFS ciphers as:\nTLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256I like to use sslyze for internal/non-https TLS testing. Deferring to ssllabs and Mozilla Observatory for general web.sslyze will only show that PFS is supported, not which ciphers do.", "username": "chris" } ]
How to check/enable perfect forward secrecy
2020-09-17T11:30:12.701Z
How to check/enable perfect forward secrecy
2,473
null
[ "data-modeling", "sharding" ]
[ { "code": "", "text": "i though of using aggregate lookup queries, but it seems it will have some issues while sharding. I am currently in db design phase only.Also is it anti-pattern to store member_ids(maybe few 100s) as array in group table.i read like updating arrays will include lot of resource usage.Please guide me thanks", "username": "Jose_Kj" }, { "code": "", "text": "Hello @Jose_Kj, welcome to the community.i though of using aggregate lookup queries, but it seems it will have some issues while sharding. I am currently in db design phase only.The docs mention one restriction for the lookup operation: Sharded Collection Restrictions for $lookup. But, there is also a workaround mentioned for that issue.The most important aspect of sharding is the selection of the Shard Key. This is a design aspect. There are a few rules guiding the shard key selection, and please be aware of them. Shard key is important - as the number of shards, the data distribution and the performance of queries depend upon it.It is a good idea to think about the shard key during the design phase of the database as also the application.Also is it anti-pattern to store member_ids(maybe few 100s) as array in group table.i read like updating arrays will include lot of resource usage.There are no restrictions in storing an array of data (e.g., member id’s in a group collection), as long as the number of members stored in the array is definite (means that you know ahead that there will be at most 100 or 1000 of them), and not growing indefinitely.As far as working with arrays, MongoDB Query Language (MQL) has various operators to query and update array data. In addition, there are Aggregation Array and Set Expression Operators. These are optimized to be used with arrays.To get best performance, the array data can be indexed, and these indexes are called Multikey Indexes. There are various tools and techniques which can be used to optimize your queries, to use the indexes.", "username": "Prasad_Saya" } ]
Should I worry about sharding while designing db schema
2020-09-17T09:26:23.753Z
Should I worry about sharding while designing db schema
1,601
null
[]
[ { "code": "", "text": "Hello,If you mongodump a collection with a name including “:” character for example : db._Join:users:_Role\nThis will write _Join%3Ausers%3A_Role.metadata.json.gzAnd then in mongorestore, the db wont be the same as dumped because the collection name is restore with % character.Thanksmongodump --version\nmongodump version: 100.1.1\ngit version: 8bca136c2a0e0daa3947df31c0624c5615f9aa02\nGo version: go1.12.17\nos: linux\narch: amd64\ncompiler: gc", "username": "Christopher_Brookes" }, { "code": "docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:4.4.0 --replSet=test && sleep 4 && docker exec mongo mongo --eval \"rs.initiate();\"\ntestcolcol:testtest:PRIMARY> show collections \ncol\ncol:test\ngetCollection()db.col:test.insert()db.getCollection(\"col:test\").insert({name:\"Max\"})\npolux@hafx:/tmp/mdb$ mongodump \n2020-09-16T20:30:03.862+0200\twriting admin.system.version to dump/admin/system.version.bson\n2020-09-16T20:30:03.863+0200\tdone dumping admin.system.version (1 document)\n2020-09-16T20:30:03.863+0200\twriting test.col to dump/test/col.bson\n2020-09-16T20:30:03.864+0200\tdone dumping test.col (3 documents)\n2020-09-16T20:30:03.864+0200\twriting test.col:test to dump/test/col%3Atest.bson\n2020-09-16T20:30:03.865+0200\tdone dumping test.col:test (1 document)\npolux@hafx:/tmp/mdb$ tree\n.\n└── dump\n ├── admin\n │ ├── system.version.bson\n │ └── system.version.metadata.json\n └── test\n ├── col%3Atest.bson\n ├── col%3Atest.metadata.json\n ├── col.bson\n └── col.metadata.json\n\n3 directories, 6 files\n%3A:testmongorestorepolux@hafx:/tmp/mdb$ mongorestore \n2020-09-16T20:35:36.908+0200\tusing default 'dump' directory\n2020-09-16T20:35:36.909+0200\tpreparing collections to restore from\n2020-09-16T20:35:36.909+0200\treading metadata for test.col from dump/test/col.metadata.json\n2020-09-16T20:35:36.909+0200\treading metadata for test.col:test from dump/test/col%3Atest.metadata.json\n2020-09-16T20:35:36.954+0200\trestoring test.col:test from dump/test/col%3Atest.bson\n2020-09-16T20:35:36.965+0200\trestoring test.col from dump/test/col.bson\n2020-09-16T20:35:36.967+0200\tno indexes to restore\n2020-09-16T20:35:36.968+0200\tfinished restoring test.col:test (1 document, 0 failures)\n2020-09-16T20:35:36.970+0200\tno indexes to restore\n2020-09-16T20:35:36.970+0200\tfinished restoring test.col (3 documents, 0 failures)\n2020-09-16T20:35:36.970+0200\t4 document(s) restored successfully. 0 document(s) failed to restore.\ntesttest:PRIMARY> show collections\ncol\ncol:test\nmongodumpmongorestore$ mongodump --version\nmongodump version: 100.1.1\ngit version: 8bca136c2a0e0daa3947df31c0624c5615f9aa02\nGo version: go1.12.17\n os: linux\n arch: amd64\n compiler: gc\n", "text": "Hi @Christopher_Brookes and welcome in the MongoDB Community !I can’t reproduce your issue.\nHere is what I did to try to reproduce your issue:Note that I had to use getCollection() to insert in this weird collection because the normal db.col:test.insert() didn’t work here.Here is the result of mongodump:Indeed, we can notice the %3A in the file names which is just the representation of : as you can see here.Here is the result in my DB test:As a good practice, I would avoid this kind of weird characters in db and collection names. There are actually naming restrictions in MongoDB’s doc. 
Looks like it’s working for me but apparently something isn’t going well in your case.I guess you have some encoding issues in your shell or maybe you used some options for mongodump or mongorestore that made things awkward for some reasons? I would just stay away to avoid avoid surprises of this kind.", "username": "MaBeuLux88" }, { "code": "mongodump --version\nmongodump version: r4.2.8\ngit version: 43d25964249164d76d5e04dd6cf38f6111e21f5f\nGo version: go1.12.17\n os: darwin\n arch: amd64\n compiler: gc \n\n\n mongodump --gzip\n ls\n _Join:users:_Role.bson.gz \n", "text": "Hello Maxime,Thanks a lot for your very complete response. This helped me to figure it out what happens here.\nWhen dumping the same collection with previous mongo-tools new release, there is no special encoding on the “:” character on collections names.The encoding collection name only happens when using the new mongo tool (100.1) version as you seen also on your side.In my case i was dumping the collection with the new version, 100.1, so with “%3A” in final gzip files name and i was restoring on a different machine with mongo-tools previous version. The previous version looks like it does not decode the collection name and restore as it is in the dump folder. I down graded the mongo tool version on the dumping machine to have clean gzip files for now.\nI hope this subject help someone in the future.(I know special characters in collection name is not a good practice but the framework used in this case does not give me the choice )", "username": "Christopher_Brookes" }, { "code": "", "text": "Happy that you found your problem !Cheers,\nMax.image980×709 237 KB", "username": "MaBeuLux88" } ]
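A side note in Python, for anyone hitting the same collection names: names containing ":" cannot be reached with attribute access, but dictionary-style access works, much like db.getCollection() in the shell. The database name below is a placeholder; the collection name comes from the thread.

```python
# Hedged sketch: addressing a collection whose name contains ":" from PyMongo.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["parse"]  # placeholder database

coll = db["_Join:users:_Role"]   # attribute access (db._Join:users:_Role) would not parse
print(coll.estimated_document_count())
```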
Mongodump escaping special collection characters
2020-09-16T14:40:39.916Z
Mongodump escaping special collection characters
6,125
https://www.mongodb.com/…0_2_1024x513.png
[ "aggregation", "dot-net" ]
[ { "code": "", "text": "Mongodb Aggregate Query1366×685 110 KBHow to write aggregate query in C#? Let me know if anything is not clear.I tried to define filter but it fails:var filter1 = Builders.Filter.ElemMatch(“Items”, Builders.Filter.And(Builders.Filter.AnyIn(“Items.ItemId”, workItemWithStartToEndRevs.Select(t => t.WorkItemId).ToList()), Builders.Filter.AnyIn(“Items.ItemRevisionNumber”, workItemWithStartToEndRevs.Select(t => t.StartToEndRev).ToList())));var obj1 = mongoReviewItems.Aggregate().Match(filter1).Project(p => new { ReviewId = p.ReviewId});", "username": "Mursaleen_Fayyaz" }, { "code": "{\n \"_id\": ObjectId(\"...\"),\n \"ReviewId\": 210,\n \"Items\": [\n {\"ItemId\": 100, \"Revision\": 4 },\n {\"ItemId\": 101, \"Revision\": 5 },\n {\"ItemId\": 101, \"Revision\": 6}\n ]\n}\nItems.ItemIdRevision[3, 4]ReviewIddb.collection.find({\"Items\":{\"$elemMatch\":{\n \"ItemId\":100, \"Revision\":{\"$in\":[3,4]}\n }}}, {\"ReviewId\":1});\n{\"_id\": ObjectId(...), \"ReviewId\": 210}\n// Class Mappings\nclass MyDocument\n {\n public ObjectId Id { get; set; }\n public int ReviewId { get; set;}\n public List<Item> Items { get; set; }\n }\nclass Item\n {\n public int ItemId { get; set; }\n public int Revision { get; set; }\n }\n\n// Query\nvar revisionIds = new List<int>();\nrevisionIds.Add(3); \nrevisionIds.Add(4); \n \nFilterDefinition<MyDocument> filter = Builders<MyDocument>.Filter.And(\n Builders<MyDocument>.Filter.ElemMatch(x => x.Items, Builders<Item>.Filter\n .And(\n Builders<Item>.Filter.Eq(y => y.ItemId, 100),\n Builders<Item>.Filter.In(y => y.Revision, revisionIds)\n )));\nProjectionDefinition<MyDocument> project = Builders<MyDocument>\n .Projection.Include(x => x.ReviewId);\n\nvar results = collection.Find(filter).Project(project).ToList();\n", "text": "Hi @Mursaleen_Fayyaz, and welcome to the forumhow to write aggregate query in C#? let me know if anything is not clear.The question is not very clear from the post, but I assumed that you’re having some issues to filter based on a document in an array.For example, if you have the following example document:If you would like to filter documents in the collection where Items.ItemId is 100 and Revision is in range of [3, 4] and only output the ReviewId you could construct the MongoDB query as below:The query above utilises $elemMatch query operator and will provide output as below:Using MongoDB .NET/C# driver you could construct this query as follow:If this does not answer your question, please provide:I’d also recommend to enrol in a free online course from MongoDB University M220N: MongoDB for .NET developers to learn more about application development in MongoDB with .NET.Regards,\nWan.", "username": "wan" }, { "code": "100[Serializable]\n [DataContract(Name = \"WorkItemWithStartToEndRev\", Namespace = \"\")]\n public class WorkItemWithStartToEndRev\n {\n [DataMember(Name = \"workItemId\")]\n public int WorkItemId { get; set; }\n\n\n [DataMember(Name = \"startToEndRev\")]\n public List<int> StartToEndRev { get; set; }\n }\n", "text": "100Thanks wan. you understand correctly. But my query input is List of object\nand object isI want to find each object of List in each document .\nyour provided solution find single object in all document in collection.", "username": "Mursaleen_Fayyaz" }, { "code": "Please review my following code and tell me what's wrong which causing exception. 
Thanks [Serializable]\n [DataContract(Name = \"WorkItemWithStartToEndRev\", Namespace = \"\")]\n public class WorkItemWithStartToEndRev\n {\n [DataMember(Name = \"workItemId\")]\n public int WorkItemId { get; set; }\n\n\n [DataMember(Name = \"startToEndRev\")]\n public List<int> StartToEndRev { get; set; }\n }\n\n [Serializable]\n [DataContract(Name = \"ReviewMetaInfo \", Namespace = \"\")]\n public class ReviewMetaInfo\n {\n\n public static string ReviewCollectionName\n {\n get { return \"ReviewMetaInfos\"; }\n }\n\n\n [DataMember(Name = \"reviewId\")]\n public int ReviewId { get; set; }\n\n\n [DataMember(Name = \"items\")]\n public List<ReviewItemInfo> Items { get; set; }\n }\n\n \n [Serializable]\n [DataContract(Name = \"ReviewItemInfo \", Namespace = \"\")]\n public class ReviewItemInfo\n {\n \n [DataMember(Name = \"itemId\")]\n public int ItemId { get; set; }\n\n\n [DataMember(Name = \"itemRevisionNumber\")]\n public int ItemRevisionNumber { get; set; }\n }\n\npublic OperationResult GetReviewIdsFromMongo(List<WorkItemWithStartToEndRev> workItemWithStartToEndRevs, string projectId)\n {\n OperationResult result = new OperationResult(OperationStatusTypes.Failed);\n\n\n DebugLogger.LogStart(\"MongoReviewController\", \"GetReviewIdsFromMongo\");\n \n IMongoDatabase database = this.mMongoClient.GetDatabase(this.mDbName);\n\n\n IMongoCollection<ReviewMetaInfo> mongoReviewItems = database.GetCollection<ReviewMetaInfo>(ReviewMetaInfo.ReviewCollectionName);\n \n var jsonObject = CommonUtility.Serialize(workItemWithStartToEndRevs);\n\n\n\n\n var scope = new BsonDocument(\"workItemWithStartToEndRevs\", jsonObject);\n\n\n var map = new BsonJavaScriptWithScope(@\"\n function() {\n \n \n for (var i = 0; i < this.Items.length; i++) {\n\n\n for(var item in workItemWithStartToEndRevs)\n {\n if(workItemWithStartToEndRevs[item].workItemId == this.Items[i].ItemId && workItemWithStartToEndRevs[item].startToEndRev.includes(this.Items[i].ItemRevisionNumber))\n {\n emit(this.Items[i].ItemId, { reviewId: this.ReviewId, workItemRevId: this.Items[i].ItemRevisionNumber });\n break;\n }\n \n }\n \n\n\n }\n\n\n\n\n \n }\", scope);\n\n\n var reduce = new BsonJavaScriptWithScope(@\" \n function(key, values) {\n \n var result = []\n \n for(var i in values) { \n\n\n var item = values[i]; \n\n\n result.push({\n 'reviewId' : item.reviewId,\n 'workItemId' : key,\n 'revId' : item.workItemRevId\n });\n }\n\n\n return result;\n \n }\", scope);\n\n\n \n\n\n try\n {\n var results = mongoReviewItems.MapReduce<BsonDocument>(map, reduce);\n }\n catch (Exception ex)\n {\n\n\n \n }\n}\n", "text": "I also tried with mongodb mapreduce for solving this:\nFollowing is the code:\nPlease review my following code and tell me what's wrong which causing exception. 
ThanksAuto%20Generated%20Inline%20Image%2011710×372 32 KB", "username": "Mursaleen_Fayyaz" }, { "code": "", "text": "Please suggestion solution for the give query.", "username": "Mursaleen_Fayyaz" }, { "code": "db.collection.find({\"Items\":{\"$elemMatch\":\n {\"$or\":[\n {\"ItemId\":100, \"Revision\":{\"$in\":[3,4]}}, \n {\"ItemId\":200, \"Revision\":{\"$in\":[1,2]}}, \n ]}\n }}, {\"ReviewId\":1});\nOrvar revisionIds1 = new List<int>();\nrevisionIds1.Add(3); \nrevisionIds1.Add(4); \n \nvar revisionIds2 = new List<int>();\nrevisionIds2.Add(1); \nrevisionIds2.Add(2); \n\nFilterDefinition<MyDocument> filter = Builders<MyDocument>.Filter.And(\n Builders<MyDocument>.Filter.ElemMatch(x => x.Items, Builders<Item>.Filter.Or(\n Builders<Item>.Filter.And(\n Builders<Item>.Filter.Eq(y => y.ItemId, 100),\n Builders<Item>.Filter.In(y => y.Revision, revisionIds1)\n ), \n Builders<Item>.Filter.And(\n Builders<Item>.Filter.Eq(y => y.ItemId, 200),\n Builders<Item>.Filter.In(y => y.Revision, revisionIds2)\n )\n ) )\n );\nProjectionDefinition<MyDocument> project = Builders<MyDocument>\n .Projection.Include(x => x.Reviewid);\nvar results = collection.Find(filter).Project(project).ToList();\n", "text": "Hi @Mursaleen_Fayyaz,I want to find each object of List in each document .\nyour provided solution find single object in all document in collection.Again, it’s not very clear what you’re asking here. Are you wanting to query with a list as below example:If so, you could utilise the Or builder, for example:If this doesn’t answer your question could you elaborate the question with the following:Providing good information to clarify your question helps others to answer your question better.Regards,\nWan", "username": "wan" }, { "code": "> [Serializable]\n[DataContract(Name = \"WorkItemWithStartToEndRev\", Namespace = \"\")]\npublic class WorkItemWithStartToEndRev\n{\n [DataMember(Name = \"workItemId\")]\n public int WorkItemId { get; set; }\n\n\n [DataMember(Name = \"startToEndRev\")]\n public List<int> StartToEndRev { get; set; }\n}\n", "text": "Hi wan,This is my objectand i have List of above objectList of WorkItemWithStartToEndRevand the length of list is dynamic.The length of the array is dynamic. How many “and” filters we apply in “or” filter.[\n{“ItemId”:100, “Revision”:{\"$in\":[3,4]}},\n{“ItemId”:200, “Revision”:{\"$in\":[1,2]}},\n]", "username": "Mursaleen_Fayyaz" }, { "code": "WorkItemWithStartToEndRevFilterDefinitionFind()BsonDocument.Parse()", "text": "Hi @Mursaleen_Fayyaz,I don’t quite understand what you’re trying to ask here.If you have a list of WorkItemWithStartToEndRev then you just have to convert them into FilterDefinition object so that you could pass that on to Find(). Alternatively, you could serialise the object into JSON and parse using BsonDocument.Parse() to convert into BsonDocument.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hello,I don’t know why you are not understanding the actual problem.Let me explain you the problem again. 
For making it simpleFollowing is my C# models for querying mongo collection:\nimage480×1122 57.1 KBand we have list of WorkItemWithStartToEndRev object with n number of items.now my mongo collection C# object is ReviewMetaInfo and it contains Items property of List type of object ReviewItemInfoNow, what query and filter(s) will check all items of List of “WorkItemWithStartToEndRev” in each document of mongo collection object “ReviewMetaInfo” and projection ReviewId list.what we will use in mongo for the above type of query modelsSimple mongo query with filter definitionAggregation PipelineMongo map reduce with C#I would really appreciate your reply on this long running problem.Thanks,\nMursaleen", "username": "Mursaleen_Fayyaz" } ]
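Driver aside, the query shape being discussed can be written once and built from a list of any length. The sketch below shows it with PyMongo purely for brevity; the field names follow the DataMember annotations above and are assumptions about how the C# driver serialized them.

```python
# Hedged sketch: build the $elemMatch/$or filter dynamically from a list of inputs.
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["imdb"]["ReviewMetaInfos"]  # placeholder

work_items = [
    {"workItemId": 100, "startToEndRev": [3, 4]},   # sample inputs of any length
    {"workItemId": 200, "startToEndRev": [1, 2]},
]

or_clauses = [
    {"itemId": w["workItemId"], "itemRevisionNumber": {"$in": w["startToEndRev"]}}
    for w in work_items
]

cursor = coll.find(
    {"items": {"$elemMatch": {"$or": or_clauses}}},
    {"reviewId": 1, "_id": 0},
)
for doc in cursor:
    print(doc)
```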
How to write this aggregate query in C#?
2020-07-21T10:23:40.122Z
How to write this aggregate query in C#?
34,165
null
[ "data-modeling" ]
[ { "code": "{\n nom:'Kox', \n prenom:'Karl', \n gender:'M', \n addres: \n {\n rue: '123 Fake Street', \n appt:108, \n city:'mycity',\n zip_code:'GGG23'\n }, \n class: \n {\n name:'CLASS ONE', \n group:'C', \n section:'SECTION ONE' \n }\n\n}", "text": "I want to create a mongodb database, and use embedded structur. For exemple, consider that the size of each document of the persons 's collection is 16MB. It means that i can not add the sub-document contacts in the person’s collection. 1- In this case what should i do ? 2- If i create the collection of contact, it will be an obligation to reference to the a person. Can we have embeded and reference stuctur in a mongodb database ?Thank you.", "username": "mixd" }, { "code": "", "text": "Hi @mixd,When you say you are considering to embedded the contacts of the user is that all of his contact or a selected portion (favourite, recent etc.)?If the amount of contacts is small in its document size and in the amount of contacts they can definitely be embedded as an array in the users document. Having said that’s if you cannot control the embedded array size you should use an extended document pattern and move the data to another collection reference the owner,_user_id in its contact document.Using a reference to improve performanceAgain, I believe that you should access this data in 2 consecutive queries rather then utelizing a join .Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,Thank you for your answer and your explanation.Let me take another example of a social network project, that i want to use embed document model :\nEach person can have one or many albums. In the collection of person, we have a sub-document albums(12 fileds). John have 150 albums, and the size of the document of John go over 16MB.\n1- In a one-To-Many situation, and to do not go over 16MB for a document, what is the solution ?Thank you.", "username": "mixd" }, { "code": "// User document\n{\nUserId: xxxx\nUsername : ...,\nAlbums : [ {\nAlbumNama : ...}\n...\n<Tenth album>\n],\nTotalAlbums : 150\n, \nHasExtended : true\n}\n\n//Extended Albums\n{\nUserId xxx,\nAlbums : [ {\nAlbumNama : ...}\n...\n<Tenth album>\n]\n, \nHasExtended : true\n}\n", "text": "Hi @mixd,Well if you need to keep that you can consider the following.Storing first few albums in his document and then the rest of the outlier pattern in another collection with this user Id as an index reference to those extra album list.The idea is when the application will show user xxxx profile it will get the first 10 albums to show on screen and will show a button “see more 140 albums” . When user clicks this button you will fetch any amount of needed indexed documents from extended collection.This way you will save lots of uneeded data for most users. Keeping the totals is easy with $inc and if you have to keep it acid you can use transactions.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Storing first few albums in his document and then the rest of the outlier pattern in another collection with this user Id as an index reference to those extra album list.Hi @Pavel_Duchovny, thanks a lot for the answer, i understand well.The idea is when the application will show user xxxx profile it will get the first 10 albums to show on screen and will show a button “see more 140 albums” . When user clicks this button you will fetch any amount of needed indexed documents from extended collection.I understand that also.Thank you.", "username": "mixd" } ]
Embedded and references in a data model
2020-09-06T05:21:31.608Z
Embedded and references in a data model
1,860
null
[]
[ { "code": "", "text": "Hi Experts,\nI have 1 question about the flow of data from MongoDB.\nThe scenario:What I see in the data folder:Could you please explain this case? I am expecting that the data will come all to WiredTiger after interval 60s from the document.", "username": "Duc_Bui_Minh" }, { "code": "test.colWiredTigerLog.0000000003_idnamewatch -n 0.1 ls -l", "text": "Hi @Duc_Bui_Minh and welcome in the MongoDB community !I did a little test.First, I started a for loop and inserted 10K docs in my test.col collection. I did this at 22:40 and some seconds. Then took a screenshot at 22:41:09.\nimage1075×1117 105 KB\nAs you can see in this first screenshot, the WiredTigerLog.0000000003 has been update a few seconds ago but the collection and index files have not been changed even if I’m actively writing to MDB at this moment in time.After a few seconds, my write was done and nothing changed. It was similar to the image above.Then, after 30 seconds, I’m guessing MongoDB reached a checkpoint and I took the following snapshot:\nimage1073×1125 111 KB\nMy understanding is that MongoDB flushed its WiredTiger journal and its content went into the collection and indexes files (my collection has 2 indexes, _id and name).As you can see, MongoDB’s journal size didn’t really change because of the pre-allocation. Also my current MongoDB is running with snappy’s compression algo so the file sizes are affected by this too.So yes, from what I see, MongoDB is flushing its journal every 60 seconds or 2GB.It’s also in the documentation.So to answer your question, I think you are confused because of the pre-allocation. Your journal file is probably empty after 60sec if you stopped writing to MongoDB.You can probably double check this by monitoring the file size like I did with watch -n 0.1 ls -l or something similar.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88,\nAwsome. Thanks so much for your detailed response.\nI have checked again and all data come into WiredTiger after the interval 60s.", "username": "Duc_Bui_Minh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How the data comes from the Journal to WiredTiger
2020-09-15T10:43:39.261Z
How the data comes from the Journal to WiredTiger
2,158
null
[ "legacy-realm-cloud" ]
[ { "code": "", "text": "I purchased Realm Cloud Platform. I just realized that you call it Legacy product. Will it be close soon?\nWhat should I do with my instances at Realm Cloud ?", "username": "Nguyen_Dinh_Tam" }, { "code": "", "text": "Thanks for your post on Realm Cloud. Whilst it is now legacy since the introduction of MongoDB Realm, we do expect it to be still around for possibly the next year.As for your instances, we have previously discussed this in this post from @Ian_Ward here. We don’t have guide just yet, but there are steps to follow there and we do hope to formalise the guide in the next few weeks.I hope this helps.", "username": "Shane_McAllister" }, { "code": "", "text": "Thanks, I hope there’s a tool to simplify the migration as well.", "username": "Nguyen_Dinh_Tam" }, { "code": "", "text": "@Nguyen_Dinh_Tam What kind of tool would you be looking for? It’s a bit difficult for us to “generalize” a tool for all use cases because it is dependent on your schema, your data access pattern (ie. what data do you read/write on the client), and your use case. Feel free to share details here or you can email me at [email protected] and I can reply personally -We are committed to helping all of users migrate from legacy Realm Cloud to MongoDB Realm - please reach out and I will help you personally", "username": "Ian_Ward" }, { "code": "", "text": "Like auto import data from Realm Cloud to Mongo Atlas, and auto migrate user from Realm Cloud. Now I don’t know how to migrate my current user from Realm Cloud.", "username": "Nguyen_Dinh_Tam" }, { "code": "", "text": "there are steps to follow thereis this what you mean by steps to follow?Move your realm data over to MongoDB Atlas documents. Probably by using the realm-js SDK and node.js MongoDB driverWe really don’t like database migrations. They are usually complex, high risk and errors upset our customers and we lose business.So if MongoDB Realm expect us to do a database migration, then devs will take this opportunity to consider alternatives and you risk losing us altogether.", "username": "Nosl_O_Cinnhoj" } ]
What should I do with legacy Realm Cloud instances?
2020-09-15T09:56:50.945Z
What should I do with legacy Realm Cloud instances?
3,163
null
[ "serverless", "field-encryption" ]
[ { "code": "", "text": "We are trying to integrate Automatic client side field level encryption with AWS Lambda. I wrote a blog post about a POC, overcoming some of the obstacles around the mongocryptd process here: Using MongoDB client field level encryption with AWS Lambda | MediumHowever, convert this POC into production ready code proves to be challenging. My lambda randomly exists prematurely, but when run again it sometimes works, sometimes not. It seems like there is some sort of race condition going on. Since it works fine without field level encryption, my best guess is, that the mongocryptd process is not fully up and running.Has anyone had success in integrating automatic FLE with lambda?\nHow can I configure the mongocryptd process in a way that it offers some basic debug logs, so I can see the point of failure?\nI know that according to the mongo docs one should set context.callbackWaitsForEmptyEventLoop = false; and cache connections (in order to not flood the cluster with unused connections). But I wonder: Does this create problems with hanging mongocryptd processed?Generally it seems the mongocryptd approach favors a “tranditional” approach of a long running process. What would be the advice for using it in ephemeral function containers (like lambda, Google Cloud functions, etc.)", "username": "Florian_Bischoff" }, { "code": "MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27020 at Timeout._onTimeout \"MongoError\",\"errorMessage\":\"Current topology does not support sessions\"", "text": "Sometime the error is MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27020 at Timeout._onTimeout, but most of the error are \"MongoError\",\"errorMessage\":\"Current topology does not support sessions\" which is strange as I am on an Atlas cluster 4.4 with MongoDB Driver 3.6", "username": "Florian_Bischoff" }, { "code": "", "text": "There is a bug in the node bindings of the libmongocrypt page: libmongocrypt/mongocryptdManager.js at 8530af06643daa28259e5830ce1dff22b6be326a · mongodb/libmongocrypt · GitHubThe spawning function does not wait until the process is up, it merely waits until the spawning of the process has been handed off by Node to the OS land. At least to my understanding of how the node event queue works. This can create a race condition where the driver tries to connect to the cluster while the mongocryptd process is not ready to accept connections yet. In my case it sometimes did, sometimes not.I detailed a fix in the article above (at the very bottom), but it is not pretty and I would love to hear from the devs.", "username": "Florian_Bischoff" }, { "code": "libmongocrypt", "text": "Hi @Florian_Bischoff and welcome to the forums!Thanks for sharing the knowledge and also raising an issue relating to libmongocrypt. There is now a spec discussion changes to accommodate this (default timeout value).Please see NODE-2794 for more information. Feel free to upvote/watch the issue ticket to receive notifications on the ticket.Regards,\nWan", "username": "wan" }, { "code": "", "text": "Hi Wan, thank you. The link you provided leads to a protected resource.:It may have been deleted or you don’t have permission to view it.", "username": "Florian_Bischoff" }, { "code": "", "text": "Hi @Florian_Bischoff,Apologies for that, as the ticket is linked to your reported ticket NODE-2794 you should be able to receive notifications on the progress. 
The ticket is linked as a dependency.Regards,\nWan", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Automatic Client Field Level Encryption with AWS Lambda
2020-08-31T17:34:10.328Z
Automatic Client Field Level Encryption with AWS Lambda
3,920
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello,\nI’m modeling my Mongodb App collection. I wonder if I’ll need to config another server to manage things like users’ accounts, or if it’s possible to do it directly withing Mongodb.\nAlso, is there a limited number of collection recommended?Thanks for your help.", "username": "Ody" }, { "code": "", "text": "Hello @Ody, welcome to the community.I wonder if I’ll need to config another server to manage things like users’ accounts, or if it’s possible to do it directly withing MongodbYou can manage user accounts within MongoDB. Here is general information about security within MongoDB and the features: Security.is there a limited number of collection recommended?Here is a post about limitations:Also, general Frequently Asked Questions about MongoDB.", "username": "Prasad_Saya" } ]
Server and the use of many collections
2020-09-17T03:57:38.114Z
Server and the use of many collections
2,005
https://www.mongodb.com/…6_2_1024x369.png
[ "app-services-data-access" ]
[ { "code": "", "text": "I am new to much of swift, realm, and MongoDB so please forgive the basic-ness of my question.I am simply trying to add “_partition” as a required variable in one of my collection’s schema and am getting an error (see screenshot below). The problem is that I can’t read the error (in red) because part of the message is blocked by the web UI.I’ve tried to click on the error to see if another window opens so I can see the whole thing. Is there something simple I’m missing (e.g., somewhere else to look?) so I can see the error message? I am using realm sync and am currently in developer mode.SchemaError1954×706 90.2 KB", "username": "Donal_MacCoon" }, { "code": "", "text": "The fix for the UI bug is underway and should be out in our release next week. Thank you for reporting!You can directly message me your application so I can take a look at what might be causing the schema error if it’s not resolved yet.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thank you so much for the reply.", "username": "Donal_MacCoon" } ]
Schema error message hidden by realm.mongodb.com rules interface
2020-09-13T16:59:35.962Z
Schema error message hidden by realm.mongodb.com rules interface
3,039
https://www.mongodb.com/…dadaf6127805.png
[ "atlas-functions", "app-services-user-auth", "app-services-data-access" ]
[ { "code": "change userexports = async function updateUserPublicData(arg) {\n const cluster = context.services.get('mongodb-atlas');\n const collection = cluster.db('user').collection('public');\n \n const userId = context.user.id\n \n const updatedUser = await collection.findOne({ _id: BSON.ObjectId(userId) })\n \n return updatedUser;\n};\nauthIdid{\n \"roles\": [\n {\n \"name\": \"owner\",\n \"apply_when\": {\n \"_id\": \"%%user.id\"\n },\n \"insert\": true,\n \"delete\": true,\n \"search\": true,\n \"write\": true,\n \"fields\": {},\n \"additional_fields\": {}\n }\n ],\n \"filters\": [],\n \"schema\": {\n \"title\": \"Public\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"createdAt\": {\n \"bsonType\": \"date\"\n },\n \"updatedAt\": {\n \"bsonType\": \"date\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"image\": {\n \"bsonType\": \"string\"\n }\n }\n }\n}\nUser AuthenticationApplication Authentication", "text": "I’m trying to use the Realm function to update the user data, When I select the specific user from the Realm function change user and try to get the current user document the document is null but I can get the document with the system user.Here is my functionI use Realm authentication and when the user signed I call the Authentication create trigger function and create a new user document and save the user, using the Realm authentication authId for user public document idSample user public documentation\n\nCapture696×123 3.69 KB\nUser public collection ruleFor update user function User Authentication I set Application AuthenticationI’m I missing something?", "username": "chawki" }, { "code": "user.Iduser.IdobjectId%stringToOid", "text": "Hey Chawki,user.Id is actually a string, so for now you would have to create a function that converts user.Id to an objectId type and then use it in your rules. You can learn how to implement something like this here.We’re also planning on adding a convenience expansion like %stringToOid to make this easier in the future.", "username": "Sumedha_Mehta1" }, { "code": "user.Ididstringiduser.IdobjectId", "text": "Hi, @Sumedha_Mehta1 thanks for the response.As you said user.Id is a string, I tried storing a user with id field as string data type now I can retrieve that user. But I don’t like to set id as a string data type.I’m new to Realm and I don’t understand how to create a function that converts user.Id to an objectId type and then use it in your rulesPlease help me to solve this issue.", "username": "chawki" }, { "code": "_id", "text": "There isn’t currently a way to change the _id field on the User object to a string at the moment. If this is a change you need to make, an alternative is to store another field with the same value as the objectId in string format. However, the recommendation is to use the function based rule to convert the objectId to string when using rules.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't access user document with user level permission with Realm function
2020-09-13T12:50:11.433Z
Can&rsquo;t access user document with user level permission with Realm function
4,551
null
[ "golang" ]
[ { "code": "package entities\ntype IMDBRegistry struct {\n MovieName string `json:\"moviename,omitempty\"`\n Rating string `json:\"rating,omitempty\"`\n RatingCount int `json:\"peoplecount,omitempty\"`\n Comments map[string]interface{} `json:\"comments,omitempty\"`\n}\nfunc UpdateDocument(filterObject interface{}, operation string, update map[string]interface{}) (int64, error) {\n mongoObj := connect.GetMongoObject()\n if mongoObj == nil {\n log.Fatalln(constants.ERR_DB)\n return 0, nil // TODO\n }\n collection := mongoObj.Database(constants.DB_NAME).Collection(constants.COLLECTION_NAME_USER)\n jsonData, err := json.Marshal(filterObject)\n if err != nil {\n log.Println(\"Error while Marshalling in UpdateDocument\")\n return 0, err\n }\n var m interface{}\n json.Unmarshal(jsonData, &m)\n filter := m.(map[string]interface{})\n jsonDataUpdate, err := json.Marshal(update)\n if err != nil {\n return 0, err\n }\n var m1 interface{}\n json.Unmarshal(jsonDataUpdate, &m1)\n updateString := bson.M{operation: update}\n result, err := collection.UpdateOne(context.TODO(), filter, updateString)\n log.Println(\"Result is ::: \", result)\n if err != nil {\n return 0, err\n } else {\n log.Println(\"Returning from here.\")\n return result.ModifiedCount, nil\n }\n}\nfunc AddComment(params add_comment.PostcommentsParams) middleware.Responder {\n\n log.Println(\"Processing request to AddComment.\")\n userName := params.Body.UserName\n movieName := params.Body.MovieName\n movieComment := make(map[string]interface{}, 0)\n movieComment[\"comments\"] = params.Body.MovieComment\n if UserNameIsValid(userName) == userName && len(userName) > 0 {\n //Checking if provided moviename exist in DB.\n if MovieNameIsValid(movieName) == movieName && len(movieName) > 0 && len(movieComment) > 0 {\n searchResult, err := ReadDocument(entities.IMDBRegistry{Comments: movieComment}, &entities.IMDBRegistry{})\n if err != nil {\n }\n if searchResult != nil {\n result := searchResult.(entities.IMDBRegistry)\n commentUp := result.Comments\n _, err := UpdateDocument(entities.IMDBRegistry{Comments: result.Comments}, \"$push\", nil)\n if err != nil {\n }\n }\n //UpdateDocument(entities.IMDBRegistry{Comments: movieComment})\n } else { // If moviename is invalid.\n errMsg := constants.INVALID_MOVIENAME + constants.REQUEST_FAILED\n return add_comment.NewPostcommentsInternalServerError().WithPayload(&models.Error{Code: constants.INTERNAL_ERROR_CODE, Message: &errMsg})\n }\n }\n //Return error if above conditions are not satisfied.\n errMsg := constants.INVALID_USER + constants.REQUEST_FAILED\n return add_comment.NewPostcommentsInternalServerError().WithPayload(&models.Error{Code: constants.INTERNAL_ERROR_CODE, Message: &errMsg})\n}\n", "text": "I am trying to update the embedded field “Comments” in MongoDB. Below is the code.If any other information is needed I will provide . Any help is appreciated .", "username": "Vibhor_Dubey" }, { "code": "", "text": "Hi @Vibhor_Dubey,Can you share what is actually happening when you run your code and what the expected result is? 
Right now you’ve posted your code, which is great, but I don’t know if you’re receiving errors, documents are silently not updating, or documents are updating with the wrong information, or something else.The more details into the actual problem, the easier it will be to find a solution Best,", "username": "nraboy" }, { "code": "", "text": "@nraboy Firstly thank you for your time.\nThere are no errors , when I checked the update response it showed “0” , which means nothing is updated. Please let me know if any other information is needed.", "username": "Vibhor_Dubey" }, { "code": "type IMDBRegistry struct {\n MovieName string `bson: \"moviename\" json:\"moviename,omitempty\"`\n Rating string `bson: \"rating\" json:\"rating,omitempty\"`\n RatingCount int `bson: \"peoplecount\" json:\"peoplecount,omitempty\"`\n Comments map[string]interface{} `bson: \"comments\" json:\"comments,omitempty\"`\n}\n", "text": "Hi @Vibhor_Dubey,I’m not sure I have enough of your code to be able to sample run it myself, but I’m thinking the problem might be in your data structure.You have JSON annotations on your fields, but to get proper bindings to document fields you want to use BSON annotations. For example:I’m thinking your filter isn’t working correctly because the field mappings aren’t happening due to the missing BSON annotations.You might also check out a Golang quickstart series I wrote which includes CRUD and annotations:https://www.mongodb.com/quickstart/golang-change-streamsLet me know how it goes.Best,", "username": "nraboy" }, { "code": "json:\"moviename,omitempty\"json:\"rating,omitempty\"json:\"peoplecount,omitempty\"json:\"comments,omitempty\"bson: \"MovieName\" json:\"MovieName,omitempty\"bson:\"MovieName\" bson: \"UserName\" json:\"UserName,omitempty\"bson: json:\"UserName,omitemptybson: \"UserComment\" json:\"UserComment,omitempty\"json:\"UserComment,omitemptybson: \"UserRating\" json:\"UserRating,omitempty\"json:\"UserRating,omitempty", "text": "Hi @nraboy ,Above I’ve provided wrapper written over MongoDB in 2 nd point . There I am doing marshalling and unmarshalling , i.e is the reason I’ve used JSON data structure .Repo: GitHub - vibhordubey333/MoviesDB: Microservice written in Golang , where user can save new movies as well as rate, comment them . Functionality is similar to IMDB website where guest can see the movies , logged in users can comment , rate , admin can save new movies . For DB MongoDB is used.I thought may be using the “map[string]interface{}” data structure is wrong , so I am tried changing my structure also , but no luck .package entities\ntype IMDBRegistryImproved struct {\nMovieName string json:\"moviename,omitempty\"\nRating string json:\"rating,omitempty\"\nRatingCount int json:\"peoplecount,omitempty\"\nComments UserComments json:\"comments,omitempty\"\n}type UserComments struct {\nMovieName string bson: \"MovieName\" json:\"MovieName,omitempty\" //bson:\"MovieName\" json:“MovieName,omitempty” \nUserName string bson: \"UserName\" json:\"UserName,omitempty\" //bson: json:\"UserName,omitempty\nUserComment string bson: \"UserComment\" json:\"UserComment,omitempty\" //json:\"UserComment,omitempty\nUserRating float32 bson: \"UserRating\" json:\"UserRating,omitempty\" //json:\"UserRating,omitempty\n}", "username": "Vibhor_Dubey" }, { "code": "", "text": "Hi @Vibhor_Dubey,The manual marshaling and unmarshaling seems unnecessary based strictly on the example in the point you made. 
Maybe there’s further reason for it in your code, but you should definitely look at using BSON annotations so the marshaling and unmarshaling happens automagically:Learn how to model MongoDB BSON documents as native Go data structures for seamless interaction with MongoDB collections.@Divjot_Arora, are you able to glance over this and see if you can spot the point of failure in the code?", "username": "nraboy" }, { "code": "", "text": "There’s a lot of stuff going on in the code here. Would it be possible to create a smaller, reproducible example? That might make it easier to spot the issue.", "username": "Divjot_Arora" } ]
Update embedded field in MongoDB using Golang
2020-09-11T07:06:28.875Z
Update embedded field in MongoDB using Golang
6,154
null
[ "aggregation" ]
[ { "code": "_id:shubham\nhobbies [2 elements]\n [0] {2 fields}\n Type : Drawing\n Members: 1\n [1] {2 fields}\n Type : Cricket\n Members: 11\n\n _id:anant\nhobbies [2 elements]\n [0] {2 fields}\n Type : Drawing\n Members: 1\n\n _id:nipoon\nhobbies [2 elements]\n [0] {2 fields}\n Type : Drawing\n Members: 1\n [1] {2 fields}\n Type : Cricket\n Members: 11\n_id: shubham_id: nipoon_id: anant", "text": "I have documents like this:I want to extract data which as both Drawing and Cricket as hobbies and eliminate if not having both.In above case I want all data of _id: shubham and _id: nipoon and eliminate _id: anant.", "username": "shubham_udata" }, { "code": "", "text": "Could you please provide your sample data as well formed JSON documents?This way we can just cut-n-paste the documents directly in our installation. This will help us help you faster.", "username": "steevej" }, { "code": "> db.col.find({hobbies: {$all: [\"Cricket\", \"Drawing\"]}})\n{ \"_id\" : \"shubham\", \"hobbies\" : [ \"Drawing\", \"Cricket\" ], \"members\" : 11 }\n", "text": "I think I have a simpler version with the $all array operator:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to extract data if array contains required data
2020-09-15T06:28:54.445Z
How to extract data if array contains required data
1,495
null
[ "java", "connecting" ]
[ { "code": "INFO: Exception in monitor thread while connecting to server ssc-cluster-01-shard-00-02.9cbnp.mongodb.net:27017\ncom.mongodb.MongoSocketWriteException: Exception sending message\n\tat com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:525)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:413)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:269)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:253)\n\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\n\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:106)\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:63)\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:127)\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\n\tat java.lang.Thread.run(Thread.java:745)\nCaused by: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative names matching IP address 13.235.32.72 \n", "text": "Hi All,I have created an account on MongoDB Atlas and tried connecting to MongoDB Atlas Cluster using the Java code.With the same java code I am able to connect to On-Premise MongoDB but not the Atlas Cluster.", "username": "Samir_Benjamin" }, { "code": "", "text": "Hi @Samir_Benjamin,I think you are hitting a known issue where atlas CA certificate is not present in your jdk certificate trust store:Please download the certificate from the link above and import it with keytool command to your trust store.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I think I had this issue when I was using the Oracle SDK but I solve this issue by using the OpenJDK instead.", "username": "MaBeuLux88" } ]
MongoDB Atlas Issue Connecting with Java
2020-09-14T20:17:21.435Z
MongoDB Atlas Issue Connecting with Java
5,557
null
[ "replication" ]
[ { "code": "", "text": "Hi ,I have started mongod, with volume created from snapshot backup from another instance.I am trying to drop the local database using the root role enabled user. but it throws an error. Shared logs belowI would like to drop the old replication and set up replication with new members(primary+ 2 secondaries)Any help is highly appreciated. THanks.", "username": "madhuri_yeruva" }, { "code": "", "text": "Hi @madhuri_yeruvaYou have to start mongod without replicaSet option to be able to drop the local database.This is covered in Restore a Replica Set from MongoDB Backups", "username": "chris" }, { "code": "", "text": "Thanks, i’m able to drop the local database now.\nI disabled security , keyfile under security, and replication params to be able to do also.", "username": "madhuri_yeruva" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to drop local database after restoring from snapshot backup
2020-09-15T20:27:15.184Z
Unable to drop local database after restoring from snapshot backup
1,804
null
[ "data-modeling", "mongoose-odm" ]
[ { "code": "const newUser = { dob: new Date('10/16/1995') };\nconst user = new UserModel(newUser);\nawait user.save();\nlet date = '2/10/1977';\n date = new Date(\n `${date.split('/')[2]}-${date.split('/')[0]}-${date.split('/')[1]}`,\n );\n console.log(date);\n const newUser = { dob: new Date(date) };\nconst mongoose = require('mongoose');\n\nlet UserSchema = new mongoose.Schema({\n\tdob: Date,\n});\n\nrun().catch((err) => console.log(err));\n\nasync function run() {\n\tawait mongoose.connect('mongodb://localhost:27017/test', {\n\t\tuseNewUrlParser: true,\n\t\tuseUnifiedTopology: true,\n\t});\n\tawait mongoose.connection.dropDatabase();\n\n\tconst UserModel = mongoose.model('user', UserSchema);\n\n\tconst newUser = { dob: '10/16/1995' };\n\tconst user = new UserModel(newUser);\n\tawait user.save();\n\tconsole.log(user, 'output');\n\n\t// output\n\t/* { _id: 5f5f13e643f6c0cc94ab26a8,\n dob: 1995-10-15T18:30:00.000Z,\n __v: 0 } '' */\n}\n\n", "text": "Hi All,how do you save date of birth of user, It is changing due to timezone.It is returning minus one i.e if you pass 16 date it returns 15, if you pass 18 date it returns 17.Also i tried with new Date() but still it returns less than oneAlso i converted date into yyyy-mm-dd but still same", "username": "indraraj26" }, { "code": "> db.test.insertOne( { dob: new Date('10/16/1995Z') })\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"5f608deee83a779d9f5a4412\")\n}\n> db.test.findOne()\n{\n\t\"_id\" : ObjectId(\"5f608db5e83a779d9f5a4411\"),\n\t\"dob\" : ISODate(\"1995-10-15T23:00:00Z\")\n}\n>\n", "text": "Add a Z to the date to indicate it is already UTC:", "username": "Joe_Drumgoole" }, { "code": "", "text": "Hi Joe,Why i am seeing different result than yours, as we marking that it is already UTC.I am very curious in your output you are getting 15 and i get 16 where as i want 16 always since it is DOB. I don’t find any good docs related to manage date with mongodb.Thank you for quick response.", "username": "indraraj26" }, { "code": "", "text": "Hi Joe,Above highlight is wrong, Here is the correct output that i am getting strong text", "username": "indraraj26" }, { "code": "> db.test.drop()\ntrue\n> db.test.insertOne({dob: new Date(\"1/2/1964\")})\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"5f6116f2e83a779d9f5a441b\")\n}\n> db.test.find()\n{ \"_id\" : ObjectId(\"5f6116f2e83a779d9f5a441b\"), \"dob\" : ISODate(\"1964-01-02T00:00:00Z\") }\n>\n", "text": "Apologies my first example was in error. So just appending a Z to a raw date causes it to fail to parse and date and instead it uses the epoch date hence the 1970 date. To use the Z suffix you need a date and a time.You can just insert a constructed date without the Z and it will be added as UTC by default.The trailing Z indicates UTC.", "username": "Joe_Drumgoole" }, { "code": "10/16/1995", "text": "if you want to get date in this format (10/16/1995) then i would recommend you to store them as a string. What’s the benefit of that… as you know mongodb stores the Date in ISO format so if we only insert the date(not the time) then it will ingest the timestamp as 00 with the date. so it’s better to use date as string. 
Hope so this will work for you.\nThanks", "username": "Nabeel_Raza" }, { "code": "dateTestSolution", "text": "insert a sample date in dateTest collections.db.dateTest.insert(\n{\n“_id” : 1,\n“date” : ISODate(“2020-09-16T09:55:39.736Z”)\n}\n)Here is the query:db.dateTest.aggregate(\n[\n{\n$project: {\ndateOnly: { $dateToString: { format: “%Y-%m-%d”, date: “$date” } },\n}\n}\n]\n)and this will be the output:/* 1 */\n{\n“_id” : 1.0,\n“dateOnly” : “2020-09-16”\n}Hope so this will help you out, if so then kindly mark this as Solution. Thanks", "username": "Nabeel_Raza" }, { "code": "", "text": "but if you insert without time, it does not add 00 instead of UTC timezone as you can see in above post", "username": "indraraj26" }, { "code": "", "text": "Hi Joe,This does not make date to fixed i think i should save as string instead type date\n", "username": "indraraj26" }, { "code": "> new Date(\"1977-04-01T00:00:00Z\")\n1977-04-01T00:00:00.000Z\n> new Date(\"1977-04-01T00:00:00\")\n1977-03-31T23:00:00.000Z\n>\n", "text": "I think I worked out what is going on. The date you have specified is within your daylight savings period so the the hour is subtracted to create UTC forcing your date to be on the previous day. The fix is to specify the UTC date. e.g. :The first date is reported correctly. The second date is shifted as it uses daylight savings time.", "username": "Joe_Drumgoole" }, { "code": "YYYY-MM-DD", "text": "for that just insert a simple date it will store it to UTC format and i did the same i save it to ISOFormat and get the result in required form i.e. YYYY-MM-DD", "username": "Nabeel_Raza" }, { "code": "", "text": "Thank you Joe, This works so great!i appreciate your help for quick response.<3 Mongodb", "username": "indraraj26" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
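A minimal sketch of the approach Joe describes, applied to the mongoose model from the original question. The date string is illustrative; the trailing Z pins the value to UTC so the calendar day cannot shift with daylight saving or the server's timezone:

```js
const dob = new Date('1995-10-16T00:00:00Z'); // explicit UTC midnight
const user = new UserModel({ dob });
await user.save();
// stored as ISODate("1995-10-16T00:00:00.000Z"), so the day stays the 16th everywhere
```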
Save date of birth of user without timezone
2020-09-15T07:15:45.597Z
Save date of birth of user without timezone
47,972
null
[ "indexes" ]
[ { "code": "", "text": "I understand that an index consisting of one field will not include documents whose field contains a null value. For example, if I was to create an index on FieldB, the index will not include the 3rd document.\nFieldA FieldB\nA 1\nB 1\nC \nD 1However, if then create another index with FieldB, FieldA. Will it exclude the 3rd document? Or because there is now a value for FieldA for the 3rd document?I basically do not want to include the 3rd document in the index because FieldB has a null value. How can I achieve that?I was thinking about creating a compound sparse index. However, how can I specify sparse index to ignore the 3rd document because FieldB does not exist? Note: I will then prevent inserting a null value to FieldB, so instead of null I will not create a value.", "username": "Bluetoba" }, { "code": "", "text": "I guess my question is very simple.Will a compound sparse index ignore documents if one of the index fields is empty or will it only ignore it if all fields are empty?", "username": "Bluetoba" }, { "code": "", "text": "Hello @Bluetoba\na document will be ignored when one (or more) of the sparse index fields is (are) null / not exist.\nPlease keep in mind that you can not use sparse indexes for sorting. Since fields are potentially omitted the sort will end in a collection scan.Here you can find further details:\nSparse Indexes MDB training\nSparse Indexes docCheers Michael", "username": "michael_hoeller" }, { "code": "", "text": "Thanks Michael,What if I use partial index, instead of sparse index to include a field that will be used for sorting? The sorting field is positioned last.Is it the same issue?", "username": "Bluetoba" }, { "code": "db.test.createIndex(\n { field: 1 },\n { partialFilterExpression: { field: { $exists: true } } }\n) \n", "text": "Hello @Bluetobayou can build up the same behavior of a sparse index with an partial index:Reading your first post I struggle to get your use case. I like to suggest you to read the attached linksGeneral Index Documentation\nPartial Index Documentationand ideally follow the free MongoDB Training M201: MongoDB Performance which explains in detail the different indexes.When you have further question after you went through the documentation and the class (you may only pick what you need, in the corona time this is self paced) please add you use case and question to this post, I keep it on my watch list. I am happy to help.Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Okay the use case is for chat collection. The first chat document has a flag, Top_Chain_FLG = 1. All documents belonging to the same conversation, share the same foreign key, Top_Chain_ID, that point to the ID of first chat document.The conversation is between merchants and buyers, as identified by Merchant_ID and Buyer_ID. So these IDs are part of the document.The first chat document, also known as the Top_Chain documents are augmented with the attributes of the last chat message, in order for us to save many queries to extract information about the first and last chat message. It also includes information such as Last_Message_Date.So, it essential fields look like the following:\nID, Top_Chain_ID, Top_Chain_FLG, Merchant_ID, Buyer_ID, Last_Message_Date, etc, etc (e.g. message itself, createdAt, createdDate)What we need is to display a list of chat messages for a merchant or a buyer in the order of last messages received (known as Last_Message_Date descendingly). 
So, instead of doing a separate order by, I wanted to rely on the index to sort it. So, what I initially had in mind was to create three partial indexes. I also thought that index intersection would take place on both index #1 and #3 when, say, I query on the following, but the explain plan didn't indicate that.\nMerchant_ID = xyx\nTop_Chain_FLG = 1\nQuestion 1: Why doesn't index intersection take place? So, I created one index that combines the content of index #1 and #3 together:\nMerchant_ID, Top_Chain_ID, Last_Message_Date descending (partialFilterExpression where Merchant_ID exists, Top_Chain_FLG = 1)\nQuestion 2: The explain plan works as intended, but I still don't know whether the sort really uses Last_Message_Date from the index. Does it?\nQuestion 3: Should I use a hint? How can I specify a hint based on an index name?", "username": "Bluetoba" } ]
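On Question 3, hint() accepts an index name as a plain string. The sketch below mirrors the combined partial index described in the post; the collection name and the filter value are illustrative:

```js
db.chats.createIndex(
  { Merchant_ID: 1, Top_Chain_ID: 1, Last_Message_Date: -1 },
  {
    partialFilterExpression: { Merchant_ID: { $exists: true }, Top_Chain_FLG: 1 },
    name: "merchant_topchain_lastmsg"
  }
)

db.chats.find({ Merchant_ID: "xyx", Top_Chain_FLG: 1 })
        .sort({ Last_Message_Date: -1 })
        .hint("merchant_topchain_lastmsg") // hint by index name rather than by key pattern
```

Note that the query has to include the partial filter condition (Top_Chain_FLG: 1 here), otherwise the hinted partial index cannot be used.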
Compound index with null and non-null values
2020-09-13T11:03:19.639Z
Compound index with null and non-null values
13,045
null
[]
[ { "code": "", "text": "how to get all the attributes for a particular document and its datatype for a collection?if there is a script it will be helpful.", "username": "Mamatha_M" }, { "code": "", "text": "Hi @Mamatha_M,Have you tried the Schema Analyser in MongoDB Compass? Sounds like what you are trying to do here.\nimage1244×950 37.4 KB\nIf you prefer a script, I already used successfully Variety which was enough for what I was doing at the time.Is it what you are looking for?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hello @Mamatha_Myou may also can check out the schema in VSCode with the MongoDB plugin. In case this is your favorite editor.\n\ngrafik1573×661 169 KB\nIn all other cases compass, as recommended by @MaBeuLux88, will offer you many more features. Personally I use both since, while coding, it is faster to use the features of the editor. The VSCode plugin also brings a sandbox which is great to develop queries.Cheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Here is post with similar question and answers:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to get all the attributes for a particular document and its datatype?
2020-09-16T09:15:33.330Z
How to get all the attributes for a particular document and its datatype?
4,924
https://www.mongodb.com/…5da745c9c336.png
[ "atlas-search" ]
[ { "code": "", "text": "Hi there,\nRecently we have created Atlas Search indexes for the databases but the search function always return empty result. We used Atlas API to create the indexes, it responses okay meaning that indexes has been created but in fact they have never built successfully. Moreover there are no alerts indicating the search indexes built successfully.We would like to use the Atlas UI Explorer to view the indexes build status but we can’t, our cluster has uptown 6000 databases, the Atlas UI Explorer always crashed when loading 6k databases. We try another way, i.e using Atlas API to get indexes but sadly, it don’t return “status” fields.We run into a trouble that Atlas search indexes have been created but never built successfully. To give you more details, our cluster recently exceeded 75% memory usage. In addition, we noticed that, in our RTPP, the “SYS MEM” metric is too highAre there any ways that “SYS MEM” affect the Atlas Search Index building ?\nThanks and regards", "username": "dattannguyen" }, { "code": "", "text": "Hi @dattannguyen and welcome in the MongoDB Community !From what I’m reading here, it sounds like your cluster is paralysed because it’s begging for more RAM.6K databases sounds like a lot but maybe they are really small so I can’t tell. How much data do you have in this cluster?From what I see here, I guess you are running an M50 on GCP which has a recommended disk size of 160GB which sounds like a good ratio for the 30 GB of RAM. Do you have more than that?MongoDB needs RAM for everything it does and creating indexes is, indeed, an extra burden that should be conducted outside of the peak hours if that’s possible.Also, indexes should fit in RAM. Always.Please make sure you have enough RAM for your indexes + working set + queries & workload. If indexes represent more than 15% of your RAM, I would say it’s time to add more RAM or I would start being more selective when creating indexes and I would make sure all the indexes that exists are really useful and not just a waste of RAM.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Atlas Search Index Not Build
2020-09-14T20:16:39.033Z
Atlas Search Index Not Build
2,216
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "Hi,\nI am trying to use logs API from here:\nand I get 401 access error even though the credentials I use are authorized{“detail”:“Current user is not authorized to perform this action.”,“error”:401,“errorCode”:“USER_UNAUTHORIZED”,“parameters”:,“reason”:“Unauthorized”}%Thanks\nGadi", "username": "Gadi_Srebnik" }, { "code": "M0M2M5", "text": "Some tiers, at least free tiers, do not allow the use of the logs API. For more information, see https://docs.atlas.mongodb.com/reference/api/logs/In particular, the paragraph:FEATURE UNAVAILABLE IN FREE AND SHARED-TIER CLUSTERSThis feature is not available for M0 (Free Tier), M2 , and M5 clusters. To learn more about which features are unavailable, see Atlas M0 (Free Tier), M2, and M5 Limitations.", "username": "steevej" }, { "code": "", "text": "Thanks for the answer.\nThis is not the case, as Im trying to pull logs of M50 and M60 clusters.Gadi", "username": "Gadi_Srebnik" }, { "code": "Project Data Access Read Only", "text": "May it is the following then.IMPORTANTYou must have the Project Data Access Read Only role or higher in the project that the cluster belongs to in order to retrieve the log.", "username": "steevej" }, { "code": "", "text": "My current permissions are Organization read-only, which should cover project-specific data access. Isn’t it?", "username": "Gadi_Srebnik" }, { "code": "", "text": "Organization Read Only entitles you to effectively have Project Read Only for each Project. However Project Read Only is not the same as Project Data Access Read Only.I do apologize that “Project Read Only” and “Project Data Access Read Only” are so similarly worded: this has a legacy reason that I won’t go into. Really “Project Read Only” should be called “Project Metadata View Only” since it does not grant access to cluster data contents.-Andrew", "username": "Andrew_Davidson" } ]
Using logs API with Organization read only user role
2020-09-10T10:32:37.494Z
Using logs API with Organization read only user role
2,216
https://www.mongodb.com/…8c97e0811ea2.png
[]
[ { "code": "", "text": "Hi,I want to know if is possible to make string replace in calculated field.\nI want to make field, with list of domains name by cleaning urls and only get domains list.Thanks for your help !", "username": "Jonathan_Gautier" }, { "code": "Add Field", "text": "Hi @Jonathan_Gautier,Can you provide an example document and an example of what you expect? This would help me to reproduce and hack on my side.As a short answer without more details, it sounds like you could solve this issue by applying an aggregation pipeline directly at the data source level and use $project and $substringBytes or another string manipulation to do what you want.That being said, you can probably do the same with the Add Field feature you have here.", "username": "MaBeuLux88" }, { "code": "", "text": "I have solve my problem, thanks for aggregation:{ $regexFind: { input: “$order_status_url” , regex: “(?:https?:)?(?://)?(?:[^@\\n]+@)?(?:www.)?([^:/\\n]+)”} }give me this in ChartsAnd just didAnd finally got all my domain list in charts <3", "username": "Jonathan_Gautier" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
String Replace in Calculated Field Charts
2020-09-15T15:51:29.032Z
String Replace in Calculated Field Charts
2,616
null
[ "flutter" ]
[ { "code": "", "text": "HelloDart issue been resolved. Please I have a question : do you currently work on a SDK Flutter for MongoDb Realm ? And do you have any approx. release date ?\nMany thanks in advanceJulien", "username": "Julien_Onezime" }, { "code": "", "text": "@Julien_Onezime While a FFI has been shipped in Dart it is still incredibly new and is leaving a lot of the burden on the embedder (realm) to re-implement logic that is already part of the Dart VM. I would like to perform a spike on implementing our own custom logic and we may be able to get an alpha/preview out for the community to test but it will have a bunch of features lacking. If you are asking about timeline because you need to make decision on what data layer to use in your Dart app then I’d advise you to look elsewhere since a production release of RealmDart is a long way out.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Flutter SDK for MongoDB Realm
2020-09-15T18:26:36.116Z
Flutter SDK for MongoDB Realm
4,015
null
[ "dot-net" ]
[ { "code": " {\n \"_id\": \"string\",\n \"OtherField\": \"string\",\n .... \n }\n class MyClass\n [BsonElement(\"OtherField\")] \n public string Id { get; set;}\n", "text": "I want to map an attribute from MongoDb to Id in the POCO class. Note that it is not the real _id in MongoDb, I wanted to use the name Id over something else.The Mongo Object looks likeThe Class looks likeHowever, even though I used the [BsonElement] decorator, the Id still reflects the _id in the MongoDb. Is it not possible to do that?Environment:\ndotnet core 3.1\nMongoDb.Driver 2.10.4", "username": "sssppp" }, { "code": "[BsonId]", "text": "Hi @sssppp and welcome in the MongoDB community ,I’m not a C# dev but I found 2 possible solutions in the doc which could solve your issue.Looks like you can use the decorator [BsonId]. If it’s set explicitly on another field, maybe it won’t be mapped to your “id” field anymore.This will give you a chance to set your own serializer I guess.", "username": "MaBeuLux88" }, { "code": "", "text": "@sssppp Try the suggestions from @MaBeuLux88 and let us know how it goes.", "username": "Ken_Alger" } ]
BsonElement on arbitrary attribute Id?
2020-09-14T20:59:38.004Z
BsonElement on arbitrary attribute Id?
3,040
null
[]
[ { "code": "mongod --dbpath data repair", "text": "Hi,I am writing a web application which requires mongodb to deal with huge datasets (up to and more than 100K documents per collection).While the web app runs, every hour, it does some updates on the database by inserting new records. Today I woke up and I saw that my database got corrupted and not even the mongod --dbpath data repair worked to fix my database, so I had to get rid of the database.Luckily I had made a backup but seeing data corruption every now and then is very frustrating.Any ideas on how to safely deal with large datasets and updates with mongodb and prevent any possible data corruption from occurring?", "username": "George_K" }, { "code": "", "text": "Hi @George_K welcome to the community.The WiredTiger storage engine (default from MongoDB 3.2 series) is very conservative in handling data, that typically the cause of a corruption is faulty hardware.Could you elaborate on what you’re seeing:Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,I use WiredTiger and after days of inspections, I realised that MongoDB uses a lot of RAM. I had to upgrade my server from 1GB to 4GB ram (Ubuntu 19.02).The thing is that when I reboot my server, there is roughly 3GB free of RAM and after a few days it drops to around 500MB.Screenshot 2020-09-11 at 15.42.221134×94 22.9 KB\nThis is very frustrating as my budget is very low and I cannot keep increasing the RAM. I was wondering if there is any other way either limit the RAM usage or keep freeing RAM automatically periodically so my application doesn’t end up crashing or the server becomes completely unresponsive.My MongoDB version is now 4.4.0. I use node.js to deploy my web-app. I don’t have a screenshot of the error, but as I remember it was displaying a series of 0’s (8 bits 0000000x0) and hexadecimals.How could I prevent this issue from happening?", "username": "George_K" }, { "code": "0.5 * (4 GB - 1 GB) = 1.5 GB", "text": "Hi @George_K,It is normal for Linux to try to use available RAM for file caching. See Linux ate my RAM! for more context.With your current output, you have 996MB available (used, but can be made available for applications if needed) of which 568MB is free (not currently used). If you aren’t seeing any memory-related issues (for example, failed allocations or performance challenges), you probably have ample memory at the moment.By default MongoDB allocates the larger of 50% of (RAM - 1 GB), or 256 MB for the WiredTiger cache. The WiredTiger cache is used for reading and writing data in MongoDB. Memory outside the WiredTiger cache is used for temporary allocations (connections, aggregation, JavaScript evaluation, …) and by the operating system for caching files.With your original 1GB of RAM, the WiredTiger cache would be 256MB. 
With 4GB of RAM, the WiredTiger cache will (by default) use 1.5GB of RAM ( 0.5 * (4 GB - 1 GB) = 1.5 GB ).Regards,\nStennie", "username": "Stennie_X" }, { "code": " root@me:/var/www/website.com# mongod --dbpath /data/db/\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.616+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.628+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.628+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.629+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1133,\"port\":27017,\"dbPath\":\"/data/db/\",\"architecture\":\"64-bit\",\"host\":\"me\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.629+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.0\",\"gitVersion\":\"563487e100c4215e2dce98d0af2a6a5a2d67c5cf\",\"openSSLVersion\":\"OpenSSL 1.1.1c 28 May 2019\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu1804\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.629+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"19.10\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.629+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"/data/db/\"}}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.631+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/data/db/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.631+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-09-15T16:40:39.631+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=1456M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.392+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188040:392279][1133:0x7f7b85cce440], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 20 through 21\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.503+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188040:503357][1133:0x7f7b85cce440], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 21 through 21\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.615+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188040:615158][1133:0x7f7b85cce440], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 20/7936 to 21/256\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.733+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188040:733600][1133:0x7f7b85cce440], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Creating the history store before applying log records. 
Likely recovering after anunclean shutdown on an earlier version\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.739+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188040:739499][1133:0x7f7b85cce440], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 20 through 21\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.804+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188040:804151][1133:0x7f7b85cce440], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 21 through 21\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.857+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188040:857836][1133:0x7f7b85cce440], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.867+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1236}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.867+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.885+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.887+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.887+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22138, \"ctx\":\"initandlisten\",\"msg\":\"You are running this process as the root user, which is not recommended\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.887+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.887+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22184, \"ctx\":\"initandlisten\",\"msg\":\"Soft rlimits too low\",\"attr\":{\"currentValue\":1024,\"recommendedMinimum\":64000},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.910+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.923+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.925+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.925+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:40.925+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:41.028+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37876\",\"sessionId\":1,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:41.037+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:37876\",\"client\":\"conn1\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:41.051+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37878\",\"sessionId\":2,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:41.052+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn2\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:37878\",\"client\":\"conn2\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:41.070+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37876\",\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:41.072+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn2\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37878\",\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.924+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37914\",\"sessionId\":3,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.925+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn3\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"127.0.0.1:37914\",\"client\":\"conn3\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.932+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37916\",\"sessionId\":4,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.933+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn4\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:37916\",\"client\":\"conn4\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.941+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37918\",\"sessionId\":5,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.949+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn5\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:37918\",\"client\":\"conn5\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.997+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn3\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37914\",\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.998+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37918\",\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:49.998+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn4\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37916\",\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:50.103+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37922\",\"sessionId\":6,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:50.104+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn6\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:37922\",\"client\":\"conn6\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:50.107+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37924\",\"sessionId\":7,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:50.108+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn7\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:37924\",\"client\":\"conn7\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js 
v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:50.124+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37926\",\"sessionId\":8,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:50.129+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn6\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37922\",\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:50.130+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn8\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37926\",\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-15T16:40:50.131+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn7\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37924\",\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2020-09-15T16:41:05.393+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37930\",\"sessionId\":9,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-15T16:41:05.394+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn9\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:37930\",\"client\":\"conn9\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:41:05.396+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:37932\",\"sessionId\":10,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2020-09-15T16:41:05.397+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn10\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:37932\",\"client\":\"conn10\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"3.5.9\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.3.0-19-generic\"},\"platform\":\"'Node.js v10.15.2, LE (unified)\"}}}\n{\"t\":{\"$date\":\"2020-09-15T16:41:05.423+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn9\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37930\",\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2020-09-15T16:41:05.424+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn10\",\"msg\":\"connection ended\",\"attr\":{\"remote\":\"127.0.0.1:37932\",\"connectionCount\":0}}\n^C{\"t\":{\"$date\":\"2020-09-15T16:42:19.568+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":2,\"error\":\"Interrupt\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.568+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23380, \"ctx\":\"SignalHandler\",\"msg\":\"Signal was sent by the kernel\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.568+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23381, \"ctx\":\"SignalHandler\",\"msg\":\"will terminate after current cmd ends\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.571+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"SignalHandler\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.578+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket 
file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.579+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.579+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.579+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20609, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.579+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.582+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.582+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.583+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.583+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.583+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.583+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.583+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.583+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.583+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.583+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22324, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger in preparation for reconfiguring\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.597+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795905, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":14}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.611+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188139:611666][1133:0x7f7b85ccc700], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 21 through 22\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.666+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188139:666906][1133:0x7f7b85ccc700], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 22 through 22\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.762+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, 
\"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188139:762598][1133:0x7f7b85ccc700], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 21/5760 to 22/256\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.866+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188139:866564][1133:0x7f7b85ccc700], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 21 through 22\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.928+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188139:928597][1133:0x7f7b85ccc700], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 22 through 22\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.980+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1600188139:980661][1133:0x7f7b85ccc700], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.986+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795904, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger re-opened\",\"attr\":{\"durationMillis\":389}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:19.986+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22325, \"ctx\":\"SignalHandler\",\"msg\":\"Reconfiguring\",\"attr\":{\"newConfig\":\"compatibility=(release=3.3)\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:20.006+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795903, \"ctx\":\"SignalHandler\",\"msg\":\"Reconfigure complete\",\"attr\":{\"durationMillis\":20}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:20.006+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:20.011+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":5}}\n{\"t\":{\"$date\":\"2020-09-15T16:42:20.011+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:20.011+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2020-09-15T16:42:20.011+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\nmongod --fork --logpath /var/log/mongodb/mongod.log --dbpath /var/www/covid19livespread.com/data/db/ --config /etc/mongod.confERROR: child process failed, exited with 14\n\nTo see additional information in this output, start without the \"--fork\" option.\n", "text": "Hi @Stennie_XMake sense, but what can I do to prevent causing memory leaks? I need to occasionally reboot my server which could cause my data to be corrupted for some reason. 
Here is, for example, how my most recent corruption occurred. I could easily spot it since I use the following command to run MongoDB:\nmongod --fork --logpath /var/log/mongodb/mongod.log --dbpath /var/www/covid19livespread.com/data/db/ --config /etc/mongod.conf\nI got the following error:\nforked process: 1375\nI couldn't find any information online so I had to use my backup data and re-build all of my collections.Is there a way to prevent such data corruption? How do I make sure MongoDB doesn't cause memory leaks, and how do I prevent data corruption?", "username": "George_K" } ]
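If the goal is simply to cap how much memory mongod itself takes, the WiredTiger cache size Stennie describes can be set explicitly in the mongod.conf referenced by the startup command above. The value is only an example, and this caps the cache; it does not by itself protect against unclean shutdowns:

```yaml
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1   # same effect as the --wiredTigerCacheSizeGB command-line option
```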
Dealing with large datasets with frequent updates cause data corruption
2020-09-02T14:50:22.628Z
Dealing with large datasets with frequent updates cause data corruption
4,231
null
[ "aggregation", "atlas-search" ]
[ { "code": "", "text": "Hi !I want to know why we cannot use $search in $facet ?Can we use it in near future ?Thanks", "username": "Jonathan_Gautier" }, { "code": "", "text": "@Asya_Kamsky maybe you have an insight here?", "username": "MaBeuLux88" }, { "code": "", "text": "It is largely a performance concern. We need to implement faceting capabilities for the full text search use case and performance expectation. The $facet implementation targets a different use case and has a different implementation as a result.", "username": "Marcus" }, { "code": "", "text": "Hi Jonathan! $search is always the 1st stage in an aggregation pipeline, and it calls an entirely different process (the mongot) before sending documents back to the mongod to finish the aggregation pipeline. One of the reasons for this is because search uses a different type of index -inverted indexes - before sending to mongod which uses b tree indexes. You can still do a “faceted search” inside the $search stage by combining other operators with a compound operator. There are some examples here:Use the compound operator to combine multiple operators in a single query and get results with a match score.Hope this helps. If not, please let me know your use case for $facet and I can provide further info. Thanks!", "username": "Karen_Huaulme" }, { "code": "", "text": "I want to make facets like Elastisearch. But for now i think is not possible or very difficult to make many counts in $search or $compoundimage460×677 8.08 KBQuery:“facets”:{\n“ratemin”:{\n“type”:“range”,\n“ranges”:[\n{\n“from”:0,\n“to”:10000,\n“name”:“0 - 10000”\n},\n{\n“from”:10001,\n“to”:100000,\n“name”:“10001 - 100000”\n},\n{\n“from”:100001,\n“to”:500000,\n“name”:“100001 - 500000”\n},\n{\n“from”:500001,\n“to”:1000000,\n“name”:“500001 - 1000000”\n},\n{\n“from”:1000001,\n“to”:5000000,\n“name”:“1000001 - 5000000”\n},\n{\n“from”:5000001,\n“to”:10000000,\n“name”:“5000001 - 10000000”\n},\n{\n“from”:10000001,\n“name”:“10000001+”\n}\n]\n},\n“date”:{\n“type”:“range”,\n“ranges”:[\n{\n“from”:“2020-09-15T16:00:15.810Z”,\n“to”:“2020-09-16T16:00:15.810Z”,\n“name”:“Today”\n},\n{\n“from”:“2020-09-15T16:00:15.810Z”,\n“to”:“2020-09-22T16:00:15.810Z”,\n“name”:“Next 7 Days”\n},\n{\n“from”:“2020-09-22T16:00:15.810Z”,\n“to”:“2020-09-29T16:00:15.810Z”,\n“name”:“22/09/2020 to 29/09/2020”\n},\n{\n“from”:“2020-09-29T16:00:15.819Z”,\n“name”:“After 14 days”\n}\n]\n},\n“currency”:{\n“type”:“value”\n}\n},Response:“facets”:{\n“currency”:[\n{\n“type”:“value”,\n“data”:[\n{\n“value”:“USD”,\n“count”:483046\n},\n{\n“value”:“EUR”,\n“count”:195327\n},\n{\n“value”:“GBP”,\n“count”:37399\n},\n{\n“value”:“CHF”,\n“count”:14690\n},\n{\n“value”:“AUD”,\n“count”:2024\n},\n{\n“value”:“CAD”,\n“count”:812\n},\n{\n“value”:“HKD”,\n“count”:789\n},\n{\n“value”:“ZAR”,\n“count”:85\n},\n{\n“value”:“CZK”,\n“count”:34\n},\n{\n“value”:“MXD”,\n“count”:27\n}\n]\n}\n],\n“ratemin”:[\n{\n“type”:“range”,\n“data”:[\n{\n“to”:10000.0,\n“from”:0.0,\n“name”:“0 - 10000”,\n“count”:734339\n},\n{\n“to”:100000.0,\n“from”:10001.0,\n“name”:“10001 - 100000”,\n“count”:0\n},\n{\n“to”:500000.0,\n“from”:100001.0,\n“name”:“100001 - 500000”,\n“count”:0\n},\n{\n“to”:1000000.0,\n“from”:500001.0,\n“name”:“500001 - 1000000”,\n“count”:0\n},\n{\n“to”:5000000.0,\n“from”:1000001.0,\n“name”:“1000001 - 5000000”,\n“count”:0\n},\n{\n“to”:1.0E7,\n“from”:5000001.0,\n“name”:“5000001 - 
10000000”,\n“count”:0\n},\n{\n“from”:1.0000001E7,\n“name”:“10000001+”,\n“count”:0\n}\n]\n}\n],\n“date”:[\n{\n“type”:“range”,\n“data”:[\n{\n“to”:“2020-09-16T16:00:15.810Z”,\n“from”:“2020-09-15T16:00:15.810Z”,\n“name”:“Today”,\n“count”:197\n},\n{\n“to”:“2020-09-22T16:00:15.810Z”,\n“from”:“2020-09-15T16:00:15.810Z”,\n“name”:“Next 7 Days”,\n“count”:573\n},\n{\n“to”:“2020-09-29T16:00:15.810Z”,\n“from”:“2020-09-22T16:00:15.810Z”,\n“name”:“22/09/2020 to 29/09/2020”,\n“count”:0\n},\n{\n“from”:“2020-09-29T16:00:15.819Z”,\n“name”:“After 14 days”,\n“count”:75\n}\n]\n}\n]\n}", "username": "Jonathan_Gautier" }, { "code": "$search$count$search", "text": "@Jonathan_Gautier to get counts in a $search pipeline the simplest way to get that information today is to add a second stage that is $count for each of the corresponding pipelines beginning with a $search stage. Have you tried that?", "username": "Marcus" }, { "code": "", "text": "Unfortunately, we do not support an implementation very similar to the way you would implement it in Mongo or Elasticsearch today.", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
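Marcus's per-bucket $count suggestion looks something like the pipeline below. The default search index is assumed, and the collection, fields and range values are illustrative; you would run one such pipeline per facet bucket:

```js
db.listings.aggregate([
  {
    $search: {
      compound: {
        must: [ { text: { query: "beach", path: "description" } } ],
        filter: [ { range: { path: "ratemin", gte: 0, lte: 10000 } } ]
      }
    }
  },
  { $count: "count" } // one of these pipelines per facet bucket
])
```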
$search in $facet aggregation
2020-09-10T23:04:22.092Z
$search in $facet aggregation
5,695
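To make the workaround Marcus describes above concrete: run one short pipeline per facet bucket, each starting with $search and ending in $count, then merge the counts in application code. The collection name, fields and range below are illustrative only, not taken from the thread.

```javascript
// One count per bucket: repeat this pipeline with the next range
// (e.g. gte: 10001, lte: 100000) and combine the results client-side.
db.listings.aggregate([
  {
    $search: {
      compound: {
        must: [{ text: { query: "Abcd", path: ["field1", "field2", "field3"] } }],
        filter: [{ range: { path: "ratemin", gte: 0, lte: 10000 } }]
      }
    }
  },
  { $count: "count" }
]);
```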
null
[ "app-services-user-auth" ]
[ { "code": "ourapi.com/user/login", "text": "Hi,I am working on a game that will be making heavy use of cloud data. We are currently using Realm and Atlas.We are developing in Unreal Engine which does not have a Realm SDK. As a result I have been trying to develop a NodeJS server that will act as a “gateway” to Realm functionality. The idea is for the game to make requests to an endpoint like ourapi.com/user/login which will then authenticate using the NodeJS SDK behind the scenes.As I am researching this approach I am realizing that there will likely be a problem with session management. My understanding after a lot of digging through documentation is that Realm will store users locally, in the case of Node in a folder on the server. Only one user can be “Active” and any database interactions will happen in the context of that user.I am imagining a scenario like the following:What happens next? Will Realm return User B’s data instead of User A because only one user can be active at once? How can multiple users be active at a time?I would really appreciate guidance here.", "username": "John_Saigle" }, { "code": "", "text": "@John_Saigle What do you mean by Active user here? Can you explain more? I believe you are actually try to call login() with the user’s credentials and the open the realm as that user? For this kind of architecture we typically recommend that node.js server logs in to MongoDB Realm as an administrator role. From there, it can open any user’s realm and serve that data via its REST interface to the correct user.", "username": "Ian_Ward" }, { "code": "db_userdb_user", "text": "Thanks for the quick reply.I’m using the term “active user” based on this documentation: https://docs.mongodb.com/realm/authentication/#active-userSo if I understand you correctly, the nodeJS server would have a single user, e.g. db_user that would handle retrieving the data from Realm and that the logic on the server would return those results as JSON responses.That makes sense to me in terms of data access. However, this means that it’s not possible to take advantage of Realm’s session management or login functionality, right? Because there’s only db_user, no other user would be logging into Realm.", "username": "John_Saigle" }, { "code": "db_user", "text": "That makes sense to me in terms of data access. However, this means that it’s not possible to take advantage of Realm’s session management or login functionality, right? Because there’s only db_user , no other user would be logging into Realm.Yeah - that’s generally correct because you are essentially injecting a middleware server in between the Realm clients and the server-side Ream cloud so you need to implement your own session management. One thing you might want to consider is using Realm cloud’s webhooks since they could be used for GETs and POSTs - not sure how complicated you wanted this middleware logic to be but it could work for you -", "username": "Ian_Ward" }, { "code": "", "text": "Fair enough. Thanks for clarifying.", "username": "John_Saigle" }, { "code": "", "text": "Out of curiosity, what APIs of the SDK would you want expose via your Node.js server? Are you planning on using Realm Sync or are you only accessing data via the “MongoDB service” apis?In general all APIs exposed on a specific “user” object in the SDKs should request in the context of that user and not the active user. 
The active user is meant more as a shortcut for people developing apps where a single user is active at any given time.That being said, I am also a bit curious why you want to add a “gateway” in front of the Realm services? Is this because you have your own authentication scheme? Then you might benefit from using a custom JWT provider. As your use case is a game, I hope you’re also considering the downside to this approach, namely that your users will experience a higher latency and your setup won’t benefit from the ability to globally deploy your MongoDB Realm app (again lowering the latency).", "username": "kraenhansen" }, { "code": "", "text": "We’re planning to have a cloud component to our games. For example a user might have a character unlocked in an in-game store. This character may have a specific outfit they can be configured to wear.We would use Realm to, for example, allow someone to customize their character in a web app or in an app on their phone. They would be using Realm login to access their account details on Atlas. This same process would be used to log in from an HTTP call from within the game.The idea for a “gateway” was a solution to there being no “Unreal Engine Realm SDK”. We thought it would be possible for the engine to make HTTP calls to the NodeJS API which would access Realm under the hood. However with only one active user this isn’t possible as the server would need to have some kind of session management middleware as your colleague pointed out.In addition to the login APIs we were using some webhooks and database triggers within Realm. We used remoteMongoClient for data access for users.Ultimately since it looks like this isn’t really the proper use case for Realm we’ll likely move away from Realm and use a different method to access Atlas data via the back-end.", "username": "John_Saigle" } ]
Authentication without the SDK
2020-09-11T23:30:36.144Z
Authentication without the SDK
3,027
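A rough sketch of the gateway pattern Ian describes above, assuming the Node server authenticates once with a server API key and keeps its own session-to-player mapping. The app id, environment variable, service, database, collection and field names are all placeholders rather than details from the thread:

```javascript
const Realm = require("realm");

const app = new Realm.App({ id: "GAME_APP_ID" }); // placeholder app id

async function getServerUser() {
  // one privileged login reused by the whole server
  const credentials = Realm.Credentials.serverApiKey(process.env.REALM_SERVER_API_KEY);
  return app.logIn(credentials);
}

// Example HTTP handler: the server resolves its own session to a player id
// and reads that player's data on their behalf.
async function getPlayerProfile(req, res) {
  const user = await getServerUser();
  const profiles = user
    .mongoClient("mongodb-atlas") // assumed linked-cluster service name
    .db("game")
    .collection("profiles");
  res.json(await profiles.findOne({ playerId: req.session.playerId }));
}
```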
null
[ "student-developer-pack" ]
[ { "code": "", "text": "Hi everyone,\nI’ve started my certification journey at mongodb university with my personal email account instead of university email. Is there any chance to link university accounts to my existing account ? I don’t want to lose my course progress.Thanks.", "username": "bekohitachi" }, { "code": "", "text": "Hi @bekohitachiThank you for reaching out and welcome to the forum. It’s currently not possible to merge accounts, but I’ve send you a DM with more info.Best,Lieke", "username": "Lieke_Boon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to link student account to mongodb university
2020-09-15T09:56:16.806Z
How to link student account to mongodb university
4,876
null
[ "data-modeling" ]
[ { "code": "{\n \"_id\" : ObjectId(\"5e71a1f3081c4b70cdbc438f\"),\n \"DataSetID\" : ObjectId(\"5e71a1f3081c4b70cdbc438e\"),\n \"row\" : [ \n {\n \"key\" : \"Region\",\n \"prev\" : \"root\",\n \"value\" : \"Australia and Oceania\",\n \"typeOfValue\" : \"string\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Country\",\n \"prev\" : \"root\",\n \"value\" : \"Tuvalu\",\n \"typeOfValue\" : \"string\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Item Type\",\n \"prev\" : \"root\",\n \"value\" : \"Baby Food\",\n \"typeOfValue\" : \"string\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Sales Channel\",\n \"prev\" : \"root\",\n \"value\" : \"Offline\",\n \"typeOfValue\" : \"string\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Order Priority\",\n \"prev\" : \"root\",\n \"value\" : \"H\",\n \"typeOfValue\" : \"string\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Order Date\",\n \"prev\" : \"root\",\n \"value\" : ISODate(\"2010-05-27T18:30:00.000Z\"),\n \"typeOfValue\" : \"date\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Order ID\",\n \"prev\" : \"root\",\n \"value\" : 669165933,\n \"typeOfValue\" : \"number\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Ship Date\",\n \"prev\" : \"root\",\n \"value\" : ISODate(\"2010-06-26T18:30:00.000Z\"),\n \"typeOfValue\" : \"date\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Units Sold\",\n \"prev\" : \"root\",\n \"value\" : 9925,\n \"typeOfValue\" : \"number\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Unit Price\",\n \"prev\" : \"root\",\n \"value\" : 255.28,\n \"typeOfValue\" : \"number\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Unit Cost\",\n \"prev\" : \"root\",\n \"value\" : 159.42,\n \"typeOfValue\" : \"number\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Total Revenue\",\n \"prev\" : \"root\",\n \"value\" : 2533654,\n \"typeOfValue\" : \"number\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Total Cost\",\n \"prev\" : \"root\",\n \"value\" : 1582243.5,\n \"typeOfValue\" : \"number\",\n \"currentDepth\" : 1\n }, \n {\n \"key\" : \"Total Profit\",\n \"prev\" : \"root\",\n \"value\" : 951410.5,\n \"typeOfValue\" : \"number\",\n \"currentDepth\" : 1\n }\n ]\n}\n", "text": "So I’ve a use case where I don’t know what data is coming and in which form. The user might be sending a simple JSON object or a n level deep nested JSON object.I need to store it in such a way that I can run complex aggregations on it.\nTo solve that I made the incoming JSON into an array of objects with each object containing\n‘key’ as one key,prev as parent of that key and value of the key as ‘value’\nExample -Now the aggregations work for any type of data using $reduce but it takes up a lot of time. 40 sec to run aggregation on a million records.\nSo", "username": "Siddhant_Shah" }, { "code": "", "text": "Hi @Siddhant_Shah - Were you able to come up with a solution that worked more efficiently?I’m curious why you chose to parse the incoming JSON into an array of objects rather than storing the incoming objects as-is. What is the goal of your aggregation?", "username": "Lauren_Schaefer" } ]
How to store data when you are not sure about the schema
2020-05-16T08:58:28.359Z
How to store data when you are not sure about the schema
2,124
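One thing that may help the key/value (attribute pattern) layout shown above, at least for targeted lookups: a compound multikey index on the array's key and value fields, queried with $elemMatch. It will not speed up a $reduce that touches every document, and the collection name here is invented:

```javascript
// Index the attribute pairs once...
db.getCollection("dataset-rows").createIndex({ "row.key": 1, "row.value": 1 });

// ...then equality lookups on a specific key/value pair can use the index:
db.getCollection("dataset-rows").find({
  row: { $elemMatch: { key: "Country", value: "Tuvalu" } }
});
```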
null
[ "atlas-search" ]
[ { "code": "Remote error from mongot :: caused by :: Index 12 out of bounds for length 12db.getCollection('my-documents').aggregate([\n {\n \"$search\": {\n \"compound\": {\n \"filter\": [\n {\n \"equals\": {\n \"value\": ObjectId('5ea6fa18ef17e8002708feea'),\n \"path\": \"tenantId\"\n }\n }\n ],\n \"must\": [\n {\n \"text\": {\n \"query\": \"Abcd\",\n \"path\": [\n \"field1\",\n \"field2\",\n \"field3\"\n ],\n \"fuzzy\": {\n \"prefixLength\": 2\n }\n }\n }\n ]\n },\n \"highlight\": {\n \"path\": [\n \"field1\",\n \"field2\",\n \"field3\"\n ]\n }\n }\n },\n {\n \"$limit\": 15\n },\n {\n \"$project\": {\n \"_id\": 1,\n \"field1\": 1,\n \"field2\": 1,\n \"field3\": 1,\n \"score\": {\n \"$meta\": \"searchScore\"\n },\n \"highlight\": {\n \"$meta\": \"searchHighlights\"\n }\n }\n }\n])\n", "text": "Hi,\nWhile using Atlas Search feature on M2 cluster I’ve got error Remote error from mongot :: caused by :: Index 12 out of bounds for length 12This is not working query:After few experiments I realised that it will work when I remove “filter” operator or “highlight” option from $search but then I will lose key logic.\nWhat is interesting on M0 tire clusters I don’t have this problem. I’m not sure if it is cluster tire related error or data related error.Thanks for any help.", "username": "Jakub_Zloczewski" }, { "code": "", "text": "Hi @Jakub_Zloczewski,Is it happen for the same index?Can you share index desc and sample document?Have you tried rebuilding the index (drop + create)Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny\nI rather would not share data and index.\nIs there any way someone will look at my cluster?", "username": "Jakub_Zloczewski" }, { "code": "", "text": "Yes, it is happen for the same index.\nI’ve tried rebuilding the index few times and nothing changed.", "username": "Jakub_Zloczewski" }, { "code": "", "text": "Hi @Jakub_Zloczewski,To review your cluster please open a support case or interact with our support through the chat on the right lower side.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Remote error from mongot :: caused by :: Index 12 out of bounds for length 12
2020-09-14T20:18:22.384Z
Remote error from mongot :: caused by :: Index 12 out of bounds for length 12
3,352
null
[]
[ { "code": "import * as RealmWeb from \"realm-web\";\n", "text": "The documentation suggests:Near the top of any JavaScript or TypeScript file that uses Realm, add the following import statement:However this is not valid Javascript and can not be interpreted by any ES6 compatible browser.\nBundlers like Rollup treat node modules as external dependencies and thus don’t resolve the sdk, leaving the import as is which results in an error in the browser.Is it possible to include the SDK via script tag or by importing all required modules from a single esm js file rather than using npm?", "username": "veysi_yalcin" }, { "code": "", "text": "The example assumes that your using a bundler like WebPack, Parcel.js or Rollup.js that understand ES6 modules and resolvese these dependencies before they reach an end-users browser.We’re currently working on releasing Realm Web in a self-executing function (‘iife’) format, which can be included in the package as well as uploaded to a CDN for direct consumption by end users via a URL. Would either of these work in your situation?", "username": "kraenhansen" }, { "code": "", "text": "Thank you for clarifying! IIFE would be ideal for our purposes. Do you have an ETA?", "username": "veysi_yalcin" }, { "code": "", "text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.", "username": "system" }, { "code": "", "text": "", "username": "Stennie_X" }, { "code": "", "text": "Hi – You can follow phase 2 of the Web SDK (which includes this improvement).", "username": "Drew_DiPalma" }, { "code": "distsrc", "text": "Even better is Realm Web: Publish an \"iife\" bundle · Issue #2966 · realm/realm-js · GitHub, which is this specific task of publishing the IIFE.Also @veysi_yalcin, will you be copying this bundle into your own dist directory before publishing your app or are you looking to load it using a script-tag with a src attribute referencing the script on an external HTTP server / CDN?", "username": "kraenhansen" }, { "code": "", "text": "An official CDN link from your end would be ideal.Kind of like Stitch:\nhttps://s3.amazonaws.com/stitch-sdks/js/bundles/4.6.0/stitch.jsHave you considered something like https://www.jsdelivr.com/ ?\nThis would allow us to pull the latest version directly from your github and the CDN is much faster than amazon s3 storage and requires minimal setup on your end…", "username": "veysi_yalcin" }, { "code": "", "text": "Just wanted to share that Realm Web is now published as an IIFE bundle, enabling installs via a script-tag.See install instructions in the readme: realm-web - npm", "username": "kraenhansen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can not import realm web-sdk into vanilla JS application
2020-06-11T19:36:19.509Z
Can not import realm web-sdk into vanilla JS application
3,334
null
[ "data-modeling" ]
[ { "code": "", "text": "HelloI come from a world of relational database butI curious about nosql for a project I have in mind.I like to make a invoice app and now I need to make the database layout.For a screen I need all the customer, another displayed subscriptions and another invoices.in a relatioonal database I would say.\nA customer can have multiple invoices and a invoice can have multiple subscriptions,Now I wonder how this works with a nosql database.Roelof", "username": "Roelof_Wobben" }, { "code": "", "text": "Hello @Roelof_Wobben\nIt is great that you want to utilize the strong features of MongoDB. As you mention you have a solid SQL background. To get the most out of an noSQL Setup, you need to change the way of thinking about schema design. Your first goal will no longer be to get the maximal normalized Schema, Denormalization is not bad, the requirement of your queries will drive your design. The story will start to think about a good schema design. In case you move the SQL normalized Data Model 1:1 to MongoDB you will not have much fun or benefit.You can find further information on the Transitioning from Relational Databases to MongoDB in the linked blog post. Please note also the links at the bottom of this post, and the referenced migration guide .Since you are new to MongoDB and noSQL I highly recommend to take some of great and free classes from the MongoDB Univerity:Generally data modelling is a broad topic to discuss, this is due to many factors that may affect the design decision. One major factor is the application requirements, knowing how the application is going to interact with the database. With MongoDB flexible schema characteristic, developers can focus on the application design and let the database design conform for the benefit of the application. See also : MongoDB A summary of all the patterns we’ve looked at in this seriesYou may also can checkout:This is just a sample which can get you started very well. In case this is going to be a mission critical project\nI’d recommend getting Professional Advice to plan a deployment There are many considerations, and an experienced consultant can provide better advice with a more holistic understanding of your requirements. Some decisions affecting scalability (such as shard key selection) are more difficult to course correct once you have a significant amount of production data.Hope this helps to start, while getting familiar and all time after, feel free to ask you questions here - we will try to help.Regards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "You can also see this course M100 MongoDB for SQL ProsUniversity courses are free,to watch the videos just register,and if you want you will do the exercises\nto get certification,else you watch just the videos.M320 was nice M100 is the same instructor Daniel Coupal", "username": "Takis" }, { "code": "", "text": "Thanks all,This is not for a mission critical project but a project that I made for a hobby.\nI look now at mongo because I develop in smalltalk and there mongo is a lot used", "username": "Roelof_Wobben" }, { "code": "", "text": "Congrats!I started with PL/1 and smalltalk/DB2 in the early 90th and some code is still active \nSince a long time, unfortunately, I had no smalltalk project - good to hear that it is still around a that MongoDB is used there. 
I will do some research just out of curiosity.Michael", "username": "michael_hoeller" }, { "code": "", "text": "Hello @Takis,thanks for pointing to M100 MongoDB for SQL Pros I missed that out to mention. It is a new Course, new style, the focus of M100 is more on the transition from SQL (tabular databases aka as relationonal) to MongoDB. Where as M320 is focusing an schema design aspects. @Roelof_Wobben I’d recommend both just go for the M100 first.Cheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "ThanksI work now a lot with Pharo (https://pharo.org/).\nNo idea if it’s still the same as the orginal smalltalk", "username": "Roelof_Wobben" }, { "code": "", "text": "I look now at mongo because I develop in smalltalk and there mongo is a lot usedHi @Roelof_Wobben,Are you using a MongoDB driver or API with Pharo? If so, which approach are you using and how has your experience been so far?Pharo doesn’t currently have an officially supported driver, but it looks like there is a community-supported driver with partial feature support: GitHub - pharo-nosql/mongotalk: A Pharo driver for MongoDB.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "in a relatioonal database I would say.\nA customer can have multiple invoices and a invoice can have multiple subscriptions,Now I wonder how this works with a nosql database.The relationships are the same, as the data entities are same - the one-to-many, one-to-one, etc. How you store the data is the main difference between relational and MongoDB document model (and of course, how you query too).The one-to-many relationships can be modeled with document references or with embedded documents. See Model Relationships Between Documents.The customer has the customer details, and the subscriptions (or plans subscribed to). The subscriptions, can be multiple. These can be stored within the customer collection as an array of subscription objects (or embedded documents).The invoices have header and lines. These are stored as a single collection. The header details and lines as an array of line objects. Each line represents the subscription billing info.", "username": "Prasad_Saya" }, { "code": "", "text": "@Stennie_X I use this as driver : voyage you can find it in the same repomy xp for now is good but I have to say I only have used it in a tutorial.@Prasad_Saya Thanks for showing me this", "username": "Roelof_Wobben" }, { "code": "", "text": "I use this as driver : voyage you can find it in the same repoHi @Roelof_Wobben,Thanks for confirming! Voyage is an object persistence abstraction on top of the mongotalk driver I found earlier, but also supports a few different NoSQL databases.I haven’t played with Smalltalk for a long long while, but like @michael_hoeller I’m also curious to see how it has evolved .Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I model relationships with MongoDB?
2020-09-13T20:39:02.963Z
How do I model relationships with MongoDB?
4,934
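A small sketch of the layout Prasad describes above: customers embed their subscriptions, invoices embed their lines and hold a reference back to the customer. All names and values are invented for illustration:

```javascript
db.customers.insertOne({
  _id: 1,
  name: "Acme BV",
  subscriptions: [
    { plan: "hosting", price: 10 },
    { plan: "backup", price: 5 }
  ]
});

db.invoices.insertOne({
  _id: 100,
  customerId: 1, // reference back to the customer document
  issuedOn: ISODate("2020-09-01T00:00:00Z"),
  lines: [
    { plan: "hosting", amount: 10 },
    { plan: "backup", amount: 5 }
  ],
  total: 15
});

// "All invoices for this customer" is then a single query:
db.invoices.find({ customerId: 1 });
```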
null
[ "indexes", "mongoid-odm" ]
[ { "code": "strength", "text": "I’m trying to do a case insensitive sort of a nested dynamic field using a wild card index.\nI set the collation strength to various values but I’m not getting a case insensitive sort order.\nThe index was created and did not produce an error message.How can I get a case insensitive sort?Thanks", "username": "Derek_Lee" }, { "code": "collation.collation(locale: \"en\")", "text": "Hi, I solved my issue. I am using the mongoid ruby driver, and I had to add the collation to my query to choose the collation defined for the index. For example .collation(locale: \"en\") and it works.", "username": "Derek_Lee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Do wildcard indices support collation?
2020-09-14T20:58:38.068Z
Do wildcard indices support collation?
3,054
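The same fix in mongo shell terms, for readers not using Mongoid: give the wildcard index a case-insensitive collation and repeat that collation on the query. Whether the index can actually serve the sort depends on the query shape, and the collection and field names are illustrative:

```javascript
db.items.createIndex(
  { "attributes.$**": 1 },
  { collation: { locale: "en", strength: 2 } } // strength 2 = case-insensitive
);

db.items.find({ "attributes.title": { $exists: true } })
  .sort({ "attributes.title": 1 })
  .collation({ locale: "en", strength: 2 }); // must match the index collation
```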
null
[ "react-native" ]
[ { "code": "[email protected]", "text": "I’m using Realm with my React Native application and MongoDB Realm SDK version for React Native is [email protected] can I use this for production-ready application?", "username": "chawki" }, { "code": "", "text": "In this instance, it’s safe to assume that the version is production ready, as in we’re unaware of major issues that will corrupt or break when out of beta and in production. It’s still a beta version, so we do reserve the “right” to make breaking changes to the API, but we don’t expect to. General availability is expected towards the end of the year.", "username": "Shane_McAllister" }, { "code": "", "text": "@Shane_McAllister Respectfully I disagree with you. I am making a simple application at this time and very basic features in the Node SDK are not working, such as Facebook and Google authentication or even something as simple as printing a Realm object’s details to the console.See for example these GitHub issues<!---\n\nQuestions: If you have questions about HOW TO use Realm, please ask on\nSt…ackOverflow: http://stackoverflow.com/questions/ask?tags=realm\nWe monitor the `realm` tag.\n\nFeature Request: Just fill in the first two sections below.\n\nBugs: To help you as fast as possible with an issue please describe your issue\nand the steps you have taken to reproduce it in as much detail as possible.\n\n-->\n\n## Goals\n\n\nLogging into Realm via Google OAuth2\n\n## Expected Results\n\n\nSuccessful processing of my submitted `id_token` from Google.\n\n## Actual Results\n\n\n\nLogin failed. Error message: `{message: 'error exchanging access code with OAuth2 provider', code: 47}`\n\nError message from MongoDB Realm logs.\n\n![google](https://user-images.githubusercontent.com/4022790/89660471-60950500-d89f-11ea-9e5b-7a755b03ce5e.PNG)\n\n\n## Steps to Reproduce\n\n\nFollow the guide from the Realm docs. (https://docs.mongodb.com/realm/authentication/google/)\n\nSpecifically I did the following:\n* Create new credentials in Google developer console\n* Enabled the auth provider on Realm.\n* Created a client secret `google` that has my client secret from Google console\n* Enabled all relevant callback URIS, authorized domains, etc. with Google and Realm\n* Created a Google sign in button which gave me an `id_token` which was successfully sent and processed by NodeJS.\n\nWhen running the `realmApp.logIn(credentials)` function, this error message appeared.\n\n## Code Sample\n\n<!---\nPlease provide a code sample or test case that highlights the issue.\nIf relevant, include your model definitions.\nFor larger code samples, links to external gists/repositories are preferred.\nFull projects that we can compile and run ourselves are ideal!\n-->\n\n### Front-end sending tokens to back-end\n![googlefrontend](https://user-images.githubusercontent.com/4022790/89661368-9dadc700-d8a0-11ea-8e1a-4bb6e393e7f1.PNG)\n\n\n### Node back-end processing (fails on line 86 with error message above)\n\n![googlebackend](https://user-images.githubusercontent.com/4022790/89661359-9a1a4000-d8a0-11ea-9029-789141b6b57f.PNG)\n\n\n## Version of Realm and Tooling\n\n- Realm JS SDK Version: `10.0.0-beta.9`\n- Node or React Native: Node\n- Client OS & Version: Ubuntu 18.04 LTS\n- Which debugger for React Native: n/aAnd many more on the github repository.I don’t think it’s responsible to recommend that this software is production-ready when basic authentication features or printing objects to the console is not supported. 
It appears that there are some growing pains to sort out from the migration from Stitch.It took me weeks of development to realize that Realm was in beta; the documentation does not make this clear.", "username": "John_Saigle" }, { "code": "toJSON()console.log()", "text": "@John_Saigle To be fair, I believe Shane was saying that we don’t foresee any major breaking changes with the local Realm SDK API. Production ready is really a call that only you, as the product owner of your app, can make. There are a myriad of apps on the Apple/Play store that are compiled with beta libraries and MongoDB Realm has several customers already in production even though it is beta. So it is “production-ready” for those customers but perhaps it’s not for you - which is why we apply the beta tag, to serve as a warning.The Google OAuth issue you linked to is actually because of a hard-break in how Google encodes their tokens - we are not the only platform running into this problem. We are actually meeting with the Google engineering team next week to rectify this.The issue with not being able to print an object is an unfortunate side effect of how Realm works as an embedder with the JS VM and uses reflection to invoke native functions. There is a simple workaround, you can use toJSON() to print the object - in the future we can look into ways of improving convenience methods for developer ergonomics. We could not embed into the JS VM which would give you the ability to console.log() Realm objects - but then this would lose all of the magic that makes Realm great - such as inferring your database schema from your class definitions, live objects, the notification system, etc. - to us it seems like a fair tradeoff but we are always looking for ways to improve.", "username": "Ian_Ward" }, { "code": "", "text": "@Shane_McAllister thanks for the response, and I’m ready to start building my app with Realm.", "username": "chawki" }, { "code": "", "text": "Thanks for replying @Ian_Ward. I’m happy that there are solutions underway. I’ll use toJSON() going forward.", "username": "Martin_Bradstreet" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why the Realm for React Native is still Beta?
2020-09-11T04:38:50.449Z
Why the Realm for React Native is still Beta?
4,421
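For anyone landing here because of the console-printing issue, the toJSON() workaround Ian mentions is roughly a one-liner (the object name is hypothetical):

```javascript
// Convert the live Realm object to a plain object before logging it
console.log(JSON.stringify(someRealmObject.toJSON(), null, 2));
```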
null
[ "java" ]
[ { "code": " BsonDocument uuidDoc = _clientSession.getServerSession().getIdentifier();\n _db.runCommand(new Document().append(\"refreshSessions\", new BsonDocument[] { uuidDoc }));\norg.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class [Lorg.bson.BsonDocument;.\n\tat org.bson.codecs.configuration.CodecCache.getOrThrow(CodecCache.java:46) ~[mongo-java-driver-3.11.2.jar:?]\n", "text": "I’d like to issue a “refreshSession” command to the server with the serverSession. Here is what I try in the Java driver:which leads to:How do I use the refreshSession command in the Java driver correctly? Anyone got an example? I can’t find one on google it seems…", "username": "Mike" }, { "code": "", "text": "Programming Error… you need to use a List, not an Array. Question solved.", "username": "Mike" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using "refreshSession" in Java
2020-09-14T09:36:08.278Z
Using “refreshSession” in Java
2,682
null
[ "monitoring" ]
[ { "code": "", "text": "I’m using MongoDB of version 4.2.8The storage engine is wiredTigerI entered the db.serverStatus().wiredTiger to monitor cache usage, but storageEngine and wiredTiger field are not shown.On the other hand, consist of standalone server can see these fieldCan’t we check in shard cluster mode?", "username": "choi_seunghwan" }, { "code": "", "text": "db.serverStatus().wiredTigerIt should if you connect to mongod. There is no storage engine info for mongos. You can check it easily by using command db.serverStatus().process.", "username": "ken.chen" }, { "code": "", "text": "Thanks i forgot mongos dont have datas", "username": "choi_seunghwan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
db.serverStatus().storageEngine are not shown
2020-09-14T10:14:29.439Z
db.serverStatus().storageEngine are not shown
2,737
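A quick way to see the distinction ken.chen points out: check which process you are connected to before looking for storage statistics, since the wiredTiger section only exists on a mongod.

```javascript
db.serverStatus().process
// "mongos" on a query router (no storage engine section),
// "mongod" on a shard/config/standalone node, where e.g. this works:
db.serverStatus().wiredTiger.cache["bytes currently in the cache"]
```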
null
[ "queries" ]
[ { "code": "\"gameData\": {\n \"gold\": 0\n}\nmyCollection.findOne( \n { \"userData.username\": username},\n { \"gameData.gold\": 1, \"_id\": 0} \n);\n{\"gameData\":{\"gold\":{\"$numberLong\":\"0\"}}}", "text": "This is part of my document:Through the findOne method, I wanted to obtain the value of “gold” in the document.The result is: {\"gameData\":{\"gold\":{\"$numberLong\":\"0\"}}}I wanted to know if it was possible to have only the value of “gold” as a result, or a normal JSON without the “$numberLong” key (I’m not sure but I think the current result is in BSON format, but I couldn’t find a function to convert it to JSON).Thanks.", "username": "Andrea" }, { "code": "goldgoldgameDatamongoconst doc = db.test.findOne( { }, { \"gameData.gold\": 1, \"_id\": 0 } );\nprintjson(doc.gameData)\n{ \"gold\" : NumberLong(12345678) }db.test.aggregate( [ \n { \n $project: { _id: 0, gold: \"$gameData.gold\" } \n } \n] )\n", "text": "I wanted to know if it was possible to have only the value of “gold” as a result, or a normal JSON without the “$numberLong” key (I’m not sure but I think the current result is in BSON format, but I couldn’t find a function to convert it to JSON).Hello Andrea,The value of gold is in JSON format, in the output. It is the the MongoDB representation of BSON value in JSON format. See:To get the value of gold field’s value only from the embedded document gameData, you can use one of the following ways from mongo shell:Prints: { \"gold\" : NumberLong(12345678) }Or, an aggregation query:", "username": "Prasad_Saya" }, { "code": "{ \"gold\" : 12345 }", "text": "Hi @Prasad_Saya ,\nthank you for your answer.\nSo there is no possibility to receive a string like { \"gold\" : 12345 } as return?", "username": "Andrea" }, { "code": "doubledoublelongdoubledb.test.aggregate( [ \n { \n $project: { _id: 0, gold: { $convert: { input: \"$gameData.gold\", to: \"double\" } } }\n } \n] )", "text": "You can. The default numeric data type for MongoDB is a double. If you dont use any type it will be a double. So, convert the long to a double and you get the result in the desired format.", "username": "Prasad_Saya" }, { "code": "doc = await myCollection.findOne( \n { \"userData.username\": username},\n { \"gameData.gold\": 1, \"_id\": 0} \n);\n\nresponse.setBody(JSON.stringify(doc));\n", "text": "Hii @AndreaI assume that since we chatted about MongoDB realm this is a return of a document from a webhook in EJSON format which is what could be expected when just returning it.However, you could set the response body with the query result parsed to JSON:See this docs:Let me know if I am correct. Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you both very much for your answers!\nPavel, you understood perfectly what I needed. Thank you very much!", "username": "Andrea" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Result of findOne method without value type
2020-09-13T19:40:32.091Z
Result of findOne method without value type
4,499
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 3.6.20 is out and is ready for production deployment. This release contains only fixes since 3.6.19, and is a recommended upgrade for all 3.6 users.\nFixed in this release:3.6 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 3.6.20 is released
2020-09-14T17:36:51.513Z
MongoDB 3.6.20 is released
2,284
null
[ "queries", "python" ]
[ { "code": " {\"_id\":{\"$oid\":\"5f5d3ffc88e588d51ced6193\"}, \"test_idea\":[\"information1\",\"information2\"], \"subtopic\":\"subtopic_test\", \"test_idea2\":[\"information3\"]}collection.find({\"subtopic\": {\"$regex\": str(\"/%s/\" % query), \"$options\": 'i'}})str(\"/%s/\" % query)", "text": "Hello all, this is my first time on the forums, so please excuse any mistakes I might make with this post. With that being said, I am currently using pymongo for a small web app which I plan to use to take notes in school. I am attempting to implement a search function to allow me to search for documents containing certain topics.This is my data inside mongodb atlas:\n {\"_id\":{\"$oid\":\"5f5d3ffc88e588d51ced6193\"}, \"test_idea\":[\"information1\",\"information2\"], \"subtopic\":\"subtopic_test\", \"test_idea2\":[\"information3\"]}I am querying the ‘subtopic’ field only. The code I am using to query it is:\ncollection.find({\"subtopic\": {\"$regex\": str(\"/%s/\" % query), \"$options\": 'i'}})Just to clarify that code a bit, the str(\"/%s/\" % query) portion just adds ‘/’ before and after the query. In spite of this code being almost identical to the docs, it returns nothing. I’ve been stuck on this for several hours now, and I would appreciate any advice I could get.If you want to see the full code, it is located at: https://repl.it/@emotionbot/jacknotes#database.py", "username": "Jack_Hodge" }, { "code": "query = 'test'\ncollection.find_one({\"subtopic\": {\"$regex\": query, \"$options\": 'i'}})\n", "text": "Hi @Jack_Hodge and welcome in the MongoDB Community !You have exactly the right syntax but there is just a little detail ! You don’t need the slaches in this type of query because the information that the string is a regex is already in $regex so…This works for me.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$regex not working despite following exactly what the documentation says
2020-09-13T11:02:55.510Z
$regex not working despite following exactly what the documentation says
7,646
null
[]
[ { "code": "", "text": "I have a three node cluster with a mongos query router.\nWhen connecting to the mongos instance it get’s stuck on connecting.\nConnecting directly to the mongod instance works.Is there anyway to increase the logging of mongos?\nThe logs just say it’s received a client connection.", "username": "Martin_Nystrom" }, { "code": "", "text": "Hi @Martin_Nystrom and welcome in the MongoDB Community !I don’t understand the topology of your cluster. Usually a “3 nodes cluster” is a replica set (RS) and a RS doesn’t need a mongos. Only a sharded cluster would need a few mongos to work properly. If you are running a sharded cluster with 3 shards and they are not replicated, it’s really not safe. If one of the node fails, you will lose a third of your DB, given that you chose a shard key that distribute the data evenly across the 3 single node RS shards.If this is really what you are doing:I hope this helps,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Sorry I was a bit unclear. It’s really a 8 node cluster. 3 replicated config nodes, 3 replicated shard nodes and two query routers.\nI followed this guide while setting up the cluster with a few modifications of my own:\n716×556 9.67 KB\nThe only difference is that I have three shard nodes and it’s not open for Internet acess.\nThank you for the link to the verbosity settings. I will see if I can get some more info from the query router, seems like there is nothing wrong with the config replica set or the shard replica set. Since both replica sets have a working primary.", "username": "Martin_Nystrom" }, { "code": "", "text": "Oh that looks great ! Way more production ready !Can you please provide more information like:If I understand correctly, you were never able to connect to the mongos and thus finish the setup by adding the 3 replica sets in the configuration. 
Correct?What’s the error that you get exactly?", "username": "MaBeuLux88" }, { "code": " mongo 10.4.44.88 -u mongo-admin -p --authenticationDatabase admin mongo localhost -u mongo-admin -p --authenticationDatabase adminconnecting to: mongodb://10.4.44.88:27017/test/usr/bin/mongos --config /etc/mongos.conf[Unit]\nDescription=Mongo Cluster Router\nAfter=network.target\n\n[Service]\nUser=mongodb\nGroup=mongodb\nExecStart=/usr/bin/mongos --config /etc/mongos.conf\n# file size\nLimitFSIZE=infinity\n# cpu time\nLimitCPU=infinity\n# virtual memory size\nLimitAS=infinity\n# open files\nLimitNOFILE=64000\n# processes/threads\nLimitNPROC=64000\n# total threads (user+kernel)\nTasksMax=infinity\nTasksAccounting=false\n\n[Install]\nWantedBy=multi-user.target\nroot@zenv-0689:~# cat /etc/mongos.conf \n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongos.log\n logRotate: reopen\nprocessManagement:\n pidFilePath: /var/run/mongodb/mongos.pid\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\nsecurity:\n keyFile: /var/lib/mongodb/keyfile\n\nsharding:\n configDB: configReplSet/mongo-config01:27019,mongo-config02:27019,mongo-config03:27019\n/usr/bin/mongod --config /etc/mongod.conf# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n logRotate: reopen\nprocessManagement:\n pidFilePath: /var/run/mongodb/mongod.pid\n\n# network interfaces\nnet:\n port: 27019\n bindIp: 0.0.0.0\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n keyFile: /var/lib/mongodb/keyfile\n\nreplication:\n replSetName: configReplSet\n\nsharding:\n clusterRole: \"configsvr\"\n/usr/bin/mongod --config /etc/mongod.conf# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n logRotate: reopen\nprocessManagement:\n pidFilePath: /var/run/mongodb/mongod.pid\n\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\nsecurity:\n keyFile: /var/lib/mongodb/keyfile\n\n\n#operationProfiling:\nreplication:\n replSetName: Shard00\n\n#sharding:\nsharding:\n clusterRole: shardsvr\n\n\n\n#auditLog:\n\n#snmp:", "text": "Well it’s very strange I was running this setup without issues for over 6 months and all of a sudden I was unable to connect through mongos.\nThese are the commands I use to connect to the mongos instance: mongo 10.4.44.88 -u mongo-admin -p --authenticationDatabase adminTo rule out network issues I have tried SSHing directly to the host and using mongo client there: mongo localhost -u mongo-admin -p --authenticationDatabase adminI don’t really get any errors it sort of just times out… It just says:connecting to: mongodb://10.4.44.88:27017/testEven after increasing the verbosity of the mongos instance I was unable to determine what causes the client to get stuck.Mongos 
configuration:Start command:\n/usr/bin/mongos --config /etc/mongos.confmongos.conf:Mongod config servers:Start command:/usr/bin/mongod --config /etc/mongod.confmongod.conf:Mongod shard servers:Start command:/usr/bin/mongod --config /etc/mongod.confmongod.conf:", "username": "Martin_Nystrom" }, { "code": "", "text": "Hi,I resolved the issue by upgrading from 4.2 to 4.4.1", "username": "Martin_Nystrom" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I troubleshoot mongos? Anyway to increase logging?
2020-09-07T11:07:06.221Z
How can I troubleshoot mongos? Anyway to increase logging?
2,563
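Since the original question (how to increase mongos logging) is answered above only by a link, here is the usual shell-level approach, sketched for completeness; if the router will not accept shell connections at all, the equivalent is raising systemLog verbosity in mongos.conf and restarting.

```javascript
// Raise overall verbosity on the process you are connected to (0-5, higher = noisier):
db.adminCommand({ setParameter: 1, logLevel: 2 });

// Or raise it only for specific components:
db.setLogLevel(2, "network");
db.setLogLevel(2, "command");
```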
null
[]
[ { "code": "query = [\n \n\t{'$match': {'EventTypCD': {'$ne': '04'}}},\n\t\n {\n $group: { \n '_id':\"$DelNo\",\n 'max':{'$first':\"$$ROOT\"}\n }\n },\n {\n '$sort': {\n 'EventDTM': -1\n }\n }\n]\n", "text": "[thread1] Assertion: 10334:BSONObj size: 17114398 (0x105251E) is invalid. Size must be between 0 and 16793600(16MB) First element: 0: { _id: “0714015625”, max: { _id: ObjectId(‘5f55945299a36c0a0d18f386’), EventDTM: “2020-09-07T02:00:10.592”, EventTypCD: “03”, DelNo: “0714015625”, Cust_Ship_To_ID: “0012607572”, Cust_Ship_To_Address: “T.SALA DAENG, A.MUANG 3/3 M.2 14000 ANG THONG Lopburi TH”, SoldToID: “0012607572”, Delivery_Event_Message_Text: “2 hours before delivery”, Planned_Delivery_DateTime: “2020-09-07T04:00:00.000”, Purchase_Order_Number: “”, Sales_Order_Number: “0249278737”, Source_Application_ID: “1”, Time_To_Delivery_Period: “02:00”, DELIVERY_ITEM: [ { Product_Code: “550047529”, Product_Name: “RimR3Turbo20W50CH4_1*209L_A227”, Shipment_Quantity: “1”, Unit_Of_Measure_Code: “EA” } ] } } src/mongo/bson/bsonobj.cpp 101\n2020-09-07T10:43:13.471+0000 E - [thread1] Assertion: 10334:BSONObj size: 17114476 (0x105256C) is invalid. Size must be between 0 and 16793600(16MB) src/mongo/bson/bsonobj.cpp 101\n2020-09-07T10:43:13.471+0000 E - [thread1] Assertion: 10334:BSONObj size: 17114501 (0x1052585) is invalid. Size must be between 0 and 16793600(16MB) src/mongo/bson/bsonobj.cpp 101\n2020-09-07T10:43:13.472+0000 E QUERY [thread1] Error: BSONObj size: 17114501 (0x1052585) is invalid. Size must be between 0 and 16793600(16MB) :My Query is:db.DELIVERY_EVENT.aggregate(query).pretty()Kindly help!!!", "username": "Neha_Sinha" }, { "code": "", "text": "Hello @Neha_Sinha welcome to community!The maximum BSON document size is 16 megabytes, you seem to exceed this limit.In a first step I’d suggest to review your document schema, do you really need to have such a huge document? Or do you try to store files in the collection, in that case gridFS is your friend.This is a starting point to help you to solve your issue. In case you get stuck, feel free to provide further details, we will try to help.Cheers,\nMichael", "username": "michael_hoeller" } ]
I am Getting an error while running my query kindly suggest the change
2020-09-14T06:48:26.883Z
I am Getting an error while running my query kindly suggest the change
9,462
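A guess at the intended query, offered because the thread ends without a concrete fix: sorting on EventDTM after the $group cannot work since that field no longer exists at the top level, so the usual latest-event-per-DelNo shape sorts first and then groups, with allowDiskUse for large collections. This addresses the pipeline logic; whether it alone clears the 16MB assertion depends on the data:

```javascript
db.DELIVERY_EVENT.aggregate(
  [
    { $match: { EventTypCD: { $ne: "04" } } },
    { $sort: { EventDTM: -1 } },                           // newest event first
    { $group: { _id: "$DelNo", latest: { $first: "$$ROOT" } } }
  ],
  { allowDiskUse: true }
);
```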
null
[]
[ { "code": "const ProfileSchema = new mongoose.Schema({\n user: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"user\",\n },\n favoritedisc: {\n type: String,\n },\n disccollection: [\n {\n discname: {\n type: String,\n required: true,\n },\n discmanufacturer: {\n type: String,\n }, }, ],});\n", "text": "Hi! First post and relatively new to mongo db. Also I have hit my google limit and might switch back to sql database if I can’t figure this out soon Question: How do I create a document of the schema below? Specifically how do I create the profile with only the array data field, disccollection, filled out. I have this schema implemented in the demo, however I have to create the profile first. Then after the profile is created, then I was able to add data to the disccollection field.To restate the question: How do I create a document in mongo db atlas that is only an array of objects.I am using nodejs and mongoose to connect with mongo atlas.Background info: The demo and github for the app I am working is listed belowWorking demo (it is using a free dyno on mongo atlas so give it a second to load if the dyno is asleep ). If you register an account, you will have to create a profile, before being able to create your disc golf bag. If you click on Create Disc Bag it just goes to empty page. That is not related to my mongo db issue.\nhttps://enigmatic-beyond-47734.herokuapp.com/Thanks, Dan", "username": "Daniel_Westlund" }, { "code": "{\n_id : ...,\ndisccollection : [ { ... } , ...]\n}\n", "text": "Hi @Daniel_Westlund,Not sure what you mean by a document who “is just an array of documents”But in MongoDB the basic storing structure is JSON document with the _id unique field. So you can still create documents like:Perhaps I don’t understand your intention but when query you can project only the array field and your clients will get just this fields array…Please note that we have programs like github students pack where you can get credit and continue with Atlas .Plus atlas have free tier clusters so you don’t have to pay for those.MongoDB Student PackWant the perks of Atlas without the price tag? Try MongoDB Atlas’ M0 Free Tier cluster for testing and exploration today.Let me know what am I missing.Best\nPavel", "username": "Pavel_Duchovny" } ]
How to create new Document that is an array
2020-09-14T05:15:03.647Z
How to create new Document that is an array
7,180
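A minimal sketch of what Pavel describes: the profile is still an ordinary document, it just carries only the array field plus the user reference. The model name and the source of the user id are assumptions, not taken from the demo code:

```javascript
const Profile = mongoose.model("profile", ProfileSchema);

await Profile.create({
  user: req.user.id, // assumed to come from the app's auth middleware
  disccollection: [
    { discname: "Destroyer", discmanufacturer: "Innova" },
    { discname: "Buzzz", discmanufacturer: "Discraft" }
  ]
  // favoritedisc simply omitted - it is optional in the schema
});
```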
https://www.mongodb.com/…e2cd6d0d089.jpeg
[ "configuration" ]
[ { "code": "", "text": "I was baffled, I could not see any difference between the (2) mongod.cfg files I was playing with. One would work, the other would not.To the naked eye, the only difference was an extra line. Could the startup process be so delicate and so flimsy that an innocent extra line could cause a failure to start up the mongodb database? Apparently so.After further examining that innocent extra line in the mongod.cfg file, I found I had pressed the “tab” key, so if I kept the extra line in, but with no “tab” indentation in the line, I could successfully startup the mongodb database. However, put that “tab” in the extra blank line, it fails to startup mondodb.I am baffled, how easily this can happen. 4 hours ago I made a change to the mongod.cfg file according to the online manual at https://docs.mongodb.com/manual/tutorial/enable-authentication/, and then it did not work.Well I proceeded to backout line by line and trying with no success, until I was down to the only difference between the mongod.cfg that worked and the mongod.cfg the did NOT work was an extra line with a “tab” indentation that is not visible to the naked eye.Question:\nWhat is wrong with the mongo configuration reader that it cannot handle and extra “tab” in a line?See my attached two mongod.cfg files\nworks-mongod801×1006 142 KB\n \nnotWorking-mongod795×1006 138 KB\n\n#1. mongod.cfg <— that works\n#2. mongod.cfg <— that does not work with one extra blank line. See line number 12. Mongodb supplies no log messages in an empty mongod.log to analyze. It fails hard and fast.Mongodb version 4.4.0\nWindows 10 pro 64bit\nEditor windows Notepad++", "username": "Wally_Bowles" }, { "code": "", "text": "The mongo config format is YAML . It would have to pass as valid syntax.\nBecause your config is invalid the file logger would not have initialised.If invoking mongod on the command line there would likely have been an error along the lines of:mongod -f mongod.conf\nError parsing YAML config file: yaml-cpp: error at line 8, column 10: illegal map value\ntry ‘mongod --help’ for more informationAs you are on windows I’ll go ahead and assume this is being logged in event viewer as the Service Control Manager is starting and stopping mongod if you installed as a service.", "username": "chris" }, { "code": "", "text": "Thanks, Chris, for your quick response.\nI did not know it was a YAML file based on the suffix “mongod.cfg”\nBut it makes sense, I validated my mongod.cfg file with an extra line and tab in it at http://www.yamllint.com/\nAnd it does not pass:\n(): found a tab character that violate intendation while scanning a plain scalar at line 11 column 14I am not used to working with YAML files.\nYAML files are so well indented to be readable (but … in YAML, you indent with spaces, not tabs, in addition, there MUST be spaces between element parts.)\nI guess YAML files are more readable, but the con is they are harder to type up or change them, you need a linter or compiler to get the syntax correctly typed up.I will use the linter to validate the file if I make more changes.\nI am running the zip installation on windows. Starting mongod.exe up with a *.bat file which points to the mongod.cfg file.Thanks again,\nWally", "username": "Wally_Bowles" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
An innocent “tab” indentation in the mongod.cfg file fails to startup the database, Why?
2020-09-13T20:37:48.830Z
An innocent “tab” indentation in the mongod.cfg file fails to startup the database, Why?
2,953
null
[]
[ { "code": "", "text": "I have gone through the intro monoDB University course as well as the course on schemas. I’ve gone through the tracker tutorial and it works. I also have checked out the MongoDB twitch offerings. I am able to setup a backend that connects to my iOS app and create or sign-in a user.I have a working local realm in an existing swift-based iOS app, and I would like users to be able to remain offline totally if they wish. If they decide to go online, I’d like MongoDB as the backend and for data to sync.There are a few fundamental things I need to know:(1) When I start using sync, I can no longer see my local realm using realm studio or realm browser. It wants an encryption key to allow access to the local file. I understand this has something to do with different histories between local and remote (though I haven’t successfully synced anything…see below), but it would be very useful if I could see my local realm for understanding what my CRUD operations are doing locally vs. on Atlas. Is there any way to view my local file?(2) I don’t understand how my local realm interacts with the Atlas cluster and realmsync. For example, do I need to perform separate writes to update both the cluster and my local realm? Or, does sync take care of both the local and the remote? Do I need to have the Atlas schemas set up for all the objects in my app if I don’t need to write those? If I want to only write a subset of my realm objects to MongoDB but want the full set saved locally, how do I do that?(3) Is there a list of error codes and what they mean somewhere? If so, I can’t find it. The lack of that resource makes it very difficult for me to learn what I’m doing wrong. For example, I’m getting the following error: Sync: Connection[1]: Session[1]: Received: ERROR(error_code=225, message_size=180, try_again=0).Thank you.", "username": "Donal_MacCoon" }, { "code": "localRealmsyncRealmEnding session with error: additive schema change: adding schema for Realm table \"myObject\", additive changes from clients are restricted when developer mode is disabled (ProtocolErrorCode=225)", "text": "@Donal_MacCoon You do not need to write to both the local realm and MongoDB if you are opening a synced Realm - we take care of all of that for you - that’s the magic! You can read more about it here:\nhttps://docs.mongodb.com/realm/sync/overview/#think-offline-firstYou do need to have your schema set up on the MongoDB Realm cloud - you define it manually here:\nOr use developer mode (designed for mobile developers) - which defines your schema on the serverside from the mobile data model\nhttps://docs.mongodb.com/realm/sync/enable-development-mode/If you want to only sync some objects - one way you could accomplish this is by using both a synced realm and a non-synced realm. Any objects you don’t want to sync but store locally just store in the local realm. You would hold a reference to each separate realm? ie. localRealm and syncRealmError code 225 is a schema mismatch error - you can read more about it here -We are working on porting these docs over. If you check your server logs you will see something like this:Ending session with error: additive schema change: adding schema for Realm table \"myObject\", additive changes from clients are restricted when developer mode is disabled (ProtocolErrorCode=225)", "username": "Ian_Ward" }, { "code": "", "text": "Thank you so much for the helpful response. This got me past some blocked points. 
I’m inferring from the lack of response on being able to view local realms that one should simply check work based on the synced realm.Also the error document you reference doesn’t appear to actually have a error code 225 but maybe I’m missing something.Am I right in thinking that once a user goes online for the first time, the local data base will have to have their user ID number (assigned when signed up) to all the _partition variables before they will be synced appropriately on atlas?Thanks again!", "username": "Donal_MacCoon" }, { "code": " let partitionValue = app.currentUser()!.identity!\n self.realm = try! Realm(configuration: user.configuration(partitionValue: partitionValue))\n\n", "text": "No you should be able to open a local synced realm. See - Fixes #392 open a local sync Realm by nhachicha · Pull Request #498 · realm/realm-studio · GitHubAm I right in thinking that once a user goes online for the first time, the local data base will have to have their user ID number (assigned when signed up) to all the _partition variables before they will be synced appropriately on atlas?That depends on if you assign the userId as the partitionKey value in your code. But you can do that with something like this:", "username": "Ian_Ward" }, { "code": " **Update: The documentation was incorrect and is now fixed. It must be opened with Realm Studio 10.x.**\n", "text": "Let me add a bit of data we’ve found in reference to the original question # (1)@Ian_Ward feel free to correct me but please check my findings first if I’m off.If you are using Beta MongoDB Realm with sync, you cannot open the sync’d Realm locally with Realm Studio 3.11.0Here’s what our findings show:When you’re using a local Realm, or a Realm Cloud sync (5.x.x), the Realm files are stored in ~/library/application support/app name and will look like thisLocal Realm1112×170 16.7 KBYou can open that with Realm Studio, modify and edit it or even delete it if you want to start fresh. If deleted, the next time the app opens, it will read your Realm Objects and re-create those files.However, when using Beta MongoDB Realm sync, there’s a different structure and then _partitions come into play.With Beta MongoDB Realm sync’d, the files look like this (noting the above structure no longer exists as long as there are no locally stored Realm objects e.g. all sync’d.)Syncd Realm1304×582 131 KBAnd while there are .realm files, they are different for each partition. In this case, note I have a Realm with an object using a Task partition and then a totally separate set of files for the objects with the Jay partition.Note that in the web console all of those objects are stored ‘together’ visually within the same collection (the _partitionKey values are different)Collection1350×1118 80.1 KBIf you delete the high level folder in the Finder (ending in .TaskTracker, shown above - maybe in the case of a client reset error for example) and re-run the app, that causes it to re-sync - the objects are re-sync’d only when they are accessed per partition.In other words, our app initially only shows objects in the Tasks partition so when the app starts, those files are synced/created. 
From a popup, selecting that we want to view objects in the Jay partition, then those are synced/created.The cool thing there in development is if you change an object you can just delete the objects for that _partitionKey causing it to re-sync without affecting the other objects.So, no, local Realms cannot be managed with the current version of Realm Studio - the objects are not stored in the same way as a non-beta local RealmUpdate: No as in they are not compatible with 3.11.0 but ARE compatible with Real Studio 10.xError1336×316 16.5 KBIf you want to manage your Beta MongoDB Realm that’s sync’d, it CAN be done through the MongoDB Realm web console or via Realm Studio 10.x", "username": "Jay" }, { "code": "", "text": "So, no, local Realms cannot be managed with the current version of Realm Studio - the objects are not stored in the same way as a non-beta local RealmThis is incorrect. Non-sync realms and synced realm are stored in a different way on disk but that doesn’t really matter - you should still be able to open them with Realm Studio. They are stored in a different way because syncing introduces a bunch of new concepts like users, permissions, and partitioning that is not germane to a non-synced realm.If you want to manage your Beta MongoDB Realm that’s sync’d, it should be done through the MongoDB Realm web console.This is not true - you should be able to open synced realms with Realm Studio in the “local” way, if it doesn’t work then there is a bug and we will fix it. Your issue looks like a file format mismatch - check the release notes for the SDK and Studio and make sure your versions are aligned.", "username": "Ian_Ward" }, { "code": "", "text": "Good good, glad that info is wrong, I thought it may be as such but just wanted to paint a current picture of our experience since it’s independent of any coding issues and in case others run into the same thing - I will update it once we have clarity.Regardless, it would be correct that every partition has its own ‘realm’ file (as shown in the screenshot), so to administer your ‘database’ with Realm Studio, you would have to open each partition separately. Not sure how that works with users in each file etc but we’ll see once it works.Realm versions all match up. Realm Studio 3.11.0, pod ‘RealmSwift’, ‘=10.0.0-beta.4’. We’ve got the same config on several workstations and tried it with different data/files and even after a full delete and resync of everything.Interesting to note that the Realm Studio Error message says it’s not compatible‘with format version 11’when that doesn’t exist yet for us; how could we have a v11 formatted Realm?v10.0.0-beta.4 realm-ci released this 10 days ago · 4 commits to master since this releaseand our Realm Studio\nScreen Shot 2020-09-13 at 2.42.27 PM596×702 35 KB\n", "username": "Jay" }, { "code": "", "text": "Those versions are incompatible. Where are you getting your information to make a statement like this:Realm versions all match up. Realm Studio 3.11.0, pod ‘RealmSwift’, ‘=10.0.0-beta.4’.I’d like to correct that doc if you are reading it somewhere", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_WardThat link was to the release notes. Go here and scroll down a tad to the the v10.0.0-beta.4 sectionRealm is a mobile database: a replacement for Core Data & SQLite - realm/realm-swift", "username": "Jay" }, { "code": "", "text": "Oofh - yeah looks like some of these are wrong. I’ll get that corrected. 
You need to use a 10.x version of Studio - the reason we bumped all SDKs to 10 was to make compatibility clear", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_WardFantastic! Super good info and we’ll get everything updated. Thanks for the help and clarification.", "username": "Jay" }, { "code": "", "text": "Thank you both for leading a conversation I could not contribute much to but am gaining benefit from.", "username": "Donal_MacCoon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Understanding the interaction between local realm, sync, Atlas using swift on iOS app
2020-09-11T23:31:44.079Z
Understanding the interaction between local realm, sync, Atlas using swift on iOS app
5,996
null
[ "kotlin" ]
[ { "code": "db.getCollection('sessions').update(\n{'_id' : ObjectId(\"5f5a3661d00ba84ba0247878\")},\n {\n $set:\n {\n \"rates.native.stars\": NumberInt(3),\n \"rates.native.details\": \"abcde\"\n }\n }\n)\nrealm = Realm.getDefaultInstance()\n realm.executeTransaction {\n val item = it.where<session>().equalTo(\"_id\", ObjectId(\"5f5a3661d00ba84ba0247878\")).findFirst()\n item?.rates?.native?.stars = 3\n }\n realm.close()\nval user: User? = taskApp.currentUser()\n val mongoClient : MongoClient? = user?.getMongoClient(\"myservicename\")\n val mongoDatabase : MongoDatabase? = mongoClient?.getDatabase(\"mydbname\")\n val mongoCollection : MongoCollection<Document>? = mongoDatabase?.getCollection(\"sessions\")\n\n val queryFilter : Document = Document(\"_id\", ObjectId(\"5f5a3661d00ba84ba0247878\"))\n val updateDocument : Document = Document(\"rates.native.stars\", 3)\n .append(\"rates.native.details\", \"abcde\")\n\n mongoCollection?.updateOne(queryFilter, updateDocument)?.addOnCompleteListener {\n if (it.isSuccessful) {\n val count : Long = it.result.modifiedCount\n if (count == 1L) {\n Log.v(\"EXAMPLE\", \"successfully updated a document.\")\n } else {\n Log.v(\"EXAMPLE\", \"did not update a document.\")\n }\n } else {\n Log.e(\"EXAMPLE\", \"failed to update a document with: ${it.exception}\")\n }\n }\ndb.getCollection('sessions').update(\n{'_id' : ObjectId(\"5f5a3661d00ba84ba0247878\")},\n {\n $set:\n {\n \"rates.native.stars\": NumberInt(3),\n \"rates.native.details\": \"abcde\"\n }\n }\n)\n", "text": "Hello,\nI would like to perform the following functionality but from Realm using Android/Kotlin as frontend.Here it is:So I tried the following:This didn’t work because the executeTransaction seems to work only on existing fields in the document and doesn’t insert/update new ones. In my case “stars” does not exist and I want to insert it.Exactly as I am doing in the $set method. But I want something for Kotlin/RealmThen I tried this one:This one also didn’t work. I am getting that the document failed to update and that the update is not permitted. I did enable read and write from the Sync UI so that users can read and write but I am still getting this error.Note that in the documentation, it shows:mongoCollection?.updateOne(queryFilter, updateDocument)?.getAsync …but this function is not loading in Android Studio. so I am use onCompleteListener instead.Does anybody know how to convert the below function into a function that can be called from Android/kotlin using the Realm/Sync.", "username": "Maz" }, { "code": "", "text": "@Maz So the Realm Schema on the mobile app is generally static so you won’t be able to use sync to insert new fields that replicate to the other side. 
You can iterate on the schema and extend it as you develop and out in product - but generally that is a new build of the app with a new RealmObject class definition which include the new field.The second method of using the MongoDB APIs is how I would recommend solving this - can you share more logs you are getting on the client side and the corresponding logs on the serverside?", "username": "Ian_Ward" }, { "code": "_partitionrealm.executeTransactionrealm.executeTransaction_partition[\n \"Performing schema validation\",\n \"Namespace: test.sessions\",\n \"Limit: 1000\",\n \"Examined 823 document(s)\",\n \"0 of 823 documents failed schema validation\",\n \"Completed app request\"\n]\nuser = taskApp.currentUser()!!\nval islogged = user.isLoggedIn\nval apiAuth = user.apiKeyAuth\nval token = user.accessToken\n", "text": "Hi Ian,It is well noted that the ‘realm.executeTransaction’ is static. So it won’t work in this case.But what about the second one. This one should work, right?mongoCollection?.updateOne(queryFilter, updateDocument)?.addOnCompleteListener …I am not really getting too much information from the logs.From the client side, I am just gettingfailed to update a document with: SERVICE_UNKNOWN(realm::app::ServiceError:-1): update not permittedand from the Realm UI logs, I am getting the following error:SchemaValidationFailedWrite Error\nupdate not permitted for document with _id: ObjectID(“5f5a3661d00ba84ba0247878”) : could not validate document: (root): _partition is requiredThe _partition is already there in the document. The proof is that I can already access this document from the realm.executeTransaction but as it is static, I am not able to insert new fields. And as realm.executeTransaction is a Sync related, _partition is a must for the Sync to even work.I also tried to run the Schema validation again and there was no error:\nLogs:Also, let me add that the users are logged in using custom login function. I don’t think this should be an issue.I also run those functions on the userAnd all those functions gave valid results.I tried this function because I saw that the user may not be authenticated.{“error”:“must authenticate first”,“link”:“App Services”}I am getting this from the POST request on the client side. Though the user should be authenticated from both the client side as well as the Realm UI.What do you think?", "username": "Maz" }, { "code": "", "text": "Let me add that one thing. I believe the user not permitted reason is not valid. Possibly because I was clicking on the POST link, I am considered to be an external user and this is creating ‘authentication error’. So don’t bother about this reason.\nI think the others should give clear directions.\nThank you.", "username": "Maz" }, { "code": "", "text": "Thanks this is helpful. Let me investigate the behavior on the cloud side and get back to youAs a workaround, you can create a Realm Cloud function that takes a document as user input and then inserts it into the collection on the backend. I believe this should work.", "username": "Ian_Ward" }, { "code": "", "text": "Hi Ian,Yes indeed. The custom functions in the UI can solve the problem.\nBut please check this error as it not really convenient to keep creating custom functions and link them.\nThank you", "username": "Maz" }, { "code": "", "text": "@Maz So this appears to be working as expected. When enabling sync - sync permissions take precedence. 
This also means that any mutations made from clients logged into that realm cloud app need to follow the syncing schema. If you are trying to use rules on a separate collection that is not part of sync, or to use different schemas, what you can do is create a separate server-side Realm App which is just for your web or other API traffic but connect it to the same MongoDB Atlas cluster as the Realm Sync app. You can apply your rules there. You can see an example of this here - master/inventoryDemo: Contribute to brittonlaroche/realm development by creating an account on GitHub. We realize this is a workaround and are looking to unify the permissions system in the near future.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to update a document using the Realm Android SDK?
2020-09-10T16:56:51.687Z
How to update a document using the Realm Android SDK?
4,331
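For reference, here is a minimal sketch of the server-side workaround Ian suggests in the thread above: a Realm function that receives a payload and upserts the nested fields with $set, which the static client-side schema cannot do. The database, collection and payload field names are assumptions, not the poster's actual values.

  // Realm (JavaScript) function — callable from the Android/Kotlin SDK.
  // Names below ("mydbname", "sessions", the payload shape) are assumptions.
  exports = async function(ratePayload) {
    const sessions = context.services
      .get("mongodb-atlas")
      .db("mydbname")
      .collection("sessions");

    // $set creates "rates.native.stars" / "rates.native.details" if they
    // do not exist yet, mirroring the shell update in the question.
    return sessions.updateOne(
      { _id: BSON.ObjectId(ratePayload.sessionId) },
      { $set: {
          "rates.native.stars": ratePayload.stars,
          "rates.native.details": ratePayload.details
        } }
    );
  };

The Kotlin app would then call this function through the SDK's Functions API with the session id and rating as arguments, instead of calling updateOne directly against the synced collection.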
null
[ "next-js" ]
[ { "code": "", "text": "I’m currently first time developing with MongoDB and using Next.js to query data from Atlas. I found out the hard way that the MongoDB node driver does not work on client side. I’m still very new to working with databases and would like to know what causes this limitation?", "username": "Matthew_Wang" }, { "code": "", "text": "You don’t want to do that for security reasons mainly.You can’t control the JS that is executed on the client side. It can be altered. Also, where do you hide your login & password (and eventually certificates) that you need to access MongoDB?If it’s in the client code, it means anyone can retrieve it and start playing with your database directly because you can’t restrict the access by IP address with your solution.Doing so basically means your MongoDB would not be secured correctly.You need a backend system to handle the authentication and the authorisations. MongoDB Realm is an option but any homemade backend system would do as long as you secure correctly your REST API or GraphQL API or whatever protocole you choose to use.There are more reasons that I could mention but it’s a clear violation of the MVC architecture for example.", "username": "MaBeuLux88" } ]
Why doesn't MongoDB work with Client side JS?
2020-09-12T20:26:22.125Z
Why doesn’t MongoDB work with Client side JS?
5,087
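A common way to work around this limitation with Next.js is to keep the driver on the server and expose the query through an API route, so the connection string never reaches the browser. The sketch below assumes a MONGODB_URI environment variable and placeholder database/collection names.

  // pages/api/items.js — executed on the server only.
  import { MongoClient } from "mongodb";

  // Reuse one connection across requests instead of reconnecting every time.
  let clientPromise;
  function getClient() {
    if (!clientPromise) {
      clientPromise = MongoClient.connect(process.env.MONGODB_URI); // assumed env var
    }
    return clientPromise;
  }

  export default async function handler(req, res) {
    const client = await getClient();
    const items = await client
      .db("mydb")              // assumed database name
      .collection("items")     // assumed collection name
      .find({})
      .limit(20)
      .toArray();
    res.status(200).json(items);
  }

The browser then calls /api/items with fetch and only ever sees the JSON result, never the credentials.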
null
[ "atlas-functions", "app-services-user-auth" ]
[ { "code": " exports = async function(loginPayload) {\n // Get a handle for the app.users collection\n const users = context.services\n .get(\"mongodb-atlas\")\n .db(\"app\")\n .collection(\"users\");\n\n // Parse out custom data from the FunctionCredential\n\n const { username } = loginPayload;\n\n\n // Query for an existing user document with the specified username\n\n const user = await users.findOne({ username });\n\n\n if (user) {\n // If the user document exists, return its unique ID\n\n return user._id.toString();\n\n } else {\n // If the user document does not exist, create it and then return its unique ID\n const result = await users.insertOne({ username });\n\n return result.insertedId.toString();\n\n }\n };\n", "text": "Hi everyone, I apologize in advance if my question may be trivial, but I am now learning how to use MongoDB, and some parts of the documentation are difficult for me to understand.\nIn my application I don’t have the possibility to implement the Realm SDK, so I was thinking to manage the user/database communication via Webhooks.\nFirst I thought to create a Custom Function authentication, in order to authenticate a user. I was trying to create a simple authentication function like this one:And then recall it via a Webhook.\nIs this correct as a reasoning? Or am I adopting the wrong method?\nIf everything is correct I will proceed by explaining my current problem. I tried to create a new HTTP Webhook by selecting the “POST” method (since I have to communicate to the function the username of the player to be searched in the database), but I have some difficulties to create the url and the Webhook function.\nHow should the parameters of the url indicated by the Webhook be set? How can I read this data inside the function and then recall the custom authentication function?\nSearching inside the documentation I think I have understood that I have to recall the custom function in this way:\nconst loginResult = context.functions.execute(“authFunc”, arg1);\nreturn loginResult;\nBut unfortunately I can’t test this code since, as I wrote earlier, I can’t understand how to correctly read the data received by the “POST” method and assign the user’s username to the variable “arg1”.Can someone give me some indication about this?\nThanks in advance!", "username": "Andrea" }, { "code": " https://realm.mongodb.com/api/client/v2.0/app/<yourappid-abcde>/auth/providers/<provider type>/login\ncurl --location --request POST 'https://realm.mongodb.com/api/client/v2.0/app/myapp-abcde/auth/providers/custom-function/login' \\\n --header 'Content-Type: application/json' \\\n --data-raw '{\n\n \"username\" : \"[email protected]\"\n\n }'\n", "text": "Hi @Andrea,You do not need to build a webhook to perform http POST authentication with a custom-function.Basically any provider can be authenticated via the following url:For custom function you can do:This will result in a successful authentication and the access token to use for other services (eg. 
graphql).Remember to change your appId and provide the relevant payload for the auth provider, in your case a username field.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny Thank you very much for your answer!\nI just did a test run, and it’s not really necessary to create a Webhook for authentication like I did.\nIf I understand correctly the last parameter of the url must be “login” and not the name of the authentication function (“login” is directly associated to the custom authentication function). Right?\nWhen I did a test, I received this data as answer: “access_token”, “refresh_token”, “user_id” and “device_id”.\nWhere can I find information about the use of these values? I only managed to understand that “user_id” refers to the value of the “_id” field in the db document that refers to the user.\nMoreover I have noticed that the “device_id” field is always the same as “00000000000000000000”. I don’t know if this is normal or if I need to do some specific operation to fill the field correctly.", "username": "Andrea" }, { "code": "", "text": "Hi @AndreaIf I understand correctly the last parameter of the url must be “login” and not the name of the authentication function (“login” is directly associated to the custom authentication function). Right?Correct only 1 auth function can exist per application.The output you got is the expected output for successful authentication.Now you can use this token to authenticate services via a Bearer header just as explained for graphql queries here:Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you for your answer. Now everything is clearer to me!\nOne last question (I tried looking in the documentation but found nothing about it): what does the value “device_id” refer to? What is its functionality? Or rather, what is it usually used for?", "username": "Andrea" }, { "code": "", "text": "Hi @Andrea,\nI am not sure but maybe some idenrifier for your user agent , I think custom function auth will never track it.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "exports = async function(loginPayload) {\n // Get a handle for the app.users collection\n const users = context.services\n .get(\"mongodb-atlas\")\n .db(\"magika_db\")\n .collection(\"users\");\n \n //console.log(loginPayload);\n const username = loginPayload.toString();\n\n const user = await users.findOne( {\"userData.username\": username} );\n\n if (user) {\n // If the user document exists, return its unique ID\n return user._id.toString();\n } else {\n // If the user document does not exist, create it and then return its unique ID\n const newDocument = {\n \"userData\": {\n \"username\": username,\n \"email\": \"\"\n },\n \"collection\": [{\n \"name\": \"\",\n \"goldIncrase\": 5\n }]\n };\n \n const result = await users.insertOne(newDocument);\n \n return result.insertedId.toString();\n }\n};\nhttps://realm.mongodb.com/api/client/v2.0/app/myapp-abcde/auth/providers/custom-function/login", "text": "I am not sure but maybe some idenrifier for your user agent , I think custom function auth will never track it.Thanks again for your answer. It’s ok, mine was just curiosity, for what I have to do I don’t need to trace the id of the device.\nI was completing my custom authentication function, but I noticed a small problem. 
To better clarify my situation I placed part of my custom function:When I call the authentication function via https://realm.mongodb.com/api/client/v2.0/app/myapp-abcde/auth/providers/custom-function/login I get “access_token”, “refresh_token”, “user_id” and “device_id”. I thought that “user_id” was the id that returns the function (so the unique id of the document where the function finds the controlled username), but I noticed that this is not the case. So “user_id” is a unique id created by authentication to authenticate the user?\nIf I need to receive the return value of the custom authentication function as an answer (in addition to the authentication success values), how can I do it?", "username": "Andrea" }, { "code": "", "text": "Hi @Andrea,This will be more tricky to implement as a user is not expected to use the function internal returned id for any user logic.The idea of a custom function auth is that you implement your logic to your external 3rd party application provider and return a value that maps with a Realm User internally, the example in the docs is simple just for the idea presentation.But for the sake of my point, since your logic provide a unique username and will always retrieve same realm user_id you should treat it as the user id. If you wish to store some additional information in that collection index the username field and query it through user name.Now to implement collection rules you can still provide a filter based on user_id of realm.If you wish to still retrieve this ID anyway you need to consider save it in custom user data or have a webhook to get it after the login from the collection.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "First of all I want to thank you again for your time. Thanks to your answers I am learning many new things! So, if I understand correctly, the custom authentication function must return an ID string that identifies a unique user (in the case of my function I return the _id of the document containing the user’s username, then a unique ID string). Realm uses this string to check the IDs of internal users, and returns “user_id”. Then by authenticating with the usual _id of the document, Realm will always return the same “user_id”. Do I understand correctly?If I understood what you wrote, your advice is to save the “user_id” returned by the authentication, and use that value for future interactions with the db (instead of the _id, which I wanted to save from the function and use). If the answer is yes, how should I proceed to “associate” “user_id” to a user’s document? Once I received the authentication answer, should I save the “user_id” value inside my app, and then tell my db (via a Webhook) to save the “user_id” value inside the user’s document with that username?", "username": "Andrea" }, { "code": "", "text": "Hi @Andrea,You understand correctly.Actually, the way to use the authentication object in realm services is not always aligned across services.It really depends on the services, for example graphql uses the access_token as a bearer header for its http request.However, webhooks should use a script authentication to use the user_id you got.If you want a detailed example please let me build one and I will provide it in upcoming days.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "If you want a detailed example please let me build one and I will provide it in upcoming days.Thank you very much! 
I think I understand how it works, but with an example created by someone experienced, I think I will be able to learn and understand many new things!", "username": "Andrea" }, { "code": "exports = function(payload) {\n const authInput = JSON.parse(payload.body.text());\n \n if (authInput.user_id)\n {\n return authInput.user_id;\n }\n};\n// This function is the webhook's request handler.\nexports = function(payload, response) {\n // Data can be extracted from the request as follows:\n\n // Query params, e.g. '?arg1=hello&arg2=world' => {arg1: \"hello\", arg2: \"world\"}\n const {arg1, arg2} = payload.query;\n\n // Headers, e.g. {\"Content-Type\": [\"application/json\"]}\n const contentTypes = payload.headers[\"Content-Type\"];\n\n // Raw request body (if the client sent one).\n // This is a binary object that can be accessed as a string using .text()\n const body = JSON.parse(payload.body.text());\n\n\n // Querying a mongodb service:\n const comments = context.services.get(\"mongodb-atlas\").db(\"feed\").collection(\"comments\");\n\n\n\n return doc.updateOne({comment_id : body.comment_id, post_id :body.post_id, user_id : body.user_id },body,{ \"upsert\" : true});\n \n};\ncurl \\\n-H \"Content-Type: application/json\" \\\n-d '{\"user_id\":\"5fa7105a871d206bd6739a4\", \"comment_id\" : 1, \"post_id\" : 1, comment : \"great post!\" }' \\\nhttps://webhooks.mongodb-realm.com/api/client/v2.0/app/app-abcd/service/myTest/incoming_webhook/storeComment\n", "text": "Hi @Andrea,Ok so the idea is once you get the authentication object from the query you can use it in a webhook payload to authticate the webhook via script function method. For example my webhook of storing post comments:\nScreen Shot 2020-09-12 at 20.42.552494×1126 186 KB\nScriptThe trick is only the required user will be returned from payload so anyone who calls the webhook can execute it via a specific user only if it knows it Realm id (consider the user id to operate as sort of apiKey here)** Webhook body and parsing **Now the field provided in my webhook call will save it with user under the user_id field:My rules for comments are write only permitted to user owned objects and read is for everyone. Therefore if webhook tries to access a comment that is not written by the user it will not allow it to edit that comment.\nScreen Shot 2020-09-12 at 20.45.202494×1188 241 KBAs you can see Realm will use the user_id in my comment collection to filter and get the correct permissions. And my webhook require this field to authenticate. This field must be the same value as my custom function. Hope now it all make sense.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you very much!\nYour example is really clear and easy to understand! I was able to make everything work properly!I just have one more simple question:\nImmagine 2020-09-13 1424491533×253 15.3 KB\nUsually, is the execution of the initial authentication function (i.e. the script that returns Realm’s user_id) called with a System authentication method (i.e. with all privileges)? 
I think the answer is yes, since in my case, when I call that function, I haven't yet received the “user_id” that I can use to identify the user, but I don't know if it could be a problem in terms of security. Thank you.", "username": "Andrea" }, { "code": "", "text": "The authentication function should be run as system, that's correct. It should not be run from anywhere but the authentication flow, and should be marked as Private I believe…", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Custom Function authentication and POST method
2020-09-08T17:53:06.322Z
Custom Function authentication and POST method
7,052
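To illustrate the flow discussed above — logging in through the custom-function provider and then using the returned access_token as a Bearer header — here is a hedged sketch. The app id, username and GraphQL query are placeholders, and the query shape depends entirely on the schema generated for your collections; it also assumes an environment where fetch is available.

  const base = "https://realm.mongodb.com/api/client/v2.0/app/myapp-abcde";

  async function run() {
    // 1. Authenticate with the custom-function provider.
    const loginRes = await fetch(base + "/auth/providers/custom-function/login", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ username: "[email protected]" })
    });
    const { access_token } = await loginRes.json();

    // 2. Call the GraphQL endpoint with the token as a Bearer header.
    const gqlRes = await fetch(base + "/graphql", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + access_token
      },
      body: JSON.stringify({ query: "{ users { userData { username } } }" })
    });
    console.log(await gqlRes.json());
  }

  run().catch(console.error);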
null
[]
[ { "code": "mongod -versiondb version v4.2.9mongodb-org is already the newest version (4.4.1).mongod -versiondb version v4.2.9dpkg -l|grep mongoii mongodb-compass 1.18.0-1 amd64 The MongoDB GUI\nii mongodb-org 4.4.1 amd64 MongoDB open source document-oriented database system (metapackage)\nii mongodb-org-mongos 4.2.9 amd64 MongoDB sharded cluster query router\nii mongodb-org-server 4.2.9 amd64 MongoDB database server\nii mongodb-org-shell 4.2.9 amd64 MongoDB shell client\nii mongodb-org-tools 4.2.9 amd64 MongoDB tools\nrc mongodb-server 1:3.6.3-0ubuntu1.1 all object/document-oriented database (managed server package)\n", "text": "I was using mongodb 4.2.9 on ubuntu 18.04I installed it with package manager.\nNow I to update it to 4.4 version used package manager following → https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/ instructions.Installation was ok but when I write mongod -version command it shows - db version v4.2.9.\nI tried to re-install again it says\nmongodb-org is already the newest version (4.4.1). but mongod -version command it shows - db version v4.2.9.\nHow can I get rid of this problem?dpkg -l|grep mongoSays just mongodb-org is updated.", "username": "Md_Mahadi_Hossain" }, { "code": "", "text": "Hi @Md_Mahadi_Hossain,How did you install MongoDB? Was it via package reload or specific version install command?Also is MongoDB running while you do this?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "sudo apt-get install -y mongodb-orgapt-get upgrade", "text": "I installed it by package reload and by sudo apt-get install -y mongodb-org .\nI solved this problem latter by apt-get upgrade and restart the machine.", "username": "Md_Mahadi_Hossain" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb is not updated properly from 4.2 to 4.4
2020-09-12T20:26:59.751Z
Mongodb is not updated properly from 4.2 to 4.4
2,978
null
[ "java" ]
[ { "code": " MongoClient mongoClient = MongoClients.create(MongoClientSettings.builder()\n .applyToClusterSettings(builder -> builder.hosts(Arrays.asList(new ServerAddress(\"localhost\", 27017))))\n .credential(credential)\n .build());\n\n // Accessing the database\n MongoDatabase database = mongoClient.getDatabase(\"mydb\");\n MongoCollection<Document> collection = database.getCollection(\"mycollection\");\n System.out.println(collection.countDocuments());\n", "text": "I have added myself as a user by running db.createUser() command:db.createUser({user:“manish”,pwd:“manish”,roles:[{role:“userAdminAnyDatabase”,db:“admin”}]})In Java code I have code as:MongoCredential credential = MongoCredential.createCredential(“manish”, “admin”,\n“manish”.toCharArray());But I get error:MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName=‘manish’, source=‘admin’, password=, mechanismProperties={}}How to get it working?", "username": "Manish_Ghildiyal" }, { "code": "docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:4.4.0 --replSet=test --auth\ndocker exec -it mongo mongo\n> rs.initiate()\n{\n\t\"info2\" : \"no configuration specified. Using a default configuration for the set\",\n\t\"me\" : \"hafx:27017\",\n\t\"ok\" : 1\n}\ntest:SECONDARY> \ntest:PRIMARY>\ntest:PRIMARY> use admin\nswitched to db admin\ntest:PRIMARY> db.createUser({user:\"manish\",pwd:\"manish\",roles:[\"readWriteAnyDatabase\"]})\nSuccessfully added user: { \"user\" : \"manish\", \"roles\" : [ \"readWriteAnyDatabase\" ] }\nimport com.mongodb.Block;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.MongoCredential;\nimport com.mongodb.ServerAddress;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.connection.ClusterSettings;\nimport org.bson.Document;\n\nimport static java.util.Collections.singletonList;\n\npublic class App {\n public static void main(String[] args) {\n MongoCredential credential = MongoCredential.createCredential(\"manish\", \"admin\", \"manish\".toCharArray());\n Block<ClusterSettings.Builder> localhost = builder -> builder.hosts(singletonList(new ServerAddress(\"localhost\", 27017)));\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyToClusterSettings(localhost)\n .credential(credential)\n .build();\n MongoClient client = MongoClients.create(settings);\n MongoCollection<Document> col = client.getDatabase(\"test\").getCollection(\"col\");\n System.out.println(col.countDocuments());\n }\n}\nException in thread \"main\" com.mongodb.MongoCommandException: Command failed with error 13 (Unauthorized): 'not authorized on test to execute command { aggregate: \"col\", pipeline: [ { $match: {} }, { $group: { _id: 1, n: { $sum: 1 } } } ], cursor: {}, $db: \"test\", lsid: { id: UUID(\"8e7f5ef1-7e58-4148-935e-974d48373764\") }, $readPreference: { mode: \"primaryPreferred\" } }' on server localhost:27017. 
The full response is {\"operationTime\": {\"$timestamp\": {\"t\": 1599489561, \"i\": 1}}, \"ok\": 0.0, \"errmsg\": \"not authorized on test to execute command { aggregate: \\\"col\\\", pipeline: [ { $match: {} }, { $group: { _id: 1, n: { $sum: 1 } } } ], cursor: {}, $db: \\\"test\\\", lsid: { id: UUID(\\\"8e7f5ef1-7e58-4148-935e-974d48373764\\\") }, $readPreference: { mode: \\\"primaryPreferred\\\" } }\", \"code\": 13, \"codeName\": \"Unauthorized\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1599489561, \"i\": 1}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"0n4gHEb72t+aCiUPm62027bZNj8=\", \"subType\": \"00\"}}, \"keyId\": 6869747452048572419}}}\n<dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-sync</artifactId>\n <version>4.1.0</version>\n</dependency>\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n <modelVersion>4.0.0</modelVersion>\n\n <groupId>org.example</groupId>\n <artifactId>java</artifactId>\n <version>1.0-SNAPSHOT</version>\n\n <name>java</name>\n\n <properties>\n <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n <maven.compiler.source>13</maven.compiler.source>\n <maven.compiler.target>13</maven.compiler.target>\n </properties>\n\n <dependencies>\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-sync</artifactId>\n <version>4.1.0</version>\n </dependency>\n </dependencies>\n \n</project>\n", "text": "Hi @Manish_Ghildiyal & welcome in the MongoDB community !I deployed a localhost cluster in docker to run a little test:Then I logged into it:And I initialised the single node replica set and create the user:Note that I userAdminAnyDatabase doesn’t have enough permissions to run a query in the “mydb.mycollection” collection. You need readWriteAnyDatabase if you want to read & write somewhere.Once this is in place, I can execute this code and it works for me:It’s the same piece of code you are using. I just refactored the code a little and changed the role of the user I created.Note that you may have another issue because if I try to execute this piece of code with the same user but with the userAdminAnyDatabase role, I get this error:In your case, it looks like it’s something else.Which version of MongoDB are you running and which version of the MongoDB Java Driver are you using?\nTo run this example, I used Java 13, MongoDB 4.4.0 and the latest version of the MongoDB Java Driver: 4.1.0.Here is my Maven config file just in case: pom.xmlBefore trying to do this with Java. Please make sure you can connect from the same computer using the Mongo Shell or Mongosh and run the query you are trying to do in Java.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Most likely, you created the new user in the wrong authentication database. Most likely, you forgot to runuse adminbefore createUser().", "username": "steevej" }, { "code": "", "text": "Sorry, you are right. I forgot this while writing my answer but I did type this in reality. I’m editing my post now. Thanks for spotting this !", "username": "MaBeuLux88" }, { "code": "", "text": "My comment was directed to the original poster but indeed, you missed the use admin. 
B-)", "username": "steevej" }, { "code": "", "text": "You get an error if you are not in the “admin” database .", "username": "MaBeuLux88" }, { "code": "> use notAdminDatabase\nswitched to db notAdminDatabase\n> user = {user:\"steevej\",pwd:\"not-my-usual-password\",roles:[\"dbOwner\"]}\n{\n\t\"user\" : \"steevej\",\n\t\"pwd\" : \"not-my-usual-password\",\n\t\"roles\" : [\n\t\t\"dbOwner\"\n\t]\n}\n> db.createUser( user )\nSuccessfully added user: { \"user\" : \"steevej\", \"roles\" : [ \"dbOwner\" ] }\n> db.getUsers( { filter : { user : \"steevej\" } } )\n[\n\t{\n\t\t\"_id\" : \"notAdminDatabase.steevej\",\n\t\t\"user\" : \"steevej\",\n\t\t\"db\" : \"notAdminDatabase\",\n\t\t\"roles\" : [\n\t\t\t{\n\t\t\t\t\"role\" : \"dbOwner\",\n\t\t\t\t\"db\" : \"notAdminDatabase\"\n\t\t\t}\n\t\t],\n\t\t\"mechanisms\" : [\n\t\t\t\"SCRAM-SHA-1\",\n\t\t\t\"SCRAM-SHA-256\"\n\t\t]\n\t}\n]\n", "text": "Oops, right, because of the role readWriteAnyDatabase.However, some roles will work outside the admin database.", "username": "steevej" }, { "code": "", "text": "Yup, I didn’t ran ‘use admin’ command. And hence it put it in a database called ‘test’, which was not there at first place, and hence is created.", "username": "Manish_Ghildiyal" }, { "code": "", "text": "Is your issue solved then @Manish_Ghildiyal? If that’s the case, could you please select the answer that helped you and mark your post as resolved?\nThanks ", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Authentication issue while trying from java
2020-09-05T20:31:16.251Z
Authentication issue while trying from java
20,922
null
[ "node-js" ]
[ { "code": "exports.getViewownerdata = (req,res,next)=>\n{\n const OwnerId= req.body._id;\n const mownername = req.body.ownername;\n const ownertype = req.body.ownertype;\n const owneraddress = req.body.owneraddress;\n const propertymanage = req.body.propertymanage;\n OwnerModel.findById(OwnerId)\n\t\t\t.then(owner=>{\n owner.ownername = mownername;\n owner.ownertype = ownertype;\n owner.owneraddress = owneraddress;\n owner.propertymanage = propertymanage;\n return owner.save()\n })\n\t\t\t\t\t.then(result=>{\n res.render('/user/home_single');\n })\n .catch(err=> {\n errormessage =err;\n res.redirect('/home');});\n }\n", "text": "", "username": "Rout_Jagan" }, { "code": "", "text": "Hi @Rout_Jagan - welcome to community!I can see the code you’ve posted, but you haven’t asked a question or stated what’s not working. If you’d like a response, please tell us what you want to know!", "username": "Mark_Smith" }, { "code": "", "text": "@Mark_Smith sir please replyi want to fetch only one data/element of collection but it retrives all data.for example : a user f_name ,l_name,age,mobilenumber are inserted in a single row/collection . i want to fetch each element by its id. i have tried in the above codes but i am unable to fetch it to show in the frontend/userside . please suggest me how to correct it. i am using mongoose,expressjs,pug with mvc.", "username": "Rout_Jagan" }, { "code": "", "text": "I implemented a few examples with Node.js & MongoDB back in the days. It’s not with Mongoose but I hope this gives you some insights: helloworldMongoNode/forecast.js at master · MaBeuLux88/helloworldMongoNode · GitHub", "username": "MaBeuLux88" } ]
model.Find By Id
2020-09-11T16:55:11.658Z
model.Find By Id
2,604
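Reading the question above as "load a single owner document by its id and render only the fields the view needs", a hedged Mongoose sketch could look like the following. The model path, route parameter and template name are assumptions based on the code in the question.

  // controllers/owner.js — fetch one document by id with a field projection.
  const OwnerModel = require('../models/owner'); // assumed model path

  exports.getViewOwnerData = (req, res, next) => {
    const ownerId = req.params.id; // assumes a route such as GET /owner/:id

    OwnerModel.findById(ownerId, 'ownername ownertype owneraddress propertymanage')
      .then(owner => {
        if (!owner) {
          return res.redirect('/home');
        }
        // Hand the single document to the pug template.
        res.render('user/home_single', { owner: owner });
      })
      .catch(err => {
        console.error(err);
        res.redirect('/home');
      });
  };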
null
[]
[ { "code": "", "text": "I’m wondering what strategies there are for backup/restoring data for a given partition. If a syncing Realm mobile app has a separate partition for each user and the user deletes something they wish they hadn’t or there is a bug in the app that corrupts data, how can you restore the data to a previous state? On a mobile device the app can make copies of the user’s data periodically. In the event of disaster, I can get the user’s backup and restore the data. Is there a better alternative?", "username": "Nina_Friend" }, { "code": "", "text": "@Nina_Friend Well you could use Realm Sync! The data is stored in MongoDB Atlas so if the client-side Realm is corrupted you could re-download the data from Atlas.The Realm client doesn’t has an undo functionality but you could implement some sort of history or opLog yourself - say to store the last ten edits. And then rollback if needed. You could do this on the client if you are non-sync or you could also do this on the serverside by tailing the MongoDB oplog.", "username": "Ian_Ward" }, { "code": "", "text": "I am using a sync Realm. If the errant delete or corruption happens on the client then sync happens and I have the corruption in Atlas. I don’t see how I can re-download from Atlas if sync is going on.Is there some document that shows how to use the opslog on the server side to rollback changes for a particular partition instead of the whole database?", "username": "Nina_Friend" }, { "code": "", "text": "I wouldn’t call a delete a corruption - that is a CRUD operation that your app allowed but it sounds like you want to validate that delete against some business logic. In that case perhaps a database trigger might work?If you need something more custom you can tail the MongoDB opLog -called changestreams. see here:MongoDB triggers, change streams, database triggers, real time", "username": "Ian_Ward" } ]
Backup/restore a partition
2020-09-11T18:45:45.433Z
Backup/restore a partition
2,563
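Ian's suggestion of tailing the oplog via change streams could be sketched as below: a small Node process that records every change on the synced collection into a history collection, from which a single user's partition can later be replayed or restored. Collection, database and field names are assumptions; note that delete events carry no fullDocument, so for those the partition has to be recovered from your own history.

  // history-recorder.js — keep an audit trail of changes for later restore.
  const { MongoClient } = require("mongodb");

  async function recordChanges(uri) {
    const client = await MongoClient.connect(uri);
    const db = client.db("app");                  // assumed database name
    const items = db.collection("items");         // assumed synced collection
    const history = db.collection("items_history");

    const stream = items.watch([], { fullDocument: "updateLookup" });

    stream.on("change", async (change) => {
      await history.insertOne({
        operationType: change.operationType,
        documentKey: change.documentKey,
        // Present for inserts/updates, absent for deletes.
        fullDocument: change.fullDocument || null,
        partition: change.fullDocument ? change.fullDocument._partition : null,
        recordedAt: new Date()
      });
    });
  }

  recordChanges(process.env.MONGODB_URI).catch(console.error);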
null
[ "java", "kafka-connector" ]
[ { "code": "{\n \"connector.class\":\"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"tasks.max\": \"{{TASKS}}\",\n \"collection\": \"Profile\",\n \"database\": \"data\",\n \"pipeline\": \"[]\",\n \"topic.prefix\": \"mongodb\",\n \"poll.await.time.ms\": \"5000\",\n \"poll.max.batch.size\": \"1000\",\n \"publish.full.document.only\": \"false\",\n \"batch.size\": \"0\",\n \"change.stream.full.document\": \"updateLookup\",\n \"copy.existing\": \"true\",\n \"connection.uri\": \"mongodb://{[user]}:{[pass]}@{[host]}:{[port]},{[host2]}:{[port]},{[host3]}:{[port]}\"\n}\n", "text": "Hello,I am using the mongodb source connector and producing to a kafka topic. I am consuming from this topic with a String serde and attempting to read the values of the Kafka message into a Jackson JsonNode in java.The problem I am seeing is that all of the messages arrive with some control characters at the beginning of the string like so (which is not valid json):“]�!{”_id\": {\"_data\": “samplevalue”}}\"I’m struggling to remove these characters from incoming kafka messages. Did I miss a setting in the source connector configs? Just looking for ways to parse out my data or remove these characters from my incoming messages.I saw the thread discussing the 1.3 version of the connector which will allow us to produce directly to json format but I’m looking for something to hold me over until that releases! UPDATE- Here are my connector configs (after some sanitizing):Thanks in advance!\nNick", "username": "Nick_Lewis" }, { "code": "", "text": "what converter are you using at the Source? can you provide your source configuration settings?", "username": "Robert_Walters" }, { "code": "", "text": "Hey Robert!I have updated my post to include my connector configs. The closest thing I have seen to a converter configuration in the docs here was ‘collation’, and wasn’t entirely sure if I needed to use that in my use case.", "username": "Nick_Lewis" }, { "code": " \"key.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"key.converter.schemas.enable\":false,\n \"value.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter.schemas.enable\":false,", "text": "I think you should try telling the connector what to serialize the data onto the kafka topic with. This is done through the key.converter and value.converter parameters.", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Control Characters at the beginning of Kafka value
2020-09-11T01:00:50.539Z
Control Characters at the beginning of Kafka value
3,758
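Putting Robert's suggestion together with the original source configuration gives something like the config below (connection details are the same placeholders as in the question). With the JsonConverter and schemas disabled, the records should land on the topic as plain JSON strings, which avoids the non-JSON prefix the consumer was seeing — assuming the worker's default converter was the cause, as the reply suggests.

  {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "tasks.max": "{{TASKS}}",
    "database": "data",
    "collection": "Profile",
    "pipeline": "[]",
    "topic.prefix": "mongodb",
    "publish.full.document.only": "false",
    "change.stream.full.document": "updateLookup",
    "copy.existing": "true",
    "connection.uri": "mongodb://{[user]}:{[pass]}@{[host]}:{[port]}",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": false,
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": false
  }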
null
[ "legacy-realm-cloud" ]
[ { "code": "", "text": "Using Realm Studio v 3.11.0. I attempted to create a new Realm.\nRealm Studio now presents:Bad Changeset (DOWNLOAD)Reconnect button cycles back to the same error message.", "username": "Richard_Fairall" }, { "code": "", "text": "@Richard_Fairall Can you try clearing your local cache for Studio? I believe it is under the Help> dropdownIf the problem persists please open a ticket with support", "username": "Ian_Ward" }, { "code": "", "text": "Thanks you Ian, for the speedy response and fix. That solved the problem.", "username": "Richard_Fairall" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Created new Realm (for Realm Cloud) Now get Bad Changeset (DOWNLOAD) on Realm Studio
2020-09-11T16:52:17.738Z
Created new Realm (for Realm Cloud) Now get Bad Changeset (DOWNLOAD) on Realm Studio
2,650
null
[]
[ { "code": "", "text": "Hi, \nI’ve written a server event handler but I got this error.\n“Sync connection was not fully established in time”\nI’m using this example for node Js but I can’t make it works.\nhttps://docs.realm.io/sync/v/3.x/using-synced-realms/server-side-usage/data-change-events\nAny advice?Thanks in advance!", "username": "Jorge_Gomez" }, { "code": "", "text": "Welcome to the Community JorgeSorry that you’re having issues and without knowing exactly what you’re building, I can’t be specific, but what I would say is that you should look at the newer docs here - https://docs.mongodb.com/realm/node/sync-data/#node-sync-data and perhaps follow the steps there?Let us know if that helps.Shane", "username": "Shane_McAllister" }, { "code": "", "text": "Well, I’m using realm io DB for now,\nWe have an application built under realm.io and another DB in mongo realm, So what we want to do is, to get changes under realm.io DB and copy into Mongo realm DB.I think the one you shared to me is to get the changes from mongo realm DB not form realm io DB.Thanks in advance Shane.", "username": "Jorge_Gomez" }, { "code": "", "text": "@Jorge_Gomez Can you share more details? Code? Logs?", "username": "Ian_Ward" }, { "code": "'use strict';\n\nconst fs = require('fs');\nconst Realm = require('realm'); \n\n// the URL to the Realm Object Server\nconst SERVER_URL = '//instance.cloud.realm.io:9080';\nconst adminCreds = Realm.Sync.Credentials.usernamePassword('Administrator', 'password', false);\n// The regular expression you provide restricts the observed Realm files to only the subset you\n// are actually interested in. This is done in a separate step to avoid the cost\n// of computing the fine-grained change set if it's not necessary.\nvar NOTIFIER_PATH = '.*';\n\n//declare admin user \nlet adminUser = undefined\n\n// The handleChange callback is called for every observed Realm file whenever it\n// has changes. It is called with a change event which contains the path, the Realm,\n// a version of the Realm from before the change, and indexes indication all objects\n// which were added, deleted, or modified in this change\nvar handleChange = async function (changeEvent) {\n console.log(\"this is a change\");\n}\n\nfunction verifyCouponForUser(coupon, userId) {\n //logic for verifying a coupon's validity\n}\nasync function login(serverUrl,user) {\n let result = await Realm.Sync.User.login(serverUrl,user);\n return result;\n}\n// register the event handler callback\nasync function main() {\n try{\n adminUser = await login('https://instance.cloud.realm.io',adminCreds);\n Realm.Sync.addListener(`realms:${SERVER_URL}`, adminUser, NOTIFIER_PATH, 'change', handleChange);\n }catch(e) {\n console.log('error');\n console.log(e);\n }\n}\n\nmain()\n", "text": "This is my code on Node JsThat’s all. and I got this: “Sync connection was not fully established in time” after 2 minutes. in console.I add an element to the realm and try to save the response into a file.txt but nothing happens.It suppose, if I set NOTIFIER_PATH = ‘.*’; it will listen to all realms already created for updates.Thanks in advance!!", "username": "Jorge_Gomez" }, { "code": "const SERVER_URL = '//instance.cloud.realm.io:9080';\n", "text": "The legacy realm cloud is not on 9080 - remove the port configuration. 
It can use standard HTTPS", "username": "Ian_Ward" }, { "code": "'use strict';\n\nconst fs = require('fs');\nconst Realm = require('realm'); \n// the URL to the Realm Object Server\nconst SERVER_URL = '//instance.cloud.realm.io';\nconst adminCreds = Realm.Sync.Credentials.usernamePassword('Administrator', 'password', false);\nconst NOTIFIER_PATH = './mainRealm';\n\nvar handleChange = async function (changeEvent) {\n\n let realm = changeEvent.realm;\n let users = realm.objects('User');\n const userInsertIndexes = changeEvent.changes.User.insertions;\n const userModIndexes = changeEvent.changes.User.modifications;\n const userDeleteIndexes = changeEvent.changes.User.deletions;\n\n console.log(userInsertIndexes[0].userId);\n console.log(userInsertIndexes[0].name);\n console.log(userInsertIndexes[0].age);\n console.log(userInsertIndexes[0].isDone);\n console.log(userInsertIndexes[0].timestamp);\n saveinfo(userInsertIndexes[0]);\n}\n\nfunction saveinfo(obj){\n try {\n let jsonResponse = JSON.stringify(obj, null, 2);\n\n console.log(\"\\nFile Contents of file before append:\", \n fs.readFileSync(\"/test.txt\", \"utf8\"));\n fs.appendFileSync('/test.txt',jsonResponse, (err) => {\n if (err) throw err;\n });\n }catch (e) {\n console.log(e);\n }\n}\n\nasync function login(serverUrl,user) {\n let result = await Realm.Sync.User.login(serverUrl,user);\n return result;\n}\n\nasync function main() {\n try{\n let adminUser = await login('https://instance.cloud.realm.io',adminCreds);\n Realm.Sync.addListener(`realms:${SERVER_URL}`, adminUser, NOTIFIER_PATH, 'change', handleChange);\n }catch(e) {\n console.log('error');\n console.log(e);\n }\n}\n\nmain()\n", "text": "Thanks @Ian_Ward remove the port worked but was not working with HTTPS so I keep “realms:” and it worked that way.\nI read the documentation to get the data of the changes but I couldn’t get it to work that way so this is the code that it wrote and worked for meThanks for all ", "username": "Jorge_Gomez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can’t get server Event Handling to work in Node.js
2020-09-10T20:55:43.827Z
Can’t get server Event Handling to work in Node.js
2,763
null
[ "swift", "atlas-device-sync", "objective-c" ]
[ { "code": "", "text": "Hi,I am trying to implement syncing from my iOS app to the sync url.But in sync ( beta) , I have sync enabled but I don’t see the sync url or credentials for it. What is the credentials and serverURL that I need to provide below?https://realm.mongodb.com/groups/*************/apps/**********/sync/config[RLMSyncUser logInWithCredentials:credentials\nauthServerURL:serverURL\nonCompletion:^(RLMSyncUser *user, NSError *error) {\nif (user) {\nRLMRealmConfiguration *config = [user configuration];\n[RLMRealm asyncOpenWithConfiguration:config\ncallbackQueue:dispatch_get_main_queue()\ncallback:^(RLMRealm *realm, NSError *error) {\nif (realm) {\n// …\n}\n}];\n}\n}];", "username": "Krikor_Herlopian" }, { "code": "", "text": "How do I convert this from swift to objective-clet app = App(id: “application-0-iiii”)\nlet user = app.currentUser()!\nlet partitionValue = “myPartition”\nRealm.asyncOpen(configuration: user.configuration(partitionValue: partitionValue),\ncallback: { (maybeRealm, error) in\nguard error == nil else {\nfatalError(“Failed to open realm: (error!)”)\n}\nguard let realm = maybeRealm else {\nfatalError(“realm is nil!”)\n}\n// realm opened\n})", "username": "Krikor_Herlopian" }, { "code": "", "text": "[RLMSyncUseris there a way I sync automatically via applicationid only?", "username": "Krikor_Herlopian" }, { "code": "", "text": "@Krikor_Herlopian We are working on a Obj-C guide but until then you can take a look at our API docs - for instance -\nhttps://docs.mongodb.com/realm-sdks/objc/latest/Classes/RLMApp.html#/c:objc(cs)RLMApp(cm)appWithId:You can use anonymous auth if you dont want to login -", "username": "Ian_Ward" } ]
Sync Url and Credentials
2020-09-11T17:03:14.110Z
Sync Url and Credentials
2,924
null
[ "atlas-device-sync" ]
[ { "code": "Sync is in Beta: Permissions for this synced collection are set on the synced cluster. Visit the Sync page", "text": "When enabling Sync (which is still in beta), you have to select a cluster, in which all databases and collections will be synced automatically. I would like to have collections that are external to the sync process — like a “users” collection for instance, I don’t want all clients to store all users on their device.If you don’t specify a schema for a collection, it will be omitted from Sync, however this yields red errors in the Sync UI, and I can’t seem to set permission rules for this collection, since the message Sync is in Beta: Permissions for this synced collection are set on the synced cluster. Visit the Sync page is displayed.You could create another cluster dedicated to collections that are not synced, but this is inconvenient for the following reasons:Therefore, is it possible to limit the Sync to the database-level, since you can have multiple databases running on the same cluster? If it’s not possible, that would be a great feature to add.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hey Realm devs, is this something you might add in the future or is there a specific reason not to?", "username": "Jean-Baptiste_Beau" }, { "code": "user", "text": "Still haven’t found a solution to this. It seems that with Sync enabled, permissions set in the Rules>Permissions tab are overridden. I have a user collection that I don’t know how to handle.How to have collections that are external to the sync process, with custom permissions?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "@Jean-Baptiste_Beau Your understanding is correct. When enabling sync - sync permissions take precedence. If you don’t want to a collection to sync - you simply do not give it a partitionKey value/field and it will not be syncable down to the client.If you are trying to use rules on a separate collection that is not part of sync what you can do is create a separate sever-side Realm App which is just for your web or other API traffic but connect it to the the same MongoDB Atlas cluster as the Realm Sync app. You will be apply your rules there. You can see an example of this here -master/inventoryDemoContribute to brittonlaroche/realm development by creating an account on GitHub.We realize this is a workaround and are looking to unify the permissions system in the near future.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Limit Sync to database instead of cluster
2020-08-17T18:29:42.382Z
Limit Sync to database instead of cluster
1,854
null
[ "dot-net" ]
[ { "code": "", "text": "Docker is returning exit code 145, which implies that .NET Core could not load the MongoDB Driver (2.11)I am using the driver from NuGet.Any idea on what is missing?", "username": "Jose_Gonzalez" }, { "code": "", "text": "Hi @Jose_Gonzalez!Do you mind sharing the docker file you are trying to run so that we can see a bit more context on the image you are trying to build?That exit code could mean several things, so it will be helpful to see the full picture!Thanks!", "username": "yo_adrienne" }, { "code": "", "text": "For the record:Environment - .NET Core 3.1 in a Docker Container with Ubuntu 18 as the OS.Error: When the MongoDb driver was loaded as part of a dynamically loaded DLL, the container would exit with error code 145.Solution: DLL must be loaded using Application Context (see About AssemblyLoadContext - .NET | Microsoft Learn) and not Assembly.LoadFile (see Assembly.LoadFile Method (System.Reflection) | Microsoft Learn).The proper call is:AssemblyLoadContext.Default.LoadFromAssemblyPath(pathtodll)And not:Assembly.LoadFile(pathtodll)NOTE: I do not know if the issue is limited to Ubuntu 18 or also happens with other Linux versions. Does not happen when running in MS Windows 10.This issue may also occur to other DLLs which have been created as part of a process outside the compilation step that is generating the parent executable, not just MongoDb… Also it is an issue only with .NET Core.", "username": "Jose_Gonzalez" }, { "code": "", "text": "Sweet!Glad to know that you were able to resolve it.And thank you for coming back to share your solution. I’m sure it will help others who may come across the same issue you did!", "username": "yo_adrienne" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Exit 145 when loading Driver dll using NET Core 3.1 in Docker container
2020-09-02T17:21:43.937Z
Exit 145 when loading Driver dll using NET Core 3.1 in Docker container
5,014
null
[ "swift" ]
[ { "code": " func kittenTransactionExample(_ req: Request) throws -> EventLoopFuture<Response> {\n let newKitten = try req.content.decode(Kitten.self)\n \n let neighbourhoodIncDocument: BSONDocument = [\"$inc\": [\"totalCats\": 1]]\n let neighbourhoodFilter: BSONDocument = [\"name\": \"London\"]\n \n let session = req.application.mongoClient.startSession()\n \n session.startTransaction().flatMap { _ in\n req.neighbourhoodCollection.updateOne(filter: neighbourhoodFilter, update: neighbourhoodIncDocument)\n }.flatMap { _ in\n req.kittenCollection.insertOne(newKitten)\n }.flatMap { _ in\n session.commitTransaction()\n }.whenFailure { error in\n req.eventLoop.makeFailedFuture(error)\n }\n // this doesn't compile \n // how do I pass back a Response future ????\n // how do I access insertedID for the new kitten ???\n }\n\nstruct Neighbourhood: Content {\n let name: String\n var totalCats: Int\n}\n\n/// Possible cat food choices.\nenum CatFood: String, Codable {\n case salmon,\n tuna,\n chicken,\n turkey,\n beef\n }\n\nstruct Kitten: Content {\n var _id: BSONObjectID?\n let name: String\n let color: String\n let favoriteFood: CatFood\n}\n", "text": "Hi EveryoneI am trying to update multiple different documents within different collections using a transaction from the Swift Driver for a Vapor application. I have looked at the example transactions docs and it doesn’t seem to provide a concrete example of how to actually use them.I am currently stuck on firstly being able to implement a transaction and secondly how to get data back from one or more of the transactions such as the objectID for a newly inserted document.I have created example code expanding on the ComplexVaporExample to hopefully help demonstrate the problem I am having.", "username": "Piers_Ebdon" }, { "code": "", "text": "Hey @Michael_LynnAny chance on getting some feedback on this?So close to releasing my iOS app but this is blocking me at the moment Thanks", "username": "Piers_Ebdon" }, { "code": "", "text": "Hey @Michael_LynnUnfortunately I haven’t come up with a solution yet for this.I was hoping for some help as no documentation or example articles out there on how to achieve this.Thanks", "username": "Piers_Ebdon" }, { "code": "", "text": "Piers, sorry for delay… I’m out today - but will definitely take a look this weekend.", "username": "Michael_Lynn" }, { "code": "whenFailuresession.abortTransaction()session.end()func kittenTransactionExample(_ req: Request) throws -> EventLoopFuture<Response> {\n let newKitten = try req.content.decode(Kitten.self)\n \n let neighbourhoodIncDocument: BSONDocument = [\"$inc\": [\"totalCats\": 1]]\n let neighbourhoodFilter: BSONDocument = [\"name\": \"London\"]\n \n let session = req.application.mongoClient.startSession()\n \n return session.startTransaction().flatMap { _ in\n req.neighbourhoodCollection.updateOne(filter: neighbourhoodFilter, update: neighbourhoodIncDocument)\n }.flatMap { _ -> EventLoopFuture<BSONObjectID> in\n req.kittenCollection.insertOne(newKitten)\n .flatMapThrowing { insertOneResult in\n guard let insertedID = insertOneResult?.insertedID.objectIDValue else {\n throw Abort(.notFound)\n }\n return insertedID\n }\n }.flatMap { objectID -> EventLoopFuture<BSONObjectID> in\n session.commitTransaction()\n .map { objectID }\n }.map { objectID in\n return Response(status: .ok)\n }\n}\n", "text": "Hey @Michael_LynnThis is what I have come up with so far, which at least compiles but I am not sure if it is the correct approach. 
The example code in the documentation uses the whenFailure call back but I can’t get that to work. I also haven’t implemented session.abortTransaction() or session.end() and I am not sure if they are required and if so, where.Edit - BELOW DOESN’T WORK - results in a crashIs there any intention of expanding upon the ComplexVaporExample app? It is a solid foundation and incredibly useful but I think most projects would soon go beyond the example functionality shown in it.", "username": "Piers_Ebdon" }, { "code": "", "text": "What version of Xcode and Swift are you using?", "username": "Michael_Lynn" }, { "code": "", "text": "I’m using Xcode 12 beta 6 and Swift 5.3", "username": "Piers_Ebdon" }, { "code": "/Users/mlynn/code/.../.build/checkouts/swift-nio-transport-services/Sources/NIOTransportServices/NIOTSConnectionChannel.swift:425:41: error: value of type 'NWConnection' has no member 'startDataTransferReport'", "text": "Ok - I’m unable to move up to 12 at the moment… and on 11.6 I can’t even get a basic vapor project to compile due to a swift-nio-transport-services bug./Users/mlynn/code/.../.build/checkouts/swift-nio-transport-services/Sources/NIOTransportServices/NIOTSConnectionChannel.swift:425:41: error: value of type 'NWConnection' has no member 'startDataTransferReport'", "username": "Michael_Lynn" }, { "code": "", "text": "Are you able to get 11.7? (latest public release).I can go on my work Macbook and give 11.7 a go and see if it works. It seems strange that you are unable to compile a basic Vapor project though. Probably worth doing the standard project clean stuff like deleting derived data, cleaning folder etc and trying again if you haven’t already.Are you able to speak to the Mongo Swift Driver team if this nio problem persists?", "username": "Piers_Ebdon" }, { "code": "", "text": "I’ve got no issue building with Xcode 11.7 using the Mongo Swift Driver and my personal Vapor project. If that bug persists then the Vapor Discord would be a good place to ask about it as I don’t imagine it being a permanent issue that can’t be resolved.", "username": "Piers_Ebdon" }, { "code": "", "text": "So we are in the midst of a company holiday so the swift driver team is probably afk.I put this repo together to reproduce the error. GitHub - mrlynn/swift-nio-transport-test: Just a testI’m thinking it’s just a problem with nio-transport.", "username": "Michael_Lynn" }, { "code": "", "text": "Any luck with the above?I’m not getting an error from the repo you created and not sure how to resolve that issue on your end.What do you think would be the best way to tackle this?", "username": "Piers_Ebdon" }, { "code": "", "text": "We were closed from Friday to Monday for a U.S. 
Holiday… Hoping to have the swift driver folks take a look today.", "username": "Michael_Lynn" }, { "code": "", "text": "Awesome, thanks a lot Michael!", "username": "Piers_Ebdon" }, { "code": "sessionupdateOnesession: sessionMongoClientend()assertionFailureClientSessionMongoClient.withSessionend()abortTransaction()session.end()MongoClientfunc kittenTransactionExample(_ req: Request) throws -> EventLoopFuture<Response> {\n let newKitten = try req.content.decode(Kitten.self)\n\n let neighbourhoodIncDocument: BSONDocument = [\"$inc\": [\"totalCats\": 1]]\n let neighbourhoodFilter: BSONDocument = [\"name\": \"London\"]\n\n return req.application.mongoClient.withSession { session in\n session.startTransaction().flatMap { _ -> EventLoopFuture<UpdateResult?> in\n req.neighbourhoodCollection.updateOne(filter: neighbourhoodFilter, update: neighbourhoodIncDocument, session: session)\n }\n .flatMap { _ -> EventLoopFuture<BSONObjectID> in\n req.kittenCollection.insertOne(newKitten, session: session)\n .flatMapThrowing { insertOneResult in\n guard let insertedID = insertOneResult?.insertedID.objectIDValue else {\n throw Abort(.notFound)\n }\n return insertedID\n }\n }\n .flatMap { objectID -> EventLoopFuture<BSONObjectID> in\n session.commitTransaction()\n .map { objectID }\n }\n .map { objectID in\n Response(status: .ok)\n }\n }.hop(to: req.eventLoop)\n}\nwithSessionhop(to:)updateOneinsertOne", "text": "(Sorry, pressed enter too soon on my last post!)Hi @Piers_Ebdon, thanks for reaching out and for your patience while our team was out of office.To address a few points of confusion that have come up here thus far -In order to have any operation you perform be considered part of a transaction, you must pass in the session you used to start the transaction as the session parameter. For example your updateOne call should have a final session: session parameter. This is how the driver determines whether an operation is part of a particular transaction or not. This is necessary because MongoClients are intended to be global, thread-safe objects, and there could be requests occurring in parallel using the same client that are not intended to be part of the transaction.You must always call end() when you are done using a session. Failure to do so will result in a assertionFailure (only evaluated in debug mode) when the ClientSession goes out of scope. That said, you may find it easier to use the MongoClient.withSession helper as that will handle the cleanup for you automatically by calling end() once the passed-in closure has finished executing. (Shown in example below.)abortTransaction() will effectively “throw away” the transaction a session has in progress if it exists. A session can only have one transaction in progress at a time, so you would need to call this if you wanted to perform multiple transactions with the same session. Calling session.end() will automatically abort an in-progress transaction if one exists, so if you are only ever doing a single transaction per session it’s not strictly necessary to also call abort.I think the source of the crash in your example method is that you are not hopping back to the request’s eventLoop before the end of the route closure. Futures returned from the driver may fire on any of the event loops in the ELG you pass in when creating the MongoClient so you need to ensure you hop back to the correct one for the request before returning. I’ve shown how to do this below. 
(We discussed before upcoming work to make the driver easier to use with Vapor - one goal of that project is to remove the need for you to do this kind of hopping yourself.)I did see that you commented asking about our plans to implement the “convenient API” for transactions, which adds support for automatically retrying transactions upon certain types of failures. We do not have a timeline for completing that work at the moment but we will certainly take into account your request as we plan upcoming work.I am glad you have found the complex example project useful. Our developer advocacy team including @Michael_Lynn is working on creating more examples now, and we could consider augmenting the existing example app as well. We very much appreciate hearing from you on what your pain points have been thus far (such as you have described in this post) as it helps us figure out what examples we should prioritize creating.Ok, all that said, onto the code sample you’ve provided! I’m able to get it working as I would expect by doing the following:Basically what I’ve changed is switched to using withSession, added a hop(to:) at the very end of the returned closure, and passed the session parameter to both updateOne and insertOne.Let me know if that works for you or if you have any further questions or issues.Best,\nKaitlin", "username": "kmahar" }, { "code": "", "text": "Hi @kmaharI am running out of superlatives to say thank you for your responses!Apologies for the slow reply. Currently in the process of moving cities in the UK and so I have been unable to really look into your amazing detailed explaination and answer.Following your response I have been able to get transactions working with the Swift driver .\nI only commented on the conventient transactions api ticket because I was desperate to get the transactions working in my app , so please ignore.In the example code you provided, I assume if updating the neighbourhood document fails then the whole transaction fails and there is no need to add a check on the updateResult response to ensure it has succeeded. Is that correct?I will let you know if I have any other questions but I think everything else is coveredCheersPiers", "username": "Piers_Ebdon" }, { "code": "neighbourhoodIncDocumentlet neighbourhoodIncDocument: BSONDocument = [\"$blah\": [\"totalCats\": 1]]{\n \"error\": true,\n \"reason\": \"WriteError(writeFailure: Optional(MongoSwift.MongoError.WriteFailure(code: 9, codeName: \\\"\\\", message: \\\"Unknown modifier: $blah. Expected a valid update modifier or pipeline-style update specified as an array\\\")), writeConcernFailure: nil, errorLabels: nil)\"\n}\n", "text": "Apologies for the slow reply. Currently in the process of moving cities in the UK and so I have been unable to really look into your amazing detailed explaination and answer.Not a problem at all, good luck with the move!I am glad you were able to get your application working! In the example code you provided, I assume if updating the neighbourhood document fails then the whole transaction fails and there is no need to add a check on the updateResult response to ensure it has succeeded. Is that correct?Yes, that’s correct. You can test that behavior by, for example, modifying your neighbourhoodIncDocument to contain an invalid update operator the MongoDB server will reject (for ex. 
let neighbourhoodIncDocument: BSONDocument = [\"$blah\": [\"totalCats\": 1]]).\nIn that case, the response I get to a POST request to the corresponding endpoint is the WriteError shown above, and both the kitten and neighbourhood collections remain unchanged.", "username": "kmahar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Transactions - Swift Driver
2020-09-02T19:06:01.933Z
Transactions - Swift Driver
4,276
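For reference alongside the Swift code in the thread above, a minimal mongo shell sketch of the same two-collection transaction, assuming hypothetical kittens and neighbourhoods collections in a test database on a replica set; the session, commit and abort calls are what the Swift driver methods map onto.

    // Shell sketch only; the Swift driver equivalents are shown in the thread itself.
    const session = db.getMongo().startSession();
    const testDB = session.getDatabase("test");
    session.startTransaction();
    try {
        testDB.neighbourhoods.updateOne({ name: "London" }, { $inc: { totalCats: 1 } });
        testDB.kittens.insertOne({ name: "Rex", color: "black", favoriteFood: "salmon" });
        session.commitTransaction();   // both writes become visible together
    } catch (e) {
        session.abortTransaction();    // any failure rolls back both writes
        throw e;
    } finally {
        session.endSession();          // always release the session when done
    }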
null
[]
[ { "code": "{\n\"students.name\": \"Peter\",\n\"students.level\": \"excelent\"\n}\n{\n \"schoolName\": \"London school\"\n \"students\": [{\n \"name\": \"Peter\",\n \"level\": \"middle\"\n }, {\n \"name\": \"Jane\",\n \"level\": \"excelent\"\n\t}]\n}\n", "text": "Hi, I would like to match all “schools” that contain some student “Peter” with “excelent” level.\nI tried:but it returns also documents where “Peter” and “excelent” are not related to the same person.My data:Is it possible somehow? I’m new in MongoDB…", "username": "Daniel_Reznicek" }, { "code": " db.schools.find(\n {\"students\": {\"$elemMatch\": {\"name\": \"peter\", \"level\": \"excellent\"}}});\n", "text": "Hi @Daniel_Reznicek,Sure. You need to use $elemMatch in your queries:Of course you can index those fields or the main array field to better search this syntax.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Match by 2 fields in one subobject
2020-09-11T16:52:01.747Z
Match by 2 fields in one subobject
8,060
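A short shell sketch of the indexing remark in the answer above; the compound multikey index is an assumption about what was meant rather than something spelled out in the thread.

    // Index both embedded fields so the $elemMatch predicate can use a compound multikey index.
    db.schools.createIndex({ "students.name": 1, "students.level": 1 });

    // $elemMatch only matches when a single array element satisfies both conditions.
    db.schools.find({ students: { $elemMatch: { name: "Peter", level: "excellent" } } });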
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "How to take backup based on isodate or objectId in mongodb v4.2.2; can you please let me know mongodump command.", "username": "hari_dba" }, { "code": "", "text": "Can you clarify what you mean by “based on ISOdate or objectId”?Do you mean you want to dump all the records in a collection that have some field greater than a particular ISODate or ObjectId?", "username": "Asya_Kamsky" }, { "code": "", "text": "If you want to select only the documents based on a filter criteria (like the restricting the documents based on a date field values), you can specify a filter with the option –query.", "username": "Prasad_Saya" }, { "code": "", "text": "I mean filter documents not all documents.\nExample: documents have 100000 records then I need take only limited documents based on _id or createAt date wise , if you know that command let me know.The below example you can understand I am asking, I hope you.\nOne more example: one document salary field start from 1 to 100000, then I want take back up from 1000 to 20000 filter it success my back up\nBut when I was try based on _id or date mongodump filter not working.", "username": "hari_dba" }, { "code": "", "text": "The below command tried but output documents not write , command should be wrong if yes, please let me know correct command.\nmongodump --port 27017 --db pms --collection purchase --query {\"_id\":{“gte”:“ObjectId(“5e464233b5b419ebe5e5c035”)”}} --out D:\\backup_export\\admin\\admin\\OUTPUT:2020-02-28T14:43:50.612+0530 writing pms.purchase to\n2020-02-28T14:43:50.631+0530 done dumping pms.purchase (0 documents)", "username": "hari_dba" }, { "code": "_idObjectIdnamemongodump --db=test --collection=test --query=\"{\\\"name\\\": \\\"Krish\\\"}\"ObjectIddate", "text": "I see you are using Windows OS. I couldn’t get the query filter with _id (an ObjectId) working, But the following worked fine. Assuming that there is a field called as name and it is of string type, I could export the documents with names “Krish”:mongodump --db=test --collection=test --query=\"{\\\"name\\\": \\\"Krish\\\"}\"The documentation says that if the query filter fields use the ObjectId or date type fields they must be specified as Extended JSON in the filter.", "username": "Prasad_Saya" }, { "code": "", "text": "i was tried string working but when i did use “date” and “_id” then not working.\nexample: query below:\n–query {“createdAt”:{\"$lte\":“ISODate(“2020-01-23T09:25:48Z”)”}}\ncan you please provide correct command", "username": "hari_dba" }, { "code": "ObjectIddate> mongodump --db=test --collection=test --query=\"{ \\\"_id\\\": { \\\"$gte\\\" : { \\\"$oid\\\": \\\"5e58e938707edd40784daf83\\\" } } }\"> mongodump --db=test --collection=test --query=\"{ \\\"dt\\\": { \\\"$gte\\\" : { \\\"$date\\\": \\\"2020-02-26T01:00:00.000Z\\\" } } }\"--queryFile--query> mongodump --db=test --collection=test --queryFile=qry.jsonqry.json{ \"_id\": { \"$gte\" : { \"$oid\": \"5e5911fe9476bac92059a747\" } } }", "text": "These will work on Windows OS. The first is a filter by ObjectId and the next by the date types:> mongodump --db=test --collection=test --query=\"{ \\\"_id\\\": { \\\"$gte\\\" : { \\\"$oid\\\": \\\"5e58e938707edd40784daf83\\\" } } }\"> mongodump --db=test --collection=test --query=\"{ \\\"dt\\\": { \\\"$gte\\\" : { \\\"$date\\\": \\\"2020-02-26T01:00:00.000Z\\\" } } }\"The --queryFile option:You can use the --queryFile option instead of the --query, in which case you can put the query filter JSON in a file. 
For example:> mongodump --db=test --collection=test --queryFile=qry.jsonand the qry.json has the following:\n{ \"_id\": { \"$gte\" : { \"$oid\": \"5e5911fe9476bac92059a747\" } } }Note the query file option’s JSON is cleaner without the backslash escapes.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi Prasad,Thanks for given solutions, was tried commands first one done but date and queryfile both some issue errors are given below please help on this any mistake about that\nThis one successful :mongodump --db=test --collection=test --query=\"{ “_id”: { “$gte” : { “$oid”: “5e58e938707edd40784daf83” } } }\"As per below command based on date all documents not write can you please help on this why documents not write properly.I was tried as per you mentioned --queryFile below error getting can you please help on thismongodump --port 27019 --db pms --collection purchase --queryFile “{”_id\":{\"$gte\":{\"$oid\":“5e461d06dba04739ea454892”}}}\" --out D:\\backup_export\\admin\\admin\\ --gzip\n2020-03-03T15:47:53.961+0530 Failed: error reading queryFile: open {\"_id\":{\"$gte\":{\"$oid\":“5e461d06dba04739ea454892”}}}: The filename, directory name, or volume label syntax is incorrect.", "username": "hari_dba" }, { "code": "mongodump --port 27019 --db pms --collection purchase --queryFile “{”_id\":{\"$gte\":{\"$oid\":“5e461d06dba04739ea454892”}}}\" --out D:\\backup_export\\admin\\admin\\ --gzip--queryFile--queryFile--query\" \"“ ”", "text": "mongodump --port 27019 --db pms --collection purchase --queryFile “{”_id\":{\"$gte\":{\"$oid\":“5e461d06dba04739ea454892”}}}\" --out D:\\backup_export\\admin\\admin\\ --gzip\n2020-03-03T15:47:53.961+0530 Failed: error reading queryFile: open {\"_id\":{\"$gte\":{\"$oid\":“5e461d06dba04739ea454892”}}}: The filename, directory name, or volume label syntax is incorrect.In your query:mongodump --port 27019 --db pms --collection purchase --queryFile “{”_id\":{\"$gte\":{\"$oid\":“5e461d06dba04739ea454892”}}}\" --out D:\\backup_export\\admin\\admin\\ --gzipYou are using the --queryFile option. But, you have the query instead of the file name. So the error. Change the --queryFile to --query.Since there is no error in this case, the issue must be with the query criteria and the available data in the collection. Also, the type of quotes you are using might be the problem; use straight quotes \" \" instead of “ ”.", "username": "Prasad_Saya" }, { "code": "\" \"“ ”", "text": "I used 2 types of command first one with backslash escapes 2nd straight quotes \" \" instead of “ ” . but proper output not getting. 
please confirm anything change below commandsD:\\bin>mongodump --port 27019 --db pms --collection purchase --query “{“dt”:{”$gte\":{\"$date\":“2020-02-14T04:07:34Z”}}}\" --out D:\\backup_export\\admin\\admin\\ --gzip\nOutPut:2020-03-03T19:22:16.014+0530 writing pms.purchase to\n2020-03-03T19:22:16.059+0530 done dumping pms.purchase (0 documents)D:\\bin>mongodump --port 27019 --db pms --collection purchase --query “{“dt”:{”$gte\":{\"$date\":“2020-02-12T04:07:34Z”}}}\" --out D:\\backup_export\\admin\\admin\nOutPut:2020-03-03T19:24:06.198+0530 Failed: error parsing query as Extended JSON: invalid JSON input", "username": "hari_dba" }, { "code": "mongodump --port 27019 --db pms --collection purchase --query \"{ \\\"dt\\\":{ \\\"$gte\\\": { \\\"$date\\\": \\\"2020-02-14T04:07:34Z\\\" } } }\" --out D:\\backup_export\\admin\\admin --gzipD:\\backup_export\\admin\\admin--gzipdtfieldpurchasepmsdb.purchase.find( { dt: { $gte: ISODate(\"2020-02-14T04:07:34Z\") } } )mongodump", "text": "mongodump --port 27019 --db pms --collection purchase --query “{“dt”:{”$gte\":{“$date”:“2020-02-14T04:07:34Z”}}}\" --out D:\\backup_export\\admin\\admin\\ –gzipmongodump --port 27019 --db pms --collection purchase --query \"{ \\\"dt\\\":{ \\\"$gte\\\": { \\\"$date\\\": \\\"2020-02-14T04:07:34Z\\\" } } }\" --out D:\\backup_export\\admin\\admin --gzipPlease note that:Make sure there is data in your collection by running this query from Mongo Shell; and these documents will be exported:\ndb.purchase.find( { dt: { $gte: ISODate(\"2020-02-14T04:07:34Z\") } } )For more details on mongodump see the documentation.", "username": "Prasad_Saya" }, { "code": "-q \"{'_id':{'\\$lte':ObjectId('ffffffff0000000000000000')}}\"", "text": "$ is a variable. (linux, macos)\nEscape the $.Like this.\n-q \"{'_id':{'\\$lte':ObjectId('ffffffff0000000000000000')}}\"", "username": "elfoo_N_A" } ]
How to take backup based on ISODate or ObjectID using mongodump v4.2.2?
2020-02-26T19:08:57.685Z
How to take backup based on ISODate or ObjectID using mongodump v4.2.2?
11,781
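A small shell sketch of the sanity check suggested in the thread above: run the equivalent filter from the mongo shell first, then reuse the matching Extended JSON in mongodump's --query. The ObjectId and date values are the placeholders from the thread.

    // Count the documents the dump should contain before running mongodump.
    db.purchase.find({ _id: { $gte: ObjectId("5e461d06dba04739ea454892") } }).count();
    db.purchase.find({ createdAt: { $lte: ISODate("2020-01-23T09:25:48Z") } }).count();

    // The same filters written as Extended JSON for the --query / --queryFile options:
    // { "_id": { "$gte": { "$oid": "5e461d06dba04739ea454892" } } }
    // { "createdAt": { "$lte": { "$date": "2020-01-23T09:25:48Z" } } }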
null
[ "aggregation" ]
[ { "code": "", "text": "Can we apply lookup on different collections i.e.\nA collection (on accountId field ) lookup B collection(on accountId field) lookup C collection (on trx field)let me clarify this i want to make a lookup onIs that possible in mongoDB to apply lookup in a same query on different collections as mentioned above?\nThanks in advance.", "username": "Nabeel_Raza" }, { "code": "[{\n $lookup: {\n from: 'a',\n localField: 'accountId',\n foreignField: 'accountId',\n as: 'infoFromA'\n }\n}, {\n $lookup: {\n from: 'c',\n localField: 'trx',\n foreignField: 'trx',\n as: 'infoFromC'\n }\n}]\n", "text": "Yes, you can perform $lookup more than once in an aggregation. Here is a pipeline I built to be run on Collection b:I’m including a screenshot from Atlas, so you can see what’s happening in each stage.Screen Shot 2020-09-10 at 7.54.55 AM1324×989 148 KB", "username": "Lauren_Schaefer" }, { "code": "Collection Acollection BCollection CCollection BCollection ACollection BCollection CCollection ACollections BCollection BCollection C", "text": "@Lauren_Schaefer this is not the requirement might be i was unable to define my question properly. my question is that we all know that we can join many collections based on a field. i just want to join Collection A to collection B (which is a piece of cake) the next tricky thing is that i want to add join on Collection C with Collection B in the same query is that possible to do so in mongodb?1- Collection A joins Collection B joins Collection C (this is what we all know)\n2- Collection A joins Collections B, Collection B joins Collection C (in the same query)Point 2 is required.", "username": "Nabeel_Raza" }, { "code": "db.b.aggregate([{\n $lookup: {\n from: 'a',\n localField: 'accountId',\n foreignField: 'accountId',\n as: 'infoFromA'\n }\n}, {\n $lookup: {\n from: 'c',\n localField: 'trx',\n foreignField: 'trx',\n as: 'infoFromC'\n }\n}])\n{ \"_id\" : ObjectId(\"5f5a12bf931c05e9b75e0f87\"), \"accountId\" : \"1\", \"hi\" : \"there\", \"trx\" : \"one\", \"infoFromA\" : [ { \"_id\" : ObjectId(\"5f5a12a0931c05e9b75e0f85\"), \"accountId\" : \"1\", \"something\" : \"else\" } ], \"infoFromC\" : [ { \"_id\" : ObjectId(\"5f5a12f1931c05e9b75e0f89\"), \"trx\" : \"one\", \"waz\" : \"up\" } ] }\n{ \"_id\" : ObjectId(\"5f5a12c8931c05e9b75e0f88\"), \"accountId\" : \"3\", \"hello\" : \"again\", \"trx\" : \"one\", \"infoFromA\" : [ ], \"infoFromC\" : [ { \"_id\" : ObjectId(\"5f5a12f1931c05e9b75e0f89\"), \"trx\" : \"one\", \"waz\" : \"up\" } ] }\n", "text": "Hmm…I’m not sure what you mean. I can run this in a single query:Below is the output:If that is not what you mean, please provide sample documents for each collection as well as the output you are looking for.", "username": "Lauren_Schaefer" }, { "code": "Collection BCollection A", "text": "@Lauren_Schaefer you are using Collection B as parent collection for aggregation but there will be 5 more joins with Collection A so it should be at the top.", "username": "Nabeel_Raza" }, { "code": "", "text": "The joins are happening on the the fields you indicated in your original question:I don’t think I fully understand what you are trying to do. 
Please provide sample documents for each collection as well as the output you are looking for.", "username": "Lauren_Schaefer" }, { "code": "", "text": "db.a.aggregate([{\n$lookup: {\nfrom: ‘b’,\nlocalField: ‘accountId’,\nforeignField: ‘accountId’,\nas: ‘infoFromB’\n}\n}, {\n$lookup: {\nfrom: ‘c’,\nlocalField: ‘b.trx’,\nforeignField: ‘trx’,\nas: ‘infoFromC’\n}}])\nThis is the solution.", "username": "Nabeel_Raza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lookup on different table
2020-09-10T09:27:21.988Z
Lookup on different table
6,015
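A hedged shell sketch of chaining the two joins when the pipeline runs on collection a; note that the second $lookup references the array produced by the first one (infoFromB here), which is an adjustment to the solution quoted in the thread rather than something confirmed in it.

    db.a.aggregate([
      { $lookup: {                     // join A to B on accountId
          from: "b",
          localField: "accountId",
          foreignField: "accountId",
          as: "infoFromB"
      } },
      { $lookup: {                     // join the attached B documents to C on trx
          from: "c",
          localField: "infoFromB.trx", // dotted path into the array from the first stage
          foreignField: "trx",
          as: "infoFromC"
      } }
    ]);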
null
[]
[ { "code": "$facet", "text": "I am eager to know whether MongoDB’s Aggregation Framework processes each sub-pipeline inside $facet in parallel.It is specified in the doc that:Each sub-pipeline within $facet is passed the exact same set of input documents. These sub-pipelines are completely independent of one another.Does that mean each sub-pipeline in $facet is send to a separate thread and processed asynchronously?", "username": "Tiya_Jose" }, { "code": "$facet", "text": "No, all aggregation pipelines are executed sequentially, including $facet.If you’re prepared to add some complexity and overhead to your client code, there’s a blog post exploring using custom code to execute facet sub-pipelines in parallel and then manually merging the results: Paul Done's Technical Blog: Run MongoDB Aggregation Facets In Parallel For Faster Insight", "username": "Mark_Smith" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Are MongoDB $facet sub-pipelines asynchronous and executed in parallel?
2020-09-11T08:51:43.993Z
Are MongoDB $facet sub-pipelines asynchronous and executed in parallel?
3,739
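A small illustrative $facet pipeline with hypothetical collection and field names, showing the independent sub-pipelines discussed above; each facet receives the same input documents, but they are still evaluated one after another on the server.

    db.artists.aggregate([
      { $facet: {
          // sub-pipeline 1: document counts per genre
          byGenre: [ { $group: { _id: "$genre", count: { $sum: 1 } } } ],
          // sub-pipeline 2: the ten most recent documents
          latest:  [ { $sort: { createdAt: -1 } }, { $limit: 10 } ]
      } }
    ]);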
null
[]
[ { "code": "", "text": "I want to know if there is an ETL service with Mongo Atlas. A services like AWS Glue.", "username": "Jose_Daniel_Oropeza" }, { "code": "", "text": "Input or output?For the output I would leverage the Change Streams by using MongoDB Realm Triggers.For the input, tons of possibilities but I guess I would mention at least the 3rd Party Service in MongoDB Realm and its HTTP service or the AWS Service maybe that can probably leverage AWS Glue directly in input or output I guess? It’s worth a shot at least.", "username": "MaBeuLux88" }, { "code": "", "text": "You should also check out the MongodB Data Lake, which allows you to parse and query datafiles stored on S3 in SON, BSON, CSV, TSV, Avro, ORC and Parquet formats.", "username": "Joe_Drumgoole" } ]
Is there an ETL for MongoAtlas?
2020-09-10T20:55:28.617Z
Is there an ETL for MongoAtlas?
3,498
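A minimal shell sketch of the change-stream building block mentioned above for the output side of an ETL flow; in practice the same watch() call would live inside a Realm trigger or a long-running worker rather than an interactive shell, and the collection name is hypothetical.

    // Tail inserts and updates on a collection and hand each event to the downstream step.
    const stream = db.orders.watch([ { $match: { operationType: { $in: ["insert", "update"] } } } ]);
    while (stream.hasNext()) {
        const event = stream.next();
        // Forward the document (or its key, when the full document is not included) to the target system here.
        printjson(event.fullDocument ? event.fullDocument : event.documentKey);
    }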
null
[]
[ { "code": "", "text": "I restored a cluster inside another Atlas project from an automated Cloud Provider snapshot.\nI noticed on the backup history page [1] that most of the time was spent during “Transferring Snapshot - X%” stage. Why is this the case?Technically, the underlying AWS EBS volume should be restored from an EBS snapshot.\nWhy do we have to wait for snapshot transfer in this case?\nAre we doing something wrong by restoring to a cluster in another project?[1] Cloud: MongoDB Cloud", "username": "MartinLoeper" }, { "code": "", "text": "Hi @MartinLoeper,If you want us to review the cloud progress of your specific project please post the link with all the details as the current link has masked fields.During a restore of the EBS volume we still need to build this new volume from the image as well as if the restore is done from a different region we have to move it to the region.Please see the following recommendations for optimal restores:Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,thanks for the information!\nIt is stated inside the docs that for optimal restore performance, the target cluster should exist \"in the same Atlas project as the source cluster. \".This is not the case in our current setup.\nDoes it mean that restore is taking much more time as the snapshot must be transferred from one Atlas project to another?Is it best practice to put production and staging cluster into the same Atlas project?Thanks,\nMartin", "username": "MartinLoeper" }, { "code": "", "text": "Hi @MartinLoeper,Having clusters across different projects have many consideration like your atlas user teams and permission saggregation, your DevOps ways of deploying and maintenance of your cluster.\nAdditionally there is database security features, although users and roles can now be bound to a cluster level, peering and other advance options might not.If your only consideration is backup and its restore times it make sense to keep the clusters in one project having the same disk and instance size during restores as it can use the fastest available disk cloning methods.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does restoring cluster backups across Atlas projects increase RTO?
2020-09-06T04:38:54.142Z
Does restoring cluster backups across Atlas projects increase RTO?
2,851
null
[ "dot-net" ]
[ { "code": "", "text": "Hi,\nI have to update few millions of records in Mongo collection using my C# .Net Core Console app as one time activity. It works just fine but it time-out after updating around 200 records so my DBA asked me to implement the noCursorTimeout for that activity. But I don’t know how to use noCursorTimeout in C# driver. Any help please.", "username": "ram_ram" }, { "code": "", "text": "Hi @ram_ram!Do you mind posting the code snippet of how you are updating those documents?This will help me determine the best solution for your issue, which may not be related to the NoCursorTimeout option at all.Thanks!", "username": "yo_adrienne" } ]
Question - noCursorTimeout - in C# Mongo driver
2020-06-22T20:33:03.172Z
Question - noCursorTimeout - in C# Mongo driver
1,957
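For context, a hedged shell sketch of the two usual ways around a cursor timing out during a long update pass; the collection and field names are made up, and the C# driver exposes an equivalent option on its find options.

    // Option 1: ask the server not to time the cursor out.
    db.records.find({ migrated: { $ne: true } }).noCursorTimeout().forEach(function (doc) {
        db.records.updateOne({ _id: doc._id }, { $set: { migrated: true } });
    });

    // Option 2, often safer: work in _id-ordered batches so no single cursor stays open for hours.
    let lastId = MinKey;
    let batch;
    do {
        batch = db.records.find({ _id: { $gt: lastId } }).sort({ _id: 1 }).limit(1000).toArray();
        batch.forEach(function (doc) {
            db.records.updateOne({ _id: doc._id }, { $set: { migrated: true } });
            lastId = doc._id;
        });
    } while (batch.length > 0);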
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "# .NET Driver Version 2.11.2 Release Notes\n\nThis is a patch release that fixes a bug reported since 2.11.1 was released.\n\nAn online version of these release notes is available at:\n\nhttps://github.com/mongodb/mongo-csharp-driver/blob/master/Release%20Notes/Release%20Notes%20v2.11.2.md\n\nThe list of JIRA tickets resolved in this release is available at:\n\nhttps://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.2%20ORDER%20BY%20key%20ASC\n\nDocumentation on the .NET driver can be found at:\n\nhttps://mongodb.github.io/mongo-csharp-driver/\n\n## Upgrading\n\nEveryone using versions 2.11.0 or 2.11.1 of the C# driver with version 4.4.0 or later of the server should upgrade to 2.11.2.\nThe issue fixed is related to simultaneous authentication of two or more connections, in which case a change introduced\n", "text": "This is a patch release that fixes a bug reported since 2.11.1 was released.An online version of these release notes is available at:The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.2%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:Everyone using versions 2.11.0 or 2.11.1 of the C# driver with version 4.4.0 or later of the server should upgrade to 2.11.2.\nThe issue fixed is related to simultaneous authentication of two or more connections, in which case a change introduced\nin 2.11.0 can result in authentication failing. Under loads low enough that only one connection is ever opened at the same\ntime the issue does not happen.See: https://jira.mongodb.org/browse/CSHARP-3196There are no known backwards breaking changes in this release.", "username": "Robert_Stam" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver 2.11.2 Released
2020-09-10T23:23:13.258Z
.NET Driver 2.11.2 Released
2,691
null
[ "dot-net" ]
[ { "code": "", "text": "MongoDb.Driver 2.10.2 will support TLS 1.2 connections and what is default TLS version supported?When .Net framework(v 4.5) app creates connection to MongoDB, what is the TLS version? TLS 1.0 / TLS 1.1 /TLS 1.2When .Net core(v2.1) app creates connection to MongoDB, what is the TLS version? TLS 1.0 / TLS 1.1 /TLS 1.2", "username": "Kannan_Ranganathan" }, { "code": "", "text": "Hi @Kannan_Ranganathan!what is default TLS version supported?Here’s the TLS Support for the C# driver.To reference here as well, these are the supported versions:\n\ntls-support-csharp-driver788×911 49.1 KB\nLet me know if this helps.Thanks!", "username": "yo_adrienne" } ]
MongoDb.Driver 2.10.2 will support TLS 1.2 and what is default TLS version supported?
2020-08-26T22:29:52.643Z
MongoDb.Driver 2.10.2 will support TLS 1.2 and what is default TLS version supported?
2,695
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "\nI would like to ask, what are the relationships between Atlas, MongoDB and the Free Tier Cluster providers?\nMy thinking is\nMongoDB = the database software - to be used for creating databases\nAtlas = The place where a MongoDB could be hosted in a clustered environment\nCluster Provider = no idea - What is the role of AWS, Azure, etc?\nI guess my thinking is wrong.", "username": "Mat_Jung" }, { "code": "", "text": "AWS(Amazon),Azure(Microsoft) are cloud providers\nFew others like Google cloudYou can check this linkCloud databases are a great alternative to traditional on–premise databases and offer many benefits, including reliability, scalability, performance, and much more.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Mat_JungMongoDB = the database software - to be used for creating databases\nAtlas = The place where a MongoDB could be hosted in a clustered environment\nCluster Provider = no idea - What is the role of AWS, Azure, etc?MongoDB Atlas is the global cloud database service for modern applications.\nYou can deploy fully managed MongoDB across AWS, Azure, or GCP. For a simple database you only choose which provider you like, nothing more - no extra account no extra contract that is all included. In case you change your mind and want to change the underlying provider, you can do so with in Atlas (s. above the cloud database service from MongoDB. You only need to chose the new provider everything else is done for you, just wait until the migration is finished.\nThere is lost more just hop over to the MongoDB Altas page and read the intro and/or the blog posts.\nYou asked for the free tier Cluster, you will find information about this on the previous mentioned link. In short: you can start completely free with a small DB (< 0.5 GB).Cheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "If I am creating a MongoDB at AWS, Azure, Google or IBM Cloud - what is then the role/advantage to work with mongodb.com?", "username": "Mat_Jung" }, { "code": "", "text": "Hello @Mat_Jungin case of a bare MongoDB nothing. But when you look on Altas, you have fully automated deployments, professional backup, auto scaling, monitoring, suggestion of schema improvements, suggestion of index improvements, integration of charts, integration of MDB data lakes… So there is plenty more which can ease your live. In case you have some years of experience as MongoDB DBA and deployment and optimizing shards and replicasets and when you have already a working monitoring for MongoDB on AWS you may come close to what Altas provides. Minor updates are applied in Atlas automatically one point which saves quite a bit of work…I am not a sales rep, so I may have missed some points. But again, I like to encourage you to read the linked pages and than make your decision. Feel free to ask any concrete question here in the community, we will try to answer. 
In case you need further input concerning you initial question I like to suggest to contact MongoDB sales - they will find you an optimal solution.Cheers\nMichael", "username": "michael_hoeller" }, { "code": "M#", "text": "Hi @Mat_Jung,MongoDB Atlas is a managed database service – you don’t have to worry about the underlying administrative knowledge and tasks to configure, secure, monitor, backup, and scale your MongoDB deployments.I recommend starting with the MongoDB Atlas product page and MongoDB Atlas FAQ, but will try to address the specific questions you’ve asked here.An Atlas cluster is a MongoDB deployment (currently a replica set or sharded cluster) managed by the MongoDB Atlas service. Atlas clusters may either have shared resources (for example, M0 free tier clusters and M2/M5 clusters) or dedicated resources (M10+). The M# reference refers to a specific Atlas cluster configuration, with larger numbers generally indicating clusters with more resources.Atlas creates and manages clusters on supported cloud providers (currently AWS, GCP, and Azure) and provides a single bill for your database management and hosting. Atlas also integrates a growing set of cloud features including Atlas Search (full text search with Lucene), Atlas Data Lake (query and analyse data across AWS S3 and Atlas), and MongoDB Realm (serverless apps and mobile sync).The minimum Atlas cluster deployment is a 3 member replica set running MongoDB Enterprise configured with best practice features such as role-based access control, TLS/SSL network encryption, firewall restrictions, and monitoring. You can configure additional features and behaviour via the Atlas UI or API.Your team is still generally responsible for capacity planning and choosing when to scale clusters up (or down), but with dedicated clusters you can also choose to configure Cluster Auto-Scaling to adjust cluster tier or storage based on thresholds.Supported cloud providers (currently Atlas, GCP, and Azure) provide the hosting infrastructure (virtual machine instances, storage, …) where the Atlas service will create and manage your clusters. You can choose your preferred cloud provider and region(s) to match other other aspects of your use case (for example, choosing a provider and region to reduce network latency between your application servers and your Atlas cluster).The Atlas free tier gives you access to a shared deployment running on one of the supported cloud providers. Free tier clusters are intended to be used as development sandboxes: they have a storage limit of 512MB of data and some operational limitations including throughput, connections, and data transfer limits.You can create a self-hosted MongoDB deployment, but are then responsible for all of the administrative tasks and infrastructure setup/maintenance (security, monitoring, backup, scaling, upgrades, etc). 
Atlas automates common administrative and operational challenges so your team can focus on development.MongoDB = the database software - to be used for creating databases\nAtlas = The place where a MongoDB could be hosted in a clustered environment\nCluster Provider = no idea - What is the role of AWS, Azure, etc?MongoDB = database server software\nCluster = a MongoDB deployment (Atlas clusters are replica sets or sharded clusters)\nAtlas = cloud-hosted interface to manage your clusters (deployment, security, monitoring, and backup)\nCloud Provider = the cloud hosting provider where your managed instances are created by Atlas (currently AWS, GCP, or Azure)\nFree Tier Cluster = a free Atlas cluster you can use for development and testingRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks everyone for explaining the roles in more detail.\nWhat I consider as advantage is, during the Trial / Free Tier nobody asked for Credit Card details yet.One could close the ticket.", "username": "Mat_Jung" }, { "code": "", "text": "What I consider as advantage is, during the Trial / Free Tier nobody asked for Credit Card details yet.Hi Mat,That’s correct: you can get started on the Atlas Free Tier without providing any payment details. There is no expiry period, so this is a free service offering rather than a trial.One could close the ticket.Conversations in this community forum are discussions rather than tickets.However, if a specific post was most helpful in resolving a discussion topic you started, please mark it as a “Solution”.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "2 posts were split to a new topic: Can I use AWS Activate credits on AWS hosting of mongodb atlas", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Relationship between Atlas, MongoDB, and Cluster Providers
2020-08-01T21:41:25.054Z
Relationship between Atlas, MongoDB, and Cluster Providers
4,749
null
[]
[ { "code": "", "text": "Hello,\nI have some AWS Activate credits and I want to spend them on the AWS hosting of mongodb atlas? I want to avoid paying since I have these credits, how can I do that?", "username": "Alessandro_Sanino" }, { "code": "", "text": "Hi @Alessandro_Sanino,\nThanks a lot for your question. We’re happy to help AWS Activate startups in this situation! Please reach out to us at [email protected] and we’ll discuss your options!", "username": "Manuel_Meyer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can I use AWS Activate credits on AWS hosting of mongodb atlas
2020-09-10T16:41:24.570Z
Can I use AWS Activate credits on AWS hosting of mongodb atlas
2,861
null
[ "connector-for-bi" ]
[ { "code": "", "text": "Hello,I have a mongodb container in my server, and Im trying to connect from ssis. Well, my problem is that when I fill the configuration with the ip address of my server where Im running my container with mongo, and then I test the connection I received the following message: 'Lost connectoin to MySAL server at ‘reading initial communication packet’, system error:0.", "username": "Niskeydi_Michel" }, { "code": "", "text": "Hi @Niskeydi_Michel,Please can you provide some more detail about your setup? For example, MongoDB doesn’t support ODBC unless you’re also running the BI connector which emulates a MySQL server on top of a MongoDB server - but you don’t mention this in your question.The fact that you’re running MongoDB inside a container also adds some complexity - for example if you haven’t set up authentication in MySQL then it probably won’t be accessible from outside of the container (although that may not be a problem if you are running the BI Connector)", "username": "Mark_Smith" } ]
MongoDB ODBC configuration doesnt work
2020-09-10T16:41:09.247Z
MongoDB ODBC configuration doesnt work
3,828
null
[ "sharding", "security" ]
[ { "code": "", "text": "Hi all,I have two nodes -This whole setup is going good. but I have a major concern of enabling authentication to mongos server. at present I’m able to login to mongos without any authentication credentials and that is worrying me. and now I’m unable to find a proper resource on how to prevent this from happening. as mongos config does not have any security.authorization key to it how do I enable authorization to it?", "username": "Vibhanshu_Biswas" }, { "code": "transitionToAuth: true", "text": "After hours of looking up on the internet I found that I have included a property calledtransitionToAuth: trueDue to this it does not waits for adhoc authentication rather authenticates based on the keyFile itself. this is why I was able to see all the databases in my shards directly.New learning! Phew!", "username": "Vibhanshu_Biswas" }, { "code": "", "text": "Hi @Vibhanshu_Biswas - I see this question is marked as [Solved], but I can’t see a solution here to your post. Does this mean you solved it yourself and you’re no longer looking for an answer to your question?", "username": "Mark_Smith" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to enable authentication/authorization to Mongos router?
2020-09-10T08:01:18.306Z
How to enable authentication/authorization to Mongos router?
2,429
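A brief shell sketch of what enforcing client authentication on the mongos looks like once transitionToAuth is removed: with keyFile internal authentication in place, the first user is created through the localhost exception and every later connection must log in. The user name and password are placeholders.

    // Connected to the mongos before any users exist (localhost exception):
    db.getSiblingDB("admin").createUser({
        user: "clusterAdmin",                      // placeholder name
        pwd: "changeMe",                           // placeholder password
        roles: [ { role: "root", db: "admin" } ]
    });

    // Afterwards, every client connecting through the mongos has to authenticate:
    db.getSiblingDB("admin").auth("clusterAdmin", "changeMe");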
null
[]
[ { "code": "", "text": "Hi. Recently joined MongoDB Startups and Atlas through TechStars.Looking to understand best practices for indexing in a production environment.", "username": "Ephraim_Tabackman" }, { "code": "", "text": "Hi @Ephraim_Tabackman,Welcome!I recommend to read the following documentation and blog:Please let me know if that helpsBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Welcome, @Ephraim_Tabackman, I’m glad you’ve joined us! Be sure to take a look at our code of conduct to ensure you get the most from your community experience.", "username": "Jamie" } ]
Hi. Ephraim, CTO of Parsempo in Israel
2020-09-06T07:02:16.703Z
Hi. Ephraim, CTO of Parsempo in Israel
4,136
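A compact shell sketch of the pattern those references recommend; the collection, fields and query are hypothetical, but the shape (equality, then sort, then range ordering in the compound index, followed by explain() to confirm the plan) reflects the general guidance.

    // Compound index ordered equality -> sort -> range (the "ESR" rule of thumb).
    db.orders.createIndex({ status: 1, orderedAt: -1, amount: 1 });

    // explain() shows whether the query uses an index scan instead of a collection scan.
    db.orders.find({ status: "shipped", amount: { $gt: 100 } })
             .sort({ orderedAt: -1 })
             .explain("executionStats");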
null
[ "graphql", "schema-validation" ]
[ { "code": " {\n \"type\": \"object\",\n \"title\": \"SearchResults\",\n \"properties\": {\n \"artists\": {\n \"type\": \"array\",\n \"description\": \"Returns the artist documents from this search\",\n \"items\": {\n \"type\": \"object\",\n \"$ref\": \"#/realm/mongodb-atlas/music/artists\"\n }\n }\n }\n}\n", "text": "Hi MongoDB community!I’m having trouble formatting the payload type for a custom GraphQL resolver.I’m wanting to implement a search query, and return matching documents to a given search. I’ve got the function all set up, but I cannot figure out how to re-use existing schema types from my MongoDB Atlas schema as the payload type.I want to send an array of documents as the payload, and this was what I came up with:I’m getting an error when using $ref, and I’m also not sure if the $ref schema link is right. Basically, the artists schema already exists as defined in Realm rules, and I just want to reuse this as the payload type.Will I just have to redefine the document schema in this payload, or can I access this already-existing schema type?Thank you so much! Please let me know if this question is clear or if I could add anything else ", "username": "Pierre_Rodgers" }, { "code": "artists", "text": "Hi Pierre,At the moment you will need to re-define the schema type for artists in your payload type. However, we are actually planning on releasing the ability to reuse schema types this month (likely in the next week or so). Hope that helps.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "@Pierre_Rodgers, this should be available to try out in the Custom Resolver UI now. You can also read more about it in our docs.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
GraphQL custom resolver payload schema
2020-08-05T11:33:45.366Z
GraphQL custom resolver payload schema
3,806
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "How to change the email template for Realm email verification and other emails?", "username": "chawki" }, { "code": "", "text": "At the moment, you can only change the email subject for email verification emails.I’m not quite sure what you’re referring to in terms of “other emails” - do you mind explaining?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "If a new user signed I get this email template for email confirmation can I able to change this email template to a custom template.Capture782×265 8.01 KB", "username": "chawki" }, { "code": "", "text": "You can edit the content of the email if you decide to implement the logic with Custom Function Authentication and write your own email logic.If you are using Email/Password Authentication, there is no way to edit the content of the email.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to change default email template for Realm?
2020-09-10T00:53:26.241Z
How to change default email template for Realm?
3,169
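A heavily simplified sketch of the Custom Function Authentication route mentioned above, written as a Realm JavaScript function; the payload shape, the users collection and the sendConfirmationEmail helper are all assumptions meant to show where custom email logic could live, not a documented template.

    // Realm custom function authentication handler (sketch).
    exports = async function (loginPayload) {
      const users = context.services.get("mongodb-atlas").db("app").collection("users");
      const email = loginPayload.email;                 // assumed payload field

      let user = await users.findOne({ email: email });
      if (!user) {
        const result = await users.insertOne({ email: email, confirmed: false });
        user = { _id: result.insertedId };
        // Hypothetical helper: a separate Realm function that builds and sends your own email template.
        await context.functions.execute("sendConfirmationEmail", email);
      }
      return user._id.toString();                       // Realm expects a unique string id back
    };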