Columns: image_url (string), tags (list), discussion (list), title (string), created_at (string), fancy_title (string), views (int64)
null
[ "server" ]
[ { "code": "brew services start [email protected]\nError: Permission denied @ rb_sysopen - /Users/[username]/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nls -la ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nls: /Users/[username]/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist: No such file or directory\n", "text": "I’m trying to install the MongoDB Community Edition by following MongoDB’s guide here. I’m on an M1 Mac running MacOS 12.6.Brew (seemingly) installs MongoDB with no problems, however, when I go to runI encounter this error:When I go to view the file mentioned, usingI get the following error:I’ve uninstalled mongodb twice now and reinstalled it but nothings changed. Has anyone had this problem?", "username": "Braeden_Kilburn" }, { "code": "brew doctorbrewbrew doctor", "text": "Hi @Braeden_Kilburn ,Welcome to The MongoDB Community Forums! \nI notice you haven’t had a response to this topic yet - were you able to find a solution?The error looks like a file permission problem and should be unrelated to using an M1 mac.I would try running brew doctor via Terminal.app, as this will detect & resolve common permission and install issues that affect brew .If brew doctor doesn’t help, I would try:sudo chown $(whoami) ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plistRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Installation Error at install
2022-10-12T19:39:58.130Z
MongoDB Installation Error at install
1,615
https://www.mongodb.com/…a_2_1024x201.png
[ "monitoring" ]
[ { "code": "storage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n\n wiredTiger:\n engineConfig:\n cacheSizeGB: 5\n\nsystemLog:\n destination: file\n logAppend: true\n #path: /var/log/mongodb/mongod.log\n path: /storage/mongodb/log/mongod.log\n logRotate: reopen\n\nnet:\n port: 27017\n #bindIp: 127.0.0.1\n bindIp: 0.0.0.0\n\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n", "text": "I got error high CPU,Cat file config as follow:\nScreenshot 2021-09-02 1705511486×292 30.2 KB\n", "username": "Ti_n_Tr_nh_Minh" }, { "code": "python3mongod", "text": "Looks like there are a lot of python3 processes running on this host and they are taking up all the CPU (mongod process has mere 6.5%) so you may need to look at those rather than your MongoDB set-up.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "The first, thank you for your feedback!\nThat right, I had a lot of python3 processes in my server.\nMy script are simple, Initiate connection to, then update a lot of field in collection of db, then close session. I schedule them every 1 hour and I observed CPU increase over time. CPU goes down when I restart mongod service. Looking at the htop result, mongodb are taking up all the CPU.Did I do something wrong in my script?\nIs there a way to limit CPU for mongod service?\nOr show me how to test and solve this problem.Thanks for the help!\n\nScreenshot 2021-09-04 1223011149×912 124 KB\n", "username": "Ti_n_Tr_nh_Minh" }, { "code": "", "text": "", "username": "Stennie_X" } ]
I got error High CPU
2021-09-02T10:10:58.393Z
I got error High CPU
3,149
null
[ "serverless" ]
[ { "code": "", "text": "I want to use MongoDB atlas serverless option, but i see that Atlas search is not supported on the serverless option is there any way to achieve autocomplete search queries without using Atlas Search", "username": "peter_mwangi" }, { "code": "autocomplete$search", "text": "Hi @peter_mwangi - Welcome to the community.Yes, as of the time of this message per the Serverless Limitations documentation, Atlas search usage is not available with Serverless instances.is there any way to achieve autocomplete search queries without using Atlas SearchThe autocomplete operator is part of Atlas search’s $search stage so you won’t be able to use it without Atlas Search.Do you have further details you can share about your use case here? If your use case heavily relies on autocomplete then perhaps you may wish to consider a dedicated cluster instead for the meantime. You can put in a feature request in the MongoDB Feedback site detailing your use case and requirements for Atlas Search and Serverless together.Regards,\nJason", "username": "Jason_Tran" } ]
MongoDB Atlas search in serverless option
2022-10-14T15:20:36.220Z
MongoDB Atlas search in serverless option
2,079
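The thread above confirms that the Atlas Search autocomplete operator is unavailable on Serverless instances. Purely as an illustrative fallback (not something proposed in the thread), a left-anchored, case-sensitive regular expression on an ordinarily indexed field gives basic prefix matching; the products collection and name field below are hypothetical.

    // mongosh sketch: a prefix regex anchored with ^ can walk the { name: 1 } index
    db.products.createIndex({ name: 1 });
    db.products.find({ name: /^orga/ }).limit(10);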
null
[ "aggregation", "queries", "atlas-search" ]
[ { "code": "{\n \"analyzer\":\"lucene.standard\",\n \"searchAnalyzer\":\"lucene.standard\",\n \"mappings\":{\n \"dynamic\":true,\n \"fields\":{\n \"bestProduct\":{\n \"fields\":{\n \"productName\":[\n {\n \"analyzer\":\"lucene.simple\",\n \"searchAnalyzer\":\"lucene.simple\",\n \"type\":\"string\"\n },\n {\n \"analyzer\":\"lucene.standard\",\n \"maxGrams\":3,\n \"minGrams\":15,\n \"tokenization\":\"edgeGram\",\n \"type\":\"autocomplete\"\n }\n ]\n },\n \"type\":\"document\"\n },\n \"brandName\":[\n {\n \"analyzer\":\"lucene.simple\",\n \"searchAnalyzer\":\"lucene.simple\",\n \"type\":\"string\"\n },\n {\n \"type\":\"autocomplete\"\n }\n ]\n }\n }\n}\n{\n \"$search\":{\n \"compound\":{\n \"should\":[\n {\n \"term\":{\n \"query\":\"organic\",\n \"path\":[\n \"bestProduct.productName\"\n ],\n \"score\":{\n \"boost\":{\n \"value\":10\n }\n }\n }\n },\n {\n \"autocomplete\":{\n \"query\":\"organic\",\n \"path\":\"bestProduct.productName\",\n \"score\":{\n \"boost\":{\n \"value\":50\n }\n }\n }\n }\n ]\n }\n }\n}\n{\n \"bestProduct\":{\n \"productName\":\"Samisha organic\"\n }\n},\n{\n \"bestProduct\":{\n \"productName\":\"childern organic\"\n }\n},\n{\n \"bestProduct\":{\n \"productName\":\"organic\"\n }\n}\n", "text": "Hi,\nI am creating a search engine with the help of mongo Atlas search,\nThis is the mapping I am using for $search in aggregation query,and this is the aggregation queryI am getting this as outputI want to know, how can I get the matching word first, and also if the search_key = “organ” still I want products with a name starting with “organ*” shows first, Can anyone help?", "username": "Nikhil_Anand1" }, { "code": "\"minGrams\"\"maxGrams\"\"bestProduct.productName\":\"organic\"DB> db.organic.aggregate([\n{\n '$search': {\n compound: {\n should: [\n {\n term: {\n query: 'organic',\n path: [ 'bestProduct.productName' ],\n score: { boost: { value: 10 } }\n }\n },\n {\n autocomplete: {\n query: 'organic',\n path: 'bestProduct.productName',\n score: { boost: { value: 50 } }\n }\n }\n ]\n }\n }\n},\n{\n '$project': { bestProduct: 1, _id: 0, score: { '$meta': 'searchScore' } }\n}\n[\n { bestProduct: { productName: 'organic' }, score: 9.29643726348877 },\n {\n bestProduct: { productName: 'Samisha organic' },\n score: 8.128897666931152\n },\n {\n bestProduct: { productName: 'childern organic' },\n score: 8.128897666931152\n }\n]\n\"minGrams\"\"maxGrams\"{ '$project': { bestProduct: 1, _id: 0, score: { '$meta': 'searchScore' } } }\"default\"Your index could not be built: \"mappings.fields.bestProduct.fields.productName[1]\" minGrams cannot be greater than maxGrams. You are still using your last valid index.\n\"You are still using your last valid index.\"\"minGrams\"\"maxGrams\"$search", "text": "Hi @Nikhil_Anand1 - Welcome to the community This is the mapping I am using for $search in aggregation query,I tried using this index definition and noticed that \"minGrams\" was larger than \"maxGrams\". Are these values you entered a typo? I.e. They are supposed to be reversed?I ran a simple test with the same query and had the document where \"bestProduct.productName\":\"organic\" appeared first (with the highest score). 
Please see the query ran but with an extra projection stage to indicate scoring:Output:To further assist you with this, can you provide the following information:I tried to use the index you provided and was returned with:Note specifically the following: \"You are still using your last valid index.\"You could be getting the results you’ve specified since the index you provided is actually not in use. I swapped the \"minGrams\" and \"maxGrams\" values you had specified in my own test above before running my $search.Regards,\nJason", "username": "Jason_Tran" } ]
I am not getting the expected result from autocomplete edgeGram
2022-10-14T09:54:39.355Z
I am not getting the expected result from autocomplete edgeGram
1,481
null
[ "aggregation", "queries" ]
[ { "code": "{\n _id: ObjectId(\"634b08f7eb5cb6af473e3ab2\"),\n name: 'India',\n iso_code: 'IN',\n states: [\n {\n name: 'Karnataka',\n cities: [\n {\n name: 'Hubli Tabibland',\n pincode: 580020,\n location: { type: 'point', coordinates: [Array] }\n },\n {\n name: 'Hubli Vinobanagar',\n pincode: 580020,\n location: { type: 'point', coordinates: [Array] }\n },\n {\n name: 'Hubli Bengeri',\n pincode: 580023,\n location: { type: 'point', coordinates: [Array] }\n },\n {\n name: 'Kusugal',\n pincode: 580023,\n location: { type: 'point', coordinates: [Array] }\n }\n ]\n }\n ]\n}\n{\n _id: ObjectId(\"634b08f7eb5cb6af473e3ab2\"),\n name: 'India',\n iso_code: 'IN',\n states: [\n {\n name: 'Karnataka',\n cities: [\n {\n name: 'Kusugal',\n pincode: 580023,\n location: { type: 'point', coordinates: [Array] }\n }\n ]\n }\n ]\n}\ndb.countries.find(\n {\n 'states.cities': {\n $elemMatch: {\n 'name' : 'Kusugal'\n }\n }\n }, \n {\n '_id': 1, \n 'name': 1, \n 'states.name': 1, \n 'states.cities.$' : 1\n }\n);\n", "text": "I have the following document sampleI need only the followingFollowing is the query that I have tried so far but it returns all the cities", "username": "Channaveer_Hakari" }, { "code": " db.countries.aggregate([\n { $match: { \"states.cities.name\": /Kusugal/ } }, \n { $unwind: \"$states\" }, \n { $unwind: \"$states.cities\" }, \n { $match: { \"states.cities.name\": /Kusugal/ } }\n ]);\n", "text": "I was able to achieve it with the help of aggregation.1st line $match will query the records with cities with only Kusugal2nd & 3rd line $unwind will create a separate specific collection of documents from the filtered records3rd line $match will filter these records again based on the conditionIn simple aggregation processes commands and sends to next command and returns as an single result.", "username": "Channaveer_Hakari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get only matched array object along with parent fields
2022-10-16T09:00:31.355Z
Get only matched array object along with parent fields
1,462
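As a hedged alternative to the $unwind pipeline shown above, $filter can keep only the matching city while preserving the parent fields, assuming the same countries collection and field names as the sample document; states whose cities array ends up empty could be dropped in a further stage.

    db.countries.aggregate([
      { $match: { "states.cities.name": "Kusugal" } },
      { $project: {
          name: 1,
          iso_code: 1,
          states: { $map: {
            input: "$states",
            as: "s",
            in: {
              name: "$$s.name",
              cities: { $filter: {
                input: "$$s.cities",
                as: "c",
                cond: { $eq: ["$$c.name", "Kusugal"] }
              } }
            }
          } }
      } }
    ]);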
null
[]
[ { "code": "context.values.get(\"secretName\")undefined", "text": "In a previous version of MongoDB, where Values and Secrets were two distinct things, I created some secrets that I linked to values. I was able to get the values of those secrets from Realm functions using context.values.get(\"secretName\"), and I’m still able to do that.Recently I created new secrets, this time using the new MongoDB version (where secrets are simply values with a “secret” type), but this time I can’t access those secrets in Realm functions. I get undefined values whenever I use the expression above. This is even weirder because in the Realm UI, those new secrets look the same as the old secrets —there is nothing that could explain why the old one are working and the new ones aren’t.According to the doc here:If you need to access the Secret from a Function or Rule, you can link the Secret to a Value.What does this mean? How to do that in the new Values system?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "[Pinging this post cause I’m still looking for an answer]", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Still looking for an answer… Can’t use Secrets, it’s a shame", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Step 1: In Mongo console, create a “secret”.\nStep 2: In Mongo console, create a “value”, and choose “link” to that “secret”.\nStep 3: In your function, use ‘context.values.get’ to get the “value”, which in turn gets the “secret”.", "username": "Alex_Breen" } ]
Using secret in function with new Values system?
2022-04-19T19:09:45.926Z
Using secret in function with new Values system?
3,002
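A small sketch of the flow Alex_Breen describes, assuming a Value named apiKey that was created with "Link to Secret" pointing at the Secret (both names are placeholders):

    // Atlas Function: the linked Value resolves to the Secret's content at runtime
    exports = async function () {
      const apiKey = context.values.get("apiKey"); // Value linked to the Secret
      // use it with an external service, e.g. via context.http
      return apiKey !== undefined;
    };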
https://www.mongodb.com/…39935891accd.png
[]
[ { "code": "", "text": "Hello, I have 4.2.1 MongoDB version and look this result of commonservices DB space:\n(Note: The dataSize is bigger then the storageSize and sizeOnDisk)\nhow is this possible? existis one procedure to solve this?Obs.: I have others DBs present true information.\nimage419×633 10.2 KB\nThanks,\nHenrique.", "username": "Henrique_Souza" }, { "code": "", "text": "Hi @Henrique_Souza,The dataSize is bigger then the storageSize and sizeOnDiskMongoDB’s WiredTiger storage engine compresses indexes and collections by default, so it is expected that the data size can be bigger than the storage size. The ratio between storage and data will depend on how compressible your data is as well as the compression algorithm used.I have 4.2.1 MongoDB versionWhile I believe the behaviour you are describing is expected, I would note that MongoDB 4.2.1 was released almost two years ago (Oct 22, 2019). I recommend upgrading to the latest 4.2.x release (currently 4.2.23) as there have been many bug fixes and stability improvements since 4.2.1: 4.2 Changelog. Minor updates within the same release series do not introduce any backward breaking changes.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks @Stennie_X by explanation!", "username": "Henrique_Souza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Bug in 4.2.1 MongoDB on allocated DB space?
2022-10-12T03:25:14.592Z
Bug in 4.2.1 MongoDB on allocated DB space?
1,731
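To see the two numbers discussed above side by side, a quick mongosh check; dataSize is the uncompressed BSON size while storageSize reflects what WiredTiger keeps on disk after block compression:

    const s = db.stats();
    print(`dataSize: ${s.dataSize}  storageSize: ${s.storageSize}  ratio: ${(s.dataSize / s.storageSize).toFixed(2)}`);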
null
[]
[ { "code": "", "text": "Hi!I’ve been using MongoDB Atlas for around 2 years now, but I’ve recently joined the community. Looking forward to having a good time here!Regards,\nnjt", "username": "njt" }, { "code": "", "text": "Welcome to the MongoDB Community @njt!Were you looking for any resources in particular?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "A post was split to a new topic: How to relate record using Mongoose?", "username": "Stennie_X" } ]
Hello, MongoDB Community!
2022-06-06T06:59:09.764Z
Hello, MongoDB Community!
2,908
null
[ "java" ]
[ { "code": "", "text": "With Loom starting to be integrated in the JDK (preview in 19), I’m curious to know how compatible the Java sync driver is with Loom, major issues are JNI calls and (as of now) synchronized blocks (proper Locks objects are supported).Is someone looking into this? This could be a really interesting alternative to the reactive driver.", "username": "Jean-Francois_Lebeau" }, { "code": "", "text": "Yes, we’ve done some testing with both virtual threads and structured concurrency, and recently addressed https://jira.mongodb.org/browse/JAVA-4642. There are a few other known issues that we are tracking in scope of the https://jira.mongodb.org/browse/JAVA-4649 epic. If you happen to do any testing of your own, please report any issues that you find.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How Loom friendly is the Java sync driver?
2022-10-15T14:42:49.114Z
How Loom friendly is the Java sync driver?
1,548
https://www.mongodb.com/…9_2_1024x533.png
[ "vscode" ]
[ { "code": "", "text": "dear all,\ni am using vscode with mongodb extension, connection is fine, but i cannot debug anymore. I did use 2 weeks ago. at that time, it was fine.\n\n11675×873 150 KB\ni unstall and installed the extension, but it doesnot make sense. please help me.", "username": "jian_wu" }, { "code": "", "text": "@jian_wu how were you debugging MongoDB Playground files? The extension never had that functionality. What do you expect to happen when you start a debugging session?", "username": "Massimiliano_Marcon" }, { "code": "", "text": "I see, thank you! I found the solution, i.e., after connection of local or atlas, to select the command lines and press the yellow buble, then the output of query will show on the right side as “playground results”\nimage1033×286 13.2 KB\n", "username": "jian_wu" }, { "code": "", "text": "You can also hit the play button at the top right and the entire playground file will be run.", "username": "Massimiliano_Marcon" } ]
VS Code with MongoDB extension, connection fine but cannot debug anymore
2022-10-12T01:32:35.591Z
VS Code with MongoDB extension, connection fine but cannot debug anymore
2,463
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hey guys I’m sorry. Last year’s installation was a breeze. Now, after a hiatus, I want to get back into learning and have tried using Ubuntu 20.04, debian11.5,11.4,11.0 and I get no love from any of those distros while trying to install both MongoDB and mongosh. Works fine on windows. But I would also like to be able to duplicate the learning experience on a Linux distro. systemctl says failed mongod service, mongod at the command line says “Illegal instruction”. right now I’m using Debian 11.5. I don’t know what repository to add that won;t give me errors. Any help would be appreciated.", "username": "Peter_Suchsland" }, { "code": "", "text": "I would be willing to help out with a solution.\nmongod tries to load but fails everytime whether on boot or by hand.Perhaps the repo trees has problems? Does anyone have a running version of Debian or Ubuntu running MongoDB community edition working as of today October 15th, 2022 specifically within a virtualbox? (and specifically what versions are you using? ) I just tried Buster with 4.4 community edition: no go. What I find so surprising is that last year it was a just “bing bang finished” install – with zero problems and easy connections. Btw I’m not in love with deb nor ubuntu. Ok I’ll shut up now. Thanks in advance.", "username": "Peter_Suchsland" }, { "code": "", "text": "Ok I think I figured out what the deal is. Mongo is doing some massive rearranging of links and sites. It became evident when I realized that all the links from other sites that point to “how-tos” are all different. Importantly the pgp/asc site has been moved. Plus Ubunto is phasing out some commands confusing the matter. Good luck guys. Hopefully in the next few weeks… hopefully.", "username": "Peter_Suchsland" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB mongosh on Ubuntu or Debian, Oct '22
2022-10-15T07:42:23.273Z
MongoDB mongosh on Ubuntu or Debian, Oct '22
1,872
https://www.mongodb.com/…9161519550e9.png
[ "queries", "node-js", "mongoose-odm", "next-js" ]
[ { "code": "import Users from \"../../../api/models/Users\";\nimport axios from \"axios\";\nimport dbConnect from \"../../util/mongodb\";\n\nconst handler = async (req, res) => {\n const { method } = req;\n await dbConnect();\n\n switch (method) {\n case \"GET\":\n try {\n const res = await Users.find();\n res.status(200).json(res);\n } catch (error) {\n res.status(500).json(error);\n }\n\n break;\n case \"POST\":\n console.log(POST);\n break;\n case \"PUT\":\n console.log(PUT);\n break;\n case \"Delete\":\n console.log(Delete);\n break;\n }\n};\n\nexport default handler;\n\nimport mongoose from 'mongoose'\n\nconst MONGO_URL = process.env.MONGO_URL\n\nif (!MONGO_URL) {\n throw new Error(\n 'Please define the MONGO_URL environment variable inside .env.local'\n )\n}\n\n/**\n * Global is used here to maintain a cached connection across hot reloads\n * in development. This prevents connections growing exponentially\n * during API Route usage.\n */\nlet cached = global.mongoose\n\nif (!cached) {\n cached = global.mongoose = { conn: null, promise: null }\n}\n\nasync function dbConnect() {\n if (cached.conn) {\n return cached.conn\n }\n\n if (!cached.promise) {\n const opts = {\n bufferCommands: false,\n }\n\n cached.promise = mongoose.connect(MONGO_URL, opts).then((mongoose) => {\n return mongoose\n })\n }\n cached.conn = await cached.promise\n return cached.conn\n}\n\nexport default dbConnect\n", "text": "Why do I keep getting this error when I run my dbConnect() function?\nimage649×527 24.7 KBThis is how I’m calling the dbConnect functionAnd this is where i create the function util/mongodb.jswhat am I doing wrong? I’ve been on this for days!", "username": "Anthony_Ezeh" }, { "code": "", "text": "This topic was automatically closed after 180 days. New replies are no longer allowed.", "username": "system" } ]
Mongo connection error with next.js
2022-10-14T09:53:50.055Z
Mongo connection error with next.js
2,772
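One issue visible in the handler as posted, independent of whatever caused the connection error: inside the GET branch the query result is assigned to a constant also named res, shadowing the Next.js response object, so res.status is no longer the response method. A minimal corrected sketch of that branch, keeping the rest of the file as written:

    case "GET":
      try {
        const users = await Users.find(); // renamed so it no longer shadows `res`
        res.status(200).json(users);
      } catch (error) {
        res.status(500).json(error);
      }
      break;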
null
[ "dot-net", "indexes", "performance" ]
[ { "code": "Builders<[MY CLASS]>.Filter.Eq(x => x.Id, [ID]);\nBuilders<[MY CLASS]>.Filter.In(x => x.Id, [MY COLLECTION]);\nvar builder = Builders<[MY CLASS]>.Filter;\nbuilder.Or(builder.Eq(x => x.Id, [ID 1]), builder.Eq(x => x.Id, [ID 2]), builder.Eq(x => x.Id, [ID 3]));\n", "text": "Hi, I’m using the .NET driver and I’m curious about how to make the best use of indexed fields. If I want to get a single document by its index, it’s a piece of cake and super fast:But what if I have a collection of potentially many IDs and want the filter to get all those documents? I’ve tried using in, but the slow performance makes me think that Mongo is just searching all documents rather than using the index:I’ve also tried oring multiple eqs. This seems like it might be faster, at least when I try it with a reasonably small number of IDs. But when I put in hundreds or thousands of IDs, the code crashes when I try to iterate the IAsyncEnumerable that is produced:The last option that comes to my mind would be to make multiple calls to the database for each ID and handle everything in code. I can’t imagine this would be a very performant option, especially as I’m applying other filtering rules as well as sorting / skipping / limiting. Is there another option I’m missing to tackle this in a Mongo filter?", "username": "Zachary_Anderson1" }, { "code": "InIn", "text": "That’s a good question. I will look into and see what I can find.I would expect the most performant way would be to use In to query for a number of documents by _id at the same time. In should take advantage of the index, but it would still have to fetch the contents of the documents one at a time (from the location pointed to by the index). Depending on the number of _id’s relative to the size of the collection it could even decide that a full collection scan is faster.There would be a limit to how many id’s you could query for at the same time. There is a finite limit on how big a query document can be, and the more _id’s you query for at the same time the larger the query document becomes.You say that “the code crashes” when you try to enumerate the results. Can you provide more details, such as what exception message you are seeing and how to reproduce?In the meantime I will begin my own attempts to reproduce this and measure performance.", "username": "Robert_Stam" }, { "code": "Builders<[MY CLASS]>.Filter.Or(ids.Select(id => filterBuilder.Eq(r => r.Id, id)))\nfind({ \\\"$or\\\" : [{ \\\"_id\\\" : ObjectId(\\\"123\\\") }, { \\\"_id\\\" : ObjectId(\\\"456\\\") }, { \\\"_id\\\" : ObjectId(\\\"789\\\") }] }).sort({ \\\"_id\\\" : 1 })\nvar enumerable = _collection.Find(...).ToAsyncEnumerable(cancellationToken);\nawait foreach (T item in enumerable.WithCancellation(cancellationToken).ConfigureAwait(false))\n{\n\t// do work...\n}\n{\"Command find failed: Error=2, Details='Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Message: errors \\u0005\\u0001 severity Error location \\u0010 start \\u0004 end \\u0004 code SC1030 message~ Tt\\u0019\\u0014 yPz\\u000e ߠ \\rg \\u001b$. < .\\u0010\\u0015 sP = u \\a e\\u0010 \\r\\n ey\\u001eD. 
< ] d\\u0010 -\\a m8 O x < nP \\r ˠx ,ϻ@÷{ & \\u001c^ g\\u0010\\u001d]\\u0006 ey\\u001e v ߠv MO eм\\u001df rP ^ s\\u0016 -\\a m8;mΧ g\\u0010\\u001d]\\u0006 m8 \\a pyy>O .\\r\\nActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a, Request URI: /apps/DocDbApp/services/DocDbServer10/partitions/a4cb4956-38c8-11e6-8106-8cdcd42c33be/replicas/1p/\n, RequestStats: Microsoft.Azure.Cosmos.Tracing.TraceData.ClientSideRequestStatisticsTraceDatum, SDK: Windows/10.0.22000 cosmos-netstandard-sdk/3.18.0);););.\"}\n Code: 2\n CodeName: \"BadValue\"\n Command: {{ \"find\" : \"items\", \"filter\" : { \"$or\" : [{ \"_id\" : ObjectId(\"123\") }, { \"_id\" : ObjectId(\"456\") }, { \"_id\" : ObjectId(\"789\") }] }, \"sort\" : { \"_id\" : 1 }, \"skip\" : 0, \"limit\" : 500 }}\n ConnectionId: {{ ServerId : { ClusterId : 1, EndPoint : \"Unspecified/localhost:10255\" }, LocalValue : 3, ServerValue : \"663943947\" }}\n Data: {System.Collections.ListDictionaryInternal}\n ErrorLabels: Count = 0\n ErrorMessage: \"Error=2, Details='Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Message: errors \\u0005\\u0001 severity Error location \\u0010 start \\u0004 end \\u0004 code SC1030 message~ Tt\\u0019\\u0014 yPz\\u000e ߠ \\rg \\u001b$. < .\\u0010\\u0015 sP = u \\a e\\u0010 \\r\\n ey\\u001eD. < ] d\\u0010 -\\a m8 O x < nP \\r ˠx ,ϻ@÷{ & \\u001c^ g\\u0010\\u001d]\\u0006 ey\\u001e v ߠv MO eм\\u001df rP ^ s\\u0016 -\\a m8;mΧ g\\u0010\\u001d]\\u0006 m8 \\a pyy>O .\\r\\nActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a, Request URI: /apps/DocDbApp/services/DocDbServer10/partitions/a4cb4956-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, Re\nquestStats: Microsoft.Azure.Cosmos.Tracing.TraceData.ClientSideRequestStatisticsTraceDatum, SDK: Windows/10.0.22000 cosmos-netstandard-sdk/3.18.0);););\"\n HResult: -2146233088\n HelpLink: null\n InnerException: null\n Message: \"Command find failed: Error=2, Details='Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Message: errors \\u0005\\u0001 severity Error location \\u0010 start \\u0004 end \\u0004 code SC1030 message~ Tt\\u0019\\u0014 yPz\\u000e ߠ \\rg \\u001b$. < .\\u0010\\u0015 sP = u \\a e\\u0010 \\r\\n ey\\u001eD. 
< ] d\\u0010 -\\a m8 O x < nP \\r ˠx ,ϻ@÷{ & \\u001c^ g\\u0010\\u001d]\\u0006 ey\\u001e v ߠv MO eм\\u001df rP ^ s\\u0016 -\\a m8;mΧ g\\u0010\\u001d]\\u0006 m8 \\a pyy>O .\\r\\nActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a, Request URI: /apps/DocDbApp/services/DocDbServer10/partitions/a4cb4956-38c8-11e6-8106-8cdcd42c33be/\nreplicas/1p/, RequestStats: Microsoft.Azure.Cosmos.Tracing.TraceData.ClientSideRequestStatisticsTraceDatum, SDK: Windows/10.0.22000 cosmos-netstandard-sdk/3.18.0);););.\"\n Result: {{ \"ok\" : 0.0, \"errmsg\" : \"Error=2, Details='Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a; Reason: (Message: errors \\u0005\\u0001 severity Error location \\u0010 start \\u0004 end \\u0004 code SC1030 message~ Tt\\u0019\\u0014 yPz\\u000e ߠ \\rg \\u001b$. < .\\u0010\\u0015 sP = u \\u0007 e\\u0010 \\r\\n ey\\u001eD. < ] d\\u0010 -\\u0007 m8 O x < nP \\r \\u02e0x ,ϻ@÷{ & \\u001c^ g\\u0010\\u001d]\\u0006 ey\\u001e v ߠv MO eм\\u001df rP ^ s\\u0016 -\\u0007 m8;mΧ g\\u0010\\u001d]\\u0006 m8 \\u0007 pyy>O .\\r\\nActivityId: 86a080ef-3cd2-426c-bdbc-a674979c674a, Request URI: /apps/DocDbApp/services/DocDbServer10/partitions/a4cb4956-38c\n8-11e6-8106-8cdcd42c33be/replicas/1p/, RequestStats: Microsoft.Azure.Cosmos.Tracing.TraceData.ClientSideRequestStatisticsTraceDatum, SDK: Windows/10.0.22000 cosmos-netstandard-sdk/3.18.0);););\", \"code\" : 2, \"codeName\" : \"BadValue\" }}\n Source: \"MongoDB.Driver.Core\"\n StackTrace: \" at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.ProcessResponse(ConnectionId connectionId, CommandMessage responseMessage)\\r\\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.<ExecuteAsync>d__20.MoveNext()\\r\\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.<ExecuteProtocolAsync>d__20`1.MoveNext()\\r\\n at MongoDB.Driver.Core.Operations.RetryableReadOperationExecutor.<ExecuteAsync>d__3`1.MoveNext()\\r\\n at MongoDB.Driver.Core.Operations.ReadCommandOperation`1.<ExecuteAsync>d__8.MoveNext()\\r\\n at MongoDB.Driver.Core.Operations.FindOperation`1.<ExecuteAsync>d__129.MoveNext()\\r\\n at MongoDB.Driver.Core.Operations.FindOperation`1.<ExecuteAsync>d__128.MoveNext()\\r\\n at MongoDB.Driver.OperationExecutor.<ExecuteReadOperationAsync>d__3`1.MoveNext()\\r\\n at MongoDB.Driver.MongoCollectionImpl`1.<ExecuteReadOperationAsync>d__99`1.MoveNext()\\r\\n at MongoDB.Driver.MongoCollectionImpl`1.<UsingImplicitSessionAsync>d__107`1.MoveNex\nt()\\r\\n at DataCollection.Data.Extensions.IAsyncCursorSourceExtensions.<ToAsyncEnumerable>d__0`1.MoveNext() in C:\\\\Source\\\\sawtooth\\\\DataCollection\\\\src\\\\DataCollection.Data\\\\Extensions\\\\IAsyncCursorSourceExtensions.cs:line 12\\r\\n at DataCollection.Data.Extensions.IAsyncCursorSourceExtensions.<ToAsyncEnumerable>d__0`1.System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult(Int16 token)\\r\\n at System.Threading.Tasks.ValueTask`1.get_Result()\\r\\n at System.Runtime.CompilerServices.ConfiguredValueTaskAwaitable`1.ConfiguredValueTaskAwaiter.GetResult()\\r\\n at DataCollection.Business.Helpers.GetRespondentsHelper.<ToListAsync>d__20`1.MoveNext() in 
C:\\\\Source\\\\sawtooth\\\\DataCollection\\\\src\\\\DataCollection.Business\\\\Helpers\\\\GetRespondentsHelper.cs:line 213\\r\\n at DataCollection.Business.Helpers.GetRespondentsHelper.<ToListAsync>d__20`1.MoveNext() in C:\\\\Source\\\\sawtooth\\\\DataCollection\\\\src\\\\DataCollection.Business\\\\Helpers\\\\GetRespondentsHelper.cs:line 213\"\n TargetSite: {TCommandResult ProcessResponse(MongoDB.Driver.Core.Connections.ConnectionId, MongoDB.Driver.Core.WireProtocol.Messages.CommandMessage)}\n", "text": "Thanks for the response. Assuming “ids” is a collection of strings, this is how I built my filter in C#:When I run ToString on my IFindFluent, I get what I believe to be the final Mongo query:(But with real object IDs and a whole lot more of them.)I then call ToAsyncEnumerable and try to iterate the enumerable:Everything works when I use 500 IDs, but 1000 IDs results in this exception being throw (simplified IDs like before):I assumed such a cryptic error message wouldn’t be of any help, but perhaps it means something to you.", "username": "Zachary_Anderson1" }, { "code": "explain(\"executionStats\")", "text": "Welcome to the MongoDB Community @Zachary_Anderson1 !RequestStats: Microsoft.Azure.Cosmos.Tracing.TraceData.ClientSideRequestStatisticsTraceDatum, SDK: Windows/10.0.22000 cosmos-netstandard-sdk/3.18.0);););\"Cosmos DB’s emulation of MongoDB is an entirely independent server implementation of a subset of MongoDB features for the claimed MongoDB API version. The error message you are encountering is specific to Cosmos DB.I would try running explain(\"executionStats\") to see if there is any insight on the plan execution for your query, but since the backend isn’t MongoDB you will need different expertise to understand how to improve performance.For more insight on Cosmos’ indexing behaviour, I suggest looking into Stack Overflow (Newest 'azure-cosmosdb+indexing' Questions - Stack Overflow) and the Cosmos DB Indexing documentaiton.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I tested your scenario against MongoDB Version 6.0.1 and using In is performant with any number of _ids. In fact it is more performant (in terms of documents/second) the more_ids are involved.I got the following results:1 _id : 28 ms (35/sec)\n10 _ids : 0 ms (17,633/sec)\n100 _ids : 1 ms (84,160/sec)\n500 _ids : 3 ms (139,481/sec)\n1000 _ids : 9 ms (110,989/sec)\n100,000 _ids : 724 ms (138,013/sec)I attribute the 28 ms for 1 _id to some sort of warming up of the server (e.g. loading the index into memory).Reading 10 _ids didn’t actually take 0 seconds. It’s a rounding error (it took close to 0).", "username": "Robert_Stam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What's the most performant way to get multiple documents by an indexed field?
2022-10-12T19:08:41.770Z
What's the most performant way to get multiple documents by an indexed field?
4,045
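The driver-level discussion above comes down to a single server-side query shape; a mongosh equivalent can confirm that an $in on _id uses the index (the collection name and _id values are placeholders):

    const ids = [1, 2, 3]; // in practice, the batch of _id values to fetch
    const plan = db.items.find({ _id: { $in: ids } })
      .explain("executionStats").queryPlanner.winningPlan;
    printjson(plan); // expect an IXSCAN on the _id index rather than a COLLSCAN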
null
[ "replication" ]
[ { "code": "- rs.add(\"ip:port\", priority:1, arviter:true) #mongodb have 2 arbiters\n- rs.remove(\"ip:port\") # cant remove\n- rs.reconfig() # cant remove\n", "text": "In pruduct enviroment, when we use PSA architecture and arbiter node cant work, we usually first add arbiter and then remove the bad one. But when we remove the old arbiter, we will get error that Rejcting reconfig where the new config has PSA topology and the secondary is electable, but the old config contains only one writable node. Even though using rs.reconfigForPSAS for next sets, we also get error.get error step:", "username": "liu_yan" }, { "code": "", "text": "Show us the exact command you ran and the error you got\nArbiter should be added with rs.addArb()\nCheck mongo documentation for exact syntax", "username": "Ramachandra_Tummala" } ]
Why add arbiter then we get error?
2022-10-14T05:55:14.096Z
Why add arbiter then we get error?
1,080
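A sketch of the helper syntax Ramachandra points to, with placeholder hosts; whether to remove the dead arbiter before or after adding its replacement depends on the server version's reconfig rules, which is exactly what the rejected-reconfig error in the question is about, so the rs.reconfigForPSASet guidance mentioned there still applies.

    // Remove the unreachable arbiter by its host:port ...
    rs.remove("oldarbiter.example.net:27017");
    // ... and add the replacement with the dedicated helper instead of rs.add(..., arbiter: true)
    rs.addArb("newarbiter.example.net:27017");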
https://www.mongodb.com/…4_2_1024x512.png
[ "aggregation", "dot-net", "production" ]
[ { "code": "", "text": "This is the general availability release for the 2.18.0 version of the driver.The main new features in 2.18.0 include:The full list of JIRA issues resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.18.0%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:", "username": "Robert_Stam" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
.NET Driver 2.18.0 Released
2022-10-14T20:08:31.408Z
.NET Driver 2.18.0 Released
2,482
null
[ "queries", "data-modeling" ]
[ { "code": "", "text": "A sample query (for the sample data provided below)\ndb.modelA.find ( { “relations.grpId”: “Rebels” }, { “name”: 1, controllers: { $elemMatch: { qKey: relations.relevancy } } } )How can I dynamically refer to a field value in the above $elemMatch in projection clause.db.modelAA.insertOne({\"_id\": “1234”, “name”: “Chewbacca”, “controlKeys”: [“C012”, “M088”], “controllers”: [{“qKey”: “C012”, “status”: “Active”, “autoPay”: true, “amtDue”: 20}, {“qKey”: “M088”, “status”: “Active”, “autoPay”: false, “amtDue”: 60}], “relations”: [{“basis”: “motto”, “grpId”: “Rebels”, “relevancy”: “M088”}] })db.modelAA.insertOne({\"_id\": “6789”, “name”: “Luke Skywalker”, “controlKeys”: [“C977”, “M977”], “controllers”: [{“qKey”: “C977”, “status”: “Inactive”}, {“qKey”: “M977”, “status”: “Active”, “autoPay”: true, “amtDue”: 55}], “relations”: [{“basis”: “motto”, “grpId”: “Rebels”, “relevancy”: “M977”}, {“basis”: “clan”, “grpId”: “Jedi”, “relevancy”: “C977”}] })db.modelAA.insertOne({\"_id\": “AB7R”, “name”: “Han Solo”, “controlKeys”: [“M177”], “controllers”: [{“qKey”: “M177”, “status”: “Active”, “autoPay”: true, “amtDue”: 35}], “relations”: [{“basis”: “motto”, “grpId”: “Rebels”, “relevancy”: “M177”}, {“basis”: “clan”, “grpId”: “Jedi”, “relevancy”: “M177”}] })", "username": "Shyam_Potnuri" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and update your sample documents so we can cut-n-paste into our systems.If that projection can be done within find() it will probably involve $expr with $$ROOT to refer to fields outside the controllers array. But since relations is also an array I am not too sure. In particular, with _id:6789 where you have 2 qKey matching relevancy.I think the change are better with a $project aggregation with $map on controllers.", "username": "steevej" } ]
Referencing a doc field in the $elemMatch criteria value in the projection
2022-10-12T14:25:40.263Z
Referencing a doc field in the $elemMatch criteria value in the projection
2,688
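A sketch of the aggregation route steevej suggests ($project over the controllers array), assuming the modelAA collection and field names from the sample inserts; it keeps each controller whose qKey equals the relevancy of the matching relation:

    db.modelAA.aggregate([
      { $match: { "relations.grpId": "Rebels" } },
      { $project: {
          name: 1,
          controllers: { $filter: {
            input: "$controllers",
            as: "c",
            cond: { $in: [
              "$$c.qKey",
              { $map: {
                input: { $filter: { input: "$relations", as: "r",
                                    cond: { $eq: ["$$r.grpId", "Rebels"] } } },
                as: "r",
                in: "$$r.relevancy"
              } }
            ] }
          } }
      } }
    ]);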
https://www.mongodb.com/…_2_1024x576.jpeg
[ "atlas", "api", "hyderabad-mug" ]
[ { "code": "Staff Software Engineer, IntuitDeveloper Relations Engineer, OrkesStaff Developer Advocate, MongoDB", "text": "\nHyderabad MUG1920×1080 278 KB\nHyderabad, MongoDB User Group, and Orkes.io, Hyderabad Group are excited to bring our first meetup in Hyderabad.The event will kick off with meets and greets and lead into Cherish Santoshi, Developer Relations Engineer at Orkes will host an exciting session on building resilient systems with the power of Orchestration. Later @Megha_Arora , Developer Advocate at MongoDB, will discuss how MongoDB Atlas, the cloud offering by MongoDB, along with its collection of services integrates nicely with your data. Towards the end, Siben Nayak will deliver a session on Observability in Distributed Systems.We will also have fun Networking Time to meet some of the developers, architects, and experts in the region. Not to forget there will also be Swags , and Lunch. If you wish to speak at our meetup, please submit your session here We are looking for a Leader to lead Hyderabad, MongoDB User Group,. In case you are interested please fill out this form or reach out to @Megha_Arora at the event Event Type: In-Person\nLocation: 91springboard Hitech City, Kondapur, HyderabadHitech Kondapur Coworking space F948+W7 · 080 4748 9191,\nLandmark: Barbeque Nation, Opposite Sarath City Capital Mall! RSVP Here: Login to Meetup | Meetup\nIMG_20210407_2324571920×1920 323 KB\nStaff Software Engineer, IntuitI am a software architect, developer, team lead, educator, and mentor based out of India. I love building new stuff, solving complex problems, scaling systems, and ensuring that I lead my team to success. I’ve been in the software industry for about 12 years, progressing from an entry-level trainee to an architect and technical leader in my current role. I’m currently a Staff Software Engineer at Intuit and have worked with Amazon, McAfee, and TCS in the past.Developer Relations Engineer, Orkes–\nimage512×512 38.6 KB\nStaff Developer Advocate, MongoDBJoin the Hyderabad User Group to stay updated with upcoming meetups and discussions in Hyderabad.", "username": "Harshit" }, { "code": "", "text": "hi All,Looking forward to meeting you tomorrow!The event starts at 12 pm noon and the exact address is 91springboard - Hitech Kondapur Coworking space F948+W7 · 080 4748 9191, Landmark : Barbeque Nation, Opposite Sarath City Capital Mall!See you there!", "username": "Megha_Arora" } ]
MUG Hyderabad: Orchestration and Microservices
2022-10-05T11:53:55.173Z
MUG Hyderabad: Orchestration and Microservices
3,591
null
[ "flutter" ]
[ { "code": "", "text": "Hellooo, i have two questions :1- In flutter beta there is no “API Key” yet in realm package, can i use single username/password for multiple users ? (thousands users access my app through one and same username/password).2-I am using “App Services” (realm) in my mobile app to connect to the mongodb, in order to connect to the “App Services” i need to type the “App ID” and “API Key” to access the “App Services”, is it secure to type both values inside my mobile app ?", "username": "abdelrahman_mokhtar" }, { "code": "", "text": "Hi @abdelrahman_mokhtar,can i use single username/password for multiple users ?There’s nothing preventing you to do that, and Device Sync will correctly work on each device separately: please note that, in perspective, it may not be a good idea, as you won’t be able to distinguish among clients and track their activity, should any of them need support.i need to type the “App ID” and “API Key” to access the “App Services”, is it secure to type both values inside my mobile app ?In general, communication happens over HTTPS, so it’s as secure as it could be: that said, the two values still need to be inside your app, so a determined attacker can discover and access them, unless you add an additional level by encrypting the values, and decrypting just before use.More in detail, App ID isn’t that valuable, unless you leave the app with glaring security holes like Anonymous Authentication or Developer Mode: you can’t do much with it alone. The API Key, however, is a different story: the two typical use cases areLeaving remote clients authenticated with API Key the privileges to modify anything in the backend is definitely a security problem.", "username": "Paolo_Manna" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to secure app id & API Key & username/password... etc
2022-10-13T15:51:50.635Z
How to secure app id & API Key & username/password… etc
1,603
null
[ "stitch" ]
[ { "code": "const docs = context.services.get(\"mongodb-atlas\")\n .db(\"bc-notes-db\")\n .collection(\"notes\")\n .find({})\n .sort({shortnote:1})\n .skip(1)\n .limit(9);\n.skip(1)\n\"error\": \"TypeError: 'skip' is not a function\",\n", "text": "I’m building an API using Stitch. Using a simple GET request and obtaining a set of documents from the DB. See code below:I get an error using the linestating thatAny ideas how I can implement pagination in this situation? or why this doesn’t work?", "username": "engineD" }, { "code": "const docs = context.services.get(\"mongodb-atlas\")\n .db(\"bc-notes-db\")\n .collection(\"notes\")\n .aggregate([\n {\"$skip\": 1},\n {\"$limit\": 9}\n ])\n .toArray();\n", "text": "In order to use “skip” needed to use aggregate instead of find? see below…", "username": "engineD" }, { "code": "", "text": "Hi,\nIn my code I don’t use skip, because I read somewhere it is inefficient. Instead you can use find by _id and use limit. The second approach described in this article: Fast and Efficient Pagination in MongoDB | CodementorHope it helps.", "username": "Dimitar_Kurtev" }, { "code": "", "text": "Here is the performance test: Benchmark Pagination Strategies in MongoDB", "username": "Dimitar_Kurtev" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Stitch pagination
2020-04-19T00:06:37.510Z
Stitch pagination
3,315
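A minimal sketch of the range-based approach from the articles Dimitar links, against the same notes collection as the function above; the last _id of each page is fed into the next request instead of skip:

    // First page
    const page1 = await context.services.get("mongodb-atlas")
      .db("bc-notes-db").collection("notes")
      .find({}).sort({ _id: 1 }).limit(9).toArray();

    // Next page: everything after the last _id already seen
    const lastId = page1[page1.length - 1]._id;
    const page2 = await context.services.get("mongodb-atlas")
      .db("bc-notes-db").collection("notes")
      .find({ _id: { $gt: lastId } }).sort({ _id: 1 }).limit(9).toArray();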
null
[ "atlas-functions" ]
[ { "code": "doMagicpending promise returned that will never resolve/rejectexports = async function () {\n const { doMagic } = require('@my/dependency');\n \n const mongodb = context.services.get(\"xxx\")\n const db = mongodb.db(\"xxx\")\n \n const result = await doMagic(db);\n return result\n};\ndoMagicpending promise returned that will never resolve/reject", "text": "Hi everyone! I faced with some trouble in Atlas function which I totally can’t debug and even normally google it. The case in I put all my business logic into NPM package and publish it, then I’m importing it and using in function. In code below doMagic just an Async Function which do some stuff with collections, but when I’m running function I’m getting pending promise returned that will never resolve/reject as a result. Any clues what’s wrong or in which direction should I dig?In more details: doMagic is an async function which starts a bunch of js-generators and executes them step by step, simultaneously collecting all results, awaits until they all will stops. Then returns results map. Nothing very extraordinary, but a lot of algo-stuff. I don’t wanna copy-paste all this stuff into function body in UI to debug the issue step-by-step, I’d like to know what pending promise returned that will never resolve/reject mean at all?", "username": "Trdat_Mkrtchyan" }, { "code": "doMagicawait doMagic()doMagicpending promise returned that will never resolve/rejectawaitdoMagic", "text": "Hi @Trdat_Mkrtchyan welcome to the community!The function you posted looks pretty straightforward, so I doubt the message was caused by that function. Rather, it’s likely was bubbled up from the doMagic function. Presumably this error was thrown from the await doMagic() line there. However without access to the actual doMagic function, my best educated guess follows Just taking the error message at face value: pending promise returned that will never resolve/reject I think it’s trying to tell you that you have some Promise that was never resolved, and the Atlas function was awaiting on it.As a short experiment, could you perhaps load a simplified doMagic function (e.g. just echo the parameters) into the Atlas function, and see if the error still persists? If the error disappears, then perhaps you can add more complexity into the function to see at what point you’re not resolving/rejecting a Promise.Best regards\nKevin", "username": "kevinadi" }, { "code": "doMagicnew Promise((resolve, reject) => collection.find({}, (err, results) => {...}));\nconst res = await collection.find({})", "text": "Hi @kevinadiThanks for response. I found the cause of problem. After more detailed debug I found out that inside of doMagic I used callbacks notation for retrieving data from DB, such as:Which was the origin of error. I guess mongo driver in function’s context doesn’t allow to do that. When I replaced it with const res = await collection.find({}), the error disappeared.", "username": "Trdat_Mkrtchyan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Functions: > pending promise returned that will never resolve/reject
2022-10-13T11:29:21.120Z
Atlas Functions: > pending promise returned that will never resolve/reject
2,526
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "vscode" ]
[ { "code": "", "text": "Below is MongoDB for VS CodeIs there a tool of MongoDB for Visual Studio?", "username": "Ping_Pong" }, { "code": "", "text": "yeah you can use plugin in visual studio to connect to mogo DB for details on installation refer to https://docs.mongodb.com/mongodb-vscode/install/", "username": "krishnapankit" }, { "code": "", "text": "@krishnapankit The link you provided is already in my OP, which is not what I want.I am looking for MongoDB for Visual Studio, not VS Code.", "username": "Ping_Pong" }, { "code": "", "text": "Hello, I’m also looking for MongoDB for Visual Studio and not VS Code, if someone has a tip I would be grateful. Thanks", "username": "Filipe_Cordella" } ]
MongoDB for Visual Studio
2021-07-08T21:24:09.852Z
MongoDB for Visual Studio
4,537
null
[ "atlas-cluster", "serverless", "php" ]
[ { "code": "Could not establish stream for node pe-1-REDACTED.REDACTED.mongodb.net:27017: [socket timeout calling hello on 'pe-1-REDACTED.REDACTED.mongodb.net:27017']", "text": "We’re trying to move an application over from an M10 cluster to Serverless, connecting using an AWS Private Endpoint. We had the Private Endpoint working perfectly with the M10 setup, but it just times out while connecting to the Serverless instance with Could not establish stream for node pe-1-REDACTED.REDACTED.mongodb.net:27017: [socket timeout calling hello on 'pe-1-REDACTED.REDACTED.mongodb.net:27017']We’re at a loss as to what to try. Any ideas?", "username": "Jaik_Dean" }, { "code": "tlsAllowInvalidCertificates", "text": "This looks like a TLS negotiation issue, as we are able to connect if we set the tlsAllowInvalidCertificates URI option to true. I don’t understand why it’s resulting in the error in the original post, rather than throwing an error about the certificate.", "username": "Jaik_Dean" }, { "code": "", "text": "Following this thread. Really frustrating issue !", "username": "Mike_Corlett" }, { "code": "", "text": "Hi @Jaik_Dean,\nThis does not sound like the desired behavior. Please can you please open a support ticket so we can look into your issue?Sincerely,\nVishal\nMongoDB Atlas Serverless PM team", "username": "Vishal_Dhiman" }, { "code": "tlsDisableCertificateRevocationChecktlsDisableOCSPEndpointCheckmongoshellmongoshell", "text": "With some more digging and pointers from @Jason_Tran, we’ve found out what’s going on here.The PHP driver is calling the certificate’s OCSP endpoint during TLS negotiation, to check if the certificate authority has been revoked. Our network has outbound port 80 (HTTP) traffic blocked, so this check was failing. We are able to disable this check with either of these connection options and have a working connection:What’s interesting is that connecting using mongoshell works even with port 80 blocked. I don’t know where the underlying difference between the PHP driver and mongoshell is that determines whether the OCSP check is performed. It’s getting to the limit of my knowledge around responsibilities and interactions of the different layers of drivers, OpenSSL etc.", "username": "Jaik_Dean" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Serverless Private Endpoint in AWS times out
2022-10-11T10:21:29.873Z
Serverless Private Endpoint in AWS times out
2,833
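For reference, the two options named in the resolution above attach to the connection string like any other URI option; the host and credentials below are placeholders, and drivers differ in which of the two they honour, so the driver's TLS documentation is the authority here.

    // Skips the OCSP endpoint lookup that needs outbound port 80
    const uri = "mongodb+srv://user:pass@pe-1-example.example.mongodb.net/?tlsDisableOCSPEndpointCheck=true";
    // Alternative named in the thread: ?tlsDisableCertificateRevocationCheck=true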
null
[ "aggregation", "queries" ]
[ { "code": "{\t\n\t\"_id\" : \"1\",\n\t\"name\" : \"Kathir\",\n\t\"age\" : \"28\",\n\t\"priority\" : null\n},\n{\t\n\t\"_id\" : \"12\",\n\t\"name\" : \"Raja\",\n\t\"age\" : \"32\",\n\t\"priority\" : \"high\"\n},\n{\t\n\t\"_id\" : \"34\",\n\t\"name\" : \"Sarath\",\n\t\"age\" : \"24\",\n\t\"priority\" : \"medium\"\n},\n{\t\n\t\"_id\" : \"08\",\n\t\"name\" : \"Chandru\",\n\t\"age\" : \"24\",\n\t\"priority\" : \"low\"\n},\n", "text": "String Sort by Custom sort in MongoDB?how to sort this data in mongo sorting by priority as [“high”, “medium”, “low”, null] - i need this order,\nplease help me friends !..", "username": "Kathiresan_S" }, { "code": "db.collection.aggregate([{\n $addFields: {\n sortPriority: {\n $switch: {\n branches: [\n {\n 'case': {\n $eq: [\n '$priority',\n 'high'\n ]\n },\n then: 3\n },\n {\n 'case': {\n $eq: [\n '$priority',\n 'medium'\n ]\n },\n then: 2\n },\n {\n 'case': {\n $eq: [\n '$priority',\n 'low'\n ]\n },\n then: 1\n }\n ],\n 'default': 0\n }\n }\n }\n}, {\n $sort: {\n sortPriority: -1\n }\n}])\n{ _id: '12',\n name: 'Raja',\n age: '32',\n priority: 'high',\n sortPriority: 3 }\n{ _id: '34',\n name: 'Sarath',\n age: '24',\n priority: 'medium',\n sortPriority: 2 }\n{ _id: '08',\n name: 'Chandru',\n age: '24',\n priority: 'low',\n sortPriority: 1 }\n{ _id: '1',\n name: 'Kathir',\n age: '28',\n priority: null,\n sortPriority: 0 }\n", "text": "Hi @Kathiresan_S ,You can use a switch case statement to have sortable values based on your logic:This will sort as the result would be:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "thanks a lot @Pavel_Duchovny", "username": "Kathiresan_S" } ]
String Sort by Custom sort in MongoDB?
2022-10-13T10:07:38.921Z
String Sort by Custom sort in MongoDB?
3,481
null
[]
[ { "code": "", "text": "Hi , good time\ni have big problem during last 2 month . since 2 month ago sometime we have huge connection ( more than 20K) as unusual on primary and shard0\nas we have checked there is not any abnormal TPS or queries or identified time pattern.\nnow i have below enquiries :", "username": "Soroush_Sadegh_Bayan" }, { "code": "db.getCmdLineOpts()db.version()", "text": "Hi @Soroush_Sadegh_Bayan and welcome to the MongoDB community!!is it related to version ?The current version you are using is quiet old (released around 2017) and there have been security issues related to this version.Hence, the recommendation would be to update to the oldest supported version which is 4.4. Please refer to the legacy support documentation for more.the main error is REST SERVICE INVOCATION , what’s it?Can you please share the complete error message that you are seeing in the deployment?1 primary and 2 secondaryCan you also share more details on how the does your deployment looks like?Also, please share the output for db.getCmdLineOpts() and db.version() on the affected node.Let us know if you have any further questions.Best Regards\nAasawari", "username": "Aasawari" } ]
Sudden increase in connections on mongod
2022-10-10T07:42:51.944Z
Sudden increase in connections on mongod
1,750
null
[ "aggregation", "queries", "node-js", "data-modeling" ]
[ { "code": "", "text": "Imagine I have a large collection of articles (say 20,000 - which may not be that large). Each article has a rating that changes as people interact with it - view it, share it, etc. The front page of my website will present the top 500 rated articles - so each time a user opens the front page they will extract these from the database.Would it be better to serve this data using:Letting each user query the entire collection to extract the articles sorted by rating and limited to 500 returned articles. I presume this requires the database to process the query by searching the entire collection until the top 500 articles are found - this would be repeated for each person requesting the data. I think this is essentially a order-n process.Periodically, say every hour, have the backend process the collection and find the 500 top articles, but store their IDs in a separate collection: say called “top_articles”. Then, when each user access the front page, they would instead request the “top_articles” and then POPULATE the results with the actual article data before returning this list. Theoretically the users wouldn’t be constantly asking the database to sort the articles based on rating because that would be done once every hour and hence reduce the number of times the users do it - effectively preventing constant sorting that seems duplicative.I’m trying to understand which is a better design? Or if you have an even better suggestion? I realize that if I am ranking the articles by their interactivity numbers (shares, likes, etc), those things would change during the hour and hence I would theoretically have stale rankings until the next hour where the backend does the processing - that’s ok.", "username": "Z_Knight_Z" }, { "code": "{ _id : ARTICLE_ID,\n title : \"SOME TITLE\",\n rating : 10,\n...\n }\n{rating : -1}db.articles.find({}).sort({rating : -1}).limit(500);\n", "text": "Hi @Z_Knight_Z ,If you index your rating field (the field in your article collection that indicate its rating) and have it sorted in descanding order , a query fetching the top 500 documents will only need to query the first indexed 500 (no colleciton scan)For example if my article collection looks like the following:Indexing {rating : -1} will allow me to quickly get the top 500 articles:Is that what you’ve been looking for?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you for this - it helps a lot. I think this will work assuming me indexing already on title/slug (for search purposes) won’t interfere. Although I would still be curious to know which of my original options would be more efficient - or are you also saying indexing is more efficient too? What happens if I had say 100,000 items in the collection, instead of 20,000 - would indexing still be advisable?", "username": "Z_Knight_Z" }, { "code": "", "text": "The bigger the collection the bigger the index , this makes the chance of the entire index to fit into memory leas likely (but it depnds on the ram as one field index in a int should not be large).Otherwise 20k and 100k with an index still scan the same number of entries (500) …Therefore this is the advisable way.Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Read entire collection with each query OR pre-process data into a separate smaller collection?
2022-10-13T05:12:54.282Z
Read entire collection with each query OR pre-process data into a separate smaller collection?
1,771
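A minimal pymongo sketch of the indexing approach recommended in the thread above. The collection and field names (articles, rating) come from the example in the discussion; the database name and connection string are placeholders, not from the original posts.

```python
import pymongo

# Placeholder connection string; replace with your own deployment.
client = pymongo.MongoClient("mongodb://localhost:27017")
articles = client["blog"]["articles"]

# One-time setup: index rating in descending order so a top-N query
# only walks the first N index entries instead of scanning the collection.
articles.create_index([("rating", pymongo.DESCENDING)])

# Per request: fetch the current top 500 articles by rating.
top_500 = list(
    articles.find({}, {"title": 1, "rating": 1})
    .sort("rating", pymongo.DESCENDING)
    .limit(500)
)
```

Because the sort matches the index order, the server can satisfy the query from the index alone regardless of whether the collection holds 20,000 or 100,000 documents.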
null
[ "node-js", "mongodb-shell" ]
[ { "code": "", "text": "How to Fetch MongoDB CPU usage and Disk utilization using Node.js or mongosh", "username": "Santhosh_V" }, { "code": "npm", "text": "Welcome to the MongoDB Community @Santosh_Salvaji !Metrics like CPU usage and disk utilisation come from the host environment MongoDB processes are running in, so you will have to look into npm packages or external tools you can invoke.For example:I can’t vouch for how well these packages work, but they might be helpful starting points.If you happen to be using MongoDB Atlas, it surfaces many available metrics via UI (including historical charts) as well as the Atlas Administration API. The Atlas control plane collects host resource metrics including CPU, disk, memory, and network utilisation as well as MongoDB metrics such as connections, replication, and operation execution time.For more relevant suggestions on monitoring process resources via Node.js, I suggest searching or asking on StackOverflow: Newest 'node.js' Questions - Stack Overflow.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X\nHow to get the status of CPU usage and Disk utilisation through query/command for the cluster running on Mongodb Atlas.\nI just want to make sure that CPU & Disk utilisation are under threshold before hitting Query/Command.", "username": "Santhosh_V" } ]
Finding CPU usage, Disk Utilization by using mongosh or nodejs
2022-10-13T05:36:43.758Z
Finding CPU usage, Disk Utilization by using mongosh or nodejs
2,044
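A rough sketch of what can be read from the database versus the host, assuming a recent server version. dbStats reports filesystem usage for the volume holding the data directory, serverStatus reports MongoDB-side counters, and host CPU has to come from the operating system (here via the third-party psutil package, which is only meaningful when the script runs on the database host itself). Names and thresholds are illustrative.

```python
import pymongo
import psutil  # third-party package; only useful when run on the DB host

client = pymongo.MongoClient("mongodb://localhost:27017")

# dbStats includes fsUsedSize / fsTotalSize for the dbPath volume.
stats = client["test_db"].command("dbStats")
used, total = stats["fsUsedSize"], stats["fsTotalSize"]
print(f"disk used: {used / 1e9:.1f} GB of {total / 1e9:.1f} GB")

# serverStatus exposes MongoDB-side metrics (memory, connections, etc.).
status = client.admin.command("serverStatus")
print("resident memory (MB):", status["mem"]["resident"])

# Host CPU utilisation comes from the OS, not from MongoDB.
print("host CPU %:", psutil.cpu_percent(interval=1))
```

For Atlas clusters, the same kind of data is exposed through the Atlas UI and the Atlas Administration API rather than through a database command.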
null
[ "queries", "replication", "java", "sharding" ]
[ { "code": "mongodb://mongo0.example.com,mongo1.example.com,mongo2.example.comMongoClientSettings settings = MongoClientSettings.builder()\n .readPreference(READ_PREFERENCE)\n .applyConnectionString(\"mongodb://mongo0.example.com,mongo1.example.com,mongo2.example.com\")\n.build();\n\ncom.mongodb.client.MongoClient mongoClient= MongoClients.create(settings);\nmongoClient.getDatabase(\"test_db\").runCommand(new Document(\"buildinfo\", 1)).getString(\"version\")Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=[{address=mongo0.example.com:27010, type=REPLICA_SET_SECONDARY, TagSet{[Tag{name='use', value='prod1'}]}, roundTripTime=1.7 ms, state=CONNECTED}, {address=mongo1.example.com:27010, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongo1.example.com:27010}, caused by {java.net.UnknownHostException: mongo1.example.com:27010}}]\nat com.mongodb.internal.connection.BaseCluster.createTimeoutException(BaseCluster.java:408)\n at com.mongodb.internal.connection.BaseCluster.selectServer(BaseCluster.java:123)\n at com.mongodb.internal.connection.AbstractMultiServerCluster.selectServer(AbstractMultiServerCluster.java:54)\n at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:110)\n at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:106)\n at com.mongodb.binding.ClusterBinding.getReadConnectionSource(ClusterBinding.java:93)\n at com.mongodb.client.internal.ClientSessionBinding.getReadConnectionSource(ClientSessionBinding.java:82)\n at com.mongodb.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:461)\n at com.mongodb.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:203)\n at com.mongodb.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:198)\n at com.mongodb.operation.CommandReadOperation.execute(CommandReadOperation.java:59)\n at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:196)\n at com.mongodb.client.internal.MongoDatabaseImpl.executeCommand(MongoDatabaseImpl.java:194)\n at com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:163)\n at com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:158)\n at com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:148)\nCaused by: class java.net.UnknownHostException: mongo1.example.com: **Name or service not known**\n java.net.Inet4AddressImpl::lookupAllHostAddr(Inet4AddressImpl.java::-2)\n java.net.InetAddress$PlatformNameService::lookupAllHostAddr(InetAddress.java::929)\n java.net.InetAddress::getAddressesFromNameService(InetAddress.java::1519)\n java.net.InetAddress$NameServiceAddresses::get(InetAddress.java::848)\n java.net.InetAddress::getAllByName0(InetAddress.java::1509)\n java.net.InetAddress::getAllByName(InetAddress.java::1368)\n java.net.InetAddress::getAllByName(InetAddress.java::1302)\n com.mongodb.ServerAddress::getSocketAddresses(ServerAddress.java::203)\n com.mongodb.internal.connection.SocketStream::initializeSocket(SocketStream.java::75)\n com.mongodb.internal.connection.SocketStream::open(SocketStream.java::65)\n com.mongodb.internal.connection.InternalStreamConnection::open(InternalStreamConnection.java::128)\n 
com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable::run(DefaultServerMonitor.java::117)\n java.lang.Thread::run(Thread.java::834)\nCanonical address mongo0.example.com:27010 does not match server address. Removing mongo2.example.com:27010 from client view of cluster\n", "text": "On running a build info query ( mongoClient.getDatabase(\"test_db\").runCommand(new Document(\"buildinfo\", 1)).getString(\"version\") ) we are getting timeout error as belowStack trace is as below:When we check the logs, we can see below logs as welland we can also see this log multiple timesThis problem doesn’t occur on Atlas sharded clusters. Also, for non-atlas sharded clusters, if we try to connect to only the primary node it succeeds where as it fails while connecting to multiple nodes using connectionString and connectionTagsJava driver version: 3.12", "username": "Nachiket_G_Kallapur" }, { "code": "mongodb://mongo0.example.com,mongo1.example.com,mongo2.example.com", "text": "Hi @Nachiket_G_Kallapur and welcome to the MongoDB community!!For better understanding of the problem, it would be very helpful if you could provide the following information:This is how we are doing it currently. We listShards() for a particular shard and then note all the replicaset members inside it. Then we build a connectionString( mongodb://mongo0.example.com,mongo1.example.com,mongo2.example.com ).When you state that this works with an Atlas sharded cluster, are you following the exact same steps to get the connection string for both Atlas and the on-prem deployment?Is there a specific use case regarding the method you are attempting to perform the connection? You could possibly be using change streams.Are the hostnames mentioned in the connection string in your post example hostnames or the actual hostnames used in the connection attempts?MongoDB version being used.Please help us with the above information for further understanding.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari , thanks for your reply.\nThe below are the answers to your follow up questionsWe use exact same steps in connecting to all types of MongoDB deploymentsWe are thinking to do so but wanted to know why are we facing this problem with reading oplogs from sharded clusterIn our code we use the exact same hosts mentioned in the listShards query to build the connection String. The hostnames I provided are not the original ones.MongoDB version=4.2.14, java driver version 3.12, privateLink connectionI have read multiple articles stating that privateLink is not stable on 3.12. If you feel privateLink is the actual cause of the issue, then I would want to know why are we succeeding to connect to a single replicaset memeber whereas failing in connecting to multiple of them in the shardThank you", "username": "Nachiket_G_Kallapur" }, { "code": "", "text": "Hi @Nachiket_G_KallapurThank you for sharing the responses.\nSince the exact same deployment and connection string works in Atlas and does not work in non Atlas sharded cluster, there is a possibility that Atlas has a network setting configured in such a way that the latter deployment does not have.Official MongoDB drivers specifications requires official drivers to connect to all nodes in a replica set for monitoring and high availability (see Server Monitoring). Since you’re using AWS PrivateLink, you would need to setup the PrivateLink to allow the driver to connect to all members of the replica set. 
I believe this is the reason behind the java.net.UnknownHostException error.Please have a look at the documentation page below for understanding how the security is configured in Atlas and that might be helpful in understanding how you deployment has been configured.I have read multiple articles stating that privateLink is not stable on 3.12.AWS PrivateLink is supported in Atlas (see security private endpoints and MongoDB Atlas data plane with AWS privateLink so if you are able to use Atlas, it might be able to simplify this setup and operation for you.Let us know if you have any further queries.Regards\nAasawari", "username": "Aasawari" } ]
Not able to connect to multiple replicaset members in a shard
2022-10-06T10:04:17.512Z
Not able to connect to multiple replicaset members in a shard
3,386
null
[ "queries", "python", "crud" ]
[ { "code": "", "text": "I am Using Python 3+ and trying to update a large collection (70K documents)\nAll documents in the collection should be updated for ONE FIELD with the same valueI tried these Options, and it takes very long time. had to abort in between.\ndoc=mycollection.find({},{“mcs”:\"{}\"},sort=[(\"_id\", 1)])\nfor data in doc:\nOption1: collection.update_many({},{\"$set\":{“mcs”:json_data}})\nOption2: requests_list = [UpdateMany({},{\"$set\":{“mcs”:json_data}},upsert = False)]\nOption3: requests_list = [UpdateOne({},{\"$set\": {“mcs”: json_data}},upsert=False)] ( this will update\nonly one record)\ncollection.bulk_write(requests_list,ordered = False)Any suggestions?", "username": "Girish_V" }, { "code": "", "text": "Hi @Girish_V and welcome to the community!!It would be very helpful if you could help me in understanding the following to understand in more better way:I tried these Options, and it takes very long time. had to abort in between.Is the a time estimate on how long the update takes for all the options mentioned ?Can you share an example document and the desired output after the update is made for the documents.The MongoDB version you are using.Option1: collection.update_many({},{\"$set\":{“mcs”:json_data}})Does the field “mcs” is there for all the documents in the collection?Do you see a similar delay for update while performing the update operation using mongosh?Let us know the following details for better understanding .Best Regards\nAasawari", "username": "Aasawari" } ]
Update Large Collection with same value for all documents
2022-10-12T14:36:11.793Z
Update Large Collection with same value for all documents
1,337
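A short sketch of the point implied by the question above: update_many with an empty filter already touches every document in one server-side pass, so wrapping it in a cursor loop only multiplies the work. The database, collection, and payload names are placeholders; only the "mcs" field name is taken from the thread.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["mydb"]["mycollection"]  # placeholder names

json_data = {"some": "payload"}  # the value every document should receive

# Single call, no client-side loop: the server updates all matching documents.
result = collection.update_many({}, {"$set": {"mcs": json_data}})
print(result.matched_count, "matched,", result.modified_count, "modified")
```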
null
[ "atlas-search" ]
[ { "code": "OptimismPrime\n{\n '$search': {\n index: 'clients',\n text: { query: 'optimismprime', path: {wildcard: \"*\"}, fuzzy: {} }\n }\n}\noptimismprioptopti", "text": "Hello,Considering the following search word: OptimismPrime and running the following query:I can successfully find a document containing the word “Optimism Prime” (be it a name, an address or w/e) and it seems the maximum missing data allowed is optimismpri , around 2 missing letters.However, I would like the search to start finding results much earlier, when a user is sending opt or opti etc.Could you kindly tell me please, what would be the correct way to achieve this desired result?", "username": "RENOVATIO" }, { "code": "optoptiautocomplete\"opt\"\"opti\"autocomplete\"opt\"DB> db.collection.aggregate({\n '$search': {\n index: 'default',\n autocomplete: { \n query: 'opt',\n path: 'text',\n fuzzy: {}\n }\n }\n})\n[\n { _id: ObjectId(\"6345e56af85ed6b4e320f5b5\"), text: 'OptimismPrime' } /// <--- Returned document(s)\n]\n\"opti\"DB> db.collection.aggregate({\n '$search': {\n index: 'default',\n autocomplete: { \n query: 'opti',\n path: 'text',\n fuzzy: {}\n }\n }\n})\n[\n { _id: ObjectId(\"6345e56af85ed6b4e320f5b5\"), text: 'OptimismPrime' } /// <--- Returned document(s)\n]\nautocomplete", "text": "Hi @RENOVATIO,However, I would like the search to start finding results much earlier, when a user is sending opt or opti etc.Perhaps the autocomplete operator would suit your use case. Please see below the results from my quick test when querying \"opt\" and \"opti\" using the autocomplete operator:The above was done using a single test document so it is highly recommend you test thoroughly in your own test environment to verify usage of the autocomplete operator suits your use case and requirements.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hello Jason,Thank you for your reply and examples. I had considered autocomplete, but there was one apparent problem that held me back from opting for it. Doesn’t autocomplete require only 1 designated path field? Meaning, I have to choose 1 field only to run the search on, for example a “name” field, but then not the “address” or anything else.For example, if I have a username John Brownfield, on the address Green Tree Street, 20, with autocomplete I would have to choose only 1 field to scan, in this case either username or address.Is there a way to find John Brownfield’s document, by partially entering just part of his surname, e.g. brown or his address, e.g. only green? The search query is passed through REST api ?search=query param, meaning the system is not aware of which field (I am assuming that’s what the path is for) is targetted.\nThank you again and I apologize if my original request was not as comprehensive about the issue.", "username": "RENOVATIO" }, { "code": "autocomplete{\"wildcard\":\"*\"}pathcompoundautocomplete\"John\"\"Green\"autocomplete", "text": "For example, if I have a username John Brownfield, on the address Green Tree Street, 20, with autocomplete I would have to choose only 1 field to scan, in this case either username or address .Thanks for the clarification regarding the mutliple fields that need to be searched on. I presume you tested autocomplete with a {\"wildcard\":\"*\"} path value but correct me if I am wrong here.Is there a way to find John Brownfield’s document, by partially entering just part of his surname, e.g. brown or his address, e.g. only green ? 
The search query is passed through REST api ?search=query paramIn saying so, have you tried compound with autocomplete to see if it meets your requirements? I did 2 simple tests based off your example:\nimage1574×1208 125 KB\n\nimage1532×1198 123 KB\nThe two tests were to try and retrieve the same document as you have mentioned.Examples are from Data ExplorerIf this is not what you’re exactly after, there is a feedback post regarding autocomplete usage on multiple fields (which sounds like it would suit your use case). It is currently being worked on at this stage.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "@Jason_Tran thank you very much for the extremely clear and useful examples. To answer your question, yes I have tried using the wildcard, however I received an error from Mongo that autocomplete only accepts a string.Your solution appears to be very usable and I think that is what I will probably go with. Just 2 last questions if I may.Thank you very much, Jason", "username": "RENOVATIO" }, { "code": "compoundautocompleteautocomplete\"username\"\"address\"autocompleteautocompleteautocomplete", "text": "Hi @RENOVATIONot specific to the compound operator but more so the autocomplete operator for your use case, perhaps taking a look at the performance considerations mentioned here will help.Just to clarify, do you mean setting index field definition to contain autocomplete for the fields? E.g.:If you have a screenshot of what you are referring in the atlas dashboard, that would help clarify the question for me just to be sure Additionally, regarding your testing mentioned above, I presume you created the index without specifying the data type configuration of autocomplete? Is this correct? I.e. created the search index without specifying autocomplete in the index definition and then running an autocomplete search query which returned your required results?Regards,\nJason", "username": "Jason_Tran" }, { "code": "autocompleteautocomplete", "text": "Hello @Jason_Tran, forgive me the lack of clarity.Just to clarify, do you mean setting index field definition to contain autocomplete for the fields?Additionally, regarding your testing mentioned above, I presume you created the index without specifying the data type configuration of autocomplete ? Is this correct?Yes, that’s right, this is what I intended. I just created a new search index without specifying any fields or their types and the search was working anyway, so I was wondering why was that the case and when then does one need to create those explicit fields.", "username": "RENOVATIO" }, { "code": "", "text": "Interesting - Do you have some quick steps that I can reproduce this behaviour? Please include the index details and query used.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Increasing search sensitivity
2022-10-11T18:43:13.382Z
Increasing search sensitivity
1,611
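A pymongo rendering of the compound/autocomplete query shown in the screenshots above. It assumes the Atlas Search index maps both username and address as autocomplete fields; the index name, field names, and query string are illustrative rather than taken verbatim from the thread.

```python
from pymongo import MongoClient

client = MongoClient("<atlas-connection-string>")
coll = client["test"]["clients"]

query = "brown"  # partial user input from the ?search= parameter
pipeline = [
    {
        "$search": {
            "index": "default",  # index must define username/address as autocomplete fields
            "compound": {
                "should": [
                    {"autocomplete": {"query": query, "path": "username", "fuzzy": {}}},
                    {"autocomplete": {"query": query, "path": "address", "fuzzy": {}}},
                ],
                "minimumShouldMatch": 1,
            },
        }
    },
    {"$limit": 10},
]
for doc in coll.aggregate(pipeline):
    print(doc)
```

Any document matching the partial input on at least one of the listed paths is returned, which mirrors the behaviour demonstrated in the thread.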
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Hello, I need to retrieve a post to populate a user field and also determine if a user has liked the post.\nfor clarification, I have a post, user, and likes collectionlikes collection holds a reference to the user id and post id.I want to retrieve a list of posts and check if the user has liked the post or not (Restful API) which is necessary for the user interfaceplease, does anyone knows the best way to go about this?", "username": "EmeritusDeveloper" }, { "code": "$lookuppostuserlikesfind().aggregate()", "text": "Hi @EmeritusDeveloper,Have you considered using $lookup? If that doesn’t suit your use case, please provide the following information:Additionally, i’ve noticed you’ve advised “with find() method” in your post title - Is there a specific reason you’re wanting to do this with find() rather than perhaps .aggregate() for example?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Adding a status key to post collection to indicate if a user has liked the post with find() method
2022-09-29T16:50:15.439Z
Adding a status key to post collection to indicate if a user has liked the post with find() method
1,676
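One possible shape of the $lookup suggested above, written as a pymongo aggregation. The likes schema field names (postId, userId) and the database/collection names are assumptions for illustration; the real schema in the question is not shown.

```python
from pymongo import MongoClient
from bson import ObjectId

client = MongoClient("mongodb://localhost:27017")
db = client["app"]

user_id = ObjectId("5f50c31e1c9d440000000000")  # the requesting user (placeholder)

pipeline = [
    {"$lookup": {
        "from": "likes",
        "let": {"post_id": "$_id"},
        "pipeline": [
            {"$match": {"$expr": {"$and": [
                {"$eq": ["$postId", "$$post_id"]},   # like references this post
                {"$eq": ["$userId", user_id]},       # ...and this user
            ]}}},
            {"$limit": 1},
        ],
        "as": "myLike",
    }},
    # Derive a boolean flag and drop the helper array.
    {"$addFields": {"likedByMe": {"$gt": [{"$size": "$myLike"}, 0]}}},
    {"$project": {"myLike": 0}},
]
posts = list(db["posts"].aggregate(pipeline))
```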
null
[ "dot-net" ]
[ { "code": "{\n \"lrec80\" : [ \n {\n \"RR4NFAD\" : {\n \"$ref\" : \"RR4OCO\",\n \"$id\" : ObjectId(\"0000000000000000879866fe\"),\n \"$db\" : \"tpfdf\"\n },\n \"RR4NCTY\" : \"LAX\",\n \"RR4NNBR\" : 1\n }, \n {\n \"RR4NFAD\" : {\n \"$ref\" : \"RR4OCO\",\n \"$id\" : ObjectId(\"0000000000000000879866fe\"),\n \"$db\" : \"tpfdf\"\n },\n \"RR4NCTY\" : \"MSY\",\n \"RR4NNBR\" : 1\n }, \n ],\n \"_id\" : ObjectId(\"0000000000000000ec013401\")\n}\n var mgdbref = new MongoDBRef(_appConfiguration.MongoDbConnection[\"MongoDatabaseName\"],\n \"RR4OCO\", \"0000000000000000ec013401\");\n\n var embeddedDoc = new BsonDocument\n {\n /*{ \"$ref\", \"RR4OCO\"},*/\n { \"$id\", new ObjectId(objectId) }\n };\n\n var deleteDoc = new BsonDocument\n {\n { \"RR4NFAD\", embeddedDoc (or mgdbref) }\n };\n", "text": "Hello. I’m still new to the nuances of Mongo and have the following issue. I’m working on a .Net project using MongoDB.Driver 2.13.0. I have the following document structure.I have an “lrec80” array with one or more elements that contain a “RR4NFAD” DBRef to a document in another collection.I can easily delete the referenced document in the other collection, but how do I perform a batch delete of elements in the array that all have the same ObjectID of the document in the other collection?I’ve tried the following without success to filter out the elements from the array for deletionI’m probably missing something simple, but I can’t figure it out. Thanks for any help you can offer!", "username": "RKS" }, { "code": " var deleteReadFilter = Builders<BsonDocument>.Filter.Eq(\"_id\", id) &\n Builders<BsonDocument>.Filter.ElemMatch<BsonValue>(\"lrec80\", embeddedDoc);\n\n var document = (await _mongoCollection.FindAsync(deleteReadFilter).ConfigureAwait(false)).ToList();\n\n if (document != null && document.Any())\n {\n // Delete existing record\n var deleteFilter = Builders<BsonDocument>.Update.Pull(\"lrec80\", embeddedDoc);\n UpdateResult updateResult = _mongoCollection.UpdateOne(readFilter, deleteFilter, new UpdateOptions() { BypassDocumentValidation = true });\n retVal = (updateResult.IsAcknowledged && updateResult.ModifiedCount > 0);\n", "text": "Didn’t include the other part. I tried to filter out the elements for deletion, with the following:I get the following error:QueryFailure flag was true (response was { “$err” : “MONG0025E 13.37.37 An error occurred when a request that contains ‘/PrefixedLREC/DFLREC/lrec80/RR4NFAD’ was processed. Reason: Reference must be set using a DBRef that contains an ObjectID. Path name: /PrefixedLREC/DFLREC/lrec80/RR4NFAD.” }).", "username": "RKS" }, { "code": "", "text": "Welcome to the MongoDB Community @RKS !MONG0025E 13.37.37 An error occurred when a request that contains ‘/PrefixedLREC/DFLREC/lrec80/RR4NFAD’ was processed.This server response is not a MongoDB server error message, so I assume you are using an emulated API that has some different expectations like “Reference must be set using a DBRef that contains an ObjectID”. You would have to consult the relevant API documentation for more guidance.What type of deployment are you connecting to?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi, you are correct that part of the error is non-standard and specific to our environment. 
The “Reference must be set using a DBRef that contains an ObjectID” was what I was concerned about.The service I’m working on is .Net Core using the official .Net driver for MongoDB connecting to MongoDB version 2.6.5 in a z/TPF environment version 1.1.If I can query and delete one or more array elements from the “lrec80” array where RR4NFAD is a DBRef to a document in another collection, that would resolve the issue I’m facing. For my example, I would take ObjectId “0000000000000000879866fe” as input and then delete array elements with RR4NFAD.$id that match.Thanks for your time.", "username": "RKS" }, { "code": "var readFilter = Builders<BsonDocument>.Filter.Eq(\"_id\", new ObjectId(\"0000000000000000ec013401\"));\n\nvar deleteDoc2 = new BsonDocument\n{\n\t{ \"RR4NFAD\", new ObjectId(\"0000000000000000879866fe\") }\n};\n\nvar deleteFilter = Builders<BsonDocument>.Update.Pull(\"lrec80\", deleteDoc2);\n\n_mongoCollection.UpdateOne(readFilter, deleteFilter, new UpdateOptions() { BypassDocumentValidation = true });", "text": "I managed to figure it out. As usual, I overcomplicated it. The following solution worked.", "username": "RKS" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Querying Array Elements by DBRef for Deletion
2022-10-07T20:31:28.093Z
Querying Array Elements by DBRef for Deletion
2,681
null
[ "flutter" ]
[ { "code": "final config = Configuration.local([Person.schema], schemaVersion: 2);\nfinal config = Configuration.flexibleSync(user, [Person.schema])); \n", "text": "Hello, i just updated the schema in my exists app,\nin “local” we just increase schemaVersion like that :but in “flexibleSync” there is no such argument to change :the error i got after updating my schema :Message: Migration is required due to the following errors…\nso how can i migration old schema with the new one using “flexibleSync” ?", "username": "abdelrahman_mokhtar" }, { "code": "", "text": "Hello @abdelrahman_mokhtar , Welcome to MongoDB Community Thank you so much for raising your concern.Please note migrations are only supported for local realms and flexible sync does not support migration. The local realm migrations in flutter is a new feature that will be released this week.For changes to the schema for sync, there can be breaking or non-breaking changes. Please refer to the update Schema section in the documentation for the same.I hope the provided information is helpful.Please don’t hesitate to ask if you need further assistance.Cheers, \nHenna", "username": "henna.s" }, { "code": "default.realm", "text": "I found the solution in the realm packge :If you change your data models often and receive a migration exception be sure to delete the old default.realm file in your application directory. It will get recreated with the new schema the next time the Realm is opened.I need to delete the local realm files, there is no migration option in the flutter beta yet.", "username": "abdelrahman_mokhtar" }, { "code": "", "text": "@henna.s this solution should be added to the mongodb flutter docs as it’s missing there, also there is a wrong information about it : Updating the Schema of a Synced Realm", "username": "abdelrahman_mokhtar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migration new schema with Device sync
2022-10-05T23:51:19.410Z
Migration new schema with Device sync
2,313
null
[]
[ { "code": "", "text": "We have a MongoDB v4.4 instance with several TB of data in it. The instance experienced a bad shutdown (the cause of the shutdown has yet to be determined), and the recovery process, once completed, didn’t allow it to start again. A few days ago we started the repair process. It is still running, but has had no console/log updates in over a day, and neither top nor iotop show much activity at all. So, we have a few questions for the experts in the community.• How long should a repair process take on 6-ish TB of data? Is that even something that can be predicted?\n• Should we stop the repair process since it appears to have stalled, or do we just wait (and hope)?\n• Other than the console/log messages is there any way to monitor a repair process?\n• Are there any other suggestions as to how to troubleshoot this problem?Thanks for any help.", "username": "smaderak" }, { "code": "mongod --repair/var/log/messagessudo grep mongod /var/log/messages\nsudo grep score /var/log/messages\nmongod", "text": "Hey @smaderak,Welcome to the MongoDB Community Forums! It’s been five days since you posted, has this been resolved or you are still facing issues?A few days ago we started the repair process. It is still runningDid you use mongod --repair CLI command to start the repair process? Please make sure sufficient disk space is available to perform the database repair - in extreme cases, a repair may require as much space as is currently occupied by the MongoDB database. Also, repair process rebuilds indexes as required. So for 6 TB of data, it can take a while.Other than the console/log messages is there any way to monitor a repair process?You can check your system logs for messages pertaining to MongoDB. For example, for logs located in /var/log/messages, use the following commands:You may be able to try to use tools like dtrace or a similar tool to trace what files are being accessed by mongodShould we stop the repair process since it appears to have stalled, or do we just wait (and hope)?It would be better to first check the last entries in the logs before it got stuck before you decide doing this. I’m also attaching the documentation link for you to go through which should be able to help you out.\nRecover a Standalone after an unexpected shutdownHope this helps. Also, for future, I would also highly suggest to run a replica set instead of a standalone in production environments so cases like this can be solved by resyncing the affected node instead of suffering any downtime and data loss.Please feel free to reach out for anything else as well. Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Sorry for my delayed response.In the end we opted to restore from a backup even though that did cause a bit of data loss. We decided that we couldn’t wait any longer.We did use the --repair flag, and there was plenty of disk space available. In the future we will keep in mind that more information could be in the system messages.I agree that a replica set would be better, but I personally don’t make all the decisions.Thanks for the information.", "username": "smaderak" } ]
Mongod --repair appears to have stalled, what now?
2022-09-09T14:30:11.630Z
Mongod --repair appears to have stalled, what now?
2,067
null
[ "serverless" ]
[ { "code": "", "text": "Hi everyone,I have an issue in which i had and extremely huge number of RPU operations when i created indexes.\nI usually have 1K/s but today i reached over 150K/s which make me a lot worried for the cost of this. I deleted all indexes and things seems to be back to normal but i’m quite surprised because in guides that i read, they always say to use indexes to reduce cost. I want to know if another person fell into the same trap and what happened ? I’m very worried about my position in the company if we end up with huge cost because of this.\nI also think that if i should be able to see my billing in real time and not waiting a day for it because being in this situation is quite uncomfortable.Thanks for reading", "username": "Valentin_CORSAIN" }, { "code": "", "text": "I had those spikes again and i’m now very very worried. I deleted a time series collection that i created to see if it was this that created those huge rpu.", "username": "Valentin_CORSAIN" }, { "code": "", "text": "Hey Valentin,Please reach out to our support team either via a support ticket or the chat icon in the bottom right corner of the Atlas UI. We’ll be sure to help.Also feel free to DM me your serverless instance; I’m happy to take a look. Typically, indexes will reduce the number of RPUs consumed (see How to Optimize Your Serverless Instance Bill with Indexing | MongoDB), though we can certainly dig deeper.Best,\nChris\nAtlas Serverless Product Manager", "username": "Christopher_Shum" } ]
Insanely huge RPU because of indexes
2022-10-13T11:48:11.439Z
Insanely huge RPU because of indexes
1,767
null
[]
[ { "code": "", "text": "Hello,We have already connected to the Document DB cluster, we have also succeded uploading a file to documentDB through Application Frontend Interface, the issue comes when we tried to get that uploaded file from DocumentDB, Note: We’re using DocumentDB 4.0 we already tested it with 3.6 same issue \"Query failed with error code 2 and error message ‘Bad value’ on server please note that we got error 500.. We’re using only the primary instance without reader.\nWe have tested our application using MongoDB (version 6.0 and 4.0) installed directly in EC2 machine, we didn’t get the same issue and it worked well for us.here is the query used by the GET API which returns the error sent previously => “@Query(”{'folder.id ’ : :#{#id}, ‘idEmployee’ : :#{#idEmployee}}“) List findFileByFolderAndEmployee(@Param(“id”) String id, @Param(“idEmployee”) long idEmployee);”\nDo you have any ideas about possible causes?\nThanks", "username": "ghazouani_nada" }, { "code": "", "text": "Hi @ghazouani_nada ,DocumentDB is just not MongoDB. Therefore, I truly and honestly recommend Atlas, the best MongoDB platform in the cloud.The issue seems to be something related to Amazon…Wish I could have helped with the specific issue on the amazon platform, but I would recommend contacting Amazon support…", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you Pavel for your reply,\nI contacted AWS support today and they confirmed that the documentDB server is working well and it seems that the error is related to the query in the backend of the app.", "username": "ghazouani_nada" } ]
DocumentDB GET request issue
2022-10-13T14:08:53.633Z
DocumentDB GET request issue
1,178
null
[]
[ { "code": "", "text": "While going through the Realm video series on Youtube I noticed in the search section we didn’t implement a text index in order to enable the text search functionality we just made a search index. I did not even know this was a thing, I thought the purpose of a text index was to enable search. Is this text search something exclusive to Realm? If not…Any feedback/ guidance is greatly appreciated.", "username": "Wayne_Barker" }, { "code": "", "text": "Hello @Wayne_Barker ,Welcome to The MongoDB Community Forums! Could you please share which Realm video series are you referring to?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Sorry, I didn’t realize there was more than one here is the link at the time as well.\n MongoDB Atlas Search to Easily Find Your Data | Search & Autocomplete Implementation | Jumpstart", "username": "Wayne_Barker" }, { "code": "", "text": "Also while I have a professional here… If it’s not a big inconvenience to you can you look at my other question?\nHow to restrict certain serverless functions based on the authentication type in Realm", "username": "Wayne_Barker" } ]
What is the difference between a text index and a search index?
2022-10-10T18:54:30.537Z
What is the difference between a text index and a search index?
1,856
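A small sketch contrasting the two mechanisms the question asks about. A classic text index is created with createIndex/create_index and queried with $text on any MongoDB deployment, while an Atlas Search index is defined in Atlas (UI or API, not via create_index) and queried with the $search aggregation stage. Collection and field names are placeholders.

```python
from pymongo import MongoClient, TEXT

client = MongoClient("<connection-string>")
coll = client["demo"]["articles"]

# 1) Database-level text index: works on any MongoDB deployment.
coll.create_index([("title", TEXT), ("body", TEXT)])
hits = list(coll.find({"$text": {"$search": "mongodb atlas"}}))

# 2) Atlas Search: the index lives in Atlas and is queried with $search.
pipeline = [{"$search": {"index": "default",
                         "text": {"query": "mongodb atlas",
                                  "path": {"wildcard": "*"}}}}]
search_hits = list(coll.aggregate(pipeline))
```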
https://www.mongodb.com/…be60502fbd61.png
[ "node-js", "mongoose-odm" ]
[ { "code": "const express = require(\"express\");\nconst app = express();\nconst mongoose = require(\"mongoose\");\nconst dotenv = require(\"dotenv\");\nconst helmet = require(\"helmet\");\nconst morgan = require(\"morgan\");\nconst userRoute = require(\"./routes/users\");\nconst authRoute = require(\"./routes/auth\");\nconst postRoute = require(\"./routes/posts\");\ndotenv.config();\n\nmongoose.connect(\n process.env.MONGO_URL,\n {useNewUrlParser:true})\n .then(()=>{\n console.log(\"Connected to MongoDB\");\n })\n .catch(()=>{\n console.log(\"Couldn't connect to MongoDB\");\n })\n\n//middleware\napp.use(express.json());\napp.use(helmet());\napp.use(morgan('common'));\n\napp.use(\"/api/auth\", authRoute);\napp.use(\"/api/users\", userRoute);\napp.use(\"/api/posts\", postRoute);\n\napp.listen(8800,()=>{\n console.log(\"Backend server is running!\")\n})\n\n", "text": "Hello,\nI have a problem trying to connect to mongoDb ( using mongoose). It all worked untill today but nothing is changed.\nFirst i thought that there is a problem with my code, but after checking the connection it shows that the problem is there.This is the index.js:", "username": "Cotuna_Dumitru-Sorin" }, { "code": " .catch( (e)=> console.log(e) )\n", "text": "instead of your message, can you print the “actual” error coming from mongoose?it is possible you have IP security on your cluster and also your IP has changed. if you do not have a static IP on your network, your ISP may change your IP when your router restarts causing this kind of problem. please check this possibility first. open your cluster security and add your current IP to the list and see if it fixes the problem. if so, also remove the old IP entry.", "username": "Yilmaz_Durmaz" }, { "code": "{ useNewUrlParser: true }\nmongoose.connect(process.env.MONGO_URL)\n .then(()=>{\n console.log(\"Connected to MongoDB\");\n })\n .catch(()=>{\n console.log(\"Couldn't connect to MongoDB\");\n })\n", "text": "Hi,If you are using Mongoose v6+, then you should remove the following config:So, just do this:More info", "username": "NeNaD" }, { "code": "", "text": "Thx man, u saved me. My ip wasn’t listed in the whitelist.", "username": "Cotuna_Dumitru-Sorin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoose Connection
2022-10-12T18:09:19.679Z
Mongoose Connection
5,954
null
[ "replication", "python", "mongodb-shell" ]
[ { "code": "from pymongo import MongoClient\nimport pymongo\nimport pandas as pd\n\nclient =pymongo.MongoClient(\"mongodb://196.xxx.xxx.xxx:27017\")\nprint(client)\n\nwith client:\n \n db = client.cloud_db\n print(db.collection_names())\nC:\\Users\\chuan\\OneDrive\\Desktop\\10.13_connect_mongoDB>python 1.py\nMongoClient(host=['196.xxx.xxx.xxx:27017'], document_class=dict, tz_aware=False, connect=True)\nTraceback (most recent call last):\n File \"C:\\Users\\chuan\\OneDrive\\Desktop\\10.13_connect_mongoDB\\1.py\", line 11, in <module>\n print(db.collection_names())\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\collection.py\", line 3194, in __call__\n raise TypeError(\nTypeError: 'Collection' object is not callable. If you meant to call the 'collection_names' method on a 'Database' object it is failing because no such method exists.\ncloud_dbcloud_dbeform_detailclient.list_database_names()import pymongo\n\n#connect to my desire mongoDB by IP address\nclient = pymongo.MongoClient(\"mongodb://196.xxx.xxx.xxx:27017\")\n\n#name my DB name\nmydb_Name=\"cloud_db\"\n\n# set mydb will have all properties from myclient[mydb_Name]\nmydb = client[mydb_Name]\n\n#print all DB name\nprint(\"now exsisting DB are: \",client.list_database_names())\n\n#if DB exsist, then print it out\n# and the \"cloud_db\" already exsist \ndblist = client.list_database_names()\nif mydb_Name in dblist:\n print(\"Your DB: {0}exsist!!!!\".format(mydb_Name))\nTraceback (most recent call last):\n File \"C:\\Users\\chuan\\OneDrive\\Desktop\\10.13_connect_mongoDB\\2.py\", line 13, in <module>\n print(\"now exsisting DB are: \",client.list_database_names())\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1839, in list_database_names\n return [doc[\"name\"] for doc in self.list_databases(session, nameOnly=True, comment=comment)]\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1812, in list_databases\n res = admin._retryable_read_command(cmd, session=session)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\database.py\", line 843, in _retryable_read_command\n return self.__client._retryable_read(_cmd, read_preference, session)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\_csot.py\", line 105, in csot_wrapper\n return func(self, *args, **kwargs)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1413, in _retryable_read\n server = self._select_server(read_pref, session, address=address)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1229, in _select_server\n server = topology.select_server(server_selector)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\topology.py\", line 272, in select_server\n server = self._select_server(selector, server_selection_timeout, address)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\topology.py\", line 261, in _select_server\n servers = self.select_servers(selector, server_selection_timeout, address)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\topology.py\", line 223, in 
select_servers\n server_descriptions = self._select_servers_loop(selector, server_timeout, address)\n File \"C:\\Users\\chuan\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\topology.py\", line 238, in _select_servers_loop\n raise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: Could not reach any servers in [('mongodb', 27017)]. Replica set is configured with internal hostnames or IPs?, Timeout: 30s, Topology Description: <TopologyDescription id: 6347a91a4c61799b7b1594ac, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('mongodb', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('mongodb:27017: [Errno 11001] getaddrinfo failed')>]>\n", "text": "How can I read and write from an existing mongoDB by IP address?I learn from here (pymongo): Connect to MongoDB from Python Application - TutorialKartSaying that technically using (Windows + pymongo + python script) can read and write to mongoDB.\nI have IP address and existing mongoDB(already have company’s collection/table),environment: Windows 11 cmd , pythonI should make more clear that I have actual mongoDB collection/table with IP address 196.xxx.xxx.xxx (x is for actual ip address )and output is not found:online image link:(about the actual mongoDB I want to connect, print,\nwrite)if I want to list up all the collections/tables from cloud_db (in the mongoDB picture above) and maybe read and write to cloud_db’s one of the collection named eform_detail , then what should I do??here is my tried code after using client.list_database_names()and error raise:my expect output is : (then I could do read and write later)\nmongoDB pic : 500 02 — Postimagesnow exsisting DB are: [ ‘admin’ , ‘cloud_db’]inside cloud_db are: [ ‘asset’ , ‘bas_bom_plain’ , ‘bas_part_info’ ,\n‘bom’ , ‘comments’ , ‘def’ , ‘def_detail’ , ‘doc’ , ‘eform’,\n‘eform_detail’ …etc(the collections in picture) ]", "username": "j_ton" }, { "code": "directConnection=true", "text": "Your host is a single node replicaSet. The hostname configured in the replicaSet is not resolvable by your client.When the driver is connecting it retrieves the topology of the replicaSet so it can connect correctly.You can either:", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to read and write from existing mongoDB by IP address?
2022-10-13T07:39:27.150Z
How to read and write from existing mongoDB by IP address?
1,922
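A pymongo sketch of the first workaround described above: bypass replica-set discovery and talk to the one reachable node directly. The IP address is a placeholder in the spirit of the thread; the database and collection names are taken from the screenshots in the question.

```python
from pymongo import MongoClient

# directConnection=true skips replica-set topology discovery, so the
# unresolvable internal hostnames in the replica-set config are never used.
client = MongoClient("mongodb://196.0.0.10:27017/?directConnection=true")
# Equivalent form: MongoClient("196.0.0.10", 27017, directConnection=True)

db = client["cloud_db"]
print(db.list_collection_names())          # e.g. ['asset', 'eform_detail', ...]

doc = db["eform_detail"].find_one()        # read
db["eform_detail"].insert_one({"source": "script", "ok": True})  # write
```

The cleaner long-term fix, as noted in the reply, is to make the hostnames in the replica-set configuration resolvable from the client (or reconfigure the set to use resolvable names), after which the normal multi-host connection string works.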
null
[ "replication", "storage", "kubernetes-operator" ]
[ { "code": "apiVersion: mongodbcommunity.mongodb.com/v1\nkind: MongoDBCommunity\nmetadata:\n name: mongodb-cluster\nspec:\n members: 2\n type: ReplicaSet\n version: \"5.0.6\"\n statefulSet:\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: mongod\n resources:\n limits:\n cpu: 0.3\n memory: 300Mi\n requests:\n cpu: 0.2\n memory: 200Mi\n - name: mongodb-agent\n readinessProbe:\n failureThreshold: 100\n initialDelaySeconds: 10\n resources:\n limits:\n cpu: 0.3\n memory: 300Mi \n requests:\n cpu: 0.2\n memory: 200Mi \n volumeClaimTemplates:\n - metadata:\n name: data-volume\n spec: \n storageClassName: managed-nfs-storage\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi\n - metadata:\n name: logs-volume\n spec:\n storageClassName: managed-nfs-storage\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 5Gi\n security:\n authentication:\n modes: [\"SCRAM\"]\n users:\n - name: cluster-user\n db: admin\n passwordSecretRef: # a reference to the secret that will be used to generate the user's password\n name: cluster-user-password\n roles:\n - name: clusterAdmin\n db: admin\n - name: userAdminAnyDatabase\n db: admin\n scramCredentialsSecretName: my-scram\n additionalMongodConfig:\n storage.wiredTiger.engineConfig.journalCompressor: zlib\n", "text": "Hi All,\nI am new to MongoDB and the kubernetes community operator.\nDeployed it on Proxmox server/k3s.\nMy question:\nIs it possible to pass container resources attributes to override the default ones?This is my replicaSet deployment:The only part that works with the custom attributes is volumeClaimTemplates. All other attributes, like container resources and Readiness Probe values, will not change default values.", "username": "lk777" }, { "code": " statefulSet:\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: mongod\n image: my-private-registry/mongodb/mongodb-enterprise-appdb-database:5.0.5-ent\n", "text": "I’m seeing a similar issue. I cannot figure out how to override the container images for a k8s environment that is configured for a private container registry. I’d expect to be able to do something like this:Overriding the image doesn’t seem to work, and it still tries to pull from quay.io, which we do not have access to.", "username": "Baldwin_Alan" }, { "code": "", "text": "I ran into same issue, and I found a example called mongodb.com_v1_mongodbcommunity_specify_pod_resources.yaml inside the sample folder", "username": "weiliang_li" }, { "code": "statefulSet.spec.template.specapiVersion: mongodbcommunity.mongodb.com/v1\nkind: MongoDBCommunity\nmetadata:\n name: mongodb-cluster\nspec:\n statefulSet:\n spec:\n terminationGracePeriodSeconds: 10\n template:\n spec:\n containers:\n - name: mongod\n resources:\n limits:\n cpu: 0.3\n memory: 300Mi\n requests:\n cpu: 0.2\n memory: 200Mi\n...\n", "text": "Hey, info about containers must be in statefulSet.spec.template.spec section (StatefulSets | Kubernetes).So, it should be something like that:", "username": "Tomek_Swiecicki" } ]
Mongodb kubernetes community operator resources attributes
2022-03-09T16:49:51.120Z
Mongodb kubernetes community operator resources attributes
4,974
null
[ "serverless" ]
[ { "code": "GET https://cloud.mongodb.com/api/atlas/v1.0/groups/{groupId}/serverless/{instance}/\n", "text": "Hello community,I am using the Atlas Admin API to manage my serverless instances programmatically so that I can provision customer databases when needed. The Atlas UI shows the disk usage and database number for a serverless instance. I would like to retrieve that information through the Atlas Admin API. There is a method for regular clusters: MongoDB Atlas Administration APIHowever, unfortunately the process-related methods do not work for Serverless instances. When I retrieve an instance through the Atlas Admin API with the following, I only receive very basic information.How do I programmatically retrieve the Total Storage Size and Total Number Of Databases of a serverless instance?Many thanks in advance, Jan", "username": "derjanni" }, { "code": "", "text": "Solved it now by connecting to each instance and retrieve the database list that includes the size.", "username": "derjanni" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to get the storage size and number of databases of a serverless instance?
2022-10-13T10:47:57.526Z
How to get the storage size and number of databases of a serverless instance?
2,141
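A sketch of the "connect and read the database list" approach the author landed on. listDatabases already returns per-database sizeOnDisk plus a totalSize, so both numbers can be derived from a single command; the connection string is a placeholder.

```python
from pymongo import MongoClient

client = MongoClient("<serverless-instance-connection-string>")

info = client.admin.command("listDatabases")
for d in info["databases"]:
    print(d["name"], d["sizeOnDisk"], "bytes")

print("database count:", len(info["databases"]))
print("total storage :", info["totalSize"], "bytes")
```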
null
[]
[ { "code": "replication:\n oplogSizeMB: 1024\n replSetName: rs0\n", "text": "Hi Team,I want to configure oplog on stand-alone mongo DB server. I have very basic idea of Mongo DB as admin. Need all your help to resolve this.I am trying to do below steps but mongodb service is not starting after adding the below parameters on mongo.conf file.There is no particular error but services are unable to start.Let me know if any other steps to be done before this.", "username": "Swarup_Dey" }, { "code": "mongodsystemLog.pathmongod", "text": "Welcome to the MongoDB Community @Swarup_Dey !Please share some more details on your environment:O/S versionHow you are starting mongodSince you are using a configuration file, I assume you have also set a systemLog.path location. There should be more informative log messages indicating the reason the mongod process was unable to start successfully.Regards,\nStennie", "username": "Stennie_X" }, { "code": "[root@UATMONGODB etc]# cat /etc/redhat-release\nRed Hat Enterprise Linux release 8.5 (Ootpa)\nmongodsystemctl start mongod\nDetails:\n[root@UATMONGODB etc]# systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: active (running) since Thu 2022-10-13 11:18:13 +08; 3h 22min ago\n Docs: https://docs.mongodb.org/manual\n Process: 6965 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=0/SUCCESS)\n Process: 6963 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 6961 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 6958 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n Main PID: 6969 (mongod)\n Memory: 173.2M\n CGroup: /system.slice/mongod.service\n └─6969 /usr/bin/mongod -f /etc/mongod.conf\n[mongo@UATMONGODB ~]$ cat /etc/mongod.conf\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\nnet:\n tls:\n FIPSMode: true\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n quiet: false\n path: /mongo/log/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /mongo/datadir\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27077\n bindIp: 10.10.10.10 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\nsecurity:\n authorization: \"enabled\"\n javascriptEnabled: false\n\nsetParameter:\n enableLocalhostAuthBypass: false\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n", "text": "Hi Stennie,Thanks for your reply.O/S version:Conf File:Thanks\nSwarup", "username": "Swarup_Dey" }, { "code": "[root@UATMONGODB etc]# systemctl start mongod\nJob for mongod.service failed because the control process exited with error code.\nSee \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n[root@UATMONGODB etc]# journalctl -xe\nOct 13 10:21:17 UATMONGODB.katmb.com.my mongod[1229696]: try '/usr/bin/mongod --help' for more information\nOct 13 10:21:17 UATMONGODB.katmb.com.my systemd[1]: mongod.service: Control process exited, code=exited status=2\nOct 13 10:21:17 
UATMONGODB.katmb.com.my systemd[1]: mongod.service: Failed with result 'exit-code'.\n-- Subject: Unit failed\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n--\n-- The unit mongod.service has entered the 'failed' state with result 'exit-code'.\nOct 13 10:21:17 UATMONGODB.katmb.com.my systemd[1]: Failed to start MongoDB Database Server.\n-- Subject: Unit mongod.service has failed\n-- Defined-By: systemd\n-- Support: https://access.redhat.com/support\n--\n-- Unit mongod.service has failed.\n--\n-- The result is failed.\nOct 13 10:21:18 UATMONGODB.katmb.com.my dbus-daemon[1104]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'\nOct 13 10:21:20 UATMONGODB.katmb.com.my setroubleshoot[1229698]: AnalyzeThread.run(): Cancel pending alarm\nOct 13 10:21:20 UATMONGODB.katmb.com.my setroubleshoot[1229698]: failed to retrieve rpm info for /sys/fs/cgroup/memory/memory.limit_in_bytes\nOct 13 10:21:20 UATMONGODB.katmb.com.my dbus-daemon[1104]: [system] Activating service name='org.fedoraproject.SetroubleshootPrivileged' requested by ':1.349488' (uid=993 pid=1229698 comm=\"/usr/>\nOct 13 10:21:21 UATMONGODB.katmb.com.my dbus-daemon[1104]: [system] Successfully activated service 'org.fedoraproject.SetroubleshootPrivileged'\nOct 13 10:21:21 UATMONGODB.katmb.com.my setroubleshoot[1229698]: SELinux is preventing /usr/bin/mongod from open access on the file /sys/fs/cgroup/memory/memory.limit_in_bytes. For complete SELi>\nOct 13 10:21:21 UATMONGODB.katmb.com.my setroubleshoot[1229698]: SELinux is preventing /usr/bin/mongod from open access on the file /sys/fs/cgroup/memory/memory.limit_in_bytes.\n\n ***** Plugin catchall (100. confidence) suggests **************************\n\n If you believe that mongod should be allowed open access on the memory.limit_in_bytes file by default.\n Then you should report this as a bug.\n You can generate a local policy module to allow this access.\n Do\n allow this access for now by executing:\n # ausearch -c 'mongod' --raw | audit2allow -M my-mongod\n # semodule -X 300 -i my-mongod.pp\n\nOct 13 10:21:21 UATMONGODB.katmb.com.my setroubleshoot[1229698]: AnalyzeThread.run(): Set alarm timeout to 10\nOct 13 10:21:21 UATMONGODB.katmb.com.my setroubleshoot[1229698]: AnalyzeThread.run(): Cancel pending alarm\nOct 13 10:21:22 UATMONGODB.katmb.com.my setroubleshoot[1229698]: failed to retrieve rpm info for /sys/fs/cgroup/memory/memory.limit_in_bytes\nOct 13 10:21:22 UATMONGODB.katmb.com.my setroubleshoot[1229698]: SELinux is preventing /usr/bin/mongod from getattr access on the file /sys/fs/cgroup/memory/memory.limit_in_bytes. For complete S>\nOct 13 10:21:22 UATMONGODB.katmb.com.my setroubleshoot[1229698]: SELinux is preventing /usr/bin/mongod from getattr access on the file /sys/fs/cgroup/memory/memory.limit_in_bytes.\n\n ***** Plugin catchall (100. 
confidence) suggests **************************\n\n If you believe that mongod should be allowed getattr access on the memory.limit_in_bytes file by default.\n Then you should report this as a bug.\n You can generate a local policy module to allow this access.\n Do\n allow this access for now by executing:\n # ausearch -c 'mongod' --raw | audit2allow -M my-mongod\n # semodule -X 300 -i my-mongod.pp\n\nOct 13 10:21:22 UATMONGODB.katmb.com.my setroubleshoot[1229698]: AnalyzeThread.run(): Set alarm timeout to 10\n", "text": "Got generic error while starting after adding mentioned replica parameters.", "username": "Swarup_Dey" }, { "code": "SELinux is preventing /usr/bin/mongod from open access on the file /sys/fs/cgroup/memory/memory.limit_in_bytes/mongo/log/mongod.log", "text": "Hi @Swarup_Dey,SELinux is preventing /usr/bin/mongod from open access on the file /sys/fs/cgroup/memory/memory.limit_in_bytesDid you install MongoDB using the official packages? What specific version of MongoDB 5.x server did you install?SELinux policies should be setup as part of the packaged installation, but can also be configured manually per Configure SELinux.If you happen to be using an older MongoDB 5.0.x release I would strongly recommend installing the most recent patch release (currently 5.0.13) as there have been several important stability fixes. Patch releases do not introduce any backward-breaking or compatibility changes. Please review Release Notes for MongoDB 5.0 for more details./mongo/log/mongod.logCan you please have a look at this MongoDB log file? It should have some more relevant error messages.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,I have installed mongodb with below repository.[root@UATMONGODB yum.repos.d]# cat mongodb-org-5.0.repo\n[mongodb-org-5.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/5.0/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-5.0.ascMongoDB version currently is 5.0.6.\nWe have make seLinux permission. Do you think we still require selenux policy to be applied ?We will try to installed patch but that’s gonna take a while with downtime approval.\nBut do you think this older version has any effect on the replication/oplog setup?", "username": "Swarup_Dey" }, { "code": "", "text": "** selinux is set to Permissive.[root@UATMONGODB /]# sestatus\nSELinux status: enabled\nSELinuxfs mount: /sys/fs/selinux\nSELinux root directory: /etc/selinux\nLoaded policy name: targeted\nCurrent mode: permissive\nMode from config file: permissive\nPolicy MLS status: enabled\nPolicy deny_unknown status: allowed\nMemory protection checking: actual (secure)\nMax kernel policy version: 33", "username": "Swarup_Dey" } ]
Enable/Configure Oplog on standalone mongodb version 5
2022-10-13T05:16:15.043Z
Enable/Configure Oplog on standalone mongodb version 5
2,188
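For reference, the usual way to get an oplog on a single server is to run it as a one-member replica set: add the replication block to mongod.conf, restart, then initiate the set once. The sketch below assumes the host/port from the config in this thread and that you can authenticate as a user allowed to run replSetInitiate; it is an illustration, not the resolution of the SELinux issue discussed above.

```python
# mongod.conf additions (then restart the mongod service):
#   replication:
#     oplogSizeMB: 1024
#     replSetName: rs0
from pymongo import MongoClient

client = MongoClient("mongodb://10.10.10.10:27077/?directConnection=true")

# Run once after the restart to turn the standalone into a 1-member replica set.
client.admin.command("replSetInitiate", {
    "_id": "rs0",
    "members": [{"_id": 0, "host": "10.10.10.10:27077"}],
})

# The oplog then exists as the capped collection local.oplog.rs.
print(client["local"]["oplog.rs"].estimated_document_count())
```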
null
[ "replication", "database-tools" ]
[ { "code": "", "text": "Hellow everybody ,I have a server running Ubuntu Server 20 LTS. I have installed there Mongo 4.4.10. The architecture deployed is replicaset of 3 nodes, one primary and two other secondaries. After making several mongoimport operations with big TSV files (several million rows), one of the nodes falls down, usually a secondary (sometimes is the id 1, sometimes is the id 2), and very rarely the primary.I have the problem in a Server of 32 GB RAM, but also in a Server of 64 GB RAM.I have check several topics:Linux limits:\nI have set the limits to maximum possible:\nsudo echo “* soft nproc 65536” >> /etc/security/limits.conf\nsudo echo “* hard nproc 65536” >> /etc/security/limits.conf\nsudo echo “* soft nofile 655360” >> /etc/security/limits.conf\nsudo echo “* hard nofile 655360” >> /etc/security/limits.conf\nsudo echo “root soft nproc 65536” >> /etc/security/limits.conf\nsudo echo “root hard nproc 65536” >> /etc/security/limits.conf\nsudo echo “root soft nofile 65536” >> /etc/security/limits.conf\nsudo echo “root hard nofile 65536” >> /etc/security/limits.confSwappiness factor:\nThe current swappiness value is 60, as it is the value by default…Ports:\nThe 3 needed ports are opened.The error can be found reading the journal of the core:\njournalctl -k -p err;\nThe error has this output:\nkernel: Out of memory: Killed process 2923646 (mongod) total-vm:28964256kB, anon-rss:23559824kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:53796kB oom_score_adj:0Please help me solving this problem. Thank you in advance.Best regards ,\nJose", "username": "Jose_M_Aguero" }, { "code": "mongodmongod", "text": "Hello @Jose_M_Aguero ,Could you confirm that all mongod processes are run in their separate machine or VMs? Generally, running multiple mongod processes in a single machine is not a recommended setup since they will compete for resources and may lead to undesirable outcomes. From what you posted, I think the OS is running out of physical memory, the kernel’s out-of-memory killer (OOM-killer) kicks in and terminates processes to free up some RAM (in this case it’s mongodb, but could be any other).To prevent OOMkills, the general approach is to configure swap space on the server. If it still happening after this, then I tend to think that the hardware is insufficient to serve the workload you’re putting on it and upgrading the deployment might be an option to consider.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hellow Tarun ,Thank you for your reply.All the 3 proceses are running directly in Ubuntu, not in separated WM.For mongo processes I have read that the swappiness factor should be 1. This low value means the kernel will try to avoid swapping as much as possible where a higher value instead will make the kernel aggressively try to use swap space. This is the same as saying that Mongo does not use swap memory.For hardware RAM, I can say the problem will ocurr in any server with this configuration. I have seen it with 32, 64 and 128 GB hardware RAM.The solution I have found is to set the option --wiredTigerCacheSizeGB with a reasonable value of RAM for each process. For 64 GB machine, it would be “–wiredTigerCacheSizeGB 10”.Regards ,\nJose", "username": "Jose_M_Aguero" }, { "code": "mongodmongod", "text": "I’d like to mention that it’s strongly not recommended to run more than one mongod in a single machine/VM. Below blob is from Production notesThe default WiredTiger internal cache size value assumes that there is a single mongod instance per machine. 
If a single machine contains multiple MongoDB instances, then you should decrease the setting to accommodate the other mongod instances. While running multiple mongod is fine during the development phase, it’s best not to do this in a production environment. Note that although it’s possible to limit the WiredTiger cache size, it is not the only memory need of the mongod process. Things such as aggregation queries, connections, and other processes outside of WiredTiger require memory outside of the WiredTiger cache.", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problems with replica set of 3 nodes: one node falling down with out-of-memory error
2022-10-07T19:06:12.088Z
Problems with replica set of 3 nodes: one node falling down with out-of-memory error
2,290
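For reference, the cache-limiting workaround mentioned in the thread above can also be set in mongod.conf instead of on the command line. This is only a sketch: the 10 GB figure is the example value from the discussion, not a recommendation, and each mongod on the host needs its own file, with the sum of the cache sizes (plus non-cache memory) kept well below physical RAM.

    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 10   # example value from the thread; applies per mongod process, not per machine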
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"6169547919aa9900011c3995\"\n },\n \"TranscriptionID\": \"YQPTKEU8JP9844YQVT4WY4HNQP\",\n \"Status\": 1,\n \"Created\": {\n \"$date\": {\n \"$numberLong\": \"1633671595666\"\n }\n },\n \"Modified\": {\n \"$date\": {\n \"$numberLong\": \"1633671595666\"\n }\n },\n \"Tracks\": {\n \"A3-A4\": {\n \"Language\": \"en_US\",\n \"Sentences\": [\n {\n \"SentenceID\": \"20e8e7f1efd24098be28b3cb5cdbf081\",\n \"StartTime\": 0,\n \"EndTime\": 0,\n \"Speaker\": \"spk_0\",\n \"Content\": \"we should be recording everybody. So yes, this meeting is being recorded, abc\",\n \"IsApproved\": false,\n \"ModifiedContent\": \"we should be recording everybody. So yes, this meeting is being recorded,\",\n \"Words\": [\n {\n \"StartTime\": 8.84,\n \"EndTime\": 9.19,\n \"Type\": \"pronunciation\",\n \"Content\": \"we\",\n \"VocabularyFilterMatch\": false,\n \"Confidence\": 1\n }\n ]\n },\n {\n \"SentenceID\": \"fc80abf17f0c42b1833857a1dfb07481\",\n \"StartTime\": 0,\n \"EndTime\": 0,\n \"Speaker\": \"spk_0\",\n \"Content\": \"now we actually have to be somewhat professional. Okay,\",\n \"IsApproved\": true,\n \"ModifiedContent\": null,\n \"Words\": [\n {\n \"StartTime\": 16.74,\n \"EndTime\": 16.97,\n \"Type\": \"pronunciation\",\n \"Content\": \"now\",\n \"VocabularyFilterMatch\": false,\n \"Confidence\": 0.999\n }\n ]\n },\n {\n \"SentenceID\": \"25f4c1dd322941d4993c02a3270db5bb\",\n \"StartTime\": 8.84,\n \"EndTime\": 13.565,\n \"Speaker\": \"spk_0\",\n \"Content\": \"hahahaha A3-A4 Server\",\n \"IsApproved\": false,\n \"ModifiedContent\": \"we should be recording everybody. So yes, this meeting is being recorded,\",\n \"Words\": [\n {\n \"StartTime\": 0,\n \"EndTime\": 0,\n \"Type\": \"pronunciation\",\n \"Content\": \"we\",\n \"VocabularyFilterMatch\": false,\n \"Confidence\": 1\n }\n ]\n }\n ]\n },\n \"A5-A6\": {\n \"Language\": \"en_US\",\n \"Sentences\": [\n {\n \"SentenceID\": \"ff62bda2900e4121bbd221c3d294af7c\",\n \"StartTime\": 8.84,\n \"EndTime\": 13.565,\n \"Speaker\": \"spk_0\",\n \"Content\": \"we should be recording everybody. So yes, this meeting is being recorded,\",\n \"IsApproved\": false,\n \"ModifiedContent\": \"we should be recording everybody. So yes, this meeting is being recorded,\",\n \"Words\": [\n {\n \"StartTime\": 0,\n \"EndTime\": 0,\n \"Type\": \"pronunciation\",\n \"Content\": \"we\",\n \"VocabularyFilterMatch\": false,\n \"Confidence\": 1\n }\n ]\n },\n {\n \"SentenceID\": \"de7328fb670e47b5aa91f8b808cf3e18\",\n \"StartTime\": 16.74,\n \"EndTime\": 21.965,\n \"Speaker\": \"spk_0\",\n \"Content\": \"now we actually have to be somewhat professional. 
Okay, whatevs\",\n \"IsApproved\": true,\n \"ModifiedContent\": null,\n \"Words\": [\n {\n \"StartTime\": 0,\n \"EndTime\": 0,\n \"Type\": \"pronunciation\",\n \"Content\": \"now\",\n \"VocabularyFilterMatch\": false,\n \"Confidence\": 0.999\n }\n ]\n }\n ]\n }\n }\n}\n{\n \"_id\": \"01GF7QG56P13SBKJHTK51TFK40\",\n \"transcription:id\": \"01GCDK5YBHVBN4W2980Y619FTZC\",\n \"track:id\": \"A1-A2\",\n \"sentence:rev\": \"1\",\n \"sentence:id\": \"01GF7QG56P13SBKJHTK51TFK40\",\n \"startTime:decimal\": 0.1275,\n \"endTime:decimal\": 1.1375,\n \"speaker:text\": \"0\",\n \"content:text\": \"Then why do you want to kill\",\n \"isApproved:bool\": false,\n \"modifiedContent:text\": null,\n \"words\": [\n {\n \"startTime:decimal\": 0.1275,\n \"endTime:decimal\": 0.2975,\n \"type:text\": \"pronunciation\",\n \"content:text\": \"Then\",\n \"vocabularyFilterMatch:bool\": false,\n \"confidence:number\": 0.5578\n },\n {\n \"startTime:decimal\": 0.2975,\n \"endTime:decimal\": 0.4475,\n \"type:text\": \"pronunciation\",\n \"content:text\": \"why\",\n \"vocabularyFilterMatch:bool\": false,\n \"confidence:number\": 1\n },\n {\n \"startTime:decimal\": 0.4475,\n \"endTime:decimal\": 0.5375,\n \"type:text\": \"pronunciation\",\n \"content:text\": \"do\",\n \"vocabularyFilterMatch:bool\": false,\n \"confidence:number\": 1\n },\n {\n \"startTime:decimal\": 0.5375,\n \"endTime:decimal\": 0.6175,\n \"type:text\": \"pronunciation\",\n \"content:text\": \"you\",\n \"vocabularyFilterMatch:bool\": false,\n \"confidence:number\": 1\n },\n {\n \"startTime:decimal\": 0.6175,\n \"endTime:decimal\": 0.7875,\n \"type:text\": \"pronunciation\",\n \"content:text\": \"want\",\n \"vocabularyFilterMatch:bool\": false,\n \"confidence:number\": 0.9724\n },\n {\n \"startTime:decimal\": 0.7875,\n \"endTime:decimal\": 0.8525,\n \"type:text\": \"pronunciation\",\n \"content:text\": \"to\",\n \"vocabularyFilterMatch:bool\": false,\n \"confidence:number\": 0.9724\n },\n {\n \"startTime:decimal\": 0.8525,\n \"endTime:decimal\": 1.1375,\n \"type:text\": \"pronunciation\",\n \"content:text\": \"kill\",\n \"vocabularyFilterMatch:bool\": false,\n \"confidence:number\": 1\n }\n ]\n}\n", "text": "Hi everyone, I’ve got an example set of data above, Tracks is a dictionary with “A1-A2” as an example key and Language and a list of sentences as its value. I was wondering if it’s possible for me to unwind the sentences array inside tracks and output each sentence to a new document containing the Track key.Something like below;", "username": "Ye_Wyn" }, { "code": "db.collection.aggregate([\n {\n \"$set\": {\n \"Tracks\": {\n \"$objectToArray\": \"$Tracks\"\n }\n }\n },\n {\n \"$unwind\": \"$Tracks\"\n },\n {\n \"$unwind\": \"$Tracks.v.Sentences\"\n },\n {\n \"$set\": {\n \"trackId\": \"$Tracks.k\",\n \"sentence\": \"$Tracks.v.Sentences\"\n }\n },\n {\n \"$unset\": \"Tracks\"\n }\n])\n", "text": "Do you really want to “flatten” the data that much?Here’s a bit of flattening. You could continue it if you want/need.Try it on mongoplayground.net.", "username": "Cast_Away" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to access an array in a dictionary and project the array to a new document?
2022-10-13T03:53:10.426Z
Is it possible to access an array in a dictionary and project the array to a new document?
1,758
null
[ "aggregation" ]
[ { "code": "", "text": "Hi ,I want some understanding regarding the working of maxTimeMS option in Cursor /find / aggregate…From my understanding, find operation will internally have some deafult batch size associated with it, hence, if we fire some find query it will fetch and return the result in batches, if the result document exceeds the default batch size. [ For eg., lets say default batch size = 10 and result size = 100, then the find query internally return the results in 10 iterations (10 batches)]My doubt is that whether maxtimeMS is applicable for the entire cursor iterations(whole find operation) or else for single cursor iteration. Need some help on this. I referred the documentation, can’t able to get clarity. Can anyone able to help me on this one?I want to configure maxTimeMS for my queries in my project, hence thought of getting good clarity before using it.", "username": "Naveen_Kumar10" }, { "code": "cursor.maxTimeMS()maxTimeMS()maxTimeMS()maxTimeMS()...\"msg\":\"getMore command executor error\",\"attr\":{\"error\":\n{\"code\":50,\"codeName\":\"MaxTimeMSExpired\",\"errmsg\":\"operation exceeded time limit\"}\n", "text": "Hi @Naveen_Kumar10 - Welcome to the community.My doubt is that whether maxtimeMS is applicable for the entire cursor iterations(whole find operation) or else for single cursor iteration. Need some help on this.For the cursor.maxTimeMS(), queries that generate multiple batches of results continue to return batches until the cursor exceeds its allotted time limit. maxTimeMS() relates to processing time (not the cursor’s idle time). The maxTimeMS() would be applicable to the whole cursor. E.g. If you have 10 batches for a particular operation and the 4th batch’s cumulative processing time exceeds the maxTimeMS() configured, the operation would terminate (1st + 2nd + 3rd + 4th processing time > maxTimeMS() configured value).You can see an error message in the logs (if configured to log these details) somewhat similar to the below depending on your MongoDB version:Please note that Session Idle Timeout Overrides maxTimeMS.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks for the detailed explanation @Jason_Tran . Now I got a clear understanding.", "username": "Naveen_Kumar10" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Clarification regarding working of maxTimeMS options
2022-09-22T05:35:20.593Z
Clarification regarding working of maxTimeMS options
1,907
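A minimal mongosh sketch of the behaviour described in the thread above; the collection and field names are placeholders.

    // Applies a 5-second cumulative processing-time budget to the whole cursor,
    // including later getMore batches; exceeding it fails with MaxTimeMSExpired (code 50).
    db.orders.find({ status: "A" }).maxTimeMS(5000)

    // Equivalent option for an aggregation
    db.orders.aggregate([ { $match: { status: "A" } } ], { maxTimeMS: 5000 })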
https://www.mongodb.com/…e_2_1024x518.png
[ "queries", "realm-web" ]
[ { "code": "", "text": "\nMONGODB1842×933 193 KB\ninstead of the name of the cluster use the Atlas service name.\nYou can find Atlas Service Name as follow go to:1- App services.\n2- Under Manage click select Linked Data Sources.\n3- You will find Atlas Service Name", "username": "Ahlam_bey" }, { "code": "", "text": "Hi @Ahlam_bey,Thanks for point this one out. I’ll raise this with the team regarding the variable naming on the screenshot you have provided. Could you just confirm the exact page link where you have seen this screenshot? I presume it is the Realm - Query your Document page but correct me if I am wrong here.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
A mistake on Realm documentation
2022-10-08T16:09:06.277Z
A mistake on Realm documentation
2,098
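To illustrate what the corrected screenshot describes, here is a hedged Atlas Function sketch. The service name string must match whatever "Atlas Service Name" shows under Linked Data Sources ("mongodb-atlas" is the usual default), and the database and collection names are placeholders.

    exports = async function () {
      // Use the linked data source's service name, not the Atlas cluster name
      const cluster = context.services.get("mongodb-atlas");
      const movies = cluster.db("sample_mflix").collection("movies");
      return movies.findOne({ title: "The Matrix" });
    };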
null
[ "java" ]
[ { "code": "[12:13:11 ERROR]: Could not pass event PlayerJoinEvent to Login v0.1-ALPHA\norg.bukkit.event.EventException\n at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:302) ~[PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:78) ~[PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:67) ~[PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at org.bukkit.plugin.SimplePluginManager.fireEvent(SimplePluginManager.java:541) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:517) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.PlayerList.onPlayerJoin(PlayerList.java:320) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.PlayerList.a(PlayerList.java:173) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.LoginListener.b(LoginListener.java:144) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.LoginListener.c(LoginListener.java:54) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.NetworkManager.a(NetworkManager.java:231) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.ServerConnection.c(ServerConnection.java:148) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.MinecraftServer.B(MinecraftServer.java:982) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.DedicatedServer.B(DedicatedServer.java:378) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.MinecraftServer.A(MinecraftServer.java:819) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at net.minecraft.server.v1_8_R3.MinecraftServer.run(MinecraftServer.java:715) [PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]\nCaused by: org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class com.redefantasy.core.shared.world.location.SerializedLocation.\n at org.bson.internal.CodecCache.lambda$getOrThrow$1(CodecCache.java:52) ~[?:?]\n at java.util.Optional.orElseThrow(Optional.java:290) ~[?:1.8.0_282]\n at org.bson.internal.CodecCache.getOrThrow(CodecCache.java:51) ~[?:?]\n at org.bson.internal.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:64) ~[?:?]\n at org.bson.internal.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:39) ~[?:?]\n at com.mongodb.internal.operation.Operations.createFindOperation(Operations.java:163) ~[?:?]\n at com.mongodb.internal.operation.Operations.findFirst(Operations.java:148) ~[?:?]\n at com.mongodb.internal.operation.SyncOperations.findFirst(SyncOperations.java:88) ~[?:?]\n at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:200) ~[?:?]\n at com.redefantasy.login.storage.repositories.implementations.MongoSpawnRepository.fetch(MongoSpawnRepository.kt:24) ~[?:?]\n at com.redefantasy.login.listeners.GeneralListeners.on(GeneralListeners.kt:81) ~[?:?]\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_282]\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_282]\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_282]\n at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_282]\n at 
org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:300) ~[PaperSpigot.jar:git-PaperSpigot-\"e447335\"]\n ... 15 more\npackage com.redefantasy.core.shared.providers.databases.mongo\n\nimport com.mongodb.ConnectionString\nimport com.mongodb.MongoClientSettings\nimport com.mongodb.MongoCredential\nimport com.mongodb.client.MongoClient\nimport com.mongodb.client.MongoClients\nimport com.mongodb.client.MongoDatabase\nimport com.redefantasy.core.shared.providers.databases.IDatabaseProvider\nimport org.bson.codecs.configuration.CodecRegistries\nimport org.bson.codecs.pojo.PojoCodecProvider\nimport java.net.InetSocketAddress\n\n/**\n * @author SrGutyerrez\n **/\nclass MongoDatabaseProvider(\n private val address: InetSocketAddress,\n private val user: String,\n private val password: String,\n private val database: String\n) : IDatabaseProvider<MongoDatabase> {\n\n private lateinit var mongoClient: MongoClient\n private lateinit var mongoDatabase: MongoDatabase\n\n override fun prepare() {\n val mongoClientSettings = MongoClientSettings.builder()\n .applyConnectionString(\n ConnectionString(\n \"mongodb://${this.address.address.hostAddress}:${this.address.port}\"\n )\n )\n .credential(\n MongoCredential.createCredential(\n this.user,\n \"admin\",\n this.password.toCharArray()\n )\n )\n .codecRegistry(\n CodecRegistries.fromRegistries(\n MongoClientSettings.getDefaultCodecRegistry(),\n CodecRegistries.fromProviders(\n PojoCodecProvider.builder()\n .automatic(true)\n .build()\n )\n )\n )\n .build()\n\n this.mongoClient = MongoClients.create(mongoClientSettings)\n\n this.mongoDatabase = mongoClient.getDatabase(this.database)\n }\n\n override fun provide() = this.mongoDatabase\n\n override fun shutdown() = this.mongoClient.close()\n\n}", "text": "Exception:Code:", "username": "Gutyerrez_N_A" }, { "code": "com.redefantasy.core.shared.world.location.SerializedLocation", "text": "Hi @Gutyerrez_N_A,Can you share the code of the com.redefantasy.core.shared.world.location.SerializedLocation class? Is it a Kotlin data class?Thanks,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "'org.bson.codecs.configuration.CodecConfigurationException'\n return template.save(\n MyClass(\n id = null,\n date = Date(System.currentTimeMillis()).toString(),\n ), \"<collectionName>\"\n )\ntemplate.getCollection(\"<collectionName>\").find(MyClass::class.java).toList()\nval pojoCodecProvider: CodecProvider =\n PojoCodecProvider.builder().automatic(true)\n .register(MyClass::class.java)\n .build()\nval pojoCodecRegistry = fromRegistries(getDefaultCodecRegistry(), fromProviders(pojoCodecProvider))\n\ntemplate.getCollection(\"<collectionName>\").withCodecRegistry(pojoCodecRegistry).find(MyClass::class.java).toList()\n\n", "text": "Hi Jeff,I am having the same issue. I am using a Kotlin data class and got this error:I store the data doing:Then I am trying to access it like so:So I tried adding registries manually and still am having the same issue.Thanks,\nJake", "username": "Jake_Nieto" }, { "code": "@Document\ndata class MyClass @BsonCreator constructor(\n @BsonId\n @BsonRepresentation(BsonType.OBJECT_ID)\n var id: String?,\n @BsonProperty(\"date\")\n val date: String,\n)\n@BsonCreatorconstructor@BsonProperty(\"<fieldName>\")", "text": "Figured out my issue. It’s necessary to have your data class defined as so:You need the @BsonCreator annotation and constructor keyword used. Additionally, each field needs to have a @BsonProperty(\"<fieldName>\") annotation.", "username": "Jake_Nieto" } ]
Can't find a codec for my class (kotlin)
2021-03-09T17:18:46.736Z
Can’t find a codec for my class (kotlin)
6,481
null
[ "queries", "scala" ]
[ { "code": "raw JSONroot.leafroot.level1.level2.level3.level4.leaf", "text": "I looked at the latest document (v2.14) for mongodrdl.I cannot find an option to control the level of depth when it attempts to map out and infer the schema from the documents being sampled in my collection.The context here are:I believe at a very minimal these couple of features will help me to tamed the large nested documents, to enable me to work with a simplified SQL schema with attributes that matters for my query.Open for ideas, options, or different perspectives.\nThank you.", "username": "cequencer" }, { "code": "mongodrdl", "text": "The documents in the given collection are so nested that mongodrdl is emitting the following warnings.<… clipped the same repeating messages…>\n2022-10-12T15:48:40.123-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n2022-10-12T15:48:40.123-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n2022-10-12T15:48:40.126-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n2022-10-12T15:48:40.126-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n2022-10-12T15:48:40.126-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n2022-10-12T15:48:40.126-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n2022-10-12T15:48:40.128-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n2022-10-12T15:48:40.128-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n2022-10-12T15:48:40.129-0700 W SCHEMA [sampler] cannot map field “attribute1” - collection “myCollectionName” has reached configured field limit 2000\n<… clipped the same repeating messages…>", "username": "cequencer" } ]
Mongodrdl option to limit the depth of nesting levels?
2022-10-12T23:42:03.642Z
Mongodrdl option to limit the depth of nesting levels?
1,967
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "const query = PersonModel.findOne({ userId });\nquery.then(function (foundPerson) {\n\tconsole.log(foundPerson);\n});\nreturn true;\n", "text": "I tried many ways to get the following code to work. I’ve tried async/await, .then, straight code with no decorations. The function loginPerson always returns before the findOne executes.app.post(’/login’, function (req, res) {\n// console.log(‘Login Requested’);\nif (db.loginPerson(req.body.userId, req.body.password)) {\nconsole.log(‘Login successful’);\n} else {\nconsole.log(‘Login failed’);\n}\n});exports.loginPerson = function (userId, password) {\nconsole.log(userId);\nconsole.log(password);};", "username": "Jim_Olivi" }, { "code": "", "text": "when you deal with “async” function, you need to “await” everywhere needed or use “then/catch” blocks everywhere you have them.app.post(’/login’, function (req, res) {\n// console.log(‘Login Requested’);\nif (db.loginPerson(req.body.userId, req.body.password)) {\nconsole.log(‘Login successful’);\n} else {\nconsole.log(‘Login failed’);\n}\n});in this block of code, your “function” needs to be “async” and you have to “await” for “db.loginPerson”, or move success/fail messages along with login logic into then/catch blocks.follow the same in your loginPerson function. either use async/await (not then/catch) or make it a Promise and resolve/reject depending on the query result.", "username": "Yilmaz_Durmaz" } ]
Mongoose findOne async
2022-10-12T16:35:20.311Z
Mongoose findOne async
2,951
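One way to restructure the code along the lines of the reply above, using async/await end to end. This is only a sketch: PersonModel and app are assumed to be the mongoose model and express app from the original snippets, and the password comparison is illustrative only (real code would use hashing).

    // db.js — resolves once the query completes; findOne returns null when nothing matches
    exports.loginPerson = async function (userId, password) {
      const person = await PersonModel.findOne({ userId });
      return person !== null && person.password === password; // plain-text compare for illustration only
    };

    // app.js — the route handler must also be async so it can await the lookup
    app.post('/login', async function (req, res) {
      const ok = await db.loginPerson(req.body.userId, req.body.password);
      if (ok) {
        console.log('Login successful');
        res.sendStatus(200);
      } else {
        console.log('Login failed');
        res.sendStatus(401);
      }
    });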
null
[ "golang", "time-series" ]
[ { "code": "\tvar timeseriesMetricsName = \"timeseries_metrics\"\n\t// Timeseries collections must be explicitly created so we explicitly create it here\n\terr = Database.CreateCollection(ctx, timeseriesMetricsName, &options.CreateCollectionOptions{\n\t\tTimeSeriesOptions: &options.TimeSeriesOptions{\n\t\t\tTimeField: \"created\",\n\t\t\tMetaField: aws.String(\"metadata\"),\n\t\t\tGranularity: aws.String(\"hours\"),\n\t\t}})\n\t// If it already exists then we swallow that error, otherwise we panic on all others\n\tif err != nil {\n\t\tif !strings.HasPrefix(err.Error(), \"(NamespaceExists) A timeseries collection already exists\") {\n\t\t\tlog.Fatal(fmt.Sprintf(\"Error creating collection [%s]: [%+v]\", timeseriesMetricsName, err))\n\t\t} else {\n\t\t\tfmt.Println(fmt.Sprintf(\"%s table already exists. continuing.\", timeseriesMetricsName))\n\t\t}\n\t} else {\n\t\tfmt.Println(fmt.Sprintf(\"Successfully created %s table for the first time.\", timeseriesMetricsName))\n\t}\n", "text": "It would be awesome if there was a non-cludgy way to initialize timeseries collections. Creating a normal collection / handle is easy in golang and if it doesn’t exist, it gets created. It’s seamless. This isn’t the case with timeseries collections as far as I can tell. Here’s something terrible I hacked together:", "username": "kwM5l76i6b7pSZNg_fHM5P2x15BDHxlI1" }, { "code": "CreateIfNotExist\tDatabase = Client.Database(Name)\n\n\tvar timeseriesMetricsName = \"timeseries_metrics\"\n\texists := false\n\tnames, err := Database.ListCollectionNames(ctx, bson.D{}, nil)\n\tfor _, name := range names {\n\t\tif name == timeseriesMetricsName {\n\t\t\texists = true\n\t\t\tfmt.Println(fmt.Sprintf(\"%s table already exists. continuing.\", timeseriesMetricsName))\n\t\t}\n\t}\n\n\tif !exists {\n\t\t// Timeseries collections must be explicitly created so we explicitly create it here\n\t\terr = Database.CreateCollection(ctx, timeseriesMetricsName, &options.CreateCollectionOptions{\n\t\t\tTimeSeriesOptions: &options.TimeSeriesOptions{\n\t\t\t\tTimeField: \"created\",\n\t\t\t\tMetaField: aws.String(\"metadata\"),\n\t\t\t\tGranularity: aws.String(\"hours\"),\n\t\t\t}})\n\t\tif err != nil {\n\t\t\tlog.Fatal(fmt.Sprintf(\"Error creating collection [%s]: [%+v]\", timeseriesMetricsName, err))\n\t\t} else {\n\t\t\tfmt.Println(fmt.Sprintf(\"Successfully created %s table for the first time.\", timeseriesMetricsName))\n\n\t\t}\n\t}\n\tAdminColl = Database.Collection(\"admins\")\n", "text": "Here’s a far less disgusting implementation but it’s still not pretty because golang doesn’t search arrays of strings well and you still have to handle a lot of errors.A native driver function for CreateIfNotExist would be awesome. Even better if the driver handled timeseries collections identically do regular collections and create if necessary otherwise return a handle to the existing collection.Timeseries create if not exist code:Regular collection create if not exist code:The difference here is huge.", "username": "kwM5l76i6b7pSZNg_fHM5P2x15BDHxlI1" } ]
Is there an easier way to create time series collections in Go?
2022-10-12T16:22:59.667Z
Is there an easier way to create time series collections in Go?
1,952
null
[ "golang" ]
[ { "code": "cliente_local, err := mongo.NewClient(\n\t\toptions.Client().ApplyURI(\n\t\t\tuseful.CadenaConexion))\n\tuseful.Check(fmt.Errorf(\"[-]It cannot create Newclient\", err))\n\n\tctx, cancelar = context.WithTimeout(context.Background(), 10*time.Second)\n\terr = cliente_local.Connect(ctx)\n\tif err != nil {\n\t\tuseful.Check(fmt.Errorf(\"[-] It cannot create Connect\", err))\n\t}\n\tdefer cancelar()\n\n\t// Check the connection\n\terr = cliente_local.Ping(ctx, nil)\n\n\tuseful.Check(fmt.Errorf(\"[-]It cannot Ping\", err))\n\n\tlog.Println(\"[+]Connected to MongoDB Atlas\")\nvar coll *mongo.Collection = mongo_cliente.Collection(\"config\")\n\n\texists = true\n\n\t// Find the document for which the _id field matches id.\n\t// Specify the Sort option to sort the documents by age.\n\t// The first document in the sorted order will be returned.\n\topts := options.FindOne()\n\tvar result bson.M\n\n\tfilter := bson.D{{Key: \"first_run\", Value: true}}\n\n\terr := coll.FindOne(\n\t\tcontext.TODO(),\n\t\tfilter,\n\t\topts,\n\t).Decode(&result)\n", "text": "Hello everybody,First, I connect with MongoDBAtlas successfully:After I use a function to check if a collection exists:** Is there another way to check if a collection exists, anyway, with my code does not work **Thanks in advance!!", "username": "Daniel_Leyva_Cortes" }, { "code": "\tclientOpts := options.Client().ApplyURI(uri)\n\tclient, err := mongo.Connect(ctx, clientOpts)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\terr = client.Ping(ctx, readpref.Primary())\n\tif err != nil {\n\t\tlog.Println(\"mongoDB is down\")\n\t\tlog.Fatal(err.Error())\n\t}\n\n\tcNames, err := client.Database(\"company\").ListCollectionNames(ctx, bson.D{})\n\tif err != nil {\n\t\tlog.Fatal(err.Error())\n\t}\n\tfmt.Println(cNames)\n\n\n", "text": "Here is the simple snip code to print all the collection names for you DB, once you get all the collections you can check if the the collection is present or not.Please do let me know if you are looking for something else.", "username": "Ajay_Mahar" }, { "code": "", "text": "2 posts were split to a new topic: Is there an easier way to create time series collections in Go?", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
How to check if a collection exists
2022-01-16T17:42:55.450Z
How to check if a collection exists
8,010
null
[]
[ { "code": "", "text": "Hi everyone! I have a database for a project I am working on. Where I am tracking the buys of a specific asset. When that buy gets tracked, I have a collection for that specific asset.So my collection is called “Assets”, then each asset has its own object/scheme with a “lastBuyTimeStamp” and “totalBuys” field that gets updated when a buy of that asset comes through.Now i am wanting to pull data to get average number of buys in the past hour and 24 hours, to then also get an average buys per minute.I feel like with these 2 fields, I may not be able to get this, so looking for some tips or guidance on the best way to store data to get averages I am looking for. I am currently in testing, so hoping to get this worked out before we roll out to production. Thank you!", "username": "Ninja_Dev" }, { "code": "", "text": "@Ninja_Dev If you store a time with each buy, you should be able to calculate buy/time rates (“averages”).", "username": "Cast_Away" }, { "code": "", "text": "Thank you. What I ended up doing was something like that. I just recorded a timestamp on each buy, I then added a function that runs every so often, and removes any objects over the specified time period, and this does the trick!\n\nimage868×584 44.7 KB\n", "username": "Ninja_Dev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best way to store data to calculate averages?
2022-10-11T16:38:13.418Z
Best way to store data to calculate averages?
1,151
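A rough sketch of the same idea expressed as a query rather than a pruning function, assuming the asset document keeps an embedded array of buys each carrying a timestamp; the collection and field names (assets, symbol, buys, timestamp) are hypothetical.

    const oneHourAgo = new Date(Date.now() - 60 * 60 * 1000);
    db.assets.aggregate([
      { $match: { symbol: "XYZ" } },
      // Count only the buys recorded in the last hour
      { $project: {
          buysLastHour: {
            $size: { $filter: { input: "$buys", as: "b", cond: { $gte: ["$$b.timestamp", oneHourAgo] } } }
          }
      } },
      // Convert the hourly count to an average per minute
      { $addFields: { buysPerMinute: { $divide: ["$buysLastHour", 60] } } }
    ])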
null
[ "app-services-user-auth", "react-native" ]
[ { "code": "Send Confirmation EmailConfirmation Functionapp.emailPasswordAuth.registerUser({ email, password })callResetPasswordFunctionresendConfirmationEmailresetPassword", "text": "1.I’m trying to use realm react-native sdk。\n2.In the realm settings, i set Send Confirmation Email or Confirmation Function options。\n3.when i call app.emailPasswordAuth.registerUser({ email, password }) I did not receive the email。\n4.Then I tried different email addresses。\n5.Try calling callResetPasswordFunction resendConfirmationEmail resetPassword\nI still did not receive the email…", "username": "hou_andy" }, { "code": "", "text": "Are you sure you enabled both the email and password authentication as well as the email confirmation feature under the authentication tab in the realm dashboard?", "username": "Wayne_Barker" }, { "code": "", "text": "yes of course, this Email Confirmation URL need to be changed?\n\nScreenshot from 2022-10-12 07-19-541210×536 43.2 KB\n", "username": "hou_andy" }, { "code": "registerUser()confirmUser()tokentokenIDconfirmUser({token, tokenId})", "text": "Hi,Welcome to the MongoDB Developer Community! Regarding your problem:3.when i call app.emailPasswordAuth.registerUser({ email, password }) I did not receive the email。The registerUser() function won’t automatically send a confimation email for the user. For that you need to implement the confirmUser() function after you have registered the user since you need the token and tokenID parameters that the function will return in order to send with confirmUser({token, tokenId}).I would recommend checking out our Atlas App Services documentation section regarding the user email confirmation so you can get a better understanding on the steps involved in the process.I hope this helps!", "username": "Mar_Cabrera" } ]
How to receive confirmation emails using Realm SDK
2022-10-11T08:45:41.204Z
How to receive confirmation emails using Realm SDK
2,294
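To illustrate the flow described above, a hedged sketch of the web page that the Email Confirmation URL points to: the emailed link carries token and tokenId as query-string parameters, and the page passes them to confirmUser. The exact confirmUser signature can differ between SDK versions; the object form shown here follows the thread.

    // Runs on the page served at the Email Confirmation URL
    const params = new URLSearchParams(window.location.search);
    const token = params.get("token");
    const tokenId = params.get("tokenId");

    // Confirm the pending user; afterwards they can log in with email/password
    await app.emailPasswordAuth.confirmUser({ token, tokenId });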
null
[ "data-modeling" ]
[ { "code": "", "text": "I need to design data model for combo discount\nI have the following scenario for -\nitem1 + item2 => 10% discount\nitem1 + item3 => 12% discount\nitem2 + item4 => 11% discount and so on…The number of items in combination could be variable. eg:\nitem1 + item2 + item3 + item4 => 15%discountCan someone provide guidance about how to approach this problem using mongodb?", "username": "SAURAV_KUMAR" }, { "code": "", "text": "From the examples you provided, it looks like there is no logic on how you change the discount. It looks like it is completely arbitrary to which items are combined.What happen when you have item1, item2 and item3? Is it 10% because i1 and i2 is there? Or is it 12% because i1 and i3 is there? It cannot really be 10% because I would not have any incentive of having i3 also. If 12%, then i2 does not increase my discount, now I am disappointed, I will buy i2 else where.You need to develop a better and predictable way to compute your discount, otherwise you will end up with a big spaghetti of if-then-else.Example:", "username": "steevej" }, { "code": "discountdiscount_reason", "text": "Hi @SAURAV_KUMAR ,I think the most straightforward approach would be to compute your combo discounts in application logic and save the calculated discount value (and perhaps a discount_reason) in your data model. As @steevej mentioned, the logic for the discount calculation will also need a clearer calculation as the expected outcome isn’t obvious from the current information.If you are dealing with values that need fractional precision (for example, a calculated currency amount) you should also use the Decimal128 BSON type for those values.Regards,\nStennie", "username": "Stennie_X" } ]
Data Model for Combination Discount
2022-10-04T11:15:00.190Z
Data Model for Combination Discount
1,964
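A small sketch of what the precomputed approach could look like on an order document, with the discount calculated in application logic and stored as Decimal128; the field names and values are illustrative only.

    db.orders.insertOne({
      items: ["item1", "item3"],
      subtotal: NumberDecimal("100.00"),
      discountPercent: NumberDecimal("12"),     // computed by application logic
      discountReason: "combo: item1 + item3",   // audit trail for why the discount applied
      total: NumberDecimal("88.00")
    })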
null
[]
[ { "code": "", "text": "Based on https://www.mongodb.com/docs/manual/administration/production-notes/#platform-supportppc64le is only supported for Enterprise version.I’ve managed to build from source on ppc64le on RHEL 8.6. I have yet to test the functionality.However, my question is, what does this actually means?Does it means that the developer hasn’t fully tested (or don’t plan to test) the MongoDB Community version for ppc64le?Or if there’s any issue, the community won’t entertain it?Please advice.Thanks!", "username": "psyntium" }, { "code": "", "text": "Another issue that I can see is on applying fixes. You will have to manually pull and rebuild from source on the latest changes. Then manually apply the binaries on the servers.", "username": "psyntium" }, { "code": "", "text": "A belated welcome to the MongoDB Community @psyntium !However, my question is, what does this actually means?Does it means that the developer hasn’t fully tested (or don’t plan to test) the MongoDB Community version for ppc64le?Unsupported platforms and server edition combinations do not have official binary packages or build/testing infrastructure. You can still build from source and run server tests to test your build environment.Or if there’s any issue, the community won’t entertain it?You can always submit bug reports in the SERVER project at jira.mongodb.org, but there will be limited assistance for platform-specific issues because test environments are less readily available.Regards,\nStennie", "username": "Stennie_X" } ]
Build from source for ppc64le support
2022-07-14T23:53:05.185Z
Build from source for ppc64le support
1,841
null
[ "transactions" ]
[ { "code": "", "text": "I have 5 nodes in total - 3 in EU (1 primary and 2 secondary) and 2 in US (both secondary).\nIf the transaction writes the data in primary, how much time does it take to update the secondary nodes with data written in primary node?\nWill the time taken to update all the secondary nodes are same irrespective of the region?", "username": "Chaitanya_Tilwankar" }, { "code": "", "text": "Hi @Chaitanya_Tilwankar ,The replication is asynchronous therefore its hard to say the exact timing , and it depends on many factors like network and compute speed.What you can control is the write concern which will only acknowledge a write when it happens on specific number of nodes , based on majority/tag or number.Ty\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks Pavel. A basic question but in the deployment I mentioned above, will all the instances (say 5) of my server be connected to only the primary node? And only when primary goes down, then all those connections will move to next node taking over and acting as primary?", "username": "Chaitanya_Tilwankar" }, { "code": "", "text": "So if you use a connection for replica set you will be connected to all nodes , when writes are only happening on Primary. Once a new primary is elected for any reason writes will be redirected there…Now for reads you have a setting called readPreference where you can read from secondary or nearest nodes. This is only for read operations.Thanks", "username": "Pavel_Duchovny" }, { "code": "w:majority", "text": "Welcome to the MongoDB Community @Chaitanya_Tilwankar !A basic question but in the deployment I mentioned above, will all the instances (say 5) of my server be connected to only the primary node?By default secondaries will either sync from the current primary or a closer secondary using Chained Replication. All members of a replica set also send regular heartbeats to each other.In your deployment scenario with replica set members in the EU and US, it is likely that replica set members will choose a local sync source where possible to minimise the network latency (EU from the primary in EU, one of the US members from EU and the other US member from the US) .As Pavel mentioned, client applications will be connected to all members of your replica set and automatically discover changes in topology.how much time does it take to update the secondary nodes with data written in primary node?The default write concern for MongoDB 5.0+ drivers is w:majority, which would require write acknowledgement from 3 members of your replica set (assuming all 5 are voting).Your deployment may have a few broad latency characteristics:If all replica set member are healthy, majority writes can be acknowledged by the 3 members in your primary data centre in EU.If any of your EU replica set members are unavailable, majority writes will require acknowledgement from one of your replica set members in the US so majority write latency will likely increase.Since you refer to EU as your primary data centre and only have two members in the US (which would not be able to sustain or elect a primary in the event of a network partition) I assume that the US members will have lower priorities. If you also made the US members non-voting this would provide more predictable majority write latency: there would be 3 voting members (all in EU) and the strict majority required to acknowledge majority writes would be 2. 
In this scenario you could also use Replica Set Tag Sets with a custom write concern to ensure important writes are acknowledged by members in both regions.Regards,\nStennie", "username": "Stennie_X" } ]
Data consistency
2022-10-03T11:46:49.222Z
Data consistency
1,926
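As a hedged illustration of the tag-set idea in the last reply: the member indexes, tag values, and mode name below are placeholders, and the reconfig should be checked against the actual replica set configuration before use. The mode { region: 2 } requires acknowledgement from members covering two distinct region tag values, i.e. at least one EU and one US member.

    // Tag members by region, then define a custom write-concern mode
    cfg = rs.conf()
    cfg.members[0].tags = { region: "EU" }   // repeat for the remaining members
    cfg.members[3].tags = { region: "US" }
    cfg.settings.getLastErrorModes = { bothRegions: { region: 2 } }
    rs.reconfig(cfg)

    // Important writes can then request that mode explicitly
    db.payments.insertOne({ amount: 10 }, { writeConcern: { w: "bothRegions" } })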
null
[ "change-streams", "time-series" ]
[ { "code": "", "text": "Hi, We recently updated to mongoDB 5 and have one more latest version of mongoDB 6.\nWe are storing data in time series introduced in mongo 5. We want to notify whenever new data from a particular IoT device. I have read that change stream is currently not supported for time series so is there any alternative for change streams for time series collection.", "username": "Patrik_Lindergren" }, { "code": "Planned", "text": "Hi @Patrik_Lindergren welcome to the community!I believe your question is answered in the thread: collection.Watch for time seriesHowever there is a suggestion in the MongoDB Feedback Engine regarding this subject: Change streams and Triggers for Time Series Collections – MongoDB Feedback Engine and at the moment the status of the idea is Planned so you might want to keep an eye on this.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi, Thanks for update on the change streams\nBut any timeline that this feature will be implemented?\nAs of now to simulate change streams we are calling API at regular intervals which gets latest data received from IoT devices and we have like already more than 500+ devices sending data and we need latest data to updated on UI, and this is creating too much load on servers so any suggestions on how to improve without change streams", "username": "Patrik_Lindergren" }, { "code": "", "text": "But any timeline that this feature will be implemented?Unfortunately I cannot say beyond that it is being planned, since it would be prioritized by the product team in relation with other planned improvements on the server.Regarding workaround, is the linked topic’s potential solution not working out for you?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi,\nWe tried the solution in a bit different way and seems to work, but we have to create a duplication collection kinda so it would be much easier if we have change streams on time-series database to reduce data duplication", "username": "Patrik_Lindergren" } ]
MongoDB timeseries change stream support or any alternative
2022-10-08T06:34:05.069Z
MongoDB timeseries change stream support or any alternative
2,844
null
[ "aggregation", "compass" ]
[ { "code": "groupBybookingTariffKeybookingCachedeletedBookingsCachebookingTariffKeybookingCachedeletedBookingsCachebookingCachedeletedBookingsCachebookingCachelookupbookingTariffKeydeletedBookingsCache[{\n $match: {\n appId: 'myApp'\n }\n}, {\n $lookup: {\n from: 'bookingCache',\n localField: 'bookingCacheId',\n foreignField: '_id',\n as: 'bookingsWithTariff'\n }\n}, {\n $unwind: {\n path: '$bookingsWithTariff'\n }\n}, {\n $group: {\n _id: {\n locationId: '$bookingsWithTariff.locationID._id'\n },\n bookings: {\n $push: '$bookingsWithTariff'\n }}\n}, {\n $match: {\n bookings: {\n $elemMatch: {\n 'metadata.updatedAt': {\n $gte: ISODate('2021-03-29T14:13:38.046Z'),\n $lte: ISODate('2022-03-29T15:00:00.000Z')\n }}}}\n}, {\n $unionWith: {\n coll: 'deletedbookingsCache',\n pipeline: [{\n $match: {\n 'locationID._id': {\n $exists: true\n }}\n },{\n $group: {\n _id: {\n locationId: '$locationID._id'\n },\n bookings: {\n $push: '$$ROOT'\n }}\n },{\n $match: {\n bookings: {\n $elemMatch: {\n 'metadata.updatedAt': {\n $gte: ISODate('2022-03-29T14:13:38.046Z'),\n $lte: ISODate('2022-03-29T15:00:00.000Z')\n }}}}}]}\n}, {\n $group: {\n _id: '$_id.locationId',\n bookings: {\n $addToSet: '$bookings'\n }}\n}, {\n $facet: {\n result: [{\n $count: 'total'\n }],\n data: [{\n $sort: {\n _id: 1\n }\n },{\n $skip: 0\n },{\n $limit: 10\n }]}}]\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"c58d9a7a-786b-4e7f-8092-6ef9f6990f8e.bookingTariffKey\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"appId\": {\n \"$eq\": \"myApp\"\n }\n },\n \"queryHash\": \"B196AC43\",\n \"planCacheKey\": \"3CEC6D3F\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n \"bookingCacheId\": 1,\n \"bookingsWithTariff\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"appId\": 1\n },\n \"indexName\": \"appId\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"appId\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"appId\": [\n \"[\\\"myApp\\\", \\\"myApp\\\"]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 368473,\n \"executionTimeMillis\": 70076,\n \"totalKeysExamined\": 368473,\n \"totalDocsExamined\": 368473,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 368473,\n \"executionTimeMillisEstimate\": 429,\n \"works\": 368474,\n \"advanced\": 368473,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 384,\n \"restoreState\": 384,\n \"isEOF\": 1,\n \"transformBy\": {\n \"bookingCacheId\": 1,\n \"bookingsWithTariff\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 368473,\n \"executionTimeMillisEstimate\": 398,\n \"works\": 368474,\n \"advanced\": 368473,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 384,\n \"restoreState\": 384,\n \"isEOF\": 1,\n \"docsExamined\": 368473,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 368473,\n \"executionTimeMillisEstimate\": 107,\n \"works\": 368474,\n \"advanced\": 368473,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 384,\n \"restoreState\": 384,\n \"isEOF\": 1,\n \"keyPattern\": {\n 
\"appId\": 1\n },\n \"indexName\": \"appId\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"appId\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"appId\": [\n \"[\\\"myApp\\\", \\\"myApp\\\"]\"\n ]\n },\n \"keysExamined\": 368473,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 368473,\n \"executionTimeMillisEstimate\": 1211\n },\n {\n \"$lookup\": {\n \"from\": \"bookingCache\",\n \"as\": \"bookingsWithTariff\",\n \"localField\": \"bookingCacheId\",\n \"foreignField\": \"_id\",\n \"unwinding\": {\n \"preserveNullAndEmptyArrays\": false\n }\n },\n \"nReturned\": 367963,\n \"executionTimeMillisEstimate\": 45117\n },\n {\n \"$group\": {\n \"_id\": {\n \"locationId\": \"$bookingsWithTariff.locationID._id\"\n },\n \"bookings\": {\n \"$push\": \"$bookingsWithTariff\"\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"bookings\": 102766973\n },\n \"totalOutputDataSizeBytes\": {\n \"$numberLong\": \"2218104218\"\n },\n \"usedDisk\": true,\n \"nReturned\": 131365,\n \"executionTimeMillisEstimate\": 63719\n },\n {\n \"$match\": {\n \"bookings\": {\n \"$elemMatch\": {\n \"$and\": [\n {\n \"metadata.updatedAt\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1617027218046\"\n }\n }\n }\n },\n {\n \"metadata.updatedAt\": {\n \"$lte\": {\n \"$date\": {\n \"$numberLong\": \"1648566000000\"\n }\n }\n }\n }\n ]\n }\n }\n },\n \"nReturned\": 660,\n \"executionTimeMillisEstimate\": 65703\n },\n {\n \"$unionWith\": {\n \"coll\": \"deletedbookingsCache\",\n \"pipeline\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"c58d9a7a-786b-4e7f-8092-6ef9f6990f8e.deletedbookingsCache\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"locationID._id\": {\n \"$exists\": true\n }\n },\n \"queryHash\": \"E5759F8E\",\n \"planCacheKey\": \"00C12082\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"locationID._id\": {\n \"$exists\": true\n }\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"locationID._id\": 1\n },\n \"indexName\": \"locationID._id\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"locationID._id\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"locationID._id\": [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 0,\n \"executionTimeMillis\": 70076,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 0,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"locationID._id\": {\n \"$exists\": true\n }\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 0,\n \"advanced\": 0,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"docsExamined\": 0,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 0,\n 
\"executionTimeMillisEstimate\": 0,\n \"works\": 0,\n \"advanced\": 0,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"keyPattern\": {\n \"locationID._id\": 1\n },\n \"indexName\": \"locationID._id\",\n \"collation\": {\n \"locale\": \"en\",\n \"caseLevel\": false,\n \"caseFirst\": \"off\",\n \"strength\": 2,\n \"numericOrdering\": false,\n \"alternate\": \"non-ignorable\",\n \"maxVariable\": \"punct\",\n \"normalization\": false,\n \"backwards\": false,\n \"version\": \"57.1\"\n },\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"locationID._id\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"locationID._id\": [\n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\": 0,\n \"seeks\": 0,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 0\n },\n {\n \"$group\": {\n \"_id\": {\n \"locationId\": \"$locationID._id\"\n },\n \"bookings\": {\n \"$push\": \"$$ROOT\"\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"bookings\": 103598076\n },\n \"totalOutputDataSizeBytes\": 459907739,\n \"usedDisk\": true,\n \"nReturned\": 28829,\n \"executionTimeMillisEstimate\": 3775\n },\n {\n \"$match\": {\n \"bookings\": {\n \"$elemMatch\": {\n \"$and\": [\n {\n \"metadata.updatedAt\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1648563218046\"\n }\n }\n }\n },\n {\n \"metadata.updatedAt\": {\n \"$lte\": {\n \"$date\": {\n \"$numberLong\": \"1648566000000\"\n }\n }\n }\n }\n ]\n }\n }\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 4250\n }\n ]\n },\n \"nReturned\": 660,\n \"executionTimeMillisEstimate\": 69954\n },\n {\n \"$group\": {\n \"_id\": \"$_id.locationId\",\n \"bookings\": {\n \"$addToSet\": \"$bookings\"\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"bookings\": 29217178\n },\n \"totalOutputDataSizeBytes\": 29370958,\n \"usedDisk\": false,\n \"nReturned\": 660,\n \"executionTimeMillisEstimate\": 70011\n },\n {\n \"$facet\": {\n \"result\": [\n {\n \"$teeConsumer\": {},\n \"nReturned\": 660,\n \"executionTimeMillisEstimate\": 70021\n },\n {\n \"$group\": {\n \"_id\": {\n \"$const\": null\n },\n \"total\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"total\": 72\n },\n \"totalOutputDataSizeBytes\": 229,\n \"usedDisk\": false,\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 70021\n },\n {\n \"$project\": {\n \"total\": true,\n \"_id\": false\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 70021\n }\n ],\n \"data\": [\n {\n \"$teeConsumer\": {},\n \"nReturned\": 660,\n \"executionTimeMillisEstimate\": 51\n },\n {\n \"$sort\": {\n \"sortKey\": {\n \"_id\": 1\n },\n \"limit\": 10\n },\n \"totalDataSizeSortedBytesEstimate\": 1100867,\n \"usedDisk\": false,\n \"nReturned\": 10,\n \"executionTimeMillisEstimate\": 51\n }\n ]\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 70072\n }\n ],\n \"serverInfo\": {\n \"host\": \"st0cvm200117.internal-mongodb.de1.bosch-iot-cloud.com\",\n \"port\": 30000,\n \"version\": \"5.0.4\",\n \"gitVersion\": \"62a84ede3cc9a334e8bc82160714df71e7d3a29e\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n 
\"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"bookingTariffKey\",\n \"pipeline\": [\n {\n \"$match\": {\n \"appId\": \"myApp\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"bookingCache\",\n \"localField\": \"bookingCacheId\",\n \"foreignField\": \"_id\",\n \"as\": \"bookingsWithTariff\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$bookingsWithTariff\"\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"locationId\": \"$bookingsWithTariff.locationID._id\"\n },\n \"bookings\": {\n \"$push\": \"$bookingsWithTariff\"\n }\n }\n },\n {\n \"$match\": {\n \"bookings\": {\n \"$elemMatch\": {\n \"metadata.updatedAt\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1617027218046\"\n }\n },\n \"$lte\": {\n \"$date\": {\n \"$numberLong\": \"1648566000000\"\n }\n }\n }\n }\n }\n }\n },\n {\n \"$unionWith\": {\n \"coll\": \"deletedbookingsCache\",\n \"pipeline\": [\n {\n \"$match\": {\n \"locationID._id\": {\n \"$exists\": true\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"locationId\": \"$locationID._id\"\n },\n \"bookings\": {\n \"$push\": \"$$ROOT\"\n }\n }\n },\n {\n \"$match\": {\n \"bookings\": {\n \"$elemMatch\": {\n \"metadata.updatedAt\": {\n \"$gte\": {\n \"$date\": {\n \"$numberLong\": \"1648563218046\"\n }\n },\n \"$lte\": {\n \"$date\": {\n \"$numberLong\": \"1648566000000\"\n }\n }\n }\n }\n }\n }\n }\n ]\n }\n },\n {\n \"$group\": {\n \"_id\": \"$_id.locationId\",\n \"bookings\": {\n \"$addToSet\": \"$bookings\"\n }\n }\n },\n {\n \"$facet\": {\n \"result\": [\n {\n \"$count\": \"total\"\n }\n ],\n \"data\": [\n {\n \"$sort\": {\n \"_id\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 10\n }\n ]\n }\n }\n ],\n \"allowDiskUse\": true,\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n \"$db\": \"c58d9a7a-786b-4e7f-8092-6ef9f6990f8e\"\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1665327827,\n \"i\": 8\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"EfUsMmgMURqthwu/ROmLDpz9vkg=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": {\n \"$numberLong\": \"7124917905849843910\"\n }\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1665327827,\n \"i\": 8\n }\n }\n}\n", "text": "Hello I am kind of new to mongoDB (I just implemented my first complex query) and I am stuck on a quite common problem: the performance of groupBy action (at least, this is what I think is the problem).Let me describe a bit the context:\nI have 3 collections: bookingTariffKey, bookingCache and deletedBookingsCachebookingTariffKey contains about 2M documents, but they are not complex\nbookingCache contains about 600k documents\ndeletedBookingsCache contains about 82k documents\nbookingCache and deletedBookingsCache have the same stucture, but the documentes are quite complex (with many fields, arrays …)\nI have indexes for the most-used fields.The scope of my query is to find all the exisitng bookings (the ones from bookingCache) which have a tariff (lookup in bookingTariffKey) and respect a time condition plus all the deleted bookings (the ones from deletedBookingsCache) that respect the same time condition.And this is how the query looks like:And this is the explain:I know it is a lot, but I would really appreciate if you can help me improve the performance of this query.\nThank you in advance! 
", "username": "Alina_Bolindu" }, { "code": "$group$match$match$lookup", "text": "Hi @Alina_Bolindu, and welcome to the MongoDB Community forums! After your $group stage, I see you’re doing a $match on a very narrow time period range. Could the query be rewritten in a way to do that $match first, even if that means you have to perform your $lookup the other way? With the amount of data being pushed through the pipeline it would be hard for us to simulate and test other options to see what might help.", "username": "Doug_Duncan" }, { "code": "\"$match\"\"$lookup\"\"pipeline\"", "text": "Or maybe even move the \"$match\" into the first \"$lookup\" with a \"pipeline\"?", "username": "Cast_Away" }, { "code": "", "text": "Hi @Alina_Bolindu and welcome to the MongoDB community!!It would be very helpful if you could help me with a few details for better understandingif you can help me improve the performance of this query.Generally the query performance you can consider the documentation on Optimising the Query PerformanceAlso, for the query, as @Doug_Duncan mentions, if the query can be modified in such a way that the $match can be used only once at mostly at the first stage of the pipeline as it would filter out the collection based on the condition.Let us know if you have any further concerns.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can this query be more performant?
2022-10-09T15:24:05.664Z
Can this query be more performant?
1,387
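To make the suggestion about filtering earlier concrete, a rough sketch of moving the time-window $match inside the $lookup so far fewer bookings reach the later $group stages. Field names follow the thread, but the dates are the example values and this does change the semantics slightly: it keeps only bookings inside the window, rather than whole locations that contain at least one such booking, so it may need adjusting to match the original requirement. Combining localField/foreignField with a pipeline requires MongoDB 5.0+.

    db.bookingTariffKey.aggregate([
      { $match: { appId: "myApp" } },
      { $lookup: {
          from: "bookingCache",
          localField: "bookingCacheId",
          foreignField: "_id",
          // Filter joined bookings before they are unwound and grouped
          pipeline: [
            { $match: { "metadata.updatedAt": {
                $gte: ISODate("2021-03-29T14:13:38.046Z"),
                $lte: ISODate("2022-03-29T15:00:00.000Z") } } }
          ],
          as: "bookingsWithTariff"
      } },
      { $unwind: "$bookingsWithTariff" }
      // ...continue with the $group / $unionWith stages as before
    ])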
null
[ "swift" ]
[ { "code": " app = RealmSwift.App(id: \"app\")\n\n realmConfig = app?.currentUser?.flexibleSyncConfiguration()\n realmConfig?.objectTypes = [Users.self, Tasks.self]\n\n realm = try! await Realm(configuration: realmConfig!, downloadBeforeOpen: .never)\n", "text": "I’ve implemented flexible sync using the examples in the documentation using the following code each time the app opens and a user is logged in:If the app does not have connection prior to starting and trying to open the realm, the realm does not open and I am not able to save objects.Does @AutoOpen work with flexible sync, or should I be implementing opening the realm in a different way to enable interaction when there is no connection?Thank you in advance", "username": "Tyler_Collins" }, { "code": "", "text": "Glad to see I am not the only one who can’t figure out why @AutoOpen does not work offline.\nI opened an issue on Github about this a few days ago.", "username": "Sonisan" } ]
AutoOpen with Flexible Sync
2022-03-09T04:02:46.675Z
AutoOpen with Flexible Sync
1,830
null
[ "queries" ]
[ { "code": "", "text": "Hello, One of the select * from collection in TEST and DEV is taking very long to run while it is running in milliseconds on prod. Can you please let me know what it could be? There is only one index and I reindexed and I cleared cache too. All other collections are fine and super fast. What should I do?", "username": "Ana" }, { "code": "db.collection.find({});\n", "text": "Hi @Ana,Welcome to the MongoDB Community forums Just for clarification,Select * from collectiondo you mean:One of the select * from collection in TEST and DEV is taking very longWould it be possible for you to be more specific in numbers, such as how much difference is being observed in time between the production environment and the test environment?Can you please let me know what it could be?There could be a few possible reasons behind that but we will be able to narrow it down once we have more specific details.There is only one index and I reindexed and I cleared the cache tooIt would be helpful if you shared a sample dataset and the indexing of the field or sub-field with us so that we could get a reference. Additionally, could you provide the .explain(“executionStats”) output from each environment?Thanks,\nKushagra", "username": "Kushagra_Kesav" } ]
Collection Select * very slow
2022-10-10T16:46:26.870Z
Collection Select * very slow
1,126
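In case it helps anyone landing on this thread, below is a small mongosh sketch of the comparison the reply asks for, to be run against both the fast and the slow environment. The collection name and the empty filter are placeholders.

```javascript
// Compare the plan and the amount of work done for the same query in each environment.
const stats = db.mycoll.find({}).explain("executionStats");
printjson({
  winningPlan:         stats.queryPlanner.winningPlan,
  nReturned:           stats.executionStats.nReturned,
  totalKeysExamined:   stats.executionStats.totalKeysExamined,
  totalDocsExamined:   stats.executionStats.totalDocsExamined,
  executionTimeMillis: stats.executionStats.executionTimeMillis
});

// Storage-level differences (document count, average object size, total size)
// can also explain why a full collection scan is slow in only one environment.
printjson(db.mycoll.stats());
```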
null
[ "indexes" ]
[ { "code": "", "text": "Mongo used to have 2 index build type, one is foreground which is fast and block read-write on the whole db, and the other is background which is slow and non-blocking. Since 4.2, it just has one build type which has medium speed and it does block the collection being indexed. Am I right here?The doc has this part https://www.mongodb.com/docs/manual/core/index-creation/#index-build-impact-on-database-performance\n“Building indexes during time periods where the target collection is under heavy write load can result in reduced write performance and longer index builds.”\nWhat does it mean by “reduced write performance”? Isn’t it block the read-write entirely? Does it reduce the write performance for the whole db?I used to run createIndexes in the middle of the day on a 4.4 mongo with background option. At that time, I did not know that background build had been removed. But after I know that the option has been removed, I still run it in the middle of the day, because I think that the new index build == ( foreground + background) == ( medium speed + non blocking).", "username": "AL_VN27" }, { "code": "", "text": "I test it on mongo 4.4, and the result is that:\nWhile an index is creating on a collection, that collection can still be read and written. I index on a 49M docs collection, and the indexing time takes 20 minutes so I have plenty of time to observe the read/write activities.Also this doc states that “Building indexes during time periods where the target collection is under heavy write load can result in reduced write performance and longer index builds.”. So I guess that it will not block the collection during the whole indexing process.", "username": "AL_VN27" }, { "code": "", "text": "Since 4.2, it just has one build type which has medium speed and it does block the collection being indexed. Am I right here?Hi @AL_VN27,As per the documentation you linked, index builds have been optimised since MongoDB 4.2 so there no longer foreground versus background build types:Starting in MongoDB 4.2, index builds use an optimized build process that holds an exclusive lock on the collection at the beginning and end of the index build. The rest of the build process yields to interleaving read and write operations.… and this is as good or better than previous background index build performance:The optimized index build performance is at least on par with background index builds. For workloads with few or no updates received during the build process, optimized index builds can be as fast as a foreground index build on that same data.Collection read/write operations can be interleaved while indexes are being built, but additional I/O from concurrent index builds may impact performance.new index build == ( foreground + background) == ( medium speed + non blocking).This should be similar or better speed without blocking read/write operations during index builds.Regards,\nStennie", "username": "Stennie_X" } ]
Confirmation about index build
2022-09-07T10:38:26.391Z
Confirmation about index build
2,205
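A small mongosh sketch of how to watch one of these post-4.2 builds while reads and writes continue, in the spirit of the test described above. Collection and field names are placeholders, and the $currentOp filter shown is one reasonable shape, not the only one.

```javascript
// Session 1: start the build (no background/foreground option is needed on 4.2+).
db.orders.createIndex({ customerId: 1, createdAt: -1 });

// Session 2: in-progress index builds appear in currentOp with a progress document.
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true } },
  { $match: { "command.createIndexes": { $exists: true } } },
  { $project: { ns: 1, msg: 1, progress: 1 } }
]);
```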
null
[ "aggregation", "queries", "swift", "atlas-functions", "atlas-search" ]
[ { "code": "exports = function(typpedText){\n /*\n Function to get places according to typpedText\n */\n const places = context.services\n .get(\"mongodb-atlas\")\n .db(\"DataBase\")\n .collection(\"Place\");\n \n let arg = typpedText\n let found = places.aggregate([\n {\n $search: {\n index: 'placeIndex',\n text: {\n query: arg,\n path: {\n 'wildcard': '*'\n }\n }\n }\n }\n ]);\n return {result : found};\n};\nTask {\n do {\n // calling SearchRealm function\n let results = try await user.functions.SearchRealm([AnyBSON(searchValue)])\n print(\"found places : \\(results)\")\n } catch {\n print(\"Function call failed: \\(error.localizedDescription)\")\n }\n}\n", "text": "I’m developing an SwiftUI app (with Realm) to help people find specific places. On this app I have a search bar so people can type some keywords to find these places.My goal is to use Atlas Seach with Realm. After investigation I found a way, using Realm Functions. So I developed this function (SearchRealm):and on Xcode I did this :My problem is “results” is an “AnyBson” type. How can I convert it to my Swift Realm Object “Place” ?Thank You.", "username": "Ruben_Moha" }, { "code": "", "text": "Did you figure this out? I’m facing the same challenge", "username": "Tyler_Collins" }, { "code": " { $project: { title: 1, score: { $meta: \"searchScore\" }}},\n { $limit: 10 }\nobjects(Place.self).filter(\"_id IN %@\", arrayPlaceId)\n", "text": "For now I just find a temporary solution returning only a list of ObjectIds in the Realm Function with :and on Xcode I filter my Realm Objects with :it works, I don’t know if it the best way to filter data", "username": "Ruben_Moha" } ]
How can I convert an AnyBson document to a Realm Swift Object
2022-04-26T18:10:08.439Z
How can I convert an AnyBson document to a Realm Swift Object
3,294
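For anyone following the workaround at the end of that thread, here is a rough sketch of what the server-side Atlas Function could look like when it returns only the matching ObjectIds (the Swift side then filters with "_id IN %@" as shown). Database, collection, and index names are copied from the question and are assumptions about your setup.

```javascript
exports = async function (typedText) {
  const places = context.services
    .get("mongodb-atlas")
    .db("DataBase")
    .collection("Place");

  // Run the Atlas Search query, keeping only _id and the relevance score.
  const hits = await places.aggregate([
    { $search: { index: "placeIndex", text: { query: typedText, path: { wildcard: "*" } } } },
    { $project: { _id: 1, score: { $meta: "searchScore" } } },
    { $limit: 10 }
  ]).toArray();

  // Return a plain array of ids for the client-side Realm filter.
  return hits.map(doc => doc._id);
};
```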
null
[ "dot-net", "mongodb-shell", "atlas-cluster" ]
[ { "code": "System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \"2\", Type : \"Unknown\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 2, EndPoint : \"Unspecified/localhost:27017\" }\", EndPoint: \"Unspecified/localhost:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (10061): No connection could be made because the target machine actively refused it. [::1]:27017\n at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)\n at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.Connect(Socket socket, EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2022-10-07T19:23:18.1503545Z\", LastUpdateTimestamp: \"2022-10-07T19:23:18.1503546Z\" }] }.\n at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedAsync(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Clusters.Cluster.SelectServerAsync(IServerSelector selector, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.AreSessionsSupportedAfterServerSelectionAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.AreSessionsSupportedAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.MongoClient.StartImplicitSessionAsync(CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.ToListAsync[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n at Brewery.Services.BreweryInformationService.GetAsync() in C:\\BrewerySimulator\\BrewerySimulator\\Brewery.Services\\BreweryInformationService.cs:line 21\n at BreweryLab.API.Controllers.LocationTypeController.Get() in C:\\BrewerySimulator\\BrewerySimulator\\BreweryLab.API\\Controllers\\LocationTypeController.cs:line 20\n at lambda_method5(Closure , Object )\n at 
Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.AwaitableObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeInnerFilterAsync>g__Awaited|13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)\n at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)\n at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)\n at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext)\n at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)\n at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)\nAccept: text/plain\nConnection: keep-alive\nHost: localhost:5183\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36\nAccept-Encoding: gzip, deflate, br\nAccept-Language: en-US,en;q=0.9\nReferer: http://localhost:5183/swagger/index.html\nsec-ch-ua: \"Chromium\";v=\"106\", \"Google Chrome\";v=\"106\", \"Not;A=Brand\";v=\"99\"\nsec-ch-ua-mobile: ?0\nsec-ch-ua-platform: \"Windows\"\nSec-Fetch-Site: same-origin\nSec-Fetch-Mode: cors\nSec-Fetch-Dest: empty", "text": "I am getting a “System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector” error when trying to connect to MongoDB Atlas from an ASP.NET Core 6.0 application.I am able to connect and do CRUD operations using the mongosh command line shell with no issues.I have also set IP configuration to all “0.0.0.0/0” in the network settings.Connection string is:mongodb+srv://:@cluster0.fpgve4z.mongodb.net/?retryWrites=true&w=majorityI am using MongoDB.Driver v2.17.1Any help is very much appreciated!Thanks!", "username": "Don_King" }, { "code": "localhost:::1", "text": "Hi @Don_King, and welcome to the MongoDB Community forums! I don’t code in .NET, but your exception stack mentions localhost and :::1. 
It looks like your application is trying to connect to a local instance of MongoDB instead of the Atlas cluster.You could try sharing a minimal application that exhibits this problem and someone knowledgeable in .NET might be able to point out what’s going on.", "username": "Doug_Duncan" } ]
MongoDB Atlas - ASP.NET Core 6.0 System.TimeoutException:
2022-10-07T20:02:23.237Z
MongoDB Atlas - ASP.NET Core 6.0 System.TimeoutException:
2,098
null
[ "queries", "java" ]
[ { "code": "// Finds 4 documents as expected\nint llx = -100, lly = 40, urx = -70, ury = 45;\n\n{\"type\": \"Polygon\", \"coordinates\": [[[-100.0, 40.0], [-70.0, 40.0], [-70.0, 45.0], [-100.0, 45.0], [-100.0, 40.0]]]}\n\n// Shift the west edge of the bbox 2 degrees west\n// Finds 1 document (the one it finds is quite a bit larger than the 3 it misses)\nint llx = -112, lly = 40, urx = -70, ury = 45;\n\n{\"type\": \"Polygon\", \"coordinates\": [[[-112.0, 40.0], [-70.0, 40.0], [-70.0, 45.0], [-112.0, 45.0], [-112.0, 40.0]]]}\n\n// Shift the west edge of the bbox 2 degrees west again\n// Finds 0 documents\nint llx = -114, lly = 40, urx = -70, ury = 45;\n\n{\"type\": \"Polygon\", \"coordinates\": [[[-114.0, 40.0], [-70.0, 40.0], [-70.0, 45.0], [-114.0, 45.0], [-114.0, 40.0]]]}\n", "text": "For a small bounding box over Chicago, I get the 4 hits I expect. But if I just slightly increase the size of the polygon query by shifting its west edge to the west, I stop getting the hits I expect.Below are the bbox coordinates converted to a Polygon for the geoInstersects query:In all 3 cases the 4 documents I expect are clearly within the search polygon. The geometries of the document data and of the polygon query are all counter-clockwise.Does anyone have any idea what might be going on here?Does the second case, where it finds one document with a larger footprint but misses the smaller three shed any light?Any insight would be much appreciated. Thanks.", "username": "Scott_Ellis1" }, { "code": "", "text": "Sorry there is a typo in one of the comments in the OP and I can’t edit it. The first shift of the west edge to the west is by 12 degrees, not 2 degrees.", "username": "Scott_Ellis1" } ]
geoIntersects not working properly when search Polygon is slightly larger
2022-10-04T17:46:40.915Z
geoIntersects not working properly when search Polygon is slightly larger
1,180
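One thing that may be worth ruling out here (offered as a guess, not a confirmed diagnosis): GeoJSON polygon edges are great-circle arcs, so the long east-west edges of a wide latitude/longitude "rectangle" bow away from the parallels, and documents near the original box edges can fall outside the query region as the box widens. A mongosh sketch that densifies those edges, with placeholder collection and field names:

```javascript
// Build the bbox polygon with intermediate vertices along the long edges so the
// query region stays close to the intended rectangle. Ring order matches the
// counter-clockwise polygons in the question.
function bboxPolygon(llx, lly, urx, ury, stepDeg = 5) {
  const ring = [];
  for (let x = llx; x < urx; x += stepDeg) ring.push([x, lly]);  // south edge, west -> east
  ring.push([urx, lly]);
  for (let x = urx; x > llx; x -= stepDeg) ring.push([x, ury]);  // north edge, east -> west
  ring.push([llx, ury]);
  ring.push([llx, lly]);                                         // close the ring
  return { type: "Polygon", coordinates: [ring] };
}

db.features.find({
  geometry: { $geoIntersects: { $geometry: bboxPolygon(-114, 40, -70, 45) } }
});
```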
null
[ "swift" ]
[ { "code": "ObservedResultObservedResultstruct SomeView: View {\n @ObservedResults(SomeType.self, configuration: config) var objects\n \n func setFilter(_ filter: NSPredicate) {\n _objects.filter = filter\n }\n \n var body: some View { ... }\n}\nObservedResults", "text": "I came across this discussion stating that it’s not possible to have a dynamic filter in ObservedResult property wrapper. It suggests to filter in the view body instead.But I was able to update the filter of the ObservedResult like this:After reading the code it seems that ObservedResults was designed to make it possible to update the parameters, so I don’t see why this wouldn’t be the correct solution? Did I miss something?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "I am struggling to figure out how to make certain server functions only available for users that have been authenticated. I have two forms of authentication anonymous and email and password. I want it so that only email and password-authenticated users can make a query to add a product to the collection. I am using web SDK rn. Any help or direction would be greatly appreciated.", "username": "Wayne_Barker" } ]
Update parameters of ObservedResults?
2022-09-15T16:07:32.109Z
Update parameters of ObservedResults?
1,535
null
[ "data-api" ]
[ { "code": "> const findEndpoint = 'https://data.mongodb-api.com/app/XXXXX/endpoint/data/v1/action/findOne';\nconst clusterName = \"ClusterNAME\"\n \nfunction lookupInspection() {\n\n const apikey = 'API KEY'\n\n const query = { edate: '2022_09_27'}\n //We can Specify sort, limit and a projection here if we want\n const payload = {\n filter: query, \n collection: \"COLNAME\", database: \"DB NAME\", dataSource: clusterName\n }\n \n const options = {\n method: 'post',\n contentType: 'application/json',\n accept: 'application/json', // THIS WAS ADDED TO SEE IF IT NEEDED TO ACCEPT RESPONSE AS WELL. \n payload: JSON.stringify(payload),\n headers: { \"api-key\": apikey },\n muteHttpExceptions: true\n };\n \n const response = UrlFetchApp.fetch(findEndpoint, options);\n Logger.log(response.getContentText())\n const documents = JSON.parse(response.getContentText()).documents\n\n Logger.log(documents)\n \n\n}\n", "text": "Greetings,I have a Google App Script Code that connects to my Mongo DB for a Web app. However, when the post request is sent to findone it actually returns a null and the error in the log is “failed to set response” on Mongodb Logs.Any help would be appreciated in helping me understand why i cannot seem to make the call to load the information.The document that is pulled is quite a long array of data. Not sure if that is the reason or if there is a limit of what can be pulled.", "username": "Suren_Gunaseelan" }, { "code": "", "text": "I’ve also run into the same issue recently. My script worked fine until a few months ago and I believe it has something to do with the query taking too long. Limiting the query to 100 items got my function to return the data properly but any more or if I try to sort the query will result in a “Error 500” on the App Scripts side, and “failed to set response” error in Mongo DB’s logs.Not sure if this helps, but I would love to hear workarounds or updates to this problem since I have a lot more than 100 items to fetch.", "username": "Tina_Peng" }, { "code": "", "text": "Greetings,the way i solved the issue was to convert the mongo db via a Python script using Pymongo. then I turned that script into a heroku webapp as a Flask API. then i made the calls from GAS to that API that i made. I was able to return everything that I needed and it was smooth as fast.\nHope that it helps.", "username": "Suren_Gunaseelan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting Failed to set response in Data API. Request made from GAS
2022-09-24T07:08:16.681Z
Getting Failed to set response in Data API. Request made from GAS
3,853
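For others hitting the same wall, here is a rough Google Apps Script sketch of the batching idea mentioned above: call the Data API find action in small pages instead of pulling everything in one response. The endpoint, credentials, names, and the 100-document page size are placeholders and assumptions, and paging by skip only suits modest collections.

```javascript
function fetchAllInPages() {
  const endpoint = 'https://data.mongodb-api.com/app/XXXXX/endpoint/data/v1/action/find';
  const apikey = 'API KEY';
  const pageSize = 100;
  let page = 0, all = [];

  while (true) {
    const payload = {
      dataSource: 'ClusterNAME',
      database: 'DB NAME',
      collection: 'COLNAME',
      filter: { edate: '2022_09_27' },
      projection: { _id: 1, edate: 1 },  // keep each response small
      sort: { _id: 1 },
      limit: pageSize,
      skip: page * pageSize
    };
    const options = {
      method: 'post',
      contentType: 'application/json',
      payload: JSON.stringify(payload),
      headers: { 'api-key': apikey },
      muteHttpExceptions: true
    };
    const docs = JSON.parse(UrlFetchApp.fetch(endpoint, options).getContentText()).documents;
    if (!docs || docs.length === 0) break;
    all = all.concat(docs);
    page++;
  }

  Logger.log(all.length);
  return all;
}
```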
null
[ "replication", "database-tools", "backup" ]
[ { "code": "", "text": "I have to migrate a MongoDB database in a 3-node replicaset in version 4.0 from one infra to another infra with MongoDB but version 5.0, the problem is that the database cannot be stopped during the corresponding time to perform the mongodump and the corresponding mongorestore, what would be the options to carry out the migration with the shortest service downtime? Is there any possibility of setting up a replica from a version 4.0 node to a version 5.0 node?I thank you in advance for any comment on this.", "username": "Gerardo_Capelli" }, { "code": "", "text": "Welcome to the MongoDB community @Gerardo_Capelli!If you want to perform an in-replace migration from 4.0 to 5.0 without downtime you will need to upgrade through each successive major release following the documented procedures for your deployment type:Replication between adjacent major releases (4.0 => 4.2, 4.2 => 4.4, 4.4 => 5.0) is possible during a rolling upgrade, but you cannot mix 4.0 and 5.0 in the same replica set as there are multiple intervening release series.Upgrades and other maintenance processes can be automated using tooling like MongoDB Ops Manager or MongoDB Cloud Manager, but those are commercial solutions outside of evaluation or trial periods.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "n adjaceHello Stennie, thanks for the quick answer, in that way, if i can to add a member in 4.2 to a replicaset in 4.0 , after de syncronization, could i start a new replicaset from the member in 4.2, somehow?Regards Gerardo.", "username": "Gerardo_Capelli" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.0 to 5.0 Migration
2022-10-06T15:34:55.322Z
MongoDB 4.0 to 5.0 Migration
6,670
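To make the stepwise path above concrete, a minimal mongosh sketch of the feature-compatibility checkpoints in a 4.0 -> 4.2 -> 4.4 -> 5.0 rolling upgrade. This only shows the FCV commands; the full per-version procedures (binary swaps, stepdowns, verification) are in the upgrade guides linked in the reply.

```javascript
// Confirm where you are before starting (run against the primary).
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

// After every replica set member is running 4.2 binaries:
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" });
// After every member is running 4.4 binaries:
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" });
// After every member is running 5.0 binaries:
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" });
```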
null
[ "connector-for-bi" ]
[ { "code": "", "text": "I am trying to connect my MongoDB version 3.6.21-11.0 with the BI connector 2.14.5 + MongoDB ODBC Driver 1.4.3 and 1.4.2 but I get the following message:Test ResultConnection Failed\n[MongoDB] [ODBC 1.4(w) Driver]MongoDB version is 3.6.21-11.0 but version >= 4.0 requiredThe same process on the same database has been run from another person at May (5 months prior to my test) with BI connector 2.14.4 + ODBC 1.4.2 and everything run smoothly.", "username": "Lampros_Makrodimitris" }, { "code": "", "text": "Welcome to the MongoDB Community @Lampros_Makrodimitris !MongoDB 3.6 reached End of Life (EOL) in April 2021 and is no longer supported. The error message indicates your installed connector version expects a MongoDB 4.0 or newer deployment.Per the Release Notes for MongoDB Connector for BI, version 2.14.4 of the connector removed support for MongoDB 3.2, 3.4, and 3.6.You can either upgrade to a newer version of MongoDB server (I recommend at least 4.2 since 4.0 is also EOL) or download an older version of the BI Connector (2.14.3).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X thank you for the solution.", "username": "Lampros_Makrodimitris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connection Failed MongoDB version is 3.6.21-11.0 but version >= 4.0 required
2022-10-10T08:52:39.000Z
Connection Failed MongoDB version is 3.6.21-11.0 but version >= 4.0 required
2,960
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "There is an internal tool at work that does not allow aggregations.\nMy challenge is to essentially use\ncollection.find( {field} ).sort(???)\nto order the documents in a way that will improve the productivity of the team.e.g.\nWhat I want to do is sort the result of a query by the frequency of values of a field.\nI’ve fiddled around with a few things and nothing has done what I want.\ne.g.\ncollecton.find( {field} ).sort(???)\nkey: a\nkey: a\nkey: b\nkey: c\nkey: b\nkey: b\nkey: c\nkey: xshould be returned in the order\nkey: b\nkey: b\nkey: b\nkey: a\nkey: a\nkey: c (dont care about the order of the values if their frequencies are the same, frequency of a == frequency of c)\nkey: c\nkey: xany ideas?\nIs this even possible?", "username": "ben_dpb" }, { "code": "group = { \"$group\" : {\n \"_id\" : \"$key\" ,\n \"documents\" : { \"$push\" : $$ROOT } ,\n \"frequency\" : { \"$sum\" : 1 }\n} }\nsort = { \"$sort\" : {\n \"frequency\" : -1\n} }\nunwind = { \"$unwind\" : \"documents\" }\npipeline = [ group , sort , unwind ]\n", "text": "I am not sure I understand correctly the following.internal tool at work that does not allow aggregationsA tool you developed internally or a 3rd party tool that you use internally. If it really does not allow aggregations while you are using MongoDB, it is like driving a Porsche and only allowed to use the first speed.The best way toimprove the productivity of the teamis to modify your internally developed tool to use aggregation or stop using a 3rd party tool that does not allow aggregation.The problem is that you want to sort on frequencies of values. You either have to store the frequencies in all documents, which is a nightmare to maintain, or compute it on the fly. The only way I see to compute it on the fly without aggregation is to download all the data, compute the frequencies in what ever language your internal non-aggregation friendly tool permits, and then sort using the frequencies you just computed.$OR simply bypass the tool and use mongosh to run a trivial aggregation that looks like:You might want to do a $replaceRoot after unwind to have the exact result.", "username": "steevej" } ]
Sort by frequency of a value in a specific field
2022-10-11T09:37:07.394Z
Sort by frequency of a value in a specific field
1,264
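For completeness, a small sketch of the client-side fallback mentioned in the reply, for a tool that genuinely cannot run aggregations: pull the matching documents, count each value of the field, then sort by those counts. Collection and field names are placeholders, and this loads every matching document into memory, so it only suits modest result sets.

```javascript
const docs = db.items.find({ /* your filter */ }).toArray();

// Count how often each value of "key" appears.
const freq = {};
for (const d of docs) freq[d.key] = (freq[d.key] || 0) + 1;

// Sort documents by the frequency of their value, most frequent first.
docs.sort((a, b) => freq[b.key] - freq[a.key]);
docs.forEach(d => print(d.key));
```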
null
[ "data-modeling" ]
[ { "code": "questions collection:\n - creator: userId\n - createdAt: Date\n - isAnon: boolean\n - title: string\n - description: string\n - tags: string[]\n - answers: AnswerSchema[]\n - upvotes: userId[]\n - downvotes: userId[]\n\nAnswerSchema:\n - creator: userId\n - createdAt: Date\n - isAnon: boolean\n - content: string\n - replies: AnswerSchema[]\n - upvotes: userId[]\n - downvotes: userId[]\n", "text": "Hi everyone!\nI’m new to MongoDB and would like to get some help modeling my forum data. The main things that I currently try to model are the questions, answers, and votes (I also have users collection, but I dealt with it already, it was pretty easy). My website is very similar to Reddit or StackOverflow. each question has a title, description, creator, if the creator is anonymous, time created, and tags. Each answer has content, creator, if the creator is anonymous, and time created. Every answer is also linked to a question and possibly to another answer (if it’s a reply to another answer). I also want both the questions and answers to have voting (upvotes and downvotes). If I would just embed everything it would look something like this:That doesn’t seem like a good idea because even tho embedding is considered the better approach most of the times, it sets a limit for how much data I can store (even if in the beginning I won’t have many answers/votes, but what if my website will grow and have a lot of data?).\nI thought of just don’t everything with referencing so I’ll have four collections: one for questions, one for answers (with a question id field to reference to the question, and an answer id field for when it’s a reply.), and two for votes (connecting between answer/question and user, and another field for if it’s downvote or upvote). And then also adding to the questions and answers collection upvotesCount and downvotesCount.This still doesn’t seem perfect because each time I’ll want to update the votes, I’ll need to update two different collections. Also each time I want to get questions/answers and also to get if a user already voted on them, I’ll need to have two different queries and then somehow combine them.What would you recommend me to do? Use embedding or referencing and where?", "username": "Roi_Bar" }, { "code": "db.questionsAndAnswers.find({question_id : xxx }).sort({type : -1, timestamp : 1})\n", "text": "Hi @Roi_Bar ,Can you share a typical test documents from those 2 collections ?If somebody upvotes a question why would an answer be upvoted? Am I missing something?Once I see the current schema I would need the most critical queries and I can tube it based on that.In general, I would say that each topic/question should be a document and each top level answer should be a document, a reply on an inner answer might be embedded in my opinion.Now they can all live in one collection meaning that a question document will have a “type” : “question” field and an answer will have an answer type. But if they both in the same collection and you have a question id in each answer document you can run one query for all threads in that question …Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi, thanks for your answer!\nWhen somebody upvotes a question, of course, no answer should be upvoted. 
By saying that I’ll need to update two different collections I meant that because the votes are in a different collection, I will need to update both the votes collection (save the user who voted, on what question/answer, and if it’s an upvote or downvote) and also in the question/answer document (in a field that saves how much upvotes and downvotes are there, so I don’t need to count the votes each time). That also means that when I want to get a list of questions/answers and also get for each one of them if a specific user has already voted in them (so when the user sees the question/answer he will see if he voted on it already), I will need 2 different queries. The first one to get the list of questions/answers, and after that I’ll need to get for each question/answer id if the user has voted on it, then somehow combine that data to one object. Hope I explained it clearly enough.About the replies. If I put the replies inside the answer schema, won’t it limit how many replies you can have on an answer? I really don’t know how many I’ll have because that depends on how users I’ll have and how big my website will grow. I’m scared of killing the scalability…About that I could put questions and answers in the same collection, that’s actually what I’ve done until now, I created a collection for Posts and I thought that I would just put every kind of post there (not only question and answers, because I’m planning to add more things like writing on someone wall, posting a status and more). they all have text content, a creator, if the creator is anonymous, and created date. I thought it would also be easier for stuff like the voting because then I can have one votes collection that will connect between user and post, instead of needing to create a separate collection for votes on questions and answers.\nBut, with all that said, I’m still thinking that it might be a bad idea. First of all, each type of post will have different fields (like question having tags and description, and answer having question id), and it’s harder to manage when they are all in the same collection. Also, I can’t have mongoose schema validation, because each type of post has different rules to validate (even tho I have other layers of validation, it’s always good to have the mongoose validation to be extra sage). Even in the votes, I only want questions and answers to have votes, and not other kinds of posts. And even then, what if someday I’ll decide that I don’t want downvotes on questions for example?I feel like maybe putting everything in one collection is messier and complicated then just creating a collection for each type of post, and that it’ll limit changes afterward. Would be happy to hear your opinion.", "username": "Roi_Bar" }, { "code": "votes : { n : 50,\n users : [ \"id1\", \"id2\" .... \"Id50\" ],\ndownVotes : ...\n", "text": "Hi @Roi_Bar ,In terms of upvotes I am more of a fan for keeping data that is queried and upfated together in the same document.What is possible is for each post that is being upvoted keep an array of user ids that upvoted on it and the total number of votes/downvotesTraversing this array on the client side when building the Ui should not be problematic and will be fast to indicate a full or empty like for the connected user.For the replies on a specific inner answer/post, the nature of those would usually be of a lower magnetitude compared to the amount of messages on the main thread. Therefore I assume the 100-200 inner comments can live in the embedded array. 
Moreover the nature of showing those heirarchy is usually paginated. So keeping the top comments embedded and any click on “load more” can go to this outlier collection which holds the extra long comment threads …For mongoose I have not much to share as I rather not use it exactly because of the schema type limitations, it prevents me for using MongoDB polymorphism which is one of the strongest points of MongoDB. Documents does not have to be the same, and fhey can only share a common attribute for logical queries … This is a classic example where all content will probably hold a post id . The documents can then have different fields for different type of posts. Your UI should be Smart enough to get a post and based on its structure for it correctly. Any validation can be done on the buisness logic of the application backend …Hope that helpsPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks again!\nI think I’ll just go with referencing, I don’t want to have limitations on how many upvotes or downvotes are there (and if I would also have the replies embedded, that can be a pretty big limitation because I will have even more data in the document. also, most of the time, I only need to check if a specific user has voted, I don’t need the whole array of users who voted, I never show the list of users who have voted to a question/answer).\nIf I go in that route, what do you think would be the best way to update and get the votes? I need to update both the upvotes/downvotes count on the post, and add it to the votes collection, and when getting the data I also need to check for each comment if the user has already voted on it (I know that in SQL those things are supported with atomic updates and queries with join, how would you do that in mongo?)About the replies I need to think about it more - do I want to sacrifice scalability for speed and ease of working with data? what if my website will grow very big (unlikely, but still, I don’t want to block this option completely)?I’m still not sure about combining all of the post types into one Posts collection. I mean, either way, I have a database layer that handles working with the different types of posts separately, so what value do I get by combining those into one collection? The only thing I can think of is easier references like with the votes (having one votes collection to connect Posts and Users instead of having two separate votes collections for questions and answers). Isn’t it just easier to have a separate collection for each type of post?", "username": "Roi_Bar" }, { "code": "", "text": "Hi @Roi_Bar ,If you go down the route of having a document representing if a user has voted or not you will need aome sort of a transaction to update the total on the specific post document. You can use the native mongodb transactions.If placing the data in one collection or several depands on your code and UI.If I can imagine correctly you wil probably have different topics for posts and therefore the main screen will be to show some preview of available posts, therefore i assume that you will have some sort of grouping per category on main posts. Once a specific post is loaded you will need to show first portion of replies/answers …Therefore I thought that if you spread the posts into their types to group them you will need several queries and with one collection you will need one query doing it all…If you feel like separating is better try it. 
Remember that MongoDB is a very good database for changing schema, so moving your application from many collections to one once you are in the air shouldn't be that heavy a lift…", "username": "Pavel_Duchovny" }, { "code": "", "text": "I don't think I'll use transactions because voting is something that you do very often, so it should be as fast as possible, and even if once in a million times it doesn't update the question/answer votes count, it's worth it. Also, I'll probably just handle it myself: when the question/answer update fails or the document does not exist, I'll just delete/update back to the previous value in the votes collection.About the topics, I'm not exactly sure what you mean, but if I understand correctly you mean I'll have different topics, and each question belongs to a topic. I thought you meant to combine the questions and answers in one collection. I only have tags for questions, not topics, and I already embedded them in the question schema.Thank you very much for your help and time!", "username": "Roi_Bar" } ]
What would be the best way to model the data for a forum? Including questions, answers and votes
2022-10-09T13:21:32.516Z
What would be the best way to model the data for a forum? Including questions, answers and votes
2,269
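A short sketch of the "keep data that changes together in one document" idea from that thread: recording an upvote by pushing the voter's id and bumping the counter in one atomic updateOne, with the filter guarding against double votes. questionId and voterId are placeholders, and the votes sub-document shape follows the reply above rather than the original schema.

```javascript
db.questions.updateOne(
  { _id: questionId, "votes.users": { $ne: voterId } },  // no-op if this user already voted
  {
    $push: { "votes.users": voterId },  // remember who voted
    $inc:  { "votes.n": 1 }             // keep the counter in the same document
  }
);
```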
null
[]
[ { "code": "", "text": "We are working on a project that may require up to 10000 databases. We are looking at MongoDB as an option but not sure if we can create that much databases.", "username": "Naman1" }, { "code": "", "text": "Hey @Naman1,Welcome to the MongoDB Community Forums! There is no hardcoded limit on the number of databases one can create in MongoDB. However, there are other limitations that one needs to be mindful of while creating a database. For example, there is a database limit of 100 if you are planning on using Atlas (M0, M2/M5 tier clusters). You also need to be mindful of your resources like RAM, Memory utilizations, etc.\nYou can read more on the Documented limits & thresholds in the MongoDB Limits and Thresholds page.There is also a similar conversation that you might find useful:Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Thank you for such an insightful reply. I just want to confirm, if I want to create 10000 databases and I don’t have any prior resource planning for the same. Then what will be the required resources and suggested tier so that I can efficiently create and manage that much databases in MongoDB.", "username": "Naman1" }, { "code": "", "text": "Hey @Naman1,Unfortunately there is no single easy way to answer this other than simulating the workload you’re expecting on the hardware that you think should be able to serve the workload, and checking if it can be done. Different cases could arrive at entirely different answer to this, so I would encourage you to do your own testing.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we create 10000 databases in MongoDB?
2022-09-19T04:27:58.152Z
Can we create 10000 databases in MongoDB?
1,999
https://www.mongodb.com/…_2_1024x353.jpeg
[]
[ { "code": "", "text": "\n스크린샷 2022-09-22 오전 10.37.281974×682 75.3 KB\nI have found out the connection to our MongoDB server has suddenly spiked up.so I downloaded connection log and found IP: 192.168.254.25 and Application MongoDB Automation Agent occupied the most of logsWhy MongoDB Automation Agent suddenly connects to our server so much ?,{“t”:{\"$date\":“2022-09-19T13:42:53.152+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn280777”,“msg”:“client metadata”,“attr”:{“remote”:“192.168.254.25:55370”,“client”:“conn280777”,“doc”:{“driver”:{“name”:“mongo-go-driver”,“version”:“v1.7.2+prerelease”},“os”:{“type”:“linux”,“architecture”:“arm64”},“platform”:“go1.17.6b7”,“application”:{“name”:“MongoDB Automation Agent v12.3.4.7674 (git: 4c7df3ac1d15ef3269d44aa38b17376ca00147eb)”}}}}the log above occupies most of proportion among connection logs\nI have checked out that this IP on 17th September was around 10,000 connection but on 18th and 19th, spiked up to 30,000 connectionsOn 20th September ,the connection decreased down to the normal rate and it seems increasing gradually now.That’s what I found and I am trying to find out another reason…but I don’t know yet.What can be the reason for the sudden spiking connection rate??", "username": "Jihoon_Shim" }, { "code": "", "text": "Hi @Jihoon_Shim - Firstly, welcome to the community.Why MongoDB Automation Agent suddenly connects to our server so much ?Please contact the Atlas support team via the in-app chat to investigate any operational and billing issues related to your Atlas account as the support team would have more insight into the cluster in question if you are certain that the automation agent is causing the connect spikes. You can additionally raise a support case if you have a support subscription. You could possibly investigate the Project Activity Feed as well during those same time frames of when the spikes occurred to see if any particular cluster maintenance tasks were happening.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
The connections to MongoDB Atlas suddenly spiked up to the max and decreased to the normal rate within two days
2022-09-22T01:42:13.621Z
The connections to MongoDB Atlas suddenly spiked up to the max and decreased to the normal rate within two days
1,605
null
[ "queries", "replication", "sharding" ]
[ { "code": "", "text": "Hey all,My team is considering a migration to MongoDB. We have three regional data centres and our requirements include that that all data be replicated across regions and read/writes happen locally. After reviewing docs it seems the preferred way to do this is to shard data regionally, with each region containing the primary for its own shard as well as a secondary for each other region.My concern is what happens if some user’s data becomes separated across shards? Say they are travelling and connect to a different data center. We would still want them to be able to access their data, and access and data they wrote to that region once they returned home. It seems if we include shard key for the user we will miss their data on other shards. Would we need to query only by a user’s UUID and rely on scatter gather queries, despite that not being scalable?", "username": "Dale_Harrison" }, { "code": "", "text": "Hi @Dale_Harrison ,Welcome to The MongoDB Community Forums! Please confirm if my understanding of your use case is correct, you have 3 data centres globally and want the data to be replicated overtime but initially want read/writes to happen locally?If this is your use case then I would recommend you to go through this blog - “Active-Active Application Architecture with MongoDB”.If you can use Atlas, you might want to check out “Atlas Global Clusters” where you can define single or multi-region Zones, where each zone supports write and read operations from geographically local shards. You can also configure zones that support global low-latency secondary reads.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Querying On Data Segmented By Location
2022-10-06T05:52:13.575Z
Querying On Data Segmented By Location
1,272
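To make the zone-sharding suggestion above a bit more concrete, a mongosh sketch using a compound shard key of { region, userId }: local writes stay in the local zone, lookups that include the region stay targeted, and a travelling user can still be found by userId alone at the cost of a scatter-gather. Shard, zone, database, and field names are all placeholders.

```javascript
sh.enableSharding("app");
sh.shardCollection("app.userData", { region: 1, userId: 1 });

sh.addShardToZone("shardEU", "EU");
sh.addShardToZone("shardUS", "US");

sh.updateZoneKeyRange("app.userData",
  { region: "EU", userId: MinKey() }, { region: "EU", userId: MaxKey() }, "EU");
sh.updateZoneKeyRange("app.userData",
  { region: "US", userId: MinKey() }, { region: "US", userId: MaxKey() }, "US");

// Targeted read: the query includes the shard key prefix.
db.userData.find({ region: "EU", userId: "u123" });
// Scatter-gather fallback when the user's data may span regions.
db.userData.find({ userId: "u123" });
```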
null
[ "serverless" ]
[ { "code": "", "text": "Hi,We’re currently deciding which option to go for on MongoDB Atlas, Serverless vs having a cluster (Shared/Dedicated).We’re a bit confused about how Serverless operates, in the docs it states:Serverless instances are incredibly flexible and are recommend for lightweight or infrequent application workloadsHowever it also states:With serverless instances, MongoDB Atlas will seamlessly provide the database resources your application needs at any given time, removing the need to manually scale up and down.Which is a little bit confusing, does this mean Serverless can handle heavy loads when it needs? Or is it not recommended for heavy loads completely?We’re building a fitness tracking application with daily goals where users login every day to check their progress, badges, etc…We also want to ensure the application would work well under heavier load if we experience a spike in traffic (promo code events, etc…)Comparing the storage capacity for example Shared Free (512MB) vs Serverless (up to 1TB per 1M reads, which is a huge number in our case), is this a fair comparison?Thanks in advance!", "username": "Turbulent_Stability" }, { "code": "", "text": "Hello,I think this might answer your question:Regards !", "username": "Xeneural_INC" }, { "code": "", "text": "Welcome to the MongoDB Community @Turbulent_Stability !The Atlas Serverless FAQ shared by @Xeneural_INC is a good reference.Which is a little bit confusing, does this mean Serverless can handle heavy loads when it needs? Or is it not recommended for heavy loads completely?Serverless will automatically scale to larger workloads. A dedicated Atlas instance (M10+) can also be configured for auto-scaling, but is less reactive to bursty workloads. For example, per the note on the auto-scaling page:Scaling up to a greater cluster tier requires enough time to prepare backing resources. Automatic scaling may not occur when a cluster receives a burst of activity, such as a bulk insert. To reduce the risk of running out of resources, plan to scale up clusters before bulk inserts and other workload spikes.A main difference in the billing model is that a traditional Atlas cluster is based on a reserved allocation of resources (RAM, CPU, Disk) whereas a Serverless cluster is billed based on usage (resources scale to match workload).If you have a consistent level of workload with occasional spikes for promotions (which you can proactively plan for), a dedicated cluster may be more cost effective. If your workload is less predictable, a serverless cluster may be more suitable.You can perhaps get some more relevant advice discussing your current and future workload with the MongoDB Sales team and a Solution Architect: Contact Us | MongoDB.Comparing the storage capacity for example Shared Free (512MB) vs Serverless (up to 1TB per 1M reads, which is a huge number in our case), is this a fair comparison?The shared tier has options for 2 or 5 GB of storage (or 512MB for the free plan), and generally isn’t sized for typical production workloads.If you are expecting more significant growth and scaling, the Dedicated Tier would be a better comparison point (10GB to 4TB of storage, 2GB to 768 GB of RAM) with additional features such as auto-scaling, VPC peering, and more storage options (eg NVMe).Regards,\nStennie", "username": "Stennie_X" } ]
Serverless vs Shared/Dedicated?
2022-10-07T10:02:42.225Z
Serverless vs Shared/Dedicated?
8,597
https://www.mongodb.com/…_2_1024x576.jpeg
[ "vancouver-mug" ]
[ { "code": "CEO, IGENODirector of Community (Developer Relations), MongoDBLead, Community Programs, MongoDB", "text": "It’s time! After a two+ year hiatus, the Vancouver MongoDB User Group (VanMUG) is back! For our first event, we’re going to keep things simple and informal with a social event. This event is for anyone interested in assisting with the care and feeding of VanMUG or those that might be interested in potentially speaking at a future MUG. As an added bonus, we’ll have some folks (both local and international) from MongoDB in attendance.For those of you who haven’t attended our meet-ups in the past, we have always tried to maintain a balance in topics from fairly technical to “case studies” of organizations using MongoDB. There’s always a social element to our events and it’s a great way to connect with the local MongoDB community which has grown rapidly over the past few years.The event will be held in downtown Vancouver with the exact location to be announced once we have a better idea of the numbers — please be sure to RSVP! Hope to see you there!Event Type: In-Person\n Location: Yaletown Brewing - 1111 Mainland St.\nTo RSVP - please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Mark Clancy - @Mark_Clancy\nCEO, IGENO–\nStephen (@Stennie_X) Steneker\nDirector of Community (Developer Relations), MongoDB–\nAngie Byron - @webchick\nLead, Community Programs, MongoDBJoin the Vancouver group to stay updated with upcoming meetups and discussions.", "username": "Mark_Clancy" }, { "code": "", "text": "Hello All,\nIncase you missed the update last week. The location for the gathering is: Yaletown Brewing - 1111 Mainland St .Looking forward to seeing most of you today!Cheers\nHarshit", "username": "Harshit" }, { "code": "", "text": "Hey everyone! Looking forward to meeting you tonight! I’ll be easy to spot because I have bright blue hair. ", "username": "webchick" }, { "code": "", "text": "Oh also when you get here it’s the “Mark MongoDB” table. ", "username": "webchick" }, { "code": "", "text": "Thanks folks for dropping by! It was wonderful to meet everyone! \n\nimage1920×1440 370 KB\n", "username": "webchick" }, { "code": "", "text": "Event Wrap-Up:Because the Vancouver MUG hasn’t met together in literal years, and because Vancouver gets mere days of sun per year, and this month has most of them , we decided on a smaller social thing to break the ice. We were joined by @Mark_Clancy (lead organizer), members of the MongoDB Community Team (@Stennie_X and @webchick — aka me), MongoDB engineers @Xander_Neben and @Frederic_Vitzikam, and @Jamie_Jann and @Sean_Brophy from Rain City Housing, who use MongoDB as a data warehouse to help people experiencing homelessness and mental health, trauma and substance use issues, throughout BC’s lower mainland. Among the topics discussed were engineering best practices, Queryable Encryption, strategies for storing large binary data, and tools and tricks of analysis of large data sets.This was an awesome “reboot” event, and thank you so much to @Mark_Clancy for leading the organization efforts! ", "username": "webchick" }, { "code": "", "text": "Thanks for posting the photo Angie! Great to see everyone again!", "username": "Mark_Clancy" }, { "code": "", "text": "It’s that time again! Vancouver MongoDB User Group (VanMUG) - Conversations - From Green Runs to Double Black Diamonds", "username": "webchick" } ]
Vancouver MUG: MongoDB User Group Re-Boot Social Gathering!
2022-07-08T05:23:14.252Z
Vancouver MUG: MongoDB User Group Re-Boot Social Gathering!
5,559
null
[]
[ { "code": "", "text": "Hi, we’ve been wondering if there’s been an increase in pricing? If not now, has anyone heard of one that’s coming down the pipe? I’ve been asked to do some rough budgeting for CY23, and am nervous about under allocating. Would expect some increase at some point due to inflation, but before we dig through the numbers for our account wanted to ask the community.Thanks!\nMike", "username": "Mike_Howlan" }, { "code": "", "text": "Hi @Mike_Howlan,I’m not sure which pricing you are referring to (MongoDB Atlas, MongoDB Enterprise Advanced, or something else) but pricing questions are best directed to the MongoDB sales team: Contact Us | MongoDB.Regards,\nStennie", "username": "Stennie_X" } ]
Increase in pricing?
2022-10-07T19:11:54.651Z
Increase in pricing?
1,665
null
[ "node-js", "crud" ]
[ { "code": "var message = user.messages;\n\n//should be an empty array for right now, can be something like [{ from: 'erin', to: 'erin', content: 'test' }]\n//in the future\n\nif (!message[otherPerson]) message[otherPerson] = [];\nawait message[otherPerson].push(msg);\n//where msg is a msg object\n//pushes message into new index\n\n//updates messages with new data\nconst test = await User.updateOne({ usertag: person }, {\n$set: { messages: message }\n});\nconsole.log(await test);\nUser.updateOne({ usertag: person }, {\nmessages\n});\nUser.updateOne({ usertag: person }, {\n$set: { messages }\n});\n", "text": "I’m having an issue where I’m attempting to update my document and the change is not being reflected. I suspect MongoDB is finding that my value is somehow the same even though I’ve changed itI’ve tried multiple formats of updating such aswhere the messages variable is called message in the earlier example orand nothing seems to workI will also mention that this is some rather old code that used to work pretty well. Has something changed in how MongoDB handles updates or am I doing something wrong?", "username": "Erin_Nyx" }, { "code": "modifiedCountmatchedCountmyFirstDatabase> db.coll.updateOne({a:2},{$set:{a:1}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 0,\n modifiedCount: 0,\n upsertedCount: 0\n}\n", "text": "Hi @Erin_Nyx,Could you provide some sample documents?What are the input documents and what are you expecting them to be updated to?Lastly, you have specified the modifiedCount value to be 0. What is the matchedCount value? e.g.:Note: No documents have matched, none have been updated in the above.Regards,\nJason", "username": "Jason_Tran" }, { "code": "_id: ObjectID('NUMERICAL_ID')\nusertag: 'unique_string'\nemail: 'unique_email'\ndisplayName: 'string'\npassword: 'encrypted_pass'\nadmin: false\nauth: false\nmessages: Array // should just look like var messages = [ ]\nmatchedCount", "text": "Hi!\nThis will be my first time posting on here so I’m not really sure what format you want the sample document in but I’ll try my best!!A sample document should look like this:And matchedCount was 1. So it found the document with a filter to the unique usertag but didn’t update it.", "username": "Erin_Nyx" }, { "code": "await message[otherPerson].push(msg);", "text": "I will also mention that this is some rather old code that used to work pretty well.Could you share the original code? This way we can compare to yourmultiple formats of updatingyou have tried.What you are doing is such a basic operation that I do not seesomething changed in how MongoDB handles updatesNote that the document is not updated if you supply the same value that was already there.And reading the messages from the original document, and then using js code to doawait message[otherPerson].push(msg);following by a $set that sets all the old messages with the new msg you pushed is slow and error prone. A $push directly in the messages is both faster and safer.", "username": "steevej" }, { "code": "const messages = await User.findOne({ usertag: user }).lean().messages;\nconst res = messages.filter(m => (m.to == user || m.to == otherPerson) && (m.from == user || m.from == otherPerson));\n", "text": "Hi! 
I’m so sorry this thread has been so frustrating for everyone, but I appreciate all your help nonetheless!!I did actually take your suggestion and use $push instead\nIt worked immediately and I’m sure the O(n) is MUCH better than what I was tryingIf I may, however, I still have one more problem.And this is my fault, I suppose I should have followed a tutorial or something\nBut I’d like to know if there’s anyway I can improve my data structure for a better implementation of my code.See, the way I have it is in the document I have this Messages object\nand in that object I will be storing ALL the messages a user has received.the way I want to grab the messages I want for the user is with the following code\nI have otherPerson (always going to be someone else) and user (always going to be user)where messages will be ALL the messages the user has ever sent and res will be an array of the messages I want to grabHowever, it seems like the time complexity will get MUCH worse as n increases, is there any better way that I could have written my code? I mean, there’s always a better way I suppose but I’d like some tips if anyone’s willing to share thanks!", "username": "Erin_Nyx" }, { "code": "", "text": "See, the way I have it is in the document I have this Messages object\nand in that object I will be storing ALL the messages a user has received.Here, I would have done it the other way around. A user has greater control on the messages he sends compared to the messages he receives. So if Message is an object within the User, then I would store the message he sends.But in both cases, you might end up with a massive array anti-pattern.The approach to avoid the above would be to useThe Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data set\norLearn about the Bucket Schema Design pattern in MongoDB. This pattern is used to group similar documents together for easier aggregation.I like the bucket pattern to store old messages (ex: 1 bucket per month per user) and embedded array for recent both sent and received messages. Most of the time a User is more interested in the recent messages he sent and received. You have all that in one document. From time to time, he needs to access old messages, but I am pretty sure most people can tolerate a slower history access compared to current affairs.I also like to keep messages in both sender and receiver current and history. A sender might want to remove a message from its sent list while the receiver really wants to keep it, or the other way around.I suppose I should have followed a tutorial or somethingDefinitively recommended. The mongodb university gives good and free courses.", "username": "steevej" }, { "code": "", "text": "Awesome! Thank you so much for all your help, I’ll definitely check those patterns out!", "username": "Erin_Nyx" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
updateOne always returning modifiedCount: 0
2022-10-06T14:36:22.694Z
updateOne always returning modifiedCount: 0
5,626
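A minimal sketch of the $push approach recommended in the thread above, assuming a Mongoose-style `User` model with the schema described there; the helper names and the conversation filter are illustrative, not the poster's actual code:

```js
// Append one message atomically instead of rewriting the whole `messages` array.
async function addMessage(person, msg) {
  // msg is an object such as { from, to, content, date }
  const result = await User.updateOne(
    { usertag: person },
    { $push: { messages: msg } }
  );
  return result.modifiedCount; // 1 when the document was matched and changed
}

// Fetch only the conversation between two users without filtering in JS.
async function getConversation(user, otherPerson) {
  return User.aggregate([
    { $match: { usertag: user } },
    {
      $project: {
        messages: {
          $filter: {
            input: "$messages",
            as: "m",
            cond: {
              $and: [
                { $in: ["$$m.from", [user, otherPerson]] },
                { $in: ["$$m.to", [user, otherPerson]] }
              ]
            }
          }
        }
      }
    }
  ]);
}
```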
null
[ "configuration", "app-services-cli" ]
[ { "code": "{\n \"config_version\": \"-------\",\n \"app_id\": \"app_id\",\n \"name\": \"new-app-name\",\n \"location\": \"US-VA\",\n \"deployment_model\": \"GLOBAL\"\n},\n", "text": "Hello, I am simply trying to rename my app so that I don’t have to make a new one and spend an hour re-entering secrets.It does not appear that there is a way to change a Realm App’s name in the UI, but the documentation on Realm config files suggests that it should be possible to edit the name by pushing an altered config file.However, if I edit my config files such that I change the app name:and push via realm-cli push --remote=“app_id”,The name change does not register.How else might I accomplish this? Thanks!", "username": "Austin_Imperial" }, { "code": "\"name\"\"realm_config.json\"\"name\"", "text": "Hi @Austin_Imperial,When creating the App initially, the UI prompts that the name cannot be changed later as it is used internally:\nimage858×263 15.4 KB\nbut the documentation on Realm config files suggests that it should be possible to edit the name by pushing an altered config file.Could you link the specific documentation where this is suggested and also confirm if you’ve attempted to alter and then push the updated \"name\" value from within the \"realm_config.json\" file?I understand the response from this particular push (specifically in regards to the \"name\" value changing in the associated file) could be improved.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How To Rename Realm App?
2022-10-05T20:07:15.615Z
How To Rename Realm App?
2,770
null
[]
[ { "code": "", "text": "I am receiving alerts from my app service stating “An overall request rate limit has been hit”. The MongoDB cluster is an M10.I have searched for M10 limits but all I’ve been able to find is limits for M0, M2 and M5 here.How can I manage these alerts and increase the threshold?", "username": "Raymond_Brack" }, { "code": "", "text": "Hi @Raymond_Brack,I believe this is related to the Request Rate Limit rather than a limit on the cluster specifically. Can you try contact the Atlas support team via the in-app chat or raise a support case (as noted in the docs)?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks @Jason_Tran , I’ll get in contact with the support team.", "username": "Raymond_Brack" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Request Rate Limit
2022-10-11T00:01:27.680Z
Request Rate Limit
2,251
null
[]
[ { "code": "", "text": "Incompatible histories. Expected a Realm with no or in-realm history, but found history type 3Can anyone explain what this error means and why it is not possible to open the realm file ?This is a realm file created using Realm.writeCopy() and then opened as a local realm file.Any workaround for this ?Thanks", "username": "Duncan_Groenewald" }, { "code": "", "text": "BTW we are using realm-cocoa 10.17.0 and the file opens just fine using Realm Studio 11.1.0", "username": "Duncan_Groenewald" }, { "code": "", "text": "Also encountered this. There’s still very little information about this issue out there. I can’t seem to find any official documentation about what a “history” of a realm file is exactly. I contacted Mongo support about this and they referred me to this conversation: Error when opening Realm backup file: \"Incompatible histories. Expected a Realm with no or in-realm history, but found history type 3\" · Issue #7513 · realm/realm-swift · GitHub. In there it’s implied that one reason this happens is if you try to open a sync realm in a non-sync-read/write mode. In my case, I tried to open a sync realm file as a Dynamic Realm.", "username": "Eric_Klaesson" }, { "code": "", "text": "And FYI: In the Java SDK for Android (Realm 10.11.0), there’s a function SyncConfiguration.forRecovery(…) which can give you a realm config with which you can open sync realm files as dynamic realms.", "username": "Eric_Klaesson" } ]
Error opening realm file: "Incompatible histories. Expected a Realm with no or in-realm history, but found history type 3"
2021-11-04T09:53:45.219Z
Error opening realm file: “Incompatible histories. Expected a Realm with no or in-realm history, but found history type 3”
2,062
null
[ "atlas-search", "ruby", "mongoid-odm" ]
[ { "code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"deleteStatus\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"boolean\"\n }\n ],\n \"name\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"autocomplete\"\n }\n ],\n \"number\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"autocomplete\"\n }\n ],\n \"user\": {\n \"type\": \"objectId\"\n }\n }\n }\n}\n{\n\t\t\t\t\"$search\": {\n\t\t\t\t\"index\": \"contactsearch\",\n\t\t\t\t \"compound\": {\n\t\t\t\t\t\"filter\": [{\n\t\t\t\t\t\t\"equals\": {\n\t\t\t\t\t\t \"value\": false,\n\t\t\t\t\t\t \"path\": \"deleteStatus\"\n\t\t\t\t\t\t }\n\t\t\t\t\t },\n\t\t\t\t\t {\n\t\t\t\t\t\t\"equals\": {\n\t\t\t\t\t\t \"value\": MongoId(parentId),\n\t\t\t\t\t\t \"path\": \"user\"\n\t\t\t\t\t\t }\n\t\t\t\t\t }]\n}\n}\n}\n", "text": "here is the index defination not fetching any data with respect to user(ObjectId ) if I remove this it will fetch according to filter let me know how can we resolve this issue?query:", "username": "Milan_Zadfiya" }, { "code": "{\n\t\"equals\": {\n \t \"value\": MongoId(parentId),\n\t \"path\": \"user\"\t\n }\n}\nuser", "text": "Hi @Milan_Zadfiya - Welcome to the community.not fetching any data with respect to user(ObjectId ) if I remove this it will fetch according to filterJust to confirm, if you remove the following, the query works as expected?Additionally, can you provide 3-4 sample documents and the expected output with the user filter?Regards,\nJason", "username": "Jason_Tran" } ]
Atlas search with MongoDB ObjectId is not working
2022-10-06T08:20:26.866Z
Atlas search with MongoDB ObjectId is not working
2,366
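One possible cause of the behaviour described above is passing the user id as a string rather than a BSON ObjectId — the `equals` operator on an `objectId`-mapped path only matches real ObjectIds. A hedged Node.js sketch (the collection name is a placeholder):

```js
const { ObjectId } = require("mongodb");

const pipeline = [
  {
    $search: {
      index: "contactsearch",
      compound: {
        filter: [
          { equals: { path: "deleteStatus", value: false } },
          // must be an ObjectId instance, not the raw parentId string
          { equals: { path: "user", value: new ObjectId(parentId) } }
        ]
      }
    }
  }
];
const docs = await db.collection("contacts").aggregate(pipeline).toArray();
```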
null
[ "node-js" ]
[ { "code": "", "text": "What is the best way to batch multiple queries into a single request? My goal is to avoid incurring the multiplied latency from serial requests, but also avoid opening too many connections for parallel requests. Ideally I would be able to say db.bulkRead([query1, query2, query3]) and get back a (potentially paginated) list of results.I’m also curious if the driver automatically does something like this (i.e. if I send two requests with maxPoolSize=1, will the driver wait for the first request to fully complete before issuing the second request? Or could it send both across a single connection, and just get the response streamed in after?). I would prefer a system where I could choose what to batch to ensure large queries aren’t mixed into batches of cheap queries.From my research so far, it seems like this exists but only with bulkWrites, and I’m not sure why it wouldn’t exist for bulkRead as well.", "username": "Alex_Coleman" }, { "code": "", "text": "Hi @Alex_Coleman ,Making the poolsize to one will not increase on parallel execution of queries. On the other hand it will harm it.The best way to batch data fetch in MongoDB is to store it embedded in documents. Meaning that if you need to always fetch related data together store it together in a single document. If not possible store it in a chain of documents in a single collection with a predict that you can identify for example an id for a batch grab indexed.Other less recommended technics are using a unionWith aggregation or a $facet search with multiple facets as batches or $lookup to access different collections in a single query.I recommend reading some of our blogs on data designHave you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!A summary of all the patterns we've looked at in this seriesThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,Thanks for the answer. I understand that a higher poolsize is best for parallel execution of queries, but I was just curious the exact effect of using a single connection. In particular, if execution was serial or just the writing of query/response to the DB was serial (which would be less of an issue since it doesn’t incur latency cost twice).I agree its best to store related data together in similar documents and such. However, across a large application issuing many queries it would be nice if there was a way to batch unrelated queries. Facet seems like exactly the answer I’m looking for, but it doesn’t support indexes on the contents. Do you know if there’s any way to do something similar? Basically I just want a {$or: [$query1, $query2]}, but with the results of each query labeled. Thanks!", "username": "Alex_Coleman" }, { "code": "", "text": "Hi @Alex_Coleman ,With facets you can achieve several answers each is labled at another field.You can run both and than use a $project to return only one based on a condition.Be cautious that since data is returned in a single document for all queries it cannot cross 16MB of returning data so use limit if necessary to fetch just a portion of the data.Otherwise, you can use your clients to run many queries in parallel from the driver stand point. I used promise arrays to run it via node js.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Best way to batch queries
2022-01-20T01:58:59.366Z
Best way to batch queries
10,208
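Two sketches of the approaches mentioned above — a promise array for parallel driver queries, and a single labeled $facet request. Collection names and filters are placeholders:

```js
// 1) Parallel queries from the client: one round trip each, issued concurrently.
const [orders, users, stats] = await Promise.all([
  db.collection("orders").find({ status: "open" }).toArray(),
  db.collection("users").find({ active: true }).limit(50).toArray(),
  db.collection("stats").findOne({ _id: "daily" })
]);

// 2) One request with labeled sub-results via $facet.
//    Note: $facet sub-pipelines do not use indexes, and the combined
//    result document must stay under 16 MB, so $limit is advisable.
const [batch] = await db.collection("orders").aggregate([
  {
    $facet: {
      query1: [{ $match: { status: "open" } }, { $limit: 100 }],
      query2: [{ $match: { status: "closed" } }, { $count: "n" }]
    }
  }
]).toArray();
// batch.query1 -> open orders, batch.query2 -> [{ n: <count> }]
```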
null
[ "node-js" ]
[ { "code": "affiliatePaymentsaffiliateCommissions_id: ObjectId('631a5067hcfeb5940686f144')\nemail: “[email protected]”\nhashedPassword: “test”\ndateCreated: 2022-09-08T20:28:23.766+00:00\nplan: \"FREE\"\ndirectAffiliateSignup: false\nreferralPromoCode: \"f2f11fed-ab02-46df-8eff-ba3d8cc3fb93\"\naffiliateRate: 0.3\naffiliatePayments: [{date: 2022-07-27, amount: 2, id: someMongoId}, {date: 2022-08-23, amount: 3, id: someMongoId}]\naffiliateCommissions: [{date: 2022-08-01, amount: 10, id: someMongoId}, {date: 2022-09-01, amount: 4, id: someMongoId}]\ntypepaymenttypecommission[{date: 2022-07-27, amount: 2, id: someMongoId, type: ‘payment’}, {date: 2022-08-01, amount: 10, id: someMongoId, type: ‘commission’}, {date: 2022-08-23, amount: 3, id: someMongoId, type: ‘payment’}, {date: 2022-09-01, amount: 4, id: someMongoId, type: ‘commission’}]\naffiliatePaymentsaffiliateCommissionsaffiliatePaymentsaffiliateCommissionstypeaffiliateTransactionsaffiliatePaymentsaffiliateCommissionsaffiliateTransactionsaffiliateTransactionsaffiliatePaymentsaffiliateCommissionsaffiliatePaymentsaffiliateCommissionsaffiliatePaymentsaffiliateCommissionsaffiliateTransactionsaffiliateTransactions", "text": "This topic is all about what is more performant. I am new to MongoDB & NoSQL and this may be obvious to some, but I am trying to figure out the most efficient way of making my back-end allow the front-end to fetch paginated data that combines two fields. I essentially want to know whether plan A or B is best - that is, either storing more data in the database vs storing less data but having to amend, merge and sort the data before returning it to the front-end.To provide some context, I want the back-end to merge the affiliatePayments and affiliateCommissions fields in date order and expose the pagination-friendly result to the front-end via an API.Current user model (with an example user):It’ll also need to be able to differentiate between each affiliatePayment and affiliateCommission, so each affiliatePayment will need a type field with a payment value and the affiliateCommission will need a type field with a commission value.So the API should return something like this example data:As the affiliatePayments & affiliateCommissions fields don’t have a type field stored in the database, I assumed the plan to return the correct data would be something like this:Plan A:Plan B, however, would involve simpler processing but more data fields stored in the database: the affiliatePayments and affiliateCommissions would each have a type stored in each array entry, and an affiliateTransactions field would exist in the database that would already contain the example data as above - which is essentially duplicates of affiliatePayments and affiliateCommissions but just merged into an additional affiliateTransactions field.Plan B:So my key question is this: which plan is more performant?Is it better to store a merged affiliateTransactions list (and extra type fields on affiliatePayments and affiliateCommissions) in the database? It’ll essentially copy affiliatePayments and affiliateCommissions. Or is it better to store less data in the database & accept the more expensive query processing that’ll presumably use more bandwidth?Am I correct in thinking I should be reluctant to add extra fields to the database? It would make it much easier, but isn’t it unclean, inefficient and more expensive? I don’t know the scale of the tradeoff between the two options. I could be making wrong assumptions, i.e. 
does it cost bandwidth to query data more, does it cost more to store more data, etc.Any advice would be greatly appreciated here. Thank youNote: I also wonder whether I should get rid of the affiliatePayments and affiliateCommissions fields and just have one affiliateTransactions field (that contains the payments and commissions data)? Note that I have a separate affiliate payments query for just getting the payments, and another for commissions. Having them in separate lists would prevent the need to filter from a single affiliateTransactions field.", "username": "Nick_Smith" }, { "code": "skip+limit", "text": "Hi @Nick_Smith and welcome to the community!!According to plan A, if performing these modifications, reduces the query execution time, the recommendation would be to do so.However, for plan B, if adding data into each of the document increases the performance, this could also be a preferred method.But please note that skip+limit is not the most performant way to do the pagination.\nTo avoid that, range queries or bucket pattern would be useful.Generally there are tradeoffs between disk space & execution time as far as I understand your use case. Depending on your hardware, plan A might be performant enough for 99% of your use case, but plan B might be better once the collection hits a certain size.However I would note that in most cases, the less work the database has to work to return the result, the more performance it can give. That is, the plan that requires the least amount of steps would usually be relatively more performant compared to the more complex execution plan. But I would not put this generalisation into production without extensive testing with the hardware you have, along with the workload you’re expecting in your day-to-day operation.Am I correct in thinking I should be reluctant to add extra fields to the database?This depends on how much data you’re adding to each document. If extra fields simplifies your workflow, queries, and results in better performance, I don’t think you should hesitate adding fields. But again this also depends on extensive testing using your expected workload.Let us know if you have any further questions.Best regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Thank you very much Aasawari. I will take your advice and I think I will choose option B, and research the bucket pattern you suggested.Best wishes,\nNick", "username": "Nick_Smith" }, { "code": "", "text": "Quick question: I have read the bucket pattern article and noticed that the history array is an unbounded array. I thought this would be an anti pattern since the array could be infinite size and breach the 16MB size? If this is true, how can pagination be improved? I can see how skipping is less efficient though, especially as the data grows.", "username": "Nick_Smith" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
The most performant method of querying paginated data with MongoDB
2022-10-03T23:01:18.184Z
The most performant method of querying paginated data with MongoDB
3,309
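A sketch of the range ("keyset") pagination alternative to skip+limit referred to above, assuming the transactions are stored one-per-document in their own collection (which also avoids the unbounded-array concern); all names are placeholders:

```js
// Requires an index such as { userId: 1, date: -1, _id: -1 }.
async function getTransactionsPage(userId, lastSeen) {
  // lastSeen is { date, _id } of the previous page's last row, or null for page 1
  const filter = { userId };
  if (lastSeen) {
    filter.$or = [
      { date: { $lt: lastSeen.date } },
      { date: lastSeen.date, _id: { $lt: lastSeen._id } }
    ];
  }
  return db.collection("affiliateTransactions")
    .find(filter)
    .sort({ date: -1, _id: -1 })
    .limit(20)
    .toArray();
}
```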
null
[ "connecting" ]
[ { "code": "", "text": "MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted. Make sure your current IP address is on your Atlas cluster’s IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/NOTE: Local Server", "username": "Joao_Vitor_Soares" }, { "code": "", "text": "Can you connect by shell?\nShow your connect string and others parameters you are using in your code", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I can through docker, but not through Node.this is the code part\n\nrespMongo1481×195 39.5 KB\n", "username": "Joao_Vitor_Soares" } ]
Error connecting mongoDB
2022-10-07T19:36:48.450Z
Error connecting mongoDB
1,192
null
[ "swift" ]
[ { "code": "Terminating app due to uncaught exception 'RLMException', reason: 'Invalid value '1' of type '__NSCFNumber' for 'string' property 'RealmProfile.id'.'if (oldSchemaVersion < 3) {\n migration.enumerateObjects(ofType: RealmProfile.className()) { oldObject, newObject in\n newObject![\"id\"] = \"\\(oldObject![\"id\"] ?? 1)\"\n }\n}\n", "text": "Hi guys,so, we have an app in production, that now needs a few changes. One issue I’m encountering is, I need to change the primaryKey type of a model from Int to String.\nRight now when launching I’m getting:Terminating app due to uncaught exception 'RLMException', reason: 'Invalid value '1' of type '__NSCFNumber' for 'string' property 'RealmProfile.id'.'I was trying to solve this with:but to no avail.Any help would be really appreciated.", "username": "Lila_Q" }, { "code": "idas! Int", "text": "At first glance, 1 is not a String? Do some objects not have an id? e.g. id was optional? Wouldn’t something like this (pseudo code) work?let oldId = oldObject![“id”] //maybe need as! Int\nlet newId = String(old)\nnewObject![“id”] = newId", "username": "Jay" }, { "code": "", "text": "That is basically what I have in my (not working) attempt in the code box.The id was set to a fixed value before, so every old object has the id 1 so far, and it needs to be a string now.", "username": "Lila_Q" }, { "code": "newObject![\"id\"] = \"\\(oldObject![\"id\"] ?? 1)", "text": "newObject![\"id\"] = \"\\(oldObject![\"id\"] ?? 1)I don’t believe that code is equivalent; your goal is to assign a string as the id instead of an int, and that line of code will assign an int - the number 1every old object has the id 1That’s concerning as Primary Key’s must be unique so every object should not have an id of 1.There’s nothing in your code that casts the Int value to a String so that will be needed.", "username": "Jay" }, { "code": "", "text": "Not concerning up until since there was only one object present so far, which needed to be identified from wherever. So, only with the change the objects will of course have different ids.I don’t know why you think I’m assigning an Int in that code though, as there are string quotation marks, where I have an implicit cast of either the oldObject id or 1 to a string.", "username": "Lila_Q" }, { "code": "(ofType: RealmProfile.className())RealmProfile.className()", "text": "Oh - you mean a String of the Int. I thought you meant you wanted to change the type and value; e.g. an ObjectId type is often used as a primary key due to its uniqueness and self-generating nature. My apologies.Can we see what (ofType: RealmProfile.className()) - RealmProfile.className() looks like?", "username": "Jay" } ]
Migrate primaryKey type?
2022-10-07T20:47:40.882Z
Migrate primaryKey type?
2,419
null
[]
[ { "code": "", "text": "Hei there!I have an error that was mentioned once in a discussion here. In my Realm app, I use jwk authentication and try to login from a client. As usual, the request is sent with an auth (bearer) header containint the jwk. My problem is, mongodb realm does not validate the key and constantly gives me the error:‘value of ‘kid’ has invalid format’.The ‘kid’ value of my jwt header has a vlaue pointing to the key specified in the .jwks url.Does anyone already encounter this problem?Thanks,\nNicolas", "username": "Nicolas_Muller" }, { "code": "", "text": "Hi Nicolas,Thanks for posting your first question and welcome to the community!\nAs it has been a while since you posted this, are you still experiencing this error?Did you follow the instructions for setting up JWT Authentication as per the article below?Was the authentication ever working previously or are you implementing this for the first time?‘value of ‘kid’ has invalid format’.This error should not happen unless the wrong token is being sent or it has been inadvertently modified.If you’re still seeing this issue it might be best to raise a support ticket with us with the following information:http://cloud.mongodb.com/supportRegards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "I know this is old, but I’m having this issue as well. The kid in the token matches the jwks.json endpoint and has the correct audience as set up in the config. This is my first time trying to set this up and it has never worked. Any help would be appreciated.", "username": "Troy_Stolte" }, { "code": "", "text": "We’re also interested in this thread as we’re getting the ‘value of ‘kid’ has invalid format’ error too when trying to integrate our realm with AWS Cognito JWT tokens. It looks like people have done this before (though online doco and examples are limited) but we can’t get past this error. The rest of our settings look okay and the key ID in the token matches the JWKS exactly. One thing we did notice reading the spec is that the Cognito JWT headers don’t include the “typ=JWT” field which is listed as required by Mongo for auth. I’m not sure if that’s related though again there seems to be a precedent for this working.This is a big factor on whether we shift our backend to MongoDB as we want to stay serverless as much as possible and identity integration is important to us. If anyone has any thoughts or example Cognito configuration they could share that would be much appreciated.", "username": "Andrew_Schafer" }, { "code": "jwtTokenString: [auth token]", "text": "Found the solution to this for me. When you send the token to your service, you send it with the following headerjwtTokenString: [auth token]so don’t send it has a bearer token. Also, make sure you following the steps in the link provided by the Manny above https://docs.mongodb.com/realm/authentication/custom-jwt/ ", "username": "Troy_Stolte" } ]
value of 'kid' has invalid format
2021-02-06T12:36:02.437Z
value of ‘kid’ has invalid format
3,310
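If the client is JavaScript, the Realm Web SDK builds the login request (and its headers) for you, which sidesteps hand-rolled Authorization headers entirely; a hedged sketch with a placeholder app id:

```js
import * as Realm from "realm-web";

const app = new Realm.App({ id: "my-realm-app-id" }); // placeholder id

async function loginWithJwt(jwtFromYourIdp) {
  const credentials = Realm.Credentials.jwt(jwtFromYourIdp);
  return app.logIn(credentials);
}

// If you call the HTTP endpoint yourself instead, note the thread's finding:
// the token goes in a `jwtTokenString` header, not `Authorization: Bearer ...`.
```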
null
[ "java", "production" ]
[ { "code": "", "text": "The 4.7.2 MongoDB Java & JVM Drivers release is a patch to the 4.7.1 release.The documentation hub includes extensive documentation of the 4.7 driver.You can find a full list of bug fixes here .", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Java Driver 4.7.2 released
2022-10-10T16:22:52.185Z
MongoDB Java Driver 4.7.2 released
3,042
https://www.mongodb.com/…_2_1024x576.jpeg
[ "serverless", "kotlin", "dublin-mug" ]
[ { "code": "Technical Sourcer at MongoDB Developer Advocate at MongoDBAssociate Front End Engineer at Genesys", "text": "G’Day Folks,Dublin MUG is back with another event on Oct 05, 2022.\nDublinMUG05v21920×1080 218 KB\n\nJennifer1285×1287 321 KB\nJennifer Murphy\nTechnical Sourcer at MongoDB \nimage (5)512×512 402 KB\nMohit Sharma\nDeveloper Advocate at MongoDB\ncarolina600×600 45.2 KB\nCarolina Cabo\nAssociate Front End Engineer at GenesysEvent Type: In-Person\nMongoDB, Ballsbridge, DublinTo RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Volunteers will be at the gate for door opening. Please arrive on time.Please sign in at the reception iPad when you enter inThe event will take place in the Office cafeteria on the third floor. Access to the third floor will be given by the volunteer.Doors will close at 18:15 PM. Contact +353- 899722424 if you come after thatPlease be respectful of the workplace.I welcome you all to join the Dublin MongoDB User group, introduce yourself, and I look forward to collaborating with our developer community for more exciting things Cheers, ", "username": "henna.s" }, { "code": "", "text": "Some Highlights of our Dublin MUG Event on Oct 05, 2022\nphoto_2022-10-10 14.54.131280×960 272 KB\n\nphoto_2022-10-10 14.54.161280×960 291 KB\n\nphoto_2022-10-10 14.54.191280×960 246 KB\n", "username": "henna.s" } ]
Dublin MUG: Let's talk Kotlin Multiplatform Apps, CV Tips and Career changes to Software Development
2022-09-28T18:49:25.589Z
Dublin MUG: Let’s talk Kotlin Multiplatform Apps, CV Tips and Career changes to Software Development
3,819
null
[ "compass", "mongodb-shell" ]
[ { "code": "", "text": "Hello, I created a file with some code and I want to run it as a script in mongosh (in Mongo Compass). The problem I encounter is that I cannot load a file. I tried to use “load” function but I get the message that this method is not implemented yet. I tried to use .load and I getError:(#) clone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,“column\"in r?r.column:……)} could not be cloned.”.My question is: It is even possible to run files in Mongo Compass or should I just copy the code from the file and run it directly ?", "username": "Dan_Muntean" }, { "code": "mongosh", "text": "Compass currently does not support loading files. You can either copy-paste the script as you suggested or you can run it with mongosh in the terminal.Alternatively, if that’s something you do often and you are a VS Code user, I’d recommend checking out MongoDB for VS Code.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Run script files in Mongo Compass
2022-10-10T13:07:06.494Z
Run script files in Mongo Compass
8,483
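For reference, the two mongosh options mentioned above look roughly like this (the file path is a placeholder):

```js
// Inside an interactive mongosh session:
load("/path/to/myScript.js"); // executes the file in the current shell

// Or run it non-interactively from the terminal:
//   mongosh "mongodb://localhost:27017/mydb" --file myScript.js
```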
null
[ "aggregation" ]
[ { "code": "db['mycollection'].aggregate([\n {\n \"$group\": {\n \"uniqueList\": {\n \"$addToSet\": \"$secondField\"\n },\n \"_id\": {\n \"GroupedField\": \"$firstField\"\n }\n }\n }\n ]);\n", "text": "An aggregate query that previously ran quickly in Mongo 4.0 runs slowly (actually, it hangs, causing 100% CPU until killed) in Mongo 6.0.2.The query in question is below and is attempting to get the number of documents in a collection grouped by “firstField”, counting the documents based on the uniqueness of “secondField”. secondField is a small value, around 60 bytes, and is indexed too. According to an explain of this query though, no index is being utilized (even if I hint it, there’s no performance improvement). This query runs fine on smaller collections but on this particular collection where there are hundreds of thousands of unique values for secondField for a given value of firstField, the query just hangs forever. Switching ‘allowDiskUse’ to true/false makes no difference. I’ve also tried setting internalQueryMaxAddToSetBytes to a 10x value but no difference either.I’ve been banging my head against the wall for over two days on this issue so am hoping that someone can tell me why this is slow with 6.0 and wasn’t with 4.0, or if there’s a more modern way to aggregate to get the results I’m after.Thank you!", "username": "smock" }, { "code": "", "text": "I’ve been investigating this further, and have even spun up a separate MongoDB 4.0 cluster (same specs) to test alongside the 6.0.2 cluster. I’ve found two things:On both clusters, with the same data (1.3m documents), running the exact same query, same indexes, same specs, etc. the 4.0 database doesn’t resort to use disk for the $group at all (explicitly disabled with allowDiskUse: false). On the 6.0.2 cluster however, not only does it use disk but uses it even with 100,000 documents (tested by inserting a $limit before the $group). Why is resorting to disk so aggressive in 6.0? Is this configurable at all?When it does resort to disk, the performance difference is significant (not surprising). I see that an “internal-xxx-xxxx.wt” file is created in my mongodb data directory, growing to 10MB size (tiny), but does so slowly - takes about 3 minutes just to get to this size, growing about 3MB per minute, then stopping at that size before decreasing to 8MB. I/O on the impacted volume is incredibly low (oplog actually sits on a separate disk). Meanwhile CPU on one of the cores of the 8-core, 64GB RAM machine is at 100%.This feels to me like a bug but I’d appreciate confirmation either way to ensure I’m not wasting my time here.", "username": "smock" }, { "code": "", "text": "Just an idea I would try.Start with a $sort on firstField. In principal, a $group stage does not output any document before all input documents are processed. That is logocal since it does not know if more documents will have the same group _id. But if they are sorted, then it may.$addToSet might be O(n2), so I would try to sort on firstField:1,secondField:1, then $group with _id:{firstField,secondField} using $first of secondField. May be a group document with only $first accumulator gets output right away. The another $group with _id:firstField, that uses $push:secondField rather than $addToSet.", "username": "steevej" }, { "code": "db.adminCommand({'setParameter': 1, internalQueryForceClassicEngine: true });\n", "text": "I think I’ve narrowed down the issue to the Slot-Based Engine. I’ve just forced the use of the classic query engine with:No more issues. 
The performance of the full collection aggregation is just as fast as it was on 4.0. I also don’t see any spikes in CPU or memory (or at least, the query finishes too fast for me to be able to spot it in my graphs).I’m assuming the performance of the SBE in this case is not working as intended, since I imagine the SBE should be much faster and not require disk use more than the classic engine?", "username": "smock" }, { "code": "", "text": "Created a bug for this issue: https://jira.mongodb.org/browse/SERVER-70395", "username": "smock" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$addToSet slower/hanging on large collections since MongoDB 6
2022-10-09T19:28:50.348Z
$addToSet slower/hanging on large collections since MongoDB 6
2,242
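A sketch of the "sort first, then group twice" shape suggested earlier in the thread, using the field names from the original pipeline; note it returns a distinct count per firstField rather than the set itself:

```js
db.mycollection.aggregate([
  { $sort: { firstField: 1, secondField: 1 } },
  // collapse duplicate (firstField, secondField) pairs
  { $group: { _id: { f: "$firstField", s: "$secondField" } } },
  // count distinct secondField values per firstField
  { $group: { _id: "$_id.f", uniqueCount: { $sum: 1 } } }
], { allowDiskUse: true });
```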
https://www.mongodb.com/…_2_1023x195.jpeg
[ "node-js", "mongoose-odm", "indexes", "serverless" ]
[ { "code": "createdAt: {type: Date, default: Date.now, expires: 86400}\ncreatedAt: {type: Date, default: Date, expires: 86400}\n", "text": "Hi Team, this post will hopefully save someone time if they encounter the same issue I did.I recently moved my app onto a Serverless cluster and started seeing high RPU / WPU stats under the monitoring of the cluster in Atlas.\nScreen Shot 2022-07-25 at 12.05.52 PM2498×476 93.7 KB\nThis was strange because no one was using my app. I contacted support who confirmed that this was unexpected for a cluster that wasn’t meant to be doing any work. Even disconnecting my app completely from Atlas and blocking all network traffic didn’t resolve the issue. Only dropping the DB, or more specifically, the TTL index on one of my collections, stopped the high RPU/WPU usage.It turns out the culprit appears to be the way my Schema is configured in Mongoose…Bad:Good:After chaning Date.now to Date, my WPU is basically zero per second when the app isn’t doing anything, while RPU is sitting at about 8/s.Using Date.now somehow causes Atlas to do some kind of endless loop???\nUsing Date seems to work just fine.It would be interesting if someone can explain why this is a problem in Atlas Serverless.\nI only picked it up because I noticed my billing was much higher than expected so I’m not sure if this is also a problem in my dev environment, but I simply never noticed.Cheers, Daniel.", "username": "Daniel_Kempthorne" }, { "code": "", "text": "Hi,\nMy name is Vishal and I work as a Product Manager in MongoDB Atlas Serverless team. The code snippet you posted seems to be creating a TTL index which automatically deletes your document based on the expires flag. TTL indexes are interesting and by definition will result in a very very small number of WPUs so that the system can check when to delete your document. I was curious whether the Good code snippet you posted actually works - one reason you could be seeing this behavior is because the change never actually creates TTL index. Please let me know?", "username": "Vishal_Dhiman" }, { "code": "", "text": "In my case I have the same or very similar problem, but in my case I am using Prisma ORM, I have tried to apply a solution similar to yours, but I have not been successful.For the case of applications that use Prisma ORM, do you know where the problem could be and how I could solve it?Thanks!", "username": "TattoosClan_Team" } ]
High RPU/WPU with Atlas Serverless and Mongoose
2022-07-25T00:22:37.272Z
High RPU/WPU with Atlas Serverless and Mongoose
3,256
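For context, the Mongoose `expires` option boils down to an ordinary TTL index on `createdAt`; the equivalent explicit index is easy to create and inspect in mongosh (collection name is a placeholder):

```js
db.mycollection.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 86400 } // documents removed ~24h after createdAt
);
db.mycollection.getIndexes(); // confirm the TTL index exists as expected
```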
null
[ "replication", "kafka-connector" ]
[ { "code": "\"config\":{\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"key.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"key.converter.schemas.enable\": true,\n \"value.converter.schemas.enable\": true,\n \"connection.uri\": \"mongodb://mongoadm:mongoadm@mongodb1:27017,mongodb2:27017,mongodb3:27017/?replicaSet=rs01\",\n \"topic.prefix\" : \"metaDB\",\n \"database\":\"metaDB\",\n \"tasks.max\":1,\n \"poll.max.batch.size\":1000,\n \"poll.await.time.ms\" :5000,\n \"collection\": \"conntest\",\n \"copy.existing\" : true,\n \"batch.size\" : 0,\n \"change.stream.full.document\":\"updateLookup\"\n\t}\n\n", "text": "Messages are not sent to kafka topic after failover. I’m using the mongodb source connector.I checked in connect log, it seems that the old primary (mongodb1) automatically connects to the new primary (mongodb2) after server down.\nHowever, even though the connector status is running, no message is sent to the topic. No errors have occurred.Task restart, connector restart, pause, resume… No matter what I do, the message was not sent to the topic. Instead, I found that when the old primary(mongodb1) starts up, messages are sent to the topic again. I hope that when the new primary is elected, messages will continue to be sent to the topic.What setting is required for mongodb or mongodb source connector? Should I do additional setting for this?", "username": "minjeong.bahk" }, { "code": "", "text": "It might be by design as change streams will pause notifications in this node failure scenario", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
After fail-over, the message is not sent to kafka
2022-10-07T08:27:45.488Z
After fail-over, the message is not sent to kafka
2,159
null
[ "node-js" ]
[ { "code": "", "text": "I am using MongoDB Realm for authentication in my Rest API writen in node js.When I do the deployment that Realm authorization token getting expired for all the users. My assumption is MongoDb Realm is creating temp files inside the application directory and using that for authorization. Users are getting logged out because of this files are getting deleted during the deployment.If anyone has some idea please help me outAre this temporary files needed?", "username": "VISHNU_KUMAR" }, { "code": "mongodb-realm/<app-id>/server-utility/metadata.realmloginlogout", "text": "Hi @VISHNU_KUMAR ,Users are getting logged out because of this files are getting deleted during the deployment.Yes, that’s correct: the files inside the mongodb-realm/<app-id>/server-utility/metadata folder keep some information about the users that have logged in. It’s also possible to open the .realm file with Realm Studio, and check the identities. If, as part of your deployment, you delete that folder, that information is gone, and a new login is needed (even though you never did an explicit logout).", "username": "Paolo_Manna" }, { "code": "", "text": "Hi @Paolo_Manna\nI will explain the deployment process we follow. Each time we make changes in the source code, During the deployment, we create a new docker image. This new image gets deployed in AWS EC2. Even if we are doing the source code change in a single file. The new docker image will replace the old docker image. (We are following this method from the beginning of this project) So my assumption is the temporary files associated with old docker images get lost during the deployment process. So all the users are getting logout. Is there method to get old realm tempfile?", "username": "VISHNU_KUMAR" }, { "code": "", "text": "Hi @VISHNU_KUMAR ,During the deployment, we create a new docker image.So, if I understand it correctly, you create what amounts to Realm’s idea of a “new” device, but using an already existing DB? If that’s the case, it can’t work: Device Sync assumes that each remote device has a different unique id, in order to track which changes have been applied, and which haven’t. Keeping the same local data file, while essentially changing the whole environment around it, basically means that Realm can only discard it, and get it again from scratch.I may have misunderstood (installing a client in Docker isn’t a usual setup), feel free to clarify the details and the rationale behind them.", "username": "Paolo_Manna" } ]
MongoDB Realm all users automatically logging out after deployment
2022-09-21T08:43:54.050Z
MongoDB Realm all users automatically logging out after deployment
2,115
null
[ "aggregation", "queries", "node-js", "data-modeling", "compass" ]
[ { "code": "", "text": "I am on v1.33.1 of MongoDB Compass and it allowed me to create a view that ends with a “.”\nNow when I am trying to open the view or do any operations on it. It says \"Collection names must not start or end with ‘.’ \"\nI tried deleting using the compass as well as CLI using the command db[‘collection name.’].drop() and it failed with the above message for both.\nI guess this is a bug in MongoDB Compass where it won’t validate the name while creating the view and validates on operations and delete.", "username": "Voxelli" }, { "code": "", "text": "Try square bracket notation\nCheck this link", "username": "Ramachandra_Tummala" }, { "code": "db.getCollection('collection name.').drop()\n", "text": "You can try", "username": "Massimiliano_Marcon" }, { "code": "", "text": "\nimage777×290 13.3 KB\n\n@Massimiliano_Marcon Doesn’t work. I wonder why it allowed me to create it first of all", "username": "Voxelli" }, { "code": "", "text": "Tried those methods, but nothing seems to work.\n", "username": "Voxelli" }, { "code": "db.runCommand({drop: 'Adv Alts Detection.'})\n", "text": "Yes, Compass should not let you create that view. We’ll fix it.In the meantime, you should be able to delete it with", "username": "Massimiliano_Marcon" }, { "code": "", "text": "It worked, thank you very much.", "username": "Voxelli" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to delete a collection which ends with a "."
2022-10-09T20:34:39.970Z
How to delete a collection which ends with a “.”
2,630
null
[ "queries" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"633ab3c11e6e97b6332f56a1\"\n },\n \"orders\": [\n {\n \"date\": \"6/10/2022\",\n \"productDetails\": {\n \"0\": {\n \"name\": \"Top\",\n \"price\": \"1235\",\n \"quantity\": 1,\n \"status\": \"placed\"\n },\n \"1\": {\n \"name\": \"Shirt\",\n \"price\": \"1235\",\n \"quantity\": 1,\n \"status\": \"placed\"\n }\n },\n \"billingAddress\": {\n \"address\": \"My Address\",\n \"city\": \"City\",\n \"state\": \"State\",\n \"country\": \"Country\",\n \"pincode\": \"123456\",\n \"contact\": \"1234567890\"\n },\n \"paymentMode\": \"cod\"\n },\n {\n \"date\": \"6/10/2022\",\n \"productDetails\": {\n \"0\": {\n \"name\": \"Top\",\n \"price\": \"1235\",\n \"quantity\": 3,\n \"status\": \"placed\"\n },\n \"1\": {\n \"name\": \"Shirt\",\n \"price\": \"1235\",\n \"quantity\": 1,\n \"status\": \"placed\"\n }\n },\n \"billingAddress\": {\n \"address\": \"My Address\",\n \"city\": \"City\",\n \"state\": \"State\",\n \"country\": \"Country\",\n \"pincode\": \"123456\",\n \"contact\": \"1234567890\"\n },\n \"paymentMode\": \"cod\"\n },\n {\n \"date\": \"6/10/2022\",\n \"productDetails\": {\n \"0\": {\n \"name\": \"Shirt\",\n \"price\": \"234\",\n \"quantity\": 1,\n \"status\": \"placed\"\n },\n \"1\": {\n \"name\": \"Top\",\n \"price\": \"123\",\n \"quantity\": 1,\n \"status\": \"placed\"\n }\n },\n \"billingAddress\": {\n \"address\": \"My Address\",\n \"city\": \"City\",\n \"state\": \"State\",\n \"country\": \"Country\",\n \"pincode\": \"123456\",\n \"contact\": \"1234567890\"\n },\n \"paymentMode\": \"cod\"\n }\n ]\n}\n", "text": "This is the structure of my db,\nI want to update the status for each as placed, updated and canceled.", "username": "Muhammed_Sibly_B" }, { "code": "", "text": "Hello @Muhammed_Sibly_B,Welcome to The MongoDB Community Fourms! Can you please provide below details for me to direct you towards a solution for your requirement?Regards,\nTarun", "username": "Tarun_Gaur" } ]
Please suggest a solution for updating an element inside of an object array which is also inside of an array object
2022-10-06T08:42:52.300Z
Please suggest a solution for updating an element inside of an object array which is also inside of an array object
976
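The thread above does not reach a resolution; one common way to update a single entry of the `orders` array is an arrayFilters update, sketched here on the assumption that an order can be identified by its `date` (the collection name and matching field are hypothetical):

```js
// Mark product "0" of the order dated 6/10/2022 as "shipped".
db.users.updateOne(
  { _id: ObjectId("633ab3c11e6e97b6332f56a1") },
  { $set: { "orders.$[o].productDetails.0.status": "shipped" } },
  { arrayFilters: [{ "o.date": "6/10/2022" }] }
);
```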
null
[ "dot-net" ]
[ { "code": "", "text": "I have a C#/WPF application has has been developed with several class instances. Of course these are not persistent. I would like to bind the class instances to MongoDB collections for persistence and to enable backups.\nHow can I bind a class instance to a MongoDB collection so as the content of the class instance changes the changes will be reflected in the MongoDB collection?", "username": "Duane_Sniezek" }, { "code": "using MongoDB.Bson;\nusing MongoDB.Driver;\n\npublic class BeginSimple : BaseClass\n{\n // Here we define the objects we use in the database\n public class Info\n {\n public int x, y;\n }\n\n public class DBInfo\n {\n public ObjectId id; // You MUST add an id or Id or _id variable or field. See 04 - Attributes to change this name\n // ObjectId is the natural type for id in MongoDB but you can use simple type like int, long, string, etc.\n // ObjectId are set automaticaly during insert, for other types you must set them manually.\n\n public string name, type; // MongoDB serializer handles all public variables\n public int count { get; set; } // and all public properties\n public Info info;\n", "text": "Hi,Have a look at this example that save C # class instance and retrieve it.", "username": "Remi_Thomas" }, { "code": "", "text": "Hi,Thanks for the sample code. I am looking for a way to bind a class to a collection. The code has been written and I don’t want to rewrite it I want to continue to read and write the class instance and have it bound to the collection.", "username": "Duane_Sniezek" }, { "code": "", "text": "Hi,You need to decide when to write the instance in MongoDB.\nI don’t recommand to write every modification automatically.\nIt’s possible but this is not a MongoDB feature, this is a C# feature.\nRead the instance on screen opening.\nWrite it on screen closing for example.Here are 3 strategies to use class that you can’t adapt with MongoDB.public class theClassICantChange\n{\npublic string Name;\n…\n}public class mdbWrapper : theClassICantChange\n{\npublic ObjectId id;\n}and then use the wrapper class for db insert/queryBsonDocument doc = o2.ToBsonDocument();\nthe insert and query BsonDocumentLook at examples 3 and 1main/dotnet/01%20-%20BeginLearn mongodb quickly with examples. Contribute to iso8859/learn-mongodb-by-example development by creating an account on GitHub.", "username": "Remi_Thomas" }, { "code": "", "text": "Hi,Thanks for the additional information.I have implemented a process where I first DropCollection and immediately InsertMany attempting to “replace” the collection with the class instance.This works most of the time but because I have multiple clients accessing the database, occasionally the collection gets dropped and not created as this process is not collection atomic.I tried to use a transaction to update the collection with …using (var session = await client.StartSessionAsync())\nsession.StartTransaction();… but I get an error that says you can’t do a transaction on a single server instance and I cannot change the database configuration ", "username": "Duane_Sniezek" }, { "code": "", "text": "What do want to achieve ? Some sort of synchronization between clients?Dropping the collection is not recommanded.\nTry\nawait collection.DeleteManyAsync(_ => true);\nto delete all records.", "username": "Remi_Thomas" } ]
Binding C# Class Instance to MongoDB Collection
2022-09-21T21:41:27.345Z
Binding C# Class Instance to MongoDB Collection
3,089
null
[]
[ { "code": "", "text": "Hi,I created an API key for my organization and added it to my project. When I try to get the atlas searches for a specific collection I get Unauthorizedhttps://cloud.mongodb.com/api/atlas/v1.0/groups//clusters//fts/indexes/>/businesspartner?apikey=What did i forget?Thanks", "username": "Ad_Kerremans1" }, { "code": "$search$search", "text": "Hi @Ad_Kerremans1 - Welcome to the community.I created an API key for my organization and added it to my project. When I try to get the atlas searches for a specific collection I get UnauthorizedDid you create an Atlas Administration API or an Atlas Data API? You’ll need to create an Atlas Data API to perform the $search query for your request.Please see an example of a $search query performed using the Data API here.Note: If the “search” you mentioned is just general queries, the Data API key would still be required Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran ,I already found the solution. The authorization was not correct.\nRegards\nAd", "username": "Ad_Kerremans1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas https://cloud.mongodb.com/api/atlas/v1.0/groups
2022-10-06T08:02:14.013Z
Atlas https://cloud.mongodb.com/api/atlas/v1.0/groups
3,419
null
[ "replication", "sharding" ]
[ { "code": "", "text": "Hi All,I read through\nMongo Limitations and ThresholdSorry, there is limit of 2 links for new members.\nand /127411\nand /5808/2I’m being tasked to find out more about the size limitation of an unsharded database. If I understand corretly. there is theorically no database size limit, which are more often restricted by system limitation.\nI also believe that the table presented only applies to sharded database, and its limitation before we can attempt to change some of the sharding parameters (Size of Shard Key Value and Chunk Size).However, I’m curious about the line:\nQuoted from\"The theoretical limits for maximum database and collection size are much higher than the practical limits imposed by filesystem, O/S, and system resources relative to workload. You can add server resources and tune for vertical scaling, but when your workload exceeds the resources of a single server or replica set you can also scale horizontally (partitioning data across multiple servers) using sharding \"What are some of the consideration you expert consider before deciding that its time to perform sharding?Thank you.", "username": "Boon_Hui_Tan" }, { "code": "mongos", "text": "Welcome to the MongoDB Community @Boon_Hui_Tan !What are some of the consideration you expert consider before deciding that its time to perform sharding?As noted in my earlier response, the most common one is:when your workload exceeds the resources of a single server or replica setTo add some nuance to this suggestion, the decision should really be “when you predict your workload will exceed”, as it is better to shard preemptively than to shard when your system is struggling under current workload. Initial sharding and data redistribution is better actioned when your system has some headroom for additional I/O. Shard keys are set per collection, so you when you start to shard you would generally focus on the largest and most active collections unless you have special requirements for data locality.Aside from workload there are also operational considerations like the time it takes to backup or restore a deployment. If you have TBs of data with GBs of RAM and need to resync a replica set member (or restore a deployment from offsite backup), a single large deployment may be not meet your expectations on timing.A sharded cluster has more components and complexity than a replica set deployment (there will now be config servers and mongos routers), and will change your backup strategy. If you have a replica set deployment with acceptable performance, I would consider reasonable vertical scaling before adding the extra infrastructure for a sharded cluster deployment. However, similar to deciding when to shard individual collections it is better to have this infrastructure in place before the need becomes desperate.In addition to horizontal scaling for workload and data distribution, sharding can also be useful for data locality using Zone Sharding. For example, Segmenting Data by Location, Tiered Hardware for Varying SLA or SLO, or Segmenting Data by Application or Customer.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,Sorry for my late response.Thank you for your clarification and explanation.\nI will take this back to the team and discuss what would be our best course of action.Thank youRegards\nBoon Hui", "username": "Boon_Hui_Tan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Clarification on database limit
2022-09-30T06:05:16.534Z
Clarification on database limit
2,113
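If and when sharding is warranted, the basic mongosh commands — including the zone helpers behind the data-locality examples linked above — look roughly like this; the database, collection, shard, zone names, and shard key are all placeholders:

```js
sh.enableSharding("mydb");
sh.shardCollection("mydb.events", { region: 1, customerId: 1 });

// Zone sharding for data locality (e.g. segmenting data by location):
sh.addShardToZone("shard-eu-0", "EU");
sh.updateZoneKeyRange(
  "mydb.events",
  { region: "EU", customerId: MinKey() },
  { region: "EU", customerId: MaxKey() },
  "EU"
);
```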
null
[ "node-js", "connecting" ]
[ { "code": "var MongoClient = require('mongodb').MongoClient;\nvar url = \"mongodb://localhost:3000/\";\n\nMongoClient.connect(url, function (err, db) {\n if (err) {\n throw err;\n }\n var dbo = db.db(\"testDB\");\n dbo.createCollection(\"customers\", function(err, res) {\n if (err) {\n throw err;\n }\n console.log(\"customers Collection created\");\n db.close();\n });\n});\nC:\\AIT618\\node_modules\\mongodb\\lib\\utils.js:365\n throw error;\n ^\n\nMongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:3000\n at Timeout._onTimeout (C:\\AIT618\\node_modules\\mongodb\\lib\\sdam\\topology.js:291:38)\n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:3000' => ServerDescription {\n address: 'localhost:3000',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 1639641464,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:3000\n at connectionFailureError (C:\\AIT618\\node_modules\\mongodb\\lib\\cmap\\connect.js:387:20)\n at Socket.<anonymous> (C:\\AIT618\\node_modules\\mongodb\\lib\\cmap\\connect.js:310:22)\n at Object.onceWrapper (node:events:642:26)\n at Socket.emit (node:events:527:28)\n at emitErrorNT (node:internal/streams/destroy:157:8)\n at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n at processTicksAndRejections (node:internal/process/task_queues:83:21) {\n cause: Error: connect ECONNREFUSED 127.0.0.1:3000\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1187:16) {\n errno: -4078,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '127.0.0.1',\n port: 3000\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n", "text": "I’m just learning mongodb for a JS class that uses Node JS, and I can’t get the first class example to work. I don’t know if it didn’t install correctly or if I missed a step. I use the terminal in Visual Studio Code to run the program. Here’s what I did:\nI downloaded the MSI package for the community server and installed it with the default settings. It also installed Compass.\nI did read on one website that I needed to add the C:\\Program Files\\MongoDB\\Server\\6.0\\bin to Window’s Environmental Variables (Mongodb Installation on Windows, MacOS and linux)\nI added C:\\data\\dbUsing Command Prompt, I entered npm install mongodbin Command Prompt, I entered mongodThen in Visual Studio Code, where I have my program open, in the terminal command prompt, I change to the directory where my program is saved and run the following program by typing: node assignment4.jsPROGRAM:In the terminal, it pauses, and then spits this out:I tried moving the saved program to different folder on the C drive and rerunning the npm install mongodb and the mongod while in that directory, but same error.I read somewhere it may have to do with my firewall, so I reset the Windows Defender firewall settings, but no luck. 
I also have Kaspersky Anti-virus on my machine. But I’m guessing it’s something really easy that I’m too new to this to see or understand.I’m so frustrated, and would really appreciate any help!Thank you.", "username": "bockyweez" }, { "code": "mongod", "text": "Hi @bockyweez, and welcome to the MongoDB Community forums! The error states that your app cannot connect to MongoDB at localhost on port 3000.A couple of questions:", "username": "Doug_Duncan" }, { "code": "\n`{\"t\":{\"$date\":\"2022-10-06T20:14:09.278-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.137-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.142-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.144-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.144-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.144-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.145-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.146-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":15440,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"RLB-Laptop\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.146-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.146-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.2\",\"gitVersion\":\"94fb7dfc8b974f1f5343e7ea394d0d9deedba50e\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.146-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19044)\"}}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.146-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, 
\"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.152-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"C:/data/db/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:12.153-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7638M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.384-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1230}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.384-04:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.466-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.466-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.478-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.478-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.479-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.483-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.845-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.858-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.858-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.865-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2022-10-06T20:14:13.865-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}`\n\n", "text": "The port 3000 comes from the JS program example from class – I don’t know how to change the config… it comes from this javascript:var url = “mongodb://localhost:3000/”;\nMongoClient.connect(url, function (err, db) … etc.So when I enter in command prompt mongod this is what it spits out – I’m assuming it means it’s running? (does it matter what directory I’m in when I start it?)", "username": "bockyweez" }, { "code": "\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}mongodvar url = \"mongodb://localhost:27017\";", "text": "\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}This line (the last one in the log file you provided) is what we’re interested in. This means that the mongod process is indeed running and listening for connections. We can see that it is listening on port 27017. 
You would want to change your code to var url = \"mongodb://localhost:27017\";.Port 3000 is most likely the port that your web application listens on.Try making the change as I’ve mentioned above to see if you can get your application to connect to the MongoDB server.", "username": "Doug_Duncan" }, { "code": "", "text": "@Doug_Duncan thank you! – I changed that and I got some other errors but was able to figure it out from there.", "username": "bockyweez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Beginner can't connect to db when trying to use mongodb with node on windows 10 in Visual Studio Code
2022-10-06T03:47:48.482Z
Beginner can't connect to db when trying to use mongodb with node on windows 10 in Visual Studio Code
5,547
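Editor's note on the thread above: the fix was pointing the driver at the port mongod actually listens on (27017), not the port the class example happened to use (3000). The sketch below is a minimal corrected version of the connection script, not the course's official solution; it assumes a local mongod on the default port and MongoDB Node.js driver 4.x or later, and it reuses the testDB / customers names from the original post.

// Minimal corrected connection script (assumptions: local mongod on 27017, driver 4.x+).
const { MongoClient } = require("mongodb");

// 27017 is the port reported in the "Waiting for connections" log line;
// port 3000 was never a MongoDB port in this setup.
const url = "mongodb://localhost:27017";

async function main() {
  const client = new MongoClient(url);
  try {
    await client.connect();
    const dbo = client.db("testDB");
    await dbo.createCollection("customers");
    console.log("customers collection created");
  } finally {
    await client.close();
  }
}

main().catch(console.error);

If the class material requires the callback style shown in the original post, only the URL needs to change; the async/await form above is simply the more current driver idiom.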
null
[ "data-modeling", "graphql-api" ]
[ { "code": "Cannot query field \\\"events\\\" on type \\\"MyFieldType\"\"events\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n }\n }\n\"events\" : [\n [\n \"101\",\n \"404\",\n \"625\"\n ]\n ]\n", "text": "Hi,I’ve been struggling with an issue with my schema that contains arrays of arrays. While the schema validates, I always get an error Cannot query field \\\"events\\\" on type \\\"MyFieldType\".I’m trying to use this:The raw data for this looks as expected e.g.:Is the issue here nested arrays, are they not supported? Note that the schema generator also produces this output.", "username": "Gareth_Davies" }, { "code": "", "text": "This should have been posted in the Atlas GraphQL API section (I can’t edit it)", "username": "Gareth_Davies" }, { "code": "{\n \"_id\" : ObjectId(\"633603c8a4e9837b18a0459d\"),\n \"events\" : [\n [\n \"101\",\n \"404\",\n \"625\"\n ],\n [\n \"201\",\n \"204\",\n \"225\"\n ]\n ]\n}\n{\n \"title\": \"test\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"events\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n }\n }\n }\n}\nquery MyQuery {\n test {\n _id\n }\n}\nquery MyQuery {\n test {\n _id,\n events\n }\n}\n\"message\": \"Cannot query field \\\"events\\\" on type \\\"Test\\\".\",", "text": "So for anyone else with this issue, I guess this isn’t possible despite the schema generator producing this output.Simplified by creating a new collection and added this document:Then used the following GraphQL schema generator, which generated this schema:While this works;This query:returns \"message\": \"Cannot query field \\\"events\\\" on type \\\"Test\\\".\",", "username": "Gareth_Davies" }, { "code": "", "text": "Hi @Gareth_Davies, welcome to the community. \nQuerying arrays of arrays using the Atlas GraphQL API is not supported as of now. We already have a feature request logged in for this. Please upvote the same if you’d like to have this in our GraphQL API.Currently, as explained in the App Services documentation, the \"GraphQL API does not currently support relationships for fields inside arrays of embedded objects. You can use a custom resolver to manually look up and resolve embedded object array...However, for now you can create a custom resolver to achieve the same that resolves the request to an array you expect.\nHere’s a step-by-step tutorial that elaborates the complete process of creating a custom resolver:MongoDB Atlas GraphQL API provides a lot of features out of the box. In the previous parts of this blog series, we implemented all the CRUD…\nReading time: 6 min read\nIf you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't query field when schema contains nested arrays
2022-09-20T15:56:32.711Z
Can't query field when schema contains nested arrays
3,238
null
[ "atlas-cluster" ]
[ { "code": "", "text": "i want to upgratde mongodb version for my atlas m10 cluster does it remove my peer connection vpc settings with ec2 do i have to configure again", "username": "mohamed_aslam" }, { "code": "", "text": "Hi @mohamed_aslam,i want to upgratde mongodb version for my atlas m10 cluster does it remove my peer connection vpc settingsUpgrading the MongoDB version does not remove the peering connection settings.In saying so, you may find the Upgrade Major MongoDB Version for a cluster considerations documentation useful.Regards,\nJason", "username": "Jason_Tran" } ]
Can upgrading the MongoDB version in an Atlas cluster remove my peer connection settings?
2022-10-08T10:54:31.247Z
Can upgrading the MongoDB version in an Atlas cluster remove my peer connection settings?
1,140