image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"aggregation",
"queries",
"node-js",
"data-modeling",
"sharding"
] | [
{
"code": "$lookup$lookupconst suggested_friends = await all_users.aggregate([\n {\n $match: {\n age: { $gt: 18 },\n city: { $in: [\"chicago\", \"new york\"] },\n school: { $in: [\"harvard\", \"standford\"] },\n // etc., potentially hundreds of other arbitrary filters, which are different for each query and undefined\n },\n },\n // must lookup and remove users that this user has blocked and who have blocked this user.\n {\n $lookup: {\n from: \"block_users\",\n let: { tar_id: \"$_id\", tar_tid: \"$tar._id\" }, // joining by _id, which is indexed\n pipeline: [\n {\n $match: {\n $expr: {\n $or: [\n {\n $and: [\n {\n $eq: [\"$blocker\", querier_user_id],\n },\n { $eq: [\"$blocked\", \"$$tar_id\"] },\n ],\n },\n {\n $and: [\n { $eq: [\"$blocker\", \"$$tar_id\"] },\n {\n $eq: [\"$blocked\", querier_user_id],\n },\n ],\n },\n ],\n },\n },\n },\n { $limit: 1 },\n ],\n as: \"remove\",\n },\n },\n {\n $match: {\n $expr: {\n $eq: [{ $size: \"$remove\" }, 0],\n },\n },\n },\n {\n $project: {\n remove: 0,\n },\n },\n\n // now to derive some similarity score to predict whether user will be good friend\n {\n $set: {\n similarity_score: complex_function_call(), // some complex function to compute a number\n },\n },\n\n // finally sorting\n {\n $sort: {\n similarity_score: -1,\n age: 1,\n grade: 1,\n distance: 1,\n // etc. the sort order, by what field, and how many fields is not predefined and can be different for each query\n // and thus, cannot use index.\n // also, similarity_score must be derived, hence another reason an index cannot be used.\n // this must come at the end because blocked users must be filtered and removed first.\n },\n },\n {\n $limit: 100,\n },\n]);\n",
"text": "My app has a performance intensive query, which generates a list of suggested friends for any user that logs into the App.This query currently is slow despite indexes. The $lookup, which is a join, is slow despite being indexed.Currently, this query is slow with a database of more than 100,000 users. I need to scale my APP to potentially 10s of millions of users.questions:The business logic of the query goes like so:",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Couple of thoughts…You’re running this on every logon? What’s more often to happen, a user log on or a user sign up / change key information. If it’s logon then there is no point re-calculating the friends list every time, calculate it only as needed if it’s expensive.Store the meta information as attributes, which would make indexing the fields easier / more efficient.Don’t keep blocking / blocked user data in a different collection, store it with the users as that’s a place it’s being actively used, you can re-calculate a users friends suggestion list when a user blocks / is blocked.\n(with the assumption that if it grows to massive you’ve the 16MB limit, but that’s a VERY disliked user, you could always have a flag to indicate a user like this and process them differently…)Don’t calculate this in real-time, when a change is made, put the user id on a queue and have a background process calculate this at some point, but not immediately, I assume it’s not critical that friends matches are calculated the second that a user logs in.",
"username": "John_Sewell"
}
] | How to scale and optimize this friend suggestions query? | 2023-07-26T21:00:59.525Z | How to scale and optimize this friend suggestions query? | 561 |
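A minimal sketch of the approach suggested in the replies above: embed the blocked-user ids on the user document and precompute the suggestion list from a background worker rather than at logon. The field and collection names (blocked_ids, suggested_friends) and the scoring placeholder are illustrative assumptions, not part of the original schema.

```javascript
// Hypothetical background worker: recompute suggestions for one user id pulled off a queue.
async function recomputeSuggestions(db, userId) {
  const user = await db.collection("all_users").findOne({ _id: userId });
  const blocked = user.blocked_ids || [];

  const suggestions = await db.collection("all_users").aggregate([
    // Arbitrary profile filters, as in the original query.
    { $match: { age: { $gt: 18 }, city: { $in: ["chicago", "new york"] } } },
    // Block data lives on the user document, so no $lookup is needed here.
    { $match: { _id: { $nin: [userId, ...blocked] }, blocked_ids: { $ne: userId } } },
    { $set: { similarity_score: 0 } }, // placeholder for the real scoring function
    { $sort: { similarity_score: -1, age: 1 } },
    { $limit: 100 },
    { $project: { _id: 1, similarity_score: 1 } },
  ]).toArray();

  // Store the precomputed list; the logon path then reads this single document.
  await db.collection("all_users").updateOne(
    { _id: userId },
    { $set: { suggested_friends: suggestions, suggestions_computed_at: new Date() } }
  );
}
```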
null | [] | [
{
"code": "",
"text": "U\nI getting following errors in a row , who can assistAn error occurred while querying your MongoDB deployment.\nPlease try again in a few minutes.Cluster conversion failed. Please try again later.",
"username": "ngomano_fc"
},
{
"code": "",
"text": "Hi @ngomano_fc - Welcome to the community.An error occurred while querying your MongoDB deployment.\nPlease try again in a few minutes.Cluster conversion failed. Please try again later.Can you contact the Atlas in-app chat support regarding this? They’ll have more insight into your Atlas account.Regards,\nJason",
"username": "Jason_Tran"
}
] | Errors while browsing my collection or trying to migrating to dedicated cluster | 2023-07-27T04:02:53.769Z | Errors while browsing my collection or trying to migrating to dedicated cluster | 274 |
[
"aggregation"
] | [
{
"code": "[\n {\n $search: {\n index: \"cache_chat_message\",\n compound: {\n must: [\n {\n search: {\n query:\n \"Help me choose a good fund\",\n path: \"context\",\n },\n },\n {\n equals: {\n value: ObjectId(\n \"649e487b6465e9fa440db8f5\"\n ),\n path: \"projectId\",\n score: { \"boost\": { \"value\": 1 } }\n },\n },\n ],\n },\n },\n },\n {\n $project: {\n _id: 1,\n slots: 1,\n context: 1,\n score: {\n $meta: \"searchScore\",\n },\n },\n },\n {\n $limit:\n /**\n * Provide the number of documents to limit.\n */\n 2,\n },\n]\n\n",
"text": "Hi, how can I set returned score to be not affected by length ? Not sure it is affected by search term or the value stored.I’m trying to use AtlasSearch to do text similarity.\nThen I use a score threshold to determine if the text is similar.\nIs this the best way? Or should I use Text Index?Here is my pipelines:Attached example. If I change the query and context, the score will even though they are exact match, making it difficult to determine the similarity\nScreenshot 2023-07-24 163843939×470 40.8 KB\n",
"username": "HS-Law"
},
{
"code": "stringnormsincludeomitscore",
"text": "Hi @HS-Law,What’s your current search index definition?Hi, how can I set returned score to be not affected by length ?I’m not entirely sure of your expected output or use case with the single document output you’ve provided but you can consider looking at:If you need further help, please provide sample documents and the index definition along with the current output + expected output (what you want the scores to be for example for a particular search term).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_TranMy index definition and sample docs: gist:6e4214f72106c73c5209452ab9ddb2f7 · GitHubI am trying to use search to find similar question (context field) so that I can group the answers together (slot array field)For example, these questions are similar to “What is the best performing fund of ABC Company in July?”\nSo, when the apps return the answer, it should update it’s answers’ slot.-What is the best performing fund of ABC Company in July?\n-What is the best performing fund in July?\n-Show me best performing fund in JulyIf it cannot find a similar question (context field), then it should create a new doc.Current situation:\nMy pipeline: gist:575558af6ddfeaccc4d14793a9f3dfbb · GitHubQuery “What is the best performing fund of ABC Company in July?” , gets score of 4.621779441833496Query “Who is the CTO of ABC Company” gets score of 2.6413586139678955If I delete the “best performing fund” doc, my second query “Who is the CTO of ABC Company” returns score of 2.409642219543457My questions:Thanks.",
"username": "HS-Law"
},
{
"code": "searchScoreDetailsidfNnsearchtextequalssearchtextscoretext",
"text": "What is the score threshold to decide a question is similar to one of the context in doc?If I delete the “best performing fund” doc, my second query “Who is the CTO of ABC Company” returns score of 2.409642219543457As per the scoring documentation:Every document returned by an Atlas Search query is assigned a score based on relevance, and the documents included in a result set are returned in order from highest score to lowest.Many factors can influence a document’s score, including:You can use the searchScoreDetails option to help analyse the scoring but I believe in this particular case, the idf value is changing due one/both of the following values being changed when you deleted the document:How to configure so that exact match return a maximum constant score?Have you tried putting a constant scoring option in the search / text operator portion of your query? I can see your pipeline has a constant scoring option for the equals operator but not the search / text portion. Please see the score field details for the text operator here.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"context\": {\n \"norms\": \"omit\",\n \"type\": \"string\"\n },\n \"projectId\": {\n \"type\": \"objectId\"\n }\n }\n }\n}\n",
"text": "In this example, why the search “tell me a joke” can return “tell me more about the World Series Fund” with a very high score ?My index:",
"username": "HS-Law"
},
{
"code": "search>db.collection.find({},{_id:0})\n[ { context: 'tell me more about the World Series Fund' } ]\n\nsearch> db.collection.aggregate({$search:{text:{path:'context',query:'tell me a joke'}}},{$project:{_id:0,context:1,score:{$meta:'searchScore'}}})\n[\n {\n context: 'tell me more about the World Series Fund',\n score: 0.4073374569416046\n }\n]\n0.4073374569416046$searchsearch> db.collection.find({},{_id:0})\n[\n { context: 'tell me more about the World Series Fund' },\n { context: 'random string' },\n { context: 'random string abcdef' },\n { context: 'random string testing' },\n { context: 'random string new' }\n]\n\nsearch> db.collection.aggregate({$search:{text:{path:'context',query:'tell me a joke'}}},{$project:{_id:0,context:1,score:{$meta:'searchScore'}}})\n[\n {\n context: 'tell me more about the World Series Fund',\n score: 1.804081678390503\n }\n]\n1.804081678390503searchScoreDetails",
"text": "In this example, why the search “tell me a joke” can return “tell me more about the World Series Fund” with a very high score ?As noted in the previous post: Every document returned by an Atlas Search query is assigned a score based on relevance, and the documents included in a result set are returned in order from highest score to lowest.It may be that in your environment, there is many other documents that do not contain matching terms. So relative to those, the returned document you have shown may have a higher score. Let’s take a look at an example using the same document in my test environment (using the same index definition you provided):Only 1 document in this collection:We can see that the score value is 0.4073374569416046.Now, I insert another 4 documents that do not contain any matching terms and perform the exact same $search:We can now see the same doucment is returned but with a score of 1.804081678390503 this time.As noted previously as well, you can see more into the scoring using searchScoreDetails for your environment.In terms of the use case from the example you provided - Is it that you believe the score is too high or is it that you believe the document should not be returned at all based off the search term?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks, I guess this is not suitable for my use case of identifying similarity. I will explore other options.",
"username": "HS-Law"
},
{
"code": "\"tell me a joke\"search> a\n[\n {\n '$search': {\n text: {\n path: 'context',\n query: 'tell me a joke',\n score: { constant: { value: 1 } }\n }\n }\n },\n {\n '$project': { _id: 0, context: 1, score: { '$meta': 'searchScore' } }\n }\n]\nsearch> db.collection.find({},{_id:0})\n[\n { context: 'tell me more about the World Series Fund' },\n { context: 'random string new' },\n { context: 'random string new' },\n { context: 'random string new' },\n { context: 'tell me a joke' }\n]\nconstantsearch> db.collection.aggregate(a)\n[\n { context: 'tell me more about the World Series Fund', score: 1 },\n { context: 'tell me a joke', score: 1 }\n]\n",
"text": "this is not suitable for my use case of identifying similarity.Just to try get a bit of feedback here, if there are “similar documents” - are you wanting these to have the same score?For example, 2 documents which hit a match for the search term \"tell me a joke\" with the constant scoring option:Output: 2 matching documents with same score due to constant scoring option.Thanks in advance.Jason",
"username": "Jason_Tran"
},
{
"code": "\"tell me a joke\"search> a\n[\n {\n '$search': {\n text: {\n path: 'context',\n query: 'tell me a joke',\n score: { constant: { value: 1 } }\n }\n }\n },\n {\n '$project': { _id: 0, context: 1, score: { '$meta': 'searchScore' } }\n }\n]\nsearch> db.collection.find({},{_id:0})\n[\n { context: 'tell me more about the World Series Fund' },\n { context: 'random string new' },\n { context: 'random string new' },\n { context: 'random string new' },\n { context: 'tell me a joke' }\n]\nconstantsearch> db.collection.aggregate(a)\n[\n { context: 'tell me more about the World Series Fund', score: 1 },\n { context: 'tell me a joke', score: 1 }\n]\n",
"text": "My dataset would have many questions from customers such as:So, these four are considered unique.When users ask more questions such as:Thanks.this is not suitable for my use case of identifying similarity.Just to try get a bit of feedback here, if there are “similar documents” - are you wanting these to have the same score?For example, 2 documents which hit a match for the search term \"tell me a joke\" with the constant scoring option:Output: 2 matching documents with same score due to constant scoring option.Thanks in advance.Jason",
"username": "HS-Law"
},
{
"code": "search>db.collection.find({},{_id:0})\n[\n { context: 'tell me more about the World Series Fund' },\n { context: 'random string new' },\n { context: 'random string new' },\n { context: 'random string new' },\n { context: 'tell me a joke' },\n { context: 'Who is the CTO of company ABC?' },\n { context: 'Who is the CEO of company ABC?' },\n {\n context: 'What product options do you have for a beginner in hiking?'\n },\n { context: 'What product does your company offer?' }\n]\nsearch> db.collection.aggregate({$search:{text:{path:'context',query:'I’m a beginner looking for hiking products, can you make some recommendation?'}}},{$project:{_id:0,score:{$meta:'searchScore'},context:1}})\n[\n {\n context: 'What product options do you have for a beginner in hiking?',\n score: 6.164970397949219\n },\n { context: 'tell me a joke', score: 0.9522761702537537 }\n]\n",
"text": "My dataset would have many questions from customers such as:So, these four are considered unique.When users ask more questions such as:Thanks.I added those as documents to my test environment and used the search term you provided - Question 3’s document was returned as the highest result.In regards to my above statement, I have a few questions:Test documents:Document with the highest score correlates with the “Question 3” you mentioned.Looking forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Yup, the issue is the highest score range.\nIn our use case, the questions are populated and grouped by similarity from zero record.For example, if we change the question to expert instead of beginner:-what products do you have for expert in hiking?,\n-I’m an expert looking for hiking products, can you make some recommendation?it is returning very high score as well.We need to know that question for expert variant does not exist yet, and we will create a new doc for customer service to add the answers for it. So that next time it can pick answers for question related to “products for hiking’s expert.”We need a controlled range to identity if user’s question carry the exact meaning. Maybe something like 0-1, where 0.9 can be considered as very similar ?Thanks.",
"username": "HS-Law"
}
] | Score varies by length | 2023-07-24T09:12:37.035Z | Score varies by length | 527 |
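A minimal sketch of the two scoring tools discussed above - a constant score on the text clause and the score breakdown for debugging. It assumes the scoreDetails option and the searchScoreDetails metadata are available on the cluster's Atlas Search version.

```javascript
db.collection.aggregate([
  {
    $search: {
      index: "cache_chat_message",
      text: {
        path: "context",
        query: "tell me a joke",
        score: { constant: { value: 1 } } // removes the length / idf influence discussed above
      },
      scoreDetails: true // emit the scoring breakdown for each returned document
    }
  },
  {
    $project: {
      context: 1,
      score: { $meta: "searchScore" },
      details: { $meta: "searchScoreDetails" } // inspect why a document received its score
    }
  }
])
```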
null | [
"queries",
"cxx",
"c-driver"
] | [
{
"code": "",
"text": "Hi,My question seems to be small but I’m pretty new to MongoDB programming.\nI have a document with key-value pairs. What is the best approach to update one or more fields using MongoCxx.\nHere is my query extracted from one of the example for only field “title”.\nauto update_one_result = movies.update_one( make_document(kvp(“title”,“The Adventures of Tom Thumb & Thumbelina”)),\nmake_document( kvp(“$set”, make_document( kvp(“title”, “MOVIE UPDATE TEST”) ) ) ) );Question 1) Suppose if I have another field called “duration” and I would like to update this. How to do it in single statement. I know writing another update_one command for “duration” field will work.Q2) Lets assume my document has 100s of values and I would like to change 90 of it. It would be really tough to update each and every field. So I am planning to have the entire document in BSON and then execute update command. Is it possible to update the collection with the modified BSON in single statement.",
"username": "Raja_S1"
},
{
"code": "",
"text": "Hi @Raja_S1Please take a look at bulkWrite that allows you to perform multiple write operations together.\nHere’s a code sample in C++ driver - https://github.com/mongodb/mongo-cxx-driver/blob/master/examples/mongocxx/bulk_write.cppFor more details, refer to the Driver Bulk API Spec, which describes bulk write operations for all MongoDB drivers.",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Thank you for the response. I will try this approach and reach out to here incase of any problem.",
"username": "Raja_S1"
}
] | Update multiple values using MongoCxx | 2023-07-25T08:29:30.174Z | Update multiple values using MongoCxx | 549 |
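For reference, a shell-level sketch of what the two answers amount to; it is not the C++ driver code itself, and the duration value is made up. Question 1 is a single update with a multi-field $set, and question 2 is a batched bulkWrite (or a replaceOne if the whole document has already been rebuilt).

```javascript
// Question 1: one update touching several fields via a single $set.
db.movies.updateOne(
  { title: "The Adventures of Tom Thumb & Thumbelina" },
  { $set: { title: "MOVIE UPDATE TEST", duration: 120 } }
);

// Question 2: batch many independent writes together...
db.movies.bulkWrite([
  { updateOne: { filter: { _id: 1 }, update: { $set: { title: "A" } } } },
  { updateOne: { filter: { _id: 2 }, update: { $set: { title: "B" } } } }
]);

// ...or replace the whole document if most fields changed anyway.
db.movies.replaceOne({ _id: 1 }, { title: "A", duration: 95 });
```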
null | [
"queries",
"indexes"
] | [
{
"code": " {\n \t\"_id\" : \"doc4\",\n \t\"path\" : \"/f10/f4/\",\n \t\"pathArray\" : [ \"f10\", \"f4\" ],\n \t\"pathArrayMulti\" : [\n \t\t[ \"f10\", \"f4\" ],\n \t\t[ \"f1\" ]\n \t]\n }\n",
"text": "Hi,\nI have an array of dynamic arrays in the pathArrayMulti below.\nI want to index this field. Is this possible?If I create an index on pathArray like below, I can search the value and it will give me result backdb.doc.createIndex({“pathArray”: 1})\ndb.doc.find({pathArray: “f10”})But if I create index on pathArrayMulti, and do the search, then empty resultset is returned.db.doc.createIndex({“pathArrayMulti”: 1})\ndb.doc.find({pathArrayMulti: “f10”})Is there any way for me to see the values that is stored in the index?",
"username": "Eirik_Andersen"
},
{
"code": "",
"text": "This should find the data, however the index will not be hit:Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "John_Sewell"
},
{
"code": "db.testCol.find({ \"pathArrayMulti.0\": { $elemMatch: { $in: [\"f10\"] } } })\n\"pathArrayMulti.0\"pathArrayMulti$elemMatch",
"text": "Hello there!This query will also work:This query targets a more specific position within the array using dot notation (\"pathArrayMulti.0\" ). This allows MongoDB to directly access and match the first element of the pathArrayMulti array, without the need for multiple levels of $elemMatch. However, the index will not be used either.Hope this helps ",
"username": "Carl_Champain"
},
{
"code": "",
"text": "Thanks both of you for the query syntax fixes \nI now get the data but my biggest issue still remain.\nWhen this collection grows to million of documents it will be to slow without an index.So is there any way to index or query this 2 dim array so the index is used?",
"username": "Eirik_Andersen"
},
{
"code": "",
"text": "Not that I’m aware of, you may need to re-factor the data to store it in an indexable layout Not hitting an index is a bit of deal breaker, as you say when data grows to reasonable levels you don’t want to have collection scans all over the palce.",
"username": "John_Sewell"
},
{
"code": "db.testCol.find({ \"pathArrayMulti.0\": { $elemMatch: { $in: [\"f10\"] } } }).hint({\"pathArrayMulti\" : 1})\n",
"text": "You could also try using hint():This method overrides MongoDB’s default index selection and query optimization process. It forces MongoDB to use the specified index when performing the query.",
"username": "Carl_Champain"
},
{
"code": "db.testCol.find({ \"pathArrayMulti.0\": { $elemMatch: { $in: [\"f10\"] } } }){\n \t\"_id\" : \"doc4\",\n \t\"path\" : \"/f10/f4/\",\n \t\"pathArray\" : [ \"f10\", \"f4\" ],\n \t\"pathArrayMulti\" : [\n \t\t[ \"f1\" ] ,\n \t[ \"f10\", \"f4\" ]\n \t]\n }\ndb.testCol.find({ \"pathArrayMulti.1\": { $elemMatch: { $in: [\"f10\"] } } })\n",
"text": "The querydb.testCol.find({ \"pathArrayMulti.0\": { $elemMatch: { $in: [\"f10\"] } } })work ONLY because you are querying element 0 of pathArrayMulti. If the document wasYou would then to change the query toThe solution shared by John_Sewell is the correct one.",
"username": "steevej"
},
{
"code": "\"pathArrayMulti.0\"pathArrayMulti",
"text": "You are right, @steevej!\nI did mention it in my response:This query targets a more specific position within the array using dot notation (\"pathArrayMulti.0\" ). This allows MongoDB to directly access and match the first element of the pathArrayMulti arrayMaybe I should have been a bit clearer ",
"username": "Carl_Champain"
},
{
"code": "",
"text": "Sorry, I missed that part.Maybe I should have been a bit clearerNo. I should be a better reader.",
"username": "steevej"
},
{
"code": "",
"text": "I guess if you KNEW it would always be X dimensions you could $OR them together and then it would hit the index…",
"username": "John_Sewell"
},
{
"code": "\"pathArrayMulti\" : [\n { values: [ \"f1\" ] },\n { values: [ \"f10\", \"f4\" ] }\n]\n{ \"pathArrayMulti.values\": 1 }",
"text": "If you store the data as an array of objects, you can index them:Then you can create an index on { \"pathArrayMulti.values\": 1 }",
"username": "Ryan_Wheale"
},
{
"code": "{ \"pathArrayMulti.values\": \"f10\" }\n",
"text": "I finally had the time to test that and it looks like it is working fine.I was afraid that it would not used the index on queries likebecause the indexed values would be the arrays.Thanks",
"username": "steevej"
}
] | Indexing multi dimentional array | 2023-07-11T10:47:21.683Z | Indexing multi dimentional array | 925 |
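A minimal sketch of the accepted refactor from this thread - wrapping each inner array in an object so the values become indexable. The collection name follows the examples above.

```javascript
// Refactored document: each inner path is an object holding its values.
db.testCol.insertOne({
  _id: "doc4",
  path: "/f10/f4/",
  pathArrayMulti: [{ values: ["f10", "f4"] }, { values: ["f1"] }]
});

// Multikey index over the nested values...
db.testCol.createIndex({ "pathArrayMulti.values": 1 });

// ...which this query can now use, regardless of the inner array's position.
db.testCol.find({ "pathArrayMulti.values": "f10" });
```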
null | [
"node-js",
"mongoose-odm",
"react-native"
] | [
{
"code": "",
"text": "So I am a bit lost. I am trying to make 2 collections, one is a master list that my users can only read from and the other is a collection of their own objects.basically they choose from the master list an item, and then create a reference to it on their own document.Is it possible to read collections from the realm in a fully synced app without defining the schema in my react-native app? If I do need to create one how do i match it exactly?I added the schema locally for the reference and added a field for a BSON object id for the reference. (I am coming from Mongoose, I miss populate!)Then I created the master list in the cloud and manually added some data. I managed to insert the new document with reference to the master list, but cannot pull from the master list. I assume I need a schema defined locally? I tried to make one locally that matches the cloud one but I cannot figure out the correct way to add the relationship.I also now cannot delete the schema for the master list in the cloud and let the local one populate it since the other collection references it.Currently my app will not build now as it states “Exception in host function: Property blah has been made required”, but it is not required in the cloud nor locally. But i have defined the schema using typescript and it’s possible that has made it required.\nthank you for any info!",
"username": "Mike_Powell"
},
{
"code": "Realm SDKs",
"text": "@Mike_Powell In the App Services there is a tab called Realm SDKs which contains a section called Realm Object Models. There you can copy and paste the exact schemas needed for your application. Hope that helps!\n",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "Hey Andrew!\nThank you so much for the reply, I ended up finding it after hours of struggle trying to match up my schemas. It only then introduced a whole new problem which was it removed my defaults, such as BSON.objectId = new BSON.objectId. So i had to write my schema similar to how I would write them in mongoose using {type: ‘objectId’, default: new BSON.objectId} and that did the trick.Could you tell me the exact process for changing schema, My app is in development and I have had so much trouble with mismatching schemas.I thought I understood the process was simple, make the change to my local schema, delete any data that does not match it from the db, remove the schema online and stop and restart sync. However this did not work. I ended up having to wipe all data from my simulator to get it to stop complaining finally.\nIn dev mode shouldn’t this be way easier, I should just be able to change it and it updates the cloud easily. I would much rather only deal with my local schema.any help would be appreciated.",
"username": "Mike_Powell"
},
{
"code": "",
"text": "@Mike_Powell We are hoping to make the process a bit easier for breaking changes, but at the moment, if you do any modifications to your local schema (deletions or updates), you will need to wipe the local data.",
"username": "Andrew_Meyer"
}
] | What is the correct process to create schemas in react native? | 2023-07-25T12:58:37.207Z | What is the correct process to create schemas in react native? | 503 |
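A rough sketch of a Realm JS object model along the lines discussed above - a default ObjectId on the primary key and a link to the read-only master-list object. The object and property names are hypothetical, and the callback default assumes a recent realm-js version; copying the exact models from the App Services Realm SDKs tab remains the safest route.

```javascript
import Realm, { BSON } from "realm";

const UserItemSchema = {
  name: "UserItem",
  primaryKey: "_id",
  properties: {
    // Restores the default that is lost when copying generated models.
    _id: { type: "objectId", default: () => new BSON.ObjectId() },
    // Object links are optional by default, which is why the field is not "required".
    masterItem: "MasterItem",
    note: "string?"
  }
};

const realm = await Realm.open({ schema: [UserItemSchema /*, MasterItemSchema */] });
```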
null | [
"kafka-connector",
"time-series"
] | [
{
"code": "Cannot perform a non-multi update on a time-series collection\n{\n\t\"metadata\": {...},\n\t\"timestamp\": 1689949371,\n\t\"readings\": {...}\n}\n",
"text": "Question:I’m working with the Kafka MongoDB connector to stream data from Kafka into a MongoDB time-series collection. My setup involves processing data from an API and sinking it into MongoDB using the Kafka connector. When attempting to sink new messages, I’m encountering an error I can’t resolve:Setup:What I’ve tried:Issue:Despite the above, when I try to sink messages, I’m getting the aforementioned error. The connector seems to think it’s an update operation, even though I’m only sinking new messages.Question:Why might the Kafka MongoDB connector perceive these insertions as updates? Has anyone else experienced this, especially with time-series collections in MongoDB, and how did you resolve it?Any insights or pointers would be greatly appreciated.",
"username": "Denes_Juranyi"
},
{
"code": "",
"text": "I am not 100% sure, but as of now it seems to solve my problem.\nIn the kafka connect config I set the writemodel.strategy the following way:\n“writemodel.strategy”: “com.mongodb.kafka.connect.sink.writemodel.strategy.InsertOneDefaultStrategy”",
"username": "Denes_Juranyi"
}
] | Kafka MongoDB Connector Error: "Cannot perform a non-multi update on a time-series collection" | 2023-07-25T08:18:43.895Z | Kafka MongoDB Connector Error: “Cannot perform a non-multi update on a time-series collection” | 471 |
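A sketch of where that property sits in a sink connector configuration, assuming the standard MongoDB Kafka sink connector; the connector name, topic, URI and namespace values are placeholders.

```json
{
  "name": "mongo-timeseries-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "sensor-readings",
    "connection.uri": "mongodb+srv://user:password@cluster.example.net",
    "database": "metrics",
    "collection": "readings_ts",
    "writemodel.strategy": "com.mongodb.kafka.connect.sink.writemodel.strategy.InsertOneDefaultStrategy"
  }
}
```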
null | [
"node-js"
] | [
{
"code": "const UserDBPath = \"//192.168.10.13/Test/Users.realm\";\nuserRealm = await Realm.open({\n schemaVersion: 1,\n readOnly:false,\n inMemory:false,\n path: UserDBPath,\n schema: [UserSchema.schema],\n });\n",
"text": "I use local realm db.I tried to access the local Users.realm db from shared folder path.below code:ErrorMessage : CreateFile() failed: ������ ��θ� ã�� �� �����ϴ�.It doesn’t work.Please tell me how to access realm db from shared folder path.",
"username": "jinsu_kim"
},
{
"code": "",
"text": "Hey @jinsu_kim,Realm on Windows doesn’t support shared network drives, only local filesystems.",
"username": "Yavor_Georgiev"
}
] | Cannot Open Local RealmDB | 2023-07-26T04:55:04.500Z | Cannot Open Local RealmDB | 526 |
null | [] | [
{
"code": "",
"text": "We have 3 shared MongoDB clusters. We enabled 3T on each node.We did some DB cleanup and disk usage is reduced to 50%. As part of cost savings, we are planning to downsize the disks to 2 TB. We are looking for the best ways to perform this during business hours.Thanks",
"username": "MouliVeera_N_A"
},
{
"code": "",
"text": "Hey @MouliVeera_N_A,Thank you for reaching out to the MongoDB Community forums. We have 3 shared MongoDB clusters. We enabled 3T on each node.Regarding your statement, “We enabled 3T on each node,” could you please clarify what you mean by “3T”?we are planning to downsize the disks to 2 TBAdditionally, it would be helpful if you can share the current size of your MongoDB deployment.Furthermore, I’d like to emphasize the importance of performing a backup before initiating the downsizing process. Backing up your data ensures that you have a reliable and recoverable copy in case any issues arise during the downsizing. Please refer to the MongoDB Backup Methods to read more on this.Feel free to provide any further details related to your deployment so that we can assist you more effectively.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "3T: Its a 3TB. Our sharded cluster is with 3 nodes and each node is with 3TB PVC on them.The current size of storage used is 950GB.Here the request is to downsize the PVC to 2TB.Thanks,\nMouli",
"username": "MouliVeera_N_A"
},
{
"code": "",
"text": "One possible way might be adding some new nodes with 2T as disk space to the same replica set, then remove the ones with 3TB. But apparently, this can be slow given you have 100+ GB to replicate.Another one:\nCreate a snapshot from 3T PV, then create a 2T PV from the snapshot and add a new node from it to the replica set. Then replication will continue with new changes.",
"username": "Kobe_W"
},
{
"code": "",
"text": "That works for VM environment.We are using Kubernetes statefulsets and we can enlarge them using patch and cascade deletes.Downsizing the disks is challenging.Thanks",
"username": "MouliVeera_N_A"
},
{
"code": "",
"text": "@Kushagra_Kesav could you share some inputs to move further.",
"username": "MouliVeera_N_A"
},
{
"code": "",
"text": "Hey @MouliVeera_N_A,When making significant changes to production MongoDB deployments, it is always advisable to take proper backups ahead of time before downsizing disks in case anything goes wrong.In terms of downsizing the disks from 3TB to 2TB, this may be possible as long as the total size of data where MongoDB resides is not currently consuming more than 2TB of disk space.However, this is not really a MongoDB question but rather a Kubernetes operational question. Unfortunately, we don’t really have the expertise to answer this, and even if we have the answer, it might not reflect the current best practices with regard to Kubernetes operations. I would suggest going to StackOverflow or ServerFault instead for the specifics on the best practices for live resizing Kubernetes persistent volumes.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Downsize mongoDB disks | 2023-06-06T07:07:06.228Z | Downsize mongoDB disks | 931 |
null | [
"change-streams"
] | [
{
"code": "v1v0",
"text": "mey be one of the below data types:if i only have one resumeToken ({_data:“xxxxxxxxxxxxxxx”}), how to parse resume token and get timestamp or other fields?",
"username": "111146"
},
{
"code": "",
"text": "Maybe this library could help.",
"username": "Felipe_Gasper"
}
] | How to parse resume token and get timestamp? | 2021-09-03T02:43:06.906Z | How to parse resume token and get timestamp? | 2,542 |
[
"queries",
"data-modeling"
] | [
{
"code": "",
"text": "Hi, I am new here.\nCan someone help me design a MongoDB schema for this relational schema.Thanks",
"username": "Fowmy_Abdulmuttalib"
},
{
"code": "",
"text": "One thing you can do is simply replicate a relational schema as a db of MongoDB collections.\nIt may not be “sexy” but it works!",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "One thing is missing here is how you intend to query the data. A MongoDB data model should heavily consider the queries, here one strives to provide as much data in a single document (i.e a single query) to avoid “expensive” joins. This library model is very similar to a standard eCom model: customers, products, orders, payments. → students, books, loans, fines. There are plenty of articles on such data models.However to show how a MongoDB schema might change as compared to RDBMS, I expect that query on loans to be a common need. A simple improvement could be to use imbedding - (duplicate) - to copy all the specific book info in the specific loan document. A query on a specific loan, likely gets the information needed.",
"username": "Thomas_Luckenbach"
}
] | Convert a Relational Schema to MongoDB Schema | 2023-01-02T20:59:08.087Z | Convert a Relational Schema to MongoDB Schema | 1,475 |
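A small sketch of the embedding idea from the last reply, applied to the library model: the loan document duplicates the book details it needs and keeps a plain reference back to the member. All field names here are illustrative assumptions.

```javascript
// One loan document, readable in a single query with no join.
db.loans.insertOne({
  member_id: ObjectId("64b000000000000000000001"), // reference to the members collection
  book: { // embedded (duplicated) snapshot of the borrowed book
    book_id: ObjectId("64b000000000000000000002"),
    title: "Design Patterns",
    isbn: "978-0201633610"
  },
  loaned_at: new Date(),
  due_at: new Date("2023-02-15"),
  returned_at: null
});

// Typical query: a member's open loans, with the book details already in hand.
db.loans.find({ member_id: ObjectId("64b000000000000000000001"), returned_at: null });
```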
null | [
"queries"
] | [
{
"code": "joined_at: [new ISODate(\"2022-02-15T17:12:46.837Z\")]\n{$type: \"array\"}{$type: \"date\"}",
"text": "I have a field which contain array of ISODate like thiswhen I’m filtering those data with {$type: \"array\"}, It correctly appears in search results. But, when I’m filtering with {$type: \"date\"}, it also appears in search results. Why?MongoDB version 4.2",
"username": "Nabil_Muh_Firdaus"
},
{
"code": "$type{ _id: 1, joined_at: [ISODate(\"2022-02-15T17:12:46.837Z\"), \"2022-02-15T17:12:46.837Z\"] },\n{ _id: 2, joined_at: ISODate(\"2022-02-15T17:12:46.837Z\") },\n{ _id: 3, joined_at: \"2022-02-15T17:12:46.837Z\" }\n{ joined_at: { $type: \"array\" } }\n// Result:\n{ _id: 1, joined_at: [ISODate(\"2022-02-15T17:12:46.837Z\"), \"2022-02-15T17:12:46.837Z\"] }\n{ joined_at: { $type: \"date\" } }\n// Result: \n{ _id: 1, joined_at: [ISODate(\"2022-02-15T17:12:46.837Z\"), \"2022-02-15T17:12:46.837Z\"] },\n{ _id: 2, joined_at: ISODate(\"2022-02-15T17:12:46.837Z\") },\n{ joined_at: { $type: \"string\" } }\n// Result: \n{ _id: 1, joined_at: [ISODate(\"2022-02-15T17:12:46.837Z\"), \"2022-02-15T17:12:46.837Z\"] },\n{ _id: 3, joined_at: \"2022-02-15T17:12:46.837Z\" }\n",
"text": "Hello @Nabil_Muh_Firdaus, Welcome to the MongoDB developer community forum,It will check the inner element’s type of array, and it’s a default behavior of $type operator,Consider the example with sample documents:Query 1:Query 2:Query 3:For more details, refer to the documentation:",
"username": "turivishal"
},
{
"code": "{ \"joined_at\" : { \"$elemMatch\" : { \"$type\" : \"date\" } } }\n{ \"$expr\" : { \"$eq\" : [ { \"$type\" : \"$joined_at\" } , \"array\" ] } }\n",
"text": "May be the following variation is suited for your use-case.Documents where joined_at is not an array won’t be selected.A slightly more complex variation would be to use version of $type available inside $expr using:",
"username": "steevej"
},
{
"code": "$type{ \"joined_at\" : { \"$elemMatch\" : { \"$type\" : \"date\" } } }{ \"joined_at\" : { \"$type\" : [\"date\"] } }\n",
"text": "You may skip reading this post as the correct solution is in the next postI forgot to mention the flexibility of $type operator for this scenario, { \"joined_at\" : { \"$elemMatch\" : { \"$type\" : \"date\" } } }The alternative to this, Just need to wrap the type into an array brackets.",
"username": "turivishal"
},
{
"code": "{ \"joined_at\" : { \"$type\" : [\"date\"] } }{ _id: 0 , a: 'x' }\n{ _id: 1 , a: [ 'x' ] }\nc.find( { \"a\" : { \"$type\" : \"string\" } } )\nc.find( { \"a.0\" : { \"$exists\" : true } , \"a\" : { \"$type\" : \"string\" } } )\n",
"text": "I just tried{ \"joined_at\" : { \"$type\" : [\"date\"] } }on the documentswithand the document with _id:0 was still found. With the $expr version only _id:1 is found.The querywill also only find _id:1.",
"username": "steevej"
},
{
"code": "{ \"joined_at\" : { \"$type\" : [\"date\"] } }$elemMatchc.find( { \"a\" : { \"$type\" : \"string\" } } )\n{ \"$expr\": { \"$eq\": [{ \"$type\": \"$joined_at\" }, \"date\"] } }\n",
"text": "You may skip reading this post as the correct solution is in the next postI just tried{ \"joined_at\" : { \"$type\" : [\"date\"] } }This is a solution to check the field type should be an array and have at least one date type value in it.\nThe alternative to $elemMatch approach, as ia updated my previous post.and the document with _id:0 was still found. With the $expr version only _id:1 is found.You are right, this is still a problem in non-array type, and your solution will work for this scenario, to check only non-array and date type field’s value only.",
"username": "turivishal"
},
{
"code": "c.find( { \"a\" : { \"$type\" : \"string\" } } )c.find( { \"a\" : { \"$type\" : [ \"string\" ] } } )c.find( { \"a\" : { \"$type\" : [ \"string\" , \"date\" ] } } )a : \"aStringValue\" \na : aDateValue\na : [ \"aStringValue\" ]\na : [ aDateValue ]\n",
"text": "{ “joined_at” : { “$type” : [“date”] } }This is a solution to check the field type should be an array and have at least one date type value in it.The brackets [ ] are meant to be able to specify multiple types for the element array. The $elemMatch version will ONLY find arrays. The [ ] version will find non-array that matches the type specification.Inc.find( { \"a\" : { \"$type\" : \"string\" } } )What I really meant wasc.find( { \"a\" : { \"$type\" : [ \"string\" ] } } )stills selects non-array field a that are string.For example,\nc.find( { \"a\" : { \"$type\" : [ \"string\" , \"date\" ] } } )will find documents with",
"username": "steevej"
},
{
"code": "",
"text": "Ahh got it, you are totally right \nRemoved the post so the query doesn’t mislead the others.",
"username": "turivishal"
},
{
"code": "field$type$type",
"text": "Thank you guys, actually I just wondering that it is “likely” to be buggy.I think it just about I’m not reading docs carefully, since it is clearly stated in docs thatFor documents where field is an array, $type returns documents in which at least one array element matches a type passed to $type.",
"username": "Nabil_Muh_Firdaus"
},
{
"code": "",
"text": "Thank you, I like your second solution. I believe your solution will be useful for others",
"username": "Nabil_Muh_Firdaus"
},
{
"code": "",
"text": "I’m not reading docs carefullyDo not worry. Most of us do that often. More important, your question started an interesting discussion where I learned about the [ ] shared by turivishal. Despite the fact that the version still had issues, the [ ] was still useful.",
"username": "steevej"
},
{
"code": "",
"text": "Do you think it would be a good idea to bring back your posts since some of the replies do not really make sense without the original post?May be you could simply add a big warning at the beginning and mentioned why the post is kept despite not being the final solution.I did that in the post How to use conditionals (and more?) to combine and reduce data - #4 by steevej.",
"username": "steevej"
},
{
"code": "",
"text": "Definitely, I have just reposted the answers.",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why field with array of ISODate detected as date by $type operator | 2023-07-21T17:44:43.947Z | Why field with array of ISODate detected as date by $type operator | 1,077 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "const reports = await Report.aggregate([\n {\n $lookup: {\n from: 'operations',\n let: { operationId: '$operation', reportId: '$_id' /* I don't know if this is right or if there's a better way to get the report id */ },\n pipeline: [\n {\n $match: { $expr: { $eq: ['$_id', '$$operationId'] } },\n },\n {\n $lookup: {\n from: 'users',\n let: {\n userId: '$initiator.user', type: '$initiator.type'},\n pipeline: [\n { $match: { $expr: { $eq: ['$_id', '$$userId'] } } },\n {\n $project: {\n color: '$colorCode.code',\n avatar: 1,\n type: '$$type',\n clear: {\n $filter: {\n input: '$clear',\n as: 'clear',\n cond: {\n $eq: [\n '$clear.report',\n '6375856e1f7e265016ffd3c8', /* here i want to pass the reporId */\n ],\n },\n },\n },\n },\n },\n ],\n as: 'initiator',\n },\n },\n {\n $lookup: {\n from: 'users',\n let: {\n userId: '$peer.user', type: '$peer.type',\n },\n pipeline: [\n { $match: { $expr: { $eq: ['$_id', '$$userId'] } } },\n {\n $project: {\n color: '$colorCode.code',\n avatar: 1,\n type: '$$type',\n clear: {\n $filter: {\n input: '$clear',\n as: 'clear',\n cond: {\n $eq: [\n '$clear.report',\n '6375856e1f7e265016ffd3c8', /* same here */\n ],\n },\n },\n },\n },\n },\n ],\n as: 'peer',\n },\n },\n {\n $unwind: '$initiator',\n },\n {\n $unwind: '$peer',\n },\n ],\n\n as: 'operation',\n },\n },\n {\n $unwind: '$operation',\n },\n])\n",
"text": "I want to pass the report id variable into the pipeline stage of the user’s collection to filter the (clear) field array to get only objects owned by this report",
"username": "Ahmed_Abdelrahman"
},
{
"code": "",
"text": "I am not sure I understand correctly but you could use $addFields just before the inner $lookup. This way it is available an any other field.",
"username": "steevej"
}
] | How to pass variables many level down in mongodb aggregation | 2023-07-23T08:26:13.795Z | How to pass variables many level down in mongodb aggregation | 412 |
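A trimmed sketch of the $addFields suggestion above: inside the outer $lookup pipeline, copy the outer let variable into a real field, then hand it to the inner $lookup through its own let. The temporary field name parent_report_id is an assumption.

```javascript
// Replacement stages for the outer $lookup's pipeline (fragment of the original aggregation):
const outerLookupPipelineStages = [
  { $addFields: { parent_report_id: "$$reportId" } }, // materialize the outer "let" variable
  {
    $lookup: {
      from: "users",
      let: { userId: "$initiator.user", reportId: "$parent_report_id" },
      pipeline: [
        { $match: { $expr: { $eq: ["$_id", "$$userId"] } } },
        {
          $project: {
            clear: {
              $filter: {
                input: "$clear",
                as: "c",
                // the report id is now usable two levels down instead of a hard-coded string
                cond: { $eq: ["$$c.report", "$$reportId"] }
              }
            }
          }
        }
      ],
      as: "initiator"
    }
  }
];
```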
null | [
"sharding",
"storage",
"ops-manager",
"kubernetes-operator"
] | [
{
"code": "apiVersion: mongodb.com/v1\nkind: MongoDB\nmetadata:\n name: my-sharded-cluster\nspec:\n version: \"4.2.2-ent\"\n type: ShardedCluster\n opsManager:\n configMapRef:\n name: my-project\n credentials: organization-secret\n backup:\n mode: enabled\n persistent: false\n shardCount: 2\n mongodsPerShardCount: 3\n mongosCount: 1\n configServerCount: 1\n configSrv:\n additionalMongodConfig:\n storage:\n engine: wiredTiger\n shard:\n additionalMongodConfig:\n storage:\n engine: wiredTiger\n",
"text": "I have a problem. I used MongoDB Enterprise Kubernetes Operator to deploy mongodb Sharded Cluster. I want have 2 shard with 3 mongodsPerShardCount. On each of 3 mongodsPerShardCount have 1 memory (primary) and 2 wiredtiger (secondary) storage engine. So How i can achieve that? If can, please give me a example or link to documentation to refer. Thank you so much!I used MongoDB Enterprise Kubernetes Operator to deploy mongodb Sharded Cluster with specific type of storage engine (memory or wiredtiger) so that work this is my yaml file.But I don’t know how to deploy mixing Sharder Cluster that combine two storage engine? Please give me a solution.",
"username": "Nguy_n_Xuan_D_ng"
},
{
"code": "",
"text": "To deploy a MongoDB mixing storage sharded cluster with the MongoDB Enterprise Kubernetes Operator, you can use the following YAML file:apiVersion: mongodb.com/v1\nkind: MongoDB\nmetadata:\nname: my-sharded-cluster\nspec:\nversion: “4.2.2-ent”\ntype: ShardedCluster\nopsManager:\nconfigMapRef:\nname: my-project\ncredentials: organization-secret\nbackup:\nmode: enabled\npersistent: false\nshardCount: 2\nmongodsPerShardCount: 3\nmongosCount: 1\nconfigServerCount: 1\nconfigSrv:\nadditionalMongodConfig:\nstorage:\nengine: wiredTiger\nshard:\nadditionalMongodConfig:\nstorage:\nengine:\n- wiredTiger\n- memoryThis YAML file will create a sharded cluster with two shards, each with three mongods. On each mongod, one of the storage engines will be wiredTiger and the other will be memory.To deploy this cluster, you can use the following command:kubectl apply -f my-sharded-cluster.yaml\nOnce the cluster is deployed, you can connect to it using the following command:mongos --host my-mongos\nThis will connect you to the mongos instance, which will allow you to access the shards in the cluster.For more information, you can refer to the following documentation:1- MongoDB Enterprise Kubernetes Operator documentation: MongoDB Enterprise Kubernetes Operator — MongoDB Kubernetes Operator upcoming\n2- MongoDB Sharding documentation: https://docs.mongodb.com/manual/sharding/",
"username": "kavin_oven"
},
{
"code": "Status: 400 (Bad Request), ErrorCode: INVALID_JSON_ATTRIBUTE, Detail: Received JSON for the processes.java.util.ArrayList[0].args2_6.storage.engine attribute does not match expected format.\"\n",
"text": "Hello, thank for suport me.\nBut when i try your solution, iam get this error.",
"username": "Nguy_n_Xuan_D_ng"
}
] | How to deploy mongodb mixing storage sharded cluster with MongoDB Enterprise Kubernetes Operator? | 2023-07-19T15:50:25.519Z | How to deploy mongodb mixing storage sharded cluster with MongoDB Enterprise Kubernetes Operator? | 676 |
null | [] | [
{
"code": "ending session with error: failed to validate upload changesets: field \"field_name\" in table \"table_name\" should have link type \"objectId\" but payload type is \"Null\" (ProtocolErrorCode=212)",
"text": "Hi, I’m getting this error with realm sync that it’s ending the sync session before it can sync the data. Error: ending session with error: failed to validate upload changesets: field \"field_name\" in table \"table_name\" should have link type \"objectId\" but payload type is \"Null\" (ProtocolErrorCode=212).The field specified is a relationship field, which makes the field not required, but I still have the error coming in. A field that is not required can be nullable? Is that correct or am I missing something?",
"username": "Rossicler_Junior"
},
{
"code": "required",
"text": "Hi @Rossicler_Junior,That error implies that you are generating a link to a table with a required primary key, but the primary key in your link is null so it fails schema validation. Is this error coming from a device session? Or do you encounter this error when sync is trying to ingest your MongoDB data?",
"username": "Kiro_Morkos"
},
{
"code": "requiredSync -> Session EndSync -> WriteBadChangeset Error",
"text": "at you are generating a link to a table with a required primary keyI can only see this error in the realm app logs, not on client side. The error type is either Sync -> Session End or Sync -> Write with status BadChangeset ErrorJust to make sure if I understand what you’re saying, if the relationship field is null it will generate this error?Let me give you an example of a realm scenario. I have a schema “schema_1”, this schema has a field called “media”, which is a relationship to a schema called “media”, pointing to “_id” primary key field.\nOn my “schema_1” I have some entries with the media field being the ObjectId of a media _id, and some of them are null. So the null values are a problem? Even if “media” field is not required?Screenshot of the media field in the realm schema.\n\nimage1086×82 5.26 KB\n",
"username": "Rossicler_Junior"
},
{
"code": "mediaSync --> WriteBadChangeset ErrorWrite Summary",
"text": "if the relationship field is null it will generate this error?The link consists of two components: a table and primary key. So the link can be non-null, while the primary key is null.In any case, that error should never appear for writes originating in MongoDB, so it appears there’s a bug somewhere. Can you share the value of the media field in the document that is triggering the bad changeset error? The Sync --> Write log with the BadChangeset Error should contain a Write Summary section that you can use to determine which document triggered the error.Feel free to follow up in a DM if you don’t want to share the data publicly here!",
"username": "Kiro_Morkos"
},
{
"code": "schema_1media",
"text": "you canI don’t see any additional log that indicates the document which triggered the error. But I think I found the issue and fixed it. I found some entries in my schema_1 with the media field as undefined instead of null. I manually changed them to null, terminated device sync and started again, so far it’s working and no more errors.Last time this happened it also worked for some time then the error came in, if the error persists I will come back here to update you about more details. Thanks for the help!",
"username": "Rossicler_Junior"
},
{
"code": "nullundefined",
"text": "Got it - I was able to reproduce the error you’re seeing with an undefined value, so we’ll have to fix that on our end. In the meantime, you can avoid it by using null values instead of undefined as you identified!",
"username": "Kiro_Morkos"
},
{
"code": "undefinednull",
"text": "ad oYep, already done the changes to avoid undefined and use null instead. Thank you",
"username": "Rossicler_Junior"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Sync validate upload changesets error | 2023-07-24T14:28:25.066Z | Realm Sync validate upload changesets error | 551 |
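A one-line illustration of the workaround agreed on above, for anyone writing to the synced collection from a backend script: clear the optional link with an explicit null rather than leaving it undefined.

```javascript
// Writing null keeps the document valid against the synced schema;
// an undefined value is what produced the BadChangeset / ProtocolErrorCode=212 error.
await db.collection("schema_1").updateOne(
  { _id: docId },
  { $set: { media: null } } // not { media: undefined }
);
```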
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "{\n \"_id\": ObjectId(\"64974bfc24cb7cd87c4e0359\"),\n \"text\": \"some text\",\n \"company_id\": \"abcdef-e799-4be3-94db-9f79b38fdeff\",\n \"role_id\": \"it\",\n \"text_embedding\": [\n -0.0058572087, 0.033706117, -0.0049423487, -0.033509824,\n ... 1532 more items\n ]\n}\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"text_embedding\": [\n {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n ],\n \"company_id\": {\n \"type\": \"embeddedDocuments\"\n },\n \"role_id\": {\n \"type\": \"embeddedDocuments\"\n }\n }\n }\n}\n",
"text": "Hello everyone,I have a question regarding the creation of an aggregation pipeline that involves searching on two different field types, namely “embeddedDocuments” and “knnVector,” in two separate stages. Specifically, I want to first filter out all documents for a particular “company_id” and “role_id,” and then perform vector search on the resulting dataset.I’ve attempted various approaches using functions such as $match, $search, compound, and $facet, but I’ve encountered several errors along the way. Here are some examples of the errors I faced:Below is a sample document to provide context for the data structure:Additionally, the search index mappings are as follows:My main question is whether this functionality is supported in MongoDB Atlas?Thank you for your assistance and guidance.Best regards,\nLazar",
"username": "Lazar_Nakinov"
},
{
"code": "[\n {\n '$search': {\n 'index': 'someIndex', \n 'knnBeta': {\n 'vector': [\n 0.3, -0.4, 0.2, 1\n ], \n 'path': 'text_embedding', \n 'k': 5\n }\n }\n }, {\n '$match': {\n 'company_id': 'abcdef-e799-4be3-94db-9f79b38fdeff'\n }\n }\n]\n",
"text": "Something like this maybe?",
"username": "Felix_Rejmer"
},
{
"code": "",
"text": "Thank you Felix. It is working. My mistake was that I was creating the aggregation pipeline on an index having only the field that held the vector embeddings. That is why all my past attempts were not working. After dropping the index, re-creating it as vector and added the company_id as embeddedDocument, the standard $search and $match worked like a charm.",
"username": "Lazar_Nakinov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Aggregation Pipeline with Multi-Field Search (embeddedDocuments and knnVector) | 2023-07-23T17:47:52.154Z | Aggregation Pipeline with Multi-Field Search (embeddedDocuments and knnVector) | 562 |
null | [
"sharding"
] | [
{
"code": "",
"text": "I have a collection that only holds 10 million documents, totaling 10 gigabytes. This may not seem like enough to necessitate sharding.But there is a query that takes 1000 seconds to complete on this collection.If I divide this collection into 1000 shards, then I can take advantage of the divide and conquer strategy, and reduce the query speed to 1 second (in theory, excluding overhead and other complications).Is the above scenario not the primary reason for sharding? If so, it seems odd that MongoDB Atlas only allows 50 shards maximum.",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Do you know what the bottlenecks are on your query before you resort to building a load of shards?",
"username": "John_Sewell"
},
{
"code": "$lookup$lookup$lookup",
"text": "One of the bottlenecks is that the query contains several $lookup.\nThe other bottleneck is that the query may return arbitrary number of documents, ie. 10 million docs, before the $lookup. So if you pass 10 million docs to several $lookups, then it will be slow.Even though the number of docs returned to the client is limited at 50, the query has a $sort, so it must examine all docs returned.",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Ahh, ok I guess you have a specific requirement, as per the sorting could you do that early and then make use of an index or do you unwind and reshape the data as it flows, so end up sorting on a derived field?",
"username": "John_Sewell"
},
{
"code": "$lookup",
"text": "Right, sorting must be done after all of the $lookup, which filters documents, and some of the fields involved in the sorting are derived. So there is no way to sort early or use an index. The only optimization for the sorting is that the returned set of documents are limited to 50.",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Sounds like given the data layout, there is a lot of work! I guess to get it running quicker you need to either change the storage model or throw a lot of hardware at it as you said at the top.I’m not quite sure how running a lookup on a massively sharded collection to another collection would work with that many nodes in terms of the extra overhead…I guess you get to the point when you export data to a relational reporting database and use joins just for reporting requirements…Sorry, not being much help here! I guess you could merge in the vlookups or keep them updated in this collection if they do not change on a very frequent basis. If you need this query to run in 1s then it would be cheaper to re-model than throw a LOT of hardware at it.",
"username": "John_Sewell"
}
] | The purpose of sharding | 2023-07-25T14:59:55.098Z | The purpose of sharding | 521 |
null | [] | [
{
"code": "",
"text": "I’m currently facing an issue with MongoDB Atlas related to automatic indexing on a substantial collection. This process is adding an unexpected extra 1.5GB of storage to my database, consequently pushing me to the brink of my 2GB limit. I discovered that this auto-indexing process is a standard feature of MongoDB Atlas, which I learned about here.However, what I find problematic is the lack of advance notification or warning before this process takes place. The surprise addition of 1.5GB to the database can be incredibly disruptive, especially when dealing with a production database where storage is meticulously managed and calculated.The fact that such a significant increase in size can occur without prior notification makes it challenging to account for these unexpected changes and to manage resources effectively. I believe it would be highly beneficial to receive some kind of alert or warning before such an automatic process initiates. This would allow users to better prepare and potentially avoid critical disruptions in their production environment.",
"username": "Callum_Osborne"
},
{
"code": "",
"text": "Hi @Callum_Osborne,Thank you for your suggestion. I’m sorry to hear that you’re experiencing this problem. I’d like to look into this a bit more, as auto-creating a 1.5GB index based on 0.5GB of document data would be very unexpected. Would you mind filing a support case for this so I can look more into your environment?Thanks,\nFrank",
"username": "Frank_Sun"
}
] | Unanticipated Auto-Indexing breaking DB | 2023-07-24T08:38:32.984Z | Unanticipated Auto-Indexing breaking DB | 311 |
null | [
"replication",
"security"
] | [
{
"code": "",
"text": "I’m trying to add tls/ssl certification to my existing mongodb (5.0) 3-node replication cluster and I wanna know whether doing this could do any harm to my data. Do I need to get a backup ? Is there any possibility of data or any configurations in database being damaged ?This server is in production.",
"username": "Sandeepa_Kariyawasam"
},
{
"code": "",
"text": "Simply enabling tls has no impact on data.However routine backup and recovery must be in place for any production database. The alternative is catastrophe when a disaster occurs.You should already be familiar and confident with the procedure from execution in a lower or test environment.Following the documented process is quite safe.",
"username": "chris"
}
] | Adding ssl/tls to existing replica cluster | 2023-07-25T05:39:36.556Z | Adding ssl/tls to existing replica cluster | 462 |
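A sketch of the relevant mongod.conf fragment for a rolling TLS rollout, following the documented staged modes; the certificate paths are placeholders. Each member is updated and restarted one at a time, moving from allowTLS to preferTLS to requireTLS once every node has its certificates.

```yaml
# mongod.conf excerpt (example paths)
net:
  tls:
    mode: preferTLS              # start permissive; tighten to requireTLS at the end
    certificateKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/ca.pem
```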
[
"mongodb-shell"
] | [
{
"code": "db.createCollection(\"a.\");\n FUNCTION_STATE_IDENTIFIER = \"threw\";\\n } else throw err;\\n } finally {\\n if (FUNCTION_STATE_IDENTIFIER !== \"threw\") FUNCTION_STATE_IDENTIFIER = \"returned\";\\n }\\n }\\n '),h=o.template.statement(\"\\n let EXPRESSION_HOLDER_IDENTIFIER;\"),f=o.template.statements('\\n let FUNCTION_STATE_IDENTIFIER = \"sync\",\\n SYNC_RETURN_VALUE_IDENTIFIER;\\n\\n const ASYNC_RETURN_VALUE_IDENTIFIER = (ASYNC_TRY_CATCH_WRAPPER)();\\n\\n if (FUNCTION_STATE_IDENTIFIER === \"returned\")\\n return SYNC_RETURN_VALUE_IDENTIFIER;\\n else if (FUNCTION_STATE_IDENTIFIER === \"threw\")\\n throw SYNC_RETURN_VALUE_IDENTIFIER;\\n FUNCTION_STATE_IDENTIFIER = \"async\";\\n return MSP_IDENTIFIER(ASYNC_RETURN_VALUE_IDENTIFIER);\\n '),m=o.template.expression(\"(\\n ORIGINAL_SOURCE,\\n EXPRESSION_HOLDER = NODE,\\n ISP_IDENTIFIER(EXPRESSION_HOLDER) ? await EXPRESSION_HOLDER : EXPRESSION_HOLDER\\n )\",{allowAwaitOutsideFunction:!0}),g=o.template.expression(\"\\n ANSP_IDENTIFIER(NODE, ORIGINAL_SOURCE)\\n \"),y=o.template.statement(\"\\n try {\\n ORIGINAL_CODE;\\n } catch (err) {\\n throw err;\\n }\\n \"),_=o.template.statement(String.raw`\n \n\nMongoInvalidArgumentError: Collection names must not start or end with '.'\n at t.checkCollectionName (/tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:2:2674072)\n at new Collection (/tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:2:2452002)\n at /tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:2:2570867\n at /tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:2:2431876\n at /tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:2:2628362\n at /tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:2:2627307\n at Connection.onMessage (/tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:2:2422155)\n at MessageStream.<anonymous> (/tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:2:2420014)\n at MessageStream.emit (node:events:513:28)\n at MessageStream.emit (node:domain:489:12) {\n [Symbol(errorLabels)]: Set(0) {}\n}\n\n",
"text": "I was playing around with createCollection.\nFirstly, I was working on my local mongodb. I ran the following code and the shell client threw a lot of code terminated immediately.I started client again and re-run the same command and it says that -\nMongoServerError: Collection test.a. already exists.When I ran the same code on Mongo Web Shell, it also did the same. But I can see the code that was thrown (last few lines)To my surprise, the shell got terminated and I could access the machine. Attached the screenshot\n\nScreenshot 2023-07-25 at 10.25.21 AM1058×690 77.9 KB\n",
"username": "Abhishek_Chaudhary1"
},
{
"code": "",
"text": "Probably better over at https://jira.mongodb.org/",
"username": "chris"
}
] | Super Critical Bug for createCollection | 2023-07-25T04:57:52.561Z | Super Critical Bug for createCollection | 388 |
|
[] | [
{
"code": "",
"text": "I think the answer to the 2nd question in this quiz is wrong. They are mixing embedding and referencing concepts. MongoDB Courses and Trainings | MongoDB University\nimage1130×819 19.7 KB\n",
"username": "ajinkya_shidhore1"
},
{
"code": "",
"text": "Hey @ajinkya_shidhore1,Welcome to the MongoDB Community forums!the answer to the 2nd question in this quiz is wrong. They are mixing embedding and referencing conceptsCould you please further elaborate on why you think the answer to this quiz is wrong?Looking forward to your response to assist further.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "May be the question itself is wrong. both comment_id and reader_id are used as reference, but question is asking to select only one field.",
"username": "ajinkya_shidhore1"
},
{
"code": "",
"text": "May be the question itself is wrong.I do not think the question is wrong. The question mentions blog post and its comments. While reader_id is probably a reference to the reader, it has nothing to do with blog post and its comments mentioned in the question.",
"username": "steevej"
},
{
"code": "",
"text": "It appears the answer has been corrected to reader_id.\nThanks for raising the question because the incorrect answer must have confused others too.\nMongoDB referencing question - Screenshot 2023-07-25 1150131171×888 30.9 KB\n",
"username": "Xavier_Fernandes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is quiz answer need to be corrected? | 2023-07-08T05:04:48.460Z | Is quiz answer need to be corrected? | 875 |
|
null | [] | [
{
"code": "",
"text": "We are having 5 shard cluster in one shard the CPU is reaching maximum & Remaining 4 shards are good, How can i find the high CPU usage in this shard and how can we balance it\nto other shard to decrease the CPU usage of the particular shard",
"username": "dev123_dev123"
},
{
"code": "",
"text": "Hi @dev123_dev123 and welcome in the MongoDB Community !This isn’t a good sign. \nIt probably means you have something unbalanced in your cluster.I’d first run a mongostat and mongotop on each shard (==replica set) in your sharded cluster to check that they are all roughly similar. If your shard keys are correct, your work load should be shared evenly across the different shards.\nWith mongotop for example if you see that there is a clear difference on that particular shard VS the other shard on a specific collection, at least now you know which collection(s) is(are) causing the problem.I would also run sh.status() to check that you don’t have jumbo chunks and that the chunk repartition is even across the 5 shards for all the sharded collections.While you are on the result of this command, I would double check all the shard keys and make sure that they all follow the good practices - especially that none of them is growing monotonically. This would result in a single shard receiving all the insert operations for that particular collection which isn’t very scalable.I hope this helps.\nDon’t hesitate to provide the output of these commands if you can’t find the problem.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
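A sketch of the checks described in the reply above; the database, collection, and host names are placeholders, and the exact sh.status() output varies by server version:

```js
// Run from mongosh connected to the mongos
sh.status()                                           // balancer state, shards, sharded collections
db.getSiblingDB("mydb").mycoll.getShardDistribution() // docs/chunks per shard for one collection

// From the OS shell, compare resource usage across shards, e.g.:
//   mongostat --host <shard1-primary>
//   mongotop  --host <shard1-primary> 5
```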
{
"code": "sh.status()",
"text": "Hi, is there any method we can check how that CPU is getting used? We meet a similar problem that traffic causes all shards CPU to get high but only 1 gets to 90%+ and the rest of them stay at 60%.\nWe are actively debug what might go wrong, but wondered if we have more tools and monitors than sh.status() on the shard.",
"username": "Yiru_Gao"
},
{
"code": "",
"text": "You will need system level tool to check cou usage. Eg top.Even say your data is evenly distributed some keys may be hot than ithers and /or disk usage or network traffic differs.Or maybe caused by other processing on the same node",
"username": "Kobe_W"
},
{
"code": "",
"text": "Have you tried running mongotop & mongostat on this Shard (i.e. replica set) and compared the values with other shards?\nMongotop will show you the collections that are using the more resources per seconds. It’s a good way to identify a collection that is using a bad shard key (growing monotonically) for example.",
"username": "MaBeuLux88"
}
] | One Shard Getting High CPU | 2022-12-01T08:13:01.338Z | One Shard Getting High CPU | 2,166 |
null | [
"python",
"golang"
] | [
{
"code": "package persistence\n\nimport (\n\t\"context\"\n\t\"go.mongodb.org/mongo-driver/bson\"\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\n\t\"time\"\n)\n\nvar MongoClient *mongo.Client\n\nfunc InitMongo(ctx context.Context, URI string) error {\n\tif MongoClient != nil {\n\t\treturn nil\n\t}\n\tserverAPI := options.ServerAPI(options.ServerAPIVersion1)\n\topts := options.Client().ApplyURI(URI).SetServerAPIOptions(serverAPI)\n\n\topts.SetMinPoolSize(10)\n\topts.SetMaxPoolSize(100)\n\topts.SetMaxConnIdleTime(2 * time.Second)\n\n\t// Create a new client and connect to the server\n\tclient, err := mongo.Connect(ctx, opts)\n\tif err != nil {\n\t\treturn err\n\t}\n\tMongoClient = client\n\n\t// Send a ping to confirm a successful connection\n\tvar result bson.M\n\tif err = client.Database(\"admin\").RunCommand(context.TODO(), bson.D{{\"ping\", 1}}).Decode(&result); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n\n}\nfunc main() {\n\tconfiguration := config.NewEnvConfigs()\n\n\tctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\n\tdefer cancel()\n\terr = persistence.InitMongo(ctx, configuration.MongoDBURI)\n\tif err != nil {\n\t\tfmt.Println(\"Mongo not ready\")\n\t\tpanic(err)\n\t}\n\tfmt.Println(\"mongo connected\")\n\tdefer func() {\n\t\tif err = persistence.MongoClient.Disconnect(context.TODO()); err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t}()\n\tr := server.Routes()\n\thttp.ListenAndServe(\"0.0.0.0:3333\", r)\n}\npackage handlers\n\nimport (\n\t\"context\"\n\t\"encoding/base64\"\n\t\"errors\"\n\t\"fmt\"\n\t\"github.com/bycultivaet/backend/internal/infrastructure/persistence\"\n\t\"github.com/go-chi/chi\"\n\t\"github.com/go-chi/render\"\n\t\"github.com/google/uuid\"\n\t\"go.mongodb.org/mongo-driver/bson\"\n\t\"go.mongodb.org/mongo-driver/bson/primitive\"\n\t\"io/ioutil\"\n\t\"mime/multipart\"\n\t\"net/http\"\n)\n\nfunc GetImageByID() http.HandlerFunc {\n\treturn func(w http.ResponseWriter, r *http.Request) {\n\t\tctx, cancel := context.WithCancel(r.Context())\n\t\tdefer cancel()\n\t\tUuid := chi.URLParam(r, \"uuid\")\n\t\t_, err := uuid.Parse(Uuid)\n\t\tif err != nil {\n\t\t\trender.Status(r, 400)\n\t\t\trender.Respond(w, r, errors.New(\"invalid uuid\").Error())\n\t\t\treturn\n\t\t}\n\t\tfilter := bson.M{\"uuid\": Uuid}\n\t\tvar result bson.M\n\t\tcollection := persistence.MongoClient.Database(\"hassad-media\").Collection(\"images\")\n\t\terr = collection.FindOne(ctx, filter).Decode(&result)\n\t\tif err != nil {\n\t\t\trender.Status(r, 404)\n\t\t\trender.Respond(w, r, errors.New(\"image not found\").Error())\n\t\t\treturn\n\t\t}\n\t\timageData, ok := result[\"image\"].(primitive.Binary)\n\t\tif !ok {\n\t\t\trender.Status(r, 500)\n\t\t\trender.Respond(w, r, errors.New(\"error parsing image\").Error())\n\t\t\treturn\n\t\t}\n\t\timageBase64 := base64.StdEncoding.EncodeToString(imageData.Data)\n\t\trender.Status(r, 200)\n\t\trender.JSON(w, r, map[string]string{\"image\": imageBase64})\n\t}\n}\n\nfunc UploadImage() http.HandlerFunc {\n\treturn func(w http.ResponseWriter, r *http.Request) {\n\t\tctx, cancel := context.WithCancel(r.Context())\n\t\tdefer cancel()\n\t\tcancelled := r.Context().Done()\n\t\tselect {\n\t\tcase <-cancelled:\n\t\t\trender.Status(r, 499)\n\t\t\trender.Respond(w, r, errors.New(\"request cancelled\").Error())\n\t\t\treturn\n\t\tdefault:\n\t\t\terr := r.ParseMultipartForm(10 << 20)\n\t\t\tif err != nil {\n\t\t\t\trender.Status(r, http.StatusBadRequest)\n\t\t\t\trender.Respond(w, r, 
err.Error())\n\t\t\t\treturn\n\t\t\t}\n\t\t\tfile, _, err := r.FormFile(\"image\")\n\t\t\tif err != nil {\n\t\t\t\trender.Status(r, http.StatusBadRequest)\n\t\t\t\trender.Respond(w, r, err.Error())\n\t\t\t\treturn\n\t\t\t}\n\t\t\tdefer file.Close()\n\t\t\tUuid, err := uuid.NewRandom()\n\t\t\tif err != nil {\n\t\t\t\trender.Status(r, http.StatusInternalServerError)\n\t\t\t\trender.Respond(w, r, errors.New(\"error generating uuid\").Error())\n\t\t\t\treturn\n\t\t\t}\n\t\t\t// Pass the contents of the file to GetMongoDB\n\t\t\t_, err = UploadPhoto(file, Uuid.String(), ctx)\n\t\t\tif err != nil {\n\t\t\t\trender.Status(r, http.StatusInternalServerError)\n\t\t\t\trender.Respond(w, r, err.Error())\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\trender.Status(r, http.StatusOK)\n\t\t\trender.JSON(w, r, map[string]string{\"uuid\": Uuid.String()})\n\n\t\t}\n\t}\n}\n\nfunc UploadPhoto(file multipart.File, uuid string, ctx context.Context) (interface{}, error) {\n\timageBytes, err := ioutil.ReadAll(file)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\timageDoc := bson.M{\"image\": imageBytes, \"uuid\": uuid}\n\tdefer file.Close()\n\tcollection := persistence.MongoClient.Database(\"hassad-media\").Collection(\"images\")\n\tdata, err := collection.InsertOne(ctx, imageDoc)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\tfmt.Println(\"Inserted image into MongoDB!\")\n\tfmt.Println(data.InsertedID)\n\treturn data.InsertedID, nil\n}\n\nfunc DeleteImageByID() http.HandlerFunc {\n\treturn func(w http.ResponseWriter, r *http.Request) {\n\t\tctx, cancel := context.WithCancel(r.Context())\n\t\tdefer cancel()\n\n\t\tuuid := chi.URLParam(r, \"uuid\")\n\t\tfilter := bson.M{\"uuid\": uuid}\n\t\tcollection := persistence.MongoClient.Database(\"hassad-media\").Collection(\"images\")\n\t\tres, err := collection.DeleteOne(ctx, filter)\n\t\tif err != nil {\n\t\t\tif res.DeletedCount == 0 {\n\t\t\t\trender.Status(r, 404)\n\t\t\t\trender.Respond(w, r, errors.New(\"image not found\").Error())\n\t\t\t\treturn\n\t\t\t}\n\t\t\trender.Status(r, 500)\n\t\t\trender.Respond(w, r, err.Error())\n\t\t\treturn\n\t\t}\n\t\trender.JSON(w, r, map[string]string{\"answer\": \"deleted\"})\n\t}\n}\n",
"text": "I’m developing a Go application that integrates with MongoDB to create, read, and delete images. I’ve set the connection pool to 4 for testing purposes, but I’ve noticed that the number of connections can go up to 10 even if I haven’t hit any endpoint related to MongoDB. If I don’t hit any endpoint, the number of connections stays between 2-3. There’s a connection leak in my code, and I’m having trouble figuring out how to release resources correctly to ensure that they are returned to the pool.As the documentation says mongo-go driver is goroutine safe so I have a function that initialize mongo and set a global variable with the initialized mongo clientand I use this function in the main to initialize italso the last piece of code that uses the mongo client is the handlers for images endpointswhat I’m doing wrong closing the connections to be returned to the pool or any other configurationI test my code by with python script that hits the endpoints excessively",
"username": "Omar_Dawah"
},
{
"code": "",
"text": "Hey @Omar_Dawah, welcome and thanks for the question!What you’re describing actually sounds like normal behavior. Additionally, the code you provided seems correct and shouldn’t cause a connection leak.MongoDB drivers open a minimum of 2 monitoring connections to each node in a MongoDB database, plus at least 1 connection to each node for user operations. For example, if you’re connecting to a 3-node replica set, MongoDB drivers will open 6 total monitoring connections before you run any operations. Monitoring connections are used to track the state of the database to maximize availability during topology changes or unexpected disconnects.Even though your description doesn’t sound like a connection leak, I have a few questions to help me understand your situation:Thanks!",
"username": "Matt_Dale"
},
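To see where the extra connections come from, one option (not from the thread, just a sketch using the driver's event package; the URI and pool sizes are placeholders) is to attach a PoolMonitor and log every connection-pool event:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/event"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// Log every pool event so connection creation, checkout, check-in,
	// and closure (with its reason) become visible.
	monitor := &event.PoolMonitor{
		Event: func(evt *event.PoolEvent) {
			log.Printf("pool event: type=%s address=%s reason=%s", evt.Type, evt.Address, evt.Reason)
		},
	}

	opts := options.Client().
		ApplyURI("mongodb://127.0.0.1:27017"). // placeholder URI
		SetMinPoolSize(10).
		SetMaxPoolSize(100).
		SetPoolMonitor(monitor)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, opts)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	// ... run the HTTP workload here and watch which events fire ...
}
```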
{
"code": "opts.SetMaxConnIdleTime(2 * time.Second)",
"text": "Thanks For Answering @Matt_Dale,\n1- go.mongodb.org/mongo-driver v1.12.0\n2- Mongo Cluster (The Free plan)\n3- Atlas-hosted databaseAnother piece of information that may help you, is that on production I used to get connections up to 500 even if the max pool is 100 but after I have set opts.SetMaxConnIdleTime(2 * time.Second) it’s dropped to about 240-250 at the peak of using.\nalso, I’m sure that there are no 240 hits to the Mongo endpoints",
"username": "Omar_Dawah"
}
] | Connection leak issue with Mongo-Go Driver | 2023-07-23T12:06:36.174Z | Connection leak issue with Mongo-Go Driver | 789 |
null | [
"aggregation",
"queries"
] | [
{
"code": "\n\"aggregate\": \"collection\",\n \"pipeline\": [\n {\n \"$match\": {\n \"story_dt\": {\"$gte\": {\"$date\": XXX},\"$lt\": { \"$date\": XXX } },\n \"org_id\": {\n \"$in\": [\n (HERE WE SENT 600 NUMBERS)234,2345,86,292,456, etc\n ]\n },\n \"topics.topic_cd\": {\n \"$in\": [\n \"Water\",\n \"WatPoll\"\n ]\n }\n }\n }\n",
"text": "My query has the next pattern:The index got is org_id but the query is slow, how could we improve the performance?\nwhat is the limit of values to match using $in operator?",
"username": "Valeria_Haro1"
},
{
"code": "$inorg_idorg_id$in$or \"$or\": [\n {\"org_id\": 234},\n {\"org_id\": 2345}, \n {\"org_id\": 86},\n {\"org_id\": 292},\n {\"org_id\": 456},\n ...\n ],\n{story_dt: 1, org_id: 1, 'topics.topic_cd': 1}org_id$match$in$in",
"text": "Hi @Valeria_Haro1,Welcome to the MongoDB Community!I’ve some thoughts that may help optimize this aggregation pipeline query:However, this may not be feasible if the number of IDs is very large.Further, you can consider adding an index on {story_dt: 1, org_id: 1, 'topics.topic_cd': 1} to support the full query criteria, not just org_id. This is a compound index spanning all fields used in the $match.There is no hard-coded limit on the number of values in $in. But in general, very large $in lists with hundreds or thousands of items will negatively impact query performance.Let us know if this helps or if you have any further questions!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
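For reference, a sketch of how the suggested index could be created and verified in mongosh; the collection name comes from the aggregate command in the question, and the alternative key order is only a suggestion to compare, not something from the thread:

```js
// Compound index suggested in the reply above
db.collection.createIndex({ story_dt: 1, org_id: 1, "topics.topic_cd": 1 })

// Per the equality-before-range guideline, it may also be worth comparing an
// order that puts the range field last, e.g.
//   db.collection.createIndex({ org_id: 1, "topics.topic_cd": 1, story_dt: 1 })

// Confirm which plan is chosen and how many keys/documents are examined
db.collection.explain("executionStats").aggregate([ /* the $match pipeline above */ ])
```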
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Query performance using $in operator maching more than 600 values | 2023-06-22T14:29:26.565Z | Query performance using $in operator maching more than 600 values | 549 |
null | [] | [
{
"code": "",
"text": "Hi Team, I have done the replica setup where I do have (1 primary node, 1 secondary node and 1 arbiter node). During my test, when I stop my primary node, then it auto failover happen and my secondary auto become primary node, but I am unable to perform write operation on new primary node.\nNeed your help to understand, why write operation is not working after successful failover on primary node?\nOther question I do have, If I setup (1 primary node and 2 secondary node), in this case do I still need arbiter node?",
"username": "Rajeev_Jha"
},
{
"code": "",
"text": "You should have majority data bearing nodes up. But arbiter being non data bearing node your majority is not met and write fails\nPSA is not a recommended configuration\nPSS is a valid configuration and you will not face this issue\nIf you have PSS no need of arbiter\nYou should have odd number of nodes",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you @Ramachandra_Tummala. This work for for, so now if primary fail, then one of secondary become primary and I able to perform write and read operation both. But with failed primary, when it again become available as secondary, then I am unable to perform read operation. Even I have removed the server from replication group and added again, still when I try run read query, I get below error:“not primary and secondaryOk=false”I tried below commands to enable read like:\nrs.setSlaveOk()\ndb.getMongo().setSlaveOk()\nrs.secondaryOk()And while I was running above commands, I got error, it has been deprecated, and then I used below query:db.getMongo().setReadPref(‘primaryPreferred’)But it still having same error when I try run read query “NotPrimaryNoSecondaryOk”.Please help to suggest, how I can fix this error?",
"username": "Rajeev_Jha"
},
{
"code": "",
"text": "Hi @Ramachandra_Tummala and team, please help to understand and fix my problem as mentioned above.",
"username": "Rajeev_Jha"
},
{
"code": "",
"text": "It should work\nAre you checking rs.status() before running those commands\nThey wont work on primary if by mistake you are running them on primary\nWhen you are removing/adding election may be taking place and new primary elected\nAre you running these commands manually or expecting driver to read from secondary’s\nYou have to use appropriate connect string",
"username": "Ramachandra_Tummala"
}
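As an illustration of the kind of connection string the reply refers to (host names and replica-set name are placeholders), reads can be allowed on secondaries via the read preference in the URI, or per session/query in mongosh:

```js
// URI form (placeholders):
//   mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0&readPreference=primaryPreferred

// mongosh alternatives on an existing connection:
db.getMongo().setReadPref("primaryPreferred")        // session-wide
db.collection.find().readPref("secondaryPreferred")  // per query
```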
] | Write operation fail on primary node after replica failover | 2023-06-28T07:48:13.370Z | Write operation fail on primary node after replica failover | 735 |
null | [
"server"
] | [
{
"code": "brew services start mongodb-communitybrew services listName Status User File\nmongodb-community error 12288 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\ntail $(brew --prefix)/var/log/mongodb/mongo.log{\"t\":{\"$date\":\"2022-10-14T19:18:19.147+03:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.148+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.148+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.148+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.148+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.148+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.148+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.148+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.148+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-10-14T19:18:19.150+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n",
"text": "Hello,I was able to start mongodb-community server 2 days ago but now I cannot. I do not remember making any changes on it. At some point I have changed my node version to an old one but changed it back to newer one already. So I don’t think that should be a problem. I use\nbrew services start mongodb-communityand I get this error when I want to check with\nbrew services listI checked my logs using\ntail $(brew --prefix)/var/log/mongodb/mongo.logWhat I get is this below:I don’t know what the problem is. Can you guys please help me to solve this error 12288 problem?Thanks in advance.",
"username": "Samed_Torun"
},
{
"code": "",
"text": "Show us more details from your mongod.log\nIt would have given the cause for error 48 like address in use,permissions etc",
"username": "Ramachandra_Tummala"
},
{
"code": "{\"t\":{\"$date\":\"2022-10-15T21:40:18.665+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":14790,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"SamsMacbook.local\"}}\n{\"t\":{\"$date\":\"2022-10-15T21:40:18.665+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-10-15T21:40:18.665+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.3.0\"}}}\n{\"t\":{\"$date\":\"2022-10-15T21:40:18.665+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1, ::1\",\"ipv6\":true},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-10-15T21:40:18.668+03:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23024, \"ctx\":\"initandlisten\",\"msg\":\"Failed to unlink socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\",\"error\":\"Permission denied\"}}\n{\"t\":{\"$date\":\"2022-10-15T21:40:18.668+03:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40486,\"file\":\"src/mongo/transport/transport_layer_asio.cpp\",\"line\":1120}}\n{\"t\":{\"$date\":\"2022-10-15T21:40:18.668+03:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.606+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.608+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.623+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.627+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.629+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.629+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered 
PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.629+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.629+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.629+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":15254,\"port\":27017,\"dbPath\":\"/usr/local/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"SamsMacbook.local\"}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.629+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.629+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.3.0\"}}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.629+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1, ::1\",\"ipv6\":true},\"storage\":{\"dbPath\":\"/usr/local/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.634+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.635+03:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Address already in use\"}}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.636+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.639+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.639+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.640+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.640+03:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the 
WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.640+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.640+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.640+03:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-10-15T21:43:10.641+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n",
"text": "Hello,This is the latest 40 log when I try to restart the mongodb-community. (for error 12288)",
"username": "Samed_Torun"
},
{
"code": "",
"text": "It says address in use.It means you already have a mongod running\nTry to issue mongo/mongosh and see if you can connect",
"username": "Ramachandra_Tummala"
},
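For completeness, a sketch of how one might confirm this on macOS; note the first log excerpt also shows a "Failed to unlink socket file /tmp/mongodb-27017.sock ... Permission denied" error, which typically means an earlier run (for example under sudo) left a root-owned socket file behind:

```sh
# Which process is listening on the default mongod port?
sudo lsof -nP -iTCP:27017 -sTCP:LISTEN

# Is there a stale, root-owned socket file from a previous run?
ls -l /tmp/mongodb-27017.sock

# If so, removing it lets the brew service start mongod as your own user again
sudo rm /tmp/mongodb-27017.sock
```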
{
"code": "",
"text": "I can run it using mongosh, but I want to know what is the reason I am not able to run it through mongo or mongod command?\nAlso it shows mongo is not found when I try to run through mongo command",
"username": "Shubham_Mishra5"
},
{
"code": "",
"text": "Hi @Shubham_Mishra5,\nHere are some information about mongosh:Today we introduce the first beta of the new MongoDB Shell (mongosh), a shell for MongoDB with a modern user experience that will grow in functionality along with the MongoDB data platform.For mongo:For mongod:Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "mongosh is the latest Mongodb shell\nmongo is the older shell and it is deprecated now\nReason you get mongo not found is since it is not installed on your system you get that message\nIf you are able to connect with mongosh successfully why you want to run mongod?\nmongod is used to start mongodb\ncheck the docs shared by Fabio_Ramohitaj",
"username": "Ramachandra_Tummala"
}
] | Unable to start mongodb-community - Error 12288 | 2022-10-14T16:42:31.210Z | Unable to start mongodb-community - Error 12288 | 4,347 |
null | [
"replication",
"sharding",
"mongodb-shell"
] | [
{
"code": "[direct: mongos] umbraco> sh.shardCollection(\"umbraco.contents\", { \"parentId\": \"hashed\" })\nMongoServerError: No keys found for HMAC that is valid for time: { ts: Timestamp(1690092491, 1) } with id: 0\n[direct: mongos] test> show dbs;\nClient is not properly authorized to propagate mayBypassWriteBlocking\n",
"text": "Please help, Thanks in advance",
"username": "asheesh_prajapti"
},
{
"code": "",
"text": "Hi @asheesh_prajaptiWhat version are you using?Can you check that the time on all the hosts is in sync.",
"username": "chris"
},
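A quick way to check the clock-sync suggestion above; run this on every member and compare the output. The exact tooling depends on the OS, so treat this as a sketch:

```sh
# Compare the wall clock across all hosts
date -u

# On systemd-based Linux hosts, check whether NTP synchronisation is active
timedatectl status
```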
{
"code": "sharding:\n configDB: config/127.0.0.1:27011,127.0.0.1:27012,127.0.0.1:27013\nsecurity:\n keyFile: /var/lib/mongodb/mongodb.key\nnet:\n bindIp: localhost,127.0.0.1\n port: 26000\nsystemLog:\n destination: file\n path: /var/db/logs/mongos.log\n logAppend: true\nprocessManagement:\n fork: true\nsudo chown mongodb:mongodb /var/lib/mongodb/mongodb.key",
"text": "@chris Thanks for your reply, while fixing above issue now I am stuck with another problem. Now mongos command get stuck so I am not able to start query router server now.\n/etc/mongs.conf content is belowfile /var/lib/mongodb/mongodb.key has permission of root user with access sudo chown mongodb:mongodb /var/lib/mongodb/mongodb.key\nScreenshot from 2023-07-24 09-12-50726×153 11.7 KB\n",
"username": "asheesh_prajapti"
},
{
"code": "",
"text": "Hi @asheesh_prajaptiIs the configDB replicaSet initialised yet? Mongos is likely waiting until it is.Check in the log file, there should be some information on what is happening.",
"username": "chris"
}
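If the config server replica set has indeed not been initiated, a sketch of what that step could look like, using the ports and replica-set name from the posted mongos configuration:

```js
// mongosh --port 27011
rs.initiate({
  _id: "config",
  configsvr: true,
  members: [
    { _id: 0, host: "127.0.0.1:27011" },
    { _id: 1, host: "127.0.0.1:27012" },
    { _id: 2, host: "127.0.0.1:27013" }
  ]
})
rs.status()   // wait for a PRIMARY before starting mongos
```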
] | Not able to shard a collection of a db on local machine | 2023-07-23T06:17:07.960Z | Not able to shard a collection of a db on local machine | 535 |
null | [
"security"
] | [
{
"code": " ( ( 50 req/s * 86400 s/day ) / 1000000 ) * 0.3 = $1.29 / day\n ( ( 100 bots * 50 req/s * 86400 s/day ) / 1000000 ) * 0.3 = $129+/day\n ( ( 10 endpoints * 200 bots * 50 req/s * 86400 s/day ) / 1000000 ) * 0.3 = $2592 /day\n% curl https://webhooks.mongodb-realm.com/api/client/v2.0/app/APP/service/test/incoming_webhook/test\n \"Hello World!\"\n% curl https://webhooks.mongodb-realm.com/api/client/v2.0/app/APP/service/test/incoming_webhook/test\n {\"error\":\"incoming webhook evaluation blocked by CanEvaluate\",\"link\" ... }\n",
"text": "Hi Mongo(recreated topic as more info and could not edit original)I have been experimenting with Realm 3rd Party Services to create incoming webhooks, Based on the scalability I am particularly interested in preventing attacks on the endpoint and how to prevent an attacker requesting the endpoint over and over and causing a “denial of wallet” attack.I expected that using the ‘Can Evaluate’ expression combined with a list of abusive IP’s would mitigage basic attacks by blocking the request from being procesed; however from testing this does not seem to be the case - I can prevent the funtion from running so reducing Data Transfer and Compute Runtime but I cannot prevent the request from being billed.Can you answer the following:Is there any way for me to completely deny a request to an endpoint and not be charged for the request?Does Mongo have any recomended practices for mitigating endpoint attacks (for example you provide a json expression showing allow if in set of ip but this is useless if the request is still charged)?What protection does Mongo provide at the Realm platform level to mitigate platform level / large scale ddos attacks on endpoints and how does this impact our billing if our endpoints are targeted?What protection does Mongo provide at the Realm platform level to mitigate targetted ddos attacks\non our endpoints and how does this impact our billing?For example Firebase provide a platform level ddos protection that prevents billing of clients if there was a platform level ddos attack; however this is retrospective and would only apply in case 3, they provide support for case 2 but very little for case 1 which can result in unessecary billing for clients.Thank you,I have included case studies and test setup below.Case studies:Case 1: A basic attacker requesting an endpoint over and over again using a basic load testing tool running 50req/s which would be processed by Realm and result in an additional 4.3M requests per day:In this case I could identify the attacker and prevent the function being run for them but I cannot completely deny the requests; this would result in a $40/month cost when normal load might only run < 5$Case 2: A basic distributed attack could request an endpoint over and over again lets say a cluster of 100 bots requesting the endpoint at 50 req/s which would be processed by Realm and result in ~430M requests per day:Here again the IPs could be identified but not completely denied and we would have to terminate the app to prevent a bill of $4000+ for the month.Case 3: A sophisticated attacker could distribute the attack using multiple rotating bots and target all of the apps endpoints - assuming the app has 10 endpoints and using a sustained 200 bot attack:Test setup:My testing setup using the UI for setup and leaving all defaults unless specified:Test 1 - Setup endpoint to return ‘Hello World’, expect request to be processed and request billed.\nOutcome: Endpoint returns and 1 request is billedTest 2 - Set Can Evaluate to always false expect request to be denied\n=> Edit Webhook => Can Evaluate = { “%%true”: false }\nOutcome: Endpoint returns error but 1 request is still billed",
"username": "mba_cat"
},
{
"code": "",
"text": "I am actually interested in the answer to this question as well.Just to clarify, this attack is only possible using a Webhook, which I would have to activate first? So, as long as I only use Realm Sync with Atlas, this does not need to concern me, right?Now if i do use webhooks, I was expecting that any billable service would require some form of authentication or built in security. The webhook is intended to link to a third party service and the documentation describes authentication:Is it really possible to get billed for webhooks without authentication or without some sort of security token?",
"username": "Christian_Wagner"
},
{
"code": "guest => endpoint => function runs and responds with redacted data\nuser => endpoint => function runs and responds with full data\nabusive ip => endpoint => denied by can evaluate / other mechanism, request is denied and function does not run\n",
"text": "Christian_Wagner, short answer - yes that is correct.Longer answer - we need Mongo to comment and confirm but from my testing I beleive the following is the case:To be venerable you would need to create / enable a webhook - if you do not have any then there are no endpoints to exploit therefore no excessive requests can be made.When I ran my tests I left the defaults in place - my endpoint was using Authentication = System because I wanted the function to be able to access all data and the endpoint to be plubic then setup my own logic within the function; however I also want to block obviously abusing users for example by IP, ideally I was expecting the following:So far I can only get the function to not run, not the request to be denied, I have tried using both service rules and webhook can evaluate to deny the request.I will run a quick test using an Authentication = Application Authentication and see if I get different results.",
"username": "mba_cat"
},
{
"code": "",
"text": "UPDATE: When setting Authentication = Application Authentication Realm returns a 400 for a straight request with no auth specified but the requests are still charged; therefore it appears that setting up any incoming webhook is a potential vector for abuse regardless of the authentication or can evaluate settings anyone sending a request to your endpoint regardless of if they are authenticated cause you to be billed for one request.",
"username": "mba_cat"
},
{
"code": "",
"text": "Hey @mba_cat - these are really good points. Some additional things to consider with rate-limiting/throttling Realm API requests -More configuration around rate limiting can also be an added on with an API gateway or a proxy today, but configurable rate limiting in Realm sn’t out of the question. You can vote for the feature here - Configure rate limit – MongoDB Feedback EngineAnswered above, we do have set rate-limiting in place and will stop accepting requests after that.(and 4) - We’re covered under AWS Shield Advanced which should help mitigate DDoS attacks and that we’re in the process of adding more network security features (IP Access List, PrivateLink).",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hello,I understand your concerns regarding endpoint attacks and the impact on billing within the Realm platform. While I can provide some general information, please note that specific details about billing and protection mechanisms might require assistance from MongoDB support or documentation.Denying Requests and Billing: As of my last update in September 2021, I don’t have real-time information on changes to Realm’s billing policies or protection mechanisms. However, based on your description, it seems that preventing a request from being processed doesn’t necessarily prevent it from being billed. To completely deny a request to an endpoint without incurring billing charges, you may need to explore further options or consult MongoDB’s support for the most up-to-date information.Mitigating Endpoint Attacks: MongoDB might have recommended practices for mitigating endpoint attacks within the Realm platform. This could include rate limiting, IP blocking, or other security measures. To get specific guidance on these practices, I recommend reaching out to MongoDB support or referring to the official Realm documentation.Protection Against DDoS Attacks: As for large-scale DDoS attacks on the platform level, MongoDB is likely to have measures in place to mitigate these attacks and ensure platform stability. However, the details of these measures and their impact on billing would be best addressed by MongoDB support.Protection Against Targeted DDoS Attacks: Targeted DDoS attacks on specific endpoints might have varying impacts depending on the scale and nature of the attack. Again, MongoDB should have security measures in place to handle such scenarios, but for specific details on protection and billing impact, it’s best to consult MongoDB support.As a general practice, when dealing with potential DDoS or endpoint attacks, rate limiting, IP filtering, and application-level security mechanisms can be effective in mitigating the impact. However, the specifics of implementing these measures within Realm and their interaction with billing would depend on the platform’s current capabilities and policies.I recommend reaching out to MongoDB support or checking the official documentation and release notes for the most accurate and up-to-date information on billing, protection mechanisms, and best practices within the Realm platform. They will be able to provide detailed guidance and solutions tailored to your specific use case and environment.Always ensure you have the latest versions of Realm and other MongoDB services to take advantage of the latest security features and improvements. Additionally, keep your software and services up-to-date to address any potential vulnerabilities that may be exploited in attacks.Please note that my responses are based on information available up to September 2021, and I encourage you to verify and validate any details with MongoDB’s official resources for the most current information.",
"username": "Robert_Collins"
}
] | Mongo Realm 3rd Party Services denial of wallet protection / mitigation / prevention | 2021-07-24T11:01:14.188Z | Mongo Realm 3rd Party Services denial of wallet protection / mitigation / prevention | 5,027 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hello there,this question comes from an absolute beginner with Mongo and I hope anyone can help me understand a little bit more about what’s going on.I’m currently developing a mobile application in C# that is constantly running in the background and uploads very little information every few minutes using the MongoDB Client.\nI noticed when working with mobile data, the application consumes a significant amount of battery power over time (roughly 3% per hour, which is not acceptable for a 24/7 app).\nAnd yes, I’m absolutly sure that this is caused by the MongoDB Client.Now I was experimenting with different settings for the client (increasing the HeartbeatInterval, using really short or really long lifespans for connections), but it seems to make no difference.\nComing to the real question:\nHow could I change the settings of the client to have the least possible amount of network activity over time, only one upload every few minutes? I feel like it should be a constantly open connection that is idle for most of the time, but so far I’m not satisfied with my results.I really hope that anyone has some tips for me!\nThanks a lot in advance!",
"username": "Asimovcoviz"
},
{
"code": "",
"text": "this is a thing for mobile platform, not for a specific database client. there’s isn’t much you can do apart from tuning the basic config (e.g. number of min connections in the pool)on android, you can check Battery consumption for billions | Build for Billions | Android Developersfor ios, similar should exist.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks a lot for the link, I will take a deeper look into the platform optimizations.\nThe reason I ask about the database client is that even when I disable the upload function, when the client is only existing without anything to do, it constantly uses mobile data and therefore battery power. It takes roughly 0.01 MB every few seconds. That’s not much, but enough to keep the mobile connection active.\nIt seems like open connections in the pool are doing this and I don’t know how to stop that without closing the connections.",
"username": "Asimovcoviz"
},
{
"code": "",
"text": "Long live connections generally use heartbeat for various reasons (e.g. check server responsiveness).On android, you only connect to remote network when necessary (during an allowed window etc), so outside of that necessary time window, you can simply clear all the connections as they can’t do anything but consuming powers.There are a lot of features/utilities on android regarding when/how to access network. or when to keep the device “awake”. Check it out.",
"username": "Kobe_W"
}
] | Optimizing battery usage on mobile devices for MongoDB Client | 2023-07-23T22:04:42.687Z | Optimizing battery usage on mobile devices for MongoDB Client | 557 |
null | [
"aggregation",
"indexes",
"atlas-search"
] | [
{
"code": " {\n \"$project\": {\n \"_id\": 0,\n \"object_id\": \"$customerId\",\n \"object_infos\": {\n \"$concat\": [\n {\"$toString\": \"$customerId\"}, \n \" - \", \n \"$firstName\", \n \" - \", \n \"$name\", \n \" - \", \n \"$streetName\", \n \" - \", \n \"$locality\"\n ]\n },\n \"score\": { \"$meta\": \"searchScore\"},\n \"highlights\": { \"$meta\": \"searchHighlights\" }\n }\n {\n \"object_id\": 750445,\n \"object_infos\": \"750445 - Madelena - O'Connell and Becker - Clarendon Street - Ketangi\",\n \"score\": 25.159177780151367,\n \"highlights\": [\n {\n \"score\": 6.748023509979248,\n \"path\": \"name\",\n \"texts\": [\n {\n \"value\": \"O'Connell and \",\n \"type\": \"text\"\n },\n {\n \"value\": \"Becker\",\n \"type\": \"hit\"\n }\n ]\n },\n {\n \"score\": 7.059268474578857,\n \"path\": \"locality\",\n \"texts\": [\n {\n \"value\": \"Ketangi\",\n \"type\": \"hit\"\n }\n ]\n }\n ]\n },\n {\n \"object_id\": 750445,\n \"object_infos\": \"750445 - Madelena - O'Connell and Becker - Clarendon Street - Ketangi\",\n \"score\": 25.159177780151367,\n \"highlights\": [\"Becker\", \"Ketangi\"]\n },\n {\n \"$project\" : {\n \"object_id\": 1,\n \"object_infos\": 1,\n \"highlightsNEW\": {\n \"$cond\": {\n \"if\": { \"$eq\": [ \"$highlights.texts.type\", \"text\" ] },\n \"then\": \"$$REMOVE\",\n \"else\": \"$highlights.texts.value\"\n }\n }\n }\n }\n {\n \"object_id\": 750445,\n \"object_infos\": \"750445 - Madelena - O'Connell and Becker - Clarendon Street - Ketangi\",\n \"highlightsNEW\": [\n [\n \"White, O'Connell and \",\n \"Becker\"\n ],\n [\n \"Ketangi\"\n ]\n ]\n },\n",
"text": "Hi,I made an aggregation pipeline including a “search” stage on several fields of a collection containing clients information (name, first name, street name, locality, …).In the “project” stage at the end of the pipeline, I include the “highlights” metadata:During my tests, I provided two strings in input (“Becker” and “Ketangi”), so that I have some results for which there’s a match on the name (at least partially), and the locality:The highlights metadata currently provide a lot of information that I actually don’t need. I would like to “reduce” them only to the values for which there was a hit. So for my example above, I would like to have something like this:The goal of this is to help to identify on which value there was a hit, in the case where that value doesn’t correspond exactly to my input string (in case of fuzzy search).Removing the “highlights.score” and “highlights.path” is easy (by just adding a “project” stage and setting to fields to 0). However in my example, I still need to do two more steps, but so far I didn’t find a way to do it:Adding an “unwind” stage to split the array of highlights is not an option, as I want to keep everything in one single document. I already tried to use the conditional removal, like this :… but it doens’t work. Here’s the result that I have:Would someone have an idea about how I could do that ?",
"username": "Nicolas_Guilitte"
},
{
"code": "db.collection.aggregate([\n {\n $unwind: \"$highlights\"\n },\n {\n $unwind: \"$highlights.texts\"\n },\n {\n $match: {\n \"highlights.texts.type\": \"hit\"\n }\n },\n {\n $group: {\n _id: {\n _id: \"$_id\",\n \"object_infos\": \"$object_infos\",\n \"score\": \"$score\",\n \n },\n highlights: {\n $push: \"$highlights.texts.value\"\n }\n }\n },\n {\n $project: {\n _id: \"$_id._id\",\n \"object_infos\": \"$_id.object_infos\",\n \"score\": \"$_id.score\",\n highlights: 1\n }\n }\n])\n",
"text": "You could do it with two unwinds and a re-group:Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "John_Sewell"
},
{
"code": "",
"text": "works perfectly ! Thank you !",
"username": "Nicolas_Guilitte"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Search highlights, keep only hits | 2023-07-20T16:01:49.692Z | Search highlights, keep only hits | 644 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "hello everyone, i am trying to use $unionWith in an atlas chart query, i receive the following error:$unionWith is not a valid aggregation stage or is not supported by Charts.even though in the release notes [https://www.mongodb.com/docs/charts/release-notes/#charts-v1.33.1], $unionWith is supported in charts.Any thoughts.\nThank you\nJad",
"username": "Jad_Bsaibes"
},
{
"code": "",
"text": "Hi @Jad_Bsaibes; to use $unionWith you need to put the query in a “Charts View” on the data source; you can’t do it directly in the query bar.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "thank you so much Tom, it worked perfectly with Charts View.",
"username": "Jad_Bsaibes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $unionWith in atlas charts giving error: is not a valid aggregation stage or is not supported by Charts | 2023-07-23T13:07:32.423Z | $unionWith in atlas charts giving error: is not a valid aggregation stage or is not supported by Charts | 558 |
null | [
"production",
"ruby"
] | [
{
"code": "",
"text": "This patch release in the 2.19 series fixes the following issue:RUBY-3284 Connection Pool does not open new connections when needed",
"username": "Dmitry_Rybakov"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | Ruby driver 2.19.1 Released | 2023-07-24T11:56:50.197Z | Ruby driver 2.19.1 Released | 553 |
null | [
"golang"
] | [
{
"code": "handler/handler.goclient, _ := mongo.NewClient(options.Client().ApplyURI(\"mongodb://127.0.0.1:27017\"))\nctx, _ := context.WithTimeout(context.Background(), 10*time.Second)\nerr := client.Connect(ctx)\nif err != nil {\n panic(err)\n}\n\ndefer client.Disconnect(ctx)\ncollection := client.Database(\"foo\").Collection(\"bar\")\nmain.go",
"text": "I am using the official mongo driver for golang.I am working on an HTTP server. And all of my handler function has the same pattern. All handlers are in handler/handler.go.What other ways are available than extracting this block to a function and calling in all the handlers?Can I make a connection in the main.go file and use it in all the handlers? How can I do that? I am using gorilla mux by the way.Anything you’d like to suggest?",
"username": "sntshk"
},
{
"code": "mongo.ClientClientClientfunc(http.ResponseWriter,*http.Request)HandleFuncappContext",
"text": "Hi @sntshk,A couple of things you can look into:Per our documentation, the mongo.Client type is safe for concurrent use, so if you’re always connecting to the same MongoDB instance, you can create a single Client and re-use it between handlers rather than creating a new one in each handler. Note that this does come with some overhead, as a Client keeps a connection pool per node in your MongoDB cluster, so if you perform a lot of concurrent database operations, you could potentially keep around a large connection pool.Go allows you to pass a function on a struct as a callback, so you can probably create a struct to store application state and helper functions. That struct could have separate functions for each of your HTTP requests and each handler function should have the signature func(http.ResponseWriter,*http.Request) so it can be used with HandleFunc in gorilla/mux.To go even further into suggestion (2), you could maybe set things up so your struct has a single top-level handler to do common operations like validate the request and then have that call a struct function to handle a specific request type once it’s been validated.There’s some examples for creating stateful handlers at Custom Handlers and Avoiding Globals in Go Web Applications · questionable services. Specifically, the appContext type discussed there seems similar to what you want. I’d also recommend asking this as a more general question (e.g. “how to create stateful HTTP handlers”) on the Go mailing list or Slack workspace to get advice from others as well.– Divjot",
"username": "Divjot_Arora"
},
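A minimal sketch of suggestion (2) above, i.e. a struct holding one shared mongo.Client that all gorilla/mux handlers reuse; the route, handler body, and URI are placeholders, not code from the thread:

```go
package main

import (
	"context"
	"net/http"
	"time"

	"github.com/gorilla/mux"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// app holds shared state that every handler reuses, instead of a global.
type app struct {
	client *mongo.Client
}

// barHandler is a placeholder handler showing how the shared client is used.
func (a *app) barHandler(w http.ResponseWriter, r *http.Request) {
	coll := a.client.Database("foo").Collection("bar")
	if _, err := coll.EstimatedDocumentCount(r.Context()); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://127.0.0.1:27017"))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(context.Background())

	a := &app{client: client}

	r := mux.NewRouter()
	r.HandleFunc("/bar", a.barHandler)
	http.ListenAndServe("0.0.0.0:3333", r)
}
```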
{
"code": "",
"text": "Hi Divjot,\nI’m using the first method you said (a global Mongo client) but I got the connections going higher Even if I set a maximum to the pool it goes beyond it and I think this because the connections don’t get returned to the pool and I can’t find a way to close or return to the pool the connection used by functions.i have wrote a topic with more info if you can help me.\nthanks in advance",
"username": "Omar_Dawah"
}
] | Best way to refactor connection overhead from my handler functions? | 2020-05-04T20:57:30.037Z | Best way to refactor connection overhead from my handler functions? | 3,456 |
null | [
"flutter"
] | [
{
"code": "",
"text": "I want to authenticate the user anonymously in flutter. Kindly help, document is insufficient for the beginner in flutter and realm",
"username": "Zubair_Rajput"
},
{
"code": "final appConfig = AppConfiguration(APP_ID);\nfinal app = App(appConfig);\nfinal anonCredentials = Credentials.anonymous();\nawait app.logIn(anonCredentials);\n",
"text": "Hi @Zubair_Rajput!\nYou first have to allow the anonymous authentication provider on your App Service at http://realm.mongodb.com. You can see here how to do it.\nThen from the Flutter app you can create anonymous credentials and login with them. Follow this document.You can also see our tests at the repoThe code should look like:",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "In react native I know how to do that but in flutter I am new and you can see the screenshot.\nI am setting up app configuration but I am having errors. Kindly help.\nI can also show my full code If you required.\nScreenshot 2023-07-22 145148641×546 18.1 KB\n",
"username": "Zubair_Rajput"
},
{
"code": "",
"text": "Did you try the code that I sent above? Just set the correct appId in AppConfiguration and run it. This should work.\nWhat errors do you receive?",
"username": "Desislava_St_Stefanova"
},
{
"code": "*error message*: The instance member of 'appId' can't \nbe accessed in an initializer. \nTry replacing the reference\nto the instance member with a different expression.\nimport 'package:flutter/material.dart';\nimport 'package:realm_dart/realm.dart';\n\nimport 'package:cric_orion/src/constants/Colors.dart';\nimport 'package:go_router/go_router.dart';\n\nclass SettingScreen extends StatefulWidget {\n const SettingScreen({super.key});\n\n @override\n State<SettingScreen> createState() {\n return _SettingScreen();\n }\n}\n\nclass _SettingScreen extends State<SettingScreen> {\n String appId = \"vishwa-sports-b2x-xxxxx\";\n final appConfig = AppConfiguration(appId);\n final app = App(appConfig);\n final anonCredentials = Credentials.anonymous();\n \n\n @override\n Widget build(context) {\n return Scaffold(\n appBar: AppBar(\n automaticallyImplyLeading: false,\n centerTitle: true,\n leading: IconButton(\n padding: EdgeInsets.zero,\n color: ColorList.five,\n onPressed: () => context.go('/profile'),\n icon: const Icon(Icons.chevron_left),\n ),\n leadingWidth: 50,\n elevation: 0,\n title: const Text(\n \"Settings\",\n style: TextStyle(color: ColorList.five),\n ),\n backgroundColor: ColorList.sbYellow,\n ),\n body: Column(\n children: [\n const CartContainer(),\n const CartContainer(),\n const CartContainer(),\n const CartContainer(),\n const CartContainer(),\n Expanded(\n child: Align(\n alignment: Alignment.bottomCenter,\n child: Container(\n width: 300,\n height: 50,\n margin: const EdgeInsets.only(bottom: 30),\n child: ElevatedButton(\n style: ButtonStyle(\n foregroundColor:\n MaterialStateProperty.all<Color>(ColorList.five),\n backgroundColor: MaterialStateProperty.all<Color>(\n ColorList.sbYellow,\n ),\n shape: MaterialStateProperty.all<RoundedRectangleBorder>(\n RoundedRectangleBorder(\n borderRadius: BorderRadius.circular(25),\n ),\n ),\n ),\n onPressed: () => null,\n child: const Text(\n \"Logout\",\n style: TextStyle(\n fontSize: 15,\n ),\n ),\n ),\n ),\n ),\n )\n ],\n ),\n );\n }\n}\n\nclass CartContainer extends StatelessWidget {\n const CartContainer({super.key});\n\n @override\n Widget build(context) {\n return Container(\n height: 70,\n decoration: const BoxDecoration(\n border: Border(\n bottom: BorderSide(color: ColorList.grey, width: .5),\n ),\n ),\n child: GestureDetector(\n behavior: HitTestBehavior.translucent,\n onTap: () => {\n context.go('/profile/setting/feedback'),\n },\n child: Row(\n children: [\n Expanded(\n flex: 1,\n child: Container(\n height: double.infinity,\n child: Center(\n child: Container(\n height: 40,\n width: 40,\n padding: const EdgeInsets.all(5),\n decoration: BoxDecoration(\n color: ColorList.lightGrey,\n borderRadius: BorderRadius.circular(40),\n ),\n child: const Icon(\n Icons.all_inclusive,\n size: 20,\n ),\n ),\n ),\n ),\n ),\n Expanded(\n flex: 3,\n child: Container(\n height: double.infinity,\n alignment: Alignment.centerLeft,\n margin: const EdgeInsets.only(left: 5),\n child: const Text(\n \"Re-calibrate Stricker\",\n style: TextStyle(fontSize: 16, color: ColorList.greySecond),\n ),\n ),\n )\n ],\n ),\n ),\n );\n }\n}\n\n",
"text": "Hello Desislava, thanks for the replies,\nYes I tried to write the above line you suggested. kindly help\nScreenshot 2023-07-22 221919744×754 36.4 KB\nIn the above screenshot you can see red zigzag error line. When I hover over it it says",
"username": "Zubair_Rajput"
},
{
"code": "realm_dartrealmrealm_dartrealmflutter pub remove realm_dartflutter pub add realmimport 'package:realm/realm.dart\n",
"text": "@Zubair_Rajput, I’m not sure what exactly could be the issue with the red lines, but I see from the imported packages that you have imported the dart package realm_dart.\nFor a Flutter app you should import realm package. You should remove realm_dart and add realm realm | Flutter Packageflutter pub remove realm_dart\nflutter pub add realmThen import this.",
"username": "Desislava_St_Stefanova"
},
{
"code": "\nString appId = \"safexxxtraxx-xxxxxx\";\nfinal appConfig = AppConfiguration(appId);\n\n// APP ENTRY POINT\nclass MyApp extends StatelessWidget {\n const MyApp({super.key});\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n title: 'Flutter Demo',\n theme: ThemeData(\n colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),\n useMaterial3: true,\n ),\n home: const MyHomePage(title: 'Flutter Demo Home Page'),\n );\n }\n}\n\n",
"text": "Hello Mam,Actually placing appId and App AppConfiguration at the top solve the problemThanks for the supports",
"username": "Zubair_Rajput"
},
{
"code": "",
"text": "G’day, @Zubair_Rajput ,Glad to know your issue was resolved. Could you please confirm if you also changed the package names?I look forward to your response.Thanks,\nhenna",
"username": "henna.s"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to authenticate anonymously in flutter app with realm | 2023-07-22T07:20:51.326Z | How to authenticate anonymously in flutter app with realm | 701 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "$searchknnBetaegVector$matchfile_name$projectlet text = await Text.aggregate([\n {\n $search: {\n index: \"default\",\n knnBeta: {\n vector: resp.data.data[0].embedding,\n path: \"egVector\",\n k: 10,\n },\n },\n },\n {\n $match: {\n file_name: \"Test.txt\",\n },\n },\n {\n $project: {\n egVector: 0,\n },\n },\n])\n$searchkk=20$matchfile_name$searchfile_name$searchegVectorfile_name",
"text": "I’m currently using MongoDB’s $search with a knnBeta pipeline for a k-nearest neighbours search to retrieve the 10 most similar text documents based on their egVector field. Then, I apply a $match pipeline to filter the texts by a specific file_name , “Test.txt”, and finally a $project pipeline to return the information that I need. Here’s my current query:The issue I’m running into is that if the “Test.txt” document isn’t a part of the initial 10 documents retrieved by $search , it’s not considered in my query, even when it might exist in my database. This situation occurs when “Test.txt” would be part of the top-k returned documents if I were to run the query with a larger k parameter (like k=20 ). However, I’m only interested in getting the top 10 results for this specific file name. As such, I’m trying to figure out how I can apply a $match filter on file_name before running $search , so that I consider only the documents where file_name equals “Test.txt”. However, I have found out that $search needs to be the first operator in a MongoDB aggregation pipeline with the Full-Text Search feature. Given this, how can I modify my query so that I return the top 10 most similar documents (based on their egVector field) where file_name is equal to “Test.txt”? Is there an alternative approach to this problem? Any help would be much appreciated!",
"username": "Josh_Sang_Hoon_Cho"
},
{
"code": "$searchfilterknnBetafilter≤2021egVectorvector3textphrase",
"text": "Hi @Josh_Sang_Hoon_Cho - Welcome to the community.Thanks for providing the $search query you’ve attempted initially.Have you tried using the filter option noted in the knnBeta operator documentation? There’s a filter example in the documentation too. As per the documentation for the example, I believe it somewhat matches what you’ve described for what you are expecting:The following query filters the documents for cheese produced before or in (≤ ) the year 2021 , then searches the egVector field in the filtered documents for vector dimensions, and requests up to 3 nearest neighbors in the results.i.e. Filtering first then performing the vector search.You could try it with the text or phrase operator but let me know if those do not work for you.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
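As a hedged sketch (not code from the thread), the filter option described above could also be tried with the phrase operator mentioned in the reply; the index, field and collection names come from the question, and if the filter does not accept phrase for this field's mapping, the regex approach in the following post is the fallback.

// Hedged sketch: knnBeta filter using the phrase operator suggested above.
// "default", "egVector", "file_name" and Text come from the question.
let text = await Text.aggregate([
  {
    $search: {
      index: "default",
      knnBeta: {
        vector: resp.data.data[0].embedding,
        path: "egVector",
        k: 10,
        filter: {
          phrase: {
            query: "Test.txt",
            path: "file_name",
          },
        },
      },
    },
  },
  { $project: { egVector: 0 } },
]);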
{
"code": " let text = await Text.aggregate([\n {\n $search: {\n index: \"default\",\n knnBeta: {\n vector: resp.data.data[0].embedding,\n path: \"egVector\",\n k: 10,\n filter: {\n regex: {\n query: \"TEST_FILE_NAME.txt\",\n path: \"file_name\",\n allowAnalyzedField: true,\n },\n },\n },\n },\n },\n {\n $project: {\n egVector: 0,\n },\n },\n ]);\n",
"text": "Good I got it work like this. Thank you very much!",
"username": "Josh_Sang_Hoon_Cho"
},
{
"code": "filter",
"text": "Thanks for posting your updated aggregation with the filter option used Glad to hear it works for you.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to $match before $search | 2023-07-23T06:50:53.923Z | How to $match before $search | 656 |
null | [
"react-native",
"react-js"
] | [
{
"code": "enum ProgressDirection {\n Download = 'download',\n Upload = 'upload',\n}\n\nenum ProgressMode {\n ReportIndefinitely = 'reportIndefinitely',\n ForCurrentlyOutstandingWork = 'forCurrentlyOutstandingWork',\n}\n\nconst AppSync = () => {\n const app = useApp();\n const realm = useRealm();\n\n useEffect(() => {\n if (__DEV__) {\n Realm.App.Sync.setLogLevel(app, 'debug');\n }\n\n const progressNotificationCallback = (transferred, transferable) => {\n // Convert decimal to percent with no decimals\n // (e.g. 0.6666... -> 67)\n const percentTransferred = parseFloat((transferred / transferable).toFixed(2)) * 100;\n console.log('percentTransferred', percentTransferred);\n };\n\n // Listen for changes to connection state\n realm.syncSession?.addProgressNotification(\n ProgressDirection.Download,\n ProgressMode.ForCurrentlyOutstandingWork,\n progressNotificationCallback\n );\n // Remove the connection listener when component unmounts\n return () => {\n realm.syncSession?.removeProgressNotification(progressNotificationCallback);\n if (!realm.isClosed) realm.close();\n };\n }, []);\n\n ...\n}\n",
"text": "I have integrated realm flexible sync which download lots of data from realm database after user login and works fine but i want to show progress screen until all data are downloaded on first launch.As per documentation i have integrated to get progress notification but it only notify when 100% downloaded which is not wise for our app as we want to show actual progress and once download complete start using the app.Please find below code that i implemented for same as per documentation.percentTransferred log showed only when percentTransferred to 100%My aim is to have progress screen until data get downloaded and once downloaded then render the app part only after it.",
"username": "Hardik_Chavda"
},
{
"code": "",
"text": "Hey @Hardik_Chavda,As noted in this section of the docs, download progress notifications are not yet supported for flexible sync, but support for this is coming soon.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Okay but is there is workaround for this to show progress screen?\nFor now it’s fine to have random progress bar but hide progress screen and load the app part once all data available.",
"username": "Hardik_Chavda"
},
{
"code": "downloadAllServerChanges",
"text": "If you don’t need granular progress, you can use the downloadAllServerChanges method to be notified when the session has caught up.",
"username": "Kiro_Morkos"
},
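A hedged sketch (not from the thread) of how the downloadAllServerChanges suggestion above might be wired into the React Native component from the question; the state handling and the SkeletonScreen component are assumptions.

import { useEffect, useState } from 'react';

const AppSync = () => {
  const realm = useRealm(); // from @realm/react, as in the question
  const [initialSyncDone, setInitialSyncDone] = useState(false);

  useEffect(() => {
    const waitForInitialSync = async () => {
      try {
        // Resolves once the client has caught up with the server.
        await realm.syncSession?.downloadAllServerChanges();
      } finally {
        setInitialSyncDone(true); // show the app even if the wait fails
      }
    };
    waitForInitialSync();
  }, []);

  // Show a placeholder until the first download has finished.
  if (!initialSyncDone) return <SkeletonScreen />; // SkeletonScreen is assumed

  // ...render the real app here
};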
{
"code": "",
"text": "Thank you at least i can use this and show skeleton screen.Obviously not best solution but what i can do if as download progress notifications are not yet supported for flexible sync.",
"username": "Hardik_Chavda"
}
] | React Native Realm flexible sync progress screen | 2023-07-20T10:12:42.051Z | React Native Realm flexible sync progress screen | 757 |
null | [
"mongodb-shell",
"database-tools",
"backup"
] | [
{
"code": "2023-07-21T06:05:16.168-0400 The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}\n2023-07-21T06:05:16.168-0400 building a list of collections to restore from /home/zdx/backup dir\n2023-07-21T06:05:16.169-0400 **reading metadata for ir.backup from** /home/zdx/backup/backup.metadata.json\n2023-07-21T06:05:16.170-0400 **restoring to existing collection ir.backup without dropping**\n2023-07-21T06:05:16.170-0400 restoring ir.backup from /home/zdx/backup/backup.bson\n2023-07-21T06:05:16.181-0400 continuing through error: E11000 duplicate key error collection: ir.backup index: _id_ dup key: { _id: ObjectId('64b5a5865e23e338c4fdfe27') }\n2023-07-21T06:05:16.181-0400 continuing through error: E11000 duplicate key error collection: ir.backup index: _id_ dup key: { _id: ObjectId('64b5a643a374f41eff1c897e') }\n2023-07-21T06:05:16.181-0400 finished restoring ir.backup (0 documents, 2 failures)\n2023-07-21T06:05:16.181-0400 no indexes to restore for collection ir.backup\n2023-07-21T06:05:16.181-0400 0 document(s) restored successfully. 2 document(s) failed to restore.\nCurrent Mongosh Log ID: 64ba59fb42ffe11856bda7cb\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1\nUsing MongoDB: 6.0.8\nUsing Mongosh: 1.10.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n------\n The server generated these startup warnings when booting\n 2023-07-21T05:52:46.578-04:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n 2023-07-21T05:52:46.579-04:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'\n 2023-07-21T05:52:46.579-04:00: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. We suggest setting it to 'never'\n 2023-07-21T05:52:46.579-04:00: vm.max_map_count is too low\n\n",
"text": "I have a problem restoring data using mongorestore:\nFirst, I used this command to export the data and everything seems to work fine:\nmongodump -h 127.0.0.1 -d backup -o ~/\nThen, I executed this command many times to restore the data:\nmongorestore -h 127.0.0.1 -d backup -dir ~/backup/\nBut got this error:My problem is, I have specified the “backup” database, but the data is restored to the “ir” database, why?Operating environment:\nOS version: CentOS Linux release 7.2.1511 (Core)\nMongodb :",
"username": "Avalon_Zhou"
},
{
"code": "-dirmongorestoremongorestore -h 127.0.0.1 -d backup --dir ~/test/ reading metadata for backup.sample from ........2023-07-21T06:05:16.181-0400 continuing through error: E11000 duplicate key error collection: ir.backup index: _id_ dup key: { _id: ObjectId('64b5a5865e23e338c4fdfe27') }\n2023-07-21T06:05:16.181-0400 continuing through error: E11000 duplicate key error collection: ir.backup index: _id_ dup key: { _id: ObjectId('64b5a643a374f41eff1c897e') }\nmongorestore",
"text": "Hi @Avalon_Zhou and welcome to MongoDB community forums!!My problem is, I have specified the “backup” database, but the data is restored to the “ir” database, why?Firstly, thank you for recording this behaviour and reporting the error. I tried to reproduce the error using the command mentioned, and I am seeing the similar behaviour. The reason for that is the -dir used in the mongorestore command, is treated as -d ir which overwrites the database name as ir.\nThis has been reported internally and you can track any progress / updates with the following ticket TOOLS:3357.\nOne possible workaround here is to use the restore command as:\nmongorestore -h 127.0.0.1 -d backup --dir ~/test/ which would restore as:\n reading metadata for backup.sample from ........Please test this workaround on a test environment to see if it works for you before proceeding to do it on a production environment.Further, regarding the duplicate key error seen in the above logs is because mongorestore would only perform the insert and would not perform any update operations. Please refer to the documentation on mongorestore Inserts Only for more informationYou can refer to the response by @Prasad_Saya, our community user where he has described the same with a very efficient example.Let us know of you have further questions.Regards\nAasawari",
"username": "Aasawari"
},
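A hedged follow-up sketch in mongosh, using the database and collection names from the log above: check where the documents actually landed and, only if that copy can safely be discarded, drop the wrongly targeted collection before re-running the restore with --dir.

// Names come from the log above (data landed in ir.backup instead of backup.backup).
db.getSiblingDB("ir").backup.countDocuments()   // see what ended up in the wrong database
db.getSiblingDB("ir").backup.drop()             // only if that copy can safely be discarded
// after re-running: mongorestore -h 127.0.0.1 -d backup --dir ~/backup/
db.getSiblingDB("backup").backup.countDocuments()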
{
"code": "",
"text": "Thanks for your reply. This little problem has been bugging me all day. But luckily it was resolved in the end. ",
"username": "Avalon_Zhou"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to fix dup key error during mongorestore | 2023-07-21T01:14:02.960Z | How to fix dup key error during mongorestore | 643 |
null | [
"aggregation",
"performance"
] | [
{
"code": " \"idTracciato\": \"458\"\", \"count\": 23275575\n \"idTracciato\": \"488\"\", \"count\": 1207470\n \"idTracciato\": \"500\"\", \"count\": 1121987\n \"idTracciato\": \"511\"\", \"count\": 956498\n \"idTracciato\": \"456\"\", \"count\": 789206\n \"idTracciato\": \"475\"\", \"count\": 520500\n \"idTracciato\": \"304\"\", \"count\": 18014\n \"idTracciato\": \"207\"\", \"count\": 15760\n \"idTracciato\": \"107\"\", \"count\": 12613\n \"idTracciato\": \"198\"\", \"count\": 9457\n \"idTracciato\": \"411\"\", \"count\": 7166\n \"idTracciato\": \"100\"\", \"count\": 6304\n \"idTracciato\": \"410\"\", \"count\": 5474\n \"idTracciato\": \"462\"\", \"count\": 3587\n \"idTracciato\": \"132\"\", \"count\": 3156\n \"idTracciato\": \"117\"\", \"count\": 3154\n \"idTracciato\": \"102\"\", \"count\": 3152\n \"idTracciato\": \"232\"\", \"count\": 3152\n \"idTracciato\": \"177\"\", \"count\": 3152\n \"idTracciato\": \"210\"\", \"count\": 3152\n \"idTracciato\": \"461\"\", \"count\": 2594\n \"idTracciato\": \"482\"\", \"count\": 2020\ndb.flussi_dettagli.aggregate([\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"idTracciato\" : Long(\"458\")\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"dataCreazione\" : {\n\t\t\t\t\t\t\"$lte\" : ISODate(\"2023-04-05T13:46:00.200+02:00\")\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"stato\" : {\n\t\t\t\t\t\t\"$ne\" : \"CACHED\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"metadata.annoRiferimento\" : Long(\"2023\"),\n\t\t\t\t\t\"metadata.periodoRiferimento\" : \"01\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$addFields\" : {\n\t\t\t\t\t\"validazioniReduce\" : {\n\t\t\t\t\t\t\"$reduce\" : {\n\t\t\t\t\t\t\t\"input\" : \"$validazioni\",\n\t\t\t\t\t\t\t\"initialValue\" : {\n\t\t\t\t\t\t\t\t\"dataAzione\" : 0\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"in\" : {\n\t\t\t\t\t\t\t\t\"$cond\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : [ \"$$this.dataAzione\", \"$$value.dataAzione\" ]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"$$this\",\n\t\t\t\t\t\t\t\t\t\"$$value\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"flagVersioneMassima\" : true\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$addFields\" : {\n\t\t\t\t\t\"keyNoStato\" : \"$key\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$project\" : {\n\t\t\t\t\t\"keyNoStato.stato\" : 0\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$project\" : {\n\t\t\t\t\t\"_id\" : \"$keyNoStato\",\n\t\t\t\t\t\"data\" : 1,\n\t\t\t\t\t\"metadata\" : 1,\n\t\t\t\t\t\"progressivo\" : 1,\n\t\t\t\t\t\"progressivoMassimo\" : 1,\n\t\t\t\t\t\"_oid\" : \"$_id\",\n\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\"validazioni\" : 1,\n\t\t\t\t\t\"validazioniReduce\" : 1,\n\t\t\t\t\t\"derived\" : 1,\n\t\t\t\t\t\"key\" : 1,\n\t\t\t\t\t\"tempDiscard\" : 1,\n\t\t\t\t\t\"idUtente\" : 1,\n\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\"idFlusso\" : 1,\n\t\t\t\t\t\"component_1\" : 1,\n\t\t\t\t\t\"component_2\" : 1,\n\t\t\t\t\t\"component_3\" : 1,\n\t\t\t\t\t\"component_4\" : 1\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$skip\" : 0\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$limit\" : 16\n\t\t\t}\n\t\t]\n)\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"metadata.annoRiferimento\" : Long(\"2024\"),\n\t\t\t\t\t\"metadata.periodoRiferimento\" : \"01\"\n\t\t\t\t}\n\t\t\t},\n",
"text": "Hi,i have a problem with performance of query; i have a collection with 81 millions of records, where the fields that indicating a macro-category is “idTracciato”my queries have as last step “$limit” operator with 16 records, so if I have more than 16 records as a result of the query, mongo replies in 1.4 seconds; otherwise if i have no records mongo don’t reply ( after 10 minutes )the query that have result isif i add other stage of match like belowdon’t reply…please help me, i’m in trouble",
"username": "ilmagowalter"
},
{
"code": "",
"text": "Someone has ideas for this problem ?",
"username": "ilmagowalter"
},
{
"code": "",
"text": "Have you run an .explain on the two scenarios?What indexes are on your collection?",
"username": "John_Sewell"
},
{
"code": " \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"flagVersioneMassima\": {\n \"$eq\": true\n }\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"idTracciato\": 1,\n \"dataCreazione\": 1,\n \"stato\": 1,\n \"flagVersioneMassimaVERSIONED\": 1\n },\n \"indexName\": \"idx_max_version_VERSIONED_with_data\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"idTracciato\": [],\n \"dataCreazione\": [],\n \"stato\": [],\n \"flagVersioneMassimaVERSIONED\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"idTracciato\": [\n \"[458, 458]\"\n ],\n \"dataCreazione\": [\n \"(true, new Date(1680695160200)]\"\n ],\n \"stato\": [\n \"[MinKey, \\\"CACHED\\\")\",\n \"(\\\"CACHED\\\", MaxKey]\"\n ],\n \"flagVersioneMassimaVERSIONED\": [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n },\n",
"text": "the explain for scenario 1 ( with data aas result )with the second scenario i ran .explain() and after 600 seconds i did have a response i have others index in collection but not with “metadata.periodoRiferimento” e “metadata.annoRiferimento”but … this query is used to populate a data grid, and column can essere different between idTracciato x and idTracciato yso i can add many filtersthe problem happen when with fthe filters used i don’t have record to show",
"username": "ilmagowalter"
},
{
"code": "",
"text": "Can you put up the query that fails as well to check exactly what that looks like?",
"username": "John_Sewell"
},
{
"code": "db.flussi_dettagli.aggregate([\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"idTracciato\" : Long(\"458\")\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"dataCreazione\" : {\n\t\t\t\t\t\t\"$lte\" : ISODate(\"2023-04-05T13:46:00.200+02:00\")\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"stato\" : {\n\t\t\t\t\t\t\"$ne\" : \"CACHED\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"metadata.annoRiferimento\" : Long(\"2024\"),\n\t\t\t\t\t\"metadata.periodoRiferimento\" : \"01\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$addFields\" : {\n\t\t\t\t\t\"validazioniReduce\" : {\n\t\t\t\t\t\t\"$reduce\" : {\n\t\t\t\t\t\t\t\"input\" : \"$validazioni\",\n\t\t\t\t\t\t\t\"initialValue\" : {\n\t\t\t\t\t\t\t\t\"dataAzione\" : 0\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"in\" : {\n\t\t\t\t\t\t\t\t\"$cond\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : [ \"$$this.dataAzione\", \"$$value.dataAzione\" ]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"$$this\",\n\t\t\t\t\t\t\t\t\t\"$$value\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"flagVersioneMassima\" : true\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$addFields\" : {\n\t\t\t\t\t\"keyNoStato\" : \"$key\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$project\" : {\n\t\t\t\t\t\"keyNoStato.stato\" : 0\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$project\" : {\n\t\t\t\t\t\"_id\" : \"$keyNoStato\",\n\t\t\t\t\t\"data\" : 1,\n\t\t\t\t\t\"metadata\" : 1,\n\t\t\t\t\t\"progressivo\" : 1,\n\t\t\t\t\t\"progressivoMassimo\" : 1,\n\t\t\t\t\t\"_oid\" : \"$_id\",\n\t\t\t\t\t\"stato\" : 1,\n\t\t\t\t\t\"validazioni\" : 1,\n\t\t\t\t\t\"validazioniReduce\" : 1,\n\t\t\t\t\t\"derived\" : 1,\n\t\t\t\t\t\"key\" : 1,\n\t\t\t\t\t\"tempDiscard\" : 1,\n\t\t\t\t\t\"idUtente\" : 1,\n\t\t\t\t\t\"idTracciato\" : 1,\n\t\t\t\t\t\"idFlusso\" : 1,\n\t\t\t\t\t\"component_1\" : 1,\n\t\t\t\t\t\"component_2\" : 1,\n\t\t\t\t\t\"component_3\" : 1,\n\t\t\t\t\t\"component_4\" : 1\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$skip\" : 0\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"$limit\" : 16\n\t\t\t}\n\t\t]\n).explain();\ndb.flussi_dettagli.createIndex(\n {\n \"idTracciato\": 1,\n \"dataCreazione\": 1,\n \"stato\": 1,\n \"flagVersioneMassimaVERSIONED\": 1,\n \"metadata.periodoRiferimento\": 1,\n \"metadata.annoRiferimento\": 1\n },\n {\n name: \"idx_max_version_VERSIONED_with_data_periodo\"\n }\n);\n",
"text": "the query isis stuck in runningnow, i create a new indexand now the query is fast!!!the question is… every time i have to add a new match condition… i have to create an index ? if this, i will have many many indexes… is a problem ?",
"username": "ilmagowalter"
},
{
"code": "{\n seardchField1:'A',\n seardchField2:'B',\n seardchField3:'C',\n}\n{\n searchData:[\n {\n fieldName:'seardchField1',\n fieldValue:'A'\n },\n {\n fieldName:'seardchField2',\n fieldValue:'B'\n },\n {\n fieldName:'seardchField3',\n fieldValue:'C'\n },\n ]\n}\ndb.getCollection(\"Test\").createIndex({'searchData.fieldName', 1, 'searchData.fieldValue':1})\ndb.getCollection(\"Test\").find({\n searchData: {\n $elemMatch: {\n fieldName: \"seardchField3\", \n fieldValue: \"A\"\n }\n }\n}\n)\n",
"text": "Comparing the two queries I can only see a difference of changing the VALUE that’s being searched for not the field, assuming that it’s an extra field you’re searching for…Yes…if you add a new property that you want to filter on you’ll need to add a new index to cover that field.This may not be the best design approach to storing the data however, if you have a bunch of fields you want to be able to index for searching, i.e.Where you’ll need an index on each one, you could do this instead:Add an index on both sub-elementsand then this:Will hit the index.Take a look here:Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.Indexes are expensive in terms of memory so if you have lots of search requirements then it’s worth thinking about how to store the data to align with the search needs.\nAs an example we have a huge amount of documents in our system and create a common search area within documents to store data in an easily indexable format.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Yes, there is an error in my paste … first query have to be without match stage on metadata.periodoRiferimento and metadata.annoRiferimentoRegards pattern i will read the article but i aren’t surr that is appliable in my application, because apart some specific fields, the data structure can be different between documentsI understand that i must have indexes, but cause i can have filters on various combination of specific fields. Have i to create one index for all combinations of filters fields ?",
"username": "ilmagowalter"
},
{
"code": "",
"text": "Indexes are sensitive to order of fields as well as direction (more important when sorting etc)But if you have an index on A,B,C then you cannot use that index (very efficiently) to search on just C or B, but you could use it to search for A only, or A and B or A and B and CI didn’t realise this until I just looked over the index documentation but Mongo will use index intersection:So worth a read on that if you have multiple fields to index.This may also be worth a read:",
"username": "John_Sewell"
},
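A hedged mongosh illustration of the prefix rule described above, using hypothetical fields a, b and c on a scratch collection.

// Hypothetical fields a, b, c; the compound index can serve queries on its prefixes.
db.example.createIndex({ a: 1, b: 1, c: 1 })

db.example.find({ a: 5 }).explain()        // uses the index (prefix { a })
db.example.find({ a: 5, b: 7 }).explain()  // uses the index (prefix { a, b })
db.example.find({ c: 9 }).explain()        // cannot use it efficiently: c alone is not a prefix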
{
"code": "",
"text": "Thanks John for the clarification ",
"username": "ilmagowalter"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [Performance] Query performance issue with 0 records as resultset | 2023-07-18T23:35:27.413Z | [Performance] Query performance issue with 0 records as resultset | 817 |
null | [
"cxx"
] | [
{
"code": "",
"text": "Hello, I’m learning to use MongoDB in the C++ language. Even though I dynamically link the .lib files to my project, when I compile the project, mongocxx.dll, bson.dll, and other DLL files are being generated, and I can’t run my application independently. I’ve tried various approaches, but I couldn’t manage to include them properly in my project. I would really appreciate it if someone knowledgeable could assist me with this issue. Thank you!",
"username": "john_spear"
},
{
"code": "",
"text": "Hi @john_spear,Here’s a tutorial that should guide you with adding MongoDB C++ driver in your project - Getting Started with MongoDB and C++ | MongoDBHope this helps.",
"username": "Rishabh_Bisht"
}
] | How to add mongo driver dynamic linker in c++ | 2023-07-22T19:06:38.671Z | How to add mongo driver dynamic linker in c++ | 538 |
null | [] | [
{
"code": "",
"text": "I am from data analytics background working in telecom domain. We used to get large volume of CDR data from CGSN,SGSN,IOT all these data need to be ingested ,processed and analyzed. I have to choose a database between mongodb and Druid which one will be better for my requirements and why its better. Please experts advice on this",
"username": "karthikeyan"
},
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "Charles_Karen"
},
{
"code": "",
"text": "thanks for your response ",
"username": "karthikeyan"
}
] | In a big data analytical space which one is the best database between MongoDb and Druid | 2023-07-11T04:29:05.551Z | In a big data analytical space which one is the best database between MongoDb and Druid | 683 |
null | [] | [
{
"code": "",
"text": "Hello,We are trying to implement a data as a service component based on MongoDB Atlas Data Federation. One of the considerations for the same is ability to automate the creation of the federated store and databases/collections within it. The documentation here Manage a Federated Database Instance — MongoDB Atlas talks about a Data Federation CLI that seems to fit our requirements. But we are not sure of how/where to get the CLI installed from. If anyone has used the same please help.Thanks,\nVinod",
"username": "Vinod_Nair1"
},
{
"code": "db.runCommand()mongoshmongocli atlas datalakeatlascli dataLakes",
"text": "Hi @Vinod_Nair1 - Welcome to the community talks about a Data Federation CLIThanks for raising this! I believe this might be a typo but I am just double checking with the team. The commands on the page are related to the db.runCommand() command. You can use a MongoDB driver(s) or mongosh shell to run those commands.One of the considerations for the same is ability to automate the creation of the federated store and databases/collections within it.I’m not sure if this suits your use case but there is the following that you could possibly use to automate the creation of federated instances:Let me know if this works for you or not.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "The documentation here Manage a Federated Database Instance — MongoDB Atlas talks about a Data Federation CLI that seems to fit our requirements. But we are not sure of how/where to get the CLI installed from.Hi @Vinod_Nair1 - Just confirming this was a typo so thank you for raising it. A fix for the typo should be up soon.In the mean time, please check out the atlas cli 1.9.0 release notes which mention:Adds the following new commands to manage data federation:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hey @Vinod_Nair1Quickly following up here, in order to do what you’re saying you will need to “set the storage” config. I don’t believe this is currently available in the CLI.If you look at the API Reference here which undergirds the CLI, you will see that you need to set a “storage” object in the request to set the storage configuration as part of the request. MongoDB Atlas Administration APIThe programmatic experience for Data Federation is somewhat evolving, but all of this is currently supported in the Terraform experience here: Terraform RegistryFeel free to reach out to me if I can help further at [email protected],\nBen",
"username": "Benjamin_Flast"
},
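A hedged sketch of the shape of the storage object Benjamin refers to, with made-up store, database and collection names; the exact field names should be confirmed against the Atlas Administration API and Terraform references linked above.

// Illustrative only; names are placeholders and the schema should be verified
// against the Atlas Administration API / Terraform docs linked above.
const storage = {
  stores: [
    {
      name: "atlasClusterStore",
      provider: "atlas",
      clusterName: "Cluster0",
      projectId: "<project-id>",
    },
  ],
  databases: [
    {
      name: "federated_db",
      collections: [
        {
          name: "orders",
          dataSources: [
            { storeName: "atlasClusterStore", database: "sales", collection: "orders" },
          ],
        },
      ],
    },
  ],
};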
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to install and use the Data Federation CLI | 2023-06-30T11:25:37.729Z | How to install and use the Data Federation CLI | 502 |
null | [
"database-tools",
"backup"
] | [
{
"code": "database-tools",
"text": "We are pleased to announce version 100.7.4 of the MongoDB Database Tools.This release fixes issues with mongorestore that ommitted all namespaces containing “admin” when\nrestoring to an Atlas Proxy Cluster. This release also fixes an issue with mongodump where the\nprocess failed against clusters using Atlas Online Archive.The Database Tools are available on the MongoDB Download Center. Installation\ninstructions and documentation can be found on docs.mongodb.com/database-tools. Questions and inquiries can be asked on the MongoDB Developer Community Forum. Please make sure to tag forum posts with database-tools. Bugs and feature requests can be reported in the Database Tools Jira where a list of current issues can be found.",
"username": "Johnny_DuBois"
},
{
"code": "",
"text": "What’s the difference between mongodb cli and the mongodb Atlas cli?",
"username": "Jack_Woehr"
},
{
"code": "mongocliatlas",
"text": "mongocli is for managing deployments connected to Cloud Manager or Ops Manager.atlas (atlas cli) is for managing atlas: databases, federation, atlas security, backups, private endpoints etc…",
"username": "chris"
},
{
"code": "mongocli",
"text": "Thanks, @chris … thought I saw that the mongocli also managed Atlas deployments.",
"username": "Jack_Woehr"
}
] | Database Tools 100.7.4 Released | 2023-07-21T19:20:34.927Z | Database Tools 100.7.4 Released | 704 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "Hi ,Just wanted to ask regarding mongorestore error that i encountered ,\" error connecting to host: could not connect to server: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: 127.0.0.1:37027, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(127.0.0.1:37027[-61]) incomplete read of message header: EOF }, ] }\"command usedmongorestore -h 127.0.0.1 --gzip --archive=/home/mongodbadm/advisory_dump.gz --port 37027 -u testuser -p ‘testpassw’ --authenticationDatabase adminthe dump only contains a specific database.",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "Hi @Daniel_Inciong,It looks like no process is listenning on 127.0.0.1:37027…Can you verify that this is actually the correct port and host?Try a mongo shell connection to verify…Additionally, what is the mongorestore version and the server version on 37027 port?Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Daniel_Inciong how did you solve this?",
"username": "Progress_Nwimue_Lekara"
}
] | Mongorestore error encountered server selection timeout | 2021-05-24T04:45:49.010Z | Mongorestore error encountered server selection timeout | 10,978 |
null | [] | [
{
"code": "",
"text": "Hello,Let’s say I have 3 high-performant physical servers. Would it be a valid deployment (see below)?\n3 powerful physical hosts hold 3 replica-sets:\nHost1: RS1 primary, RS2 secondary, RS3 secondary\nHost2: RS1 secondary, RS2 primary, RS3 secondary\nHost3: RS1 secondary, RS2 secondary, RS3 primaryI really don’t want to setup a dedicated server for each Mongo daemon.It would be awesome if someone could share experience.Many thanks in advance!",
"username": "Petr_Makarov"
},
{
"code": "",
"text": "Why do you want 3 replica set?Don’t forget that all members of a RS handle the same write load.",
"username": "steevej"
},
{
"code": "",
"text": "Simple answer is yes. You can do that.",
"username": "Kobe_W"
},
{
"code": "",
"text": "The idea is to have one RS per each app. I want to know if there is someone who deploys in this way.",
"username": "Petr_Makarov"
},
{
"code": "",
"text": "Thanks, I know that I can but I’m wondering to know whether there is someone who really uses this kind of configuration.",
"username": "Petr_Makarov"
},
{
"code": "",
"text": "that is of course not recommended for production deployment.A db server can consume a lot of resources, so why put so many servers on the same node and let them fight with each other?It’s almost always better to use dedicated machine.",
"username": "Kobe_W"
},
{
"code": "",
"text": "From a different aspect, 3 commodity machines are supposed to be cheaper than a single high end machine with similar capability.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thank you for your answers!",
"username": "Petr_Makarov"
},
{
"code": "",
"text": "one RS per each appI think that, given the same hardware, you would get better performance with all the applications using the same RS. But as anything performance related only testing and benchmarking can provide the clear answer.",
"username": "steevej"
}
] | Several Mongo daemons on the same physical server? | 2023-07-18T20:58:34.846Z | Several Mongo daemons on the same physical server? | 538 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I have 3 collection . coll1, coll2, coll3 . Each collection have multiple fields which includes state and priority . state field value can be “OPEN” . “INREVIEW”, \"CLOSED\"and priority value can be “HIGH”, “MEDIUM”, “LOW”. Now i would like to get the total count ,“OPEN” count, “INREVIEW” COUNT also “HIGH” , “MEDIUM”, “LOW” count for each collection. One tricky part here is I would like to execute single pipeline to get these result. Can some one help on this ?",
"username": "ari_k"
},
{
"code": "",
"text": "Take a look at",
"username": "steevej"
}
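As a hedged sketch of one possible single-pipeline approach (not necessarily what the reply above points to): the three collections can be combined with $unionWith (MongoDB 4.4+) and counted in one pass with $facet. Collection and field names come from the question.

// Tag each document with its source collection, merge with $unionWith,
// then produce all the requested counts in a single $facet stage.
db.coll1.aggregate([
  { $project: { state: 1, priority: 1, coll: { $literal: "coll1" } } },
  { $unionWith: { coll: "coll2", pipeline: [{ $project: { state: 1, priority: 1, coll: { $literal: "coll2" } } }] } },
  { $unionWith: { coll: "coll3", pipeline: [{ $project: { state: 1, priority: 1, coll: { $literal: "coll3" } } }] } },
  {
    $facet: {
      totals:     [{ $group: { _id: "$coll", count: { $sum: 1 } } }],
      byState:    [{ $group: { _id: { coll: "$coll", state: "$state" }, count: { $sum: 1 } } }],
      byPriority: [{ $group: { _id: { coll: "$coll", priority: "$priority" }, count: { $sum: 1 } } }]
    }
  }
])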
] | Aggregation pipeline for one use case | 2023-07-20T18:42:59.510Z | Aggregation pipeline for one use case | 618 |
null | [
"aggregation",
"java",
"time-series"
] | [
{
"code": "",
"text": "Hi Team,I am preparing for Associate Dev - Java path exam.I have the PDF link of study guide (MongoDB Courses and Trainings | MongoDB University)and I am referring exhaustive documentation (https://www.mongodb.com/docs/v6.0/)I just want to confirm from someone here on the tree/topics below if they are not relevant to CDA exam.Relevant - Introduction, Mongo CRUD, Aggregations, Data Models, Indexes… +… (and there can be others like Drivers etc)Not Relevant - Security, Replication, Sharding, Time Series, Administration, Storage (Since in PDF, there is no mention of any line item pertaining to this)Mainly want to reduce the time I will take to finish documentation and focus better only on relevant topics. Any help here would be highly appreciated!Thanks,\nKetan",
"username": "Ketan_Mehta1"
},
{
"code": "",
"text": "Just need confirmation on my understanding, if it’s correct or not!",
"username": "Ketan_Mehta1"
},
{
"code": "",
"text": "Hello @Ketan_Mehta1, Welcome to the MongoDB developer community forum,Mainly want to reduce the time I will take to finish documentation and focus better only on relevant topics. Any help here would be highly appreciated!It is not required to do any not relevant topics, which is why they have designed courses for specific languages, and it is specific for the developer path certification,Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.For more reference, I have published an article related to “MongoDB Associate Developer Certification Study Guide”, Which explains my certification journey and study guide.Firstly, let's take a brief overview of MongoDB certification. MongoDB certification is a significant accomplishment in the industry, adding value to our professional credentials.\nReading time: 3 min read\n",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Relevant Tree/Topic items to refer for Associate Dev Exam | 2023-07-23T02:56:38.591Z | Relevant Tree/Topic items to refer for Associate Dev Exam | 583 |
null | [
"flutter"
] | [
{
"code": "",
"text": "Hello, I’m currently new to Realm and I’m building an app for offline functionality with clinical histories. As more patients are treated, this collection will grow larger. Therefore, I would like to have a mechanism to have on the mobile device only the information that has changed in the last week. This is to have a small database on my device and avoid having too much information in the app when it is offline.",
"username": "Fabian_Eduardo_Diaz_Lizcano"
},
{
"code": "final app = App(AppConfiguration(\"app-service-id\"));\nfinal user = await app.logIn(Credentials.anonymous());\nfinal realm = Realm(Configuration.flexibleSync(user, [Card.schema]));\n realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.removeByName(\"dateFilter\");\n mutableSubscriptions.add(realm.query<Card>(r\"date >= $0\", [today]), name: \"dateFilter\");\n });\n await realm.subscriptions.waitForSynchronization();\n await realm.syncSession.waitForDownload();\n realm.write(() => realm.add(Card(ObjectId(), DateTime.now())));\n realm.syncSession.resume(); // go online\n //re-create the subscription and wait for download\n realm.syncSession.pause(); // go offline\n",
"text": "Hi @Fabian_Eduardo_Diaz_Lizcano!\nWellcome the MongoDB forum!\nYes, we have such mechanism using realm and sync to Atlas MongoDB - https://realm.mongodb.com.\nI assume you have already an AppService at Atlas MongoDB cloud. And here is some simple code how to connect from a flutter app after you have added the realm | Flutter Package.The data downloaded to the device are defined using subscription sets. You can define a subscription with a filer by date. And you can re-create this subscription on daily bases using a timer or some other approach. The filtered data will be downloaded automatically and if you have other items matching the criteria during the whole day they will be automatically downloaded.If you want to control the online/offline mode you can use:Feel free to ask if you have any further questions.",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "Hi @Desislava_St_StefanovaThank you for your response.I have a subscription as you mentioned and I see that the offline topic works, but after the time that I define in the subscription, I still see the data on the device where I created the entity. I would like this information to be automatically deleted from the device to have only the most recent information, since the collections that I am going to create will have information that once the process is finished I will only read for reports.For this reason, I would like to gradually delete this information from mobile devices only.Thanks in advance,",
"username": "Fabian_Eduardo_Diaz_Lizcano"
},
{
"code": "updaterealm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add(realm.query<Card>(r\"date >= $0\", [today]), name: \"dateFilter\", update: true);\n });\n",
"text": "Hi @Fabian_Eduardo_Diaz_Lizcano!\nThe synced Realm contains only the data matching the subscriptions, all the other realm objects are deleted automatically. If you add new subscriptions the data matching all the subscriptions will exist on the device. You have too be sure that you have deleted the old subscriptions before to add a new one as it is in the sample above. Or you can use named subscription with update flag. Then each time a subscription with the same name is added the query will be replaced. Be sure to do this when the device and realm syncSession are online.If your don’t see the objects in the Realm but your Realm file size continue to grow, you should know that\nthe file size will be reduced automatically later. But if you want to shrink it immediately then you can force the compaction. See https://www.mongodb.com/docs/realm/sdk/flutter/realm-database/realm-files/compact/#std-label-flutter-compact.Please, let me know if you still see the old realm objects. If yes, it will be nice if you can share a part of your code.",
"username": "Desislava_St_Stefanova"
},
{
"code": "update:true",
"text": "Hi @Desislava_St_StefanovaThank you very much for your help with the flat update:true I see that I have the expected behavior.",
"username": "Fabian_Eduardo_Diaz_Lizcano"
},
{
"code": "",
"text": "",
"username": "henna.s"
}
] | Automatically purge a collection | 2023-07-18T01:39:21.095Z | Automatically purge a collection | 696 |
null | [
"swift"
] | [
{
"code": "@ObservedRealmObject var address: Address\n\nTextField(\"City\", text: $address.city)\n",
"text": "I’ve been running into an issue on realm-swift 10.25.0 where editing a field with the bindings such as:If I type too fast (meaning, still slower than regular speed) the whole app freezes with an error like this:Binding action tried to update multiple times per frame.I’m currently moving toward storing strings in state variables and updating Realm objects on completion just to get around it, but wondering if anyone else is seeing this happen or if there is a way to make sure that it doesn’t happen?",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "I’ve seen this happen, too, @Kurt_Libby1 , but haven’t been able to reproduce with consistency, so I (perhaps naively) assumed I was doing something wrong. I did the same workaround - stored the strings in state variables to sidestep the issue. I’ll flag this with the SDK team and see if they have any info around this.",
"username": "Dachary_Carey"
},
{
"code": "debounce",
"text": "Following up, a member of the Swift SDK team recommends debounce to reduce the update frequency, @Kurt_Libby1 . We don’t currently have anything in our docs or example projects showing this, so I don’t have anything to point you at to show you how best to implement this. I’m making a ticket to provide some docs/code guidance around this, though. Meanwhile, if you try this in your project and it resolves your issue, I’d love to hear about it.",
"username": "Dachary_Carey"
},
{
"code": "debounce Observableclass DebounceTextObserver : ObservableObject {\n \n @Published var bounced = \"\"\n @Published var tmp = \"\"\n \n init(delay: DispatchQueue.SchedulerTimeType.Stride) {\n $tmp\n .debounce(for: delay, scheduler: DispatchQueue.main)\n .assign(to: &$bounced)\n }\n}\n@StateObject var bounceDescrip = DebounceTextObserver(delay: 0.75)\n...\nTextField(\"\", text: $bounceDescrip.tmp, onCommit: { focusedField = .cost })\n ...\n .onChange(of: bounceDescrip.bounced, perform: { value in\n if estimateItem.realm == nil {\n estimateItem.descrip = value\n } else {\n guard let estimateItem = estimateItem.thaw(), let realm = estimateItem.realm else { return }\n \n try! realm.write {\n estimateItem.descrip = value\n }\n }\n })\n",
"text": "I’ve been experiencing a similar problem and it’s pretty inconsistent. In testing on an attached iPad Pro, I’m watching my debug panel, and after freezing the CPU hits 100% and the Memory climbs to over 1GB before Xcode eventually halts the simulator - see attached.I’m using a debounce Observable:Setup like so:At a loss here for how to deal with this.\nAlso (and forgive my ignorance here), I’m not on a production DB yet and still using the ‘Sandbox’ (General) cluster, and so curious if my backend is having trouble syncing? Could my move to a more robust cluster solve this?Any help if appreciated.Thanks,\nRon\nScreen Shot 2022-05-31 at 14.12.19600×1148 109 KB\n",
"username": "Ron_Dyck"
},
{
"code": "",
"text": "Hey @Ron_Dyck,I can confirm that debouncing did not work for me either. I did it differently, but for whatever reason (and I do think that @Dachary_Carey said that they’re at least looking into the reason and a permanent solution) it still freezes.I have been loading the values into separate state variables onAppear and then if the state value != realm value, I’m showing a save button so that nothing is updated until the end in one write.It is a very annoying issue that I’ve had to use on quite a few screens, but not all. I haven’t been able to isolate the cause. And the workaround is a worse experience for sure. I do wish they would add some debounce timing into the standard way that the implicit writes work.@Jason_Flax, do you know if this is on the roadmap?",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Not Jason here (obviously), but everyone is pretty busy right now getting things ready for MongoDB World next week and I didn’t want to leave you folks hanging.Yes, the SDK team has some time earmarked a bit later to look into SwiftUI performance issues, and this is definitely one of them. One of the things I’ve discussed with them was a POC that includes debounce so folks don’t have to implement it themselves, but I think they’re also looking bigger-picture into a handful of performance-related things. Addressing this issue is on the roadmap. Sorry you’re having to use workarounds in the meantime.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Thanks @Dachary_Carey! I’ll be in NYC for the MDB World, so hopefully will get to meet some of the realm team. Really looking forward to the sessions and seeing what is coming for Realm!",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Any updates on this? Still struggling with freezes even with bounce set at 1.25 seconds.",
"username": "Ron_Dyck"
},
{
"code": "",
"text": "Same; I experience this 100% of the time on every view I’ve implemented. I was assuming there was something I hadn’t learned yet that would easily fix it.Alongside this issue, I am also experiencing where characters in strings that are typed are lost.So, if the user types:\n“This is a sentence.”Their UI may show:\n“This is a sentence.”And the db actually has:\n“This s a sntence”Also, upon editing strings (opposed to typing in an empty field), then the cursor bounces from where it is to the end of the string and then all the user input ends up at the end of the string. An example when fixing a typo in:“This is s sentence.”If the cursor is placed at the “s” and a backspace is typed to remove the “s” and then an “a” is typed, you end up with:\n“This is sentence.a”All this behavior is occurring while the console log is flooded with:\n“Binding action tried to update multiple times per frame.”This will be a blocker to our app going thru apple review in a couple months time. Currently, the dev env is on the free/shared instance.",
"username": "Joseph_Bittman"
},
{
"code": "",
"text": "@Ron_Dyck @Kurt_Libby1 @Dachary_CareyI can’t imagine that every customer is having this issue. I wonder if it is a specific IOS version issue?I’m on macos 12.4, xcode 13.4.1\nSimulator v13.4.1 (977.2)\nSimulatorKit 618\nCoreSimulator 802.6.1\nRealm Cocoa v10.28.4 Platform Version Version 15.5 (Build 19F70)",
"username": "Joseph_Bittman"
},
{
"code": "Binding<String> action tried to update multiple times per frame.",
"text": "@Joseph_Bittman I have been using Realm Swift 10.28.6 with swift package manager and although I do get the Binding<String> action tried to update multiple times per frame. warning in the console log, I am not experiencing any freezes anymore.Maybe try updating to the newest version and see if you see an improvement?",
"username": "Kurt_Libby1"
},
{
"code": "@ObservedRealmObject var room: Room\n@State private var name: String = \"\"\n...\n\n DebounceTextField(label: \"Name\", value: $name) { value in\n let thawedRoom = room.thaw()!\n let realm = thawedRoom.realm!\n do {\n try! realm.write({\n thawedRoom.name = value\n })\n }\n }\n .onAppear(perform: {\n self.name = room.name\n }\n\n",
"text": "This issue has resurfaced for me (I’m on Realm Swift 10.41.0). Not nearly as bad as before, but writing straight to the binding like in the quickstart example does cause noticeable delays and issues. My users are noticing and complaining.For anyone else who is seeing this issue, I ended up using DebounceTextField from this medium post. I used an ObservedRealmObject still and thawed it to write after the debounce. Annoying workaround, but it actually works with no noticeable lag.",
"username": "Kurt_Libby1"
},
{
"code": "import SwiftUI\nimport Combine\nimport RealmSwift\n\n@available(macOS 13.0, *)\npublic struct RealmTextField<ObjectType>: View where ObjectType: RealmSwift.Object & Identifiable {\n @ObservedRealmObject var object: ObjectType\n @Binding var objectValue: String\n \n @State var realtimeValue = \"\"\n @State var publisher = PassthroughSubject<String, Never>()\n var label: String\n \n var valueChanged: ((_ value: String) -> Void)?\n \n @State private var debounceSeconds = 1.110\n \n public var body: some View {\n TextField(label, text: $realtimeValue, axis: .vertical)\n .disableAutocorrection(true)\n .task {\n Task { @MainActor in\n realtimeValue = objectValue\n }\n }\n .onChange(of: realtimeValue) { value in\n publisher.send(value)\n }\n .onReceive(\n publisher.debounce(\n for: .seconds(debounceSeconds),\n scheduler: DispatchQueue.main\n )\n ) { value in\n if let valueChanged = valueChanged {\n valueChanged(value)\n }\n }\n }\n \n public init(_ title: String, object: ObjectType, objectValue: Binding<String>) {\n self.object = object\n _objectValue = objectValue\n self.label = title\n }\n}\n",
"text": "nice tip. i’ve found success in a similar pattern that i think results in less boilerplate, you can avoid the thaw: GitHub - Tunous/DebouncedOnChange: SwiftUI onChange View extension with debounce time but on second thought i’m not sure it’s as good as yours at avoiding view reloadsedit: I’m trying this out, feel free to use under MPLv2 license",
"username": "Alex_Ehlke"
}
] | Synced Realm SwiftUI typing freeze | 2022-04-11T16:07:20.942Z | Synced Realm SwiftUI typing freeze | 4,678 |
null | [] | [
{
"code": "import { MongoClient } from \"mongodb\";\n\nconst createConnection = () => {\n\n // let current = null;\n\n const client = new MongoClient(process.env.MONGODB_URI);\n\n return async function run() {\n\n await client.connect();\n\n };\n\n};\n\nexport const managerdb = createConnection();\nTypeError: Cannot read properties of undefined (reading 'startsWith')",
"text": "I have this code here:but it is giving me this error here:TypeError: Cannot read properties of undefined (reading 'startsWith')What do you think it could be wrong?I am using this code for Nextjs",
"username": "Edgar_Lindo"
},
{
"code": "",
"text": "\nCapture111359×770 46.3 KB\n",
"username": "Edgar_Lindo"
},
{
"code": "",
"text": "I am node a node.js guy, but perhaps MONGODB_URI is not defined",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "The name startsWith looks like a field from a $graphLookup stage.I suspect that you have some logic error in the code that generates a dynamic $graphLookup with a null object.I have doubts that the issue is in the connect code you shared.",
"username": "steevej"
},
{
"code": "mport { managerdb } from \"../utils/connect\";\n\nexport const getStaticProps = async () => {\n\n const { db } = await managerdb.connect();\n\n const data = await db\n\n .collection(\"parts\")\n\n .find({})\n\n .sort({})\n\n .limit(1000)\n\n .toArray();\n\n return {\n\n props: {\n\n data: JSON.parse(JSON.stringify(data)), // why?\n\n },\n\n revalidate: 60,\n\n };\n\n};\n",
"text": "The URI is good… tested and got connection OK from another file app code.This error is trigger from a page requesting the connection from the utils page…",
"username": "Edgar_Lindo"
},
{
"code": "",
"text": "The more I think about that issue the less confident I feel about my $graphLookup thing. If there was an issue with it, I suspect you would get a server error rather than a type error.DespiteThe URI is good… tested and got connection OK from another file app codeIt would be helpful to see that URI since some application do not use exactly the same format.",
"username": "steevej"
},
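A hedged sketch (not the poster's code) of one way to rule out the undefined-URI cause Joe suggests above: fail fast if MONGODB_URI is missing and hand back a db handle the page code can await. The helper name and database parameter are illustrative.

import { MongoClient } from "mongodb";

const uri = process.env.MONGODB_URI;
if (!uri) {
  // An undefined URI is the cause suggested above; failing here gives a clearer error.
  throw new Error("MONGODB_URI is not defined - check your environment variables");
}

const client = new MongoClient(uri);
const clientPromise = client.connect(); // connect once, reuse the promise

export async function getDb(dbName) {
  const connectedClient = await clientPromise;
  return connectedClient.db(dbName); // dbName is illustrative; pass your database name
}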
{
"code": "",
"text": "A post was split to a new topic: Cannot read properties of undefined (reading ‘type’)",
"username": "Stennie_X"
},
{
"code": "Error: Cannot read properties of undefined (reading 'connection')\n[nodemon] app crashed - waiting for file changes before starting...\n\nconst mongoose = require(\"mongoose\");\nconst connectDB = async () => {\n const conn = await mongoose\n .connect(process.env.MONGO_URI, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n })\n .catch((err) => {\n console.log(\"error: \", err);\n });\n\n console.log(`Mongoose Connected: ${conn.connection.host}`.cyan.underline.bold);\n};\n\nmodule.exports = connectDB;\n",
"text": "",
"username": "jephthah_yusuf"
},
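A hedged sketch (not from the thread) of why that error appears and one way to restructure the helper: when mongoose.connect() rejects, the .catch above returns undefined, so conn.connection.host throws.

const mongoose = require("mongoose");

const connectDB = async () => {
  try {
    const conn = await mongoose.connect(process.env.MONGO_URI);
    console.log(`Mongoose Connected: ${conn.connection.host}`);
  } catch (err) {
    // Surface the real connection error instead of crashing on `conn.connection`.
    console.error("MongoDB connection error:", err);
    process.exit(1); // illustrative: exit so the process manager can restart it
  }
};

module.exports = connectDB;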
{
"code": "",
"text": "Hello, But how to do it in nodejs",
"username": "MONTANA_Delly"
}
] | TypeError: Cannot read properties of undefined (reading 'startsWith') | 2022-06-02T17:22:50.557Z | TypeError: Cannot read properties of undefined (reading ‘startsWith’) | 26,598 |
null | [
"data-modeling",
"kotlin"
] | [
{
"code": "@JvmInline\n@Serializable\nvalue class Id<T>(\n @Contextual\n val value: ObjectId = ObjectId()\n)\n@Serializable\ndata class Test(\n override var _id: Id<Test> = Id(),\n val value: String,\n) \n\n@Serializable\ndata class Status(\n override var _id: Id<Status> = Id(),\n var test_id: Id<Test>,\n val value: String\n)\n\n...\n\norg.bson.codecs.configuration.CodecConfigurationException:\nCan't find a codec for CodecCacheKey{clazz=class Id, types=null}.\n",
"text": "",
"username": "Uros_Jarc"
},
{
"code": "class IdCodec : Codec<Id<*>> {\n override fun encode(writer: BsonWriter, value: Id<*>, encoderContext: EncoderContext) {\n return writer.writeObjectId(value.value)\n }\n\n override fun decode(reader: BsonReader, decoderContext: DecoderContext): Id<*> {\n return Id<Any>(value = reader.readObjectId())\n }\n\n override fun getEncoderClass(): Class<Id<*>> = Id::class.java\n}\n val idCodecRegistry = CodecRegistries.fromCodecs(IdCodec())\n var codecRegistry = CodecRegistries.fromRegistries(\n MongoClientSettings.getDefaultCodecRegistry(),\n idCodecRegistry\n )\n val db = mongoClient.getDatabase(db_name).withCodecRegistry(codecRegistry)\n",
"text": "You have to define codec class for your generic class and use star operator on generic type…Then register this codec on created database instance…",
"username": "Uros_Jarc"
}
] | Can't find a codec for class with generics | 2023-07-21T22:49:42.767Z | Can’t find a codec for class with generics | 1,671 |
null | [
"app-services-user-auth",
"react-native"
] | [
{
"code": "let authenticatedUser = await collection.findOne({ userEmail: givenEmail });\nif(!authenticatedUser) \n throw Error(\"User with email \\\"\" + givenEmail + \"\\\" does exist!\");\n\nlet match = await bcrypt.compare(givenPassword, authenticatedUser.userPassword);\n \nif(match)\n return { id: authenticatedUser._id.toString(), email: authenticatedUser.userEmail };\nelse\n throw Error(\"Password did not match\");\n",
"text": "I have implemented a custom function for user authentication in Realm and for login I have:The implementation works but it takes too long (1m 30s every time), which will be quite annoying to users, so I am wondering why is it taking too long. Is there something I am doing wrong?( when signing the users up, I generated the bcrypt passwords using a salt of 10 if that matters)I decided to use Custom function auth because I didn’t know how to use the recommended email/password auth and map the authenticated user to my “Users” collection in my atlas database. I’d appreciate a guide to using the email/password auth as well",
"username": "james_robert1"
},
{
"code": "bcryptjs5041M10M20M10",
"text": "I’m attempting to use bcryptjs to hash passwords in an Atlas Function as well. I originally picked 12 rounds for the hash, but that ended with a 504 error when the function timed out.I dropped the rounds to 1 (do not tell the cryptographers—they are an angry people) and the function to hash a user’s password now executes in about 3 seconds on a free tier cluster where I’m testing.Did you test your function on an M10 or M20 cluster? In the past when I’ve had performance issues with Mongo, moving to at least an M10 solved them.",
"username": "Bryan_Jones"
}
] | Realm Custom function authentication taking too long | 2023-03-10T23:20:56.462Z | Realm Custom function authentication taking too long | 1,086 |
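For reference, a minimal sketch of the login/compare path as an Atlas Function using bcryptjs (the pure-JavaScript implementation discussed in the replies). This is not the thread author's exact code: the linked service name "mongodb-atlas", the database/collection names, and the credential payload shape are assumptions for illustration.

// A sketch, not the thread's exact function: custom-function login with
// bcryptjs inside an Atlas Function. Service/db/collection names are assumptions.
exports = async function ({ email, password }) {
  const bcrypt = require("bcryptjs");

  const users = context.services
    .get("mongodb-atlas")
    .db("app")
    .collection("Users");

  const user = await users.findOne({ userEmail: email });
  if (!user) {
    throw new Error(`User with email "${email}" does not exist`);
  }

  // bcryptjs is pure JavaScript, so verifying a hash that was created with a
  // high cost factor can take a long time in a serverless function; hashes
  // created with a modest number of rounds compare much faster.
  const match = await bcrypt.compare(password, user.userPassword);
  if (!match) {
    throw new Error("Password did not match");
  }

  return { id: user._id.toString(), email: user.userEmail };
};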
null | [
"node-js"
] | [
{
"code": "const bcrypt = require('bcrypt');\n> error: \nfailed to execute source for 'node_modules/bcrypt/bcrypt.js': TypeError: Value is not an object: undefined\n\tat node_modules/@mapbox/node-pre-gyp/lib/pre-binding.js:29:18(4)\n\tat node_modules/bcrypt/bcrypt.js:15:35(37)\n",
"text": "I am trying to create a https endpoint that uses bcrypt to compare the users password with the database password. I have downloaded bcrypt with version 5.1.0 through the atlas ui and it shows up on the dependencies. However, when i try to import the library i get an error.\nI use this to import bcryptand i get this error",
"username": "Tamothee_N_A"
},
{
"code": "",
"text": "Have you found a solution for it, I am facing the same issue @Tamothee_N_A",
"username": "james_robert1"
},
{
"code": "bcryptbcryptbcryptbcryptjsbcryptbcryptjsbcryptconst bcrypt = require('bcryptjs')\n",
"text": "I have found a solution.This error occurs because the bcrypt package requires native dependencies that must be compiled specifically for the environment where the code is running. Since MongoDB Realm runs in a serverless environment, it does not provide access to the native dependencies required by bcrypt, causing the error you are seeing.To use bcrypt in a MongoDB Realm function, you will need to find an alternative implementation that does not rely on native dependencies. One such implementation is bcryptjs, which is a pure JavaScript implementation of the bcrypt algorithm and does not require any native dependencies.To use bcryptjs, you can simply replace the bcrypt import statement in your code with:this should solve the issue!(Shout out to ChatGPT for the solution)",
"username": "Sangat_Shah"
},
{
"code": "bcryptjs",
"text": "using bcryptjs makes my function timeout or fail with 504 error",
"username": "louis_Muriuki"
},
{
"code": "bcryptjsconst bcrypt = require(\"bcryptjs\");\nconst hashedPassword = await bcrypt.hash(\"blahblah\", 12);\n",
"text": "@Sangat_Shah - Mind sharing how you used bcryptjs? I’m getting the same result as @louis_Muriuki: the function hangs and eventually times out. My implementation looks like this:",
"username": "Bryan_Jones"
}
] | Importing bcrypt into mongodb function not working | 2023-01-28T14:11:54.290Z | Importing bcrypt into mongodb function not working | 2,280 |
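A minimal sketch of the hashing side raised in the last replies: an Atlas Function that hashes with bcryptjs and a deliberately low cost factor so it finishes before the serverless execution timeout. The cost factor of 4 and the service, database, and collection names are assumptions; pick the highest rounds value that still completes reliably on your tier.

// A hedged sketch of hashing with bcryptjs inside an Atlas Function.
exports = async function (email, plainPassword) {
  const bcrypt = require("bcryptjs");

  // Higher rounds are stronger but slower; choose the largest value that
  // still completes reliably within the function execution limit.
  const ROUNDS = 4; // illustrative assumption, not a recommendation
  const hashedPassword = await bcrypt.hash(plainPassword, ROUNDS);

  const users = context.services
    .get("mongodb-atlas")
    .db("app")
    .collection("Users");

  await users.insertOne({ userEmail: email, userPassword: hashedPassword });
  return { ok: true };
};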
null | [] | [
{
"code": "const homeId = \"1a\"\nconst requestDate = 1648771200000\n\nconst monthly_gas_bills = \"houses.\" + homeId + \".monthly_gas_bills.$[elem]\"\n\nconst body = {\ndataSource: \"POC-cluster\",\n database: \"MyFirstDataBase\",\n collection: \"users\",\n \"filter\": { \n \"_id\": userId,\n },\n \"update\": {\n \"$set\": {[monthly_gas_bills]: outputObj },\n }\n \"options\": {arrayFilters: {\"elem.date\": requestDate}\n });\n+\n",
"text": "is it possible to use arrayFilters in the Data API, currently? I am trying to alter a specific element in an array of nested objects with a specified date value.I have the following body so far, but I can not find a place to put the arrayFilters. are they supported?I get an error that simply says: “Invalid parameter(s) specified: options”. I also can not put it in the update object, it does not detect it",
"username": "5ffe176b05637d76dbc28218bd2903c"
},
{
"code": "",
"text": "Late in answering here,Referring to a similar question,",
"username": "turivishal"
}
] | Possible to use arrayFilters in the mongodb atlas data api? | 2023-02-11T01:12:41.981Z | Possible to use arrayFilters in the mongodb atlas data api? | 1,568 |
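If the Data API endpoint keeps rejecting the "options" parameter, the same filtered-positional update can be performed through the Node.js driver, where arrayFilters is a documented option of updateOne. The connection URI, userId value, and helper name below are assumptions for illustration; the field path mirrors the one in the question.

// A sketch of the equivalent update via the Node.js driver.
const { MongoClient } = require("mongodb");

async function setMonthlyGasBill(uri, userId, homeId, requestDate, outputObj) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const users = client.db("MyFirstDataBase").collection("users");
    const path = `houses.${homeId}.monthly_gas_bills.$[elem]`;

    return await users.updateOne(
      { _id: userId },
      { $set: { [path]: outputObj } },
      // arrayFilters must be an array of filter documents
      { arrayFilters: [{ "elem.date": requestDate }] }
    );
  } finally {
    await client.close();
  }
}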
[
"queries",
"node-js",
"compass"
] | [
{
"code": "",
"text": "Connection to MongoDB suddenly stopped working and in migration topic now, i have not changed any env. after the successfully passing the test, what might be the issue at backend?m2student is the username i am using for connection.image854×942 26.3 KBable to view the collections from mongoDB portalimage1209×802 51.6 KB",
"username": "Rama_krishna1"
},
{
"code": "",
"text": "Hi @Rama_krishna1,To solve this issue make sure to add your IP address whitelisted.\nfDgXF1366×552 32.2 KBPlease feel free to reach out if you have any questions.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thanks a lot Kesav, for your support .\ncoming back to issue, have configured the above network access while working with db connection task, and its not changed from then. while it used to work but not sure what changed , currently not able to connect to mflie/simple_fix dbimage1562×386 22 KB",
"username": "Rama_krishna1"
},
{
"code": "",
"text": "Can you connect from shell?\nThis thread is about M001-mongobasics but you mentioned about migration topic\nMay be you are referring to M220 as your hostname also shows mflix…Please clarify and check your Compass parameters",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "have you tried to install mongodb client on your local system and tried to connect?",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "Thanks all,Yes i have installed mongoDB compass community when i started working with the current exercise. it used to work fine,\n@Ramachandra : connection used to work before when was working with db connection chapter.\ncurrently am working with migration task and the connection suddenly stopped working.my assumption would be, i have used the command npm update recently, by any chance this might created the problem?here are my env details which i configure for db connectionSECRET_KEY=everyone_is_a_critic\nMFLIX_DB_URI = mongodb+srv://m2student:[email protected]/sample_mflix?authSource=admin\nMFLIX_NS=sample_mflix\nPORT=5000",
"username": "Rama_krishna1"
},
{
"code": "",
"text": "Can you connect from shell?Are you able to connect from shell with the connect string you have shown in your env. file?\nIt should work if it was working before\nAre you using Compass favorites?\nIf yes drop it and create a new one and see",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Were you able to solve this?",
"username": "Progress_Nwimue_Lekara"
}
] | Connection to MongoDB suddenly stopped working | 2021-03-31T03:33:35.358Z | Connection to MongoDB suddenly stopped working | 6,205 |
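A small standalone connectivity check with the Node.js driver can help separate network/IP access list problems from application problems; it reuses the MFLIX_DB_URI variable from the .env shown above. The use of dotenv and the database name passed to db() are assumptions for illustration.

// Minimal connectivity check: load the URI from .env and issue a ping.
require("dotenv").config();
const { MongoClient } = require("mongodb");

(async () => {
  const client = new MongoClient(process.env.MFLIX_DB_URI);
  try {
    await client.connect();
    const result = await client.db("sample_mflix").command({ ping: 1 });
    console.log("Connected, ping:", result);
  } catch (err) {
    // If this fails while Compass also fails, suspect network access
    // (IP allow list, firewall) rather than the application code.
    console.error("Connection failed:", err.message);
  } finally {
    await client.close();
  }
})();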
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I have the following context in my application:Currently I have a collection for storing all the posts and a collection for storing all the users, and I have been tempted to store the ObjectIds of the posts that a user has liked as a field of type [ObjectId] in each user’s document. However, based on my understanding I think this is an anti-pattern (Massive Arrays | MongoDB) due to the unbounded nature of the array, but at the same time I see that each ObjectId is only 12 bytes and the 16 MB document limit means that it’s possible to store a list of hundreds of thousands of ObjectIds, and I think it would be a very rare case for a user to like hundreds of thousands of posts.The alternative is obviously storing all the likes as documents in a separate collection where each “like” document will reference the post and user document by ObjectId. This way is surely more scalable, but in order to perform queries efficiently on the “likes” collection I would have to index both the field that references the post’s ObjectId and also the field that references the user’s ObjectId, and I am not sure if the indexing would take a large amount of space when there are posts that have many likes. It seems to me that there will be a lot of “like” documents when there are many (hundreds or thousands of) posts where each post can possibly have hundreds of thousands or millions of likes.I have been looking around for existing posts and threads related to this issue and I found this: How to store User's liked items?\n, which prefers the “storing likes in a separate collection” approach over the “array of post ObjectIds in each user document” approach if I understand it correctly.I would also like to hear some recommendations or advice from anyone who has experience with this issue.Thanks a lot!",
"username": "fried_empanada"
},
{
"code": "",
"text": "16MB is not big enough for a very large scale social app user. A user can easily have many thousands of liked posts easily and a very popular post can be liked by, say millions of people.In the long term, you should definitely use a separate like collection for it.That being said, pre-mature optimization is the root of evil. No need to make it over-complicated unless it’s necessary.If a short term solution is good enough for 5 years. then go for it.",
"username": "Kobe_W"
},
{
"code": "likesnum_likesnum_likeslikenum_likes",
"text": "Hi @Kobe_W, thank you very much for your insight and recommendations! Like you suggested, I think I am going to create a collection just for the likes, but I think I will also keep track of the number of likes each post gets with a field like num_likes in each post document. And I have a follow-up question with regards to this: if I keep track of the num_likes field in each post document, I will need a way to keep the action of inserting a like document and the action of incrementing the like count of a post’s num_likes field atomic so that the data is in sync. What would be a good way to achieve this? I read that there’s the “multi-document transaction” option, but alternatively I can also just count the number of likes while querying the posts, because I mainly want to use the number of likes information in a custom algorithm to calculate a score that would be used for ranking the posts.",
"username": "fried_empanada"
},
{
"code": "",
"text": "What would be a good way to achieve this?Only two results:Generally inconsistency in number of likes is ok, (e.g. 1000000 is no different from 1000001). However there’s a way to mitigate it.You can check this video, the presenter mentioned an async way to fix it. (basically use a background job to count-and-correct from time to time).",
"username": "Kobe_W"
},
{
"code": "",
"text": "I see, thanks for the suggestions! By “transactions”, you mean something like multi-document transactions (https://www.mongodb.com/docs/manual/core/transactions/) right?",
"username": "fried_empanada"
},
{
"code": "",
"text": "Correct. anything beyond a single document operation needs to be wrapped in an explicit transaction for “ACID” purpose.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Got it, thank you for the explanation!",
"username": "fried_empanada"
}
] | Storing user likes in many-to-many relationship | 2023-07-06T22:20:09.622Z | Storing user likes in many-to-many relationship | 769 |
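A sketch of the "separate likes collection plus a counter" idea discussed above, using a multi-document transaction so the like document and the post's num_likes counter stay in sync. The URI, database and collection names, and field names are assumptions for illustration, not values from the thread.

// Insert a like and increment the post's counter atomically.
const { MongoClient } = require("mongodb");

async function likePost(uri, userId, postId) {
  const client = new MongoClient(uri);
  await client.connect();
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const db = client.db("app");
      // One document per like; a unique index on { user_id, post_id }
      // would prevent double-liking.
      await db.collection("likes").insertOne(
        { user_id: userId, post_id: postId, created_at: new Date() },
        { session }
      );
      await db.collection("posts").updateOne(
        { _id: postId },
        { $inc: { num_likes: 1 } },
        { session }
      );
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}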
null | [] | [
{
"code": "",
"text": "Creating entries in the mobile app I can see these changes in the adjacent test mobile, so Realm is synchronising. However looking at MongoDB App Services the last sync event was yesterday evening:Latest Sync Event\n02/05/2023 19:52:53Last Cluster Event Processed\n02/05/2023 19:52:51Lag\n2 secI can see sync working in the logs this morning:OK Sync ->Write Feb 06 9:18:02+00:00\n[\n“Upload message contains 1 changesets (total size 50 bytes) to be integrated”,\n“Integrated 1 of 1 remaining changesets (total size 50 bytes) (0 required conflict resolution) in 1 attempts to bring server version to 2929”\n]However I cannot see the changes I make in the app in Data Services whilst browsing the collection. Additionally my desktop application (accessing data via an API) does not show the new entities. This implies the server does not have the data but the mobiles are synchronised? I’m lost…This is odd what’s going on?",
"username": "Sam_Roberts"
},
{
"code": "",
"text": "Resolved by turning sync off and on again.This error featured in the logs recurrently:failed to validate upload changesets: field “forms.1” in table “Project” should have link type “objectId” but payload type is “Null” (ProtocolErrorCode=212)How in future might I fix this without “aggressively” resetting sync",
"username": "Sam_Roberts"
},
{
"code": "",
"text": "Based on the error, it seems like your writes failed upload validation because of a type mismatch between your schemas. Your schema expects the field “forms.1” to contain type \"objectId” but it’s null. Having your writes conform to the schema should avoid hitting this error.In the future, you can pause and resume sync instead of terminating and re-enabling sync completely. This will preserve your sync configuration / client metadata and clients will not have to client reset.",
"username": "Niharika_Pujar"
},
{
"code": "ending session with error: failed to validate upload changesets: field \"field_name\" in table \"table_name\" should have link type \"objectId\" but payload type is \"Null\" (ProtocolErrorCode=212)",
"text": "ot haHi, I had the same issue with the error ending session with error: failed to validate upload changesets: field \"field_name\" in table \"table_name\" should have link type \"objectId\" but payload type is \"Null\" (ProtocolErrorCode=212).Context: The field specified is a relationship field, which makes the field not required, but I still have the error coming in. Is that correct or am I missing something?",
"username": "Rossicler_Junior"
}
] | Realm Sync delay | 2023-02-06T09:22:22.223Z | Realm Sync delay | 792 |
null | [
"backup",
"atlas-cli"
] | [
{
"code": "atlas backup exports buckets create\n",
"text": "Hi guys, I’m setting up a backup export to an S3 bucket and I can’t find a valid value for the iamRoleId. Does anyone know how to get this value given that there is already an iam role configured?Just as a reference:Create bucket reference",
"username": "Leandro_Domingues"
},
{
"code": "atlasatlas cloudProviders accessRoles listatlas cloudProviders accessRoles aws createatlas cloudProviders accessRoles list \n{\n \"awsIamRoles\": [\n {\n \"atlasAWSAccountArn\": \"<REDACTED>\",\n \"atlasAssumedRoleExternalId\": \"<REDACTED>\",\n \"createdDate\": \"2023-07-21T00:40:13Z\",\n \"roleId\": \"<REDACTED>\", // <--- This value\n \"providerName\": \"AWS\"\n }\n ]\n}\nroleId",
"text": "Hi @Leandro_Domingues,I haven’t tried backup export creation from the atlas cli yet but it may be from atlas cloudProviders accessRoles list. Output testing it out (after running atlas cloudProviders accessRoles aws create):Let me know if this is roleId value works and I will enquire about this further with the team.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you Jason!I achieved it using a similar process but through the Atlas API:\nhttps://docs.atlas.mongodb.com/reference/api/cloud-provider-access-get-roles/#get-all-cloud-provider-access-roles",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Nice one - Thanks for posting the solution too Leandro!",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas iamRoleId | 2023-07-20T20:21:17.897Z | Atlas iamRoleId | 626 |
null | [
"queries",
"node-js",
"data-modeling",
"mongoose-odm"
] | [
{
"code": "const express = require(\"express\");\nconst multer = require(\"multer\");\nconst mongoose = require(\"mongoose\");\nconst { GridFsStorage } = require(\"multer-gridfs-storage\");\nconst Grid = require(\"gridfs-stream\");\n\n\nconst app = express();\nconst url = 'mongodb://127.0.0.1:27017/gridfs';\n\nconst storage = new GridFsStorage({\n url,\n file: (req, file) => {\n return {\n filename: file.originalname,\n bucketName: \"Prescription Upload\",\n };\n },\n});\n\nconst uploadGrid = multer({ storage });\n\nmongoose.connect(url)\n .then(() => {\n console.log('Connected to MongoDB');\n })\n .catch((err) => {\n console.error('Failed to connect to MongoDB', err);\n });\n\nconst conn = mongoose.createConnection(url);\nlet grf;\n\nconn.once(\"open\", () => {\n grf = Grid(conn.db, mongoose);\n});\napp.get(\"/files/:filename\", (req, res) => {\n const filename = req.params.filename;\nconsole.log(filename);\n grf.files.findOne({ filename }, (err, file) => {\n \n if (err) {\n console.error(\"Error retrieving file:\", err);\n return res.status(500).json(\"Error retrieving file\");\n }\n\n if (!file) {\n return res.status(404).json(\"File not found\");\n }\n\n const readStream = grf.createReadStream({ filename });\n readStream.pipe(res);\n });\n});\n",
"text": "",
"username": "Soumyajit_Bhattacharya"
},
{
"code": "when the api is getting called no response or error shown. \nMulter",
"text": "Hello @Soumyajit_Bhattacharya .Welcome to The MongoDB Community Forums! Can you please provide a few additional details, for me to understand your use-case better?Lastly, can you share the logs from the MongoDB side when you run this?Note: Please redact any sensitive information before posting.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "app.get(\"/files/:filename\", (req, res, next) => {\n const filename = req.params.filename;\n console.log(filename);\n\n grf.files.findOne({ filename }, (err, file) => {\n if (err) {\n console.error(\"Error retrieving file:\", err);\n return res.status(500).json({ error: \"Error retrieving file\" });\n }\n\n if (!file) {\n return res.status(404).json({ error: \"File not found\" });\n }\n\n const readStream = grf.createReadStream({ filename });\n readStream.on(\"error\", (err) => {\n console.error(\"Error reading file stream:\", err);\n res.status(500).json({ error: \"Error reading file stream\" });\n });\n readStream.pipe(res);\n });\n});\n\"mongoose\": \"^7.1.1\"let grf;\nconn.once(\"open\", () => {\n grf = Grid(conn.db, mongoose);\n});\n",
"text": "Version of Mongoose used - \"mongoose\": \"^7.1.1\"Yes. This is declaration part -Not sure what do you mean by Driver version.Multer is working fine.I have referred to MongoDB documentation.\nLink - GridFS - Retrieve File Information",
"username": "Soumyajit_Bhattacharya"
},
{
"code": "",
"text": "Did you ever find a solution to this @Soumyajit_Bhattacharya Soumyajit_Bhattacharya? I am experiencing the same issue today and am running my code very similarly to the way you are. It just hangs on the GET request without an error message or a failure. At first I thought that my multer and multer-gridfs-stream versions were not compatible, but that ended up not being the problem after trouble shooting. If you did find a solution, can you please share?",
"username": "Simeon_Ikudabo"
},
{
"code": "",
"text": "Do you not need a res.end() call after the data has been piped to the response?",
"username": "John_Sewell"
},
{
"code": "gfs.files.findOneasynchronousapp.get(\"/files/:filename\", async (req, res) => {\nawaitconst file = await gfs.files.findOne({ filename });\ngrf = Grid(conn.db, mongoose);gridfsBucket = new mongoose.mongo.GridFSBucket(conn.db, {\n bucketName: 'yourBucketName',\n });\nconst readStream = gridfsBucket.openDownloadStream(file._id);\nreadStream.pipe(res);\n",
"text": "I found a solution to this @John_Sewell. It turns out that gfs.files.findOne no longer takes a callback. I had previously used the callback style with mongoose, but now it is an asynchronous call. Now to get the file you would make the function for the route asynchronousTo get the file that you want, you would now use await instead of a callbackto stream the file you will need to create a bucket now. In the line of code that opens the connection once, after grf = Grid(conn.db, mongoose);, you will also want to create a bucket like below:Then you will use gridfsBucket to stream your files back to the client:",
"username": "Simeon_Ikudabo"
},
{
"code": "",
"text": "Excellent! Glad you got a solution and something to watch for with mongoose callbacks.",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to fetch data from Gridfs , when the api is getting called no response or error shown. Working on this for last few weeks have tried everything that is possible but the result is same | 2023-05-24T17:35:28.029Z | Unable to fetch data from Gridfs , when the api is getting called no response or error shown. Working on this for last few weeks have tried everything that is possible but the result is same | 1,149 |
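Pulling the fix from the accepted reply together into one route, as a sketch rather than the exact code from the thread: it awaits the file lookup instead of passing a callback and streams the file back through a GridFSBucket. It also uses the bucket's own find() method so the old gridfs-stream package is not needed at all; the bucket name matches the upload configuration shown earlier, while the connection URL and port are assumptions.

// Consolidated GridFS download route (async/await + GridFSBucket).
const express = require("express");
const mongoose = require("mongoose");

const app = express();
const url = "mongodb://127.0.0.1:27017/gridfs";

const conn = mongoose.createConnection(url);
let gridfsBucket;

conn.once("open", () => {
  gridfsBucket = new mongoose.mongo.GridFSBucket(conn.db, {
    bucketName: "Prescription Upload",
  });
});

app.get("/files/:filename", async (req, res) => {
  if (!gridfsBucket) {
    return res.status(503).json({ error: "Database not ready yet" });
  }
  try {
    // find() queries the bucket's .files collection directly.
    const files = await gridfsBucket
      .find({ filename: req.params.filename })
      .toArray();

    if (files.length === 0) {
      return res.status(404).json({ error: "File not found" });
    }

    const readStream = gridfsBucket.openDownloadStream(files[0]._id);
    readStream.on("error", () =>
      res.status(500).json({ error: "Error reading file stream" })
    );
    readStream.pipe(res);
  } catch (err) {
    console.error("Error retrieving file:", err);
    res.status(500).json({ error: "Error retrieving file" });
  }
});

app.listen(5000);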
null | [
"mongodb-shell",
"transactions",
"installation"
] | [
{
"code": "$ sudo yum install -y mongodb-org\n\nLast metadata expiration check: 0:06:18 ago on Sun Jun 18 23:56:43 2023.\nError: \n Problem: conflicting requests\n - package mongodb-org-4.4.0-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.0, but none of the providers can be installed\n - package mongodb-org-4.4.1-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.1, but none of the providers can be installed\n - package mongodb-org-4.4.10-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.10, but none of the providers can be installed\n - package mongodb-org-4.4.11-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.11, but none of the providers can be installed\n - package mongodb-org-4.4.12-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.12, but none of the providers can be installed\n - package mongodb-org-4.4.13-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.13, but none of the providers can be installed\n - package mongodb-org-4.4.14-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.14, but none of the providers can be installed\n - package mongodb-org-4.4.15-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.15, but none of the providers can be installed\n - package mongodb-org-4.4.16-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.16, but none of the providers can be installed\n - package mongodb-org-4.4.17-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.17, but none of the providers can be installed\n - package mongodb-org-4.4.18-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.18, but none of the providers can be installed\n - package mongodb-org-4.4.19-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.19, but none of the providers can be installed\n - package mongodb-org-4.4.2-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.2, but none of the providers can be installed\n - package mongodb-org-4.4.20-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.20, but none of the providers can be installed\n - package mongodb-org-4.4.21-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.21, but none of the providers can be installed\n - package mongodb-org-4.4.22-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.22, but none of the providers can be installed\n - package mongodb-org-4.4.3-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.3, but none of the providers can be installed\n - package mongodb-org-4.4.4-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.4, but none of the providers can be installed\n - package mongodb-org-4.4.5-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.5, but none of the providers can be installed\n - package mongodb-org-4.4.6-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.6, but none of the providers can be installed\n - package mongodb-org-4.4.7-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.7, but none of the providers can be installed\n - package mongodb-org-4.4.8-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.8, but none of the providers can be installed\n - package mongodb-org-4.4.9-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.9, but none of the providers can be installed\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.0-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.0-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.1-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.1-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.10-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) 
needed by mongodb-org-shell-4.4.10-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.10-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.10-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.10-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.11-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.11-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.11-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.11-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.11-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.12-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.12-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.12-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.12-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.12-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.13-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.13-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.13-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.13-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.13-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.14-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.14-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.14-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.14-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.14-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.15-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.15-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.15-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.15-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.15-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.16-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.16-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.16-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.16-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.16-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by 
mongodb-org-shell-4.4.17-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.17-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.17-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.17-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.17-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.18-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.18-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.18-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.18-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.18-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.19-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.19-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.19-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.19-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.19-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.2-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.2-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.20-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.20-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.20-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.20-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.20-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.21-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.21-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.21-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.21-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.21-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.3-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.3-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.4-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by 
mongodb-org-shell-4.4.4-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.5-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.5-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.5-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.5-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.5-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.6-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.6-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.6-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.6-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.6-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.7-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.7-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.7-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.7-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.7-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.8-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.8-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.8-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.8-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.8-1.amzn2.x86_64\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.9-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.9-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.9-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.9-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by mongodb-org-shell-4.4.9-1.amzn2.x86_64\n(try to add '--skip-broken' to skip uninstallable packages)\n\n$ sudo yum install -y mongodb-org --skip-broken\n\nLast metadata expiration check: 0:06:58 ago on Sun Jun 18 23:56:43 2023.\nDependencies resolved.\n\n Problem: package mongodb-org-4.4.22-1.amzn2.x86_64 requires mongodb-org-shell = 4.4.22, but none of the providers can be installed\n - cannot install the best candidate for the job\n - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(libcrypto.so.10)(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libssl.so.10(libssl.so.10)(64bit) needed by mongodb-org-shell-4.4.22-1.amzn2.x86_64\n - nothing provides libcrypto.so.10(OPENSSL_1.0.2)(64bit) needed by 
mongodb-org-shell-4.4.22-1.amzn2.x86_64\n=================================================================================================================================================================================================================================================================================\n Package Architecture Version Repository Size\n=================================================================================================================================================================================================================================================================================\nSkipping packages with broken dependencies:\n mongodb-org x86_64 4.4.22-1.amzn2 mongodb-org-4.4 6.2 k\n mongodb-org-shell x86_64 4.4.22-1.amzn2 mongodb-org-4.4 14 M\n\nTransaction Summary\n=================================================================================================================================================================================================================================================================================\nSkip 2 Packages\n\nNothing to do.\nComplete!\n",
"text": "I am trying to install MongoDB on ec2-user.\nI type:\nsudo yum install -y mongodb-org\nIt says:\nError:\nProblem: conflicting requests…\n…\n(try to add ‘–skip-broken’ to skip uninstallable packages)After this I type\nsudo yum install -y mongodb-org --skip-broken\nAnd it straight up skips mongodb-org and mongodb-org-shellHere’s the complete response:",
"username": "Divyani_Audichya"
},
{
"code": "",
"text": "I have exactly the same issue and stuck. Any replies on this ?",
"username": "Gaurav_Sharma8"
},
{
"code": "",
"text": "same issue for me… any advice?",
"username": "Imtiyaz_S"
}
] | Experiencing persistent issues with installing MongoDB on your EC2 instance and need further technical support | 2023-06-19T00:16:56.961Z | Experiencing persistent issues with installing MongoDB on your EC2 instance and need further technical support | 1,209 |
null | [
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "const counter = Order.findOne().sort({ order_number: -1 }).select(\"order_number\");const result = await Order.find().sort({ order_number: -1 }).select(\"order_number\"); // result [ 7, 4, 2, 1]const counter = result[0];",
"text": "I was surprised of the result of this request I found:\nconst counter = Order.findOne().sort({ order_number: -1 }).select(\"order_number\");I thought firstly it will find first one element and try to sort it by ‘order_number’ but It works very strange.\nFirstly it will sort all collection by ‘order_number’ then it will find one element (the first) and select field ‘order_number’.\nBefore this I did it in 2 operations:\nconst result = await Order.find().sort({ order_number: -1 }).select(\"order_number\"); // result [ 7, 4, 2, 1]\nconst counter = result[0];Why it works in such sequense ?",
"username": "Aleksander_Podmazko"
},
{
"code": "",
"text": "Hello @Aleksander_Podmazko, Welcome back to the MongoDB developer forum,I am not getting the exact case you are showing,I thought firstly it will find first one element and try to sort it by ‘order_number’ but It works very strange.\nFirstly it will sort all collection by ‘order_number’ then it will find one element (the first) and select field ‘order_number’.There is no meaning of sort if they do sort after finding one document.",
"username": "turivishal"
}
] | Opposite sequense of mongoose methods | 2023-07-21T07:59:51.296Z | Opposite sequense of mongoose methods | 357 |
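The behaviour described in the question is expected: the chained mongoose calls only build a query, and when it is awaited the server applies the sort before returning a single document. A small sketch, assuming the Order model and an open mongoose connection from the question:

// Nothing runs until the query is awaited; the server sorts by order_number
// and returns one document, equivalent to find().sort(...).limit(1).
async function latestOrderNumber() {
  const last = await Order.findOne()
    .sort({ order_number: -1 })
    .select("order_number");
  return last ? last.order_number : 0;
}

This is also why the one-call form is preferable to the two-step version: only a single document is transferred instead of the whole sorted result set.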
null | [
"atlas-cluster",
"database-tools",
"backup"
] | [
{
"code": "C:\\util\\mongodb-database-tools-windows-x86_64-100.7.3\\bin>mongodump --uri mongodb+srv://kurt:[email protected]/ur_sensors --collection Dissolved_DO --out bin/exptest3.bin\n2023-07-18T18:25:00.587-0500 writing ur_sensors.Dissolved_DO to bin\\exptest3.bin\\ur_sensors\\Dissolved_DO.bson\n2023-07-18T18:25:00.645-0500 done dumping ur_sensors.Dissolved_DO (94 documents)\nC:\\util\\mongodb-database-tools-windows-x86_64-100.7.3\\bin>\nC:\\util\\mongodb-database-tools-windows-x86_64-100.7.3\\bin>mongoexport --uri mongodb+srv://kurt:[email protected]/ur_sensors --collection Dissolved_DO --fields Temp,DO,Date,Time,HWINFO,BatVolt,Comments --type=csv --out outtestX.csv\n2023-07-18T18:25:16.310-0500 connected to: mongodb+srv://[**REDACTED**]@cluster0.286gegr.mongodb.net/ur_sensors\n2023-07-18T18:25:16.451-0500 exported 94 records\nHere is a copy of a few of the mongodb documents, it is just JSON data: \n{\"_id\":{\"$oid\":\"64b6c45d54deb6b6af3b6e46\"},\"date\":{\"$date\":{\"$numberLong\":\"1689699421390\"}},\"requestBody\":{\"Temp\":\"28.50\",\"DO\":\"7258\",\"HWINFO\":\"DO2 Sensor SolarLTE1 sw:1.0.1\",\"BatVolt\":\"4366.64\",\"Comments\":\"Misc info\",\"Date\":\"07/17/2023\",\"Time\":\"17:14\"}}\n{\"_id\":{\"$oid\":\"64b6c463867a358dbda0e8f0\"},\"date\":{\"$date\":{\"$numberLong\":\"1689699427679\"}},\"requestBody\":{\"Temp\":\"29.50\",\"DO\":\"7076\",\"HWINFO\":\"DO2 Sensor SolarLTE1 sw:1.0.1\",\"BatVolt\":\"4331.18\",\"Comments\":\"Misc info\",\"Date\":\"07/17/2023\",\"Time\":\"17:29\"}}\n{\"_id\":{\"$oid\":\"64b6c46ae8ccbd12e9d74fb5\"},\"date\":{\"$date\":{\"$numberLong\":\"1689699434014\"}},\"requestBody\":{\"Temp\":\"-127.00\",\"DO\":\"4967\",\"HWINFO\":\"DO2 Sensor SolarLTE1 sw:1.0.1\",\"BatVolt\":\"4357.77\",\"Comments\":\"Misc info\",\"Date\":\"07/17/2023\",\"Time\":\"17:44\"}}\n",
"text": "I’m learning how to use Mongdb and Atlas.\nI successfully created documents in my Cluster0.\nUsing the mongoexport command I tried to export the data to a csv file but only the field column titles get created, no data. When executing the command, it says it exported 94 records, a file is created …but the data is missing.\nHowever, if I run the mongodump command with similar syntax, that execution states it exported 94 records, a BSON file is created and and it works as expected. I used an online converter and converted the binary to csv so I do see the data I expect, but I’d rather not have that extra step with a converter.What am I doing wrong with the mongoexport command?I’m using Windows 10 and a CMD window.Here is the cmd window input/output.The resulting csv file just has one line with 6 columns: Temp\tDO\tDate\tTime\tHWINFO\t BatVolt\tComments.Again….what is mongoexport expecting in order to do this correctly? I kinda suspect my data is causing this, maybe the quotes but I think it is proper JSON? I’d be surprise though to think Mongo was this sensitive to data.",
"username": "kurt_h"
},
{
"code": "",
"text": "Hello @kurt_h ,I tried your query and updated it as below to get the expected result.mongoexport --uri mongodb+srv://kurt:[email protected]/ur_sensors --collection Dissolved_DO --fields requestBody.Temp,requestBody.DO,requestBody.Date,requestBody.Time,requestBody.HWINFO,requestBody.BatVolt,requestBody.Comments --type=csv --out outtestX.csvThis was done because the fields you are trying to access is available under requestBody object and this is known as Dot notation used to access fields of array.Let me know if you get any issues/queries, will be happy to help! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Tarun,Excellent! That was the problem, another step in learning Mongodb. Thank You very much.Kurt_H",
"username": "kurt_h"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongoexport to csv seems to read records but the data fields are empty | 2023-07-20T22:31:34.583Z | Mongoexport to csv seems to read records but the data fields are empty | 635 |
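As an optional companion to the dot-notation fix above, the nested requestBody fields can also be flattened ahead of time with an aggregation in mongosh, so the export only deals with top-level names. The output collection name Dissolved_DO_flat is an assumption:

// Flatten requestBody into top-level fields and write to a new collection,
// which can then be exported with plain field names in --fields.
db.Dissolved_DO.aggregate([
  {
    $project: {
      _id: 0,
      Temp: "$requestBody.Temp",
      DO: "$requestBody.DO",
      Date: "$requestBody.Date",
      Time: "$requestBody.Time",
      HWINFO: "$requestBody.HWINFO",
      BatVolt: "$requestBody.BatVolt",
      Comments: "$requestBody.Comments",
    },
  },
  { $out: "Dissolved_DO_flat" },
]);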
null | [
"connecting",
"php"
] | [
{
"code": "serverSelectionTryOnce",
"text": "Fatal error : Uncaught MongoDB\\Driver\\Exception\\ConnectionTimeoutException: No suitable servers found (serverSelectionTryOnce set): [Failed to receive length header from server. calling hello on ‘cluster0-shard-00-02.sw31d.mongodb.net:27017’] [Failed to receive length header from server. calling hello on ‘cluster0-shard-00-00.sw31d.mongodb.net:27017’] [Failed to receive length header from server. calling hello on ‘cluster0-shard-00-01.sw31d.mongodb.net:27017’] in C:\\xampp\\htdocs\\php_mongdb\\vendor\\mongodb\\mongodb\\src\\functions.php:520 Stack trace: #0 C:\\xampp\\htdocs\\php_mongdb\\vendor\\mongodb\\mongodb\\src\\functions.php(520): MongoDB\\Driver\\Manager->selectServer(Object(MongoDB\\Driver\\ReadPreference)) #1 C:\\xampp\\htdocs\\php_mongdb\\vendor\\mongodb\\mongodb\\src\\Collection.php(932): MongoDB\\select_server(Object(MongoDB\\Driver\\Manager), Array) #2 C:\\xampp\\htdocs\\php_mongdb\\index.php(14): MongoDB\\Collection->insertOne(Array) #3 {main} thrown in C:\\xampp\\htdocs\\php_mongdb\\vendor\\mongodb\\mongodb\\src\\functions.php on line 520hi,\nhow can i fix this",
"username": "Gimhan_De_Silva"
},
{
"code": "mongo+srv://",
"text": "I believe you are using a url of the form mongo+srv:// instead of supplying all three cluster member addresses.It is highly possible you are in your work network (or a cafe), and it is blocking access to Atlas cluster servers. if this is the case, you may ask you network admins to permit these connections. otherwise you need to change your internet access. for example use your mobile data, or use network at home or at a friend’s.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hello @Gimhan_De_Silva, welcome to the community forum!You can also include the code and the versions of the programming language, driver and the database you are working with. It is generally easy to figure the cause of the issue with all the available information.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Its not working please help with this problem",
"username": "The_Omniscient"
},
{
"code": "",
"text": "What is it “exactly” that is not working? please give us details we may work on. what code you use, what error you get, how, where and on what you run it, etc.",
"username": "Yilmaz_Durmaz"
}
] | Fatal error: No suitable servers found | 2023-01-24T06:26:12.254Z | Fatal error: No suitable servers found | 1,485 |
null | [
"kafka-connector"
] | [
{
"code": "{\n \"database\": \"weather_db\",\n \"collection\": \"random_col\"\n}\n{\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"namespace.mapper.key.collection.field\": \"$collection\",\n \"tasks.max\": \"1\",\n \"topics\": \"processed_weather_topic\",\n \"namespace.mapper.key.database.field\": \"$database\",\n \"namespace.mapper\": \"com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper\",\n \"key.converter.schemas.enable\": \"false\",\n \"database\": \"local\",\n \"connection.uri\": \"mongodb://database:27017\",\n \"value.converter.schemas.enable\": \"false\",\n \"name\": \"mongo-sink\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"key.converter\": \"org.apache.kafka.connect.json.JsonConverter\"\n}\nnamespace.mapper.key.database.fieldweather_dblocal$.database\"value.converter.schemas.enable\": \"false\"org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded...",
"text": "Hello,I’m integrating Kafka with MongoDB using the Kafka-connector sink. My objective is to dynamically route messages to specific MongoDB collections based on their Kafka message keys.Here’s an example Kafka message key:I intend to use this key to dynamically determine the target MongoDB database and collection for each Kafka message.Below is my current connector configuration:My expectation is that the namespace.mapper.key.database.field property should route messages to the weather_db database, but it’s currently routing to the default local database.Notes:Any guidance or insights on configuring the connector correctly would be greatly appreciated. Thank you!",
"username": "Denes_Juranyi"
},
{
"code": "",
"text": "The configuration I shared is indeed functional. Upon closer examination, I realized the discrepancy was due to the use of single quotes in the message key, rather than the standard double quotes. Consequently, the key I presented in my post (“database”) did not match the one I was testing with (‘database’).",
"username": "Denes_Juranyi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issues with Dynamic Routing to MongoDB Collections using Kafka Connector Sink | 2023-07-20T14:14:26.787Z | Issues with Dynamic Routing to MongoDB Collections using Kafka Connector Sink | 613 |
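A sketch of the producer side for completeness: emitting a message whose key is serialized with JSON.stringify, which always produces the double quotes the FieldPathNamespaceMapper needs. The use of kafkajs, the broker address, and the payload fields are assumptions here; the topic name and key fields match the connector configuration above.

// Produce a message with a JSON key that the sink connector can route on.
const { Kafka } = require("kafkajs");

async function produce() {
  const kafka = new Kafka({ clientId: "weather-producer", brokers: ["kafka:9092"] });
  const producer = kafka.producer();
  await producer.connect();

  await producer.send({
    topic: "processed_weather_topic",
    messages: [
      {
        // JSON.stringify always emits double quotes, avoiding the
        // single-quoted key that broke routing in this thread.
        key: JSON.stringify({ database: "weather_db", collection: "random_col" }),
        value: JSON.stringify({ temperature: 21.5, city: "Budapest" }), // illustrative payload
      },
    ],
  });

  await producer.disconnect();
}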
null | [
"aggregation"
] | [
{
"code": "{\"_id\":1,\n\"UniqueID\":\"111\",\n\"Search_Criteria\":\"NameOfOrg\",\n\"Searched_value\":\"IBM\",\n\"UserNavPage_YN\":\"Y\",\n\"PDF_DL\":\"N\",\n\"Excel_DL\":\"Y\",\n\"Id_of_entity\":\"121212\",\n\"ProcessedYN\":\"N\",\n},\n{\"_id\":2,\n\"UniqueID\":\"111\",\n\"Search_Criteria\":\"NameOfOrg\",\n\"Searched_value\":\"IBM\",\n\"UserNavPage_YN\":\"Y\",\n\"PDF_DL\":\"Y\",\n\"Excel_DL\":\"N\",\n\"Id_of_entity\":\"121212\",\n\"ProcessedYN\":\"N\",\n},\n{\"_id\":3,\n\"UniqueID\":\"222\",\n\"Search_Criteria\":\"NameOfOrg\",\n\"Searched_value\":\"Tesla\",\n\"UserNavPage_YN\":\"Y\",\n\"PDF_DL\":\"N\",\n\"Excel_DL\":\"N\",\n\"Id_of_entity\":\"2121\",\n\"ProcessedYN\":\"N\",\n},\n{\"_id\":4,\n\"UniqueID\":\"222\",\n\"Search_Criteria\":\"NameOfOrg\",\n\"Searched_value\":\"Tesla\",\n\"UserNavPage_YN\":\"Y\",\n\"PDF_DL\":\"N\",\n\"Excel_DL\":\"Y\",\n\"Id_of_entity\":\"2121\",\n\"ProcessedYN\":\"N\",\n}\nExpected Output:\nThe two docs with UniqueID = 111, should be merged to single doc as below: \n\n{\n\"UniqueID\":\"111\",\n\"Search_Criteria\":\"NameOfOrg\",\n\"Searched_value\":\"IBM\",\n\"UserNavPage_YN\":\"Y\", // Since both docs have same value so it should remain Y\n\"PDF_DL\":\"Y\", // if any doc with same uniqueId contains Y in this field then keep this as Y\n\"Excel_DL\":\"Y\", // if any doc with same uniqueId contains Y in this field then keep this as Y\n\"Id_of_entity\":\"121212\", \n\"ProcessedYN\":\"N\",\n}\n",
"text": "Hi,I have been trying to merge multiple documents to a single document based on a unique id present on all these docs of a collection. I have tried using group, $mergeObjects , $set however I m still not able to achieve the expected output, didnt find any similar question on the internet as well so can someone help to solve this?\nIn below sample documents of the collection, need to merge documents having same “UniqueID” and the value in same field names of the documents getting merged should be decided conditionally.Same logic to be applied to the two docs with UniqueId = 222",
"username": "Rai_Deepak"
},
{
"code": "db.getCollection(\"Test\").aggregate([\n{\n $group:{\n _id:'$UniqueID',\n \"Search_Criteria\" : {$addToSet:'$Search_Criteria'},\n \"Searched_value\" : {$addToSet:'$Searched_value'},\n \"UserNavPage_YN\" : {$addToSet:'$UserNavPage_YN'},\n \"PDF_DL\" : {$addToSet:'$PDF_DL'},\n \"Excel_DL\" : {$addToSet:'$Excel_DL'},\n \"Id_of_entity\" : {$addToSet:'$Id_of_entity'},\n \"ProcessedYN\" : {$addToSet:'$ProcessedYN'},\n }\n},\n{\n $addFields:{\n 'UniqueID':'$_id',\n \"Search_Criteria\" : {$arrayElemAt:['$Search_Criteria', 0]},\n \"Searched_value\" : {$arrayElemAt:['$Searched_value', 0]},\n \"UserNavPage_YN\" : {\n $cond:{\n if:{\n $or:[\n {\n $and:[\n {$eq:[1, {$size:'$UserNavPage_YN'}]},\n {$eq:['Y', {$arrayElemAt:['$UserNavPage_YN', 0]}]}\n ]\n },\n {\n $gt:[1, {$size:'$UserNavPage_YN'}]\n } \n ]\n },\n then:'Y',\n else:'N'\n }\n },\n \"PDF_DL\" : {\n $cond:{\n if:{\n $or:[\n {\n $and:[\n {$eq:[1, {$size:'$PDF_DL'}]},\n {$eq:['Y', {$arrayElemAt:['$PDF_DL', 0]}]}\n ]\n },\n {\n $gt:[1, {$size:'$PDF_DL'}]\n } \n ]\n },\n then:'Y',\n else:'N'\n }\n },\n \"Excel_DL\" : {\n $cond:{\n if:{\n $or:[\n {\n $and:[\n {$eq:[1, {$size:'$Excel_DL'}]},\n {$eq:['Y', {$arrayElemAt:['$Excel_DL', 0]}]}\n ]\n },\n {\n $gt:[1, {$size:'$Excel_DL'}]\n } \n ]\n },\n then:'N',\n else:'Y'\n }\n },\n \"Id_of_entity\" : {$arrayElemAt:['$Id_of_entity', 0]},\n \"ProcessedYN\" : {$arrayElemAt:['$ProcessedYN', 0]},\n \n }\n},\n{\n $project:{\n _id:0\n }\n}\n])\n\n",
"text": "As a very basic possible solution something like this:Mongo playground: a simple sandbox to test and share MongoDB queries onlineI’ve obviously made up some logic for the other fields, but you could take this approach, group up and push the other fields into an array and then do some logic on those, either like I did or a reduce function over the array to get the desired output.In this case, make the assumption that a field is either Y or N, if it’s more than one value then we know it’ll have Y in there, else check that the single value is Y and set the output accordingly.I’ve not tested this in terms of performance, obviously you’ll want an index for the grouping (probably sort first as well) and with a lot of data you may want to run it in batches possibly.",
"username": "John_Sewell"
},
{
"code": " $gt:[1, {$size:'$Excel_DL'}]",
"text": " $gt:[1, {$size:'$Excel_DL'}]Thank you very much, your solution indeed solves the problem to great extent. one minor change I made to the query was updating $gt:[1, {$size:‘$Excel_DL’}] to $gt:[{$size:‘$Excel_DL’},1] at all applicable places since $gt returns true if first argument is greater than the 2nd one. Cheers ",
"username": "Rai_Deepak"
},
{
"code": "",
"text": "D’oh…stupid error!Glad you got it working!",
"username": "John_Sewell"
}
] | How to merge multiple documents of a collection to a single document with same field names? | 2023-07-19T17:25:44.000Z | How to merge multiple documents of a collection to a single document with same field names? | 462 |
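A shorter variation on the accepted pipeline is possible under the same assumption that the flag fields only ever contain "Y" or "N": because "Y" sorts after "N" in string comparison, $max per group yields "Y" whenever any merged document contains it, while $first is used for fields that are identical across documents sharing a UniqueID (as in the samples). This is a sketch, not the answer given in the thread.

// Group by UniqueID; "Y" wins for any-of-Y flags via $max on the strings.
db.getCollection("Test").aggregate([
  {
    $group: {
      _id: "$UniqueID",
      Search_Criteria: { $first: "$Search_Criteria" },
      Searched_value: { $first: "$Searched_value" },
      UserNavPage_YN: { $max: "$UserNavPage_YN" },
      PDF_DL: { $max: "$PDF_DL" },
      Excel_DL: { $max: "$Excel_DL" },
      Id_of_entity: { $first: "$Id_of_entity" },
      ProcessedYN: { $first: "$ProcessedYN" },
    },
  },
  { $addFields: { UniqueID: "$_id" } },
  { $project: { _id: 0 } },
]);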
null | [
"java"
] | [
{
"code": "",
"text": "Hi,\nI am using mongodb 3.4.4,\nMachine got restarted due to power fluctuation. After that mongodb server was not starting at all.\nWhen devugged we found that one of the .ns file is corrupted and due to that mongo is not starting.\nAfter removing it from database directory mongo is starting.\nIs there any way to repair that .ns file only?\nWe tried repairing whole data directory but no luck.\nThanks in advance…",
"username": "Shivananda_Shiragavi"
},
{
"code": "",
"text": "Hello @Shivananda_Shiragavi ,Welcome to The MongoDB Community Forums! MongoDB v3.4 was released in November 2016 and reached end of life in January 2020. Starting in version 4.2, MongoDB removes the deprecated MMAPv1 storage engine.The possible way to restore .ns file is to restore from a recent backup.\nI would recommend you to upgrade your version to at least MongoDB version 4.4 as it still comes under support till February 2024. Generally, upgrading the servers to the latest stable release is always recommended as it involves many new features, bug fixes and removed vulnerabilities.Below you can find recent MongoDB releases with life cycle scheduleMongoDB Software Lifecycle SchedulesFor upgrade procedure kindly refer below linkRegards,\nTarun",
"username": "Tarun_Gaur"
}
] | Mmapv1 namespace corrupted | 2023-07-19T16:05:29.092Z | Mmapv1 namespace corrupted | 437 |
null | [
"atlas-search",
"text-search"
] | [
{
"code": "",
"text": "Hello!\nI believe having the $count and $sort inside $search has been asked by lot of people.\nWanted to know how others are doing it until the feature is provided.\nI am observing high memory usage by doing $count and $sort after the $search. If there are 2 different processes mongot for $search and mongod for $count and $sort what is causing this high memory?\nAny ideas?Best,\nSupriya",
"username": "Supriya_Bansal"
},
{
"code": "near{\n $search: {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"query\": \"Adam\",\n \"path\": \"A\"\n }},\n {\n \"range\": {\n \"gt\":0,\n \"lt\":12,\n \"path\": \"B\"\n }},\n {\n \"near\": {\n \"path\": \"B\",\n \"origin\":12,\n \"pivot\": 1,\n \"score\": { \"boost\": { \"value\": 999} }\n }\n }\n ]\n }\n }\n}\n",
"text": "Hi again @Supriya_Bansal!Customers are using the near operator to sort numeric results today. Most search use cases want to be sorted by relevance, so near influences relevance based on numeric order. However, if you want an absolute sort, you can boost the near clause with an arbitrarily high number like:We are releasing a search optimized count operator in a couple months, and search optimized sort soon after.",
"username": "Marcus"
},
{
"code": "",
"text": "Thanks @Marcus!\nI am using the “wildcard” operator and our clients want the results to be sorted alphabetically.\nWould you have an example?",
"username": "Supriya_Bansal"
},
{
"code": "$search",
"text": "We don’t support that today using the $search operator alone. It is coming soon.",
"username": "Marcus"
},
{
"code": "",
"text": "Thank you @Marcus for the update!!",
"username": "Supriya_Bansal"
},
{
"code": "",
"text": "@Marcus Still struggling with counting search result after full text search. A strange behavior I noticed is when I search for a query “harry potter” it counts fast as compare to “the harry potter”. What difference “the” makes in query which make it too slow.\nAny solution?",
"username": "Ankit_Saini"
},
{
"code": "",
"text": "Hi @Marcus, this there an update on this? Or has it been released yet?",
"username": "Arjen_Devries"
},
{
"code": "",
"text": "$count with $search is now available. Still waiting on the $sort inside $search.",
"username": "Supriya_Bansal"
},
{
"code": "",
"text": "Hey @Marcus, any update regarding $sort inside $search? In my use case not having $sort available inside $search will heavily affect performance",
"username": "Oz_Ben_Simhon"
},
{
"code": "",
"text": "@Marcus Any idea when faster sort will be coming to atlas search? stored source fields do not provide the speed that is acceptable.",
"username": "Kyle_Mcarthur2"
},
{
"code": "",
"text": "My use case would also benefit from “search optimized sort”. I’ll check back here and the changelog for any updates. Is this feature still coming soon?",
"username": "Daniel_Beaulieu"
},
{
"code": "",
"text": "@Elle_Shwer @Andrew_Davidson could you provide any ETA on “search optimized sort” coming to the platform? What is the best place to read about upcoming features?",
"username": "Daniel_Beaulieu"
},
{
"code": "",
"text": "We will keep our Feedback Portal and the changelog you linked up to date. Changelog most reliably.@Daniel_Beaulieu, if this is a challenge for you, do you mind sharing the following information:",
"username": "Elle_Shwer"
},
{
"code": "db.mycollection.aggregate(\n [\n {\n $search: {\n index: \"default\",\n returnStoredSource: true,\n count: {\n type: \"total\",\n },\n compound: {\n must: [\n {\n text: {\n query: \"60140fae076eeb001176245c\",\n path: \"tenantId\",\n },\n },\n ],\n },\n },\n },\n {\n $sort: { displayName: 1 },\n },\n {\n $facet: {\n totalCount: [\n { $limit: 1 },\n { $project: { meta: \"$$SEARCH_META\" } },\n { $group: { _id: null, count: { $sum: \"$meta.count.total\" } } },\n ],\n results: [\n { $skip: 0 },\n { $limit: 10 },\n {\n $lookup: {\n from: \"mycollection\",\n localField: \"_id\",\n foreignField: \"_id\",\n as: \"document\",\n },\n },\n {\n $unwind: {\n path: \"$document\",\n },\n },\n {\n $replaceRoot: {\n newRoot: \"$document\",\n },\n },\n ],\n },\n },\n ],\n {\n allowDiskUse: true,\n }\n);\n\n",
"text": "I’m implementing paging/sorting on a multi-tenant collection. Any reasonable filter on top of the full data set where results are in the tens or low thousands performs very will. Worst case is there is no filter (other than tenantId filter that gets added automatically). In this case, 750k total documents takes 43 seconds. I’m happy to provide more info or get on a call to discuss more, but figured i’d post what i have collected so far.",
"username": "Daniel_Beaulieu"
},
{
"code": "",
"text": "43 seconds with Stored Source? And what is your ideal response time?",
"username": "Elle_Shwer"
},
{
"code": "db.mycollection.aggregate([\n {\n $match: {\n tenantId: '60140fae076eeb001176245c'\n }\n },\n {\n $sort: {\n displayName: 1\n }\n },\n {\n $facet: {\n results: [{$skip: 0}, {$limit: 10}],\n total: [{$group: {_id: null, count: { $sum: 1 }}}]\n }\n }\n])\n",
"text": "Yes, 43 seconds is with stored source. The ideal performance is at least the same as non $search based that uses normal mongo index. This runs in 6 seconds.",
"username": "Daniel_Beaulieu"
},
{
"code": "",
"text": "Thanks for the response, may respond directly but in the mean time our team is evaluating this. Improving the performance for pagination and sort are extremely top of mind for us at the moment.",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "@Marcus could you please tell me how we can use sort inside the $search pipeline?",
"username": "Nitin_Malik"
},
{
"code": "",
"text": "seems to be not possible right now, only can sort after search, which is slow",
"username": "icp"
},
{
"code": "{\n $search: {\n compound: {\n should: [\n {\n near: {\n origin: new Date(),\n path: 'createdOn',\n pivot: 1,\n score: { boost: { value: 999 } },\n },\n },\n ],\n },\n count: { type: 'total' },\n index: 'default',\n returnStoredSource: true,\n },\n }\n",
"text": "sorting on strings after $search is indeed slow. Atlas search results are sorted by relevance/score. You can boost the score to sort on numeric or date fields, but not on strings. For example, this will sort on createdDate descending",
"username": "Daniel_Beaulieu"
}
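For readers landing on this thread later: more recent Atlas Search releases added a dedicated `sort` option inside `$search`, which is the feature this discussion was waiting for. Below is a minimal sketch in Python/PyMongo, assuming an index named "default", a connection string placeholder, and that the string field being sorted (here `displayName`) has a sortable ("token") mapping in the search index; the exact syntax and requirements should be checked against the current Atlas Search documentation.

```python
# Sketch only: the dedicated `sort` option inside $search (added to Atlas Search
# after most of this thread was written). Index name, field names, and the
# connection string are placeholders; string fields generally need a sortable
# ("token") mapping in the search index for this to work.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
coll = client["mydb"]["mycollection"]

pipeline = [
    {
        "$search": {
            "index": "default",                        # assumed index name
            "text": {"query": "adam", "path": "name"},
            "sort": {"displayName": 1},                # alphabetical sort, inside mongot
        }
    },
    {"$limit": 10},
    {"$project": {"displayName": 1, "score": {"$meta": "searchScore"}}},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```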
] | $count and $sort inside $search | 2021-05-05T15:12:03.678Z | $count and $sort inside $search | 12,903 |
null | [] | [
{
"code": "struct HomeUIView: View {\n \n \n @StateObject var transactionModel = TransactionViewModel()\n @State private var expensesList: [ExpenseData] = []\n \n var body: some View {\n \n Vstack {\n \n@MainActor\nclass TransactionViewModel: ObservableObject {\n \n func deleteTransaction(expense: ExpenseData) {\n transactionState = .Loading\n let fbPath: String? = expense.storagePath\n let offlinePath: String? = expense.offlinePath\n let isDraft: Bool = expense.draft\n let uuid: String = expense.uuid",
"text": "i delete a record from the db, when querying again to refresh my list on the UI, it throws RLMException*', reason: ‘Object has been deleted or invalidated.’*here is the link for the gist:\n@Mohit_SharmaThanks",
"username": "Eman_Nollase"
},
{
"code": "@State private var expensesList: [ExpenseData] = []",
"text": "@State private var expensesList: [ExpenseData] = []Just a guess.That’s probably where the issue is. Its possible you’ve deleted the object from Realm, but that objects index still exists in the array, but is now pointing to a non-existent object.Best bet is to use Realm constructs to store data you’re using - e.g. Realm Results type objects always reflect the state of the underlying data - if an object is removed from Realm, it’s also removed from Results so therefore views that are being updated won’t try to access a non-existant object.Again, just a guess - without a minimal, verifiable example, it’s hard to say.",
"username": "Jay"
},
{
"code": "let _idx = expensesList.firstIndex { oldExp in\n oldExp.uuid == expToDelete!.uuid\n }\n if let __idx = _idx {\n expensesList.remove(at: __idx)\n }\n",
"text": "Hi Jay,I updated the gist, i added this line of codebut i am having issue, i am keen to use @ObserverResults by due to my complex query it may not be suffice, i also updated the gist of the complete query (loadExpenses) result i expected. If i can use @ObserveResults to achieve that, then well and good, i just need some concrete examplesThanks",
"username": "Eman_Nollase"
},
{
"code": "struct HomeUIView: View {\n \n \n @StateObject var transactionModel = TransactionViewModel()\n @State private var expensesList: [ExpenseData] = []\n \n var body: some View {\n \n Vstack {\n \n@MainActor\nclass TransactionViewModel: ObservableObject {\n \n func deleteTransaction(expense: ExpenseData) {\n transactionState = .Loading\n let fbPath: String? = expense.storagePath\n let offlinePath: String? = expense.offlinePath\n let isDraft: Bool = expense.draft\n let uuid: String = expense.uuid",
"text": "@ObserverResultsI already make it work using the @ObserveResults i already updated the my public gist so that other may refer to it in the future\n\n…though all the business logic move it to UI and i think for now is ok as long as it is working ",
"username": "Eman_Nollase"
}
] | SwiftUI crash on ondelete | 2023-07-20T15:01:30.713Z | SwiftUI crash on ondelete | 577 |
[
"queries"
] | [
{
"code": "db.collection.find({}).select console.time('Execution Time');\n const events = await Events.find({}).select(\n \"-_id -id -expireAt -homeNmae -awayName -competition\"\n );\n console.timeEnd('Execution Time');\n.explain(){\n explainVersion: '1',\n queryPlanner: {\n namespace: 'foodball.events',\n indexFilterSet: false,\n parsedQuery: {},\n queryHash: '17830885',\n planCacheKey: '17830885',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'PROJECTION_DEFAULT',\n transformBy: [Object],\n inputStage: [Object]\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 1396,\n executionTimeMillis: 5,\n totalKeysExamined: 0,\n totalDocsExamined: 1396,\n executionStages: {\n stage: 'PROJECTION_DEFAULT',\n nReturned: 1396,\n executionTimeMillisEstimate: 1,\n works: 1397,\n advanced: 1396,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n transformBy: [Object],\n inputStage: [Object]\n },\n allPlansExecution: []\n },\n command: {\n find: 'events',\n filter: {},\n projection: {\n _id: 0,\n id: 0,\n expireAt: 0,\n homeNmae: 0,\n awayName: 0,\n competition: 0\n },\n '$db': 'foodball'\n },\n serverInfo: {\n host: 'ac-omjpzdm-shard-00-02.tmhozcd.mongodb.net',\n port: 27017,\n version: '6.0.8',\n gitVersion: '3d84c0dd4e5d99be0d69003652313e7eaf4cdd74'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 16793600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 33554432,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: new Timestamp({ t: 1689649740, i: 11 }),\n signature: {\n hash: Binary.createFromBase64(\"V7X4bMZ0EByYM+mxZcxLQLBjdn0=\", 0),\n keyId: new Long(\"7237192774783598597\")\n }\n },\n operationTime: new Timestamp({ t: 1689649740, i: 11 })\n}\n",
"text": "I had around 2000 objects in this collection, a sample object:I want to extract all object db.collection.find({}) with selected field .select using Mongoose :The Execution Time is more than 2s everytime, it’s pretty slow in this small collection. May I ask why and how I can improve..explain():P.S. I didn’t subscribe to any paid plan, will it affect the database performance?",
"username": "WONG_TUNG_TUNG"
},
{
"code": "",
"text": "Hi there,Is there an index covering this query?",
"username": "Carl_Champain"
},
{
"code": " console.time('Execution Time');\n const events = await Events.find({}).select(\n \"-_id -id -expireAt -homeNmae -awayName -competition\"\n );\n console.timeEnd('Execution Time');\nexecutionStats: {\n executionSuccess: true,\n nReturned: 1396,\n executionTimeMillis: 5\n.time(){}query.find()",
"text": "The Execution Time is more than 2s everytime, it’s pretty slow in this small collection. May I ask why and how I can improveI assume “Execution Time” mentioned above is the time it takes to get the results back. This is different to the execution time on the server which appears to be 5ms:I’m not familiar with the .time() you’ve noted in your code snippet but the 2 seconds you’ve mentioned sounds like the amount of time it takes the request to reach the server, execute, get the response back (possibly include other processes). This is probably going to include any network latency. Is the client you’re performing the query from on the same region as the atlas cluster?Additionally, what’s the use case for returning all documents? i.e. Using {} in the query portion of the .find().",
"username": "Jason_Tran"
},
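To separate driver/network time from server time as described above, one option is to measure the full round trip in the client and compare it with the server-reported executionTimeMillis from an explain. A hedged Python/PyMongo sketch, with a placeholder connection string and the database/collection names taken from the explain output in this thread:

```python
# Sketch only: compare end-to-end client time with the server-reported
# executionTimeMillis. Connection string is a placeholder; database/collection
# names follow the explain output in the thread.
import time
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
db = client["foodball"]

# Round trip: includes network latency, BSON decoding, and cursor iteration.
start = time.perf_counter()
docs = list(db["events"].find({}, {"_id": 0, "expireAt": 0}))
print(f"round trip: {(time.perf_counter() - start) * 1000:.0f} ms for {len(docs)} docs")

# Server-side only: the same query through the explain command.
stats = db.command(
    "explain",
    {"find": "events", "filter": {}, "projection": {"_id": 0, "expireAt": 0}},
    verbosity="executionStats",
)["executionStats"]
print("server executionTimeMillis:", stats["executionTimeMillis"])
```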
{
"code": ".find()",
"text": "The problem is solved aftering I changing the databse region to my region(original wasn’t).\nbut there is few questions I wanna ask:",
"username": "WONG_TUNG_TUNG"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Slow performance on simple query using MongoDB ATLAS | 2023-07-18T03:02:19.028Z | Slow performance on simple query using MongoDB ATLAS | 454 |
|
null | [
"queries"
] | [
{
"code": "",
"text": "Hey,I am using “$in” operator to pass ids in mogodb query. The problem is the number of ids can be anywhere between 1 to 1k.What is the limit to use $in operator?",
"username": "Sahildeep_Kaur"
},
{
"code": "",
"text": "Documentation does not suggest that things will go well doing that:The recommendation there is in the 10’s of items, more than that can cause performance issues.What exactly are you trying to do with a find using “$in” this large?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Hey @John_SewellThank you for the response, In my project I have a requirement such that there are number of providers and I need to get the providers which are available. I don’t want to get all the providers.Thank you!",
"username": "Sahildeep_Kaur"
},
{
"code": "",
"text": "How many are there, at some point you’re better doing not equal to!Are there no other options to filter on? In a relational system doing massive In statements can also cause issues (in a coincidence I ran into this today when debugging some code!) so you may join onto another table that can have the filter applied or something similar.Is it possible to update the data to have a marker or something that’s easier to search for without doing a massive $in?Sorry I cant think of an immediate quick solution, it kinds of depends on the data you have and what changes you can do. Perhaps someone else can chip in with a solution they’ve deployed before.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thank you @John_Sewell , There is no marker to identify the ids in the collection for the given requirement.If I do not use “$in” then I have to use lookup.",
"username": "Sahildeep_Kaur"
},
{
"code": "$in$in",
"text": "While there is no hard limit on the number of values you can use with the $in operator, practical limitations arise due to the BSON document size restriction. If your $in query results in a BSON document that exceeds the 16 MB limit, MongoDB will reject the query.",
"username": "adeola_oladeinde"
},
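A rough way to see how close a large $in filter gets to that document-size limit is to BSON-encode the filter document itself. A small Python/PyMongo sketch with a hypothetical id list:

```python
# Sketch only: estimate how big the filter document becomes for a large $in
# list (the whole command has to stay under the 16 MB BSON limit).
import bson
from bson import ObjectId

ids = [ObjectId() for _ in range(100_000)]   # hypothetical id list
filter_doc = {"_id": {"$in": ids}}

size_mb = len(bson.encode(filter_doc)) / (1024 * 1024)   # bson.encode ships with PyMongo
print(f"{len(ids)} ids -> filter document of {size_mb:.2f} MB")
```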
{
"code": "",
"text": "True, possibly best to try and take some metrics on how it actually performs with your workload…try it with 10,100,1000,10000 elements in the $in stage and see how performance changes before you do anything drastic!",
"username": "John_Sewell"
},
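A sketch of that kind of measurement in Python/PyMongo; the database and collection names, the use of _id as the $in field, and the sizes tested are assumptions to adapt to your own data:

```python
# Sketch only: time the same query with growing $in lists and record the
# server-side stats alongside the client round trip. Database, collection,
# and the choice of _id as the $in field are assumptions.
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["test"]
coll = db["providers"]

all_ids = [d["_id"] for d in coll.find({}, {"_id": 1}).limit(10_000)]

for n in (10, 100, 1_000, 10_000):
    ids = all_ids[:n]

    start = time.perf_counter()
    returned = len(list(coll.find({"_id": {"$in": ids}})))
    client_ms = (time.perf_counter() - start) * 1000

    stats = db.command(
        "explain",
        {"find": "providers", "filter": {"_id": {"$in": ids}}},
        verbosity="executionStats",
    )["executionStats"]

    print(f"n={n:>6}  returned={returned:>6}  client={client_ms:7.1f} ms  "
          f"server={stats['executionTimeMillis']} ms  keys={stats['totalKeysExamined']}")
```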
{
"code": "",
"text": "Thank you @adeola_oladeinde and @John_Sewell , I will try it as you suggested with different limits and check the result. Thank you!",
"username": "Sahildeep_Kaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Limit of $in operator | 2023-07-20T12:17:46.717Z | Limit of $in operator | 338 |
[
"queries",
"crud"
] | [
{
"code": "db.mycollection.updateMany(\n{ thumbnail_url: { $regex: /no-img-xx.png/ } },\n [{\n $set: { thumbnail_url: {\n $replaceOne: { input: \"$URL\", find: \"no-img-xx.png\", replacement: \"no-img.png\" }\n }}\n }]\n)\n",
"text": "I wanted to search and replace one image url. On a single document based collection this following code worked for me.However i have another collection in which the documents are inside the array\nI wanted to replace this string staticflyo to helloworlditems.urlAny help please",
"username": "abdul_rashid"
},
{
"code": "db.collection.update({},\n{\n $set: {\n \"data.$[element].url\": \"boo\"\n }\n},\n{\n arrayFilters: [\n {\n \"element.url\": \"hello\"\n }\n ],\n multi: true\n})\ndata.$[level1].modedata.$[level2].deepValue",
"text": "You can use the arrayFilters operator:Mongo playground: a simple sandbox to test and share MongoDB queries onlineYou can put some moderately complex logic in the array filters should the need arise as well as have multi dimensional updates, i.e.data.$[level1].modedata.$[level2].deepValue",
"username": "John_Sewell"
}
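If the goal is substring replacement inside every items.url (rather than overwriting the whole value, as arrayFilters does), an aggregation-pipeline update with $map and $replaceOne is another option on MongoDB 4.4+. A hedged Python/PyMongo sketch; the connection string and collection name are placeholders, and the array/field names and strings follow the question:

```python
# Sketch only: replace a substring inside every element's `url` using a
# pipeline update with $map + $replaceOne (MongoDB 4.4+). The connection
# string and collection name are placeholders; the array/field names follow
# the question (items.url).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["mycollection"]

result = coll.update_many(
    {"items.url": {"$regex": "staticflyo"}},     # only documents that need changing
    [
        {
            "$set": {
                "items": {
                    "$map": {
                        "input": "$items",
                        "as": "item",
                        "in": {
                            "$mergeObjects": [
                                "$$item",
                                {"url": {"$replaceOne": {
                                    "input": "$$item.url",
                                    "find": "staticflyo",
                                    "replacement": "helloworld",
                                }}},
                            ]
                        },
                    }
                }
            }
        }
    ],
)
print("modified documents:", result.modified_count)
```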
] | Doing Find and Replace of all instance of the search match inside the array | 2023-07-21T06:38:36.689Z | Doing Find and Replace of all instance of the search match inside the array | 435 |
|
null | [
"sharding"
] | [
{
"code": "",
"text": "HI All,I am not able to install mangodb in raspberrypi4B getting below error$ sudo apt-get install -y mongodb-org\nReading package lists… Done\nBuilding dependency tree… Done\nReading state information… Done\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:The following packages have unmet dependencies:\nmongodb-org-mongos : Depends: libssl1.1 (>= 1.1.0) but it is not installable\nmongodb-org-server : Depends: libssl1.1 (>= 1.1.0) but it is not installable\nmongodb-org-shell : Depends: libssl1.1 (>= 1.1.0) but it is not installable\nE: Unable to correct problems, you have held broken packages.my OS version is:~$ lsb_release -a\nNo LSB modules are available.\nDistributor ID:\tUbuntu\nDescription:\tUbuntu 22.10\nRelease:\t22.10\nCodename:\tkineticAny suggestion welcome",
"username": "Siva_kumar8"
},
{
"code": "Install & Configure MongoDB on the Raspberry PiCCFLAGS-march=armv8",
"text": "Hello @Siva_kumar8 ,Please follow below tutorial on Install & Configure MongoDB on the Raspberry PiInstall and correctly configure MongoDB on Raspberry PiThis will give you step by step instructions and in case you face any issues, I would request you to check a few following threads as installation and relevant errors about Raspberry PI are often discussed in various threads such as:Note: MongoDB 5.0 requires ARM v8.2-A or later and the Raspberry Pi 4 uses an ARM Cortex-A72 which is ARM v8-A. Unfortunately this means the pre-built packages for MongoDB 5.0 will not support Raspberry Pi 4. However, you should be able to build from source by adding CCFLAGS-march=armv8 to the SCons invocation. See Building MongoDB for full instructions. If building from source is a daunting prospect, an alternative would be to install the latest ARM 64 package of MongoDB 4.4.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Not able to install mangodb in raspberrypi4B | 2023-07-17T10:02:00.031Z | Not able to install mangodb in raspberrypi4B | 811 |
[
"java"
] | [
{
"code": "",
"text": "Hi,I am trying to connect to MongoDB using DBCPConnectionPool in NiFi. I have given the details as follows.\n\nIssue819×568 37.9 KB\nI have included the following jar files as well:\nbson-4.9.1-javadoc.jar\nbson-record-codec-4.9.1-javadoc.jar\nmongodb-driver-core-4.9.1-javadoc.jar\nmongodb-driver-sync-4.9.1-javadoc.jarBut the connector seems to be just in ‘Enabling’ state and the following error can be seen.ERROR: StandardControllerServiceNode[service=DBCPConnectionPool[id=115f11bf-139d-125f-3ba6-4ff6a443109], name=DBCPConnectionPool, active=true] Failed to invoke @OnEnabled method: org.apache.nifi.processor.exception.ProcessException:Driver class mongodb.jdbc.MongoDriver is not found\n-Caused by: java.lang.ClassNotFoundException:mongodb.jdbc.MongoDriverMy mongoDB version is ‘6.0.1’ and my NiFi Version is 1.20.0.Any help would be appreciated! Thank you!",
"username": "Janudi_Disara"
},
{
"code": "",
"text": "This JDBC driver for MongoDB is not built or maintained by MongoDB, Inc. I think you’re going to have to ask for help from the maintainer, which looks to be https://www.cdata.com/.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Thank Jeff. I tried with the cdata jar file and it works. Do you have any idea as to why it works with the cdata jar files only?",
"username": "Janudi_Disara"
},
{
"code": "",
"text": "Getting the same error as mentioned above, is there any other drivers other than from cdata that can be used",
"username": "sparsh_kanak"
}
] | Connecting to MongoDB using NiFi | 2023-05-03T11:51:14.871Z | Connecting to MongoDB using NiFi | 1,100 |
|
null | [
"python"
] | [
{
"code": "{\n \"new hires\": [\n {\"name\": \"Interpol Lundquist\", \"age\": \"50\", \"sex\": \"male\", \"accounts\": \"interpol_lundquist\", \"join_date\": \"2010-08-12 01:42:28\"},\n {\"name\": \"Hebrides Adair\", \"age\": \"47\", \"sex\": \"male\", \"accounts\": \"hebrides_adair\", \"join_date\": \"2013-07-16 20:47:08\"},\n {\"name\": \"Cantabrigian Gilchrist\", \"age\": \"21\", \"sex\": \"male\", \"accounts\": \"cantabrigian_gilchrist\", \"join_date\": \"2010-02-18 02:46:07\"},\n {\"name\": \"Missy Chesapeake\", \"age\": \"42\", \"sex\": \"male\", \"accounts\": \"missy_chesapeake\", \"join_date\": \"2015-09-17 08:17:45\"}\n ]\n}\n[{\n \"_id\": {\n \"$binary\": {\n \"base64\": \"xraIZaUFinO8IOoY5cqI0A==\",\n \"subType\": \"03\"\n }\n },\n \"name\": \"name\",\n \"age\": \"52\",\n \"sex\": null,\n \"accounts\": \"\"\n}]\n#!/usr/bin/env python3\n#-*- coding: utf-8 -*-\n\n# import the built-in JSON library\nimport json\n\n# import the BSON library from PyMongo's bson\nfrom bson import BSON\n\n# here's an example of an invalid JSON string\nbad_json = '{\"this is\": \"missing the closing bracket\"'\n\n# json.loads() will throw a ValueError if JSON is invalid\ntry:\n json.loads(bad_json)\nexcept ValueError as error:\n print (\"json.loads() ValueError for BSON object:\", error)\n\n# declare an empty string object\njson_string = \"\"\n\n# use Python's open() function to load a JSON file\nwith open(\"data.json\", 'r', encoding='utf-8') as json_data:\n print (\"data.json TYPE:\", type(json_data))\n\n # iterate over the _io.TextIOWrapper returned by open() using enumerate()\n for i, line in enumerate(json_data):\n # append the parsed IO string to the JSON string\n json_string += line\n\n# make sure the string is a valid JSON object first\ntry:\n # use json.loads() to validate the string and create JSON dict\n json_docs = json.loads(json_string)\n\n # loads() method returns a Python dict\n print (\"json_docs TYPE:\", type(json_docs))\n\n # return a list of all of the JSON document keys\n print (\"MongoDB collections:\", list(json_docs.keys()))\n\nexcept ValueError as error:\n # quit the script if string is not a valid JSON\n print (\"json.loads() ValueError for BSON object:\", error)\n quit()\n\n# iterate the json_docs dict keys (use iteritems() for Python 2.7)\nfor key, val in json_docs.items():\n\n # iterate each JSON document in the list\n for i, doc in enumerate(json_docs[key]):\n # bytearray([source[, encoding[, errors]]])\n\n try:\n # print the original JSON document\n print (\"\\ndoc:\", doc)\n\n # encode the document using the BSON library\n data = BSON.encode(doc)\n print (\"BSON encoded data:\", type(data))\n\n myclient = MongoClient(\"mongodb://localhost:27017/\")\n mydb = myclient[\"test\"]\n mycol = mydb[\"BSON\"]\n data = BSON.encode({'a': 1})\n mycol.insert_many(data)\n\n # print the result of the BSON encoding\n print (\"data:\", data)\n\n # decode the BSON document back to a Python dict object\n decode_doc = BSON.decode(data)\n print (\"decode_doc:\", type(decode_doc))\n\n except Exception as error:\n # catch any BSON encoding or decoding errors\n print (\"enumerate() JSON documents ERROR:\", error)\n\n # # decode the BSON document back to a Python dict object\n # decode_doc = BSON.decode(data)\n # print (\"decode_doc:\", type(decode_doc))\n\n except Exception as error:\n # catch any BSON encoding or decoding errors\n print (\"enumerate() JSON documents ERROR:\", error)\n",
"text": "We have a json file as structure below. .sample.json:we need to saving the above json data in to the mongodb with BSON encoded format as below format. We need to save the id with binary format and subtype.sample code:When trying to run the above code, facing the below error:\nenumerate() JSON documents ERROR: document must be an instance of dict, bson.son.SON, bson.raw_bson.RawBSONDocument, or a type that inherits from collections.MutableMapping",
"username": "sriramakrishna_s"
},
{
"code": "{\n \"_id\": {\n \"$binary\": {\n \"base64\": \"xraIZaUFinO8IOoY5cqI0A==\",\n \"subType\": \"03\"\n }\n },\n_id",
"text": "Hey @sriramakrishna_s,Welcome to the MongoDB Community!MongoDB stores data in BSON format both internally and over the network, so converting it to BSON before inserting is not required as it is done automatically by the server.The JSON format is a human-readable form of the data in MongoDB.Could you please provide more details about why you are saving the _id in such a format?May I ask specifically what you are trying to achieve here? This will help us understand the context better, in order to assist you effectively.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
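Building on that point, here is a minimal Python sketch of inserting the parsed JSON documents directly; the driver handles the BSON conversion. The Binary _id with subtype 3 is shown only because the question's expected output uses it, and the file, database, and collection names are assumptions:

```python
# Sketch only: hand the parsed JSON dicts straight to PyMongo -- the driver
# converts them to BSON on the wire, so no manual BSON.encode() step is needed.
# The Binary _id with subtype 3 mirrors the expected output in the question;
# file, database, and collection names are assumptions.
import json
import uuid

from bson.binary import Binary
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
col = client["test"]["BSON"]

with open("data.json", "r", encoding="utf-8") as f:
    data = json.load(f)                 # json.load reads and validates the whole file

docs = []
for doc in data["new hires"]:
    doc = dict(doc)                     # copy so the source list stays untouched
    doc["_id"] = Binary(uuid.uuid4().bytes, 3)   # binary _id, subtype 3 (legacy UUID)
    docs.append(doc)

result = col.insert_many(docs)
print("inserted:", len(result.inserted_ids))
```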
] | Converting JSON File to DECODED BSON data and saving in mongo db | 2023-06-25T15:46:10.175Z | Converting JSON File to DECODED BSON data and saving in mongo db | 1,611 |
null | [
"queries",
"replication",
"transactions",
"database-tools",
"backup"
] | [
{
"code": "mongodump -u admin --authenticationDatabase=admin -h <exisiting_running_host_ip> -d local -c oplog.rs -o oplogDumpDir/\nuse local;\ndb.oplog.rs.find({op:\"i\"}).sort({$natural: -1}).limit(10);\nmongorestore -v -u root --noIndexRestore --oplogReplay --drop --oplogLimit 1689689931:7 oplogRecoveryDir/\nFailed: restore error: error applying oplog: applyOps: (DuplicateKey) E11000 duplicate key error collection: test.test2 index: test-id_1_local-date_1_type_1 dup key: { test-id: ObjectId('6454a05b0389d90393268baf'), local-date: 20220915, type: \"asdada\" }\n",
"text": "Hello Team,Mongo Server: v6.0.4\nOperating System: Amazon Linux2\nMode: 3 Node ReplicasetBackground:\nWe create nightly snapshots of EBS Volume of /data mount location of our primary replicaset.Our goal is to be able to perform a point in time restore of the Mongo database from T-1 FileSysytem Snapshot copy. The database receives multiple updates/deletions/inserts for the records.Steps we have done so far:Copied the opLog timestamp t and i values from the existing replicaset\nexample: ts: Timestamp({ t: 1689689931, i: 7 }),Now; we want to replay the transactions in new EC2 instance until the above timestamp.We are receiving the below errorI have given a read to MongoDB community forum posts and JIRA tickets regarding the same; but unable to find any solution.I have tried the mongorestore with different parameters as well (–noIndexRestore --maintainInsertionOrder --keepIndexVersion ) but; still no solution.We would appreciate for help and direction to move forward.",
"username": "Ritesh_Kumar6"
},
{
"code": ">use local\n>db.oplog.rs.find({op:\"i\"}).sort({$natural: -1}).limit(1);\n\n# Sample Output for ts value\nts: Timestamp({ t: 1689748934, i: 3 }),\nmongodump -u username --authenticationDatabase=admin -h secondary-node-ip-existing-cluster -d local -c oplog.rs --query '{\"ts\": {\"$gt\": {\"$timestamp\": {\"t\": 1689748934, \"i\": 3}}}}' -o oplogDumpDir/\nmv oplogDumpDir/local/oplog.rs.bson oplog.bson\nrm -rf oplogDumpDir/local\nmongorestore --authenticationDatabase=admin -u root -h new-ec2-instance-ip --oplogReplay --oplogLimit 1689837570:1 oplogDumpDir/\n",
"text": "I was able to resolve the issue and perform the PITR of mongo 3 node replicaset cluster.Adding the steps in-case someone else stumbles across this post.After restoring the T-1 EBS Snapshot of /data mount point in new EC2 instance.\n…We verified the restoration using one document that gets updated very frequently.Two things that we observed:\na. The transactions were replayed but the oplog entry in the recovered instance was not updated. I am not sure if this is Mongo Server behaviour or not.b. It took nearly 12 hours just to replay the 14GB oplog on t3.large EC2 instance with /data EBS Volume of gp2 type with 1200 IOPS with no active user connections. This is too long for production environment.While searching for solutions; we stumbled acrosspbmtool as well. It looks promising, but we did not research much into that implementation.",
"username": "Ritesh_Kumar6"
},
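For the first step above (capturing the t and i values), here is a small Python/PyMongo sketch that reads the newest oplog entry; host and credentials are placeholders:

```python
# Sketch only: read the newest oplog entry's timestamp so its t/i values can be
# plugged into mongodump's --query and mongorestore's --oplogLimit. Host and
# credentials are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://admin:secret@secondary-host:27017/?authSource=admin")

last = client["local"]["oplog.rs"].find_one(sort=[("$natural", -1)])
ts = last["ts"]                         # bson.timestamp.Timestamp

print(f"latest oplog ts: t={ts.time}, i={ts.inc}")
print(f"e.g. mongorestore ... --oplogReplay --oplogLimit {ts.time}:{ts.inc}")
```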
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Mongorestore: opLogReplay E11000 duplicate key error | 2023-07-19T11:11:54.247Z | Mongorestore: opLogReplay E11000 duplicate key error | 648 |
null | [] | [
{
"code": "",
"text": "can’t browse data anymore. i have been getting this error An error occurred while querying your MongoDB deployment. Please try again in a few minutes. there are no instructions on what happened and what to do next. its a free tier. please help",
"username": "hakan_ahmet"
},
{
"code": "",
"text": "Hello @hakan_ahmet ,Welcome to The MongoDB Community Forums! Also, I would recommend you contact the Atlas in-app chat support team regarding this as they will be able to help you better in this case. Please provide them with the errors you’re receiving as well.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Getting this strange error to my cluster. can't browse and see it anymore | 2023-07-20T05:33:44.211Z | Getting this strange error to my cluster. can’t browse and see it anymore | 454 |
null | [
"queries",
"node-js",
"mongodb-shell",
"storage",
"mongocli"
] | [
{
"code": "",
"text": "I have been looking a way how to start/stop the mongo db cluster for M0 Sandbox cluster tier.Can somebody guide me through the process.",
"username": "srinivasa_reddy_challa"
},
{
"code": "M0M2M5",
"text": "Hi @srinivasa_reddy_challa,Manually pausing M0, M2 and M5 tier clusters isn’t possible. As per the pause one cluster documentation:Atlas automatically pauses all inactive M0 , M2 , and M5 clusters after 60 days.What’s the use case for wanting to pause the M0? Do you mean restarting the mongod processes? If so, this is not possible.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I feel as the cluster is running irrespective of usage.I thought of pausing it in the time of no usage.",
"username": "srinivasa_reddy_challa"
},
{
"code": "M0M2M5",
"text": "Thanks for getting back to me regarding your concerns with the ability to pause M0’s. At this stage it isn’t possible but as noted before, Atlas will automatically pauses all inactive M0 , M2 , and M5 clusters after 60 days.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Stop/Start Mongo db cluster of tier M0 Sandbox through Atlas | 2023-07-19T19:09:03.625Z | Stop/Start Mongo db cluster of tier M0 Sandbox through Atlas | 623 |
null | [
"queries",
"golang",
"atlas-search"
] | [
{
"code": "searchStage := bson.D{\n\t\t{\"$search\", bson.M{\n\t\t\t\"index\": \"sampleAddress\",\n\t\t\t\"compound\": bson.D{\n\t\t\t\t{\"filter\", bson.A{\n\t\t\t\t\tbson.D{{\"text\", bson.D{{\"query\", \"1994\"}, {\"path\", \"address\"}}}},\n\t\t\t\t\tbson.D{{\"text\", bson.D{{\"query\", \"test\"}, {\"path\", \"image\"}}}},\n\t\t\t\t}},\n\t\t\t},\n\t\t}},\n\t}\n",
"text": "While developing a pipeline in Golang for MongoDB, I am in the process of migrating to AtlasSearch. I have some questions about filter, must, and should.As far as I know, must represents an AND condition, should represents an OR condition, and filter acts as an AND condition. However, I’m not entirely sure about the differences between must and filter.The query I am currently using is as follows:Could you explain the differences between must and filter using examples? The query results appear to be the same, so I’m not sure how they differ.",
"username": "_WM2"
},
{
"code": "filterfiltermustfilter",
"text": "Hi @_WM2,As per the filter documentation:filter behaves the same as must , except that the filter clause is not considered in a returned document’s score, and therefore does not affect the order of the returned documents.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
}
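One way to see the difference in practice is to run the same clause once under must and once under filter and project the relevance score: both return the same documents, but only must contributes to searchScore. A hedged Python/PyMongo sketch reusing the index and field names from the question; the connection string and collection name are assumptions:

```python
# Sketch only: run the same clause as `must` and as `filter` and project the
# relevance score. Both return the same documents, but only `must` contributes
# to searchScore. Index, field names, and connection string are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster.example.mongodb.net")
coll = client["mydb"]["addresses"]

def scores(clause):
    pipeline = [
        {
            "$search": {
                "index": "sampleAddress",
                "compound": {clause: [{"text": {"query": "1994", "path": "address"}}]},
            }
        },
        {"$limit": 3},
        {"$project": {"address": 1, "score": {"$meta": "searchScore"}}},
    ]
    return [round(d["score"], 3) for d in coll.aggregate(pipeline)]

print("must:  ", scores("must"))    # relevance-based scores
print("filter:", scores("filter"))  # clause excluded from scoring
```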
] | AtlasSearch filter vs must | 2023-07-21T01:18:43.791Z | AtlasSearch filter vs must | 509 |
null | [
"swift"
] | [
{
"code": "@ObservedResult(UserPreferences.self) var userPreferences = defaultValue@AppStoragelazy@ObservedRealmObjectlazyCannot use instance member 'getUserPreferences' within property initializerimport Foundation\nimport RealmSwift\n\nclass ViewModel: ObservableObject {\n @ObservedResults(UserPreferences.self) private var _userPreferences\n \n // @ObservedRealmObject lazy var userPreferences: UserPreferences = getUserPreferences()\n \n private func getUserPreferences() -> UserPreferences {\n if let userPreferences = _userPreferences.first {\n return userPreferences\n }\n let newUserPreferences = UserPreferences()\n $_userPreferences.append(newUserPreferences)\n return newUserPreferences\n }\n}\nThread 1: \"Frozen Realms do not change and do not have change notifications.\"import RealmSwift\nimport SwiftUI\n\nclass UserPreferences: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var isDarkTheme: Bool\n}\n\nstruct WrapperView: View {\n @ObservedResults(UserPreferences.self) private var _userPreferences\n \n var body: some View {\n if let userPreferences = _userPreferences.first {\n MainScreen()\n .environmentObject(userPreferences)\n// MainScreen(userPreferences: userPreferences)\n } else {\n ZStack {\n // Empty view\n }\n .onAppear{\n $_userPreferences.append(UserPreferences())\n }\n }\n }\n}\n\nstruct MainScreen: View {\n// @ObservedRealmObject var userPreferences: UserPreferences\n\n @EnvironmentObject var userPreferences: UserPreferences\n \n var body: some View {\n VStack{\n Button(\"Toggle\") {\n userPreferences.isDarkTheme.toggle()\n }\n Text(userPreferences.isDarkTheme.description)\n }\n }\n}\n",
"text": "If there is a single document of some type per user, such as UserPreferences, I could setup a trigger when a user creates an account. However, I am not sure what is the best/cleanest way to approach the problem when Realm is used locally.It would be nice if we could query for a single result using @ObservedResult(UserPreferences.self) var userPreferences = defaultValue similar to @AppStorage.Is it possible to abstract the logic in a view model (or a property wrapper)? I can’t use lazy and @ObservedRealmObject at the same time, but lazy is needed to avoid Cannot use instance member 'getUserPreferences' within property initializer.If I try to use environment objects, I get Thread 1: \"Frozen Realms do not change and do not have change notifications.\".",
"username": "BPDev"
},
{
"code": " // from template code\n @AsyncOpen(appId: \"appId\", timeout: 4000) var asyncOpen\n case .waitingForUser:\n ProgressView(\"Waiting for user to log in...\")\n // The realm has been opened and is ready for use.\n // Show the content view.\n case .open(let realm):\n ItemsView(itemGroup: {\n if realm.objects(ItemGroup.self).count == 0 {\n try! realm.write {\n // Because we're using `ownerId` as the queryable field, we must\n // set the `ownerId` to equal the `user.id` when creating the object\n realm.add(ItemGroup(value: [\"ownerId\":user!.id]))\n }\n }\n return realm.objects(ItemGroup.self).first!\n }(), leadingBarButton: AnyView(LogoutButton())).environment(\\.realm, realm)\nItemGroupItemGrouplet itemGroup = itemGroups.first@ObservedResult",
"text": "Also, when using triggers, there is a small chance that there is a connection error after logging in but before opening the realm.If we rely on having a single ItemGroup, the app wouldn’t work (when waiting for a trigger to execute).If we create the single ItemGroup locally, then there is a chance we create a second one if the user already exists but opening the synced realm failed (logging in for the first time on a different device where the realm is empty). In that case, I think let itemGroup = itemGroups.first will return the oldest item group. It would be convenient to have a @ObservedResult that merges changes in case this happens.",
"username": "BPDev"
}
] | Single document per user | 2023-07-20T01:28:07.972Z | Single document per user | 642 |
null | [
"replication"
] | [
{
"code": "",
"text": "I am using MongoDB community edition in Ubuntu 22.04. How can I make replica set in my local machine and how can I enable it to use in my node js application?",
"username": "Tanzim_Ahmed"
},
{
"code": "",
"text": "",
"username": "Kobe_W"
}
] | How to create replica set of mongodb in ubuntu and enable it? | 2023-07-20T21:10:29.566Z | How to create replica set of mongodb in ubuntu and enable it? | 665 |
null | [
"queries",
"python",
"performance"
] | [
{
"code": "",
"text": "Hi all, so using pymongo and filter to just select just names (which I have in a single unique index) also excluded _id.When looping over the returned names and adding them to a set it seems to struggle and take a long time to complete the operation. However, if I pull the details from a text file with the same information I can complete the same operation very quickly.Is there a better way to fetch data one has in an index other than using find please?",
"username": "Damien_N_A"
},
{
"code": "",
"text": "Hello @Damien_N_A – wondering if you figured out the solution to the performance issue you were having?I am having a similar problem, the cursor is taking forever.Thanks.\nJuan Luna",
"username": "Juan_Luna"
},
{
"code": "",
"text": "Please provide your code, some sample data, and your list_indexes() output for us to debug.",
"username": "Shane"
},
{
"code": "",
"text": "Thank you Shane. You have seen my other post already – I was just wondering if Damien got his answer.",
"username": "Juan_Luna"
}
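One thing worth measuring for this kind of "fetch every indexed name" workload is a covered query: project only the indexed field, exclude _id, and use a large batch size so results stream straight from the index with fewer round trips. A hedged Python/PyMongo sketch with placeholder database, collection, and index names:

```python
# Sketch only: stream just the indexed field as a covered query with a large
# batch size. Database, collection, and index key are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["people"]

cursor = (
    coll.find({}, {"name": 1, "_id": 0})   # project only the indexed field, drop _id
        .hint([("name", 1)])               # keep the query on the unique name index
        .batch_size(50_000)                # fewer getMore round trips
)

names = {doc["name"] for doc in cursor}
print(len(names), "unique names")
```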
] | Performance issue - pymongo - fetching all entries in index - over 1 mill names? | 2022-08-02T09:16:26.555Z | Performance issue - pymongo - fetching all entries in index - over 1 mill names? | 1,900 |
null | [
"aggregation",
"queries",
"python"
] | [
{
"code": "collection_conn.aggregate(\n [{\"$project\": {\"arrayofkeyvalue\": {\"$objectToArray\": \"$$ROOT\"}}},\n {\"$unwind\": \"$arrayofkeyvalue\"},\n {\"$group\": {\"_id\": \"None\", \"allkeys\":\n {\"$addToSet\": \"$arrayofkeyvalue.k\"}}}])\ncollection_conn.aggregate(\n [{\"$project\": {\"arrayofkeyvalue\": {\"$objectToArray\": \"$$ROOT\"}}},\n {\"$unwind\": \"$arrayofkeyvalue\"},\n {\"$sort\": {\"EchoTimeStamp\": -1}},\n {\"$limit\": 1000000},\n {\"$group\": {\"_id\": \"None\", \"allkeys\":\n {\"$addToSet\": \"$arrayofkeyvalue.k\"}}}], allowDiskUse=True)\n",
"text": "Hello.I want my python script to get me all fields (and subfields) for every single collection (I need to validate no fields are added/removed in a deployment).I tried:And works great for small collections but when reaching over 1.5 million records… stalls for ever. So, for those big collections I tried this:It kind-of works… takes over 6 minutes to go thru the entire cursor and get me the keys. I also tried with a find.sort.limit – same thing. I have like 20 collections like that, so its not an option to do this.The timestamp column is indexes (the only one).So I am wondering if there is any way easier to do that. I see my IDE that has on the left pane all the Databases, collections fields and sub-fields, when I refresh, maybe takes a minute to refresh the entire collection…Apologies if I am using incorrect terminology, just started to work with mongo 2 weeks ago. I call a Sub-Field all the fields of an Field type array.Thank you in advance for your help and advice.Juan Luna",
"username": "Juan_Luna"
},
{
"code": "",
"text": "I need to validate no fields are added/removed in a deploymentYou will likely have a better experience enabling schema validation on the server: https://www.mongodb.com/docs/manual/core/schema-validation/",
"username": "Shane"
},
{
"code": "",
"text": "Thank you Shane for your answer. Unfortunately, we are guests in this DB and we can’t.",
"username": "Juan_Luna"
}
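If server-side schema validation is off the table, a lighter-weight alternative to running $objectToArray over every document is to pull a random sample and collect the key paths client-side; because it samples, rare fields can be missed, so the sample size is a trade-off. A hedged Python sketch with placeholder database/collection names:

```python
# Sketch only: sample documents and collect field paths (including sub-fields)
# client-side instead of unwinding every document on the server. Because it
# samples, very rare fields can be missed; raise SAMPLE_SIZE if that matters.
# Database/collection names are placeholders.
from pymongo import MongoClient

SAMPLE_SIZE = 10_000

def collect_paths(value, prefix, paths):
    """Recursively record dotted key paths for dicts and arrays of dicts."""
    if isinstance(value, dict):
        for key, child in value.items():
            path = f"{prefix}.{key}" if prefix else key
            paths.add(path)
            collect_paths(child, path, paths)
    elif isinstance(value, list):
        for item in value:
            collect_paths(item, prefix, paths)   # array elements keep the parent path

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["big_collection"]

paths = set()
for doc in coll.aggregate([{"$sample": {"size": SAMPLE_SIZE}}], allowDiskUse=True):
    collect_paths(doc, "", paths)

print(sorted(paths))
```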
] | Python - how to get all fields (and sub-fields) for a collection? | 2023-07-18T19:56:46.455Z | Python - how to get all fields (and sub-fields) for a collection? | 447 |
[
"frankfurt-mug"
] | [
{
"code": "Staff Curriculum Engineer at MongoDBDevOps Engineer & Frontend developer at adorsysSenior Agile Coach at CHECK24adorsys CTODevops Engineer at adorsysProject Management at adorsys",
"text": "\nMUG_Frankfurt_Sep20231000×563 287 KB\nDATE CHANGE- Please note the date of this event has been changed to Tuesday, September 5th.The Frankfurt MongoDB User Group is excited to host its third meetup with CHECK24 and adorsys!Make sure you join the Frankfurt Group to introduce yourself and stay abreast with future meetups and discussions. To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button. Have meetup.com? You can also register for the event here.The details are coming soonWe may adjust the order of the sessions according to demandEvent Type: In-Person\nLocation: CHECK24 - Office, Speicherstrasse 55, 60327 Frankfurt am MainStaff Curriculum Engineer at MongoDB\nTopic: Field Level Encryption–\n\nPraba512×512 20.6 KB\nDevOps Engineer & Frontend developer at adorsys\nTopic: Client-Side Field Level EncryptionSenior Agile Coach at CHECK24adorsys CTO–\n\nPraba512×512 20.6 KB\nDevops Engineer at adorsys–\n\nNicoleW800×800 99.9 KB\nProject Management at adorsys",
"username": "Nicole_Wesemeyer"
},
{
"code": "",
"text": "Hi, thanks for planning this event!\nThere is no RSVP button. Is it done intentionally?",
"username": "Ruslan_Peshchuk"
},
{
"code": "",
"text": "Hey @Ruslan_Peshchuk,\nI have recently added RSVP to the event. You can now see it by refreshing the page. ",
"username": "Harshit"
}
] | Frankfurt MUG: MongoDB September Meet-up | 2023-07-14T10:32:59.711Z | Frankfurt MUG: MongoDB September Meet-up | 1,704 |
|
null | [
"replication",
"sharding"
] | [
{
"code": "",
"text": "I have a replica set of 5 and I need to replace the binaries to 4.4.16 - i have the mongos, server and shell rpms at a temporary location /app/software/ - what specific command would be best for replacing the current binaries to the .16 binaries - I understand the steps of stoppping mongod and stepdowns and starting it back up but what is the best and safest way to replace the binaries?",
"username": "Gareth_Furnell"
},
{
"code": "yum updateyum installrpm -U mongodb-org-server-4.4.23-1.el8.x86_64.rpm ",
"text": "The better way is to add the repo and use yum update or yum installAs you already have the RPMs rpm -U mongodb-org-server-4.4.23-1.el8.x86_64.rpm ",
"username": "chris"
},
{
"code": "",
"text": "For now I think I’ll do the rpm -Uvh mongodb-org-server-4.4.16.el8.x86_64.rpm process on my nodes so that they’re all the same version and then go straight to the latest 4.4 version before going to 5.0 - I have been told that the repo config for MongoDB is normally disabled (set to 0) so that when RHEL gets updated it does not accidentally do MongoDB as well - I will do the repo yum update process in future thank you!",
"username": "Gareth_Furnell"
},
{
"code": "yumexclude/etc/yum.confexclude=mongodb-org,mongodb-org-database,mongodb-org-server,mongodb-mongosh,mongodb-org-mongos,mongodb-org-tools\n",
"text": "Rather than disabling the repo the packages can be excluded to avoid unintented upgrades.You can specify any available version of MongoDB. However yum upgrades the packages when a newer version becomes available. To prevent unintended upgrades, pin the package. To pin a package, add the following exclude directive to your /etc/yum.conf file:",
"username": "chris"
},
{
"code": "",
"text": "Thank you for the clarification on the exclusions, I have another question - I’ve been recommended to instead try doing a symbolic link to the binaries instead of overriding them incase something does not work, what are your thoughts on this and have there been any cases I can learn from?",
"username": "Gareth_Furnell"
},
{
"code": "",
"text": "You should upgrade to the latest release of 4.4 which is 4.4.23The mongodb-org-server rpm is not relocatable so this method would require the installation by tarball.There is no real benefit to doing this. Just downgrade to the package if you need to.",
"username": "chris"
},
{
"code": "",
"text": "That shall occur - the current issue is that the replica set of 5 - 2 are 4.4.16 and 3 are 4.4.14 - so I’ll go to .16 to make all synchronous before going to the latest. from a production standpoint - I’ll upgrade to 4.4.22… I’ll take the risk of the rpm -Uvh mongodb-org-server-4.4.16-1.el7.x86_64.rpm and the others ",
"username": "Gareth_Furnell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Replacing binaries on rhel from 4.4.14 to 4.4.16 | 2023-07-17T10:34:49.377Z | Replacing binaries on rhel from 4.4.14 to 4.4.16 | 551 |
null | [
"replication",
"kafka-connector"
] | [
{
"code": "Code: **InvalidInput.InvalidConnectorConfiguration**\n\nMessage: **The connector configuration is invalid. Message: Connector configuration is invalid and contains the following 1 error(s): Invalid value mongodb://${ssm::/order/db/order/username}:${ssm::/order/db/order/password}@redactedurlformongo:27017/?ssl=true&replicaSet=rs0&retryWrites=false for configuration connection.uri: The connection string contains an invalid host '${ssm::'. Reserved characters such as ':' must be escaped according RFC 2396. Any IPv6 address literal must be enclosed in '[' and ']' according to RFC 2732.**\nconnector.class=com.mongodb.kafka.connect.MongoSourceConnector\nconnection.ssl.truststorePassword=${ssm::/platform/db/master/cacert/truststore/password}\ntasks.max=1\nchange.stream.full.document=updateLookup\nconfig.providers.ssm.class=com.amazonaws.kafka.config.providers.SsmParamStoreConfigProvider\nconfig.providers=s3import,ssm\ncollection=orders\nconnection.ssl.truststore=${s3import:us-east-2:ole-poc-kafka-connectors/rds-truststore.jks}\nconfig.providers.s3import.param.region=us-east-2\ndatabase=order\ntopic.namespace.map={\"order.orders\": \"orders\"}\nconnection.uri=mongodb://${ssm::/order/db/order/username}:${ssm::/order/db/order/password}@redactedurlformongo:27017/?ssl=true&replicaSet=rs0&retryWrites=false\nerrors.tolerance=all\nconfig.providers.ssm.param.region=us-east-2\nconfig.providers.s3import.class=com.amazonaws.kafka.config.providers.S3ImportConfigProvider\n",
"text": "Hello,I am following this article to create a source connector in the AWS Ecosystem (AWS MSK and AWS Kafka Connect). I am following this article which details how we can externalize the secrets (username and password) from the mongo connection string with a config provider: Stream data with Amazon DocumentDB, Amazon MSK Serverless, and Amazon MSK Connect | AWS Database BlogHowever, I get the following validation error on the connection.uri property when using the config provider:Here is an example of my configuration:This issue is also reported here: MongoDB Source Connector - configuration validation runs before replacement when using a Config Provider · Issue #1319 · confluentinc/kafka-connect-jdbc · GitHub",
"username": "Dan_Hvidding"
},
{
"code": "connection.uri=mongodb://${ssm::/order/db/order/username}:${ssm::/order/db/order/password}@redactedurlformongo:27017/?ssl=true&replicaSet=rs0&retryWrites=false\n${ssm::}The connection string contains an invalid host '${ssm::'. Reserved characters such as ':' must be escaped according RFC 2396. Any IPv6 address literal must be enclosed in '[' and ']' according to RFC 2732:$[ ]IPv6",
"text": "Hi @Dan_Hvidding,Welcome to the MongoDB Community!I’m not very familiar with Kafka connectors and the AWS Ecosystem, but based on the error message, it seems like there is an issue parsing the URI of the MongoDB due to a syntax problem.`Message: The connector configuration is invalid. Message: Connector configuration is invalid and contains the following 1 error(s): Invalid value mongodb://${ssm::/order/db/order/username}:${ssm::/order/db/order/password}@redactedurlformongo:27017/?ssl=true&replicaSet=rs0&retryWrites=false for configuration connection.uri:The error indicates that the MongoDB connection URI is not being parsed correctly due to the ${ssm::} placeholders used for the username and password. I guess the SSM placeholders are not being resolved even before the connection URI is validated.The connection string contains an invalid host '${ssm::'. Reserved characters such as ':' must be escaped according RFC 2396. Any IPv6 address literal must be enclosed in '[' and ']' according to RFC 2732Based on the error message, it seems that the appropriate way to fix the issue is to escape the : and $ characters in the URI and use [ ] around the IPv6 address so that it becomes valid even before parameter resolution.Hope the above helps!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thanks Kushagra,I was able to resolve this by putting the config provider configuration within the Worker config and not the Connector config.I found the description here: https://jira.mongodb.org/browse/KAFKA-361",
"username": "Dan_Hvidding"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Connection.uri validation error when using a config provider | 2023-07-19T10:23:56.651Z | Connection.uri validation error when using a config provider | 693 |
[
"atlas-cluster",
"database-tools"
] | [
{
"code": "",
"text": "“try ‘mongoimport --help’ for more information”\n“error parsing command line options: error parsing uri: unescaped @ sign in user info”I am new to learning database & I want to import my \"Json \" file on MongoDB-Atlas . but after using the command line tools below:\n\"mongoimport --uri mongodb+srv://gofood:Punit@[email protected]/myfood --collection sample --type jsonArray --file “C:\\Users\\Punit\\Desktop\\Food-App\\foodData2.json”How to solve this ? thanks in Advance\n\ndb-probl1920×493 39.9 KB\n",
"username": "Puneet_Kumar_sharma"
},
{
"code": "",
"text": "It is syntax issue\nEnclose the connect string after --uri till your /dbname in double quotes and try again",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "\nprob21920×270 19.6 KB\nThis Would Happen after i Use the double quotes.",
"username": "Puneet_Kumar_sharma"
},
{
"code": "",
"text": "Your password is having special character @ and shell interprets it differently\nEscape the special character or use URL encoder or change your password to a simple one\nSearch our forum threads.You will get more details",
"username": "Ramachandra_Tummala"
},
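For completeness, percent-encoding the credentials is usually the cleanest fix; the Python standard library's urllib.parse.quote_plus does the encoding, and the resulting string can be pasted (in quotes) after mongoimport --uri. A hedged sketch with placeholder credentials and host:

```python
# Sketch only: percent-encode credentials that contain reserved characters such
# as '@' or ':' before building the connection string. Username, password, and
# host below are placeholders.
from urllib.parse import quote_plus

username = quote_plus("gofood")
password = quote_plus("p@ssw0rd!")     # '@' becomes %40, '!' becomes %21

uri = f"mongodb+srv://{username}:{password}@cluster0.example.mongodb.net/myfood"
print(uri)   # paste this (wrapped in double quotes) after mongoimport --uri
```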
{
"code": "",
"text": "\npb-31908×145 13.7 KB\nSir, I had changed the passwrd but still it throws error.",
"username": "Puneet_Kumar_sharma"
},
{
"code": "",
"text": "Remove quotes from file path and try again\nIf you still face issues cd to the dir where your json file is residing and run mongoimport from that location.Just give filename without path as parameter for --file",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thankyou for your support ,…",
"username": "Puneet_Kumar_sharma"
},
{
"code": "",
"text": "How your issue get resolved finally please send the exact same code i’ve tried all the above steps still get same erro",
"username": "NIMESH_SINGH"
},
{
"code": "",
"text": "\nimage1910×175 11.5 KB\ni still get that error",
"username": "NIMESH_SINGH"
},
{
"code": "",
"text": "Check your syntax again.You are missing collection flag",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "\nimage1902×256 11.9 KB\ni tried all the ways still error",
"username": "NIMESH_SINGH"
},
{
"code": "",
"text": "\nimage1918×271 14.5 KB\n",
"username": "NIMESH_SINGH"
},
{
"code": "",
"text": "Add (–) before jsonArray and try again",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Can I get the exact rectified statement",
"username": "Jatin_N_A1"
},
{
"code": "",
"text": "Just add double hypen before like you added for other params\n–jsonArray",
"username": "Ramachandra_Tummala"
}
] | "error parsing command line options: error parsing uri: unescaped @ sign in user info" | 2023-02-09T07:53:48.771Z | “error parsing command line options: error parsing uri: unescaped @ sign in user info” | 3,018 |
|
[] | [
{
"code": "",
"text": "Mongo DB version 6 installed on Rocky linux 9 server but the mongodb service is getting failed not started.\nI have referred the below official link to install.\nI have tried with another vm on rocky linux 9 there also getting the same error. core dump error\nPlease give me solution foe this issue.\nmongodb_error_rockylinux9888×410 28.1 KB\n",
"username": "Rajkumar_N"
},
{
"code": "",
"text": "ILL means illegal instructions.\nCheck if your CPU microarchitecture supports the mongodb version you are trying to install\nCheck mongo documentation installation instructions/compatibility matrix etc",
"username": "Ramachandra_Tummala"
}
] | Mongo db service not started on Rocky Linux 9 | 2023-07-20T09:38:41.860Z | Mongo db service not started on Rocky Linux 9 | 426 |
|
null | [] | [
{
"code": "",
"text": "Hi,\nThe problem is that i need to have complete MongoDB installed in unattended mode.\nI have used:\nmsiexec /l*v install.log /qn /i mongodb-windows-x86_64-6.0.8-signed.msi /norestart SHOULD_INSTALL_COMPASS=“0” ADDLOCAL=“All”\nBut, this registers mongod service and starts it. I need complete installation, but service should not be registered and started. In interactive setup this can be done but how to do that in unattended?\nI have tried adding MONGO_SERVICE_INSTALL=“0” but this does not help.Please help, any ideas?Chris",
"username": "Chris_B1"
},
{
"code": "",
"text": "Aaah.\nNeed to use ADDLOCAL=\" ServerNoService,Router,MiscellaneousTools\" instead of all…Done.",
"username": "Chris_B1"
}
] | Windows unattended install - how to disable service registeration and start but with full install | 2023-07-20T07:57:59.996Z | Windows unattended install - how to disable service registeration and start but with full install | 398 |
null | [] | [
{
"code": "connecting to: mongodb://<my ip address>:27017/admin?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server <my ip address>:27017, connection attempt failed: SocketException: Error connecting to <my ip address>:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\nnc -vz <my ip address> 27017\nnc: connectx to <my ip address> port 27017 (tcp) failed: Connection refused\n(base) exen@jingyang:~$ sudo ufw status verbose\nStatus: active\nLogging: on (low)\nDefault: deny (incoming), allow (outgoing), disabled (routed)\nNew profiles: skip\n\nTo Action From\n-- ------ ----\n27017 ALLOW IN Anywhere \n22 ALLOW IN Anywhere \n27017 (v6) ALLOW IN Anywhere (v6) \n22 (v6) ALLOW IN Anywhere (v6)\n",
"text": "Hello! I’m a newcomer to MongoDB and it’s a great product. I have set up a mongo server on my local desktop machine (running Ubuntu 20.04), and I can connect to it without any issue on that machine. However, when I try to connect from my laptop, it shows this error:I have followed many suggestions by others (for example here) but none of them works. I’m wondering what could have gone wrong?Edit: I checked the connection using telnet:and it gives:However, checking on my desktop machine reveals that the firewall on port 27017 is open:",
"username": "Jingyang_Wang"
},
{
"code": "connecting to: mongodb://<my ip address>:27017/admin?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server <my ip address>:27017, connection attempt failed: SocketException: Error connecting to <my ip address>:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\n\nnc: connectx to <my ip address> port 27017 (tcp) failed: Connection refused\n<my ip address>127.0.0.1/etc/mongod.confbindIp0.0.0.0netstat -an | grep 270170.0.0.0",
"text": "Hey @Jingyang_Wang,Welcome to the MongoDB Community!By default, MongoDB binds to a localhost IP address. In the post, you mentioned <my ip address>. Could you clarify what this address is? Is this address 127.0.0.1, or some other address? Is it accessible from outside the node?Based on the details you provided, it seems the issue is that the MongoDB server is only listening for connections on the local interface and not on your public IP address.May I ask what steps you followed to install MongoDB?Moreover, a few things to check:If the issue still persists, some other things to check would be the network/firewall configuration on the server, making sure the public IP is certainly assigned to the server, or checking for any other processes that may be blocking port 27017.Please note 0.0.0.0 is not recommended for your production environment as it allows access from anywhere.Hope the above helps!Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
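A quick way to distinguish the two failure modes from the remote machine (port unreachable vs. mongod reachable but not answering) is a raw TCP connect followed by a driver-level ping with a short timeout. A hedged Python sketch where the host is a placeholder for the desktop machine's address:

```python
# Sketch only: check reachability of a remote mongod in two steps -- a raw TCP
# connect to the port, then a driver-level ping with a short timeout. The host
# is a placeholder for the desktop machine's address.
import socket

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

HOST = "192.0.2.10"     # placeholder IP of the MongoDB server
PORT = 27017

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print("TCP connect OK: mongod is listening and the port is reachable")
except OSError as exc:
    print("TCP connect failed (check bindIp, firewall, routing):", exc)

try:
    client = MongoClient(HOST, PORT, serverSelectionTimeoutMS=3000)
    client.admin.command("ping")
    print("MongoDB ping OK")
except ServerSelectionTimeoutError as exc:
    print("Driver could not reach the server:", exc)
```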
] | Connection refused for remote access of mongodb server | 2023-07-18T21:14:39.159Z | Connection refused for remote access of mongodb server | 1,022 |
[] | [
{
"code": "",
"text": "i need some help to populate the table with mondodb datas. My query brings to me just keys. where am i doing wrong? please someone help me\n\n22F2AF1E-9CFE-42BA-A13F-FD16E6466C552560×1600 312 KB\n",
"username": "Orhan_BAKKAL"
},
{
"code": "",
"text": "There’s not enough information here to help further.What’s the query you’re performing and how are you performing it? Please provide as much detail as possible otherwise it would make it difficult to assist.BR,\nJason",
"username": "Jason_Tran"
},
{
"code": " self.table.setColumnCount(self.numcolumn)\n self.table.setRowCount(self.numrow)\n self.table.setColumnHidden(0,True)\n self.table.setHorizontalHeaderLabels(self.baslik)\n\n for i in reversed (range(self.table.rowCount())):\n self.table.removeRow(i)\n\n db = client['kisiler']\n self.collection = db['muvekkiller']\n self.data = self.collection.find()\n\n\n girecekler=self.collection.find_one()\n print(girecekler.keys)\n\n\n\n\n self.table.setRowCount(0)\n for row_number, row_data in enumerate(self.data):\n self.table.insertRow(row_number)\n\n for column_number, data in enumerate(row_data):\n self.table.setItem(row_number, column_number, QTableWidgetItem(str(data)))\n",
"text": "def loaddata(self):",
"username": "Orhan_BAKKAL"
},
{
"code": "",
"text": "Actually I solved this problem with pandas data fame. but it made no sense to me.",
"username": "Orhan_BAKKAL"
},
{
"code": "",
"text": "Glad to hear Orhan.Perhaps it may have been more so to do with presenting the data rather than querying it although just a thought.You could include any changes made here so that someone could explain but if you believe it was unrelated to MongoDB then perhaps maybe stack overflow would work too.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": " self.table.setColumnCount(self.numcolumn)\n self.table.setRowCount(self.numrow)\n self.table.setColumnHidden(0, True)\n\n\n for i in range(self.table.rowCount()):\n self.table.removeRow(i)\n\n db = client['kisiler']\n collection = db['muvekkiller']\n ducuments = collection.find()\n list_cur=list(ducuments)\n df=DataFrame(list_cur)\n #print(df)\n\n nRows,nColumns=df.shape\n self.table.setColumnCount(nColumns)\n self.table.setRowCount(nRows)\n\n self.table.setHorizontalHeaderLabels(df.head())\n for i in range(self.table.rowCount()):\n for j in range(self.table.columnCount()):\n self.table.setItem(i,j,QTableWidgetItem(str(df.iloc[i,j])))\n",
"text": "def loaddata(self):",
"username": "Orhan_BAKKAL"
},
{
"code": "",
"text": "this is my solution. I loaded data into table with pandas dataframe",
"username": "Orhan_BAKKAL"
},
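For readers hitting the same symptom, a likely explanation (judging only from the snippet above) is that iterating a pymongo document iterates its keys, because each document is a plain Python dict; a minimal sketch reusing the collection names from the thread:

```python
from pymongo import MongoClient

collection = MongoClient()["kisiler"]["muvekkiller"]  # connection details assumed

for row, doc in enumerate(collection.find()):
    # `for x in doc` yields only the keys ("_id", ...), which is why the
    # table filled up with field names instead of values.
    for col, (key, value) in enumerate(doc.items()):
        print(row, col, key, value)  # put str(value) into the QTableWidget cell here
```

The pandas route works for the same reason: DataFrame(list_cur) spreads the values across columns, and df.iloc[i, j] reads cell values rather than keys.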
{
"code": "",
"text": "Thank you for uploading that solution for others to view in future ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can i fill qtablewidget(pyqt5) with mongodb datas | 2023-07-14T23:20:49.000Z | How can i fill qtablewidget(pyqt5) with mongodb datas | 385 |
|
[] | [
{
"code": "",
"text": "Hello Team,I am doing some POC to migrate the SQL Server database to MongoDB Atlas using the MongoDB Relational Migrator tool.I could see the tool by default is not able to detect the tables under user-created schema in the SQL Server. I am able to see the tables under the default schema (dbo) but not the other tables.\nimage1411×742 45.9 KB\nHere is the screenshot from the SQL Server side where I have another set of tables\ntwo631×789 21.3 KB\nAnyone else encounter something similar? Can you please suggest?If this is the expected behavior with the current version (1.1.2) of the tool, then it would be great if we could also add the functionality to detect tables under user-created schema.Thanks\nNaveen",
"username": "Naveen_Kumar2"
},
{
"code": "dbodatabaseName",
"text": "Hi @Naveen_Kumar2 -Ah, AdventureWorks - almost as fine a schema as Northwind!For a reason I’ve never fully figured out, the SQL Server driver only pulls tables from the dbo schema when you connect to all databases. However if you specify a database name on the connection page (or via the databaseName JDBC parameter) the tool is able to load data from all schemas.HTH\nTom",
"username": "tomhollander"
},
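For anyone setting this through the JDBC URL rather than the connection page, a sketch of the databaseName parameter Tom mentions (host, port and database name are placeholders):

```
jdbc:sqlserver://sqlhost.example.com:1433;databaseName=AdventureWorks
```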
{
"code": "",
"text": "Thanks for your help @tomhollander !After providing the database name, now I am able to list all schemas and the tables underneath it\nimage1414×889 58.2 KB\nI will continue to explore the tool Thanks\nNaveen",
"username": "Naveen_Kumar2"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Questions About MongoDB Relational Migrator Tool | 2023-07-19T18:22:50.599Z | Questions About MongoDB Relational Migrator Tool | 294 |
|
[
"storage"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-03-02T13:18:50.717-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.10.42.251:56707\",\"uuid\":\"6a36177a-b425-400a-a1a9-1fc735f56ab0\",\"connectionId\":165612,\"connectionCount\":9}}\n{\"t\":{\"$date\":\"2023-03-02T13:18:58.738-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn165612\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.10.42.251:56707\",\"uuid\":\"6a36177a-b425-400a-a1a9-1fc735f56ab0\",\"connectionId\":165612,\"connectionCount\":8}}\n{\"t\":{\"$date\":\"2023-03-02T13:18:59.738-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.10.42.251:56884\",\"uuid\":\"0c8e1898-f54c-49dd-8605-bb31d7f2b909\",\"connectionId\":165613,\"connectionCount\":9}}\n{\"t\":{\"$date\":\"2023-03-02T13:19:11.933-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn165613\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.10.42.251:56884\",\"uuid\":\"0c8e1898-f54c-49dd-8605-bb31d7f2b909\",\"connectionId\":165613,\"connectionCount\":8}}\n\n{\"t\":{\"$date\":\"2023-03-02T13:19:11.990-05:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2023-03-02T13:19:12.032-05:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileRenameFailed: Access is denied\\nActual exception type: class mongo::error_details::ExceptionForImpl<37,class mongo::AssertionException>\\n\"}}\n\n{\"t\":{\"$date\":\"2023-03-02T13:19:12.766-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.10.42.251:57108\",\"uuid\":\"9d15f9b4-8e8a-4659-9377-a78356a0c731\",\"connectionId\":165614,\"connectionCount\":9}}\n{\"t\":{\"$date\":\"2023-03-02T13:19:12.766-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn165614\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.10.42.251:57108\",\"uuid\":\"9d15f9b4-8e8a-4659-9377-a78356a0c731\",\"connectionId\":165614,\"connectionCount\":8}}\n{\"t\":{\"$date\":\"2023-03-02T13:19:13.768-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.10.42.251:57120\",\"uuid\":\"cd4b4ba0-4f96-4494-8073-7d408e924f4f\",\"connectionId\":165615,\"connectionCount\":9}}\n{\"t\":{\"$date\":\"2023-03-02T13:19:13.768-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn165615\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.10.42.251:57120\",\"uuid\":\"cd4b4ba0-4f96-4494-8073-7d408e924f4f\",\"connectionId\":165615,\"connectionCount\":8}}\n{\"t\":{\"$date\":\"2023-03-02T13:19:14.390-05:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1677781154:390576][14408:140723038999488], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 5515, snapshot max: 5515 snapshot count: 0, oldest timestamp: (1677781152, 1) , meta checkpoint timestamp: (1677781152, 1) base write gen: 108733\"}}\n{\"t\":{\"$date\":\"2023-03-02T13:19:14.770-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, 
\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.10.42.251:57137\",\"uuid\":\"43c1c2bf-d3c5-49d3-bb1b-4e83e16e1440\",\"connectionId\":165616,\"connectionCount\":9}}\n{\"t\":{\"$date\":\"2023-03-02T13:19:14.770-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn165616\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.10.42.251:57137\",\"uuid\":\"43c1c2bf-d3c5-49d3-bb1b-4e83e16e1440\",\"connectionId\":165616,\"connectionCount\":8}}\n{\"message\":\"DBException::toString(): FileRenameFailed: \\ufffdv\\ufffd\\ufffd...",
"text": "Good day.First, I know they are many posts that relate to my issue, but none provided me with a fix. Running:\nMongo Server Community 5.0.5\nWindows Server 2019The service runs with a domain user for which we gave full control over the root path of D:\\Mongo\\ (in which is the data and log folder). Additionally, we’ve also setup our AV to exclude scanning within D:\\Mongo\\ too !Every so often (too often!) the mongod.exe process still seems to crash with a FileRenamedFailed: Access is denied… error. Here’s a snipped of the log file:In all the posts out there, none of them resolved this crash for us:Most of them related to an AV scanning files with the /data/: we’ve excluded scanning within the folder!Some talk about permissions problems: we’ve given full control to the user running mongod within the root of the Mongo files!I’ve even seen posts talking about a bad server locale setup (but that would be when the log shows unicode chars not processed properly or something (log would show something like {\"message\":\"DBException::toString(): FileRenameFailed: \\ufffdv\\ufffd\\ufffd...), but that doesn’t seem to be our case from viewing our log. Plus, our server is set with a “English” local:\n\nimage525×630 126 KB\nI’m running out of ideas here… Upgrade to latest Mongo? But why haven’t I found anything regarding this that says you need to upgrade if that’s the case?Any ideas would be super appreciated. Much thanks for your time folks.Regards,\nPatrick",
"username": "Patrick_Roy"
},
{
"code": "FileRenameFailed",
"text": "Hi @Patrick_RoySorry you’re having difficulty with this issue, but unfortunately I believe the error FileRenameFailed originated from outside the server, so it’s typically an OS level issue.One thing I can think of is SERVER-58085, which will warn you if the path is a network drive (which is known to sometimes result in this). SERVER-28194 is another, but that was fixed a long time ago.Since you’re running version 5.0.5 and the latest in the 5.0 series is 5.0.15, I would start by upgrading first. Upgrading to the latest version ensures that you’re not seeing a fixed issue, so it’s usually a good idea to try first.If your dbpath is not on a network drive, and you have upgraded to 5.0.15, then perhaps the best option is to open a SERVER ticket describing the situation.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hello @kevinadi. Thanks for your reply.Our server currently randomly crashing seems to be our Arbiter (we are running with PSA). Although we’ve had the crash on another server that has only 1 instance (primary only - testing server). These 2 servers that did produce the crash all have the dbPath set to a local disk.Our next step, since we don’t want to fall too much behind in upgrades, is to upgrade to latest Mongo 6.0.x LTS version, and hope all crashes magically goes away Although, I’m still puzzled as to why we’re getting the crash. I mean, if doing an upgrade fixes it, then I should be able to find the relevant fix that resolves the issue, but didn’t find anything yet…",
"username": "Patrick_Roy"
},
{
"code": "cloud:\n monitoring:\n free:\n state: off\nsetParameter:\n\tdiagnosticDataCollectionEnabled: false\nmongosh --nodb --eval \"disableTelemetry()\"",
"text": "Hi folks, just to share an update on this particular crash… We know that the crash would occasionally occur when Mongo renamed this file: \\diagnostic.data\\metrics.interim to metrics.interim.temp.Few steps I took to try and bypass the manipulation of this file (it’s only a diagnostic / metrics file info of some kind, so not really needed (?))Results: it seems like it is the last point (4) that fixed the issue by disabling telemetry. I am not sure though if it’s a combination of all points that did it… But so far, it’s but up over a month without a crash (was crashing 3-4 times a month before!).Regardless, we definitely shouldn’t need to disable all that stuff. To me, it looks like there’s a bug somewhere with specifics setup (but what?) can’t say…Cheers! Pat",
"username": "Patrick_Roy"
},
{
"code": "Objekt:\n\tObjektserver:\t\tSecurity\n\tObjekttyp:\t\tFile\n\tObjektname:\t\tC:\\mongo\\db\\diagnostic.data\\metrics.interim\n\tHandle-ID:\t\t0x82c\n\tRessourcenattribute:\tS:AI\n\nProzessinformationen:\n\tProzess-ID:\t\t0xcf8\n\tProzessname:\t\tC:\\Program Files (x86)\\Kaspersky Lab\\Kaspersky Security for Windows Server\\kavfswp.exe\n\nZugriffsanforderungsinformationen:\n\tZugriffe:\t\tAttribute schreiben\n",
"text": "I just had the same issue with mongodb 4.4.22.I enabled file auditing on windows and it appears Kaspersky is to blame:",
"username": "Markus_S"
}
] | Mongod random crashes on Windows: FileRenameFailed | 2023-03-03T09:52:09.500Z | Mongod random crashes on Windows: FileRenameFailed | 1,637 |
|
null | [
"aggregation"
] | [
{
"code": "> db.getCollection(\"daily_signals\").aggregate(\n> [\n> \n> {\n> $match: {\n> \"Stock\":{$in: ['SENSOR1','SENSOR2','SENSOR15']},\n> $or: [\n> {\n> \"MOSC\": {\n> $gt: 0.0\n> }\n> },\n> {\n> \"SHL\": {\n> $gt: 0.0\n> }\n> },\n> {\n> \"DDSM\": {\n> $gt: 0.0\n> }\n> }\n> ],\n> },\n> \n> },\n> \n> ])\n> {\n> \"_id\" : ObjectId(\"64adb9af35dbc22c4b376cf8\"),\n> \"Date\" : ISODate(\"2023-07-11T00:00:00.000+0000\"),\n> \"Sensor\" : \"SENSOR15\",\n> \"MOSC\" : NumberInt(66),\n> \"SHL\" : NumberInt(0),\n> \"DDSM\" : NumberInt(0),\n> }\n",
"text": "With this aggregation I’m able to query previous signals (where MOSC,SHL,DDSM >1) for each sensor.\nHowever this query returns all signals for each sensor. I would like to get only 2 latest signals of each sensor.(last 2 documents). I tried to limit after match. and also wrap with facet.\nHere is an example document:Thanks in advance.",
"username": "Sakalli_Celal"
},
{
"code": "",
"text": "",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Could you please show me an example of how i can apply $limitN to this aggregation?",
"username": "Sakalli_Celal"
},
{
"code": "db.getCollection(\"Test\").aggregate([\n{\n $sort:{\n SensorName:1,\n TimeSequence:1\n }\n},\n{\n $group:{\n _id:'$SensorName',\n lastData:{\n $lastN:{\n input:'$$ROOT',\n n:2\n }\n }\n }\n},\n{\n $unwind:'$lastData'\n},\n{\n $replaceRoot:{\n newRoot:'$lastData'\n }\n}\n])\n",
"text": "If you sort then group and use the lastN group operator as @Jack_Woehr said above, you can then unwind and get the documents, something like this:Mongo playground: a simple sandbox to test and share MongoDB queries onlineYou could probably also do it with the window operator:",
"username": "John_Sewell"
}
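A sketch of the window-operator alternative John mentions, written with pymongo and reusing his example field names (SensorName, TimeSequence); it assumes MongoDB 5.0+ for $setWindowFields:

```python
from pymongo import MongoClient

coll = MongoClient()["test"]["Test"]  # connection details assumed

pipeline = [
    {"$setWindowFields": {
        "partitionBy": "$SensorName",               # one window per sensor
        "sortBy": {"TimeSequence": -1},             # newest first
        "output": {"rowNum": {"$documentNumber": {}}},
    }},
    {"$match": {"rowNum": {"$lte": 2}}},            # keep the 2 latest per sensor
    {"$unset": "rowNum"},
]
latest_two_per_sensor = list(coll.aggregate(pipeline))
```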
] | How to limit aggregation match results to last n documents? | 2023-07-19T17:24:30.201Z | How to limit aggregation match results to last n documents? | 273 |
null | [] | [
{
"code": "rs.addArb(\"mongoA.somewhere.com:27017\")ARBITERnetstat -natp",
"text": "Hi all,I had a PSA architecture setup and running no problem. Then I was tasked with switching the arbiter and secondary around for bandwidth costs in AZs in a single AWS region.First I removed the arbiter, converted it to a secondary and added it back to the cluster. Now there it’s PSS.\nThen I removed the secondary I’m converting to an arbiter, removed the data, etc.I then cleared out the data for the data dir, start mongod on the new arbiter, which is the exact same instance class I was using before, t4g.small. More than enough. I get the typical init failure logs/checkpoint logs waiting for replicate set information. Totally normal.When I go to add the arbiter back with rs.addArb(\"mongoA.somewhere.com:27017\"), it stays in ARBITER and healthy for about 10s, then on the mongoA host, the load just completely skyrockets. No connections from anywhere but the PS in the replSet (netstat -natp). Like 2k+ sysload. The little box dies, and the cluster considers the arbiter unhealthy.Is there something strange about re-adding an arbiter that was a secondary? Is reusing the hostname bad?This seems pretty bonkers that the initialization fails. Adding the arbiter originally did not do anything of the sort and only increased CPU a bit. Obviously I don’t want to use something massive just to init the arbiter. Then I’ll have to downgrade it, and add/rm the arbiter again anyways.Mongo self-hosted 4.4.23 for all three nodes. About 350GB data size.",
"username": "Rebecca_Jean"
},
{
"code": "t4g.smallrs.status()rs.conf()",
"text": "Hi @Rebecca_Jean and welcome to MongoDB community forums!!First I removed the arbiter, converted it to a secondary and added it back to the cluster. Now there it’s PSS.\nThen I removed the secondary I’m converting to an arbiter, removed the data, etc.With reference to the above statement, could you help me with a few details:Could you post the actual commands you used in the overall process, both inside the mongo shell and in bash if this is a Linux deployment?To convert the secondary to arbiter, are you following the mongoDB official documentation on Convert a Secondary to an Arbiter ? or are you referring to any documentation? If yes, could you please share the same?Are all the nodes in the replica set provisioned identically? As mentioned you are using t4g.small for the arbiter, but when the old arbiter was converted to a secondary, was the instance type changed to match the primary?Do you have a majority read concern disabled ?What’s the state and setting of the replica set now? Could you please post the current rs.status() and rs.conf() and whether you have read concern majority enabled or disabled\". This would help me to understand the deployment in a better way.Regards\nAasawari",
"username": "Aasawari"
}
] | Re-Adding single arbiter to RS crashes it, load through the roof | 2023-07-17T05:33:15.484Z | Re-Adding single arbiter to RS crashes it, load through the roof | 561 |
null | [
"replication"
] | [
{
"code": "",
"text": "How to create 2 node active-active MongoDB replicaset?",
"username": "Sonal_Sharma"
},
{
"code": "",
"text": "Hi,Can you elaborate a little bit more? What do you mean by 2-node active-active MDB replicaset?All the best,–Rodrigo",
"username": "logwriter"
},
{
"code": "",
"text": "we need to setup DR of 2 node mongodb replicaset where both the nodes primary and secondary would be doing read and write",
"username": "Sonal_Sharma"
},
{
"code": "",
"text": "Hi,In a replica set, only the Primary can handle reads and writes coming from the application. You can have the application sending read requests to the secondaries, but only the Primary can handle both reads and writes for the replica set.All the best,–Rodrigo",
"username": "logwriter"
},
{
"code": "",
"text": "I have to set up DR of mongodb replica set in target region as active passive. Can you help me to know how the replication would be done from region1 to region2 and suppose all nodes in region1 go down then how the replica set in region2 will have latest data?",
"username": "Sonal_Sharma"
},
{
"code": "",
"text": "Hi Sonal,I believe you have a few options to explore. Since, you’re talking about regions, I’m assuming you’re working on a Cloud environment.If that is the case, have you consider using MongoDB ATLAS? In ATLAS you can have your MongoDB replicaset distribute across regions or even across different cloud providers by just flipping a switch while creating your cluster.\nimage1736×586 89.6 KB\nIf that isn’t the case and you’re running on-premises, have you looked at mongosync?All the best,–Rodrigo",
"username": "logwriter"
},
{
"code": "",
"text": "we are using aws regions and running with mongodb 4.2 community edition 3 node replica set on EC2 instance. Need to setup DR in region2 with active-passive setup. After inital sync from region1 to region2 how the replica set will elect primary in region2 if all nodes in region1 go down? Can you help me to clear this? How ill this work? Do Ineed to setup 3 node replica in DR region too and configure one node as hidden node?",
"username": "Sonal_Sharma"
},
{
"code": "",
"text": "Hi @Sonal_Sharma welcome to the community!As @logwriter mentioned, there is no “active-active” setting for MongoDB replica set since a replica set uses the primary-secondary or leader-follower concept.However there are other options to achieve this, detailed in the blog post Active-Active Application Architectures with MongoDB | MongoDBHope this helps!Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hello,\nThis is for active -passive now with 2 different regions in AWS and if all nodes in region1 go down then how the replica set in region 2 will resume service in region2 and serve as primary?",
"username": "Sonal_Sharma"
}
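Not an official answer, but the usual way this gets reasoned about: a primary can only be elected where a majority of voting members is reachable, so the member layout across regions decides whether region2 can take over on its own. A sketch with hypothetical hostnames:

```python
# Hypothetical 5-member layout for a two-region active-passive setup.
# Majority of 5 voting members is 3. If all of region1 goes down, region2
# holds only 2 votes and cannot elect a primary by itself; recovery then
# needs manual intervention (a forced reconfig) or at least one voting
# member placed in a third location so a majority survives a region outage.
members = [
    {"_id": 0, "host": "region1-node1:27017", "priority": 2},
    {"_id": 1, "host": "region1-node2:27017", "priority": 2},
    {"_id": 2, "host": "region1-node3:27017", "priority": 2},
    {"_id": 3, "host": "region2-node1:27017", "priority": 1},
    {"_id": 4, "host": "region2-node2:27017", "priority": 1},
]
# This list is the "members" portion of the replica set config passed to
# rs.reconfig() / the replSetReconfig command.
```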
] | How to create 2 node active-active MongoDB replicaset? | 2023-07-12T14:09:45.379Z | How to create 2 node active-active MongoDB replicaset? | 780 |
null | [
"java"
] | [
{
"code": "\norg.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class com.mongodb.DBRef.\n\nat org.bson.internal.CodecCache.getOrThrow(CodecCache.java:57)\n\nat org.bson.internal.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:64)\n\nat org.bson.internal.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:39)\n\nat org.bson.codecs.DocumentCodec.writeValue(DocumentCodec.java:197)\n\nat org.bson.codecs.DocumentCodec.writeMap(DocumentCodec.java:212)\n\nat org.bson.codecs.DocumentCodec.encode(DocumentCodec.java:154)\n",
"text": "Getting error on MongoDB : Can’t find a codec for class com.mongodb.DBRef.\nI’m getting below error while execution of mule flow. Let me know if anyone know about this.I couldn’t find suitable answer.",
"username": "Janak_Rana"
},
{
"code": "",
"text": "Did you check some popular answers for this error? e.g. this. java - Resolve DBRef into Json - Stack Overflow",
"username": "Kobe_W"
}
] | Java error in DBRef of mongo Driver | 2023-07-20T03:46:09.106Z | Java error in DBRef of mongo Driver | 402 |
null | [] | [
{
"code": "realmDB.beginTransaction();\n for (int i = 0; i < 5000000; i++) {\n QRBaseData qrBaseData = realmDB.createObject(QRBaseData.class, \"https://www.baidu.com?uii=\" + UUID.randomUUID());\n qrBaseData.setGroupId(232324);\n qrBaseData.setSkuId(23523);\n qrBaseData.setOutletId(33523);\n }\n realmDB.commitTransaction();\nExecutorService fixedThreadPool = Executors.newFixedThreadPool(3);\nfixedThreadPool.execute(new Runnable() {\n @Override\n public void run() {\n DynamicRealm dynamicRealm = DynamicRealm.getInstance(configuration);\n dynamicRealm.beginTransaction();\n for (int i = 0; i < 2500000; i++) {\n DynamicRealmObject dynamicRealmObject = dynamicRealm.createObject(\"Goup_Table_0\", \"https://www.baidu.com?uii=\" + UUID.randomUUID());\n dynamicRealmObject.setInt(\"outletId\", 33523);\n dynamicRealmObject.setInt(\"groupId\", 232324);\n dynamicRealmObject.setInt(\"skuId\", 23523);\n }\n dynamicRealm.commitTransaction();\n trasitionCheck();\n }\n });\n\nfixedThreadPool.execute(new Runnable() {\n @Override\n public void run() {\n DynamicRealm dynamicRealm = DynamicRealm.getInstance(configuration);\n dynamicRealm.beginTransaction();\n for (int i = 0; i < 2500000; i++) {\n DynamicRealmObject dynamicRealmObject = dynamicRealm.createObject(\"Goup_Table_1\", \"https://www.baidu.com?uii=\" + UUID.randomUUID());\n dynamicRealmObject.setInt(\"outletId\", 33523);\n dynamicRealmObject.setInt(\"groupId\", 232324);\n dynamicRealmObject.setInt(\"skuId\", 23523);\n }\n dynamicRealm.commitTransaction();\n trasitionCheck();\n }\n });\n",
"text": "There are 500 million datas on my server, i need down and insert those datas into my realm db.\nThe following code takes me 26 minutes:And the next code takes more time:Can only one process write to the database at the same time?",
"username": "Rafe_N_A"
},
{
"code": "",
"text": "i never use realm, but seems you are putting a lot of data within the same transaction??if yes, you may want to change it as such a big transaction is almost never a good idea. e.g. this link.",
"username": "Kobe_W"
},
{
"code": "start a background task {\n int i = 0; i < 10000; i++\n begin write transaction\n for j = 0; i < 5000; j++\n do stuff\n continue j loop\n commit write\n continue i loop\n}\n",
"text": "It’s hard to know exactly where the bottleneck is as it could be a slow internet connection, other processes happening in the app etc.However one specific point that I can share for certain is thatRealm can be very efficient when writing large amounts of data by batching together multiple mutations within a single transaction. Transactions can also be performed in the background to avoid blocking the main threadIf you break that loop up into smaller chunks, you should see a dramatic improvement (we did)Here’s some pseudo-code which is a good general design pattern for big data",
"username": "Jay"
},
{
"code": "",
"text": "but the big questionWhy do you use 1 transaction to create almost duplicate of the same object?There is no need of transaction.It looks like you are trying to establish some benchmark for something you really what to do but you are hiding some much details that we cannot do a real assessment of the issue.",
"username": "steevej"
},
{
"code": " for (int i = 0; i < 1000; i++) {\n realmDB.beginTransaction();\n for (int j = 0; j < 5000; j++) {\n QRBaseData qrBaseData = realmDB.createObject(QRBaseData.class, \"https://www.baidu.com?uii=\" + UUID.randomUUID());\n qrBaseData.setGroupId(232324);\n qrBaseData.setSkuId(23523);\n qrBaseData.setOutletId(33523);\n }\n realmDB.commitTransaction();\n }\n",
"text": "Seems not work for me It takes almost 50 minutes. Did I ignore anything?",
"username": "Rafe_N_A"
},
{
"code": "for (int i = 0; i < 5000000; i++) {\n QRBaseData qrBaseData = realmDB.createObject(QRBaseData.class, \"https://www.baidu.com?uii=\" + UUID.randomUUID());\n qrBaseData.setGroupId(232324);\n qrBaseData.setSkuId(23523);\n qrBaseData.setOutletId(33523);\n }\n",
"text": "It’s a test demo used to test how long to take when insert 500 million datas.\nYou mean there’s no need of transaction? like this?",
"username": "Rafe_N_A"
},
{
"code": "",
"text": "I essentially copy and pasted the code (that uses the inner loop), and added a timer to determine start time and end time.insert 500 million datasFor clarity, the code writes 5 Million, not 500 MillionThat code wrote 180 Mb of data and took 4.318 minutesI am running it on macOS 16GB Ram with a SSD, 3.6 GHz 8-Core Intel Core i9Based on those results, I would say the bottleneck lies outside of your code.",
"username": "Jay"
},
{
"code": "",
"text": "You mean there’s no need of transaction? like this?You do not need transaction to create N unrelated documents. Period.Yes like you shared.",
"username": "steevej"
},
{
"code": "try realm.write {\n //this closure is a write transaction\n}\nfor (int i = 0; i < 5000000; i++) {\n QRBaseData qrBaseData = realmDB.createObject(QRBaseData.class, \"https://www.baidu.com?uii=\" + UUID.randomUUID());\n qrBaseData.setGroupId(232324);\n qrBaseData.setSkuId(23523);\n qrBaseData.setOutletId(33523);\n }\n",
"text": "A bit of clarity may be needed - or I may misunderstand the meaning:You do not need transaction to create N unrelated documents. Period.I am not sure of the context of that statement but ALL writes in Realm must be within a transaction.From the docsUse realm.createObject() in a transaction to create a persistent instance of a Realm object in a realm.SeeorEven in Swift, all writes must be within a transaction, and the very nature of the code forces the developer to do it that wayThe other advantage of using transaction is that the writes in the transaction either all pass or all fail. That guarantees data integrity so you’ll never have a situation where there’s a partial write of data sent to the server within the transaction.So the above code does not actually write any data to Realm - it just creates an object over and over.Again, my testing with your original code writes all that data in about 4 minutes so the code itself is working as intended. If your write is taking much longer, the issues lies somewhere else in your code.",
"username": "Jay"
},
{
"code": "",
"text": "I am the one needed more clarity.I was thinking and commenting about normal mongodb driver. I was out of my league with Realm.Thanks for the clarification.",
"username": "steevej"
},
{
"code": "",
"text": "Tanks Jason\nIt took only 8 minutes on another phone. So, the code is not the bottleneck, the old machine is.And another question, can i use two or more Realm DB to insert data on different thread? I still need to improve the efficiency of this old machine.",
"username": "Rafe_N_A"
},
{
"code": "config",
"text": "can i use two or more Realm DBYes, you can have multiple databases on your device and interact with any of them at any time via the config parameter, just changing the Realm file name, which is the last component of the path.However, those will be treated as completely separate Realms so things like queries and forward/inverse relationships will not be possible cross realm.On the other hand, for situations where you have a LOT of static data, or where you’re using denormalization, it would work. Like if you have an inventory system - a ‘master list’ of inventory item names could be stored in one realm, and then that name (a copy) and details about the item could be stored in another.In this use case I don’t think that’s going to be applicable… or really buy any kind of speed improvement since it sounds like the data is all of the same kind so it would end up needing to be stored in the same Realm.",
"username": "Jay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Insert to multiple tables with multithreading is inefficient | 2023-07-14T02:58:42.987Z | Insert to multiple tables with multithreading is inefficient | 776 |
null | [
"queries",
"indexes",
"performance"
] | [
{
"code": "\n{\n \"explainVersion\": \"1\",\n \"queryPlanner\": {\n \"namespace\": \"Test.StockData\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"$and\": [\n { \"status\": { \"$eq\": \"Active\" } },\n { \"OpId\": { \"$eq\": \"11536\" } },\n { \"S_TIME\": { \"$lt\": 1374085800 } },\n { \"S_TIME\": { \"$gt\": 1374049700 } }\n ]\n },\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"SORT\",\n \"sortPattern\": { \"Sorter\": 1 },\n \"memLimit\": 104857600,\n \"type\": \"default\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"OpId\": 1,\n \"S_TIME\": 1,\n \"status\": 1,\n \"Sorter\": 1\n },\n \"indexName\": \"OpId_1_S_TIME_1_status_1_Sorter_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"OpId\": [],\n \"S_TIME\": [],\n \"status\": [],\n \"Sorter\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"OpId\": [\"[\\\"11536\\\", \\\"11536\\\"]\"],\n \"S_TIME\": [\n \"(1374049700, 1374085800)\"\n ],\n \"status\": [\n \"[\\\"Active\\\", \\\"Active\\\"]\"\n ],\n \"Sorter\": [\"[MinKey, MaxKey]\"]\n }\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 5674,\n \"executionTimeMillis\": 33,\n \"totalKeysExamined\": 5675,\n \"totalDocsExamined\": 5674,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 5674,\n \"executionTimeMillisEstimate\": 22,\n \"works\": 11350,\n \"advanced\": 5674,\n \"needTime\": 5675,\n \"needYield\": 0,\n \"saveState\": 11,\n \"restoreState\": 11,\n \"isEOF\": 1,\n \"docsExamined\": 5674,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"SORT\",\n \"nReturned\": 5674,\n \"executionTimeMillisEstimate\": 7,\n \"works\": 11350,\n \"advanced\": 5674,\n \"needTime\": 5675,\n \"needYield\": 0,\n \"saveState\": 11,\n \"restoreState\": 11,\n \"isEOF\": 1,\n \"sortPattern\": { \"Sorter\": 1 },\n \"memLimit\": 104857600,\n \"type\": \"default\",\n \"totalDataSizeSorted\": 453920,\n \"usedDisk\": false,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 5674,\n \"executionTimeMillisEstimate\": 1,\n \"works\": 5675,\n \"advanced\": 5674,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 11,\n \"restoreState\": 11,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"OpId\": 1,\n \"S_TIME\": 1,\n \"status\": 1,\n \"Sorter\": 1\n },\n \"indexName\": \"OpId_1_S_TIME_1_status_1_Sorter_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"OpId\": [],\n \"S_TIME\": [],\n \"status\": [],\n \"Sorter\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"OpId\": [\"[\\\"11536\\\", \\\"11536\\\"]\"],\n \"S_TIME\": [\n \"(1374049700, 1374085800)\"\n ],\n \"status\": [\n \"[\\\"Active\\\", \\\"Active\\\"]\"\n ],\n \"Sorter\": [\"[MinKey, MaxKey]\"]\n },\n \"keysExamined\": 5675,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n },\n \"allPlansExecution\": []\n },\n \"command\": {\n \"find\": \"StockData\",\n \"filter\": {\n \"OpId\": \"11536\",\n \"S_TIME\": {\n \"$gt\": 1374049700,\n \"$lt\": 1374085800\n },\n \"status\": \"Active\"\n },\n \"sort\": { \"Sorter\": 1 },\n \"skip\": 0,\n \"limit\": 0,\n \"maxTimeMS\": 30000,\n \"$db\": \"Test\"\n },\n \"serverInfo\": {\n \"host\": 
\"ip-10-23-52-181.ap-south-1.compute.internal\",\n \"port\": 7210,\n \"version\": \"5.0.2\",\n \"gitVersion\": \"6d9ec525e78465dcecadcff99cce953d380fedc8\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"ok\": 1\n }\n",
"text": "I have around 10 collections each have approx 20 million docs and increasing at speed of around 700-1000 writes per second at the same time 700-1000 read per second is made by querying through my api. Avg query time is 30-40ms per. Issue is that after 150 req/sec (using jmeter). My Avg response time is 3-7 sec which is really bad. I have M50 paid cluster (1 primary 2 secondary)\nBelow are the possible optimization that I have tried.eg query: {OpId: ‘12394’,S_TIME: {$gt: 1873703716,$lt: 1973711146},time: ‘ACTIVE’}.sort({Sorter:1})Here’s the query explainWhat shall I do to get around atleast 1000 req per seconds with avg resp time of less then 150-200ms.",
"username": "Nikunj_Guna"
},
{
"code": "",
"text": "Hi @Nikunj_Guna and welcome to MongoDB community forums!!I have around 10 collections each have approx 20 million docs and increasing at speed of around 700-1000 writes per second at the same time 700-1000 read per second is made by querying through my api.Sharding in MongoDB allows you to scale your database to handle increased load to a nearly unlimited degree by providing increased read/write throughput , storage capacity , and high availability. This would be beneficial to manage the read and writes across the sharded cluster. Please note that, selection of the right shard key plays an important role in sharding of the collections.Secondly, could you also reconsider the indexes which are created for the query executed and if it does improves the performance of the application in terms of the most frequently query being used.Avg query time is 30-40ms per. Issue is that after 150 req/sec (using jmeter). My Avg response time is 3-7 sec which is really bad. I have M50 paid cluster (1 primary 2 secondary)Tor the average timings mentioned above, are these for the same query being executed multiple times?\nIf there are more than one query involved, can you help me with the exaplin() output for the slow and the fast queries being executed?As you mentioned that you are using M50 paid tier cluster on Atlas, which gives you the leverage to use the built-in features like Performance Advisor, Real-Time Performance Panel, and Query Profiler to track operations and highlight slow/heavy spotted operations. Additionally, the Metrics tab provides many graphs that plot operations and number of connections.\nIt would be helpful for us to debug further, if you provide the above information or open a cloud support ticket with all information in place.Regards\nAasawari",
"username": "Aasawari"
},
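As a rough illustration of the sharding suggestion (the namespace and shard key below are placeholders only; choosing the key needs workload analysis), a pymongo sketch run against a mongos of a sharded cluster:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.com:27017")  # placeholder mongos URI

client.admin.command("enableSharding", "Test")
client.admin.command(
    "shardCollection",
    "Test.StockData",
    key={"OpId": 1, "S_TIME": 1},  # illustrative compound key, not a recommendation
)
```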
{
"code": "{\n \"explainVersion\": \"1\",\n \"queryPlanner\": {\n \"namespace\": \"LiveFeed.NSE_E_EQUITY_BKP\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"$and\": [\n { \"Exch\": { \"$eq\": \"NSE\" } },\n { \"ScripId\": { \"$eq\": \"11536\" } },\n { \"Start_Time\": { \"$lt\": 1374040852 } },\n { \"Start_Time\": { \"$gt\": 1373615400 } }\n ]\n },\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"Exch\": 1,\n \"ScripId\": 1,\n \"Start_Time\": -1\n },\n \"indexName\": \"sym\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"Exch\": [],\n \"ScripId\": [],\n \"Start_Time\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"Exch\": [\"[\\\"NSE\\\", \\\"NSE\\\"]\"],\n \"ScripId\": [\"[\\\"11536\\\", \\\"11536\\\"]\"],\n \"Start_Time\": [\n \"(1374040852, 1373615400)\"\n ]\n }\n }\n },\n \"rejectedPlans\": [\n {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"$and\": [\n { \"Exch\": { \"$eq\": \"NSE\" } },\n { \"ScripId\": { \"$eq\": \"11536\" } }\n ]\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": { \"Start_Time\": -1 },\n \"indexName\": \"Start_Time\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": { \"Start_Time\": [] },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"Start_Time\": [\n \"(1374040852, 1373615400)\"\n ]\n }\n }\n }\n ]\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1016,\n \"executionTimeMillis\": 2,\n \"totalKeysExamined\": 1016,\n \"totalDocsExamined\": 1016,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 1016,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1017,\n \"advanced\": 1016,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 1,\n \"isEOF\": 1,\n \"docsExamined\": 1016,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 1016,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1017,\n \"advanced\": 1016,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 1,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"Exch\": 1,\n \"ScripId\": 1,\n \"Start_Time\": -1\n },\n \"indexName\": \"sym\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"Exch\": [],\n \"ScripId\": [],\n \"Start_Time\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"Exch\": [\"[\\\"NSE\\\", \\\"NSE\\\"]\"],\n \"ScripId\": [\"[\\\"11536\\\", \\\"11536\\\"]\"],\n \"Start_Time\": [\n \"(1374040852, 1373615400)\"\n ]\n },\n \"keysExamined\": 1016,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n },\n \"allPlansExecution\": [\n {\n \"nReturned\": 101,\n \"executionTimeMillisEstimate\": 0,\n \"totalKeysExamined\": 101,\n \"totalDocsExamined\": 101,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 101,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 101,\n \"advanced\": 101,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"docsExamined\": 101,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 101,\n \"executionTimeMillisEstimate\": 0,\n 
\"works\": 101,\n \"advanced\": 101,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"keyPattern\": {\n \"Exch\": 1,\n \"ScripId\": 1,\n \"Start_Time\": -1\n },\n \"indexName\": \"sym\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"Exch\": [],\n \"ScripId\": [],\n \"Start_Time\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"Exch\": [\"[\\\"NSE\\\", \\\"NSE\\\"]\"],\n \"ScripId\": [\n \"[\\\"11536\\\", \\\"11536\\\"]\"\n ],\n \"Start_Time\": [\n \"(1374040852, 1373615400)\"\n ]\n },\n \"keysExamined\": 101,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n },\n {\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 0,\n \"totalKeysExamined\": 101,\n \"totalDocsExamined\": 101,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"$and\": [\n { \"Exch\": { \"$eq\": \"NSE\" } },\n { \"ScripId\": { \"$eq\": \"11536\" } }\n ]\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 101,\n \"advanced\": 0,\n \"needTime\": 101,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 1,\n \"isEOF\": 0,\n \"docsExamined\": 101,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 101,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 101,\n \"advanced\": 101,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 1,\n \"restoreState\": 1,\n \"isEOF\": 0,\n \"keyPattern\": { \"Start_Time\": -1 },\n \"indexName\": \"Start_Time\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": { \"Start_Time\": [] },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"Start_Time\": [\n \"(1374040852, 1373615400)\"\n ]\n },\n \"keysExamined\": 101,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n }\n ]\n },\n \"command\": {\n \"find\": \"NSE_E_EQUITY_BKP\",\n \"filter\": {\n \"ScripId\": \"11536\",\n \"Exch\": \"NSE\",\n \"Start_Time\": {\n \"$gt\": 1373615400,\n \"$lt\": 1374040852\n }\n },\n \"sort\": { \"Start_Time\": -1 },\n \"skip\": 0,\n \"limit\": 0,\n \"maxTimeMS\": 30000,\n \"$db\": \"LiveFeed\"\n },\n \"serverInfo\": {\n \"host\": \"ip-10-23-52-181.ap-south-1.compute.internal\",\n \"port\": 7210,\n \"version\": \"5.0.2\",\n \"gitVersion\": \"6d9ec525e78465dcecadcff99cce953d380fedc8\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"ok\": 1\n}\n",
"text": "Thanks for replay,Yes for now I am benchmarking using same queries.I have also tried different combinations of indexes but still slow. What confuses me is that I have collection containing 350million doc. But there is no issue there. I also tried keep same index in my collection since almost every field are present in both collection’s docs. But still no luck. I have already attached explain() o/p in post for slow queryhere’s the explain for fast one in which 350million docs are there having similar structure and compound index.",
"username": "Nikunj_Guna"
},
{
"code": "{OpId:1, status:1, Sorter:1, S_TIME:1}",
"text": "The first explain you posted has an in-memory sort.The best index for that query is one supporting the ESR “rule”An index of {OpId:1, status:1, Sorter:1, S_TIME:1} should support the query well.The equality OpId, status are first, followed by the sort order Sorter, then the range S_TIME.Looking at the explain though that might only save you ~7ms as most of the time is spent fetching the documents(22ms).",
"username": "chris"
},
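For reference, a pymongo sketch of creating the ESR-ordered index Chris describes (database, collection and connection string are placeholders based on the explain output):

```python
from pymongo import ASCENDING, MongoClient

coll = MongoClient("mongodb://localhost:27017")["Test"]["StockData"]  # placeholder URI

# Equality fields (OpId, status) first, then the sort field (Sorter),
# then the range field (S_TIME), per the ESR guideline.
coll.create_index(
    [("OpId", ASCENDING), ("status", ASCENDING),
     ("Sorter", ASCENDING), ("S_TIME", ASCENDING)]
)
```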
{
"code": "{\n \"explainVersion\": \"1\",\n \"queryPlanner\": {\n \"namespace\": \"LiveFeed.NSE_E_EQUITY_SEC_BKP\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"$and\": [\n { \"MarketStatus\": { \"$eq\": \"OPEN\" } },\n { \"ScripId\": { \"$eq\": \"11536\" } },\n { \"Start_Time\": { \"$lt\": 1374127252 } },\n { \"Start_Time\": { \"$gt\": 1374121800 } }\n ]\n },\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"MarketStatus\": 1,\n \"ScripId\": 1,\n \"Sorter\": 1,\n \"Start_Time\": 1\n },\n \"indexName\": \"MarketStatus_1_ScripId_1_Sorter_1_Start_Time_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"MarketStatus\": [],\n \"ScripId\": [],\n \"Sorter\": [],\n \"Start_Time\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"MarketStatus\": [\n \"[\\\"OPEN\\\", \\\"OPEN\\\"]\"\n ],\n \"ScripId\": [\"[\\\"11536\\\", \\\"11536\\\"]\"],\n \"Sorter\": [\"[MinKey, MaxKey]\"],\n \"Start_Time\": [\n \"(1374121800, 1374127252)\"\n ]\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 9064,\n \"executionTimeMillis\": 140,\n \"totalKeysExamined\": 69779,\n \"totalDocsExamined\": 9064,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 9064,\n \"executionTimeMillisEstimate\": 34,\n \"works\": 69779,\n \"advanced\": 9064,\n \"needTime\": 60714,\n \"needYield\": 0,\n \"saveState\": 69,\n \"restoreState\": 69,\n \"isEOF\": 1,\n \"docsExamined\": 9064,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 9064,\n \"executionTimeMillisEstimate\": 33,\n \"works\": 69779,\n \"advanced\": 9064,\n \"needTime\": 60714,\n \"needYield\": 0,\n \"saveState\": 69,\n \"restoreState\": 69,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"MarketStatus\": 1,\n \"ScripId\": 1,\n \"Sorter\": 1,\n \"Start_Time\": 1\n },\n \"indexName\": \"MarketStatus_1_ScripId_1_Sorter_1_Start_Time_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"MarketStatus\": [],\n \"ScripId\": [],\n \"Sorter\": [],\n \"Start_Time\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"MarketStatus\": [\n \"[\\\"OPEN\\\", \\\"OPEN\\\"]\"\n ],\n \"ScripId\": [\"[\\\"11536\\\", \\\"11536\\\"]\"],\n \"Sorter\": [\"[MinKey, MaxKey]\"],\n \"Start_Time\": [\n \"(1374121800, 1374127252)\"\n ]\n },\n \"keysExamined\": 69779,\n \"seeks\": 60715,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n },\n \"allPlansExecution\": []\n },\n \"command\": {\n \"find\": \"NSE_E_EQUITY_SEC_BKP\",\n \"filter\": {\n \"MarketStatus\": \"OPEN\",\n \"ScripId\": \"11536\",\n \"Start_Time\": {\n \"$gt\": 1374121800,\n \"$lt\": 1374127252\n }\n },\n \"sort\": { \"Sorter\": 1 },\n \"skip\": 0,\n \"limit\": 0,\n \"maxTimeMS\": 30000,\n \"$db\": \"LiveFeed\"\n },\n \"serverInfo\": {\n \"host\": \"ip-10-23-52-181.ap-south-1.compute.internal\",\n \"port\": 7210,\n \"version\": \"5.0.2\",\n \"gitVersion\": \"6d9ec525e78465dcecadcff99cce953d380fedc8\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n 
\"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"ok\": 1\n}\n",
"text": "@chris @Aasawari\nI tried using ESR rule also different combinations , along with asc and desc both direction but still no luck.\n{OpId:1, status:1, Sorter:1, S_TIME:1}\n{OpId:1, status:1, Sorter:1, S_TIME:-1}\n{status:1, OpId:1, Sorter:1, S_TIME:-1}\n{status:1, OpId:1, Sorter:1, S_TIME:1}\n{OpId:1, Sorter:1, S_TIME:1}\n{OpId:1, Sorter:1, S_TIME:-1}\netc etcHere’s the explain of {status:1, OpId:1, Sorter:1, S_TIME:1}for query : {MarketStatus: ‘OPEN’,ScripId: ‘11536’,Start_Time: {$gt: 1374121800,$lt: 1374127252}}",
"username": "Nikunj_Guna"
}
] | Atlas faster read write? | 2023-07-17T08:26:15.804Z | Atlas faster read write? | 598 |
null | [
"replication",
"connecting",
"mongodb-shell",
"containers",
"devops"
] | [
{
"code": "`scripts % docker network ls \nNETWORK ID NAME DRIVER SCOPE\ne090044221b8 bridge bridge local\na16aa24d85b5 host host local\n1060cad3b9ed none null local\n08c22266be93 testcompose_mongodb_network bridge local`\n`scripts % docker-compose ps\nNAME IMAGE COMMAND SERVICE CREATED STATUS PORTS\nmongodb1 mongo:5.0 \"docker-entrypoint.s…\" mongodbsvr01 About an hour ago Up About an hour 27016/tcp, 0.0.0.0:27016->27017/tcp\nmongodb2 mongo:5.0 \"docker-entrypoint.s…\" mongodbsvr02 About an hour ago Up About an hour 27018/tcp, 0.0.0.0:27018->27017/tcp\nmongodb3 mongo:5.0 \"docker-entrypoint.s…\" mongodbsvr03 About an hour ago Up About an hour 27019/tcp, 0.0.0.0:27019->27017/tcp`\n<houseadm@houseadms-iMac scripts % docker container inspect 0c64d3267a22 | grep 'IPAddress'\n\n \"SecondaryIPAddresses\": null,\n \"IPAddress\": \"\",\n \"IPAddress\": \"172.21.0.2\",\nhouseadm@houseadms-iMac scripts % docker container inspect 280b455a3abe | grep 'IPAddress'\n\n \"SecondaryIPAddresses\": null,\n \"IPAddress\": \"\",\n \"IPAddress\": \"172.21.0.4\",\nhouseadm@houseadms-iMac scripts % docker container inspect 0fe371970fe0 | grep 'IPAddress'\n\n \"SecondaryIPAddresses\": null,\n \"IPAddress\": \"\",\n \"IPAddress\": \"172.21.0.3\",\n** docker-compose.yml*\n\n*version: \"3.9\"*\n*services:*\n\n* mongodbsvr01:*\n* image: mongo:5.0*\n* restart: unless-stopped*\n* container_name: mongodb1*\n* hostname: mongodb1*\n* command: --bind_ip_all --replSet mongo-replica*\n* # command: --replSet mongo-replica*\n* environment:*\n* DB: mongodb*\n* networks:*\n* mongodb_network:*\n* # ipv4_address: 127.0.10.5*\n* volumes:*\n* - mgodb_data_1:/data/db*\n* - mgodb_logs_1:/data/logs*\n* expose:*\n* - 27016*\n* ports:*\n* - 27016:27017*\n\n* mongodbsvr02:*\n* image: mongo:5.0*\n* restart: always*\n* container_name: mongodb2*\n* hostname: mongodb2*\n* command: --bind_ip_all --replSet mongo-replica*\n* environment:*\n* DB: mongodb*\n* networks:*\n* mongodb_network:*\n* # ipv4_address: 127.0.10.2*\n* volumes:*\n* - mgodb_data_2:/data/db*\n* - mgodb_logs_2:/data/logs*\n* expose:*\n* - 27018*\n* ports:*\n* - 27018:27017*\n\n* mongodbsvr03:*\n* image: mongo:5.0*\n* restart: unless-stopped*\n* container_name: mongodb3*\n* hostname: mongodb3*\n* command: --bind_ip_all --replSet mongo-replica*\n* environment:*\n* DB: mongodb*\n* networks:*\n* mongodb_network:*\n* volumes:*\n* - mgodb_data_3:/data/db*\n* - mgodb_logs_3:/data/logs*\n* - ./scripts:/scripts*\n* expose:*\n* - 27019*\n* ports:*\n* - 27019:27017*\n\n* mongosetup: *\n* image: mongo:5.0*\n* restart: no*\n* depends_on:*\n* - mongodbsvr01*\n* - mongodbsvr02*\n* - mongodbsvr03*\n\n* networks:*\n* mongodb_network:*\n* volumes:*\n* - ./scripts:/scripts*\n* environment:*\n* DB: mongodb*\n* entrypoint: [ \"bash\", \"-c\", \"sh ./scripts/mongo_setup.sh\"]*>\n\n*volumes:*\n* mgodb_data_1:*\n* mgodb_logs_1:*\n* mgodb_data_2:*\n* mgodb_logs_2:*\n* mgodb_data_3:*\n* mgodb_logs_3:*\n\n\n*networks:*\n* mongodb_network:*\n* driver: bridge*\n*#!/bin/bash*\n*sleep 15*\n\n*echo SETUP.sh time now: `date +\"%T\" `*\n*mongo --host mongodb1:27017 <<EOF*\n* var cfg = {*\n* \"_id\": \"mongo-replica\",*\n* \"version\": 1,*\n* \"members\": [*\n* {*\n* \"_id\": 0,*\n* \"host\": \"mongodb1:27017\",*\n* \"priority\": 2*\n* },*\n* {*\n* \"_id\": 1,*\n* \"host\": \"mongodb2:27017\",*\n* \"priority\": 1*\n* },*\n* {*\n* \"_id\": 2,*\n* \"host\": \"mongodb3:27017\",*\n* \"priority\": 0*\n* }*\n* ]*\n* };*\n* rs.initiate(cfg, { force: true });*\n* rs.reconfig(cfg, { force: true });*\n* rs.secondaryOk();*\n* 
db.getMongo().setReadPref('nearest');*\n* db.getMongo().setSecondaryOk();*\n* replSetStepDown(cfg, { force: true });*\n*EOF*\nhouseadm@houseadms-iMac scripts % docker ps \nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n0c64d3267a22 mongo:5.0 \"docker-entrypoint.s…\" 51 minutes ago Up 51 minutes 27016/tcp, 0.0.0.0:27016->27017/tcp mongodb1\n280b455a3abe mongo:5.0 \"docker-entrypoint.s…\" 51 minutes ago Up 51 minutes 27019/tcp, 0.0.0.0:27019->27017/tcp mongodb3\n0fe371970fe0 mongo:5.0 \"docker-entrypoint.s…\" 51 minutes ago Up 51 minutes 27018/tcp, 0.0.0.0:27018->27017/tcp mongodb2\nhouseadm@houseadms-iMac scripts % docker exec -it mongodb2 mongosh --eval \"rs.status()\"\n\nCurrent Mongosh Log ID: 64b768ce32f675388996d04f\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1\nUsing MongoDB: 5.0.19\nUsing Mongosh: 1.10.1\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n\nTo help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).\nYou can opt-out by running the disableTelemetry() command.\n\n------\n The server generated these startup warnings when booting\n 2023-07-19T03:45:52.242+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n 2023-07-19T03:45:53.510+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n------\n\n{\n set: 'mongo-replica',\n date: ISODate(\"2023-07-19T04:38:38.607Z\"),\n myState: 2,\n term: Long(\"1\"),\n syncSourceHost: 'mongodb1:27017',\n syncSourceId: 0,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 3,\n writableVotingMembersCount: 3,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n lastCommittedWallTime: ISODate(\"2023-07-19T04:38:29.430Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n appliedOpTime: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n durableOpTime: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n lastAppliedWallTime: ISODate(\"2023-07-19T04:38:29.430Z\"),\n lastDurableWallTime: ISODate(\"2023-07-19T04:38:29.430Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1689741499, i: 1 }),\n electionParticipantMetrics: {\n votedForCandidate: true,\n electionTerm: Long(\"1\"),\n lastVoteDate: ISODate(\"2023-07-19T03:46:19.140Z\"),\n electionCandidateMemberId: 0,\n voteReason: '',\n lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1689738367, i: 1 }), t: Long(\"-1\") },\n maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1689738367, i: 1 }), t: Long(\"-1\") },\n priorityAtElection: 1,\n newTermStartDate: ISODate(\"2023-07-19T03:46:19.168Z\"),\n newTermAppliedDate: ISODate(\"2023-07-19T03:46:20.508Z\")\n },\n members: [\n {\n _id: 0,\n name: 'mongodb1:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 3150,\n optime: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n optimeDurable: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n optimeDate: ISODate(\"2023-07-19T04:38:29.000Z\"),\n optimeDurableDate: ISODate(\"2023-07-19T04:38:29.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-07-19T04:38:29.430Z\"),\n lastDurableWallTime: ISODate(\"2023-07-19T04:38:29.430Z\"),\n lastHeartbeat: 
ISODate(\"2023-07-19T04:38:38.155Z\"),\n lastHeartbeatRecv: ISODate(\"2023-07-19T04:38:37.335Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1689738379, i: 1 }),\n electionDate: ISODate(\"2023-07-19T03:46:19.000Z\"),\n configVersion: 32556,\n configTerm: -1\n },\n {\n _id: 1,\n name: 'mongodb2:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 3166,\n optime: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n optimeDate: ISODate(\"2023-07-19T04:38:29.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-07-19T04:38:29.430Z\"),\n lastDurableWallTime: ISODate(\"2023-07-19T04:38:29.430Z\"),\n syncSourceHost: 'mongodb1:27017',\n syncSourceId: 0,\n infoMessage: '',\n configVersion: 32556,\n configTerm: -1,\n self: true,\n lastHeartbeatMessage: ''\n },\n {\n _id: 2,\n name: 'mongodb3:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 3150,\n optime: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n optimeDurable: { ts: Timestamp({ t: 1689741509, i: 1 }), t: Long(\"1\") },\n optimeDate: ISODate(\"2023-07-19T04:38:29.000Z\"),\n optimeDurableDate: ISODate(\"2023-07-19T04:38:29.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-07-19T04:38:29.430Z\"),\n lastDurableWallTime: ISODate(\"2023-07-19T04:38:29.430Z\"),\n lastHeartbeat: ISODate(\"2023-07-19T04:38:38.187Z\"),\n lastHeartbeatRecv: ISODate(\"2023-07-19T04:38:38.186Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: 'mongodb1:27017',\n syncSourceId: 0,\n infoMessage: '',\n configVersion: 32556,\n configTerm: -1\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1689741509, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1689741509, i: 1 })\n}\nscripts % docker exec -it mongodb2 mongosh --eval 'var m = db.isMaster(); print(\"Is Primary?\", m.ismaster); print(\"hosts:\", m.hosts); print(\"Primary:\", m.primary)' --quiet \nIs Primary? false\nhosts: [ 'mongodb1:27017', 'mongodb2:27017' ]\nPrimary: mongodb1:27017\n\niMac scripts % mongosh --host mongodb2:27017 --eval 'var m = db.isMaster(); print(\"Is Primary?\", m.ismaster); print(\"hosts:\", m.hosts); print(\"Primary:\", m.primary)' --quiet\nMongoServerSelectionError: Server selection timed out after 30000 ms\nhouseadm@houseadms-iMac scripts % mongosh \"mongodb://mongodb1:27017,mongodb2:27017,mongodb3:27017/mongodb?replicaSet=mongo-replica\"\n\nCurrent Mongosh Log ID: 64b76a37fd5bf937a6cca38f\nConnecting to: mongodb://mongodb1:27017,mongodb2:27017,mongodb3:27017/mongodb?replicaSet=mongo-replica&appName=mongosh+1.10.1\nMongoServerSelectionError: Server selection timed out after 30000 ms\n\nhouseadm@houseadms-iMac testcompose % mongosh \"mongodb://mongodb1:27017,mongodb2:27017,mongodb3:27017/database?replicaSet=mongo-replica\"\n\nCurrent Mongosh Log ID: 64b6930329ab4371a505f7b4\nConnecting to: mongodb://mongodb1:27017,mongodb2:27017,mongodb3:27017/database?replicaSet=mongo-replica&appName=mongosh+1.10.1\n\nMongoNetworkError: getaddrinfo ENOTFOUND mongodb1",
"text": "Hi everyone\nFrom what I see, I’m more concerned with connectivity issues when we talk about using MongoDB in Docker. And I would like to be able to count on the help of the community.\nI’m running the project in:\nDocker version 24.0.2, build cb74dfc\nmacOS Monterey version 12.6.7And I’m also having trouble connecting to the replica set running in a docker container. This project’s infrastructure has 3 nodes. And it has the default setting.Option used to star my project : docker-compose up -dabout the configurationConfiguring replicationSome testsAnother when a try to connect using mongosh or connect to replica setMongoNetworkError: getaddrinfo ENOTFOUND mongodb1An interesting point to share is that through Studio 3T, the connection via node is also successful, as well as via “docker exe”\nI hope I managed to share all the procedures performed in search of a workaround solution. But without success…Regards\nCarlos",
"username": "Carlos_Alberto_da_Silva_Junior1"
},
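The getaddrinfo ENOTFOUND mongodb1 error above usually means the macOS host cannot resolve the container hostnames that the replica set members advertise. Below is a minimal diagnostic sketch with the Node.js driver, assuming the published host ports from the docker ps output (27016, 27018, 27019) and that the mongodb package is installed; directConnection=true bypasses replica-set discovery so each member can be checked on its own:

```ts
// Hypothetical diagnostic sketch (not from the original post): check each replica set
// member through its published localhost port. Assumes `npm install mongodb`.
import { MongoClient } from "mongodb";

// Host ports published by docker compose: mongodb1 -> 27016, mongodb2 -> 27018, mongodb3 -> 27019.
const memberPorts = [27016, 27018, 27019];

async function checkMembers(): Promise<void> {
  for (const port of memberPorts) {
    // directConnection=true skips replica-set discovery, so the advertised
    // hostnames (mongodb1/2/3) do not need to resolve from the macOS host.
    const uri = `mongodb://127.0.0.1:${port}/?directConnection=true`;
    const client = new MongoClient(uri, { serverSelectionTimeoutMS: 2000 });
    try {
      await client.connect();
      const hello = await client.db("admin").command({ hello: 1 });
      console.log(`port ${port}: isWritablePrimary=${hello.isWritablePrimary}, setName=${hello.setName}`);
    } catch (err) {
      console.error(`port ${port}: ${(err as Error).message}`);
    } finally {
      await client.close();
    }
  }
}

checkMembers();
```

A seed-list connection with replicaSet=mongo-replica can only succeed from the host once mongodb1, mongodb2 and mongodb3 resolve there and each member is reachable on the port it advertises; that is why the per-member direct connections tend to work while the replica-set connection string fails with ENOTFOUND.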
{
"code": " image: mongo:5.0\n restart: unless-stopped\n container_name: mongodb1\n hostname: mongodb1\n command: --replSet mongors\n environment:\n DB: mongodb\n networks:\n mongodb_network:\n ipv4_address: 172.20.0.2\n volumes:\n - mgodb_data_1:/data/db\n - mgodb_logs_1:/data/logs\n expose:\n - 27017\n ports:\n - 28017:27017\n\n mongodbsvr02:\n ...\n networks:\n mongodb_network:\n ipv4_address: 172.20.0.3\n...\n expose:\n - 27017\n ports:\n - 28018:27017\n.....\n\n mongodbsvr03:\n...\n networks:\n mongodb_network:\n ipv4_address: 172.20.0.4\n...\n expose:\n - 27017\n ports:\n - 28019:27017\n\nnetworks:\n mongodb_network:\n # driver: bridge\n ipam:\n driver: default\n config:\n - subnet: 172.20.0.0/24`\n\n**TPC Listen Tests**\n[\n {\n \"Name\": \"testcompose_mongodb_network\",\n \"Id\": \"c0623efc620722a892ff2b8b57c6a16ff2cfa81a16dad60645b6c90f35ebe482\",\n \"Created\": \"2023-07-19T23:19:04.671723761Z\",\n \"Scope\": \"local\",\n \"Driver\": \"bridge\",\n \"EnableIPv6\": false,\n \"IPAM\": {\n \"Driver\": \"default\",\n \"Options\": null,\n \"Config\": [\n {\n \"Subnet\": \"172.20.0.0/24\"\n }\n ]\n },\n \"Internal\": false,\n \"Attachable\": false,\n \"Ingress\": false,\n \"ConfigFrom\": {\n \"Network\": \"\"\n },\n \"ConfigOnly\": false,\n \"Containers\": {\n \"2d6708254ee69c1c154510cf107bfd7145fbdf53f7a989c0ff79c235eb67451c\": {\n \"Name\": \"mongodb2\",\n \"EndpointID\": \"3f98e00fe6b6513981c1f325fcaea307875b59bb7cf8655e888cf086258d47e0\",\n \"MacAddress\": \"02:42:ac:14:00:03\",\n \"IPv4Address\": \"172.20.0.3/24\",\n \"IPv6Address\": \"\"\n },\n \"910c8d8e143f4b9178ddc1d7b7d4fd9c653a26eb8b546869fa32ee5b94e52144\": {\n \"Name\": \"mongodb1\",\n \"EndpointID\": \"534df4471b4b29bf723e80792231be3ad7ddc510819b8d73efbbd98eeca9eff7\",\n \"MacAddress\": \"02:42:ac:14:00:02\",\n \"IPv4Address\": \"172.20.0.2/24\",\n \"IPv6Address\": \"\"\n },\n \"b1973fba7e2bd2374eddd2b22594b8dd84211259c3ab3f9606ecc11b3142559d\": {\n \"Name\": \"mongodb3\",\n \"EndpointID\": \"5ac7be6b1fdea003651d89e62f421abaf21f5ed8976a7ecdd6bc869e91d68b98\",\n \"MacAddress\": \"02:42:ac:14:00:04\",\n \"IPv4Address\": \"172.20.0.4/24\",\n \"IPv6Address\": \"\"\n }\n },\n \"Options\": {},\n \"Labels\": {\n \"com.docker.compose.network\": \"mongodb_network\",\n \"com.docker.compose.project\": \"testcompose\",\n \"com.docker.compose.version\": \"2.19.1\"\n }\n }\n]`\n\n**Connect using docker exe**\n\n<ins>cluster </ins><ins>health </ins>\nhouseadm@houseadms-iMac testcompose % docker exec -it mongodb1 bash root@mongodb1:/#mongosh --version 1.10.1 \"mongors/172.20.0.2:27017,172.20.0.3:27017,172.20.0.4:27017\" mongodb\nhouseadm@houseadms-iMac testcompose % cat /etc/hosts \n\n127.0.0.1 localhost\n255.255.255.255 broadcasthost\n::1 localhost\n#example\n#127.0.10.1 mongo-0-a\n#127.0.10.2 mongo-0-b\n#127.0.10.3 mongo-0-c\n#\n172.21.0.2 mongodb1\n172.21.0.3 mongodb2\n172.21.0.4 mongodb3\n\n172.21.0.2 mongodbsvr01\n172.21.0.3 mongodbsvr02\n172.21.0.4 mongodbsvr03`\n\n",
"text": "Hi,\nToday I looked for a new cofiguration strategy based on a document here in the community. However, unfortunately I still observe the same scenario.Configuring replicationabout the configurationin docker-compose.yml % lsof -iTCP -sTCP:LISTEN -P -nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\nControlCe 396 houseadm 20u IPv4 0x33f499f53a953481 0t0 TCP *:7000 (LISTEN)\nControlCe 396 houseadm 21u IPv6 0x33f499f0726cc421 0t0 TCP *:7000 (LISTEN)\nControlCe 396 houseadm 22u IPv4 0x33f499f53a9529d9 0t0 TCP *:5000 (LISTEN)\nControlCe 396 houseadm 23u IPv6 0x33f499f0726ccb21 0t0 TCP *:5000 (LISTEN)\nrapportd 401 houseadm 4u IPv4 0x33f499f52fee19d9 0t0 TCP *:62025 (LISTEN)\nrapportd 401 houseadm 7u IPv6 0x33f499f0726cfc21 0t0 TCP *:62025 (LISTEN)\ncom.docke 61040 houseadm 195u IPv6 0x33f499f075eb9121 0t0 TCP *:28018 (LISTEN)\ncom.docke 61040 houseadm 196u IPv6 0x33f499f075eb9f21 0t0 TCP *:28017 (LISTEN)\ncom.docke 61040 houseadm 198u IPv6 0x33f499f075eb9821 0t0 TCP *:28019 (LISTEN)`Inspect Networkdocker network inspect testcompose_mongodb_network docker exec mongodb1 bash -c ‘mongo --eval “rs.status()”’MongoDB shell version v5.0.19\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“daa166f0-f8f3-4a83-b9e7-af6fdcb214ce”) }\nMongoDB server version: 5.0.19\n{\n“set” : “mongors”,\n“date” : ISODate(“2023-07-20T00:18:17.793Z”),\n“myState” : 1,\n“term” : NumberLong(1),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“heartbeatIntervalMillis” : NumberLong(2000),\n“majorityVoteCount” : 2,\n“writeMajorityCount” : 2,\n“votingMembersCount” : 3,\n“writableVotingMembersCount” : 3,\n“optimes” : {\n“lastCommittedOpTime” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“lastCommittedWallTime” : ISODate(“2023-07-20T00:18:07.887Z”),\n“readConcernMajorityOpTime” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“appliedOpTime” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“durableOpTime” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“lastAppliedWallTime” : ISODate(“2023-07-20T00:18:07.887Z”),\n“lastDurableWallTime” : ISODate(“2023-07-20T00:18:07.887Z”)\n},\n“lastStableRecoveryTimestamp” : Timestamp(1689812287, 1),\n“electionCandidateMetrics” : {\n“lastElectionReason” : “electionTimeout”,\n“lastElectionDate” : ISODate(“2023-07-19T23:19:32.484Z”),\n“electionTerm” : NumberLong(1),\n“lastCommittedOpTimeAtElection” : {\n“ts” : Timestamp(1689808761, 1),\n“t” : NumberLong(-1)\n},\n“lastSeenOpTimeAtElection” : {\n“ts” : Timestamp(1689808761, 1),\n“t” : NumberLong(-1)\n},\n“numVotesNeeded” : 2,\n“priorityAtElection” : 2,\n“electionTimeoutMillis” : NumberLong(10000),\n“numCatchUpOps” : NumberLong(0),\n“newTermStartDate” : ISODate(“2023-07-19T23:19:32.526Z”),\n“wMajorityWriteAvailabilityDate” : ISODate(“2023-07-19T23:19:33.852Z”)\n},\n“members” : [\n{\n“_id” : 0,\n“name” : “mongodb1:27017”,\n“health” : 1,\n“state” : 1,\n“stateStr” : “PRIMARY”,\n“uptime” : 3552,\n“optime” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“optimeDate” : ISODate(“2023-07-20T00:18:07Z”),\n“lastAppliedWallTime” : ISODate(“2023-07-20T00:18:07.887Z”),\n“lastDurableWallTime” : ISODate(“2023-07-20T00:18:07.887Z”),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“electionTime” : Timestamp(1689808772, 1),\n“electionDate” : ISODate(“2023-07-19T23:19:32Z”),\n“configVersion” : 79652,\n“configTerm” : -1,\n“self” : true,\n“lastHeartbeatMessage” : “”\n},\n{\n“_id” : 
1,\n“name” : “mongodb2:27017”,\n“health” : 1,\n“state” : 2,\n“stateStr” : “SECONDARY”,\n“uptime” : 3536,\n“optime” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“optimeDurable” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“optimeDate” : ISODate(“2023-07-20T00:18:07Z”),\n“optimeDurableDate” : ISODate(“2023-07-20T00:18:07Z”),\n“lastAppliedWallTime” : ISODate(“2023-07-20T00:18:07.887Z”),\n“lastDurableWallTime” : ISODate(“2023-07-20T00:18:07.887Z”),\n“lastHeartbeat” : ISODate(“2023-07-20T00:18:16.417Z”),\n“lastHeartbeatRecv” : ISODate(“2023-07-20T00:18:16.241Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncSourceHost” : “mongodb1:27017”,\n“syncSourceId” : 0,\n“infoMessage” : “”,\n“configVersion” : 79652,\n“configTerm” : -1\n},\n{\n“_id” : 2,\n“name” : “mongodb3:27017”,\n“health” : 1,\n“state” : 2,\n“stateStr” : “SECONDARY”,\n“uptime” : 3536,\n“optime” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“optimeDurable” : {\n“ts” : Timestamp(1689812287, 1),\n“t” : NumberLong(1)\n},\n“optimeDate” : ISODate(“2023-07-20T00:18:07Z”),\n“optimeDurableDate” : ISODate(“2023-07-20T00:18:07Z”),\n“lastAppliedWallTime” : ISODate(“2023-07-20T00:18:07.887Z”),\n“lastDurableWallTime” : ISODate(“2023-07-20T00:18:07.887Z”),\n“lastHeartbeat” : ISODate(“2023-07-20T00:18:16.417Z”),\n“lastHeartbeatRecv” : ISODate(“2023-07-20T00:18:16.240Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncSourceHost” : “mongodb1:27017”,\n“syncSourceId” : 0,\n“infoMessage” : “”,\n“configVersion” : 79652,\n“configTerm” : -1\n}\n],\n“ok” : 1,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1689812287, 1),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n},\n“operationTime” : Timestamp(1689812287, 1)`bash connections\nhouseadm@houseadms-iMac testcompose % docker exec -it mongodb1 bash root@mongodb1:/#mongosh connections with docker exe\n`docker exec -it mongodb1 mongoshCurrent Mongosh Log ID: 64b8828b52c0d5d3ac3d4e57\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1\nUsing MongoDB: 5.0.19\nUsing Mongosh: 1.10.1For mongosh info see: https://docs.mongodb.com/mongodb-shell/The server generated these startup warnings when booting\n2023-07-19T23:19:05.873+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n2023-07-19T23:19:06.916+00:00: Access control is not enabled for the database. 
Read and write access to data and configuration is unrestrictedmongors [direct: primary] test `by mongosh\nthe same result = refusedmongosh --version 1.10.1Configuring replication\nabout the configuration**outside container\"\nin my mac`mongosh --host \\“mongors/172.20.0.2:27017,172.20.0.3:27017,172.20.0.4:27017” mongodbCurrent Mongosh Log ID: 64b885daf9a9c77f5a038e2f\nConnecting to: mongodb://172.20.0.2:27017,172.20.0.3:27017,172.20.0.4:27017/mongodb?replicaSet=mongors&appName=mongosh+1.10.1\nMongoServerSelectionError: connection timed out`**inside the container\"`root@mongodb1:/# mongosh --host \\Current Mongosh Log ID: 64b8870cb701fb873415832e\nConnecting to: mongodb://172.20.0.2:27017,172.20.0.3:27017,172.20.0.4:27017/mongodb?replicaSet=mongors&appName=mongosh+1.10.1\nUsing MongoDB: 5.0.19\nUsing Mongosh: 1.10.1mongors [primary] mongodb``root@mongodb1:/# mongo --host mongodb://172.20.0.2:27017,172.20.0.3:27017,172.20.0.4:27017/test?replicaSet=mongorsMongoDB shell version v5.0.19\nconnecting to: mongodb://172.20.0.2:27017,172.20.0.3:27017,172.20.0.4:27017/test?compressors=disabled&gssapiServiceName=mongodb&replicaSet=mongors\nImplicit session: session { “id” : UUID(“9aaa4e0d-deaf-4e97-a2aa-6dfc69ca3dda”) }\nMongoDB server version: 5.0.19Warning: the “mongo” shell has been superseded by “mongosh”\nwhich delivers improved usability and compatibility.The “mongo” shell has been deprecated and will be removed in\nan upcoming release.\nFor installation instructions, seeWelcome to the MongoDB shell.\nFor interactive help, type “help”.\nFor more comprehensive documentation, see\nhttps://docs.mongodb.com/\nQuestions? Try the MongoDB Developer Community Forums\nhttps://community.mongodb.comThe server generated these startup warnings when booting:\n2023-07-19T23:19:05.873+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n2023-07-19T23:19:06.916+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\nmongors:PRIMARY> show dbs\nadmin 0.000GB\nconfig 0.000GB\nlocal 0.000GB`The other two new variables in relation to the tests performed yesterdayhouseadm@houseadms-iMac testcompose %telnet mongodb1 27017\nTrying 172.21.0.2houseadm@houseadms-iMac testcompose %telnet mongodbsvr01 27017\nTrying 172.21.0.2Another was possible to access from Studio 3T, using the 0.0.0.0 Port. Not anymore today, yesterday it was possible using de first confuguartion.\nScreenshot 2023-07-19 at 22.27.08689×797 109 KB\n\nScreenshot 2023-07-19 at 22.32.12696×923 108 KB\n\nScreenshot 2023-07-19 at 22.34.46686×799 92.5 KB\nAnd finally I changed the default IP address on to or new 172.21.0.0/24 used by replica set in Docker Desktop\nScreenshot 2023-07-19 at 22.36.31765×435 27.1 KB\nRegards,Carlos",
"username": "Carlos_Alberto_da_Silva_Junior1"
}
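On Docker Desktop for macOS, the bridge network addresses (172.20.0.x / 172.21.0.x above) are not routable from the host, which is consistent with the connection timed out results; the member hostnames also have to resolve to addresses the host can actually reach before a seed-list connection can work. A small sketch, assuming Node.js on the host and the hostnames from the posts above, to confirm both conditions:

```ts
// Hypothetical sketch: verify from the macOS host that each advertised member
// name resolves and accepts TCP connections on the replica set port.
import { promises as dns } from "dns";
import net from "net";

const members = ["mongodb1", "mongodb2", "mongodb3"];

function tryConnect(host: string, port: number): Promise<string> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port, timeout: 2000 });
    socket.on("connect", () => { socket.destroy(); resolve("reachable"); });
    socket.on("timeout", () => { socket.destroy(); resolve("timed out"); });
    socket.on("error", (err) => resolve(err.message));
  });
}

async function main(): Promise<void> {
  for (const host of members) {
    try {
      const { address } = await dns.lookup(host);
      console.log(`${host} resolves to ${address}: ${await tryConnect(host, 27017)}`);
    } catch {
      // Same failure mode as the driver's getaddrinfo ENOTFOUND error.
      console.log(`${host} does not resolve from this machine`);
    }
  }
}

main();
```

If a name resolves but only to a container-internal IP, the connection will still time out from the host, so any /etc/hosts entries need to point at addresses whose ports are actually published.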
] | MongoDB Replica Docker: timeout/unsuccessful connection with replica set, only individual connection on macOS using "Docker" | 2023-07-19T05:18:22.142Z | MongoDB Replica Docker: timeout/unsuccessful connection with replica set, only individual connection on macOS using “Docker” | 753 |
[
"migration"
] | [
{
"code": "",
"text": "When executing MongoDB Realtional Migrator with Oracle, it does not display the schemes to select and continue with the migration, here I leave images of what was done in ubuntu.\nCaptura desde 2023-07-19 12-47-44884×382 16.8 KB\n",
"username": "edisoncsi"
},
{
"code": "",
"text": "Hi @edisoncsi, thanks for trying Relational Migrator and sorry to hear you’re having issues.\nCould you let me know which version of Oracle you are trying to connect to, and what version of the JDBC driver you have installed?Tom",
"username": "tomhollander"
}
] | Does not display database schemas for migration in MongoDB Relational Migrator | 2023-07-19T17:53:57.367Z | Does not display database schemas for migration in MongoDB Relational Migrator | 443 |
|
[
"swift"
] | [
{
"code": "",
"text": "Everytime I run the app on the xcode simulator there’s about a 100 extra requests every minute on my realm application dashboard. Is this normal? This is how it looks in the console.\n\nScreenshot 2023-01-31 at 01.57.371920×1364 388 KB\n",
"username": "Tim_Tati"
},
{
"code": "",
"text": "That doesn’t look crazy abnormal, and the changes size is 0 so I suspect there’s activity in the app triggering this. Have you isolated it to any code you can share.If not, whatever you’re doing with the app starts, don’t do that as a test and see if you see the same activity. If not, slowly reintroduce code until you spot it again and then share that code with us.",
"username": "Jay"
},
{
"code": "",
"text": "Hello Again,I’ve tried to just isolate it but i cant really pinpoint it to any specific code.The logs just show “changing query from existing query to new query”.The whole log history is filled with these sync requests happening every second. I’ve compiled the app multiple times and left it idle and even without any activity it keeps updating the query multiple times a second. I’ve cross checked the queries as well but they’re absolutely identical.The syncmanager loglevel is set to debug but hasnt been very helpful. The changesets have still been 0 but the requests dont stop. How can i fix this? It hasnt affected device sync speed or function but the requests are only free until the first Million so it doesnt seem efficient to just let this\nissue be.This is how the logs look\n49db2bf3725e370ab3981d0dbffcdca52305×1059 354 KB\n\n6b9029d8d0f5b8fa2f038d7d004f58b42800×1070 239 KB\n\nScreenshot 2023-03-09 at 17.47.331704×1604 449 KB\n",
"username": "Timothy_Tati"
},
{
"code": "",
"text": "Does you app have any observers? If so, what happens if those are disabled.Do you call any server functions? How about authenticating users or is this a single user - standalone app?",
"username": "Jay"
},
{
"code": "",
"text": "No i dont have any observers anywhere. I dont use any server functions except for an aggregation pipeline thats only triggered when searching. Its a single user app. Authentication happens only once on the login screen.",
"username": "Timothy_Tati"
},
{
"code": "",
"text": "well, looking at the logs I would say there’s either a query that’s being called/updated, or data written to realm that falls within the query parameters. Just a guess.",
"username": "Jay"
},
{
"code": "@AsyncOpen@AutoOpenlet _ = Self._printChanges()@ObservableObject",
"text": "Are you by chance using the SwiftUI property wrappers @AsyncOpen or @AutoOpen? If so, they open up a brand new connection every time the view that they are a part of is rendered. You can see why that view is getting rendered with let _ = Self._printChanges() in your view body.In my opinion these property wrappers aren’t really usable outside a tutorial because of this issue. I’m having more success using the normal Swift SDK and @ObservableObject environment objects to accomplish something similar.",
"username": "Jonathan_Czeck"
},
{
"code": " realm.subscriptions.update(subs => { \n///collections to subscribe\n});\n",
"text": "Had the same issue. For me it was because I had multiplewithout awaiting it",
"username": "Tam_Nguyen1"
}
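For the flexible sync case in the last reply, realm.subscriptions.update returns a promise that resolves once the new subscription set has been synchronized, so issuing several updates without awaiting them can produce the kind of repeated "changing query" traffic shown earlier in the thread. A minimal sketch with the Realm JS SDK; the Task collection name and the filter are placeholders, not taken from the original posts:

```ts
// Minimal sketch of awaiting a flexible sync subscription update (Realm JS SDK).
// The "Task" collection name and the filter are placeholders, not from the original post.
import Realm from "realm";

async function updateSubscriptions(realm: Realm): Promise<void> {
  // Batch all subscription changes into a single update and wait for the new
  // subscription set to be synchronized before issuing further changes.
  await realm.subscriptions.update((mutableSubs) => {
    mutableSubs.add(realm.objects("Task").filtered("status == $0", "open"), {
      name: "openTasks",
    });
  });
}
```

Batching changes into a single awaited update keeps the server from processing a new subscription-set change for every call, which would be consistent with the many zero-size changesets in the request log.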
] | Realm Swift Excessive Sync Requests | 2023-01-31T18:04:44.461Z | Realm Swift Excessive Sync Requests | 1,587 |