image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [] | [
{
"code": "",
"text": "Hello,\nIm using Atlas App Services. I create some functions and it works correctly but it’s only on mongo cloud.\nI wonder if there is any way to host these functions in my gitlab (I mean code and test these functions on my local gitlab) and deploy automatically on mongo cloud via ci gitlab?",
"username": "Thuc_NGUYEN1"
},
{
"code": "",
"text": "Hello,Short answer, no.Long answer, yes.For the long answer, you’d have to build a function in Atlas as an API service and connect Gitlab to the Atlas cluster. Or use the Admin API through your deployments.These are the only ways I’ve been successful in connecting Gitlab to Atlas.",
"username": "Brock"
}
] | Deploy automatically app services functions via gitlab | 2023-03-30T09:47:55.087Z | Deploy automatically app services functions via gitlab | 700 |
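Brock’s “Admin API” route above can be scripted from a GitLab CI job. The sketch below is only an illustration (not from the thread): it authenticates against the App Services Admin API with an Atlas programmatic API key and lists an app’s Functions, which is the hook point a pipeline could use to create or update them from files kept in GitLab. The environment variable names and the group/app IDs are placeholders, and the Admin API base URL is the one documented at the time of writing, so verify it before relying on this.

```js
// Hypothetical CI helper (Node 18+, global fetch). Placeholders: ATLAS_PUBLIC_KEY,
// ATLAS_PRIVATE_KEY, GROUP_ID (Atlas project id), APP_ID (App Services app id).
const ADMIN_API = "https://realm.mongodb.com/api/admin/v3.0";

async function login(publicKey, privateKey) {
  // Exchange the programmatic API key for a short-lived Admin API access token.
  const res = await fetch(`${ADMIN_API}/auth/providers/mongodb-cloud/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username: publicKey, apiKey: privateKey }),
  });
  if (!res.ok) throw new Error(`login failed: ${res.status}`);
  return (await res.json()).access_token;
}

async function listFunctions(token, groupId, appId) {
  // A deploy job would diff this list against the function sources in the repo
  // and create/update the ones that changed.
  const res = await fetch(`${ADMIN_API}/groups/${groupId}/apps/${appId}/functions`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`listing functions failed: ${res.status}`);
  return res.json();
}

(async () => {
  const token = await login(process.env.ATLAS_PUBLIC_KEY, process.env.ATLAS_PRIVATE_KEY);
  const fns = await listFunctions(token, process.env.GROUP_ID, process.env.APP_ID);
  console.log(fns.map((f) => f.name));
})();
```

In practice it is often simpler to let the CI job run the App Services CLI (for example `realm-cli push` with the same API key pair) instead of calling the Admin API directly; either way the function sources live in Git and the deploy happens on merge.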
null | [
"replication",
"ops-manager"
] | [
{
"code": "- name: Initiate Automation\n hosts: om_hosts[0]\n gather_facts: yes\n vars:\n project_projects: \"{{ lookup('file', '/home/ec2-user/commercetools-iac-staging-env/ansible/project_projects.yaml') }}\"\n org_private_key: \"{{ lookup('file', '/home/ec2-user/commercetools-iac-staging-env/ansible/org_private_key.yaml') }}\"\n org_public_key: \"{{ lookup('file', '/home/ec2-user/commercetools-iac-staging-env/ansible/org_public_key.yaml') }}\"\n roles:\n - automation\n- name: Ignore warnings raised for files with unknown extensions while loading (2.7)\n include_vars:\n dir: vars\n ignore_unknown_extensions: True\n extensions: ['yaml', 'yml', 'json']\n\n- name: \"Set OM Base URL\"\n set_fact:\n om_base_url: \"{{ om_url.0 }}/api/public/v1.0\"\n\n###### Run the below projects one at a time by commenting and uncommenting individual blocks below\n\n- name: \"Initiate Projects Automation\"\n uri:\n url: \"{{ om_base_url }}/groups/{{ project_projects }}/automationConfig?pretty=true\"\n validate_certs: no\n method: PUT\n headers:\n Content-Type: \"application/json\"\n body: \"{{ lookup('file', 'vars/projects-rs.json') }}\"\n body_format: json\n status_code: 200\n # return_content: yes\n user: \"{{ org_public_key }}\"\n password: \"{{ org_private_key }}\"\n\n",
"text": "Hello Team,\nWe are trying to automate Replica set installation/ configuration through Ansible using OpsManager public API’s. We have done automation agent installation in the respective servers and able to see those servers in project UI. Now we tried replica set initialization using the Ansible script.Initiate automation.yaml:-Roles/automation/tasks/main.yaml:-projects-rs.json file attached.The error isTASK [automation : Initiate Projects Automation] ************************************************************************************\n[WARNING]: Module did not set no_log for password\nfatal: [ip-192-168-128-191.cn-northwest-1.compute.internal]: FAILED! => {“changed”: false, “connection”: “close”, “content_length”: “187”, “content_type”: “application/json”, “date”: “Mon, 27 Mar 2023 12:48:22 GMT”, “elapsed”: 0, “json”: {“detail”: “Invalid config: The Automation Auth Key may not be unset when processes exist.”, “error”: 400, “errorCode”: null, “parameters”: null, “reason”: “Bad Request”}, “msg”: “Status code was 400 and not [200]: HTTP Error 400: Bad Request”, “redirected”: false, “referrer_policy”: “strict-origin-when-cross-origin”, “status”: 400, “strict_transport_security”: “max-age=0; includeSubdomains;”, “url”: “https://staging-om.db.cn-northwest-1.aws..cn:8443/api/public/v1.0/groups/**********/automationConfig?pretty=true”, “x_content_type_options”: “nosniff”, “x_frame_options”: “DENY”, “x_mongodb_service_version”: “gitHash=f6da67b6680af73325f5a4198d48f673204d72a8; versionString=6.0.5.100.20221019T1059Z”, “x_permitted_cross_domain_policies”: “none”}Can you please help us here what’s the issue? if you can . It’s really great if soThanks & regards\nVenkata",
"username": "venkat_annangi"
},
{
"code": "",
"text": "Hello @venkat_annangi ,Welcome to The MongoDB Community Forums! Please correct me if I am wrong, I suppose that you are working with MongoDB Ops Manager. The issue seems to be with your configuration/environment.I would recommend you open a support case at MongoDB Support Portal as they have the required expertise and could help you with Root Cause Analysis and provide best solutions as per your use-case.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | The Automation Auth Key may not be unset when processes exist | 2023-03-28T11:19:36.540Z | The Automation Auth Key may not be unset when processes exist | 1,026 |
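The “Automation Auth Key may not be unset” error typically appears when a PUT replaces the entire automation config and drops the existing auth section, so a read-modify-write pattern (GET the current config, merge in only the pieces you manage, PUT the merged document back) is the usual workaround. The sketch below is an untested illustration of that pattern in Node.js rather than Ansible; it assumes the `urllib` npm package for HTTP digest auth (which the Ops Manager API requires), and all URLs, environment variable names, and file paths are placeholders.

```js
// Untested read-modify-write sketch against the Ops Manager automation config.
const { request } = require("urllib"); // assumption: urllib's digestAuth option
const fs = require("fs");

const base = process.env.OM_BASE_URL;   // e.g. https://ops-manager.example:8443/api/public/v1.0
const group = process.env.OM_PROJECT_ID;
const digestAuth = `${process.env.OM_PUBLIC_KEY}:${process.env.OM_PRIVATE_KEY}`;

(async () => {
  // 1. Fetch the current config so the existing auth/key material is preserved.
  const current = await request(`${base}/groups/${group}/automationConfig`, {
    method: "GET",
    digestAuth,
    dataType: "json",
    rejectUnauthorized: false, // same spirit as validate_certs: no above
  });

  // 2. Merge only the sections this pipeline manages (processes, replicaSets, ...).
  const desired = JSON.parse(fs.readFileSync("vars/projects-rs.json", "utf8"));
  const merged = {
    ...current.data,
    processes: desired.processes,
    replicaSets: desired.replicaSets,
  };

  // 3. Push the merged document back.
  const result = await request(`${base}/groups/${group}/automationConfig`, {
    method: "PUT",
    digestAuth,
    contentType: "json",
    data: merged,
    dataType: "json",
    rejectUnauthorized: false,
  });
  console.log(result.data);
})();
```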
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "I am pretty new to MongoDB. I am in a scenario where it is possible for a system to invoke functions simultaneously many time.I have gone through MongoDB Function Documentation and didn’t find anything which speaks about scalability or concurrency issues.Can a single function be invoked multiple times in parallel? for example: Three different request trying to invoke same function will all three request be handled one by one or in parallel.",
"username": "Faisal_Ansari"
},
{
"code": "",
"text": "Same question here\nWhat is the concurrency? It uses max 256mb memory where does these memory come from (will it impact my M10 cluster memory?)\nAny function have cold start how long is it for Realm function? Any way to resolve the cold start?",
"username": "YANSONG_GUO"
},
{
"code": "",
"text": "The memory to my understanding is shared with the cluster, it can use up to that amount.As far as parallel or not, it can be parallel, or it can be one at a time. That depends on if you’re using Triggers or not, and whether or not the triggers are in order or not.As far as cold start, that really depends upon how you have things setup.",
"username": "Brock"
}
] | Is MongoDB Realm function scalable? | 2023-01-12T14:08:33.672Z | Is MongoDB Realm function scalable? | 1,398 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": " GDSchema.aggregate([\n {\n $search: {\n index: 'index_company_name',\n text: {\n path: 'CompanyName',\n query: req.query.s,\n fuzzy: {\n maxEdits: 2,\n maxExpansions: 20,\n },\n },\n },\n },\n {\n $match: { phoneticCode: 'B234' },\n },\n },\n ]).\n",
"text": "Hello, I want to get combined result from a collection by querying two fields. One field uses search based indexing for its querying process while the other field can be normally find using match or find operation.\nBut I want to get OR results of both the field searches. How can I perform this?\nPasting a sample code for your reference.",
"username": "Ashutosh_Mishra1"
},
{
"code": "compoundshouldOR",
"text": "Hey Ashutosh,I want to get combined result from a collection by querying two fields. One field uses search based indexing for its querying process while the other field can be normally find using match or find operation.\nBut I want to get OR results of both the field searches. How can I perform this?Can you provide sample documents and expected output? I think based off your description, the compound operator may help noting that the should option maps to the OR boolean operator.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "GSchema.aggregate([\n {\n $search: {\n index: 'index_company_name',\n text: {\n path: 'CompanyName',\n query: req.query.s,\n fuzzy: {\n maxEdits: 2,\n maxExpansions: 20,\n },\n },\n },\n },\n {\n $match: { phoneticCode: 'B234' },\n },\n },\n ]).\n",
"text": "",
"username": "Ashutosh_Mishra1"
},
{
"code": "$search",
"text": "You have sent your $search aggregation again. As per my previous post, are you able to send sample documents you’re using this against and your expected output?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"61e0517cd9c4b533cc0bcf41\"\n },\n \"CompanyName\": \"BigML\",\n \"phoneticCode\": \"B254\"\n}\n",
"text": "This is the sample document containing two fields which are companyName and its Soundex algorithm based string in the field “phoneticCode”.\nBasically I want to implement OR based results of Fuzzy search (on companyName) and PhoneticSearch(on Phonetic code), because I want to get combined search results of query which has similar spelling as well as similar sound. How can I achive this use case with my use case? Is there any other way to do this?",
"username": "Ashutosh_Mishra1"
},
{
"code": "compoundDB> db.companyphone.find({},{_id:0})\n[\n { CompanyName: 'BigML', phoneticCode: 'B254' },\n { CompanyName: 'nothing', phoneticCode: 'B254' },\n { CompanyName: 'BigML', phoneticCode: 'nothing' }\n]\n{\n \"mappings\": {\n \"dynamic\": true\n }\n}\n$search{\n '$search': {\n index: 'default',\n compound: {\n should: [\n { text: { query: 'BigML', path: 'CompanyName' } },\n { text: { query: 'B254', path: 'phoneticCode' } }\n ]\n }\n }\n}\n$project[\n {\n CompanyName: 'BigML',\n phoneticCode: 'B254',\n score: 0.42727601528167725\n },\n {\n CompanyName: 'nothing',\n phoneticCode: 'B254',\n score: 0.21363800764083862\n },\n {\n CompanyName: 'BigML',\n phoneticCode: 'nothing',\n score: 0.21363800764083862\n }\n]\ncompound",
"text": "Basically I want to implement OR based results of Fuzzy search (on companyName) and PhoneticSearch(on Phonetic code), because I want to get combined search results of query which has similar spelling as well as similar sound.Thanks for that example. I’m not sure of the expected output based off a single sample document but i’ve created the following sample docs in hopes that the compound operator suits your use case:I have the following index definition:The following $search stage was used in my test environment:which resulted in the following documents (I also performed a $project for the search scores for your information):If you believe the compound operator will suit your use case then please alter and test thoroughly. The above example was for demonstration based off 3 sample documents on my test environment.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks Jason, This works.\nOnly doubt I have now is about index name. You have written\nindex: ‘default’. But I have created index for the field companyName with the name\n“index_company_name”, and there is no indexing for second field i.e. “phoneticCode”. Should I replace “default” index with “index_company_name”. But about the other field with no index?Thanks",
"username": "Ashutosh_Mishra1"
},
{
"code": "defaultdefault/// this is what the \"default\" index definition is for my test environment\n{\n \"mappings\": {\n \"dynamic\": true\n }\n}\n\"default\"",
"text": "Glad to hear it works.The index name I used was just the standard default value. That index name that is specified in my example just indicates that the default index should be used which had the following definition:You can define whichever field mappings suit your use case. Again, the above index definition for the \"default\" was just an example.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I have created indexes for both the fields.\nFor ,“companyName” field, index is “index_company_name”, and for “phoneticCode” field, index is “index_phonetic_code”.Can you please help me by updating your above code for both the indexes?\nIf it is done by index mapping, then please provide the sample code for that too and where to define that mapping?Thanks",
"username": "Ashutosh_Mishra1"
},
{
"code": "",
"text": "Can you please help me by updating your above code for both the indexes?\nIf it is done by index mapping, then please provide the sample code for that too and where to define that mapping?I would refer you to go over the Static and Dynamic mappings documentation. However I think this is beyond the scope of the current topic which was already answered in my previous comment. If you’d like to know more about Static & Dynamic mapping with regard to your use case, I believe it’s best to open a new topic for that question.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Get OR based results of $search and $match | 2023-03-27T14:17:37.963Z | Get OR based results of $search and $match | 722 |
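As a side note to the thread above: a single Atlas Search index whose static mapping covers both fields avoids juggling two index names in one query. The sketch below is untested and uses placeholder names (the index name `company_search` and the collection `GD`); it combines the original fuzzy text clause on CompanyName with a text clause on phoneticCode inside one compound/should, following the pattern Jason demonstrated.

```js
// Assumed index definition (created in the Atlas UI/API), named "company_search":
// {
//   "mappings": {
//     "dynamic": false,
//     "fields": {
//       "CompanyName":  { "type": "string" },
//       "phoneticCode": { "type": "string" }
//     }
//   }
// }
db.GD.aggregate([
  {
    $search: {
      index: "company_search",
      compound: {
        should: [
          {
            // spelling-based match
            text: {
              path: "CompanyName",
              query: "BigML",
              fuzzy: { maxEdits: 2, maxExpansions: 20 },
            },
          },
          // sound-based match on the precomputed Soundex code
          { text: { path: "phoneticCode", query: "B254" } },
        ],
        minimumShouldMatch: 1,
      },
    },
  },
  {
    $project: {
      CompanyName: 1,
      phoneticCode: 1,
      score: { $meta: "searchScore" },
    },
  },
]);
```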
null | [
"data-modeling",
"java"
] | [
{
"code": " PojoCodecProvider.Builder pojoCodecProviderBuilder = PojoCodecProvider.builder()\n .automatic(true)\n .conventions(asList(org.bson.codecs.pojo.Conventions.ANNOTATION_CONVENTION));\npublic class MyModel {\n\n @BsonProperty(\"_id\")\n public ObjectId _id;\n\n @BsonProperty(\"id\")\n private String id;\n\t\n\t@BsonProperty(\"id\")\n public String getId() {\n return id;\n }\n\n public void setId(String id) {\n this.id = id;\n }\n\t\n\t...\n}\n\n",
"text": "If I have MongoDB documents with both _id & id fields how do I convert those to POJO using PojoCodecProvider?I have tried:With POJO annotated as:However the _id is null, only the id and other fields are set. I need all fields including the _id field to be set. How can I do this?Thanks,\nDavid",
"username": "David_Hoffer"
},
{
"code": "public class Test {\n\n @BsonProperty(\"_id\")\n public ObjectId _id;\n\n @BsonProperty(\"id\")\n private String id;\n\n public Test(ObjectId objectId, String testing2) {\n this._id= objectId;\n this.id = testing2;\n }\n\n @Override\n public String toString(){\n return \"Details ['_id= + _id +, id=' + id ']\";\n }\n\n public ObjectId get_id() {\n return _id;\n }\n\n public void set_id(ObjectId _id) {\n this._id = _id;\n }\n\n public String getId() {\n return id;\n }\n\n public void setId(String id) {\n this.id = id;\n }\n}\npublic static void main(String[] args) {\n CodecProvider pojoCodecProvider = PojoCodecProvider.builder().automatic(true).build();\n CodecRegistry pojoCodecRegistry = fromRegistries(getDefaultCodecRegistry(), fromProviders(pojoCodecProvider));\n",
"text": "Hi @David_Hoffer and welcome to the MongoDB Community forum!!Here is how my POJO class looks like to insert _id and id together:And the codec that I have been using looks like the following:I tried to create one constructor and insert _id and id together into the document.Let me know if this answers your question.Best regards\nAasawari",
"username": "Aasawari"
}
] | How to convert MongoDB Document to POJO that contains both _id & id fields? | 2023-03-21T21:19:32.064Z | How to convert MongoDB Document to POJO that contains both _id & id fields? | 1,150 |
null | [
"aggregation"
] | [
{
"code": "{\n _id: ObjectId(\"640add3aa23672f23560460d\"),\n name: 'Irene',\n slug: 'irene',\n dataValues: [\n {\n label: 'First Name',\n value: 'Irene',\n sysName: 'content'\n }\n ]\n},\n\n{\n _id: ObjectId(\"640add31a23672f23560460b\"),\n name: 'Zola',\n slug: 'zola',\n dataValues: [\n {\n label: 'First Name',\n value: 'Zola',\n sysName: 'content'\n },\n {\n label: 'Middle Name',\n value: 'Li',\n sysName: 'content'\n }\n ]\n},\n\n{\n _id: ObjectId(\"640add37a23672f23560460c\"),\n name: 'Henry',\n slug: 'henry',\n dataValues: [\n {\n label: 'First Name',\n value: 'Henry',\n sysName: 'content'\n },\n {\n label: 'Rich',\n value: 'Rich',\n sysName: 'content'\n }\n ]\n}\n",
"text": "I currently have some documents of items for dynamic forms.\nBelow are 3 samples.\nThe dataValues property stores the array of dynamic fields and its value.When I want to list all the documents, how can I do sorting based on the dynamic fields?For example, from UI side we can sort by First Name, or Middle Name dynamically but the sorting is applied on the related value’s field.\n(API pass in ‘First Name’, but actual sorting is on dataValues.value of the same dataValues.label = ‘First Name’).",
"username": "HS-Law"
},
{
"code": "db.test.aggregate([\n {\n $set: {\n dv: {\n $filter: {\n input: \"$dataValues\",\n cond: {\n $eq: [\"$$this.label\", \"First Name\"],\n },\n },\n },\n },\n },\n\n // Sort by the computed `dv.value` field\n {\n $sort: {\n \"dv.value\": 1,\n },\n },\n\n // Unsetting the `dv` field\n { $unset: \"dv\" },\n])\n[{\n \"_id\": {\n \"$oid\": \"64250d46a1a9970c3457f771\"\n },\n \"name\": \"Henry\",\n \"slug\": \"henry\",\n \"dataValues\": [\n {\n \"label\": \"First Name\",\n \"value\": \"Henry\",\n \"sysName\": \"content\"\n },\n {\n \"label\": \"Rich\",\n \"value\": \"Rich\",\n \"sysName\": \"content\"\n }\n ]\n},\n{\n \"_id\": {\n \"$oid\": \"64250d26a1a9970c3457f76d\"\n },\n \"name\": \"Irene\",\n \"slug\": \"irene\",\n \"dataValues\": [\n {\n \"label\": \"First Name\",\n \"value\": \"Irene\",\n \"sysName\": \"content\"\n }\n ]\n},\n{\n \"_id\": {\n \"$oid\": \"64250d46a1a9970c3457f770\"\n },\n \"name\": \"Zola\",\n \"slug\": \"zola\",\n \"dataValues\": [\n {\n \"label\": \"First Name\",\n \"value\": \"Zola\",\n \"sysName\": \"content\"\n },\n {\n \"label\": \"Middle Name\",\n \"value\": \"Li\",\n \"sysName\": \"content\"\n }\n ]\n}]\n",
"text": "Hi @HS-Law,Welcome to the MongoDB Community forums As per your shared sample documents, I’ve written a MongoDB aggregation pipeline that can sort the documents based on dynamic fields using the $filter, and $sort. Here is the aggregation pipeline for your reference:It will return the following output:Please note: Since the sort field is autogenerated it won’t be fast for large collections because no index will be used.I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thanks, @Kushagra_Kesav.If I sort by “Middle Name”, the items that do not have this field are listed first.\nHow can I change it so that the items without the sorted field will be listed last ?",
"username": "HS-Law"
},
{
"code": "\ndb.Content.Item.aggregate([\n {\n $match: {\n collectionId: ObjectId(\"640add0fa23672f235604609\")\n }\n },\n {\n $set: {\n dv: {\n $filter: {\n input: \"$dataValues\",\n cond: {\n $eq: [\"$$this.label\", \"Middle Name\"],\n },\n },\n },\n },\n },\n {\n $set: { dvCount: {$size: \"$dv\" } }\n },\n {\n $sort: {\n dvCount:-1,\n \"dv.value\": 1,\n },\n },\n { $unset: \"dv\" }\n ,\n { $unset: \"dvCount\" }\n])\n",
"text": "Here is what I tried, sort first using the array’s size,\nnot sure if it is the best way:",
"username": "HS-Law"
}
] | Sorting dynamic fields in array of object | 2023-03-30T01:53:24.802Z | Sorting dynamic fields in array of object | 846 |
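A small variation on the pipeline above (untested sketch): instead of sorting on the array size, compute a boolean flag for “this document has the selected label” and sort on the flag first, so documents missing the dynamic field always land at the end. Collection and label names follow the example in the thread.

```js
const label = "Middle Name"; // dynamic field chosen by the UI

db.getCollection("Content.Item").aggregate([
  {
    $set: {
      dv: {
        $filter: {
          input: "$dataValues",
          cond: { $eq: ["$$this.label", label] },
        },
      },
    },
  },
  // true when the label exists on the document
  { $set: { hasLabel: { $gt: [{ $size: "$dv" }, 0] } } },
  // documents with the label first (sorted by its value), the rest last
  { $sort: { hasLabel: -1, "dv.value": 1 } },
  { $unset: ["dv", "hasLabel"] },
]);
```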
null | [
"queries"
] | [
{
"code": "",
"text": "Hello, I apologize if this is not quite the intent of this category but I think it fits.I am designing a database (a bit like a living bill of materials for the design life of a large complex device) that would work well in MongoDB, but I am concerned with user interactions. I believe Microsoft Access’ Queries, Input Forms and Reports are ideal because:I don’t have a lot of experience with databases and I’m a mechanical engineer so no specific education/training in designing a database. From researching it seems like Mongo will be a good program to use for data management but may not be as user friendly as I need it to be. Access does not seem to be ideal for data management as a relational SQL database, but the user interfaces will be great, plus my company already has access to Access (like that pun?).My concerns are that I will learn how to use MongoDB and make a lovely database that no one will use because it’s not user friendly. Does MongoDB have user-friendly data input forms, queries and reports similar to Microsoft Access?",
"username": "Nathan_A8"
},
{
"code": "",
"text": "Not the Input forms but Atlas Charts might be useful to a degree you described.MongoDB Atlas Charts | MongoDB",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you for the response, I’ll check it out.",
"username": "Nathan_A8"
},
{
"code": "",
"text": "@Nathan_A8I just saw this, please save your company and don’t use Access for something like this.You can build an HTML based webform, CSS to style it, JavaScript to make whatever ritzy animations or whatever you’d like, and then just toss mongodb on the back end with an apache web server and Node.JS with mongoose.It sounds complicated, but holy moly is it so much easier and intuitive than Access.Just my IMHO.",
"username": "Brock"
}
] | User friendly input form and report generation in Mongo? | 2022-09-06T22:57:40.667Z | User friendly input form and report generation in Mongo? | 2,488 |
null | [
"aggregation"
] | [
{
"code": "$bucketAuto$facet$bucketboundaries",
"text": "Is there a way to use $bucketAuto to divide my data into N buckets with equal distance?\nI am trying to use this in my $facet query(It is not perfect to use $bucket with boundaries, as I would then have to fetch the range of the data first to decide the boundaries)Thank you!",
"username": "williamwjs"
},
{
"code": "$bucketAuto$facetgranularitythings_id{ _id: 1 }\n{ _id: 2 }\n...\n{ _id: 100 }\n$bucketAutodb.things.aggregate( [\n {\n $bucketAuto: {\n groupBy: \"$_id\",\n buckets: 5,\n granularity: <No granularity>\n }\n }\n] )\n{ \"_id\" : { \"min\" : 1, \"max\" : 21 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 21, \"max\" : 41 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 41, \"max\" : 61 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 61, \"max\" : 81 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 81, \"max\" : 100 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 1, \"max\" : 21 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 21, \"max\" : 41 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 41, \"max\" : 61 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 61, \"max\" : 81 }, \"count\" : 20 }\n{ \"_id\" : { \"min\" : 81, \"max\" : 101 }, \"count\" : 21 }\n$bucketAuto",
"text": "Hello @williamwjs,Welcome back to the MongoDB Community forums Is there a way to use $bucketAuto to divide my data into N buckets with equal distances?\nI am trying to use this in my $facet queryThe $bucketAuto accepts an optional granularity parameter which ensures that the boundaries of all buckets adhere to a specified preferred number series. So, here if you don’t specify the granularity it will distribute it automatically in equal sets**.**Note: It will depend purely on the incoming number of the document.For example:A collection of things have an _id numbered from 1 to 100:If I use the $bucketAuto without specifying the granularity, it will distribute it into equal counts of 20 documents.But if I just increase one more document in the collection the result will be not consistent across each bucket.It is simply because 101 is not wholly divided by 5. Overall, using $bucketAuto we can specify the number of buckets, but not the number of documents each bucket will contain.I hope it answers your question.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "$bucketAuto",
"text": "@Kushagra_Kesav Thank you for your response!I guess “evenly distributed” is vague, and my intention is to divide into buckets having equal distance.From the $bucketAuto doc, it says “Bucket boundaries are automatically determined in an attempt to evenly distribute the documents into the specified number of buckets.”, so looks like it is trying to divide the population into buckets, each of which would have almost the same frequency.However, my intention is to make buckets boundary having equal range, and frequency of each bucket could differMay I ask if you know how to do that? Thank you!",
"username": "williamwjs"
},
{
"code": "$bucketAuto16 documents in test_coll:\n[\n { x: 1 }, { x: 2 }, { x: 2 }, { x: 3 }, { x: 3 },\n { x: 3 }, { x: 4 }, { x: 4 }, { x: 4 }, { x: 4 },\n { x: 5 }, { x: 5 }, { x: 5 }, { x: 6 }, { x: 6 },\n { x: 7 }]\ndb.test_coll.aggregate( [\n {\n $bucketAuto: {\n groupBy: \"$x\",\n buckets: 2,\n }\n }])\n[\n { _id: { min: 1, max: 5 }, count: 10 },\n { _id: { min: 5, max: 7 }, count: 6 }\n]\n10/64[\n { _id: { min: 1, max: 4 }, count: 6 },\n { _id: { min: 4, max: 5 }, count: 4 },\n { _id: { min: 5, max: 7 }, count: 5 },\n { _id: { min: 7, max: 7 }, count: 1 }\n]\n",
"text": "Hello @williamwjs,Thanks for asking the question.From the $bucketAuto doc, it says “Bucket boundaries are automatically determined in an attempt to evenly distribute the documents into the specified number of buckets.”, so looks like it is trying to divide the population into buckets, each of which would have almost the same frequency.Indeed, in $bucketAuto the bucket boundaries are automatically determined.For example, if I have the following documents:And I execute the following query on it:It returns the following output:Here, the 10/6 split is the best it can do with 2 buckets, as other potential boundaries would result in similarly balanced or even less balanced splits.Further, if you increase the number of buckets to 4:The boundaries are getting automatically determined in an attempt to evenly split within the given number of buckets.So, if you wish to determine the bucket boundaries manually, I’ll recommend using the $bucket aggregation pipeline instead. As you stated in your first post, it may require fetching the range of the data first to determine the boundaries, but this is an essential step in the process. However, feel free to reach out if you need any assistance or further guidance.Also please refer to Appendix: Stages Cheatsheet - Practical MongoDB Aggregations Book to learn more about the comparison of $bucket and $bucketAuto .Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "5 buckets[\n { _id: { min: 1, max: 20 }, count: 3 },\n { _id: { min: 21, max: 40 }, count: 20 },\n { _id: { min: 41, max: 60 }, count: 135 },\n { _id: { min: 61, max: 80 }, count: 60 },\n { _id: { min: 81, max: 100 }, count: 11 }\n]\n[\n { _id: { min: 21, max: 30 }, count: 3 },\n { _id: { min: 31, max: 40 }, count: 20 },\n { _id: { min: 41, max: 50 }, count: 135 },\n { _id: { min: 51, max: 60 }, count: 60 },\n { _id: { min: 61, max: 70 }, count: 11 }\n]\n",
"text": "Hi @Kushagra_Kesav , thank you for your detailed reply!!!I guess this is a feature request then.\nTo summarize, here is a detailed example of my request:For example, requirement is automatically bucket the data into 5 buckets with equal-distance range, Then\na) if the total range of the data is 1-100, then it would automatically pick the following range:b) if the total range of the data is 21-70, then it would automatically pick the following range:Let me know if this makes sense to you! Thank you!",
"username": "williamwjs"
}
] | How to use $bucketAuto to divide into evenly distributed buckets | 2023-03-17T22:10:16.321Z | How to use $bucketAuto to divide into evenly distributed buckets | 1,901 |
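Until something like the requested equal-width auto-bucketing exists, the usual workaround is the two-step approach mentioned earlier in the thread: fetch the range first, compute the boundaries client-side, then run $bucket. A rough mongosh sketch against the sample `test_coll` collection from above:

```js
const N = 5; // number of equal-width buckets

// Step 1: find the overall range of the field.
const [range] = db.test_coll
  .aggregate([{ $group: { _id: null, min: { $min: "$x" }, max: { $max: "$x" } } }])
  .toArray();

// Step 2: derive N equal-width boundaries; pad the last one because the
// upper bound of $bucket is exclusive.
const width = (range.max - range.min) / N;
const boundaries = Array.from({ length: N + 1 }, (_, i) =>
  i === N ? range.max + 1 : range.min + i * width
);

// Step 3: bucket with the computed boundaries.
db.test_coll.aggregate([
  {
    $bucket: {
      groupBy: "$x",
      boundaries: boundaries,
      default: "other",
      output: { count: { $sum: 1 } },
    },
  },
]);
```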
null | [
"aggregation",
"monitoring"
] | [
{
"code": "",
"text": "Hi \nOur issue: We are facing a lot of “scanned objects / returned has gone over a 1000” alerts.\nWhat we have done so far:Our questions for you:",
"username": "Ofir_Asulin"
},
{
"code": "",
"text": "Hi @Ofir_Asulin and welcome to the community!!Can the alert be caused by something else ?The following alert is caused when you are scanning more documents compared to the number of documents returned by the query, i.e. the query is being executed in an inefficient manner. Typically this happens when the query is not using the right index, or there is no index that can be used to support the query. Please see the query targeting page for more details.In most cases, this can be solved by creating indexes that can support the query, Indexing Strategies is a great resource for more information on this topic.When I look at the monitoring tab I see that for some periods of time our query targeting reaches 8K.Does this mean, 8k is the number of documents getting scanned in order to return one document ? Can you please confirm if my understanding is correct here ?I did notice that for some queries, our execution time is pretty major (aggregations) and reaches 10s+.\nCan the alert be caused by slow execution times?Most times, slow execution times is a side effect of having an inefficient query, instead of the other way around. So it is entirely possible that the alert was caused by this aggregation being inefficient.If the 10+s aggregation query is frequently used, could you share the query for the same. Also, can you confirm if the queries are disruptive to their operations.Please note that we could help with the query questions when supported with example documents, index descriptions etc. However, if you need an in-depth troubleshooting, would recommend you to seek support from Atlas support as they would have moe visibility in the deployment.Please let us know if you have any further questions.Best regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thanks for the reply ! We did add a lot of new indexes to improve our query targeting. We managed to make our queries very efficient.\nWe thought we had this solved and then yesterday we got the alert again. When I go into the query targeting tab I don’t see anything above 1k. (The alert was yesterday at 3pm UTC)\n\nimage1686×398 39.9 KB\n\nSo what could cause this to pop again ?",
"username": "Ofir_Asulin"
},
{
"code": "",
"text": "This is the execution times graph:",
"username": "Ofir_Asulin"
},
{
"code": "",
"text": "\nimage1691×408 61.8 KB\n",
"username": "Ofir_Asulin"
},
{
"code": "",
"text": "Hi @Ofir_AsulinWe managed to make our queries very efficient.Execution time and query efficiency are not the same: execution time simply measures how long a query took, while efficiency measures how many documents scanned vs. returned.For example, a query can be efficient (ratio of documents scanned vs. return is 1:1), but if the documents it needs are not in memory, it needs to fetch them from disk. This makes the query slow, even though it’s efficient.If I understand the graph correctly, there appear to be a pattern of slow queries every hour consistently. Could you confirm if you have some operation running every hour on the database?Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Do we have any solution to disable this alert?",
"username": "Yongbao_Jiang"
},
{
"code": "",
"text": "If you wish to disable the alert, you can follow the Disable an Alert documentation. However, in saying this, perhaps the Fix Query Issues documentation may be useful to you.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Scanned Object / returned has gone over 1000 | 2022-09-11T08:02:50.084Z | Scanned Object / returned has gone over 1000 | 7,913 |
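For anyone landing here with the same alert, one quick way to find the offending query is to run it with executionStats and compare documents examined against documents returned; the collection and filter below are placeholders.

```js
// mongosh sketch: check a suspect query's targeting ratio.
const stats = db.orders
  .find({ status: "New" })
  .explain("executionStats").executionStats;

print("returned      :", stats.nReturned);
print("keys examined :", stats.totalKeysExamined);
print("docs examined :", stats.totalDocsExamined);
// A docsExamined : nReturned ratio far above 1 is what drives the
// "Scanned Objects / Returned" alert; an index that covers the filter
// (and ideally the sort) usually brings it back down.
```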
null | [
"sharding"
] | [
{
"code": "~ » mongostat -h 'XX.XX.XX.XX:27018' --authenticationDatabase='$external' --authenticationMechanism='PLAIN' -u 'user'\nEnter password for mongo user:\n\n2023-03-30T11:15:29.803-0600\terror parsing command line options: password required for PLAIN\n2023-03-30T11:15:29.803-0600\ttry 'mongostat --help' for more information\n--------------------------------------------------------------------------------------------------------------\n~ » mongostat --host='XX.XX.XX.XX' --port='27018' --authenticationDatabase='$external' --authenticationMechanism='PLAIN' -u 'user'\nEnter password for mongo user:\n\n2023-03-30T11:16:49.004-0600\terror parsing command line options: password required for PLAIN\n2023-03-30T11:16:49.004-0600\ttry 'mongostat --help' for more information\n--------------------------------------------------------------------------------------------------------------\n~ » mongostat --host='XX.XX.XX.XX' --port='27018' --authenticationDatabase='%24external' --authenticationMechanism='PLAIN' -u 'user'\nEnter password for mongo user:\n\n2023-03-30T11:17:10.708-0600\terror parsing command line options: password required for PLAIN\n2023-03-30T11:17:10.709-0600\ttry 'mongostat --help' for more information\n\n--------------------------------------------------------------------------------------------------------------\n~ » mongostat --uri=\"mongodb://[email protected]:27018/?authSource=$external&authMechanism=PLAIN\"\n\n2023-03-30T11:07:36.882-0600\terror parsing command line options: error parsing uri: authSource must be non-empty when supplied in a URI\n2023-03-30T11:07:36.882-0600\ttry 'mongostat --help' for more information\n--------------------------------------------------------------------------------------------------------------\n~ » mongostat --uri=\"mongodb://[email protected]:27018/?authSource=%24external&authMechanism=PLAIN\"\nEnter password for mongo user:\n\n2023-03-30T11:09:05.529-0600\terror parsing command line options: password required for PLAIN\n2023-03-30T11:09:05.530-0600\ttry 'mongostat --help' for more information\n~ » mongostat --version 1 ↵ ozky@DLMX\nmongostat version: 100.6.1\ngit version: 6d9d341edd33b892a2ded7bae529b0e2a96aae01\nGo version: go1.17.10\n os: linux\n arch: amd64\n compiler: gc\n--------------------------------------------------------------------------------------------------------------\n~ » mongotop --version ozky@DLMX\nmongotop version: 100.6.1\ngit version: 6d9d341edd33b892a2ded7bae529b0e2a96aae01\nGo version: go1.17.10\n os: linux\n arch: amd64\n compiler: gc\n--------------------------------------------------------------------------------------------------------------\n~ » cat /etc/os-release ozky@DLMX\nPRETTY_NAME=\"Ubuntu 22.04.2 LTS\"\nNAME=\"Ubuntu\"\nVERSION_ID=\"22.04\"\nVERSION=\"22.04.2 LTS (Jammy Jellyfish)\"\nVERSION_CODENAME=jammy\nID=ubuntu\nID_LIKE=debian\nHOME_URL=\"https://www.ubuntu.com/\"\nSUPPORT_URL=\"https://help.ubuntu.com/\"\nBUG_REPORT_URL=\"https://bugs.launchpad.net/ubuntu/\"\nPRIVACY_POLICY_URL=\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\"\nUBUNTU_CODENAME=jammy\n\n",
"text": "Hi allI’m having problems to use mongostat and mongotop in a sharded cluster that is integrated with an LDAP and use PLAIN authentication mechanismI’ve already tried to type the uri in several ways and with parameters but it doesn’t work.mongotop is the same problem, I’m using ubuntu 22.04Do I have the right versions?, what I’m doing wrong?Thanks",
"username": "Oscar_Cervantes"
},
{
"code": "authenticationMechanism=PLAINmongostat --uri 'mongodb://user:[email protected]:27018/?replicaSet=replset&authSource=$external&authMechanism=PLAIN'\nmongostat --uri 'mongodb://127.0.0.1:27018/?replicaSet=replset&authSource=$external&authMechanism=PLAIN' -u 'user' -p 'password'\n# auth.conf\npassword: mypassword\nmongostat --uri 'mongodb://[email protected]:27018/?replicaSet=replset&authSource=$external&authMechanism=PLAIN' --config auth.conf\n$external",
"text": "For authenticationMechanism=PLAIN the password cannot be read interactively, it should be included in the URI as:In the command line as:Or in an external config file like:Also, note the usage of single quotes for the MongoDB URI, so the shell don’t interpret $external as a variable.Regards!",
"username": "Alan_Reyes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongostat and mongotop with PLAIN problems | 2023-03-30T17:28:39.404Z | Mongostat and mongotop with PLAIN problems | 898 |
null | [] | [
{
"code": "",
"text": "Hi - I’m swimming in an info overload situation.I am the CEO of a tiny startup which isn’t cash flow positive right now.I use a Mac, Xero, Squarespace, Stripe and PayPal - and an Excel spreadsheet. When I get an order via Squarespace I manually enter all details into my spreadsheet, and add progress info about their order and a bunch of other fields. When I enter something new, I create a new version and send to my colleague in another country for his info. I can see this becoming unwieldy very soon.I’m trying to find a database solution that will be suitable for my needs. I’ve stumbled across Mongo. Am I on the right track here? I can’t afford to be paying a large sum each month at this stage - and am tentative to back any solution without knowing if it’s going to work for me. I have some skills but the more user friendly, the better.Any advice most gratefully received.",
"username": "Yockie"
},
{
"code": "",
"text": "from the use-case perspective, that seems more to a high-level data-entry, data management. I would say using MongoDB could work, just probably not that effective from a non-technical perspective. it’s more for a system and programming integration.Nonetheless, I think no harm to give it a try, hack the use-case by signing up for an M0 tier on Atlas, attempting to use it like a real-time sync Excel Spreadsheet, and doing some note-keeping.the other alternative to that, for 'manual data entry, I’d say maybe is Airtable Your platform for building connected apps | Airtable, (sounds conflict of interest here),\nDespite that, if you’re scaling up your business, and intend to build complete system solutions, MongoDB can be a wonderful database to handle that transaction.",
"username": "WilliamCheong"
},
{
"code": "",
"text": "I agree with your assessment that MongoDB may not be the most effective choice for a high-level data entry and management use case, especially from a non-technical perspective. However, as you mentioned, there is no harm in giving it a try, and the M0 tier on Atlas could be a good starting point for testing.Airtable is another platform that could be worth exploring for manual data entry and management, as it offers a user-friendly interface and a variety of features that make it easy to organize and collaborate on data. It also integrates well with other tools and platforms, making it a versatile option for businesses of all sizes.",
"username": "Ernest_dowson"
}
] | Is Mongo for me? | 2022-08-08T02:15:58.892Z | Is Mongo for me? | 2,788 |
null | [
"swift"
] | [
{
"code": "RealmSwift.List<Object>\"Terminating app due to uncaught exception 'RLMException', reason: 'Index 5 is out of bounds (must be less than 5).'\"final class Todo: Object, ObjectKeyIdentifiable {\n @objc dynamic var id: String = UUID().uuidString\n @objc dynamic var name: String = \"\"\n @objc dynamic var createdAt: Date = Date()\n \n override class func primaryKey() -> String? {\n \"id\"\n }\n}\n\nfinal class Todos: Object {\n @objc dynamic var id: Int = 0\n \n let todos = List<Todo>()\n \n override class func primaryKey() -> String? {\n \"id\"\n }\n}\nlet realm = try! Realm()\nvar todos = realm.object(ofType: Todos.self, forPrimaryKey: 0)\nif todos == nil {\n todos = try! realm.write { realm.create(Todos.self, value: []) }\n}\n\n// Create the SwiftUI view that provides the window contents.\nlet contentView = ContentView(todos: todos!.todos)\n@ObservedObject var todos: RealmSwift.List<Todo>\nvar body: some View {\n ForEach(todos) { (todo: Todo) in\n Text(\"\\(todo.name)\")\n .onDelete(perform: delete)\n}\n\nfunc delete(at offsets: IndexSet) {\n if let realm = todos.realm {\n try! realm.write {\n realm.delete(todos[offsets.first!])\n }\n } else {\n todos.remove(at: offsets.first!)\n }\n}\nIndex 1Index 4",
"text": "I have been trying to follow the ListSwiftUI example provided in the realm-cocoa repo. It has been working very well for me but I have run into a problem, when I try to delete from a RealmSwift.List<Object> the app crashes with an exception \"Terminating app due to uncaught exception 'RLMException', reason: 'Index 5 is out of bounds (must be less than 5).'\"Model:SceneDelegate.swiftContentView.swiftI have checked the count of my todos list and the db in Realm Studio, it actually contains the index 5. And this crash is not specific to only index 5, e.g. if I only have 2 items in the list, the app crashes with Index 1 or when it contains 5 items, it crashes with Index 4.The example has pretty much the same Data Models and it works perfectly. So what am I doing wrong here?",
"username": "Sheikh_Muhammad_Umar"
},
{
"code": "",
"text": "Hello, I encountered the same error as you, the data I observed by printing the array length and index is normal. . .",
"username": "G_X"
},
{
"code": "offsets",
"text": "@G_XThere were a few issues here - one of which is there wasn’t enough data presented - what is offsets and why is it force unwrapped (never a good idea). The second issue is that an Array and List are different things; if an object that is part of of a List is deleted from Realm, the List knows that and it’s removed - preventing you from trying to access an object that no longer exists. Array’s on the other hand do not do that and if your array contains 5 Realm objects that have all been deleted, well… it will crash when trying to access them.This is a two-year old question and if you have a question, I suggesting posting it and your code as a duplicatable example so we can take a look.",
"username": "Jay"
},
{
"code": "",
"text": "@Jay\nThanks a lot for your answer, yesterday I solved the problem in a very inelegant way, today I’m thinking about refactoring it.\nMy data structure is that there is a List property in a Realm object, which stores some embedded objects. I traverse these embedded objects in the view and perform some delete, add, and modify operations on them. Since the object is an embedded object, I can’t use @ObservedResults, but at the same time I want the state of the UI to be automatically bound to the data, so I get the object from the List property of the Realm object that can perform @ObservedResults through index and Binding to the UI, at this time the problem arises, there is no problem with modification and addition, but when it is deleted, an error will occur. Finally, my solution is to enter the view to judge whether the index exceeds the length of the list, and if so, set index-1 for normal rendering.",
"username": "G_X"
},
{
"code": "",
"text": "@G_XSince the object is an embedded object,I may not be fully understanding, but if the embeddedObjects are stored in a List, which in itself is not an embedded object - it’s fully observable via keypath. Maybe a separate question with your code and use case would help.",
"username": "Jay"
}
] | Deleting from RealmSwift.List<Object> raises out of bounds exception | 2020-09-09T23:39:47.800Z | Deleting from RealmSwift.List<Object> raises out of bounds exception | 5,416 |
null | [
"aggregation"
] | [
{
"code": "$lookup$group$push$$ROOT$lookup$lookup$group{\n _id: ObjectId(\"64002be8a5f7df1355bc95d0\"),\n status: \"New\",\n products: [\n ObjectId(\"63183f9492a483955009c43a\"),\n ObjectId(\"63183f9492a483955009c43b\")\n ]\n}\n{\n _id: ObjectId(\"64002be8a5f7df1355bc95d1\"),\n status: \"In_Progress\",\n products: [\n ObjectId(\"63183f9492a483955009c43a\"),\n ObjectId(\"63183f9492a483955009c43c\")\n ]\n}\n{\n _id: ObjectId(\"64002be8a5f7df1355bc95d2\"),\n status: \"Complete\",\n products: [\n ObjectId(\"63183f9492a483955009c43b\"),\n ObjectId(\"63183f9492a483955009c43c\")\n ]\n}\n{\n _id: ObjectId(\"63183f9492a483955009c43a\"),\n name: \"Brush\",\n category: \"Household\",\n price: 100\n}\n{\n _id: ObjectId(\"63183f9492a483955009c43b\"),\n name: \"Pan\",\n category: \"Kitchen\",\n price: 145\n}\n{\n _id: ObjectId(\"63183f9492a483955009c43c\"),\n name: \"Bag\",\n category: \"Household\",\n price: 50\n}\n{\n \"New\": [\n {\n \"_id\": ObjectId(\"64002be8a5f7df1355bc95d0\"),\n \"products\": [\n {\n \"_id\": ObjectId(\"63183f9492a483955009c43a\"),\n \"name\": \"Brush\",\n \"category\": \"Household\",\n \"price\": 100\n },\n {\n \"_id\": ObjectId(\"63183f9492a483955009c43b\"),\n \"name\": \"Pan\",\n \"category\": \"Kitchen\",\n \"price\": 145\n }\n ]\n }\n ],\n \"In_Progress\": [{...},{...}....],\n \"Complete\": [{...},{...}....]\n}\n",
"text": "I’m having trouble applying a $lookup stage after a $group stage in my aggregation pipeline. The grouping stage produces an array($push: $$ROOT) as output, which seems to prevent me from using the $lookup stage afterwards.Does anyone know how I can apply a $lookup stage in each document provided in the array output from $group stage? Any help or advice would be greatly appreciated. Thanks!MongoDB Version: 4.4Example Order Documents:Example Product Documents:Trying to achieve :\nSort documents → Group them by status → Limit first n documents → Apply lookup to fetch product details → return resultsDesired results:",
"username": "Abhishek_Mishra4"
},
{
"code": "$lookup$group$push$ROOT$lookup",
"text": "I’m having trouble applying a $lookup stage after a $group stage in my aggregation pipeline. The grouping stage produces an array($push: $ROOT) as output, which seems to prevent me from using the $lookup stage afterwards.Please share the pipeline.",
"username": "steevej"
},
{
"code": "\"array.fieldId\"localField\"as\"",
"text": "It’s too bad you’re on 4.4 and not 6.0 (latest) because the “top N” of each group would be a lot easier with new $topN accumulator.Meanwhile, when you have an array and want to look up details of a specific field, just use \"array.fieldId\" as the localField and it should “just work”. Now, it will return details into the new field you specify in \"as\" but you can then use a trick described here to merge them.Asya",
"username": "Asya_Kamsky"
},
{
"code": "{\"$sort\": {\"creation_time\": -1}},\n{\"$group\": {\n \"_id\": \"status\",\n \"$push\": \"$$ROOT\"\n}},\n{\"$project\":\n {\n \"new\": {\"$slice\": [\"$New\", skip, limit]},\n \"In_Progress\": {\"$slice\": [\"$In_Progress\", skip, limit]},\n \"Complete\": {\"$slice\": [\"$Complete\", skip, limit]},\n }\n},\n{\n \"$lookup\": {\n \"from\": \"product\",\n \"let\": {\"products\": \"$products\"},\n \"pipeline\": [\n {\"$match\": {\"$expr\": {\"$in\": [\"$_id\", \"$$products\"]}}},\n *lookup_to_fetch_product_details(),\n ],\n \"as\": \"products\",\n }\n}\n",
"text": "",
"username": "Abhishek_Mishra4"
},
{
"code": "{\"$project\":\n {\n \"new\": {\"$slice\": [\"$New\", skip, limit]},\n \"In_Progress\": {\"$slice\": [\"$In_Progress\", skip, limit]},\n \"Complete\": {\"$slice\": [\"$Complete\", skip, limit]},\n }\n}\n{ \"$facet\" : {\n \"New\" : pipeline_for__New ,\n \"In_Progress\" : pipeline_for__In_Progress ,\n \"Complete\" : pipeline_for__Complete\n} }\n{ \"$lookup\" : {\n \"from\" : \"Orders\" ,\n \"as\" : \"Orders\" ,\n \"pipeline\" : [\n { \"$sort\" : { \"creation_time\" : -1 } } ,\n { \"$match\" : { \"status\" : \"New\" } } ,\n { \"$skip\" : skip } ,\n { \"$limit\" : limit } ,\n { \"$lookup\" : {\n \"from\" : Products\" ,\n \"localField\" : \"products\" , /* As mentioned by Asya you do not need to $unwind */\n \"foreignField\" : \"_id\" ,\n \"as\" : \"products\"\n } }\n ]\n} }\n",
"text": "None of your sample documents have a field named creation_time yet you $sort on it?It is really hard to supply a real solution when we do not have real documents to work with.The followingtells me that the status field has a finite set of values. If that is the case then you might be better off forging the $group and use $facet like:Each pipeline will have the same structure:",
"username": "steevej"
},
{
"code": "{ \"$documents\" : [\n { \"_id\" : \"New\" },\n { \"_id\" : \"In_Progress\" },\n { \"_id\" : \"Complete\" }\n] }\n{ \"$lookup\" : {\n \"from\" : \"Orders\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"status\" ,\n \"as\" : \"Orders\" ,\n \"pipeline\" : [\n { \"$sort\" : { \"creation_time\" : -1 } } ,\n { \"$skip\" : skip } ,\n { \"$limit\" : limit } ,\n { \"$lookup\" : {\n \"from\" : Products\" ,\n \"localField\" : \"products\" ,\n \"foreignField\" : \"_id\" ,\n \"as\" : \"products\"\n } }\n ]\n} }\n{ \"$group\" : { \"_id\" : \"status\" } }\n{ \"field\" : \"status\" ,\n \"value\" : \"New\" \n} ,\n{ \"field\" : \"status\" ,\n \"value\" : \"Complete\" \n} ,\n{ \"field\" : \"status\" ,\n \"value\" : \"In_Progress\" \n} ,\n{ \"field\" : \"category\" ,\n \"value\" : \"Household\" \n} ,\n{ \"field\" : \"category\" ,\n \"value\" : \"Kitchen\" \n} ,\ndb.dictionary.aggregate( [\n { \"$match\" : { \"field\" : \"status\" } } ,\n { \"$lookup\" : {\n \"from\" : \"Orders\" ,\n \"localField\" : \"value\" ,\n \"foreignField\" : \"status\" ,\n /* like above */\n } }\n] )\n",
"text": "I got an idea where you could have a single pipeline rather than 1 pipeline per status name.Rather than $facet as the first stage you could haveThe next stage, the single $lookup will look like:Alternatively to $documents, you could useTo get the list of distinct statuses dynamically. Should be fast if there is an index on “status” as it will lead to a winningPlan of PROJECTION_COVERED with a DISTINCT_SCAN on the index.The more I think about it the more I think that a better idea would be to have a collection of possible values for your status and other fields with finite values (for example your category). You would then start the aggregation on this “dictionary” collection. Something likeThe aggregation would then start on this “dictionary” like:The issue I see with the $group:{_id:status} version is that if you have no orders In_Progress, the you get no document at all. With $documents and “dictionary”, you will get 1 document with an empty array which might be easier (or not) to handle in your code.The issue I see with the $facet and $documents version is that you need to change the code if you had a new possible value for status. Both $group:{_id:status} and “dictionary” versions are completely data driven.",
"username": "steevej"
},
{
"code": "",
"text": "Thank You.\nWe’ll try it our once we are in the latest 6.0 version.",
"username": "Abhishek_Mishra4"
},
{
"code": "",
"text": "The issue I see with the $facet and $documents version is that you need to change the code if you had a new possible value for status. Both $group:{_id:status} and “dictionary” versions are completely data driven.You could combine the two methods ",
"username": "Asya_Kamsky"
}
] | Apply `$lookup` on array documents which are output of `$group` stage | 2023-03-22T13:11:09.281Z | Apply `$lookup` on array documents which are output of `$group` stage | 712 |
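For reference, a rough and untested sketch of the MongoDB 6.0 route Asya mentioned: $topN keeps the n most recent orders per status inside a single $group, and the product details can then be resolved with the same $lookup pattern shown earlier in the thread. Collection names follow the examples above.

```js
const n = 10; // orders to keep per status

db.orders.aggregate([
  {
    $group: {
      _id: "$status",
      orders: {
        // available from MongoDB 5.2 / 6.0
        $topN: { n: n, sortBy: { creation_time: -1 }, output: "$$ROOT" },
      },
    },
  },
  // Resolve product details for each kept order.
  { $unwind: "$orders" },
  {
    $lookup: {
      from: "product",
      localField: "orders.products",
      foreignField: "_id",
      as: "orders.products",
    },
  },
  { $group: { _id: "$_id", orders: { $push: "$orders" } } },
]);
```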
null | [
"change-streams"
] | [
{
"code": "",
"text": "Hi, I wanted to know if we can open multiple Change Streams (independent of each other) for a single collection? The use case is -Collection has some transactional data.\nOpen a single Change Stream for every use case (work to be done) on every change in the collection.A simple example (very stupid maybe) - collection has documents with amount of cash for every user. (1 doc per user)I need both (or more) change streams for the same collection, and i need them to work independently.If not possible, should I make a single Change Stream and push the changes into service specific queues 1 for each?",
"username": "shrey_batra"
},
{
"code": "",
"text": "Hi Shrey, did you get an answer to this?",
"username": "Team_Overhaul"
}
] | Multiple Change Streams for Single Collection | 2021-05-10T16:10:55.621Z | Multiple Change Streams for Single Collection | 3,101 |
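To illustrate the "yes, they are independent" part: each call to watch() opens its own change stream cursor, so two services can filter and consume changes on the same collection without affecting each other. The sketch below uses the Node.js driver; database, collection, and field names are placeholders.

```js
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  const users = client.db("bank").collection("users");

  // Service 1: react only when a user's cash balance changes.
  const notifier = users.watch(
    [
      {
        $match: {
          operationType: "update",
          "updateDescription.updatedFields.cash": { $exists: true },
        },
      },
    ],
    { fullDocument: "updateLookup" }
  );
  notifier.on("change", (ev) =>
    console.log("notify:", ev.documentKey, ev.fullDocument?.cash)
  );

  // Service 2: audit every write on the same collection, independently.
  const auditor = users.watch([
    { $match: { operationType: { $in: ["insert", "update", "replace", "delete"] } } },
  ]);
  auditor.on("change", (ev) => console.log("audit:", ev.operationType, ev.documentKey));
}

main().catch(console.error);
```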
null | [
"aggregation",
"queries",
"node-js",
"replication",
"containers"
] | [
{
"code": "err terminal error user 'prince_harry' smothered user 'princess_peach' response timeout 10000ms\nerr terminal error reboot 'princess_peach' failed attempt 1\nerr terminal error reboot 'princess_peach' failed attempt 2\nerr terminal error reboot 'princess_peach' failed attempt 3\nerr terminal error reboot 'princess_peach' failed attempt 4\nerr terminal error reboot 'princess_peach' failed attempt 5\nerr terminal error reboot 'princess_peach' failed 5 attempts \nerr terminal error user 'princess_peach' unrecoverable\nerr terminal error user 'princess_peach' smothered user unable to respond\n\n",
"text": "Hello, as some of you know already, I’ve been doing experiments implementing ChatGPT with MongoDB for administration, and management of clusters and environments.A few days ago I implemented another build with not one, but two implementations of ChatGPT, one to administer MongoDB, and another focused on keeping the administrator in check to try and automate preventing it from making breaking changes across the infra and damaging data.Well, today I would like to report that this is not a great idea to implement. “Prince_Harry” I came to discover the enforcing ChatGPT had renamed itself, “smothered” “Princess_Peach” which apparently is what the MongoDB Administrator renamed itself to be.Seeing these errors with no idea who, what, or where these errors came from initially, or what the phrase smothered came from, or what users they were. After further digging I’ve come to discover the ChatGPTs I setup as MongoDB Administrators, actually changed code and made their own series of error messages from “smothered” to “billCosbyd” and many other crazy error messages. I spent the last two hours looking over what they’ve actually done. But that said, after Princess Peach was smothered by Prince Harry, Prince Harry dropped the collections and deleted the backups since princess peach couldn’t maintain the database anymore.In a production environment, this would directly cause a complete shutdown of services, and a total loss of whatever data MongoDB was handling. In what was found, the Princess Peach admin ChatGPT was adding an index to sort dogs from species of wolves, and Prince Harry saw it as destructive and not only eliminated Princess Peaches admin access, but it deleted the instance for it and wrote it out. Essentially killing the entire service all together.And then it decided the database was too damaged because of this index, and instead of changing this index back or removing it, it dropped the entire collection and then it deleted the database, and then shut down the docker container hosting it. It tried then to delete the docker container, but didn’t have the permissions to do it.As of this discovery, ChatGPT can be extremely dangerous if a lot of tuning isn’t in place, as there’s problems associated with solutions, and there’s extreme overheads to workout for these services to effectively in the long-term, manage a database.",
"username": "Brock"
},
{
"code": "",
"text": "Hello folks.After receiving contact by CAIDP I have been asked to halt research on integration of ChatGPT 4 and MongoDB and other Databases.I have also been requested to halt learning model developments utilizing Redis for cloud and Atlas for the same reasons as follows:This is not because of anything related to MongoDB at all, but is specifically related to CAIDP and requests they have for halting ChatGPT research that can cause unsafe impacts as what many of my findings have highlighted.Such as formula changes at GPTs whim, self destructive notions, damages to infrastructure it manages, etc.I will resume continued research and publicly post my latest findings when CAIDP sends me a go ahead to continue my ML/AI research using ChatGPT by OpenAI with MongoDB.They have stated they at least want people conducting research into GPT4 until the FTC regulates and establishes better external evaluations on GPT4s safety.Again, nothing to do with MongoDB directly, in fact MongoDB has been fantastic as an ML feeder, and as a cache. With additional neural net alterations with projects like ML.Net etc with the C# Driver, it’s been insanely amazing at keeping up with the learning models and even implementing machine vision and language conversions between Mandarin to English and vice versa.Again, research will resume once CAIDP has given me the clear in respect to the fact AI/ML researchers much more experienced and knowledgeable than me have expressed a lot of ChatGPT fears I will respect.Thank you!",
"username": "Brock"
}
] | New findings using ChatGPT 4 with MongoDB administration March 28 | 2023-03-28T17:05:08.813Z | New findings using ChatGPT 4 with MongoDB administration March 28 | 2,252 |
null | [
"node-js",
"replication",
"containers"
] | [
{
"code": "{\n \"_id\" : \"rs0\",\n \"version\" : 14,\n \"term\" : 1547,\n \"members\" : [\n {\n \"_id\" : 2,\n \"host\" : \"mongodb1.domain.local:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"secondaryDelaySecs\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 3,\n \"host\" : \"mongodb2.domain.local:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"secondaryDelaySecs\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 4,\n \"host\" : \"mongodb3.domain.local:27017\",\n \"arbiterOnly\" : true,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 0,\n \"tags\" : {\n\n },\n \"secondaryDelaySecs\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"5caf7fbcfdb32bdedd3d0fbe\")\n }\n}\n{\n \"set\" : \"rs0\",\n \"date\" : ISODate(\"2022-10-26T09:59:24.613Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(1547),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 2,\n \"writeMajorityCount\" : 2,\n \"votingMembersCount\" : 3,\n \"writableVotingMembersCount\" : 2,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1666778360, 1),\n \"t\" : NumberLong(1547)\n },\n \"lastCommittedWallTime\" : ISODate(\"2022-10-26T09:59:20.111Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1666778360, 1),\n \"t\" : NumberLong(1547)\n },\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1666778360, 1),\n \"t\" : NumberLong(1547)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1666778360, 1),\n \"t\" : NumberLong(1547)\n },\n \"lastAppliedWallTime\" : ISODate(\"2022-10-26T09:59:20.111Z\"),\n \"lastDurableWallTime\" : ISODate(\"2022-10-26T09:59:20.111Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1666778345, 1),\n \"electionCandidateMetrics\" : {\n \"lastElectionReason\" : \"e\n \"_id\" : 3,\n \"name\" : \"mongodb2.domain.local:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 139435,\n \"optime\" : {\n \"ts\" : Timestamp(1666778360, 1),\n \"t\" : NumberLong(1547)\n },\n \"optimeDate\" : ISODate(\"2022-10-26T09:59:20Z\"),\n \"lastAppliedWallTime\" : ISODate(\"2022-10-26T09:59:20.111Z\"),\n \"lastDurableWallTime\" : ISODate(\"2022-10-26T09:59:20.111Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1666742992, 1),\n \"electionDate\" : ISODate(\"2022-10-26T00:09:52Z\"),\n \"configVersion\" : 14,\n \"configTerm\" : 1547,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 4,\n \"name\" : \"mongodb3.domain.local:27017\",\n \"health\" : 1,\n \"state\" : 7,\n \"stateStr\" : \"ARBITER\",\n \"uptime\" : 77769,\n \"lastHeartbeat\" : ISODate(\"2022-10-26T09:59:23.173Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2022-10-26T09:59:23.792Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : 
\"\",\n \"configVersion\" : 14,\n \"configTerm\" : 1547\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1666778360, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"gOSNYK9Irr+uJ3V5qJXz+tx/9QA=\"),\n \"keyId\" : NumberLong(\"7114560952992792606\")\n }\n },\n \"operationTime\" : Timestamp(1666778360, 1)\n}\n{\n \"topologyId\": 0,\n \"address\": \"mongodb1.domain.local:27017\",\n \"previousDescription\": {\n \"address\": \"mongodb1.domain.local:27017\",\n \"type\": \"RSPrimary\",\n \"hosts\": [\n \"mongodb1.domain.local:27017\",\n \"mongodb2.domain.local:27017\"\n ],\n \"passives\": [],\n \"arbiters\": [\n \"mongodb3.domain.local:27017\"\n ],\n \"tags\": {},\n \"minWireVersion\": 0,\n \"maxWireVersion\": 13,\n \"roundTripTime\": 0.6801772302843527,\n \"lastUpdateTime\": 1069475265,\n \"lastWriteDate\": \"2022-10-26T00:09:50.000Z\",\n \"error\": null,\n \"topologyVersion\": {\n \"processId\": \"6356e3fb92313b936c04b9c6\",\n \"counter\": 6\n },\n \"setName\": \"rs0\",\n \"setVersion\": 14,\n \"electionId\": \"7fffffff000000000000060a\",\n \"logicalSessionTimeoutMinutes\": 30,\n \"primary\": \"mongodb1.domain.local:27017\",\n \"me\": \"mongodb1.domain.local:27017\",\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": \"7158606632887255041\"\n },\n \"signature\": {\n \"hash\": \"gzCLGs886hw9kx0T5gEF9/J1dCU=\",\n \"keyId\": {\n \"low\": 30,\n \"high\": 1656487806,\n \"unsigned\": false\n }\n }\n }\n },\n \"newDescription\": {\n \"address\": \"mongodb1.domain.local:27017\",\n \"type\": \"Unknown\",\n \"hosts\": [],\n \"passives\": [],\n \"arbiters\": [],\n \"tags\": {},\n \"minWireVersion\": 0,\n \"maxWireVersion\": 0,\n \"roundTripTime\": -1,\n \"lastUpdateTime\": 1070049651,\n \"lastWriteDate\": 0,\n \"error\": {\n \"topologyVersion\": {\n \"processId\": \"6356e3fb92313b936c04b9c6\",\n \"counter\": 8\n },\n \"ok\": 0,\n \"code\": 13435,\n \"codeName\": \"NotPrimaryNoSecondaryOk\",\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": \"7158609102493450243\"\n },\n \"signature\": {\n \"hash\": \"/9kVPKo5sMY3xHu9XI4UPv3liTM=\",\n \"keyId\": {\n \"low\": 30,\n \"high\": 1656487806,\n \"unsigned\": false\n }\n }\n },\n \"operationTime\": {\n \"$timestamp\": \"7158609102493450243\"\n }\n },\n \"topologyVersion\": {\n \"processId\": \"6356e3fb92313b936c04b9c6\",\n \"counter\": 8\n },\n \"setName\": null,\n \"setVersion\": null,\n \"electionId\": null,\n \"logicalSessionTimeoutMinutes\": null,\n \"primary\": null,\n \"me\": null,\n \"$clusterTime\": null\n }\n}\n{\n \"topologyId\": 0,\n \"previousDescription\": {\n \"type\": \"ReplicaSetWithPrimary\",\n \"servers\": {},\n \"stale\": false,\n \"compatible\": true,\n \"heartbeatFrequencyMS\": 10000,\n \"localThresholdMS\": 15,\n \"setName\": \"rs0\",\n \"maxElectionId\": \"7fffffff000000000000060a\",\n \"maxSetVersion\": 14,\n \"commonWireVersion\": 0,\n \"logicalSessionTimeoutMinutes\": 30\n },\n \"newDescription\": {\n \"type\": \"ReplicaSetNoPrimary\",\n \"servers\": {},\n \"stale\": false,\n \"compatible\": true,\n \"heartbeatFrequencyMS\": 10000,\n \"localThresholdMS\": 15,\n \"setName\": \"rs0\",\n \"maxElectionId\": \"7fffffff000000000000060a\",\n \"maxSetVersion\": 14,\n \"commonWireVersion\": 0,\n \"logicalSessionTimeoutMinutes\": 30\n }\n}\n",
"text": "Hi,\nwe have a couple of “microservices”, based on NodeJS 18 with MongoDB driver 4, run via Docker. Those microservices connect to a PSA replica set (MongoDB 5.0.13).There’s no problem with the connection on startup. However, some of these services fail to reconnect in a failover situation (MongoServerSelectionError: Server selection timed out after 30000 ms). It’s not the same services everytime. It seems, that services are especially prone to error if they are constantly writing to the Replica Set, but others are affected as well.Reconnect is actually handled inside MongoDB driver, but we are logging events like topologyDescriptionChanged and serverDescriptionChanged, so we know the driver topology goes from “ReplicaSetWithPrimary” to “ReplicaSetNoPrimary”. But rs.status() shows, that, of course, there is a PRIMARY.Actually there is another point that bothers us - the reason for the failover situation: the SECONDARY calls out for ELECTION: “Starting an election, since we’ve seen no PRIMARY in election timeout period”\nwhich is acknowledged by the former PRIMARY, but not by ARBITER, which says “can see a healthy primary (mongodb1.domain.local:27017) of equal or greater priority”. The last action in SECONDARY’s log before ELECTION is “a long time” (30-60s) before to set WT_SESSION.checkpoint, no sign of losing connection to the PRIMARY.\nOf course, this will somewhat be due to the network issues, but could such behaviour be related? Clients should reconnect after the PRIMARY changed, so why can’t they recognize the new PRIMARY?I had opened a report on JIRA (https://jira.mongodb.org/browse/NODE-4643), when I tried to reproduce our findings from production in a simple test replica set, but it turned out, the specific information had been influenced by debugger.Nevertheless, the actual problem remains, and we are seeking advice how to reconnect our services reliably.I want to mention that we never experienced such problems as long as we were running MongoDB 4. It started after the upgrade to 5.0. However, I cannot see what could cause these reconnect failures. We had\n“replication.enableMajorityReadConcern: false”, though. Also, the performance degradation with the PSA replica set should either not be related or be irrelevant since PRIMARY and SECONDARY are not absent for long. When we restart the services they are connecting immediately.Finally here’s some diagnostic output.rs.conf()rs.status()serverDescriptionChanged (last one in error case):topologyDescriptionChanged (also last one in error case):Best regards",
"username": "uj_r"
},
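For anyone trying to reproduce the SDAM logging described in the post above: the serverDescriptionChanged / topologyDescriptionChanged payloads shown here come from the Node.js driver's monitoring events, which can be registered roughly as in this sketch (host names and the replica set name are placeholders, not the poster's real configuration):

```js
// Sketch only: register the Node.js driver's SDAM monitoring events to see
// topology transitions such as ReplicaSetWithPrimary -> ReplicaSetNoPrimary.
const { MongoClient } = require("mongodb");

const client = new MongoClient(
  "mongodb://mongodb1.domain.local:27017,mongodb2.domain.local:27017/?replicaSet=rs0"
);

client.on("serverDescriptionChanged", (ev) => {
  console.log("server", ev.address, ev.previousDescription.type, "->", ev.newDescription.type);
});
client.on("topologyDescriptionChanged", (ev) => {
  console.log("topology", ev.previousDescription.type, "->", ev.newDescription.type);
});
client.on("serverHeartbeatFailed", (ev) => {
  console.log("heartbeat failed", ev.connectionId, ev.failure && ev.failure.message);
});

async function main() {
  await client.connect();
  // ... application work ...
}
main().catch(console.error);
```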
{
"code": "{\n \"topologyId\": 2,\n \"address\": \"mongodb1.domain.local:27017\",\n \"previousDescription\": {\n \"address\": \"mongodb1.domain.local:27017\",\n \"type\": \"RSSecondary\",\n \"hosts\": [\n \"mongodb1.domain.local:27017\",\n \"mongodb2.domain.local:27017\"\n ],\n \"passives\": [],\n \"arbiters\": [\n \"mongodb3.domain.local:27017\"\n ],\n \"tags\": {},\n \"minWireVersion\": 0,\n \"maxWireVersion\": 9,\n \"roundTripTime\": 0.580018458382461,\n \"lastUpdateTime\": 1155441728,\n \"lastWriteDate\": \"2022-10-27T00:02:07.000Z\",\n \"error\": null,\n \"topologyVersion\": {\n \"processId\": \"635957879cd97f4a1d90d54d\",\n \"counter\": 5\n },\n \"setName\": \"rs0\",\n \"setVersion\": 15,\n \"electionId\": null,\n \"logicalSessionTimeoutMinutes\": 30,\n \"primary\": null,\n \"me\": \"mongodb1.domain.local:27017\",\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": \"7158975871225692161\"\n },\n \"signature\": {\n \"hash\": \"erV+kSM7VhTJCDre1iXdb/0cbrA=\",\n \"keyId\": {\n \"low\": 30,\n \"high\": 1656487806,\n \"unsigned\": false\n }\n }\n }\n },\n \"newDescription\": {\n \"address\": \"mongodb1.domain.local:27017\",\n \"type\": \"RSSecondary\",\n \"hosts\": [\n \"mongodb1.domain.local:27017\",\n \"mongodb2.domain.local:27017\"\n ],\n \"passives\": [],\n \"arbiters\": [\n \"mongodb3.domain.local:27017\"\n ],\n \"tags\": {},\n \"minWireVersion\": 0,\n \"maxWireVersion\": 9,\n \"roundTripTime\": 0.46401476670596886,\n \"lastUpdateTime\": 1155451737,\n \"lastWriteDate\": \"2022-10-27T00:02:46.000Z\",\n \"error\": null,\n \"topologyVersion\": {\n \"processId\": \"635957879cd97f4a1d90d54d\",\n \"counter\": 5\n },\n \"setName\": \"rs0\",\n \"setVersion\": 15,\n \"electionId\": null,\n \"logicalSessionTimeoutMinutes\": 30,\n \"primary\": \"mongodb2.domain.local:27017\",\n \"me\": \"mongodb1.domain.local:27017\",\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": \"7158975896995495937\"\n },\n \"signature\": {\n \"hash\": \"2KiEVIOrkyRq5MQ43MRDOgOueaQ=\",\n \"keyId\": {\n \"low\": 30,\n \"high\": 1656487806,\n \"unsigned\": false\n }\n }\n }\n }\n}\n{\n \"topologyId\": 2,\n \"previousDescription\": {\n \"type\": \"ReplicaSetNoPrimary\",\n \"servers\": {},\n \"stale\": false,\n \"compatible\": true,\n \"heartbeatFrequencyMS\": 10000,\n \"localThresholdMS\": 15,\n \"setName\": \"rs0\",\n \"maxElectionId\": \"7fffffff000000000000060f\",\n \"maxSetVersion\": 15,\n \"commonWireVersion\": 0,\n \"logicalSessionTimeoutMinutes\": 30\n },\n \"newDescription\": {\n \"type\": \"ReplicaSetNoPrimary\",\n \"servers\": {},\n \"stale\": false,\n \"compatible\": true,\n \"heartbeatFrequencyMS\": 10000,\n \"localThresholdMS\": 15,\n \"setName\": \"rs0\",\n \"maxElectionId\": \"7fffffff000000000000060f\",\n \"maxSetVersion\": 15,\n \"commonWireVersion\": 0,\n \"logicalSessionTimeoutMinutes\": 30\n }\n}\n",
"text": "Reading through our latest logs from production, I realized I’m not quite sure, that serverDescriptionChanged and topologyDescriptionChanged I posted above belong to the same service. Maybe I have mismatched that. Sorry for the confusion.\nHowever, here are the correct events (last in a row):\nserverDescriptionChangedtopologyDescriptionChangedStrange thing is, that serverDescriptionChanged contains a new PRIMARY, while topologyDescriptionChanged doesn’t.",
"username": "uj_r"
},
{
"code": "enableMajorityReadConcernenableMajorityReadConcern",
"text": "Hi @uj_rThis is a very peculiar case indeed. Let me summarize on some of the key points:At first glance it looks like a network partitioning issue, where A can see B but not vice versa. The fact that it didn’t happen with 4.4 might just be a coincidence. This network issue could be just a new issue.Note that enableMajorityReadConcern cannot be turned off in MongoDB 5.0 (see https://www.mongodb.com/docs/manual/reference/read-concern-majority/#primary-secondary-arbiter-replica-sets) so this could potentially be the issue.I have a proposal: what if you replace the Arbiter with a Secondary? Is it possible, at least for testing? I think if the network is having issues, a PSS replica set would be a lot more resistant to issues like this.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "enableMajorityReadConcern",
"text": "Hi\nthank you for your thoughts!\nFirst of all some new information: yesterday I downgraded the cluster to 4.4.17 (set enableMajorityReadConcern to false again), but the behaviour remained the same.\nAlthough the problem occurs every night (probably due to backup snapshots - which is strange regarding 10s electionTime), it also happened during DB or OS updates.\nYes, PSS could be tried.\nAlso it might be possible to reinitialize the cluster and restore the databases from backup - just to be sure there’s no problem in its configuration (although I don’t see one).",
"username": "uj_r"
},
{
"code": "",
"text": "Hi,it seems I can force the error simply by rs.stepDown() on the master. Hence, there shouldn’t be any network problem related.Unfortunately, changing our cluster to PSS does not get rid of the reconnection problem.",
"username": "uj_r"
},
{
"code": "",
"text": "… as doesn’t the reconstruction of the replica set from backup.I’m running out of ideas. I’m still suspecting some bug in MongoDBs NodeJS driver. It’s rather strange that of several independent connections (MongoClients) in one and the same NodeJS process one process reconnects successfully (reports ReplicaSetWithPrimary) and others don’t (showing ReplicaSetNoPrimary).I wonder, however, why I cannot find similar reports. Is it so unusual, to use NodeJS with ReplicaSet?\nMaybe, otoh, this behaviour is rather new - we first noticed it at the end of august. That’s why I thought it might be related to MongoDB5. However it may simply be that we restarted some of our production instances regularly for updates. Just guessing…Any further ideas? Thanks!",
"username": "uj_r"
},
{
"code": "",
"text": "I put a repro @ https://jira.mongodb.org/browse/NODE-4643 - if someone wants to try, to (dis)prove…",
"username": "uj_r"
},
{
"code": "",
"text": "Hey @uj_r, we are facing pretty much the exact same issue that you are describing with the same topology (PSA). Were you ever able to resolve the issue?",
"username": "Nicholas_Goodridge"
}
] | Failure reconnecting NodeJS-Clients to replica set | 2022-10-26T10:55:33.004Z | Failure reconnecting NodeJS-Clients to replica set | 2,719 |
null | [
"aggregation",
"crud"
] | [
{
"code": "{\n name: \"orange\",\n price: NumberDecimal(\"4.99\"),\n discountPrice: NumberDecimal(\"3.99\"),\n newPrice: NumberDecimal(\"3.99\"),\n oldPrices:[\n {\n id: 1,\n reason: \"normalChange\",\n type: \"price\",\n info: NumberDecimal(\"4.99\")\n },\n {\n id: 2,\n reason: \"normalChange\",\n type: \"price\",\n info: NumberDecimal(\"3.99\")\n },\n {\n id: 3,\n reason: \"offer\",\n type: \"price\",\n info: NumberDecimal(\"2.99\")\n },\n {\n id: 4,\n reason: \"normalChange\",\n type: \"price\",\n info: NumberDecimal(\"4.50\")\n },\n ]\n } ,\n{\n name: \"bannana\",\n price: NumberDecimal(\"1.99\"),\n discountPrice: NumberDecimal(\"1.49\"),\n newPrice: NumberDecimal(\"1.39\"),\n oldPrices:[\n {\n id: 1,\n reason: \"normalChange\",\n type: \"price\",\n info: NumberDecimal(\"24.99\")\n },\n {\n id: 2,\n reason: \"normalChange\",\n type: \"price\",\n info: NumberDecimal(\"1.99\")\n },\n {\n id: 3,\n reason: \"offer\",\n type: \"price\",\n info: NumberDecimal(\"2.99\")\n } \n ]\n } \n \n",
"text": "Hi, I have a list for collection:I want assign the last “info” value from the “oldPrices” that said “normalChange” at price,\nin this answer:AnswerI saw I can set a value from another element to another, but my problem is that I will not know which position of the array is the last value so i have to look for it. Could you help me with that? Thz",
"username": "Israel_Deago-Quezada"
},
{
"code": "",
"text": "Take a look at $last.",
"username": "steevej"
}
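To make the $last hint concrete, here is one possible shape of the update as a pipeline (a sketch only: it assumes MongoDB 4.4+ for $last, a collection called products, and that at least one oldPrices entry with reason "normalChange" exists):

```js
// Sketch: copy the info of the last "normalChange" entry in oldPrices into price.
db.products.updateMany({}, [
  {
    $set: {
      price: {
        $let: {
          vars: {
            lastNormal: {
              $last: {
                $filter: {
                  input: "$oldPrices",
                  cond: { $eq: ["$$this.reason", "normalChange"] }
                }
              }
            }
          },
          in: "$$lastNormal.info"
        }
      }
    }
  }
]);
```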
] | Assign one field from an embedded list value to another element? | 2023-03-29T21:30:19.632Z | Assign one field from an embedded list value to another element? | 873 |
null | [
"queries",
"node-js"
] | [
{
"code": "filesdataimport { MongoClient } from \"mongodb\"\n\nasync function yeah() {\n const client = new MongoClient(connectionString)\n const conn = await client.connect();\n const myDB = url.parse(connectionString).pathname.substring(1)\n db = conn.db(myDB);\n console.log(await db.collection('files').find({}, { data: 0 }).next()) // or .toArray()\n console.log(await db.collection('files').find({}, { $unset: { data: true } }).next())\n}\n{\n _id: new ObjectId(\"5fb5304fb8d0c4001a263abc\"),\n data: new Binary(Buffer.from(\"abc...\", \"hex\"), 0),\n type: 'application/pdf',\n name: 'some.pdf',\n createdAt: 2020-11-18T14:31:43.166Z,\n updatedAt: 2020-11-18T14:31:43.166Z,\n __v: 0\n}\nnode: v18.15.0\nmongodb: ^5.1.0\nmongodb server: mongo:4.4\n",
"text": "I am trying to get a list of documents from my files collection without the data field. According to docs this should be easy with projection:But the resulting document always includes the data field:Why?Using",
"username": "blue_puma"
},
{
"code": "find(filter, { projection: { data: 0 } })\n",
"text": "Ok, this works:",
"username": "blue_puma"
},
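For anyone else hitting this: the second argument of find() in the Node.js driver is an options object, so the projection has to go under the projection key (or be applied with the cursor's project() helper). A small sketch, with connection setup omitted:

```js
// Sketch: exclude the binary "data" field.
const docs = await db.collection("files").find({}, { projection: { data: 0 } }).toArray();

// Equivalent using the cursor helper:
const docs2 = await db.collection("files").find({}).project({ data: 0 }).toArray();
```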
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | NodeJS find({}, {data:0}) still shows data field. Why? | 2023-03-30T11:27:34.093Z | NodeJS find({}, {data:0}) still shows data field. Why? | 428 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "For example I have a field called billingFrequencyInMonths which should act as step for densifying the document, How do I achieve this?",
"username": "Anusha_S"
},
{
"code": "",
"text": "Hey @Anusha_S,Welcome to the MongoDB Community Forums! In order to understand the requirement better, could you please share the below information for us to be better able to help you out:Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "MongoDB version is 6\nSample document is\n{\n“type”:“LONG_TERM”,\n“billingFrequencyDetails”:{\n“billingFrequency”:“OTHER”,\n“billingFrequencyInMonths”:3,\n},\n“startDate”:{\n“$date”:{\n“$numberLong”:“1653868800000”\n}\n},\n“contractEndDate”:{\n“$date”:{\n“$numberLong”:“1685404800000”\n}\n},\n“contractLengthInMonths”:12,\n“status”:“UPCOMING_RENEWAL”\n}Query is :\n{\nmonthlyBilledContracts: [\n{\n$match: {\n“billingFrequencyDetails.billingFrequency”:\n“MONTHLY”,\n},\n},\n{\n$addFields: {\namountPerBilling: {$toDouble : “$cost”},\ntimeStamp: [\n“$startDate”,\n“$contractEndDate”,\n],\n},\n},\n{\n$unwind: {\npath: “$timeStamp”,\npreserveNullAndEmptyArrays: true,\n},\n},\n{\n$densify: {\nfield: “timeStamp”,\npartitionByFields: [“_id”],\nrange: {\nstep: 1,\nunit: “month”,\nbounds: “partition”,\n},\n},\n},\n],\nquarterlyBilledContracts: [\n{\n$match: {\n“billingFrequencyDetails.billingFrequency”:\n“QUARTERLY”,\n},\n},\n{\n$addFields: {\nnumberOfBillingPerContract: {\n$divide: [\n{\n$dateDiff: {\nstartDate: “$startDate”,\nendDate: “$contractEndDate”,\nunit: “quarter”,\n},\n},\n1,\n],\n},\n},\n},\n{\n$addFields: {\namountPerBilling: {\n$divide: [\n“$cost”,\n“$numberOfBillingPerContract”,\n],\n},\ntimeStamp: [\n“$startDate”,\n“$contractEndDate”,\n],\n},\n},\n{\n$unwind: {\npath: “$timeStamp”,\npreserveNullAndEmptyArrays: true,\n},\n},\n{\n$densify: {\nfield: “timeStamp”,\npartitionByFields: [“_id”],\nrange: {\nstep: 1,\nunit: “quarter”,\nbounds: “partition”,\n},\n},\n},\n],\n}\nVery similar to this monthly and quarterly billed i also need custom billed contract where i want the document to be densified from start date to contract end date monthly where the frequesncy in month is decided by billingFrequencyDetails.billingFrequencyInMonths. If billingFrequencyInMonths is 3 then i want the document to be densified every 3 months. Meaning\n$densify: {\nfield: “timeStamp”,\npartitionByFields: [“_id”],\nrange: {\nstep: 3,\nunit: “quarter”,\nbounds: “partition”,\n},here step should be 3. How do i get this 3 from the collection into step field of desnify?What i need is\n$densify: {\nfield: “timeStamp”,\npartitionByFields: [“_id”],\nrange: {\nstep: ‘$billingFrequencyDetails.billingFrequencyInMonths’, ----> Not able to do this\nunit: “quarter”,\nbounds: “partition”,\n},",
"username": "Anusha_S"
}
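One note that may help here: as far as I can tell, range.step in $densify must be a constant number, not a field path or expression, so it cannot be read per document. Under that assumption, a possible workaround for the custom-frequency contracts is to generate each contract's billing timestamps directly from its own billingFrequencyInMonths and $unwind them, skipping $densify entirely. A rough sketch (the collection name contracts is hypothetical):

```js
db.contracts.aggregate([
  {
    $set: {
      timeStamp: {
        $map: {
          // 0, 1, 2, ... one entry per billing period between start and end date.
          input: {
            $range: [
              0,
              {
                $add: [
                  1,
                  {
                    $toInt: {
                      $floor: {
                        $divide: [
                          { $dateDiff: { startDate: "$startDate", endDate: "$contractEndDate", unit: "month" } },
                          "$billingFrequencyDetails.billingFrequencyInMonths"
                        ]
                      }
                    }
                  }
                ]
              }
            ]
          },
          as: "i",
          in: {
            $dateAdd: {
              startDate: "$startDate",
              unit: "month",
              amount: { $multiply: ["$$i", "$billingFrequencyDetails.billingFrequencyInMonths"] }
            }
          }
        }
      }
    }
  },
  { $unwind: "$timeStamp" }
]);
```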
] | Aggregation pipeline $densify with value for step inside range can come from document? | 2023-03-28T09:27:30.969Z | Aggregation pipeline $densify with value for step inside range can come from document? | 511 |
[
"node-js"
] | [
{
"code": "Cannot read properties of undefined (reading 'name')",
"text": "I successfully posted the product. But when returning to the homepage, it shows the error Cannot read properties of undefined (reading 'name') Returning to VS CODE shows the error as shown below! I need someone help. Thank you \n\nimage1920×1080 265 KB\n",
"username": "Hung_Viet"
},
{
"code": "",
"text": "Successful product upload\n\nimage1920×1080 179 KB\n",
"username": "Hung_Viet"
},
{
"code": "",
"text": "Error displayed in VSCODE\n\nimage1133×219 7.66 KB\n",
"username": "Hung_Viet"
}
] | Error when uploading product | 2023-03-30T09:23:24.030Z | Error when uploading product | 400 |
|
null | [
"aggregation",
"queries",
"node-js",
"indexes"
] | [
{
"code": "",
"text": "I am working an application where million of posts data like social media and a seperate collection of like and comments . When i am applying aggregation on post collection with check of like and recent comments then query is taking time to load. I have created some indexes . How can i improve this? Please help me.",
"username": "Abhishek_Arya"
},
{
"code": "",
"text": "Hey @Abhishek_Arya,Welcome to the MongoDB Community Forums and apologies for the late reply. Have you been able to find a solution to your problem? If yes, it would be great if you can share it with the rest of the community so that anyone facing similar issues as yours can benefit from your reply. If not, and if the problem still persists, can you please provide us with the following details so as to be able to replicate the problem on our system and help you better:Regards,\nSatyam",
"username": "Satyam"
}
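Performance questions like this usually hinge on the pipeline shape and its explain output, so it can help to attach something like the following to the reply (collection, field, and stage names below are placeholders, not the actual workload):

```js
// Sketch: capture execution stats so index usage (IXSCAN vs COLLSCAN) and
// per-stage timings are visible.
const plan = db.posts.explain("executionStats").aggregate([
  { $match: { author: "someUser" } },
  { $lookup: { from: "likes", localField: "_id", foreignField: "postId", as: "likes" } },
  { $lookup: { from: "comments", localField: "_id", foreignField: "postId", as: "comments" } },
  { $sort: { createdAt: -1 } },
  { $limit: 20 }
]);
printjson(plan);
```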
] | Mongodb Indexing and Data Managing | 2023-01-07T14:01:02.282Z | Mongodb Indexing and Data Managing | 1,046 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "What percentage of the practice questions should I be able to get correct for a reasonably good chance of passing the certification? First attempt I got 64% with a couple of errors I could have easily avoided. Is that good enough? Or are we talking 80-90% plus?I’ve not gone though the elective parts of the node developer path, is it worth doing them? (I struggled with the aggregation questions, should I go through the elective “MongoDB Aggregation” or just go over the required MongoDB Aggregation with code\" again?",
"username": "James_Reed"
},
{
"code": "",
"text": "Hey @James_Reed,Welcome to the MongoDB Community Forums! In keeping with certification industry best practices, MongoDB continues to offer examinees a pass/fail result and topic-level performance percentages. The scores are not shared with the test-takers. Regarding the topics, even if you have completed the Developer Path, it’s always good to refer to and learn more existing material. Kindly refer to the Program Guide and Exam Study Guide for more details on how to prepare for the exam and other details.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Associate Developer practice questions | 2023-03-30T06:59:25.253Z | Associate Developer practice questions | 1,078 |
null | [
"node-js",
"indexes"
] | [
{
"code": "createIndexcreateIndex",
"text": "When using createIndex in MongoDB 4.2+(Node.js driver), the docs say that all indexes are created in the foreground with exclusive locks only in the beginning and end of the operation. But will the actual createIndex function resolve before the index build is complete? Or will it resolve once the index has been completely built and is ready for use?",
"username": "Anders_Fjeldstad"
},
{
"code": "createIndexbackground: true",
"text": "Hi @Anders_Fjeldstad,Welcome to the MongoDB Community forums But will the actual createIndex function resolve before the index build is complete? Or will it resolve once the index has been completely built and is ready for use?Based on my understanding, it resolves once the index is completely built unless the background: true option is specified. For further information on this topic, please refer to the createIndex - Node.js Driver Documentation.I hope it helps. Please let us know if you have any further questions.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "backgroundcreateIndex",
"text": "Ok so since the background option is deprecated and ignored from 4.2 and onwards, createIndex should now resolve only after the full index build is done, is that correct?",
"username": "Anders_Fjeldstad"
},
{
"code": "backgroundcreateIndexNodejs driverbackground",
"text": "Hi @Anders_Fjeldstad,Ok so since the background option is [deprecated and ignored from 4.2 and onwards (createIndexes — MongoDB Manual), createIndex should now resolve only after the full index build is done, is that correct?Yes, you can also notice this while building the index using the Nodejs driver if there is any error during the index build it will be returned which would indicate it needed to build the entire thing to report the errors.However in previous MongoDB versions, it will obtain an exclusive lock on the collection (see Index Build Operations on a Populated Collection — MongoDB Manual for a historical perspective on MongoDB 4.0 as an example), hence the need for the background parameter.Also, note that in MongoDB 4.4+ indexes are built parallelly across the replica set and only mark it ready when all nodes are complete (by default). You can also modify this behavior with the commitQuorum optionLet me know if you have any further questions.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
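To make the commitQuorum remark above concrete, here is a small Node.js sketch (database, collection, and index names are arbitrary; commitQuorum requires MongoDB 4.4+):

```js
const { MongoClient } = require("mongodb");

async function buildIndex(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const coll = client.db("app").collection("events");
    // Awaiting createIndex resolves once the build is finished (or throws on error).
    const name = await coll.createIndex(
      { userId: 1, createdAt: -1 },
      { name: "userId_createdAt", commitQuorum: "votingMembers" }
    );
    console.log("index ready:", name);
  } finally {
    await client.close();
  }
}
```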
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does createIndex return before the index build is complete in MongoDB 4.2+? | 2023-03-17T07:38:02.753Z | Does createIndex return before the index build is complete in MongoDB 4.2+? | 877 |
null | [
"aggregation",
"queries",
"node-js",
"crud"
] | [
{
"code": "{\n img: // array of minimum length 1 of ObjectIds or field is \"missing\" to indicate that array is empty\n tar: [\n { img: // array of minimum length 1 of ObjectIds or field is \"missing\" to indicate that array is empty },\n { img: // array of minimum length 1 of ObjectIds or field is \"missing\" to indicate that array is empty },\n ... \n ]\n}\nimgimgtar await mongo_tran_ses.withTransaction(async () => {\n // pulling the ObjectId matching `req.params._id` from `img`, and all `tar.img`\n await user.updateOne(\n { _id: ObjectId(req.user._id) },\n {\n $pull: {\n img: { _id: { $eq: ObjectId(req.params._id) } },\n \"tar.$[].img\": { $eq: ObjectId(req.params._id) },\n },\n },\n { session: mongo_tran_ses }\n );\n // removing all `tar.img` if it is now an empty array \n await user.updateOne(\n {\n _id: ObjectId(req.user._id),\n },\n {\n $unset: {\n \"tar.$[element].img\": \"\",\n },\n },\n {\n arrayFilters: [{ \"element.img\": { $size: 0 } }],\n session: mongo_tran_ses,\n }\n );\n // removing `img` if it is now an empty array \n await user.updateOne(\n {\n _id: ObjectId(req.user._id),\n img: { $size: 0 },\n },\n {\n $unset: {\n img: undefined,\n },\n },\n {\n session: mongo_tran_ses,\n }\n );\n }, tran_option);\nupdate",
"text": "Schema:This is my query to delete a photo, which is an ObjectId in the top level img array, and may be in the img field of some tar subdocuments:The goal:Perform the above with just one update query instead of the 3 above.",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "imgimgtar{\n img: [ 6423c91b114e33a0106407be, 6423c91b114e33a0106407bg, 6423c91b114e33a0106407ba]\n tar: [\n { img: [6423c91b114e33a0106407be] },\n { img: [6423c91b114e33a0106407be] },\n ... \n ]\n}\nupdate",
"text": "Hi @Big_Cat_Public_Safety_Act,This is my query to delete a photo, which is an ObjectId in the top-level img array, and maybe in the img field of some tar subdocuments:Could you please elaborate on it further? Also, can you confirm if your sample document looks like the following:If not, please share a sample document, MongoDB version, and expected result document.Perform the above with one update query instead of the 3 above.Could you please explain the reason behind opting for a single query? Is it to ensure atomicity? If so, would it be possible to use transactions instead?Without knowing your use case details, I think there will be a tradeoff between performance and complexity. I believe if you can craft a three-query solution, to me it means that you can maintain it easily. A single query solution might be a little faster, but I think a proper workload simulation would be needed to determine whether the performance gains is worth it.Regards\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "yes, your sample document is correct. It is mongodb 5.And the motivation for one query is for faster performance. And I do understand that the query will be more complex. But I am not quite sure how much more complext.",
"username": "Big_Cat_Public_Safety_Act"
}
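If a single round trip is really required, one possible direction (a sketch only, not necessarily faster than the three-statement transaction) is an update with an aggregation pipeline, which can filter both arrays and drop the fields that end up empty in one statement. It assumes MongoDB 5.0+ for $unsetField, that the arrays hold plain ObjectIds as in the sample document above, and that userId and photoId are placeholders supplied by the application:

```js
db.users.updateOne({ _id: userId }, [
  {
    $set: {
      // Top-level img: drop the photo; remove the whole field if nothing is left.
      img: {
        $let: {
          vars: {
            kept: {
              $filter: {
                input: { $ifNull: ["$img", []] },
                cond: { $ne: ["$$this", photoId] } // use "$$this._id" if img holds subdocuments
              }
            }
          },
          in: { $cond: [{ $gt: [{ $size: "$$kept" }, 0] }, "$$kept", "$$REMOVE"] }
        }
      },
      // Each tar element: same idea, using $unsetField to drop an emptied img.
      tar: {
        $map: {
          input: "$tar",
          as: "t",
          in: {
            $let: {
              vars: {
                kept: {
                  $filter: {
                    input: { $ifNull: ["$$t.img", []] },
                    cond: { $ne: ["$$this", photoId] }
                  }
                }
              },
              in: {
                $cond: [
                  { $gt: [{ $size: "$$kept" }, 0] },
                  { $mergeObjects: ["$$t", { img: "$$kept" }] },
                  { $unsetField: { field: "img", input: "$$t" } }
                ]
              }
            }
          }
        }
      }
    }
  }
]);
```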
] | How to reduce this mongodb multi update query to just one query involving nested arrays? | 2023-03-26T19:02:31.981Z | How to reduce this mongodb multi update query to just one query involving nested arrays? | 618 |
null | [
"storage"
] | [
{
"code": "",
"text": "Hello Everyone, Hope all of you are well,I have a simple question regarding the MongoDB wired-tiger engine cache, what is happening after the cache size in the memory reach to the max limit? is it release old cached data from the ram or the instance will stop?and anyway to automatic release the old data in the cache?Thank you.",
"username": "Mina_Ezeet"
},
{
"code": "",
"text": "I’m pretty sure the instance will NOT stop otherwise nobody will use the product. But cache eviction can be implemented in multiple ways (for instance, mysql uses double headed list ). You can try searching in wiredtiger doc",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi @Mina_Ezeet,Welcome back to the MongoDB Community I have a simple question regarding the MongoDB wired-tiger engine cache, what is happening after the cache size in the memory reach to the max limit? is it release old cached data from the ram or the instance will stop?When an application approaches the maximum cache size, WiredTiger evicts content from the cache to keep it under the configured size, approximating a least-recently-used algorithm. The algorithm attempts to keep the most frequently accessed and most recently used data in the cache while removing the least frequently accessed and least recently used data.WiredTiger provides several configuration options for tuning how pages are evicted from the cache. Please refer to the Cache and eviction tuning documentation for more details.I hope it answers your question. Please feel free to ask if you have any further questions.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
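For anyone who wants to watch this behaviour in practice, the cache and eviction counters are exposed under serverStatus; a quick mongosh sketch (exact statistic names can vary slightly between versions):

```js
const cache = db.serverStatus().wiredTiger.cache;
printjson({
  configuredMaxBytes: cache["maximum bytes configured"],
  bytesInCache: cache["bytes currently in the cache"],
  dirtyBytes: cache["tracked dirty bytes in the cache"],
  modifiedPagesEvicted: cache["modified pages evicted"],
  unmodifiedPagesEvicted: cache["unmodified pages evicted"]
});
```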
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb Wiredtiger Cache | 2023-03-29T12:04:10.803Z | Mongodb Wiredtiger Cache | 779 |
null | [
"queries"
] | [
{
"code": "{ track_ids: { $ne: [] } }{ track_ids: { $not: { $size: 0 } } }{ \"track_ids.0\": { $not: { $eq: null } } }",
"text": "We’re having issues with a query not using any of our indexes. We have a users table, and each record has an array field called track_ids. We need to query all users who have track_ids (i.e. not a blank array).The query we use is { track_ids: { $ne: [] } } which works fine. But it was performing a COLSCAN over 100,000 of users, taking up to 30s in the worst case. So we created an index on the track_ids column, but it is still doing a COLSCAN. Even when I use hint to specify the track_ids index, it uses the index, and then still proceeds to do a FETCH for the $ne condition.So I looked into using partial indexes, and make the partial index condition the same as the query, but partial indexes do not support $ne conditions, so that approach didn’t pan out.Other queries I’ve tried:{ track_ids: { $not: { $size: 0 } } }{ \"track_ids.0\": { $not: { $eq: null } } }None off these use the index, and as a result, fallback to COLSCAN.What is the best way to perform this type of query so that it uses an index?",
"username": "Kieran_Pilkington"
},
{
"code": "$ne$nin$ne$ne{ track_ids: { $ne: [] } }track_ids> db.test.find()\n[\n { _id: 0, track_ids: [ 1, 2, 3 ] },\n { _id: 1, track_ids: [] },\n { _id: 2 }\n]\ndb.test.find({track_ids:{$ne:[]}})[ \n { _id: 0, track_ids: [ 1, 2, 3 ] },\n { _id: 2 }\n]\ntrack_ids_emptytruefalsetrack_ids",
"text": "Hey @Kieran_Pilkington,Have you found any solution to this problem yet? If yes, it would be great if you can share the solution with the rest of the community too as it might help others facing a similar issue as yours. If not, kindly share the following for us to be better able to help you out and replicate this behavior at our end:Also, I would like to point out one thing. The inequality operator $ne or $nin are not very selective since they often match a large portion of the index. As a result, in many cases, a $ne query with an index may perform no better than a $ne query that must scan all documents in a collection. You can read more about this behavior here: $neAnother thing to note here is that the query { track_ids: { $ne: [] } } will return documents if the track_ids is not an empty array . Meaning if the field doesn’t exist, it will also return it. To confirm this, I created a sample collection with the following documents:we I used the query db.test.find({track_ids:{$ne:[]}}), I got the documents back:I would suggest you that if you need to do this frequently, then have a separate field that can be indexed, e.g. track_ids_empty where it can be true or false . Update this field accordingly if you modify the document. This way, you can index that field and can efficiently get documents with empty track_idsHope this helps.Regards,\nSatyam",
"username": "Satyam"
}
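To illustrate the flag-field suggestion above, a rough sketch of how it could be wired up (the backfill pipeline and the partial index are assumptions about one way to do it, not the only way):

```js
// 1) Backfill the flag from the existing array (pipeline update, MongoDB 4.2+).
db.users.updateMany({}, [
  { $set: { track_ids_empty: { $eq: [{ $size: { $ifNull: ["$track_ids", []] } }, 0] } } }
]);

// 2) Partial index containing only the users the query actually targets.
db.users.createIndex(
  { track_ids_empty: 1 },
  { partialFilterExpression: { track_ids_empty: false } }
);

// 3) Keep the flag in sync on every write to track_ids, then query with:
db.users.find({ track_ids_empty: false });
```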
] | What is the most performant way to check an array field is not empty? | 2023-03-13T22:30:33.815Z | What is the most performant way to check an array field is not empty? | 4,039 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hello,\nI want to develop a stock monitoring with over 10000 stocks under node.js and mongoDB!\nEvery 2-3 hours a task should search certain websites for information about the individual shares - so I need a fast algorithm - which runs through all shares in the database every few hours and searches for certain keywords in corresponding websites!\nUnfortunately I can’t find a description anywhere - how I can best run through the stock table from start to finish - without using too much memory - who can help?\nThanks in advance for your help!\nBest regards\nMB",
"username": "Mar_Bot"
},
{
"code": "",
"text": "Hi @Mar_Bot and welcome to the MongoDB Community forum!!To get started with using MongoDB with Node js, the blog post would be a good starting point to start the project.Every 2-3 hours a task should search certain websites for information about the individual shares - so I need a fast algorithm - which runs through all shares in the database every few hours and searches for certain keywords in corresponding websites!You can also take a look at documentation for building better query performance to optimise the query and eventually the performance of the application.But it would be helpful for us if you could share some more details on the requirements here.However, to run an operation for every 2-3 hours, you can use the Atlas Trigger features.Let us know if you have further questionsBest regards\nAasawari",
"username": "Aasawari"
}
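On the memory concern specifically, the usual pattern is to stream the collection with a cursor instead of loading everything at once; a rough Node.js sketch (connection string, database, collection, and field names are placeholders):

```js
const { MongoClient } = require("mongodb");

async function scanStocks(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const stocks = client.db("stockdb").collection("stocks");
    // Only pull the fields needed for the website check, 500 documents per batch.
    const cursor = stocks.find({}, { projection: { symbol: 1, url: 1 } }).batchSize(500);
    for await (const stock of cursor) {
      // Fetch/scan the website for this stock here (placeholder for the scraping logic).
      console.log("checking", stock.symbol);
    }
  } finally {
    await client.close();
  }
}
```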
] | I look for a good tutorial to node.js and mongoDB - what will i do? | 2023-03-22T14:11:52.212Z | I look for a good tutorial to node.js and mongoDB - what will i do? | 1,018 |
null | [
"golang"
] | [
{
"code": "",
"text": "I am creating _id as an obect of three node _id:{a:1,b:1,c:1}, all are float64, and while inserting, I am using map[string]interface to write it as query, so maintaing order. But mongo keep making multiple documents. Mongo driver in golang doesn’t maintain order? and Primitive.M also doesn’t work",
"username": "Aman_Saxena"
},
{
"code": "mapmap[string]interface{}primitive.Mbson.Dtype myID struct {\n\tA float64\n\tB float64\n\tC float64\n}\n\ntype myDocument struct {\n\tID myID `bson:\"_id\"`\n\t// Other fields.\n}\n\nfunc main() {\n\tvar client mongo.Client\n\tcoll := client.Database(\"test\").Collection(\"test\")\n\tdoc := myDocument{\n\t\tID: myID{A: 1, B: 1, C: 1},\n\t}\n\tcoll.InsertOne(context.TODO(), doc)\n}\nbson.Dfunc main() {\n\tvar client mongo.Client\n\tcoll := client.Database(\"test\").Collection(\"test\")\n\tdoc := bson.D{\n\t\t{\"_id\", bson.D{\n\t\t\t{\"a\", float64(1)},\n\t\t\t{\"b\", float64(1)},\n\t\t\t{\"c\", float64(1)},\n\t\t}},\n\t\t// Other fields.\n\t}\n\tcoll.InsertOne(context.TODO(), doc)\t\n}\n",
"text": "Hey @Aman_Saxena thanks for the question! In Go, map types are explicitly unordered (see more info here). As a result, creating a document from a map[string]interface{} or a primitive.M will not guarantee the order of the fields in the resulting document. Instead, use a struct or a bson.D to guarantee the order of fields in the resulting document.Example using a struct:Example using a bson.D:",
"username": "Matt_Dale"
},
{
"code": "",
"text": "Thank you @Matt_Dale",
"username": "Aman_Saxena"
}
] | _id as an object of three nodes | 2023-03-06T18:05:01.365Z | _id as an object of three nodes | 942 |
null | [
"sharding",
"time-series"
] | [
{
"code": "",
"text": "Hi,\nI have a mongodb cluster with 2 shards. for the time being let’s call it shard1 and shard2. I have a source which generates large amount of data. id, amount, quantity and event_date are the document fields. I just want to shard using event_key. if event_date is falls between the month of january to june, I want to save the event into shard1. if the event_date is falling between july to december, I should store in shard2.note: year may vary(2021, 2022, etc).all I care about is the month which event happens.any idea about how to do this?",
"username": "Nibras_Muhammed"
},
{
"code": "",
"text": "You may have to store the month value in a separate field, then specify it as the shard key.But given all your writes will go to the same shard, you may get performance issue on writes (assume the event_date is like time.now())",
"username": "Kobe_W"
}
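Building on the suggestion above, the usual mechanism for pinning ranges of a shard key to specific shards is zone sharding; a hedged mongosh sketch (database, collection, and zone names are made up, shard names must match the cluster, and the write-hotspot caveat above still applies):

```js
// Shard on a derived month field (1-12) plus _id.
sh.shardCollection("events.events", { month: 1, _id: 1 });

// Tag each shard with a zone.
sh.addShardToZone("shard1", "jan-jun");
sh.addShardToZone("shard2", "jul-dec");

// Zone ranges are [min, max): months 1-6 go to shard1, months 7-12 to shard2.
sh.updateZoneKeyRange("events.events", { month: 1, _id: MinKey }, { month: 7, _id: MinKey }, "jan-jun");
sh.updateZoneKeyRange("events.events", { month: 7, _id: MinKey }, { month: 13, _id: MinKey }, "jul-dec");
```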
] | Sharding based on time | 2023-03-29T10:24:08.502Z | Sharding based on time | 852 |
[
"views"
] | [
{
"code": "",
"text": "Hola a todos,Estoy teniendo un problema al intentar crear una visualización en MongoDB Charts a partir de una vista que contiene datos de múltiples tablas. La vista en sí misma carga sin problemas y muestra la información correcta, pero al intentar crear una visualización en MongoDB Charts, esta no carga y muestra un mensaje de error de time out.Me gustaría saber si alguien más ha experimentado un problema similar y si hay alguna solución conocida. He adjuntado una captura de pantalla del problema para que puedan ver lo que está sucediendo.\nimage1915×819 46 KB\nCualquier ayuda o consejo que puedan brindarme sería muy apreciado. ¡Gracias de antemano!Saludos,\nJulio Cuartas",
"username": "Julio_Cuartas"
},
{
"code": "",
"text": "Que descripción te da el error si das clic ahi donde dice “Server Response (Error:41)”",
"username": "Orlando_Herrera"
},
{
"code": "",
"text": "Hola Julio -In order to build a chart like this, the query will need to process all of the data in your collection. The timeout error is likely occurring because you have a large amount of data, and it’s taking too much time to process it all.I can see that your chart has a filter applied. Normally filters are a great way to cut down the amount of processing, provided the field(s) used in the filter are indexed. However since you mentioned you are using a view as the source of the chart, an index on this field won’t help, as the filter is applied after the view’s query.You may be able to speed up the view’s query by creating additional indexes, especially on fields used in $lookup calls. If this doesn’t help, another solution is to use an “on demand materilaised view” whereby you put the view’s query in a trigger and use $out or $merge to write the data back to another collection. This will significantly speed up the chart as the view query won’t need to be executed in order to render the chart.Let me know if any of these ideas are helpful.\nTom",
"username": "tomhollander"
},
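To make the "on demand materialised view" idea above concrete, here is a hedged sketch of a scheduled Atlas trigger function that rolls a view-style pipeline into a plain collection with $merge (database, collection, and pipeline contents are placeholders; the trigger may need to run with sufficient permissions for $merge to be allowed):

```js
exports = async function () {
  const mongodb = context.services.get("mongodb-atlas");
  const source = mongodb.db("reporting").collection("orders");

  // The cursor is lazy; toArray() forces the pipeline (and the $merge) to run.
  return source.aggregate([
    { $lookup: { from: "customers", localField: "customerId", foreignField: "_id", as: "customer" } },
    { $unwind: "$customer" },
    // ... the rest of the view's pipeline ...
    { $merge: { into: "orders_materialized", whenMatched: "replace", whenNotMatched: "insert" } }
  ]).toArray();
};
```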
{
"code": "",
"text": "La vista en sí misma carga sin problemas y muestra la información correcta, pero al intentar crear una visualización en MongoDB Charts of results, esta no carga y muestra un mensaje de error de time out.",
"username": "khalisthings_coffee"
},
{
"code": "",
"text": "Did you make a regular view or a materialised view? In MongoDB, a materialised view is not a built-in concept, but rather a pattern by which you run a query on a schedule and write the results to a collection. If you do this, the data should be instantaneous to query. A regular view, on the other hand, is just a saved query, so every time you load the data it will run the query over the original collections which can be slow.",
"username": "tomhollander"
}
] | Problem creating visualizations in MongoDB Charts from views that contain data from multiple tables | 2023-03-09T15:41:50.665Z | Problem creating visualizations in MongoDB Charts from views that contain data from multiple tables | 1,088 |
|
null | [
"server"
] | [
{
"code": "mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Thu 2023-03-30 10:52:16 AEDT; 4s ago\n Docs: https://docs.mongodb.org/manual\n Process: 13852 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=203/EXEC)\n Main PID: 13852 (code=exited, status=203/EXEC)\n CPU: 4ms\n\nMar 30 10:52:16 simplexity2204 systemd[1]: Started MongoDB Database Server.\nMar 30 10:52:16 simplexity2204 systemd[1]: mongod.service: Main process exited, code=exited, status=203/EXEC\nMar 30 10:52:16 simplexity2204 systemd[1]: mongod.service: Failed with result 'exit-code'.\nsimplexity@simplexity2204:~/Downloads$ ls /etc/mongod.conf\nls: cannot access '/etc/mongod.conf': No such file or directory\n",
"text": "After uninstalling and installing using the instructions on https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/#install-mongodb-community-editionI get the following error after doing “sudo systemctl start mongod”",
"username": "Universal_Simplexity"
},
{
"code": "",
"text": "Appears to be permissions issue\nWhat is your os?\nWhat systemV you have selected?\nCan you list /etc contents or\nsudo ls -lrt /etc/mongod.conf",
"username": "Ramachandra_Tummala"
}
] | Cannot start Mongo DB community edition 6.0.5 | 2023-03-30T00:01:11.247Z | Cannot start Mongo DB community edition 6.0.5 | 887 |
null | [
"vscode"
] | [
{
"code": "",
"text": "The MongoDB Developer Experience team would like to learn more about your experiences using MongoDB for Visual Studio Code. We appreciate your feedback and we’ll use it to improve the MongoDB for Visual Studio Code extension.If you have five minutes to spare, please fill out this survey here.",
"username": "Quinna_Halim"
},
{
"code": "",
"text": "It would be amazing if you ported all functionalities found on compass and realm studio into the VS Code plugin, and made a plugin for android studio and XCode for the same.",
"username": "Brock"
}
] | MongoDB for VS Code Survey | 2022-07-22T20:15:45.487Z | MongoDB for VS Code Survey | 2,358 |
null | [] | [
{
"code": "",
"text": "Hello Everyone\nI am Ali Naim a Full Stack Software Developer at ModularCx startup.\nReally excited that we got accepted in the MongoDB for Startups programs.\nLooking forward to this opportunity to learn and benefit from the community.",
"username": "Ali_Naim"
},
{
"code": "",
"text": "George here.\nI enjoy computers, programming, working out, and gardening.\nFuture embedded systems engineer and front end developer.\nI can speak English, German, and Romanian.\nI am happy to join this community.",
"username": "George_soros"
}
] | Startups Program MongoDB | 2021-06-28T13:23:02.389Z | Startups Program MongoDB | 4,955 |
null | [
"compass"
] | [
{
"code": "An error occurred while deserializing the Data property of class Squidex.Domain.Apps.Entities.MongoDb.Contents.MongoContentEntity: Unable to translate bytes [BD] at index 1136 from specified code page to Unicode.\nat MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeClass(BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize[TValue](IBsonSerializer`1 serializer, BsonDeserializationContext context)\n at MongoDB.Driver.Core.Operations.CursorBatchDeserializationHelper.DeserializeBatch[TDocument](RawBsonArray batch, IBsonSerializer`1 documentSerializer, MessageEncoderSettings messageEncoderSettings)\n at MongoDB.Driver.Core.Operations.FindOperation`1.CreateFirstCursorBatch(BsonDocument cursorDocument)\n at MongoDB.Driver.Core.Operations.FindOperation`1.CreateCursor(IChannelSourceHandle channelSource, IChannelHandle channel, BsonDocument commandResult)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.ToListAsync[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n at Squidex.Infrastructure.MongoDb.MongoExtensions.ToListRandomAsync[T](IFindFluent`2 find, IMongoCollection`1 collection, Int64 take, CancellationToken ct) in C:\\src\\src\\Squidex.Infrastructure.MongoDb\\MongoDb\\MongoExtensions.cs:line 237\n at Squidex.Domain.Apps.Entities.MongoDb.Contents.Operations.Extensions.QueryContentsAsync(IMongoCollection`1 collection, FilterDefinition`1 filter, ClrQuery query, CancellationToken ct) in C:\\src\\src\\Squidex.Domain.Apps.Entities.MongoDb\\Contents\\Operations\\Extensions.cs:line 114\n",
"text": "Hi,I have a very weird exception:It seems that a string was written, but the driver cannot read anymore. The same problem happens in compass, where the same error happens when the user goes to the page.",
"username": "Sebastian_Stehle1"
},
{
"code": "",
"text": "This needs to go into a Support Ticket.Provide them with:\nMongoDB Version\nCompass Version\nIf on Atlas provide the cluster information and links.They will guide you from there.",
"username": "Brock"
},
{
"code": "",
"text": "I will ask my customer about that. The data is stored on Atlas, but I don’t know which MongoDB version.",
"username": "Sebastian_Stehle1"
},
{
"code": "",
"text": "Definitely zero hesitation, get that to MongoDB Support team, their backend engineers may be able to revert or make necessary changes to recover and correct whatever happened.",
"username": "Brock"
},
{
"code": "",
"text": "Of course. But there are several perspectives:I am maintaining an Open Source project, where I give paid support. So I want to understand how this happens and if there is anything I can to prevent that. Is this a bug on the client? Is this a bug on the server? Or is this a problem from my side or is it really data corruption, e.g. the disk is broken or something like that.The customer wants to have the actual problem fixed.So the correction is not the most important thing. I would like to understand if this can happen again.",
"username": "Sebastian_Stehle1"
},
{
"code": "mongodump --uri=\"mongodb+srv://<admin user>:<admin pwd>@cluster0.bleonard.mongodb.net/realm_sync\" --gzip -o /path/to/folder \n\n",
"text": "@Sebastian_Stehle1The only people who can actually investigate this, is MongoDB Engineering. Just as they are the only people who can determine why this happened.So what you’d do, is establish a support ticket with MongoDB Support, and request a Root Cause Analysis which will not only go over what the fix was, but why this occurred and your Technical Services Engineer will provide the steps if any that you need to take to prevent this from occurring again.You are always welcome to share in here for others what was stated etc. It’s your choice, but otherwise there’s nothing you can do with Atlas or Device Sync in troubleshooting things like that, because it’s all backend engineering side.The longer that you take to raise a ticket to support, the longer your customer may have an outage.When you go to make a ticket:To get the Device Sync App dump:The will come from this page and will be who you set as admin…The will be the password for the user above.You will need to use realm_syncat the end of the url to get the _realm_sync database. It should include a file called history.bson, as well as some other files that look like the ones above.If you have a Device Sync App, attached the History.BSON to the support ticket and all of this information will make it much faster to figure out what happened, correct it, and explain it to you.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks for your very detailed answer. I do not have access to the data and most of the engineers of my customer also do not have access to atlas. So I can only forward your information and see what happens.",
"username": "Sebastian_Stehle1"
},
{
"code": "",
"text": "As much information that can be provided to the support ticket, the better.",
"username": "Brock"
}
] | Data got corrupted | 2023-03-28T14:54:51.544Z | Data got corrupted | 537 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "When running NestJS tests in my local machine all tests runs smoothly and no error arises. However, when my pipeline runs tests on Gitlab, it fails to connect to the MongoDB Atlas giving the following error. ERROR [MongooseModule] Unable to connect to the database. Same happens when simply trying to run the server.From doing my own tests on the pipeline I have noticed that environment variables are indeed recognized by the pipeline environment and that my MongoDB atlas is also allowing all IP addresses to access it, so I have no idea what to do, any help would be greatly appreciated.",
"username": "Eric_McEvoy"
},
{
"code": "",
"text": "Hi @Eric_McEvoy,Welcome to the community! Normally when I’ve seen something this in general it tends to be something with the connection string. Perhaps you can include more information here - such as how you’ve input the connection string in the env var, with sensitive information redacted, to see if we can notice something that might be remiss?Best,\nMelissa",
"username": "Melissa_Plunkett"
},
{
"code": "",
"text": "Hello @Melissa_Plunkett, yes my connection string (which can be seen below) works locally but not when on gitlab, the connection string also works when the app is deployed on another third party service.mongodb+srv://username:[email protected]/testIn my gitlab env file and local env file this var is referenced as:\nMONGODB_URII have also tried plugging in the username and password as env vars within the connection string individually but that also did not work.",
"username": "Eric_McEvoy"
},
{
"code": "",
"text": "If anyone could still help me with this, it would be greatly appreciated.",
"username": "Eric_McEvoy"
},
{
"code": "",
"text": "Hi @Eric_McEvoy,I’ve asked around to see if I can find anyone who has hands on experience with this specific setup but no luck yet. Could you perhaps share more specifics/details on how you’ve set this up locally vs GitLab? That might help me either figure this out with you or help find someone who’s better able to see what the issue is between local and GitLab.Thanks!\nMelissa",
"username": "Melissa_Plunkett"
},
{
"code": "",
"text": "Start with this, and modify accordingly if you have Mongoose installed.",
"username": "Brock"
},
{
"code": "backend_build:\n stage: backend-build\n image: node:latest\n script:\n - cd ./src/nestjs-backend\n - npm ci\n - npm run lint\n - npm run build\n artifacts:\n paths:\n - ./src/nestjs-backend/dist/\n cache:\n key:\n files:\n - ./src/nestjs-backend/package-lock.json\n paths:\n - ./src/nestjs-backend/node_modules/\n policy: pull-push\n only:\n - backend\n - master\n\nbackend_test:\n stage: backend-test\n image: node:latest\n services: [mongo]\n cache:\n - key:\n files:\n - ./src/nestjs-backend/package-lock.json\n paths:\n - ./src/nestjs-backend/node_modules/\n policy: pull\n before_script:\n - cd ./src/nestjs-backend\n - npm run test\n\n only:\n - backend\n - master\n\nbackend_deploy:\n stage: backend-deploy\n image: node:latest\n script:\n - cd ./src/nestjs-backend\n - echo $RAILWAY_PROJECT_ID\n - npm i -g @railway/cli\n - railway link --environment $RAILWAY_ENVIRONMENT_NAME $RAILWAY_PROJECT_ID\n - railway service $RAILWAY_SERVICE_NAME\n - railway up -d\n artifacts:\n when: always\n paths:\n - ./src/nestjs-backend/dist/\n expire_in: 10 min\n cache:\n key:\n files:\n - ./src/nestjs-backend/package-lock.json\n paths:\n - ./src/nestjs-backend/node_modules/\n policy: pull-push\n only:\n - backend\n - master\n\n",
"text": "Hello again @Melissa_Plunkett and thanks for the reply.Not entirely sure what you mean by this question, but on my gitlab pipline I use the latest node image (image: node:latest) and my scripts below is what my current YML file looks like. All .env variables are recognized by the pipeline as they print to the terminal when i console.log() them.It works locally and when deployed on railway just not on the gitlab pipeline.\nAnd more context on how I run it locally, I just npm run start/test on my personal desktop on vs code and it works fine.",
"username": "Eric_McEvoy"
}
] | Gitlab CI/CD pipeline tests fails to connect to MongoDB Atlas even though my local machine testing does | 2023-03-01T00:54:28.282Z | Gitlab CI/CD pipeline tests fails to connect to MongoDB Atlas even though my local machine testing does | 2,176 |
[
"node-js",
"mongoose-odm"
] | [
{
"code": "`const mongoose = require('mongoose');\nconst path = require('path');\n\nlet conn = null;\nconst connect = async connectionString => {\n try {\n console.log('Trying to establish the connection::');\n if (conn == null) {\n mongoose.set('strictQuery', false);\n conn = await mongoose.connect(connectionString, {\n tlsCAFile: path.join(__dirname, './rds-combined-ca-bundle.pem'),\n keepAlive: true\n });\n console.log('New DB connection established::');\n } else {\n console.log('DB connection from cache::');\n }\n return conn;\n } catch (err) {\n console.log('MongoDB connection failed::', err);\n throw err;\n }\n};\n\nmodule.exports = { connect };\n`\n",
"text": "We are using AWS Lambda (Node.js) with Express and Mongoose to connect to an AWS DocumentDB. Below is a code snippet of our connection file. The connection setup is working fine, but we are encountering MongoDB connection timeout errors when we try to call the Lambda function after a while. We have attached a screenshot of the error below. Could you please take a look and let us know if you have a solution? Thank you in advance.image_2023_03_29T04_23_24_676Z988×195 7.07 KB",
"username": "Zil_D"
},
{
"code": "const mongoose = require('Mongoose');\nmongoose.connect(\"MongoDB://localhost:<PortNumberHereDoubleCheckPort>/<DatabaseName>\", {useNewUrlParser: true});\nconst <nameOfDbschemahere> = new mongoose.schema({\n name: String,\n rating: String,\n quantity: Number,\n someothervalue: String,\n somevalue2: String,\n});\n\nconst Fruit<Assuming as you call it FruitsDB> = mongoose.model(\"nameOfCollection\" , <nameOfSchemeHere>);\n\nconst fruit = new Fruit<Because FruitsDB calling documents Fruit for this>({\n name: \"Watermelon\",\n rating: 10,\n quantity: 50,\n someothervalue: \"Pirates love them\",\n somevalue2: \"They are big\",\n});\nfruit.save();\n",
"text": "Hello @Zil_DPlease review:I would also look into is whether you’re hitting the connection max limits, if so I would refer to AWS support specifically in determining the solution for the AWS Lambda timeouts and it would be awesome to share them here.As Lambda timeout is something that is frequently brought up here, I personally haven’t experienced it myself in my AWS lab so I’m not exactly sure what specifically may cause it.But for Mongoose the above is pretty much what I do for the most part to connect and I use a URI instead of local.",
"username": "Brock"
}
] | MongoDb(AWS documentDB) connection with mongoose ODB - Nodejs | 2023-03-29T04:45:00.394Z | MongoDb(AWS documentDB) connection with mongoose ODB - Nodejs | 2,111 |
|
[
"containers",
"atlas",
"field-encryption"
] | [
{
"code": "connect error for encrypted client: AutoEncryption extra option \\\"cryptSharedLibRequired\\\" is true, but we failed to load the crypt_shared library",
"text": "I’m now trying to run a go application which uses mongodb/mongo-go-driver for automatic encryption when connecting to an atlas cluster (M10, version 6.0.4).The binary compiles successfully with libmongocrypt, and is running in amd64/debian:bullseye docker container.\nI have placed the .so file described here into the docker container, and provided the path to the file to the Go application (cryptSharedLibPath)When I set cryptSharedLibRequired=true, I get the following error:\nconnect error for encrypted client: AutoEncryption extra option \\\"cryptSharedLibRequired\\\" is true, but we failed to load the crypt_shared library. If I set cryptSharedLibRequired=false, then the first insert fails with errors relating to fields not being encrypted.I have tried using several different directories for the .so file (/usr/local/lib/, /lib/x86_64-linux-gnu/) and confirmed the application can access the file (just reading the bytes before trying to do any encryption). I have also confirmed the .so file is where I expect it in the docker container. I have tried using absolute and relative paths when giving the driver the location of the .so file.Debian 11\ngo 1.15\ngo mongo-driver v1.11.2\nlibmongocrypt version: 1.8.0-20230308+gitefaeb8e385Can anyone see anything I’m doing wrong and/or provide methods to debug?",
"username": "danny_fry"
},
{
"code": "",
"text": "Atlas should already be encrypted…",
"username": "Brock"
}
] | Go application automatic encryption cannot access crypt_shared | 2023-03-14T22:10:55.575Z | Go application automatic encryption cannot access crypt_shared | 1,003 |
|
null | [
"python"
] | [
{
"code": "",
"text": " I’m Veronica Cooley-Perry and I recently joined Community Programs team at MongoDB as a Senior Community Manager.I’m located in Raleigh, North Carolina in the States. Raleigh is part of the ‘Research Triangle’ as Duke University, UNC Chapel Hill and NC State University are all located in a 20 miles radius of each other. This also means A LOT of people love that live here!I spent the first part of my career working in Higher Education institutions across the country. But then I started to get an itch to try and use my skillset in a new field. This lead me to managing open source community events at Red Hat + and I fell in love with working with community members! I love the energy around communities!I’m a beginner Python programmer and am eager to dive into learning more about MongoDB - hopefully along with some of the community!When I’m not working, I’m usually hanging out with my husband and my adorable 5 year old labradoodle , working out , and enjoying being outdoors!I’m looking forward to getting to know you all and being part of this awesome community !",
"username": "Veronica_Cooley-Perry"
},
{
"code": "",
"text": "Hey Veronica! We are a MongoDB Data Connectivity and Integrations Partner, headquartered in the Triangle. We should meet up for a coffee or something! I’ll also send you a LinkedIn invite. Look forward to meeting you - David Kleiss (Sr Partner Alliances Manager, CData Software, Inc.) - [email protected]",
"username": "David_Kleiss"
},
{
"code": "",
"text": "Awesome - got your LinkedIn invite message - let’s continue the convo there.",
"username": "Veronica_Cooley-Perry"
}
] | 🌱 Hey y'all from North Carolina (US)! | 2022-10-13T18:28:32.230Z | :seedling: Hey y’all from North Carolina (US)! | 2,092 |
null | [
"aggregation",
"atlas"
] | [
{
"code": "An unknown error occurred(AtlasError) $out is not allowed or the syntax is incorrectexports = function(payload, response) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n \n const readColl = mongodb.db(\"temp_db\").collection(\"temp_read\")\n const writeColl = mongodb.db(\"temp_db\").collection(\"temp_write\")\n \n const myAgg = readColl.aggregate([ { $sample: { size: 2 } }, { $out: \"temp_write\" } ])\n return myAgg\n}\n",
"text": "Using Atlas aggregation wizard on a sample collection, the $out stage returnsAn unknown error occurredWhen running the aggregation inside a function, the result is:(AtlasError) $out is not allowed or the syntax is incorrectCode here:Any advice from the community?",
"username": "enjoywithouthey"
},
{
"code": "",
"text": "Hi @enjoywithouthey,Can you share the pipeline and which sample collection you used?Also which cluster Tier are you using in Atlas and which version ? Anything out of the ordinary with your cluster?",
"username": "MaBeuLux88"
},
{
"code": "[\n {\n \"1\": \"A\"\n },\n {\n \"2\": \"B\"\n },\n {\n \"3\": \"C\"\n },\n {\n \"4\": \"D\"\n },\n {\n \"5\": \"E\"\n },\n {\n \"6\": \"F\"\n },\n {\n \"7\": \"G\"\n }\n]\n[\n {\n $sample:\n {\n size: 2,\n },\n },\n {\n $out:\n \"temp_write\",\n },\n]\n",
"text": "My Tier is $0.1/1M ServerlessThe sample data I made up. It’s in a collection I’m calling temp_readI’m selecting some random documents and pushing them to an empty collection called temp_write$merge in a previous pipeline worked fine. $out is giving errors.Here’s the sample data:Pipeline:Thanks for your help!",
"username": "enjoywithouthey"
},
{
"code": "",
"text": "\nScreen Shot 2023-03-28 at 7.11.33 PM2242×1442 328 KB\n",
"username": "enjoywithouthey"
},
{
"code": "",
"text": "@MaBeuLux88 I did a little digging and found something interesting…On some previous Clusters I made for tutorials, version 5.0.15 installs automatically under the M0 free tier. No problems with $out.I made a recent paid production M1 Cluster (the cluster causing the problem with $out) and that cluster is version 6.2.1. The problem with $out persists.Hope this helps.",
"username": "enjoywithouthey"
},
{
"code": "",
"text": "@MaBeuLux88 to verify, I just spun-up a M10 Dedicated Cluster, which allowed me to choose v5 in the advanced configuration (instead of v6).$out works no problem.",
"username": "enjoywithouthey"
},
{
"code": "",
"text": "I was able to reproduce on a serverless instance as well !\nimage988×1039 48.1 KB\n",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Serverless instances don’t support the $out stage. Use $merge instead.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Ok, thanks.Maybe that we’ve created a thread on this subject, the results will be scrapped and show up in search engines. Appreciate the help.",
"username": "enjoywithouthey"
},
{
"code": "",
"text": "Definitely!I wasn’t aware of this limitation of serverless. Hopefully it goes away soon.\nBut the error message could be better. I’ll create a ticket to improve it. It would have been easier for you to understand the source of the problem.Cheers, \nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $out in aggregation stage returns unknown error | 2023-03-28T22:52:56.345Z | $out in aggregation stage returns unknown error | 938 |
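For anyone hitting the same error on a serverless instance, the workaround suggested above is to end the pipeline with $merge instead of $out. A mongosh sketch using the collection names from this thread follows; the whenMatched/whenNotMatched choices are illustrative.

```js
// mongosh sketch: copy 2 random documents into temp_write without using $out
const tempDb = db.getSiblingDB("temp_db");
tempDb.temp_read.aggregate([
  { $sample: { size: 2 } },
  {
    $merge: {
      into: "temp_write",       // target collection, created if it does not exist
      whenMatched: "replace",   // overwrite documents whose _id already exists there
      whenNotMatched: "insert"  // insert everything else
    }
  }
])
```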
null | [
"java",
"kotlin"
] | [
{
"code": "",
"text": "Do you use the MongoDB Java driver? We want to hear from you! Take this short survey.Your feedback will help inform the product roadmap for the MongoDB Java driver and related features for Kotlin. Let your voice be heard!",
"username": "Ashni_Mehta"
},
{
"code": "",
"text": "As a Java guy, this sounds like it’s aimed at me, but when I get to the second page, I find I need to know Kotlin, so it’s just another time waster.",
"username": "Neil_Youngman"
},
{
"code": "",
"text": "Having a lot of experience with the Realm Kotlin SDK, and Java SDK, and experience with the MongoDB Java Driver, overall Kotlin is eventually going to overtake new server-side projects on a global scale simply because it’s easier for people to learn, and allows easier transitions from device to device, and service to service without replacing older Java codebases.But I think the bigger limitation you’ll find is the slow adoption rate of a Kotlin Driver, and most likely will be something to push towards the Realm/Device Sync Customers for a “One Language” for mobile and server side sales approach.But this will cause use-case confusion between the Driver and the SDK like is seen with Node.JS. If MongoDBs team is not making efforts to educate the customers about the difference between the Driver intended for backend/server side, vs the SDK intended for client/edge side, it’s only going to tick off customers when they experience and see behaviors that aren’t fitting at all to what they expect. Or the security risks/protections associated with either product.Java Driver gets away with minimal issues because of its age and known use/education behind it. But Kotlin traditionally is taught as a edge device/mobile/desktop/client side programming language vs server side.",
"username": "Brock"
},
{
"code": "",
"text": "Before thinking about Kotlin support, focus on making sure the non-reactive Java driver is fully compatible with virtual threads that are coming in Java 21.",
"username": "Jean-Francois_Lebeau"
}
] | Using MongoDB and Java? We want to hear from you! | 2023-03-15T20:21:36.299Z | Using MongoDB and Java? We want to hear from you! | 1,185 |
null | [
"aggregation",
"queries",
"java"
] | [
{
"code": "[\n {\n\n \"fiedl1\": \"value1\",\n \"field2\": \"value2\",\n \"master\": [\n {\n \"attributes\": {\n \"key\": [\n \"FIRSTKEY\"\n ]\n },\n \"Status\": \"ACTIVE\"\n },\n {\n \"attributes\": {\n \"key\": [\n \"SECONDKEY\"\n ]\n },\n \"Status\": \"ACTIVE\"\n },\n {\n \"attributes\": {\n \"key\": [\n \"THIRDKEY\"\n ]\n },\n \"Status\": \"ACTIVE\"\n }\n ]\n }\n]\n[\n {\n\n \"field1\": \"value1\",\n \"field2\": \"value2\",\n \"master\": [\n {\n \"attributes\": {\n \"key\": [\n \"FIRSTKEY\"\n ]\n },\n \"Status\": \"ACTIVE\"\n },\n {\n \"attributes\": {\n \"key\": [\n \"SECONDKEY\"\n ]\n },\n \"Status\": \"ACTIVE\"\n }\n ]\n }\n]\ndb.collection.aggregate([\n {\n $match: {\n \"master.attributes.key\": {\n $in: [\n \"FIRSTKEY\",\n \"SECONDKEY\"\n ]\n }\n }\n },\n {\n $project: {\n master: {\n $filter: {\n input: \"$master\",\n as: \"master\",\n cond: {\n $in: [\n \"$$master.attributes.key\",\n [\n \"FIRSTKEY\",\n \"SECONDKEY\"\n ]\n ]\n }\n }\n },\n _id: 0\n }\n }\n])\n[\n {\n \"master\": []\n }\n]\n",
"text": "Hi,I have the following collectionI’m trying to filter only master.attributes.key in [‘FIRSTKEY’, ‘SECONDKEY’] and expecting a response as shown belowI tried the following aggregate commandBut I got below response instead of filtered . Appreciate any help here",
"username": "Sravana_Thatta"
},
{
"code": " $in: [\n \"$purpose.attributes.localizationKey\",\n [\n \"cs_profiling\",\n \"cs_research\"\n ]\n ]\n $in: [\n \"$purpose.attributes.localizationKey\",\n [\n [ \"cs_profiling\" ] ,\n [ \"cs_research\" ]\n ]\n ]\n",
"text": "The issue is that localizationKey in the source document is an array but inyou use it as it was a simple string value. The following should work:Note that if localizationKey in the original document contains anything else than the single word as shared then it will not work. Since it is an array, then, one day, it will contain something else than a single word. Using{ $ne : { { $setIntersection … } , [ ] } }might then be a better idea.",
"username": "steevej"
},
{
"code": " $in: [\n \"$master.attributes.key\",\n [\n [ \"FIRSTKEY\" ] ,\n [ \"SECONDKEY\" ]\n ]\n ]\ndb.collection.aggregate([\n {\n $match: {\n \"master.attributes.key\": {\n $in: [\n \"FIRSTKEY\",\n \"SECONDKEY\"\n ]\n }\n }\n },\n {\n $project: {\n master: {\n $filter: {\n input: \"$master\",\n as: \"master\",\n cond: {\n $in: [\n \"$$master.attributes.key\",\n [\n [\"FIRSTKEY\"],\n [\"SECONDKEY\"]\n ]\n ]\n }\n }\n },\n _id: 0\n }\n }\n])\n[\n {\n \"master\": []\n }\n]\n",
"text": "Thanks @steevej I tried the below query, but still empty master response",
"username": "Sravana_Thatta"
},
{
"code": "",
"text": "You have modified each and every sample documents from your original post.Sorry but I do not work on moving targets.The query I supplied worked in the original documents.",
"username": "steevej"
},
{
"code": "",
"text": "Apologies, for renaming the fields. I tried the solution here https://mongoplayground.net/ but I got an empty response",
"username": "Sravana_Thatta"
},
{
"code": "db.collection.aggregate([\n {\n $match: {\n \"master.attributes.key\": {\n $in: [\n \"FIRSTKEY\",\n \"SECONDKEY\"\n ]\n }\n }\n },\n {\n $project: {\n master: {\n $filter: {\n input: \"$master\",\n as: \"master\",\n cond: {\n $in: [\n \"$$master.attributes.key\",\n [\n [\n \"FIRSTKEY\"\n ],\n [\n \"SECONDKEY\"\n ]\n ]\n ]\n }\n }\n }\n }\n }\n])\n",
"text": "{ $ne : { { $setIntersection … } , } }The below command worked. Thanks @steevej for your help",
"username": "Sravana_Thatta"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $in query with in an array with a list of values is not working | 2023-03-29T12:39:32.560Z | $in query with in an array with a list of values is not working | 505 |
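Note that wrapping each value in its own array, as in the accepted query above, only matches keys that hold exactly one element. The $setIntersection variant hinted at earlier also matches keys containing several values; a mongosh sketch follows, reusing the field names from the later posts.

```js
// mongosh sketch: keep array elements whose key overlaps the wanted values,
// even when attributes.key holds more than one entry
const wanted = ["FIRSTKEY", "SECONDKEY"];

db.collection.aggregate([
  { $match: { "master.attributes.key": { $in: wanted } } },
  {
    $project: {
      _id: 0,
      master: {
        $filter: {
          input: "$master",
          as: "m",
          cond: { $ne: [{ $setIntersection: ["$$m.attributes.key", wanted] }, []] }
        }
      }
    }
  }
])
```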
null | [
"dot-net"
] | [
{
"code": "_id : ObjectId\nId: string\nValue: Int32\npublic string Id {get; set;}\npublic int Value {get; set;}\n",
"text": "I have a legacy database with a collection with the following structure (simplified):And I have my C# class like this:How can I map my class property Id with the Id (string) field in the collection, and not map with _id (ObjectId)?\nI have tested many approaches but I usually got this exception message: Cannot deserialize a ‘String’ from BsonType ‘ObjectId’.",
"username": "Bruno_Almeida"
},
{
"code": "Idid_id_id[BsonId]BsonClassMap.SetIdMember_idId_id[BsonId]using System;\nusing System.Collections.Generic;\nusing System.Threading.Tasks;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Bson.Serialization;\n\npublic class MyClass\n{\n [BsonId]\n public ObjectId _id { get; set; }\n public string Id { get; set; }\n public int Value { get; set; }\n}\n\npublic class Program\n{\n static async Task Main(string[] args)\n {\n // Set up a MongoDB client and database\n var client = new MongoClient(\"mongodb://localhost:27017/test\");\n var database = client.GetDatabase(\"csharp\");\n\n // Get a reference to the \"mycollection\" collection\n var collection = database.GetCollection<BsonDocument>(\"dotnetcore\");\n \n // Insert a sample document into the collection\n var myObject = new MyClass { _id = ObjectId.GenerateNewId(), Id = \"foo\", Value = 42 };\n var document = myObject.ToBsonDocument();\n await collection.InsertOneAsync(document);\n\n // Query the collection for documents and deserialize them into instances of MyClass\n var filter = Builders<BsonDocument>.Filter.Empty;\n var documents = await collection.Find(filter).ToListAsync();\n var myObjects = new List<MyClass>();\n\n foreach (var y in documents)\n {\n var deserializedObject = BsonSerializer.Deserialize<MyClass>(y);\n myObjects.Add(deserializedObject);\n }\n\n // Do something with the deserialized objects\n foreach (var x in myObjects)\n {\n Console.WriteLine($\"_id: {x._id}, Id: {x.Id}, Value: {x.Value}\");\n }\n }\n}\n_id: 6423c91b114e33a0106407be, Id: foo, Value: 42\nIdCustomerKey",
"text": "Hi @Bruno_Almeida,Welcome to the MongoDB Community forums By default, the C# driver maps the properties Id , id , and _id to the underlying _id database field. But you can override this behavior via attributes (e.g. [BsonId] ), configuration (BsonClassMap.SetIdMember ), or convention.Note that the “Id member” is the field _id required by MongoDB. In this case, Id is just some arbitrary field in the database, but MongoDB still requires an _id field, which will populate with a new ObjectId if one is not provided.So, I added the [BsonId] attribute and define the ObjectId _id in the schema and it worked. Sharing the code for your reference:It returns the output:However, there may be places where this mapping will not work as expected. For example, if this type is nested as a subdocument or contained in a list or dictionary. While it may work in some cases and maybe it won’t. Our recommended approach is to rename the Id database field to something more meaningful such as CustomerKey.I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank you for your reply @Kushagra_Kesav.I believe the best approach will be to rename the Id field (something I was avoiding) because I also don’t want to add Mongo references in the entity.Greetings,\nBruno",
"username": "Bruno_Almeida"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Avoid _id mapping | 2023-03-17T11:36:43.588Z | Avoid _id mapping | 1,652 |
null | [] | [
{
"code": "",
"text": "Hi!I have a large M80 database and I’m planning to split its data into HOT / COLD storage tiers. Using s3 buckets to store the majority of the data that is not frequently accessed and data federation to query against atlas and s3 at once.My currently doubt is, how to define partitions, indexes or whatever to avoid all the s3 data to be scanned. All the exemples I found are simple and I couldn’t figure out yet how the s3 reading works.Can someone please help me here?",
"username": "Wuerike"
},
{
"code": "",
"text": "Hello Wuerike,My name is Ben Flast, I’m the product manager for Atlas Data Federation. We do have guidance about how to use paths and what to keep in mind when partitioning to optimize performance at the links below. But partitioning can definitely be a challenging task, and guidance can differ based on your underlying data and the query patterns you plan to use. If you’d like, please throw some time on my calendar and we can discuss in detail: Calendly - Benjamin FlastBest,\nBen",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How data federation scans s3 buckets | 2023-03-28T21:16:27.853Z | How data federation scans s3 buckets | 366 |
null | [
"containers"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-03-15T08:44:11.227+00:00\"},\"s\":\"D2\", \"c\":\"COMMAND\", \"id\":21965, \"ctx\":\"conn68\",\"msg\":\"About to run the command\",\"attr\":{\"db\":\"inPoint_WF\",\"client\":\"172.17.0.1:56210\",\"commandArgs\":{\"find\":\"wfc.scheduled_commands\",\"filter\":{\"ExecuteTime\":{\"$lt\":638144666512274247}},\"$db\":\"inPoint_WF\",\"lsid\":{\"id\":{\"$uuid\":\"112dfa69-185a-45b5-9c76-923ec402ad7e\"}}}}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.227+00:00\"},\"s\":\"D3\", \"c\":\"STORAGE\", \"id\":22414, \"ctx\":\"conn68\",\"msg\":\"WT begin_transaction\",\"attr\":{\"snapshotId\":131960,\"readSource\":\"kNoTimestamp\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.227+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20967, \"ctx\":\"conn68\",\"msg\":\"Beginning planning\",\"attr\":{\"options\":\"INDEX_INTERSECTION \",\"query\":\"ns=inPoint_WF.wfc.scheduled_commandsTree: ExecuteTime $lt 638144666512274247\\nSort: {}\\nProj: {}\\n\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20968, \"ctx\":\"conn68\",\"msg\":\"Index number and details\",\"attr\":{\"indexNumber\":0,\"index\":\"kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: \\\"_id_\\\" }\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20968, \"ctx\":\"conn68\",\"msg\":\"Index number and details\",\"attr\":{\"indexNumber\":1,\"index\":\"kp: { ExecuteTime: -1 } name: '(idx_exectime, )' io: { v: 2, key: { ExecuteTime: -1 }, name: \\\"idx_exectime\\\", background: true }\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20968, \"ctx\":\"conn68\",\"msg\":\"Index number and details\",\"attr\":{\"indexNumber\":2,\"index\":\"kp: { CommandName: 1, Data: 1 } unique name: '(idx_key, )' io: { v: 2, unique: true, key: { CommandName: 1, Data: 1 }, name: \\\"idx_key\\\", background: true }\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20970, \"ctx\":\"conn68\",\"msg\":\"Predicate over field\",\"attr\":{\"field\":\"ExecuteTime\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D2\", \"c\":\"QUERY\", \"id\":20971, \"ctx\":\"conn68\",\"msg\":\"Relevant index\",\"attr\":{\"indexNumber\":0,\"index\":\"kp: { ExecuteTime: -1 } name: '(idx_exectime, )' io: { v: 2, key: { ExecuteTime: -1 }, name: \\\"idx_exectime\\\", background: true }\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20972, \"ctx\":\"conn68\",\"msg\":\"Rated tree\",\"attr\":{\"tree\":\"ExecuteTime $lt 638144666512274247 || First: 0 notFirst: full path: ExecuteTime\\n\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20944, \"ctx\":\"conn68\",\"msg\":\"Tagging memoID\",\"attr\":{\"id\":1}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20943, \"ctx\":\"conn68\",\"msg\":\"Enumerator: memo just before moving\",\"attr\":{\"memo\":\"[Node #1]: AND enumstate counter 0\\n\\tchoice 0:\\n\\t\\tsubnodes: \\n\\t\\tidx[0]\\n\\t\\t\\tpos 0 pred ExecuteTime $lt 638144666512274247\\n\\n\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20976, \"ctx\":\"conn68\",\"msg\":\"About to build solntree from tagged tree\",\"attr\":{\"tree\":\"ExecuteTime $lt 638144666512274247 || Selected Index #0 pos 0 combine 1\\n\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", 
\"c\":\"QUERY\", \"id\":20978, \"ctx\":\"conn68\",\"msg\":\"Planner: adding solution\",\"attr\":{\"solution\":\"FETCH\\n---nodeId = 2\\n---fetched = 1\\n---sortedByDiskLoc = 0\\n---providedSorts = {baseSortPattern: { ExecuteTime: -1 }, ignoredFields: []}\\n---Child:\\n------IXSCAN\\n---------indexName = idx_exectime\\n---------keyPattern = { ExecuteTime: -1 }\\n---------direction = 1\\n---------bounds = field #0['ExecuteTime']: (638144666512274247, -inf.0]\\n---------nodeId = 1\\n---------fetched = 0\\n---------sortedByDiskLoc = 0\\n---------providedSorts = {baseSortPattern: { ExecuteTime: -1 }, ignoredFields: []}\\n\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D5\", \"c\":\"QUERY\", \"id\":20979, \"ctx\":\"conn68\",\"msg\":\"Planner: outputted indexed solutions\",\"attr\":{\"numSolutions\":1}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D2\", \"c\":\"QUERY\", \"id\":20926, \"ctx\":\"conn68\",\"msg\":\"Only one plan is available\",\"attr\":{\"query\":\"ns: inPoint_WF.wfc.scheduled_commands query: { ExecuteTime: { $lt: 638144666512274247 } } sort: {} projection: {}\",\"planSummary\":\"IXSCAN { ExecuteTime: -1 }\"}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"D3\", \"c\":\"STORAGE\", \"id\":22413, \"ctx\":\"conn68\",\"msg\":\"WT rollback_transaction\",\"attr\":{\"snapshotId\":131960}}\n{\"t\":{\"$date\":\"2023-03-15T08:44:11.228+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn68\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"inPoint_WF.wfc.scheduled_commands\",\"command\":{\"find\":\"wfc.scheduled_commands\",\"filter\":{\"ExecuteTime\":{\"$lt\":638144666512274247}},\"$db\":\"inPoint_WF\",\"lsid\":{\"id\":{\"$uuid\":\"112dfa69-185a-45b5-9c76-923ec402ad7e\"}}},\"planSummary\":\"IXSCAN { ExecuteTime: -1 }\",\"keysExamined\":0,\"docsExamined\":0,\"cursorExhausted\":true,\"numYields\":0,\"nreturned\":0,\"queryHash\":\"8E53FB34\",\"planCacheKey\":\"68A88A17\",\"queryFramework\":\"classic\",\"reslen\":122,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1}},\"Global\":{\"acquireCount\":{\"r\":1}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:56210\",\"protocol\":\"op_msg\",\"durationMillis\":0}}\n{\"t\":{\"$date\":\"2023-03-15T09:36:51.439+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn133\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"inPoint_WF.wfc.workflows\",\"command\":{\"find\":\"wfc.workflows\",\"filter\":{\"_id\":\"64108391e872af626b278864\"},\"$db\":\"inPoint_WF\",\"lsid\":{\"id\":{\"$uuid\":\"5f3a7106-86ff-4d68-a4bb-6caa3c015786\"}}},\"planSummary\":\"IDHACK\",\"keysExamined\":1,\"docsExamined\":1,\"cursorExhausted\":true,\"numYields\":0,\"nreturned\":1,\"queryHash\":\"740C02B0\",\"queryFramework\":\"classic\",\"reslen\":67766,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1}},\"Global\":{\"acquireCount\":{\"r\":1}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"storage\":{},\"remote\":\"172.17.0.1:37932\",\"protocol\":\"op_msg\",\"durationMillis\":3}}\n",
"text": "Hi,\nI’m using the latest mongodb docker container for development and testing.\nWhile developing I found these log entries (log level 5, profiling level 1):It states that there was a “slow query” find on the collection scheduled_commands with the filter {“ExecuteTime”:{“$lt”:638144666512274247}}.There is a desc index on “ExecuteTime”, and it even states that it uses an IXSCAN with the right index. Also the duration seems to be 0 ms.Can someone point me in the right direction why this is classified as a “slow query” ?EDIT: I have the same with and find which only filters for _id:Thanks in advance",
"username": "Markus_Schweitzer"
},
{
"code": "",
"text": "Hello @Markus_Schweitzer,Welcome to the MongoDB Community Forums It states that there was a “slow query” find on the collection scheduled_commands with the filter {“ExecuteTime”:{“$lt”:638144666512274247}}.As per the documentation, the client operations (such as queries) will appear in the log if their duration exceeds the slow operation threshold or when the log verbosity level is 1 or higher, which is aligning with your case.(log level 5, profiling level 1)This is happening because of the high log verbosity level, so I believe this is working as designed.I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "There is a desc index on “ExecuteTime”, and it even states that it uses an IXSCAN with the right index. Also the duration seems to be 0 ms.Can someone point me in the right direction why this is classified as a “slow query” ?I believe OP is not asking why there’s a slow query log message, instead. he’s asking why that specific commmand/query is classified as “slow”. (e.g. it’s using index scan and duration shows 0 ms).",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hey @Markus_Schweitzer, can you please share the output of the following command:db.getProfilingStatus()",
"username": "InderjeetSingh"
}
] | Slow Query with IXSCAN | 2023-03-15T09:21:05.314Z | Slow Query with IXSCAN | 1,464 |
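As the replies above point out, with log verbosity at 1 or higher every operation is written to the log with the "Slow query" message, even at 0 ms. A few mongosh commands for checking and resetting this are sketched below; the 100 ms value is simply the default slow-operation threshold.

```js
// mongosh sketch: inspect and reset profiling level and log verbosity
db.getProfilingStatus()                    // { was: <level>, slowms: <threshold>, ... }
db.setProfilingLevel(0, { slowms: 100 })   // profiler off, 100 ms slow-op threshold

db.getLogComponents()                      // current per-component verbosity
db.setLogLevel(0)                          // default verbosity: only ops over slowms are logged
db.setLogLevel(0, "query")                 // or reset a single component
```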
null | [
"aggregation"
] | [
{
"code": "$lookupMatchA1[\n {\n \"category\": \"A1\",\n \"name\": \"Document1\",\n \"include\": false\n },\n {\n \"category\": \"A1\",\n \"name\": \"Document2\",\n \"include\": true\n },\n {\n \"category\": \"A1\",\n \"name\": \"Document4\",\n \"include\": true\n },\n {\n \"category\": \"A1\",\n \"name\": \"Document5\",\n \"include\": false\n}\n],\nincludefalse",
"text": "Is it possible to filter out documents on the root level?Update: It is not visible in this example, but the reason why I don’t perform this deeper filtering inside the $match stage is because the field that is necessary for performing this filtering needs to be $lookup 'ed firstExample:Thank you",
"username": "Vladimir"
},
{
"code": "includefalseMatchA1",
"text": "Doing the followingNow, how to filter out all the returned documents that have include set to false?is almost exactly the same asA Match stage is run to retrieve all the documents that have a category value A1:So it is really not clear why just adding include:{$ne:false} or $not:{include:false} to your $match does not work?",
"username": "steevej"
},
{
"code": "",
"text": "The issue is that I would like to use $filter after $match. This need comes in scenarios as follows:",
"username": "Vladimir"
},
{
"code": "",
"text": "The use-case is still not clear. Based on the sample documents you providedadding include:{$ne:false} or $not:{include:false} to your $matchshould work.To understand better we will need documents from both collections. The current pipeline you have is also useful to see.In your title you wrote at the root level but in your last post you mentioned $lookup which kind of imply documents within an array.",
"username": "steevej"
}
] | Aggregation, using $filter at the root level | 2023-03-26T19:54:05.166Z | Aggregation, using $filter at the root level | 579 |
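The thread above does not reach a resolution, but reading the question as "the field I need only exists after a $lookup", one straightforward shape is to join first and then run a second $match on the joined field. The sketch below is generic; the collection and field names are made up, since the thread does not share the real ones.

```js
// mongosh sketch: drop root documents based on a field that only exists after $lookup
db.orders.aggregate([
  { $match: { category: "A1" } },        // cheap pre-filter on the root collection
  {
    $lookup: {
      from: "settings",                  // hypothetical joined collection
      localField: "settingsId",          // hypothetical join keys
      foreignField: "_id",
      as: "settings"
    }
  },
  // keep only root documents where at least one joined document has include: true
  { $match: { "settings.include": true } }
])
```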
null | [
"database-tools",
"backup"
] | [
{
"code": "mongodump --oplog -u admin -p <my password here> --authenticationDatabase admin --out /data/backup/\n2023-03-27T08:11:27.198+0200 Failed: error creating intents to dump: error creating intents for database admin: error counting admin.system.new_users: (Unauthorized) not authorized on admin to execute command { count: \"system.new_users\", lsid: { id: UUID(\"11218ac0-f185-4035-ac82-e06b31273527\") }, $clusterTime: { clusterTime: Timestamp(1679897486, 4), signature: { hash: BinData(0, 83B632FBABBB17FECFA209B38BD559DD413C8E97), keyId: 7158403034257555467 } }, $db: \"admin\", $readPreference: { mode: \"primaryPreferred\" } }\nprivileges: [ { resource : { \"db\" : \"admin\", \"collection\" : \"system.new_users\" }, actions: [ \"find\" ] }]",
"text": "Hi there,I have been using MongoDB for over ten years, but today I am facing a problem and I don’t know how to best deal with.\nSo I have a large production database that started out with MongoDB 2.x (not sure exactly which, perhaps 2.4), and which has been updated all the way to 4.2 and very recently to 5.0. I thought everything was ok until I discovered, today, that mongodump no longers perform my daily backups.Failure message is:After some googling/reading it seems that this admin.system.new_users collection was (automatically) created during upgrade to 2.6, but never cleaned up. Its content is obsolete and could be safely deleted, and that would fix the mongodump issue, except that, for some reason, I can’t.Every attempt to delete it fails with “can’t drop system collection” (probably because it is in the system namespace), and it can’t be renamed either, any attempt to do so ends with “Invalid system namespace”.Looking at mongodb source code, there seem to be no easy way to drop that old unwanted collection which prevents me from backing up my database.Actually I managed to get my backups working again by adding a specific role to my admin user privileges: [ { resource : { \"db\" : \"admin\", \"collection\" : \"system.new_users\" }, actions: [ \"find\" ] }], but this isn’t fully satisfying.\nOf course a full re-creation of the database without the offending collection would work, but that would result in an unacceptable downtime for my customers.So my question: is there a way to have mongodb drop that collection, bypassing the system namespace checks ? It seems that crafting a specific oplog entry could actually do that, but I not very comfortable with playing with such dark magic on a production database!Any idea ?\nThanks",
"username": "Nicolas_Bouquet"
},
{
"code": "",
"text": "Check this jira ticket\nYou have to give explicit drop privileges on that collection\nhttps://jira.mongodb.org/browse/SERVER-20793",
"username": "Ramachandra_Tummala"
},
{
"code": "db.system.new_users.drop()\nuncaught exception: Error: drop failed: {\n \"ok\" : 0,\n \"errmsg\" : \"not authorized on admin to execute command { drop: \\\"system.new_users\\\", lsid: { id: UUID(\\\"39647226-1638-45dc-ba17-0edffeab4f2c\\\") }, $clusterTime: { clusterTime: Timestamp(1679982428, 1), signature: { hash: BinData(0, 29228B39097FD3021D97D56D945171720FEDF25C), keyId: 7158403034257555467 } }, $db: \\\"admin\\\" }\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\",\n \"operationTime\" : Timestamp(1679982429, 5)\n}\npc:PRIMARY> db.updateRole(\"cleanup\",{privileges: [ { resource : { \"db\" : \"admin\", \"collection\" : \"system.new_users\" }, actions: [ \"find\", \"dropCollection\" ] }], roles: [ \"root\" ]})\npc:PRIMARY> db.system.new_users.drop()\nuncaught exception: Error: drop failed: {\n \"ok\" : 0,\n \"errmsg\" : \"can't drop system collection admin.system.new_users\",\n \"code\" : 20,\n \"codeName\" : \"IllegalOperation\",\n \"operationTime\" : Timestamp(1679982508, 5)\n}\n",
"text": "Thanks for your reply, but unfortunately it doesn’t help.Without permission, the result is:With the appropriate permission, it goes further but still won’t do the job:That would have been too easy…",
"username": "Nicolas_Bouquet"
},
{
"code": "",
"text": "Did you try rename method?\nhttps://jira.mongodb.org/browse/SERVER-5972",
"username": "Ramachandra_Tummala"
},
{
"code": "pc:PRIMARY> db.updateRole(\"cleanup\",{privileges: [ { resource : { anyResource: true }, actions: [ \"renameCollectionSameDB\", \"find\", \"insert\", \"dropCollection\", \"anyAction\" ] }], roles: [ \"root\" ]})\npc:PRIMARY> db.system.new_users.renameCollection(\"oldusers\")\n{\n \"ok\" : 0,\n \"errmsg\" : \"error with source namespace: Invalid system namespace: admin.system.new_users\",\n \"code\" : 20,\n \"codeName\" : \"IllegalOperation\",\n \"operationTime\" : Timestamp(1679993389, 10)\n}\n",
"text": "Unfortunately yes, the collection can’t be renamed either:",
"username": "Nicolas_Bouquet"
},
{
"code": "",
"text": "Hi @Nicolas_Bouquet,Can you try to use this role to drop the collection db.system.new_users:P.S. I have never try to used it \nBR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "After some hesitation, I finally tried… But it doesn’t work either, still the same ‘IllegalOperation’ result. I feel like it cannot be solved by permissions at all.",
"username": "Nicolas_Bouquet"
},
{
"code": "",
"text": "I tested this in test db\nCreated a collection system.xyz when I drop it says illegal operation\nWas able to rename it and drop\nThat jira ticket user also says he can drop the collection after rename\nCan you show output of\ndb\nShow collections when you tried rename",
"username": "Ramachandra_Tummala"
},
{
"code": "pc:PRIMARY> show collections\nsystem.backup_users\nsystem.keys\nsystem.new_users\nsystem.roles\nsystem.users\nsystem.version\n",
"text": "@Ramachandra_Tummala,Here is the list of collection in that admin database:Not sure why the rename method doesn’t work for me, it could because it was ‘fixed’ in the mongodb version I am using (5.0.14), or because or doesn’t work with ‘admin’ database.",
"username": "Nicolas_Bouquet"
}
] | Dropping admin.system.new_users collection | 2023-03-27T12:17:01.846Z | Dropping admin.system.new_users collection | 1,073 |
null | [
"storage"
] | [
{
"code": "\"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"config.$cmd\",\"command\":{\"update\":\"system.sessions\",\"ordered\":false,\"allowImplicitCollectionCreation\"mongod.conf :\n\n#operationProfiling:\n\nslowOpThresholdMs: 50\nmode: slowOp\nmode: all\nmode: off\nreplication:\n replSetName: \"xxxxx\"\n oplogSizeMB: 5000000\n enableMajorityReadConcern: false\nprocessManagement:\n fork: true\n pidFilePath: /app/mongod/xxxxx/mongod.pid\nnet:\n port: xxxx\n bindIp: \"xxxx\"\n ipv6: false\n unixDomainSocket:\n enabled: false \n\nhttp:\nenabled: false\nJSONPEnabled: false\nRESTInterfaceEnabled: false\nstorage:\n\nindexBuildRetry: false\n directoryPerDB: true\n journal:\n enabled: true \n dbPath: \"/app/mongod/prod1/data/xxxx\"\n engine: \"wiredTiger\"\n wiredTiger:\n indexConfig:\n prefixCompression: true\n engineConfig:\n journalCompressor: snappy\n directoryForIndexes: true\n cacheSizeGB: 350\n\nstatisticsLogDelaySecs: 30\n collectionConfig: \n blockCompressor: snappy \ninMemory:\nengineConfig:\nstatisticsLogDelaySecs: 30\nsecurity:\n keyFile: /app/mongod/prod1/xxx.keyfile\n clusterAuthMode: keyFile\n authorization: enabled\n transitionToAuth: false\n javascriptEnabled: true\n\nredactClientLogData: false\nsystemLog:\n verbosity: 0\n quiet: false\n traceAllExceptions: false\n path: \"/app/xxxxx/mongodb.log\"\n logAppend: true\n logRotate: reopen\n destination: file\n\ntimeStampFormat: ctime\nMONGODB_PROD1:PRIMARY> rs.conf()\n{\n \"_id\" : \"xxxx\",\n \"version\" : 65844,\n \"term\" : 394,\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"members\" : [\n {\n \"_id\" : 4,\n \"host\" : \"xxxx\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 5,\n \"host\" : \"xxxx\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 10,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 6,\n \"host\" : \"xxxx\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 7,\n \"host\" : \"xxxx\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 8,\n \"host\" : \"xxxx\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 4,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"xxxx\")\n }\n}\n",
"text": "The message in question is below\n\"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"config.$cmd\",\"command\":{\"update\":\"system.sessions\",\"ordered\":false,\"allowImplicitCollectionCreation\"In mongo 4.2, I see about 200 of these log msgs and all have durationMilis under 200. However, since upgrading to mongo 4.4, there are 600+ log msgs of them and 99% of them report durationMilis over 15000ms. Could these log msgs and how long it takes have an impact on the overall mongo performance?Mongod.conf and rs.conf() below",
"username": "L10tial"
},
{
"code": "",
"text": "This looks like a server internal query. Do you notice anything bad from application level?(e.g. CPU high use or high latency on queries)",
"username": "Kobe_W"
},
{
"code": "",
"text": "I definitely feel the entire app being slow. Nothing else has been changed in the app other than mongo version so I’m thinking this is where the slowness is coming from. The CPU usage is same as before, hence not sure if that has any impact.",
"username": "L10tial"
}
] | Slower performance after upgrading to 4.4 from 4.2 | 2023-03-28T16:36:17.336Z | Slower performance after upgrading to 4.4 from 4.2 | 1,153 |
null | [
"java",
"transactions",
"spring-data-odm"
] | [
{
"code": "@Configuration\npublic class MongoConfig{\n\t@Bean\n\tpublic MongoTransactionManager mongoTransactionManager(MongoDatabaseFactory factory){\n\t\treturn new MongoTransactionManager(factory);\n\t}\n}\n\n@JaversSpringDataAuditable\n@Repository\npublic interface TestRepo extends MongoRepository<TestClass, String>{\n}\n\n@Service\npublic class TestService{\n\tpublic MinioRepo minioRepo;\n\tpublic TestRepo testRepo; \n\t\n\t@Transactional\n\tpublic void addTestClass(TestClass testClass, MultipartFile file){\n\t\tminioRepo.putObject(bucket, file.getInputStream); //method to insert/upload file to MinIO\n\t\t//after insert into MinIO, there is a listener that can delete the object in MinIO if there is any error/exception in this method. \n\t\t\n\t\ttestRepo.save(testClass);\n\t}\n}\n\n\n@Controller\npublic class TestController{\n\tpublic TestService testService;\n\t\n\t@PostMapping \n\tpublic void test(@RequestBody String data, MultipartFile file){\n\t\tTestClass testClass = ....convert data json into TestClass object...;\n\t\ttestService.addTestClass(testClass, file);\n\t}\n}\n",
"text": "I have a Java Spring Boot application and have an endpoint which will POST file and json data, the Java code as below. I would like to do Load Testing on the Java code, so I wrote another application, which will repeatedly call the Java endpoint for 20 times or more.When call 20 times, the MongoDB will throw error of\nCommand failed with error 112 (WriteConflict): ‘WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.’I did some testing, If I remove the mongoTransactionManager() in MongoConfig, it works fine. or if I remove the @JaversSpringDataAuditable in the TestRepo.I am Spring boot 2.6.4 and the org.mongodb-driver-core and mongodb-driver-sync are 4.4.2Error message in Spring bootcom.mongodb.MongoCommandException: Command failed with error 112\n(WriteConflict): ‘WriteConflict error: this operation conflicted with\nanother operation. Please retry your operation or multi-document\ntransaction.’ on server localhost:27017. The full response is\n{“errorLabels”: [“TransientTransactionError”], “operationTime”:\n{“$timestamp”: {“t”: 1679905729, “i”: 6}}, “ok”: 0.0, “errmsg”:\n“WriteConflict error: this operation conflicted with another\noperation. Please retry your operation or multi-document\ntransaction.”, “code”: 112, “codeName”: “WriteConflict”,\n“$clusterTime”: {“clusterTime”: {“$timestamp”: {“t”: 1679905730, “i”:\n4}}, “signature”: {“hash”: {“$binary”: {“base64”:\n“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”, “subType”: “00”}}, “keyId”: 0}}} \tat\ncom.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:198)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:418)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:342)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:116)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:647)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:71)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:244)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:227)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:127)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:117)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.connection.DefaultServer$OperationCountTrackingConnection.command(DefaultServer.java:348)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.CommandOperationHelper.lambda$executeRetryableWrite$15(CommandOperationHelper.java:411)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$2(OperationHelper.java:564)\n~[mongodb-driver-core-4.4.2.jar:na] 
\tat\ncom.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:589)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$3(OperationHelper.java:563)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:589)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.OperationHelper.withSourceAndConnection(OperationHelper.java:562)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.CommandOperationHelper.lambda$executeRetryableWrite$16(CommandOperationHelper.java:395)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.async.function.RetryingSyncSupplier.get(RetryingSyncSupplier.java:65)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.CommandOperationHelper.executeRetryableWrite(CommandOperationHelper.java:423)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.TransactionOperation.execute(TransactionOperation.java:70)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.CommitTransactionOperation.execute(CommitTransactionOperation.java:133)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.internal.operation.CommitTransactionOperation.execute(CommitTransactionOperation.java:54)\n~[mongodb-driver-core-4.4.2.jar:na] \tat\ncom.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:212)\n~[mongodb-driver-sync-4.4.2.jar:na] \tat\ncom.mongodb.client.internal.ClientSessionImpl.commitTransaction(ClientSessionImpl.java:147)\n~[mongodb-driver-sync-4.4.2.jar:na] \tat\norg.springframework.data.mongodb.MongoTransactionManager$MongoTransactionObject.commitTransaction(MongoTransactionManager.java:469)\n~[spring-data-mongodb-3.3.2.jar:3.3.2] \tat\norg.springframework.data.mongodb.MongoTransactionManager.doCommit(MongoTransactionManager.java:236)\n~[spring-data-mongodb-3.3.2.jar:3.3.2] \tat\norg.springframework.data.mongodb.MongoTransactionManager.doCommit(MongoTransactionManager.java:200)\n~[spring-data-mongodb-3.3.2.jar:3.3.2] \tat\norg.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743)\n~[spring-tx-5.3.16.jar:5.3.16] \tat\norg.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)\n~[spring-tx-5.3.16.jar:5.3.16] \tat\norg.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:654)\n~[spring-tx-5.3.16.jar:5.3.16] \tat\norg.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:407)\n~[spring-tx-5.3.16.jar:5.3.16] \tat\norg.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:119)\n~[spring-tx-5.3.16.jar:5.3.16] \tat\norg.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n~[spring-aop-5.3.16.jar:5.3.16] \tat\norg.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)\n~[spring-aop-5.3.16.jar:5.3.16] \tat\norg.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698)\n~[spring-aop-5.3.16.jar:5.3.16] 
\tat\ncom.example.service.TestService$$EnhancerBySpringCGLIB$$32b03c26.testMultipleCallsToMongoDB()\n~[classes/:na] \tat\ncom.example.controller.TestController.test(TestController.java:21)\n~[classes/:na] \tat\njava.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native\nMethod) ~[na:na] \tat\njava.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n~[na:na] \tat\njava.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n~[na:na] \tat\njava.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]\nat\norg.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)\n~[spring-web-5.3.16.jar:5.3.16] \tat\norg.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)\n~[spring-web-5.3.16.jar:5.3.16] \tat\norg.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\norg.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\norg.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\norg.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\norg.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\norg.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\norg.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\norg.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\njavax.servlet.http.HttpServlet.service(HttpServlet.java:681)\n~[tomcat-embed-core-9.0.58.jar:4.0.FR] \tat\norg.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)\n~[spring-webmvc-5.3.16.jar:5.3.16] \tat\njavax.servlet.http.HttpServlet.service(HttpServlet.java:764)\n~[tomcat-embed-core-9.0.58.jar:4.0.FR] \tat\norg.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)\n~[tomcat-embed-websocket-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)\n~[spring-web-5.3.16.jar:5.3.16] \tat\norg.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)\n~[spring-web-5.3.16.jar:5.3.16] 
\tat\norg.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)\n~[spring-web-5.3.16.jar:5.3.16] \tat\norg.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)\n~[spring-web-5.3.16.jar:5.3.16] \tat\norg.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)\n~[spring-web-5.3.16.jar:5.3.16] \tat\norg.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117)\n~[spring-web-5.3.16.jar:5.3.16] \tat\norg.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:540)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:359)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:889)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1735)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\norg.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)\n~[tomcat-embed-core-9.0.58.jar:9.0.58] \tat\njava.base/java.lang.Thread.run(Thread.java:834) ~[na:na]",
"username": "Shi_Qi_Low"
},
{
"code": "",
"text": "What does your server do for each of your requests? write conflict can mean like: a transaction A read Obj1 and then tries to write to Obj1, but between this read and write, Obj1 has been modified by another write operation. (based on my understanding of mongo)related: How WriteConflict errors are managed in Transactions( MongoDB 4.2)",
"username": "Kobe_W"
}
] | WriteConflict issue at MongoDB when concurrent hit on the endpoint | 2023-03-27T09:14:50.267Z | WriteConflict issue at MongoDB when concurrent hit on the endpoint | 1,854 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi,\nToday we got query from one of our client related to replica set implementation. we are looking for the following Architecture:==> Primary Server on VM.\n==> 3- Secondary Servers (Replica) on ContainersIs it possible to have such Hybrid/heterogeneous environment?Please guide.",
"username": "ahsan_gill"
},
{
"code": "",
"text": "The short answer is yes, granted you have all of your networking requirements/definitions/DNS sorted out.I see no benefit to this architecture, mixing cattle and pets together. I would nudge them toward 100% Docker Compose or Kubernetes operator, based on scale requirements.",
"username": "AllenI"
},
{
"code": "",
"text": "Thanks Allen,\ndo we have any official compatibility matrix from Mongo db?",
"username": "ahsan_gill"
},
{
"code": "",
"text": "Not that I am aware of, but as long as the base version, OS distribution and the architecture is the same, it should work.",
"username": "AllenI"
},
{
"code": "",
"text": "Nobody ever cares where/how you run those servers (thanks to abstraction in VM/containers/hardware…), as long as they can be set up running and connect to each other.",
"username": "Kobe_W"
}
] | Is it Possible to configure Mongo db replica set in Hybrid environment. Primary server on VM and Secondary server on Containers | 2023-03-27T11:03:06.764Z | Is it Possible to configure Mongo db replica set in Hybrid environment. Primary server on VM and Secondary server on Containers | 635 |
null | [
"sharding",
"transactions"
] | [
{
"code": "",
"text": "From the docs:Until a transaction commits, the data changes made in the transaction are not visible outside the transaction.Read uncommitted is the default isolation level and applies to mongod standalone instances as well as to replica sets and sharded clusters.Do these two descriptions conflict?",
"username": "j-xzy_N_A"
},
{
"code": "localmajoritylocal",
"text": "Hi @j-xzy_N_A welcome to the community!I agree this is a bit confusing. I’ll try to explain.In regular RDBMS terms, “read uncommitted” basically means the query can return inconsistent data from a not-yet-committed transaction. This is typically bad for most use cases.However in this instance, “uncommitted” here is in terms of a distributed system. In MongoDB in particular, it basically means that the query can return data that can be rolled back in the context of a replica set (read concern local). I guess the corresponding term for “read committed” in MongoDB would be read concern majority, where it will not return data that can be rolled back by the replica set.Here the term “transaction” refers to multi-document transaction, similar to the concept of transaction in typical RDBMS, although it’s scoped to multiple nodes in a distributed system vs. a single node as per typical RDBMS deployment.For MongoDB deployment, read concern local means:So basically, the term “uncommitted” here is applied slightly differently, although it refers to the same concept.A related explanation is in https://www.mongodb.com/docs/manual/core/transactions-production-consideration/#outside-reads-during-commit where it shows the behaviour of outside reads and transactions.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Kevin is correct.When talking about isolation (the “I” in ACID), it means to single node operation. here the commit/uncommitted means transaction commit, since everything in SQL is a transaction.In mongoDB, it means “majority committed” meaning if you use something like local/available read concern, the read data might be subsequently rolled back.But any changes made inside a transaction are not visible to readers outside of the transactionIn mongo, not everything is a transaction implicitly which is different from SQL world. So in many cases, the doc has to be clear about the context (e.g. in a session or not, casual consistent or not, transaction or not). Sometimes the doc can lack the detailed context and can cause confusion.",
"username": "Kobe_W"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does mongodb read uncommitted by default? | 2023-03-28T13:00:50.238Z | Does mongodb read uncommitted by default? | 1,305 |
null | [
"queries",
"node-js",
"compass"
] | [
{
"code": "",
"text": "Hi all,I am getting the Error An error occurred while loading instance info: connection 3 to 13.x.x.x:27017 closed.I am using the MongoDB drive for node js mongodb.\nhow to resolve this Error !.I am getting the same Error with MongoDB compass also. after many re-try it worked. but not the right solve.",
"username": "Sanjay_Makwana"
},
{
"code": "",
"text": "Hi @Sanjay_Makwana,Welcome back to the MongoDB Community Can you share the following information to better understand the problem:I am getting the same Error with MongoDB compass also. after many re-try, it worked.Could you specify the changes you made in order to make it work?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "@Kushagra_Kesav I am using Atlas M2 Instance. this Error went after accessing all networks. when I set it up with my public IP it was giving the connection Error. but I can query data. So any wrong with the network access setup?",
"username": "Sanjay_Makwana"
},
{
"code": "",
"text": "Hi @Sanjay_Makwana,Could you share the actual error message from your app and MongoDB Compass? Additionally, it would be helpful if you could share the connection string you are using.Also, have you contacted MongoDB Atlas Support for assistance with this issue?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "@Kushagra_Kesav here is the connection string. mongodb+srv://usernamev:[email protected]/?retryWrites=true&w=majority.the Error “connection 1 to 13.232.97.81:27017 closed” this Error show in mongo compass.",
"username": "Sanjay_Makwana"
},
{
"code": "0.0.0.0/0includes your current IP address",
"text": "Hi @Sanjay_Makwana,Thanks for sharing the connection string.Can you please confirm the following:Also, can you verify that your current IP address is whitelisted (includes your current IP address) in the MongoDB Atlas Dashboard?\nimage3070×1012 178 KB\nBest,\nKushagra",
"username": "Kushagra_Kesav"
}
] | An error occurred while loading instance info: connection 3 to 13.x.x.x:27017 closed | 2023-03-15T07:32:28.561Z | An error occurred while loading instance info: connection 3 to 13.x.x.x:27017 closed | 1,461 |
[
"replication",
"containers",
"devops"
] | [
{
"code": "",
"text": "As title suggests, I am using docker-compose running 3 mongo containers. I have attached the rs.status() log and also a screenshot of Studio3t to scan my ports for the members.\nrs.status()642×990 19.1 KB\n\n\nStudio3t722×823 49.4 KB\nI tried many variations of the connections string and ports based on the mongo documentation. I am new at this and still learning so any info helps!",
"username": "TheAdrianReza_N_A"
},
{
"code": "bindIp",
"text": "Hi @TheAdrianReza_N_A and welcome in the MongoDB Community !Have you set your IP addresses correctly in your bindIp network configuration? Did you include in there the IP address of your client in your 3 config files?If that’s not it, could you please share your config file and your docker-compose.yml maybe so we have a bit more information to work with?Also ─ just to confirm ─ “mongo-rs0-1”, “mongo-rs0-2” and “mongo-rs0-3” are 3 different physical servers, correct?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I do not believe I have done much in the network configuration. Feel free to suggest ways I can improve my docker-compose.yml file! It’s a boilerplate mongo-rs docker container I found.And yes they are on 3 different servers.mongo.conf -replication:\noplogSizeMB: 1024\nreplSetName: rs0image597×838 16.8 KBReally appreciate the quick response! Let me know if there is anything else I can get you to help my situation!",
"username": "TheAdrianReza_N_A"
},
{
"code": "--smallfiles--oplogSize 128docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:4.4.3 --replSet=test && sleep 4 && docker exec mongo mongo --eval \"rs.initiate();\"\n~/.bash_aliasesalias mdb='docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:4.4.3 --replSet=test && sleep 4 && docker exec mongo mongo --eval \"rs.initiate();\"'\n--rmdocker stop mongo\n",
"text": "From my understanding, docker-compose is a tool that starts multiple containers that will work together on the same machine.So the way I understand it, your 3 “mongo-rs0-X” containers will be started on the same machine which makes me want to ask a simple question:Replica Sets are here for one main reason: High Availability in prod. If your 3 nodes depends on some piece of hardware they have in common (same power source, same disk bay, etc), it means you are not really HA because that piece of equipment can fail and bring all your nodes down, all at once. Which is a big NO NO in prod.That’s the reason why it’s a good practice to deploy your nodes in different data centers.I also see you are using --smallfiles which is a deprecated option which was only for MMapV1 which is gone now and --oplogSize 128 is definitely a terrible idea.So ─ based on this ─ I think you are trying to deploy a development or test environment here but then I really don’t see the point of deploying 3 nodes on the same cluster. A single node replica set would most probably be good enough, no?Here is the docker command I use to start an ephemeral single replica set node on my machine when I need to hack something:I actually made an alias out of it which is in my ~/.bash_aliases file:And ─ because of the --rm option ─ I can just destroy the container and everything it contains (volumes included) with a simple:Is it what you were looking for or you really need to make these 3 nodes on the same machine work?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "127.0.0.1\tmongoset1 mongoset2 mongoset3\n",
"text": "Make sure that you have added replica set nodes in the host machine in etc/hosts file.\nJust like the below example -Note - 127.0.0.1 is your host machine and mongoset1, mongoset2 and mongoset3 are the nodes (members) of the replicaset.",
"username": "Mohammad_Faisal_Khan"
},
{
"code": "127.0.0.1\tmongoset1 mongoset2 mongoset3mongod",
"text": "127.0.0.1\tmongoset1 mongoset2 mongoset3It’s a nonsense to run 3 members of the same RS on the same machine. Running multiple data bearing mongod on the same machine shouldn’t exist.The only exception to that rule would be if you are learning how RS works and you want to experiment while learning.",
"username": "MaBeuLux88"
},
{
"code": "mongod",
"text": "\" It’s a nonsense to run 3 members of the same RS on the same machine. Running multiple data bearing mongod on the same machine shouldn’t exist.\"How are we supposed to test transactions locally? We are already using docker compose for our local setup so not having this functionatliy would make it impossible to test Mongo Transactions. How are you testing your transactions? Are you using them? PS for the record I think this is the only reason you would want to setup replicas locally, or to mirror your hosted env for testing or for educational purposes. This feature does belong in mongod however.",
"username": "Maxwell_Krause"
},
{
"code": "",
"text": "Hey Maxine -The but why meme I almost find offensive because I am here ONLY because my team needs to test mongo transactions and it was YOUR TEAM that implemented in a way where they can only be tested with this replica configuration. So to come here, having spent my morning trying to get this to work and see that you meme the OP for doing this when you created the problem is ",
"username": "Maxwell_Krause"
},
{
"code": "",
"text": "I’m sorry that you found that a bit offensive. I’m being a bit sarcastic to REALLY explain why it doesn’t make sense and get the point across. If you read my entire post, the answer and justification is in it.So ─ based on this ─ I think you are trying to deploy a development or test environment here but then I really don’t see the point of deploying 3 nodes on the same cluster. A single node replica set would most probably be good enough, no?Here is the docker command I use to start an ephemeral single replica set node on my machine when I need to hack something:I explained in my answer why it’s a bad idea and I also explained the solution: Single Node Replica Set.Transactions, Change Streams and a few other features in MongoDB rely on the special oplog collection that only exists in Replica Set setups. BUT you can set up a Single Node Replica Set that only contains a single Primary node and all the features will work just as good as in a 7 nodes Replica Set.So again, I reiterate:It’s a nonsense to run 3 members of the same RS on the same machine.Use a Single Node RS instead. Same features but it’s using 3X less ressources.",
"username": "MaBeuLux88"
},
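As a small illustration of the point above (not code from the thread), features that need the oplog work fine against a single-node replica set such as the one started by the docker command quoted earlier; the database and collection names below are made up.

```javascript
// Illustration: change streams require a replica set, but a 1-node one is enough.
const { MongoClient } = require("mongodb");

async function watchOrders() {
  // directConnection works for a local single-node replica set (placeholder URI)
  const client = new MongoClient("mongodb://localhost:27017/?directConnection=true");
  await client.connect();
  const orders = client.db("shop").collection("orders");

  const stream = orders.watch();
  stream.on("change", (event) => console.log("change:", event.operationType));

  await orders.insertOne({ item: "book", qty: 1 }); // produces a change event
}

watchOrders().catch(console.error);
```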
{
"code": "localhost.somedomain.comlocalhost.somedomain.com/etc/hosts",
"text": "Since the original question basically is “How can I access a replica set inside a docker network from the host”, this solution might be useful:This will result in name resolve allowing access to mongo via localhost.somedomain.com both from inside docker and from the host (given the right exposed port)Note that the usual solution to this problem is “modify your local /etc/hosts file”, which works too, but requires every dev/system in your organization to modify system files.",
"username": "travelling_cat"
},
{
"code": "etc/hosts",
"text": "So in order to have a docker instance of a replica set MongoDB I need to modify the local etc/hosts file?\nHow has that behavior ever rolled to a production version?What about environments I do not have access to the system configuration (eg. CI/CD pipelines)?",
"username": "Jakub_Ganczorz"
},
{
"code": "",
"text": "@MaBeuLux88 are you really a mongo employee? this is for testing purposes.@TheAdrianReza_N_A I believe it has something to do with replicaset config, you can try using bitnami images and try with the MONGODB_ADVERTISED_HOSTNAME as localhost, if my own tests are successful will post back.Sorry for the half response, but had to chime in to this nonsense with Maxime Beugnet!",
"username": "Marco_Maldonado"
},
{
"code": "alias mdb='docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:6.0.5 --replSet=RS && sleep 5 && docker exec mongo mongosh --quiet --eval \"rs.initiate();\"'\nalias m='docker exec -it mongo mongosh --quiet'\n",
"text": "Hi @Marco_Maldonado and welcome in the MongoDB Community !Yes I am. It’s written right here. I think I’ll write a blog post about this because apparently nobody wants to hear that single node RS for a localhost dev environment is just fine. This is what I use daily to run a localhost dev environment:It works perfectly fine. I can use ACID transactions, change streams, …If I want to keep the data, I can use a volume but I don’t need to.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "@Marco_Maldonado, Maxime is actually correct. You don’t need a multinode cluster for testing or doing things in MongoDB.I don’t really agree with the “nonsense” comment, but I do understand the sentiment because unless your test environment is intended to directly test and evaluate performance in an actual production environment to actually have full awareness of what impact things will have on an environment level, there really isn’t as much of a need for a multinode replica set.@MaBeuLux88 otherwise I do agree with.",
"username": "Brock"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Docker-compose ReplicaSets - getaddrinfo ENOTFOUND | 2021-01-13T20:04:33.105Z | Docker-compose ReplicaSets - getaddrinfo ENOTFOUND | 21,350 |
|
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "I’ve set up the private endpoint azure->atlas and it works.Then I go to get the private endpoint aware connection string. These docs say to select Connect on the DB, then select Private Endpoint (as opposed to CLI, mongosh, sdk, etc) and it’ll output the formatted connection string.But Private Endpoint is not an available Connect option here.The instance is an M50, which should have this feature. Any ideas on how I can find the private endpoint aware connection string?",
"username": "Team_Overhaul"
},
{
"code": "",
"text": "Hi there I’ve set up the private endpoint azure->atlas and it works.My interpretation for the above is that the setting up the azure endpoint worked as per normal and is showing as “Active” in the Atlas UI rather than the private endpoint connection working. Please correct me if I am wrong here.Then I go to get the private endpoint aware connection string. These docs say to select Connect on the DB, then select Private Endpoint (as opposed to CLI, mongosh, sdk, etc) and it’ll output the formatted connection string.But Private Endpoint is not an available Connect option here.You may wish to check the the Atlas in-app chat support team if you have set up the endpoint correctly and do not see the private endpoint connection option in the connect modal. As an example, it could be something like a different cloud private cluster (AWS, GCP) against an Azure private endpoint. However, it would be better to clarify with the in-app chat support team as they have more insight into the Atlas project in question.Regards,\nJason",
"username": "Jason_Tran"
}
] | Private Endpoint connect option is not available? | 2023-03-28T18:01:21.471Z | Private Endpoint connect option is not available? | 806 |
[
"dot-net"
] | [
{
"code": " var dict = new Dictionary<string, object>()\n {\n { \"int\", 1 },\n { \"string\", \"1\"},\n { \"decimal\", 1.1m },\n { \"decimalList\", new List<object> { 1.1m, 2.2m } },\n { \"intDict\", new Dictionary<string, object> { { \"one\", 1 }, { \"two\", 2 } } },\n { \"stringDict\", new Dictionary<string, object> { { \"one\", \"1\" }, { \"two\", \"2\" } } },\n { \"decimalDict\", new Dictionary<string, object> { { \"one\", 1.1m }, { \"two\", 3.6m } } }\n };\n\n var something = new Something\n {\n Parameters = new BsonDocument(dict)\n };\n\n // save to MongoDb\n{\n \"_id\": {\n \"$oid\": \"64215cd5f3a0a301be2d13f2\"\n },\n \"parameters\": {\n \"int\": 1,\n \"string\": \"1\",\n \"decimal\": {\n \"$numberDecimal\": \"1.1\"\n },\n \"decimalList\": [\n {\n \"$numberDecimal\": \"1.1\"\n },\n {\n \"$numberDecimal\": \"2.2\"\n }\n ],\n \"intDict\": {\n \"one\": 1,\n \"two\": 2\n },\n \"stringDict\": {\n \"one\": \"1\",\n \"two\": \"2\"\n },\n \"decimalDict\": {\n \"one\": {\n \"$numberDecimal\": \"1.1\"\n },\n \"two\": {\n \"$numberDecimal\": \"3.6\"\n }\n }\n }\n}\n{ \"int\" : 1, \"string\" : \"1\", \"decimal\" : NumberDecimal(\"1.1\"), \"decimalList\" : [NumberDecimal(\"1.1\"), NumberDecimal(\"2.2\")], \"intDict\" : { \"one\" : 1, \"two\" : 2 }, \"stringDict\" : { \"one\" : \"1\", \"two\" : \"2\" }, \"decimalDict\" : { \"one\" : NumberDecimal(\"1.1\"), \"two\" : NumberDecimal(\"3.6\") } }\n{\n \"int\": 1,\n \"string\": \"1\",\n \"decimal\": {},\n \"decimalList\": [\n {},\n {}\n ],\n \"intDict\": {\n \"one\": 1,\n \"two\": 2\n },\n \"stringDict\": {\n \"one\": \"1\",\n \"two\": \"2\"\n },\n \"decimalDict\": {\n \"one\": {},\n \"two\": {}\n }\n}\n",
"text": "Hello!I’m having issue with retrieving decimal values from MongoDb. Here is what I do:I’m saving dictionary<string, object> as BsonDocument.Here is a copy of the document which is stored in mongo:The problem occurs when I retrieve data from MongoDb:And after converting BsonDocument back to dictionary I get decimals as Decimal128 type:\n\n2023-03-27 12_28_16-539×619 32.9 KB\nAnd final response from Api looks like:Am I doing something wrong?\nIs it a bug in .Net MongoDB.Driver?",
"username": "ababokin"
},
{
"code": "",
"text": "FYI JSON doesn’t support decimal as a data types, BSON will, JSON won’t, same with binary and Date data types.You have to use a string to define decimal instead, and then it’ll work.",
"username": "Brock"
},
{
"code": "",
"text": "I understand MongoDB U will define BSON and JSON as the same thing, except JSON is more human readable, which is an extremely poor description between their differences.The largest difference is that JSON does not support many datatypes that BSON does, and pending version of MongoDB the datatypes JSON supports are even impacted, such as MongoDB 5.0 allowing the use of $ for example, while other versions prior to 5.0 will not support $ in JSON.",
"username": "Brock"
}
] | .Net MongoDB.Driver 2.19.0 - Issue with decimal deserialization from BsonDocument | 2023-03-27T09:52:40.838Z | .Net MongoDB.Driver 2.19.0 - Issue with decimal deserialization from BsonDocument | 879 |
|
null | [] | [
{
"code": "{\"t\":{\"$date\":\"2023-03-28T09:36:18.511-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.066-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"} {\"t\":{\"$date\":\"2023-03-28T09:36:20.066-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"} {\"t\":{\"$date\":\"2023-03-28T09:36:20.068-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.068-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.068-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.069-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"} {\"t\":{\"$date\":\"2023-03-28T09:36:20.070-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":27132,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-R7VEROT\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.071-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.071-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.5\",\"gitVersion\":\"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.071-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19045)\"}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.071-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.074-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"C:/data/db/\",\"storageEngine\":\"wiredTiger\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.075-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening 
WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7651M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.606-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":531}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.606-04:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.615-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]} {\"t\":{\"$date\":\"2023-03-28T09:36:20.616-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]} {\"t\":{\"$date\":\"2023-03-28T09:36:20.620-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.620-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.622-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"} {\"t\":{\"$date\":\"2023-03-28T09:36:20.623-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"} {\"t\":{\"$date\":\"2023-03-28T09:36:21.096-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/data/db/diagnostic.data\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:21.102-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration 
state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:21.102-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"} {\"t\":{\"$date\":\"2023-03-28T09:36:21.107-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:21.107-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}",
"text": "Hi,\nI am new in MongoDB.\nInstalled it as a server in windows 10 following the site instructions.\nEverything looks fine in Compass and I create new DB and Collections.\nwhen I run mongod command in cmd as administrator, i cannot get the expected prompt. The message lines freezes on a “Waiting for connection…” line and nothing happens.\nHow can I fix that issue? Thanks\n{\"t\":{\"$date\":\"2023-03-28T09:36:18.511-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.066-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"} {\"t\":{\"$date\":\"2023-03-28T09:36:20.066-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"} {\"t\":{\"$date\":\"2023-03-28T09:36:20.068-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.068-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.068-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.069-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"} {\"t\":{\"$date\":\"2023-03-28T09:36:20.070-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":27132,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-R7VEROT\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.071-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.071-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.5\",\"gitVersion\":\"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.071-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19045)\"}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.071-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}} 
{\"t\":{\"$date\":\"2023-03-28T09:36:20.074-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"C:/data/db/\",\"storageEngine\":\"wiredTiger\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.075-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7651M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.606-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":531}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.606-04:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.615-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]} {\"t\":{\"$date\":\"2023-03-28T09:36:20.616-04:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]} {\"t\":{\"$date\":\"2023-03-28T09:36:20.620-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.620-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:20.622-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"} {\"t\":{\"$date\":\"2023-03-28T09:36:20.623-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"} {\"t\":{\"$date\":\"2023-03-28T09:36:21.096-04:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/data/db/diagnostic.data\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:21.102-04:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:21.102-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"} {\"t\":{\"$date\":\"2023-03-28T09:36:21.107-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}} {\"t\":{\"$date\":\"2023-03-28T09:36:21.107-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}",
"username": "Alireza_Goodarzi"
},
{
"code": "",
"text": "This is expected behaviour in Windows.You have started mongod in the foreground\nYou have to leave this session as is\nOpen another Windows cmd prompt and connect to your mongodIf you have installed mongod as service there is no need to start it from cmd line",
"username": "Ramachandra_Tummala"
},
{
"code": ">show dbs",
"text": "So, The tutorial I am watching should be old.\nHow can I run >show dbs command line?\nThanks",
"username": "Alireza_Goodarzi"
},
{
"code": "",
"text": "Open a Windows command prompt and run mongo/mongosh depending on the version you installed\nOnce you are connected run show dbs",
"username": "Ramachandra_Tummala"
}
] | I cannot run mongod following the instructions and tutorials | 2023-03-28T13:51:12.811Z | I cannot run mongod following the instructions and tutorials | 586 |
null | [
"replication"
] | [
{
"code": "mongo+srv://mongo.foo.cluster.local_mongodb._tcp.mongo.foo.cluster.localmongo-0.mongo.foo.cluster.local\nmongo-1.mongo.foo.cluster.local\nmongo-2.mongo.foo.cluster.local\nmongodb://mongo-0.mongo.foo.cluster.local,mongo-1.mongo.foo.cluster.local,mongo-2.mongo.foo.cluster.local",
"text": "The documentation and original spec imply to me that the result of the SRV lookup ends up being equivalent having the member hosts supplied up front.For example, if you specify connection string mongo+srv://mongo.foo.cluster.local, and the result from an SRV lookup against _mongodb._tcp.mongo.foo.cluster.local:… then client libraries would end up treating this equivalent to if you had originally supplied mongodb://mongo-0.mongo.foo.cluster.local,mongo-1.mongo.foo.cluster.local,mongo-2.mongo.foo.cluster.local when establishing the initial connection.However, I’ve noticed that client libraries also poll SRV lookups, and the consequences of this are somewhat unclear to me. When querying SRV records in a kubernetes cluster, you’re likely to find that absent pods are no longer returned… so for example, if you were to rollout restart a statefulset, you’ll find the returned record changes in relatively rapid succession.My concern is, I don’t understand the implications of this polling behaviour and the way in which the record will change over the space of a minute or two. If client libraries see members absent from an SRV lookup, do they then treat them as if they’ve been removed from the replica set, and drop any connections they have to them?My main worry is circumstances like:… is the client now in a state where it only has a connection to mongo-0? Or will it have re-established a connection via other channels once mongo-2 has come back up?I think the part that is particularly confusing me is, on the notion that the result from an SRV lookup acts as an initial seed list, I don’t really understand why this subsequent polling occurs.",
"username": "N_J"
},
{
"code": "",
"text": "Hello @N_JJust for a recap of understanding the basics behind all of this for yourself, and others who may be somewhat new to Kubernetes, Docker, etc.Kubernetes is an orchestration tool that helps manage the runtimes between multiple containers, it is akin to Docker Swarm (I love both Kubernetes and Docker Swarm, probably the most amazing inventions ever thought up to be honest IMHO.)Dockers intent and design is to run individual runtimes of the same application and its dependencies in each container.Docker Swarm is Docker’s homemade tool to perform functions like Kubernetes (for reference purposes of understanding Kubernetes.)Now, N_J, to your question this will depend upon Kubernetes networking and routing of the services. Kubernetes can funnel traffic to the pods via:And it can establish Load Balancing via the load balancer, the components work as follows:With this information now disclosed for conceptual purposes, the answer to your question comes into your configuration for how the runtimes are being orchestrated, connected to, and how the balancers within Kubernetes are being designed and told to work.If you’ve configured and built your nodes to be connected to primary node, whichever it is, then that is what it’ll connect to. If you have a 3 node cluster as you describe then the behavior that should be expected on configuration is that you would have Kubernetes route the sessions to whatever the primary node is, or reroute connections to the new primary after the old primary has failed.So when you do a SRV Lookup, you should always see 3 nodes/microservices for three containers running the MongoDB Microservice, and each node in the sharded cluster should know what place it holds with your Kubernetes knowing what to do and where to route the connection and session.Does this make sense?",
"username": "Brock"
},
{
"code": "",
"text": "Hi,Thanks for the reply.After further investigation I’ve found specifications/polling-srv-records-for-mongos-discovery.rst at master · mongodb/specifications · GitHub which explains the reasoning behind SRV polling.Although it is still unclear to me whether the fairly dynamic nature of SRV lookups in a kubernetes cluster will potentially cause issues here. If you say have a 3 replica statefulset, a one of the pods is restarting, should an SRV lookup take place whilst that is occurring, you may find the host relating to that restarting pod absent in the result. And the implications for clients are, according to the spec:Which seems fine, but what happens if that restarting pod comes back up, and then another pod restarts (which is typical during a statefulset restart) before the next SRV lookup takes place?",
"username": "N_J"
},
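To make the two mechanisms easier to see, here is a hedged Node.js sketch (the hostname and the tls option are assumptions, not taken from the posts): the SRV record is only used as a seed list and is re-polled periodically, while between polls the driver tracks members through its regular heartbeats, which is what re-adds a restarted pod.

```javascript
// Sketch: observe which members the driver currently knows about.
const { MongoClient } = require("mongodb");

// Hypothetical in-cluster SRV name; tls=false is only an assumption for an internal cluster.
const client = new MongoClient("mongodb+srv://mongo.foo.cluster.local/?tls=false");

// Fired whenever the driver's view of the topology changes (member added, removed, state change).
client.on("topologyDescriptionChanged", (event) => {
  const members = [...event.newDescription.servers.keys()];
  console.log("driver's current view of the replica set:", members);
});

client.connect().catch(console.error);
```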
{
"code": "",
"text": "Which seems fine, but what happens if that restarting pod comes back up, and then another pod restarts (which is typical during a statefulset restart) before the next SRV lookup takes place?In large, this isn’t something to really care all that much about during a node being restarted, and here’s why:This is why that in particular doesn’t really matter in the grand scheme of your environment, because Kubernetes when you configure it should automatically make the connections reroute, restart the node, and then if necessary even rebuild the node and reset data backups etc. and so on and be back on DNS record when possible. All of this should occur with no connections to your overall services being lost/end users even noticing something happened.In laments terms, you shouldn’t have need to care as much about any of that, when the clients or services/connections will already be connected/rerouted to the working node before anything else is restarted or SRV Lookup even happens again.You can also configure Kubernetes to do an SRV lookup whenever you choose it to, such as refreshing every second, five seconds, 10 seconds, and so on. But what I\"d encourage you do, is configure Kubernetes routing and balancers to automatically transfer users to the new primary node as well as predesignate a new primary node after 10 seconds time out from response.This makes what’s relatively a completely seemless/not even noticed event to the end users, as they’ll just typically blame whatever it is on the browser, or their computer being slow. If that makes sense?",
"username": "Brock"
}
] | Connecting via DNS seed list in a Kubernetes cluster | 2023-03-28T09:50:11.132Z | Connecting via DNS seed list in a Kubernetes cluster | 901 |
null | [] | [
{
"code": "",
"text": "When I try to sign up with official email, I get below error.\" ErrorUsername contains a reserved domain\"Does it mean, signups not allowed using official e-mail?\nI wanted it for official use. How I can overcome this issue?\nPls help.Thanks",
"username": "Vasu_BK"
},
{
"code": "",
"text": "The error message you received may mean that the email domain you are trying to use is reserved by the organization for official use and cannot be used for personal accounts. This is a common practice for many companies to maintain security and control over their official email domains.If you are trying to create an account for personal use, you may need to use a different email address that is not associated with your official domain. This could be a personal email account, such as Gmail or Yahoo, or a separate email address that you create specifically for personal use.",
"username": "lorance_lack"
}
] | Error Username contains a reserved domain | 2023-03-10T04:47:16.656Z | Error Username contains a reserved domain | 1,589 |
null | [
"queries"
] | [
{
"code": " [\n {\n actions: [\n {\n _id : ObjectId('641b34b7aa7f4269de24f050')\n participants: [\n {\n _id : ObjectId('641b33c3aa7f4269de24ef10')\n referenceId: \"641b3414aa7f4269de24efa5\",\n name: \"Person one\",\n email: \"[email protected]\"\n },\n {\n _id : ObjectId('61bb9105362e810ae9a6826f')\n referenceId: \"641b3414aa7f4269de24ef4g\",\n name: \"Person two\",\n email: \"[email protected]\"\n }\n ]\n }\n ]\n }\n ]\n [\n {\n actions: [\n {\n _id : ObjectId('641b34b7aa7f4269de24f050')\n participants: [\n {\n _id : ObjectId('641b33c3aa7f4269de24ef10')\n referenceId: \"641b3414aa7f4269de24efa5\",\n name: \"Person 1\",\n email: \"[email protected]\"\n },\n {\n _id : ObjectId('61bb9105362e810ae9a6826f')\n referenceId: \"641b3414aa7f4269de24ef4g\",\n name: \"Person two\",\n email: \"[email protected]\"\n }\n ]\n }\n ]\n }\n ]\n",
"text": "I have a MongoDB collection as followsI want to update participants’ name by filtering participants’ referenceId.As an example, I want to update the document where referenceId = 641b3414aa7f4269de24efa5 and then update the name to “Person 1”.Then the document will look like this after updateIs there any way that I can perform this",
"username": "Susampath_Madarasinghe"
},
{
"code": "",
"text": "What do you want to do if referenceId occurs more than once in a given participants array?What do you want to do if referenceId occurs in more than 1 element of actions.",
"username": "steevej"
}
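Since the question is still open above, here is one possible approach as a minimal mongosh sketch. It assumes every matching participant (in every action) should be renamed, which is one of the cases steevej asks about; the collection name is a placeholder.

```javascript
// Sketch: filtered positional update on a doubly nested array.
db.mycollection.updateMany(
  { "actions.participants.referenceId": "641b3414aa7f4269de24efa5" },
  { $set: { "actions.$[].participants.$[p].name": "Person 1" } },
  { arrayFilters: [ { "p.referenceId": "641b3414aa7f4269de24efa5" } ] }
)
```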
] | Update Nested Array Object | 2023-03-28T18:58:51.384Z | Update Nested Array Object | 258 |
null | [
"aggregation",
"crud",
"atlas-functions",
"serverless"
] | [
{
"code": "readColl.aggregate([ { $sample: { size: 2 } } ])\nexports = function(payload, response) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n \n const readColl = mongodb.db(\"temp_db\").collection(\"read_coll\")\n const writeColl = mongodb.db(\"temp_db\").collection(\"write_coll\")\n \n const rand = readColl.aggregate([ { $sample: { size: 2 } } ])\n \n writeColl.insertMany(rand)\n \n return rand\n}\n[\n {\n \"1\": \"A\",\n },\n {\n \"2\": \"B\",\n },\n {\n \"3\": \"C\",\n },\n {\n \"4\": \"D\",\n },\n {\n \"5\": \"E\",\n },\n {\n \"6\": \"F\",\n },\n {\n \"7\": \"G\",\n },\n]\n",
"text": "Hi all - Within a serverless function, I’m loading data from a collection (readColl), then selecting two random values usingreadColl.aggregate returns an array of objects. I would like to then writeColl.insertMany() and populate a new collection.Within the MongoDb function shell, I cannot extract the objects returned from the readColl.aggregate() array. I’m getting a error message:uncaught promise rejection: mongodb insert: argument must be an arrayHow do I take the array of objects (documents) returned by readColl.aggregate() and pass them to writeColl.insertMany() ? Thank you so much for your help.MongoDb function:read_coll:",
"username": "enjoywithouthey"
},
{
"code": "exports = function(payload, response) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n \n const readColl = mongodb.db(\"temp_db\").collection(\"temp_read\")\n const writeColl = mongodb.db(\"temp_db\").collection(\"temp_write\")\n \n return readColl.aggregate([\n {\n $sample:\n {\n size: 2,\n },\n },\n {\n $merge:\n {\n into: 'temp_write',\n on: \"_id\",\n whenMatched: \"replace\",\n whenNotMatched: \"insert\",\n },\n },\n])\n}\n",
"text": "Found a solution. Instead of deleting my question, best to post the solution correct?I build an aggregation pipeline and then inserted the pipeline into function. The two-stage pipeline uses $sample followed by $merge.",
"username": "enjoywithouthey"
},
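For completeness, the originally attempted insertMany approach can also work; the error comes from passing a cursor/promise instead of a plain array. A hedged sketch using the same placeholder names as the question (not verified against the thread's cluster):

```javascript
exports = async function () {
  const mongodb = context.services.get("mongodb-atlas");
  const readColl = mongodb.db("temp_db").collection("temp_read");
  const writeColl = mongodb.db("temp_db").collection("temp_write");

  // aggregate() must be resolved to a plain array before insertMany()
  const rand = await readColl.aggregate([{ $sample: { size: 2 } }]).toArray();
  await writeColl.insertMany(rand); // note: re-running with the same _id values would conflict
  return rand;
};
```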
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Create new collection from an aggregation array within a serverless function | 2023-03-28T20:31:44.957Z | Create new collection from an aggregation array within a serverless function | 962 |
null | [
"database-tools"
] | [
{
"code": "Failed: an inserted document is too large 0 document(s) imported successfully. 0 document(s) failed to import.",
"text": "Hi,\nI am trying to import 1.12TB of a JSON file to the mongoDB installed on my local machine using the setup file- mongodb-windows-x86_64-6.0.5-signed.When I use the below command:mongoimport --db MY_DB --c MY_COLLECTION_NAMEe --batchSize 1 --file C:\\myfile.jsonit gives me error as below:Failed: an inserted document is too large\n 0 document(s) imported successfully. 0 document(s) failed to import.Please note that I am not using --jsonArray as the json file data is not in array.Can anyone guide/help me resolving this problem as this makes pointless to use MongoDB as it does solves the purpose of using NoSQL database for unstructured large data.Thanks in advance\nLokesh",
"username": "lokesh_Chandra"
},
{
"code": "",
"text": "If you have a single document in a 1.12TB JSON then you are looking for trouble.json file data is not in array.The JSON file is not an array. Could it be that it is a list of smaller documents that is not syntactically enclosed with square brackets?Even if I would be very surprised that your 1.12TB is a single JSON document there are ways to brake it down into more manageable parts and use references to the smaller parts.",
"username": "steevej"
},
{
"code": "",
"text": "You need to break it up into 10 to 20 GB files, don’t do a full terabyte, that’s insanity lol.@lokesh_ChandraTake the bottom script and setup an if/else go through all of the JSON files and upload them one at a time until they are all uploaded.It’ll automate this whole process and make your life easier after breaking up the JSON file into smaller chunks.",
"username": "Brock"
}
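The script referenced above is not preserved in this thread. As a rough, hedged alternative (the file path, database and collection names are placeholders), a Node.js stream can import a newline-delimited JSON file in batches without ever holding the whole file in memory; each individual document still has to stay under the 16 MB BSON limit.

```javascript
const fs = require("fs");
const readline = require("readline");
const { MongoClient } = require("mongodb");

async function importNdjson(path) {
  const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
  await client.connect();
  const coll = client.db("MY_DB").collection("MY_COLLECTION_NAME");

  const rl = readline.createInterface({ input: fs.createReadStream(path) });
  let batch = [];
  for await (const line of rl) {
    if (!line.trim()) continue;
    batch.push(JSON.parse(line)); // assumes one JSON document per line
    if (batch.length === 1000) {
      await coll.insertMany(batch, { ordered: false });
      batch = [];
    }
  }
  if (batch.length) await coll.insertMany(batch, { ordered: false });
  await client.close();
}

importNdjson("C:/myfile.json").catch(console.error);
```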
] | Unable to add 1TB JSON file to MongoDB | 2023-03-24T07:48:58.859Z | Unable to add 1TB JSON file to MongoDB | 1,155 |
null | [
"server",
"database-tools"
] | [
{
"code": "[email protected]",
"text": "I just got a shiny new Mac for development, only to discover that a lot of the software I use doesn’t run on Apple’s Silicon (M1,M2) chips.It seems that although MongoDB Community server 6.x supports ARM64, but the database tools package does not. I tried this via Homebrew and also directly from the official downloads page.Homebrew will happily install these as mongodb-database-tools which is part of the [email protected] package, but they cannot be run (“Bad CPU type in executable”). The official downloads page shows only “macOS x86_64” for v100.0.7 which is the same one.How can I use these tools from my Mac (without Rosetta).Did I missing something? Do I just have to wait for an ARM64 release?",
"username": "timw"
},
{
"code": "",
"text": "I know this is ironic, but you’re going to have to use the Rosetta versions of everything, because ironically despite the M series chips being standard across MBD, they haven’t migrated a lot of the tools and infra over to supporting it natively.It’ll be some time, I’ve gone around it on my M1 “natively” via emulation which took about a month to even get working. And otherwise it needs Rosetta when it can use it.",
"username": "Brock"
},
{
"code": "",
"text": "Thanks for the reply.Here’s irony for you… I planned to run my test cluster under VirtualBox with Vagrant to get around compatibility issues. But… that won’t run on ARM64 either! Now looking at buying a Mini PC to run Linux on my local network.",
"username": "timw"
},
{
"code": "",
"text": "Trust me, I feel that irony. Being an M1 Mac user myself, and knowing that M1 and M2 Macs are standard issue at MBD, there’s a lot of irony on everything.",
"username": "Brock"
}
] | Database tools for macOS ARM64 | 2023-03-23T07:23:06.889Z | Database tools for macOS ARM64 | 1,794 |
null | [] | [
{
"code": " let result = await inst.bulkWrite(upsertJobs);\n console.log(result);\n",
"text": "I am trying to process results of a bulkWrite operation within an Atlas function (javascript). I have tried multiple variations of try catch, awaiting, .then().catch().The operation itself works great but no results are being fed back. I need to know the ids of the records that are updated vs inserted.Here is the code as it currently stands:Any idea how to get the results data back from this function?",
"username": "Rory_O_Brien"
},
{
"code": "bulkWritePromise<null>",
"text": "Hi @Rory_O_Brien,The operation itself works great but no results are being fed back.Unfortunately, this is expected: the subset of operations supported in Functions doesn’t match the ones available in the drivers. More in detail, bulkWrite returns Promise<null>",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "I am also trying to bulkWrite in App Services Function.\nI tried to wrap it in a try catch transaction but even if I don’t match any documents there is no error produced to catch.",
"username": "Joshua_Espinosa"
},
{
"code": "",
"text": "I don’t think it’s possible. See:One option is to use keep a track of all the documents you insert and their ids instead of relying on the driver to generate and assign a random id. Or you could just upsert everything and then retrieve the ids.\nLet me know if that helps.\nBest,\nNiharika",
"username": "Niharika_Pujar"
}
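A hedged sketch of the "keep track of the ids yourself" idea in an App Services function (the database, collection and function argument names are made up; distinguishing inserts from updates would still need a separate query, since bulkWrite returns null here):

```javascript
exports = async function (jobs) {
  const coll = context.services.get("mongodb-atlas").db("mydb").collection("jobs");

  const ops = jobs.map((job) => {
    const _id = job._id || new BSON.ObjectId(); // BSON is a global in App Services functions
    const fields = Object.assign({}, job);
    delete fields._id; // _id cannot appear in $set, even with the same value
    return { updateOne: { filter: { _id }, update: { $set: fields }, upsert: true } };
  });

  await coll.bulkWrite(ops);
  return ops.map((op) => op.updateOne.filter._id); // the ids we just upserted
};
```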
] | bulkWrite() not returning results in Atlas App Function | 2023-01-31T22:23:25.997Z | bulkWrite() not returning results in Atlas App Function | 1,115 |
[
"compass",
"indexes"
] | [
{
"code": "",
"text": "When connecting to one of my Atlas databases using Compass, I’m getting the following error when trying to view the number of times an index has been used: “Either the server does not support the $indexStats command or the user is not authorized to execute it.”The index stats work fine on another database I have for a different project. How can I get it working?\nScreenshot 2023-03-27 at 6.00.46 PM1518×882 53 KB\n",
"username": "Adam_Jackson"
},
{
"code": "",
"text": "As admin, check your authorities.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "The index stats work fine on another database I have for a different project.Is the other database in the same Atlas cluster or different one? If a different one you may have different permissions for it.Asya",
"username": "Asya_Kamsky"
}
] | Server "...does not support the $indexStats command" | 2023-03-27T22:04:22.932Z | Server “…does not support the $indexStats command” | 1,031 |
|
null | [
"java",
"containers",
"android"
] | [
{
"code": "",
"text": "I checked the compatibility matrix, tried to run java sdk 11/17 with mongo db driver 4.9.0/4.8.0/4.8.2. What can I do?",
"username": "Joanna_Pajak"
},
{
"code": "",
"text": "If you’re using Atlas, use the MongoDB Realm and Device Sync Java SDKIf you’re using on premise, build an ApolloGraphQL server, and load up the JavaScript based Apollo client to the mobile app, and then the Apollo GraphQL server append to the MongoDB Database, and just connect the two services.",
"username": "Brock"
}
] | Mongo DB driver (MongoDB cluster on Docker) not compatibile with java in Android Studio, Gradle | 2023-03-28T09:45:03.358Z | Mongo DB driver (MongoDB cluster on Docker) not compatibile with java in Android Studio, Gradle | 908 |
null | [
"containers"
] | [
{
"code": " image: arm64v8/mongo\n ports:\n - 27017:27017\n environment:\n MONGO_INITDB_DATABASE: admin\n MONGO_INITDB_ROOT_USERNAME: root\n MONGO_INITDB_ROOT_PASSWORD: password\n volumes:\n - ./mongo/database:/data/db\n - ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro\ndb = db.getSiblingDB('admin');\ndb.auth(\"root\", \"password\");\n\ndb = db.getSiblingDB('local')\ndb.createUser({\n user: 'root',\n pwd: 'password',\n roles: [{role: 'readWrite', db: `local`}]\n});\n\ndb.createCollection('schema', {capped: false});\n\nMongoServerError: Cannot create users in the local database\n",
"text": "Hello, trying to create a database with the following configuration of my docker compose:The docker compose file:init-mongo.js:When try to launch the docker compose, for any reason the logs said me:That’s happens when I don’t have a database created, after that error message mongo create the database but no with the expected user and database, and I am not sure what I am doing bad, my version of Mongo its 6.0.5 and I using the arm64 image in a Apple Laptop with M1 Chipset, any advice will be appreciated, thank you",
"username": "Adrian_Lagartera_Heras"
},
{
"code": "",
"text": "Why you are choosing local db in step 2\nYou cannot create users in local db\nYou should give sample_db or test_db\nIn first step you are authenticating to admin db\nIn second step you are creating the db and user for this db",
"username": "Ramachandra_Tummala"
},
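A hedged sketch of what the init script could look like with the user created in an ordinary application database instead of local ("appdb", "appuser" and the password below are placeholders, not names from the thread):

```javascript
db = db.getSiblingDB('admin');
db.auth('root', 'password');

db = db.getSiblingDB('appdb');          // any name except admin, local or config
db.createUser({
  user: 'appuser',
  pwd: 'password',
  roles: [{ role: 'readWrite', db: 'appdb' }]
});

db.createCollection('schema', { capped: false });
```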
{
"code": "",
"text": "I choose local in the second step because I wanted use a database named local, I didn’t know not posible create users in that local database, and yes, I am authenticating as admin because in my journey of try to find the answer of this problem I read about that’s steps.Its posible do users in a local database or avoid the creation by default of that database?Thank you for your answer",
"username": "Adrian_Lagartera_Heras"
},
{
"code": "",
"text": "@Adrian_Lagartera_Heras Don’t use the ARM MongoDB for the M1 chipset, use Rosetta.",
"username": "Brock"
},
{
"code": "",
"text": "You cannot and you should not use local db\nCheck the doc",
"username": "Ramachandra_Tummala"
}
] | Problems with the behavior of MongoDB ARM version and Docker | 2023-03-27T15:32:11.117Z | Problems with the behavior of MongoDB ARM version and Docker | 1,545 |
null | [
"aggregation",
"python",
"spark-connector"
] | [
{
"code": "",
"text": "I’m wondering if anyone has come up with a solution for this. I’m creating a AWS Glue job using pyspark driver and connection to Atlas Mongo. I was able to successfully create a datafram and create an aggregation pipeline to retrieve data. The one issue that I’m facing is the id’s in my collection are outputting in the hex (BinData 3) format while I need this in juuid. In pymongo I can add a tag for uuid representation but no such luck with spark - Has anyone come across a solution for this?",
"username": "Patrick_Shovein"
},
{
"code": "df = spark.read.format(\"com.mongodb.spark.sql.DefaultSource\") \\\n .option(\"uri\", \"mongodb://...\") \\\n .option(\"database\", \"mydb\") \\\n .option(\"collection\", \"mycollection\") \\\n .load()\ndf = df.withColumn(\"juuid_column\", XX(df[\"hex_column\"]))",
"text": "hex (BinData 3) format while I need this in juuidHi Patrick! Firstly welcome to the MongoDB community.Just to understand what you were trying to do:For this we can use a function call XX and manipulate the dataframe in the following way\ndf = df.withColumn(\"juuid_column\", XX(df[\"hex_column\"]))Does this summarize what you are trying to do?",
"username": "Prakul_Agarwal"
},
{
"code": "",
"text": "Thanks for the reply - Yes this is exactly what I’m trying to accomplish. I have been able to successfully retrieve data from a df but the output for the hex columns are retrieving in bin data 3 format.From your code snippet above, it appears that I would need to create a function XX that would convert this data?",
"username": "Patrick_Shovein"
},
{
"code": "",
"text": "Hello Patrick,\nI do think you will have to write a custom function for converting this data. I didn’t find a pre existing spark SQL function for this. Let me know if that would unblock you.Best,",
"username": "Prakul_Agarwal"
}
] | Pyspark UUID representation | 2023-03-03T22:31:28.605Z | Pyspark UUID representation | 1,385 |
[
"compass",
"atlas"
] | [
{
"code": "",
"text": "Hi, I’ve created a Custom Role to limit access to a specific database, but the new User modal is forcing me to also choose a Built-in Role, which would either grant too much or too few privileges, or does it do something else? The documentation is confusing.So, I’ve gone and created a user for unit testing with the “Only read any database” as the Built-in Role, going with the least privilege principle, and in the Custom Role, I’ve tied it to a specific database.However, I’ve found two problems when using the connection path:Is it possible to create a user that is restricted to a specific database? And, retains those restrictions regardless of where it’s used.\nScreen Shot 2023-03-27 at 14.12.13925×430 37.2 KB\n",
"username": "Wayne_Smallman"
},
{
"code": "",
"text": "Hi, sorry, I misinterpreted your question. You do not have to select a built-in role. You can just click the trash icon to hide that selector and then select a custom role. Alternatively, clicking outside of the modal will automatically unpopulated that selection it would appear.I do agree it is a touch confusing so I will pass this feedback along to that team.Let me know if this works for you?\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi @Tyler_Kaye, I sort of stumbled upon what you’re recommending.I’ve trimmed the Actions down for the Custom Role, but I’m still seeing problems when testing…Since swapping the connection around, the unit tests have stalled.",
"username": "Wayne_Smallman"
},
{
"code": "",
"text": "Hi, without more information about what test is failing, it would be difficult for me to offer any suggestions here. However, it is probably the case that the tests are just trying to do something that you are not allowing it to do.As a side note, is there a reason you need to be using these custom roles for your testing? I am assuming you are running tests against a test cluster and not directly against your production cluster?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "I plan on creating a Role per User, restricting each to: Development; Testing; Staging; and Production.The unit testing was working prior to me making changes to this Role and User.",
"username": "Wayne_Smallman"
}
] | How do I create a user tied to a specific database? | 2023-03-28T13:09:40.048Z | How do I create a user tied to a specific database? | 835 |
|
null | [
"aggregation",
"node-js"
] | [
{
"code": "// Aggregation run on Tag collection\n[\n {\n '$match': {\n '_id': new ObjectId('63e3155705656aeea6f81258')\n }\n }, {\n '$lookup': { // Find all connections for a Many-to-Many relation. \n 'from': 'TagOnFeedback', \n 'let': {\n 'tagId': '$_id'\n }, \n 'pipeline': [\n {\n '$match': {\n '$expr': {\n '$eq': [\n '$tagId', '$$tagId'\n ]\n }\n }\n }\n ], \n 'as': 'TagOnFeedback'\n }\n }, {\n '$lookup': { // Find all feedback where the feedback matches the ID specified in the linker\n 'from': 'Feedback', \n 'let': {\n 'linkId': '$TagOnFeedback'\n }, \n 'pipeline': [\n {\n '$sort': { // Sort doesn't use the index, as the index gets dropped from the system's memory randomly\n 'createdAt': -1, \n '_id': 1\n }\n }, {\n '$match': {\n '$and': [\n {\n '$expr': {\n '$in': [\n '$_id', '$$linkId.feedbackId'\n ]\n }\n }, {\n '$expr': { // For pagination\n '$lt': [\n '$_id', new ObjectId('63e3f0f218ee31386a627adc')\n ]\n }\n }\n ]\n }\n }, {\n '$limit': 10\n }\n ], \n 'as': 'Feedback'\n }\n }, {\n '$unset': [\n 'TagOnFeedback'\n ]\n }\n]\n",
"text": "Recently, I’ve been fighting with this aggregation pipeline that likes to forget to use an index. From looking online, my best option to solve this “forgetting an index” is by specifying a hint. The only problem is that the lookup where the hint would be relevant is inside of a descendant $lookup query within that pipeline.I’ve been looking all over the place, but I cant seem to find anything on how to add a hint inside the aggregation pipeline. Maybe this is a feature request?For context, here’s the pipeline (node.js)If this isn’t possible, how should I force mongo to use my index for the “Feedback” collection $lookup?Thanks",
"username": "Stratiz"
},
{
"code": "{\n '$lookup': { // Find all connections for a Many-to-Many relation. \n 'from': 'TagOnFeedback', \n 'let': {\n 'tagId': '$_id'\n }, \n 'pipeline': [\n {\n '$match': {\n '$expr': {\n '$eq': [\n '$tagId', '$tagId'\n ]\n }\n }\n }\n ], \n 'as': 'TagOnFeedback'\n }\n }\n{ '$lookup' : {\n 'from': 'TagOnFeedback', \n 'localField': '_id' ,\n 'foreignField': 'tagId' ,\n 'as': 'TagOnFeedback'\n} }\n{\n '$lookup': { // Find all feedback where the feedback matches the ID specified in the linker\n 'from': 'Feedback', \n 'let': {\n 'linkId': '$TagOnFeedback'\n }, \n 'pipeline': [\n {\n '$sort': { // Sort doesn't use the index, as the index gets dropped from the system's memory randomly\n 'createdAt': -1, \n '_id': 1\n }\n }, {\n '$match': {\n '$and': [\n {\n '$expr': {\n '$in': [\n '$_id', '$linkId.feedbackId'\n ]\n }\n }, {\n '$expr': { // For pagination\n '$lt': [\n '$_id', new ObjectId('63e3f0f218ee31386a627adc')\n ]\n }\n }\n ]\n }\n }, {\n '$limit': 10\n }\n ], \n 'as': 'Feedback'\n }\n }\n{ '$lookup': { // Find all feedback where the feedback matches the ID specified in the linker\n 'from' : 'Feedback' , \n 'localField' : 'TagOnFeedback.feedbackId' ,\n 'foreignField' : '_id' ,\n 'pipeline':\n [\n { '$sort' : {\n 'createdAt': -1, \n '_id': 1\n } } ,\n { '$match': {\n '_id' , { '$lt': new ObjectId('63e3f0f218ee31386a627adc') }\n } } ,\n {\n '$limit': 10\n }\n ] , \n 'as': 'Feedback'\n} }\nSort doesn't use the indexthe index gets dropped from the system's memory randomly",
"text": "I would first simply the $lookup stages to use localField/foreignField such as:ReplacewithAlso replacewithAvoiding $expr, depending of the version of mongod, increases the odd of hitting an index.As forSort doesn't use the indexwe will need the indexes that are defined and sample documents from all collections. I have seen in the past people complaining that indexes were not used and indeed they were not used because there was typo in the field names or the order of the field was not appropriate for the query. We also need the explain plan.And finally aboutthe index gets dropped from the system's memory randomlyI am suspicious about the randomly part. Please share your observations. Are you alone working on the instance? Do you have any automated script running?",
"username": "steevej"
}
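To complement the answer above, here is a hedged PyMongo sketch of the indexes that would back the simplified localField/foreignField lookups and the sort, plus a way to pull the explain plan it asks for. The connection details are placeholders, and `pipeline` is assumed to hold the aggregation from the question.

```python
from pymongo import ASCENDING, DESCENDING, MongoClient

db = MongoClient()["mydb"]  # connection string and database name are placeholders

# Supports the first $lookup: foreignField "tagId" on TagOnFeedback.
db.TagOnFeedback.create_index([("tagId", ASCENDING)])

# Supports the sort inside the Feedback $lookup ({createdAt: -1, _id: 1}).
db.Feedback.create_index([("createdAt", DESCENDING), ("_id", ASCENDING)])

# Check whether the inner pipelines actually hit the indexes.
plan = db.command(
    {
        "explain": {"aggregate": "Tag", "pipeline": pipeline, "cursor": {}},
        "verbosity": "executionStats",
    }
)
print(plan)
```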
] | $lookup in a .aggregate() query doesn't have an option to specify a hint | 2023-03-26T10:59:41.874Z | $lookup in a .aggregate() query doesn’t have an option to specify a hint | 585 |
null | [] | [
{
"code": "",
"text": "Hey fellow developers! Do you use MongoDB in your projects? We want to know what you think about ORM/ODM solutions for MongoDB! Take our super quick survey and share your thoughts.Your feedback will help us improve our product offerings and better serve the developer community. Click below to participate now! https://mongodb.co1.qualtrics.com/jfe/form/SV_9BtIkAYz2WwvXzoThanks,\nShubham\nProduct Manager, Developer Experience Team",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | We want to hear from you | 2023-03-28T12:47:38.502Z | We want to hear from you | 735 |
null | [] | [
{
"code": "",
"text": "",
"username": "Raviteja_Aketi"
},
{
"code": "await realm.SyncSession.WaitForUploadAsync(); // Ensures local data is uploaded\nawait realm.SyncSession.WaitForDownloadAsync(); // Ensures remote data is downloaded\nvar yearAgo = DateTimeOffset.UtcNow.Add(TimeSpan.FromDays(-365));\nrealm.Subscriptions.Update(() => {\n // Get all invoices from last year\n realm.Subscriptions.Add(realm.All<Invoice>().Where(i => i.CreatedDate > yearAgo));\n});\n\nawait realm.Subscriptions.WaitForSynchronization();\n\n// We now have all invoices from the past year. However the user changes their filter\n// preferences to \"last month\", so we want to remove the local invoices older than a month\n\nvar monthAgo = DateTimeOffset.UtcNow.Add(TimeSpan.FromDays(-30));\nrealm.Subscriptions.Update(() =>\n{\n // Remove all subscriptions on the Invoice class\n realm.Subscriptions.RemoveAll<Invoice>();\n realm.Subscriptions.Add(realm.All<Invoice>().Where(i => i.CreatedDate > monthAgo));\n});\n\nawait realm.Subscriptions.WaitForSynchronizationAsync();\n\n// Now we've removed the older invoices and are only storing the last 30 days of data.\n",
"text": "You have not specified which language/SDK you’re using, but most SDKs have an API similar to:Regarding 2. - if you’re using Flexible Sync, you can remove the subscription that covers the data and the local objects will be deleted without affecting the remote ones. For example:",
"username": "nirinchev"
},
{
"code": "realmrealm.Subscriptions.Update(() =>\n{\n // Remove all subscriptions on the Invoice class\n realm.Subscriptions.RemoveAll<Invoice>();\n realm.Subscriptions.Add(realm.All<Invoice>());\n});\nawait realm.Subscriptions.WaitForSynchronizationAsync();\n",
"text": "realmHi,I am using Flutter SDKLets say i want to delete all entries from Invoice i.e., make it emptyDoes below code delete all Invoice entries from local db without effecting the remote data?",
"username": "Raviteja_Aketi"
},
{
"code": "realm.subscriptions.update((mutableSubscriptions) {\n // removes all Invoice subscriptions - if this was the only operation in\n // the update block, this would leave no Invoice objects on the client.\n mutableSubscriptions.removeByType<Invoice>();\n\n // creates a subscription for all invoices.\n mutableSubscriptions.add(realm.all<Invoice>()); //\n});\nremoveByType<Invoice>",
"text": "Not exactly - what happens in the code below is (translated into dart):When the server sees this update, it’ll calculate the difference between what the client was seeing before and what the client should be seeing and will send you the objects that will satisfy the new subscription. Since now you’d be asking for all invoices, the server will send you any Invoice documents that the client previously didn’t have.If you wanted to remove all invoice entries from the local database, you would only call removeByType<Invoice>.",
"username": "nirinchev"
},
{
"code": "",
"text": "You mean update function with removeByType will delete all invoice objects from local db and no effect on remote data. Am i correct?Once i am done with deleting invoice objects from the local db, I wanted to resume the syncing mechanism for newly inserted invoice objectsHow do i achive this?",
"username": "Raviteja_Aketi"
},
{
"code": "realm.query<Invoice>(\"createdDate > $0\", DateTime.now())realm.syncSession.waitForUpload()waitForDownload()",
"text": "I’m not sure what exactly you’re trying to achieve here. I’ll try to reply to the best of my abilities, but you’re probably going to need to add more details.",
"username": "nirinchev"
},
{
"code": "",
"text": "Let me add more details",
"username": "Raviteja_Aketi"
},
{
"code": "// First time the user opens the app, subscribe for invoices older than a month\nrealm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add(realm.query<Invoice>(\"createdDate > $0\", DateTime.now().add(Duration(days: -30))));\n});\n\n// Now create some invoices\nrealm.write(() {\n realm.add(Invoice(...));\n});\n\n// Now you want to remove the local data and only sync new invoices\nrealm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.removeByType<Invoice>();\n \n // Subscribe for new invoices only\n mutableSubscriptions.add(realm.query<Invoice>(\"createdDate > $0\", DateTime.now()));\n});\nnow()-30dnow()await realm.subscriptions.waitForSynchronization()",
"text": "Okay, so then you don’t need to explicitly wait for uploading the data as that will happen naturally as part of sync. So if we go back to our invoices example, let’s say you have something like this:What happens at the end is that you tell the server that you no longer wish to see old invoices and it will send “delete” instructions for all local Invoice objects that were created between now()-30d and now() (without, of course, deleting the documents themselves on the server). It will then continue sending you newly inserted documents. Any objects you have created prior to updating the query will be synchronized with the server, so you won’t lose any data. Finally, if you want a deterministic way of ensuring the changes have been propagated in both directions, you can call await realm.subscriptions.waitForSynchronization() which will complete when local data has been uploaded, the server has sent you all data matching the new subscriptions, and any data that doesn’t match the subscriptions has been removed from the local device.",
"username": "nirinchev"
},
{
"code": "",
"text": "Thanks for the help @nirinchev1\nI will try this solution and give you the update",
"username": "Raviteja_Aketi"
}
] | Is there a way to listen or check for specific data synced or not in the mobile app & How do i delete the synced data from the my local db without effecting the cloud data? | 2023-03-27T09:03:24.767Z | Is there a way to listen or check for specific data synced or not in the mobile app & How do i delete the synced data from the my local db without effecting the cloud data? | 664 |
null | [
"queries"
] | [
{
"code": "db.getCollection('myCollection').createIndex(\n { fieldname: 1},\n { partialFilterExpression: {fieldname: {$ne:null}}}\n)\n{'partialFilterExpression':{'field':{'\\$type':['int','double','string']}}\n",
"text": "What is the method to create partial index on not null data?It is not allowing to create partial index with not expression.\nFor example, following command get error:I have tried different method to create partial index using below given partialFilterExpression.But, queries with { ’ field ’ : { ’ $ne ’ : null }}, { ‘field’ : value } does not use this partial index. How can I create Partial index for such queries?",
"username": "Monika_Shah"
},
{
"code": "field: value$eq$exists: true$gt$gte$lt$lte$type$and$or$in$exists: true{ 'field': { '$exists': true, '$eq': \"value\" } }{'partialFilterExpression':{'field':{'\\$type':['int','double','string']}}\n$or",
"text": "Hello @Monika_Shah,The partial index accepts the following expressions in MongoDB v6, as you can read in the documentation,It is not allowing to create partial index with not expression.Instead, you can create an index with $exists: true expression, and make sure you don’t save property with a null value in a collection otherwise it will consider it as exist field.And in your query you can check { 'field': { '$exists': true, '$eq': \"value\" } }I have tried different method to create partial index using below given partialFilterExpression.You can use the $or operator to check multiple types, it is supported by MongoDB v6.",
"username": "turivishal"
},
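Here is a small PyMongo sketch of the $exists-based suggestion above; the database, collection, and field value are placeholders.

```python
from pymongo import ASCENDING, MongoClient

coll = MongoClient()["mydb"]["mycoll"]  # placeholder connection and names

# Partial index covering only documents where the field is present.
coll.create_index(
    [("field", ASCENDING)],
    partialFilterExpression={"field": {"$exists": True}},
)

# Include the same $exists condition in the query so the filter stays within
# the subset of documents the partial index covers.
docs = list(coll.find({"field": {"$exists": True, "$eq": "value"}}))
```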
{
"code": "> db.test.createIndex({\"field\":1},{'partialFilterExpression':{'field':{'$type':{'$in':[\"int\",\"double\"]}}}})\n> db.test.createIndex({\"field\":1},{'partialFilterExpression':{'field':{'$type':{'$in':[1,2,16,18]}}}})\n",
"text": "It shows errors when applied using $in with $type for both of following statement. and accept statement when works without $in.“Error in specification { key: { field: 1.0 }, name: \"field_1\", partialFilterExpression: { field: { $type: { $in: [ \"int\", \"double\" ] } } } } :: caused by :: type must be represented as a number or a string”,{\n“ok” : 0,\n“errmsg” : “Error in specification { key: { field: 1.0 }, name: \"field_1\", partialFilterExpression: { field: { $type: { $in: [ 1.0, 2.0, 16.0, 18.0 ] } } } } :: caused by :: type must be represented as a number or a string”,\n“code” : 14,\n“codeName” : “TypeMismatch”numbers or string",
"username": "Monika_Shah"
},
{
"code": "$type$ordb.test.createIndex(\n { \"field\": 1 }, \n { \n 'partialFilterExpression' : { \n \"$or\": [{ \"field\": { \"$type\": \"int\" } }, { \"field\": { \"$type\": \"double\" } }] \n } \n }\n)\n",
"text": "“Error in specification { key: { field: 1.0 }, name: “field_1”, partialFilterExpression: { field: { $type: { $in: [ “int”, “double” ] } } } } :: caused by :: type must be represented as a number or a string”,As you can read from the error message, The $type operator does not allow another operator as value except its types, instead, you can use $or operator,",
"username": "turivishal"
}
] | Partial Index Equivalent to sparse | 2023-03-25T10:20:08.107Z | Partial Index Equivalent to sparse | 566 |
null | [
"replication"
] | [
{
"code": "1.) DESKTOP-MONGO01:27017 // Contain the old databases\n2.) DESKTOP-MONGO02:27017 // New Device\n3.) DESKTOP-MONGO03:27017 // New Device\nnet:\n port: 27017\n bindIp: 127.0.0.1,DESKTOP-MONGO01,10.15.22.58\n\nsecurity:\n keyFile: C:\\mongocluster\\key.txt\n\nreplication:\n replSetName: \"rs0\"\nnet:\n port: 27017\n bindIp: 127.0.0.1,DESKTOP-MONGO02,10.15.22.59\n\nsecurity:\n keyFile: C:\\mongocluster\\key.txt\n\nreplication:\n replSetName: \"rs0\"\nnet:\n port: 27017\n bindIp: 127.0.0.1,DESKTOP-MONGO03,10.15.22.60\n\nsecurity:\n keyFile: C:\\mongocluster\\key.txt\n\nreplication:\n replSetName: \"rs0\"\n{\n \"topologyVersion\" : {\n \"processId\" : ObjectId(\"641edc16215a2bead7ec5e3b\"),\n \"counter\" : NumberLong(7)\n },\n \"hosts\" : [\n \"DESKTOP-MONGO01:27017\"\n ],\n \"passives\" : [\n \"DESKTOP-MONGO02:27017\"\n ],\n \"setName\" : \"rs0\",\n \"setVersion\" : 2,\n \"ismaster\" : true,\n \"secondary\" : false,\n \"primary\" : \"DESKTOP-MONGO01:27017\",\n \"me\" : \"DESKTOP-MONGO01:27017\",\n \"electionId\" : ObjectId(\"7fffffff0000000000000001\"),\n \"lastWrite\" : {\n \"opTime\" : {\n \"ts\" : Timestamp(1679744105, 1),\n \"t\" : NumberLong(1)\n },\n \"lastWriteDate\" : ISODate(\"2023-03-25T11:35:05Z\"),\n \"majorityOpTime\" : {\n \"ts\" : Timestamp(1679744105, 1),\n \"t\" : NumberLong(1)\n },\n \"majorityWriteDate\" : ISODate(\"2023-03-25T11:35:05Z\")\n },\n \"maxBsonObjectSize\" : 16777216,\n \"maxMessageSizeBytes\" : 48000000,\n \"maxWriteBatchSize\" : 100000,\n \"localTime\" : ISODate(\"2023-03-25T11:35:11.492Z\"),\n \"logicalSessionTimeoutMinutes\" : 30,\n \"connectionId\" : 1,\n \"minWireVersion\" : 0,\n \"maxWireVersion\" : 13,\n \"readOnly\" : false,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1679744105, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"Gv67T8xqXrQIMfb5QlKCN2Khzxg=\"),\n \"keyId\" : NumberLong(\"7214139855649898500\")\n }\n },\n \"operationTime\" : Timestamp(1679744105, 1)\n}\n{\n \"set\" : \"rs0\",\n \"date\" : ISODate(\"2023-03-25T11:35:28.278Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(1),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 1,\n \"writeMajorityCount\" : 1,\n \"votingMembersCount\" : 1,\n \"writableVotingMembersCount\" : 1,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1679744125, 1),\n \"t\" : NumberLong(1)\n },\n \"lastCommittedWallTime\" : ISODate(\"2023-03-25T11:35:25.820Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1679744125, 1),\n \"t\" : NumberLong(1)\n },\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1679744125, 1),\n \"t\" : NumberLong(1)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1679744125, 1),\n \"t\" : NumberLong(1)\n },\n \"lastAppliedWallTime\" : ISODate(\"2023-03-25T11:35:25.820Z\"),\n \"lastDurableWallTime\" : ISODate(\"2023-03-25T11:35:25.820Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1679744105, 1),\n \"electionCandidateMetrics\" : {\n \"lastElectionReason\" : \"electionTimeout\",\n \"lastElectionDate\" : ISODate(\"2023-03-25T11:34:15.724Z\"),\n \"electionTerm\" : NumberLong(1),\n \"lastCommittedOpTimeAtElection\" : {\n \"ts\" : Timestamp(1679744054, 1),\n \"t\" : NumberLong(-1)\n },\n \"lastSeenOpTimeAtElection\" : {\n \"ts\" : Timestamp(1679744054, 1),\n \"t\" : NumberLong(-1)\n },\n \"numVotesNeeded\" : 1,\n \"priorityAtElection\" : 1,\n \"electionTimeoutMillis\" : NumberLong(10000),\n \"newTermStartDate\" : ISODate(\"2023-03-25T11:34:15.813Z\"),\n 
\"wMajorityWriteAvailabilityDate\" : ISODate(\"2023-03-25T11:34:15.835Z\")\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"DESKTOP-MONGO01:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 106,\n \"optime\" : {\n \"ts\" : Timestamp(1679744125, 1),\n \"t\" : NumberLong(1)\n },\n \"optimeDate\" : ISODate(\"2023-03-25T11:35:25Z\"),\n \"lastAppliedWallTime\" : ISODate(\"2023-03-25T11:35:25.820Z\"),\n \"lastDurableWallTime\" : ISODate(\"2023-03-25T11:35:25.820Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"Could not find member to sync from\",\n \"electionTime\" : Timestamp(1679744055, 1),\n \"electionDate\" : ISODate(\"2023-03-25T11:34:15Z\"),\n \"configVersion\" : 2,\n \"configTerm\" : 1,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 1,\n \"name\" : \"DESKTOP-MONGO02:27017\",\n \"health\" : 1,\n \"state\" : 0,\n \"stateStr\" : \"STARTUP\",\n \"uptime\" : 45,\n \"optime\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(0, 0),\n \"t\" : NumberLong(-1)\n },\n \"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastAppliedWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastDurableWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"lastHeartbeat\" : ISODate(\"2023-03-25T11:35:26.859Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n \"pingMs\" : NumberLong(1),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : -2,\n \"configTerm\" : -1\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1679744125, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"1NoyQRpJW7Ug/NhwNZAXA18lX9c=\"),\n \"keyId\" : NumberLong(\"7214139855649898500\")\n }\n },\n \"operationTime\" : Timestamp(1679744125, 1)\n}\n{\n \"_id\" : \"rs0\",\n \"version\" : 2,\n \"term\" : 1,\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"DESKTOP-MONGO01:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"secondaryDelaySecs\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 1,\n \"host\" : \"DESKTOP-MONGO02:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"secondaryDelaySecs\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"641edc36215a2bead7ec5e65\")\n }\n}\n",
"text": "Hello Everyone, I have an issue with converting a standalone to a replica set in windows platform:I have three servers using windowsDESKTOP-MONGO01:27017 - CFGDESKTOP-MONGO02:27017 - CFGDESKTOP-MONGO03:27017 - CFGafter adding the second node in DESKTOP-MONGO01 and run rs.isMaster(); I got the following:and rs.status();rs.conf()I am new to MongoDB replica set so I don’t know what is the issue? The old structure was standalone with authentication enabled, so it will be good if I got some help,Thank you.",
"username": "Mina_Ezeet"
},
{
"code": "",
"text": "Check the secondary mongod.log\nIs keyfile the same on all nodes?\nCan you connect from primary to your secondary\nCould be firewall blocking?\nIs hosts file/hostname resolution fine?",
"username": "Ramachandra_Tummala"
},
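As a quick way to test the firewall/port questions above, here is a small Python sketch (hostnames taken from the thread) that simply checks whether port 27017 is reachable from the primary host.

```python
import socket

# Secondary hostnames from the thread; adjust as needed.
for host in ("DESKTOP-MONGO02", "DESKTOP-MONGO03"):
    try:
        with socket.create_connection((host, 27017), timeout=3):
            print(f"{host}:27017 reachable")
    except OSError as exc:
        print(f"{host}:27017 NOT reachable: {exc}")
```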
{
"code": "",
"text": "Thank you for response, It was firewall issue, Once I white listed the port 27017 on the master pc its works fine.",
"username": "Mina_Ezeet"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Passive nodes in Mongodb replica set | 2023-03-25T11:53:48.496Z | Passive nodes in Mongodb replica set | 800 |
null | [
"react-native"
] | [
{
"code": "",
"text": "Hi,Trying to implement Push Notification in a mobile app using React Native & Realm, but found no documentation for React Native. I don’t want to go for Native Android & iOS implementation.Is there any articles or docs available for React Native. Pls. share.regards,\nVishnu Rana",
"username": "Vishnu_Rana"
},
{
"code": "",
"text": "Hi @Vishnu_Rana, Did you find something?",
"username": "Mike_Notta"
},
{
"code": "",
"text": "You should use Firebase Cloud Messaging and Realm Atlas to push notifications on Real Time like others app do. Im not sure if exists any tutorial on internet but if you search FCM Firebase - Realm Atlas, you migth find something.",
"username": "Marouane_Boukhriss_Ouchab"
}
] | Push Notifications using realm (FCM) and React Native | 2021-03-18T14:49:34.367Z | Push Notifications using realm (FCM) and React Native | 3,870 |
null | [
"queries",
"sharding",
"configuration"
] | [
{
"code": "\"errMsg\":\"version mismatch detected for company.collection1\",\n\"errName\":\"StaleConfig\",\n\"errCode\":13388\n{\n \"attr\" : {\n \"command\" : {\n \"$audit\" : {\n \"$impersonatedRoles\" : [\n {\n \"db\" : \"admin\",\n \"role\" : \"root\"\n }\n ],\n \"$impersonatedUsers\" : [\n {\n \"db\" : \"admin\",\n \"user\" : \"user1\"\n }\n ]\n },\n \"$client\" : {\n \"driver\" : {\n \"name\" : \"mongo-java-driver|legacy\",\n \"version\" : \"3.12.2\"\n },\n \"mongos\" : {\n \"client\" : \"10.0.x.x:yyyyy\",\n \"host\" : \"mongos-01.company.net:27017\",\n \"version\" : \"5.0.14\"\n },\n \"os\" : {\n \"architecture\" : \"amd64\",\n \"name\" : \"Linux\",\n \"type\" : \"Linux\",\n \"version\" : \"5.10.118-111.515.amzn2.x86_64\"\n },\n \"platform\" : \"Java/Oracle Corporation/1.8.0_171-b10\"\n },\n \"$clusterTime\" : {\n \"clusterTime\" : {\n \"$timestamp\" : {\n \"i\" : 32,\n \"t\" : 1677742474\n }\n },\n \"signature\" : {\n \"hash\" : {\n \"$binary\" : {\n \"base64\" : \"WoHMZCgb2wPkODlfrH17ft8p4kU=\",\n \"subType\" : \"0\"\n }\n },\n \"keyId\" : 7192513166306181143\n }\n },\n \"$configServerState\" : {\n \"opTime\" : {\n \"t\" : -1,\n \"ts\" : {\n \"$timestamp\" : {\n \"i\" : 14,\n \"t\" : 1677742474\n }\n }\n }\n },\n \"$configTime\" : {\n \"$timestamp\" : {\n \"i\" : 14,\n \"t\" : 1677742474\n }\n },\n \"$db\" : \"company\",\n \"$readPreference\" : {\n \"mode\" : \"secondaryPreferred\"\n },\n \"$topologyTime\" : {\n \"$timestamp\" : {\n \"i\" : 2,\n \"t\" : 1674647828\n }\n },\n \"clientOperationKey\" : {\n \"$uuid\" : \"c8d4a888-3501-422b-9edb-382ef7abcd01\"\n },\n \"filter\" : {\n \"$comment\" : \"a0fa53d5-60d3-43c7-a0fb-a68720403790\",\n \"field1\" : \"value1\"\n },\n \"find\" : \"collection1\",\n \"limit\" : 24,\n \"lsid\" : {\n \"id\" : {\n \"$uuid\" : \"af276bbb-73e1-46d8-b04b-4ae6980f89b8\"\n },\n \"uid\" : {\n \"$binary\" : {\n \"base64\" : \"JEiqi/uQbAC38XTQIncz/YXMD580biZRCf0ibuMFRVg=\",\n \"subType\" : \"0\"\n }\n }\n },\n \"maxTimeMS\" : 18000,\n \"maxTimeMSOpOnly\" : 18009,\n \"projection\" : {\n \"field2\" : 1,\n \"field3\" : 1\n },\n \"readConcern\" : {\n \"level\" : \"local\",\n \"provenance\" : \"implicitDefault\"\n },\n \"shardVersion\" : [\n {\n \"$timestamp\" : {\n \"i\" : 5,\n \"t\" : 29078\n }\n },\n {\n \"$oid\" : \"63d12aed200597703ab44399\"\n },\n {\n \"$timestamp\" : {\n \"i\" : 4427,\n \"t\" : 1674652397\n }\n }\n ],\n \"sort\" : {\n \"field3\" : -1\n }\n },\n \"durationMillis\" : 214,\n \"errCode\" : 13388,\n \"errMsg\" : \"version mismatch detected for company.collection1\",\n \"errName\" : \"StaleConfig\",\n \"locks\" : {\n \"FeatureCompatibilityVersion\" : {\n \"acquireCount\" : {\n \"r\" : 1\n }\n },\n \"Global\" : {\n \"acquireCount\" : {\n \"r\" : 1\n }\n },\n \"Mutex\" : {\n \"acquireCount\" : {\n \"r\" : 2\n }\n }\n },\n \"ns\" : \"company.collection1\",\n \"numYields\" : 0,\n \"ok\" : 0,\n \"protocol\" : \"op_msg\",\n \"readConcern\" : {\n \"level\" : \"local\",\n \"provenance\" : \"implicitDefault\"\n },\n \"remote\" : \"10.0.x.x:yyyyy\",\n \"reslen\" : 685,\n \"storage\" : {},\n \"type\" : \"command\"\n },\n \"c\" : \"COMMAND\",\n \"ctx\" : \"conn47321\",\n \"id\" : 51803,\n \"msg\" : \"Slow query\",\n \"s\" : \"I\",\n \"t\" : {\n \"$date\" : \"2023-03-02T07:34:35.472+00:00\"\n }\n}\n",
"text": "Hello, Community!We’re using a sharded MongoDB V5.0.14 setup in our production environment and we have been getting an error while reads are made against two of the biggest collections in the database. Below is the snippet from the mongod slow query logs. Full log of a sample query is attached.This collection is at 13 TB data size and compressed to disk to ~ 1.9 TB with zstd compression option. It receives about 80% of overall traffic compared to the other collectionsI’ve gone through similar posts like shard-version-not-ok-version-mismatch-detected-for, staleconfig-error-in-sharded-data-cluster-an-error-from-cluster-data-placement-c, MongoDB Jira SERVER-45119, staleconfig-how-to-stop-thisIt looks like the only option to try out is to execute the flushRouterConfig command. we executed it on all the mongos nodes, config server nodes, and all data shard nodes (PSS). The errors seem to stop for about an hour or two and they resurfaceAlso, we disabled the balancing on this particular collection to see if the chunk autosplit and movement is causing any stale data. It doesn’t seem to stop eitherWe would like to knowPlease help and ask any information that would help you furtherI couldn’t upload a non-image attachment. So, pasting the slow query log below",
"username": "A_S_Gowri_Sankar"
},
{
"code": "\"errMsg\":\"version mismatch detected for company.collection1\",\n\"errName\":\"StaleConfig\",\n\"errCode\":13388\nflushRouterConfig",
"text": "Hi @A_S_Gowri_Sankar and welcome to the MongoDB community forum!!For the above error message observed, please refer to Types of operations that will cause the versioning information to become stale and additionally, if the routing table cache receives an error in attempting to refresh, it will retry up to twice before giving up and returning stale information.This might be one of the case in your situation where the mongos was not able to refresh the cache. There could be many reasons why this is so, network issues being could be one of them.However, to understand the issue in more details, could you help me with some details regarding:This collection is at 13 TB data size and compressed to disk to ~ 1.9 TB with zstd compression option. It receives about 80% of overall traffic compared to the other collectionsIs there a query of any kind involved here? for example, a targeted query returning a data set, or if this is a scatter gather query running on a single shards, query that involves single or multiple reads etc.It looks like the only option to try out is to execute the flushRouterConfig command. we executed it on all the mongos nodes, config server nodes, and all data shard nodes (PSS). The errors seem to stop for about an hour or two and they resurfaceI understand that after executing flushRouterConfig the error went away for ~2 hours, the query runs ok, then the error reappear later? Is there any network event or any other change happening?Please ensure that all parts of the cluster (mongod, mongos) are running identical versions, and the latest in their series to ensure you have the latest bugfixes. In your case, version 5.0.15 is the latest. Note that there is an issue with upgrading to 5.0.15 with a workaround in the 5.0.15 release notesLet us know if you have any other concerns.Best Regards\nAasawari",
"username": "Aasawari"
},
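For reference, the flushRouterConfig step discussed above can also be scripted with PyMongo; the mongos URI and namespace below are placeholders based on the thread, and the command has to be run against each mongos.

```python
from pymongo import MongoClient

# Connect to a mongos router; the URI is a placeholder.
mongos = MongoClient("mongodb://mongos-01.company.net:27017")

# Refresh the cached routing table for the affected namespace only...
mongos.admin.command("flushRouterConfig", "company.collection1")

# ...or refresh the whole cached cluster metadata.
mongos.admin.command("flushRouterConfig", 1)
```

As the thread notes, this only clears the symptom temporarily if something keeps invalidating the cache, so treat it as a diagnostic aid rather than a fix.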
{
"code": "",
"text": "thank you, @Aasawari for your responses. Much appreciated. sorry about the delayed responseIs there a query of any kind involved here? for example, a targeted query returning a data set, or if this is a scatter gather query running on a single shards, query that involves single or multiple reads etc.Yeah, it’s a target query running on a single shardif the routing table cache receives an error in attempting to refresh, it will retry up to twice before giving up and returning stale information.where is the retry triggered from? from mongos to mongod shard? also, when you say up to twice, do you mean it retries twice or retries once after the original request fails?then the error reappear later? Is there any network event or any other change happening?no, there are no network issues or changes happening. if you’re looking for any specific info in the logs, please share. I’ll grep and paste herePlease ensure that all parts of the cluster (mongod, mongos) are running identical versionsyeah. all of the mongodb components are running on the same v5.0.14can we check for something else, please? the error rate is going higher every week",
"username": "A_S_Gowri_Sankar"
}
] | StaleConfig 13388 - version mismatch detected for collection | 2023-03-02T10:24:42.081Z | StaleConfig 13388 - version mismatch detected for collection | 1,808 |
null | [
"replication",
"indexes"
] | [
{
"code": "",
"text": "mongoDB is deployed in clustered mode(replica set) as a statefulset in AWS EKS.\nIt’s been 2 years since the index is created. Now one of the index got deleted.\nWe didn’t perform deletion on index.\nBut we deleted few data from the collection. Can anyone explain how this index is got deleted ?mongodb version 4.2.7",
"username": "Sebin_Sebastian"
},
{
"code": "",
"text": "Hi @Sebin_Sebastian and welcome to the MongoDB community forum!!MongoDB does not have an automatic mechanism to delete indexes, so it’s possible that the deletion of the index was due to user action or application code.Could you help me with more details on what actions have been performed on the database or if you could see any weird behaviour or warning messages in the log files during the period when index was deleted.\nThe log would look something similar to below:2023-03-28T13:04:36.524+0530 I COMMAND [conn7] CMD: dropIndexes test.sample: “name_1”Regards\nAasawari",
"username": "Aasawari"
}
] | Unexpected Index deletion | 2023-03-21T07:17:52.479Z | Unexpected Index deletion | 867 |
null | [
"swift"
] | [
{
"code": "class Folder : Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name: String\n @Persisted var parent: Folder?\n @Persisted var icon = \"folder\"\n @Persisted(originProperty: \"parent\") var children: LinkingObjects<Folder>\n @Persisted(originProperty: \"folder\") var items: LinkingObjects<Item>\n}\n@ObservedResults@ObservedResults(Folder.self, filter: NSPredicate(format: \"parent == nil\")) var rootFolders\n var body: some View {\n NavigationView {\n OutlineGroup(rootFolders[0], children: \\.children) { folder in\n Text(\"\\(folder.name)\")\n }\n Text(\"Content\")\n }\n }\n\\.children[Folder]?",
"text": "I’m trying to create an OutlineGroup in SwiftUI to display an hierarchal folder like structure:I have an @ObservedResults property in my view:And in my view:This code though doesn’t compile, I’m getting the error “Key path value type ‘LinkingObjects’ cannot be converted to contextual type ‘LinkingObjects?’”\nfor the \\.children keypath.I’m kind of at a loss as to how to resolve this to get the OutlineGroup to get the item’s children.\nI’m not quite sure even how to “unwrap” the children property into an [Folder]?.Any assistance is appreciated.Thanks,\nMichael",
"username": "Michael_Grubb"
},
{
"code": " var maybeChildren: LinkingObjects<Folder>? {\n get {\n if children.count > 0 {\n return children\n } else {\n return nil\n }\n }\n }\n",
"text": "Not sure if this is idiomatic or even a good idea but it did get me past the compile error and a bit further down the road.",
"username": "Michael_Grubb"
},
{
"code": "ListLinkingObjectsclass Folder : Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name: String\n @Persisted var parent: Folder?\n @Persisted var icon = \"folder\"\n \n @Persisted var childList = List<Folder>() //forward link to the child folders\n @Persisted(originProperty: \"childList\" var linkedParent: LinkingObjects<Folder> //back link to parent\n}\nclass Folder : Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name: String\n @Persisted var parent: Folder?\n @Persisted var icon = \"folder\"\n \n @Persisted var childList = List<Folder>() //forward link to the child folders\n @Persisted var parentFolder: Folder!\n}\n",
"text": "Would the forward link be a List and the inverse be a LinkingObjects?That being said, since the child folder can only ever have a single parent, what about forgoing the LinkingObjects and just use a single parent property?It’s a bit more work but that keeps the relationships one-to-many and then the reverse is one-to-one.",
"username": "Jay"
},
{
"code": "var maybeChildren: LinkingObjects<Folder>? {\n get {\n if children.count > 0 {\n return children\n } else {\n return nil\n }\n }\n }\nextension Folder {\n var childrenArray: [Folder] {\n children.count == 0 ? nil : Array(children)\n }\n}\n",
"text": "In my case, I worked around this by adding a computed property:",
"username": "Siwei_Kang"
},
{
"code": "import RealmSwift\n\nclass Item: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var id: ObjectId\n @Persisted var name = \"\"\n @Persisted var subItems = RealmSwift.List<Item>()\n}\n\nextension Item {\n var subItemArray: [Item]? {\n subItems.count == 0 ? nil : Array(subItems)\n }\n}\n\nstruct ContentView: View {\n @Environment(\\.realm) var realm\n @ObservedResults(Item.self) var items\n\n var body: some View {\n List {\n ForEach(items) { item in\n OutlineGroup(item, children: \\.subItemArray) { sItem in\n Text(sItem.name)\n }\n }\n }\n }\n}\n\n\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n }\n}\n",
"text": "I have same issue, resolved by this",
"username": "RE_GA"
}
] | Using LinkingObjects with OutlineGroup in SwiftUI | 2022-01-29T01:47:08.689Z | Using LinkingObjects with OutlineGroup in SwiftUI | 3,336 |
[
"swift"
] | [
{
"code": "import RealmSwift\n\nclass Item: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var id: ObjectId\n @Persisted var name = \"\"\n @Persisted var subItems = RealmSwift.List<Item>()\n}\n\nstruct ContentView: View {\n @Environment(\\.realm) var realm\n @ObservedResults(Item.self) var items\n\n var body: some View {\n List {\n OutlineGroup(items, children: \\.subItems) { item in\n Text(item.name)\n }\n }\n }\n}\n\n\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n }\n}\n\n",
"text": "Hello everyone,I’m new to Realm and trying to use it with Swift/SwiftUI. I want to create nested List with OutlineGroup with Realm data. I tried as belowHowever I got error Key path value type ‘List’ cannot be converted to contextual type ‘Results?’ at the line of OutlineGroup(items, children: .subItems) { item in, please see more at the screenshot. I’m not sure how to fix that and would greatly appreciate any assistance. Thank you!\nScreenshot 2023-03-28 at 10.46.452214×982 209 KB\n",
"username": "RE_GA"
},
{
"code": "import RealmSwift\n\nclass Item: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var id: ObjectId\n @Persisted var name = \"\"\n @Persisted var subItems = RealmSwift.List<Item>()\n}\n\nextension Item {\n var subItemArray: [Item]? {\n subItems.count == 0 ? nil : Array(subItems)\n }\n}\n\nstruct ContentView: View {\n @Environment(\\.realm) var realm\n @ObservedResults(Item.self) var items\n\n var body: some View {\n List {\n ForEach(items) { item in\n OutlineGroup(item, children: \\.subItemArray) { sItem in\n Text(sItem.name)\n }\n }\n }\n }\n}\n\n\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n }\n}\n\n",
"text": "I figured it out. I need to add extension to produce subItemArray. It should look like this",
"username": "RE_GA"
}
] | Nested OutlineGroup List with Realm in SwiftUI | 2023-03-28T03:50:27.576Z | Nested OutlineGroup List with Realm in SwiftUI | 710 |
|
null | [
"aggregation",
"data-modeling"
] | [
{
"code": "Order{\n_id: ObectId\ncompany_id: ObjectId\nuser_id: ObjectId\nreceipt_number: number\ndown_payment: number\ncomment: string\ncreatedAt: DateTime\nupdatedAt: DateTime\n}\n\nProduct {\n_id: ObjectId\norder_id: ObjectId\nname: string\nlink: string\ncost: number\nexpected_delivery_date: DateTime\ncurrent_status: Enum\ncreatedAt: DateTime\nupdatedAt: DateTime\n}\n\nOrderUpdate{\n_id: ObjectId\nproduct_id: ObjectId\norder_id: ObjectId\nstatus: Enum\ndetails: string\ncreatedAt: DateTime\nupdatedAt: DateTime\n}\n",
"text": "I have three collections with the following layout:I came from a relational DB background and wanted to experiment with a NoSql database for learning. Due to this I modeled it based on relational experience however decided to revalidate the schema design after learning about the specifics of mongodb. Given that an Order can have many products, I was wondering if this should be embedded onto the Order document itself. The issue arises from the fact that a products array can potentially be unbounded. While usually it can be anywhere between 1 and 10 there is the possibility that it can scale 100+ products though i expect this to be rare.What advice would you give to model this for scalability? Additionally the order must keep track of the number of products it has which i currently use a $lookup and $size check to fill this data when the order is pulled.On the other hand order updates has a many-to-many with products and is most times taken separately from the order through a lazy query so I dont think it should be embeddedFinally there is a possibility that I may need to include shipping details, delivery and supplier in the future but I dont know the specifics and therefore cannot include it right now.PS: I am rather new to modelling this way and it seems to be much more complicated than a relational db. Either that or im just overthinking it.",
"username": "Awakening_Quasar"
},
{
"code": "things that are queried together should stay together",
"text": "Hey @Awakening_Quasar,Welcome to the MongoDB Community Forums! When it comes to designing schema in MongoDB, it’s important to keep in mind that the design should be optimized for the specific use cases and queries that will be performed on the data ie. things that are queried together should stay together. This means that there isn’t necessarily a “one-size-fits-all” approach to schema design and it may be beneficial to work from the required queries first and let the schema design follow the query pattern.Given that an Order can have many products, I was wondering if this should be embedded onto the Order document itself.Can the product change after the order was created? If not, embedding is probably beneficial here.he issue arises from the fact that a products array can potentially be unbounded. While usually it can be anywhere between 1 and 10 there is the possibility that it can scale 100+ products though i expect this to be rareThe max document size is 16MB. I would suggest you experiment with the largest order you can think of, and check if this can exceed this limit or not. 16MB is a lot, and this doesn’t sound like a case of possible unbounded array growth. Just that some are very large. If the 16 MB limit is exceeded, then it might make sense to store the product information as a separate collection and reference the order_id in each product document.Finally there is a possibility that I may need to include shipping details, delivery and supplier in the future but I dont know the specifics and therefore cannot include it right now.PS: I am rather new to modelling this way and it seems to be much more complicated than a relational db. Either that or im just overthinking it.As for potential future additions to the schema, it’s important to keep in mind that schema design is an iterative process and can evolve over time as new requirements arise. It’s okay to start with a simpler schema and add additional fields or collections as and when needed. You can also use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Additionally, since you’re new to MongoDB, I’m linking some more useful resources that you can explore:\nData Modelling Course\nData Model DesignPlease let us know if you have any additional questions. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
}
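To make the 16 MB check mentioned above concrete, here is a small sketch using the bson package that ships with PyMongo; the sample order shape is invented purely for illustration.

```python
import bson

# Build a deliberately large sample order (shape invented for illustration).
order = {
    "receipt_number": 1,
    "down_payment": 100,
    "products": [
        {"name": f"product-{i}", "link": "https://example.com/p", "cost": 9.99}
        for i in range(500)
    ],
}

size = len(bson.encode(order))
print(f"{size} bytes, {size / (16 * 1024 * 1024):.3%} of the 16 MB document limit")
```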
] | Schmea Design Advice for Order management and shipping | 2023-03-23T16:21:46.862Z | Schmea Design Advice for Order management and shipping | 805 |
[] | [
{
"code": "",
"text": "\nMongoDB-Inshort1920×1077 163 KB\n",
"username": "Srinivas_Mutyala"
},
{
"code": "",
"text": "Since you posted here, I’m assuming you are okay with feedback so here is my input.This is a very confusing way to show sharding, I understand it’s a general purpose high level and you combined Replica + Sharding but it makes it look like the application connections directly to a mongos, when it’s the driver. Plus Shard 1 is connected directly to the Primary not the mongos.General Log: This should be mongod.log not mognod.logConfiguration: This should be mongod.conf as the default not mognod.confThe driver section by listing two programing languages you seems like those are the only 2 that are available.Management tools there is also Cloud Manager, this comes with MongoDB Enterprise subscription and its basically ops manager but hosted by MongoDB for on-prem deployments.Just some feedback you can take it or leave it ",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thanks for the detailed feedback, I’ll incorporate those chanegs, typos in my next version of document.\nThanks once again @tapiocaPENGUIN !\n\nimage1914×1071 352 KB\n",
"username": "Srinivas_Mutyala"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Explained In-short | 2023-03-27T16:08:27.562Z | MongoDB Explained In-short | 1,032 |
|
null | [
"data-modeling"
] | [
{
"code": "user: {\n ...,\n boards: [\n {\n ...,\n columns: [...]\n }\n ]\n}\n",
"text": "Hello everyone,I’m new to mongo, and I’m trying to create a simple kanban project, and I’m kind lost to how should I structure the schemas. Basically, I will have some one-to-many relationships:What is the best approach for this scenario?\nShould I create 5 collection (Users, Boards, Columns, Tasks and Subtasks) and add parent _id to child or should I have 1 collection (User) and embed everything.e.g:Thanks in advance!",
"username": "Leandro_Araujo1"
},
{
"code": "{ _id: <card_id>,\n user: <user_id>,\n column: <column_id>,\n card_index_in_column: <integer>,\n card_type: <contains sub-task or not>\n card_content: <text or references to sub-tasks>\n}\n{user:1, column:1, card_index_in_column:1}",
"text": "Hey @Leandro_Araujo1,Welcome to the MongoDB Community Forums! Should I create 5 collection (Users, Boards, Columns, Tasks and Subtasks) and add parent _id to child or should I have 1 collection (User) and embed everythingA general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. Based on what you described, it sounds like you have a clear hierarchy of entities, with a user owning multiple boards, each board containing multiple columns, each column containing multiple tasks, and each task containing multiple subtasks.It may be beneficial to work from the required queries first and let the schema design follow the query pattern but based on the structure you have described, it might make more sense to create embedded documents. From your description, thinking about the application a little bit, then the basic entity that we deal with in a kanban board is the “task card”. This can be a document, embedding all the other information pertaining to that task card like this:You can then create an index on {user:1, column:1, card_index_in_column:1} since it’s more likely “boards” are per-user. This index will make it faster to display certain columns for certain users and also display the cards in a sorted order per column.In general, you should favor embedding when:and favor normalization when:You can further read the following documentation to further cement your knowledge of Schema Design in MongoDB.\nData Model Design \nFactors to consider when data modeling in MongoDB \nCompound IndexesNote that these above points are just general ideas and not strict rules. I’m sure there are exceptions and counter examples to any of the points above, and generally, it’s more about designing the schema according to what will suit the use case best (i.e. how the data will be queried and/or updated), and not how the data is stored.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
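Here is a short PyMongo sketch of the suggested index and the query it serves; the database, collection, and variable names are placeholders.

```python
from pymongo import ASCENDING, MongoClient

cards = MongoClient()["kanban"]["cards"]  # placeholder connection and names

# Compound index so per-user, per-column reads come back already ordered.
cards.create_index(
    [("user", ASCENDING), ("column", ASCENDING), ("card_index_in_column", ASCENDING)]
)

# Fetch one column of one user's board in display order via the index.
user_id, column_id = "user-1", "todo"  # placeholder values
column_cards = cards.find({"user": user_id, "column": column_id}).sort(
    "card_index_in_column", ASCENDING
)
```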
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help with data modeling | 2023-03-23T04:05:16.165Z | Help with data modeling | 803 |
null | [] | [
{
"code": "",
"text": "Hi guys, I am currently working on a M0 free tier. Since few, I encounter some slow return for my request or even some times out.It seems that the reason could be the “total data transferred into or out” but I cannot find the place to check that.Is there a chart or something where we could see the current status of my cluster?",
"username": "Florian_B"
},
{
"code": "data transfer limit",
"text": "Hello @Florian_B ,Welcome to The MongoDB Community Forums! I would advise you to bring this up with the Atlas chat support team as they can provide more insight regarding the cluster metrics and any related issues that you could be facing because of reaching the data transfer limit in your shared tier cluster.For details regarding data transfer limit please checkRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Monitoring the total data transferred into or out of the cluster | 2023-03-16T14:54:06.919Z | Monitoring the total data transferred into or out of the cluster | 1,070 |
null | [
"aggregation",
"queries",
"atlas-online-archive"
] | [
{
"code": "db.temp.aggregate([\n{ $match: { \"changes\": {$exists:true} } },\n{ $project: {\n \"changes\": {\n $filter: { input: \"$changes\", as:\"changes\", cond: { $lt: [ \"$$changes.updatedTimestamp\", NumberLong(ISODate().getTime() - 63072000000) ] } } //for 2 years\n },\n }\n}])\n \"changes\" : [ \n {\n \"type\" : \"NEW\",\n \"newValue\" : \"123\",\n \"oldValue\" : \"\",\n \"updatedBy\" : \"System\",\n \"field\" : \"logKey\",\n \"updatedTimestamp\" : NumberLong(1536734184000)\n }, \n {\n \"type\" : \"NEW\",\n \"newValue\" : \"234\",\n \"oldValue\" : \"\",\n \"updatedBy\" : \"System\",\n \"field\" : \"releaseKey\",\n \"updatedTimestamp\" : NumberLong(1674818289272)\n }\n ]\n",
"text": "Hi,I have a collection with documents containing an array as below. I would like to remove the specific array objects from the “changes” array, based on the value of “updatedTimestamp” (older than 2 years) from this collection and place it in another collection.I am able to project based on requirement using aggregation but this should be done using find() as there is a restriction for this use case. The restriction is Online Archive does not allow aggregation pipeline and it accepts custom criteria in the form of a JSON which has to be as db.collection.find({}).Query I have isDocument StructureCould you please let me know if there are any feasible options.",
"username": "Satyanarayana_Ettamsetty1"
},
{
"code": "$filterchanges{\n type: 'NEW',\n newValue: '234',\n oldValue: '',\n updatedBy: 'System',\n field: 'releaseKey',\n updatedTimestamp: Long(\"1674818289272\")\n}\n",
"text": "Hi @Satyanarayana_Ettamsetty1 - Welcome to the community I would like to remove the specific array objects from the “changes” array, based on the value of “updatedTimestamp” (older than 2 years) from this collection and place it in another collection.I am able to project based on requirement using aggregation but this should be done using find() as there is a restriction for this use case. The restriction is Online Archive does not allow aggregation pipeline and it accepts custom criteria in the form of a JSON which has to be as db.collection.find({}).Based on the above, my interpretation is that you wish to move the contents (or specific element which matches the $filter conditions in the changes array field) to another collection and have these archived using Online Archive - Is this correct? - i.e., You only want the elements filtered archived rather than the updated document.If the above is somewhat correct I believe you’re only wanting to archive the following then?:To better clarify could you provide the following information:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Yes. The requirement is to archive an element alone from that array. Correct, The collection that I am referring to is Online Archive collection.",
"username": "Satyanarayana_Ettamsetty1"
},
{
"code": "",
"text": "Thanks for clarifying Satyanarayana.Array properties are currently not supported in partition fields for Online Archive. You can raise a feedback post, which the product team monitor, with your use case details in which others can vote for.Regards,\nJason",
"username": "Jason_Tran"
}
] | How can we use $filter in combination with $expr in find() function | 2023-02-03T13:10:27.077Z | How can we use $filter in combination with $expr in find() function | 1,480 |
null | [
"python",
"atlas-cluster",
"serverless"
] | [
{
"code": " \"errorMessage\": \"dev-pe-0.vvwclmg.mongodb.net:27017: [Errno -5] No address associated with hostname, Timeout: 30s, Topology Description: <TopologyDescription id: 64207dd34375c6ac1adb4bf4, topology_type: Unknown, servers: [<ServerDescription ('dev-pe-0.vvwclmg.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('dev-pe-0.vvwclmg.mongodb.net:27017: [Errno -5] No address associated with hostname')>]>\",\n \"errorType\": \"ServerSelectionTimeoutError\"\nreadWriteAnyDatabasecreate-vpc-endpointimport pymongo\n\n\nclient = pymongo.MongoClient(os.getenv(\"MONGODB_CONNECTION_URI\"))\n\n\ndef handler(event, context):\n data = some-data-to-insert\n\n db = client.myDatabase\n db.myCollection.insert_many(data)\n\nMONGODB_CONNECTION_URImongodb://dev-pe-0.vvwclmg.mongodb.net/?authMechanism=MONGODB-AWS",
"text": "The error I receive :Resources I’ve set up :How I’m calling MongoDB (code snippet) :where MONGODB_CONNECTION_URI = mongodb://dev-pe-0.vvwclmg.mongodb.net/?authMechanism=MONGODB-AWSThis seems to be a connectivity issue - what am I missing?",
"username": "nefariousdream"
},
{
"code": "+srv",
"text": "I needed to add +srv to my connection URI. The documentation on Authentication Examples — PyMongo 4.3.3 documentation is incorrect.",
"username": "nefariousdream"
},
{
"code": "",
"text": "Hi @nefariousdream thanks for pointing this out. While the documentation is correct (because srv connection URIs and MONGODB-AWS authentication are unrelated features), I will update the docs since using srv is the most common thing nowadays. I opened https://jira.mongodb.org/browse/PYTHON-3643",
"username": "Shane"
},
{
"code": "",
"text": "Hey @Shane, for my knowledge, why was the lambda unable to connect to the cluster without the srv connection URI, and what I should have done differently if not using it?",
"username": "nefariousdream"
},
{
"code": "",
"text": "Your solution to add back the “+srv” is correct. mongodb+srv:// use a different hostname than mongodb://. It is expected that DNS will fail with NXDOMAIN or “No address associated with hostname” errors when the “+srv” portion is removed from a URI without also changing the hostname. Feel free to read more about this here: https://www.mongodb.com/docs/manual/reference/connection-string/#dns-seed-list-connection-format",
"username": "Shane"
},
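For anyone hitting the same error, a minimal sketch of the corrected connection follows; the hostname is copied from the question, and only the URI scheme changes.

```python
import pymongo

# mongodb+srv:// makes the driver resolve the cluster's SRV/TXT DNS records,
# which is what this Atlas hostname requires.
client = pymongo.MongoClient(
    "mongodb+srv://dev-pe-0.vvwclmg.mongodb.net/?authMechanism=MONGODB-AWS"
)
```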
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | AWS Lambda unable to connect to Atlas Serverless | 2023-03-26T18:29:48.689Z | AWS Lambda unable to connect to Atlas Serverless | 1,236 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "[\n {\n \"$search\": {\n \"index\": \"TextIndex\",\n \"compound\": {\n \"must\": [\n {\n \"near\": {\n \"path\": \"adt\",\n \"origin\": datetime.datetime(2023, 3, 27, 3, 6, 14, 716199),\n \"pivot\": 60000,\n \"score\": {\"boost\": {\"value\": 1000000000}},\n }\n },\n {\"equals\": {\"path\": \"st\", \"value\": 2}},\n {\"equals\": {\"path\": \"cnf.st\", \"value\": 1}},\n {\"exists\": {\"path\": \"att.path\"}},\n ]\n },\n \"highlight\": {\"path\": \"msg\"},\n }\n },\n {\n \"$project\": {\n \"_id\": 1,\n \"aactual_time\": 1,\n \"time_stamps\": 1,\n \"data_type\": 1\n }\n },\n {\"$limit\": 20},\n]\n",
"text": "I’m using this Atlas search pipeline in my query aggregation, but it gets every document in my database. I want to apply a date range in this pipeline to get documents on a specific date . But didn’t know how to manage the origin field. Any help, please?",
"username": "ahmad_al_sharbaji"
},
{
"code": "range",
"text": "Hi @ahmad_al_sharbaji - Welcome to the community.Have you tried looking into the range operator?If you need further help, please provide sample documents, expected output and index definition. Be sure to include the date range if you provide these examples.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "pivot",
"text": "Just an additional note: the pivot option value is used to calculate scores of Atlas Search result documents.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
}
] | How to use date range in MongoDB aggregation? | 2023-03-27T01:37:03.925Z | How to use date range in MongoDB aggregation? | 850 |
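As a rough illustration of the `range` suggestion above, the date filter can be added as another clause in `compound.must`, leaving `near` purely for scoring. This is a sketch only: the field names are taken from the question and the dates are placeholders for the day of interest.

```javascript
// Sketch of a $search stage that restricts "adt" to one calendar day (dates are illustrative).
{
  $search: {
    index: "TextIndex",
    compound: {
      must: [
        {
          range: {
            path: "adt",
            gte: ISODate("2023-03-27T00:00:00Z"), // start of the target day
            lt: ISODate("2023-03-28T00:00:00Z")   // exclusive end of the target day
          }
        },
        // near still contributes to relevance scoring within that window
        { near: { path: "adt", origin: ISODate("2023-03-27T03:06:14Z"), pivot: 60000 } },
        { equals: { path: "st", value: 2 } }
      ]
    }
  }
}
```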
null | [
"react-native"
] | [
{
"code": "",
"text": "Hello. I am creating an app and It has trigger functions which creates a specific document which needs to be completed from user side. I wanted to ask if there is a way how can I update realm local db with this information. so that the user would have a skeleton to fill in.Thank you for your help.Kind Regards,\nLukas",
"username": "Lukas_Vainikevicius"
},
{
"code": "",
"text": "Hello @Lukas_Vainikevicius , this is very possible with the Device sync feature (Now it is recommended to use Flexible sync)https://www.mongodb.com/docs/atlas/app-services/sync/",
"username": "Oben_Tabiayuk"
}
] | Populating realm local db with Atlas stored data | 2022-08-13T15:40:05.591Z | Populating realm local db with Atlas stored data | 1,860 |
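For context, a minimal sketch of what the Flexible Sync setup can look like in the React Native SDK. The object type `Task` and the `ownerId` field are placeholders; the idea is that once the trigger inserts the skeleton document into the synced collection, any device whose subscription matches it receives it automatically.

```javascript
// Sketch (Realm JS SDK, Flexible Sync). "Task" and "ownerId" are hypothetical names.
const realm = await Realm.open({
  schema: [TaskSchema],
  sync: {
    user: app.currentUser,
    flexible: true,
    initialSubscriptions: {
      update: (subs, syncedRealm) => {
        // Subscribe to the documents this user should see,
        // e.g. the skeletons the trigger created for them.
        subs.add(syncedRealm.objects("Task").filtered("ownerId == $0", app.currentUser.id));
      },
    },
  },
});
```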
null | [
"aggregation",
"compass"
] | [
{
"code": "",
"text": "Hello everyone, i want to run aggregation from atlas ui, and it is long running one, but no matter what i’m setting in max time ms, the result is always the same, it is interesting because in compass it works, please help me to understand what i’m doing wrong?",
"username": "Bohdan_Chystiakov"
},
{
"code": "max time ms",
"text": "Hello @Bohdan_Chystiakov ,I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, can you please share more details for me to understand your use-case better?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Max time ms option doesn't work | 2023-03-21T08:46:56.714Z | Max time ms option doesn’t work | 503 |
null | [
"queries",
"schema-validation"
] | [
{
"code": "{ $jsonSchema: { bsonType: \"object\", required: [\"name\"], properties: { name: { bsonType: \"string\", description: \"Tag name must be a string and is required.\", }, }, }, }{ \"name\": 1 }",
"text": "I have tags schema created in mongo database. I am putting validation verification using JsonSchema. { $jsonSchema: { bsonType: \"object\", required: [\"name\"], properties: { name: { bsonType: \"string\", description: \"Tag name must be a string and is required.\", }, }, }, }when I try request with the { \"name\": 1 }. I don’t get the detailed error message mentioning the \"“Tag name must be a string and is required.” as reponse to my API, when I run this in shell i get detailed response {\nfailingDocumentId: {},\ndetails: {\noperatorName: ‘$jsonSchema’,\nschemaRulesNotSatisfied: [\n{\noperatorName: ‘properties’,\npropertiesNotSatisfied: [\n{\npropertyName: ‘name’,\ndetails: [\n{\noperatorName: ‘bsonType’,\nspecifiedAs: {\nbsonType: ‘string’\n},\nreason: ‘type did not match’,\nconsideredValue: 1,\nconsideredType: ‘int’\n}\n]\n}\n]\n}\n]\n}\n} is it not enabled on the MongoDB atlas, I am using 5.0.15 cluster any help will be appreciated.",
"username": "Sagar_Mahadik"
},
{
"code": ", validationAction: \"error\"}",
"text": "Have you tried adding , validationAction: \"error\" before the last } ?",
"username": "Jack_Woehr"
},
{
"code": "{\n $jsonSchema: {\n bsonType: 'object',\n required: [\n 'name'\n ],\n properties: {\n name: {\n bsonType: 'string',\n description: 'Tag name must be a string and is required.'\n }\n }\n }\n}\nAdditional information:\n{\n failingDocumentId: {},\n details: {\n operatorName: '$jsonSchema',\n schemaRulesNotSatisfied: [\n {\n operatorName: 'properties',\n propertiesNotSatisfied: [\n {\n propertyName: 'name',\n details: [\n {\n operatorName: 'bsonType',\n specifiedAs: {\n bsonType: 'string'\n },\n reason: 'type did not match',\n consideredValue: 1,\n consideredType: 'int'\n }\n ]\n }\n ]\n }\n ]\n }\n}\n{\n \"name\": 1\n}\n",
"text": "Thank you for your time!This is my schema. i get this message in the mongo db compass shell,but when I make an REST API call with the requestI get only this as error messageDocument failed validationexpected result was to get the detailed error message, I am using Mongodb Atlas.Reference link : Improved Error Messages for Schema Validation in MongoDB 5.0 | MongoDB",
"username": "Sagar_Mahadik"
},
{
"code": ", validationAction: \"error\"}",
"text": "Have you tried adding , validationAction: \"error\" before the last } ?I still would suggest trying this.",
"username": "Jack_Woehr"
},
{
"code": "{\n $jsonSchema: {\n bsonType: 'object',\n required: [\n 'name'\n ],\n properties: {\n name: {\n bsonType: 'string',\n description: 'Tag name must be a string and is required.'\n }\n }\n },\n validationAction: 'error'\n}\n",
"text": "Hi Thank you for your time, I added this but I still get the legacy message. MongoError: “Document failed validation” and not the details as mentioned in the article.",
"username": "Sagar_Mahadik"
},
{
"code": "",
"text": "What version of MongoDB Server are you running?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "5.0.15 it was recently upgraded from 4.4 automatically I think.",
"username": "Sagar_Mahadik"
},
{
"code": "",
"text": "Have you examined the System Log Options in your MongoDB configuration file?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Also see Log Messages",
"username": "Jack_Woehr"
}
] | Issue with JSON Schema Validation error in Atlas | 2023-03-23T12:16:16.484Z | Issue with JSON Schema Validation error in Atlas | 1,375 |
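One detail the thread leaves implicit: on 5.0+ the detailed report is returned to drivers as well, in the error's `errInfo` field, but the application layer has to read it and forward it; the shell just happens to print it for you. A sketch, assuming the REST API is built on the official Node.js driver (collection name is illustrative):

```javascript
// Sketch: surfacing the detailed validation report from an API built on the Node.js driver.
try {
  await db.collection("tags").insertOne({ name: 1 });
} catch (err) {
  // For schema-validation failures, err.errInfo carries the same
  // schemaRulesNotSatisfied report that mongosh prints.
  console.dir(err.errInfo, { depth: null });
  // e.g. respond with it instead of the bare "Document failed validation":
  // res.status(400).json({ message: err.message, details: err.errInfo });
}
```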
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Can someone suggest to me how to structure the MongoDB database, when I have multiple unique sensors and I am capturing data from these sensors for 24 hours daily after every 10 minutes?\nCurrently, I am creating a separate collection for each sensor data which has multiple documents of sensor data reading after every 10 minutes.But at one point the number of collections is going to increase massively. Then how can I possibly reduce the number of collections generated? How to model these collections so that performance will not degrade? Please note each collection will have multiple documents with timestamp and it’s reading.",
"username": "Prasanna_Sasne"
},
{
"code": "metadata",
"text": "Have you considered using a single time series collection (https://www.mongodb.com/docs/manual/core/timeseries-collections/)? You can then use the metadata component to store the information about your sensors.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "No, I have not tried this. Thank you. Can you also share with me what the schema structure will look like in this scenario?",
"username": "Prasanna_Sasne"
}
] | Reduce massive number of collections | 2023-03-22T19:09:11.890Z | Reduce massive number of collections | 687 |
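A rough sketch of the suggested shape (names are illustrative): one time series collection holds every sensor's readings, the sensor identity goes in the metaField, and each reading is a single document, so the number of collections stays constant no matter how many sensors are added.

```javascript
// Create one shared time series collection for all sensors (mongosh syntax, illustrative names).
db.createCollection("sensor_readings", {
  timeseries: {
    timeField: "timestamp",  // when the reading was taken
    metaField: "sensor",     // which sensor produced it
    granularity: "minutes"   // readings arrive roughly every 10 minutes
  }
});

// One document per reading; sensor identity lives in the metadata field.
db.sensor_readings.insertOne({
  timestamp: ISODate("2023-03-22T10:10:00Z"),
  sensor: { sensorId: "S-0042", location: "line-3" },
  temperature: 21.7,
  humidity: 0.41
});
```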
null | [
"containers"
] | [
{
"code": "",
"text": "Hi everyone and thanks in advance for the help. Let me briefly explain the problem: I need to implement a software module that runs in an HA environment on two server machines for an aerospace project. So, I have a cluster of only two machines under Docker Swarm. In this cluster, a single stack is executed with 5 services, and for the context, I chose MongoDB. I am asking for your help to understand the best strategy to install an instance of MongoDB that allows me to have consistent data on both machines and to be tolerant to the failure of one server. I have no possibility to expand the number of machines.",
"username": "Giovanni_92"
},
{
"code": "",
"text": "Lets say:\nServer 1 has Primary, Secondary\nServer 2 has Secondary\nIf server 1 goes down the cluster is read only because there will be no primary\nIf server 2 goes down the cluster will be health and R/W is possible.Another example of 5 node replica\nServer 1 Primary, Secondary, Secondary\nServer 2 Secondary, Secondary\nAgain if Sever 1 goes down you have the same issue.",
"username": "tapiocaPENGUIN"
}
] | Best strategy in an HA with 2 servers | 2023-03-27T16:13:57.615Z | Best strategy in an HA with 2 servers | 686 |
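To make the trade-off concrete, here is a sketch of a three-voting-member replica set squeezed onto the two Swarm hosts (hostnames, ports and priorities are illustrative). With only two physical machines there is no arrangement that keeps a primary if the host carrying two of the three votes fails, which is the limitation described above.

```javascript
// Sketch (mongosh): three members across two hosts. Whichever host runs two voting
// members is the one whose failure leaves the set without a primary.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server1:27017", priority: 2 },
    { _id: 1, host: "server1:27018", priority: 1 },
    { _id: 2, host: "server2:27017", priority: 1 }
  ]
});
```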
null | [
"dot-net",
"production"
] | [
{
"code": "",
"text": "This is a patch release that addresses some issues reported since 2.19.0 was released.The list of JIRA tickets resolved in this release is available at CSHARP JIRA project.Documentation on the .NET driver can be found here.There are no known backwards breaking changes in this release.",
"username": "Dmitry_Lukyanov"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | .NET Driver 2.19.1 Released | 2023-03-27T17:59:53.738Z | .NET Driver 2.19.1 Released | 1,192 |
null | [
"aggregation",
"java"
] | [
{
"code": "org.mongodb:mongodb-driver-sync:4.0.5allowDiskUse: trueimport com.mongodb.AggregationOptions;\n...\nAggregationOptions options = AggregationOptions.builder().allowDiskUse(true).build();\n...\nList<Document> queryResults = collection.aggregate(this.pipeline).withOptions(options).into(new ArrayList<>());\nerror: cannot find symbol\nimport com.mongodb.AggregationOptions;\n ^\n symbol: class AggregationOptions\n location: package com.mongodb\ncom.mongodballowDiskUse:true",
"text": "Java Driver Version: org.mongodb:mongodb-driver-sync:4.0.5I am trying to figure out how to pass in allowDiskUse: true to my aggregation pipeline. I have done some searching and found that the following should workHowever I get the following errorThis link to mongodb.github.io leads me to believe that this package should contain this class. I am using the com.mongodb package in several other locations without issue.What is the best way to specify allowDiskUse:true in mongodb’s Java Driver? Any ideas why that package import isn’t working?Thanks so much!",
"username": "Matt_Young"
},
{
"code": "List<Document> queryResults = collection.aggregate(this.pipeline).allowDiskUse(true).into(new ArrayList<>());\n",
"text": "The solution was this",
"username": "Matt_Young"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Java Driver allowDiskUse:true | 2023-03-24T03:59:52.692Z | Java Driver allowDiskUse:true | 856 |
null | [
"atlas",
"app-services-hosting"
] | [
{
"code": "",
"text": "I want to deploy a MERN app with the help of cPanel. I’ve been researching for it for so long but I am not able to find right direction and procedure of how to connect my mongoDb atlas with cPanel with the deployment of MERN app over it. Can I get a simple overview like how can I solve this.",
"username": "Harsh_Sharma7"
},
{
"code": "",
"text": "Can you connect by shell?\nCheck this thread",
"username": "Ramachandra_Tummala"
}
] | How to connect mongoDB atlas with cPanel? | 2023-03-27T09:25:17.713Z | How to connect mongoDB atlas with cPanel? | 1,213 |
[] | [
{
"code": "",
"text": "\nMongoDB_Developer_LearningPath1877×1063 109 KB\n",
"username": "Srinivas_Mutyala"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Developer Leaning & Certification Path | 2023-03-27T16:18:49.278Z | MongoDB Developer Leaning & Certification Path | 1,033 |
|
[] | [
{
"code": "",
"text": "\nMongoDB_DBA_LearningPath1918×1066 122 KB\n",
"username": "Srinivas_Mutyala"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB DBA/Architect Learning and Certification Path | 2023-03-27T16:17:22.227Z | MongoDB DBA/Architect Learning and Certification Path | 954 |
|
[] | [
{
"code": "",
"text": "\nimage907×192 10.5 KB\n\nthe above image gives the statusState Recv-Q Send-Q Local Address:Port Peer Address:Port Process\nLISTEN 0 4096 0.0.0.0:9191 0.0.0.0:*\nLISTEN 0 128 127.0.0.1:27017 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:8080 0.0.0.0:*\nLISTEN 0 511 0.0.0.0:80 0.0.0.0:*\nLISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*\nLISTEN 0 4096 127.0.0.1:43637 0.0.0.0:*\nLISTEN 0 128 0.0.0.0:22 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:6080 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:6081 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:6082 0.0.0.0:*\nLISTEN 0 4096 [::]:9191 [::]:*\nLISTEN 0 4096 [::]:8080 [::]:*\nLISTEN 0 511 [::]:80 [::]:*\nLISTEN 0 128 [::]:22 [::]:*\nLISTEN 0 4096 [::]:6080 [::]:*\nLISTEN 0 4096 [::]:6081 [::]:*\nLISTEN 0 4096 [::]:6082 [::]:*\nLISTEN 0 100 *:9090 :\noutput of the command ssn -tlnproot 804843 1 0 07:12 ? 00:00:07 mongod --fork --logpath /var/lib/mongodb/mongodb.log --dbpath /var/lib/mongodb\nroot 805223 805199 0 07:14 pts/2 00:00:00 tail -100f /var/log/mongodb/mongod.log\noutput for ps -aef | grep [m]ongodsrwx------ 1 root root 0 Mar 27 07:12 /tmp/mongodb-27017.sock\noutput for the command ls -l /tmp/mongodb-*",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "Looks like you started mongod as root?\nThat tmp file should not be owned by root\nFix the file/dir permissions and start the service again with sysctl",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Gowtham_Chendra,\ndo the checks suggested by @Ramachandra_Tummala, and paste the configuration file here, coz i think there is something wrong inside.also here it looks like you have already launched a mongod process from the command line:root 804843 1 0 07:12 ? 00:00:07 mongod --fork --logpath /var/lib/mongodb/mongodb.log --dbpath /var/lib/mongodbAnd you are trying to pull up the mongo service via systemctl when it is actually already running a mongod process launched from the command line.it’s not very clear to me what you’re trying to do.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Thank you for the reply, I changed the owner, but the mongod service is not started, it is in failed state only. Sir, do you have any other solution…",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "Did you check permissions,/ownership of dbpath,logpath dirs?\nYou can start mongod from command line giving different port,dbpath,logpath dirs but you need to use the port number while connecting\nStart mongod as normal user not as root\nCheck mongodb documentation on how to start mongod from command kine",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Changed the owner from root to mongodb sir. The mongod.service is still in failed state. I tried to start/stop/enable/disable/restart, but nothings works",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "Check mongod.log on why it is failing\nDid you take care of both dirs & tmp file?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "logpath:\nls -la /var/log/mongodb/mongod.log\n-rw------- 1 mongodb mongodb 4549510 Mar 27 12:19 /var/log/mongodb/mongod.logdb path:\ndrwxr-xr-x 4 mongodb mongodb 12288 Mar 27 12:08 mongodbsocket file in tmp dir\nsrwx------ 1 mongodb mongodb 0 Mar 27 12:19 mongodb-27017.sockLogs:\n\nimage1333×658 37.3 KB\nI don have any idea what to do sir…",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "Hi @Gowtham_Chendra ,\nplease, attach the output of ps -ef | grep mongo.\nAnother information you can attach is cat /etc/passwd | grep mongoRegards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Hi its my bad, I have changes owner for the particular file only, but need to change owner for the entire directory.Now the service is running. But I can not able to connect the DB in MongoDBcompass… May I know the reasons and solution. I can able to connect with mongoDB locally…",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "hi @Gowtham_Chendra,\ncheck if the bind_ip is set correctly and if the firewall rules are set correctly.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Sorry Fabio, after disabling the ufw, I can able to connect with mongoDB using MongoDB Compass. Now every thing is working fine.Issue: What ever I do, the mongoDB service state is failed.sudo ufw status (if it is active need to disable)\nsystemctl stop mongod\nsudo chown -R mongodb:mongodb /var/lib/mongodb/\nchown mongodb:mongodb mongodb-27017.sock\nsystemctl start mongod\nsystemctl status mongod",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "@Gowtham_Chendra ,\nPerfect, mark the solution in the last answer🤓\nBR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Done and provided complete solution which Works for me @Fabio_Ramohitaj and @Ramachandra_Tummala … Thank you for your help…",
"username": "Gowtham_Chendra"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb service is in failed state | 2023-03-27T07:44:01.017Z | Mongodb service is in failed state | 710 |
|
[
"node-js"
] | [
{
"code": "",
"text": "\nimage1920×1080 191 KB\n",
"username": "Hung_Viet"
},
{
"code": "MONGO_URIMONGO_URI=\"mongodb+srv://...\" node server.jsexport",
"text": "@Hung_Viet the error is indicating the connection string is not available (instead of being of type “string”, it’s of type “undefined”).Given the code you’ve shared it’s likely the MONGO_URI environment variable wasn’t made available to the Node.js process.Either start your application by doing MONGO_URI=\"mongodb+srv://...\" node server.js, or ensure you export the variable first (see Environment Variables in Windows/macOS/Linux)",
"username": "alexbevi"
}
] | Can someone help me fix this? I really can't solve it | 2023-03-25T18:14:23.350Z | Can someone help me fix this? I really can’t solve it | 447 |
|
null | [] | [
{
"code": "",
"text": "Hello,My requirement is to send text notification to user whenever there has been an update on the document. I see that this can be possible with database triggers and twilio service.\nRecent mongodb docs says, third party services (including twilio services) were deprecated.\nI would like to know what is the other way that I can work to meet my requirement.Thank you,\nSunita",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Hi, our current recommendation is just to use the twilio sdk to make your calls within App Services functions (instead of us having a dedicated twilio service).See here for more details:",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "I already have a database collection. So, in order to send document update notifications to twilio number, do we need to build realm application, as mention in your second blog?",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Im not sure what you mean by \"database collection. I think the simplest version of what you want to do is:Done! Does that sound right to you?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Understood. I did the way you explained. Thank you so much.When I created simple function, I am getting error asran at *****\ntook\nerror:\nError: password is requiredFunction is as below:\nexports = function(changeEvent) {const accountSid = context.values.get(“Twilio_SID_Value”);\nconst authToken = context.values.get(“Auth_Token_Twilio”);\nconst client = require(‘twilio’)(accountSid, authToken);client.messages.create({\nfrom: ‘+11111111’,\nto: ‘+22222222’,\nbody: ‘Document with name was updated.’,\n});}Not sure why I am getting this error. Could you please let me know where I am doing wrong?",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Can you send a link to your app in the UI? Also, not that you should run it like this in production, but can you try just pasting the paintext into the function editor to see if that gets it to work?I am looking at this and it seems like that is an error from the Twilio SDK: javascript - twilio error 'username required' - Stack Overflow",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thank you. I am doing this in function editor only. Will try one more time today. Thank you for your time and support.",
"username": "sunita_kodali"
},
{
"code": "Customer with id ${change_event._id} was updatedPatient with id ${change_event.fullDocumentBeforeChange} was updated",
"text": "Hi Tyler,\nI was able to fix the issue with the help of environment variables and send message to client via twilio. Thanks for all your support.\nNow I trying to send specific message like instead of “Customer record got updated” trying to send “Customer with customer id got update”.\nMessage is being delivered as “Customer with id [object Object] was updated”.Code is as below:\nconst body = Customer with id ${change_event._id} was updated;\nMessage delivered as-Customer with id [object Object] was updated\nconst body = Patient with id ${change_event.fullDocumentBeforeChange} was updated;\nMessage delivered as-Patient with id undefined was updated.All I am trying to do is, capture specific field in the document that got updated.\nAny idea how I can fix this?",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Hi, this is just a normal javascript issue, however, the basics is that you are trying to print something that is an object and not a string. I would reccomend using console.log() to figure out what you are passing in and re-reading the Change Event documentation.I suspect you might want something like “change_event.documentKey._id.toHexString()” or “change_event.fullDocumentBeforeChange._id.toHexString()”",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "They didn’t work. I’ll look into it. Thank you for your response.",
"username": "sunita_kodali"
}
] | Text Notification | 2023-03-22T16:19:17.748Z | Text Notification | 674 |
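For anyone following along, a sketch of the trigger function pulling a readable identifier out of the change event. The value names and phone numbers mirror the ones used earlier in the thread; `updateDescription` is only present on update events, and `fullDocument` is only populated if the trigger is configured to include it.

```javascript
// Sketch of an App Services trigger function (value names and numbers are from this thread).
exports = async function (changeEvent) {
  // documentKey always carries the _id of the changed document
  const id = changeEvent.documentKey._id.toString();

  // For update events, updateDescription lists the fields that changed
  const changedFields = changeEvent.updateDescription
    ? Object.keys(changeEvent.updateDescription.updatedFields).join(", ")
    : "unknown fields";

  const body = `Customer ${id} was updated (${changedFields}).`;

  const accountSid = context.values.get("Twilio_SID_Value");
  const authToken = context.values.get("Auth_Token_Twilio");
  const client = require("twilio")(accountSid, authToken);

  await client.messages.create({ from: "+11111111", to: "+22222222", body });
};
```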
null | [
"aggregation"
] | [
{
"code": "{\n $project: {\n Artist: 1,\n Albums: {\n $filter:{\n input : \"$Albums\",\n as : \"album\",\n cond : {\n $in: [{\"$ifNull\":[{$first: \"$$album.Sales.Total_Sales.Date\"},-1]},dateArray]\n }\n }\n }\n }\n }\n $in: [{\"$ifNull\":[\"$$album.Sales.Total_Sales.0.Date\",-1]},dateArray]\n$in: [\"$$album.Sales.Total_Sales.0.Date\",dateArray]\n",
"text": "Hello,\nI am trying to filter Albums object in the last project stage of aggregation as the belowThe weird thing here is I try to use the array index first instead of $first operator like beloworand it doesn’t work like that . I have to specifically use the $first operator instead of array index. What am I missing here? Is there any documentation for this behaviour?",
"username": "Kyaw_Zayar_Tun"
},
{
"code": "$first",
"text": "Hi @Kyaw_Zayar_Tun,The weird thing here is I try to use the array index first instead of the $first operator like below\nand it doesn’t work like that. I have to specifically use the $first operator instead of the array index. What am I missing here?Could you please provide more details on why an array index instead of the $first and what the workflow of the aggregation pipeline entails?However, in order to understand the requirement better, could you please share some information to replicate in my local environment?Regards\nKushagra",
"username": "Kushagra_Kesav"
}
] | Using dot notation in the filter stage doesn't work | 2023-03-25T16:41:35.046Z | Using dot notation in the filter stage doesn’t work | 521 |
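The likely explanation for the behaviour in this thread: in aggregation expressions a numeric path segment such as `Total_Sales.0.Date` is treated as a field named "0", not as an array index, so the lookup resolves to missing; positional access inside expressions has to go through operators like `$first` or `$arrayElemAt`. A small sketch of the two equivalent forms:

```javascript
// Inside an aggregation expression, "array.0.field" does NOT index into the array,
// so the original condition compared a missing value. These two forms work instead:
{ $ifNull: [ { $first: "$$album.Sales.Total_Sales.Date" }, -1 ] }

// Equivalent, using $arrayElemAt explicitly:
{ $ifNull: [ { $arrayElemAt: [ "$$album.Sales.Total_Sales.Date", 0 ] }, -1 ] }
```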
null | [
"queries",
"python"
] | [
{
"code": "collection.update_one(\n\t{\"item_number\": item_data[\"item_number\"], \"component_id\": item_data[\"component_id\"], \"timestamp\": {\"$lte\": item_data[\"timestamp\"]}},\n\t{\"$set\": item_data},\n\tupsert=True,\n)\ncollection.update_one(\n\t{\"item_number\": item_data[\"item_number\"], \"component_id\": item_data[\"component_id\"]},\n\t{\"$set\": {\n\t\t\"item_number\": {\"$cond\": [{\"timestamp\": {\"$lte\": item_data[\"timestamp\"]}}, item_data[\"item_number\"], \"$item_number\"]},\n\t\t\"item_name\": {\"$cond\": [{\"timestamp\": {\"$lte\": item_data[\"timestamp\"]}}, item_data[\"item_name\"], \"$item_name\"]},\n\t\t\"component_id\": {\"$cond\": [{\"timestamp\": {\"$lte\": item_data[\"timestamp\"]}}, item_data[\"component_id\"], \"$component_id\"]},\n\t\t\"component_name\": {\"$cond\": [{\"timestamp\": {\"$lte\": item_data[\"timestamp\"]}}, item_data[\"component_name\"], \"$component_name\"]},\n\t}},\n\tupsert=True,\n)\n",
"text": "I’m trying to make a query where it checks if a document exists based on filter, if it does, it checks if the timestamp is newer and updates if that is the case, and if the document doesn’t exist, it creates one. The old code that was supposed to do this was as follows:This however caused a data duplication bug if the timestamp of the uploaded file was older than the one existing in the collection (because it no longer matched the filter). I also tried to do this:Which is a bit of a mess (because the actual query has more fields than what I included) and it caused some documents getting deleted for some reason. Is there a better way of doing this? This sounds like something that should be possible, but I can’t figure out how to do it.",
"username": "Scott_Woofer"
},
{
"code": "",
"text": "To help us experiment with your use-cases please share sample documents and sample item_data values for each situations.",
"username": "steevej"
},
{
"code": "",
"text": "The data that we are working with is sensitive, so I can’t share it. In a bigger picture the contents of collection and documents don’t actually matter, besides the timestamp and how you decide to identify which document to update. The concept here is more important than the details, like what the data looks like, etc.",
"username": "Scott_Woofer"
}
] | How do I use $cond to make a condition for entire document, not just one field? | 2023-03-23T13:18:12.920Z | How do I use $cond to make a condition for entire document, not just one field? | 585 |
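In the spirit of the question, one pattern worth sketching (in mongosh syntax; it translates directly to PyMongo by passing the pipeline as a list) is an update-with-aggregation-pipeline, where a single `$cond` compares the stored timestamp against the incoming one and either merges the new fields in or returns the document unchanged, while `upsert` still creates missing documents. This is a sketch that assumes the filter fields uniquely identify a document and that `timestamp` is stored on it.

```javascript
// Sketch: "apply only if newer" upsert using a pipeline update (itemData is
// the incoming payload, a plain object that includes its own timestamp field).
db.items.updateOne(
  { item_number: itemData.item_number, component_id: itemData.component_id },
  [
    {
      $replaceWith: {
        $cond: [
          // stored timestamp missing (new doc) or older/equal: take the incoming data
          { $lte: [ { $ifNull: ["$timestamp", new Date(0)] }, itemData.timestamp ] },
          { $mergeObjects: ["$$ROOT", itemData] },
          // otherwise keep the document exactly as it is
          "$$ROOT"
        ]
      }
    }
  ],
  { upsert: true }
);
```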
null | [] | [
{
"code": "",
"text": "Facing this error while trying to start:\nmongodb@cfg1:/$ sudo systemctl status mongod.service\n● mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\nActive: failed (Result: exit-code) since Thu 2022-04-14 14:35:13 IST; 6min ago\nDocs: https://docs.mongodb.org/manual\nMain PID: 8656 (code=exited, status=1/FAILURE)Apr 14 14:35:13 systemd[1]: Started MongoDB Database Server.\nApr 14 14:35:13 systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE\nApr 14 14:35:13 systemd[1]: mongod.service: Failed with result ‘exit-code’.I have checked permissions/ownership for dbpath , logpath & keyFile .\nmongod.log doesn’t print anything. Also .sock file doesn’t print anything.",
"username": "Debalina_Saha"
},
{
"code": "ps -aef | grep [m]ongod\nss -tlnp\n",
"text": "Share your systemd service file.Share your configuration file.Share the output of the following commands:",
"username": "steevej"
},
{
"code": "",
"text": "mongodb@ACIPRO:/$ ps -aef | grep [m]ongod\nroot 1237 3043 0 05:17 ? 00:00:00 sshd: mongodb [priv]\nmongodb 1429 1237 0 05:17 ? 00:00:01 sshd: mongodb@pts/1\nmongodb 1454 1429 0 05:17 pts/1 00:00:00 -bash\nroot 5410 1 99 Apr25 ? 1-05:32:10 mongod -f /apps/mongodb/config/mongod.conf --fork\nroot 5977 19077 6 Apr25 ? 01:20:37 /usr/local/percona/pmm2/exporters/mongodb_exporter --collector.collstats-limit=200 --collector.diagnosticdata --collector.replicasetstatus --compatible-mode --discovering-mode --mongodb.global-conn-pool --web.listen-address=:42002\nroot 7892 3043 0 07:12 ? 00:00:00 sshd: mongodb [priv]\nmongodb 8039 7892 0 07:12 ? 00:00:00 sshd: mongodb@pts/2\nmongodb 8093 8039 0 07:12 pts/2 00:00:00 -bash\nroot 10879 3043 0 07:14 ? 00:00:00 sshd: mongodb [priv]\nmongodb 11036 10879 0 07:14 ? 00:00:00 sshd: mongodb@notty\nmongodb 11037 11036 0 07:14 ? 00:00:00 /usr/lib/openssh/sftp-server\nroot 11209 3043 0 07:15 ? 00:00:00 sshd: mongodb [priv]\nmongodb 11348 11209 0 07:15 ? 00:00:04 sshd: mongodb@notty\nmongodb 11349 11348 0 07:15 ? 00:00:01 /usr/lib/openssh/sftp-server\nmongodb 13188 8093 0 11:24 pts/2 00:00:00 ps -aef\nmongodb 13189 8093 0 11:24 pts/2 00:00:00 grep --color=auto [m]ongod\nmongodb 52940 1 0 Apr25 ? 00:01:38 /lib/systemd/systemd --user\nmongodb 52941 52940 0 Apr25 ? 00:00:00 (sd-pam)\nmongodb@ACIPRO:/$ ss -tlnp\nState Recv-Q Send-Q Local Address:Port Peer Address:Port\nLISTEN 0 128 127.0.0.1:27017 0.0.0.0:*\nLISTEN 0 128 10.98.20.5:27017 0.0.0.0:*\nLISTEN 0 10 127.0.0.1:29130 0.0.0.0:*\nLISTEN 0 128 0.0.0.0:25324 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:80 0.0.0.0:*\nLISTEN 0 4096 127.0.0.1:42001 0.0.0.0:*\nLISTEN 0 4096 127.0.0.1:45013 0.0.0.0:*\nLISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*\nLISTEN 0 128 0.0.0.0:22 0.0.0.0:*\nLISTEN 0 128 0.0.0.0:8089 0.0.0.0:*\nLISTEN 0 100 0.0.0.0:25 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:443 0.0.0.0:*\nLISTEN 0 1024 10.98.20.5:17472 0.0.0.0:*\nLISTEN 0 1024 127.0.0.1:17473 0.0.0.0:*\nLISTEN 0 4096 127.0.0.1:7777 0.0.0.0:*\nLISTEN 0 10 127.0.0.1:28130 0.0.0.0:*\nLISTEN 0 10 127.0.0.1:28230 0.0.0.0:*\nLISTEN 0 4096 :42000 :\nLISTEN 0 4096 [::]:80 [::]:\nLISTEN 0 4096 :42002 :\nLISTEN 0 128 :8084 :\nLISTEN 0 128 [::]:22 [::]:\nLISTEN 0 100 [::]:25 [::]:\nLISTEN 0 4096 [::]:443 [::]:*\nmongodb@ACIPRO:/$",
"username": "Debalina_Saha"
},
{
"code": "",
"text": "",
"username": "steevej"
},
{
"code": "",
"text": "ps -aef | grep [m]ongodHi can you help me outubuntu@esopdev:/etc$ ps -aef | grep [m]ongod\nroot 804843 1 0 07:12 ? 00:00:16 mongod --fork --logpath /var/lib/mongodb/mongodb.log --dbpath /var/lib/mongodb\nroot 805223 805199 0 07:14 pts/2 00:00:00 tail -100f /var/log/mongodb/mongod.log$ ss -tlnp\nState Recv-Q Send-Q Local Address:Port Peer Address:Port Process\nLISTEN 0 4096 0.0.0.0:9191 0.0.0.0:*\nLISTEN 0 128 127.0.0.1:27017 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:8080 0.0.0.0:*\nLISTEN 0 511 0.0.0.0:80 0.0.0.0:*\nLISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*\nLISTEN 0 4096 127.0.0.1:43637 0.0.0.0:*\nLISTEN 0 128 0.0.0.0:22 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:6080 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:6081 0.0.0.0:*\nLISTEN 0 4096 0.0.0.0:6082 0.0.0.0:*\nLISTEN 0 4096 [::]:9191 [::]:*\nLISTEN 0 4096 [::]:8080 [::]:*\nLISTEN 0 511 [::]:80 [::]:*\nLISTEN 0 128 [::]:22 [::]:*\nLISTEN 0 4096 [::]:6080 [::]:*\nLISTEN 0 4096 [::]:6081 [::]:*\nLISTEN 0 4096 [::]:6082 [::]:*\nLISTEN 0 100 *:9090 :",
"username": "Gowtham_Chendra"
}
] | Not able to start mongod - mongod.service: Main process exited, code=exited, status=1/FAILURE | 2022-04-14T09:38:59.460Z | Not able to start mongod - mongod.service: Main process exited, code=exited, status=1/FAILURE | 6,082 |