image_url (stringlengths 113-131, ⌀) | tags (sequence) | discussion (list) | title (stringlengths 8-254) | created_at (stringlengths 24) | fancy_title (stringlengths 8-396) | views (int64 73-422k)
---|---|---|---|---|---|---|
null | [] | [
{
"code": "{\n \"mrp\": {\n \"$numberInt\": \"10000\"\n },\n \"dp\": {\n \"$numberInt\": \"90000\"\n }\n}\n{\n \"mrp\":100000, \n \"dp\":9000\n}\n",
"text": "I have a object which have some key and valuesI want the above values like this",
"username": "Zubair_Rajput"
},
{
"code": "",
"text": "Hi @Zubair_Rajput,I have a object which have some key and valuesWhere do you get those, and can you provide a sample code? It’s likely you’re mixing JSON and Extended JSON (that has a wider variety of data types - hence needs each value to have type specified), but the context is everything here, to understand what can be corrected.",
"username": "Paolo_Manna"
},
{
"code": "\"mrp\": {\n \"$numberInt\": \"160000\"\n },\n{\n \"_id\": \"6466279bec6576a00b527434\",\n \"brand\": \"SS\",\n \"product_name\": \"Gunther \",\n \"gst\": 0,\n \"mrp\":16000\n}\n\nvar product = productColl.findOne({\"_id\": new BSON.ObjectId(\"6466279bec6576a00b527434\") })\n const product_item = EJSON.parse(product);\n \n var orderObj\n orderObj = order_items.map((item, idx) => {\n return {\n ...item,\n brand: product_item.brand,\n product_name: product_item.product_name\n }\n})\n\n",
"text": "I am reading a document then I am parsing it and mapping a new array with mrp price value\nwhich is integer but I am getting the like below",
"username": "Zubair_Rajput"
},
{
"code": "JSON.parse(EJSON.stringify(product));\nEJSON.parse(EJSON.stringify(product));\n\n\"product_name\": \"Cr. Bats\",\n\"mrp\": 100000,\n\"dp\": 98000,\n",
"text": "Sir actually I got the solution after changingtoNow I am getting like thisis it the correct way to do this",
"username": "Zubair_Rajput"
},
{
"code": "EJSON.parse(EJSON.stringify(product));\nproductproduct.brandproduct.product_namefindOne()var product = await productColl.findOne({\"_id\": new BSON.ObjectId(\"6466279bec6576a00b527434\") })\n",
"text": "Sir actually I got the solution after changingThis is redundant, you’re converting to string, then back to object again, why don’t you access the product fields directly, as in product.brand, product.product_name, etc.?The only point I see in your code that needs correction is that you should wait for the query to be done, because findOne() returns a Promise, as in",
"username": "Paolo_Manna"
}
] | How to get numeric value from the numeric value from $numberInt | 2023-06-19T11:20:42.909Z | How to get numeric value from the numeric value from $numberInt | 930 |
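For readers of the thread above: the $numberInt wrappers are MongoDB Extended JSON, and converting them back to plain numbers is what EJSON parsing does. A minimal Node.js sketch, illustrative only and not code from the thread (inside an Atlas/Realm function EJSON is already available as a global; here it comes from the bson package):

```js
// Illustrative: turning Extended JSON into plain JavaScript values.
const { EJSON } = require("bson");

const extended = '{ "mrp": { "$numberInt": "10000" }, "dp": { "$numberInt": "90000" } }';
const plain = EJSON.parse(extended, { relaxed: true });

console.log(plain); // { mrp: 10000, dp: 90000 }
```

EJSON.parse understands the {"$numberInt": "..."} wrappers that plain JSON.parse leaves intact, which is why the fix discussed in the thread produces ordinary numbers.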
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "",
"text": "Can anyone help me out with using $search in $lookup?Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.It works fine if I use a static string instead of searchId, but I’m not able to use a variable from the “let” definition block. Anyone knows how to resolve variables inside the $search function?",
"username": "Smoothny"
},
{
"code": "$expr$search",
"text": "It works fine if I use a static string instead of searchId, but I’m not able to use a variable from the “let” definition block. Anyone knows how to resolve variables inside the $search function?$lookup variables can only be referenced inside $expr and I don’t believe that’s supported in $search at the moment.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Oh no, that’s a pity. ",
"username": "Smoothny"
},
{
"code": "$search$search$text$lookup{\n from: \"OtherCollection\",\n let: {\n otherName: \"$name\",\n },\n pipeline: [\n {\n $match: {\n $text: {\n $search: \"$$otherName\",\n },\n },\n },\n ],\n as: \"references\",\n}\n",
"text": "Just to confirm - the question here is for the $search aggregation stage. Is your answer also valid for $search field of the $text operator? For example, a $lookup stage with the following content:This is not expected to work as well, right?",
"username": "Rado_Stoyanov"
},
{
"code": "",
"text": "Correct.$$expr is a top level expression.",
"username": "Asya_Kamsky"
}
] | $search in $lookup with MongoDB v6 | 2023-01-11T15:37:18.192Z | $search in $lookup with MongoDB v6 | 1,304 |
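For reference alongside the answer above, the supported way to use a $lookup let variable is through $expr inside $match; $search (and $text) cannot see those variables. A minimal sketch, with collection and field names as placeholders:

```js
// Supported pattern: the "let" variable is referenced via $expr.
{
  $lookup: {
    from: "OtherCollection",
    let: { otherName: "$name" },
    pipeline: [
      { $match: { $expr: { $eq: ["$name", "$$otherName"] } } }
    ],
    as: "references"
  }
}
```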
null | [
"aggregation"
] | [
{
"code": "{\n id: 0,\n data: [\n {entries: [{value: 1}, {value: 2}]},\n {entries: [{value: 4}, {value: 7}]}\n}\navg: [2.5, 4.5]\n",
"text": "Hi,\nI have data where each document is of the following format:I would like to dynamically create a new field (via Charts UI which allows for data aggregation and adding new field) that would represent average of those entries. Eg, for document of id 0 I would like to obtainWhere\n(1 + 4) / 2 = 2.5,\n(2 + 7) / 2 = 4.5.So essentially zipping those values and averaging them to create new array of averages.What would be correct aggregation command to achieve that?\nThanks",
"username": "Tomasz_Borczyk"
},
{
"code": "{\n\t$map: {\n\t\tinput: \"$data\",\n\t\tas: \"mappedData\",\n\t\tin: {\n\t\t\t $avg: \"$$mappedData.entries.value\"\n\t\t}\n\t}\n}\n[\n\t{\n\t\t$set: {\n\t\t\tavg: {\n\t\t\t\t$map: {\n\t\t\t\t\tinput: \"$data\",\n\t\t\t\t\tas: \"mappedData\",\n\t\t\t\t\tin: {\n\t\t\t\t\t\t $avg: \"$$mappedData.entries.value\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} \n]\n",
"text": "Hi @Tomasz_Borczyk -Try creating a calculated field with this expression:Or alternatively you can make it an aggregation pipeline in the query bar:",
"username": "tomhollander"
},
{
"code": "",
"text": "Actually on re-reading your question I realised I’m not calculating the averages the way you want. Your scenario is a little more complex. To (potentially) make things easier: is the number of array elements at either level predictable?",
"username": "tomhollander"
},
{
"code": "entriesdata",
"text": "Yes, number of elements in entries is constant, but number of entries in data might differ between documents",
"username": "Tomasz_Borczyk"
}
] | Array of averages of arrays | 2022-09-30T14:44:54.046Z | Array of averages of arrays | 2,531 |
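For reference, one way the position-wise averages asked about above could be written, assuming (as confirmed at the end of the thread) that every entries array has the same length. This is an illustrative sketch, not an answer taken from the thread:

```js
// Builds rows = [[1, 2], [4, 7]] from the sample document, then averages column-wise -> [2.5, 4.5].
{
  $set: {
    avg: {
      $let: {
        vars: {
          rows: {
            $map: {
              input: "$data",
              as: "d",
              in: { $map: { input: "$$d.entries", as: "e", in: "$$e.value" } }
            }
          }
        },
        in: {
          $map: {
            input: { $range: [0, { $size: { $arrayElemAt: ["$$rows", 0] } }] },
            as: "i",
            in: {
              $avg: {
                $map: {
                  input: "$$rows",
                  as: "row",
                  in: { $arrayElemAt: ["$$row", "$$i"] }
                }
              }
            }
          }
        }
      }
    }
  }
}
```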
null | [
"dot-net"
] | [
{
"code": "",
"text": "We are currently using ongoDB.Driver\" Version=“2.13.2” and trying to upgrade to ongoDB.Driver\" Version=“2.19.0”. we are using old mongodb bason query’s in this application.\nAfter upgrading the below lines throw exception.\nvar result = await _collection.FindAsync(filter,option).Result.ToListAsync();\nthrow exception:\n\" When called from ‘VisitListInit’,\nrewriting a node of type ‘System.Linq.Expressions.NewExpression’\nmust return a non-null value of the same type.\nAlternatively, override ‘VisitListInit’ and change it to not visit children of this type.’ while upgrade the mongodb driver 2.19.0\"\ncloud you please help me to fix it without changing the existing code,\nwe are using .net5",
"username": "Athira_K_S"
},
{
"code": "x => x.Price > 42var connectionString = \"mongodb://localhost\";\nvar clientSettings = MongoClientSettings.FromConnectionString(connectionString);\nclientSettings.LinqProvider = LinqProvider.V2;\nvar client = new MongoClient(clientSettings);\n",
"text": "Hi, @Athira_K_S,Welcome to the MongoDB Community Forums.I see that you’ve encountered a LINQ error when running a Fluent Find operation. Although this isn’t a LINQ query, internally the driver uses the LINQ machinery to translate expressions (e.g. x => x.Price > 42) into MQL. In 2.19.0, we switched from our existing LINQ2 provider to the new, improved LINQ3 provider - which is likely what is causing the problem. You can switch back to the LINQ2 provider using code similar to the following:It would be helpful if you could file a bug with a repro here so that we could investigate and resolve the issue. Thanks in advance!Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Thank You James_Kovacs ,\nIts work for me",
"username": "Athira_K_S"
}
] | Exception occur while Mongodb Upgradation from 2.13.2 to 2.19.0 | 2023-06-16T06:10:11.345Z | Exception occur while Mongodb Upgradation from 2.13.2 to 2.19.0 | 612 |
null | [
"aggregation"
] | [
{
"code": "value_bvalue_belement{\n \"data\": {\n \"array_of_data\": [\n {\n \"element\": {\n \"elementData\": [\n { \"value_a\": 1, \"value_b\": 200},\n { \"value_a\": 2, \"value_b\": 2500 }\n ]\n }\n },\n {\n \"element\": {\n \"elementData\": [\n { \"value_a\": 1, \"value_b\": 150},\n { \"value_a\": 2, \"value_b\": 5600 }\n ]\n }\n }\n ]\n }\n}\n{\n \"min\": 150,\n \"max\": 5600\n}\n[{\n $project: {\n 'data.array_of_data.element.elementData.value_b': 1\n }\n }, {\n $group: {\n _id: '$data.array_of_data.element.elementData.value_b'\n }\n }, {\n $addFields: {\n minA: {\n $min: '$_id'\n },\n maxA: {\n $max: '$_id'\n }\n }\n }, {\n $addFields: {\n low: {\n $min: '$minA'\n },\n high: {\n $max: '$maxA'\n }\n }\n }, {\n $project: {\n low: 1\n }\n }]\n",
"text": "Hello,I have a complex data structure and having an extremely difficult time with an aggregation query to get the result I was hoping for.Here is a sample of my data structure, I had to really obscure this due to sensitive content reasons. I will have hundreds of thousands of documents in this collection.What I am looking to do is try and retrieve the lowest value_b and the highest value_b from elementWorking with MongoDB Community 6.0Looking for a result something like the followingI have tried quite a few options so far, but here is latest with the closest that I have gottenI appreciate any suggestions that someone may have.",
"username": "Ryan_Youngsma"
},
{
"code": "var pipeline =\n[\n { '$unwind': '$data.array_of_data' },\n { '$unwind': '$data.array_of_data.element.elementData' },\n {\n '$group': {\n _id: '$_id',\n max: { '$max': '$data.array_of_data.element.elementData.value_b' },\n min: { '$min': '$data.array_of_data.element.elementData.value_b' }\n }\n }\n]\n$unwinds$group$match",
"text": "Hi @Ryan_Youngsma - Welcome to the community.I haven’t tested this out on a larger set of data but I think the following may get you to a similar desired output:However, there are are 2 $unwinds and a $group here so you may encounter performance issues. Could I understand the use case here? Is this a workload that you would be running frequently?You might be able to include a $match stage with index usage if possible at the start to try reduce the amount of documents being passed through the pipeline.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_TranThank you for the reply. From what I have been able to test so far I think this will work for me.We will have a match stage at the beginning so this should bring down the resulting documents from the hundreds of thousands to thousands of documents to aggregate over. Depending on how many different match criteria a user will enter will also determine how many documents will be returned.This is a query that would not be run very often. If I were to guess, maybe 5 - 10 times an hour.I will try to run this on the original data source to evaluate performance as my testing environment has a pretty limited dataset.I appreciate your assistance.",
"username": "Ryan_Youngsma"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Complex Embedded Arrays of Objects $min/$max | 2023-06-15T23:43:55.168Z | Complex Embedded Arrays of Objects $min/$max | 388 |
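To make the closing suggestion in that thread concrete, the $match stage would simply be prepended to the pipeline already shown; the filter field below is hypothetical and stands in for whatever indexed field the real workload filters on:

```js
var pipeline = [
  // Hypothetical indexed filter to shrink the input set before the $unwinds.
  { '$match': { someIndexedField: 'someValue' } },
  { '$unwind': '$data.array_of_data' },
  { '$unwind': '$data.array_of_data.element.elementData' },
  {
    '$group': {
      _id: '$_id',
      max: { '$max': '$data.array_of_data.element.elementData.value_b' },
      min: { '$min': '$data.array_of_data.element.elementData.value_b' }
    }
  }
]
```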
[] | [
{
"code": "",
"text": "Hi all,\nPlease see the attached 2 diagrams.\nI have a single compressed/json file coming out each our, nested in the directory structure as per image, that contains a json array with “documents” that have to be loaded each into a collection.How would I be able to do this, This will be from a Confluent/Kafka in AWS account onto Atlas.or would I maybe need to use a Lamda function t pickup the file and decompile it into individual docs and insert them.G\nScreenshot 2023-06-13 at 07.51.22798×426 193 KB\n",
"username": "georgelza"
},
{
"code": "",
"text": "\nScreenshot 2023-06-13 at 07.51.351328×130 26.1 KB\nG",
"username": "georgelza"
},
{
"code": "",
"text": "Anyone ?Is it possible to have a connector consume from a nested directory structure ?Is it possible for a connector to take a single message on a topic and split the contents into the array of documents. - Realise i might be able to do this on the Kafka topic using SMT.G",
"username": "georgelza"
},
{
"code": "",
"text": "Expanding…ignore the nested source folder structure, just realised that sits with my source connector, irrelevant for discussion here…{\n[\n{doc1},\n{doc2},\n{doc3},\n{doc4},\n{doc5}\n]\n}Each doc as a individual doc in the destination collectionG",
"username": "georgelza"
},
{
"code": "",
"text": "Answer.The mongo sing connector canonly uncertainty at the moment is if this version is available on the AWS Confluent cloud deployment.G",
"username": "georgelza"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Please advise - Is this possible | 2023-06-13T06:01:00.455Z | Please advise - Is this possible | 556 |
|
[
"aggregation",
"queries"
] | [
{
"code": "db.M0001.aggregate([\n {\n\t\"$match\" : {\n\t\t\"updated_time\": {\"$gt\": ISODate(\"2010-05-01T00:00:00.000Z\")}\n\t}\n},\n {\n \"$sort\": {\n \"updated_time\": -1\n }\n },\n {\n \"$project\": {\n \"GC0004440E_Y0Y2021010120211231\": 1,\n \"_id\": 0\n }\n },\n {\n \"$facet\": {\n \"data\": [\n {\n \"$skip\": 300\n },\n {\n \"$limit\": 1000\n }\n ],\n \"pagination\": [\n {\n \"$count\": \"total\"\n }\n ]\n }\n }\n], {explain: true})\ndb.M0001.find({}, {\"GC0004440E_Y0Y2021010120211231\" : 1}) \\\n.sort({updated_time:-1}).skip(300).limit(1000).explain('executionStats')\n",
"text": "One of my collection is large, and the size about 8.4G , about 5600 items.\nAnd I use the aggregate pipelines to make a api for pagination, but the query is so slow.\nThe query command is below:And the explain is shown in the jpg, it takes about 62 seconds:\nimage987×690 26.8 KB\nBut I can not know, why it’s so different from the simple find command shown below:it takes about 10 seconds.Can anyone give me some suggestions, thanks!\nPlus:\nthe version of mongo is: 4.4.10",
"username": "dean.du_2023"
},
{
"code": "db.M0001.explain().aggregate([...])db.M0001.stats()",
"text": "Hey @dean.du_2023,Thank you for reaching out to the MongoDB Community forums.One of my collection is large, and the size about 8.4GCould you please provide us with the sample document and the indexes of the collection you are currently working on?Additionally, it would be helpful if you could share the output of theBased on the screenshot shared, it seems that the index scanning (IXSCAN) took only 28ms, while the majority of the time (55s) is spent fetching the data, specifically retrieving the full document based on the index key. This is typically caused by hardware constraints. Could you please share your hardware configuration for the deployment? Also, let us know where the MongoDB server is running. Are you using Docker or some kind of virtual machine?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "db.Mooo1.stats()explain().aggregate([...]){\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"themes.M0001\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"9E9253CF\",\n\t\t\t\t\t\"planCacheKey\" : \"9E9253CF\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"GC0004440E_Y0Y2021010120211231\" : true,\n\t\t\t\t\t\t\t\"_id\" : false\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"updated_time\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"updated_time_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"updated_time\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"updated_time\" : [ \"[MaxKey, MinKey]\" ]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$facet\" : {\n\t\t\t\t\"data\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$teeConsumer\" : {\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$skip\" : 2000\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$limit\" : 1000\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"pagination\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$teeConsumer\" : {\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$group\" : {\n\t\t\t\t\t\t\t\"_id\" : {\n\t\t\t\t\t\t\t\t\"$const\" : null\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"total\" : {\n\t\t\t\t\t\t\t\t\"$sum\" : {\n\t\t\t\t\t\t\t\t\t\"$const\" : 1\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$project\" : {\n\t\t\t\t\t\t\t\"total\" : true,\n\t\t\t\t\t\t\t\"_id\" : false\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"mongo01\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.4.10\",\n\t\t\"gitVersion\" : \"58971da1ef93435a9f62bf4708a81713def6e88c\"\n\t},\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1686972731, 6),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"QxBw4lVSq6gixefHxluUxiO2voU=\"),\n\t\t\t\"keyId\" : NumberLong(\"7190173993972793345\")\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1686972731, 6)\n}\n",
"text": "db.M0001.stats()the result of the db.Mooo1.stats() is shown below:\n\nimage1153×663 38 KB\nAnd the result json of the explain().aggregate([...]) is following:One sample of the items of collection was uploaded to one cloud driver:\nOne sample to downloadThanks for your help! Best regards!",
"username": "dean.du_2023"
},
{
"code": "db.M0001.explain('executionStats').aggregate([..])",
"text": "Hey @dean.du_2023,Thank you for sharing the details. Could you please share the output of the following command: db.M0001.explain('executionStats').aggregate([..])? Additionally, could you provide information about your hardware configuration for the deployment and confirm where the MongoDB server is running? Also, are you using Docker or any virtual machine for the setup?Furthermore, based on the sample documents you shared, it appears that each document contains approximately 30K+ field-value pairs. Can you please confirm this?I’m asking for this information so that I can hopefully reproduce what you’re seeing and come up with some recommendations.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "ok, my mongo is running on a single server, and set up with Docker.\nThe configuration is shown below:CPU: 8 core\nMemory: 16 GAnd you are right about that each document contains 30K+ k-v pairs. Some documents contain even about 100k+ k-v pairs.And the result of the command is about like the following png:\n\nimage969×361 13.4 KB\nThanks for your replay.\nRegards,\nDean",
"username": "dean.du_2023"
}
] | Aggregate pipeline, why scan all the items | 2023-06-15T09:49:45.436Z | Aggregate pipeline, why scan all the items | 504 |
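Because the explain output in this thread shows the time going into FETCH rather than IXSCAN, one commonly suggested mitigation (an assumption here, not something proposed in the thread, and it may not scale if every document carries different field names) is a compound index that also contains the projected field, so the find/sort/projection can be answered from the index alone:

```js
// Hypothetical covering index for the single projected field.
db.M0001.createIndex({ updated_time: -1, "GC0004440E_Y0Y2021010120211231": 1 })

// With _id excluded, this query can be served from the index without fetching documents.
db.M0001.find({}, { "GC0004440E_Y0Y2021010120211231": 1, _id: 0 })
  .sort({ updated_time: -1 })
  .skip(300)
  .limit(1000)
```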
|
[
"connector-for-bi"
] | [
{
"code": "",
"text": "mongo_error1536×1440 97.9 KB\nI have installed mongo DB in Linux operating system(CentOS 7), I am configuring ODBC 32 bit connection in windows machine, when I test it is not responding, I have tried ODBC 64/32 bit and also tried with unicode and ascii option. I am not sure with what i am doing. I have also attached screenshot for connection details. Logs are not generating. Do i miss any configuration in ODBC like TLS or SSL?",
"username": "Vidya_Sagar_Reddy"
},
{
"code": "",
"text": "Welcome to the community @Vidya_Sagar_Reddy,The MongoDB ODBC Driver for BI Connector also requires a compatible version of the MongoDB Connector for BI to be installed and running.What are your installed versions of MongoDB ODBC driver, Connector for BI, and MongoDB server?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Sorry for Late reply,mongo --version\nMongoDB shell version v4.2.7\ngit version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212\nOpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\nallocator: tcmalloc\nmodules: none\nbuild environment:\ndistmod: rhel70\ndistarch: x86_64\ntarget_arch: x86_64cat /etc/os-release\nNAME=“CentOS Linux”\nVERSION=“7 (Core)”\nID=“centos”\nID_LIKE=“rhel fedora”\nVERSION_ID=“7”\nPRETTY_NAME=“CentOS Linux 7 (Core)”\nANSI_COLOR=“0;31”\nCPE_NAME=“cpe:/o:centos:centos:7”\nHOME_URL=“https://www.centos.org/”\nBUG_REPORT_URL=“https://bugs.centos.org/”CENTOS_MANTISBT_PROJECT=“CentOS-7”\nCENTOS_MANTISBT_PROJECT_VERSION=“7”\nREDHAT_SUPPORT_PRODUCT=“centos”\nREDHAT_SUPPORT_PRODUCT_VERSION=“7”BI Connector Version: 2.13.4ODBC Driver: Tried with all possibilities 32/64 bit and 1.0/1.4(ascii/unicode)",
"username": "Vidya_Sagar_Reddy"
},
{
"code": "",
"text": "Thanks so much - I had a similar issue to this thread but it was because I didn’t have the compatible version of the Connector for BI.",
"username": "Marty_Zager"
},
{
"code": "",
"text": "@Marty_Zager, Could you please share me your compatible version for below list and let me try it out,",
"username": "Vidya_Sagar_Reddy"
},
{
"code": "",
"text": "I’m also facing the same issue, could you please share us detailed resolution here.So far have tried using Mongdb driver version 1.0 and 1.4 but no luck.Thanks,\nPavankumar",
"username": "Pavankumar_Asagalli"
},
{
"code": "",
"text": "The MongoDB BI Connector connects to the MongoDB server on port 27017 and will bridge the NoSQL nature of MongoDB and make its data visible as sort of virtual tables. ODBC driver then connects to the MongoDB BI Connector, typically on port 3307.\nso try 3307 instead 27017",
"username": "vijai_kumar_S"
},
{
"code": "",
"text": "Please install MongoDB Connector for BI MongoDB BI Connector Download | MongoDB and then:\n“C:\\Program Files\\MongoDB\\Connector for BI<VERSION>\\bin\\mongosqld.exe” and:\n2021-12-02XXXXXXXXXXX I NETWORK [initandlisten] waiting for connections at 127.0.0.1:3307\nODBC will work fine on port 3307\nRegards ",
"username": "Edgar_Cap"
},
{
"code": "",
"text": "i am facing same issue ,which is the compatible version of the connector for bi",
"username": "sarath_sr"
},
{
"code": "",
"text": "I am facing same issues. I have connected at 127.0.0.1:3307. But not fully loaded databases in ODBC driver. Unknown database error. I am not able to connect. Please refer the screenshots and version\nimage731×494 68.9 KB\nOS : Centos 7\nMongoDB Version :MongoDB shell version v4.4.18\nBuild Info: {\n“version”: “4.4.18”,\n“openSSLVersion”: “OpenSSL 1.0.1e-fips 11 Feb 2013”,\n“modules”: ,\n“environment”: {\n“distmod”: “rhel70”,\n“distarch”: “x86_64”,\n“target_arch”: “x86_64”\n}\n}",
"username": "Murali_A"
}
] | ODBC Connection test not responding | 2020-06-04T04:49:24.021Z | ODBC Connection test not responding | 12,123 |
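Pulling the working setup from this thread into one place: mongosqld bridges MongoDB to the ODBC driver, and the DSN must point at mongosqld (port 3307 by default), not at mongod (27017). A sketch, assuming the standard mongosqld flags and a local mongod (adjust paths and the URI to your install):

```
mongosqld --mongo-uri "mongodb://localhost:27017" --addr 127.0.0.1:3307
```

The ODBC data source is then configured with server 127.0.0.1 and port 3307.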
|
null | [
"aggregation",
"node-js",
"data-modeling",
"mongoose-odm"
] | [
{
"code": "export const updateContact = async (req, res) => {\n const { body } = req;\n const { id } = req.params;\n \n\n try {\n const updatedContact = Contact.findByIdAndUpdate(id, body, { new: true });\n \n return res.status(200).json({ message: completed update, status: true, updatedContact });\n } catch (error) {\n return res.status(500).json({ error: error.message, status: false });\n }\n}\n/home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/operations/insert.js:50\n return callback(new error_1.MongoServerError(res.writeErrors[0]));\n ^\n\nMongoServerError: E11000 duplicate key error collection: J&N_DB.contacts index: _id_ dup key: { _id: ObjectId('6487c4ea43cb7181f66ca9c5') }\n at /home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/operations/insert.js:50:33\n at /home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection_pool.js:327:25\n at /home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/sdam/server.js:207:17\n at handleOperationResult (/home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/sdam/server.js:323:20)\n at Connection.onMessage (/home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:213:9)\n at MessageStream.<anonymous> (/home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:59:60)\n at MessageStream.emit (node:events:513:28)\n at processIncomingData (/home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/cmap/message_stream.js:124:16)\n at MessageStream._write (/home/carlos/Desktop/react-jorge/node_modules/mongoose/node_modules/mongodb/lib/cmap/message_stream.js:33:9)\n at writeOrBuffer (node:internal/streams/writable:392:12) {\n index: 0,\n code: 11000,\n keyPattern: { _id: 1 },\n keyValue: {\n _id: ObjectId {\n [Symbol(id)]: Buffer(12) [Uint8Array] [\n 100, 135, 196, 234,\n 67, 203, 113, 129,\n 246, 108, 169, 197\n ]\n }\n },\n [Symbol(errorLabels)]: Set(0) {}\n},\n",
"text": "when executing the function, it searches and updates but then throws the following error:every attempt to update gives me the same error even though I have only one object in my collection",
"username": "carlos_barreto"
},
{
"code": "test> db.sampletest.find()\n[\n { _id: ObjectId(\"648aec7ada6163c9fbd6fdee\"), name: 'wbc', age: 6 },\n { _id: '123', age: 8, name: 'abc' }\n]\ntest>\nconst mongoose = require('mongoose');\n\n// Connect to the MongoDB database\nmongoose.connect('mongodb://localhost:27017/test', { useNewUrlParser: true, useUnifiedTopology: true })\n .then(() => {\n console.log('Connected to the database');\n })\n .catch((error) => {\n console.error('Error connecting to the database:', error);\n });\n\n// Define the schema for the collection\nconst sampleTestSchema = new mongoose.Schema({\n name: String,\n age: Number\n});\nconsole.log(sampleTestSchema);\n\n// Create a model for the collection\nconst SampleTest = mongoose.model('sampletest', sampleTestSchema);\n\n// Update a document by its ID using findByIdAndUpdate\nconst updateDocumentById = async (id, newData) => {\n console.log(id,newData);\n try {\n const updatedDocument = await SampleTest.findByIdAndUpdate(id, newData);\n console.log('Updated document:', updatedDocument);\n } catch (error) {\n console.error('Error updating document:', error);\n }\n};\n\nconst idToUpdate = new mongoose.Types.ObjectId('648aec7ada6163c9fbd6fdee');\nconst newData = { name: 'ABC', age: 30 };\nupdateDocumentById(idToUpdate, newData);\n",
"text": "Hi @carlos_barreto and welcome to MongoDB community forums!!From the error message, it seems you are trying to update the _id to a value that already exists in the collection. To understand further. can you help me with a sample code?I tried to create a sample data as:and tried the following mongoose code for the following:and it has worked for me.Let me know if this helps.Regards\nAasawari",
"username": "Aasawari"
}
] | Error E11000 duplicate key error when i try to update | 2023-06-13T02:07:23.449Z | Error E11000 duplicate key error when i try to update | 1,460 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi, i have a cluster already in production, designed as replicaset and continuously being inserted documents inside the db. Since its a very write intensive database structure, i want to try whether sharding may help. To see or get a clue on which criteria i can shard, is there any tools or analyzers that will help me to decide the shard key?",
"username": "Oguz_Yarimtepe"
},
{
"code": "",
"text": "",
"username": "Kobe_W"
}
] | What is the good strategy for sharding an existing replicated cluster? | 2023-06-18T06:17:31.956Z | What is the good strategy for sharding an existing replicated cluster? | 544 |
null | [
"java"
] | [
{
"code": " private MongoClientSettings getTotalSettings() {\n return MongoClientSettings.builder()\n .applicationName(APPLICATION_NAME)\n .applyToClusterSettings(builder -> builder.applySettings(getClusterSettings()))\n .applyToConnectionPoolSettings(builder -> builder.applySettings(getConnectionPoolSettings()))\n .credential(getMongoCredential())\n .build();\n }\n\n private ClusterSettings getClusterSettings() {\n return ClusterSettings.builder()\n .hosts(Collections.singletonList(new ServerAddress(TEST_HOST, PORT)))\n .build();\n }\n\n// etc....\n",
"text": "TEST_HOST is the url of aws ec2.I didn’t set it to connect to localhost.But when I run this project locally I get the following log:2023-06-13 15:09:37.975 INFO 96205 — [ restartedMain] org.mongodb.driver.client : MongoClient with metadata {“driver”: {“name”: “mongo-java-driver|sync”, “version”: “4.6.1”}, “os”: {“type”: “Darwin”, “name”: “Mac OS X”, “architecture”: “aarch64”, “version”: “13.3.1”}, “platform”: “Java/Oracle Corporation/11.0.16.1+1-LTS-1”, “application”: {“name”: “temp”}} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName=‘spring’, source=‘test_location_db’, password=, mechanismProperties=}, streamFactoryFactory=null, commandListeners=, codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@5860267a]}, clusterSettings={hosts=[AWS_ADDRESS:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName=‘null’, serverSelector=‘null’, clusterListeners=‘’, serverSelectionTimeout=‘30000 ms’, localThreshold=‘30000 ms’}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=500, minSize=0, maxWaitTimeMS=5000, maxConnectionLifeTimeMS=1800000, maxConnectionIdleTimeMS=600000, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=, maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners=‘’, serverMonitorListeners=‘’}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName=‘temp’, compressorList=, uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null}\n2023-06-13 15:09:38.000 INFO 96205 — [onaws.com:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:2, serverValue:639}] to AWS_ADDRESS:27017\n2023-06-13 15:09:38.000 INFO 96205 — [onaws.com:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:640}] to AWS_ADDRESS:27017\n2023-06-13 15:09:38.001 INFO 96205 — [onaws.com:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=AWS_ADDRESS:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=19193083}\n2023-06-13 15:09:38.187 INFO 96205 — [ restartedMain] org.mongodb.driver.client : MongoClient with metadata {“driver”: {“name”: “mongo-java-driver|sync|spring-boot”, “version”: “4.6.1”}, “os”: {“type”: “Darwin”, “name”: “Mac OS X”, “architecture”: “aarch64”, “version”: “13.3.1”}, “platform”: “Java/Oracle Corporation/11.0.16.1+1-LTS-1”} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, 
journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=, codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@5860267a]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName=‘null’, serverSelector=‘null’, clusterListeners=‘’, serverSelectionTimeout=‘30000 ms’, localThreshold=‘30000 ms’}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=, maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners=‘’, serverMonitorListeners=‘’}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName=‘null’, compressorList=, uuidRepresentation=JAVA_LEGACY, serverApi=null, autoEncryptionSettings=null, contextProvider=null}\n2023-06-13 15:09:38.191 INFO 96205 — [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017com.mongodb.MongoSocketOpenException: Exception opening socket\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.6.1.jar:na]\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:180) ~[mongodb-driver-core-4.6.1.jar:na]\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:193) ~[mongodb-driver-core-4.6.1.jar:na]\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157) ~[mongodb-driver-core-4.6.1.jar:na]\nat java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]\nCaused by: java.net.ConnectException: Connection refused (Connection refused)\nat java.base/java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:na]\nat java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[na:na]\nat java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[na:na]\nat java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[na:na]\nat java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:na]\nat java.base/java.net.Socket.connect(Socket.java:608) ~[na:na]\nat com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:107) ~[mongodb-driver-core-4.6.1.jar:na]\nat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-4.6.1.jar:na]\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-4.6.1.jar:na]\n… 4 common frames omittedwhy does the mongoDB java driver try to connect to 
localhost?",
"username": "gantodagee_N_A"
},
{
"code": "TEST_HOSTTEST_HOSTMongoAutoConfiguration",
"text": "Hello @gantodagee_N_A ,From my limited experiments, I was able to see what you saw if I didn’t set TEST_HOST correctly. Could you confirm that the TEST_HOST address is setup correctly? Please refer Connection Guide for Java. The reason is, I believe there is a fallback mechanism MongoAutoConfiguration.Spring Boot has a feature called “auto configuration”. It could be that the MongoAutoConfiguration is activated with default values, which point to localhost:27017. If you don’t want that behaviour, you can either configure the properties for MongoDB (see Spring Boot Reference Documentation for valid property keys) or disable the MongoAutoConfiguration:@SpringBootApplication(exclude = {MongoAutoConfiguration.class, MongoDataAutoConfiguration.class})Additionally, are you following any connection guide or tutorial to work on this? If yes, then can you share the same?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Why does the MongoDB Java driver try to connect to localhost? | 2023-06-13T06:22:02.153Z | Why does the MongoDB Java driver try to connect to localhost? | 1,336 |
null | [
"node-js",
"mongoose-odm",
"atlas-cluster"
] | [
{
"code": "",
"text": "28I’m trying to connect to my cluster on mongoDB Atlas via Mongoose.connect(), but every time i try to connect i get an exception “MongoError: authentication fail”\n“mongodb+srv://ahemddoha:@cluster0.0jwhkdk.mongodb.net/Socialmedia?retryWrites=true&w=majority”\nI checked all connection and it works and the code error in the connection with altas",
"username": "_sozan_Ahmed"
},
{
"code": "“mongodb+srv://ahemddoha:@cluster0.0jwhkdk.mongodb.net/Socialmedia?retryWrites=true&w=majority”:@",
"text": "“mongodb+srv://ahemddoha:@cluster0.0jwhkdk.mongodb.net/Socialmedia?retryWrites=true&w=majority”If that’s really your connection string, you have not provided a password.\nThe password comes between the : and the @",
"username": "Jack_Woehr"
},
{
"code": "“mongodb+srv://ahemddoha:@cluster0.0jwhkdk.mongodb.net\"",
"text": "“mongodb+srv://ahemddoha:@cluster0.0jwhkdk.mongodb.net\"And BTW, is it possible you misspelled your user id? I infer that your personal name is “Ahmed” but your user id is spelled “ahemddoha” which makes me wonder if your user id is really “ahmeddoha”.",
"username": "Jack_Woehr"
}
] | I can't connect with mongodb | 2023-06-19T00:54:02.523Z | I can’t connect with mongodb | 404 |
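For anyone landing on this thread, the corrected shape of the connection call looks like the sketch below; the angle-bracket values are placeholders, not real credentials (URL-encode any special characters in the password):

```js
const mongoose = require("mongoose");

// Placeholders only: substitute the real database user and password.
const uri = "mongodb+srv://<username>:<password>@cluster0.0jwhkdk.mongodb.net/Socialmedia?retryWrites=true&w=majority";

mongoose.connect(uri)
  .then(() => console.log("connected"))
  .catch((err) => console.error("connection failed:", err));
```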
[] | [
{
"code": "",
"text": "hello mongo db teamI have a dashboard in mongo DB charts with only one chart, this chart is a geo heatmap.I created my heatmap and worked excellently…if you can see in the right corner there is a bar that indicates the deep in the graphic, this bar works excellent on your web page, if I make zoom in or zoom out the bar changes the scale properly, but when I embed this chart in a web page using iframe or node js SDK, and I make zoom in or zoom out the indicator bar doesn’t change the scale, permanently is fixed.chart embedded using node js sdk\nimage1920×811 75.2 KB\ncan you help me?",
"username": "sergio_vega"
},
{
"code": "",
"text": "this is my sdk implementation\ncharts-embedding-sdk (forked) - CodeSandbox",
"username": "sergio_vega"
},
{
"code": "",
"text": "The feature I particularly appreciate is the bar in the right corner that indicates the depth in the graphic. When I zoom in or zoom out on the web page, the bar scales properly and adjusts accordingly.However, I encountered an issue when embedding this chart in a web page using the iframe or Node.js SDK. When I zoom in or zoom out on the embedded chart, the indicator bar does not change its scale. It remains fixed and does not update dynamically like it does on your web page.",
"username": "smith_roy"
}
] | Bug when geo heatmap is embebed in a page | 2022-09-13T20:04:27.821Z | Bug when geo heatmap is embebed in a page | 1,953 |
|
null | [
"queries",
"mongodb-shell"
] | [
{
"code": "",
"text": "HiWe are using MongoDB Community Edition (6.0.1) (PSA) and planning to take a backup while the database is running on a live secondary node(full cluster level backup). Is it possible to do so without impacting production? Can someone please provide guidance and a backup plan for the secondary node? Thank you",
"username": "sindhu_K"
},
{
"code": "",
"text": "Hello,Welcome to the MongoDB community. You might choose to schedule backups during a time when the workload is low. If you can’t find a suitable time when the workload is low, you could add the following option to make backups:Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Hi Ramohitaj\nThanks for reply! when we take the secondary node backup is it not impact the secondary oplog ?",
"username": "sindhu_K"
},
{
"code": "",
"text": "Hi @sindhu_K,\nI think not, I think it gets impacted when you do a restore, but I am not 100% sure.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "It depends on the backup method you use.In case of disk snapshot, you need to flush cache data and lock writes so that the snapshot is consistent. Then your majority writes will always timeout.In case of mongodump, you can track ongoing oplog entries during backup (backup may take longer in case of heavy write traffic), but data back up will cause heavy disk read. So if the secondary node is very busy (in terms of read and write), performance can be impacted.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi Kobe\nthanks for reply, we have 3 node replica set (PSA), when we take the secondary node backup(MongoDump) its not impact secondary node ?",
"username": "sindhu_K"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB backup in Secondary node | 2023-06-17T10:56:28.094Z | MongoDB backup in Secondary node | 846 |
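As a concrete illustration of the mongodump-with-oplog option discussed above (the host name is a placeholder; add authentication options such as --username/--password as needed, and prefer a low-traffic window):

```
mongodump --host <secondary-host> --port 27017 --oplog --gzip --archive=backup.gz
```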
null | [] | [
{
"code": "",
"text": "I have a Realm function that using the testing console is invoked as exports({“userId”: “638cea4bd157479ec554d9b9”}). Within the function I get the object id asdata.userId and the function works as expected.Now if I expose the Realm function as an Https Endpoint how would I specify this?curl \n-H “Content-Type: application/json” \n-d ‘{“userId”: “638cea4bd157479ec554d9b9”}’ \nendpoint_urlreturns an error:{“error”:“{\"message\":\"ObjectId in must be a single string of 12 bytes or a string of 24 hex characters\",\"name\":\"Error\"}”,“error_code”:“FunctionExecutionError”,“link”:“App Services”}%The value is null because it isn’t being passed to REALM",
"username": "Richard_Thorne"
},
{
"code": "curl \\\n-H \"Content-Type: application/json\" \\\n-d '{\"query\": {\"userId\": \"63dcd7a0766580205fe7869d\"}}' \\\nhttps://data.mongodb-api.com/app/data-XXX/endpoint/feed\nexports = async function(request, response){\n console.log( \"Request: \" + JSON.stringify(request))\n const body = JSON.parse(request.body.text());\n const userId = body.query.userId;\n",
"text": "OK. I figured it out:From curl:Wiithin the function:",
"username": "Richard_Thorne"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to pass a parameter to Realm function from Https Endpoints | 2023-06-18T09:11:25.928Z | How to pass a parameter to Realm function from Https Endpoints | 422 |
null | [] | [
{
"code": "",
"text": "Need help to understand the possible ways to migrate data from 3.2 to 6.0 version?Scenario is like, If i setup new environment with MongoDB 6.0 upgraded version and want to migrate data from existing MongoDB 3.2 version.",
"username": "Mihir_Patel1"
},
{
"code": "",
"text": "Hi @Mihir_Patel1,\nHere is the best answer i’ ve ready about this topic.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Thanks @Fabio_Ramohitaj.",
"username": "Mihir_Patel1"
}
] | How to migrate data from 3.2 to 6.0 version? | 2023-06-14T15:33:14.362Z | How to migrate data from 3.2 to 6.0 version? | 662 |
null | [
"node-js",
"connecting"
] | [
{
"code": "MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27020\n at Timeout._onTimeout (D:\\Super Bill\\SB_API\\node_modules\\mongodb\\lib\\sdam\\topology.js:330:38)\n at listOnTimeout (node:internal/timers:557:17)\n at processTimers (node:internal/timers:500:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) { 'localhost:27020' => [ServerDescription] },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n logicalSessionTimeoutMinutes: undefined\n }\n}\n",
"text": "",
"username": "Mohammadali_Ghassemi"
},
{
"code": "",
"text": "ECONNREFUSEDmeans there is no server running at the given address and port number. This or may be you firewall is preventing you from connecting there.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,In my local nodejs environment working good, I hosted my app in heroku ,\nIn heroku it is not working , I allowed all IP’sThanks,",
"username": "Mohammadali_Ghassemi"
},
{
"code": "",
"text": "So you have a mongod running on your heroku host?If not and you want your app on heroku to talk to your server on your local desktop/server with 127.0.0.1 than you should read about localhost - Wikipedia because there is some knowledge you lack to accomplish this.",
"username": "steevej"
},
{
"code": "localhost:27020localhost",
"text": "Welcome to the MongoDB Community Forums @Mohammadali_Ghassemi !localhost:27020If your app is deployed on Heroku it will not be able to connect to a database instance on localhost.You should be using an external hostname for your database connection string, and will need to configure appropriate security measures to limit exposure to your database deployment. Please review the MongoDB Security Checklist – Role-Based Access Control and Encrypted Communication are essential security measures.For an example using MongoDB Atlas, see How to deploy MongoDB on Heroku.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello @Stennie_X ,\nthank you for your post,\nI am using atlas cluster with encryption , I tested app in local environment with atlas cluster DB connection string ,same app is not working in herokuThanks,",
"username": "Mohammadali_Ghassemi"
},
{
"code": "",
"text": "If you want to connect to your atlas cluster you must use your atlas cluster connection string.What you shared is using localhost as the connection string, not your atlas connection string.",
"username": "steevej"
},
{
"code": "",
"text": "@Mohammadali_Ghassemi, any update on this? Didyou must use your atlas cluster connection stringsolve your issue? If so please mark the post as the solution.",
"username": "steevej"
},
{
"code": "",
"text": "Hello everyone,\nif you hosting on Heroku then please add environment variables on Heroku like this:key = mongoDbUrl\nvalue = mongodb+srv://:@cluster0.s4zgzy8.mongodb.net/note: enter you username and password that you created while creating cluster0.",
"username": "pankaj_puri"
}
] | Heroku -> db not connecting | 2021-12-15T18:35:46.871Z | Heroku -> db not connecting | 6,447 |
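Tying the last answer together with code: the app reads the Heroku config var instead of hard-coding localhost. The variable name below matches the one suggested in the previous post; everything else is an illustrative sketch:

```js
const mongoose = require("mongoose");

// "mongoDbUrl" is the Heroku config var suggested above; process.env exposes it at runtime.
mongoose.connect(process.env.mongoDbUrl)
  .then(() => console.log("connected"))
  .catch((err) => console.error("connection failed:", err));
```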
[] | [
{
"code": "",
"text": "The MSI installer ask if we want to install MonboDB as a service. Then, doesn’t matter what value are entered it always end the following error message :The domain, user name and/or password are incorrect. Remember to use “.” for the domain if the account is on the local machine.563×557 17.7 KBit seems also described here :\nhttps://superuser.com/questions/1403332/invalid-domain-user-password-while-installing-mongodb-on-windows10And here :\nhttps://stackoverflow.com/questions/52092528/invalid-domain-user-password-while-installing-mongodb-on-windows10",
"username": "fran_volr"
},
{
"code": "",
"text": "Please try your Windows username/password",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "It works but I don’t get why I should give those sensitive data to MongoDB and why other DBMS such as MySQL doesn’t need it to install ",
"username": "fran_volr"
},
{
"code": "",
"text": "horrible, I want days where i could install software without giving so much information! For free edition? seriously mongodb, I don’t even have my password setup in windows it is PIN! What a horrible UX, made account just to vent out",
"username": "Ruth_H_Gilbreath"
},
{
"code": "mongodmongod",
"text": "Hello @Ruth_H_Gilbreath, and welcome to the MongoDB Community forums! The post you replied to talks about setting up mongod to run as a service. You are providing that info to the Windows OS to run the service. This is a Windows thing, not MongoDB. Note that this information is not sent to either MongoDB or Microsoft. The only time you’re providing information directly to MongoDB is if you’re creating an Atlas account, but then you’re not installing MongoDB on your Windows system in that case.You can run MongoDB on Windows without setting it up as a service. Doing this however means that you have to manually run the mongod command before using the database.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Using Windows Username/Password seems not working when we use a Windows email address to login as admin on our computer.I’m using Windows 11, and I log on my computer with an outlook email address. Windows display as Username my firstname and my lastname with a space between.Neither Firstname alone, nor Firstname + Lastname, nor email address, work with MongoDB installation.",
"username": "Thom_GBT"
},
{
"code": "",
"text": "What error you get with email address?\nTry domain/ID or firstname.lastname or give your I’d with space in quotes\nIf none works you have to install it as not as service as Doug suggested above\nDisadvantage with this is you have to start mongod manually\nOther options don’t use installer use command line install passing minimum parameters\nCheck mongo documentation",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I just came across your post and wanted to say that I had a similar issue when installing MongoDB locally on Windows 10. It can be a bit frustrating, but I managed to fix it by making sure I entered the correct domain, username, and password.",
"username": "EllaShort_EllaShort"
}
] | Issue when installing locally on Windows 10 | 2020-04-24T18:27:12.751Z | Issue when installing locally on Windows 10 | 19,915 |
|
null | [] | [
{
"code": "",
"text": "Hello, am new to mongo,\ni just encountered this weird error,\ni recently updated a required field in my schema to be non-required and this action somehow caused everything to crash my app is a MERN stack . the update was done inside my models folder on backend and somehow mongo atlas still considers this field to be required . hope you guys can help me",
"username": "deal_maker"
},
{
"code": "",
"text": "Hey @deal_maker,Thank you for reaching out to the MongoDB Community forums i recently updated a required field in my schema to be non-required andTo assist you better, could you please share the specific changes you made in the schema?his action somehow caused everything to crash my app is a MERN stackRegarding the crash in your app, it would be helpful if you could share the logs or any error messages you’re receiving.somehow mongo atlas still considers this field to be requiredCan you verify if there are any documents in your collection that still have that field as required? In addition, it would be beneficial to check if you have any validation rules set in your MongoDB Schema.Feel free to provide any further details or code snippets related to your issue so that we can assist you more effectively.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "thanks bro for your quick response but the issue is fixed. it turns out some pre save validation gave me trouble but its all good now. thanks for your time",
"username": "deal_maker"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Modifying schema in mongo atlas | 2023-06-05T17:30:26.793Z | Modifying schema in mongo atlas | 394 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n \"_id\": { \"$oid\": \"1fe49b6b1bf5fe3898cceac7\" },\n \"participants\": [\"bdf8bbcdbaf1bcb0fad67bfd\",\"42a907e5cd50bf52ee7aafbb\"],\n \"seen\": false,\n \"lastMsg\": \"Lorem Ipsum\",\n \"timestamp\": {\"$date\": \"2022-01-06T06:00:00.000Z\"}\n}\ntimestamp : -1",
"text": "Hi,\nThis is a sample chat document in my collection. I’ve an index on participants field and another index on timestamp. Both indexes are in ascending order.When I run $sort in pipeline based on timestamp : -1, lastMsg field is not included in the results. But when I add timestamp:1, I get the lastMsg fieldEDIT=> My bad, mock data didn’t have lastMsg in some docs. Can’t delete the post.",
"username": "Inder_Singh"
},
{
"code": "",
"text": "EDIT=> My bad, mock data didn’t have lastMsg in some docs. Can’t delete the post.No need to delete the post. It is good for people to step away from assumptions and check the actual data.Thanks for your update.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sort in descending order results in field omitted from final document | 2023-06-17T08:50:59.302Z | Sort in descending order results in field omitted from final document | 428 |
null | [
"swift",
"transactions"
] | [
{
"code": "freeze()freeze()",
"text": "When attempting to call freeze() I’m receiving this error:“Cannot freeze an object in the same write transaction as it was created in.”The error makes sense, but it’s not clear how to avoid it. I’d prefer to do so by inspecting the object to determine if calling freeze() will be allowed, as opposed to needing to keep a list of any objects created in the transaction.Any suggestions welcome! Thanks!",
"username": "Tom_J"
},
{
"code": "",
"text": "Hi @Tom_J !I am pretty sure I know why and how to avoid it but want to ensure the question is clear; and that’s best done through some code. I think we need to see a brief minimal code example. Can you edit your question and include that please?Jay",
"username": "Jay"
}
] | How to determine if an Object was created in a write transaction? | 2023-06-16T21:49:23.097Z | How to determine if an Object was created in a write transaction? | 575 |
null | [] | [
{
"code": "if( typeof value === 'string' )if( isNaN( value ) )",
"text": "HiI run a web site that takes home automation data from third-party sources, processes it and uses various visualisations to display to users. There a free and subscription tiers with paying customers getting more options. I have separate development and production sites using MongoDB Atlas to host the site and run the business logic.I got a message from a user yesterday saying that their data was corrupted and when I investigated, I found that yesterday (14th June) at around 06:00 UTC a piece of code started behaving differently. Essentially it GETs JSON data froma remote server and walks through the data and stores it in the database according to a set of rules. One line of code detected if a value was a string:\nif( typeof value === 'string' )where value comes from an array that was passed to the function. This used to return true for “abc” and false for “123” but now returns true for both. I have now changed the code to say:\nif( isNaN( value ) )which works consistently. Checking the logs, I can see that the behaviour of the code was changing on different executions, likely indicating that an update of some kind had been pushed to some servers but not others. I have tens of millions of records in the relevant collections and unfortunately it left my data in an inconsistent state and have had to spend hours cleaning up the data.Whilst the change in behaviour is minor, it broke a production web site on code that has been previously working for months. Has there been some kind of upgrade, to the JavaScript version or JSON libraries or similar?Has anyone else experienced any breaking changes recently?Maybe I missed the email or didn’t understand the significance of it but how do we get to find out about these changes ahead of time and test our code against any potentially harmful changes?Many thanksSimon",
"username": "ConstantSphere"
},
{
"code": "",
"text": "thinking about it; my original code should never have worked (but it did!). I assume that somewhere in the Atlas code, it was incorrectly changing the types of values whilst serializing and deserializing the parameters that get passed between functions. In the process of fixing this, it “fixed” my broken code in a bad way.",
"username": "ConstantSphere"
}
] | Breaking change in behaviour of JSON object type in function | 2023-06-15T12:24:15.216Z | Breaking change in behaviour of JSON object type in function | 393 |
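For readers puzzling over the two checks mentioned in this thread, here is how they differ in plain JavaScript; this only illustrates language behaviour and says nothing about what changed on the Atlas side:

```js
// typeof inspects the runtime type; isNaN coerces the value to a number first.
typeof "abc" === "string"; // true
typeof "123" === "string"; // true  (a numeric string is still a string)
typeof 123   === "string"; // false

isNaN("abc"); // true  (cannot be coerced to a number)
isNaN("123"); // false (coerces to 123)
isNaN(123);   // false
```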
null | [] | [
{
"code": "",
"text": "I would love to contribute to MongoDB core and I visited the GitHub repository but I didn’t see the Issues tab open nor the Discussions. Does anyone know why these aren’t open and where is this information documented in?",
"username": "akira"
},
{
"code": "",
"text": "As linked in the README.md for mongodb/mongo on Github, please browse Submit Bug Reports since MongoDB uses the much richer Jira project management system for their internal bugbase.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why aren't GitHub Issues turned on? | 2023-06-16T21:14:54.858Z | Why aren’t GitHub Issues turned on? | 749 |
null | [
"golang"
] | [
{
"code": "",
"text": "DB connection using mongo.Connect are by default 3 in aws monitoring active connection. Is there a way to set this to 1 initially using mongodb-driver for golang or any language?",
"username": "George_Taylor"
},
{
"code": "FindInsertUpdate",
"text": "@George_Taylor welcome and thanks for the question! As you observed, MongoDB drivers open a minimum of 3 connections to each node in a MongoDB deployment. Those connections are used for different purposes:MongoDB drivers rely on those connections to behave correctly, so unfortunately there is no way to disable them.",
"username": "Matt_Dale"
}
] | DB connection using mongo.Connect are by default 3 in aws. Is there a way to set this to 1 initially? | 2023-05-29T11:55:29.279Z | DB connection using mongo.Connect are by default 3 in aws. Is there a way to set this to 1 initially? | 682 |
null | [
"node-js",
"mongoose-odm",
"next-js"
] | [
{
"code": "",
"text": "I’ve recently received several email warnings: “You are receiving this alert email because connections to your cluster(s) have exceeded 500”. I’m on the M0 free tier and using Mongoose and Nextjs deployed to Vercel. After a lot of googling I refactored my connection code to cache the connection, see here: birdinghotspots/mongo.ts at main · rawcomposition/birdinghotspots · GitHubConnection caching seems to work. I console.log every time a new connection is created and I can see this happens once or twice after deployment and then never again.However, after making that change I received another warning this morning. My connections had spiked to 425 for a 5-10 min period. I set the maxPoolSize option in Mongoose to 10 (see here: birdinghotspots/mongo.ts at main · rawcomposition/birdinghotspots · GitHub ) which seemed to have no effect. My connections are hovering around 25-60 at any given time and seems to increase with higher traffic, such as when several bots are indexing my site at the same time. It even spiked to 97 while I was monitoring it this morning after adjusting the maxPoolSize.I should add that this is not a crazy high traffic site. We get maybe 10,000 users a month. Though we have tens of thousands of pages, and with all the bots hitting it I notice Vercel logs server requests every couple seconds, sometimes with bursts up to a few a second. I’m not sure what the traffic was like when it hit 380 connections since I wasn’t monitoring it.Why is maxPoolSize having no effect, and how can I get these connections under control?",
"username": "Adam_Jackson"
},
{
"code": "",
"text": "This happened again just now. I noticed the spike happened right when a bunch of my Vercel functions timed out at their 10s limit while trying to connect to MongoDB. I had set serverSelectionTimeoutMS to 9s, attempting to prevent reaching the Vercel timeout, but apparently that didn’t fix the problem. What could cause the MongoDB connection to time out? And why would this cause my MongoDB connections to spike exponentially?I just tried changing bufferCommands to false. I’ll see if that makes any difference.I’m considering switching to a regular server environment to avoid these serverless headaches.",
"username": "Adam_Jackson"
},
{
"code": "",
"text": "I’m still encountering this issue about every 2 days, usually early in the morning (Pacific time). It lasts for a minute or two. There’s a spike in connections and most of my queries are exceeding the 10s timeout on Vercel. It does seem like MongoDB successfully connects (see screenshot where I log the successful connection). The queries being run in these functions take < 200ms normally. Note that the Mongo connection is successfully cached when running normally. For some reason during these weird spikes, it tries to create a connection on each function request.Any suggestions on how I could debug this?\nScreen Shot 2023-01-04 at 11.45.22 AM1704×1926 361 KB\n",
"username": "Adam_Jackson"
},
{
"code": "maxPoolSizeMongoClientmaxPoolSize",
"text": "Hi @Adam_Jackson,I’m the Product Manager for the Node.js driver here at MongoDB. First off, our apologies as this request seems to have slipped through during the lead up to the Christmas break.Why is maxPoolSize having no effect, and how can I get these connections under control?If you’re using Vercel Serverless Functions it’s possible that though you’re specifying a maxPoolSize the connection isn’t being reused as the infrastructure behind the functions spawns additional workers to handle and influx of requests to your site.Unfortunately I cannot confirm/deny this directly, however given a single MongoClient instance the maxPoolSize options would cap the total number of connections within that client’s connection pool at 2 (per your code sample) as opposed to the default of 100.This happened again just now. I noticed the spike happened right when a bunch of my Vercel functions timed out at their 10s limit while trying to connect to MongoDB. I had set serverSelectionTimeoutMS to 9s, attempting to prevent reaching the Vercel timeout, but apparently that didn’t fix the problem. What could cause the MongoDB connection to time out? And why would this cause my MongoDB connections to spike exponentially?I see you have the source for your solution publicly accessible at GitHub - rawcomposition/birdinghotspots, so the first thing we’d want to do is verify the behavior you’ve described.Can you briefly outline the configuration/deployment requirements to Vercel so that we can spin up a similar deployment? Once we have this application deployed to Vercel and configured alike with your production instance we’ll need to emulate the traffic you’re generating to generate the connections you’ve described.This exercise will help us better understand how the application running within Vercel is connecting to your cluster and potentially exceeding the expected connection profile.Feel free to send me a DM if you’d like to discuss further.",
"username": "alexbevi"
},
{
"code": "",
"text": "Hi @alexbevi,I appreciate your help with this! After fiddling and watching closely over the last week, here’s some things I’ve observed:After adjusting the maxPoolSize to 2, I stopped getting the max connection warning. My surges now peak around 200 connections. However, I’m still having queries timeout.When I see a cluster of timeout errors, they’re usually a bunch of google bot requests all happening milliseconds apart. See attached screenshot. During that exact second there were 34 requests from google bot requests.In the attached screenshot, you’ll notice the console.log output relating to the MongoDB connection code. When everything is functioning normally (outside of these error/connection surges), I notice the console.log messages appearing once and all further requests seem to use the existing, cached connection.I’m wondering if the existing connection disconnects and google attempts to load 20 or 30 pages all at once. Since there’s no existing connection, I’m assuming it initiates a new connection for all 30 new requests. And there wouldn’t be time to cache the 1st request and share it with the remaining 29, because the requests all came in essentially at the same time. That’s just my speculation. Though that may explain the spike in connections, I don’t think it would explain why those 20-30 requests timeout.There’s a number of .env variables required to deploy to Vercel, I’ll get dev versions of all the necessary keys and DM you the details so you can deploy it to Vercel.\nScreen Shot 2023-01-10 at 6.30.05 PM1264×1888 112 KB\nExpanded detail view:\n\nScreen Shot 2023-01-10 at 6.46.19 PM1788×634 138 KB\n",
"username": "Adam_Jackson"
},
{
"code": "",
"text": "@Adam_Jackson as part of troubleshooting this issue can we just validate that those routes are actually supposed to return a response? Per Vercel’s Serverless Functions Timeouts Conditions Checklist, The function must return an HTTP response, even if that response is an error. If no response is returned, the function will time out.",
"username": "alexbevi"
},
{
"code": "",
"text": "@alexbevi assuming I am using the official MongoDB Atlas integration with Vercel, is there any pooling/proxy of connections? I am worried that as I get traffic that might start kicking off new serverless invocations, I could run out of connections.",
"username": "Jared_Wiener"
},
{
"code": "",
"text": "Hey @Jared_Wiener, when you follow our documented guidance for integrating Vercel with MongoDB Atlas you’re not using a MongoDB driver, but instead the Atlas Data API.The Data API is essentially a REST interface to your database you can communicate with over HTTPS and abstracts away the connection monitoring and pooling typically performed by socket-based drivers.Since connection management isn’t performed at the client level (ex: your serveless functions) but instead at the Data API level, the type of connection-storm behavior you may have previously seen as a result of an influx of traffic triggering a flurry of new serverless processes to spin up wouldn’t occur.",
"username": "alexbevi"
},
{
"code": "",
"text": "Thank you, @alexbevi.The documentation seemed to imply that I could continue using Mongoose- but it sounds like I need to refactor my code to query data via HTTP?",
"username": "Jared_Wiener"
},
{
"code": "minPoolSize=1&maxPoolSize=1",
"text": "@Jared_Wiener if you want to continue using Mongoose additional connection pools may be created as additional serverless processes are created. You can control this to a degree by setting minPoolSize=1&maxPoolSize=1 as this will ensure you have the smallest possible pool of connections per serverless instance.This may still result in more connections to the cluster than the Data API, however it should not put you in a position where you may reach cluster connection limits.",
"username": "alexbevi"
},
{
"code": "",
"text": "@Jared_Wiener Because you initializing the connection when nodejs started, as vercel is spawning as workers when there is new request coming in. So you have a lot of mongoose connect, you will rather use one time mongoose connect and close or use Data API( Please noted that only 1m request per month for free tier)",
"username": "GoFitness_Woon"
},
{
"code": "",
"text": "If you are sufferring of migrating mongoose to use Data API. Just try Railway <-> Atlas. Railway is not serverless. Projects in railway are containers. They run until you destroy them",
"username": "GoFitness_Woon"
}
] | Large number of connections with Mongoose and Vercel | 2022-12-18T16:03:45.647Z | Large number of connections with Mongoose and Vercel | 4,397 |
[
"connecting",
"monitoring"
] | [
{
"code": "",
"text": "Hello,\nI develop a small pet project which is mobile app in React Native. Noone uses it now. I have cluster M0 and today I got massive number of allerts saying Connections % of configured limit has gone above 80\n\nHere I found some advices: Closing Connections in MongoDB Atlas Clusters | MongoDB Atlas FAQAnd may questions are:",
"username": "Lukasz_Stachurski"
},
{
"code": "",
"text": "I had the same thing happen to me starting at approximately 9:34pm PST. I’m on the same tier (M0), have been using RealmSwift for prototyping on this cluster, and can no longer connect to it. (I can’t even browse collections, or connect via shell.)I’m very interested to know whats going on; M0 free tier doesn’t support log downloading, so the tools for debugging this are few.",
"username": "Rudi_Strahl"
},
{
"code": "",
"text": "Same problem for me with a M0 tier!",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "Hey All, There is an issue with the proxy service for the free-tier only on Atlas - the team is working on a fix now, stay tuned.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Appears to have been resolved - thanks @Ian_Ward !",
"username": "Rudi_Strahl"
},
{
"code": "",
"text": "Hello,We are also getting the same issue on M0. The weird thing is that my machine was shut down and no one was using the app we are prototyping. Can’t access the DB and collections anymore.",
"username": "Surender_Kumar"
},
{
"code": "",
"text": "Hello,\nToday I too received an alert from Mongodb Atlas for M0 cluster. According to Cluster overview there are 427 open connections, where 500 is the limit. There are only one or two users using our system currently that too aren’t proactive users. There is very less possibility to open this number of connections as connection is not created every time, instead connection pooling is used. Can someone suggest what could be the reason for this?",
"username": "Avani_Khabiya"
},
{
"code": "",
"text": "Is this still an open issue?I’ve been receiving similar notifications on an m0 instance",
"username": "Yashlin_Maistry"
},
{
"code": "",
"text": "Same here, I have 40+ connections on a M0 even though the app is still in development and there are very few devices connected to it. How exactly does Realm manage connections to Atlas Clusters? Do we need to close the connections somehow?",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "I got issue today, it’s on our dev environment and there’re very few devices (<5) connect to it",
"username": "Tai_Nguyen1"
},
{
"code": "",
"text": "M0 cluster has 80 connections even though the only app which is still in development is shutdown. How do we debug this? It is 2 years since the OP, but I don’t see any concrete steps/actions/help.",
"username": "Prof_Fish"
},
{
"code": "",
"text": "Hey all, I’m having the exact same issue here, can I ask how the issue ws solved back in '22?\nThanks!\nLuca",
"username": "Arteco_Prod"
}
] | Connections % of configured limit has gone above 80 | 2020-11-10T07:10:35.728Z | Connections % of configured limit has gone above 80 | 10,036 |
null | [] | [
{
"code": "",
"text": "Hi\ndb.createUser(\n{\nuser: “testAj”,\npwd: “test123”,\nroles: [{role: “read”, db: “dbname”}],\nauthenticationRestrictions: [ {\nclientSource: [“192.168.3.ip”],\nserverAddress: [“198.168.3.ip”]\n} ]\n}\n)getting this error\n“msg”:“Failed to acquire user because of unmet authentication restrictions”Failed to acquire user because of unmet authentication restrictions\",“attr”:{“user”:“testAj@admin”,“reason”:\"Restriction",
"username": "Aayushi_Mangal"
},
{
"code": "serverAddressclientSourceserverAddress",
"text": "Hi @Aayushi_MangalAs a part of code, if the serverAddress unavailable or user could not be connecting from the IPs specified in clientSource . Can you please check the serverAddress is valid and please share complete error log along with the bindIp specified in your deployment.Thanks,\nDarshan",
"username": "Darshan_j"
},
{
"code": "",
"text": "Thanks Darshan, Does bindip mentioned 0.0.0.0 in config file can be the reason of this error?think I found the cause, router is listening to 127.0.0.1 and we are restricting to internal ip, we must add that ip also listening right?",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "we are restricting to internal ip, we must add that ip also listening right?I’m not sure if this is a must. But it has no point to set a remote IP restriction when you are only listening on loopback address.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks @Kobe_W , yes we are only listening to localhost.",
"username": "Aayushi_Mangal"
}
] | Server restriction is not working | 2023-06-13T10:09:59.050Z | Server restriction is not working | 618 |
null | [
"node-js",
"data-modeling",
"mongoose-odm"
] | [
{
"code": "const productSchema = new mongoose.Schema({\n\n product_name: {\n type: String,\n required: true,\n index: true\n },\n slug: {\n type: String,\n required: true\n },\n product_description: {\n type: String,\n required: true\n },\n category_id: {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'productscategories',\n required: true,\n },\n seller_id: {\n type: String,\n },\n product_type: {\n type: String,\n required: true\n },\n product_gallery: {\n type: Array,\n required: true\n },\n original_price: {\n type: Number,\n },\n sale_price: {\n type: Number,\n required: true\n },\n variations: [{\n attribute: {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'productsterms',\n required: true\n },\n terms: [\n {\n term: {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'productsattributes',\n required: true\n },\n sku: {\n type: String,\n },\n }\n ]\n }],\n sku: {\n type: String,\n },\n quantity: {\n type: Number,\n },\n}, { timestamps: true })\n\nconst productsAttributesSchema = new mongoose.Schema({\n attribute_name: {\n type: String,\n required: true,\n unique: true\n },\n slug: {\n type: String,\n unique: true,\n }\n}, { timestamps: true } )\n\nconst productsTermsSchema = new Schema({\n term_name: {\n type: String,\n required: true,\n unique: true\n },\n slug: {\n type: String,\n unique: true\n },\n price: {\n type: Number,\n required: false,\n },\n attribute_id: {\n type: Schema.Types.ObjectId,\n required: true\n },\n image: {\n type: String,\n required: false,\n },\n is_default: {\n type: Boolean,\n default: false\n },\n})\n",
"text": "I am building an ecommerce app using MongoDB as the choice of database. So far, I have developed this schema for products and variations. However, I have some doubts about its scalability and implementation of the shopping cart functionality. Additionally, I am considering whether it would be better to separate the variations into another schema. I have made several modifications to the schema, and I am currently feeling confused. I would greatly appreciate any help or guidance in improving this schema.It all started when I thought about the possibility of renaming a term, which led me to separate it into its own schema to ensure that the same _id is shared and any updates to the term are reflected across all instances. However, it seems there may be more to consider. Please assist me in refining this schema and addressing any potential issues. Thank you.",
"username": "Avelon_N_A"
},
{
"code": "",
"text": "From my own experience, I can say that designing the schema can be challenging, especially when it involves scalability and incorporating shopping cart features.",
"username": "Mixoponop_Mixoponop"
},
{
"code": "",
"text": "It’s awesome that you’re building an e-commerce app with MongoDB as your database choice. Schema design can be tricky, especially when it comes to scalability and implementing shopping cart functionality. Considering separating variations into another schema sounds like a good idea for better organization. I understand the confusion you’re feeling with the modifications. Hopefully, the community here can provide you with helpful guidance to improve your schema. By the way, I recently stumbled upon a website where you can Hire Magento Developer if you need additional support. Just thought I’d share!",
"username": "Alex_Deer1"
}
] | NOSQL ecommerce db design | 2023-05-21T20:36:45.025Z | NOSQL ecommerce db design | 1,981 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hello Team,I wonder how to stop a replicat set wth OpsMgr agent command.\nI can do it easily with web interface, so i assume there is a command line that allow you to do the same ?RegardsCed",
"username": "Cedric_ROLLO1"
},
{
"code": "",
"text": "Hi @Cedric_ROLLO1,\nI looked quickly and it seems to me that it can’t be done.Here is the link of the doc:Best Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "processes[n].disabled",
"text": "You’re looking for the processes[n].disabled option",
"username": "chris"
},
{
"code": "",
"text": "processes[n].disabledThank you for your quick reply. As it possible via web interface i supposed it was also possible via a command. Let’s try the Chris way. And i let you know if i succeeded.cheers",
"username": "Cedric_ROLLO1"
},
{
"code": "",
"text": "Thank you Chris,I gonna try that way. Suppose I have to setup a curl command that will be raised against the OpsMgr server and that one will send it via agent.New with OpsMgr API.Thanks for the tips.Cheers",
"username": "Cedric_ROLLO1"
}
] | Stop all replica set with opsmgr command line | 2023-06-15T11:02:50.177Z | Stop all replica set with opsmgr command line | 413 |
null | [
"node-js",
"mongoose-odm",
"atlas-cluster"
] | [
{
"code": "var mongoose = require(\"mongoose\");\nvar string = \"mongodb+srv://admin:[email protected]/database1\";\nmongoose.connect(string)\n .then(function () {\n console.log(\"Connected to MongoDB.\")\n })\n .catch(function (e) {\n console.log(e);\n });\nvar schema1 = new mongoose.Schema({\n prop1: Number,\n});\nvar Document = mongoose.model(\"Document\", schema1);\nvar document = new Document({\n prop1: 1,\n});\ndocument.save();\n",
"text": "I am able to connect to MongoDB. I have an empty free Cluster0. I am unable to create a database and save a document with the snippet below. I get a “MongoError: (Unauthorized) not authorized\non admin to execute command”. There are similar questions on Stackoverflow, most discussing different connection strings, none of which has worked for me. Your help is appreciated and thanks in advance.",
"username": "Bernie_Ackerman"
},
{
"code": "testdb2> db.documents.find()\n/// No documents\ntestdb2> db.documents.find()\n[ { _id: ObjectId(\"648ba9487c0121ce02451cbb\"), prop1: 1, __v: 0 } ]\n",
"text": "Hi @Bernie_Ackerman - Welcome to the community Is this the only code you’re executing?I copied it and replaced the connection string to specify my own test free tier cluster and a document was inserted.\nBefore execution, empty collection:After executing the provided code snippet:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "@Jason_Tran Thank you for your reply. Yes, this is the only code being executed. After trying for hours last night, I tried again this morning, and it worked. A database and collection were created, although the document wasn’t saved and I got a diffrent error. I changed the name of the last variable, thinking “document” may be a reserved word, and it worked. Thanks again.",
"username": "Bernie_Ackerman"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to write to Atlas using Mongoose | 2023-06-15T23:18:21.161Z | Unable to write to Atlas using Mongoose | 513 |
null | [] | [
{
"code": "",
"text": "HelloIs there a way to do a force atlas sync?",
"username": "Roman_Bayik"
},
{
"code": "",
"text": "Hello @Roman_Bayik ,Is there a way to do a force atlas sync?Are you referring to Atlas Device Sync?\nIf yes, then can you please share more details such as:Check out the tutorialIf you prefer to learn by example, check out the App Services tutorial, which describes how to build a synced to-do list application with clients for common platforms that App Services supports.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hello\nThank you for the questions.Yes, I’m about Atlas Device Sync.",
"username": "Roman_Bayik"
},
{
"code": "",
"text": "hey @Roman_Bayik What do you mean by force sync?When you enable Device Sync on Atlas the cloud will now expose an endpoint for you to connect devices to that will synchronize data to and from the cloud automatically. You access the cloud endpoint by using one of the Realm SDKs -There is also a separate forums topic on Atlas Device Sync here -Discussions about developing applications with MongoDB Atlas App Services and Realm, including Realm SDKs, Atlas Device Sync, Atlas Data API, Atlas GraphQL API, and Atlas Triggers.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hello @Ian_Ward ,\nI mean it is not defined that Atlas will sync data instantly right after the change (maybe it is how it works, but I didn’t find a concrete statement about it), so I’m looking for a possibility of making force sync just to make sure all data is synced.",
"username": "Roman_Bayik"
},
{
"code": "",
"text": "Did you find any solution ?",
"username": "Robson_Tenorio"
},
{
"code": "waitForDownload",
"text": "Atlas and Device Sync will sync immediately after a change is made. The client will receive this change a short but indeterminate amount of time after the change.If you want to ensure that the client has “caught up” to the latest version of data persisted in the cloud, you can use waitForDownload in the realm SDK: https://www.mongodb.com/docs/realm/sdk/flutter/sync/manage-sync-session/#wait-for-changes-to-upload-and-download",
"username": "Sudarshan_Muralidhar"
}
] | Force sync for MongoDB Atlas | 2023-03-17T13:03:35.362Z | Force sync for MongoDB Atlas | 1,230 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": " {\n \"data\": {\n \"idFS\": \"xx\",\n \"jobNumber\": \"Dxxx8\",\n \"applicationUrl\": \"xxxx\",\n \"idClient\": \"xxx\",\n \"title\": \"Werkstudent – IT\",\n \"language\": \"DE\",\n \"businessUnit\": \"Automotive Technology\",\n \"remote\": \"No specification\",\n \"company\": \"xxx xxx xxx\",\n \"additionalInfo\": \"\",\n \"cityState\": \"Hombergen,North Rhine-Westphalia\",\n \"city\": \"Hombergen\",\n \"zipCode\": \"58256\",\n \"state\": \"North Rhine-Westphalia\",\n \"address\": \"xxxx\",\n \"country\": \"Germany\",\n \"locations\": [\n {\n \"country\": \"Germany\",\n \"zipCode\": \"58256\",\n \"city\": \"Hombergen\",\n \"state\": \"North Rhine-Westphalia\",\n \"stateShort\": \"NRW\",\n \"cityState\": \"Hombergen,North Rhine-Westphalia\",\n \"address\": \"xxxx\"\n },\n {\n \"country\": \"Germany\",\n \"zipCode\": \"54429\",\n \"city\": \"Mandern\",\n \"state\": \"Rhineland-Palatinate\",\n \"stateShort\": \"RP\",\n \"cityState\": \"Mandern,Rhineland-Palatinate\",\n \"address\": \"\"\n }\n ],\n \"employmentType\": \"Part-time\",\n \"google_employmentType\": \"PART_TIME\",\n \"contract\": \"Limited\",\n \"socialInsurance\": \"Ja\",\n \"entryLevel\": \"Student job\",\n \"entryLevel_order\": 8,\n \"jobField\": \"IT\",\n \"category\": \"Automotive supply\",\n \"recruiter\": [\n \"xxx\",\n \"xxxx\",\n \"xxxx\",\n \"xxxx\"\n ],\n \"applicationEnd\": null,\n \"postingDate\": \"2021-11-29T00:00:00+01:00\",\n \"postingDate_timestamp\": 1638140400,\n \"new_postingDate\": \"2021-11-28 23:00:00+00:00\",\n \"expectedStartDate\": null,\n \"subClients\": null\n },\n \"content\": {\n \"employmentType\": \"Teilzeit\",\n \"contract\": \"Befristet\",\n \"entryLevel\": \"Studienjob\",\n \"jobField\": \"IT\",\n \"category\": \"Automobilzulieferung\",\n \"applicationEnd\": null,\n \"businessHL\": \"Unternehmen\",\n \"business\": \"<p>xxxx.</p>\",\n \"taskHL\": \"Aufgaben\",\n \"task\": \"<ul><lixxxx\",\n \"profileHL\": \"Profil\",\n \"profile\": \"<ul><li>xxx>\",\n \"offerHL\": \"<p>Ihre Vorteile bei uns</p>\",\n \"offer\": \"xxxxx\",\n \"contactHL\": \"Kontakt\",\n \"contact\": \"xxxx\",\n \"diversityHL\": \"Das bieten wir\",\n \"diversity\": \"xxxx>\",\n \"headerImage\": \"xxxx\",\n \"mobileHeaderImage\": \"xxxx\",\n \"compensation\": \"\",\n \"employerSeal\": \"\"\n },\n \"_geoloc\": [\n {\n \"lat\": 123.123,\n \"lng\": 123.123\n },\n {\n \"lat\": 123.123,\n \"lng\": 123.123\n }\n ],\n \"arbeitsAgentur\": {\n \"argeId\": null,\n \"baReferenzeId\": \"811389-002\"\n }\n}\n]\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"data\": {\n \"fields\": {\n \"businessUnit\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"category\": {\n \"type\": \"string\"\n },\n \"company\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"contract\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"employmentType\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"jobField\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"postingDate\": {\n \"type\": \"string\"\n },\n \"postingDate_timestamp\": {\n \"type\": \"number\"\n },\n \"title\": {\n \"type\": \"string\"\n }\n },\n \"type\": \"document\"\n }\n }\n },\n \"storedSource\": {\n \"include\": [\n \"data\"\n ]\n }\n}\n{\n \"mappings\": {\n 
\"dynamic\": false,\n \"fields\": {\n \"data\": {\n \"fields\": {\n \"businessUnit\": [\n {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n {\n \"type\": \"stringFacet\"\n }\n ],\n \"employmentType\": [\n {\n \"type\": \"stringFacet\"\n },\n {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n }\n ],\n \"entryLevel\": [\n {\n \"type\": \"stringFacet\"\n },\n {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n }\n ],\n \"jobField\": [\n {\n \"type\": \"stringFacet\"\n },\n {\n \"type\": \"string\"\n }\n ],\n \"locations\": {\n \"fields\": {\n \"country\": [\n {\n \"type\": \"stringFacet\"\n },\n {\n \"type\": \"string\"\n }\n ]\n },\n \"type\": \"document\"\n },\n \"title\": {\n \"type\": \"string\"\n }\n },\n \"type\": \"document\"\n }\n }\n },\n \"storedSource\": true\n}\nmainQuery [\n {\n '$search': {\n returnStoredSource: true,\n index: 'tkag_en',\n compound: {\n must: [\n {\n text: {\n query: 'Operations Manager',\n path: 'data.title',\n fuzzy: { maxEdits: 2 }\n }\n },\n {\n text: {\n path: 'data.jobField',\n query: [ 'Engineering & Science' ]\n }\n },\n {\n text: { path: 'data.employmentType', query: [ 'Full-time' ] }\n },\n {\n text: {\n path: 'data.businessUnit',\n query: [ 'Automotive Technology' ]\n }\n },\n { \n // instaed of $sort [which is expensive] use $near for sorting\n near: {\n path: 'data.postingDate_timestamp', \n origin: 1686729595572, // today\n pivot: 7776000000,// far in the future to give me the latest records based on timestamp\n\n score: { boost: { value: 1000 } }\n }\n }\n ]\n }\n }\n },\n {\n '$project': {\n 'data.title': 1,\n 'data.idClient': 1,\n 'data.city': 1,\n 'data.state': 1,\n 'data.country': 1,\n 'data.company': 1,\n 'data.postingDate': 1,\n 'data.locations': 1,\n _geoloc: 1,\n score: { '$meta': 'searchScore' }\n }\n },\n { '$skip': 0 },\n { '$limit': 50 }\n]\n$searchMeta [\n {\n '$searchMeta': {\n index: 'tkag_en_facets',\n returnStoredSource: true,\n facet: {\n operator: {\n compound: {\n must: [\n {\n text: {\n query: 'Operations Manager',\n path: 'data.title',\n fuzzy: { maxEdits: 2 }\n }\n },\n {\n text: {\n path: 'data.jobField',\n query: [ 'Engineering & Science' ]\n }\n },\n {\n text: {\n path: 'data.employmentType',\n query: [ 'Full-time' ]\n }\n },\n {\n text: {\n path: 'data.businessUnit',\n query: [ 'Automotive Technology' ]\n }\n }\n ]\n }\n },\n facets: {\n data_DOT_businessUnit: { type: 'string', path: 'data.businessUnit' },\n data_DOT_employmentType: { type: 'string', path: 'data.employmentType' },\n data_DOT_jobField: { type: 'string', path: 'data.jobField' }\n }\n }\n }\n }\n ],\n [\n {\n '$searchMeta': {\n returnStoredSource: true,\n index: 'tkag_en_facets',\n facet: {\n operator: {\n compound: {\n must: [\n {\n text: {\n path: 'data.employmentType',\n query: [ 'Full-time' ]\n }\n },\n {\n text: {\n path: 'data.businessUnit',\n query: [ 'Automotive Technology' ]\n }\n }\n ]\n }\n },\n facets: {\n data_DOT_jobField: { type: 'string', path: 'data.jobField' }\n }\n }\n }\n }\n ],\n",
"text": "Hi, I am trying out MongoDB atlas search and have some questions about getting the right cluster:\nHere’s a bit about my data:\nThere are 7.5M search requests for the last 6 months.\nI have a collection with ~2000 records that looks like this:I have created a search index that looks like this:then also another index for facets (I can probably combine them, but reading / watching some material I saw example of using more than 1 index):Finally here is my example query - I’d have to add some more fields for filtering but this is it:And here is the $searchMeta query:I am happy with the performance so far, searching / filtering is around 15-45ms.\nFirst question, am I doing something wrong? I’ve only done the research for the last week\nSecond questions, what type of a cluster do I need given my requirements? - Note I will have multiple collections with 2000 records.\nThanks in advance",
"username": "Ed_Durguti"
},
{
"code": "",
"text": "Hi @Ed_Durguti and welcome to MongoDB community forums!!am I doing something wrong?Could you state your concerns with this particular question? I understand you’ve noted you are happy with the performance so I just wish to clarify the concerns here.For further details on Atlas Search Performance, please view the Tune Atlas Search Performance documentation which may be of use.However, we usually recommend contacting the MongoDB consulting in understanding the current workload and the suggesting what would be best suited for the application. This may be of additional use if you have further use cases out of the one described here in this post.Let us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "@Aasawari I was just wondering if I am doing the indexing/querying correctly, are there any room from improvments etc.\nThank you for pointing me to the docs.",
"username": "Ed_Durguti"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Search Cluster Sizing | 2023-06-14T11:34:48.751Z | Atlas Search Cluster Sizing | 594 |
null | [] | [
{
"code": "db.sportsTeams.deleteMany( { \"teamName\" : { $in [$arrayOfNamesToBeDeleted] } } )arrayOfNamesToBeDeleted",
"text": "Hi there, I’m pretty new to mongo but am trying to figure out how I would delete certain documents from my db if they match the contents of an array. For example, let’s say I wanted to delete some sports teams from my db. I want to be able to take a list of sports teams for deletion in a text file, and then delete those teams from my db without doing it manually.In my mind, it would look something like:db.sportsTeams.deleteMany( { \"teamName\" : { $in [$arrayOfNamesToBeDeleted] } } )If I’m on the right track, how would I actually create arrayOfNamesToBeDeleted from a text file?Thank you for your help.",
"username": "hiplzhelp1234"
},
{
"code": "arrayOfNamesToBeDeleted",
"text": "how would I actually create arrayOfNamesToBeDeleted from a text file?This is outside the scope of this forum. It depends of the programming language you are using. I would try stackoverflow and any language specific forum.",
"username": "steevej"
}
] | Deleting documents from collection if they are found in text file? | 2023-06-15T20:32:05.047Z | Deleting documents from collection if they are found in text file? | 200 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "Olá a todos!@Leandro_Domingues, conforme informado anteriormente, tivemos problema com nosso Upgrade, 4.2 para 4.4, por conta do script de complemento SElinux para RedHat CentOS 7 estar disponível apenas no roteiro de instalação do 4.4 e não no de Upgrade 4.2 para 4.4. Seria interessante solicitar adicionar o mesmo no roteiro de upgrade.\nJá contornamos, mas vale a ressalva de adicionar o complemento no Roteiro de Upgrade do 4.2 para o 4.4.Muito Obrigado,\nHenrique.",
"username": "Henrique_Souza"
},
{
"code": "",
"text": "Olá @Henrique_Souza bem-vindo ao Community Forums!Consegue compartilhar como você fez esse roteiro?Best!",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Só usei o roteiro já disponibilizado no site da MongoDB para instalação do 4.4 no RedHat CentOS 7. A questão mesmo é colocar esse mesmo roteiro no manual de upgrade da versão 4.2 para 4.4 no RedHat CentOS, pois esse complemento não é mencionado lá:Roteiro de Upgrade (Sem a nota para o caso de ser RedHat CentOS7 e necessitar de configuração especifica do SElinux a partir da versão 4.4):\nUpgrade a Standalone to 4.4 — MongoDB ManualRoteiro da instalação da versão 4.4 já com o complemento a partir desta versão:Obs.: Outro ponto interessante de ser melhor esclarecido é o desmembramento de pacotes, onde a pkg tools passa a ser uma pkg isolada. (Upgrade manual, não via REPOS)Muito Obrigado,\nHenrique.",
"username": "Henrique_Souza"
},
{
"code": "",
"text": "olá Leandro, tudo bem?",
"username": "Arthur_Guedes_Guedes"
}
] | Problem upgrade 4.2 to 4.4 version (RedHat CentOS 7) | 2023-06-07T13:36:17.134Z | Problem upgrade 4.2 to 4.4 version (RedHat CentOS 7) | 625 |
null | [
"indexes",
"atlas-search",
"api"
] | [
{
"code": "200IN_PROGRESS200IN_PROGRESS404",
"text": "Hi! I have a test database in a MongoDB Atlas cluster (version 6.0.6) with a collection with a few documents in it. I’m trying to create an Atlas Search index through the Create One Atlas Search Index endpoint, from which I receive a response with a 200 status code and a payload with the status set to IN_PROGRESS (plus correct collectionName and database).After that, I start polling the endpoint to Return One Atlas Search Index, using the indexID I received in the creation response. This endpoint responds a few times as well with a 200 status code and a payload with the status set to IN_PROGRESS. After a few seconds, the endpoint responds with a 404 status code, with no details in the response about what happened to the index.I have checked the primary server logs and there’s absolutely no mention about any part of the activity described above.It may be worth mentioning that I have been able to successfully create an Atlas Search index for the same database from the Atlas UI.Could someone please provide some advice on how to keep troubleshooting this issue? I’ve already talked with the chat support in Atlas, but they weren’t able to provide an answer.",
"username": "Miguel_Igarzabal"
},
{
"code": "",
"text": "Hi @Miguel_Igarzabal , can you provide the index definition that you are trying to create? Have you reviewed the FAQ: Why is my search index disappearing?",
"username": "amyjian"
},
{
"code": "",
"text": "Hi amyjian, I realized what I was doing wrong, I was using the wrong cluster name in the API endpoint… sorry for the noise and thanks for you answer.",
"username": "Miguel_Igarzabal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't create Atlas Search index through the API | 2023-06-15T18:05:09.689Z | Can’t create Atlas Search index through the API | 637 |
[
"atlas-functions",
"atlas-data-lake"
] | [
{
"code": " \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:PutObject\",\n \"s3:DeleteObject\"\n ] \n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:ListBucket\",\n \"s3:GetObject\",\n \"s3:GetObjectVersion\",\n \"s3:GetBucketLocation\"\n ],\n \"Resource\": [\n \"S3 Bucket\",\n \"S3 Bucket*\"\n ]\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:PutObject\",\n \"s3:DeleteObject\"\n ],\n \"Resource\": [\n \"S3 Bucket\",\n \"S3 Bucket\"\n ]\n }\n ]\n}\n",
"text": "Hello, im new to Mongo but was tasked with creating a DataLake and getting that Data into S3. Im following along with the guide How to Automate Continuous Data Copying from MongoDB to S3When I try to test the Export to S3 trigger I get the following error.\nimage1568×736 37.3 KB\nI tried contacting support and they just suggested I addTo the role policy in aws but its already there. It was in the initial policy that was generated on the DataLake creation.Not sure where the error lies on the AWS or Mongo side. Any help would be greatly appreciated. Thanks!",
"username": "Chase_Russell"
},
{
"code": "",
"text": "Hi @JoeKarlsson – someone having permission issues following along with your post.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thank you @Andrew_Morgan @JoeKarlsson",
"username": "Chase_Russell"
},
{
"code": "",
"text": "Hey @Chase_Russell! First of all, thanks for coming to the MongoDB Community and asking this great question! Let’s see if I can help you get this working.I’m guessing the issue lies somewhere with the AWS Integration with Atlas. Which isn’t too surprising since AWS auth can get pretty confusing. Could you send me a screenshot of your AWS IAM Role Access page? I just want to make sure it’s setup and pointing to the right place. Here’s mine:\nInkedScreenshot 2021-06-17 at 11-57-57 Project Integrations Atlas MongoDB Atlas_LI1264×506 60.4 KB\n",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "Hi Joe! Thank you very much for your reply. Great article by the way!Here are my settings:\n\nimage1664×391 23.4 KB\n",
"username": "Chase_Russell"
},
{
"code": "",
"text": "Can you show me the linked data sources on your Atlas Trigger? It should be linked to your Atlas Data Lake.\n\nimage1116×195 9.4 KB\n",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "Good Morning Joe, yep\n\nimage1667×627 33.5 KB\n",
"username": "Chase_Russell"
},
{
"code": "",
"text": "Interesting. What happens if you rerun the permissions script that Atlas gives you through the AWS CLI? Does it give any errors? Can you show the AWS IAM profile?",
"username": "JoeKarlsson"
},
{
"code": "",
"text": "Hey @Chase_Russell apologies for the delayed response here!Based on the error message you got, it’s not actually the IAM user that’s an issue. It’s the permissions on the Database User that you’re connecting to your Data Lake with. Since you’re using an Atlas Trigger for this, it’s actually using a system user to connect to the Data Lake, which had the wrong permissions set.This was a bug and should have been resolved a while ago though. Can you confirm it’s working now?",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "Hi Benjamin, thank you for the reply. We still are experiencing this error\n\nimage1650×730 38.5 KB\n",
"username": "Chase_Russell"
},
{
"code": "",
"text": "Ah, I see @Chase_Russell . I think that error was a red herring, we’ll fix that.So in your Realm Trigger you are connecting to your Data Lake, so the name spaces that you would reference to access the data in your cluster are the ones you’ve defined in your Data Lake to reference the cluster. So in this example, to access the name space you’re trying to you should be using the namespace you’ve defined in the data lake (not the name of the db and collection in the cluster). Also the realm trigger will be pulling data from all the sources referenced under the virtual data lake collection so if you have multiple sources of data (e.g. S3 + Atlas) the data is going to be coming from both.Does that make sense? If not, we can setup a call and I can walk you through it? Calendly - Benjamin Flast",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "Hi Benjamin, sorry for the delayed reply. Here are my DataLake settings. Based on the naming I believe I am calling them properly in the trigger but could be wrong:\n\nimage1689×689 77.1 KB\n",
"username": "Chase_Russell"
},
{
"code": "",
"text": "Hey @Chase_Russell, based on your Storage Config, in the trigger should be specifying “MessageCenter” and “MessageInfo” (not sample_airbnb.listingAndReviews)",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "Following up. You may also need to create a new Data Lake to take advantage of the change we made to resolve the initial issue. But I can confirm, setup with a new Data Lake you will no longer receive this error.",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "Thank you again for the reply Benjamin. I made the changes you recommended and its still showing the error. I think ill start from scratch. The initial go was months ago and the steps are kind of foggy. Itd be good to start over again for the practice alone.",
"username": "Chase_Russell"
},
{
"code": "",
"text": "@Chase_Russell sorry for the delay here. Do you mind sending me an email at [email protected]? I’d like to double check on the Data Lake and see what’s going on.Thanks!",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "You guys solved the problem ? I’m having the same issue here, I followed tutorials on the documentation and tried everything and nothing is working. Everything is right with my ROLE and user inside mongodb atlas\nCaptura de Tela 2022-08-18 às 16.10.152280×260 49 KB\n",
"username": "Mauricio_Pereira_Dos_Santos"
},
{
"code": "",
"text": "Hi All,\nI have the same problem in 2023.06. Really?",
"username": "Grzegorz_Szurkalo"
},
{
"code": "",
"text": "Hey @Grzegorz_Szurkalo can you tell me more about the error or share a screenshot?The error that was run into above would have been triggered by an issue with the IAM Role having access to the AWS S3 bucket. This could be due to an improperly setup role, or it could be due to changes made to the role after configuring the federated database instance. For example, if you setup and were able to setup and query your S3 bucket, and then on the AWS console you edited something to dissallow use then it could cause this error.",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "Hi @Benjamin_Flast ,\nMy issue was related to usage of improper way of encrypting s3 bucket. Before, I was using SSE-KMS type, and Atlas role wasn’t able to decrypt files on this s3 bucket. After changing it to default SSE-S3, all become OK.\nRegards",
"username": "Grzegorz_Szurkalo"
}
] | Unauth Error on Moving Data from DataLake to S3 | 2021-06-16T20:40:59.888Z | Unauth Error on Moving Data from DataLake to S3 | 7,668 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hello, I was wondering if the team would be open to migrating off using TooTallNate/node-bindings for loading the kerberos node addon.It appears the library has become unmaintained. We’re specifically running into issues with using webpack: Error.prepareStackTrace may not be called with webpack · Issue #61 · TooTallNate/node-bindings · GitHub and packaging for multiple target platforms: Add support for path based only on platform and architecture · Issue #79 · TooTallNate/node-bindings · GitHubI noticed mongodb-js/kerberos is also using prebuild and prebuild-install from the prebuild project which recommends updating to using prebuildify paired with node-gyp-build.Thanks!",
"username": "Devraj_Mehta"
},
{
"code": "",
"text": "Hi @Devraj_Mehta, and thank you for the great question. The recommendation you’ve made makes a lot of sense and I’ve captured it in NODE-5357 for the time being.Feel free to watch that ticket in JIRA for further details as our engineering triage and refine it.",
"username": "alexbevi"
},
{
"code": "",
"text": "Awesome, thanks for creating that!",
"username": "Devraj_Mehta"
}
] | Migrate Kerberos library off of node-bindings | 2023-06-13T20:47:05.324Z | Migrate Kerberos library off of node-bindings | 417 |
[
"crud"
] | [
{
"code": "",
"text": "on the document page i don’t see an option to set arrayFilters",
"username": "Trieu_Boo"
},
{
"code": "arrayFiltersupdateOneupdateManyarrayFilters",
"text": "Hey @Trieu_Boo,Thank you for reaching out to the MongoDB Community forums.Currently, the usage of arrayFilters is not supported in the Data API. It is limited to the upsert option when utilizing the updateOne and updateMany endpoints.I would recommend using a driver when dealing with queries that you believe require the use of arrayFilters.In case of any further questions feel free to reach out.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi @Trieu_Boo,Vote for the feature request if it is really helpful for you,Permit to use this operator:\n\nhttps://www.mongodb.com/docs/manual/reference/operator/update/positional-filtered/#mongodb-update-up.---identifier--The alternative approach is to create a custom HTTP endpoint, design your request, and execute an update query in a function.",
"username": "turivishal"
}
] | How do i user arrayFilters on updateOne with data api | 2023-06-14T07:20:03.998Z | How do i user arrayFilters on updateOne with data api | 806 |
null | [
"dot-net"
] | [
{
"code": "InvalidOperationException: Serializer for \"MyClass\" does not have a member named Key.",
"text": "I’m currently working on a blazor wasm app and am using Mongo DB Driver with Mongo DB Atlas for my data.\nFor some simplification I wanted to add an interface to my models which end up in the database. This interface is just one property which I only wanted to use the getter from to indicate what should be the key for the database search. It’s basically just a mapping of already existing properties of the model, like the name or the mail adresse or anything to this generic key property which only has a getter, so that I can use that with the IAsnycCursor to find the elements with this Key.\nMy issue with that, is that it throws an error InvalidOperationException: Serializer for \"MyClass\" does not have a member named Key. I figured, that it might has issues with it being a property and tried to use a function instead with a different exception but still not working.\nThis “Key” property should not get de/serialized in any case it should just point to another member variable or anything. That’s why I also added the [BsonIgnore] to the field but with no effect to my exception.\nIt would be nice if this could work somehow? Maybe I did something wrong or so, I appreciate any help.",
"username": "Kevin_Schafer"
},
{
"code": "",
"text": "Hi Kevin! I’m happy to help. I’ll need a bit more context to be of assistance. Could you share a self-contained repro (a simple console application will do)?",
"username": "Patrick_Gilfether1"
}
] | Bson serialization of interface properties throws exception | 2023-06-15T00:39:26.780Z | Bson serialization of interface properties throws exception | 460 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "Hi, I’m Jéssica, I finish my studies path in mongo university.\nHow I do to obtain 50% off on the exam.\nIf this ´s coupon how i get it?I want to do de exam in this month yet.",
"username": "Jessica_Aparecida_Fe"
},
{
"code": "",
"text": "In the case of MongoDB Korea, the coupon was distributed during its own event period.I remember that there was an additional coupon entry box when I booked and paid for the test, and I think I entered the coupon number in that box and used it.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi Jessica and welcome to the forums!You will be given your 50% certification discount voucher automatically when you complete a learning path in MongoDB University. You should receive an email to whichever email address you used for your MongoDB University account.if you are a member of MongoDB for Students, you are eligible for a 100% discount voucher after completing a MongoDB University learning path. If the email address you used to sign into your MongoDB for Students account is the same as your MongoDB University account, you will receive your discount code automatically. If you used different email addresses for each account, you will need to sign into MongoDB for Students and fill out the request form on the post-login page.Hope this answers your question! Happy to answer any other questions you might have.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "udents account is the same as your MongoDB University account, you will reHi Aiyana,I didn’t receive any email, and I have only one email, this is the same that.\nCan you help me to receive this discount?",
"username": "Jessica_Aparecida_Fe"
},
{
"code": "",
"text": "Ms. McConnell,Is there by chance an education affiliate program for courses and educational creators who construct courses (of course probably with overview from MDB first I’d imagine) related to teaching and educating toward the DBA and DBD exams?",
"username": "Brock"
},
{
"code": "",
"text": "Hi Jessica, I’m going to reach out to you via DMs so I can better help you resolve your issue.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Hi Brock, I will reach out to in DMs ",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Hi Jassica,I hope the 50% certification discount is applicable even to the MongoDB DBA learning path completion. Please clarify.Thanks and Regards,",
"username": "irdaya_rajan"
},
{
"code": "",
"text": "Hi Irdaya,Welcome to the forums!The 50% discount is available to all learners who complete a MongoDB University learning path including the DBA learning path. It can be applied to any of the exams. This discount is different from the one offered through MongoDB for Students which is, as the name implies, only available to students through the GitHub Student Developer Pack.Hope this answers your question! If you have any specific questions about the certification exam, there’s a Certification sub-forum that will be better able to assist you.Have an awesome rest of your day!",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Certification exam coupon - DBA exam | 2023-04-19T01:11:03.626Z | Certification exam coupon - DBA exam | 1,589 |
null | [] | [
{
"code": "",
"text": "Is it possible to integrate AD with MongoDB COMMUNITY edition?\nIf yes, Please provide the documentation available.",
"username": "Viswa_Rudraraju"
},
{
"code": "",
"text": "No. This is a MongoDB Enterprise feature.",
"username": "chris"
},
{
"code": "",
"text": "Thanks for your Quick response.",
"username": "Viswa_Rudraraju"
}
] | MongoDB AD authentication | 2023-06-14T16:15:30.052Z | MongoDB AD authentication | 364 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Hello, I’m new to MongoDB and have encountered an error trying to shard a collection. Here’s the setup I have:Project: ‘weather’\nDatabase: ‘weather’\nCollections within ‘weather’: ‘users’ and ‘weatherData’\nObjective: Create 2 partitions of ‘weatherData’ using a partition keyMy access setup in Atlas for user appemail002:\nDatabase access for database ‘weather’: * atlasAdmin[@]adminProject access for project ‘weather’:\nProject Owner, Project Cluster Manager, Project Data Access Admin, Project Data Access Read WriteOrganization access: Organization OwnerProblem: When I connect to the database and try to shard the collection ‘weatherData’, I get this error (I’m using Terminal on Mac):Atlas atlas-l0q9v9-shard-0 [primary] admin> sh.enableSharding(“weather”)MongoServerError: (Unauthorized) not authorized on admin to execute command { enableSharding: “weather”, lsid: { id: {4 [36 21 142 212 209 203 73 59 180 167 251 12 105 72 226 177]} }, $clusterTime: { clusterTime: {1686475395 2}, signature: { hash: {0 [227 222 216 58 147 247 173 17 114 162 175 202 192 80 195 225 79 131 207 72]}, keyId: 7187767833034489856.000000 } }, $db: “admin” }From what I can see, I have the required admin access to enable sharding. I’m not sure what I’ve missed.Looking forward to any advice.Thanks!",
"username": "D_M2"
},
{
"code": "",
"text": "Hi @D_M2,\nCan you paste your connection string?\nIn which db you’ve create the user?BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "Atlas atlas-l0q9v9-shard-0 [primary] weather> db.weatherData.createIndex({ \"Weather Station\": 1 })\nWeather Station_1\nAtlas atlas-l0q9v9-shard-0 [primary] weather> sh.enableSharding(\"weather\")\nMongoServerError: (Unauthorized) not authorized on admin to execute command { enableSharding: \"weather\", lsid: { id: {4 [210 136 236 165 103 73 78 142 157 207 39 31 47 93 86 139]} }, $clusterTime: { clusterTime: {1686529903 1}, signature: { hash: {0 [43 89 216 71 105 224 184 255 137 187 7 86 38 240 215 220 177 21 20 55]}, keyId: 7187767833034489856.000000 } }, $db: \"admin\" }\nAtlas atlas-l0q9v9-shard-0 [primary] weather> sh.enableSharding(\"weatherData\")\nMongoServerError: (Unauthorized) not authorized on admin to execute command { enableSharding: \"weatherData\", lsid: { id: {4 [210 136 236 165 103 73 78 142 157 207 39 31 47 93 86 139]} }, $clusterTime: { clusterTime: {1686530065 15}, signature: { hash: {0 [55 207 72 0 87 110 79 207 192 229 63 95 190 141 226 62 159 6 231 158]}, keyId: 7187767833034489856.000000 } }, $db: \"admin\" }\n",
"text": "Thanks for the reply @Fabio_Ramohitaj!Here’s my connection string for Mongo shell in MacOS:\nmongosh “mongodb+srv://weather.xdeoqms.mongodb.net” --apiVersion 1 --username appemail002And this is the access info for db ‘weather’:\n\nimage3134×696 139 KB\nUpdate: I was able to create an index for db ‘weather’> collection ‘weatherData’ > index ‘Weather Station’ (this is my Shard key)but still getting the same error when trying ‘sh.enableSharding(“weather”)’. – ‘weather’ is the databaseI also tried “sh.enableSharding(“weatherData”)” - weatherData is the collection I want to shard within the ‘weather’ database but got the same errorThanks!",
"username": "D_M2"
},
{
"code": "Atlas atlas-l0q9v9-shard-0 [primary] weather> db.weatherData.createIndex({ \"Weather Station\": 1 })\nWeather Station_1\n",
"text": "Looks like you are running the command from a shard? Run it against mongos instead.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks @Kobe_W - how do I do that? I used the connection string provided in Atlas but it seems to connect me to the shard straight away… This is the first time I’m using Mongodb for for study. Thanks again!\nimage1920×871 167 KB\n",
"username": "D_M2"
},
{
"code": "",
"text": "Ho @D_M2,\nAdd at the end of you connection string /weather.\nIn you case:mongodb+srv://weather.xdeoqms.mongodb.net/weatherI think in this way, will work correctly.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "mongodb+srv://weather.xdeoqms.mongodb.net/weatherThanks BR, I tried it just now, and it seems to still point me to the shard… very weird\nimage1358×506 80.1 KB\n",
"username": "D_M2"
},
{
"code": "",
"text": "If it helps at all, here’s what my Metrics page looks like for db ‘weather’ - there seems 2 be 1 Primary Shard and 2 Secondary Shards - I did not create any of these… What I am after is:Partition collection ‘weatherData’ (which lives in db ‘weather’) into 2 shards/partitions\nimage3154×1174 227 KB\n",
"username": "D_M2"
},
{
"code": "",
"text": "Hi @D_M2,\nLet’s do this, to create a temporary solution, we can create a root user in the admin db.\nSo in the connection string you will have to put:\nmongodb+srv://weather.xdeoqms.mongodb.net/admin\nAnd log in with the new user name.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Thanks heaps for your time BR. I got this reply from MongoDB, apparently it’s an issue with being on the M0 free cluster:\nIMG_0513750×1334 158 KB\n",
"username": "D_M2"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sharding even with Admin access - (Unauthorized) not authorized on admin to execute command { enableSharding | 2023-06-11T10:22:11.932Z | Sharding even with Admin access - (Unauthorized) not authorized on admin to execute command { enableSharding | 923 |
null | [
"sharding",
"mongodb-shell"
] | [
{
"code": "test> sh.shardCollection(\"INVENTORY.inventory\", { name : \"hashed\" })\nMongoServerError: no such command: 'shardCollection'. Are you connected to mongos?\n\nWhat should I do?\nThanks!",
"text": "Hi everyone, I’m a newbie and just getting into MongoDB.\nNow encountered a strange problem, unable to perform sharding. I use Windows 11, MongoDB6.0.6, and log in through the Windows command line → mongosh. After entering the command at the correct position, it is always prompted:",
"username": "Di_Lin"
},
{
"code": "",
"text": "Where you are running this command?\nWhat is your setup like?data replicas,config servers,mongod etc\nHave you completed all pre steps\nYou have to run it on mongos instance",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Di_Lin,\nHere is the answer of your question:“Are you connected to mongos?”Best Regards",
"username": "Fabio_Ramohitaj"
}
] | SOS: How to perform collection sharding in MongoDB 6.0? | 2023-06-15T05:26:17.959Z | SOS: How to perform collection sharding in MongoDB 6.0? | 668 |
null | [] | [
{
"code": "",
"text": "realm-js/COMPATIBILITY.v10.md at main · realm/realm-js · GitHub and realm-js/COMPATIBILITY.v11.md at main · realm/realm-js · GitHub list compatibility for RN as >= x.xx but Expo is just xx (e.g. ==).Is the Realm/Expo compatibility really that tightly tied together that a release of Realm only works with specific version of the Realm SDK?",
"username": "Liam_Jones"
},
{
"code": "",
"text": "Still looking for an answer on this one…",
"username": "Liam_Jones"
}
] | Expo compatibility question | 2023-05-22T08:37:03.334Z | Expo compatibility question | 613 |
null | [
"react-native"
] | [
{
"code": "syncsync == trueSyncError: Client attempted a write that is outside of permissions or query filters; it has been reverted",
"text": "Hi,I’m working on a React Native Realm application and for our use case we don’t want to sync documents until they are “ready” (which involves some user interaction). I was hoping that I could use a sync property along with a subscription that looks like sync == true and then keep documents in the local realm until they are ready to be synced, but I end up with SyncError: Client attempted a write that is outside of permissions or query filters; it has been reverted.What is the best way to achieve what I’m looking for?",
"username": "Dave_Keen"
},
{
"code": "SyncError: Client attempted a write that is outside of permissions or query filters; it has been revertedlet realmLocal;\nlet realmSynched;\n\ntry {\n realmLocal = new Realm(localConfig);\n realmSynched = await Realm.open(synchedConfig);\n} catch (err) {\n //… handle error\n}\n\nlet obj;\nlet objCopy = {};\n\nrealmLocal.write(() => {\n obj = realmLocal.create(\"TestData\", { _id: new ObjectId(), … });\n});\n\n// … when document is ready…\nrealmSynched.write(() => {\n objCopy = realmSynched.create(\"TestData\", obj);\n // …change the fields to match synched requirements…\n});\n\n// … and remove the local temporary copy\nrealmLocal.write(() => {\n realmLocal.delete(obj);\n obj = null;\n});\n",
"text": "Hi @Dave_Keen,I end up with SyncError: Client attempted a write that is outside of permissions or query filters; it has been reverted.Yes, that’s by design: you can’t write documents into the database that don’t match your subscription, as Device Sync will try to maintain the content of your local DB consistent with the backend. There can be different workarounds, for example, you could open a second, local-only realm where you keep the unfinished document. When the document is ready to be synched, you can copy it to the synched realm, and remove it from the local one, i.e. something like",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Thanks for your quick reply! The documents I am creating may have relationships (one to one and one to many) to existing documents in the synced database. Is that going to cause any problems?",
"username": "Dave_Keen"
},
{
"code": "",
"text": "The documents I am creating may have relationships (one to one and one to many) to existing documents in the synced database.No, relationship won’t work across realms, you may need to use placeholders while the document isn’t finalised, and set the exact relationships when you’re ready to write into the synched realm.Can you please describe the use case, why shouldn’t the document be synched from the beginning? If the issue is about different access, you may be able to simplify things, for example, by defining different access roles, before and after finalisation, and still keep the synching mechanism happy…",
"username": "Paolo_Manna"
},
{
"code": "toJSON()React.ObjectReact.Objectrealm.create",
"text": "I’m making a note taking app, where the note is a document and can have links to various other documents. The user enters their note, then have a UI where they can link the note the the other documents. Finally they press “done” and the note gets synced. We don’t want to sync anything until this point for reasons of privacy and bandwidth (the note can potentially have a lot of links to other documents which get pruned after the UI interaction).Currently I’m building the note in a transaction, using toJSON() to detach it from the realm and then cancelling the transaction. This sort of works but feels pretty hacky.I originally tried to build the document as a plain object but this proved tricky with Typescript as the class already inherits from React.Object so I couldn’t use the class without Realm noticing. However, I recently discovered that I can point the generic of React.Object to something other than the class itself, so maybe that will help to decouple the type definition and the Realm object (I haven’t tried this yet). I also wasn’t quite sure what will happen with relationships and collections when building a plain object and passing it into realm.create.All advice appreciated!",
"username": "Dave_Keen"
},
{
"code": "realm.createrealm.create",
"text": "Hello, @Dave_Keen,Thanks for sharing your use case.Finally they press “done” and the note gets synced. We don’t want to sync anything until this point for reasons of privacy and bandwidth (the note can potentially have a lot of links to other documents which get pruned after the UI interaction).You can have more control over privacy if you define document or even field-level permissions. Please follow MongoDB documentation on Role-based Permissions. Your feedback on how this can be made more clear is appreciated.I also wasn’t quite sure what will happen with relationships and collections when building a plain object and passing it into realm.create .The relationships work within the same realm. At this moment, the only alternate is to have separate local v/s sync realms or create managed v/s unmanaged objects as you suggested.Let us know if you try using realm.create and we will go forward from there.Cheers, \nhenna",
"username": "henna.s"
},
{
"code": "draft = true",
"text": "One common pattern for this I have seen, is to have a draft = true property on the objects that are work-in-progress. This obviously does not reduce the bandwidth usage, but it gives a lot of other benefits.It allows other devices using the same data (like the users other iPad, etc) to either filter out the drafts, or even allow the user to continue the work as they move between devices, and it also ensures that the data is backed up in case something happens to the device before the draft version is published.",
"username": "Alexander_Stigsen"
}
] | How can I sync a subset of local documents with flexible sync? | 2023-05-24T12:03:48.080Z | How can I sync a subset of local documents with flexible sync? | 1,018 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I’ve a schema for IoT data\nHere it is:-Devices collection : _id , name\nVariables collection : _id , name , deviceId (ref)\nValues collection : _id, value, timestamp , variableId (ref) this is a timeseries collectionIs this schema good enough to work for millions of data as the expected data rate is 1000 data points per minute?Please suggest any possible improvements in this if required",
"username": "deep_jagani"
},
{
"code": "{\ntime: new Date(\"2023-06-13T10:00:00Z\"),\nvalue: 25.5,\nmetafield: {\ndeviceId: \"device001\",\ndeviceName: \"Sensor 1\"\n},\nvariable: \"temperature\"\n}\n",
"text": "Hey @deep_jagani,Welcome to the MongoDB Community Forums! The schema you described seems reasonable for modeling IOT data. However, I see that you are creating a time-series collection, but have two additional non-time series collections. You can merge them into one ie, making the device and variable collection as metadata. Here’s how the time series collection will then look like:Please note that your actual query performance, however, will also depend on the queries that you will be using. A general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern.I would suggest using mgeneratejs to quickly create sample documents in any number, so the design can be tested easily. Additionally, you can create secondary indexes on TimeSeries collections based on your specific use case. Also, make sure to refer to the Best Practices for TimeSeries.You can also read our data modeling documentation on modeling IOT data on other tips to model and improve performance IOT data.Hope this helps. Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
}
] | A production ready database schema for my iot data | 2023-06-10T10:06:40.704Z | A production ready database schema for my iot data | 685 |
[
"aggregation",
"queries",
"node-js"
] | [
{
"code": "",
"text": "Hello! This is my first post and I am extremely frustrated trying to solve this issue.I have a document with a Date string.\nI want to query documents that fall on a particular day of the week by checking this date property.\nThis would require me to manipulate the property before applying a logical operator that checks equivalency.I found the $where and $function operators that seem to allow this custom functionality:\nHowever, neither of these methods are functional and for the life of me I cannot figure out why. I have also tried \"Model.aggregate({ $match: { $function: … \" instead of .find() with the same problems.I am trying to avoid needing separate properties for Date/Day/Hour information but it’s looking like I have no other choice",
"username": "Kevin_Rancourt"
},
{
"code": "{\n \"_id\": ObjectId(\"123456789012345678901234\"),\n \"title\": \"Sample Date Document\",\n \"someDate\": \"2022-04-17T12:30:00Z\"\n}\nconst agg = [\n {\n '$match': {\n '$expr': {\n '$eq': [\n { '$dayOfWeek': {'date': {'$toDate': '$someDate' } } }, 2\n ] }\n }}\n];\nconst client = await MongoClient.connect(\n 'mongodb://localhost:27017/',\n { useNewUrlParser: true, useUnifiedTopology: true }\n);\nconst coll = client.db('sampleDb').collection('sampleColl');\nconst result = await coll.aggregate(agg).toArray();\nawait client.close();\n$match$expr$dayOfWeek$toDate",
"text": "Hello @Kevin_Rancourt,Welcome to the MongoDB Community forums I have a document with a Date string.\nI want to query documents that fall on a particular day of the week by checking this date property.\nThis would require me to manipulate the property before applying a logical operator that checks equivalency.Based on your shared information I presume the document is as follows:Considering, you want to query for documents with a “someDate” property that falls on a Monday, I have written a MongoDB aggregation pipeline using $match, $toDate and $dayOfWeek to retrieve documents with a “someDate” property that falls on a ‘Monday’:It uses the $match stage to filter documents based on a logical expression, defined using the $expr operator. Inside that, the $dayOfWeek operator is used to extract the day of the week from the “someDate” after converting it to a Date object using $toDate, and the resulting value is compared to the numeric value for Monday, which is 2.I hope this helps. If you have any further questions or require additional assistance, please provide sample documents and your expected output.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank you very much for your help.If I could trouble you for just one more thing.In the following example, I am trying to filter a collection based on a property the client determines. All documents that include a specific user account\nThis would require the object to follow the format { “friends.user”: {$in: [ _id ] } }The client needs this field to be populated in case they decide to edit it.\nI cannot refer to the field as obj.friends.user.$in because the singular field is called “friends.user”, it is not a nested object like {friends: {user: … } }\nIn which case, the property would be referenced using : obj[“friends.user”]This is a problem because the populate method requires the path as a string, and I cannot have a string within a string such as: Model.populate(“obj[‘friends.user’].$in”)Would there be a way for this format to function the way I intend? Or is there a better way to format the filter object in the first place so that this isn’t even an issue?\nAny help would be greatly appreciated.\n\nScreenshot 2023-04-20 at 1.39.06 AM1220×440 73.6 KB\n",
"username": "Kevin_Rancourt"
},
{
"code": "",
"text": "Hello @Kevin_Rancourt,Can you please open a new topic with the sample documents, code snippets, the expected output, and the versions of MongoDB and Node.js you are using?Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Custom query operator | 2023-04-15T22:13:36.576Z | Custom query operator | 979 |
|
null | [] | [
{
"code": "",
"text": "Hi, I’m working on a feature in which we need to search by name in our database. The search works fine for single word queries, but when I input a string with spaces in it (2 or more words), the results are not accurate, because it returns different documents which match only one of the words that I inputted. From what I have read on a similar post on this forum, it might be a tokenization issue and the person which fixed it made a fix which works for exact matches, here I need an autocomplete solution for this issue.",
"username": "Dan_Muntean1"
},
{
"code": "",
"text": "Hi @Dan_Muntean1! Can you share your index definition, an example query and sample document?",
"username": "amyjian"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"groupTag\": {\n \"type\": \"autocomplete\"\n },\n \"name\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n{\n \"_id\": {\n \"$oid\": \"63e4f8a261f74736f0fcc8b6\"\n },\n \"_v\": 4,\n \"groupTag\": \"T-2\",\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1675950242663\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1676022397735\"\n }\n },\n \"name\": \"Dan test group\",\n}\n",
"text": "Hi, here is my index:Here is my document:And if I search for the name “Dan test group”, the search feature returns me other documents which contains in the name field any of the words “Dan”, “test”, “group”, and I end getting information that I didn’t search for. I need the search to return documents which name contains the entire string that I typed (“Dan test group”).",
"username": "Dan_Muntean1"
},
{
"code": "autocompletestring{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"groupTag\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.keyword\"\n },\n \"name\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.keyword\"\n }\n }\n }\n}\n",
"text": "Hi Dan,This is happening because you are using the autocomplete field mapping, which allows you to return results which partially match your search query. If you are interested in return exact matches, you might want to consider using a string field mapping type with the lucene.keyword analyzer. Your index definition would look something like this:You can learn more about exact matching in this blog post.",
"username": "amyjian"
},
{
"code": "",
"text": "Thank you for help, but this solution does not help me. I need to still use autocomplete, but when I type 2 words, I need the response to contain both words. Currently it returns me different responses for each word.",
"username": "Dan_Muntean1"
}
] | Atlas search not working properly when searching for a string with space in it | 2023-05-25T07:29:34.261Z | Atlas search not working properly when searching for a string with space in it | 888 |
null | [
"aggregation",
"queries",
"crud"
] | [
{
"code": "test> db.test.insertMany([{_id: 'a', v: [2]}, {_id: 'b', v: [1, 3]}])\n{ acknowledged: true, insertedIds: { '0': 'a', '1': 'b' } }\ntest> db.test.find().sort({v: 1})\n[ { _id: 'b', v: [ 1, 3 ] }, { _id: 'a', v: [ 2 ] } ]\n\ntest> db.test.find().sort({v: -1})\n[ { _id: 'b', v: [ 1, 3 ] }, { _id: 'a', v: [ 2 ] } ]\n$sorttest> db.test.aggregate([{$sort: {v: 1}}])\n[ { _id: 'b', v: [ 1, 3 ] }, { _id: 'a', v: [ 2 ] } ]\n\ntest> db.test.aggregate([{$sort: {v: -1}}])\n[ { _id: 'b', v: [ 1, 3 ] }, { _id: 'a', v: [ 2 ] } ]\n$groupvtest> db.test.aggregate([{$group: {_id: \"$v\"}}, {$sort: {_id: 1}}])\n[ { _id: [ 1, 3 ] }, { _id: [ 2 ] } ]\n\ntest> db.test.aggregate([{$group: {_id: \"$v\"}}, {$sort: {_id: -1}}])\n[ { _id: [ 2 ] }, { _id: [ 1, 3 ] } ]\n",
"text": "Let’s say I have the following data set:A regular query with sorting works as expected:An aggregation pipeline with $sort works the same way:But when used with $group by the same array field v, sorting works differently:Note that the last result is different.I could not find anything in the documentation that would explain that behavior. Please help me understand it.",
"username": "Ratchet"
},
{
"code": "[2][1,3]",
"text": "Hello @Ratchet ,Welcome to The MongoDB Community Forums! There is a difference in behaviour on how $sort works while doing a $group operation on an array.I believe why you were getting [1,3] every time before [2] in above examples was because both the positions of arrays were being checked when a normal sort operation was underway but with $group [1,3] is considered as distinct value and when $sort is done with group, it will check 1 is less than 2 hence it will provide the result on just this condition, it will not check the second position that is 3 is less than 2.Lastly, I do not recommend sorting on an array field as the output can be unintuitive, which you saw when an ascending and descending sort resulted in the same ordering. Instead I would recommend having a more definite query field (proper sort key) which should not create ambiguity in the query planner.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Inconsistency with $group and $sort | 2023-06-13T07:52:09.928Z | Inconsistency with $group and $sort | 574 |
null | [
"android"
] | [
{
"code": "{\n \"_id\": \"d5243083e45e47f1\",\n \"UserId\": \"1\",\n \"IsAndroid\": true,\n \"IsIos\": false,\n \"IsWeb\": false\n},\n{\n \"_id\": \"d5243083e45e47f2\",\n \"UserId\": \"2\",\n \"IsAndroid\": false,\n \"IsIos\": true,\n \"IsWeb\": false\n},\n{\n \"_id\": \"d5243083e45e47f3\",\n \"UserId\": \"3\",\n \"IsAndroid\": false,\n \"IsIos\": false,\n \"IsWeb\": true\n}\n",
"text": "How would I add a field that rolls up IsAndroid, IsIos and IsWeb into one field and has values of ‘android’, ‘ios’, ‘web’? I have been thru the documentation regarding $cond and am having a hard time figuring this out.This is the data",
"username": "N_A_N_A20"
},
{
"code": "",
"text": "Hello @N_A_N_A20,Welcome to The MongoDB Community Forums! To understand your use-case better, can you please provide more details such as:Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | How would I add a calculated field for charting | 2023-06-13T19:18:44.927Z | How would I add a calculated field for charting | 673 |
null | [
"java",
"spring-data-odm"
] | [
{
"code": "\nclass User {\n\nprivate String userId;\n..........\n}\n\n\nclass Email {\nprivate String id;\n@DBref\nprivate User user;\n\n}\n",
"text": "I have two collection user and email. I am using spring data mongo db. See below classes.So here user has userId like [email protected]. So can we use it as _id as well as dbref. We have index on those id as well",
"username": "Salman_Khandu"
},
{
"code": "",
"text": "Hi @Salman_Khandu and welcome to MongoDB community forums!!The DBRefs in MongoDB provide a common format and type to represent relationships among documents.However this is a convention instead of a server feature, and so it comes with the tradeoff of having to do multiple queries to the database to resolve the references.Could you please provide more information regarding the code snippet you shared? I have a few questions to better understand the requirements:I would appreciate if you could provide additional details to help me grasp the context more accurately.Regards\nAasawari",
"username": "Aasawari"
}
] | Can we use special characters in primary key as well as DBRef | 2023-06-12T06:27:23.761Z | Can we use special characters in primary key as well as DBRef | 764 |
null | [
"storage"
] | [
{
"code": "> [aaa@ddd mongodb]$ sudo systemctl status mongod.service\n> ● mongod.service - MongoDB Database Server\n> Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n> Active: failed (Result: exit-code) since Mon 2023-06-12 16:55:03 CEST; 3min 39s ago\n> Docs: https://docs.mongodb.org/manual\n> Process: 13729 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=51)\n> Process: 13726 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n> Process: 13723 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n> Process: 13720 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n> Main PID: 1703 (code=killed, signal=ABRT)\n> \n> Jun 12 16:55:03 ddd systemd[1]: Starting MongoDB Database Server...\n> Jun 12 16:55:03 ddd mongod[13729]: about to fork child process, waiting until server is ready for connections.\n> Jun 12 16:55:03 ddd mongod[13729]: forked process: 13732\n> Jun 12 16:55:03 ddd mongod[13729]: ERROR: child process failed, exited with error number 51\n> Jun 12 16:55:03 ddd mongod[13729]: To see additional information in this output, start without the \"--fork\" option.\n> Jun 12 16:55:03 ddd systemd[1]: mongod.service: control process exited, code=exited status=51\n> Jun 12 16:55:03 ddd systemd[1]: Failed to start MongoDB Database Server.\n> Jun 12 16:55:03 ddd systemd[1]: Unit mongod.service entered failed state.\n> Jun 12 16:55:03 ddd systemd[1]: mongod.service failed.\n> [aaa@ddd mongodb]$ sudo systemctl start mongod.service\n> Job for mongod.service failed because the control process exited with error code. See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /mongo\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0 # Listen to local interface only, comment to listen on all interfaces.\n\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n",
"text": "First of all: hi everyone, my 1 topic here \nI have a problem because I cannot run mongo service on Linux.\nI tried changing ownership of folders, checking are folders ok with config.\nRemoving .lock files. I dont know what else I can do. Can I ask for some feedback? Thanks.This is my consoleand this is my mongod.conf:",
"username": "Brian_Bell"
},
{
"code": "",
"text": "Issue could be your dbpath directory\nCan mongod write to this directory?\nChange it to some other dir say your home dir and see if mongod comes up",
"username": "Ramachandra_Tummala"
},
{
"code": "sudo mongod --repair --dbpath /path/to/data/db\n",
"text": "Before changing the directory, I didand my permissions seems to be broken. What is the exact permissions I should have to db? Is it 755?And how change the directory of mongo? Should I backup my original folder or just leave it and create new folder somewhere and change only dbPath: /mongo in /etc/mongod.conf?I am using separate volume for mongo db directory, so I need to use that.\nIs there a way to just fix it? Its a prod environment. Is switching directories safe?",
"username": "Brian_Bell"
},
{
"code": "",
"text": "If it is prod do not change dbpath.You have to identify the issue and fix it\nI have suggested to see if it works with new dbpath suspecting some issue with it\nIs the mount point ok?\nApart from permissions you have to check ownership also.It should be owned by mongod\nWhat has changed from working status to now?\nCheck your mongod.log and compare when it was working fine to now\nDoes /mongo have mongo related files?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Brian_Bell,\nAs suggested from the status of service, try to restart it without option fork for understand better what Is the error.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "----- END BACKTRACE -----\n2023-06-13T21:04:54.150+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] MongoDB starting : pid=94579 port=27017 dbpath=/mongo 64-bit host=ddd\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] db version v3.6.17\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] git version: 3d6953c361213c5bfab23e51ab274ce592edafe6\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] modules: none\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] build environment:\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] distmod: rhel70\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-13T21:04:54.344+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"0.0.0.0\", port: 27017 }, processManagement: { fork: true, pidFilePath: \"/var/run/mongodb/mongod.pid\", timeZoneInfo: \"/usr/share/zoneinfo\" }, storage: { dbPath: \"/mongo\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-13T21:04:54.345+0200 I STORAGE [initandlisten] exception in initAndListen: Location28596: Unable to determine status of lock file in the data directory /mongo: boost::filesystem::status: Permission denied: \"/mongo/mongod.lock\", terminating\n2023-06-13T21:04:54.345+0200 F - [initandlisten] Invariant failure globalStorageEngine src/mongo/db/service_context_d.cpp 272\n2023-06-13T21:04:54.345+0200 F - [initandlisten]\n\n***aborting after invariant() failure\n\n\n2023-06-13T21:04:54.370+0200 F - [initandlisten] Got signal: 6 (Aborted).\n-rwxr-xr-x. 1 mongod mongod 94208 Jun 13 20:50 _mdb_catalog.wt\ndrwxr-xr-x. 5 mongod mongod 16384 Jun 19 2020 mongo19.bck\n-rw-------. 1 mongod mongod 0 Jun 13 20:50 mongod.lock\n-rwxr-xr-x. 1 mongod mongod 65536 Jun 13 20:50 sizeStorer.wt\n-rwxr-xr-x. 1 mongod mongod 114 Apr 8 2020 storage.bson\ndrwx------. 2 mongod mongod 6 Jun 13 20:50 _tmp\n-rwxr-xr-x. 1 mongod mongod 45 Apr 8 2020 WiredTiger\n-rw-------. 1 mongod mongod 4096 Jun 13 20:50 WiredTigerLAS.wt\n-rwxr-xr-x. 1 mongod mongod 21 Apr 8 2020 WiredTiger.lock\n-rw-------. 1 mongod mongod 1140 Jun 13 20:50 WiredTiger.turtle\n-rwxr-xr-x. 1 mongod mongod 786432 Jun 13 20:50 WiredTiger.wt\n[aaa@ddd tmp]$ sudo systemctl restart mongod.service\nJob for mongod.service failed because the control process exited with error code. 
See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n[aaa@ddd tmp]$ sudo journalctl -xe\n-- Unit mongod.service has begun starting up.\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=AVC msg=audit(1686683540.311:492556): avc: denied { search } for pid=96187 comm=\"mongod\"\nJun 13 21:12:20 ddd mongod[96187]: about to fork child process, waiting until server is ready for connections.\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686683540.311:492556): arch=c000003e syscall=2 success=no exit=-13 a0=55\nJun 13 21:12:20 ddd mongod[96187]: forked process: 96190\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=CWD msg=audit(1686683540.311:492556): cwd=\"/\"\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=PATH msg=audit(1686683540.311:492556): item=0 name=\"/sys/fs/cgroup/memory/memory.limit_in_b\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686683540.311:492556): proctitle=2F7573722F62696E2F6D6F6E676F64002D660\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=AVC msg=audit(1686683540.351:492557): avc: denied { getattr } for pid=96190 comm=\"mongod\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686683540.351:492557): arch=c000003e syscall=4 success=no exit=-13 a0=55\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=CWD msg=audit(1686683540.351:492557): cwd=\"/\"\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=PATH msg=audit(1686683540.351:492557): item=0 name=\"/mongo/mongod.lock\" inode=67 dev=fd:02\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686683540.351:492557): proctitle=2F7573722F62696E2F6D6F6E676F64002D660\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=ANOM_ABEND msg=audit(1686683540.366:492558): auid=4294967295 uid=776 gid=598 ses=4294967295\nJun 13 21:12:20 ddd mongod[96187]: ERROR: child process failed, exited with error number 51\nJun 13 21:12:20 ddd mongod[96187]: To see additional information in this output, start without the \"--fork\" option.\nJun 13 21:12:20 ddd systemd[1]: mongod.service: control process exited, code=exited status=51\nJun 13 21:12:20 ddd systemd[1]: Failed to start MongoDB Database Server.\n-- Subject: Unit mongod.service has failed\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has failed.\n--\n-- The result is failed.\nJun 13 21:12:20 ddd systemd[1]: Unit mongod.service entered failed state.\nJun 13 21:12:20 ddd systemd[1]: mongod.service failed.\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=SERVICE_START msg=audit(1686683540.379:492559): pid=1 uid=0 auid=4294967295 ses=4294967295\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686683540.385:492560): arch=c000003e syscall=62 success=yes exit=0 a0=17\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=OBJ_PID msg=audit(1686683540.385:492560): opid=96177 oauid=303693 ouid=0 oses=25537 obj=unc\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686683540.385:492560): proctitle=73797374656D63746C0072657374617274006\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686683540.386:492561): arch=c000003e syscall=62 success=yes exit=0 a0=17\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=OBJ_PID msg=audit(1686683540.386:492561): opid=96178 oauid=303693 ouid=0 oses=25537 obj=unc\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686683540.386:492561): proctitle=73797374656D63746C0072657374617274006\nJun 13 21:12:20 ddd audispd[988]: node=ddd 
type=SYSCALL msg=audit(1686683540.386:492562): arch=c000003e syscall=62 success=yes exit=0 a0=17\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=OBJ_PID msg=audit(1686683540.386:492562): opid=96178 oauid=303693 ouid=0 oses=25537 obj=unc\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686683540.386:492562): proctitle=\"(null)\"\nJun 13 21:12:20 ddd polkitd[1028]: Unregistered Authentication Agent for unix-process:96176:453232162 (system bus name :1.51400, object path /org/fre\nJun 13 21:12:20 ddd sudo[96174]: pam_unix(sudo:session): session closed for user root\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=USER_END msg=audit(1686683540.391:492563): pid=96174 uid=0 auid=303693 ses=25537 subj=uncon\nJun 13 21:12:20 ddd audispd[988]: node=ddd type=CRED_DISP msg=audit(1686683540.393:492564): pid=96174 uid=0 auid=303693 ses=25537 subj=unco\nJun 13 21:12:24 ddd audispd[988]: node=ddd type=USER_ACCT msg=audit(1686683544.047:492565): pid=96197 uid=303693 auid=303693 ses=25537 subj\nJun 13 21:12:24 ddd audispd[988]: node=ddd type=USER_CMD msg=audit(1686683544.048:492566): pid=96197 uid=303693 auid=303693 ses=25537 subj=\nJun 13 21:12:24 ddd audispd[988]: node=ddd type=CRED_REFR msg=audit(1686683544.049:492567): pid=96197 uid=0 auid=303693 ses=25537 subj=unco\nJun 13 21:12:24 ddd sudo[96197]: pam_unix(sudo:session): session opened for user root by aaa(uid=0)\nJun 13 21:12:24 ddd audispd[988]: node=ddd type=USER_START msg=audit(1686683544.057:492568): pid=96197 uid=0 auid=303693 ses=25537 subj=unc\n[aaa@ddd tmp]$\n",
"text": "I tried other db path to run, but I had the same problem.No I switched back to my main mongo dir and I am stuck with this situation.\nMaybe you will have any idea of what to do next? I think there is a problem with lock file?logfile:listing of few /mongo dir files (mongo.lock example):and here is some command output:",
"username": "Brian_Bell"
},
{
"code": "",
"text": "Hi @Brian_Bell,\nDelete this mongod.lock file and restart the service.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Hi Fabio,\nBut I am not restarting it with fork option…",
"username": "Brian_Bell"
},
{
"code": "",
"text": "Hi @Brian_Bell,\nSee my previous answer",
"username": "Fabio_Ramohitaj"
},
{
"code": "[aaa@ddd mongo]$ sudo rm -fr mongod.lock\n[aaa@ddd mongo]$ sudo systemctl restart mongod.service\nJob for mongod.service failed because the control process exited with error code. See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n[aaa@ddd mongo]$ sudo journalctl -xe\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=AVC msg=audit(1686684211.349:492648): avc: denied { search } for pid=98411 comm=\"mongod\"\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686684211.349:492648): arch=c000003e syscall=2 success=no exit=-13 a0=55\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=CWD msg=audit(1686684211.349:492648): cwd=\"/\"\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=PATH msg=audit(1686684211.349:492648): item=0 name=\"/sys/fs/cgroup/memory/memory.limit_in_b\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686684211.349:492648): proctitle=2F7573722F62696E2F6D6F6E676F64002D660\nJun 13 21:23:31 ddd mongod[98411]: about to fork child process, waiting until server is ready for connections.\nJun 13 21:23:31 ddd mongod[98411]: forked process: 98414\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=AVC msg=audit(1686684211.392:492649): avc: denied { write } for pid=98414 comm=\"mongod\"\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686684211.392:492649): arch=c000003e syscall=2 success=no exit=-13 a0=55\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=CWD msg=audit(1686684211.392:492649): cwd=\"/\"\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=PATH msg=audit(1686684211.392:492649): item=0 name=\"/mongo/\" inode=64 dev=fd:02 mode=040755\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=PATH msg=audit(1686684211.392:492649): item=1 name=\"/mongo/mongod.lock\" objtype=CREATE cap_\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686684211.392:492649): proctitle=2F7573722F62696E2F6D6F6E676F64002D660\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=ANOM_ABEND msg=audit(1686684211.407:492650): auid=4294967295 uid=776 gid=598 ses=4294967295\nJun 13 21:23:31 ddd mongod[98411]: ERROR: child process failed, exited with error number 51\nJun 13 21:23:31 ddd mongod[98411]: To see additional information in this output, start without the \"--fork\" option.\nJun 13 21:23:31 ddd systemd[1]: mongod.service: control process exited, code=exited status=51\nJun 13 21:23:31 ddd systemd[1]: Failed to start MongoDB Database Server.\n-- Subject: Unit mongod.service has failed\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has failed.\n--\n-- The result is failed.\nJun 13 21:23:31 ddd systemd[1]: Unit mongod.service entered failed state.\nJun 13 21:23:31 ddd systemd[1]: mongod.service failed.\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=SERVICE_START msg=audit(1686684211.416:492651): pid=1 uid=0 auid=4294967295 ses=4294967295\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686684211.418:492652): arch=c000003e syscall=62 success=yes exit=0 a0=18\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=OBJ_PID msg=audit(1686684211.418:492652): opid=98399 oauid=303693 ouid=0 oses=25537 obj=unc\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686684211.418:492652): proctitle=73797374656D63746C0072657374617274006\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686684211.418:492653): arch=c000003e syscall=62 success=yes exit=0 a0=18\nJun 13 21:23:31 ddd audispd[988]: node=ddd 
type=OBJ_PID msg=audit(1686684211.418:492653): opid=98400 oauid=303693 ouid=0 oses=25537 obj=unc\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686684211.418:492653): proctitle=73797374656D63746C0072657374617274006\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686684211.419:492654): arch=c000003e syscall=62 success=yes exit=0 a0=18\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=OBJ_PID msg=audit(1686684211.419:492654): opid=98400 oauid=303693 ouid=0 oses=25537 obj=unc\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686684211.419:492654): proctitle=\"(null)\"\nJun 13 21:23:31 ddd sudo[98397]: pam_unix(sudo:session): session closed for user root\nJun 13 21:23:31 ddd polkitd[1028]: Unregistered Authentication Agent for unix-process:98398:453299266 (system bus name :1.51413, object path /org/fre\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=USER_END msg=audit(1686684211.422:492655): pid=98397 uid=0 auid=303693 ses=25537 subj=uncon\nJun 13 21:23:31 ddd audispd[988]: node=ddd type=CRED_DISP msg=audit(1686684211.422:492656): pid=98397 uid=0 auid=303693 ses=25537 subj=unco\nJun 13 21:23:41 ddd audispd[988]: node=ddd type=USER_ACCT msg=audit(1686684221.080:492657): pid=98431 uid=303693 auid=303693 ses=25537 subj\nJun 13 21:23:41 ddd audispd[988]: node=ddd type=USER_CMD msg=audit(1686684221.081:492658): pid=98431 uid=303693 auid=303693 ses=25537 subj=\nJun 13 21:23:41 ddd audispd[988]: node=ddd type=CRED_REFR msg=audit(1686684221.081:492659): pid=98431 uid=0 auid=303693 ses=25537 subj=unco\nJun 13 21:23:41 ddd sudo[98431]: pam_unix(sudo:session): session opened for user root by aaa(uid=0)\nJun 13 21:23:41 ddd audispd[988]: node=ddd type=USER_START msg=audit(1686684221.087:492660): pid=98431 uid=0 auid=303693 ses=25537 subj=unc\n[aaa@ddd mongo]$ sudo systemctl status mongod.service\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Tue 2023-06-13 21:23:31 CEST; 2min 2s ago\n Docs: https://docs.mongodb.org/manual\n Process: 98411 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=51)\n Process: 98410 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 98406 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 98404 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n Main PID: 1703 (code=killed, signal=ABRT)\n\nJun 13 21:23:31 ddd systemd[1]: Starting MongoDB Database Server...\nJun 13 21:23:31 ddd mongod[98411]: about to fork child process, waiting until server is ready for connections.\nJun 13 21:23:31 ddd mongod[98411]: forked process: 98414\nJun 13 21:23:31 ddd mongod[98411]: ERROR: child process failed, exited with error number 51\nJun 13 21:23:31 ddd mongod[98411]: To see additional information in this output, start without the \"--fork\" option.\nJun 13 21:23:31 ddd systemd[1]: mongod.service: control process exited, code=exited status=51\nJun 13 21:23:31 ddd systemd[1]: Failed to start MongoDB Database Server.\nJun 13 21:23:31 ddd systemd[1]: Unit mongod.service entered failed state.\nJun 13 21:23:31 ddd systemd[1]: mongod.service failed.\n[aaa@ddd mongo]$ sudo tail /var/log/mongodb/mongod.log -n 60\n mongod(+0x2296F2D) [0x55735dd75f2d]\n libpthread.so.0(+0xF630) [0x7f2bb77f9630]\n libc.so.6(gsignal+0x37) [0x7f2bb7452387]\n libc.so.6(abort+0x148) [0x7f2bb7453a78]\n 
mongod(_ZN5mongo22invariantFailedWithMsgEPKcS1_S1_j+0x0) [0x55735c47affe]\n mongod(_ZN5mongo20ServiceContextMongoD9_newOpCtxEPNS_6ClientEj+0x158) [0x55735c722878]\n mongod(_ZN5mongo14ServiceContext20makeOperationContextEPNS_6ClientE+0x41) [0x55735dc2af31]\n mongod(_ZN5mongo6Client20makeOperationContextEv+0x27) [0x55735dc27017]\n mongod(+0xA12863) [0x55735c4f1863]\n mongod(+0x2292BF5) [0x55735dd71bf5]\n mongod(_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE+0x364) [0x55735c47c1d7]\n mongod(_ZZN5mongo13duration_castINS_8DurationISt5ratioILl1ELl1000EEEES2_ILl1ELl1EEEET_RKNS1_IT0_EEENKUlvE_clEv+0x0) [0x55735c414cbc]\n mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x87A) [0x55735c4f91ea]\n mongod(main+0x9) [0x55735c47d129]\n libc.so.6(__libc_start_main+0xF5) [0x7f2bb743e555]\n mongod(+0xA0235F) [0x55735c4e135f]\n----- END BACKTRACE -----\n2023-06-13T21:23:31.360+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-13T21:23:31.393+0200 I CONTROL [initandlisten] MongoDB starting : pid=98414 port=27017 dbpath=/mongo 64-bit host=ddd\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] db version v3.6.17\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] git version: 3d6953c361213c5bfab23e51ab274ce592edafe6\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] modules: none\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] build environment:\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] distmod: rhel70\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-13T21:23:31.394+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"0.0.0.0\", port: 27017 }, processManagement: { fork: true, pidFilePath: \"/var/run/mongodb/mongod.pid\", timeZoneInfo: \"/usr/share/zoneinfo\" }, storage: { dbPath: \"/mongo\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-13T21:23:31.394+0200 I STORAGE [initandlisten] exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /mongo, terminating\n2023-06-13T21:23:31.394+0200 F - [initandlisten] Invariant failure globalStorageEngine src/mongo/db/service_context_d.cpp 272\n2023-06-13T21:23:31.394+0200 F - [initandlisten]\n\n***aborting after invariant() failure\n\n\n2023-06-13T21:23:31.408+0200 F - [initandlisten] Got signal: 6 (Aborted).\n\n 0x55d61a997831 0x55d61a996a49 0x55d61a996f2d 0x7f59393ad630 0x7f5939006387 0x7f5939007a78 0x55d61909bffe 0x55d619343878 0x55d61a84bf31 0x55d61a848017 0x55d619112863 0x55d61a992bf5 0x55d61909d1d7 0x55d619035cbc 0x55d61911a1ea 0x55d61909e129 0x7f5938ff2555 0x55d61910235f\n----- BEGIN BACKTRACE 
-----\n{\"backtrace\":[{\"b\":\"55D618700000\",\"o\":\"2297831\",\"s\":\"_ZN5mongo15printStackTraceERSo\"},{\"b\":\"55D618700000\",\"o\":\"2296A49\"},{\"b\":\"55D618700000\",\"o\":\"2296F2D\"},{\"b\":\"7F593939E000\",\"o\":\"F630\"},{\"b\":\"7F5938FD0000\",\"o\":\"36387\",\"s\":\"gsignal\"},{\"b\":\"7F5938FD0000\",\"o\":\"37A78\",\"s\":\"abort\"},{\"b\":\"55D618700000\",\"o\":\"99BFFE\",\"s\":\"_ZN5mongo22invariantFailedWithMsgEPKcS1_S1_j\"},{\"b\":\"55D618700000\",\"o\":\"C43878\",\"s\":\"_ZN5mongo20ServiceContextMongoD9_newOpCtxEPNS_6ClientEj\"},{\"b\":\"55D618700000\",\"o\":\"214BF31\",\"s\":\"_ZN5mongo14ServiceContext20makeOperationContextEPNS_6ClientE\"},{\"b\":\"55D618700000\",\"o\":\"2148017\",\"s\":\"_ZN5mongo6Client20makeOperationContextEv\"},{\"b\":\"55D618700000\",\"o\":\"A12863\"},{\"b\":\"55D618700000\",\"o\":\"2292BF5\"},{\"b\":\"55D618700000\",\"o\":\"99D1D7\",\"s\":\"_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE\"},{\"b\":\"55D618700000\",\"o\":\"935CBC\",\"s\":\"_ZZN5mongo13duration_castINS_8DurationISt5ratioILl1ELl1000EEEES2_ILl1ELl1EEEET_RKNS1_IT0_EEENKUlvE_clEv\"},{\"b\":\"55D618700000\",\"o\":\"A1A1EA\",\"s\":\"_ZN5mongo11mongoDbMainEiPPcS1_\"},{\"b\":\"55D618700000\",\"o\":\"99E129\",\"s\":\"main\"},{\"b\":\"7F5938FD0000\",\"o\":\"22555\",\"s\":\"__libc_start_main\"},{\"b\":\"55D618700000\",\"o\":\"A0235F\"}],\"processInfo\":{ \"mongodbVersion\" : \"3.6.17\", \"gitVersion\" : \"3d6953c361213c5bfab23e51ab274ce592edafe6\", \"compiledModules\" : [], \"uname\" : { \"sysname\" : \"Linux\", \"release\" : \"3.10.0-1160.81.1.el7.x86_64\", \"version\" : \"#1 SMP Thu Nov 24 12:21:22 UTC 2022\", \"machine\" : \"x86_64\" }, \"somap\" : [ { \"b\" : \"55D618700000\", \"elfType\" : 3, \"buildId\" : \"F5A6F048CC3D823882D3FDB7A4A386EE2E0BC1D4\" }, { \"b\" : \"7FFD6F784000\", \"elfType\" : 3, \"buildId\" : \"EFDC01C543E3027D760D04B6BFDD53C3F48C6798\" }, { \"b\" : \"7F593A5B3000\", \"path\" : \"/lib64/libresolv.so.2\", \"elfType\" : 3, \"buildId\" : \"E0CD0DD5466E6B9E5FB10BFAFF13B1BB50F08EAA\" }, { \"b\" : \"7F593A150000\", \"path\" : \"/lib64/libcrypto.so.10\", \"elfType\" : 3, \"buildId\" : \"622F79C1AB7612F082403F4987CE1DAC287775C3\" }, { \"b\" : \"7F5939EDE000\", \"path\" : \"/lib64/libssl.so.10\", \"elfType\" : 3, \"buildId\" : \"7CBCB0322F585236D81B557ED95C708F98A20C33\" }, { \"b\" : \"7F5939CDA000\", \"path\" : \"/lib64/libdl.so.2\", \"elfType\" : 3, \"buildId\" : \"7F2E9CB0769D7E57BD669B485A74B537B63A57C4\" }, { \"b\" : \"7F5939AD2000\", \"path\" : \"/lib64/librt.so.1\", \"elfType\" : 3, \"buildId\" : \"3E44DF7055942478D052E40FDD1F5B7862B152B0\" }, { \"b\" : \"7F59397D0000\", \"path\" : \"/lib64/libm.so.6\", \"elfType\" : 3, \"buildId\" : \"7615604EAF4A068DFAE5085444D15C0DEE93DFBD\" }, { \"b\" : \"7F59395BA000\", \"path\" : \"/lib64/libgcc_s.so.1\", \"elfType\" : 3, \"buildId\" : \"EDF51350C7F71496149D064AA8B1441F786DF88A\" }, { \"b\" : \"7F593939E000\", \"path\" : \"/lib64/libpthread.so.0\", \"elfType\" : 3, \"buildId\" : \"E10CC8F2B932FC3DAEDA22F8DAC5EBB969524E5B\" }, { \"b\" : \"7F5938FD0000\", \"path\" : \"/lib64/libc.so.6\", \"elfType\" : 3, \"buildId\" : \"FC4FA58E47A5ACC137EADB7689BCE4357C557A96\" }, { \"b\" : \"7F593A7CD000\", \"path\" : \"/lib64/ld-linux-x86-64.so.2\", \"elfType\" : 3, \"buildId\" : \"62C449974331341BB08DCCE3859560A22AF1E172\" }, { \"b\" : \"7F5938DBA000\", \"path\" : \"/lib64/libz.so.1\", \"elfType\" : 3, \"buildId\" : \"E69C3975164331DF84F4E8955CC3F7A0836B05D0\" }, { \"b\" : \"7F5938B6D000\", \"path\" : \"/lib64/libgssapi_krb5.so.2\", 
\"elfType\" : 3, \"buildId\" : \"0CAEC124D97114DA40DDEB0FED1FAD5D14C3D626\" }, { \"b\" : \"7F5938884000\", \"path\" : \"/lib64/libkrb5.so.3\", \"elfType\" : 3, \"buildId\" : \"52C5B6279DA9CD210E5D58D1B1E0E080E5C8B232\" }, { \"b\" : \"7F5938680000\", \"path\" : \"/lib64/libcom_err.so.2\", \"elfType\" : 3, \"buildId\" : \"E4C7298B74FEEADC4DDE40CDD8C4D6B85FE09ADE\" }, { \"b\" : \"7F593844D000\", \"path\" : \"/lib64/libk5crypto.so.3\", \"elfType\" : 3, \"buildId\" : \"5FF9D1075A8D5D62F77F5CE56C935FCD92C62EFA\" }, { \"b\" : \"7F593823D000\", \"path\" : \"/lib64/libkrb5support.so.0\", \"elfType\" : 3, \"buildId\" : \"779381063DAECC27E8480C8F79F0651162586478\" }, { \"b\" : \"7F5938039000\", \"path\" : \"/lib64/libkeyutils.so.1\", \"elfType\" : 3, \"buildId\" : \"8CA73C16CFEB9A8B5660015B9223B09F87041CAD\" }, { \"b\" : \"7F5937E12000\", \"path\" : \"/lib64/libselinux.so.1\", \"elfType\" : 3, \"buildId\" : \"805AB866A4573EFEC4D8EA95123E8349B2B9D349\" }, { \"b\" : \"7F5937BB0000\", \"path\" : \"/lib64/libpcre.so.1\", \"elfType\" : 3, \"buildId\" : \"F5B144F9F5D9BE451C80211B34DB2CE348E039B6\" } ] }}\n mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x55d61a997831]\n mongod(+0x2296A49) [0x55d61a996a49]\n mongod(+0x2296F2D) [0x55d61a996f2d]\n libpthread.so.0(+0xF630) [0x7f59393ad630]\n libc.so.6(gsignal+0x37) [0x7f5939006387]\n libc.so.6(abort+0x148) [0x7f5939007a78]\n mongod(_ZN5mongo22invariantFailedWithMsgEPKcS1_S1_j+0x0) [0x55d61909bffe]\n mongod(_ZN5mongo20ServiceContextMongoD9_newOpCtxEPNS_6ClientEj+0x158) [0x55d619343878]\n mongod(_ZN5mongo14ServiceContext20makeOperationContextEPNS_6ClientE+0x41) [0x55d61a84bf31]\n mongod(_ZN5mongo6Client20makeOperationContextEv+0x27) [0x55d61a848017]\n mongod(+0xA12863) [0x55d619112863]\n mongod(+0x2292BF5) [0x55d61a992bf5]\n mongod(_ZN5mongo8shutdownENS_8ExitCodeERKNS_16ShutdownTaskArgsE+0x364) [0x55d61909d1d7]\n mongod(_ZZN5mongo13duration_castINS_8DurationISt5ratioILl1ELl1000EEEES2_ILl1ELl1EEEET_RKNS1_IT0_EEENKUlvE_clEv+0x0) [0x55d619035cbc]\n mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x87A) [0x55d61911a1ea]\n mongod(main+0x9) [0x55d61909e129]\n libc.so.6(__libc_start_main+0xF5) [0x7f5938ff2555]\n mongod(+0xA0235F) [0x55d61910235f]\n----- END BACKTRACE -----\n",
"text": "I deleted it and got this output:",
"username": "Brian_Bell"
},
{
"code": "",
"text": "Hi @Brian_Bell ,\nCommet the fork option in the config file and restart the service, then paste the output of systemctl status mongod & journalctl -xeBR",
"username": "Fabio_Ramohitaj"
},
{
"code": "[aaa@ddd mongo]$ cat /etc/mongod.conf\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /mongo\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n# fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0 # Listen to local interface only, comment to listen on all interfaces.\n\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n[aaa@ddd mongo]$ sudo systemctl restart mongod.service\nJob for mongod.service failed because a fatal signal was delivered to the control process. See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n[aaa@ddd mongo]$ sudo systemctl status mongod.service\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: signal) since Tue 2023-06-13 21:40:38 CEST; 9s ago\n Docs: https://docs.mongodb.org/manual\n Process: 101600 ExecStart=/usr/bin/mongod $OPTIONS (code=killed, signal=ABRT)\n Process: 101596 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 101594 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 101592 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n Main PID: 1703 (code=killed, signal=ABRT)\n\nJun 13 21:40:38 ddd systemd[1]: Starting MongoDB Database Server...\nJun 13 21:40:38 ddd systemd[1]: mongod.service: control process exited, code=killed status=6\nJun 13 21:40:38 ddd systemd[1]: Failed to start MongoDB Database Server.\nJun 13 21:40:38 ddd systemd[1]: Unit mongod.service entered failed state.\nJun 13 21:40:38 ddd systemd[1]: mongod.service failed.\n[aaa@ddd mongo]$ sudo journalctl -xe\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=PATH msg=audit(1686685238.661:492779): item=0 name=\"/sys/fs/cgroup/memory/memory.limit_in_b\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686685238.661:492779): proctitle=2F7573722F62696E2F6D6F6E676F64002D660\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=AVC msg=audit(1686685238.697:492780): avc: denied { write } for pid=101600 comm=\"mongod\"\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686685238.697:492780): arch=c000003e syscall=2 success=no exit=-13 a0=55\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=CWD msg=audit(1686685238.697:492780): cwd=\"/\"\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=PATH msg=audit(1686685238.697:492780): item=0 name=\"/mongo/\" inode=64 dev=fd:02 mode=040755\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=PATH msg=audit(1686685238.697:492780): item=1 name=\"/mongo/mongod.lock\" objtype=CREATE cap_\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686685238.697:492780): proctitle=2F7573722F62696E2F6D6F6E676F64002D660\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=ANOM_ABEND msg=audit(1686685238.711:492781): auid=4294967295 uid=776 gid=598 ses=4294967295\nJun 13 21:40:38 ddd systemd[1]: mongod.service: control process exited, code=killed 
status=6\nJun 13 21:40:38 ddd systemd[1]: Failed to start MongoDB Database Server.\n-- Subject: Unit mongod.service has failed\n-- Defined-By: systemd\n-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel\n--\n-- Unit mongod.service has failed.\n--\n-- The result is failed.\nJun 13 21:40:38 ddd systemd[1]: Unit mongod.service entered failed state.\nJun 13 21:40:38 ddd systemd[1]: mongod.service failed.\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=SERVICE_START msg=audit(1686685238.720:492782): pid=1 uid=0 auid=4294967295 ses=4294967295\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686685238.724:492783): arch=c000003e syscall=62 success=yes exit=0 a0=18\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=OBJ_PID msg=audit(1686685238.724:492783): opid=101587 oauid=303693 ouid=0 oses=25537 obj=un\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686685238.724:492783): proctitle=73797374656D63746C0072657374617274006\nJun 13 21:40:38 ddd polkitd[1028]: Unregistered Authentication Agent for unix-process:101586:453401998 (system bus name :1.51432, object path /org/fr\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686685238.725:492784): arch=c000003e syscall=62 success=yes exit=0 a0=18\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=OBJ_PID msg=audit(1686685238.725:492784): opid=101588 oauid=303693 ouid=0 oses=25537 obj=un\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686685238.725:492784): proctitle=73797374656D63746C0072657374617274006\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686685238.725:492785): arch=c000003e syscall=62 success=yes exit=0 a0=18\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=OBJ_PID msg=audit(1686685238.725:492785): opid=101588 oauid=303693 ouid=0 oses=25537 obj=un\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686685238.725:492785): proctitle=\"(null)\"\nJun 13 21:40:38 ddd sudo[101584]: pam_unix(sudo:session): session closed for user root\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=USER_END msg=audit(1686685238.728:492786): pid=101584 uid=0 auid=303693 ses=25537 subj=unco\nJun 13 21:40:38 ddd audispd[988]: node=ddd type=CRED_DISP msg=audit(1686685238.728:492787): pid=101584 uid=0 auid=303693 ses=25537 subj=unc\nJun 13 21:40:44 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686685244.085:492788): arch=c000003e syscall=159 success=yes exit=0 a0=7\nJun 13 21:40:44 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686685244.085:492788): proctitle=\"/usr/bin/vmtoolsd\"\nJun 13 21:40:44 ddd audispd[988]: node=ddd type=SYSCALL msg=audit(1686685244.085:492789): arch=c000003e syscall=159 success=yes exit=0 a0=7\nJun 13 21:40:44 ddd audispd[988]: node=ddd type=PROCTITLE msg=audit(1686685244.085:492789): proctitle=\"/usr/bin/vmtoolsd\"\nJun 13 21:40:48 ddd audispd[988]: node=ddd type=USER_ACCT msg=audit(1686685248.074:492790): pid=101614 uid=303693 auid=303693 ses=25537 sub\nJun 13 21:40:48 ddd audispd[988]: node=ddd type=USER_CMD msg=audit(1686685248.075:492791): pid=101614 uid=303693 auid=303693 ses=25537 subj\nJun 13 21:40:48 ddd audispd[988]: node=ddd type=CRED_REFR msg=audit(1686685248.075:492792): pid=101614 uid=0 auid=303693 ses=25537 subj=unc\nJun 13 21:40:48 ddd sudo[101614]: pam_unix(sudo:session): session opened for user root by aaa(uid=0)\nJun 13 21:40:48 ddd audispd[988]: node=ddd type=USER_START msg=audit(1686685248.083:492793): pid=101614 uid=0 auid=303693 ses=25537 subj=un\nJun 13 21:40:48 ddd 
sudo[101614]: pam_unix(sudo:session): session closed for user root\nJun 13 21:40:48 ddd audispd[988]: node=ddd type=USER_END msg=audit(1686685248.097:492794): pid=101614 uid=0 auid=303693 ses=25537 subj=unco\nJun 13 21:40:48 ddd audispd[988]: node=ddd type=CRED_DISP msg=audit(1686685248.097:492795): pid=101614 uid=0 auid=303693 ses=25537 subj=unc\n[aaa@ddd mongo]$ sudo lsof -i:27017\n[aaa@ddd mongo]$\n",
"text": "@Fabio_Ramohitaj thanks for helping me outI did what you say, I guess there is no mongod process runinng (should there be one?)my output:",
"username": "Brian_Bell"
},
{
"code": "",
"text": "Hi @Brian_Bell ,\nAs suggested from @Ramachandra_Tummala ,\nVerify the permission of dbpath, so:\nls -latrh /\nPaste the result of directory mongo here.BR",
"username": "Fabio_Ramohitaj"
},
{
"code": "[ddd@ddd mongo]$ ls -latrh /mongo\ntotal 4.2G\n-rwxr-xr-x. 1 mongod mongod 21 Apr 8 2020 WiredTiger.lock\n-rwxr-xr-x. 1 mongod mongod 45 Apr 8 2020 WiredTiger\n-rwxr-xr-x. 1 mongod mongod 114 Apr 8 2020 storage.bson\ndrwxr-xr-x. 5 mongod mongod 16K Jun 19 2020 mongo19.bck\ndr-xr-xr-x. 18 root root 276 Apr 18 10:08 ..\ndrwxr-xr-x. 2 mongod mongod 4.0K Jun 12 16:54 diagnostic.data\ndrwxr-xr-x. 4 mongod mongod 4.0K Jun 13 00:00 backups\n-rw-------. 1 mongod mongod 16K Jun 13 20:49 index-0-6233074057024477686.wt\n-rw-------. 1 mongod mongod 16K Jun 13 20:49 index-1-6233074057024477686.wt\n[...]\n-rw-------. 1 mongod mongod 292K Jun 13 20:50 index-196-6233074057024477686.wt\ndrwx------. 2 mongod mongod 6 Jun 13 20:50 _tmp\n-rw-------. 1 mongod mongod 11M Jun 13 20:50 index-197-6233074057024477686.wt\n[...]\n-rw-------. 1 mongod mongod 16K Jun 13 20:50 index-236-6233074057024477686.wt\n-rwxr-xr-x. 1 mongod mongod 32K Jun 13 20:50 collection-7-8546731449771946358.wt\n[...]\n-rwxr-xr-x. 1 mongod mongod 36K Jun 13 20:50 collection-133-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 4.0K Jun 13 20:50 collection-121-8546731449771946358.wt\n-rw-------. 1 mongod mongod 4.0K Jun 13 20:50 WiredTigerLAS.wt\n-rwxr-xr-x. 1 mongod mongod 64K Jun 13 20:50 sizeStorer.wt\n-rwxr-xr-x. 1 mongod mongod 92K Jun 13 20:50 _mdb_catalog.wt\n-rwxr-xr-x. 1 mongod mongod 6.3M Jun 13 20:50 collection-96-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 16K Jun 13 20:50 collection-80-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 124K Jun 13 20:50 collection-73-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 44K Jun 13 20:50 collection-52-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 4.0K Jun 13 20:50 collection-37-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 16K Jun 13 20:50 collection-144-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 32K Jun 13 20:50 collection-141-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 76K Jun 13 20:50 collection-137-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 52K Jun 13 20:50 collection-114-8546731449771946358.wt\n-rwxr-xr-x. 1 mongod mongod 4.0K Jun 13 20:50 collection-104-8546731449771946358.wt\n-rw-------. 1 mongod mongod 1.2K Jun 13 20:50 WiredTiger.turtle\n-rwxr-xr-x. 1 mongod mongod 768K Jun 13 20:50 WiredTiger.wt\ndrwxr-xr-x. 6 mongod mongod 20K Jun 13 21:23 .\n",
"text": "ls -latrh /I shorted the long list of similar files with […]\nThese permission change when I run some typical commands. He is the output:",
"username": "Brian_Bell"
},
{
"code": "",
"text": "Hi @Brian_Bell,\nThe root directory of dbpath have the same permission?",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "What typical commands you ran?\nPermissions will not change unless you tried to start mongod directly as root\nYou should not start mongod as root.Use systemctl.Internally it will call mongod\nDefinitely there is some issue with your dbpath permissions\nFrom the logs you pasted it clearly says unable to read lock file at one time and /mongo is a read only file system at another time?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Which root directory?\nWhat i pasted is main directory of mongodb, its /mongo volume in my case.I run mongo service with sudo, because when I try to run it without sudo, that it ask me for root password.\nI think this might be the problem here.",
"username": "Brian_Bell"
},
{
"code": "\n[aaa@ddd ~]$ sudo tail -n 60 /var/log/mongodb/mongod.log\n2023-06-14T08:24:57.356+0200 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.\n2023-06-14T08:24:57.356+0200 I CONTROL [initandlisten] ** We suggest setting it to 'never'\n2023-06-14T08:24:57.356+0200 I CONTROL [initandlisten]\n2023-06-14T08:24:57.376+0200 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/mongo/diagnostic.data'\n2023-06-14T08:24:57.377+0200 I NETWORK [initandlisten] listening via socket bound to 0.0.0.0\n2023-06-14T08:24:57.377+0200 I NETWORK [initandlisten] listening via socket bound to /tmp/mongodb-27017.sock\n2023-06-14T08:24:57.377+0200 I NETWORK [initandlisten] waiting for connections on port 27017\n2023-06-14T08:24:58.592+0200 I NETWORK [listener] connection accepted from 10.150.18.158:49367 #1 (1 connection now open)\n2023-06-14T08:24:58.676+0200 I NETWORK [conn1] received client metadata from 10.150.18.158:49367 conn1: { driver: { name: \"mongo-java-driver|legacy\", version: \"3.10.2\" }, os: { type: \"Windows\", name: \"Windows Server 2012 R2\", architecture: \"amd64\", version: \"6.3\" }, platform: \"Java/Oracle Corporation/1.8.0_211-b12\" }\n2023-06-14T08:26:26.093+0200 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends\n2023-06-14T08:26:26.094+0200 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...\n2023-06-14T08:26:26.094+0200 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock\n2023-06-14T08:26:26.096+0200 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture\n2023-06-14T08:26:26.098+0200 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down\n2023-06-14T08:26:26.248+0200 I STORAGE [signalProcessingThread] shutdown: removing fs lock...\n2023-06-14T08:26:26.248+0200 I CONTROL [signalProcessingThread] now exiting\n2023-06-14T08:26:26.248+0200 I CONTROL [signalProcessingThread] shutting down with code:0\n2023-06-14T08:37:38.057+0200 I CONTROL [main] ***** SERVER RESTARTED *****\n2023-06-14T08:37:38.087+0200 I CONTROL [initandlisten] MongoDB starting : pid=13496 port=27017 dbpath=/mongo 64-bit host=ddd\n2023-06-14T08:37:38.087+0200 I CONTROL [initandlisten] db version v3.6.17\n2023-06-14T08:37:38.087+0200 I CONTROL [initandlisten] git version: 3d6953c361213c5bfab23e51ab274ce592edafe6\n2023-06-14T08:37:38.087+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\n2023-06-14T08:37:38.088+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2023-06-14T08:37:38.088+0200 I CONTROL [initandlisten] modules: none\n2023-06-14T08:37:38.088+0200 I CONTROL [initandlisten] build environment:\n2023-06-14T08:37:38.088+0200 I CONTROL [initandlisten] distmod: rhel70\n2023-06-14T08:37:38.088+0200 I CONTROL [initandlisten] distarch: x86_64\n2023-06-14T08:37:38.088+0200 I CONTROL [initandlisten] target_arch: x86_64\n2023-06-14T08:37:38.088+0200 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"0.0.0.0\", port: 27017 }, processManagement: { pidFilePath: \"/var/run/mongodb/mongod.pid\", timeZoneInfo: \"/usr/share/zoneinfo\" }, storage: { dbPath: \"/mongo\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2023-06-14T08:37:38.089+0200 I - [initandlisten] Detected data files in /mongo created by the 'wiredTiger' storage engine, so setting the active storage 
engine to 'wiredTiger'.\n2023-06-14T08:37:38.089+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1373M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release=\"3.0\",require_max=\"3.0\"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2023-06-14T08:37:39.057+0200 I STORAGE [initandlisten] WiredTiger message [1686724659:57091][13496:0x7fe38e1abb80], txn-recover: Main recovery loop: starting at 115/6784\n2023-06-14T08:37:39.199+0200 I STORAGE [initandlisten] WiredTiger message [1686724659:199445][13496:0x7fe38e1abb80], txn-recover: Recovering log 115 through 116\n2023-06-14T08:37:39.299+0200 I STORAGE [initandlisten] WiredTiger message [1686724659:299223][13496:0x7fe38e1abb80], txn-recover: Recovering log 116 through 116\n2023-06-14T08:37:39.371+0200 I STORAGE [initandlisten] WiredTiger message [1686724659:371203][13496:0x7fe38e1abb80], txn-recover: Set global recovery timestamp: 0\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten]\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten]\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten]\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten] ** We suggest setting it to 'never'\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten]\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten] ** We suggest setting it to 'never'\n2023-06-14T08:37:39.402+0200 I CONTROL [initandlisten]\n2023-06-14T08:37:39.422+0200 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/mongo/diagnostic.data'\n2023-06-14T08:37:39.423+0200 I NETWORK [initandlisten] listening via socket bound to 0.0.0.0\n2023-06-14T08:37:39.423+0200 I NETWORK [initandlisten] listening via socket bound to /tmp/mongodb-27017.sock\n2023-06-14T08:37:39.423+0200 I NETWORK [initandlisten] waiting for connections on port 27017\n2023-06-14T08:37:41.435+0200 I NETWORK [listener] connection accepted from 10.150.18.158:49452 #1 (1 connection now open)\n2023-06-14T08:37:41.435+0200 I NETWORK [conn1] received client metadata from 10.150.18.158:49452 conn1: { driver: { name: \"mongo-java-driver|legacy\", version: \"3.10.2\" }, os: { type: \"Windows\", name: \"Windows Server 2012 R2\", architecture: \"amd64\", version: \"6.3\" }, platform: \"Java/Oracle Corporation/1.8.0_211-b12\" }\n2023-06-14T08:39:08.094+0200 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends\n2023-06-14T08:39:08.094+0200 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...\n2023-06-14T08:39:08.094+0200 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock\n2023-06-14T08:39:08.096+0200 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture\n2023-06-14T08:39:08.097+0200 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down\n2023-06-14T08:39:08.126+0200 
I STORAGE [signalProcessingThread] shutdown: removing fs lock...\n2023-06-14T08:39:08.126+0200 I CONTROL [signalProcessingThread] now exiting\n2023-06-14T08:39:08.126+0200 I CONTROL [signalProcessingThread] shutting down with code:0\n[aaa@ddd ~]$ sudo systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: timeout) since Wed 2023-06-14 08:39:08 CEST; 5min ago\n Docs: https://docs.mongodb.org/manual\n Process: 13496 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=0/SUCCESS)\n Process: 13493 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 13490 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 13488 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n\nJun 14 08:37:37 ddd systemd[1]: Starting MongoDB Database Server...\nJun 14 08:39:08 ddd systemd[1]: mongod.service start operation timed out. Terminating.\nJun 14 08:39:08 ddd systemd[1]: Failed to start MongoDB Database Server.\nJun 14 08:39:08 ddd systemd[1]: Unit mongod.service entered failed state.\nJun 14 08:39:08 ddd systemd[1]: mongod.service failed.\n[aaa@ddd ~]$\n",
"text": "After server rebooting there is no file permission problem abymore, but I still cannot run service.\nLooks like connection issue?here is my output, please take a look:",
"username": "Brian_Bell"
},
{
"code": "",
"text": "Yes it came up but getting terminated by signal 15 graceful shutdown\nYou have to identify if anyone or process issuing this\nCheck your system logs,/var/adm/messages etc",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Ok thank you.\nShould my fork be still commented in config file?edit:\nI deleted fork comment, restarted server on application side and it worked ",
"username": "Brian_Bell"
}
] | ERROR child process failed, exited with error number 51 | 2023-06-12T15:17:35.962Z | ERROR child process failed, exited with error number 51 | 2,627 |
null | [
"aggregation",
"queries",
"indexes"
] | [
{
"code": "#<Aggregate::UniqueJob:0x00007f456ddf1380> {\n :_id => BSON::ObjectId('647f39ea1c1ea08d1ac6a7aa'),\n :arguments => [\n [0] 234,\n [1] 7186,\n [2] \"course\",\n [3] \"update_method_name!\"\n ],\n :job_class => \"JobName\"\n}\n \"filter\": {\n \"job_class\": \"JobName\",\n \"arguments\": [\n \"ClassName\",\n 42322,\n 170849,\n 81,\n 468,\n \"update_method_name!\"\n ]\n },\n",
"text": "I have a collection where there are 2 attributes as below:-Values for arguments attribute could have variable elements in the array.When there is where query applied on arguments and job_class query execution time is faster but Key Examined value (~40 k) is going very high as compared to Docs returned (0/1) even after index on ‘arguments’ column is used.So what could be the case that even after index being used Key Examined values are high? and can this be the reason for CPU utilization spike up?",
"username": "Viraj_Chheda"
},
{
"code": "test> db.testarry.findOne()\n{\n _id: ObjectId(\"64898d1834d0ff0b9c218515\"),\n names: [ 'A', 'B', 'C', 8, 5, 7 ]\n}\ntest> db.testarry.find({ names: { $gt: 7}}).explain('executionStats')....\nexecutionStats: {\n executionSuccess: true,\n nReturned: 304,\n executionTimeMillis: 4,\n totalKeysExamined: 893,\n totalDocsExamined: 304,\n executionStages: {\n....\narguments",
"text": "Hi @Viraj_Chheda and welcome to MongoDB community forums!!When there is where query applied on arguments and job_class query execution time is faster but Key Examined value (~40 k) is going very high as compared to Docs returned (0/1) even after index on ‘arguments’ column is used.If the keyExamined value is higher than docsReturned value, it means that more index entries were examined during the query execution than the number of matching documents found.This may occur because of the following reasons:When the indexes not have much distinct value which makes the query scan more documents the retuning the matching documents.This would also depend on how the query has been written for it to make efficient use of the indexes.\nFor instance, I tried to replicate this in my local environment using version 6.0.5.\nSample data:Index is created on names field and I use the\ntest> db.testarry.find({ names: { $gt: 7}}).explain('executionStats')\nwhich gives mewhich is similar to what you are seeing.Can you provide the query details and your expectations for the given sample document which would give us more clarity about the issue?Additionally, please provide the following information to help me efficiently understand and address the issue:Finally, my recommendation would be to follow the documentation on Indexing Strategies to find efficient method for using indexes.Regards\nAasawari",
"username": "Aasawari"
}
] | Query having large Keys Examined value even after index is used | 2023-06-12T08:26:25.486Z | Query having large Keys Examined value even after index is used | 813 |
[
"connecting"
] | [
{
"code": "",
"text": "I am trying to upload one of my full stack applications on render.com since heroku free hosting is dying , although the uploading is successfull , i get this error when deploying the server side of my app.\nIt says my Ip may not be whitelisted , however it very much is , i even did allow access to any IP and it still wouldnt work …\nimage1007×629 18.3 KB\n",
"username": "Mark_Klonis"
},
{
"code": "",
"text": "May i also mention i am using mongoose in nodejs with express\nAnd also M0 Sandbox (General) cluster tier",
"username": "Mark_Klonis"
},
{
"code": "",
"text": "this is what i get , so i assume the request is made successfully and i do get a response , however mongodb has not connected i guess\n\nimage919×152 10.2 KB\n",
"username": "Mark_Klonis"
},
{
"code": "",
"text": "This is the uri i am using to connect\n\nimage1122×249 11.8 KB\n",
"username": "Mark_Klonis"
},
{
"code": "",
"text": "Hey I had the same issue. On Render there is a “connect” button at the top; right beside manual deploy. It has some Static Outbound IP Addresses. I added those IP addresses to my Mongo DB account and it worked. Hope that helps.",
"username": "Mick_Maratta"
},
{
"code": "",
"text": "were you able to solve this issue… it would work fine and after some point, this error would pop up in logs… faced it in the lambda function is this related to the total number of connections or something?",
"username": "santhosh_h"
},
{
"code": "",
"text": "Hi @santhosh_h,My team and I faced the same problem and we could solve it by setting up a static IP address for the lambda, we put it in a private subnet and then used a NAT gateway, then you can add that IP to your MongoDB whitelist or you can keep your IP access list as 0.0.0.0/0, it will work both ways.I don’t know how aws make requests through lambdas or why is a static IP needed, but that solution works.Hope that can help you",
"username": "Antonio_Soto"
},
{
"code": "",
"text": "Why are there three IP Addresses? Thank you for this that was brilliant what you just did, Mick.",
"username": "Dean_Gladish"
},
{
"code": "",
"text": "Despite using the given connection string and whitlisting all IPs I’m not able to connect to the Atlas Cluster.\n\nimage1216×116 8.61 KB\n\nHow do I proceed with this ?",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "If you running a VPN try stopping it.If it is still does not work, turn off your firewall.",
"username": "steevej"
},
{
"code": "",
"text": "I’ve tried that too. It still doesn’t work. I am unable to run the mongodb shell too.\n\nimage1457×416 19.5 KB\n\nUnable to find a solution for these errors.\n\nimage462×867 8.34 KB\n",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "Why do you ping mongodb.net or mongodb.com while you try to connect to local instance at 127.0.0.1?Where do you want to connect exactly?",
"username": "steevej"
},
{
"code": "",
"text": "I was just trying to test my connection to mongodb.net\nSince all server connection strings end with that.I’ve used the Atlas CLI to create a cluster, and even that isn’t able to run any mongosh commands.\n\nimage1504×393 21.4 KB\n\n\nimage1891×325 17.2 KB\n\n\nimage1257×296 28 KB\nAnything else I should try out to test the extent of the issue ?",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "I was able to connect successfully to your cluster. This means the issue is on your side of the connection.Are you sure you are not using a VPN?Try a different internet provider.",
"username": "steevej"
},
{
"code": "",
"text": "Ohhh\nOkay…\nI’ll try a mobile network and see if the issue persists.",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "Thank you, this work for me",
"username": "sunkanmi_oguntimehin"
}
] | MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster | 2022-09-30T10:59:05.718Z | MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster | 6,511 |
|
null | [
"aggregation",
"dot-net",
"compass"
] | [
{
"code": "public List<Job> GetAbandonedJobs()\n{\n\tIMongoCollection<Job> jobColl = DatabaseService.GetCollection<Job>(_db, COLLECTION_NAME);\n\tIMongoCollection<Service> svcColl = DatabaseService.GetCollection<Service>(_db, \"Service\");\n\n\tvar query = from jobs in jobColl.AsQueryable()\n\t\t\t\tjoin services in svcColl.AsQueryable() on jobs.JobId equals services.JobId into joinGroup\n\t\t\t\tfrom services in joinGroup.DefaultIfEmpty()\n\t\t\t\twhere services.JobId == null\n\t\t\t\tselect new Job\n\t\t\t\t{\n\t\t\t\t\tID = jobs.ID,\n\t\t\t\t\tJobId = jobs.JobId,\n\t\t\t\t};\n\n\tvar results = query.ToList();\n\treturn results;\n}\n$project or $group does not support {document}public List<Service> GetAbandonedJobs()\n{\n\tIMongoCollection<Job> jobColl = DatabaseService.GetCollection<Job>(_db, COLLECTION_NAME);\n\tIMongoCollection<Service> svcColl = DatabaseService.GetCollection<Service>(_db, \"Service\");\n\n\tvar query = from job in jobColl.AsQueryable()\n\t\t\t\tjoin service in svcColl.AsQueryable() on job.JobId equals service.JobId into joinGroup\n\t\t\t\tfrom service in joinGroup.DefaultIfEmpty()\n\t\t\t\tselect new Service\n\t\t\t\t{\n\t\t\t\t\tID = job.ID,\n\t\t\t\t\tJobId = job.JobId,\n\t\t\t\t\tServiceId = service.ServiceId,\n\t\t\t\t};\n\n\tList<Service> results = query.ToList();\n\tresults = results.Where(x => x.ServiceId == null).ToList();\n\treturn results;\n}\ndb.Job.aggregate([\n {\n $lookup: {\n from: \"Service\",\n localField: \"JobId\",\n foreignField: \"JobId\",\n as: \"matchingRecords\"\n }\n },\n {\n $match: {\n matchingRecords: { $size: 0 }\n }\n }\n])\n",
"text": "I have two classes (job and service) which share a JobId and I’m trying to locate all of the job records that do not have any corresponding service records as follows;This however is returning the error $project or $group does not support {document}. I have also tried;Which returns numerous records but when they are filtered to return only those where ServiceId is null I get zero records. I know this cant be correct as the following query in Compass returns multiple records;",
"username": "Raymond_Brack"
},
{
"code": "var connectionString = \"mongodb://localhost\";\nvar clientSettings = MongoClientSettings.FromConnectionString(connectionString);\nclientSettings.LinqProvider = LinqProvider.V3;\nvar client = new MongoClient(clientSettings);\n",
"text": "Hi, @Raymond_Brack,Based on the exception message, it looks like you’re using the LINQ2 provider, which does not support the LINQ join syntax that you are using. Our newer LINQ3 provider does support this LINQ construct. LINQ3 is the default LINQ provider in 2.19.0 or later driver. For driver versions 2.14.0 to 2.18.x, you can opt into LINQ3 using code similar to the following:Please try the new LINQ3 provider and let us know if it returns the expected results from your query.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "IQueryable<Job> query = from jobs in jobQuery\n\t\t\t\t\t\tjoin services in svcQuery on jobs.JobId equals services.JobId into joinGroup\n\t\t\t\t\t\tfrom services in joinGroup.DefaultIfEmpty()\n\t\t\t\t\t\twhere services.JobId == null\n\t\t\t\t\t\tselect new Job\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tID = jobs.ID,\n\t\t\t\t\t\t\tRunId = jobs.RunId,\n\t\t\t\t\t\t\tJobId = jobs.JobId,\n\t\t\t\t\t\t};\n",
"text": "Hi James,Thanks for the prompt reply.I updated to 2.19.2 and tried it with the following and got no error however I still got no records.I wrote some code to loop through each job record and count how many of those had no service records and got to 36 before I stopped running the code, there are over 100,000 job records so it was taking it a while.",
"username": "Raymond_Brack"
},
{
"code": "query.ToString()queryToString()ToString()Console.WriteLine(query);\n",
"text": "I would recommend reviewing the MQL generated by this LINQ query to understand why no results are being returned. You can view the MQL by calling query.ToString(), setting a breakpoint to view the value of query in the debugger (which also implicitly calls ToString()), or installing the MongoDB Analyzer in your project (which will display the MQL as a tooltip). In most cases the easiest solution is to simply write the query to the console (which implicitly calls ToString()):Hopefully by reviewing the generated MQL you will be able to tweak your LINQ query to resolve the issue.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "joinGroup.DefaultIfEmpty()UUID(\"00000000-0000-0000-0000-000000000000\")nullservice.JobId == Guid.Empty",
"text": "Hi James,Viewing the MQL resolved the issue - the joinGroup.DefaultIfEmpty() option was creating an empty JobID, UUID(\"00000000-0000-0000-0000-000000000000\") rather than a null. Changing the where to service.JobId == Guid.Empty resolved the issue.Thanks again for your help.",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Join Into - $project or $group does not support {document} | 2023-06-13T23:23:29.982Z | Join Into - $project or $group does not support {document} | 562 |
null | [
"queries",
"ops-manager"
] | [
{
"code": "",
"text": "I am creating a rolling index using ops manager, It got stuck in middle and there is no progress, Kindly let us know how to stop or fix this issue.",
"username": "Krishna_Sai1"
},
{
"code": "",
"text": "Check your agent logs for clues.But as this is an Enterprise tool take the logs from for the deployment and create a support ticket.How does Ops Manager rotate its logs and the Agent logs?",
"username": "chris"
}
] | Rolling index is stuck via Ops manager | 2023-06-14T13:23:55.490Z | Rolling index is stuck via Ops manager | 641 |
null | [
"aggregation",
"python"
] | [
{
"code": "pymongo.errors.OperationFailure: not authorized on db to execute command { aggregate: \"enrollment\", pipeline: [ { $lookup: { from: \"program\", localField: \"programId\", foreignField: \"programId\", .... { id: UUID(\"1fb99c09-22d4-48d2-838e-64c6a0bfdc59\") }, $clusterTime: { clusterTime: Timestamp(1686673905, 227), signature: { hash: BinData(0, FC1576C7BC8A657F23C06554D40096C5CE316190), keyId: 7211160844858556522 } }, $db: \"db\" }', 'code': 13, 'codeName': 'Unauthorized', '...'\n",
"text": "I have a developer who’s trying to run a python script but is getting the following error:We created a user account that has the following permissions:\ndb_list_collection@admin\[email protected] I missing some permissions here? ideally I don’t want to give it full read permissions but just what it needs to execute the script. any and all help would be great!",
"username": "Chris_John"
},
{
"code": " aggregate: \"enrollment\"\"enrollment\"",
"text": " aggregate: \"enrollment\"Can you try providing the same user account with read permissions on the \"enrollment\" collection to see if it resolves the error?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "thanks, just did and same issue…not sure what’s going on!",
"username": "Chris_John"
},
{
"code": "",
"text": "Thanks for confirming Chris.A few questions:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "aha! found out that there was an error in the command. After they gave me the full error output found out they were pointing to a collection that didn’t exist. appreciate the help!",
"username": "Chris_John"
},
{
"code": "",
"text": "Great! Thanks for confirming the root cause Chris.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | PyMongo Errors - Not Authorized to execute command { aggregate | 2023-06-14T23:00:57.892Z | PyMongo Errors - Not Authorized to execute command { aggregate | 1,074 |
null | [
"queries"
] | [
{
"code": "$gt$lt$mod",
"text": "$gt, $lt works with ObjectID; I don’t see why $mod is unsupported.\nThe use case is common: sharding multiple jobs to be executed on multiple task runners.Is this a bug, a feature request, or something in between?\nHow do I proceed with the request? Does this forum where it is supposed to be?",
"username": "3Ji"
},
{
"code": "",
"text": "On second thought, I can also implement the functionality at the ODM layer.\nBy adding a field populated with randomized integers.But the question still stands; basically, shouldn’t anything that works with integers also works on ObjectIDs too?",
"username": "3Ji"
},
{
"code": "$gt$lt$mod$mod$mod (aggregation)",
"text": "Hi @3Ji,$gt, $lt works with ObjectID; I don’t see why $mod is unsupported.What’s the actual command you’re attempting to execute with $mod? This might give some clarity in regards to whether or not this should be a feature request or not. Please include some sample ObjectID values too if possible.The $mod (aggregation) documentation states:The arguments can be any valid expression as long as they resolve to numbers.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "Job.where({\n _id: {\n '$mod': [4, 0],\n },\n}).each do |record|\n do_something_with(record.data)\nend\nJobMongoid::Document_id",
"text": "I’ll try with Mongoid syntax.\nHow about this:Where Job is Mongoid::Document and _id is a field of ObjectID.BTW, I don’t use aggregation in my projects, so I referred to $mod (query) in all of my contexts.",
"username": "3Ji"
},
{
"code": "divisorremainderNaNInfinityJob.where({\n _id: {\n '$mod': [4, 0],\n },\n})\nObjectId()40divisorremainderObjectId()",
"text": "BTW, I don’t use aggregation in my projects, so I referred to $mod (query) in all of my contexts.Thanks for confirming - An error will be returned if the divisor or remainder values evaluate to:In terms of your post / question - Are you wanting to specify ObjectId() values where the 4 and 0 exist? That is, the divisor and remainder are ObjectId() values.If this is the case, then perhaps raising a feedback post for this which include your use case details would be the right path forward.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [feature request? bug?] $mod should works on ObjectID too | 2023-06-11T07:20:50.064Z | [feature request? bug?] $mod should works on ObjectID too | 467 |
null | [] | [
{
"code": "",
"text": "I am using MongoDB Driver nuget package version 2.19.1 in Xamarin iOS project and when i try to build i am getting the error error loading assemblies mongocrypt.dll. The last version that was working was 2.10.4.\nHow do i fix this error with version 2.19.1?Thanks in advance",
"username": "Balasubramanian_Ramanathan"
},
{
"code": "mongocrypt.dllMongoDB.Libmongocryptmongocrypt.dll",
"text": "Hi, @Balasubramanian_Ramanathan,Welcome to the MongoDB Community Forums. mongocrypt.dll is an unmanaged DLL implementing features for Client-side Field-Level Encryption (CSFLE) and Queryable Encryption (QE). If you are not using these features, you can exclude the MongoDB.Libmongocrypt NuGet package (which references mongocrypt.dll) from your build process.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Could you please tell me, How can i exclude the dll from the build process. It is implicitly referenced package.Thanks",
"username": "Balasubramanian_Ramanathan"
},
{
"code": "",
"text": "As you have said i excluded the MongoDB.Libmongocrypt from the .nuspec file and it is building. Will there be an official fix for this?Thanks",
"username": "Balasubramanian_Ramanathan"
},
{
"code": "MongoDB.Libmongocrypt",
"text": "I’m glad that you were able to get your project building by excluding MongoDB.Libmongocrypt. We are considering making changes to how we package the driver. This wouldn’t happen until the next major release because it would involve breaking changes. Follow CSHARP-4442 and CSHARP-4531 for updates.",
"username": "James_Kovacs"
},
{
"code": "",
"text": "I am not able to get it to work with 2.19.2. It does not work for arm64 devices even though i exclude the mongocrypt.dll. After excluding this it works in iPhone simulato(which is x86 but it fails to build for ios devices. they are trying to link windows registry which is not available for ios devices(arm 64). i fall back to 2.10.4. Please fix this.",
"username": "Balasubramanian_Ramanathan"
},
{
"code": "<PropertyGroup>\n <RuntimeIdentifiers>osx;osx-x86;osx-x64</RuntimeIdentifiers>\n <NuGetRuntimeIdentifier>osx</NuGetRuntimeIdentifier>\n</PropertyGroup>\n",
"text": "I managed to upgrade to version 2.17 by addingto the ios projecthowever when i upgrade the stable 2.19.2 i get this errorcould not aot the assembly zstdsharp.dll.This is a newly added dependency in 2.18. why do we have it? do we really need it?",
"username": "Balasubramanian_Ramanathan"
},
{
"code": "List of configured name servers must not be empty.\nList of configured name servers must not be empty.\n",
"text": "With 2.17 when the connection string has +srv (dns seed list connection string) i am getting this error ```In Windows with 2.19.2 this is working. But with ios i cant use the +srv in the connection string. When +srv in the connection string it works in the simulator. when i deploy it to the device i get the error",
"username": "Balasubramanian_Ramanathan"
},
{
"code": "List of configured name servers must not be empty.",
"text": "ZstdSharp.Port is a third-party managed library implementing the zstd compression algorithm. The driver uses it for wire protocol compression. It is not required unless you enable the zstd compressor for compressing network traffic. (It is off by default.)Regarding the error List of configured name servers must not be empty. is from DnsClient.NET, the third-party library that we use for looking up SRV and TXT records in DNS. Mobile devices often have their DNS configuration locked down preventing DnsClient.NET from initializing.The .NET/C# Driver is designed to work in server environments, not on mobile devices. For mobile development we recommend using the MongoDB Realm .NET SDK.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Thank you for the information. It would be great to use the MongoDB Driver in xamarin ios/andorid applications. Because the drivers support .net standard they should be able to run on xamaring ios/android applications. The version 2.17 is working fine with xamarin ios. With 2.19 ZstdSharp.Port is causing problem. It has been newly added in 2.19.I have released a MongoDB Client for iOS in the appstore MongoDBProg2 - MongoDB Client on the App Store using the mongodb driver in ios side and also as a server side in bridge server.Thank you for answering my all questions.",
"username": "Balasubramanian_Ramanathan"
},
{
"code": "ZstdSharp.PortZstdSharp.Port",
"text": "That is fantastic that you released your project in the AppStore. Congratulations!As for the ZstdSharp.Port problem, I would suggest filing a bug with that project. Once it is resolved, we can pull in the updated dependency. ZstdSharp.Port is a completely managed implementation of the zstd compression algorithm. Therefore I’m not sure why Xamarin’s AOT compiler can’t process it.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | In Xamarin ios build i am getting the error loading assemblies mongocrypt.dll | 2023-05-14T12:14:39.441Z | In Xamarin ios build i am getting the error loading assemblies mongocrypt.dll | 904 |
[
"atlas-cluster"
] | [
{
"code": "",
"text": "\nimage1562×645 15.1 KB\n",
"username": "Sunwoo_Lee"
},
{
"code": "",
"text": "Hi @Sunwoo_Lee,Please contact the Atlas in-app chat support regarding this as they have further insight to your Atlas project. When contacting the chat support, please provide them a link to the cluster or the (cluster + project name) that is experiencing the issue.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Just checked and it was resolved",
"username": "Sunwoo_Lee"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "Request invalid. Please visit your Clusters and try again." when trying to view collection in cluster | 2023-06-14T23:02:51.085Z | “Request invalid. Please visit your Clusters and try again.” when trying to view collection in cluster | 460 |
|
[] | [
{
"code": "",
"text": "Any idea why this filter doesn’t return a document when a similar filter returns a document in Atlas?\nScreenshot 2023-06-13 112956772×838 24 KB\n",
"username": "Joel_Zehring"
},
{
"code": "ObjectId()ObjectId('6487b46a56301af798b9a025')",
"text": "Hi @Joel_Zehring I’ve not familiarised myself with Power Automate as per your post title but from a quick google it appears to be workflow automation software / service from Microsoft but please correct me if I am wrong here.when a similar filter returns a document in Atlas?When you state “similar” filter, have you tried using the same filter that you’ve used in Atlas? In addition to this, when you state Atlas, do you mean the Atlas Data explorer UI?Another thing I have noticed, although it might not be the exact reason, is that the ObjectId() is inside double quotes. Have you tried without the double quotes? I.e. ObjectId('6487b46a56301af798b9a025')Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks for your reply @Jason_Tran!You’re correct on Power Automate. Think Azure Logic Apps, but targeted to business users.Also, yes, I was referring to Atlas Data explorer UI.Finally, removing the quotes raises a validation error as the Find Document action expects the value of the “filter” field to be valid JSON.\n\nScreenshot 2023-06-14 082658777×340 16.8 KB\nThe MongoDB connector in Power Automate is relatively new (still in preview), so there’s not a lot of documentation to go off of yet.Thanks again!",
"username": "Joel_Zehring"
},
{
"code": "{ \"_id\" : { \"$oid\" : \"6487b46a56301af798b9a025\" } }\n",
"text": "How about:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "That worked!Thanks!",
"username": "Joel_Zehring"
},
{
"code": "",
"text": "Glad to hear and thanks for confirming Joel ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Power Automate Find Document Doesn't Return... a Document | 2023-06-13T18:34:41.428Z | Power Automate Find Document Doesn’t Return… a Document | 768 |
|
null | [
"aggregation",
"dot-net"
] | [
{
"code": "[\n {\n $match: {\n _id: ObjectId(\"647615c2422457db597f9d96\")\n }\n },\n {\n $project: {\n accountMembers: {\n $filter: {\n input: \"$accountMembers\",\n as: \"eachItem\",\n cond: {\n $eq: [\"$$eachItem.createdById\", \"userId\"]\n }\n }\n }\n }\n }\n]\nvar filterDef = Builders<TAccount>.Filter.Eq(x => x.Id, \"647615c2422457db597f9d96\");\n\nreturn await Repository.Aggregate()\n .Match(filterDef)\n .Project(??????);\n",
"text": "Been trying to convert this atlas projection to c# for a while now with now luck. Any insight on how to use a filter in a projection with the c# driver and classes?In c# this is where I am:",
"username": "Jeff_VanHorn"
},
{
"code": " var filterDef = Builders<TAccount>.Filter.Eq(x => x.Id, \"647615c2422457db597f9d96\");\n var result = repository.Aggregate()\n .Match(filterDef)\n .Project(Builders<BsonDocument>.Projection.Expression(x => new {\n AccountMembers = x[\"accountMembers\"].AsBsonArray.Where(y => y[\"createdById\"].AsString == \"userId\")\n }))\n .ToList();\n",
"text": "Hi Jeff! You can accomplish this using the Projection Builder API and a LINQ expression… the code for that would look something like:You can also pass in the expression as a BSON document, which will closely resemble the MQL you’ve written above. This article and video will give you a good idea of some how to proceed there. Note that the article demos both the fluent api as well as a more MQL-oriented workflow.",
"username": "Patrick_Gilfether1"
},
{
"code": ".Project(Builders<TAccount>.Projection.Expression(\n account => account.Members.Where(member => member.CreatedById == identity.UserId)))\n",
"text": "Thanks again. That put me in the right dirrection. Here is where I ended up:",
"username": "Jeff_VanHorn"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C# Project with a filter | 2023-06-14T17:09:31.536Z | C# Project with a filter | 579 |
null | [
"aggregation"
] | [
{
"code": "{\n$lookup: {\nfrom: “customer”,\nlet: {customerId: “$customerId”},\npipeline:[\n{$match: {\n$expr:{\n$and[\n{$eq:[“$_id”:{$toObjectId: “$customerId”}]}\n]\n}\n}\n}],\nas: “cid”\n}\n}\n",
"text": "I have an issue with the lookup field using in my charts. I need to pull one collection (customer) to another collection(program). CustomerId(object type in customer) is not the same in Program(it is string type in Program collection). I cannot change the types in the collection.\nI followed some of the previous posts and did some changes in my aggregation pipeline(managed charts view in program collection).Not sure where I am doing it wrong. Page is not even letting me test my pipeline or save it. Is there anything wrong in my code?\nThanks in advance.",
"username": "sunita_kodali"
},
{
"code": "",
"text": "If you look at the examples in the $lookup documentation you will see that variables defined with let: needs to use 2 $ signs. So I would try replacing $customerId in your $toObjectId with $$customerId.What do you mean by:Page is not even letting me test my pipeline or save it",
"username": "steevej"
},
{
"code": "",
"text": "Sure, will try. Thank you for your response.",
"username": "sunita_kodali"
}
] | Lookup string to object id | 2023-06-06T18:59:13.759Z | Lookup string to object id | 802 |
null | [
"replication",
"containers"
] | [
{
"code": "REPOSITORY TAG IMAGE ID CREATED SIZE\nbitnami/mongodb 5.0 9829d217910c 2 days ago 577MB\ncompose.yml mongodb:\n image: docker.io/bitnami/mongodb:${MONGODB_VERSION:-5.0}\n restart: always\n volumes:\n - mongodb_data:/bitnami/mongodb\n environment:\n MONGODB_REPLICA_SET_MODE: primary\n MONGODB_REPLICA_SET_NAME: ${MONGODB_REPLICA_SET_NAME:-rs0}\n MONGODB_PORT_NUMBER: ${MONGODB_PORT_NUMBER:-27017}\n MONGODB_INITIAL_PRIMARY_HOST: ${MONGODB_INITIAL_PRIMARY_HOST:-mongodb}\n MONGODB_INITIAL_PRIMARY_PORT_NUMBER: ${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}\n MONGODB_ADVERTISED_HOSTNAME: ${MONGODB_ADVERTISED_HOSTNAME:-mongodb}\n MONGODB_ENABLE_JOURNAL: ${MONGODB_ENABLE_JOURNAL:-true}\n ALLOW_EMPTY_PASSWORD: ${ALLOW_EMPTY_PASSWORD:-yes}\n expose:\n - 27017\n ports:\n - 27017:27017\ndocker logs <id>errorClass [Error]: [An error occurred when creating an index for collection \"users: getaddrinfo ENOTFOUND mongodb]\n at Collection.createIndex (packages/mongo/collection.js:801:15)\n at setupUsersCollection (packages/accounts-base/accounts_server.js:1777:9)\n at new AccountsServer (packages/accounts-base/accounts_server.js:75:5)\n at packages/accounts-base/server_main.js:7:12\n at module (packages/accounts-base/server_main.js:19:31)\n at fileEvaluate (packages/modules-runtime.js:336:7)\n at Module.require (packages/modules-runtime.js:238:14)\n at require (packages/modules-runtime.js:258:21)\n at /app/bundle/programs/server/packages/accounts-base.js:2193:15\n at /app/bundle/programs/server/packages/accounts-base.js:2200:3\n at /app/bundle/programs/server/boot.js:369:38\n at Array.forEach (<anonymous>)\n at /app/bundle/programs/server/boot.js:210:21\n at /app/bundle/programs/server/boot.js:423:7\n at Function.run (/app/bundle/programs/server/profile.js:256:14)\n at /app/bundle/programs/server/boot.js:422:13 {\n isClientSafe: true,\n error: 'An error occurred when creating an index for collection \"users: getaddrinfo ENOTFOUND mongodb',\n reason: undefined,\n details: undefined,\n errorType: 'Meteor.Error'\n}\n.envRocket.Chat configuration\n# Rocket.Chat version\n# see:- https://github.com/RocketChat/Rocket.Chat/releases\nRELEASE=6.2.5\n# MongoDB endpoint (include ?replicaSet= parameter)\n#MONGO_URL=\n# MongoDB endpoint to the local database\n#MONGO_OPLOG_URL=\n# IP to bind the process to\nBIND_IP=192.168.1.174\n# URL used to access your Rocket.Chat instance\nROOT_URL=http://192.168.1.174:3001\n# Port Rocket.Chat runs on (in-container)\n#PORT=\n# Port on the host to bind to\nHOST_PORT=3001\n\n### MongoDB configuration\n# MongoDB version/image tag\n#MONGODB_VERSION=\n\n### Traefik config (if enabled)\n# Traefik version/image tag\n#TRAEFIK_RELEASE=\n# Domain for https (change ROOT_URL & BIND_IP accordingly)\n#DOMAIN=\n# Email for certificate notifications\n#LETSENCRYPT_EMAIL=\n",
"text": "I am running MongoDB in a container. Here are the details of the image that was used:It is part of the Rocket chat docker installation. From the compose.yml file, here is the relevant code for the Mongodb container:Doing a docker logs <id> shows it is running into this issue:The containers keep restarting. Here is my .env file:",
"username": "Andie_Notz"
},
{
"code": "getaddrinfo ENOTFOUND mongodb",
"text": "getaddrinfo ENOTFOUND mongodbhow did you initiate the create index command? what’s the connection string like?",
"username": "Kobe_W"
},
{
"code": " docker compose up -d",
"text": "Sorry, I am not sure. I just know that issuing the docker compose up -d command, then accessing the program through the web app should work.\nHow can I find this out?",
"username": "Andie_Notz"
}
] | Running mongodb in a docker container | 2023-06-14T10:59:44.114Z | Running mongodb in a docker container | 965 |
null | [] | [
{
"code": "",
"text": "Another Dev and I are building a SwiftUI app that utilizes Realm + MongoDB Atlas device sync.We are experiencing some oddly slow page load times when we have a simple Vstack that contains maybe 20 items. With a very small amount of items the lag is barely noticeable but once we get higher it breaks down.We had to stop using the SwiftUI wrapper / helper classes that Realm has available for a number of reasons, they seem to have not been fully thought through and don’t come with good examples. Removing these helped some of the issues.It seems like SwiftUI probably doesn’t play nice with the way Realm is designed, the views are re-rendered a lot more than you would think and I’m currently assuming Realm is re-initializing data every time it’s called to re-render the view but I can’t confirm that.Has anyone else had issues like this? Are there any tricks that you can share or some big fix that I haven’t found yet?Does it work to convert realm objects into structs and wrap/unwrap them instead of calling the live objects every time?",
"username": "Richard_Anderson"
},
{
"code": "",
"text": "Generally speaking, Realm is pretty darn fast; objects are stored locally and synced in the background so network lag isn’t an issue.In our experience, performance issues are usually caused by bad code or incorrect implementation. Given the examples in the documentation could be better, they really do demonstrate the core of working with Realm.I feel the question could be better addressed if you could provide a minimal example of the performance issues you’re experiencing. Keeping in mind a forum is not a good troubleshooting platform for long sections of code, if you could include some brief sample code that demonstrates those issues, maybe we’ll spot something.",
"username": "Jay"
},
{
"code": "",
"text": "I will continue to troubleshoot and will update this post here with what I discover.But, I mostly posted the topic here in case anyone from the community has experience with SwiftUI + Realm and if they know of any common gotchas / tips / tricks for using the two together. I’m pretty sure someone else has run into similar behavior.I don’t think there is anything wrong with Realm here, but there might be issues with SwiftUI not playing nice with it.This post was useful but is more focused on MacOS: Performance issues with SwiftUI on macOS",
"username": "Richard_Anderson"
},
{
"code": "",
"text": "Understood. We use Realm, Swift & SwiftUI on macOS daily and don’t have any significant performance issues. That being said, our implementation could be way different than yours and I am sure our use cases are different as well.I can tell you that in some cases, if you’re copying Realm objects to an array, it will hinder performance. But if you’re not doing that then it doesn’t apply.Are you using @Observed objects? How is that implemented into a VStack? Did you try LazyVStack to see if there was any difference?Seeing some code would be useful to track down the issue so if you have time, share some.",
"username": "Jay"
},
{
"code": "",
"text": "A few updates. I’m relaying some of the info from another dev on the team, sorry if I don’t explain some of the details well.We are using @ObservedResults / @ObservedRealmObject property wrapper, but what we found is that even though realm freezes these (they are immutable results that get replaced from updates), but the binding still returns new memory references every time the UI requests data.So the new memory references seem to cause SwiftUI to have a lot of extra needless re-renders. No idea why the freeze/immutable behavior was designed like this but I can see this behavior not playing nice with reactive frameworks like SwiftUI. The way this should work with the frozen/immutable results is that you get the same memory reference with observed/data bindings unless something changed, that would likely totally fix the problem.The extra needless re-renders appear to cause our slow loading performance issues, but I can imagine a lot of projects use a very simple implementation of VStack/LazyVStacks that might not have as much of a negative impact. Ours is a bit more on the complicated side.We decided to test converting all fetched data to structs to side step the issue with constantly new memory references causing re-renders. We set it up in a new test app to verify and so far it appears to have completely removed the needless re-renders that we were seeing.I “think” this is the strategy we are using for the struct conversions, but I can’t say for sure atm. CustomPersistable Protocol ReferenceWe haven’t fully finished trouble shooting this but I wanted to add a quick update.\nIf others aren’t experiencing this issue with Realm + SwiftUI all I can think is that their use cases in SwiftUI for the realm data must be very simple compared to ours.",
"username": "Richard_Anderson"
},
{
"code": "import RealmSwift\nimport SwiftUI\n\n// MARK: Models\n\n/// Random adjectives for more interesting demo item names\nlet randomAdjectives = [\n \"fluffy\", \"classy\", \"bumpy\", \"bizarre\", \"wiggly\", \"quick\", \"sudden\",\n \"acoustic\", \"smiling\", \"dispensable\", \"foreign\", \"shaky\", \"purple\", \"keen\",\n \"aberrant\", \"disastrous\", \"vague\", \"squealing\", \"ad hoc\", \"sweet\"\n]\n\n/// Random noun for more interesting demo item names\nlet randomNouns = [\n \"floor\", \"monitor\", \"hair tie\", \"puddle\", \"hair brush\", \"bread\",\n \"cinder block\", \"glass\", \"ring\", \"twister\", \"coasters\", \"fridge\",\n \"toe ring\", \"bracelet\", \"cabinet\", \"nail file\", \"plate\", \"lace\",\n \"cork\", \"mouse pad\"\n]\n\n/// An individual item. Part of an `ItemGroup`.\nfinal class Item: Object, ObjectKeyIdentifiable {\n /// The unique ID of the Item. `primaryKey: true` declares the\n /// _id member as the primary key to the realm.\n @Persisted(primaryKey: true) var _id: ObjectId\n\n /// The name of the Item, By default, a random name is generated.\n @Persisted var name = \"\\(randomAdjectives.randomElement()!) \\(randomNouns.randomElement()!)\"\n\n /// A flag indicating whether the user \"favorited\" the item.\n @Persisted var isFavorite = false\n\n /// Users can enter a description, which is an empty string by default\n @Persisted var itemDescription = \"\"\n \n /// The backlink to the `ItemGroup` this item is a part of.\n @Persisted(originProperty: \"items\") var group: LinkingObjects<ItemGroup>\n \n}\n\n/// Represents a collection of items.\nfinal class ItemGroup: Object, ObjectKeyIdentifiable {\n /// The unique ID of the ItemGroup. `primaryKey: true` declares the\n /// _id member as the primary key to the realm.\n @Persisted(primaryKey: true) var _id: ObjectId\n\n /// The collection of Items in this group.\n @Persisted var items = RealmSwift.List<Item>()\n \n}\n\nextension Item {\n static let item1 = Item(value: [\"name\": \"fluffy coasters\", \"isFavorite\": false, \"ownerId\": \"previewRealm\"])\n static let item2 = Item(value: [\"name\": \"sudden cinder block\", \"isFavorite\": true, \"ownerId\": \"previewRealm\"])\n static let item3 = Item(value: [\"name\": \"classy mouse pad\", \"isFavorite\": false, \"ownerId\": \"previewRealm\"])\n}\n\nextension ItemGroup {\n static let itemGroup = ItemGroup(value: [\"ownerId\": \"previewRealm\"])\n \n static var previewRealm: Realm {\n var realm: Realm\n let identifier = \"previewRealm\"\n let config = Realm.Configuration(inMemoryIdentifier: identifier)\n do {\n realm = try Realm(configuration: config)\n // Check to see whether the in-memory realm already contains an ItemGroup.\n // If it does, we'll just return the existing realm.\n // If it doesn't, we'll add an ItemGroup and append the Items.\n let realmObjects = realm.objects(ItemGroup.self)\n if realmObjects.count == 1 {\n return realm\n } else {\n try realm.write {\n realm.add(itemGroup)\n itemGroup.items.append(objectsIn: [Item.item1, Item.item2, Item.item3])\n }\n return realm\n }\n } catch let error {\n fatalError(\"Can't bootstrap item data: \\(error.localizedDescription)\")\n }\n }\n}\n\n// MARK: Views\n\n// MARK: Main Views\n/// The main screen that determines whether to present the SyncContentView or the LocalOnlyContentView.\n/// For now, it always displays the LocalOnlyContentView.\n@main\nstruct ContentView: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n LocalOnlyContentView()\n }\n }\n}\n\n/// The main content view if not using Sync.\nstruct 
LocalOnlyContentView: View {\n @State var searchFilter: String = \"\"\n // Implicitly use the default realm's objects(ItemGroup.self)\n @ObservedResults(ItemGroup.self) var itemGroups\n \n var body: some View {\n if let itemGroup = itemGroups.first {\n // Pass the ItemGroup objects to a view further\n // down the hierarchy\n ItemsView(itemGroup: itemGroup)\n } else {\n // For this small app, we only want one itemGroup in the realm.\n // You can expand this app to support multiple itemGroups.\n // For now, if there is no itemGroup, add one here.\n ProgressView().onAppear {\n $itemGroups.append(ItemGroup())\n }\n }\n }\n}\n\n// MARK: Item Views\n/// The screen containing a list of items in an ItemGroup. Implements functionality for adding, rearranging,\n/// and deleting items in the ItemGroup.\nstruct ItemsView: View {\n @ObservedRealmObject var itemGroup: ItemGroup\n @State var counter = 0\n \n /// The button to be displayed on the top left.\n var leadingBarButton: AnyView?\n \n var body: some View {\n let _ = Self._printChanges()\n NavigationView {\n VStack {\n // The list shows the items in the realm.\n List {\n ForEach(itemGroup.items) { item in\n ItemRow(item: item)\n }.onDelete(perform: $itemGroup.items.remove)\n .onMove(perform: $itemGroup.items.move)\n }\n .listStyle(GroupedListStyle())\n .navigationBarTitle(\"Items\", displayMode: .large)\n .navigationBarBackButtonHidden(true)\n .navigationBarItems(\n leading: self.leadingBarButton,\n // Edit button on the right to enable rearranging items\n trailing: EditButton())\n // Action bar at bottom contains Add button.\n HStack {\n Spacer()\n Button(action: {\n // The bound collection automatically\n // handles write transactions, so we can\n // append directly to it.\n $itemGroup.items.append(Item())\n }) { Image(systemName: \"plus\") }\n }.padding()\n \n Button(\"Increment counter (\\(counter))\") {\n counter += 1\n }\n }\n }\n }\n}\n\nstruct ItemsView_Previews: PreviewProvider {\n static var previews: some View {\n let realm = ItemGroup.previewRealm\n let itemGroup = realm.objects(ItemGroup.self)\n ItemsView(itemGroup: itemGroup.first!)\n }\n}\n\n/// Represents an Item in a list.\nstruct ItemRow: View {\n @ObservedRealmObject var item: Item\n\n var body: some View {\n let _ = Self._printChanges()\n\n // You can click an item in the list to navigate to an edit details screen.\n NavigationLink(destination: ItemDetailsView(item: item)) {\n Text(item.name)\n if item.isFavorite {\n // If the user \"favorited\" the item, display a heart icon\n Image(systemName: \"heart.fill\")\n }\n }\n }\n}\n\n/// Represents a screen where you can edit the item's name.\nstruct ItemDetailsView: View {\n @ObservedRealmObject var item: Item\n\n var body: some View {\n VStack(alignment: .leading) {\n Text(\"Enter a new name:\")\n // Accept a new name\n TextField(\"New name\", text: $item.name)\n .navigationBarTitle(item.name)\n .navigationBarItems(trailing: Toggle(isOn: $item.isFavorite) {\n Image(systemName: item.isFavorite ? \"heart.fill\" : \"heart\")\n })\n }.padding()\n }\n}\n\nstruct ItemDetailsView_Previews: PreviewProvider {\n static var previews: some View {\n NavigationView {\n ItemDetailsView(item: Item.item2)\n }\n }\n}\n",
"text": "You actually don’t really need any example project, as the behavior is in the Realm SwiftUI quickstart with some minor modifications just to display that it’s happening.All I’ve modified here is adding a simple counter state to ItemsView and some debug printing that shows when/why things rerender.To reproduce, run this, add an item, then click on the “Increment counter” button and check the log. You will see something like:ItemsView: _counter changed.\nItemRow: @self changed.ItemRow gets rerendered even though it did not (functionally) change at all. But, in actuality it’s a brand new observable object getting passed in to ItemRow.A dev on github tested the memory address theory and said the same thing happens if you cache the items into an array, so that disproves the theory.My current theory is just that when you pass a Realm object to a view, you’re always implicitly creating an observable object, and that is always going to cause the view to rerender.If you do some extra work and only pass equivalent data as a struct, the problem goes away. But, obviously you lose a lot of benefits with this approach.I think for simple apps using List views that use navigation to drill down to detail views, this likely isn’t much of a problem because List manages how many items are “mounted” at any given time. But imagine something like a kanban board desktop app using a lot of custom stuff instead of List, and you might be able to imagine it becoming a major problem. Rerendering a component far up on the hierarchy causes a huge cascade of rerendering.-Jon",
"username": "Jonathan_Czeck"
},
{
"code": "",
"text": "This is actually very accurate, I used to reproduce this behavior with the same means for a couple of years now, and avoid passing realm objects to a view, I started the struct approach years ago. This is very accurate and on point.Also you can double check this by using TestFlight and comparing the re-renders that keep occurring on a page by page basis in the app. Readingminitial post, I’m glad I read through the rest of this as I was just about to suggest this.",
"username": "Brock"
},
{
"code": "",
"text": "Yes I find SwiftUI and RealmSwift don’t get along too well.Just placing the cursor in a TextField that is bound to a Realm object property causes the whole user interface to lock up on macOS. For some reason it seems this is causing all UI objects to be re-rendered. Not sure this is a RealmSwift related issue or a SwiftUI issue.I have two lists, one with around 30 realm objects and a second one with around 2000 objects (not realm objects).The bound property in the TextField comes from one of the realm objects from the first list. Just placing the cursor in the TextField causes the colourful umbrella to show up for 20 or 30 seconds. Pretty strange.EDIT:\nLazyVStack {} seems to largely address the issue - which does not appear to be RealmSwift related. Seems SwiftUI will by default create all the list views unless LazyStack is used. !",
"username": "Duncan_Groenewald"
},
{
"code": "",
"text": "Just placing the cursor in a TextField that is bound to a Realm object property causes the whole user interface to lock up on macOS.That’s very concerning. One of the projects we use internally for testing is macOS, SwiftUI and displays a couple hundred realm objects in a list along with a few bound TextFields in the same UI.We have not seen any slowdowns or other odd behavior. Not saying it isn’t there, we just dont notice them and are not experiencing a lock up.Do you have some brief sample code that duplicates the behavior? Going forward, it would be helpful to know what to avoid doing!",
"username": "Jay"
},
{
"code": "",
"text": "We had the same issue and here’s what we’ve found. Sorry if this isn’t explained well.SwiftUI does have a problem with re-rendering too much of the screen when any data changes. The new SwiftUI changes announced at WWDC should fix this problem moving forward with the new iOS version.RealmSwift still has a fundamental problem that doesn’t play nice with SwiftUI. Realm returns new references/object references every time the code hits a realm object. And every re-render requests the realm data… This is a very fundamental flaw that needs to be fixed for realm to play nice with SwiftUI. If someone doesn’t see issues from this, they probably have a simple view/screen with little data being displayed. SwiftUI’s reactive behavior is not designed to work with data that changes every single time you ask for it. The issue compounds when SwiftUI is re-rendering way too much of the screen for small data changes.It’s been a little while since I read about Realm’s freezing objects concept, which should remove this live object / new object issue, but it simply doesn’t do that when using Swift. I’m assuming this is a bug that hasn’t been addressed but it’s hard to say for sure.We work around this realm new object issue by transforming all realm objects to structs before giving it to a view so it properly freezes the data when a view gets ahold of the data. And if the data changes, we replace the relevant struct.Lastly, there is a problem with displaying a list of textfields in swiftUI. If 1 textfield grabs focus, SwiftUI re-renders the entire screen, basically. So that will cause very noticeable UI lag. We work around this by only displaying “text” objects in a list, and on tap we replace the text with a textfield so there is only 1 available at a time on a given list. I’m assuming this has been fixed in the new version of SwiftUI but I haven’t checked yet.",
"username": "Richard_Anderson"
},
{
"code": "",
"text": "@Jay - it turns out this has nothing to do with RealmSwift, it was just co-incidental that the text field was one bound to a realm objects property.Using LazyVStack or LazyHStack for a list containing a few thousand items makes the problem go away.Apparently SwiftUI will render every object in the list (??) unless the Lazy* is used so any list with a few thousand items becomes problematic.Not quite sure why it seems to render the entire list again when the cursor is placed in a TextField field.",
"username": "Duncan_Groenewald"
},
{
"code": "",
"text": "Not quite sure why it seems to render the entire list again when the cursor is placed in a TextField field.I’ve found this problem to be SwiftUI itself. Any view with @FocusState will rerender any time there is a change in focus, whether or not it’s relevant to that view. Hopefully iOS 17 and macOS Sonoma will fix it. I haven’t looked into the latest focus changes for that yet.",
"username": "Jonathan_Czeck"
}
] | Performance issues with SwiftUI + realm | 2023-03-17T17:00:29.782Z | Performance issues with SwiftUI + realm | 2,077 |
[] | [
{
"code": "",
"text": "I’ve read a few similar posts here and the docs but I’m still missing something. I get the following error when trying to create/write my first Realm object to my DB:RealmError: Cannot write to class Project when no flexible sync subscription has been created.I was reading the Node SDK and just following along with it to understand how to create a subscription. I’m not sure I need to actually create a filtered subscription that looks for objects with current user’s ID in the owner field but I felt like that’s a reasonable way to get started.I have no data in the DB for this object, this is my first attempt at running Realm. Thanks for any help… But, it seems I’m creating a subscription and I have flexible sync / dev mode enabled in the backend.Here’s what I have so far:\nScreenshot 2023-06-13 at 8.11.08 PM1081×495 40.7 KB\n",
"username": "d33p"
},
{
"code": "initialSubscriptionsawait realm.subscriptions.update((mutableSubs) => {\n mutableSubs.add(longRunningTasks, {\n name: \"longRunningTasksSubscription\",\n });\n mutableSubs.add(bensTasks);\n mutableSubs.add(realm.objects(\"Team\"), {\n name: \"teamsSubscription\",\n throwOnUpdate: true,\n });\n});\n",
"text": "Can you try using the subscriptions update API instead of using initialSubscriptions? Follow the guidelines from the docs here: https://www.mongodb.com/docs/realm/sdk/node/sync/flexible-sync/#std-label-node-sync-subscribe-to-queryable-fieldsSpecifically, update the subscriptions using something like this:",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "I did just test the initial subscriptions syntax that you’re using, and it appears to work. Can you also confirm the version of the realm SDK that you’re using?",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "How do I check version of my SDK? I just did an npm install realm about 2 weeks ago or so.As far as the subscriptions stuff, the code you referenced should that be run immediately after I run the open method?",
"username": "d33p"
},
{
"code": "\"realm\": \"^11.9.0\",await realm.openawait realm.subscriptions.update",
"text": "You can check your package.json - you should have a line line \"realm\": \"^11.9.0\",.And yes, after await realm.open you can run await realm.subscriptions.update",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "“version”: “11.9.0”,Ok, I’ll give it a try. Also, what is the syntax to create a subscription that says “just give me all objects in a collection”. That would be more helpful right now as I learn Realm, how it works, verifying the collections/data I read/write from my app, etc…",
"username": "d33p"
},
{
"code": "realm.objects(\"TableName\").filtered(\"truepredicate\")",
"text": "try realm.objects(\"TableName\").filtered(\"truepredicate\")",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "When you say true predicate, you mean something like “1 == 1”, just any expression that evaluates to true so that I get access to all documents in a collection?",
"username": "d33p"
},
{
"code": "\"truepredicate\"",
"text": "No I mean the literal string \"truepredicate\", in quotes, as your query",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "If I may make a suggestion?Since you’re new to Realm and really never used it before, I suggest starting work a local Realm first, and then once you have some experience with the Realm “ecossystem” expand into Flexible Sync and Sync’ing in general. Here’s a good starting placeMy suggestion is because trying to wrap your brain around Flex Sync, and at the same time learning the SDK and usage can be somewhat overwhelming - especially when trying to determine what subscriptions you’ll actually need and what you won’t.If you start with understanding Realm, when it comes time to plan your sync strategy, it will flow much easier and be less time consuming.Just a thoughtWelcome to Realm!Jay",
"username": "Jay"
},
{
"code": "const Realm = require('realm');\nconst { SchemaProject } = require('../models/SchemaProject.js');\nconst { ModelProject } = require('../models/ModelProject.js');\nconst { ControllerEventBus } = require('./ControllerEventBus.js');\n\nclass ControllerDatabase\n{\n constructor()\n {\n this.realmApp = null;\n this.db = null;\n this.user = null;\n this.config = null;\n }\n Close()\n {\n this.db.close();\n }\n Write()\n {\n /**\n * Test write\n */\n let project = new ModelProject();\n project.title = \"Test project\";\n project.name = \"Project name\";\n project.project = \"foo\";\n project.owner = this.user.id;\n var projectRealm = null;\n this.db.write(() => \n {\n projectRealm = this.db.create(\"Project\", project);\n });\n console.log(projectRealm.toString());\n }\n async Init()\n {\n try\n {\n this.realmApp = new Realm.App({id: \"APP\"});\n //const anonymousUser = await app.logIn(Realm.Credentials.anonymous());\n const credentials = Realm.Credentials.emailPassword(\n \"USER\",\n \"PW\"\n );\n this.user = await this.realmApp.logIn(credentials);\n this.config = \n {\n schema: [SchemaProject],\n sync: \n {\n user: this.user,\n flexible: true \n }\n }; \n this.db = await Realm.open(this.config);\n const query = this.db.objects(\"Project\").filtered(\"truepredicate\");\n await this.db.subscriptions.update((mutableSubs) => {\n mutableSubs.add(query, {name: \"allProjects\"});\n });\n\n global.deep.EventBus.Dispatch(ControllerEventBus.EVENT_DB, {\"state\": ControllerEventBus.EVENT_DB_READY });\n }\n catch(e)\n {\n console.log(\"Database exception is: \" + e.toString()); \n } \n }\n}\nexports.ControllerDatabase = ControllerDatabase;\n",
"text": "Ok excellent, moving the subs code into a step after the open, it now works. I verified that my test write (create a project object) worked and I can see it in my DB… very good.Thanks a ton, pasting code below as it might help someone learning Realm as I am.Thanks @Jay regarding ‘start slow’ advice. I agree… I’ll probably stop worrying about sync’ing to my Atlas instance for awhile and just try to pick up speed on how to CRUD locally with Realm, do queries, learn as I go, etc… But, I’ve spent the past couple weeks reading a ton and trying the basics so I’m getting past crawl phase at this point. Was a little bumpy but all new learning curves always are…",
"username": "d33p"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | New to Realm, problem creating sync subscription | 2023-06-14T00:14:50.684Z | New to Realm, problem creating sync subscription | 910 |
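For reference, the same subscription can also be registered up front through the sync config's initialSubscriptions option in recent Realm JS versions; a minimal sketch, assuming the same Project schema and App Services app used in the thread above:

```js
// Sketch only: SchemaProject and the logged-in user come from the thread's code.
const Realm = require("realm");

async function openWithInitialSubscription(user, SchemaProject) {
  const config = {
    schema: [SchemaProject],
    sync: {
      user,
      flexible: true,
      initialSubscriptions: {
        // Runs before the realm is handed back, so the first write
        // won't hit "no flexible sync subscription has been created".
        update: (mutableSubs, realm) => {
          mutableSubs.add(realm.objects("Project"), { name: "allProjects" });
        },
        rerunOnOpen: false,
      },
    },
  };
  return Realm.open(config);
}
```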
|
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "syncSession.publisher(for: \\.connectionState)syncSession.addProgressNotification(for: .download, mode: .forCurrentlyOutstandingWork)",
"text": "Is there any way to check the current sync status when using Atlas Device Sync in Swift?\nI’m looking for an ability to:I’m currently trying to determine the status usingsyncSession.publisher(for: \\.connectionState)andsyncSession.addProgressNotification(for: .download, mode: .forCurrentlyOutstandingWork)but that doesn’t seem to be reliable (the app sometimes incorrectly thinks it’s connected/synchronized).",
"username": "Andreas_Ley"
},
{
"code": "",
"text": "Are you using Flexible Sync or Partition Based Sync? It’s worth noting that progress notifications are not supported in Flexible Sync yet, but this is coming soon.You can check the connected state of the session by creating an observer: https://www.mongodb.com/docs/realm/sdk/swift/sync/network-connection/",
"username": "Sudarshan_Muralidhar"
},
{
"code": "\\.connectionState",
"text": "I’m currently using partition-based sync.As mentioned, observing the sync session’s \\.connectionState unfortunately isn’t perfect.",
"username": "Andreas_Ley"
}
] | Connection/sync status when using Atlas Device Sync with `RealmSwift` | 2023-06-13T18:45:16.479Z | Connection/sync status when using Atlas Device Sync with `RealmSwift` | 703 |
null | [
"atlas-triggers",
"app-services-cli"
] | [
{
"code": "",
"text": "We sometimes need to disable and eventually re-enable triggers in our mongo cloud application, usually after some heavy data processing during which we want all events emitted by the triggers to SQS and AWS event bridge to be completely ignored.We can do this by pulling the infra, updating the config and pushing back to the cloud (and repeating when we want them enabled), but is there an easier way to do this?Something like an endpoint we can hit or a function we can call that could e.g. switch the trigger “disabled” flag on-the-fly, every time it’s called or something along those lines?Thank you!",
"username": "George_Ivanov"
},
{
"code": "",
"text": "Hi, we actually have an Admin API for exactly this use case. See here for details on how to make Admin API requests and here for the specifics of the endpoint you are interested in. (MongoDB Atlas App Services Admin API)Let me know if that works for you.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thank you very much!\nWill give that a try!",
"username": "George_Ivanov"
},
{
"code": "",
"text": "That’s exactly what we were looking for!\nThanks!",
"username": "George_Ivanov"
}
] | Programmatically disabling and re-enabling triggers | 2023-06-13T09:30:37.687Z | Programmatically disabling and re-enabling triggers | 739 |
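A rough sketch of what calling that Admin API from Node could look like; the base URL and endpoint paths here are from memory and should be checked against the Admin API reference linked in the reply, and the key/ID values are placeholders:

```js
// Hedged sketch: verify paths against the App Services Admin API docs before use.
const BASE = "https://services.cloud.mongodb.com/api/admin/v3.0";

async function setTriggerDisabled(publicKey, privateKey, groupId, appId, triggerId, disabled) {
  // 1. Exchange the programmatic API key for an admin access token.
  const login = await fetch(`${BASE}/auth/providers/mongodb-cloud/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username: publicKey, apiKey: privateKey }),
  }).then((r) => r.json());

  const auth = { Authorization: `Bearer ${login.access_token}` };

  // 2. Fetch the current trigger config, flip the flag, and PUT it back.
  const url = `${BASE}/groups/${groupId}/apps/${appId}/triggers/${triggerId}`;
  const trigger = await fetch(url, { headers: auth }).then((r) => r.json());
  trigger.disabled = disabled;

  await fetch(url, {
    method: "PUT",
    headers: { ...auth, "Content-Type": "application/json" },
    body: JSON.stringify(trigger),
  });
}
```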
null | [] | [
{
"code": "",
"text": "GPG error: MongoDB Repositories bionic/mongodb-org/4.0 Release: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4\nE: The repository ‘MongoDB Repositories bionic/mongodb-org/4.0 Release’ is not signed.",
"username": "Divyesh_Panchani"
},
{
"code": "",
"text": "MongoDB 4.0 is EoL software(April 2022), as well as Ubuntu 18.04(June 2023)Looking at JIRA this is not something that MongoDB plans on fixing.",
"username": "chris"
},
{
"code": "",
"text": "its happening in Ubuntu 20.04 also , and I am currently using mongodb 3.6 , i want to upgrade it to mongodb 5.0 , how can I do it then ??",
"username": "Divyesh_Panchani"
},
{
"code": "dpkg -i trusted=yesman 5 sources.listecho \"deb [ arch=amd64,arm64 trusted=yes ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse\" > /etc/apt/sources.list.d/mongodb-org-4.0.list",
"text": "The deb files can be downloaded directly from the release archive and they can be installed via dpkg -i The other other workaround would be to explicitly trust the 4.0 repo, do this only on the 4.0 repo(or other unsupported / expired repo key) if you choose to use this option.Add the trusted=yes option to the repo definition, read man 5 sources.list to understand this option and its implications.\necho \"deb [ arch=amd64,arm64 trusted=yes ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse\" > /etc/apt/sources.list.d/mongodb-org-4.0.list",
"username": "chris"
}
] | Mongo db 4.0 GPG key expired for ubuntu 18.04 | 2023-06-13T13:12:09.278Z | Mongo db 4.0 GPG key expired for ubuntu 18.04 | 3,601 |
null | [
"aggregation",
"indexes",
"atlas-search"
] | [
{
"code": "",
"text": "Hello,In the WhatsCooking project, there are a default index to performe $search stage and another index (facetIndex) to performe $searchMeta stage. Is this a good practice and why? Can I have a single index for perform $search and $searchMeta stages?",
"username": "Matheus_Souza"
},
{
"code": "",
"text": "Hi @Matheus_Souza and welcome to the MongoDB community forum!!The $searchMeta stage of the aggregation pipeline, having the same syntax as $search, returns the Metadata information related to the search query.To answer your question:Can I have a single index for perform $search and $searchMeta stages?The same index can be used for both the operators, keeping in mind the output for both the stages would be different.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "\"jobField\": [\n {\n \"type\": \"stringFacet\"\n },\n {\n \"type\": \"string\"\n }\n",
"text": "@Aasawari but we would need to have two fields in the mapping, one for “string” and one for “facetString”? if we wanted to have a filter for jobField for example?",
"username": "Ed_Durguti"
},
{
"code": "",
"text": "It is not mandatory to have stringFacet. You must have facets if you want to use this feature.",
"username": "Matheus_Souza"
},
{
"code": " {\n \"type\": \"stringFacet\"\n },\n {\n \"type\": \"string\"\n }\n```",
"text": "Right, but if I want to use facets I have to have both defined, one as String and one as StringFacet",
"username": "Ed_Durguti"
},
{
"code": "",
"text": "Hi @Matheus_Souza !Just seeing this and hope your project is coming along well. You could indeed use the facetIndex for both $search and $searchMeta, and it would probably save space, as well. I used 2 indexes for the sake of teaching explicitly the concept of indexes.You find our explicit recommendations here if you are querying for facets: https://www.mongodb.com/docs/atlas/atlas-search/facet/#search_meta-aggregation-variableHappy coding!\nKaren",
"username": "Karen_Huaulme"
},
{
"code": "",
"text": "Yes, that’s right. You must to explicit that you want to group and count values for that field",
"username": "Matheus_Souza"
},
{
"code": "",
"text": "Hi @Karen_Huaulme!\nThanks for reply and tutorial. Our project is stable now and we are moving forward with a single index.\nHere is our project in action! Bares e Restaurantes em Sao Paulo, SP - Apontador",
"username": "Matheus_Souza"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is it better to use a specific index for facets? | 2023-02-15T12:25:45.079Z | Is it better to use a specific index for facets? | 1,327 |
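To illustrate the point above, a single index whose field is mapped as both string and stringFacet can back an ordinary $search as well as a $searchMeta facet count; the collection and field names below are placeholders rather than the actual WhatsCooking schema, and the facet field must be mapped as stringFacet in that index:

```js
// Regular search results from the shared index
db.restaurants.aggregate([
  { $search: { index: "facetIndex", text: { query: "pizza", path: "name" } } },
  { $limit: 10 },
]);

// Facet counts from the same index via $searchMeta
db.restaurants.aggregate([
  {
    $searchMeta: {
      index: "facetIndex",
      facet: {
        operator: { text: { query: "pizza", path: "name" } },
        facets: {
          cuisineFacet: { type: "string", path: "cuisine", numBuckets: 10 },
        },
      },
    },
  },
]);
```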
null | [
"node-js"
] | [
{
"code": "exports.postLogin = (req, res, next) => {\n const validationErrors = [];\n if (!validator.isEmail(req.body.email))\n validationErrors.push({ msg: \"Please enter a valid email address.\" });\n if (validator.isEmpty(req.body.password))\n validationErrors.push({ msg: \"Password cannot be blank.\" });\n\n if (validationErrors.length) {\n req.flash(\"errors\", validationErrors);\n return res.redirect(\"/login\");\n }\n req.body.email = validator.normalizeEmail(req.body.email, {\n gmail_remove_dots: false,\n });\n\n passport.authenticate(\"local\", (err, user, info) => {\n if (err) {\n return next(err);\n }\n if (!user) {\n req.flash(\"errors\", info);\n return res.redirect(\"/login\");\n }\n req.logIn(user, (err) => {\n if (err) {\n return next(err);\n }\n req.flash(\"success\", { msg: \"Success! You are logged in.\" });\n res.redirect(req.session.returnTo || \"/profile\");\n });\n })(req, res, next);\n};\n\n",
"text": "Model.findOne() no longer accepts a callback",
"username": "Yusuf_Tamale"
},
{
"code": "Promiseasync/await",
"text": "Hey @Yusuf_Tamale,Thank you for reaching out to the MongoDB Community forums.Model.findOne() no longer accepts a callbackThe use of callback functions has been deprecated in the latest version of Mongoose (version 7.x).Reference: Mongoose v7.2.4: Migrating to Mongoose 7If you are using Mongoose 7.x+, please modify the functions that use a callback by switching to the Promise or async/await syntax.I hope this helps! If you have any further questions or concerns, feel free to ask.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | MongooseError: Model.findOne() no longer accepts a callback i can ues any help please | 2023-06-14T08:46:20.717Z | MongooseError: Model.findOne() no longer accepts a callback i can ues any help please | 5,734 |
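A sketch of the callback-to-async/await change the reply describes, applied to a passport local strategy like the one in the question; the User model, its comparePassword method, and the strategy options are assumptions rather than code from the thread:

```js
const passport = require("passport");
const { Strategy: LocalStrategy } = require("passport-local");
const User = require("./models/User"); // hypothetical Mongoose model

passport.use(
  new LocalStrategy({ usernameField: "email" }, async (email, password, done) => {
    try {
      // Mongoose 7: no callback argument; await the returned promise instead.
      const user = await User.findOne({ email: email.toLowerCase() });
      if (!user) {
        return done(null, false, { msg: `Email ${email} not found.` });
      }
      const isMatch = await user.comparePassword(password); // assumed schema method
      if (!isMatch) {
        return done(null, false, { msg: "Invalid email or password." });
      }
      return done(null, user);
    } catch (err) {
      return done(err);
    }
  })
);
```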
null | [
"aggregation",
"queries",
"node-js",
"performance"
] | [
{
"code": "location{\n '_id' : ObjectId(\"635b92a0615739001cfc002d\"),\n 'location' : 'IND001',\n 'item_id' : '10001',\n 'quantity' : 20\n}\nCode{\n 'Code' : 'IND001',\n 'Name' : 'Shreyas Gombi',\n 'Address' : 'Bengaluru, Karnataka, India',\n 'Area' : 'Bengaluru',\n 'Region' : 'Karnataka',\n 'Zone' : 'South'\n}\nCode{\n 'Code' : '10001',\n 'ProductName' : 'PRODUCT-PART-XA123456',\n 'Category' : 'CATEGORY_1',\n 'SubCategory' : 'SUB_CATEGORY_3',\n 'Price' : '100',\n 'Currency' : 'INR'\n}\n$group$lookup$limit$skip$count$lookup$grouplocationitem_iddb.collection('_inventory').aggregate( [\n {\n $match : {'quantity' : {'$gt' : 0}}\n },\n {\n $group : {\n _id : {item_id : '$item_id', location : '$location'},\n location : {'$addToSet' : '$location'},\n quantity : {'$addToSet' : '$quantity'}\n }\n },\n {\n $lookup : {\n from : '_customers',\n localField : '_id.location',\n foreignField : 'Code',\n as : 'customer_details'\n }\n },\n {\n $match : {'$or' : [{'customer_details.Area' : 'Karnataka'}, {'customer_details.Area' : 'Maharashtra'}]}\n },\n {\n $lookup : {\n from : '_items',\n localField : '_id.item_id',\n foreignField : 'Code',\n as : 'item_details'\n }\n },\n {\n '$unwind' : {'path' : '$item_details', 'preserveNullAndEmptyArrays' : true}\n },\n {\n '$match' : search_query // ..... Search could be done by customers name or product name ... \n },\n {\n '$count' : 'count'\n }], {'allowDiskUse' : true}\nsearch_queryitem_details.ProductNamecustomer_details.Name{customer_details.Area : 'Karnataka'}AreaRegionZoneCategorySubCategory",
"text": "Hi Team,We are facing a slow query performance issue and we suspect that it might be due to our design approach being wrong in the first place, but we had no other way to validate it. Also we have close to a million records and we wish to first try and fix it through query optimisation instead of updating the records. The situation is quite straightforward and is explained below :We have an inventory collection with a property location where the value is a customer’s Unique ID / Code. We have created an index on the location and item / product ID columns. This collection holds close to half a million records.We have our Customer’s collection that holds meta data information of each customer, like their name, address, area, region, zones etc. We have created an index on the Code column. This collection holds data close to 5-10k.We also have our product information in our Items collection with product categorisation data. We have created an index on the Code column in Items collection. This collection holds another 60-70k data.We are building a front-end interface that will provide a view of our inventory as well as users can filter data by customer’s region / area / zone or by name. To achieve this we are using $group and $lookup aggregation pipelines (pipeline is shown below). We have 2 APIs, one for the data display of the inventory so we are using pagination - $limit and $skip, it will only pull 60 records, and the other API to pull a count - $count to indicate the number of records expected for that filtered data. But the performance of the count API is very slow, results are returned after 14-20 seconds.The problem here is that our Area, Region and Zone data is currently sitting in Customer’s collection and Category, SubCategory data of products in Items collection, which will need to be pulled by performing a $lookup. We also use a $group stage to group multiple lines of inventory data by location and item_id.Here’s the aggregation :search_query : We are building a search query assuming the lookup fields as item_details.ProductName or customer_details.Name, etc. We are also using the filters applied on the front-end like, filter by Area for example, we add a query here as {customer_details.Area : 'Karnataka'}We are faced with a few challenges here,We have a few possible solutions in mind,I remember in one of our earlier sessions on MongoDB, that redundancy is what complements MongoDB architecture and we should not treat it like an RDBMS. So we think this might be a design problem and not a query issue. But we wanted to know if there’s any other suggestion or if there’s a better way we can achieve the results without having to update the records.",
"username": "Shreyas_Gombi"
},
{
"code": "{\n $match : {'quantity' : {'$gt' : 0}}\n },\n {\n $group : {\n _id : {item_id : '$item_id', location : '$location'},\n location : {'$addToSet' : '$location'},\n quantity : {'$addToSet' : '$quantity'}\n }},\n {\n $lookup : {\n from : '_customers',\n localField : '_id.location',\n foreignField : 'Code',\n as : 'customer_details'\n }\n }\n",
"text": "Hi @Shreyas_Gombi and welcome to MongoDB community forums!!Firstly, thank you for the detailed post with all the relevant information.Based one the sample schema provided, I tried to create sample documents for all the collections and perform the above aggregation.\nBased on my understanding, the first part of the query,could be converted into a materialised view and perform the later stages in the aggregation pipeline.In recommending the materialised views, this would have a caveat that you would need to keep the documents updated with every update in the main collection.With the above recommendation, below are a few recommendations regarding the aggregation pipeline.Please visit the documentation for Building with Patterns: A Summary | MongoDB Blog for further understanding.Let us know if you have further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi @Aasawari ,Thanks for a detailed response. It’s very insightful and helped us open new doors.1. The collections mentioned seems to be very normalised. Is this the design that was used in a previous application, or is this a new application?This is how it was used in the previous application, and that’s why we are unable to modify the records currently. We are now looking into denormalising the data so that the filters can be run on the same collection.We now understand $skip is not recommended, we suspect that this could also be one of the reasons why our exports are not working well too. What would be your recommendation on the export of data using a NodeJS + AngularJS + MongoDB tech stack? We are currently facing some JS Stacktrace issues and memory when exporting the data in batches.Regards,\nShreyas",
"username": "Shreyas_Gombi"
},
{
"code": "",
"text": "Hi @Shreyas_GombiWhat would be your recommendation on the export of data using a NodeJS + AngularJS + MongoDB tech stack?For the above requirement, you could possibly use mongoimport and mongoexport from the official database tools to perform the operations.We are currently facing some JS Stacktrace issues and memory when exporting the data in batches.Furthermore, you mentioned encountering JS Stacktrace issues and memory limitations when exporting data in batches. I would appreciate it if you could provide further details about the specific issues you’re facing.Additionally, please clarify the batch size you are working with for the data import and export processes. Understanding these details will allow me to provide you with more accurate assistance.Regards\nAasawari",
"username": "Aasawari"
}
] | Slow query for $group, $lookup and $match | 2023-05-09T07:04:22.823Z | Slow query for $group, $lookup and $match | 1,045 |
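A sketch of the materialised-view idea from the reply: run the expensive $match/$group once and $merge the result into a side collection, then point the $lookup-heavy pipeline at that collection. The target collection name is illustrative, and the view has to be refreshed (e.g. on a schedule or trigger) to stay in step with the source data:

```js
db.getCollection("_inventory").aggregate([
  { $match: { quantity: { $gt: 0 } } },
  {
    $group: {
      _id: { item_id: "$item_id", location: "$location" },
      location: { $addToSet: "$location" },
      quantity: { $addToSet: "$quantity" },
    },
  },
  // Persist the grouped result so later $lookup/$match stages read a much
  // smaller, pre-aggregated collection instead of half a million rows.
  {
    $merge: {
      into: "_inventory_grouped",
      on: "_id",
      whenMatched: "replace",
      whenNotMatched: "insert",
    },
  },
]);
```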
null | [
"node-js",
"mongoose-odm",
"atlas-cluster"
] | [
{
"code": "const mongoose = require(\"mongoose\");\nmongoose.connect(\"mongodb+srv://u:[email protected]/cooking_Blog?retryWrites=true&w=majority\", {useNewUrlParser:true, useUnifiedTopology:true}).then(()=>{\n console.log(\"connected sucessfully\");\n}).catch((err)=>{\n console.log(\"error connection failed\");\n})\nE:\\projects\\recipies_website\\node_modules\\mongoose\\lib\\model.js:3198\n for (let i = 0; i < error.writeErrors.length; ++i) {\n ^\n\nTypeError: Cannot read properties of undefined (reading 'length')\n at E:\\projects\\recipies_website\\node_modules\\mongoose\\lib\\model.js:3198:47\n",
"text": "and I got this error:Node.js v19.7.0",
"username": "Vikash_Sharma"
},
{
"code": "const mongoose = require(\"mongoose\");\nmongoose.connect(\"mongodb+srv://u:[email protected]/cooking_Blog?retryWrites=true&w=majority\", {useNewUrlParser:true, useUnifiedTopology:true}).then(()=>{\n console.log(\"connected sucessfully\");\n}).catch((err)=>{\n console.log(\"error connection failed\");\n})\nnode_modules\\mongoose\\lib\\model.js:3198\n for (let i = 0; i < error.writeErrors.length; ++i) {\n ^\n\nTypeError: Cannot read properties of undefined (reading 'length')\n",
"text": "Hey @Vikash_Sharma,Thank you for reaching out to the MongoDB Community forums.I tested your code and it worked flawlessly for me using node v19.7.0.Would it be possible for you to provide the complete code snippet and the version of Mongoose you are using? I suspect that the problem may be related to something else.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | I got an error while connecting nodejs app to mongo cluster | 2023-06-14T05:25:45.097Z | I got an error while connecting nodejs app to mongo cluster | 524 |
null | [
"charts"
] | [
{
"code": "",
"text": "I have created my first mongoDB chart from my Notes collection. Our Notes collection has a clientID. I would like to create this generic chart that can be used by all clients. How can I configure the chart so that chart viewers(users) of one client are not able to view data of another client?I don’t want to create one chart per client.",
"username": "Rickson_Menezes"
},
{
"code": "",
"text": "Hi @Rickson_Menezes -Thanks for using Charts! Within the main Charts application, while you can use dashboard filtering to filter all charts by a specific client ID, there isn’t a way to force specific people to use specific filters.However this feature does exist when you use Embedding in Authenticated mode. There is a feature called Injected Filters, whereby you can use information in the JWT token from each user to apply appropriate filters to the embedded charts. For details see Filter Embedded Charts — MongoDB Charts.Let me know if this will work for you!\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi Tom,Thanks for the quick response. This will work for us.Is there a way to also restrict the chart author to only see data of his own client (clientID 3) while chart is being created on his dashboard? Our data(datasource) has data of all clients(say, ID 3,4,5,6) but we’d like to give permission to one client create charts only for his own client and another chart author to be able to create charts of his own data(say clientID 4)Thanks,Rickson",
"username": "Rickson_Menezes"
},
{
"code": "",
"text": "Unfortunately there isn’t a way of forcing filters on chart authors. We do have longer term plans to provide more granular permissions, and we will take your requirement into account.",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi Tom,Thank you for your reply. Can I know how what priority is set for multi-tenancy on the authoring side above? Is it something that is long term within the netx 6 months or not something on the ball park for now?Will apreciate some idea from your side.Rickson.",
"username": "Rickson_Menezes"
},
{
"code": "",
"text": "Not planned in the next 6 months I’m afraid.",
"username": "tomhollander"
},
{
"code": "",
"text": "In the same line of thought, will you enable a public link version of mongoDB charts, that is at least password protected or can rely on 2FA? We would like to share specific dashboards with specific clients, however adding them to MongoDB as project viewers is not desired (essentially we only want them to view the dashboard and nothing else).",
"username": "Maximilian_Czymoch"
},
{
"code": "",
"text": "Agree this would be a nice feature, but it’s actually pretty simple to roll your own solution for this. You can build a simple App Services app, turn on authentication for your users, and then embed your dashboard with an authentication provider tied to that app. Let me know if you need more info on how to do this.Tom",
"username": "tomhollander"
}
] | Create a single chart that can serve a multitenant application and make chart data client specific | 2022-10-31T16:39:18.073Z | Create a single chart that can serve a multitenant application and make chart data client specific | 2,366 |
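For the roll-your-own approach Tom describes, the Charts embedding SDK takes a getUserToken callback whose JWT can then drive injected filters; the app ID, base URL and dashboard ID below are placeholders, and the exact SDK entry points should be confirmed against the embedding SDK docs:

```js
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";
import * as RealmWeb from "realm-web";

const app = new RealmWeb.App({ id: "your-app-services-app-id" }); // placeholder

const sdk = new ChartsEmbedSDK({
  baseUrl: "https://charts.mongodb.com/charts-project-xxxxx", // placeholder
  getUserToken: async () => {
    // Any App Services auth provider works; email/password shown here.
    const user = await app.logIn(
      RealmWeb.Credentials.emailPassword("[email protected]", "password")
    );
    return user.accessToken; // JWT that an injected filter can read the clientID from
  },
});

const dashboard = sdk.createDashboard({ dashboardId: "xxxx-xxxx" }); // placeholder
await dashboard.render(document.getElementById("dashboard"));
```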
null | [
"queries",
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": "function colCon(){\n /* ... retrieve mongoose model ... */\n}\n\nfunction putData(_title = '', newData = {}){\n console.log('> put data');\n\n const condition = {title : _title};\n return colCon().then(async model => {\n return model.findOneAndReplace(\n condition, \n newData,\n {\n returnDocument : 'after',\n strict : false\n }\n ).then(doc => {\n return doc;\n }).catch(err => {\n throw err;\n })\n }).catch(err => {\n throw err;\n })\n}\n\nfunction patchData(_title = '', newData = {}){\n console.log('> patch data');\n\n const condition = {title : _title};\n return colCon().then(async model => {\n return model.findOneAndUpdate(\n condition, \n newData,\n {\n returnDocument : 'after',\n strict: false\n }\n ).then(doc => {\n return doc;\n }).catch(err => {\n throw err;\n })\n }).catch(err => {\n throw err;\n })\n}\n//schema\n{\n title: {\n type: String,\n unique: true,\n required: true,\n },\n \n content: { \n type: String, \n }\n}\n\n//document sample \n{\n \"content\": \"Something here\",\n \"title\": \"Some title\"\n}\n//new data\n{\n \"content\": \"Something new here\",\n \"title\": \"Some title\"\n \"car\": \"bugati\"\n}\n\n//PATCH result\n{\n \"content\": \"Something new here\",\n \"title\": \"Some title\",\n \"car\": \"bugati\"\n}\n\n//PUT result\n{\n \"content\": \"Something new here\",\n \"title\": \"Some title\"\n}\nstrict",
"text": "I am trying to implement PUT and PATCH request using Mongoose,the code for handing both requests are as follows:While both have similar approaches (retrieve model → find and update / replace → show after-modified result), the results are quite different. For example this is my intial schema and documentAnd this is the result after I update / replace with the same new dataAlthough I have turn off strict mode in both functions, extra fields is still prevented in PUT request.How can I disable it in PUT function the same as PATCH?Thank you in advance!",
"username": "D_n_Du"
},
{
"code": "strictfindOneAndUpdatefindOneAndReplacefindOneAndReplace(){strict: false}// Define the schema for your data\nconst YourSchema = new mongoose.Schema({\n title: String,\n // ... add other properties as needed\n}, { strict: false });\nstrict",
"text": "Hey @D_n_Du,Thank you for reaching out to the MongoDB Community forums How can I disable it in PUT function the same as PATCH?Based on my understanding, I believe Mongoose does not handle the PUT and PATCH methods. Typically, these methods are handled by web frameworks and server-side technologies.Although I have turn off strict mode in both functions, extra fields is still prevented in PUT request.Regarding the difference in output between the findOneAndUpdate and findOneAndReplace methods, it appears that the issue might be related to the strict option not being properly getting patched in for findOneAndReplace() method due to some reason the mongoose is implemented. I can confirm that I’ve encountered the same issue when testing it in my own environment. To address this issue, I have created a GitHub issue - 13507 on the Mongoose repository.In the meantime, you can achieve your goal by using the {strict: false} option in your schema. This will ensure that both functions work as you expect them to be:Note: Remove the strict option from both functions within your code.I hope this helps! If you have any further questions, feel free to ask.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Mongoose - `strict` option not working in findOneAndReplace | 2023-06-10T06:55:36.905Z | Mongoose - `strict` option not working in findOneAndReplace | 693 |
null | [
"transactions"
] | [
{
"code": "",
"text": "Hi community,We are looking to update our dashboard with dashboard filters, but the field we want to filter by is not stored in the main source collection. To make it more concrete our dashboard is based on the collection “transactions”, which includes client UIDs but not their names. To get their names we need to run a lookup from another collection called “clients”. When building the main charts it’s easy to run a lookup to the filter results based on relevant clients only. However we would like to do the same for the overall dashboard itself (e.g., looking at historical transactions for one specific client rather than all historical clients). At the moment we can only filter based on client UIDs at dashboard level which is quite cumbersome.Thus my question, is there a way to include lookup field in the options for filters at dashboard level?Thanks a lot,\nMax",
"username": "Maximilian_Czymoch"
},
{
"code": "",
"text": "Unfortunately you can’t add dashboard filters for fields added to individual charts. However you can solve your problem by adding the lookup to a pipeline in a “Charts view” (added from the data sources page). This will bake the lookup into the data source and allow you to use it in a dashboard filter.",
"username": "tomhollander"
},
{
"code": "",
"text": "Thanks we had done exactly that!",
"username": "Maximilian_Czymoch"
}
] | Using Dashboard Filters Based On Lookup Values | 2023-04-26T08:00:55.920Z | Using Dashboard Filters Based On Lookup Values | 900 |
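The Charts view pipeline Tom refers to could look roughly like the sketch below; the collection and field names are inferred from the question's description, so treat them as assumptions:

```js
[
  {
    $lookup: {
      from: "clients",
      localField: "clientUid",
      foreignField: "uid",
      as: "client",
    },
  },
  { $unwind: { path: "$client", preserveNullAndEmptyArrays: true } },
  // Expose a flat field that the dashboard filter can target directly.
  { $addFields: { clientName: "$client.name" } },
  { $project: { client: 0 } },
]
```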
null | [
"aggregation"
] | [
{
"code": "type Organization struct {\n\tId primitive.ObjectID `bson:\"_id,omitempty\"`\n\tDisplayName string `bson:\"display_name,omitempty\"`\n\tAdmins []string `bson:\"admins,omitempty\"`\n\tUsers []string `bson:\"users,omitempty\"`\n}\n\ntype User struct {\n\tId primitive.ObjectID `bson:\"_id,omitempty\"`\n\tOrgId primitive.ObjectID `bson:\"org_id,omitempty\"`\n\tOrgDisplayName string `bson:\"org_display_name,omitempty\"`\n\tOrgRole string `bson:\"org_role,omitempty\"`\n}\n",
"text": "Hi,My system’s user management module has an organization collection and a user collection. The user collection is frequently queries by my frontend to validate if the user has proper access to frontend features. The organization collection is rarely changed by our users.I am wondering whether I should having some organization info duplicately stored in the user collections like “OrgDisplayName” and “OrgRole” to avoid calling $lookup frequently. For a SQL database, I will definitely avoid this and use the SQL Joins. But I don’t know whether $lookup has very high cost in MongoDB such as latency and database capacity.Thanks!",
"username": "Tianjun_Fu"
},
{
"code": "",
"text": "Bumping up.Some online discussion seems to agree with this idea, like https://www.quora.com/Is-it-ok-to-duplicate-data-in-a-NoSQL-database-I-would-appreciate-any-detailed-answer.Maybe we should evaluate $lookup latency after creating indexes on these fields.",
"username": "Tianjun_Fu"
}
] | $lookup v.s having duplicate info in two collections | 2023-05-30T02:06:54.014Z | $lookup v.s having duplicate info in two collections | 455 |
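If the duplicated fields are adopted (the extended-reference style of pattern discussed in the linked material), the cost is a write-time fan-out whenever the organization changes, which the question notes is rare; a sketch, assuming the field names from the Go structs above:

```js
// Run whenever an organization's display name changes.
db.users.updateMany(
  { org_id: orgId },
  { $set: { org_display_name: newDisplayName } }
);

// Reads on the hot path then need no $lookup at all:
db.users.findOne({ _id: userId }, { org_display_name: 1, org_role: 1 });
```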
null | [
"queries"
] | [
{
"code": "service_name: My_Service\n data: {\n field1: {\n field2: {\n tags: [\n 'a',\n 'b'\n ]\n },\n field4: {\n tags: [\n 'a',\n 'c'\n ]\n }\n },\n field3: {\n field4: {\n tags: [\n 'a',\n 'b'\n ]\n }\n }\n }\n",
"text": "I have a collection like thisI am trying to filter out how many of the documents have tag “a”. Can anyone help me out",
"username": "Rajdeep_R"
},
{
"code": "service_name: My_Service\n data: {\n field1: {\n field2: {\n tags: [ 'a','b']\n ....\nfield1field2field3field2tags$where",
"text": "Hey @Rajdeep_R,Thank you for reaching out to the MongoDB Community forums.May I ask if the document structure in the example is uniform across all documents in the collection? For instance, does field1 always contain field2, field3, and so on, and does all field2 contain the tags array? However, if the field names vary or the level of sub-documents is unpredictable, then it becomes difficult to do efficiently.If the documents are freeform and the tags can appear anywhere (without a definitive structure that MongoDB can rely on for efficient document processing), two immediate options come to mind:If this is a common workflow, would it be possible for you to reconsider the schema design so that this operation could be more efficient in the future?Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Please Help Community Mongo DB | 2023-06-02T11:48:12.182Z | Please Help Community Mongo DB | 510 |
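To illustrate the server-side JavaScript route mentioned in the reply for free-form documents, a hedged sketch using $function (MongoDB 4.4+) that walks the data subdocument recursively; it scans every document and cannot use indexes, so it is only a workaround, not a recommended long-term design:

```js
db.collection.aggregate([
  {
    $match: {
      $expr: {
        $function: {
          lang: "js",
          args: ["$data", "a"],
          body: function (data, wanted) {
            // Recursively look for an array containing the wanted tag.
            function hasTag(node) {
              if (node === null || typeof node !== "object") return false;
              if (Array.isArray(node)) return node.indexOf(wanted) !== -1;
              for (var key in node) {
                if (hasTag(node[key])) return true;
              }
              return false;
            }
            return hasTag(data);
          },
        },
      },
    },
  },
  { $count: "documentsWithTagA" },
]);
```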
null | [
"schema-validation"
] | [
{
"code": "",
"text": "I want to inspect a Mongodb database collection and detect its schema. What are some recommended approaches? For example, I see that Airbyte data integration software does this by inspecting a fixed number of records in the collection and coming up with a schema based on these records - Mongo DB | Airbyte DocumentationAre there any better ways of doing this?",
"username": "Pramod_Biligiri"
},
{
"code": "",
"text": "Are there any better ways of doing this?i can’t think of any.Mongodb is schemaless db, so no such metadata. The only way would be trying to find a pattern from the data rows directly.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hey @Pramod_Biligiri,Thank you for reaching out to the MongoDB Community forums.I want to inspect a Mongodb database collection and detect its schema.When you say “detect its schema” do you mean Analyze Your Data Schema. If so, you can accomplish this using MongoDB Compass. In MongoDB Compass, the Schema tab provides a breakdown of the various data types present in the fields, along with the percentage representation of each data type.In the Schema tab, you can also use the query bar to create a filter and limit your results. You can click on Options to specify the fields to display and the number of results to return.\nquery-bar-schema-view2748×652 110 KB\nPlease let me know if the above response addresses your question or if you have any further inquiries or concerns.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thanks for the responses. I am trying to do this through a program so I can’t use the Compass UI for this. But it does looks like the approach is to inspect the data and then decide, so I guess I’m on the right track.",
"username": "Pramod_Biligiri"
}
] | How to infer schema for a Mongodb database? | 2023-06-13T05:56:43.437Z | How to infer schema for a Mongodb database? | 965 |
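For doing this from a program rather than from Compass, one common approach is to sample documents and tally field/type pairs with $objectToArray; a sketch that covers top-level fields only, with an arbitrary sample size:

```js
db.collection.aggregate([
  { $sample: { size: 1000 } },                        // inspect a subset, as Airbyte does
  { $project: { fields: { $objectToArray: "$$ROOT" } } },
  { $unwind: "$fields" },
  {
    $group: {
      _id: { field: "$fields.k", type: { $type: "$fields.v" } },
      count: { $sum: 1 },                             // how often each field/type pair occurs
    },
  },
  { $sort: { "_id.field": 1, count: -1 } },
]);
```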
null | [] | [
{
"code": "",
"text": "I’m developing a Realm / Atlas app and the docs are burning my eyes out… I have a lot of reading to do to get my first app working but my god the white background… Please consider a dark mode ",
"username": "d33p"
},
{
"code": "",
"text": "Hey @d33p,Thanks for providing that feedback. I believe there’s currently a feedback post here that relates dark mode specific for the documentation which you can vote for.Regards,\nJason",
"username": "Jason_Tran"
}
] | Docs dark mode theme consideration | 2023-06-10T01:35:08.972Z | Docs dark mode theme consideration | 452 |
[] | [
{
"code": "{\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": [\n {\n \"analyzer\": \"lucene.english\",\n \"maxGrams\": 8,\n \"minGrams\": 3,\n \"type\": \"autocomplete\"\n },\n {\n \"type\": \"stringFacet\"\n }\n ]\n }\n },\n \"storedSource\": {\n \"include\": [\n \"name\"\n ]\n }\n}\n",
"text": "It would be great if you can provide us another solution we are looking for in regards to AutoComplete.AutoComplete Definition\nimage900×673 24.5 KB\n\nimage834×659 20.7 KB\nWe tried using facet as we want to display duplicate only once and want to get result as given in following snapshot. But facet seems to be grouping on whole name field in which case we will get every record as unique\nExample : we want result as in following snapshot when “floor” is searched it shows only unique (“floor mat”, “floor lamp”) where name field data have “floor mat for dodge” , “floor mat for jeep” , “floor lamp for living room”, “floor lamp with table”…So in 1st search it should show unique and after search is proceed with “floor lamp” then it should get results for lamp related and so on\nimage469×555 27.2 KB\n(attaching the collection json)Product_collection.json (1.4 MB)",
"username": "Shopi_Ads"
},
{
"code": "",
"text": "Also we want to achieve the same scoring pattern with AutoComplete as we got during text search (Atlas Search - scoring - #8 by Jason_Tran)\nSo most matching search should get listed first. Like when “floor lamp” is searched then all “floor lamp …” should get high score then “floor mat”",
"username": "Shopi_Ads"
},
{
"code": "",
"text": "Does anyone have solution",
"username": "Shopi_Ads"
},
{
"code": "textautocomplete",
"text": "You can try a combination of text and autocomplete with the same search term to see if that works for you.Please refer to Index Field as Multiple Data Types documentation for more information that may help for this.",
"username": "Jason_Tran"
},
{
"code": "[\n {\n $search:\n {\n index: \"autoCompleteProducts\",\n compound: {\n must: [\n {\n text: {\n query: \"floor mats\",\n path: \"name\",\n },\n },\n {\n autocomplete: {\n query: \"floor mats\",\n path: \"name\",\n tokenOrder: \"sequential\",\n },\n },\n ],\n },\n },\n },\n {\n $project:\n /**\n * specifications: The fields to\n * include or exclude.\n */\n {\n _id: 0,\n name: 1,\n score: {\n $meta: \"searchScore\",\n },\n },\n },\n]\n",
"text": "\nimage933×676 25.4 KB\n\nimage847×718 22.3 KB\nTried combination but getting error, can you check if I’m missing something\nimage795×82 3.35 KB\n",
"username": "Shopi_Ads"
},
{
"code": "",
"text": "we are getting the results after configuring index option to “positions”. Can you please guide us or provide the query to achieve the result as provided in initial request. we want to show unique values in autocomplete on 1st word then 2nd word onwards it will show natural results. Example: when “floor” is searched then we should get “floor mat”, “floor lamp”…etc. Then when “floor lamp” is typed it will get natural results\nimage1621×822 101 KB\n",
"username": "Shopi_Ads"
}
] | Atlas search - autocomplete grouping | 2023-06-08T21:49:37.818Z | Atlas search - autocomplete grouping | 897 |
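One way to approximate the "unique two-word suggestions" behaviour asked for above is to derive a prefix from the matched names after the $search stage and group on it; a sketch reusing the index name from the thread, where the collection name, prefix length, and limits are assumptions to tune:

```js
db.products.aggregate([
  {
    $search: {
      index: "autoCompleteProducts",
      autocomplete: { query: "floor", path: "name" },
    },
  },
  { $limit: 200 },
  { $project: { words: { $split: ["$name", " "] }, score: { $meta: "searchScore" } } },
  {
    $project: {
      // Join the first two words back together, e.g. "floor mat" / "floor lamp".
      prefix: {
        $reduce: {
          input: { $slice: ["$words", 2] },
          initialValue: "",
          in: {
            $concat: [
              "$$value",
              { $cond: [{ $eq: ["$$value", ""] }, "", " "] },
              "$$this",
            ],
          },
        },
      },
      score: 1,
    },
  },
  { $group: { _id: "$prefix", score: { $max: "$score" } } },
  { $sort: { score: -1 } },
  { $limit: 10 },
]);
```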
|
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "const express = require('express');\nconst fs = require(\"fs\");\nconst https = require(\"https\");\nconst app = express();\n\nvar mongoose = require('mongoose');\n\nconst uri = \"mongodb://root:<MyPassword>@<MyServerAdres>/tutorial:27017\"\n\nasync function connect() {\n try {\n await mongoose.connect(uri);\n console.log(\"connected to MongoDB\");\n } catch(error) {\n console.error(error);\n }\n}\n\nconnect();\n\nMongooseServerSelectionError: connect ECONNREFUSED 192.XXX.XXX.X:27017\n at _handleConnectionErrors (/root/PWA/api/node_modules/mongoose/lib/connection.js:792:11)\n at NativeConnection.openUri (/root/PWA/api/node_modules/mongoose/lib/connection.js:767:11)\n at async connect (/root/PWA/api/test.js:15:5) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) { '<MyServerAdres>:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\n",
"text": "Hi,i have a problem to make a connection to mongodb.\nI have installed MongoDB on a external Ubuntu Server with node.js and connected to the server over VSCode SSH Connection on the Terminal.\nI want to test if i can connect to mongodb, but it gives an error in the shell.My Server Code:Im getting this error in the Terminal:My MongoDB is active and running. and in the mongodb.log i can see the entry “[initandlisten] waiting for connections on port 27017”Maybe, my URI is wrong? I tried to connect with the Mongodb Compas to test the connection, but it gives me also an error.",
"username": "Senel_Ekiz"
},
{
"code": "",
"text": "ECONNREFUSED means no server is listening at the given address and port.Assuming you are using the same installation as your previous postyou are doing the same error as before. You are using the default port 27017 rather than the port used by the server, that is the one specified in the configuration file. From your previous post this port is 10000.But since yousee the entry “[initandlisten] waiting for connections on port 27017”the cause might be different. In your other post, you also mentioned using 2 machines. May be the mongo log message you see is from a machine different from the machine with the host name tutorial.Basically, ECONNREFUSED means wrong address, wrong port or no listener. You seem to have a listener on the correct port, so the address must be wrong.",
"username": "steevej"
},
{
"code": "",
"text": "you are doing the same error as before. You are using the default port 27017 rather than the port used by the server, that is the one specified in the configuration file. From your previous post this port is 10000.I have changed the port numver after we have solved the problem to the standard Port.Do i need to write the collection in the URI, which i want to connected:\nlike …/tutorial:27017???",
"username": "Senel_Ekiz"
},
{
"code": "",
"text": "Another question.\nWhich username and password should i use in the URI String´?\nThe Access Data from my external Server, where i have installed MongoDB and Node.js or has MongoDB own Access Data, which should be defined in any config from mongodb?",
"username": "Senel_Ekiz"
},
{
"code": "",
"text": "Do i need to write the collection in the URI,That part is mongoose specific. From what I understand, since you are using schema to crud the data you do not have to do it.Which username and password should i use in the URI String´?The URI is the one used to access MongoDB so it has to be the username and password defined in MongoDB.",
"username": "steevej"
}
] | Connect to MongoDB with node.js | 2023-06-13T12:47:59.988Z | Connect to MongoDB with node.js | 634 |
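One detail not raised in the thread but worth double-checking given the ECONNREFUSED: in a standard connection string the port belongs to the host part, not to the database name; a sketch with placeholder credentials, and the authSource parameter assumes the user was created in the admin database:

```js
const mongoose = require("mongoose");

// host:port first, then the database name:
// mongodb://<user>:<password>@<host>:<port>/<database>?authSource=admin
const uri =
  "mongodb://myUser:myPassword@my-server.example.com:27017/tutorial?authSource=admin";

mongoose
  .connect(uri)
  .then(() => console.log("connected to MongoDB"))
  .catch((err) => console.error("connection failed", err));
```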
null | [
"mongodb-shell",
"configuration"
] | [
{
"code": " config.set(\"displayBatchSize\", 2)config.set(\"displayBatchSize\", 2000) db.MyTable.distinct(\"person.id\")",
"text": "When I run config.set(\"displayBatchSize\", 2)orconfig.set(\"displayBatchSize\", 2000)My distinct query db.MyTable.distinct(\"person.id\")returns 100 items.I have 245 items that I’m trying to return. It doesn’t look like the displayBatchSize is working properly.Thanks,Tam",
"username": "Tam_Nguyen1"
},
{
"code": "displayBatchSizedistinct()find()myFirstDatabase >db.manydocs.distinct.returnType\n{ type: 'unknown', attributes: {} }\nmyFirstDatabase> db.manydocs.find.returnType\nCursor\ndistinct()",
"text": "Hi @Tam_Nguyen1,I have 245 items that I’m trying to return. It doesn’t look like the displayBatchSize is working properly.I believe the behaviour you’re experiencing is expected. As per the configure shell documentation, specific to the displayBatchSize property description:The number of items displayed per cursor iterationThe distinct() command does not return a cursor (comparing to a find() which does return a cursor):Just for more context, can you describe what the use case is here for wanting to return all 245 items using distinct() in the shell? You could get all the results on the application side if this works for you (more details on this on the distinct command documentation)Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran,Thanks for the reply that makes sense to me. I just grabbed all of it and used excel to get the distinct values instead. I was trying to find what documents were being used so I could delete the unused documents.Regards,Tam",
"username": "Tam_Nguyen1"
},
{
"code": "",
"text": "Makes sense - Thanks for confirming the method you used to retrieve all the values You could raise a feedback post to have a setting in the shell in future to possibly allow this to be expanded.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongosh displayBatchSize config not working | 2023-06-12T17:22:35.432Z | Mongosh displayBatchSize config not working | 772 |
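Since mongosh returns distinct() results as a plain JavaScript array, the 100-item cutoff is only in how the result is displayed; the full array can be worked with directly, for example:

```js
// In mongosh: the array itself contains every distinct value.
const ids = db.MyTable.distinct("person.id");

ids.length;                      // total number of distinct values (e.g. 245)
ids.forEach((id) => print(id));  // prints every value, not just the first 100
EJSON.stringify(ids);            // or dump the whole array as a string
```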
null | [
"dot-net"
] | [
{
"code": "2022-08-26 11:10:42.226 Error: Connection[2]: Session[2]: Failed to integrate downloaded changesets: Failed to parse, or apply received changeset: Update: No such field: 'Token' in class 'class_Oratore' (instruction target: Oratore[\"cs-OR53\"].Token, version: 18012, last_integrated_remote_version: 2, origin_file_ident: 3571, timestamp: 241431245890)\nException backtrace:\n0 librealm-wrappers.dylib 0x0000000111aa72f2 _ZN5realm4sync12_GLOBAL__N_125throw_bad_transaction_logENSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 34\n1 librealm-wrappers.dylib 0x0000000111aa711c _ZNK5realm4sync18InstructionApplier19bad_transaction_logERKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 1260\n2 librealm-wrappers.dylib 0x0000000111aaccad _ZN5realm4sync18InstructionApplier12PathResolver8on_errorERKNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEE + 13\n3 librealm-wrappers.dylib 0x0000000111aad1ca _ZN5realm4sync18InstructionApplier12PathResolver13resolve_fieldERNS_3ObjENS0_12InternStringE + 410\n4 librealm-wrappers.dylib 0x0000000111aa930f _ZN5realm4sync18InstructionApplier12PathResolver7resolveEv + 111\n5 librealm-wrappers.dylib 0x0000000111aa91f1 _ZN5realm4sync18InstructionApplierclERKNS0_5instr6UpdateE + 81\n6 librealm-wrappers.dylib 0x0000000111ac7f67 _ZN5realm4sync13ClientHistory27integrate_server_changesetsERKNS0_12SyncProgressEPKyPKNS0_11Transformer15RemoteChangesetEmRNS0_11VersionInfoENS0_18DownloadBatchStateERNS_4util6LoggerENSE_14UniqueFunctionIFvRKNSt3__110shared_ptrINS_11TransactionEEEEEEPNS1_20SyncTransactReporterE + 1431\n7 librealm-wrappers.dylib 0x0000000111ad92c5 _ZN5realm4sync10ClientImpl7Session20integrate_changesetsERNS0_17ClientReplicationERKNS0_12SyncProgressEyRKNSt3__16vectorINS0_11Transformer15RemoteChangesetENS8_9allocatorISB_EEEERNS0_11VersionInfoENS0_18DownloadBatchStateE + 149\n9 librealm-wrappers.dylib 0x0000000111ad5ec5 _ZN5realm5_impl14ClientProtocol22parse_message_receivedINS_4sync10ClientImpl10ConnectionEEEvRT_NSt3__117basic_string_viewIcNS8_11char_traitsIcEEEE + 11509\n10 librealm-wrappers.dylib 0x0000000111ad04f4 _ZN5realm4sync10ClientImpl10Connection33websocket_binary_message_receivedEPKcm + 52\n11 librealm-wrappers.dylib 0x0000000111b319d6 _ZN12_GLOBAL__N_19WebSocket17frame_reader_loopEv + 1430\n12 librealm-wrappers.dylib 0x0000000111b257bd _ZN5realm4util7network7Service9AsyncOper22do_recycle_and_executeINS0_14UniqueFunctionIFvNSt3__110error_codeEmEEEJRS7_RmEEEvbRT_DpOT0_ + 157\n13 librealm-wrappers.dylib 0x0000000111b251cc _ZN5realm4util7network7Service14BasicStreamOpsINS1_3ssl6StreamEE16BufferedReadOperINS0_14UniqueFunctionIFvNSt3__110error_codeEmEEEE19recycle_and_executeEv + 140\n14 librealm-wrappers.dylib 0x0000000111b27605 _ZN5realm4util7network7Service4Impl3runEv + 645\n15 librealm-wrappers.dylib 0x00000001119c2140 _ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5realm5_impl10SyncClientC1ENS2_INS7_4util6LoggerENS4_ISB_EEEERKNS7_16SyncClientConfigENS_8weak_ptrIKNS7_11SyncManagerEEEEUlvE0_EEEEEPvSN_ + 176\n8 librealm-wrappers.dylib 0x0000000111a9f162 _ZN5realm4sync10ClientImpl7Session29initiate_integrate_changesetsEyNS0_18DownloadBatchStateERKNSt3__16vectorINS0_11Transformer15RemoteChangesetENS4_9allocatorIS7_EEEE + 114\n16 libsystem_pthread.dylib 0x00007ff8037ec4e1 _pthread_start + 125\n2022-08-26 11:10:42.226 Info: Connection[2]: Connection closed due to error\n17 libsystem_pthread.dylib 0x00007ff8037e7f6b thread_start + 15\n",
"text": "Hi, I add a new string field in my model.\nI activate develop mode, new field is added, but all old client have error.String Field are optional on schema, I have not ever this error in pastThis is log:Thanks\nLuigi",
"username": "Luigi_De_Giacomo"
},
{
"code": "",
"text": "Can you file a support ticket for this. It looks like a bug in the sync client or the server, but without the server logs, it’ll be next to impossible to track down.",
"username": "nirinchev"
},
{
"code": "Connection[1]: Session[1]: Failed to integrate downloaded changesets: Failed to apply received changeset: Update: No such field: 'i' in class 'class_TransactionV2RealmModel' (instruction target: TransactionV2RealmModel[ObjectId{636edb3f02c9f6da735af02d}].i, version: 190, last_integrated_remote_version: 1, origin_file_ident: 89786, timestamp: 266417971456). Please contact support.\nConnection[1]: Session[1]: Sending: ERROR \"Failed to apply received changeset: Update: No such field: 'i' in class 'class_TransactionV2RealmModel' (instruction target: TransactionV2RealmModel[ObjectId{636edb3f02c9f6da735af02d}].i, version: 190, last_integrated_remote_version: 1, origin_file_ident: 89786, timestamp: 266417971456). Please contact support.\" (error_code=212, session_ident=1)\n[Info] Connection[1]: Session[1]: Received: ERROR \"Synchronization no longer possible for client-side file\" (error_code=217, try_again=false, error_action=ClientReset)\n[Info] Connection[1]: Disconnected\n",
"text": "We have the same error on adding a new optional field to the scheme for old clients. Previously it was possible to update the scheme in advance and then update clients. Currently, it looks like we need to force update all clients on any scheme change which is problematic.@Luigi_De_Giacomo have you been able to resolve the issue?",
"username": "Anton_P"
},
{
"code": "",
"text": "Looks like the server-side issue that was resolved so not sure why are we facing it half year after a fix Flexible Sync and Optional Changes · Issue #1186 · realm/realm-kotlin · GitHubMaybe because we are using partition-based sync and the fix for the flexible-sync",
"username": "Anton_P"
}
] | Old Client Error - Please Help | 2022-08-26T11:31:37.198Z | Old Client Error - Please Help | 1,432 |
null | [
"queries",
"python"
] | [
{
"code": "",
"text": "\"/usr/local/lib/python3.10/dist-packages/pymongo/topology.py\", line 222, in select_server\\r\\n return random.choice(self.select_servers(selector,\\r\\n File \"/usr/local/lib/python3.10/dist-packages/pymongo/topology.py\", line 182, in select_servers\\r\\n server_descriptions = self._select_servers_loop(\\r\\n File \"/usr/local/lib/python3.10/dist-packages/pymongo/topology.py\", line 198, in _select_servers_loop\\r\\n raise ServerSelectionTimeoutError(\\r\\npymongo.errors.ServerSelectionTimeoutError: PY_SSIZE_T_CLEAN macro must be defined for ‘#’ formats\\r\\n\", “msg”: “MODULE FAILURE\\nSee stdout/stderr for the exact error”, “rc”: 1}",
"username": "Tamil_vanan"
},
{
"code": "",
"text": "PY_SSIZE_T_CLEAN macro must be definedThis issue was fixed in PyMongo 3.10 (see https://jira.mongodb.org/browse/PYTHON-2001). Please upgrade to pymongo>=3.10.",
"username": "Shane"
}
] | Facing the below issue while installating mongodb 5.x | 2023-06-13T09:21:23.086Z | Facing the below issue while installating mongodb 5.x | 1,114 |
[
"swift"
] | [
{
"code": "class Person: Object, ObjectKeyIdentifiable {\n @Persisted var name: String\n @Persisted var dog: Dog?\n\n convenience init(name: String) {\n self.init()\n self.name = name\n self.dog = Dog(name: \"Fido\", age: 5)\n }\n}\n\nclass Dog: Object, ObjectKeyIdentifiable {\n @Persisted var name: String\n @Persisted var age: Int\n\n convenience init(name: String, age: Int) {\n self.init()\n self.name = name\n self.age = age\n }\n}\nstruct ContentView: View {\n @Environment(\\.realm) var realm\n @ObservedRealmObject var person: Person\n @State private var showSheet = false\n\n var body: some View {\n NavigationView {\n Form {\n Text(\"Person's Name: \\(person.name)\")\n if let dog = person.dog {\n Button(action: {\n showSheet = true\n }) {\n Text(\"Dog's Age: \\(dog.age)\")\n }\n .sheet(isPresented: $showSheet) {\n DogView(dog: dog)\n }\n }\n }\n }\n }\n}\n\nstruct DogView: View {\n @ObservedRealmObject var dog: Dog\n\n var body: some View {\n Stepper(\"Age: \\(dog.age)\", value: $dog.age, in: 0...20)\n }\n}\n",
"text": "I’m trying to figure out how to properly use nested optional objects with realm swift, to update parent view when nested object change.In this code, Person is the parent object that has a Dog as a nested object. ContentView displays the person’s name and dog’s age, and DogView is used to change the dog’s age using a Stepper.\nWhen the dog’s age is updated in DogView, I want this change to be reflected in ContentView. However, the change in the nested object’s property doesn’t seem to be triggering a refresh of the parent view.Any help appreciated, tried without luck with using $person.dog and also explored realm projections.ModelsViewsContribute to aejimmi/realmNestedObjects development by creating an account on GitHub.",
"username": "Jimmi_Andersen"
},
{
"code": "@Persisted var refreshToken: Int\nstruct DogView: View {\n @ObservedRealmObject var person: Person\n @ObservedRealmObject var dog: Dog\n\n var body: some View {\n Stepper(\"Age: \\(dog.age)\", value: $dog.age, in: 0...20)\n .onChange(of: dog.age) { _ in\n $person.refreshToken.wrappedValue = person.refreshToken + 1\n }\n }\n}\n",
"text": "I found one approach that works. I’m still curious if there is better optionsAdded a refreshToken to the parent object (Person):In the dog view, I update the parent.refreshToken every time dog is changed.What would the Realm team do ?",
"username": "Jimmi_Andersen"
}
] | Observe and update nested optional objects (realm swift) | 2023-05-19T09:12:50.701Z | Observe and update nested optional objects (realm swift) | 892 |
|
null | [
"queries",
"python"
] | [
{
"code": "def status_false(collections_name):\n \n current_date = time.strftime('%Y-%m-%d', time.localtime())\n status_create = collections_name.find({\"status_create\": \"False\"})\n \n status_create_False = []\n for data in status_create:\n status_create_False.append(data)\n\n return status_create_False\n",
"text": "Hello,\nfirstly, I would like to thank you for the incredibly good support here in the community.\nI’ve asked 2 questions and received an answer to both that have helped me move forward with my project Fantastic!Now to my new Problem.I make a query on a collection and give me all documents from the field “status_create” that have the value False. That works great too, but now I need to append another condition which is not so easy for me, and that is, I have a field called Timestamp which has “2023-04-23 20:00:37” such values.The question now is, how do i get all the documents with “status_create=False” and the date, I don’t care about the time. Only the date should be correct, the date is the current date from the Day and will come from the VARHere is my func for the query.I hope this was understandable.And one last question, is it useful to create the field “status_create” as a real bool ? (now is a String) if so, what advantages does this have.",
"username": "Rainer_Schmitz"
},
{
"code": "status_create: FalseTimestampcurrent_datedb.test.aggregate([{\n $match: {\n status: false,\n Timestamp: {\n $gte: '2023-05-03',\n $lt: '2023-05-04'\n }\n }\n}])\n[\n {\n _id: ObjectId(\"6452462b566b22ff1630e55a\"),\n Timestamp: '2023-05-03 20:00:37',\n status: false\n },\n {\n _id: ObjectId(\"645248d2566b22ff1630e55c\"),\n Timestamp: '2023-05-03 21:10:17',\n status: false\n },\n {\n _id: ObjectId(\"64524903566b22ff1630e55d\"),\n Timestamp: '2023-05-01 20:00:37',\n status: true\n },\n {\n _id: ObjectId(\"645305c0dfcb4b65f6697098\"),\n Timestamp: '2023-05-03 20:00:37',\n status: true\n }\n]\ndb.test.aggregate([ {$match: {status:false, Timestamp:{$gte:'2023-05-03', $lt:'2023-05-04'}}} ])\n[\n {\n _id: ObjectId(\"6452462b566b22ff1630e55a\"),\n Timestamp: '2023-05-03 20:00:37',\n status: false\n },\n {\n _id: ObjectId(\"645248d2566b22ff1630e55c\"),\n Timestamp: '2023-05-03 21:10:17',\n status: false\n }\n]\nbooleanFalseTrueFlaseTure",
"text": "Hey @Rainer_Schmitz,firstly, I would like to thank you for the incredibly good support here in the community.\nI’ve asked 2 questions and received an answer to both that have helped me move forward with my project Fantastic!It’s great to hear that the answers provided were helpful for your project, this shows the real power of community. Coming back to your first question, if I understand this correctly, you want to get all the documents with status_create: False and the date from the Timestamp field without the time if it matches your current_date. Please correct my understanding if wrong. If this is indeed correct, you can use aggregation to do something like this:What the above code is doing is matching where the string is larger than the date in question and less than the date of the next day due to how string comparison work, you should see this query return the correct documents.Please note that since I do not know your sample documents or your exact use case, I could only test the aggregation part on my end to see whether it was working or not. I created a sample collection having the following documents:I then created a connection and used the following code:This returned the expected documents:Please note, that using indexes will help improve performance even more.And one last question, is it useful to create the field “status_create” as a real bool ? (now is a String) if so, what advantages does this have.Definitely use the right data type. A boolean can be represented by a single bit, but the string False and True need much more bytes, wasting space. Also, you can get into trouble when you mistype Flase or Ture, while you get no ambiguity when you use real boolean values.Hope this helps. Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "aggregate current_date = time.strftime('%Y-%m-%d', time.localtime())\n\n #DocCOM: \"^\" Regex-Muster\n regex_pattern = \"^\" + current_date\n #DocCOM: mongoDB abfrage mit bedienungen auf 2 feldern und $regex\n status_create = collections_name.find({\n \"status_create\": \"False\",\n \"timestamp\": {\"$regex\": regex_pattern}\n })\n\n status_create_False = []\n for data in status_create:\n status_create_False.append(data)\n\n\n return status_create_False\n",
"text": "aggregateHello Satyam,\nmany thanks for your answer, I have found a other solution",
"username": "Rainer_Schmitz"
}
] | pyMongo - MongoDB query with multiple conditions on two fields | 2023-04-23T19:16:04.363Z | pyMongo - MongoDB query with multiple conditions on two fields | 1,830 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 7.0 release candidates are out and ready for testing. This is the culmination of the past year’s worth of development and marks the first releases after the rapid release cycle. Please review the release notes which are being updated with information about the exciting new features, and instructions on how to report an issue.Stay tuned for more highlights as we approach MongoDB.local NYC on June 22nd.MongoDB 7.0 Release Notes | Changelog | Downloads– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 7.0 release candidates have been released | 2023-06-13T17:08:05.571Z | MongoDB 7.0 release candidates have been released | 1,145 |
null | [
"app-services-cli"
] | [
{
"code": "realm-cli push --remote=\"<<realm-app-id>>\" -y\nrealm-cli push",
"text": "We can push changes by issuing:But is there a way to give name to the resulting deployment?We can name deployments in the Deployment tab manually, GitHub integration will name deployments by commit messages but realm-cli push deployments have meaningless ids. I searched through the command options and didn’t find any suitable option.",
"username": "Maksim_Korinets"
},
{
"code": "",
"text": "Hi, right now there isn’t a way to do this through the CLI but we can bring this up as a possible improvement to the product team!",
"username": "Christine_Chen"
}
] | Is there a way to name a deployment while pushing from CLI | 2022-10-13T07:39:59.837Z | Is there a way to name a deployment while pushing from CLI | 1,847 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Can anyone point me in the right direction for rolling up counts/sums into 24 hour periods?e.g.I have an IoT device that started collecting data at 1pm 9/1/2021 to 12:59pm on 9/7/2021.I want to roll up and $sum the events not by calendar days, but by 24 hour periods starting at 1pm on 9/1/2021Ending up with something like [{ day: ‘1’, value: 240 }, { day: ‘2’, value: 260 } , { day: ‘3’, value: 210 } ]I know how to rollup data by the calendar day with $group, but haven’t approached it before from a calendar agnostic time interval with a given start date.Any tips would be great.",
"username": "Anthony_Comito"
},
{
"code": "",
"text": "Hi @Anthony_Comito ,Yes you need to use the new $setWindowFields aggregation stage using 5.0 MongoDB servers:See the example and change the unit to a day or use 24 hour.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "MongoServerError: unknown time unit value: 24hour",
"text": "That’s helpful @Pavel_Duchovny . I am getting a MongoServerError: unknown time unit value: 24hourIs there an enum list I can see somewhere?Are you part of the mongodb “flex consulting” offer? If not, could you put me in touch with somebody to setup an engagement? I’d like to make sure I get this correct and would like to speak to somebody who’s worked with $setWindowFields and IoT data",
"username": "Anthony_Comito"
},
{
"code": "",
"text": "Hi @Anthony_Comito ,I can contact to our consulting teams in the upcoming week.You can also contact them via our consulting web page.I think the unit can be only “hour” or “day” then in the boundaries you need to set -24 or -1 depending on the unit.I suggest you read our newly posted timeSeries article :MongoDB allows you to store and process time series data at scale. Learn how to store and analyze your time series data using a MongoDB cluster.It has an example of a rolling average with window fields.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "ISODate(\"2021-09-01T13:00:00Z\")startDate = ISODate(\"2021-09-01T13:00:00Z\");\ndb.rollup.aggregate(\n {$group:{_id:{$trunc:{$divide:[{ $subtract: [\"$date\", start] }, 86400000]}}, sum:{$sum:\"$value\"}}}\n)\ndate[ { _id: 0, sum: 35 }, { _id: 1, sum: 15 } ]\nstart$_id$toDate",
"text": "I want to roll up and $sum the events not by calendar days, but by 24 hour periods starting at 1pm on 9/1/2021You don’t need any fancy window functions here, you just need to group with a boundary that’s different than midnight. It turns out since date is stored as seconds since epoch you should be able to do this by creating appropriate boundaries yourself. Given your example to start on ISODate(\"2021-09-01T13:00:00Z\") do this:For each date it subtracts it from your given start date, and divides it by number of milliseconds in 24 hour period to get the “date” offset from your start. It uses that expression as the group key. Result might look like this:Here 0 would be start date, 1 would be the next 24 hour period, etc. You can then use date arithmetic to convert it back to a timestamp (something like start + $_id * 86400000 convert $toDate).Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "How about I want to group it by one more field along with the time interval?",
"username": "op1997"
}
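To extend the grouping above with an additional field, the extra field can simply be added to the compound _id of the $group stage. A minimal mongosh sketch, reusing the same millisecond arithmetic as before (the deviceId field and the collection name are illustrative assumptions, not from the thread):

const start = ISODate("2021-09-01T13:00:00Z");
db.rollup.aggregate([
  { $group: {
      _id: {
        day: { $trunc: { $divide: [ { $subtract: ["$date", start] }, 86400000 ] } },
        deviceId: "$deviceId"   // assumed extra grouping field
      },
      sum: { $sum: "$value" }
  } }
])

Each output document then represents one device within one 24-hour bucket counted from the chosen start time.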
] | $group by 24 hour period form a given start date (as oppose to group by calendar day)? | 2021-09-09T13:20:50.703Z | $group by 24 hour period form a given start date (as oppose to group by calendar day)? | 8,917 |
null | [] | [
{
"code": "",
"text": "Hi,We’re currently using Realm Authentication (Google/Apple/Local-userpass) with thousands of users.Right now it looks like the max expiration for Refresh Token is 180 days, this effectively means our users get force logged out every 180 days (even if they use our app every day).My question is, is there a way to force unlimited expiration? If not, is it possible to refresh a refresh token? We do not want our users to churn away from our app simply because they were forced logged out (and possibly forgot the password/email that they set 6 months ago)I understand there is another option (JWT custom auth), however this means that we will have to re-create the users and re-link them to the old realm users somehow?Is there a better solution here?Many thanks!",
"username": "Turbulent_Stability"
},
{
"code": "",
"text": "Unfortunately, it’s not possible to set refresh tokens longer than 180 days. You have two choices here: anonymous auth (which would not allow you to use user data or permissions, but has unlimited expiration) or custom JWT auth, as you suggest.For the latter case, you can use the realm admin API to fetch all users.Then, you could use this data as part of your custom JWT auth to ensure that you return consistent user ids as Realm would have for the same set of credentials. This should allow you to seamlessly switch auth providers.",
"username": "Sudarshan_Muralidhar"
}
] | Refresh a Realm Refresh token? | 2023-06-12T00:45:40.512Z | Refresh a Realm Refresh token? | 418 |
null | [
"aggregation",
"queries",
"flutter"
] | [
{
"code": "durationvar results = db.query<Entry>(\n \"startTime BETWEEN {\\$0, \\$1} SORT(startTime DESC)\",\n [startDate, endDate]).toList();\nstartDateendDateduration",
"text": "Hi, I’m using Realm as a database for a Flutter app that I’m working on. I want to the sum of a field called duration for all entries between two specified times.I’ve looked through the documentation and it definitely seems possible, but I can’t figure out how to integrate it into my Flutter app.At the moment, I’m getting all the entries like this:where startDate and endDate are two DateTime variables set previously.Since I can’t figure out how to sum the duration field, I’m simply foreaching through the list and adding up the duration the long way. But I’m sure there is a way to do this in Realm - I just can’t figure it out.",
"username": "Michael_Inglis"
},
{
"code": "startTimeimport 'package:realm_dart/realm.dart';\n\npart 'sum_duration.g.dart';\n\n@RealmModel()\nclass _Stuff {\n @Indexed()\n late DateTime startTime;\n\n late int durationInMilliseconds;\n Duration get duration => Duration(milliseconds: durationInMilliseconds);\n set duration(Duration value) => durationInMilliseconds = value.inMilliseconds;\n}\n\nvoid main(List<String> arguments) {\n final realm = Realm(Configuration.local([Stuff.schema]));\n\n final begin = DateTime.fromMillisecondsSinceEpoch(0);\n final end = DateTime.now();\n\n final results =\n realm.query<Stuff>(r'startTime between {$0, $1}', [begin, end]);\n final sum = results.fold(Duration.zero, (acc, i) => acc + i.duration);\n\n print(sum);\n}\n",
"text": "I’m afraid the Dart SDK don’t support aggregates yet . If you add an index on startTime then the following should be fairly efficient:",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "Thank you very much for your help.I’m afraid the Dart SDK don’t support aggregates yetDo you know if this functionality is coming to Dart?",
"username": "Michael_Inglis"
},
{
"code": "",
"text": "Yes, but I cannot promise when",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to get sum of a field using Flutter/Dart? | 2023-06-08T08:17:09.608Z | How to get sum of a field using Flutter/Dart? | 966 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I use atlas data api REST api.\nIs there any way i can batch multiple request with findOne or findMany on 1 request.\nit will be fine if if only have 1 request but sometime i need to retrieve data on multiple collection\nI know about aggregation can query that but my data doesn’t have relation.\nI just want to group 2 or 3 findOne request to 1 .",
"username": "Trieu_Boo"
},
{
"code": "",
"text": "Take a look at $unionWith. Basically, you do a $match for thr first find, then $unionWith for the other finds in the other collections.",
"username": "steevej"
},
{
"code": " pipeline: [\n { '$match': { _id: { '$oid': '64e21c2bfd2411163dde547f' } } },\n {\n '$unionWith': {\n coll: 'Fish',\n pipeline: [\n { '$match': { _id: { '$oid': '6486815f53f72d0e3c3c5cdb' } } }\n ]\n }\n }\n ]\n}\n",
"text": "@steevej hi Thank for your answer i try it and it working .\nbut it have a problem if one of multiple request have empty result i will not know what is empty.sample union 2 collection Box and Fish but it only return data from Fish .",
"username": "Trieu_Boo"
},
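One way to tell which branch of such a $unionWith pipeline came back empty is to tag each document with the collection it came from before the results are merged. A hedged sketch in the same Data API style as the pipeline above (the ObjectIds are only placeholders):

[
  { "$match": { "_id": { "$oid": "64e21c2bfd2411163dde547f" } } },
  { "$addFields": { "source": "Box" } },
  {
    "$unionWith": {
      "coll": "Fish",
      "pipeline": [
        { "$match": { "_id": { "$oid": "6486815f53f72d0e3c3c5cdb" } } },
        { "$addFields": { "source": "Fish" } }
      ]
    }
  }
]

If no document with "source": "Box" appears in the result, the Box lookup was the empty one.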
{
"code": "",
"text": "I just figure out how to do that with $faceit with the help of chatgptNow I looking for something to do multiple update by 1 aggregate but chatgpt say i can’t do that.\nIs that correct ?",
"username": "Trieu_Boo"
},
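For readers looking for the general shape of such a $facet pipeline, one possible mongosh sketch is below. It is only an illustration (the collection names, ObjectIds, and the $limit trick are assumptions, not the poster's actual implementation); it reaches the second collection through an uncorrelated $lookup inside one of the facets:

db.Box.aggregate([
  { $facet: {
      box: [ { $match: { _id: ObjectId("64e21c2bfd2411163dde547f") } } ],
      fish: [
        { $limit: 1 },   // keep a single input document so the uncorrelated $lookup runs once
        { $lookup: {
            from: "Fish",
            pipeline: [ { $match: { _id: ObjectId("6486815f53f72d0e3c3c5cdb") } } ],
            as: "docs"
        } },
        { $unwind: "$docs" },
        { $replaceRoot: { newRoot: "$docs" } }
      ]
  } }
])

The single result document looks like { box: [ ... ], fish: [ ... ] }, so an empty array immediately shows which of the batched lookups found nothing. Note that the fish facet assumes the Box collection has at least one document to feed the $limit stage.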
{
"code": "",
"text": "As a courtesy to other users of the forum it would be nice if you could publish the $facet implementation you came up with.Ad Thanks Vance",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Batch multiple request | 2023-06-08T02:11:29.388Z | Batch multiple request | 926 |
null | [
"charts"
] | [
{
"code": "",
"text": "Hi,I want to know if we can created calculated field, based on two columns of each row.\nI need to divide this two columns after charts make sum aggregation.\n( DONT WANT AGGREGATION, Want calculated field based on aggregation done by charts )\nAnd Dynamic Columns was grey cant add field.Thanks for help.",
"username": "Jonathan_Gautier"
},
{
"code": "[\n {\n $group: {\n _id: {},\n totalBeds: { $sum: '$beds' },\n totalBedrooms: { $sum: '$bedrooms' }\n }\n },\n {\n $set: {\n ratio: { $divide: ['$totalBeds', '$totalBedrooms'] }\n }\n }\n]\n",
"text": "Hey @Jonathan_Gautier -This is possible, but as you discovered you can’t just use a simple calculated field. You’ll need to add a chart query that pre-groups the values, and then adds a calculated field from the groups.Here’s an example query. Note that the query results in just a single document being returned, but you can use the calculated values on a chart.",
"username": "tomhollander"
},
{
"code": "[\n {\n$group: {\n _id: {},\n totalCA: { $sum: '$ca' },\n totalSpend: { $sum: '$spend' }\n}\n },\n {\n$set: {\n ratio: { $divide: ['$totalSpend', '$totalCA'] }\n}\n }\n]\n(InvalidPipelineOperator) Invalid $addFields :: caused by :: Unrecognized expression '$group'",
"text": "Hi @tomhollander, thanks for replying !I have try to use your code :But got this error :(InvalidPipelineOperator) Invalid $addFields :: caused by :: Unrecognized expression '$group'\nI want to add column and divide spend by ca.Spend and Ca was already aggreation $sum.\nAnd you didn’t see it but all data was by date, and when i filter with charts i can aggregate to have multiple days in one row with aggregation.",
"username": "Jonathan_Gautier"
},
{
"code": "_id$match",
"text": "OK. Sorry the sample code I sent needs to go into the query bar, not into the Calculated Field dialog.\nBut seeing the chart you want, you’ll need to put the country into the _id of the group so you get a separate line for each country. If you want to do any filtering, you’ll also need to add additional $match stages in the query.Basically with the approach I’m showing you, you need to do all of the data processing in the query bar, which will result in a few pre-calculated fields you can put onto your chart.HTH\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Yes but if i want to use Charts Filters ? How can i manage this case ?",
"username": "Jonathan_Gautier"
},
{
"code": "$match",
"text": "The filters in the filter pane are applied after the query bar in the pipeline. While you may still find a way to use it, I suspect for your case that you would be better off putting the filters into the query bar, i.e. by creating $match stages early in the pipeline.",
"username": "tomhollander"
},
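As an illustration of that suggestion, the query bar pipeline could carry its own date filter ahead of the grouping. This is only a sketch; the field names and the hard-coded date range are assumptions:

[
  { $match: { date: { $gte: '2021-01-01', $lt: '2021-02-01' } } },
  { $group: { _id: '$country', totalCA: { $sum: '$ca' }, totalSpend: { $sum: '$spend' } } },
  { $set: { ratio: { $divide: ['$totalSpend', '$totalCA'] } } }
]

Because the $match runs before the $group, changing the range means editing the query rather than using the dashboard filter pane.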
{
"code": "",
"text": "I want viewer can use filter of dashboard and can change date range. I know it’s was tricky, but i dont understand why i cant calcul field after agregation was done.Just want way to post-process data and make formula like Excel can do \nHere divide two field of one row ( data generated by aggregation $sum )",
"username": "Jonathan_Gautier"
},
{
"code": "",
"text": "Hi @Jonathan_Gautier -Thanks for the additional information. Unfortunately the approach I’ve outlined is not compatible with dashboard filters. Since the dashboard filters apply after the groupings in your query, they are too late to affect the results. I’m afraid I can’t think of a solution that meets all of your requirements, although we’ll think about how we can extend the product to support this in the future.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi @tomhollander, Thanks for your help ! Did you add this idea in mongo feedback ?",
"username": "Jonathan_Gautier"
},
{
"code": "",
"text": "Hello @tomhollander,Is there any development regarding this? As I understand totals in the table are useless for any kind of calculated ratio.Many thanks.\nMichal",
"username": "Michal_Pinka"
}
] | Calculated Field Row Total Divide? | 2021-01-02T01:44:03.106Z | Calculated Field Row Total Divide? | 6,767 |
null | [
"node-js",
"mongoose-odm",
"atlas",
"graphql",
"graphql-api"
] | [
{
"code": "",
"text": "If we have a MongoDB database and app service for it, utilizing the Atlas GraphQL API, it autogenerates the GraphQL schema itself and you cannot modify as far as I can tell.I want to know if I can use this as a subgraph in Apollo Federation. The reason I am unsure if it will work, is Apollo Federation appears to require metadata & directives to specify to the supergraph certain things (ie. the @keys directive on a type to say which fields are the Ids defined in both subgraphs). Because of stuff like this, and not seeing a way to add this to the GraphQL schema within the app service / Atlas GraphQL API, I don’t believe it would work.But, in this article Building a Modern App Stack with Apollo GraphQL and MongoDB Atlas | MongoDB Blog I see that it is talking about Apollo supergraphs, and mentions the Atlas GraphQL API as 1 of 4 options, so it seems to imply it should be possible.I realize that I could create my own service to handle GraphQL requests with Apollo Server or something similar, and hook it up to MongoDB as the datasource with something like Mongoose and therefore use it with Apollo Federation, but I am wondering if it is possible to use the Atlas GraphQL API as a subgraph for Apollo Federation.",
"username": "JF_Deighton"
},
{
"code": "",
"text": "That’s a really good question because I also just finished viewing the youtube video you mentioned, and I was very disappointed : At the beginning of the video, she talked about how it is really quick to create its own GraphQL API based on its MongoDB collections, but then, the guy who presents the federation recreates all those APIs from scratch.\nSo where is AppService GraphQL usefull here ???",
"username": "Frederic_Meriot"
},
{
"code": "",
"text": "Hello,Any update on this? Have you tried to implement it?As alternative i was thinking to use the function trigger from atlas service to control the input data or requesting external API (to retrieve extra information for example) before updating the database, but I dont know how flexible it is and if it will be easy to maintain. Any thought about it?",
"username": "cyril_moreau"
},
{
"code": "",
"text": "The only way to enable an AppServices GraphQL API is to be able to add some custom directives to the generated schema (@keys, @shareable, @provides …etc) but it is not possible for the moment.",
"username": "Frederic_Meriot"
},
{
"code": "",
"text": "So right now the best way and more flexible would be to create my own graphql subgraph as on the video Building a Modern App Stack with Apollo GraphQL and MongoDB Atlas | MongoDB BlogAlso I would like to know how can i control the input data with AppService GraphQL, for example to check that a description field is smaller than 300 characters.Should I create a trigger function (database insert/update type of trigger) on every graphql query to parse the input data and make sure that it is correct?",
"username": "cyril_moreau"
},
{
"code": "",
"text": "That’s what I did finally. I created my own federation with NestJS and Apollo server. I abandoned the idea of using Appservice to create my graphql apis and use them inside a federation. It’s just a shame because the GQL schema generation is really awesome and really helpful to have a complete CRUD graphQL API on your data.",
"username": "Frederic_Meriot"
},
{
"code": "",
"text": "I think i will still use the appService and use Apollo Server as a graphql proxy that will do graphql query to the AppService and check the input data. I dont know if it is a good idea but AppService has nice features that can be helpful later (triggers/eventBridge on database update - authentication…) and add a layer of abstraction on the database.",
"username": "cyril_moreau"
},
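A rough sketch of that proxy idea, validating input in an Apollo resolver before forwarding the mutation to an App Services GraphQL endpoint. Everything here is assumed for illustration (the endpoint URL and token held in environment variables, the insertOneProduct mutation shape, and the field names), not a confirmed API:

const { ApolloServer, gql, UserInputError } = require('apollo-server');
const { GraphQLClient } = require('graphql-request');

const typeDefs = gql`
  type Product { _id: String, description: String }
  type Query { _empty: String }
  type Mutation { insertOneProduct(description: String!): Product }
`;

const resolvers = {
  Mutation: {
    insertOneProduct: async (_, { description }, { client }) => {
      // Reject invalid input before it ever reaches App Services
      if (description.length > 300) {
        throw new UserInputError('description must be at most 300 characters');
      }
      // Forward the call to the App Services GraphQL API (mutation shape assumed)
      const data = await client.request(
        `mutation ($data: ProductInsertInput!) {
           insertOneProduct(data: $data) { _id description }
         }`,
        { data: { description } }
      );
      return data.insertOneProduct;
    }
  }
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => ({
    client: new GraphQLClient(process.env.APP_SERVICES_GRAPHQL_URL, {
      headers: { Authorization: `Bearer ${process.env.APP_SERVICES_ACCESS_TOKEN}` }
    })
  })
});

server.listen().then(({ url }) => console.log(`Proxy ready at ${url}`));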
{
"code": "apollo-servergraphqlconst { ApolloServer, gql } = require('apollo-server');\n\nconst typeDefs = gql`\n type Query {\n hello: String\n }\n`;\n\nconst resolvers = {\n Query: {\n hello: () => 'Hello world!'\n }\n};\n\nconst server = new ApolloServer({\n typeDefs,\n resolvers\n});\n\nserver.listen().then(({ url }) => {\n console.log(`Server ready at ${url}`);\n});\ngraphql-requestGraphQLClientgraphql-requestconst { GraphQLClient } = require('graphql-request');\n\nconst client = new GraphQLClient('https://your-atlas-graphql-endpoint', {\n headers: {\n Authorization: `Bearer ${your-access-key}`\n }\n});\n\nclient.request(`{\n hello\n}`).then(data => console.log(data));\nyour-atlas-graphql-endpointyour-access-keyclientclienthelloconst { ApolloServer, gql } = require('apollo-server');\nconst { GraphQLClient } = require('graphql-request');\n\nconst typeDefs = gql`\n type Query {\n hello: String\n }\n`;\n\nconst resolvers = {\n Query: {\n hello: async (_, args, context) => {\n const client = context.client;\n const data = await client.request(`{\n hello\n }`);\n return data.hello;\n }\n }\n};\n\nconst server = new ApolloServer({\n typeDefs,\n resolvers,\n context: ({ req }) => ({\n client: new GraphQLClient('https://your-atlas-graphql-endpoint', {\n headers: {\n Authorization: `Bearer ${your-access-key}`\n }\n })\n })\n});\n\nserver.listen().then(({ url }) => {\n console.log(`Server ready at ${url}`);\n});\nGraphQLClientcontext",
"text": "This is the tech doc I wrote up a couple weeks ago for connecting MongoDB Atlas GraphQL API to an Apollo GraphQL service, is this what you’re looking for?To connect MongoDB Atlas GraphQL with Apollo GraphQL, follow these steps:Create a MongoDB Atlas GraphQL service. This can be done in the Atlas console by selecting your cluster, navigating to the “GraphQL” tab, and clicking “Create Service”. Follow the prompts to create a new service.Once the service is created, note the GraphQL Endpoint URL and the Access Key. These will be used to connect to the service from Apollo.In your Apollo server, install the apollo-server and graphql packages. These can be installed using npm or yarn.Create a new instance of the ApolloServer class and pass in a configuration object that specifies the schema and resolvers for your GraphQL API. For example:Install the graphql-request package, which will be used to make requests to your MongoDB Atlas GraphQL service. This can be installed using npm or yarn.In your Apollo server, create a new instance of the GraphQLClient class from graphql-request. Use this instance to make requests to your MongoDB Atlas GraphQL service. For example:Replace your-atlas-graphql-endpoint with the actual GraphQL Endpoint URL of your MongoDB Atlas service, and your-access-key with the actual Access Key for the service.This code creates a new instance of the GraphQLClient class in the context function of the Apollo server. This instance is then passed to the resolvers as part of the context object, and used to make requests to the MongoDB Atlas GraphQL service.",
"username": "Brock"
},
{
"code": "",
"text": "Ok, but here your are not using federation of graphQL APIs. You are just “resolving” to an Atlas GraphQL service from your apollo api.",
"username": "Frederic_Meriot"
}
] | Can MongoDB Atlas GraphQL API be used with Apollo Federation (as a subgraph under an Apollo supergraph)? | 2022-08-29T20:43:33.016Z | Can MongoDB Atlas GraphQL API be used with Apollo Federation (as a subgraph under an Apollo supergraph)? | 3,933 |
[
"dot-net",
"data-modeling"
] | [
{
"code": "",
"text": "I have a problem with values of type double. I’m trying to create a BsonValue with the value 47.11. I tried new BsonDouble() and BsonValue.Create(). In both options the driver (MongoDB.Driver 2.19.0) creates a BsonValue of type double, but with a wired value. Instead of 47.11 I get 47.109999999999999. The same happens, when I use document.ToJson() I also get the wrong value.\nWhen I look at the document in Studio 3T the value is correct (47.11).What am I doning wrong?\n",
"username": "Darius_Ortmann"
},
{
"code": "",
"text": "Darius,Welcome to the MongoDB forum. I understand that you’re having trouble with the precision of doubles in BSON. There’s a few ways to deal with this, but the simplest will be to store these values using the Decimal128 type. Here’s some useful articles for that:",
"username": "Patrick_Gilfether1"
},
{
"code": "",
"text": "Thx for your help. I tried, but it’s the same problem:\n",
"username": "Darius_Ortmann"
},
{
"code": "var bsonDecimal = new BsonDecimal128(47.11M)\n",
"text": "Hi Darius-- this is more of a question about C# than MongoDB. You’ll need to let C# know that you want to use that decimal as a monetary value. You can do that by appending an ‘M’ to the value declaration, e.g.:Here’s the relevant C# documentation:Learn about the built-in C# floating-point types: float, double, and decimal",
"username": "Patrick_Gilfether1"
},
{
"code": "",
"text": "This was a simplified example. Of course the value is not hard coded. But your hint helped me:\nI have to cast a double variable into decimal, then it works.\nThx.",
"username": "Darius_Ortmann"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C#: Double 47.11 -> 47.109999999999999 | 2023-06-12T13:48:53.974Z | C#: Double 47.11 -> 47.109999999999999 | 575 |
|
null | [
"queries",
"mongodb-shell"
] | [
{
"code": "",
"text": "Can someone explain us what are the ways from Mongo DB features i can use to delete the documents which are greater than X days from the collection periodically from the mongo collection",
"username": "Venkatesh_Bingi"
},
{
"code": "",
"text": "Hi @Venkatesh_BingiTTL Indexes are the feature you are looking for.Some considerations when enabling a TTL index:",
"username": "chris"
},
{
"code": "",
"text": "In addition to what Chris mentioned about using a TTL index, Online Archive can be useful for your use case.Note that Online Archive Atlas moves infrequently accessed data from your Atlas cluster to a MongoDB-managed read-only Federated Database Instance on a cloud object storage older than X days. It not just deletes data from your cluster, but it moves your data older than X days (or any other rule you configure) to cloud object storage.Here are some useful links:",
"username": "Prem_PK_Krishna"
},
{
"code": "",
"text": "Thanks for your suggestion @chris I would like to know does TTL Index cause any performance issue ?",
"username": "Venkatesh_Bingi"
}
] | How to remove the documents older than X days from collection considering collection is having createAt date field | 2023-06-12T10:08:51.144Z | How to remove the documents older than X days from collection considering collection is having createAt date field | 625 |
null | [] | [
{
"code": "<!-- Include the MongoDB Charts Embedding SDK -->\n<script src=\"https://unpkg.com/@mongodb-js/[email protected]/dist/charts-embed-dom.umd.min.js\"></script>\n<script>\n async function renderDashboard() {\n const sdk = new ChartsEmbedSDK({\n baseUrl: 'https://charts.mongodb.com/charts-project-0-kuacs', // Replace with your base URL\n });\n \n const dashboard = sdk.createDashboard({\n dashboardId: '6483b793-7171-4c2e-8dea-122ef63e26a0', // Replace with your dashboard ID\n // other options\n });\n\n // Extract brand and model from the URL\n const pathArray = window.location.pathname.split('/');\n const brand = pathArray[2];\n const model = pathArray[3];\n\n // Set the filter\n const collectionName = brand + \" \" + model;\n console.log(\"collectionName: \" + collectionName);\n const filter = { 'collection': collectionName };\n dashboard.setFilter(filter);\n\n\n \n \n await dashboard.render(document.getElementById('dashboard'));\n }\n \n // Call renderDashboard after the window has loaded\n window.onload = function() {\n renderDashboard();\n };\n</script>\n",
"text": "Hello,Here is my code:I checked the console log it is right. I have allowed the filter in the dashboard.What is wrong?",
"username": "Vivien_Richaud"
},
{
"code": "",
"text": "Hi @Vivien_Richaud -Can you describe/show the behaviour you are seeing?Tom",
"username": "tomhollander"
},
{
"code": "<script>\n async function renderDashboard() {\n\n // Extract brand and model from the URL\n const pathArray = window.location.pathname.split('/');\n const brand = decodeURIComponent(pathArray[2]);\n const model = decodeURIComponent(pathArray[3]);\n\n // Set the filter\n const collectionName = brand + \" \" + model;\n console.log(\"collectionName: \" + collectionName);\n\n\n const sdk = new ChartsEmbedSDK({\n baseUrl: 'https://charts.mongodb.com/charts-project-0-kuacs', // Replace with your base URL\n });\n \n const dashboard = sdk.createDashboard({\n dashboardId: '6483b793-7171-4c2e-8dea-122ef63e26a0', // Replace with your dashboard ID\n // other options\n filter: {'collection': collectionName }\n });\n\n\n \n await dashboard.render(document.getElementById('dashboard'));\n }\n \n // Call renderDashboard after the window has loaded\n window.onload = function() {\n renderDashboard();\n };\n</script>\n",
"text": "I have found the solution:",
"username": "Vivien_Richaud"
}
] | Dashboard filtering not working in SDK | 2023-06-13T01:37:17.032Z | Dashboard filtering not working in SDK | 647 |
null | [
"aggregation",
"sharding"
] | [
{
"code": "from$lookup$graphLookupgroupsgroupsViewsortdb.createView(\n \"groupsView\",\n \"groups\",\n [{ $sort: { name: 1 } }]\n)\n{\n \"serverInfo\" : {\n \"host\" : \"b68e294df2ba\",\n \"port\" : 27017.0,\n \"version\" : \"6.0.5\",\n \"gitVersion\" : \"c9a99c120371d4d4c52cbb15dac34a36ce8d3b1d\"\n },\n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : 104857600.0,\n \"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600.0,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600.0,\n \"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600.0,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600.0,\n \"internalQueryProhibitBlockingMergeOnMongoS\" : 0.0,\n \"internalQueryMaxAddToSetBytes\" : 104857600.0,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600.0\n },\n \"mergeType\" : \"mongos\",\n \"splitPipeline\" : {\n \"shardsPart\" : [\n {\n \"$graphLookup\" : {\n \"from\" : \"groupsView\",\n \"as\" : \"toto\",\n \"connectToField\" : \"group_index\",\n \"connectFromField\" : \"group_index\",\n \"startWith\" : \"$group_index\"\n }\n }\n ],\n \"mergerPart\" : [\n {\n \"$mergeCursors\" : {\n \"lsid\" : {\n \"id\" : \"e3671c9d-6a7b-4c22-9d53-8708bd4a1299\",\n \"uid\" : \"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NM.. 8 more bytes\"\n },\n \"compareWholeSortKey\" : false,\n \"tailableMode\" : \"normal\",\n \"nss\" : \"pocDB.groups\",\n \"allowPartialResults\" : false,\n \"recordRemoteOpWaitTime\" : false\n }\n }\n ]\n },\n \"shards\" : {\n \"rs-shard-eu-frc-2\" : {\n \"host\" : \"mongodb-shard-eu-frc-2-001:27017\",\n \"stages\" : [\n {\n \"$cursor\" : {\n \"queryPlanner\" : {\n \"namespace\" : \"pocDB.groups\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n\n },\n \"queryHash\" : \"17830885\",\n \"planCacheKey\" : \"17830885\",\n \"maxIndexedOrSolutionsReached\" : false,\n \"maxIndexedAndSolutionsReached\" : false,\n \"maxScansToExplodeReached\" : false,\n \"winningPlan\" : {\n \"stage\" : \"SHARDING_FILTER\",\n \"inputStage\" : {\n \"stage\" : \"COLLSCAN\",\n \"direction\" : \"forward\"\n }\n },\n \"rejectedPlans\" : [\n\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 10000.0,\n \"executionTimeMillis\" : 24023.0,\n \"totalKeysExamined\" : 0.0,\n \"totalDocsExamined\" : 10000.0,\n \"executionStages\" : {\n \"stage\" : \"SHARDING_FILTER\",\n \"nReturned\" : 10000.0,\n \"executionTimeMillisEstimate\" : 1.0,\n \"works\" : 10002.0,\n \"advanced\" : 10000.0,\n \"needTime\" : 1.0,\n \"needYield\" : 0.0,\n \"saveState\" : 11.0,\n \"restoreState\" : 11.0,\n \"isEOF\" : 1.0,\n \"chunkSkips\" : 0.0,\n \"inputStage\" : {\n \"stage\" : \"COLLSCAN\",\n \"nReturned\" : 10000.0,\n \"executionTimeMillisEstimate\" : 1.0,\n \"works\" : 10002.0,\n \"advanced\" : 10000.0,\n \"needTime\" : 1.0,\n \"needYield\" : 0.0,\n \"saveState\" : 11.0,\n \"restoreState\" : 11.0,\n \"isEOF\" : 1.0,\n \"direction\" : \"forward\",\n \"docsExamined\" : 10000.0\n }\n }\n }\n },\n \"nReturned\" : 10000,\n \"executionTimeMillisEstimate\" : 17\n },\n {\n \"$graphLookup\" : {\n \"from\" : \"groupsView\",\n \"as\" : \"toto\",\n \"connectToField\" : \"group_index\",\n \"connectFromField\" : \"group_index\",\n \"startWith\" : \"$group_index\"\n },\n \"nReturned\" : 10000,\n \"executionTimeMillisEstimate\" : 24024\n }\n ]\n }\n },\n \"command\" : {\n \"aggregate\" : \"groups\",\n \"pipeline\" : [\n {\n \"$graphLookup\" : {\n \"from\" : \"groupsView\",\n \"startWith\" : \"$group_index\",\n \"connectFromField\" : \"group_index\",\n \"connectToField\" : 
\"group_index\",\n \"as\" : \"toto\"\n }\n }\n ],\n \"cursor\" : {\n\n }\n },\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : \"2023-06-05T09:08:26.000+0000\",\n \"signature\" : {\n \"hash\" : \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\n \"keyId\" : 0\n }\n },\n \"operationTime\" : \"2023-06-05T09:08:26.000+0000\"\n}\n",
"text": "Hi! As part of a PoC, I did some experimentation on a very simple local cluster (sharded, two nodes with some zones contraints). Everything’s going fine, but I’ve got a gap between my experiment and the documentation concerning sharded views. The documentation (6.0+) states:Views are considered sharded if their underlying collection is sharded. You cannot specify a sharded view for the from field in $lookup and $graphLookup operations.However, it works perfectly on my test example (a sharded collection groups and a simple view groupsView that only contains a sort stage).I was wondering if this was an omission linked to 5.1 release, which allows to use sharded collections from lookup and graphlookup? Or maybe my test case is too simplistic? If I can provide you with more details, please don’t hesitate to ask.MongoDB version: 6.0.5\nSharded view:Execution stats :",
"username": "Remi_Delmas"
},
{
"code": "",
"text": "Hi @Remi_Delmas and welcome to MongoDB community forums!!As mentioned in the SERVER-27533: Allow “from” collection of $graphLookup to be sharded and SERVER-29159: Allow “from” collection of $lookup to be sharded, “from” collections in $lookup and $graphLookup is introduced in the 5.1 release.While I appreciate your feedback on the documentation, I am raising an internal ticket in response to the same.Let us know if you have any further concerns.Regards\nAasawari",
"username": "Aasawari"
}
] | Sharded view with lookup and graphlookup | 2023-06-05T16:03:38.298Z | Sharded view with lookup and graphlookup | 772 |
[
"queries",
"replication",
"java",
"compass",
"change-streams"
] | [
{
"code": "db.adminCommand( { getClusterParameter: \"changeStreamOptions\" } )\nMongoServerError: command not found\n at Connection.onMessage (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:2:886000)\n at MessageStream.<anonymous> (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:2:883886)\n at MessageStream.emit (node:events:513:28)\n at p (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:2:909858)\n at MessageStream._write (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:2:908478)\n at writeOrBuffer (node:internal/streams/writable:391:12)\n at _write (node:internal/streams/writable:332:10)\n at Writable.write (node:internal/streams/writable:336:10)\n at TLSSocket.ondata (node:internal/streams/readable:754:22)\n at TLSSocket.emit (node:events:513:28)\n",
"text": "\nAs per the documentation we can see the changestream options with commandBut I am getting the below error using atlas for mongodb",
"username": "Debabrata_Patnaik"
},
{
"code": "db.adminCommand( { getClusterParameter: \"changeStreamOptions\" } )\ngetClusterParameter",
"text": "Hey @Debabrata_Patnaik,Thanks for reaching out to the MongoDB Community forums But I am getting the below error using the atlas for MongodbPlease note that the getClusterParameter is an administrative command for retrieving the values of cluster parameters and is only available in server version 6.0. The above command is not supported on MongoDB Atlas. Please refer to the getClusterParameter and setClusterParameter documentation to read more.Best Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
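For anyone running a self-managed MongoDB 6.0+ replica set (where these commands are available), the calls look roughly like this when run through mongosh:

// Read the current value
db.adminCommand( { getClusterParameter: "changeStreamOptions" } )

// Example: set the expiry for change stream pre- and post-images
db.adminCommand( {
  setClusterParameter: {
    changeStreamOptions: { preAndPostImages: { expireAfterSeconds: 100 } }
  }
} )

As noted above, these admin commands are not available on Atlas.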
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoServerError for getClusterParameter: "changeStreamOptions" Need solution | 2023-06-08T10:51:47.474Z | MongoServerError for getClusterParameter: “changeStreamOptions” Need solution | 714 |