Dataset columns:
image_url: string (lengths 113-131)
tags: sequence
discussion: list
title: string (lengths 8-254)
created_at: string (lengths 24-24)
fancy_title: string (lengths 8-396)
views: int64 (range 73-422k)
null
[ "app-services-hosting" ]
[ { "code": "", "text": "Hi,Can anyone point me to any tutorial or guide or advice on how to host simple React apps on Realm Static Hosting? The documentation says we can deploy single page React Apps, and there is a Stitch tutorial on Hugo - https://www.mongodb.com/how-to/static-website-deployments-mongodb-stitch-hugo-git-travis-ci but it is a bit different. I need to deploy a simple React app on Static Hosting using npm.Thanks in advance…! ", "username": "shrey_batra" }, { "code": "", "text": "Maybe this blog post helps a bit more -It follows the same steps the docs describe.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks @Sumedha_Mehta1", "username": "shrey_batra" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
React App on Realm Static Hosting
2020-07-07T16:01:11.955Z
React App on Realm Static Hosting
4,858
null
[]
[ { "code": "", "text": "Just a quick question: Is it possible to force a restart of a specific replica member in Atlas?We started a backup restore last Friday and it is stuck until now because one of the hosts did not come up after it was shut down for the restore process. A blue bar is displayed in Atlas which says “We are deploying your changes”. We are not able to change anything because the restore must be finished first.However, the restore is apparently stuck waiting for the host which is down. Is it possible to force a restart or to get another AWS instance spawning up to replace the one which is down?Best,\nMartin", "username": "MartinLoeper" }, { "code": "", "text": "Hi @MartinLoeper,There currently is no API for directly restarting a cluster member - Atlas should take care of that automatically if one is unhealthy.Please contact Atlas support for assistance with operational issues like this.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X,thanks for your answer.\nI know that everything should be recovered automatically.\nHowever, the host is still down and the organization unfortunately has no active support plan.I assume there is no other option than subscribing to a developer plan in this case?\nI understand that this is an operational issue and should be discussed with support in the first place.\nHowever, under our current circumstances I wanted to check first if there isn’t an option how I can deal with it manually without support being involved.Best,\nMartin", "username": "MartinLoeper" }, { "code": "", "text": "I assume there is no other option than subscribing to a developer plan in this case?Hi Martin,You can still contact the Atlas support team for operational support questions via the in-app chat. The Atlas support link I provided earlier should show this as an option when you are logged into your Atlas account, but if you are having trouble with that link, the relevant instructions are also on Request Atlas Support.Developer and Premium support plans include private case management with severities & SLA, broader scope for questions, and 24x7 coverage for cases with Blocker and Critical severity. For more information see the MongoDB Support Policy.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to restart hosts that are down?
2020-07-07T21:01:15.825Z
How to restart hosts that are down?
3,826
null
[ "replication" ]
[ { "code": "", "text": "i’m using mongooseconfigured a replica set of 4 instancesi started inserting data into the replica set with interval 1 secondwhile the data is being inserted, i stepped down the primary server\nafter that a ‘disconnected’ event was fired and ‘reconnected’ is fired … after that the replica set has 3 servers it elects one of the 3 remaining to be the primary and the data continued to be inserted into the database\nwhen the new primary is stepped down now i have 2 working secondaries servers in the replica set it doesn’t elect a new primary of the two secondaries to be the new primary instead of that it throws MongooseServerSelectionErrorwhy not one of the two remaining servers is selected as primary?", "username": "ahmed_naser" }, { "code": "", "text": "Hi Ahmed,It’s because you have an even number of nodes in the replica set. This is not a recommended setup (see Replica Set Deployment Architectures). You should have an odd number of nodes in a replica set.A replica set elects a new primary based on simple majority voting between the nodes. Offline nodes is considered an abstain. So in your scenario of 4 nodes:This scenario is outlined in Consider Fault Tolerance, where a 4 node replica set can tolerate only 1 node offline. More than this, then they cannot elect a new primary.See Replica Set Elections for more details about election procedures in a replica set.Best regards,\nKevin", "username": "kevinadi" } ]
Server selection is not performed when replica has two members
2020-07-08T07:59:55.359Z
Server selection is not performed when replica has two members
3,009
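The thread above comes down to simple majority arithmetic: a 4-member replica set needs 3 votes to elect a primary, so it tolerates only one member being down. A minimal mongo-shell sketch of checking the voting configuration and moving to an odd number of voters; the host names are placeholders, not taken from the thread:

    // Count the configured voting members
    var cfg = rs.conf();
    var voters = cfg.members.filter(function (m) { return m.votes === 1; });
    print("voting members: " + voters.length);   // 4 here, so 3 votes are needed to elect

    // See which members are currently reachable and in what state
    rs.status().members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr);
    });

    // Reach an odd number of voters, e.g. by adding a lightweight arbiter (5 voters)...
    rs.addArb("arbiter.example.net:27017");      // placeholder host

    // ...or by removing one data-bearing member (3 voters)
    // rs.remove("member4.example.net:27017");

With 5 voters the set can survive two members going down; with 3 voters it survives one, which matches the fault-tolerance table linked in the reply.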
https://www.mongodb.com/…12d15eeb2b9c.png
[ "java", "compass" ]
[ { "code": "import org.springframework.data.mongodb.core.aggregation.*; [{$match: {\n pd: 'PD',\n type: 'type1',\n date: {\n $gte: '2019-01-01',\n $lte: '2019-12-01'\n }\n}}, {$sort: {\n pdg: 1,\n date: 1\n}}, {$group: {\n _id: {\n site: '$site'\n },\n wf: {\n $push: {\n date: '$date',\n resources: '$resources',\n tt: '$n'\n }\n },\n tresources: {\n $sum: '$resources'\n }\n}}, {$match: {\n tresources: {\n $gt: 0\n }\n}}]\n [{\n \"_id\": {\n \"site\": \"T\"\n },\n \"wf\": [\n {\n \"date\": \"2019-01-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-02-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-03-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-04-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-05-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-06-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-07-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-08-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-09-01\",\n \"resources\": 2\n },\n {\n \"date\": \"2019-10-01\",\n \"resources\": 1\n },\n {\n \"date\": \"2019-11-01\",\n \"resources\": 1\n },\n {\n \"date\": \"2019-12-01\",\n \"resources\": 1\n }\n ],\n \"tresources\": 21\n},{\n \"_id\": {\n \"site\": \"G\"\n },\n \"wf\": [\n {\n \"date\": \"2019-01-01\",\n \"resources\": 42\n },\n {\n \"date\": \"2019-02-01\",\n \"resources\": 42\n },\n {\n \"date\": \"2019-03-01\",\n \"resources\": 42\n },\n {\n \"date\": \"2019-04-01\",\n \"resources\": 39\n },\n {\n \"date\": \"2019-05-01\",\n \"resources\": 38\n },\n {\n \"date\": \"2019-06-01\",\n \"resources\": 38\n },\n {\n \"date\": \"2019-07-01\",\n \"resources\": 38\n },\n {\n \"date\": \"2019-08-01\",\n \"resources\": 41\n },\n {\n \"date\": \"2019-09-01\",\n \"resources\": 39\n },\n {\n \"date\": \"2019-10-01\",\n \"resources\": 34\n },\n {\n \"date\": \"2019-11-01\",\n \"resources\": 34\n },\n {\n \"date\": \"2019-12-01\",\n \"resources\": 31\n }\n ],\n \"tresources\": 458\n}]\n List<Bson> bsons = Arrays.asList(\n match(and(eq(\"pdg\", \"PD\"), eq(\"type\", \"type1\"),\n and(gte(\"date\", \"2019-01-01\"), lte(\"date\", \"2019-12-01\")))),\n sort(orderBy(ascending(\"pdg\"), ascending(\"date\"))),\n group(eq(\"site\", \"$site\"),\n push(\"wf\", and(eq(\"site\", \"$site\"), eq(\"date\", \"$date\"), eq(\"resources\", \"$resources\"))),\n sum(\"tresources\", \"$resources\")),\n match(gt(\"tresources\", 0L)));\n template.getCollection(\"workforce\").aggregate(bsons).forEach(document -> log.info(document.get(\"wf\").toString()));\n Document{{_id=Document{{site=T}}, wf=[true, true, true, true, true, true, true, true, true, true, true, true], tresources=21.0}}\nDocument{{_id=Document{{site=G}}, wf=[true, true, true, true, true, true, true, true, true, true, true, true], tresources=458.0}}\n", "text": "I am using compass to design the pipeline and export it to Java.\nI am facing a problem when I use the exported pipeline in Java .\nI am using\nimport org.springframework.data.mongodb.core.aggregation.*;\nin Java .\nJava 8, Spring boot data mongodb 2.3.1,\nCompass 1.21.2 Mongodb Compass 4.2.8\nI cant find the driver version . It must be the latest since I am using the latest from spring boot.\nThe problem is that the items that I push during grouping becomes True False values in the Java frame work. I cannot use the attributes name because of my client .The pipeline is\n’In Compass I get the results ascompass485×574 16.7 KB\nIn Java CodeI get results asI cannot find any other way to do group based queries in Java . 
It would be nice to know How can I capture this output to custom Java Objects. I could do that using @Query annotation in the repository but it can only handle simple queries The same goes for template.aggregate and template.find.Collection sample…\n[{\n“_id”: “-1693282609”,\n“comments”: “mongoupload”,\n“date”: “2019-11-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 1\n}, {\n“_id”: “-1694086966”,\n“comments”: “mongoupload”,\n“date”: “2019-05-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1694027384”,\n“comments”: “mongoupload”,\n“date”: “2019-07-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1693967802”,\n“comments”: “mongoupload”,\n“date”: “2019-09-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1693997593”,\n“comments”: “mongoupload”,\n“date”: “2019-08-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1694176339”,\n“comments”: “mongoupload”,\n“date”: “2019-02-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1694057175”,\n“comments”: “mongoupload”,\n“date”: “2019-06-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1694116757”,\n“comments”: “mongoupload”,\n“date”: “2019-04-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1693312400”,\n“comments”: “mongoupload”,\n“date”: “2019-10-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 1\n},{\n“_id”: “-1694146548”,\n“comments”: “mongoupload”,\n“date”: “2019-03-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1693252818”,\n“comments”: “mongoupload”,\n“date”: “2019-12-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 1\n},{\n“_id”: “-1694206130”,\n“comments”: “mongoupload”,\n“date”: “2019-01-01”,\n“site”: “T”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 2\n},{\n“_id”: “-1840908577”,\n“comments”: “mongoupload”,\n“date”: “2019-03-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 42\n},{\n“_id”: “-1840044638”,\n“comments”: “mongoupload”,\n“date”: “2019-11-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 34\n},{\n“_id”: “-1840819204”,\n“comments”: “mongoupload”,\n“date”: “2019-06-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 38\n},{\n“_id”: “-1840074429”,\n“comments”: “mongoupload”,\n“date”: “2019-10-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 34\n},{\n“_id”: “-1840968159”,\n“comments”: “mongoupload”,\n“date”: “2019-01-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 42\n},{\n“_id”: “-1840878786”,\n“comments”: “mongoupload”,\n“date”: “2019-04-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 39\n},{\n“_id”: “-1840938368”,\n“comments”: “mongoupload”,\n“date”: “2019-02-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 42\n},{\n“_id”: “-1840789413”,\n“comments”: “mongoupload”,\n“date”: “2019-07-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 38\n},{\n“_id”: “-1840759622”,\n“comments”: “mongoupload”,\n“date”: “2019-08-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 
41\n},{\n“_id”: “-1840848995”,\n“comments”: “mongoupload”,\n“date”: “2019-05-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 38\n},{\n“_id”: “-1840729831”,\n“comments”: “mongoupload”,\n“date”: “2019-09-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 39\n},{\n“_id”: “-1840014847”,\n“comments”: “mongoupload”,\n“date”: “2019-12-01”,\n“site”: “G”,\n“country”: “”,\n“pdg”: “PD”,\n“type”: “type1”,\n“resources”: 31\n}]\n…\n’", "username": "Sam_Sam" }, { "code": "@Query@Aggregation@Query", "text": "Hello @Sam_Sam, welcome to the MogoDB community forum.Please post a sample input document (properly formatted). Also, specify the MongoDB, Java Driver and the Spring Data MongoDB versions.[ EDIT ADD ]org.springframework.data.mongodb.repository.Aggregation annotation can be used for aggregation queries for the repository query methods - this is similar to @Query used for JSON based queries. Here is an example post:To run an aggregation group query using the Spring repository object, you have to use the @Aggregation instead of the @Query …", "username": "Prasad_Saya" }, { "code": "org.bson.BsonInvalidOperationException: readStartDocument can only be called when CurrentBSONType is DOCUMENT, not when CurrentBSONType is ARRAY. ", "text": "I updated the post with the document format in JSON.\nI tried @aggregation but it gives me this error\norg.bson.BsonInvalidOperationException: readStartDocument can only be called when CurrentBSONType is DOCUMENT, not when CurrentBSONType is ARRAY. \nI removed the array brackets and it worked but in this case it only returned the results from thefirst match in the pipelline . I also tried setting the pipeline value but still got the same results.", "username": "Sam_Sam" }, { "code": "{\n \"_id\" : \"-1693282609\",\n \"comments\" : \"mongoupload\",\n \"date\" : \"2019-11-01\",\n \"site\" : \"T\",\n \"country\" : \"\",\n \"pdg\" : \"PD\",\n \"type\" : \"type1\",\n \"resources\" : 1\n}\n{\n \"_id\" : \"-1694086966\",\n \"comments\" : \"mongoupload\",\n \"date\" : \"2019-05-01\",\n \"site\" : \"T\",\n \"country\" : \"\",\n \"pdg\" : \"PD\",\n \"type\" : \"type1\",\n \"resources\" : 2\n}\n\tMongoCollection<Document> coll = mongoTemplate.getCollection(\"sample\");\n\n\tList<Bson> pipeline = Arrays.asList(\n match(and(eq(\"pdg\", \"PD\"), eq(\"type\", \"type1\"),\n and(gte(\"date\", \"2019-01-01\"), lte(\"date\", \"2019-12-01\")))),\n sort(orderBy(ascending(\"pdg\"), ascending(\"date\"))),\n group(eq(\"site\", \"$site\"),\n push(\"wf\", and(eq(\"site\", \"$site\"), eq(\"date\", \"$date\"), eq(\"resources\", \"$resources\"))),\n sum(\"tresources\", \"$resources\")),\n match(gt(\"tresources\", 0L)));\n\t\n\tList<Document> result = coll.aggregate(pipeline).into(new ArrayList<Document>());\n\tresult.forEach(doc -> System.out.println(doc.toJson()));;\nmongo{\n \"_id\":{\n \"site\":\"T\"\n },\n \"wf\":[\n {\n \"site\":\"T\",\n \"date\":\"2019-05-01\",\n \"resources\":2.0\n },\n {\n \"site\":\"T\",\n \"date\":\"2019-11-01\",\n \"resources\":1.0\n }\n ],\n \"tresources\":3.0\n}\n{\n \"_id\":{\n \"site\":\"T\"\n },\n \"wf\":[\n true,\n true\n ],\n \"tresources\":3.0\n}", "text": "I tried this in a similar Java and Spring setup as yours:Input collection documents:Spring Data MongoDB code (same Java code from your posting):I get the same result from the aggregation run from the mongo shell query and the Spring Java code.[ EDIT ADD ]: I have a correction in this post.The above output is from the MongoDB Java Driver code , not 
from the Spring Java code. The output from Spring Java code is (as you had posted, different):", "username": "Prasad_Saya" }, { "code": "MongoTemplate#executeCommanddb.runCommand( { <command> } ){\n aggregate: \"<collection>\" ,\n pipeline: [ <stage>, <...> ],\n ...,\n cursor: <document>,\n ...\n}\nMongoTemplateString match1 = \"{ '$match':{ 'pdg':'PD', 'type':'type1', 'date':{ '$gte':'2019-01-01', '$lte':'2019-12-01' } } }\";\nString sort = \"{ '$sort':{ 'pdg':1, 'date':1 } }\";\nString group = \"{ '$group':{ '_id':{ 'site':'$site' }, 'wf':{ '$push':{ 'date':'$date', 'resources': '$resources', 'tt':'$n' } }, 'tresources':{ '$sum':'$resources' } } }\";\nString match2 = \"{ '$match':{ 'tresources':{ '$gt':0 } } }\";\n\nString pipe = match1 + \", \" + sort + \", \" + group + \", \" + match2;\nString cmd = \"{ 'aggregate': 'sample', 'pipeline': [\" + pipe + \"], 'cursor': { } }\";\n\nMongoTemplate template = new MongoTemplate(MongoClients.create(), \"test\");\nDocument result = template.executeCommand(cmd);\nSystem.out.println(result.toJson());\n", "text": "Here is a solution (I think).This uses the MongoTemplate#executeCommand method - “Execute a MongoDB command expressed as a JSON string”. This is equivalent to MongoDB shell’s database command which is run as db.runCommand( { <command> } ). The command in this case is the aggregate database aggregation command which has the following syntax:Note the three are mandatory arguments (others omitted for brevity, and are not used in this case).Now, build the pipeline from the aggregation and run the command using the Spring Data’s MongoTemplate API using the data from my previous post.", "username": "Prasad_Saya" }, { "code": "MongoRepositorypublic interface SampleRepository extends MongoRepository<Sample, Integer> {\n\n public static final String match1 = \"{ '$match':{ 'pdg': ?0, 'type': ?1, 'date':{ '$gte': ?2, '$lte': ?3 } } }\";\n public static final String sort = \"{ '$sort':{ 'pdg':1, 'date':1 } }\";\n public static final String group = \"{ '$group':{ '_id':{ 'site':'$site' }, 'wf':{ '$push':{ 'date':'$date', 'resources': '$resources', 'site':'$site' } }, 'tresources':{ '$sum':'$resources' } } }\";\n public static final String match2 = \"{ '$match':{ 'tresources':{ '$gt':0 } } }\";\n\t\n @Aggregation(pipeline = { match1, sort, group, match2 })\n List<PojoOut> aggregateBySample(String pdg, String type, String date1, String date2);\n}\nList<PojoOut> list = sampleRepository.aggregateBySample(\"PD\", \"type1\", \"2019-01-01\", \"2019-12-01\");PojoOut{\n \"_id\" : {\n \"site\" : \"T\"\n },\n \"wf\" : [\n {\n \"site\" : \"T\",\n \"date\" : \"2019-05-01\",\n \"resources\" : 2\n },\n {\n \"site\" : \"T\",\n \"date\" : \"2019-11-01\",\n \"resources\" : 1\n }\n ],\n \"tresources\" : 3,\n \"_class\" : \"com.example.demo.PojoOut\"\n}", "text": "Here is a solution using the MongoRepository API (originally you are looking for):Repository with the aggregation method:Running the aggregation:List<PojoOut> list = sampleRepository.aggregateBySample(\"PD\", \"type1\", \"2019-01-01\", \"2019-12-01\");PojoOut class is of the output type of this document:", "username": "Prasad_Saya" }, { "code": "", "text": "I am actually migrating from arango to mongo . 
I did the query in arango web client and pasted the query with argument parameters in @query the results were the same.\nIt seemed that I can skip the mongo repository and get results using Mongo template but I don’t know why the results are not the same .\nMy issue is that the output in Spring is different from the output in Compass . And you are also getting the same results , I guess there is a bug in Sping , But can you share how did you use the Java driver for that query.", "username": "Sam_Sam" }, { "code": "", "text": "Thanks this solution works for me for now.", "username": "Sam_Sam" }, { "code": "", "text": "My issue is that the output in Spring is different from the output in Compass . And you are also getting the same results , I guess there is a bug in Sping , But can you share how did you use the Java driver for that queryI had already posted the code in an earlier post. It is possible it is bug (but, I don’t know). See the one with:I tried this in a similar Java and Spring setup as yours:Input collection documents: …", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Compass pipeline export to Java not producing same results
2020-07-07T08:43:27.573Z
Compass pipeline export to Java not producing same results
7,273
https://www.mongodb.com/…f1d16a95a347.png
[ "production", "php" ]
[ { "code": "mongodbDatabase::command()composer require mongodb/mongodb^1.6.1\nmongodb", "text": "The PHP team is happy to announce that version 1.6.1 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsThis release fixes a bug where the Database::command() helper incorrectly inherited a read preference from the database.A complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=25780DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Library 1.6.1 Released
2020-07-08T06:53:39.317Z
MongoDB PHP Library 1.6.1 Released
2,215
null
[ "production", "php" ]
[ { "code": "pecl install mongodb\npecl upgrade mongodb\n", "text": "The PHP team is happy to announce that version 1.7.5 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release fixes an issue where running a generic command incorrectly inherited a read preference from the manager.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "Andreas_Braun" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.7.5 Released
2020-07-08T06:46:01.542Z
MongoDB PHP Extension 1.7.5 Released
1,981
null
[]
[ { "code": "5f01a0cb4e35982a2908728623499", "text": "Hello,I was wondering if there was a way to change the appearance of an ObjectID to display on the frontend. For example, an ObjectID looks like: 5f01a0cb4e35982a29087286. Ugly appearance on the front end, would love to have it display as a random fixed number, such as 23499.Is this possible? I’m using nodejs/mongoose.Thanks in advance ", "username": "Matthew_Fay" }, { "code": "ObjectId", "text": "Why are you displaying the ObjectId field in the front end user interface? What is the purpose?", "username": "Prasad_Saya" }, { "code": "", "text": "It’s for comments. Made the ObjectId a link to click which you could reply to. Should I not do this?", "username": "Matthew_Fay" }, { "code": "ObjectId[ ...87286][...]ObjectId", "text": "You can simply use a sub-string of the ObjectId string, for example showing the last five characters : [ ...87286] or just [...]. Does the application user identify the document using the visible ObjectId value (I think not).", "username": "Prasad_Saya" }, { "code": "_id_id_id_id", "text": "Hi Matthew,Just to add one more idea to what @Prasad_Saya already said, I think by “ObjectId” you mean the content of the _id field.Note that ObjectId is just an auto-generated value if you don’t provide an _id field in the document you wanted to insert into MongoDB. If you provide your own _id field, it will be used instead, provided that it’s unique to the collection. So if you want the _id to be meaningful, you can supply your own values.Best regards,\nKevin", "username": "kevinadi" } ]
Change ObjectID string for frontend
2020-07-06T21:58:41.705Z
Change ObjectID string for frontend
4,568
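Both suggestions in the thread, rendering only a short suffix of the generated ObjectId or supplying your own friendly _id at insert time, are easy to sketch with mongoose. The model names, field names and connection string below are invented for illustration:

    const mongoose = require('mongoose');

    const Comment = mongoose.model('Comment', new mongoose.Schema({ body: String }));

    // Supply your own numeric _id instead of letting mongoose generate an ObjectId
    const NumberedComment = mongoose.model('NumberedComment', new mongoose.Schema({
        _id: Number,                                // must stay unique in the collection
        body: String
    }));

    // Keep the ObjectId internally, show only its tail in the UI
    function shortId(doc) {
        const hex = doc._id.toString();             // e.g. "5f01a0cb4e35982a29087286"
        return '...' + hex.slice(-5);               // e.g. "...87286"
    }

    async function demo() {
        await mongoose.connect('mongodb://localhost:27017/test');   // placeholder URI

        const c = await Comment.create({ body: 'first comment' });
        console.log(shortId(c));                    // short label to use as the reply link

        await NumberedComment.create({ _id: 23499, body: 'second comment' });
        await mongoose.disconnect();
    }

    demo().catch(console.error);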
null
[]
[ { "code": "", "text": "Hi I’m a newbie to MongoDB and I’m trying to setup a shard cluster, along with 1 config server and 1 mongos. However, I found out, after changing the license of MongoDB, Fedora community decided to remove the MongoDB server entirely (starting Fedora 30). (link)Thus, I’m interested to know how my requirement could be satisfied… Any ideas are welcome…", "username": "Laksheen_Mendis" }, { "code": "", "text": "Hi,Welcome to the community. There’s a detailed instruction in the page Deploy a Sharded Cluster that will take you step by step on deploying one.Are you trying to deploy a sharded cluster for testing/educational purposes? Please note that there are many considerations on using a sharded cluster since it’s quite a bit more complex than a standard replica set setup.If you’re trying to setup a sharded cluster for production use, I would recommend you to be familiar with the whole Sharded section on the MongoDB manual, or provision a cluster using Atlas where the deployment is managed and secured for you ready to use.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for the reply… At the moment, just getting to know MongoDB and how it works (shard cluster, etc)However, I was able to install and then setup MongoDB.\nThanks again…!!", "username": "Laksheen_Mendis" } ]
Setting up a shard cluster in Fedora 31
2020-06-18T01:07:04.265Z
Setting up a shard cluster in Fedora 31
1,488
https://www.mongodb.com/…4_2_1024x512.png
[ "indexes" ]
[ { "code": "", "text": "Hi all,Q: what is the definite and reliable way in MongoDB 4.x to figure out whether a query is covered by an index ?In earlier MongoDB versions this important question was easy to answer via checking ‘indexOnly’ boolean key of the explain() dictionary.I’ve walked through different sources to answer this question and the only answer I found is:However neither the answer is unambiguous nor it sounds like be a straight forward to implement for an arbitrary query. If it plays any role, I am on pymongo.Thank you in advance for answers and ideas!\nkind regardsValery", "username": "Valery_Khamenya" }, { "code": "db.collection.explain('executionStats').find(...)mongo\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 10,\n\t\t\"executionTimeMillis\" : 0,\n\t\t\"totalKeysExamined\" : 10,\n\t\t\"totalDocsExamined\" : 10,\ntotalKeysExaminedtotalDocsExamined\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 10,\n\t\t\"executionTimeMillis\" : 0,\n\t\t\"totalKeysExamined\" : 10,\n\t\t\"totalDocsExamined\" : 0,\ntotalKeysExaminedtotalDocsExaminedtotalDocsExamined: 0totalKeysExaminedPROJECTION_COVERED\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"PROJECTION_COVERED\",\n", "text": "Hi Valery,The easiest way to determine if a query is fully covered is to use db.collection.explain('executionStats').find(...) in the mongo shell.For example, if a query is not covered:note that totalKeysExamined and totalDocsExamined are both 10. Meaning that the server must examine 10 documents to return the query.If it’s covered:Note that totalKeysExamined is 10, but totalDocsExamined is 0. Meaning that the server did not need to examine any documents, and can answer the query from index scan alone. A covered query will have totalDocsExamined: 0 with a non-zero totalKeysExamined in the explain output.Additionally, if you’re using MongoDB 4.2, you should see a PROJECTION_COVERED stage:Best regards,\nKevin", "username": "kevinadi" } ]
Easy criterion for "Covered Queries" (replacement for deprecated indexOnly)
2020-06-30T20:20:21.236Z
Easy criterion for “Covered Queries” (replacement for deprecated indexOnly)
1,912
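The criterion from the reply (non-zero totalKeysExamined with totalDocsExamined of 0, plus the PROJECTION_COVERED stage on 4.2) can be checked programmatically. A small shell sketch against a hypothetical users collection; the same fields appear in the explain document returned through a driver:

    // Sample data and an index that can cover the query below
    db.users.insertMany([
        { email: "a@example.com", age: 31, name: "A" },
        { email: "b@example.com", age: 45, name: "B" }
    ]);
    db.users.createIndex({ email: 1, age: 1 });

    // Filter and projection use only indexed fields, and _id is suppressed,
    // so the query can be answered from the index alone
    var result = db.users.find(
        { email: "a@example.com" },
        { _id: 0, email: 1, age: 1 }
    ).explain("executionStats");

    var s = result.executionStats;
    var covered = s.totalKeysExamined > 0 && s.totalDocsExamined === 0;
    print("keys examined: " + s.totalKeysExamined);
    print("docs examined: " + s.totalDocsExamined);
    print("covered query: " + covered);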
null
[]
[ { "code": "", "text": "How does the organizations get the certificates for authentication(i.e for security). Does mongodb university provides these?Thanks,\nMahesh", "username": "mahesh_e" }, { "code": "", "text": "Hi Mahesh,As far as I know, MongoDB University does not provide certificates of any kind.Could you describe what you need in more details? What certificate do you need? Are you trying to connect to the server using TLS?Best regards,\nKevin", "username": "kevinadi" } ]
Certificates in authentication (Security)
2020-07-03T22:03:24.939Z
Certificates in authentication (Security)
1,366
null
[]
[ { "code": "", "text": "Hi there! My name is Alissa Jarvis and I work with the Design team here at MongoDB. We have an exciting opportunity to join our User Panel - a community of users like you who share valuable opinions on new products and features.We are looking to create a User Experience Research Panel of current, past, or prospective MongoDB users (developers, DevOps, DBAs, etc.) who want to provide meaningful and actionable feedback on MongoDB’s products. Panelists will be among the first to see new features, provide feedback, and help make MongoDB better for users around the world.A user panel is a database of people who agree to take part in future research, that we can reach out to when we need to test out new products or services. The information is secure, and never shared outside MongoDB.To sign up, visit mongodb.com/user-research and fill out your profile. If you’re a match for an upcoming study we’ll reach out. It’s as easy as that!If you have any questions please comment below or direct message me.We look forward to working with you!", "username": "Alissa_Jarvis" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Join our MongoDB User Research Panel!
2020-07-07T19:13:38.914Z
Join our MongoDB User Research Panel!
4,009
null
[]
[ { "code": "", "text": "We are using Mongo enterprise v3.4 on-premise to store BLOB using gridfs. And exploring Atlas for migrating the data from on-prem to cloud. Does Atlas support gridFS?", "username": "harry_kabbay" }, { "code": "", "text": "Hi Harry,Yes absolutely: GridFS is useful when you want to store your binary files directly with your metadata. However for completeness, note that many folks prefer to use AWS S3/Azure Blob Storage/Google Cloud Storage for object storage in public cloud contexts, and then store metadata in MongoDB Atlas. Both are options!Cheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does Atlas support gridFS?
2020-07-07T18:40:14.532Z
Does Atlas support gridFS?
4,912
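For completeness, GridFS against Atlas is simply the driver's bucket API pointed at an Atlas connection string. A hedged Node.js sketch; the URI, database, bucket and file names are all placeholders:

    const fs = require('fs');
    const { MongoClient, GridFSBucket } = require('mongodb');

    async function main() {
        // Placeholder Atlas connection string
        const client = await MongoClient.connect('mongodb+srv://user:pass@cluster0.example.mongodb.net');
        const db = client.db('files');
        const bucket = new GridFSBucket(db, { bucketName: 'blobs' });

        // Upload: stream a local file into GridFS, attaching some metadata
        await new Promise((resolve, reject) => {
            fs.createReadStream('./report.pdf')
                .pipe(bucket.openUploadStream('report.pdf', { metadata: { owner: 'demo' } }))
                .on('finish', resolve)
                .on('error', reject);
        });

        // Download: stream the stored file back out by name
        await new Promise((resolve, reject) => {
            bucket.openDownloadStreamByName('report.pdf')
                .pipe(fs.createWriteStream('./report-copy.pdf'))
                .on('finish', resolve)
                .on('error', reject);
        });

        await client.close();
    }

    main().catch(console.error);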
null
[ "dot-net" ]
[ { "code": "[ERROR ] Unhandled exception. MongoDB.Driver.MongoCommandException: Command delete failed: not authorized on MY_HANGFIRE_DATABASE to execute command { delete: \"hangfire.migrationLock\", ordered: true, deletes: [ { q: { _id: ObjectId('5c351d07197a9bcdba4832fc') }, limit: 1 } ] }.\n[ERROR ] at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.ProcessReply(ConnectionId connectionId, ReplyMessage`1 reply)\n[ERROR ] at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.WireProtocol.CommandWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocol[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Servers.Server.ServerChannel.Command[TResult](ICoreSession session, ReadPreference readPreference, DatabaseNamespace databaseNamespace, BsonDocument command, IEnumerable`1 commandPayloads, IElementNameValidator commandValidator, BsonDocument additionalOptions, Action`1 postWriteAction, CommandResponseHandling responseHandling, IBsonSerializer`1 resultSerializer, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Operations.RetryableWriteCommandOperationBase.ExecuteAttempt(RetryableWriteContext context, Int32 attempt, Nullable`1 transactionNumber, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.Execute[TResult](IRetryableWriteOperation`1 operation, RetryableWriteContext context, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatches(RetryableWriteContext context, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.Execute(RetryableWriteContext context, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.Execute(IWriteBinding binding, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.OperationExecutor.ExecuteWriteOperation[TResult](IWriteBinding binding, IWriteOperation`1 operation, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.MongoCollectionImpl`1.ExecuteWriteOperation[TResult](IClientSessionHandle session, IWriteOperation`1 operation, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IClientSessionHandle session, IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.MongoCollectionBase`1.DeleteOne(FilterDefinition`1 filter, DeleteOptions options, Func`2 bulkWrite)\n[ERROR ] at 
MongoDB.Driver.MongoCollectionBase`1.DeleteOne(FilterDefinition`1 filter, DeleteOptions options, CancellationToken cancellationToken)\n[ERROR ] at MongoDB.Driver.MongoCollectionBase`1.DeleteOne(FilterDefinition`1 filter, CancellationToken cancellationToken)\n[ERROR ] at Hangfire.Mongo.Migration.MigrationLock.DeleteMigrationLock()\n[ERROR ] at Hangfire.Mongo.Migration.MongoMigrationManager.MigrateIfNeeded(MongoStorageOptions storageOptions, IMongoDatabase database)\n[ERROR ] at Hangfire.Mongo.MongoStorage..ctor(MongoClient mongoClient, String databaseName, MongoStorageOptions storageOptions)\n[ERROR ] at Hangfire.Mongo.MongoBootstrapperConfigurationExtensions.UseMongoStorage(IGlobalConfiguration configuration, String connectionString, MongoStorageOptions storageOptions)\nHangfire.Mongo.Migration.MigrationLock.DeleteMigrationLock()", "text": "Hellodoes anyone know what would be the possible causes for this error while starting up a dotnet core application within a container? Locally it works fine.I have a c# project (dotnet core), where my Program.cs crashes on CreateHostBuilder(args).Build().Run(); and here are the docker logs:I guess it could be related to Hangfire.Mongo.Migration.MigrationLock.DeleteMigrationLock() method, but I am not aware of Hangfire.Mongo package internals. I opened the same issue there to check if there is any insight of how to overcome this problems.Is this totally related to user permissions?Thanks", "username": "Wanderley_Junior" }, { "code": "", "text": "It seems the problem was the connection string that was not proper set in our continuous integration. It must be something likemongodb://<mongo_user_name>:@/?replicaSet=", "username": "Wanderley_Junior" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Official .NET driver for MongoDB - MongoDB.Driver.MongoCommandException on delete command
2020-07-07T10:31:53.877Z
Official .NET driver for MongoDB - MongoDB.Driver.MongoCommandException on delete command
3,961
null
[ "node-js", "next-js" ]
[ { "code": "Successfully logged in as userLocalStorage", "text": "I use the Realm NodeJS SDK and I have a working Login Form that works but somehow all I get is a response Successfully logged in as user but there are no Tokens saved in the LocalStorage also there are no Tokens in the Response so I could take them and store them by myself so how the NodeJS SDK keeps a use logged in and has the state available, can anyone explain this to me? Thanks", "username": "Ivan_Jeremic" }, { "code": "", "text": "@Ivan_Jeremic We cache the login metadata for you under the hood and take care of token refreshes behind the scene - if you are trying to run in a frontend browser app you should be using the realm-web sdk. the node.js sdk is for building server side apps", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_Ward I write the code on the Server (Next.js on the Server part). But the client needs somehow to know the status of the user how else to handle the frontend state (authenticated or not)?", "username": "Ivan_Jeremic" } ]
Realm NodeJS SDK is not setting tokens
2020-07-05T14:39:19.916Z
Realm NodeJS SDK is not setting tokens
2,130
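A sketch of what the SDK-side state looks like for a server-rendered setup like the one described above: the SDK persists the session internally, and the server can derive a login flag (or forward the user id) to its own frontend. The App ID and credentials are placeholders, and the exact token properties on the user object should be treated as an assumption rather than a documented contract:

    const Realm = require('realm');

    const app = new Realm.App({ id: 'myapp-abcde' });        // placeholder App ID

    async function logIn(email, password) {
        const credentials = Realm.Credentials.emailPassword(email, password);
        const user = await app.logIn(credentials);

        // The SDK caches the session, so app.currentUser stays set between calls
        // and token refreshes happen behind the scenes, as noted in the reply above.
        console.log('logged in as', user.id);

        // Hand the frontend only what it needs to render auth state,
        // e.g. via your own cookie/session in the Next.js response.
        return {
            loggedIn: app.currentUser !== null,
            userId: user.id
            // user.accessToken exists on recent SDK versions if the raw token is
            // really needed client-side, but treat that as an assumption to verify.
        };
    }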
null
[ "swift", "atlas-device-sync" ]
[ { "code": "", "text": "We are currently transitioning our system from Realm Cloud to MongoDB Realm. We have two components in this system. The first is a Node.js piece that is used to administer our MongoDB Cluster and initialize with a schema; this used to be a Realm instance. The second piece is an iOS app that is built on top of the MongoDB Realm application. In the past, when we used Realm Studio, we could export the model definitions to Swift - so we effectively had one source of truth as far as the schema was concerned - the Node.js Javascript schema. With MongoDB Realm, I do not see a way to export a Swift Schema. Am I missing something, or is this feature coming soon?Thanks", "username": "Richard_Krueger" }, { "code": "", "text": "@Richard_Krueger You can see it on the Realm App cloud UI under SDKs > Data Models > Swift", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_Ward you are indeed the man. Thanks for the quick turn around. The Data Models for Swift are available, but one must turn off “Development Mode” to see them. This works like a champ by clicking the “Copy All Data Models” to the clip board and pasting them into Xcode. Again I can’t say how much I think this product is a major improvement to your older Realm Cloud offering.", "username": "Richard_Krueger" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Export Swift schema definitions
2020-07-06T20:26:52.943Z
Export Swift schema definitions
1,977
null
[]
[ { "code": "", "text": "can i self host the mongodb realm servies? because my apps user are all in china , when they try to connect the mongodb alts is very slow.", "username": "Nasser_Xu" }, { "code": "", "text": "@Nasser_Xu Not at this time - MongoDB Realm must be run with MongoDB Atlas", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to self host the mongodb realm?
2020-07-07T08:43:25.906Z
How to self host the mongodb realm?
2,937
null
[ "sharding", "capacity-planning" ]
[ { "code": "", "text": "I’m planning a very large implementation of MongoDb. It will be hundreds of TB and maybe even over a PB. There is no need for global distribution, it will all reside in a single data center.My question is, would it be better to have many small shards or fewer large shards? To put it another way, should I consider hundreds of bare metal boxes, one for each RS node or would it be better to have thousands of containerized nodes?", "username": "Tim_Heikell" }, { "code": "", "text": "Hello @Tim_Heikellwithout deeper knowledge of your use case it is difficult to be precise. Since you have quite a bit of data, it most likely will be better to scale horizontal to a higher number of shards. The mongos will distribute the queries on the individual machines and only the final state will take place on the mongos or the primary shard, depending on your query. This way you will have smaller amounts of data to be processed on the indiviual machines. I am assuming here that you have a good shard key (high cardinality and frequency, non monotonic increasing) for targeted queries.I also would highly recommend to split your data on at least two, better three data centers!Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hi @michael_hoellerI’m using a hashed index on the _id for my shard key but many queries will be using indexes which do not include the _id. I expect that scaling out makes the most sense.Thanks for the advice.Tim", "username": "Tim_Heikell" }, { "code": "", "text": "I’m using a hashed index on the _id for my shard key but many queries will be using indexes which do not include the _id. I expect that scaling out makes the most sense.Hi Tim,If your common queries aren’t using the shard key they will be scatter-gather (directed to all shards in the cluster). This won’t be ideal for performance if you have hundreds of shards to query. If your goal is to efficiently scale out, it would be better to have range-based shard keys that support your common queries and allow for targeted queries. For more information see Read Operations to Sharded Clusters.I’m planning a very large implementation of MongoDb. It will be hundreds of TB and maybe even over a PB.An important consideration with capacity planning is your working set of actively used data and indexes. If your use case has a large amount of historical data that is infrequently accessed, the desired ratio of storage to RAM will be different than working with the same data set in memory. Scalability has several dimensions (cluster scale, performance scale, and data scale) which will affect your deployment decisions. For some examples of different scaling scenarios, see: MongoDB at Scale.Depending on your use case, you may also want to consider zone sharding with tiered resources for varying SLA or SLO.I’d recommend getting Professional Advice to plan a deployment of this size. There are many considerations, and an experienced consultant can provide better advice with a more holistic understanding of your requirements. Some decisions affecting scalability (such as shard key selection) are more difficult to course correct once you have a significant amount of production data.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello @Tim_Heikell\n@Stennie_X knows it all, so please go with his suggestions. Re-reading this post one thing came up in my mind: With MongoDB 4.4, currently in beta, you will have compound hashed shard keys. 
This will enable you to choose a shard key which enables targeted queries and also, by “adding” a hashed key you gain high cardinality. But, as Stennie suggested, when you build such a large setup it probably pays off fast the get professional advice on board.\nCheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hi Stennie.In my case I have different properties I need to query for and I don’t think I have any way to get around my queries being scatter-gather. Additionally, although I have a lot of data that is mostly stale, there is no way for me to know which data is stale. Any of it can become relevant at any time. The good news is that the queries will be executed by background services so ms responses aren’t critical.There are clearly too many factors for a non-expert like myself to find the best model so I think professional advice is needed.Thanks.Tim", "username": "Tim_Heikell" } ]
Many small shards or fewer large shards?
2020-07-03T13:50:00.647Z
Many small shards or fewer large shards?
3,777
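The two ideas raised in the replies, a compound hashed shard key (4.4+) so common queries stay targeted while the hashed field keeps cardinality high, and zone sharding for tiered hardware, look roughly like this in the shell. Database, collection, field, shard and zone names are all invented for the sketch:

    // Shard on a range prefix (for targeted queries) plus a hashed suffix
    sh.enableSharding("mydb");
    sh.shardCollection("mydb.events", { customerId: 1, _id: "hashed" });   // requires 4.4+

    // Queries that include the prefix field can be routed to a subset of shards
    db.getSiblingDB("mydb").events.find({ customerId: "C-1042" });

    // Zone sharding for tiered resources: tag shards, then pin key ranges to zones
    sh.addShardToZone("shard-hot-0", "hot");                               // placeholder shard name
    sh.updateZoneKeyRange(
        "mydb.events",
        { customerId: MinKey, _id: MinKey },
        { customerId: "m",    _id: MinKey },
        "hot"
    );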
null
[]
[ { "code": "", "text": "I can’t install MongoDB Enterprise Server in Win 7", "username": "Tatang_Rudi_Wicaksono" }, { "code": "", "text": "What error are you getting?\nWhat is the architecture (32bit vs 64bit)Cannot install window version MongoDB , keep loading with Installing MongoDB", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Tatang_Rudi_Wicaksono,Any update on this What error are you getting?\nWhat is the architecture (32bit vs 64bit)~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
How to install MongoDB Enterprise Server in Win 7
2020-07-04T14:09:08.155Z
How to install MongoDB Enterprise Server in Win 7
1,229
null
[]
[ { "code": "", "text": "we want to allow user to define schema and upload data to save based on that schema,\nwhat’s the best way to store the “user define schema” in mongodb, as Text ? or Json?one user can have\nvar userSchemaB = {\ntitle: String,\nbody: String\n};another user can have\nvar userSchemaA= {\nproductName: String,\nproductDescription: String\n};", "username": "Level_0_You" }, { "code": "const userId = <id>;\ndb.getCollection(`{userId}_products`).insertOne(<payload>);\n", "text": "Hello, @Level_0_You!MongoDB has the flexible schema. It means, that you can insert documents with completely different structure in your collection. But in this case, you would need to do the data validation on application side, before each write-operation (create, update).Although, MongoDB supports schema validation, but it will validate all the documents in a collection against the same schema. It means, that you can not have multiple schemas for a single collection.You can, however, use MongoDB schema validation with different schemas by creating a dedicated collection with its own schema per each user:That would work, but you can easily reach limitations collection number if your db instance is provided by some Cloud provider. However, if you have deployed MongoDB on your own servers, the limitations are much more compromising. So, use this at with extreme caution and only if you’re sure it will not create any complications in your application code.Better to think, if you can standardize the data format, coming from your users. This could make your developer’s life much easier: you could use only 1 schema for your single collection and all the validation could be done on database side.If you can not control user’s payload, then consider this 3-step solution:Define schema for each user. You can use shema-inspector shemas for schema definitions and validations, if you have Node.js on backend side. If you use other programming language - search a lib, that can support validation of documents, based on predefined schema.Persist the schemas in some user-related collection. You will need to fetch those schemas each time, when user would want to create/update the product.Validate user’s payload object before inserting or updating document:", "username": "slava" }, { "code": "", "text": "Persist the schemas in some user-related collection. You will need to fetch those schemas each time, when user would want to create/update the product.@slava thanks a lot for the response. after schema1 and schema2 is defined from the UI, how do i save it in the database before i fetch it? I tried to save it as Object Data type, but it doesnt work. 
each schema is for a different collection, so user can upload data into each collection based on the defined schema.", "username": "Level_0_You" }, { "code": "", "text": "Please, share:", "username": "slava" }, { "code": "\t\t\tvar userDefinedSchema = {\n\t\t\t\t title: String,\n\t\t\t\t body: String,\n\t\t\t\t date: \n\t\t\t\t\t{\n\t\t\t\t\t type: String,\n\t\t\t\t\t default: Date.now()\n\t\t\t\t\t}\n\t\t\t\t }; \n\n\t\t\tvar tableUploadSetup = {\n\t\t\t setupId:String,\n\t\t\t userDefinedSchemaDetails:Object\n\t\t\t };\n\n\t\t\tconst tableSetupSchema = new Schema(tableUploadSetup);\n\t\t\tconst newTableSetupModel = moogoose.model('TableSetup',tableSetupSchema);\n\n\n\t\t\tconst newTableSetupData = {\n\t\t\t setupId: 'SETUP001',\n\t\t\t userDefinedSchemaDetails:userDefinedSchema\n\t\t\t};\n\n\t\t\tconst newTableSetup = new newTableSetupModel(newTableSetupData);\n\n\t\t\tnewTableSetupModel.save((error) =>{\n\t\t\tif(error){\n\t\t\t\tconsole.log('error: '+error)\n\t\t\t } else{\n\t\t\t\tconsole.log('data is saved !!!')\n\t\t\t }\n\t\t\t })\n", "text": "This is the code to save userDefinedSchema in the database, idea is that later i can read it back from tableSetup collection and use it to save new data for the user defined collection.still new to mongodb, not sure whether this is the right approach. thanks for your reply.", "username": "Level_0_You" }, { "code": "const { createConnection } = require('./helpers');\nconst { ObjectId } = require('bson');\n\nconst inspector = require('schema-inspector');\n\n(async () => {\n // connect to your mongodb somehow\n const url = 'url-to-your-db';\n const connection = await createConnection(url);\n\n // create links to your collections for later usage\n const dbName = 'for-test';\n const usersColl = connection.db(dbName).collection('users');\n const productsColl = connection.db(dbName).collection('products');\n\n // get userId somehow\n const userId = new ObjectId();\n\n // accept user's schema definition\n // from request.body, for example\n const schemaDefinition = {\n type: 'object',\n properties: {\n productName: {\n type: 'string',\n optional: false,\n minLength: 2\n },\n productDescription: {\n optional: true\n },\n }\n };\n\n // write that definition for later usage\n await usersColl.insertOne({\n _id: userId,\n productSchema: schemaDefinition\n });\n\n // when user wants to create / update document,\n // fetch the definitions from the collection\n const user = await usersColl.findOne({ _id: userId });\n\n // get create / update payload somehow\n const payload = {\n productName: 'B',\n };\n\n // validate payload against your schema\n const validationResult = inspector.validate(user.productSchema, payload)\n\n if (!validationResult.valid) {\n console.log('errors:', validationResult.error);\n // return errors back to user\n }\n else {\n // insert or update the payload to your db\n productsColl.insertOne(payload);\n }\n\n console.log('done!');\n process.exit(0);\n\n})();\n", "text": "Ok, I must admit, schema management with mongoose is easy, but the schema can not be easily saved to db. Try schema-inspector instead. It’s schema definitions can be safely stored in DB. Here is an example of how it can be used:", "username": "slava" }, { "code": "", "text": "@slava thank you so much so much, i will try it out.", "username": "Level_0_You" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to store user defined schema and save data based on that schema
2020-07-05T05:53:04.754Z
How to store user defined schema and save data based on that schema
15,713
null
[ "data-modeling" ]
[ { "code": "", "text": "I am building a trading system.\nI have 40,000 different symbols, and 32 candles each symbol per day.\nI would like to store a month of data. and the read actions to be fast (I’m querying symbol by symbol)How do you suggest to design the DB? (collection for each symbol? collection for each day?)", "username": "11152" }, { "code": "", "text": "Can you share more details about your data and how you’re going to use it?", "username": "slava" }, { "code": "", "text": "", "username": "11152" } ]
Store 1M documents per day - Best Practices
2020-07-06T13:56:24.488Z
Store 1M documents per day - Best Practices
1,904
null
[ "dot-net", "data-modeling" ]
[ { "code": " public class Position {\n public byte[] PositionInfo = new byte[8];\n }\n public class DatePositions: Dictionary <int, Position> {}\n public class UserPositions {\n public int UserId;\n public DatePositions UserPositions;\n }\n public class Position {\n public int Date;\n public byte[] PositionInfo = new byte[8];\n }\n", "text": "I’m new to MongoDB, I’m working with .net core 2.2 and I’ve just added a library to my project to manage archives that, imho, are not kind to be managed with a sql db.The problem I would like to solve with MongoDB is (this is just an example) to archive the successive positions of many users to create something like an history of their positions.Users are identified by an ID, while the posiion is just a bunch of bytes (8) to be archived together with a date information (an int).So mainly the strucure of the collection UserPositions I want to create is this:I wonder if the creation of collections of UserPositions is the best way to use Mongo.Consider that this information is updated once a day for all the users and it’s retrieved only for a single user per query.A brilliant idea I had is to create a single collection for each user, a collection named Position_User_Id such that I can add a date/position to the collection without having to retrieve the entire UserPositions object, adding a new position and update it.\nIn this case I would have many position collections, in which the Position class is modified adding a date to the class:I apologize if this request is very basic, but after some reading (I’m not a very skilled programmer) I still wonder if I catched the philosophy of the MongoDB using.", "username": "Leonardo_Daga" }, { "code": "public class User\n{\n // The unique user identity - Mongo can provide a unique Id but if you're linking back to SQL stick to an in\n public int UserId;\n\n // The users position data as a nested object, see class below for details\n public IEnumerable<PositionHistory> UserPositions;\n}\n\n// Supporting/Partial Class for position data\npublic class PositionHistory\n{\n // The date time for this position\n public DateTime PositionDate;\n\n // Doesn't have to be a string, but basically whatever data identifies the position\n public string PositionName;\n}\n", "text": "Hi Leonardo,Generally speaking you wouldn’t create a new document for each position when using Mongo, instead opting to update the existing document with the new position and a date value of when it occurred. Much like you would if you were just recording it on paper.I’d suggest looking at changing your model slightly if possible to operate around the idea of one document per user and may positions in a document:I think you’re still editing your question so this may not be relevant shortly, but the last update appears to have you getting on to the same idea, a single collection with the data inside it MongoDb (and all/most NoSQL) databases tend to repeat data rather than try create relations between them as it’s considered that data storage is cheap compared to trying to build data in memory/cpu like SQL doesHope this helps, let me know if you want me to clarify anything or if you have further questions", "username": "Will_Blackburn" }, { "code": "", "text": "What @Will_Blackburn wrote is in line with what most will do in this situation. 
I found the following link very useful.", "username": "steevej" }, { "code": "", "text": "I do not think it is a good idea to multiply the number of collections.A brilliant idea I had is to create a single collection for each user", "username": "steevej" }, { "code": "", "text": "Infact, brilliant in this case was ironic. I’m trying to create an example on the basis of what @Will_Blackburn wrote, to be posted here, maybe it could help beginners like me ", "username": "Leonardo_Daga" }, { "code": " // get (and create if doesn't exist) a database from the mongoclient\n var db = mongo.GetDatabase(\"UserPositionsDb\");\n\n // get a collection of User (and create if it doesn't exist)\n var collection = db.GetCollection<User>(\"UserCollection\");\n\n var user = collection.AsQueryable()\n .SingleOrDefault(p => p.UserId == userID);\n\n bool newUser = false;\n if (user == null)\n {\n user = new User\n {\n UserId = userID,\n UserPositions = new List<PositionItem>()\n };\n\n newUser = true;\n }\n\n user.UserPositions.Add(positionItem);\n\n // Add the entered item to the collection\n if (newUser)\n collection.InsertOne(user);\n else\n collection.ReplaceOne(u => u.UserId == userID, user);\n", "text": "Starting from what @Will_Blackburn wrote and (I hope) what @steevej tried to suggest me through the link of his latest post, I wrote the example code that I put in the repository GitHub - LeonardoDaga/MongoDbSample: My first example using MongoDb to create an archive containing a collection of documents, each document containing a list of information.In this example, I’ve created a collection of User objects, each one identified by an ID and a List of positions, following the indications of Will. The idea is that each item of the collection (an User) has a list of positions that can be updated with new positions using the collection methods.In this specific case, I wonder if using a List (it should be equivalent to using an Enumerable, I suppose) and adding an item to the list and replacing the previous using the code that follows is the correct approach. I’ve expected to use the “UpdateOne” method, but it’s not clear for me how I should use it.The main code of the sample is the following:", "username": "Leonardo_Daga" }, { "code": "", "text": "I know nothing about .net driver.As a general comment you should try to avoid 2 round trips to the database in order to reduce latency or concurrency issue.MongoDB has the concept of upsert, where the document is updated if it exists and inserted otherwise. Exactly what you are trying to do. You may look at some information at mongodb upsert - Google SearchFor updating the array of position I think that https://docs.mongodb.com/manual/reference/operator/update/push/ will do the trick. In the shell and javascript I would do something likeTry the above and the run db.position.find().pretty() to see the result.", "username": "steevej" }, { "code": "ReplaceOne()collection.ReplaceOne(u => u.UserId == userId, user, new UpdateOptions {IsUpsert = true});\nif (newUser)// IEnumerable is the base of nearly all collections, so a List<T> is perfect here\nvar positions = new List<PositionItem>();\n\n// ... 
Generate the list of positions ...\npositions.Add(positionItem);\n\nuser = new User\n{\n UserId = userId,\n UserPositions = positions,\n};\n\n// Now upsert to the document\ncollection.ReplaceOne(u => u.UserId == userId, user, new UpdateOptions {IsUpsert = true});\n", "text": "The C# API uses the ReplaceOne() method to do the upsert, you’ll need to pass in an extra parameter to make it work though:That single line can then replace everything in the bottom if (newUser) section and allows you to remove the query for looking up if the user exists…It’s been a while since I updated a nested item so I’ll need to check if this handles it, but from memory it should give you what you need while adhering to the good advice from @steevej regarding single trips to the database", "username": "Will_Blackburn" }, { "code": " var positionItem = new PositionItem()\n {\n PositionDate = positionDay,\n Position = dataPosStr\n };\n\n var updatePositionFilter = Builders<User>.Update.Push(u => u.UserPositions, positionItem);\n\n collection.UpdateOne(u => u.UserId == userID, \n updatePositionFilter,\n new UpdateOptions { IsUpsert = true });\n", "text": "Thank you Steve for the suggestion, very helpful. It opened me the world of the operators \nI’ve translated the code to the c# equivalent and it works pretty well. In the efficiency test I added in the repository cited above I’ve found that the Upsert time is 3 to 5 (depending of the size of the array) faster than the two round trips approach.\nJust as help for beginners, I report the core of the instrucions used here:", "username": "Leonardo_Daga" }, { "code": "", "text": "Thank you Will for your reply.\nI suppose @steevej 's answer is more viable because that way I don’t need to retrieve the whole collection of positions from the user before updating it.\nThis is my typical operating condition, when I add a new position I don’t know and I don’t care if other positions are already available for the user.", "username": "Leonardo_Daga" }, { "code": "", "text": "Just to leave a note to the next beginners like me, I added another example (GitHub - LeonardoDaga/MongoDbSample: My first example using MongoDb to create an archive containing a collection of documents, each document containing a list of information, project MongoDbConsoleBulkWrite) to clarify how to use the bulk write operation (BulkWriteAsync) with the flag upsert true.\n10 to 20 time faster than previous attempt, really the best way to insert multiple information at the same time.", "username": "Leonardo_Daga" } ]
Archiving philosophy
2020-03-19T09:04:27.373Z
Archiving philosophy
2,359
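The last reply in the thread above reports that a bulk write with upsert was the fastest option, but does not show what the operation looks like on the server side. A minimal mongo shell sketch of the same idea — the collection name, field names and sample position values are assumptions taken from the C# snippets earlier in the thread:

```javascript
// One round trip that upserts a new position for several users at once.
db.UserCollection.bulkWrite([
  {
    updateOne: {
      filter: { UserId: 1 },
      update: { $push: { UserPositions: { PositionDate: new Date(), Position: BinData(0, "AAECAwQFBgc=") } } },
      upsert: true
    }
  },
  {
    updateOne: {
      filter: { UserId: 2 },
      update: { $push: { UserPositions: { PositionDate: new Date(), Position: BinData(0, "CAkKCwwNDg8=") } } },
      upsert: true
    }
  }
], { ordered: false });
```

With `upsert: true` a missing user document is created on the fly, and `ordered: false` lets the server keep processing the remaining operations if one of them fails.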
null
[]
[ { "code": "", "text": "HelloI see that Atlas isn’t allowing me to select data size of more than 4TB per shard in MongoDB Atlas.Is that a hard limit?\nIf I wanted to spread my data of 40 TB across 4 nodes, is it not possible with Atlas because of that limit?\nIf that is the case, as a workaround, can I have multiple Instances per node and spread the multiple Instances across multiple nodes? Our data is flexible and doesn’t have to reside on a single Instance.", "username": "SatyaKrishna" }, { "code": "", "text": "Hi Satya,MongoDB offers horizontal scale-out using sharding: While a single ‘Replica Set’ (aka a shard in a sharded cluster) cannot exceed 4TB of physical storage, you can use as many shards as you want in your MongoDB Atlas sharded cluster.For example, if you allocated 2TB per shard, a twenty shard cluster would have a total of 40TB of physical space (all would be redundant for high availability).By the way, MongoDB offers compression by default meaning that your logical data size can in practice greatly exceed these physical storage numbers.Cheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "@Andrew_Davidson, thank you for that information. Do you have any information on how the compression ratios will be?", "username": "SatyaKrishna" }, { "code": "", "text": "Hi @SatyaKrishna,Compression ratios depend on your data. Most data sets are highly compressible, but the default is often more than 50%. There’s an older blog post on New Compression Options in MongoDB 3.0 which is still generally applicable: ultimately you should test with a representative data set.If you need help designing a large cluster or scale-out plan with MongoDB Atlas, I’d encourage you to contact our sales team (or your Account Executive, if known). One of our experienced Solution Architects can likely provide more specific advice for your use case and growth plans.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Limits on data size?
2020-07-03T00:54:10.097Z
Limits on data size?
5,769
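As a quick way to gauge the compression discussed above on your own data, collection stats report both the logical data size and the compressed size on disk. A small mongo shell sketch (the collection name is a placeholder):

```javascript
var s = db.mycollection.stats();
// size = uncompressed (logical) bytes, storageSize = compressed bytes on disk
print("logical size  (MB): " + (s.size / 1048576).toFixed(1));
print("storage size  (MB): " + (s.storageSize / 1048576).toFixed(1));
print("compression ratio : " + (s.size / s.storageSize).toFixed(2) + "x");
```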
null
[ "replication" ]
[ { "code": "#0 0x00007efd9c853c21 in do_futex_wait () from /lib64/libpthread.so.0\n#1 0x00007efd9c853ce7 in __new_sem_wait_slow () from /lib64/libpthread.so.0\n#2 0x00007efd9c853d85 in sem_timedwait () from /lib64/libpthread.so.0\n#3 0x00005580f6d94c6c in mongo::TicketHolder::waitForTicketUntil(mongo::Date_t) ()\n#4 0x00005580f681aedc in mongo::LockerImpl<false>::_lockGlobalBegin(mongo::LockMode, mongo::Duration<std::ratio<1l, 1000l> >) ()\n#5 0x00005580f680a724 in mongo::Lock::GlobalLock::_enqueue(mongo::LockMode, unsigned int) ()\n#6 0x00005580f680a79e in mongo::Lock::GlobalLock::GlobalLock(mongo::OperationContext*, mongo::LockMode, unsigned int, mongo::Lock::GlobalLock::EnqueueOnly) ()\n#7 0x00005580f680a7e8 in mongo::Lock::GlobalLock::GlobalLock(mongo::OperationContext*, mongo::LockMode, unsigned int) ()\n2020-05-22T13:38:49.633+0000 I NETWORK [listener] connection accepted from 172.28.96.186:43437 #18988 (14484 connections now open)", "text": "We have a three node MongoDB replica set deployed in our Prod environment. The primary mongod process gets hung up after running for 12 hours or so. We are able to see too many threads (around 15,000) stuck in the same stack,Due to the sensitive nature of the db.logs they cannot be shared. The db logs had statements that showed around 14484 connections were open.2020-05-22T13:38:49.633+0000 I NETWORK [listener] connection accepted from 172.28.96.186:43437 #18988 (14484 connections now open)", "username": "Raghu_c" }, { "code": "MongoClient()", "text": "Hi,Without logs, unfortunately it’s difficult to say what went wrong.However, if you’re not using the latest 4.2 series of MongoDB, you may experience the issue described in SERVER-35770. Upgrading to the latest MongoDB 4.2 series should resolve this.I can’t say for sure if the number of connections you see is excessive or not. Are you running multiple copy of the app? Are they coded using proper connection pooling (e.g. by not calling MongoClient() multiple times during the life of the app)?Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi,Thanks for the reply.We are not using transactions in our applications. Also the number of connections is excessive because we create a single MongoClient configured with max 100 Connections in the application.The issue occurs only when using WiredTiger and in version 3.6.2. The same issue did not occur in 4.0.10.", "username": "Raghu_c" }, { "code": "", "text": "Hi Raghu,Since the issue in SERVER-35770 was fixed in MongoDB 4.0.2 and above (see the “fix version” entry in the ticket), it is plausible that you are hitting that issue.Either way I’m glad that you have this resolved. In the meantime, I would suggest you to explore the possibility of moving to the newest minor version of the 4.0 branch, which is currently 4.0.19. There may be additional issues that you haven’t experienced yet that were fixed in the latest version.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB processes get hung up when trying to acquire lock
2020-05-22T21:05:58.129Z
MongoDB processes get hung up when trying to acquire lock
3,413
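For reference, the connection-pooling advice above — construct one MongoClient and share it for the life of the app — looks roughly like this in a Node.js application. The thread does not say which driver the application uses, so the driver and option names here are assumptions based on the 4.x Node.js driver:

```javascript
const { MongoClient } = require("mongodb");

// One client, and therefore one pool of at most 100 connections, per process.
const client = new MongoClient(
  "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0",
  { maxPoolSize: 100 }
);

// Cache the connect() promise so every request handler shares the same pool
// instead of opening new connections.
const clientPromise = client.connect();

async function getDb() {
  await clientPromise;
  return client.db("appdb");
}
```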
null
[ "spark-connector" ]
[ { "code": "", "text": "Recently, I’ve started working on writing some data to MongoDB via Spark. I have developed an application which writes to MongoDB via MongoDB Spark Connector.I have used MongoSpark.save() to write Dataframe to MongoDB, however the decimal values are being written as NumberDecimal(\"\"), Eg: “Score” : NumberDecimal(“0.3916”)Instead, I would like to write as “Score” : 0.3916.On going through articles I got to know that the Decimal is being represented as NumberDecimal, but is there a way where the data can be written without NumberDecimal() represented as like “Score” : 0.3916.", "username": "Prasad_Hadoop" }, { "code": "", "text": "The quick workaround to deal with drive datatype issues is, explicit typecast the column of the dataframe in the notebook before calling save().\nex. dataframe.withColumn(field, dataframe(field).cast( DecimalType )\nPlease refer: (DecimalType (Spark 2.2.0 JavaDoc))))Thanks", "username": "Nitin_Goswami" } ]
Decimal values written as NumberDecimal("") - Spark Connector
2020-05-20T01:50:42.901Z
Decimal values written as NumberDecimal(&ldquo;&rdquo;) - Spark Connector
3,575
null
[]
[ { "code": "", "text": "Hi,We would like to optimize our MongoDB Atlas usage in order to save money. With the recent changes made by MongoDB Atlas to the Cluster Backup Storage offer, we are currently studying the different backup strategies and policies.We would like to delete snapshots that were taken in March or prior that we will never use to recover the Cluster anyway and see the effects it has on our costs.The problem is that everywhere in the documentation for the Cloud Continuous Backup, it is written that the snapshots are taken incrementally. Maybe we misunderstood the “incrementally”, but our understanding is that if there are 3 snapshots taken (A, B and C in this order), than B represents the changes that have been made on top of A and C represents the changes that have been made on top of B. But in this case, does this mean deleting snapshot B will corrupt or make snapshot C useless?Thank you for your help!", "username": "Arthur_Le_Saint" }, { "code": "", "text": "Hi @Arthur_Le_Saint, which cloud provider are you deployed on?", "username": "bencefalo" }, { "code": "", "text": "We are with currently with AWS.", "username": "Arthur_Le_Saint" }, { "code": "", "text": "Got it.Atlas uses the cloud providers native cabilities of taking snapshots along with our stack to maintain consistent backups. So when we say snapshots are incremental, this is the cloud providers definition. Here is what I mean… snap1 is a full snapshot. Snap2 and so on are incremental. Meaning you are only charged for the blocks that have changed. However how this really works under the covers is by reference blocks, not by a true incremental copy as defined by legacy backup technologies.Back to the example, snap1 is full and snap2 and 3 are incremental. You would like to delete snap2. This does not corrupt snap3. What the cloud provider does and figures out which blocks are still referenced by future snapshots, and those references are carried forward. And only the blocks that are no longer referenced in future snapshots are actually deleted.What this means is deleting snapshots doesn’t give you the cost savings you might think. This is very workload dependent.If you would like, send me a direct message and would be more than happy to take a deeper look at your atlas organization.", "username": "bencefalo" }, { "code": "", "text": "Hi @bencefalo,\nThanks a lot for your quick and very informative answer! We have now a much better idea of what “incremental” meant in the docs.\nWe will run some tests this week and delete some backups to see the impact it has on our billing.\nIf the tests don’t pan out or if we have more questions, we won’t hesitate to send you a DM to look at the problem more thoroughly.", "username": "Arthur_Le_Saint" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Deleting old backup snapshots with Cloud Continuous Backup
2020-07-03T22:03:07.542Z
Deleting old backup snapshots with Cloud Continuous Backup
1,608
null
[]
[ { "code": "", "text": "HiI’ve built a couple of realm apps from the mongo examples. Both have received an email alert saying “Sync to MongoDB has been paused for your application”. I’ve looked through the logs and can’t see anything wrong, so just looking for some guidance on how to debug this. They are just part of the free tier as I start using RealmThanks", "username": "Jonny" }, { "code": "", "text": "This is going to be an internal issue from what I understand.It’s probably not being caused by something you did. You can log into the console and check the logs to see what the error was - however, it’s usually pretty generic / unclear.I believe they are aware of this ongoing issue but you may want to file a bug report if you have access", "username": "Jay" }, { "code": "", "text": "Yeah I didn’t see any information pertaining to the error. But this makes sense, I didn’t think I could have done much as I followed two Realm tutorials and it happened in both. But good to know. My application (Task Tracker) incidentally still seemed to function correctly ", "username": "Jonny" }, { "code": "", "text": "I think pause sync and resume sync help resolve the problem. For development its fine but hope they know about this bug", "username": "Safik_Momin" }, { "code": "", "text": "Hey All - we are currently working on functionality in the UI that will give the operator the ability to know that sync is in a degraded state and allow the operator to restart the functionality. As well as file a ticket for us to investigateStay tuned", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_WardGood news. It would be good to know what would cause a degraded state from our (the developer) perspective and what we can do to prevent it / when to know to restart.Also, please note that as we are trying out this technology, there is no way for us to file a ticket - only chat is available.No Ticket1096×522 60.4 KB", "username": "Jay" }, { "code": "", "text": "A chat can also file an issue for us internally", "username": "Ian_Ward" }, { "code": "", "text": "Somewhat related @Ian_Ward any timeline on the Swift UI version of the Task Tracker app? I realize we’re in flux between iOS 13/14 currently", "username": "Jonny" }, { "code": "", "text": "Yes its here - Realm Cocoa 5.0 - Multithreading Support with Integration for SwiftUI & Combine", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sync to MongoDB paused
2020-07-01T23:02:01.917Z
Sync to MongoDB paused
3,232
null
[ "graphql" ]
[ { "code": "", "text": "I see a situations where my query works in the builtin graphiql client, but not in my app, nor using other test tools, e.g. altair client.I have narrowed it down to specific fields included in my aggregation projection.", "username": "Fred_Kufner" }, { "code": "", "text": "I found “anyOf” in json schema here. Is this supported and will it “resolve” to union in graphql?", "username": "Fred_Kufner" }, { "code": "", "text": "So anyOf looks to be supported. I cannot get it to work in my custom resolver. Trying a trivial one and get errors when saving custom resolver. Does this work?", "username": "Fred_Kufner" }, { "code": "", "text": "I see this is just under general mongodb support. I will await your response as to whether these operators (anyOf and oneOf in particular) are supported and they are properly reflected in graphql.", "username": "Fred_Kufner" }, { "code": "", "text": "Hi Fred – Currently for our schema, we require that a single type is specified in order for a GraphQL type to be generated. Therefore, anyOf is not supported with GraphQL at the moment, but this is something that we’re considering for the future. If you’d like, you can file a request in our feedback forum to better track this.", "username": "Drew_DiPalma" } ]
Custom resolver, built-in graphiql client works, external access does not
2020-07-03T12:10:55.260Z
Custom resolver, built-in graphiql client works, external access does not
2,455
null
[ "graphql" ]
[ { "code": "", "text": "Cann ot find anything on this", "username": "Fred_Kufner" }, { "code": "", "text": "Hi Fred – Currently specifying an Interface/Union within our GraphQL service is not possible, but this is something that we’re considering for the future. If you’d like, you can file a request in our feedback forum to better track this.", "username": "Drew_DiPalma" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does mongo db realm graphql support interface and union (part of graphql spec)
2020-07-05T12:46:39.768Z
Does mongo db realm graphql support interface and union (part of graphql spec)
2,041
null
[]
[ { "code": "", "text": "When restoring collections on destination database mongorestore is giving “duplicate id” error.\nShould we treat this as serious Error or just ignore it as Warning or Informational message??\nIs it due to the fact that the collections already exist.\nIn what other cases this error will creep up.", "username": "venkata_reddy" }, { "code": "movie{ _id: 1, title: \"star wars\" }\n{ _id: 2, title: \"return of jedi\" }\nmongodumpmovie_newos > mongodump --db=test --collection=movie\nos > mongorestore --db=test --collection=movie_new dump/test/movie.bson\nmovie_new_id12movie{ _id: 9, title: \"the empire strikes back\" }mongodumpmongorestoremoviemovie_newmongorestore_id: 9movie_new2020-07-03T07:38:08.519+0530 checking for collection data in dump\\test\\movie.bson\n2020-07-03T07:38:08.527+0530 restoring to existing collection test.movie_new without dropping\n2020-07-03T07:38:08.532+0530 reading metadata for test.movie_new from dump\\test\\movie.metadata.json\n2020-07-03T07:38:08.537+0530 restoring test.movie_new from dump\\test\\movie.bson\n2020-07-03T07:38:08.627+0530 continuing through error: E11000 duplicate key error collection: test.movie_new index: _id_\n dup key: { _id: 1 }\n2020-07-03T07:38:08.631+0530 continuing through error: E11000 duplicate key error collection: test.movie_new index: _id_\n dup key: { _id: 2 }\n2020-07-03T07:38:08.638+0530 restoring indexes for collection test.movie_new from metadata\n2020-07-03T07:38:08.646+0530 finished restoring test.movie_new (1 document, 2 failures)\n2020-07-03T07:38:08.648+0530 1 document(s) restored successfully. 2 document(s) failed to restore.\n2020-07-03T07:38:08.627+0530 continuing through error: E11000 duplicate key error collection: test.movie_new index: _id_ dup key: { _id: 1 }_idmovie_newmongorestoremongorestore_idmongorestore", "text": "Hello @venkata_reddy, welcome to the MongoDB Community. I will explain the “error” with an example.Lets take an example collection movie with two documents:Do a mongodump of this collection and restore it to a new collection movie_new.Now, the movie_new has two documents with _id values 1 and 2.Insert one more document into the movie collection.{ _id: 9, title: \"the empire strikes back\" }Again, do a mongodump and mongorestore from movie to movie_new. You will see the mongorestore completes with inserting only the new document with _id: 9 into the movie_new collection.During the restore process you will see messages on the console like this:In the above output, the error 2020-07-03T07:38:08.627+0530 continuing through error: E11000 duplicate key error collection: test.movie_new index: _id_ dup key: { _id: 1 } is expected, as there is already a document in the target collection with the same _id that is being restored; and the document doesn’t get inserted (see below note on inserts). The restore process continues to process remaining document(s) after logging this “error” message - it does not abort the process.The process completes inserting only one document into movie_new collection.From the MongoDB documentation: mongorestore Inserts Onlymongorestore can create a new database or add data to an existing database. However, mongorestore performs inserts only and does not perform updates. 
That is, if restoring documents to an existing database and collection and existing documents have the same value _id field as the to-be-restored documents, mongorestore will not overwrite those documents.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Prasad_Saya, Thank you for the useful information. This has clarified my confusion.", "username": "venkata_reddy" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to avoid "duplicate id" error during restoring collections using mongorestore
2020-07-02T20:32:06.161Z
How to avoid &ldquo;duplicate id&rdquo; error during restoring collections using mongorestore
27,475
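If the goal is to avoid the duplicate key messages entirely rather than just ignore them, mongorestore's --drop option removes each target collection before restoring it, so every document is inserted fresh. The database, collection and dump path below follow the earlier example in this thread:

```
mongorestore --drop --db=test --collection=movie_new dump/test/movie.bson
```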
null
[ "containers", "configuration" ]
[ { "code": "", "text": "Hello,Like we did load service auto start in rc.local file. What is the best way for autostart all the services at system reboot in Linux 18.I have multiple mongod services running at one Linux Node.", "username": "Aayushi_Mangal" }, { "code": "systemd/etc/rc.locallxccgroups", "text": "HI @Aayushi_Mangal,Please provide more details on your environment:What specific Linux distro and version are you referring to as Linux 18?What service manager are you using?Most modern Linux distros use systemd for service management and init scripts, with the older SysV-style init (/etc/rc.local) only existing for backwards compatibility. Configuring and managing services isn’t specific to MongoDB, so tutorials for managing other services for your O/S should be applicable.I have multiple mongod services running at one Linux Node.Note: running multiple MongoDB services on a single host without any sort of containers or resource management (lxc, cgroups, Docker, etc) generally isn’t recommended as all processes will be competing for the same resources.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X,Thank you so much for your refrence.we are using this:\n18.04.4 LTS (Bionic Beaver)I have started my service as:\n/usr/bin/numactl --interleave=all mongod --config I am not getting where to place this so it will start automatically, like we did earlier in rc.local file.\nAs we have multiple mongo services, how to define all those with this NUMA enabled.", "username": "Aayushi_Mangal" }, { "code": "", "text": "This is resolved:Created service file and define in that that is working on reboot. inside this parameter: ExecStart", "username": "Aayushi_Mangal" } ]
Mongod restart on system reboot Linux 18
2020-07-01T06:26:52.676Z
Mongod restart on system reboot Linux 18
2,550
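The resolution above mentions a service file with an ExecStart line but does not show it. A minimal sketch of such a systemd unit, assuming Ubuntu 18.04 — the file name, config path and user are placeholders, not values from the thread:

```
# /etc/systemd/system/mongod-27018.service  (hypothetical name)
[Unit]
Description=MongoDB instance on port 27018
After=network.target

[Service]
User=mongodb
ExecStart=/usr/bin/numactl --interleave=all /usr/bin/mongod --config /etc/mongod-27018.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload` and `systemctl enable mongod-27018`, the instance starts on every reboot; one unit file per mongod keeps multiple instances on the same host manageable.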
null
[ "aggregation" ]
[ { "code": "db.collection_001.drop();\ndb.createCollection(“collection_View_Aggregation_042”);\ndb.collection_001.insert(\n{\n “_id” : 1,\n “item” : “”,\n “price” :\n});\n\ndb.collection_001.insert(\n{\n “_id” : ,\n “item” : “”,\n “price” : ,\n “quantity” :\n});\n\ndb.collection_001.insert(\n{\n “_id” : ,\n “item” : “”,\n “price” : ,\n “quantity” : ,\n “date” : ISODate(“2014-01-01T08:15:39.736Z”)\n});\n", "text": "Hi Team,As part of sampling, if I have 10000 documents with different sampling of data. In order to find unique documents with their data type. How do I find it in compass or any MongoDB query available.Example:Increasing to 10000 documents ith different data type. In order to find unique documents from these collection is very difficult.How do i fetch unique document name along with data type? Is any shortcut available?w\nThanks & Regards,\nJayTo be precise, I have 1000 documents in 1 collection.\nDocument 1 - 10 different Field name (_id, name,dob, address,pin_code, Co-ordinates,etc.,)\nDocument 2 - Only 3 Field Name(_id, Name, Dob)\nDocument 3 - Contains 45 Field Name\nDocument 4 - Contains 150 Field Name\nDocument 5 - Contains 250 Field Name\nDocument 6 - Contains 200 Field Name\n.\n.\n.\nso on.Field names may be similar or may vary from one document to other. Same is called “Sampling” in MongoDB.If i want to find all Field name which is available in 1 Collection i have to manually verify all 1000 records. If i want to find all field with in the collection, irrespective of document, how do i find it?", "username": "Jayaprakash_Nagappan" }, { "code": "db.test1.insertMany([\n { propA: 1 },\n { propA: '1', propB: true },\n { propA: 1, propC: [1, 2, 3] }\n]);\n{\n \"_id\" : null,\n \"uniquePropNames\" : [\n \"propB\",\n \"propA\",\n \"propC\",\n \"_id\"\n ]\n}\ndb.test1.aggregate([\n {\n $addFields: {\n list: {\n $objectToArray: '$$CURRENT',\n }\n }\n },\n {\n $unwind: '$list',\n },\n {\n $group: {\n _id: null,\n uniquePropNames: {\n $addToSet: '$list.k'\n },\n }\n }\n]).pretty();\n{ \"propTypes\" : [ \"array\" ], \"propName\" : \"propC\" }\n{ \"propTypes\" : [ \"bool\" ], \"propName\" : \"propB\" }\n{ \"propTypes\" : [ \"string\", \"double\" ], \"propName\" : \"propA\" }\ndb.test1.aggregate([\n {\n $addFields: {\n list: {\n $objectToArray: '$$CURRENT',\n }\n }\n },\n {\n $unwind: '$list',\n },\n {\n $match: {\n 'list.k': {\n $ne: '_id',\n }\n }\n },\n {\n $group: {\n _id: '$list.k',\n propTypes: {\n $addToSet: {\n $type: '$list.v',\n }\n }\n }\n },\n {\n $project: {\n _id: false,\n propName: '$_id',\n propTypes: true,\n }\n }\n]).pretty();\n{\n \"documentIdsThatHoldProp\" : [\n ObjectId(\"5f02435b9cb1606d755fd1e3\")\n ],\n \"propName\" : \"propB\"\n}\n{\n \"documentIdsThatHoldProp\" : [\n ObjectId(\"5f02435b9cb1606d755fd1e2\"),\n ObjectId(\"5f02435b9cb1606d755fd1e3\"),\n ObjectId(\"5f02435b9cb1606d755fd1e4\")\n ],\n \"propName\" : \"propA\"\n}\n{\n \"documentIdsThatHoldProp\" : [\n ObjectId(\"5f02435b9cb1606d755fd1e4\")\n ],\n \"propName\" : \"propC\"\n}\ndb.test1.aggregate([\n {\n $addFields: {\n list: {\n $objectToArray: '$$CURRENT',\n }\n }\n },\n {\n $unwind: '$list',\n },\n {\n $match: {\n 'list.k': {\n $ne: '_id',\n }\n }\n },\n {\n $group: {\n _id: null,\n uniquePropNames: {\n $addToSet: {\n name: '$list.k',\n documentId: '$_id'\n }\n },\n }\n },\n {\n $unwind: '$uniquePropNames',\n },\n {\n $group: {\n _id: '$uniquePropNames.name',\n documentIdsThatHoldProp: {\n $push: '$uniquePropNames.documentId',\n }\n }\n },\n {\n $project: {\n _id: false,\n propName: '$_id',\n 
documentIdsThatHoldProp: true,\n }\n }\n]).pretty();\n{\n \"variations\" : [\n {\n \"propType\" : \"string\",\n \"documentIds\" : [\n ObjectId(\"5f02435b9cb1606d755fd1e3\")\n ]\n },\n {\n \"propType\" : \"double\",\n \"documentIds\" : [\n ObjectId(\"5f02435b9cb1606d755fd1e2\"),\n ObjectId(\"5f02435b9cb1606d755fd1e4\")\n ]\n }\n ],\n \"propName\" : \"propA\"\n}\n{\n \"variations\" : [\n {\n \"propType\" : \"array\",\n \"documentIds\" : [\n ObjectId(\"5f02435b9cb1606d755fd1e4\")\n ]\n }\n ],\n \"propName\" : \"propC\"\n}\n{\n \"variations\" : [\n {\n \"propType\" : \"bool\",\n \"documentIds\" : [\n ObjectId(\"5f02435b9cb1606d755fd1e3\")\n ]\n }\n ],\n \"propName\" : \"propB\"\n}\ndb.test1.aggregate([\n {\n $addFields: {\n list: {\n $objectToArray: '$$CURRENT',\n }\n }\n },\n {\n $unwind: '$list',\n },\n {\n $match: {\n 'list.k': {\n $ne: '_id',\n }\n }\n },\n {\n $group: {\n _id: {\n propName: '$list.k',\n propType: {\n $type: '$list.v',\n }\n },\n documentIds: {\n $push: '$_id',\n }\n }\n },\n {\n $group: {\n _id: '$_id.propName',\n variations: {\n $push: {\n propType: '$_id.propType',\n documentIds: '$documentIds'\n }\n }\n }\n },\n {\n $project: {\n _id: false,\n propName: '$_id',\n variations: true,\n }\n }\n]).pretty();\n", "text": "Hello, @Jayaprakash_Nagappan!The required result can be achieved mainly by using $objectToArray + $group stages.Keep in mind, when using the examples below, that you may reach 16MB BSON document size limit, if there is a big variety of prop names or some prop names present in the most of documents in a huge collection.For the example purpose, let’s simplify your documents structure to this:Case 1: You need to get the list of unique names of properties of all documents in your collection.Expected output:Aggregation:Case 2: You need to get the list of unique names of properties of all documents in your collection and each property should return unique list of types, it has throughout the documents per same collection - use this aggregation.Expected output:Aggregation:Case 3: You need to get the list of unique names of properties of all documents in your collection and list Ids of documents, that have this prop.Expected output:Aggregation:Case4: You need to get the list of unique names of properties of all documents in your collection and list Ids all possible types, that each unique property can have, mapped to document ids, that hold property value of a given type.Expected output:Aggregation:", "username": "slava" } ]
How to sample unique documents in a collection
2020-04-20T12:04:17.955Z
How to sample unique documents in a collection
2,469
null
[]
[ { "code": "", "text": "Hi, I’m new to this and tried to search for a way to add new admin to my database but still can’t get it done. i tried the db.CreateUser in robo 3t but it requires authentication. i tried connecting mongo shell to atlas to use the command and I failed to connect it (can’t get the error log). I just need to add new user admin so i won’t have to share the main admin credentialsthank you", "username": "salem_abdelniby" }, { "code": "", "text": "Welcome to the forum @salem_abdelniby.Database users are created through Atlas.", "username": "chris" }, { "code": "", "text": "Thank you Chris. that was really helpful", "username": "salem_abdelniby" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create (add new) user admin for a database
2020-07-05T20:45:06.416Z
Create (add new) user admin for a database
2,035
null
[ "aggregation" ]
[ { "code": "{\n \"firstscores\" : [ \n {\n \"content\" : {\n \"scores\" : [ \n \n {\n \"score_name\" : \"A\",\n \"score_value\" : 1\n }, \n {\n \"score_name\" : \"B\",\n \"score_value\" : 2\n }, \n {\n \"score_name\" : \"C\",\n \"score_value\" : 3\n }\n ]\n }\n }\n ],\n \"lastestscores\" : [ \n { \n \"content\" : {\n \"scores\" : [ \n {\n \"score_name\" : \"A\",\n \"score_value\" : 9\n }, \n {\n \"score_name\" : \"B\",\n \"score_value\" : 8\n }, \n {\n \"score_name\" : \"C\",\n \"score_value\" : 7\n }\n ]\n }\n }\n ]\n}\n\"allscores\" : [{\n \"content\" : {\n \"scores\" : [ \n {\n \"score_name\" : \"A\",\n \"first_score_value\" : 1,\n \"last_score_value\" : 9\n \"diff\": 8\n }, \n {\n \"score_name\" : \"B\",\n \"first_score_value\" : 2,\n \"last_score_value\" : 8\n \"diff\": 6\n }, \n {\n \"score_name\" : \"C\",\n \"first_score_value\" : 3,\n \"last_score_value\" : 7\n \"diff\": 4\n }\n ]\n }\n}]\n \"first_score_value\" : 1,\n \"last_score_value\" : 9\nSo far I've tried adding the latest score to the first score array , \n {$addFields: {\n 'firstscores.content.scores.lastscore_name.0': { $arrayElemAt: ['$lastscores.content.scores.score_name',0]}\n }}\n0:Object\nscore_name:\"A\"\nscore_value:1\nlastscore_name:Object\n 0:\"A\"\n 1:\"B\"\n 2:\"C\"\n {$addFields: { \n both: {$zip:{inputs: ['$firstscores.content.scores','$firstscores.content.scores.score_value','$lastscores.content.scores.score_value']}}\n}}\n, {$addFields: {\nbotha: {$arrayElemAt: ['$both',0]},\n}}, \n{$addFields: {\n firstandlastscores: {$zip:{inputs: [{ $arrayElemAt: ['$botha',0]},{ $arrayElemAt: ['$botha',1]},{ $arrayElemAt: ['$botha',2]}]}}\n }}]\n firstandlastscores:Array\n 0:Array\n 0:Object\n score_name:\"A\"\n score_value:1\n 1:1\n 2:9\n 1:Array\n 0:Object\n score_name:\"B\"\n score_value_string:2\n 1:2\n 2:8\n 2:Array\n 0:Object\n score_name:\"C\"\n score_value_string:3\n 1:3\n 2:7\n", "text": "I am trying to manipulate a dataset to make it easy to display in mongoCharts. There are two sets of scores firstscores and last scores, Each contains a set of score names and score values. I want to be able to calculate the difference between first and last values for each of A, ,B and C.Example input…Desired OutputNoteare optional/nice to haves. 
Its the diff that I’m really after.gives all the latestscore valuesAlso tried a combination of zipsbut lost the attribute names,mergeObjects overwrote the fields of the same name .I think map reduce may be the way to go but I have not managed to get any where near with that option.Any guidance or pointers gratefully received.", "username": "Neil_Albiston1" }, { "code": "score_value", "text": "Hello, @Neil_Albiston1 !\n‘allscores’, ‘firstscores’ and ‘lastestscores’ arrays always hold one object?Also, please:", "username": "slava" }, { "code": "", "text": "Yes, ‘allscores’, ‘firstscores’ and ‘lastestscores’ arrays always hold one object.I’ve reformatted as you suggested and fixed the value inconsistency.Thank you", "username": "Neil_Albiston1" }, { "code": "db.test1.aggregate([\n {\n $project: {\n merged_scores: {\n $concatArrays: [\n // order or concatenated arrays is important here\n // makes sense only if array always contain 1 single object\n {\n $arrayElemAt: ['$firstscores.content.scores', 0],\n },\n {\n $arrayElemAt: ['$lastestscores.content.scores', 0],\n },\n ],\n },\n },\n },\n {\n $unwind: '$merged_scores',\n },\n {\n $group: {\n _id: '$merged_scores.score_name',\n first_score_value: {\n $first: '$merged_scores.score_value',\n },\n last_score_value: {\n $last: '$merged_scores.score_value',\n },\n }\n },\n {\n $project: {\n _id: false,\n first_score_value: true,\n last_score_value: true,\n score_name: '$_id',\n diff: {\n // added this conversion, for the case\n // numeric values are stored as strings\n { $toDouble: '$last_score_value' },\n { $toDouble: '$first_score_value' }\n }\n }\n },\n // all the calculations are done here,\n // later stages are purposed to re-structure output document\n {\n // collect all score objects into one array\n $group: {\n _id: null,\n scores: {\n $push: '$$CURRENT',\n },\n },\n },\n {\n // remove unneeded prop\n $unset: ['_id'],\n },\n {\n // add additional props to the document structure\n $project: {\n _id: false,\n allscores: [\n {\n content: {\n scores: '$$CURRENT',\n }\n }\n ]\n }\n }\n]);\n", "text": "This aggregation should be a perfect fit for your situation.‘allscores’, ‘firstscores’ and ‘lastestscores’ arrays always hold one object.If your array always contains one value, why not convert it to an object? I can reduce some overhead in your aggregations. For example, $arrayElemAt would not be needed in the current aggregation, if you had an object instead of array.", "username": "slava" }, { "code": "db.test1.aggregate([\n {\n $project: {\n", "text": "Looks promising . Should the first part of the pipeline be an addFields ?", "username": "Neil_Albiston1" }, { "code": "", "text": "You can use $addFields instead of $projection only if you need to output initial data as well.\nBut, that will also require modifications to other stages in the pipeline.", "username": "slava" }, { "code": "", "text": "It did take me some time to knit that solution into the existing code. I had simplified the input and output,\n…but …The aggregate pipeline output is perfect. (… and you’ve saved me the task of understanding map reduce. 
)Thank you", "username": "Neil_Albiston1" }, { "code": "db.collection.aggregate([\n { \n $addFields: { \n firstscores: { $arrayElemAt: [ \"$firstscores\", 0 ] },\n lastestscores: { $arrayElemAt: [ \"$lastestscores\", 0 ] }\n } \n },\n { \n $project: { \n content: { \n $map: {\n input: \"$firstscores.content.scores\", as: \"f\",\n in: {\n $let: {\n vars: { varin: { \n $arrayElemAt: [ { $filter: {\n input: \"$lastestscores.content.scores\", as: \"n\", \n cond: { $eq: [ \"$$n.score_name\", \"$$f.score_name\" ] }\n } }, 0 ] \n } },\n in: { \n score_name: \"$$f.score_name\",\n first_score_value: \"$$f.score_value\",\n last_score_value: \"$$varin.score_value\", \n diff: { $subtract: [ \"$$varin.score_value\", \"$$f.score_value\" ] } \n }\n }\n }\n }\n }\n }\n },\n { \n $project: { allscores: [ \"$content\" ] } \n }\n]).pretty()", "text": "Hi @Neil_Albiston1, @slava,Here is just another way of getting the desired output:", "username": "Prasad_Saya" }, { "code": "", "text": "Nice solution. I really must get the hang of map. I could not work out how to pass two arrays into the function. … Now I know how.\nThanks.", "username": "Neil_Albiston1" }, { "code": "// 1. define map-function\nfunction mapFn() {\n const first = this.firstscores[0].content.scores;\n const last = this.lastestscores[0].content.scores;\n\n const result = first.map(firstScore => {\n const lastScore = last.find(item => {\n return item.score_name === firstScore.score_name;\n });\n\n const diff = lastScore.score_value ? \n lastScore.score_value - firstScore.score_value : \n firstScore.score_value;\n\n // lastScore may be undefined, that is why there are \n // some 'N/A' placeholders used down below\n return {\n score_name: firstScore.score_name,\n first_score_value: firstScore.score_value || 'N/A',\n last_score_value: lastScore.score_value || 'N/A',\n diff,\n };\n });\n\n // format the new document structure:\n const allScores = [];\n allScores.push({\n content: {\n scores: result,\n },\n });\n\n emit(this._id, { allScores });\n}\n\n// 2. define reduce-function\n// Since we do not need to group any documents,\n// and to provide the result per each document,\n// we do not need this function, \n// that is why we can leave it without any logic\n// but we need it for the .mapReduce fuction\n// as it is a required parameter\nfunction reduceFn(key, values) {\n /* no logic */\n}\n\n// 3. Call .mapReduce method on the source collection\ndb.test1.mapReduce(mapFn, reduceFn, {\n out: 'test2',\n});\n// the result will be written to 'test2' collection\n{ _id: <id>, value: <mapReduceResult> }\n[\n {\n \"_id\" : ObjectId(\"5f021ff1c87e385fe0c9bcb0\"),\n \"value\" : {\n \"allScores\" : [\n {\n \"content\" : {\n \"scores\" : [\n {\n \"score_name\" : \"A\",\n \"first_score_value\" : 1,\n \"last_score_value\" : 9,\n \"diff\" : 8\n },\n {\n \"score_name\" : \"B\",\n \"first_score_value\" : 2,\n \"last_score_value\" : 8,\n \"diff\" : 6\n },\n {\n \"score_name\" : \"C\",\n \"first_score_value\" : 3,\n \"last_score_value\" : 7,\n \"diff\" : 4\n }\n ]\n }\n }\n ]\n }\n }\n]\n", "text": "I think map reduce may be the way to go but I have not managed to get any where near with that option.MapReduce is not a good fit for this situation, because it should be used in situations, where you need to multiple emits per same key in order to do calculations in reduce stage and when it is hard to achieve the result with aggregation pipeline.Moreover, mapReduce is less performant, that aggregation pipeline. 
To get the same result for 1 document from your example it took about 0.5 seconds. This is mainly because it ran javascript code to calculate the results and because of the output to another collection. And to get that result to your application code you will need to make additional call to fetch the result from db. So, I do not recommend you to use it.Here, I did this mapReduce version of the solution to illustrate the differences in both approaches:Additionally, the output will be a bit different, as you can not exclude ‘_id’ prop (because the document will be written to a collection) and the mapReduce result will be always stored in ‘value’ prop - so the output document structure will always be:Output sample:", "username": "slava" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Merging two arrays into one and calculate the difference of each value
2020-07-01T09:17:18.527Z
Merging two arrays into one and calculate the difference of each value
7,561
null
[]
[ { "code": "0.0.0.0/0", "text": "Hello,I asked around on various corners of the web, but haven’t gotten any answers. I apologize in advance if this isn’t the place to ask.Anyway, I’ve been working on a project on my laptop (it basically allows you to upload images, write comments, etc.). Everything is up and running correctly. However, I just got a new desktop, and decided to clone the project repository from github. I have the exact code, but none of the images appear. When I try to upload an image, it’s just blank on the web app, but it is indexed in MongoDB atlas.Also, trying to view the image on the laptop also does the same thing(image doesn’t appear). If I upload an image on the laptop, I can’t see it on my desktop(just blank).Can anyone help? Also, I have 0.0.0.0/0 set for IP whitelisting.Thank you!", "username": "Matthew_Fay" }, { "code": "", "text": "Welcome to the community @Matthew_Fay!Displaying images on a web page sounds like an application issue rather than something specific to MongoDB.Can you provide more details on how you are saving and retrieving the images?I would also check your browser console (for example: Google Chrome DevTools) for more context on the error. In most browsers you can open developer tools with a Console tab by right-clicking on an empty space on the page and choosing a context action like “Inspect”.Regards,\nStennie", "username": "Stennie_X" }, { "code": "console.log()", "text": "Ok, yes, you’re correct - it is a problem with the template engine, not MongoDB. Jumped to conclusion too soon. A few console.log() calls lead me to the problem. Thank you!", "username": "Matthew_Fay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Images don't load in my application when I clone the repo on a new desktop
2020-07-04T23:30:29.446Z
Images don&rsquo;t load in my application when I clone the repo on a new desktop
2,201
null
[ "node-js", "java", "swift", "react-native", "objective-c" ]
[ { "code": "", "text": "Realm Database 6.0 is now GA, with a significant increase in performance and Frozen Objects for multithreading integration. In addition to this, we’ve updated the following SDK’s -and in conjunction with the above, Realm Studio 3.11.0 is now also released - https://www.mongodb.com/community/forums/t/realm-studio-3-11-0-is-released/4274Please see the individual release notes to learn more and let us know, in the comments, what you think!Many thanksThe MongoDB Realm Team", "username": "Shane_McAllister" }, { "code": "", "text": "First impression after upgrading on iOS was a crash on startup \nKeychain returned unexpected status code: -25299 · Issue #6494 · realm/realm-swift · GitHub. But I hope that get’s resolved soon After commenting out the Sync related code, it works and it was trivial to upgrade from the previous update.Codewize, I had to change one line of code on iOS and nothing on Android. Except for the initial crash on iOS, both apps seem to run fine!What performance improvements are we talking about? Do you have any measurements on these?Can’t wait to test out the freezed objects and to see what you have in store for the public beta of Atlas sync. I am not live with Realm Cloud and have decided to hold off on doing this until the beta.", "username": "Simon_Persson" }, { "code": "", "text": "@Simon_Persson thank you for giving it a try and reporting an issue. When it comes to performance there are few gains:We also have a blog post from a bit earlier Realm Core Database 6.0: A New Architecture and Frozen Objects | MongoDB", "username": "Sergey_Gerasimenko" }, { "code": "", "text": "I was Waiting eagerly for release of Realm Database 6.0 for dotnet. But didn’t get any updates on dotnet sdk. Does this mean dotnet is not in priority list of MongoDB Realm.", "username": "Paramjit_Singh" }, { "code": "", "text": "@Paramjit_Singh We got delayed on dotnet but are actively working on it. We hope to release it in the next week as well.", "username": "Ian_Ward" }, { "code": "", "text": "@Simon_Persson Running our highly artificial performance tests for Realm Cocoa on an iPad Air and comparing 4.x and 5.0 gives:\nCore 6 vs Core 5 runtime (3)1147×1339 144 KB\n(x axis is percentage of runtime, so the first one took 1.7% as long with core 6 as with core 5, and the last one took 120% as long)Overall we expect apps which were previously bottlenecked on our object deletion or sort performance to be a lot faster, and for most other use-cases to be a bit faster overall.", "username": "Thomas_Goyne" }, { "code": "", "text": "", "username": "Stennie_X" }, { "code": "", "text": "Looking forward to the .Net release. Still about a week away @Ian_Ward?", "username": "Dr_Charles_Roddie" }, { "code": "", "text": "@Dr_Charles_Roddie Unfortunately not, we had some staff shortages and are delayed. 
I am hoping we can get it out in June", "username": "Ian_Ward" }, { "code": "", "text": "Looking forward to the .Net release+1 on .Net SDK\nand a migration guide from Realm Cloud will be great.", "username": "Sing_Leung" }, { "code": "", "text": "Hi, is Realm 6.0 100% compatible with Realm Cloud or might there be some issues with upgrading cocoa clients from Realm Swift v 4.x ?", "username": "Duncan_Groenewald" }, { "code": "", "text": "Still hopeful for a .NET update on this!", "username": "Dean_Herbert" }, { "code": "", "text": "Hi, is there documentation somewhere that explains what these new releases are and what compatibility they have with older versions, including Realm Cloud ?I just tried opening a 4.6 database with the Realm Swift 5.1.0 release and it fails with an error “Key already used”.I also noticed in the release notes for 5.0.2 (I think it was) that this is not compatible with Realm Cloud (legacy?!).If that is the case then is there a plan for migration off realm cloud and if so what does that look like?Thanks\nDuncan", "username": "Duncan_Groenewald" }, { "code": "", "text": "Hi Duncan,\nI believe the best current docs are the release notes. If something there is unclear please let us know. But here is the gist:\nAs mentioned in this post we have updated all SDK’s (except .NET, which is lacking a little) with a new major version of the database. They all support Realm Cloud and we will continue to make important bug fixes in those, but don’t expect new features.All SDK’s that supports the new MongoDB Realm Cloud is versioned “10.0.0-beta-*”. They contain both new data types and new features.As for migration support from Realm Cloud to MongoDB Realm Cloud, we are working on that during this beta phase and will provide this as soon as we can.Let us know if anything is unclear.Thanks!\nBrian", "username": "Brian_Munkholm" }, { "code": "", "text": "Thanks for the prompt response. OK so in theory the 5.1.0 cocoa release should be compatible with an existing 4.x database and we should be able to upgrade to 5.1.0 and continue to sync with the existing Realm Cloud service.Assuming that is the case is there someone I can contact regarding an error when trying to open a 4.x database with v5.1.0 - I am seeing a “Key already used” error from Realm. Happy to send you a copy of the database itself to test if required.", "username": "Duncan_Groenewald" }, { "code": "", "text": "Yes - Cocoa v5 is fully compatible with existing Realm Cloud. So if you have any issues upgrading, please create a bug report in a github issue.Thanks!", "username": "Brian_Munkholm" }, { "code": "", "text": "Thanks, I’ve done that.", "username": "Duncan_Groenewald" }, { "code": "", "text": "Can we Dot Net developers expect any fixed date for the Realm Dot Net release.", "username": "Paramjit_Singh" }, { "code": "", "text": "FYI it seems the RealmSwift 5.2.0 build fixes that previous issue - and you will be pleased to know it all seems work perfectly on Apple Silicon to - I just used Xcode to add the RealmSwift package. Great job guys - it’s blazingly fast too.", "username": "Duncan_Groenewald" }, { "code": "", "text": "", "username": "henna.s" } ]
Realm Releases - Core 6 and multiple SDK updates
2020-05-18T16:11:41.015Z
Realm Releases - Core 6 and multiple SDK updates
8,658
null
[ "sharding" ]
[ { "code": "", "text": "I entered sh.status() but could not check the connection status of config server and shards.Can I check the connection status in command except how to check the logs?", "username": "Kim_Hakseon" }, { "code": "", "text": "Hello @Kim_Hakseoncan you please elaborate a bit more the issue? How is your setup, where did you run the sh.status(), which output do you get?Michael", "username": "michael_hoeller" } ]
Sharded Cluster Connected State Check
2020-07-04T07:46:09.793Z
Sharded Cluster Connected State Check
1,541
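For anyone looking for concrete commands, these are the usual mongo shell checks for cluster member state; which ones apply depends on whether you are connected to a mongos, a config server or a shard member:

```javascript
// On a mongos: shards, balancer state and chunk distribution
sh.status();

// On a mongos: the shards the cluster knows about, with their host strings
db.adminCommand({ listShards: 1 });

// On a config server or shard replica set member: health of every node in that set
rs.status();
```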
null
[ "indexes" ]
[ { "code": "", "text": "I have added unique key for name field after that I have drop the indexes and I am trying to create indexes I am getting duplicate error because we have duplicate documents in the collection. how to get the duplicate documents please help me to resolve this issue", "username": "Vinay_reddy_Mamedi" }, { "code": "db.test1.insertMany([\n { _id: 1, val: 'A', },\n { _id: 2, val: 'B', },\n { _id: 3, val: 'C', },\n { _id: 4, val: 'A', },\n])\ndb.test1.aggregate([\n {\n $group: {\n // collect ids of the documents, that have same value \n // for a given key ('val' prop in this case)\n _id: '$val',\n ids: {\n $push: '$_id'\n },\n // count N of duplications per key\n totalIds: {\n $sum: 1,\n }\n }\n },\n {\n $match: {\n // match only documents with duplicated value in a key\n totalIds: {\n $gt: 1,\n },\n },\n },\n {\n $project: {\n _id: false,\n documentsThatHaveDuplicatedValue: '$ids',\n }\n },\n]);\n{ \"documentsThatHaveDuplicatedValue\" : [ 1, 4 ] }\n{\n $lookup: {\n // note, you need to use same collection name here\n from: 'test1', \n localField: 'documentsThatHaveDuplicatedValue',\n foreignField: '_id',\n as: 'documentsThatHaveDuplicatedValue'\n }\n}\n{\n \"documentsThatHaveDuplicatedValue\": [\n {\n \"_id\" : 1,\n \"val\" : \"A\"\n },\n {\n \"_id\" : 4,\n \"val\" : \"A\"\n }\n ]\n}\n", "text": "Welcome to the community, @Vinay_reddy_Mamedi!Let’s assume we have this data in collection ‘test1’:Then, to find duplicates we can use this aggregation:This will output ids:It is also possible join full documents with duplicated values, if just ids is not enough for you.\nYou can do this by adding $lookup stage in the end of the pipeline:Output, after adding the $lookup stage:", "username": "slava" }, { "code": "namenameurl", "text": "@Vinay_reddy_Mamedi and @slava,This is recent post on StackOverflow.com with a similar question and an answer. From the post’s answer:Assuming a collection documents with name (using name instead of url ) field consisting duplicate values. I have two aggregations which return some output which can be used to do further processing. I hope you will find this useful.\n…", "username": "Prasad_Saya" } ]
Finding duplicate documents before creating unique index
2020-07-03T10:57:29.833Z
Finding duplicate documents before creating unique index
17,927
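Once the duplicate ids surfaced by the aggregations above have been cleaned up, the unique index from the original question can be created. A short mongo shell sketch using the same sample collection — which document to keep is an application decision; the id below is just the one from the example output:

```javascript
// Example: keep _id 1 and remove the other document that shares val "A"
db.test1.deleteMany({ _id: { $in: [4] } });

// With the duplicates gone, the unique index builds without the duplicate key error
db.test1.createIndex({ val: 1 }, { unique: true });
```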
null
[ "serverless", "next-js", "developer-hub" ]
[ { "code": "", "text": "Here is an example for how to use MongoDB with nextjs: Building Modern Applications with Next.js and MongoDB | MongoDBWhy is next-connect used here? Why share a connection with middleware? Couldn’t a module be used instead?", "username": "Roeland_Moors" }, { "code": "", "text": "Hi Roeland, you could. I’m actually making some updates to this article (will be out early next week). There’s been some pretty nice changes to Next.js since this article came out and there are a few improvements I can think of that would make it much better. ", "username": "ado" }, { "code": "import { MongoClient } from \"mongodb\";\n\nlet uri = \"YOUR-CONNECTION-STRING\";\nlet cachedDb = null;\n\nexport async function connectToDatabase() {\n if (cachedDb) {\n return cachedDb;\n }\n const client = await MongoClient.connect(uri, { useNewUrlParser: true });\n const db = await client.db(\"DB-NAME\");\n\n cachedDb = db;\n return db;\n}\nhandler.post(async (req, res) => {\n let data = req.body;\n data = JSON.parse(data);\n data.date = new Date(data.date);\n let db = await connectToDatabase();\n let doc = db\n .collection(\"daily\")\n .updateOne({ date: new Date(data.date) }, { $set: data }, { upsert: true });\n\n res.json({ message: \"ok\" });\n});\n", "text": "Hi Roeland - after looking into this some more, I think I have a more elegant solution (I talked to one of the folks over at Vercel, and they recommended this approach). It’s essentially what you suggested, no need to use middleware. You can simply update your database.js to look like this:and then in your API routes, you’d simply import the connectToDatabase method and go from there…Let me know if this helps!", "username": "ado" }, { "code": "", "text": "Yes, I did something similar.\nBut how does MongoDB handle connections?\nThe connection is not closed here and since an api route will be deployed to aws lambda, there could be many open connections.\nCould that cause problems?", "username": "Roeland_Moors" }, { "code": "", "text": "I found more about aws lambda and mongodb here:Thanks for the help!", "username": "Roeland_Moors" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Nextjs example. Why use next-connect
2020-06-15T05:28:21.527Z
Nextjs example. Why use next-connect
11,812
null
[]
[ { "code": "", "text": "iam using mongodump,mongorestore.i would like to know what kind of details from the collection are available in dump exported by mongodump command.\ndoes it export all the index created for specific fields in a collection?", "username": "Divine_Cutler" }, { "code": "mongodumpmongorestore", "text": "does it export all the index created for specific fields in a collection?Yes, mongodump does export the indexes created on the collection, and the indexes are restored with mongorestore along with the data. This is true with MongoDB v4.2.", "username": "Prasad_Saya" }, { "code": "mongorestore --db university --collection users /Users/dev/Documents/mongodb-performance-inputfiles/users.bson.gz --drop --gzipusers.metadata.jsonusers.metadata.jsonusers.metadata.json", "text": "@Prasad_Saya i don’t think it works like that for me. Below is the problem i’m facing.Collection stats before export\nimage1734×1102 98.6 KB\ni exported this collection using mongodump. below 2 files are generated\nNow i used mongorestore to import the file into another mongodb instance\nmongorestore --db university --collection users /Users/dev/Documents/mongodb-performance-inputfiles/users.bson.gz --drop --gzipAfter import i don’t see any indexes in stats.\ni haven’t done any other steps to import users.metadata.jsonimage1430×770 62.4 KBwhat am i missing here?\nshould i import users.metadata.json manually?But i read in documentation like , we don’t need to import users.metadata.json manually\nmy mongorestore logs shows no index to restore, but i have many index to restore as it is shown in my screenshots\nimage2430×370 269 KB", "username": "Divine_Cutler" }, { "code": "movie_idyearmongoshell > db.movie.find()\nshell > db.movie.getIndexes()\nos > mongodump --db=test --collection=moviedump\\testmovie.bsonmovie.metadatamongodumpos > mongorestore --db=test --collection=movie_new dump/test/movie.bsonshell > db.movie_new.find()\nshell > db.movie_new.getIndexes()", "text": "I have a collection called as movie with two documents and two indexes: the default index on _id and an index on a field year. I verified the data and indexes using the following methods from mongo shell:From the Windows OS command prompt:os > mongodump --db=test --collection=movieThis created a folder dump\\test with two files: movie.bson, and movie.metadata. The folder and files are created in the directory from where I ran the mongodump command.Again, from the OS command prompt:os > mongorestore --db=test --collection=movie_new dump/test/movie.bsonFrom the shell, I could verify:", "username": "Prasad_Saya" }, { "code": "", "text": "i have to export with gzip option as the collection size is too large. i think with gzip option this doesn’t work.", "username": "Divine_Cutler" }, { "code": "--gzip> mongodump --gzip --db=test --collection=movie\n> mongorestore --gzip --db=test --collection=movie3 dump/test/movie.bson.gz\nmovie3", "text": "I tried with --gzip option. It worked in my environment correctly.I could verify the restored movie3 collection and its indexes.", "username": "Prasad_Saya" }, { "code": "no Indexes to restore", "text": "@Divine_CutlerIf your dump was gzip the metadata should be gzip too, was it uncompressed accidentally?I can replicate no Indexes to restore if I uncompress the metadata. When I gzipped it back up mongorestore created the indexes.", "username": "chris" }, { "code": "", "text": "@chris my dump was gzipped but metadata is not gzipped. not sure why. 
maybe it's some mistake on my end.", "username": "Divine_Cutler" } ]
Does mongodump dump all the data+indexes of the collection?
2020-07-02T09:56:43.690Z
Does mongodump dump all the data+indexes of the collection?
16,103
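A quick way to confirm that indexes survived a --gzip dump and restore like the one discussed above is to compare index definitions from the mongo shell; this is only a sketch, and the database/collection names are placeholders standing in for the thread's university.users example.

// assumes something like the following was run beforehand:
//   mongodump   --gzip --db=university --collection=users
//   mongorestore --gzip --db=university_restored --collection=users dump/university/users.bson.gz
var sourceIndexes = db.getSiblingDB("university").users.getIndexes().map(function (ix) { return ix.name; });
var restoredIndexes = db.getSiblingDB("university_restored").users.getIndexes().map(function (ix) { return ix.name; });
printjson({ sourceIndexes: sourceIndexes, restoredIndexes: restoredIndexes });
// if the restored list only contains "_id_", check that users.metadata.json is still gzipped
// (users.metadata.json.gz) and sits next to the users.bson.gz file that mongorestore reads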
null
[ "spring-data-odm" ]
[ { "code": "String[] externalArray{\n\"fields_v\" : [ {\n \"value\" : [ txt ]\n} ]\n}\n", "text": "Hi there,I’m coding for a requirement using Aggregation Framework to detect whether an array as a field of a parent nested array element contains at least one common element from an external array provided through the method parameter String[] externalArray. The schema of the document is like the following:I attempted to use array (as set) intersection and check if the resulting set is empty to resolve it. But as I had looked through the document, it seems the SetIntersection operator can only handle intersections of arrays already as fields, no way to access an external array. Is there any workaround or did I miss something? Any help is much appreciated, thanks!", "username": "Alex_Tsang" }, { "code": "db.test1.insertMany([\n {\n \"_id\": 1,\n \"rootArray\" : [\n {\n \"nestedArray\" : [ 'd1', 'd2' ]\n }\n ]\n },\n {\n \"_id\": 2,\n \"rootArray\" : [\n {\n \"nestedArray\" : [ 'd1', 'd3' ]\n }\n ]\n },\n {\n \"_id\": 3,\n \"rootArray\" : [\n {\n \"nestedArray\" : []\n }\n ]\n },\n {\n \"_id\": 4,\n \"rootArray\" : []\n }\n]);\n// mongo shell example\nfunction hasInNestedArray(values) {\n return db.test1.aggregate([\n {\n $match: {\n 'rootArray.nestedArray': {\n $in: values,\n }\n }\n }\n ]).pretty();\n}\nhasInNestedArray(['d1']) // will match both documents\nhasInNestedArray(['d3']) // will match only second document (_id=2) \n", "text": "Welcome, @Alex_Tsang!$setIntersection operator is not necessary for that.\nLet’s assume, you have the following dataset:Then, you can $match documents, that have one or more values from our function arguments:Usage examples:You should better take some courses to educate yourself to use MongoDB.\nI recommend to start with the very basic one.", "username": "slava" }, { "code": "$indb.test1.insertMany([\n {\n \"_id\": 1,\n \"rootArray\" : [\n {\n \"nestedArray\" : [ 20, 23, 34 ]\n }\n ]\n },\n {\n \"_id\": 2,\n \"rootArray\" : [\n {\n \"nestedArray\" : [ 31, 34, 56 ]\n }\n ]\n },\n {\n \"_id\": 3,\n \"rootArray\" : [\n {\n \"nestedArray\" : [56, 57, 58]\n }\n ]\n },\n {\n \"_id\": 4,\n \"rootArray\" : []\n }\n]);\n// mongo shell example\nfunction hasInNestedArray(min_value, max_value) {\n\n}", "text": "Hi Slava,Thank you for your quick and neat answer, all I need is the $in operator, who can imagine it implies a secondary function for array intersection simply by its name? And thank you also for your beautifully formatted code to convey my original question for I was in a haste. Anyway, if you are still available, please allow me to re-use your pretty code,Now this time, of a second real requirement, we supply a lower bound and an upper bound, do we still have a basic solution to spot all the nested arrays containing at least one element falling into the range?", "username": "Alex_Tsang" }, { "code": "function hasInRangeInNestedArray(lowerBound, upperBound) {\n return db.test1.aggregate([\n {\n $match: {\n 'rootArray.nestedArray': {\n $gte: lowerBound,\n $lte: upperBound,\n }\n }\n }\n ]).pretty();\n}\n", "text": "This function should do the thing:The solution supports inclusive range edges only. 
If you need exclusive ones - use $gt and $lt instead.", "username": "slava" }, { "code": " {\n \"_id\": 2,\n \"rootArray\" : [\n {\n \"nestedArray\" : [ 31, 34, 56 ]\n }\n ]\n } criteria.orOperator(\t\t\t\t\t\t\t\t\n\t\twhere(\"rootArray.nestedArray\").gte(lowerBound), \n\t\twhere(\"rootArray.nestedArray\").lte(upperBound));", "text": "Hi Slava,Unfortunately I’m afraid your code won’t work as expected. According to the array comparison rules,With arrays, a less-than comparison or an ascending sort compares the smallest element of arrays, and a greater-than comparison or a descending sort compares the largest element of the arrays.Let’s take one of the above objects as an example,And I offer a lower bound of 50 and an upper bound of 100, “rootArray.0.nestedArray” is less than 100 because its maximum value 56 is less than it, but this array won’t be greater than 50 because its minimum value 31 is still less than it, so this will result in this whole array missed out from a match, despite it has an element of 56 falling into my specified range and should be accepted.Of course with Spring Data we can easily swap the $and condition with an $orBut this won’t help either if we offer a range of [80, 100], this array will be checked as a match but it actually doesn’t have any elements inside the range.Any brighter ideas?", "username": "Alex_Tsang" }, { "code": "", "text": "You will have to take a look at $elemMatch.", "username": "steevej" }, { "code": "criteria.and(\"parentArray.nestedArray\").elemMatch(\n\tnew Criteria().andOperator(\n\t\t\tnew Criteria().gte(lowerBound),\n\t\t\tnew Criteria().lte(upperBound)));", "text": "Thanks for inspiration.", "username": "Alex_Tsang" } ]
SetIntersection with an array provided in parameter during Aggregation in Spring Data MongoDB
2020-07-02T12:39:30.044Z
SetIntersection with an array provided in parameter during Aggregation in Spring Data MongoDB
4,670
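For reference, the Spring Criteria the thread settles on translates to a plain $elemMatch range query; a minimal mongo-shell sketch using the thread's field names and placeholder bounds:

var lowerBound = 50, upperBound = 100;
// matches documents where at least one element of some nestedArray lies inside the range,
// avoiding the min/max pitfall of applying $gte and $lte to the whole array
db.test1.find({
  "rootArray.nestedArray": { $elemMatch: { $gte: lowerBound, $lte: upperBound } }
});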
null
[ "sharding" ]
[ { "code": "DNSHostNotFound: Failed to look up service", "text": "Currently we use replica sets in Atlas without issue; using the mongodb+srv connection string from the ‘CONNECT’ button on the cluster page we connect nodejs applications, robo3t gui and mongo shell command line with no problems.We decided to experiment with sharding a cluster in anticipation of increasing volumes of data, so we setup a discreet cluster (non sharded) for test purposes and imported cloned data from one of our running databases. No problem so far.However once we enabled the “Shard your cluster” option with a value of 2 we can no longer connect to it. Trying to use the “mongodb+srv” value from ‘CONNECT’’ returns a DNS error:\nDNSHostNotFound: Failed to look up serviceAs I understand it we should be connecting to the mongos routers for the sharded cluster. Where are the connection details for the sharded cluster routers to be found in Atlas?", "username": "Simon_Dunn" }, { "code": "", "text": "Just a heads up for anyone else who might walk into this.The DNS error was indeed a problem with the local machine DNS resolution causing the mongodb+srv connection string to fail. Lord alone knows why this particular sharded cluster should have this problem, it is not a problem for the other half a dozen or so atlas databases we connect to. As a work-around we forced DNS to resolve against google’s 8.8.8.8 server and we can get to the database via the srv string.The second issue was with the database credentials. It uses simple username/password SCRAM mechanism and we had to create a new one to connect to the cluster after we sharded it from being a simple replica set. So it would appear that existing credentials are not applied to the cluster once it has been converted from a replica set.", "username": "Simon_Dunn" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connect to sharded cluster?
2020-07-02T09:05:40.048Z
Connect to sharded cluster?
3,540
null
[ "aggregation", "performance" ]
[ { "code": "", "text": "Been recently wrangling some huge collections with MongoDB (connection to each of the RS) and was wondering why every time I open a new connection to make an aggregation (always the same aggregation) the first one requires a minute or so to retrieve data, and subsequent aggregations take 5 seconds or so to complete.Then, if I close the connection with MongoDB secondary and straight away reconnect to the same or connect to another secondary, same thing happens. Really slow first aggregation but quicker afterwards.Needless to say that the aggregation is using an index.Why is this happening? Didn’t Mongo cache the result set in memory for subsequent calls? Not like the aggregation result but the portion of data used.", "username": "Eddy_H" }, { "code": "", "text": "I am not too sure but I suspect that the working set of one secondary does not necessaraly match the working set of another secondary especially if you manually connect and read from secondaries.", "username": "steevej" } ]
Why would the first aggregate query be so slow and the rest fast every time?
2020-07-02T20:31:53.346Z
Why would the first aggregate query be so slow and the rest fast every time?
1,894
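One way to observe the cold-versus-warm behaviour described above is simply to time the same pipeline twice on one connection; a rough mongo-shell sketch, with a placeholder collection name and pipeline:

var pipeline = [ { $match: { status: "A" } } ]; // stand-in for the real aggregation
// the first run typically pays for pulling the working set from disk into the WiredTiger cache,
// later runs are served mostly from memory
var t0 = new Date(); db.events.aggregate(pipeline).itcount(); var coldMillis = new Date() - t0;
var t1 = new Date(); db.events.aggregate(pipeline).itcount(); var warmMillis = new Date() - t1;
printjson({ coldMillis: coldMillis, warmMillis: warmMillis });
// cache activity can also be inspected via db.serverStatus().wiredTiger.cache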
null
[ "aggregation" ]
[ { "code": "db.createCollection(\n \"propertiesView\",\n {\n \"viewOn\" : \"propertiesPointers\",\n \"pipeline\" : [\n { $lookup: { from: \"DataTaxes\",\t\tlocalField: 'data.taxes',\t\tforeignField:'_id', as: 'taxes' } }\n ,{ $lookup: { from: \"DataBusinesses\",\tlocalField: 'data.businesses',\tforeignField:'_id', as: 'businesses' } }\n ,{ $lookup: { from: \"DataParking\",\t\tlocalField: 'data.parking',\t\tforeignField:'_id', as: 'parking' } }\n ,{ $lookup: { from: \"DataRestaurants\",\tlocalField: 'data.restaurants',\tforeignField:'_id', as: 'restaurants' } }\n ]\n }\n)\ntaxesdb.getCollection('propertiesView').find({\"Year\":2018, \"data.price\":4}).explain()db.getCollection('propertiesView').find({\"Year\":2018, \"data.parking\":{$size:4}}).explain()", "text": "I have a table with 500k rows. Each row may join 1:many with 20 other sources of data. Until now, I had all this information stored in a single table – it makes it very easy to search.Despite an on-disk size of 6GB, even with 32GB of ram, it can take 220+seconds to do a tablescan.I realized that I can opportunistically $lookup each data source, so that table scans are faster.I usually only query on 1-3 pieces of data, not all 20. So for matching, I can do a $lookup, apply my $matches, then do the next $lookup, etc.Can I do that automatically with a View? It does “automatic pipeline optimization” where it hoists the pre-$lookup queries to before the join, but then it just does all the $lookups before applying the $match, even though it should run that one $lookup and then the $match, which will remove most of the $rows from requiring $lookups.Is there any way to give hints? To shape my query better? I’m afraid I just have to build the query manually.Example:Pipeline as a view:If I do a match on taxes it should automatically be placed after the lookup on taxes, not at the end of the entire pipeline, which is what happens now on mongodb version v4.2.1 and latest v4.2.7Oddly, it works on normal queries, hoisting it to right after the $lookupdb.getCollection('propertiesView').find({\"Year\":2018, \"data.price\":4}).explain()But not on size - it puts it at the end, which ironically, this one it could figure out before doing the lookup (“how large is the array of IDs to lookup”)db.getCollection('propertiesView').find({\"Year\":2018, \"data.parking\":{$size:4}}).explain()But regardless, it should arrange the actual $lookup order based on which matches I’ll be doing.Is there a way to tell mongodb that?", "username": "Avi_Marcus" }, { "code": "db.your_collection.aggregate([\n {\n $lookup: {\n from: 'DataTaxes',\n let: {\n targetId: '$_id',\n },\n pipeline: [\n {\n $match: {\n $expr: {\n // filter the results of this current $lookup \n $eq: ['$_id', '$$targetId'],\n },\n },\n // ... your other stages for results of \n // this $lookup goes here ...\n },\n ],\n as: 'DataTaxes',\n },\n },\n // ... other $lookups ...\n]);\n", "text": "If you need a simple way to filter the results, received from your $lookup stages, you can open nested pipeline for each $lookup stage, like this:If the time, that you awaiting is a problem, then you can do the following:", "username": "slava" }, { "code": "$eq: ['$_id', '$$targetId'],", "text": "I don’t understand what is the $eq: ['$_id', '$$targetId'], all for? 
I’m already doing a direct ID match.What do you mean “use multiple aggregations and run them in parallel” - how do I tell mongo to do them in parallel?“if that data is so related to each other, consider to put everything, that you $lookup into base document”\nThat created a 7gb file on disk, which took several minutes to do a full table scan on… that’s why I decomposed it to merge it back as needed.", "username": "Avi_Marcus" }, { "code": "$eq: ['$_id', '$targetId'],let: {\n targetId: '$_id',\n},\nconst pipeline1 = [\n {\n $match: {},\n },\n // ... more stages\n];\n\nconst pipeline2 = [\n {\n $match: {},\n },\n // ... more stages\n];\n\nconst result1 = db.your_collection.aggregate(pipeline1).toArray();\nconst result2 = db.your_collection.aggregate(pipeline1).toArray();\n\nconst arrayOfResults = await Promise.all(result1, result2);\n", "text": "I don’t understand what is the $eq: ['$_id', '$targetId'], all for?The ‘let’ statement below declares new variable ‘targetId’ and assigns ‘_id’ of each document, that we have in the ‘your_collection’.We need this statement, so we can use ‘_id’ in the nested pipeline (see example in previous message).I’m already doing a direct ID match.Can you share that part of aggregation, where you ‘doing a direct ID match’?What do you mean “use multiple aggregations and run them in parallel” - how do I tell mongo to do them in parallel?I am using MongoDB driver for Node.js. In Node.js environment it is possible to do things in parallel, using ES6 Promises.\nExample:So, two aggregations will be executed in parallel, not synchronously.\nCheck, maybe your language has support for something similar.", "username": "slava" }, { "code": "{$lookup: { from: 'DataTaxes',localField: 'taxes', foreignField:'_id', as: 'taxes'} }", "text": "Oh now I understand. I was using a simple 1:1 lookup:\n{$lookup: { from: 'DataTaxes',localField: 'taxes', foreignField:'_id', as: 'taxes'} } and I had assumed I could add a pipeline to that to further limit the results, e.g. when using an array. But nope, you showed me the recipe.But the ID matching with $eq seems really slow – it’s doing a full table scan instead of using the index for IDs, I’m guessing. Any way around that? I’d love to use the same $match syntax I’ll be using elsewhere…I settled with doing the lookup and then doing a $filter (inside an $addFields) then an $arrayElemAt to just get the one.(I’m using nodejs too. Really, if I call a long list of $lookups it will do it in order? But if I access it as a cursor and load them up at the same level, it will happen at the same time? How would I check that?\nI don’t think that’s my issue though, the issue is generally with doing full table scans…)", "username": "Avi_Marcus" }, { "code": "", "text": "I’d really love to ADD a filter to the lookup step, thereby merging far fewer documents.It seems you can only do it after the lookup is finished.", "username": "Avi_Marcus" }, { "code": "", "text": "May be you want to look at https://docs.mongodb.com/manual/reference/operator/aggregation/graphLookup/ and pay attention to restrictSearchWithMatch.", "username": "steevej" } ]
Smarter pipeline optimization with lots of $lookups?
2020-06-15T20:37:12.126Z
Smarter pipeline optimization with lots of $lookups?
5,320
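A sketch of the manual ordering discussed in the thread — filtering inside one $lookup via its pipeline form, then discarding non-matching documents before the remaining lookups run. The "rate" condition is purely illustrative and not from the original schema:

db.propertiesPointers.aggregate([
  { $match: { Year: 2018 } }, // cheap filters on the base collection first
  { $lookup: {
      from: "DataTaxes",
      let: { taxIds: { $ifNull: ["$data.taxes", []] } },
      pipeline: [
        { $match: { $expr: { $in: ["$_id", "$$taxIds"] } } },
        { $match: { rate: { $gte: 4 } } } // illustrative filter applied inside the join
      ],
      as: "taxes"
  } },
  { $match: { "taxes.0": { $exists: true } } }, // drop documents with no matching taxes before further joins
  { $lookup: { from: "DataParking", localField: "data.parking", foreignField: "_id", as: "parking" } }
]);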
null
[ "mongodb-live-2020" ]
[ { "code": "", "text": "If you want to watch any of the sessions from MongoDB.live they’re all available here:", "username": "Naomi_Pentrel" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Session Playlists from MongoDB.live
2020-07-03T09:31:23.223Z
Session Playlists from MongoDB.live
3,251
null
[ "aggregation" ]
[ { "code": " // COMPANY\n {\n \"name\": \"Note Inc\",\n \"shortName\": \"The Note\",\n \"members\": [ \n {\n \"_id\": ObjectId(\"5efdac62cd210f3b658d98f6\"),\n \"status\": false,\n \"role\": \"chairman\",\n },\n {\n \"_id\": ObjectId(\"5effac62dd210f9b658d98f1\"),\n \"status\": true,\n \"role\": \"admin\",\n }\n ]\n }\n // COMPANY after aggregate the Person\n {\n \"name\": \"Note Inc\",\n \"shortName\": \"The Note\",\n \"members\": [ \n {\n \"_id\": ObjectId(\"5efdac62cd210f3b658d98f6\"),\n \"name\": \"Shinta\",\n \"gender\": \"female\"\n \"status\": false,\n \"role\": \"chairman\",\n },\n {\n \"_id\": ObjectId(\"5effac62dd210f9b658d98f1\"),\n \"name\": \"John\",\n \"gender\": \"male\"\n \"status\": true,\n \"role\": \"admin\",\n }\n ]\n }\n", "text": "I get difficult to add key into the aggregate from array objecthere is the data which I want to aggregateand the result I wantin my case here I am using Go, but I need the vanilla Mongo Query for this so I can translate into Go query pipeline", "username": "Virtual_Database" }, { "code": "", "text": "Ok, in the output you want to have 2 additional fields: ‘name’ and ‘gender’.\nWhere are you going to take that data from? Is it from sibling collection? Provide a document example, if the answer is ‘yes’.", "username": "slava" }, { "code": "{\n \"_id\": ObjectId(\"5effac62dd210f9b658d98f1\"),\n \"email\": \"[email protected]\",\n \"phoneNumber\": \"\",\n \"firstName\": \"John\",\n \"gender\": \"male\"\n}\n", "text": "yess that output was from the Person ID , so the members._id is trigger to Person ID ( reference )\nhere is the document Person", "username": "Virtual_Database" }, { "code": "db.members.aggregate([\n {\n $addFields: {\n ids: {\n $map: {\n input: '$members',\n in: '$$this._id',\n }\n }\n }\n },\n {\n // $lookup will be executed once per member document\n $lookup: {\n from: 'persons',\n localField: 'ids',\n foreignField: '_id',\n as: 'persons',\n }\n },\n // cleanup\n {\n $unset: ['ids'],\n },\n {\n $addFields: {\n mixed: {\n // order of array here matters for $group stage\n $concatArrays: ['$members', '$persons']\n }\n }\n },\n {\n $unwind: '$mixed',\n },\n {\n $group: {\n _id: {\n _id: '$mixed._id',\n rootId: '$_id',\n name: '$name',\n shortName: '$shortName',\n },\n // down below:\n // - use $first to read the prop from 'person' document\n // - use $last to read the prop from 'members' document\n name: {\n $last: '$mixed.firstName',\n },\n gender: {\n $last: '$mixed.gender',\n },\n status: {\n $first: '$mixed.status',\n },\n role: {\n $first: '$mixed.role',\n }\n }\n },\n {\n $group: {\n _id: '$_id.rootId',\n name: {\n $first: '$_id.name',\n },\n shortName: {\n $first: '$_id.shortName',\n },\n members: {\n $push: {\n _id: '$_id._id',\n name: '$name',\n gender: '$gender',\n status: '$status',\n role: '$role',\n }\n }\n }\n },\n // cleanup\n {\n $unset: '_id',\n }\n]);\ndb.members.insertMany([\n {\n \"name\": \"Note Inc A\",\n \"shortName\": \"The Note A\",\n \"members\": [\n {\n \"_id\": ObjectId(\"5effac62dd210f9b658d98f1\"),\n \"status\": false,\n \"role\": \"chairman\",\n },\n {\n \"_id\": ObjectId(\"5efdac62cd210f3b658d98f6\"),\n \"status\": true,\n \"role\": \"admin\",\n }\n ]\n },\n {\n \"name\": \"Note Inc B\",\n \"shortName\": \"The Note B\",\n \"members\": [\n {\n \"_id\": ObjectId(\"5efdac62cd210f3b65f68d98\"),\n \"status\": false,\n \"role\": \"hr\",\n },\n ]\n }\n]);\n\ndb.persons.insertMany([\n {\n \"_id\": ObjectId(\"5effac62dd210f9b658d98f1\"),\n \"firstName\": \"John\",\n \"gender\": \"male\",\n },\n {\n \"_id\": 
ObjectId(\"5efdac62cd210f3b658d98f6\"),\n \"firstName\": \"Shinta\",\n \"gender\": \"female\",\n },\n {\n \"_id\": ObjectId(\"5efdac62cd210f3b65f68d98\"),\n \"firstName\": \"Sara\",\n \"gender\": \"female\",\n },\n]);\n", "text": "This aggregation should give you what you need:Tested on this dataset:", "username": "slava" }, { "code": "", "text": "I updated the document", "username": "Virtual_Database" }, { "code": "", "text": "is this is can’t be done by $project ?", "username": "Virtual_Database" }, { "code": "db.members.aggregate([\n {\n $unwind: {\n path: '$members',\n preserveNullAndEmptyArrays: true,\n }\n },\n {\n $lookup: {\n from: 'persons',\n localField: 'members._id',\n foreignField: '_id',\n as: 'person',\n }\n },\n {\n $unwind: '$person',\n },\n {\n $addFields: {\n 'members.name': '$person.firstName',\n 'members.gender': '$person.gender',\n },\n },\n {\n $group: {\n _id: '$_id',\n name: {\n $first: '$name',\n },\n shortName: {\n $first: '$name',\n },\n members: {\n $push: '$members',\n }\n }\n }\n]);\n", "text": "Simply using $project is not enough.Here is the minimal aggregation, that you can have to get what you need:But this one will probably be less performant, as it will make $lookup per each item in ‘members’ array.", "username": "slava" }, { "code": "db.company.aggregate([ \n { \n $lookup: { \n from: \"person\", \n localField: \"members._id\", \n foreignField: \"_id\", \n as: \"company_persons\" \n } \n }, \n { \n $addFields: { \n company_persons: 0,\n members: { \n $map: { \n input: \"$members\", as: \"mem\", \n in: {\n $let: {\n vars: { \n varin: { \n $arrayElemAt: [ { $filter: {\n input: \"$company_persons\", as: \"per\", \n cond: { $eq: [ \"$$per._id\", \"$$mem._id\" ] }\n } }, 0 ] \n } \n },\n in: { \n $mergeObjects: [ \"$$mem\", { firstName: \"$$varin.firstName\", gender: \"$$varin.gender\" } ]\n }\n }\n }\n }\n }\n }\n }\n]).pretty()", "text": "@slava, @Virtual_Database, I tried this aggregation. Let me know how it works for you.", "username": "Prasad_Saya" }, { "code": "", "text": "the latest minimal query was work well, what different ? 
will the latest minimal query gonna has affect ??", "username": "Virtual_Database" }, { "code": "(AtlasError) _id is not allowed in this atlas tier", "text": "the previous query u show was , I got : (AtlasError) _id is not allowed in this atlas tier ", "username": "Virtual_Database" }, { "code": "db.company.aggregate([ \n { \n $lookup: { \n from: \"person\", \n localField: \"members._id\", \n foreignField: \"_id\", \n as: \"company_persons\" \n } \n }, \n { \n $addFields: { \n company_persons: 0,\n members: { \n $map: { \n input: \"$members\", as: \"mem\", \n in: {\n $let: {\n vars: { \n varin: { \n $arrayElemAt: [ { $filter: {\n input: \"$company_persons\", as: \"per\", \n cond: { $eq: [ \"$per._id\", \"$mem._id\" ] }\n } }, 0 ] \n } \n },\n in: { \n $mergeObjects: [ \"$mem\", { firstName: \"$varin.firstName\", gender: \"$varin.gender\" } ]\n }\n }\n }\n }\n }\n }\n }\n]).pretty()\n", "text": "this one is worked also, btw, can I just filter one person id ?like say I want just filter person where ID = ObjectId(“PERSON_ID”) so I dont want to show all members on the list", "username": "Virtual_Database" }, { "code": "var INPUT_PERSON = ObjectId(\"5efdac62cd210f3b658d98f6\") // or ObjectId(\"5effac62dd210f9b658d98f1\")\n\ndb.company.aggregate( [ \n { \n $lookup: { \n from: \"person\", \n localField: \"members._id\", \n foreignField: \"_id\", \n as: \"company_persons\" \n } \n }, \n { \n $addFields: { \n members: { \n $reduce: { \n input: \"$members\", initialValue: [ ],\n in: {\n $let: {\n vars: { \n match: { \n $arrayElemAt: [ { $filter: {\n input: \"$company_persons\", as: \"per\", \n cond: { $eq: [ \"$$per._id\", \"$$this._id\" ] }\n } }, 0 ] \n } \n },\n in: { \n $cond: [ { $eq: [ INPUT_PERSON, \"$$this._id\" ] }, \n [ { $mergeObjects: [ \"$$this\", { firstName: \"$$match.firstName\", gender: \"$$match.gender\" } ] } ],\n \"$$value\"\n ]\n }\n }\n }\n }\n }\n }\n },\n { \n $project: { company_persons: 0 } \n }\n] ).pretty()", "text": "can I just filter one person id ?Yes, you can (with the same aggregation and some changes):", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to include another key in array object into array aggregate
2020-07-02T11:18:50.912Z
How to include another key in array object into array aggregate
13,892
null
[]
[ { "code": "", "text": "Hi team,can anyone help me to find a best solution for the following performance related problem.?i have collection with fields\n_id\nsensorId\ndateTime(including timestamp)\nvalue1\nvalue2this collection has millions of data and also its increasing day by day. The query execution time is increasing day by day. Most of the time(more that 90%) i called this collection with sensorId and dateTime(with $gt or $lt or both). So, i think it should be better to create a compound index with sensorId and dateTime. Really its gives good result. but, the index size is increased dramatically. So, is it a good method? can anyone make good suggestion on this?", "username": "Renjith_S_P" }, { "code": "db.collection.explain('executionStats').find(...)", "text": "Hi Renjith,Welcome to the community. As I understand it, you’re facing query performance issues, tried to solve it using a compound index, but you are currently concerned by the index sizes. Is this accurate?It would help if you can provide more context:In general, having indexes to back your queries is a recommended practice in all databases, not just MongoDB. See Create Indexes to Support Your Queries for some examples.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank Kevin for your reply.Yes, you are right. My problem was that.This is our test database result. same as original. Right now we have 360K documents in our data. Using this index i can fetch data correctly with good performance. but issue will be in future within months. 100X more data are coming. So, the index size will increase dramatically.\nScreen Shot 2020-06-28 at 11.39.17 AM1663×765 78.8 KBAll my documents are in same structure. Sample one\n{\n“_id”:{\"$oid\":“5e813859eea99fb8a1cdcd1a”},\n“sensorId”:“e00fce685dc2dae3d35b2054”,\n“dateTime”:{\"$date\":“2020-03-30T00:07:53.000Z”},\n“pH”:3.928,\n“temperature”:14.9,\n“capTemperature”:15,\n“sg”:1.195\n}Please take a look on the image. As i said my concern is increase of index size. Now our DB has less data but by next it will increase up to 100X and will increase dramatically after that. As per my calculation, if i use this compound index, index size of the particular compound index will be 20% of the actual data.Version 4.0.19, M20 ClusterThanks in advance", "username": "Renjith_S_P" }, { "code": "", "text": "Output of question number 3Screen Shot 2020-06-28 at 11.41.08 AM1025×345 39.5 KB", "username": "Renjith_S_P" }, { "code": "_idsensorId", "text": "Hi Renjith,The index you have is optimal for your query, so if your collection gets much larger, the query still should return relatively quick (with caveat of data size vs. hardware capabilities, of course).I see you have two indexes: the mandatory _id index and the compound index. The compound index itself is 3MB, which is a bit less than 10% of the collection size.As with most things in computing, it is a balancing act. If you feel that returning queries quickly in a large collection is worth the space tradeoff, then I encourage you to keep the compound index. Case in point, your first screenshot shows that in a collection of 268k documents, the query returned in just 5ms. For most use cases, this space vs. 
speed tradeoff is worth it.However, if you feel that you can tolerate much slower query response times and that space is much more important to you, then dropping the index is an option.Another option is to create an index only on sensorId, so your query won't be totally unindexed.All three options have advantages & disadvantages, so it's up to you to choose which option is “best” for your use case.Having said that, be aware that as your collection gets larger, scanning unnecessary documents to return a query will have other side effects as well, such as pushing your working set out of memory, which could make everything slower (since irrelevant documents are constantly being paged into memory to be examined, only to be thrown out again).Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Great. Thanks for the reply Kevin", "username": "Renjith_S_P" }, { "code": "", "text": "Hello @Renjith_S_P and @kevinadi, how about a partial index, in case only a subset of the dates is needed? Actually, in combination with dates I have never used a partial index. That probably only makes sense when you e.g. drop the index every X months and rebuild with $gt(today - x months). However, dropping and creating an index would need to be planned.Regards,\nMichael", "username": "michael_hoeller" } ]
Index on dateTime
2020-06-24T06:22:03.792Z
Index on dateTime
22,648
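To make the options above concrete, here is a mongo-shell sketch of the compound index plus a partial variant along the lines Michael suggests; the collection name and the cutoff date are assumptions:

// Option A: full compound index backing { sensorId, dateTime } range queries
db.readings.createIndex({ sensorId: 1, dateTime: 1 });

// Option B (instead of A): only index recent documents, if older data is rarely queried;
// the cutoff has to be maintained, e.g. by periodically dropping and recreating the index
db.readings.createIndex(
  { sensorId: 1, dateTime: 1 },
  { partialFilterExpression: { dateTime: { $gte: ISODate("2020-01-01T00:00:00Z") } } }
);
// note: queries must constrain dateTime to >= the cutoff for the planner to consider the partial index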
null
[ "dot-net", "change-streams" ]
[ { "code": "", "text": "Hi, is it possible to get the identifier of the user who changed the record using Change streams? How to get userid from a given string BackingDocument = {{ “txnNumber” : NumberLong(1), “lsid” : { “id” : CSUUID(“69de8063-6056-436a-a34f-4477535e79aa”), “uid” : new BinData(0, “KJ597HGXhcWn9yDeXiCOgCLrzOMR7Hpfnh3wu16W8DE=”) }, “_id” : { “_data” : \"825EF58337000000012B022C0100296E5A10041FCA10F429D1401496F906B747…", "username": "Valeriy_Filippenkov" }, { "code": "", "text": "Hi @Valeriy_Filippenkov, and welcome to the forum!is it possible to get the identifier of the user who changed the record using Change streams?Depending on your use case, you can include the user who performs the update as part of the document update operation. This should be reflected in the update delta of the change events. See also Lookup Full Document for Update Operations.If the database deployment is on Atlas, depending on the use case, you could set up Auditing. See also Set up Database Auditing.Regards,\nWan", "username": "wan" }, { "code": "", "text": "thank you but I need to get a user UUID from this line «uid»: новый BinData (0, «KJ597HGXhcWn9yDeXiCOgCLrzOMR7Hpfnh3wu16W8DE =»)}", "username": "Valeriy_Filippenkov" }, { "code": "UUIDCSUUIDuid", "text": "Hi @Valeriy_Filippenkov,I need to get a user UUID from this lineI don’t think the question is about MongoDB Change Streams, this is about how to deserialise a binary data in .NET/C#.How did you insert or generate this binary data ? Could you please elaborate more on the environment ? Generally a UUID object will show up as CSUUID. Depending on what the object instance of the binary data is for uid field, you need to find out how to deserialise it.Regards,\nWan.", "username": "wan" } ]
.NET change streams
2020-06-26T05:57:19.856Z
.NET change streams
2,224
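A mongo-shell sketch of the approach suggested above — recording the acting user as part of each update and reading it back from change events with a post-image lookup. Collection and field names are assumptions, and change streams require a replica set:

// the application records who performed the write
db.orders.updateOne(
  { _id: 1 },
  { $set: { status: "shipped", lastModifiedBy: "user-42" } }
);

// a watcher asks for the full post-image and reads the field from each event
var cs = db.orders.watch([], { fullDocument: "updateLookup" });
while (cs.hasNext()) {            // hasNext() blocks until a change arrives
  var event = cs.next();
  print("changed by: " + (event.fullDocument ? event.fullDocument.lastModifiedBy : "unknown"));
}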
null
[ "java" ]
[ { "code": "", "text": "I am trying to connect with Atlas MongoDB using X509 Certificate from Java. Before 3.6 Version of the Driver there used to be an option to use socketfactory in which we can programmatically get the certificate on the fly and put it in socketfactory.MongoClientOptions.Builder optionBuilder = new MongoClientOptions.Builder();\noptionBuilder.sslEnabled(true);\noptionBuilder.socketFactory(context.getSocketFactory());Now in 4.0 Version of Java Driver I don’t see a option to set SocketFactory in MongoClientSettingsCan you please provide some help.", "username": "Rajan_Shah" }, { "code": "", "text": "You should be able to achieve the same effect with com.mongodb.MongoClientOptions.Builder#sslContext, which allows your application to set the SSLContext that is used by the driver to get the SocketFactory.", "username": "Jeffrey_Yemin" } ]
Java authentication with X509 certificate
2020-07-02T08:48:51.379Z
Java authentication with X509 certificate
2,701
null
[ "database-tools" ]
[ { "code": "", "text": "Hi,I am trying to build mongo-tools from source code. I got errors regarding golang settingscd /opt/mongo-tools-r4.2.3\nsudo ./build.sh\nCurrent path ‘/opt/mongo-tools-r4.2.3’ doesn’t resemble a GOPATH-style path. Aborting.$which go\n/usr/bin/go$go env\nGOARCH=“amd64”\nGOBIN=\"\"\nGOEXE=\"\"\nGOHOSTARCH=“amd64”\nGOHOSTOS=“linux”\nGOOS=“linux”\nGOPATH=\"/usr/bin/go\"\nGORACE=\"\"\nGOROOT=\"/usr/lib/golang\"\nGOTOOLDIR=\"/usr/lib/golang/pkg/tool/linux_amd64\"\nGCCGO=\"/usr/bin/gccgo\"\nCC=“gcc”\nGOGCCFLAGS=\"-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build916680123=/tmp/go-build -gno-record-gcc-switches\"\nCXX=“g++”\nCGO_ENABLED=“1”\nCGO_CFLAGS=\"-g -O2\"\nCGO_CPPFLAGS=\"\"\nCGO_CXXFLAGS=\"-g -O2\"\nCGO_FFLAGS=\"-g -O2\"\nCGO_LDFLAGS=\"-g -O2\"\nPKG_CONFIG=“pkg-config”Thanks,\nSatya", "username": "satya_dommeti" }, { "code": "", "text": "Hi Satya,The tools need to be placed inside your Go workspace. The go workspace is a directory with a specific structure where Go expects you to put your source code. $GOPATH should point to this workspace directory, not to the Go binary. You can read more about $GOPATH here: go command - cmd/go - Go PackagesPeople usually set $GOPATH to $HOME/go. Then you must clone the tools into $GOPATH/src/github.com/mongodb/mongo-tools.These are all Go conventions. We’re working on updating the readme to make this a bit clearer. We’ll also soon be migrating to go modules, which will allow you to keep the tools source code outside of the Go workspace if you wish.", "username": "Tim_Fogarty" }, { "code": "", "text": "By the way, is there a particular reason you’re building the tools yourself rather than downloading the pre-built binaries? I work on the tools, so I’m always interested in learning more about how people are using the tools.", "username": "Tim_Fogarty" }, { "code": "", "text": "Hi Tim,Thank you for your reply. Its working fineBut I am getting the below error when I run build.sh\n$ sudo ./build.sh\nBuilding bsondump…\nimport cycle not allowed\npackage main\nimports github.com/mongodb/mongo-tools-common/log\nimports fmt\nimports errors\nimports runtime\nimports runtime/internal/atomic\nimports unsafe\nimports runtime\nimport cycle not allowed\npackage main\nimports github.com/mongodb/mongo-tools-common/signals\nimports github.com/mongodb/mongo-tools-common/util\nimports go.mongodb.org/mongo-driver/mongo\nimports crypto/tls\nimports crypto/x509\nimports net\nimports runtime/cgo\nimports runtime/cgo\nError building bsondumpMongo tools are placed in below path\n/home/mongoadm/go/src/github.com/mongodb/mongo-toolsGOPATH=“/home/mongoadm/go”Please help me in this regard.Regards,\nSatya", "username": "satya_dommeti" }, { "code": "", "text": "Hi Tim,We are building tools for security purposes and client wants to use source version in production.Please let me know if there is any issues with source code.Regards,\nSatya", "username": "satya_dommeti" }, { "code": "/usr/local/usr/local/go/usr/local/go/binPATH", "text": "Hi Tim,I could able to build mongo tools successfully after upgrade go version go1.14.4. 
Thank you.Download the Go archive and extract it into /usr/local , creating a Go tree in /usr/local/go\nReference : Download and install - The Go Programming LanguageSet GOROOT to Go root directory and GOPATH to work space directory.\n$GOROOT=“/usr/local/go”\n$GOPATH=“$HOME/go”Add /usr/local/go/bin to the PATH environment variable\n$export PATH=/usr/local/go/bin:$PATHRun build.sh\n$ sudo ./build.shRegards,\nSatya", "username": "satya_dommeti" }, { "code": "git checkout 100.0.2", "text": "Hi Satya,Great to hear it’s working! Thanks for the info on your use case. Just make sure to always build from a version tag (e.g. git checkout 100.0.2). The master branch is unstable and could break at any time, but the version tags will always be stable.", "username": "Tim_Fogarty" }, { "code": "", "text": "We are building tools for security purposes and client wants to use source version in production.Please let me know if there is any issues with source code.Hi @satya_dommeti,You mentioned a similar requirement for building the server from source. I will reiterate my comment on that discussion since it also applies here:I noticed you are building MongoDB 4.2.3. The latest 4.2 release is currently 4.2.8 (see MongoDB 4.2 Minor Releases). I strongly recommend staying current with minor releases for the latest security and stability fixes.If your client is concerned about security and stability, the official packages should provide more assurance on that front than building from source. All builds are extensively tested via our public (and open source) Evergreen CI and published packages are signed for verification.If you build from source, there may be variations in your build environment that cause unexpected issues.It is fine if you prefer to build from source, but typically the motivation for doing so would be to make alterations or build for an unsupported platform. If security is a key concern, you should also keep up with the latest minor releases.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to build mongo-tools from source code?
2020-06-29T19:30:35.014Z
How to build mongo-tools from source code?
5,431
null
[ "replication" ]
[ { "code": "", "text": "Hello,Recently I have faced data loss in MongoDB, and that is so rare:\n– checked for rollback, nothing there.\nfound that, only node in replicaset is alive, but unable to connect, on restart that single replica node, it starts replaying oplog, not sure if it is because member was not identified as primary or secondary.My questions here:why it is placing markers for truncation?", "username": "Aayushi_Mangal" }, { "code": "", "text": "Hi @Aayushi_Mangal,does oplog replay truncate data from primary?No. Oplog replay does not remove any data from the primary.why it is placing markers for truncation?The oplog is a special capped collection with a configured maximum size. When the maximum size of a capped collection is reached, some of the oldest documents are removed to maintain the target oplog size.The oplog implementation in the WiredTiger storage engine has some optimisations specifically for replication use cases. A housekeeping process adds logical markers (aka oplog stones) for more efficient removal of batches of the oldest documents.For a more technical summary, see SERVER-19551: Keep “milestones” against WT oplog to efficiently remove old records.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X,Ok, when does oplog replay comes to picture. Like we have primary node, but I can see oplog replay operations in primary log as well.Also what if we drop “local” database from primary only node. does we get data loss, as the data is already written in disk, but I can see data loss after dropping “local” database", "username": "Aayushi_Mangal" }, { "code": "localstartup_logoplog.rssystem.replsetlocallocal.oplog.rs", "text": "Ok, when does oplog replay comes to picture. Like we have primary node, but I can see oplog replay operations in primary log as well.Hi @Aayushi_Mangal,Can you clarify what you mean by “oplog replay”? Are you referring to a secondary pulling updates from another member of your replica set? Can you provide an example of the operations you are looking at?For a description of the general processes of initial sync and ongoing replication, please review Replication Set Data Synchronisation.Also what if we drop “local” database from primary only node. does we get data loss, as the data is already written in disk, but I can see data loss after dropping “local” databaseDropping the local database removes the collections and configuration it contains (startup_log, oplog.rs, system.replset, …). but does not affect data in other databases.However, replica set members rely on the oplog to determine the provenance of data. If you drop the local database (or the local.oplog.rs collection) for a replica set member, it cannot rejoin the same replica set without resyncing all data. If a replica set member has an oplog but no longer has an entry in common with the oplog of another member of the replica set, a resync will also be required.If there are data changes you want to preserve on a replica set member that you are about to resync, you should take a backup of the relevant data before starting the resync process. The resync process replaces all data and indexes with a fresh copy (and known state) from another replica set member.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Oplog replay -- does it truncate data from storage
2020-06-24T11:00:33.695Z
Oplog replay &ndash; does it truncate data from storage
4,896
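Following on from the capped-oplog explanation above, the configured size and the current replication window can be inspected from the mongo shell; a small sketch (field names as reported by a typical 4.x deployment):

// configured maximum oplog size and current usage, in bytes
var oplogStats = db.getSiblingDB("local").getCollection("oplog.rs").stats();
printjson({ maxSizeBytes: oplogStats.maxSize, usedBytes: oplogStats.size });

// timestamps of the first and last oplog entries, i.e. how far back a lagging member could still sync from
rs.printReplicationInfo();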
null
[ "configuration" ]
[ { "code": "", "text": "If I make a change to a server parameter like wiredTigerConcurrentReadTransactions to the current running server (using adminCommand.setParameter), is that change lost after reboot?Looking at https://docs.mongodb.com/manual/reference/parameters/ it doesn’t say anything explicitly.Thanks in advance\nBen", "username": "Benjamin_Slade" }, { "code": "", "text": "Yes\nAdd it to config file for making it persistent“To make persistent add to the Mongo configuration file:”\nAbove line from this linkIn this blog post, we will discuss the MongoDB best practices applied at the Operating System (OS) and MongoDB levels.\nEst. reading time: 12 minutes\n", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Ok thanks. I guess it’s obvious once you know it, but other database technologies I’ve used only have the key parameters needed for boot in the config file, and the rest of the parameters stored in something like the admin db. Maybe I’ll submit an edit to the documention.", "username": "Benjamin_Slade" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Are server parameter changes lost on reboot?
2020-07-01T14:34:02.938Z
Are server parameter changes lost on reboot?
2,250
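To make that concrete: the runtime change and the read-back are done from the mongo shell, while persistence needs the config file; the value 256 below is only an example:

// takes effect immediately, but is lost on the next mongod restart
db.adminCommand({ setParameter: 1, wiredTigerConcurrentReadTransactions: 256 });

// read the current value back
db.adminCommand({ getParameter: 1, wiredTigerConcurrentReadTransactions: 1 });

// to persist across restarts, add the equivalent to mongod.conf instead, e.g.
//   setParameter:
//     wiredTigerConcurrentReadTransactions: 256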
null
[ "data-modeling" ]
[ { "code": "", "text": "I am new to NoSQL. I have experience of RDBMS systems and structures, but also by no means a DBA or app developer…I am a hobbyist / enthusiast, and I am looking to deploy a new iOS app tied to a Parse server backend, using MongoDB. I am thinking through the data model required, and I default to relational data structures…I would like to model recording golf competitions. Simplistically, a competition is a:Round of golfOn a specific courseOn a specific day / dateTypically, I would have tables to define the Course(s) and 1:Many Hole table linked to each Course.\nCourseID\nCourseName\nCourseParHoleID\nHoleNo\nHolePar\nHoleSI\nHoleCourseIDSeparately, I would store Round in a different table, with a foreign key back to Course. This starts needing many links between tables to perform queries, aggregation, etc. Simply put a Player plays a Round (another 1:Many relation).Should I instead create rows for each hole recorded, that ALSO has the Course data in it, so each row can perform it’s own queries & aggregation?Row1:\nRowID, HoleNo, CourseID, HolePar, HoleSI, HoleGrossScore, HoleNettScore, etc, etcThen I may not even need the Course & Hole tables?", "username": "Dan_Burt" }, { "code": "", "text": "I would like to model recording golf competitions. Simplistically, a competition is a:Hello and welcome to MongoDB community.So, the main purpose of the application is to collect the above data - a golf competition related data.And, then use it for what? What are your main and most used queries? So, your application has a main screen showing somethings and main options to show some details. What are those and important ones? These questions are to drive (not the golf play drive) the design and modeling of data.Competitions are played on a course with holes. Many players play in a competition. Many rounds are played during competition - and players play and there is an outcome for each round. This is general information, the entities and attributes involved.Tables or entities, keys, relationships are associated with data and how it is modeled. It is important to figure the application and its main purpose / functions / features - then think about modeling, storing and queries.You may have looked at this already, but here is a MongoDB documentation link to Data Modeling Introduction.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Prasad_Saya for the reply.There will be 2 primary purposes:Live leaderboards of a Competition, while in progress - so querying from a Competition perspectivePlayer history, a record of all rounds played - I expect mostly per Player but also to aggregate scores together. So this would be querying from the Player perspectiveIn Parse, I don’t believe you can do nested documents - though I could very well be wrong! It has “Pointers” (1:Many) and Relations (Many:Many) links to other “Classes” (i.e. documents).Should I be thinking more about what my app actually does, rather than ensuring optimal data storage (i.e. normalization, etc)?My current entity relationship has multiple tables just to cover Golf Clubs, Courses, Holes. A table for Round. Then add in a join table between Player and Round for the Many:Many relationship.I wonder if the recording of the course data could simply be ignored in the database - the app would need to know this, but could be downloaded from a JSON data source(?). 
As a hole is played, it writes all the relevant data for that hole into the database, including:In this way, I would not have to do the multi-table linking in queries to do anything with the data - all the data is available within this document (Parse Class object) to perform queries / calculations / processing. And then group 18 rows of Score entries to make up my entire Round.Is this an example of a “NoSQL approach”? Or at least a less RDBMS-based approach?", "username": "Dan_Burt" }, { "code": "courseID\ncourseName\ncoursePar\nholes [ 1.. 18 array of holes]\n holeID\n holeNo\n holePar\n...\n", "text": "@Dan_Burt just some thoughts.Should I be thinking more about what my app actually does, rather than ensuring optimal data storage (i.e. normalization, etc)?Normalization or de-normalization is modeling data. It should serve the purpose of the application functionality, I think. Data modeling is an aspect of application design. MongoDB’s flexible schema allows de-normalization - you can model data such that the related data can be stored and queried together.De-normalized data has advantages over normalized data - performance and lesser code. Joining tables for queries can affect the performance the wrong way.For example, lets take course and holes data. A course has 18 (or sometimes 9) holes. This can be stored in a de-normalized form as shown below. Querying information about a hole (or holes) in a course has all the related information without a join.An example of a normalized data:A round and player are two entities. A round is played by 2 or more players. The player entity can store all the details (the id, name, address, history, …). But, the round can store some basic player details (e.g., name, city). So, when you query about a round, the query accesses only the round data with basic player info. If more details are needed about the player, then an additional query is made to get the player data.Is this an example of a “NoSQL approach”? Or at least a less RDBMS-based approach?It is an approach, alright. But, after storing this data, what kind of queries are run on this data (is to be figured)?The above discussion about normalized / de-normalized is about the NoSQL design approach. It blends, giving more options to work with data.", "username": "Prasad_Saya" } ]
NoSQL newbie - modelling golf course data
2020-07-01T12:16:45.179Z
NoSQL newbie - modelling golf course data
2,156
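A small sketch of the embedded shape discussed in the thread — holes embedded in the course document, and per-hole scores embedded in a round — with purely illustrative field names and values:

// a course carries its holes (written rarely, read often)
db.courses.insertOne({
  _id: "old-course",
  name: "Old Course",
  par: 72,
  holes: [ { no: 1, par: 4, si: 10 }, { no: 2, par: 4, si: 14 } /* ... up to 18 ... */ ]
});

// a round keeps the player, the course reference and the per-hole scores together,
// so leaderboards and player history need no joins
db.rounds.insertOne({
  competitionId: "club-champs-2020",
  playerId: "dan",
  courseId: "old-course",
  playedAt: ISODate("2020-07-01T00:00:00Z"),
  scores: [ { hole: 1, gross: 5, nett: 4 }, { hole: 2, gross: 4, nett: 4 } ]
});

// live leaderboard for one competition: total gross per player, lowest first
db.rounds.aggregate([
  { $match: { competitionId: "club-champs-2020" } },
  { $project: { playerId: 1, totalGross: { $sum: "$scores.gross" } } },
  { $sort: { totalGross: 1 } }
]);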
null
[ "sharding" ]
[ { "code": "2020-06-30T23:48:16.413+0000 I - [conn1086250] operation was interrupted because a client disconnected\n2020-06-30T23:48:16.443+0000 F - [conn1086250] terminate() called. No exception is active 0x55aa2cd50fe1 0x55aa2cd50d98 0x55aa2ce5b586 0x55aa2ce5b5c1 0x55aa2bfb557a 0x55aa2c238cd9 0x55aa2c239252 0x55aa2c159e00 0x55aa2c17df1c 0x55aa2c17806f 0x55aa2c17b2fc 0x55aa2c4f7c22 0x55aa2c175a6d 0x55aa2c178d23 0x55aa2c177137 0x55aa2c177fcb 0x55aa2c17b2fc 0x55aa2c4f808b 0x55aa2cbf3824 0x7f4333cafe65 0x7f43339d888d\n----- BEGIN BACKTRACE -----\n{\"backtrace\":[{\"b\":\"55AA2BA30000\",\"o\":\"1320FE1\",\"s\":\"_ZN5mongo15printStackTraceERSo\"},{\"b\":\"55AA2BA30000\",\"o\":\"1320D98\"},{\"b\":\"55AA2BA30000\",\"o\":\"142B586\",\"s\":\"_ZN10__cxxabiv111__terminateEPFvvE\"},{\"b\":\"55AA2BA30000\",\"o\":\"142B5C1\"},{\"b\":\"55AA2BA30000\",\"o\":\"58557A\"},{\"b\":\"55AA2BA30000\",\"o\":\"808CD9\"},{\"b\":\"55AA2BA30000\",\"o\":\"809252\",\"s\":\"_ZN5mongo8Strategy13clientCommandEPNS_16OperationContextERKNS_7MessageE\"},{\"b\":\"55AA2BA30000\",\"o\":\"729E00\",\"s\":\"_ZN5mongo23ServiceEntryPointMongos13handleRequestEPNS_16OperationContextERKNS_7MessageE\"},{\"b\":\"55AA2BA30000\",\"o\":\"74DF1C\",\"s\":\"_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE\"},{\"b\":\"55AA2BA30000\",\"o\":\"74806F\",\"s\":\"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE\"},{\"b\":\"55AA2BA30000\",\"o\":\"74B2FC\"},{\"b\":\"55AA2BA30000\",\"o\":\"AC7C22\",\"s\":\"_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE\"},{\"b\":\"55AA2BA30000\",\"o\":\"745A6D\",\"s\":\"_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE\"},{\"b\":\"55AA2BA30000\",\"o\":\"748D23\",\"s\":\"_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE\"},{\"b\":\"55AA2BA30000\",\"o\":\"747137\",\"s\":\"_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE\"},{\"b\":\"55AA2BA30000\",\"o\":\"747FCB\",\"s\":\"_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE\"},{\"b\":\"55AA2BA30000\",\"o\":\"74B2FC\"},{\"b\":\"55AA2BA30000\",\"o\":\"AC808B\"},{\"b\":\"55AA2BA30000\",\"o\":\"11C3824\"},{\"b\":\"7F4333CA8000\",\"o\":\"7E65\"},{\"b\":\"7F43338DA000\",\"o\":\"FE88D\",\"s\":\"clone\"}],\"processInfo\":{ \"mongodbVersion\" : \"4.2.2\", \"gitVersion\" : \"a0bbbff6ada159e19298d37946ac8dc4b497eadf\", \"compiledModules\" : [], \"uname\" : { \"sysname\" : \"Linux\", \"release\" : \"3.10.0-1062.9.1.el7.x86_64\", \"version\" : \"#1 SMP Fri Dec 6 15:49:49 UTC 2019\", \"machine\" : \"x86_64\" }, \"somap\" : [ { \"b\" : \"55AA2BA30000\", \"elfType\" : 3, \"buildId\" : \"A58FCE757C520F50A4A1BBBDE7E676CB6C5C160E\" }, { \"b\" : \"7FFE1D468000\", \"elfType\" : 3, \"buildId\" : \"B5A5458535A1397FA6BAAF5E8C13A6395426A1B2\" }, { \"b\" : \"7F43350D6000\", \"path\" : \"/lib64/libcurl.so.4\", \"elfType\" : 3, \"buildId\" : \"89C83CEB5DE5FDEC4F3DFBA4FCAACB53D747A998\" }, { \"b\" : \"7F4334EBD000\", \"path\" : \"/lib64/libresolv.so.2\", \"elfType\" : 3, \"buildId\" : \"3009B26B33156EAAF99787AA3DA0C6AE99649755\" }, { \"b\" : \"7F4334A5A000\", \"path\" : \"/lib64/libcrypto.so.10\", \"elfType\" : 3, \"buildId\" : \"4CF1939F660008CFA869D8364651F31AACD2C1C4\" }, { \"b\" : \"7F43347E8000\", \"path\" : \"/lib64/libssl.so.10\", \"elfType\" : 3, \"buildId\" : \"3B305C3BA17FE394862E749763F2956C9C890C2E\" }, { 
\"b\" : \"7F43345E4000\", \"path\" : \"/lib64/libdl.so.2\", \"elfType\" : 3, \"buildId\" : \"18113E6E83D8E981B8E8D808F7F3DBB23F950A1D\" }, { \"b\" : \"7F43343DC000\", \"path\" : \"/lib64/librt.so.1\", \"elfType\" : 3, \"buildId\" : \"4749697BF078337576C4629F0D30B296A0939779\" }, { \"b\" : \"7F43340DA000\", \"path\" : \"/lib64/libm.so.6\", \"elfType\" : 3, \"buildId\" : \"5681C054FDABCF789F4DDA66E94F1F6ED1747327\" }, { \"b\" : \"7F4333EC4000\", \"path\" : \"/lib64/libgcc_s.so.1\", \"elfType\" : 3, \"buildId\" : \"DAC0179F4555AEFEC9E97476201802FD20C03EC5\" }, { \"b\" : \"7F4333CA8000\", \"path\" : \"/lib64/libpthread.so.0\", \"elfType\" : 3, \"buildId\" : \"8B33F7F8C86F8D544C63C5541A8E42B3DDFEF8B1\" }, { \"b\" : \"7F43338DA000\", \"path\" : \"/lib64/libc.so.6\", \"elfType\" : 3, \"buildId\" : \"398944D32CF16A67AF51067A326E6C0CC14F90ED\" }, { \"b\" : \"7F4335340000\", \"path\" : \"/lib64/ld-linux-x86-64.so.2\", \"elfType\" : 3, \"buildId\" : \"5CC1A53B747A7E4D21198723C2B633E54F3C06D9\" }, { \"b\" : \"7F43336A7000\", \"path\" : \"/lib64/libidn.so.11\", \"elfType\" : 3, \"buildId\" : \"2B77BBEFFF65E94F3E0B71A4E89BEB68C4B476C5\" }, { \"b\" : \"7F433347A000\", \"path\" : \"/lib64/libssh2.so.1\", \"elfType\" : 3, \"buildId\" : \"1AF123CADB2F2910E89CBD540A06D3B33692F95E\" }, { \"b\" : \"7F4333221000\", \"path\" : \"/lib64/libssl3.so\", \"elfType\" : 3, \"buildId\" : \"B6321C434B5C7386B144B925CEE2798D269FDDF5\" }, { \"b\" : \"7F4332FF9000\", \"path\" : \"/lib64/libsmime3.so\", \"elfType\" : 3, \"buildId\" : \"BDA454441F59F41D2DA36E13CEA1FC4CE95B2BBB\" }, { \"b\" : \"7F4332CCA000\", \"path\" : \"/lib64/libnss3.so\", \"elfType\" : 3, \"buildId\" : \"DC3B36B530F506DE4FC1A6612D7DF44D4A3DDCDB\" }, { \"b\" : \"7F4332A9A000\", \"path\" : \"/lib64/libnssutil3.so\", \"elfType\" : 3, \"buildId\" : \"32C8FB6C2768FFE41E0A15CBF2089A4202CA2290\" }, { \"b\" : \"7F4332896000\", \"path\" : \"/lib64/libplds4.so\", \"elfType\" : 3, \"buildId\" : \"325B8CE57A776DE0B24B362A7E0C90E903B1A4B8\" }, { \"b\" : \"7F4332691000\", \"path\" : \"/lib64/libplc4.so\", \"elfType\" : 3, \"buildId\" : \"0460FF10A3C63749113D380C40E10DFCF066C76E\" }, { \"b\" : \"7F4332453000\", \"path\" : \"/lib64/libnspr4.so\", \"elfType\" : 3, \"buildId\" : \"8840B019EDB66B0CFBD2F77EF196440F7928106E\" }, { \"b\" : \"7F4332206000\", \"path\" : \"/lib64/libgssapi_krb5.so.2\", \"elfType\" : 3, \"buildId\" : \"E2AA8CA3D3164E7DBEC293BFA0B55D2B10DAC05D\" }, { \"b\" : \"7F4331F1D000\", \"path\" : \"/lib64/libkrb5.so.3\", \"elfType\" : 3, \"buildId\" : \"3EE7267AF7BFD3B132E6A222D997DA09C96C90DD\" }, { \"b\" : \"7F4331CEA000\", \"path\" : \"/lib64/libk5crypto.so.3\", \"elfType\" : 3, \"buildId\" : \"82E28CACB60C27CD6F14A6D2268F0CFF621664D0\" }, { \"b\" : \"7F4331AE6000\", \"path\" : \"/lib64/libcom_err.so.2\", \"elfType\" : 3, \"buildId\" : \"67E935BFABA2C914C01156B88947DD515EA51170\" }, { \"b\" : \"7F43318D7000\", \"path\" : \"/lib64/liblber-2.4.so.2\", \"elfType\" : 3, \"buildId\" : \"3192C56CD451E18EB9F29CB045432BA9C738DD29\" }, { \"b\" : \"7F4331682000\", \"path\" : \"/lib64/libldap-2.4.so.2\", \"elfType\" : 3, \"buildId\" : \"F1FADDDE0D21D5F4E2DCADEDD3B85B6E7AAC9883\" }, { \"b\" : \"7F433146C000\", \"path\" : \"/lib64/libz.so.1\", \"elfType\" : 3, \"buildId\" : \"B9D5F73428BD6AD68C96986B57BEA3B7CEDB9745\" }, { \"b\" : \"7F433125C000\", \"path\" : \"/lib64/libkrb5support.so.0\", \"elfType\" : 3, \"buildId\" : \"4F5FBB2087BE132892467C4E7A46A3D07E5DA40B\" }, { \"b\" : \"7F4331058000\", \"path\" : \"/lib64/libkeyutils.so.1\", \"elfType\" : 3, \"buildId\" : 
\"2E01D5AC08C1280D013AAB96B292AC58BC30A263\" }, { \"b\" : \"7F4330E3B000\", \"path\" : \"/lib64/libsasl2.so.3\", \"elfType\" : 3, \"buildId\" : \"E2F2017F821DD1B9D307DA1A9B8014F2941AEB7B\" }, { \"b\" : \"7F4330C14000\", \"path\" : \"/lib64/libselinux.so.1\", \"elfType\" : 3, \"buildId\" : \"D2DD4DA3FDE1477D25BFFF80F3A25FDB541A8179\" }, { \"b\" : \"7F43309DD000\", \"path\" : \"/lib64/libcrypt.so.1\", \"elfType\" : 3, \"buildId\" : \"84467C988F41D853C58353BEB247670E15DA8BAD\" }, { \"b\" : \"7F433077B000\", \"path\" : \"/lib64/libpcre.so.1\", \"elfType\" : 3, \"buildId\" : \"9CA3D11F018BEEB719CDB34BE800BF1641350D0A\" }, { \"b\" : \"7F4330578000\", \"path\" : \"/lib64/libfreebl3.so\", \"elfType\" : 3, \"buildId\" : \"197680DAE6538245CB99723E57447C4EF2E98362\" }, { \"b\" : \"7F4330365000\", \"path\" : \"/lib64/libnss_files.so.2\", \"elfType\" : 3, \"buildId\" : \"A1DB0E8103DE9F2540788EEA6CBCE3F639C2B39D\" } ] }}\n mongos(_ZN5mongo15printStackTraceERSo+0x41) [0x55aa2cd50fe1]\n mongos(+0x1320D98) [0x55aa2cd50d98]\n mongos(_ZN10__cxxabiv111__terminateEPFvvE+0x6) [0x55aa2ce5b586]\n mongos(+0x142B5C1) [0x55aa2ce5b5c1]\n mongos(+0x58557A) [0x55aa2bfb557a]\n mongos(+0x808CD9) [0x55aa2c238cd9]\n mongos(_ZN5mongo8Strategy13clientCommandEPNS_16OperationContextERKNS_7MessageE+0x1C2) [0x55aa2c239252]\n mongos(_ZN5mongo23ServiceEntryPointMongos13handleRequestEPNS_16OperationContextERKNS_7MessageE+0x3D0) [0x55aa2c159e00]\n mongos(_ZN5mongo19ServiceStateMachine15_processMessageENS0_11ThreadGuardE+0xEC) [0x55aa2c17df1c]\n mongos(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0x17F) [0x55aa2c17806f]\n mongos(+0x74B2FC) [0x55aa2c17b2fc]\n mongos(_ZN5mongo9transport26ServiceExecutorSynchronous8scheduleESt8functionIFvvEENS0_15ServiceExecutor13ScheduleFlagsENS0_23ServiceExecutorTaskNameE+0x182) [0x55aa2c4f7c22]\n mongos(_ZN5mongo19ServiceStateMachine22_scheduleNextWithGuardENS0_11ThreadGuardENS_9transport15ServiceExecutor13ScheduleFlagsENS2_23ServiceExecutorTaskNameENS0_9OwnershipE+0x10D) [0x55aa2c175a6d]\n mongos(_ZN5mongo19ServiceStateMachine15_sourceCallbackENS_6StatusE+0x843) [0x55aa2c178d23]\n mongos(_ZN5mongo19ServiceStateMachine14_sourceMessageENS0_11ThreadGuardE+0x2E7) [0x55aa2c177137]\n mongos(_ZN5mongo19ServiceStateMachine15_runNextInGuardENS0_11ThreadGuardE+0xDB) [0x55aa2c177fcb]\n mongos(+0x74B2FC) [0x55aa2c17b2fc]\n mongos(+0xAC808B) [0x55aa2c4f808b]\n mongos(+0x11C3824) [0x55aa2cbf3824]\n libpthread.so.0(+0x7E65) [0x7f4333cafe65]\n libc.so.6(clone+0x6D) [0x7f43339d888d]\n----- END BACKTRACE -----\n", "text": "Hi All,Today, In our production cluster, we faced an issue with MongoDB and couldn’t identify the issue. Kindly help me to identify the issue.Issue Details:-\nMongos Service got interrupted almost the same time in all the nodes (3 node cluster).\nError MessagePlease note that I couldn’t see any issue at Shard & Config Server Side at the same time.Kindly let me know in case of any other details required.Thanks.", "username": "Ann_Pricks_Edmund" }, { "code": "", "text": "Welcome to the community @Ann_Pricks_Edmund!Please provide more details on your deployment:If you are running MongoDB 4.2.6 or an earlier version of 4.2.x, you are likely encountering SERVER-47553: mongos crashes due to client disconnecting when signing keys being refreshed. 
A fix for this issue is included in MongoDB 4.2.7 and newer.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank You for your kind response.Our MongoDB Version - MongoDB v4.2.2\nO/S Version - CentOS Linux release 7.7.1908 (Core)As per the reported bug, we are also encountering the same. We will plan for an upgrade.Thanks.", "username": "Ann_Pricks_Edmund" } ]
Mongos service crashed
2020-07-01T20:45:20.554Z
Mongos service crashed
4,590
null
[ "data-modeling" ]
[ { "code": "", "text": "Is there any way restrict document delete in mongodb if it is being as refrence to any other docuement(s) (something like foreign key constraint in mysql) or is there any other work around for this functionality.", "username": "sudeep_gujju" }, { "code": "RESTRICT", "text": "Hi @sudeep_gujju,Is there any way restrict document delete in mongodb if it is being as reference to any other document(s)As of current version (v4.2), there is no built-in feature within the database to restrict deletion of a “parent” document (equivalent to RESTRICT referential action on MySQL).Depending on your use case, you could model the documents to embed the related documents. i.e. Model Embedded One-To-Many Relationships. I would also highly recommend Summary: Building With Patterns as a reference of modelling the document schema further.Alternatively, you could write an application logic before any document alteration on the collection. i.e. check whether the document is referenced on another collection before performing the delete operation. etc.Regards,\nWan.", "username": "wan" } ]
Foreign key constraint
2020-06-23T13:51:55.974Z
Foreign key constraint
24,998
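To make the second suggestion above concrete, here is a rough mongo-shell sketch of an application-level RESTRICT check. The collection and field names (customers, orders, customerId) are invented for the example, and note that the check and the delete are not atomic unless they are wrapped in a transaction.

// Hypothetical parent/child collections: customers referenced by orders.customerId
function deleteCustomerIfUnreferenced(customerId) {
    // Count child documents that still reference the parent
    var refs = db.orders.countDocuments({ customerId: customerId });
    if (refs > 0) {
        // Emulate a RESTRICT referential action: refuse the delete
        return { deleted: false, reason: refs + " order(s) still reference this customer" };
    }
    return db.customers.deleteOne({ _id: customerId });
}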
null
[ "python", "connecting" ]
[ { "code": "kind: Service\nmetadata:\n name: mongo\n labels:\n app: mongo\nspec:\n ports:\n - name: mongo\n port: 27017\n targetPort: 27017\n clusterIP: None\n selector:\n app: mongo \napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: mongo\nspec:\n selector:\n matchLabels:\n app: mongo\n serviceName: \"mongo\"\n replicas: 3\n template:\n metadata:\n labels:\n app: mongo\n spec:\n terminationGracePeriodSeconds: 10\n containers:\n - name: mongo\n image: mongo\n command:\n - mongod\n - \"--bind_ip_all\"\n - \"--replSet\"\n - rs0\n ports:\n - containerPort: 27017\n volumeMounts:\n - name: mongo-volume\n mountPath: /data/db\n volumeClaimTemplates:\n - metadata:\n name: mongo-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi\nENV MONGO_HOST mongo-0.mongo.mongodb.svc.cluster.local#ENV MONGO_HOST mongo-0.mongo.mongodb.svc.cluster.local,mongo-1.mongo.mongodb.svc.cluster.local,mongo-2.mongo.mongodb.svc.cluster.local/?replicaSet=rs0\npymongo.errors.ServerSelectionTimeoutError: No replica set members available for replica set name \"rs0:27017\"", "text": "Hi, I have deployed my MongoDB replicaset in Kubernetes in the following way1 - Headless Service -2 - MongoDB Replica SetNow, I see that there are three mongodb pods, mongo-0, mongo-1, mongo-2. I want my application to connect to all of these, just in case one of them goes down and the other one becomes the primary node. As of now, I am usingENV MONGO_HOST mongo-0.mongo.mongodb.svc.cluster.localBut if I want to connect to all the members in the replicaset, how do I do it ? I tried usingBut it returns the errorpymongo.errors.ServerSelectionTimeoutError: No replica set members available for replica set name \"rs0:27017\"Any help ??", "username": "Tushar_Sonawane1" }, { "code": "MongoClient()", "text": "Hi @Tushar_Sonawane1, and welcome to the forum!But if I want to connect to all the members in the replicaset, how do I do it ?Not entirely sure about your Kubernetes set up, but have you initiated the replica set?\nSee rs.initate() and also Deploy a Replica Set for more information.You may also find Troubleshoot Replica Set guide to be a useful resource.But it returns the errorThe error looks like related to an incorrect parsing, i.e. a concatenation between the replica set name and default port number. Please make sure that you’re passing a valid connection URI to MongoClient(). Please see PyMongo: making a connection with MongoClient for more information.Regards,\nWan.", "username": "wan" } ]
How to access mongodb replicaset from inside kubernetes cluster?
2020-06-29T09:43:31.793Z
How to access mongodb replicaset from inside kubernetes cluster?
4,826
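For reference, a correctly formed seed-list URI for PyMongo would look roughly like the sketch below. It assumes the replica set has already been initiated with the name rs0 and that the three pod hostnames resolve through the headless service; it is an illustration of the URI format rather than the poster's confirmed fix.

from pymongo import MongoClient

# List every member host:port, then pass replicaSet as a query-string option.
uri = (
    "mongodb://"
    "mongo-0.mongo.mongodb.svc.cluster.local:27017,"
    "mongo-1.mongo.mongodb.svc.cluster.local:27017,"
    "mongo-2.mongo.mongodb.svc.cluster.local:27017"
    "/?replicaSet=rs0"
)

client = MongoClient(uri)
print(client.admin.command("ping"))  # fails fast if no member is reachable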
null
[ "python", "beta" ]
[ { "code": "python -m pip install https://github.com/mongodb/mongo-python-driver/archive/3.11.0b1.tar.gz\n", "text": "We are pleased to announce the 3.11.0b1 release of PyMongo - MongoDB’s Python Driver. This beta release adds support for MongoDB 4.4.Note that this release will not be uploaded to PyPI and can be installed directly from the GitHub tag:", "username": "Shane" }, { "code": "", "text": "Sorry, for commenting here, but I couldn’t find a way to DM. first of all thanks for PyMongo its an awesome library. I’m using it extensively, although I’m unable to find a method synonmous to ‘collMod’ used for updated ‘expiresAfterSecond’ on TTL index in pymongo docs.\nThis seemed like silly question to ask on forum.", "username": "Niraj_Mukta" }, { "code": ">>> client.db.command('collMod', 'collection_name',\n index={'name': 'index_name', 'expireAfterSeconds': 3600})\n", "text": "Happy to help. In the future feel free to create a new forum post for this type of question. You can run the collMod command (or any other MongoDB command) with the Database.command() method:", "username": "Shane" }, { "code": "", "text": "Hello @Niraj_Mukta welcome to the community!as a brand new, highly welcome member, few limitations are available. You start as a Seedling and will be very soon promoted. Once promoted you will be able to send Private Messages and have further advantages:Please check out the post Getting Started with the MongoDB Community from @Jamie which will explain all this in much more details.Hope this helps,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "wow! That was quick. Thank you so much @Shane . It worked.", "username": "Niraj_Mukta" }, { "code": "", "text": "@michael_hoeller Thank you for the warm welcome. Will definitely checkout the community guidelines. Looking forward to learning with and helping fellow developers.", "username": "Niraj_Mukta" }, { "code": "", "text": "", "username": "system" } ]
PyMongo 3.11.0b1 Released
2020-06-09T17:27:57.330Z
PyMongo 3.11.0b1 Released
6,614
null
[ "python" ]
[ { "code": "", "text": "Hi fellow developers,I’m using mongodb with python using pymongo library. My usecase takes advantage of TTL indexes provided by mongo. But, I’ve a scenario where I want to update the ‘expireAfterSeconds’ time using collMod using Pymongo for which I cannot find anything on pymongo docs. If anyone could point in the right direction that would be really useful. Here’s the mongo db docs, and javascript implementation for the same.Also, if anyone has ever used this feature of updating index using ‘collMod’ how bad is the performance hit for like 100,000 documents.", "username": "Niraj_Mukta" }, { "code": "", "text": "If anyone came across same issue, check out this post by @Shane", "username": "Niraj_Mukta" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
PyMongo: method to run command for collMod TTL index update
2020-07-01T17:30:50.122Z
PyMongo: method to run command for collMod TTL index update
5,460
null
[ "aggregation", "php" ]
[ { "code": "$pipeline = array(\n array('$match' =>$array), \n array(\n '$lookup' => array(\n 'from' => 'corporate',\n 'localField' => 'BIN',\n 'foreignField' => 'BIN',\n 'as' => 'corporate_details'\n )\n ),\n array(\n '$lookup' => array(\n 'from' => 'contact',\n 'localField' => 'BIN',\n 'foreignField' => 'BIN',\n 'as' => 'contact_details'\n )\n )\n\n );\n\n$options = ['CORPORATE_STATUS' =>'Active'];\n $result = $collection_contact->aggregate( $pipeline, $options );\n", "text": "I have two mongodb collections.corporate and contact collections. I need to get the count of results matching collection records.How to give the count value in query.Here i will paste my code . If anyone help me to find a solution to this issue.", "username": "CIBY_JOHN" }, { "code": "", "text": "Hello, @CIBY_JOHN! Welcome to the community!Help us to help you \nProvide:", "username": "slava" }, { "code": "[businessnature] => Array\n (\n [0] => Government\n [1] => Manufacturing\n [2] => Professional Services\n [3] => Retail_Trade\n [4] => Wholesale_Trade\n )\n\n[region] => Array\n (\n [0] => ANZ\n [1] => ASEAN\n [2] => INDIA\n )\n\n[buisiness_type] => Array\n (\n [0] => Limited Liability\n [1] => Non-Government Organisation\n [2] => Private Limited\n [3] => Public Listed\n [4] => Public Sector\n [5] => Sole Proprietorship\n )\n\n[industry_group] => Array\n (\n [0] => AFB - Agriculture, Food, Beverages and Home Products\n )\n\n[industry_details] => Array\n (\n [0] => AFB - Agricultural Products\n [1] => AFB - Food, Beverages And Tobacco Products\n [2] => AFB - Forestry\n [3] => AFB - Grain Mill\n [4] => AFB - Handicrafts, Fancy Goods And Other Household Goods\n [5] => AFB - Home Furnishings And Fittings\n [6] => AFB - Household Chemicals\n [7] => AFB - Paper Products\n [8] => AFB - Personal Effects\n [9] => AFB - Processing And Preserving\n [10] => AFB - Textiles, Clothing, Footwear And Leather Goods\n [11] => AFB - Utilities - Gas, Electric And Water\n )\n\n[sic_code] => Array\n (\n [0] => 0111\n [1] => 0112\n [2] => 0115\n [3] => 0116\n [4] => 0119\n [5] => 0131\n )\n\n[naics_code] => Array\n (\n [0] => 111110\n [1] => 111120\n [2] => 111130\n [3] => 111140\n [4] => 111150\n [5] => 111160\n [6] => 111191\n [7] => 111199\n )\n\n[employee_size] => Array\n (\n [0] => AA: 1 - 49\n [1] => BB: 50 - 99\n [2] => CC: 100 - 499\n [3] => DD: 500 - 999\n [4] => EE: 1,000 - 4,999\n [5] => FF: 5,000 - 9,999\n [6] => GG: 10,000+\n [7] => ZZ: NOT CODED\n )\n\n[corporate_status] => Array\n (\n [0] => Acquired\n [1] => Active\n [2] => Bankrupted\n [3] => Ceased operations\n [4] => Dormant\n [5] => Liquidating\n [6] => Merged\n [7] => Suspended\n )\n\n[job_level] => Array\n (\n [0] => Administrator\n [1] => C-Level\n [2] => Consultant\n [3] => Director\n [4] => Executive\n [5] => Manager\n [6] => Others\n [7] => President/VP\n [8] => Profession Service\n [9] => Professional\n )\n\n[job_function] => Array\n (\n [0] => Accounting / Finance\n [1] => Administration\n [2] => Customer Service\n [3] => Education\n [4] => Executive Management / C-Suite\n [5] => Human Resource\n [6] => IT / ALL\n [7] => IT / Application & Programming\n [8] => IT / Data & Database\n [9] => IT / Network & Infrastructure\n [10] => IT / Security & Compliance\n [11] => Legal\n [12] => Logistics / Facility / Warehouse\n [13] => Manufacturing\n [14] => Marketing / PR / Event\n [15] => Media / Communications\n [16] => Medical / Healthcare Services\n [17] => Operations\n [18] => Others\n [19] => Product Development\n [20] => Project Management\n [21] => Public Service 
/ Policy Makers\n [22] => Purchasing / Procurement\n [23] => Quality Assurance\n [24] => Real Estate\n [25] => Research / Development\n [26] => Risk / Safety / Security\n [27] => Sales / Business Development\n )\n\n[contact_status] => Array\n (\n [0] => Active\n [1] => Deceased\n [2] => Fired\n [3] => Quit\n [4] => Retired\n [5] => Suspended\n )\n\n[no_of_pcs] => Array\n (\n [0] => A: 1 - 4\n [1] => B: 5 - 9\n [2] => C: 10 - 14\n [3] => D: 15 - 19\n [4] => E: 20 - 49\n [5] => F: 50 - 74\n [6] => G: 75 - 99\n [7] => H: 100 - 149\n [8] => I: 150 - 199\n [9] => J: 200 - 299\n [10] => K: 300 - 499\n [11] => L: 500 - 799\n [12] => M: 800 - 999\n [13] => N: 1,000 - 1,999\n [14] => O: 2,000 - 4,999\n [15] => P: 5,000 - 9,999\n [16] => Q: 10,000 - 24,999\n [17] => R: 25,000 - 49,999\n [18] => S: 50,000 - 99,999\n [19] => T: >= 100,000\n [20] => Z: NOT CODED\n )\n\n[no_of_servers] => Array\n (\n [0] => A: 1 - 4\n [1] => B: 5 - 9\n [2] => C: 10 - 14\n [3] => D: 15 - 19\n [4] => E: 20 - 49\n [5] => F: 50 - 74\n [6] => G: 75 - 99\n [7] => H: 100 - 149\n [8] => I: 150 - 199\n [9] => J: 200 - 299\n [10] => K: 300 - 499\n [11] => L: 500 - 799\n [12] => M: 800 - 999\n [13] => N: 1,000 - 1,999\n [14] => O: 2,000 - 4,999\n [15] => P: 5,000 - 9,999\n [16] => Q: 10,000 - 24,999\n [17] => R: 25,000 - 49,999\n [18] => S: 50,000 - 99,999\n [19] => T: >= 100,000\n [20] => Z: NOT CODED\n )\n\n[installed_product] => Array\n (\n [0] => Avtech\n [1] => Symantec NetBackup\n [2] => Symantec Storage Foundation\n [3] => Yardi\n [4] => Secure\n [5] => Symantec Veritas\n )\n\n[export_session_id] => 8C457309B7\n", "text": "corporate collection method result for db.coll.findOne() is{\n“_id” : ObjectId(“5ef32a45c7728380b0ccc1d8”),\n“BIN” : “BIN”,\n“COMPANY_NAME” : “COMPANY_NAME”,\n“COMPANY_NAME_LOCAL” : “COMPANY_NAME_LOCAL”,\n“BUSINESS_OVERVIEW” : “BUSINESS_OVERVIEW”,\n“BUSINESS_OVERVIEW_LOCAL” : “BUSINESS_OVERVIEW_LOCAL”,\n“CORPORATE_COUNTRY” : “CORPORATE_COUNTRY”,\n“CORPORATE_REGION” : “CORPORATE_REGION”,\n“URL” : “URL”,\n“BUSINESS_TYPE” : “BUSINESS_TYPE”,\n“ADDRESS1” : “ADDRESS1”,\n“ADDRESS2” : “ADDRESS2”,\n“ADDRESS3” : “ADDRESS3”,\n“CITY” : “CITY”,\n“STATE” : “STATE”,\n“POST_CODE” : “POST_CODE”,\n“ADDRESS1_LOCAL” : “ADDRESS1_LOCAL”,\n“ADDRESS2_LOCAL” : “ADDRESS2_LOCAL”,\n“ADDRESS3_LOCAL” : “ADDRESS3_LOCAL”,\n“CITY_LOCAL” : “CITY_LOCAL”,\n“STATE_LOCAL” : “STATE_LOCAL”,\n“CORPORATE_COUNTRY_LOCAL” : “CORPORATE_COUNTRY_LOCAL”,\n“TELEPHONE” : “TELEPHONE”,\n“CORPORATE_EMAIL” : “CORPORATE_EMAIL”,\n“REGISTRATION_NO” : “REGISTRATION_NO”,\n“COUNTRY_CODE3” : “COUNTRY_CODE3”,\n“COUNTRY_CODE2” : “COUNTRY_CODE2”,\n“COMPANY_FACEBOOK” : “COMPANY_FACEBOOK”,\n“COMPANY_YOUTUBE” : “COMPANY_YOUTUBE”,\n“COMPANY_TWITTER” : “COMPANY_TWITTER”,\n“COMPANY_LINKEDIN” : “COMPANY_LINKEDIN”,\n“COMPANY_WECHAT” : “COMPANY_WECHAT”,\n“YEAR_FOUNDED” : “YEAR_FOUNDED”,\n“NAICS6_DESC” : “NAICS6_DESC”,\n“NAICS6_CODE” : “NAICS6_CODE”,\n“BUSINESS_NATURE” : “BUSINESS_NATURE”,\n“INDUSTRY_GROUP” : “INDUSTRY_GROUP”,\n“PRIMARY_INDUSTRY_DETAIL” : “PRIMARY_INDUSTRY_DETAIL”,\n“SECONDARY_INDUSTRY_DETAIL” : “SECONDARY_INDUSTRY_DETAIL”,\n“SIC4_DESC” : “SIC4_DESC”,\n“SIC4_CODE” : “SIC4_CODE”,\n“EMPLOYEE_SIZE_BIG_BAND” : “EMPLOYEE_SIZE_BIG_BAND”,\n“COUNTRY_EMPLOYEE_SIZE” : “COUNTRY_EMPLOYEE_SIZE”,\n“COUNTRY_EMPLOYEE_SIZE_RANGE” : “COUNTRY_EMPLOYEE_SIZE_RANGE”,\n“GLOBAL_EMPLOYEE_SIZE” : “GLOBAL_EMPLOYEE_SIZE”,\n“GLOBAL_EMPLOYEE_SIZE_RANGE” : “GLOBAL_EMPLOYEE_SIZE_RANGE”,\n“REVENUE_VALUE” : “REVENUE_VALUE”,\n“REVENUE_VALUE_RANGE” : 
“REVENUE_VALUE_RANGE”,\n“CORPORATE_DATE_UPDATED” : “CORPORATE_DATE_UPDATED”,\n“CORPORATE_STATUS” : “CORPORATE_STATUS”,\n“CORPORATE_REMARKS” : “CORPORATE_REMARKS”,\n“TID” : “TID”,\n“NO_OF_PCS” : “NO_OF_PCS”,\n“NO_OF_SERVERS” : “NO_OF_SERVERS”,\n“IT_BUDGET” : “IT_BUDGET”,\n“INSTALLED_PRODUCT” : “INSTALLED_PRODUCT”,\n“TECH_DATE_UPDATED” : “TECH_DATE_UPDATED”\n}contact collection method result for db.coll.findOne() is{\n“_id” : ObjectId(“5ef32a59c7728380b0ccc1e0”),\n“BIN” : “BIN”,\n“CIN” : “CIN”,\n“SALUTATION” : “SALUTATION”,\n“FIRSTNAME” : “FIRSTNAME”,\n“LASTNAME” : “LASTNAME”,\n“FULLNAME” : “FULLNAME”,\n“TITLE” : “TITLE”,\n“DEPARTMENT” : “DEPARTMENT”,\n“SALUTATION_LOCAL” : “SALUTATION_LOCAL”,\n“FIRSTNAME_LOCAL” : “FIRSTNAME_LOCAL”,\n“LASTNAME_LOCAL” : “LASTNAME_LOCAL”,\n“FULLNAME_LOCAL” : “FULLNAME_LOCAL”,\n“TITLE_LOCAL” : “TITLE_LOCAL”,\n“DEPARTMENT_LOCAL” : “DEPARTMENT_LOCAL”,\n“PRIMARY_JOBFUNCTION” : “PRIMARY_JOBFUNCTION”,\n“PRIMARY_JOBLEVEL” : “PRIMARY_JOBLEVEL”,\n“SECONDARY_JOBFUNCTION” : “SECONDARY_JOBFUNCTION”,\n“SECONDARY_JOBLEVEL” : “SECONDARY_JOBLEVEL”,\n“CONTACT_ADDRESS1” : “CONTACT_ADDRESS1”,\n“CONTACT_ADDRESS2” : “CONTACT_ADDRESS2”,\n“CONTACT_ADDRESS3” : “CONTACT_ADDRESS3”,\n“CONTACT_CITY” : “CONTACT_CITY”,\n“CONTACT_STATE” : “CONTACT_STATE”,\n“CONTACT_POST_CODE” : “CONTACT_POST_CODE”,\n“CONTACT_COUNTRY” : “CONTACT_COUNTRY”,\n“CONTACT_REGION” : “CONTACT_REGION”,\n“CONTACT_ADDRESS1_LOCAL” : “CONTACT_ADDRESS1_LOCAL”,\n“CONTACT_ADDRESS2_LOCAL” : “CONTACT_ADDRESS2_LOCAL”,\n“CONTACT_ADDRESS3_LOCAL” : “CONTACT_ADDRESS3_LOCAL”,\n“CONTACT_CITY_LOCAL” : “CONTACT_CITY_LOCAL”,\n“CONTACT_STATE_LOCAL” : “CONTACT_STATE_LOCAL”,\n“CONTACT_COUNTRY_LOCAL” : “CONTACT_COUNTRY_LOCAL”,\n“DIRECT_PHONE” : “DIRECT_PHONE”,\n“EXT” : “EXT”,\n“MOBILE” : “MOBILE”,\n“BUSINESS_EMAIL” : “BUSINESS_EMAIL”,\n“PERSONAL_EMAIL” : “PERSONAL_EMAIL”,\n“CONTACT_FACEBOOK” : “CONTACT_FACEBOOK”,\n“CONTACT_YOUTUBE” : “CONTACT_YOUTUBE”,\n“CONTACT_TWITTER” : “CONTACT_TWITTER”,\n“CONTACT_LINKEDIN” : “CONTACT_LINKEDIN”,\n“CONTACT_WECHAT” : “CONTACT_WECHAT”,\n“CONTACT_DATE_UPDATED” : “CONTACT_DATE_UPDATED”,\n“CONTACT_STATUS” : “CONTACT_STATUS”,\n“CONTACT_REMARKS” : “CONTACT_REMARKS”\n}query ‘match’ array will beArray\n(\n[country] => Array\n(\n[0] => Australia\n[1] => New Zealand\n[2] => Brunei\n[3] => Indonesia\n[4] => Laos\n[5] => Malaysia\n[6] => Myanmar\n[7] => Philippines\n[8] => Singapore\n[9] => Sri Lanka\n[10] => Thailand\n[11] => India\n))we have to select the desired fields from the above matching list and take the result count.The result we get is joining both tables and matching count for ‘corporate’ and ‘contacts’", "username": "CIBY_JOHN" } ]
Join query count of two collections taken in MongoDB
2020-07-01T17:30:42.621Z
Join query count of two collections taken in MongoDB
4,554
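The counting part of the question was not answered in the thread. One hedged way to do it, shown here in mongo shell syntax rather than PHP and with placeholder filter values, is to put the filters in $match stages (not in the aggregate options array) and append a $count stage at the end of the pipeline:

db.corporate.aggregate([
    // example filters only; real values come from the submitted form
    { $match: { CORPORATE_STATUS: "Active", CORPORATE_COUNTRY: { $in: ["Australia", "India"] } } },
    { $lookup: {
        from: "contact",
        localField: "BIN",
        foreignField: "BIN",
        as: "contact_details"
    } },
    // keep only companies that actually have matching contacts
    { $match: { "contact_details.0": { $exists: true } } },
    // emit a single document: { matched_count: <n> }
    { $count: "matched_count" }
])

To count contacts rather than companies, add an { $unwind: "$contact_details" } stage before the $count stage.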
null
[ "queries" ]
[ { "code": "> db.layouts.find({}, {_id: 1})\n{ \"_id\" : \"adresses\" }\n{ \"_id\" : \"clients\" }\n\n> db.layouts.find({_id: \"clients\"}, {\"info.id\": 1, \"info.married\": 1, _id: 0})\n{ \"info\" : [ { \"id\" : \"luis.alvarez\", \"married\" : 1 }, { \"id\" : \"jose.perez\", \"married\" : 0 }, { \"id\" : \"daniel.contreras\", \"married\" : 0 }, { \"id\" : \"javier.pirela\", \"married\" : 0 } ] }\n\n> db.layouts.find({_id: \"clients\", \"info.married\": \"1\"}, {\"info.id\": 1, \"info.married\": 1, _id: 0})\n\n> db.layouts.find({_id: \"clients\", \"info.married\": 1}, {\"info.id\": 1, \"info.married\": 1, _id: 0})\n{ \"info\" : [ { \"id\" : \"luis.alvarez\", \"married\" : 1 }, { \"id\" : \"jose.perez\", \"married\" : 0 }, { \"id\" : \"daniel.contreras\", \"married\" : 0 }, { \"id\" : \"javier.pirela\", \"married\" : 0 } ] }\n\n> db.layouts.find({_id: \"clients\", \"info.id\": \"luis.alvarez\"}, {\"info.id\": 1, \"info.married\": 1, _id: 0})\n\n> db.layouts.find({_id: \"clients\", \"info.married\": 1}, {\"info.id\": 1, \"info.married\": 1, _id: 0})\n{ \"info\" : [ { \"id\" : \"luis.alvarez\", \"married\" : 1 }, { \"id\" : \"jose.perez\", \"married\" : 0 }, { \"id\" : \"daniel.contreras\", \"married\" : 0 }, { \"id\" : \"javier.pirela\", \"married\" : 0 } ] }\n", "text": "All my life had beeb Oracle Development, just right now I’m starting witn mongoDB, and I need help with this query 'cause don’t work as I wish, it retrive all rows and not a specific row.As you can see, for the document clients (_id=clients) and married marital status (married=1)retrieves all clients regardless of status, why is this?Thanks in advance by your help !!!", "username": "Carlos_Aldana" }, { "code": "db.layouts.find(<filter>, <projection>)<filter><projection><projection>", "text": "Welcome to the community, @Carlos_Aldana!To understand this, you need to start thinking in MongoDB terms.You see, you have only 2 documents (rows) in your ‘layouts’ collection (table).\nAnd then, you have your db.layouts.find(<filter>, <projection>) operation.Since you’re always matching the same document (row) with <filter> object, you will always have same result. With <projection> object, you can select, what properties of the document (columns in the row) to show (1) or hide (0).When you specify { ‘info.married’: 1 } in the <projection> object, you decide to show the nested property (column of the column) ‘info.married’ in the output.I recommend you to take some courses on MongoDB. At least, the very basic one.", "username": "slava" }, { "code": "", "text": "Hi Slava, after all thanks a lot for your help and overall for reedit my post. 
As you can see in this line \" db.layouts.find({_id: “clients”, “info.married”: 1}, {“info.id”: 1, “info.married”: 1, _id: 0}) \" info.married: 1 is in both (filter & projections), my question would be, Can’t have the column in filter and projections at same time or where is the mistake ?", "username": "Carlos_Aldana" }, { "code": "<filter><projection><filter>{\n '_id': 'clients',\n 'info' : [\n { 'id' : 'luis.alvarez', 'married' : 1 },\n { 'id' : 'jose.perez', 'married' : 0 },\n { 'id' : 'daniel.contreras', 'married' : 0 },\n { 'id' : 'javier.pirela', 'married' : 0 }\n ]\n}\n<projection><projection>{\n '_id': 'clients',\n 'info' : [\n { 'id' : 'luis.alvarez' },\n { 'id' : 'jose.perez' },\n { 'id' : 'daniel.contreras' },\n { 'id' : 'javier.pirela' }\n ]\n}\n", "text": "Can’t have the column in filter and projections at same time or where is the mistake ?<filter> and <projection> objects can have same properties. It is not the issue.With <filter> you decide to get this one document (row) for your collection (table)And with <projection> you can only show or hide some fields (columns) in the above document (row);\nNotice: show / hide, but not filter.So, if you specify { ‘info.married’: 0 } in your <projection> object, you just hide all the props ‘married’, and this’s it. You will get the same document, but ‘info.married’ prop will be hidden:Before filtering nested properties (and ‘married’ is a nested property), you need to learn how simple things are done.", "username": "slava" }, { "code": "", "text": "Thanks a lot again Slava, I mean that my problem is with MongoDB (I began yesterday to work with MongoDB), I’ve much experience con OOP, Class, Objects, methods & properties, and overall with json files (nested and very complex) with Phyton, PHP, Javascript.\nI’m not sure if you saw well the filter (I took a example directly from MongoDB page for nested json) , because filter contains the field “info.married”: 1, and I can’t see the problem\nI’m very clear how do you mention that is a process of “change mind” for to work with MongoBD as Database.\nOnce more time, thanks a lot for your help & your time.", "username": "Carlos_Aldana" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why does my query return all documents instead of matching?
2020-07-01T22:21:17.109Z
Why does my query return all documents instead of matching?
3,624
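What the thread stops short of showing is how to narrow the embedded info array itself. Two common options against the same layouts collection are sketched below: an $elemMatch projection returns only the first matching element, while an aggregation $filter keeps every matching element.

// Option 1: $elemMatch projection, returns only the FIRST matching array element
db.layouts.find(
    { _id: "clients" },
    { _id: 0, info: { $elemMatch: { married: 1 } } }
)

// Option 2: aggregation $filter, keeps ALL matching array elements
db.layouts.aggregate([
    { $match: { _id: "clients" } },
    { $project: {
        _id: 0,
        info: {
            $filter: {
                input: "$info",
                as: "person",
                cond: { $eq: ["$$person.married", 1] }
            }
        }
    } }
])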
null
[]
[ { "code": "", "text": "HI, I want to make sure I’m understanding the connection steps correctly. Prior to connecting to Mongo Atlas in the shell, when I type Mongo the prompt returns back Mongo. Now when I type Mongo in the shell the prompt is returning Mongo Enterprise.What’s controlling or being saved in the shell that recognizes that I’ve connected to Atlas in the past? I’ve even tested this with a new instance of a Mongo shell. Is there a way to get only the Mongo prompt back if needed?", "username": "Monique_Bennett-Lowe" }, { "code": "", "text": "Please show the screenshot\nWhat is your os?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi, I’m running Windows 10. See screenshot below. Thanks!Mongo Shell error1490×868 69.9 KB", "username": "Monique_Bennett-Lowe" }, { "code": "", "text": "When you type just mongo on cmd prompt of Windows it connects to default mongod running on port 27017\nAnd your prompt will look like > (greater than symbol)\nWhen you are connecting to Class cluster or your Sandbox cluster it will show MongodbEnterprise>I did not understand what you meant by this “when I type Mongo the prompt returns back Mongo”", "username": "Ramachandra_Tummala" }, { "code": ".mongorc.js", "text": "Hi @Monique_Bennett-Lowe,This text is just for informational purpose.You can use the .mongorc.js file to modify this. You can read more about it here.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Thanks for the modification file. I’ll review then apply.", "username": "Monique_Bennett-Lowe" }, { "code": "", "text": "", "username": "system" } ]
Understanding Mongo connection steps
2020-06-28T23:13:43.742Z
Understanding Mongo connection steps
1,468
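As an illustration of the .mongorc.js approach mentioned above, the legacy mongo shell calls a user-defined prompt function before every input line; the sketch below simply restores a plain ">" prompt. The "MongoDB Enterprise" text itself comes from the edition of the server you connect to, such as an Atlas cluster.

// ~/.mongorc.js on Linux/macOS, or .mongorc.js in the user profile directory on Windows
prompt = function () {
    return "> ";
};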
null
[ "python" ]
[ { "code": "db.coll.find({}, { A:true, B:true, C:true}).hint('A_1_C_1_B_1').explain()coll.find({}, { A: True, B: True, C: True}).hint('A_1_C_1_B_1').explain() # form 1\ncoll.find({}, { A: True, B: True, C: True}, hint='A_1_C_1_B_1').explain() # form 2\n", "text": "Hi all,the following query issued via mongo shell returns the result instantly:db.coll.find({}, { A:true, B:true, C:true}).hint('A_1_C_1_B_1').explain()whereas the same query issued via PyMongo returns result in almost 10 minutes. I tried both forms of supplying the hint, no difference:any ideas why is that and how to fix this issue?PyMongo 3.10.1\nMongoDB 4.4", "username": "Valery_Khamenya" }, { "code": "explain()allPlansExecutioncommand()", "text": "Cursor.explain() uses the default explain verbosity (“allPlansExecution”). See the note in: cursor – Tools for iterating over MongoDB query results — PyMongo 4.3.3 documentationNote: Starting with MongoDB 3.2 explain() uses the default verbosity mode of the explain command, allPlansExecution . To use a different verbosity use command() to run the explain command directly.For some more context see this related ticket: https://jira.mongodb.org/browse/PYTHON-1656", "username": "Shane" } ]
Cursor.explain() extremely slow (PyMongo 3.10.1; MongoDB 4.4)
2020-07-01T13:51:44.342Z
Cursor.explain() extremely slow (PyMongo 3.10.1; MongoDB 4.4)
1,920
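A minimal PyMongo sketch of the workaround described above, running the explain command directly so a cheaper verbosity than allPlansExecution can be chosen, might look like this (the database name is a placeholder; the collection, projection and hint are taken from the question):

from pymongo import MongoClient

db = MongoClient().mydb  # database name assumed

explanation = db.command(
    "explain",
    {
        "find": "coll",
        "filter": {},
        "projection": {"A": True, "B": True, "C": True},
        "hint": "A_1_C_1_B_1",
    },
    verbosity="queryPlanner",  # plan selection only; does not execute all candidate plans
)
print(explanation["queryPlanner"]["winningPlan"])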
null
[ "aggregation" ]
[ { "code": "", "text": "Hi to all,I have a big doubt:\nMy pipeline begins with a match operator and continues with three unwind operators, there is another match, then a sort and limit and terminates with a projection.\nCan I have a performance gain if I add an index on the key used to sort inside the pipeline? An if I add an index on the key used by the second match?Thank you all,\nI appreciate any comments or suggestions.", "username": "Alaskent19" }, { "code": "", "text": "I am afraid that after the unwind no index can be used for the subsequent match and sort. The best would be to find a way to move the all the matches and sorts at the top of the pipeline. But it is not always possible.", "username": "steevej" }, { "code": "", "text": "Welcome to the community, @Alaskent19 !Indeed, as @steevej already have mentioned, after $unwind stage you can’t use indexes.But, if you’re flexible on modifying the aggregation or document schema, you can share:", "username": "slava" } ]
Pipeline and index: performance doubt
2020-07-01T12:16:55.356Z
Pipeline and index: performance doubt
1,380
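As a generic illustration of the advice above (the stages and field names are placeholders, not the poster's actual pipeline), the aim is to pull as much matching, sorting and limiting as possible in front of the first $unwind, where indexes can still be used. Note this only preserves the result when the post-unwind filter does not change which parent documents belong in the top N.

// Hypothetical reordering: everything before the first $unwind can use an index
// such as { status: 1, createdAt: -1 }.
db.coll.aggregate([
    { $match: { status: "active" } },           // index-supported filter
    { $sort: { createdAt: -1 } },               // index-supported sort
    { $limit: 100 },                            // shrink the working set early
    { $unwind: "$items" },                      // from here on, no index use
    { $match: { "items.type": "A" } },          // in-memory filter on the reduced set
    { $project: { _id: 0, "items.sku": 1, createdAt: 1 } }
])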
null
[ "aggregation" ]
[ { "code": "db.orders.findOne();\n{\n \"_id\" : ObjectId(\"5efc6db38cb109193e41c4d3\"),\n \"createdDate\" : ISODate(\"2020-06-25T02:06:25.428Z\"),\n \"data\" : {\n \"nested\" : {\n \"country\" : \"France\"\n },\n \"product\" : \"Product 4\",\n \"latest\" : {\n \"sub\" : {\n \"code\" : \"Code 3\"\n }\n }\n }\n}\ndb.getCollection('orders').aggregate([\n{\n $unwind :{\n path: \"$data.nested.country\"\n }\n},\n{\n $group: {\n _id: { country: \"$data.nested.country\", product: \"$data.product\", code: \"$data.latest.sub.code\" }\n }\n}\n])\n{ \"_id\" : { \"country\" : \"Slovenia\", \"product\" : \"Product 3\", \"code\" : \"Code 7\" } }\n{ \"_id\" : { \"country\" : \"Japan\", \"product\" : \"Product 1\", \"code\" : \"Code 9\" } }\n{ \"_id\" : { \"country\" : \"Japan\", \"product\" : \"Product 4\", \"code\" : \"Code 4\" } }\n{ \"_id\" : { \"country\" : \"China\", \"product\" : \"Product 1\", \"code\" : \"Code 1\" } }\n{ \"_id\" : { \"country\" : \"France\", \"product\" : \"Product 3\", \"code\" : \"Code 4\" } } \n{ \"_id\" : { \"country\" : \"Japan\", \"product\" : \"Product 4\", \"code\" : \"Code 8\" } }\n{ \"_id\" : { \"country\" : \"Japan\", \"product\" : \"Product 4\", \"code\" : \"Code 5\" } }\n{ \"_id\" : { \"country\" : \"Slovenia\", \"product\" : \"Product 4\", \"code\" : \"Code 4\" } }\n{ \"_id\" : { \"country\" : \"Slovenia\", \"product\" : \"Product 4\", \"code\" : \"Code 7\" } }\n{ \"_id\" : { \"country\" : \"Slovenia\", \"product\" : \"Product 4\", \"code\" : \"Code 2\" } }\n{ \"_id\" : { \"country\" : \"China\", \"product\" : \"Product 4\", \"code\" : \"Code 8\" } }\n{ \"_id\" : { \"country\" : \"France\", \"product\" : \"Product 4\", \"code\" : \"Code 4\" } }\n{ \"_id\" : { \"country\" : \"Japan\", \"product\" : \"Product 2\", \"code\" : \"Code 3\" } }\n{ \"_id\" : { \"country\" : \"Japan\", \"product\" : \"Product 2\", \"code\" : \"Code 6\" } }\n{ \"_id\" : { \"country\" : \"Japan\", \"product\" : \"Product 2\", \"code\" : \"Code 3\" } }\n{ \"_id\" : { \"country\" : \"Slovenia\", \"product\" : \"Product 2\", \"code\" : \"Code 9\" } }\n{ \"_id\" : { \"country\" : \"China\", \"product\" : \"Product 2\", \"code\" : \"Code 6\" } }\n{ \"_id\" : { \"country\": \"Japan\", products: [{\"product\":\"Product 2\",\"codes\":[{\"code\":\"Code 3\",\"count\":2},{\"code\":\"Code 6\",\"count\":1]}]\n", "text": "Hope you can help I am new to aggregation queries.I have a nested data structure that I want to group to produce statistical output. I have a set of orders where the order is for a country, product and product code. An order looks like:I have an aggregation query that groups by country, product and code.This produces output such as:I want to group this data by country and then product and then code so, for example, Japan would have a list of products i.e. Product 4, Product 2 inside each there would be a list of codes so “Product 4”: [“Code 8”,“Code 5”,“Code 3”,“Code 6”,“Code 2”] etc. 
Since an order can be made for a Product with a particular code more than once for a country I need I think a map of codes and the counts for each code.", "username": "Sean_Barry" }, { "code": "db.orders.aggregate([\n {\n $group: {\n _id: {\n country: '$data.nested.country',\n product: '$data.product',\n },\n productCodes: {\n $push: '$data.latest.sub.code',\n },\n uniqueCodes: {\n $addToSet: '$data.latest.sub.code',\n }\n }\n },\n {\n $group: {\n _id: '$_id.country',\n country: {\n $first: '$_id.country',\n },\n products: {\n $push: {\n product: '$_id.product',\n codes: {\n $map: {\n // run $filter+$size operations per each code\n input: '$uniqueCodes',\n as: 'code',\n in: {\n code: '$$code',\n count: {\n $size: {\n $filter: {\n // collect same codes into one array\n // to be able to count them per product \n input: '$productCodes',\n cond: {\n $eq: ['$$code', '$$this'],\n },\n },\n },\n },\n },\n },\n },\n },\n },\n },\n },\n // cleanup\n {\n $unset: ['_id'],\n }\n]);\ndb.orders.insertMany([\n {\n '_id' : 1,\n 'data' : {\n 'nested' : {\n 'country' : 'France'\n },\n 'product' : 'Product 1',\n 'latest' : {\n 'sub' : {\n 'code' : 'Code A'\n }\n }\n }\n },\n {\n '_id' : 2,\n 'data' : {\n 'nested' : {\n 'country' : 'France'\n },\n 'product' : 'Product 2',\n 'latest' : {\n 'sub' : {\n 'code' : 'Code B'\n }\n }\n }\n },\n {\n '_id' : 3,\n 'data' : {\n 'nested' : {\n 'country' : 'Canada'\n },\n 'product' : 'Product 1',\n 'latest' : {\n 'sub' : {\n 'code' : 'Code B'\n }\n }\n }\n },\n {\n '_id' : 4,\n 'data' : {\n 'nested' : {\n 'country' : 'Ukraine'\n },\n 'product' : 'Product 2',\n 'latest' : {\n 'sub' : {\n 'code' : 'Code B'\n }\n }\n }\n },\n {\n '_id' : 5,\n 'data' : {\n 'nested' : {\n 'country' : 'Canada'\n },\n 'product' : 'Product 1',\n 'latest' : {\n 'sub' : {\n 'code' : 'Code A'\n }\n }\n }\n },\n {\n '_id' : 6,\n 'data' : {\n 'nested' : {\n 'country' : 'Canada'\n },\n 'product' : 'Product 1',\n 'latest' : {\n 'sub' : {\n 'code' : 'Code A'\n }\n }\n }\n }\n]);\n[\n {\n \"country\" : \"France\",\n \"products\" : [\n {\n \"product\" : \"Product 2\",\n \"codes\" : [\n {\n \"code\" : \"Code B\",\n \"count\" : 1\n }\n ]\n },\n {\n \"product\" : \"Product 1\",\n \"codes\" : [\n {\n \"code\" : \"Code A\",\n \"count\" : 1\n }\n ]\n }\n ]\n },\n {\n \"country\" : \"Canada\",\n \"products\" : [\n {\n \"product\" : \"Product 1\",\n \"codes\" : [\n {\n \"code\" : \"Code B\",\n \"count\" : 1\n },\n {\n \"code\" : \"Code A\",\n \"count\" : 2\n }\n ]\n }\n ]\n },\n {\n \"country\" : \"Ukraine\",\n \"products\" : [\n {\n \"product\" : \"Product 2\",\n \"codes\" : [\n {\n \"code\" : \"Code B\",\n \"count\" : 1\n }\n ]\n }\n ]\n }\n]\n", "text": "Hello, @Sean_Barry! 
Welcome to the community!You can achieve what you want with two sequential $group stagesSample dataset:Aggregation output on sample dataset:", "username": "slava" }, { "code": "db.orders.aggregate([\n {\n $group: {\n _id: {\n country: '$data.nested.country',\n product: '$data.product',\n },\n productCodes: {\n $push: '$data.latest.sub.code',\n },\n uniqueCodes: {\n $addToSet: '$data.latest.sub.code',\n }\n }\n },\n {\n $group: {\n _id: '$_id.country',\n country: {\n $first: '$_id.country',\n },\n products: {\n $push: {\n product: '$_id.product',\n codes: {\n $map: {\n // run $filter+$size operations per each code\n input: '$uniqueCodes',\n as: 'code',\n in: {\n code: '$code',\n count: {\n $size: {\n $filter: {\n // collect same codes into one array\n // to be able to count them per product \n input: '$productCodes',\n cond: {\n $eq: ['$code', '$this'],\n },\n },\n },\n },\n },\n },\n },\n },\n },\n },\n },\n // cleanup\n {\n $unset: ['_id'],\n }\n]);\n", "text": "Thank you very much. That’s great. Works perfectly. I couldn’t have come up with this solution. There’s definitely a lot going on. I thought I might just need another $group.", "username": "Sean_Barry" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to perform multiple group bys in a aggregation query
2020-07-01T13:51:25.944Z
How to perform multiple group bys in a aggregation query
20,101
null
[ "golang" ]
[ { "code": "InsertOne{\n\t\t\"_id\" : ObjectId(\"5ef59232fe31f24ac6b54822\"),\n\t\t\"batchId\" : NumberLong(184486),\n\t\t\"batchName\" : \"sample\",\n\t\t\"createdBy\" : \"createdSample\",\n\t\t\"status\" : \"CREATED\",\n\t\t\"type\" : \"normal\",\n\t\t\"batchType\" : \"Regular\",\n\t\t\"batchStartDate\" : ISODate(\"2020-06-26T00:00:00Z\"),\n\t\t\"batchEndDate\" : ISODate(\"2020-08-20T00:00:00Z\"),\n\t\t\"size\" : 10,\n\t\t\"sector\" : {\n\t\t\t\"id\" : \"18\",\n\t\t\t\"name\" : \"Food\"\n\t\t},\n\t\t\"jobRoles\" : [\n\t\t\t{\n\t\t\t\t\"jobName\" : \"Processed Food\",\n\t\t\t\t\"qpCode\" : \"FIC\",\n\t\t\t\t\"version\" : \"1.0\",\n\t\t\t\t\"nsqfLevel\" : \"6\",\n\t\t\t\t\"jobRoleDesc\" : \"\",\n\t\t\t\t\"attendanceUploaded\" : null,\n\t\t\t\t\"traingingAttendanceSubmitted\" : null,\n\t\t\t\t\"assessmentStartDate\" : ISODate(\"2020-08-23T00:00:00Z\"),\n\t\t\t\t\"assessmentEndDate\" : ISODate(\"2020-08-23T00:00:00Z\"),\n\t\t\t\t\"isPlatformQP\" : false,\n\t\t\t\t\"isBaseQP\" : false,\n\t\t\t\t\"isBatchAssigned\" : {\n\t\t\t\t\t\"masterTrainer\" : false,\n\t\t\t\t\t\"assessmentAgency\" : false\n\t\t\t\t},\n\t\t\t\t\"isRejected\" : {\n\t\t\t\t\t\"masterTrainer\" : false,\n\t\t\t\t\t\"assessmentAgency\" : false\n\t\t\t\t},\n\t\t\t\t\"sector\" : {\n\t\t\t\t\t\"id\" : \"12\",\n\t\t\t\t\t\"name\" : \"Food Processing\"\n\t\t\t\t},\n\t\t\t\t\"trainingHoursPerDay\" : 6,\n\t\t\t\t\"jobRoleCategory\" : \"3\",\n\t\t\t\t\"initialAssessmentStartDate\" : ISODate(\"2020-08-23T00:00:00Z\"),\n\t\t\t\t\"initialAssessmentEndDate\" : ISODate(\"2020-08-23T00:00:00Z\"),\n\t\t\t\t\"overrideQpHours\" : NumberLong(280)\n\t\t\t}\n\t\t],\n}\nbsonxbsonx.JavaScript", "text": "Is there any way where we can insert a BSON Mongo record using go-mongo -> InsertOne method?I have a record like this:I want to insert this document into MongoDB without the use of struct, wherein the above document will be assigned to a go variable and the record will be inserted using that variable.Is there any way to achieve this?Also, I have seen that the bsonx package has a bsonx.JavaScript. Is there any way to inset a bsonx datatype into MongoDB using go", "username": "Harshavardhan_Kumare" }, { "code": "collection.find()bsonxprimitive.JavaScriptdoc := bson.D{\n {\"jsKey\", primitive.JavaScript(\"function() { sleep(1000); }\")},\n}\n", "text": "Hi @Harshavardhan_Kumare,What format is this record in? The example record you gave looks like something printed out by calling collection.find() in the mongo shell, but it’s important for us to know what the exact format is.For your second question, bsonx is a set of experimental APIs that we don’t recommend for regular use because there’s no stability guarantees. You can use the public primitive.JavaScript type (primitive package - go.mongodb.org/mongo-driver/bson/primitive - Go Packages) to insert BSON JavaScript values. For example:– Divjot", "username": "Divjot_Arora" }, { "code": "findOnebson.M", "text": "Hi @Divjot_Arora,You are right. The example record I gave is the output from a Mongo Shell findOne command. I want to test the CRUD operations I have written in GoLang. 
I want to insert many example records for each CRUD operations and since there are so many sample records that I want to insert for testing, it is very difficult for me to convert those example documents to bson.M.Is there any workaround in achieving this?Or, is there a better way of testing the update and delete operations?Thanks a ton for your response.", "username": "Harshavardhan_Kumare" }, { "code": "Collection.FindCursor.Allbson.D", "text": "If these records already exist in your database, you can use Collection.Find during your test setup to fetch these documents from your collection and store them in a slice. Your tests can then use one or more of these documents to call Update/Delete/etc. I’d also recommend looking into the Cursor.All method to quickly convert all documents into a slice of something like bson.D.", "username": "Divjot_Arora" }, { "code": "", "text": "Thanks for the idea @Divjot_Arora, this looks like a viable option.", "username": "Harshavardhan_Kumare" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to insert BSON record using Go Driver
2020-06-28T11:17:59.642Z
How to insert BSON record using Go Driver
3,777
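For the original question, inserting a document without declaring a struct, a sketch using bson.M / bson.A with the v1-series Go driver is shown below. The URI, database and collection names are placeholders, and only a few fields from the sample document are reproduced.

package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	coll := client.Database("test").Collection("batches")

	// No struct: bson.M for documents, bson.A for arrays, time.Time for ISODate values.
	doc := bson.M{
		"batchId":        int64(184486),
		"batchName":      "sample",
		"status":         "CREATED",
		"batchStartDate": time.Date(2020, 6, 26, 0, 0, 0, 0, time.UTC),
		"sector":         bson.M{"id": "18", "name": "Food"},
		"jobRoles": bson.A{
			bson.M{"jobName": "Processed Food", "qpCode": "FIC", "nsqfLevel": "6"},
		},
	}

	res, err := coll.InsertOne(ctx, doc)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("inserted _id:", res.InsertedID)
}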
null
[]
[ { "code": "{\n \"_id\" : ObjectId(\"5ed0d70c009dab1100a6209c\"),\n \"name\" : \"toto\",\n \"permissions\" : {\n \"read\" : [ \n \"read1\"\n ],\n \"write\" : [ \n \"write1\"\n ],\n },\n}\nfind_onefind_one_and_updateguid$cond", "text": "Hi everyone!Before describing my issue I’d like to give you a bit of context. I am working on a GraphQL server coded in Rust that uses MongoDB Rust crate as a Database driver. One of the main goals of this project is to have a very safe permissions system.My problem is related to write permissions.\nLet’s say I have the following object:When a user tries to update this type of object, before the server actually updates it, it needs to make sure that:Now the problem with this is that it requires 2 Database interactions, a find_one for step 1 (to match the object itself) and a find_one_and_update for step 2 (to match its permissions to the user permissions and update if they are correct).\nBut this means that if user A is trying to update an object, and user B is trying to delete the same object, user B delete request might go off between user A step 1 and step 2 of its update. And that would lead to an “insufficient permissions” error when in reality the object does not exist.So what I wanted to do was generating a guid before making the update, and then adding it to the updated object to check the result and ensure that the update went through (this would remove step 1 from the process and allow use to avoid mutex in our code since it is a big performance decrease). But this requires to be able to make a conditional update based on the field value and from what I’ve read there is no way to do that ($cond is not available as an update operator: https://docs.mongodb.com/manual/reference/operator/update-field/).So my question is would there be a way to have some sort of field-based conditional object update using MongoDB? I also wonder if there would be a way to find a solution to my problem using aggregation pipelines but I haven’t found much success in that either.", "username": "Thomas_Plisson" }, { "code": "db.test1.updateOne({ _id: ObjectId(\"5ed0d70c009dab1100a6209c\")}, [\n {\n $set: {\n name: {\n $cond: {\n if: {\n // check if user has required permission\n $in: [userPerm, '$permissions.write']\n },\n then: 1, // overwrite prop value\n else: '$name', // return origin value\n }\n }\n }\n }\n]);\n{ \"acknowledged\" : true, \"matchedCount\" : 1, \"modifiedCount\" : 0 }\n", "text": "Hello, @Thomas_Plisson! Welcome to the community!But this means that if user A is trying to update an object, and user B is trying to delete the same object, user B delete request might go off between user A step 1 and step 2 of its update. And that would lead to an “insufficient permissions” error when in reality the object does not exist.Due to the concurrency docs, write operations will be put in a queue and execute one by one on Mongo server. That means, it delete operation was issued after update operation, it will execute after update operation.would there be a way to have some sort of field-based conditional object update using MongoDB?Since Mongo v4.2, update operations can use aggregation pipelines for updates. 
That means, that you can use $cond inside it:In my example, I used updateOne() method, because it returns convenient for you situation result:This output means that 1 document matched by provided query, but it was not updated - either it already had the required value before update (it was modified long time ago or just now by another user) or because user has not enough permissions (due to the code logic).You can achieve then same with .findOneAndUpdate() method and { returnNewDocumet: true } option by comparing your value with the one returned with method.You can not differentiate those two cases (already same value, not enough permissions) using this approach, especially if there is a high chance that same value can be updated by multiple users.The thing is, that you’re trying to assign too much work on 1 operation, such over-optimizations can lead to code complications. In your case, it would be totally OK to have 2 requests do database:You may end up with only 1 request anyway, if user has not enough permissions or the document already had desired value ", "username": "slava" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
"Conditional" object update
2020-07-01T13:52:11.960Z
&ldquo;Conditional&rdquo; object update
28,166
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "I have two collections like\nBusiness = {\n_id : ObjectID,\nname: string\n}\nand\nFeatures ={\nbid: ObjectID // it is _id of businesses\nfeatures: Array of string\n}And I need to build query that find all businesses with special feature.\nFirst solution is to do 2 queries: first query is features.Find(features:“my_feature”) and second to get all businesses with _id from list, that I get on first step.\nSecond solution is to build single query with $lookup and $match.\nBut I don`t know what is best solution for such situation.I came from RDMS, where such problem usually is solved by single query like\nSELECT * FROM businesses INNER JOIN features ON businesses._id=features.bid\nWHERE ‘my_feature’ IN features.features", "username": "Roman_Buzuk" }, { "code": "db.businesses.insertMany([\n {\n _id: 1,\n name: 'b1',\n features: ['f1', 'f2', 'f3'],\n },\n {\n _id: 2,\n name: 'b2',\n features: ['f2', 'f3', 'f10'],\n }\n]);\ndb.businesses.find({ feature: 'f1'}, { features: false });\n", "text": "Hello, @Roman_Buzuk! Welcome to the community!I think in your case it would be better to merge those collections into 1, like this:Later, you can get your busineses objects like this:This is because denormalized data schema is preferred in MongoDB.Learn more:", "username": "slava" }, { "code": "", "text": "Thanks for your answer.\nI understand, that such scheme can be better, but I can`t change scheme. So I should work with 2 collections", "username": "Roman_Buzuk" }, { "code": "db.features.aggregate([\n {\n $match: {\n features: 'f3',\n }\n },\n {\n $group: {\n _id: null,\n businessesIds: {\n $addToSet: '$bid',\n }\n }\n },\n {\n $lookup: {\n from: 'businesses',\n localField: 'businessesIds',\n foreignField: '_id',\n as: 'businesses',\n }\n },\n {\n $unwind: '$businesses',\n },\n {\n $replaceRoot: {\n newRoot: '$businesses'\n },\n }\n]);\n", "text": "First solution is to do 2 queries: first query is features.Find(features:“my_feature”) and second to get all businesses with _id from list, that I get on first step.\nSecond solution is to build single query with $lookup and $match.\nBut I don`t know what is best solution for such situation.I would go with aggregation solution (see example below).\nIt may require a bit more RAM for $unwind and $replaceRoot stages, but the result can be achieved faster using only 1 request to the database.", "username": "slava" }, { "code": "", "text": "Yes, I came with similar solution.\nThanks.", "username": "Roman_Buzuk" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lookup vs two queries
2020-07-01T08:57:44.879Z
Lookup vs two queries
8,697
null
[ "aggregation", "stitch" ]
[ { "code": "", "text": "Setting the scene:-\nI am writing functions to update fields within a collection based on values in multiple separate collections. Most of the updates are of the form if fieldA is null lookup a value from table 2. set fieldA = looked_up_value_from_table2\nTo do this I am using an aggregation pipeline followed by a merge.\nProblem.\nWhile the merge is setting the right value it is also overwriting some data I expected it to be leaving. E.g If I merge value ‘content.labels.name’ in the json copied below, the whole of content branch is overwritten.{\n“_id” : ObjectId(“5b0e9ca36f72c14f486e4e4d”),\n“team_id” : 28,\n“service_name” : “aaa”,\n“updated_at” : ISODate(“2019-05-30T00:00:00.000Z”),\n“created_at” : ISODate(“2018-05-30T00:00:00.000Z”),\n“document_type” : “questionnaireResponse”,\n“content” : {\n“questionnaire_uuid” : “9bf3daa775d3d79758f10186679aaf1baccb3eb219176d69ac84c15e77e9d3e3”,\n“questionnaire_name” : “B1234”,\n“status” : “expired”,\n“labels” : {\n“referral_uuid” : “99999”,\n“context” : “rrh_as”\n}\n},\n“basfiFound” : true\n}\nmerge with values for _id and ‘content.labels.name’ results in\n{\n“_id” : ObjectId(“5b0e9ca36f72c14f486e4e4d”),\n“team_id” : 28,\n“service_name” : “aaa”,\n“updated_at” : ISODate(“2019-05-30T00:00:00.000Z”),\n“created_at” : ISODate(“2018-05-30T00:00:00.000Z”),\n“document_type” : “questionnaireResponse”,\n“content” : {\n“labels” : {\n“name” : “new value”\n}\n},\n“basfiFound” : true\n}\nI lose the whole of the content branch\nIs there a way to get merge to not overwrite the parts of the branch not being updated? … or is there a better way to do this not using merge?", "username": "Neil_Albiston1" }, { "code": "db.test1.aggregate([\n {\n $lookup: {\n from: 'test2',\n localField: 'content.labels.context',\n foreignField: 'context',\n as: 'joined',\n },\n },\n {\n // transform array to object\n // for the case, when we have 1:1 relations between test1 and test2\n $addFields: {\n joined: {\n $arrayElemAt: ['$joined', 0],\n },\n },\n },\n {\n $addFields: {\n 'content.labels.name': {\n $cond: {\n if: {\n $eq: [{ $type: '$content.labels.name' }, 'missing'],\n },\n // set name if it is does not present in original doc\n then: '$joined.name',\n // return its current value\n else: '$content.labels.name',\n },\n },\n },\n },\n {\n // remove temp props\n $unset: ['joined'],\n },\n {\n // output result into other collection\n $merge: {\n into: 'test3',\n },\n },\n]);\ndb.test1.insertMany([\n {\n status: 's1',\n content: {\n labels: {\n context: 'c1',\n },\n },\n },\n {\n status: 's2',\n content: {\n labels: {\n context: 'c2',\n name: 'n2',\n },\n },\n },\n]);\n\ndb.test2.insertMany([\n {\n context: 'c1',\n name: 'n1',\n flag: 1,\n },\n]);\n[\n {\n \"_id\" : ObjectId(\"5ef9dbe9a1bfc8b19c3b4cf1\"),\n \"content\" : {\n \"labels\" : {\n \"context\" : \"c1\",\n \"name\" : \"n1\"\n }\n },\n \"status\" : \"s1\"\n }\n {\n \"_id\" : ObjectId(\"5ef9dbe9a1bfc8b19c3b4cf2\"),\n \"content\" : {\n \"labels\" : {\n \"context\" : \"c2\",\n \"name\" : \"n2\"\n }\n },\n \"status\" : \"s2\"\n }\n]\n", "text": "I lose the whole of the content branch\nIs there a way to get merge to not overwrite the parts of the branch not being updated? 
… or is there a better way to do this not using merge?What do you mean by ‘merge’?\n$merge aggregation pipeline stage?\n$mergeObjects aggregation pipeline operator?When you update nested properties in an aggregation pipeline or using update operations, refer to your prop using dot notation: { ‘prop.nestedObj.propA’: value }.Have a look at this aggregation example:The above aggregation uses this initial data:As a result, ‘test3’ collection has the following data:As you can see, field ‘name’ is updated only when it is necessary, and other props inside ‘labels’ object are not lost.", "username": "slava" }, { "code": "", "text": "@slava , Thank you for that alternative. This is exactly the behaviour I expected from $merge ( based on the documentation.)\nI noticed you used … $eq: [{ $type: ‘$content.labels.name’ }, ‘missing’]\nif this the same as … ‘content.labels.name’: {$exists: true }\n…or … ‘content.labels.name’: { $not: { $type: 10 }, $exists: true }\nIs there any difference between these …or they all just check the field exists", "username": "Neil_Albiston1" }, { "code": "$eq: [{ $type: ‘$content.labels.name’ }, ‘missing’]\ndb.test4.insertMany([\n {\n _id: 'A',\n name: 'Bilbo',\n age: null,\n },\n {\n _id: 'B',\n name: 'Frodo',\n surname: 'Baggins',\n },\n]);\ndb.test4.aggregate([\n {\n $project: {\n typeOfName: {\n $type: '$name',\n },\n typeOfSurname: {\n $type: '$surname',\n },\n typeOfAge: {\n $type: '$age',\n },\n },\n },\n]).pretty();\n[\n {\n \"_id\" : \"A\",\n \"typeOfName\" : \"string\",\n \"typeOfSurname\" : \"missing\",\n \"typeOfAge\" : \"null\"\n }\n {\n \"_id\" : \"B\",\n \"typeOfName\" : \"string\",\n \"typeOfSurname\" : \"string\",\n \"typeOfAge\" : \"missing\"\n }\n]\na) db.test4.find({ surname: { $exists: false } });\nb) db.test4.find({ surname: null });\n// all these queries will return document A only.\nc) db.test4.find({ age: { $type: 10 } }); \nd) db.test4.find({ age: { $type: 'null' } });\n// notice, that 'null' is just textual alias for '10'.\n// all these queries will return document A only.\ne) db.test4.find({ age: null }); \n/* this query will return both documents: A and B.\n yes, null with match missing fields, \n and ones that have null for their value\n*/\ndb.test4.find({ surname: { $type: 'string' }}); // => returns only doc B\ndb.test4.find({ age: { $type: 'null' }}); // => returns only doc A\ndb.test4.find({ age: { $type: 'missing' }}); // throws error\n", "text": "Good question!Well, { $exists: true } can be used only in queries for matching properties by their presense in a document.\nWhile $type can be used to match values by their type or to extract type from value.In the excerpt below, operator $type is used to extract value from ‘$content.labels.name’ prop, and then $eq operator compares it to the ‘missing’ string.Better to explain them in example. Consider, we have this dataset:As I wrote above, $type can be used to get the type of a value. Like this:The output of above aggregation will be this:Now, let’s review how we can use $type, and $exists operators in queries:To get documents, that does not have some propTo get documents, with a prop, that has null for its valueTo get documents, that have prop, that is either: missing or equal to nullNote, that ‘null’ and ‘string’ are names of the types, while ‘missing’ is just a string, that tells you that property value does not have a type (because property does not exist, it is ‘missing’). 
That means that you can do queries like this:But you can not use a query like this (because there is no such type as ‘missing’):", "username": "slava" }, { "code": "{ $merge: {into:'existingcollection',\n whenMatched:[{$set:{\"content.labels.name\":\"$$new.content.labels.name\"}}]\n}}", "text": "I also found a solution using merge.\nBy default if using path notation to update a value, such as ‘branch.twig.leaf’, merge does replace the entire branch… not just the leaf.The following will just replace, or add , the leaf. Leaving the branch intact.", "username": "Neil_Albiston1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Merge from multiple tables
2020-06-29T11:23:53.749Z
Merge from multiple tables
9,636
null
[]
[ { "code": "sudo mongod --repairsudo systemctl enable mongodsudo systemctl start mongodsudo mongod --config /etc/mongodb.conf", "text": "Hi All,We have install mongodb in Ubuntu 18.04 LTS. The system was working fine. mongodb service was auto started during reboot.We ran\nsudo mongod --repair , After this the mongod service is not autostarting andWe want to autostart mongodb using systemctl. Kindly guide me to fix the auto start issue.Regards,\nBala", "username": "bala_murugan" }, { "code": "systemctl status mongodjournalctl -u mongod/var/log/mongodsudo mongod --config /etc/mongodb.confchown -R mongodb: /var/lib/mongodb /var/log/mongodb", "text": "You have a few places to look for information.First off systemctl status mongod may have some useful messages .\nNext you have journalctl -u mongod.\nIf mongod managed to start up enough to start logging then you will find additional logs in /var/log/mongodThis mostly assume installation and default(ish) config from the official mongodb repository and packages.But When I execute sudo mongod --config /etc/mongodb.conf , Mongo db works well.This is likely your issue, mongod will usually run under the account mongodb. If you have run it via sudo some files and or directories are owned by root. One of the above commands will produce logs to that effect.You will have to recursively change ownership of the datadirectory and the log directory.\nchown -R mongodb: /var/lib/mongodb /var/log/mongodb", "username": "chris" }, { "code": "", "text": "Hi @bala_murugan\none word of warning: even when it sounds so simple to run the mongod as root. This will “fix” your problems only for short. It is absolutely not recommended to run mongod as root. Please follow the hints from Chris.\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB service autostart failed on Ubuntu 18.04
2020-06-30T23:46:35.750Z
MongoDB service autostart failed on Ubuntu 18.04
9,973
null
[ "dot-net" ]
[ { "code": "", "text": "Hi Team,I am facing issue with my existing query after i upgraded to mongodb driver 2.10.3 from 1.9, the query which i am facing issue given belowIQueryable query = mongoDatabase.GetCollection(“MonitorProfileDB”).AsQueryable().\nWhere(a => a.MonitorUser.GetType() == typeof(DefaultMonitorUserInfo));\nMongoDB Driver 1.9 :- query sending to mongdb server like below{{ “MonitorUser._t.0” : { “$exists” : false }, “MonitorUser._t” : “DefaultMonitorUserInfo” }}MongoDB Driver 2.10.3 :- query sending to mongodb server like below{{ “MonitorUser._t.0” : { “$exists” : false }, “MonitorUser._t” : “MongoDriver1._9.DefaultMonitorUserInfo, MongoDriver1.9” }}Note :- because of the _t value sending full assembly name data not filtering properly.Kindly please help me to resolve this issue.", "username": "Devaraj_S" }, { "code": "_t", "text": "Hi @Devaraj_S, welcome!Note :- because of the _t value sending full assembly name data not filtering properly.The _t value is related to the discriminator used in polymorphism. This would depends on how your class definition is written.It would be useful for others, if you would provide a minimal reproducible code that others could try to reproduce the issue between the two versions of the driver.Regards,\nWan.", "username": "wan" }, { "code": "public interface IInterface \n{\n\n}\n\n[BsonDiscriminator(\"ClassA\")]\npublic class ClassA: IInterface\n{\n [BsonElement(\"Property1\")]\n public Int64 Property1{ get; set; }\n\n [BsonElement(\"Property2\")]\n public Int64 Property2{ get; set; }\n}\n", "text": "Hi Wan,Thanks for the reply, here i given the elements which you requested,My Document entity structure :-public class Entity\n{\n[BsonElement(\"Property1 \")]\npublic IInterface Property1 { get; set; }\n}Sample Document :-{\n“_id” : ObjectId(“5bf8c6ca234d5f26944727fd”),\n“_t” : “Entity”,\n“Property1” : {\n“_t” : “ClassA”,\n“Property1” : NumberLong(0),\n“Property2” : NumberLong(10004)\n}\n}Failed Query in the new .Net mongodb driver 2.10.3 :-IQueryable query = mongoDatabase.GetCollection(“EntityCollection”).AsQueryable().\nWhere(a => a.Property1.GetType() == typeof(ClassA));MongoDB Driver 1.9 :- query sending to mongdb server like below{{ “Property1._t.0” : { “$exists” : false }, “Property1._t” : “ClassA” }}MongoDB Driver 2.10.3 :- query sending to mongodb server like below{{ “Property1._t.0” : { “$exists” : false }, “Property1._t” : “Folder1.Folder2.ClassA, NameSpace” }}I hope given details what you have requested, could you please let me know if there are any queries or details if you required.", "username": "Devaraj_S" }, { "code": "", "text": "Hi Team,Is there anything i can do for the above problem which i am facing right now for the mongoDB driver upgrade.", "username": "Devaraj_S" } ]
Problem after upgrading latest version of MongoDB .NET driver
2020-06-02T20:59:09.832Z
Problem after upgrading latest version of MongoDB .NET driver
1,758
null
[ "atlas-device-sync" ]
[ { "code": "// Write to the realm. No special syntax required for synced realms.\ntry! realm.write {\n realm.add(Task(partition: partitionValue, name: \"My task\"))\n}\n", "text": "I may be overlooking something in the Sync Data getting started guide but it saysTherefore, it’s a best practice never to write on the main thread when using Realm Sync.but then the example code writes data on the main thread.and indicates there’s no special syntax required even though it’s not supposed to be done that way.Is there clarification on that and perhaps some best practice code?Also, previously with realm when writing large amounts of data the write would be wrapped in an autorelease pool - is that still the case?", "username": "Jay" }, { "code": "", "text": "@Jay Yes there is a tradeoff between creating a Tutorial app that shows the bare minimum of code and then a best practices app that contains all the recommended best practices. For instance, every write should likely be wrapped in do/try/catch block but we don’t put that in the Tasks app because it would make the code verbose and we assume that every mobile developer should be following standard mobile development best practices - this is not specific to realm but is a best practice for any mutation to the data layer.Is there clarification on that and perhaps some best practice code?You can find a sample on SO - you should use an autoReleasePool, open a realm, make a write, and then make sure to close the realm by setting the realm variable to null when you are done with the write.", "username": "Ian_Ward" }, { "code": "", "text": "@Ian_WardThanks for the clarification.Since MongoDB Realm is a different product than Realm in many ways and the Realm Documentation specifically calls for the use of autorelease pools as well as how to handle threads when syncing, I thought it best to ask than to assume.", "username": "Jay" }, { "code": "", "text": "I understand why you would simplify the first steps tutorial, but it’s still not clear to me how to architecture an iOS app correctly for MongoDB Realm sync (given that writes on the main thread are a no-no).A demo project (like Tasks, but demonstrating best practices) would be very helpful.", "username": "zaco" } ]
Writing Sync Data on main thread
2020-06-20T14:09:43.028Z
Writing Sync Data on main thread
1,955
null
[ "cxx" ]
[ { "code": "", "text": "I retrieve the following document from a mongoDB query.\nI would like to access the values in the array within the “profile” key value pair, however i cannot figure out from the documentation how to do this.the document I get back from the mongodb query is:{ “_id” : { “$oid” : “5ef5fd085e27211f15d8f842” }, “username” : “arif”, “password” : “aspass”, “designer” : true, “status” : “active”, “profile” : [ { “FunctionalArea” : “creditRisk”, “AuthLevel” : 3 }, { “FunctionalArea” : “frontOffice”, “AuthLevel” : 1 } ] }in the above json structure, the key: “profile” contains a list of json objects. it is the elements in the json object that i would like to retrieve, i.e. FunctionalArea: frontOffice etc.so far I have tried to do the following:auto cursor = collection.find(queryDoc.view());\nfor (auto doc : cursor) {\nbsoncxx::types::b_array profile=doc[“profile”].get_array();up to this point, the code works fine, however i am now unable to figure out how to access the elements in the json objects contained in the array.Any help would be greatly appreciated.", "username": "arif_saeed" }, { "code": "for (auto doc : cursor) {\n bsoncxx::document::element profiles = doc[\"profile\"];\n if (profiles && profiles.type() == bsoncxx::type::k_array){\n bsoncxx::array::view profile{profiles.get_array().value};\n for (bsoncxx::array::element subdocument : profile){\n std::cout<< subdocument[\"FunctionalArea\"].get_utf8().value << std::endl;\n }\n }\n}\n", "text": "Hi @arif_saeed,it is the elements in the json object that i would like to retrieve, i.e. FunctionalArea: frontOffice etc.To inspect the array, you can retrieve the elements as bsoncxx::document::element which then you can inspect further. For example:Regards,\nWan", "username": "wan" } ]
How to access array elements from document using bsoncxx
2020-06-29T14:36:33.007Z
How to access array elements from document using bsoncxx
4,530
null
[ "field-encryption" ]
[ { "code": "", "text": "So we have successfully deployed docker containers based on the Node JS Official Debian Buster image with some helpful instructions by (@wan Is Client-Side Field Level Encryption supported with Atlas? - #5 by Stennie_X). Naturally the official debian image results in a 1GB container, the solution to this is to use the Alpine linux package. Alpine, being a base image does not come with all the dependencies required to install the mongocryptd process. Does anyone have any experience getting this process to run or getting it installed in alpine linux?David", "username": "David_Stewart" }, { "code": "node:slim node:alpine", "text": "Hi @David_Stewart,Have you tried customising the node:slim image up to specification?Its only ~50MB larger than node:alpine.", "username": "chris" }, { "code": "", "text": "Hey Chris! We have not, will likely need to also install some dependencies in order to get the mongocryptd process installed. I am hoping to get back on it tonight and will report back.David", "username": "David_Stewart" }, { "code": "mongocryptdglibc", "text": "Hi @David_Stewart,Does anyone have any experience getting this process to run or getting it installed in alpine linux?Currently mongocryptd is not supported on Alpine Linux, I’ve opened an issue tracker SERVER-49140 for this. Feel free to add yourself as a watcher or up-vote the ticket to receive notification on it.Naturally the official debian image results in a 1GB containerThat’s not only because of Debian though, very likely because there are other things on top of it. You can try other glibc images, for example the base Docker image for Ubuntu 18.04 LTS is only ~64MB total, it’s bigger than Alpine but still quite small. Try to combine the layers to reduce the size, and remove unneeded dependencies if possible.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thanks for the update Wan, I was able to get the docker image running on the slim version of debian as @chris recommended. I am going to do a bit more testing then will posts it up on the forum for others.David", "username": "David_Stewart" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Docker + NodeJS + Alpine Distro + CSFLE & mongodb-enterprise-cryptd
2020-06-30T19:42:14.383Z
Docker + NodeJS + Alpine Distro + CSFLE & mongodb-enterprise-cryptd
4,872
null
[ "swift" ]
[ { "code": "", "text": "Was just wondering if anyone have built a watch app with Realm and if so, how you handle transferring data between the watch app and iOS app? Is there a good way to do this using Realm?I know that the realm cloud sync doesn’t work since it requires a web socket connection. So I guess I need to transfer the data to the iOS app using the watch connectivity framework. Anyone found a good way to do this?", "username": "Simon_Persson" }, { "code": "", "text": "No responses… I guess this means no? ", "username": "Simon_Persson" }, { "code": "", "text": "The only info I can find regarding this is a few years old and there was no way to do it then, has anyone got any recent info on this?Btw, is there a way to access the legacy Realm forum? It held info which would still be useful now.", "username": "Daniel_Kristensen" }, { "code": "", "text": "I haven’t seen any info either. The articles on the Realm site are from 2015.I found a couple of issues on github:I have a watchOS app running with realm. Everything works fine in simulator. But… fails in the real device.<!---\n\n**Questions**: If you have questions about how to use Realm, ask on\n[Stac…kOverflow](http://stackoverflow.com/questions/ask?tags=realm).\nWe monitor the `realm` tag.\n\n**Feature Request**: Just fill in the first two sections below.\n\n**Bugs**: To help you as fast as possible with an issue please describe your issue\nand the steps you have taken to reproduce it in as many details as possible.\n\n-->\n\n## Goals\nUse realm swift in an apple watch app\n\n## Expected Results\nIt should work ?\n\n## Actual Results\nThe sandbox of the apple watch prevents the connection to the realm.\n\n## Steps to Reproduce\nCreate an apple watch app and try to connect to a realm with it on a **real device**. It **works** on the **simulator** but **not** on a **real device**. \n\nThis is what you will get:\n```\n2017-12-10 12:50:24.432939+0100 RealmAWBug WatchKit Extension[373:540595] dnssd_clientstub ConnectToServer: connect()-> No of tries: 1\n2017-12-10 12:50:24.560815+0100 RealmAWBug WatchKit Extension[373:540433] Unbalanced calls to begin/end appearance transitions for <PUICNavigationController: 0x162f1a00>.\n2017-12-10 12:50:25.438822+0100 RealmAWBug WatchKit Extension[373:540595] dnssd_clientstub ConnectToServer: connect()-> No of tries: 2\n2017-12-10 12:50:26.443064+0100 RealmAWBug WatchKit Extension[373:540595] dnssd_clientstub ConnectToServer: connect()-> No of tries: 3\n2017-12-10 12:50:27.448825+0100 RealmAWBug WatchKit Extension[373:540595] dnssd_clientstub ConnectToServer: connect() failed path:/var/run/mDNSResponder Socket:14 Err:-1 Errno:1 Operation not permitted\nSync: Connection[1]: Failed to resolve 'Your address': Host not found (authoritative)\n```\n\n## Code Sample\n<!---\nProvide a code sample or test case that highlights the issue.\nIf relevant, include your model definitions.\nFor larger code samples, links to external gists/repositories are preferred.\nAlternatively share confidentially via mail to [email protected].\nFull Xcode projects that we can compile ourselves are ideal!\n-->\nhttps://github.com/TheNoim/RealmAWBug\nIt dose not really depends on code. 
But here is an example project\nThis is what I found: https://stackoverflow.com/questions/41219976/apple-watch-kit-wouldnt-fetch-image\n\n\n## Version of Realm and Tooling\n<!---\n[In the CONTRIBUTING guidelines](https://git.io/vgxJO), you will find a script,\nwhich will help determining some of these versions.\n-->\n```\nProductName: Mac OS X\nProductVersion: 10.13.1\nBuildVersion: 17B1003\n\n/Applications/Xcode.app/Contents/Developer\nXcode 9.2\nBuild version 9C40b\n\n/usr/local/bin/pod\n1.4.0.beta.2\nRealm (3.0.2)\nRealmSwift (3.0.2)\n\n/bin/bash\nGNU bash, version 3.2.57(1)-release (x86_64-apple-darwin17)\n\ncarthage not found\n(not in use here)\n\n/usr/local/bin/git\ngit version 2.15.0\n```From what I can see, sync doesn’t work as it relies on websockets. But my understanding is that it should be possible to build apps without sync. I hope improving watch support is something that is on the roadmap. Many of my competitors have watch apps already ", "username": "Simon_Persson" }, { "code": "", "text": "@Simon_Persson Sync does not work on watchOS but local Realms do - you can use a bluetooth library or similar to transfer data to the local mobile app which could be used with Sync.", "username": "Ian_Ward" }, { "code": "", "text": "Thanks @Ian_Ward ! Yes, I assume that this is the way to go, but it does rule out standalone apps on the watch. Hmm shouldn’t it be possible to post data to atlas using the new graphql apis and achieve sync that way instead?", "username": "Simon_Persson" }, { "code": "", "text": "Yeah that should be possible but I think the watch has a non-standard networking stack so I’m not sure if all networking libraries will integrate, this is part of the reason we don’t have realm sync out of the box on watchOS.", "username": "Ian_Ward" }, { "code": "", "text": "I think it might work. WatchOS doesn’t support websockets, but it seems like appollo-ios works on the apple watch: Add watchOS support to podspec · Issue #153 · apollographql/apollo-ios · GitHub.If I change the datamodel so that the data to sync is only a single document then I think it might work. You won’t get real-time sync this way, but I think it might work for some use-cases. Worth a try!", "username": "Simon_Persson" }, { "code": "", "text": "Are there any plans for supporting sync on watchOS?", "username": "Daniel_Kristensen" }, { "code": "", "text": "I know one reason that has been stated before is that WatchOS didn’t support websockets, but that doesn’t seem to be the case anymore:Creates a WebSocket task for the provided URL.Personally I’d love to see sync for WatchOS. A lot of my competitors already have watch support, but judging by AppStore reviews they all struggle with sync. 
If MongoDB Realm would solve this it would be a game changer", "username": "Simon_Persson" }, { "code": "", "text": "One issue that was discussed some years ago was that the size limit for swift apps for watchOS made it impossible to add the Realm frameworks(Realm on the Apple Watch with a Swift project is not feasible · Issue #5203 · realm/realm-swift · GitHub), although at the time the limit was 50 mb, and I think last year it was 85 mb so maybe this is no longer an issue.But using Realm without sync is not an option.I’m in the same situation as you where every comparable app in the same category has a watchOS version so I would hope that it’s on the roadmap.", "username": "Daniel_Kristensen" }, { "code": "URLSessionWebSocketTask", "text": "The addition of URLSessionWebSocketTask on watchOS 6.2 seems to provide the needed WebSocket functionality in terms of transport. Not sure if there are any auth limitations.@Ian_Ward Is there any plan to move to URLSessionWebSocketTask and a flow that will sync from the watch?We’ve paused production on our watch counterpart until we can sync. Thanks.", "username": "Andres_Canella" }, { "code": "", "text": "Thanks - I saw that. There are still other considerations so if sync from watchOS is something you are interested in please make a feature request here https://feedback.mongodb.com/ and we can consider it as part of roadmap planning.", "username": "Ian_Ward" }, { "code": "", "text": "Just submitted a feature request ", "username": "Simon_Persson" }, { "code": "", "text": "If anyone wants to upvote & watch the feature request @Simon_Persson raised, it is: Support Apple Watch (Sync).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Anyone built a watch app on Realm?
2020-02-21T22:49:24.897Z
Anyone built a watch app on Realm?
5,241
null
[ "text-search" ]
[ { "code": "", "text": "I have a full text search on this collectiondb.cve_2014.createIndex( { product: “text” } )When I do a search below I get more results then I should.db.cve_2014.find( { $text: { $search: “nx-os” } } )I have no idea what I am doing wrong.", "username": "Arthur_Gressick" }, { "code": "", "text": "So I changed the query to this below and it works perfectly but I have no idea why this is workingdb.cve_2014.find( {$text: { $search: “/nx-os0/i” } } )", "username": "Arthur_Gressick" }, { "code": "", "text": "So this is weird I used a search like thisdb.cve_2014.find( { $text: { $search: “junos” } })And this returned “juno” items, I can’t figure out why it is returning partial results.", "username": "Arthur_Gressick" }, { "code": "explain()parsedTextQuery$searchnx-osnxosdb.cve_2014.find( { $text: { $search: \"nx-os\" } } ).explain().queryPlanner.winningPlan.parsedTextQuery\n{\n\t\"terms\" : [\n\t\t\"nx\",\n\t\t\"os\"\n\t],\n\t\"negatedTerms\" : [ ],\n\t\"phrases\" : [ ],\n\t\"negatedPhrases\" : [ ]\n}\nsjunosjunodb.cve_2014.find( { $text: { $search: \"junos\" } } ).explain().queryPlanner.winningPlan.parsedTextQuery\n{\n\t\"terms\" : [\n\t\t\"juno\"\n\t],\n\t\"negatedTerms\" : [ ],\n\t\"phrases\" : [ ],\n\t\"negatedPhrases\" : [ ]\n}\n", "text": "Welcome to the community @Arthur_Gressick!To help understand your results, can you please provide some sample documents and your results that are expected (or not expected)?The text search feature is designed to match strings using language-based heuristics. Based on your examples so far, I suspect you may be interested in matching string patterns using regular expressions rather than language-based text search.You can get more insight into how a query is processed using explain(). For your questions about text search, I would start by looking at the parsedTextQuery section which shows how your original query was converted into search terms.Text search creates a list of search terms from your original $search string and stems each term to an expected root form using language heuristics (in particular, the Snowball stemming library).Non-word characters like whitespace and hyphens are tokenised as word delimiters, so nx-os will be an OR query on the stemmed terms nx OR os:The stemming library has a general set of language rules rather than a complete language grammar or dictionary. Words ending with a single s in English (the default text search language) are typically plural, so the stemmed form of junos will be juno:There’s an online Snowball demo if you’d like a quick way to see the outcomes of stemming text using different language algorithms.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I think I am going to switch to regex until I can dig deeper.db.cve_2014.count( { product: { $regex: ‘:junos:’} })", "username": "Arthur_Gressick" } ]
Full Text Search returning more results than expected
2020-06-29T19:30:28.266Z
Full Text Search returning more results than expected
2,883
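The distinction in this thread — stemmed, tokenised text search versus literal pattern matching — is easy to verify in the shell. The sketch below is illustrative only: it reuses the thread's collection and field names, but the phrase and regex patterns are examples rather than the poster's real data.

```js
// Text index search: "nx-os" is split into the stemmed terms "nx" OR "os",
// and "junos" stems to "juno". Wrapping the term in escaped quotes turns it
// into a phrase search, which requires the terms to occur together.
db.cve_2014.find({ $text: { $search: "\"nx-os\"" } })

// Literal substring matching needs a regex, not $text. Stemming never applies,
// but only a case-sensitive, anchored prefix regex can use an ordinary index
// on the field; unanchored or case-insensitive patterns scan much more widely.
db.cve_2014.find({ product: { $regex: ":junos:" } })
db.cve_2014.find({ product: /nx-os/i })
```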
null
[]
[ { "code": "", "text": "Hi,\nI have created replica set cluster of combination Primary/Secondary/Arbiter.\nMongo version: Mongo 3.6.9For Primary/Secondary data members, have configured specific dbpath size based on our usage demand.\nAs Arbiter member does not store data, need suggestion what is ideal dbpath size to given. And i observe dbpath utilization keep growing over a period of time, though its in small MBs.Since, i use dbpath from RAM(for faster access to data), don’t want to allocate more RAM to Arbiter member where there are no data operations except for voting. Please suggest in what is ideal dbpath size to be given for Arbiter members.Thanks,\nAnil", "username": "anil_neeluru" }, { "code": "", "text": "I cannot comment on the other subject but about:Since, i use dbpath from RAM(for faster access to data)I am not too sure it is a good idea. The storage engine are really good at caching the working sets in RAM. Giving RAM as “permanent” storage is making sure the storage engine has less RAM for the working sets. I do not have numbers to backup my claims but I would not opt for such a performance enhancement without real performance testing.", "username": "steevej" }, { "code": "", "text": "Thanks Steeve for your comments.\nOur application needs real time response, it is very much required to use RAM for storage. And coming to performance, we use decent Sizing of RAM of around 64GB+ and it shows good results so far.", "username": "anil_neeluru" } ]
Arbiter dbpath size calculation in replica set
2020-06-30T10:47:58.958Z
Arbiter dbpath size calculation in replica set
1,878
null
[]
[ { "code": "", "text": "Is it recommended to use the Objective C or Swift MongoDB Realm SDK when working with a mixed Objective C/Swift code base? We migrated a mixed app to Realm last year and the Realm documentation stated that the Objective C SDK should be used. Is that still the case? I’ve noticed some methods in the SDK are marked with NS_REFINED_FOR_SWIFT which makes them unavailable in Swift. Also, there isn’t the RLMSupport.swift file that contains a lot of handy extensions.", "username": "Nina_Friend" }, { "code": "", "text": "@Nina_Friend RLMObject and RealmSwift.Object are different types that can’t be easily passed to functions expecting the other, so mixing the two gets really awkwardIf RLMSupport.swift is missing wrappers for things marked as NS_REFINED_FOR_SWIFT then we should fix that - please open an issue here for what is missing that you need: GitHub - realm/realm-swift: Realm is a mobile database: a replacement for Core Data & SQLite", "username": "Ian_Ward" }, { "code": "", "text": "I’ll submit the issue.", "username": "Nina_Friend" }, { "code": "", "text": "Should I continue to have my data classes in Objective C and descending from RLMObject or should I rewrite the data classes in Swift and descend from Object even though other parts of my code are still in Objective C? In the future I would like to convert the whole project to Swift. If I start with Objective C/RLMObject and then want to switch to Swift/Object later is that going to be a problem?", "username": "Nina_Friend" } ]
Mixed Objective C and Swift Project
2020-06-28T23:39:30.557Z
Mixed Objective C and Swift Project
1,657
null
[]
[ { "code": "", "text": "Hi,I use Azure Cosmos DB with Mongo DB (Mongo DB .Net Driver to connect). I would like to utilize a transaction feature but when I try to do this (calling session.StartTransaction()) I get following error:Server version 3.6.0 does not support the Transactions feature.Stack trace looks like this:at MongoDB.Driver.Core.Misc.Feature.ThrowIfNotSupported(SemanticVersion serverVersion)\nat MongoDB.Driver.Core.Bindings.CoreSession.EnsureTransactionsAreSupported()\nat MongoDB.Driver.Core.Bindings.CoreSession.EnsureStartTransactionCanBeCalled()\nat MongoDB.Driver.Core.Bindings.CoreSession.StartTransaction(TransactionOptions transactionOptions)\nat MongoDB.Driver.Core.Bindings.WrappingCoreSession.StartTransaction(TransactionOptions transactionOptions)\nat MongoDB.Driver.Core.Bindings.WrappingCoreSession.StartTransaction(TransactionOptions transactionOptions)\nat MongoDB.Driver.ClientSessionHandle.StartTransaction(TransactionOptions transactionOptions)Have enyone else experienced similar problem? I couldn’t find any information about this problem.", "username": "Sylwester_Jarosinski" }, { "code": "", "text": "Welcome to the community @Sylwester_Jarosinski!The general error message is correct: multi-document transaction support was added for replica set deployments in MongoDB 4.0 and extended to sharded clusters in MongoDB 4.2.However, please note that Cosmos’ API is an emulation of MongoDB which differs in features, compatibility, and implementation from an actual MongoDB deployment. Cosmos’ suggestion of API version support (eg 3.6) is referring to the wire protocol version rather than the full MongoDB feature set associated with that server version. Official MongoDB drivers (like .NET) are only tested against actual MongoDB deployments.If you’d like to try transactions in MongoDB, you can get started using a free cluster on MongoDB Atlas. Atlas clusters can be deployed in Azure, AWS, and Google Cloud Platform. Atlas free tier clusters include 512MB of storage, and can be scaled up to larger clusters more suitable for production apps.Regards,\nStennie", "username": "Stennie_X" } ]
Server version 3.6.0 does not support the Transactions feature
2020-06-30T06:18:32.020Z
Server version 3.6.0 does not support the Transactions feature
3,933
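For anyone who wants to try the same flow against an actual MongoDB 4.0+ replica set (for example an Atlas free tier cluster, as suggested above) rather than the Cosmos emulation, here is a minimal mongo-shell sketch of a multi-document transaction. The database and collection names are placeholders, not taken from the thread.

```js
// Requires a replica set (MongoDB 4.0+) or sharded cluster (4.2+).
const session = db.getMongo().startSession();
const orders = session.getDatabase("shop").orders;
const stock = session.getDatabase("shop").stock;

session.startTransaction({ readConcern: { level: "snapshot" }, writeConcern: { w: "majority" } });
try {
    orders.insertOne({ sku: "abc", qty: 1 });
    stock.updateOne({ sku: "abc" }, { $inc: { qty: -1 } });
    session.commitTransaction();   // both writes become visible together
} catch (e) {
    session.abortTransaction();    // neither write is applied
    throw e;
} finally {
    session.endSession();
}
```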
null
[ "containers" ]
[ { "code": "", "text": "I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OKplease help:\nMOngodb 4.0 version\nreplica set : this node was primary, not no node of replica set is starting.", "username": "Aayushi_Mangal" }, { "code": "mongod", "text": "Hi @Aayushi_Mangal, it’s great to see you around here in the community forums.Interestingly enough I just had my MongoDB Docker container crash and when I restarted MongoDB I had the same message in my log file. Even with the message however the mongod process started and continued to run. Are you seeing other errors in the log files?You can remove the FTDC folder as it just contains diagnostic data that may or may not be useful to you.replica set : this node was primary, not no node of replica set is starting.Are you saying that none of your replica set members are starting? The failure that the PRIMARY node had with FTDC shouldn’t have caused problems with the other nodes. Again, what errors are you seeing in the log files for the nodes that are not starting?", "username": "Doug_Duncan" }, { "code": "", "text": "W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set repl01Yes, no primary node found, and other replication member is not restarting because, of priority 0 is set .as a last resort, i have executed mongod with --repair.", "username": "Aayushi_Mangal" }, { "code": "/data/db/var/lib/mongodbstorage:\n dbPath: <Path>\nWiredTiger.wtmmap", "text": "The default config file included when installing MongoDB via a package manager typically sets /data/db as the database path./var/lib/mongodb is where your mongodb installation files are (I assume).Take a look at your config file and see if you can findThat path needs to point to where your MongoDB server was storing its data files. If that isn’t /data/db , you need to figure out where your mongo files were stored.If you were running MongoDB 3.2+, then try searching your filesystem for WiredTiger.wt . I’m not sure what to look for if you were running mmap as the storage engine.", "username": "Alfred_Williams" }, { "code": "", "text": "Hello @Alfred_Williams,Thank you for your response.\nYes, using 4.0 version, but could you please explain why to look at WiredTiger.wt file, if replica set members are unable to reach.At the end we drop our local database and need to reiniatalize the replica set and re-add the member to make the cluster up and running again.Do you have any other methods/process to do so.", "username": "Aayushi_Mangal" }, { "code": "", "text": "No but thanks for the response.", "username": "Alfred_Williams" } ]
Mongod is not started
2020-06-05T19:02:27.587Z
Mongod is not started
10,268
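As an alternative to dropping the local database and rebuilding the set, a replica set that has lost its primary because the remaining members have priority 0 can often be recovered with a forced reconfiguration. The sketch below is generic — the member index and priority value are assumptions, and a forced reconfig is disruptive, so review the replica set reconfiguration documentation before running it.

```js
// On a surviving data-bearing member, inspect the current state first:
rs.status().members.forEach(m => print(m.name, m.stateStr, m.health));

// Raise the priority of the member that should become electable
// (members[1] here is only an example index):
var cfg = rs.conf();
cfg.members[1].priority = 1;
rs.reconfig(cfg, { force: true });   // force is needed when no primary is available
```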
null
[ "aggregation" ]
[ { "code": "", "text": "Hi All,I have worked predominantly on relational databases and new to mongodb. Learning new things day by day. I am wondering how to get the size of the document inside the aggregation query?\ndb.collection.stats() gives me the total collection size. How to get a filtered dataset sizeThanks,\nVijay K", "username": "vijayaraghavan_krish" }, { "code": "// this will work in the mongo shell only\nconst doc = db.coll.findOne(<queryFilter>);\nObject.bsonsize(doc);\nconst bson = require('bson');\nconst doc = await db.coll.findOne(<queryFilter>);\nconst size = bson.calculateObjectSize(doc);\ndb.coll.aggregate([\n {\n $collStats: {\n storageStats: { },\n },\n },\n {\n $project: {\n averageDocumentSize: '$storageStats.avgObjSize',\n },\n },\n]);\nconst mapFn1 = function() {\n emit(this.groupId, this);\n};\n\nconst reduceFn1 = function(groupId, documents) {\n const sizes = documents.map((item) => {\n return {\n docId: item._id,\n size: Object.bsonsize(item),\n };\n });\n return { result: sizes };\n};\n\ndb.coll.mapReduce(\n mapFn1,\n reduceFn1,\n { out: 'mr_out' },\n)\n{\n \"_id\" : null,\n \"value\" : {\n \"result\" : [\n {\n \"docId\" : ObjectId(\"5ef0c89b8ce9f870270090e1\"),\n \"size\" : 241\n },\n {\n \"docId\" : ObjectId(\"5ef0c89b8ce9f870270090e2\"),\n \"size\" : 76\n },\n // ... other docs in the collection\n ]\n }\n}\n\n", "text": "You can use Object.bsonsize<object> to get object size in bytes.It is possible to get the object size in the Node.js environment using bson library. There should be similar drivers for other programming languages as well in web Within aggregation you can get average document size in the collection, like this:You can also, use map-reduce to take advantage of Object.bsonsize \nLike this:This will output to ‘mr_out’ collection something similar to this:", "username": "slava" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" }, { "code": "", "text": "Thanks for the reply. Apologies for replying late. The aggregate function worked for my problem.", "username": "vijayaraghavan_krish" } ]
How to get document size inside aggregation
2020-06-22T14:11:56.973Z
How to get document size inside aggregation
7,860
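On MongoDB 4.4 or newer the same result no longer needs map-reduce or a shell helper: the $bsonSize aggregation operator reports a document's size directly inside the pipeline. The filter below is a placeholder — substitute whatever $match defines the filtered dataset.

```js
// Per-document size plus a total for the filtered set (MongoDB 4.4+):
db.collection.aggregate([
  { $match: { status: "active" } },                       // illustrative filter
  { $project: { size: { $bsonSize: "$$ROOT" } } },
  { $group: { _id: null, totalBytes: { $sum: "$size" }, docs: { $sum: 1 } } }
]);
```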
null
[ "sharding" ]
[ { "code": "", "text": "Created a sharded cluster and added tags NA and EU as explained in the example. Next added tag range as specified in the documentation https://docs.mongodb.com/manual/tutorial/sharding-segmenting-data-by-location/When I insert a document with country as “US”, it put it in the EU shard. When I do sh.status() it says the collection has a active migration and there are no balancer failures. I checked the balancer and it is running.Shouldn’t the data automatically go to correct cluster? Why is it putting in a different cluster? Also, why taking so long for balancer to move the document to correct cluster? Does it wait for certain number of documents before starting the move? The document that I inserted was very small with two fields one for the country and other userid.", "username": "prashant_kulkarni" }, { "code": "sh.status()", "text": "Welcome to the community @prashant_kulkarniCan you share the status of your sh.status() command?What method are you using to determine what cluster the document is in ?", "username": "chris" } ]
Data not going into right shard
2020-06-29T19:30:43.636Z
Data not going into right shard
1,945
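The behaviour described above is expected: an insert lands on whichever shard currently owns the chunk for that key, and the balancer later migrates chunks to match the zone ranges. A rough shell sketch of the setup and of the commands for checking progress follows; the shard names, namespace and shard key are assumptions, not taken from the thread.

```js
// Associate shards with zones (shard names are illustrative):
sh.addShardToZone("shard0000", "NA");
sh.addShardToZone("shard0001", "EU");

// Route each country prefix of a { country: 1, userid: 1 } shard key to a zone:
sh.updateZoneKeyRange("mydb.users",
  { country: "US", userid: MinKey }, { country: "US", userid: MaxKey }, "NA");
sh.updateZoneKeyRange("mydb.users",
  { country: "UK", userid: MinKey }, { country: "UK", userid: MaxKey }, "EU");

// Chunks only move once the balancer has split and migrated them:
sh.status();                 // shows zones and chunk distribution
sh.isBalancerRunning();      // confirms the balancer is actively working
```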
null
[ "scala" ]
[ { "code": "", "text": "Can we embed Mongo shell queries inside Scala code.\nIf yes, then how.My main motif is to fetch all the keys in a collection having many documents with different schemas using scala.Please guide.", "username": "Harmanat_Singh" }, { "code": "mongomongo", "text": "Welcome to the community @Harmanat_Singh!Can you provide an example of your document structure, the query you are trying to develop (or have working in the mongo shell), and your desired output?If you are using Scala it would be much more reliable and efficient to write your query using the native driver language.The Scala driver supports the same server-side features as the mongo shell. See the Scala Driver Quick Tour to get started.Regards,\nStennie", "username": "Stennie_X" } ]
Can we embed Mongo shell queries inside Scala code?
2020-06-29T20:20:52.942Z
Can we embed Mongo shell queries inside Scala code?
2,983
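Independent of the driver language, the underlying question — list every key that appears across documents with different schemas — can be answered server-side with an aggregation pipeline, which the Scala driver can submit like any other. A shell sketch of that pipeline (the collection name is a placeholder):

```js
// Distinct top-level field names across all documents (requires MongoDB 3.4.4+):
db.collection.aggregate([
  { $project: { kv: { $objectToArray: "$$ROOT" } } },  // document -> [{k, v}, ...]
  { $unwind: "$kv" },
  { $group: { _id: null, keys: { $addToSet: "$kv.k" } } }
]);
```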
null
[ "aggregation" ]
[ { "code": " {\n \"id\": \"google.com/q=blah\",\n \"type\": \"url\",\n \"domain\": \"google.com\"\n \"key_id\": 123\n },\n {\n \"id\": \"google.com/q=blah\",\n \"domain\": \"google.com\"\n \"type\": \"url\",\n \"key_id\": 321\n },\n {\n \"id\": \"google.com\"\n \"type\": \"domain\",\n \"key_id\": 123,\n \"registred\": True,\n },\n {\n \"id\": \"google.com\"\n \"type\": \"domain\",\n \"key_id\": 321,\n \"registred\": True,\n }\n $group: {\n _id: \"$id\",\n \"key_ids\": {\"$addToSet\": \"$key_id\"},\n \"type\": {\"$last\": \"$type\"}\n \"domain\": {\"$last\": \"$domain\"}\n },\n $project: {\n \"id\": \"$_id\",\n \"keys_ids\": \"$key_ids\",\n \"type\": \"$type\",\n \"domain\": \"$domain\"\n }\n", "text": "Hey everyone. I have a trouble of aggregation of heterogeneous objects from the one collection. For example:My intention here is to merge objects that have the same “id” objects:This strategy works well when the query is filtered by “type” field and all objects have the same field set.When I have in the query output multiple types with different set of fields I don’t know how do I check if field exists and add it to project.\nIs it possible generally with Mongo?", "username": "Alex_G" }, { "code": "{\n $match: {\n fieldA: {\n $exists: false,\n },\n },\n},\n{\n $project: {\n fieldB: {\n $cond: {\n // if fieldB is not present in the document (missing)\n if: {\n $eq: [{ $type: '$fieldB' }, 'missing'],\n },\n // then set it to some fallback value\n then: 'Field is missing',\n // else return it as is\n else: '$fieldB',\n },\n },\n },\n},\n{\n $project: {\n unwantedFieldA: false,\n mayBeMissingFieldB: false,\n }\n}\n{\n $project: {\n contidionalField: {\n $cond: {\n if: {\n $eq: ['$filedA', 'abc'],\n },\n // then return its value as is\n then: '$fieldA',\n // else exclude this field\n else: '$$REMOVE',\n },\n },\n },\n},\n", "text": "Hello, @Alex_G! Welcome to the community!You can match documents with missing field like this:In the $project stage you can unify the fields set by adding a fallback value for missing fields, like this:In $project stage, if you try to include or exclude non-existent field for all documents, the pipeline will not break. So, if you want to exclude some specific field, but you are not sure if it is present in the document - don’t overthink, just exclude it You can conditionally exclude the field with $$REMOVE variable", "username": "slava" }, { "code": "", "text": "Hi Slava,thanks a lot for your answer.$cond: {\nif: {\n$eq: [’$filedA’, ‘abc’],\n},\n// then return its value as is\nthen: ‘$fieldA’,\n// else exclude this field\nelse: ‘$$REMOVE’,\n},is exactly what I looked for.\nCheers!", "username": "Alex_G" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to add fields to output conditionally (dynamically)
2020-06-29T14:36:27.015Z
How to add fields to output conditionally (dynamically)
22,360
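A small footnote to the approach above: when the goal is only to substitute a fallback for a missing field (rather than remove it), $ifNull is a terser alternative to the $cond/$type check — with the caveat that it also replaces fields that are explicitly null. Field names below are illustrative.

```js
db.collection.aggregate([
  { $project: {
      // returns the field's value, or the fallback when it is missing or null
      fieldB: { $ifNull: ["$fieldB", "Field is missing"] }
  } }
]);
```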
https://www.mongodb.com/…86a1a55a6313.png
[ "python" ]
[ { "code": "check = mycol.find({},{'ServerID':server_id}).count()", "text": "Hi, I’m making a license system for discord server, once I run the !authorize command in a server, my bot creates a value in the database:\n\nThe only problem is that now when I try to authorize another server, it says that the server is already authorized! The same thing happens if I try to run the !license command, which will show me the license, it shows the information of the first server’s license! There are the codes I use for creating a new value into the database: https://hastebin.com/kehijahefe.sql\nThe problem is the check = mycol.find({},{'ServerID':server_id}).count() Which returns 1 everytime, even if the server ID is different from the one into the database", "username": "Silvano_Hirtie" }, { "code": "mycol.find({ 'serverID': server_id }).count();\n", "text": "Hello, @Silvano_Hirtie! Welcome to the community!Make sure you’re using .find() method correctly and the arguments are passed in the correct order.\nAlso, keep in mind, that property names are case-sensetive.I think you can get the desired result if you modify your query a bit:", "username": "slava" }, { "code": "", "text": "Thank you very much! having {} before the serverID string helped me in not having an error, but the main Problem was the uppercase S, serverID worked, thank you again!", "username": "Silvano_Hirtie" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[pymongo] Problem while making a license system
2020-06-29T14:35:20.305Z
[pymongo] Problem while making a license system
2,323
null
[ "golang" ]
[ { "code": "the Database field must be set on Operation 0\n", "text": "Hello,\nI’m using MongoDB 4.2 on Atlas ,\nAnd using golang driverRunning locally I run just fine,\nBut when the app is on kubernetes pod,When an insert request is made I got an exception:Can’t google this error,\nSo if anyone here knows?", "username": "Altiano_Gerung" }, { "code": "", "text": "Hi @Altiano_Gerung,This error is coming from within the driver. Can you share your code so we can try to reproduce?– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "I provide empty string for the DB Name,\nBut the error could be more helpfull I thnk", "username": "Altiano_Gerung" }, { "code": "", "text": "@Altiano_Gerung I definitely agree that the error message could be more helpful here. I opened https://jira.mongodb.org/browse/GODRIVER-1668 to address this.", "username": "Divjot_Arora" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error: the Database field must be set on Operation 0
2020-06-25T19:57:54.531Z
Error: the Database field must be set on Operation 0
8,112
null
[ "aggregation" ]
[ { "code": "db.shopping_cart.aggregate(\n {\"$match\" : {\"userId\" : \"2\"}},\n {\"$unwind\" : \"$items\"}, \n {\"$lookup\" : {\"from\" : \"variants\", \"localField\" : \"items.variant\", \"foreignField\" : \"_id\", \"as\" : \"items.variantObject\"},\n {\"$unwind\" : \"$items.variantObject\"},\n {\"$lookup\" : {\"from\" : \"inventory\", \"let\" :{\"shop_id\" : \"$shopId\", \"variant_id\" : \"$items.variant\"}, \n pipeline: [ \n { $match: { \"shop\" : \"$$shop_id\", \"variants\" : {$elemMatch : {\"variant\" : \"$$variant_id\"}}}},\n { $unwind: \"$variants\" }, \n { $match: {\"variants.variant\" : \"$$variant_id\"},\n { $project : {\"shop\" : \"$$shop_id\", \"shopId\" : \"shop\", \"variant\" : \"$$variant_id\", \"price\" : \"$variants.price\", \"discount\" : \"$variants.discount\", \"sizes\" : \"$variants.sizes\", \"quantity\" : \"$variants.quantity\"}}\n ],\n as : \"items.inventory\"\n }\n }\n)\n{\n\t\"_id\" : \"5ec8d2924688d8310b909eee\",\n\t\"product\" : \"5ec43ac6bbe7852b815fb844\",\n\t\"shop\" : 13,\n\t\"version\" : 1,\n\t\"variants\" : [\n\t\t{\n\t\t\t\"variant\" : \"5ec43ac6bbe7852b815fb845\",\n\t\t\t\"quantity\" : 22,\n\t\t\t\"intSku\" : \"Iphone11_bianco64\",\n\t\t\t\"price\" : 840,\n\t\t\t\"iva\" : 22,\n\t\t\t\"discount\" : {\n\t\t\t\t\"salePrice\" : 722,\n\t\t\t\t\"saleEndTime\" : \"2020-05-31T07:36:00.000Z\"\n\t\t\t},\n\t\t\t\"deliverable\" : true,\n\t\t\t\"sizes\" : [ ],\n\t\t\t\"createDate\" : \"2020-05-23T07:36:50.557Z\"\n\t\t}\n\t]\n}\n{\n \"_id\": \"5ec43ac6bbe7852b815fb845\",\n \"ean\": null,\n \"product\": \"5ec43ac6bbe7852b815fb844\",\n \"name\": \"Iphone 11 - touchscreen 5.8 \\\" 64 GB IOS Bianco \",\n \"attributes\": [\n {\n \"attribute\": \"5e8d937ffa3fd358af0a20d4\",\n \"value\": \"touchscreen\",\n \"multiValues\": []\n },\n {\n \"attribute\": \"5ea2a5b3316b10162e706963\",\n \"value\": \"5.8\",\n \"multiValues\": []\n },\n {\n \"attribute\": \"5e8d940cfa3fd36d2256c5d3\",\n \"value\": \"64\",\n \"multiValues\": []\n },\n {\n \"attribute\": \"5e8d9412fa3fd358af0a20d6\",\n \"value\": \"IOS\",\n \"multiValues\": []\n },\n {\n \"attribute\": \"5e8d9404fa3fd317fc401d65\",\n \"value\": \"12\",\n \"multiValues\": []\n },\n {\n \"attribute\": \"5ea150e497a2ed5e54536789\",\n \"multiValues\": [\n \"5ea15819fb654b46605436b3\",\n \"5ea15819fb654b46605436b4\",\n \"5ea15819fb654b46605436b5\"\n ]\n },\n {\n \"attribute\": \"5e8efe4ffa3fd36c9e055103\",\n \"multiValues\": [\n \"5ea2ba77316b10162e706964\",\n \"5ea2ba77316b10162e706965\",\n \"5ea2ba77316b10162e706966\"\n ]\n },\n {\n \"attribute\": \"5e88bbf6fa3fd3294d252992\",\n \"value\": \"Bianco\",\n \"multiValues\": []\n }\n ],\n \"images\": [\n {\n \"imageLink\": \"0f651f533b2342889ee8480956181eb7\"\n },\n {\n \"imageLink\": \"16324710207d4514a37ed78c3ae05564\"\n },\n {\n \"imageLink\": \"5459cc2aa2fa496bba551252042f55a2\"\n }\n ],\n \"slugName\": \"iphone-11-touchscreen-5-8-64-gb-ios-bianco-108649\"\n}\n{\n \"_id\": \"5ed111db474d4752a52d5ac0\",\n \"shopId\": 13,\n \"userId\": \"2\",\n \"items\": [\n {\n \"variant\": \"5ec43ac6bbe7852b815fb845\",\n \"quantity\": 1\n }\n ]\n}\n", "text": "Hello Devs,Today i had a bad issue.I have this query, look at Lookup from inventory, if i use hardcoded data inside query done, with let operator this not work ( no take results )Data To replicate - inventory:Data To replicate - variants:Data To replicate - shopping_cart:", "username": "Antonio_Dell_Arte" }, { "code": "db.shopping_cart.aggregate([\n // ...\n {\n $lookup: {\n from : 'inventory',\n let: {\n shop_id: '$shopId',\n variant_id: 
'$items.variant',\n },\n pipeline: [\n {\n // put this $unwind before $match stage,\n // so $eq operators inside $match stage would work\n $unwind: '$variants', \n },\n {\n $match: {\n $expr: {\n $and: [\n { $eq: ['$shop', '$$shop_id'] },\n { $eq: ['$variants.variant', '$$variant_id'] },\n ],\n },\n },\n },\n { $project: { /* ... */ }},\n ],\n as : 'items.inventory',\n },\n },\n]);\n", "text": "Hello, @Antonio_Dell_Arte! Welcome to the community!To be able to use let-variables in $lookup.pipeilne.$match stage, you need to use $expr operator inside $match stage, like this:", "username": "slava" } ]
$lookup aggregation issue with Let
2020-05-29T14:22:38.062Z
$lookup aggregation issue with Let
5,574
null
[]
[ { "code": "const connectionString = 'mongodb+srv://' + process.env.MONGODB_USER + ':' + process.env.MONGODB_PW + '@<REDACTED_APP>-y1llv.mongodb.net/<REDACTED_APP>';\nmongoose\n .connect(connectionString, {\n useUnifiedTopology: true,\n useNewUrlParser: true,\n useFindAndModify: false\n })\n .then(() => console.log('Database connected.'))\n .catch(err => console.log(err));\n", "text": "Hi everyone.I think I have a very specific issue.\nI created my first ever website with Node.js in the back-end and a MongoDB Atlas cluster as a database.I hosted my web app on Heroku where everything works fine. A few days ago when I got everything launch-ready I decided to pay for proper hosting and buy the domain I wanted.\nMy host server uses cPanel where it is possible to setup a Node.js app, which is what I did. However my website is not loading because I cannot establish a connection to my Atlas cluster.It’s the exact same code that works fine on Heroku, so I’m not sure it has to do with MongoDB itself. Perhaps it’s some security setting inside cPanel that prevents this connection but I was not able to find any.With the ‘useUnifiedTopology’ flag set to true I get a MongooseServerSelectionError. I saw some people having issues establishing a connection with this flag, so I tried commenting it out and now get a MongoNetworkError instead.Here’s my code:You can see a picture of my cPanel Node.js app setup and the complete callstack for these errors on my StackOverflow post here.I really hope someone can help me out. It’s my first website and the first time I host anything, so I might very well miss something but I don’t know where or what to look for anymore. I’m pretty much stuck.", "username": "Stefan_Wizsy" }, { "code": "", "text": "Hello good day!\nI am currently facing the same issue. I also saw your post here on stackoverflow : https://stackoverflow.com/questions/62570924/cpanel-node-js-app-cannot-connect-to-mongodb-atlas-cluster-but-works-on-heroku.I am also using Namecheap. Have you found any solution yet?", "username": "Onyejiaku_Theodore_K" }, { "code": "", "text": "Sadly not, seems to be a very specific issue not many people know about. If nobody is able to help I’ll have to stay on Heroku for hosting and pay for Hobby dynos to use my SSL certificate. Which is annoying because I just paid for 2 years hosting on Stellar and it’s stupidly cheap compared to Heroku. Not that Hobby dynos are expensive but compared to Stellar it costs 5 times more.I’m asking MongoDB support but I’m not sure they’ll be able to help. Maybe they can give me some clues how to get more information why this MongoNetworkError occurs. But I’ll have to wait until Monday to contact them.", "username": "Stefan_Wizsy" }, { "code": "", "text": "Alright thanks.\nKeep me posted on when you have solved the issue.\nThanks", "username": "Onyejiaku_Theodore_K" }, { "code": "", "text": "Wow I just fixed the issue.\nContact Namecheap support team and ask them to open up ports:\n3000\n443 and\n80.Good luck!", "username": "Onyejiaku_Theodore_K" }, { "code": "", "text": "Hey, I resolved the issue!\nI contacted Namecheap support via live chat, they have been very patient.\nLooking at my passenger.log file and everything. Turns out they had to open the default MongoDB port 27017 on their end, no way to do it without support unfortunately.So it seems you just need to contact them and tell them you need this port opened.\nHope that’s all it takes for you as well. 
Good luck!", "username": "Stefan_Wizsy" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Node.js app cannot connect to MongoDB Atlas cluster
2020-06-26T07:15:36.798Z
Node.js app cannot connect to MongoDB Atlas cluster
11,770
null
[]
[ { "code": "", "text": "Can we please get a SwiftUI “List\" Realm Guide ???Someone! Anyone!", "username": "Kamron_Hopkins" }, { "code": "", "text": "SwiftUI “List\" Realm GuideWhat would that be, specifically? Are you asking about a List Realm object? Or something else? Or are you asking about SwiftUI List?Do you have some code to share you’re having difficulty with?", "username": "Jay" }, { "code": "import RealmSwift\n \n \nlet realm = try! Realm()\n \n class BindableResults<Element>: ObservableObject where Element: RealmSwift.RealmCollectionValue {\n var results: Results<Element>\n \n private var token: NotificationToken!\n \n init(results: Results<Element>) {\n self.results = results\n lateInit()\n }\n func lateInit() {\n token = results.observe { [weak self] _ in\n self?.objectWillChange.send()\n }\n }\n deinit {\n token.invalidate()\n }\n }\nvar body: some View {\n \n VStack{\n \n List{\n ForEach(diveLogs.results, id: \\.dateOfDive) { diveLog in\n DivePost(diveLog: diveLog)\n }.onDelete(perform: deleteRow )\n }\n try! realm.write {\n realm.delete(self.diveLogs.results[mainIndex])\n }\n })\n}\n", "text": "I’m currently working on a project and have gotten as far as grabbing the Realm data results, placing them inside cells and deleting the data according to the cell row when swiping to delete said cell.I’m having a fairly difficult time figuring out why I get the \"Thread error : Out of Bounds”& most importantly how to fix it…Realm doesn’t have much/if any documentation/tutorials including Swift UI Here’s my code if you have any suggestions…//Thanks in Advance!!!The Body:@ObservedObject var diveLogs = BindableResults(results: try! Realm().objects(DiveLog.self).sorted(byKeyPath: “dateOfDive”))Delete Row Function:private func deleteRow(with indexSet: IndexSet){\nindexSet.forEach ({ index in\n//Grabbing index of selected row for deletion\nlet mainIndex = indexThe deletion of the selected row/index works perfectly but I can’t quite figure out why I keep getting this error. Possibly to do with the data updating the List using Bindable Results func???When running this code above I get this Error in the AppDelegate:@UIApplicationMain\nclass AppDelegate: UIResponder, UIApplicationDelegate {\nThread 1: Exception: \"Index 1 is out of bounds (must be less than 1).”//The cell I selected to delete was the second item in the ArrayIf any additional information is needed, I will gladly provide…Thanks in Advance!", "username": "Kamron_Hopkins" }, { "code": "", "text": "@Kamron_Hopkins We are working on some developer blog posts right now for SwiftUI but perhaps this working example of SwiftUI with the latest RealmSwift can help you - realm-swift/examples/ios/swift/ListSwiftUI at master · realm/realm-swift · GitHub", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SwiftUI “List” Realm Guide
2020-06-27T21:20:53.993Z
SwiftUI “List” Realm Guide
5,344
null
[ "node-js", "connecting" ]
[ { "code": "rsnode server.js", "text": "Hi ,\nI am unable to connect to mongodb …i get following error. pls suggest\nC:\\Users\\asinh015\\Desktop\\backend>npm run [email protected] start C:\\Users\\asinh015\\Desktop\\backend\nnodemon server.js[nodemon] 2.0.4\n[nodemon] to restart at any time, enter rs\n[nodemon] watching path(s): .\n[nodemon] watching extensions: js,mjs,json\n[nodemon] starting node server.js\nYou are connected http://127.0.0.1:8080!\ncatch Error: queryTxt ETIMEOUT cluster0-bvqvo.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (dns.js:203:19) {\nerrno: ‘ETIMEOUT’,\ncode: ‘ETIMEOUT’,\nsyscall: ‘queryTxt’,\nhostname: ‘cluster0-bvqvo.mongodb.net’\n}", "username": "Abhishek_Sinha" }, { "code": "", "text": "Can you connect to your mongodb by shell?May be network or DNS issues preventing your connection", "username": "Ramachandra_Tummala" }, { "code": "", "text": "you are right…there was a network issue…Thank you ", "username": "Abhishek_Sinha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to connect to Atlas from Node app on Windows
2020-06-27T11:45:51.390Z
Unable to connect to Atlas from Node app on Windows
3,164
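The queryTxt ETIMEOUT in this thread comes from the DNS TXT/SRV lookups that a mongodb+srv:// connection string performs; when those lookups are blocked or unreliable, a common workaround is the equivalent non-SRV seed-list string. The hosts, replica set name and credentials below are placeholders — copy the real values from the cluster's connect dialog.

```js
// Non-SRV (seed list) form of an Atlas connection string; avoids the SRV/TXT DNS lookups.
// Host names, replica set name and credentials here are placeholders.
const uri = "mongodb://user:[email protected]:27017," +
            "cluster0-shard-00-01-xxxxx.mongodb.net:27017," +
            "cluster0-shard-00-02-xxxxx.mongodb.net:27017" +
            "/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin";
// mongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true });
```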
https://www.mongodb.com/…4_2_1024x512.png
[ "indexes" ]
[ { "code": "sku (string)\nwarehouse (string)\nrecord_time (ISODate)\nqty (int)\ndb.test.createIndex({\n sku: 1,\n warehouse: 1,\n record_time: -1,\n});\ndb.test.aggregate([\n {$match: {sku: 'a', warehouse: 'a', record_time: {$lte: ISODate('2020-01-01T00:00:00')}}},\n {$sort: {record_time: -1}},\n {$limit: 1}\n])\ndb.test.aggregate([\n {$match: {record_time: {$lte: ISODate('2020-01-01T00:00:00')}}},\n {$sort: {\n sku: 1,\n warehouse: 1,\n record_time: -1,\n }},\n {$group: {\n _id: {\n sku: '$sku',\n warehouse: '$warehouse',\n },\n qty: {'$first': '$qty'},\n last_record_time: {'$first': '$record_time'}\n }},\n], {allowDiskUse: true});\n", "text": "Say I have a 4 field collection:I would like to query the data at a “snapshot” in time. For example, I may say “show me the qty of everything as of Jan 1, 2020”. The data is such that some combinations of sku + warehouse may not have any entries for days/months.First, I see that mongo has a recommended practice here:\n(jira ticket https://jira.mongodb.org/browse/SERVER-9507)So, I have something like the following index:So, to get a specific SKU + warehouse at certain time, I could run:If I want to get all distinct warehouse + SKU, then:Which would give me the data I want. However, this query runs VERY SLOWLY (10+ minutes on an m50 in atlas)I can see one problem in that the $match is running off of “record_time”, which isn’t first order indexed, so it’s probably causing a large part of the slowdown. However, if I just remove the $match portion, the query takes just as long to run.Based on my desired outcome, is there a different way to structure the data/indexes to allow for the query “give me the latest entry before ISODate X for every sku + warehouse combination” to be run in a reasonable timeframe?Thank you for any advice.", "username": "nefiga" }, { "code": "db.test.aggregate([\n {$sort: {\n sku: 1,\n warehouse: 1,\n record_time: -1,\n }},\n {$group: {\n _id: {\n sku: '$sku',\n warehouse: '$warehouse',\n },\n qty: {'$first': '$qty'},\n record_time: {'$first': '$record_time'}\n }},\n], {allowDiskUse: true, explain: true});\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\n\t\t\t\t},\n\t\t\t\t\"sort\" : {\n\t\t\t\t\t\"sku\" : 1,\n\t\t\t\t\t\"warehouse\" : 1,\n\t\t\t\t\t\"record_time\" : -1\n\t\t\t\t},\n\t\t\t\t\"fields\" : {\n\t\t\t\t\t\"qty\" : 1,\n\t\t\t\t\t\"record_time\" : 1,\n\t\t\t\t\t\"sku\" : 1,\n\t\t\t\t\t\"warehouse\" : 1,\n\t\t\t\t\t\"_id\" : 0\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"test.test\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"E47CEE36\",\n\t\t\t\t\t\"planCacheKey\" : \"E47CEE36\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"sku\" : 1,\n\t\t\t\t\t\t\t\t\"warehouse\" : 1,\n\t\t\t\t\t\t\t\t\"record_time\" : -1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"sku_1_warehouse_1_record_time_-1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"sku\" : [ ],\n\t\t\t\t\t\t\t\t\"warehouse\" : [ ],\n\t\t\t\t\t\t\t\t\"record_time\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : 
\"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"sku\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"warehouse\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"record_time\" : [\n\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$group\" : {\n\t\t\t\t\"_id\" : {\n\t\t\t\t\t\"sku\" : \"$sku\",\n\t\t\t\t\t\"warehouse\" : \"$warehouse\"\n\t\t\t\t},\n\t\t\t\t\"qty\" : {\n\t\t\t\t\t\"$first\" : \"$qty\"\n\t\t\t\t},\n\t\t\t\t\"last_record_time\" : {\n\t\t\t\t\t\"$first\" : \"$record_time\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t],\n}\n", "text": "I am using Mongo 4.2.3Explain output from the aggregation", "username": "nefiga" }, { "code": "qty{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\n\t\t\t\t},\n\t\t\t\t\"sort\" : {\n\t\t\t\t\t\"sku\" : 1,\n\t\t\t\t\t\"warehouse\" : 1,\n\t\t\t\t\t\"record_time\" : -1\n\t\t\t\t},\n\t\t\t\t\"fields\" : {\n\t\t\t\t\t\"qty\" : 1,\n\t\t\t\t\t\"record_time\" : 1,\n\t\t\t\t\t\"sku\" : 1,\n\t\t\t\t\t\"warehouse\" : 1,\n\t\t\t\t\t\"_id\" : 0\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"test.test\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"28987361\",\n\t\t\t\t\t\"planCacheKey\" : \"28987361\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION_COVERED\",\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"qty\" : 1,\n\t\t\t\t\t\t\t\"record_time\" : 1,\n\t\t\t\t\t\t\t\"sku\" : 1,\n\t\t\t\t\t\t\t\"warehouse\" : 1,\n\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"sku\" : 1,\n\t\t\t\t\t\t\t\t\"warehouse\" : 1,\n\t\t\t\t\t\t\t\t\"record_time\" : -1,\n\t\t\t\t\t\t\t\t\"qty\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"sku_1_warehouse_1_record_time_-1_qty_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"sku\" : [ ],\n\t\t\t\t\t\t\t\t\"warehouse\" : [ ],\n\t\t\t\t\t\t\t\t\"record_time\" : [ ],\n\t\t\t\t\t\t\t\t\"qty\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"sku\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"warehouse\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"record_time\" : [\n\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\"qty\" : [\n\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$group\" : {\n\t\t\t\t\"_id\" : {\n\t\t\t\t\t\"sku\" : \"$sku\",\n\t\t\t\t\t\"warehouse\" : \"$warehouse\"\n\t\t\t\t},\n\t\t\t\t\"qty\" : {\n\t\t\t\t\t\"$first\" : \"$qty\"\n\t\t\t\t},\n\t\t\t\t\"record_time\" : {\n\t\t\t\t\t\"$first\" : \"$record_time\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t]\n}\n", "text": "I tried adding qty to the index so that the query could utilize a “covered index”, but it had minimal impact.", "username": "nefiga" }, { 
"code": "db.test.aggregate([\n { $match: { record_time: { $lte: ISODate('2020-01-01T00:00:00') } } },\n...\n{ sku: 1, warehouse: 1, record_time: -1 }$match$matchrecord_timeexplain()", "text": "If I want to get all distinct warehouse + SKU, then:With the available index { sku: 1, warehouse: 1, record_time: -1 } the aggregation query will not be able to apply it for the $match stage. Having the $match stage at the beginning of the pipeline is very good, but without an index it is a very slow query with all that data.I think defining another index only on the record_time field is the right approach. I am sure there will be performance gain (how much improvement depends upon the data). I suggest you try this approach on a sample data set. Generate the query plans before and after creating the new index, and use the “executionStats” mode with the explain().Reference: Compound Index prefixes", "username": "Prasad_Saya" }, { "code": "db.test.createIndex({ record_time: 1 })\n\ndb.test.aggregate([ \n {$match: {record_time: {$lte: ISODate('2020-01-01T00:00:00')}}}, \n {$sort: { record_time: -1 }}, \n {$group: { _id: { sku: '$sku', warehouse: '$warehouse', }, \n qty: {'$first': '$qty'}, \n last_record_time: {'$first': '$record_time'} }} ], \n{allowDiskUse: true});\n", "text": "Yes, setting up an index on record_time will be a great help.Then I don’t think you need to sort en sku and warehouse but only on record_time because what is important is to get values in reverse record_time values. For sku and warehouse, the $group will do the job.So you may try :", "username": "RemiJ" }, { "code": "", "text": "@RemiJ @Prasad_Saya\nYa, I should have not even mentioned the $match stage you are talking about. If you re-read my post, you can see that the query slowness is not because of this match:I can see one problem in that the $match is running off of “record_time”, which isn’t first order indexed, so it’s probably causing a large part of the slowdown. However, if I just remove the $match portion, the query takes just as long to run.In the explain plans that follow, I removed the $match anyway.Regardless, thanks for at least replying. 
I ended up building a process to recombine data in hourly snapshots, and “cross fill” records for datehours during which there was no delta, so that the query could just be run against a single hour without sorting to get the latest record.I’d be interested to hear if anyone has a similar type of data set and use case (insert-only collection of events, and grouping by “last event per group id before time X”).", "username": "nefiga" }, { "code": "plandata.summary.elecScore_1updated_at_-1db.getCollection(\"test\").explain(\"executionStats\").aggregate(\n [\n { \n \"$match\" : { \n \"data.summary.elecScore\" : { \n \"$gte\" : 10\n }\n }\n },\n { \n \"$sort\" : { \n \"updated_at\" : -1\n }\n }, \n { \n \"$group\" : { \n \"_id\" : \"$user.user_id\", \n \"plan_id\" : { \n \"$first\" : \"$plan_id\"\n }, \n \"habitable_area\" : { \n \"$first\" : \"$data.summary.habitableArea\"\n }, \n \"last_created_at\" : { \n \"$first\" : \"$created_at\"\n }, \n \"last_updated_at\" : { \n \"$first\" : \"$updated_at\"\n }, \n \"count_plans\" : { \n \"$sum\" : NumberInt(1)\n }, \n \"total_elecs\" : { \n \"$sum\" : \"$data.summary.elecScore\"\n }\n }\n }, \n { \n \"$project\" : { \n \"_id\" : 0, \n \"user_id\" : \"$_id\", \n \"plan_id\" : 1, \n \"habitable_area\" : 1, \n \"last_created_at\" : 1, \n \"last_updated_at\" : 1, \n \"count_plans\" : 1, \n \"total_elecs\" : 1\n }\n }\n ], \n { \n \"allowDiskUse\" : true\n }\n);\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\"data.summary.elecScore\" : {\n\t\t\t\t\t\t\"$gte\" : 10\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"sort\" : {\n\t\t\t\t\t\"updated_at\" : -1\n\t\t\t\t},\n\t\t\t\t\"fields\" : {\n\t\t\t\t\t\"created_at\" : 1,\n\t\t\t\t\t\"data.summary.elecScore\" : 1,\n\t\t\t\t\t\"data.summary.habitableArea\" : 1,\n\t\t\t\t\t\"plan_id\" : 1,\n\t\t\t\t\t\"updated_at\" : 1,\n\t\t\t\t\t\"user.user_id\" : 1,\n\t\t\t\t\t\"_id\" : 0\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"kazadata.test\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"data.summary.elecScore\" : {\n\t\t\t\t\t\t\t\"$gte\" : 10\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\"data.summary.elecScore\" : {\n\t\t\t\t\t\t\t\t\"$gte\" : 10\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"updated_at_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : [\n\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"executionStats\" : {\n\t\t\t\t\t\"executionSuccess\" : true,\n\t\t\t\t\t\"nReturned\" : 76865,\n\t\t\t\t\t\"executionTimeMillis\" : 732025,\n\t\t\t\t\t\"totalKeysExamined\" : 2528539,\n\t\t\t\t\t\"totalDocsExamined\" : 2528539,\n\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\"data.summary.elecScore\" : 
{\n\t\t\t\t\t\t\t\t\"$gte\" : 10\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"nReturned\" : 76865,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 715211,\n\t\t\t\t\t\t\"works\" : 2528540,\n\t\t\t\t\t\t\"advanced\" : 76865,\n\t\t\t\t\t\t\"needTime\" : 2451674,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 39376,\n\t\t\t\t\t\t\"restoreState\" : 39376,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\t\"docsExamined\" : 2528539,\n\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"nReturned\" : 2528539,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 6944,\n\t\t\t\t\t\t\t\"works\" : 2528540,\n\t\t\t\t\t\t\t\"advanced\" : 2528539,\n\t\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\"saveState\" : 39376,\n\t\t\t\t\t\t\t\"restoreState\" : 39376,\n\t\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\t\"invalidates\" : 0,\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"updated_at_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : [\n\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"keysExamined\" : 2528539,\n\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\t\"seenInvalidated\" : 0\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$group\" : {\n\t\t\t\t\"_id\" : \"$user.user_id\",\n\t\t\t\t\"plan_id\" : {\n\t\t\t\t\t\"$first\" : \"$plan_id\"\n\t\t\t\t},\n\t\t\t\t\"habitable_area\" : {\n\t\t\t\t\t\"$first\" : \"$data.summary.habitableArea\"\n\t\t\t\t},\n\t\t\t\t\"last_created_at\" : {\n\t\t\t\t\t\"$first\" : \"$created_at\"\n\t\t\t\t},\n\t\t\t\t\"last_updated_at\" : {\n\t\t\t\t\t\"$first\" : \"$updated_at\"\n\t\t\t\t},\n\t\t\t\t\"count_plans\" : {\n\t\t\t\t\t\"$sum\" : {\n\t\t\t\t\t\t\"$const\" : 1\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"total_elecs\" : {\n\t\t\t\t\t\"$sum\" : \"$data.summary.elecScore\"\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"_id\" : false,\n\t\t\t\t\"count_plans\" : true,\n\t\t\t\t\"habitable_area\" : true,\n\t\t\t\t\"plan_id\" : true,\n\t\t\t\t\"last_created_at\" : true,\n\t\t\t\t\"total_elecs\" : true,\n\t\t\t\t\"last_updated_at\" : true,\n\t\t\t\t\"user_id\" : \"$_id\"\n\t\t\t}\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"kdt-0-shard-00-00-xoqdb.gcp.mongodb.net\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.0.18\",\n\t\t\"gitVersion\" : \"6883bdfb8b8cff32176b1fd176df04da9165fd67\"\n\t},\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1592560381, 1),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1592560381, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"4CBPnwnRqeD0qZJYLu1kavioahc=\"),\n\t\t\t\"keyId\" : NumberLong(\"6790057799909900289\")\n\t\t}\n\t}\n} \n", "text": "Hi nefiga,I have similar issue on a collection with 2.6 millions of documentsI want to group plan (with at 10 electrics elements or more) per user with the last plan updated at first position in the groupThe first $match returns approximately 75.000 documentsI created indexes 
data.summary.elecScore_1 and updated_at_-1cluster info: M30 (replica set 3 nodes)explain:It took around 20 minutes to execute this query, the FETCH stage seems to be very very slow\nMy documents have some nested objectDid I miss something ?Thanks for your help !", "username": "FABIEN_RYCKOORT" }, { "code": "data.summary.elecScore$matchupdated_at", "text": "It took around 20 minutes to execute this query, the FETCH stage seems to be very very slow\nMy documents have some nested objectHello Fabien,The time taken is because the index on the data.summary.elecScore is not used in the first $match stage. As the plan shows only the sort stage used the index defined on the updated_at field. The query had to scan all documents to filter the 75k documents.To get an idea about how to use indexes with filter and sort stages see: Use Indexes to Sort Query Results. The sub-topic Sort and Non-prefix Subset of an Index is related to this aggregation query.P.S. You may want to post a sample document from your collection.", "username": "Prasad_Saya" }, { "code": "data.summary.elecScore_1_updated_at_1 db.getCollection('test')\n .explain('executionStats')\n .aggregate([\n {\n $match: { \"data.summary.elecScore\": { $gte: 10 } }\n },\n {\n $sort: { \"updated_at\": -1 }\n },\n {\n allowDiskUse: true,\n explain: true\n }\n ])\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"query\" : {\n\t\t\t\t\t\"data.summary.elecScore\" : {\n\t\t\t\t\t\t\"$gte\" : 10\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"sort\" : {\n\t\t\t\t\t\"updated_at\" : -1\n\t\t\t\t},\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"kazadata.test\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"data.summary.elecScore\" : {\n\t\t\t\t\t\t\t\"$gte\" : 10\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\"data.summary.elecScore\" : {\n\t\t\t\t\t\t\t\t\"$gte\" : 10\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"updated_at_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"backward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"updated_at\" : [\n\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"kdt-0-shard-00-00-xoqdb.gcp.mongodb.net\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.0.18\",\n\t\t\"gitVersion\" : \"6883bdfb8b8cff32176b1fd176df04da9165fd67\"\n\t},\n\t\"ok\" : 1,\n\t\"operationTime\" : Timestamp(1592574905, 1),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1592574905, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"hZpCbjUQMejIHMoEY5EddyhKrmU=\"),\n\t\t\t\"keyId\" : NumberLong(\"6790057799909900289\")\n\t\t}\n\t}\n}\n{ \n \"_id\" : ObjectId(\"5eeb1e90bf7d06673af6cad9\"), \n \"plan_id\" : NumberInt(11153473), \n \"partner_id\" : NumberInt(8), \n \"created_at\" : ISODate(\"2020-06-18T07:57:43.000+0000\"), \n \"updated_at\" : 
ISODate(\"2020-06-18T07:57:43.000+0000\"), \n \"deleted_at\" : null, \n \"data\" : {\n \"floors\" : [\n {\n \"index\" : NumberInt(0), \n \"customName\" : null\n }, \n {\n \"index\" : NumberInt(1), \n \"customName\" : \"Comble\"\n }\n ], \n \"rooms\" : [\n {\n \"floorIndex\" : NumberInt(0), \n \"roomIndex\" : NumberInt(0), \n \"customName\" : null, \n \"area\" : 5.58, \n \"habitableArea\" : 5.58, \n \"kind\" : \"terraceext\", \n \"id\" : \"ac1ff9b3-6861-4e8a-907c-5017764b823c\", \n \"surfaceType\" : \"outside\", \n \"isInside\" : false, \n \"isHabitable\" : false, \n \"isMain\" : false, \n \"isOutside\" : true, \n \"isAnnex\" : false\n }, \n {\n \"floorIndex\" : NumberInt(0), \n \"roomIndex\" : NumberInt(1), \n \"customName\" : null, \n \"area\" : 5.05, \n \"habitableArea\" : 5.05, \n \"kind\" : \"bathRoom\", \n \"id\" : \"4927dd48-527c-4c9c-810b-f2eac81de59b\", \n \"surfaceType\" : \"habitable\", \n \"isInside\" : true, \n \"isHabitable\" : true, \n \"isMain\" : false, \n \"isOutside\" : false, \n \"isAnnex\" : false\n }, \n {\n \"floorIndex\" : NumberInt(0), \n \"roomIndex\" : NumberInt(2), \n \"customName\" : null, \n \"area\" : 15.91, \n \"habitableArea\" : 15.91, \n \"kind\" : \"livingdiningRoom\", \n \"id\" : \"74f7d362-d9fb-4195-aa92-1cd97331bb0c\", \n \"surfaceType\" : \"habitable\", \n \"isInside\" : true, \n \"isHabitable\" : true, \n \"isMain\" : true, \n \"isOutside\" : false, \n \"isAnnex\" : false\n }, \n {\n \"floorIndex\" : NumberInt(0), \n \"roomIndex\" : NumberInt(3), \n \"customName\" : null, \n \"area\" : 16.43, \n \"habitableArea\" : 16.43, \n \"kind\" : \"kitchen\", \n \"id\" : \"54ac840d-053c-48a2-83d0-084f440695bd\", \n \"surfaceType\" : \"habitable\", \n \"isInside\" : true, \n \"isHabitable\" : true, \n \"isMain\" : false, \n \"isOutside\" : false, \n \"isAnnex\" : false\n }, \n {\n \"floorIndex\" : NumberInt(0), \n \"roomIndex\" : NumberInt(4), \n \"customName\" : null, \n \"area\" : 28.88, \n \"habitableArea\" : 28.88, \n \"kind\" : \"garage\", \n \"id\" : \"e50cadf4-878d-425e-8bb1-6713f490c6ea\", \n \"surfaceType\" : \"annex\", \n \"isInside\" : true, \n \"isHabitable\" : false, \n \"isMain\" : false, \n \"isOutside\" : false, \n \"isAnnex\" : true\n }, \n {\n \"floorIndex\" : NumberInt(1), \n \"roomIndex\" : NumberInt(5), \n \"customName\" : null, \n \"area\" : 28.64, \n \"habitableArea\" : 28.64, \n \"kind\" : \"bedroom\", \n \"id\" : \"05cdd678-9747-4285-8b0e-be8e68b5ff93\", \n \"surfaceType\" : \"habitable\", \n \"isInside\" : true, \n \"isHabitable\" : true, \n \"isMain\" : true, \n \"isOutside\" : false, \n \"isAnnex\" : false\n }, \n {\n \"floorIndex\" : NumberInt(1), \n \"roomIndex\" : NumberInt(6), \n \"customName\" : null, \n \"area\" : 3.7, \n \"habitableArea\" : 3.7, \n \"kind\" : \"hopper\", \n \"id\" : \"78245c6e-2cba-4f80-855f-6abcd8940df2\", \n \"surfaceType\" : \"building\", \n \"isInside\" : false, \n \"isHabitable\" : false, \n \"isMain\" : false, \n \"isOutside\" : false, \n \"isAnnex\" : false\n }\n ], \n \"models\" : [\n {\n \"catalogId\" : \"model-6926\", \n \"fromSplitGroupId\" : null, \n \"addedAt\" : ISODate(\"2020-06-13T14:37:24.753+0000\"), \n \"width\" : 138.5, \n \"height\" : NumberInt(42), \n \"depth\" : NumberInt(45), \n \"isDimensionEdited\" : true, \n \"isMaterialEdited\" : true, \n \"updatedAt\" : ISODate(\"2020-06-18T07:17:46.693+0000\"), \n \"updatedMaterials\" : [\n \"material-701\", \n \"material-painting-component-Luxens-WHITE-2\"\n ], \n \"position\" : [\n 147.24388122558594, \n -577.5023803710938\n ], \n \"rotation\" : 
{\n \"x\" : NumberInt(0), \n \"y\" : -0.00000000000000024492937051703357, \n \"z\" : NumberInt(0)\n }, \n \"roomIndex\" : NumberInt(2), \n \"roomKind\" : \"livingdiningRoom\"\n }, \n ...\n ], \n \"materials\" : [\n {\n \"isModified\" : false, \n \"typeOfMerge\" : null, \n \"orig\" : \"catalog\", \n \"catalogId\" : \"material-559\", \n \"area\" : 5.58, \n \"surface\" : \"floor\", \n \"addedAt\" : ISODate(\"2020-06-14T09:02:13.221+0000\"), \n \"roomIndex\" : NumberInt(0), \n \"serializations\" : [\n\n ], \n \"roomKind\" : \"terraceext\"\n }, \n ...\n ], \n \"elecs\" : [\n {\n \"products\" : [\n\n ], \n \"quantities\" : [\n\n ], \n \"firstRoomIndex\" : NumberInt(0), \n \"roomIdsByFloor\" : [\n \"ac1ff9b3-6861-4e8a-907c-5017764b823c\", \n \"4927dd48-527c-4c9c-810b-f2eac81de59b\", \n \"74f7d362-d9fb-4195-aa92-1cd97331bb0c\", \n \"54ac840d-053c-48a2-83d0-084f440695bd\", \n \"e50cadf4-878d-425e-8bb1-6713f490c6ea\"\n ], \n \"roomKind\" : null\n }, \n {\n \"products\" : [\n\n ], \n \"quantities\" : [\n\n ], \n \"firstRoomIndex\" : NumberInt(5), \n \"roomIdsByFloor\" : [\n \"05cdd678-9747-4285-8b0e-be8e68b5ff93\", \n \"78245c6e-2cba-4f80-855f-6abcd8940df2\"\n ], \n \"roomKind\" : null\n }\n ], \n \"buildingBlocks\" : {\n \"openings\" : [\n {\n \"kind\" : \"opening.casementwindow\", \n \"addedAt\" : ISODate(\"2020-06-13T14:26:08.046+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:23:03.017+0000\"), \n \"width\" : NumberInt(80), \n \"height\" : NumberInt(95), \n \"nbCasement\" : NumberInt(2), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.door\", \n \"addedAt\" : ISODate(\"2020-06-14T08:16:45.601+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:22:09.224+0000\"), \n \"width\" : NumberInt(90), \n \"height\" : NumberInt(215), \n \"nbCasement\" : NumberInt(1), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.door\", \n \"addedAt\" : ISODate(\"2020-06-14T11:11:26.153+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:22:14.704+0000\"), \n \"width\" : NumberInt(90), \n \"height\" : NumberInt(215), \n \"nbCasement\" : NumberInt(1), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.door\", \n \"addedAt\" : ISODate(\"2020-06-14T13:06:50.586+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:22:21.611+0000\"), \n \"width\" : NumberInt(90), \n \"height\" : NumberInt(215), \n \"nbCasement\" : NumberInt(1), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.fixedwindow\", \n \"addedAt\" : ISODate(\"2020-06-15T08:28:50.610+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:47:18.607+0000\"), \n \"width\" : NumberInt(60), \n \"height\" : NumberInt(60), \n \"nbCasement\" : NumberInt(1), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.door\", \n \"addedAt\" : ISODate(\"2020-06-17T09:07:57.934+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-17T09:55:30.324+0000\"), \n \"width\" : NumberInt(240), \n \"height\" : NumberInt(200), \n \"nbCasement\" : NumberInt(2), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.door\", \n \"addedAt\" : ISODate(\"2020-06-14T08:16:45.601+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:21:51.487+0000\"), \n \"width\" : NumberInt(90), \n \"height\" : NumberInt(215), \n \"nbCasement\" : NumberInt(1), \n 
\"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.casementwindow\", \n \"addedAt\" : ISODate(\"2020-06-13T14:26:08.046+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:23:32.771+0000\"), \n \"width\" : NumberInt(80), \n \"height\" : NumberInt(95), \n \"nbCasement\" : NumberInt(2), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.casementwindow\", \n \"addedAt\" : ISODate(\"2020-06-14T08:38:39.199+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:48:34.485+0000\"), \n \"width\" : NumberInt(100), \n \"height\" : NumberInt(60), \n \"nbCasement\" : NumberInt(2), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }, \n {\n \"kind\" : \"opening.casementwindow\", \n \"addedAt\" : ISODate(\"2020-06-14T08:38:39.199+0000\"), \n \"updatedAt\" : ISODate(\"2020-06-18T07:49:31.403+0000\"), \n \"width\" : NumberInt(100), \n \"height\" : NumberInt(60), \n \"nbCasement\" : NumberInt(2), \n \"hasWindow\" : null, \n \"thirdFixed\" : null, \n \"atelierStyle\" : false\n }\n ], \n \"walls\" : {\n \"2\" : 15.955894497658939, \n \"7\" : 16.42514439746712, \n \"8\" : 6.761795964841681, \n \"20\" : 47.59999994721013\n }, \n \"swimmingPools\" : [\n\n ], \n \"stairways\" : [\n {\n \"addedAt\" : ISODate(\"2020-06-14T12:22:36.854+0000\"), \n \"bearing\" : true, \n \"height\" : NumberInt(270), \n \"position\" : [\n 282.6345520019531, \n -29.525814056396484\n ], \n \"railA\" : false, \n \"railB\" : false, \n \"stairDepth\" : NumberInt(25), \n \"stairOffset\" : NumberInt(1), \n \"stairThickness\" : NumberInt(3), \n \"stairWidth\" : NumberInt(90), \n \"stickSpacement\" : NumberInt(11), \n \"type\" : \"stairway.stairway-straight\", \n \"updatedAt\" : ISODate(\"2020-06-17T09:05:31.072+0000\")\n }\n ], \n \"roofV1\" : NumberInt(0), \n \"roofV2\" : NumberInt(8), \n \"elecScore\" : NumberInt(3)\n }, \n \"stats\" : {\n \"hasCatalogEvents\" : true, \n \"hasShoppingList\" : true, \n \"models\" : [\n {\n \"click\" : NumberInt(1), \n \"contextBrandClick\" : NumberInt(0), \n \"catalogBrandClick\" : NumberInt(0), \n \"shoppingView\" : NumberInt(0), \n \"shoppingClick\" : NumberInt(0), \n \"shoppingPrint\" : NumberInt(0), \n \"catalogId\" : \"model-6926\"\n }, \n ...\n ], \n \"modelTags\" : [\n {\n \"click\" : NumberInt(2), \n \"tagId\" : \"model-tags-78\"\n }, \n ...a\n ]\n }, \n \"id\" : null, \n \"title\" : \"PROJET 2502\", \n \"summary\" : {\n \"nbModel\" : NumberInt(32), \n \"nbMaterial\" : NumberInt(18), \n \"nbFloor\" : NumberInt(2), \n \"nbRoom\" : NumberInt(5), \n \"nbMainRoom\" : NumberInt(2), \n \"totalArea\" : 94.91, \n \"outsideArea\" : 5.58, \n \"habitableArea\" : 66.03, \n \"nbStairway\" : NumberInt(1), \n \"nbSwimmingPool\" : NumberInt(0), \n \"elecScore\" : NumberInt(3), \n \"nbRoofV2\" : NumberInt(8), \n \"nbRoofV1\" : NumberInt(0), \n \"hasShopping\" : true\n }, \n \"geolocation\" : null, \n \"metadata\" : {\n \"uuid\" : \"5b600380-203b-4c03-bd2b-1b6bb93aeb9e\", \n \"kazaplanVersion\" : \"3.58.8\", \n \"version\" : \"3.58.8\", \n \"creationDate\" : ISODate(\"2020-06-13T14:23:18.932+0000\"), \n \"lastUpdate\" : ISODate(\"2020-06-18T07:57:44.478+0000\")\n }\n }, \n \"user\" : {\n \"user_id\" : NumberInt(2860431), \n \"email\" : \"[email protected]\", \n \"first_name\" : null, \n \"last_name\" : null, \n \"country\" : \"fr\", \n \"created_at\" : ISODate(\"2020-06-10T06:56:33.000+0000\"), \n \"updated_at\" : ISODate(\"2020-06-10T06:56:33.000+0000\")\n 
}\n}\nupdated_at_1hint()", "text": "Thanks for your reply @Prasad_Saya,I created a compound index data.summary.elecScore_1_updated_at_1and tried to do this simple case :and I gotThis is a sample document:So, I tried to remove index I created previously updated_at_1 and now it works, it use compound index.My question is: Should I force index that I want to use with hint() in this case ?but this aggregate above, is still, very very very slow…Thanks !", "username": "FABIEN_RYCKOORT" }, { "code": "", "text": "It seems that the slow process is fetch with a lot of “complex” documents, like my previous example… I thought it was not a problem for mongo… but it’s a limitation for me… ", "username": "FABIEN_RYCKOORT" }, { "code": "updated_at_1hint()hint$match", "text": "So, I tried to remove index I created previously updated_at_1 and now it works, it use compound index.My question is: Should I force index that I want to use with hint() in this case ?Using the hint is considered as not a best practice. I suggest you have just one index, i.e., on the field “data.summary.elecScore”, and remove remaining indexes. See what the explain shows. This index will be applied in the $match stage, and I think it will get a better performance.", "username": "Prasad_Saya" }, { "code": "{ record_time: 1 }\n{\n sku: 1,\n warehouse: 1,\n record_time: -1,\n},\ndb.test.aggregate([\n // match only the docs, needed by the pipeline\n // record_time should have index for better performance\n {\n $match: {\n record_time: {\n $lte: new Date('2018-06-24T04:45:47.947Z'),\n },\n },\n },\n // it is important to do a reverse-sort here\n // so, $group stage does efficient document pick\n {\n $sort: {\n record_time: -1,\n },\n },\n {\n $group: {\n _id: {\n sku: '$sku',\n warehouse: '$warehouse',\n },\n doc: {\n $first: '$$CURRENT',\n },\n },\n },\n])\n{\n before_date: ISODate | String,\n sku: String,\n warehouse: String,\n doc_id: ObjectId,\n}\n{\n beforeDate: ISODate | String,\n sku: String,\n warehouse: String\n}\ndb.test2.updateOne(\n {\n beforeDate: <d>,\n warehouse: <w>,\n sku: <s>,\n }, \n { \n $set: { \n doc_Id: <ObjectId>, \n }\n }, \n { \n upsert: true,\n }\n);\ndb.test2.aggregate([\n {\n $match: {\n before_date: <d>,\n },\n },\n {\n $group: {\n docs_ids: {\n $addToSet: '$doc_id',\n }\n }\n },\n {\n $lookup: {\n from: 'test',\n localField: '$docs_ids',\n foreignField: '_id',\n as: 'latest',\n }\n },\n {\n $unwind: '$latest',\n },\n {\n $replaceRoot: {\n newRoot: '$latest',\n }\n }\n]);\n{\n before_date: ISODate | String,\n sku: String,\n warehouse: String,\n doc_id: { qty: Number, record_time: ISODate }, \n}\ndb.test2.find({ before_date: <d> });\n", "text": "$match stage you are talking about. If you re-read my post, you can see that the query slowness is not because of this matchYou should to use that $match stage that filters document by ‘record_time’, because it can cut out significant amount of docs. Also, you should add index { record_time: 1 } to speed up that $match stage.As @RemiJ already stated:Then I don’t think you need to sort en sku and warehouse but only on record_time because what is important is to get values in reverse record_time values. For sku and warehouse, the $group will do the job.You only need index on ‘record_time’ field.\nI did a test on your example documents on 10 Million collection size, and having only this index:instead of this:Gives about 25% of boost to get the same output.There is nothing match to optimize in your pipeline. You just need to add proper indexes. 
Here is the aggregation pipeline, that would be the most performant in your case:A different way of getting the desired result would be:You can achieve even more faster queries, if you add the latest record as nested object in ‘test2’ document, like this:with this you can use simple .find() operator to get the desired result:", "username": "slava" } ]
Optimization of pulling "latest record earlier than X" in a large (500 million record) collection using index
2020-03-11T19:39:04.582Z
Optimization of pulling “latest record earlier than X” in a large (500 million record) collection using index
6,976
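A minimal mongo-shell sketch of the single-field-index approach suggested in the thread above. It reuses the collection and field names already used there (test, sku, warehouse, record_time); the cutoff date is only a placeholder, and the explain() call is included so you can confirm on your own data that the $match and $sort stages actually use the new index.

    // Reuses the names from the thread; the cutoff date is a placeholder.
    db.test.createIndex({ record_time: 1 });

    var cutoff = ISODate("2020-01-01T00:00:00Z");
    var pipeline = [
        // with the index above, this $match becomes an index scan instead of a full collection scan
        { $match: { record_time: { $lte: cutoff } } },
        // reverse sort so that $first in the $group picks the latest record per (sku, warehouse)
        { $sort: { record_time: -1 } },
        { $group: {
            _id: { sku: "$sku", warehouse: "$warehouse" },
            qty: { $first: "$qty" },
            last_record_time: { $first: "$record_time" }
        } }
    ];

    // confirm the winning plan uses the record_time index before relying on it
    db.test.explain("executionStats").aggregate(pipeline, { allowDiskUse: true });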
null
[ "compass" ]
[ { "code": "{\n_id: '1234566788',\nurl: 'www.site.com',\nmemberships: [\n {0: \n {membershipType: 'free'},\n {1: \n {membershipType: 'paid'}\n]\n}\n{\n_id: 'xxxxxxx',\nsite: '12345667', // maps to _id from other doc\nstripeInfo: {\n customer: 'xyz' // This field will not exist if it is a free account\n }\n}\n[\n {\n '$addFields': {\n 'total_memberships': {\n '$cond': {\n 'if': {\n '$isArray': '$memberships'\n }, \n 'then': {\n '$size': '$memberships'\n }, \n 'else': 'NA'\n }\n }\n }\n }, {\n '$project': {\n '_id': 1, \n 'url': 1, \n 'memberships': 1, \n 'total_memberships': 1\n }\n }, {\n '$unwind': {\n 'path': '$memberships', \n 'preserveNullAndEmptyArrays': True\n }\n }, {\n '$group': {\n '_id': {\n '_id': '$_id', \n 'url': '$url'\n }, \n 'total_memberships': {\n '$sum': 1\n }, \n 'count_of_free_memberships': {\n '$sum': {\n '$cond': [\n {\n '$eq': [\n '$memberships.membershipType', 'free'\n ]\n }, 1, 0\n ]\n }\n }, \n 'count_of_paid_memberships': {\n '$sum': {\n '$cond': [\n {\n '$eq': [\n '$memberships.membershipType', 'paid'\n ]\n }, 1, 0\n ]\n }\n }\n }\n }, {\n '$lookup': { // connecting to members \n 'from': 'members', \n 'localField': '_id._id', \n 'foreignField': 'site', \n 'as': 'members'\n }\n }, {\n '$unwind': {\n 'path': '$members', \n 'preserveNullAndEmptyArrays': True\n }\n }, {\n '$addFields': {\n 'is_paid_member': {\n 'stripeInfo': {\n 'customer': {\n '$ifNull': [\n 1, 0\n ]\n }\n }\n }, \n 'is_free_member': {\n 'stripeInfo': {\n 'customer': {\n '$ifNull': [\n 0, 1\n ]\n }\n }\n }\n }\n }, {\n '$project': { // cleaning up/ flatening output\n '_id': '$_id._id', \n 'url': '$_id.url', \n 'total_memberships': 1, \n 'count_of_free_memberships': 1, \n 'count_of_paid_memberships': 1, \n 'is_paid_member': '$is_paid_member.stripeInfo.customer', \n 'is_free_member': '$is_free_member.stripeInfo.customer'\n }\n }, \n\n/* **This group is where the pipeline is timing out** */\n{\n '$group': {\n '_id': {\n '_id': '$_id', \n 'url': '$url', \n 'count_of_free_memberships': '$count_of_free_memberships', \n 'count_of_paid_memberships': '$count_of_paid_memberships', \n 'total_memberships': '$total_memberships'\n }, \n 'free_members': {\n '$sum': '$is_free_member'\n }, \n 'paid_members': {\n '$sum': '$is_paid_member'\n }, \n 'total_members': {\n '$sum': 1\n }\n }\n }, {\n '$project': {\n '_id': '$_id._id', \n 'url': '$_id.url', \n 'free_memberships': '$_id.count_of_free_memberships', \n 'paid_memberships': '$_id.count_of_paid_memberships', \n 'total_memberships': '$_id.total_memberships', \n 'free_members': 1, \n 'paid_members': 1, \n 'total_members': 1\n }\n }, {}\n]\nerror in $cursor stage ::caused by:: operator exceeded time limit\n", "text": "I’m working with relatively small (10k documents) database and need to combine collections in an aggregation pipeline. I’m using Compass for this, and it is timing out at a $group stage, but I can’t quite understand the reason.My goal is to create a view showing the number of paid/free/total memberships, and number of paid/free/total members. (members are different than memberships).Here is an example of the first doc:\naccount:The second doc:\nmembers:… and this is the pipeline:This Aggregation pipelinen will run if I limit the input to 1000, or increase the max time from 5000 to 20000, but otherwise will throw :I appreciate any advice in optimizing this aggregation pipeline.", "username": "James_Hall" }, { "code": "maxTimeMS", "text": "The default maxTimeMS is 5000 ms. 
You can increase it in the aggregation pipeline builder settings.2020-06-22_11-56-53 (1)1280×758 647 KB", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Thanks for the quick response and the gif is helpful. However, although raising the time limit works for my aggregation, when I save it to a view for the team they are getting a time out error. Is there a way to raise the time limit for the view?", "username": "James_Hall" }, { "code": "", "text": "Am I wrong, but doesn’t extending the time for a query seems like a way worse solution than optimizing it. I am reading (interpriting) this answer as there is no way to optimize this query. 5 secs? Is this accurate?", "username": "paul_N_A" }, { "code": "", "text": "Paul, I agree, with this relatively small set I keep running into time limits, and don’t want to have to instruct everyone to change settings each time they want to use a view.I came across another instance of this. If I increase the time limit I can view the collection, but the view shows total documents N/A:\n\n\n\nI’m unable to export this view, presumably because it doesn’t know how many rows to export.However if I set the limit to something higher than the total docs (I have 10.5k docs, set limit to 12k), then I am able to export to csv and things seem to work.I’m wondering what is causing this and if there is a way to avoid it in an aggregation stage.", "username": "James_Hall" }, { "code": "db.test_members.aggregate([\n {\n $group: {\n _id: '$site',\n totalMembers: {\n $sum: 1,\n },\n totalFreeMembers: {\n $sum: {\n $cond: {\n if: {\n $eq: [{ $type: '$stripeInfo.customer' }, 'missing'],\n },\n then: 1,\n else: 0\n }\n }\n }\n }\n },\n {\n // tried to avoid additional operations and conditional \n // in the $group stage, for the totalPaidMembers value, \n // that is why its calculation is moved to a separate stage, \n // so it just a simple subtraction of two integers\n $addFields: {\n totalPaidMembers: {\n $subtract: ['$totalMembers', '$totalFreeMembers'],\n }\n }\n },\n {\n $group: {\n _id: null,\n // accumulate membership sites to do 1 single $lookup\n membershipSites: {\n $push: '$_id',\n },\n // accumulate membership docs into a single objects, \n // so later it can be easily concatenated with members later\n members: {\n $push: '$$CURRENT',\n }\n }\n },\n {\n $lookup: {\n from: 'test_accounts',\n pipeline: [\n {\n // leave only necessary data in the pipeline\n $project: {\n _id: true,\n url: true,\n memberships: true,\n },\n },\n {\n $unwind: {\n path: '$memberships',\n // some 'membership' fields can contain empty array\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $group: {\n _id: '$_id',\n url: {\n $first: '$url',\n },\n totalFreeMemberships: {\n $sum: {\n $cond: {\n if: {\n $eq: ['$memberships.membershipType', 'free'],\n },\n then: 1,\n else: 0,\n }\n },\n },\n totalMemberships: {\n $sum: 1\n }\n }\n },\n {\n $addFields: {\n totalPaidMemberships: {\n $subtract: ['$totalMemberships', '$totalFreeMemberships'],\n }\n }\n },\n ],\n as: 'memberships',\n }\n },\n {\n $project: {\n result: {\n // at this point it is possible to hit the 100 MiB stage's limit\n $concatArrays: ['$memberships', '$members']\n }\n }\n },\n {\n $unwind: '$result',\n },\n {\n $group: {\n _id: '$result._id',\n url: {\n $max: '$result.url',\n },\n totalFreeMemberships: {\n $sum: '$result.totalFreeMemberships',\n },\n totalPaidMemberships: {\n $sum: '$result.totalPaidMemberships',\n },\n totalMemberships: {\n $sum: '$result.totalMemberships',\n },\n totalMembers: {\n $sum: 
'$result.totalMembers',\n },\n totalFreeMembers: {\n $sum: '$result.totalFreeMembers',\n },\n totalPaidMembers: {\n $sum: '$result.totalPaidMembers',\n }\n }\n },\n]).pretty();\ndb.test_accounts.insertMany([\n {\n _id: 'site1.com',\n url: 'url1.com',\n memberships: [\n { membershipType: 'free' },\n { membershipType: 'paid' },\n ],\n },\n {\n _id: 'site2.com',\n url: 'url2.com',\n memberships: [],\n },\n {\n _id: 'site3.com',\n url: 'url3.com',\n memberships: [\n { membershipType: 'free' },\n ],\n },\n {\n _id: 'site4.com',\n url: 'url4.com',\n memberships: [\n { membershipType: 'paid' },\n ],\n }\n]);\n\ndb.test_members.insertMany([\n {\n _id: 'm1',\n site: 'site1.com',\n stripeInfo: {\n customer: 'c1'\n }\n },\n {\n _id: 'm2',\n site: 'site1.com',\n stripeInfo: {\n customer: 'c2'\n }\n },\n {\n _id: 'm3',\n site: 'site1.com',\n stripeInfo: {\n customer: 'c3'\n }\n },\n {\n _id: 'm4',\n site: 'site3.com',\n stripeInfo: {\n customer: 'c4'\n }\n },\n {\n _id: 'm5',\n site: 'site4.com',\n stripeInfo: {\n customer: 'c5'\n }\n },\n {\n _id: 'm6',\n site: 'site1.com',\n stripeInfo: {}\n },\n {\n _id: 'm7',\n site: 'site4.com',\n stripeInfo: {}\n }\n]);\n{\n\t\"_id\" : \"site3.com\",\n\t\"url\" : \"url3.com\",\n\t\"totalFreeMemberships\" : 1,\n\t\"totalPaidMemberships\" : 0,\n\t\"totalMemberships\" : 1,\n\t\"totalMembers\" : 1,\n\t\"totalFreeMembers\" : 0,\n\t\"totalPaidMembers\" : 1\n}\n{\n\t\"_id\" : \"site4.com\",\n\t\"url\" : \"url4.com\",\n\t\"totalFreeMemberships\" : 0,\n\t\"totalPaidMemberships\" : 1,\n\t\"totalMemberships\" : 1,\n\t\"totalMembers\" : 2,\n\t\"totalFreeMembers\" : 1,\n\t\"totalPaidMembers\" : 1\n}\n{\n\t\"_id\" : \"site1.com\",\n\t\"url\" : \"url1.com\",\n\t\"totalFreeMemberships\" : 1,\n\t\"totalPaidMemberships\" : 1,\n\t\"totalMemberships\" : 2,\n\t\"totalMembers\" : 4,\n\t\"totalFreeMembers\" : 1,\n\t\"totalPaidMembers\" : 3\n}\n{\n\t\"_id\" : \"site2.com\",\n\t\"url\" : \"url2.com\",\n\t\"totalFreeMemberships\" : 0,\n\t\"totalPaidMemberships\" : 1,\n\t\"totalMemberships\" : 1,\n\t\"totalMembers\" : 0,\n\t\"totalFreeMembers\" : 0,\n\t\"totalPaidMembers\" : 0\n}\n", "text": "extending the time for a query seems like a way worse solution than optimizing it.Good point.@James_Hall, the main issue, that slows down your aggregation - is the $lookup stage. It is called for each membership document.You make your aggregation faster in one of the following ways:And for those datasets:The aggregation will provide this output:The above aggregation will work perfectly, if your collections are not huge. Otherwise, some stages may reach 100 MiB limitation. It can be negotiated with { allowDiskUse: true } option, but it will reduce the performance of the aggregation.Try to apply this aggregation to your datasets, if you will still have issues with the performance, consider other solutions, that I have suggested above.", "username": "slava" } ]
$group Aggregation in Compass is timing out
2020-06-20T21:53:38.388Z
$group Aggregation in Compass is timing out
6,152
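One detail the thread above leaves implicit: the Compass setting only raises the limit for your own session, so teammates opening the saved view still hit the default. A hedged sketch of reading the same data from the mongo shell (or a driver) with an explicit server-side time limit; the database, view and collection names below are placeholders, not names from the thread.

    // Placeholder names: "mydb", "membershipStats" (the saved view), "accounts".
    // Reading the view with a longer server-side time limit:
    db.getSiblingDB("mydb").membershipStats.find().maxTimeMS(120000);

    // Or running the pipeline directly against the source collection:
    var savedPipeline = [ /* the stages saved in the view */ ];
    db.getSiblingDB("mydb").accounts.aggregate(savedPipeline, {
        allowDiskUse: true,   // lets large $group stages spill to disk
        maxTimeMS: 120000     // 2 minutes instead of the 5-second Compass default
    });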
null
[]
[ { "code": "", "text": "", "username": "Aradhana_Singh" }, { "code": "", "text": "You have to do that outside the mongo shell. Many threads with the same issue.", "username": "steevej" }, { "code": "", "text": "Hi @Aradhana_Singh,This You have to do that outside the mongo shellLet us know if you are still facing any issue.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Thanks…Its resolved.", "username": "Aradhana_Singh" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Unable to connect to atlas through mongo shell
2020-06-24T10:13:24.196Z
Unable to connect to atlas through mongo shell
1,362
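For anyone hitting the same error: the point of the answers above is that the Atlas connection command is typed at the operating-system prompt (cmd, PowerShell or bash), not at an already-open mongo> prompt. A hedged one-liner for illustration only; the cluster address, database and username below are placeholders, not real values.

    mongo "mongodb+srv://cluster0.xxxxx.mongodb.net/test" --username m001-student

If a mongo shell session is already open, type exit first, then run the line above from the system prompt.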
null
[ "data-modeling" ]
[ { "code": "", "text": "Well, this more about Database design question. I have good idea about designing SQL for use cases like job portal but when it comes to Mongodb, i am a lot confused. Also, there is hardly any resource present on the web to guide me through.\nI have followed this link to understand patterns which made few things clear - Building with Patterns: A Summary | MongoDB BlogStill, I see there is lack of resources explaining dos and don’ts and hows and whys of database designing. Complicated schema design like Job portal requires a lot of thoughts - number of documents, data duplication, memory optimization, document size, in absence of joins what is the best ways to improve performance.If there are such samples of use case explained be guide me to those links/resources. Otherwise, for the community such resources should be added so that switching from sql to mongo becomes easier.", "username": "SAURAV_KUMAR" }, { "code": "", "text": "Hello @SAURAV_KUMAR welcome to the community,the move from SQL to noSQL is mainly to think in denormalized data. With an noSQL approach you do not want to have the least redundancy, more over your model is driven by the needs of your data requests plus some performance aspects.\nA very good start is a the MongoDB University Class: M320 Data Modeling. You may also can checkout:There is already a lot of information available, you may also can utilize the search, we had already some posts concerning data modeling or schema design.We will always encounter new situations, so in case the linked docs leave questions open please post them here in the community, we will try to get the best response.Hope that helps to get familiar with the data modelling.", "username": "michael_hoeller" } ]
Database design - Job Portal
2020-06-27T11:46:22.762Z
Database design - Job Portal
3,881
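To make the denormalization advice in the thread above concrete, here is a deliberately simplified, hypothetical job-posting document in the mongo shell. Every field name is invented for illustration and this is not a recommended schema for every job portal; the point is only that the document is shaped around the page that reads it.

    // Hypothetical shape: the job page needs the company blurb and a few recent
    // applications, so they are embedded (duplicated) rather than joined at read time.
    db.jobs.insertOne({
        title: "Backend Engineer",
        company: { name: "Acme Corp", location: "Berlin" },   // copied from a companies collection on purpose
        skills: ["mongodb", "node.js"],
        postedAt: new Date(),
        recentApplications: [                                  // only what the page shows; full history lives in its own collection
            { candidateId: ObjectId(), appliedAt: new Date(), status: "screening" }
        ]
    });

    // The query that drives this shape: one document fetch renders the whole job page.
    db.jobs.find({ skills: "mongodb", "company.location": "Berlin" });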
null
[]
[ { "code": "", "text": "While I try to download Enterprise server as mentioned, I am prompted to enter business email and Organization details. How can I download the MongoDB Enterprise Server without providing any business email details?", "username": "Chandrasekaran_Sivaraman" }, { "code": "", "text": "You can use your personal email id and say student for occupation", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Chandrasekaran_Sivaraman,This You can use your personal email id and say student for occupationPlease let us know if you are still facing any issue.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Thank you very much.", "username": "Chandrasekaran_Sivaraman" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
MongoDB Set up - Enterprise Server download
2020-06-13T14:58:16.468Z
MongoDB Set up - Enterprise Server download
1,396
https://www.mongodb.com/…20e72d9261ed.png
[ "security" ]
[ { "code": "", "text": "HI! everyone i’m just a newbie to MongoDB. I got this problem, i have granted to the role to a specific user for specific database. But the user doesn’t play their role.image820×246 5.79 KB\nThis user: has role to readimage1042×78 3.84 KB\ni login successfullyimage817×83 3.11 KB\nbut…image991×473 14 KB\nlike i said it can be insertedCan anyone help me pls ? i try to search and it said because i have not enable access. I tried to enable but work nothing…", "username": "Nam_Tran" }, { "code": "", "text": "I tried to enable but work nothingPlease elaborate on this\nHow did you enable access control\nWhat steps you followed", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_Tummala yes ! i know I haven’t enabled security, so i i edit the mongod.cfg (open with WordPad_) and found #security field is leave empty. So i put on authorization:“enabled” and ctrl +S. But it notification with this: Access to C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.cfg was denied !I cannot enable permission on C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.cfg. Maybe i use mongoDB on local host ? i tried to open properties -> security and grant all access. But i cannot save what i change! i tried to save another file (.cfg) at the same C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\ BUT it said contact adminstrator to do that and if i save on another disk like D:\\ it success", "username": "Nam_Tran" }, { "code": "", "text": "I understood that you have not enabled access control as it was clearly mentioned in your first post\nWhat was not clear to me was your comment “I tried to enable but work nothing”\nSo i was trying to understand how you started mongod like from command line or using config file etc.That’s why asked you explain more on it\nAnyway from your latest post it is clear the issue is with file permissions/privileges issue\nYou should have admin privs to edit cfg file(it is read only)Yes try on local host with your own config file placed on different drive.I think you already did this and confirmed success", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_Tummala Screenshot (220)717×763 12.6 KBThis is my config file which was edited ! and the comment below i explained my step! have u got any idea to help me? Thanks", "username": "Nam_Tran" }, { "code": "", "text": "Screenshot (219)1910×844 36.3 KB1st: i create user admin2 has role: read\n2nd: exit the shell\n3rd: i login with username admin2 and success\n4th i use database testAuthentication that has user “admin2”\n5th i insert to the collection of database testAuthentication and success which was not the role of user i logined!Have u got any idea to help me? 
Thanks", "username": "Nam_Tran" }, { "code": "", "text": "Your snapshot clearly shows you have not enabled access control\nJust adding security param in the config file is not enoughDid you restart your mongod after config changes?\nHow did you start your mongod?or it was up and running already\nWhy you are using mongod running on default port 27017\nYou mentioned you will use separate config file and D drive for your testingYou can start your own mongod on a different port with minimum parameters in the config file\nor\nspin up a mongod from command line\nmongod --port --dbpath bind_ip --auth\nOnce instance is up login with localhost exception\nCreate root user\nlogin with root user\nThen create other user with required rolePlease check mongo documentation for details", "username": "Ramachandra_Tummala" }, { "code": "", "text": "do we have to create new admin user every time we start mongod to enable access control. or is there any command to login with previous made admin user?", "username": "J_Ej" }, { "code": "", "text": "You don’t have to create the admin user everytime\nAre you not able to login with -u -p options?If you start new mongod on a different port then yes\nYou have to create new admin user", "username": "Ramachandra_Tummala" }, { "code": "", "text": "i logged in, but access control is still not enabled.\ni did it after starting mongod and then mongo\nmongo localhost/admin --username user -p", "username": "J_Ej" }, { "code": "", "text": "What steps you performed to enable access control?Please check", "username": "Ramachandra_Tummala" } ]
Need help enabling access control for MongoDB 4.2 on Windows
2020-06-15T20:37:30.081Z
Need help enabling access control for MongoDB 4.2 on Windows
2,807
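A compact sketch of the sequence described in the thread above, assuming mongod has been restarted with authorization enabled (security.authorization: "enabled" in the config file, or --auth on the command line) and you are connected through the localhost exception. The user names, passwords and database name are placeholders.

    // 1) First connection via the localhost exception: create the administrator.
    use admin
    db.createUser({ user: "root", pwd: "changeMe", roles: [ "root" ] });

    // 2) Reconnect (or authenticate) as that administrator.
    db.getSiblingDB("admin").auth("root", "changeMe");

    // 3) Create the read-only user on the target database.
    use testAuthentication
    db.createUser({ user: "admin2", pwd: "changeMe2",
                    roles: [ { role: "read", db: "testAuthentication" } ] });

    // With authorization actually enabled, admin2 can run find() here,
    // and inserts are rejected with an "unauthorized" error.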
null
[ "upgrading" ]
[ { "code": "", "text": "Hello All,\nI have a MongoDB database server running on 2.4 version. it consists around 90 dbs with total size of 4.5 TB.\nso i want to migrate these databases on new infrastructure with version 4.2.Question:-Thanks", "username": "mohit_gour" }, { "code": "", "text": "Welcome @mohit_gourThere are so many changes across the 7! major version updates I would not be comfortable with a dump/restore(even it worked) without rigorous testing.There a couple of threads on this topic.", "username": "chris" }, { "code": "", "text": "@chris, I’m not the topic originator, but first of all many thanks for those links!!I’ve spent like half a day to google out something clear and relevant on upgrade from 2.x to 4.x!Now I’ve found this topic and tried to google out the title of this topic ==> No results found for “Database upgrade from 2.4 to 4.2” Boaaah. I don’t know if it is OK not to have robots.txt on the domain root, but having Google Page Rank 0 (Zero!) for domain root of this forum is something really wrong…The main page also became more like scam-page if I compare to the great informative design of ~2010 as I remember it. Indeed, the main page proposed me to sign-in – Okay – but then I found me on MongoDB related cloud business… Guys. it is like 20 year old aggressive marketing style…Whatever… @chris said that, thank you for you answer a lot!!", "username": "Valery_Khamenya" }, { "code": "", "text": "Welcome @Valery_Khamenya and you’re welcome.There is a category for site feedback. I suggest that would be a good spot to make a post as the appropriate people would be more likely to see it in that category.", "username": "chris" } ]
Database upgrade from 2.4 to 4.2
2020-06-25T06:41:10.962Z
Database upgrade from 2.4 to 4.2
3,275
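One point worth making explicit for readers of the thread above: an in-place upgrade cannot jump straight from 2.4 to 4.2; it has to pass through each intermediate major release (2.6, 3.0, 3.2, 3.4, 3.6, 4.0, 4.2). Below is a hedged sketch of the checks typically run between hops once the 3.4+ releases are reached; consult the release notes for each version, and note the "3.6" value is just the example target of one hop.

    // Confirm which binary is actually serving before and after each hop.
    db.version();

    // From 3.4 onward, check and then raise the feature compatibility version
    // before moving on to the next release (example: after upgrading binaries to 3.6).
    db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });
    db.adminCommand({ setFeatureCompatibilityVersion: "3.6" });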
null
[ "charts", "on-premises" ]
[ { "code": "", "text": "Hello there!I’ve been able to deploy MongoDB-charts locally on my desktop successfully. Now I’m trying to deploy MongoDB-charts on an openshift cluster (Kubernetes based), but I’m founding several issues and a surprising lack of documentation about it that leads me to think that it may not be possible to so. Therefore, Is is possible to deploy MongoDB-charts on a container on a Kubernetes cluster?Thanks!", "username": "Matias_Salimbene" }, { "code": "", "text": "Hi MatiasIt can be done, but it’s not a configuration we are able to support. Please take a look at OpenShift Template - MongoDB Charts // Jack Alder for some inspiration.Tom", "username": "tomhollander" }, { "code": "", "text": "Yes, I got in touch with Jon, althought that templated didn’t quite work, it was helpful.Thanks for replying,Cheers.", "username": "Matias_Salimbene" }, { "code": "", "text": "@Matias_Salimbene,I’m also facing issues while deploying it in openshift.\nPlease let me know if you find the way to do.Cheers.", "username": "Gunsekar_Adisekar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Deploy MongoDB-Charts on Openshift/Kubernetes
2020-06-11T20:17:39.988Z
Deploy MongoDB-Charts on Openshift/Kubernetes
3,992
null
[]
[ { "code": "2020-06-19T11:13:47.647+0200 I CONTROL [main] Trying to start Windows service 'MongoDB'\n2020-06-19T11:13:47.648+0200 I CONTROL [initandlisten] MongoDB starting : pid=976 port=27017 dbpath=D:/corpuls.data/databases/mongodb/db 64-bit host=VS39362EH\n2020-06-19T11:13:47.648+0200 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2\n2020-06-19T11:13:47.648+0200 I CONTROL [initandlisten] db version v3.4.7\n2020-06-19T11:13:47.648+0200 I CONTROL [initandlisten] git version: cf38c1b8a0a8dca4a11737581beafef4fe120bcd\n2020-06-19T11:13:47.648+0200 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1u-fips 22 Sep 2016\n2020-06-19T11:13:47.649+0200 I CONTROL [initandlisten] allocator: tcmalloc\n2020-06-19T11:13:47.649+0200 I CONTROL [initandlisten] modules: none\n2020-06-19T11:13:47.649+0200 I CONTROL [initandlisten] build environment:\n2020-06-19T11:13:47.649+0200 I CONTROL [initandlisten] distmod: 2008plus-ssl\n2020-06-19T11:13:47.649+0200 I CONTROL [initandlisten] distarch: x86_64\n2020-06-19T11:13:47.649+0200 I CONTROL [initandlisten] target_arch: x86_64\n2020-06-19T11:13:47.649+0200 I CONTROL [initandlisten] options: { config: \"C:\\Program Files\\MongoDB\\3.4.7\\mongod.conf\", service: true, storage: { dbPath: \"D:/corpuls.data/databases/mongodb/db\", engine: \"wiredTiger\" }, systemLog: { destination: \"file\", logAppend: false, logRotate: \"rename\", path: \"D:/corpuls.data/databases/mongodb/log/mongod.log\", quiet: true, timeStampFormat: \"iso8601-local\", traceAllExceptions: false, verbosity: 0 } }\n2020-06-19T11:13:47.650+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7679M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),\n2020-06-19T11:13:47.659+0200 E STORAGE [initandlisten] WiredTiger error (0) [1592558027:659502][976:140705624250832], file:WiredTiger.wt, connection: WiredTiger.turtle: encountered an illegal file format or internal value\n2020-06-19T11:13:47.659+0200 E STORAGE [initandlisten] WiredTiger error (-31804) [1592558027:659502][976:140705624250832], file:WiredTiger.wt, connection: the process must exit and restart: WT_PANIC: WiredTiger library panic\n2020-06-19T11:13:47.659+0200 I - [initandlisten] Fatal Assertion 28558 at src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_util.cpp 361\n2020-06-19T11:13:47.659+0200 I - [initandlisten] \n\n***aborting after fassert() failure\n\n\n2020-06-19T11:13:47.813+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\util\\stacktrace_windows.cpp(239) mongo::printStackTrace+0x43\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\util\\signal_handlers_synchronous.cpp(180) mongo::`anonymous namespace'::printSignalAndBacktrace+0x74\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\util\\signal_handlers_synchronous.cpp(236) mongo::`anonymous namespace'::abruptQuit+0x85\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] ucrtbase.dll raise+0x1e7\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] ucrtbase.dll abort+0x31\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\util\\assert_util.cpp(172) mongo::fassertFailedWithLocation+0x181\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_util.cpp(361) 
mongo::`anonymous namespace'::mdb_handle_error+0x205\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\support\\err.c(275) __wt_eventv+0x376\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\support\\err.c(317) __wt_err+0x32\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\support\\err.c(530) __wt_illegal_value+0x5e\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\meta\\meta_turtle.c(288) __wt_turtle_read+0x2ab\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\meta\\meta_table.c(269) __wt_metadata_search+0x28e\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\conn\\conn_dhandle.c(269) __conn_btree_config_set+0x22\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\conn\\conn_dhandle.c(337) __wt_conn_btree_open+0x5c\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_dhandle.c(542) __wt_session_get_btree+0x113\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_dhandle.c(534) __wt_session_get_btree+0x1d5\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_dhandle.c(347) __wt_session_get_btree_ckpt+0xc4\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\cursor\\cur_file.c(567) __wt_curfile_open+0x1dd\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_api.c(388) __session_open_cursor_int+0x2f7\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_api.c(443) __wt_open_cursor+0x1b\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\meta\\meta_table.c(91) __wt_metadata_cursor+0x99\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\conn\\conn_api.c(2454) wiredtiger_open+0xb09\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_kv_engine.cpp(265) mongo::WiredTigerKVEngine::WiredTigerKVEngine+0x932\n2020-06-19T11:13:47.814+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_init.cpp(91) mongo::`anonymous namespace'::WiredTigerFactory::create+0x12f\n2020-06-19T11:13:47.815+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\service_context_d.cpp(202) mongo::ServiceContextMongoD::initializeGlobalStorageEngine+0x59c\n2020-06-19T11:13:47.815+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\db.cpp(599) mongo::`anonymous namespace'::_initAndListen+0x77b\n2020-06-19T11:13:47.815+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\db.cpp(841) mongo::`anonymous namespace'::initAndListen+0x27\n2020-06-19T11:13:47.815+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\util\\ntservice.cpp(560) mongo::ntservice::initService+0x53\n2020-06-19T11:13:47.815+0200 I CONTROL [initandlisten] sechost.dll LsaFreeMemory+0x512\n2020-06-19T11:13:47.815+0200 I 
CONTROL [initandlisten] KERNEL32.DLL BaseThreadInitThunk+0x14\n2020-06-19T11:13:47.815+0200 F - [initandlisten] Got signal: 22 (SIGABRT).\n2020-06-19T11:13:47.815+0200 I CONTROL [initandlisten] *** unhandled exception 0x0000000E at 0x00007FF894014C48, terminating\n2020-06-19T11:13:47.815+0200 I CONTROL [initandlisten] *** stack trace for unhandled exception:\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] KERNELBASE.dll RaiseException+0x68\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\util\\signal_handlers_synchronous.cpp(237) mongo::`anonymous namespace'::abruptQuit+0x9d\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] ucrtbase.dll raise+0x1e7\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] ucrtbase.dll abort+0x31\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\util\\assert_util.cpp(172) mongo::fassertFailedWithLocation+0x181\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_util.cpp(361) mongo::`anonymous namespace'::mdb_handle_error+0x205\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\support\\err.c(275) __wt_eventv+0x376\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\support\\err.c(317) __wt_err+0x32\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\support\\err.c(530) __wt_illegal_value+0x5e\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\meta\\meta_turtle.c(288) __wt_turtle_read+0x2ab\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\meta\\meta_table.c(269) __wt_metadata_search+0x28e\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\conn\\conn_dhandle.c(269) __conn_btree_config_set+0x22\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\conn\\conn_dhandle.c(337) __wt_conn_btree_open+0x5c\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_dhandle.c(542) __wt_session_get_btree+0x113\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_dhandle.c(534) __wt_session_get_btree+0x1d5\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_dhandle.c(347) __wt_session_get_btree_ckpt+0xc4\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\cursor\\cur_file.c(567) __wt_curfile_open+0x1dd\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_api.c(388) __session_open_cursor_int+0x2f7\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\session\\session_api.c(443) __wt_open_cursor+0x1b\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\meta\\meta_table.c(91) __wt_metadata_cursor+0x99\n2020-06-19T11:13:47.870+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\third_party\\wiredtiger\\src\\conn\\conn_api.c(2454) wiredtiger_open+0xb09\n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] 
mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_kv_engine.cpp(265) mongo::WiredTigerKVEngine::WiredTigerKVEngine+0x932\n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_init.cpp(91) mongo::`anonymous namespace'::WiredTigerFactory::create+0x12f\n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\service_context_d.cpp(202) mongo::ServiceContextMongoD::initializeGlobalStorageEngine+0x59c\n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\db.cpp(599) mongo::`anonymous namespace'::_initAndListen+0x77b\n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\db\\db.cpp(841) mongo::`anonymous namespace'::initAndListen+0x27\n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] mongod.exe ...\\src\\mongo\\util\\ntservice.cpp(560) mongo::ntservice::initService+0x53\n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] sechost.dll LsaFreeMemory+0x512\n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] KERNEL32.DLL BaseThreadInitThunk+0x14\n2020-06-19T11:13:47.871+0200 I - [initandlisten] \n2020-06-19T11:13:47.871+0200 I CONTROL [initandlisten] writing minidump diagnostic file C:\\Program Files\\MongoDB\\3.2020-06-19T09-13-47.mdmp\n2020-06-19T11:13:47.921+0200 I CONTROL [initandlisten] *** immediate exit due to unhandled exception", "text": "Our hardware storage crashed last Friday, and after the hardware was replaced the mongod service is not starting anymore.\nWe have rolled the system back to the last known state. On a separate VM we have attached the corrupt MongoDB data files to find a way to resolve the corruption.\nDoes anyone have an idea what I can do to fix this issue? A replica set was not enabled, so I cannot switch to another member.", "username": "Jerome_Bose" }, { "code": "", "text": "Welcome @Jerome_Bose. It looks like your failure resulted in the corruption of the data files. Unfortunately you are very unlikely to recover from this scenario without using a backup. You can attempt a repair, but do not be surprised if this does not work or does not recover all data.", "username": "chris" }, { "code": "", "text": "Hi Chris, OK, I can do this. Should I upgrade MongoDB to 4.x first, or should I try with the currently installed 3.4-based installation?", "username": "Jerome_Bose" }, { "code": "", "text": "Use your existing installation.", "username": "chris" } ]
WT_PANIC after restore of the crashed backend storage
2020-06-26T07:45:05.542Z
WT_PANIC after restore of the crashed backend storage
3,215
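A minimal sketch of the repair attempt discussed in the thread above, assuming the corrupted data files are still in place on the Windows host and that the already-installed 3.4 mongod binary is used. Both paths are placeholders, not values taken from the log, so substitute the real data directory; a repair rewrites files in place and may recover nothing, which is why the copy comes first.

REM Placeholder paths: point them at the actual (corrupted) dbpath and a spare location.
REM 1. Take a file-level copy of the data directory before touching anything.
robocopy "C:\data\db" "C:\backup\db-before-repair" /E
REM 2. Attempt the repair with the existing 3.4 binary, as advised above.
mongod --dbpath "C:\data\db" --repair
REM 3. If the repair finishes cleanly, start the service normally and validate the collections.

If the repair aborts with the same WT_PANIC, the storage engine metadata itself is damaged and, as noted in the replies, restoring from a backup is realistically the only way back.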
null
[ "atlas-search", "mongodb-live-2020" ]
[ { "code": "", "text": "I’ve add a few requests for the Jupyter Notebook that I used in my Atlas Search talk at MongoDB.live.Here it is!", "username": "Doug_Tarr" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Jupyter Notebook from Atlas Search talk at MongoDB.live
2020-06-26T21:25:14.803Z
Jupyter Notebook from Atlas Search talk at MongoDB.live
3,275
null
[ "performance", "anti-patterns" ]
[ { "code": "", "text": "Hi,Looking at my cluster I clicked the “Performance Advisor” to see if there were any useful indexes I could create. Appeared I have what I need (no slow queries for now). The I selected the tab “Schema Anti-patters”. This displays a potential problem:Reduce the size of documentsISSUES FOUNDHowever the particular collection has no document in that size range, and the statistic for the colection say:\nCOLLECTION SIZE: 30.42MB\nTOTAL DOCUMENTS: 4838\nSo I would estimate about 6Kb per document. Which is confirmed when I export a single document as JSON. What could be happening here?Thanks for any clues,Leo", "username": "Leo_Van_Snippenburg" }, { "code": "", "text": "Hi Leo,The anti-patern is triggered when there is at-least one document in the sample that exceeds 2MB. So, the messaging is actually wrong in the UI. We will fix thatThanks for flagging!Rez", "username": "Rez_Khan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Performance Advisor -> Schema Anti-patterns
2020-06-03T20:00:33.146Z
Performance Advisor -> Schema Anti-patterns
3,004
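A small sketch for checking this locally, run in the legacy mongo shell after switching to the right database; the collection name "mycoll" is a placeholder. It walks the collection and prints the largest BSON document, which can then be compared with the 2MB trigger described above (a full scan is only reasonable for small collections like the ~4,800-document one in this thread).

// Placeholder collection name; run "use <your-db>" first.
var maxBytes = 0;
db.mycoll.find().forEach(function (doc) {
    var size = Object.bsonsize(doc);          // BSON size of this document, in bytes
    if (size > maxBytes) { maxBytes = size; }
});
print("Largest document: " + maxBytes + " bytes (~" + (maxBytes / 1024).toFixed(1) + " KB)");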
null
[ "stitch" ]
[ { "code": "", "text": "Hello Guys,I’m trying to load the webhook response into the iframe but it seems stitch server is throwing x-frame-options header to deny and I tried to remove usingresponse.removeHeader(‘x-frame-options’)It doesn’t work. Any Suggestions?", "username": "aqib_pandit" }, { "code": "", "text": "Hi – Unfortunately we do not support adjusting this header for security reasons. If you like you can make a request for a feature update here.", "username": "Drew_DiPalma" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Loading Webhook response in iframe
2020-06-04T18:31:53.787Z
Loading Webhook response in iframe
2,993
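A quick, hedged way to see which frame-related headers the hosted endpoint actually returns before filing the feature request mentioned above. The URL is a placeholder, not one taken from this thread; substitute your own webhook or static-hosting address.

# Dump only the response headers and filter for x-frame-options.
curl -sS -D - -o /dev/null "https://<your-app-url>/" | grep -i "x-frame-options"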