Columns: image_url (string, 113–131 chars), tags (sequence), discussion (list), title (string, 8–254 chars), created_at (string, 24 chars), fancy_title (string, 8–396 chars), views (int64, 73–422k)
null
[]
[ { "code": "", "text": "Hi everyone, I’m Rong (Sharon) from Toronto, Canada. Glad to be a part of this community! I’ve been learning MongoDB since 2018 and currently holding MongoDB DBA and Developer Associate Certification. I would like to learn from everyone here and contribute to the community as well. Cheers.", "username": "Sharon_Xue" }, { "code": "", "text": "Hi @Sharon_Xue and welcome to the community. We’re glad to have you join us and we look forward to your contributions and learning from you as well.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Sharon_Xue,Welcome to the community! Congrats on your certifications! We’re thrilled to have you here and excited to learn from you. ", "username": "Jamie" } ]
Hello from Canada
2020-06-12T16:45:41.007Z
Hello from Canada
2,007
null
[ "atlas-search" ]
[ { "code": "", "text": "Hi,I want to know if is possible to create a Search Index with filter.Example: I got 26 millions documents and i want to index in the search only recent documents.Thanks.", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi Jonathan -You can filter to recent documents using the range operator and the compound filter operator.In this case, you would have to set your own range (eg, “in the last 12 months”) but it might have the potential to miss results if there are no matching results in that time frame.You can also do a more exhaustive search using the near operator to score recent documents higher, but in this case you would be matching against all the documents in the query.Here is an example using date.", "username": "Doug_Tarr" }, { "code": "", "text": "Hi,You dont understand, i think, i dont want to filter when i search.I got collection with 100millions documents for example.\nI want to know if is possible to index in the search index only documents after date by example (This give like 1millions documents not 100millions to index in search).\nThis filter can reduce size of my search index and only use fresh documents with search engine.Thanks", "username": "Jonathan_Gautier" }, { "code": "", "text": "Take a look at\nandIt might help to do what you want to achieve.", "username": "steevej" }, { "code": "", "text": "I dont think i can create Atlas Search Index with this method ?I want to create partial index in Atlas SearchI am talking about this index creation, in Atlas Search\nimage634×616 26 KB", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi Jonathan -We don’t currently support partial indexes but it is something we are considering.You can vote for that feature on our Feedback Page and you will be notified if it gets implemented.", "username": "Doug_Tarr" }, { "code": "", "text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search Filter Search Index
2020-06-09T22:02:23.310Z
Atlas Search Filter Search Index
4,453
null
[ "aggregation" ]
[ { "code": "{\n \"_id\" : ObjectId(\"5ee7ba7ea3ac0c3192c19e58\"),\n \"channel\" : [ \n {\n { 0 : [ {\"name\":\"shubham\"},{\"phone\": \" 839393\"] },\n { 1 : [ {\"name\":\"Anant\"},{\"phone\": \" 839393\"] }\n } ]\n}\nvar cursor = db.getCollection('user_name').find({ })\nwhile (cursor.hasNext()) {\nvar record = cursor.next();\nprint(record._id + ',' + record.channel.0.name + ',' + record.channel.1.name )}\n", "text": "Hi all, I am trying to access nested value which is stored in number field like [0] ,[1]\nplease find the document belowI want to find channel.0.name and channel.1.name using curser, But as field value is [0] and [1] I am not able to access the data.Please let me know if any altenative for this. I am using below query…", "username": "shubham_udata" }, { "code": "print(record._id + ‘,’ + record.channel[0].name + ‘,’ + record.channel[1].name )}\ndb.your_collection.find({ 'channel.0.name': 'Bill' });\n", "text": "If you need to access that data with print(), you can do it like this:If you you want to query by that object properties, do this:", "username": "slava" }, { "code": "", "text": "Hi Slava,\nThank you .I want to use print the data and your method worked,But it is not printing for all conditions when there are many documents in which some field doesnt contain any data present(undefined).{\n“_id” : ObjectId(“5ee7ba7ea3ac0c3192c19e58”),\n“channel” : [\n{\n{ 0 : [ {“name”:“shubham”},{“phone”: \" 839393\"] },\n{ 1 : [ {“name”:“Anant”},{“phone”: \" 839393\"] }\n} ],{\n“_id” : ObjectId(“5ee7ba7ea3ac0c3192c19e60”),\n“channel” : [\n{\n{ 1 : [ {“name”:“Mauank”},{“phone”: \" 839393\"] }\n} ]\n}\n}In above for second document I am not having channel[0] but having channel[1].\nI want output should be:\nObjectId Name Name\n5ee7ba7ea3ac0c3192c19e58. shubham Anant\n5ee7ba7ea3ac0c3192c19e60. undefined Mayank.But while printing the data I am getting error as TypeError:record.channel[0] is undefined…\nPlease let me now if any solution for this…Many Thanks in advance…", "username": "shubham_udata" }, { "code": "let name0 = record.channel[0] ? record.channel[0].name : null;\nlet name1 = record.channel[1] ? record.channel[1].name : null;\nprint(`${record._id}, ${name0}, ${name2}`)};\n", "text": "Try this:", "username": "slava" }, { "code": "let name0 = record.channel[0] ? record.channel[0].name : null;\nlet name1 = record.channel[1] ? record.channel[1].name : null;\n", "text": "This worked!! Thanks a lot Slava for your prompt responce.", "username": "shubham_udata" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to print a nested value whose field name is a number
2020-06-16T12:54:17.177Z
How to print a nested value whose field name is a number
3,738
null
[ "aggregation" ]
[ { "code": "\"$elemMatch\": {\n \"item_field_1\": { $eq: \"$item_field_2\" }\n }\n", "text": "Hey,This is my basic $elemMatchI cannot find how to make a self reference un $elemMatch, that possible ?", "username": "Anthony_Moynet" }, { "code": "{\n $match: {\n $expr: { $eq: ['$item', '$$var']},\n },\n},\n", "text": "You need to reference your variable with two dollar signs ($$) and make a self-reference with one dollar sign ($). Like this:", "username": "slava" } ]
[Aggregation] Reference current iteration field in $elemMatch
2020-05-01T10:43:37.850Z
[Aggregation] Reference current iteration field in $elemMatch
1,527
null
[]
[ { "code": "", "text": "After trying the new MongoDb Realm platforms, I have to say it is a huge step up from Realm Cloud. Now that MongoDb Realm is in public beta. When is it safe to use the new SDK:s in production?In my case I never went live with Realm Cloud, so I don’t have any data to migrate. My idea would be to port the app to the new SDK:s first and prepare the datamodels and usage for sync, then slowly move user data over to the synced realms.But the question is. When is it safe to use the SDK:s in production? The SDK:s seem to be moving at a high speed now, hence my question. At the same time, there are big differences between the old SDK:s and new SDK:s making it hard to maintain two codebases.Excited about the new platform!//Simon", "username": "Simon_Persson" }, { "code": "", "text": "Hi Simon –We expect the SDKs to be relatively stable within the next month or so. While we don’t anticipate significantly changing the existing syntax, during this time we are adding lots of functionality as well as looking for bugs in usability/support surface area. Once we address remaining features and feel it’s gotten a good amount of bake time we’ll remove the ‘Beta’ labels on the SDKs.On the Realm Sync side, we’ll probably be in beta for a bit longer. We’re working with a few folks now to push the limits of Sync and make sure that we can meet the MongoDB Cloud SLAs. Once we feel that we’ve worked out any usability issues and are seeing good success with some production applications we’ll likely mark Sync as GA.Hope that helps!", "username": "Drew_DiPalma" } ]
What is the MongoDB definition of beta?
2020-06-15T12:19:02.175Z
What is the MongoDB definition of beta?
1,653
null
[ "aggregation" ]
[ { "code": "resultE=db.emp.aggregate([\n{ \"$group\":{ \n \"_id\":None ,\n \"maxSalary\": { \"$max\": \"$salary\" },\n \"empgrp\": { \"$push\": {\n \"_id\": \"$_id\",\n \"name\": \"$emp_name\",\n \"sal\": \"$salary\"\n }}\n}},\n{ \"$project\": {\n \"maxSalary\": 1,\n \"emps\": {\n \"$setDifference\": [\n { \"$map\": {\n \"input\": \"$empgrp\",\n \"as\": \"emp\",\n \"in\": {\n \"$cond\": [ \n { \"$eq\": [ \"$maxSalary\", \"$$emp.sal\" ] },\n \"$$emp\",\n False\n ]\n }\n }},\n [False]\n ]\n }\n}} ]\n", "text": "To execute queries doing analysis using aggregate and find similar records, $push is used. But, it gives error on certain MongoDB version i.e. 4.2.3.For example, to execute query “Find Employees with the highest salary”, I have used $push in aggregate functions and allowDiskUse (code mentioned below). But, it shows an error in MongoDB version 4.2.3. \" $push used too much memory and cannot spill to disk\" .What is an alternate options other than application-level join?, allowDiskUse=True\n)", "username": "Prof_Monika_Shah" }, { "code": "db.emp.aggregate([\n {\n $sort: {\n salary: -1,\n },\n },\n {\n $limit: 1,\n },\n]);\ndb.emp.find({}).sort({ salary: -1 }).limit(1);\n", "text": "Why not just just:OR?", "username": "slava" } ]
$push used too much memory and cannot spill to disk
2020-05-18T10:08:46.231Z
$push used too much memory and cannot spill to disk
6,314
null
[ "data-modeling" ]
[ { "code": "", "text": "I have data in Neo4j which I want to move to MongoDB, is there a good solution to this ?", "username": "Vijay_jindal" }, { "code": "", "text": "Hi @Vijay_jindal and welcome to the forum!is there a good solution to this ?Generally before migrating data into MongoDB, instead of just directly copying data you should also consider the schema design to better serve the application usage. See also:Once you have a data model in mind, you can either write an application using one of the supported Neo4J drivers to read the data, and utilise one of the supported MongoDB drivers to write the data into.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi @wan,Thank you for answering. I am currently using python MongoDB driver to perform the migration. Yes, I am looking into the data model.Thank you,\nVijay Jindal.", "username": "Vijay_jindal" }, { "code": "", "text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data migration from Neo4j to MongoDB
2020-06-11T10:34:55.884Z
Data migration from Neo4j to MongoDB
3,201
null
[]
[ { "code": "", "text": "Im having two models called user and user type i need to get the combined result of the user and user type by using the common user-id.how to achieve this condition.", "username": "Aruljothy_Sundaramo1" }, { "code": "", "text": "Can you leave document examples for each collection?\nAnd, please, make an example of ‘combined result’ you want to achieve.", "username": "slava" } ]
Aggregation with two collections
2020-06-16T10:17:38.693Z
Aggregation with two collections
1,097
null
[ "queries" ]
[ { "code": "{\n \"_id\" : ObjectId(\"5ee7ba7ea3ac0c3192c19e58\"),\n \"workspace_id\" : 1,\n \"attributes\" : {\n \"first_name\" : \"John\",\n \"last_name\" : \"Doe\",\n \"email\" : \"[email protected]\",\n \"phone_number\" : \"+1234567890\",\n \"gender\" : \"Male\"\n },\n \"events\" : [ \n {\n \"event\" : \"search_results\",\n \"event_data\" : {\n \"query\" : \"Shirts\",\n \"results\" : [ \n {\n \"name\" : \"Red T-Shirt XL\"\n }\n ]\n }\n }, \n {\n \"event\" : \"search_results\",\n \"event_data\" : {\n \"query\" : \"Shirts\",\n \"results\" : [ \n {\n \"name\" : \"Blue T-Shirt XL\"\n }\n ]\n }\n }\n ],\n \"created_at\" : ISODate(\"2020-03-17T09:58:02.000Z\"),\n \"updated_at\" : ISODate(\"2020-03-17T09:58:02.000Z\")\n}\ndb.getCollection('clients').find(\n\n{\n \"$and\": [\n {\n \"workspace_id\": 1,\n \"deleted_at\": {\n \"$exists\": false\n }\n },\n {\n \"events.event\": \"search_results\",\n \"$and\": [\n {\"events.event_data.results.name\" : \"Red T-Shirt XL\"},\n {\"events.event_data.results.name\" : \"Blue T-Shirt XL\"},\n ]\n }\n ]\n}\n)\n {\n \"event\" : \"search_results\",\n \"event_data\" : {\n \"query\" : \"Shirts\",\n \"results\" : [ \n {\n \"name\" : \"Red T-Shirt XL\"\n },\n {\n \"name\" : \"Blue T-Shirt XL\"\n }\n ]\n }\n }\n", "text": "For testing purposes, I created a collection with only one document:Now, I want to query all clients who had “search_results” but whose results are those where Red T-Shirt XL and Blue T-Shirt XL appeared.This is my query:However, it returns this ONE record, but shouldn’t… It should if Red and Blue shirts are in results array. Like this:Thank you!", "username": "jellyx" }, { "code": "db.getCollection('clients').find(\n\n{\n \"$and\": [\n {\n \"workspace_id\": 1,\n \"deleted_at\": {\n \"$exists\": false\n }\n },\n {\n \"events.event\": \"search_results\",\n \"$and\": [\n {\"events.event_data.results.0.name\" : \"Red T-Shirt XL\"},\n {\"events.event_data.results.1.name\" : \"Blue T-Shirt XL\"},\n ]\n }\n ]\n}\n)\n \"$and\": [\n {\"events.event_data.results.0.name\" : \"Red T-Shirt XL\"},\n {\"events.event_data.results.1.name\" : \"Blue T-Shirt XL\"},\n ]", "text": "Hi.\nDid you try to use the array selector?Please try with this.\nI’ve added the array selector to the $and query.", "username": "Valentine_Soin" }, { "code": "", "text": "@Valentine_SoinMany thanks for the solution. This definitely works! Not sure how I forgot that I could try something like this… Cheers!", "username": "jellyx" }, { "code": " \"$and\": [\n {\"events.event_data.results.0.name\" : \"Blue T-Shirt XL\"},\n {\"events.event_data.results.1.name\" : \"Red T-Shirt XL\"},\n ]\n \"$and\": [\n {\"events.event_data.results.0.name\" : \"Red T-Shirt XL\"},\n {\"events.event_data.results.1.name\" : \"Blue T-Shirt XL\"},\n ] \n", "text": "@Valentine_SoinUnfortunately, this doesn’t work because:and this:does not give the same results. 
I just switched Red & Blue words… Seems like I need to figure out something different.", "username": "jellyx" }, { "code": "{\n \"_id\" : ObjectId(\"5ee7ba7ea3ac0c3192c19e58\"),\n \"workspace_id\" : 1,\n \"attributes\" : {\n \"first_name\" : \"John\",\n \"last_name\" : \"Doe\",\n \"email\" : \"[email protected]\",\n \"phone_numbe\" : \"1-620-410-3432 x97756\",\n \"gender\" : \"Female\"\n },\n \"events\" : [ \n {\n \"event\" : \"search_results\",\n \"event_data\" : {\n \"query\" : \"Shirts\",\n \"results\" : [ \n {\n \"name\" : \"Red T-Shirt XL\"\n }\n ]\n },\n \"created_at\" : ISODate(\"2020-06-15T09:58:02.000Z\")\n }, \n {\n \"event\" : \"search_results\",\n \"event_data\" : {\n \"query\" : \"Shirts\",\n \"results\" : [ \n {\n \"name\" : \"Blue T-Shirt XL\"\n }\n ]\n },\n \"created_at\" : ISODate(\"2020-05-15T09:58:02.000Z\")\n }\n ],\n \"created_at\" : ISODate(\"2020-03-17T09:58:02.000Z\"),\n \"updated_at\" : ISODate(\"2020-03-17T09:58:02.000Z\")\n}\ndb.getCollection('clients').find(\n{\n \"$and\": [\n {\n \"workspace_id\": 1,\n \"deleted_at\": {\n \"$exists\": false\n }\n },\n {\n \"events.event\": \"search_results\",\n \"events.created_at\" : { $gte : new ISODate(\"2020-01-01T20:15:31Z\")},\n \"$and\": [\n {\"events.event_data.results.name\" : \"Red T-Shirt XL\"},\n {\"events.event_data.results.name\" : \"Blue T-Shirt XL\"},\n ]\n }\n ]\n}\n)\ndb.getCollection('clients').find(\n{\n \"$and\": [\n {\n \"workspace_id\": 1,\n \"deleted_at\": {\n \"$exists\": false\n }\n },\n {\n \"events.event\": \"search_results\",\n \"events\" : {\n \"$elemMatch\" : {\n \"created_at\" : { $gte : new ISODate(\"2020-06-01T20:15:31Z\")},\n \"$and\": [\n {\"event_data.results.name\" : \"Red T-Shirt XL\"},\n {\"event_data.results.name\" : \"Blue T-Shirt XL\"},\n ]\n } \n }\n }\n ]\n}\n)", "text": "Just a quick update.In case we have something like this:and execute the following query:This should return nothing because as you can see one event is created in May and one in June and I would like to have only those who searched for Red and Blue T-Shirt after June 1st.Not sure what I am missing here. When using the query above, it returns one record.EDIT:Seems like I got something:", "username": "jellyx" } ]
Search nested array if all conditions are satisfied
2020-06-15T19:02:02.509Z
Search nested array if all conditions are satisfied
2,497
null
[ "sharding" ]
[ { "code": "", "text": "Can I distribute shard already inserted data?I tried to do it alone, but it didn’t work.Step)It’s awkward because it’s a translator, sorry.\nAsk me if you don’t understand anything.", "username": "Kim_Hakseon" }, { "code": "", "text": "Can I distribute shard already inserted data?If the collection (with the inserted data) is sharded, the collection’s data is distributed among the shards - based upon the Shard Key.Also see:I tried to do it alone, but it didn’t work.Review your steps once again for deploying a sharded cluster: Deploy a Sharded Cluster. If you had followed the procedures then there should not be any problem.", "username": "Prasad_Saya" } ]
How to distribute already inserted data across shards?
2020-06-16T05:21:51.770Z
How to distribute already inserted data across shards?
2,213
null
[]
[ { "code": "", "text": "is it possible to deploy mongodb atlas in private (on-premise) cloud using kubernates.Thanks in advance", "username": "Praba_Karan" }, { "code": "", "text": "Welcome to the community @Praba_Karan!MongoDB Atlas is currently only available as a managed service for AWS, GCP, or Azure. You can set up a secure network peering connection to a virtual private cloud (VPC) within one of those providers, but all of the Atlas infrastructure is managed by MongoDB.If you want to manage your own on-premises MongoDB deployment using Kubernetes, you can use the MongoDB Enterprise Kubernetes Operator together with MongoDB Cloud Manager or Ops Manager.There is also a MongoDB Community Kubernetes Operator if you are not using Cloud/Ops Manager or MongoDB Enterprise.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas in private (on-premise) cloud
2020-06-16T08:25:27.957Z
MongoDB Atlas in private (on-premise) cloud
6,112
null
[ "aggregation" ]
[ { "code": "{\n buyer,\n seller,\n // ...lots of other fields, like:\n project,\n product,\n creationDate,\n}\n{\n name,\n emails: [],\n // etc.\n}\n", "text": "I have a collection Sale, with these fields:Both seller and buyer are ObjectIds that reference a User collection:Buyers and sellers can be involved in multiple Sales.\nMy goal is to get a list of unique users that are attached to a bunch of Sales. Sortable on name and email (last one in the array).I know how to use $lookup to populate the buyer and seller from the User coll. And if it were only one user-filed in Sale I would use $group to get a unique list.I have also tried to get the buyers and sellers in separate queries and then use javascript to get a list of unique ids, but this takes way too long.Is there and efficient aggregation way to do this?", "username": "devboell" }, { "code": "const pipeline = [\n // first, group ids of buyers and sellers\n {\n $group: {\n _id: null,\n buyersIds: {\n $addToSet: '$buyer',\n },\n sellersIds: {\n $addToSet: '$seller',\n },\n },\n },\n // then, concat grouped ids into single array \n // to fetch users with one single $lookup\n // ids in the array will not be unique, \n // but that does not matter for $lookup\n \n // PS: you can $unwind 'userIsds' \n // and then $group them like we did with 'buyersIds' above,\n // if unique values in 'usersIds' bother you :)\n {\n $project: {\n usersIds: {\n $concatArrays: ['$buyersIds', '$sellersIds'],\n },\n },\n },\n // use $lookup to join users by ids from accumulated array\n {\n $lookup: {\n from: '',\n localField: 'usersIds',\n foreignField: '_id',\n as: 'users',\n },\n },\n // unwind 'users' to be bring out each user to upper level\n {\n $unwind: '$users',\n },\n // make each user a root object in the pipeline array\n {\n $replaceRoot: {\n newRoot: '$users',\n },\n },\n // sort user objects, like you want\n {\n $sort: {\n name: 1,\n },\n },\n];\n// destruct your emails array to be able to sort by its values\n{\n $unwind: '$emails',\n},\n// do the sort\n{\n $sort: {\n name: 1,\n emails: -1,\n },\n},\n// re-construct user objects like they were before $unwind\n{\n group: {\n _id: '$_id',\n emails: {\n $push: '$emails',\n },\n // and for the rest fields\n sampleField: {\n $first: '$sampleFields',\n },\n // ...\n},\n{\n // note, that ObjectId must link to _id from the collection, \n // that you would do the $lookup from.\n buyer: { _id, ObjectId, emails: [], name: 'Rob' },\n seller: { _id, ObjectId, emails: [], name: 'Bob' },\n // ... other fields\n}\n", "text": "Here is a sample aggregation, that you can use to sort your sellers&buyers users by their user.name (or other fields):To be able to sort by user email, that lays in the array, you will need to replace the $sort stage in above aggregation to this:This will work, but it can be not as performant, as you may want Instead, consider adding redundancy to your documents:The above structure will allow sort buyers and sellers very fast and without $lookups and any aggregations. But, still, you will not be able to sort by ‘emails’ array.With the above structure can do one of the following:\na) add ‘primaryEmail’ to buyer and seller objects and then you can sort by both fields: ‘name’ and ‘primaryEmail’. 
Additionally, you can use regex to search users that have part of a searched ‘name’/‘email’.\nb) simply filter out users and to not have specified email with $in operator and then sort by ‘name’ field.", "username": "slava" }, { "code": " const users = await Sale.aggregate([\n { $match: { project: { $in: projectIds } } },\n {\n $group: {\n _id: null,\n buyersIds: {\n $addToSet: '$buyer',\n },\n sellersIds: {\n $addToSet: '$seller',\n },\n },\n },\n {\n $project: {\n usersIds: {\n $concatArrays: ['$buyersIds', '$sellersIds'],\n },\n },\n },\n { $unwind: '$usersIds' },\n { $group: { _id: null, usersIds: { $addToSet: '$usersIds' } } },\n {\n $lookup: {\n from: 'users',\n localField: 'usersIds',\n foreignField: '_id',\n as: 'users',\n },\n },\n {\n $unwind: '$users',\n },\n {\n $replaceRoot: {\n newRoot: '$users',\n },\n },\n {\n $project: {\n name: 1,\n email: { $arrayElemAt: ['$emails', -1] },\n },\n },\n { $sort: { email: 1 } },\n { $skip: 10000 },\n { $limit: 10 },\n ])\n", "text": "Thanks a lot Slava!This is what I have now:I went with a different approach for the email sorting, and it is performant enough for now.One thing I’d like to double check with you, is what you said about the uniqueness. I added the lines you suggested, but because you said it jokingly, I am a bit confused. Do you mean those lines are optional, and uniqueness is somehow ensured further down the pipeline?", "username": "devboell" }, { "code": "", "text": "My joke referred to $unwind’ing array of emails, sort and then grouping (to much work for mongodb).\nYou avoided that by selecting one email to sortBy, not the whole array. So, it is OK.Another thing is that why not write that email to a separate field in the document, like I suggested?\nThat way you would need to select it only 1 time, when you write a doc to a collection (plus times, when you change that email). Currently you select that email for each of 10.000 documents (due to your aggregation example) for each read operation. That means, if you run this query 100 times per hour, mongodb will do the exact same job 100 * 10.000 =1,000.000 times per hour And if you need this $skip and $limit for pagination, why not put in at the beginning of the aggregation?\nIn this case mongodb would do everything you have before $skip operation for 10 documents, not 10.000 As for uniqueness of Ids, I do not think you will have any benefit in performance if you make the items in the array unique. IMHO, for $lookup it will not make a big difference, but you will spend some calculation time on $unwind and $group stages to make them unique. And, of course, more stages = more code to read & maintain.", "username": "slava" }, { "code": "", "text": "Another thing is that why not write that email to a separate field in the document, like I suggested?\nThat way you would need to select it only 1 time, when you write a doc to a collection (plus times, when you change that email). Currently you select that email for each of 10.000 documents (due to your aggregation example) for each read operation. That means, if you run this query 100 times per hour, mongodb will do the exact same job 100 * 10.000 =1,000.000 times per hour That makes sense, but that would require quite some refactoring on the api and the frontend. 
I’ll make a note of it.And if you need this $skip and $limit for pagination, why not put in at the beginning of the aggregation?\nIn this case mongodb would do everything you have before $skip operation for 10 documents, not 10.000I tried it, but I get very different results. The list of users needs to go through the sorting stage first, before selecting a sublist to return to the client.As for uniqueness of Ids, I do not think you will have any benefit in performance if you make the items in the array unique. IMHO, for $lookup it will not make a big difference, but you will spend some calculation time on $unwind and $group stages to make them unique. And, of course, more stages = more code to read & maintain.My concern here is not so much performance, but that the user list does not contain duplicates when the data is presented to the end user.", "username": "devboell" } ]
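On the uniqueness concern at the end: $setUnion de-duplicates while it concatenates, so the $concatArrays stage plus the extra $unwind/$group pair can collapse into a single stage. A sketch of the replacement $project:

```js
{
  $project: {
    // $setUnion returns the distinct members of both arrays in one pass
    usersIds: { $setUnion: ['$buyersIds', '$sellersIds'] }
  }
}
```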
Group on two fields that point to the same collection
2020-06-14T21:26:48.963Z
Group on two fields that point to the same collection
9,333
null
[ "node-js" ]
[ { "code": "node --trace-warnings ...node --inspect --trace-warnings \n require('mongodb')\n internal/process/warning.js:33 (node:10416) Warning: Accessing non-existent property 'count' of module exports inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:814:11)\n at Object.get (internal/modules/cjs/loader.js:825:5)\n at Object.<anonymous> (D:\\迅雷下载\\GitHub\\control-platform-server-consolidation\\common\\temp\\node_modules\\mongodb\\lib\\operations\\db_ops.js:16:42)\n at Module._compile (internal/modules/cjs/loader.js:1185:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1205:10)\n at Module.load (internal/modules/cjs/loader.js:1034:32)\n at Function.Module._load (internal/modules/cjs/loader.js:923:14)\n at Module.require (internal/modules/cjs/loader.js:1074:19)\n at require (internal/modules/cjs/helpers.js:72:18)\n at Object.<anonymous> (D:\\迅雷下载\\GitHub\\control-platform-server-consolidation\\common\\temp\\node_modules\\mongodb\\lib\\operations\\collection_ops.js:5:23)\n writeOut @ internal/process/warning.js:33\n internal/process/warning.js:33 (node:10416) Warning: Accessing non-existent property 'findOne' of module exports inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:814:11)\n at Object.get (internal/modules/cjs/loader.js:825:5)\n at Object.<anonymous> (D:\\迅雷下载\\GitHub\\control-platform-server-consolidation\\common\\temp\\node_modules\\mongodb\\lib\\operations\\db_ops.js:17:44)\n at Module._compile (internal/modules/cjs/loader.js:1185:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1205:10)\n at Module.load (internal/modules/cjs/loader.js:1034:32)\n at Function.Module._load (internal/modules/cjs/loader.js:923:14)\n at Module.require (internal/modules/cjs/loader.js:1074:19)\n at require (internal/modules/cjs/helpers.js:72:18)\n at Object.<anonymous> (D:\\迅雷下载\\GitHub\\control-platform-server-consolidation\\common\\temp\\node_modules\\mongodb\\lib\\operations\\collection_ops.js:5:23)\n writeOut @ internal/process/warning.js:33\n internal/process/warning.js:33 (node:10416) Warning: Accessing non-existent property 'remove' of module exports inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:814:11)\n at Object.get (internal/modules/cjs/loader.js:825:5)\n at Object.<anonymous> (D:\\迅雷下载\\GitHub\\control-platform-server-consolidation\\common\\temp\\node_modules\\mongodb\\lib\\operations\\db_ops.js:18:43)\n at Module._compile (internal/modules/cjs/loader.js:1185:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1205:10)\n at Module.load (internal/modules/cjs/loader.js:1034:32)\n at Function.Module._load (internal/modules/cjs/loader.js:923:14)\n at Module.require (internal/modules/cjs/loader.js:1074:19)\n at require (internal/modules/cjs/helpers.js:72:18)\n at Object.<anonymous> (D:\\迅雷下载\\GitHub\\control-platform-server-consolidation\\common\\temp\\node_modules\\mongodb\\lib\\operations\\collection_ops.js:5:23)\n writeOut @ internal/process/warning.js:33\n internal/process/warning.js:33 (node:10416) Warning: Accessing non-existent property 'updateOne' of module exports inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:814:11)\n at Object.get (internal/modules/cjs/loader.js:825:5)\n at Object.<anonymous> (D:\\迅雷下载\\GitHub\\control-platform-server-consolidation\\common\\temp\\node_modules\\mongodb\\lib\\operations\\db_ops.js:19:46)\n at Module._compile 
(internal/modules/cjs/loader.js:1185:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1205:10)\n at Module.load (internal/modules/cjs/loader.js:1034:32)\n at Function.Module._load (internal/modules/cjs/loader.js:923:14)\n at Module.require (internal/modules/cjs/loader.js:1074:19)\n at require (internal/modules/cjs/helpers.js:72:18)\n at Object.<anonymous> (D:\\迅雷下载\\GitHub\\control-platform-server-consolidation\\common\\temp\\node_modules\\mongodb\\lib\\operations\\collection_ops.js:5:23)\n", "text": "(node:10624) Warning: Accessing non-existent property ‘count’ of module exports inside circular dependency\n(Use node --trace-warnings ... to show where the warning was created)(node:10624) Warning: Accessing non-existent property ‘findOne’ of module exports inside circular dependency(node:10624) Warning: Accessing non-existent property ‘remove’ of module exports inside circular dependency\n(node:10624) Warning: Accessing non-existent property ‘updateOne’ of module exports inside circular dependencyWelcome to Node.js v14.0.0.\nType “.help” for more information.", "username": "masx200_masx200" }, { "code": "", "text": "Hi @masx200_masx200,Thanks for reporting this issue. This was also raised on NODE-2536, and a fix has been committed. This will be available in the next release of MongoDB Node.js driver (v3.5.7).Regards,\nWan.", "username": "wan" }, { "code": "node --trace-warnings ...", "text": "(node:19276) Warning: Accessing non-existent property ‘count’ of module exports inside circular dependency\n(Use node --trace-warnings ... to show where the warning was created)\n(node:19276) Warning: Accessing non-existent property ‘findOne’ of module exports inside circular dependency\n(node:19276) Warning: Accessing non-existent property ‘remove’ of module exports inside circular dependency\n(node:19276) Warning: Accessing non-existent property ‘updateOne’ of module exports inside circular dependencyexact same problem so, how can i fix it, and fix mongoDB version is already released or not?\nmongoose version 5.9.13 and mongoDB atlas 4.2.6, Node.js v 14.2.0\nwith these dependencies the same error is showing again", "username": "Koushik_Saha" }, { "code": "mongoosenodenode --trace-warningsnode_modules", "text": "Hi @Koushik_Saha, welcome!how can i fix it, and fix mongoDB version is already released or not?MongoDB Node.js driver version 3.5.7 that contains the fix has been released. Based on my test, I can confirmed that this version removed the warnings for circular dependencies.mongoose version 5.9.13 and mongoDB atlas 4.2.6, Node.js v 14.2.0Looking at mongoose v5.9.13 package.json, it is importing the correct driver version. Also, based on a quick test with mongoose v5.9.13 and node v14.2.0 it is working correctly for me.As the warning message suggested, could you execute with node --trace-warnings to show where the exact warning was created and post the trace ?Also, ensure within your node_modules that you have the correct version of the driver v3.5.7. 
If you’re still encountering the issue, please provide how you’re triggering these warning messages.Regards,\nWan.", "username": "wan" }, { "code": "(node:6444) Warning: Accessing non-existent property 'count' of module exports inside circular dependency\n(Use `node --trace-warnings ...` to show where the warning was created)\n(node:6444) Warning: Accessing non-existent property 'findOne' of module exports inside circular dependency\n(node:6444) Warning: Accessing non-existent property 'remove' of module exports inside circular dependency\n(node:6444) Warning: Accessing non-existent property 'updateOne' of module exports inside circular dependency\nPS D:\\PlayGround\\Social.Forum> node --inspect --trace-warnings\nDebugger listening on ws://127.0.0.1:9229/7ce5b368-4c76-42ad-9033-a129e99c5072\nFor help, see: https://nodejs.org/en/docs/inspector\nWelcome to Node.js v14.3.0.\nType \".help\" for more information.\n> require('mongodb')\n<ref *1> [Function (anonymous)] {\n MongoError: [Function: MongoError],\n MongoNetworkError: [Function: MongoNetworkError],\n MongoTimeoutError: [Function: MongoTimeoutError],\n MongoServerSelectionError: [Function: MongoServerSelectionError],\n MongoParseError: [Function: MongoParseError],\n MongoWriteConcernError: [Function: MongoWriteConcernError],\n MongoBulkWriteError: [Function: BulkWriteError],\n BulkWriteError: [Function: BulkWriteError],\n Admin: [Function: Admin],\n MongoClient: [Function: MongoClient] { connect: [Circular *1] },\n Db: [Function: Db] {\n SYSTEM_NAMESPACE_COLLECTION: 'system.namespaces',\n SYSTEM_INDEX_COLLECTION: 'system.indexes',\n SYSTEM_PROFILE_COLLECTION: 'system.profile',\n SYSTEM_USER_COLLECTION: 'system.users',\n SYSTEM_COMMAND_COLLECTION: '$cmd',\n SYSTEM_JS_COLLECTION: 'system.js'\n },\n Collection: [Function: Collection],\n Server: [Function: Server],\n ReplSet: [Function: ReplSet],\n Mongos: [Function: Mongos],\n ReadPreference: [Function: ReadPreference] {\n PRIMARY: 'primary',\n PRIMARY_PREFERRED: 'primaryPreferred',\n SECONDARY: 'secondary',\n SECONDARY_PREFERRED: 'secondaryPreferred',\n NEAREST: 'nearest',\n fromOptions: [Function (anonymous)],\n isValid: [Function (anonymous)],\n primary: ReadPreference { mode: 'primary', tags: undefined },\n primaryPreferred: ReadPreference { mode: 'primaryPreferred', tags: undefined },\n secondary: ReadPreference { mode: 'secondary', tags: undefined },\n secondaryPreferred: ReadPreference { mode: 'secondaryPreferred', tags: undefined },\n nearest: ReadPreference { mode: 'nearest', tags: undefined }\n },\n GridStore: [Function: GridStore] {\n DEFAULT_ROOT_COLLECTION: 'fs',\n DEFAULT_CONTENT_TYPE: 'binary/octet-stream',\n IO_SEEK_SET: 0,\n IO_SEEK_CUR: 1,\n IO_SEEK_END: 2,\n exist: [Function (anonymous)],\n list: [Function (anonymous)],\n read: [Function (anonymous)],\n readlines: [Function (anonymous)],\n unlink: [Function (anonymous)]\n },\n Chunk: [Function: Chunk] { DEFAULT_CHUNK_SIZE: 261120 },\n Logger: [Function: Logger] {\n reset: [Function (anonymous)],\n currentLogger: [Function (anonymous)],\n setCurrentLogger: [Function (anonymous)],\n filter: [Function (anonymous)],\n setLevel: [Function (anonymous)]\n },\n AggregationCursor: [Function: AggregationCursor],\n CommandCursor: [Function: CommandCursor],\n Cursor: [Function: Cursor],\n GridFSBucket: [Function: GridFSBucket],\n CoreServer: [Function: Server] {\n enableServerAccounting: [Function (anonymous)],\n disableServerAccounting: [Function (anonymous)],\n servers: [Function (anonymous)]\n },\n CoreConnection: [Function: 
Connection],\n Binary: <ref *2> [Function: Binary] {\n BUFFER_SIZE: 256,\n SUBTYPE_DEFAULT: 0,\n SUBTYPE_FUNCTION: 1,\n SUBTYPE_BYTE_ARRAY: 2,\n SUBTYPE_UUID_OLD: 3,\n SUBTYPE_UUID: 4,\n SUBTYPE_MD5: 5,\n SUBTYPE_USER_DEFINED: 128,\n Binary: [Circular *2]\n },\n Code: <ref *3> [Function: Code] { Code: [Circular *3] },\n Map: <ref *4> [Function: Map] { Map: [Circular *4] },\n DBRef: <ref *5> [Function: DBRef] { DBRef: [Circular *5] },\n Double: <ref *6> [Function: Double] { Double: [Circular *6] },\n Int32: <ref *7> [Function: Int32] { Int32: [Circular *7] },\n Long: <ref *8> [Function: Long] {\n fromInt: [Function (anonymous)],\n fromNumber: [Function (anonymous)],\n fromBits: [Function (anonymous)],\n fromString: [Function (anonymous)],\n INT_CACHE_: { '0': [Long], '1': [Long], '-1': [Long] },\n TWO_PWR_16_DBL_: 65536,\n TWO_PWR_24_DBL_: 16777216,\n TWO_PWR_32_DBL_: 4294967296,\n TWO_PWR_31_DBL_: 2147483648,\n TWO_PWR_48_DBL_: 281474976710656,\n TWO_PWR_64_DBL_: 18446744073709552000,\n TWO_PWR_63_DBL_: 9223372036854776000,\n ZERO: Long { _bsontype: 'Long', low_: 0, high_: 0 },\n ONE: Long { _bsontype: 'Long', low_: 1, high_: 0 },\n NEG_ONE: Long { _bsontype: 'Long', low_: -1, high_: -1 },\n MAX_VALUE: Long { _bsontype: 'Long', low_: -1, high_: 2147483647 },\n MIN_VALUE: Long { _bsontype: 'Long', low_: 0, high_: -2147483648 },\n TWO_PWR_24_: Long { _bsontype: 'Long', low_: 16777216, high_: 0 },\n Long: [Circular *8]\n },\n MinKey: <ref *9> [Function: MinKey] { MinKey: [Circular *9] },\n MaxKey: <ref *10> [Function: MaxKey] { MaxKey: [Circular *10] },\n ObjectID: <ref *11> [Function: ObjectID] {\n index: 1063505,\n createPk: [Function: createPk],\n createFromTime: [Function: createFromTime],\n createFromHexString: [Function: createFromHexString],\n isValid: [Function: isValid],\n ObjectID: [Circular *11],\n ObjectId: [Circular *11]\n },\n ObjectId: <ref *11> [Function: ObjectID] {\n index: 1063505,\n createPk: [Function: createPk],\n createFromTime: [Function: createFromTime],\n createFromHexString: [Function: createFromHexString],\n isValid: [Function: isValid],\n ObjectID: [Circular *11],\n ObjectId: [Circular *11]\n },\n Symbol: <ref *12> [Function: Symbol] { Symbol: [Circular *12] },\n Timestamp: <ref *13> [Function: Timestamp] {\n fromInt: [Function (anonymous)],\n fromNumber: [Function (anonymous)],\n fromBits: [Function (anonymous)],\n fromString: [Function (anonymous)],\n INT_CACHE_: { '0': [Timestamp], '1': [Timestamp], '-1': [Timestamp] },\n TWO_PWR_16_DBL_: 65536,\n TWO_PWR_24_DBL_: 16777216,\n TWO_PWR_32_DBL_: 4294967296,\n TWO_PWR_31_DBL_: 2147483648,\n TWO_PWR_48_DBL_: 281474976710656,\n TWO_PWR_64_DBL_: 18446744073709552000,\n TWO_PWR_63_DBL_: 9223372036854776000,\n ZERO: Timestamp { _bsontype: 'Timestamp', low_: 0, high_: 0 },\n ONE: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 0 },\n NEG_ONE: Timestamp { _bsontype: 'Timestamp', low_: -1, high_: -1 },\n MAX_VALUE: Timestamp { _bsontype: 'Timestamp', low_: -1, high_: 2147483647 },\n MIN_VALUE: Timestamp { _bsontype: 'Timestamp', low_: 0, high_: -2147483648 },\n TWO_PWR_24_: Timestamp { _bsontype: 'Timestamp', low_: 16777216, high_: 0 },\n Timestamp: [Circular *13]\n },\n BSONRegExp: <ref *14> [Function: BSONRegExp] {\n BSONRegExp: [Circular *14]\n },\n Decimal128: <ref *15> [Function: Decimal128] {\n fromString: [Function (anonymous)],\n Decimal128: [Circular *15]\n },\n connect: [Circular *1],\n instrument: [Function (anonymous)]\n}\n> (node:1372) Warning: Accessing non-existent property 'count' of module exports 
inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:823:11)\n at Object.get (internal/modules/cjs/loader.js:837:5)\n at Object.<anonymous> (D:\\PlayGround\\Social.Forum\\node_modules\\mongodb\\lib\\operations\\db_ops.js:16:42)\n at Module._compile (internal/modules/cjs/loader.js:1200:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1220:10)\n at Module.load (internal/modules/cjs/loader.js:1049:32)\n at Function.Module._load (internal/modules/cjs/loader.js:937:14)\n at Module.require (internal/modules/cjs/loader.js:1089:19)\n at require (internal/modules/cjs/helpers.js:73:18)\n at Object.<anonymous> (D:\\PlayGround\\Social.Forum\\node_modules\\mongodb\\lib\\operations\\collection_ops.js:5:23)\n(node:1372) Warning: Accessing non-existent property 'findOne' of module exports inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:823:11)\n at Object.get (internal/modules/cjs/loader.js:837:5)\n at Object.<anonymous> (D:\\PlayGround\\Social.Forum\\node_modules\\mongodb\\lib\\operations\\db_ops.js:17:44)\n at Module._compile (internal/modules/cjs/loader.js:1200:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1220:10)\n at Module.load (internal/modules/cjs/loader.js:1049:32)\n at Function.Module._load (internal/modules/cjs/loader.js:937:14)\n at Module.require (internal/modules/cjs/loader.js:1089:19)\n at require (internal/modules/cjs/helpers.js:73:18)\n at Object.<anonymous> (D:\\PlayGround\\Social.Forum\\node_modules\\mongodb\\lib\\operations\\collection_ops.js:5:23)\n(node:1372) Warning: Accessing non-existent property 'remove' of module exports inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:823:11)\n at Object.get (internal/modules/cjs/loader.js:837:5)\n at Object.<anonymous> (D:\\PlayGround\\Social.Forum\\node_modules\\mongodb\\lib\\operations\\db_ops.js:18:43)\n at Module._compile (internal/modules/cjs/loader.js:1200:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1220:10)\n at Module.load (internal/modules/cjs/loader.js:1049:32)\n at Function.Module._load (internal/modules/cjs/loader.js:937:14)\n at Module.require (internal/modules/cjs/loader.js:1089:19)\n at require (internal/modules/cjs/helpers.js:73:18)\n at Object.<anonymous> (D:\\PlayGround\\Social.Forum\\node_modules\\mongodb\\lib\\operations\\collection_ops.js:5:23)\n(node:1372) Warning: Accessing non-existent property 'updateOne' of module exports inside circular dependency\n at emitCircularRequireWarning (internal/modules/cjs/loader.js:823:11)\n at Object.get (internal/modules/cjs/loader.js:837:5)\n at Object.<anonymous> (D:\\PlayGround\\Social.Forum\\node_modules\\mongodb\\lib\\operations\\db_ops.js:19:46)\n at Module._compile (internal/modules/cjs/loader.js:1200:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1220:10)\n at Module.load (internal/modules/cjs/loader.js:1049:32)\n at Function.Module._load (internal/modules/cjs/loader.js:937:14)\n at Module.require (internal/modules/cjs/loader.js:1089:19)\n at require (internal/modules/cjs/helpers.js:73:18)\n at Object.<anonymous> (D:\\PlayGround\\Social.Forum\\node_modules\\mongodb\\lib\\operations\\collection_ops.js:5:23)\n", "text": "It shows that after run ‘node --inspect --trace-warnings’ and require(mongodb)\nBut all the errors are same:", "username": "Koushik_Saha" }, { "code": "mongodbnode_modules/mongodb/package.jsonmongodbnpm update mongodb", "text": "Hi 
@Koushik_Saha,Please check the version of MongoDB Node.js driver (mongodb) being referenced in the project. For example, within node_modules/mongodb/package.json make sure that the version of mongodb package is greater than or equal to 3.5.7.If you’re using an older version, you could run npm update mongodb to update the package.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Sir,\ni use these packages on my project mongoose v3.5.8 and mongoose works behind with mongodb package v3.5.7“dependencies”: {\n“@sendgrid/mail”: “^7.1.1”,\n“async”: “^3.2.0”,\n“bad-words”: “^3.0.3”,\n“bcryptjs”: “^2.4.3”,\n“connect-mongo”: “^3.2.0”,\n“cookie-parser”: “^1.4.5”,\n“dotenv”: “^8.2.0”,\n“ejs”: “^3.1.3”,\n“express”: “^4.17.1”,\n“express-session”: “^1.17.1”,\n“lodash”: “^4.17.15”,\n“method-override”: “^3.0.0”,\n“mongoose”: “^5.9.16”,\n“multer”: “^1.4.2”,\n“socket.io”: “^2.3.0”\n}I waste your so many times but i don’t know how to fix it. i removed node_modules file then run “npm i” but that not work ether. so, what do i do?\nThat’s the mongoose package.json in node_modules file.–>“dependencies”: {\n“bson”: “^1.1.4”,\n“kareem”: “2.3.1”,\n“mongodb”: “3.5.7”,\n“mongoose-legacy-pluralize”: “1.0.2”,\n“mpath”: “0.7.0”,\n“mquery”: “3.2.2”,\n“ms”: “2.1.2”,\n“regexp-clone”: “1.0.0”,\n“safe-buffer”: “5.1.2”,\n“sift”: “7.0.1”,\n“sliced”: “1.0.1”\n}", "username": "Koushik_Saha" }, { "code": "package.jsonnpm installmongodbnpm list mongodbnpm list -g mongodb", "text": "Hi @Koushik_Saha,Unfortunately I’m unable to reproduce the issue you’re seeing. I copied the dependencies above into package.json, executed npm install with node version 14.3.0 and inspection of mongodb package is as expected.Could you provide the output of :Also, do you have any other node_modules within your application project ?Regards,\nWan.", "username": "wan" }, { "code": "-- [email protected] ", "text": "Sir, These are commands you want me to run.\n& i think connect-mongo is the problem right?\nIf it is what do i do. Because connect mongo v3.2.0 is latest and that uses v3.5.5 mongodb driver not latest one.PS D:\\PlayGround\\Social.Forum> npm list mongodb\[email protected] D:\\PlayGround\\Social.Forum\n±- [email protected]\n| -- [email protected][email protected]\n`-- [email protected] D:\\PlayGround\\Social.Forum> npm list -g mongodb\nC:\\Users\\SHARK\\AppData\\Roaming\\npm\n`-- (empty)", "username": "Koushik_Saha" }, { "code": "connect-mongomongodb3.5.5mongodbconnect-mongoconnect-mongomongodb\"^3.1.0\"mongodbnpm list [email protected] /path/to/appName\n\n├─┬ [email protected]\n│ └── [email protected] \n└─┬ [email protected]\n └── [email protected] \n", "text": "Hi @Koushik_Saha,That’s correct, in your project connect-mongo package depends on mongodb package version 3.5.5. You should update the mongodb package that is referenced as a dependency by connect-mongo. You could try to uninstall and re-install back connect-mongo package.Note that connect-mongo v3.2.0 requirement for mongodb is \"^3.1.0\", which means you should be able to use the latest stable version of mongodb package. For example, the result of my npm list mongodb :Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Sir, its done\nNo error is showing up now.\n& sir this is the 1st project (node, mongodb etc.) 
that i’m working on for IGNOU BCA 6th Sem.\nSir if you have some time just check this and let me know something is wrong or not.\ni used some github projects and merge & change some thing.\nThe Project Website- http://aroot-user-social-forum.herokuapp.com/", "username": "Koushik_Saha" }, { "code": "", "text": "Hi @Koushik_Saha,I’m glad that you have managed to solve the dependency problem.Well done on your first MongoDB/Node project. If you would like to learn more about Node.JS and MongoDB, I’d recommend to enrol in M220JS: MongoDB for JavaScript Developers a free online course from MongoDB University.If you have any questions, please open a new discussion thread with the relevant information.Best regards,\nWan.", "username": "wan" }, { "code": "", "text": "Sir,\nactually i thought the if the error was gone everything was all right but no, the session was saved when any one signed in on the site but now after sign in sometime later account automatically signed up but giving an error but the code was same from the beginning i never faced that problem before and maybe the problem is connect-mongo package. how to i fix this. or any alternative for connect-mongo.And sorry for too many questions.", "username": "Koushik_Saha" }, { "code": "", "text": "Hi @Koushik_Saha,Please open a new topic discussion as your current issue is different to the original question of this topic.\nIn order to help others help you better, when posting in the new topic please provide:Regards,\nWan", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
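For readers landing on the same warnings, the steps that resolved this thread reduce to refreshing the nested driver dependency; collected here as a sketch, run from the project root:

```sh
npm ls mongodb                         # every listed copy should be >= 3.5.7
rm -rf node_modules package-lock.json  # drop the stale dependency resolution
npm install                            # re-resolves connect-mongo's mongodb dep
npm ls mongodb                         # confirm the fixed version is picked up
```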
Warning: Accessing non-existent property 'count' of module exports inside circular dependency
2020-04-23T10:06:12.312Z
Warning: Accessing non-existent property &lsquo;count&rsquo; of module exports inside circular dependency
97,595
null
[ "indexes" ]
[ { "code": "{\n \"appName\": \"MongoDB Shell\",\n \"command\": {\n \"count\": \"collname\",\n \"query\": {\n \"consumer.consumed\": {\n \"$exists\": true\n }\n },\n \"fields\": {},\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1591257092,\n \"i\": 1\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": \"PHh4eHh4eD4=\",\n \"$type\": \"00\"\n },\n \"keyId\": {\n \"$numberLong\": \"6790010151542718465\"\n }\n }\n },\n \"$db\": \"dpa-id-hub\"\n },\n \"planSummary\": [\n {\n \"IXSCAN\": {\n \"consumer.consumed\": 1\n }\n }\n ],\n \"keysExamined\": 316394,\n \"docsExamined\": 316394,\n \"numYields\": 2473,\n \"reslen\": 170,\n \"locks\": {\n \"Global\": {\n \"acquireCount\": {\n \"r\": 2474\n }\n },\n \"Database\": {\n \"acquireCount\": {\n \"r\": 2474\n }\n },\n \"Collection\": {\n \"acquireCount\": {\n \"r\": 2474\n }\n }\n },\n \"storage\": {\n \"data\": {\n \"bytesRead\": 10726149,\n \"timeReadingMicros\": 42554\n }\n },\n \"protocol\": \"op_msg\",\n \"millis\": 709\n}\n", "text": "We are using a fairly trivial query to count data in a collection where a specific nested field is set or not set.db[‘collname’].find({“consumer.consumed”:{$exists: true}}).count()As simple as it comes. In order to be quick, we have created an index. The MongoDB profiler and the explain plan show that the index should be used. The query as such should be as simple as skimming over the existing index and return a docment number. However the explain shows a large number of yields that I completely fail to understand and execution quite often takes several seconds.", "username": "Karl_Banke" }, { "code": " \"keysExamined\": 316394,\n \"docsExamined\": 316394,\n \"numYields\": 2473,\nkeysExamineddocsExamined> db.test.explain('executionStats').find({a:{$exists:true}}).count()\n...\n\t\t\"totalKeysExamined\" : 10,\n\t\t\"totalDocsExamined\" : 10,\n...\n{_id:0,a:1}> db.test.explain('executionStats').find({a:{$gt:MinKey}}, {_id:0,a:1}).count()\n...\n\t\t\"totalKeysExamined\" : 11,\n\t\t\"totalDocsExamined\" : 0,\n...\n", "text": "Hi,Looking at these lines:seems to indicate that it scans the index (keysExamined), but also must confirm the query condition by loading the documents from disk (docsExamined). This loading is apparently expensive for the disk, since it yields a lot, meaning that MongoDB spent a lot of time just waiting for disk.I did a quick test using MongoDB 4.2.7 and saw a similar output, where it needs to load the documents from disk to examine them:However, changing the query a little to make it a covered query seem to improve things a bit (note the projection of {_id:0,a:1} to make it covered query):so it did the count by scanning the index only. Let me know if a similar method works with your query.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "I tried your suggestion. However, when I issuedb[‘dbname’].find({‘consumer.consumed’:{$gt:MinKey}}, {_id:0,‘consumer.consumed’:1}).count()I always get back the total number of elements in the collection rather than the number of elements where the queried field is set.", "username": "Karl_Banke" }, { "code": "", "text": "Hi Karl,I’m not sure I understand. Could you post some example documents, and what the query is supposed to return? Maybe there’s something I’m missing here.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Well the query is supposed to return the number of documents where the (nested) field consumer.consumed exists. There is a regular index on the field (not sparse). 
From what I understand, the index will have an entry for every field value, where the entry is null when the field is either null or not set. So, all the query would need to do is to count the keys in the index that are not null. I understand that there would be a need to look at the documents for the negated query (not exists), since the field could be either null or not there at all. So consider three documents:\n{consumer: {name: ‘karl’, consumed: ‘2019-10-10’}}\n{consumer: {name: ‘kevin’, consumed: null}}\n{consumer: {name: ‘iceman’}}\nIn which case I would assume the query to return 2.", "username": "Karl_Banke" }, { "code": "consumer.consumed> db.test.find()\n{ \"_id\" : 0, \"consumer\" : { \"name\" : \"karl\", \"consumed\" : \"2019-10-10\" } }\n{ \"_id\" : 1, \"consumer\" : { \"name\" : \"kevin\", \"consumed\" : null } }\n{ \"_id\" : 2, \"consumer\" : { \"name\" : \"iceman\" } }\n\n> db.test.createIndex({'consumer.consumed':1}, {sparse:true})\n> db.test.find().hint({'consumer.consumed':1})\n{ \"_id\" : 1, \"consumer\" : { \"name\" : \"kevin\", \"consumed\" : null } }\n{ \"_id\" : 0, \"consumer\" : { \"name\" : \"karl\", \"consumed\" : \"2019-10-10\" } }\n{$exists:true}> db.test.find().hint({'consumer.consumed':1}).count()\n2\n> db.test.explain('executionStats').find().hint({'consumer.consumed':1}).count()\n...\n\t\t\"totalKeysExamined\" : 3,\n\t\t\"totalDocsExamined\" : 0,\n...\n", "text": "Hi Karl,I see what you mean. Well, I have a “hack” that may work with your specific use case. You might be able to use sparse indexes to achieve a quick count. Using the example you provided, I created a sparse index on consumer.consumed:A feature of a sparse index is that it doesn’t create an index key for documents that doesn’t have the indicated field. This can be verified by doing a find() by hint():Note that the third document is missing here.For most other queries, MongoDB knows that the sparse index does not cover the whole collection, and would avoid using it if it thinks that it can return the wrong result. That is, unless you force it to use the index by hint(), which can work to your advantage in this specific case.Since the sparse index doesn’t include the 3rd document, for your count query to return the correct count, you just have to hint() it, and provide an empty parameter for find(), since that {$exists:true} parameter is already implicit in the index itself:and since you’re forcing it to use that index, it doesn’t load the documents from disk:Please have a read though Sparse Indexes and its superset Partial Indexes (which is a more flexible version of sparse indexes) for more details.However, a caveat worth repeating is that since sparse indexes don’t cover the whole collection, other queries may behave differently, e.g. some queries that should use the index may not use the index and end up being a collection scan instead. This is explained in the linked pages above.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,thanks a lot. I tried this and it seems to work alright from the console. As you said, the caveat is that this approach will not work well when we would like to update items based on some where condition on the field but for the time being, this will likely take away the pain. 
If I manage to translate it to Java properly ;-).Thank you very much, Karl", "username": "Karl_Banke" }, { "code": "\"keysExamined\": 318501,\n\"docsExamined\": 0,\n\"cursorExhausted\": 1,\n\"numYields\": 2488,\n\"nreturned\": 1,\n\"reslen\": 262,\n\"locks\": {\n \"Global\": {\n \"acquireCount\": {\n \"r\": 2490\n }\n },\n \"Database\": {\n \"acquireCount\": {\n \"r\": 2490\n }\n },\n \"Collection\": {\n \"acquireCount\": {\n \"r\": 2490\n }\n }\n", "text": "I now made the changes and the overall behavior is much more predictable. Yet I see that the number of yields is still high, even though no documents are inspected.", "username": "Karl_Banke" }, { "code": "count()", "text": "Hi Karl,The high number of yields implies that the server needs to fetch data from disk frequently. In typical cases, this means that either the disk is too slow, or the working set (i.e. most frequently accessed data/indexes) are bigger than the available RAM.This usually means that it’s time to upgrade your hardware You can check if this is the case by specifying the count() using some parameter, so that it will count only a subset of the data. If you find that the number of yields are drastically lower, this is a good sign that more hardware is needed.Best regards,\nKevin", "username": "kevinadi" } ]
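A footnote to the sparse-index trick: the partial index Kevin links expresses the same "only documents where the field exists" subset more explicitly, and the covered count works the same way, with the same hint() caveats:

```js
db.getCollection('collname').createIndex(
  { 'consumer.consumed': 1 },
  // index entries exist only where the field exists
  { partialFilterExpression: { 'consumer.consumed': { $exists: true } } }
);
// Covered count over the partial index; the hint is still required
db.getCollection('collname').find().hint({ 'consumer.consumed': 1 }).count();
```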
Exists query with index very slow
2020-06-04T12:16:59.443Z
Exists query with index very slow
11,316
null
[]
[ { "code": "db.collection_new.aggregate([\n{\n $lookup :{\n from: 'collection_ref',\n let :{key1 :\"$key1\",\n key2:\"$key2\",\n key3:\"$key3\",\n key4:\"$key4\"\n },\n pipeline:[\n {$match:{\n\t\t$expr:{\n\t\t\t$and : [{$eq :[\"$key1\",\"$$key1\"]},\n\t\t\t\t\t{$eq :[\"$key2\",\"$$key2\"]},\n\t\t\t\t\t{$eq :[\"$key3\",\"$$key3\"]},\n\t\t\t\t\t{$eq :[\"$key4\",\"$$key4\"]}\n\t\t\t\t\t]\n }\n }\n }],\n as : \"result\"\n}}})\n", "text": "Hi ,I need help or suggestion around running a built in function from mongoshell. Earlier mongo version has db.eval which provides function output but its deprecated now and couldnt use in my recent version of the mongshell.2nd question , My aggregate logic is below, can I store this as system saved function if so how. I want to later access this using function. function lookup(key1,key2,key3,key4)Example :", "username": "Karthikeyan_Madheswa" }, { "code": "", "text": "Hi,Although not a 1-1 replacement, you might want to check if views can satisfy your requirements.Best regards,\nKevin", "username": "kevinadi" } ]
Execute built in function via mongo Shell
2020-06-12T21:59:20.572Z
Execute built in function via mongo Shell
1,716
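Since the thread ends at the pointer to views, here is a minimal sketch of what that suggestion could look like in the mongo shell, reusing the collection names and pipeline from the question; the view name is a placeholder:

db.createView(
  "collection_new_joined",   // hypothetical view name
  "collection_new",          // source collection from the question
  [
    { $lookup: {
        from: "collection_ref",
        let: { key1: "$key1", key2: "$key2", key3: "$key3", key4: "$key4" },
        pipeline: [
          { $match: { $expr: { $and: [
              { $eq: ["$key1", "$$key1"] },
              { $eq: ["$key2", "$$key2"] },
              { $eq: ["$key3", "$$key3"] },
              { $eq: ["$key4", "$$key4"] } ] } } }
        ],
        as: "result"
    } }
  ]
)
// The view is then read like a collection, and the saved pipeline
// runs on every read:
db.collection_new_joined.find({ key1: "someValue" })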
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.8 is out and is ready for production deployment. This release contains only fixes since 4.2.7, and is a recommended upgrade for all 4.2 users.Fixed in this release:4.2 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.8 is released
2020-06-15T22:51:24.099Z
MongoDB 4.2.8 is released
1,877
null
[ "charts" ]
[ { "code": "{\n $lookup:{\n from: \"name_123\", \n localField: \"count\", \n foreignField: \"name_345.count\", \n as: \"SingleCount\" \n }\n} \n", "text": "I am trying to draw some MongoDB Charts, I have a db called user_activities with user based collections, each collection has field called count, now I need to write a single chart which can show the total count of user activities from all user collections, I tried below query in query tab but getting below error(BadValue) unknown top level operator: $lookupHow to give multiple collections of same db in single chart for some aggregation or count ?", "username": "Great_Info" }, { "code": "$lookup", "text": "Hi @Great_Info -You’re on the right track, just a couple of things you need to do:HTH\nTom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to add same fields from different collection of same db in Charts
2020-06-15T20:37:43.053Z
How to add same fields from different collection of same db in Charts
4,512
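One common way to apply Tom's suggestion is to move the stage out of the query tab and into the chart data source's aggregation pipeline, where $lookup is valid as a top-level stage. A hedged sketch over the names from the question; the closing $unwind/$group totalling is an assumption about the chart being built:

[
  { $lookup: {
      from: "name_123",
      localField: "count",
      foreignField: "name_345.count",
      as: "SingleCount"
  } },
  { $unwind: "$SingleCount" },
  // one overall total across everything the lookup matched
  { $group: { _id: null, totalCount: { $sum: "$SingleCount.count" } } }
]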
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.0.19 is out and is ready for production deployment. This release contains only fixes since 4.0.18, and is a recommended upgrade for all 4.0 users.Fixed in this release:4.0 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.0.19 is released
2020-06-15T21:47:58.064Z
MongoDB 4.0.19 is released
1,812
null
[ "aggregation" ]
[ { "code": "", "text": "Hi,I had previously encountered a scenario where we were unwinding “tags” column and then grouping by on top of it.\nPipeline would be\n{$unwind:\"$tags\"} ,{$group: {\"_id\": “$tags”}}Looking to see if\n{$group: {\"_id\": “$$tags”}}\ncould be a good alternative and if there is extra benefits of implementing a short path there.I did not find an open ticket.I had some cycles to kill,\nI did check the source code and it looks like unwind and groupby would make duplicate BSON objects.\nfor each document(Hopefully I am correct here).\nBut It looks like writing an iterator and using the same BSON object and replacing only “$tags” with unwound values, would give better performance for the scenario.This might be a widely used usecase.", "username": "Mallikdb" }, { "code": "db.sampleCollection.aggregate([\n {\n $match: {},\n },\n // the result of this stage is 1 document:\n // { _id: 1, tags: ['tag1', 'tag2', 'tag3'] }\n {\n $unwind: '$tags',\n },\n // after $unwind we will have 3 documents, because we duplicated that \n // single document and destructured its array of 'tags':\n // { _id: 1, tags: 'tag1' }\n // { _id: 1, tags: 'tag2' }\n // { _id: 1, tags: 'tag3' }\n {\n $group: {\n _id: '$tags',\n },\n },\n // if we group by 'tags' prop from the prev stage, we will get 3 groups:\n // { _id: 'tag1' }\n // { _id: 'tag2' }\n // { _id: 'tag3' }\n]);\ndb.sampleCollection.aggregate([\n {\n $match: {},\n },\n // the result of this stage is 1 document:\n // { _id: 1, tags: ['tag1', 'tag2', 'tag3'] }\n {\n $group: {\n _id: '$tags',\n },\n },\n // if we group by 'tags' prop from the prev stage, \n // we will get only 1 document,\n // because we used the whole (not-unwound) array as a grouping key\n // { _id: ['tag1', 'tag2', 'tag3'] }\n]);\n", "text": "‘$tags’ and ‘$$tags’ refer to completely different variables. You need to understand, that ‘$tags’ and ‘$$tags’ are not interchangeable.$unwind stage before $group is used when you need to group documents, bashed on the values in the array, not the whole array.Here are some examples, so it would be easier for you to understand.Below is an example of using $unwind before $group:Below is an example of using $group without $unwind:And if you use ‘$$tags’ instead of ‘$tags’ in the pipelines above, you will get an error, because variable ‘$$tags’ is not defined.To understand it better, you need to read more about aggregation pipeline in MongoDB, specifically:", "username": "slava" }, { "code": "And if you use ‘$$tags’ instead of ‘$tags’ in the pipelines above, you will get an error, because variable ‘$$tags’ is not defined.\n", "text": "Hi Slava,Thank you for the response.I agree with your assessment here.\nTrying to propose “$$tags” be supported in group by paths and it could be made faster(compared to unwind $tags + groupby $tags) based on mongo source code.Was there any such proposal before and was it rejected based on any evaluation.", "username": "Mallikdb" } ]
Is there a benefit in adding `groupby "$$arraycolumn"` instead of unwind and groupby
2020-06-13T23:19:03.734Z
Is there a benefit in adding `groupby &ldquo;$$arraycolumn&rdquo;` instead of unwind and groupby
2,373
null
[ "mongodb-shell" ]
[ { "code": "db.runCommand.help\n{\n help: 'shell-api.classes.Database.help.attributes.runCommand.example',\n docs: 'shell-api.classes.Database.help.attributes.runCommand.link',\n attr: [\n {\n description: 'shell-api.classes.Database.help.attributes.runCommand.description'\n }\n ]\n}\n", "text": "This may really be a “beginner js question” …In mongosh:How does one dereference the “help:” and “docs:” and “description:” values and see this content, please?", "username": "Jack_Woehr" }, { "code": "deferencemongoshdb.runCommand.help()> db.runCommand.help() \n\n db.runCommand({ text: \"myCollection\", search: \"searchKeywords\" }):\n\n Runs an arbitrary command on the database.\n\n For more information on usage: https://docs.mongodb.com/manual/reference/method/db.runCommand\nhelpdocs", "text": "Hi @Jack_Woehr,How does one dereference the “help:” and “docs:” and “description:” values and see this content, please?Could you elaborate further what do you mean by deference here ?\nIn mongosh v0.0.5 (current) I could execute db.runCommand.help() which gives me the output below:The help is expanded into the example on how to run the command, and the docs is expanded into a brief description of the command and the link.\nIs this what you’re looking for ?Regards,\nWan.", "username": "wan" }, { "code": "Using Mongosh Beta: 0.0.5\n\nFor more information about mongosh, please see the wiki: github.com/mongodb-js/mongosh/wiki\n\n> db.runCommand.help() \nTypeError: db.runCommand.help is not a function", "text": "", "username": "Jack_Woehr" }, { "code": "mastermaster", "text": "Looks like you are running from master. I recommend using the released version. master should be stable enough but you might run into things like this.", "username": "Massimiliano_Marcon" }, { "code": "db.RunCommand.help()", "text": "Thanks @Massimiliano_Marcon … I’m glad it is not the problem that I do not have my environment set up correctly.", "username": "Jack_Woehr" } ]
Mongosh dereference help and examples
2020-06-11T20:17:10.238Z
Mongosh dereference help and examples
1,696
null
[]
[ { "code": "", "text": "Hi, I’ve maintained the 3.4 and 4.2 ports of mongodb on FreeBSD. Currently I’m creating a port of 4.4.\nI get the following link error. With ld.gold I get the same error.ld.lld: error: undefined symbol: boost::log::v2s_mt_posix::aux::default_attribute_names::message()referenced by message.hpp:56 (/usr/local/include/boost/log/expressions/message.hpp:56)\nbuild/opt/mongo/shell/dbshell.o:(_main(int, char**, char**))Does this ring a bell to somebody? I can provide more info if needed.", "username": "R_K" }, { "code": "", "text": "Hi @R_K -No, that doesn’t look like something I’ve seen reported before. MongoDB v4.4 does make rather extensive use of boost::log though. Could you provide some additional details? What version of FreeBSD? What toolchain and version is in use there? What does the SCons invocation used to build MongoDB look like?I do note that it appears you are using the system version of boost rather than the vendored one. It would be interesting to know if the same error exists when using the vendored boost, just as an experiment.Finally, please be aware that FreeBSD is not a supported platform, so we can’t really guarantee that this will be fixed in a timely fashion, or even necessarily fixed at all. Of course, if an easy and obvious fix is available that works on our other platforms of record, we will not have any objection to getting it merged.Thanks,\nAndrew", "username": "Andrew_Morrow" } ]
Link error in 4.4.0-rc9 on FreeBSD/aarch64
2020-06-13T07:36:37.109Z
Link error in 4.4.0-rc9 on FreeBSD/aarch64
2,041
null
[]
[ { "code": "", "text": "Hello!Is there a way to output the actual values of the variables defined in the “$let” stage of a lookup? For debug purposes, I mean.In something like\n$lookup{…\nlet: {source: “SOURCE”, joinKey: “$CONC_DN”, dataType: “EVALS”, endDate: {$add: [{$toDate:\"$SCHEDULEDDATE\"}, -(31000606024)]}},\n…I would like to see whether endDate is actually correct (3 days before the value of SCHEDULEDDATE)Thank you in advance!", "username": "Davide_Cicuta" }, { "code": "const pipeline = [\n {\n $lookup: {\n from: 'other_collection',\n let: {\n varA: '$parentProp',\n },\n pipeline: [\n {\n // this is needed to output only 1 doc with vars\n // and hide all the unnecessary fields\n $group: {\n _id: null,\n },\n },\n {\n $addFields: {\n debugVarA: '$$varA',\n },\n },\n ],\n as: 'result',\n },\n },\n];\n", "text": "Here is an example of how you can debug your custom vars inside $lookup.pipeline:", "username": "slava" } ]
Pipeline: accessing variables defined in $let
2020-06-15T13:16:23.869Z
Pipeline: accessing variables defined in $let
2,483
null
[ "aggregation" ]
[ { "code": "/* 1 */\n{\n \"_id\" : \"1\",\n \"version\" : 0,\n \"data\" : \"aaaa\"\n}\n\n/* 2 */\n{\n \"_id\" : \"2\",\n \"version\" : 0,\n \"data\" : \"aaaa\"\n}\n\n/* 3 */\n{\n \"_id\" : \"3\",\n \"version\" : 1,\n \"data\" : \"aaaa\"\n}\ndb.getCollection(\"TestCollection\").aggregate([{\n\n$addFields: {\n \"test\": {\n $group: {\n _id: null,\n \n }\n }\n }\n}])\n", "text": "Is it possible to use a $group - aggregration when adding a field with $addFields to a document?The aggregation is something similiar to this.", "username": "Hien_Nguyen" }, { "code": "$addFields$groupdb.collection.aggregate( [\n { $addFields: { ... } },\n { $group: { ... } },\n { $addFields: { ... } }\n] )\n", "text": "Hello Hien Nguyen,It is not possible to add a $group stage to an $addFields stage. They are to used separately. But, you can use the $addFields before a $group stage or / and after.For example:What is it you are trying, in terms of getting a result? May be if you tell what you are expecting to get as output from the aggregation, perhaps I can suggest how to form the query using appropriate stages.", "username": "Prasad_Saya" } ]
Using $group in a $addFields
2020-06-15T13:16:17.776Z
Using $group in a $addFields
9,398
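To ground Prasad's point with the sample documents above, a small runnable sketch; the per-version count is an assumed goal, chosen only to show $group followed by $addFields:

db.getCollection("TestCollection").aggregate([
  // first aggregate across documents...
  { $group: { _id: "$version", count: { $sum: 1 } } },
  // ...then attach a derived field to each group result
  { $addFields: { test: { $concat: ["version count: ", { $toString: "$count" }] } } }
])
// yields e.g. { _id: 0, count: 2, test: "version count: 2" }
//             { _id: 1, count: 1, test: "version count: 1" }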
null
[ "aggregation" ]
[ { "code": "", "text": "Hi All,I am looking for something equivalent of $elemMatch Query operator for use in $match within the Aggregation $lookup with pipeline. Any suggestions are welcome.Thanks,\nSam", "username": "Sampreet_Chawla" }, { "code": "const pipeline = [\n {\n $lookup: {\n from: 'other_collection',\n pipeline: [\n {\n $match: {\n itemsInArray: {\n $elemMatch: {\n foo: false,\n bar: true,\n },\n },\n },\n },\n ],\n as: 'result',\n },\n },\n];\n {\n $lookup: {\n from: 'other_collection',\n let: {\n foo: '$parentFoo',\n bar: '$parentBar',\n },\n pipeline: [\n {\n $unwind: '$itemsInArray',\n },\n {\n $match: {\n $expr: {\n $and: [\n { $eq: ['$itemsInArray.foo', '$$foo']},\n { $eq: ['$itemsInArray.bar', '$$bar']},\n ],\n },\n },\n },\n {\n $group: {\n _id: '$_id',\n itemsInArray: {\n $first: '$itemsInArray',\n },\n },\n },\n ],\n as: 'result',\n },\n },\n];\n", "text": "You can use $elemMatch inside $lookup.pipeline.$match like this:Though, it does not support pipeline variables\nBut you can add variable support with $unwind + $expr:", "username": "slava" } ]
Equivalent of $elemMatch Query operator for use in $match within the Aggregation $lookup with pipeline
2020-06-12T21:58:22.165Z
Equivalent of $elemMatch Query operator for use in $match within the Aggregation $lookup with pipeline
10,089
null
[]
[ { "code": "{\n \"_id\" : ObjectId(\"5ed61144376edd4a601467cd\"),\n \"pname\" : \"actual_speed\",\n \"values\" : [ \n {\n \"timestamp\" : 1580120048,\n \"val\" : 27716\n }, \n {\n \"timestamp\" : 1580120113,\n \"val\" : 27730.5\n }, \n {\n \"timestamp\" : 1580120138,\n \"val\" : 27702\n }, \n ...\n]}\ndb.getCollection(\"histo\").find().forEach(function (e) {\n e.values.forEach(function (o) {\n o.val = parseFloat(o.val);\n });\n db.getCollection(\"histo\").save(e);\n});\n", "text": "Hi,Each document of my collection “histo” has the following schema:I want to force the type “val” to Double. For the moment, if its value is an integer, as for “27716”, its type is int32.I tried with:But it does not work.What is wrong ?Thanks!", "username": "Helene_ORTIZ" }, { "code": "double27702db.collection.aggregate([ \n { \n $addFields: { \n types: { \n $map: { input: \"$values\", in: { type: { $type: \"$$this.val\" }, val: \"$$this.val\" } }\n }\n }\n }\n]).pretty()", "text": "@Helene_ORTIZBy default, in MongoDB document the type of a number is a double. In the sample document you had posted though the number 27702 is an integer, actually its type is double. This can be verified with the following query:", "username": "Prasad_Saya" }, { "code": "", "text": "Sorrry but in my database the field containing 27702 is an integer in mongoDB.", "username": "Helene_ORTIZ" }, { "code": "valdoubledb.collection.updateMany(\n{},\n[\n { \n $set: { \n values: { \n $map: { input: \"$values\", in: { $mergeObjects: [ \"$$this\", { val: { $toDouble: \"$$this.val\" } } ] } }\n }\n }\n }\n] )\ndb.collection.aggregate([ \n { \n $addFields: { \n values: { \n $map: { input: \"$values\", in: { $mergeObjects: [ \"$$this\", { val: { $toDouble: \"$$this.val\" } } ] } }\n }\n }\n }\n] ).forEach( doc => db.collection.updateOne( { _id: doc._id }, { $set: { values: doc.values } } ) )\ndouble", "text": "The following update statement will update all the val field value type to double. Note this update runs in MongoDB version 4.2 or later only.In case your database version is older than 4.2, then use this query:NOTE: The aggregation operator $toDouble used in the above queries converts a number to a double.", "username": "Prasad_Saya" } ]
Change type of a field in a nested array
2020-06-15T08:18:58.483Z
Change type of a field in a nested array
4,961
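One quick way to confirm the conversion took, after running either of the updates above; a small sketch using the collection name from the question:

// count documents that still carry a 32-bit integer in values.val;
// it should be 0 once every val has been converted to double
db.getCollection("histo").countDocuments({ "values.val": { $type: "int" } })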
null
[ "golang" ]
[ { "code": "mongodb+srv://<username>:<password>@cluster0-0pjmx.mongodb.net/test?retryWrites=true&w=majority\ninsertResult, err := collection.InsertOne(context.TODO(), entry_one)\n2020/04/18 15:56:55 server selection error: server selection timeout, current topology: { Type: Repl\nicaSetNoPrimary, Servers: [{ Addr: cluster0-shard-00-00-0pjmx.mongodb.net:27017, Type: Unknown, Stat\ne: Connected, Average RTT: 0, Last error: connection() : connection(cluster0-shard-00-00-0pjmx.mongo\ndb.net:27017[-179]) incomplete read of message header: EOF }, { Addr: cluster0-shard-00-01-0pjmx.mon\ngodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connecti\non(cluster0-shard-00-01-0pjmx.mongodb.net:27017[-181]) incomplete read of message header: EOF }, { A\nddr: cluster0-shard-00-02-0pjmx.mongodb.net:27017, Type: Unknown, State: Connected, Average RTT: 0,\nLast error: connection() : connection(cluster0-shard-00-02-0pjmx.mongodb.net:27017[-180]) incomplete\nread of message header: read tcp 192.168.43.131:34958->3.7.51.68:27017: read: connection reset by p\neer }, ] }\nexit status 1\n", "text": "I’m trying to connect to via my code in golang. I am using the mongo-driver module. I am able to connect to instance:But on this line:I get this panic:Any idea if I am missing something?", "username": "sntshk" }, { "code": "", "text": "Hi @sntsh, welcome!2020/04/18 15:56:55 server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimaryYou’re getting the error message on insert operation because the database deployment has been identified as a replica set without a primary. Check whether your MongoDB Atlas cluster has any alerts. See also Replica set has no primary alert.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thanks very much.I don’t see any cluster alerts.Aren’t there any step by step solution for this?", "username": "sntshk" }, { "code": "", "text": "I’m not 100% sure, but these seem like SSL errors. Unfortunately, those manifest as vague errors like “EOF” because the server closes the connection if the driver isn’t configured to connect with correct SSL settings. A few questions to dig deeper into this:", "username": "Divjot_Arora" }, { "code": "", "text": "Hello @sntshk I have same problem.\nDid you find solution?", "username": "Viktor_Fefilov" }, { "code": "", "text": "Ok. I found solution for my case. I added IP into whitelist and now it’s working.", "username": "Viktor_Fefilov" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Server selection error: server selection timeout, current topology
2020-04-18T11:48:55.308Z
Server selection error: server selection timeout, current topology
105,150
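A suggestion beyond what was posted above: when this error shows up, it can help to test the same SRV URI with the mongo shell from the same machine, which separates network or whitelist problems from driver configuration. The host is taken from the error in this thread, the username is a placeholder:

mongo "mongodb+srv://cluster0-0pjmx.mongodb.net/test" --username <username>
# If the shell also times out, look at IP whitelisting, DNS and
# firewalls rather than at the Go code.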
null
[ "performance", "containers" ]
[ { "code": "j:true and w:1", "text": "Hey there. I’ve performed a couple of simple benchmarks to see how Mongo behaves when inserting documents with j:true and w:1. The environments I used were my local machine with 8 core CPU and SSD, and Azure hosted VM with 4 core CPU and dedicated 256 GB SSD disk. I use the C# driver with shared singleton Mongo Client. The documents I insert are fairly small ones - no more than 2kb. The number of writers I tested are 1-5, 100 locally and distributed tests with up to 600 concurrent writers on Azure.\nThere are a couple of questions I’d like to clarify:Further notes: the local file system is NTFS and it’s a Win 10 machine, the Azure one is the latest mongo docker image with the XFS filesystem on the dedicated Az disk.", "username": "Vlad_H" }, { "code": "mongodmongod", "text": "Hi Vlad,I’m not a performance expert nor very knowledgeable in Windows/Azure, but I’ll try to answer what I can.MongoDB doesn’t actively try to buffer anything, so the plateauing IOPS number you’re seeing seem to imply that there is a bottleneck somewhere else (e.g. potentially not on your disk performance). Perhaps your singleton MongoClient is hitting a performance ceiling somewhere? Have you tried inserting with multiple client applications, or using other driver e.g. Java, node, etc.?Another hint that seem to point toward unknown bottleneck is the fact that running two mongod and alternate write between them seem to push your IOPS usage higher, meaning the bottleneck was bypassed. Could it be the docker image you’re using is artificially limiting resources somehow? Have you tried using the Windows native mongod server and see similar behaviour?Best regards,\nKevin", "username": "kevinadi" }, { "code": "mongod", "text": "Hi,MongoDB doesn’t actively try to buffer anythingI observe that Mongo consumes the same IOPS with X and X*3 number of events inserted per second (according to mongostat). Given that I believe IOPS are measured correctly, it does look as if Mongo buffers things.Perhaps your singleton MongoClient is hitting a performance ceiling somewhere?I don’t think so. The client also utilises multiple connections (usually roughly the same as the number of writers), so it’s not the connection re-use issue.Have you tried inserting with multiple client applications, or using other driver e.g. Java, node, etc.?I’ve tried multiple client applications up to 3 nodes. They behave the same as one node and offer the same throughput.Could it be the docker image you’re using is artificially limiting resources somehow?There are restrictions placed, but the resources utilised are nowhere near the limit.Have you tried using the Windows native mongod server and see similar behaviour?Yes, the behaviour is the same.I’ve profiled the mongod process and it looks like the issue is the sub-optimal locking, which was fixed in later versions: https://jira.mongodb.org/browse/SERVER-43417I tried mongo 4.4-rc and observed an increase in throughput up to x3 with the same setup and load patterns.", "username": "Vlad_H" } ]
Insert-heavy load performance questions
2020-05-18T10:31:53.846Z
Insert-heavy load performance questions
4,693
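For anyone wanting to reproduce the measurements, a minimal shell-level analogue of the insert loop might look like the sketch below. This is an assumption: the original benchmarks used the C# driver, so this only mirrors the document size and write concern described:

// ~1 KB documents, acknowledged with w:1 and journaling, as in the test above
for (let i = 0; i < 10000; i++) {
  db.bench.insertOne(
    { seq: i, payload: "x".repeat(1024) },
    { writeConcern: { w: 1, j: true } }
  );
}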
null
[ "kotlin" ]
[ { "code": "Cannot find a public constructor for 'TodoItem'Cannot find a public constructor for 'TodoItem'.\n\nA custom Codec or PojoCodec may need to be explicitly configured and registered to handle this type.\n at org.bson.codecs.pojo.AutomaticPojoCodec.decode(AutomaticPojoCodec.java:40) ~[bson-4.0.3.jar:na]\n at com.mongodb.internal.operation.CommandResultArrayCodec.decode(CommandResultArrayCodec.java:52) ~[mongodb-driver-core-4.0.3.jar:na]\n at com.mongodb.internal.operation.CommandResultDocumentCodec.readValue(CommandResultDocumentCodec.java:60) ~[mongodb-driver-core-4.0.3.jar:na]\n at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41) ~[bson-4.0.3.jar:na]\n at org.bson.internal.LazyCodec.decode(LazyCodec.java:48) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:101) ~[bson-4.0.3.jar:na]\n at com.mongodb.internal.operation.CommandResultDocumentCodec.readValue(CommandResultDocumentCodec.java:63) ~[mongodb-driver-core-4.0.3.jar:na]\n at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:84) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:41) ~[bson-4.0.3.jar:na]\n...\nCaused by: org.bson.codecs.configuration.CodecConfigurationException: Cannot find a public constructor for 'TodoItem'.\n at org.bson.codecs.pojo.CreatorExecutable.checkHasAnExecutable(CreatorExecutable.java:140) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.pojo.CreatorExecutable.getInstance(CreatorExecutable.java:107) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.pojo.InstanceCreatorImpl.<init>(InstanceCreatorImpl.java:40) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.pojo.InstanceCreatorFactoryImpl.create(InstanceCreatorFactoryImpl.java:28) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.pojo.ClassModel.getInstanceCreator(ClassModel.java:75) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.pojo.PojoCodecImpl.decode(PojoCodecImpl.java:121) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.pojo.PojoCodecImpl.decode(PojoCodecImpl.java:126) ~[bson-4.0.3.jar:na]\n at org.bson.codecs.pojo.AutomaticPojoCodec.decode(AutomaticPojoCodec.java:37) ~[bson-4.0.3.jar:na]\n ... 98 common frames omitted\n\n@GraphQLDescription(\"Type for TodoItem\")\ndata class TodoItem(\n val id: Long,\n val details: String,\n val status: String\n)\n\n@Component\nclass TodoItemDto() {\n\n private val mongoClient: MongoClient\n\n init {\n val codecRegistry: CodecRegistry = fromRegistries(\n MongoClientSettings.getDefaultCodecRegistry(),\n fromProviders(\n PojoCodecProvider.builder()\n .automatic(true)\n .build()\n )\n )\n val settings: MongoClientSettings = MongoClientSettings.builder()\n .codecRegistry(codecRegistry)\n .build()\n mongoClient = MongoClients.create(settings)\n }\n\n fun getTodoItemList(): List<TodoItem> {\n return getCollection().find()\n .toList()\n }\n\n private fun getCollection(): MongoCollection<TodoItem> {\n val database: MongoDatabase = mongoClient.getDatabase(\"TodoItemsDB\")\n\n return database.getCollection(\"todoItem\", TodoItem::class.java)\n }\n}\n", "text": "I am getting Cannot find a public constructor for 'TodoItem' when trying to query the MondoDB collection. I am assuming the kotlin data classes have public constructor by default.Can someone help me out? 
I am new to Kotlin and MongoDB.Code repository: GitHub - sashwatp/kotlin-graphql-server", "username": "Sashwat_Prakash" }, { "code": "Caused by: org.bson.codecs.configuration.CodecConfigurationException: Cannot find a public constructor for 'TodoItem'.TodoItem.javaconstructor.", "text": "Caused by: org.bson.codecs.configuration.CodecConfigurationException: Cannot find a public constructor for 'TodoItem'.The POJO (plain old Java object) class TodoItem.java needs a constructor. Also, if you are reading and setting values on the object you need to provide get / set methods within the class. See this: POJOs (MongoDB Java Driver).I am assuming the kotlin data classes have public constructor by default.Yes, the class has a default no-argument constructor (as you have not defined any). The driver is expecting a constructor with arguments - the arguments with which to construct an object using the state (the instance variables) you have defined in the class.", "username": "Prasad_Saya" } ]
MongoDB CodecConfigurationException for Kotlin data class
2020-06-14T06:01:18.832Z
MongoDB CodecConfigurationException for Kotlin data class
5,642
null
[ "replication" ]
[ { "code": "", "text": "Hi Team,We are running MongoDB replica set with 1 Primary, 1 Secondary and 1 Arbiter. Secondary instance was lagging by 1000 seconds and Primary went down due to Hardware failure.\nNow, Secondary become primary but data of 1000 seconds is lost. How can I recover it?Kindly suggest how to solve it.Cheers,\nMukesh Kumar", "username": "Mukesh_kumar" }, { "code": "", "text": "Hi Mukesh, if can’t get the data from the machine that failed (or lucky enough to have had a backup run just before the crash) then there is no way to retrieve that data.Out of curiosity, was the secondary set up to be delayed? If not you might want to check why you had ~17 minutes of latency, In most cases you shouldn’t have more than a few seconds at most.Sorry I don’t have better news for you.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks @Doug_Duncan for the update,We somehow recover the data and downed Primary node but should I add it to the replica set. We don’t have backup and how can I restore the 1000 seconds of data loss. And newly elected primary received write operation also.Please let me know, what to do and how?Cheers,\nMukesh Kumar", "username": "Mukesh_kumar" }, { "code": "", "text": "When you add the crashed PRIMARY node back into the replica set it will join as a secondary as it doesn’t have the most recent data. It will recognize that is has data that the rest of the replica set doesn’t have an will rollback that data.Before adding the machine back to the replica set however I would go ahead and make a copy of that data just so you have it in case something goes bad. I would also make a backup of your current PRIMARY member (you should be doing backups on regular basis anyways to make sure you can restore as recently as your SLAs require).", "username": "Doug_Duncan" }, { "code": "", "text": "@Doug_Duncan Thanks you so much for the update ", "username": "Mukesh_kumar" } ]
How to Recover the Replica Set in case Secondary is lagging and primary went down
2020-06-05T19:02:41.381Z
How to Recover the Replica Set in case Secondary is lagging and primary went down
3,618
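A shell-level sketch of the sequence Doug describes, with placeholder hostnames; the dbPath copies happen at the OS level before any of this:

// 1. Copy the crashed node's dbPath somewhere safe (OS-level copy).
// 2. Back up the current primary.
// 3. From the current primary, re-add the recovered node; it rejoins
//    as a secondary and rolls back the 1000 seconds of divergent writes:
rs.add("old-primary.example.net:27017")   // placeholder host
rs.status()                               // watch it sync
// The rolled-back documents are kept as BSON files in the node's
// <dbPath>/rollback directory, so they can still be inspected or
// re-applied manually.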
null
[ "compass", "connecting" ]
[ { "code": "", "text": "I get this error in Compass querySrv EREFUSED _mongodb._tcp.sandbox.h0cha.gcp.mongodb.net. I whitelisted my IP address. User name and password were entered correctly. How to fix this?", "username": "Sunil_Skanda" }, { "code": "", "text": "Are you able to connect by shell?\nHave you added correct IP in whitelist\nIf any VPN/firewall preventing your connection", "username": "Ramachandra_Tummala" }, { "code": "DNSHostNotFound: Failed to look up service \"\":DNS operation refused.", "text": "I have mongo shell installed and when I paste the connection string in cmd it says:\nDNSHostNotFound: Failed to look up service \"\":DNS operation refused.Also, I’ve whitelisted it to access from anywhere, which includes my IP address.I don’t have any VPN installed.The thing is yesterday, it was working fine and I’ve not installed any new software till now to suspect that might have caused it.", "username": "Sunil_Skanda" }, { "code": "", "text": "It really has to be something local as the DNS information is correct as:If you can change the DNS resolver try with 8.8.8.8 and 8.8.4.4. They are the google’s public DNS server.", "username": "steevej" } ]
Trouble connecting to my DB in Compass
2020-06-14T06:16:03.303Z
Trouble connecting to my DB in Compass
3,731
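Following steevej's pointer, the SRV record can be queried directly against Google's resolver to confirm whether the local DNS server is the one refusing the lookup; the hostname comes from the error message above:

nslookup -type=SRV _mongodb._tcp.sandbox.h0cha.gcp.mongodb.net 8.8.8.8
# If this returns the cluster hosts while the same query without the
# trailing 8.8.8.8 fails, switch the machine's DNS resolver; nothing
# in Compass needs to change.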
https://www.mongodb.com/…c_2_1024x618.png
[ "swift" ]
[ { "code": "", "text": "Hello,\nI followed some instructions that was in Github Issue \" CLibMongoC compilation fails when SPM package used in an Xcode project #387\" to download the MongoDB Swift Driver project, build, and then add it to a workspace. I did since I kept having issues in Xcode when I tried adding the package dependencies.Before I moved the .xcodeproj to the workspace, I did build it, and it had no issues. Then I moved the entire mongo-swift-driver-master folder that includes the mongo .xcodeproj into my Xcode folder that is for my other Xcode project. I then built the mongo .xcodeproj and again had no issues. The issue occurs when I move the mongo .xcodeproj to the workspace and then build again.The issue I am having is the below:\nMissing ‘#include “/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator13.5.sdk/System/Library/Frameworks/Security.framework/Headers/SecureTransport.h”’; declaration of ‘SSLContextRef’ must be imported from module ‘Security.SecureTransport’ before it is requiredScreen Shot 2020-06-13 at 12.28.36 AM1366×825 114 KBCould someone provide some assistance please?", "username": "Oscar_Rodriguez" }, { "code": "", "text": "Hi @Oscar_Rodriguez, what deployment target do you have selected? the error message sounds like you are targeting an iPhone simulator.The driver does not support iOS usage, and is intended for building backend applications in Swift, that run on Linux and/or macOS.The typical ways to handle DB interaction from a mobile app would be:", "username": "kmahar" } ]
MongoDB Swift Driver - CLibMongoC compilation fails
2020-06-13T05:27:17.158Z
MongoDB Swift Driver - CLibMongoC compilation fails
3,265
null
[ "aggregation" ]
[ { "code": "---------- a_collection --------------\n // 01 \n { \n \"accountId\" : \"12345\", \n \"customerId\" : \"1234\", \n \"accountNumber\" : \"AC12345\", \n \"balance\" : 3242.2, \n \"balanceAed\" : 32423.23\n }, \n // 02\n { \n \"accountId\" : \"12346\", \n \"customerId\" : \"1234\", \n \"accountNumber\" : \"AC12346\", \n \"balance\" : 12131, \n \"balanceAed\" : 123.1\n }\n ---------b_collection ----------\n // 01 \n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12345\", \n \"transactionId\" : \"T12345\", \n \"transactionDate\" : ISODate(\"2018-02-13T16:53:33.324Z\"),\n \"referenceNumber\" : \"R12345\"\n }, \n // 02\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"transactionId\" : \"T12346\", \n \"transactionDate\" : ISODate(\"2018-02-15T16:53:33.324Z\"),\n \"referenceNumber\" : \"R12346\"\n }// 03\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"transactionId\" : \"T12347\", \n \"transactionDate\" : ISODate(\"2018-01-13T16:53:33.324Z\"),\n \"referenceNumber\" : \"R12347\"\n }\n -------------c_collection ---------------\n // 01\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12345\", \n \"cardId\" : \"C1234\", \n \"cardHolderName\" : \"John Doe\",\n \"LimitAmount\" : 15000.5, \n \"PaymentAmount\" : 5000.5\n },\n // 02\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"cardId\" : \"C1236\", \n \"cardHolderName\" : \"John Doe\",\n \"LimitAmount\" : 15000.5, \n \"PaymentAmount\" : 5000.5\n }\ndb.getCollection(\"a_collection\").aggregate(\n [\n {\n \"$match\" : {\n \"customerId\" : \"1234\"\n }\n },\n {\n \"$lookup\" : {\n \"from\" : \"b_collection\",\n \"localField\" : \"customerId\",\n \"foreignField\" : \"customerId\",\n \"as\" : \"Transactions\"\n }\n },\n {\n \"$lookup\" : {\n \"from\" : \"c_collections\",\n \"localField\" : \"customerId\",\n \"foreignField\" : \"customerId\",\n \"as\" : \"Cards\"\n }\n },\n {\n \"$unwind\" : {\n \"path\" : \"$Transactions\"\n }\n },\n {\n \"$unwind\" : {\n \"path\" : \"$Cards\"\n }\n },\n { \n \"$project\" : { \n \"accountId\" : 1.0, \n \"customerId\" : 1.0, \n \"accountNumber\" : 1.0, \n \"balance\" : 1.0, \n \"balanceAed\" : 1.0, \n\n \"Transactions.accountId\" : 1.0, \n \"Transactions.customerId\" : 1.0, \n \"Transactions.transactionId\" : 1.0, \n \"Transactions.referenceNumber\" : 1.0, \n \"Transactions.transactionDate\" : 1.0\n\n \"Cards.customerId\" : 1.0, \n \"Cards.accountId\" : 1.0, \n \"Cards.cardId\" : 1.0, \n \"Cards.cardHolderName\" : 1.0, \n \"Cards.LimitAmount\" : { \n \"$divide\" : [\n \"$Cards.LimitAmount\", \n 5.0\n ]\n }, \n \"Cards.PaymentAmount\" : { \n \"$divide\" : [\n \"$Cards.PaymentAmount\", \n 5.0\n ]\n }, \n }\n },\n {\n $group:\n {\n _id: {\n \"accountId\" : \"$accountId\", \n \"customerId\" : \"$customerId\", \n \"accountNumber\" : \"$accountNumber\", \n \"balance\" : \"$balance\", \n \"balanceAed\" : \"$balanceAed\", \n },\n Transactions: { $addToSet: \"$Transactions\" }\n ,Cards: { $addToSet: \"$Cards\" }\n }\n },\n {\n \"$sort\" : {\n \"Transactions.transactionDate\" : 1.0\n }\n },\n { \n \"$project\" : { \n \"_id\" : 0, \n \"Accounts\":\"$_id\",\n \"Transactions\" : \"$Transactions\", \n \"Cards\" : \"$Cards\"\n }\n }\n ], \n { \n \"allowDiskUse\" : true\n }\n);\n---------- a_collection --------------\n // 01 \n Accounts { \n \"accountId\" : \"12345\", \n \"customerId\" : \"1234\", \n \"accountNumber\" : \"AC12345\", \n \"balance\" : 3242.2, \n \"balanceAed\" : 32423.23\n }\n\tTransaction{[\n {\n \"customerId\" : \"1234\", \n \"accountId\" : 
\"12345\", \n \"transactionId\" : \"T12345\", \n \"transactionDate\" : ISODate(\"2018-01-13T16:53:33.324Z\"),\n \"referenceNumber\" : \"R12345\"\n }, \n // 02\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"transactionId\" : \"T12346\", \n \"transactionDate\" : ISODate(\"2018-02-13T16:53:33.324Z\"),\n \"referenceNumber\" : \"R12346\"\n }// 03\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"transactionId\" : \"T12347\", \n \"transactionDate\" : ISODate(\"2018-02-15T16:53:33.324Z\"),\n \"referenceNumber\" : \"R12347\"\n }\n\t\t]\n\t}\n Cards: [{\n \"customerId\" : \"1234\", \n \"accountId\" : \"12345\", \n \"cardId\" : \"C1234\", \n \"cardHolderName\" : \"John Doe\",\n \"LimitAmount\" : 15000.5, \n \"PaymentAmount\" : 5000.5\n },\n {\n \"customerId\" : \"1234\", \n \"accountId\" : \"12346\", \n \"cardId\" : \"C1236\", \n \"cardHolderName\" : \"John Doe\",\n \"LimitAmount\" : 15000.5, \n \"PaymentAmount\" : 5000.5\n }\n\t]\n", "text": "Hello Everyone, Let me share everything in detail.\nI have 3 collections and i use lookup for joining them.\nThen i first unwind the 2nd and 3rd collection to perform some mathematical operations(division) in projection.\nIt give multiple documents but i need one collection for that i use group clause and then i use projection. It gave me a desired output\nBut now i want to get the output in sorted order for 2nd collection (for which i use $addToSet clause) but it doesn’t give the desired output as the order is undefined.So i want to know the solution of this problem… I am sharing the sample collection and query which is as followed\nSample CollectionsQueryExpected OutputThe 2nd collection is in sorted order w.r.t transactiondate field.\nKindly help me out.", "username": "Nabeel_Raza" }, { "code": "$sortTransactions.transactionDate$unwindTransactions$group{\n \"$unwind\" : {\n \"path\" : \"$Transactions\"\n }\n},\n{\n \"$sort\" : {\n \"Transactions.transactionDate\" : 1\n }\n},", "text": "Nabeel,You can try the following (that is move the $sort on Transactions.transactionDate immediately after the $unwind Transactions, and before the $group stage):", "username": "Prasad_Saya" }, { "code": "", "text": "yes i did, but the order was undefined.", "username": "Nabeel_Raza" }, { "code": "TransactionstransactionDate", "text": "The Transactions array has the sub-documents sorted by the transactionDate - as I tried the altered code. That is what you are expecting.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Prasad_Saya for your reply but $addToSet doesn’t project the output in any specified order(not ascending nor descending).picturemessage_tiipw55t.nng832×348 19.5 KB", "username": "Nabeel_Raza" }, { "code": "", "text": "The idea is that the pre-sorted order (data is sorted before it is grouped) is maintained. At least it can be seen as sorted with the posted sample data. 
You have to try with more data samples and see if it works for you.", "username": "Prasad_Saya" }, { "code": "", "text": "Use $Push instead of $addToSet.\nSequence of the pipeline will be as followed\n1 - apply $match on the customer field\n2 - apply $lookup on other collections\n3 - then $unwind the b and c collection to perform some operations.\n4 - use $sort for sorting the collection array\n5- use $project for projection and perform division operation with c_collection using $divide clause.\n6 - As the output is in different document then use $group to merge them and use $push for b_collection.\n7 - use final $project", "username": "Nabeel_Raza" }, { "code": "", "text": "This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Facing issues while using $addToSet clause(of group)
2020-06-08T15:32:33.914Z
Facing issues while using $addToSet clause(of group)
1,815
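Condensing Nabeel's final recipe into one runnable pipeline over the sample collections above; only the stages relevant to ordering are shown, and the projection/division steps from the full query are omitted for brevity:

db.getCollection("a_collection").aggregate([
  { $match: { customerId: "1234" } },
  { $lookup: { from: "b_collection", localField: "customerId",
               foreignField: "customerId", as: "Transactions" } },
  { $unwind: "$Transactions" },
  // sort BEFORE grouping: $push preserves the order documents arrive in,
  // while $addToSet builds an unordered set
  { $sort: { "Transactions.transactionDate": 1 } },
  { $group: { _id: "$accountId",
              Transactions: { $push: "$Transactions" } } }
])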
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "beta" ]
[ { "code": "$metarandValsearchScoresearchHighlightsgeoNearDistancegeoNearPointrecordIdindexKeysortKeyfindAndModifyallowDiskUseMONGODB-AWSCommitQuorumcreateIndexestlsDisableCertificateRevocationCheckExceededTimeLimitLockTimeoutClientDisconnectAuthorizedDatabasesListDatabasesAsQueryabletlsDisableCertificateRevocationCheck=true", "text": "This is a beta release for the 2.11.0 version of the driver.The main new features in 2.11.0-beta2 support new features in MongoDB 4.4.0. These features include:Other new additions and updates in this beta include:The full list of JIRA issues that are currently scheduled to be resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.0%20ORDER%20BY%20key%20ASCThe list may change as we approach the release date.Documentation on the .NET driver can be found at:Because certificate revocation checking is now enabled by default, an\napplication that is unable to contact the OCSP endpoints and/or CRL\ndistribution points specified in a server’s certificate may experience\nconnectivity issues (e.g. if the application is behind a firewall with\nan outbound whitelist). This is because the driver needs to contact\nthe OCSP endpoints and/or CRL distribution points specified in the\nserver’s certificate and if these OCSP endpoints and/or CRL\ndistribution points are not accessible, then the connection to the\nserver may fail. In such a scenario, connectivity may be able to be\nrestored by disabling certificate revocation checking by adding\ntlsDisableCertificateRevocationCheck=true to the application’s connection\nstring.", "username": "Vincent_Kam" }, { "code": "MONGODB-AWS", "text": "I’m very interested in learning more about MONGODB-AWS authentication. Does this feature only work for Atlas deployments? Can we use it in our Cloud Manager deployments?", "username": "Rayan_Alsubhi" }, { "code": "MONGODB-AWSMONGODB-AWSMONGODB-AWSauthSource$external", "text": "Welcome to the community forums @Rayan_Alsubhi!The new MONGODB-AWS authentication mechanism currently requires an Atlas cluster running MongoDB 4.4+.Borrowing from the MongoDB 4.4 connection string documentation:To use MONGODB-AWS, you must be connecting to a MongoDB Atlas cluster which has been configured to support authentication via AWS IAM credentials (i.e. an AWS access key ID and a secret access key, and optionally an AWS session token). The MONGODB-AWS authentication mechanism requires that the authSource be set to $external .Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver Version 2.11.0-beta2 Released
2020-06-10T01:28:26.368Z
.NET Driver Version 2.11.0-beta2 Released
2,946
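For the revocation-checking caveat above, the opt-out is a plain connection string option; a sketch with placeholder credentials and host:

mongodb+srv://user:[email protected]/test?retryWrites=true&tlsDisableCertificateRevocationCheck=true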
null
[ "change-streams" ]
[ { "code": "", "text": "Our Prod Mongo is a PSA running on version 4.2 (Read Concern Majority disabled). I currently have a service that is reading off the change stream for a specific collection. Suddenly today outta nowhere I see this error:After this, the stream tried to restart from last persisted token which also gives an error:Note:Not sure how the resume token got invalidated. Do you think using a timestamp to resume is a better idea?\nIs there any best practice that the service would need to follow, to handle this kind of error?\nWould be a of great help to even get some knowledge about these errors and some best practices.", "username": "Atil_Pai" }, { "code": "", "text": "I have found an answer to this. It is as the error says. Actually our service that read from the change stream uses backpressure and was way behind on consumption (latency was much more than what was expected). Due to which, when the oplog got slashed (it being a capped collection), the position the change stream cursor currently on, was no more present. We fixed the throughput and now all is good.\nThough I’ll share.", "username": "Atil_Pai" }, { "code": "", "text": "@Atil_Pai thanks for coming back and posting the solution to your original issue. This will help others who might come across this same issue in the future.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error occurred - ''CollectionScan died due to position in capped collection being deleted'
2020-05-24T21:42:57.662Z
Error occurred - &rdquo;CollectionScan died due to position in capped collection being deleted&rsquo;
9,100
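A quick way to see how much headroom a change-stream consumer has before hitting this error is to check the oplog window on the primary; these are standard shell helpers, nothing specific to this deployment:

rs.printReplicationInfo()          // oplog size and the time span it covers
db.getReplicationInfo().timeDiff   // the window, in seconds
// If consumer lag approaches this number, resume tokens can age out
// of the capped oplog, which is exactly the failure described above.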
null
[ "security" ]
[ { "code": "\"content\" : \"All your data is a backed up. You must pay 0.015 BTC to 15QSUeLd23GnUQqqndbwWR5UaPPqnwpSrc 48 hours to recover it. After 48 hours expiration, we will be leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com with this guide https://localbitcoins.com/guides/how-to-buy-bitcoins After paying write to me in the mail with your DB IP: [email protected]\\n\nflask-bcryptflask-jwt-extended", "text": "Hi everyone,Let me give some background, I came from a JS FrontEnd Env, and recently I started to learn Python and Mongo. (Also I love Light Modes, I’m not crazy… just different )Well, this happens to me in my first project on Mongo hehe.It is very funny because is just a test environment and the data is irrelevantI use DO because it has a basic droplet that I can create a quick Mongo DB and they have VPC, that I can connect one droplet to another, just for the sake of testing performance.I start using Studio3T (that I will sadly stop using it because is very expensive after the trial version and I don’t know how long my test will be going on), but is super easy to use to create Collections, and add DBs. (Also I promise myself I will start learning the CLI mode.I started with a simple project RestFull API , so I decide to with Flask, added some authentications with flask-bcrypt and flask-jwt-extended.Why and how in the earth, some bots or people got into my DB?Could you please guide me to the correct please to secure my servers or Mongodb.thank you!", "username": "Adrian_Galvez_G" }, { "code": "", "text": " Hi @Adrian_Galvez_G and welcome to the community!Sorry to hear your first project with MongoDB resulted in getting your database attacked. MongoDB has a security checklist to help make your database secure. 
This might be more than you need however at this time if you're just working through a test project that won't be sticking around.For a test system like this, you can do a couple of things to make it secure enough:", "username": "Doug_Duncan" }, { "code": "bindIp: 127.0.0.1, ipServer, VCPIp,MongoDB shell version v4.0.3
connecting to: mongodb://127.0.0.1:27662/
2020-06-12T14:06:50.208-0500 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27662, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27662 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:257:13
@(connect):1:6
exception: connect failed
", "text": "Thank you so much for pointing me to the correct links.Now I'm trying to secure the server, but I don't get why it only works well with bindIp: 127.0.0.1, ipServer, VCPIp. I also changed the port, but this is not working at all; I always get:Is my yaml config wrong?", "username": "Adrian_Galvez_G" }, { "code": "", "text": "Hello @Adrian_Galvez_Gthere is a great answer for this error from @Stennie_XHope this helps\nMichael", "username": "michael_hoeller" }, { "code": "MongoDB shell version v4.0.3
connecting to: mongodb://127.0.0.1:27662/
2020-06-12T14:06:50.208-0500 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27662, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27662 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:257:13
@(connect):1:6
exception: connect failed
", "text": "I always get:One thing I notice is that the mongo shell connection is to a server on the same machine as the connection is being made from (the connecting to: mongodb://127.0.0.1:27662/ bit). Are you trying to connect to your DO instance from inside of the DO droplet, or are you trying to connect from your local machine?If the link @michael_hoeller posted doesn't help, let us know and we can provide further assistance.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
First Project and Hacked?
2020-06-12T01:33:22.690Z
First Project and Hacked?
12,302
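To make the bindIp and authentication advice concrete, here is a hedged shell sketch; the user name, password and VPC address are placeholders, and the YAML lines go in /etc/mongod.conf before a restart:

// connect locally on the droplet first, then:
use admin
db.createUser({
  user: "siteAdmin",                  // placeholder
  pwd: "use-a-long-random-password",  // placeholder
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" },
    { role: "readWriteAnyDatabase", db: "admin" }
  ]
})
// then in /etc/mongod.conf (YAML), restart mongod afterwards:
//   security:
//     authorization: enabled
//   net:
//     bindIp: 127.0.0.1,<droplet VPC IP>   // placeholder VPC address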
null
[ "atlas-online-archive" ]
[ { "code": "", "text": "Just a quick question: We tried out the new Online Archive beta feature today because we have a lot of existing data (basically read audits) that we want to move into cold storage.The archive is created successfully and displays “Active” state. However, no data is transferred into it yet.\nI chose an age limit of “1” days, which means that almost all data inside the collection should be archived.Is that assumption correct?\nWhen is the archival process taking place?Thanks,\nMartin", "username": "MartinLoeper" }, { "code": "", "text": "Hello Martin!Thanks for bringing this up! That all looks correct to me. We would expect archiving to begin within 5 minutes and occur in intervals of up to 2GB of data being archived every 5 minutes until all data that matches the rule is archived. I don’t see any alerts, but it might be helpful to understand bit more about the workload if you can share? If you’d prefer please feel free to reach out to me directly to discuss ([email protected]).Best,\nBen", "username": "Benjamin_Flast" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Online Archive - When is data copied over into archive?
2020-06-12T15:33:44.280Z
Online Archive - When is data copied over into archive?
3,481
null
[ "node-js", "production" ]
[ { "code": "roundTripTimeServerDescriptionroundTripTimeroundTripTimeMongoClientReplicaSetNoPrimary", "text": "The MongoDB Node.js team is pleased to announce version 3.5.9 of the driverThe default roundTripTime of a ServerDescription is -1, which means if that value is used we can potentially calculate a negative roundTripTime. Instead, if no previous roundTripTime exists, we use the duration of the initial handshake.A number of new options were added when the CMAP compliant connection pool was introduced in 3.5.x. Unfortunately, these options were not documented properly. Now they are mentioned in the MongoClient documentation, with a notice that they are only supported with the unified topology.A fix in 3.5.8 which ensured proper filtering of servers during server selection exposed an issue in max staleness calculations when the topology type is ReplicaSetNoPrimary and no servers are currently known. In order to estimate an upper bound of max staleness when there is no primary, the most stale known server is known to compare the others to - if there are no known servers, you can’t reduce the array!In certain very high load fail-over scenarios the driver is unable to reschedule a monitoring check in order to update its view of the topology for retryability. This would result in a high number of failed operations, as they were unable to determine a new viable server.Reference: MongoDB Node.js Driver\nAPI: Index\nChangelog: node-mongodb-native/HISTORY.md at 3.5 · mongodb/node-mongodb-native · GitHubWe invite you to try the driver immediately, and report any issues to the NODE project. Thanks very much to all the community members who contributed to this release!The MongoDB Node.js team", "username": "mbroadst" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Node.js Driver 3.5.9 Released
2020-06-12T13:45:21.884Z
MongoDB Node.js Driver 3.5.9 Released
2,506
null
[ "dot-net" ]
[ { "code": "var bson = BsonSerializer.Deserialize<BsonDocument>(query);\nvar definition = new BsonDocumentFilterDefinition<CellDo>(bson);\n{\n \"description\": {\n \"$regex\": /\\w*\\.a\\b/\n }\n}\nJSON reader expected a string but found '/\\\\w*\\\\.a\\\\b/'.JSON reader expected a string but found 'RegEx'.{\n \"description\": {\n \"$regex\": RegEx(\"/\\w*\\.a\\b/\")\n }\n}\n{\n \"description\": {\n \"$regex\": RegEx(/\\w*\\.a\\b/)\n }\n}\nmongo:4.2.6-bionicMongoDB.Driver 2.10.3", "text": "Hi, one of our endpoints accepts raw mongo query and I have encountered issue with regexes.The filter definition is created byThe query looks like this:But when the query is passed serializer throws an exception saying JSON reader expected a string but found '/\\\\w*\\\\.a\\\\b/'.I tried to wrap regular expression in strings - the expression is treated as string so doesn’t really help and I have also wrapped the expression into RegEx call, but it just thrown exception JSON reader expected a string but found 'RegEx'.The queries looks like this:I wonder if this is a bug because similar calls with ISODate instead are working fine.We are currently using mongo from docker image mongo:4.2.6-bionic and MongoDB.Driver 2.10.3", "username": "Tomas_Chrobocek" }, { "code": "", "text": "I just had the same problem today. Basically, I wanted to dynamically replace a part of my string (PLACEHOLDER) with either exact or regex match. Exact match would replace it so that it becomes ’ “chocolate” ', or regex ’ \"{$regex: “chocolate”} \" '.Instead what is returned is ‘/chocolate/’ after deserialization. This causes a problem for some of the names with special characters. In Mongo Compass I can use ’ \"{$regex: “chocolate”} \" ’ directly, but for some reason deserialization does not work with it.I used the following deserialization method in C#:\nvar something = MongoDB.Bson.Serialization.BsonSerializer.Deserialize(stringFilter);", "username": "Jan_Inge_Nygard" } ]
Bson regex deserialization
2020-06-08T23:37:16.564Z
Bson regex deserialization
2,965
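One detail worth spelling out, hedged since neither post confirms it end to end: BsonSerializer.Deserialize reads Extended JSON, where a regular expression is written as an operator document with string values rather than as a /.../ literal. A filter in that shape is also a valid server-side query on its own, so it should survive the round trip; the pattern is the one from the question:

{
  "description": {
    "$regex": "\\w*\\.a\\b",
    "$options": ""
  }
}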
null
[]
[ { "code": "mongoDB ver: mongoDB 3.2.9\n========================================================================================================================================================================================================================================================\n2020-04-03T14:04:10.960+0800 E STORAGE [thread1] WiredTiger (12) [1585893850:957914][5420:140729384308864], file:collection-2-4945547502775328916.wt, eviction-server: memory allocation of 11431936 bytes failed: Not enough space\n2020-04-03T14:04:10.960+0800 E STORAGE [thread1] WiredTiger (12) [1585893850:960911][5420:140729384308864], file:collection-2-4945547502775328916.wt, eviction-server: session unable to allocate a scratch buffer: Not enough space\n2020-04-03T14:04:10.960+0800 E STORAGE [thread1] WiredTiger (12) [1585893850:960911][5420:140729384308864], eviction-server: cache eviction thread error: Not enough space\n2020-04-03T14:04:10.960+0800 E STORAGE [thread1] WiredTiger (-31804) [1585893850:960911][5420:140729384308864], eviction-server: the process must exit and restart: WT_PANIC: WiredTiger library panic\n2020-04-03T14:04:10.960+0800 I - [thread1] Fatal Assertion 28558\n2020-04-03T14:04:10.960+0800 I - [thread1] \n***aborting after fassert() failure\n2020-04-03T14:04:10.994+0800 I - [WTJournalFlusher] Fatal Assertion 28559\n2020-04-03T14:04:10.994+0800 I - [WTJournalFlusher] \n***aborting after fassert() failure\n2020-04-03T14:47:36.529+0800 I CONTROL [main] ***** SERVER RESTARTED *****\n", "text": "Hello there,We’ve encountered a sudden termination of MongoDB,\nchecked that there was an error message -\n“2020-04-03T14:04:10.960+0800 E STORAGE [thread1] WiredTiger (12) [1585893850:957914][5420:140729384308864], file:collection-2-4945547502775328916.wt, eviction-server: memory allocation of 11431936 bytes failed: Not enough space”Details error log as below:Is there any insight can be shared with us?", "username": "Marco_Chou" }, { "code": "", "text": "Hi @Marco_Chou and welcome to the community forums.That’s a sign that your system didn’t have enough memory to complete all the tasks that it was trying to do at the time.Was your system under heavy load at that time? Do you frequently see issues like this? Does this machine only run MongoDB or do you run other applications that would require large memory use?I see you’re using MongoDB 3.2 which was end of life’d back in October of 2018. Can you upgrade your MongoDB instances to a supported version (MongoDB 3.6+)? If what you ran into was due to a bug in MongoDB chances are it has been fixed in newer versions.", "username": "Doug_Duncan" } ]
Memory allocation of xxxxx bytes failed: Not enough Space
2020-06-12T10:15:53.758Z
Memory allocation of xxxxx bytes failed: Not enough Space
4,355
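General guidance rather than a diagnosis of this particular crash: on a box that runs more than MongoDB, capping the WiredTiger cache reduces the chance of the allocator being starved. Both knobs below exist in 3.2 as well as current versions:

// inspect the configured cache ceiling:
db.serverStatus().wiredTiger.cache["maximum bytes configured"]
// change it at runtime, e.g. down to 1 GB:
db.adminCommand({ setParameter: 1,
                  wiredTigerEngineRuntimeConfig: "cache_size=1G" })
// persistent equivalent in the config file:
//   storage.wiredTiger.engineConfig.cacheSizeGB: 1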
null
[ "flutter" ]
[ { "code": "", "text": "Dear Sir,\nHowdy,We are Flutter Developers love MongoDB Realm but there is no MongoDB Realm Flutter SDK. So, please could you make MongoDB Realm Flutter SDK by Dart language to get benefits of MongoDB Realm database.Best Regards,\nTom William", "username": "Tom_William" }, { "code": "", "text": "Welcome to the community @Tom_William and thank you for your feedback!We are aware of the strong interest in a Flutter SDK and this has been discussed recently:We do not currently have a SDK with Flutter support. We are exploring a possible implementation but are currently blocked by features we need added to the Dart language before we can continue - we are tracking the issue here: https://github.com/realm/realm-object-server/issues/55 You can use MongoDB Realm for authentication and push notification services today\nhttps://docs.mongodb.com/stitch/authentication/\nhttps://docs.mongodb.com/stitch/services/push-notifications/ Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Realm Flutter SDK
2020-06-12T12:51:54.240Z
MongoDB Realm Flutter SDK
5,621
null
[ "sharding" ]
[ { "code": "", "text": "I’m curious to understand what happens to indexes after a chunk is moved. It appears to me that when a chunk is moved from one replica set to another a full re-index occurs for all of the source collection indexes. Is this true? If so, are there options that can be used to throttle the chunk movement, like not moving any chucks until the indexing is complete?In my particular case it appears that indexing after moving a chunk is flooding my disk with activity and crushing the performance.Let me explain.I’ve been working with a standalone instance and I’m now working to move it to two sharded replica sets. I’ve successfully moved the database to a replica set of three data nodes and a single shard. I then created an empty three node replica set and added it as a second shard. I’ve enabled sharding on the database and one by one I am sharding the collections. The small collections balanced without issue but I’m running into balancing issues when sharding the larger collections.It seems that after a chunk is moved from the source RS to the target RS, each node in the source RS performs a significant amount of indexing work, so much so that it overwhelms the disk drive which will peg at 100% activity and a disk queue of around 10. I believe it is indexing activity because it the files being written to are index files. The data disk for each node is four 10K SAS drives in a RAID 10 array. CPU utilization hovers around 40% and memory around 50%.I expect this is resulting in excessive lag and causing the migrations to begin failing.This issue seems to build over time. What I mean by that is that I can disable balancing on the collection and let the indexing work finish. If I then enable balancing on the collection, the indexing load after the first chunk is moved isn’t too bad, but after 6 or 7 chunks have been moved the load on the drives gets so bad that the balancing begins to throw ‘aborted’ errors.", "username": "Tim_Heikell" }, { "code": "sh.status()mongos", "text": "Hi Tim,I don’t believe any reindex operation was done in a chunk move in either the donor shard nor the recipient shard. In the donor shard, all indexes must be present, otherwise the command will not proceed (reference: moveChunk command).When you say that issues start to pop out when balancing larger collections, could you provide the scale of the collection? Are they gigantic collections approaching TB-size each? How many of those collections are you trying to shard?I expect this is resulting in excessive lag and causing the migrations to begin failing.Could you share the log line or the output of sh.status()? What was the exact cause of the failed migration?This issue seems to build over time. What I mean by that is that I can disable balancing on the collection and let the indexing work finishThis sounds like two separate issues to me, indexing and moving chunks. If possible, could you let the indexing work finish before doing balancing?Actually, the “best” way of migrating into a sharded cluster is to pre-create and pre-split the sharded collection before filling it with data. In essence, this can be done by:If you pre-create the sharded collection and pre-split the (empty) chunks, you don’t need to do migrations anymore. 
The relevant data would be restored straight into its proper shard.If you’re interested in the inner workings of chunk migration and sharded cluster, please see the Sharding Internals article, which is part of the MongoDB source code itself.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin, thanks for the reply.I have three collections that fail. One has 27M documents and a total document size of 28GB. Two has 86M documents and is 26GB. The third is by far the largest having 4.2B documents and a total document size of 1.4TB. I only enable balancing on one collection at a time in my attempts to make this work.sh.status() only says something about n failures in the last 24 hours with a message of ‘aborted’. There were not any helpful details. There is more information in the logs which can be viewed here.The collections are fully indexed in a single sharded replica set. My application is disabled, so there are no attempted CRUD actions being taken. It is when I add a second shard and set a collection to be sharded that the problem occurs. On the smaller collections a few chunks will move but eventually the disks on all nodes in the donor replica set are overwhelmed with disk activity (100% and disk queue ~10). I’m opining that it is indexing related because the huge disk activity is all on .wt files that begin with ‘index’. On the large collection errors happen immediately. Eventually one chunk was moved but it took nearly 24 hours. There are 2,900 chunks in this collection and I can’t wait 8 years for the balancing to complete The pre-split process sounds interesting but my initial concern is that using mongodump and mongorestore was the first method I tried when I started this migration task nearly two months ago. At that time I was merely trying to restore to another standalone instance and I gave up on that path because after restoring the data it started rebuilding the indexes and I was in the same situation with the indexing crushing my hard drives.If I create pre-splitted shards won’t I potentially have the same indexing issue on each member of each replicaset?To try the pre-split I’ll need to round up some equipment for a lab. My application is finally back online after two months of attempting to create a sharded cluster But I have a question regarding pre-splitting the collections. I have my chunk size set to 1024MB to minimize re-balancing needs, which results in 2,900+ chunks in my large collection. In a 1:1 meeting with MondoDb yesterday I was encouraged to reset it back to the default of 64MB which means I’ll have nearly 50K chunks. Is there a way to tell MondoDb that I want 50K evenly spaced chunks to do I need to use loop in a script that calls splitAt 50K times?", "username": "Tim_Heikell" }, { "code": "2020-06-02T06:55:12.326-0700 W STORAGE [FlowControlRefresher] Flow control is engaged and the sustainer point is not moving. Please check the health of all secondaries.2020-06-02T06:55:29.190-0700 I STORAGE [WTCheckpointThread] WiredTiger message [1591106129:189774][1352:140709697032320], WT_SESSION.checkpoint: Checkpoint ran for 99 seconds and wrote: 89632 pages (4141 MB)", "text": "Hi Tim,You have a lot of data there. I think the hardware is overwhelmed with the demand placed onto it by all the chunk moves and rebalancing.From the logs you references, two things seem to support the theory:2020-06-02T06:55:12.326-0700 W STORAGE [FlowControlRefresher] Flow control is engaged and the sustainer point is not moving. 
Please check the health of all secondaries.This message seem to indicate that the secondaries are having trouble keeping up with demand. The flow control is a throttling mechanism on the primary to allow the secondaries to keep up (see Flow Control). I can’t tell why the secondaries can’t keep up, but it could be caused by a couple things, e.g. slow network, or if the secondaries are less powerful than the primary, then the primary simply must wait for the secondary to finish it work.The second thing that caught my eye:2020-06-02T06:55:29.190-0700 I STORAGE [WTCheckpointThread] WiredTiger message [1591106129:189774][1352:140709697032320], WT_SESSION.checkpoint: Checkpoint ran for 99 seconds and wrote: 89632 pages (4141 MB)By default, WiredTiger creates a checkpoint every 60 seconds. See the linked page for a short explanation of checkpoints. A typical checkpoint runs for much less than 60 seconds. This particular checkpoint runs for 99 seconds, which implies that: a) it needs to write a whole lot of data, or b) the disk is overwhelmed by the write request placed on it.From the numbers you provided so far, it sounds to me that doing dump & restore would be faster than waiting for the chunk to rebalance themselves Regarding creating chunks, I don’t think there’s an automatic method to create the chunks. Usually faced with this situation, I would create a small script that calls sh.splitAt() in a loop.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hey Kevin.Yep, I have a lot of data and I’m just getting started. I anticipate having to add shards on an ongoing basis and I’ll need to be able to do it without having to manually recreate the cluster, which is why I’m trying to understand where my bottleneck is.The nodes are all identical. They are VMs running on six identically configured servers.CPU, memory and network all seem fine. I am definitely overwhelming my disks as mentioned in the original post. All of the disk activity is on the index files so I’m trying to understand why this is, and what can I do about it.Could it be my 1GB chunks? I thought that using the maximum chunk size would be best in my case since I will be constantly adding new data and I wanted to minimize the need to split chunks and balance shards, but maybe these large chunks are the problem? When I get my lab setup I plan to retry this process using the default 64MB chunks.Thanks.Tim", "username": "Tim_Heikell" } ]
Indexing after moving chunks
2020-06-03T03:15:47.442Z
Indexing after moving chunks
3,125
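Kevin's closing suggestion in the thread above, a small script calling sh.splitAt() in a loop, can be sketched in the mongo shell. The namespace, shard key field name, and key range below are assumptions to adapt, not values taken from Tim's cluster:

```javascript
// Pre-split an empty sharded collection into ~50K evenly spaced chunks.
// Run via mongos after sh.shardCollection(ns, { key: 1 }); all names are placeholders.
var ns = "mydb.mycoll";
var min = 0, max = 4200000000;            // assumed shard key range
var chunks = 50000;
var step = Math.floor((max - min) / chunks);
for (var point = min + step; point < max; point += step) {
  sh.splitAt(ns, { key: point });         // one split per iteration
}
```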
null
[]
[ { "code": "", "text": "Hi,\nI was trying to understand how to update an object with an embedded object inside a vector:Here is an example workflow, of course its contrived:db.test.insert({“a”:1})\ndb.test.update({“a”:1}, {$set : {“f.a.b”:2}})This works as expected, yielding the object:db.test.find({“a”:1})\n{ “_id” : ObjectId(\"…\"), “a” : 1, “f” : { “a” : { “b” : 2 } } }However, if my aim is to add a new vector containing compound objects:db.test.update({“a”:1}, {$set : {“f”:[{“a.b”:2}]}})I am getting:db.test.find({“a”:1})\n{ “_id” : ObjectId(\"…\"), “a” : 1, “f” : [ { “a.b” : 2 } ] }What I wanted to get is{ “_id” : ObjectId(\"…\"), “a” : 1, “f” : [{ “a” : { “b” : 2 }}]}What is the correct way to do this?Thanks!", "username": "Shlomi_Vaknin" }, { "code": "db.test.update({“a”:1}, {$set : {“f”:[{“a.b”:2}]}})f{ \"a\" : { \"b\" : 2 } }MY_OBJ = { \"a\" : { \"b\" : 2 } }\ndb.test.update( { \"a\": 1 }, { $push : { \"f\": MY_OBJ } } )\n", "text": "What I wanted to get is{ “_id” : ObjectId(“…”), “a” : 1, “f” : [{ “a” : { “b” : 2 }}]}What is the correct way to do this?Hi Shlomi,To update a collection with array field and add elements to it you use the $push array update operator.In the code db.test.update({“a”:1}, {$set : {“f”:[{“a.b”:2}]}}), you are trying to update the document with an array field f with one element, an object, { \"a\" : { \"b\" : 2 } }.The correct way would be:", "username": "Prasad_Saya" }, { "code": "> db.test.update({\"a\": 1}, {\"$set\": {\"f\": [{\"a\": {\"b\": 2}}]}})\n{\n acknowleged: 1,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0,\n insertedId: null\n}\nf> db.test.find()\n[\n {\n _id: 5ee21df45b4e7466e2f4f520,\n a: 1,\n f: [ { a: { b: 2 } } ]\n }\n]\n$push", "text": "(wave) Hi @Shlomi_Vaknin and welcome to the community!In addition to the method that @Prasad_Saya gives, you could also use the following:This updates the f field as you would like as well:Both methods work. Prasad’s method uses the $push operator to add the document to the array, where as my method has you manually build the array with the document in it.", "username": "Doug_Duncan" }, { "code": "$set$push$set$push$push", "text": "I want to add some related information. about the update operators $set and $push.$set sets a field’s value. If the field doesn’t exist, it creates the field and sets the value. If the field already exists, then the existing value is completely replaced by the new value.$push adds an element to an array. If the array field already exists, the element is added to the array. If the field doesn’t exist, then a new array field is created and the element is added to the array.So, you can choose either of the update operators, depending upon your need. I think, arrays should be worked with $push - unless you want to completely replace the existing array with a new value.", "username": "Prasad_Saya" }, { "code": "$set$array", "text": "@Prasad_Saya thanks for the clarification. Whether to use $set or $array does depend on what you are trying to accomplish, and I did indeed forget to leave that comment off my post.", "username": "Doug_Duncan" } ]
Dot notation for objects embedded within vectors
2020-06-11T04:01:02.062Z
Dot notation for objects embedded within vectors
3,035
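To make the contrast between the two answers above concrete, here is a minimal shell sketch starting from the same hypothetical document { a: 1 }:

```javascript
// $push appends one element to the array (creating the array if missing)...
db.test.update({ a: 1 }, { $push: { f: { a: { b: 2 } } } });
// ...while $set replaces the whole field with a new array value.
db.test.update({ a: 1 }, { $set: { f: [ { a: { b: 2 } } ] } });
// Both yield { a: 1, f: [ { a: { b: 2 } } ] } when f did not exist before.
```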
null
[ "spring-data-odm" ]
[ { "code": "", "text": "Hi there,the combination of AWS DocumentDB, Cloud Foundry, Spring Boot, Spring Data, Mongo Java Driver and blue/green deployments caused my team a lot of trouble in the last days. We are kind of stuck and looking for insights that we might be missing.We are using AWS DocumentDB which is a MongoDB compatible clustered database by AWS. We have a Spring Boot microservice which connects to the DocumentDB cluster via the cluster endpoint.Short story:\nWhen our Spring Boot microservice shuts down, the MongoDB driver gets an interrupt which invalidates the connection pool. Somehow, this seems to close all connections of other microservices as well. The other microservice takes around one minute to be able to open a new connection to the DocumentDB cluster.Long story:We deploy a new Spring Boot microservice to Cloud Foundry via blue/green deployment. The previous microservice keeps running until the new microservices starts up and gets healthy.After the new application gets healthy, CF Diego stops the venerable (previous) microservice.The stopped microservice gets a (SIGTERM) signal. Spring Boot tries a graceful shutdown and sends an interrupt signal to all running threads which sometimes causes a MongoDB driver interrupt.After the interrupt, the MongoDB driver invalidates the connection pool. So far so good… Just a couple of milliseconds after the connection pool invalidation, the new microservices, which was healthy before, also loses the connection to the DocumentDB Cluster. It restarts. Even after restarting, it gets connection refused exceptions. Around 1 minute after the connection pool invalidation, the new microservice can open the connection again.I cannot reproduce the error with MongoDB and multiple applications locally. I guess, the error only occurs either with clusters or with DocumentDB.Do you have any ideas about possible causes? Why would shutting down a Spring Boot application and the MongoDB driver interrupts cause global connection errors which sustain for 1 minute?The problem exists with Spring Boot versions 2.1.7.RELEASE and 2.3.0.RELEASE. The latter has the MongoDB Java Driver 4.0.x", "username": "Artun_Subasi" }, { "code": "", "text": "I cannot reproduce the error with MongoDB and multiple applications locally. I guess, the error only occurs either with clusters or with DocumentDB.Hi @Artun_Subasi,Amazon DocumentDB is a separate implementation from the MongoDB server.DocumentDB uses the MongoDB 3.6 wire protocol, but there are number of functional differences and the supported commands are a subset of those available in MongoDB 3.6.If your issue isn’t reproducible with an actual MongoDB deployment, there isn’t much we can do to investigate. You could try standing up a MongoDB Atlas cluster to compare behaviour. The free tier in Atlas provides a basic replica set with 512MB of storage.For support using DocumentDB I suggest posting on the AWS Forums or Stack Overflow.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks Stennie,i was just looking for insights and ideas in case that I’m missing something. I’ll investigate further using other support channels and give a feedback here if I find something new.", "username": "Artun_Subasi" }, { "code": "", "text": "I created a minimal project with an attempt to reproduce the problem with either DocumentDB or MongoDB Atlas: GitHub - ArtunSubasi/documentdb-cluster-tester: A sample app which attempts to reproduce a connection problemIt seems to work fine with DocumentDB. 
The problem may exist within our Cloud Foundry infrastructure.", "username": "Artun_Subasi" } ]
DocumentDB cluster: invalidating the connection pool kills connections of other microservices
2020-06-10T15:27:25.709Z
DocumentDB cluster: invalidating the connection pool kills connections of other microservices
4,846
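Kevin's point above about passing connection options through to the underlying Node driver might look like the sketch below. The URI, pool size, and CA path are placeholders, and whether Meteor forwards such options is an open question from the thread; option names also differ between driver 3.x (poolSize) and 4.x (maxPoolSize):

```javascript
const { MongoClient } = require("mongodb");

// Options object for the 3.x Node driver; all values here are placeholders.
const client = await MongoClient.connect("mongodb://host:27017/db", {
  useUnifiedTopology: true,
  poolSize: 10,                          // renamed maxPoolSize in driver 4.x
  tls: true,
  tlsCAFile: "/path/to/ca-bundle.pem",   // e.g. DocumentDB's published CA bundle
});
```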
null
[ "installation" ]
[ { "code": "", "text": "I’m following the install guide “Install MongoDB Community Edition on macOS”. I’m getting the following error:MongoDB shell version v4.2.3\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n2020-02-18T23:15:25.465-0800 E QUERY [js] Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-02-18T23:15:25.468-0800 F - [main] exception: connect failed\n2020-02-18T23:15:25.468-0800 E - [main] exiting with code 1I’m out of ideas and trying several Stackoverflow solutions have taken me back to square one where I can’t connect to the server running “mongo”", "username": "Stephen_Fuller" }, { "code": "brew services start mongodb-community", "text": "Error connecting to 127.0.0.1:27017 :: caused by :: Connection refusedYou need to ensure the MongoDB server is running.Try: brew services start mongodb-community.Regards,\nStennie", "username": "Stennie_X" }, { "code": "~ netstat -an | grep 27017\n\ntcp4 0 0 *.27017 *.* LISTEN", "text": "Make sure MongoDB server is running and listening the port 27017 using netstat command", "username": "coderkid" }, { "code": "brew services list", "text": "I’m running the service already and still unable to connect. I can verify this with brew services listmongodb-community started {user} /Users/{user}/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist", "username": "Stephen_Fuller" }, { "code": "netstat -an | grep 27017mongod --dbpath /users/{username}/data/db/", "text": "Running netstat -an | grep 27017 doesn’t return anything. This is after I verified that the brew service is running.If I run mongod --dbpath /users/{username}/data/db/ after creating those directories I do get a return", "username": "Stephen_Fuller" }, { "code": "mongod.confbindIp: 127.0.0.1", "text": "netstat -an | grep 27017Looks like problem is solved.\nBy default, MongoDB doesn’t allow remote connections, except localhost (127.0.0.1)if you want other IPs connect to this server, go and edit mongod.conf file, comment out (put # in front of the line) the line looks like thisbindIp: 127.0.0.1", "username": "coderkid" }, { "code": "", "text": "i am going through the same issues, please how did you solve yours? i tried the netstat, heck i even uninstalled and reinstalled several times, nothing on stackoverflow has helped… i really need help", "username": "Odiachi_Daniel" }, { "code": "mongodmongo", "text": " Hello @Odiachi_Daniel and welcome to the MongoDB community forums.It’s helpful if you provide more information so we can help you. 
While your issue is similar to the original poster you might have other things going on.", "username": "Doug_Duncan" }, { "code": "", "text": "Error\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nQUERY [js] Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n[main] exception: connect failed\n[main] exiting with code 1solution.service mongod stop\n#dont start mongod…instead…systemctl start mongod\n#then mongo commandmongo", "username": "Ngugi_David" }, { "code": "#run-mongodb-community-edition$brew services start [email protected]$mongod --config /usr/local/etc/mongod.conf --forkUnrecognized option: auth=true$ps aux | grep -v grep | grep mongod$mongo$brew services start [email protected]$mongod/data/db$brew services start [email protected]/data/db$mongod --dbpath /users/{username}/data/db/", "text": "Same issue, macOS Catalina 10.15.4. I have successfully installed mongodb following official tutorial. But when I reach #run-mongodb-community-edition point it gets messy. There are 2 options to run: as a macOS service and as a background process.First one ($brew services start [email protected]) returns Successfully started mongodb-community, second one ($mongod --config /usr/local/etc/mongod.conf --fork) returns Unrecognized option: auth=true. Let’s just use the first one as it preffered way according to the docs anyway. At this moment if I run $ps aux | grep -v grep | grep mongod I don’t get anything (which basically means that MongoDB is not running).After that we go to the run mongo part, which is just a $mongo call. And here I get the weird output:MongoDB shell version v4.2.6\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n2020-04-30T15:41:02.184+0700 E QUERY [js] Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-04-30T15:41:02.186+0700 F - [main] exception: connect failed\n2020-04-30T15:41:02.186+0700 E - [main] exiting with code 1Well, here we start to find a workaround. 
Instead of running $brew services start [email protected] I tried to run just $mongod and I got the following output with an error:2020-04-30T15:47:43.685+0700 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2020-04-30T15:47:43.687+0700 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] MongoDB starting : pid=34897 port=27017 dbpath=/data/db 64-bit host=Egors-iMac.local\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] db version v4.2.6\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] git version: 20364840b8f1af16917e4c23c1b5f5efd8b352f8\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] allocator: system\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] modules: none\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] build environment:\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] distarch: x86_64\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] target_arch: x86_64\n2020-04-30T15:47:43.687+0700 I CONTROL [initandlisten] options: {}\n2020-04-30T15:47:43.688+0700 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating\n2020-04-30T15:47:43.688+0700 I NETWORK [initandlisten] shutdown: going to close listening sockets…\n2020-04-30T15:47:43.688+0700 I - [initandlisten] Stopping further Flow Control ticket acquisitions.\n2020-04-30T15:47:43.688+0700 I CONTROL [initandlisten] now exiting\n2020-04-30T15:47:43.688+0700 I CONTROL [initandlisten] shutting down with code:100So it looks like the installation process didn't create the directory /data/db. I am not sure where it must be created. And this seems to be the reason why $brew services start [email protected] does not work. Because if I manually create the /data/db folder and run $mongod --dbpath /users/{username}/data/db/ everything works as expected.But please don't close the issue as it's not a solution, because following the official docs leads to different errors. How should we solve it properly?", "username": "Dean_1812" }, { "code": "", "text": "I've just read that this issue is common for macOS Catalina users becauseThe new macOS 10.15 does not allow you, by default, to create new volumes in the / subsystem, and if you do, they will be deleted on restart.So /data/db can't be created during the installation process.", "username": "Dean_1812" }, { "code": "", "text": "@Dean_1812 I am having the exact same issue as you. Were you able to get it resolved?", "username": "Levi_Bernard" }, { "code": "", "text": "Good evening. A while ago MongoDB threw me the same error, and what worked for me was restarting the PC; then I ran the commands to check the status and start it, and it came up without problems…\nJust give it a try and see if it helps; you lose nothing by doing so. ", "username": "Omar_Rodriguez" }, { "code": "", "text": "create a data/db in your home directory1. cd /Users\n2. cd to your home directory\n3. mkdir data\n4. cd data\n5. mkdir db\n6. mongod --dbpath ~/data/db press enter then just keep it running.", "username": "robinson_garcia" }, { "code": "", "text": "Would love an official Mongo response to this - as of right now my experience has been identical to Dean_1812's. 
Could we get an update or addition to the official tutorial?", "username": "Andy_Eblin" }, { "code": "127.0.0.127017127.0.0.127017localhostmongod/data/dbmongodmongodmongod--dbpath", "text": "“Error: couldn’t connect to server 127.0.0.1:27017” is a general error message indicating that your client/driver cannot connect to a server on the specified hostname/IP and port. In this specific example, 127.0.0.1 is the hostname or IP and 27017 is the port.Typical reasons for this error are:Please refer to Troubleshoot Connection Issues in the MongoDB Atlas documentation.All modern versions of MongoDB bind to localhost by default so you have to explicitly adjust IP Binding to allow external connections.Please review the MongoDB Security Checklist and consider available security measures to avoid opening your deployment to being compromised. In particular, I strongly recommend always:Follow the general security practice of Principle of Least Privilege and consider how you can reduce unnecessary risk to your deployment. For example, instead of opening up your deployment to the world, consider using a VPN or SSH tunnel to provide secure remote access.The above suggestions should be helpful, but the Compass documentation also includes a reference for Compass Connection Errors.If you run mongod without providing a configuration file, it will use a hardcoded default of /data/db which is not created as part of the Homebrew installation and not supported on macOS Catalina (which no longer allows creating paths on the root filesystem).The documented steps for Run MongoDB Community Edition on macOS are correct (and in recommended order of approach):brew services start [email protected] run MongoDB (i.e. the mongod process) manually as a background process , issue the following:mongod --config /usr/local/etc/mongod.conf --forkIf you want to run mongod without using either of these options, you can also explicitly pass the --dbpath parameter. Per SERVER-46398, this suggestion has been added to recent releases of MongoDB (3.6.19+, 4.0.19+, 4.2.7+, 4.4.0-rc4).If you are having a similar issue connecting to your MongoDB deployment, please start a new discussion topic with details specific to your environment such as:It would be very useful to include information on any troubleshooting steps you have already tried.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Error: couldn't connect to server 127.0.0.1:27017
2020-02-19T08:43:07.992Z
Error: couldn't connect to server 127.0.0.1:27017
421,631
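A quick way to separate "server not running" from authentication or IP-binding problems is a minimal reachability check from the mongo shell; the host and port below are the defaults discussed in the thread above:

```javascript
// Throws "connection attempt failed" if nothing is listening on the port.
var conn = new Mongo("127.0.0.1:27017");
var admin = conn.getDB("admin");
admin.runCommand({ ping: 1 });   // returns { ok: 1 } once the server responds
```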
null
[]
[ { "code": "", "text": "Hi, we’re managing deeply nested content (Prosemirror-Documents). Now I have realized, that a lot of such documents contain the unicode character U+00A0, which is a non breaking space, instead of a regular space. this destroys a lot of user interfaces. Can I somehow run a script that traverses through the whole collection and replaces the character with the correct one? ", "username": "Nicolas_Katte" }, { "code": "mongoexportsedtrmongoimportmongoexport -d db -c coll | tr ... | mongoimport -d db -c coll2\n", "text": "Hi Nicolas,I’m afraid the method to do global search/replace of a single character inside a document, deeply nested or otherwise, is not part of the server’s codebase.A quick and dirty method I can think of is to do a mongoexport on the collection, pipe to sed or tr, then pipe to mongoimport. Something like:Probably not ideal for a large collection. Unfortunately if you need a more granular method, you’d have to write a specialized script.Best regards,\nKevin", "username": "kevinadi" } ]
Replace Specific Unicode Character in whole collection
2020-06-06T21:46:59.877Z
Replace Specific Unicode Character in whole collection
2,760
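A more granular alternative to the export/import pipe Kevin describes is a small shell script. This sketch assumes a single known string field, with hypothetical collection and field names; the deeply nested documents in the question would still need a recursive walk over each document:

```javascript
// Replace every U+00A0 (non-breaking space) with a regular space in "body".
db.docs.find({ body: /\u00a0/ }).forEach(function (doc) {
  db.docs.updateOne(
    { _id: doc._id },
    { $set: { body: doc.body.replace(/\u00a0/g, " ") } }
  );
});
```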
null
[ "dot-net" ]
[ { "code": "", "text": "I started learning ASP.Net Core Web API development a few days back. For the same, I am following the official tutorial document by Microsoft.However, there is an issue. A major one I guess. Since I am also new in MongoDB, I found that when running a particular code from the tutorial\nthe code section:\npublic void Update(string id, Book bookIn) =>\n_books.ReplaceOne(book => book.Id == id, bookIn);returns an error:\nMongoDB.Driver.MongoCommandException: Command findAndModify failed: After applying the update, the (immutable) field ‘_id’ was found to have been altered to _id: null.Do note, the code on Microsoft Site is exactly the same in my project.", "username": "Kunal_Das" }, { "code": "bookIn_idnullId", "text": "Hi @Kunal_Das, welcome to the forum!MongoDB.Driver.MongoCommandException: Command findAndModify failed: After applying the update, the (immutable) field ‘_id’ was found to have been altered to _id: null.Based on the code and the error message, it suggests that the bookIn variable contains _id field with a null value. Try to omit the Id field of the class, see also db.collection.findAndModify for more information.If you have further question, it would be helpful to provide:Since I am also new in MongoDB,I’d recommend to enrol in a free online course on MongoDB University M220N: MongoDB for .NET Developers to learn about .NET and MongoDB.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Facing this weird issue in ASP.Net Core 3.0
2020-06-07T20:36:14.614Z
Facing this weird issue in ASP.Net Core 3.0
2,614
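Wan's advice above amounts to making sure the replacement document does not carry a null _id. Expressed in the mongo shell, with hypothetical field names and id value, the working shape is:

```javascript
// The replacement omits _id entirely, so the server keeps the existing,
// immutable _id instead of seeing it "altered to null".
db.books.replaceOne(
  { _id: ObjectId("5ee00000000000000000a001") },
  { name: "Design Patterns", price: 54.93 }
);
```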
null
[ "security", "configuration" ]
[ { "code": "", "text": "Hello all,I’ve been struggling with a URI connect issue for the last day or two, and have run out of options to try. Im hoping someone here will be able to answer my question.The error I am getting is that the SSL handshake fails because no ssl certificate is presented in the URI string.Background:MongoDB 4.0 in a 5 server sharded cluster on EC2 instances - a test environment.\nThere is 1 Query/Router/Mongos, 1 config server and 3 sharded servers. (I understand this isn’t the ideal config but its for testing purposes).x.509 is enabled and functioning properly (SSL is enforced, not optional) on the mongo shell level. In fact even with Studio 3T I can validate against the server using x.509 - and everything works fine. HOWEVER.The problem comes when attempting to stitch together a URI for Meteor. Our application is packaged/bundled with Meteor - and is looking to connect to the mongo instance with mongodb://This does seem to be a significant issue for Mongo to pass the location of the client PEM file and the CA PEM file to the mongos instance.For security reasons Im reluctant to share the full connect string Im using - and I realize that makes it difficult for folks to help, so I’ve altered it to take it out the sensitive details. The *'s are obviously the data I’ve removed. The string below works from a 3T query perspective. However, if I take out the 3T extensions (3T.rootCApath - etc.) it obviously does not work. I have tried several combinations instead to pick up the pem files such as:mongodb://C%3D**%2CST%3D**%2CL%3D*****%2CO%3D********%2COU%3D*****%2CCN%3D****%2CemailAddress%3D**40.com@localhost:9999/?ssl=true&sslInvalidHostNameAllowed=true&readPreference=primary&serverSelectionTimeoutMS=5000&connectTimeoutMS=10000&authSource=$external&authMechanism=MONGODB-X509&3t.uriVersion=3&3t.connection.name=Mongo+Dev+Shard&3t.certificatePreference=RootCACert:from_file&3t.rootCAPath=C:\\Documents*CA.pem&3t.clientCertPath=C:\\Documents*.pem&3t.useClientCertPassword=truesslCertificateKeyfile=\nsslCAFile=\nclientCertPath=\nCAPath=\nssl_certfile=\nssl_ca_certs=\nsslPEMKeyFIle=\nsslPEMKeyPassword=\nsslCAFile=\nsslCertificateKeyfile=\nsslCertificateKeyFilePassword=Here is an example of what I have used (numerous variations) of a string where the 3T items are removed:mongodb://C%3D**%2CST%3D**%2CL%3D******%2CO%3D***********%2COU%3D*******%2CCN%3D******%2CemailAddress%3D****%40*******.com@localhost:9999/?ssl=true&sslInvalidHostNameAllowed=true&sslCertificateKeyfile=C:\\FileBuilds*.pem&sslCAFile=C:\\FileBuilds*.pem&sslCertificateKeyFilePassword=’*************’&readPreference=primary&serverSelectionTimeoutMS=5000&connectTimeoutMS=10000&authSource=$external&authMechanism=MONGODB-X509//////\nLastly -and this is more for feedback purposes in the event anyone from the MongoDB team reads this. The documentation on how to use x.509 authorization with URI’s seems quite sparse to me, and difficult to come by. The information presented is very highly level, incomplete, and leads to the user to a conclusion that it should be quite easily achievable to use X.509 with a URI. As some of you may have discovered - setting up x.509 with a sharded cluster is not a trivial task and is a time sink - one would expect that implementing a supported security approach should result in the ability to use published/supported mechanisms to connect. 
Username formatting (particularly around the %hex value replacement of ‘reserved characters’) appears to be dated, difficult to locate in the documentation, and not at all obvious.Secondly - key words regarding how to pass the SSL PEM file is not directly addressed, other than for tls - and then its unclear whether ssl(keyword) is a valid usage in the connect string, and whether tls(keyname) is even supported. I understand they may have deprecated SSL - however if they are going to allow ssl=true - then the supporting documentation should be there to understand how to implement it. If x.509 is not intended to be used with a URI string - that should be stated obviously so that the user/architect can make an informed choice when it comes to implementation. ///‘feedback’ over.", "username": "Eric_Taylor" }, { "code": "options", "text": "Hi Eric,The official Node driver supports TLS connection options using an options object (see MongoClient.connect(), however I can’t find the relevant documentation on the Meteor side.Presumably, if Meteor allows passing the connection options object to the Node driver (I’m assuming they use the official Node driver under the hood), you should be able to connect using the method described in the Node driver page.The only resource I can find is this Meteor forum post regarding this exact issue.Best regards,\nKevin", "username": "kevinadi" } ]
Mongodb:// URI with X.509
2020-06-11T11:11:21.211Z
Mongodb:// URI with X.509
2,350
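On Kevin's point in the thread above: the official Node driver takes the certificate files as driver options rather than URI parameters, which may be why the URI-only attempts fail. A hedged sketch, with placeholder paths (recent drivers can also derive the x.509 username from the client certificate):

```javascript
const { MongoClient } = require("mongodb");

const uri = "mongodb://host:27017/?authMechanism=MONGODB-X509&authSource=%24external";
const client = new MongoClient(uri, {
  useUnifiedTopology: true,
  tls: true,
  tlsCertificateKeyFile: "/path/to/client.pem", // client certificate + key, PEM
  tlsCAFile: "/path/to/ca.pem",                 // CA that signed the cluster certs
});
await client.connect();
```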
null
[ "node-js" ]
[ { "code": "", "text": "Can anyone explain why for a find query the profiler reports 30 ms in the millis fields, while the toArray in the node code takes 600+ ms?\nIt’s just one line with await and toArray wrapped with console.time.I should mention that in the system.profile collection, after the profiler report, there’s another report with “getMore”, but it also takes ~30 ms, so together they account for 10% of the time reported in the JS code.", "username": "nyonyi" }, { "code": "find()sort()limit()toArray()find()cursor.next()toArray()", "text": "Hi Ivan,I’m assuming you’re talking about the Node driver.The difference is because the result of find() is an unexecuted cursor, which you can then string together with other operations like sort(), limit(), etc.In contrast, toArray() actually executes the cursor, walk through all the output documents, and puts them all into an array.If you execute the cursor after find() by iterating through it with cursor.next(), you should see a similar timing with toArray().Best regards,\nKevin", "username": "kevinadi" } ]
Difference between profiler millis and toArray timing in JS
2020-06-10T05:22:55.394Z
Difference between profiler millis and toArray timing in JS
1,663
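To make Kevin's distinction measurable, the timing can be taken around the cursor execution itself; the collection and filter below are placeholders:

```javascript
// The profiler's "millis" covers server-side execution only; this client-side
// timer also includes getMore round trips, network transfer, and BSON-to-JS
// conversion of every document.
console.time("toArray");
const docs = await collection.find({ status: "active" }).toArray();
console.timeEnd("toArray");
```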
null
[ "dot-net" ]
[ { "code": "var bucket = new GridFSBucket(_context.Database); \nvar bytes = bucket.DownloadAsBytesByName(\"b7be1813-589a-4a0b-b720-70f9efd165aa\");\n", "text": "I have a c# .net core project where I am trying to download a file which I have stored in GridFS. The upload works fine and using 3T studio I can access the files but any of the download commands throw an exceptionThe exception thrown is:Command find failed: Error=2 {“Errors”:[“The index path corresponding to the specified order-by item is excluded.”]I don’t understand what is wrong here, I used the driver to create the GridFS bucket initially so the indexes should be correct, I don’t know what it would be trying to order by which would cause this issue. No matter which download command I use they all fail with the same erroAny help appreciated", "username": "Jon_Howell" }, { "code": "", "text": "Hi @Jon_Howell, welcome!It’s been a while since you posted this question, have you found a solution?Command find failed: Error=2 {“Errors”:[“The index path corresponding to the specified order-by item is excluded.”]Is there more information on the exception ? i.e. which class is throwing the exception.Could you also provide the following information:Regards,\nWan", "username": "wan" } ]
GridFS file download, C# driver
2020-05-21T15:09:18.733Z
GridFS file download, C# driver
4,101
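One way to isolate whether the failure above is driver-side or server-side is to run the equivalent download against a known MongoDB deployment; this Node-driver sketch mirrors the C# call, with placeholder URI and database name. Notably, the Error=2 {"Errors":[...]} shape resembles Cosmos DB's error format rather than MongoDB's, which is consistent with Wan asking what kind of deployment is in use:

```javascript
const { MongoClient, GridFSBucket } = require("mongodb");

// Download a GridFS file by name and return it as a Buffer.
async function downloadByName(uri, filename) {
  const client = await MongoClient.connect(uri, { useUnifiedTopology: true });
  try {
    const bucket = new GridFSBucket(client.db("test"));
    const chunks = [];
    for await (const chunk of bucket.openDownloadStreamByName(filename)) {
      chunks.push(chunk);
    }
    return Buffer.concat(chunks);
  } finally {
    await client.close();
  }
}
```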
null
[ "compass" ]
[ { "code": "", "text": "How to make Hashed Index in Compass?I’m checking there are asc, desc, 2ds, but where is hashed index?", "username": "Kim_Hakseon" }, { "code": "mongo", "text": "Hi @Kim_Hakseon, unfortunately at this time Compass cannot build hashed indexes, so you will have to drop out to a mongo shell to create the index. I just checked the project’s JIRA page and I don’t see a ticket for adding hashed indexes to Compass. You could create an issue however. There are similar requests for text indexes and partial and sparse indexes, but these tickets have been around for a while now, so I wouldn’t plan on new index type creation being added any time soon. ", "username": "Doug_Duncan" }, { "code": "", "text": "Ah-ha! Thank you Thank you ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
I have a question about Create Index in Compass
2020-06-11T06:27:31.546Z
I have a question about Create Index in Compass
2,600
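The shell workaround Doug describes is a one-liner; the collection and field names here are hypothetical:

```javascript
// Hashed indexes must currently be created outside Compass.
db.users.createIndex({ userId: "hashed" });
db.users.getIndexes();   // verify; Compass will list the index once it exists
```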
https://www.mongodb.com/…3_2_1023x583.png
[ "golang" ]
[ { "code": "child nodeparent nodeparentid{\"_id\":{\"$oid\":\"5ebd05b52f3700008500220b\"},\"username\":\"DHBK\",\"password\":\"123456\",\"lastname\":\"DHBK\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111001\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":null,\"comid\":\"DHBK\",\"comdepartment\":\"DHBK\",\"usercode\":\"DHBK_0001\",\"usertype\":\"ADMIN_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f3700008500220c\"},\"username\":\"KHOA_DIEN\",\"password\":\"123456\",\"lastname\":\"KHOA_DIEN\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111002\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"DHBK\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_DIEN\",\"usercode\":\"DHBK_0002\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f3700008500220d\"},\"username\":\"KHOA_XD\",\"password\":\"123456\",\"lastname\":\"KHOA_XD\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111003\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"DHBK\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_XD\",\"usercode\":\"DHBK_0003\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f3700008500220e\"},\"username\":\"KHOA_CNTT\",\"password\":\"123456\",\"lastname\":\"KHOA_CNTT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111004\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"DHBK\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_CNTT\",\"usercode\":\"DHBK_0004\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f3700008500220f\"},\"username\":\"BOMON_TUDONG\",\"password\":\"123456\",\"lastname\":\"BOMON_TUDONG\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111005\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"KHOA_DIEN\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_DIEN\",\"usercode\":\"DHBK_0005\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002210\"},\"username\":\"BOMON_VIENTHONG\",\"password\":\"123456\",\"lastname\":\"BOMON_VIENTHONG\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111006\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"KHOA_DIEN\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_DIEN\",\"usercode\":\"DHBK_0006\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002211\"},\"username\":\"BOMON_HETHONG\",\"password\":\"123456\",\"lastname\":\"BOMON_HETHONG\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111007\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"KHOA_DIEN\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_DIEN\",\"usercode\":\"DHBK_0007\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002212\"},\"username\":\"BOMON1_XD\",\"password\":\"123456\",\"lastname\":\"BOMON1_XD\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111008\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"KHOA_XD\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_XD\",\"usercode\":\"DHBK_0008\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002213\"},\"username\":\"BOMON2_XD\",\"password\":\"123456\",\"lastname\":\"BOMON2_XD\",\"useremail\":\"[email 
protected]\",\"usertel\":\"0907111009\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"KHOA_XD\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_XD\",\"usercode\":\"DHBK_0009\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002214\"},\"username\":\"BOMON3_XD\",\"password\":\"123456\",\"lastname\":\"BOMON3_XD\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111010\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"KHOA_XD\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_XD\",\"usercode\":\"DHBK_0010\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002215\"},\"username\":\"TRUONGKHOA_BMVT\",\"password\":\"123456\",\"lastname\":\"TRUONGKHOA_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111011\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"BOMON_VIENTHONG\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0011\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002216\"},\"username\":\"PHOKHOA_BMVT\",\"password\":\"123456\",\"lastname\":\"PHOKHOA_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111012\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"BOMON_VIENTHONG\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0012\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002217\"},\"username\":\"THUKY_BMVT\",\"password\":\"123456\",\"lastname\":\"THUKY_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111013\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"BOMON_VIENTHONG\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0013\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002218\"},\"username\":\"GV_BMVT\",\"password\":\"123456\",\"lastname\":\"GV_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111014\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"BOMON_VIENTHONG\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0014\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f37000085002219\"},\"username\":\"SV1_BMVT\",\"password\":\"123456\",\"lastname\":\"SV1_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111015\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"GV_BMVT\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0015\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f3700008500221a\"},\"username\":\"SV2_BMVT\",\"password\":\"123456\",\"lastname\":\"SV2_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111016\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"GV_BMVT\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0016\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f3700008500221b\"},\"username\":\"SV3_BMVT\",\"password\":\"123456\",\"lastname\":\"SV3_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111017\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"GV_BMVT\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0017\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ebd05b52f3700008500221c\"},\"username\":\"SV4_BMVT\",\"password\":\"123456\",\"lastname\":\"SV4_BMVT\",\"useremail\":\"[email 
protected]\",\"usertel\":\"0907111018\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"GV_BMVT\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0018\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ec642b2412b0000e70021a5\"},\"username\":\"KHOA_KT\",\"password\":\"123456\",\"lastname\":\"KHOA_KT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111002\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"DHKT\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_KT\",\"usercode\":\"DHKT_0002\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ec642b2412b0000e70021a8\"},\"username\":\"BOMON_KTDOANHNGHIEP\",\"password\":\"123456\",\"lastname\":\"BOMON_KTDOANHNGHIEP\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111005\",\"userdate\":\"2020-05-05\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"KHOA_KT\",\"comid\":\"DHBK\",\"comdepartment\":\"KHOA_KT\",\"usercode\":\"DHKT_0005\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5ece3517b8d5570916d013f6\"},\"username\":\"SV5_BMVT\",\"password\":\"123\",\"lastname\":\"SV5_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111019\",\"userdate\":\"2020-05-14\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"GV_BMVT\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0019\",\"usertype\":\"USER_COM\"}\n{\"_id\":{\"$oid\":\"5eddf0a9b8d5570916dae6ff\"},\"username\":\"SV6_BMVT\",\"password\":\"123456\",\"lastname\":\"SV6_BMVT\",\"useremail\":\"[email protected]\",\"usertel\":\"0907111020\",\"userdate\":\"2020-06-08\",\"userstatus\":\"ACTIVE\",\"userparentid\":\"GV_BMVT\",\"comid\":\"DHBK\",\"comdepartment\":\"BOMON_VIENTHONG\",\"usercode\":\"DHBK_0019\",\"usertype\":\"USER_COM\"}\nDHBK nodevar descendants=[]\nvar stack=[];\nvar item = db.users.findOne({username:\"DHBK\"});\nstack.push(item);\nwhile (stack.length>0){\n var currentnode = stack.pop();\n var children = db.users.find({userparentid:currentnode.username});\n while(true === children.hasNext()) {\n var child = children.next();\n descendants.push(child.username);\n stack.push(child);\n }\n}\ndescendants.join(\",\")\nKHOA_DIEN,KHOA_XD,KHOA_CNTT,BOMON1_XD,BOMON2_XD,BOMON3_XD,BOMON_TUDONG,BOMON_VIENTHONG,BOMON_HETHONG,TRUONGKHOA_BMVT,PHOKHOA_BMVT,THUKY_BMVT,GV_BMVT,SV1_BMVT,SV2_BMVT,SV3_BMVT,SV4_BMVT,SV5_BMVT,SV6_BMVT\npackage main\n import (\n \"context\"\n \"fmt\"\n \"strings\"\n \"time\"\n \"go.mongodb.org/mongo-driver/bson\"\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n )\n func main() {\n GetAllChildOfNode(\"DHBK\")\n }\nfunc GetAllChildOfNode(node string) error {\n ctx, _ := context.WithTimeout(context.Background(), 10*time.Second)\n client, err := mongo.Connect(ctx, options.Client().ApplyURI(\"URI string\"))\n if err != nil {\n return err\n }\n defer client.Disconnect(ctx)\n database := client.Database(\"users\")\n users := database.Collection(\"users\")\n var descendants []string\n var stack []string\n err = users.FindOne(ctx, bson.M{\"username\": \"DHBK\"}).Decode(&stack)\n leng := len(stack)\n for leng > 0 {\n //I HaVE TROUBLE HERE\n currentnode := stack.\n }\n return nil\n }\npushpopwhile loop", "text": "I have data collection in MongoDB.My data was built in model tree and relationship between child node and parent node is property parentid .Here is my data architecture\nimage1700×969 64.2 KB\nAnd here is my sample dataNow I want to get all child node of specific parent node. 
For example, I want to get all child node of DHBK node .I have completed MongoDB shell to query this requirement.Here is my MongoDB shellIt worked and showed me correct result. Here is my output resultThen I write Go code to implement this MongoDB shell.Here is my codeBut I have trouble in implement push , pop method and while loop as MongoDB shell by use Go.\nThank you in advance.", "username": "Napoleon_Ponaparte" }, { "code": "DHBK nodeDHBKdb.users.aggregate([\n {\"$match\":{\"username\":\"DHBK\"}},\n {\"$graphLookup\":{\n \"from\":\"users\",\n \"startWith\":\"$username\", \n \"connectFromField\":\"username\", \n \"connectToField\":\"userparentid\", \n \"as\":\"children\"}}, \n {\"$project\":{\"_id\":0 ,\"result\":\"$children.username\"}}\n])\ncollection := client.Database(\"users\").Collection(\"users\")\n\n// Filter only for the highest parent node\nmatchStage := bson.D{\n\t{\"$match\", bson.D{\n\t\t{\"username\", \"DHBK\"},\n\t}},\n}\n\n// Traverse through the graph\ngraphLookupStage := bson.D{\n\t{\"$graphLookup\", bson.D{\n\t\t{\"from\", \"users\"},\n\t\t{\"startWith\", \"$username\"},\n\t\t{\"connectFromField\", \"username\"},\n\t\t{\"connectToField\", \"userparentid\"},\n\t\t{\"as\", \"children\"},\n\t}},\n}\n\n// Project only the required field\nprojectStage := bson.D{\n\t{\"$project\", bson.D{\n\t\t{\"_id\", 0},\n\t\t{\"result\", \"$children.username\"},\n\t}},\n}\n\npipeline := mongo.Pipeline{matchStage, graphLookupStage, projectStage}\ncursor, err := collection.Aggregate(context.TODO(), pipeline)\ndefer cursor.Close(context.TODO())\nif err != nil {\n panic(err)\n}\nfor cursor.Next(context.TODO()) {\n var doc bson.M\n\terr := cursor.Decode(&doc)\n\tif err != nil {\n\t panic(err)\n\t}\n\tfmt.Println(doc[\"result\"])\n}\n", "text": "Hi @Napoleon_Ponaparte,Now I want to get all child node of specific parent node. For example, I want to get all child node of DHBK node .You can optimise this by performing the operation as one database operation via Aggregation Pipeline. Especially with the use of $graphLookup stage to perform a recursive search on a collection. This would also reduce the network round trips between the application and the database server.Using the document examples posted, you can get the children of username DHBK as below:You can write this in Go using mongo-go-driver with an example as below:You may also find Model Tree Structures a useful reference to design various ways of tree data structures in MongoDB.Regards,\nWan.", "username": "wan" }, { "code": "collection.Aggregatecursor.Nextdoccursor.Allvar results []bson.M\nif err = cursor.All(ctx, &results); err != nil {\n panic(err)\n}\n", "text": "A couple of small edits to @wan’s great answer:", "username": "Divjot_Arora" }, { "code": "", "text": "Thank you for your help Mr @wan and @Divjot_Arora", "username": "Napoleon_Ponaparte" }, { "code": "", "text": "Thanks @Divjot_Arora, edited.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get all model tree child node in MongoDB with Go
2020-06-09T16:16:16.082Z
Get all model tree child node in MongoDB with Go
5,647
null
[ "graphql" ]
[ { "code": "", "text": "When mongdb realm creates the default graphql queries, is there a way to use an operator, such as $regex with the query? I cannot seem to get it to work.\nOne of the automatically generated query for movies is this:\nmovies(\nlimit: Int = 100\nsortBy: MovieSortByInput\nquery: MovieQueryInput\n): [Movie]!This works in the shell:\ndb.movies.find({“title”:{$regex:“girl”}})How would I do this with the above query?", "username": "Fred_Kufner" }, { "code": "", "text": "Hi Fred – You aren’t able to feed MongoDB syntax directly to the GraphQL API, but we do generate a set of resolvers to make querying simpler. However, we don’t generate anything for Regex at this point and recommend that you define a custom resolver for this.", "username": "Drew_DiPalma" }, { "code": "", "text": "I figured. Thanks Drew.", "username": "Fred_Kufner" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Graphql query with MongoDB operators
2020-06-10T21:57:27.700Z
Realm Graphql query with MongoDB operators
2,994
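A custom resolver of the kind Drew suggests is backed by a Realm function. The sketch below assumes the default "mongodb-atlas" service name and a sample movies collection, and the resolver would still need to be registered in the Realm UI with matching input/payload types:

```javascript
// Realm function body: regex title search exposed through a custom resolver.
exports = async function (input) {
  const movies = context.services
    .get("mongodb-atlas")
    .db("sample_mflix")
    .collection("movies");
  return movies.find({ title: { $regex: input, $options: "i" } }).toArray();
};
```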
null
[ "queries" ]
[ { "code": "{\n \"_id\" : ObjectId(\"5edfced6c677ea092ddf1f4a\"),\n \"email\" : \"[email protected]\",\n \"lastuserip\" : \"0:0:0:0:0:0:0:1\",\n \"userlaw\" : 1,\n \"validationhash\" : null,\n \"registered\" : ISODate(\"2020-06-09T18:03:02.259Z\"),\n \"_lc\" : ISODate(\"2020-06-11T15:17:49.986Z\"),\n \"password\" : \"b1d56015b4280e2a08c45943631635cd74d0e1e00f9db38a0e63c817dbd2816a5f7fd72a5b7b7baceaedc604b07b9a18be4eb61b4c4583f9681106ebec324a6a\",\n \"lastlogin\" : ISODate(\"2020-06-11T15:17:39.777Z\"),\n \"Name\" : {\n \"lastname\" : \"Bogensperger\",\n \"surname\" : \"Rupert\"\n },\n \"birthdate\" : ISODate(\"1978-09-29T23:00:00Z\"),\n \"Display\" : [\n {\n \"_id\" : ObjectId(\"5ee24b30c4b930419306be17\"),\n \"Size\" : {\n \"height\" : 0,\n \"lenght\" : 0,\n \"width\" : 0\n },\n \"type\" : 1\n }\n ],\n \"image\" : \"20200611171749984ff26a4252.png\"\n}\n{\n \"_id\" : ObjectId(\"5edfced6c677ea092ddf1f4a\"),\n \"email\" : \"[email protected]\",\n \"lastuserip\" : \"0:0:0:0:0:0:0:1\",\n \"userlaw\" : 1,\n \"validationhash\" : null,\n \"registered\" : ISODate(\"2020-06-09T18:03:02.259Z\"),\n \"_lc\" : ISODate(\"2020-06-11T15:17:49.986Z\"),\n \"password\" : \"b1d56015b4280e2a08c45943631635cd74d0e1e00f9db38a0e63c817dbd2816a5f7fd72a5b7b7baceaedc604b07b9a18be4eb61b4c4583f9681106ebec324a6a\",\n \"lastlogin\" : ISODate(\"2020-06-11T15:17:39.777Z\"),\n \"Name\" : {\n \"lastname\" : \"Bogensperger\",\n \"surname\" : \"Rupert\"\n },\n \"birthdate\" : ISODate(\"1978-09-29T23:00:00Z\"),\n \"Display\" : [\n {\n \"_id\" : ObjectId(\"5ee24b30c4b930419306be17\"),\n \"Size\" : {\n \"height\" : 0,\n \"lenght\" : 0,\n \"width\" : 0\n },\n \"Image\" : [{\"name\": \"test\"}]\n \"type\" : 1\n }\n ],\n \"image\" : \"20200611171749984ff26a4252.png\"\n}\n", "text": "I want to updateOne Document. I need 2 filters to find (\"_id\", And the “_id” in the Display) and set as shown below.From:To:", "username": "Rupert_Bogensperger" }, { "code": "```{\n “_id” : ObjectId(“5edfced6c677ea092ddf1f4a”),\n “email” : “[email protected]”,\n “lastuserip” : “0:0:0:0:0:0:0:1”,\n “userlaw” : 1,\n “validationhash” : null,\n “registered” : ISODate(“2020-06-09T18:03:02.259Z”),\n “_lc” : ISODate(“2020-06-11T15:17:49.986Z”),\n “password” : “b1d56015b4280e2a08c45943631635cd74d0e1e00f9db38a0e63c817dbd2816a5f7fd72a5b7b7baceaedc604b07b9a18be4eb61b4c4583f9681106ebec324a6a”,\n “lastlogin” : ISODate(“2020-06-11T15:17:39.777Z”),\n “Name” : {\n “lastname” : “Bogensperger”,\n “surname” : “Rupert”\n },\n “birthdate” : ISODate(“1978-09-29T23:00:00Z”),\n “Display” : [\n {\n “_id” : ObjectId(“5ee24b30c4b930419306be17”),\n “Size” : {\n “height” : 0,\n “lenght” : 0,\n “width” : 0\n },\n “type” : 1\n }\n ],\n “image” : “20200611171749984ff26a4252.png”\n}\n$", "text": "Hi @Rupert_Bogensperger and welcome to the community forums.A couple tips to make it easier for people to help you.As for helping you get to where you want to get to with your code, you want to look at the $ positional operator.", "username": "Doug_Duncan" } ]
I need help to query nested arrays
2020-06-11T16:34:36.350Z
I need help to query nested arrays
1,576
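Spelled out with the two _id values from the question, Doug's pointer to the positional $ operator becomes the following; the collection name is an assumption:

```javascript
// Match the document by _id and the array element by Display._id, then
// let the positional $ target the matched element.
db.users.updateOne(
  { _id: ObjectId("5edfced6c677ea092ddf1f4a"),
    "Display._id": ObjectId("5ee24b30c4b930419306be17") },
  { $set: { "Display.$.Image": [ { name: "test" } ] } }
);
```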
null
[]
[ { "code": "", "text": "I have been on the search for a solution to this problem for 5 years with MongoDB. I just watched all the new announcements and I got pretty pumped with all the awesome features, but nothing came close to solving this issue. I decided its time to put it here so maybe it can be put on your radar for future updates… Fingers crossed!The problem: We work in a very competitive industry. We give out quotes via our software with a random 8 digit quote number. We do this so our competitors can’t figure out how many quotes we give per day by getting a new quote every 24 hours and seeing how many where issued. These IDs have to be unique, never used before.The Challenge: Distribute a series of IDs in a Unique Random order in the most simple way possible.There are 3 solutions to this problem in MongoDB. All three suck.Solution 1:Pre-insert every ID into the Quotes collection as tiny records {_id: 55555555}, {_id: 55555556}, {_id: 55555557}, etc. Have logic to pick a random number, select the first unused record. For example random number: 55555554. Run query: {\"_id\": {$gte: 55555554}, “name_field”: {\"$exists\": false}}. Once a record is found, update it with the new quote data.This is not a good solution because 1. you have to pre-create 100000000 records. 2. each time you create a new quote you are doing an update in place where the size of the document gets much larger, so MongoDB has to move the document to the bottom of the collection on disk, which is really not good.Solution 2:Mine the new ID out of previously inserted quotes. You could use a do a series of DB queries to figure out a new ID by fetching a portion of data out of the collection and looking for an ID gap. You could fetch a range of 1000 records. Lets say 55555555-55556555 and count how many records are found. If you the answer is not 1000, then perform tons of queries figuring out what ID can be used.There are different ways you can do this idea, but the gist is multiple successive full loop query calls. Super painful. At least you don’t have to insert 100000000 first… progress?Solution 3:Maintain 2 different collections. One that stores the remaining IDs. Like: {_id: 55555555}, {_id: 55555556}, {_id: 55555557}, etc. The other that stores the Quotes. Picking a random number like: 55555554, you can run a query like: {\"_id\": {$gte: 55555554}}, pick a new ID record. Use this ID to create a new record in the Quotes collection, then remove it from the IDs collection. You could use a transaction to make this a bit safer.The problem with this is you are still pre-creating 100000000 records. If you do a big import into the Quotes collection from a csv or something, this IDs collection can fall out of date with the Quotes collection. And lastly you now have 2 collections to maintain. Every time you login to your DB you are reminded that you are a failure… Sadly this is the solution we have been using for the last 5-6 years. It sucks, I am always on the look for a better idea / solution!Please note, this is not just a MongoDB problem. Every DB software I know has this issue. Solution 3 is the winner for Relational DBs on the internet where this issue is discussed. I just keep hoping that MongoDB is going to pull though with a feature that can bail me out of my sadness.Does anyone have any ideas? Thanks for your support and for taking the time to read this insanely long post. Maybe a new feature??", "username": "Tyler_Jensen" }, { "code": "", "text": "Why not use an ObjectID? 
It's 12 digits instead of eight, but it provides uniqueness and provides no hints about how many previous invoices were generated.\nReally, anything time-based should work. An ObjectID starts with a timestamp, a timestamp is monotonic so it will always be unique per second, and any competitor generating a new ID will only get information about what time they generated the quote. The same is true of using an 8 digit timestamp.", "username": "Justin" }, { "code": "", "text": "An ObjectID is 12 bytes, 24 chars in length.\nMy client is adamant the ID be number only and as short as possible. Representing time would not work under these circumstances unfortunately, as after a short amount of time it will loop back over and we are stuck with these problems again.\nClients seem to have a knack for coming up with hard problems accidentally. lol", "username": "Tyler_Jensen" }, { "code": "", "text": "Is there any reason why quote ids have to be unique across all customers?\nYou could keep a quote count per customer and simply increase that number when a customer requests a new quote. Ids would be unique for a given customer. This way your competitors could not figure out the number of quotes you give. The quote collection will have a compound index quote-id:1,customer-id:1.", "username": "steevej" }, { "code": "", "text": "The customer service department wants to ask the customer for a single number to pull up their quotes. Asking for 2 numbers increases complexity and confusion. If the customer number is incrementing then we have a similar problem, as you can figure out how much new business there is per day by getting 2 quotes 24 hours apart.", "username": "Tyler_Jensen" }, { "code": "", "text": "Does this mean a customer does not have to authenticate or identify himself? That's a high security risk, since if you get that single number you're free to call customer service and get access to the quote.\nThe customer service department wants to ask the customer for a single number to pull up their quotes.\nI assumed that once a customer has a customer number, then it is fixed. To get a new customer number the competition will have to generate a new identity every time. Might be hard to do.\nIf the customer number is incrementing then we have a similar problem as you can figure out how much new business there is per day by getting 2 quotes 24 hours apart.", "username": "steevej" }, { "code": "", "text": "Once the customer service rep pulls up the quote they are asked some questions about the quote to verify that this is the customer's quote. This is no different than an order number at an eCommerce provider. Same policy.", "username": "Tyler_Jensen" }, { "code": "", "text": "This seems to be a contradiction.\nThe customer service department wants to ask the customer for a single number\nand\nthey are asked some questions about the quote to verify that this is the customer's quote\nBut who are we to argue with the customer service department. B-) I hope you will be able to satisfy them.", "username": "steevej" }, { "code": "", "text": "This is not a good solution because 1. you have to pre-create 100000000 records. 2. each time you create a new quote you are doing an update in place where the size of the document gets much larger, so MongoDB has to move the document to the bottom of the collection on disk, which is really not good.\nI think you are underestimating the elegance of this solution.\nA: Your requirements have created an artificially limited resource. 
In lieu of changing this too.\nB: Your second point about this data moving on disk. Where do you think the data goes with solution 3? Also, since the introduction of WiredTiger, I believe this pattern is not as expensive as you think.\nC: I would make the find of an unallocated quote number grab n numbers. So you can quickly try with another quote id if there is a violation on insert (without another find).", "username": "chris" } ]
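For reference, the two-collection approach (Solution 3) can be made atomic with a multi-document transaction so an ID is never handed out twice. Below is a minimal mongo-shell sketch; the collection names `ids` and `quotes`, the database name, and the 8-digit range are illustrative assumptions, not details from the original posts, and it requires a replica set (MongoDB 4.0+):

```javascript
// Assumes an `ids` collection pre-seeded with { _id: 10000000 } .. { _id: 99999999 }
// and a `quotes` collection for the real documents.
const session = db.getMongo().startSession();
const ids = session.getDatabase("test").ids;
const quotes = session.getDatabase("test").quotes;

session.startTransaction();
try {
  // Pick a random starting point, then take the first free ID at or above it.
  const start = Math.floor(Math.random() * 90000000) + 10000000;
  let free = ids.findOne({ _id: { $gte: start } });
  if (free === null) {
    free = ids.findOne({}); // ran past the end; take the lowest remaining ID
  }
  quotes.insertOne({ _id: free._id, name: "new quote" });
  ids.deleteOne({ _id: free._id }); // claim the ID so it cannot be reused
  session.commitTransaction();
} catch (e) {
  // A duplicate-key error here means another session claimed the same ID
  // concurrently; abort and retry with a new random starting point.
  session.abortTransaction();
  throw e;
}
```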
Distribute a series of IDs in a Unique Random order in the most simple way possible
2020-06-09T23:08:23.986Z
Distribute a series of IDs in a Unique Random order in the most simple way possible
3,676
null
[]
[ { "code": "", "text": "Hello all,I am trying to link Github to automatically deploy functions in realm using GitHub as opposed to coding in my browser following the instructions here https://docs.mongodb.com/realm/deploy/deploy-automatically-with-github/But, it throws a 404 error when I click ‘install realm on GitHub’. Has anyone else had this issue? Is there another easy way for me to switch to local development in the meantime? I primarily want to develop functions.Thank you!", "username": "Kadin_Donohoe" }, { "code": "", "text": "Update: was able to correct the error by changing ‘stitch’ to ‘realm’ in the GitHub redirect URL", "username": "Kadin_Donohoe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm link Github 404 error
2020-06-11T00:23:14.963Z
Realm link Github 404 error
1,725
null
[]
[ { "code": "StringList", "text": "Hey, I’m wondering if it’s possible to build a rich text editor that can be edited by multiple users simultaneously (similar to Google Docs).I believe if you would simply use the String type in Realm, the last update on the string would always win and you could lose text that was entered by the user.So I thought about using the List type in Realm and representing each character as a list item. But I’m not sure if Realm is able to resolve all conflicts as you would expect it in a text editor. I couldn’t find much about the Operational Transformation algorithm that Realm is using. It was only mentioned once in the documentation at A simple example - Realm Sync (LEGACY).Has anyone tried this before? Can Realm be used for this kind of app? Do you have any other ideas on how to implement this?Cheers!", "username": "Bruno_Joseph" }, { "code": "", "text": "@Bruno_Joseph Google Doc’s implementation of OT is optimized for substring manipulations of a full-text document, they have built a full application around this implementation whereas Realm’s conflict resolution is more general purpose.You are correct in your assumption that a String type would merge with the Last update so the substring edits would be lost. Adding a per character List would needlessly tax the OT algorithm and likely result in bizarre merges that might not merge to real words.", "username": "Ian_Ward" }, { "code": "", "text": "@Bruno_JosephWe crafted an app very similar to this for internal use/communication. One slight difference is that when a user is ‘editing’ the other users can see live changes but cannot ‘edit’ at the same time.While it worked, there were a number of concurrency issues (@Ian_Ward comment is on point for sure) and one big issue is the lack of record/row locking in Realm in general. A user can just randomly delete the item being worked on and other users didn’t know about it until after and that obviously created an issue.We added a feature to ‘lock’ the item being worked on and the other users would observe that ‘lock’ and not allow them to edit. The problem with that was Realm doesn’t offer user presence so if a user was disconnected from the internet, the item would stay ‘locked’. Soft deletes and edits kinda worked but that added a whole other set of code to manage that.A while back, we submitted a feature request for having some kind of locking feature that would really be ideal for our use case to keep data protected and not be overwritten by changes made by others<!---\n\nQuestions: If you have questions about how to use Realm, please ask on\n…\nStackOverflow: http://stackoverflow.com/questions/ask?tags=realm\nWe monitor the `realm` tag.\n\nFeature Request: Just fill in the first two sections below.\n\nBugs: To help you as fast as possible with an issue please describe your issue\nand the steps you have taken to reproduce it in as much detail as possible.\n\n-Thanks for helping us help you! :-)\n-->\n\n## Goals\n\nHave the ability to lock objects/records.\n\n> The isolation part of the ACID test is addressed by locking of objects (in most cases one or more rows of data) until the associated transactions are completed. The locking of objects stops all other processes from being able to change these objects until the lock is removed.\n\nCurrently in a multi-user situation, a Realm object can be viewed by one user and deleted by another. 
There is no mechanic that prevents deleting an object that is 'in use' or even knowing what the status of that object is before attempting a transaction.\n\n## Expected Results\n\nI would imagine the API being something like this:\n\n```\nlet personResults = realm.objects(PersonClass.self).filter(\"is_available == true\")\npersonResults.setLock(withStatus: .readOnly) //sets all person objects that are available to readOnly.\n```\n\nIf another user wants to delete those person objects, they could obtain the lock status before attempting delete:\n\n```\nlet lockStatus = personResults.lockStatus //or realm_object.lockStatus\n\nswitch lockStatus {\n case .readOnly:\n case .writeOnly:\n case .noLock\n}\n```\n\nOr, if the object is readOnly, the delete attempt would return an error instead of / in addition to being able to obtain its status. I suggest the proactive approach of getting the status *before* deleting instead of reactive.\n\nRealizing the server is not updated until a write transaction completes, the status could be done with a write, or perhaps the object function has its own write.\n\nSome have suggested manually adding an isLocked property to an object to obtain similar functionality, but that fails if, for example, the client disconnects; there would be no way to 'reset' that lock status.\n\nThere would be additional benefit to adding an event observer to the server or enabling Server Functions that could take action when, say, a user disconnects. Similar to the [onDisconnect](https://firebase.google.com/docs/database/ios/offline-capabilities#how-ondisconnect-works) function in Firebase.\n\n…but we were essentially told we were crazy for even suggesting such a thing as "record locking". lol.\nSo, for that specific app, we leveraged Firebase which, because of user presence, gave us the ability to 'lock' records and prevent changes, and if a user d/c's the Firebase server automatically 'unlocks' that data.\nWe think it's a great idea in general and there would be a market for a non-web based multi-user editor, so I would encourage you to take a whack at it and see if you can come up with a different approach.", "username": "Jay" }, { "code": "onDisconnect", "text": "The "user presence" is a very useful feature for collaborative apps even when locking is not needed. MongoDB Atlas supports some kind of triggers. Maybe it would be enough to add some onDisconnect trigger type to MongoDB Atlas?", "username": "Ondrej_Medek" }, { "code": "", "text": "@Ondrej_Medek That's exactly what we attempted to do: leverage MongoDB Atlas (or Stitch) for this purpose.\nOur apps are multi-device, similar to Notes, Reminders or even Messages (iMessage)\nfrom Apple… in other words, a user can start a document on their iPhone, continue working with it on their iPad and then, when they get back to the office, work on it on their Mac (macOS).\nUnfortunately MongoDB does not support the Macintosh (macOS) platform (with their Swift API) and doesn't have any intention of adding it (according to their roadmap per engineering), so we had to abandon that plan as well after several months of effort.", "username": "Jay" } ]
Text Editor with Real Time Collaboration powered by Realm
2020-05-24T13:33:45.515Z
Text Editor with Real Time Collaboration powered by Realm
3,712
null
[]
[ { "code": "", "text": "Anybody know of good mongo-admin-script-repository? My main interest would be to check performance/status/monitoring/issues, but I’d welcome other stats/tools as well.Thanks!\n-Mike", "username": "Mike_Sachs" }, { "code": "", "text": "If you have more of these, please reply.\nHere are 2 good scripts I found… so far!\nThey are both written for ‘Nagios’, but you can execute them in python/bash to get good results.\nAnd NO you don’t need Nagios to enjoy them!#1 - check_mongodb.pycheck_mongodb.py -H 127.0.0.1 -u admuser -p mypw -A connections -W 60 -C 80\ncheck_mongodb.py -H 127.0.0.1 -u admuser -p mypw -P 27017 -A memory -W 80 -C 90\ncheck_mongodb.py -H 127.0.0.1 -u admuser -p mypw -P 27017 -A replication_lag -W 80 -C 90\ncheck_mongodb.py -H 127.0.0.1 -u admuser -p mypw -P 27017 -A replset_state -W 80 -C 90A Nagios plugin to check the status of MongoDB. Contribute to mzupan/nagios-plugin-mongodb development by creating an account on GitHub.\n.\n.\n.#2 check_mongodbThis one will execute a query, and see if mongo is responsive - No actual results, but still useful!check_mongodb -h 127.0.0.1 -u admuser -p 1234 -d local --collection test -q “db.test.findOne()”Check a MongoDB Database. Contribute to itssoke/nagios-check_mongodb development by creating an account on GitHub.\n.\n.Looking forward to more scripts, if you have them!!!Thanks\n-Mike", "username": "Mike_Sachs" } ]
Looking for a good Admin-Script-Repo
2020-06-09T15:58:16.293Z
Looking for a good Admin-Script-Repo
1,827
null
[]
[ { "code": "", "text": "Hi all! I’m Lieke from the Emerging Developer team, and I’m excited to share with you that we’ve recently launched our MongoDB for Academia program.This program is for educators who are teaching MongoDB and would love some help, or who would like to start teaching MongoDB and would love some help We’re here for you!What we offer:If you’re interested in joining the program, please check out our brand-new website: educators.mongodb.com.If you have any questions and/or feedback, I’d be love to hear from you in this thread or by email [email protected]. If you’re a student, MongoDB is also part of the GitHub Student Developer Pack! Read more on students.mongodb.com ", "username": "Lieke_Boon" }, { "code": "", "text": "", "username": "Stennie_X" } ]
We've launched MongoDB for Academia!
2020-06-11T11:36:55.709Z
We&rsquo;ve launched MongoDB for Academia!
3,701
null
[ "react-js" ]
[ { "code": "import React, { Component, Fragment } from 'react'\nimport { Link } from 'react-router-dom';\nimport axios from 'axios';\nimport { NotificationContainer, NotificationManager } from 'react-notifications';\n\n\nclass Contact extends Component {\n constructor(props) {\n \n super(props)\n this.state = {\n fields: {},\n errors: {}\n }\n }\n\n handleValidation(){\n let fields = this.state.fields;\n let errors = {};\n let formIsValid = true;\n\n \n if(!fields[\"fullname\"]){\n formIsValid = false;\n errors[\"fullname\"] = \"Cannot be empty\";\n }\n\n if(typeof fields[\"fullname\"] !== \"undefined\"){\n if(!fields[\"fullname\"].match(/^[a-zA-Z]+$/)){\n formIsValid = false;\n errors[\"fullname\"] = \"Only letters\";\n } \n }\n\n \n if(!fields[\"email\"]){\n formIsValid = false;\n errors[\"email\"] = \"Cannot be empty\";\n }\n\n if(typeof fields[\"email\"] !== \"undefined\"){\n let lastAtPos = fields[\"email\"].lastIndexOf('@');\n let lastDotPos = fields[\"email\"].lastIndexOf('.');\n\n if (!(lastAtPos < lastDotPos && lastAtPos > 0 && fields[\"email\"].indexOf('@@') == -1 && lastDotPos > 2 && (fields[\"email\"].length - lastDotPos) > 2)) {\n formIsValid = false;\n errors[\"email\"] = \"Email is not valid\";\n }\n } \n\n if(!fields[\"subject\"]){\n formIsValid = false;\n errors[\"subject\"] = \"Cannot be empty\";\n }\n\n if(typeof fields[\"subject\"] !== \"undefined\"){\n if(!fields[\"subject\"].match(/^[a-zA-Z]+$/)){\n formIsValid = false;\n errors[\"subject\"] = \"Only letters\";\n } \n }\n\n\n if(!fields[\"message\"]){\n formIsValid = false;\n errors[\"message\"] = \"Cannot be empty\";\n }\n\n if(typeof fields[\"message\"] !== \"undefined\"){\n if(!fields[\"message\"].match(/^[a-zA-Z]+$/)){\n formIsValid = false;\n errors[\"message\"] = \"Only letters\";\n } \n }\n\n this.setState({errors: errors});\n return formIsValid;\n }\n\n contactSubmit(e){\n e.preventDefault();\n\n const contactObject = {\n fullname: this.state.fields[\"fullname\"],\n email: this.state.fields[\"email\"],\n subject: this.state.fields[\"subject\"],\n message: this.state.fields[\"message\"]\n };\n\n axios.post('/contact/create-contact/', contactObject)\n .then(res => {\n \n if(this.handleValidation()){\n NotificationManager.success('Merci de nous faire confiance, Nous revenons vers vous au plutôt', 'Successful!', 2000);\n }else{\n NotificationManager.error('error form!', 'errors');\n }\n })\n };\n\n handleChange(field, e){ \n let fields = this.state.fields;\n fields[field] = e.target.value; \n this.setState({fields});\n }\n \n \n\n render() {\n return (\n <Fragment>\n\n <section className=\"saas_home_area\">\n <div className=\"banner_top\">\n <div className=\"container\">\n <div className=\"row\">\n <div className=\"col-md-12 text-center\">\n <h2 className=\"f_p f_size_40 l_height60 wow fadeInUp\" data-wow-delay=\"0.3s\" style={{fontSize:'30px'}}>Bonjour, comment <span className=\"f_700\">peut-on vous aider</span><br /><span className=\"f_700\">aujourd'hui ?</span></h2>\n \n \n </div>\n </div>\n <div className=\"saas_home_img wow fadeInUp\" data-wow-delay=\"0.8s\">\n <img src={require('../../images/nicole.png')} alt=\"\" />\n </div>\n </div>\n </div>\n </section>\n \n <section className=\"contact_info_area sec_pad bg_color\">\n <div className=\"container\">\n <div className=\"row\">\n <div className=\"col-lg-3 pr-0\">\n <div className=\"contact_info_item\">\n <h6 className=\"f_p f_size_20 t_color3 f_500 mb_20\">Office Address</h6>\n <p className=\"f_400 f_size_15\">Complexe Beac Yaoundé, Cameroun</p>\n </div>\n 
<div className=\"contact_info_item\">\n <h6 className=\"f_p f_size_20 t_color3 f_500 mb_20\">Contact Info</h6>\n <p className=\"f_400 f_size_15\"><span className=\"f_400 t_color3\">Phone:</span> <a href=\"tel:698780156\">(+237) 698 78 01 56</a></p>\n <p className=\"f_400 f_size_15\"><span className=\"f_400 t_color3\">Fax:</span> <a href=\"tel:698780156\">(+237) 698 78 01 56 </a></p>\n <p className=\"f_400 f_size_15\"><span className=\"f_400 t_color3\">Email:</span> <a href=\"mailto:[email protected]\">[email protected]</a></p>\n </div>\n </div>\n <div className=\"col-lg-8 offset-lg-1\">\n <div className=\"contact_form\">\n <form onSubmit= {this.contactSubmit.bind(this)} className=\"contact_form_box\" method=\"post\" id=\"contactForm\" novalidate=\"novalidate\">\n <div className=\"row\">\n <div className=\"col-lg-6\">\n <div className=\"form-group text_box\">\n <input type=\"text\" onChange={this.handleChange.bind(this, \"fullname\")} value={this.state.fields[\"fullname\"]} placeholder=\"Your Name\" />\n <span style={{color: \"red\"}}>{this.state.errors[\"fullname\"]}</span>\n </div>\n </div>\n <div className=\"col-lg-6\">\n <div className=\"form-group text_box\">\n <input type=\"text\" onChange={this.handleChange.bind(this, \"email\")} value={this.state.fields[\"email\"]} name=\"email\" id=\"email\" placeholder=\"Your Email\" require=\"require\" />\n <span style={{color: \"red\"}}>{this.state.errors[\"email\"]}</span>\n </div>\n </div>\n <div className=\"col-lg-12\">\n <div className=\"form-group text_box\">\n <input type=\"text\" onChange={this.handleChange.bind(this, \"subject\")} value={this.state.fields[\"subject\"]} id=\"subject\" name=\"subject\" placeholder=\"Subject\" />\n <span style={{color: \"red\"}}>{this.state.errors[\"subject\"]}</span>\n </div>\n </div>\n <div className=\"col-lg-12\">\n <div className=\"form-group text_box\">\n <textarea onChange={this.handleChange.bind(this, \"message\")} value={this.state.fields[\"message\"]} name=\"message\" id=\"message\" cols=\"30\" rows=\"10\" placeholder=\"Enter Your Message . . .\"></textarea>\n <span style={{color: \"red\"}}>{this.state.errors[\"message\"]}</span>\n </div>\n </div>\n </div>\n <button type=\"submit\" className=\"btn_three\">Send Message</button>\n </form>\n </div>\n </div>\n </div>\n </div>\n </section>\n \n <NotificationContainer/>\n\n </Fragment>\n\n )\n }\n }\n\nexport default Contact;\n", "text": "hi I have a problem with the control of the form and I need help to optimise it.This is code 1 → https://github.com/patbi/reactjs-form-input-validation/blob/master/reactjs-form-input-validation.jsThis is code 2 → ", "username": "Patrick_Biyaga" }, { "code": "", "text": "Hi @Patrick_Biyaga,Your question appears to be about using React and form input validation, but doesn’t currently have any direct relation to MongoDB.While you may be able to get advice from some of the experienced developers in the MongoDB community, you’re likely to get faster advice on general React programming questions from Stack Overflow.You should also provide more information on the problem you are trying to solve: what isn’t working with the current approach, what are you trying to optimise, and what have you tried so far.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for your feedback @Stennie_X , I’m using approach 2 at the moment and I managed to send the data to the database hosted on MongoDB Atlas. 
except that I have a problem with "empty form submission": there is good control of the fields, but an empty object still gets submitted and sent. After that, I would also like to empty the form fields, as with approach 1. That is, in short, what I'm doing at the moment.", "username": "Patrick_Biyaga" }, { "code": "contactSubmit(e){\n e.preventDefault();\n let fields = this.state.fields;\n if(this.handleValidation(this.state.errors)){\n axios.post('/contact/create-contact/', fields)\n .then(res => {\n this.setState({fields});\n })\n NotificationManager.success('Merci de nous faire confiance, Nous revenons vers vous au plutôt', 'Successful!', 2000);\n }else{\n NotificationManager.error('error form!', 'errors'); \n } \n };\n
React js validation form
2020-06-10T20:52:56.163Z
React js validation form
5,436
null
[ "stitch" ]
[ { "code": "", "text": "I’m using mongodb-stitch-browser-sdk 4.8.0.\nI’m simply trying to import Stitch using this code…import { Stitch } from ‘mongodb-stitch-browser-sdk’;…and I get this error:searchParams: ‘URLSearchParams’ in self,\nReferenceError: self is not definedSo this means the Stitch module won’t even load.\nI’ve already tried using older versions of mongodb-stitch-browser-sdk.\nSame error keeps coming up.Are there any workarounds for this?\nThanks.", "username": "Jonathan_Gautier" }, { "code": "nodejs", "text": "Hi @Jonathan_Gautier, welcome!Are you importing this for the browser or are you executing this via node ? If you’re trying to build a server-side client application, try MongoDB Realm Node.js SDK instead.If you’re still encountering this issue, could you provide steps and minimal reproducible example js ?Regards,\nWan.", "username": "wan" } ]
ReferenceError: self is not defined
2020-06-04T18:31:40.232Z
ReferenceError: self is not defined
4,943
https://www.mongodb.com/…8_2_1024x694.png
[ "mongodb-shell" ]
[ { "code": "mongoshmongo - es - eɪtʃ$ brew tap mongodb/brew\n$ brew install mongosh\n", "text": "Today we introduced the first beta of the new MongoDB Shell (mongosh - mongo - es - eɪtʃ), a shell for MongoDB with a modern user experience that will grow in functionality along with the MongoDB data platform.\nimage1270×861 63.5 KB\nYou can get the new shell from the MongoDB Download Center or if you are on macOS install it with Homebrew:Read about it on the MongoDB Blog, try it out, and let us know what you think.To know more about the MongoDB Shell and see demos of if, you can join MongoDB.live and join the “MongoDB Tools Everywhere” session.", "username": "Massimiliano_Marcon" }, { "code": ".pretty()", "text": "Okay, I dont have to type .pretty() for the queries with larger document outputs - (it is pretty printing on its own). Also, noticed the MongoDB green in the syntax highlighting.", "username": "Prasad_Saya" }, { "code": "", "text": "What branch of the repo is this if we want to clone directly, please?", "username": "Jack_Woehr" }, { "code": "", "text": "Hooray!,Looking forward to taking it out for a spin. I see from the .live keynote this is integrated in Compass and Jetbrainz IDEs!", "username": "chris" }, { "code": "master", "text": "Hi @Jack_Woehr,What branch of the repo is this if we want to clone directly, please?You can find the code on GitHub repository: GitHub - mongodb-js/mongosh: The MongoDB Shell\nFrom a brief look, the master branch is sync’d with the latest tag release which is version 0.0.5.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Builds and runs, thanks, @wan\nUbuntu 18.04\nUbuntu 20.04\nFedora 32 Server", "username": "Jack_Woehr" }, { "code": "mongoshTypeError: db.version is not a function", "text": "Do we have full documentation to mongosh ?\nI just tried db.version() in mongosh and got TypeError: db.version is not a function\nSo I think I need some help ", "username": "Jack_Woehr" }, { "code": "mongoshmongoshmongoshmongoTypeError: db.version is not a functionmongo()> db.version\n function() {\n return this.serverBuildInfo().version;\n }\nserverBuildInfomongoshdb.runCommand( { buildInfo: 1 } ).version\n", "text": "Hi @Jack_Woehr,Do we have full documentation to mongosh ?The documentation for mongosh can be found on https://docs.mongodb.com/mongodb-shell/\nCurrently mongosh (beta) supports a subset of the legacy mongo shell commands. Extending the MongoDB Shell API coverage is an ongoing effort.I just tried db.version() in mongosh and got TypeError: db.version is not a functionThis particular method is not currently covered in the documentation unfortunately.There are a number of convenience wrappers/methods that only exists in the legacy mongo shell. A general work around is to find out what the wrapper is executing. You can execute the method without invoking it .i.e. minus the () for example:Then we can find out under mongo shell methods documentation what does serverBuildInfo does. i.e. db.serverBuildInfo(). With this knowledge we can then use it in mongosh:If you have additional questions, it would be helpful if you can open a new topic discussion. This would help others with similar issues to find relevant topic easily.Kind regards,\nWan.", "username": "wan" } ]
Introducing the new MongoDB Shell
2020-06-08T15:53:58.193Z
Introducing the new MongoDB Shell
3,736
null
[ "dot-net" ]
[ { "code": "", "text": "Is there an official road map or similar for how long drivers (in particular the C# driver) will support a certain protocol version? Was thinking of 3.2 in particular.", "username": "Daniel_Wertheim" }, { "code": "OP_MSG", "text": "Welcome to the community @Daniel_Wertheim!We currently don’t have a standard timeline on driver support for End-of-Life (EOL) server versions, but the typical period is significantly longer than the server EOL. For example, the current C# / .NET driver is still compatible as far back as MongoDB 2.6. MongoDB 2.6 Server was first released in March, 2014 and reached end of life in October, 2016.Since MongoDB 3.2 Server reached end of life in September, 2018 I would definitely plan an upgrade to a newer server version in the near future. MongoDB 3.6 is currently the oldest non-EOL production release series. There have so far been 5 major server releases series since MongoDB 3.2, and changes to the wire protocol may also affect server logging and troubleshooting if you are using an older driver. For example, MongoDB 3.6 drivers uses a new OP_MSG format which subsumes several opcodes used by older versions of the wire protocol.Supported server versions may also vary by driver. For example, the new MongoDB Rust Driver only supports MongoDB 3.6+ since older server versions were already EOL before the Rust Driver GA release this week.Was thinking of 3.2 in particular.Can you elaborate on this requirement? If upgrading your server isn’t practical in the near term you can always continue to use an older driver version, but I would generally expect testing of drivers, tools, and applications against EOL server releases to decline over time.There are also features in newer driver & server pairings that you’ll be missing out on, such as retryable writes, logical sessions, and transactions.Regards,\nStennie", "username": "Stennie_X" } ]
Wire protocol support any road map about deprecation of certain versions?
2020-06-10T15:27:10.020Z
Wire protocol support any road map about deprecation of certain versions?
1,948
null
[ "installation" ]
[ { "code": "", "text": "I tried to install MongoDB on Windows 10 and it didn’t create bin folder. I used this version to install: mongodb-win32-x86_64-2012plus-4.2.7-signed.msiIs correct?", "username": "Paulo_Correa_da_Silv" }, { "code": "", "text": "Welcome to the communityWhich version you are trying to install?\nEnterprise or Community?\nbin dir should be in the default locationC:\\Program Files\\MongoDB\\Server\\4.2\\binDid you specify any other location while install?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Community versionYes, i tried to change location to install and not default.", "username": "Paulo_Correa_da_Silv" }, { "code": "", "text": "So search in your installed path/location\nIt should be there similar to default path\nLocate MongoDB first then search in sub dirs", "username": "Ramachandra_Tummala" } ]
Problems installing MongoDB on Windows 10
2020-06-10T15:27:16.781Z
Problems installing MongoDB on Windows 10
2,049
null
[ "charts" ]
[ { "code": "", "text": "Hi there,\nCurrently, there is a requirement to have a pdf report download option for Mongodb charts (reports). Is there any option or workaround available within Mongodb atlas.\nAny inputs would be helpful.Thanks,", "username": "Praveen_Elumalai" }, { "code": "", "text": "Hi @Praveen_Elumalai, welcome!Is there any option or workaround available within Mongodb atlas.A work around is to right-click a chart and save as an image as PNG (with transparent background). This feature is built-in into supported browsers. You can then either convert this to PDF, or utilise it into a document that can be imported into a PDF.There is a feature tracking for this export ability on feedback.mongodb.com - 923524. Feel free to vote and/or comment with more information on the feature tracker.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thanks @wan for your comments. I will vote for the feature. Business users will not like it to do these additional things if the current old system based on Google data studio got this feature ", "username": "Praveen_Elumalai" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Charts PDF reports
2020-06-10T12:16:23.511Z
MongoDB Charts PDF reports
4,677
null
[ "connecting" ]
[ { "code": "2020-06-10T10:13:15.727+0530 I CONTROL [initandlisten] git version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212\n2020-06-10T10:13:15.727+0530 I CONTROL [initandlisten] allocator: tcmalloc\n2020-06-10T10:13:15.728+0530 I CONTROL [initandlisten] modules: enterprise\n2020-06-10T10:13:15.728+0530 I CONTROL [initandlisten] build environment:\n2020-06-10T10:13:15.728+0530 I CONTROL [initandlisten] distmod: windows-64\n2020-06-10T10:13:15.728+0530 I CONTROL [initandlisten] distarch: x86_64\n2020-06-10T10:13:15.728+0530 I CONTROL [initandlisten] target_arch: x86_64\n2020-06-10T10:13:15.728+0530 I CONTROL [initandlisten] options: {}\n2020-06-10T10:13:15.730+0530 I STORAGE [initandlisten] Detected data files in C:\\data\\db\\ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2020-06-10T10:13:15.731+0530 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=5580M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],\n2020-06-10T10:13:15.885+0530 I STORAGE [initandlisten] WiredTiger message [1591764195:884567][10776:140706903839984], txn-recover: Recovering log 1 through 2\n2020-06-10T10:13:15.975+0530 I STORAGE [initandlisten] WiredTiger message [1591764195:974357][10776:140706903839984], txn-recover: Recovering log 2 through 2\n2020-06-10T10:13:16.127+0530 I STORAGE [initandlisten] WiredTiger message [1591764196:126928][10776:140706903839984], txn-recover: Main recovery loop: starting at 1/24704 to 2/256\n2020-06-10T10:13:16.392+0530 I STORAGE [initandlisten] WiredTiger message [1591764196:392214][10776:140706903839984], txn-recover: Recovering log 1 through 2\n2020-06-10T10:13:16.516+0530 I STORAGE [initandlisten] WiredTiger message [1591764196:515672][10776:140706903839984], txn-recover: Recovering log 2 through 2\n2020-06-10T10:13:16.573+0530 I STORAGE [initandlisten] WiredTiger message [1591764196:573567][10776:140706903839984], txn-recover: Set global recovery timestamp: (0, 0)\n2020-06-10T10:13:16.975+0530 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)\n2020-06-10T10:13:16.981+0530 I STORAGE [initandlisten] Timestamp monitor starting\n2020-06-10T10:13:17.118+0530 I CONTROL [initandlisten]\n2020-06-10T10:13:17.118+0530 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2020-06-10T10:13:17.119+0530 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2020-06-10T10:13:17.119+0530 I CONTROL [initandlisten]\n2020-06-10T10:13:17.119+0530 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.\n2020-06-10T10:13:17.122+0530 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.\n2020-06-10T10:13:17.123+0530 I CONTROL [initandlisten] ** Start the server with --bind_ip <address> to specify which IP\n2020-06-10T10:13:17.124+0530 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to\n2020-06-10T10:13:17.124+0530 I CONTROL [initandlisten] ** bind to all interfaces. 
If this behavior is desired, start the\n2020-06-10T10:13:17.125+0530 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.\n2020-06-10T10:13:17.125+0530 I CONTROL [initandlisten]\n2020-06-10T10:13:17.129+0530 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>\n2020-06-10T10:13:17.131+0530 I STORAGE [initandlisten] Flow Control is enabled on this deployment.\n2020-06-10T10:13:17.131+0530 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>\n2020-06-10T10:13:17.132+0530 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>\n2020-06-10T10:13:17.134+0530 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>\n2020-06-10T10:13:17.642+0530 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/data/db/diagnostic.data'\n2020-06-10T10:13:17.645+0530 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>\n2020-06-10T10:13:17.646+0530 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>\n2020-06-10T10:13:17.649+0530 I NETWORK [listener] Listening on 127.0.0.1\n2020-06-10T10:13:17.652+0530 I NETWORK [listener] waiting for connections on port 27017\n2020-06-10T10:13:18.002+0530 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>\n", "text": "Unable to start the mongod server; cmd is showing the following.", "username": "Charan_Narukulla" }, { "code": "2020-06-10T10:13:17.649+0530 I NETWORK [listener] Listening on 127.0.0.1\n2020-06-10T10:13:17.652+0530 I NETWORK [listener] waiting for connections on port 27017\n", "text": "The following indicates that the server is ready and waiting for connections.\nPlease post a screenshot of the issue showing you cannot connect.", "username": "steevej" }, { "code": "** WARNING: This server is bound to localhost.mongod", "text": "Hi @Charan_Narukulla and welcome to the community forums.\n** WARNING: This server is bound to localhost.\nNote that this line means that you can only access the MongoDB server while on the machine that is running the mongod process. Are you trying to access it from a remote machine by chance?", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
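As the startup log itself hints, remote access requires binding additional interfaces. A sketch of the command line follows; the second address is a placeholder for the machine's own LAN IP (not a value from the thread), and since the same log warns that access control is off, --auth is worth enabling before exposing the server:

```sh
mongod --port 27017 --dbpath C:\data\db --auth --bind_ip 127.0.0.1,192.168.1.50
```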
Unable to connect to mongod on Windows 10
2020-06-10T04:50:46.674Z
Unable to connect to mongod on Windows 10
2,544
null
[ "licensing" ]
[ { "code": "", "text": "Hi,We are planning to use MongoDB community/free version for our commercial SAAS based web application, is there any restriction to use it ? what kind of License will be applicable for us, is it GPL, AGPL, SSPL ?I would appreciate if anyone can clear my doubt or suggest proper place to ask this question.Thanks,\nParesh Modi", "username": "Paresh_Modi" }, { "code": "", "text": "Welcome to the community @Paresh_Modi!The license for the MongoDB Community database server is the Server Side Public License (SSPL).For more information please see the Server Side Public License FAQ.Commercial licenses are also available with MongoDB Enterprise Advanced.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDb Community version License related Query
2020-06-10T15:27:05.744Z
MongoDb Community version License related Query
3,193
null
[ "php" ]
[ { "code": "[\n {\n \"ProcessId\": 448,\n \"ProcessName\": \"wininit.exe\",\n \"Path\": \"C:\\\\Windows\\\\system32\\\\wininit.exe\",\n \"CommandLine\": \"wininit.exe\"\n },\n {\n \"ProcessId\": 456,\n \"ProcessName\": \"winlogon.exe\",\n \"Path\": \"C:\\\\Windows\\\\system32\\\\winlogon.exe\",\n \"CommandLine\": \"winlogon.exe\"\n },\n {\n \"ProcessId\": 524,\n \"ProcessName\": \"lsass.exe\",\n \"Path\": \"C:\\\\Windows\\\\system32\\\\lsass.exe\",\n \"CommandLine\": \"C:\\\\Windows\\\\system32\\\\lsass.exe\"\n },\n {\n \"ProcessId\": 656,\n \"ProcessName\": \"svchost.exe\",\n \"Path\": \"C:\\\\Windows\\\\system32\\\\svchost.exe\",\n \"CommandLine\": \"C:\\\\Windows\\\\system32\\\\svchost.exe -k DcomLaunch\"\n },\n {\n \"ProcessId\": 700,\n \"ProcessName\": \"svchost.exe\",\n \"Path\": \"C:\\\\Windows\\\\system32\\\\svchost.exe\",\n \"CommandLine\": \"C:\\\\Windows\\\\system32\\\\svchost.exe -k RPCSS\"\n }\n]\n", "text": "Hi all,\nI have a json file with content:How can I import it into MongoDB?", "username": "Quoc_Anh_Nguyen_Le" }, { "code": "json_decode()Collection::insertMany()_idProcessId_id", "text": "Given the JSON schema you shared, I expect you can utilize PHP’s json_decode() method to parse the JSON as an array (sequential list) of associative arrays, and then use a method like Collection::insertMany() to insert the entire batch at once. Note that this will result in each document being assigned a new ObjectId for its _id field, so if you prefer to rely on the ProcessId instead you’ll need to manually rename that field to _id before passing the documents along to the insert method. That said, there’s no harm in keeping your document as-is and allowing MongoDB to generate its own ObjectId.Note that the PHP driver does have additional functions for parsing JSON, but those are mainly useful if you’re reading Extended JSON, which is a specific format of JSON with syntax for expressing BSON types beyond the basic types supported by common JSON. I don’t think this applies to your use case, but it bears mentioning.", "username": "jmikola" } ]
How to import JSON file using PHP script
2020-04-03T06:36:41.683Z
How to import JSON file using PHP script
3,774
https://www.mongodb.com/…f1d16a95a347.png
[ "php", "beta" ]
[ { "code": "mongodbCollection::createIndex()createIndexes()commitQuorumClient::listDatabases()authorizedDatabasescomposer require mongodb/mongodb^1.7.0@beta\nmongodb", "text": "The PHP team is happy to announce that version 1.7.0-beta2 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsThis beta release provides support for additional new features in MongoDB 4.4 following the previous 1.7.0beta1 release.Collection::createIndex() and createIndexes() now support a commitQuorum option, which can be used with MongoDB 4.4.Client::listDatabases() now supports an authorizedDatabases option, which can be used with MongoDB 4.0.5 or newer.As previously announced, this version drops compatibility with PHP 5.6 and requires PHP 7.0 or newer.A complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=27339DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.", "username": "jmikola" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Library 1.7.0-beta2 Released
2020-06-10T19:43:07.544Z
MongoDB PHP Library 1.7.0-beta2 Released
3,136
null
[ "php", "beta" ]
[ { "code": "tlsDisableOCSPEndpointChecktlsDisableCertificateRevocationCheckzstdcompressorsdirectConnectionreplicaSetdirectConnection=falsereplicaSetdirectConnection=truereplicaSethedge['enabled' => true]pecl install mongodb-1.8.0beta2\npecl upgrade mongodb-1.8.0beta2\n", "text": "The PHP team is happy to announce that version 1.8.0beta2 of the mongodb PHP extension is now available on PECL.Release HighlightsThis beta release provides support for additional new features in MongoDB 4.4 following the previous 1.8.0beta1 release.This release introduces support for OCSP and OCSP stapling, which is used to validate the revocation status of TLS certificates. OCSP is enabled by default, but can be controlled via two new URI options: tlsDisableOCSPEndpointCheck and tlsDisableCertificateRevocationCheck.The driver now supports Zstandard compression if it is available during compilation. Applications can opt into using Zstandard by specifying zstd in the compressors URI option, which is used to negotiate supported compression formats when connecting to MongoDB.The driver now supports a directConnection URI option, which can be used to control replica set discovery behavior when only a single host is provided in the connection string. By default, providing a single member in the connection string will establish a direct connection or discover additional members depending on whether the replicaSet option is omitted or present, respectively. This default behavior remains unchanged, but applications can now specify directConnection=false to force discovery to occur (if replicaSet is omitted) or specify directConnection=true to force a direct connection (if replicaSet is present).The ReadPreference constructor now supports a hedge option, which can be passed ['enabled' => true] to enable Hedged Reads when connected to a MongoDB 4.4 sharded cluster.This release also fixes a long-standing bug where the driver might segfault during shutdown while garbage-collecting a ReadConcern, ReadPreference, or WriteConcern object.As previously announced, this version drops compatibility with PHP 5.6 and requires PHP 7.0 or newer.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "username": "jmikola" }, { "code": "", "text": "", "username": "system" } ]
MongoDB PHP Extension 1.8.0beta2 Released
2020-06-10T19:42:25.087Z
MongoDB PHP Extension 1.8.0beta2 Released
3,056
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "We received an email to download and try out MongoDB Realm.When we follow the link to the download it asks us to either register or log in with an existing account.MongoDB’s developer data platform is flexible, scalable, and ensures that you can deliver reactive application experiences on mobile devices.We have an existing MongoDB account. When logging in to our existing account, it takes us to our MongoDB Admin Console with no further links to actually download MongoDB Realm.Perhaps we are overlooking it.", "username": "Jay" }, { "code": "", "text": "I have the same issue here.", "username": "Trunks99" }, { "code": "", "text": "@Trunks99 @Jay Sorry about that, you can follow the instructions here:\nhttps://docs.mongodb.com/realm/sync/ \nhttps://docs.mongodb.com/realm/ios/sync-data/#ios-sync-data And depending on which SDK you are using you can clone one of these repositories and build -MongoDB Realm Tutorials. Contribute to mongodb-university/realm-tutorial development by creating an account on GitHub.", "username": "Ian_Ward" } ]
Downloading MongoDB Realm
2020-06-10T16:26:27.677Z
Downloading MongoDB Realm
1,713
null
[ "atlas-functions", "stitch" ]
[ { "code": "", "text": "I believe that theres a bug in the context.http.post library.When you try to post with form data\nresponse = await context.http.post({The post add’s additional information into the form and json fields.\n“form”: {This explains why when you add the multipart/form-data header, the response is\n{“message”:“invalid character ‘I’ looking for beginning of value”}", "username": "Chi_Tran" }, { "code": "", "text": "Hi Chi – We’ll take a look into this, in the meantime you may want to use a library to make HTTP requests such as Axios or Node’s HTTPS library.", "username": "Drew_DiPalma" } ]
Critical Stitch Function Bug (context.http.post)
2020-06-08T05:46:45.453Z
Critical Stitch Function Bug (context.http.post)
3,393
https://www.mongodb.com/…_2_1024x535.jpeg
[ "event" ]
[ { "code": "", "text": "MongoDB stands with the Black community against racism, violence, and hate. While words have power, we must also act as agents of change. MongoDB is raising funds for organizations that support the Black community by fighting for racial justice and economic advancement.From June 10 - 12, MongoDB is raising funds for: Code 2040, The Bail Project, National Lawyers Guild Foundation, and National Urban League, Inc.Please visit https://mongodb.brightfunds.org/funds/mongodb-for-good to donate. MongoDB will match up to $250,000 USD. #BlackLivesMatter", "username": "Jamie" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB Stands With Our Black Community
2020-06-10T18:19:02.774Z
MongoDB Stands With Our Black Community
3,810
null
[ "mongodb-live-2020" ]
[ { "code": "", "text": "Watching the Scalable Realm Sync Design as we speak, but it seems to be targeted to users that haven’t been using Realm before. I know MongoDb Realm is a new product. But are there any sessions that are targeted to the developers that have been on Realm for years?Excited about MongoDb Realm btw ", "username": "Simon_Persson" }, { "code": "", "text": "Now we are talking @Ian_Ward to the rescue ", "username": "Simon_Persson" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Will there be any sessions on Realm that are not intro sessions?
2020-06-10T16:06:56.539Z
Will there be any sessions on Realm that are not intro sessions?
3,380
null
[]
[ { "code": "", "text": "HiI’m early into a hobby project that is using a Atlas and a Swift REST Vapor API powering a Vue.js front-end, that eventually will have an iOS client. I’ve just watched the keynote for 4.4 and it seems that perhaps I’d benefit from using Realm for this. So if I add a realm application application, will that be just a layer on top of my existing mongo, or will I have to redo the data layer?Other than the https://docs.mongodb.com/realm/, are there any good or recommended guides/courses people would recommend?Thanks", "username": "Jonny" }, { "code": "", "text": "@Jonny The Realm Sync data layer replicates changes between objects in Realm on an iOS mobile app and documents in MongoDB. All of this is done for you in a background thread so you do not need to write any of the code to perform the shuttling of data. What you are describing is REST API on the server-side which will serve data down to the client - Realm Sync takes care of this for you.Take a look -https://docs.mongodb.com/realm/ios/sync-data/#ios-sync-dataAnd you can clone the tutorial app and get started here -main/swift-iosMongoDB Realm Tutorials. Contribute to mongodb-university/realm-tutorial development by creating an account on GitHub.", "username": "Ian_Ward" }, { "code": "", "text": "Ok thanks Ian, I think that answers my question, which maybe could have been phrased better. But I can essentially scrap most of my REST layer (I’m sure I still might need an API for some things I want to do) and replace with built in realm functionality.Thank you, great changes in 4.4 ", "username": "Jonny" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting started with Realm
2020-06-10T13:43:47.347Z
Getting started with Realm
2,262
null
[ "aggregation" ]
[ { "code": "$addFields\n{\n \"month_document\": {\"$month\": \"$date_document\"},\n \"month_date\": {\"$month\": new Date()}\n}\n$match\n{\n \"month_document\": \"$month_date\"\n}\n\"month_document\": 6addFields\n{\n \"month_document\": { $dateToString: { format: \"%m-%Y\", date: \"$date_document\" } },\n \"month_date\": { $dateToString: { format: \"%m-%Y\", date: new Date() } },\n}\n", "text": "Hello !I’m trying to do a really simple thing but I don’t know why it’s not working.I have this aggregateWhich gives the following result :\nmonth_document = 6\nmonth_date = 6Then I’m trying to match all my documents with month_date.And it doesn’t work but \"month_document\": 6 works fine.I can’t figure it out why ?EDIT :I tried to convert my date to stringWhich gives the following result :\nmonth_document = 06-2020\nmonth_date = 06-2020And it’s not working either but matching month_document with the literal string “06-2020” works.", "username": "Henry" }, { "code": "$match\n{\n \"month_document\": \"$month_date\"\n}\n\"month_document\": 6$month_date$match$matchfinddb.test.aggregate( [\n { \n $addFields: {\n month_document: { \"$month\": \"$date_document\" },\n month_date: {\"$month\": new Date() } \n }\n },\n { \n $match: { $expr: { $eq: [ \"$month_document\", \"$month_date\" ] } }\n }\n] )\ndb.test.aggregate( [\n { \n $match: { $expr: { $eq: [ { \"$month\": \"$date\" }, { \"$month\": new Date() } ] } }\n }\n] )\nyyyy-mmmm-yyyy", "text": "Then I’m trying to match all my documents with month_date.And it doesn’t work but \"month_document\": 6 works fine.I can’t figure it out why ?Because, you cannot use the $month_date in the $match stage. The $match stage uses MongoDB Query Language operators (those you use with the find method) for comparison. You can match two document / derived variables using the aggreagtion operators only. To use aggregation operators you must use them with the $expr operator as shown below:Or, simply (this is same as the above aggregation):The same is the issue with the “date to string” converted value fields.Also, note that when using the month-year format string date for comparison, you should be using the yyyy-mm format (not the mm-yyyy format).", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Matching with $month
2020-06-10T12:16:21.354Z
Matching with $month
5,490
null
[]
[ { "code": "", "text": "Hi,I have taken the /data/db folder copy from a different windows laptop and pasted it in my laptop. I tried starting the mongod with dbpath pointing to the folder which has the copy. I’m able to start the server, but I don see the collections/databases. Please advice.Thanks,\nDurga", "username": "Durga_Krishnamoorthi" }, { "code": "mongod", "text": "Hi Durga,Please share the mongod command line you used on the second laptop.Also, how exactly did you perform the copy? Could you check if the “pasted” folder contains exactly the same files as the source folder?Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,Thank you for the reply!I pasted the data/db folder from source(it was from my peer’s laptop) into /data/db1 in my laptop. I checked for the files and it matched the source count. I used the below command.mongod --port 27017 --dbpath /data/db1Regards,\nDurga", "username": "Durga_Krishnamoorthi" }, { "code": "/data/db1C:\\data\\db1mongod", "text": "Hi Durga,That should work ok, so I’m not sure what went wrong. Could you double check the dbpath, since typically path like /data/db1 (with slashes) are Unix-style paths. Windows usually have drive letters e.g. C:\\data\\db1Another possibility is that there is a startup error or warning on the logs. Maybe the server is unable to open some file, permission issues, etc.? If you can post the full mongod log when it starts up, there might be some hint there.Best regards,\nKevin", "username": "kevinadi" } ]
Moving datafiles to a different windows machine
2020-06-03T13:07:26.433Z
Moving datafiles to a different windows machine
3,256
null
[ "performance" ]
[ { "code": "", "text": "I’m facing a completely different behaviour on a data node between $set and document replacement operations.\nWhenever I use $set, there is a huge spike of WiredTiger cache eviction plus an increase of read IOPS. If I replace the document, I don’t see this behaviour.\nNote: the $set is not setting any array field.Can someone please help me to understand the different behaviour? Is it due to the fact that document replacement doesn’t pull the document into memory?Thanks!", "username": "Pedro_Albuquerque" }, { "code": "", "text": "It would be helpful to have more details. For instance: what version of MongoDB? Default settings? Replica set? Size of document? Exact $set clause? Single document update? What’s the query? Both updates would be most useful.", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi @Asya_Kamsky, thanks for your reply.\nWe have found the problem and it was definitely not related with $set operation. The fact that we saw different behaviour was that working set was not fitting in memory due to another big index that we added to accommodate a new operation for the $set operation.\nOnce we dropped the old index, performance went back to previous numbers.", "username": "Pedro_Albuquerque" } ]
Update $set vs document replacement operations
2020-05-15T20:53:12.126Z
Update $set vs document replacement operations
2,560
null
[ "installation" ]
[ { "code": "", "text": "Pretty embarrassing but I’m going through MongoDB University for my first course in installing Mongo Shell on OS and after I’ve gone through the steps as outlined the command line response is: -Bash: Mongo: command not found.I am running OS Catalina (10.15.4), downloaded MongoDB Enterprise Server 4.2.7 tgz package, unpacked the package and went into my terminal to switch the Zsh shell to bash shell.Everything seems to work until I terminate the existing terminal and open a new terminal session. I enter the command mongo —nodb as instructed and I receive the error response mentioned above.Granted, I’ve NEVER worked on a command line in my life!H-E-L-P", "username": "Jakub_Malobecki" }, { "code": "", "text": "You should search the MongoDB University forum for this as a solution has already been provided. Basically it is related to the fact that $PATH variable is not updated with the installation directory.", "username": "steevej" }, { "code": "", "text": "Thanks Steeve. I’ll go and find that thread. Would you mind if I message you again if I’m still having issues?Thanks\nJakub", "username": "Jakub_Malobecki" }, { "code": "", "text": "I am on both forums. I’ll see your message for sure.", "username": "steevej" } ]
Installing Enterprise for MongoDB University course
2020-06-09T04:24:51.748Z
Installing Enterprise for MongoDB University course
1,912
null
[ "node-js" ]
[ { "code": "db.collection('locations') .updateMany({},{ $pull: {subscribers: '[email protected]'} });db.collection('locations') .updateMany({},{ $addToSet: {subscribers: '[email protected]'} });modifiedCountmatchedCount", "text": "Hello. I have a collection called ‘locations’ and it has 64 documents, each has a field called ‘subscribers’ and the value is an array of email addresses. When I tried to run db.collection('locations') .updateMany({},{ $pull: {subscribers: '[email protected]'} }); and db.collection('locations') .updateMany({},{ $addToSet: {subscribers: '[email protected]'} }); again and again, the modifiedCount should always be 64, right? But it returns a random number every time and loses data. The matchedCount is always 64, no error. Why does that happen? It’s weird because when I try these updates on a test db and collection, it doesn’t happen, but on my production db and collection, it happens. Is it because I have too frequent queries that locks the documents? But mongodb has journal that can help recover from heavy load, doesn’t it? I am so confused. Please help!I am running on an M2 cluster and there are a couple of connections querying and updating the cluster. I use the Node.js driver.", "username": "dongst" }, { "code": "$pull$addToSet$push", "text": "Hello Shaotian_Dong,$pull removes all instances of the email address values from the array; i.e., if there are duplicate values of the email address, all the matching values are removed. $addToSet only adds an email if it doesn’t exist in the array. $push adds an email even if the email already exists in the array.Also, you may be using different set of data in your test db (as you are getting different results).I suggest check your data before and after updating the documents / array. Since, you are using NodeJS driver, you can post the code you are running.", "username": "Prasad_Saya" } ]
An updateMany with $addToSet returns varying modifiedCount results
2020-06-10T04:48:23.984Z
An updateMany with $addToSet returns varying modifiedCount results
2,964
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi! I’m building a blog like app with mongoDB storage where many users can post many articles. Which is the best approach? 1. To have one collection of user documents in which I include the articles array. Or 2. have two collections, users and articles. In the articles collection I’ll store all of the users articles. What I’m thinking: If a user have only two articles, it is ok to search through 100k articles. On the other hand if a user have many articles and a single document have a max size of 16Mb and is not enough space?", "username": "Marius_Petrov" }, { "code": "userarticles", "text": "Hi Marius,You are looking at One-to-Many data relationship between the user and articles entities. You can model this as a single collection or two collections - this totally depends upon your use case. The most common questions you have to ask (yourself, in this case) are - what is the application, how many users and articles per user (maximum), the size of documents, the kind of queries (CRUD operations in general, and the most important queries), what functions are there in the application, etc.There are some guidelines in the following documentation on modeling this data relationship:", "username": "Prasad_Saya" } ]
Schema design approach
2020-06-09T14:57:58.676Z
Schema design approach
1,284
null
[ "morphia-odm" ]
[ { "code": "", "text": "I am facing the issue of intermittent spikes in response latencies while fetching data in a sharded cluster. Data is always being queried via indexed fields.Also seeing high memory usage on secondary boxes (memoryUsage:99% loadAvg:2.5) at exact same time as the latencies.Client: Morphia Client with ReadPreference = PRIMARY, ReadConcern = Local, WriteConcern Default. As per my understanding secondaries should not affect the response if ReadConcern is Local.Is there any correlation here?MongoDB version 4.2.2\nmongod instance running on 8 core, 40 GB RAM machine, wiredtiger cache 11 GB.Can anybody suggest what might be the issue or how to debug this?", "username": "Samarth_Goyal" }, { "code": "free -mmongod", "text": "Hi Samarth_Goyal,Unfortunately, we don’t have enough information to diagnose the problem. Memory usage is unlikely to be related unless you’re experiencing cache pressure (in fact, 99% memory usage could simply be filesystem cache if you’re checking with free -m).Latency can be caused by a lot of things. It includes the network initiation, network round trip, query execution, and client-side deserialization. Do you have any measurements available to isolate the issue? Generally, intermittent spikes are network based. If the spikes are more regular, it may be a cron job. Are you instantiating a lot of connections all at once when you’re seeing the spikes? It could be CPU exhaustion.Ultimately, there are so many things it could be that we’ll need a lot more information. Are you using Atlas? The profiler and performance adviser are invaluable tools for helping to diagnose performance problems with the mongod instance itself.", "username": "Justin" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Mongodb intermittent spikes in fetch latencies
2020-06-09T13:37:36.826Z
Mongodb intermittent spikes in fetch latencies
4,202
https://www.mongodb.com/…81aa7dbaf9b4.png
[ "swift" ]
[ { "code": "", "text": "Hello,I am getting this error while using realm on my swift project.\nAfter i installed SwipeCellKit pod on it when i tried to compile, i get this error at build time:Searched for some topics on stackoverflow but only found about react native.\nI checked the Build Phases area for some default checks i found on google but there is no duplicated files:Captura de Tela 2020-06-08 às 19.36.261776×1006 176 KBIs this a normal error? Realm was compiling fine with the other pod, then suddenly started to give me this error.", "username": "Vitor_Gomes_Silva" }, { "code": "", "text": "I think i resolved my problem, based on this answer: https://stackoverflow.com/questions/24298144/duplicate-symbols-for-architecture-x86-64-under-xcodeI deleted my Pods folder then made another pod install, using same xcode version either on podfile, same realm version. Without SwipeCellKit implementation on code is not presenting problems anymore. I really dont know how issue started, but i guess it is working fine now.", "username": "Vitor_Gomes_Silva" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[Swift] Getting duplicate symbols for architecture x86_64 error
2020-06-08T23:37:23.383Z
[Swift] Getting duplicate symbols for architecture x86_64 error
17,015
null
[ "containers", "installation" ]
[ { "code": "FROM debian:stretch\nRUN apt-get update\nRUN apt-get install -y wget gnupg\nRUN wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | apt-key add -\nRUN echo \"deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.2 main\" | tee /etc/apt/sources.list.d/mongodb-org-4.2.list\nRUN apt-key list\nRUN apt-get update\nRUN apt-get install -y mongodb-org\nWARNING: The following packages cannot be authenticated!\nmongodb-org-shell mongodb-org-server mongodb-org-mongos mongodb-org-tools\nmongodb-org\nE: There were unauthenticated packages and -y was used without --allow-unauthenticated\nStep 6/8 : RUN apt-key list\n---> Running in 1edf49f36623\nWarning: apt-key output should not be parsed (stdout is not a terminal)\n/etc/apt/trusted.gpg\n--------------------\npub rsa4096 2018-04-18 [SC] [expires: 2023-04-17]\nE162 F504 A20C DF15 827F 718D 4B7C 549A 058F 8B6B\nuid [ unknown] MongoDB 4.2 Release Signing Key <[[email protected]](mailto:[email protected])>\n", "text": "I’m having an issue with the signing keys for Mongo 4.2 on Debian.9 stretch, as documented here: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-debian/A minimal Dockerfile using the commands from that doc:This errors with:This has just started failing yesterday on one of our builds, was previously working OK.I’ve followed the troubleshooting instructions here: https://docs.mongodb.com/manual/reference/installation-ubuntu-community-troubleshooting/And the key has been added as expected:Has anyone else seen this issue? I’m very concerned that packages in the official repo.mongodb.org appear to no longer be signed by the officially documented public key, given that they previously were. Using debian:buster instead works fine, so I don’t think it’s an issue with the key that is documented (which is the same for both Stretch and Buster).", "username": "Callum_McIntyre" }, { "code": "", "text": "Trying to edit but it won’t let me (new users can only have 2 links apparently), so to update - this is now resolved. Unsure what fixed it, no changes our side, assume it was a MongoDB infrastructure issue.", "username": "Callum_McIntyre" }, { "code": "", "text": "Hi Chris,Some of the latest 4.2 Debian/Ubuntu packages released were inadvertently signed with the 4.4 signing key: https://jira.mongodb.org/browse/DOCS-13691Apologies for the inconvenience, this problem has been corrected.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2 packages on Debian Stretch cannot be authenticated
2020-06-09T08:52:55.925Z
MongoDB 4.2 packages on Debian Stretch cannot be authenticated
2,750
null
[ "charts", "on-premises" ]
[ { "code": "", "text": "I am looking for to mongodb charts on s390x architecture. The documentation for mongodb charts (https://docs.mongodb.com/charts/current/installation/) is using docker image Quay, which only supports/works on “amd64 architecture”.Is there docker image availabe for mongodb that is for s390x architecture ?\nIf yes, where I can find it.\nIf no, when it will be available ?", "username": "alak_patel" }, { "code": "", "text": "Hi @alak_patel -No the image is not available for any other architectures, and we have no plans to do this.Tom", "username": "tomhollander" }, { "code": "", "text": "Hi Tom,It will help if you can make s390x docker image. I was also looking for same. How difficult would it be ? It will really help us utilize MongoDB Charts.", "username": "Eddie_Harris" }, { "code": "", "text": "Hello @tomhollander I am also looking for s390x arch supported docker image. Can you please help ?", "username": "Mehul_Agrawal" }, { "code": "", "text": "As I mentioned, we have no plans for this. Just to help me understand, can you explain why this is important to you? Do you not have any x86 servers on your network where you could run Charts?Tom", "username": "tomhollander" }, { "code": "", "text": "@tomhollander, unfortunately our all services and applications are hosted on s390x architecture based host and our organization don’t provide any other architecture, since all other running applications are suitable for s390x architecture. It will be huge help if you can help get s390x architecture based docker image for MongoDB Charts.", "username": "alak_patel" }, { "code": "", "text": "@tomhollander can you help getting s390x docker image ?", "username": "Mehul_Agrawal" }, { "code": "", "text": "Hi @Mehul_Agrawal, unfortunately we don’t have the resources to produce and support this image.", "username": "tomhollander" }, { "code": "", "text": "@tomhollander, is there any github repo to which I can contribute in order to build s390x docker image?", "username": "Mehul_Agrawal" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there MongoDB Charts docker image for s390x architecture?
2020-05-19T22:16:45.585Z
Is there MongoDB Charts docker image for s390x architecture?
4,169
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.8-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.7. The next stable release 4.2.8 will be a recommended upgrade for all 4.2 users.Fixed in this release:4.2 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.8-rc0 is released
2020-06-09T19:27:18.577Z
MongoDB 4.2.8-rc0 is released
1,759
null
[ "c-driver", "beta" ]
[ { "code": "", "text": "I’m pleased to announce version 1.17.0-beta2 of libbson and libmongoc,\nthe libraries constituting the MongoDB C Driver.Features:Bug fixes:Thanks to everyone who contributed to the development of this release.Features:Bug fixes:Thanks to everyone who contributed to the development of this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C driver 1.17.0-beta2 released
2020-06-09T19:22:47.670Z
MongoDB C driver 1.17.0-beta2 released
2,964
null
[]
[ { "code": "$ cat /etc/apt/sources.list.d/mongodb-org-4.4.list\ndeb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/testing multiverse\n\n$ sudo apt install mongodb-org-tools\nReading package lists... Done\nBuilding dependency tree\nReading state information... Done\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n mongodb-org-tools : Depends: mongodb-database-tools but it is not installable\nE: Unable to correct problems, you have held broken packages.", "text": "mongodb-org-tools fails to install on Ubuntu 20.04 because it depends on mongodb-database-tools which doesn’t exist:", "username": "Andrew_Wason" }, { "code": "", "text": "Ubuntu 20.04 is not a supported OS for MongoDB 4.2 or earlier (meaning any current version of MongoDB at this time).@Stennie_X mentions in another thread that support for Ubuntu 20.04 is planned for MongoDB 4.4.Unfortunately this means that you will have to wait until the 4.4 version of Mongo is released unfortunately. If this is a playground/test system, you might be able to pull down the zip file from the MongoDB downloads page and use it that way.", "username": "Doug_Duncan" }, { "code": "", "text": "I know 4.2 does not support Ubuntu 20.04 - I am installing 4.4 rc8 which does.\nIt’s just the mongodb-org-tools that fails to install due to an unmet dependency. The database itself installs.", "username": "Andrew_Wason" } ]
Ubuntu 20.04 mongodb-org-tools
2020-06-04T19:51:57.998Z
Ubuntu 20.04 mongodb-org-tools
7,795
null
[ "java", "beta" ]
[ { "code": "", "text": "The 4.1.0-beta2 MongoDB Java & JVM Drivers has been released, with support for the upcoming release of MongoDB 4.4.The documentation hub includes extensive documentation of the 4.1 driver, includingand much more.You can find a full list of bug fixes here .You can find a full list of improvements here .You can find a full list of new features here .MongoDB Java Driver documentation", "username": "Jeffrey_Yemin" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Java Driver 4.1.0-beta2 Released
2020-06-09T18:53:29.170Z
MongoDB Java Driver 4.1.0-beta2 Released
3,521
null
[ "golang", "beta" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of v1.4.0-beta2 of the MongoDB Go Driver.This release contains support for some MongoDB server version 4.4 features and improvements to the driver API. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.4.0-beta2 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "Divjot_Arora" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.4.0-beta2 released
2020-06-09T18:42:09.943Z
MongoDB Go Driver 1.4.0-beta2 released
2,818
https://www.mongodb.com/…_2_1024x912.jpeg
[ "production", "atlas-search", "atlas", "text-search" ]
[ { "code": "", "text": "I am pleased to announce that as of today, Atlas Search is now GA! Atlas Search allows you to configure search indexes directly on clusters you have running in MongoDB Atlas and write queries using the MongoDB query language.Here are some of the recent updates we made:\nSearch_GA_Community_Post_-_Google_Docs1250×1114 165 KB\nThe detailed announcement is located here.We also have two sessions available at MongoDB.live, presented by Doug Tarr and Evan Nixon, members of the Atlas Search engineering team. Click here to register and watch the presentations on-demand.Let us know what you think! You can read our Docs for more info or submit ideas to our feedback portal.", "username": "bencefalo" }, { "code": "", "text": "", "username": "system" } ]
Atlas Search is now GA
2020-06-09T18:39:03.143Z
Atlas Search is now GA
1,973
null
[]
[ { "code": "", "text": "Hi everyone!Exciting news - today, we announced both the release of MongoDB Realm and the public beta of the new Realm Sync service!You can check out our release blog post here: mongodb.com/blog/post/announcing-mongodb-realm and read the documentation here: docs.mongodb.com/realm.We are hosting some sessions on Realm’s services as part of the MongoDB.live event, which you can attend for free today and tomorrow: www.mongodb.com/world. (Note: Even if you can’t make it this week, most sessions will be available on demand after the event)We really want to hear from you during the Realm Sync beta. You can head to feedback.mongodb.com to share your feedback as you start building.Drew, Product Lead for MongoDB Realm", "username": "Drew_DiPalma" }, { "code": "", "text": "", "username": "Stennie_X" }, { "code": "", "text": "We were looking forward to attending this event and really interested in the MongoDB Realm sessions (are the sessions listed somewhere?)However, we first try to access the site we get thisNo Safari For You1226×1118 113 KBAnd then using a different browser, we attempt to register and get thisNo Register for you1512×640 45.8 KBIt should not be this difficult to attend an event.", "username": "Jay" }, { "code": "", "text": "In the first browser please try clicking “Continue” under the section you’ve screenshotted. Popup blockers are integrated into all browsers so this is the expected result and once you click continue below this you’ll be in.", "username": "Ryan_Quinn" }, { "code": "", "text": "That was the first thing we tried!It presented that error in Safari so we switched over to Firefox and received the same messages when attempting to use the site and then register.I then tried it on my iPad with the same results. I had one of my engineers working from home try it. He already tried it on multiple devices and it never worked so he gave up.I know this isn’t a support forum so thanks for responding anyway.", "username": "Jay" }, { "code": "", "text": "We’re here to help. We have seen a couple similar reports and I’ve just been notified that a fix is being deployed for the registration and login issues.", "username": "Ryan_Quinn" } ]
Announcing MongoDB Realm and Realm Sync public beta
2020-06-09T13:47:39.691Z
Announcing MongoDB Realm and Realm Sync public beta
1,675
null
[ "c-driver" ]
[ { "code": "mongoc_init ();\nclient = mongoc_client_new (\"mongodb://user:pass@localhost:27017/?authSource=admin\");\ncollection = mongoc_client_get_collection (client, \"DB\", \"Collection\");\n", "text": "Greetings!I have very specific issue. I must work through a C program to access a mongo database.If i connect to a mongo database with a simple C program like this :I would now like to simply pass to this connection a shell script like this:\nstr = ‘db.DB.Collection.insert(name:“bob”)’My_Mongo_Command(client, collection, str);and have it return any output from the command as it would on the mongo command line.\nI have not been able to find anything like this in the documentation but maybe i am blind. Thanks", "username": "aaron_thompson" }, { "code": "", "text": "It’s the same for me. If you get a solution, please contact me.Discover the problem is “mongoc_init();”, so decided to tested in simple C program, when commented de “mongo_init();” line, the program go perfect.My setup is QT Creator (QT 5.13)[email protected]", "username": "Cesar_Cherre" } ]
Mongoc and passing through mongo shell commands
2020-04-15T20:24:54.492Z
Mongoc and passing through mongo shell commands
1,949
null
[ "installation" ]
[ { "code": "sudo apt-get update", "text": "I’m trying to install MongoDB on Ubuntu 18.04. I’m following the instructions on here: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/However I’ve hit the following error when running sudo apt-get updateW: GPG error: MongoDB Repositories bionic/mongodb-org/4.2 Release: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 656408E390CFB1F5\nE: The repository ‘MongoDB Repositories bionic/mongodb-org/4.2 Release’ is not signed.Does anyone know how I can fix this?", "username": "Ian_Wright" }, { "code": "", "text": "Please check this linkhttps://chrisjean.com/fix-apt-get-update-the-following-signatures-couldnt-be-verified-because-the-public-key-is-not-available/", "username": "Ramachandra_Tummala" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to install Community Edition on Ubuntu 18.04
2020-06-08T19:31:27.068Z
Unable to install Community Edition on Ubuntu 18.04
2,832
https://www.mongodb.com/…c_2_1024x437.png
[ "atlas" ]
[ { "code": "", "text": "I’m run to a stange issue in mongo db atlas restores.\nWhenever I restore a snapshort via mongo db atlas UI , this snapshorts moves from the “SnapShots” to the “Restore History” tan.\nFrom this minue , there is no option to use this snapshort anymore as its gone from the UI and lost forever.This is where I click the restore :\nimage1193×510 111 KBAfter a succusfull restore , I see the restore History , but there i sno way to restore using that snapshot again\nimage1253×509 106 KBAm I missing something ?\nHow do I use a snapshot which I have used before ?Thanks\nSagi", "username": "Sagi_Karni" }, { "code": "", "text": "Hi @Sagi_Karni the snapshot just moved to the second page because of how many snapshots you are keeping in retention. While on the snapshots page, in the bottom right you will see a page button.", "username": "bencefalo" }, { "code": "", "text": "and look at a day where I did not restore.\nI se a snapshot every 2 hours\n", "username": "Sagi_Karni" }, { "code": "", "text": "The snapshots are NOT on the next pages.\nAfter I restore them they were completly gone from the Snapshot tab\nPlease look on the attached file.\nMy policy is to backup every 2 hours , which is what I usually see a snapshot is created every 2 hours exactly.\nHowever , pay a close attemntion : see the difference between each snapshot (the timing)see here (this is where i did many restores)\n", "username": "Sagi_Karni" }, { "code": "", "text": "I will send you a direct message, I am happy to review this with you on a Zoom", "username": "bencefalo" }, { "code": "", "text": "That would be great.\nWaiting for your invite.\nThanks", "username": "Sagi_Karni" }, { "code": "", "text": "Thanks you for your help\nIt should work as expected now after you reolved the issue (very fast if I might say (-:Thanks\nSagi", "username": "Sagi_Karni" } ]
Lost snapshots in MongoDB Atlas
2020-06-08T13:32:14.228Z
Lost snapshots in MongoDB Atlas
1,907
null
[]
[ { "code": "", "text": "Hi I’m building a small serverless web app and just now the stitch doc looks migrated with realm, is there any doc on what are the changes or migration guide from stitch to realm?", "username": "Orville_Lim" }, { "code": "", "text": "Stitch has been re-branded and become a part of the MongoDB Realm ecosystem. Joining the Realm database and offline support and syncing tools, the component formerly known as stitch will provide function hosting for Realm. For Stitch users this does not mean any immediate changes to how you build apps but will provide the additional tools from the rest of the realm platform in an easier to integrate way.", "username": "Ryan_Quinn" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is Realm replacing MongoDB Stitch?
2020-06-09T15:38:54.927Z
Is Realm replacing MongoDB Stitch?
2,679
null
[ "node-js", "transactions" ]
[ { "code": " const orderitem = await services\n .query(\"orderitem\")\n .findOne({ _id: orderitemId });\n const currStatus = orderitem.status;\n const newStatus = request.status ?? currStatus;\n const currPrice = orderitem.price;\n const newPrice = request.price ?? currPrice;\n const statusNew = \"new\";\n let inc = 0.0;\n\n if (newStatus === statusNew && currStatus === statusNew) {\n inc = newPrice - currPrice;\n } else if (newStatus === statusNew && currStatus !== statusNew) {\n inc = newPrice;\n } else if (newStatus !== statusNew && currStatus === statusNew) {\n inc = currPrice * -1;\n }\n\n const session = services.common.startDbSession();\n\n try {\n await session.withTransaction(async () => {\n if (inc !== 0.0) {\n await services.order.updateOutstandingBalance(\n orderitem.order._id,\n inc,\n session\n );\n }\n\n await services\n .query(\"orderitem\")\n .model.updateOne(params, request)\n .session(session);\n });\n } finally {\n await session.endSession();\n }\n updateOutstandingBalance: async (_id, inc = 0.0, session = null) => {\n await services\n .query(\"order\")\n .model.updateOne(\n { _id: _id },\n {\n $inc: { price: inc },\n updatedAt: new Date(),\n myLock: { appName: \"myApp\", pseudoRandom: new ObjectID() },\n }\n )\n .session(session);\n }\n", "text": "Hi everyone,I have 2 collections: order and orderitems where an order consists of order item(s).The order has field price which should be the sum of the price of the order items only if the status is “new”.For example, order ABC has 3 order items :This order should have the price = 20.What happens is sometimes the order price does not calculate correctly when the order items’ status is updated concurrently from “new” to “paid” or vice versa.For example, when order item 1 and 3 are updated to “paid”, the order price is still 20 instead of 0.My update order item API (Node JS) looks like this:And here is the updateOutstandingBalance function:The transaction is using write concern \" majority\" and read concern “local”.Any help is really appreciated, thank you.", "username": "Leonardy_Huang" }, { "code": "", "text": "I’m curious - why are order line items in a separate collection? It’s kind of a textbook example of schema that’s likely going to be better with array of line items in order object.", "username": "Asya_Kamsky" }, { "code": "", "text": "As far as transaction, I’m not sure I follow your explanation of what the business logic should be but I find it highly suspect that the query against orderitems originally happens outside of transaction. It means if there are two attempts to write that conflict the second one will retry the transaction - but not the read.", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi Asya,In our case, order items can be dynamically added and updated individually, customers can also pay for only specific order items. It’s a little bit different from the usual e-commerce where order items are created together with the order upon checkout. I hope this answers you.", "username": "Leonardy_Huang" }, { "code": "", "text": "You can update individual items within an array - and update the total, etc atomically in a single update. It’s part of normal e-commerce. Different items ship at different times. Some item might get returned. It’s still best done as an array in the order document. 
Saves space and indexes too.", "username": "Asya_Kamsky" }, { "code": "", "text": "Thanks for the prompt reply.This discrepancy only intermittently happens when there is batch status update on multiple order items of a particular order.For example, order ABC has 3 order items:The read outside of the transaction is only reading from the orderitem, not from the order. Then based on its status and price change, it does $inc (if not 0) to the order’s price and update the orderitem itself. So CMIIW, the calculation doesn’t rely on the order price.Do you think the orderitem read should also happen in the transaction?", "username": "Leonardy_Huang" }, { "code": "", "text": "Thanks for the suggestion, but it’s currently not possible to change the data structure because it impacts all the APIs and UI that the customers face.That aside, what could be wrong from my implementation? Seems like the update sometimes just doesn’t increment correctly. Is there anything wrong on the transaction options, write concern “majority” and read concern “local”?", "username": "Leonardy_Huang" }, { "code": "", "text": "Using write concern “majority” and read concern “snapshot” also did not help.", "username": "Leonardy_Huang" }, { "code": "services.common.startDbSession()startSession()commitTransaction", "text": "Sorry about the delay but I’m still curious to understand what happens and not seeing all of the code I find myself guessing here.Can you explain what APIs you are using? This looks close to our node.js API examples but not exactly - so I’m wondering if the issue is in some of the supporting code.For instance I’m not sure what the services.common.startDbSession() call does as I couldn’t find any references to it anywhere - if that your own wrapper of startSession()? Your subsequent calls are on `services.query().model.updateOne().session(session) - is this Mongoose wrapper over MongoDB Node.js driver? 
Or some other wrapper?If you can’t share the code, you can run this with logging increased on the DB side and confirm that in fact transaction is starting when you think it’s starting and all the writes are showing up before commitTransaction - otherwise it’s really hard to say what’s happening when you run this code.", "username": "Asya_Kamsky" }, { "code": "services.common.startDbSession()client.startSession() MongoClient {\n _events: [Object: null prototype] {\n newListener: [Function (anonymous)],\n left: [Function (anonymous)]\n },\n _eventsCount: 2,\n _maxListeners: undefined,\n s: {\n url: 'mongodb://some_url/strapi?ssl=true&replicaSet=Cluster0-shard-0&authSource=some_source&retryWrites=true&w=majority&readPreference=primaryPreferred',\n options: {\n servers: [Array],\n ssl: true,\n replicaSet: 'Cluster0-shard-0',\n authSource: 'some_source',\n retryWrites: true,\n w: 'majority',\n readPreference: [ReadPreference],\n caseTranslate: true,\n useNewUrlParser: true,\n useUnifiedTopology: 'false',\n promiseLibrary: [Function: Promise],\n driverInfo: [Object],\n auth: [Object],\n dbName: 'strapi',\n name: 'Mongoose',\n version: '5.8.0',\n socketTimeoutMS: 360000,\n connectTimeoutMS: 30000,\n useRecoveryToken: true,\n credentials: [MongoCredentials]\n },\n promiseLibrary: [Function: Promise],\n dbCache: Map(1) { 'strapi' => [Db] },\n sessions: Set(0) {},\n writeConcern: undefined,\n namespace: MongoDBNamespace { db: 'some_db', collection: undefined }\n },\n topology: NativeTopology {\n _events: [Object: null prototype] {\n authenticated: [Function (anonymous)],\n error: [Array],\n timeout: [Array],\n close: [Array],\n parseError: [Array],\n fullsetup: [Array],\n all: [Array],\n reconnect: [Array],\n serverOpening: [Function (anonymous)],\n serverDescriptionChanged: [Function (anonymous)],\n serverHeartbeatStarted: [Function (anonymous)],\n serverHeartbeatSucceeded: [Function (anonymous)],\n serverHeartbeatFailed: [Function (anonymous)],\n serverClosed: [Function (anonymous)],\n topologyOpening: [Function (anonymous)],\n topologyClosed: [Function (anonymous)],\n topologyDescriptionChanged: [Function (anonymous)],\n commandStarted: [Function (anonymous)],\n commandSucceeded: [Function (anonymous)],\n commandFailed: [Function (anonymous)],\n joined: [Array],\n left: [Array],\n ping: [Function (anonymous)],\n ha: [Function (anonymous)],\n open: [Function],\n reconnectFailed: [Function (anonymous)]\n },\n _eventsCount: 26,\n _maxListeners: Infinity,\n s: {\n id: 0,\n options: [Object],\n seedlist: [Array],\n state: 'connected',\n description: [TopologyDescription],\n serverSelectionTimeoutMS: 30000,\n heartbeatFrequencyMS: 10000,\n minHeartbeatFrequencyMS: 500,\n Cursor: [Function: Cursor],\n bson: BSON {},\n servers: [Map],\n sessionPool: [ServerSessionPool],\n sessions: Set(0) {},\n promiseLibrary: [Function: Promise],\n credentials: [MongoCredentials],\n clusterTime: [Object],\n iterationTimers: Set(0) {},\n connectionTimers: Set(0) {},\n clientInfo: [Object],\n sCapabilities: [ServerCapabilities]\n },\n [Symbol(kCapture)]: false\n },\n [Symbol(kCapture)]: false\n }\n ClientSession {\n _events: [Object: null prototype] {\n ended: [Function: bound onceWrapper] { listener: [Function (anonymous)] }\n },\n _eventsCount: 1,\n _maxListeners: undefined,\n topology: NativeTopology {\n _events: [Object: null prototype] {\n authenticated: [Function (anonymous)],\n error: [Array],\n timeout: [Array],\n close: [Array],\n parseError: [Array],\n fullsetup: [Array],\n all: [Array],\n reconnect: [Array],\n 
serverOpening: [Function (anonymous)],\n serverDescriptionChanged: [Function (anonymous)],\n serverHeartbeatStarted: [Function (anonymous)],\n serverHeartbeatSucceeded: [Function (anonymous)],\n serverHeartbeatFailed: [Function (anonymous)],\n serverClosed: [Function (anonymous)],\n topologyOpening: [Function (anonymous)],\n topologyClosed: [Function (anonymous)],\n topologyDescriptionChanged: [Function (anonymous)],\n commandStarted: [Function (anonymous)],\n commandSucceeded: [Function (anonymous)],\n commandFailed: [Function (anonymous)],\n joined: [Array],\n left: [Array],\n ping: [Function (anonymous)],\n ha: [Function (anonymous)],\n open: [Function],\n reconnectFailed: [Function (anonymous)]\n },\n _eventsCount: 26,\n _maxListeners: Infinity,\n s: {\n id: 0,\n options: [Object],\n seedlist: [Array],\n state: 'connected',\n description: [TopologyDescription],\n serverSelectionTimeoutMS: 30000,\n heartbeatFrequencyMS: 10000,\n minHeartbeatFrequencyMS: 500,\n Cursor: [Function: Cursor],\n bson: BSON {},\n servers: [Map],\n sessionPool: [ServerSessionPool],\n sessions: [Set],\n promiseLibrary: [Function: Promise],\n credentials: [MongoCredentials],\n clusterTime: [Object],\n iterationTimers: Set(0) {},\n connectionTimers: Set(0) {},\n clientInfo: [Object],\n sCapabilities: [ServerCapabilities]\n },\n [Symbol(kCapture)]: false\n },\n sessionPool: ServerSessionPool {\n topology: NativeTopology {\n _events: [Object: null prototype],\n _eventsCount: 26,\n _maxListeners: Infinity,\n s: [Object],\n [Symbol(kCapture)]: false\n },\n sessions: [\n [ServerSession], [ServerSession],\n [ServerSession], [ServerSession],\n [ServerSession], [ServerSession],\n [ServerSession], [ServerSession],\n [ServerSession], [ServerSession],\n [ServerSession], [ServerSession],\n [ServerSession], [ServerSession],\n [ServerSession]\n ]\n },\n hasEnded: false,\n serverSession: ServerSession {\n id: { id: [Binary] },\n lastUse: 1591716044756,\n txnNumber: 2,\n isDirty: false\n },\n clientOptions: {\n servers: [ [Object], [Object], [Object] ],\n ssl: true,\n replicaSet: 'Cluster0-shard-0',\n authSource: 'some_source',\n retryWrites: true,\n w: 'majority',\n readPreference: ReadPreference { mode: 'primaryPreferred', tags: undefined },\n caseTranslate: true,\n useNewUrlParser: true,\n useUnifiedTopology: 'false',\n promiseLibrary: [Function: Promise],\n driverInfo: { name: 'Mongoose', version: '5.8.0' },\n auth: {\n username: 'some_username',\n password: 'some_password',\n db: 'some_db',\n user: 'some_user'\n },\n dbName: 'strapi',\n name: 'Mongoose',\n version: '5.8.0',\n socketTimeoutMS: 360000,\n connectTimeoutMS: 30000,\n useRecoveryToken: true,\n credentials: MongoCredentials {\n username: 'some_username',\n password: 'some_password',\n source: 'some_source',\n mechanism: 'scram-sha-1',\n mechanismProperties: undefined\n }\n },\n supports: { causalConsistency: true },\n clusterTime: {\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1591716043 },\n signature: { hash: [Binary], keyId: [Long] }\n },\n operationTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1591716043 },\n explicit: true,\n owner: undefined,\n defaultTransactionOptions: {},\n transaction: Transaction {\n state: 'TRANSACTION_IN_PROGRESS',\n options: {\n writeConcern: [Object],\n readConcern: [Object],\n readPreference: 'primary'\n },\n _pinnedServer: undefined,\n _recoveryToken: undefined\n },\n [Symbol(kCapture)]: false\n }\n", "text": "Hi Asya,I am using Strapi and the driver is Mongoose 5.8.0, 
services.common.startDbSession() is just the wrapper of client.startSession().Here is the client detail:And yes, I can see the TRANSACTION_IN_PROGRESS from the session:Hopefully this helps.", "username": "Leonardy_Huang" } ]
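A sketch of the embedded-array approach suggested earlier in this thread: with line items inside the order document, an item's status and the running total change in one atomic update. Field names and variables here are illustrative, not taken from the original schema:

```
// Mark one embedded item as paid and adjust the outstanding total together:
db.orders.updateOne(
  { _id: orderId, "items._id": itemId, "items.status": "new" },
  {
    $set: { "items.$.status": "paid" },
    $inc: { price: -itemPrice }   // remove the paid item's price from the total
  }
)
// Because this is a single-document update, no multi-document
// transaction is needed for this case.
```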
Incorrect total price on "order" document when its order items are being updated concurrently
2020-05-17T21:11:20.814Z
Incorrect total price on &ldquo;order&rdquo; document when its order items are being updated concurrently
2,660
null
[]
[ { "code": "", "text": "I’m putting together a small doc mgmt solution based on node.js and mongodb. MongoDB can store metadata and files pretty easily, although I’ll need to use GridFS to break up larger documents, but beyond that everything I need is there.I need to control user access to documents (meaning what they can do on a document-by-document basis) via the equivalent to an ACL. The most stringent limit is where a document isn’t even visibile in query results. Next level up is visibility that it’s there, but no ability to read; then read; then update; then delete. I’m looking for ideas on how to accomplish this.The challenge is how I can limit visibility of search results in the first place so that users that aren’t authorized to see that a document is even present. I can check a user’s rights for a single document without issue. The overhead is negligible for a single document. The problem comes in if I have millions of documents. I need to limit search results with something that can be combined in the search with minimal overhead.Is there anything natively in MongoDB like this (which is something available in Oracle and SQL Server)? Is there some approach that may not be built-in, but is available as an add-on or custom?The one thing I thought of was using a bit field (bits positions representing groups) of enough size to perform or operations against a user’s group memberships (generated when they logon) and doing an or operation of their memberships against it, with bits set to 1 (or 0) when that group bit is excluded from seeing the object in a query result. I’m afraid of the overhead of this though given this would not be indexable, but ensuring the other conditions of the search are applied first.Thanks,\nGene", "username": "Mark_Klamerus" }, { "code": "", "text": "What does it mean to see what document is there but not be able to read it?Is that seeing things like “title” but not content?", "username": "Asya_Kamsky" }, { "code": "", "text": "Sorry, it’s a fussy thing to describe. The example I have is Documentum (a traditional doc mgmt product) that demonstrates this functionality. As I described, we’re talking about a small document management solution (where the documents are traditional documents, not the data structures that are often called documents in NoSQL DBs.). Most of the time these are PDF, but also MS Office. These can be stored via BSON, with the majority of the other information being metadata such as customer, consignee, etc.In cases of sensitive documents (HIPPA, ITAR, highly valuable IP), only limited people can actually view the documents, but it is useful for users of the system to know that they are there. At that point they can ask someone allowed to view the documents if they can be allowed to view them as well.So, in Documentum, this is called browse access. It’s a step above no awareness of the documents, but a step below being able to actually look at them. A person with browse access is able to see metadata, but not the PDF/Word/etc. document itself.Sorry, but that’s a bit long-winded.", "username": "Mark_Klamerus" }, { "code": "", "text": "Sorry missed your reply - this can be handled by defining a “read only view” and excluding the field(s) that contain content. Then give those users the ability to read the view but not the underlying collection and they will only see a subset of fields of each document.See more details here: https://docs.mongodb.com/manual/core/views/", "username": "Asya_Kamsky" } ]
Limiting query results
2020-04-06T17:20:09.297Z
Limiting query results
1,759
https://www.mongodb.com/…ebb06a5685b3.png
[ "stitch" ]
[ { "code": "<webhook-url>?secret=1234<webhook-url>?filter=cats", "text": "Hello,\nI have followed Micheal Lynn’s tutorial on creating a basic http service with webhooks.Create an API in under 10 minutes with MongoDB Stitch.Currently the webhook only allows me to hardcode filters for my documents. Just like secrets with <webhook-url>?secret=1234 is there a way to do something like <webhook-url>?filter=cats and get all the documents with cats?\nThanks!", "username": "Liam_Grossman" }, { "code": "", "text": "Absolutely! The query parameter you pass to your stitch (no realm) application is available to you in the script as part of the payload. See https://docs.mongodb.com/realm/services/http/#request-payload", "username": "Michael_Lynn" } ]
Stitch: Create a webhook that takes in parameters for filtering documents
2020-06-03T13:07:08.422Z
Stitch: Create a webhook that takes in parameters for filtering documents
3,127
null
[ "dot-net" ]
[ { "code": "var mongoClientSettings = new MongoClientSettings();\n\nmongoClientSettings.Compressors = new List<CompressorConfiguration>() {\n new CompressorConfiguration(CompressorType.ZStandard)\n };\n\nvar client = new MongoClient(mongoClientSettings);\n\nIMongoDatabase testdb = client.GetDatabase(\"testdb\");\n\nvar eaiRequestLogsCollection = testdb.GetCollection<EAIRequestsLogMDB>(\"EAIRequestsLogs\");\n\neaiRequestLogsCollection.InsertMany(eAIRequestsLogMDBs);\nstorage:\n dbPath: C:\\Program Files\\MongoDB\\Server\\4.2\\data\n journal:\n enabled: true\n engine: \"wiredTiger\"\n wiredTiger:\n collectionConfig:\n blockCompressor: \"zstd\"\n", "text": "I’m trying to utilize ZStandard compression but for some reason whenever I add a new collection, its block_compression is set to “snappy”.\nI’ve edited the mongod.cfg file and set the block_compression field under WiredTiger to “zstd” and I specified zstd compression in my MongoSettings while creating the MongoClient in my c# code.\nIs there something else I’m supposed to do for Zstd compression to work?This is my C# code:This is the part I edited in the mongo.cfg file:Note: I’m using MongoDB v4.2.7 along with the .Net MongoDB driver v2.11.0(beta v) on a windows 10 machine.", "username": "Asmaa_Hamdi" }, { "code": "mongoClientSettings.Compressors = new List<CompressorConfiguration>() {\n new CompressorConfiguration(CompressorType.ZStandard)\n };\nblockCompressorblockCompressorblock_compressormongo// Create new collection using zstd compression\n> db.createCollection(\"zlogs\", {storageEngine: {wiredTiger: {configString: \"block_compressor=zstd\"}}})\n{ \"ok\" : 1 }\n\n// Check which block compressor is being used for a collection\n> var wt_options = db.zlogs.stats().wiredTiger.creationString.split(',')\n> wt_options.filter((wt_options) => wt_options.startsWith('block_compressor'))\n[ \"block_compressor=zstd\" ]\nCreateCollectionStorageEngineCreateCollectionOptions", "text": "Welcome to the community @Asmaa_Hamdi!This option is for network compression, so isn’t directly related to the compression used for collections on disk. Network compression and on-disk compression can use different compressors.Can you confirm if the collection you are referencing was created before changing the blockCompressor option in your MongoDB config file? The blockCompressor option sets the default compressor for newly created collections but does not affect existing collections.If you want to compare the effect of different compressors for your data, you can also pass block_compressor as a storage engine option when creating a collection.Here’s a quick example in the mongo shell:The .NET driver has an analogous CreateCollection command which supports setting the block compressor via the StorageEngine property in the CreateCollectionOptions class.Regards,\nStennie", "username": "Stennie_X" }, { "code": " IMongoDatabase testdb = client.GetDatabase(\"testdb\");\n\n testdb.CreateCollection(\"Hamsters\", new CreateCollectionOptions()\n {\n StorageEngine = new BsonDocument {\n { \"wiredTiger\", new BsonDocument {\n { \"configString\" , \"block_compressor=zstd\" }\n } }\n }\n });```\n\nThank you", "text": "The collection was created after I edited the config but I was setting the Network compression instead of block compression as you explained in your reply.I used CreateCollectionOptions class as suggested and it worked! 
\nThis is the C# code in case anyone needs it later on:", "username": "Asmaa_Hamdi" }, { "code": "blockCompressormongod", "text": "The collection was created after I edited the config but I was setting the Network compression instead of block compression as you explained in your reply.I used CreateCollectionOptions class as suggested and it worked!Hi @Asmaa_Hamdi,Great to hear you have a solution.FYI, the CreateCollectionOptions approach should only be needed if you explicitly want to set a compressor.If you change the server’s blockCompressor default, all new collections created should use the current default compressor without having to specify any additional collection options. Make sure you restart the mongod service after changing any server configuration defaults to have changes take effect.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I set the block_compressor setting to zstd in the mongod.cfg file and restarted the server but it still defaults to snappy compression unless I explicitly specify zstd compression via the CreateCollection Options class.Regards,\nAsmaa", "username": "Asmaa_Hamdi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ZStandard Compression not working in MongoDB v4.2.7
2020-06-09T10:25:41.152Z
ZStandard Compression not working in MongoDB v4.2.7
4,778
null
[ "python" ]
[ { "code": "", "text": "Dear, someone has been able to connect django with mongodb atlas, they have different information everywhere but it is not clearly explained.", "username": "Luis_Humberto_Llatas" }, { "code": "", "text": "Welcome to the community @Luis_Humberto_Llatas!Unfortunately core Django currently doesn’t have first-class support for non-SQL databases. A few projects have attempted to add core support but I’m not aware of any currently active forks.However, some community members have reported success using the community djongo driver which translates SQL to equivalent MongoDB queries.Stack Overflow has some examples of Atlas configuration for Djongo.Regards,\nStennie", "username": "Stennie_X" } ]
Connect django with mongodb
2020-05-27T22:14:23.565Z
Connect django with mongodb
1,605