image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null |
[
"database-tools",
"backup"
] |
[
{
"code": "use adminkubectl -n <namespace> exec -it <pod_name> -- sh\nuse admin\ndb.createUser( {user: \"backupuser\", pwd: \"abc123\", roles: [\"root\", \"userAdminAnyDatabase\", \"dbAdminAnyDatabase\", \"readWriteAnyDatabase\",\"backup\"], mechanisms:[\"SCRAM-SHA-256\"]})\n `db.getUsers({ filter: { mechanisms: \"SCRAM-SHA-256\" } })`\nmongodump -u backupuser -p abc123 --authenticationDatabase admin -d TESTDB --out /var/backups/dump-25-05-22 --gzip\nls -la\ntotal 8\ndrwxr-xr-x 2 root root 4096 Feb 18 2021 .\ndrwxr-xr-x 1 root root 4096 Feb 18 2021 ..\n",
"text": "I have created a database and accompanying user for the database but It appears I cant do backups with that user and neither can I add the backup role to the user. Having checked documentation I added a user but this time at the admin database level ( use admin ) and added backup role for the same.However when I attempt to do a backup I am getting an error Failed: error dumping metadata: error creating directory for metadata file /var/backups/…: mkdir /var/backups/…: permission deniedSteps\n1.(mongo is running in kubernetes)(switch to admin user)(Verify if user exists)Is it possible to even amend permissions for this user in such a case or I should be looking somewhere else. In the container it seems I cant do any permission updates (for the group) but the user already has all permissions on /var/backups :I am not convinced either that I should be going even this far. The backup should execute out of the box as per the user I added.What exactly am I missing ?",
"username": "Edmore_Tshuma"
},
{
"code": "ls -la\ntotal 8\ndrwxr-xr-x 2 root root 4096 Feb 18 2021 .\ndrwxr-xr-x 1 root root 4096 Feb 18 2021 ..\n",
"text": "the user already has all permissions on /var/backups :Only OS user root has write permission. The OS user that runs the command mongodump … needs to have write access to /var/backups/.",
"username": "steevej"
},
{
"code": "",
"text": "hello @steevej I am not quite sure how to grant those permissions and from which context. Do I grant for users in the container host or the kubernetes host machine?\nBy OS user you mean a user profile on the kubernetes host?",
"username": "Edmore_Tshuma"
},
{
"code": "",
"text": "I do not use k18s in conjunction with MongoDB, so I cannot answer more than:The user executing the command mongodump needs to have write access to the directory written to.",
"username": "steevej"
}
] |
Mongodump error creating directory for metadata file : Access denied
|
2022-05-25T11:06:53.144Z
|
Mongodump error creating directory for metadata file : Access denied
| 5,544 |
null |
[
"node-js",
"crud"
] |
[
{
"code": "// collName.js\nconst client = new MongoClient(uri);\nconst db = client.db(\"dbName\");\nexport default const collection = db.collection(\"collName\");\n//file1.js\nimport collection form \"collName.js\"\n\ncont data = [ ]\ncollection.insertMany(data)\ncollection.modifiedInsertMany(data)\nclass myCollection {\n col = db.collection(\"collName\");\n \n function modifiedInsertMany(data) {\n // do stuff with data\n return col.insertMany(data);\n }\n}\n",
"text": "As the title says, I want to add extra methods to the mongodb drivers.\nCurrently i’m using a collection instance to to insert, delete documents in my database like this:But I would like to add some methods to the collection instance, to do something like this:I guess, I could create my own class:But if i did that I would have to rewrite every single method of the original mongodb.Collection, isn’t that correct?\nIf possible, I want to keep all the methods of the mongodb.Collection and just add a couple of my methods.Is it possible to do that?",
"username": "Stergios_Nanos"
},
{
"code": "class myColl extends Collection{\nconstructor(name1, arg2,arg3){\n super(name); //calls \n //uses methods like find below \n this.myFind = function (arg2) { return this.find(...) }\n }\n}\ncollection.mymethod= function (){ }function myFind(coll, ...args){\n collection.find({a:args[1]})\n//or whatever\n}\n",
"text": "Not a dev but if Collection were a Class you can extend a class, adding new methods. For example:But from you code it is just an object not a class, but you can still add collection.mymethod= function (){ }.Doesn’t writing a wrapper fits your needs though?",
"username": "Mah_Neh"
},
{
"code": "",
"text": "I thought doing that but how do I call super() in my class constructor?\nThis is written in the mongodb docs:The Collection class is an internal class that embodies a MongoDB collection allowing for insert/update/remove/find and other command operation on that MongoDB collection.\nCOLLECTION Cannot directly be instantiated. link",
"username": "Stergios_Nanos"
},
{
"code": "// collName.js\nconst client = new MongoClient(uri);\nconst db = client.db(\"dbName\");\nconsole.log(db.collection(\"collName\"))\n> Array.prototype.myFn = function() { console.log(this) }\n[Function (anonymous)]\n> a = new Array(1,2,3)\n[ 1, 2, 3 ]\n> a.myFn()\n[ 1, 2, 3 ]\nCollection.prototype",
"text": "When you doIndeed it appears to be a collection instance probably called under the hood.But I do not think you can modify a class definition unless you add it to MongoDB code itself, and extending won’t be useful here. You can, I think, modify the prototype of the class:Just add the functions to the Collection.prototypeBut I would just write wrapper functions as I said before.",
"username": "Mah_Neh"
}
] |
Is there a way to wrap the mongodb drivers to add extra functionality?
|
2022-05-30T17:21:23.569Z
|
Is there a way to wrap the mongodb drivers to add extra functionality?
| 2,341 |
null |
[
"java",
"crud"
] |
[
{
"code": "{\n \"_id\": {\n \"$binary\": {\n \"base64\": \"h7MRZycaOk6vLUlnqPx/HA==\",\n \"subType\": \"03\"\n }\n },\n \"ParentId\": null,\n \"Version\": xx,\n \"Name\": \"aaa\",\n \"HashedPassword\": \"dfdfd\",\n \"IsActive\": false,\n \"SapId\": \"\",\n \"InvoiceSapId\": null,\n \"LegalName\": \"sdsds\",\n \"BillingAddress\": {\n \"_id\": {\n \"$binary\": {\n \"base64\": \"LRSwK05HfEWDAr8Y1q1GcQ==\",\n \"subType\": \"03\"\n }\n },\n \"Line1\": \"dfdfd\",\n \"DistrictName\": null,\n \"CityCode\": dfd,\n \"TownCode\": dfdf,\n \"DistrictCode\": dfd,\n \"Label\": null\n },\n \"ShippingAddresses\": [\n {\n \"_id\": {\n \"$binary\": {\n \"base64\": \"eMK1k+wG6k2XTafzEBYQng==\",\n \"subType\": \"03\"\n }\n },\n \"Line1\": \"fdfdfds\",\n \"DistrictName\": null,\n \"CityCode\": sdfds,\n \"TownCode\": sdffs,\n \"DistrictCode\":dfdf,\n \"Label\": \"dfdsfd\"\n }\n ]\n}\n",
"text": "Hi.I am writing to connector from on prem MongoDB to Bigquery. I have a very difficult point. I convert one binary using decodeBinarytouuid method in java but data have lots of Binary data and also this connector is not specific to one team. So that I need a help for this situation.Also I use Uuid Representation in MongoClientSettings but it isn’t work.This example like thisThis is my example payload and I want to take all binaries to string without I give. So can you help me please?Best Regards,\nEmin Can",
"username": "Emin_Can_OGUZ"
},
{
"code": "mongosh_id",
"text": "Hi @Emin_Can_OGUZ and welcome in the MongoDB Community !What’s in your MongoDB collection initially? Can you share one document maybe from mongosh? Are these _id fields really binary data or it’s actually ObjectIds?I have a Java quick start repo here: GitHub - mongodb-developer/java-quick-start: This repository contains code samples for the Java Quick Start blog post seriesMaybe this can help you getting up and running.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "{\n\t\"_id\" : BinData(3,\"AGUfrNBHhEK90SoDxgCIDA==\")\n ...\n\t\"BillingAddress\" : {\n\t\t\"_id\" : BinData(3,\"EuMKzcDDBkiDrZxsgwJgyw==\")\n ...\n\t},\n\t\"ShippingAddresses\" : [\n\t\t{\n\t\t\t\"_id\" : BinData(3,\"HwVvtOoXEEGc5ZF8wNe2cQ==\")\n ...\n\t\t}\n\t],\n\t\"FinancialInformation\" : {\n\t\t\"TaxOfficeId\" : BinData(3,\"AAAAAAAAAAAAAAAAAAAAAA==\")\n ...\n\t},\n}\n",
"text": "Hi @MaBeuLux88Thanks for warm welcoming. Sorry for late response. This data see like this in mongosh. (I share only _id’s in data)I try also CodecRegistry but it isn’t works I want to try Convert all Binary Data to String.Can you help me again?Sincerely,\nEmin Can",
"username": "Emin_Can_OGUZ"
},
{
"code": "_id",
"text": "Well that’s very unusual.\nIt’s the first time that I actually see a BinData used as an _id. Usually we have ObjectIds which can be represented by an hexadecimal number.\nWhat makes you think that this binary data can be represented as a string?\nI have no idea how you could solve this problem really.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Because of I have a connector from on-prem mongo to BQ and I write this code using Java for my company. This connector is not supporting Binary Data. So that I am very trouble.Thanks for responding for this situationBest regards,\nEmin Can",
"username": "Emin_Can_OGUZ"
},
{
"code": "",
"text": "@Jeffrey_YeminHi Jeffrey,We are trouble for this situation and we are trying several ways to changing Binary data to String (Uuid is enough for us but this type must be string). Can you help us?Sorry about my mention but we are so trouble for this problem.Best regards,\nEmin Can",
"username": "Emin_Can_OGUZ"
},
{
"code": "MongoCollection<BsonDocument> collection = database.getCollection(\"<name>\", BsonDocument.class); \n",
"text": "For a generic tool like this, consider using a MongoCollection with a generic type of BsonDocument, e.g.Then all documents that you query for will be of type BsonDocument, and all binary values will be of type BsonBinary. From there you can get the raw byte array and convert it to string using any encoding you want, e.g. java.util.Base64. If you need to also encode the binary subtype, you’ll have to figure out a way to include the subtype as well.UUIDs can be tricky though, so be careful with how you treat those bytes once you store them. See specifications/uuid.rst at master · mongodb/specifications · GitHub for the gory details. If you are able to start with fresh data, use STANDARD UuidRepresentation for all UUIDs. If you can’t do that, but you can assume JAVA_LEGACY, you might want to convert those to the standard representation before then converting to a String. I can help you with that as well if you need it.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "writer.writeString(UuidHelper.decodeBinaryToUuid(value.getData(),value.getType(),UuidRepresentation.JAVA_LEGACY).toString());\ncursor.next().toJson(JsonWriterSettings.builder().binaryConverter(new JsonBinaryConverter()).dateTimeConverter(new JsonDateTimeConverter()).build())\n",
"text": "Hi Jeffrey,We solve problem in JSONObject. We can’t change default Mongo data. We write new function using BinaryConverter for JsonWriterSettings. This code this like that.And also we use this data like that.Thanks for helping Jeff. I am very honour to talk with you.Best Regards,\nEmin Can",
"username": "Emin_Can_OGUZ"
},
{
"code": "",
"text": "Hi @Jeffrey_Yemin again,We solved this problem. Thanks for responding. So we are so discussing about adding this feature to driver with PR. We want to PR but we don’t know the PR rules in MongoDB. So that can you advice to PR the this feature or this problem is solved in forum. Which one do you prefer?Best Regards,\nEmin Can",
"username": "Emin_Can_OGUZ"
},
{
"code": "",
"text": "Have a look at mongo-java-driver/CONTRIBUTING.md at master · mongodb/mongo-java-driver · GitHub.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": " \"SourceEndpoint\": \"file:///routes/appDataRoot/Source/\",\n\n \"Payload\": {\n\n \"$binary\": \"ew0KICAgICJwb3N0SWQiOiAxLA0KICAgICJpZCI6IDEsDQogICAgIm5hbWUiOiAiaWQgbGFib3JlIGV4IGV0IHF1YW0\",\n\n \"$type\": \"00\"\n\n },\n\n \"_id\": {\n\n \"$oid\": \"628659c1d03f0e204cdddd79\"\n\n \n\n }\n",
"text": "Hi @Emin_Can_OGUZ, I also got stuck in the same situation.I also have the same moto to decoding binary columns in my JSON responsehere is my codeObject myData = new Object();\nFindIterable findQuery = collection.find(Filters.and(Filters.eq(key, value)));\nMongoCursor cursor = findQuery.iterator();\ntry {\nwhile (cursor.hasNext()) {\nresponse =cursor.next().toJson();\nJSONParser parser = new JSONParser();\nmyData=parser.parse(response);\nexpData.add(myData);\n}\n} finally\n{\ncursor.close();\n}my JSON response:\n{]I need to decode the payload binary field in the above JSON response\nI tried by the following code with json parameters.cursor.next().toJson(JsonWriterSettings.builder().binaryConverter(new JsonBinaryConverter()).dateTimeConverter(new JsonDateTimeConverter()).build());But I am getting issue Binary converter cannot be resolved a type\nScreenshot (142)1573×715 58 KB\nCould you please provide any suggestions on this?",
"username": "Ravikishore_Bodha"
},
{
"code": "import org.bson.BsonBinary;\nimport org.bson.UuidRepresentation;\nimport org.bson.internal.UuidHelper;\nimport org.bson.json.Converter;\nimport org.bson.json.StrictJsonWriter;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\npublic class JsonBinaryConverter implements Converter<BsonBinary> {\n private static final Logger LOGGER = LoggerFactory.getLogger(JsonDateTimeConverter.class);\n\n\n @Override\n public void convert(BsonBinary value, StrictJsonWriter writer) {\n try {\n writer.writeString(UuidHelper.decodeBinaryToUuid(value.getData(),value.getType(), UuidRepresentation.JAVA_LEGACY).toString());\n } catch (Exception e) {\n LOGGER.info(String.format(\"Fail to convert offset %d to JSON date\",value),e);\n }\n }\n}\nimport org.bson.json.Converter;\nimport org.bson.json.StrictJsonWriter;\n\nimport java.time.Instant;\nimport java.time.ZoneId;\nimport java.time.format.DateTimeFormatter;\nimport java.util.Date;\n\n\npublic class JsonDateTimeConverter implements Converter<Long> {\n\n\n static final DateTimeFormatter DATE_TIME_FORMATTER = DateTimeFormatter.ISO_INSTANT\n .withZone(ZoneId.of(\"UTC\"));\n\n @Override\n public void convert(Long value, StrictJsonWriter writer) {\n try {\n Instant instant = new Date(value).toInstant();\n String s = DATE_TIME_FORMATTER.format(instant);\n writer.writeString(s);\n } catch (Exception e) {\n System.out.println(e.getMessage());\n }\n }\n}\n",
"text": "Hi,You can write a BinaryConverter class service in Java. Because of that JsonBinaryConverter is not here in Mongo. So that you can write a class after than you called the class new statement.I want PR but I haven’t got any free time so that I share my code.Also if you have a time value you must convert this like JsonDateTimeConverter()You will call this service in your main statement. If you have a problem I will help youBest Regards,\nEmin Can",
"username": "Emin_Can_OGUZ"
},
{
"code": ".dateTimeConverter(new JsonDateTimeConverter())",
"text": ".dateTimeConverter(new JsonDateTimeConverter())@Emin_Can_OGUZI have implemented of your code snippet Json Binary Converter and JsonDateTimeConverter.\nI am getting the errororg.bson.BsonInvalidOperationException: Invalid state VALUE\nimage1600×351 18.1 KB\n",
"username": "Ravikishore_Bodha"
},
{
"code": "",
"text": "Can you change mongo java driver to mongo driver sync? I used 4.2.0 version",
"username": "Emin_Can_OGUZ"
},
{
"code": "",
"text": "Changed to mongo driver sync but again getting the same issue invalid state VALUE.",
"username": "Ravikishore_Bodha"
},
{
"code": "",
"text": "Can you add your code to Github if this code is not privacy? I don’t know what is going to this code. Maybe try the debug code.",
"username": "Emin_Can_OGUZ"
},
{
"code": "",
"text": "Or can you change binary type? Because of that type is 00 for your payload.",
"username": "Emin_Can_OGUZ"
},
{
"code": "",
"text": "Hi @Emin_Can_OGUZAfter a lots of debugging my code.\nI am getting this issueorg.bson.BsonSerializationException: Expected length to be 16, not 1523.my payload size not more than 1kb. which is inserted as binary format in mongodb.",
"username": "Ravikishore_Bodha"
},
{
"code": "",
"text": "Hi @Ravikishore_BodhaWhat is your Java version? If your Java version is under 16, probably you get this error. My java version is Java 16.Best regards,\nEmin Can",
"username": "Emin_Can_OGUZ"
},
{
"code": " System.out.println(\"SUBTYPE\"+value.getType());\n \n int sg= value.getData().length;\n System.out.println(\"LENGHTH\"+sg);\n byte[] d = value.getData();\n String s = new String(d); \n System.out.println(\"DATA-->\"+s);\n try {\n writer.writeString(s);\n System.out.println(writer);\n \n } catch (Exception e) {\n \t System.out.println(\"EXCEPTION\"+e);\n \t \n // LOGGER.info(String.format(\"Fail to convert offset %s to JSON date\",value),e);\n }\n",
"text": "I think I fixed the issue by converting BSON byte[] to string and returned to JSON Writer.here is the code snippet@Override\npublic void convert(BsonBinary value, StrictJsonWriter writer) {JSON Response with decoded BINARY:“Payload”: “{\\r\\n “postId”: 1,\\r\\n “id”: 1,\\r\\n “name”: “id labore ex et quam laborum”,\\r\\n “email”: “[email protected]”,\\r\\n “body”: “laudantium enim quasi est quidem magnam voluptate ipsam eos\\r\\ntempora quo necessitatibus\\r\\ndolor quam autem quasi\\r\\nreiciendis et nam sapiente accusantium”\\r\\n\"path” :\"\\game\\forum\\files\\index.php$!@#%^&*()\"\\r\\n }\",@Emin_Can_OGUZ Could you please help how to remove \\r\\n \\ special characters in JSON response.In my IDE console printing my payload response with no characters like \\r\\n \nas shown below.\n",
"username": "Ravikishore_Bodha"
}
] |
How to change all Binary types to String in MongoDB using Java?
|
2022-04-12T06:47:18.987Z
|
How to change all Binary types to String in MongoDB using Java?
| 16,818 |
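
A minimal Java sketch of the approach described in the thread above: read documents as BsonDocument and render every BsonBinary value as a string (UUID text for the UUID subtypes, Base64 otherwise). The connection string, database and collection names are placeholders, JAVA_LEGACY for subtype-3 values is an assumption taken from the thread, and the walk only covers top-level fields plus nested documents and arrays.

```java
import java.util.Base64;

import org.bson.BsonBinary;
import org.bson.BsonDocument;
import org.bson.BsonString;
import org.bson.BsonValue;
import org.bson.UuidRepresentation;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;

public class BinaryToStringSketch {

    public static void main(String[] args) {
        // Placeholder connection string and namespace.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<BsonDocument> coll =
                    client.getDatabase("test").getCollection("customers", BsonDocument.class);

            for (BsonDocument doc : coll.find()) {
                convertBinaries(doc);
                System.out.println(doc.toJson());
            }
        }
    }

    // Recursively replace every BsonBinary value with a BsonString.
    private static void convertBinaries(BsonDocument doc) {
        for (String key : doc.keySet()) {
            BsonValue value = doc.get(key);
            if (value.isBinary()) {
                doc.put(key, new BsonString(toString(value.asBinary())));
            } else if (value.isDocument()) {
                convertBinaries(value.asDocument());
            } else if (value.isArray()) {
                value.asArray().forEach(v -> {
                    if (v.isDocument()) {
                        convertBinaries(v.asDocument());
                    }
                });
            }
        }
    }

    private static String toString(BsonBinary binary) {
        // Subtype 3 is a legacy UUID, subtype 4 a standard UUID; anything else is Base64-encoded.
        if (binary.getType() == 3) {
            return binary.asUuid(UuidRepresentation.JAVA_LEGACY).toString();
        } else if (binary.getType() == 4) {
            return binary.asUuid().toString();
        }
        return Base64.getEncoder().encodeToString(binary.getData());
    }
}
```

Because the documents are read as BsonDocument, no POJO mapping or custom codec is required, which matches the "generic tool" constraint discussed in the thread.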
null |
[
"aggregation"
] |
[
{
"code": "db.getCollection('viewscounts').aggregate(\n[\n {\n $match: {\n MODULE_ID: 4, \n }\n },\n {\n $group: {\n _id: '$ITEM_ID',\n }\n }\n], { allowDiskUse: true })\n",
"text": "I have a collection with over 10 Million records, I need to match with a particular field and get the distinct _ids of the records set.after the $match pipeline the result set becomes less than 5 Million. if i group with id to get the unique ids, the execution time on my local environment is over 20 seconds.I’m okay with limiting the _ids, but they should be unique.Can anyone suggest a better way to get the results faster?",
"username": "Praveen_Ramkumar"
},
{
"code": "",
"text": "@Praveen_Ramkumar, can you provide example of the document from ‘viewcounts’ collection?",
"username": "slava"
},
{
"code": "{\n \"_id\" : ObjectId(\"5ec9b916899b6c0013d84826\"),\n \"MODULE_ID\" : 4,\n \"USER_ID\" : ObjectId(\"5c5b2a0a12df970a22fc0467\"),\n \"ITEM_ID\" : ObjectId(\"5d622b9492463900134b5b82\"),\n \"COMPANY_ID\" : ObjectId(\"5d1260206c8ca32c10fac59a\"),\n \"IP_ADDRESS\" : \"::ffff:10.255.0.2\",\n \"TYPE\" : 3,\n \"LOCATION\" : 1,\n \"VIEWED_DATE\" : ISODate(\"2020-05-24T00:00:22.702Z\"),\n \"ELEMENT_ID\" : 0,\n \"UTM_VARIABLES\" : null,\n \"__v\" : 0\n}\n",
"text": "Hi @slava, Thanks for your interest on helping me.Please find below an example:",
"username": "Praveen_Ramkumar"
},
{
"code": "db.getCollection('viewscounts').aggregate([\n {\n $match: {\n MODULE_ID: 4,\n }\n },\n {\n $group: {\n _id: null,\n UNIQUE_IDS: {\n $addToSet: '$ITEM_ID',\n }\n }\n }\n], { allowDiskUse: true })\ndb.getCollection('viewscounts').distinct('ITEM_ID', { MODULE_ID: { $ne: 4 } })\ndb.getCollection('viewscounts').createIndex({ MODULE_ID: 1 })\n",
"text": "You can get unique ids with aggregation pipeline, like this:You can also achieve the same with distinct collection command:Don’t forget to add index on ‘MODULE_ID’ field, so both above approaches would work faster ",
"username": "slava"
},
{
"code": "",
"text": "I’ve already tried the above approaches and they take around 14 seconds in local environment, the execution time will be doubled when I run the query on my hosted production db. only having $match or $group is executed within less than 1ms. combining both the pipelines increases the execution time. what could be the reason? My expection was as the first match already reduce the dataset the group should work even faster.",
"username": "Praveen_Ramkumar"
},
{
"code": "",
"text": "i have same problem with $match and $group.\ni think it is slow because Aggregation Pipeline work like a process Pipe.",
"username": "Tai_Huynh1"
}
] |
Query Optimization for collection over 10 Million records
|
2020-06-24T04:25:04.753Z
|
Query Optimization for collection over 10 Million records
| 12,952 |
null |
[
"queries"
] |
[
{
"code": "[{\n\"_id\":\"abhj\",\n\"id\":\"abhj\",\n\"Main_array\":[\n {\n \"number\":12345,\n \"pincode\":247800,\n \"address\": [\n \"vasant\"\n \"vihar\"\n \"kota\"\n ]\n }\n ],\n}]\nMongoDatabase database = mongoClient.getDatabase(\"Friends\");\n MongoCollection<Document> collection = database.getCollection(\"Friend\");\n\n BasicDBObject filter = new BasicDBObject(\"I_StationOverlayId_primary.0.space\", new BasicDBObject(\"$exists\", \"true\"));\n collection.find(filter).forEach((Consumer<Document>) doc -> { \n Object obj = doc.get(\"I_StationOverlayId_primary.0.space\")\n}\n",
"text": "I’m new to springboot and mongodb as well. I have the following json document in mongodbNote: Name of database is Friends and name of collection is Friend. It has many document around 118k. One sample of document is given below. –Now as you can see there is Main_array inside which there is an Object inside which we have address which is an array.Now, I want to fetch the size of this address array.I tried this but it didn’t worked.Note: I have to use MongoClient.But I got null value in obj. Can someone please suggest me",
"username": "Vartika_Malguri"
},
{
"code": "db.example.aggregate([\n {$unwind:'$Main_array'},\n {$addFields:{'Main_array.address_size':{$size:'$Main_array.address'}}}\n])\n\n{ \"_id\" : \"abhj\", \"id\" : \"abhj\", \"Main_array\" : { \"number\" : 12345, \"pincode\" : 247800, \"address\" : [ \"vasant\", \"vihar\", \"kota\" ], \"address_size\" : 3 } }\n/*\n * Requires the MongoDB Java Driver.\n * https://mongodb.github.io/mongo-java-driver\n */\n\nMongoClient mongoClient = new MongoClient(\n new MongoClientURI(\n \"mongodb://localhost:27018/\"\n )\n);\nMongoDatabase database = mongoClient.getDatabase(\"test\");\nMongoCollection<Document> collection = database.getCollection(\"example\");\n\nFindIterable<Document> result = collection.aggregate(Arrays.asList(new Document(\"$unwind\", \n new Document(\"path\", \"$Main_array\")), \n new Document(\"$addFields\", \n new Document(\"Main_array.address_size\", \n new Document(\"$size\", \"$Main_array.address\")))));\nMain_array",
"text": "Hi @Vartika_Malguri\nWelcome to the community forum!!I tried to reproduce the issue on my local setup with MongoDB version 5.0.7\nUsing the data you posted, the following query proved to be useful in finding the size of the address array.MongoDB Compass provides the feature to export aggregation pipeline to specific language. Please refer to the documentation export-pipeline-to-languageBelow is the Java code for the above aggregation query.Please note that if the Main_array contains more elements in the array, the above aggregation would need a projection stage in the pipeline to suit your specific need.Let us know if you have any further questionsThanks\nAasawari",
"username": "Aasawari"
}
] |
How can I fetch the size of array in mongodb using springboot?
|
2022-05-19T09:12:37.388Z
|
How can I fetch the size of array in mongodb using springboot?
| 3,738 |
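
A hedged alternative to the exported pipeline quoted above, written with the Java driver's aggregation builders instead of raw Documents. Database, collection and field names follow the question; the URI is a placeholder.

```java
import java.util.Arrays;

import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Projections;

public class AddressSizeSketch {

    public static void main(String[] args) {
        // Placeholder URI; database/collection names taken from the question.
        try (MongoClient mongoClient = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> collection =
                    mongoClient.getDatabase("Friends").getCollection("Friend");

            collection.aggregate(Arrays.asList(
                    // One output document per element of Main_array.
                    Aggregates.unwind("$Main_array"),
                    // Keep _id and compute the size of the nested address array.
                    Aggregates.project(Projections.fields(
                            Projections.include("_id"),
                            Projections.computed("address_size",
                                    new Document("$size", "$Main_array.address"))))
            )).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```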
null |
[
"java",
"production"
] |
[
{
"code": "",
"text": "The 4.6.0 version of MongoDB Java & JVM Drivers has been released.The Java driver reference documentation site contains detailed documentation of the 4.6 driver.See the full list of bug fixes on this JIRA boardSee the full list of improvements on this JIRA boardSee the full list of new features on this JIRA board",
"username": "Christopher_Cho"
},
{
"code": "",
"text": "Is this available in Maven Repo. I don’t see the Maven artifacts for this Java Driver v4.6.0 in fact for any v4.x.x.\nCan I get the Repo details from where can I download the artifacts…",
"username": "Sabareesh_Babu"
},
{
"code": "",
"text": "That is because drivers 4.x.x are now in a different project of mongodb on maven. You need to find libraries separately. For instance for bson library you need: https://mvnrepository.com/artifact/org.mongodb/bson",
"username": "Alperen_Pulur"
},
{
"code": "",
"text": "Thanks for the quick response.As per GitHub - mongodb/mongo-java-driver-reactivestreams: The Java Reactive Stream driver for MongoDB, The MongoDB Reactive Streams Java Driver is now officially end-of-life and this codebase has moved into the [MongoDB Java Driver]. That’s the reason I was looking into JavaDriver which has reactivestreams API’s for V4.x.x.So, If I want to make use of reactivestreams driver API’s, the following coordinates would be the right one’s?Correct me if I’m wrong.",
"username": "Sabareesh_Babu"
},
{
"code": "",
"text": "Yes, you can find all java related libraries for mongodb under following parent repository:The Official Java driver for MongoDB . Contribute to mongodb/mongo-java-driver development by creating an account on GitHub.",
"username": "Alperen_Pulur"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] |
MongoDB Java Driver 4.6.0 Released
|
2022-04-26T21:08:55.221Z
|
MongoDB Java Driver 4.6.0 Released
| 4,871 |
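
For anyone checking that the coordinates discussed above resolve correctly, here is a minimal, hedged sketch using the reactive streams API from mongodb-driver-reactivestreams 4.x. The connection string and namespace are placeholders; the CountDownLatch only keeps the example alive until the publisher completes.

```java
import java.util.concurrent.CountDownLatch;

import org.bson.Document;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoClients;
import com.mongodb.reactivestreams.client.MongoCollection;

public class ReactiveCountSketch {

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        // Placeholder connection string and namespace.
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        try {
            MongoCollection<Document> collection =
                    client.getDatabase("test").getCollection("example");

            // Nothing happens until the publisher is subscribed to.
            collection.estimatedDocumentCount().subscribe(new Subscriber<Long>() {
                @Override
                public void onSubscribe(Subscription s) {
                    s.request(1); // ask for the single result
                }

                @Override
                public void onNext(Long count) {
                    System.out.println("estimated count: " + count);
                }

                @Override
                public void onError(Throwable t) {
                    t.printStackTrace();
                    done.countDown();
                }

                @Override
                public void onComplete() {
                    done.countDown();
                }
            });

            done.await(); // keep the client open until the stream completes
        } finally {
            client.close();
        }
    }
}
```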
null |
[
"queries"
] |
[
{
"code": "db.collection.find().sort({$natural: -1 }).limit(5)\n",
"text": "query –I was basically finding the most recent documents in the collection. But how should I write this query in springboot?",
"username": "Vartika_Malguri"
},
{
"code": "/*\n * Requires the MongoDB Java Driver.\n * https://mongodb.github.io/mongo-java-driver\n */\n\nBson filter = new Document();\nBson sort = new Document(\"$natural\", -1L);\n\nMongoClient mongoClient = new MongoClient(\n new MongoClientURI(\n \"mongodb+srv://m001-student:[email protected]/test?authSource=admin&replicaSet=atlas-gbccyn-shard-0&readPreference=primary&ssl=true\"\n )\n);\nMongoDatabase database = mongoClient.getDatabase(\"sample_analytics\");\nMongoCollection<Document> collection = database.getCollection(\"accounts\");\nFindIterable<Document> result = collection.find(filter)\n .sort(sort)\n .limit((int)5L);\n CriteriaDefinition filter = (CriteriaDefinition) new Document();\n List sort = (List) new Document(\"$natural\", -1L);\n\n MongoClient mongoClient = new MongoClient(\n new MongoClientURI(\n \"mongodb+srv://m001-student:[email protected]/test?authSource=admin&replicaSet=atlas-gbccyn-shard-0&readPreference=primary&ssl=true\"\n )\n);\n Query query = new Query().addCriteria(filter).with(Sort.by(sort));\n ) ), thus it may not sort based on the most recent inserted documents. Realistically, you may be able to drop the natural sort criteria and receive the same ordering of documents. If it is feasible for you to modify the schema design to have a field, the",
"text": "Hi @Vartika_Malguri\nWelcome to the community forum!MongoDB Compass provides the feature to convert the MongoDB query to a specific programming language. Please refer to the following documentationHowever, this would be the Java code:ORbasically finding the most recent documents in the collectionPlease note that $natural is “The order in which the database refers to documents on disk” (see [$natural]( https://www.mongodb.com/docs/manual/reference/glossary/#std-term-natural-order` ) ), thus it may not sort based on the most recent inserted documents. Realistically, you may be able to drop the natural sort criteria and receive the same ordering of documents. If it is feasible for you to modify the schema design to have a createdAtfield, thecreatedAt` field could be used to sort the data as per the requirements.Also, would recommend you to visit the following documentationSpring Boot Integration With MongoDB Tutorial | MongoDB for more understanding on integrating MongoDB with Spring boot application and explains how to access MongoDB data using the typical spring boot methods.https://www.mongodb.com/docs/manual/tutorial/query-documents/ you can refer to the mentioned document to read more on how to work with query by selecting the required language.Getting Started with MongoDB and Java - CRUD Operations Tutorial This blog post tutorial would help you to understand on how to perform CRUD operations with MongoDB using Java.PS: I would also recommend you to go through the following course\nMongoDB Courses and Trainings | MongoDB University to understand and learn more on MongoDB with Java.Let us know if you have any more doubts.Thanks\nAasawari",
"username": "Aasawari"
}
] |
How to write the following mongo query in springboot?
|
2022-05-10T20:01:14.326Z
|
How to write the following mongo query in springboot?
| 5,149 |
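
Building on the suggestion above to sort on a createdAt field rather than $natural, a minimal sketch with the plain Java sync driver; the URI, namespace and field name are assumptions.

```java
import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Sorts;

public class RecentDocumentsSketch {

    public static void main(String[] args) {
        // Placeholder URI and namespace; assumes documents carry a createdAt date field.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> collection =
                    client.getDatabase("test").getCollection("events");

            collection.find()
                    .sort(Sorts.descending("createdAt")) // newest first
                    .limit(5)
                    .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```

An index on createdAt keeps this sort from scanning the whole collection.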
null |
[] |
[
{
"code": "",
"text": "Hello,What is the best practice to track changes in mongoDB? In relational database, we normally have extra columns(createTimestamp, updateTimestamp, userId) in each table, so that we know who/when/what the change is done for each table.Thanks in advance.",
"username": "mike_zenon"
},
{
"code": "",
"text": "Hi @mike_zenon ,The best way to actively track changes is by using change streams:MongoDB triggers, change streams, database triggers, real timeNow indexing and query a last modified field as you used to do in RDBMS is also valid but less convenient.Thanks",
"username": "Pavel_Duchovny"
}
] |
Best practice to track changes
|
2022-05-27T11:57:05.409Z
|
Best practice to track changes
| 4,216 |
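
A minimal sketch of the change stream approach recommended above, using the Java sync driver (the same watch API exists in the other drivers). The URI and namespace are placeholders; it simply prints each change event as it arrives.

```java
import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.changestream.FullDocument;

public class TrackChangesSketch {

    public static void main(String[] args) {
        // Placeholder URI; change streams require a replica set or sharded cluster.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017/?replicaSet=rs0")) {
            MongoCollection<Document> collection =
                    client.getDatabase("test").getCollection("orders");

            // Blocks and prints the operation type, cluster time and document for every change.
            collection.watch()
                    .fullDocument(FullDocument.UPDATE_LOOKUP) // include the updated document
                    .forEach(change -> System.out.println(
                            change.getOperationType() + " at " + change.getClusterTime()
                                    + ": " + change.getFullDocument()));
        }
    }
}
```

Recording who made a change still needs an application-level field (e.g. userId) on the document itself, as in the relational pattern mentioned in the question.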
null |
[
"aggregation"
] |
[
{
"code": "Person7Person7Person4, 5, 6, 8, 9, 10 [\n\n { name: \"Person1\", rating: 50 },\n\n { name: \"Person2\", rating: 55 },\n\n { name: \"Person3\", rating: 60 },\n\n { name: \"Person4\", rating: 65 },\n\n { name: \"Person5\", rating: 70 },\n\n { name: \"Person6\", rating: 75 },\n\n { name: \"Person7\", rating: 800 },\n\n { name: \"Person8\", rating: 850 },\n\n { name: \"Person9\", rating: 900 },\n\n { name: \"Person10\", rating: 100000 },\n\n { name: \"Person11\", rating: 100001 },\n\n { name: \"Person12\", rating: 102000 }\n\n ]\n",
"text": "My goal is to find two random documents. Each document has a rating. The rating difference between these two document doesn’t matter to me. But what matters is the distance is documents between these 2 documents.Take this list for example. Say I randomly select Person7. Now I want a random document say max 3 documents away from Person7. So in this case it should be either Person4, 5, 6, 8, 9, 10.I think I would have to aggregate this list but I’m not sure what type of aggregations to run. Also I’m not sure one aggregation is enough. I would maybe need to two this in multiple steps(in a transaction).",
"username": "anthon_N_A"
},
{
"code": "",
"text": "Hi @anthon_N_A ,You probably can hack it through an aggregation with $rand games :But I would definitely say it will be much easier to calculate your random range of ratings or the needed random list of value the app side . Pass the range or list to a simple query to get those ranked documentsTy",
"username": "Pavel_Duchovny"
}
] |
Finding two random documents within a max range of sorted documents
|
2022-05-30T18:46:46.356Z
|
Finding two random documents within a max range of sorted documents
| 1,276 |
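
A hedged sketch of the app-side idea suggested above: pick one document at a random offset in rating order, then pick a second one from a small window around it. The collection name, field names and window size are assumptions, and it assumes the collection holds at least two documents.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Sorts;

public class RandomNeighboursSketch {

    public static void main(String[] args) {
        int maxDistance = 3; // "at most 3 documents away"

        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> people =
                    client.getDatabase("test").getCollection("people");

            long total = people.countDocuments(); // assumes total >= 2
            int first = ThreadLocalRandom.current().nextInt((int) total);

            // Fetch the window of documents around the first pick, sorted by rating.
            int windowStart = Math.max(0, first - maxDistance);
            List<Document> window = people.find()
                    .sort(Sorts.ascending("rating"))
                    .skip(windowStart)
                    .limit(2 * maxDistance + 1)
                    .into(new ArrayList<>());

            Document firstPick = window.get(first - windowStart);
            Document secondPick;
            do {
                secondPick = window.get(ThreadLocalRandom.current().nextInt(window.size()));
            } while (secondPick == firstPick); // ensure two distinct documents

            System.out.println(firstPick.toJson());
            System.out.println(secondPick.toJson());
        }
    }
}
```

An index on rating keeps the sorted skip/limit reasonably cheap; for very large collections a range query on rating, as Pavel suggests, avoids the skip entirely.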
null |
[
"aggregation"
] |
[
{
"code": "",
"text": "hello\ni have some collections with couple million documents. i have some aggregation pipelines i use for small datasets (50-100k documents). when i try to use them with large datasets it takes a lot of time.\nam i using aggregation framework for something it is not designed to do?\nwhat can i do to improve performance ?\nmy aggregations are mostly using $setField to set some field values and some of those $setField steps uses $function operator to run some small javascript code.",
"username": "Ali_ihsan_Erdem1"
},
{
"code": "",
"text": "Hello @Ali_ihsan_Erdem1,Could you please share below information, to help us understand the aggregation and data model being used?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Best way to update big datasets
|
2022-05-10T15:36:18.108Z
|
Best way to update big datasets
| 1,248 |
[] |
[
{
"code": "",
"text": "\nMongod.log Space utilization_11700×893 118 KB\n",
"username": "Ashish_Wanjare"
},
{
"code": "",
"text": "Check this link",
"username": "Ramachandra_Tummala"
}
] |
I have a space utilization problem on Linux redhat server. I saw Mongod.log file consume more space. So how can I delete or remove old logs
|
2022-05-30T19:11:51.545Z
|
I have a space utilization problem on Linux redhat server. I saw Mongod.log file consume more space. So how can I delete or remove old logs
| 1,313 |
|
null |
[
"java"
] |
[
{
"code": "public static void main(String[] args) throws Throwable {\n MongoCredential credential = MongoCredential.createCredential(\"user\", \"dbName\", \"password\".toCharArray());\n Block<ClusterSettings.Builder> localhost = builder -> builder.hosts(singletonList(new ServerAddress(\"host.mongodb.net\", 27017)));\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyToClusterSettings(localhost)\n .credential(credential)\n .build();\n MongoClient client = MongoClients.create(settings);\n MongoCollection<Document> col = client.getDatabase(\"test\").getCollection(\"col\");\n System.out.println(col.countDocuments());\n\n}\nException in thread \"main\" java.lang.NoClassDefFoundError: com/mongodb/internal/connection/DefaultClusterFactory\n\tat com.mongodb.client.internal.MongoClientImpl.createCluster(MongoClientImpl.java:208)\n\tat com.mongodb.client.internal.MongoClientImpl.<init>(MongoClientImpl.java:63)\n\tat com.mongodb.client.MongoClients.create(MongoClients.java:108)\n\tat com.mongodb.client.MongoClients.create(MongoClients.java:50)\n\tat ServiceApplication.main(ServiceApplication.java:48)\nCaused by: java.lang.ClassNotFoundException: com.mongodb.internal.connection.DefaultClusterFactory\n\tat java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)\n\tat java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)\n\tat java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)\n\t... 5 more\n SSLContext sslContext;\n try {\n sslContext = SSLContext.getInstance(InmarsatConstants.SSL_PROTOCOL);\n\n // set up a TrustManager that trusts everything\n sslContext.init(null, new TrustManager[]{new X509TrustManager() {\n\n @Override\n public void checkClientTrusted(java.security.cert.X509Certificate[] x509Certificates, String s) throws CertificateException {\n }\n\n @Override\n public void checkServerTrusted(java.security.cert.X509Certificate[] x509Certificates, String s) throws CertificateException {\n }\n\n @Override\n public java.security.cert.X509Certificate[] getAcceptedIssuers() {\n return new java.security.cert.X509Certificate[0];\n }\n }}, new SecureRandom());\n } catch (NoSuchAlgorithmException e) {\n throw new NoSuchAlgorithmException(e);\n } catch (KeyManagementException e) {\n throw new KeyManagementException(e);\n }\n MongoCredential credential = MongoCredential.createCredential(mongoDbUserName,mongoDbName,mongoDbPassword.toCharArray());\n MongoClient mongoClient = MongoClients.create(\n MongoClientSettings.builder()\n .applyToClusterSettings(builder ->\n builder.hosts(Arrays.asList(new ServerAddress(\"mongodb://\"+mongoDbCLientHostName, 27017))))\n .credential(credential)\n .build());\n\n return new SimpleMongoClientDatabaseFactory(mongoClient,mongoDbName);\n}\n\n@Bean\npublic MongoTemplate mongoTemplate() throws KeyManagementException, NoSuchAlgorithmException {\n return new MongoTemplate(mongoDatabaseFactory());\n}\npublic @Bean\ncom.mongodb.client.MongoClient mongoClient() throws NoSuchAlgorithmException, KeyManagementException {\n SSLContext sslContext;\n try {\n sslContext = SSLContext.getInstance(InmarsatConstants.SSL_PROTOCOL);\n\n // set up a TrustManager that trusts everything\n sslContext.init(null, new TrustManager[]{new X509TrustManager() {\n\n @Override\n public void checkClientTrusted(java.security.cert.X509Certificate[] x509Certificates, String s) throws CertificateException {\n }\n\n @Override\n public void checkServerTrusted(java.security.cert.X509Certificate[] x509Certificates, 
String s) throws CertificateException {\n\n\n }\n\n @Override\n public java.security.cert.X509Certificate[] getAcceptedIssuers() {\n return new java.security.cert.X509Certificate[0];\n }\n }}, new SecureRandom());\n } catch (NoSuchAlgorithmException e) {\n throw new NoSuchAlgorithmException(e);\n } catch (KeyManagementException e) {\n throw new KeyManagementException(e);\n }\n\n\n CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);\n\n\n MongoCredential credential = MongoCredential.createCredential(mongoDbUserName,mongoDbName,mongoDbPassword.toCharArray());\n MongoClient mongoClient = MongoClients.create(\n MongoClientSettings.builder()\n .applyToClusterSettings(builder ->\n builder.hosts(Arrays.asList(new ServerAddress(\"mongodb://\"+mongoDbCLientHostName, 27017))))\n .credential(credential).codecRegistry(codecRegistry)\n .build());\n return mongoClient;\n\n}\nCaused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.data.mongodb.MongoDatabaseFactory]: Factory method 'mongoDatabaseFactory' threw exception; nested exception is com.mongodb.MongoException: host and port should be specified in host:port format\n\tat org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185)\n\tat org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:651)\n\t... 24 common frames omitted\nCaused by: com.mongodb.MongoException: host and port should be specified in host:port format\n\tat com.mongodb.ServerAddress.<init>(ServerAddress.java:125)\n\tat ServiceApplication.lambda$mongoDatabaseFactory$0(ServiceApplication.java:96)\n\tat com.mongodb.MongoClientSettings$Builder.applyToClusterSettings(MongoClientSettings.java:230)\n\tat ServiceApplication.mongoDatabaseFactory(ServiceApplication.java:95)\n\tat ServiceApplication$$EnhancerBySpringCGLIB$$c3b4fc1b.CGLIB$mongoDatabaseFactory$0(<generated>)\n\tat ServiceApplication$$EnhancerBySpringCGLIB$$c3b4fc1b$$FastClassBySpringCGLIB$$22498f2e.invoke(<generated>)\n\tat org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244)\n\tat org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331)\n<dependency>\n <groupId>org.springframework.data</groupId>\n <artifactId>spring-data-mongodb</artifactId>\n <version>3.2.3</version>\n </dependency>\n<dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-sync</artifactId>\n <version>4.2.3</version>\n </dependency>\n",
"text": "I’m working on a upgrade programe.Facing issue with connection stringerror:Process finished with exit code 1the other piece of code that I tried is :\n@Bean\npublic MongoDatabaseFactory mongoDatabaseFactory() throws NoSuchAlgorithmException, KeyManagementException {error:Dependencies used for both the cases:",
"username": "Priyanka_Dhole"
},
{
"code": "",
"text": "Hi Team,I also referred GitHub - mongodb-developer/java-quick-start: This repository contains code samples for the Java Quick Start blog post series but no luck",
"username": "Priyanka_Dhole"
},
{
"code": "",
"text": "Hi Team,The major issue is with connecting string. Unable to create MongoClient.Mentioning again, need to upgrade Java code compatible to Mongodb 4.4. Hence using dependencyDependencies used for both the cases:",
"username": "Priyanka_Dhole"
},
{
"code": "Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.mongodb.client.MongoClient]: Factory method 'mongoClient' threw exception; nested exception is java.lang.NoClassDefFoundError: com/mongodb/internal/connection/InternalConnectionPoolSettings\n\tat org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.3.18.jar:5.3.18]\n\tat org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) ~[spring-beans-5.3.18.jar:5.3.18]\n\t... 99 common frames omitted\nCaused by: java.lang.NoClassDefFoundError: com/mongodb/internal/connection/InternalConnectionPoolSettings\n\tat com.mongodb.client.internal.MongoClientImpl.createCluster(MongoClientImpl.java:223) ~[mongodb-driver-sync-4.6.0.jar:na]\n\tat com.mongodb.client.internal.MongoClientImpl.<init>(MongoClientImpl.java:70) ~[mongodb-driver-sync-4.6.0.jar:na]\n",
"text": "I’m using mongodb with spring framework and ran into a similar issue. See stack trace below. Using the dependency versions 3.2.3 and 4.2.3 fixed my problem, thank you!",
"username": "Henri_Idrovo"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Caused by: java.lang.ClassNotFoundException: com.mongodb.internal.connection.DefaultClusterFactory
|
2021-07-23T14:02:43.630Z
|
Caused by: java.lang.ClassNotFoundException: com.mongodb.internal.connection.DefaultClusterFactory
| 37,449 |
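
One concrete detail that stands out in the stack traces above: "host and port should be specified in host:port format" comes from passing a string that already contains a scheme ("mongodb://" + host) into ServerAddress, which expects a bare hostname. Below is a hedged sketch of the two usual ways to build the client; host, credentials and database names are placeholders. The NoClassDefFoundError itself was resolved in the thread by aligning the spring-data-mongodb and driver versions (3.2.3 with 4.2.3).

```java
import java.util.Collections;

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class ClientSetupSketch {

    public static void main(String[] args) {
        // Option 1: ServerAddress takes only a bare host name (no "mongodb://" prefix).
        MongoCredential credential =
                MongoCredential.createCredential("user", "admin", "password".toCharArray());
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyToClusterSettings(builder ->
                        builder.hosts(Collections.singletonList(
                                new ServerAddress("db.example.com", 27017))))
                .credential(credential)
                .build();
        MongoClient clientFromSettings = MongoClients.create(settings);

        // Option 2: put everything in a connection string instead.
        MongoClient clientFromUri = MongoClients.create(
                new ConnectionString("mongodb://user:password@db.example.com:27017/?authSource=admin"));

        clientFromSettings.close();
        clientFromUri.close();
    }
}
```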
[
"replication"
] |
[
{
"code": "",
"text": "Hi,\nThere is a small mistake in this documentation: https://www.mongodb.com/docs/manual/tutorial/upgrade-cluster-to-ssl/\nWhen selecting “Configuration File Options” is shows net.tls.PEMKeyFile which is not a valid config option.\nIt should be net.tls.certificateKeyFile.\n\nimage821×431 18 KB\n\nThanks,\nRafael,",
"username": "Rafael_Green"
},
{
"code": "",
"text": "Hi @Rafael_GreenMany thanks for the report! I have raised a JIRA ticket to fix this issue: DOCS-15375Best regards\nKevin",
"username": "kevinadi"
}
] |
A small mistake in Upgrade a Cluster to Use TLS
|
2022-05-26T03:32:38.992Z
|
A small mistake in Upgrade a Cluster to Use TLS
| 1,621 |
|
[
"queries",
"replication"
] |
[
{
"code": "",
"text": "Hello.\nI’m using Atlas’ paid tier Replicaset (Primary-Secondary-Secondary)Every hour I create a new collection and insert about 1.5 to 2 million documents.When I check Atlas’ cluster metrics every time I insert it, the primary is unchanged, and the query targeting of secondary is rapidly increasing.As a result, Interferes with alerts of actual dangerous operations according to collscan and it is very noisy because the alarm of atlas occurs every hourThe alarm is using readPreference=secondary in my application, so it is difficult to disable.I need an opinion on how this can happen.Below is the metric information that I checked.\n스크린샷 2022-05-26 오후 6.27.232392×1920 202 KB\n",
"username": "gyus"
},
{
"code": "",
"text": "Hi @gyus and welcome in the MongoDB Community !This alert is set quite low by default and I often increase the number in my alerts so it doesn’t trigger that often.That being said, why are you writing 1.5+M docs in MongoDB and then instantly reading them all from the cluster? Why not just write them to MongoDB and keep them in memory for processing?Also why are you reading with readPreference=Secondary? Your secondaries are now doing more work than your primary (because they are also doing the write operations (=replication) just like the primary + the reads now. Why not read directly the data using the primary?If you still want the data as soon as you write it. Maybe you could use a Change Streams filtering on insert operations maybe instead of a coll scan?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hello. @MaBeuLux88\nThe reason why I read in secondary is because of load balancing.Apart from this, I’ve done some tests and found the cause.In my case, it’s because of the Atlas Search Index that exists in different databases or collections that exist in the same cluster instanceWhen you create a new cluster and insert 1.5 million data with nothing in it, the secondary QueryTargeting did not rise\nIf there is an Atlas Search Index in a completely different database, collection (not a collection that inserts data), the queryTargeting of the secondary rises sharply when 1.5 million documents are inserted.I think it is unreasonable for this to happen because of the search index that exists in the collection where no data changes have occurred, and I think unnecessary collscan work of unnecessary alerts and secondary is a waste of cluster resources.I wrote a post on MongoDB Jira about this part, and I will attach the link below.https://jira.mongodb.org/plugins/servlet/mobile#issue/SERVER-66824Progressive improvement is required for that part",
"username": "gyus"
},
{
"code": "",
"text": "Replica Set are not meant to be used to scale read operations. Scaling == Sharding.\nIf one of your Secondary goes down now, the other one will have to take 100% of the workload and it will blow up instantly as well (overloaded => Domino effect).Using RS to scale => No more High Availability. RS are here for one thing only: HA.",
"username": "MaBeuLux88"
}
] |
I need to help for Query Targeting: Scanned Objects / Returned has gone above 1000 alert
|
2022-05-26T09:39:32.984Z
|
I need to help for Query Targeting: Scanned Objects / Returned has gone above 1000 alert
| 2,221 |
|
null |
[
"node-js",
"connecting"
] |
[
{
"code": "exports.connect = function (options = { maxPoolSize: 100 }) {\n return async function (cb) {\n await disconnect();\n let URI;\n if (process.env.IS_REPLICA_SET == \"true\") {\n URI = `mongodb://${process.env.MONGO_USER}:${process.env.MONGO_PASSWORD}@`\n let sets = [\n process.env.SET_SECONDARY2 + process.env.MONGO_PORT,\n process.env.SET_SECONDARY1 + process.env.MONGO_PORT,\n process.env.SET_PRIMARY + process.env.MONGO_PORT\n ]\n URI = `${URI}${sets.toString()}/${process.env.MONGO_DEFAULT_DATABASE}?ssl=true&replicaSet=${process.env.REPLICA_NAME}&authSource=admin&retryWrites=true&w=majority`;\n } else\n URI = `mongodb+srv://${process.env.MONGO_USER}:${process.env.MONGO_PASSWORD}@${process.env.MONGO_CLUSTURE}/${process.env.MONGO_DEFAULT_DATABASE}?retryWrites=true&w=majority`;\n try {\n let db = mongoose.connect(URI, options);\n log.warn(\"Connecting to MongoDB...\");\n mongoose.connection.on('error', error => {\n log.error('Connection lost to MongoDB! ' + error.message);\n });\n // mongoose.set('debug', true);\n console.info(\"---------------------------------------\");\n global.log.info(\"Connection mode: \" + (process.env.IS_REPLICA_SET == \"true\" ? \"Replica set\" : \"SRV\"));\n console.info(\"---------------------------------------\");\n if (cb) cb(db); log.info(\"Connection established with MongoDB...\");\n } catch (error) {\n log.error('Could not connect to MongoDB! ' + error.message);;\n }\n }\n}\n\nfunction disconnect() {\n mongoose.disconnect(() => {\n log.warn('Disconnected from MongoDB.');\n });\n};\n\nexports.disconnect = disconnect;\n",
"text": "i am facing issue with maintaining connection and connection pool sizethis above code i am using",
"username": "neeraj_tripathi"
},
{
"code": "",
"text": "First of all welcome!and please describe what is the problem you’re facing with?",
"username": "Shay_I"
},
{
"code": "",
"text": "Currently we have 5000 Users and the are going to login our site and do some action there\nbut our mongodb server connection limit goes high more than 4000 connection uses,we tried with m0 to m40 cluster instance\nso need a proper solution by which we get proper response time and minimum connection cpu usages",
"username": "neeraj_tripathi"
},
{
"code": "",
"text": "from the hardware perspective, I am not sure I can give you advice here (other maybe).\nbut it sounds you need to give more details about your business needs because you have a lot of\ndata modeling solutions in mongo that can give you this efficiency per request which can reduce resources usage.if there is a pattern of requests from your client you can adjust your schema to be more effective,\nif you want to elaborate please feel free.I’ll give an example of what I mean: if your client requests documents with a filter on date property and they are always searching in forms whole days (1,2… 10 days) you can potentially use the Bucket pattern and reduce the number of documents processed by Mongo",
"username": "Shay_I"
},
{
"code": "",
"text": "Currently the issue belong to connection management , connection pooling\nhow much pool size i need to put for 5000 user login and is my above code is correct for making connection and pool size and is the above code having function to close connection when work done",
"username": "neeraj_tripathi"
},
{
"code": "",
"text": "hey, sorry but i can’t post questions here in the community, so i’m using Stackoverflow, maybe someone can help me with this tricky mongodb issue?",
"username": "mart_q"
}
] |
How to manage MongoDB Connection code and connection pool size
|
2022-02-08T15:38:44.996Z
|
How to manage MongoDB Connection code and connection pool size
| 5,328 |
null |
[
"aggregation",
"java"
] |
[
{
"code": "import com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\nimport org.bson.json.JsonWriterSettings;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.function.Consumer;\n\nimport static com.mongodb.client.model.Aggregates.lookup;\nimport static java.util.Arrays.asList;\nimport static java.util.Collections.singletonList;\n\npublic class Lookup {\n\n public static void main(String[] args) {\n String connectionString = \"mongodb://localhost\";\n try (MongoClient mongoClient = MongoClients.create(connectionString)) {\n MongoDatabase db = mongoClient.getDatabase(\"test\");\n MongoCollection<Document> books = db.getCollection(\"books\");\n MongoCollection<Document> authors = db.getCollection(\"authors\");\n books.drop();\n authors.drop();\n books.insertOne(new Document(\"_id\", 1).append(\"title\", \"Super Book\").append(\"authors\", asList(1, 2)));\n authors.insertOne(new Document(\"_id\", 1).append(\"name\", \"Bob\"));\n authors.insertOne(new Document(\"_id\", 2).append(\"name\", \"Alice\"));\n\n Bson pipeline = lookup(\"authors\", \"authors\", \"_id\", \"authors\");\n List<Document> booksJoined = books.aggregate(singletonList(pipeline)).into(new ArrayList<>());\n booksJoined.forEach(printDocuments());\n }\n }\n\n private static Consumer<Document> printDocuments() {\n return doc -> System.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));\n }\n}\n{\n \"_id\": 1,\n \"title\": \"Super Book\",\n \"authors\": [\n {\n \"_id\": 1,\n \"name\": \"Bob\"\n },\n {\n \"_id\": 2,\n \"name\": \"Alice\"\n }\n ]\n}\nauthors",
"text": "Hi,In this topic, I’m answering a question I saw on Twitter:https://twitter.com/nskarthik_k/status/1374614363709919233Here is a short example of a $lookup with Java 4.2.2 (mongodb-driver-sync).Here is the result I get in my console:As you can see, the authors IDs have been replaced by the actual documents from the authors collection.I hope this help. I will be happy to answer your questions here if something isn’t clear in this example.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thx sir\nThis sample Aggregation/lookup saved my time,\nAppreciate u for sample code.",
"username": "KARTHIK_SHIVAKUMAR"
},
{
"code": "",
"text": "can u explain the three paramaters of the lookup function ?",
"username": "Meriem_Br"
},
{
"code": "$lookup{\n $lookup:\n {\n from: <collection to join>,\n localField: <field from the input documents>,\n foreignField: <field from the documents of the \"from\" collection>,\n as: <output array field>\n }\n}\ndb.orders.aggregate( [\n {\n $lookup:\n {\n from: \"warehouses\",\n let: { order_item: \"$item\", order_qty: \"$ordered\" },\n pipeline: [\n { $match:\n { $expr:\n { $and:\n [\n { $eq: [ \"$stock_item\", \"$$order_item\" ] },\n { $gte: [ \"$instock\", \"$$order_qty\" ] }\n ]\n }\n }\n },\n { $project: { stock_item: 0, _id: 0 } }\n ],\n as: \"stockdata\"\n }\n }\n] )\n",
"text": "$lookup is like this:Or there is another alternative with a sub-pipeline like this:See the doc:In my example above, I’m using the first version and my 4 parameters are matching the 4 parameters of the first version.Here is the doc of what I’m using exactly:declaration: package: com.mongodb.client.model, class: AggregatesCheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Aggregation pipeline with $lookup in Java
|
2021-03-25T13:21:31.602Z
|
Aggregation pipeline with $lookup in Java
| 12,561 |
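
For the second, sub-pipeline form of $lookup quoted above, the Java driver has a matching Aggregates.lookup overload that takes let variables and a pipeline. A hedged sketch of the same orders/warehouses example follows; collection and field names come from the quoted shell version, everything else is a placeholder.

```java
import static com.mongodb.client.model.Aggregates.lookup;
import static com.mongodb.client.model.Aggregates.match;
import static com.mongodb.client.model.Aggregates.project;
import static com.mongodb.client.model.Projections.exclude;

import java.util.Arrays;
import java.util.List;

import org.bson.Document;
import org.bson.conversions.Bson;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Variable;

public class LookupPipelineSketch {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders = client.getDatabase("test").getCollection("orders");

            // Variables exposed to the inner pipeline as $$order_item / $$order_qty.
            List<Variable<String>> let = Arrays.asList(
                    new Variable<>("order_item", "$item"),
                    new Variable<>("order_qty", "$ordered"));

            // Inner pipeline run against the "warehouses" collection.
            List<Bson> innerPipeline = Arrays.asList(
                    match(new Document("$expr", new Document("$and", Arrays.asList(
                            new Document("$eq", Arrays.asList("$stock_item", "$$order_item")),
                            new Document("$gte", Arrays.asList("$instock", "$$order_qty")))))),
                    project(exclude("stock_item", "_id")));

            orders.aggregate(Arrays.asList(
                    lookup("warehouses", let, innerPipeline, "stockdata")
            )).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```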
null |
[
"node-js",
"replication",
"containers",
"installation",
"upgrading"
] |
[
{
"code": "root@nxtc:/opt/rocketchat# docker exec -it rocketchat_mongo_1 bash\nError response from daemon: Container 764b5a5051f1247aeca43241b9d7a704c64c50d7b91f2f68317533776e170a66 is restarting, wait until the container is running\nException in setInterval callback: MongoServerSelectionError: getaddrinfo EAI_AGAIN mongo\n at Timeout._onTimeout (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/sdam/topology.js:437:30)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7) {\n reason: TopologyDescription {\n type: 'Single',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(1) { 'mongo:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: 9\n }\n}\n/app/bundle/programs/server/node_modules/fibers/future.js:313\n\t\t\t\t\t\tthrow(ex);\n\t\t\t\t\t\t^\nMongoServerSelectionError: getaddrinfo EAI_AGAIN mongo\n at Timeout._onTimeout (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/core/sdam/topology.js:437:30)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7) {\n reason: TopologyDescription {\n type: 'Single',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(1) {\n 'mongo:27017' => ServerDescription {\n address: 'mongo:27017',\n error: Error: getaddrinfo EAI_AGAIN mongo\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:71:26) {\n name: 'MongoNetworkError'\n },\n roundTripTime: -1,\n lastUpdateTime: 12192833,\n lastWriteDate: null,\n opTime: null,\n type: 'Unknown',\n topologyVersion: undefined,\n minWireVersion: 0,\n maxWireVersion: 0,\n hosts: [],\n passives: [],\n arbiters: [],\n tags: []\n }\n },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\nCould not start Rocket.Chat. Waiting 5 secs...\n",
"text": "I have Proxmox server with my virtual machines. My CPU is Intel Pentium Gold G6400 - comparison, characteristics and benchmarks which supports AVX.\nUbuntu 20.04.4 is one of my VM where I have installed Rocket.chat as docker. This tutorial was applied to upgrade MongoDB - The Ultimate Guide: Upgrading RocketChat Deployed in Docker and Upgrading MongoDB\nMongo upgrade was made succesfully from 4.0->4.2 and 4.2->4.4 as well.\nBut when I tried to upgrade from 4.4->5.0 there were some errors.I changed docker-compose.yml for image mongo:5.0 and create new docker. After that I tried to go inside dockerLog from container rocketchat_rocketchat_1when I used\ndocker-compose down\nand\ndocker-compose up -dmy rocketchat_rocketchat_1 has this logWhat should I do?\nI tried to change my Proxmox VM CPU settings to host but it does not help.\nThank you.",
"username": "Anton_Krajcik"
},
{
"code": "",
"text": "Hi @Anton_Krajcik and welcome in the MongoDB Community !I’m really not an expert in anything you mentioned (almost) but I did noticed this blog post:Mongo 5 and Node.JS driver update, but also cordova!\nReading time: 3 min read\nSo which version of meteor are you using? Maybe an update to 2.6 or 2.7 could help? I assume you already thought about that but you didn’t mention it so… Worth a shot.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] |
Cannot upgrade MongoDB 4.4 -> 5.0 in docker
|
2022-05-30T13:54:10.358Z
|
Cannot upgrade MongoDB 4.4 -> 5.0 in docker
| 5,193 |
null |
[
"schema-validation"
] |
[
{
"code": "",
"text": "Hi, I have a problem with mongoDB that delete the validation schema of a collection after I perform a Export from a Hadoop Cluster to MongoDB.\nThe things I Do in the export are the same that I perform over another collection where there aren’t any validation schema empty after that, so I don’t know where I’m wrong.\nCan anyone help me in this case?\nThanks for the help!!!",
"username": "Alberto_De_Gregorio"
},
{
"code": "mongoexport",
"text": "Hi @Alberto_De_Gregorio and welcome in the MongoDB Community !mongoexport doesn’t modify the collection it’s extracting the documents from. The only way to remove the metadata from a MongoDB collection (indexes, jsonschema, validator, etc) would be to delete the collection and recreate it. To my knowledge that’s the only way to do it “accidentally”.Can you explain exactly the operations that you are doing?Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
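For readers hitting the same question: the validator attached to a collection can be checked directly before and after the export. A minimal mongosh sketch, assuming a hypothetical collection named "orders":

```javascript
// Show the JSON schema validator currently attached to the collection (if any).
const info = db.getCollectionInfos({ name: "orders" })[0];
printjson(info.options.validator);

// If the validator really was lost, it can be re-attached with collMod
// (the $jsonSchema below is only an illustrative example).
db.runCommand({
  collMod: "orders",
  validator: { $jsonSchema: { bsonType: "object", required: ["sku"] } },
  validationLevel: "strict"
});
```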
] |
Validator empty after Export in MongoDB
|
2022-05-30T10:35:02.185Z
|
Validator empty after Export in MongoDB
| 2,326 |
null |
[
"spring-data-odm"
] |
[
{
"code": "2021-04-07 12:11:34.569 INFO 10268 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017\n\ncom.mongodb.MongoSocketOpenException: Exception opening socket\n\tat com.mongodb.connection.netty.NettyStream$OpenChannelFutureListener.operationComplete(NettyStream.java:439) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat com.mongodb.connection.netty.NettyStream$OpenChannelFutureListener.operationComplete(NettyStream.java:407) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:707) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat java.lang.Thread.run(Thread.java:748) [na:1.8.0_161]\nCaused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/0:0:0:0:0:0:0:1:27017\nCaused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/0:0:0:0:0:0:0:1:27017\n\nCaused by: java.net.ConnectException: Connection refused\nCaused by: java.net.ConnectException: Connection refused\n\n\tat sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_161]\n\tat sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_161]\n\tat io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) 
~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:707) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.60.Final.jar:4.1.60.Final]\n\tat java.lang.Thread.run(Thread.java:748) [na:1.8.0_161]\n",
"text": "Hi, I am working on spring reactive project having embedded Mongodb. I have installed mongodb and getting below error after starting my SpringBootApplication.Can anyone please help me on this.",
"username": "Gaurav_Jain1"
},
{
"code": "",
"text": "I suspect that you have to start mongod first.",
"username": "steevej"
},
{
"code": "",
"text": "I am getting a similar error, did you happen to find a solution to this?",
"username": "Daniel_Serna"
},
{
"code": "",
"text": "I am getting this same error now. Any solution for this please?",
"username": "Sonali_Dutta"
},
{
"code": "",
"text": "If it is really the same issue, then the same solution should be tried.I suspect that you have to start mongod first.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for your response. But I don’t have any mongodb installed in my local. Is that necessary? Will the embeddedMongoServer not serve on its behalf?",
"username": "Sonali_Dutta"
},
{
"code": "",
"text": "I know nothing about embeddedMongoServer but if you can connection refused for localhost:27017 you are not using something that I would call embedded or you are not configuring it correctly. The exception com.mongodb.MongoSocketOpenException seems to come from the normal driver.",
"username": "steevej"
},
{
"code": "",
"text": "Could you please suggest me a good article to help me configure it correctly?",
"username": "Sonali_Dutta"
},
{
"code": "",
"text": "Sorry, but like I wrote:I know nothing about embeddedMongoServerThis looks like something supported by another group.Try\nhttps://www.google.com/search?client=firefox-b-d&q=EmbeddedMongoServer",
"username": "steevej"
},
{
"code": "",
"text": "I was actually requesting you to share any good documentation for configuring with the normal driver (as you mentioned earlier looking at the error)",
"username": "Sonali_Dutta"
},
{
"code": "",
"text": "You have to start mongod first.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
MongoSocketOpenException: Exception opening socket
|
2021-04-07T19:16:39.781Z
|
MongoSocketOpenException: Exception opening socket
| 40,475 |
[
"atlas-device-sync",
"react-native"
] |
[
{
"code": "Bad changeset (DOWNLOAD)WARN [\"Realm sync error\", \"realm-default\", {\"category\": \"realm::sync::ClientError\", \"code\": 112, \"isFatal\": true, \"message\": \"Bad changeset (DOWNLOAD)\", \"name\": \"Error\", \"userInfo\": {}}]\n\nWARN {\"errorCode\": 1, \"message\": \"Bad changeset (DOWNLOAD)\"}\n",
"text": "Every day we are receiving sync errors with message Bad changeset (DOWNLOAD) when users try to login in our Realm App with custom JWT auth.Our only option is to perform a “terminate sync” whenever this occurs.We searched for the error message and found several reports of this sync issue here, but we couldn’t identify what could be causing this issue.Our “client_max_offline_days” setting is set with 4 days, and the problem continues to occur every day, even if users uninstall the app and install it again, the error only goes away with “terminate sync”.Log:We also tried to open a support ticket, but I think that the support page is down.\nimage1104×416 18.9 KB\n",
"username": "Douglas_Junior"
},
{
"code": "Bad Changeset (DOWNLOAD)",
"text": "Hi @Douglas_Junior ,The Bad Changeset (DOWNLOAD) error is something we should look into as Support, if you have a contract for that Project please open a case. The Support Portal seems to be working properly at the moment, the error you receive is about authentication, can you please double check you’re connected as the Project Owner?",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Yes, I’m Organization Owner and Project Owner. I tried to enter with Google and with password too.The “Support SSO Failure” occurs just after the login page.I’ve even tried changing the browser ",
"username": "Douglas_Junior"
},
{
"code": "",
"text": "Would it be possible to initiate a Support ticket via email?",
"username": "Douglas_Junior"
},
{
"code": "",
"text": "A found a “chat” button inside Atlas, I will try to open a ticket now.",
"username": "Douglas_Junior"
},
{
"code": "",
"text": "Yes, that’s the right procedure, it should work.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Recurring sync error with message Bad changeset (DOWNLOAD)
|
2022-05-30T13:18:10.983Z
|
Recurring sync error with message Bad changeset (DOWNLOAD)
| 2,423 |
|
null |
[
"queries",
"node-js",
"mongoose-odm"
] |
[
{
"code": "const locationSchema = new mongoose.Schema(\n {\n departureLocation: {\n name: {\n type: String,\n required: true,\n lowercase: true\n },\n time: {\n type: String,\n required: true,\n },\n subLocations: { type: [String], lowercase: true },\n },\n arrivalLocation: {\n name: {\n type: String,\n required: true,\n lowercase: true\n },\n time: {\n type: String,\n required: true,\n },\n subLocations: { type: [String], lowercase: true },\n },\n },\n {\n timestamps: true,\n }\n);\nconst routeSchema = new mongoose.Schema({\n location:{\n type: mongoose.Schema.Types.ObjectId,\n ref: 'Location',\n required: true\n },\n duration: {\n type: Number,\n required: true\n },\n date: {\n type:String,\n required: true\n },\n},\n{\n timestamps: true,\n});\nconst busSchema = new mongoose.Schema({\n busNumber: {\n type: String,\n unique: true,\n required: true,\n },\n seats: {\n type: Number,\n },\n route: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Route\",\n required: true,\n },\n },\n {\n timestamps: true,\n });\nconst bookingSchema = new mongoose.Schema({\n userId: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"User\",\n required: true,\n },\n busId: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Bus\",\n required: true,\n },\n passengers: [\n {\n name: { type: String, required: true, trim: true },\n gender: { type: String, required: true, trim: true },\n age: { type: Number, required: true, trim: true },\n }],\n phone: {\n type: Number,\n required: true,\n },\n email: {\n type: String,\n required: true,\n },\n bookingDate: {\n type: String,\n required: true,\n },\n fare: {\n type: Number,\n required: true,\n },\n seats: {\n required: true,\n type: [Number],\n },\n departureDetails: [\n {\n city: { type: String, required: true, trim: true },\n location: { type: String, required: true, trim: true },\n time: { type: String, required: true, trim: true },\n date: { type: String, required: true, trim: true },\n },\n ],\n arrivalDetails: [\n {\n city: { type: String, required: true, trim: true },\n location: { type: String, required: true, trim: true },\n time: { type: String, required: true, trim: true },\n date: { type: String, required: true, trim: true },\n },\n ],\n},{\n timestamps:true\n});\nrouter.get(\"/trip/single\", async (req, res) => {\n if (!req.query.departure || !req.query.arrival || !req.query.date) {\n return res.send({\n error: \"Please enter the data to get the trip\",\n });\n }\n const { departure, arrival, date } = req.query;\n \n let locations = await Location.find({\n 'departureLocation.name': departure,\n 'arrivalLocation.name': arrival,\n });\n \n const ids = locations.map(location => location._id)\n \n const routes = await Route.find({$and: [{location : {$in: ids}},{date}]});\n const route = routes.find(()=>{\n return ([{ date }, { routes }])\n });\n \n let buses = await Bus.find({})\n let matchedBuses = buses.filter((bus)=> {\n return bus.routes === locations._id\n })\n \n const bookings = await Booking.find({})\n const busIdWithSeatsObj = {}\n for(let i = 0; i < matchedBuses.length; i++){\n let currentBusSeats = []\n const busBookings = bookings.filter((booking) => {\n return (booking.departureDetails.date === date &&\n booking.busId.toString() === matchedBuses[i]._id.toString()\n )\n })\n busBookings.forEach((booking) => {\n currentBusSeats = [...currentBusSeats, ...booking.seats]\n })\n busIdWithSeatsObj[matchedBuses[i].seats] = currentBusSeats\n }\n \n res.status(200).send({route, matchedBuses, busIdWithSeatsObj});\n});\n",
"text": "I am building a bus ticket booking app in node.js. I want to fetch only related data about queries not all the data But I am getting all the data about Bus.Here is the location model. only admin can enter data about locations.Here is the route table. Again admin…Bus model: Admin…and here finally the booking table:Whenever an authorized user enters the booking data It is getting saved to the database. No problem whatsoeverNow I want to show every user(Non-authorized as well) of my app about trips(routes), the bus which will run on that particular trip and reserved and available seats in that particular bus which is currently stored.But the problem is that I am getting all the buses even if it is not on that trip.here is the query:Now I also want to fetch details by Id(only single bus).How to do that?Can anyone help me out here?\nhere’s the input data image with result for date: 2022-06-02 which is actually available in the database : https://i.stack.imgur.com/FlfaG.pngThis is the second query with result for date: 2022-06-01. It is not available in the database still showing matchedBus.: https://i.stack.imgur.com/xY5pW.png Now route data is gone but matched buses still shows.",
"username": "Mitul_Kheni"
},
{
"code": "",
"text": "To extract only some elements of an array you have to used $filter.I do not know how $filter is used in mongoose.Documents as images are useless for us to help you. Time is limited and typing document is long. We can cut-n-paste easily when publish as indicated in Formatting code and log snippets in posts",
"username": "steevej"
}
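As an illustration of the $filter suggestion above, here is a minimal aggregation sketch (the collection and field names are hypothetical, not taken from the schemas in this thread):

```javascript
// Return each booking with only the seat numbers that satisfy a condition,
// instead of the whole seats array.
db.bookings.aggregate([
  {
    $project: {
      seats: {
        $filter: {
          input: "$seats",                 // the array to filter
          as: "seat",                      // name for the current element
          cond: { $gte: ["$$seat", 10] }   // keep seats numbered 10 or higher
        }
      }
    }
  }
]);
```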
] |
How to get only matched data in node.js using mongoose?
|
2022-05-29T04:24:50.313Z
|
How to get only matched data in node.js using mongoose?
| 7,462 |
null |
[
"queries",
"dot-net"
] |
[
{
"code": "Exception: System.FormatException: An error occurred while deserializing the Hourly property of class ClassRow: An error occurred while deserializing the Subscribers property of class ClassRow+Counters: An error occurred while deserializing the CountDiff property of class ClassRow+Subscribers: Cannot deserialize a 'Int32' from BsonType 'Null'.",
"text": "What I have:\nCollection with TTL index.\nNeed to execute find query on collection.What SOMETIMES I get:\nException: System.FormatException: An error occurred while deserializing the Hourly property of class ClassRow: An error occurred while deserializing the Subscribers property of class ClassRow+Counters: An error occurred while deserializing the CountDiff property of class ClassRow+Subscribers: Cannot deserialize a 'Int32' from BsonType 'Null'.My query code:\nreturn await _rowsCollection.Find(filter})Limit(limit).ToListAsync();What i guess is the cause:\nOrder of execution:Is there any way not to get an exception(ignore some of desserialization errors), but simply ignore it, and get 99 objects instead of 100, as example?",
"username": "Maksim_Shapovalov"
},
{
"code": "System.FormatException: Cannot deserialize a 'Int32' from BsonType 'Null'.\n at\nMongoDB.Bson.Serialization.Serializers.Int32Serializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.Serializers.SerializerBase`1.MongoDB.Bson.Serialization.IBsonSerializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize(IBsonSerializer serializer, BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n --- End of inner exception stack trace ---\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeClass(BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.Serializers.SerializerBase`1.MongoDB.Bson.Serialization.IBsonSerializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize(IBsonSerializer serializer, BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n --- End of inner exception stack trace ---\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeClass(BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.Serializers.SerializerBase`1.MongoDB.Bson.Serialization.IBsonSerializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize(IBsonSerializer serializer, BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n --- End of inner exception stack trace ---\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeMemberValue(BsonDeserializationContext context, BsonMemberMap memberMap)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.DeserializeClass(BsonDeserializationContext context)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Deserialize[TValue](IBsonSerializer`1 serializer, BsonDeserializationContext context)\n at MongoDB.Driver.Core.Operations.CursorBatchDeserializationHelper.DeserializeBatch[TDocument](RawBsonArray batch, IBsonSerializer`1 documentSerializer, MessageEncoderSettings messageEncoderSettings)\n at MongoDB.Driver.Core.Operations.FindCommandOperation`1.CreateFirstCursorBatch(BsonDocument cursorDocument)\n at MongoDB.Driver.Core.Operations.FindCommandOperation`1.CreateCursor(IChannelSourceHandle channelSource, IChannelHandle channel, BsonDocument commandResult)\n at 
MongoDB.Driver.Core.Operations.FindCommandOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(RetryableReadContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.FindOperation`1.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.ToListAsync[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n at\nMyMethod()\n",
"text": "Full stack trace:",
"username": "Maksim_Shapovalov"
},
{
"code": "CountDiffnullnullCountDiffCountDiffint?int",
"text": "I think a more straightforward explanation of what is happening is that you actually have some documents where the CountDiff element is null in the database.Perhaps you only get this error sometimes because it depends on which documents match your filter.If null is a valid value for CountDiff you can declare the CountDiff property of be of type int? instead of int.",
"username": "Robert_Stam"
},
{
"code": "CountDiff",
"text": "Its not valid case where CountDiff is null. Its retriction on write-side code. So I guess this is caused by TTL",
"username": "Maksim_Shapovalov"
}
] |
Cannot deserialize a 'Int32' from BsonType 'Null' for deleted object by TTL on find query
|
2022-05-26T09:34:34.114Z
|
Cannot deserialize a ‘Int32’ from BsonType ‘Null’ for deleted object by TTL on find query
| 5,232 |
null |
[
"node-js",
"mongoose-odm"
] |
[
{
"code": "useNewUrlParser: true,\nserver is running on port ${PORT}..",
"text": "I have another problem to connect my application with the cluster.*my server.js\nconst express = require(“express”);const mongoose = require(“mongoose”);const cors = require(“cors”);const { readdirSync } = require(“fs”);const dotenv = require(“dotenv”);dotenv.config();const app = express();app.use(cors());//routesreaddirSync(“./routes”).map((r) => app.use(“/”, require(“./routes/” + r)));//databasemongoose.connect(process.env.DATABASE_URL, {}).then(() => console.log(“database connected successfully”)).catch((err) => console.log(“error connecting to mongodb”, err));const PORT = process.env.PORT || 8000;app.listen(PORT, () => {console.log( server is running on port ${PORT}.. );});server is running on port 8000…\nerror connecting to mongodb MongoParseError: mongodb+srv URI cannot have port number\nat new ConnectionString (C:\\Users\\vujim\\OneDrive\\Desktop\\jvuwithjesus\\backend\\node_modules\\mongodb-connection-string-url\\lib\\index.js:146:23)\nat parseOptions (C:\\Users\\vujim\\OneDrive\\Desktop\\jvuwithjesus\\backend\\node_modules\\mongoose\\node_modules\\mongodb\\lib\\connection_string.js:213:17)\nat new MongoClient (C:\\Users\\vujim\\OneDrive\n************Please help!!!",
"username": "Jimmy_h_Vu"
},
{
"code": ".env -> DATABASE_URL",
"text": "Nice trick for the routes. Please add the .env -> DATABASE_URL . That has an error. You can remove any personal detail / pwd.",
"username": "Mah_Neh"
}
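The MongoParseError above usually means the mongodb+srv scheme was combined with an explicit port, which SRV URIs do not allow. A minimal sketch of what a working DATABASE_URL and connect call could look like; every value here is a placeholder, not the poster's real configuration:

```javascript
// .env (placeholder values):
// DATABASE_URL=mongodb+srv://user:[email protected]/mydb?retryWrites=true&w=majority
// Note: no ":27017" after the host. A port is only valid with the plain mongodb:// scheme.
const mongoose = require("mongoose");

mongoose
  .connect(process.env.DATABASE_URL)
  .then(() => console.log("database connected successfully"))
  .catch((err) => console.log("error connecting to mongodb", err));
```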
] |
I can not connect my application to cluster (please Help!)
|
2022-05-30T07:46:38.051Z
|
I can not connect my application to cluster (please Help!)
| 1,851 |
null |
[
"replication",
"containers"
] |
[
{
"code": "",
"text": "Hey!We are using fluent-bit to push MongoDB logs to Elasticsearch. But elastic accepts not all logs:“error”:{“type”:“mapper_parsing_exception”,“reason”:“object mapping for [attr.error] tried to parse field [error] as object, but found a concrete value”}.There is log with string attr.error:{“t”:{\"$date\":“2022-05-13T15:16:31.203+00:00”},“s”:“I”, “c”:“CONNPOOL”, “id”:22572, “ctx”:“MirrorMaestro”,“msg”:“Dropping all pooled connections”,“attr”:{“hostAndPort”:“mongodb-1.mongodb-headless.mongodb.svc.cluster.local:27017”,“error”:“ShutdownInProgress: Pool for mongodb-1.mongodb-headless.mongodb.svc.cluster.local:27017 has expired.”}}\nThere is log with object attr.error:{“t”:{\"$date\":“2022-05-13T15:20:56.857+00:00”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-680”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“alerta-mongodb-arbiter-0.alerta-mongodb-arbiter-headless.monitoring. svc.cluster.local:27017”,“maxHeartbeatRetries”:2,“error”:{“code”:93,“codeName”:“InvalidReplicaSetConfig”,“errmsg”:“replica set IDs do not match, ours: 61ea35f29cfd494fef169571; remote node’s: 61eef8589d065c56e61d6e52”}}}bitnami/mongodb:4.4.12-debian-10-r18 docker image is usedCould you direct CONNPOOL error message to attr.error.errmsg instead of attr.error (the same as REPL_HB error message)?",
"username": "Yevhen_Koroviatskyi"
},
{
"code": "",
"text": "In addition to attr.error, the attr.command.update and attr.from fields also conflict",
"username": "Yevhen_Koroviatskyi"
}
] |
Logging: attr.error field type conflicts
|
2022-05-16T15:42:29.859Z
|
Logging: attr.error field type conflicts
| 2,131 |
null |
[
"mongodb-shell",
"database-tools",
"backup"
] |
[
{
"code": "",
"text": "Hello I have this problem regarding using mongodb dump. Kindly check out the full question on StackOverflow using this link. I will appreciate any helphttps://stackoverflow.com/q/72136731/11080717",
"username": "Felix_Omuok"
},
{
"code": "error counting admin.system.views",
"text": "Hi @Felix_Omuok and welcome to the community!!Based on the information shared over the stackoverflow channel, I tried to reproduce the issue in my local environment with a MongoDB version 5.0.7 and it works for me.Please note: following privileges have been granted on the admin database for my tests.db.createUser({ user: “admin”, pwd: “admin”, roles: [ { role: “userAdminAnyDatabase”, db: “admin” }]})However if you could upgrade to a later version and try the same, if the issue still persists.Also, could you provide a few more details for the topic:P.S. : Please note that it’s better to ask the question in complete entirety for timely response.Thanks\nAasawari",
"username": "Aasawari"
}
] |
How to Clone/Copy a mongodb database using mongodump on an AWS ECS server
|
2022-05-06T05:59:18.210Z
|
How to Clone/Copy a mongodb database using mongodump on an AWS ECS server
| 3,307 |
null |
[
"crud",
"golang",
"transactions"
] |
[
{
"code": "",
"text": "Hello, community,I am writing a REST API in Golang and I am using MongoDB as the database. Some things about locking data/transactions are unclear to me. In my case, I am storing data in a two-dimensional array. When a user makes a request I am retrieving the corresponding document from the database. In Go I run some complex logic to check if the request is valid, change the two-dimensional array and update the document with the new array.From my understanding, this could go wrong because if this user would make two rapid requests after each other or another user makes a request after the retrieval of the document the data will be old and invalid in which case the document will not have the new data and data may get corrupted.What I would like to do is lock the document before I retrieve the document and only release the lock after I have run my logic and updated the document. I have not found out how to do this. I feel that the things I find about transactions are more about multi-document/collection updates and it doesn’t work with a standard database because you will need a cluster/replica-set?. Other answers advise using compounds like FindOneAndUpdate however this would not work in my case because I would need to run logic between “find” and “update”.Kind regards",
"username": "Egbert-Jan_Terpstra"
},
{
"code": "",
"text": "I forgot to add that I also saw someone suggest adding a “islocked” property to the document and update this when you start and finish with the document. However, this does not seem right to me.",
"username": "Egbert-Jan_Terpstra"
},
{
"code": "updatelocked",
"text": "Hi @Egbert-Jan_Terpstra welcome to the community!MongoDB using WiredTiger has document-level locking since at least MongoDB 3.0, so there will be no instant where the document is partly updated (unless your app runs multiple update commands that deliberately only update parts of a document).From my understanding, this could go wrong because if this user would make two rapid requests after each other or another user makes a request after the retrieval of the document the data will be old and invalid in which case the document will not have the new data and data may get corrupted.If I understand correctly, your workflow involves: 1) retrieving a document, 2) calculate new values for that document based on the document’s current state, then 3) push an updated document, and you’re worried that in-between steps 1 and 3, there are other threads that are trying to update the exact same document. Is this accurate?If yes, using a transaction is a perfectly valid solution. Having a field called locked to act as a semaphore is also valid, although you’ll need to take care of cases where an update crashes between steps 1 & 3, rendering the document locked forever until it’s manually unlocked. From the application side, it is also possible to implement the semaphore construct outside of the database, to ensure that no two threads are working on the same document at one time.In my opinion, the simplest & safest solution is to use transaction. Yes you’ll have to deploy a replica set (three data-bearing nodes as the minimum recommended for production), but the benefits are many. For example, you’ll gain high availability, redundancy and thus safer data storage, easier database maintenance, while also gaining transactions, change streams, and other replica-set specific features. In fact, I would not recommend you to deploy a standalone node for a production environment, ever. Using transactions, you would not need to do locked documents cleanup, nor have to implement special semaphore in your app.Best regards\nKevin",
"username": "kevinadi"
},
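The original poster is working in Go, but as a language-neutral sketch of the read-modify-write pattern described above, here is a minimal example with the Node.js driver (database, collection and field names are made up); the Go driver offers an equivalent WithTransaction helper on its sessions:

```javascript
const { MongoClient } = require("mongodb");

// Read a document, run app-side logic on it, and write it back inside one
// transaction, so a concurrent writer causes a retry instead of a lost update.
// Requires a replica set (or sharded cluster).
async function updateGrid(uri, docId, computeNewGrid) {
  const client = new MongoClient(uri);
  await client.connect();
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const coll = client.db("mydb").collection("boards");
      const doc = await coll.findOne({ _id: docId }, { session });
      const newGrid = computeNewGrid(doc.grid);          // complex validation in app code
      await coll.updateOne(
        { _id: docId },
        { $set: { grid: newGrid } },
        { session }
      );
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```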
{
"code": "",
"text": "Yes, you understood my workflow correct. Thanks for your advice I will use it! I think the locked field problem could be solved by having a timestamp which can max be x seconds valid. I do not suspect many users to edit the same document at the same time however will look at using replicasets and transactions.Kind regards",
"username": "Egbert-Jan_Terpstra"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Understanding locking/transactions in mongo
|
2022-05-29T09:21:23.654Z
|
Understanding locking/transactions in mongo
| 7,038 |
null |
[] |
[
{
"code": "",
"text": "At a specific time each day, the mongod process causes cpu high (over 80%).\nWhat should I focus on?",
"username": "George_Kim"
},
{
"code": "mongotopmongostatmongod",
"text": "Hi @George_Kim welcome to the community!There’s not enough information here, but I would start with:Best regards\nKevin",
"username": "kevinadi"
}
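As one possible starting point for the investigation suggested above, a small mongosh sketch for seeing what the server is busy with while the CPU spike is happening (purely illustrative):

```javascript
// List operations that have been running for more than 3 seconds.
const ops = db.currentOp({ active: true, secs_running: { $gte: 3 } }).inprog;
ops.forEach((op) =>
  printjson({ opid: op.opid, ns: op.ns, secs: op.secs_running, op: op.op })
);
```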
] |
Mongod process cpu high
|
2022-05-27T23:47:09.394Z
|
Mongod process cpu high
| 1,194 |
null |
[
"java"
] |
[
{
"code": "priceAggregate.add(new Document(\"$group\",\n new Document(\"_id\",\n new Document(\"conditionGroupControlId\", \"$conditionGroupControlId\")\n .append(\"conditionGroupScac\", \"$conditionGroupScac\")\n .append(\"conditionGroupId\", \"$conditionGroupId\"))\n.append(\"conditionGroupRevisionId\", new Document(\"$max\", \"$conditionGroupRevisionId\"))));\n conditionGroupCollection.aggregate(priceAggregate).forEach(conditionGroups::add);\n @BsonProperty(\"_id\")\n @BsonId\n private ObjectId id;\n",
"text": "Hello,\nI’m trying to group documents based on conditionGroupControlId, conditionGroupScac, conditionGroupId and then selecting the document having the highest conditionGroupRevisionId. Then putting the result in conditionGroups which is list of POJO PrismConditionGroup.PrismConditionGroup extends another class containing below field:Below is the error I’m getting:[INFO] Decoding into a ‘PrismConditionGroup’ failed with the following exception:\n[INFO]\n[INFO] Failed to decode ‘PrismConditionGroup’. Decoding ‘_id’ errored with: readObjectId can only be called when CurrentBSONType is OBJECT_ID, not when CurrentBSONType is DOCUMENT.\n[INFO]\n[INFO] A custom Codec or PojoCodec may need to be explicitly configured and registered to handle this type.Can someone please suggest me how to solve this issue?",
"username": "Aditi_Barde"
},
{
"code": "@BsonProperty(\"id\") \nid",
"text": "Hi @Aditi_Barde,Welcome to the MongoDB Community forums Can you add the annotationto the field id, and it will work as expected!If you have any doubts, please feel free to reach out to us.Regards,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Decoding '_id' errored with: readObjectId can only be called when CurrentBSONType is OBJECT_ID, not when CurrentBSONType is DOCUMENT
|
2021-12-21T10:31:17.343Z
|
Decoding ‘_id’ errored with: readObjectId can only be called when CurrentBSONType is OBJECT_ID, not when CurrentBSONType is DOCUMENT
| 5,625 |
null |
[
"crud"
] |
[
{
"code": "exports = async function(changeEvent) {\n var docId = changeEvent.fullDocument._id;\n \n const countercollection = context.services.get(\"<ATLAS-CLUSTER>\").db(changeEvent.ns.db).collection(\"counters\");\n const studentcollection = context.services.get(\"<ATLAS-CLUSTER>\").db(changeEvent.ns.db).collection(changeEvent.ns.coll);\n \n var counter = await countercollection.findOneAndUpdate({_id: changeEvent.ns },{ $inc: { seq_value: 1 }}, { returnNewDocument: true, upsert : true});\n var updateRes = await studentcollection.updateOne({_id : docId},{ $set : {studentId : counter.seq_value}});\n \n console.log(`Updated ${JSON.stringify(changeEvent.ns)} with counter ${counter.seq_value} result : ${JSON.stringify(updateRes)}`);\n };\n",
"text": "Hello. I am trying to implementing a playerID field - separate from _id - that auto increments from 1 onward with each new entry into my users collection. I have read and am trying to follow this tutorial but I’m not having any luck. I simply don’t understand the example code and I’m fairly new with MongoDB as well.I have two collections: “users” and “playerIDCounter”. I would like to adapt the example code from the tutorial for these two collections and to increment on the playerID field rather than the _id field. So far I have not been able to get this to work.The example code is:I would appreciate anyone that can explain what’s actually going on here.",
"username": "Aaron_Dubois"
},
{
"code": "",
"text": "Hello @Aaron_Dubois ,You need to replace with your cluster service name in 3rd and 4th line of the code.The code in the example is an async function, in which first this is accessing the _id of the latest changed document. Then it is updating the next available document with the help of counter.Can you share the error you are getting while trying to replicate this?Thanks,\nTarun",
"username": "Tarun_Gaur"
}
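A minimal sketch of the two lines in question with the placeholder filled in. "mongodb-atlas" is the usual default name of a linked Atlas data source; if the linked service was given a different name in App Services, use that name instead:

```javascript
// Replace <ATLAS-CLUSTER> with the name of your linked data source.
const countercollection = context.services
  .get("mongodb-atlas")
  .db(changeEvent.ns.db)
  .collection("counters");

const studentcollection = context.services
  .get("mongodb-atlas")
  .db(changeEvent.ns.db)
  .collection(changeEvent.ns.coll);
```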
] |
Auto Increment With Atlas Triggers
|
2022-04-27T23:14:55.782Z
|
Auto Increment With Atlas Triggers
| 2,248 |
null |
[
"php"
] |
[
{
"code": "",
"text": "Hi everyone.\nMy name is Stefano, I’m 45, one wife, two childs and lot of interests.\nI work for a big semiconductor company since about 20 years.\nI have a discrete experience with DBMS but I don’t have any experience with NO-SQL DB.I’m curious about anything is new, I’m strongly focused to always learn something new as long I’m convincted that I have to change my mindset to understand something more about MongoDB.Thanks for accepting me in your comunity.\nStefano, Milan, Italy.",
"username": "Stefano_Radaelli"
},
{
"code": "",
"text": "Hello @Stefano_Radaelli,Welcome to the community! Please feel free to explore our free MongoDB University Courses and discuss any roadblocks you may face in your learning journey.Best regards,\nTarun",
"username": "Tarun_Gaur"
}
] |
Ciao to everybody, I'm Stefano, from Milan, Italy
|
2022-04-29T09:45:18.492Z
|
Ciao to everybody, I’m Stefano, from Milan, Italy
| 2,153 |
[
"aggregation",
"data-modeling",
"atlas",
"atlas-triggers",
"delhi-mug"
] |
[
{
"code": "Software Engineer, LinkedInSenior Technical Service Engineer, MongoDBCo-Founder, The Coding CultureThoughtFocus, Lead Database, and Cloud OperationsSoftware Engineer, LinkedIn",
"text": "\nMUG Delhi1920×1080 98.6 KB\nDelhi-NCR MongoDB User Group is excited to announce it’s first in-person meetup on Thursday, May 26, 2022 11:30 AM at MongoDB Office, Gurugram. The event will include two lightning sessions with demos, some games, pizzas and swag to win! In one of the talks, Shrey from LinkedIn will focus on dealing with Real-Time Analytics in Big Data, showcasing how we can use MongoDB Change Events to achieve this. He will be going over a few different patterns of Data Modelling and Application Architecture, so that any new use case which comes in, can be solved in a Plug N Play type of architecture, without affecting your application code.This session will be a demo-based session, starting with how to model your data in MongoDB, use MongoDB Change Streams / MongoDB Database Triggers to capture events, and build analytics dashboards using MongoDB Charts.In the other session, Saurav from MongoDB will talk about Federated Authentication with Atlas, He will talk about how Federated Auth links your credentials across many MongoDB systems.The sessions being planned are focused on beginner database operations. If you are a beginner or have some experience with MongoDB already, there is something for all of you!Event Type: In-Person\n Location: MongoDB Office, Gurugram.\n Floor 14, Building, 10C, DLF Cyber City, DLF Phase 2, Sector 24, Gurugram, Haryana 122001To RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.\nimage1280×1280 109 KB\nSoftware Engineer, LinkedInI work as a Software Developer @ LinkedIn in Big Data Applications. Mostly I focus on Software Application Architecture, Databases, Data Flow Automation, and Scaling. You can talk to me about MongoDB, its various different products such as Atlas, Realm, Charts, and generally anything about product development as a whole.–\nSenior Technical Service Engineer, MongoDBCo-Founder, The Coding Culture–ThoughtFocus, Lead Database, and Cloud Operations–\nSoftware Engineer, LinkedInJoin the Delhi-NCR group to stay updated with upcoming meetups and discussions.",
"username": "GeniusLearner"
},
{
"code": "Government-Issued ID Card",
"text": "Hey Everyone,Excited to see you all tomorrow. Here are a few things to note:Also, excited to see so many of you RSVPed, if you have any change of plans, please un-check the Going button at the top to allow other interested members to sign up Please reply on this thread in case you have any questions.Looking forward to seeing most of you tomorrow!",
"username": "Harshit"
},
{
"code": "",
"text": "\n165356566285380107583671331188471920×2560 179 KB\n",
"username": "Kanishk_Khurana"
},
{
"code": "",
"text": "\n165356579069383247178318299299001920×1440 67.3 KB\n",
"username": "Divjot_kaur"
},
{
"code": "",
"text": "\nIMG_20220526_1720271920×2560 166 KB\n",
"username": "Simarpreet_Singh"
},
{
"code": "",
"text": "\n16535658890916385152156543518041920×2560 118 KB\n",
"username": "Divjot_kaur"
},
{
"code": "",
"text": "\n20220526_1721201920×865 80 KB\n\nUploading: 20220526_172156.jpg…",
"username": "Jugraj_Singh"
},
{
"code": "",
"text": "\nIMG_20220526_1722251920×2560 81.7 KB\n",
"username": "Ripudaman_Singh"
},
{
"code": "",
"text": "\nIMG202205261722021920×4160 240 KB\n",
"username": "shashank_sharma2"
},
{
"code": "",
"text": "\n165356601371671158865003255622543472×4624 888 KB\n",
"username": "36_Tanmaydeep_Singh"
},
{
"code": "",
"text": "\n20220526_1721561920×4260 357 KB\n",
"username": "Jugraj_Singh"
},
{
"code": "",
"text": "\nIMG202205261723071920×1440 80.7 KB\n",
"username": "Harpreet_Singh2"
},
{
"code": "",
"text": "\nIMG-20220526-WA00051600×1200 73.8 KB\n",
"username": "Taranjot_Singh"
},
{
"code": "",
"text": "\nIMG_20220526_175425813×1854 76.9 KB\n",
"username": "shashank_sharma2"
},
{
"code": "",
"text": "very informative and fun event\n\nIMG_85171920×1280 193 KB\n",
"username": "33_ANSHDEEP_Singh"
},
{
"code": "",
"text": "\nIMG_85211920×1280 172 KB\n",
"username": "33_ANSHDEEP_Singh"
},
{
"code": "",
"text": "\nIMG_85251920×1280 221 KB\n",
"username": "33_ANSHDEEP_Singh"
},
{
"code": "",
"text": "Hi Everyone Thank you for participating in our event. We hope you enjoyed both the presentation.Thank you again for your attendance. We hope the best for you and success in your future. See you at our next event.Feel free to reach out if you have any doubts.Best Regards,Sanchit Khurana\nMongoDB User Group Leader\nConnect with me on LinkedIn: https://www.linkedin.com/in/sanchit-khurana/",
"username": "GeniusLearner"
},
{
"code": "",
"text": "\nimage1920×2560 213 KB\n",
"username": "17_Chirag_Sharma"
},
{
"code": "",
"text": "\nimage1920×2560 209 KB\n",
"username": "Aditi_Jalali"
}
] |
Delhi-NCR MUG: Real Time Analytics, Atlas Federated Authentication & much more!
|
2022-05-19T12:18:07.075Z
|
Delhi-NCR MUG: Real Time Analytics, Atlas Federated Authentication & much more!
| 9,174 |
|
null |
[] |
[
{
"code": "",
"text": "hey guys i want to install mongodb server and shell on a machine with no internet access.i downloaded mongodb-org-shell_5.0.8_amd64.deb and mongodb-org-server_5.0.8_amd64.deb and moved it to that machine and tried installing using apt-get …i am getting the following error\nMongoDB shell version v5.0.8connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodbError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :connect@src/mongo/shell/mongo.js:372:17@(connect):2:6exception: connect failedexiting with code 1",
"username": "Stuart_S"
},
{
"code": "",
"text": "os- Ubuntu -20.04\narch-x86_64",
"username": "Stuart_S"
},
{
"code": "",
"text": "Your mongod should be up and running before you can connect to it\nCheck service status if it was started as service\nor\nps -ef|grep mongod if started manually from command line",
"username": "Ramachandra_Tummala"
}
] |
Mongodb setup issues
|
2022-05-29T18:10:03.986Z
|
Mongodb setup issues
| 1,855 |
null |
[
"queries",
"python"
] |
[
{
"code": "users = db.user.find()\nfor obj in users:\n print(obj)\nbossmandave@Davids-MacBook-Pro-2 kld9-bazaar % cd \"/Users/bossmandave/Library/Mobile Documents/com~apple~CloudDocs/KLD9 Bazaar/kld9-bazaar\"\nbossmandave@Davids-MacBook-Pro-2 kld9-bazaar % /usr/local/bin/python3.9 \"/Users/bossmandave/Library/Mobile Documents/com~apple~CloudDocs/KLD9 Bazaar/k\nld9-bazaar/main.py\"\n{'_id': ObjectId('61358b63af884f0bacbd8775'), 'email': '[email protected]', 'phone': '123456789', 'preferences': ['pref 1', 'pref 2'], 'status': 'private', 'uid': 'E943F20A-82B6-4FF2-8460-F8D369E6C51B', 'cart_id': None, 'date_created': datetime.datetime(1, 1, 1, 0, 0), 'date_updated': datetime.datetime(1, 1, 1, 0, 0), 'first_name': 'John', 'hashed_password': None, 'is_Oowner': True, 'is_admin': False, 'last_name': 'Doe2', 'user_created': None, 'user_updated': None, 'address': {'street': '123 2nd St.', 'street1': '', 'city': 'Jacksonville', 'state': 'Florida', 'postal': '49504', 'country': 'United States'}, 'to_delete': False, 'card': {'name_on_card': 'John Doe2', 'card_number': Decimal128('1111222233334444'), 'card_type': 'credit', 'card_issuer': 'Mastercard', 'expiry_date': datetime.datetime(2027, 12, 31, 16, 0), 'cvc_code': 123, 'is_primary': True}}\n{'_id': ObjectId('61358b72af884f0bacbd8776'), 'uid': 'B38AC513-06F6-47A5-9AFC-2B38645D492D', 'phone': '123456789', 'email': '[email protected]', 'preferences': ['pref 1', 'pref 2'], 'cartID': None, 'status': 'private', 'card': {'name_on_card': 'John Doe3', 'card_number': Decimal128('1111222233334444'), 'card_type': 'cedit', 'card_issuer': 'Visa', 'expiry_date': datetime.datetime(2025, 5, 14, 16, 0), 'cvc_code': 456, 'is_primary': True}, 'date_created': datetime.datetime(1, 1, 1, 0, 0), 'date_updated': datetime.datetime(2021, 9, 12, 8, 0, 17, 351000), 'first_name': 'John', 'hashed_password': None, 'is_admin': False, 'is_owner': False, 'last_name': 'Doe3', 'to_delete': False, 'user_created': None, 'user_updated': None, 'address': {'street': '123 3rd St.', 'street2': '', 'city': 'Bankok', 'state': '', 'country': 'Thailand', 'postal': 259859}}\nbossmandave@Davids-MacBook-Pro-2 kld9-bazaar % \n",
"text": "I am new to using MongoDB and I am currently struggling with the db.find() command. I am trying to query my user database and retrieve a list of all users using Python…I can get the list of my users by using the following command.where I am and if I list the variable users I get the following:Yields the following:But I cannot get the program to print each record out ?? I can’t seem to get into each object and print out data according to the object.I am quite sure this is a simple fix, but I cannot seem to get it. What am I doing wrong. My initial thinking is that I may need to use a dictionary? Am I on the right track?",
"username": "David_Thompson"
},
{
"code": "print( obj.get( 'email' ) )\n",
"text": "Your code seems okay.You only print 2 documents because you only have 2 documents in your collection named user.You may print a single field, like email, from obj with",
"username": "steevej"
},
{
"code": "",
"text": "@steevej ,Thank you sooo much!!! Now, I have another question… If I wanted to format my query into individual records, would I declare my variable as an array and then unwind it? Or is this the only way for me to access the records?Thanks again,Dave",
"username": "David_Thompson"
},
{
"code": "",
"text": "@steevej ,\nOne more question… If you look at the object output, I have two nested objects within the user. Address and Card. I can get the address to print out using user.get(‘address’) which prints the entire object, How do I select specific fields within the nested object? I tried user.get(‘address.street’) to print only the street from the address object, but I got a return of none.\nScreenshot 2022-05-28 at 9.37.20 AM1388×766 78.5 KB\nnow if I change it to the user.get(‘address’) it returns the entire object.\nScreenshot 2022-05-28 at 9.38.31 AM1383×765 83.3 KB\nThanks for your time,Dave",
"username": "David_Thompson"
},
{
"code": "address = user.get( 'address' )\nstreet = address.get('street')\ncity = address.get('city')\n",
"text": "First, I am not a python programmer. I really hate python because it uses an invisible character (space or tab which one? I don’t know) to mark blocking. They made the same well know error as make(1) creators made 50 years ago. At least, they did not made the same error as JS because all literals have to be in quotes.The dot notationaddress.streetis a mongo thing, not python.An object like address is the same thing as an object like user.While user.get(‘first_name’) gives you the field first_name of the object user, user.get(‘address’) gives your the field address of the object user but user.get(‘address’) is a object and you use get() to get its fields. So\nuser.get(‘address’).get(‘street’) gives you the field street of the object address which is the field address of the object user. If you use a few fields from the user’s object address, it is good practice to doOne you get a MongoDB document in a python variable like user, everything that follows is plain standard python and has nothing to do with MongoDB.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej ,\nWhen I started to learn, I also really didn’t like the fact that it counts white space. And indentation counts in Python, not {}. That being said, now that I have had a chance to “delve” into it for some time (granted I’m a true beginner) I like it just for its portability. But that is my opinion.I really appreciate your help. I am now getting it and I understand much more on accessing objects than before. I appreciate your patience with me and I’m truly grateful for your tidbits.Thanks,Dave",
"username": "David_Thompson"
}
] |
Accessing Objects
|
2022-05-27T03:57:32.755Z
|
Accessing Objects
| 2,602 |
null |
[] |
[
{
"code": "invalid custom auth token: valid UID required (between 1 and 128 characters)const functions = require('firebase-functions');\nconst jwt = require('jsonwebtoken');\nconst fs = require('fs');\nconst key = fs.readFileSync('privatekey.pem');\nexports.myAuthFunction = functions.https.onCall((data, context) => {\n const uid = context.auth.uid;\n const payload = { userId: uid };\n const token = jwt.sign(payload, { key: key, passphrase: \"redacted\" }, { algorithm: 'RS256'});\n return { token:token }\n});\n func authUI(_ authUI: FUIAuth, didSignInWith authDataResult: AuthDataResult?, error: Error?) {\n guard let fireUser = authDataResult?.user else { return }\n\n Functions.functions().httpsCallable(\"authenticationfunc\").call { [weak self] result, _ in\n guard let dict = result?.data as? [String: Any] else { return }\n guard let token = dict[\"token\"] as? String else { return }\n\n self?.gotJWT(jwt: token)\n }\n }\n\n func gotJWT(jwt: String) {\n let credentials = Credentials.jwt(token: jwt)\n app.login(credentials: credentials) { [weak self] (result) in\n switch result {\n case .failure(let error):\n print(\"Login failed: \\(error.localizedDescription)\")\n case .success(let user):\n print(\"Successfully logged in as user \\(user)\")\n }\n }\n }\naud",
"text": "I had JWT auth set up back in the Realm Cloud days. Now I’m migrating to MongoDB and hit a snag logging into realm, the error is invalid custom auth token: valid UID required (between 1 and 128 characters)My JWT provider is a Firebase function:I’m calling that in my app like so:Should I be adding more fields to the payload? This guide makes it sound like I need to add aud field, etc, https://www.mongodb.com/docs/atlas/app-services/tutorial/jwt/#payload",
"username": "Harry_Netzer1"
},
{
"code": "const functions = require('firebase-functions');\nconst jwt = require('jsonwebtoken');\nconst fs = require('fs');\nconst key = fs.readFileSync('privatekey.pem');\nexports.myAuthFunction = functions.https.onCall((data, context) => {\n const uid = context.auth.uid;\n const subject = context.auth.token.email;\n const userName = context.auth.token.name;\n const audience = \"redacted\";\n const expires = Math.floor(Date.now() / 1000) + (very secure number of seconds);\n const payload = { userId: uid, sub: subject, name: userName, aud: audience, exp: expires };\n const token = jwt.sign(payload, { key: key, passphrase: \"redacted\" }, { algorithm: 'RS256'});\n return { token:token }\n});\n",
"text": "To answer my question - yes those fields are required. If anyone else runs into this, here’s my cloud function:",
"username": "Harry_Netzer1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Migrating JWT provider from Realm Cloud to MongoDB
|
2022-05-29T22:31:50.834Z
|
Migrating JWT provider from Realm Cloud to MongoDB
| 1,306 |
null |
[
"capacity-planning"
] |
[
{
"code": "",
"text": "Hi everyone.I am building a Saas app and i want to create a Multi tenant model where the clients can create an account and after they can to add users to this account. I was planning to use one database and use a tenantID in each documuent but i need to create backups for each tenant, i am going to work with a lot of optimistic reads/writes and, what i undestood from the mongodb Concurrency, mongodb locks collections not documents and in the future i want to allow to the clients to create their own models with their own fields so if i have one database i am goin to have a lot of collections in one database and this will be a problem.So i decided to create one database per tenant but i would like to know, how many databases can mongodb support with this vps setup:4 vCPU Cores\n8 GB RAM\n200 GB SSDEach database will have 16-20 collections and documents depends of the client.Thanks for you support.",
"username": "relb"
},
{
"code": "mongod",
"text": "Welcome to the MongoDB Community @relb !The general answer to capacity planning questions like this is that you will have to model your use case to understand resource usage and performance:The maximum number of databases and collections is a practical limit determined by a combination of factors including your system resources, schema design, workload, and performance expectations. If your working set is significantly larger than available RAM, performance will suffer and eventually become I/O bound shuffling data to and from disk. You can scale a single server vertically (adding more RAM and disk), but it will eventually be more economical to scale horizontally (across multiple servers) using sharding.what i undestood from the mongodb Concurrency, mongodb locks collections not documentsWiredTiger, MongoDB’s default storage engine since MongoDB 3.2 (early 2015), has document-level concurrency for most operations. See FAQ: Concurrency for more details.vps setup:4 vCPU Cores\n8 GB RAM\n200 GB SSDYou’ll have to evaluate whether 8GB RAM is sufficient for your working set and performance expectations, but keep in mind that this will be divided amongst:If you are building a self-hosted deployment for production usage, I highly recommend starting with a replica set deployment for data redundancy, high availability, and admin convenience. However, that would require a minimum of three instances rather than one.Once your multi-tenant SaaS platform has significant adoption you will likely have to consider how to scale and distribute workload. One option would to grow to a sharded cluster deployment for geographic distribution and workload balancing (for example, Segmenting Data by Application or Customer using zone sharding).As an alternative to managing and scaling a self-hosted deployment, you could also look into MongoDB Atlas which has Auto-Scaling for M10+ dedicated clusters and an Atlas Serverless offering currently in preview with pricing based on resource usage.Regards,\nStennie",
"username": "Stennie_X"
},
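If the single-database, tenant-ID-per-document route is chosen, the main discipline is scoping every query and index by the tenant field. A minimal Node.js sketch under that assumption (URI, database, collection and field names are all placeholders):

```javascript
const { MongoClient } = require("mongodb");

// A compound index prefixed by tenantId keeps each tenant's queries selective,
// and filtering on tenantId in every operation keeps tenants isolated.
async function recentOrdersForTenant(uri, tenantId) {
  const client = new MongoClient(uri);
  await client.connect();
  const orders = client.db("saas").collection("orders");

  await orders.createIndex({ tenantId: 1, createdAt: -1 });

  const docs = await orders
    .find({ tenantId })             // tenant filter on every query
    .sort({ createdAt: -1 })
    .limit(50)
    .toArray();

  await client.close();
  return docs;
}
```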
{
"code": "",
"text": "After reviewing the links you provide me, searching and studying my case, I think it is a better option to have only one database and one tenant ID in each document. I have to create a script to backup the tenants and for my custom models I have to create a collection and store all the documents with their own custom json.If my vps is overloaded, I can use mongodb sharding and apply it to each collection (especially the collection with the custom fields).Thanks for your help.",
"username": "relb"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How many databases can mongodb support?
|
2022-05-28T20:06:25.728Z
|
How many databases can mongodb support?
| 3,364 |
null |
[] |
[
{
"code": "",
"text": "Requirement: I am trying to integrate MongoDB with Microservices to utilize the ReactiveMongoTemplate and also need to have embeddedeMongoServer for unit test cases.Issue: The build is success in local but in pipeline I am having maven build failure.Could you please help resolve this?",
"username": "Sucindran_M"
},
{
"code": "",
"text": "I have had trouble doing the same and switched to the Data API since it only needs HTTP Post. I implemented it in Kotlin with Ktor client libraries and deployed it to GCP as a PUBSUB. On AWS it might be better to stick to python or move forward to rust or go. My solution (which sounds a bit immodest) doesn’t provide a local test server since the Data Api requires Atlas. You could use the test DB thoughUsing Google Cloud’s Scheduler and Cloud Functions to run a MongoDB task\nReading time: 9 min read\n",
"username": "Ilan_Toren"
}
] |
Trying to integrate MongoDB with Microservices to utilize the ReactiveMongoTemplate and also need to have embeddedeMongoServer for unit test cases
|
2022-05-27T14:29:49.725Z
|
Trying to integrate MongoDB with Microservices to utilize the ReactiveMongoTemplate and also need to have embeddedeMongoServer for unit test cases
| 1,277 |
null |
[
"kafka-connector"
] |
[
{
"code": "copy.existing",
"text": "I’m planning to use KAFKA-CONNECT to sync data between two systems. MongoDB as source with copy.existing as one of the connector configuration to sync past data.I know change streams can pull the past data with this config. We have around 34GB of data and we have data for last year. Can change stream pull the data from the beginning ? How long the old data that change streams have?",
"username": "Selvakumar_Ponnusamy"
},
{
"code": "",
"text": "copy.existing opens a change stream at the start marking the current time. It then copies the data via an aggregation query, then when complete starts a new change stream passing the resume token captured from the start. This was we don’t lose any events while the data is being copied.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "@Robert_Walters Hi Robert. I have trying to use Mongodb Source Connector in Kafka Connect. I did try to use copy.existing. But this only pushes data which is been newly inserted. Not the existing data which is been already present.\nPlease let me know about the workaround for this.Thanks,\nKunal",
"username": "Kunal_51024"
},
{
"code": "",
"text": "It sounds like your config file isn’t correct can you share your source config ?",
"username": "Robert_Walters"
},
{
"code": "{\n \"name\": \"MongoSourceConnectorConnector_0\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"key.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"connection.uri\": \"\",\n \"database\": \"demo\",\n \"collection\": \"identity\",\n \"pipeline\": \"[ { '$match': { 'operationType': {'$in': ['insert', 'update', 'replace'], } } }, { '$project': { '_id': 1, 'fullDocument': 1, 'ns': 1, } } ]\",\n \"publish.full.document.only\": \"true\",\n \"topic.namespace.map\": \"{\\\"*\\\":\\\"demo.identity\\\"}\",\n \"copy.existing\": \"true\"\n }\n}```",
"text": "hi @Robert_Walters ,",
"username": "Kunal_51024"
},
{
"code": "offset.partition.name",
"text": "You can delete MongoSourceConnectorConnector_0 and recreate it. Note that If you have used this same configuration previously and just restarted the connector the resume token is stored so it won’t copy from the beginning. Also, if you set offset.partition.name with a new value this will also ensure that the old resume token does not get used.",
"username": "Robert_Walters"
}
] |
Maximum past data that copy.existing pull the change streams
|
2021-10-03T11:16:05.505Z
|
Maximum past data that copy.existing pull the change streams
| 3,152 |
null |
[
"app-services-user-auth"
] |
[
{
"code": "",
"text": "I’m currently implementing Apple Sign In on an app (as Apple requires of apps that have other providers) and the IdToken that Apple returns natively on iOS 13/14 does not work with MongoDB Realm if it’s configured to use the ServiceId (as explained by the official MongoDB Apple ID Authentication guide).I did manage to get it working natively, but then the web authentication stops working because I have to change the Client Id on Realm Apple Authentication configuration to use the App ID, not the Service ID.This doesn’t seem right to me, Apple requires Apple ID authentication to be used on all iOS versions and it isn’t possible to use the native UI before iOS 13, so it’s always required to use both the native auth and the web auth.How could I use the native UI for iOS 13/14 and the web UI for all the others?",
"username": "Luccas_Clezar"
},
{
"code": "",
"text": "We’re facing the same problem, is there a way to specify two client ID’s? We tried comma exasperated, space separated, and semi-colon separated values but none of them seem to work even though no errors are thrown.",
"username": "Chris_Long"
},
{
"code": "",
"text": "That would really fix the issue, but I don’t think there’s a way to have two client ID’s. I tried to use Realm CLI to change the string into an array, but when I imported the configuration file into Realm the CLI complained that the field must be a string.For now, my “workaround” is to use the web service even for iOS 13+. The app opens an SFSafariViewController modally and when on iOS 13+ the native authentication dialog pops up automatically even if it was initiated on the web, then Safari gets redirected to the redirect URL and you can go back to the app with a custom URL scheme.Even though this works, it’s not ideal and doesn’t make sense for Realm to not support both native and web at the same time as they will always be used together (to support both iOS 13+ and <=12).",
"username": "Luccas_Clezar"
},
{
"code": "",
"text": "On 2021 August still facing the same issue. It is still not possible to do it via UI.",
"username": "Dududu"
},
{
"code": "",
"text": "@Andrew_Morgan hey Andrew do you have any idea on this? Thanks for all the great videos you posted on YouTube! Awesome content!",
"username": "Dududu"
},
{
"code": "",
"text": "@Luccas_Clezar @Chris_Long @Dududu Did you find a way to do this?",
"username": "Nyan"
}
] |
Can't use sign in with Apple natively and as a web authentication at the same time
|
2021-04-08T21:44:19.190Z
|
Can’t use sign in with Apple natively and as a web authentication at the same time
| 4,257 |
null |
[
"ops-manager"
] |
[
{
"code": "2021-02-23T07:51:06.315+0000 [JettyHttpPool-324] ERROR com.xgen.svc.mms.res.exception.DefaultThrowableHandler [DefaultThrowableHandler.java.handle:33] - com.xgen.svc.mms.res.AllClustersResource.getAllClustersForUser(javax.servlet.http.HttpServletRequest) - msg: null\njava.lang.NullPointerException: null\n at com.xgen.svc.mms.res.view.allclusters.OrgGroupView$OrgGroupViewComparator.compare(OrgGroupView.java:127)\n at com.xgen.svc.mms.res.view.allclusters.OrgGroupView$OrgGroupViewComparator.compare(OrgGroupView.java:101)\n at java.base/java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)\n at java.base/java.util.TimSort.sort(TimSort.java:220)\n at java.base/java.util.Arrays.sort(Arrays.java:1515)\n at java.base/java.util.stream.SortedOps$SizedRefSortingSink.end(SortedOps.java:353)\n at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)\n at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)\n at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)\n at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)\n at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)\n at com.xgen.svc.mms.res.AllClustersResource.generateGroupViews(AllClustersResource.java:101)\n at com.xgen.svc.mms.res.AllClustersResource.getAllClusters(AllClustersResource.java:75)\n at com.xgen.svc.mms.res.AllClustersResource_$$_jvstfcf_10._d16getAllClusters(AllClustersResource_$$_jvstfcf_10.java)\n at jdk.internal.reflect.GeneratedMethodAccessor266.invoke(Unknown Source)\n at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.base/java.lang.reflect.Method.invoke(Method.java:566)\n at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1287)\n at org.jvnet.hk2.internal.MethodInterceptorHandler.invoke(MethodInterceptorHandler.java:103)\n at com.xgen.svc.mms.res.AllClustersResource_$$_jvstfcf_10.getAllClusters(AllClustersResource_$$_jvstfcf_10.java)\n at com.xgen.svc.mms.res.AllClustersResource.getAllClustersForUser(AllClustersResource.java:50)\n at com.xgen.svc.mms.res.AllClustersResource_$$_jvstfcf_10._d18getAllClustersForUser(AllClustersResource_$$_jvstfcf_10.java)\n at jdk.internal.reflect.GeneratedMethodAccessor265.invoke(Unknown Source)\n at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.base/java.lang.reflect.Method.invoke(Method.java:566)\n at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1287)\n at org.jvnet.hk2.internal.MethodInterceptorHandler$MethodInvocationImpl.proceed(MethodInterceptorHandler.java:188)\n at com.xgen.module.upgrademode.AppUpgradeModeInterceptor.invoke(AppUpgradeModeInterceptor.java:35)\n at org.jvnet.hk2.internal.MethodInterceptorHandler$MethodInvocationImpl.proceed(MethodInterceptorHandler.java:211)\n at com.xgen.svc.mms.util.MethodCallStatsDImpl.invoke(MethodCallStatsDImpl.java:25)\n at org.jvnet.hk2.internal.MethodInterceptorHandler.invoke(MethodInterceptorHandler.java:121)\n at com.xgen.svc.mms.res.AllClustersResource_$$_jvstfcf_10.getAllClustersForUser(AllClustersResource_$$_jvstfcf_10.java)\n at jdk.internal.reflect.GeneratedMethodAccessor264.invoke(Unknown Source)\n at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)\n at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)\n at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)\n at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)\n at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)\n at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)\n at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)\n at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)\n at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)\n at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)\n at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)\n at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)\n at org.glassfish.jersey.internal.Errors.process(Errors.java:316)\n at org.glassfish.jersey.internal.Errors.process(Errors.java:298)\n at org.glassfish.jersey.internal.Errors.process(Errors.java:268)\n at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)\n at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)\n at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)\n at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:416)\n at org.glassfish.jersey.servlet.ServletContainer.serviceImpl(ServletContainer.java:409)\n at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:584)\n at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:525)\n at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:462)\n at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)\n at com.xgen.svc.brs.slurp.IncorrectContentTypeFilter.doFilter(IncorrectContentTypeFilter.java:30)\n at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)\n at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:121)\n at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:133)\n at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)\n at com.xgen.svc.mms.res.filter.GzipDecompressFilter.doFilter(GzipDecompressFilter.java:49)\n at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)\n at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:51)\n at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)\n at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)\n at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:753)\n at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1700)\n at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)\n at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)\n at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1667)\n at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)\n at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)\n at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n at org.eclipse.jetty.server.Server.handle(Server.java:505)\n at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)\n at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)\n at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)\n at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)\n at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)\n at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\n at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)\n at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)\n at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:698)\n at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:804)\n at java.base/java.lang.Thread.run(Thread.java:834)\n",
"text": "Hello,Our Ops Manager stopped working, it shows the error “There was a problem fetching your clusters.” on the front page. Also, the Kubernetes pod/ops-manager-0 shows more details:The rest of links work fine and all the clusters are available via UserAccount > Organizations string. The clusters with replica sets work as well.If someone has some ideas or suggestions about this issue, please share your thoughts.\nThank you.",
"username": "Oleksii_Prokhorenko"
},
{
"code": "",
"text": "Hi Oleskii,Were you able to fix the problem? I faced the same problem.\nIf yo could, your hep will be very appreciated.Thanks.",
"username": "ali_veli"
}
] |
Ops Manager error: "There was a problem fetching your clusters"
|
2021-02-23T09:19:20.541Z
|
Ops Manager error: “There was a problem fetching your clusters”
| 4,375 |
null |
[] |
[
{
"code": "",
"text": "I’m working on an app that uses user registration with help of Realm. So I need to provide my own implementation to confirm a user registration and change a password. I’m using ‘hosting’ section to provide code which is responsible for doing this stuff. However after I uploaded a new version of a document with a code for resetting password my client somehow calls old version of the document. But if I open the document from ‘hosting’ section in the browser I will see the latest version of the code.How long does it take to confirm changes in these files? And maybe there’re a daily pool for changes which I crossed already🙃",
"username": "Suprafen"
},
{
"code": "",
"text": "Hi @Suprafen,There is a button at the top in the hosting section to invalidate the cache of your hosted app. No more old version after that. \nimage938×608 36 KB\nCheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Oh, I see, it’s pretty easy.Thank you a lot. Have a good day ",
"username": "Suprafen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How long will it take to confirm changes in hosting section on my RealmUI?
|
2022-05-26T17:05:06.579Z
|
How long will it take to confirm changes in hosting section on my RealmUI?
| 1,333 |
null |
[] |
[
{
"code": "[\n {\n \"userName\": \"User Name 1\",\n \"email\": \"[email protected]\",\n \"active\": true\n },\n {\n \"userName\": \"User Name 2\",\n \"email\": \"[email protected]\",\n \"active\": true\n },\n {\n \"userName\": \"User Name 3\",\n \"email\": \"[email protected]\",\n \"active\": true\n }\n]\n[\n {\n \"userName\": \"User Name 1\",\n \"email\": \"[email protected]\",\n \"active\": null\n },\n {\n \"userName\": \"User Name 2\",\n \"email\": \"[email protected]\",\n \"active\": null\n },\n {\n \"userName\": \"User Name 4\",\n \"email\": \"[email protected]\",\n \"active\": null\n }\n]\n[\n {\n \"userName\": \"User Name 1\",\n \"email\": \"[email protected]\",\n \"active\": true\n },\n {\n \"userName\": \"User Name 2\",\n \"email\": \"[email protected]\",\n \"active\": false\n },\n {\n \"userName\": \"User Name 3\",\n \"email\": \"[email protected]\",\n \"active\": true\n },\n {\n \"userName\": \"User Name 2\",\n \"email\": \"[email protected]\",\n \"active\": true\n },\n {\n \"userName\": \"User Name 4\",\n \"email\": \"[email protected]\",\n \"active\": true\n }\n]\n",
"text": "Hey everyone!I’m not very experienced with MongoDB, and I would like a suggestion.I have a collection, it will be updated daily by automatically uploading an XLSX file in which each row represents a document of the collection and they will have some sort of unique identifier, imagine that the document “person” has the field “username” and at any given time in the whole collection there can be only one document with that username and the field “active” = true.Imagine that the first XLSX is uploaded, the collection will be empty so all the documents will be inserted with “active” = true.Collection after first upload.The next day a second XLSX is uploaded containing the following data:At this point, the first document (User 1) of the new XLSX file is identical to the existing document in the collection, so nothing needs to be done and I ignore it… The second document (User 2) changed his “email” so I need to change the “active” field from the existing document in the collection to false and insert a new document that will completely replace the existing information relative to User 2 with the new data in the XLSX file and “active” = true… The third document is new to the collection so it will be inserted normally as a new document with “active” = true.The Resulting collection will be as follow:I cannot update the previous document if something changed, is a project requirement, so I keep a sort of “history” of every single variation.The “real” user information for a given username will be defined by the “active” flag.What will be the best way to do this using MongoDB? Eficiency wise.Any ideas about how to face face this situation?\nPreferably MongoDB side, not by fetching all the data and comparing all fields backend side.Thanks in advance for the help <3",
"username": "RandomNando"
},
{
"code": "{\n \"userName\": \"User Name 1\",\n \"email\": \"[email protected]\",\n \"active\": true\n }\n",
"text": "To meis notidentical to the existing document in the collectionBecause active:true is not the same as active:false.",
"username": "steevej"
},
{
"code": "[\n {\n \"userName\": \"User Name 1\",\n \"email\": \"[email protected]\",\n \"active\": null\n },\n {\n \"userName\": \"User Name 2\",\n \"email\": \"[email protected]\",\n \"active\": null\n },\n {\n \"userName\": \"User Name 3\",\n \"email\": \"[email protected]\",\n \"active\": null\n }\n]\n[\n {\n \"userName\": \"User Name 1\",\n \"email\": \"[email protected]\",\n \"active\": true\n },\n {\n \"userName\": \"User Name 2\",\n \"email\": \"[email protected]\",\n \"active\": true\n },\n {\n \"userName\": \"User Name 3\",\n \"email\": \"[email protected]\",\n \"active\": true\n }\n]\n",
"text": "My bad, I thought that it was clear enough. When the data is imported is always active = false / null since active is defined inside the collection, not in the XLSX file. It will be active once if inserted, if the other fields change.Initial file data:Imagine that the first XLSX is uploaded, the collection will be empty so all the documents will be inserted with “active” = true.Once inserted in the collection:I understand the confusion, I intended identical except for the “active” field, I’ll correct it now.At this point, the first document (User 1) of the new XLSX file is identical to the existing document in the collection, so nothing needs to be done and I ignore it… The second document (User 2) changed his “email” so I need to change the “active” field from the existing document in the collection to false and insert a new document that will completely replace the existing information relative to User 2 with the new data in the XLSX file and “active” = true… The third document is new to the collection so it will be inserted normally as a new document with “active” = true.Any advice about the best way of optimizing a field-by-field comparison and depending on the result insert and/or update?",
"username": "RandomNando"
},
{
"code": "{\n \"userName\": \"User Name 1\",\n \"email\": [ \"[email protected]\" ]\n} \n{\n \"userName\": \"User Name 2\",\n \"email\": [ \"[email protected]\" ] \n}\n{\n \"userName\": \"User Name 3\",\n \"email\": \"[email protected]\"\n}\n{ /* this document is unchanged */\n \"userName\": \"User Name 1\",\n \"email\": [ \"[email protected]\" ]\n} \n{ /* here a new email is pushed, [email protected]\n is active because it is last and other one is not active \n because it is not last. */\n \"userName\": \"User Name 2\",\n \"email\": [ \"[email protected]\" , \"[email protected]\" ] \n}\n{ /* it is not clear what happen to an existing userName when\n it is absent from an update. */\n \"userName\": \"User Name 3\",\n \"email\": [ \"[email protected]\" ]\n}\n{ /* new document */\n \"userName\": \"User Name 4\",\n \"email\": [ \"[email protected]\" ]\n}\n{\n \"userName\": \"User Name 1\",\n \"current_email\": \"[email protected]\",\n \"email_history\": [ \"[email protected]\" ]\n} \n{\n \"userName\": \"User Name 2\",\n \"current_email\": \"[email protected]\",\n \"email_history\": [ \"[email protected]\" ] \n}\n{\n \"userName\": \"User Name 3\",\n \"current_email\": \"[email protected]\"\n \"email_history\": [ \"[email protected]\" ]\n}\n{ /* this document is unchanged */\n \"userName\": \"User Name 1\",\n \"current_email\": \"[email protected]\",\n \"email_history\": [ \"[email protected]\" ]\n} \n{ /* here a new email is pushed, [email protected]\n and becomes current/active email */\n \"userName\": \"User Name 2\",\n \"current_email\": \"[email protected]\",\n \"email_history\": [ \"[email protected]\" , \"[email protected]\" ] \n}\n{ /* it is not clear what happen to an existing userName when\n it is absent from an update so I assume a no-op. */\n \"userName\": \"User Name 3\",\n \"current_email\": \"[email protected]\",\n \"email_history\": [ \"[email protected]\" ]\n}\n{ /* new document */\n \"userName\": \"User Name 4\",\n \"current_email\": \"[email protected]\",\n \"email_history\": [ \"[email protected]\" ]\n}\n",
"text": "Your problem is not easy because you are not leveraging the flexibility of MongoDB.You problem would be almost trivial if you would keep 1 document per userName and store the email in an array.When a new email comes in, you simply $push it into the array. The active email is always the $last.After initial insert you have the following documents.The on the next day. You can easily end up with.As I write, I experiment with Compass and may be the following is easier to implement.\nFirst upload:after 1st update you end up withDespite repeating current_email as $last of email_history, I am pretty sure that model is efficient space wise and performance wise as your original. In the original, the old document has to be written with active false at the same time the new document is written. Using the same document with history, only 1 document has to be written back.I am still thinking about how to implement your original requirement because it is hard and hard problems are more interesting but I think a schema change would be better.",
"username": "steevej"
},
{
"code": "",
"text": "I can’t modify the same document; the requirement was having a single document for each modification. They wanted a full history of everything.Also, the document is not that small, it’s not enormous but it has like 10 fields, I would end up with who knows how many arrays, but mostly, the imported documents could be out of order, let me explain that…In this project, X times a month someone (or an automation script) will upload a document that will be read, each row will be a document and need to be inserted in the collection.The fields may change and every time that something changes, I have to flag the existing document as “active” and the new document will take his place.The uploaded file has a “validFrom” general to the document, representing from which point in time that document will be valid, so this modification even if is uploaded today, shouldn’t be considered until we pass “validFrom” in time.{\n“userName”: “User Name 1”,\n“validFrom”: “2022/01/01”,\n“email”: “email1@email. com”,\n“active”: false\n}\n{\n“userName”: “User Name 1”,\n“validFrom”: “2022/01/01”,\n“email”: “email12345@email. com”,\n“active”: true\n}\n{\n“userName”: “User Name 1”,\n“validFrom”: “2022/03/01”,\n“email”: “email2@email. com”,\n“active”: true\n}\n{\n“userName”: “User Name 1”,\n“validFrom”: “2022/03/10”,\n“email”: “email3@email. com”,\n“active”: true\n}At a given time you could have 10 documents active for a single “userName” as long as the “validFrom” is different. And doing this inside an array in a single document would mean having for each field an array of objects each of them with different “validFrom” and “active” properties also loosing the information related to the “uploadProperties”, one of the fields is an object with information about the document upload, like date, file, user and so, so that they can recreate if something goes wrong which upload or filed was wrong, when was it uploaded, all the documents of that single upload, etc.I know that’s not an easy task and like you, I like hard questions that’s why I keep asking.",
"username": "RandomNando"
}
] |
Document Field-by-Field Comparison -> Insert/Update
|
2022-05-25T10:49:49.325Z
|
Document Field-by-Field Comparison -> Insert/Update
| 2,559 |
null |
[
"aggregation",
"queries",
"node-js"
] |
[
{
"code": "[\n {\n \"_id\": ObjectId(\"6249c77a99e5c26e50736c02\"),\n \"employee\": ObjectId(\"622061b73b2eaac4b15d42e4\"),\n \"createdAt\": \"2022-04-03T16:12:42.328Z\"\n },\n {\n \"_id\": ObjectId(\"624a700199e5c26e50736c07\"),\n \"employee\": ObjectId(\"622061b73b2eaac4b15d42e4\"),\n \"createdAt\": \"2022-04-06T04:11:45.891Z\"\n }\n]\n\n {\n \"_id\": ObjectId(\"6272e84b6fc62f16bd0f337d\"),\n \"month\": \"2022-04\",\n \"employee\": ObjectId(\"622061b73b2eaac4b15d42e4\"),\n \"shift\": [\n {\n \"_id\": ObjectId(\"6272e84b6fc62f16bd0f337c\"),\n \"date\": \"2022-04-03\",\n \"name\": \"Day\"\n },\n {\n \"_id\": ObjectId(\"6272e84c6fc62f16bd0f337e\"),\n \"date\": \"2022-04-04\",\n \"name\": \"Week Off\"\n },\n {\n \"_id\": ObjectId(\"6272e8546fc62f16bd0f337f\"),\n \"date\": \"2022-04-5\",\n \"name\": \"Night\"\n }\n ]\n }\n]\nWeek OffAbsents[\n {\n \"_id\": ObjectId(\"6249c77a99e5c26e50736c02\"),\n \"createdAt\": \"2022-04-03T16:12:42.328Z\",\n \"employee\": ObjectId(\"622061b73b2eaac4b15d42e4\"),\n \"shift\": {\n \"_id\": ObjectId(\"6272e84b6fc62f16bd0f337c\"),\n \"date\": \"2022-04-03\",\n \"name\": \"Day\"\n }\n },\n {\n \"_id\": ObjectId(\"543761b43b2eaac4b25d42e8\") //Not Required,,\n \"createdAt\": \"2022-04-04T00:00:00.000Z\",\n \"employee\": ObjectId(\"622061b73b2eaac4b15d42e4\"),\n \"shift\": {\n \"_id\": ObjectId(\"6272e84c6fc62f16bd0f337e\"),\n \"date\": \"2022-04-04\",\n \"name\": \"Week Off\"\n }\n },\n {\n \"_id\": ObjectId(\"668761b43b2eaac4b25d42e5\") //Not Required,\n \"createdAt\": \"2022-04-05T00:00:00.000Z\",\n \"employee\": ObjectId(\"622061b73b2eaac4b15d42e4\"),\n \"absent\": true\n },\n {\n \"_id\": ObjectId(\"624a700199e5c26e50736c07\"),\n \"createdAt\": \"2022-04-06T04:11:45.891Z\",\n \"employee\": ObjectId(\"622061b73b2eaac4b15d42e4\")\n }\n]\n",
"text": "I am trying to make attendance system and have multiple collection like attendances, shifts, leaves, Holidays. In the shifts collection have Week Off but obviously absents are not in record. I have created\na Playground.\nMy attendance collections as follows.And my shifts collection looks like bellow.Here I need to get Week Off that have marked with employee and date but not related to attendance collections and as well as need employees wise Absents date in this same result which have no record and related to any collection.\nI need something like this.Please help out.",
"username": "Pallab_Kole"
},
{
"code": "",
"text": "I got the inspiration from Stackoverflow Post\nBut unable to implement in my scenario.",
"username": "Pallab_Kole"
},
{
"code": "\"date\": \"2022-04-04\",\"createdAt\": \"2022-04-06T04:11:45.891Z\"\"date\": \"2022-04-5\"> d = new Date( \"2022-04-5\" )\n>2022-04-05T04:00:00.000Z\n> strings.find()\n{ _id: 0, d: '2022-05-26' }\n{ _id: 1, d: '2022-05-27' }\n{ _id: 2, d: '2022-05-28' }\n> dates.find()\n{ _id: 0, d: 2022-05-26T00:00:00.000Z }\n{ _id: 1, d: 2022-05-27T00:00:00.000Z }\n{ _id: 2, d: 2022-05-28T00:00:00.000Z }\n> strings.stats().avgObjSize\n32\n> dates.stats().avgObjSize\n25\n",
"text": "I did not have time to experiment yet but here is a recommendation I can make from seeing your sample documents.Do not use string data type for dates such as:\"date\": \"2022-04-04\",Use date data type such as:\"createdAt\": \"2022-04-06T04:11:45.891Z\"Dates as date are safer to useConsider the following error from your sample documents:\"date\": \"2022-04-5\"versus what you get with the safer Date.Dates as date are more space efficientGiving 2 collections:Using collections.stats() we get:Not a big deal, but multiply by millions of documents. And that size difference is also compounded in any indexes that include your dates. And compounded in all data transfers during queries, results and replication.Dates as date are faster during comparisonsWhen comparing 2 dates as date, it is 1 low level comparisons. When comparing 2 dates as string, it 1 low level comparisons for each character. When comparing 2 strings such as 2022-05-26 and 2022-05-27 you only detect difference, hence order, after the 10th character comparison.Cannot compare strings and dates without convertingYou already have some dates as date data type (field createdAt) so you will not be able to compare the 2 together without converting one to the other type. Indexes on the converted value cannot be used.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for your suggestion. @steevej . I will remember your suggestion.Please suggest me if possible to achieve my requirement accordingly to my data.",
"username": "Pallab_Kole"
},
{
"code": "",
"text": "I’d love to help but I don’t understand what you are trying to do. You give an example of what you want to get back but not what it means. Do you want to get back a record for each employee? Each day that they worked? Something else?",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Yes I want to get back the records for each employee and each day that warked and not worked. @Asya_Kamsky",
"username": "Pallab_Kole"
}
] |
Mongodb unmatch date aggregate lookup
|
2022-05-25T16:29:02.871Z
|
Mongodb unmatch date aggregate lookup
| 2,529 |
[
"queries",
"data-modeling",
"compass",
"mongodb-shell"
] |
[
{
"code": "{ \"people.age\": { $in: [24] } }",
"text": "Hello Members,\nI am a new mongodb user, this why I am asking this question. I have a document, in this document I have 3 objects under one _id.\nWhen I am filtering { \"people.age\": { $in: [24] } } I am getting full this document. But I want to see only the matching object. Is it possible to show only the matching object? If you kindly explain me it will be helpful for me.",
"username": "Debajyoti_Chowdhury"
},
{
"code": "",
"text": "You need a $project with a $filter.",
"username": "steevej"
}
] |
MongoDB query for nested array for specific object
|
2022-05-27T22:24:02.418Z
|
MongoDB query for nested array for specific object
| 2,136 |
|
null |
[
"security",
"atlas"
] |
[
{
"code": "",
"text": "Having set up ATLAS database encryption using Customer Keys with AWS KMS, what are the implications of changing the CMK at a later date? If this were deemed necessary would the change require a dump of the existing database followed by a restore into a new cluster to which the new CMK could then be used?",
"username": "DXC_AWS"
},
{
"code": "",
"text": "No need to re-write the data: MongoDB Atlas can do an update of the wrapping keys as well as of the database level keys in a rolling manner that’s light-weight (envelope encryption)",
"username": "Andrew_Davidson"
}
] |
AWS-KMS CMK Change
|
2022-05-26T23:21:38.295Z
|
AWS-KMS CMK Change
| 2,740 |
[
"aggregation"
] |
[
{
"code": "",
"text": "Hi Team,\nWe are using Mongo Realm trigger and function, which doing aggregations on the fields of collection and then inserting calculated data to other collection, tigger is running every day at 12:00 AM.Collection having 500 thousands of data in a single day, we are getting below logs while trigger gets excecute:uncaught promise rejection: FunctionError: exceeded max async work queue size of 1000Please see below is the screenshot:\nMongoRealmError1350×692 70.1 KB\n",
"username": "Jaymin_Modi"
},
{
"code": "",
"text": "Hi @Jaymin_Modi and welcome in the MongoDB Community !I think the main problem here is that you reached the limit of 120s of execution time for a single function.One solution would be to create smaller processing job (split the job into smaller pieces then map / reduce) or optimize the pipeline so it runs faster.If everything is already optimized, another solution would be to upgrade to a bigger tier or increase the IOPS / RAM.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] |
Facing uncaught promise rejection: FunctionError: exceeded max async work queue size of 1000
|
2022-05-26T10:01:46.380Z
|
Facing uncaught promise rejection: FunctionError: exceeded max async work queue size of 1000
| 2,302 |
|
null |
[
"node-js",
"dot-net",
"python",
"connecting"
] |
[
{
"code": "client = MongoClient(\"mongodb://192.168.10.182:27017\")\nserver = client.GetServer()\ndatabase = server.GetDatabase(\"rhino\")\n",
"text": "I’m trying to connect Rhinoceros 3d, a modeling software, with a local instance of MongoDB, via IronPython.Ive the references to the latest drive rand that imports fine. However when trying to simply connect I get the following error:\n“The type initializer for ‘MongoDB.Driver.Core.Misc.DnsClientWrapper’ threw an exception.”here is the code snippet:Its fairly straight forward, Im able to connect in csharp and also nodejs an dregular python. Just no luck here in this instance. Any ideas?",
"username": "Craig_Forneris"
},
{
"code": "pymongoDnsClientWrapperLookupClientlookupClient = LookupClient()\nDnsClient.NETDnsClientLookupClientpymongo",
"text": "Hi, @Craig_Forneris,Welcome to the MongoDB Community Forums.Based on the error message, IronPython is using the .NET/C# Driver rather than pymongo (our native Python driver). Notably the static constructor (aka type initializer) for DnsClientWrapper is throwing an exception. The static constructor creates a new default LookupClient from the DnsClient.NET NuGet package. I would suggest trying the following in IronPython to see if an exception is thrown:You’ll have to reference the DnsClient.NET NuGet package and import the DnsClient namespace. If instantiating the LookupClient instance throws an exception, then the problem is in that third-party dependency.Note that IronPython isn’t a supported nor tested deployment target for the .NET/C# Driver. You might have better success using the pymongo driver, which is our native Python driver.Sincerely,\nJames",
"username": "James_Kovacs"
}
] |
Iron Python and Mongo DB C#Driver
|
2022-05-27T19:50:55.650Z
|
Iron Python and Mongo DB C#Driver
| 1,265 |
null |
[
"queries",
"dot-net"
] |
[
{
"code": "db.data.FindAsync(d => d.Field == \"a\");\n",
"text": "Hi,This document only mentioned the bson query syntax and the bson query builder when programming in c#. I assume the bson query and the bson query builder are equivalent.In c#, the API also allows us to use the expression query syntax, e.g.:I want to ask if the API will translate the expression syntax into bson and then send it to the server, or if it pulls all the data to the client and then performs the filtering.",
"username": "Xi_Shen"
},
{
"code": "ToListIEnumerable<T>var query = coll.AsQueryable().Where(d => d.Field == \"a\"); // will generate MQL\nvar results = query.ToList(); // now you're dealing with an IEnumerable<T> in memory\nvar filtered = results.Where(d => d.AnotherField == 42); // LINQ-to-Objects\nToString()var query = coll.AsQueryable().Where(d => d.Field == \"a\");\nConsole.WriteLine(query.ToString());\naggregate([{ \"$match\" : { \"Field\" : \"a\" } }])\n",
"text": "Hi, @Xi_Shen,Welcome to the MongoDB Community Forums.You are correct that you can specify queries in the .NET/C# driver using a variety of syntactic options including builders and expressions. The query is translated into MQL and sent to the server as BSON. The driver never performs in-memory filtering/sorting unless you explicitly call ToList (or similar method) and then use LINQ-to-Objects on the returned IEnumerable<T>.You can use the MongoDB Analyzer for .NET to visualize the generated MQL for Builder and LINQ queries in Visual Studio and JetBrains Rider. In many cases, you can also call ToString() on a query to display the generated MQL:Output:Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Difference between about c# expression query syntax and bson query syntax
|
2022-05-27T01:19:26.839Z
|
Difference between about c# expression query syntax and bson query syntax
| 3,014 |
null |
[
"data-modeling",
"containers",
"schema-validation"
] |
[
{
"code": "validatedb.collection.validate()db.collection.validate()docker run \\\n --name mongo5_0_validator_test \\\n -d \\\n --env=MONGO_INITDB_ROOT_USERNAME=admin \\\n --env=MONGO_INITDB_ROOT_PASSWORD=password \\\n mongo:5.0\ndocker exec -it mongo5_0_validator_test mongo --username admin --password password\nuse test\ndb.col1.insert({ name: \"joe\" });\ndb.col1.insert({ namE: \"bre\" });\ndb.col1.insert({ test: \"poe\" });\ndb.runCommand({\n collMod: \"col1\",\n validator: {\n $jsonSchema: {\n bsonType: \"object\",\n properties: {\n \"_id\": { bsonType: \"objectId\" },\n name: { bsonType: \"string\", description: \"test\" }\n },\n additionalProperties: false\n }\n },\n validationLevel: \"strict\",\n validationAction: \"error\"\n});\ndb.col1.insert({ name: \"jack\" });\ndb.col1.insert({ namE: \"this throws an error because 'namE' is not a defined property, we still only have 4 documents now, 2 existing are invalid to the schema pre validator addition.\" });\ndb.col1.validate()\nnInvalidDocuments0{\n \"ns\" : \"test.col1\",\n \"nInvalidDocuments\" : 0,\n \"nrecords\" : 4,\n \"nIndexes\" : 1,\n \"keysPerIndex\" : {\n \"_id_\" : 4\n },\n \"indexDetails\" : {\n \"_id_\" : {\n \"valid\" : true\n }\n },\n \"valid\" : true,\n \"repaired\" : false,\n \"warnings\" : [\n \"Detected one or more documents not compliant with the collection's schema. See logs.\"\n ],\n \"errors\" : [ ],\n \"extraIndexEntries\" : [ ],\n \"missingIndexEntries\" : [ ],\n \"corruptRecords\" : [ ],\n \"ok\" : 1\n}\n{\n \"ns\" : \"test.col1\",\n \"nInvalidDocuments\" : 0,\n \"nrecords\" : 4,\n \"nIndexes\" : 1,\n \"keysPerIndex\" : {\n \"_id_\" : 4\n },\n \"indexDetails\" : {\n \"_id_\" : {\n \"valid\" : true\n }\n },\n \"valid\" : true,\n \"warnings\" : [ ],\n \"errors\" : [ ],\n \"extraIndexEntries\" : [ ],\n \"missingIndexEntries\" : [ ],\n \"ok\" : 1\n}\nnInvalidDocmentsvalidate",
"text": "The mongo documentation explains schema validation and the ability to validate existing documents in two places:To perform validation checks on existing documents, use the validate command or the db.collection.validate() shell helper.db.collection.validate() also validates any documents that violate the collection’s schema validation rules.The output of the validate function would then theoretically report the number of all documents that are invalid based on the current validator settings on the collection. This, however, is not what is observed. Mongodb validate() will return that all Documents are valid after adding a validator to a collection with existing documents that are not valid.In v5 and greater it does report warnings and says to check logs, but in <v4 no information is reported at all.In v5 of mongo we at least get a warning that tells us some documents are actually invalid, but the nInvalidDocuments is still 0You’ll notice in v4 there isn’t even any hint that anything may be wrong with the documents.I expect, based on the mongo documentation, that the nInvalidDocments would report the # of Documents that have failed. It does not appear that there is any good way to identify the invalid documents without looking at database logs, which is not very useful.How can you determine all existing invalid Documents in a collection that has had validator added/updated?Is there any way to iterate over the collection and validate each document in lue of this behavior, especially if this behavior is actually expected?The only idea we had in the discord conversation was creating an entire new collection with the new validation on it and bulk inserting the old collection into it to see what fails. This is obviously not an ideal scenario.It is also not great or appropriate for application logic to do the validation since there is no straightforward way to go from mongo’s custom bson jsonSchema to whatever language/driver you’re using.I also did not find anything on validating a single Document, only the full collection validate method exists which would rule out some kind of pipeline/match to figure out all invalid documents.(I can’t put valid links here because forums block me, I figure linking the docs above were better use of my 2 possible links…)",
"username": "joshua_bell"
},
{
"code": "db.collection.validate()$jsonSchema$jsonSchema",
"text": "Welcome to the MongoDB Community @joshua_bell !In v5 and greater it does report warnings and says to check logs, but in <v4 no information is reported at all.The change in validation output and db.collection.validate() behaviour is due to Improved Error Messages for Schema Validation in MongoDB 5.0.You haven’t mentioned which release of MongoDB 4.x, but the major release series are 4.0 (now end of life), 4.2, and 4.4. New features and compatibility changes are only introduced in new major releases, so referring to x.y is more meaningful than vX in terms of common behaviour.The major versioning scheme changed as of MongoDB 5 (Accelerating Delivery with a New Quarterly Release Cycle, Starting with MongoDB 5.0) so major production releases are now annual (5.0, 6.0, …). There are also quarterly rapid releases (5.1, 5.2, 5.3) which are development previews leading up to the next major release (X.0).How can you determine all existing invalid Documents in a collection that has had validator added/updated?You can use the $jsonSchema query operator to find existing documents that do (or do not) satisfy the criteria for a validator. The $jsonSchema operator requires a JSON schema definition as a parameter, so you can include the current collection validator or a custom one.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "\"nInvalidDocuments\"",
"text": "Thanks for the info. Those runs where with v4.4 and v5.0 (Last two docker images released on docker hub)I didn’t realize you can use the $jsonSchema as a query operator so that is good to know. Sounds like that will be the only way to detect existing invalid records in a collection.Can you explain what \"nInvalidDocuments\" actually means though? Based in the current docs the $jsonSchema validator should be run when calling collection.validate. It clearly runs at some point in v5 because it prints the warning, but I would expect the invalid document count to include those because it is set to strict error and not warn. Is the behavior of validate more well documented? The description for nInvalidDocuments does not talk about edge cases that go against the quoted docs at the top of my original post. Those quotes seem misleading if the validate function does not, in fact, validate existing documents.",
"username": "joshua_bell"
},
{
"code": "db.collection.validate()validatevalidatenInvalidDocumentsvalidatevalidatenCompliantDocuments",
"text": "Hi @joshua_bell,db.collection.validate() is a wrapper around the validate command. The current documentation definitely needs some improvements:Prior to MongoDB 5.0, validate was focused on the structural integrity of indexes and documents so the nInvalidDocuments is reporting documents that cannot be read by the underlying storage engine (for example, document corruption with mismatched document vs data size).In MongoDB 5.0+, validate checks for non-compliant documents and should include more details in the logs about non-compliant documents. Documents that are not compliant with the schema are not counted as invalid documents.In the upcoming MongoDB 6.0 release validate has a new nCompliantDocuments counter.I added an improvement suggestion for the documentation: DOCS-15364: Add more detail on what validate considers “invalid documents”.I hope that clarifies the expected outcomes. Please also feel free to comment/upvote/create DOCS tickets in the MongoDB Jira issue tracker (or provide feedback to the team directly via a documentation page). Discussion and feedback in the forums is also an option Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for the explanation and getting a ticket made for the docs, that makes a lot of sense. That clarifies everything I need to know. Looking forward to a v6 with the compliant documents counter.",
"username": "joshua_bell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Validate() behavior not consistent with documetnation
|
2022-05-26T21:37:20.726Z
|
Validate() behavior not consistent with documetnation
| 3,801 |
null |
[
"mdbw22-hackathon"
] |
[
{
"code": "",
"text": "Greetings, Hackers!It’s TIME!!! Less than 24 hours to go to submit your project! Get your Hackathon project submissions in now. If you’re not in, you can’t WIN !! Remember that if it’s Friday 27th May Anywhere On Earth, then it’s not too late to submit your project.Since April 11th, we’ve enjoyed hosting this Hackathon and now it’s time to show off all your hard work!! Our submissions wizzard is open. To get straight to it go HERE and simply click +NEW TOPIC at the top right of the screen.We’re looking forward to seeing the result of all your hard work!",
"username": "Mark_Smith"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Last 24 Hours - get those submissions in!
|
2022-05-27T14:47:37.131Z
|
Last 24 Hours - get those submissions in!
| 2,557 |
null |
[
"database-tools",
"backup"
] |
[
{
"code": "date +\"%m-%d-%y\"date +\"%m-%d-%y\"",
"text": "Hello,I’m having this error running mongodump:mongodb@shards:~$ mongodump --port 27005 --db phoenix --collection audit_log --query ‘{ “_id”: { “$gt”: ObjectId(“5c2bcb014cf5143349ee0fd7”) }}’ --out /mnt/backup/mongodump/date +\"%m-%d-%y\" --verbose\n2022-05-27T12:34:16.590+0200 will listen for SIGTERM, SIGINT, and SIGKILL\n2022-05-27T12:34:16.643+0200 Failed: error parsing query as Extended JSON: error decoding key _id: invalid JSON input. Position: 18. Character: Omongodb@shards:~$ mongodump --port 27005 --db phoenix --collection audit_log --query ‘{ “_id”: { “$gt”: {“ObjectId”(“626ecf14679c0b146a8c8184”) }}}’ --out /mnt/backup/mongodump/date +\"%m-%d-%y\" --verbose\n2022-05-27T14:13:07.212+0200 will listen for SIGTERM, SIGINT, and SIGKILL\n2022-05-27T14:13:07.216+0200 Failed: error parsing query as Extended JSON: error decoding key _id.$gt: invalid JSON input: missing colon after key “ObjectId”My syntax is not correct?Thanks in advance.Regards",
"username": "Agusti_Luesma_Termens"
},
{
"code": "--querymongodumpObjectId{ \"_id\": { \"$gt\": { \"$oid\": \"5c2bcb014cf5143349ee0fd7\" } } }\n",
"text": "Hello @Agusti_Luesma_Termens, Welcome to the MongoDB community forum,As per the documentation of --query, The query must be in Extended JSON v2 format (either relaxed or canonical/strict mode).And see what format mongodump command supports,So you have to use the canonical format of the object id instead of ObjectId function,",
"username": "turivishal"
}
] |
Mongodump invalid json
|
2022-05-27T12:58:50.625Z
|
Mongodump invalid json
| 3,772 |
[
"aggregation",
"queries",
"atlas-functions",
"graphql"
] |
[
{
"code": "{\n users {\n _id \n tasks {\n \n _id\n }\n \n \n }\n}\n",
"text": "I am very new to Graphql. I have a basic question I want to write a GraphiQL query to fetch data among two collections I have a two collections one is Users and other one is Tasks.Ideally in Sql the same would be written as follows…SELECT * FROM Users INNER JOIN Tasks ON Users.id = Tasks.user_idI tried writing in GraphiQl. I also tried adding filters also tried adding relationships. But I always not able to retrieve the data.My collections are as follows. Please let me know if any more info is needed from me?.. Appreciate your help.\nimgonline-com-ua-twotoone-rVxaEXtjMOmnf3863×927 359 KB\n",
"username": "Joel_Fernandes"
},
{
"code": "{\n $lookup:\n {\n from: <collection to join>,\n localField: <field from the input documents>,\n foreignField: <field from the documents of the \"from\" collection>,\n as: <output array field>\n }\n}\n",
"text": "Hi @Joel_Fernandes, welcome to the community.\nHave tried using $lookup to join two collections? Here’s the syntax for the same:In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer",
"username": "SourabhBagrecha"
},
{
"code": "$lookup",
"text": "$lookupI am not able to find anything mentioned with $lookup in GraphQL docs? I suppose your referring to Mongo Db docs … I am trying to write the query in GraphiQL.",
"username": "Joel_Fernandes"
},
{
"code": "{\n users {\n _id \n tasks { \n _id\n name\n }\n }\n}\n",
"text": "Hi @Joel_Fernandes,\nThe GraphQL query for your case would look something like this:Are you using Realm GraphQL or are creating GraphQL on your own using MongoDB?In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "I am using GraphQL with Realm and Atlas database. Using Apollo Client in javascript with React Js front end.",
"username": "Joel_Fernandes"
},
{
"code": "{\n \"data\": null,\n \"errors\": [\n {\n \"message\": \"Cannot query field \\\"tasks\\\" on type \\\"User\\\".\",\n \"locations\": [\n {\n \"line\": 36,\n \"column\": 5\n }\n ]\n }\n ]\n}\n",
"text": "I get the below error on output…On the sugguested GraphQL solution… I want to know a way to do it without making changes in my schema.Some additional info … I am using https://www.apollographql.com/. with React js on the front end… Adding relationships to the schema is breaking our Mobile app we are using Realm sync in the mobile app.",
"username": "Joel_Fernandes"
},
{
"code": " {\n users {\n _id \n tasks { \n _id\n name\n }\n }\n}\n",
"text": "Also as per my research I see the below query can lead me to a n+1 problem? as mentioned here Solving the N+1 Problem for GraphQL through Batching",
"username": "Joel_Fernandes"
},
{
"code": "exports = async function fetchTasks(source, input) {\n const mongodb = context.services.get(\"mongodb-atlas\")\n const tasks = mongodb.db(\"task-manager\").collection(\"tasks\")\n // Replace them with your ^^^^ Database Name and your ^^^^ Collection Name\n return await tasks.find({ user_id: source._id }).toArray()\n // Please note that the above source ^^ is responsible for getting \n // the details from the parent GraphQL Type (User).\n}\ntasksUserNoneExisting Type(List) {\n users {\n _id \n tasks { \n _id\n name\n }\n }\n}\n{\n \"data\": {\n \"users\": [\n {\n \"name\": \"Sourabh Bagrecha\",\n \"tasks\": [\n {\n \"_id\": \"626f86d5e2a0e2655c95d017\",\n \"name\": \"Laundry\"\n }\n ]\n }\n ]\n }\n}\n",
"text": "Hi @Joel_Fernandes,\nYes, it is possible to fetch the tasks associated with a single user.\nFollow the following steps to proceed:\nimage1924×878 120 KB\nEnter the following details in their respective input fields:\nGraphQL Field Name: tasks\nParent Type: User\nFunction: fetchTasks\nInput Type: None\nPayload Type: Select Existing Type(List) and then select [Task]\nThe form should look something like this once done:\n\nimage1918×960 165 KB\n\nHit Save Draft.\nimage1948×252 76 KB\nThe above GraphQL Query will return the following:And yes, this would lead to N+1, and for now there’s no way around that.In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Thanks for the valuable information it really does help. Does using the $lookup in the fetchTasks function help us avoid the n+1 problem? … ( I had tried using the $lookup in the custom resolver function and it had worked)… because the n+1 problem is only in Graphql as far as I know… and $lookup is a mongodb function to help us get the combined data . I am not aware if the n+1 problem exists in mongodb … Is that correct?",
"username": "Joel_Fernandes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
GraphQL equivalent Join Query among collections?
|
2022-04-17T20:08:07.539Z
|
GraphQL equivalent Join Query among collections?
| 7,197 |
|
null |
[
"transactions"
] |
[
{
"code": "Mongo.startSessioncausalConsistencytrue{readConcern: {level: 'snapshot'}}findmajoritysnapshotmajoritysnapshotsnapshotmajoritymajoritywriteConcern{w: 1}readConcernlocalmajoritywriteConcernreadConcernsnapshotsnapshotreadConcern",
"text": "I am trying to implement the following functionality in an app but I struggle in getting all the pieces together:I want to implement a migration-like process consisting in executing a set of transactions.\nDependencies between these transactions is modeled by a directed acyclic graph (DAG): each transaction should read the writes of preceding transactions in the DAG. Some transactions consist in scanning an entire collection, sometimes updating documents inside that collection, but the scan should behave as if those updates are not happening. Some non-scan reads may happen during this transaction and those should read the updates. Note that all reads including scan reads must read updates of preceding transactions. Some transactions may happen in parallel because they do not act on the same parts of the database. Hence, if I want the entire process to execute swiftly, I cannot just linearize the DAG and use one big transaction. Here a set of questions I could not quite find the answer to:Could you help me sorting out these? I find the current documentation about this subject to be both terse and scattered.",
"username": "Josh_Hamlet"
},
{
"code": "",
"text": "Somebody has an idea on the subject?",
"username": "Josh_Hamlet"
},
{
"code": "Mongo.startSessioncausalConsistencytruemajoritymajoritysnapshotmajoritysnapshotsnapshotmajoritymajoritysnapshotsnapshotmajoritywriteConcern{w: 1}readConcernlocalmajoritywriteConcernreadConcernsnapshotsnapshotreadConcern",
"text": "Hi @Josh_Hamlet , welcome !Since the documentation links that you provided are pointing to v4.4, I’d assume that you are using MongoDB v4.4From “Tunable Consistency in MongoDB - YouTube” and the Mongo.startSession docs I get that causalConsistency is true by default for sessions. Is this correct?MongoDB v3.6+ enables causal consistency in client sessions. Client sessions only guarantee casual consistency for read operations with majority, and write operations with majority. In addition, please ensure that the application only have one thread at a time to execute those operations in the client session. Please see also Client Session and Causal Consistency GuaranteesIs a snapshot read also a majority read? Does that mean snapshot reads succeeding writes see those writes? In that case, the only difference between multi-document snapshot and majority reads is that majority reads can see writes occurring after the cursor initialization?Read concern snapshot is only available for transactions. If a transaction with read concern snapshot is part of a causally consistent session, upon transaction commit with write concern majority, the transaction operations are guaranteed to have read from a snapshot of majority-committed data that provides causal consistency with the operation immediately preceding the transaction start.Since the default writeConcern is {w: 1} and the default readConcern is local , does that mean we have to specify majority as default writeConcern and readConcern for each session?I’d recommend to be explicit about the level of write/read concern on sessions as intended.Is this pattern necessary and sufficient in my use case (each transaction advances its session time to at least the completion time of each of its preceding transaction’s session)?It’s difficult to say without more details of the use case, but for that one sentence above it would be yes.Please note that the code example you referred to, does not use transactions. Operations within a causally consistent session are not isolated from operations outside the session. If a concurrent write operation interleaves between the session’s write and read operations, the session’s read operation may return results that reflect a write operation that occurred after the session’s write operation.In addition to the initial snapshot collection scan, can all other multi-document reads use a snapshot readConcern while maintaining causal consistency? If that’s the case, I assume the only penalty is potential increased memory usage and execution time?Read concern snapshot is only available for multi document transactions (Certain read operations outside of multi-document transactions starting in MongoDB v5.0).I want to implement a migration-like process consisting in executing a set of transactions.\nDependencies between these transactions is modeled by a directed acyclic graph (DAG): each transaction should read the writes of preceding transactions in the DAG.Distributed cluster-wide transaction could be complex, I hope the answers above help you.It is quite challenging to answer some of these questions without having more context, as the answer may differ depending on the requirements and the deployment configuration. i.e. sharded clusterIf you have additional questions it would be helpful to provide a specific use case and concern/issue that you’re facing.Regards,\nWan.",
"username": "wan"
},
{
"code": "await transaction1();\nawait transaction2();\nawait transaction3();\nawait transaction4();\nasync function transaction3() {\n await wrapTransaction(async (session) => {\n for await (const x of scanCollectionAtSnapshotAfterTransaction2(session)) {\n await doCausallyConsistentReadsAndWrites3(session);\n }\n });\n}\nwrapTransactionscanCollectionAtSnapshotAfterTransaction2doCausallyConsistentReadsAndWrites3doCausallyConsistentReadsAndWrites3doCausallyConsistentReadsAndWrites3",
"text": "Thanks for your help. Unfortunately, I think I still do not understand how to achieve what I want. Maybe this pseudocode will help you understand part of what I want to achieve:The missing pieces in this example is how to implement the subroutines so that the semantics of their names are respected:Does this help you answer the question?",
"username": "Josh_Hamlet"
},
{
"code": "wrapTransactionwrapTransactionsession.withTransaction()scanCollectionAtSnapshotAfterTransaction2doCausallyConsistentReadsAndWrites3doCausallyConsistentReadsAndWrites3doCausallyConsistentReadsAndWrites3wrapTransactionscanCollectionAtSnapshotAfterTransaction2doCausallyConsistentReadsAndWrites3 transactionLifetimeLimitSecondsmongod",
"text": "Hi @Josh_Hamlet ,Does this help you answer the question?Thank you, the pseudo code helps better to elaborate your use case.The reads and writes inside wrapTransaction are atomic and isolated.As long as all of the operations within wrapTransaction are within a single transaction, then yes. i.e. session.withTransaction()You cannot have a transaction inside of a transaction (or in this case triple nested). If you have a transaction at wrapTransaction then every operations within it are in the same transaction.Since the scanCollectionAtSnapshotAfterTransaction2 is a loop within the same transaction as doCausallyConsistentReadsAndWrites3 , then this this will read any writes happening (interleaving). Essentially reading your own writes.Also, keep in mind that if the loop is a long running loop, by default, a transaction must have a runtime of less than one minute.However you can modify this limit using transactionLifetimeLimitSeconds for the mongod instances. Although this may increase the cache pressure further.Regards,\nWan.",
"username": "wan"
},
{
"code": "wrapTransactionsession.withTransaction()session.withTransaction()wrapTransactionsession.withTransaction()sessionwrapTransactionscanCollectionAtSnapshotAfterTransaction2doCausallyConsistentReadsAndWrites3 {readConcern: {level: 'snapshot'}}transactionLifetimeLimitSecondsmongodtransactionLifetimeLimitSecondsconst wrapTransaction = async ({\n\tclient,\n\ttransactionOptions,\n\tsessionOptions,\n\ttransaction,\n}) => {\n\t// NOTE causalConsistency: true is the default but better be explicit\n\tconst session = client.startSession({\n\t\tcausalConsistency: true,\n\t\t...sessionOptions,\n\t});\n\tlet result;\n\ttry {\n\t\tawait session.withTransaction(async () => {\n\t\t\tresult = await transaction(session);\n\t\t}, transactionOptions);\n\t} catch (error) {\n\t\tconst message = error instanceof Error ? error.message : 'unknown error';\n\t\tconsole.debug(message);\n\t\tconsole.debug({error});\n\t\tthrow new Error('Database Transaction Failed', message);\n\t} finally {\n\t\t// NOTE No need to await this Promise, this is just used to free-up\n\t\t// resources.\n\t\tsession.endSession();\n\t}\n\n\treturn result;\n};\n\nconst forEachAsync = async ({client, collection, selector, cb}) =>\n\twrapTransaction({\n\t\tclient,\n\t\tsessionOptions: undefined,\n\t\ttransactionOptions: undefined,\n\t\ttransaction: async (session) => {\n\t\t\t// NOTE This needs to read from a snapshot, it should not read writes\n\t\t\t// happening in `cb(session, item)`\n\t\t\tconst cursor = collection.find(selector, {\n\t\t\t\tsession,\n\t\t\t\treadConcern: {level: 'snapshot'}\n\t\t\t}).hint({$natural: 1});\n\t\t\tfor (;;) {\n\t\t\t\tconst item = await cursor.next();\n\t\t\t\tif (item === null) break;\n\t\t\t\tawait cb(session, item);\n\t\t\t}\n\t\t}\n\t});\n\n// NOTE example usage, for illustration only, do not try to simplify in order\n// to circumvent the problem: replaces each item counter with the sum of other\n// items counter\nawait forEachAsync({\n\tclient: SomeClient,\n\tcollection: SomeCOllection,\n\tselector: {}, // NOTE Everything\n\tcb: async (session, {_id}) => {\n\t\tSomeCOllection.deleteOne({_id}, {session});\n\t\t// NOTE this should read the most up to date value for counter\n\t\tconst sameKeyButNotSelf = await SomeCOllection.find({}, {session}).toArray();\n\t\tconst total = sameKeyButNotSelf.reduce((\n\t\t\tprevious, {counter}\n\t\t) => previous + counter, 0);\n\t\t// NOTE this new item should not be read by the `find` cursor loop of\n\t\t// `forEachAsync` but should be read by the `find` of succeeding calls\n\t\t// to `cb` inside that loop.\n\t\tawait SomeCOllection.insert({counter: total}, {session});\n\t},\n});\n",
"text": "Thanks for you answer @wan. Unfortunately I think it is still not clear if it is possible to achieve what I described. Here are some additional details.As long as all of the operations within wrapTransaction are within a single transaction, then yes. i.e. session.withTransaction()I do indeed intend to use session.withTransaction() as part of wrapTransaction. What is not so clear to me is what options to pass to session.withTransaction() and database CRUD operations (other than the session object) to have the code I sent you behave as I have described.You cannot have a transaction inside of a transaction (or in this case triple nested). If you have a transaction at wrapTransaction then every operations within it are in the same transaction.I do not need nested transactions. Whenever I compose subroutines, I just pass the relevant transaction session object around.Since the scanCollectionAtSnapshotAfterTransaction2 is a loop within the same transaction as doCausallyConsistentReadsAndWrites3 , then this this will read any writes happening (interleaving). Essentially reading your own writes.I thought, see for instance this video, that it is possible to pass {readConcern: {level: 'snapshot'}} to individual CRUD operations.Also, keep in mind that if the loop is a long running loop, by default, a transaction must have a runtime of less than one minute.However you can modify this limit using transactionLifetimeLimitSeconds for the mongod instances. Although this may increase the cache pressure further.Indeed, thanks for the information. I have not run into that problem yet though. And I would modify transactionLifetimeLimitSeconds if needed. I am interested into correctness at reasonable cost and I do not expect cache pressure to be a problem in my particular case.With the hope of make more progress at answering the original question, here is what I have come up with so far, could you tell me where it would fail to achieve what I intend it to do?Kind regards,\nJosh",
"username": "Josh_Hamlet"
},
{
"code": "{readConcern: {level: 'snapshot'}}\t\t\t// NOTE This needs to read from a snapshot, it should not read writes\n\t\t\t// happening in `cb(session, item)`\n",
"text": "Hi @Josh_Hamlet,I do not need nested transactions. Whenever I compose subroutines, I just pass the relevant transaction session object around.As long as you only do one transaction per session, that should be correct.that it is possible to pass {readConcern: {level: 'snapshot'}} to individual CRUD operations.Operations in a transaction use the transaction-level read concern. You can set the transaction-level read concern at the start of the transaction. If left unset, it would default to the session-level read concern.I don’t think the video above mentioned that you can specify read concern per individual operation within a transaction.As you are looping through the cursor (which you can’t specify the read concern at that level), you will read your own writes.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hi @wan,So the example code does not work, how could it be modified so that it behaves like I want it to behave?Perhaps using two sessions in parallel? One for the loop, and one for operations inside the loop?Kind regards,\nJosh",
"username": "Josh_Hamlet"
},
{
"code": "",
"text": "Hi @Josh_Hamlet,Let’s go back to the original question.I want to implement a migration-like process consisting in executing a set of transactions.Depending on the use case (how big is the data, other write operations, etc), if you would like to perform some sort of migration, would you be able to migrate to another collection instead ? i.e. read from collection A, insert/update in collection BRegards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hi Wan!I am sorry but I am not looking for a solution that involves making a copy of the DB, although temporary. I am trying to exploit the transactions implementation of MongoDB to achieve correct behavior, with possibly minimal runtime and maintenance overhead. I think using two parallel transactions could work, I have just not tried it yet.Kind regards,\nJosh",
"username": "Josh_Hamlet"
}
] |
Correctly exploiting causal consistency and snapshot reads
|
2022-01-10T22:37:29.486Z
|
Correctly exploiting causal consistency and snapshot reads
| 4,641 |
null |
[
"node-js",
"crud",
"mongoose-odm"
] |
[
{
"code": "\t{\n\t\t\"username\" : \"Raxo\",\n\t\t\"score\" : 0,\n\t\t\"solved\" : [\n\t\t\t{\n\t\t\t\t\"challenge\" : {\n\t\t\t\t\t\"_id\" : ObjectId(\"62716b84cef98df9866d6a8a\"),\n\t\t\t\t\t\"name\" : \"Challenge1884 \",\n\t\t\t\t\t\"category\" : \"crypto\",\n\t\t\t\t\t\"flag\" : \"Nice try XD\",\n\t\t\t\t\t\"hints\" : [\n\t\t\t\t\t\t\"Easy Peasy Lemon Squeezy!\"\n\t\t\t\t\t],\n\t\t\t\t\t\"points\" : 100,\n\t\t\t\t\t\"info\" : \"I am a challenge!\",\n\t\t\t\t\t\"level\" : 0,\n\t\t\t\t\t\"solveCount\" : 1,\n\t\t\t\t\t\"file\" : \"\",\n\t\t\t\t\t\"__v\" : 0\n\t\t\t\t},\n\t\t\t\t\"timestamp\" : 1651602135100\n\t\t\t}\n\t\t]\n}\n await users.updateMany({\n solved: { $elemMatch: { 'challenge._id': challengeExists._id } }\n }, {\n $inc: { score: -challengeExists.points },\n $pull: { solved: { $elemMatch: { 'challenge._id': challengeExists._id } } }\n });\ntype or paste code here\n",
"text": "So here is my mongoose Object:I am trying to delete the challenge inside the solved array using:It does find the user successfully and remove points from the score but does not successfully pull the challenge, I cant seem to find what I am doing wrong. Any help appreciated",
"username": "Oscar_Gomez"
},
{
"code": "challengessolvedchallengessolved",
"text": "Hi @Oscar_Gomez - Welcome to the community! Could you provide more details about your use case and expected output? I would just like to clarify as I imagine there may be a case where there are multiple challenges in the solved array field.In saying so, could you please also provide the following information as well:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] |
Mongoose Nodejs pull element from array by property of property
|
2022-05-03T19:15:29.376Z
|
Mongoose Nodejs pull element from array by property of property
| 4,098 |
null |
[
"node-js",
"database-tools",
"mdbw22-hackathon",
"mdbw-hackhelp"
] |
[
{
"code": "",
"text": "We have a mongoimport command that is running for more than 24 hours. Wanted to be double sure if we cancel the operation then will it have data upto that timeframe ?@Shane_McAllister Also wondering why the script is taking so long !! Is this normal for that much amount of data?+@Fiewor_John",
"username": "Avik_Singha"
},
{
"code": "",
"text": "How much data are you loading? Can you post the command here? Can you look at the collection and see if it is growing in Compass?If you cancel it will not impact data which has been loaded already.",
"username": "Joe_Drumgoole"
},
{
"code": "mongoimport",
"text": "We basically downloaded gdelt csv data from 2019 or so and ran the mongoimport script\nimage571×768 59.8 KB\nYes, it’s growing in CompassAlright. Makes sense that the uploaded data will stay there. Thank you",
"username": "Fiewor_John"
}
] |
If we cancel mongoimport midway, will it import the data upto that point or will it cancel alltogether?
|
2022-05-26T15:52:58.960Z
|
If we cancel mongoimport midway, will it import the data upto that point or will it cancel alltogether?
| 3,052 |
null |
[
"database-tools",
"backup"
] |
[
{
"code": "Failed: archive writer: error writing data for collection to disk: error reading collection: (CursorNotFound) cursor id 7535000083817651709 not found / Mux ending but selectCases still open 3",
"text": "Hi everyone,\nI have been using mongodump in order to backup my entire db. I do it once a day.\nHowever, since switching to mongodb 4.2.2 I have been having random errors on my backup job:Failed: archive writer: error writing data for collection geo.geos to disk: error reading collection: (CursorNotFound) cursor id 7535000083817651709 not found / Mux ending but selectCases still open 3\nSome days it will work, others it won’t at all. The crash will occur on different collections everytime, I cannot understand what is happening.\nDo you have any idea ?For more context here is my stack: I run mongodb 4.2.2 in a replicaset inside a kubernetes cluster.\nI use a kubernetes scheduled job that launches a simple shell script: Docker\nHere is the information of mongorestore inside the containermongorestore version: r4.2.2\ngit version: a0bbbff6ada159e19298d37946ac8dc4b497eadf\nGo version: go1.12.13\nos: linux\narch: amd64\ncompiler: gcThanks for your help",
"username": "Andre_Paulos"
},
{
"code": "",
"text": "Hi Andre\nI’m having exactly the same issue, did you find the reason?\nDavid",
"username": "David_Espinosa"
},
{
"code": "",
"text": "Hi David,\nNo I haven’t found the solution yet. However I think the issue might be coming from the fact that mongodump seems to be accessing all the members of my replicaset. From what I read it should only access the primary member of the replicaset.\nI doing a few tests. I will post a comment if I ever found what’s going on.",
"username": "Andre_Paulos"
},
{
"code": "",
"text": "Hi David,\nSo as I exposed the other day I have found the solution. Because I was using a kubernetes service to access my replicaset, mongodump requests were being redirected to all the members of the replicaset and not only the master one.\nAll I did was point mongodump directly to the master and since then all the backups have been working fine.\nHope it helps you",
"username": "Andre_Paulos"
},
{
"code": "",
"text": "Thank you very much @Andre_Paulos! I will use this approach too ",
"username": "David_Espinosa"
},
{
"code": " mongodump --gzip --host 10.131.0.239 --port 27017 ...\n",
"text": "Thank you. You saved my date. I finally worked when I tried to use directly the podId of Mongodb Primary node in my StatefulSet, something like this:",
"username": "Huong_Nguyen3"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Mongodump randomly throwing CursorNotFound
|
2020-04-10T15:15:50.821Z
|
Mongodump randomly throwing CursorNotFound
| 9,096 |
null |
[
"node-js"
] |
[
{
"code": "",
"text": "In node application, there is a addUser and removeUser function to add/remove admin users.Is there a similar function to update a user? e.g. updating a user role, or authenticationRestrictions (IP addresses)",
"username": "Dave_Teu"
},
{
"code": "db.command db.addUser('restrictedUser', 'password123');\n db.command({ updateUser: 'restrictedUser', authenticationRestrictions: [{ clientSource: '127.0.0.1'}] })",
"text": "For those who are looking for the same thing. Apparently you can use db.command .E.g.",
"username": "Dave_Teu"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
NodeDriver: Updating admin user
|
2022-05-26T11:01:01.248Z
|
NodeDriver: Updating admin user
| 1,263 |
[
"mongoose-odm",
"connecting"
] |
[
{
"code": "",
"text": "despite whitelisting the ip address, i am not able to access the mongodb cluster.\n\nWhatsApp Image 2021-09-04 at 5.57.06 PM1600×1017 259 KB\n",
"username": "Firoz_N_A"
},
{
"code": "",
"text": "Welcome @Firoz_N_A ! I’m sure someone will be able to steer you in the right direction soon . They’re good like that.",
"username": "Jason_Nutt"
},
{
"code": "",
"text": "despite whitelisting the ip address, i am not able to access the mongodb cluster.Have you tried allow access from anywhere?\nIf it works the IP you added may not be the correct one\nAre you using any firewall/VPN/proxy\nDid it work with shell or Compass\nDid it work before or new setup\nCheck this link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
MongooseServerSelectionError
|
2021-09-05T05:42:45.196Z
|
MongooseServerSelectionError
| 3,090 |
|
null |
[] |
[
{
"code": "",
"text": "Hi,is anyone facing issues logging into their realm App?i keep getting the following error:Request failed (GET https://realm.mongodb.com/api/client/v2.0/app//location): (status 530)and based on a quick google, status 530 implies the site is frozen/inactive?",
"username": "5ff25d3440814e198ead77c273f7525"
},
{
"code": "Error 1016You've requested a page on a website (realm.mongodb.com) that is on the Cloudflare network. Cloudflare is currently unable to resolve your requested domain (realm.mongodb.com).",
"text": "to add on - i realized this error comes from cloudflare workers.\nError 1016\nYou've requested a page on a website (realm.mongodb.com) that is on the Cloudflare network. Cloudflare is currently unable to resolve your requested domain (realm.mongodb.com).",
"username": "5ff25d3440814e198ead77c273f7525"
}
] |
Mongodb Realm login Error : status 530
|
2022-05-27T01:56:28.133Z
|
Mongodb Realm login Error : status 530
| 1,579 |
null |
[
"crud"
] |
[
{
"code": "{\n \"_id\": \"xyz\",\n \"badges\": [{\n \"count\": 2,\n \"categorieId\": \"menu1\",\n \"subCategorieId\": \"1\"\n }, {\n \"count\": 1,\n \"categorieId\": \"menu2\",\n \"subCategorieId\": \"1\"\n }]\n}\nreturn getCollection()\n .updateOne(\n and(\n eq(\"badges.categorieId\", \"menu2\"),\n eq(\"badges.subCategorieId\", \"1\")\n ),\n Updates.inc(\"badges.$.count\", 1)\n );\n",
"text": "Hello everyone,I have an array of objects and would like to update the count of the object where categoryId = “menu2” and subCategoryId = “1”.in my mongodb i currently have two records in the array:if i now execute the following method the object with categorieId “menu1” will be updated and not my menu2…I am using the io.quarkus.mongodb.reactive.ReactiveMongoCollection.Thanks in advance!",
"username": "neeeextL"
},
{
"code": "return getCollection().updateOne(\n eq(\"_id\", \"xyz\"),\n Updates.combine(\n Updates.inc(\"badges.$[badges].count\", 1)\n ),\n new UpdateOptions()\n .arrayFilters(Arrays.asList(\n and(\n eq(\"badges.categorieId\", \"menu2\"),\n eq(\"badges.subCategorieId\", \"1\")\n ))));\n",
"text": "The filtered positional operator is working:Why the other method does not work, i unfortunately do not know.",
"username": "neeeextL"
},
{
"code": "mongosh$elemMatch$elemMatchmongosh/// Original Document\ndb> db.collection.find()\n[\n {\n _id: 'xyz',\n badges: [\n { count: 2, categorieId: 'menu1', subCategorieId: '1' },\n { count: 1, categorieId: 'menu2', subCategorieId: '1' }\n ]\n }\n]\n/// Using `$elemMatch` in the update\ndb> db.collection.updateOne({ \"badges\": { \"$elemMatch\": { \"categorieId\": \"menu2\", \"subCategorieId\": \"1\" } } },{$inc:{\"badges.$.count\":1}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\n/// Resulting document after the update\ndb> db.collection.find()\n[\n {\n _id: 'xyz',\n badges: [\n { count: 2, categorieId: 'menu1', subCategorieId: '1' },\n { count: 2, categorieId: 'menu2', subCategorieId: '1' } /// <--- count incremented by 1\n ]\n }\n]\n\"badges\"$ (update)db.collection.updateOne()db.collection.findAndModify()$query documentarrayquery documentarrayFiltersbadges$[<identifier>]",
"text": "Hi @neeeextL - Welcome to the community.I will start by saying I am not too familiar with “io.quarkus.mongodb.reactive.ReactiveMongoCollection.” but I have performed my testing below in mongosh to perhaps help illustrate a possible change to the update operation which may suit your use case.if i now execute the following method the object with categorieId “menu1” will be updated and not my menu2…If you need multiple conditions to match an array element, then you’ll need to use $elemMatch.Please correct me if I am wrong here but I believe the behaviour you’re after would be demonstrated in the Update Embedded Documents Using Multiple Field Matches documentation in which you will be required to use $elemMatch operator in the query filter portion of your code as mentioned above. You can select your specific language at the top right corner of the page (Java) as shown below:\nimage2078×604 105 KB\nI have performed the above in a test environment using the sample document provided in mongosh. You may need to alter this to suit your environment but this is more so for demonstration purposes to see if it updates the document to how you desire:Please note that the above was only tested briefly in my test environment (MongoDB Version 5.0) with a single sample document. It is highly recommended to perform any code changes against a test environment to verify it suits your use case before performing the changes in productionIn addition to the above, the query parameters specified in your initial update operation are not an array field. They are fields within objects inside the \"badges\" array field. The $ (update) documentation states:When used with update operations, e.g. db.collection.updateOne() and db.collection.findAndModify() ,Why the other method does not work, i unfortunately do not know.The documentation does provide a similar example to this which may help illustrate why your initial update operation updated the unexpected element.However in saying so, the second method you had attempted states the query filter to match on within the arrayFilters section which ended up matching the object inside the badges array you wanted to have updated. To update all elements that match an array filter condition or conditions, see the filtered positional operator instead $[<identifier>] .However, if you have any further questions please feel free to post them here.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Update array object mongoDB
|
2022-05-02T11:05:44.526Z
|
Update array object mongoDB
| 2,534 |
null |
[
"aggregation",
"dot-net"
] |
[
{
"code": "SetWindowFieldsIAggregateFluent<BsonDocument>BsonDocumentSetWindowFieldsBsonDocumentAddFields",
"text": "Hi to all.\nI have a complex aggregation pipeline, where SetWindowFields is only one of the stages in the middle of it.\nSince the method is defined as IAggregateFluent<BsonDocument> it stops being strongly typed and I have to work with BsonDocument in the next stages (or find a way to project it back to the type it is).Am I missing something or is this a current limitation? I didn’t find many examples using the C# LINQ provider with SetWindowFields. All the examples have it as a final stage returning BsonDocument.Since the function works more or less like AddFields I checked to see how this is defined but it’s not available through LINQ.",
"username": "adas"
},
{
"code": "SetWindowFieldsIAggregateFluent<BsonDocument>$setWindowFieldsAs<TNewResult>public class C\n{\n public int Id { get; set; }\n public int X { get; set; }\n}\n\npublic class CWithAverage\n{\n public int Id { get; set; }\n public int X { get; set; }\n public double Average { get; set; }\n}\nCWithAverageSetWindowFields// collection is of type IMongoCollection<T>\nvar aggregate = collection\n .Aggregate()\n .SetWindowFields(output: p => new { Average = p.Average(c => c.X, null) })\n .As<CWithAverage>();\n",
"text": "SetWindowFields is defined to return IAggregateFluent<BsonDocument> in order to reduce the number of required overloads and also to not require you to create a custom class to hold the result.If you do want to define a custom class to hold the result of $setWindowFields you can use the existing As<TNewResult> pseudo-stage to tell the driver that you want to specify a new class to represent the documents in the pipeline at that stage.For example, if you had the following classes:You could use the CWithAverage class to represent the result of SetWindowFields like this:",
"username": "Robert_Stam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
C# LINQ Strongly type return type for SetWindowFields?
|
2022-05-26T16:41:55.349Z
|
C# LINQ Strongly type return type for SetWindowFields?
| 2,142 |
null |
[
"node-js"
] |
[
{
"code": "let Datastore = {\n name: 'Datastore',\n primaryKey: '_id',\n properties: {\n _id: 'objectId',\n name: 'string',\n description: 'string'\n }\n}\n\nlet Block = {\n name: 'Block',\n primaryKey: '_id',\n properties: {\n _id: 'objectId',\n sourceUid: 'string',\n content: 'string',\n datastore: 'Datastore?'\n }\n}\nDatastoreBlockBlock.contentdatastore.namelet foundBlocks = realm.Object('Block').filtered('datastore.name == \"Source 12345\" AND content CONTAINS \"cool block\" AND content CONTAINS \"also this\" AND content LIKE \"*wildcard*this*that*\" AND content BEGINSWITH \"starting\"');\nDatastore.nameBlockDatastore?Blockcontent",
"text": "Hi all!Fairly new to Realm (first post on these forums) using the JS SDK for building an Electron app using Realm local and eventually will be setting up Realm Sync to store and sync data with MongoDB Atlas.I come from a SQL Server background. Trying to de-program my relational DB normalization mindset, haha. Slowly but surely loving Realm / Mongo document based DB concept more and more everyday!My question is on query optimization. Lets say I have the following schema:Lets say that Datastore only has about 10 objects but Block has 5 million. And then lets say for example I want to do several string CONTAINS and LIKE matches on the Block.content but also then filter by a specific datastore.name.So I have a couple questions:Any other tips would be much appreciated! Thanks a lot and I look forward to my (our) journey into Realm and eventually MongoDB Atlas!Best,\nShawn",
"username": "Shawn_Murphy"
},
{
"code": "//first filter down to just Blocks matching the Datastore reference which reduces collection from 5mil to 100k results\nlet filterBlocks = realm.Object('Block').filtered('datastore.name == \"Source 12345\"');\n//now apply the string content operators to the 100k remaining results\nlet foundBlocks = filterBlocks.filtered('content CONTAINS \"cool block\" AND content CONTAINS \"also this\" AND content LIKE \"*wildcard*this*that*\" AND content BEGINSWITH \"starting\"');\n",
"text": "To be more clear, my question 2 above would mean do something like this:So the question becomes does breaking this into two queries perform more efficiently because the string content operators are only applied against 100k Block objects and not the initial full collection of 5mil objects?",
"username": "Shawn_Murphy"
},
{
"code": "",
"text": "Hi @Shawn_Murphy welcome to the community!\nYou have a good intuition about performance cost of the query.I also recommend that you try turning on an index for any properties that you are doing exact string matches on (Datastore.name). This should dramatically help performance of the query, at the cost of some overhead to maintain the index on insertion/deletion/modify.",
"username": "James_Stone"
},
{
"code": "",
"text": "@James_Stone thanks for the response! One follow up question on your suggestion on Indexing. I posted another question at link below, but if my property is a string with lots of text like a paragraph and I try to do “contains” queries on it, will indexing help? Or only when doing == exact matches of the entire string property?See here: How does Realm indexes work with string properties and partial match search",
"username": "Shawn_Murphy"
}
] |
Realm filter Query optimization with several CONTAINS parameters and a Realm Reference filter
|
2022-05-11T00:12:29.625Z
|
Realm filter Query optimization with several CONTAINS parameters and a Realm Reference filter
| 3,286 |
null |
[
"database-tools",
"backup"
] |
[
{
"code": "",
"text": "Hi,I have taken data dump using mongodump from Standalone instance. Now I want to restore that data into a Shard cluster.Please note that the data I will restore already contains the shard key. So for restoration, should the mongorestore command point to one of the routers or individual shards? What are the best practices to be followed for mongodump and mongodump?Thanks",
"username": "Allwyn_Jesu"
},
{
"code": "mongorestore --host <host> --port <port> -u <username> --authenticationDatabase admin /path/to/file\n",
"text": "You will want to restore to a mongos.The data wont be sharded when it is restored, it will be on a single shard, you will need to enable sharding once the data is restored.",
"username": "tapiocaPENGUIN"
},
{
"code": "mongosmongorestoremongosmongosmongodumpmongorestoremongorestore",
"text": "Hi @Allwyn_Jesu,You definitely want to restore into a sharded cluster via mongos as @tapiocaPENGUIN suggested, but there are a few further details to be aware of.You will want to restore to a mongos.More specifically: you should always mongorestore data into a sharded cluster using mongos so the cluster metadata is properly maintained. Inserting directly to a shard (bypassing mongos) is likely to cause operational issues.The data wont be sharded when it is restored, it will be on a single shard, you will need to enable sharding once the data is restored.Sharding information isn’t part of the metadata when you mongodump data from a sharded cluster.However, mongorestore uses the sharding options for the target collection so you can define a shard key prior to restoring data and avoid some unnecessary rebalancing that would happen if a collection is sharded after all data is inserted.If you already know the distribution of shard key values and plan to mongorestore into an empty collection, you can also save some time by Pre-Splitting Chunks in a Sharded Cluster.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to restore data into Shard Cluster?
|
2022-05-26T16:37:29.810Z
|
How to restore data into Shard Cluster?
| 3,015 |
null |
[
"mdbw22-hackathon"
] |
[
{
"code": "",
"text": "All good things must come to an end! And unfortunately, for the hackathon, that end is soon! Submissions will close May 27th - so get movingThe Submission Form is HERE and further details are also HERE and to entice you even more, remember, we’ve got some suberb prizes up for grabs and all submissions will receive exclusive Hackathon Swag!!Don’t delay - submit today!!",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "What time on the 27th will the submissions be closing?",
"username": "Fiewor_John"
},
{
"code": "",
"text": "We’ve been saying all along that time zones are hard!! So, as long as it’s still May 27th somewhere (eg end of day PST US) submissions will still be accepted!Make sense? ",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Just 2 days to go now! Get those submissions in!
|
2022-05-26T10:36:54.622Z
|
Just 2 days to go now! Get those submissions in!
| 2,720 |
[
"database-tools",
"mdbw22-hackathon",
"mdbw-hackhelp"
] |
[
{
"code": "fieldFile",
"text": "Hello, @Joe_Drumgoole , while in the process of loading my cluster with more data, I get this error\nimage1353×38 3.34 KB\nand I think it’s because there’s a document with an empty string for the GoldSteinScale and the fieldFile then returns this error.CC: @Mark_Smith",
"username": "Fiewor_John"
},
{
"code": "--ignoreBlanksmongoimport.sh",
"text": "Update:\nI checked the mongoimport documentation and I saw an --ignoreBlanks flag that I added to the mongoimport.sh script and it is now working, but I don’t think this really solves the problem.",
"username": "Fiewor_John"
},
{
"code": "--ignoreblanks",
"text": "--ignoreblanks seems to do the right thing. What do want it to do?",
"username": "Joe_Drumgoole"
},
{
"code": "--ignoreBlanks",
"text": "Oh.\nI wasn’t quite sure if --ignoreBlanks was ignoring the whole document or just maybe skipping the empty field.\nI wanted it to do the latter.",
"username": "Fiewor_John"
}
] |
MongoImport failed because of empty string
|
2022-05-23T19:14:10.630Z
|
MongoImport failed because of empty string
| 3,176 |
|
null |
[
"cxx",
"c-driver"
] |
[
{
"code": " db[\"Rotation\"].insert_many(documents);db[str].insert_many(documents);",
"text": "Hii all , iam working on monogcxx in qt creator . i need to create multiple collection in a loop.\nfor example\n db[\"Rotation\"].insert_many(documents);\ni have data of rotation in collection “Rotation”. But i need to Rotation2 , 3 , 4 etc upto n ,\nso how to create like that . i know programming logic but dont know mongocxx syntax\nerror happening when we put string variable in\ndb[str].insert_many(documents);\nwhat is the correct syntax???",
"username": "VIVEK_A"
},
{
"code": "db.getCollection(str)",
"text": "Hi. Have you tried db.getCollection(str)? You can use variables for collection names this way.",
"username": "adas"
}
] |
How to create multiple collection using loop?
|
2021-11-05T12:06:46.529Z
|
How to create multiple collection using loop?
| 3,015 |
[] |
[
{
"code": "",
"text": "\nimage867×863 18.6 KB\nHello everyone, I am new to MongoDB. Currently I am stump as to how I should be removing {“trip number”: “15”}.I tried writing as db.transport.update({“EMPLOYEE.e#”: “11”},{$pull:{“EMPLOYEE.trips.trip number”: “15”}}).But the return input was errmsg: Cannot use the part of Employee…Hope you are able to assist me on this. Thank you",
"username": "Izzat_Ismail"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and publish your sample document in JSON so that we can cut-n-paste into our system for experimentation.",
"username": "steevej"
}
] |
How to remove a field from triply nested documents
|
2022-05-26T14:09:59.776Z
|
How to remove a field from triply nested documents
| 3,558 |
|
null |
[
"replication",
"storage"
] |
[
{
"code": "",
"text": "Hello Everyone,I get some error as below and due to this mongod instance was crashed.2022-05-21T05:31:35.557+0000 E STORAGE [WTCheckpointThread] WiredTiger error (5) [1653111095:442272][11024:0x7f0c7bbe5700], file:index-35–2958543832295789442.wt, WT_SESSION.checkpoint: __posix_sync, 99: /var/lib/mongo/index-35–2958543832295789442.wt: handle-sync: fdatasync: Input/output error Raw: [1653111095:442272][11024:0x7f0c7bbe5700], file:index-35–2958543832295789442.wt, WT_SESSION.checkpoint: __posix_sync, 99: /var/lib/mongo/index-35–2958543832295789442.wt: handle-sync: fdatasync: Input/output error2022-05-21T05:31:35.562+0000 E STORAGE [WTCheckpointThread] WiredTiger error (-31804) [1653111095:562169][11024:0x7f0c7bbe5700], file:index-35–2958543832295789442.wt, WT_SESSION.checkpoint: __wt_panic, 494: the process must exit and restart: WT_PANIC: WiredTiger library panic Raw: [1653111095:562169][11024:0x7f0c7bbe5700], file:index-35–2958543832295789442.wt, WT_SESSION.checkpoint: __wt_panic, 494: the process must exit and restart: WT_PANIC: WiredTiger library panic\n2022-05-21T05:31:35.563+0000 F - [WTCheckpointThread] Fatal Assertion 50853 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 420\n2022-05-21T05:31:35.563+0000 F - [WTCheckpointThread]***aborting after fassert() failure2022-05-21T05:31:35.605+0000 F - [WTCheckpointThread] Got signal: 6 (Aborted).\n0x55e23ede6c21 0x55e23ede5e39 0x55e23ede631d 0x7f0c87524600 0x7f0c8717d3b7 0x7f0c8717eaa8 0x55e23d39f28b 0x55e23d4a2c76 0x55e23d50d741 0x55e23d325ed1 0x55e23d3262eb 0x55e23d4e0243 0x55e23d5de894 0x55e23d520559 0x55e23d52147b 0x55e23d5069ba 0x55e23d4800d6 0x55e23e831c31 0x55e23eef6630 0x7f0c8751ce75 0x7f0c872459bd\n----- BEGIN BACKTRACE -----\n{“backtrace”:[{“b”:“55E23C975000”,“o”:“2471C21”,“s”:\"_ZN5mongo15printStackTraceERSo\"},{“b”:“55E23C975000”,“o”:“2470E39”},{“b”:“55E23C975000”,“o”:“247131D”},{“b”:“7F0C87515000”,“o”:“F600”},{“b”:“7F0C87147000”,“o”:“363B7”,“s”:“gsignal”},{“b”:“7F0C87147000”,“o”:“37AA8”,“s”:“abort”},{“b”:“55E23C975000”,“o”:“A2A28B”,“s”:\"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj\"},{“b”:“55E23C975000”,“o”:“B2DC76”},{“b”:“55E23C975000”,“o”:“B98741”},{“b”:“55E23C975000”,“o”:“9B0ED1”,“s”:\"__wt_err_func\"},{“b”:“55E23C975000”,“o”:“9B12EB”,“s”:\"__wt_panic\"},{“b”:“55E23C975000”,“o”:“B6B243”},{“b”:“55E23C975000”,“o”:“C69894”},{“b”:“55E23C975000”,“o”:“BAB559”},{“b”:“55E23C975000”,“o”:“BAC47B”,“s”:\"__wt_txn_checkpoint\"},{“b”:“55E23C975000”,“o”:“B919BA”},{“b”:“55E23C975000”,“o”:“B0B0D6”,“s”:\"_ZN5mongo18WiredTigerKVEngine26WiredTigerCheckpointThread3runEv\"},{“b”:“55E23C975000”,“o”:“1EBCC31”,“s”:\"_ZN5mongo13BackgroundJob7jobBodyEv\"},{“b”:“55E23C975000”,“o”:“2581630”},{“b”:“7F0C87515000”,“o”:“7E75”},{“b”:“7F0C87147000”,“o”:“FE9BD”,“s”:“clone”}],“processInfo”:{ “mongodbVersion” : “4.0.16”, “gitVersion” : “2a5433168a53044cb6b4fa8083e4cfd7ba142221”, “compiledModules” : [], “uname” : { “sysname” : “Linux”, “release” : “4.9.51-10.52.amzn1.x86_64”, “version” : “#1 SMP Fri Sep 29 01:16:19 UTC 2017”, “machine” : “x86_64” }, “somap” : [ { “b” : “55E23C975000”, “elfType” : 3, “buildId” : “5DD743B8BBEB16201177D6D09046399B64F5E029” }, { “b” : “7FFDC0775000”, “elfType” : 3, “buildId” : “E1FD0678C5C561D462A1DAA29BD08CC5361EF117” }, { “b” : “7F0C8893F000”, “path” : “/usr/lib64/libcurl.so.4”, “elfType” : 3, “buildId” : “9E7C58F4A4EE9752AF068429BA3DB14397639056” }, { “b” : “7F0C88725000”, “path” : “/lib64/libresolv.so.2”, “elfType” : 3, “buildId” : “AE1FDB1B0712ABF27B24A3F0E983619A30981750” }, { “b” : “7F0C882C6000”, “path” : 
“/lib64/libcrypto.so.10”, “elfType” : 3, “buildId” : “3270D2720328EEC2846C4B0D993582A0F657F54B” }, { “b” : “7F0C88055000”, “path” : “/lib64/libssl.so.10”, “elfType” : 3, “buildId” : “183215EA0DA6EE9C80A1E3A3319EC2905D1BF6E0” }, { “b” : “7F0C87E51000”, “path” : “/lib64/libdl.so.2”, “elfType” : 3, “buildId” : “D8859C267836C8AF28BBB238819141B6BF34F8D9” }, { “b” : “7F0C87C49000”, “path” : “/lib64/librt.so.1”, “elfType” : 3, “buildId” : “DB30FAB7C82FF7E06EE4913B0D3AB02C51DC0530” }, { “b” : “7F0C87947000”, “path” : “/lib64/libm.so.6”, “elfType” : 3, “buildId” : “19CC4E1B82AD44838E8DFD1FF893CA0CDAB7A5F5” }, { “b” : “7F0C87731000”, “path” : “/lib64/libgcc_s.so.1”, “elfType” : 3, “buildId” : “AC58019512A5359B077D2610DEA4AD6CF14CAC53” }, { “b” : “7F0C87515000”, “path” : “/lib64/libpthread.so.0”, “elfType” : 3, “buildId” : “E45DFDE9B88CABE002BE746AD486F620DE2B3E54” }, { “b” : “7F0C87147000”, “path” : “/lib64/libc.so.6”, “elfType” : 3, “buildId” : “75FE5BEDAD7802FF4A0268752CB4B4FFB293D1DC” }, { “b” : “7F0C88BC6000”, “path” : “/lib64/ld-linux-x86-64.so.2”, “elfType” : 3, “buildId” : “FD30DCC79F68A409A7C742A6943C16AE02E52986” }, { “b” : “7F0C86F21000”, “path” : “/usr/lib64/libnghttp2.so.14”, “elfType” : 3, “buildId” : “903C20D899C962C2E93B006E3BB7172C83D8ACF4” }, { “b” : “7F0C86D00000”, “path” : “/usr/lib64/libidn2.so.0”, “elfType” : 3, “buildId” : “5235BD50D3FB450683328735B730532020DEE4BF” }, { “b” : “7F0C86AD8000”, “path” : “/usr/lib64/libssh2.so.1”, “elfType” : 3, “buildId” : “E03CF776B39054AC3B2EA2AB15B161A858B5732C” }, { “b” : “7F0C86863000”, “path” : “/usr/lib64/libpsl.so.0”, “elfType” : 3, “buildId” : “09BFE69665CFEEC18F81D8C4A971DCA29310186C” }, { “b” : “7F0C86615000”, “path” : “/usr/lib64/libgssapi_krb5.so.2”, “elfType” : 3, “buildId” : “1BE9E6309ED365E35806E13FA9E23350D71F2513” }, { “b” : “7F0C8632E000”, “path” : “/usr/lib64/libkrb5.so.3”, “elfType” : 3, “buildId” : “9EE23694485D684651195C7B51766E47D0CB95E3” }, { “b” : “7F0C860FC000”, “path” : “/usr/lib64/libk5crypto.so.3”, “elfType” : 3, “buildId” : “FD5974E4861D56DFFFFC8BF5DB35E74B1C20ABD5” }, { “b” : “7F0C85EF9000”, “path” : “/usr/lib64/libcom_err.so.2”, “elfType” : 3, “buildId” : “5C01209C5AE1B1714F19B07EB58F2A1274B69DC8” }, { “b” : “7F0C85CA7000”, “path” : “/lib64/libldap-2.4.so.2”, “elfType” : 3, “buildId” : “97F36EE026428345EEB18EE7F9EFB048ADB415A7” }, { “b” : “7F0C85A98000”, “path” : “/lib64/liblber-2.4.so.2”, “elfType” : 3, “buildId” : “BBE520FB0B4F67D5F708D233C31E7047B759068D” }, { “b” : “7F0C85882000”, “path” : “/lib64/libz.so.1”, “elfType” : 3, “buildId” : “89C6AF118B6B4FB6A73AE1813E2C8BDD722956D1” }, { “b” : “7F0C8556C000”, “path” : “/usr/lib64/libunistring.so.0”, “elfType” : 3, “buildId” : “2B090A6860553944846E3C227B6AD12F279B304F” }, { “b” : “7F0C851F6000”, “path” : “/usr/lib64/libicuuc.so.50”, “elfType” : 3, “buildId” : “06AB750458E6948B6F40F05E705996DB44ADDF9B” }, { “b” : “7F0C84FE7000”, “path” : “/usr/lib64/libkrb5support.so.0”, “elfType” : 3, “buildId” : “1B55330B231D45AF433F7D9DCA507C5FB0609780” }, { “b” : “7F0C84DE4000”, “path” : “/lib64/libkeyutils.so.1”, “elfType” : 3, “buildId” : “37A58210FA50C91E09387765408A92909468D25B” }, { “b” : “7F0C84BC9000”, “path” : “/usr/lib64/libsasl2.so.2”, “elfType” : 3, “buildId” : “354560FFC93703E5A80EEC8C66DF9E59DA335001” }, { “b” : “7F0C8496D000”, “path” : “/usr/lib64/libssl3.so”, “elfType” : 3, “buildId” : “D6B37F82A6D0A2DC428F305A2F5D9D78DB60D488” }, { “b” : “7F0C84746000”, “path” : “/usr/lib64/libsmime3.so”, “elfType” : 3, “buildId” : “2FF9EB779ACEAB03691997491A29D7275013D770” }, { 
“b” : “7F0C84419000”, “path” : “/usr/lib64/libnss3.so”, “elfType” : 3, “buildId” : “19D59FF9A54C790463BCD0349B0D78A4B8E1304E” }, { “b” : “7F0C841E9000”, “path” : “/usr/lib64/libnssutil3.so”, “elfType” : 3, “buildId” : “353A34D6411C8F45DB54E122B06E97EF6AEFD4F9” }, { “b” : “7F0C83FE5000”, “path” : “/lib64/libplds4.so”, “elfType” : 3, “buildId” : “D835EB19EC07E13AECBE3E80652846DAB04553CE” }, { “b” : “7F0C83DE0000”, “path” : “/lib64/libplc4.so”, “elfType” : 3, “buildId” : “668B5E4DEDFB2CABFEE75B097CC04CBBF8EC231D” }, { “b” : “7F0C83BA2000”, “path” : “/lib64/libnspr4.so”, “elfType” : 3, “buildId” : “8C8278A9557AA5D7942C94992BE85C2E73EA358B” }, { “b” : “7F0C825CF000”, “path” : “/usr/lib64/libicudata.so.50”, “elfType” : 3, “buildId” : “291EDB545286F945CDE3AF6F5CF24FA2F53FDDDA” }, { “b” : “7F0C8224C000”, “path” : “/usr/lib64/libstdc++.so.6”, “elfType” : 3, “buildId” : “699868CB2BF35D0936C954AF8AD53A001D6690EE” }, { “b” : “7F0C8202B000”, “path” : “/usr/lib64/libselinux.so.1”, “elfType” : 3, “buildId” : “F5054DC94443326819FBF3065CFDF5E4726F57EE” }, { “b” : “7F0C81DF4000”, “path” : “/lib64/libcrypt.so.1”, “elfType” : 3, “buildId” : “CA95D3723C3A72A75EAA6448328B06470DE7D1CC” }, { “b” : “7F0C81BF2000”, “path” : “/lib64/libfreebl3.so”, “elfType” : 3, “buildId” : “6C6DA5F0ECDD84E81C6A44036EBACD7AA77707EB” } ] }}\nmongod(_ZN5mongo15printStackTraceERSo+0x41) [0x55e23ede6c21]\nmongod(+0x2470E39) [0x55e23ede5e39]\nmongod(+0x247131D) [0x55e23ede631d]\nlibpthread.so.0(+0xF600) [0x7f0c87524600]\nlibc.so.6(gsignal+0x37) [0x7f0c8717d3b7]\nlibc.so.6(abort+0x148) [0x7f0c8717eaa8]\nmongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x55e23d39f28b]\nmongod(+0xB2DC76) [0x55e23d4a2c76]\nmongod(+0xB98741) [0x55e23d50d741]\nmongod(__wt_err_func+0x90) [0x55e23d325ed1]\nmongod(__wt_panic+0x39) [0x55e23d3262eb]\nmongod(+0xB6B243) [0x55e23d4e0243]\nmongod(+0xC69894) [0x55e23d5de894]\nmongod(+0xBAB559) [0x55e23d520559]\nmongod(__wt_txn_checkpoint+0x1DB) [0x55e23d52147b]\nmongod(+0xB919BA) [0x55e23d5069ba]\nmongod(_ZN5mongo18WiredTigerKVEngine26WiredTigerCheckpointThread3runEv+0x356) [0x55e23d4800d6]\nmongod(_ZN5mongo13BackgroundJob7jobBodyEv+0x131) [0x55e23e831c31]\nmongod(+0x2581630) [0x55e23eef6630]\nlibpthread.so.0(+0x7E75) [0x7f0c8751ce75]\nlibc.so.6(clone+0x6D) [0x7f0c872459bd]\n----- END BACKTRACE -----",
"username": "Aditya_Sharma3"
},
{
"code": "",
"text": "Hi @Aditya_Sharma3 and welcome in the MongoDB Community !https://jira.mongodb.org/browse/SERVER-49317You should try a repair operation from what I’m reading in this ticket.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @MaBeuLux88,Thanks for your reply, If I go for the repair option, what will be the impact of that on the production server, However after restart the mongo instance is running fine.Do we still need to go for the repair option?",
"username": "Aditya_Sharma3"
},
{
"code": "",
"text": "If it’s running fine now then don’t do a repair.\nRepair only tries to restores corrupted data. If it’s running now, it means you don’t have any corrupted data in your cluster at the moment.\nThe impact would be that it could save your production env if it was completely stopped and couldn’t be restarted at all because of some corrupted data due to an incorrect stop.",
"username": "MaBeuLux88"
}
] |
E STORAGE [WTCheckpointThread] WiredTiger error (5)
|
2022-05-23T19:16:29.641Z
|
E STORAGE [WTCheckpointThread] WiredTiger error (5)
| 2,853 |
null |
[
"mdbw22-hackathon"
] |
[
{
"code": "",
"text": "Hello Hackers,By now you should all be on the path to submission before the closing date on Friday. As mentioned before, there’s no issue with submitting early, Judging won’t start until after the closing date and all the details of your submission are shared only with the Judging team and not made public.To encourage early submissions, the 1st 10 eligible submissions will receive some special bonus Swag - so don’t delay, submit now!!Submissions close on Friday May 27th",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "Hello!\nI think I made an incomplete submission by error is there any way to modify it?\nAlso, it has not sent me any verification it was sent but when I try submitting again, it says I have already completed this wizard.\nAny help welcome,\nThanks",
"username": "Margarita_Campos_Quinones"
},
{
"code": "",
"text": "Hi @Margarita_Campos_Quinones ,Apologies for any inconvenience – we’re investigating your incomplete submission and any possible technical issue.The wizard was set to one submission per participant, but we removed that restriction in case it is causing issues.Can you please try resubmitting?Thank you,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you!\nI just did it and it worked perfectly.\nSorry for the inconvenience.",
"username": "Margarita_Campos_Quinones"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
3 days to go! Don't delay - Submit now for some extra Swag goodness!
|
2022-05-25T12:40:56.783Z
|
3 days to go! Don’t delay - Submit now for some extra Swag goodness!
| 3,165 |
[
"mdbw22-hackathon"
] |
[
{
"code": "",
"text": "For those of you looking for help on your submissions @Mark_Smith and I did a very brief (for us! It’s 10 minutes) livestream on the process and you can watch it back belowIf you’ve any questions on submitting, we’re here to help, so please reply below or send us a message.Get Submitting!",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Looking for help on Submitting? Watch our livestream
|
2022-05-26T15:05:23.175Z
|
Looking for help on Submitting? Watch our livestream
| 2,472 |
|
[
"mdbw22-hackathon"
] |
[
{
"code": "Lead Developer AdvocateSenior Developer Advocate",
"text": "Fellow hackers!With less than 48hrs to go, we’re running a short guidelines session talking participants through the submission process and answering any questions you may have. The time hopefully should suit most participants and don’t worry if you can’t join in live, as with all other streams, this will be recorded too.So join us, on the penultimate day of this journey, to hear and learn all about submitting the fabulous projects you’ve been building!!We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "You can watchback below",
"username": "Shane_McAllister"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] |
Submission Guidelines & Help Livestream Session
|
2022-05-26T10:42:12.930Z
|
Submission Guidelines & Help Livestream Session
| 2,723 |
|
null |
[
"aggregation",
"node-js",
"mongoose-odm"
] |
[
{
"code": "{\n _id: \"628ceeae3df06d49419f0bb4\",\n name: \"John\",\n notifications: [\n {_id: \"someIdA\", details: \"xyz\", dateTime: \"1653321337762\"},\n {_id: \"someIdB\", details: \"jkl\", dateTime: \"1653321337762\"}\n {_id: \"someIdC\", details: \"abc\", dateTime: \"1653321321323\"}\n {_id: \"someIdD\", details: \"lmn\", dateTime: \"1653123412341\"}\n ]\n}\n\n",
"text": "so i am trying to sort notifications array of user by it’s insertion date (i want the latest one on top) but it seems to be not working am i missing something ?here’s the template for data:and the aggregation pipeline that i’m trying:const foundUser = await users.aggregate([\n{\n$match: {\n_id: mongoose.Types.ObjectId(userId)\n}\n},\n{\n$project: {\nnotifications: 1,\n_id: 0\n}\n},\n{\n$sort: {\n_id: -1\n}\n}\n])",
"username": "Ali_Abyer"
},
{
"code": "notifications: [\n {_id: \"someIdA\", details: \"xyz\", dateTime: \"1653321337762\"},\n {_id: \"someIdB\", details: \"jkl\", dateTime: \"1653321337762\"}\n {_id: \"someIdC\", details: \"abc\", dateTime: \"1653321321323\"}\n {_id: \"someIdD\", details: \"lmn\", dateTime: \"1653123412341\"}\n ]\n",
"text": "If you remove your $sort you will see that documents after the $project look like:That is, all fields projected out except for notifications. So as you can see, your $sort:{_id:-1} does not make sense because you do not have a field named _id anymore.If you really want to sort on the top level _id:628ceeae3df06d49419f0bb4, you need to remove _id:0 from your $project.But, if you want to sort based on the _id within the notifications array and keep the result as an array within each top level document, you will need to use $sortArray.If you do not want a sorted array within a top level document, you will need an $unwind stage before the $sort stage.The exact scenario depends on your desired result. If you publish expected resulting documents, the help will be more precise.",
"username": "steevej"
},
{
"code": "",
"text": "alright it’s done, so the problem was exactly the not use of $unwind",
"username": "Ali_Abyer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Unable to sort documents after grouping mongodb aggregation framework
|
2022-05-25T12:05:19.321Z
|
Unable to sort documents after grouping mongodb aggregation framework
| 2,673 |
null |
[] |
[
{
"code": "{\n \"description\": \"The AWS US East 1 developer credentials.\",\n \"field\": \"Authorization\",\n \"key\": {\n \"data\": \"/api/credentials/123\",\n \"type\": \"cm_credential\"\n },\n \"location\": \"header\",\n \"name\": \"AWS US East 1 - Dev\",\n \"plugin_match\": \"^rs-aws-\",\n \"tags\": [\n {\n \"key\": \"cloud\",\n \"value\": \"aws\"\n }\n ],\n \"type\": \"Bearer\"\n}\n",
"text": "We are supporting a customer who has a MongoDB Atlas account with their associated invoices.We are looking to access this data in a read-only capacity as part of the services we offer, however this comes in 2 parts. Firstly a ‘policy’ that runs via our platforms policy engine which contains the logic to drill down into the org > invoices etc and dumps data into a CSV and then ultimately uploads it into our platform.Before any of this is possible however I need to create a credential that can be referenced by the policy. We don’t natively support Digest Auth, however if I create the credential via our API, it is possible, however I am struggling to have any success. The format of the API request body when we create a credential is as follows:I know the obvious that will need updating, but struggling to put together a request using the public:private keys as this is what has ben supplied by the customer.After any suggestions that may help ",
"username": "HD86"
},
{
"code": "{\n \"description\": \"MongoDB API Key for Invoice and Usage data\",\n \"field\": \"MongoDB Digest Auth\",\n \"key\": {\n \"data\": \"https://cloud.mongodb.com/api/atlas/v1.0\",\n \"type\": \"plain\"\n },\n \"location\": \"header\",\n \"name\": \"MongoDB API\",\n \"username\": \"xxxxxxxxx\",\n \"password\": \"xxxxxxxxxxxxxxxxxxxxxxxxxx\",\n \"tags\": [\n {\n \"key\": \"provider\",\n \"value\": \"mongodb_atlas\"\n }\n ],\n \"type\": \"Digest Auth\"\n }\n\"field\": \"MongoDB Digest Auth\"",
"text": "I seem to have the credentials working, as I’m no longer getting a 401 error… but I do get an error on the GET when trying to retrieve invoices as a datasource in my request…now I know that the field should be “Authorization” but if use that it breaks the credentials and I’m back to a 401 error… tried using the following which seems to allow the creds through, but breaks the request\"field\": \"MongoDB Digest Auth\"",
"username": "HD86"
},
{
"code": "",
"text": "Are you using an API key that’s authorized to the context you’re requesting invoice data for?By the way, are you in the cloud billing management space? if so we would be happy to partner and ensure you have more direct assistance",
"username": "Andrew_Davidson"
}
] |
Creating a MongoDB Digest credential for API request
|
2022-05-24T09:33:19.624Z
|
Creating a MongoDB Digest credential for API request
| 1,463 |
null |
[
"atlas-device-sync"
] |
[
{
"code": "",
"text": "const WORKPLAN_SCHEMA = {\nname: ‘workplans’,\nproperties: {\n_id: “objectId”,\nstatus: “int”,\nzoneId: “???”, <—<< array\n},\nprimaryKey: ‘_id’\n};We need to declare array and make relationship between this array with other tables.------> like that\nconst WORKPLAN_SCHEMA = {\nname: ‘workplans’,\nproperties: {\n_id: “objectId”,\nstatus: “int”,\nzoneId: [\n{\nzoneId: “objectId” >>----> relation with zone table >>-----> { _id: “objectId”, zoneName: “test_1” }\n},\n{\nzoneId: “objectId” >>----> relation with zone table >>-----> { _id: “objectId”, zoneName: “test_1” }\n}\n]\n},\nprimaryKey: ‘_id’\n};",
"username": "paula_sanchez"
},
{
"code": "const WORKPLAN_SCHEMA = {\n name: ‘workplans’,\n properties: {\n _id: {bsonType: “objectId”},\n status: {bsonType: “int”},\n zoneId: {bsonType: “array”, items: {bsonType: \"objectId}},\n}\n",
"text": "Hi, we use JSON Schema so please see here for that: array — Understanding JSON Schema 2020-12 documentationSo you probably want something like:And then you can add that this is a relationship in the UI in the relationships tab of the schema editor.If, you are trying to use the JS SDK and define your schema model there (not sure as you didnt give much information about where the schema you give here is being defined), then please see this guide: https://www.mongodb.com/docs/realm/sdk/node/examples/define-a-realm-object-model/#define-a-to-many-relationship-property",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "\nScreenshot from 2022-05-18 22-59-011366×768 213 KB\n",
"username": "paula_sanchez"
},
{
"code": "const Person = {\n name: \"Person\",\n properties: {\n name: \"string\",\n birthdate: \"date\",\n dogs: \"Dog[]\"\n }\n};\n\nconst Dog = {\n name: \"Dog\",\n properties: {\n name: \"string\",\n age: \"int\",\n breed: \"string?\"\n }\n};\n",
"text": "That’s not how you define a relationship on the client - in the doc Tyler linked the example here shows how Person can have a list of DogsIf you’re having trouble setting up the schema you may want to just use developer mode, that way, you just need to setup the schema on the client side and it will be automatically replicated on the Server. -",
"username": "Ian_Ward"
},
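For the workplan/zone case from earlier in this thread, a hedged adaptation of that pattern in the JS SDK might look like the following; the zone object model and property names are assumptions, not the poster's final schema.

```javascript
// Each workplan holds a list of zone objects instead of a raw array of ObjectIds.
const ZONE_SCHEMA = {
  name: "zone",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    zoneName: "string"
  }
};

const WORKPLAN_SCHEMA = {
  name: "workplans",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    status: "int",
    zones: "zone[]" // to-many relationship to the zone objects
  }
};
```
With developer mode enabled, opening a synced realm with these models would create the matching server-side schema and relationship automatically.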
{
"code": "",
"text": "\nScreenshot from 2022-05-24 19-51-451366×768 158 KB\n\n\nScreenshot from 2022-05-24 19-51-591366×768 162 KB\n\n\nScreenshot from 2022-05-24 19-59-131366×768 122 KB\n",
"username": "paula_sanchez"
},
{
"code": "",
"text": "of type ‘array’ has unknown object type ‘friendList’ i getting this error.",
"username": "paula_sanchez"
},
{
"code": "",
"text": "@ Ian_Ward",
"username": "paula_sanchez"
},
{
"code": "",
"text": "If you take a look at the error message you will see that your names are mismatched - you need to correct that.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "\nScreenshot from 2022-05-25 22-49-221366×768 200 KB\n\n\nScreenshot from 2022-05-25 22-49-291366×768 120 KB\n\n@Ian_Ward",
"username": "paula_sanchez"
},
{
"code": "",
"text": "I can’t see the error message - can you paste it in here? Also, can you please share your Realm App UI URL (the URL of the Realm App in the browser) - so I can take a look at the logs ?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "\nScreenshot from 2022-05-26 00-49-531366×768 154 KB\n",
"username": "paula_sanchez"
},
{
"code": "",
"text": "Are you wiping the local state on the device as you change the schema? You need to do that because you are making breaking changes. Also I think friendList needs to have a valid _id field if it is a standalone objectThe Realm Server-side Cloud logs would tell us more as well",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I dont know where we found the cloud logs. Can you send me the path.",
"username": "paula_sanchez"
}
] |
How to declare array in schema in Realm Sync?
|
2022-05-18T03:21:15.014Z
|
How to declare array in schema in Realm Sync?
| 5,768 |
null |
[
"aggregation",
"queries",
"data-modeling",
"mongodb-world-2022"
] |
[
{
"code": "{\n _id: \n date: Date\n technicalOne: ObjectId\n client: ObjectId\n center: String\n appointments: [\n isBlockTime: boolean\n isRecurrentBreak: boolean\n isOcasionalMeet: boolean\n isRemoteWork: boolean\n isOcasionalRemote: boolean\n isOcasionalClientRemote: boolean\n busyDays: Array of numbers\n busyDaysWithout: Array of numbers\n state: number\n _id:\n isTechnicalOne: boolean\n technical: String\n technicalId: ObjectId\n date: Date\n ]\n}\nconst pipeline = [\n {\n '$unwind': {\n 'path': '$appointments',\n 'preserveNullAndEmptyArrays': false\n }\n },\n {\n '$match': {\n 'appointments.technical': technicalSelected,\n 'center': workCenter,\n 'appointments.state': 0,\n 'appointments.date': {\n '$gte': moment(startDate).startOf('month').toDate(),\n '$lte': moment(endDate).endOf('month').toDate()\n }\n }\n },\n {\n '$sort': {\n 'appointments.date': 1,\n 'appointments.busyDays': 1\n }\n },\n {\n '$lookup': {\n 'from': 'clienthistories',\n 'localField': 'client',\n 'foreignField': '_id',\n 'as': 'client'\n }\n },\n {\n '$unwind': {\n 'path': '$client',\n 'preserveNullAndEmptyArrays': true\n }\n },\n {\n '$lookup': {\n 'from': 'users',\n 'localField': 'appointments.technicalSave',\n 'foreignField': '_id',\n 'as': 'user'\n }\n }, {\n '$unwind': {\n 'path': '$user',\n 'preserveNullAndEmptyArrays': true\n }\n },\n {\n '$project': {\n 'center': '$center',\n 'technical': '$technical.technical',\n 'savedFor': '$user.technical',\n 'date': '$appointments.date',\n 'dateTakeAppointment': '$dateTakeAppointment',\n 'busyDays': '$appointments.busyDays',\n 'client': '$client',\n 'appointment': '$appointments',\n 'busyDays': '$appointments.busyDays',\n 'busyDaysWithout': '$appointments.busyDaysWithout',\n 'appointmentObservation': '$appointmentObservation',\n }\n },\n {\n '$sort': {\n 'busyDays': 1\n }\n }\n ]\n\n\n const AppointmentsCollection = Appointments.collection\n\n const appointments = await AppointmentsCollection.aggregate(pipeline).toArray()\n",
"text": "Hi , I’m working on a query that works fine and does what I want but it’s too slow (1.5s-2s) and I don’t know why. Could you help me understand the reason for this slowness and a possible solution.I have a collection where each document is like this:And de pipleline is this:The maxium results are 120-180 documents.\nWhat is wrong with this query?Thank you very much ",
"username": "Sergi_Ramos_Aguilo"
},
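One common way to speed up a pipeline shaped like this is to filter whole documents with an indexed $match before the $unwind, and only then re-filter the unwound elements. A hedged sketch follows; the collection name, variables and the supporting compound index are reused from the question or assumed, and this is not necessarily the advice given in the thread.

```javascript
// A compound index such as { center: 1, "appointments.technical": 1, "appointments.date": 1 }
// would be needed for the first $match to be index-backed.
db.appointments.aggregate([
  // 1. Narrow down whole documents first (multikey indexes on the appointments fields can be used here)
  { $match: {
      center: workCenter,
      "appointments.technical": technicalSelected,
      "appointments.state": 0,
      "appointments.date": { $gte: startDate, $lte: endDate }
  } },
  { $unwind: "$appointments" },
  // 2. Keep only the array elements that actually match
  { $match: {
      "appointments.technical": technicalSelected,
      "appointments.state": 0,
      "appointments.date": { $gte: startDate, $lte: endDate }
  } },
  // ...the $lookup / $project / $sort / $limit stages as before...
])
```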
{
"code": "",
"text": "",
"username": "steevej"
},
{
"code": "",
"text": "Thank you very much! the query now is faster than before. ",
"username": "Sergi_Ramos_Aguilo"
}
] |
Is it possible a solution for a very slow query with aggregation?
|
2022-05-19T18:13:03.837Z
|
Is it possible a solution for a very slow query with aggregation?
| 4,308 |
null |
[] |
[
{
"code": "",
"text": "Hi team, How much time will take to upgrade from M50 General to M50 LocalNVMe ssd and our Database size is 900+ GB",
"username": "Kirubananthan_M"
},
{
"code": "",
"text": "It’s difficult to estimate precisely because what’s going to have to happen is a network transfer from one node to another as well as a team searching for initial syncc the data including application of changes, and that will need to happen in a rolling manner so it is workload dependent. Multiple hours per node likely",
"username": "Andrew_Davidson"
}
] |
Upgrade M50 general to M50 Local NVMe SSD
|
2022-05-20T13:51:27.590Z
|
Upgrade M50 general to M50 Local NVMe SSD
| 1,532 |
null |
[
"storage"
] |
[
{
"code": "",
"text": "Hello,I’m working on a SaaS product and I’m planning to use MongoDB with 1 DB per tenant for security and clean data modeling per tenant.\nThe DBs should be hosted on my own VPS - if load / data growth is increasing, I’m fine with (virtual) vertical scaling and even horizontal scaling.\nFor a standard tenant DB I assume:To get more specific, what are the implications on servers with a few dozen, hundreds and then thousands tenants in terms of:Anything else to pay attention to, apart from the standard setup (change port, use proper roles / passwords, etc)?Thanks,\nChris",
"username": "Chris_Haus"
},
{
"code": "",
"text": "A must read is Massive Number of Collections | MongoDB.With mongodb we are talking documents rather than records. It just help discussion if we are using the same terminology.change portfor security reason is applying security through obscurity and considered useless.have a replicaforReliabilityis absolutely a must. A replica set, on different machine, stored on different file system and no arbitrary.",
"username": "steevej"
},
{
"code": "",
"text": "we are talking documents rather than records.Sorry … of course documents. I thought about the source systems here.A must read is Massive Number of Collections.That would probably be a great problem to have - 10000 paying customers \nMy idea here would be, after ca. 6 - 9k database, I’d simply create a new / separate server, if that would be an easy solution?\nBut really … great problem to have and nowhere close to by now.Thanks for the hints on the replica set. Then I take this into consideration during the planning phase already.",
"username": "Chris_Haus"
},
{
"code": "",
"text": "create a new / separate server, if that would be an easy solutionWith Atlas it is trivial. With physical servers it is another story. With AWS or GCP easier that physical but harder than Atlas and potentially more expensive.I do not have any thing else to add more that I only saw recommendations against one database per customer.",
"username": "steevej"
},
{
"code": "",
"text": "Ok, then I’ll plan accordingly. Thanks ",
"username": "Chris_Haus"
}
] |
What are the SaaS one DB per tenant implications?
|
2022-05-22T16:10:05.478Z
|
What are the SaaS one DB per tenant implications?
| 2,622 |
null |
[
"replication",
"devops"
] |
[
{
"code": "",
"text": "I’ve got a three node cluster setup running on kubernetes with each node having a ~1TB Drive. I’ve created a Persistent Volume Claim for Mongodb of 100 GB.When I get the actual oplog size from rs.printReplicationInfo() it’s reporting 45011.683349609375 MBWhen I run db.oplog.rs.dataSize() it’s reporting 469617794Should I pass in the size in the conf file to make it 5% of the size of the Persistent Volume Claim as it seems it’s reading my system drive and in the process taking up 50% of the space in my Persistent Volume Claim?",
"username": "Tim_Pynegar"
},
{
"code": "dbpathdbPathoplog",
"text": "Hi @Tim_Pynegar and welcome to the community forum!!Do you mind providing more details regarding your setup:three node cluster setup running on kubernetes with each node having a ~1TB DriveFor the above mentioned deployment, is 1 TB shared among all three clusters or is this for all the three nodes.P.S. here are two things which can be noted for reference:Oplog can grow beyond their set size when the majority commit point is behind.The oplog is stored in a database called “local”, and it resides in the dbpath of the instance\nIf the dbPath resides in a persistentVolume then the size of the entire database is bound by that volume, including the oplog size.Thanks\nAasawari",
"username": "Aasawari"
}
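If the goal is to control the oplog size directly rather than rely on the default sizing, it can be set via replication.oplogSizeMB in the mongod configuration or resized at runtime. A hedged sketch follows; the 5120 MB value is only an example, not a recommendation for this deployment.

```javascript
// Run in mongosh against each replica set member, on the admin database
use admin
db.adminCommand({ replSetResizeOplog: 1, size: 5120 })   // size is in MB

// Verify the new maximum size (reported in bytes)
db.getSiblingDB("local").oplog.rs.stats().maxSize
```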
] |
Oplog Size / Persistent Volume Claim / Kubernetes
|
2022-05-08T11:53:24.537Z
|
Oplog Size / Persistent Volume Claim / Kubernetes
| 3,092 |
null |
[
"python",
"motor-driver",
"pymodm-odm"
] |
[
{
"code": "",
"text": "Friends are learning how to connect to mongodb over fastapi using the async motor driver.\nI’ve highlighted two ways to connect to mongodb that people use.",
"username": "Kaper_Di"
},
{
"code": "",
"text": "Hi @Kaper_Di and welcome to the forums!Friends are learning how to connect to mongodb over fastapi using the async motor driver.\nI’ve highlighted two ways to connect to mongodb that people use.Please have a look at a similar post below, and see whether this answer your questionRegards,\nWan.",
"username": "wan"
}
] |
What is the correct and most economical way to connect to async(motor) mongodb. and is it necessary to close the connection?
|
2022-05-18T19:14:01.837Z
|
What is the correct and most economical way to connect to async(motor) mongodb. and is it necessary to close the connection?
| 4,617 |
null |
[
"graphql"
] |
[
{
"code": "query {\n grandparents(query: {address: {city: \"New York\"}}) {\n parents {\n children (query: {age: 10}) {\n name\n }\n }\n}",
"text": "I’m trying to write a query to do a nested query within a larger one. I feel like this should be possible, but since Realm generates the schema for me, they only generate inputs for each high level one, and not for when it’s nested like below.So there would be inputs for grandsparents, parents, and children if they were each at the top level. But when nested like this, it’s expecting an input for grandparents.parents.children which doesn’t exist in what Realm generates.Pretend schema, but this is the gist of what I’m trying with the subquery on children.",
"username": "canpan14"
},
{
"code": "",
"text": "Did you find a solution for this? I 'm having the same problem.",
"username": "Cesar_Varela"
},
{
"code": "",
"text": "I am having the same problem. Do we have any workaround for this",
"username": "Krithika_Hegde"
}
] |
GraphQL Inputs to Filter Nested Collections
|
2021-07-02T13:51:45.295Z
|
GraphQL Inputs to Filter Nested Collections
| 4,958 |
null |
[
"queries",
"indexes"
] |
[
{
"code": "\"indexBounds\" : {\n \"productId\" : [\n \"(479894.0, inf.0]\"\n ]\n}\n\nExecutionStats:\n\"nReturned\" : 16782,\n\"executionTimeMillis\" : 10140,\n\"totalKeysExamined\" : 741071,\n\"totalDocsExamined\" : 399367\n\"indexBounds\" : {\n \"productId\" : [\n \"[-inf.0, 100000.0]\"\n ]\n}\n\nExecutionStats:\n\"nReturned\" : 15087,\n\"executionTimeMillis\" : 175,\n\"totalKeysExamined\" : 25925,\n\"totalDocsExamined\" : 15087,\n\"indexBounds\" : {\n \"productId\" : [\n \"[-inf.0, 2479894.0]\"\n ]\n}\n\nExecutionStats:\n\"nReturned\" : 186809,\n\"executionTimeMillis\" : 5618,\n\"totalKeysExamined\" : 336759,\n\"totalDocsExamined\" : 186809,\n",
"text": "Hi everyone!I have some doubts about indexBounds.\nI have a database and I want to do a search for a range of ids.\nI noticed that the query is slow when I use some ranges of ids and the indexBounds presents the value “-inf.0”, which I don’t understand the reason.Case 01\ndb.collection.find({“productId”: {\"$gt\": 479894,\"$lte\": 479995}}).explain(“executionStats”);Query time: 14sCase 02\ndb.collection.find({“productId”: {\"$gt\": 0,\"$lte\": 100000}}).explain(“executionStats”);Query time: 0.36sCase 03\ndb.collection.find({“productId”: {\"$gt\": 1479894,\"$lte\": 2479894}}).explain(“executionStats”);Query time: 6.15sNote that the number of keys examined in case 01 is much higher than in case 02 even though the range is much lower.If anyone can help understand these scenarios.",
"username": "Andre_Luiz"
},
{
"code": "",
"text": "Hi @Andre_Luiz,Can you share an entire winning exec plan?My guess would be that the index is either ascending or descending so the index bounds takes only one of the 2 conditions and then their is another filter later in the query plan to resolve the other condition.Which version of MongoDB is this?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": " \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"products.productInfo.productId\" : {\n \"$lte\" : 479899.0\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"products.productInfo.productId\" : 1\n },\n \"indexName\" : \"products.productInfo.productId\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"products.productInfo.productId\" : [ \n \"products\"\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"products.productInfo.productId\" : [ \n \"(479894.0, inf.0]\"\n ]\n }\n }\n }\n",
"text": "Hi @MaBeuLux88 !!Which version of MongoDB is this?4.4.14WinningPlan:",
"username": "Andre_Luiz"
},
{
"code": "$gt$lte: XXX",
"text": "Yes, so because your index is ascending,LGTM",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Is there any way to solve it, with compound indexes?",
"username": "Andre_Luiz"
},
{
"code": "db.coll.drop()\ndb.coll.createIndex({n:1})\ndb.coll.insertMany([{n:1},{n:2},{n:3},{n:4},{n:5},{n:6},{n:7},{n:8},{n:9},{n:10}])\ndb.coll.find({n:{$gt: 4, $lte: 8}}).explain(true)\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'test.coll',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [ { n: { '$lte': 8 } }, { n: { '$gt': 4 } } ]\n },\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { n: 1 },\n indexName: 'n_1',\n isMultiKey: false,\n multiKeyPaths: { n: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { n: [ '(4, 8]' ] }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 4,\n executionTimeMillis: 0,\n totalKeysExamined: 4,\n totalDocsExamined: 4,\n executionStages: {\n stage: 'FETCH',\n nReturned: 4,\n executionTimeMillisEstimate: 0,\n works: 5,\n advanced: 4,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n docsExamined: 4,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 4,\n executionTimeMillisEstimate: 0,\n works: 5,\n advanced: 4,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keyPattern: { n: 1 },\n indexName: 'n_1',\n isMultiKey: false,\n multiKeyPaths: { n: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { n: [ '(4, 8]' ] },\n keysExamined: 4,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n },\n allPlansExecution: []\n },\n command: {\n find: 'coll',\n filter: { n: { '$gt': 4, '$lte': 8 } },\n '$db': 'test'\n },\n serverInfo: {\n host: 'hafx',\n port: 27017,\n version: '5.0.8',\n gitVersion: 'c87e1c23421bf79614baf500fda6622bd90f674e'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1653497779, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1653497779, i: 1 })\n}\n",
"text": "I have already seen that before. Therefore the “LGTM”.\nBut thing is, I can’t reproduce this behaviour in 5.0.8. So maybe it’s the version diff, or maybe it’s something else…Output:Can you test the same query in 5.0.8 in the same conditions (more or less) and can you reproduce this behaviour?Note: the fact that this is a multikey index and I’m not using one might be part of the problem here.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": " \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"products.productInfo.productId\" : 1\n },\n \"indexName\" : \"products.productInfo.productId\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"products.productInfo.productId\" : [ \n \"products\"\n ]\n },\ndb.collection.find({“productId”: {\"$gt\": 1479894,\"$lte\": 2479894}}).explain(“executionStats”);db.collection.find({“\"products.productInfo.productId\" ”: {\"$gt\": 1479894,\"$lte\": 2479894}}).explain(“executionStats”);",
"text": "In your explain you have thisWhich is a multiKey index meaning this is an index on an array. Your Quey you say isdb.collection.find({“productId”: {\"$gt\": 1479894,\"$lte\": 2479894}}).explain(“executionStats”);But presumably it isdb.collection.find({“\"products.productInfo.productId\" ”: {\"$gt\": 1479894,\"$lte\": 2479894}}).explain(“executionStats”);This query is not looking for what you think it is as it is looking for a record where “products.productInfo.productId” is >1479894 and “products.productInfo.productId” < 2479894 but those two clauses can be true for different elements in the array - thus it needs to find where one clause is true then fetch the record to look at all the other values in the array.If you want these to apply to the same value (and use the index) you need to use $elemMatch",
"username": "AdventureMaker"
},
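A hedged sketch of the $elemMatch form described above, reusing the field names and bounds from the explain output:

```javascript
// Both bounds now apply to the same array element, so the index scan
// can use the tight range (1479894, 2479894] instead of a one-sided bound.
db.collection.find({
  products: {
    $elemMatch: {
      "productInfo.productId": { $gt: 1479894, $lte: 2479894 }
    }
  }
}).explain("executionStats")
```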
{
"code": "",
"text": "Yup, John is totally right @Andre_Luiz. I hope that makes sense!",
"username": "MaBeuLux88"
},
{
"code": "db.testeRange.drop()\ndb.testeRange.createIndex({\"n.id\":1, \"n.id\":-1})\ndb.testeRange.insertMany([{\"n\": [{\"id\":1}]},{\"n\": [{\"id\":3}]},{\"n\": [{\"id\":4}]}])\ndb.testeRange.find({\"n.id\":{$gt: 4, $lte: 8}}).explain(true)\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"display.testeRange\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"n.id\" : {\n \"$lte\" : 8.0\n }\n }, \n {\n \"n.id\" : {\n \"$gt\" : 4.0\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"n.id\" : {\n \"$lte\" : 8.0\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"n.id\" : -1.0\n },\n \"indexName\" : \"n.id_-1\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"n.id\" : [ \n \"n\"\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"n.id\" : [ \n \"[inf.0, 4.0)\"\n ]\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"n.id\" : {\n \"$gt\" : 4.0\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"n.id\" : -1.0\n },\n \"indexName\" : \"n.id_-1\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"n.id\" : [ \n \"n\"\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"n.id\" : [ \n \"[8.0, -inf.0]\"\n ]\n }\n }\n }\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 0,\n \"executionTimeMillis\" : 1,\n \"totalKeysExamined\" : 0,\n \"totalDocsExamined\" : 0,\n \"executionStages\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"n.id\" : {\n \"$lte\" : 8.0\n }\n },\n \"nReturned\" : 0,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 2,\n \"advanced\" : 0,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"docsExamined\" : 0,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 0,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 1,\n \"advanced\" : 0,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"n.id\" : -1.0\n },\n \"indexName\" : \"n.id_-1\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"n.id\" : [ \n \"n\"\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"n.id\" : [ \n \"[inf.0, 4.0)\"\n ]\n },\n \"keysExamined\" : 0,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n },\n \"operationTime\" : Timestamp(1653498607, 11)\n}\n",
"text": "Simulate according to my environment and the array actually generates this behavior.Output",
"username": "Andre_Luiz"
},
{
"code": "",
"text": "If you want these to apply to the same value (and use the index) you need to use $elemMatchI’m going to do some tests this way.\nThanks @AdventureMaker .",
"username": "Andre_Luiz"
},
{
"code": "",
"text": "@AdventureMaker , elemMatch worked for me.Thanks!!!",
"username": "Andre_Luiz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Mongo IndexBounds doubts
|
2022-05-25T14:41:26.094Z
|
Mongo IndexBounds doubts
| 2,818 |
[
"queries"
] |
[
{
"code": "",
"text": "When I try to query a document using “IN”-keyword, I get this error:\nError: Unsupported comparison between type ‘string’ and type ‘link’_id is of type string and “allHeadUnits” is an array of strings\nimage1016×43 12.5 KB\n",
"username": "Olle_Thunberg"
},
{
"code": "",
"text": "Did you manage to solve this? Running into the same issue…",
"username": "Marc_Frankel"
}
] |
Error: Unsupported comparison between type 'string' and type 'link'
|
2022-04-14T12:54:40.447Z
|
Error: Unsupported comparison between type ‘string’ and type ‘link’
| 2,285 |
|
null |
[
"dot-net",
"atlas-cluster",
"unity"
] |
[
{
"code": "translator failed to complete processing batch: failed to update resume token document: connection pool for realmcluster-shard-00-01.kx2p2.mesh.mongodb.net:30444 was cleared because another operation failed with: connection() error occurred during connection handshake: read tcp 127.0.0.1:36740->127.0.0.1:30444: read: connection reset by peer\nrecoverable event subscription error encountered: failed to load unsynced documents cache: error querying for unsynced documents while initializing unsynced documents cache: connection() error occurred during connection handshake: read tcp 127.0.0.1:52668->127.0.0.1:30444: read: connection reset by peer\n\nmessage handler failed with error: error handling \"upload\" message: error updating sync progress: connection(realmcluster-shard-00-01.kx2p2.mesh.mongodb.net:30444[-98435]) socket was unexpectedly closed: EOF\nLogs:\n[\n \"Connection was active for: 1m56s\"\n]\ntranslator failed to complete processing batch: failed to flush instructions to client history: allocating new client versions failed: error incrementing the version counter for (appID=\"62079d369f7c7cf6d91c56d9\", fileIdent=2): connection pool for realmcluster-shard-00-01.kx2p2.mesh.mongodb.net:30444 was cleared because another operation failed with: connection() error occurred during connection handshake: read tcp 127.0.0.1:44172->127.0.0.1:30444: read: connection reset by peer\nending session with error: integrating changesets failed: error creating new integration attempt: failed to get latest server version while integrating changesets: connection(realmcluster-shard-00-01.kx2p2.mesh.mongodb.net:30444[-2701]) socket was unexpectedly closed: EOF (ProtocolErrorCode=201)\nintegrating changesets failed: error creating new integration attempt: failed to get latest server version while integrating changesets: connection(realmcluster-shard-00-01.kx2p2.mesh.mongodb.net:30444[-2701]) socket was unexpectedly closed: EOF (ProtocolErrorCode=201)\ntranslator failed to complete processing batch: failed to update resume token document: connection() error occurred during connection handshake: read tcp 127.0.0.1:41830->127.0.0.1:30444: read: connection reset by peer\ntranslator failed to complete processing batch: failed to update resume token document: connection() error occurred during connection handshake: EOF\n",
"text": "Hi,I’m using flexible sync for syncing the game state in a game project and never had any problems before.\nHowever, today after I let my app test by ~50 users suddenly the following errors popped up after a few hours of testing and a few users were complaining about sync not working anymore until they restarted the app.This happened during a time frame of ~1-2 hours and then everything worked normally again. I hope this error doesn’t occur again, otherwise I cannot use flexible sync in my project until it’s stable…What could be the reason? Was there any update from your side between May 23 20:06:11+02:00 and May 23 21:46:05+02:00? Or is it a bug?I’m using Unity 2020.3.33f1 and Realm Unity v10.11.1.These are the error logs I got during this time frame:Thanks in advance!",
"username": "MetalMonkey"
},
{
"code": "",
"text": "Hi. These are all transient errors that cause a quick restart / rejection and then things should pick back up. All of them look like issues connecting to your Atlas cluster which point to either (a) an event on the cluster occuring or (b) an underpovisioned cluster (if you are using an M0, performance issues are common). Did any of these cause issues, or it is just the error in the UI that is concerning you?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi,thanks for the quick reply! I’m currently using a M10 cluster and cannot find any performance issues or special events that occurred during this time frame in the cluster…\nI only stumbled about these error messages, because a few testers were complaining that syncing suddenly stopped and didn’t pick up again unless they restarted the app.\nSince I cannot find any problems in my code causing this (it’s a very basic flexible sync implementation) or on the M10 cluster and it never happened before I assumed the problem might be connected to a (temporary?) bug in flexible sync.The cluster itself seemed to work fine during this incident as there were no errors or performance issues in any database related backend functions…",
"username": "MetalMonkey"
},
{
"code": "",
"text": "Hmm this looks like a connection error to your Atlas cluster. Can you open a support ticket or share with us the Realm App URL (the url in the web browser) - and we can take a look on the backend for you?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Do you mean this URL: Link ?",
"username": "MetalMonkey"
},
{
"code": "",
"text": "Hey There,We took a look and there appears to be I/O timeouts on your Atlas cluster during this time. This typically points to your cluster being overloaded and not being able to respond to requests from the sync servers. Upgrading your Atlas cluster to a higher instance type should resolve this.-Ian",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Suddenly problems with flexible sync
|
2022-05-23T21:04:03.541Z
|
Suddenly problems with flexible sync
| 4,967 |
null |
[
"data-modeling",
"compass",
"database-tools",
"containers"
] |
[
{
"code": "mongoimportmongoimport --db \"db01\" --collection \"table01\" --file \"output01.json\"\nmongoimport --db \"db01\" --collection \"table01\" --file \"output02.json\"\nmongoimport --db \"db01\" --collection \"table01\" --file \"output03.json\"\n...etc...\nmongoimport --db \"db01\" --collection \"table01\" --file \"output01.json\" --map \"Input String 01\"\nmongoimport --db \"db01\" --collection \"table01\" --file \"output02.json\" --map \"Input String 02\"\nmongoimport --db \"db01\" --collection \"table01\" --file \"output03.json\" --map \"Input String 03\"\n...etc...\nmongoimport",
"text": "Hi everyone,I’m a MongoDB newbie who has put together his first Mongo DB. (Ubuntu platform, Mongo v5.0.8. I’m actually using the Docker container version of Mongo.)I’m wondering if there’s a way to map a string to a database document and/or vice versa?To explain in more detail: In my job, I have a piece of software that takes in a text string as input, processes the string, then generates output in the form of a JSON file. These JSON files can be quite diverse; no two are really alike. To analyze the output, I’ve put about a thousand of these JSON files into my Mongo DB instance.Only now, I’m realizing that just looking at the JSON output is only half of that picture. For each document, I need the original text string associated with the JSON. (And sadly, that string is not included within the JSON itself.)To be explicit: If I’m in Compass and I’m searching on a given input string, I need a way to pull up the corresponding JSON output document. Or, given a JSON document, I need to be able to lookup the original string. There is an exact 1:1 relationship between string and JSON; no two strings will be the same, and no two JSON documents will be the same, either. Every string will map to exactly one JSON, and vice versa.When I uploaded my- JSON docs into Mongo, I used this mongoimport command from the Ubuntu command line:Very easy. But now, I can’t manually assign each input string to its corresponding JSON output document. I’m willing to delete the current database and re-enter everything again, perhaps with something like this:Of course, I don’t see something like that in the mongoimport documentation. Does anyone have any suggestions? I don’t mind rebuilding the database to include mapping function. Thank you.",
"username": "redapplesonly"
},
{
"code": "{\n 'input_string': 'abcde',\n 'field1': 10,\n 'field2': 'Hello There!',\n 'field3': 42,\n 'field4': ISODate(xxx)\n}\n{input_string: 1}",
"text": "Hi @redapplesonly and welcome in the MongoDB Community !Why not include the input string in the doc you are inserting into MongoDB along with the fields generated by that string?With an index on {input_string: 1}, you could retrieve these documents easily. Also, you could use MongoDB as a cache to avoid reprocessing incoming input strings that are already in your MongoDB collection.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
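A minimal sketch of that approach in the shell; the collection name results and the example string are placeholders:

```javascript
// Unique index so each input string maps to exactly one output document
db.results.createIndex({ input_string: 1 }, { unique: true })

// MAP(string) ==> document
db.results.findOne({ input_string: "the original text string" })
```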
{
"code": "MAP(document) ==> stringMAP(string) ==> document",
"text": "Thanks for the thought, Maxime! Unfortunately, I don’t have control over what goes into the JSON documents. They are static, and I have to import them as is. No editing allowed. Its a big headache, to be honest.Ultimately, if I am sorting though my documents within MongoDB and a find() pulls up any specific document, then I need a way to MAP(document) ==> string. Conversely, if I know the string and I want to see the JSON that was its output, I need MAP(string) ==> document. You see my dilemma.",
"username": "redapplesonly"
},
{
"code": "{\n _id: ObjectId('628e5ee995973139032f704c')\n input_string: 'abcde'\n related_doc: ObjectId('628e5ee995973139032f704d')\n}\n{\n '_id': ObjectId('628e5ee995973139032f704d')\n 'field1': 10,\n 'field2': 'Hello There!',\n 'field3': 42,\n 'field4': ISODate(xxx)\n}\n",
"text": "Create another collection which you can control then and reference to the other doc that you can’t touch for some obscur reasons?Your collection ( can’t touch this ):JSON collection:You’ll have to use a $lookup now but if it’s the only way…Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
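A hedged sketch of querying through such a mapping collection with $lookup; the collection names mappings and json_docs are placeholders:

```javascript
// MAP(string) ==> document, joining the mapping entry to the untouched JSON output
db.mappings.aggregate([
  { $match: { input_string: "the original text string" } },
  { $lookup: {
      from: "json_docs",          // the collection loaded with mongoimport
      localField: "related_doc",  // ObjectId stored in the mapping document
      foreignField: "_id",
      as: "output"
  } },
  { $unwind: "$output" }
])
```
Going the other way, MAP(document) ==> string is just a findOne on the mapping collection by related_doc.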
{
"code": "",
"text": "Interesting, thank you! I will implement and report back…!",
"username": "redapplesonly"
}
] |
Map Document with Text String?
|
2022-05-24T19:04:50.675Z
|
Map Document with Text String?
| 2,813 |
null |
[
"aggregation",
"queries",
"dot-net"
] |
[
{
"code": "[\n {\n \"messages\":[\n {\n \"intent\":\"Welcome\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Appointment\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Thankyou\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Thankyou\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Thankyou\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Welcome\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Appointment\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Welcome\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Welcome\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Welcome\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Demo\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Demo\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Welcome\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Appointment\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Appointment\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Appointment\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Appointment\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Leads\"\n }\n ]\n }\n]\n{\n\"Appointment\" : 6,\n\"Welcome\" : 6,\n\"Thankyou\" : 3\n\"Leads\" : 1\n\"Demo\" : 2\n}\n",
"text": "Dear All,\nI am very new to MongoDB. I need your guy help.\nI want total count group by intent with C# .NET.\nCould you guys please help me the way how to do?\nThe below are the request and the response I want.Thank you so much.RequestResponseBest Regards,\nKyi Moh",
"username": "Kyimohmoh_Thwin"
},
{
"code": "GroupGroupBy",
"text": "Hi, @Kyimohmoh_Thwin,Welcome to the MongoDB Community Forums. What you are trying to accomplish is a grouping operation. This can be expressed in MongoDB using an aggregation with a Group stage or using LINQ with GroupBy. Hopefully that helps get you started.Sincerely,\nJames",
"username": "James_Kovacs"
},
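For reference, the equivalent operation expressed as a shell aggregation, which the C# Aggregate/Unwind/Group builders mirror (a sketch, not driver code):

```javascript
db.collection.aggregate([
  { $unwind: "$messages" },
  { $group: { _id: "$messages.intent", count: { $sum: 1 } } }
])
```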
{
"code": "",
"text": "Hi @James_Kovacs ,Thanks you so much for your reply help.\nCould you please provide the reference code with c#?Best Regards,\nKyi Moh",
"username": "Kyimohmoh_Thwin"
},
{
"code": "[\n {\n \"messages\":[\n {\n \"intent\":\"Welcome\"\n },\n {\n \"intent\":\"Appointment\"\n },\n {\n \"intent\":\"Welcome\"\n }\n ]\n },\n {\n \"messages\":[\n {\n \"intent\":\"Thankyou\"\n },\n {\n \"intent\":\"Demo\"\n }\n ]\n },\n]\n",
"text": "Hi @James_Kovacs ,I am using MongoDB C# Driver 2.15.\nThe Intent property I want to group by is located at the array list.\nI was finding out it how to get. But, I find only simple group by field not in array list.\nCould you please provide one the ways?Best Regards,\nKyi Moh",
"username": "Kyimohmoh_Thwin"
},
{
"code": "",
"text": "Hi All,Please help me the way? I need this urgent and I am very new to MongoDB. So I need your guy help.Best Regards,\nKyi Moh",
"username": "Kyimohmoh_Thwin"
},
{
"code": "var pResults = collection.Aggregate()\n .Unwind(\"messages\")\n .Match(new BsonDocument { { \"version\", \"3.0\" }})\n .Group(new BsonDocument\n {\n { \"_id\", \"$messages.intent\" },\n {\"count\", new BsonDocument(\"$sum\", 1)}\n })\n .ToList();\n",
"text": "Dear @James_Kovacs\nI got it with the below codeThank youBest Regards,\nKyi Moh",
"username": "Kyimohmoh_Thwin"
},
{
"code": "",
"text": "Hi, @Kyimohmoh_Thwin,We’re glad that you were able to find a solution based on the provided resources. While we cannot write code for you, we are always happy to point you in the right direction.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
How to query Total Count by Group By?
|
2022-05-19T18:00:42.406Z
|
How to query Total Count by Group By?
| 5,452 |
null |
[
"aggregation"
] |
[
{
"code": "db.collections.aggregate([\n {$match: {$and: [{\"products.productInfo.productId\": {\"$gte\": 93143, \"$lte\": 93643}}]}},\n {$unwind: \"$products\"},\n {$match: {$and: [{\"products.productInfo.productId\": {\"$gte\": 93143, \"$lte\": 93643}}]}}, \n {$sort: {\"products.productInfo.productId\": 1}}, \n {$limit: 50}\n])\n",
"text": "We are facing an issue in our query where using “sort” significantly increases execution time.The query is this:We use unwind because our document has a list of products and filters from a range of ids. The productId is an index.ExecutionStats without the “sort” clause (average time below 1s):\n“nReturned” : 163\n“executionTimeMillis” : 16\n“totalKeysExamined” : 351\n“totalDocsExamined” : 297ExecutionStats with the “sort” clause (average time 14s):\n“nReturned” : 1919\n“executionTimeMillis” : 17504\n“totalKeysExamined” : 847869\n“totalDocsExamined” : 451385I read this in a post:This happens because of $unwind stage.Your query performance is slow because query is not considering index after the $unwind stage.Check that with explain.you will get to know.This happens because after the $unwind whole documents will change and it becomes different that is stored in RAM for indexing purpose.Is that why sort degrades the query?",
"username": "Andre_Luiz"
},
{
"code": "$match$sort$group$geoNear$unwind$and$sort$filter$match + $unwind + $match$match",
"text": "Long story short, yes.This doc explains when an aggregation pipeline can use the collection indexes.Basically, $match and $sort at the beginning of a pipeline can use an index from the collection. Under certain conditions, $group can also use an index. Same for $geoNear but that’s it.After a $unwind, the docs are completely different from the ones in the collection so indexes are useless after that stage.I think the only thing you can remove in your query to “improve” is to remove the $and that only contain a single filter. But this won’t affect the speed much.As the $sort happens in memory, more RAM (if the RAM is constantly full) or faster RAM (DDR5 For the Win) could help.Another potential improvement would be to use $filter instead of the $match + $unwind + $match combo. This would help reduce the size of the pipeline in RAM and this would also avoid the second $match that’s also not backed up by an index anymore.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
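A hedged sketch of the $filter variant mentioned above, reusing the field names and bounds from the question:

```javascript
db.collections.aggregate([
  // Still index-backed: selects documents containing at least one product in range
  { $match: { "products.productInfo.productId": { $gte: 93143, $lte: 93643 } } },
  // Keep only the matching array elements instead of unwinding
  { $set: {
      products: {
        $filter: {
          input: "$products",
          as: "p",
          cond: { $and: [
            { $gte: [ "$$p.productInfo.productId", 93143 ] },
            { $lte: [ "$$p.productInfo.productId", 93643 ] }
          ] }
        }
      }
  } },
  { $limit: 50 }
])
```
Note that the output shape differs: each result keeps its (filtered) products array, so a global sort by productId across documents would still require an $unwind or client-side sorting.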
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Unwind and Sort - Decrease Query Time
|
2022-05-24T19:34:33.545Z
|
Unwind and Sort - Decrease Query Time
| 3,413 |
null |
[
"aggregation"
] |
[
{
"code": "[\n {\n \"account\": \"Cat12\",\n \"activities\": [\n {\n \"name\": \"A1\",\n \"status\": \"S1\",\n \"type\": \"T1\"\n },\n {\n \"name\": \"A2\",\n \"status\": \"S2\",\n \"type\": \"T2\"\n }\n ]\n },\n {\n \"account\": \"Cat12\",\n \"activities\": [\n {\n \"name\": \"A3\",\n \"status\": \"S3\",\n \"type\": \"T3\"\n },\n {\n \"name\": \"A2\",\n \"status\": \"S2\",\n \"type\": \"T2\"\n }\n ]\n },\n {\n \"account\": \"Cat12\",\n \"activities\": [\n {\n \"name\": \"A1\",\n \"status\": \"S1\",\n \"type\": \"T1\"\n },\n {\n \"name\": \"A2\",\n \"status\": \"S2\",\n \"type\": \"T2\"\n }\n ]\n },\n {\n \"account\": \"Cat13\",\n \"activities\": [\n {\n \"name\": \"A1\",\n \"status\": \"S1\",\n \"type\": \"T1\"\n },\n {\n \"name\": \"A2\",\n \"status\": \"S2\",\n \"type\": \"T2\"\n }\n ]\n }\n]\n[\n {\n \"name\": \"A1\",\n \"status\": \"S1\",\n \"type\": \"T1\",\n \"count\": 2\n },\n {\n \"name\": \"A2\",\n \"status\": \"S2\",\n \"type\": \"T2\",\n \"count\": 3\n },\n {\n \"name\": \"A3\",\n \"status\": \"S3\",\n \"type\": \"T3\",\n \"count\": 1\n }\n]\n[\n {\n \"name\": \"A1\",\n \"status\": \"S1\",\n \"type\": \"T1\",\n \"count\": 1\n },\n {\n \"name\": \"A2\",\n \"status\": \"S2\",\n \"type\": \"T2\",\n \"count\": 1\n }\n]\n",
"text": "I have a category Document data like as shown belowWhat I am trying to do is to aggregate the activities based on the account. Lets say for Cat12 I should get the following output with the no repeated countsand for Cat13 I should get like as shown belowIs this achievable using aggregate",
"username": "AlexMan"
},
{
"code": "[\n {\n '$match': {\n 'account': 'Cat12'\n }\n }, {\n '$unwind': {\n 'path': '$activities'\n }\n }, {\n '$group': {\n '_id': {\n 'name': '$activities.name', \n 'status': '$activities.status', \n 'type': '$activities.type'\n }, \n 'count': {\n '$sum': 1\n }\n }\n }, {\n '$project': {\n 'name': '$_id.name', \n 'status': '$_id.status', \n 'type': '$_id.type', \n 'count': 1, \n '_id': 0\n }\n }\n]\nCat12[\n { count: 2, name: 'A1', status: 'S1', type: 'T1' },\n { count: 3, name: 'A2', status: 'S2', type: 'T2' },\n { count: 1, name: 'A3', status: 'S3', type: 'T3' }\n]\nCat13[\n { count: 1, name: 'A1', status: 'S1', type: 'T1' },\n { count: 1, name: 'A2', status: 'S2', type: 'T2' }\n]\n{account:1}$matchCat12Cat13[\n {\n '$unwind': {\n 'path': '$activities'\n }\n }, {\n '$group': {\n '_id': {\n 'cat': '$account', \n 'name': '$activities.name', \n 'status': '$activities.status', \n 'type': '$activities.type'\n }, \n 'count': {\n '$sum': 1\n }\n }\n }, {\n '$project': {\n 'cat': '$_id.cat', \n 'name': '$_id.name', \n 'status': '$_id.status', \n 'type': '$_id.type', \n 'count': 1, \n '_id': 0\n }\n }\n]\n[\n { count: 3, cat: 'Cat12', name: 'A2', status: 'S2', type: 'T2' },\n { count: 1, cat: 'Cat12', name: 'A3', status: 'S3', type: 'T3' },\n { count: 1, cat: 'Cat13', name: 'A1', status: 'S1', type: 'T1' },\n { count: 2, cat: 'Cat12', name: 'A1', status: 'S1', type: 'T1' },\n { count: 1, cat: 'Cat13', name: 'A2', status: 'S2', type: 'T2' }\n]\n",
"text": "Hi @AlexMan and welcome in the MongoDB Community !It’s a relatively basic aggregation pipeline. You can learn more about the Aggregation Pipeline in the dedicated course that we offer on MongoDB University in the course M121.Here is my solution:Result for Cat12:Results for Cat13:Note that you need an index on {account:1} to support the $match stage.Also, nothing prevents you from computing Cat12 and Cat13 at the same time:Results:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Aggregate and group the child list with the count
|
2022-05-25T13:14:13.110Z
|
Aggregate and group the child list with the count
| 3,410 |
null |
[
"atlas-cluster",
"golang",
"containers"
] |
[
{
"code": "",
"text": "My whole application already dockerize & I want to add mogodb database in our project. here is following codeserverAPIOption := options.ServerAPI(options.ServerAPIVersion1)\nclientOptions := options.Client().ApplyURI(“mongodb+srv://enigma:@cluster0.ckk7d.mongodb.net/enigma?retryWrites=true&w=majority”).\nSetServerAPIOptions(serverAPIOption)client, err := mongo.NewClient(clientOptions)\nif err != nil {\nlog.Fatalln(“Error create client object :”, err)\nreturn &mongo.Client{}\n}docker run --rm --hostname=wizdwarfs --net=host -p 127.0.0.1:5000:5000 -v app:/app/app_data -it v0b2022/05/12 12:16:11 Error create client object : error parsing uri: lookup cluster0.ckk7d.mongodb.net on 127.0.0.53:53: cannot unmarshal DNS message, Please help. --net=host allow you to connect with internet; either this cause by docker or some technical issue",
"username": "Ali_Hassan1"
},
{
"code": "\"mongodb+srv://\"\"mongodb://\"",
"text": "Hi @Ali_Hassan1 - Welcome to the community!error parsing uri: lookup cluster0.ckk7d.mongodb.net on 127.0.0.53:53: cannot unmarshal DNS messageThe above error message could possibly be caused by to what is detailed on https://jira.mongodb.org/browse/GODRIVER-829. More specifically, I would refer to the following comment on March 15 which provides more information on the possible causes and a solution / workaround.If you still require additional assistance, please:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks … Some day I will try but currently I’m very busy in different project",
"username": "Ali_Hassan1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Mongodb Atlas connect with docker app golang
|
2022-05-12T13:19:25.095Z
|
Mongodb Atlas connect with docker app golang
| 3,147 |
null |
[
"indexes",
"database-tools",
"backup"
] |
[
{
"code": "mongodumpmongodump: read data: make dump: error dumping metadata: (Location5254501) Could not parse catalog entry while replying to listIndexes.cmddb.tickets.validate()\n{\n \"ns\" : \"cmd.tickets\",\n \"nInvalidDocuments\" : 0,\n \"nrecords\" : 88638415,\n \"nIndexes\" : 42,\n \"keysPerIndex\" : {\n\n },\n \"indexDetails\" : {\n\n },\n \"valid\" : false,\n \"repaired\" : false,\n \"warnings\" : [ ],\n \"errors\" : [\n \"The index specification for index 'interaction.networkItemId_1' contains invalid field names. The field 'safe' is not valid for an index specification. Specification: { v: 1, key: { interaction.networkItemId: 1 }, name: \\\"interaction.networkItemId_1\\\", ns: \\\"cmd.tickets\\\", background: true, safe: null }. Run the 'collMod' command on the collection without any arguments to remove the invalid index options\"\n ],\n \"extraIndexEntries\" : [ ],\n \"missingIndexEntries\" : [ ],\n \"corruptRecords\" : [ ],\n \"advice\" : \"A corrupt namespace has been detected. See http://dochub.mongodb.org/core/data-recovery for recovery steps.\",\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1651416503, 80),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1651407947, 2)\n}\nmongodumpdb.runCommand({listIndexes: \"tickets\"})ticketsdb.runCommand({listIndexes: \"tickets\", cursor: {batchSize:19}})getMoredb.runCommand({listIndexes: \"tickets\", cursor: {batchSize:19}})MongoServerError: Could not parse catalog entry while replying to listIndexes\n\"warnings\" : [\n \"Could not complete validation of table:collection-31-1751203610025669779. This is a transient issue as the collection was actively in use by other operations.\",\n \"Could not complete validation of table:index-32-1751203610025669779. This is a transient issue as the collection was actively in use by other operations.\"\n",
"text": "Since 2022-04-26, when we updated to mongo 5.0 with compatibility version on 4.4, we’re having a issue with mongodump and other tools / commands for mongodb.\nWe investigated and figured it might boil down to a index on a single collection in one of our databases.Mongodump error: mongodump: read data: make dump: error dumping metadata: (Location5254501) Could not parse catalog entry while replying to listIndexes.\nWe validated all indices in our cmd database and got an error on one of the indices.We dropped the faulty index and ran a full-validate again, that returned all collections in that database valid.But mongodump still fails with the same error. Also db.runCommand({listIndexes: \"tickets\"}) fails.There are some 42 indices on the tickets collection. When executing db.runCommand({listIndexes: \"tickets\", cursor: {batchSize:19}}) and then using getMore on that cursorId with an arbitrary batchSize, we can list all indices in that collection.But when we want to list all or a significant number of indices the command fails. We figured out the magic index seems to be number 20 as db.runCommand({listIndexes: \"tickets\", cursor: {batchSize:19}}) works but db.runCommand({listIndexes: “tickets”, cursor: {batchSize:20}})` fails withThe full-validate had two warnings but they don’t seem to be that much of a problemWe plan to increase compatibilitylevel to 5 this week. we are also interested in your opinion if this might worsen or resolve our current issue.Kind regards\nMichael",
"username": "Michael_Schmid"
},
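The validate() error message above already names the first remediation step; a hedged sketch of applying it and re-checking:

```javascript
use cmd
// Running collMod without options strips invalid legacy index options such as "safe"
db.runCommand({ collMod: "tickets" })
// Re-run validation afterwards
db.tickets.validate()
```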
{
"code": "2022-05-25T17:59:52.121+0300\tFailed: error dumping metadata: (Location5254501) Could not parse catalog entry while replying to listIndexesdb.getCollection('triggers').getIndexes();\ndb.getCollection('unsaved-edits').getIndexes();\ndb.getCollection('users').getIndexes();\n> db.getCollection('cfs_gridfs._tempstore.files').getIndexes();\nMongoServerError: Could not parse catalog entry while replying to listIndexes\ndb.getCollection('cfs_gridfs._tempstore.files').dropIndexes();\n",
"text": "Hi , Michael_Schmid\ngot the same problem after upgrade to 5.0 mongo\nlike 2022-05-25T17:59:52.121+0300\tFailed: error dumping metadata: (Location5254501) Could not parse catalog entry while replying to listIndexesthe thing is in broken indexes after upgrade.for me helped next steps\nrunning commands likefor all your collections.\nThis helps to find broken stuffAfter that just dropped and recreated needed indexes",
"username": "Serhii_Martyniuk"
}
] |
Mongodump and other tools broken after upgrade to 5.0
|
2022-05-09T11:40:52.556Z
|
Mongodump and other tools broken after upgrade to 5.0
| 2,357 |
null |
[
"node-js"
] |
[
{
"code": "My Document Structure is :\n{\n \"_id\":{\n \"$oid\":\"61b09087c6379653642cc9d6\"\n },\n \n \"name\":\"Seven Rocks Mens Tshirt\",\n \"sku\":\"XXXL-T1-TMBL\",\n \"asin\":\"B07QMCLYYZ\",\n \"productIdealConsumption\":[\n {\n \"id\":8470,\n \"productIdealConsumptionId\":663,\n \"productId\":1212,\n \"qty\":0.22,\n \"product\":{\n \"id\":1212,\n \"sku\":\"6040CP-SJ-TB-FR-TM\",\n \"name\":\"Fabric Roll\",\n \"categoryId\":5\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0\n },\n {\n \"id\":8471,\n \"productIdealConsumptionId\":663,\n \"productId\":1028,\n \"qty\":0.036,\n \"product\":{\n \"id\":1028,\n \"sku\":\"6040CP-SJ-TB-FR-BL\",\n \"name\":\"Fabric Roll\",\n \"categoryId\":5\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0\n },\n {\n \"id\":8472,\n \"productIdealConsumptionId\":663,\n \"productId\":1342,\n \"qty\":0.021,\n \"product\":{\n \"id\":1342,\n \"sku\":\"CPL-1X1 RB-TB-FR-BL\",\n \"name\":\"Rib Fabric Roll\",\n \"categoryId\":14\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0\n \n },\n {\n \"id\":8473,\n \"productIdealConsumptionId\":663,\n \"productId\":2907,\n \"qty\":136.8,\n \"product\":{\n \"id\":2907,\n \"sku\":\"GI-MTR-SEW.THD-2PLY-TM\",\n \"name\":\"2PLY\",\n \"categoryId\":72\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0 \n \n },\n {\n \"id\":8474,\n \"productIdealConsumptionId\":663,\n \"productId\":2901,\n \"qty\":3.75,\n \"product\":{\n \"id\":2901,\n \"sku\":\"GI-MTR-SEW.THD-2PLY-BL\",\n \"name\":\"2PLY\",\n \"categoryId\":72\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0\n },\n {\n \"id\":8478,\n \"productIdealConsumptionId\":663,\n \"productId\":3058,\n \"qty\":1,\n \"product\":{\n \"id\":3058,\n \"sku\":\"QCI-PCS-WASH CARE LBL-30MM*200MTR\",\n \"name\":\"WASH CARE LBL-30MM*200MTR\",\n \"categoryId\":62\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0\n },\n {\n \"id\":8479,\n \"productIdealConsumptionId\":663,\n \"productId\":3059,\n \"qty\":1,\n \"product\":{\n \"id\":3059,\n \"sku\":\"QCI-PCS-WASH CARE RIBBON-40MM*300MTR\",\n \"name\":\"WASH CARE RIBBON-40MM*300MTR\",\n \"categoryId\":62\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0\n },\n {\n \"id\":8480,\n \"productIdealConsumptionId\":663,\n \"productId\":3046,\n \"qty\":1,\n \"product\":{\n \"id\":3046,\n \"sku\":\"QCI-PCS-EB TAGS\",\n \"name\":\"EB TAGS\",\n \"categoryId\":62\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0\n },\n {\n \"id\":8481,\n \"productIdealConsumptionId\":663,\n \"productId\":3049,\n \"qty\":1,\n \"product\":{\n \"id\":3049,\n \"sku\":\"QCI-PCS-PACKING GATTA- 10*13\",\n \"name\":\"PACKING GATTA- 10*13\",\n \"categoryId\":62\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0\n\t\t },\n {\n \"id\":8482,\n \"productIdealConsumptionId\":663,\n \"productId\":3053,\n \"qty\":1,\n \"product\":{\n \"id\":3053,\n \"sku\":\"QCI-PCS-PLAIN BOPP BAG-11*16-21MIC.\",\n \"name\":\"PLAIN BOPP BAG-11*16-21MIC.\",\n \"categoryId\":62\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n 
\"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0,\n },\n {\n \"id\":8483,\n \"productIdealConsumptionId\":663,\n \"productId\":3083,\n \"qty\":0.001,\n \"product\":{\n \"id\":3083,\n \"sku\":\"QCI-LTR-BENZENE CHM.\",\n \"name\":\"BENZENE CHM.\",\n \"categoryId\":66\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0,\n },\n {\n \"id\":8484,\n \"productIdealConsumptionId\":663,\n \"productId\":2895,\n \"qty\":0.75,\n \"product\":{\n \"id\":2895,\n \"sku\":\"GCI-MTR-BONE DORI (MOTI)\",\n \"name\":\"BONE DORI (MOTI)\",\n \"categoryId\":71\n },\n \"avlInventory\":500,\n \"reservedInventory\":2.25,\n \"currAvlInventory\":500,\n \"skuAvlIntventoryQty\":0,\n }\n ],\n \"totalOprationTime\":38,\n \"totalOprations\":38,\n \"productId\":57,\n \"skuIdealConsumptionAvlIntventory\":1,\n \"skuOperationsTimeAvlIntventory\":0,\n \"skuAvlIntventory\":0,\n \"__v\":0\n} \nfinishedproducts.updateMany({\n \"sku\":{\n \"$in\":[\n \"XS-T1-WMBL\",\n \"S-T1-WMBL\",\n \"M-T1-WMBL\",\n \"L-T1-WMBL\",\n \"XL-T1-WMBL\",\n \"XXL-T1-WMBL\",\n \"XXXL-T1-WMBL\",\n \"XS-T1-NMBL\",\n \"S-T1-NMBL\",\n \"L-T1-NMBL\",\n \"XL-T1-NMBL\",\n \"XXL-T1-NMBL\",\n \"XXXL-T1-NMBL\",\n \"XS-T1-KMBL\",\n \"S-T1-KMBL\",\n \"M-T1-KMBL\",\n \"L-T1-KMBL\",\n \"XL-T1-KMBL\",\n \"XXL-T1-KMBL\",\n \"XXXL-T1-KMBL\",\n \"XS-T1-BLAG\",\n \"S-T1-BLAG\",\n \"M-T1-BLAG\",\n \"L-T1-BLAG\",\n \"XL-T1-BLAG\",\n \"XXL-T1-BLAG\",\n \"XXXL-T1-BLAG\",\n \"XS-T1-TMBL\",\n \"S-T1-TMBL\",\n \"M-T1-TMBL\",\n \"L-T1-TMBL\",\n \"XL-T1-TMBL\",\n \"XXL-T1-TMBL\",\n \"XXXL-T1-TMBL\"\n ]\n },\n \"productIdealConsumption.$.productId\":2895\n},\n[\n {\n \"$set\":{\n \"productIdealConsumption.$.reservedInventory\":\"$productIdealConsumption.$.qty\"\n }\n }\n],\n{\n \n})\n",
"text": "I need a help for my updateMany query,My query is like this,I want to update all subdocument “productIdealConsumption” which productId is “2895” with given document “skus”.\nMy query given me error , what wrong with my query.\nPlease look once my query, and suggest me a solution for this.\nThanks.",
"username": "Seekex_youtube"
},
{
"code": "\"productIdealConsumption.$.productId\":2895\"productIdealConsumption.productId\":2895",
"text": "I gave a quick look at your query.You do not use $ in the query part. You should replace\"productIdealConsumption.$.productId\":2895with\n\"productIdealConsumption.productId\":2895See https://docs.mongodb.com/manual/reference/operator/update/positional/ for examples. You might need to use https://docs.mongodb.com/manual/reference/operator/update/positional-all/ if more than one productIdealConsumption item refers to the same productId within the same top level document. This is not the case in the sample document you supplied but you should be aware.Your problem is the same as Changing value in nested array from a scalar to an arry.What is unresolved is how to get the positional value, productIdealConsumption.$.qty in your case. As mentioned in the other thread $map is worth investigating.Another avenue would be to $unwind, update the unwinded documents, then $group back all documents and use $out or $merge.",
"username": "steevej"
},
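A minimal mongosh sketch of the corrected filter plus a filtered positional update, assuming the collection name finishedproducts and the field names from the sample document above; skuList stands in for the SKU array and the snippet has not been run against the poster's data. Copying each element's own qty into reservedInventory needs the pipeline/$map form discussed later in the thread; the filtered positional form below only works for a literal value.

```js
// Match parent documents plainly (no $ in the query part), then target only the
// array elements whose productId is 2895 with a filtered positional operator.
db.finishedproducts.updateMany(
  { sku: { $in: skuList }, "productIdealConsumption.productId": 2895 },
  { $set: { "productIdealConsumption.$[c].skuAvlIntventoryQty": 0 } }, // literal value only
  { arrayFilters: [ { "c.productId": 2895 } ] }
)
```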
{
"code": "",
"text": "$map is was worth investigating.See my answer at Mongo query assistance update array element with field from object - #5 by steevej",
"username": "steevej"
},
{
"code": "",
"text": "@Seekex_youtube, is your issue resolve? If it is, please mark my post as the solution so that others know that it works. This will help keep this post useful and efficient. It is simply courtesy.",
"username": "steevej"
},
{
"code": "",
"text": "Need Help…!!!\nWhat If I want to check two conditions. Like I want to update a field only when two given conditions are true. If either of are false, I don’t want to update that field in that particular document. I also have a sub-doc like following\nteam: [ {field1, field2, field3,…}, {field1, field2, field3,…}, {field1, field2, field3,…}, …]So, what I want to do is to check if field1 and field2 are satisfied or not. If they both are true/satisfied, make the update to field3, otherwise if either of it are false or not satisfied, don’t make any update.",
"username": "Najam_Ul_Sehar"
},
{
"code": "",
"text": "Share real sample documents we can cut-n-paste into our system.Share the expected result.Share the exact condition you want.Share what you have tried and explain us how it failed to deliver the expected result.",
"username": "steevej"
},
{
"code": "",
"text": "So, without going into other details of my project, I have a collection containing multiple documents. In each document, there is an array named “team” and inside this array, I have objects. Each individual object contain information for a particular player like name, points, _id and isCaptain (which tells that whether this particular player is a captain of this team or not in the form of ‘true’ and ‘false’ i.e isCaptain: true means that this player is captain of this team and isCaptain: false means that this player is not a captain of this team). From the frontend, I am sending two values; id and points. Now, what I am doing is that I’m fetching/reaching out to all the players(objects) whose _id matches with the input ‘_id’ (coming from the frontend). After this, I want to check if this player is the captain of this team or not. If he is, then I want to $set the points of this player as double points by doing points 2. And if that player is not a captain, then I just want to $set points to the input points without making it double.\nI was able to do everything but I’m stuck on isCaptain. I want my code to be modelled in such a way that it checks the matching _id and also check that if that player is captain or not. If he is captain than double the points (points 2) otherwise go to the other update operation. I have attached the code snippet plus the Screenshots of my documents.\nLet me know if I was able to make it clear enough for you to understand.\n\nScreen Shot 2022-05-21 at 4.23.42 PM1330×967 81.3 KB\n\n\nScreen Shot 2022-05-21 at 4.19.37 PM860×768 68.6 KB\n\n\nScreen Shot 2022-05-21 at 4.20.17 PM983×751 79.3 KB\n\n\nScreen Shot 2022-05-21 at 4.21.18 PM907×762 68.6 KB\n",
"username": "Najam_Ul_Sehar"
},
{
"code": "",
"text": "We cannot cut-n-paste your documents when you publish them as a screenshot.We cannot cut-n-paste your code when you publish it as a screenshot.We need text thatwe can cut-n-paste into our system.Please read Formatting code and log snippets in posts.",
"username": "steevej"
},
{
"code": "{\"_id\":{\"$oid\":\"61fb77d0b3185a7a67f34c64\"},\"team\":[{\"name\":\"Abid Ali\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945f3\"},\"isCaptain\":false},{\"name\":\"Abdullah Shafique\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945f4\"},\"isCaptain\":false},{\"name\":\"Cameron Green\",\"role\":\"allrounder\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94616\"},\"isCaptain\":false},{\"name\":\"David Warner\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94617\"},\"isCaptain\":false},{\"name\":\"Glenn Maxwell\",\"role\":\"allrounder\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94618\"},\"isCaptain\":false},{\"name\":\"Josh Hazlewood\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94619\"},\"isCaptain\":false},{\"name\":\"Jhye Richardson\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461a\"},\"isCaptain\":false},{\"name\":\"Matthew Wade\",\"role\":\"wicketkeeper\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461b\"},\"isCaptain\":false},{\"name\":\"Mitchel Starc\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461f\"},\"isCaptain\":false},{\"name\":\"Marnus Labuschagne\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461e\"},\"isCaptain\":false},{\"name\":\"Fawad Alam\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945fb\"},\"isCaptain\":true}],\"tournmentslug\":\"pakistan-australia-2022\",\"userid\":{\"$oid\":\"61fb7799b3185a7a67f34bee\"},\"matchid\":{\"$oid\":\"61fb7721b3185a7a67f34b78\"},\"totalPoints\":{\"$numberInt\":\"0\"},\"__v\":{\"$numberInt\":\"1\"}} \n{\"_id\":{\"$oid\":\"61fb7e10b3185a7a67f34d9d\"},\"team\":[{\"name\":\"David Warner\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94617\"},\"isCaptain\":false},{\"name\":\"Usman Khawaja\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94628\"},\"isCaptain\":false},{\"name\":\"Marnus Labuschagne\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461e\"},\"isCaptain\":false},{\"name\":\"Steven Smith\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94626\"},\"isCaptain\":false},{\"name\":\"Babar Azam\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945f7\"},\"isCaptain\":true},{\"name\":\"Fawad Alam\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945fb\"},\"isCaptain\":false},{\"name\":\"Mohammad Rizwan\",\"role\":\"wicketkeeper\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94606\"},\"isCaptain\":false},{\"name\":\"Hasan 
Ali\",\"role\":\"bowler\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945fe\"},\"isCaptain\":false},{\"name\":\"Pat Cummins\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94624\"},\"isCaptain\":false},{\"name\":\"Mitchel Starc\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461f\"},\"isCaptain\":false},{\"name\":\"Shaheen Afridi\",\"role\":\"bowler\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9460c\"},\"isCaptain\":false}],\"tournmentslug\":\"pakistan-australia-2022\",\"userid\":{\"$oid\":\"61fb7d86b3185a7a67f34d27\"},\"matchid\":{\"$oid\":\"61fb7721b3185a7a67f34b78\"},\"totalPoints\":{\"$numberInt\":\"0\"},\"__v\":{\"$numberInt\":\"1\"}}\n{\"_id\":{\"$oid\":\"61fc2b8cb3185a7a67f3570f\"},\"team\":[{\"name\":\"David Warner\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94617\"},\"isCaptain\":false},{\"name\":\"Usman Khawaja\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94628\"},\"isCaptain\":false},{\"name\":\"Marnus Labuschagne\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461e\"},\"isCaptain\":false},{\"name\":\"Steven Smith\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94626\"},\"isCaptain\":false},{\"name\":\"Babar Azam\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945f7\"},\"isCaptain\":true},{\"name\":\"Fawad Alam\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945fb\"},\"isCaptain\":false},{\"name\":\"Mohammad Rizwan\",\"role\":\"wicketkeeper\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94606\"},\"isCaptain\":false},{\"name\":\"Hasan Ali\",\"role\":\"bowler\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945fe\"},\"isCaptain\":false},{\"name\":\"Pat Cummins\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94624\"},\"isCaptain\":false},{\"name\":\"Mitchel Starc\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461f\"},\"isCaptain\":false},{\"name\":\"Shaheen Afridi\",\"role\":\"bowler\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9460c\"},\"isCaptain\":false}],\"tournmentslug\":\"pakistan-australia-2022\",\"userid\":{\"$oid\":\"61fb7d86b3185a7a67f34d27\"},\"matchid\":{\"$oid\":\"61fb7721b3185a7a67f34b78\"},\"totalPoints\":{\"$numberInt\":\"0\"},\"__v\":{\"$numberInt\":\"1\"}}\n{\"_id\":{\"$oid\":\"61ff4b38b3185a7a67f36712\"},\"team\":[{\"name\":\"Abid Ali\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945f3\"},\"isCaptain\":false},{\"name\":\"Babar Azam\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945f7\"},\"isCaptain\":true},{\"name\":\"David 
Warner\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94617\"},\"isCaptain\":false},{\"name\":\"Marnus Labuschagne\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461e\"},\"isCaptain\":false},{\"name\":\"Mohammad Rizwan\",\"role\":\"wicketkeeper\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94606\"},\"isCaptain\":false},{\"name\":\"Shaheen Afridi\",\"role\":\"bowler\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9460c\"},\"isCaptain\":false},{\"name\":\"Hasan Ali\",\"role\":\"bowler\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945fe\"},\"isCaptain\":false},{\"name\":\"Pat Cummins\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94624\"},\"isCaptain\":false},{\"name\":\"Nathan Lyon\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94623\"},\"isCaptain\":false},{\"name\":\"Steven Smith\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94626\"},\"isCaptain\":false},{\"name\":\"Usman Khawaja\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94628\"},\"isCaptain\":false}],\"tournmentslug\":\"pakistan-australia-2022\",\"userid\":{\"$oid\":\"61ff4a84b3185a7a67f3669c\"},\"matchid\":{\"$oid\":\"61fb7721b3185a7a67f34b78\"},\"totalPoints\":{\"$numberInt\":\"0\"},\"__v\":{\"$numberInt\":\"0\"}}\n{\"_id\":{\"$oid\":\"621b0135b3185a7a67f3784a\"},\"team\":[{\"name\":\"Babar Azam\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945f7\"},\"isCaptain\":false},{\"name\":\"Fakhar Zaman\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945f9\"},\"isCaptain\":false},{\"name\":\"Fawad Alam\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945fb\"},\"isCaptain\":false},{\"name\":\"Haider Ali\",\"role\":\"batsman\",\"team\":\"Pakistan\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf945fc\"},\"isCaptain\":true},{\"name\":\"David Warner\",\"role\":\"batsman\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94617\"},\"isCaptain\":false},{\"name\":\"Glenn Maxwell\",\"role\":\"allrounder\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94618\"},\"isCaptain\":false},{\"name\":\"Matthew Wade\",\"role\":\"wicketkeeper\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf9461b\"},\"isCaptain\":false},{\"name\":\"Cameron Green\",\"role\":\"allrounder\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94616\"},\"isCaptain\":false},{\"name\":\"Alex Carey\",\"role\":\"wicketkeeper\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94615\"},\"isCaptain\":false},{\"name\":\"Scott 
Boland\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94625\"},\"isCaptain\":false},{\"name\":\"Pat Cummins\",\"role\":\"bowler\",\"team\":\"Australia\",\"points\":{\"$numberInt\":\"0\"},\"_id\":{\"$oid\":\"61f62219b1c44a37bbf94624\"},\"isCaptain\":false}],\"tournmentslug\":\"pakistan-australia-2022\",\"userid\":{\"$oid\":\"61fb7d86b3185a7a67f34d27\"},\"matchid\":{\"$oid\":\"61fb77fbb3185a7a67f34cde\"},\"totalPoints\":{\"$numberInt\":\"0\"},\"__v\":{\"$numberInt\":\"0\"}}\n const { id, points } = req.body;\n\n MyTeam.find({ 'team._id': id }).where('team.isCaptain').equals(true).updateMany(\n { \n $set: {\n 'team.$.points': points*2,\n 'totalPoints': points*2\n }\n },(err, numAffected) => {\n if(err) throw err;\n res.send(numAffected);\n }\n )\n\n MyTeam.find({ 'team._id': id }).where('team.isCaptain').equals(false).updateMany(\n { \n $set: {\n 'team.$.points': points,\n 'totalPoints': points\n }\n },(err, numAffected) => {\n if(err) throw err;\n res.send(numAffected);\n }\n )\n",
"text": "Documents are following:Following is my code snippetLet me know if this is fine or if you want me to share the code and documents in other way.",
"username": "Najam_Ul_Sehar"
},
{
"code": "MyTeam.find({ 'team._id': id }).where('team.isCaptain').equals(true)\n\"totalPoints\" : { \"$multiply\" : [ points , { \"$cond\" : [ \"$isCaptain\" , 2 , 1 ] } ] } \n\"team._id\":id",
"text": "I do not know the following syntax.Is it mongoose or something like that?You are using $set rather than $inc? If you do the operation for 2 people of the same team, your total point will be equal to the last value, not the sum of the 2 values. I am pretty sure that $inc is more appropriate.You are doing 2 database access to do your update. You should always try to avoid doing 2 db access. In this case, you could use $cond to $inc (or $set) to a different value based on the isCaptain.Something likeI am surprise that you do that simply for \"team._id\":id, don’t you want to restrict the update to a given matchid? For example, Hasan Ali is present in 3 documents. Your query will update the 3 documents where Hasan Ali is listed.",
"username": "steevej"
},
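A hedged mongosh sketch of the conditional update steevej describes. A plain $inc cannot take aggregation expressions (the mongoose cast error reported further down is a symptom of that), so this uses the aggregation-pipeline form of updateMany. The collection name myteams and the playerId / points variables are placeholders for the values coming from the request.

```js
db.myteams.updateMany(
  { "team._id": playerId },
  [
    {
      $set: {
        team: {
          $map: {
            input: "$team",
            as: "p",
            in: {
              $cond: [
                { $eq: ["$$p._id", playerId] },
                { $mergeObjects: [
                    "$$p",
                    { points: { $add: [
                        "$$p.points",
                        { $multiply: [points, { $cond: ["$$p.isCaptain", 2, 1] }] }
                    ] } }
                ] },
                "$$p"   // leave the other players untouched
              ]
            }
          }
        }
      }
    }
  ]
)
// totalPoints can be incremented in the same $set with a similar $add/$cond expression.
```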
{
"code": "MyTeam.updateMany({ 'team._id': id }, {\n // Rest of the code is same as above\n})\n\"{ '$multiply': [ 10, { '$cond': [Array] } ] }\"assert.ok(!isNaN(val))\n 'team.$.points': Number({ \"$cond\": [\"$isCaptain\", points*2, points] }),\n 'totalPoints': Number({ \"$cond\": [\"$isCaptain\", points*2, points] })\n MyTeam.updateMany({ 'team._id': id },\n // MyTeam.where('team._id').equals(id).$where('team.isCaptain').equals(false).updateMany(\n { \n $inc: {\n 'team.$.points':{ \"$multiply\" : [ points , { \"$cond\" : [ \"$isCaptain\" , 2 , 1 ] } ] },\n 'totalPoints': { \"$multiply\" : [ points , { \"$cond\" : [ \"$isCaptain\" , 2 , 1 ] } ] }\n }\n },(err, numAffected) => {\n if(err) throw err;\n res.send(numAffected);\n }\n )\n",
"text": "Yeah, so I used $set instead of $inc just because of testing. I didn’t wanted to manually reset the values back to original (which is “0”) everytime I make the request to the endpoint since that’s quite time consuming.\nSecondly, I was trying this (MyTeam.find({ ‘team._id’: id }).where(‘team.isCaptain’).equals(true)) code but I had another line of code before which is belowIt works the same.\nNow coming on to the code you shared, I tried it but I got the following error\n‘’’\nCastError: Cast to Number failed for value “{ ‘$multiply’: [ 10, { ‘$cond’: [Array] } ] }” (type Object) at path “totalPoints”\nmessageFormat: undefined,\nstringValue: \"{ '$multiply': [ 10, { '$cond': [Array] } ] }\",\nkind: ‘Number’,\nvalue: {\n‘$multiply’: [ 10, { ‘$cond’: [ ‘$isCaptain’, 2, 1 ] } ]\n},\npath: ‘totalPoints’,\nreason: AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:{\ngeneratedMessage: true,\ncode: ‘ERR_ASSERTION’,\nactual: false,\nexpected: true,\noperator: ‘==’\n},\nvalueType: ‘Object’\n}\n‘’’\nI modified it in the following way but it is giving me a similar sort of errorSo, how can I fix this error? I just want to have the following implementation.\nIf isCaptain is true, I want ‘team.$.points’ and ‘totalPoints’ to be assigned or increment as points*2. And if isCaptain is false, I want ‘team.$.points’ and ‘totalPoints’ to be assigned or increment as simply points.My overall code snippet looks like the following:",
"username": "Najam_Ul_Sehar"
},
{
"code": "",
"text": "Smart move, just warn us before the next time.Yeah, so I used $set instead of $inc just because of testing. I didn’t wanted to manually reset the values back to original (which is “0”) everytime I make the request to the endpoint since that’s quite time consuming.The error seems to come from mongoose or an other framework rather than coming from mongod. However, I do what I propose directly in the shell, I do not get the error you get but I still don’t get the appropriate result.You might need to use the new $set with aggregation with a $map to update the array. I still don’t know for sure.",
"username": "steevej"
}
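For the original question in this thread, a hedged pipeline-update sketch of the $map approach, copying each matching element's own qty into reservedInventory; skuList is a placeholder for the SKU array and the snippet is untested against the poster's data.

```js
db.finishedproducts.updateMany(
  { sku: { $in: skuList }, "productIdealConsumption.productId": 2895 },
  [
    {
      $set: {
        productIdealConsumption: {
          $map: {
            input: "$productIdealConsumption",
            as: "c",
            in: {
              $cond: [
                { $eq: ["$$c.productId", 2895] },
                // copy this element's own qty into reservedInventory
                { $mergeObjects: ["$$c", { reservedInventory: "$$c.qty" }] },
                "$$c"
              ]
            }
          }
        }
      }
    }
  ]
)
```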
] |
Update nested sub Document with a specific condition
|
2021-12-10T06:42:40.781Z
|
Update nested sub Document with a specific condition
| 18,047 |
null |
[
"node-js",
"mongoose-odm"
] |
[
{
"code": "",
"text": "Hello everyone.My name is Fran Godoy and I am front-end developer. I am learning back-end development. For learning, I am developing a open source API for manage organic producers (Atlas, Node, Express, Mongoose).I am 38 years old and have been in the web development world since I was 24. I’m autodidactic. Everything I have learned is thanks to people who share their knowledge through the Internet.I have taken several Mongodb University courses. They are amazing, they have helped me a lot.I hope to find help in this forum. And also, in a future where I have more experience, to be able to help.See you on the forums.",
"username": "fcojgodoy"
},
{
"code": "",
"text": " Welcome to the MongoDB Community @fcojgodoy !I have also learned so much from others who share their knowledge (and try to do the same)!Thank you for the feedback on MongoDB University courses – they are indeed a great (and free!) resource.I hope to find help in this forum. And also, in a future where I have more experience, to be able to help.We’re here to help! I look forward to seeing your questions & contributions Regards,\nStennie",
"username": "Stennie_X"
}
] |
Greetings from Andalusia, I am Fran
|
2022-05-25T07:08:54.916Z
|
Greetings from Andalusia, I am Fran
| 2,233 |
null |
[
"java",
"android",
"kotlin"
] |
[
{
"code": "",
"text": "Hi,I bumped on a new issue today, while trying to write data to a Collection.\nI’m working on an Android Project with Kotlin and Realm Sync since a few months, and the testing has been quite extensive on all the projects features.\nI’m pretty sure this wasn’t happening a few months ago, wondering if it’s a new thing.So the scenario is:var productData: Asset? = Asset(),var installedFirmware: InstalledFirmware? = InstalledFirmware()When I try to write a new Asset on a WorkOrder, it throws an exception:java.lang.IllegalStateException: Wrong kind of table\nException backtrace:\nbacktrace not supported on this platformI found out that if I change:var installedFirmware: InstalledFirmware? = InstalledFirmware()tovar installedFirmware: InstalledFirmware? = nulleverything works just fine.\nI tested the same in other objects as well.What’s the logic behind this?\nI can easily change to have null in those cases, but I’d like to understand why that’s needed.Thanks,\nAlessandro",
"username": "Alessandro_Benvenuti1"
},
{
"code": "",
"text": "Hello @Alessandro_Benvenuti1,Welcome to the Community WorkOrder collection: items contain an optional reference to Asset collectionjava.lang.IllegalStateException: Wrong kind of table\nException backtrace:\nbacktrace not supported on this platformCould you please share the full model definition that was before and after? Did you recently update the SDK version or made any changes that led to this error you getting?What’s the logic behind this?\nI can easily change to have null in those cases, but I’d like to understand why that’s needed.This is dependent on the use-case of your application. Generally, both null and object references from an constructor should work.I look forward to your response.Cheers, ",
"username": "henna.s"
},
{
"code": "",
"text": "Hi Henna Thanks for your reply.\nHere my models:open class WorkOrder(\n@PrimaryKey var _id: ObjectId = ObjectId(),\n@Required var _partition: String = “”,\nvar productData: Asset? = Asset()\n) : RealmObject() {@RealmClass(embedded = true)\nopen class Asset(\n@Required var serialNumber: String = “”,\nvar installedFirmware: InstalledFirmware? = InstalledFirmware()\n) : RealmObject()@RealmClass(embedded = true)\nopen class InstalledFirmware (\n@Required var supplier: String = “”,\n@Required var model: String = “”,\n@Required var version: String = “”\n) : RealmObject()(I’ve removed a bunch of properties from Asset and WorkOrder, to make it easier)\nWith this setup I still get that error on Client (android kotlin) side:E/REALM_JNI: jni: ThrowingException 9, Wrong kind of table\nException backtrace:\n<backtrace not supported on this platform>, .\nE/REALM_JNI: Exception has been thrown: Wrong kind of table\nException backtrace:\n<backtrace not supported on this platform>On server side, on Logs, I can see the write attempt but no changes are actually written:Logs:\n[\n“Upload message contained 1 changeset(s)”,\n“Integrating upload required conflict resolution to be performed on 0 of the changesets”,\n“Latest server version is now 300”,\n“Number of upload attempts was 1”\n]\nPartition:\nmyPartition\nWrite Summary:\n{\n“WorkOrder”: {\n“updated”: [\n“longGeneratedId”\n]\n}\n}\nRemote IP Address:\n..*.\nSDK:\nandroid v10.10.1\nPlatform Version:\n11If I change this line:var installedFirmware: InstalledFirmware? = InstalledFirmware()tovar installedFirmware: InstalledFirmware? = null,everything works smoothly.I don’t recall any change on SDK side nor on server side.\nI can replicate the issue at any time apparently Thanks!\nAlessandro",
"username": "Alessandro_Benvenuti1"
},
{
"code": "InstalledFirmware",
"text": "G’Day, @Alessandro_Benvenuti1,Thank you for sharing the requested details.It is possible to initialize an object reference with a directly constructed object if it has a no-arg constructor.\nbut Realm requires an empty constructor for its internal processing so, in the absence of a real no-arg constructor, it is causing the reflective lookup to fail and hence the error.Could try to rework the InstalledFirmware to only have the default no-arg constructor and assign the defaults inside the class instead?I look forward to your response.Cheers, ",
"username": "henna.s"
},
{
"code": "@RealmClass(embedded = true)\nopen class InstalledFirmware () : RealmObject() {\n @Required var supplier: String = \"\"\n @Required var model: String = \"\"\n @Required var version: String = \"\"\n}\nvar productData: Asset? = Asset(),\nvar installedFirmware: InstalledFirmware? = InstalledFirmware()\n@RealmClass(embedded = true)",
"text": "Hi Henna,thank’s for the help and sorry for my (really) late reply!It looks like that doesn’t fix the problem.\nI have changed my code such as:And same for the Asset class.\nI get the same error when I try to write an Asset object to the db.Furthermore I’ve just realised that I can have this in my WorkOrder class:but I can’t have this in my Asset class:Both Asset and InstalledFirmware are flagged as @RealmClass(embedded = true), WorkOrder is not.Cheers,\nAlessandro",
"username": "Alessandro_Benvenuti1"
},
{
"code": "",
"text": "G’Day, @Alessandro_Benvenuti1 ,Apologies for the delay in getting back to you. Were you able to go past this error?I have raised this concern internally and will get back once I have feedback. It’s possible this may have to do with multiple levels of embedded classes but will get a confirmation on this soon.Meanwhile, please feel free to share any more observations or feedback.I genuinely appreciate your patience with us on this.Cheers, ",
"username": "henna.s"
},
{
"code": "create()copyToRealm",
"text": "Hello @Alessandro_Benvenuti1,I have information from the engineering team, from the error it appears the embedded object is getting created as a normal object rather than being part of the parent object. There is a test that covers the case of constructing the object with an embedded object directly in the initializer in EmbeddedObjects.ktCould you please share code snippets on how you are inserting the code into the realm and also the full stack trace? Are you using create() in your code for object creation? or alternatively you can try using copyToRealm as in the linked github code?I look forward to your response.Cheers, ",
"username": "henna.s"
}
] |
Android Java SDK: Wrong kind of table
|
2022-03-01T10:12:44.339Z
|
Android Java SDK: Wrong kind of table
| 4,311 |
null |
[
"queries",
"crud"
] |
[
{
"code": " UpdateResult updateResult = db.getCollection(collection).updateMany(filter, \n Updates.set(\"xyzfield\", \"specific_value\"));\n if(updateResult.wasAcknowledged()){\n logger.info(String.format(\"Collection :: %s, matched count :: %d, updated count :: %d\",\n collection,\n updateResult.getMatchedCount(),\n updateResult.getModifiedCount()));\n }\n count = StreamSupport.stream(db.getCollection(collection).find(filter).spliterator(), false).count();\n",
"text": "Hi,i am running updateMany with filter where a field doesnt exist and set that field to a value. And the number of records are around 33M. Once the updateMany is done, I expect new runs will not have any matching documents.\n(During this update, incoming traffic is disabled)But on multiple runs, we find new matching documents. What could be the issue with this code?==================\nBson filter = Filters.exists(“xyzfield”, false);",
"username": "Nithin_Kumar"
},
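A mongosh sketch (the collection name is a placeholder) for reproducing the same filter and update outside the Java code, to check whether leftovers really remain after a run:

```js
const filter = { xyzfield: { $exists: false } }
const res = db.getCollection("mycollection").updateMany(filter, { $set: { xyzfield: "specific_value" } })
printjson({ matched: res.matchedCount, modified: res.modifiedCount })
// If nothing else is writing, this should now be 0:
print(db.getCollection("mycollection").countDocuments(filter))
```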
{
"code": "count = StreamSupport.stream(db.getCollection(collection).find(filter).spliterator(), false).count();",
"text": "Can you share the output of running your code twice in row?Can you confirm it is Java code?Rather thancount = StreamSupport.stream(db.getCollection(collection).find(filter).spliterator(), false).count();Can you simply output the first document using db.getCollection(collection).find(filter).first()?",
"username": "steevej"
},
{
"code": "",
"text": "This is java code:\nmatched count :: 149081, updated count :: 149081\nAttempt :: 0, left over count :: 3631\nmatched count :: 7324, updated count :: 7324\nAttempt :: 1, left over count :: 3727",
"username": "Nithin_Kumar"
},
{
"code": "",
"text": "I want to see the first document.However, it is clear from the output you supplied that you have something that creates new documents while you are testing.After the first run you getAttempt :: 0, left over count :: 3631but when you run your second attempt you havematched count :: 7324It means that 7324 - 3631 new documents have been created between the 2 runs and that 3631 new documents have been created while the first attempt was running. What ever is running has created 3727 new documents during the 2nd attempt.So I do not believe:incoming traffic is disabled",
"username": "steevej"
}
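One way to confirm steevej's diagnosis from the shell, assuming _id is a default ObjectId: the newest matching document's _id embeds a creation timestamp, which should be later than the update run if something is still inserting. The collection name is a placeholder.

```js
const doc = db.getCollection("mycollection")
  .find({ xyzfield: { $exists: false } })
  .sort({ _id: -1 })   // newest ObjectId first
  .limit(1)
  .next()
print(doc._id.getTimestamp())
```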
] |
updateMany with filter
|
2022-05-24T07:36:44.792Z
|
updateMany with filter
| 2,050 |
null |
[
"replication",
"performance",
"storage"
] |
[
{
"code": "2021-11-19T09:38:24.226+0100 I STORAGE [initandlisten] WiredTiger message [1637311104:225994][5108:140713466155616], txn-recover: Recovering log 6 through 273\n2021-11-19T09:38:24.228+0100 E STORAGE [initandlisten] WiredTiger error (2) [1637311104:228003][5108:140713466155616], txn-recover: __win_open_file, 539: C:\\ProgramData\\MongoDB\\db\\journal\\WiredTigerLog.0000000006: handle-open: CreateFileW: The system cannot find the file specified.\n: No such file or directory Raw: [1637311104:228003][5108:140713466155616], txn-recover: __win_open_file, 539: C:\\ProgramData\\MongoDB\\db\\journal\\WiredTigerLog.0000000006: handle-open: CreateFileW: The system cannot find the file specified.\n: No such file or directory\n2021-11-19T09:47:33.226+0100 E STORAGE [initandlisten] WiredTiger error (28) [1637311653:226007][5484:140713466155616], connection: __win_file_set_end, 394: C:\\ProgramData\\MongoDB\\db\\journal\\WiredTigerTmplog.0000000001: handle-set-end: SetEndOfFile: There is not enough space on the disk.\n: No space left on device Raw: [1637311653:226007][5484:140713466155616], connection: __win_file_set_end, 394: C:\\ProgramData\\MongoDB\\db\\journal\\WiredTigerTmplog.0000000001: handle-set-end: SetEndOfFile: There is not enough space on the disk.\n: No space left on device\n2021-11-09T09:59:12.657+0100 I STORAGE [serviceShutdown] WiredTiger message [1636448352:656659][9476:140720848654944], txn-recover: Recovering log 271 through 272\n2021-11-09T09:59:12.809+0100 I STORAGE [serviceShutdown] WiredTiger message [1636448352:809657][9476:140720848654944], txn-recover: Recovering log 272 through 272\n",
"text": "Hi,We are using MongoDB 4.2.0 running in Windows Server 2019 environment. We have a replica set with 2 data nodes and an arbiter. After restarting the service on one of the data nodes, it tried to recover, but complained about not finding a specific file (WiredTigerLog.0000000006):From now on, the number of these journal files started to grow rapidly, filling the disk completely in less than 10 minutes.What is strange, the last recovery log lines were encountered 10 days earlier:No problem was detected in the meantime, unit 2021-11-19.Maybe it is just by coincidence, but it looks strange that it tries to search for file with index 6 up to 273, which is one more as 272 10 days before.Did someone have a similar problem or experience?Thank you in advance\nBest regards\nLubo",
"username": "lubo17"
},
{
"code": "WiredTigerLog.0000000006",
"text": "Hi @lubo17 welcome to the community!Sorry to hear you’re having issues with MongoDB.So if I understand correctly, the timeline of the events are like this:Is this correct?If yes, is this the only node that experienced this sequence of events? What about the other data bearing node in the replica set?Also could you confirm that:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi, thank you Basically, regarding your 1. point, highly probably there were all nodes restarted at about the same time, as our system was newly deployed. Unfortunately, we are missing the logs from the other two nodes from the restart time.\nFor 2. and 3. yes, it happened like that.This means that I cannot fully answer your question if this was the only affected node.Our MongoDB version used is actually 4.2.5 Windows version. For installation we use the ZIP file and package it into our MSI. Before that, MS Visual VC++ Redistributable package 2015-2019 is installed on nodes.\nDuring the affected restart, the MongoDB was not re-installed. It was running before without any issues.\nDue to time pressure we’ve removed the entire MongoDB installation afterwards on all nodes and started fresh.Unfortunately, the ticket reached me with the delay of some months and therefore I do not have the first-hand info.Best regards\nLubo",
"username": "lubo17"
},
{
"code": "",
"text": "Hi @lubo17Due to time pressure we’ve removed the entire MongoDB installation afterwards on all nodes and started fresh.Can I assume that the old deployment is totally gone since you wiped the whole thing and started fresh? That’s unfortunate. I’d very much like to know what’s going on since what you described is quite a unique situation.Having said that, best of luck with the current deployment Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi I have just signed up to say that this happens to me as well. Luckily it has happened in my local MongoDB running over Docker. It has happened twice. I think MongoDB at some point becomes crazy and starts creating journal entries. I would like to send some logs but as I was in 0 bytes free space I had to delete journals manually from the file explorer, making my MongoDB impossible to start now. When you are in 0 bytes free space pcs start to misbehave that is why I had to do that quick deletion. This has happened twice in less than 3 months. Next time I will try to grab some logs so someone can take a look at it. Version used is 4.4",
"username": "Javier_Guzman"
},
{
"code": "",
"text": "I forgot; Is there a way to tell MongoDB to keep the journal size smaller than X size? Thank you in advance and regards",
"username": "Javier_Guzman"
},
{
"code": "",
"text": "Hi @Javier_GuzmanIs there a way to tell MongoDB to keep the journal size smaller than X size?The journal files should be no more than 100MB in size, individually. If you mean the number of journal files created, then no I don’t believe you can restrict it. The journals are WT’s write-ahead log, so anything that’s not in the data files yet would be in the journal. If you artifically restrict the journals, then you’ll either lose data or have to force WT to stop receiving writes. Since WT performs a checkpoint every 60s and the journals are only there to recover to the latest data state (checkpoint + journal), it’s curious if you see journal files accumulate in large numbers.Do you mind using the latest MongoDB version, either 5.0.8 or 4.4.14 and see if the situation repeat itself? If yes, please provide details on how you can induce this behaviour.Best regards\nKevin",
"username": "kevinadi"
},
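While the journal files are accumulating, the WiredTiger log and checkpoint counters can be watched from the shell. A sketch; the section names below are as reported by serverStatus and the exact fields vary by server version:

```js
// Journal (write-ahead log) statistics:
printjson(db.serverStatus().wiredTiger.log)
// Checkpoint activity (a checkpoint normally runs every 60s):
printjson(db.serverStatus().wiredTiger.transaction)
```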
{
"code": "",
"text": "Hello @kevinadi,Thank you for your response. I have checked the Mongo version in my Docker image and it is\n4.4.13, I will see if I can upgrade without too much hassle.Just for future; is there a particular command that would help for diagnosis when this happens again (if it is happen)? Or only with the logs would be enough?Kind regards,\nJavier",
"username": "Javier_Guzman"
},
{
"code": "ls -lRamongod",
"text": "Hi @Javier_GuzmanAny information you can provide will be helpful. Logs, the content (ls -lRa) of the dbpath, any config file or mongod parameters in use, and the Dockerfile of the image should be good as a starting point. It’s great if you can also provide a reproduction steps as well.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "All right @kevinadi , I will come back with more information if it happens again. Thank you and kind regards, Javier",
"username": "Javier_Guzman"
}
] |
MongoDB fills disks with WiredTigerLog.XXXXXXXXXX
|
2022-03-16T14:49:44.778Z
|
MongoDB fills disks with WiredTigerLog.XXXXXXXXXX
| 7,388 |
null |
[
"aggregation",
"queries"
] |
[
{
"code": " \"pipeline\": [\n {\n \"$match\": {\n \"uid\": \"cdc67cf2-0c23-4d32-b103-f78503824b18\"\n }\n },\n {\n \"$sort\": {\n \"score\": -1,\n \"_id\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 100\n }.........>\n, \"collation\": {\n \"locale\": \"en\"\n },\n{uid:1, score: -1, _id:1}",
"text": "with the below pipelineand index - {uid:1, score: -1, _id:1}My execution plan shows up -\n“planSummary”: “COLLSCAN”,\n“keysExamined”: 0,\n“docsExamined”: 72719,\n“hasSortStage”: true,\n“cursorExhausted”: true,\n“numYields”: 74,\n“nreturned”: 100,WHY is the above query not using the index? What is wrong with the index?",
"username": "pkp"
},
{
"code": "\"pipeline\": [",
"text": "\"pipeline\": [The above looks like a pipeline inside a $lookup stage.Please share the whole aggregation, including the code that calls the aggregation.Please share the output of getIndexes() from both the starting collection and the looked up collection.Please share the whole explain plan.",
"username": "steevej"
},
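For reference, the requested output can be produced in mongosh like this (the collection name is a placeholder, and the collation option must match the one used by the application):

```js
db.mycoll.getIndexes()
db.mycoll.explain("executionStats").aggregate(pipeline, { collation: { locale: "en" } })
```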
{
"code": "",
"text": "Hi @steevej the problem my query not taking indexes was that the index I created was not with collation option. But my query has collation usage in the aggregation. So creating new index with the collation strategy worked for me.\nThanks for taking time to reply.",
"username": "pkp"
},
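A sketch of the fix described above: a query that specifies a collation generally needs an index built with the same collation, so the index is recreated to match (the collection name is a placeholder):

```js
db.mycoll.createIndex(
  { uid: 1, score: -1, _id: 1 },
  { collation: { locale: "en" } }
)
```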
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Why is mongo query not using indexes?
|
2022-05-23T04:44:55.887Z
|
Why is mongo query not using indexes?
| 3,259 |
null |
[
"crud"
] |
[
{
"code": "",
"text": "Hi all,Quick question. I’m running the standalone (5.0.7) version of MongoDB.\nIs there a way of creating an automatic self-populating field within a collection. IE. Similar to the _id field, but user definable.Thanks.",
"username": "Andy_Bryan"
},
{
"code": "",
"text": "Hi @Andy_Bryan,In this blog post, I’m using MongoDB Realm Triggers to populate an entire object automatically from an external API. You could use change streams to do the same thing and populate your custom field(s) based on an event (insert, update, replace, delete).https://www.mongodb.com/developer/how-to/data-enrichment-stitch/Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi there @MaBeuLux88 ,Thanks for the response. Is this only possible on MongoDB Atlas, or is it also possible on the stand alone version?Thanks.Andy.",
"username": "Andy_Bryan"
},
{
"code": "",
"text": "A stand-alone node would have to be transformed into a single node Replica Set (RS) because Change Streams (like a few other MongoDB features) rely on the oplog that is only available if you are running a Replica Set.Atlas Triggers are “just” a way to run serverless functions each time an event happens. These events are generated from Change Streams that rely on the underlying oplog collection.So you can totally work with Change Stream in a Community single node RS, but you’ll have to write your own program to listen to the change stream and start some code each time an event happens. Eventually multi-thread that program if you don’t want to queue the events. Also you will probably need a restart mechanism (change stream have that built-in) so you don’t restart from scratch if the program stops.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
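A minimal Node.js sketch of the approach Maxime describes, assuming a single-node replica set; the connection string, database, collection and field names are placeholders:

```js
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017/?replicaSet=rs0");
  const coll = client.db("mydb").collection("mycoll");

  // Listen only to inserts and stamp a custom field on each new document.
  const changeStream = coll.watch([{ $match: { operationType: "insert" } }]);
  for await (const event of changeStream) {
    await coll.updateOne(
      { _id: event.documentKey._id },
      { $set: { customField: "CF-" + event.documentKey._id.toString() } }
    );
  }
}

main().catch(console.error);
```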
{
"code": "",
"text": "Brilliant. Thanks for your help.",
"username": "Andy_Bryan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Automatic self-populating field
|
2022-05-23T13:10:51.574Z
|
Automatic self-populating field
| 2,345 |
null |
[
"java",
"kafka-connector"
] |
[
{
"code": "",
"text": "I like to make the MongoSourceConnector start the change stream at an operation time. I like to add a new property. However, I am quite new in gradle and JAVA. Could someone let me know how to compile, debug, and test this project?\nFor IDE, I am using IntelliJ.",
"username": "Pakorn_K"
},
{
"code": "",
"text": "Check out the GitHubMongoDB Kafka Connector. Contribute to mongodb/mongo-kafka development by creating an account on GitHub.There is a build section that tells you how to build the connector. Feel free to submit PR for your added properties!",
"username": "Robert_Walters"
}
] |
How to modify the open source Mongo Source Connector
|
2022-05-25T04:01:14.126Z
|
How to modify the open source Mongo Source Connector
| 2,017 |
[
"aggregation"
] |
[
{
"code": "",
"text": "Hello guys, i’ve been trying to query my data using $lookup, i want to access the fields in the emballage details that i’m getting as an array with an object, how can i access the thos fields using $project\nThanks in advance\n",
"username": "BOULLOUS_Laila1"
},
{
"code": "",
"text": "See the examples at $arrayElemAt (aggregation) — MongoDB Manual.",
"username": "steevej"
},
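A hedged sketch of the $arrayElemAt approach; the collection and field names are guesses based on the description above, since the actual pipeline was only shared as a screenshot:

```js
db.products.aggregate([
  { $lookup: {
      from: "emballages",
      localField: "emballageId",
      foreignField: "_id",
      as: "emballageDetails"
  } },
  // Unwrap the single-element array so its fields can be addressed directly.
  { $project: {
      name: 1,
      emballage: { $arrayElemAt: ["$emballageDetails", 0] }
  } },
  { $project: { name: 1, emballageType: "$emballage.type" } }
])
```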
{
"code": "",
"text": "Thank you so much it worked!!",
"username": "BOULLOUS_Laila1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
$lookup output as an array of object
|
2022-05-24T22:03:55.308Z
|
$lookup output as an array of object
| 1,841 |
|
null |
[
"atlas"
] |
[
{
"code": "",
"text": "I’m just getting started with MongoDB Atlas, and I have a couple of questions that I haven’t found an answer for browsing through UI/Docs.First, I keep reading that I can scale the cluster, modify replicationfactor, but I can’t find any settings like that when setting up an instance, but I see it can be specified at cluster creation time, using the API. Nor can I edit the cluster to change it. Is it specific to a certain tier? I’m focused on dedicated tiers, currently, but have started with an M10 to start, though I expect that to grow as we continue on the project.Second, regarding permissions. It looks like there are two layers of user access within a project. One is at the Atlas project level, the other is at the Database Access level. The latter, I assume, only allowing access to the DB and not Atlas. Database Access configuration allows me to specify which cluster/DB the user can access (same for Data API).But, when I assign permissions to an org level user within a project, it doesn’t appear that I can specify which database they can write to. So, it seems the only way to get that separation would be to create a separate project.My usecase here is to have a project dedicated to a team, and houses prod and non-prod clusters. I want to grant my users access to the Atlas project, so they can have visibility into various aspects of the project (beyond the DB), but only certain people should be able to write/admin certain instances that way.Thanks for any suggestions or clarification. Hopefully, these questions have enough context.",
"username": "Kyle_H"
},
{
"code": "Organization rolesProject rolesOrganization rolesProject rolesProject Read OnlyProject Data Access Read Only",
"text": "Hi @Kyle_H and welcome to the community forum!!modify replicationfactor,Could you please elaborate or clarify “replication factor” and what your use case is for changing this is? In saying so, If you’re wanting to expand your cluster with additional electable nodes , I would refer to the Add Electable Nodes documentation for further information and instructions on how to do so. Please note that configuration of high availability and workload isolation nodes is only possible on M10+ tier clusters.Second, regarding permissions.For this part of your question, the Atlas UI Authorisation documentation talks about permissions and use cases of what creating more projects under single organisation provides. Also, Atlas User Roles specifically describes Organization roles and Project roles which can be assigned to particular Atlas usersMy usecase here is to have a project dedicated to a team, and houses prod and non-prod clusters. I want to grant my users access to the Atlas project, so they can have visibility into various aspects of the project (beyond the DB), but only certain people should be able to write/admin certain instances that way.The following Atlas User Roles documentation which describes details both the Organization roles and Project roles which can be assigned to particular Atlas users.\nFor your use case, you can assign / grant particular Atlas users with the Project Read Only or Project Data Access Read Only roles so that they can have visibility to the Atlas UI in certain parts of a specific project.However, I am not aware of all the requirements so please go through the details of each assignable role and update the user’s accordingly based off your access policies / requirementsYou can then assign suitable write/admin Project roles to additional Atlas users based on your needs. However, please study the descriptions for each of the jobs on this page to ensure you assign the relevant roles to the appropriate individuals.I hope the above documentation is helpful for you. If you have any further queries, please provide details on the requirements like:Thanks\nAasawari",
"username": "Aasawari"
}
] |
2 questions regarding Atlas: Can I not specify which database/cluster a project level user can access? Also, is it possible to specify replicationfactor within the Atlas UI?
|
2022-05-05T22:09:30.965Z
|
2 questions regarding Atlas: Can I not specify which database/cluster a project level user can access? Also, is it possible to specify replicationfactor within the Atlas UI?
| 2,369 |
null |
[] |
[
{
"code": "db.expiringdownloadlinks{ \"_id\" : ..., \"expires\" : ISODate(\"2021-12-26T23:32:42.190Z\"), \"linkId\" : \"1z0hkcj/tutorial.zip\"}\nexpiresdb.expiringdownloadlinks.getIndexes(){\n \"v\" : 2,\n \"key\" : {\n \"expires\" : 1\n },\n \"name\" : \"expires_1\",\n \"ns\" : \"js_ru.expiringdownloadlinks\",\n \"expireAfterSeconds\" : 0,\n \"background\" : true\n}\nexpires",
"text": "Is it possible that TTL deleting thread is not running at all for some reason?I have a collection db.expiringdownloadlinks with such records:I created an index on expires, here’s how db.expiringdownloadlinks.getIndexes() shows it:Is it correct that a record with expires field like above (2021 year) must be deleted?For some reason, nothing happens to it.P.S. I tried dropping and re-creating the index, and also tried waiting for 1 hour.",
"username": "Ilya_Kantor"
},
{
"code": "",
"text": "You scenario works fine for me.You must have a typo somewhere.Check database name? collection name? field name?",
"username": "steevej"
},
{
"code": "",
"text": "Thanks.Solved. TTL wasn’t running due to a replication issue.",
"username": "Ilya_Kantor"
},
{
"code": "ttl.passesdb.serverStatus()test> db.serverStatus().metrics.ttl\n{ deletedDocuments: Long(\"0\"), passes: Long(\"6\") }\n\ntest> db.serverStatus().metrics.ttl\n{ deletedDocuments: Long(\"0\"), passes: Long(\"7\") }\nttl.passesdeletedDocumentsdb.expiringdownloadlinks.insertMany([\n {\n \"expires\" : ISODate(\"2021-12-26T23:32:42.190Z\"), \n \"note\": \"Expires immediately -- valid BSON Date in the past\" \n },\n {\n \"expires\" : ISODate(\"2025-12-26T23:32:42.190Z\"), \n \"note\": \"Expires in the future -- valid BSON Date\" \n },\n {\n \"expires\" : \"2021-12-26T23:32:42.190Z\", \n \"note\": \"Won't expire -- not a valid BSON Date\" \n },\n {\n \"note\": \"Won't expire -- missing the `expires` field\" \n }\n])\n// Documents with past expiry date\ndb.expiringdownloadlinks.find({\n \"expires\" : {\n $lte: new Date()\n }\n})\n\n// Documents with future expiry date\ndb.expiringdownloadlinks.find({\n \"expires\" : {\n $gt: new Date()\n }\n})\n\n// Documents that will never expire\ndb.expiringdownloadlinks.find({\n \"expires\" : {\n $not: {\n $type: \"date\",\n }\n }\n})\nexpires// TTL expiry per-document\n// https://www.mongodb.com/docs/manual/tutorial/expire-data/\ndb.expiringdownloadlinks.createIndex( \n { \"expires\": 1 }, \n { expireAfterSeconds: 0 }\n)\ndb.serverStatus().metrics.ttl",
"text": "Welcome to the MongoDB Community @Ilya_Kantor !I see found a solution before I posted this draft, but it may still be a useful reference for troubleshooting.Is it possible that TTL deleting thread is not running at all for some reason?By default the TTL monitor thread runs every 60 seconds. Each invocation should increment the ttl.passes metric in db.serverStatus():If the ttl.passes count isn’t changing every minute or so, it is possible the TTL thread is not running or an admin has configured a longer interval.If the number of passes is increasing but there are no new deletedDocuments, I would try running an equivalent query to see if there are documents that should have expired.For example:Checking expected results before adding a TTL index:After adding a TTL index on expires any documents with past expiry date should be removed within 60 seconds:If the TTL expiry isn’t working for you as above, please provide some further details including:Regards,\nStennie",
"username": "Stennie_X"
},
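Two additional checks that fit the resolution above (the TTL monitor not running); a sketch using commands available on recent server versions:

```js
// The TTL monitor can be disabled via a server parameter; this should report true:
db.adminCommand({ getParameter: 1, ttlMonitorEnabled: 1 })
// TTL deletes are only performed on a healthy primary and then replicated,
// so replica set state matters too:
rs.status()
```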
{
"code": "",
"text": "Thank you @Stennie_X for the in-depth answer!I wish I knew this TTL metrics stuff before asking.\nSurely it’ll be useful for future searchers!",
"username": "Ilya_Kantor"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
TTL deleting thread isn't working?
|
2022-05-24T05:15:14.904Z
|
TTL deleting thread isn’t working?
| 4,371 |
null |
[
"python"
] |
[
{
"code": "client: MongoClient[Dict[str, Any]] = MongoClient(settings.mongo_host)\n\nclient.db.user.create_index([(\"user_id\", ASCENDING)], unique=True)\n",
"text": "Hi,\nI’m trying to use pymongo on a pyright/pylance using codebase (strict), but after a bunch of trying, I’ve decided to just add # type: ignore to a bunch of lines.gets:Type of “create_index” is \"(keys: str | Sequence[Tuple[str, int | str | Mapping[str, Any]]], session: ClientSession[Unknown] | None = None, comment: Any | None = None, **kwargs: Any) → str\"PylancereportUnknownMemberTypeAnd with bulk_write, even the let gets an unknown type on mapping.Am I missing something obvious? If not, is there a plan to fix this?",
"username": "Marcin_Platek"
},
{
"code": "",
"text": "Hi @Marcin_Platek,I tried something similar. For me it was specifing the db you want to use.\nSo for your example you can use:\ndb = client[“database”] and then try db.collection.create_index()Hope this helps!Greetings,\nNiklas",
"username": "NiklasB"
},
{
"code": "",
"text": "I am specifying the database.Anyway, I tried to rewrite the code to look more like yours and nothing changed. The ClientSession has still an Unknown in the type variable.",
"username": "Marcin_Platek"
},
{
"code": "ClientSessionunknown",
"text": "Hi @Marcin_Platek, I think what you’re seeing is another unfortunate limitation from https://github.com/python/mypy/issues/3737. The ClientSession object is generic, but we can’t provide a default value so it defaults to unknown.",
"username": "Steve_Silvester"
},
{
"code": "ClientSession",
"text": "After further discussion, we opened PYTHON-3283 to track changing ClientSession to not be generic in the next minor release.",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "Thanks, great to know!",
"username": "Marcin_Platek"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] |
Most operations have Session type unknown with pyright/pylance
|
2022-05-24T08:33:15.732Z
|
Most operations have Session type unknown with pyright/pylance
| 3,634 |