Columns: image_url (string, 113–131 chars) · tags (sequence) · discussion (list) · title (string, 8–254 chars) · created_at (string, 24 chars) · fancy_title (string, 8–396 chars) · views (int64, 73–422k)
null
[ "aggregation" ]
[ { "code": "\"ok\":0,\"errMsg\":\"Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected\",\"errName\":\"ClientDisconnect\",\n\n\"errCode\":279, ?\n\n\n{\"t\":{\"$date\":\"2023-09-20T11:48:21.945+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":518,\n\n\"ctx\":\"conn1990250\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"BPO.customer\",\n\"command\":{\"aggregate\":\"test\",\"pipeline\":[{\"$match\":{\"$and\":[{\"status\":\"Open\"},{\"appDateTime\":{\"$gte\":{\"$date\":\"2023-09-20T11:48:13.031Z\"}}},\n{\"appDateTime\":{\"$lt\":{\"$date\":\"2023-10-20T11:48:13.031Z\"}}}]}},{\"$group\":{\"_id\":\"$storeNumber\",\"totalSlotCount\":{\"$sum\":1},\"minSlotDate\":\n{\"$min\":\"$appDateTime\"}}}],\"cursor\":{},\"allowDiskUse\":false,\"$db\":\"RAVaccineSchedulerPRODDB\",\"lsid\":{\"id\"{\"$uuid\":\"807d5eb6-4067-4051-a89b-38f06aa1bd86\"}}},\n\"planSummary\":\"IXSCAN { status: 1, appTime: 1, reservedTime: 1 }\",\n\"numYields\":1725,\"queryHash\":\"E2C2E097\",\"planCacheKey\":\"6DD10207\",\n\"ok\":0,\"errMsg\":\"Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected\",\"errName\":\"ClientDisconnect\",\n\"errCode\":279,\"reslen\":311,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1782}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":1782}},\n\"Global\":{\"acquireCount\":{\"r\":1782}},\"Database\":{\"acquireCount\":{\"r\":1781}},\"Collection\":{\"acquireCount\":{\"r\":1781}},\"Mutex\":{\"acquireCount\":{\"r\":57}}},\n\"protocol\":\"op_msg\",\"durationMillis\":8050}}\n", "text": "Why it is happened in Atlas Cloudwhen we received alert CPU usage gone above 80% in Mongod.logs we are found the below error", "username": "hari_dba" }, { "code": "“ok”:0,“errMsg”:“Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected”,“errName”:“ClientDisconnect”,\n\n“errCode”:279, ?\n", "text": "Hey @hari_dba,Welcome to the MongoDB Community!I suspect this is happening due to the client getting disconnected, so if the issuing client disconnects before the operation completes, MongoDB marks the following operations for termination. To read more please refer to the documentation.Also, refer to the FAQ - What Happens to Running Operations If the Client Disconnects?.Could you please see the logs that show what happened on the client or application side? 
It will give you insights into the specific events around this issue.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "same error we are got againCan you fix the issue ?Why the lock was accquired time out what is reason ?how do we fing which query was lock happened ?\n,“msg”:“Failed to gather storage statistics for slow operation”,\n“attr”:{“opId”:529542,“error”:“lock acquire timeout”}}{“t”:{“$date”:“2023-09-27T16:41:43.626+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:5180,\n“ctx”:“conn25883”,“msg”:“client metadata”,“attr”:{“remote”:“193.168.249.52:52970”,\n“client”:“conn25883”,“doc”:{“driver”:{“name”:“mongo-java-driver|sync|spring-boot”,“version”:“4.6.1”},\n“os”:{“type”:“Linux”,“name”:“Linux”,“architecture”:“amd64”,“version”:“5.10.184-175.731.amzn2.x86_64”},\n“platform”:“Java/Oracle Corporation/1.8.0_342-b07”}}}{“t”:{“$date”:“2023-09-27T16:41:43.626+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:2024,\n“ctx”:“conn25883”,“msg”:“Authentication failed”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:“user”,\n“authenticationDatabase”:“admin”,“remote”:“193.168.249.52:52970”,“extraInfo”:{},“error”:“BadValue: SCRAM-SHA-256 authentication is disabled”}}{“t”:{“$date”:“2023-09-27T16:41:43.693+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:202,\n“ctx”:“conn25883”,“msg”:“Authentication succeeded”,“attr”:{“mechanism”:“SCRAM-SHA-1”,“speculative”:false,“principalName”:“user”,\n“authenticationDatabase”:“admin”,“remote”:“193.168.249.52:52970”,“extraInfo”:{}}}{“t”:{“$date”:“2023-09-27T16:41:49.800+00:00”},“s”:“I”, “c”:“-”,\n“id”:20883, “ctx”:“conn2588325”,“msg”:“Interrupted operation as its client disconnected”,“attr”:{“opId”:529542}}{“t”:{“$date”:“2023-09-27T16:41:49.808+00:00”},“s”:“W”,\n“c”:“COMMAND”, “id”:20525, “ctx”:“conn25883”,“msg”:“Failed to gather storage statistics for slow operation”,\n“attr”:{“opId”:529542,“error”:“lock acquire timeout”}}{“t”:{“$date”:“2023-09-27T16:41:49.808+00:00”},“s”:“I”, “c”:“COMMAND”, “id”:518,\n“ctx”:“conn25883”,“msg”:“Slow query”,“attr”:{“type”:“command”,“ns”:“test:sample”,\n“command”:{“aggregate”:“Appointment”,“pipeline”:[{“$match”:{“$and”:[{“status”:“Open”},{“DateTime”:{“$gte”:{“$date”:“2023-09-27T16:41:42.853Z”}}},\n{“DateTime”:{“$lt”:{“$date”:“2023-10-27T16:41:42.854Z”}}}]}},{“$group”:{“_id”:“$storeNumber”,“totalSlotCount”:{“$sum”:1},\n“minSlotDate”:{“$min”:“$appDateTime”}}}],“cursor”:{},\n“allowDiskUse”:false,“$db”:“test”,“lsid”:{“id”:\n{“$uuid”:“057da301-5b71-43a4-a102-db2a8a354050”}}},“planSummary”:“IXSCAN { status: 1, DateTime: 1, Time: 1 }”,\"\nnumYields\":1261,“queryHash”:“E2C2E097”,“planCacheKey”:“6DD10207”,\"\nok\":0,“errMsg”:“Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected”,\n“errName”:“ClientDisconnect”,“errCode”:279,“reslen”:311,“locks”:{“FeatureCompatibilityVersion”:{“acquireCount”:{“r”:1303}},\n“ReplicationStateTransition”:{“acquireCount”:{“w”:1303}},“Global”:{“acquireCount”:{“r”:1303}},“Database”:{“acquireCount”:{“r”:1303}},\n“Collection”:{“acquireCount”:{“r”:1303}},\n“Mutex”:{“acquireCount”:{“r”:42}}},“protocol”:“op_msg”,“durationMillis”:6035}}{“t”:{“$date”:“2023-09-27T16:41:49.809+00:00”},“s”:“I”,\n“c”:“NETWORK”, “id”:22944, “ctx”:“conn25883”,“msg”:“Connection ended”,\"\nattr\":{“remote”:“193.168.249.52:52970”,“connectionId”:25883,“connectionCount”:279}}", "username": "hari_dba" } ]
"errCode":279, errName":"ClientDisconnect",
2023-09-20T15:53:54.415Z
“errCode”:279, errName”:”ClientDisconnect”,
318
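A minimal sketch of the two settings usually involved when an aggregation is killed with ClientDisconnect (errCode 279), assuming the Node.js driver; the URI, dates and timeout values below are placeholders rather than anything taken from the cluster in the thread. A socket timeout shorter than the aggregation makes the client drop the connection, while maxTimeMS bounds the work on the server instead.

```javascript
const { MongoClient } = require("mongodb");

// uri, start and end are placeholders.
async function openSlots(uri, start, end) {
  // A short socketTimeoutMS makes the client drop the connection while mongod
  // is still working, which the server then logs as ClientDisconnect (279).
  const client = new MongoClient(uri, { socketTimeoutMS: 120000 });
  try {
    const pipeline = [
      { $match: { status: "Open", appDateTime: { $gte: start, $lt: end } } },
      { $group: { _id: "$storeNumber",
                  totalSlotCount: { $sum: 1 },
                  minSlotDate: { $min: "$appDateTime" } } },
    ];
    // maxTimeMS caps the operation on the server side instead of relying on
    // the client dropping the socket mid-operation.
    return await client.db("BPO").collection("customer")
      .aggregate(pipeline, { maxTimeMS: 60000 })
      .toArray();
  } finally {
    await client.close();
  }
}
```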
null
[ "installation" ]
[ { "code": "", "text": "I need to set mongodb. log rotation and to keep files for sometime in windows. I was adding logRotate:rename in conf file file. But it’s not working", "username": "JINSU_MANI" }, { "code": "systemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n logRotate: reopen\nprocessManagement:\n pidFilePath: /var/run/mongodb/mongod.pid\n/var/log/mongodb/mongod.log {\n daily\n size 100M\n rotate 10\n missingok\n compress\n delaycompress\n notifempty\n create 640 mongod mongod\n sharedscripts\n postrotate\n /bin/kill -SIGUSR1 `cat /var/run/mongodb/mongod.pid 2>/dev/null` >/dev/null 2>&1\n endscript\n}\n", "text": "Hello, welcome to the MongoDB community.You can do it as follows.mongod.confUse the Linux utility logrotation to create an automation\nCreate this file /etc/logrotate.d/mongod.confRead the options to adjust as needed.To test the functionality use\nlogrotate -f /etc/logrotate.d/mongodHere is a link that can serve as a basis:In this blog post, we will look at how to do MongoDB® log rotation in the right—and simplest—way.\nEst. reading time: 7 minutes\n", "username": "Samuel_84194" }, { "code": "", "text": "I need to set up in windows environment not in Linux/Unix. Is it possible to set it up like up this. While using logRotate, I was getting error. So could you please confirm", "username": "JINSU_MANI" }, { "code": "", "text": "Sorry, I now understand that it is a Windows environment. Can you tell me what error is occurring and how the mongod.conf file is configured?Furthermore, the error is to raise the service or perform the rotation on the mongodb.", "username": "Samuel_84194" }, { "code": "", "text": "Actually I want to know in windows how to make log rotation possible in mongodb without mongodb restart. Once I made logAppend:false in mongod.cfg and restartedmongodb from service, it will be creating new file. But I want to implement it automatically without mongodb restart.Could you please help me in this whether I can do it automatically by making updation in cfg file.", "username": "JINSU_MANI" }, { "code": "", "text": "You can try using LogRotateWin to automate the procedure.It works basically the same as Linux, read the document and see if it helps =Dhttps://sourceforge.net/p/logrotatewin/wiki/LogRotate/", "username": "Samuel_84194" }, { "code": "", "text": "Hi @JINSU_MANI,\nYou can create a custom role for the log rotation and schedule the run of the following command:db.adminCommand( { logRotate : “server” } )Regards", "username": "Fabio_Ramohitaj" } ]
Log Rotation in windows
2023-09-26T11:53:35.040Z
Log Rotation in windows
337
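A sketch of the approach Fabio describes, assuming mongosh is installed and a Windows Task Scheduler job runs it on whatever schedule is needed; the credentials are placeholders, and pruning old log files still needs a separate task. With systemLog.logRotate: rename (the default) in mongod.cfg, each run renames the current log with a timestamp suffix and opens a fresh one, without restarting mongod.

```javascript
// rotate-log.js — scheduled via Windows Task Scheduler, for example:
//   mongosh "mongodb://localhost:27017" -u <rotateUser> -p <password> --file rotate-log.js
// The user needs a role that grants the logRotate action.
db.adminCommand({ logRotate: "server" });
```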
null
[ "node-js" ]
[ { "code": "", "text": "Some times query taking exactly 10 seconds of time to execute even there is indexing also and moreover there are around 30k documents in database. What could be reason for that.", "username": "Durga_Prasad_Gandham" }, { "code": "", "text": "what have you done for troubleshooting?", "username": "Kobe_W" }, { "code": "executionStats", "text": "Hello @Durga_Prasad_Gandham ,Welcome to The MongoDB Community Forums! In addition to @Kobe_W’s response, can you also share additional details such as:Regards,\nTarun", "username": "Tarun_Gaur" } ]
Query taking long time to execute
2023-09-27T03:07:49.389Z
Query taking long time to execute
202
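To make Tarun's suggestion concrete, a sketch of pulling execution statistics for the slow query; the collection name and filter are placeholders, not taken from the thread.

```javascript
const plan = db.orders
  .find({ status: "shipped", createdAt: { $gte: ISODate("2023-09-01") } })
  .explain("executionStats");

// Useful fields in the output:
//   executionStats.executionTimeMillis     – where the ~10 s is actually spent
//   totalKeysExamined / totalDocsExamined  – much larger than nReturned usually
//                                            means the index is not selective
//   winningPlan                            – IXSCAN vs COLLSCAN, and which index won
printjson(plan);
```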
https://www.mongodb.com/…e_2_1024x576.png
[ "chicago-mug" ]
[ { "code": "Senior Threat Research Engineer @ ProofpointMongoDB User Group Leader and Sr Solutions Architect", "text": "\nMUG - Design Kit (17)1920×1080 371 KB\nThe Chicago MongoDB User Group (MUG) is excited to present a meetup that promises to be both enlightening and engaging. Join us for a dynamic gathering featuring captivating presentations complete with live demonstrations, trivia, and of course, great food and exciting swag! 06:00 PM - 06:30 PM: Registration and Welcome\nKickstart the day with registration, followed by a warm welcome and an overview of the exciting agenda.06:30 PM - 06:45 PM: Unveiling MongoDB’s Data Modelling and Schema Design\nWhat’s new in MongoDB? MongoDB 7.0 and more!06:45 PM - 07:15 PM: Atlas Stream Processing & Vector Search\nLearn how Atlas Stream Processing combines the document model, flexible schemas, and a rich aggregation language to provide a new level of power and convenience when building applications that require processing complex event data at scale.Witness live demonstrations showcasing the capabilities of semantic search and AI-powered applications.07:15 PM - 07:45 PM: Explore Detection-as-Code with MongoDB Change Streams Detection-as-Code is being steadily adopted by security teams for the ability to have version control, metrics, and testing on their detections. Explore how MongoDB change streams can be partnered with Detection-as-Code practices to run detections against system logs and event logs stored in MongoDB in near real-time with an event-driven architecture. The talk will highlight detection-as-code and MongoDB Change Streams and elaborate on how MongoDB and Detection-as-code practices can drive security detections in your environment with a proof-of-concept implementation.07:45 PM Onwards: Swag Giveaways and Networking\nWrap up the day with exciting swag giveaways and a chance to network further with your newfound MongoDB connections. We are looking for speakers to speak at the event Submit your proposal to speak at a MUG To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Event Type: In-Person\nLocation: **Haymarket - 737 W Randolph St.**Senior Threat Research Engineer @ Proofpoint–MongoDB User Group Leader and Sr Solutions ArchitectCassiano has been part of the MongoDB team for 2.5 years. He has over 20 years of experience in it and he is local to Chicago! He is passionate about all thing tech, and love gadgets, cars, and grilling!", "username": "bein" }, { "code": "", "text": "Great event last night ! Looking forward for the next one !\nIMG_7761 Large1280×960 281 KB\nIMG_7757 Large1280×960 312 KB", "username": "bein" }, { "code": "", "text": "Here is the link for the presentation Jacob gave us last nightGoogle Drive file.", "username": "bein" } ]
Chicago MUG: Vector Search, Stream Processing and Detection-as-Code with MongoDB Change Streams
2023-08-11T11:49:54.287Z
Chicago MUG: Vector Search, Stream Processing and Detection-as-Code with MongoDB Change Streams
1,957
https://www.mongodb.com/…0b20167c8ca9.png
[]
[ { "code": "Could not find user \"arn:aws:iam::23412346546:user/iam_user\" for db \"$external\"\nCould not find user \"arn:aws:iam::23412346546:user/iam_user\" for db \"$external\"\n", "text": "I’m trying to connect to the Atlas cluster from mongoDb compose using aws IAM mechanism however I keep getting the following error. What might be the reason for this? and what will help to resolve this problem?Error:Steps i followed:1). From compose>Advance connection options>Authentication>AWS IAM\n\nimage895×819 30.2 KB\n2). Click connect.\n3). this error i am gettingNote: I copied aws IAM Access Key ID, Secret access Key, and session token from single sign-on (SSO)I really appreciate any help you can provide.", "username": "Prashant_A" }, { "code": "", "text": "Does the arn user have privileges to connect to the cluster you are trying to connect\nCheck this thread May help", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for your answer i will check this and update you. if anyone has any other suggestion,s please let me know thanks", "username": "Prashant_A" }, { "code": "", "text": "if you haven’t figured it out yet, considering that you successfully configured your AWS integration already, please confirm that your user “arn:aws:iam::23412346546:user/iam_user” is added in the database access section inside the mongo atlas cloud like soimage1920×620 33.4 KBNote that one is a user and the other is a role, i noticed that even though I assign the role to my user, it still doesn’t work with just having the role in the database access tab, so you need to actuall have your user for which you’re using the aws keys to auth.", "username": "Liviu_Dobrea" } ]
Could not find user "arn:aws:iam::23412346546:user/iam_user" for db "$external"
2023-03-31T09:17:23.946Z
Could not find user “arn:aws:iam::23412346546:user/iam_user” for db “$external”
737
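For reference, a sketch of the MONGODB-AWS connection string that Compass builds from those fields (the host is a placeholder); the database user that has to exist under Database Access is the exact ARN those keys resolve to, and with SSO/temporary credentials the session token is required.

```javascript
// Access key and secret are supplied as the URI username/password (URL-encoded)
// or read from the standard AWS_* environment variables if omitted.
const uri =
  "mongodb+srv://cluster0.example.mongodb.net/" +
  "?authSource=%24external&authMechanism=MONGODB-AWS" +
  "&authMechanismProperties=AWS_SESSION_TOKEN:<session-token>";
```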
null
[ "aggregation", "java" ]
[ { "code": "{\n\"fname\": \"xyz\"\n}\n{\n\"FIRST_NAME\" : \"xyz\"\n}\n", "text": "How to fetch a mongo document with renamed fields without changing the actual data programmatically using Java?\nDoes Mongo supports this? If so, which one to you use for the data fetch Query or Aggregation?Ex.\nMongo Doc:Expected Format:", "username": "Vignesh_Paulraj" }, { "code": "{\n\"fname\": \"xyz\"\n}\nExpected Format:\n{\n\"FIRST_NAME\" : \"xyz\"\n}\nprivate static void reshapeDocs(MongoCollection<Document> coll) {\n Bson includeFname = include(\"fname\");\n // Compute a new 'FIRST_NAME' field by copying 'fname'\n Bson computedFirstName = computed(\"FIRST_NAME\", \"$fname\");\n\n Bson projection = project(computedFirstName);\n MongoCursor<Document> cursor = coll.aggregate(Arrays.asList(projection)).iterator(); \n Document reshapedDoc = cursor.next();\n}\n", "text": "Hey @Vignesh_Paulraj,Welcome to the MongoDB Community!I think you can use the $project stage to get the expected output on your application side. Here is an example code snippet for the same:Hope it answers your questions!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "{\n \"fname\": \"xyz\",\n \"lname\": \"qaz\",\n \"address\": {\n \"addressLine\": \"\",\n \"city\": \"\",\n \"postCode\": \"\",\n \"country\": \"\"\n }\n}\npublic static void reshapeDocs(MongoCollection<Document> mongoColl) {\n\t\tBson projectionFields = Aggregates.project(Projections.fields(Projections.excludeId(), Projections.include(\"fname\", \"lname\"),\n\t\t\t\tProjections.computed(\"FIRST_NAME\", \"$fname\"), Projections.computed(\"LAST_NAME\", \"$lname\")));\n\t\tmongoColl.aggregate(Arrays.asList(projectionFields,\n\t\t\t\tAggregates.match(Filters.eq(\"fname\", \"xyz\")))).forEach(doc -> {\n\t\t\tSystem.out.println(doc.toJson());\n\t\t});\n\t}\n{\n \"fname\": \"xyz\", \n \"lname\": \"qaz\", \n \"FIRST_NAME\": \"xyz\",\n \"LAST_NAME\": \"qaz\"\n}\n{\n \"FIRST_NAME\": \"xyz\",\n \"LAST_NAME\": \"qaz\"\n}\n", "text": "Great help, thanks @Kushagra_Kesav, it works perfect.One more quick clarification, the output has the actual fields as well.How can I get them ignored?Data:current-code:Current Result:Expected Result:", "username": "Vignesh_Paulraj" }, { "code": "public static void reshapeDocs(MongoCollection<Document> mongoColl) {\n\t\tBson projectionFields = Aggregates.project(Projections.fields(Projections.excludeId(), Projections.include(\"fname\", \"lname\"),\n\t\t\t\tProjections.computed(\"F_NAME\", \"$fname\"), Projections.computed(\"L_NAME\", \"$lname\")));\n\t\tmongoColl.aggregate(Arrays.asList(\n\t\t\t\tprojectionFields,\n\t\t\t\tAggregates.match(Filters.eq(\"fname\", \"xyz\")),\n\n// Added another projection here to ignore the actual fields. \n\t\t\t\tAggregates.project(Projections.fields(Projections.exclude(\"fname\", \"lname\")))\n\t\t\t\t)\n\t\t).forEach(doc -> {\n\t\t\tSystem.out.println(\"---------Output------------\");\n\t\t\tSystem.out.println(doc.toJson());\n\t\t\tSystem.out.println(\"---------------------\");\n\t\t});\n\t}\n{\n\"F_NAME\": \"xyz\", \n\"L_NAME\": \"qaz\"\n}\n", "text": "@Kushagra_Kesav Please ignore the last question. Thanks for the guidance on the code.Found the way to ignore the actual fields. Added another projection stage as below.Link: How to exclude $project computed field in mongoDB - Stack OverflowCurrent Code:Output:", "username": "Vignesh_Paulraj" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to fetch a mongo document with renamed fields without changing the actual data programmatically using Java?
2023-09-25T21:26:54.292Z
How to fetch a mongo document with renamed fields without changing the actual data programmatically using Java?
271
https://www.mongodb.com/…201ddfe481b9.png
[ "atlas-cluster", "atlas", "connector-for-bi" ]
[ { "code": "let\n Source = MongoDBAtlasODBC.Contents(\"mongodb://-ftpuq.a.query.mongodb.net/?ssl=true&authSource=admin\", \"db\", null, []),\n db = Source{[Name=\"collection\",Kind=\"Database\"]}[Data],\n properties_COPY_Table = scrapy_quibble_Database{[Name=\"properties_COPY\",Kind=\"Table\"]}[Data]\nin\n properties_COPY_Table\n", "text": "Hello MongoDB Community,I am a beginner working on a project in which I aim to connect Power BI to MongoDB Atlas using the MongoDB Atlas SQL interface. Within the Power BI Desktop environment, everything functions seamlessly; I am able to retrieve and transform data without any issues. However, the challenge arises when I publish my report to the Power BI Service. When attempting to set up a scheduled refresh, I’m met with the following error:\nimage699×66 2.91 KB\nFor clarity, here’s the M code I utilize to fetch my data:Being relatively new to this, I’m seeking guidance. Has anyone encountered a similar issue or have insights on potential solutions or workarounds? Your assistance would be invaluable and greatly appreciated.", "username": "Sidney_Guaro" }, { "code": "", "text": "Here’s my source\n\nimage691×193 5.78 KB\n", "username": "Sidney_Guaro" }, { "code": "", "text": "Hi There - thanks for posting your question. You must install the on-premise data gateway to use the Power BI refresh Service. Also, on the computer/server where the MS Data Gateway is installed and configured, you must also have the MongoDB Connector and ODBC Driver Installed. You can download both of these from our download center.\nHere are some instructions that should help you through that:\n\nScreenshot 2023-09-14 at 4.39.03 PM1448×814 158 KB\n\n\nScreenshot 2023-09-14 at 4.39.11 PM1454×804 133 KB\nOnce the Gateway is installed and configured, you then go into the Power BI Web settings for the data set and attach the Gateway. Then you can refresh on demand or on a schedule.", "username": "Alexi_Antonino" }, { "code": "", "text": "Thank you @Alexi_Antonino. Do you also know why some fields are missing?", "username": "Sidney_Guaro" }, { "code": "", "text": "Yes I do know why some fields may be missing. We generate a sql schema so that Power BI (or any relational tool) can take our MongoDB Schema and represent it in a relational way. When we automatically generate the SQL Schema, we take a small sample size of your documents to build that schema. And this sample may not represent all of your fields. So you have the option to manually generate the SQL Schema which gives you more control over the schema and your specific data needs. We allow users to generate the sql schema, get the current schema and set the schema. While users do this in Mongo Shell today, we plan to release some UI that will allow users to do this from Atlas in the future.Here is some information on how you would regenerate the SQL Schema:\n\nScreenshot 2023-08-25 at 9.15.00 AM1285×722 152 KB\n", "username": "Alexi_Antonino" }, { "code": "", "text": "Can I safely assume that the scheduled refresh will proceed even if my laptop is turned off?", "username": "Sidney_Guaro" }, { "code": "", "text": "The concern with this approach is that our collection contains over approximately 726 million documents. 
Wouldn’t that pose a problem?", "username": "Sidney_Guaro" }, { "code": "", "text": "@Alexi_Antonino will it work online?", "username": "Sidney_Guaro" }, { "code": "", "text": "Hello @Sidney_Guaro I will try to answer these questions:Can I safely assume that the scheduled refresh will proceed even if my laptop is turned off? your laptop needs to be on if this is where the gateway is installed.The concern with this approach is that our collection contains over approximately 726 million documents. Wouldn’t that pose a problem? The import mode of data connection in Power BI may limit your data set. To combat this, your initial query (created within Power Query) should filter your data as much as possible. When we support the direct query mode, which connects to the database live (coming in 2024) you will still want a query to efficiently narrow the data, otherwise it will take a long time to execute the report or dashboard.\nAlternatively, you can create views within MongoDB to limit/filter the data so it is a more targeted dataset for the report author as well.will it work online? Our custom connector will work with Power Query online, but the gateway install and configuration is still required for this option.Hope this helps,\nAlexi", "username": "Alexi_Antonino" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
PowerBI MongoDB Atlas SQL Schedule Refresh
2023-09-26T00:24:01.071Z
PowerBI MongoDB Atlas SQL Schedule Refresh
477
null
[ "aggregation", "queries" ]
[ { "code": "Groups: {_id : ObjectId, title: String}\nStudents{mainGroups: String}\n[\n //mainGroups is String that`s why I convert Group`s $_id to String here\n {$addFields: {\n gid: {$toString:\"$_id\"} \n }},\n {$project: {\n _id: 1,\n gid: 1,\n title:1\n }},\n { \n $lookup: {\n \"from\": 'students',\n \"let\": {\"groupId\": \"$gid\"},\n pipeline: [\n {\"$match\": \n {\"$expr\" : \n {\"mainGroups\":{\"$regex\": \"$$groupId\", \"$options\" :\"i\"}}\n }\n }\n ],\n as: \"student\"\n }\n },\n]\n", "text": "I have groups and students collections.mainGroups is concat string, that all groups, for every student has taken in school.My code is giving me, all students collection for all groups.I want to aggregate groups collection with students. And get students collection in one array, for every groups if how many students it has.How can I get how many students have for every groups ?", "username": "Jumamidin_Tashaliev" }, { "code": "group_101 = \"101\"\ngroup_201 = \"201\"\ngroups_concat = group_101 + group_201 => \"101201\"\n// In groups_concat, the group 120 is then now $regex findable even if not a valid group\ngroups_array = [ group_101 , group_201 ]\n// In groups_array, no confusion is possible\n", "text": "It would be helpful if you could share sample documents from both collections. This way we could experiment without having to multiply effort to create our own documents from your description.mainGroups is concat string, that all groups, for every student has taken in school.By doing that, you deprive yourself from the power of arrays, including indexing of individual elements. You make any use case slow since you need to use regex for any match. You make any use case slow since you have to $toString to convert an _id that you could store and lookup without any conversion. A concatenation of the string representation will take a lot more memory compared to an array of native $oid. Depending on your concat some non-existing group might become valid.", "username": "steevej" } ]
Regex in lookup through addFields result not working
2023-09-26T09:44:03.859Z
Regex in lookup through addFields result not working
248
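A sketch of the join Steeve's advice leads to, assuming mainGroups is stored as an array of the groups' ObjectIds instead of one concatenated string; with that model no $toString or $regex is needed and the per-group student count falls out directly.

```javascript
db.groups.aggregate([
  {
    $lookup: {
      from: "students",
      localField: "_id",           // ObjectId, compared as-is
      foreignField: "mainGroups",  // array field: matches any element
      as: "students",
    },
  },
  { $project: { title: 1, studentCount: { $size: "$students" } } },
]);
```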
null
[ "react-native" ]
[ { "code": "export class Session extends Realm.Object<Session> {\n _id: Realm.BSON.ObjectId = new Realm.BSON.ObjectId();\n name!: string;\n cube!: string;\n //@ts-ignore\n solves: Realm.List<Solve> = [];\n // @ts-ignore\n validTimes!: number[];\n owner_id!: Realm.BSON.ObjectId;\n used: Date = new Date();\n amount: number = 0;\n fullAmount: number = 0;\n stdev: number = 0;\n average: number = 0;\n best: number = 0;\n\n static primaryKey = '_id';\n}\nvalidTimes!: number[];\ntransform[stderr]: Unable to determine type of 'validTimes' property\n", "text": "Hi,\nI’m creating an app in Realm with Expo/React Native.\nI have a huge problem with specifying Schema for one field. I want to have an array of numbers.My Session Schema looks like:Problem is with:This gives me an error in console:How should I specify such an property as array of primitive values in Realm/React native?", "username": "Rafal_Nawojczyk" }, { "code": "", "text": "For clarity, do you want to have an array of numbers, or a Realm List of numbers?", "username": "Jay" }, { "code": "Realmnew Realm()\nimport Realm from 'realm';\n\nconst realm = new Realm({ schema: [YourSchema] });\n\nYourSchemaList\nconst MySchema = {\n\nname: 'MySchema',\n\nproperties: {\n\nmyArray: 'list',\n\n},\n\n};\n\nrealm.write(() => {\n\nrealm.create('MySchema', {\n\nmyArray: [1, 2, 3, 4, 5],\n\n});\n\n});\n\nMySchemamyArraylistrealm.create()MySchemamyArray\nconst result = realm.objects('MySchema')[0];\n\nconsole.log(result.myArray); // [1, 2, 3, 4, 5]\n\nrealm.write(() => {\n\nresult.myArray.push(6);\n\n});\n\nconsole.log(result.myArray); // [1, 2, 3, 4, 5, 6]\n\nMySchemarealm.objects('MySchema')[0]myArraypush()", "text": "Defining an array of primitive values in Realm/React Native is fairly straightforward. Let me break it down for you in a step-by-step manner.Step 1: Import the necessary modulesBefore we can define an array of primitive values, we need to import the required modules. In this case, we will need to import the Realm module.Step 2: Create a Realm objectNext, we need to create a Realm object that will hold our array. To do this, you can use the new Realm() constructor.Replace YourSchema with the schema definition for your specific use case.Step 3: Define your arrayNow that we have our Realm object ready, we can define our array of primitive values. In Realm, arrays are represented using the List type. This is similar to how arrays are defined in JavaScript.In this example, we define a schema called MySchema with a property called myArray of type list. We then use the realm.create() method to create an instance of MySchema with the myArray property set to an array of primitive values.Step 4: Access and manipulate the arrayOnce you have defined your array, you can access and manipulate its elements using standard JavaScript array methods.In this example, we retrieve the MySchema instance using realm.objects('MySchema')[0]. We then log the initial value of myArray. After that, we use the push() method to add an element to the array and log the updated value.That’s it! You have successfully defined an array of primitive values in React Native.You can also choose alternative to React Native - FlutterRemember to adjust the code according to your specific requirements and schema definition.", "username": "Jacelyn_Sia" } ]
How to define Array of primitive values in Realm/React Native
2023-09-17T11:24:18.989Z
How to define Array of primitive values in Realm/React Native
394
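The reply above shows object-schema syntax; for the class-based model in the original question, one way around the "Unable to determine type" error is to declare the property in an explicit static schema, since a plain number[] annotation cannot be inferred. A sketch, with only the relevant properties shown ('double[]' is Realm's shorthand for a list of primitive doubles):

```javascript
import Realm from "realm";

export class Session extends Realm.Object {
  static schema = {
    name: "Session",
    primaryKey: "_id",
    properties: {
      _id: "objectId",
      validTimes: "double[]", // list of primitive numbers
      // ...declare the remaining Session properties here as well
    },
  };
}
```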
null
[ "aggregation" ]
[ { "code": "db.users.aggregate([\n {\n $match: {\n $or: [\n {\n _id: {\n $in: [targetUserId, currentUserId]\n }\n },\n {\n $and: [\n {\n $or: [\n { \"field.filter\": false },\n { \"field2.filter\": true }\n ]\n },\n { \"profile.memberOrgId\": \"orgId\" },\n {$project: { \n name: \"$profile.name\",\n ...\n } }\n ]\n }\n ],\n }\n }\n])\n{\n currentUser: { current user object },\n targetUser: { target user object },\n members: [ {array of users that match the { \"profile.memberOrgId\": \"orgId\" } filter }]\n}\n", "text": "I am getting a list of users like so:How can I format \\ transform the results to looks like this:Thanks.", "username": "Dev_Ops" }, { "code": "", "text": "Are you able to provide some sample input documents and the current output documents from the aggregation pipeline provided? This will make it easier for any user here to insert to a test environment to try assist.Additionally, please confirm the MongoDB version in use.Regards,\nJason", "username": "Jason_Tran" }, { "code": "{\n _id: \"document id\",\nemail: {\n address: \"[email protected]\",\n verified: Boolean\n },\n profile: {\n name: \"User Name\",\n memberOrgId: \"Org ID\"\n }\n}\n{\n _id: \"document id\",\n name: \"$profile.name\",\n memberOrgId: \"$profile.memberOrgId\",\n emailAddress: \"$email.address\"\n}\n", "text": "MongoDB version 5.0Documents are as follows (Pre projection):during projection I change them as follows (Post projection):Thanks.", "username": "Dev_Ops" }, { "code": "{\n currentUser: { current user object },\n targetUser: { target user object },\n members: [ {array of users that match the { \"profile.memberOrgId\": \"orgId\" } filter }]\n}\n", "text": "Thanks.during projection I change them as follows (Post projection):So you get the list of users and then project them to the advised format? 
I assume after this, you then want to transform this to the following?:How can I format \\ transform the results to looks like this:From the post-projection, what field indicates that it is a current user or target user?Regards,\nJason", "username": "Jason_Tran" }, { "code": "tagertUserIdcurrentUserId_id'scurrentUsertargetUser", "text": "The tagertUserId and currentUserId which are _id's would determine the currentUser and targetUser from the list of returned users.Thanks.", "username": "Dev_Ops" }, { "code": "/// documents with _id of 1, 2 and 6 have the same `memberOrgId` value\n[\n {\n _id: 1,\n name: 'Example Name 1',\n memberOrgId: 'org1',\n emailAddress: '[email protected]'\n },\n {\n _id: 3,\n name: 'Example Name 3',\n memberOrgId: 'org3',\n emailAddress: '[email protected]'\n },\n {\n _id: 4,\n name: 'Example Name 4',\n memberOrgId: 'org4',\n emailAddress: '[email protected]'\n },\n {\n _id: 5,\n name: 'Example Name 5',\n memberOrgId: 'org5',\n emailAddress: '[email protected]'\n },\n {\n _id: 6,\n name: 'Example Name 6',\n memberOrgId: 'org1',\n emailAddress: '[email protected]'\n },\n {\n _id: 2,\n name: 'Example Name 2',\n memberOrgId: 'org1',\n emailAddress: '[email protected]'\n }\n]\ncurrentUserIdcurrentUserId$match/// Get all members that have the matching orgId\n{\n $match: {\n memberOrgId: 'org1'\n }\n},\n/// using $project and $cond to display _id (member), memberOrgId, currentUser and targetUser (assuming _id values for currentUser and targetUser are known (1 and 2 in this case))\n{\n $project: {\n _id:1,\n memberOrgId:1,\n currentUser : {\n $cond: {\n if : {$eq: ['$_id',1]},\n then: '$$CURRENT',\n else: '$$REMOVE'\n }\n },\n targetUser : {\n $cond: {\n if : {$eq: ['$_id',2]},\n then: '$$CURRENT',\n else: '$$REMOVE'\n }\n }\n }\n},\n/// $group all (assuming there is only 1 unique currentUser and 1 unique targetUser document)\n{\n $group: {\n _id: null,\n currentUser : {$max:'$currentUser'},\n targetUser : {$max:'$targetUser'},\n members : {$push:'$_id'}\n }\n}\n[\n {\n _id: null,\n currentUser: {\n _id: 1,\n name: 'Example Name 1',\n memberOrgId: 'org1',\n emailAddress: '[email protected]'\n },\n targetUser: {\n _id: 2,\n name: 'Example Name 2',\n memberOrgId: 'org1',\n emailAddress: '[email protected]'\n },\n members: [ 1, 6, 2 ]\n }\n]\n$cond", "text": "Sample documents I created (based off the post-projection you provided):I’m not 100% sure of the expected output but does something like the below work?Note: I only tested this on documents post-projection in my test environment. 
I do not know what the actual initial documents would look like.I’m assuming currentUserId and currentUserId are known values since you use them in your initial $match stage.Resulting in:For reference, the $cond documentation which may be of use.If the above doesn’t suit your use case or requirements, please provide:There might be a simpler way to achieve this but it’s difficult without any sample documents and without knowing the expected output.I’d advise testing on a test environment to verify it suits your use case and/or requirement(s).Regards,\nJason", "username": "Jason_Tran" }, { "code": "db.users.aggregate([\n {\n $match: {\n $or: [\n {\n _id: {\n $in: [targetUserId, currentUserId]\n }\n },\n {\n $and: [\n {\n $or: [\n { \"profile.receiveEmails\": true },\n { \"profile.receiveNotifications\": true }\n ]\n },\n { \"profile.memberOrgId\": \"orgId\" },\n \n ]\n }\n ],\n }\n \n },\n {$project: {\n firstName: \"$profile.firstName\",\n receiveEmails: \"$profile.receiveEmails\",\n receiveNotifications: \"$profile.receiveNotifications\",\n emailAddress: \"$emails.address\"\n }},\n {\n $group: {\n \"_id\": \"Users\",\n \"currentUser\": {\n \"$max\": {\n $cond:[\n { $eq: [\"$_id\", currentUserId] },\n \"$$CURRENT\",\n \"$$REMOVE\"\n ]\n }\n },\n \"targetUserId\": {\n \"$max\": {\n $cond:[\n { $eq: [\"$_id\", targetUserId] },\n \"$$CURRENT\",\n \"$$REMOVE\"\n ]\n }\n },\n \"members\": {\n \"$addToSet\": {\n $cond:[\n { $ne: [\"$_id\", targetUserId] },\n \"$$CURRENT\",\n \"$$REMOVE\"\n ]\n }\n }\n }\n }\n ])\n", "text": "Thanks very much @Jason_Tran , your suggestion put me on the right path.Here is what I’ve come up with:This gives me exactly what I was looking for.Thanks again for your help!", "username": "Dev_Ops" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Grouping returned documents
2023-09-26T16:40:24.462Z
Grouping returned documents
244
null
[ "queries" ]
[ { "code": "{\n \"_id\": 1,\n \"skills\": [\n {\n \"uid\": 1000,\n \"grade\": 1,\n \"count\": 1\n },\n {\n \"uid\": 2000,\n \"grade\": 2,\n \"count\": 1\n },\n {\n \"uid\": 3000,\n \"grade\": 10,\n \"count\": 2\n }\n ]\n}\n\ndb.nonames.updateOne(\n {_id:1},\n { $set : {\"skills.$[elem].count\" : [100,200] }},\n { arrayFilters :[ {\"elem.uid\":{\"$in\":[1000,3000]}} ]}\n )\n", "text": "This is my data. I want to change “count” of two uid (1000,3000) count of uid=1000 is 777, and count of uid = 2000 is 888. how to change at once.My query is not good and not working.", "username": "DEV_JUNGLE" }, { "code": "uid_1 = 1000\ncount_1 = 777\nuid_2 = 2000\ncount_2 = 888\ndb.nonames.updateOne(\n { \"_id\" : 1 } ,\n { \"$set\" : {\n \"skills.$[one].count\" : count_1 ,\n \"skills.$[two].count\" : count_2\n } } ,\n { \"arrayFilters\" : [\n { \"one.uid\" : uid_1 } ,\n { \"two.uid\" : uid_2 }\n ] }\n) ;\n", "text": "If you want to modify more than a single element, you simply need multiple arrayFilters, one for each element you want to modify. If I understand your situation correctly you may try the following untested code to see if it works.While at it, could you please provide followup and potential mark as solve your thread", "username": "steevej" } ]
How to change value?
2023-09-26T07:39:13.432Z
How to change value?
225
null
[ "queries", "dot-net", "transactions" ]
[ { "code": "", "text": "Dear all,currently I face the following scenario: I have a dataset of 5k entities, which should be updated on my mongo db instance (currently MongoDB 4.2 but we will update to Mongo6 or 7 soon). All entities belong to the same collection. I want to use a multi-document transaction to make sure to update either all or none. I want to divide this huge package into smaller packages and send the updates via a 10 bulk write calls. Since we do some preprocessing, I would try to parallelize the whole process of preprocessing and sending.In general, this is a problem since I have to take care of a strict order of the entity-updates send to the database IF they are casual related ( Read Isolation, Consistency, and Recency — MongoDB Manual). BUT in my case, the entities are guaranteed to be casually unrelated.Now my question: If my entities are casually unrelated, I can parallelize the preprocessing and bulk-write requests via multiple threads and the mongo db driver or the mongo db server will take care of locking the collection and indices and organizing the requests so that at the end of the transaction a consistent db state (collection and indices) is guaranteed, right?Thanks in advance!Thorsten", "username": "Thorsten_Schodde" }, { "code": "", "text": "So you want to know whether you can use multiple bulk write operations in a single transaction?Answer should be yes. (link)But as i recall mongodb sessions (which is needed to use transactions) are not thread-safe. link", "username": "Kobe_W" }, { "code": "", "text": "Thanks! And yes, i want to know whether i can use multiple bulk wirte operations in a single transaction but in parallel not just after each other. The only limit is found so far is that this can screw up everything if the entities are casually realted. But in generel, the mongo db has a nice locking system with reader and writer locks directly on database or collection level. So i think it might work.Regarding the thread safty of the session, I will have a look at the C# driver. Thanks for that hint!Edit: Well, according to your link, every driver HAS TO document whether they have implemented sessions thread safe or not. The c# driver docs don’t do that: Sessions and Transactions (mongodb.github.io)\n…Thanks,\nThorsten", "username": "Thorsten_Schodde" }, { "code": "", "text": "All driver implementations will need to conform to that specification, (and java driver does mention that thread safety thing), so c# will not be an exception.The answer for that is, no, you can’t have concurrent operations in a same session at the same time.", "username": "Kobe_W" }, { "code": "", "text": "Hi,\nyes … I re-read that paragraph after a few liters of coffee and now I agree. It is totally clear. Sorry.\nThat is bad for our performance. It result in a processing time factor of 10, so 10 times slower. Ok. I need a smarter solution. Thanks for your help!", "username": "Thorsten_Schodde" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Single session and transaction with multiple update-queries on the same collection possible?
2023-09-26T11:15:01.344Z
Single session and transaction with multiple update-queries on the same collection possible?
386
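For completeness, a sketch of the sequential fallback discussed above: every batch shares one session and commits or aborts as a unit, but the batches are issued one after another because a session must not be used concurrently. Node.js driver syntax is used for illustration (the thread concerns the C# driver), and `batches` is a placeholder for the pre-processed groups of write operations.

```javascript
async function updateAllOrNothing(client, batches) {
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const coll = client.db("mydb").collection("entities");
      for (const batch of batches) {
        await coll.bulkWrite(batch, { session }); // sequential by design
      }
    });
  } finally {
    await session.endSession();
  }
}
```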
null
[ "aggregation" ]
[ { "code": "{\n $addFields: {\n lat1Radians: { $multiply: [36.7589882, Math.PI / 180] }, // Convert latitude1 to radians\n lon1Radians: { $multiply: [-119.4249381, Math.PI / 180] }, // Convert longitude1 to radians\n lat2Radians: { $multiply: [36.7151076, Math.PI / 180] }, // Convert latitude2 to radians\n lon2Radians: { $multiply: [-119.4159583, Math.PI / 180] }, // Convert longitude2 to radians\n },\n },\n {\n $addFields: {\n dlat: { $subtract: ['$lat2Radians', '$lat1Radians'] },\n dlon: { $subtract: ['$lon2Radians', '$lon1Radians'] },\n },\n },\n {\n $addFields: {\n a: {\n $add: [\n { $sin: { $divide: ['$dlat', 2] } },\n {\n $multiply: [\n { $cos: '$lat1Radians' },\n { $cos: '$lat2Radians' },\n { $sin: { $divide: ['$dlon', 2] } },\n ],\n },\n ],\n },\n },\n },\n {\n $addFields: {\n c: {\n $atan2: [\n { $sqrt: { $max: ['$a', 0] } },\n { $sqrt: { $subtract: [1, '$a'] } },\n ],\n },\n },\n },\n {\n $addFields: {\n distanceH: { $multiply: [3959, '$c'] },\n },\n },\n );\n", "text": "I need to use an aggregate query to determine the distance between two places. Given that the $geoNear query was already used at the beginning of the query and that the distance needed to be calculated again for filterdown, we inserted the Haversine formula, however, the outcome is incorrect. any suggestions for how to fix this? The query samples are given below.", "username": "Sreeraj_S1" }, { "code": "db.aggregate([\n{\n $documents:[\n {\n lat1:36.12,\n long1:-86.67,\n lat2:33.94,\n long2:-118.40 \n }\n ]\n},\n{\n $addFields:{\n pi:3.14159265359,\n Inverse180:0.00555555555,\n r:6372.8\n }\n},\n{\n $addFields:{\n dLat:{$subtract:['$lat2', '$lat1']},\n dLon:{$subtract:['$long2', '$long1']}\n } \n},\n{\n $addFields:{\n dLatRadians:{$multiply:['$pi', '$dLat', '$Inverse180']},\n dlonRadians:{$multiply:['$pi', '$dLon', '$Inverse180']},\n lat1:{$multiply:['$pi', '$lat1', '$Inverse180']},\n lat2:{$multiply:['$pi', '$lat2', '$Inverse180']}\n } \n},\n{\n $addFields:{\n a:{\n $multiply:[\n {\n $sin:{\n $divide:['$dLatRadians',2]\n }\n },\n {\n $sin:{\n $divide:['$dLatRadians',2]\n }\n }\n ]\n },\n a2:{\n $multiply:[\n {\n $sin:{\n $divide:['$dlonRadians',2]\n }\n },\n {\n $sin:{\n $divide:['$dlonRadians',2]\n }\n },\n {\n $cos:'$lat1'\n },\n {\n $cos:'$lat2'\n },\n ] \n }\n }\n},\n{\n $addFields:{\n a_out:{\n $add:[\n '$a','$a2'\n ]\n }\n }\n},\n{\n $addFields:{\n c:{\n $multiply:[\n 2,\n {\n $asin:{\n $sqrt:'$a_out'\n }\n }\n ]\n }\n }\n},\n{\n $addFields:{\n retVal:{\n $multiply:[\n '$r',\n '$c', \n ]\n }\n }\n}\n])\n", "text": "Rather than unwind the above, I tried to implement it and it seemed to work:This is the algo I took it from:The haversine formula is an equation important in navigation, giving great-circle distances between two points on a sphere from their longitudes and latitudes....I had some issues, but I debugged it by putting my algo into a debugger and then comparing the variables in the aggregate as it ran against the debugger (C# in this case).I suggest you do the same with your code to see where the issue is.", "username": "John_Sewell" }, { "code": "", "text": "Also:\nhttps://jira.mongodb.org/browse/SERVER-2990", "username": "John_Sewell" }, { "code": "", "text": "Thank you so much @John_Sewell . 
The code is working as expected.", "username": "Sreeraj_S1" }, { "code": "", "text": "No problem, I would clean it up a touch though as I laid it out to be simple to read and many of the stages can be combined.\nYou may want to look at how many decimals you need for the PI constant if you use it like this as opposed to using MATH.PI, out of interest I looked at what NASA / JPL use for calculations for space probes given they were asked if they really just used 3.14 as an approximation (they don’t):While world record holders may have memorized more than 70,000 digits of pi, a JPL engineer explains why you really only need a tiny fraction of that for most calculations – even at NASA.An interesting read!Also there was some talk in the articles about what value to use for R, but I’ll let you work out what you want to use!", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
To determine the distance between two places, translate the Haversine formula to a Mongo query
2023-09-26T17:45:41.261Z
To determine the distance between two places, translate the Haversine formula to a Mongo query
316
https://www.mongodb.com/…5_2_785x1024.png
[ "node-js", "mongoose-odm", "transactions" ]
[ { "code": "", "text": "\nScreenshot 2022-01-07 at 19.33.101176×1534 180 KB\n\nI was just doing some transactions using session.startSession() and session.endSession() I got this error and nodejs server crashed after this even though i had a try catch block.\nAnd this error happens randomly some times it happens and sometimes it doesn’t.\nWhen i access the server by my mobile no problem , when i use my laptop there is this error that crashes the server", "username": "dev_varivas" }, { "code": "", "text": "I’m getting the very same error, have you found any solutions yet??", "username": "Deepak_Prakash1" } ]
MongoError : Transaction 2 has been commited , Code : 256 , codeName : 'TransactionCommitted'
2022-01-07T14:18:14.760Z
MongoError : Transaction 2 has been commited , Code : 256 , codeName : ‘TransactionCommitted’
2,499
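The thread was never resolved, but codeName TransactionCommitted (code 256) is generally raised when an operation or a second commit is attempted on a transaction that has already committed — for example committing manually inside a callback that withTransaction also commits, or retry logic re-running after success. A sketch of the callback form that leaves commit, abort and retries entirely to the driver (Mongoose, as tagged; the models are placeholders):

```javascript
const mongoose = require("mongoose");

async function placeOrder() {
  const session = await mongoose.startSession();
  try {
    await session.withTransaction(async () => {
      // Order and Stock are hypothetical models used only for illustration.
      await Order.create([{ item: "abc", qty: 1 }], { session });
      await Stock.updateOne({ item: "abc" }, { $inc: { qty: -1 } }, { session });
      // No explicit commitTransaction() here: withTransaction commits on
      // success, aborts on a thrown error, and retries transient errors.
    });
  } finally {
    await session.endSession();
  }
}
```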
null
[]
[ { "code": "Host of any type\nConnections above 0\nHi Angelo,\n\nYou are receiving this alert email because connections to your cluster(s) have exceeded 500, and is nearing the connection limit for the M0 cluster Cluster0 in the Project Project 0 within Organization Purizu. Please restart your application or login to your Atlas account to see some suggested solutions.\nHost\nBecame active in the past hour\n", "text": "Hi team,I’ve noticed that email alerts on the free tier set for the target:With condition:Is sending an email with a confusing messaging indicating that I’m about to exceed the 500 max connections free tier limit.The SMS alert is more comprehensive, and indicates how many connections are open, as a generic alert.My use case:I have a collection to store tokens for users that allow notifications from my mobile app.I feel happy to know when someone has subscribed, so I’d like to receive an alert.Maybe that would be a good target/condition to be added, as in:Thank you,Angelo Reale.", "username": "Angelo_Reale1" }, { "code": "Host\nBecame active in the past hour\n", "text": "Hi @Angelo_Reale1,Can you describe the bug itself? It does sound like this is more of a feature request as opposed to a bug.Additionally, how does the following correlate with each other?:I feel happy to know when someone has subscribed, so I’d like to receive an alert.Maybe that would be a good target/condition to be added, as in:Perhaps using Database Triggers may be of use here since you’ve noted that you have a collection which stores tokens for users (I assume perhaps when a unique token / user is inserted then you’d like a notification of this). This wouldn’t trigger an alert directly but you could create some form of notification for when a user / token is inserted using the database triggers although this is just an idea off the top of my head based off the brief description.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi Jason, thanks for your reply.Sure, I believe the confusions derives from the fact that I proposed a solution to the bug in the same report.The bug itself is that of functional misbehavior, or an indisputable bug of sorts:I setup an alert for > 0 connections.\nI get 1-5 connections.\nI’m alerted that I’m about to exceed the 500 max connections on the free tier.It seems that the alert trigger is mapped to a static email message for free tier users, which does not dynamically includes relevant data from the cluster but that misleads the user into thinking they’re at their resource usage limits.This can generate frustration and stress on people who are still testing or using Atlas on small workloads as free tier users.I hope that helps us understand it better.Have a nice day!", "username": "Angelo_Reale1" }, { "code": "Host of any typeConnections above 0", "text": "I setup an alert for > 0 connections.\nI get 1-5 connections.\nI’m alerted that I’m about to exceed the 500 max connections on the free tier.Ah gotcha. Thanks for clarifying Angelo.What i’ll try to do to reproduce this behaviour (let me know if theres anything missing in particular):I’ll report back here with my results.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Managed to get the same email from the above steps mentioned. 
I will check with the team regarding this behaviour and update here once I have further information ", "username": "Jason_Tran" }, { "code": "", "text": "@Angelo_Reale1 - This was confirmed to be a bug and should be resolved at a later date. Thanks for reporting this one.", "username": "Jason_Tran" } ]
Bug in Email alert notifications
2023-09-10T14:19:05.378Z
Bug in Email alert notifications
334
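As a concrete form of the trigger idea Jason mentions, a sketch of an Atlas database-trigger function that fires on inserts into the token collection and posts a notification; the webhook URL and field names are placeholders.

```javascript
// Attached to a database trigger on operationType "insert" for the tokens collection.
exports = function (changeEvent) {
  const doc = changeEvent.fullDocument;
  return context.http.post({
    url: "https://example.com/new-subscriber", // placeholder webhook
    body: { userId: doc.userId, subscribedAt: new Date() },
    encodeBodyAsJSON: true,
  });
};
```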
null
[ "crud", "time-series" ]
[ { "code": "{\n \"timeStamp\": {\n \"$date\": \"2023-09-18T11:39:01.673Z\"\n },\n \"company\": {\n \"campaignId\": {\n \"$oid\": \"6422f1598110d28c94ff5e79\"\n },\n \"campaignName\": \"Another Yannis Campaign\",\n \"companyId\": {\n \"$oid\": \"63cfe44156386513a4012235\"\n },\n \"companyName\": \"loc-LOCAL Yannis company\",\n \"ipaddress\": \"::ffff:172.18.0.1\",\n \"page\": \"http://localhost:3006/\",\n \"placementId\": \"9d76c56531044ef2809337289d72db81\",\n \"placementName\": \"Home\",\n \"offer\": \"$offer\"\n },\n \"_id\": {\n \"$oid\": \"650836d5153be58017be1f48\"\n },\n \"event\": \"onofferview\",\n \"offer\": {\n \"offerName\": \"test tracking offer\",\n \"offerId\": {\n \"$oid\": \"646e0a4db2492b89d2775c67\"\n },\n \"advertiserId\": {\n \"$oid\": \"63c7d80ba8659914c0050acc\"\n }\n }\n}\n\ndb.tagAnalytics.updateMany({}, \n {\n $set:{\"company.offer\":\"$offer\"}\n },\n)\n\n", "text": "Hello,\nI have the following schema in a timeseries collectionI want to copy object field “offer” which is outside, inside the meta field “company”. The reason is because I want to add advertiserId as index but is not currently supported by mongo due to limitationsI ve tried withbut creates a new field as string like that:“company.offer”:“$offer”Is this possible on a timeseries collection?Thanks", "username": "Yannis_Ragkavas" }, { "code": "", "text": "Hi @Yannis_Ragkavas,I want to copy object field “offer” which is outside, inside the meta field “company”. The reason is because I want to add advertiserId as index but is not currently supported by mongo due to limitationsPresently, only these operations are allowed for time-series meta-only updates. The arbitrary updates may get added in future versions.However, if this feature is important to you, I would recommend posting this type of feedback on the MongoDB feedback engine, where you and others can vote for it.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thanks @Kushagra_Kesav", "username": "Yannis_Ragkavas" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update timeseries collection
2023-09-19T10:56:00.721Z
Update timeseries collection
388
null
[]
[ { "code": "{ \"id1\" : \"hashed\", \"id2\" : 1 }{ \"id1\" : 1, \"id2\" : \"hashed\" }", "text": "I’m trying to create a compound hashed index for hash based sharding. I want to know if would there be any difference between the below indexes:", "username": "Yash" }, { "code": "", "text": "they are different indexes, either one can be used as a hashed sharding index.", "username": "Kobe_W" }, { "code": "", "text": "Thanks for the response @Kobe_W. I’ve a follow up question.Lets say id1 is randomly generated fixed length number that takes timestamp as seed. And I’ve another field id2 which is an enum and has a limited number of values, around 10. I’ve mandatory queries involved on filed id2, and quite frequent queries on id1 as well. Now which is a better shard key in this case? Could you please suggest.", "username": "Yash" }, { "code": "", "text": "Now the question is to select a shard key pattern so that you can get the most benefit.To answer that you will need to get a rough understanding on the traffic for different query patterns and see which one you sacrifice.For instance, if you use id1 as shard key, then your “frequent” queries will be very easy (they are targeting queries), but then your “mandatory” queries will have to be broadcast (assume they don’t also include id1 values).Another option is to use a compound key for both id1 and id2, but in that case your queries need to include both fields in order to be targeting queries.", "username": "Kobe_W" }, { "code": "", "text": "Yes I wanted to use a compound key. All my queries have a mandatory filter on id2 field. And majority (more than 80%) of queries include a filter on id1 along with the mandatory id2 field.I believe shard key on id2 field alone won’t be useful as cardinality of it is just 10, and again shard key on id1 alone wouldn’t give me a benefit as all my queries have a mandatory filter on id2 field (please note that id1 is like a uuid and id2 is a enum). So which of the following would be a better shard key in such case?{id1: hashed, id2: 1}\n{id1: 1, id2: hashed}\n{id2: hashed, id1: 1}\n{id2: 1, id1: hashed}", "username": "Yash" }, { "code": "", "text": "there’s no need to use a hash index for id2.my answer would be : either {id1: hashed, id2: 1} or {id2: 1, id1: hashed} should work for you. (unless there are some tricks that are applied internally for building a more efficient index tree).Most your queries include both fields (as equality check), so i don’t think the ordering matters a lot.for other queries, they have to be broadcast.another way is to simply use hashed id1 as the shard key. Given id1 is UUID. i believe it’s no big difference from {id1:hashed + id2}another thing to consider is that a hashed index doesn’t support a good/targeted range scan.", "username": "Kobe_W" }, { "code": "", "text": "Right, those 2 shard keys makes sense.And I think {id2: 1, id1: hashed} would be a better shard key, as id2 can be treated as a index prefix and all my queries that has a compulsory filter on field id2 would use it. What do you think?", "username": "Yash" }, { "code": "", "text": "that is fair. but given id2 has only 10 values, the benefit is not that big.", "username": "Kobe_W" } ]
Difference between the way we create compund hashed index
2023-09-22T12:40:01.142Z
Difference between the way we create compund hashed index
379
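A sketch of the key the discussion converges on, {id2: 1, id1: "hashed"}; the namespace and values are placeholders, and compound hashed shard keys require MongoDB 4.4 or newer.

```javascript
// Supporting index for the shard key (shardCollection can also create it
// itself when the collection is empty).
db.events.createIndex({ id2: 1, id1: "hashed" });

sh.shardCollection("mydb.events", { id2: 1, id1: "hashed" });

// Queries that include both fields stay targeted to a single shard:
db.events.find({ id2: "TYPE_A", id1: "6f1c9b2e-..." });
```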
https://www.mongodb.com/…7_2_1024x426.png
[ "london-mug", "newyork-mug", "saopaulo-mug", "mexico-mug" ]
[ { "code": "**MUG50****MDB.Community.23**", "text": "Email_Local-Series1200×500 613 KBHello All!MongoDB is heading out on a world tour to bring the best, most relevant content directly to you! Join us to connect with MongoDB experts, meet fellow users building the next big thing, and be among the first to hear the latest announcements!Register Now with **MUG50** discount code to get 50% off! For India-based events use Invite Code **MDB.Community.23** to register yourself for the event. Keynote Presentations: Learn about the latest product announcements from MongoDB industry leaders. Technical Sessions: Experience educational technical sessions for all levels, delivered by MongoDB experts and customers.Relational Migrator\nLearn how to easily migrate a relational database to MongoDB with one-time or continuous data migration.Data Modeling\nLearn data modeling principles, and take your models to the next level with tips and tricks from the experts.Atlas Search:\nExplore the Atlas Search ecosystem and how you can use it to effortlessly integrate search into your applications. MongoDB Product Demos: Stop by the MongoDB booth for demos of the latest products and to get your questions answered. Networking: Meet with our MongoDB enthusiasts in your area to share ideas and expand your network.Join our regional virtual user groups to connect with MongoDB Community in your region and stay updated on events in your time zone! Use code MUG50 for 50% off your ticket!!\n Use Invite Code MDB.Community.23 for .locals in IndiaUpcoming Events:We look forward to seeing you at these events! ", "username": "Harshit" }, { "code": "", "text": "We’re looking forward to meeting community members at these events! Please share if you’re planning to attend!", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "Hey regarding the MongoDB.local Bangalore event, it’s mentioned that it’s free of cost to attend. In that case, should I use MUG50 as invite code?", "username": "_karanel" }, { "code": "MDB.Community.23", "text": "Hey @_karanel,\nFor India-based .locals you could use MDB.Community.23 to register for the event ", "username": "Harshit" }, { "code": "", "text": "Hey @Harshit, actually I registered with my work email ([email protected]) without a code entering it as “NA”, and I can’t register again.\nCould you help me out? ", "username": "_karanel" }, { "code": "", "text": "No problem @_karanel. You don’t need to register again.\nWe will let the team know about it Looking forward to seeing you at the event!", "username": "Harshit" }, { "code": "", "text": "Hey @Harshit, I registered with my work email with a code entering it as “yogesh”, I can’t register again.\nCould you please help?", "username": "yogesh_mishra" }, { "code": "", "text": "Hey @yogesh_mishra,\nLet us check with the team and help you out. Could you also in the meantime, use the “Contact the Organiser” button on the registration page to share your concern.https://events.mongodb.com/mongodb-local-delhincr", "username": "Harshit" }, { "code": "", "text": "I am a student. Which Email should I use for .local Mumbai to register as it is not accepting my personal email. Should I use my college email instead?", "username": "Ansari_Taufique" }, { "code": "", "text": "Yes, you can probably use your college email.", "username": "Shashwat_Verma_N_A" }, { "code": "", "text": "Registered. 
See y’all there!", "username": "Shashwat_Verma_N_A" }, { "code": "", "text": "Thanks for sharing Harshit, would be there in the Boston Meetup.\nCan you also update if there would be any events in Canada for this tour - Toronto/Vancouver?Thanks.", "username": "Devang_Sharma1" }, { "code": "", "text": "Hi Devang - thanks for your interest! We are starting a MongoDB User Group in Boston in late October after the .local event. You can go ahead and join here: Boston, us: mongodb user group - MongoDB Developer Community ForumsThere aren’t any .local events in Canada this year.", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "Robbed, Toronto was robbed I say! ", "username": "chris" }, { "code": "", "text": "Hi, I registered for the .local Bengaluru event, but I didn’t receive any confirmation email.Can you please tell by when we will receive the confirmation for this event?", "username": "rakesh_verma24" }, { "code": "", "text": "Hey @rakesh_verma24,\nOur team is currently in the process of sending out confirmations for the event. We understand that this may cause some anticipation. Also, preference is being given to professionals as the content is tailored to their needs.In the meantime, we invite you to join our Bangalore User Group to stay informed about any upcoming meetups in the region.Thanks\nHarshit", "username": "Harshit" }, { "code": "", "text": "26th August Ahmmedabad is missing.", "username": "Anjesh_Agrawal" }, { "code": "", "text": "Hey, @Anjesh_Agrawal That is our MongoDB User Group Meetup event run and managed by the community leaders, while these .locals are larger day-long events, majorly run by MongoDB Staff. It has around 7-8 sessions, booths and ask the experts sections to interact with MongoDB staff.This year .locals in India are being held in Bengaluru, Delhi and Mumbai. ", "username": "Harshit" }, { "code": "", "text": "hello , i registered for MongoDB .local mumbai but didn’t received any acknowledgment about the event ,\ni even wrote the mail to organizer didnt receive any response, either the computergenerated nor thereply mail", "username": "AKSHAT_Kotpalliwar" }, { "code": "", "text": "Hey @AKSHAT_Kotpalliwar,Welcome to the MongoDB Community! That is correct. The team would soon start to send out confirmation emails to the registered folks. In the meanwhile, you can also explore our MongoDB User group in Mumbai: MUG-Mumbai.Regards,\nSatyam", "username": "Satyam" } ]
[MongoDB .local Events] MongoDB is heading out on a World Tour!
2023-05-15T11:15:29.135Z
[MongoDB .local Events] MongoDB is heading out on a World Tour!
13,998
null
[]
[ { "code": "", "text": "At this time, there is no way to save an ‘invalid date’. What I do is to either set the date field when data is available or I unset the field. When you look at the object, the date field may or may not exist.Although this works very well, it means more work for people using strong typed languages. I have never used the Javascript driver for MongoDb, but from what I can see, it is trivial to store either a valid date or a null date.I love MongoDb, it is very unlike any other datatabase that I’ve worked with. In other databases, you can store a ‘blank’ date or a value that represents an ‘invalid date’. When that value is read, you know that you’ve got a blank date.I’m using version 4.2 at the moment and I’m wondering if there are plans to allow the storage of an ‘invalid’ date in the future. This is not a request, I ask out of curiosity. Thank you.", "username": "Billy_Zee" }, { "code": "", "text": "Hello @Billy_Zee ,Welcome back to The MongoDB Community Forums! there is no way to save an ‘invalid date’MongoDB stores the date in BSON format. May I ask what you meant by “invalid date”? Can you also share some examples of the same?In other databases, you can store a ‘blank’ date or a value that represents an ‘invalid date’MongoDB supports a flexible schema design which gives you data-modeling choices to match your application and its performance requirements. In your case, you can either add the date or leave it as it is.Can you please provide more clarification around your requirements so that I can assist you better?Lastly, MongoDB version 4.2 is no longer supported so I would recommend you to upgrade to at least MongoDB version 4.4 which is in support till Feb 2024. For more information regarding MongoDB Software Lifecycle Schedules, kindly visitMongoDB Software Lifecycle SchedulesRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "$set$unsetbsoncxx::types::b_datenull", "text": "Hello Tarun,I currently use a solution that I’m sure is commonly used by MongoDb users which is to either $set a date provided and to $unset when it is not provided.On the SQL systems that I’ve worked with, an ‘invalid’ value is specified and I’ll explain why this is helpful further down below. Please note that the solution to set/unset works perfect but just means more work for people working with a strongly typed language because you must declare the data type, e.g.: bsoncxx::types::b_date which means that if I store null, we obviously will get a compiler error. To avoid this, we must check to see if the date has been provided or not, then set/unset accordingly.Now back to your question to me, an ‘invalid date’ would be a 64 bit integer value that is stored when say no date has been specified. That value would have meaning to the database that no date has been stored, just like $unset. The only difference is that if we $unset, there is no field (which is perfectly acceptable). 
If we stored an ‘invalid_date’ value, we’d have a field with a value representing ‘invalid date’ (or no date supplied).Having the ability to store an ‘invalid_date’ would allow users using a strong typed language to use the same syntax to save a date without the need to have special logic to set/unset the field.This may not be a concern for Javascript developers using MongoDb because I am assuming that the can either set/unset like I currently do (using the mongocxx driver) or they can simply save ‘null’ to MongoDb if they so chose to do (unlike users of strong typed languages).", "username": "Billy_Zee" }, { "code": "kvp(kfirstName, firstName)auto publish = getTimestamp(kpublish);\nif (!publish.is_not_a_date_time()) {\n\tsetDoc.append(kvp(kpublish, to_b_date(publish)));\n}\nelse {\n\tunsetDoc.append(kvp(kpublish, \"\"));\n}\n", "text": "As you can see in the snippet below, I can use the following line of code to handle an empty string or a string with somebody’s name:kvp(kfirstName, firstName)If we had the ability to store an ‘invalid date’, we could use identical syntax to do the same with a strongly typed language.Since MongoDb does not allow you to store the concept of an invalid date, I do this:As you can see, I cannot handle MongoDb dates as easily as I can handle string data. I have to do some checking to set/unset.This is perfectly fine and acceptable for me since it works exactly as it should. I just wanted to be sure that the newer versions of MongoDb had not changed in this specific regard.P.S. In my humble opinion, I do think that not storing a ‘null’ date really is best not only because you store less data, but my guess is that indexing benefits by simply not dealing with ‘null’ values.", "username": "Billy_Zee" } ]
Future plans to allow saving an 'invalid date'?
2023-09-24T20:36:10.250Z
Future plans to allow saving an &lsquo;invalid date&rsquo;?
189
null
[ "java", "python", "cxx", "data-api" ]
[ { "code": "{\n _id: ObjectId(\"6511bad731f6775711b4ee51\"),\n emp_no: 211,\n last_name: 'Susan',\n first_name: 'Fillingnough',\n hired_on: ISODate(\"2023-09-25T16:52:39.774Z\"),\n phoneNumbers: [\n { type: 'Home', number: '111-000-2222' },\n { type: 'Cell', number: '222-111-3333' }\n ],\n married: 'n'\n }\ntheresult = collection.update_one({\"emp_no\" : 211 , \"phoneNumbers.type\":\"Home\"} , {\"$set\": {\"phoneNumbers.$.number\": \"647-298-7194\"}})", "text": "This seems so simple, and I was able to do it in Python, no problem, but in Java, no one seems to know.I have this document:In Java, I would like to update the Home phoneNumber to something else. In Python, I did it this way:theresult = collection.update_one({\"emp_no\" : 211 , \"phoneNumbers.type\":\"Home\"} , {\"$set\": {\"phoneNumbers.$.number\": \"647-298-7194\"}})How could one do this in Java and C/C++?Thanks all!", "username": "Bill_Komanetsky" }, { "code": "Document empDocument = new Document();\nempDocument.append(\"emp_no\", 211L);\nempDocument.append(\"phoneNumbers.type\", \"Home\");\ntempDocument = new Document();\nDocument tempDocument.append(\"$set\", new Document(\"phoneNumbers.$.number\", \"647-298-7194\"));\ntheUpdateResult = collection.updateOne(empDocument, tempDocument);\nif (theUpdateResult.getModifiedCount() == 0) {\t\n\tSystem.out.println(\"Error updating the employee details home phone number\");\n}//if\nelse {\n\tSystem.out.println(\"Employee home phone number details document successfully updated\");\n}//else\n", "text": "Figured it out. And No-SQL is a benefit here?", "username": "Bill_Komanetsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Updated Nested Array Value in Java
2023-09-25T17:45:27.453Z
Updated Nested Array Value in Java
289
null
[ "node-js" ]
[ { "code": " {\n fieldname: 'file',\n originalname: 'test_video.mp4',\n encoding: '7bit',\n mimetype: 'video/mp4',\n buffer: <Buffer 00 00 00 20 66 74 79 70 69 73 6f 6d 00 00 02 00 69 73 6f 6d 69 73 6f 32 61 76 63 31 6d 70 34 31 00 00 00 08 66 72 65 65 00 23 cb b5 6d 64 61 74 00 00 ... 2353584 more bytes>,\n size: 2353634\n }\n upload(req, res, async function (err) {\n \n let item = req.body\n \n if (req.file) {\n item.file = new mongo.Binary(req.file.buffer)\n }\n \n let result = await uploadDAO.insertItem(item);\n res.json(result)\n })\nvideo: Binary('QUFBQUlHWjBlWEJwYzI5dEFBQUNBR2x6YjIxcGMyOHlZWFpqTVcxd05ERUFBQUFJWm5KbFpRQWp5N1Z0WkdGMEFBQUNyZ1lGLy8r...', 0)", "text": "Hello,I need to update and retrieve video files to/from my database. Seeing that files will be small I decided to upload a video file directly to a document’s video field (no GridFS, no external storage, etc.)I use multer (with memory storage) to upload via form data but I’m not sure if I’m doing it right (?)Multer’s memory storage puts a file in req.file.buffer which looks like thisOn server side I try to write this buffer into dbMy document’s video field looks like this afterwards:video: Binary('QUFBQUlHWjBlWEJwYzI5dEFBQUNBR2x6YjIxcGMyOHlZWFpqTVcxd05ERUFBQUFJWm5KbFpRQWp5N1Z0WkdGMEFBQUNyZ1lGLy8r...', 0)On the client side, when retrieving the video I get this same string and I have no idea what to do with it ? Ideally I would like to createObjectURL and display a video, would it be possible ?Thanks in advance for your help ", "username": "flyingfish22" }, { "code": "const [video, setVideo] = useState('')\nuseEffect(() => {\n axios\n .get(/api/profile/${username}/video, {\n responseType: 'blob' // Set the responseType to 'blob' to handle binary data\n })\n .then(response => {\n const blob = new Blob([response.data], { type: response.headers['content-type'] });\n \nconst videoUrl = URL.createObjectURL(blob);\n setVideo(videoUrl);\n })\n .catch(error => {\n // Handle errors\n });\n }, []);\nuseState", "text": "It’s dependent upon what your client side is, the video may be too large to upload but give this a try first, I use React primarily as my front-end application. Here is how I would handle a video or picture.here we receive the video as a blob and create an object URL with it, that way the front end knows it’s a binary video and we give it that URL so it can be displayed, then for clean code I use a useState to set the video so it can be used elsewhere in the application. If the video is too large you may have to use GridFS which there is a lot of information on the web on how to use and set it up in your personal application.", "username": "Charles_Redfield" } ]
Uploading and retrieving videos
2021-11-08T13:39:40.798Z
Uploading and retrieving videos
4,821
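The thread above notes that videos larger than the 16 MB document limit would need GridFS instead of a Binary field. A minimal server-side sketch of that alternative, using PyMongo's gridfs module — the database name "media" and the file name are placeholders, not part of the original thread:

```python
import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local deployment
fs = gridfs.GridFS(client["media"])                # "media" is a hypothetical database name

# Store a video that may exceed the 16 MB BSON document limit.
with open("test_video.mp4", "rb") as f:
    file_id = fs.put(f, filename="test_video.mp4", contentType="video/mp4")

# Read it back later, e.g. to stream it to the client as a blob.
video_bytes = fs.get(file_id).read()
```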
null
[ "database-tools" ]
[ { "code": "", "text": "Is there any free tools for generate documentation from database collections in MongoDB?\nI would like generate an excel sheets with collection’s name and its fields. If possible, I would like add field’s comments for showing it in a generate excel sheets.\nAn tool exemple is DBForge for MariaDB. Is there a free tools like this for MongoDB?", "username": "Rafael_de_PauliBaptista" }, { "code": "", "text": "Hello @Rafael_de_PauliBaptista,Welcome back to The MongoDB Community Forums! Does MongoDB Compass work for you?If yes, then you can export data from a collection as either a JSON or CSV file. If you specify a filter or aggregation pipeline for your collection, Compass only exports documents which match the specified query or pipeline results. To learn more, kindly visitImport and export data with MongoDB Compass.If not, then please share your use case and further information so we can assist you better.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Free documentation tools for MongoDB
2023-09-22T19:30:03.678Z
Free documentation tools for MongoDB
288
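Besides the Compass CSV/JSON export mentioned above, a rough data dictionary can also be produced with a short script. This is only a sketch (PyMongo, sampling a single document per collection, so it will miss fields that do not appear in the sampled document); the database name is an assumption:

```python
import csv
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed deployment
db = client["mydb"]                                # hypothetical database name

# One row per (collection, field) pair, based on one sampled document each.
with open("collections.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["collection", "field", "python_type"])
    for name in db.list_collection_names():
        sample = db[name].find_one() or {}
        for field, value in sample.items():
            writer.writerow([name, field, type(value).__name__])
```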
null
[ "replication" ]
[ { "code": "", "text": "Dear MongoDB community,Since I did not find a clear answe yet: Is connecting to the (formerly determined) primary member of a replica set with the “mongo” command line tool supported? It is also equivalent to using the full replica set connection string?Best Regards,\nPatrick", "username": "Friedolin" }, { "code": "", "text": "Hi Friedolin,Yes, one can connect to any of the members of a replica set direction or make a replicaset connection. The difference between the former and the latter is the inclusion of “replicaSet” query parameter and the list of members. Connecting directly, to a member, like the primary, would mean omitting the “replicaSet” query parameter and having only the host name of the target member in the connection URI.Regards,\nSteve", "username": "Steve_Hand1" }, { "code": "", "text": "Thanks for you reply! When connected explicitly and only to the primary member of a replica set, will writes/changes be replicated in the same way like when connected using the full connection URI?", "username": "Friedolin" }, { "code": "", "text": "Yes, change applied to primary will be replicated regardless of the connection types discussed.", "username": "Steve_Hand1" } ]
Proper replica set connection
2023-09-26T12:49:52.899Z
Proper replica set connection
273
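To make the distinction discussed in the thread concrete, here is a minimal sketch of the two connection styles (PyMongo; the hostnames and the replica set name are placeholders):

```python
from pymongo import MongoClient

# Direct connection to a single member: one host, no replicaSet parameter.
direct = MongoClient("mongodb://mongodb01.example.net:27017/?directConnection=true")

# Replica set connection: the driver discovers all members and follows elections,
# so writes keep reaching whichever node is currently primary.
replset = MongoClient(
    "mongodb://mongodb01.example.net:27017,mongodb02.example.net:27017/?replicaSet=rs0"
)
```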
null
[ "queries", "node-js", "replication" ]
[ { "code": "testrs [direct: secondary] test> rs.config()\n{\n _id: 'testrs',\n version: 7,\n term: 31,\n members: [\n {\n _id: 0,\n host: 'mongodb01.test.local:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 1,\n host: 'mongodb02.test.local:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 0.5,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 2,\n host: 'mongoarbiter.test.local:27017',\n arbiterOnly: true,\n buildIndexes: true,\n hidden: false,\n priority: 0,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 3,\n host: 'mongodb03.test.local:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 4,\n host: 'mongodb04.test.local:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"63636928358215d6a25de74c\")\n }\n}\n \"spring.data.mongodb.uri\": \"mongodb://notification:[email protected]:27017,mongodb02.test.local:27017,mongodb03.test.local:27017,mongodb04.test.local:27017/notification?replicaSet=testrs&readPreference=secondaryPreferred&serverSelectionTimeoutMS=10000&connectTimeoutMS=10000\"\n", "text": "Hi, I configure mongodb replicaset: 1 primary and 3 secondary nodes. I want to route all read data’s to all secondary nodes but node js app routes all request to only one mongodb secondary node. why does it work like this?\nmongodb server rs config:node js app server config:", "username": "Murad_Samadov" }, { "code": "", "text": "Hey @Murad_Samadov,In MongoDB, there is a primary node and multiple secondary nodes. By default, all write operations (insert, update, delete) are directed to the primary node because it’s responsible for handling writes and maintaining data consistency. However, for read operations, MongoDB provides different read preferences that allow you to control how reads are distributed among secondary nodes.Please refer to the documentation for more information on the common read preferences modes.I want to route all read data’s to all secondary nodes but node js app routes all request to only one mongodb secondary node. why does it work like this?Would you please clarify what you mean by “route all read data to all secondary nodes”?Looking forward to your response.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thanks for response.\nWould you please clarify what you mean by “route all read data to all secondary nodes” ? : i mean i choose secondaryPreferred read preference mode in app side. Please explain me how it is working? App get read data onle 1 secondary node? Or all replica members?", "username": "Murad_Samadov" }, { "code": "maxStalenessSeconds", "text": "Hi @Murad_Samadov,Please explain me how it is working? App get read data onle 1 secondary node? 
Or all replica members?As per the secondaryPreferred - documentation:So, in simple terms, in “secondaryPreferred” mode, the client tries to read from a secondary server that’s not too far behind the primary, and if that’s not possible, it reads from the primary.Hope it answers your questions!Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Node.js app routes all requests to only one MongoDB secondary node
2023-09-25T19:30:19.147Z
Node.js app routes all requests to only one MongoDB secondary node
313
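As a concrete illustration of the secondaryPreferred behaviour described in the thread, here is a sketch in PyMongo; the hosts and replica set name are taken from the thread, while the collection name is an assumption. The equivalent options exist in the Java/Spring driver used by the original poster:

```python
from pymongo import MongoClient, ReadPreference

client = MongoClient(
    "mongodb://mongodb01.test.local:27017,mongodb02.test.local:27017/"
    "?replicaSet=testrs&readPreference=secondaryPreferred"
)

# The read preference can also be set per collection handle. The driver then picks
# an eligible secondary (within the latency window) for reads and falls back to
# the primary only when no secondary is available.
coll = client["notification"]["messages"].with_options(   # "messages" is hypothetical
    read_preference=ReadPreference.SECONDARY_PREFERRED
)
doc = coll.find_one()
```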
null
[]
[ { "code": "", "text": "Hi. On the Github pro Website they mentioned that students will get “MongoDB University including free certification valued at $150” but on MongoDB Developer Learning Path it is mentioned that “you will receive 50% off an Associate Developer certification exam attempt”. Can you please tell me which offer is valid? As I am an international student, I can’t afford this certification even at half the cost.\nThank you.", "username": "217_Soham_Mahajan_CompsB" }, { "code": "", "text": "Hi Soham, welcome to the forums!Both offers are valid. All users who finish a learning path receive a voucher code good for 50% off of the certification exam. Students are eligible to receive a 100% certification discount voucher through the GitHub Student Developer Pack id they complete a learning path.To receive the 100% dicount voucher through the GitHub Student Developer Pack, you must first sign-in at mongodb.com/students with your GitHub credentials. After that, you must complete a learning path in MongoDB University.Hope this helps!", "username": "Aiyana_McConnell" } ]
About Github Pro account and MongoDB offer
2023-09-26T14:17:39.274Z
About Github Pro account and MongoDB offer
328
null
[ "aggregation", "java", "compass" ]
[ { "code": "{\n $lookup: {\n from: \"dataInstances\",\n localField: \"fxLookup\",\n foreignField:\n \"compoundIndexValues.gbpFXLookup\",\n as: \"exchangeRates\",\n },\n }\n{\n \"$lookup\": {\n \"from\": \"dataInstances\",\n \"as\": \"exchangeRates\",\n \"localField\": \"fxLookup\",\n \"foreignField\": \"compoundIndexValues.gbpFXLookup\"\n },\n \"totalDocsExamined\": 3631,\n \"totalKeysExamined\": 3631,\n \"collectionScans\": 0,\n \"indexesUsed\": [\n \"schemaId_6f527503-ca52-4121-93ef-aa8b40537f1b_concatenatedFieldNames_gbpFXLookup\"\n ],\n \"nReturned\": 4170,\n \"executionTimeMillisEstimate\": 194\n }\ndocumentAsMap = {LinkedHashMap@8342} size = 7\n \"$lookup\" -> {Document@8252} size = 4\n \"totalDocsExamined\" -> {Long@8254} 440902440\n \"totalKeysExamined\" -> {Long@8153} 0\n \"collectionScans\" -> {Long@8257} 4170\n \"indexesUsed\" -> {ArrayList@8259} size = 0\n \"nReturned\" -> {Long@8261} 4170\n \"executionTimeMillisEstimate\" -> {Long@8263} 160575\n", "text": "I use Java to create an aggregation pipeline. If I copy/paste that pipeline and run it in Compass, it executes quickly and the explain plan show that it’s using all the indexes I expect. When I run it in Java, it executes extremely slowly, and the query plan shows that it is NOT using the indexes that were used when in Compass. It’s definitely exactly the same pipeline in terms of its text representation, because I just copy/paste that from Java into Compass.Why might this happen, and what can I do to ‘fix’ it in Java?I’m running a local community edition v6.06, Compass 1.39.4, and Java sync version 4.10.2.Here’s a snippet of the pipeline that seems to be behaving differently (forgive the field names, it’s all handled by the code and honestly there’s a good reason for it!)The explain on Compass shows me this for that bit:And the same pipeline executed via Java gives this in its explain:(sorry, it’s just the copy of the debug stuff from IntelliJ, but hopefully you get the drift that it isn’t selecting to use the index)Note that the ‘from’ collection is the same as the one the pipeline is running against. It has ~100k rows in it, so pretty tiny really. The pipeline IS quite large, with a few unions and projections up top, and 3 $lookups in total. One of those DOES use the index in both scenarios, but the other two don’t.I can post more stuff but the pipeline is quite big and the field names are so complicated that it makes reading it difficult, so I’m hoping that somebody out there can help me to pin it down.", "username": "Simon_Burgess" }, { "code": "", "text": "For what it’s worth, the issue turned out to be a ‘rogue’ comma at the end of a list definition in the JSON describing the pipeline. Compass is quite happy with this. Parsing the string to Bson docs and passing them to the MongoDB driver is not fine with it. Nothing complains, but behind the scenes there’s a null element of the array, which means the thing you think you’re matching is not what’s being matched.", "username": "Simon_Burgess" }, { "code": "", "text": "Hey @Simon_Burgess, Compass developer here! Trailing commas are valid in JavaScript and we can’t see any null values added to the pipeline in Compass when the pipeline is parsed from user input. When using “export to language” feature, it also seems to produce valid Java code with no null-s at the end:\nimage971×570 47.4 KB\nWe’d like to figure out what exactly might be going wrong here to decide how to address this. 
To help us with that can you share the whole pipeline string you got from Compass that was then parsed incorrectly? Can you also clarify a bit at what stage the null value is added to the pipeline? What are you using to convert pipeline from Compass pipeline editor to Java code?", "username": "Sergey_Petushkov" }, { "code": "gson.fromJson(pipelineText, Array<Document>::class.java).toList()\npipelineText", "text": "Hi @Sergey_Petushkov,Thanks for responding. I can confirm that Compass itself was doing everything I expected to do, and doing it well!I’ve got some Kotlin code that parses a string into a list of Bson docs to be used as the pipeline:Pasting the contents of pipelineText into Compass works fine. However, the gson conversion doesn’t handle trailing commas well (depending on how you look at it), so my arrays get an extra null value in the Bson which completely throws off the MongoDB engine. That was the problem, not Compass.Actually, out of interest, how do you handle converting to/from Bson/Text in Compass?-Simon.", "username": "Simon_Burgess" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query plan different for the same aggregation pipeline executed from Java vs in Compass
2023-09-22T22:16:32.543Z
Query plan different for the same aggregation pipeline executed from Java vs in Compass
340
https://www.mongodb.com/…6_2_1024x113.png
[ "queries", "crud", "compass", "mongodb-shell" ]
[ { "code": "{\n _id: 376818,\n buildingName: \"test building\",\n fuelTypeList: [\n {\n _id: 376820,\n buildingId: 376818,\n type: \"Natural Gas\"\n },\n {\n _id: 376821,\n buildingId: 376818,\n type: \"District Cooling\"\n }\n\n ]\n \n}\n db.building.updateOne(\n { _id: 376818 },\n { $set: { fuelTypeList.$[].id: 5 } }\n );\nError: clone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:...<omitted>...)} could not be cloned.\n", "text": "Sample document:I want to add a new field “id” to every item in fuelTypeList. I am using MongoDB Compass (Version 1.39.4 (1.39.4) ) MONGOSH terminal to run the command.This is my queryand I am getting error\nScreenshot 2023-09-22 at 15.02.212528×280 34.8 KB\nAccording to https://www.mongodb.com/docs/manual/reference/operator/update/positional-all/, $ should be supported. What is wrong here?", "username": "Yat_Man_Wong" }, { "code": "fuelTypeList.$[].id\"fuelTypeList.$[].id\"\n", "text": "this is a syntax error. it means your code is not formatted correctly. most likely it is the usage of dot notation that is not enclosed in double quotes.rather thanfuelTypeList.$[].idtry", "username": "steevej" }, { "code": " db.building.updateOne(\n { _id: 376818 },\n { $set: { \"fuelTypeList.$[].id\": \"fuelTypeList.$[]._id\" } }\n );\n", "text": "that works, thank you Instead of hard coding id = 5 for every item, if I want every item’s id equals to _id does the terminal supportIt updated the value as string\n", "username": "Yat_Man_Wong" }, { "code": "$[]", "text": "If you need to use a current value from the same document you will need to use the update with aggregation.The $[]is however not available. You will need to use $map to accomplish the modification of the array.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to set a field for every item in an array
2023-09-22T22:03:41.907Z
How to set a field for every item in an array
347
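A sketch of the $map approach suggested in the thread above, i.e. an update that uses an aggregation pipeline so each array element's new id field can be copied from that element's own _id (shown with PyMongo; the database name is an assumption, and the same pipeline works verbatim in mongosh):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed deployment
buildings = client["test"]["building"]             # "test" database is an assumption

# Update with an aggregation pipeline: $map rebuilds the array, and $mergeObjects
# adds an "id" field holding each element's existing "_id" value (not a string).
buildings.update_one(
    {"_id": 376818},
    [
        {"$set": {"fuelTypeList": {
            "$map": {
                "input": "$fuelTypeList",
                "as": "item",
                "in": {"$mergeObjects": ["$$item", {"id": "$$item._id"}]},
            }
        }}}
    ],
)
```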
null
[]
[ { "code": "", "text": "Dear all :Thanks for letting me join to this BIG community.I hope you can support me , maybe any of you have suffered this issue [2020/12/19 10:21:09.736] [.error] [cm/director/director.go:planAndExecute:539] <rbalvkapam.bas.roche.com-27017_8> [10:21:09.736] Plan execution failed on step Download as part of move Download : <rbalvkapam.bas.roche.com-27017_8> [10:21:09.736] Postcondition failed for step Download because\n[‘desiredState.FullVersion’ is not a member of ‘currentState.VersionsOnDisk’ (‘desiredState.FullVersion’={“trueName”:“4.4.0-ent”,“gitVersion”:“563487e100c4215e2dce98d0af2a6a5a2d67c5cf”,“modules”:[“enterprise”],“major”:4,“minor”:4,“patch”:0}, ‘currentState.VersionsOnDisk’=)]. Outcome=3I’m using the official tgz but always said the same issueDoes any of you suffered this issue ?I have RHEL 7.8 and i’m trying to install mongodb-linux-x86_64-enterprise-rhel70-4.4.2.tgzThanks in advance !!!", "username": "Roberto_Rodriguez_Pe" }, { "code": "", "text": "have you got any solution of this", "username": "Md_Azaz_Ahamad1" }, { "code": "", "text": "Hey @Md_Azaz_Ahamad1,May I ask if you are facing any specific issues similar to this one?If yes also share the additional details such as MongoDB version and deployment configuration. Also, are you utilizing Docker, a VM, or a similar solution to host your MongoDB server?Looking forward to your response.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
'desiredState.FullVersion' is not a member of 'currentState.VersionsOnDisk'
2020-12-19T10:44:39.320Z
&lsquo;desiredState.FullVersion&rsquo; is not a member of &lsquo;currentState.VersionsOnDisk&rsquo;
2,606
null
[ "aggregation", "views" ]
[ { "code": "[\n {\n $project:\n {\n messages: {\n $filter: {\n input: \"$messages\",\n as: \"message\",\n cond: {\n $gte: [\n \"$$message.time_sent\",\n date,\n ],\n },\n },\n },\n },\n },\n {\n $unwind:\n {\n path: \"$messages\",\n },\n },\n {\n $group:\n {\n _id: \"$_id\",\n messages: {\n $push: \"$messages\",\n },\n },\n },\n {\n $set:\n {\n ttl: {\n $add: [\n new Date(),\n 1000 * 3600 * 24 * 90,\n ],\n },\n },\n },\n {\n $merge: {\n 'into': 'test',\n 'on': '_id',\n 'whenMatched': 'merge',\n 'whenNotMatched': 'insert'\n }\n }\n ]\n", "text": "Hi allI have a large collection of data, each document comprises an _id, ttl and then an array called ‘messages’ which can be large.One of the fields in the messages array is called ‘time_sent’, currently the collection contains time_sent dating back 18 months or so - I need this to be reduced to 90 days.The issue I am facing is that while I have created an aggregation which filters the array based on ‘date’ (which is current time minus 90 days), the results appear correct in the aggregation tool in Atlas, i.e. in a test collection I start with 6 documents, end up with 3 based on the date. Documents that only have items in the array that doesn’t match the filter condition persist, I guess logically this makes sense but it’s not what I want…Also if the filter results in an empty ‘messages’ array, the whole document should be removed. I don’t seem to be able to achieve this either.Current aggregation below:I feel like I missing something obvious, but I just can’t seem to crack it.Thanks\nChris", "username": "Chris_Davies" }, { "code": "", "text": "If you use $match then $out rather than $merge, the original collection will be replace with only the documents that matches. This way you could $match out documents with empty array. With $out you need to make sure you $project all the other fields, a $set or $addFields would then be more appropriate.I am pretty sure your $unwind and $group is kind of useless.But i also think that you should use $merge as you did and then do a separate deleteMany to get rid of documents you do not want to keep.", "username": "steevej" }, { "code": "", "text": "Thanks - I was ending up just adding more stages to the aggregation in some hope of it doing what I wanted! I will give the deleteMany a go as well ", "username": "Chris_Davies" } ]
Managing array of subdocuments by date via aggregation
2023-09-22T05:11:42.746Z
Managing array of subdocuments by date via aggregation
331
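Following the suggestion above ($merge for the trimmed arrays, plus a separate deleteMany for documents whose array ends up empty), here is a minimal PyMongo sketch. The collection and field names mirror the thread; the database name and the 90-day cutoff calculation are assumptions:

```python
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed deployment
coll = client["mydb"]["test"]                        # "mydb" is a hypothetical database

cutoff = datetime.now(timezone.utc) - timedelta(days=90)

# Keep only messages sent within the last 90 days and merge the result back in place.
coll.aggregate([
    {"$set": {"messages": {"$filter": {
        "input": "$messages",
        "as": "message",
        "cond": {"$gte": ["$$message.time_sent", cutoff]},
    }}}},
    {"$merge": {"into": "test", "on": "_id",
                "whenMatched": "merge", "whenNotMatched": "discard"}},
])

# Separately remove documents whose filtered array is now empty.
coll.delete_many({"messages": {"$size": 0}})
```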
null
[ "atlas-cluster" ]
[ { "code": "const response = await fetch(`mongodb+srv://uname:[email protected]/db_sample?retryWrites=true&w=majority`);\n if (!response.ok) {\n const message = `An error occurred: ${response.statusText}`\n window.alert(message)\n return;\n } \n", "text": "I tried using basic fetch and connecting my react app with MongoDB but it is throwing error. I tried all of possible tutorial but I can’t fix this please help.here is my code snipper", "username": "nut_craker" }, { "code": "", "text": "Hi @nut_craker and welcome to MongoDB community forums!!In order to assist you further, could you help us with few information regarding the error that you are facing:Regards\nAasawari", "username": "Aasawari" } ]
Error in fetching data URL scheme "mongodb+srv" is not supported
2023-09-23T01:50:26.146Z
Error in fetching data URL scheme &ldquo;mongodb+srv&rdquo; is not supported
314
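For context on the error above: a mongodb+srv:// string is a driver connection string, not an HTTP URL, so the browser's fetch() cannot consume it. The usual pattern is to keep the connection string on a backend (or expose the data through an HTTPS endpoint) and have the React app call that backend instead. A minimal backend-side sketch with PyMongo, reusing the placeholder credentials from the thread — the "items" collection name is hypothetical:

```python
from pymongo import MongoClient

# Runs on the server, never in the browser: the driver, not fetch(), understands
# the mongodb+srv scheme.
client = MongoClient(
    "mongodb+srv://uname:[email protected]/db_sample"
    "?retryWrites=true&w=majority"
)
docs = list(client["db_sample"]["items"].find().limit(5))
```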
null
[ "aggregation" ]
[ { "code": "Id(document's name) : {\n\nmany json for same ID\n\n} \n", "text": "i have a question: but how can i insert many json files inside one document collection? I have a ruby script connected with mongoDB which generate json files for each ID product. In mongo i should want a structure like this:how can i get this structure in ruby?DB’s name is “test_db” and collection’s name is “test_coll”", "username": "gioele_valori" }, { "code": "require 'mongo'\nrequire 'json'\n# Initialize MongoDB Connection\nclient = Mongo::Client.new(['localhost:27017'], database: 'test_db')\ncollection = client[:test_coll]\n\n# Generate JSON Data for IDs\nid1_data = [\n { \"key1\": \"value1\", \"key2\": { \"key5\": \"value3\", \"key6\": \"value6\"} },\n { \"key1\": \"value3\", \"key2\": \"value4\" }\n]\n\nid2_data = [\n { \"key1\": \"value5\", \"key2\": \"value6\" },\n { \"key1\": \"value7\", \"key2\": \"value8\" }\n]\n\n# Insert Data for IDs\ncollection.insert_one({ _id: 'id1', data: id1_data })\ncollection.insert_one({ _id: 'id2', data: id2_data })\n\n# Repeat for other IDs as needed\n\n# Closing the Connection\nclient.close\n", "text": "Hi @gioele_valori and welcome to MongoDB community forums!!If I understand your question correctly, you are trying to add a JSON file into the collection. If my understanding is correct, you can either make use of the MongoDB tools and use the mongoimport to import the json file.\nThe other way to solve at the application end would be by making use of the file system.This can be using usingBelow is the example code using insertMany.Please note that the following has been tested using Ruby Driver version 3.1.2 and would recommend to perform through testing before using in a production environment.\nHowever if this is not you are looking for, could you help me a sample document that you wish to insert into the collection.Regards\nAasawari", "username": "Aasawari" } ]
Json array in a single collection's document
2023-09-19T12:58:29.980Z
Json array in a single collection&rsquo;s document
368
null
[]
[ { "code": "new MongoNetworkError(\nMongoNetworkError: failed to connect to server [cluster0-shard-00-01.0kdvb.mongodb.net:27017] on first connect [Error: write EPIPE\n", "text": "I’m following this tutorial here and having issues getting it work.I’ve added a Heroku addon to get a static IP Address.\nI’ve tried Fixie, Fixie Socks, now I’m using Quotaguard (each on results in same error).\nI’m adding those static IPs to Atlas’s whitelisted IP addresses.The Mongo connection URL is added as a Heroku env var.In localhost everything works (I’ve whitelisted my Macbooks IP address). But on Heroku it fails.I know it is suggested to open the whitelist to ALL IP addresses. But that just is a terrible solution. There has to be a way to make this work.", "username": "George_N" }, { "code": "", "text": "Hi @George_N!Thanks for posting your issue here (and on other related topics; I see ya!). I’m sorry to hear that this has been a persistent issue for you.By chance, have you tried this possible solution from QuotaGuard’s documentation?", "username": "yo_adrienne" }, { "code": "", "text": "", "username": "Mohammad_Sadoghe" }, { "code": "", "text": "I also experienced the same issue. As the IP address is not whitelisted you can allow any IP.\nFor the production purpose that is not a good approach. You can do it by QuotaGuard Static. I didn’t try it. I just simply want to know how these stuffz works.", "username": "Pranav_Shastri" }, { "code": "", "text": "QuotaGuard Static does not work either. I tried it after having the same failures.", "username": "Simeon_Florea" }, { "code": "", "text": "Thank you a lot!!!1", "username": "Daniel_Martinez1" }, { "code": "", "text": "Hi, did you manage to get this sorted? I have the same problem.I followed this tutorial:Learn how to set up QGTunnel for MongoDB database connections on Heroku using QuotaGuard’s Static IP services. Follow these step-by-step instructions for a secure and reliable connection.It states to look up your mongoDB cluster shards at cluster’s shards here:\nSRV Record Lookup - Check Service Record (SRV) DNS records for any domain.here is one of my three shards (I can’t post more as I am a new user on this forum) came up with the following shards:tcp://cluster0-shard-00-00.yk6dyto.mongodb.net:27017I just can’t get this working, which is really frustrating. When you search around for this, i’ve found very little. People keep saying “allow access from anywhere” - this is the polar opposite of what i want to do.Any help greatly appreciated.", "username": "Matt_dunning" }, { "code": "", "text": "I facing the same issues… I cant get it up&running with heroku atlas and\nQuotaGuard Static IP’s", "username": "morpheus_N_A1" } ]
Heroku and MongoDB Atlas connection failing
2021-02-13T05:33:19.938Z
Heroku and MongoDB Atlas connection failing
7,536
null
[ "swift", "flexible-sync" ]
[ { "code": "/// Custom User Meta Data from Authentication\nclass Account: Object {\n @Persisted(primaryKey: true) var _id: ObjectId \n}\n\n/// Created on Authentication Trigger when User is created\nclass UserProfile: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var userId: String\n @Persisted var firstName: String\n @Persisted var lastName: String\n @Persisted var avatarImageUrl: String\n}\n\nclass Space: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var title: String\n @Persisted var buckets: List<Bucket>\n @Persisted var contributors: List<UserProfile>\n\n /// Is there a better way of handling this?\n @Persisted var ownerId: String\n @Persisted var owner: UserProfile?\n}\n\nclass Bucket: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var title: String\n @Persisted var entries: List<BucketEntry>\n @Persisted(originProperty: \"buckets\") var space: LinkingObjects<Space>\n\n @Persisted var ownerId: String\n @Persisted var owner: UserProfile?\n}\n\nclass BucketEntry: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var title: String\n @Persisted(originProperty: \"entries\") var bucket: LinkingObjects<Bucket>\n\n @Persisted var ownerId: String\n @Persisted var owner: UserProfile?\n}\nlet config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"profiles\") == nil {\n subs.append(QuerySubscription<UserProfile>(name: \"profiles\") {\n $0.userId == user.id /* plus profiles which are contributing to my spaces? */\n })\n }\n if subs.first(named: \"spaces\") == nil {\n subs.append(QuerySubscription<Space>(name: \"spaces\") {\n $0.ownerId == user.id || $0.contributors.userId.contains(user.id)\n })\n }\n if subs.first(named: \"buckets\") == nil {\n subs.append(QuerySubscription<Bucket>(name: \"buckets\") {\n $0.ownerId == user.id /* plus buckets in spaces I'm contributing to */\n })\n }\n if subs.first(named: \"bucketEntries\") == nil {\n subs.append(QuerySubscription<BucketEntry>(name: \"bucketEntries\") {\n $0.ownerId == user.id /* plus entries in buckets in spaces I'm contributing to */\n })\n }\n})\nif subs.first(named: \"buckets\") == nil {\n subs.append(QuerySubscription<Bucket>(name: \"buckets\") {\n $0.ownerId == user.id || $0.space.contributors.userId.contains(user.id)\n })\n}\n", "text": "Hi everyone,I’m looking for some best practices for platforms with collaboration features. I’m struggling with defining the queries for the flexible sync and with defining the permission roles for the schemas.So imagine a platform where:My main question is: How should I configure the flexible sync to sync as less data as possible. Means only the spaces, buckets, entries which the user owns or is contributor of.When I understand it correctly I’m not able to do this:So what is best practice to handle these kind of nested relationships?", "username": "Thomas_Flad" }, { "code": "", "text": "Another aspect of this I’m not sure of is how to define the roles/permissons in this scenario? I had a look on this article: https://www.mongodb.com/docs/atlas/app-services/sync/app-builder/device-sync-permissions-guide/#securityIt seems one approach could be to store the information of the collaboration in the user meta data. It’s fine for the workspaces but how can I pass this deeper to the bucket and bucket entries", "username": "Thomas_Flad" } ]
How to define flexible sync subscriptions for collaboration platform
2023-09-26T07:53:43.315Z
How to define flexible sync subscriptions for collaboration platform
283
null
[ "java", "many-to-many-relationship" ]
[ { "code": "@MappedEntity\npublic class Element{\n @Id\n @AutoPopulated\n private UUID id;\n @NotNull\n private ElementType elementType;\n @NotNull\n private String name;\n @NotNull\n private PointGeoJson pointGeometry;\n @Nullable\n private Set<Element> associations;\n ... other attributes ...\n}\n{\n \"id\": \"a581edd0-ee62-4b1d-b02a-5e3b68f7b234\",\n \"elementType\": \"PARKING_AREA\",\n \"name\": \"Area Element\",\n \"pointGeometry\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 37.7749,\n -122.4194\n ]\n },\n \"associations\": [\n {\n \"id\": \"b581edd0-ee62-4b1d-b02a-5e3b68f7b235\",\n \"elementType\": \"POINT\"\n },\n {\n \"id\": \"c581edd0-ee62-4b1d-b02a-5e3b68f7b236\",\n \"elementType\": \"POINT\"\n }\n }\n ],\n ... other attributes ...\n}\n{\n \"id\": \"b581edd0-ee62-4b1d-b02a-5e3b68f7b235\",\n \"elementType\": \"POINT\",\n \"name\": \"Point Element\",\n \"pointGeometry\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 37.7749,\n -122.4194\n ]\n },\n \"associations\": null,\n ... other attributes ...\n}\n", "text": "What would be the best approach to implement below use case in MICRONAUT DATA mongoDBThe element collection is used to store elements of different types. Area, points, segments etc.\nSo for e.g. Area can have multiple childs - points, segments etcThe document will have full element details and selective child details with full hierarchy.\nThe child will be present in database independently as well with complete details.Any update/delete/create to the element will require the changes to be propageted to the parent & child elements respectively.Would we also require mapping annotation usage between parent and child?ENTITYPARENTCHILDUse case Image", "username": "Samyak_Jain5" }, { "code": "", "text": "Hello @Samyak_Jain5 ,Welcome to The MongoDB Community Forums! I would recommend you to check different Design patterns discussed in blog mentioned belowA summary of all the patterns we've looked at in this seriesThe most common models being used in one to many relationships areYou can test them as per your application requirements and can take advantage of the flexible schema.\nPlease share the end goal that you are trying to achieve here.Let me know if you have any queries or face any issue, would be happy to help you!Regards,\nTarun", "username": "Tarun_Gaur" } ]
One to Many relation in the same collection
2023-09-05T11:43:04.217Z
One to Many relation in the same collection
376
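As a small illustration of the Extended Reference / Subset idea referenced above, applied to the element/association use case from the question: the child lives as its own document with full details, the parent embeds only the subset of child fields it needs, and a change to the child is fanned out to the parents that reference it. This is a PyMongo sketch; the database and collection names are assumptions:

```python
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed deployment
elements = client["mydb"]["element"]               # hypothetical namespace

# Child element stored independently with full details.
child_id = ObjectId()
elements.insert_one({"_id": child_id, "elementType": "POINT", "name": "Point Element"})

# Parent embeds only a subset of each child's fields (Extended Reference pattern).
elements.insert_one({
    "elementType": "PARKING_AREA",
    "name": "Area Element",
    "associations": [{"_id": child_id, "elementType": "POINT", "name": "Point Element"}],
})

# Propagate a change in the child to every parent that references it.
elements.update_one({"_id": child_id}, {"$set": {"name": "Renamed Point"}})
elements.update_many(
    {"associations._id": child_id},
    {"$set": {"associations.$[a].name": "Renamed Point"}},
    array_filters=[{"a._id": child_id}],
)
```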
null
[ "java", "python", "spark-connector", "scala" ]
[ { "code": "def write_to_filesink(batch_df, batch_id, collection, base_dir):\n batch_df.write\\\n .format('json')\\\n .mode('append')\\\n .save(f'{base_dir}/{collection}')\n\nbase_s3_path = s3://bucket/dir1/dir2\npartial_filesink_writter = partial(write_to_filesink, collection=collection, base_dir=base_s3_path)\n\nstreaming_df = spark.readStream\\\n .format('mongodb')\\\n .option('spark.mongodb.connection.uri', connection_uri)\\\n .option('spark.mongodb.database', db_name)\\\n .option('spark.mongodb.collection', collection_name) \\\n .option('spark.mongodb.change.stream.publish.full.document.only', 'true')\\\n .option(\"forceDeleteTempCheckpointLocation\", \"true\")\\\n .load()\n\nquery = streaming_df.writeStream \\\n .foreachBatch(partial_filesink_writter) \\\n .option('checkpointLocation', \\\n f'{base_dir}/{collection}/_checkpoint') \\\n .trigger(processingTime='10 seconds') \\\n .start()\n\nquery.awaitTermination()\n", "text": "Hi Community,We are trying to perform CDC(Changed Data Capture) and write that to S3 in JSON format, from all of our collections created in MongoDB Atlas(v4.4.23) deployment using Spark Structured Streaming. We are using PySpark in AWS Glue(v3.0) to run this Spark Streaming Job. We used mongo-spark-connector_2.12-10.1.1.\nAlso passed below jars to the streaming job.However the job is failing with below exception when its being executed in AWS Glue. I ran the similar streaming job in my local system, but I did not encountered any issue.java.lang.NoSuchMethodError: org.bson.conversions.Bson.toBsonDocument()Lorg/bson/BsonDocument;\nat com.mongodb.spark.sql.connector.read.MongoMicroBatchPartitionReader.getCursor(MongoMicroBatchPartitionReader.java:169)\nat com.mongodb.spark.sql.connector.read.MongoMicroBatchPartitionReader.next(MongoMicroBatchPartitionReader.java:103)\nat org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:79)\nat org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:112)\nat org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)\nat scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:454)\nat org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)\nat org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)\nat org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)\nat scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:454)\nat scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:454)\nat org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)\nat org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)\nat org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)\nat org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:277)\nat org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1473)\nat org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)\nat org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:210)\nat org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)\nat org.apache.spark.scheduler.Task.run(Task.scala:131)\nat 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)\nat org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)\nat org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\nat java.lang.Thread.run(Thread.java:750)Providing below the code for more clarity.Would appreciate help in solving this issue.Thanks.", "username": "Mithun_Pal1" }, { "code": "java.lang.NoSuchMethodError", "text": "java.lang.NoSuchMethodError - typically indicates a version mismatch or conflict between libraries in AWS glue environment. possibly between the BSON library and the MongoDB driver or the MongoDB Spark Connector. In the past we have seen some similar incompatibility with AWS glue - https://jira.mongodb.org/browse/SPARK-399\nCan you try trying out different version and possibly opening a support ticket on AWS (if possible)", "username": "Prakul_Agarwal" }, { "code": "availableNow=True", "text": "Does mongo-spark-connector v10.1.1 supports availableNow=True Trigger type? Because I encountered some error while used this trigger type.", "username": "Mithun_Pal1" } ]
Write changed data from collections to S3 using Spark Streaming
2023-09-16T10:08:36.459Z
Write changed data from collections to S3 using Spark Streaming
486
null
[ "swift", "atlas-device-sync", "flexible-sync" ]
[ { "code": "{\n \"_id\": {\n \"$in\": \"%%user.custom_data.accountIds\"\n }\n}\n", "text": "When creating a user (email/password) through the Atlas UI, I noticed the User Creation Function does not get called (this is where I insert a record into the users collection). It does get called if the user creation happens through the swift sdk. Is this expected?In working around the above issue, I moved the User Creation code to a Log In Trigger. Sync Service Rules seem to be determined before the trigger happens. When I open the realm I am getting errors because the data needed in custom_data does not exist. If restart the swift app, it works fine. Again, this only happens the first time if a user does not have a record defined in the users table.Below is my Read Document Permissions assigned to a collection. When there is no user record, it throws an error … “state_Account”: (BadValue) $in needs an array. Is there anyway to write this rule so I can handle a case when accountIds does not exist (return no records and not throw an error).", "username": "Robert_Charest" }, { "code": "", "text": "", "username": "Sudarshan_Muralidhar" } ]
New User Sync Questions
2023-09-23T17:40:21.261Z
New User Sync Questions
331
null
[]
[ { "code": "", "text": "How to upgrade a version to a cluster in Mongo with the creation of a replica without losing data while moving between the clusters", "username": "Ruth_Goldstein" }, { "code": "Upgrade Major MongoDB Version for a Cluster", "text": "Hello @Ruth_Goldstein ,Welcome to The MongoDB Community Forums! You should not lose data while you upgrade to a new version of MongoDB. In case you face any issues, I would advise you to bring this up with the Atlas support team or connect with support via the in app chat support available in the Atlas UI. They may be able to check if anything on the Atlas side could have possibly caused this issue. In saying so, if a chat support is raised, please provide them with the following:Below is the recommended procedure on how to Upgrade Major MongoDB Version for a ClusterIn case you want to modify your current cluster, below documentation can help you in that caseRegards,\nTarun", "username": "Tarun_Gaur" } ]
How to upgrade a cluster version without losing data
2023-09-05T09:57:20.625Z
How to upgrade a cluster version without losing data
312
null
[ "node-js", "mongoose-odm" ]
[ { "code": " const serverSelectionError = new ServerSelectionError();\n ^\n\nMongooseServerSelectionError: Invalid message size: 1347703880, max allowed: 67108864\n at NativeConnection.Connection.openUri (D:\\web\\web developement\\Dance website pug\\node_modules\\mongoose\\lib\\connection.js:807:32)\n at D:\\web\\web developement\\Dance website pug\\node_modules\\mongoose\\lib\\index.js:342:10\n at D:\\web\\web developement\\Dance website pug\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:32:5\n at new Promise (<anonymous>)\n at promiseOrCallback (D:\\web\\web developement\\Dance website pug\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (D:\\web\\web developement\\Dance website pug\\node_modules\\mongoose\\lib\\index.js:1181:10)\n at Mongoose.connect (D:\\web\\web developement\\Dance website pug\\node_modules\\mongoose\\lib\\index.js:341:20)\n at Object.<anonymous> (D:\\web\\web developement\\Dance website pug\\app.js:10:10)\n at Module._compile (node:internal/modules/cjs/loader:1103:14)\n at Object.Module._extensions..js (node:internal/modules/cjs/loader:1157:10) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n '127.0.0.1:27017' => ServerDescription {\n _hostAddress: HostAddress { isIPv6: false, host: '127.0.0.1', port: 27017 },\n address: '127.0.0.1:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 2786731,\n lastWriteDate: 0,\n error: MongoParseError: Invalid message size: 1347703880, max allowed: 67108864\n at processIncomingData (D:\\web\\web developement\\Dance website pug\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:91:18)\n at MessageStream._write (D:\\web\\web developement\\Dance website pug\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:28:9)\n at writeOrBuffer (node:internal/streams/writable:389:12)\n at _write (node:internal/streams/writable:330:10)\n at MessageStream.Writable.write (node:internal/streams/writable:334:10)\n at Socket.ondata (node:internal/streams/readable:754:22)\n at Socket.emit (node:events:526:28)\n at addChunk (node:internal/streams/readable:315:12)\n at readableAddChunk (node:internal/streams/readable:289:9)\n at Socket.Readable.push (node:internal/streams/readable:228:10) {\n [Symbol(errorLabels)]: Set(0) {}\n }\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n logicalSessionTimeoutMinutes: undefined\n },\n code: undefined\n", "text": "The application successfully started on 27017\nD:\\web\\web developement\\Dance website pug\\node_modules\\mongoose\\lib\\connection.js:807", "username": "Devendra_Solanki" }, { "code": "", "text": "did you get sollution to this bug?", "username": "Brian_Sanzio" }, { "code": "", "text": "This error is probably occur because the mongo dB services is not started yet what you have to do is\nGo to control panel\nDouble click on services\nthen search for mongodb.exe and then click on start\nits for window user\nI hope it would be work.", "username": "Asma_Butt" }, { "code": "image: \"mongo\"\n\nvolumes:\n\n - data:/data/db\nbuild: ./server\n\nports:\n\n - \"4000:4000\"\n\nvolumes:\n\n - logs:/app/logs\n\n - ./server:/app\n\n - /app/node_modules\n\ndepends_on:\n\n - mongodb\nbuild: ./client\n\nports:\n\n - \"3000:3000\"\n\nenvironment:\n\n - CHOKIDAR_USEPOLLING=true\n\nvolumes:\n\n - ./client/src:/app/src\n\nstdin_open: true\n\ntty: true\n\ndepends_on:\n\n - 
server\n", "text": "/app/node_modules/mongoose/lib/connection.js:847\nsource_code-server-1 | const serverSelectionError = new ServerSelectionError();\nsource_code-server-1 | ^\nsource_code-server-1 |\nsource_code-server-1 | MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017After running the mongodb service i got the above error for docker\ndocker-compose.yamlversion: “3.8”services:mongodb:server:client:volumes:data:logs:dockerfileFROM node:18.8.0WORKDIR /appCOPY package*.json /app/RUN npm installCOPY . /app/EXPOSE 4000CMD [ “npm”, “start”]Please help me.", "username": "Tarek_Hossain" }, { "code": "mongodbmongoose.connect('mongodb://localhost:27017/blog')\nmongoose.connect('mongodb://127.0.0.1/blog')\n", "text": "Hey I think your problem is that you don’t have mongodb installed on your computer. Once you download it you can changeto", "username": "Abdelrahman_Khallaf" }, { "code": "", "text": "Thankyou this saved me haha", "username": "Rachel_Sabo" }, { "code": "", "text": "Ok, changing the URI worked but why?!?! Ok I’m using mongoose and found out why:from MS Edge", "username": "Leenard_Lacay" }, { "code": "", "text": "this worked for me. thank you", "username": "ROHAN_SAINI1" }, { "code": "", "text": "thank u, it was pain in my head", "username": "sachin_bhootali" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I am getting this error const serverSelectionError = new ServerSelectionError();
2022-05-09T08:31:34.359Z
I am getting this error const serverSelectionError = new ServerSelectionError();
25,395
null
[]
[ { "code": "", "text": "I’m fairly new to setting up and managing my own database for my application.Quick rundown - for now all that I really need is users being able to read specified data which I’m doing through https endpoints, currently with anonymous authentication.I’m not currently planning on writing any of the user’s application data which is stored locally. I would like to, as a start for understanding all of this as well as for metrics, keep a tab on how many unique users have used the application, maybe a running count of times they’ve opened it, and the most recent date they opened it.I also don’t really want to add an actual login (email/pass, fb, google) to the application startup if it’s not necessary.My first thought was adding in some way of generating unique ids or data in my application’s startup logic and putting that into the request body for the authentication token while still using anonymous authentication. Is this something that would make sense in my scenario?Obviously, there may well not be a foolproof way of differentiating because users could swap devices, networks, use VPNs, etc, but as I said, at least for now, I’m really just looking for a rough and dirty metrics mechanism.", "username": "Zachary_Mitchell" }, { "code": "", "text": "Hi @Zachary_Mitchell and welcome to MongoDB community forums!!Based on the above information you have shared, if I understand correctly, you are trying to implement an authentication method for your application without any specific login systems.\nIf I understand correctly, could you help me understand if you are using an Official MongoDB driver or using a custom https endpoint ?Warm Regards\nAasawari", "username": "Aasawari" } ]
What Authentication Method to Use?
2023-09-20T00:00:24.066Z
What Authentication Method to Use?
273
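For the rough-and-dirty metrics described above, a common pattern is to have the client generate one random identifier, persist it locally, and send it with every request; the backend then upserts a per-device counter. A sketch in Node, where the `metrics.appUsage` collection and the field names are illustrative choices, not from the thread:

```javascript
// Sketch: the client generates a random id once (e.g. crypto.randomUUID() or the
// platform equivalent), stores it locally, and sends it with every request; the
// backend upserts one document per device.
const { MongoClient } = require("mongodb");

async function recordAppOpen(uri, deviceId) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    await client.db("metrics").collection("appUsage").updateOne(
      { _id: deviceId },
      {
        $inc: { openCount: 1 },                    // running count of app opens
        $set: { lastOpenedAt: new Date() },        // most recent open
        $setOnInsert: { firstSeenAt: new Date() }, // only written when the device is new
      },
      { upsert: true }
    );
  } finally {
    await client.close();
  }
}

// Unique-user count is then simply: db.appUsage.countDocuments({})
```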
null
[ "aggregation", "node-js", "text-search" ]
[ { "code": "Connection.db.collection('users').aggregate([\n {\n \"$match\": {\n $and: [{\n $or: [\n { first_name: new RegExp(searchString, 'i') },\n { last_name: new RegExp(searchString, 'i') },\n { email: new RegExp(searchString, 'i') },\n { phone_no: new RegExp(searchString, 'i') }\n ]\n },\n {\n 'company_id': ObjectId(req.body.company_id),\n is_deleted: 0,\n user_type: process.env.EMPLOYEE_USER_TYPE\n }]\n }\n },\n\n {\n \"$lookup\": {\n \"from\": \"employees\",\n \"let\": {\n eId: \"$_id\"\n },\n \"pipeline\": [\n {\n $match: {\n $and: [\n {\n $or: [\n { employee_id: new RegExp(searchString, 'i') },\n { user_type_name: new RegExp(searchString, 'i') }\n ]\n },\n {\n $expr: {\n $eq: [\n \"$user_id\",\n \"$eId\"\n ]\n }\n }\n ]\n }\n }\n ],\n \"as\": \"other_details\"\n }\n },\n { $skip: offset },\n { $limit: perPage },\n ]).toArray((err, result) => {\n if (err) throw err;\n response = { status: status, msg: \"Company user list.\", data: result, total_page: total_page_number, total_record: total_record_count };\n res.json(response);\n });\n", "text": "I have two collection one users and another one employees, I want to search with single keyword in first_name,last_name,email,phone_no from the first collection and employee_id,user_type_name from second collection.I tried with the below query but it doesn’t work", "username": "Sayan_Sen" }, { "code": "", "text": "Hi @Sayan_Sen and welcome to MongoDB community forums!!I want to search with single keyword in first_name,last_name,email,phone_no from the first collection and employee_id,user_type_name from second collection.From the above posts, the quoted part seems to be unclear to me, it would be very helpful for us to triage and provide you with the assistance if you could provide the following informations:Thanks\nAasawari", "username": "Aasawari" } ]
How to search with the same value in two collections?
2023-09-25T07:00:55.918Z
How to search with the same value in two collections?
316
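One concrete problem in the posted pipeline: `eId` is declared in `let`, so inside the `$lookup` sub-pipeline it must be referenced as `$$eId`; `"$eId"` reads a (non-existent) document field, so the join never matches. A sketch of a corrected shape that applies the keyword once, after the join, so it can hit fields from either collection. Field names are taken from the post; `companyId` and `searchString` stand in for the request values:

```javascript
const { ObjectId } = require("mongodb");

const searchString = req.body.search;                 // as in the original handler
const companyId = new ObjectId(req.body.company_id);

const pipeline = [
  { $match: { company_id: companyId, is_deleted: 0, user_type: process.env.EMPLOYEE_USER_TYPE } },
  {
    $lookup: {
      from: "employees",
      let: { eId: "$_id" },
      pipeline: [
        { $match: { $expr: { $eq: ["$user_id", "$$eId"] } } }, // note the double $$
      ],
      as: "other_details",
    },
  },
  {
    // Apply the keyword once, after the join, so one $or covers both collections.
    $match: {
      $or: [
        { first_name: new RegExp(searchString, "i") },
        { last_name: new RegExp(searchString, "i") },
        { email: new RegExp(searchString, "i") },
        { phone_no: new RegExp(searchString, "i") },
        { "other_details.employee_id": new RegExp(searchString, "i") },
        { "other_details.user_type_name": new RegExp(searchString, "i") },
      ],
    },
  },
  // then $skip / $limit as in the original pipeline
];
```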
null
[ "queries", "flutter" ]
[ { "code": "", "text": "i have a atlas account and i have flutter app now i know that 500 connection can be made on the free MongoDB atlas database but if i have 1000 users then what will happen to my app i basically trying to build a chat app where when the chat page opens i with start listening for changes to MongoDB with mongo_dart package available for flutter.", "username": "Jaiswal_Pharma" }, { "code": "mongosmongod", "text": "Hi @Jaiswal_Pharma,Welcome to the MongoDB Community!i know that 500 connection can be made on the free MongoDB atlas databaseThe connection limit in Atlas represents the maximum number of simultaneous connections that the mongos or mongod will accept.The documentation also mentions other limitations that may be useful for you to be aware of. Please also note, the connection limit is per node for each cluster. Additionally, you may find the connection limits per cluster tier & class as documented here.i have 1000 users then what will happen to my appSo you’re asking what would happen if you have 1000 users on your application server. In such a scenario, you can leverage connection pooling, where multiple client connections share a smaller number of actual connections to the database. This can help you optimize the use of your database connections.Also, in some situations in which connections are opened but never closed can allow old connections to pile up and eventually exceed the connection limit. So, it’s recommended to keep a tab on the available metrics (specifically Connections) to see if the connections surge up instantaneously or are gradually building up.Additionally, refer to the MongoDB Atlas - Fix Connection Issues documentation, which includes some possible ways for an immediate fix and details.Hope this answers your questions.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "i am not sure yet here is my use case again i will connect my apps directly to mongodb atlas like if i have 1000 users then they will listen for changes and based on the change i will update the ui", "username": "Jaiswal_Pharma" } ]
Mongodb connection from Flutter, basically watch listeners
2023-09-24T10:33:55.423Z
Mongodb connection from Flutter, basically watch listeners
328
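Whatever driver ends up in front of the cluster, the key point behind the 1000-user question is that connections are pooled per process, not per user: a backend keeps one client (and therefore one small pool) and every app user shares it. An illustration in Node rather than Dart, since `mongo_dart`'s API differs; the `chat.messages` namespace and the `maxPoolSize` value are assumptions:

```javascript
// Reuse one MongoClient per process so many application users share a small
// connection pool instead of each opening their own connection to Atlas.
const { MongoClient } = require("mongodb");

// Assumption: the URI comes from the environment; maxPoolSize caps connections per process.
const client = new MongoClient(process.env.MONGODB_URI, { maxPoolSize: 20 });

async function watchMessages(onChange) {
  await client.connect();
  const coll = client.db("chat").collection("messages");
  // One change stream per interested consumer; all of them share the pooled connections.
  const stream = coll.watch([{ $match: { operationType: "insert" } }]);
  stream.on("change", onChange);
  return stream; // caller can stream.close() when the chat page is closed
}
```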
https://www.mongodb.com/…c_2_685x1024.png
[ "atlas-cli" ]
[ { "code": "atlas backup restore start pointInTimePOST: HTTP 400 (Error code: \"INVALID_JSON_ATTRIBUTE\") Detail: Received JSON for the pointInTimeUTCSeconds attribute does not match expected format. Reason: Bad Request. Params: [pointInTimeUTCSeconds]\n", "text": "Hi there.I think there is a mistake on this page.It clearly says in the documentation that point in time backups expects number of milliseconds. But the example shows the UNIX timestamp in seconds.Based on my own usage of the CLI, I actually think the example is correct and the documentation/naming of the option is wrong. The atlas backup restore start pointInTime does actually expect the epoch to be in seconds.\nCleanShot 2023-08-11 at 15.39.19@2x1530×2284 426 KB\nBtw, if you actually provide in milliseconds, you get this error:", "username": "Alex_Bjorlig" }, { "code": "", "text": "Hi @Alex_Bjorlig,Thanks for raising this one. I’ll just run a quick test on my own system to verify the expected measurement of time unit and then work with the team to have this updated to match.Regards,\nJason", "username": "Jason_Tran" }, { "code": "POST: HTTP 400 (Error code: \"INVALID_JSON_ATTRIBUTE\") Detail: Received JSON fo\natlas backup restore start pointInTime", "text": "Hi @Alex_Bjorlig,Just to double check, can you provide the atlas cli version you’re using and the command you used to get this error:\nPlease redact the specific project ID’s and any other sensitive information before posting hereBtw, if you actually provide in milliseconds, you get this error:Based on my own usage of the CLI, I actually think the example is correct and the documentation/naming of the option is wrong. The atlas backup restore start pointInTime does actually expect the epoch to be in seconds.There is a ticket now logged for this which I am monitoring and will update here when the appropriate changes are made.Regards,\nJason", "username": "Jason_Tran" }, { "code": "atlascli version: 1.10.0\ngit version: homebrew-release\nGo version: go1.20.6\n os: darwin\n arch: arm64\n compiler: gc\natlas backup restore start pointInTime --clusterName ${clusterName} --pointInTimeUTCMillis ${UTC_SECONDS!} --targetClusterName ${clusterName} --targetProjectId ${projectIdProduction} --output json\n", "text": "Sure:The command is", "username": "Alex_Bjorlig" }, { "code": "", "text": "@Alex_Bjorlig Apologies for delay. This is fixed in version 1.11 of atlas cli", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas CLI, restore pointInTime pointInTimeUTCMillis confusion
2023-08-11T13:47:21.855Z
MongoDB Atlas CLI, restore pointInTime pointInTimeUTCMillis confusion
516
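The confusion in the thread is purely seconds versus milliseconds. A tiny snippet that makes the two values explicit; per the posts above, atlas-cli versions before 1.11 accepted the seconds value despite the flag being named `pointInTimeUTCMillis`:

```javascript
// Date.now()/getTime() are milliseconds; divide by 1000 for a UNIX timestamp in seconds.
const restorePoint = new Date("2023-08-11T13:00:00Z");

const epochMillis = restorePoint.getTime();          // 1691758800000
const epochSeconds = Math.floor(epochMillis / 1000); // 1691758800

console.log({ epochMillis, epochSeconds });
// Per the thread, older atlas-cli releases expected the seconds value here even though
// the flag is called pointInTimeUTCMillis; this was reportedly fixed in atlas-cli 1.11.
```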
null
[ "aggregation", "compass" ]
[ { "code": "dayOfWeek: {$dateToString: {\n date: new Date(\"March 5, 2023\"),\n format: \"%w\"\n}},\n1$dateToString0", "text": "March 5, 2023 is clearly a Sunday. I double checked this on two calendars The output from this is 1, but it should, according to the $dateToString documentation, be 0.Instead, 1-7 are being used. Is this a bug or some setting on my local server?I need to be able to rely on this output when we move data to Atlas.What is correct?", "username": "Tim_Rohrer" }, { "code": "", "text": "Hi Tim,Definitely an interesting one. I’ll have to run some of my own tests to see whether this is expected or not. Just to understand how this is happening as well, could you advise the MongoDB server version you’re running?Regards,\nJason", "username": "Jason_Tran" }, { "code": " case 'w': // Day of week\n if (auto status = insertPadded(os, dayOfWeek(date), 1); status != Status::OK())\n return status;\n break;\n case 'u': // Iso day of week\n if (auto status = insertPadded(os, isoDayOfWeek(date), 1);\n status != Status::OK())\n return status;\n break;\nint TimeZone::dayOfWeek(Date_t date) const {\n auto time = getTimelibTime(date);\n // timelib_day_of_week() returns a number in the range [0,6], we want [1,7], so add one.\n return timelib_day_of_week(time->y, time->m, time->d) + 1;\n}\n", "text": "I had a peek at this as well, looking at the source code for the server, it seems to call outputDateWithFormat in date_time_support.h and the two options for days are:I assume this is the same expected output as:\nhttps://cplusplus.com/reference/ctime/strftime/I.e.\n|%u|ISO 8601 weekday as number with Monday as 1 (1-7)|4|\n|%w|Weekday as a decimal number with Sunday as 0 (0-6)|4|Checking the code for %w it calls dayOfWeek, the tooltip for this function returns 1-7 which is at odds of the documentation.dayOfWeek is defined as:Which seems to indicate that it shoudl be 0-6 but it’s being transformed to 1-7 by adding 1 to the output.Of course I’m not a C++ expert (or competent!) But if the documentation and other definitions are correct it seems that the server code could be misbehaving…/Edit I could also not see a unit test for the server for this scenario, but that could just be me missing it.", "username": "John_Sewell" }, { "code": "", "text": "5.0.8.Thank you for looking into this.", "username": "Tim_Rohrer" }, { "code": "", "text": "It didn’t occur to me that the server code was available for inspection:-)I also don’t know much C/C++, but that comment makes it clear to me. However, I would think most devs would be quite happy to start counting at 0. The standard for our org is Sunday=0, and so I’ll need to adjust every single record to accomplish this project because I’m relying on pulling day of the week info from a timestamp.But we’ll see what @Jason_Tran comes back with.Thank you for digging into this.", "username": "Tim_Rohrer" }, { "code": "", "text": "No probs, I should have linked but…The MongoDB Database. Contribute to mongodb/mongo development by creating an account on GitHub.//github.com/mongodb/mongo/tree/master/src/mongo/db/query/datetimeNow I assume that I’m looking in the right place but it looks about right!", "username": "John_Sewell" }, { "code": "", "text": "Thanks for your inputs @John_Sewell and @Tim_Rohrer Nice find on the code too John!I’ve raised this with the team to see if doc changes are required in this specific case. I’ll update this thread for any changes.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi All,Just posting an update here. 
The following DOCS ticket has been created : https://jira.mongodb.org/browse/DOCS-16360Thanks for all the details provided in finding this one Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Quick update here. Documentation was updated accordingly for this one (to show 1-7 return value in docs). Will close off this post.", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Incorrect results or problem with documentation
2023-08-29T04:20:26.364Z
Incorrect results or problem with documentation
706
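The behaviour reported above is easy to reproduce from mongosh: for Sunday 2023-03-05 the server returns `"1"` from `%w`, i.e. a 1-7 range with Sunday = 1, matching `$dayOfWeek` (and the docs were later updated to say so). If an application standard requires Sunday = 0, subtracting 1 is the simplest workaround. Note the `$documents` stage below needs MongoDB 5.1+; on older servers run the same `$project` against any non-empty collection:

```javascript
db.aggregate([
  { $documents: [{ ts: ISODate("2023-03-05T12:00:00Z") }] }, // a Sunday
  {
    $project: {
      fromFormat: { $dateToString: { date: "$ts", format: "%w" } }, // "1" per the thread
      fromOperator: { $dayOfWeek: "$ts" },                          // 1 = Sunday ... 7 = Saturday
      zeroBased: { $subtract: [{ $dayOfWeek: "$ts" }, 1] },         // 0 = Sunday, if that is the org standard
    },
  },
]);
```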
null
[ "aggregation" ]
[ { "code": "knnBetashouldmust", "text": "Noted from the document that https://www.mongodb.com/docs/atlas/atlas-search/knn-beta/ cannot be used inside compound operator, so I am wondering how I could use knnBeta and should/must operators together to get search results considering scores from both parts?", "username": "williamwjs" }, { "code": "knnBetashouldmustshouldmustcompoundknnBetacompoundfilterknnBeta", "text": "Hi @williamwjs,so I am wondering how I could use knnBeta and should/must operators together to get search results considering scores from both parts?The should and must options are used inside the compound operator. As you (and the documentation) has stated, the knnBeta operator cannot be used inside compound operator so this isn’t currently possible to my knowledge. In saying so, Atlas Vector Search is available as a Preview feature and it’s behaviour is subject to change in future. You can raise feedback regarding this if you would like.I am also wondering if you’ve tested with the filter option (example linked) within the knnBeta operator to see if this might work for use case?Regards,\nJason", "username": "Jason_Tran" }, { "code": "filterknnBeta", "text": "Hi @Jason_Tran , thank you for the quick reply!I’ve tested with the filter option inside knnBeta, but it would only work for filter, not for should/must, i.e., with or without them, the score does not change", "username": "williamwjs" }, { "code": "[\n {\n $search:\n {\n index: \"default\",\n knnBeta: {\n vector: <array-of-numbers-to-search>,\n path: <indexed-field-to-search>,\n k: <number of nearest neighbors to return>,\n filter: {\n compound: {\n must: [\n {\n // must clause \n },\n },\n ],\n should: [\n {\n // should clause\n },\n },\n ],\n },\n },\n },\n },\n },\n]\n", "text": "Hi @williamwjs , you should be able to achieve this by using a compound operator nested within the filter option:Hope this helps!", "username": "amyjian" }, { "code": "", "text": "Thank you @amyjian for the suggestion!I’ve tried that, but it does not affect the final score at all, with or without them", "username": "williamwjs" }, { "code": "", "text": "Hi @Jason_Tran @amyjian , may I ask if you have any other suggestions? Or is it a bug that is being fixed?Thank you!", "username": "williamwjs" }, { "code": "filtershouldcompound", "text": "Hi @williamwjs, this is the expected behavior of filter. While we do not support combining the score across kNNbeta and should clauses within compound at this time, this is something that is in active consideration for our roadmap. Depending on your needs, something like this might work for you.Let me know if this helps!", "username": "amyjian" }, { "code": "", "text": "@amyjian Thank you Amy! I will give that a try, and let me know when kNNbeta could support compound\nThank you!", "username": "williamwjs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use knnBeta together with should operator
2023-09-05T22:19:54.040Z
How to use knnBeta together with should operator
538
null
[]
[ { "code": "", "text": "Hi EveryOne,Actually, I need to configure live replication from AWS Postgresql to Atlas AWS hosted MongoDB but this is not one time activity so replication should be available all the time. so I will use postgresql and mongodb as well so we able to move data’s from postgresql to mongodb By using relation migrator with Continous approach not as snapshot approach. and I tested this with sample data’s but will it work in production server? because I read relational migrator not recommended for indefinite moving data’s in mongodb blog. so I want to know will we use relational migrator for this scenario? around 4 lakhs data will come to postgre per day so I need to clarify this?Thanks & Regards\nAravind\nData Engineer", "username": "Aravind_rajamani" }, { "code": "", "text": "Good morning, welcome to the MongoDB community.I would try to do it differently. Maybe using Kafka or some other streaming platform, taking the CDC and saving it to MongoDB. Maybe Stream Processing will help you.Build event-driven apps that react and respond in real time. Atlas Stream Processing is a unified developer experience for all your data, in motion or at rest.I really don’t know if Migrator is the best tool to use in this case. Let me know what you think ;D", "username": "Samuel_84194" }, { "code": "", "text": "I’m a beginner in MongoDB but I read about the Atlas Stream Processing so as of my knowledge Atlas Stream Processing (ASP) is primarily designed for real-time event processing within MongoDB Atlas. It allows you to capture and process changes that occur within your MongoDB collections and take actions based on those changes, such as sending notifications, performing analytics, or triggering other events. but in my case I need configure replication from PostgreSQL to MongoDB. initially all data should move from PostgreSQL to Atlas mongodb then if any data newly coming in PostgreSQL then those data should move to Atlas Mongodb and also if any previous data updated those update also should replicate in Atlas mongodb so in this scenario will it work? But I already tried Relational migrator with continous sink. it’s working but I read this not recommended for indifinte running so this is my concern. and will it able to move data contrinously from PostgreSQL to Atlas MongoDB by using Atlas Stream Processing?", "username": "Aravind_rajamani" }, { "code": "", "text": "In this case, you can use CDC to do this from PG to Atlas. Use some connector, for example debizium and use the mongodb connector for kafka.Example: How to Implement Change Data Capture for MySQL and Postgres | by Lewis Gavin | Rockset | Medium", "username": "Samuel_84194" } ]
Mongodb Relational Migrator
2023-09-25T10:44:41.188Z
Mongodb Relational Migrator
242
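If the Kafka/Debezium route suggested above is taken, the MongoDB side is just a consumer that applies each change event. The sketch below assumes Debezium's default JSON envelope (`payload.op`, `payload.before`/`payload.after`), an `id` primary-key column, and placeholder topic, database and collection names; it is an outline of the idea, not a production recipe (no batching, ordering or error handling):

```javascript
const { Kafka } = require("kafkajs");
const { MongoClient } = require("mongodb");

async function run() {
  const mongo = new MongoClient(process.env.MONGODB_URI);
  await mongo.connect();
  const target = mongo.db("reporting").collection("orders");

  const kafka = new Kafka({ brokers: ["localhost:9092"] });
  const consumer = kafka.consumer({ groupId: "pg-to-mongo" });
  await consumer.connect();
  await consumer.subscribe({ topic: "pgserver.public.orders" });

  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;                 // ignore tombstone messages
      const event = JSON.parse(message.value.toString());
      const payload = event.payload ?? event;     // with or without the schema wrapper
      if (payload.op === "d") {
        await target.deleteOne({ _id: payload.before.id });
      } else {                                    // c = insert, u = update, r = snapshot read
        await target.replaceOne({ _id: payload.after.id }, payload.after, { upsert: true });
      }
    },
  });
}

run().catch(console.error);
```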
null
[ "sharding" ]
[ { "code": "", "text": "Hi Team,I’m a beginner in Mongodb, so I need to perform sharding in the UAT server so First explain what I have. I have 2 separate server with Linux centos OS so I installed MongoDB 4.2.24. I tried to open the Mongos because of need to enable the sharding but I getting error in ip. Please find the error below:sudo mongos -f /etc/mongos.conf\nFailedToParse: invalid url [172.28.129.131:27017,172.28.136.157:27017]\ntry ‘mongos --help’ for more information again and again getting same error tell me the proper wayso intially mongos.conf file not there so I created a mongos.conf file with below code:systemLog:\ndestination: file\npath: “/var/log/mongos/mongos.log”\nprocessManagement:\nfork: true\nnet:\nbindIp: 0.0.0.0\nport: 27017\nsharding:\nconfigDB: configReplSet/configServer1:27017,configServer2:27017\nthen also getting same error only so I need to enable the shadring for the collection so I need to open the mongos and give a clustered nodes IP so how to do that?Thanks & Regards\nAravind", "username": "Aravind_rajamani" }, { "code": "\nsharding:\n configDB: \"<CONFIG_REPL_NAME>/<host1:27019,host2:27019>\"\n \nsystemLog:\n destination: file\n path: /var/log/mongos/mongos.log\n logAppend: true\n logRotate: reopen\n \nnet:\n port: 27017\n bindIp: \"<HOSTNAME>,localhost\"\n \nprocessManagement:\n fork: true\n \n", "text": "", "username": "tapiocaPENGUIN" } ]
When trying to add clustered nodes' IPs in mongos, getting error
2023-09-25T11:23:03.522Z
When trying to add clustered nodes' IPs in mongos, getting error
250
https://www.mongodb.com/…e_2_1024x865.png
[]
[ { "code": "", "text": "I followed the How to Use Azure Credits for MongoDB Atlas: Deploy a MERN Stack App tutorial. It works on my computer, but when I deploy to Azure I get an application error screen.And in the error log I get:\nimage1061×897 78.2 KB\n", "username": "Arturo_Proal" }, { "code": "", "text": "So, I changed in package.json the “main”: “index.js” to server.mjs and it gets through. No error logged, but I don’t get access to the site. I get a blank page with “Cannot GET /”", "username": "Arturo_Proal" } ]
Deploy a MERN Stack App tutorial - Azure
2023-09-25T08:55:35.907Z
Deploy a MERN Stack App tutorial - Azure
337
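A blank page with "Cannot GET /" after a MERN deploy very often means the Express app only registers API routes and never serves the built React client. A sketch of the usual fix, assuming the client build ends up in `client/build` (that path and the route layout are assumptions about this project); note that Azure App Service supplies the listening port through `process.env.PORT`:

```javascript
const path = require("path");
const express = require("express");

const app = express();

// ... API routes, e.g. app.use("/api", apiRouter) ...

// Serve the compiled client and fall back to index.html for client-side routes.
app.use(express.static(path.join(__dirname, "client", "build")));
app.get("*", (req, res) => {
  res.sendFile(path.join(__dirname, "client", "build", "index.html"));
});

// Azure App Service injects the port via PORT; hard-coding 4000 breaks there.
const port = process.env.PORT || 4000;
app.listen(port, () => console.log(`Listening on ${port}`));
```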
null
[ "swift" ]
[ { "code": "realm.syncSession?.suspend()\ntry! await app.currentUser!.refreshCustomData()\nrealm.syncSession?.resume()\n", "text": "Flexible Sync is configured and works … I store accountIds in custom_data.accountIds that limit which accounts a user can see. Uses can only see the ones defined there. As I understand, these changes (ie, add an additionl account that a user can see) will only take affect on the next sync session.How do I force a new sync session? (I am not worried about detecting this change on the client, I just want to run it when I know there is a change).I have tried the following (suggestion I have seen in other posts)…but this has not been successful in showing the newly added account. If I restart the app, it will usually cause the subscription to be reset and all works fine. How can I reset a subscription manually without making the user restart or login/logout?", "username": "Robert_Charest" }, { "code": "", "text": "Hi @Robert_Charest,By default, the SDK will allow multiple sessions to share a connection, so pausing & resuming a session does not guarantee that the permissions on the connection are refreshed. We have an improvement scheduled to refresh the permissions on each new session which will provide the behavior you’re looking for. This should be available in the next few weeks.it will usually cause the subscription to be reset and all works fineAlso, keep in mind that under the hood a change in permissions like this will trigger a client reset.", "username": "Kiro_Morkos" } ]
Force Session Restart in Swift
2023-09-24T01:31:55.533Z
Force Session Restart in Swift
301
null
[]
[ { "code": "", "text": "I want to create a collection for the list of universities that had a particular number of intake in a particular academic year(E.g: 2021-22). The problem encountered is that the universities that exist before a particular academic year (E.g: 2020-21) also exist in the next year(2021-22),and this results in a lot of repeated data. Is there a way to build a query that includes entries of and before the years searched? Is there any other way to handle this problem in MongoDB? Also,how do we handle if a university is closed?", "username": "Akshay_Bhandari" }, { "code": "", "text": "Probably the first step is to look at your data model.\nHow is your collection (or collections) structured?", "username": "Jack_Woehr" }, { "code": "", "text": "Hello Akshaya,Greeting of the day !!Before responding to your query, I want to be clarified to your question as I can see two things you have asked here.Do you need assistance on to create collection as per the requirement or you need to help on to create query?After your response, we may discuss about how to design your requirement and then the query.Looking forward to responding. Thanks , Have a good day .", "username": "biswajeetbasantaray" } ]
How to avoid repetition in a MongoDB collection?
2023-09-23T12:47:10.815Z
How to avoid repetition in a MongoDB collection?
206
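One way to avoid duplicating a university document for every academic year is to store each university once with its lifetime, and keep per-year intake as a small subdocument array (or a separate collection if it grows large). The field names below are illustrative; the query covers both asks in the question, returning universities operating in the searched year (including ones opened earlier) and handling closed universities:

```javascript
db.universities.insertOne({
  name: "Example University",
  openedYear: 2015,
  closedYear: null, // null (or missing) means still operating
  intakeByYear: [
    { academicYear: "2020-21", intake: 1200 },
    { academicYear: "2021-22", intake: 1350 },
  ],
});

// Universities that existed during 2021-22: opened on or before that year, and either
// still open or closed no earlier than that year.
const year = 2021;
db.universities.find({
  openedYear: { $lte: year },
  $or: [{ closedYear: null }, { closedYear: { $gte: year } }],
});
```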
null
[ "security" ]
[ { "code": "", "text": "I’m running the service inside a container. I followed the instructions here to create a user and password and update the mongod.conf file. If I try to log in with authentication but I use the wrong password, I’m blocked. If I use the right password, it works. But if I use no authentication at all I can get in and make changes.", "username": "amb83" }, { "code": "mongosh --eval 'db.serverCmdLineOpts()' --quiet", "text": "If you can login without credentials and make changes then authentication is not enabled.This is quite a different scenario from topic(The poster explains a “Server Information Disclosure”). Please create a new topic, post your docker config(compose file/docker run), mongo.conf.The output of mongosh --eval 'db.serverCmdLineOpts()' --quiet could be useful to see what is going on also.", "username": "chris" }, { "code": "mongodmongodmongodservicemongod --config /path/to/mongod.conf--authdb.serverCmdLineOpts()", "text": "Welcome to the MongoDB Community @amb83 !As @chris noted, your question is different than the older topic you replied on so I have moved this to a new topic focused on your question.It sounds like you have created a user but have perhaps missed a step like restarting the mongod process after enabling access control in the mongod configuration file. Another possibility would be that you have started mongod manually (without using a service definition), in which case you would also have to explicitly specify a path to a configuration file (eg mongod --config /path/to/mongod.conf) or include the --auth option to enable access control.Running db.serverCmdLineOpts() in the MongoDB shell will show command line options that were used, including either a configuration file or access control parameter.For more information on available security measures please see the MongoDB Security Checklist.Regards,\nStennie", "username": "Stennie_X" } ]
I have created users but access control is not being enforced
2023-01-09T19:01:01.489Z
I have created users but access control is not being enforced
1,337
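Two quick mongosh checks help confirm whether the running mongod actually picked up the edited configuration, which is the usual cause of the symptom above. The field paths come from the standard `db.serverCmdLineOpts()` output; if authorization is genuinely enabled, the unauthenticated insert at the end should be rejected:

```javascript
// Which config file (if any) did this mongod load, and is authorization enabled in it?
const opts = db.serverCmdLineOpts();   // needs a privileged user once auth is working
printjson(opts.parsed.config);         // path of the loaded config file, if one was used
printjson(opts.parsed.security);       // expect { authorization: "enabled" }

// With access control enforced, an unauthenticated shell should get
// "command insert requires authentication" here instead of succeeding:
db.getSiblingDB("test").coll.insertOne({ probe: 1 });
```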
null
[ "data-modeling" ]
[ { "code": "", "text": "We are currently in process of designing a Multi tenant application, we choose to go with Mongo db for data storage. Our application needs to store different entities, like user, employee, payroll etc., We want to build a scalable platform where clients can onboard/offboard faster, and manage the solutions easily. We also wanted to isolate the customer data logically.We are discussing few options on Mongo db how to place the data -Few queries -Many Thanks", "username": "Thirumalai_M" }, { "code": "{\"name\": \"Naomi\", \"team\": \"Engineering Communications\", ...}", "text": "Welcome to the community, @Thirumalai_M!Am I understanding correctly that the same application is to be used by multiple companies but with the same backend servicing all? Or will it be different deployments with a backend for each deployment? Generally you will want to store data for entities in collections so you’d want a users collection, an employees collection etc. Is the isolation of customer data so that you can give access to the data on a database level or are there other reasons? Will your clients be able to access the database directly or only via your application?So per your questions:I’m not sure what you mean with questions 3 and 4 - can you provide more information?", "username": "Naomi_Pentrel" }, { "code": "", "text": "Thanks Naomi for the detailed response. Your answer to my queries gives what I feed for No1. I have few queries on No1 and will clarify for No3.Basically we are a SaaS provider company developing system, the tenants can be onboarded and store their data. There are multiple entities in our application. We are using docker/container for service layer, but database wise we have some confusion. Whether separate database per client, separate collection per client or documents to be segregated using partition key.As per your answer for No2, I understand going with one collection per client will create data model issue. Hence going with single database with one collection per client is not feasible approach.Using a one collection for one entity (for Ex: Employee) and having its own index/partition will not provide isolation. Our application is a heath care system, should be HIPPA compliance.I understanding going with a single database per client will make sense for our scenario. Please correct me if I am wrong in my understanding.", "username": "Thirumalai_M" }, { "code": "", "text": "If you are using MongoDB Atlas, you can control access based on field values using Realm Rules. This would allow you to use one collection per entity while restricting access to the client that the data belongs to. So you could have a rule that a user can only see data that matches the company associated to that user.If you are not using Atlas you can control access on a database or collection level. So you could use that to restrict access for a database per client for your scenario.", "username": "Naomi_Pentrel" }, { "code": "", "text": "@Thirumalai_M - my two cents - since it’s going to be a Healthcare/HIPPA compliant - better to go with separate database per client, that gives the highest level of isolation and security and due to security & compliance: most clients usually may want to have the database deployed on their owned secured network (cloud or on-premise) only. At the same time - it also allows doing client specific customization’s later, if required. 
Now, the major drawbacks of this approach is the cost and maintenance overhead etc.Had this been a non-healthcare system - you could have even thought of having single collection for all clients but segregating their rows/documents based on additional “tenant_id” column. Although, it only provides only logical level of isolation but it’s very easier to maintain and very cost effective. Depending on the number of clients/customers - you can easily scale it by having multiple shard clusters across - may be 1 for each and/or group of customers combined into one cluster i.e. tenant_id = 1 to 10 or tenant_name = A to M and tenant_id = 11 to 20 or tenant_name = M to Z into another cluster.", "username": "ManiK" }, { "code": "", "text": "Hi @ManiK,\nI really like the idea what you’ve proposed. Could you also share some insights about how the connections will be handled in the application layer (i.e. Node js as the app) to different clusters, because I’ll now have many connection strings and based on clients’ request I have to connect to different DBs contained in different clusters.", "username": "iMS_Systems" } ]
Designing Mongo DB for Multi tenanted database
2020-09-24T07:53:53.197Z
Designing Mongo DB for Multi tenanted database
11,982
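On the last question, handling connections from one Node.js application to many tenant clusters, the usual shape is a small cache of `MongoClient`s keyed by connection string, created lazily and reused for the life of the process. The `tenantRegistry` lookup below is a placeholder for however tenants map to URIs (config file, environment variables, or a control-plane collection); this is a sketch, not a full implementation:

```javascript
const { MongoClient } = require("mongodb");

// Placeholder registry: in practice this could be loaded from config or a control-plane DB.
const tenantRegistry = {
  acme: { uri: process.env.ACME_URI, dbName: "acme" },
  globex: { uri: process.env.GLOBEX_URI, dbName: "globex" },
};

const clients = new Map(); // connection string -> Promise<MongoClient>

function getTenantDb(tenantId) {
  const { uri, dbName } = tenantRegistry[tenantId];
  if (!clients.has(uri)) {
    // Cache the connect() promise so concurrent requests share a single client/pool.
    clients.set(uri, new MongoClient(uri, { maxPoolSize: 10 }).connect());
  }
  return clients.get(uri).then((client) => client.db(dbName));
}

// Usage in a request handler:
// const db = await getTenantDb(req.tenantId);
// const employees = await db.collection("employees").find({ enabled: true }).toArray();
```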
null
[ "compass" ]
[ { "code": "", "text": "Is there a split screen feature in MongoDB Compass to view 2 or more collections side by side?", "username": "Anshul_Negi" }, { "code": "", "text": "Do this in MongoDB for VS Code", "username": "Jack_Woehr" } ]
Mongodb Compass Split Screen Feature
2023-09-24T14:48:19.000Z
Mongodb Compass Split Screen Feature
279
null
[ "aggregation" ]
[ { "code": "{\n\t$and[\t\n\t\t{\n\t\t “a”:1\n\t\t},\n\t\t{\n\t\t “b”:\n\t\t\t{$in:[3,4,5]}\n\t\t}\n\t]\n\t$sort:{\n\t\t{ d: 1}\n\t}\n}\n$in$in$in.sort()$in.sort()", "text": "Hi community,While I’m working on index I came across with a “strange” behavior of index.\nI’ve a simple query:Following the ESR rule written on the mongoDB official doc, my index is:{a:1, d:1, b:1}. ($in considered as range)But, when I execute the query it is used another index that is:{a:1, b:1, d:1}. ($in considered as equality)I’ve already checked the score with the explain() and the second index is better than the first one.To write my index with the $in operator I followed the little paragraph on the Official Doc that says:$in can be an equality operator or a range operator. When $in is used alone, it is an equality operator that does a series of equality matches. $in acts like a range operator when it is used with .sort() .So, my question is:The line: \" $in acts like a range operator when it is used with .sort()\" means that the same field used in $in must be used in $sort operator to consider $in as a range operator in the index?Thanks in advance", "username": "Luciano_Bigiotti" }, { "code": "a_1_c_1_b_1db.sample.find({a:1,c:{$in:[1,2,3]}}).sort({c:1}) stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { a: 1, c: 1, b: 1 },\n indexName: 'a_1_c_1_b_1',\n isMultiKey: false,\n multiKeyPaths: { a: [], c: [], b: [] },\n direction: 'forward',\n indexBounds: {\n a: [ '[1, 1]' ],\n c: [ '[1, 1]', '[2, 2]', '[3, 3]' ],\n b: [ '[MinKey, MaxKey]' ]\n }\n }\n },\na_1_c_1_b_1db.sample.find({a:1,b:{$in:[1,2,3]}}).sort({c:1})winningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { a: 1, c: 1, b: 1 },\n indexName: 'a_1_c_1_b_1',\n isMultiKey: false,\n multiKeyPaths: { a: [], c: [], b: [] },\n direction: 'forward',\n indexBounds: {\n a: [ '[1, 1]' ],\n c: [ '[MinKey, MaxKey]' ],\n b: [ '[1, 1]', '[2, 2]', '[3, 3]' ]\n }\n }\n },\na_1_c_1_b_1db.sample.find({a:1,c:{$in:[1,2,3]}}).sort({b:1})winningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'SORT_MERGE',\n sortPattern: { b: 1 },\n inputStages: [\n {\n stage: 'IXSCAN',\n keyPattern: { a: 1, c: 1, b: 1 },\n indexName: 'a_1_c_1_b_1',\n isMultiKey: false,\n multiKeyPaths: { a: [], c: [], b: [] },\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n a: [ '[1, 1]' ],\n c: [ '[1, 1]' ],\n b: [ '[MinKey, MaxKey]' ]\n }\n },\n {\n stage: 'IXSCAN',\n keyPattern: { a: 1, c: 1, b: 1 },\n indexName: 'a_1_c_1_b_1',\n isMultiKey: false,\n multiKeyPaths: { a: [], c: [], b: [] },\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n a: [ '[1, 1]' ],\n c: [ '[2, 2]' ],\n b: [ '[MinKey, MaxKey]' ]\n }\n },\n {\n stage: 'IXSCAN',\n keyPattern: { a: 1, c: 1, b: 1 },\n indexName: 'a_1_c_1_b_1',\n isMultiKey: false,\n multiKeyPaths: { a: [], c: [], b: [] },\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n a: [ '[1, 1]' ],\n c: [ '[3, 3]' ],\n b: [ '[MinKey, MaxKey]' ]\n }\n }\n ]\n }\n },\n", "text": "Hi @Luciano_Bigiotti and welcome to MongoDB community forums!!I’ve already checked the score with the explain() and the second index is better than the first one.could you please help me understand the above statement on how the the second index was performing better than that first index defined above. 
Also, what is the meaning of the scores here ?I tried to replicate the same in my local environment and here is my understanding:Case 1:\nwhen index is a_1_c_1_b_1 and the query is db.sample.find({a:1,c:{$in:[1,2,3]}}).sort({c:1}), in this case according to the ESR rule, $in is using the index for range. Hence the field values, a follows the Equality and c follows the Range in the ESR rule.\nThe following is explained in the below part of the explain():Case 2: Now, consider the same index a_1_c_1_b_1 and the query is db.sample.find({a:1,b:{$in:[1,2,3]}}).sort({c:1}). Here the $in is used as equality operator as MongoDB will look for exact matches of the values within the $in array.\nThe explain output here:shows that $in is being used a the equality operator in this case.Case 3: Now consider a case where index is a_1_c_1_b_1 and the query is db.sample.find({a:1,c:{$in:[1,2,3]}}).sort({b:1}), the $in here is used as the equality operator as shown in case 2 and the sort merge stage will be used in this case.The explain out for this is shown as:P.S. I have intentionally removed a few fields of the explain output to showcase only relevant fields.\nPlease feel free to reach out in case of any further questions.Warm regards\nAasawari", "username": "Aasawari" } ]
$in operator is an equality or a range?
2023-09-20T09:50:05.070Z
$in operator is an equality or a range?
297
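On "checking the score": rather than reading plan-cache scores, it is usually clearer to force each candidate index with `hint()` and compare `executionStats`. A sketch against a placeholder collection, using the query shape from the post:

```javascript
const query = { a: 1, b: { $in: [3, 4, 5] } };

// Force each candidate index in turn and compare the stats.
db.coll.find(query).sort({ d: 1 }).hint({ a: 1, d: 1, b: 1 }).explain("executionStats");
db.coll.find(query).sort({ d: 1 }).hint({ a: 1, b: 1, d: 1 }).explain("executionStats");

// Compare totalKeysExamined, totalDocsExamined, executionTimeMillis and the stages:
// { a: 1, d: 1, b: 1 } can return documents already ordered by d but scans a wider key
// range, while { a: 1, b: 1, d: 1 } uses tight bounds on the $in values and merges the
// per-value streams (SORT_MERGE) or sorts in memory to satisfy the sort on d.
```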
https://www.mongodb.com/…fde58b8e97f1.png
[ "replication" ]
[ { "code": "", "text": "Hi,\nSo I am able create custom Replica set on my localhost using configuration files.\nBy custom I mean I can set priority of node and also setup some delay in secondary.\nI am also able to connect to only individual nodes of replica in my Express app using different ports in Connection String.I wanted to ask how to do these same things when using Mongodb Atlas. I am not able to find any docs or anything about it. When I open my project in Mongodb Atlas Dashboard, I noticed it mentioned it is using 3 regions and that it is a Replica set but I am unable to connect with the individually using same Express code that I used when connecting with individual nodes of Replica set running on localhost.\nI am only able to connect to whole Mongodb Atlas cluster in Express.\nSo it will be really helpful if someone could guide or share these details about Mongodb Atlas.\nThanks!", "username": "Naman_Saxena1" }, { "code": "", "text": "Hi @Naman_Saxena1,\nCan you try as suggested from documentation:Regards", "username": "Fabio_Ramohitaj" }, { "code": "Metrics", "text": "So I am able create custom Replica set on my localhost using configuration files.\nBy custom I mean I can set priority of node and also setup some delay in secondary.You can see the priority of each node by going to the Metrics tab of a cluster and hovering over the node state icon. For example:\nimage1564×554 54.6 KB\nIn addition to Fabio’s comments, you won’t be able to set a specific numeric value for the priority of the Atlas cluster node’s. More information on unsupported commands in Atlas. The closest thing would be a multi region cluster in which you can change the order of the priority (highest to lowest) by region (example on linked page). However, this may not suit your use case.Can you explain the reasoning for the delay in the secondary you’ve mentioned?Regards,\nJason", "username": "Jason_Tran" }, { "code": "cfg = rs.conf()\ncfg.members[0].priority = 0\ncfg.members[0].hidden = true\ncfg.members[0].secondaryDelaySecs = 3600\nrs.reconfig(cfg)\n", "text": "Thanks @Fabio_Ramohitaj and @Jason_Tran for information.So use case is I am studying system design and practicing MongoDb.\nThere is a topic of setting some delay in secondary replica, it helps to create a time window during which we can recover data from a point in time before a destructive event occurred. This is valuable in scenarios where we want to recover from accidental data corruption or a malicious attack.I was able to setup delay when running on localhost using below commands, so just got curious if it is also possible in Mongodb Atlas.", "username": "Naman_Saxena1" }, { "code": "cfg = rs.conf()\ncfg.members[0].priority = 0\ncfg.members[0].hidden = true\ncfg.members[0].secondaryDelaySecs = 3600\nrs.reconfig(cfg)\nrs.reconfig()Replication", "text": "I was able to setup delay when running on localhost using below commands, so just got curious if it is also possible in Mongodb Atlas.Thanks for the details Naman. Unfortunately rs.reconfig() is one of the unsupported commands in Atlas (noted in the Replication section) so you won’t be able to execute the same on your Atlas cluster.", "username": "Jason_Tran" }, { "code": "", "text": "@Jason_Tran got it, thanks!", "username": "Naman_Saxena1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to create Custom Replica Set in Mongodb Atlas
2023-09-23T15:57:21.955Z
How to create Custom Replica Set in Mongodb Atlas
341
https://www.mongodb.com/…d783b57c8603.png
[]
[ { "code": "", "text": "Hi,I created my MongoDB account using my GitHub account. I just noticed now that the Lastname is “N/A” resulting to “Antonio N/A” in the certificate of completion.Another person already asked the same question. How to update Username whenever I Update My First Name or Last NameAnd the solution says to check this help page:But it did not solve problem because the help page states “First Name” and “Last Name” field is not updatable.Is there another way?", "username": "Antonio_N_A2" }, { "code": "", "text": "Hi @Antonio_N_A2,Could you please reach out to [email protected]? The team is based in the US, and they will assist you once they are back online.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Is there a way to change my First Name and Last Name of my MongoDB Account?
2023-09-23T19:30:54.552Z
Is there a way to change my First Name and Last Name of my MongoDB Account?
352
https://www.mongodb.com/…6_2_1024x403.png
[ "python" ]
[ { "code": "", "text": "\nimage1505×593 85.6 KB\n", "username": "Tr_n_B_o" }, { "code": "", "text": "Hey @Tr_n_B_o,Could you please reach out to [email protected]? The team is based in the US, and they will assist you once they are back online.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Sorry for late rely, I received the result report from [email protected]. Thank you for support !", "username": "Tr_n_B_o" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Getting a 401 Unauthorized error on Examity
2023-09-22T02:59:28.375Z
Getting a 401 Unauthorized error on Examity
388
null
[ "queries", "node-js" ]
[ { "code": "DhIGSRROpQBSONError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer", "text": "Hello,Im reviving an older (4 - 5 years old) mongodb. I have it running on Atlas and am able to do general queries ( find({}) ) etc. However, Im finding that the _id in these records are only 10 characters long DhIGSRROpQ and when I try to use it in the Node driver Im getting BSONError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer.From what I’m finding online, that doesn’t appear to be a valid ObjectId. Did MongoDB change how they handle OjbectIds over the last few years? And if so, is there anything that I can do to be able to use my db as is?If you need more information in order to lend a hand, Im happy to provide.Thank you in advance,Peter", "username": "Peter_Koruga" }, { "code": "_id_id", "text": "Are you sure the existing _id is an ObjectId ?The _id could just be a string.", "username": "chris" }, { "code": "interface StringIdDocument {\n _id: string;\n [keys: string]: any\n}\nconst customerCol = db.collection<StringIdDocument>(\"_users\")\n", "text": "It’s was a string. In the Node driver, the default is that its an ObjectId. So you have to let the query know its a string withThis is working.", "username": "Peter_Koruga" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Old MongoDB with 10 character _ids?
2023-09-14T23:35:23.111Z
Old MongoDB with 10 character _ids?
323
null
[ "atlas-search" ]
[ { "code": "this(_[_shark_]_)thetext,and thissharkan animalshark(_[_${someWord}_]_)sharkthis(_[shark_]_)thetext,and thisan animal[\n {\n $search: {\n index: \"text\",\n regex: {\n query: \"<someQuery>\",\n allowAnalyzedField: true,\n path: \"text\"\n }\n }\n }\n]\n", "text": "text : this(_[_shark_]_)thetext,and thissharkan animal // stringi want to search shark but ignore the pattern (_[_${someWord}_]_)so the result that i expect must be found, because the shark word found at 32nd letter.but if the text is this(_[shark_]_)thetext,and thisan animal , it must not found because i want to ignore the patternthis is my example query:", "username": "fice_N_A" }, { "code": "regexkeyword", "text": "If that pattern of text is always to be ignored in searches, then it can be ignored during indexing, using the regex token filter. Given that you’re looking for substrings within words, your custom analyzer using the regex token filter would need to start with the keyword tokenizer.", "username": "Erik_Hatcher" }, { "code": "", "text": "thank you very much sir, i can do it because of your answer. ", "username": "fice_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I search text but ignore some word pattern?
2023-09-19T10:30:28.220Z
How can I search text but ignore some word pattern?
370
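For reference, the index-time approach suggested above looks roughly like the definition below: a custom analyzer with the `keyword` tokenizer and a `regex` token filter that deletes the marker pattern before indexing. The analyzer name, field name and the exact pattern are assumptions to adapt; in particular, the regex shown only strips the `(_[_..._]_)` form and should be tested against real data:

```javascript
// Sketch of an Atlas Search index definition (paste into the JSON index editor).
const indexDefinition = {
  mappings: {
    dynamic: false,
    fields: {
      text: { type: "string", analyzer: "stripMarkers" },
    },
  },
  analyzers: [
    {
      name: "stripMarkers",
      tokenizer: { type: "keyword" }, // keep the whole field as one token for substring search
      tokenFilters: [
        // Remove every (_[_..._]_) marker before the value is indexed.
        { type: "regex", pattern: "\\(_\\[_.*?_\\]_\\)", replacement: "", matches: "all" },
      ],
    },
  ],
};
```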
null
[ "mongodb-shell" ]
[ { "code": "Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.6 MongoServerSelectionError: Server selection timed out after 2000 ms\"\n{\"t\":{\"$date\":\"2023-09-20T10:28:51.485+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:62675\",\"uuid\":{\"uuid\":{\"$uuid\":\"8c007f81-c179-4164-8a42-b631ed789239\"}},\"connectionId\":2,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-09-20T10:28:51.488+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn2\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:62675\",\"uuid\":{\"uuid\":{\"$uuid\":\"8c007f81-c179-4164-8a42-b631ed789239\"}},\"connectionId\":2,\"connectionCount\":0}}\n", "text": "I am using MongoDB “version”: “7.0.1”\nWhen I run the command mongosh it is giving me the below error:when i checked the log it seems it is connecting then it is getting disconnected, below is the log.", "username": "Feel_The_Bits-DK_N_A" }, { "code": "serverSelectionTimeoutMS=2000MongoServerSelectionError: Server selection timed out after 2000 ms\"\nserverSelectionTimeoutMSmongod", "text": "Hey @Feel_The_Bits-DK_N_A,Welcome to the MongoDB Community!serverSelectionTimeoutMS=2000Could you attempt connecting to MongoDB after extending the serverSelectionTimeoutMS? Currently, if MongoDB fails to connect within the 2-second window, it triggers an error.I suspect that there could be some other factors contributing to this issue. Could you check the mongod logs corresponding to the same timestamps? This additional information would help you in providing an understanding of the situation and identifying the root cause.Please feel free to reach out in case of further questions.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "{\"t\":{\"$date\":\"2023-09-20T10:28:51.485+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:62675\",\"uuid\":{\"uuid\":{\"$uuid\":\"8c007f81-c179-4164-8a42-b631ed789239\"}},\"connectionId\":2,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-09-20T10:28:51.488+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn2\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:62675\",\"uuid\":{\"uuid\":{\"$uuid\":\"8c007f81-c179-4164-8a42-b631ed789239\"}},\"connectionId\":2,\"connectionCount\":0}}\n", "text": "@Kushagra_Kesav I haven’t tried extending serverSelectionTimeoutMS, and when I hit command mongosh only two latest message appears in the log.", "username": "Feel_The_Bits-DK_N_A" }, { "code": "{\"t\":{\"$date\":\"2023-09-20T16:39:05.425+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:59396\",\"uuid\":{\"uuid\":{\"$uuid\":\"2c9780d9-bad9-401c-97e8-0b34c08d2abe\"}},\"connectionId\":3,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.451+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn3\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:59396\",\"client\":\"conn3\",\"doc\":{\"application\":{\"name\":\"mongosh 1.10.6\"},\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.7.0|1.10.6\"},\"platform\":\"Node.js v16.20.2, LE\",\"os\":{\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19045\",\"type\":\"Windows_NT\"}}}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.467+05:30\"},\"s\":\"I\", 
\"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:59397\",\"uuid\":{\"uuid\":{\"$uuid\":\"01101a06-77f2-43d2-8167-997c9e7f392b\"}},\"connectionId\":4,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.468+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:59398\",\"uuid\":{\"uuid\":{\"$uuid\":\"d21e5a85-f04d-4f95-bb91-02c134e4af9a\"}},\"connectionId\":5,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.469+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn4\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:59397\",\"client\":\"conn4\",\"doc\":{\"application\":{\"name\":\"mongosh 1.10.6\"},\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.7.0|1.10.6\"},\"platform\":\"Node.js v16.20.2, LE\",\"os\":{\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19045\",\"type\":\"Windows_NT\"}}}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.470+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn5\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:59398\",\"client\":\"conn5\",\"doc\":{\"application\":{\"name\":\"mongosh 1.10.6\"},\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.7.0|1.10.6\"},\"platform\":\"Node.js v16.20.2, LE\",\"os\":{\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19045\",\"type\":\"Windows_NT\"}}}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.473+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:59399\",\"uuid\":{\"uuid\":{\"$uuid\":\"f6bdd65d-553d-44c3-b4a4-75068c7c5274\"}},\"connectionId\":6,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.473+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":6788700, \"ctx\":\"conn4\",\"msg\":\"Received first command on ingress connection since session start or auth handshake\",\"attr\":{\"elapsedMillis\":3}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.474+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn6\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:59399\",\"client\":\"conn6\",\"doc\":{\"application\":{\"name\":\"mongosh 1.10.6\"},\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.7.0|1.10.6\"},\"platform\":\"Node.js v16.20.2, LE\",\"os\":{\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19045\",\"type\":\"Windows_NT\"}}}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.478+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":6788700, \"ctx\":\"conn6\",\"msg\":\"Received first command on ingress connection since session start or auth handshake\",\"attr\":{\"elapsedMillis\":3}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:05.567+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":6788700, \"ctx\":\"conn5\",\"msg\":\"Received first command on ingress connection since session start or auth handshake\",\"attr\":{\"elapsedMillis\":97}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:15.972+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:59406\",\"uuid\":{\"uuid\":{\"$uuid\":\"fee6145c-0fbd-4630-b8e1-6de4347593ff\"}},\"connectionId\":7,\"connectionCount\":5}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:15.973+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn7\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"127.0.0.1:59406\",\"client\":\"conn7\",\"doc\":{\"application\":{\"name\":\"mongosh 1.10.6\"},\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"5.7.0|1.10.6\"},\"platform\":\"Node.js v16.20.2, LE\",\"os\":{\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19045\",\"type\":\"Windows_NT\"}}}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:17.105+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn6\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:59399\",\"uuid\":{\"uuid\":{\"$uuid\":\"f6bdd65d-553d-44c3-b4a4-75068c7c5274\"}},\"connectionId\":6,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:17.105+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:59398\",\"uuid\":{\"uuid\":{\"$uuid\":\"d21e5a85-f04d-4f95-bb91-02c134e4af9a\"}},\"connectionId\":5,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:17.105+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn4\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:59397\",\"uuid\":{\"uuid\":{\"$uuid\":\"01101a06-77f2-43d2-8167-997c9e7f392b\"}},\"connectionId\":4,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:17.105+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn7\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:59406\",\"uuid\":{\"uuid\":{\"$uuid\":\"fee6145c-0fbd-4630-b8e1-6de4347593ff\"}},\"connectionId\":7,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:18.074+05:30\"},\"s\":\"W\", \"c\":\"NETWORK\", \"id\":4615610, \"ctx\":\"conn3\",\"msg\":\"Failed to check socket connectivity\",\"attr\":{\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"peekASIOStream :: caused by :: Connection reset by peer\"}}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:18.074+05:30\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn3\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":106841}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:18.080+05:30\"},\"s\":\"I\", \"c\":\"EXECUTOR\", \"id\":22989, \"ctx\":\"conn3\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"futurize :: caused by :: Connection reset by peer\"},\"remote\":\"127.0.0.1:59396\",\"connectionId\":3}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:18.080+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn3\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:59396\",\"uuid\":{\"uuid\":{\"$uuid\":\"2c9780d9-bad9-401c-97e8-0b34c08d2abe\"}},\"connectionId\":3,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:28.401+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:59414\",\"uuid\":{\"uuid\":{\"$uuid\":\"07bdefd9-f9cb-40f0-adff-0e0f592b492c\"}},\"connectionId\":8,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-09-20T16:39:28.402+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn8\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:59414\",\"uuid\":{\"uuid\":{\"$uuid\":\"07bdefd9-f9cb-40f0-adff-0e0f592b492c\"}},\"connectionId\":8,\"connectionCount\":0}}\n", "text": "@Kushagra_Kesav here is the log from same timestamp.", "username": "Feel_The_Bits-DK_N_A" }, { "code": "", "text": "can anyone suggest where it is going wrong?", "username": "Feel_The_Bits-DK_N_A" }, { "code": "", "text": "i am not getting the solution, still i am trying to find out the solution, community can you please in this regard", "username": "Feel_The_Bits-DK_N_A" }, { "code": "", "text": "try disabling your firewall", "username": "steevej" }, { "code": "", "text": "MongoDBCompass is getting connected this issue only occurring with mongosh command, i checked the windows defender firewall already disable", "username": "Feel_The_Bits-DK_N_A" }, { "code": "", "text": "try upgrading mongosh, may be there is some tls certificate issue", "username": "steevej" } ]
Mongodb command "mongosh" giving error even though server is running
2023-09-20T05:29:14.111Z
Mongodb command &ldquo;mongosh&rdquo; giving error even though server is running
489
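mongosh's generated connection string uses `serverSelectionTimeoutMS=2000` (visible in the first log line above), so one cheap experiment is to retry with a much longer timeout, either from mongosh with `mongosh "mongodb://127.0.0.1:27017/?serverSelectionTimeoutMS=30000"` or from the Node driver as sketched below. If the connection still fails, the timeout is not the problem:

```javascript
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://127.0.0.1:27017", {
  serverSelectionTimeoutMS: 30000, // wait up to 30s for server selection
  directConnection: true,
});

client.connect()
  .then(() => console.log("connected"))
  .catch((err) => console.error("still failing:", err.message))
  .finally(() => client.close());
```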
null
[ "database-tools" ]
[ { "code": "", "text": "C:\\Users\\Shihab>mongoimport C:\\Users\\Shihab>mongoimport C:\\Program Files\\MongoDB\\practisingJsonFiles\\customerDetails.json -d shop -c products\n2023-09-22T14:29:09.546+0530 error parsing command line options: error parsing positional arguments: provide only one file name and only one MongoDB connection string. Connection strings must begin with mongodb:// or mongodb+srv:// schemes\n2023-09-22T14:29:09.547+0530 try ‘mongoimport --help’ for more information", "username": "Minhajul_N_A" }, { "code": "", "text": "Put your file path in quotes or cd to the dir where your json file is residing and run the mongoimport command just passing the file name customerDetails.json without full path", "username": "Ramachandra_Tummala" }, { "code": "", "text": "If I understand correctly the documentation at\nyou are missing an equal sign between -d and shop and between -c and products.", "username": "steevej" } ]
I can’t import JSON files into my command prompt. Solve this please, anyone
2023-09-22T09:07:52.353Z
I can&rsquo;t import JSON files into my command prompt. Solve this please, anyone
295
null
[ "aggregation", "queries" ]
[ { "code": "const shotsPipeline = [\n {\n $match: {\n _id: { $nin: req?.alreadyViewedShots },\n restaurants: {\n $elemMatch: { $eq: restId },\n },\n },\n },\n {\n $group: {\n _id: '$VideoCategory', // Group by 'VideoCategory'\n shots: { $push: '$_id' }, // Collect all documents in an array for each 'VideoCategory'\n },\n },\n {\n $project: {\n shots: {\n $slice: ['$shots', 1], // Limit the array to 1 elements per 'VideoCategory'\n },\n },\n },\n {\n $project: {\n _id: 0, // Exclude the _id field at this stage\n shots: 1, // Include the 'shots' array\n },\n },\n {\n $unwind: '$shots', // Unwind the array to separate documents\n },\n ];\n", "text": "we are using this aggregation:Our requirement : First remove all alreadyViewedShots shots and find shots where the field “restaurants” in Shots object contain at-least one restId. In next stage it groups all shots based on “VideoCategory” field. in next stage it takes just 1 item from each group of VideoCategory, and then it unwinds the shots. In the first stage we are using restId to get the shots where the field “restaurants” in Shots object contain at-least one restId, we are looping through this pipeline multiple times(with different restIds) until we find a nonempty array of shots obtained from this pipeline, with just 30 documents in our collection the above logic takes min 700-800 milliseconds. we have also used indexing in “VideoCategory” and “restaurants” fields. We want to reduce the latency as our production environment will be having min 1000+ documents in the collection.can someone please give their inputs/advice on this.", "username": "Amogh_Saxena" }, { "code": "", "text": "Hi there,\nWill try to provide a viable solution (providing schema can provide better insight)\nSo breaking things at each\nAt the match step, we can have a separate flag field to view for alreadyViewedShots and can check if the index provides the same or different performance.\nAt the group step, we can combine group and project with $first to have only one shot.\nIf you don’t have a pagination issue then unwind at the API levelHope it helps!", "username": "Anshul_Negi" } ]
Need help in optimising the query
2023-09-23T17:49:45.959Z
Need help in optimising the query
236
https://www.mongodb.com/…5_2_1024x525.png
[ "aggregation", "indexes" ]
[ { "code": "[\n {\n // Match all docs, from bigbang to doomsday\n $match: {\n timestamp: {\n $gte: 0,\n $lt: 32472147600000\n }\n }\n },\n {\n $facet: {\n daily: [\n {\n $group: {\n _id: {\n year: { $year: { $toDate: \"$timestamp\" } },\n month: { $month: { $toDate: \"$timestamp\" } },\n day: { $dayOfMonth: { $toDate: \"$timestamp\" } }\n },\n count: { $sum: 1 }\n },\n },\n {\n $sort: {\n \"_id.year\": 1,\n \"_id.month\": 1,\n \"_id.day\": 1\n },\n },\n {\n $project: {\n _id: 0,\n year: \"$_id.year\",\n month: \"$_id.month\",\n day: \"$_id.day\",\n count: 1\n }\n }\n ],\n },\n },\n]\n{\n \"deviceKey\": \"18:A:B\",\n \"state\": {\n \"rotation_x\": 1.006,\n \"rotation_y\": -2.404\n },\n \"device\": {\n \"physicalId\": \"B\",\n \"id\": 418,\n \"thing\": {\n \"physicalId\": \"A\",\n \"id\": 108,\n \"enabled\": true,\n \"network\": {\n \"id\": 18\n }\n },\n \"enabled\": true\n },\n \"timestamp\": {\n \"$numberLong\": \"1635109200000\"\n },\n}\n", "text": "Hi,Can someone help me with this query?\nimage2742×1408 257 KB\nThe documents inside the collection are like this one:", "username": "Mario_Stefanutti" }, { "code": "", "text": "As a start, I’d just convert the field to a date once, there is no reason to do it three times, convert it before you use it in the facet. I assume you have three grouping facets, so extract the day, month year BEFORE you hit the facet so all stages can share the computed values.You’re also not going to be able to make use of proper date functions efficiently using a tick to store the time, I’d look at storing it as a proper date, do you use this query a lot?That fact you have an index is of no use in this scenario, your’re calculating a value and that is not indexed.", "username": "John_Sewell" }, { "code": "", "text": "But why does it say that the projection takes 378ms and the IXSCAN takes 1.2s, BUT the execution time takes 31 seconds?Thanks for the other suggestions, I’ll implement them.", "username": "Mario_Stefanutti" }, { "code": "[\n {\n // Match all docs, from bigbang to doomsday\n $match: {\n timestamp: {\n $gte: 0,\n $lt: 32472147600000\n }\n }\n },\n {\n // Convert timestamp to date once and extract day, month, and year\n $addFields: {\n date: { $toDate: \"$timestamp\" },\n year: { $year: { $toDate: \"$timestamp\" } },\n month: { $month: { $toDate: \"$timestamp\" } },\n day: { $dayOfMonth: { $toDate: \"$timestamp\" } }\n }\n },\n {\n $facet: {\n daily: [\n {\n $group: {\n _id: {\n year: \"$year\",\n month: \"$month\",\n day: \"$day\"\n },\n count: { $sum: 1 }\n },\n },\n {\n $sort: {\n \"_id.year\": 1,\n \"_id.month\": 1,\n \"_id.day\": 1\n },\n },\n {\n $project: {\n _id: 0,\n year: \"$_id.year\",\n month: \"$_id.month\",\n day: \"$_id.day\",\n count: 1\n }\n }\n ],\n },\n },\n]\n", "text": "I tried to implement your suggestions (I think) but now I get an additional FETCH in the explain that makes things a slower\nimage2742×1408 282 KB\n", "username": "Mario_Stefanutti" }, { "code": "", "text": "Ideally you want to hit the groups with sorted data, in this case with the data in that form I’m not sure what other improvements you can make to be honest.\nI’m tied up at the moment with work so can’t repro locally but from the screenshot you’re running this over 10M docs approx?\nComputing and grouping on that many records is going to be slow, if this is a common query then perhaps look at an alternative storage, bucket pattern perhaps and then you can easily count and group at a higher level?", "username": "John_Sewell" }, { "code": "", "text": "Try projecting 
the timestamp right after the match.", "username": "steevej" }, { "code": "[\n {\n // Match all docs, from the BigBang to the Doomsday\n $match: {\n year: {\n $gte: 0,\n $lt: 802701\n }\n }\n // year: {\n // $eq: 2023\n // },\n // month: {\n // $eq: 08\n // }\n },\n {\n $facet: {\n daily: [\n {\n $group: {\n _id: {\n year: \"$year\",\n month: \"$month\",\n day: \"$day\"\n },\n count: { $sum: 1 }\n }\n },\n {\n $sort: {\n \"_id.year\": 1,\n \"_id.month\": 1,\n \"_id.day\": 1\n }\n },\n {\n $project: {\n _id: 0,\n year: \"$_id.year\",\n month: \"$_id.month\",\n day: \"$_id.day\",\n count: 1\n }\n }\n ],\n }\n }\n]\n", "text": "Since I had other fields in the collection: year, month, day, hour, minute in addition to the raw timestamp, I changed the query to use these new fields (numerical). But what bothers me is that I always see a big difference between the sum of the times reported by the Explain and the total execution time. Do you know why is that? What is that hidden difference in time (33 seconds - 1.5 seconds)?\nimage2740×1422 268 KB\n", "username": "Mario_Stefanutti" }, { "code": "[\n {\n \"$match\": {\n \"timestamp\": {\n \"$gte\": 0,\n \"$lt\": 32472147600000\n }\n }\n },\n {\n \"$project\": {\n \"timestamp\": 1\n }\n },\n {\n \"$addFields\": {\n \"date\": { \"$toDate\": \"$timestamp\" },\n \"year\": { \"$year\": { \"$toDate\": \"$timestamp\" } },\n \"month\": { \"$month\": { \"$toDate\": \"$timestamp\" } },\n \"day\": { \"$dayOfMonth\": { \"$toDate\": \"$timestamp\" } }\n }\n },\n {\n \"$facet\": {\n \"daily\": [\n {\n \"$group\": {\n \"_id\": {\n \"year\": \"$year\",\n \"month\": \"$month\",\n \"day\": \"$day\"\n },\n \"count\": { \"$sum\": 1 }\n }\n },\n {\n \"$sort\": {\n \"_id.year\": 1,\n \"_id.month\": 1,\n \"_id.day\": 1\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"year\": \"$_id.year\",\n \"month\": \"$_id.month\",\n \"day\": \"$_id.day\",\n \"count\": 1\n }\n }\n ]\n }\n }\n]\n", "text": "Try projecting the timestamp right after the match.I tried it, but it gets a lot worse.\nimage2746×1422 287 KB\nThis is the modified query:", "username": "Mario_Stefanutti" }, { "code": "\"$project\": {\n \"timestamp\": 1 ,\n \"_id\" : 0\n }\n", "text": "one last suggestion before I give up.in your first $project add _id:0 to end up with", "username": "steevej" } ]
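For the daily bucket in particular, a shorter sketch that converts the millisecond timestamp once and truncates it, assuming MongoDB 5.0+ for $dateTrunc and a placeholder collection name (readings); monthly or yearly counts follow by changing the unit. The early $project keeps only the field the $group needs, the same idea as the projection suggestion above.
    db.readings.aggregate([
      { $match: { timestamp: { $gte: 0, $lt: 32472147600000 } } },
      { $project: { _id: 0, timestamp: 1 } },
      {
        $group: {
          _id: { $dateTrunc: { date: { $toDate: "$timestamp" }, unit: "day" } },
          count: { $sum: 1 },
        },
      },
      { $sort: { _id: 1 } },
    ]);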
Aggregate counts by day, month, year on a collection with an indexed timestamp field
2023-09-21T15:44:36.823Z
Aggregate counts by day, month, year on a collection with an indexed timestamp field
372
null
[ "aggregation", "queries", "node-js", "java", "php" ]
[ { "code": " [\n {\n $search: {\n index: \"boolean\",\n compound: {\n must: [\n {\n text: {\n query: [\"java\",\"php\",\"Newyork\",\"jack\"],\n path: [\n \"first_name\",\n \"last_name\",\n \"state\",\n \"city\",\n \"skillset.skill\",\n \"prefered_location\",\n \"email\",\n \"employment_details.job_role\",\n \"employment_details.job_skills\",\n ],\n },\n },\n ],\n should: [\n {\n text: {\n query: [\"java\",\"php\",\"Newyork\",\"jack\"],\n path: [\n \"first_name\",\n \"last_name\",\n \"state\",\n \"city\",\n \"skillset.skill\",\n \"prefered_location\",\n \"email\",\n \"employment_details.job_role\",\n \"employment_details.job_skills\",\n ],\n },\n },\n ],\n },\n },\n }, { $limit: 15 }\n ]\n", "text": "Hi Folks,\nAm looking for option to extract a particular document as per user desire , they wish to filter a document with particular condition like skill and country and last_name.kindly suggest your thoughts , thanks in Advance", "username": "Arun_Kumar12" }, { "code": "querymustmust", "text": "Using a query array of values, each clause generated becomes a “should” (optional) clause. You have that wrapped in a single must clause, so any of those must match, but not necessarily all of them. If you require all the query values to match, separate them individually into other items in your must array.", "username": "Erik_Hatcher" }, { "code": "employment_details.job_role[\n {\n $search: {\n index: \"ats_boolean\",\n compound: {\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"first_name\"],\n },\n },\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"last_name\"],\n },\n },\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"skillset.skill\"],\n },\n },\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"city\"],\n },\n },\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"state\"],\n },\n },\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"prefered_location\"],\n },\n },\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"email\"],\n },\n },\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"employment_details.job_role\"],\n },\n },\n must: {\n text: {\n query: [\"java\", \"php\", \"chennai\"],\n path: [\"employment_details.job_skills\"],\n },\n },\n },\n },\n },\n {\n $limit: 15,\n },\n]\n", "text": "employment_details.job_rolethis is also gave a any one or two matches", "username": "Arun_Kumar12" }, { "code": "mustcompoundtextmustscoreDetails.explain().explain()shouldmust", "text": "At first glance, the syntax is not quite right. There should be only one must under compound, which is an array of text clauses, rather than a bunch of musts.Also, there are two diagnostic tools worth exploring here: scoreDetails to see how the scores are computed and the .explain() on the aggregation call which shows how the query was interpreted. The .explain() is going to be helpful in your case. The must clauses are likely overly restrictive, as at least one of each of those query terms is going to have to match the specified field in order for a document to match. 
Perhaps make this a should array rather than must and let relevancy pull the best matching documents to the top.", "username": "Erik_Hatcher" }, { "code": "", "text": "Thanks for your suggestion but I have to find exact matching document not a relevant match document that’s why am looking for suggestion furthermore people are also suggesting SQL DB will achieve this Mongodb have some limitations, is that true ?", "username": "Arun_Kumar12" }, { "code": "", "text": "You have an array of queries, and you’re trying those against various fields - an exact match is a bit undefinable. If you can have a single query string for each specific field, you’ll be able to get as exact as you like. Where does the array of query strings come from? Is each element specific to a particular field? Also, what do you mean by “exact”? Is “chennai” exactly as it appears in the city field? Or is this case insensitive?", "username": "Erik_Hatcher" }, { "code": "", "text": "it means perfect match, my requirement I have to fetch a document as per my array of values.", "username": "Arun_Kumar12" }, { "code": "", "text": "I hear you about wanting a perfect match, but you have an array of values - it’s not specified which field those values should match, so you’re trying several. Do the elements in the array have a particular field they should match? If not, then a “perfect match” does not seem well defined. Can you provide some sample documents that should and should not match? In general, full text search is not about a perfect or exact match but about relevancy, with the best matching documents at the top of the list.You mentioned considering a SQL DB - how would that query be specified, out of curiosity?It’s an interesting search challenge you have proposed, so I’m here to help as best I can. 
Thanks for your patience ", "username": "Erik_Hatcher" }, { "code": "{\n \"_id\": {\n \"$oid\": \"64e5ae1d2682288e398a5c25\"\n },\n \"CandidateId\": \"SS46620\",\n \"first_name\": \"varathan\",\n \"last_name\": \"raja\",\n \"email\": \"[email protected]\",\n \"mobile_number\": \"9840176815\",\n \"gender\": \"Male\",\n \"state\": \"Chennai\",\n \"city\": \"Chennai\",\n \"pincode\": \"627501\",\n \"current_location\": \"Chennai\",\n \"willing_to_relocate\": true,\n \"prefered_location\": \"Chennai\",\n \"expected_ctc\": \"To be modified\",\n \"notice_period\": \"15 Days or less\",\n \"status\": \"Active\",\n \"prefered_mode_of_hire\": \"To be modified\",\n \"resume_url\": \"https://ss-ats-assets.s3.ap-south-1.amazonaws.com/candidate_resume/varathan-Resume_1692773917416.html\",\n \"skillset\": [\n {\n \"skill\": \"JAVA,.NET,PHP,CORELDRAW,SQL,VB\",\n \"years\": 2,\n \"months\": 6\n }\n ],\n \"employment_details\": [\n {\n \"company_name\": \"apx solution\",\n \"start_date\": \"2018-10-01T18:30:00.000Z\",\n \"end_date\": \"\",\n \"job_role\": \"chennai\",\n \"work_model\": \"To be modified\",\n \"ctc\": \"7.0 Lacs\",\n \"employment_type\": \"Permanent\",\n \"industry_type\": \"Software Product\",\n \"c2h_payroll\": \"apx solution\",\n \"job_skills\": null,\n \"is_current\": true\n },\n {\n \"company_name\": \"THIRIPURA chits p LTD\",\n \"start_date\": \"2010-10-01T18:30:00.000Z\",\n \"end_date\": \"2018-02-01T18:30:00.000Z\",\n \"job_role\": \"Regional Manager\",\n \"work_model\": \"To be modified\",\n \"ctc\": \"To be modified\",\n \"employment_type\": \"Permanent\",\n \"industry_type\": \"Software Product\",\n \"c2h_payroll\": \"THIRIPURA chits p LTD\",\n \"job_skills\": null,\n \"is_current\": false\n },\n {\n \"company_name\": \"csc computer center\",\n \"start_date\": \"2008-01-01T18:30:00.000Z\",\n \"end_date\": \"2018-10-01T18:30:00.000Z\",\n \"job_role\": \"kalakad\",\n \"work_model\": \"To be modified\",\n \"ctc\": \"To be modified\",\n \"employment_type\": \"Permanent\",\n \"industry_type\": \"Software Product\",\n \"c2h_payroll\": \"csc computer center\",\n \"job_skills\": null,\n \"is_current\": false\n }\n ],\n \"created_by\": {\n \"$oid\": \"64ccd040cface0ef8be4db92\"\n },\n \"is_deleted\": false,\n \"createdAt\": {\n \"$date\": \"2023-08-23T06:58:37.685Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-08-23T06:58:37.685Z\"\n },\n \"__v\": 0\n}\n", "text": "Here is the document for your reference,in this document have java, php, chennai, (chennai is a city), first moment i was used wildcard search , wildcard will full fill my requirement but i got unmatched documents, that’s why am reached here, thanks for your support Erik_Hatcher.", "username": "Arun_Kumar12" }, { "code": "skill", "text": "Thanks for the clear example. I understand the data. What I’m not quite clear on and I think will help a lot is … where do the query terms come from? Is the user adding three separate terms, and expects that they could/should/must be a match anywhere in the document (or a select subset of the fields as seems to be the case)? Or is the user typing in “java php chennai”? Would “java” match a location that pointed to “Java, VA”, for example? Java, Virginia - Wikipedia - or are the terms “java” and “php” only to be matching in the skill field?We can do much better than wildcard with a bit of fine tuning of what you’re doing here. 
Again, thanks for your patience as it’s an interesting challenge to 1) try to understand what you’re ultimately needing, and 2) how best to configure and query it to get there.What is your index configuration? Fully dynamic mappings, or overriding some/all fields settings?", "username": "Erik_Hatcher" }, { "code": " compound: {\n must: [\n text: { path: [ ... ], query: \"java\" },\n text: { path: [ ... ], query: \"php\" },\n text: { path: [ ... ], query: \"chennai\" }\n ]\n }\npath", "text": "After a few minutes of re-reading this entire thread, I am reminded that you did specify in the first message that the user is supplying an array of query terms. Ok, here’s a proposed solution:This is saying that all individual terms must match in one or more of the fields in the path array provided. Use the array that you initially have been using, with the change being to make a clause for each query term.How does this work for your needs?", "username": "Erik_Hatcher" }, { "code": "[\n {\n $search: {\n index: \"ats_boolean\",\n compound: {\n must: [\n {\n text: {\n path: [\n \"first_name\",\n \"last_name\",\n \"state\",\n \"city\",\n \"skillset.skill\",\n \"prefered_location\",\n \"email\",\n \"employment_details.job_role\",\n \"employment_details.job_skills\",\n ],\n query: \"java\",\n },\n text: {\n path: [\n \"first_name\",\n \"last_name\",\n \"state\",\n \"city\",\n \"skillset.skill\",\n \"prefered_location\",\n \"email\",\n \"employment_details.job_role\",\n \"employment_details.job_skills\",\n ],\n query: \"php\",\n },\n text: {\n path: [\n \"first_name\",\n \"last_name\",\n \"state\",\n \"city\",\n \"skillset.skill\",\n \"prefered_location\",\n \"email\",\n \"employment_details.job_role\",\n \"employment_details.job_skills\",\n ],\n query: \"chennai\",\n },\n },\n ],\n },\n },\n },\n]\n{\n \"_id\": {\n \"$oid\": \"6463784e5c2e1cf13bb8f78d\"\n },\n \"CandidateId\": \"SS39246\",\n \"first_name\": \"Sriba\",\n \"last_name\": \"\",\n \"email\": \"[email protected]\",\n \"mobile_number\": \"9003781504\",\n \"gender\": \"Female\",\n \"state\": \"Tamil Nadu\",\n \"city\": \"Chennai\",\n \"pincode\": \"600034\",\n \"current_location\": \"Chennai\",\n \"willing_to_relocate\": false,\n \"prefered_location\": \"\",\n \"expected_ctc\": \"15 LPA\",\n \"notice_period\": \"Immediate\",\n \"status\": \"In progress\",\n \"prefered_mode_of_hire\": \"C2H (contract to Hire) - Client side\",\n \"resume_url\": \"https://ss-ats-assets.s3.ap-south-1.amazonaws.com/candidate_resume/Sriba_Java%20Developer_Chennai.doc_1684240460376.msword\",\n \"skillset\": [\n {\n \"skill\": \"Chennai\",\n \"exp\": 49\n }\n ],\n \"employment_details\": [\n {\n \"company_name\": \"Mindtree\",\n \"start_date\": \"2019-02-03T18:30:00.000Z\",\n \"end_date\": \"2022-04-28T18:30:00.000Z\",\n \"job_role\": \"Java Developer\",\n \"work_model\": \"Remote\",\n \"ctc\": \"9 LPA\",\n \"employment_type\": \"Permanent\",\n \"industry_type\": \"IT\",\n \"c2h_payroll\": \"-\",\n \"job_skills\": \"Java\",\n \"is_current\": \"yes\"\n }\n ],\n \"created_by\": {\n \"$oid\": \"641462a847038cf77ecc7e81\"\n },\n \"is_deleted\": false,\n \"createdAt\": {\n \"$date\": \"2023-05-16T12:34:22.861Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-05-16T12:34:22.861Z\"\n },\n \"__v\": 0\n}\n`Php and java and Berlin not WashingtonDC not Jasper`", "text": "I appreciate your approach on this , i have worked on your query in mongodb compass it produced one of my old result, it gives any two of values will be present on document not all of them,here is the document was i 
got as a solutionwhy am into this, am working on Boolean search functionality in this case i have completed OR , NOT, except AND gate functionality, my client they will enter text to search their requirement like `Php and java and Berlin not WashingtonDC not Jasper` for this my code will extract AND key Words OR keywords NOT keywords then produce their results, am Succeed on OR , NOT gate but failed in AND condition. so now am working on hardcode functionality to bare this case,\nanyway thanks for your effort on this, i never imagine someone will take care of my task to help me out but you did it, thanks for that.", "username": "Arun_Kumar12" } ]
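A concrete form of the per-term must array proposed earlier, assuming the ats_boolean index, the field list from the posts, and a placeholder collection name (candidates); every term then has to match in at least one of the listed fields, which is the AND behaviour being asked for, while OR terms could instead go in a should array and NOT terms in mustNot.
    const paths = [
      "first_name", "last_name", "state", "city", "skillset.skill",
      "prefered_location", "email",
      "employment_details.job_role", "employment_details.job_skills",
    ];

    db.candidates.aggregate([
      {
        $search: {
          index: "ats_boolean",
          compound: {
            must: [
              { text: { query: "java", path: paths } },
              { text: { query: "php", path: paths } },
              { text: { query: "chennai", path: paths } },
            ],
          },
        },
      },
      { $limit: 15 },
    ]);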
Multiple Field Search to get single matched document
2023-09-22T07:24:50.067Z
Multiple Field Search to get single matched document
493
null
[ "golang" ]
[ { "code": "", "text": "I am having trouble figuring out how to programmatically create collections using the Golang driver. I can see that the Java driver has the API to create collections, but the Golang driver either doesn’t have such API, or it is buried somewhere I can’t find. Any help is appreciated.", "username": "Pranay_Singhal" }, { "code": "db := client.Database(\"dbName\")\ncommand := bson.D{{\"create\", \"collectionName\"}}\nvar result bson.M\nif err := db.RunCommand(context.TODO(), command).Decode(&result); err != nil {\n\tlog.Fatal(err)\n}\n", "text": "Hi @Pranay_Singhal,You can use Database.RunCommand to execute a command. For example:See also MongoDB create command for more information.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thanks @wan, this worked for me, and will solve my purpose.", "username": "Pranay_Singhal" }, { "code": "", "text": "func openCollection(dbClient *mongo.Client, collectionName string) *mongo.Collection {\nvar collection *mongo.Collection = dbClient.Database(“DatabaseName”).Collection(collectionName)\nreturn collection\n}", "username": "Mudasir_Ali" } ]
Creating collections via Go driver
2020-02-26T22:25:09.482Z
Creating collections via Go driver
5398
null
[ "aggregation" ]
[ { "code": "a_collection: \n\n{\n _id : ObjectId(‘63f381b50ee158b55cc82b1a’),\n\ta_name : ‘This is an example’,\n\tb_tags: [ \n\t\tObjectId(‘640624f7dace963b6d2865c3’),\n\t\tObjectId(‘640624f7dace963b6d2865c4’),\n\t\tObjectId(‘640624f7dace963b6d2865c5’),\n\t]\n}\n\nb_collection:\n \n{\n {\n _id: ObjectId(‘640624f7dace963b6d2865c3’),\n b_tag : 'This',\n },\n {\n _id: ObjectId(‘640624f7dace963b6d2865c4’),\n b_tag : 'demo',\n },\n {\n _id: ObjectId(‘640624f7dace963b6d2865c5’),\n b_tag : 'only',\n },\n\n}\nvar searchStr = 'demo';` // example string for matching\n\ntagSearchQueryResult = await a_collection.aggregate([ \n {\n // begin pipeline for \n $lookup: {\n \"from\": \"b_collection\",\n \"localField\": \"b_tags\",\n \"foreignField\": \"b_tags._id\",\n \"as\": \"b_tags\",\n \"pipeline\": [\n { \n \"$addFields\": { \"b_tags\": \"$b_tags.b_tag\" }\n }\n ]\n },\n },\n {\n $match: { \n $expr: { \"$in\": [ searchStr, \"$b_tag\" ] }\n },\n },\n {\n \"$project\": {\n \"id\": \"$_id\" ,\n \"a_name\": 1,\n }\n }\n ]);\nreturn { tagSearchQueryResult };\nforeignField: \"_id\"foreignField: \"_b_tags._id\"result (JavaScript): \nEJSON.parse('{\"tagSearchQueryResult\":[{\"_id”:{},”a_name”:”This is an example\",\"id\":{}}]}')\n", "text": "I’m having an issue with db.collection.aggregate in a MongoDB function where the results returned to me is missing the _id, it’s just blank. It’s a simple aggregate, with a $lookup, $match, and a $project, but anything I’ve tried to actually output the _id in the results is failing.here is a simple example of the schema and query I’m doing this with…My function is doing an aggregate to search in the joined table for a string passed in as a parameter.I do get back results as I expect, but without the _id of the document returned from the aggregate. I’m stumped why. I originally had foreignField: \"_id\" for the $lookup, changed it to foreignField: \"_b_tags._id\" thinking there was a conflict, but this did not fix the issue.The results I’m returned look like this…I’ve read and researched, found nothing that tells me what’s the cause.Thanks.", "username": "Josh_Whitehouse" }, { "code": "", "text": "I tried to run this on a local collection but it came up with some issues, can you put it in mongo playgound(https://www.mongoplayground.net/) to give a working example?", "username": "John_Sewell" }, { "code": "", "text": "Hi John,the issue seemed to be with the double quotes when I formatted the code here. I corrected it and put it in Mongo Playground for you. It seems to work differently in Mongo Playground vs. the results I get running it inside an Atlas Function. 
Here I do see the _id field, but it’s not coming back in the Atlas Function.Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "Josh_Whitehouse" }, { "code": "", "text": "Also, the $addFields should overwrite the existing one in the collection, but it doesn’t seem to.", "username": "Josh_Whitehouse" }, { "code": "", "text": "The pipeline is running on the joined collection so it does not find anything that matches that path, so the pipeline is running on data in “inventory” which does not have “b_tags.b_tag”, but it does have “b_tag”", "username": "John_Sewell" }, { "code": "", "text": "ok, but when I run this in an Atlas Function, I’m still losing the _id in the results.\nEJSON.parse(‘{“atlasSearchQueryResult”:[{“_id”:{},“event_name”:“demo”}]}’)", "username": "Josh_Whitehouse" }, { "code": "", "text": "It’s working fine in the playground, only when being run in the Atlas Function is it failing.I’m converting the aggregate with a toArray() method.", "username": "Josh_Whitehouse" }, { "code": "exports = async function(arg){\n // This default function will get a value and find a document in MongoDB\n // To see plenty more examples of what you can do with functions see: \n // https://www.mongodb.com/docs/atlas/app-services/functions/\n\n // Find the name of the MongoDB service you want to use (see \"Linked Data Sources\" tab)\n var serviceName = \"mongodb-atlas\";\n\n // Update these to reflect your db/collection\n var dbName = \"Lookup\";\n var collName = \"orders\";\n\n // Get a collection from the context\n var collection = context.services.get(serviceName).db(dbName).collection(collName);\n\n var results = await collection.aggregate([\n {\n \"$lookup\": {\n \"from\": \"inventory\",\n \"localField\": \"b_tags\",\n \"foreignField\": \"_id\",\n \"as\": \"b_tags\",\n \"pipeline\": [\n {\n \"$addFields\": {\n \"b_tags\": \"$b_tags.b_tag\"\n }\n }\n ]\n }\n },\n {\n $match: {\n $expr: {\n \"$in\": [\n \"demo\",\n \"$b_tags.b_tag\"\n ]\n }\n }\n },\n {\n \"$project\": {\n \"id\": \"$_id\",\n \"a_name\": 1,\n \"b_tags\": 1\n }\n } \n]).toArray();\n\nreturn results\n\n};\n[{\n\t\"_id\": {\n\t\t\"$oid\": \"63f381b50ee158b55cc82b1a\"\n\t},\n\t\"a_name\": \"This is an example\",\n\t\"b_tags\": [{\n\t\t\"_id\": {\n\t\t\t\"$oid\": \"640624f7dace963b6d2865c3\"\n\t\t},\n\t\t\"b_tag\": \"This\"\n\t}, {\n\t\t\"_id\": {\n\t\t\t\"$oid\": \"640624f7dace963b6d2865c4\"\n\t\t},\n\t\t\"b_tag\": \"demo\"\n\t}, {\n\t\t\"_id\": {\n\t\t\t\"$oid\": \"640624f7dace963b6d2865c5\"\n\t\t},\n\t\t\"b_tag\": \"only\"\n\t}],\n\t\"id\": {\n\t\t\"$oid\": \"63f381b50ee158b55cc82b1a\"\n\t}\n}]\n", "text": "That’s really weird, I just tried it on Atlas and this is my Atlas function:And this is the output of running it:", "username": "John_Sewell" }, { "code": "", "text": "ok, i found the weird. in my function i’m calling aggregate on the same collection before I call this aggregate.\nThat aggregate is using $search and works fine. 
If I only run one or the other in the same function they work fine, it’s just when I run one after the other I get the “JSON.parse('{“tagSearchQueryResult”:[{”_id”:{},”a_name”:”This is an example\",“id”:{}}]}')\" results, with the empty _id field.", "username": "Josh_Whitehouse" }, { "code": "", "text": "For now, i’ll break this down into two separate Atlas Functions, but I’m curious to know why running aggregate in the same collection one after the other does result in the _id field missing.", "username": "Josh_Whitehouse" }, { "code": "", "text": "Turns out I need to do a Javascript deep copy when putting the results from the to aggregates together.", "username": "Josh_Whitehouse" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
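A minimal sketch of the deep copy mentioned in the resolution, assuming the EJSON global that Atlas Functions expose and hypothetical result arrays standing in for the two aggregation outputs; round-tripping through EJSON copies the documents while keeping ObjectId values intact, which is the kind of copy the poster found necessary before merging the two result sets.
    // hypothetical helper; searchResults and tagSearchQueryResult stand in for the two aggregation outputs
    const deepCopy = (docs) => EJSON.parse(EJSON.stringify(docs));

    const combined = [...deepCopy(searchResults), ...deepCopy(tagSearchQueryResult)];
    return combined;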
Mongodb Function: _id (ObjectId) missing in query aggregate results
2023-09-22T21:11:02.736Z
Mongodb Function: _id (ObjectId) missing in query aggregate results
432
null
[ "data-modeling" ]
[ { "code": "{\n _id: ObjectId(),\n userID1: \"userA_ID\",\n userID2: \"userB_ID\",\n sender: \"one of the users\",\n message: \"some text\",\n timestamp: time\n}\nuserID1db.find({ userID1, userID2 }).sort({ \"timestamp\" : -1 })\n .skip(offset).limit(limit)\n", "text": "Hi, in my application I’m developing I need to allow for chats between any 2 users. I have the following schema in mind for a Messages collection:userID1 will always be the first user to start the chat, I’ll save this for every Chat room consisting of 2 people. (EDIT: Should I use lexicographical ordering instead, with userID1 coming before userID2?)I’ll create an index on the timestamp field so I can sort it in reverse, and then I can do the following to get the data every time the chat room is loaded (with pagination as you scroll up):And this I think should give me my intended behavior.Appreciate any advice, thanks.", "username": "Ajay_Pillay" }, { "code": "db.find({ userID1, userID2 }).sort({ \"timestamp\" : -1 })", "text": "db.find({ userID1, userID2 }).sort({ \"timestamp\" : -1 })That particulate query will be better serve with a compound index on userID1,userID2,timestamp. See Performance Best Practices: Indexing | MongoDB Blog for the specifics.", "username": "steevej" }, { "code": "", "text": "In your schema you might want also want to add a field “sent_by” or something similar, this way you could moderate / find messages by users with a search feature.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Thanks for pointing this out, it’s a necessity actually. Otherwise I have no way of knowing who actually sent the message, oversight on my part.", "username": "Ajay_Pillay" }, { "code": "", "text": "I wonder if I should use lexicographical sorting to determine userID1 and userID2, with userID1 coming before userID2, instead of who initiates the chat first.", "username": "Ajay_Pillay" }, { "code": "", "text": "Could you merge userID1, userID2 and sender into just 2 fields?sender rather than userID1\nreceiver rather than userID2This would reduce the size of each document.", "username": "steevej" }, { "code": "db.find({ $or: [{ sender: userA, receiver: userB }, { sender: userB, receiver: userA }]}\n .sort({ timestamp: -1 }).skip(offset).limit(limit)\n", "text": "Thanks for the suggestion, that makes a lot of sense, then the accompanying query would be the following right?So I would then have to create a multikey index on the sender, receiver and timestamp fields?", "username": "Ajay_Pillay" }, { "code": "\n{ id: 123\n participants: ['user1', 'user2'],\n}\n{ sender: 'user1', \n message: 'Hello World', \n timestamp: time,\n converstationId: 123\n}\n", "text": "Another implementation you could do that would involve another collection is, when a new chat is created in a “Conversation” collection you can add the information:Then in the message instead of UserID1 and UserID2 you would just have “conversation_id”This way if the members of the group change you only have to change it once in the “Conversation” collection and all messages referencing the ID will see the changes.This would be similar to a One-to-Many with reference", "username": "tapiocaPENGUIN" }, { "code": "", "text": "That’s a nice approach! Thank you both for the insights and advice.I was wondering (which was my second question), is storing a huge amount of random chat data in a collection good practice? I say random because in the Messages collection the ordering will be jumbled up when different users communicate at different times. 
Although it has no effect on the end users, is that acceptable practice?A more traditional data structure would be perhaps to keep all this chat in an array within a document (but of course it’s subject to the 16MB BSON size limit), but logically this means there’s no way any messages are interleaved with other messages.", "username": "Ajay_Pillay" }, { "code": "{\n conversation_id: 12345,\n time: time,\n members: ['user1', 'user2'],\n messages: [\n {\n sender: 'user1', \n message: 'Hello World', \n timestamp: time\n },\n {\n sender: 'user1', \n message: 'Hello World', \n timestamp: time\n }],\n total_messages: 2\n}\n", "text": "In general I don’t believe that collections with a lot of documents is an issue. As long as your queries are indexed it shouldn’t be a problem.Blockquote\nA more traditional data structure would be perhaps to keep all this chat in an array within a document (but of course it’s subject to the 16MB BSON size limit), but logically this means there’s no way any messages are interleaved with other messages.MongoDB does have a bucket design pattern. In which you store related items in an array.You could have a field called “total_messages” that is the sum of all messages and once it hits a certain number it creates a second bucket so you stay within the 16MB limit and don’t have the massive arrays anti pattern.\nAlthough this may be more complicated than is required.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Thank you for the links and explanation. That’s a nice approach as well, but yes it comes with a massive array regardless, and queries become a little more complex.What we’re doing now without using the bucket design pattern is essentially merging all of these potential arrays into one collection. I think I shall be going ahead with what’s been discussed so far with the sender/receiver method.", "username": "Ajay_Pillay" }, { "code": "", "text": "If you are working on a chat application that is great, I think you should have the feature to add more users.", "username": "Jason_Farnandez" }, { "code": "\ndb.chats.aggregate(\n {\n /**\n * query: The query in MQL.\n */\n $match: {\n \"members\": {\n \"$all\": [\n ObjectId('63ad8631e0d6cfe452b80677'),\n ObjectId('63adadf287e482e6f128ec0e'), \n ]\n }\n },\n },\n {\n /**\n * Provide any number of field/order pairs.\n */\n $sort: {\n \"created_at\": -1\n }\n },\n {\n /**\n * specifications: The fields to\n * include or exclude.\n */\n $project: {\n \"_id\": 0,\n // \"members\": 1,\n \"result\": {\n $sortArray: {input: \"$message\", sortBy: {created_at: -1}}\n }\n }\n },\n)\nresult$limit$skip[\n {\n \"result\": [\n {\n \"uuid\": \"df359053-5b69-4fc3-ac42-57aaf70ec9d9\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00011\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:52:26Z\"\n }\n },\n {\n \"uuid\": \"0cfe04ab-ccf2-4eb9-8760-d3471f033ea6\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00010\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:52:15Z\"\n }\n }\n ]\n },\n {\n \"result\": [\n {\n \"uuid\": \"fe54bf97-5dad-4d3f-ada8-ea1f1ab31039\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00010\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:52:14Z\"\n }\n },\n {\n \"uuid\": \"9face81d-b989-4662-a090-2f8f498b1a7b\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00010\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:52:12Z\"\n }\n },\n {\n \"uuid\": 
\"8578ce16-f3ac-4ceb-b779-0c1f3e8fec2f\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00010\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:51:02Z\"\n }\n }\n ]\n },\n {\n \"result\": [\n {\n \"uuid\": \"ef6f12e0-b601-4377-8ff9-28921799985b\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00009\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:51:00Z\"\n }\n },\n {\n \"uuid\": \"6fcbcc3b-ff09-4e1c-9ce5-d73790911592\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00008\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:50:47Z\"\n }\n },\n {\n \"uuid\": \"fdde26cd-6688-466a-be61-37ac8e312a70\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00007\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:49:33Z\"\n }\n }\n ]\n },\n {\n \"result\": [\n {\n \"uuid\": \"fdde26cd-6688-466a-be61-37ac8e312a70\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00007\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:49:33Z\"\n }\n },\n {\n \"uuid\": \"fb72bef2-ceff-4e22-80cd-b54da6ba6f01\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00006\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:48:51Z\"\n }\n },\n {\n \"uuid\": \"3ffee1e6-8eb4-4a93-99a2-329576dc7754\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00005\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:48:34Z\"\n }\n },\n {\n \"uuid\": \"927726bd-acc9-461b-98db-9ce07cca5ff0\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00004\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:48:31Z\"\n }\n }\n ]\n },\n {\n \"result\": [\n {\n \"uuid\": \"927726bd-acc9-461b-98db-9ce07cca5ff0\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00004\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:48:31Z\"\n }\n },\n {\n \"uuid\": \"664cb8fb-0284-4ad2-af1f-d9458fb09c24\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00003\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:48:25Z\"\n }\n },\n {\n \"uuid\": \"460e8a5a-327f-47c9-a8c3-76772cca4cc7\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00002\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:22:03Z\"\n }\n },\n {\n \"uuid\": \"ed4e5060-45f9-4f4a-a6a0-e45eddbd64bc\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00002\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:22:03Z\"\n }\n }\n ]\n },\n {\n \"result\": [\n {\n \"uuid\": \"a45e6738-afb4-47aa-a8f4-fdf1d70eeb5f\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00002\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:22:03Z\"\n }\n },\n {\n \"uuid\": \"466b8569-12c4-4d06-8572-eb5999f0e4ae\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00002\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:22:03Z\"\n }\n },\n {\n \"uuid\": \"314b60eb-1eaf-4a72-a7a6-528ab474a0d9\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00002\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:22:03Z\"\n }\n },\n {\n \"uuid\": \"460e8a5a-327f-47c9-a8c3-76772cca4cc7\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00002\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:22:03Z\"\n }\n }\n ]\n },\n {\n \"result\": [\n {\n \"uuid\": 
\"a45e6738-afb4-47aa-a8f4-fdf1d70eeb5f\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00002\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:22:03Z\"\n }\n },\n {\n \"uuid\": \"4f56f58d-1b0c-48f9-b3b6-c63020068760\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00002\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:21:55Z\"\n }\n },\n {\n \"uuid\": \"4fb86042-93ef-4209-88ca-8c287f16a2f6\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"00001\",\n \"created_at\": {\n \"$date\": \"2023-01-10T19:20:30Z\"\n }\n },\n {\n \"uuid\": \"3a6dc43d-bdb5-41b9-be93-15ab389380a9\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"New Docu \",\n \"created_at\": {\n \"$date\": \"2023-01-10T18:44:01Z\"\n }\n }\n ]\n },\n {\n \"result\": [\n {\n \"uuid\": \"69b1dd00-8ecf-4ba3-8df4-2cbba3887505\",\n \"sender\": {\n \"$oid\": \"63adadf287e482e6f128ec0e\"\n },\n \"message\": \"88888888888888888888\",\n \"created_at\": {\n \"$date\": \"2023-01-10T16:01:22Z\"\n }\n },\n {\n \"uuid\": \"3ea03321-3c99-48b6-a7fd-2cc5fa514fb3\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"0000000000000\",\n \"created_at\": {\n \"$date\": \"2023-01-10T15:51:20Z\"\n }\n },\n {\n \"uuid\": \"c8e35012-1106-4e3f-b678-6040002d54b8\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"pppppp\",\n \"created_at\": {\n \"$date\": \"2023-01-10T15:40:24Z\"\n }\n }\n ]\n },\n {\n \"result\": [\n {\n \"uuid\": \"b78f1288-5363-4b0f-9bbe-ddf73f60fac1\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"Kio nn sass,,,\",\n \"created_at\": {\n \"$date\": \"2023-01-10T18:46:39Z\"\n }\n },\n {\n \"uuid\": \"94e3ae3a-eb2c-4ceb-b580-e97f4ec49aba\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"Kio nn sass,,,\",\n \"created_at\": {\n \"$date\": \"2023-01-10T18:44:57Z\"\n }\n },\n {\n \"uuid\": \"80c86174-9836-477b-b0d3-e4cb968b015a\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"Another messgee 2\",\n \"created_at\": {\n \"$date\": \"2023-01-10T18:44:45Z\"\n }\n },\n {\n \"uuid\": \"3a6dc43d-bdb5-41b9-be93-15ab389380a9\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"New Docu \",\n \"created_at\": {\n \"$date\": \"2023-01-10T18:44:01Z\"\n }\n },\n {\n \"uuid\": \"69b1dd00-8ecf-4ba3-8df4-2cbba3887505\",\n \"sender\": {\n \"$oid\": \"63adadf287e482e6f128ec0e\"\n },\n \"message\": \"i am fine thanks\",\n \"created_at\": {\n \"$date\": \"2023-01-10T16:01:22Z\"\n }\n },\n {\n \"uuid\": \"3ea03321-3c99-48b6-a7fd-2cc5fa514fb3\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"Buddy hhh.. 
\",\n \"created_at\": {\n \"$date\": \"2023-01-10T15:51:20Z\"\n }\n },\n {\n \"uuid\": \"c8e35012-1106-4e3f-b678-6040002d54b8\",\n \"sender\": {\n \"$oid\": \"63ad8631e0d6cfe452b80677\"\n },\n \"message\": \"LOL How are you \",\n \"created_at\": {\n \"$date\": \"2023-01-10T15:36:24Z\"\n }\n }\n ]\n }\n]\n$concatArray// MongoDB Playground\n// Use Ctrl+Space inside a snippet or a string literal to trigger completions.\n\nconst database = 'test3';\n\n// The current database to use.\nuse(database);\n\n// db.chats.find({\n// \"members\": {\n// \"$eq\": [\n// ObjectId('63ad8631e0d6cfe452b80677'),\n// ObjectId('63adadf287e482e6f128ec0e')\n// ]\n// }\n// }\n// )\n\n// db.chats.aggregate(\n// {\n// $match: {\n// \"members\": {\n// \"$all\": [\n// ObjectId('63ad8631e0d6cfe452b80677'),\n// // ObjectId('63adadf287e482e6f128ec0e'),\n// ObjectId('63b3c2b8ce51cf1fca0325ad')\n// ]\n// },\n// } \n// },\n// {\n// $sort: {\n// \"created_at\": -1\n// }\n// },\n// // {\n// // $project: {\n// // \"total\": 1,\n// // \"_id\": 0\n// // }\n// // },\n\n\n// {\n// $limit: 1\n// },\n// )\n// .sort({\"created_at\": -1})\n\n// db.chats.find(\n// {\n// \"members\": {\n// \"$all\": [\n// ObjectId('63ad8631e0d6cfe452b80677'),\n// ObjectId('63adadf287e482e6f128ec0e'),\n// // ObjectId('63b3c2b8ce51cf1fca0325ad')\n// ]\n// }\n// },\n// )\n\n// db.chats.find({\n// \"members\": {\n// \"$eq\": [\n// ObjectId('63ad8631e0d6cfe452b80677'),\n// ObjectId('63adadf287e482e6f128ec0e')\n// ]\n// }\n// }\n// )\n\n\ndb.chats.aggregate(\n {\n /**\n * query: The query in MQL.\n */\n $match: {\n \"members\": {\n \"$all\": [\n ObjectId('63ad8631e0d6cfe452b80677'),\n ObjectId('63adadf287e482e6f128ec0e'), \n ]\n }\n },\n },\n {\n /**\n * Provide any number of field/order pairs.\n */\n $sort: {\n \"created_at\": -1\n }\n },\n {\n /**\n * specifications: The fields to\n * include or exclude.\n */\n $project: {\n \"_id\": 0,\n // \"members\": 1,\n \"result\": {\n $sortArray: {input: \"$message\", sortBy: {created_at: -1}},\n }\n }\n },\n {\n /**\n * specifications: The fields to\n * include or exclude.\n */\n $project: {\n \"messages\": {\n '$concatArrays': [\n \"$result\", \"$result\"\n ]\n }\n }\n }\n)\nresultmessagesresult", "text": "Hi thanks for this… but i need to know how to get all the results by sortedI have used this… but thing is every things are in result block and I want it to in one array so I can fetch previous messages with $limit and $skip so I can send back to API of user message…\nhere is my sample dataI have made 3 messages collection document after 3 new document is created… i will make it 1000 in production…\nand I have use another thing called $concatArray but its adding only fields which I explicitly give exampleit will add two result to messages but not to all of that result\nhere my demo MongoDB collections [{ \"_id\": { \"$oid\": \"63bd85a54ffc8dc686c0df5b\" }, \"message\": [ - Pastebin.com", "username": "k_N_A1" }, { "code": "\ndb.chats.aggregate(\n {\n /**\n * query: The query in MQL.\n */\n $match: {\n \"members\": {\n \"$all\": [\n ObjectId('63ad8631e0d6cfe452b80677'),\n ObjectId('63adadf287e482e6f128ec0e'), \n ]\n }\n },\n },\n {\n /**\n * Provide any number of field/order pairs.\n */\n $sort: {\n \"created_at\": -1\n }\n },\n // {\n // /**\n // * specifications: The fields to\n // * include or exclude.\n // */\n // $project: {\n // \"_id\": 0,\n // // \"members\": 1,\n // \"result\": {\n // $sortArray: {input: \"$message\", sortBy: {created_at: -1}},\n // // \"$initialValue\": {},\n // }\n // }\n // },\n {\n /**\n 
* path: Path to the array field.\n * includeArrayIndex: Optional name for index.\n * preserveNullAndEmptyArrays: Optional\n * toggle to unwind null and empty values.\n */\n $unwind: {\n path: \"$messages\",\n // includeArrayIndex: 'string',\n // preserveNullAndEmptyArrays: boolean\n }\n },\n {\n /**\n * specifications: The fields to\n * include or exclude.\n */\n $project: {\n \"_id\": 0,\n \"messages\": 1\n // \"members\": 1,\n // \"result\": {\n // $sortArray: {input: \"$message\", sortBy: {created_at: -1}}\n // }\n }\n },\n {\n /**\n * Provide any number of field/order pairs.\n */\n $sort: {\n \"messages.created_at\": -1\n }\n },\n {\n /**\n * outputFieldN: The first output field.\n * stageN: The first aggregation stage.\n */\n $facet: {\n data: [ \n {$skip: 0},\n {$limit: 2} \n ],\n pagination: [\n {$count: \"count\"}\n ]\n }\n }\n // {\n // /**\n // * Provide any number of field/order pairs.\n // */\n // $sort: {\n // \"created_at\": 1\n // }\n // }\n // {\n // $unwind: '$message',\n // }\n \n\n // {\n // /**\n // * specifications: The fields to\n // * include or exclude.\n // */\n // $project: {\n // \"result\": {\n // $sortArray: {input: \"$message\", sortBy: {created_at: -1}}\n // }\n\n // }\n // }\n)\n", "text": "i solved this… but is it performance friendly", "username": "k_N_A1" }, { "code": "", "text": "if you allow me then I’ll be the first beta tester of your chat schema design, which I used on my website page.", "username": "eddy_Johns" }, { "code": "", "text": "Of course, you can use it. and if you can please improve it. thank you", "username": "k_N_A1" }, { "code": "", "text": "I appreciate you bringing this up because it is absolutely necessary. Otherwise, I would be unable to determine who actually delivered the message; this is a mistake on my part.", "username": "William_Json" } ]
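On the index question a few posts up: a single compound index can serve both branches of the $or, because each branch is an equality on sender and receiver followed by the timestamp sort, and it is an ordinary compound index rather than a multikey one as long as none of those fields are arrays. The messages collection name here is an assumption.
    db.messages.createIndex({ sender: 1, receiver: 1, timestamp: -1 });

    db.messages.find({
      $or: [
        { sender: userA, receiver: userB },
        { sender: userB, receiver: userA },
      ],
    })
      .sort({ timestamp: -1 })
      .skip(offset)
      .limit(limit);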
Advice for Chat schema design
2021-07-06T18:40:35.960Z
Advice for Chat schema design
30413
null
[ "node-js", "mongoose-odm" ]
[ { "code": "interface IPlan {\n accounts: {\n facebook?: string;\n instagram?: string;\n linkedin?: string;\n x?: string;\n };\n}\n\nconst PlanSchema = new mongoose.Schema<IPlan>(\n {\n accounts: {\n facebook: {\n type: String,\n },\n instagram: {\n type: String,\n },\n linkedin: {\n type: String,\n },\n x: {\n type: String,\n },\n },\n },\n { timestamps: true }\n);\n\nexport default mongoose.model<IPlan>('Plan', PlanSchema);\n\n", "text": "How do I add validation to this schema such that at least 1 property of account needs to be provided. It can be any 1 of the 4?", "username": "Gbemiga_Atolagbe" }, { "code": "import mongoose from 'mongoose';\n\ninterface IPlan {\n accounts: {\n facebook?: string;\n instagram?: string;\n linkedin?: string;\n socialmediasitehere?: string;\n };\n}\n\nconst PlanSchema = new mongoose.Schema<IPlan>(\n {\n accounts: {\n type: Object,\n validate: {\n validator: function (accounts) {\n // Check if at least one property of the accounts object is provided\n return (\n accounts.facebook ||\n accounts.instagram ||\n accounts.linkedin ||\n accounts.socialmediasitehere\n );\n },\n message: 'At least one account property must be provided.',\n },\n },\n // Add the rest of your schema\n }\n);\n\nconst PlanModel = mongoose.model<IPlan>('Plan', PlanSchema);\n\nexport default PlanModel;\n\n", "text": "This is what I got for form validation like this, I changed it with what variables (double check) I could find in your original to hopefully make it easier for you to fit it.It will need some changes of course (maybe) for your environment, as I don’t know what’s all in it. But without further ado, this is a template I use for this that validates if either of the fields have been filled.", "username": "Brock_Leonard" }, { "code": "PlanSchema.pre(\"save\", function () {\n const { facebook, instagram, linkedin, x } = this.accounts;\n if (\n facebook === undefined &&\n instagram === undefined &&\n linkedin === undefined &&\n x === undefined\n ) {\n throw new Error(\"At least one account property must be provided.\");\n }\n});\n", "text": "This doesn’t work but I wish it did. What I’ve done is use a pre which feels wrong, adding the validation inside the schema as you did above seems best.", "username": "Gbemiga_Atolagbe" }, { "code": "", "text": "I didn’t do so much debugging as I should have, I was more so pushing to what direction to aim towards.But if that works you got it, the issue is that there isn’t much documentation on this like there should be.The code I posted works in the environment it’s in, I’m just not familiar enough with your environment to really tailor it.", "username": "Brock_Leonard" }, { "code": "", "text": "I’ll keep trying things. Maybe convert it to an array of objects and try to validate the array, etc.You’re right on the docs part, the most I’m seeing is the basic use of required.", "username": "Gbemiga_Atolagbe" } ]
How do I add validation to an embedded schema?
2023-09-21T00:30:27.074Z
How do I add validation to an embedded schema?
356
https://www.mongodb.com/…_2_1024x346.jpeg
[]
[ { "code": "", "text": "\nimage1169×395 111 KB\n It was showing error of duplicate email but all email were different. Then I on my MongoDB server- indexes there was an email index I deleted it and then i was able to store multiple data.\nPlease explain what actually happened.???", "username": "Prajjval_Mishra" }, { "code": "", "text": "email_1 dup key {email: null}Seems you are trying to store another null value as the “email” field which is not allowed.Check this.", "username": "Kobe_W" } ]
Please explain why I am getting an E11000 duplicate error
2023-09-23T07:36:01.504Z
Please explain why I am getting an E11000 duplicate error
303
null
[ "queries" ]
[ { "code": "@GetMapping(value = \"/downloadFile/{fileId}\")\n public void downloadFile(@PathVariable(\"fileId\") String fileId, HttpServletRequest request, HttpServletResponse response) throws IOException {\n\n MongoDatabase database = mongoClient.getDatabase(\"database\");\n GridFSBucket gridFSBucket = GridFSBuckets.create(database);\n List<GridFSFile> files = new ArrayList<>();\n\n Bson query = Filters.eq(\"_id\", new ObjectId(fileId));\n gridFSBucket.find(query)\n .limit(5)\n .forEach(new Consumer<GridFSFile>() {\n @Override\n public void accept(final GridFSFile gridFSFile) {\n files.add(gridFSFile);\n }\n });\n\n String[] fileNameArray = files.get(0).getMetadata().get(\"filename\").toString().split(\"\\\\.\");\n\n String extension = fileNameArray[fileNameArray.length-1].toLowerCase();\n\n TrustManager[] trustAllCerts = new TrustManager[]{\n new X509TrustManager() {\n public X509Certificate[] getAcceptedIssuers() {\n return null;\n }\n public void checkClientTrusted(\n X509Certificate[] certs, String authType) {\n }\n public void checkServerTrusted(\n X509Certificate[] certs, String authType) {\n }\n }\n };\n\n try {\n SSLContext sc = SSLContext.getInstance(\"SSL\");\n sc.init(null, trustAllCerts, new SecureRandom());\n HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());\n } catch (Exception e) {\n }\n\n\n if(extension.equals(\"pdf\")) {\n response.setContentType(\"application/pdf\");\n }\n else {\n response.setContentType(\"application/octet-stream\");\n }\n response.setHeader(\"Content-Disposition\", String.format(\"inline; filename=\\\"\" + files.get(0).getMetadata().get(\"filename\") + \"\\\"\"));\n response.setContentLength((int) files.get(0).getLength());\n\n ObjectId objectFileId = new ObjectId(fileId);\n try (GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStream(objectFileId)) {\n int fileLength = (int) downloadStream.getGridFSFile().getLength();\n byte[] bytesToWriteTo = new byte[fileLength];\n downloadStream.read(bytesToWriteTo);\n GridFSFile gridFSFile = downloadStream.getGridFSFile();\n downloadStream.close();\n GridFsResource gridFsResource = new GridFsResource(gridFSFile);\n\n FileCopyUtils.copy(gridFsResource.getContent(), response.getOutputStream());\n }\n }\n", "text": "Hi all,I’m trying to update a download functionality developed some years ago.\nI tried to readapt my code but something is not working properly. First of all, I did not understand how to retrieve one file only (I know that the _id field is unique, so how can I avoid that “files.get(0)”?)\nMoreover, the last line in my code is not working anymore: the download seems to start but the browser shows a dark screen or keeps loading, but nothing happens in the end. Could you please help me? 
Any advice?Thanks and have a good day", "username": "No_Bi" }, { "code": "files.get(0)findOnefindGridFSFileGridFSFile file = gridFSBucket.find(query).limit(1).first();\n// Remove this code:\n/*\nTrustManager[] trustAllCerts = new TrustManager[]{\n new X509TrustManager() {\n public X509Certificate[] getAcceptedIssuers() {\n return null;\n }\n public void checkClientTrusted(\n X509Certificate[] certs, String authType) {\n }\n public void checkServerTrusted(\n X509Certificate[] certs, String authType) {\n }\n }\n};\n\ntry {\n SSLContext sc = SSLContext.getInstance(\"SSL\");\n sc.init(null, trustAllCerts, new SecureRandom());\n HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());\n} catch (Exception e) {\n}\n*/\n_idgridFSBucket.find(Filters.eq(\"_id\", new ObjectId(fileId))).first()GridFSDownloadStreamGridFsResourceMediaTypeContent-TypeContent-Disposition\"attachment\"\"inline\"@GetMapping(value = \"/downloadFile/{fileId}\")\npublic void downloadFile(@PathVariable(\"fileId\") String fileId, HttpServletResponse response) throws IOException {\n\n MongoDatabase database = mongoClient.getDatabase(\"database\");\n GridFSBucket gridFSBucket = GridFSBuckets.create(database);\n\n GridFSFile file = gridFSBucket.find(Filters.eq(\"_id\", new ObjectId(fileId))).first();\n\n String fileName = file.getMetadata().getString(\"filename\");\n String[] fileNameArray = fileName.split(\"\\\\.\");\n String extension = fileNameArray[fileNameArray.length - 1].toLowerCase();\n\n TrustManager[] trustAllCerts = new TrustManager[]{\n new X509TrustManager() {\n public X509Certificate[] getAcceptedIssuers() {\n return null;\n }\n public void checkClientTrusted(X509Certificate[] certs, String authType) {\n }\n public void checkServerTrusted(X509Certificate[] certs, String authType) {\n }\n }\n };\n\n try {\n SSLContext sc = SSLContext.getInstance(\"SSL\");\n sc.init(null, trustAllCerts, new SecureRandom());\n HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());\n } catch (Exception e) {\n // log the exception\n }\n\n MediaType mediaType;\n if (extension.equals(\"pdf\")) {\n mediaType = MediaType.APPLICATION_PDF;\n } else {\n mediaType = MediaType.APPLICATION_OCTET_STREAM;\n }\n response.setContentType(mediaType.toString());\n response.setHeader(\"Content-Disposition\", String.format(\"attachment; filename=\\\"%s\\\"\", fileName));\n response.setContentLength((int) file.getLength());\n\n ObjectId objectFileId = new ObjectId(fileId);\n try (GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStream(objectFileId);\n GridFsResource gridFsResource = new GridFsResource(file, downloadStream)) {\n\n FileCopyUtils.copy(gridFsResource.getInputStream(), response.getOutputStream());\n } catch (IOException e) {\n // log the exception\n }\n}\n@GetMapping(value = \"/downloadFile/{fileId}\")\npublic void downloadFile(@PathVariable(\"fileId\") String fileId, HttpServletResponse response) throws IOException {\n\n MongoDatabase database = mongoClient.getDatabase(\"database\");\n GridFSBucket gridFSBucket = GridFSBuckets.create(database);\n\n GridFSFile file = gridFSBucket.find(Filters.eq(\"_id\", new ObjectId(fileId))).first();\n if (file == null) {\n response.setStatus(HttpServletResponse.SC_NOT_FOUND);\n return;\n }\n\n String[] fileNameArray = file.getMetadata().get(\"filename\").toString().split(\"\\\\.\");\n String extension = fileNameArray[fileNameArray.length - 1].toLowerCase();\n\n response.setContentType(extension.equals(\"pdf\") ? 
\"application/pdf\" : \"application/octet-stream\");\n response.setHeader(\"Content-Disposition\", String.format(\"inline; filename=\\\"%s\\\"\", file.getMetadata().get(\"filename\")));\n response.setContentLength((int) file.getLength());\n\n SSLContext sslContext;\n try {\n sslContext = SSLContext.getInstance(\"TLSv1.2\");\n sslContext.init(null, new TrustManager[] { new X509TrustManager() {\n @Override\n public X509Certificate[] getAcceptedIssuers() {\n return null;\n }\n\n @Override\n public void checkClientTrusted(X509Certificate[] certs, String authType) {\n }\n\n @Override\n public void checkServerTrusted(X509Certificate[] certs, String authType) {\n if (certs.length > 0) {\n try {\n certs[0].checkValidity();\n } catch (CertificateException e) {\n throw new RuntimeException(\"Invalid certificate\", e);\n }\n }\n }\n } }, new SecureRandom());\n } catch (Exception e) {\n throw new RuntimeException(\"Unable to initialize SSL context\", e);\n }\n\n SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();\n HttpsURLConnection.setDefaultSSLSocketFactory(sslSocketFactory);\n\n ObjectId objectFileId = new ObjectId(fileId);\n try (GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStream(objectFileId)) {\n byte[] fileContent = IOUtils.toByteArray(downloadStream);\n IOUtils.write(fileContent, response.getOutputStream());\n } catch (IOException e) {\n throw new RuntimeException(\"Unable to read file content\", e);\n }\n}\nforEachIOUtilsGridFsResourceFileCopyUtils@GetMapping(value = \"/downloadFile/{fileId}\")\npublic void downloadFile(@PathVariable(\"fileId\") String fileId, HttpServletRequest request, HttpServletResponse response) throws IOException {\n\n MongoDatabase database = mongoClient.getDatabase(\"database\");\n GridFSBucket gridFSBucket = GridFSBuckets.create(database);\n\n // Query the database for the file\n Bson query = Filters.eq(\"_id\", new ObjectId(fileId));\n GridFSFile gridFSFile = gridFSBucket.find(query).first();\n\n if (gridFSFile == null) {\n response.sendError(HttpServletResponse.SC_NOT_FOUND);\n return;\n }\n\n String[] fileNameArray = gridFSFile.getMetadata().get(\"filename\").toString().split(\"\\\\.\");\n String extension = fileNameArray[fileNameArray.length - 1].toLowerCase();\n\n // Set response headers\n response.setContentType(extension.equals(\"pdf\") ? \"application/pdf\" : \"application/octet-stream\");\n response.setHeader(\"Content-Disposition\", String.format(\"inline; filename=\\\"%s\\\"\", gridFSFile.getMetadata().get(\"filename\")));\n response.setContentLength((int) gridFSFile.getLength());\n\n try (GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStream(new ObjectId(fileId));\n InputStream inputStream = new BufferedInputStream(downloadStream);\n OutputStream outputStream = new BufferedOutputStream(response.getOutputStream())) {\n\n inputStream.transferTo(outputStream);\n }\n}\n", "text": "Why are you disabling SSL for?If you just want one file, retrieve only one file instead of using files.get(0), you can use the findOne method instead of find, which returns a single GridFSFile instead of a cursor. Here’s an example:Regarding the issue with the download, it’s possible that the problem is related to the SSL configuration. The code seems to be disabling SSL certificate validation, which can be dangerous. Instead of disabling SSL validation, you could try configuring your server to use a valid SSL certificate. If that’s not an option, you could try removing the SSL configuration code and see if that resolves the issue. 
Here’s an example of how to remove the SSL configuration code:Here are some suggestions to improve your code:Avoid unnecessary database queries and simplify the code: If _id field is unique, you can directly retrieve the file using gridFSBucket.find(Filters.eq(\"_id\", new ObjectId(fileId))).first() instead of iterating over the result set and adding all the files to a list.Use try-with-resources for better resource management: Use try-with-resources statements for GridFSDownloadStream and GridFsResource to automatically close these resources after they are used.Handle exceptions appropriately: You have a catch block that does nothing, which can hide potential errors. At least, you should log the exception to know what happened.Set appropriate HTTP headers: You can use MediaType constants provided by Spring framework to set the Content-Type header instead of hard-coding the values. Also, you can use Content-Disposition header value \"attachment\" instead of \"inline\" to force the browser to download the file.Here’s the updated code:Here’s another solution for you, and this enhances functionalities for the .509Here’s a simplified version of the code that still takes into account the SSL validation:This version simplifies the code by removing the unnecessary forEach loop and list of files. It also uses a ternary operator to set the content type based on the file extension, and handles the case where the file is not found by setting the HTTP response status to 404.In addition, the SSL validation has been updated to only check the validity of the first certificate in the chain, because we care for, or want a bunch of them, and throw a runtime exception if it is not valid. This ensures that the SSL certificate is properly validated before downloading the file. Finally, the file content is read using IOUtils from Apache Commons IO and written directly to the response output stream, instead of first creating a GridFsResource object and copying the content using FileCopyUtils.And some other things to consider, here are some general tips to make the code more efficient as a whole:Avoid unnecessary database queries: In the current implementation, the code queries the database for the same file up to 5 times with different limits. Instead of this, you can query the database for the file once and store the result in a variable.Use try-with-resources: The current implementation does not use try-with-resources to automatically close resources. You can use try-with-resources to automatically close resources such as GridFSDownloadStream and GridFsResource.Use InputStream.transferTo() method: Instead of using FileCopyUtils.copy(), you can use the InputStream.transferTo() method to transfer the file data directly to the response output stream.Cache SSLContext: Creating an SSLContext is an expensive operation. You can cache the SSLContext instance and reuse it across requests.Here’s a modified version of the code incorporating these optimizations:Note that the above code assumes that the SSLContext instance has already been created and cached. If this is not the case, you can create and cache the SSLContext instance using a static initializer or a Singleton pattern.@No_Bi", "username": "Brock" }, { "code": "", "text": "Edited out some private conversation parts that I meant to write in a mute chat in discord to a friend in video call.", "username": "Brock" } ]
GridFsFile download
2023-04-12T12:27:33.719Z
GridFsFile download
570
null
[ "queries", "node-js", "replication", "mongoose-odm", "mongodb-shell" ]
[ { "code": "", "text": "I have a simple replica set setup in a single computer with 3 instances. If all 3 instances are running i can check the status with rs.status() in mongosh, however, if I stop one of the instances, I can’t check the status anymore because I get an error. Even if i do it through mongoose in a nodejs server, whenever I try to get the replica set status it just jumps straight to the exception in the title.I’ve tried with:\ndb.system.profile.find({}, {}, {enableUtf8Validation: false})and it didn’t work.in my Nodejs server i tried connecting through this line:\nmongoose.connect(“mongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/term_point?replicaSet=rs0”, { family: 4, enableUtf8Validation: false});it connects to the db but i still get the same exception as soon as i try to get the status.How can I solve this issue? thank you very much!Not sure if this is the reason but my computer’s locale is spanish, however, computer name, user name are in english (read somewhere that they were getting the same error in a different place because their computer name had an accent in one of the letters).", "username": "Mauricio_Ramirez" }, { "code": "", "text": "Same problem, also locale in Spanish although no Spanish characters used in the configuration, names, etc. MongoDB 7.0.1", "username": "RAFAEL_CABALLERO_ROLDAN" }, { "code": "", "text": "I can confirm that’s the issue. I’ve replicated the entire setup in a full english environment and i had no problems at all. It will always happen in a computer that has spanish.", "username": "MAURICIO_RAMIREZ1" } ]
Running rs.status() throws BSONError: Invalid UTF-8 string in BSON document in mongosh
2023-05-04T18:16:27.171Z
Running rs.status() throws BSONError: Invalid UTF-8 string in BSON document in mongosh
1,001
null
[]
[ { "code": "{'_id': ObjectId('6070aaa40f58ac3193c388d2'), 'variant_id': ['chr1', 60351, 'A', 'G', 'b38'], 'gene_id': 'ENSG00000268903.1', 'tss_distance': -75544, 'ma_samples': 28, 'ma_count': 33, 'maf': Decimal128('0.0774648'), 'pval_nominal': Decimal128('0.0000846859'), 'slope': Decimal128('0.496066'), 'slope_se': Decimal128('0.12321'), 'pval_nominal_threshold': Decimal128('0.000093155'), 'min_pval_nominal': Decimal128('0.0000262399'), 'pval_beta': Decimal128('0.0209234')}{'_id': ObjectId('6070aaa40f58ac3193c388d3'), 'variant_id': ['chr1', 61920, 'G', 'A', 'b38'], 'gene_id': 'ENSG00000268903.1', 'tss_distance': -73975, 'ma_samples': 15, 'ma_count': 19, 'maf': Decimal128('0.0446009'), 'pval_nominal': Decimal128('0.0000639084'), 'slope': Decimal128('0.609808'), 'slope_se': Decimal128('0.1488'), 'pval_nominal_threshold': Decimal128('0.000093155'), 'min_pval_nominal': Decimal128('0.0000262399'), 'pval_beta': Decimal128('0.0209234')}{'_id': ObjectId('6070aaa40f58ac3193c388d4'), 'variant_id': ['chr1', 63697, 'T', 'C', 'b38'], 'gene_id': 'ENSG00000268903.1', 'tss_distance': -72198, 'ma_samples': 75, 'ma_count': 82, 'maf': Decimal128('0.192488'), 'pval_nominal': Decimal128('0.0000355138'), 'slope': Decimal128('0.402319'), 'slope_se': Decimal128('0.0947622'), 'pval_nominal_threshold': Decimal128('0.000093155'), 'min_pval_nominal': Decimal128('0.0000262399'), 'pval_beta': Decimal128('0.0209234')}variant_id[{'$match': {}}, {'$project': {'variant_id.0': 1, 'variant_id.1': 1}}]{'_id': ObjectId('6070aac8aa35c7fb75c8d11d'), 'variant_id': []}{'_id': ObjectId('6070aac8aa35c7fb75c8d11e'), 'variant_id': []}{'_id': ObjectId('6070aac8aa35c7fb75c8d11f'), 'variant_id': []}", "text": "The documents look like this:{'_id': ObjectId('6070aaa40f58ac3193c388d2'), 'variant_id': ['chr1', 60351, 'A', 'G', 'b38'], 'gene_id': 'ENSG00000268903.1', 'tss_distance': -75544, 'ma_samples': 28, 'ma_count': 33, 'maf': Decimal128('0.0774648'), 'pval_nominal': Decimal128('0.0000846859'), 'slope': Decimal128('0.496066'), 'slope_se': Decimal128('0.12321'), 'pval_nominal_threshold': Decimal128('0.000093155'), 'min_pval_nominal': Decimal128('0.0000262399'), 'pval_beta': Decimal128('0.0209234')}{'_id': ObjectId('6070aaa40f58ac3193c388d3'), 'variant_id': ['chr1', 61920, 'G', 'A', 'b38'], 'gene_id': 'ENSG00000268903.1', 'tss_distance': -73975, 'ma_samples': 15, 'ma_count': 19, 'maf': Decimal128('0.0446009'), 'pval_nominal': Decimal128('0.0000639084'), 'slope': Decimal128('0.609808'), 'slope_se': Decimal128('0.1488'), 'pval_nominal_threshold': Decimal128('0.000093155'), 'min_pval_nominal': Decimal128('0.0000262399'), 'pval_beta': Decimal128('0.0209234')}{'_id': ObjectId('6070aaa40f58ac3193c388d4'), 'variant_id': ['chr1', 63697, 'T', 'C', 'b38'], 'gene_id': 'ENSG00000268903.1', 'tss_distance': -72198, 'ma_samples': 75, 'ma_count': 82, 'maf': Decimal128('0.192488'), 'pval_nominal': Decimal128('0.0000355138'), 'slope': Decimal128('0.402319'), 'slope_se': Decimal128('0.0947622'), 'pval_nominal_threshold': Decimal128('0.000093155'), 'min_pval_nominal': Decimal128('0.0000262399'), 'pval_beta': Decimal128('0.0209234')}My goal is to get only the first 2 elements of each array belonging to the variant_id field.Aggregation:\n[{'$match': {}}, {'$project': {'variant_id.0': 1, 'variant_id.1': 1}}]The output contains exclusively empty arrays:\n{'_id': ObjectId('6070aac8aa35c7fb75c8d11d'), 'variant_id': []}\n{'_id': ObjectId('6070aac8aa35c7fb75c8d11e'), 'variant_id': []}\n{'_id': ObjectId('6070aac8aa35c7fb75c8d11f'), 'variant_id': []}", "username": 
"Platon_workaccount" }, { "code": "", "text": "Try the following array operator:", "username": "steevej" }, { "code": "", "text": "Thank you!\nIsn’t there a more syntactically sugary way?", "username": "Platon_workaccount" } ]
Project by array element index
2021-04-26T14:49:09.196Z
Project by array element index
1,932
null
[ "dot-net", "data-modeling", "crud" ]
[ { "code": " public dynamic data { get; set; }\nMongoDB.Bson.BsonSerializationException: An error occurred while serializing the data property of class prueba.Models.Servicio.Servicio: Type System.Text.Json.JsonElement is not configured as a type that is allowed to be serialized for this instance of ObjectSerializer.\n ---> MongoDB.Bson.BsonSerializationException: Type System.Text.Json.JsonElement is not configured as a type that is allowed to be serialized for this instance of ObjectSerializer.\n at MongoDB.Bson.Serialization.Serializers.ObjectSerializer.SerializeDiscriminatedValue(BsonSerializationContext context, BsonSerializationArgs args, Object value, Type actualType)\n at MongoDB.Bson.Serialization.Serializers.ObjectSerializer.Serialize(BsonSerializationContext context, BsonSerializationArgs args, Object value)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Serialize(IBsonSerializer serializer, BsonSerializationContext context, Object value)\n at MongoDB.Bson.Serialization.BsonClassMapSerializer`1.SerializeMember(BsonSerializationContext context, Object obj, BsonMemberMap memberMap)\n", "text": "Hello!\nI am using ASP .NET Core 6 and MongoDB.Driver 2.21.0 I need a model that has a dynamic or generic property.The property in question is declared as follows:After creating, the services and the controller, when I try to create using swagger in mongoDB I get this error:I have been investigating and I have not found any solution, Could you tell me how to solve it?Thanks in advance.", "username": "Roque_Rojo_Bacete" }, { "code": "ObjectSerializer", "text": "Hi, @Roque_Rojo_Bacete,Welcome to the MongoDB Community Forums. I understand that you’re having a problem serializing a dynamic type. Due to a .NET vulnerability with type descriptors, we require you to explicitly opt into safe types when using the ObjectSerializer. You can find details in our FAQ:What Object Types Can Be Serialized?Sincerely,\nJames", "username": "James_Kovacs" } ]
Error when model has dynamic property ASP .NET Core 6
2023-09-19T12:14:48.479Z
Error when model has dynamic property ASP .NET Core 6
440
https://www.mongodb.com/…84362eedc0f8.png
[ "dot-net" ]
[ { "code": "public class ClassToSave\n{\n [BsonId] public int Id { get; set; } = 0;\n public IInterface[] Data { get; set; } = new IInterface[] { new ArrayItem() };\n}\n\n\npublic class ArrayItem : IInterface\n{\n [BsonId]\n public string Name { get; set; } = \"Test\";\n}\npublic interface IInterface\n{\n string Name { get; set; }\n}\nBsonClassMap.RegisterClassMap<ArrayItem>(cm =>\n{\n var fullName = typeof(ArrayItem).FullName;\n cm.SetDiscriminator(fullName);\n cm.AutoMap();\n cm.SetIgnoreExtraElements(true);\n});\n\nvar client = new MongoClient();\nvar db = client.GetDatabase(\"MyTest\");\nvar collection = db.GetCollection<ClassToSave>(\"collection\");\ncollection.InsertOne(new ClassToSave());\n", "text": "Hi, I have a project worked with mongoDb Driver 2.2.0And there’s a object defined as:And the exception happened when insert data to db.Detail of the Exception:\nHow can I solve this?", "username": "wang_howard" }, { "code": "ArrayItemobjectObjectSerializerObjectSerializer", "text": "Hi, @wang_howard,Welcome to the MongoDB Community Forums. I understand that you’re running into an exception when attempting to serialize a discriminated interface. Because the base class of your ArrayItem class is object, the type is serialized using the ObjectSerializer.Due to a vulnerability in .NET involving type discriminators, you must declare types as safe to be used with the ObjectSerializer. You can find information about how to do this in our FAQ: What Object Types Can Be Serialized?Sincerely,\nJames", "username": "James_Kovacs" } ]
MongoDb C# Exception Happened When insert a interface property
2023-09-20T06:20:09.666Z
MongoDb C# Exception Happened When insert a interface property
315
null
[ "containers" ]
[ { "code": "storage.dbPathstorage.dbPath", "text": "Hello,I want to customise (change) the dbPath on ubuntu 20.04.The configuration file:/etc/mongod.confspecifies the data directory/dbPath:/var/lib/mongodbThe docs here: https://docs.mongodb.com/manual/reference/configuration-options/#storage.dbPathnote that on linux one can’t simply change the dbPath in the config file:\" The Linux package init scripts do not expect storage.dbPath to change from the defaults. If you use the Linux packages and change storage.dbPath , you will have to use your own init scripts and disable the built-in scripts.\"Unfortunately, I haven’t found any information on those init script(s), i.e. where I can find them, and what in them might need editing in order for the change of dbPath in the config. file to be allowed to work.Because I’m using ubuntu the init system is systemctl, see here: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/Aside: why do I want to change the dbPath from its default? Because my laptop is dual boot and I have mongodb on both win10 and unbuntu, but want to be able to access the same database files–i.e. access the win10 directory/data storage from within the mongodb on ubuntu. So I think I want to use something like:/media/[username]/[win10 partition name]/data/dbwhich corresponds to:C:\\data\\dbBut as the docs say, the init script(s) also need changing for that to work.Thanks for your help.", "username": "Joe_Barwell" }, { "code": "sudo systemctl status mongod", "text": "Changing the init script, if running systemd, is not that complicated.The commandsudo systemctl status mongodwill tell you the location of the script. On my system, it is /usr/lib/systemd/system/mongod.service. You will see in this text file some ExecStartPre= statement. You just have to match those directories and files to the one specified in your mongod.conf. You will see that these are probably only needed at the first start and it is too make sure that the directories exist and are writable by mongod user.But that’s the theory. On a production server I do not mess with this. On a dev machine I start mongod manually with the config file I pleased. Sometimes I have different config files that point to different directories depending of the project. Just make sure directories exist and are writable by the user that starts mongod.", "username": "steevej" }, { "code": "dbPathmongodmongod/data/dbchmod 777 /media/[username]/[win10 partition name]/data/db", "text": "On top of what @steevej mentioned, you will also need to set the correct permissions on the directory. MongoDB in linux will need your new dbPath location to be writable by the mongod user. When you install MongoDB in Linux, the package will create the mongod user and group and then the /data/db path will be owned by this user/group. On a dual boot system, I don’t know what will happen if you try to change the ownership of a path on a Windows mount. 
You might need to open the path up to be world read/writable (chmod 777 /media/[username]/[win10 partition name]/data/db).", "username": "Doug_Duncan" }, { "code": "user@machine:~$ sudo systemctl status mongod\n[sudo] password for user: \n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset>\n Active: active (running) since Wed 2020-06-03 18:28:28 NZST; 5h 5min ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 902 (mongod)\n Memory: 209.8M\n CGroup: /system.slice/mongod.service\n └─902 /usr/bin/mongod --config /etc/mongod.conf\n\nJun 03 18:28:28 machine systemd[1]: Started MongoDB Database Server.\n[Unit]\nDescription=MongoDB Database Server\nDocumentation=https://docs.mongodb.org/manual\nAfter=network.target\n\n[Service]\nUser=mongodb\nGroup=mongodb\nEnvironmentFile=-/etc/default/mongod\nExecStart=/usr/bin/mongod --config /etc/mongod.conf\nPIDFile=/var/run/mongodb/mongod.pid\n# file size\nLimitFSIZE=infinity\n# cpu time\nLimitCPU=infinity\n# virtual memory size\nLimitAS=infinity\n# open files\nLimitNOFILE=64000\n# processes/threads\nLimitNPROC=64000\n# locked memory\nLimitMEMLOCK=infinity\n# total threads (user+kernel)\nTasksMax=infinity\nTasksAccounting=false\n\n# Recommended limits for for mongod as specified in\n# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings\n\n[Install]\nWantedBy=multi-user.target\n", "text": "Thanks. Here’s the result when I tried that:When I look inside the /lib/systemd/system/mongod.service file I see:It may just be my lack of knowledge about linux but I don’t see the sort of statement your describe. There is the line:ExecStart=/usr/bin/mongod --config /etc/mongod.confbut that’s the same config file I earlier wrote of wanting to be able to change–but when I do so, mongod will not start.Is this line important?:EnvironmentFile=-/etc/default/mongodThe thing is, I don’t know what the leading - is doing there. There’s no file at:/etc/default/mongod", "username": "Joe_Barwell" }, { "code": "EnvironmentFile-ExecStart/etc/mongod.confstrorage.dbPathsudo systemctl restart mongodsystemctl status mongod/var/log/mongodb/mongodb.log", "text": "Is this line important?:EnvironmentFile=-/etc/default/mongodThe thing is, I don’t know what the leading - is doing there. There’s no file at:/etc/default/mongodThe EnvironmentFile bit states that the service should look in the provided file for environment variables. The - in front of the file means to not error out if the file doesn’t exist. You can read more about EnvironmentFile in the systemd man page.The first place that I would start is changing the the path in the MongoDB config file. You can find the location of that file by looking at the ExecStart line:ExecStart=/usr/bin/mongod --config /etc/mongod.confIn your case, the file is located at /etc/mongod.conf. Edit that file and change the strorage.dbPath location to the path you want. Save and exit from the file. After doing that run sudo systemctl restart mongod, check the status of the service with systemctl status mongod and verify if it’s running. If it is you’re all set. 
If it’s not you need to check the MongoDB log file (default should be /var/log/mongodb/mongodb.log I believe) to see why the service failed.", "username": "Doug_Duncan" }, { "code": "user@machine~$ sudo service mongod start\nuser@machine:~$ sudo service mongod status\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor pres>\n Active: failed (Result: exit-code) since Thu 2020-06-04 01:37:16 NZST; 3>\n Docs: https://docs.mongodb.org/manual\n Process: 33255 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=>\n Main PID: 33255 (code=exited, status=100)\n\nJun 04 01:37:16 machine systemd[1]: Started MongoDB Database Server.\nJun 04 01:37:16 machine systemd[1]: mongod.service: Main process exited, c>\nJun 04 01:37:16 machine systemd[1]: mongod.service: Failed with result 'ex>\n", "text": "Good point by both you (Doug) and Steeve, earlier. These are the current permissions for that dir:user@machine:~$ ls -l /media/[user name]/[win10 partition name]/data\ntotal 8\ndrwxrwxrwx 1 owner group 8192 Jun 3 15:43 dbSo I think that shows access permissions are not what’s preventing mongod working if I change the config file’s dbPath to that target dir.?Would seeing the status help? After changing the dbPath in the config file to my target dir when mongod is stopped, I then:NB, '‘user’ and ‘machine’ are my placeholders.", "username": "Joe_Barwell" }, { "code": "mongod", "text": "The next step is to look at the the log file to see why it the mongod process failed. The service status just states that the process failed, but the log file will give more information about why.", "username": "Doug_Duncan" }, { "code": "", "text": "OK, thanks. Here’s nearly the last bit from that log file:2020-06-04T01:37:16.571+1200 I CONTROL [initandlisten] options: { config: “/etc/mongod.conf”, net: { bindIp: “127.0.0.1”, port: 27017 }, processManagement: { timeZoneInfo: “/usr/share/zoneinfo” }, storage: { dbPath: “/media/[user name]/[win10 partition name]/data/db”, journal: { enabled: true } }, systemLog: { destination: “file”, logAppend: true, path: “/var/log/mongodb/mongod.log” } }\n2020-06-04T01:37:16.572+1200 I STORAGE [initandlisten] exception in initAndListen: Location28596: Unable to determine status of lock file in the data directory /media/[user name]/[win10 partition name]/data/db: boost::filesystem::status: Permission denied: “/media/[user name]/[win10 partition name]/data/db/mongod.lock”, terminatingI don’t think it’s simply permissions on that file itself, though, as:user@machine:/media/[user name]/[win10 partition name]/data/db$ ls -l mongod.lock\n-rwxrwxrwx 2 owner group 0 Jun 3 15:43 mongod.lockIs the file ownership itself important? 
The owner is not mongodb but rather my user name, while mongodb is the owner for the corresponding file on my linux partition’s default dbPath, i.e.:\n/var/lib/mongodb/mongod.lockuser@machine:/var/lib/mongodb$ ls -l mongod.lock\n-rw------- 1 mongodb mongodb 0 Jun 4 01:35 mongod.lockI tried changing the owner & group to mongodb but after that mongod still failed to start, with the log:2020-06-04T02:16:26.635+1200 I CONTROL [initandlisten] options: { config: “/etc/mongod.conf”, net: { bindIp: “127.0.0.1”, port: 27017 }, processManagement: { timeZoneInfo: “/usr/share/zoneinfo” }, storage: { dbPath: “/media/[user name]/[win10 partition name]/data/db”, journal: { enabled: true } }, systemLog: { destination: “file”, logAppend: true, path: “/var/log/mongodb/mongod.log” } }\n2020-06-04T02:16:26.637+1200 I STORAGE [initandlisten] exception in initAndListen: Location28596: Unable to determine status of lock file in the data directory /media/[user name]/[win10 partition name]/data/db: boost::filesystem::status: Permission denied: “/media/[user name]/[win10 partition name]/data/db/mongod.lock”, terminatingedit: correction: my attempt to change the owner & group of the mongod.lock file in the target win10 partition actually failed, but I’m still not sure whether file ownership is important given everyone has rw access?", "username": "Joe_Barwell" }, { "code": "/media/[user]/[win10 partition]/data/dbrwmongod.lock", "text": "What are the permissions on the /media/[user]/[win10 partition]/data/db folder? My guess is they are wide open as well, but worth checking.MongoDB is trying to access the lock file but it can’t because the permissions of the file are rw by the owner only. You could try removing the mongod.lock file to see if you can start the service in Linux. I am not sure how that would affect things when you go back to your Windows version however.I get what you’re trying to do having MongoDB available no matter whether you boot into Windows or Linux, but I’m not sure that’s going to work well. If you have Windows 10 and enabled WSL this might work a little more smoothly as you’d have the same permissions and owners between the two systems. Trying to get the permissions on the files correctly and in a way that doesn’t affect the other OS might be more hassle than it’s worth.A couple of options that might be a better choice:Unfortunately I don’t have a dual boot Windows machine so I’m not able to test things out to see what would be needed to make this work on your set up.", "username": "Doug_Duncan" }, { "code": "", "text": "That would be the way to go.A couple of options that might be a better choice:The free tier is very nice for that sort of things because it is accessible everywhere. Since I switch from a laptop on the go and a desktop at home having my data no matter what.", "username": "steevej" }, { "code": "", "text": "Boa Notie. ! Td BEm !\nEstou enfrentando o mesmo problema. Porem utilizamos o Oracle Linxu.\nCriamos um novo diretorio e alteramos o arquivode configuração para apontar para o novo caminhio.Revisamos todas as permissões do diretorio porem o erro acontece. Sabem o que poder.Unable to determine status of lock file in the data directory /media/[user name]/[win10 partition name]/data/db: boost::filesystem::status: Permission denied: “/media/[user name]/[win10 partition name]/data/db/mongod.lock”, terminatingI don’t think it’s simply permissions on that", "username": "Tatiana_Jandira" }, { "code": "", "text": "@Joe_Barwell is it resolved?", "username": "ROHIT_KHURANA" } ]
Change dbPath on Ubuntu
2020-06-03T08:59:04.726Z
Change dbPath on Ubuntu
14,279
null
[ "python", "field-encryption" ]
[ { "code": "", "text": "I am using Mongo Client Side Field Level Encryption for storing data. I am maintaining my master key as a “local” master key, and I have a Credential Manager on my side where I am storing this key.I am trying to rewrap the Data Encryption Key using a new Customer Master Key.I am using python motor library’s AsyncIOMotorClientEncryption class and using the rewrap_many_data_key method to rewrap the key.I am getting errors from mongocryptd library that it’s not able to recognise the key dict that I am passing.Any one who has used local key rotation in python?", "username": "Prakhar_Lohumi" }, { "code": "", "text": "Hello Prakhar and welcome to the community,Support for local key rotation/rewrapping is on the roadmap and should be supported in the coming months.Thank you,Cynthia", "username": "Cynthia_Braund" }, { "code": "", "text": "Thank you for the quick reply Cynthia. May I ask which Mongo version would this the change be released for and will it be backwards compatible with MongoDB 4.2+ versions?", "username": "Prakhar_Lohumi" }, { "code": "", "text": "Hi Prakhar,The change will be implemented in the MongoDB drivers, since CSFLE is a client-side feature and no cryptographic operations are done on the server side. This means that it will be compatible with all supported server versions (4.4+). When available, all you’ll need to do is update your driver to the latest version.Thank you,Cynthia", "username": "Cynthia_Braund" } ]
MongoDB CSFLE - Local Key Rotation not working
2023-09-21T17:44:10.367Z
MongoDB CSFLE - Local Key Rotation not working
362
null
[ "transactions" ]
[ { "code": "", "text": "I have an application that tries to update the same row in a collection in 2 separate threads at the same time. Each thread has its own connection the MongoDB server and has started a transaction.Thread 1 has altered the row and the 2nd thread tries to update the same row when MongoDB returns the write conflict error. The 2nd thread waits 5 millisecs and then tries to update the row again, when MongoDB returns transaction aborted.Each thread is updating and inserting into multiple documents in the transaction.This a major issue for us, is this a bug?Update:\nI changed the app to work as separate processes instead of multiple threads and I still get the same issue.", "username": "Phillip_Carruthers" }, { "code": "", "text": "Write conflict can occur in following case:the data read by A before A’s write is no longer valid (modified by B), so you get a conflict.", "username": "Kobe_W" }, { "code": "", "text": "I understand your logic for explaining a write conflict, however in a multi-user transaction based environment that should not happen, thread B after updating the row should have it locked and blocking thread A until thread B commits/rollback or thread A gets a transaction timeout.If MongoDB is not working as above then it cannot be used in a multi-user environment that requires updates in a transaction to block other users.My understanding is that if you update a row in MongoDB it locks it.", "username": "Phillip_Carruthers" }, { "code": "", "text": "Check this.It looks like at least in mongo 4.0, other sessions attempting to modify the locked document (locks are only released upon transaction finish) will not block and will be aborted.", "username": "Kobe_W" }, { "code": "", "text": "MongoDB 4.2 our code seemed to work fine, since 4.4, 5 and 6 it is broken.I would expect any updates in a transaction to have their rows locked until committed/rollbacked and other reads/updates outside the transaction blocked.I have seen many people asking about MongoDB transactions on the internet, it seems to be very confusing or not straight forward as expected.", "username": "Phillip_Carruthers" }, { "code": "", "text": "Hey,we experience the same problem on Atlas MongoDB 6.0 with latest C# Driver.\nWe have tested await Task.WaitAll with multi document transaction tasks(10 Transactions on 10 Collections) and retry logic(Abort, Dispose, Wait, Repeat). The problem is that we don’t get any transaction committed. Somehow the first transaction also runs in to the error.", "username": "Dimitri_Kroo" } ]
Write conflict in transaction
2023-05-09T11:49:06.255Z
Write conflict in transaction
1,556
null
[ "node-js" ]
[ { "code": "", "text": "Hi everyone, Im a newbie with exploring mongodb\nIm getting a trouble that when i have a database call user, i tested POST action success in postman and no error with that, but when i go to database in mongodb, i saw that the object i posted was in a “test” database (I didnt create it), i tried to delete it and POST object again but it still auto create “test” database and store all object in it, instead of my main database ( still exist) .\nIn that struggle, i believe that is the reason why i cant connect my DB to my frontend surface\nPls help me", "username": "Thai_Ph_m_Qu_c" }, { "code": "", "text": "Hey @Thai_Ph_m_Qu_c,Welcome to the MongoDB Community forums Im getting a trouble that when i have a database call user, i tested POST action success in postman and no error with that, but when i go to database in mongodb, i saw that the object i posted was in a “test” database (I didnt create it), i tried to delete it and POST object again but it still auto create “test” database and store all object in it, instead of my main database ( still exist) .From your description, it seems like the objects you are posting are getting stored in the “test” database instead of the “user” database that you intended. This could be due to a misconfiguration or a default setting in your MongoDB setup.To resolve this issue, I recommend checking your MongoDB connection string and ensuring that it specifies the correct database name (“user” in your case). Additionally, double-check your code to ensure that you are specifying the correct database name when performing database operations.If you’re still experiencing difficulties after verifying the above, it would be helpful to provide more details about your MongoDB configuration and code snippets. This will allow us to assist you better.Looking forward to hearing back from you and assisting you further.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hey @Thai_Ph_m_Qu_c ,In the MongoDB URI string, you need to add the name of the database you need the collections to be added into.For example,mongodb+srv://user:[email protected]/DbName?retryWrites=true&w=majority", "username": "Vikas_Gupta5" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Objects getting auto-saved in 'test' database instead of 'user' database
2023-07-06T16:46:27.653Z
Objects getting auto-saved in 'test' database instead of 'user' database
1,098
null
[]
[ { "code": " phoneNumber: {\n type: String,\n unique: [true, \"Phone number is already in use.\"],\n validate: {\n validator: validatePhoneNumber,\n message: \"Invalid phone number.\",\n },\n default: \"\",\n },\nexport const phoneNumberRegex = /^05\\d{8}$/; // Like 0526665656\n\nexport const validatePhoneNumber = (phoneNumber) => {\n return phoneNumberRegex.test(phoneNumber);\n};\n", "text": "How can I make sure a phone number is unique, but also empty string is allowed?\nThis is the current scheme, but obviously it’s not working:(Node.js, Express, Mongoose)With this validator and regex:Thanks ", "username": "Arie_Levental" }, { "code": "", "text": "Would appreciate someone’s help ", "username": "Arie_Levental" }, { "code": "db.collectionName.createIndex(\n {phoneNumber: 1},\n {unique: true, partialFilterExpression: {phoneNumber: {\"$gt\":\"\"}}}\n);\n\ntest> db.collectionName.find()\n[\n {\n _id: ObjectId(\"650d8d918af6e1afae571da1\"),\n name: 'John',\n phoneNumber: '1234567890'\n },\n {\n _id: ObjectId(\"650d8d948af6e1afae571da3\"),\n name: 'John',\n phoneNumber: ''\n },\n {\n _id: ObjectId(\"650d8d948af6e1afae571da4\"),\n name: 'John',\n phoneNumber: ''\n }\n]\ntest> db.collectionName.find({ \"phoneNumber\": \"\"}).explain();\n{\n explainVersion: '2',\n queryPlanner: {\n namespace: 'test.collectionName',\n indexFilterSet: false,\n parsedQuery: { phoneNumber: { '$eq': '' } },\n queryHash: '0EFD98BB',\n planCacheKey: 'DF014AED',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n queryPlan: {\n stage: 'COLLSCAN',\n planNodeId: 1,\n filter: { phoneNumber: { '$eq': '' } },\n direction: 'forward'\n },\n slotBasedPlan: {\n slots: '$$RESULT=s5 env: { s7 = \"\", s2 = Nothing (SEARCH_META), s3 = 1695387225258 (NOW), s1 = TimeZoneDatabase(Etc/GMT+1...Asia/Nicosia) (timeZoneDB) }',\n stages: '[1] filter {traverseF(s4, lambda(l1.0) { ((l1.0 == s7) ?: false) }, false)} \\n' +\n '[1] scan s5 s6 none none none none lowPriority [s4 = phoneNumber] @\"3591b1ec-a577-4ba5-8206-44ebbcf3f97a\" true false '\n }\n },\n rejectedPlans: []\n },\n command: {\n find: 'collectionName',\n filter: { phoneNumber: '' },\n '$db': 'test'\n },\n serverInfo: {\n host: 'e10d44068e0f',\n port: 27017,\n version: '7.0.1',\n gitVersion: '425a0454d12f2664f9e31002bbe4a386a25345b5'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600,\n internalQueryFrameworkControl: 'trySbeEngine'\n },\n ok: 1\n}\ntest> db.collectionName.find({ \"phoneNumber\": \"1234567890\"}).explain();\n{\n explainVersion: '2',\n queryPlanner: {\n namespace: 'test.collectionName',\n indexFilterSet: false,\n parsedQuery: { phoneNumber: { '$eq': '1234567890' } },\n queryHash: '0EFD98BB',\n planCacheKey: 'C344E9E5',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n queryPlan: {\n stage: 'FETCH',\n planNodeId: 2,\n inputStage: {\n stage: 'IXSCAN',\n planNodeId: 1,\n keyPattern: { phoneNumber: 1 },\n indexName: 'phoneNumber_1',\n isMultiKey: false,\n multiKeyPaths: { phoneNumber: [] },\n 
isUnique: true,\n isSparse: false,\n isPartial: true,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { phoneNumber: [ '[\"1234567890\", \"1234567890\"]' ] }\n }\n },\n slotBasedPlan: {\n slots: '$$RESULT=s11 env: { s3 = 1695387212539 (NOW), s6 = KS(3C3132333435363738393000FE04), s2 = Nothing (SEARCH_META), s5 = KS(3C31323334353637383930000104), s10 = {\"phoneNumber\" : 1}, s1 = TimeZoneDatabase(Etc/GMT+1...Asia/Nicosia) (timeZoneDB) }',\n stages: '[2] nlj inner [] [s4, s7, s8, s9, s10] \\n' +\n ' left \\n' +\n ' [1] cfilter {(exists(s5) && exists(s6))} \\n' +\n ' [1] ixseek s5 s6 s9 s4 s7 s8 [] @\"3591b1ec-a577-4ba5-8206-44ebbcf3f97a\" @\"phoneNumber_1\" true \\n' +\n ' right \\n' +\n ' [2] limit 1 \\n' +\n ' [2] seek s4 s11 s12 s7 s8 s9 s10 [] @\"3591b1ec-a577-4ba5-8206-44ebbcf3f97a\" true false \\n'\n }\n },\n rejectedPlans: []\n },\n command: {\n find: 'collectionName',\n filter: { phoneNumber: '1234567890' },\n '$db': 'test'\n },\n serverInfo: {\n host: 'e10d44068e0f',\n port: 27017,\n version: '7.0.1',\n gitVersion: '425a0454d12f2664f9e31002bbe4a386a25345b5'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600,\n internalQueryFrameworkControl: 'trySbeEngine'\n },\n ok: 1\n}\n", "text": "Hey, welcome to the MongoDB community.I believe this will help you to maintain your logicI created a small lab for you to understand the only point of attention.List all documents:The partial index will not be used when you filter for null values, as the expression must be greater than “”If you pass the number, you can use the index for your query, as you need to meet its filter for it to be in the index.If necessary, you can add more fields to the index to meet your workload, here is just an example of how to meet your problem.\nI’m available ", "username": "Samuel_84194" } ]
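Since the schema in the question is a Mongoose one, the same partial unique index can be declared at the schema level so Mongoose creates it for you; a sketch along those lines is below. Note that unique in Mongoose is an index option rather than a validator, so the custom "already in use" message on the path never fires anyway.

```js
const userSchema = new mongoose.Schema({
  phoneNumber: {
    type: String,
    default: "",
    validate: {
      validator: validatePhoneNumber, // from the question
      message: "Invalid phone number.",
    },
  },
});

// Unique only among documents whose phoneNumber is a non-empty string,
// so any number of users may keep the default "".
userSchema.index(
  { phoneNumber: 1 },
  { unique: true, partialFilterExpression: { phoneNumber: { $gt: "" } } }
);
```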
Best practice for phone number validation?
2023-09-21T15:14:34.537Z
Best practice for phone number validation?
226
null
[ "connector-for-bi" ]
[ { "code": "", "text": "How can I connect Mongodb BI Connector and Mongodb which is present on AWS.", "username": "Aniket_Amshekar" }, { "code": "", "text": "Good morning! Welcome to the MongoDB community.Have you looked at the implementation documentation? Below, if you have any questions, I’m at your disposal.", "username": "Samuel_84194" } ]
Connect Mongodb BI Connector and Mongodb which is present on AWS
2023-09-22T08:23:28.640Z
Connect Mongodb BI Connector and Mongodb which is present on AWS
290
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Hi everyone, I’ve been using the Realm Database for a while and it really fits my need. But now I have a problem. As described in the title, I want to provide a paid subscription to allow users sync their data.I want to achieve it by allow users pay for the service on the Gumroad and add a serial number to the custom user data. What I want to know is is there a way to prevent users without a proper SN from syncing the data?In my understanding, the sync starts right after user login to the Realm app on the client side right?", "username": "John_Cido" }, { "code": "", "text": "Hi John, and welcome to the community! Sorry for the delay in replying; a lot of us have been out of the office around the holidays.You’re correct about sync starting when you make a call to open the Realm. We’ll be adding some formal guidance to the documentation around this soon-ish, but the recommendation is to use local-only Realm Database for non-paying subscribers, and use a synced Realm for paying subscribers.With the SN you propose as custom user data, you could use that to determine whether to use a synced realm or a local realm. If the user does not have a valid SN, open a local Realm Database. If the user does have a valid SN, open a synced Realm.If a user subscribes to your paid service after using a local Realm database, you’d need to copy the contents of the local database to a synced Realm. Unfortunately, we don’t currently have a way to easily copy data between a local Realm and a synced Realm. On our page about adding Sync to a Local-Only App, we describe the process of copying data from a local Realm to a synced Realm. You’d need to manually handle this process. And the same thing goes in reverse, of course - if a paying user stops their subscription, you’d need to manually copy from the synced Realm to a local Realm database.We are planning some enhancements that should hopefully make this process easier, which should be available later this year. Keep an eye on your preferred SDK’s release notes to watch for that.", "username": "Dachary_Carey" }, { "code": "", "text": "We are planning some enhancements that should hopefully make this process easier, which should be available later this year. Keep an eye on your preferred SDK’s release notes to watch for that.Hi Any update on when the enhancements will be released? I’m developing an invoicing app and pretty scared to go for Realm because of the risks of these manual migrations when user subscribe/unsubscribe… Is there example code in Kotlin of how you do the migration?Thanks in advance", "username": "aude" }, { "code": "", "text": "Hey there - we did add an API to our older Sync mode that enabled copying between a local and a synced realm, and vice versa. That API is not yet supported to copy data from a non-synced to a synced Realm that uses Flexible Sync. But the Kotlin documentation for this functionality is here: https://www.mongodb.com/docs/realm/sdk/kotlin/realm-database/open-and-close-a-realm/#copy-data-into-a-new-realm", "username": "Dachary_Carey" }, { "code": "", "text": "Hi, thanks a lot for your reply!If I understand correctly I have 3 choices if I want to sync data for users who pay for it, and without forcing sign-up for those who don’t. Please let me know if i’m missing something:", "username": "aude" }, { "code": "", "text": "Yes, your understanding is correct. I don’t know of a timeframe for this API to be available in Flexible Sync. 
You’re welcome to request this in the MongoDB Feedback Engine for Realm.We don’t have a code example directly demonstrating how to iterate through all the objects in the realm and copy them into a new one. But we do have this higher-level diagram that walks through the logic involved: https://www.mongodb.com/docs/atlas/app-services/sync/app-builder/local-to-sync/#copy-existing-dataWe are generally not recommending that people create new Partition-Based Sync Apps at this time. It is our older Sync mode, and all of the new development for Device Sync is happening for Flexible Sync. So while your second option would work, we probably wouldn’t recommend it. The first option is currently our recommended approach.", "username": "Dachary_Carey" }, { "code": "", "text": "I extremely discourage doing anything with Partition Sync at all, you’re a lot better off going to Flexible Sync if anything. It resolves an enormous amount of problems associated to partition sync, and is a heavily more performant model in general.As far as mapping data, I’m not on here to market myself but there are tons of people in general who can be commissioned to setup what you’re looking for with working logics.But the blueprint Dachary posted is a great resource to map up, if this is something that continues to be a hard point in moving forward let me know and I’ll build a tutorial for it using Swift.", "username": "Brock_Leonard" }, { "code": "", "text": "Thank you for your replies, it’s very helpful.\nFinally I think i’ll force sign up for everyone… because the risks of errors/data loss during local databases migrations (even when handled with the API) are triggering me I mean, i know that you delete the original db only when all records are saved in the synced one, and you can eventually relaunch it if it failed, but with Android and all its possible configurations, this is a wild world…Obviously the ideal thing to preserve the local db would be:", "username": "aude" } ]
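For anyone wanting a concrete starting point, here is a very rough sketch of the "local realm for free users, synced realm for paying users" idea using the JavaScript SDK (the thread does not say which SDK is in use, and the serialNumber field in custom user data is invented for illustration). With Flexible Sync you would still need to add subscriptions before any data syncs down.

```js
import Realm from "realm";

// Placeholder object schema.
const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: { _id: "objectId", name: "string" },
};

async function openRealmFor(user) {
  // Assume the serial number bought on Gumroad was written into custom
  // user data as { serialNumber: "..." } -- a hypothetical field name.
  const isPaid = Boolean(user.customData && user.customData.serialNumber);

  const config = isPaid
    ? { schema: [TaskSchema], sync: { user, flexible: true } } // synced realm
    : { schema: [TaskSchema], path: "local.realm" };           // local-only realm

  return Realm.open(config);
}
```

Copying existing local data into the synced realm (and back again on cancellation) still has to be handled along the lines of the migration steps linked earlier in the thread.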
Provide Realm Sync as a paid service for users
2021-12-29T13:01:18.939Z
Provide Realm Sync as a paid service for users
2,776
null
[ "queries", "node-js", "atlas-functions" ]
[ { "code": "/find?table=dataquery.table[]/find?table=data/find?table=dataexports = async function({ query, headers, body }, response) {\n \n //return [query.table, Type of query.table ] -- [ \"data\", \"string\"]\n\n if (query?.['_id']) query._id = new BSON.ObjectId(query._id);\n\n const collection = context.services.get(\"mongodb-atlas\").db(\"test\").collection(query.table); //even with \"data\"\n return collection.find(query).toArray()\n .then(docs => {\n return JSON.parse(JSON.stringify(docs));\n })\n }\n};\n[]", "text": "I am trying to pass a collection name through query parameters, but I’m facing issues with the response. When I use the URL /find?table=data, the code provided below correctly returns ‘data’ as a string when querying query.table. However, when I attempt to assign the collection name using this query parameter, it returns an empty list []. Even if I hardcode the collection name as “data” and keep the URL as /find?table=data, it still returns an empty list.Endpoint: /find?table=dataFunction:javascriptCopy codeOutput: []", "username": "Rajan_Braiya" }, { "code": "", "text": "This might sound stupid, but does the user or service you’re doing have permissions in proper permissions in place?", "username": "Brock_Leonard" }, { "code": " const table = query.table; \n delete query.table;\n", "text": "Thank you @Brock_Leonard for your reply.I need to remove the ‘table’ key before I send it for query filtering, actually.", "username": "Rajan_Braiya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I pass a collection name from query parameters successfully?
2023-09-21T15:38:27.332Z
How can I pass a collection name from query parameters successfully?
283
null
[ "queries", "transactions" ]
[ { "code": "Tuple2<CollectionName,Document> doc1;\nTuple2<CollectionName,Document> doc2;\nList<doc1,doc2>;\nClientSession clientSession = mongoClient.startSession();\ntry {\nclientSession.startTransaction(TransactionOptions.builder().writeConcern(WriteConcern.MAJORITY).build());\n\nfor (Tuple2<String, String> myDocument: DocumentList) {\n\n MongoDatabase db = mongoClient.getDatabase(\"mydb\");\n MongoCollection<Document> collection = db.getCollection(myDocument.f0);\n\n Document event = Document.parse(myDocument.f1);\n collection.insertOne(clientSession, event);\n}\n} \ncatch (MongoCommandException | MongoWriteException exception) {\n clientSession.abortTransaction();\n log.error(\"Exception happened while inserting record into Mongo DB rolling back the transaction and cause of exception is:%s\", ExceptionUtils.getStackTrace(exception)));\n \n }\n clientSession.commitTransaction();\n}\n", "text": "I am new to mongo db i am trying to write documenst to different collection in same database and maintaining atomicity through transaction feature in mongodb in case of exception it should rollback all the data written. I have list with tuple2 containing collection name and document like belowI am getting below errorThe full response is {“ok”: 0.0, “errmsg”: \"Error=2, Details='Response status code does not indicate success: BadRequest (400); Substatus: 1101; ActivityId: f689251a-3e97-40e1-843a-6f339d7b2554; Reason: (Message: {“Errors”:[“Transaction is not active”]}\\r\\nActivityId: f689251a-3e97-40e1-843a-6f339d7b2554, Request URI: /apps/1db5759c-505a-463f-9614-5d39ab12da22/services/c45af58e-eb23-4f44-8707-a4242a082f54/partitions/da75cb38-62d2-4415-a81a-4d9a4a1b4fd7/replicas/132950835021485319p/, RequestStats:But if i move the clientSession.commitTransaction(); to for loop it will commit the data but in case of transaction failure it is not rolling back the data already written", "username": "Rahulkumar_Kurba" }, { "code": "", "text": "Did you find a fix for it ?", "username": "Wellington_Rafael_Barros_Amorim" } ]
Transaction not active error while trying to insert data to multiple collection in same database
2022-04-23T01:32:53.253Z
Transaction not active error while trying to insert data to multiple collection in same database
3,204
null
[ "aggregation" ]
[ { "code": "projectObj.$project.createdAtParsed = { $dateToString: { format: \"%m/%d/%Y\", date: \"$createdAt\" } }\nprojectObj.$project.createdAtParsed = { $dateToString: { format: \"%B/%d/%Y\", date: \"$createdAt\" } }\n", "text": "I’m doing an aggregation on mongo db.\nI’m trying to format date fieldThis works and show me date in this format: 09/01/2023Now i need to show extended month name (September, October, etc) so I try to change %m with %B\nThi doesn’t works and give me this error: PlanExecutor error during aggregation :: caused by :: Invalid format character ‘%B’ in format stringAnyone?", "username": "Giorgio_Brugnone" }, { "code": "%B$dateToString", "text": "Hello @Giorgio_Brugnone,The %B Specifier of $dateToString is supported from MongoDB v6.3, and probably you are using a lower MongoDB version than 6.3.", "username": "turivishal" }, { "code": "$dateToString", "text": "@Giorgio_Brugnone, as @turivishal mentioned this functionality is available (added in SERVER-73402).If you’re seeing this error you likely weren’t configured to use Rapid Releases in MongoDB Atlas, however when you upgrade your cluster to MongoDB 7.0 these $dateToString format specifiers will be available.", "username": "alexbevi" }, { "code": "", "text": "Sounds good.\nThank you i try", "username": "Giorgio_Brugnone" }, { "code": "", "text": "Uhm, i’m working with MongoDBCompass.\nI must find out how to upgrade db version…any suggestions?", "username": "Giorgio_Brugnone" }, { "code": "", "text": "@Giorgio_Brugnone assuming your cluster is hosted in MongoDB Atlas the following documentation should help you upgrade the version of your cluster: https://www.mongodb.com/docs/atlas/tutorial/major-version-change/", "username": "alexbevi" }, { "code": "", "text": "tnx a lot, i’ll take a look", "username": "Giorgio_Brugnone" } ]
$dateToString doesn't accept %b - extended month....why?
2023-09-18T14:44:43.959Z
$dateToString doesn't accept %b - extended month....why?
408
null
[ "aggregation", "indexes" ]
[ { "code": "{some_id: 1, time: -1, codes: 1}[\n {\"$match\": {\"id\": some_id}},\n {\"$unwind\": \"$codes\"},\n {\"$group\": {\"_id\": \"$codes\", \"time\": {\"$max\": \"$time\"}}}\n]\n", "text": "We have a collection with 3 relevant fields: ID, time and a list of codes.\nOur index has all 3 relevant fields indexed as {some_id: 1, time: -1, codes: 1}Before upgrading to Mongo 6, the explain to the following query returned PROJECTION_COVERED.\nSince the upgrade the query is no longer covered for some reason. Explain gives PROJECTION_DEFAULT and FETCH.\nWhat happened? How can we fix it?Thanks in advance.", "username": "SchiaviniUM" }, { "code": "", "text": "Hi @SchiaviniUMThe newest Mongo version is Mongo 7.It’s began after an upgrade to Mongo 6 or to Mongo 7? I am asking for the version because I have a similar issue here after upgrading to Mongo 7 and I solved it with the hint.Can you try to use hint to see if your performance gets better?", "username": "Jennysson_Junior" }, { "code": " {\"$match\": {\"some_id\": id}}, {\"$match\": {\"id\": some_id}},", "text": "May be it is a bad cut-n-paste from your part but you $match on the field id but this field is not in the index. So if your code is really like the one you shared and the index really the one you shared then I am pretty sure that the query is not covered even in the previous version. Perhaps what you really want is {\"$match\": {\"some_id\": id}},rather than {\"$match\": {\"id\": some_id}},", "username": "steevej" }, { "code": "some_id{some_id: 1, time: -1, codes: 1}{\"$match\": {\"some_id\": some_id}},", "text": "some_id is correct and that’s in the index {some_id: 1, time: -1, codes: 1}\nThe query had a paste typo though, I meant {\"$match\": {\"some_id\": some_id}},This started happening since v6.\nUsing hint does not work as the query already does use the index. It just has an extra FETCH stage that should not be needed.", "username": "SchiaviniUM" } ]
Query is not covered anymore after upgrade to Mongo 6
2023-09-18T12:18:55.338Z
Query is not covered anymore after upgrade to Mongo 6
346
null
[]
[ { "code": "", "text": "Hello, I have a problem because I missed my exam due to a technical problem with the calendar. I wrote an e-mail to [email protected], waited for a reply and didn’t receive any, so I decided to start a topic on the forum", "username": "Szymon_Ziolkowski" }, { "code": "", "text": "Hey @Siva_Ganesh2,Hello, I have a problem because I missed my exam due to a technical problem with the calendar. I wrote an e-mail to [email protected], waited for a reply and didn’t receive any, so I decided to start a topic on the forumI can see that your issue has been resolved by the MongoDB Certification team. Therefore, I am closing this thread. If you have any questions, please feel free to open a new thread in the community, and we will be happy to assist you.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
I missed my exam, can i have another try?
2023-09-10T10:35:44.316Z
I missed my exam, can i have another try?
454
null
[ "replication", "transactions" ]
[ { "code": "", "text": "Hi,I have a problem that I really don’t know how to solve or what approach to take. For data analysis and other similar things I need to use the AWS data migration service to migrate data from mongoDB to an AWS Redshift, but when I run the migration task I always get an error: “Unable to find transaction: …” . This process reads the oplog of the replica set. I have seen that this problem only occurs in certain collections not in all but I can’t find the possible cause. Any ideas?Thanks", "username": "Juan_Luis_Garcia" }, { "code": "Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to create new client connection Failed to connect to database., Application-Detailed-Message: Error verifying connection\n", "text": "I have a developer having a similar issue where they’re trying to connect MongoDB Atlas to S3 via AWS DMSThey got this error:But looking at the AWS documentation, looks like AWS doesn’t support MongoDB Atlas yet with AWS DMS? Or am I mistaken? What’s the best course of action here?", "username": "Chris_John" }, { "code": "", "text": "Hello mates,I was able to perform cdc once. However, I’ve deleted my replication instance and now I’m currently not able to establish the database connection again.Let me know in case you accomplish it, please.Diego", "username": "Diego_Santos" }, { "code": "", "text": "Hi Chris,At this moment I have some DMS task working fine with MongoDB Atlas, so I suppose you have another problem. I have a problem with some specific mongo collections but I can connect with Mongo Atlas.In your case I would check the DMS connection config and the mongoDB Atlas security config", "username": "Juan_Luis_Garcia" }, { "code": "", "text": "Hi Diego,Do you have another mongoDb replicaset or you only have removed the replication instance?. DMS needs a mongo log with name “oplog”, this log is created when you have a mongodb replicaset, without this log I dont think DMS worksJuan", "username": "Juan_Luis_Garcia" }, { "code": "", "text": "I had that issue before and ours was related to the url schema. DMS failed when trying to connect to the url mongodb+srv:// but worked when we connected to the analytics node mongodb://", "username": "Devon_Kinghorn" }, { "code": "", "text": "Hi @Juan_Luis_Garcia\nDid you found any solution for “Error reading the database log. Stop Reason FATAL_ERROR Error Level FATAL”. I am facing with same issue, When I am doing any add/edit/delete actions from compass DMS replication is working, but when we are add/edit/delete action from application replica is not happening. Only the difference is DMS connects with MongoDB:// and my application connects with MongoDB+srv:// .", "username": "Bharath_Gengi" } ]
Problems with MongoDB Atlas and AWS DMS
2022-07-19T06:27:54.739Z
Problems with MongoDB Atlas and AWS DMS
4,592
null
[]
[ { "code": "", "text": "I have made a dumbest mistake in my life. Using my Github Student Pack I got free certification. I have scheduled exam for Sept 22, 12:00 AM and confused it for 12:00 PM and missed my exam. I know that I can’t schedule exam again. I have been learning MongoDB for few weeks to get certification, I don’t want this effort to go in vain. Can I be please excused for a honest mistake ? I wouldn’t be asking this if I can afford for certification again. I know this request is lost cause but I have to try even if there is tiniest sliver of hope.", "username": "Siva_Ganesh2" }, { "code": "", "text": "Hey @Siva_Ganesh2,Could you please reach out to [email protected]? The team is based in the US, and they will assist you once they are back online.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
MongoDB certificate exam
2023-09-22T05:57:27.990Z
MongoDB certificate exam
381
null
[ "crud", "golang" ]
[ { "code": "InsertMany()Ordered: falsemongo.IsDuplicateKeyError()true", "text": "Hey everyone \nI’m using Golang’s MongoDB driver and I’m facing an issue I would love getting assistance with.\nI try running InsertMany() with Ordered: false in order to ignore duplicate key errors (I use a unique index for a specific field which should be unique) and keep on inserting all the other documents.It works perfectly fine (in the happy flow ), however I’m not sure it’s possible to handle errors if multiple different errors arise in the same insertion - the mongo.IsDuplicateKeyError() function seems to return true even if the insertion returns 10 errors and only 1 of them is duplicate key Is there any solution I can use to ignore duplicate key errors but still be able to handle other errors, should they occur?Thank you!", "username": "Adir_Halfon" }, { "code": "mongo.IsDuplicateKeyError()truemongo.IsDuplicateKeyError()", "text": "Hi @Adir_Halfon and welcome to MongoDB community forums!!It works perfectly fine (in the happy flow ), however I’m not sure it’s possible to handle errors if multiple different errors arise in the same insertion - the mongo.IsDuplicateKeyError() function seems to return true even if the insertion returns 10 errors and only 1 of them is duplicate key Based on the above statement, can you please clarify is you are looking for a solution where the mongo.IsDuplicateKeyError() is not working as expected?It would be very helpful for me to provide you with assistance if you could help me understand the concern in more detail.Warm Regards\nAasawari", "username": "Aasawari" } ]
Handling multiple errors in InsertMany function with Ordered: false
2023-09-18T07:57:42.849Z
Handling multiple errors in InsertMany function with Ordered: false
381
null
[ "node-js", "mongodb-shell", "atlas-cluster" ]
[ { "code": "mongosh \"mongodb+srv://my-default-cluster.nopctvg.mongodb.net/\" --apiVersion 1 --username sandeepc\nEnter password: ***********\nCurrent Mongosh Log ID:\t650b432e52a027b381779cdd\nConnecting to:\t\tmongodb+srv://<credentials>@my-default-cluster.nopctvg.mongodb.net/?appName=mongosh+2.0.1\nUsing MongoDB:\t\t6.0.10 (API Version 1)\nUsing Mongosh:\t\t2.0.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\nAtlas atlas-cjxqji-shard-0 [primary] test>\nconst { MongoClient, ServerApiVersion } = require('mongodb');\nconst uri = \"mongodb+srv://sandeepc:<password>@my-default-cluster.nopctvg.mongodb.net/?retryWrites=true&w=majority\";\n\n// Create a MongoClient with a MongoClientOptions object to set the Stable API version\nconst client = new MongoClient(uri, {\n serverApi: {\n version: ServerApiVersion.v1,\n strict: true,\n deprecationErrors: true,\n }\n});\n\nasync function run() {\n try {\n // Connect the client to the server\t(optional starting in v4.7)\n await client.connect();\n // Send a ping to confirm a successful connection\n await client.db(\"admin\").command({ ping: 1 });\n console.log(\"Pinged your deployment. You successfully connected to MongoDB!\");\n } finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n }\n}\nrun().catch(console.dir);\n/foo/node_modules/mongodb/lib/admin.js:62\n session: options?.session,\n ^\n\nSyntaxError: Unexpected token '.'\n at wrapSafe (internal/modules/cjs/loader.js:915:16)\n at Module._compile (internal/modules/cjs/loader.js:963:27)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)\n at Module.load (internal/modules/cjs/loader.js:863:32)\n at Function.Module._load (internal/modules/cjs/loader.js:708:14)\n at Module.require (internal/modules/cjs/loader.js:887:19)\n at require (internal/modules/cjs/helpers.js:74:18)\n at Object.<anonymous> (/media/sandeep/SANDEEP_C/DOCHUB/11_Repositories/01. Github/foo/node_modules/mongodb/lib/index.js:6:17)\n at Module._compile (internal/modules/cjs/loader.js:999:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)\n", "text": "Hi Everyone,I am new to MongoDB and Node.js. I am following the MongoDB documentation and am currently unable to connect to the MongoDB Cluster using Node.js, but it appears to be functioning normally as I am able to establish a connection using the Mongo Shell without any issues.The following works fine when I run from a terminaLHowever, when I run the following code, it gives me an error. 
The password enclosed within the parentheses is replaced by the actual password.CodeError:I just ran npm init followed by npm install mongodb and node connect.js it gave me the above error.Not sure what am I doing wrong here.", "username": "Sandeep_Chatterjee" }, { "code": "import {MongoClient} from \"mongodb\";\n\n// Replace the uri string with the connection string from your MongoDB deployment.\nconst uri = \"<connection string uri>\";\n\nconst client = new MongoClient(uri);\n\nasynchronous function run() {\n to try {\n \n // Gets the database and collection on which to perform the operation\n const database = client.db(\"sample_mflix\");\n const movies = database.collection(\"movies\");\n\n // Query for a film titled 'The Room'\n const query = {title: \"The Room\"};\n\n const options = {\n // Sorts matching documents in descending order by rank\n sort: { \"imdb.rating\": -1 },\n // Includes only the `title` and `imdb` fields in the returned document\n projection: {_id: 0, title: 1, imdb: 1},\n };\n\n //Run query\n const movie = wait for movies.findOne(query, options);\n\n // Print the document returned by findOne()\n console.log(movie);\n } finally {\n wait client.close();\n }\n}\nrun().catch(console.dir);\n", "text": "Hello, welcome to the MongoDB community.I’m not very knowledgeable about nodeJS, but from the examples I saw, they import the client and then you put it as a variable, is it the same thing? Example for finding documents:", "username": "Samuel_84194" }, { "code": "", "text": "Hi Samuel,I am using the same code provided here in nodejs-quickstart but it just throws the error mentioned in the original post.I can connect fine to my cluster from mongo shell using the below, but not from nodejs.mongosh “mongodb+srv://my-default-cluster.nopctvg.mongodb.net/” --apiVersion 1 --username sandeepc", "username": "Sandeep_Chatterjee" }, { "code": "", "text": "@Samuel_84194 @Sandeep_ChatterjeeThis error is commonly a syntax problem from the driver vs the Node.JS version.It’s also possible you’re just using the wrong password…", "username": "Brock_Leonard" }, { "code": "v20.7.0", "text": "Thank you, Brock. Updating the node version to the latest one v20.7.0 fixed the problem. Cheers. ", "username": "Sandeep_Chatterjee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
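Editor's note: the stack trace above points at optional chaining (options?.session) failing to parse, which is a runtime-version problem rather than a connection problem — the resolution in the thread was upgrading Node.js. A hedged way to catch this earlier is to check the runtime and declare a minimum version in package.json (the exact floor depends on the driver release you install):

    node --version        # the ?. syntax needs Node.js 14+, and recent mongodb drivers expect a current LTS

    # package.json (illustrative)
    {
      "engines": {
        "node": ">=16"
      }
    }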
Unable to connect to MongoDB Cluster deployed on MongoDB Cloud using Node.js
2023-09-20T19:19:22.619Z
Unable to connect to MongoDB Cluster deployed on MongoDB Cloud using Node.js
592
null
[ "node-js", "replication", "sharding", "transactions" ]
[ { "code": "", "text": "Hi there, It´s my first time here …\nI use one lab to learn and I need to use transactions to insert data in 2 diferent collections, but since yesterday I´m trying and this error message pops in my screen … “Transaction numbers are only allowed on a replica set member or mongos”. I already tryed the node university video and copy the code with my data, and all others internet sources, but i have no success.\nI am using an stand alone windows with node (all updated). Is it possible to use transactions in my environment? how ?\nps- I tryed to set up the replication in cfg file, but when I do it the mongo service does not run anymore. ( I also set up the security: authorization enable)", "username": "Gustavo_Castro" }, { "code": "", "text": "Post the Config File? I got some time to kill if you’d like me to look at it.", "username": "Brock_Leonard" }, { "code": "", "text": "storage:\ndbPath: C:\\Program Files\\MongoDB\\Server\\6.0\\data\njournal:\nenabled: truesystemLog:\ndestination: file\nlogAppend: true\npath: C:\\Program Files\\MongoDB\\Server\\6.0\\log\\mongod.lognet:\nport: 27017\nbindIp: 127.0.0.1#processManagement:security:\nauthorization: enabled#operationProfiling:#replication:#sharding:#auditLog:#snmp:", "username": "Gustavo_Castro" }, { "code": "mongod --config \"M:\\path\\mongod.cfg\"\n\nmongo\n> rs.initiate()\n\n> rs.status()\n\n", "text": "@Gustavo_CastroSo it looks like you’re trying to do this for a standalone instance, so do the following:Stop the service.Build a new config file like this (add more fields but this is the minimum you have to have):\nsystemLog:\ndestination: file\npath: “mongod.log”\nstorage:\ndbPath: “M:\\data\\db”\nreplication:\nreplSetName: “replicaSetName”Start it with the config file.Let me know if this works out for you.EDIT@Gustavo_Castro We want to build this with the bare minimum we need, so that way we can narrow down where the configuration is failing.", "username": "Brock_Leonard" }, { "code": "", "text": "This is a bit surprising to me. I thought transactions are also available in a standalone node.\nJust use a single node replica set for it.", "username": "Kobe_W" } ]
May I use transactions in a standalone Windows server with Node?
2023-09-21T21:05:18.725Z
May I use transactions in a standalone Windows server with Node?
470
null
[ "aggregation", "node-js", "crud" ]
[ { "code": "{\n owner: String,\n items: [ {name: String, amount: Number }]\n}\nInventory.findOneAndUpdate({ owner: \"test\" }, {\n $inc: {\n 'items.$[aa].amount': 10,\n 'items.$[bb].amount': 2,\n 'items.$[cc].amount': 100,\n 'items.$[dd].amount': 50\n }\n }, {\n arrayFilters: [\n { 'aa.name': 'gold' },\n { 'bb.name': 'gems' },\n { 'cc.name': 'food' },\n { 'dd.name': 'wood' }\n ]\n }).exec();\n", "text": "I’m trying to make a player inventory system, where all items the player has are in an array.if a player does something which rewards more then one item i would like to do it all in one query, currently i have this.this works great if the player already has at least one of each item, but if they for instance don’t have any wood yet, wood doesn’t get added. how would i go about creating any items that don’t already exist.I’ve looked around and I’m almost entirely certain I’ll need to use aggregation but having only picked up Mongo a couple days ago can’t figure out which parts I need.", "username": "Damorr" }, { "code": "", "text": "You have embedded documents in an array and you want to update the document in that array. If there is a document doesn’t yet exist in that array, you can use $addToSet in your update statement.", "username": "Zhen_Qu" }, { "code": "", "text": "The issue i have when using $addToSet is that even if I have the item, if the amount is different it still adds a new entry into the array since the objects are not fully equal.This question here is almost exactly what i need. the example on the post shows how to add or ignore multiple objects based on keys. and the linked video shows to increment a value or create an entry using a single object’s key. but i have yet to find out how to combine them into one.", "username": "Damorr" }, { "code": "let test = [{name: \"Gold\", amount: 10}, {name: \"Gems\", amount: 2}, {name: \"Food\", amount: 100}, {name: \"Wood\", amount: 50}]\n\nInventory.updateOne({ owner: id }, [{\n $addFields: {\n items: {\n $reduce: {\n input: test,\n initialValue: \"$items\",\n in: {\n $cond: [\n { $in: [\"$$this.name\", \"$items.name\"] },\n {\n $map: {\n input: \"$$value\",\n as: \"el\",\n in: {\n $cond: {\n if: { $ne: [\"$$el.name\", \"$$this.name\"] },\n then: \"$$el\",\n else: { name: \"$$el.name\", amount: { $sum: [\"$$el.amount\", \"$$this.amount\"] } }\n }\n }\n }\n },\n { $concatArrays: [[\"$$this\"], \"$$value\"] }\n ]\n }\n }\n }\n }\n}]).exec();\n", "text": "Update, i played around and eventually got something to work.It most certainly doesn’t scale super well, and I’m open to improvements, but it does work for what i had in mind.", "username": "Damorr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Updating multiple items in array and creating those that don't exist
2023-09-20T23:06:06.560Z
Updating multiple items in array and creating those that don&rsquo;t exist
361
null
[ "node-js", "charts" ]
[ { "code": "", "text": "I am trying to figure out how I can use the Mongodb JS SDK to modify the binning setting in my data. I am not able to find anything in the SDK documentation- basically my use case is- I want to show a Day of the Month bin when the user selects monthly in a drop down, and a Week bin when the user selects weekly and so on. Is this an option?", "username": "Vibha_Gopal" }, { "code": "", "text": "Have you found out the answer of it ? I want to do the same. How can I if you know ?", "username": "Mujtaba_Ahmed" }, { "code": "", "text": "Unfortunately this is not supported yet, but the ability to dynamically change things like binning is planned for the future. For now, you’d need to create multiple charts with different binning settings, and swap in the correct one for the user’s context.Tom", "username": "tomhollander" } ]
Mongodb Charts SDK - dynamically configure binning
2021-12-31T09:03:34.581Z
Mongodb Charts SDK - dynamically configure binning
3,106
null
[]
[ { "code": "", "text": "I am trying to convert a string to Decimal128 in NodeJs but I keep getting the following error “TypeError: Cannot read property ‘fromString’ of undefined”. Here is a snippet of my code.Blockquote\nconst {MongoClient: mongodb} = require(“mongodb”);\nconst MongoClient = mongodb.MongoClient;\nconst Decimal128 = mongodb.Decimal128;\n…\nprice: Decimal128.fromString(req.body.price.toString()), // store this as 128bit decimal in MongoDB", "username": "Newbie" }, { "code": "", "text": "const { Decimal128 } = require(‘mongodb’)\n/////////////////////////////////////////////////////\nimport { Decimal128 } from ‘mongodb’;", "username": "Belal_mohsen" }, { "code": "", "text": "HI, Actually i want to use Decimal128 to convert the value into decimal128 in mongodb triigger\nconst { Decimal128 } = require(‘mongodb’)\nthat is not working after I have install the dependency mongodbSo, can give me some suggestions", "username": "Tripti_Kothari" } ]
Decimal128.fromString gives me an error
2021-12-02T03:15:06.973Z
Decimal128.fromString gives me an error
2,928
https://www.mongodb.com/…_2_1024x484.jpeg
[]
[ { "code": "", "text": "IMG-20200916-WA00011280×606 370 KBMy mongo server closes automatically after running for few minutes after I start it.Any helps?", "username": "Atul_tiwari" }, { "code": "", "text": "Can you please provide the entire log (using paste bin for example) along with the actions you are doing right before the error happens?\nAlso, can you share the configuration of this cluster and how you are starting the node(s)?\nDo you also have enough RAM / disk space in this system?Also, are you running on a supported platform? Which one?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I got the same error, i am using windows 10 updated, with 6 gb of ram , 78 gb free space on my c drive . its working on my office pc very well but here at home at my HP Compaq 6000 business pc all in one, following cluster i am pasting from the log file. please can you tell me what should i do now?", "username": "Shujaat_Ali_Khan" } ]
Immediate exit due to unhandled exception
2020-09-16T08:58:11.726Z
Immediate exit due to unhandled exception
4,348
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "mongoDBconnection.jsconst mongoose = require('mongoose');\nrequire('dotenv').config({ path: __dirname + '/../../../.env' });\n\nconst dbURI = 'mongodb+srv://' + process.env.DBUSER + ':' + process.env.DBPASSWD + process.env.CLUSTER + '.mongodb.net/' +\n process.env.DB + '?retryWrites=true&w=majority';\n\nconst dbOptions = {\n useNewUrlParser: true,\n useUnifiedTopology: true\n}\n\nconst connectToDatabase = async () => {\n mongoose.connect(dbURI, dbOptions)\n .then(result => console.log(\"Database connected\"))\n .catch(error => console.log(error));\n};\n\nmodule.exports = connectToDatabase;\nconst express = require('express');\nconst exphbs = require('express-handlebars');\nconst mongoose = require('mongoose');\nconst handlebars = require(\"handlebars\");\nconst connectToDatabase = require('./Connection/dbConnection.js');\nrequire('dotenv').config({ path: __dirname + '/./../../.env' })\n\nconnectToDatabase();\n\nconst app = express();\n\n// Middlewares\napp.use(express.json());\napp.use(express.urlencoded({ extended: false }));\napp.use(express.static('public'));\n\napp.use('', require('./routes/routes.js'));\n\napp.engine('handlebars', exphbs.engine({\n defaultLayout: 'main'\n}));\napp.set(\"view engine\", \"handlebars\");\n\nconst PORT = process.env.PORT || 3000;\napp.listen(PORT, () => console.log(`App listening port ${PORT}`));\nenv_variables.yamlenv_variables:\n KEY = 'random-String-here'\n DBUSER = 'mondodb-atlas-username'\n DBPASSWD = 'mondodb-atlas-password'\n DB = 'database name'\n CLUSTER = '@cluster123.example'\napp.yamlincludes:\n - env_variables.yaml\n\nruntime: nodejs20\nconnection.jsconst mongoose = require('mongoose');\nconst fs = require('fs');\n\n// Check if the app is deployed by looking for a deployment-specific environment variable\nconst isDeployed = process.env.DEPLOYED === 'true';\n\n// Function to load environment variables\nconst loadEnvVariables = () => {\n if (isDeployed) {\n // Use env_variables.yaml when deployed\n const yaml = require('js-yaml');\n const envVarsYaml = fs.readFileSync('env_variables.yaml', 'utf8');\n const envVars = yaml.safeLoad(envVarsYaml);\n return envVars;\n } else {\n // Use .env when running locally\n require('dotenv').config({ path: __dirname + '/../../../.env' });\n return process.env;\n }\n};\n\nconst env = loadEnvVariables();\n\nconst dbURI = 'mongodb+srv://' + env.DBUSER + ':' + env.DBPASSWD + env.CLUSTER + '.mongodb.net/' +\n env.DB + '?retryWrites=true&w=majority';\n\nconst dbOptions = {\n useNewUrlParser: true,\n useUnifiedTopology: true\n};\n\nconst connectToDatabase = async () => {\n mongoose.connect(dbURI, dbOptions)\n .then(result => console.log(\"Database connected\"))\n .catch(error => console.log(error));\n};\n\nmodule.exports = connectToDatabase;\nServer side error creating a userdbURIdotevnenv_variablemongoDB AtlasGoogle Cloud", "text": "I have created an express server that adds users to mongoDB. On local host it works fine. However when I deployed it to google cloud it doesn’t add anything to the database. So I believe that I need to configure my connection.js differently. Here is the code that I am using:connection.js:index.js:And here is what I tried to do to make the deployed app work:Of course this solution didn’t work. Every time I try to add a user I get Server side error creating a user Even if I am explicitly using the environment variables values in the dbURI. 
So I am wondering what is the approach that I need to do so that If I am working in development the code uses the dotevn file and if the app is deployed it uses the env_variable file to connect to the database. But maybe the the verification for production and development isn’t needed since mongoDB Atlas is hosted online?First time deploying an app and first time using Google Cloud. Any help would be much apprecitaed!", "username": "Basil_Omsha" }, { "code": "const mongoose = require('mongoose');\n\n// Use the MONGODB_URI environment variable directly\nconst dbURI = process.env.MONGODB_URI;\n\nconst dbOptions = {\n useNewUrlParser: true,\n useUnifiedTopology: true\n}\n\nconst connectToDatabase = async () => {\n mongoose.connect(dbURI, dbOptions)\n .then(result => console.log(\"Database connected\"))\n .catch(error => console.log(error));\n};\n\nmodule.exports = connectToDatabase;\n\nenv_variables:\n MONGODB_URI: 'mongodb+srv://your-username:[email protected]/your-database?retryWrites=true&w=majority'\n\n", "text": "Let’s try to simplify this:What I wrote this to do in relation to your stuff above, is it’s meant to set your connection to the mongoldb URI unconditionally.You need to modify a bit better for your environment of course, but something like this is what you probably are more so looking for so that you just focus on the URI being defacto in your deployment.If you have any questions hit me up, and we can even setup a one on one via discord or google meets if you really need. But this is overall the easiest way to do what what you’re wanting by making google just use the MONGODB_URI.In your APP.YAML do something like this, of course not exactly but something like it that will put the URI to your YAML env.", "username": "Brock_Leonard" } ]
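Editor's note: a hedged sketch of the single-code-path pattern suggested above — read one MONGODB_URI variable everywhere, let a local .env file supply it in development, and let app.yaml's env_variables supply it on App Engine. The variable names are placeholders; keep the real URI out of version control.

    // connection.js
    if (!process.env.MONGODB_URI) {
      // only needed locally; on App Engine the variable is already set from app.yaml
      require('dotenv').config();
    }

    const mongoose = require('mongoose');

    const connectToDatabase = () =>
      mongoose.connect(process.env.MONGODB_URI)
        .then(() => console.log('Database connected'))
        .catch((error) => console.log(error));

    module.exports = connectToDatabase;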
How to connect to MongoDB Atlas from a deployed application in Google Cloud
2023-09-21T18:11:08.652Z
How to connect to MongoDB Atlas from a deployed application in Google Cloud
299
null
[ "compass", "atlas-cluster" ]
[ { "code": "#define MyMONGO_DBASE \"https://us-east-2.aws.data.mongodb-api.com/app/data-xxxxx/endpoint/ur_sensor/do\"\n\nchar serverAddressM[] = \"us-east-2.aws.data.mongodb-api.com\";\n\nconst char resourceM[] = \"/app/data-xxxxx/endpoint/ur_sensor/do\";\n\nHttpClient shttp(sclient, serverAddressM, sport);\n\nsclient.connect(serverAddressM, 443)\n\nsclient.print(String(\"POST \") + resourceM + \" HTTP/1.1\\r\\n\");\n\nsclient.print(String(\"Host: \") + serverAddressM + \"\\r\\n\");\n\netc……\n\nsclient.println(postData);\n\n", "text": "Please help me understand MongoDB connections and networking better.I developed a sensor using the Arduino IDE and signed up for a free Cluster on Mongodb and created my own database to develop and test the code. I’ve successfully implemented what I wanted. Using Atlas/Mongodb and its Data API , I created an endpoint in the form of a URL and plugged that into the code.I use Https POST requests in my Arduino code to send to the database so I have definitions and constructs such as :(I can share my code but it’s about 1500 lines long so I thought I’d first try just showing the relevant constructs)I understand what is going with my code finding its way to the endpoint using the URL.So now I want to give the sensor and code to someone else that have their own MongoDB. I obviously need to change the connection path. They gave me a login and password for access to their database through Compass in the form of : mongodb+srv://:@theirorg.xtcr1.gcp.mongodb.net/?retryWrites=true&w=majority. I believe that is referred to as a URI.I can log into Compass with this connection string and I created a database (called sensors) and collection (called DO) in Compass, much like mine own database.But NOW WHAT? I don’t have an Atlas connection to this new database and the Atlas features that I originally used in my own DB to create a URL.I’m told I only need this connection string. I don’t see a lot of options through Compass. The connection string Compass gives me is the same as the login so its implied that is all I need. I understand how to code with a URL but how do I change my code to use this connection string without Atlas? Is it true this is all I need? I don’t need a URL?Can someone help me make this leap and explain what I need to do to my original code using just a connection string to send the data to the database?..OR how to get a URL through only Compass?", "username": "kurt_h" }, { "code": "", "text": "Good afternoon, welcome to the MongoDB community.What error are you making? Access to the Atlas platform is private, so it is necessary to grant access to the cluster from your IP to the project, do you understand?", "username": "Samuel_84194" }, { "code": "", "text": "I have no error to discuss because I don’t know how to re-code from using a URL to this connection string, that is the question. I can’t try anything until I understand that. I do understand that eventually the person that owns the new database would need grant permission to the sensor box based on its IP. I would tell them that. However, we haven’t gotten to that point yet because my main question remains , do I need a URL or can the connection string be used? If the connection string itself can be used, how is that coded?. Are you implying the person needs to grant me access to their Atlas? 
If I had that access I could then create an endpoint point like I did with my prototype BUT I wasn’t given access because they said all I need is the connection string.", "username": "kurt_h" }, { "code": "mongodb+srv://server.example.com/?connectTimeoutMS=300000&authSource=aDifferentAuthDB", "text": "Hi, thanks for clarifying. Really the only thing you need is a connection string + releasing your IP to the cluster.In the case of Atlas it is a URI:\nmongodb+srv://server.example.com/?connectTimeoutMS=300000&authSource=aDifferentAuthDB", "username": "Samuel_84194" }, { "code": "char serverAddressM[] = \"mongodb+srv://server.example.com/?\";\nconst char resourceM[] = \"connectTimeoutMS=300000&authSource=aDifferentAuthDB\";\n\nHttpClient shttp(sclient, serverAddressM, sport);\nsclient.connect(serverAddressM, 443)\nsclient.print(String(\"POST \") + resourceM + \" HTTP/1.1\\r\\n\");\nsclient.print(String(\"Host: \") + serverAddressM + \"\\r\\n\");\netc....\n", "text": "OK, you are confirming only the URI is needed…but that is the part I don’t understand. Are you saying in my Arduino code I would make the following assignments?:Then use it like I did the URL?..", "username": "kurt_h" }, { "code": "#include <SPI.h>\n#include <Ethernet.h>\n#include <MongoDB.h>\n\n// MongoDB server configuration\nIPAddress serverIP(192, 168, 1, 100); // Replace with your MongoDB server's IP address\nint serverPort = 27017; // MongoDB default port\n\n// MongoDB client\nMongoClient client(serverIP, serverPort);\n\nvoid setup() {\n Ethernet.begin(mac); // Replace 'mac' with your Ethernet shield's MAC address\n Serial.begin(9600);\n}\n\nvoid loop() {\n if (client.connect(\"mydb\")) { // Connect to the \"mydb\" database\n // Perform MongoDB operations here\n\n client.stop(); // Close the connection when done\n }\n delay(10000); // Delay for 10 seconds before attempting another connection\n}\n", "text": "Like this:MongoDB typically doesn’t use HTTP for connections. Instead, it uses its own binary protocol.", "username": "Samuel_84194" }, { "code": "", "text": "Interesting, I never read that Http isn’t typical. Perhaps I used the wrong examples to build on.\nSo where does the connection string come in?", "username": "kurt_h" }, { "code": "#include <SPI.h>\n#include <Ethernet.h>\n#include <MongoDB.h>\n\n// Define connection information\nchar serverAddress[] = \"192.168.1.100\"; // MongoDB server IP address\nint serverPort = 27017; // MongoDB port\nchar dbName[] = \"mydb\"; // Database name\nchar username[] = \"your_username\"; // Username (if needed)\nchar password[] = \"your_password\"; // Password (if needed)\n\n// MongoDB client\nMongoClient client(serverAddress, serverPort);\n\nvoid setup() {\n Ethernet.begin(mac); // Replace 'mac' with your Ethernet shield's MAC address\n Serial.begin(9600);\n}\n\nvoid loop() {\n if (client.connect(dbName, username, password)) {\n // Perform MongoDB operations here\n\n client.stop(); // Close the connection when done\n }\n delay(10000); // Wait for 10 seconds before attempting another connection\n}\n", "text": "It is not common to use a connection string in the same way you would in more complete programming languages, such as Python, Node.js, or C#. 
Creating a connection string in the conventional way, such as in high-level languages, is not supported directly on Arduino due to limitations of available resources and libraries.", "username": "Samuel_84194" }, { "code": "MongoClient client(serverIP, serverPort);\n", "text": "I have been reading about mongdb+srv, seed lists and connection pools . So your comment on the connection string not being supported directly on Arduino and its libraries I now interpret to be that Arduino libraries, like the http client libraries for instance, need updating to handle the new syntax and the features the seed lists provide. Using these connection strings with this syntax can bring advantages like more robust connections.\nIt’s unfortunate Arduino doesn’t directly support this. It’s not that Mongodb can’t be used with Arduino, my functioning application proves it can, it is just it can’t be used in this way. I think that means I really need to use a URL in my current implementation, because I can’t use the connection string in my implementation.\nAlternatively, I need to re-design the code to be closer to the coding you show that uses direct IP addresses and ports and move away from http. By the way, your code examples show a call to MongoDB.h and MongoClient constructs, where is that coming from? I tried looking in some examples under the C++ drivers but I haven’t run across anything like that yet.Please let me know if my interpretations and understandings are correct.", "username": "kurt_h" }, { "code": "", "text": "This guy doesn’t exist, I took an adaptation of C and transformed it, but that’s what it was, I hadn’t noticed.", "username": "Samuel_84194" }, { "code": "", "text": "Hello Kurt,MongoDB does not have a driver for Arduino, so your approach using HTTP is the right one.MongoDB offers App-Services, and you can use HTTPS calls to connect and operate on your current database. Please follow the link here.I am not familiar with Arduino, but if it supports HTTPS, this is a good work-around. The Data API endpoints do not use SRV records.Cheers,Jorge", "username": "Jorge_Imperial-Sosa" } ]
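Editor's note: for the HTTPS route Jorge recommends, the hosted Data API exposes generic actions as well as custom endpoints; below is a hedged sketch of the raw insertOne request an Arduino HTTPS client would need to build. The app ID, API key, data source, and field names are placeholders — the exact URL comes from the App Services UI for the target project, which the cluster owner would have to enable and share.

    POST /app/<app-id>/endpoint/data/v1/action/insertOne HTTP/1.1
    Host: us-east-2.aws.data.mongodb-api.com
    Content-Type: application/json
    api-key: <data-api-key>

    {
      "dataSource": "Cluster0",
      "database": "sensors",
      "collection": "DO",
      "document": { "reading": 7.4, "recordedAt": { "$date": "2023-09-09T14:00:00Z" } }
    }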
Connection String vs URL for an endpoint
2023-09-09T14:57:16.335Z
Connection String vs URL for an endpoint
506
null
[ "java", "field-encryption" ]
[ { "code": "", "text": "I am using out of the box client side field level encryption feature of the Mongo Drivers in Java. We are able to get the field encrypted.\nI am using the “Local” KMS provider for our implementation, where we have our own logic to create the master key and fetch it from our APIs to populate in the AutoEncryptionSettings.But now we want to perform key rotations for security purposes.For this I was exploring the ClientEncryption’s rewrapManyDataKey method → \n[ClientEncryption (driver-sync 4.7.0 API) (mongodb.github.io)] ClientEncryption (driver-sync 4.7.0 API) (mongodb.github.io)What we want to do here is to supply the new master key in the above method and let the data keys in the keyVault get re-encrypted with the newly supplied master key.The official docs says that in case of “Local” KMS provider, the master key is not applicable for the rewrapManyDataKeyOptions parameter.\nDoes it mean that the rewrapping of the data keys with the new master key is not possible for the “Local” kms provider?\nIs there a solution for enabling key rotation for the “Local” kmsProvider.", "username": "Ishant_Tandulkar" }, { "code": "", "text": "Hello Ishant and welcome to the MongoDB Community,Rewrapping of local keys is in the roadmap and should be available in the coming months.Thank you,Cynthia", "username": "Cynthia_Braund" } ]
Mongo CSFLE | Data Key rotation with local KMS provider | Encryption
2023-09-06T09:37:40.775Z
Mongo CSFLE | Data Key rotation with local KMS provider | Encryption
378
null
[ "flexible-sync" ]
[ { "code": "history__realm_sync", "text": "Hi!\nI created an app following the O-Fish app mongo blog post.\nIn this app the image is first attached to the document and when the document is synced it is removed.\nThis post is using partitionned sync, the only thing I did different is using the flexible sync.My problem is with the history collection in the __realm_sync database created by Mongo. This collection keeps track of every change in each document of my database.\nThe problem here is that since it tracks each change it stores a document for the brief instant when the document is synced with the image embedded as base64 and it is taking a lot of storage.I was wondering if there was something I could do to prevent this. I already implemented trimming but I’m using a shared cluster and don’t have that much storage space.Is it possible to only delete some part of the history? My photo document is always synced with the base64 image embedded which is deleted directly by a trigger. In the end I only care about the photo document without the embedded image. Like if my document goes from A => B => C, is it possible to only keep A => C ?", "username": "Renaud_Aubert" }, { "code": "", "text": "Hello! What you’re describing is known as “compaction” and exists only for partition-based sync. This is because, for flexible sync, trimming is far more powerful - as it will remove all history entries that are older than the Max Offline Time.You have three options here:", "username": "Sudarshan_Muralidhar" }, { "code": "", "text": "Hi @Sudarshan_MuralidharThank you for your reply.\nThe second option is what I’m doing, but as mentionned in the MongoDB blog post for the O-Fish app I’m first attaching the image to the document and when the document is synced I’m uploading the image to S3 and deleting the image from the document. Just like the Mongo team in the blog post.Deleting history is great but my app is used during short “burst” and the history cannot be trimmed fast enough.\nIs there no way to compact the history even in flexible sync?\nOr to select which documents to keep in the history table?\nAs mentionned since the image is uploaded to S3 I don’t need to keep a history document specifying that my image was attached to the document.Best regards\nRenaud Aubert", "username": "Renaud_Aubert" }, { "code": "", "text": "Unfortunately, Flexible Sync does not support compaction today, as we instead delete all history that is older than your set “Max Offline Time”. I’ll take this feedback to the team though.Sudarshan", "username": "Sudarshan_Muralidhar" } ]
__realm_sync history taking up too much storage
2023-08-25T08:32:10.624Z
__realm_sync history taking up too much storage
629
null
[]
[ { "code": "", "text": "I need help please its really urgent", "username": "jeff_rayz" }, { "code": "", "text": "Hello, welcome to the MongoDB community.To access the mongodb shell use the mongosh cli", "username": "Samuel_84194" } ]
After installing MongoDB and running variables on Hyper terminal, I started the server with mongod; it started running, but typing mongo says bad request
2023-09-21T00:44:23.981Z
After installing MongoDB and running variables on Hyper terminal, I started the server with mongod; it started running, but typing mongo says bad request
204
https://www.mongodb.com/…b_2_1024x602.png
[]
[ { "code": "sudo systemctl start mongod# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 3389\n bindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\nsecurity:\n authorization: enabled\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n", "text": "Hi, all. I am new to MongoDB. I recently tried to set up a MongoDB deployment on a Linux VM machine.\nAfter I finished setting up the DB by following https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-red-hat/#std-label-install-mdb-community-redhat-centos,\nI created a user, enabled the authorization, set bind ip to 0.0.0.0, and set the port to 3389.\nAfter I run the command sudo systemctl start mongod. the db fail to start with exit code 48\nbelow are the log and configure files. any help would be appreciatedlog:\n\nimage1920×1129 268 KB\nconfig:", "username": "Samson" }, { "code": "getenforce", "text": "Hi @Samson, welcome to the forums.First thing I suggest when encountering permission issues on RedHat is checking the SELinux Policy using getenforceIf you are planning on using SELinux, you will have to update the policy to use non-standard configuration.\nOtherwise set SELinux to permissive or disable it entirely.Another possibility is as this is a VirtualMachine is if the Host OS is Windows and depending on the Virtualisation platform port 3389 is likely in use by Windows for RDP.", "username": "chris" }, { "code": "", "text": "@chris thank you for your help. I really appreciate your suggestion.", "username": "Samson" } ]
Issue with Starting MongoDB on Another Port
2023-09-21T10:51:04.154Z
Issue with Starting MongoDB on Another Port
333