Dataset columns: image_url (string, 113-131 chars), tags (list), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k)
null
[ "aggregation" ]
[ { "code": "[\n { \n $match: \n { \n \"fPort\": 60,\n \"rxInfo.time\": \n { \n \"$gte\": \"2020-12-01T00:00:00.000000Z\", \n \"$lte\": \"2020-12-31T59:59:59.935Z\"\n }\n }\n },\n { $limit: '1' }, ///// This only returns 1 record and not 1 per day.\n { \n $group: \n { \n _id: \"9cd9cb00000124c1\",\n \"Total\": \n {\n \"$sum\": \"$object.TotalDaily\" \n }\n }\n }\n]\n", "text": "Hi\nI’m trying to do to aggregate, sum up values in a month, but i only want to sum 1 entry per day, i tried to use $limit before the group but only get 1 entry return.Thanks.", "username": "Q_Lion" }, { "code": "[{\n $group: {\n _id: {\n year: { $year: \"$yourDateFieldName\" },\n month: { $month: \"$yourDateFieldName\" },\n day: { $dayOfMonth: \"$yourDateFieldName\" }\n },\n Total: { $sum: 1 }\n }\n}]\n[{\n $group: {\n _id: {\n year: { $year: \"$yourDateFieldName\" },\n month: { $month: \"$yourDateFieldName\" }\n },\n Total: { $sum: 1 }\n }\n}]\nyourDateFieldName$match$group", "text": "Hello @Q_Lion, Welcome to MongoDB Developer Community,As i can understand your question, you need to get count of documents yearly or monthly or daily,You can use $year, $month, $dayOfMinth Date Expression Operators.Example for Daily document count,Example for Monthly document count,You can replace yourDateFieldName to your collection field, and put your $match stage before this $group stage.", "username": "turivishal" }, { "code": "[\n { \n $match: \n { \n \"fPort\": 60,\n \"rxInfo.time\":\n {\n \"$gte\": \"2020-12-01T00:00:00.000000Z\",\n \"$lte\": \"2020-12-31T59:59:59.99\" \n }\n } \n }, \n { \n $project:\n { \n _id:1,\n year:\n {\n $year: \n { \n $dateFromString: \n {\n dateString: \n {\"$arrayElemAt\": [ \"$rxInfo.time\", 0 ] }\n }\n }\n },\n month:\n {\n $month: \n {\n $dateFromString: \n { \n dateString: \n {\"$arrayElemAt\": [ \"$rxInfo.time\", 0 ] }\n }\n }\n },\n day:\n {\n $dayOfMonth: \n {\n $dateFromString: \n { \n dateString: \n {\"$arrayElemAt\": [ \"$rxInfo.time\", 0 ] }\n }\n }\n },\n sum : \"$object.TotalDaily\"\n } \n }, \n { \n $group: \n { \n _id:\n {\n year:\"$year\",\n month:\"$month\",\n day:\"$day\"\n },\n TotalPerDay:\n {\n $sum:\"$sum\"\n }\n }\n }, \n { \n $group: \n { \n _id: \"9cd9cb00000124c1\",\n \"MonthlyTotal\":\n {\n \"$sum\": \"$TotalPerDay\" \n },\n }\n }\n]", "text": "I think i have found the query.i managed to get 1 record per day and sun the values for 1 month, Thanks", "username": "Q_Lion" }, { "code": "$object.TotalDaily{\n _id: {\n year:{$year:\"$rxInfo.time\"},\n month:{$month:\"$rxInfo.time\"},\n day:{$dayOfMonth:\"$rxInfo.time\"}\n },\n \"Total\": \n {\n \"$sum\": \"$object.TotalDaily\"\n }\n}\n", "text": "$object.TotalDailyHi @turivishal Thanks for the response,no i don’t want count it, i want to add value of $object.TotalDaily together, but u only want to add 1 value per day, per month.I tried to use $group this way but i get a error.but keeps telling me “can’t convert from BSON type array to Date”.Thanks", "username": "Q_Lion" }, { "code": "\"$rxInfo.time\"rxInfo$group$group$addFields$group", "text": "The field \"$rxInfo.time\" having string type date and also inside an array, it should be date type and outside the array, you need to deconstruct the rxInfo array using $unwind, and than use $group stage.You have already did in your second reply, you can convert it using $dateFromString or $toDate Date Expression Operators, Before the $group stage inside $addFields, or inside the $group stage that you have already did.", "username": "turivishal" }, { "code": "", "text": "Hi, I need to 
order all the data by months from an aggregate in a list recordsomething like thisJanuary01-01-2020filed1,field2,field302-01-2020fieild1,field2,field3February01-02-2020field1,field2,field302-02-2020field1,field2,field3April01-04-2020field1,field2,field302-04-2020field1,field2,field3And I have no clue how to do it.", "username": "jack_austin" }, { "code": "", "text": "Hello @jack_austin, Welcome to the MongoDB community forum,This topic is not related to the current topic, Can you please ask your question in a new post?", "username": "turivishal" }, { "code": "", "text": "@turivishal ok no problem. I will create a new topic.", "username": "jack_austin" } ]
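A minimal mongosh sketch of the approach worked out in this thread: reduce to one value per day, then sum those days for the month. The collection name `readings` is a placeholder; the field names (`fPort`, `rxInfo.time` stored as an ISO string inside an array, `object.TotalDaily`) come from the posted pipeline.

```javascript
db.readings.aggregate([
  { $match: {
      fPort: 60,
      "rxInfo.time": { $gte: "2020-12-01T00:00:00.000Z", $lte: "2020-12-31T23:59:59.999Z" }
  } },
  // Parse the first rxInfo timestamp and reduce it to a per-day key.
  { $addFields: {
      day: { $dateToString: {
        format: "%Y-%m-%d",
        date: { $dateFromString: { dateString: { $arrayElemAt: [ "$rxInfo.time", 0 ] } } }
      } }
  } },
  // Keep a single reading per day instead of summing every document.
  { $group: { _id: "$day", daily: { $first: "$object.TotalDaily" } } },
  // Add the daily values together for the month.
  { $group: { _id: null, monthlyTotal: { $sum: "$daily" } } }
])
```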
How to sum up one record per day for one month?
2020-12-14T09:28:07.347Z
How to sum up one record per day for one month?
13,148
null
[]
[ { "code": "", "text": "Hello, with MongoDB version 5.0, I can use change stream option 'full_document=‘updateLookup’ to get fullDocument of the inserted and updated documents but not for delete operations. For auditing purpose, I also need to put a record for deleted records and only documentKey in the change stream record is not sufficient? Does it require upgrade to version 6 to get the pre-image for delete operations?", "username": "Linda_Peng" }, { "code": "", "text": "Hi @Linda_PengI believe this is available since MongoDB 6.0. From Change Streams with Document Pre- and Post-Images:Starting in MongoDB 6.0, you can use change stream events to output the version of a document before and after changes (the document pre- and post-images):Best regards\nKevin", "username": "kevinadi" } ]
Which version of change streams supports pre-image for delete operations?
2023-03-07T21:58:35.034Z
Which version of change streams supports pre-image for delete operations?
532
null
[ "mongodb-shell", "php" ]
[ { "code": "", "text": "Hi community,\nI installed the mongo-server community edition and mongosh on a linux-vm (ubuntu 20.04) with an IP-4 let’s say 1:2:3:4\nNo replicas at the moment.Now I want to read and write data from my local PC with php.My questions are:More securing: Yes, I know, there is a way to go \nFor developing purposes user and password would be enough but how is the ConnString ? Best and thanks in advance\nRobert", "username": "Robert_Haupt" }, { "code": "mongodmongodb://127.0.0.1:27017mongodb+srv", "text": "Hi @Robert_HauptDepending on how you install MongoDB, by default the mongod process binds only to localhost so it’s not accessible through the network. On top of that, even if you enable auth on the MongoDB server, without any user defined in the server, the localhost exception is active and you can connect to it locally. The exception will be disabled once you create a user on the server.In this case, the connection string URI should be just mongodb://127.0.0.1:27017 (see Standard Connection String Format).With regard to your questions:I don’t think you should tamper with it. Having a separate user is standard procedure for installation in Linux.What do you mean by “user rights” in this context?Yes, once you enable auth and created a user.Correct. The mongodb+srv protocol uses DNS to supply the actual server addresses, which you don’t need for local deployment.It’s always good practice to secure your deployment, even locally. Please see Use SCRAM to Authenticate Clients on how to enable this.Best regards\nKevin", "username": "kevinadi" } ]
Connection string for community edition with username and password
2023-03-04T13:13:27.540Z
Connection string for community edition with username and password
1,029
null
[ "queries", "python" ]
[ { "code": "def gen_random_codes(count, username, product_name=None, batch_id=None):\n return [\n {\n \"_id\": ulid.new().str,\n \"owner_name\": username,\n \"generated_on\": datetime.now(timezone.utc),\n \"code\": code_gen(8),\n \"product\": product_name,\n \"batch_id\": batch_id,\n \"status\": False,\n }\n for _ in range(count)\n ]\ndb.codes.insert_many(generated_codes)db.batch_request.insert_one(\n {\n \"_id\": ulid.new().str,\n \"email\": user_email,\n \"start\": start_series,\n \"end\": end_series,\n \"count\": total_count,\n \"timestamp\": datetime.now(timezone.utc)}\n )\nstart_series=codes[0][\"_id\"], end_series=codes[-1][\"_id\"]def get_code_list_by_series(start, end):\n return list(db.codes.find({\"owner_name\": current_user.id, \"$and\": [{\"_id\": {\"$gte\": start}}, {\"_id\": {\"$lte\": end}}]}))\n", "text": "in my python app i have a function that generated random codes:Then i insert them using this:db.codes.insert_many(generated_codes)\nafter this i insert the batch information in another collection:start series and end series are retrieved using start_series=codes[0][\"_id\"], end_series=codes[-1][\"_id\"] now in another route i want to get all the codelist associated with that batch using the starting ulid and ending ulid: @user.route('/download-batch', methods=[\"GET\", \"POST\"])@login_requireddef do - Pastebin.com and here is the function:i am usning $gte and $lte to fetch all the codes generated for that batch. But the problem is it is not returning the correct number of codes. eg. if 100 codes are inserted it’s returning probably 60-70 codes. Can anyone help me what i am doing wrong. Because as per my knowledge we can sort the ulid lexiographicallyhere is the db schema of db.codes: Batches--------------_id:\"01GTFAJM8HNE96BSS8TQ9AENMW\"email:\"[email protected]\" - Pastebin.com", "username": "BIJAY_NAYAK" }, { "code": "_id\"_id\": ulid.new().str,\n", "text": "Hi @BIJAY_NAYAK welcome to the community!the problem is it is not returning the correct number of codes. eg. if 100 codes are inserted it’s returning probably 60-70 codes.Are you sure the query is supposed to return 100 codes instead of only 60-70 of them? I don’t see anything wrong with the query itself (it’s just a range query on _id), so if the query looks right, the only explanation is that the data is not correct.as per my knowledge we can sort the ulid lexiographicallyPerhaps this is the issue here. Maybe ULID doesn’t behave like you think it does?One quick way to test your assumptions is to replace this line:with something else that you know will definitely work as expected. Maybe a sequential integer? This way, if your query does return the expected number, we can definitely say that the ULID generator doesn’t behave like you expect.Best regards\nKevin", "username": "kevinadi" } ]
Issue while sorting by ULID
2023-03-02T15:29:15.349Z
Issue while sorting by ULID
971
https://www.mongodb.com/…a_2_1024x503.png
[]
[ { "code": "", "text": "\nmong1907×937 158 KB\n\ni tried many time and could not move to the next chapter. Can anyone help me with this?", "username": "Thrushwanth_Kakuturu" }, { "code": "", "text": "Hi @Thrushwanth_Kakuturu,Welcome to the MongoDB Community forums Here you need to authenticate your shell with the created MongoDB Atlas cluster:To authenticate with the shell, follow these steps:Open the link provided in the shell: Cloud: MongoDB CloudEnter the one-time verification code that was given to you in the shell. (If you are not logged in - it will redirect you to log in to your MongoDB Atlas Cluster and then the following window prompt will open)\n\nauthenicate-cli2494×1258 167 KB\nFollow the additional prompts in the shell.Click on the Check button located in the bottom right corner to complete the lab.If you have any doubts, please feel free to reach out to us.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I cannot find the next button in the Lab: Creating and Deploying an Atlas Cluster
2023-03-08T01:55:11.560Z
I cannot find the next button in the Lab: Creating and Deploying an Atlas Cluster
1,066
null
[ "queries", "data-modeling", "indexes" ]
[ { "code": "", "text": "my document have 100 fields\ni require find documents by 30 fields at ANY COMBINATION (ex field1+field17, field20+filed5+filed3…)\nmongo use full scan in 99% of same search and use 1 index by datei create indexes(60) for most popular queries, but now i have index limits and indexes size=data sizeany hint for organization complex search? full scan is very slow", "username": "_N_A37" }, { "code": "{\n _id: 1,\n field_1: x,\n field_2: y,\n field_3: z.\n ...\n}\n{\n _id: 1,\n attr: [\n key: 'field_1', value: x,\n key: 'field_2', value: y,\n key: 'field_3', value: z,\n ...\n ]\n}\ndb.collection.createIndex({'attr.key': 1, 'attr.value': 1})\n", "text": "Hi @_N_A37 welcome to the community!100 fields is a lot, and your search criteria is difficult to execute with ordinary schema design.One possible solution is to use the attribute pattern. In short, instead of having documents like this:using the attribute pattern means refactoring it to look like this:You can then create an index for this document like:However note that this is just one possible solution, and depending on your intended use case it may or may not be applicable to you.If you require further help, could you post:Best regards\nKevin", "username": "kevinadi" } ]
Search by many fields in any combination
2023-03-04T19:29:52.562Z
Search by many fields in any combination
868
null
[]
[ { "code": "", "text": "Hi. bros\nI got problem to develop slow query monitoring system for MongoDB 4.0.I want to use mongod.log and I don’t want to use system.profile collection to get slow query information.how can i find slow query in mongod.log?\nsorry for my bad english.\nthank you for reading ", "username": "jeongmu.park" }, { "code": "mongod--logpath{\"t\":{\"$date\":\"2023-03-08T13:50:43.571+11:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn613\",\"msg\":\"Slow query\" ....", "text": "Hi @jeongmu.parkBy default mongod will record queries slower than 100ms to the logs, which will be printed to stdout by default, or directed to a file using the --logpath option.In later versions of MongoDB (6.0.4 in this example), it will look like this:{\"t\":{\"$date\":\"2023-03-08T13:50:43.571+11:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn613\",\"msg\":\"Slow query\" ....In 4.0 this may look a bit different, but since the 4.0 series was not supported anymore as per April 2022 I would encourage you to upgrade to a supported version.This slow query threshold can be customized using the db.setProfilingLevel() command.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
In MongoDB 4.0, can I get slow query information in mongod.log?
2023-03-06T13:14:10.196Z
In MongoDB 4.0, can I get slow query information in mongod.log?
1,571
null
[]
[ { "code": "", "text": "Hello,Is there a way to configure a maximum time for running queries on a MongoDB instance?", "username": "andresil" }, { "code": "", "text": "Hi @andresilAt the moment, the parameter is called maxTimeMS and it is configurable per-query/operation.It is not currently possible to set this globally on a server yet, but SERVER-13775 is the ticket that tracks this feature request. Please feel free to comment/vote/watch the ticket for future updates on this.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there a way to configure timeouts for queries on `mongod`?
2023-03-06T15:23:46.090Z
Is there a way to configure timeouts for queries on `mongod`?
436
null
[]
[ { "code": "", "text": "As I could see I can only set an expire date for referenced documents not embedded ones. I have a subdocument thats embedded (an array of objects) and I would like to expire each object at a specified time. Is this anyhow possible if said documents are embedded into a collection? Thanks!", "username": "Anna_N_A" }, { "code": "", "text": "Hi @Anna_N_AAre you referring to the TTL index?In the case of the TTL index, it can expire (delete) documents after a certain amount of time or at a specific clock time. However it works on a per-document basis and not on parts of a document. That is, if you have e.g. an array of sub-documents with its own timestamp, it cannot reach into the document and remove array elements.If this is what you want, I suggest you unwrap the array into separate documents, each with their separate timestamp so the TTL index can operate on them.Otherwise if this is not what you mean, could you provide some example documents and what operation you intended?Best regards\nKevin", "username": "kevinadi" } ]
Is it possible to have specific embedded documents expire on a specified date?
2023-03-07T07:18:25.890Z
Is it possible to have specific embedded documents expire on a specified date?
707
https://www.mongodb.com/…4423c546b06d.png
[ "atlas", "charts" ]
[ { "code": "", "text": "Hi!\nIs there a way to show the menu filter to the user in an embedded dashboard (or chart) ?\nI know code (“chart.setFilter”) is one option. But, I mean, can the user see the menu that I can see in Atlas charts? In that way, they update filters and see the result in real time.\nUntitled794×508 23.6 KB\nThanks in advance.", "username": "Orlando_Herrera" }, { "code": "", "text": "@Orlando_Herrera This question was probably missed earlier. We don’t support the UI(shown in the pic) in embedded charts or dashboards. But this can be easily built using our easy-to-use chart and dashboard filtering logic in the backend and your own UI in the front end.", "username": "Avinash_Prasad" }, { "code": "", "text": "Hi @Avinash_Prasad . Thank you for your answer.I see this post in mongodb charts: COVID-19 Dashboard for ReactWhat are the steps to do something like this? (I mean, show the options like: show filters, get data, etc)\nis it an “iFrame” ?\nmongocharts1869×773 89.7 KB\n", "username": "Orlando_Herrera" } ]
How do I show the filter menu in an embedded dashboard (or chart)?
2023-02-12T18:22:49.501Z
How do I show the filter menu in an embedded dashboard (or chart)?
1,282
null
[ "backup" ]
[ { "code": "", "text": "What is the permission for downloading a Backup on Atlas.\nI have tried all permissions and it doesn’t work, only Project Owner works. Is there any other custom configuration or do I have to add users as Project Owners.", "username": "Rron_94649" }, { "code": "Project Owner", "text": "Hi @Rron_94649,Currently the Project Owner role is required for managing backups. In the meantime, you can vote the following feedback posts which I believe are in relations to what you are after:The latter of the feedback posts is probably more specific to your post here but please do note that the Granular Permissions feedback post has a status of STARTED which indicates work to address the request has started.Regards,\nJason", "username": "Jason_Tran" } ]
MongoDB Backup Download Permission
2023-03-07T08:29:52.698Z
MongoDB Backup Download Permission
890
null
[ "node-js", "app-services-user-auth", "next-js", "developer-hub" ]
[ { "code": "", "text": "Hello. I’m new to Mongo, and Next JS.\nI read the article on How to Integrate MongoDB Into Your Next.js App today and now I’m looking for more!I’d like to build a NextJS app that uses Realm for user authentication with protected API routes for user data. Does anyone have a link to an tutorial article, or Github Repo or any other resources to do this?Looking forward to your responses.", "username": "Brian_Lang" }, { "code": "", "text": "Hey Brian - While I don’t think we have one, it seems like Vercel has an example of using Realm with Next.JS. Would this help? - Example with Monogdb ReamlWeb (#14555) · vercel/next.js@1b5ca78 · GitHub", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I will take a look when I can.", "username": "Brian_Lang" }, { "code": "", "text": "Hey Brian,Depending on your approach to authentication, you’d either use the Realm Web SDK or the Node.js SDK. But when it comes to Auth, are you trying to use Realm solely for auth or do you need other realm features in your app. The only reason I ask is that if you just need user authentication, a specific library like Next-Auth could be a better choice.", "username": "ado" }, { "code": "", "text": "I’m planning to build a Admin/Dashboard app for authenticated users only. Non-authenticated users will only see a logon screen.I would love to use NextAuth if it supports Realm features. Other developers in my company are working with Realm for other, non-Next JS apps, so I think I need to find a way to tie into Realm for authentication.Does NextAuth work with Realm?", "username": "Brian_Lang" }, { "code": "", "text": "Hey Brian - not sure what your implementation details are for auth in the other apps, but it seems like NextAuth would work with Realm if you use JWT Authentication. After using NextAuth to sign in, you can pass in the JWT that you get from the callback to Realm’s JWT Auth provider", "username": "Sumedha_Mehta1" }, { "code": "", "text": "How can assure I can call app.currentUser\nin the getServerSideProps ? It seems undefined if I call It on getServerSideProps", "username": "Aaron_Parducho" }, { "code": "", "text": "Is there a concrete example for this? In the docs https://www.mongodb.com/docs/realm/web/nextjs/#log-the-user-in it saysIn a real application, you would want to have a more complex authentication flow. For more information, refer to the Next.js authentication documentation.Could anyone share of an example using Next Auth and Realm Web?", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "I’d also like to see a concrete example of a production-ready authentication flow.@Alexandar_Dimcevski did you manage to find a good example of figure out a good approach?", "username": "Ian" }, { "code": "", "text": "Unfortunately not - Let me know if you do!", "username": "Alexandar_Dimcevski" } ]
Help with Mongo Realm and NextJS with User Authentication?
2020-10-27T02:51:58.128Z
Help with Mongo Realm and NextJS with User Authentication?
10,492
null
[ "atlas-cluster", "rust" ]
[ { "code": "use mongodb::{bson::{doc,Document},options::ClientOptions, Client};\n//..\nlet client = Client::with_options(ClientOptions::parse(\"mongodb+srv://user:[email protected]/?retryWrites=true&w=majority\").await.unwrap()).unwrap();\nlet coll = client.database(\"braq\").collection(\"users\");\nlet user = coll.find_one(doc!{\"user\":\"user1\"},None).await.unwrap();\nprintln!(\"Hello, {}!\",user.user1);\n#9 311.1 error[E0698]: type inside `async fn` body must be known in this context\n#9 311.1 --> src/main.rs:46:35\n#9 311.1 46 | let db = client.database(\"braq\").collection(\"users\");\n#9 311.1 | ^^^^^^^^^^ cannot infer type for type parameter `T` declared on the associated function `collection`\n#9 311.1 note: the type is part of the `async fn` body because of this `await`\n#9 311.1 --> src/main.rs:47:50\n#9 311.1 47 | let deb = db.find_one(doc!{\"user\":\"user1\"},None).await.unwrap();\n#9 311.1 | ^^^^^^\n#9 311.1 \n------\n > [4/4] RUN cargo install --path .:\n#9 311.1 46 | let db = client.database(\"braq\").collection(\"users\");\n#9 311.1 | ^^^^^^^^^^ cannot infer type for type parameter `T` declared on the associated function `collection`\n#9 311.1 note: the type is part of the `async fn` body because of this `await`\n#9 311.1 --> src/main.rs:47:50\n#9 311.1 47 | let deb = db.find_one(doc{\"user\":\"user1\"},None).await.unwrap();\n#9 311.1 | ^^^^^^\n", "text": "I’m trying to get the value of a specific field in the document when searching for it like:The error I get:How can I do that without errors?", "username": "mmahdi" }, { "code": "let coll = client.database(\"braq\").collection::<User>(\"users\");\n", "text": "You have to set type, something like this", "username": "Marko_Coric" }, { "code": "collectionlet coll = client.database(\"braq\").collection::<Document>(\"users\");\n#[derive(Deserialize)]\nstruct User {\n name: String,\n age: i32,\n}\n//..\nlet coll = client.database(\"braq\").collection::<User>(\"users\");\n", "text": "Just to expand on Marko’s answer a bit - collection requires a type parameter so it knows how to dezerialize the data. You can use the bson Document type if you don’t know the shape of the data or don’t want to use specific structs:otherwise you can define a struct that corresponds to the shape of the data you’re retrieving:", "username": "Abraham_Egnor" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I get the value of a specific field in the document? was
2023-03-04T19:45:02.777Z
How can I get the value of a specific field in the document? was
1,240
null
[ "replication" ]
[ { "code": "", "text": "Hi,I plan to use MongoDB Enterprise Edition v5.0.x in a single cluster replica set mode across three data centers (with 5 nodes in total, 2 in DC1, 2 in DC2 and 1 in DC3) for one of the application we are working on in my company (with eventual consistency, of course). However, I have heard concerns about mongodb not able to guarantee the durability and data integrity across all nodes within the replica set. Essentially, the area of concern is that if the application writes ‘n’ number of records to primary, there is no guarantee that the secondary nodes (especially in other DCs) will also end up having ‘n’ records. There may be a small delta records missing in the secondary (especially those in the other DCs). And hence the use of Mongo is not recommended unless there is any official documentation which says that data replication is guarantee across all nodes in the replica set.Is there any such documentation available which specifically mentions or talks about guarantee in data durability and integrity across replica set within a single cluster? That will help me to present my case appropriately for using mongodb.", "username": "Adi_Min" }, { "code": "", "text": "Hello @Adi_Min ,Welcome to The MongoDB Community Forums! MongoDB provides strong guarantees for data durability and integrity across replica sets, even in multi-data center deployments. The MongoDB documentation provides detailed information about how replication works and what guarantees it provides. Here are some relevant sections of the documentation:In particular, MongoDB uses a primary-secondary replication model in which all writes must be acknowledged by a majority of the replica set members before they are considered committed. This ensures that data is replicated across a majority of the nodes in the replica set, which provides both data durability and consistency guarantees. Additionally, MongoDB provides configurable write concerns that allow you to control the level of durability and consistency you require for your application.Regarding multi-data center deployments, MongoDB provides several deployment architectures that allow you to replicate data across multiple data centers while maintaining strong consistency and durability guarantees.MongoDB Enterprise is part of the Enterprise Advanced subscription and is a commercial product that requires a license to operate. If you’re in the process of evaluating MongoDB Enterprise and you have specific questions regarding operational concerns, please DM me and I can connect you to the relevant people that can also help answer any questions you have.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Any production-ready replication technology in a distributed data system has to ensure that.But you know, we are humans, and we write code. and the code can have bugs. And it’s difficult to ship such a big product bug free. That’s it.", "username": "Kobe_W" } ]
Does MongoDB guarantee data durability and integrity with replication?
2023-03-02T15:27:04.076Z
Does MongoDB guarantee data durability and integrity with replication?
993
null
[ "atlas-device-sync", "sharding", "flexible-sync" ]
[ { "code": "", "text": "Hi, I saw here that says \" Enabling Flexible Sync in Your App Services Application requires a non-sharded MongoDB Atlas cluster running MongoDB 5.0 or greater\".Mongodb Atlas supports a single node with max size of 4TB, we’re planning to have much more data than 4TB as we currently have high MAUs, how can we support such use case with Realm Sync?", "username": "lahaOoo" }, { "code": "", "text": "Hi, can you tell me more about your use case? We can enable you to run sync on a sharded cluster, but it is more likely than not that if you are trying to sync 4TB of data then sync might not be able to handle that load. Generally, our customers that have that much data either:Happy to chat more about the specifics about what you are trying to build with sync,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks for the info. We’re currently a offline productivity app that includes note taking and scribble, but we’re trying to add collaboration support to it, therefore it’s almost certain that the total data size will exceed 4TB based on our current calculation, and this is exclusive of the realm ops history data size.Each user should only receive a subset of the actual data though, at most a few hundreds MBs based on our current data. However, as we’re building collaboration support where one user needs to share notes with other users, so it doesn’t sound like the multi-tenant or app-per-region approach could work, since realm data cannot sync across different apps.I saw in this video during MongoDB World 2022 that support for sharded cluster is on the roadmap [Flexible Sync: The Future of Mobile to Cloud Synchronization (MongoDB World 2022) - YouTube](I saw in this video during MongoDB World 2022 that support for sharded cluster is on the roadmap Flexible Sync: The Future of Mobile to Cloud Synchronization (MongoDB World 2022) - YouTube (at 29:15)) (at 29:15), is there more info on the timeline?", "username": "lahaOoo" }, { "code": "", "text": "@lahaOoo I’m in that video! Yes we are doing some foundational work now and then plan to tackle sharding support later this year.", "username": "Ian_Ward" } ]
Flexible Sync on sharded cluster
2023-03-07T15:47:14.290Z
Flexible Sync on sharded cluster
1,141
null
[ "swift", "atlas-cluster", "serverless" ]
[ { "code": "import SwiftUI\n\n@main\nstruct MongoTestUIApp: App {\n var body: some Scene {\n WindowGroup {\n ContentView()\n }\n }\n}\nimport SwiftUI\nimport MongoSwiftSync\nimport MongoSwift\nimport NIOPosix\n\nstruct ContentView: View {\n \n @State var count:UInt64 = 28\n \n var body: some View {\n VStack {\n Text(\"Hello, world!\")\n Button(\"Run\") {\n runSyncTest()\n// Task {\n// await runAsyncTest()\n// }\n }\n }\n .padding()\n }\n \n func runAsyncTest() async {\n do {\n let elg = MultiThreadedEventLoopGroup(numberOfThreads: 4)\n let client = try MongoClient(\"mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\", using: elg)\n defer {\n // clean up driver resources\n try? client.syncClose()\n cleanupMongoSwift()\n // shut down EventLoopGroup\n try? elg.syncShutdownGracefully()\n }\n \n let db = client.db(\"office\")\n print(\"made db\")\n let collection = try await db.collection(\"items\")\n print(\"got collection\")\n \n let item = Item(id:count, name:\"Crate \\(count)\")\n count += 1\n \n let doc: BSONDocument = try BSONEncoder().encode(item) // entity.createDoc()\n let result = try await collection.insertOne(doc)\n print(result?.insertedID ?? \"\")\n } catch {\n print(error)\n }\n }\n \n func runSyncTest() {\n do {\n defer {\n // free driver resources\n cleanupMongoSwift()\n }\n \n let client = try MongoClient(\"mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\")\n \n let db = client.db(\"office\")\n let collection = db.collection(\"items\")\n \n let item = Item(id:count, name:\"Crate \\(count)\")\n count += 1\n \n let doc: BSONDocument = try BSONEncoder().encode(item)\n print(\"adding item\")\n let result = try collection.insertOne(doc)\n print(result?.insertedID ?? \"\") // prints `.int64(100)`\n } catch {\n print(error)\n }\n }\n}\n\nstruct Item: Encodable, Identifiable, Hashable {\n var id: UInt64?\n var name: String = \"\"\n var version: Int = 1\n var description: String?\n}\n\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n }\n}\nimport Foundation\nimport MongoSwiftSync\nimport MongoSwift\nimport NIOPosix\n\nlet item = Item(id:8, name:\"Crate 8\")\n\nsyncTest()\n//asyncTest()\n\nfunc asyncTest() {\n Task {\n do {\n let elg = MultiThreadedEventLoopGroup(numberOfThreads: 4)\n let client = try MongoClient(\"mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\", using: elg)\n defer {\n // clean up driver resources\n try? client.syncClose()\n cleanupMongoSwift()\n // shut down EventLoopGroup\n try? elg.syncShutdownGracefully()\n }\n \n let db = client.db(\"office\")\n let collection = try await db.createCollection(\"items\")\n \n let doc: BSONDocument = try BSONEncoder().encode(item) // entity.createDoc()\n print(\"adding item\")\n let result = try await collection.insertOne(doc)\n } catch {\n print(error)\n }\n }\n}\n\nfunc syncTest() {\n do {\n defer {\n // free driver resources\n cleanupMongoSwift()\n }\n \n let client = try MongoClient(\"mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\")\n \n let db = client.db(\"office\")\n let collection = db.collection(\"items\")\n \n let doc: BSONDocument = try BSONEncoder().encode(item)\n print(\"adding item\")\n let result = try collection.insertOne(doc)\n print(result?.insertedID ?? 
\"\") // prints `.int64(100)`\n } catch {\n print(error)\n }\n}\n\n\nstruct Item: Encodable, Identifiable, Hashable {\n var id: UInt64?\n var name: String = \"\"\n var version: Int = 1\n var description: String?\n}\n", "text": "I am trying to connect to a serverless Mongodb instance in Atlas. The only connection string provided is the srv version. I have setup the code following the examples in the docs for the driver. I have tested in a command line app and in a SwiftUI app using both async and sync methods.When calling collection.insertOne(doc), I receive the following error for both async and sync methods ONLY on SwiftUI tests. The same code works fine in a command line app in the same environment.ConnectionError(message: “Failed to look up SRV record \"_mongodb._tcp.main.ua0ol.mongodb.net\": A temporary error occurred on an authoritative name server. Try again later.”, errorLabels: nil)Again… this works fine with a command line test app using the same connection code so I know this isn’t a network or dns issue. It seems to be something about how the driver works in a SwiftUI environment.I am using the main branch of the driver repo in the swift package manager UI. No version is specified so “main” is latest, as I understand, though I’ve also tried specifying 1.3.1+.Thanks for any help here.Code for both tests is as follows:MongoTestUIApp.swiftContentView.swiftCommand line app - main.swift", "username": "csandels" }, { "code": "", "text": "Are the command line app and the SwiftUI app running in the same environment? This looks to me like a DNS failure trying to look up the SRV record; perhaps there’s a difference in the environments?", "username": "Spencer_Brown" }, { "code": "", "text": "Yeah, same environment. That’s what makes it interesting and rules out any DNS or env related issue. It must be something about the SwiftUI framework. Maybe threading in the UI or some UI related configuration like privileges (brainstorming)… I’m not sure.", "username": "csandels" }, { "code": "", "text": "thanks, I see you opened a support case, your support engineer will follow up and I’ll keep an eye on it", "username": "Spencer_Brown" }, { "code": "", "text": "In case anyone finds this in a search, the cause of this issue was the app sandbox. You must enable both outgoing AND incoming connections, even though your app may not ever accept incoming connections and the driver itself doesn’t need an incoming connection to do the dns lookup and connect to Mongo. The Mongo support team so far is not sure why incoming connections must be allowed for the driver to work. I don’t see any reason but that’s how it is today.If anyone has any details on why incoming connections need to be configured on the sandbox to avoid this error, please share. There is a firewall between the app and Mongo that will not let any incoming connection through so no connection is actually initiated from outside the firewall or app, however allowing incoming connections on the sandbox is what makes this error go away.", "username": "csandels" }, { "code": "", "text": "The MongoDB Swift driver imbeds the MongoDB C driver to do its system functions, and the C driver in turn does a Linux system call to resolve the DNS name. This works everywhere except the App Sandbox with incoming connections disabled. 
Why, exactly, the App Sandbox requires incoming connections enabled for applications that use Linux syscalls to resolve DNS addresses, and whether Apple considers this a bug or working as designed, is a question for Apple App Sandbox support. I’d be very interested to hear that answer.In any case, thanks for bringing this up on this forum and with MongoDB Support. It is an interesting use case and I learned some things while helping with it.", "username": "Spencer_Brown" }, { "code": "", "text": "Thanks for the extra insight Spencer.", "username": "csandels" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SwiftUI "Failed to look up SRV record"
2023-02-14T00:44:41.128Z
SwiftUI &ldquo;Failed to look up SRV record&rdquo;
1,388
null
[ "dot-net", "replication", "atlas-cluster" ]
[ { "code": "A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode : Primary } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Connected\", Servers : [{ ServerId: \"{ ClusterId : 1, EndPoint : \"<sensitive>\" }\", EndPoint: \"<sensitive>\", ReasonChanged: \"Heartbeat\", State: \"Connected\", ServerVersion: , TopologyVersion: { \"processId\" : ObjectId(\"<sensitive>\"), \"counter\" : NumberLong(5) }, Type: \"ReplicaSetSecondary\", Tags: \"{ nodeType : ELECTABLE, region : US_EAST, workloadType : OPERATIONAL, provider : AZURE }\", WireVersionRange: \"[0, 17]\", LastHeartbeatTimestamp: \"2023-03-07T10:18:46.4085885Z\", LastUpdateTimestamp: \"2023-03-07T10:18:46.4085892Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"<sensitive>\" }\", EndPoint: \"<sensitive>\", ReasonChanged: \"Heartbeat\", State: \"Connected\", ServerVersion: , TopologyVersion: { \"processId\" : ObjectId(\"<sensitive>\"), \"counter\" : NumberLong(4) }, Type: \"ReplicaSetSecondary\", Tags: \"{ region : US_EAST, workloadType : OPERATIONAL, nodeType : ELECTABLE, provider : AZURE }\", WireVersionRange: \"[0, 17]\", LastHeartbeatTimestamp: \"2023-03-07T10:18:47.8866555Z\", LastUpdateTimestamp: \"2023-03-07T10:18:47.8866558Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"<sensitive>\" }\", EndPoint: \"<sensitive>\", ReasonChanged: \"ReportedPrimaryIsStale\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", LastHeartbeatTimestamp: null, LastUpdateTimestamp: \"2023-03-07T10:18:46.3312199Z\" }] }.", "text": "Hi,\nWe have a .NET 6.0 application running on Kubernetes using a MongoDB.Driver version 2.15.0. We have a Mongo Atlas M0 tier cluster which we are upgrading to M10 to get access to some of its features.\nThe problem is that when the upgrade happens, application cannot reconnect to M10 by its own and we have to restart a service (by deleting a K8s pod basically).\nWe have lots of services spread among multiple teams, so it can be cumbersome to sync all of them to restart at once + we have a downtime possibility here.Exception message we are getting:A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode : Primary } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. 
Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Connected\", Servers : [{ ServerId: \"{ ClusterId : 1, EndPoint : \"<sensitive>\" }\", EndPoint: \"<sensitive>\", ReasonChanged: \"Heartbeat\", State: \"Connected\", ServerVersion: , TopologyVersion: { \"processId\" : ObjectId(\"<sensitive>\"), \"counter\" : NumberLong(5) }, Type: \"ReplicaSetSecondary\", Tags: \"{ nodeType : ELECTABLE, region : US_EAST, workloadType : OPERATIONAL, provider : AZURE }\", WireVersionRange: \"[0, 17]\", LastHeartbeatTimestamp: \"2023-03-07T10:18:46.4085885Z\", LastUpdateTimestamp: \"2023-03-07T10:18:46.4085892Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"<sensitive>\" }\", EndPoint: \"<sensitive>\", ReasonChanged: \"Heartbeat\", State: \"Connected\", ServerVersion: , TopologyVersion: { \"processId\" : ObjectId(\"<sensitive>\"), \"counter\" : NumberLong(4) }, Type: \"ReplicaSetSecondary\", Tags: \"{ region : US_EAST, workloadType : OPERATIONAL, nodeType : ELECTABLE, provider : AZURE }\", WireVersionRange: \"[0, 17]\", LastHeartbeatTimestamp: \"2023-03-07T10:18:47.8866555Z\", LastUpdateTimestamp: \"2023-03-07T10:18:47.8866558Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"<sensitive>\" }\", EndPoint: \"<sensitive>\", ReasonChanged: \"ReportedPrimaryIsStale\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", LastHeartbeatTimestamp: null, LastUpdateTimestamp: \"2023-03-07T10:18:46.3312199Z\" }] }.Connection string: mongodb+srv://username:password@[database].[id].mongodb.net", "username": "Proxi" }, { "code": "", "text": "Hi, @Proxi,Welcome to the MongoDB Community Forums.I understand that your .NET 6.0 applications running MongoDB .NET/C# Driver 2.15.0 are failing to maintain connectivity to your Atlas cluster while upgrading from M0 to M10. You also note that your services are running in Kubernetes.This is unexpected. The 2.15.0 release included a fix for a DnsClient.NET issue that caused problems with DNS resolution in k8s environments. Subsequent releases have not included any additional DNS-related fixes. See CSHARP-4001 for more information on the DnsClient.NET issue fixed in 2.15.0.I would recommend reaching out to MongoDB Atlas Support to assist in troubleshooting further.Sincerely,\nJames", "username": "James_Kovacs" } ]
Application cannot reconnect to Mongo Atlas after upgrading from M0 to M10
2023-03-07T13:20:05.032Z
Application cannot reconnect to Mongo Atlas after upgrading from M0 to M10
661
null
[]
[ { "code": "", "text": "Hi there,I am looking to generate and return a schema for a collection using the following curl command -curl -X GET “MongoDB: The Developer Data Platform | MongoDB<project_id>/clusters/<_cluster>/databases/<_database>/collections/<_collection>/schema” -H \"Authorization: Bearer <api_key> -H “Content-Type: application/json” -H “Accept: application/json”Unfortunately, it does not appear to be working. Is this the correct approach?", "username": "Kris_McGlinn" }, { "code": "", "text": "Hello @Kris_McGlinn,I don’t see any schema related data endpoint in this doc,I don’t think that is present, Where did you find that endpoint?", "username": "turivishal" }, { "code": "", "text": "Hi @turivishal,If there is no method for generating the schema using the API, is it possible to query a generated schema?When I use the Data API, under Data Access and Schema, there is a method to generate a schema. It also gives an option to save it. I was guessing you could query this using the API, but I do not see any documentation on doing so. I would like to avoid querying the entire collection and then generating the schema locally. Is this possible?", "username": "Kris_McGlinn" }, { "code": "", "text": "When I use the Data API, under Data Access and Schema, there is a method to generate a schemaCan you show the exact steps where is this option available?", "username": "turivishal" }, { "code": "", "text": "In Atlas Mongodb when logged into your project, select “Data API” in the dashboard on the left. Click the “Advanced Settings” button on the right. Select “Continue to App Services”From App Services, under “DATA ACCESS” in the left hand dashboard, select “Schema”, select a database and collection and click “Generate Schema”. This generates a schema based on the collection. There is a save option in the top right.", "username": "Kris_McGlinn" }, { "code": "", "text": "The schema under data access is available only for “Device Sync” and “GraphQL API” features, you can read in this doc,I don’t found any API for that.Not sure if present you can ask for another post or wait until someone confirms.", "username": "turivishal" }, { "code": "", "text": "Hi @turivishal,Firstly, thanks a lot for your help so far. I am looking now at the graphQL API.K.", "username": "Kris_McGlinn" } ]
Generating schema using Data API not working
2023-03-06T13:43:37.080Z
Generating schema using Data API not working
1,103
null
[ "node-js", "mongoose-odm" ]
[ { "code": "<C:\\AI\\ai-real-estate-friend\\node_modules\\mongoose\\lib\\connection.js:755\n err = new ServerSelectionError();\n ^\n\nMongooseServerSelectionError: connect ECONNREFUSED ::1:27017\n at _handleConnectionErrors (C:\\AI\\ai-real-estate-friend\\node_modules\\mongoose\\lib\\connection.js:755:11)\n at NativeConnection.openUri (C:\\AI\\ai-real-estate-friend\\node_modules\\mongoose\\lib\\connection.js:730:11)\n at runNextTicks (node:internal/process/task_queues:60:5)\n at listOnTimeout (node:internal/timers:538:9)\n at process.processTimers (node:internal/timers:512:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 27239058,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED ::1:27017\n at connectionFailureError (C:\\AI\\ai-real-estate-friend\\node_modules\\mongodb\\lib\\cmap\\connect.js:383:20)\n at Socket.<anonymous> (C:\\AI\\ai-real-estate-friend\\node_modules\\mongodb\\lib\\cmap\\connect.js:307:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16) {\n errno: -4078,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '::1',\n port: 27017\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\n", "text": "I am getting this error, please help:", "username": "Musa_Mazhar" }, { "code": "", "text": "Hi @Musa_Mazhar,Welcome to the MongoDB Community forums It seems like MongoDB is not running or is not listening on port 27017.To troubleshoot this error, you can try:Let us know if you need any further help!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi Kushagra,Good day to you.Here’s the snapshot from cmd window where it shows that MongoDB is running on port 27017. But somehow when I use the same cmd for Node.js, MongoDB gives above error.\n\nMongoDB1526×787 40.6 KB\n", "username": "Musa_Mazhar" }, { "code": "::1:27017ECONNREFUSED ::1:27017", "text": "The::1:27017part in the error messageECONNREFUSED ::1:27017indicates that your localhost resolve to the IPv6 address rather than the IPv4 127.0.0.1. 
Try replacing localhost in your connection string with 127.0.0.1.", "username": "steevej" }, { "code": "const express = require('express');\nconst mongoose = require('mongoose');\nconst cors = require('cors');\n\nconst app = express();\nconst PORT = process.env.PORT || 5000;\n\napp.use(cors());\napp.use(express.json());\n\nmongoose.connect('mongodb://localhost:127.0.0.1/ai-real-estate-friend');\n\nconst connection = mongoose.connection;\nconnection.once('open', () => {\n console.log('MongoDB database connection established successfully');\n});\n\napp.listen(PORT, () => {\n console.log(`Server is running on port: ${PORT}`);\n});\n\napp.get('/', (req, res) => {\n res.sendFile(path.join(__dirname, 'client/build', 'index.html'));\n});\n\nconst listingSchema = new mongoose.Schema({\n title: String,\n description: String,\n address: String,\n city: String,\n state: String,\n zip: String,\n price: Number,\n image: String\n});\n\nconst Listing = mongoose.model('Listing', listingSchema);\n\napp.get('/api', async (req, res) => {\n try {\n const listings = await Listing.find();\n res.json(listings);\n } catch (error) {\n console.error(error);\n res.status(500).send('Server error');\n }\n});\n", "text": "Hi Steevej, so i replaced the localhost in connection string with 127.0.0.1. Here’s the code for server.js file which I am trying to run in cmd using ‘npm start’", "username": "Musa_Mazhar" }, { "code": "mongoose.connect('mongodb://localhost:127.0.0.1/ai-real-estate-friend');", "text": "i replaced the localhost in connection string with 127.0.0.1You did not.You replaced the port 27017 with 127.0.0.1.mongoose.connect('mongodb://localhost:127.0.0.1/ai-real-estate-friend');", "username": "steevej" }, { "code": "", "text": "Okay, maybe I am doing it wrong. Can you please guide me how to to replace the port?", "username": "Musa_Mazhar" }, { "code": "mongoose.connect('mongodb://127.0.0.1:27017/ai-realestate-friend');\n", "text": "Thank you so much. I didn’t correct it first, then I replaced it with. It worked. Thank you.", "username": "Musa_Mazhar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting Error while connecting to MongoDB using Node.js
2023-03-07T09:27:29.531Z
Getting Error while connecting to MongoDB using Node.js
11,508
https://www.mongodb.com/…9_2_1024x421.png
[ "indexes", "database-tools" ]
[ { "code": "", "text": "Hi, this is my first post I hope I don’t miss out lol\nThe thing is that I created a big collection with more than 1TB of data (I was 1 and half importing it with mongoimport), and now once is all imported and have more than 400M rows, I am trying to create the index but it get’s stuck at 4.1KB\n\nimage1091×449 17.7 KB\n\nIt goes without saying that the index doesn t work and is not used ina search, any idea?\nI did a similar collection but only 1/700 rows and the index was created correctly with a size of 1MB or something like that", "username": "Anon_N_A1" }, { "code": "db.currentOp()db.collection.totalIndexSize()", "text": "Did you ever use db.currentOp() to check if the op is still in progress?Also try using db.collection.totalIndexSize() to check total index size.", "username": "Kobe_W" }, { "code": "", "text": "It was because the file was huge, it took some time but now is working.\nBtw do you know why I got more errors using mongoimport than importing from compass?", "username": "Anon_N_A1" } ]
Index not being created correctly
2023-03-06T16:24:10.766Z
Index not being created correctly
1,051
null
[ "node-js", "data-modeling", "mongoose-odm" ]
[ { "code": "boughtCourses{ \n...\nboughtCourses: [ { enrolledAt: Date, course: ReferenceID }, { enrolledAt: Date, course: ReferenceID } ]\n}\nCourseEnrollment{\ncourse: CourseID;\nbuyer: BuyerID;\nseller: SellerID;\ncreatedAt: Date;\n}\n", "text": "Hey there I am working on a schema design where some users create digital courses and others can buy them - think of it like Udemy. I am having trouble deciding on the best way of storing the bought courses so the fetch queries are efficient.This approach is straightforward but can lead to an unbound array issue (really small chance tho). I would use populate from mongoose to get the referenced data. Implementing pagination with filtering and also providing the total number of documents may not be that efficient for a large number of documents.To fetch the courses of a user I would create an index on the buyer field and fetch them by it. However I am not sure about the large number of documents over time that would appear in the collection as every buy would insert a new document inside. Fetching the bought courses may be easier however the size of the collection may become to much to include it in the cache.Which option would you say is better?", "username": "Jan_Ivanovic" }, { "code": "things that are queried together should stay together", "text": "Hey @Jan_Ivanovic,Welcome to the MongoDB Community Forums! You have correctly mentioned both the advantages and disadvantages of the two options.This approach is straightforward but can lead to an unbound array issue (really small chance tho)Yes, even though the chance is small, it’s still there. And when this happens, complex workaround would be required. It’s best to design the schema so that this has no possibility of happening.\nA general thumb rule to follow while schema designing in MongoDB is things that are queried together should stay together. Thus, it may be beneficial to work from the required queries first and let the schema design follow the query pattern.I would recommend you experiment with multiple schema design ideas. You can use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Please let us know if you have any additional questions. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "mgeneratejs", "text": "Hey! Thank you for a quick and great response Satyam I have decided to go with the second option for now, since it is more scale prone. The schema design cases depend on the domain you are working on and sometimes it is hard to choose a solution because there are no right or wrong answers. If needed I will adjust the designs later on but I think this option should serve me well. Also, thanks for linking mgeneratejs!Have a great day ", "username": "Jan_Ivanovic" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best way to keep track of bought digital content of users
2023-03-04T22:04:34.770Z
Best way to keep track of bought digital content of users
788
null
[ "crud" ]
[ { "code": "", "text": "I want to create seven different databases with a single request in mongoDb Atlas on submission of a form.I am looking for any mongoose feature or module with which I can create multiple databases on a single request.I searched for Atlas API but there is no API for this purpose.Is there any mongoose feature with which I can perform this task. I don’t want to make multiple connection request, I tried createConnection of mongoose but in that I have to use it seven time.I want to perform this task in one request.", "username": "Shrishti_Raghav1" }, { "code": "createCollection()mongodumpmongorestore--numParallelCollections--numInsertionWorkersPerCollection", "text": "Welcome to the MongoDB Community Forums @Shrishti_Raghav1!Databases are implicitly created when you insert the first data into a collection in a database namespace.As far as I’m aware there is no driver or API command that will create multiple databases in a single request, however this is not a particularly time consuming task.Please provide some further details:What are you hoping to gain from creating all of the collections in a single request?Are you creating empty collections/databases or loading some initial data as well?If you send multiple server commands in a loop, the Node.js driver will be creating and reusing a single connection rather than creating multiple connections. The Node.js driver maintains a connection pool to efficiently reuse established connections and avoid some of the overhead of establishing a connection.A few approaches to consider:If you are using MongoDB 4.4+ and want to ensure that all seven collections are created at the same time, you could consider creating collections in a transaction. However, you’d still have to call createCollection() seven times and this is unlikely to have any noticeable performance benefit.If you are looking for a more convenient way to create a standard set of collections/databases with initial data, you could perhaps use mongodump and mongorestore to backup & restore multiple databases. This still translates into the same underlying server commands but you can adjust concurrency options like --numParallelCollections and --numInsertionWorkersPerCollection to speed up import of a larger data set.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi Stennie,I want to create Seven to ten databases with no initial data and then assign a user to them.I will be having many users , each user will be assigned their separate collection of databases which I created using single request.Regards,\nShrishti", "username": "Shrishti_Raghav1" }, { "code": "", "text": "Any news on this @Stennie_X?Our use-case: we are using MongoDB and have a new white-label product. For this product we want to set up a separate db for each client, so that they are not getting mixed up when it comes to authentication & other sensitive data.How can we achieve this? Thanks!", "username": "cris" } ]
How to create multiple databases in mongodb atlas with a single request
2021-10-21T05:51:43.303Z
How to create multiple databases in mongodb atlas with a single request
6,116
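A hedged Node.js sketch of the loop approach suggested in the thread above: one MongoClient (and therefore one connection pool) is reused while each database is created implicitly by creating a first collection in it. The database and collection names are illustrative assumptions:

```js
const { MongoClient } = require("mongodb");

async function provisionDatabases(uri, dbNames) {
  const client = new MongoClient(uri);
  await client.connect(); // a single pooled connection is reused for every command
  try {
    for (const name of dbNames) {
      // A database only materializes once it holds a collection or data,
      // so creating one collection per database is enough here.
      await client.db(name).createCollection("settings");
    }
  } finally {
    await client.close();
  }
}

// provisionDatabases(uri, ["tenant1", "tenant2", "tenant3"]);
```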
null
[]
[ { "code": "", "text": "Hello there,One of my client have a cluster. In one of the secondary there are all of the data, database and collection but the path of this folder doesn’t match what is in the .conf fileThere is any chance to know why a .conf file like thisdbpath: /var/lib/mongodbdoesn’t containt what I’m looking for?How can I find the information about the folder data. Maybe there is another .conf file with another name that is used by mongod process?\nThere is only one mongod process that use the above mentioned .conf file and there aren’t any other mongod processes.\nI tried to investigate into another secondary but I didn’t find anything.Any help will be appreciated.", "username": "Enrico_Bevilacqua1" }, { "code": "", "text": "How was your mongod started?\nAs a service or from command line?\nIf started from command line ps --ef|grep mongod will show some details\nor\nCheck mongod.log on each node.It will show the parameters that was used at startup\nor run\ndb.adminCommand( { getCmdLineOpts: 1 } ) if you are connected to db", "username": "Ramachandra_Tummala" }, { "code": "", "text": "db.adminCommand( { getCmdLineOpts: 1 } )Thank you for your reply, it was very helpful.", "username": "Enrico_Bevilacqua1" }, { "code": "", "text": "My first question was because we had an issue with one secondary that suddenly the size of indexes started to growth abnormally until almost of the free space in the file system it runs out.We tried to re-sync the secondary in the Ops Manager dashboard but after about 10 hours the mongod process has crashed.Then to restore the synchronization between the primary and secondary we’re going to do the following:From your point of view is a rigth way to restore the data get (copy) from a good secondary server, issued in the command prompt and get synchronized with the primary or there could be an alternative solution that it takes less effort and risks to lose data?Thank you in advance.", "username": "Enrico_Bevilacqua1" }, { "code": "", "text": "Yes copy from good secondary is faster but make sure datafiles are recent to catch with oplog\nRefer to option 2 in this link", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you for your reply.", "username": "Enrico_Bevilacqua1" } ]
The dbpath of the .conf file doesn't match folder data
2023-03-06T14:48:54.653Z
The dbpath of the .conf file doesn't match folder data
441
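For readers landing on this thread, the command mentioned above can be run like this in mongosh to see which config file and dbPath the running mongod was actually started with (the exact output shape varies by version, and parsed.config is absent when mongod was started without a config file):

```js
// Connected to the member in question
const opts = db.adminCommand({ getCmdLineOpts: 1 });
printjson(opts.parsed.config);  // path of the config file actually loaded, if any
printjson(opts.parsed.storage); // effective storage options, including dbPath
```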
null
[ "replication", "mongodb-shell" ]
[ { "code": "{\n _id: 'admin.admin',\n userId: UUID(\"15920e8b-3019-43cf-8938-9357699d3bf8\"),\n user: 'admin',\n db: 'admin',\n roles: [\n { role: 'userAdminAnyDatabase', db: 'admin' },\n { role: 'readWriteAnyDatabase', db: 'admin' },\n { role: 'dbAdminAnyDatabase', db: 'admin' },\n { role: 'dbOwner', db: 'admin' },\n { role: 'clusterAdmin', db: 'admin' }\n ],\n mechanisms: [ 'SCRAM-SHA-1', 'SCRAM-SHA-256' ]\n}\n users: [\n{\n _id: 'admin.someuser ',\n userId: UUID(\"4ede2b9e-cb81-44cd-b2bb-6c92a078b350\"),\n user: 'someuser ',\n db: 'admin',\n roles: [Array],\n mechanisms: [Array]\n }\n\t],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1677838887, i: 923 }),\n signature: {\n hash: Binary(Buffer.from(\"0e632a7fd10df8bb9f211b0509ffdaa1eb07377b\", \"hex\"), 0),\n keyId: Long(\"7142600741474009090\")\n }\n },\n operationTime: Timestamp({ t: 1677838887, i: 923 })\n}\n", "text": "Hi how do one go about deleting a user that was created with a trailing space?\nI am connected as the admin user to a replica set, with the required permissions to drop users.db.getUser(“admin”)use admin\n‘already on db admin’\ndb.getUsers()when trying to drop the user containing the trailing space I get a server error.db.dropUser('someuser ')\nMongoServerError: User ‘someuser @admin’ not found", "username": "Leon_De_Leeuw" }, { "code": "test> db.createUser( { user: \"accountAdmin01 \",\n... pwd: passwordPrompt(), // Or \"<cleartext password>\"\n... customData: { employeeId: 12345 },\n... roles: [ { role: \"clusterAdmin\", db: \"admin\" },\n... { role: \"readAnyDatabase\", db: \"admin\" },\n... \"readWrite\"] },\n... { w: \"majority\" , wtimeout: 5000 } )\nEnter password\n***{ ok: 1 }\ntest> db.getusers\ntest.getusers\ntest> db.getUsers()\n{\n users: [\n {\n _id: 'test.accountAdmin01 ',\n userId: new UUID(\"462ddb92-424d-430d-9d64-79e606eb2f42\"),\n user: 'accountAdmin01 ',\n db: 'test',\n customData: { employeeId: 12345 },\n roles: [\n { role: 'readWrite', db: 'test' },\n { role: 'clusterAdmin', db: 'admin' },\n { role: 'readAnyDatabase', db: 'admin' }\n ],\n mechanisms: [ 'SCRAM-SHA-1', 'SCRAM-SHA-256' ]\n }\n ],\n ok: 1\n}\ntest> db.dropUser('accountAdmin01 ')\n{ ok: 1 }\ntest> db.getUsers()\n{ users: [], ok: 1 }\n\n", "text": "Hello @Leon_De_Leeuw ,Welcome to The MongoDB Community Forums! I tried the db.dropUser() as below and it is working as expected, can you please check if username you are trying to drop is same(number of trailing spaces can be different if you are writing the username yourself, please copy the username from getUser and try to run again)Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
db.dropUser() not able to drop a user containing a trailing space
2023-03-03T10:35:55.107Z
db.dropUser() not able to drop a user containing a trailing space
727
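A small mongosh sketch related to the thread above, for spotting any usernames that carry leading or trailing whitespace so the exact string can be copied into db.dropUser(); the example username is illustrative:

```js
const res = db.getSiblingDB("admin").getUsers();
res.users
  .filter(u => u.user !== u.user.trim())
  .forEach(u => print(JSON.stringify(u.user))); // the quotes make stray spaces visible
// then drop using the exact string, e.g. db.dropUser("someuser ")
```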
https://www.mongodb.com/…5_2_1024x269.png
[ "queries", "compass", "connector-for-bi" ]
[ { "code": "", "text": "I am trying to connect Mongo BI connector to PowerBI. I am using a server based database, but not able to connect that database to the ODBC Driver. The server database has been set up and reflects on my MongoDB Compass GUI, but is not being able to get picked up from mongosqld.Mongosqld picks up the local database files I had downloded a while ago, but is not able to configure to the server database. My BI connector works perfectly fine with my local DBs, but is not connecting to the server DB.As attached in the screenshots below, you can see my database cronjob in my compass GUI, but when I try to direct the connector to it, it picks up the other sample “Airlines” database I had loaded on earlier. Please do help. Thank You.\n\nsqld1347×354 31 KB\n", "username": "Mahika_N_A" }, { "code": "", "text": "I think your mongosqld uses a config file\nWhat is the uri it is pointing to?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you for the prompt reply Ramachandra. The config file that comes with the installation (example.yml). Is this the file you are talking about, or do you recommend using any other file. I have attached a screenshot of the uri the config file is pointing to. The port I am working on is 27018, but the uri is pointing to 27017. Do you recommend switching that?\nPlease do help. Thank you.\n", "username": "Mahika_N_A" }, { "code": "", "text": "It is pointing to localhost\nIt is not just port you have to give your cluster address:port\nPlease check this link.", "username": "Ramachandra_Tummala" } ]
Mongo BI Connector
2023-03-02T08:50:23.128Z
Mongo BI Connector
1,417
null
[ "aggregation" ]
[ { "code": "$count$project$group$sum$count$count{ $group: { _id: null, count: { $sum: 1 } } },\n{ $project: { _id: 0 } }\n$count$group$sum$project{ $project: { _id: 1 } }\n", "text": "Given an aggregation pipeline that is intended to count the resulting number of records, is it more efficient to use a $count stage or a $project into a $group using a $sum aggregator?I see in the v6.0 docs for $count under “Behavior”, there is a note that $count is the same as doing this:but I’m curious if preceding either a $count or the interchangeable [ $group w/ $sum → $project ] withwould be more efficient.I don’t have a good way to test this since I don’t have a sufficiently large data set available to see a meaningful difference, but if anyone has a way to test this or knows already I’d love to find out which is better.", "username": "Lucas_Burns" }, { "code": "1,600,000[\n {\n $project: {\n _id: 1,\n },\n },\n {\n $group: {\n _id: null,\n count: {\n $sum: 1,\n },\n },\n },\n {\n $project: {\n _id: 0,\n },\n },\n]\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1600000,\n \"executionTimeMillis\": 1677,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 1600000,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 111,\n \"works\": 1600002,\n \"advanced\": 1600000,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 1663,\n \"restoreState\": 1663,\n \"isEOF\": 1,\n \"transformBy\": {\n \"_id\": true\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 71,\n \"works\": 1600002,\n \"advanced\": 1600000,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 1663,\n \"restoreState\": 1663,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 1600000\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 1391\n },\n ],\n \"command\": {\n \"pipeline\": [\n {\n \"$project\": {\n \"_id\": 1\n }\n },\n {\n \"$group\": {\n \"_id\": null,\n \"count\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n }\n ],\n },\n \"ok\": 1,\n}\n$project: {_id: 1}[\n {\n $group: {\n _id: null,\n count: {\n $sum: 1,\n },\n },\n },\n {\n $project: {\n _id: 0,\n },\n },\n]\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1600000,\n \"executionTimeMillis\": 940,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 1600000,\n \"executionStages\": {\n \"stage\": \"COLLSCAN\",\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 63,\n \"works\": 1600002,\n \"advanced\": 1600000,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 1601,\n \"restoreState\": 1601,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 1600000\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 736\n },\n ],\n \"command\": {\n \"pipeline\": [\n {\n \"$group\": {\n \"_id\": null,\n \"count\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n }\n ],\n },\n \"ok\": 1,\n}\n[\n {\n $project: { _id: 1 },\n },\n {\n $count: \"mycount\",\n },\n]\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1600000,\n \"executionTimeMillis\": 1717,\n \"totalKeysExamined\": 0,\n 
\"totalDocsExamined\": 1600000,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 134,\n \"works\": 1600002,\n \"advanced\": 1600000,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 1665,\n \"restoreState\": 1665,\n \"isEOF\": 1,\n \"transformBy\": {\n \"_id\": true\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 93,\n \"works\": 1600002,\n \"advanced\": 1600000,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 1665,\n \"restoreState\": 1665,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 1600000\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 1408\n },\n ],\n \"command\": {\n \"pipeline\": [\n {\n \"$project\": {\n \"_id\": 1\n }\n },\n {\n \"$count\": \"mycount\"\n }\n ],\n },\n \"ok\": 1,\n}\n$project: {_id: 1}[\n {\n $count: \"mycount\",\n },\n]\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1600000,\n \"executionTimeMillis\": 924,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 1600000,\n \"executionStages\": {\n \"stage\": \"COLLSCAN\",\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 63,\n \"works\": 1600002,\n \"advanced\": 1600000,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 1601,\n \"restoreState\": 1601,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 1600000\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 1600000,\n \"executionTimeMillisEstimate\": 754\n ],\n \"command\": {\n \"pipeline\": [\n {\n \"$count\": \"mycount\"\n }\n ],\n },\n \"ok\": 1,\n}\n\nMongoDB Atlas M0 version: \"5.0.15\"executionTimeMillisexecutionTimeMillisEstimate$project: {_id: 1}$project: {_id: 1}", "text": "Hi @Lucas_Burns,Welcome to the MongoDB Community forums I have performed 4 separate queries on a sample collection containing 1,600,000 documents, and here are the results of the execution:it returned:1st CaseAnd similarly, without $project: {_id: 1} as the first stage, it returned the following execution time:2nd CaseAnd the following query:it returned:3rd CaseAnd similarly, without $project: {_id: 1} as the first stage, it returned the following execution time:4th CaseThe above operation has been done on MongoDB Atlas M0 version: \"5.0.15\"Notice the executionTimeMillis and executionTimeMillisEstimate for all 4 cases:I hope this makes it clear that there is a difference in the efficiency of the query without $project: {_id: 1}. The query runs faster without $project: {_id: 1}.I would suggest you experiment with different collection scenarios. You can use mgeneratejs to create sample documents quickly in any number, so the different aggregation pipelines can be tested easily.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$count vs $project -> $group w/ $sum
2023-03-03T18:27:00.330Z
$count vs $project -> $group w/ $sum
740
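To reproduce the comparison from the thread above on your own data, the two pipelines can be explained side by side in mongosh; "items" is a placeholder collection name and the explain layout differs between topologies and server versions:

```js
// Plain $count
db.items.explain("executionStats").aggregate([{ $count: "n" }]);

// $project + $group equivalent
db.items.explain("executionStats").aggregate([
  { $project: { _id: 1 } },
  { $group: { _id: null, n: { $sum: 1 } } }
]);
// Compare executionStats.executionTimeMillis in both outputs; the leading $project
// typically adds a PROJECTION_SIMPLE stage on top of the collection scan.
```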
null
[ "queries", "node-js" ]
[ { "code": "", "text": "const data = {\nusers: [{\nname: ‘Chippu’,\nemail: ‘[email protected]’,\nhashedPassword: bcryptjs.hashSync(‘123456’),\nwallet: [{blockchain: “Solana”, wallet: ‘8jyvEbTeUtKJAvQMeNsCR2hsguwWAm8Kp4oFK7n5XkAp’}, {blockchain: “Etherum”, wallet: ‘0x4aB10fbE398287672b8c48d1f59C741FA9B895eF’}],\navatar: “…”\ndiscord: ‘#Chippu652728’,\ninstagram: ‘Woauchimmer’,\n},\n],\nprojects: [\n{\nname: ‘miniroyale’,\npath: ‘miniroyale’,\nlogo: ‘…’,\nbanner: ‘h…’,\ndescription:\n‘Mini Royale: Nations is a browser-based first-person-shooter set on top of a land strategy game. Players can earn mintable items through Battle Pass and Quests, join or create Clans to participate in Clan Wars, fuse weapons and characters for ultra rare skins, and more.’,\nblockchain: ‘Solana’,\npolicyID: ‘DkihrQwDWTUsCzPAGZa8TUEc3UcKSY2KKHV98cBNiXjX’,\ntokenAdresse: ‘FcvPATycd7uEKZJ5rAsv9fNyu51j4EhzgmoE6FCAgxBW’,\nwebsite: ‘https://miniroyale.io/’,\ntwitter: ‘@MiniNations’,\ndiscord: ‘hmmm’,\ninstagram: ‘Woauchimmer’,\nadmins: [{key: “”, name: “Chippu”}],\n}\n],\n};Hi this is my data for now. I try the following code from my backend node js:const user = await User.findOne({wallet: {wallet: addresse}}).clone();But the results are null. I have no idea, how to query the array behind the property wallet. Any hint would be super awesome. Thank you in advance.", "username": "Florian_Weise" }, { "code": "walletconst user = await User.findOne({'wallet.wallet': address}).clone();\nwalletwalletaddresswalletfindfindOne", "text": "Hello @Florian_Weise ,Welcome to The MongoDB Community Forums! To query the array behind the wallet property, you can use the dot notation in your query to specify the nested field. Here’s an example:This will find a document where the wallet field is an array containing an object with a wallet property equal to address.To learn more, please refer Query an Array of Embedded Documents.Note: If there are multiple objects in the wallet array that match the query, only the first one will be returned. If you want to find all matching documents, you can use the find method instead of findOne.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query an Array as being part of my User Scheme in my User collection
2023-03-02T19:15:05.717Z
Query an Array as being part of my User Scheme in my User collection
620
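Two query shapes that follow from the answer above, using the same mongoose User model; address is a placeholder variable for the wallet string being searched:

```js
// Match any element of the wallet array on a single nested field (dot notation)
const user = await User.findOne({ "wallet.wallet": address });

// Require several conditions to hold on the *same* array element
const solanaUser = await User.findOne({
  wallet: { $elemMatch: { blockchain: "Solana", wallet: address } }
});

// All matching users instead of just the first one
const users = await User.find({ "wallet.wallet": address });
```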
null
[ "queries" ]
[ { "code": "", "text": "I need to create an auto incremental id field in an envirironment with high concurrency", "username": "jose_cabrera" }, { "code": "", "text": "Hello @jose_cabrera ,Welcome to The MongoDB Community Forums! To understand your scenario better, could you please explain your use-case for an auto increment field?MongoDB provides an _id field for every document which is automatically generated by default. This field is designed to be unique and may be able to be used instead of an auto incremental id field.If you need an auto incremental id field, you can use a sequence collection to generate unique ids. The sequence collection contains a document that stores the current value of the sequence. Whenever a new document is inserted, the application fetches the next value from the sequence collection and assigns it as the _id of the new document.To ensure that the sequence is incremented atomically, you can use the findAndModify command with the $inc operator to increment the sequence value and return the updated value in a single atomic operation. This will prevent race conditions that may occur when multiple threads try to increment the sequence at the same time.Note: A single counter document could be a bottleneck in your application as it has it’s limitations, kindly refer Generating Globally Unique Identifiers for Use with MongoDB | MongoDB BlogPlease refer below blogs to learn more about implementations and working of this.In this article, we will explore a trick that lets us auto increment a running ID using a trigger.Learn how to implement auto-incremented fields with MongoDB Atlas triggers following these simple steps.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to implement an auto incremental id in an environment of 100 insertions per minute?
2023-03-01T22:44:23.039Z
How to implement an auto incremental id in an environment of 100 insertions per minute?
2,778
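A minimal mongosh sketch of the counters-collection approach described above; the collection and field names are assumptions, and the single counter document remains a potential bottleneck, as noted in the linked articles:

```js
function getNextSequence(name) {
  // $inc runs atomically, so concurrent callers each receive a distinct value
  const doc = db.counters.findOneAndUpdate(
    { _id: name },
    { $inc: { seq: 1 } },
    { upsert: true, returnNewDocument: true }
  );
  return doc.seq;
}

db.orders.insertOne({ _id: getNextSequence("orderId"), item: "abc" });
```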
https://www.mongodb.com/…f_2_1024x464.png
[ "node-js" ]
[ { "code": "jsonwebtokenexports = async function(arg){\n \n const jwt = require(\"jsonwebtoken\")\n let res = jwt.verify(\"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiI4WEFxeWhMWEFwYUp6NmtIem51QmFwTUp0WVA0In0.XPwOjkCLR7TD0KjJtke1UwKbc-mXZQDNkClojffXXxs\", \"secret1\")\n\n return res\n};\n{ uid: '8XAqyhLXApaJz6kHznuBapMJtYP4' } > ran at 1671900424456\n> took \n> error: \n{\"message\":\"Value is not an object: undefined\",\"name\":\"TypeError\"}\n", "text": "I was trying to make a API using mongodb realm functions, and I’ve correctly added/installed dependency named jsonwebtokenThe code above successfully returns { uid: '8XAqyhLXApaJz6kHznuBapMJtYP4' } when run normally using nodejs CLI/Interpreter.\nBut here it fails, I get this resultPlz help I’m stuck here is the screenshot of the same\n\nimage1584×719 66.6 KB\n", "username": "Bhushan_Bharat" }, { "code": "jsonwebtoken9.0.08.0.0", "text": "There was a Issue with jsonwebtoken version 9.0.0, I’ve tried an old version 8.0.0 and it worked like charm ", "username": "Bhushan_Bharat" }, { "code": "console.log(process.version);jsonwebtoken", "text": "If you run console.log(process.version);, you will see realm functions runs on node 10.This isn’t supported by version 9 of jsonwebtoken. See Migration Notes: v8 to v9 · auth0/node-jsonwebtoken Wiki · GitHub", "username": "Raphael_Eskander" } ]
Can't use JWT (jsonwebtoken) in mongodb realm functions to verify
2022-12-24T17:01:32.532Z
Can't use JWT (jsonwebtoken) in mongodb realm functions to verify
1,784
null
[ "aggregation" ]
[ { "code": "", "text": "We are storing JSON Document that is greater than 16MB and we storing it using GridFS into collections(fs.chunks, fs.files).Below is my sample json.I want filter,sort and limit based on the fields in address array.Can you please point me to an example .Thank you in advance.[{\n“name”: “John”,\n“age”: 25,\n“address”: [\n{\n“street”: “123 Main St”,\n“city”: “Anytown”,\n“state”: “CA”\n},\n{\n“street”: “456 Oak Ave”,\n“city”: “Someville”,\n“state”: “NY”\n}\n]\n},\n{\n“name”: “Doe”,\n“age”: 26,\n“address”: [\n{\n“street”: “sdfs sdf St”,\n“city”: “sdf”,\n“state”: “sdf”\n},\n{\n“street”: “sdfsd sdf sdf”,\n“city”: “dfs”,\n“state”: “dsfs”\n}\n]\n},{\n“name”: “abc”,\n“age”: 29,\n“address”: [\n{\n“street”: “abc Main St1”,\n“city”: “xyz”,\n“state”: “CA12”\n},\n{\n“street”: “abcsdsd”,\n“city”: “addfsd”,\n“state”: “sdfsd”\n}\n]\n}]", "username": "Sundar_Koduru" }, { "code": "", "text": "I want filter,sort and limit based on the fields in address array.You simply cannot do that if you store your JSON documents as a file in GridFS.Why do you do that?The 16MB limits apply to a single document. In what you share you have 3 very small documents.{\n“name”: “John”,\n“age”: 25,\n“address”: [\n{\n“street”: “123 Main St”,\n“city”: “Anytown”,\n“state”: “CA”\n},\n{\n“street”: “456 Oak Ave”,\n“city”: “Someville”,\n“state”: “NY”\n}\n]\n}is one document.{\n“name”: “Doe”,\n“age”: 26,\n“address”: [\n{\n“street”: “sdfs sdf St”,\n“city”: “sdf”,\n“state”: “sdf”\n},\n{\n“street”: “sdfsd sdf sdf”,\n“city”: “dfs”,\n“state”: “dsfs”\n}\n]\n}Is another document.", "username": "steevej" }, { "code": "", "text": "Thank you very much,This is just example JSON document,my JSON document is >16MB and mongodb throws error as ‘It exceeded the size of 16MB’.But unfortunately we have mix of documents for the same collection that are mostly less than 16MB and very few greater than 16MB and structure has embedded documents with arrays.We have to use aggregation queries for unwind,filter,sort and limit etc. And now the problem in question is that if >16MB documents stored using GridFS is there a way that can queried using Aggregation,as we do for normal collections or any other better solution could help.\nThank you in advance.", "username": "Sundar_Koduru" }, { "code": "core_collection :\n[\n { \"_id\" : 369 ,\n \"name\": \"John\",\n \"age\": 25 }\n]\n\naddress_collection :\n[\n { \"core_id\" : 369 ,\n \"address\" : [\n { “street”: “123 Main St”,\n “city”: “Anytown”,\n “state”: “CA” } ,\n { “street”: “456 Oak Ave”,\n “city”: “Someville”,\n “state”: “NY” }\n ]\n }\n}\n", "text": "is there a way that can queried using AggregationNo you cannot use aggregation on the content of the files stored using GridFS.May be you can split oversized document into smaller related documents using something like the Extended Reference Pattern.Something like jq may help you split an oversized JSON into a set of smaller related documents.Using your sample documents, you could split a document like{\n“name”: “John”,\n“age”: 25,\n“address”: [\n{\n“street”: “123 Main St”,\n“city”: “Anytown”,\n“state”: “CA”\n},\n{\n“street”: “456 Oak Ave”,\n“city”: “Someville”,\n“state”: “NY”\n}\n]\n}into 2 smaller documents likeIf it is still too big you may make each entry of address into a top level document, looks a lot like $unwind.In some cases, plain old normalization, is still a valid solution.While it is best to keep together the things that are accessed together, sometimes you have no choice.", "username": "steevej" } ]
GridFS-Can we use aggregation query from GridFS specification
2023-03-05T11:18:03.972Z
GridFS-Can we use aggregation query from GridFS specification
936
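A hedged mongosh sketch of the split suggested above: an oversized person document becomes one small core document plus one document per address entry, which can then be filtered, sorted and limited before a $lookup joins the core fields back in. All collection and variable names are illustrative assumptions:

```js
// Write side: one small core document plus one document per address entry
const coreId = ObjectId();
db.person_core.insertOne({ _id: coreId, name: "John", age: 25 });
db.person_address.insertMany(
  addresses.map(a => ({ core_id: coreId, ...a })) // addresses = the big array
);
db.person_address.createIndex({ core_id: 1, state: 1 });

// Read side: filter / sort / limit the address documents, then join the core data
db.person_address.aggregate([
  { $match: { core_id: coreId, state: "CA" } },
  { $sort: { city: 1 } },
  { $limit: 10 },
  { $lookup: {
      from: "person_core",
      localField: "core_id",
      foreignField: "_id",
      as: "person"
  } }
]);
```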
null
[ "schema-validation" ]
[ { "code": "class Condition: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId?\n @Persisted var __v: Int?\n @Persisted var condition_string: String?\n}\n{\n \"title\": \"condition\",\n \"properties\": {\n \"__v\": {\n \"bsonType\": \"int\"\n },\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"condition_string\": {\n \"bsonType\": \"string\"\n }\n }\n}\nclass condition: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId?\n @Persisted var __v: Int?\n @Persisted var condition_string: String?\n}\n", "text": "I currently have an iOS app that connects to my database thru Atlas Sync. When I created the database, I named all of my collections starting with lowercase. Obviously the object models within my iOS app should start with a capital letters. I am running into a problem with the schema generated and syncing to the client iOS app.Currently I have a Condition object model class.I have the following JSON schema generated by the Atlas Sync App.I can only get my app to sync if I rename my class Condition to lower case ie:How can I get a successful sync to occur without renaming my collections to start with a capital letter? I assume this is what is causing my problem when i get the “Client Query is invalid/malformed error” and the Xcode \" ERROR \"Invalid query (IDENT, QUERY): failed to parse query: query contains table not in schema: “condition”", "username": "Chris_Stromberg" }, { "code": "", "text": "Well, I found the problem to be not handling the realm migration. In order to get the app functioning again, I needed to delete the realm instance and start over.Still trying to figure out how to implement migration within the app!", "username": "Chris_Stromberg" }, { "code": "class Condition: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId?\n @Persisted var __v: Int?\n @Persisted var condition_string: String?\n}\n{\n \"title\": \"condition\",\n \"properties\": {\n \"__v\": {\n \"bsonType\": \"int\"\n },\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"condition_string\": {\n \"bsonType\": \"string\"\n }\n }\n}\n", "text": "Hi @Chris_Stromberg let me first understand your issue.\nDo you already have an App in production which has on the schema the following objectand you created a Device Sync App with the following schemawhich cannot be changed.\nNow you are getting an error because the model name on your App doesn’t match the name on the server schema.\nSo what you have to do is migrate the model above to use a lowercase capital letter so you can match it with Device Sync?\nLet me know if this your issue? So I can help you with the migration.", "username": "Diana_Maria_Perez_Af" }, { "code": "", "text": "Hello Diana,Thanks for the reply. You are correct, I made breaking changes to my app when the server schema did not match the model name. To get around this I have been locating the associated realm file on my iMac and deleted it. When a new realm is instantiated, with the matching server schema and model name, it works as it should. I am currently researching how to perform migrations when making breaking changes. 
For now, I would be fine with using “deleteRealmIfMigrationNeeded”, but I can’t figure out how to implement this while configuring a “flexibleSyncConfiguration” for a user.", "username": "Chris_Stromberg" }, { "code": "deleteRealmIfMigrationNeeded", "text": "Hi @Chris_Stromberg you cannot set deleteRealmIfMigrationNeeded you will get this error `Cannot set ‘deleteRealmIfMigrationNeeded’ when sync is enabled (‘syncConfig’ is set).Synced realms do not have schema versions and automatically migrate objects to the latest schema. Synced realms only support non-breaking schema changes, as it mentions in the documentation.", "username": "Diana_Maria_Perez_Af" }, { "code": "writeCopy", "text": "If you need to migrate some local data to a synced Realm, you can do the migration on the local realm, and then use our writeCopy API to pass the migrated data to a synced Realm. Let me know if this is helpful or you need more help.", "username": "Diana_Maria_Perez_Af" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas App Service Generating a schema to match Realm Object Models problem
2023-02-20T04:00:07.470Z
Atlas App Service Generating a schema to match Realm Object Models problem
1,467
null
[ "aggregation", "python" ]
[ { "code": "PyMongoArrowMongoDBPandaspyarrow.lib.ArrowException: Unknown error: Wrappingaggregate_pandas_alldf: DataFrame = pymongoarrow.api.aggregate_pandas_all(**aggregate_params)\nerror_message: Traceback (most recent call last):\n File \"migration.py\", line 250, in process_schemas\n df: DataFrame = pymongoarrow.api.aggregate_pandas_all(**aggregate_pandas_all_params)\n File \"/home/ubuntu/.local/lib/python3.8/site-packages/pymongoarrow/api.py\", line 201, in aggregate_pandas_all\n return _arrow_to_pandas(aggregate_arrow_all(collection, pipeline, schema=schema, **kwargs))\n File \"/home/ubuntu/.local/lib/python3.8/site-packages/pymongoarrow/api.py\", line 159, in _arrow_to_pandas\n return arrow_table.to_pandas(split_blocks=True, self_destruct=True)\n File \"pyarrow/array.pxi\", line 830, in pyarrow.lib._PandasConvertible.to_pandas\n File \"pyarrow/table.pxi\", line 3908, in pyarrow.lib.Table._to_pandas\n File \"/home/ubuntu/.local/lib/python3.8/site-packages/pyarrow/pandas_compat.py\", line 820, in table_to_blockmanager\n blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)\n File \"/home/ubuntu/.local/lib/python3.8/site-packages/pyarrow/pandas_compat.py\", line 1170, in _table_to_blocks\n result = pa.lib.table_to_blocks(options, block_table, categories,\n File \"pyarrow/table.pxi\", line 2594, in pyarrow.lib.table_to_blocks\n File \"pyarrow/error.pxi\", line 138, in pyarrow.lib.check_status\npyarrow.lib.ArrowException: Unknown error: Wrapping ] w �� T/� � failed\n", "text": "Hi,I am using PyMongoArrow package to fetch data from MongoDB to Pandas.My source data collection contains few values which are in another language (Hindi Text to be specific). Although I have no problem storing the data in MongoDB, I get pyarrow.lib.ArrowException: Unknown error: Wrapping PyArrow error when I fetch the data using PyMongoArrow’s aggregate_pandas_all method.Below is the python code I use to fetch data from MongoDBBelow is the error I am getting on the above line upon execution", "username": "Harshavardhan_Kumare" }, { "code": " _id hindi\n0 b'd\\x06q\\x95\\xd9j\\xdf?\\x87\\x8c\\x83O' अआइईउऊऋएऐओऔव्यंजनकखगघङचछजझञाटठडढणतथदधनपफबभमयरल...\n1 b'd\\x06q\\x95\\xd9j\\xdf?\\x87\\x8c\\x83P' अआइईउऊऋएऐओऔव्यंजनकखगघङचछजझञाटठडढणतथदधनपफबभमयरल...\n2 b'd\\x06q\\x95\\xd9j\\xdf?\\x87\\x8c\\x83Q' अआइईउऊऋएऐओऔव्यंजनकखगघङचछजझञाटठडढणतथदधनपफबभमयरल...\n...\naggregate_paramsstr.encode", "text": "Hi, thank you for raising this issue! Unfortunately, I am unable to replicate this error myself. Right now I am getting this when using aggregate_pandas_all and some Hindi Unicode characters:Would it be possible for you to provide more details on exactly what unicode characters are causing the failure, in addition to exactly what you are providing to as aggregate_params? I think that a malformed unicode character may be causing this error. You can check by using the str.encode function in Python. Furthermore, could you provide what version of PyMongo, Python, and PyMongoArrow you are using?", "username": "Julius_Park" } ]
Handle non utf-8 character in PyMongoArrow upon data fetch
2023-03-04T22:31:06.355Z
Handle non utf-8 character in PyMongoArrow upon data fetch
886
null
[ "swift", "flutter" ]
[ { "code": "", "text": "I see that the Swift SDK added the ability to bundle synchronized realms in v10.32. Is this feature also present in the Dart SDK? I am hoping to ship with a prepopulated realm that can also sync updates once it comes online.For this use case, the prepopulated realm would not include user authenticated data, only data that all clients would keep in sync.I have looked through the community forums and the github issues, but don’t see any reference to this.Thank you.", "username": "Fr_Matthew_Spencer_O.S.J" }, { "code": "", "text": "Hi @Fr_Matthew_Spencer_O.S.J ,\nThanks for your interest!\nYes, it is supported. This document can help you Bundle a Realm - Flutter SDK\nIs this what you need?", "username": "Desislava_St_Stefanova" }, { "code": "", "text": "Oh great, thank you, that’s what I needed.", "username": "Fr_Matthew_Spencer_O.S.J" } ]
Bundle prepopulated synchronized realm?
2023-03-06T22:03:04.648Z
Bundle prepopulated synchronized realm?
928
https://www.mongodb.com/…1_2_1024x279.png
[ "swift" ]
[ { "code": " let app = App(id: RealmConstants.realm_App_ID)\n let users = try await app.login(credentials: Credentials.anonymous)\n var config = user.flexibleSyncConfiguration()\n config.objectTypes = [AccessibleItem.self]\n let realm = try await Realm(configuration: config, downloadBeforeOpen: .once)\n let subscriptions = realm.subscriptions\n let foundSubscription = subscriptions.first(named: \"all_access_items\")\n\n if foundSubscription != nil {\n print(\"found the subscription: \\(String(describing: foundSubscription?.name))\")\n } else {\n try await subscriptions.update {\n subscriptions.append(\n QuerySubscription<AccessibleItem>(name: \"all_access_items\")\n \n )\n }\n }\n return realm\n\n{\n \"name\": \"Anyone\",\n \"apply_when\": {},\n \"document_filters\": {\n \"read\": true,\n \"write\": false\n },\n \"read\": true,\n \"write\": false\n}\n{\n \"place\": {\n \"$exists\": true\n },\n \"address\": {\n \"$exists\": true\n }\n}\n", "text": "I am using device flexible sync on App UI as per Swift examples. The correct roles, permissions and filters are in place. I use anonymous credentials to login and to add a flexible sync configuration. I’ve added a subscription. The logs on the App UI show a connection starting, a session starting, and the status is okay. But there is nothing on my local realm sync default file. What am I missing?Create realm sync functionRulesDocument filtersLocal realm file\nScreenshot 2023-03-06 at 1.03.32 PM2110×576 32.3 KB\nThanks.", "username": "swar_leong" }, { "code": "", "text": "Hi, is there any data that should be added to the file? IE, are you writing any objects to the Realm or is there any data in Atlas that should be getting synced? If so, can you explain where the data is and perhaps add a link to your application in the App Services UI? Separately, can you clarify what your document filters are set to? You seem to suggest it is both { read: true, write: false } as well as { place: { $exists: true } , address: {$exists: true } }", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi Tyler,Thanks for getting back to me so quickly.I want to sync the data on Atlas with the local Realm file. Currently the user cannot write to the file. This could change in the future. The collection is on a shared cluster tier, the atlas service name is mongodb-atlas.Is this the link you are looking for ? App ServicesFrom what I understand of the filters for flexible sync, there needs to be a document filter for the read property, in order for the role to be sync compatible. That’s why I added more detailed filters. Have I misunderstood the requirements?", "username": "swar_leong" }, { "code": "", "text": "Hi,So I would start off with a few things.First off, you seem to have misunderstood the “apply when” section as you have defined many of the other rules-related fields within the apply when. As it is written now, the apply_when will fail because of this and therefore the sessions will connect with a “no access” role. I would suggest reading through this page: https://www.mongodb.com/docs/atlas/app-services/rules/roles/#rolesHowever, my meta comment would be that I generally advise people to develop their application without permissions first, make sure things are working, tune your data model, and then try to figure out how to add permissions in on top of what you have. So I would suggest removing that rule, adding a new one with a default “everyone can read” rule, and then continuing to build your app. 
Once you have things generally working, I would advise you to start thinking about rules and permissions. This will make your development experience a lot better.", "username": "Tyler_Kaye" }, { "code": " let realm = try await Realm(configuration: config, downloadBeforeOpen: .once)\n let realm = try await Realm(configuration: config, downloadBeforeOpen: .always)\n", "text": "Thanks for the good advice.I’ve set the rule to everyone can read. I deleted the sync default realm file and also changedtobefore starting up again.It is now working.", "username": "swar_leong" }, { "code": "", "text": "Awesome. I would continue to build and put things together and once you have something resembling a working prototype I think you will have more context around what your permissioning needs to be and then you can go add it.Enjoy,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Blank local sync default file
2023-03-06T18:04:08.221Z
Blank local sync default file
730
null
[ "queries", "python", "upgrading" ]
[ { "code": "", "text": "“FieldPath cannot be constructed with empty string”. Facing this error. The code was running properly. But upgraded the mongo version from 4.2 to 5.0.4. is it due to the upgrade? Has any function used in 4.2 removed /changed in higher versions? The code is failing when it is trying to taking json data and converting it into CSV file.\nI’m using python script and error came as pymongo error", "username": "Sreekanth_R_Shekar" }, { "code": "", "text": "Further to this, it has been identified that the mongo cursor is inside a function which takes in values for field_name to be projected.Cursor = db.coll_name.find({}, {field_name:1})But there are instance where no projection is required and therefore field_name becomes empty string.This has not caused problem when the version was 4.2.23\nBut now version is 5.0.4Wanted to know is this occuring as a part of this upgrade.", "username": "Sreekanth_R_Shekar" }, { "code": "if field_name:\n projection = {field_name: 1}\nelse:\n projection = None\ncursor = db.coll_name.find({}, projection=projection)\n", "text": "FieldPath cannot be constructed with empty stringLooks like this extra validation on field paths in projections was added in MongoDB 4.4. https://jira.mongodb.org/browse/SERVER-43613 was opened to request that the server return a more helpful error message. Please follow that ticket for updates.As for how to workaround this issue, if the field_name is empty you can set the entire projection to be empty or None:", "username": "Shane" }, { "code": "", "text": "This topic was automatically closed after 180 days. New replies are no longer allowed.", "username": "system" } ]
"FieldPath cannot be constructed with empty string". Facing this error. Upgraded mongo version from 4.2 to 5.0.4. is it due to the upgrade?
2023-03-04T04:57:56.048Z
"FieldPath cannot be constructed with empty string". Facing this error. Upgraded mongo version from 4.2 to 5.0.4. is it due to the upgrade?
2,537
null
[ "sharding" ]
[ { "code": "mongos> db.getCollection('20230209').getShardDistribution() ---- \n\nShard ShardA at ShardA/1xx.1xx.xx.77x:27017,1xx.1xx.xx.78x:27017\n data : 49.51GiB docs : 330654921 chunks : 395\n estimated data per chunk : 128.35MiB\n estimated docs per chunk : 837101\n\nShard ShardB at ShardB/1xx.1xx.xx.1xx:27017,1xx.1xx.xx.1xx:27017\n data : 24.76GiB docs : 165255242 chunks : 394\n estimated data per chunk : 64.36MiB\n estimated docs per chunk : 419429\n\nTotals\n data : 74.27GiB docs : 495910163 chunks : 789\n Shard ShardA contains 66.65% data, 66.67% docs in cluster, avg obj size on shard : 160B\n Shard ShardB contains 33.34% data, 33.32% docs in cluster, avg obj size on shard : 160B\n", "text": "Hello,We are facing an issue with MongoDB sharding setup(5.0.8 and community edition) and two shards are in place with PSA architecture(Primary on one server, Secondary+Arbiter point to another one) and config servers too in the same model. Weekly Collections are generated automatically based on a pipleline execution of extracting data from external sources and later shard key will be imposed on top of the collection and thereafter subsequent extractions will make the data distribute across two shards. Below is the output of two big collections.Collection Metrics:Name of the collection is 20230209\nCount of documents : 495910163 ( overall )Shard A count : 330654921\nShard B count : 165255242Database name : “INDIA_SPIRIT”, “primary” : “ShardA”Can someone help us on this…? Also, Initial data extracted before shard key is imposed will remain under Primary Shard or will also gets distributed, post sharding the collection…?Best Regards,\nKesav", "username": "ramgkliye" }, { "code": "Jumbo chunkshow to choose a shard keyJumbo chunks", "text": "Hello @ramgkliye ,Welcome back to The MongoDB Community Forums! Also, Initial data extracted before shard key is imposed will remain under Primary Shard or will also gets distributed, post sharding the collection…?I did not understand the question clearly, are you asking:what happens when you shard an existing collection (will it spread across the cluster by the balancer)?\nIn case you want to shard an existing collection data, it can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size. For more details please check Sharding Existing Collection Data Size. In case it lies within the limit then the data of sharded collection will be divided into chunks and moved to different shards until the collection is balanced.what happens to data that is not in the collection that is sharded (will non-sharded collection stay on the primary shard)?\nA database can have a mixture of sharded and unsharded collections. Sharded collections are partitioned and distributed across the shards in the cluster. Unsharded collections are stored on a primary shard. Each database has its own primary shard. Here, collection 1 represents sharded collection and collection 2 represents unsharded collection.Additionaly, how evenly the data distribution happens is mainly determined by the shard key (and the number of shards).In your case, It looks like both the shards have similar number of chunks (ShardA: 395 & ShardB: 394) but estimated data per chunk in ShardA is double in comparison to ShardB. 
So to check the un-even data distribution across your shards we need to make sure of some details, such as:Kindly go through below links to make sure you have followed the required steps necessary for efficient and performant working of your sharded cluster.\n-Deploy a Sharded Cluster\n-Performance Best Practices: ShardingNote: Starting in MongoDB 6.0.3, data in sharded clusters is distributed based on data size rather than number of chunks, so if balancing based on data size is your ultimate goal, I recommend you to check out MongoDB 6.0.3 or newer. For details, see Balancing Policy Changes.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Many thanks for the reply, dear Tarun. Attached is the db.printShardingStatus output for your reference and suggesting any…? Also as your mentioned “estimated data per chunk in ShardA is double in comparison to ShardB” - Any remedies like resharding the key or moving the shard from one to another works any…?Best Regards,\nramgkliye\nsharding_status.txt (19.9 KB)", "username": "ramgkliye" }, { "code": "", "text": "Most databases have primary as ShardB for unsharded collections, and sharded databases have a even distribution in terms of number of chunks.Since in this version sharding rule is based on number of chunks, from mongo’s view point, this is being “evenly” sharded.So i’m guessing your shard key doesn’t provide an even distribution. (e.g. it has 2/3 chance to sit in shardA and 1/3 for shardB). As a result, though # of chunks are same, data size is different.", "username": "Kobe_W" }, { "code": "", "text": "Thanks for the response and can resharding the existing shardkey makes any difference…?", "username": "ramgkliye" }, { "code": "", "text": "Yes, this version should already support that feature. Make sure your new sharding key can distribute more evenly.(forgot to mention, default chunk size is 128M or so, that’s why they are not auto splitting even further)", "username": "Kobe_W" }, { "code": "", "text": "hello, is my understanding like “does shardA default chunk size is 128 MB and ShardB default chunk size is 64MB” and due to this mismatch, estimated data per chunk varies and hence the uneven distribution any…? Also, read somewhere like due to high number of deletions across the collections ( within the bound ranges) will generate empty chunks …? True any… can someone clarify on this…?Best Regards,\nKesav", "username": "ramgkliye" }, { "code": "", "text": "Do we have any query/command that can be run inside mongos or on a Shard node for getting the list of unsharded collections against a database…? I’d used coll.stats, but it is showing at the collection level and inside the database, we have multiple collections… Can somebody provide some pointers…? Thanks in Advance… Kesav", "username": "ramgkliye" }, { "code": "", "text": "try this", "username": "Kobe_W" }, { "code": "", "text": "the default chunk size should be same if you use same version and not specifically change it (if this can be configured)", "username": "Kobe_W" }, { "code": "", "text": "We have nearly 150 collections ( average sized at 25 to 30 GB) residing on ShardA and due to this disk space consumption is more, when compared with Shard B. 
We have a collection(s) created as a part of application pipeline flow, where data will be loaded initially into the ShardA ( without shard key) and then shard key ( range based shard key) will be created, which will make the data distribute across shard A and B.Attached is the output of sharding distribution, where upon sharding the unsharded collections, chunk migration is happening, but going very slow. Any specific reasons for this…? Target : Get all the newlyy sharded collections distribute data equally and release the disk space at the ShardA servers side.\n\nshard-behaviour736×620 32.2 KB\nAppreciate someone’s help on this matter. Thanks in AdvanceBest Regards,\nKesava Ram\nsharding-behaviour.txt (2.8 KB)", "username": "ramgkliye" }, { "code": "", "text": "I remember one shard can only migrate one chunk at a time, this is to minimize disk usage on the servers.But it’s too slow in your case. Maybe your servers are too busy? or high disk/network usage already due to high traffic?", "username": "Kobe_W" } ]
Data Distribution is not even under MongoDB sharding Setup
2023-02-16T10:08:28.989Z
Data Distribution is not even under MongoDB sharding Setup
1,649
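Since the link in the reply above is not reproduced here, this is one hedged way to list the unsharded collections of a database from mongos in mongosh: compare the database's collections against the sharded namespaces recorded in the config database (field names in config.collections can differ slightly between server versions):

```js
const dbName = "INDIA_SPIRIT";
const sharded = db.getSiblingDB("config").collections
  .find({ _id: new RegExp("^" + dbName + "\\."), dropped: { $ne: true } })
  .toArray()
  .map(c => c._id.slice(dbName.length + 1));

const unsharded = db.getSiblingDB(dbName)
  .getCollectionNames()
  .filter(name => !sharded.includes(name) && !name.startsWith("system."));

printjson(unsharded);
```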
null
[ "dot-net" ]
[ { "code": "", "text": "Hi,Any chance to get a security update on dotnet driver before v2.19 for CVE-2022-48282?", "username": "Carretero_Ruben" }, { "code": "System.ObjectSystem.ObjectObjectSerializer.AllowedTypesObjectSerializer.AllowedTypesvar connectionString = \"<<YOUR_MONGODB_URI>>\";\nvar clientSettings = MongoClientSettings.FromConnectionString(connectionString);\nclientSettings.LinqProvider = LinqProvider.V2;\nvar client = new MongoClient(clientSettings);\n", "text": "Hi, @Carretero_Ruben,Welcome to the MongoDB Community Forums.I understand that you have a question about backporting CSHARP-4475, which addresses CVE-2022-48282, to earlier versions of the MongoDB .NET/C# Driver.Note that the vulnerability documented in the CVE has very specific requirements, notably a property or field typed as System.Object or a collection of System.Object as opposed to a specific type. An attacker would also require direct write access to the underlying collection to modify document data in an arbitrary fashion. Typical write access through an application is insufficient to exploit this vulnerability. Lastly the vulnerability is only present on .NET Framework on Windows. If you are running .NET Core or .NET 5+, you are not vulnerable to this particular exploit. Thus many users of the MongoDB .NET/C# Driver are not affected by this vulnerability.The challenge with a potential backport is that the fix is a breaking change. It requires affected users to opt into ObjectSerializer.AllowedTypes. Upgrading to a patch build of 2.18.X (or any other earlier version) should not require code changes, but backporting CSHARP-4475 would necessitate such code changes.Since the CSHARP-4475 fix requires code changes to configure ObjectSerializer.AllowedTypes, upgrading to 2.19.x seemed like a reasonable ask. One potential hurdle is that 2.19.0 makes our new LINQ3 provider the default. If this causes problems, it is straightforward to switch back to the older LINQ2 provider as follows:Please let us know if there is a blocker to upgrading your codebase to 2.19.x to take advantage of the CSHARP-4475 fix. In particular, which earlier versions of the driver would you like to see CSHARP-4475 backported to? Ideally we can remove any blockers and facilitate an upgrade to 2.19.x. If not, we may consider a limited backport to earlier versions.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Thanks James,We try and update to 2.19.0 and check if the linq provider works fine.", "username": "Carretero_Ruben" }, { "code": "", "text": "Please let us know if you run into any difficulties. If you do, a stack trace and self-contained repro would be most helpful and appreciated. Happy to discuss here in the forums, but you can also file bugs in our issue tracker directly.NOTE: We have already fixed a number of reported issues in 2.19.0 and plan to release 2.19.1 in the coming weeks to address these.", "username": "James_Kovacs" } ]
Vulnerability CVE-2022-48282
2023-03-06T08:44:41.453Z
Vulnerability CVE-2022-48282
1,060
null
[]
[ { "code": "", "text": "Hi,I just have a few questions related too the Atlas App Services User accounts:Where are the users stored (in the mongodb?) and is it possible to get a more direct access to them other than via the Realm App Services Web UI. I would like to access/manage other information like the identity provider IID etc.Is it possible to unlink auth providers, manually via a console, cli or API.Can users be moved/shared between Applications, I can see Applications can share the database, but I don’t really see any way to migrate/manage users.Can Custom Function providers store additional metadata? The Custom JWT providers let me map the meta data, but Custom Function providers doesn’t seem to have that function.Thanks!", "username": "Murray" }, { "code": "", "text": "In general, when you want more robust control of users and their metadata our recommendation is to use Custom Function or Custom JWT auth with Custom User data. This will give you the most flexibility for managing users accounts and metadata.", "username": "Ian_Ward" } ]
Atlas User Authentication
2023-03-03T22:33:10.226Z
Atlas User Authentication
830
null
[ "atlas-search" ]
[ { "code": "pathstring\"hammer\"titleplottitleplotpath", "text": "The documentation states that the path needs to be a string and not an array of strings. I just want to confirm that that is in fact the only possibility, and if, in either case, there is a recommended way to do this.e.g. I want to search (with autocompletion) my movies for the text \"hammer\" on both the title and the plotIn the current scenario I can implement a search by either title or plot with ease. But If I try to do it for both, making path an array of strings, which is acceptable on other operators, I get an error", "username": "Rodrigo_Sasaki" }, { "code": "", "text": "Hey @Rodrigo_Sasaki,Did you find an answer for this? Im curious as well.", "username": "Tyler_Bell" }, { "code": "", "text": "I am also looking for a solution to this.", "username": "Stephan_06935" }, { "code": "[\n {\n $search: {\n compound: {\n must: [\n {\n autocomplete: {\n query: search,\n path: \"searchName\"\n }\n }\n ],\n should:[\n {\n text: {\n query: search,\n path: \"searchName\"\n }\n }\n ],\n },\n },\n },\n ]\n", "text": "", "username": "Academia_Moviles" } ]
How do I run an autocomplete $search on multiple fields?
2020-07-09T19:10:50.152Z
How do I run an autocomplete $search on multiple fields?
4,183
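Tying the last reply back to the original title/plot question: autocomplete takes a single path, so searching both fields means one autocomplete clause per field inside compound.should. This sketch assumes both title and plot are mapped with the autocomplete type in an Atlas Search index named "default":

```js
db.movies.aggregate([
  {
    $search: {
      index: "default",
      compound: {
        should: [
          { autocomplete: { query: "hammer", path: "title" } },
          { autocomplete: { query: "hammer", path: "plot" } }
        ],
        minimumShouldMatch: 1
      }
    }
  },
  { $limit: 10 }
]);
```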
null
[ "queries" ]
[ { "code": "", "text": "Hi Team ,\nI am getting below error , while running explain plan using a hint with index.\ndb.collection.explain().find({“createdDateUTC” : ISODate(“2021-04-14T00:48:30.820Z”)}).hint({“createdDateUTC”:1});\nuncaught exception: Error: explain failed: {\n“operationTime” : Timestamp(1678115286, 84),\n“ok” : 0,\n“errmsg” : “error processing query: ns=ATTPT.davidTree: createdDateUTC $gt new Date(1618361310820)\\nSort: {}\\nProj: {}\\n planner returned error :: caused by :: hint provided does not correspond to an existing index”,\n“code” : 2,\n“codeName” : “BadValue”,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1678115286, 84),\n“signature” : {\n“hash” : BinData(0,“cRm/DMwcuvEC13V1XLzhqpXdIZA=”),\n“keyId” : NumberLong(“7154570222622474241”)\n}\n}\n}.", "username": "Prince_Das" }, { "code": "hint()db.collection.getIndexes()db.collection.find({...}).hint({...}).explain()\n", "text": "Hello @Prince_Das,It seems the provided index in hint() is not present, can you please make sure is that index present or not by db.collection.getIndexes() command,Second, you need to pass explain() at the end of the query, like this.", "username": "turivishal" } ]
Error while using hint(Index) in explain plan
2023-03-06T15:10:17.136Z
Error while using hint(Index) in explain plan
2,231
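A short mongosh follow-up to the answer above: confirm the index really exists (the hint document must match the index key pattern exactly), then chain explain() last. The collection name is the generic one used in the question:

```js
db.collection.createIndex({ createdDateUTC: 1 });
db.collection.getIndexes(); // the key pattern must match the hint exactly

db.collection
  .find({ createdDateUTC: { $gte: ISODate("2021-04-14T00:00:00Z") } })
  .hint({ createdDateUTC: 1 })
  .explain("executionStats");
```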
null
[ "replication", "java" ]
[ { "code": " HighLevel Problem statement: We are getting the exception “com.mongodb.MongoClientException: Sessions are not supported by the MongoDB” while creating the session from the MongoClient API when using ReactiveStreams MongoDB Java driver version 3.6.0.\n\n Recently we migrated from MongoDB version 2.6 to 3.6 and We used ReactiveStreams MongoDB Java driver version 3.6.0 to create session out of MongoClient API.\n Creation of session failed with “Sessions are not supported by the MongoDB”.\n\n However from the mongo shell, we can see the replica set status and in fact was able to execute db.getMongo().startSession() successfully.\n We tested the same by created a new instance of MongoDB version 3.6 directly and this time MongoClient api was able to create the session with out issues.\n\n Not sure why session is not getting created while in the migration environment. \n Is it the issue with the migration? or with the driver?\n Any help on this.\n\n Java code Snippet:\n MongoClient client;\n ClientSession clientSession = null;\n MongoDatabase mongoDatabase = null;\n MongoCollection<Document> fromCollection = null;\n Builder clientBuilder = null;\n MongoClientSettings settings = null;\n ConnectionString conn = null;\n \n clientBuilder = MongoClientSettings.builder();\n conn = new ConnectionString(clientURI);\n clientBuilder = clientBuilder.applyConnectionString(conn);\n settings = clientBuilder.retryReads(true).readPreference(ReadPreference.primary()).build();\n\n client = MongoClients.create(settings);\n\n Publisher<ClientSession> session = client.startSession(); // fails here\n MongoDBOperationsSubscriber<ClientSession> sub = new\n MongoDBOperationsSubscriber<>();\n sub.setCallingMethodName(“ClientSession”);\n session.subscribe(sub);\n\n if (sub.hasErrors()) {\n sub.cancelSubscription();\n System.out.println(\n “Error occurred when MongoDBOperationsSubscriber is\n receiving the documents from publisher.“);\n System.out.println(“Error is [ ” + sub.getError().getMessage() +\n ” ].“);\n throw new MongoDBCaptureException(sub.getError().getMessage());\n }\n if(!sub.getData().isEmpty()) {\n clientSession = sub.getData().get(0);\n }\n MongoShell outputs:\nrs01:PRIMARY> rs.conf();\n {\n “_id” : “rs01\",\n “version” : 25828,\n “protocolVersion” : NumberLong(1),\n “members” : [\n {\n “_id” : 5,\n “host” : “mongodb03:27017\",\n “arbiterOnly” : false,\n “buildIndexes” : true,\n “hidden” : true,\n “priority” : 0,\n “tags” : {\n\n },\n “slaveDelay” : NumberLong(0),\n “votes” : 1\n },\n {\n “_id” : 8,\n “host” : “mongodb02:27017\",\n “arbiterOnly” : false,\n “buildIndexes” : true,\n “hidden” : true,\n “priority” : 0,\n “tags” : {\n\n },\n “slaveDelay” : NumberLong(0),\n “votes” : 1\n },\n {\n “_id” : 9,\n “host” : “mongodb04:27017\",\n “arbiterOnly” : false,\n “buildIndexes” : true,\n “hidden” : false,\n “priority” : 4,\n “tags” : {\n\n },\n “slaveDelay” : NumberLong(0),\n “votes” : 1\n },\n {\n “_id” : 12,\n “host” : “mongodb05:27017\",\n “arbiterOnly” : false,\n “buildIndexes” : true,\n “hidden” : false,\n “priority” : 2,\n “tags” : {\n\n },\n “slaveDelay” : NumberLong(0),\n “votes” : 1\n },\n {\n “_id” : 13,\n “host” : “mongodb06:27017\",\n “arbiterOnly” : false,\n “buildIndexes” : true,\n “hidden” : false,\n “priority” : 2,\n “tags” : {\n\n },\n “slaveDelay” : NumberLong(0),\n “votes” : 1\n }\n ],\n “settings” : {\n “chainingAllowed” : true,\n “heartbeatIntervalMillis” : 2000,\n “heartbeatTimeoutSecs” : 10,\n “electionTimeoutMillis” : 10000,\n “catchUpTimeoutMillis” : 60000,\n 
“catchUpTakeoverDelayMillis” : 30000,\n “getLastErrorModes” : {\n\n },\n “getLastErrorDefaults” : {\n “w” : 1,\n “wtimeout” : 0\n }\n }\n }\n MongoDB logs has this message:\n {“log”:“2023-03-03T06:44:44.298+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist\\n”,“stream”:“stdout”,“time”:“2023-03-03T06:44:44.298265861Z”}", "text": "Hi, {“log”:“2023-03-03T06:44:44.298+0000 I CONTROL [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist\\n”,“stream”:“stdout”,“time”:“2023-03-03T06:44:44.298265861Z”}", "username": "Anwesh_Kota" }, { "code": "", "text": "Hi @Anwesh_KotaSessions will not be supported by a 3.6 cluster until 3.6 features are configured to be available. Please see the documentation at https://www.mongodb.com/docs/manual/release-notes/3.6-upgrade-replica-set/#enable-backwards-incompatible-features for instructions on how to do that.If you’ve already done that procedure and sessions still don’t work, please let us know.", "username": "Jeffrey_Yemin" }, { "code": "private void readOplog() {\n\n Publisher<Document> pubDoc = null;\n MongoDBOperationsSubscriber<Document> sub = null;\n\n Document filter = new Document();\n filter.put(\"ns\", namespace);\n \n pubDoc = fromCollection.find(clientSession, filter);\n sub = new MongoDBOperationsSubscriber<Document>();\n pubDoc.subscribe(sub);\n try {\n sub.await();\n sub.onComplete();\n } catch (Throwable e) {\n System.out.println(\n \"Read from Start, Error occurred while subscriber is in wait for messages from MongoDB publisher.\"\n + e);\n }\n fetchedOplogDocs = sub.getData();\n System.out.println(\"Total documents fetched so far are [ \"+ fetchedOplogDocs.size() +\" ].\");\n }\n", "text": "Hi @Jeffrey_Yemin ,Thank you, Session issue got resolved.But our code is still unable to read data/transactions from “oplog”.Below is the snapshot of reading the “oplog”. The test instance where we directly installed 3.6, the same code works.Please let us know if there is any additional setting which need to be done for reading the “oplog”.", "username": "Anwesh_Kota" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
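The fix Jeffrey_Yemin points to above is the featureCompatibilityVersion step from the 3.6 upgrade notes. A minimal mongosh/mongo-shell sketch of checking and enabling it, assuming a connection to the replica set primary; once the 3.6 features are on, the server can create config.system.sessions and the "Sessions collection is not set up" log message should stop:

// Check what the migrated replica set currently reports
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// If it still shows "3.4" (typical right after the binary upgrade to 3.6),
// enable the backwards-incompatible 3.6 features, which is what sessions need
db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })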
Unable to create session after MongoDB migration from version 2.6 to 3.6 while using ReactiveStreams Java driver
2023-03-03T07:58:00.211Z
Unable to create session after MongoDB migration from version 2.6 to 3.6 while using ReactiveStreams Java driver
1,255
null
[ "compass", "mongodb-shell" ]
[ { "code": "# network interfaces\nnet:\n port: 27017\n# bindIp: 127.0.0.1\n", "text": "Total newbie on MongoDB, so please be kind I have installed MongoDB Server (community edition) on one server (Microsoft Windows Server 2022 Standard) (OnPrem) and are now trying to connect to it from another server.Add first I had a problem just seeing the server, but I have made sure my network is now open to the server including port 27017. That part works!I have also changed the mongod.cfg file so it is not only bound to 127.0.0.0:\n……\nAnd after making the config-change, I have restarted my MongoDB service.But I continue to get the “connect ECONNREFUSED :27017” (where is my ip-address to the server).When trying to log on to MongoDB on the server directly using Mongosh, I can connect and it works. But when I try from my developer machine using MongoDBCompass, I get the above error.MongoDB Community Edition 6.0.4\nCompass version 1.35.0What can I do from here?", "username": "Ole_Frederiksen" }, { "code": "", "text": "Actually found the answer:Instead of commenting out bindIp: 127.0.0.1 I wrote:\nbindIpAll: trueThat worked!", "username": "Ole_Frederiksen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connect ECONNREFUSED (not 127.0.0.1) OnPrem installation
2023-03-06T14:47:21.059Z
Connect ECONNREFUSED (not 127.0.0.1) OnPrem installation
651
null
[ "node-js", "replication", "mongoose-odm", "containers" ]
[ { "code": "", "text": "So I have been scratching my head on this for a while but did not find any solution anywhere. I’m running a node v18 express application on docker and I’m trying to connect to mongodb that is running on my host machine and not in the docker container. I was able to connect to redis and memcached in similar way using host.docker.internal. Redis and memcached also running on host machine and not on the container.I thought it was issue from mondb node version, so I updated it from node 12 to node 18 after wards, but still the issue persisted.My mongodb connection string is“mongodb://userName:[email protected]:27017/?ssl=false&replicaSet=replicaSetName”\n“mongodb://userName:[email protected]:27017/?ssl=false&replicaSet=replicaSetName”\n“mongodb://userName:[email protected]:27017/?ssl=false&replicaSet=replicaSetName”In all the three URIs that I used, I got the same error evertimebackend-backend-1 | MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\nbackend-backend-1 | at Timeout._onTimeout (/usr/src/app/node_modules/mongodb/lib/sdam/topology.js:277:38)\nbackend-backend-1 | at listOnTimeout (node:internal/timers:569:17)\nbackend-backend-1 | at process.processTimers (node:internal/timers:512:7) {\nbackend-backend-1 | reason: TopologyDescription {\nbackend-backend-1 | type: ‘ReplicaSetNoPrimary’,\nbackend-backend-1 | servers: Map(1) { ‘127.0.0.1:27017’ => [ServerDescription] },\nbackend-backend-1 | stale: false,\nbackend-backend-1 | compatible: true,\nbackend-backend-1 | heartbeatFrequencyMS: 10000,\nbackend-backend-1 | localThresholdMS: 15,\nbackend-backend-1 | setName: ‘rs0’,\nbackend-backend-1 | maxElectionId: new ObjectId(“7fffffff0000000000000023”),\nbackend-backend-1 | maxSetVersion: 1,\nbackend-backend-1 | commonWireVersion: 0,\nbackend-backend-1 | logicalSessionTimeoutMinutes: null\nbackend-backend-1 | },\nbackend-backend-1 | code: undefined,\nbackend-backend-1 | [Symbol(errorLabels)]: Set(0) {}\nbackend-backend-1 | }But I could not understand why it is always trying to connect to 127.0.0.1 when I passed a different IP as well.I’m able to run the app easily without docker, but using docker, MongoDB is not able to resolve the correct IP even when provided different IP as well.", "username": "Devesh_Aggrawal" }, { "code": "", "text": "But I could not understand why it is always trying to connect to 127.0.0.1 when I passed a different IP as well.Most likely your code is wrong and it is not using your configured connection string. You will need to share your code where you establish the connection to the server.", "username": "steevej" }, { "code": "node --trace-warnings ...", "text": "this is my code and I’m running await MongoDB.disconnectDB(); in my index fileconst { uri } = conf.db;\nconst connectDB = async () => {\n// eslint-disable-next-line no-useless-catch\ntry {\nconsole.log(‘uri===’, uri);\ndb = await MongoClient.connect(uri, options);\nreturn db;\n} catch (err) {\nthrow err;\n}\n};backend-backend-1 | System Initialization started with DEVELOPMENT config\nbackend-backend-1 | uri=== mongodb://buyucoinUser:[email protected]:27017/buyucoin_stagin?ssl=false&replicaSet=rs0\nbackend-backend-1 | (node:149) Warning: Accessing non-existent property ‘UserModel’ of module exports inside circular dependency\nbackend-backend-1 | (Use node --trace-warnings ... 
to show where the warning was created)\nbackend-backend-1 | (node:149) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.\nbackend-backend-1 |\nbackend-backend-1 | Please migrate your code to use AWS SDK for JavaScript (v3).\nbackend-backend-1 | For more information, check the migration guide at Migrating your code to SDK for JavaScript V3 - AWS SDK for JavaScript\nbackend-backend-1 | System Initialization error\nbackend-backend-1 | MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\nbackend-backend-1 | at Timeout._onTimeout (/usr/src/app/node_modules/mongodb/lib/sdam/topology.js:277:38)\nbackend-backend-1 | at listOnTimeout (node:internal/timers:569:17)\nbackend-backend-1 | at process.processTimers (node:internal/timers:512:7) {\nbackend-backend-1 | reason: TopologyDescription {\nbackend-backend-1 | type: ‘ReplicaSetNoPrimary’,\nbackend-backend-1 | servers: Map(1) { ‘127.0.0.1:27017’ => [ServerDescription] },\nbackend-backend-1 | stale: false,\nbackend-backend-1 | compatible: true,\nbackend-backend-1 | heartbeatFrequencyMS: 10000,\nbackend-backend-1 | localThresholdMS: 15,\nbackend-backend-1 | setName: ‘rs0’,\nbackend-backend-1 | maxElectionId: new ObjectId(“7fffffff0000000000000023”),\nbackend-backend-1 | maxSetVersion: 1,\nbackend-backend-1 | commonWireVersion: 0,\nbackend-backend-1 | logicalSessionTimeoutMinutes: null\nbackend-backend-1 | },\nbackend-backend-1 | code: undefined,\nbackend-backend-1 | [Symbol(errorLabels)]: Set(0) {}\nbackend-backend-1 | }\nbackend-backend-1 | Process exited with code 1\nbackend-backend-1 | [nodemon] app crashed - waiting for file changes before starting…This is the exact console with everything\n@ steevej", "username": "Devesh_Aggrawal" }, { "code": "", "text": "Try to connect to the same URI with Compass or mongosh.Try removing the replicaSet=rs0 from your connection string.Are you really using mongod or AWS DynamoDB?Please share the output of docker ps.", "username": "steevej" }, { "code": "127.0.0.1mongodb://buyucoinUser:[email protected]:27017/buyucoin_stagin?ssl=false&replicaSet=rs0mongoshrs.status()PRIMARYSECONDARYnamename127.0.0.1localhost192.168.29.160mongoshvar c = rs.config();\nc.members[0].host = \"192.168.29.160:27017\";\nc.members[1].host = \"192.168.29.160:27018\";\nc.members[2].host = \"192.168.29.160:27019\";\nrs.reconfig(c);\n", "text": "@Devesh_Aggrawal,But I could not understand why it is always trying to connect to 127.0.0.1 when I passed a different IP as well.Chances are you configured your replica set with the hosts pointing to 127.0.0.1. As a result, once you connect using a connection string such as mongodb://buyucoinUser:[email protected]:27017/buyucoin_stagin?ssl=false&replicaSet=rs0 the first thing that will happen is the driver will attempt to discover the replica set and will get a list of hosts back using those “internal” hosts/IPs - not what you passed in the connection string.The easiest way to validate this is to connect to the replica set using the mongosh shell and run rs.status()\nimage711×835 86.9 KB\nThis will output an array of members in the set that includes details such as the state of the member (PRIMARY, SECONDARY … etc) and the name. The name here is the host/port for that node.Assuming you have your mapped to either 127.0.0.1 or localhost (like the example above), you can fix this by reconfiguring the replica set.As an example, let’s say you have 3 members on ports 27017, 27018 and 27019. 
If you wanted to configure the replica set to map these nodes to those ports on IP address 192.168.29.160, from the mongosh shell you would do the following:Once the replica set is reconfigured, when your application connects next it will discover the replica set members and try to connect to them on the host/port pairs that you configured above.", "username": "alexbevi" }, { "code": "", "text": "If I try to connect with the same URI without docker, I’m able to easily connect with nodejs driver as well and robo mongo also.If I remove replicaSet=rs0 then without using docker I’m able to connect, but using docker still the same errorI’m using [email protected] on macbookBelow is the docker ps responseCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nfe5ecd3ece2d backend-backend “docker-entrypoint.s…” 7 seconds ago Up 6 seconds 5001/tcp, 0.0.0.0:8000->8000/tcp backend-backend-1@ steevej", "username": "Devesh_Aggrawal" }, { "code": "", "text": "@ alexbeviYou are right, using rs.status() members are shown as below“members” : [\n{\n“_id” : 0,\n“name” : “127.0.0.1:27017”,\n“health” : 1,\n“state” : 1,\n“stateStr” : “PRIMARY”,\n“uptime” : 55830,\n“optime” : {\n“ts” : Timestamp(1678103836, 1),\n“t” : NumberLong(35)\n},\n“optimeDate” : ISODate(“2023-03-06T11:57:16Z”),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“electionTime” : Timestamp(1678048011, 1),\n“electionDate” : ISODate(“2023-03-05T20:26:51Z”),\n“configVersion” : 1,\n“configTerm” : 35,\n“self” : true,\n“lastHeartbeatMessage” : “”\n}\n],So I used 192.168.29.160 because this is the IP on which my macbook is available on the local wifi. So I tried the same thing without using docker and I was able to connect to the mongo without any issue, but if I try to connect to mongo of host machine from docker container, then this issue persists", "username": "Devesh_Aggrawal" }, { "code": "mongodmongod --port 27017 ...localhostmongod --port 27017 --bind_ip_all", "text": "@Devesh_Aggrawal you may need to update the IP Binding info for your mongod. For example, starting a node with mongod --port 27017 ... will only bind to localhost and listen on port 27017.Using mongod --port 27017 --bind_ip_all will bind to all IPv4/v6 addresses. For local testing this may be the easiest way to get this sorted out, but if you’re managing a production cluster I’d recommend configuring your IP bindings a bit more restrictively ", "username": "alexbevi" }, { "code": "", "text": "@ alexbeviI’m already using binding all ipv4 and ipv6 in the configuration\nmongo.conf is belowsystemLog:\ndestination: file\npath: /opt/homebrew/var/log/mongodb/mongo.log\nlogAppend: true\nstorage:\ndbPath: /opt/homebrew/var/mongodb\nreplication:\nreplSetName: “rs0”\nnet:\nbindIp: ::,0.0.0.0", "username": "Devesh_Aggrawal" }, { "code": "", "text": "@Devesh_Aggrawal this is not a MongoDB question, but a question of configuring Docker networks. I’ll let someone else weigh in but it sounds like you may need to setup a bridge network to ensure you can access your local network from the docker network layer.Not by forte, and it sounds like the default bridge should work as you expect so I’ll defer to someone else.", "username": "alexbevi" } ]
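For anyone hitting the same setup, a minimal connection check from inside the container, assuming the replica set member was reconfigured to the host's LAN IP as shown above (the password and IP here are placeholders):

// Node.js driver sketch: once rs.conf() advertises a host the container can route to,
// discovery succeeds and the driver stops being redirected to 127.0.0.1
const { MongoClient } = require('mongodb');

const uri = 'mongodb://buyucoinUser:<password>@192.168.29.160:27017/buyucoin_stagin' +
            '?replicaSet=rs0';

(async () => {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
  await client.connect();
  console.log(await client.db().command({ ping: 1 })); // { ok: 1 } when reachable
  await client.close();
})();

Pointing the URI at host.docker.internal alone would not be enough here, because with replicaSet set the driver swaps in whatever hosts rs.conf() advertises.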
MongoDB cannot connect to Docker host machine
2023-03-05T20:51:30.634Z
MongoDB cannot connect to Docker host machine
2,880
null
[ "sharding", "mongodb-shell", "change-streams" ]
[ { "code": "dms_admin> watch_cursor = db.getMongo().watch()\nChangeStreamCursor on mongodb://<credentials>@ip:port/?directConnection=true&appName=mongosh+1.6.2\ndms_admin> watch_cursor.tryNext();\n{\n _id: {\n _data: '8263EB034E000000032B022C0100296E5A1004D775448B6F1E4A91A72F8304F5786F03463C5F6964003C34653738316138632D643634632D343130302D623831612D386364363463373130306137000004'\n },\n operationType: 'replace',\n clusterTime: Timestamp({ t: 1676346190, i: 3 }),\n fullDocument: {\n xxxxx: xxxxx\n },\n ns: { db: 'db', coll: 'coll' },\n documentKey: { _id: '4e781a8c-d64c-4100-b81a-8cd64c7100a7' }\n}\ndms_admin> watchCursor = db.watch([], {\"startAfter\": {\"_data\": \"8263EB034E000000032B022C0100296E5A1004D775448B6F1E4A91A72F8304F5786F03463C5F6964003C34653738316138632D643634632D343130302D623831612D386364363463373130306137000004\"}});\nChangeStreamCursor on dms_admin\n[direct: mongos] dms_admin> watchCursor.tryNext();\nnull\n[direct: mongos] dms_admin> watchCursor.tryNext();\nnull\n[direct: mongos] dms_admin> watchCursor = db.watch([], {\"resumeAfter\": {\"_data\": \"8263EB034E000000032B022C0100296E5A1004D775448B6F1E4A91A72F8304F5786F03463C5F6964003C34653738316138632D643634632D343130302D623831612D386364363463373130306137000004\"}});\nChangeStreamCursor on dms_admin\n[direct: mongos] dms_admin> watchCursor.tryNext();\nnull\n[direct: mongos] dms_admin> watchCursor.tryNext();\nnull\n// before execute tryNext, modify some data at another session.\n[direct: mongos] dms_admin> watchCursor.tryNext();\nnull\n[direct: mongos] dms_admin> watchCursor.tryNext();\nMongoServerError: cannot resume stream; the resume token was not found. {_data: \"8263EB0393000000032B022C0100296E5A10042A041639D2024311906742C001F1320B463C5F6964003C4F35313233353933000004\"}\n[direct: mongos] dms_admin>\n", "text": "mongo version: 4.4.5anyone can help?", "username": "rancho_zhang" }, { "code": "[direct: mongos] dms_admin> watchCursor.tryNext();\nnull\nwatchCursor.tryNext();MongoServerError: cannot resume stream; the resume token was not found. {_data: \"8263EB0393000000032B022C0100296E5A10042A041639D2024311906742C001F1320B463C5F6964003C4F35313233353933000004\"}\nresumability", "text": "Hi @rancho_zhang,Welcome to the Community forums Apologies for the late response!If no operation has taken place in the database, watchCursor.tryNext(); will return null. However, if an operation has been executed, it will return the changes along with their details.A change stream can be resumed using a resume token that points to a specific timestamp in the oplog. However, when the oplog rolls over, the resume token becomes invalid. Trying to resume a change stream with an invalid resume token will result in an error, as indicated above.For more information on the resumability of the change stream check the documentation link.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Change stream: the resume token was not found
2023-02-14T05:46:33.461Z
Change stream: the resume token was not found
1,329
null
[ "replication", "java", "compass" ]
[ { "code": "", "text": "Hi MongoDB Community,I would be grateful for any help in clarifying why the MongoDB connection settings to connect via a Socks5 Proxy (“proxyHost” and “proxyPort”) are seemingly not available in the MongoDB java drivers. Also if there may be any known plans to add these settings?The according connection settings can be found here:But the settings are missing in the latest java drivers:This would be required to connect to a MongoDB ReplicaSet behind a Firewall, that can only be accessed through a tunneling service that acts as Socks5 proxy.Thank you in advance and kr,\nJan", "username": "Jan_de_Wilde" }, { "code": "", "text": "FYI I got the following answer from MongoDB support:No; unfortunately, the options are not yet available in the Java driver.Yes, there are plans to implement the options in the Java driver, but at this moment there is no estimated data we may provide. Once the feature is available, you would be able to see the details in the Java driver documentation page. Additionally, you may follow any public progress details looking at the ticket JAVA-4347.", "username": "Jan_de_Wilde" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Missing Proxy Connection Settings in Java Drivers
2023-03-03T10:59:39.789Z
Missing Proxy Connection Settings in Java Drivers
1,057
https://www.mongodb.com/…a8b56c34d197.png
[]
[ { "code": "", "text": "Hi everyone,When editing a chart, I can add a missing field by clicking the + icon next to ‘FIELDS’ and this works great.\n\nimage654×511 20.8 KB\nHowever, I do not see the same option for fields missing in Dashboard Filters. How can I add a missing field to a Dashboard Filter?", "username": "Alexander_Van_den_Bulck" }, { "code": "", "text": "@Alexander_Van_den_Bulck Sorry to hear you are having difficulties while adding Missed fields when using dashboard filters. We currently don’t support adding missed, lookup or calculated fields using the dashboard filter pane.An alternative you can use is create a view from the Data Sources page and add a query similar to{ $set: { missedField: 1 } }so that the missed field is always visible and can be used as a dashboard filter. This logic can be applied to Lookup and Calculated fields(virtual fields) as well.Also, we are planning to work on easily adding and using virtual fields in dashboard filtering some time this year.", "username": "Avinash_Prasad" } ]
How do I add a missing field to a Dashboard Filter?
2023-03-04T11:40:54.071Z
How do I add a missing field to a Dashboard Filter?
1,142
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hello,I want to create a view which will contains data from two collections. This view C is created by aggregation which is executed from collection A and look up to collect B. In this way the view C will have data from A and B. Is there a way to update the view C when A or B updated?\nThanks for the support,James", "username": "Zhihong_GUO" }, { "code": "", "text": "This is how standard views work.", "username": "steevej" }, { "code": "", "text": "Hello Steeve,Thank you for the answer.Just confirm if my understanding is correct. You mean if I create a view from collection A and B, it will keep updated when the data in A or B is changed. So in this case I just need to read the data from the view, even if the collection A or B changed, I don’t need to re-create the view, just read it and I can get new data, right?Thanks,James", "username": "Zhihong_GUO" }, { "code": "/* mongosh> */ db.A.insertOne( { _id : 0 , user : 1 })\n{ acknowledged: true, insertedId: 0 }\n/* mongosh> */ db.B.insertOne( { _id : 1 , name : \"Steeve\" })\n{ acknowledged: true, insertedId: 1 }\n/* mongosh> */ db.createView( \"C\" , \"A\" , [ { \"$lookup\" : { \"from\" : \"B\" , localField : \"user\" , as : \"_result\" , foreignField : \"_id\"}}])\n{ ok: 1 }\n/* mongosh> */ db.C.find()\n{ _id: 0, user: 1, _result: [ { _id: 1, name: 'Steeve' } ] }\n/* mongosh> */ db.B.updateOne( { _id : 1 } , { $set : { \"first_name\" : \"Steeve\" , \"last_name\" : \"Juneau\"}})\n{ acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0 }\n/* mongosh> */ db.C.find()\n{ _id: 0,\n user: 1,\n _result: \n [ { _id: 1,\n name: 'Steeve',\n first_name: 'Steeve',\n last_name: 'Juneau' } ] }\n", "text": "It is clear from the documentation that youdon’t need to re-create the viewyou simplyjust read itand youget new dataIt is actually trivial to test.", "username": "steevej" }, { "code": "", "text": "@steevej , Hello Steeve, got it. Thanks a lot for the help!", "username": "Zhihong_GUO" } ]
Can a view be updated when its base collection is updated?
2023-03-03T13:43:35.059Z
Can a view be updated when its base collection is updated?
608
null
[ "node-js" ]
[ { "code": "", "text": "Trying to setup github auto deployment and i’m hit with this!It used to work but then I copy pasted some trigger files from a different project and now here I am…", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "Hi Alexandar,The triggers you copied from a different project probably have a different service_name in the file. Please ensure that the correct service is specified that references to the data source in the app you’re importing to.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Service name would be the same. “mongodb-atlas”,But I noticed the cron and auth triggers didn’t have a service name. Could that be it?Also appriciate your quick response. been stuck on this an entire day…", "username": "Alexandar_Dimcevski" }, { "code": "mongo_service_name", "text": "Have you enabled custom user data?\nIf so please check your /auth/custom_user_data.json file and ensure the mongo_service_name is “mongodb-atlas”Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Please also share your data_sources/mongodb-atlas/config.json file with the clusterName redacted.", "username": "Mansoor_Omar" }, { "code": "mongo_servicemongodb-atlas{\n \"name\": \"mongodb-atlas\",\n \"type\": \"mongodb-atlas\",\n \"config\": {\n \"clusterName\": \"🤐\",\n \"readPreference\": \"primary\",\n \"wireProtocolEnabled\": false\n },\n \"version\": 1\n}\n\n", "text": "/auth/custom_user_data.json is mongo_service is set to mongodb-atlas I can make changes from the UI, then it pushed to github. When I pull, add one comment in a function and push it doesn’t work", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "Does the clusterName value in your data source reflect the actual name of the cluster?I can make changes from the UI, then it pushed to github. When I pull, add one comment in a function and push it doesn’t workIf I understand correctly you’re using both realm-cli and github commits to make changes.\nPlease know that you should not be using realm-cli while github auto-deploy is enabled since realm-cli will only be able to push to the UI but it will not update the github repo. Only changes made directly in the UI will push to github.I would recommend doing a fresh pull using realm-cli, and update your github repo with that state, after this point do not use realm-cli to push changes.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Does the clusterName value in your data source reflect the actual name of the cluster?yes.No i’m using the UI and github. NOT realm-cliI would recommend doing a fresh pull using realm-cli, and update your github repo with that state, after this point do not use realm-cli to push changes.Ok. Brb", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "No i’m using the UI and github. NOT realm-cliUnderstood.\nAt which point does the error happen? i.e. when making a change in the UI or pushing down from github?\nAlso what is the related app id (from the URL) ?", "username": "Mansoor_Omar" }, { "code": "", "text": "Nope.Do you know where this issue stems from? What is mongo service ID?", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "Also what is the related app id (from the URL) ?Not sure. How can I find out?At which point does the error happen? i.e. when making a change in the UI or pushing down from github?\nAlso what is the related app id (from the URL) ?Making changes form the UI works great! 
Making changes from VS Code and then pushing causes this error.", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "I’m experiencing this issue too", "username": "Paul_Vu" }, { "code": "", "text": "Actually I’m getting close to narrowing it down, it might have to do with default roles\nI pulled my main app down and was able to cd into the local directory and push it,So to give you a little more context, I created a new app, but recently I had issues with watch streams as well, but when I edited default roles it my issue with changestreams was resolved, I’m going to do a little more problem solving and report back what I find", "username": "Paul_Vu" }, { "code": "", "text": "What is mongo service ID?This would be the data source id.\nIf you visit your data source in the app services UI, you’ll find both the service id and app id in the URL.…/apps/APP_ID/services/SERVICE_ID/config", "username": "Mansoor_Omar" }, { "code": "", "text": "Got it. Neither of those IDs are mentioned in the project. Sent you DM with the IDs", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "Facing the same issue all of a sudden.\nWe use Admin APIs to create a Realm app in the required region and then use the Admin CLI to push the realm application and start the sync.All of this worked fine and all of a sudden it broke last night.\n07:22:43 push failed: must specify a mongo service IDFrom custom_user_data.json\n“mongo_service_name”: “mongodb-atlas”,", "username": "Shahil_Shah" }, { "code": "npx realm-cli push --remote=\"${{env.REALM_APP_ID}}\" --include-package-json -ypush failed: must specify a mongo service ID", "text": "We have the same issue all of a sudden.\nWe use realm-cli in a Github action to deploy our Atlas apps.npx realm-cli push --remote=\"${{env.REALM_APP_ID}}\" --include-package-json -yThe error message is the same as for the other posters:push failed: must specify a mongo service ID", "username": "Erlend_Blomseth" }, { "code": "", "text": "Hey guys after some problem solving , I wasn’t able to narrow down the answer yet, when I created a second brand new mongoDB realm atlas application it worked.Since Shahil is having issues with an existing app, I’m not sure I can pinpoint exactly what it was.At first I thought it was a default roles conflict, since my new app had not roles set on the user’s collection, and that was not the case.Then I thought that it could have been the deployment location, also not the case. (It was the main difference between our app in development and the new app I had made)Still looking into it, and will let you know if I discover anything. My temporarily solution was to create another realm app. And as I continue my migration, I will see if anyone of the new configurations I set will recreate this error.", "username": "Paul_Vu" }, { "code": "", "text": "Hi Paul,I did delete and create a new app. 
Even then the same issue.\nWe create the realm apps in 2 steps.Now this app was deployed using above to 2 steps and working.\nBut we added one more collection and hence I had to deploy the update which errored out and then I tried deleting and creating a new version on our dev environment.", "username": "Shahil_Shah" }, { "code": "", "text": "Hi All,I’ve raised this with our developers to investigate what could be causing this error as I’ve not found a reason thus far.If the error happens when using realm-cli please do reply with the version you’re using.Otherwise if the error happens when pushing from github please advise if the github integration was working previously or if you’re getting the error immediately after setting up github auto-deploy.Regards", "username": "Mansoor_Omar" } ]
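A sketch of the fresh pull Mansoor_Omar recommends, with the app id as a placeholder; the idea is to let the CLI write the current server-side state into the repo once, and from then on make changes only through the UI or git:

npx realm-cli pull --remote="<your-app-id>"

Committing that pulled state before the next change keeps the auto-deployed config's data source in line with the one the UI knows about, which is what the error is complaining about.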
Failed : failed to import app: must specify a mongo service ID
2023-02-26T22:54:39.388Z
Failed : failed to import app: must specify a mongo service ID
1,964
null
[ "queries" ]
[ { "code": "\n \n auto aNode = static_cast<UpdateObjectNode*>(mergedRootNode->getChild(\"a\"));\n ASSERT_TRUE(fieldsMatch({}, *aNode));\n \n \n ASSERT_TRUE(aNode->getChild(\"$\"));\n ASSERT_TRUE(aNode->getChild(\"$\")->type == UpdateNode::Type::Object);\n ASSERT_TRUE(typeid(*aNode->getChild(\"$\")) == typeid(UpdateObjectNode&));\n auto positionalNode = static_cast<UpdateObjectNode*>(aNode->getChild(\"$\"));\n ASSERT_TRUE(fieldsMatch({\"b\", \"c\"}, *positionalNode));\n }\n \n \nTEST(UpdateObjectNodeTest, TopLevelConflictFails) {\n auto setUpdate1 = fromjson(\"{$set: {'a': 5}}\");\n auto setUpdate2 = fromjson(\"{$set: {'a': 6}}\");\n FieldRef fakeFieldRef(\"root\");\n boost::intrusive_ptr<ExpressionContextForTest> expCtx(new ExpressionContextForTest());\n std::map<StringData, std::unique_ptr<ExpressionWithPlaceholder>> arrayFilters;\n std::set<std::string> foundIdentifiers;\n UpdateObjectNode setRoot1, setRoot2;\n ASSERT_OK(UpdateObjectNode::parseAndMerge(&setRoot1,\n modifiertable::ModifierType::MOD_SET,\n setUpdate1[\"$set\"][\"a\"],\n \n rs0:PRIMARY> db.test.find();\n\n{ \"_id\" : ObjectId(\"64050396a27f4d08e8c3a0d4\"), \"a\" : 10, \"b\" : 4444 }\n\n{ \"_id\" : ObjectId(\"640504d3781e448660488d79\"), \"a\" : 1000, \"b\" : 4444, \"d\" : 4444 }\n\nrs0:PRIMARY> db.test.update({\"a\":1000},{$set:{\"a\":100}, $set:{\"a\":100}, $set:{\"f\":4444},$set:{\"f.g\":1000}});\n\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\n\nrs0:PRIMARY> db.test.update({\"a\":1000},{$set:{\"a\":100}, $set:{\"a\":100}, $set:{\"x\":4444},$set:{\"x.y\":1000}});\n\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\nrs0:PRIMARY> db.test.update({\"a\":1000},{$set:{\"a\":100}, $set:{\"a\":900}, $set:{\"a\":10000}});\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\n", "text": "I am trying to simulate update conflict at path error in MongoDB as shown here:I also tried this:What am I missing. I am not getting that error.", "username": "Vineel_Yalamarthi" }, { "code": "{$set:{\"a\":100}, $set:{\"a\":100}, $set:{\"f\":4444},$set:{\"f.g\":1000}}{ '$set': { 'f.g': 1000 } }\n{$set:{\"a\":100}, $set:{\"a\":100}, $set:{\"x\":4444},$set:{\"x.y\":1000}}{ '$set': { 'x.y': 1000 } }\n{$set:{\"a\":100}, $set:{\"a\":900}, $set:{\"a\":10000}}{ '$set': { a: 10000 } }\ndb.test.updateOne( {} , { \"$set\" : { \"a\" : 1 } , \"$unset\" : { \"a\" : \"\" } } )\n", "text": "What am I missing. I am not getting that error.Most likely because it is not the same updates.In most implementation of JSON, only the last occurrence of repeated fields is kept.So{$set:{\"a\":100}, $set:{\"a\":100}, $set:{\"f\":4444},$set:{\"f.g\":1000}}is really equivalent toand{$set:{\"a\":100}, $set:{\"a\":100}, $set:{\"x\":4444},$set:{\"x.y\":1000}}is reallyand finally{$set:{\"a\":100}, $set:{\"a\":900}, $set:{\"a\":10000}}is really the same asno conflict.Try something like", "username": "steevej" } ]
Update Conflict Error MongoDB
2023-03-05T21:15:55.111Z
Update Conflict Error MongoDB
408
null
[ "aggregation", "performance" ]
[ { "code": "collection1.dest=collection2.src AND collection2.type='some_constant'// Get list of edges, returns 256 docs\n{ $match: { source: \"some_id\", type: \"1\" },\n// Lookup destinations, returns 256 docs\n{ $lookup: { \"from\": \"grouped_assocs\", \"localField\": 'destination_id', \"foreignField\": 'source_id', \"as\": \"target_objs\", pipeline: [ $match: { \"type\": \"2\" } ] } }\n// Get the list of edges, 256 docs\n[ { $match: { source: \"some_id\", type: \"1\"} ]\n// Load the resulting objects for each edge, 256 docs\n[\n{ $match: { type: '2', src: { $in: [ list of \"dest\" returned from first query ] } } }\n]\n", "text": "I have a fairly straightforward aggregation pipeline to join a collection with itself, but I’m having trouble getting the performance I expect. At a high level, I want to join collection1.dest=collection2.src AND collection2.type='some_constant'. collection1 and collection2 are the same collection though.My pipeline looks like this:Today, this takes ~100ms. By upgrading to mongo 6.0 and removing the pipeline filter in stage 2, it speeds up to 25ms (returning 3k documents though, which can be mitigated by filtering in another stage after). However, if I simply fire two separate queries to the DB I can achieve ~1ms for both stages (incurring latency twice though).I’m confused about why the pipeline is unable to treat it the same as two sequential queries to the database. Here are the sequential queries which results in much better performance:followed by:I understand the join and $in have slightly different results ($in would remove any dupes, and have a different order, although in my case these don’t matter). Is there some way to achieve that same performance with a pipeline though?Note: I have an index on “src,type”, and it does seem to be used in all cases listed", "username": "Alex_Coleman" }, { "code": "", "text": "It is hard to make sense of your issue. Either you have redacted field names and collection names and the modified names are not consistent. Or you have misspelling errors.Sometimes you use source and at other times your use either src or source_id.If your index is on src and type and the field name is source or source_id then do not be surprised if it is slow and your index is not used.To help you we need you to share real sample documents. The real pipeline. 
The output of getIndexes().$in would remove any dupesThe $lookup stage does not produce duplicate as far as know.have a different orderThe only way to have a specific order is to $sort.", "username": "steevej" }, { "code": "[{\n $match: {\n source_id: UUID(\"140cdf407c8311eb8bc38572321f765c\"),\n assoc_id: 10,\n company_id: UUID(\"140cdf407c8311eb8bc38572321f765c\")\n }\n}, {\n $limit: 10000\n}, {\n $lookup: {\n from: 'grouped_assocs',\n localField: 'destination_id',\n foreignField: 'source_id',\n as: 'target_objs'\n }\n}, {\n $unwind: {\n path: '$target_objs'\n }\n}, {\n $match: {\n 'target_objs.assoc_id': 4,\n 'target_objs.company_id': UUID(\"140cdf407c8311eb8bc38572321f765c\")\n }\n}, {\n $replaceWith: '$target_objs'\n}]\n// Part 1: Fetch destination IDs\n[{\n $match: {\n source_id: UUID(\"140cdf407c8311eb8bc38572321f765c\"),\n assoc_id: 10,\n company_id: UUID(\"140cdf407c8311eb8bc38572321f765c\")\n }\n}, {\n $project: {\n destination_id: 1\n }\n}]\n// Part 2: Fetch the definitions for each destination Entity\n[{\n $match: {\n assoc_id: 4,\n company_id: UUID(\"140cdf407c8311eb8bc38572321f765c\"),\n source_id: {\n $in: [\n UUID(\"0036c760607a11edb513fba47e97a4ac\"),\n UUID(\"01792fb011dc11edb724bd44abf0d73f\"),\n UUID(\"0230e720587511ec9849b1197c2ef65d\"),\n // (Ommitted 250 more entries for brevity)\n ]\n }\n }\n}]\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n { v: 2, key: { source_id: 1 }, name: 'source_id_1' },\n { v: 2, key: { destination_id: 1 }, name: 'destination_id_1' },\n {\n v: 2,\n key: { company_id: 1, source_id: 1, assoc_id: 1, destination_id: 1 },\n name: 'company_id_1_source_id_1_assoc_id_1_destination_id_1'\n },\n {\n v: 2,\n key: { assoc_id: 1, created_time: 1 },\n name: 'assoc_id_1_created_time_1'\n },\n {\n v: 2,\n key: {\n company_id: 1,\n source_id: 1,\n assoc_id: 1,\n destination_id: 1,\n created_time: 1\n },\n name: 'company_id_1_source_id_1_assoc_id_1_destination_id_1_created_time_1'\n },\n {\n v: 2,\n key: { company_id: 1, source_id: 1, assoc_id: 1, created_time: 1 },\n name: 'company_id_1_source_id_1_assoc_id_1_created_time_1'\n },\n {\n v: 2,\n key: { company_id: Long(\"1\"), assoc_id: Long(\"1\") },\n name: 'company_id_1_assoc_id_1'\n },\n { v: 2, key: { 'node.name': Long(\"1\") }, name: 'node.name_1' },\n {\n v: 2,\n key: { source_id: 1, assoc_id: 1, company_id: 1 },\n name: 'source_id_1_assoc_id_1_company_id_1'\n }\n]\n", "text": "Apologies, let me give the exact pipeline. 
I tried to redact to simplify unnecessary pieces, but I can see that only added confusion.This is the pipeline we use today, which takes 100+msThis is a two-part query which resolves in <10ms total (ignoring latency):Ideally I would be able to send a single pipeline to the database which does part 1 & part 2 in <10ms, but currently I have to decide between 1 pipeline of 100ms or 2 pipelines of <10ms.Additionally, using MongoDB 6.0 I’m able to get 25ms by removing the last two stages, despite that they should be a simple indexed filter to apply on top of the previous stages (I don’t want to 10x the transferred data because my database won’t let me do a simple filter).Also, here’s my indexes, which should cover everything and then some:", "username": "Alex_Coleman" }, { "code": "$lookup: {\n from: 'grouped_assocs',\n localField: 'destination_id',\n foreignField: 'source_id',\n as: 'target_objs' ,\n pipeline: [\n { $match: {\n 'assoc_id': 4,\n 'company_id': UUID(\"140cdf407c8311eb8bc38572321f765c\")\n } }\n ]\n }\n", "text": "It looks like you have some redundant indexes.What ever query uses source_id_1 can use source_id_1_assoc_id_1_company_id_1.Same with company_id_1_source_id_1_assoc_id_1_destination_id_1 and company_id_1_source_id_1_assoc_id_1_destination_id_1_created_time_1.It is not clear if source_id is unique or not so I will assume it is not.The big difference I see between the single access pipeline and the 2 parts is the assoc_id and company_id $match. In the 2 parts version, you must likely be able to use the index company_id_1_source_id_1_assoc_id_1_destination_id_1 while in the single access you $match after the $unwind. My suggestion would be to try to move the $match, that comes after the $unwind, into a pipeline of the $lookup. Your $lookup would then look like:At least this way you do not $unwind documents just to get rid of them in the $match.At this point I am not too sure it will be sufficient. But as an exercise it is easy to try before trying something else. It would be nice to give us feedback on what was the effect.The next thing to try would be to forgo the use of localField/foreignField and the source_id match inside the new $match of the $lookup. The foreignField might make the query use the redundant source_id_1 index which is not as good as source_id_1_assoc_id_1_company_id_1 for this query.", "username": "steevej" } ]
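For completeness, a sketch of the last suggestion above: dropping localField/foreignField and carrying the join key into the sub-pipeline with let and $expr, so the inner $match can line up with the compound index. The values are the ones from the thread; whether the planner actually picks company_id_1_source_id_1_assoc_id_1_destination_id_1 for the inner match is worth confirming with explain():

db.grouped_assocs.aggregate([
  { $match: { source_id: UUID("140cdf407c8311eb8bc38572321f765c"),
              assoc_id: 10,
              company_id: UUID("140cdf407c8311eb8bc38572321f765c") } },
  { $lookup: {
      from: 'grouped_assocs',
      let: { dest: '$destination_id' },
      pipeline: [
        { $match: {
            assoc_id: 4,
            company_id: UUID("140cdf407c8311eb8bc38572321f765c"),
            $expr: { $eq: ['$source_id', '$$dest'] }
        } }
      ],
      as: 'target_objs'
  } },
  { $unwind: '$target_objs' },
  { $replaceWith: '$target_objs' }
])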
Slow $lookup performance
2023-03-04T20:20:08.568Z
Slow $lookup performance
2,798
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "", "text": " Excited to share our latest product: [Mongoose Query Builder ] (https://www.jsexpert.io/Modules/QueryBuilder/Mongoose)! UpdatedIf you’re tired of writing complex and time-consuming Mongoose queries, our tool is here to help. Our intuitive interface makes it easy to build queries with just a few clicks. Plus, you can customize and fine-tune your queries using a variety of operators and conditions.We’d love for you to give our tool a try and share your feedback with us! Whether you have suggestions for new features or just want to let us know how we’re doing, we’re all ears. Let’s build a better developer experience together! ", "username": "Neeraj_Dana" }, { "code": "", "text": "", "username": "wan" } ]
Simple way to create a nested query
2023-03-04T04:54:29.679Z
Simple way to create a nested query
789
null
[ "next-js" ]
[ { "code": " import { MongoClient } from 'mongodb';\n import nextConnect from 'next-connect';\n\n const mongoClient = new MongoClient(process.env.mongoApiUrl, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n\n //with serverless we need to use cache to prevent re-opening connection\n let cached = global.mongo\n\n\n if (!cached) {\n cached = global.mongo = { conn: null, promise: null }\n }\n\n async function database(req, res, next) {\n //console.log(cached.promise)\n if (!cached.promise) {\n cached.promise = mongoClient.connect().then((client) => {\n return {\n client,\n db: client.db(process.env.MONGODB_DB),\n }\n })\n cached.conn = await cached.promise\n }\n\n req.dbClient = cached.conn.client\n req.db = cached.conn.db\n\n return next();\n }\n\n const middleware = nextConnect();\n\n middleware.use(database);\n\n\n export default middleware;```\n\nAND (I have tested also)\n\n\n\nimport { MongoClient } from 'mongodb';\n\nconst MONGODB_URI = process.env.mongoApiUrl;\nconst MONGODB_DB = process.env.MONGODB_DB;\n\n// check the MongoDB URI\nif (!MONGODB_URI) {\n throw new Error('Define the MONGODB_URI environmental variable');\n}\n\n// check the MongoDB DB\nif (!MONGODB_DB) {\n throw new Error('Define the MONGODB_DB environmental variable');\n}\n\nlet cachedClient = null;\nlet cachedDb = null;\n\nexport async function connectToDatabase() {\n // check the cached.\n if (cachedClient && cachedDb) {\n // load from cache\n return {\n client: cachedClient,\n db: cachedDb,\n };\n }\n\n // set the connection options\n const opts = {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n };\n\n // Connect to cluster\n let client = new MongoClient(MONGODB_URI, opts);\n await client.connect();\n let db = client.db(MONGODB_DB);\n\n // set cache\n cachedClient = client;\n cachedDb = db;\n\n return {\n client: cachedClient,\n db: cachedDb,\n };\n}\n\n", "text": "Hello,\nI have a huge amount of connections not closing, I tried a lot of actions to make, but nothing changed, connections are always > 80%.I have Nextjs application and MongoDB connection. I am deploying my nextjs app with Vercel, so each time I push to git branch vercel creates new “version” of app - maybe problem is here.I have bought M10 subscription to have high performance, but each day I am receiving notification “Connections % of configured limit has gone above 80” - but I have only 3 users working (test platform).What I tried to do:The problem I see is that connections are not closing, even at nights where no users working and using DB.\nPlease help!", "username": "Il_Chi" }, { "code": "", "text": "Same issue here, did you find a solution?", "username": "daanz" } ]
Huge amount of connections
2022-01-19T09:06:58.965Z
Huge amount of connections
3,666
null
[ "dot-net", "python", "atlas-device-sync", "flexible-sync", "unity" ]
[ { "code": "bar Object\n a: \"aaa\"\n b: \"bbb\"\n c: \"ccc\"\n{\n \"title\": \"Foo\",\n \"properties\": {\n \"_id\": { \"bsonType\": \"string\" },\n \"bar\": {\n \"bsonType\": \"object\",\n \"additionalProperties\": { \"bsonType\": \"string\" }\n }\n}\npublic class Foo : RealmObject\n{\n [MapTo(\"_id\")]\n [PrimaryKey]\n [Required]\n public string Id { get; set; }\n\n [MapTo(\"bar\")]\n [Required]\n public IDictionary<string, string> Bar { get; }\n}\nrealm.All<Foo>()realm.WriteCopy()RealmDictionary<string>", "text": "Hey there! I’m trying to sync data from the atlas.I’m using Unity 2021.3.16 and RealmSDK version 10.20.0Sample object in the collection:This data was written by Python MongoDB side as a dictionary.Here’s my schema:and c# class:Here are some problems I collided with:I also tried to use RealmDictionary<string> but the result was the same.Thanks!", "username": "Daniil_T" }, { "code": "", "text": "Hi,Sorry to hear things aren’t going great for you thus far. We are looking to surface some of these schema compatibility issues a bit better in the coming months. A few thoughts off the bat:Why are you using WriteCopy()? That is a very specific method to generate state realms and I suspect you might not be trying to. You likely are just looking for Write()Can you provide more details around what isnt working with Flexible Sync?My best advice for you now would be to:If that doesn’t work, please let me know and perhaps provide your app_id (can be the URL in your App Services page) and I can try to look at what might be going wrong.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi Tyler!\nI’ve joined from another account.I made a document about all steps I did to show you my progress with this issue. I also made a more complicated schema to understand how to deal with embedded objects.I got sample data: Connected to atlas from MongoDB .NET SDK using this code client = new MongoClient(\"mongodb+srv://:@cluster0..mongodb.net/?retryWrites=true&amp;w=majority?connect=replicaSet\"); db = client.GetDatabase(\"SampleDatabase\"); ...I also made a sample project from another Gmail, so you can easily play with it. Here’s the link:\nhttps://cloud.mongodb.com/v2/6400fd7092a24557972a3b45#/clustersThanks!", "username": "RFS_6ro" } ]
Flexible sync error while syncing Dictionaries
2023-03-01T02:46:21.597Z
Flexible sync error while syncing Dictionaries
1,215
https://www.mongodb.com/…_2_1024x764.jpeg
[]
[ { "code": "", "text": "I’m excited to announce our new Community Champions and Community Enthusiasts joining the Community Advocacy Program.This program is a global community of passionate and dedicated MongoDB advocates. Through it, members can grow their knowledge, profile, and leadership by engaging with the larger community and advocating for MongoDB technologies and our users.Members gain a variety of experiences and relationships that grow their professional stature as MongoDB practitioners and enable them to form meaningful bonds with community leaders.The nomination process is currently closed. To be notified when the application process re-opens, please click here.Please join me in congratulating our new cohort : @Arkadiusz_Borucki , @chris, @eliehannouch, @hpgrahsl , @Jay, @Justin_Lee, @kev_bite, @Leandro_Domingues, @MalakMSAH, @Michael_Holler, @Nuri_Halperin , @RajeshSNair, @Roman_Right, @shrey_batra, @TJ_Tang, @Aditi_Sharma_132, @Danielle_Monteiro, @Darine_Tleiss, @GeniusLearner, @Hamza_Faham, @Heba_Ahmed, @HemantSachdeva, @juan_roy1, @Justin_Jenkins, @Khushi_Agarwal, @logwriter, @onkarjanwa, @Otavio_Santana, @Paavni_Ahuja, @Pallaav_Sethi, @Sani_Yusuf, @Sumanta_Mukhopadhyay, @Tanya_Sinha, @Tiyasa_Khan, @Trina_Yau, @turivishal\nNew C1920×1434 130 KB\n\nNew E1920×1664 169 KB\n", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "thank you so much ma’am", "username": "Hamza_Faham" }, { "code": "", "text": " That’s amazing. Great to see so many awesome folks and brilliant talents in these pictures. I’m humbled and honoured to be part! THX.", "username": "hpgrahsl" }, { "code": "", "text": "Welcome \nIt so great to see that many talents joining in!\nGreat times are ahead ", "username": "michael_hoeller" }, { "code": "", "text": "Great to be part of the Community Champions!", "username": "kev_bite" }, { "code": "", "text": "I’m honoured to be the part of this program <3", "username": "HemantSachdeva" }, { "code": "", "text": "Congrats to everyone and nice to meet you. Can’t wait to start working with you!\nRegards for my old friends here ", "username": "juan_roy1" }, { "code": "", "text": "This topic was automatically closed after 60 days. New replies are no longer allowed.", "username": "system" } ]
Announcing the new cohort of MongoDB Community Champions and Enthusiasts!
2023-03-03T18:15:16.513Z
Announcing the new cohort of MongoDB Community Champions and Enthusiasts!
1,313
null
[ "replication", "java", "atlas-cluster", "spring-data-odm" ]
[ { "code": "2023-02-17 15:09:46.998 INFO 1 --- [ngodb.net:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server <server-id>.mongodb.net:27017\ncom.mongodb.MongoSocketReadException: Prematurely reached end of stream\n\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:112) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:135) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:713) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:571) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:410) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.receive(InternalStreamConnection.java:369) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:221) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat java.base/java.lang.Thread.run(Unknown Source) ~[na:na]\n2023-02-17 15:09:47.015 INFO 1 --- [ngodb.net:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server <server-id>.mongodb.net:27017\ncom.mongodb.MongoSocketWriteException: Exception sending message\n\tat com.mongodb.internal.connection.InternalStreamConnection.translateWriteException(InternalStreamConnection.java:684) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:555) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendCommandMessage(InternalStreamConnection.java:381) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:329) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:101) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:45) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:131) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnectionInitializer.startHandshake(InternalStreamConnectionInitializer.java:73) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:182) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:193) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat java.base/java.lang.Thread.run(Unknown Source) ~[na:na]\nCaused by: java.net.SocketException: Connection reset\n\tat 
java.base/java.net.SocketInputStream.read(Unknown Source) ~[na:na]\n\tat java.base/java.net.SocketInputStream.read(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.read(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketInputRecord.decode(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLTransport.decode(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketImpl.decode(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketImpl.ensureNegotiated(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(Unknown Source) ~[na:na]\n\tat com.mongodb.internal.connection.SocketStream.write(SocketStream.java:99) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendMessage(InternalStreamConnection.java:552) ~[mongodb-driver-core-4.6.1.jar:na]\n\t... 10 common frames omitted\n2023-02-17 15:09:57.018 INFO 1 --- [ngodb.net:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server <server-id>.mongodb.net:27017\ncom.mongodb.MongoSocketOpenException: Exception opening socket\n\tat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:180) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:193) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:157) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat java.base/java.lang.Thread.run(Unknown Source) ~[na:na]\nCaused by: java.net.ConnectException: Connection refused (Connection refused)\n\tat java.base/java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:na]\n\tat java.base/java.net.AbstractPlainSocketImpl.doConnect(Unknown Source) ~[na:na]\n\tat java.base/java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source) ~[na:na]\n\tat java.base/java.net.AbstractPlainSocketImpl.connect(Unknown Source) ~[na:na]\n\tat java.base/java.net.SocksSocketImpl.connect(Unknown Source) ~[na:na]\n\tat java.base/java.net.Socket.connect(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketImpl.connect(Unknown Source) ~[na:na]\n\tat com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:107) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-4.6.1.jar:na]\n\tat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-4.6.1.jar:na]\n\t... 4 common frames omitted\nmaxIdleTimeMSmongodb+srv://<user>:<password>@<server>.mongodb.net/<db>?retryWrites=true&w=majority&maxIdleTimeMS=60000\n", "text": "I have a Spring Boot service deployed on AWS ECS. 
The service offers a REST API for clients and stores its data on a MongoDB Atlas M2 cluster (Replica Set - 3 nodes).Recently, I noticed some strange logs popping up:MongoSocketReadExceptionMongoSocketWriteExceptionMongoSocketOpenExceptionAs this seems to consistently happen after idle periods, I figured this may be a timeout issue. Various sources on the internet suggest setting maxIdleTimeMS, but to no avail. Please note that my service is working nonetheless and I can retrieve data via its REST API w/o any issues. It seems to be able to recover from the above-mentioned connection exceptions…My connection string looks as follows:What am I missing?", "username": "Iggy" }, { "code": "", "text": "The key line in the logs is “Exception in monitor thread while connecting to server …”.It’s normal to see these sorts of messages during planned maintenance of MongoDB servers (e.g. restarts for upgrades). The automatic retry features of the driver should hide most of these from your application and avoid exceptions being thrown to application threads via the drivers’s CRUD API.Hope this helps.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Thanks Jeffrey for the quick response!So what you are saying is those info logs are expected and the result of an Atlas node becoming (temporarily) unavailable, which is a perfectly normal scenario that won’t affect my service thanks to the driver’s retry features?If so, could you please share some details on those “monitor threads”? What are they monitoring exactly?", "username": "Iggy" }, { "code": "", "text": "We refer to this as Server Discover and Monitoring (SDAM).The general description is here.The gory details are here.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas connections reset/refused
2023-03-03T10:15:14.348Z
MongoDB Atlas connections reset/refused
1,896
null
[ "replication", "atlas-cluster", "atlas" ]
[ { "code": "connecting to: mongodb://ac-srubobk-shard-00-01.xyymilx.mongodb.net.:27017,ac-srubobk-shard-00-02.xyymilx.mongodb.net.:27017,ac-srubobk-shard-00-00.xyymilx.mongodb.net.:27017/myFirstDatabase?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-7eueer-shard-0&ssl=true\n2023-03-04T09:38:03.492-0430 I NETWORK [js] Starting new replica set monitor for atlas-7eueer-shard-0/ac-srubobk-shard-00-01.xyymilx.mongodb.net.:27017,ac-srubobk-shard-00-02.xyymilx.mongodb.net.:27017,ac-srubobk-shard-00-00.xyymilx.mongodb.net.:27017\n\n2023-03-04T09:38:04.808-0430 E NETWORK [js] SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:04.809-0430 E NETWORK [ReplicaSetMonitor-TaskExecutor-0] SSL peer\ncertificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:05.197-0430 E NETWORK [js] SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:05.198-0430 F NETWORK [js] ReplicaSetMonitor atlas-7eueer-shard-0\nrecieved error while monitoring ac-srubobk-shard-00-00.xyymilx.mongodb.net.:27017: Location40356: connection pool: connect failed ac-srubobk-shard-00-00.xyymilx.mongodb.net.:27017 : couldn't connect to server ac-srubobk-shard-00-00.xyymilx.mongodb.net.:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.(1705 ms)\n2023-03-04T09:38:05.199-0430 I NETWORK [js] Received another failure for host ac-srubobk-shard-00-00.xyymilx.mongodb.net.:27017 :: caused by :: Location40356: connection pool: connect failed ac-srubobk-shard-00-00.xyymilx.mongodb.net.:27017 : couldn't connect to server ac-srubobk-shard-00-00.xyymilx.mongodb.net.:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:05.199-0430 E NETWORK [ReplicaSetMonitor-TaskExecutor-0] SSL peer\ncertificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:05.201-0430 F NETWORK [ReplicaSetMonitor-TaskExecutor-0] ReplicaSetMonitor atlas-7eueer-shard-0 recieved error while monitoring ac-srubobk-shard-00-02.xyymilx.mongodb.net.:27017: Location40356: connection pool: connect failed ac-srubobk-shard-00-02.xyymilx.mongodb.net.:27017 : couldn't connect to server ac-srubobk-shard-00-02.xyymilx.mongodb.net.:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.(1708 ms)\n2023-03-04T09:38:05.201-0430 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Received\nanother failure for host ac-srubobk-shard-00-02.xyymilx.mongodb.net.:27017 :: caused by :: Location40356: connection pool: connect failed 
ac-srubobk-shard-00-02.xyymilx.mongodb.net.:27017 : couldn't connect to server ac-srubobk-shard-00-02.xyymilx.mongodb.net.:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate\nvalidation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:06.240-0430 E NETWORK [js] SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:06.607-0430 E NETWORK [js] SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:06.608-0430 F NETWORK [js] ReplicaSetMonitor atlas-7eueer-shard-0\nrecieved error while monitoring ac-srubobk-shard-00-01.xyymilx.mongodb.net.:27017: Location40356: connection pool: connect failed ac-srubobk-shard-00-01.xyymilx.mongodb.net.:27017 : couldn't connect to server ac-srubobk-shard-00-01.xyymilx.mongodb.net.:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.(1408 ms)\n2023-03-04T09:38:06.608-0430 I NETWORK [js] Received another failure for host ac-srubobk-shard-00-01.xyymilx.mongodb.net.:27017 :: caused by :: Location40356: connection pool: connect failed ac-srubobk-shard-00-01.xyymilx.mongodb.net.:27017 : couldn't connect to server ac-srubobk-shard-00-01.xyymilx.mongodb.net.:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: (800B0109)Se procesó correctamente una cadena de certificados, pero termina en un certificado de raíz no compatible con el proveedor de confianza.\n2023-03-04T09:38:06.609-0430 W NETWORK [js] Unable to reach primary for set atlas-7eueer-shard-0\n2023-03-04T09:38:06.609-0430 I NETWORK [js] Cannot reach any nodes for set atlas-7eueer-shard-0. Please check network connectivity and the status of the set. This has\nhappened for 1 checks in a row.\n2023-03-04T09:38:06.708-0430 I CONTROL [thread1] Shutdown started\n2023-03-04T09:38:06.718-0430 E - [thread1] Error saving history file: FileOpenFailed: Unable to fopen() file : La operación se completó correctamente.\n2023-03-04T09:38:06.718-0430 I CONTROL [thread1] shutting down with code:0\n", "text": "Hello guys I can’t connect to my atlas cluster, I’m new on this I don’t know what to do about this…\nThis is the command that I type:\nmongo “mongodb+srv://cluster0.xyymilx.mongodb.net/myFirstDatabase” --username lect123 --password lect147258369this is the error that I received", "username": "Jesus_Guerrero" }, { "code": "", "text": "I think you must delete your user info in this post and delete user in mongodb atlas", "username": "jeongmu.park" }, { "code": "", "text": "Anyone can use your mongodb database with username and password in this post. I recommend to delete your mongodb user and create new user.", "username": "jeongmu.park" }, { "code": "", "text": "yes bro I already deleted it thanks!", "username": "Jesus_Guerrero" } ]
I can't connect to my Atlas db with mongoshell
2023-03-04T13:39:35.136Z
I can&rsquo;t connect to my Atlas db with mongoshell
960
null
[ "aggregation" ]
[ { "code": "[\n {\n \"id\": \"173420\",\n \"dataset_name\": \"173420 - gene expression in treatment\"\n\t\t\"file_details\": [\n {\n \"size\": 31983,\n \"data_type\": \"Methylation\"\n },\n {\n \"size\": 110193,\n \"data_type\": \"Methylation\"\n },\n {\n \"size\": 254763,\n \"data_type\": \"Methylation\"\n },\n {\n \"size\": 1632726,\n \"data_type\": \"Clinical\",\n }\n ]\n\t},\n {\n \"id\": \"GSE88\",\n \"dataset_name\": \"GSE88 tumour recurrence prediction\",\n \"file_details\": [\n {\n \"size\": 7402964,\n \"data_type\": \"Expression\"\n },\n {\n \"size\": 7368643,\n \"data_type\": \"Expression\"\n },\n {\n \"size\": 7540211,\n \"data_type\": \"Clinical\"\n\n },\n {\n \"size\": 7426688,\n \"data_type\": \"Clinical\"\n }\n\t]\n\t}\n]\ndb.dataset.aggregate([ \n{ $unwind: \"$file_details\" },\n{\"$group\" : {_id:\"$file_details.data_type\", count:{$sum:1}}},\n{$sort : {count:-1}}\n])\n", "text": "The collection has the following structure:The requirement is to count the distinct file_details.data_type and count the number of documents that have the data_type in it.\nI have used the following unwind-group-count query for it:It returns the following result:\n_id: “Methylation”\ncount: “3”\n-id:“Expression”\ncount: “2”\n_id: “Clinical”\ncount: “3”But the expected result needs to count the occurrence of the data_type per document:\nexpected output:_id: “Methylation”\ncount: “1”\n-id:“Expression”\ncount: “1”\n_id: “Clinical”\ncount: “2”could you please help me improve the query or any new approch.\nThank you in advance.", "username": "Masuma_Bibi" }, { "code": "{ \"$project\" : {\n unique_data_types : { $setUnion : [ \"$file_details.data_type\" ] }\n} }\n", "text": "If I understand correctly you want to count Methylation once because it is within the same document.If that is the case, you must eliminate duplicate within documents. A simple $project that uses $setUnion will do that for you:You then $unwind the new array unique_data_types and your $group.", "username": "steevej" } ]
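Putting steevej's answer together with the original pipeline, a complete version might look like the sketch below (collection and field names are the ones from the question); each document then contributes every data_type at most once:

```js
db.dataset.aggregate([
  // Deduplicate data_type values within each document
  { $project: { unique_data_types: { $setUnion: ["$file_details.data_type"] } } },
  // One output document per distinct data_type per input document
  { $unwind: "$unique_data_types" },
  // Count how many documents contain each data_type
  { $group: { _id: "$unique_data_types", count: { $sum: 1 } } },
  { $sort: { count: -1 } }
])
```

Against the two sample documents this yields Clinical: 2, Methylation: 1, Expression: 1, which matches the expected output.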
Array of objects unwind, group and counting per document
2023-03-03T07:24:38.509Z
Array of objects unwind, group and counting per document
637
null
[]
[ { "code": "", "text": "Hi, is c100dev exam retired? The new exam does not seem to contain the same vast syllabus . Also, I have a voucher for c100dev will I be able to use that?Thanks!", "username": "Anmol_N_A" }, { "code": "", "text": "Hi @Anmol_N_A,Welcome to the MongoDB Community forums Hi, is the c100dev exam retired?No, it’s not retired, the MongoDB Associate Developer Exam has been revamped. You can find all the details related to the exam here.The new exam does not seem to contain the same vast syllabusYou can find the syllabus and the updated study guide for the MongoDB Associate Developer Exam here.Also, I have a voucher for c100dev will I be able to use that?Please email the voucher details to [email protected] and the team will be happy to assist you.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
C100dev retired?
2023-03-04T11:57:09.089Z
C100dev retired?
1,062
null
[ "next-js" ]
[ { "code": " [\n{\n_id: '323030049934bwd', \ntype: ObjectId\n},{\nname: \"David\", \ntype: String\n},{\nactive: false, \ntype: boolean\n}\n]\n", "text": "Hi everyone, I have nextjs project where I use MongoDB to get data, so when I fetch data I want to get types, I show you what result I want to get.please consider that, I want such an array because I want to know what type of data is coming, when I get “_id”, “typeof” gives me “String” (I know why it doing such), and I want to get that _id is a “ObjectId”, and so onfor example:I hope I did to describe my situation,", "username": "David_Takidze" }, { "code": "", "text": "The mongodb equivalent is $type.", "username": "steevej" }, { "code": "import clientPromise from \"../../../lib/mongo/mongo\";\n\nexport default async function handler(req, res) {\n const client = await clientPromise;\n try {\n await client.connect();\n\n const allData = await client\n .db()\n .collection(req.query.collection[0])\n .find({})\n .toArray();\n\n return res.send(allData);\n } finally {\n await client.close();\n }\n}\n\n", "text": "sorry for the late answer, but, I use Nextjs, how I can use $type?", "username": "David_Takidze" }, { "code": "", "text": "I know that youuse Nextjsthat is in the title of the post.But using $type (please click on the link and look at the examples) is a server thing. It has nothing to do with nextjs or what ever client you are using.I just found out the following which might be a better solutionA bson parser for node.js and the browser. Latest version: 5.0.1, last published: 15 days ago. Start using bson in your project by running `npm i bson`. There are 1333 other projects in the npm registry using bson.", "username": "steevej" } ]
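As a rough sketch of steevej's first suggestion, the $type aggregation operator can report each field's BSON type on the server, so the API response carries type names alongside the values. The field names (name, active) are taken from the question's example and are otherwise assumptions:

```js
const allData = await client
  .db()
  .collection(req.query.collection[0])
  .aggregate([
    {
      $project: {
        _id: 1,
        name: 1,
        active: 1,
        // $type returns the BSON type name as a string,
        // e.g. "objectId", "string", "bool", "double", "date"
        _id_type: { $type: "$_id" },
        name_type: { $type: "$name" },
        active_type: { $type: "$active" },
      },
    },
  ])
  .toArray();
```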
Get data with types from mongodb in Nextjs
2023-02-26T17:12:09.086Z
Get data with types from mongodb in Nextjs
2,019
null
[ "dot-net", "atlas", "change-streams" ]
[ { "code": "", "text": "I’ve setup a continuous Azure WebJob (without triggers) to act as a watcher for the MongoDB change stream. Everything seems to work fine. However, I’m curious as to how the listener will work if I scaled out the WebJob to multiple instances? Will this cause issues with the MongoDB stream if multiple instances are connecting to the same change stream at the same time? Is there a more elegant way to handle scaling out the watcher or do we have to make it a singleton?", "username": "Sam_Lanza" }, { "code": "", "text": "One way to scale is to distribute the load by watching a different set of operations or documents.OperationsFor example, rather than one process watching of all operations, you have one that listen to inserts, one that listen for deletes and a third that listen to updates. You can then run one or more instances of each depending of your pattern.DocumentsYou may also share the load by watching different documents. For example, you may use one of the field of the ObjectId like timestamp to watch to odd timestamp and a second one that watches even timestamp. So each watcher will receive notifications for a different set of documents.", "username": "steevej" }, { "code": "", "text": "We’re running into a similar situation. We have a java service that can horizontally scale watching a collection, is there a way to achieve true scaling with mongodb change streams? Having a logic based on timestamp or operation type feels hacky instead of a clean implementation.", "username": "Darshan_Bangre" }, { "code": "", "text": "For a non-hacky solution, kafka can always be used.", "username": "steevej" }, { "code": "", "text": "Right, we’re using mongodb sink connector to sink the events from Kafka to mongodb. These events are then processed by horizontally scaled service via change streams.", "username": "Darshan_Bangre" }, { "code": "", "text": "@steevej how does kafka solve horizontal scaling of change stream processing by multiple instances of an application?", "username": "Darshan_Bangre" }, { "code": "consumer group", "text": "From 4. Kafka Consumers: Reading Data from Kafka - Kafka: The Definitive Guide [Book]Kafka consumers are typically part of a consumer group. When multiple consumers are subscribed to a topic and belong to the same consumer group, each consumer in the group will receive messages from a different subset of the partitions in the topic.Written differently, each instance of the application will receive a different change stream event.", "username": "steevej" }, { "code": "", "text": "thanks, we’re already sourcing the data from kafka to mongodb via kafka sink connector and the application is receiving the changes via mongodb changestreams so, we need to scale the stream without having another messaging layer", "username": "Darshan_Bangre" }, { "code": "", "text": "If your data already transit via kafka, why don’t you simply skip the change stream altogether and let your application instances be consumers part of a different group. So what ever is received by the sink connector it is also received by your application. Completely removing the change stream and the need to scale it.That would reduce the work load on the mongod server. It would also reduce latency since your application will get the data faster.", "username": "steevej" }, { "code": "", "text": "That’s a good solution but the application is joining a few collections (avoiding the usage of kafka streams and joins directly on topics) in mongodb before pushing the data into upstream. 
Also, the Kafka messages do not contain the entire event, so MongoDB is updated with the changes first and the change stream then provides the latest state of the entire record.", "username": "Darshan_Bangre" } ]
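For reference, the operation-based split steevej describes maps directly onto a change stream pipeline, so each worker only ever receives its own slice of events. A minimal Node.js sketch; the collection name is a placeholder:

```js
// Worker A: inserts only
const insertStream = db.collection("orders").watch([
  { $match: { operationType: "insert" } },
]);
insertStream.on("change", (event) => {
  console.log("insert handled by worker A:", event.documentKey);
});

// Worker B: updates and replaces only
const updateStream = db.collection("orders").watch([
  { $match: { operationType: { $in: ["update", "replace"] } } },
]);
updateStream.on("change", (event) => {
  console.log("update handled by worker B:", event.documentKey);
});
```

A document-based split works the same way, with the $match filtering on documentKey instead of operationType.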
Scaling out the Change Stream watcher/listener
2023-02-27T11:38:04.114Z
Scaling out the Change Stream watcher/listener
1,521
null
[]
[ { "code": "", "text": "Hello,\nIs it possible to run a case-insensitive search using the Keyword analyzer?Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "keywordlowercase", "text": "This can be accomplished with a custom analyzer defined to use the keyword tokenizer and the lowercase token filter. The keyword analyzer is hard coded to be case-sensitive, without lowercasing.", "username": "Erik_Hatcher" }, { "code": "", "text": "Thanks Erik. Will the lowercase token filter allow for searches that have mixed case terms such seARch, TesT etc?", "username": "Prasad_Kini" }, { "code": "", "text": "Yes, mixed case is fine with keyword+lowercase custom analyzer. All strings will be indexed as lowercased, and then during querying they’ll be lowercased behind the scenes for matching.", "username": "Erik_Hatcher" }, { "code": "", "text": "Thanks Erik.On the same topic, I am unable to find any option on the Atlas portal to define custom analyzers. Is API the only way to do add/update/delete custom analyzers?", "username": "Prasad_Kini" }, { "code": "", "text": "Custom analyzers requiring defining in JSON, which can be done in the Atlas Search UI: Edit Index Definition with JSON Editor.", "username": "Erik_Hatcher" }, { "code": "", "text": "Thanks Erik.I was able to create a custom analyzer and use it as well. It seems that custom analyzers when used mandates adding “allowAnalyzedField” to the query. This was not required with the standard analyzers. Is this expected behavior?", "username": "Prasad_Kini" } ]
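An index definition implementing the keyword + lowercase combination Erik describes might look like this in the Atlas Search JSON editor; the field name (sku) and the analyzer name are placeholders:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "sku": {
        "type": "string",
        "analyzer": "keywordLowercase"
      }
    }
  },
  "analyzers": [
    {
      "name": "keywordLowercase",
      "charFilters": [],
      "tokenizer": { "type": "keyword" },
      "tokenFilters": [{ "type": "lowercase" }]
    }
  ]
}
```

With this, terms such as "seARch" and "TesT" are indexed and queried as "search" and "test", so whole-string matching becomes case-insensitive.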
Atlas Case Insensitive Search
2023-03-02T16:19:21.536Z
Atlas Case Insensitive Search
797
null
[ "c-driver" ]
[ { "code": "bson_iter_utf8bson_tbson_t#include <stdio.h>\n#include <mongoc/mongoc.h>\n\nint main (int argc, char** argv) {\n mongoc_init ();\n\n char* string;\n bson_t* document = bson_new ();\n BSON_APPEND_UTF8 (document, \"key\", \"test_string_123\");\n bson_iter_t iter;\n if (bson_iter_init_find (&iter, document, \"key\")) {\n string = (char*) bson_iter_utf8 (&iter, NULL);\n printf (\"before destroy: %s\\n\", string);\n }\n bson_destroy (document);\n printf (\"after destroy: %s\\n\", string);\n\n mongoc_cleanup ();\n\n return 0;\n}\n$ gcc -o exe/test tmp/test.c -I/usr/include/libbson-1.0 -I/usr/include/libmongoc-1.0 -lmongoc-1.0 -lbson-1.0 && exe/test\n\nbefore destroy: test_string_123\nafter destroy: test_string_123\n$ valgrind --track-origins=yes --leak-check=full --show-leak-kinds=all exe/test\n\n==441== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info\n==441== Command: exe/test\n==441== \nbefore destroy: test_string_123\n==441== Invalid read of size 1\n==441== at 0x484ED16: strlen (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)\n==441== by 0x49CDDB0: __vfprintf_internal (vfprintf-internal.c:1517)\n==441== by 0x49B781E: printf (printf.c:33)\n==441== by 0x109331: main (in /hb/exe/test)\n==441== Address 0x74ef835 is 21 bytes inside a block of size 128 free'd\n==441== at 0x484B27F: free (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)\n==441== by 0x109315: main (in /hb/exe/test)\n==441== Block was alloc'd at\n==441== at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)\n==441== by 0x49353A2: bson_malloc (in /usr/lib/x86_64-linux-gnu/libbson-1.0.so.0.0.0)\n==441== by 0x492CEE1: bson_new (in /usr/lib/x86_64-linux-gnu/libbson-1.0.so.0.0.0)\n==441== by 0x109281: main (in /hb/exe/test)\n[...]\nafter destroy: test_string_123\n==441== \n==441== HEAP SUMMARY:\n==441== in use at exit: 0 bytes in 0 blocks\n==441== total heap usage: 8,735 allocs, 8,735 frees, 1,371,850 bytes allocated\n==441== \n==441== All heap blocks were freed -- no leaks are possible\n==441== \nbson_tbson_iter_dup_utf8", "text": "Hi everyone! I’d like to understand better the char* returned by the function bson_iter_utf8.\nThe documentation says “returns a UTF-8 encoded string that has not been modified or freed”. But from my test it looks like that this string is just a pointer to the bson_t buffer, so it is valid only as long as the bson_t buffer is valid, is that correct?I’m doing this simple test here:If I compile and run it, it seems to correctly print all:But if I run it with valgrind I get this (shortened):So the question is: if I need to use this string even after I dispatched the bson_t (i.e. when iterating on a cursor) do I have to make a copy of the string using bson_iter_dup_utf8?Thank you for your help!\nHave a nice day!", "username": "Francesco_Ballardin" }, { "code": "bson_tbson_tbson_t", "text": "But from my test it looks like that this string is just a pointer to the bson_t buffer, so it is valid only as long as the bson_t buffer is valid, is that correct?Yes. That is correct. The lifetime of the returned string depends on the the lifetime of the bson_t.", "username": "Kevin_Albertson" }, { "code": "", "text": "Hi Kevin, thanks for your reply and clarification.\nHave a nice weekend!", "username": "Francesco_Ballardin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[libbson] function `bson_iter_utf8` memory management
2023-03-03T15:53:31.924Z
[libbson] function `bson_iter_utf8` memory management
1,092
null
[]
[ { "code": "C:\\Users\\USER>mongod\n{\"t\":{\"$date\":\"2022-07-25T22:34:24.352+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:24.354+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.219+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.221+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.221+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.221+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.221+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.222+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":17136,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"DESKTOP-CVKJN9O\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.222+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.222+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.0\",\"gitVersion\":\"e61bf27c2f6a83fed36e5a13c008a32d563babe2\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.223+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19043)\"}}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.223+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.227+05:30\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22271, \"ctx\":\"initandlisten\",\"msg\":\"Detected unclean shutdown - Lock file is not empty\",\"attr\":{\"lockFile\":\"C:\\\\data\\\\db\\\\mongod.lock\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.227+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, 
\"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"C:/data/db/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.227+05:30\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22302, \"ctx\":\"initandlisten\",\"msg\":\"Recovering data from the last clean checkpoint.\"}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.227+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7643M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.349+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":121}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.349+05:30\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.350+05:30\"},\"s\":\"I\", \"c\":\"WT\", \"id\":4366408, \"ctx\":\"initandlisten\",\"msg\":\"No table logging settings modifications are required for existing WiredTiger tables\",\"attr\":{\"loggingEnabled\":true}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.352+05:30\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.352+05:30\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. 
If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.354+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.354+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.354+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.354+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.454+05:30\"},\"s\":\"W\", \"c\":\"FTDC\", \"id\":23718, \"ctx\":\"initandlisten\",\"msg\":\"Failed to initialize Performance Counters for FTDC\",\"attr\":{\"error\":{\"code\":179,\"codeName\":\"WindowsPdhError\",\"errmsg\":\"PdhAddEnglishCounterW failed with 'The specified object was not found on the computer.'\"}}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.454+05:30\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.456+05:30\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.456+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.457+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:25.457+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2022-07-25T22:34:26.008+05:30\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20631, \"ctx\":\"ftdc\",\"msg\":\"Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost\",\"attr\":{\"error\":{\"code\":0,\"codeName\":\"OK\"}}}\n", "text": "", "username": "Zishaan_chem" }, { "code": "", "text": "getting this error when i try to run", "username": "Zishaan_chem" }, { "code": "", "text": "I do not see any error in what you shared.Errors will be marked with \"s\":\"E\", you have informative messages like{“t”:{\"$date\":“2022-07-25T22:34:25.223+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{}}}and some 
warning messages like:{“t”:{\"$date\":“2022-07-25T22:34:25.227+05:30”},“s”:“W”, “c”:“STORAGE”, “id”:22271, “ctx”:“initandlisten”,“msg”:“Detected unclean shutdown - Lock file is not empty”,“attr”:{“lockFile”:“C:\\data\\db\\mongod.lock”}}Have you tried to connect with Compass, mongosh or your application? It should work. The main issue with your setup is that you start mongod manually rather than as a service. When started as a service, the mongod process is cleanly terminated. Manually, you get “Detected unclean shutdown - Lock file is not empty” when you forget to shut it down manually.", "username": "steevej" }, { "code": "", "text": "Same issue here, help pls", "username": "ZUSL" } ]
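To avoid the unclean-shutdown warning when running mongod by hand, stop it cleanly from mongosh instead of just closing the window. A short sketch; the Windows service name shown is the installer's default and may differ:

```js
// In mongosh, connected to the manually started mongod:
use admin
db.shutdownServer()   // flushes data, clears mongod.lock and exits cleanly

// If mongod is installed as a Windows service (default service name "MongoDB"),
// start/stop it through the service manager instead:
//   net start MongoDB
//   net stop MongoDB
```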
MongoDB doesn't start after unclean shutdown
2022-07-25T17:06:56.206Z
MongoDB doesn&rsquo;t start after unclean shutdown
4,978
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "", "text": "We changed our aggregation code in our Node application and deployed it; the new aggregation is applied and works for some time, but after about a day the same API call starts returning results from the old aggregation on the shard. What could cause this? For example: we have 3 replica sets. We deployed our API yesterday to check the keys examined in MongoDB; at that time it worked fine, but today it is showing the old code, which no longer exists.", "username": "Girish_Kumar2" }, { "code": "", "text": "Where are you seeing this happen? I think a support case will be needed, there’s not enough info here", "username": "Andrew_Davidson" } ]
Mongo db primary shard calling old aggregation when api get hits
2023-02-28T12:52:45.670Z
Mongo db primary shard calling old aggregation when api get hits
511
null
[ "queries" ]
[ { "code": "const BoardSchema = Schema({\n\tuserId: {\n\t\ttype: Schema.Types.ObjectId,\n\t\tref: 'User',\n\t\trequired: true,\n\t},\n\tboardName: {\n\t\ttype: String,\n\t\trequired: [true, 'Board name is required.'],\n\t},\n\tcolumns: [\n\t\t{\n\t\t\ttype: Schema.Types.ObjectId,\n\t\t\tref: 'Column',\n\t\t\trequired: true,\n\t\t},\n\t],\n})\n\nconst ColumnSchema = Schema({\n\tcolumnName: {\n\t\ttype: String,\n\t\trequired: [true, 'Column name is required.'],\n\t},\n\ttasks: [\n\t\t{\n\t\t\ttype: Schema.Types.ObjectId,\n\t\t\tref: 'Task',\n\t\t},\n\t],\n\tboard: {\n\t\ttype: Schema.Types.ObjectId,\n\t\tref: 'Board',\n\t\trequired: true,\n\t},\n});\nconst TaskSchema = Schema({\n\ttitle: {\n\t\ttype: String,\n\t\trequired: true,\n\t},\n\tdescription: {\n\t\ttype: String,\n\t\trequired: true,\n\t},\n\tstatus: {\n\t\ttype: String,\n\t\trequired: true,\n\t},\n\tsubtasks: [\n\t\t{\n\t\t\ttype: Schema.Types.ObjectId,\n\t\t\tref: 'Subtask',\n\t\t},\n\t],\n\tcolumn: {\n\t\ttype: Schema.Types.ObjectId,\n\t\tref: 'Column',\n\t\trequired: true,\n\t},\n});\nconst SubtaskSchema = Schema({\n\ttitle: {\n\t\ttype: String,\n\t\trequired: [true, 'Subtask title is required.'],\n\t},\n\tisCompleted: {\n\t\ttype: Boolean,\n\t\trequired: true,\n\t\tdefault: false,\n\t},\n\ttask: {\n\t\ttype: Schema.Types.ObjectId,\n\t\tref: 'Task',\n\t\trequired: true,\n\t},\n});\nconst getUserBoards = async (req = request, res = response) => {\n\tconst userId = req.user._id;\n\ttry {\n\t\tconst userBoards = await Board.find({ userId }).populate({\n\t\t\tpath: 'columns',\n\t\t\tpopulate: {\n\t\t\t\tpath: 'tasks',\n\t\t\t\tpopulate: {\n\t\t\t\t\tpath: 'subtasks',\n\t\t\t\t},\n\t\t\t},\n\t\t});\n\t\tres.json({\n\t\t\tok: true,\n\t\t\tuserBoards,\n\t\t});\n\t} catch (error) {\n\t\tres.json({\n\t\t\tok: false,\n\t\t\tmsg: `Some error happened: ${error}`,\n\t\t});\n\t}\n};\n", "text": "I’m a newbie in mongodb and I’m trying to create a kanban task management, so when the user logs in he can fetch all its boards and populate it each board with its corresponding data. Here are my schemas :\n´´´´´´\nAnd this is what I’m trying to execute but it brings me an empty array of columns: //! Get boards by user\n´´´´´´I’m doing it this way to handle easily the CRUD operations for each schema, but if anybody has a better idea to handle this please provide me some information.", "username": "Jaaciel_Briseno_Espinoza" }, { "code": "", "text": "Hello @Jaaciel_Briseno_Espinoza, Welcome to the MongoDB community forum,The simple principle of the document database is “Data that is accessed together should be stored together”.Try to normalize your schemas and reduce them to one or two collections, and avoid populating/joining to another collection and it will improve the query.You can refer to the MongoDB available resources for schema design,Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!Developer Center: MongoDB Developer Center\nFree University Courses: MongoDB Courses and Trainings | MongoDB University", "username": "turivishal" }, { "code": "", "text": "Thanks for taking the time to answer! I used to have just one schema (the board schema) but i was getting getting some problems for example updating a single property of the board. (e.g. toggling the isCompleted property of the subtasks) so that’s why I split my schemas to perform CRUD operations in specific properties .", "username": "Jaaciel_Briseno_Espinoza" } ]
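If the board were modelled as a single document with embedded columns, tasks and subtasks (the direction turivishal suggests), a single property can still be updated in place with filtered positional operators, so splitting into four collections is not required just for that. A sketch assuming each embedded column/task/subtask keeps its own _id (Mongoose adds these to subdocuments by default); the id variables are placeholders:

```js
await Board.updateOne(
  { _id: boardId },
  { $set: { "columns.$[c].tasks.$[t].subtasks.$[s].isCompleted": true } },
  {
    arrayFilters: [
      { "c._id": columnId },  // which column
      { "t._id": taskId },    // which task in that column
      { "s._id": subtaskId }, // which subtask to toggle
    ],
  }
);
```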
Populating nested documents
2023-03-03T02:00:44.337Z
Populating nested documents
685
null
[]
[ { "code": "", "text": "Hi, I am building a mobile app using realm data sync, I have a QA environment/app service with development mode on. I’m planning on creating a production app service pointing at it’s own cluster with development mode off (as advised). So I’ll have schema changes going into QA automatically from the app created objects, my plan was to then find an automated way of moving schema changes up to production ahead of a mobile release (non breaking changes). However, I’m struggling to find a way of doing this in an automated way with either the realm cli or atlas cli.What are the best practises for achieving automated schema migrations between environments please?", "username": "Steven_Wilson1" }, { "code": "", "text": "Hi, I think that this docs page explains some of the best practices better than I could summarize here: https://www.mongodb.com/docs/atlas/app-services/apps/cicd/Let me know if you have any other questions though about anything specific to sync ", "username": "Tyler_Kaye" } ]
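The CI/CD approach in the linked docs boils down to exporting the QA app's configuration and pushing that same configuration to the production app with realm-cli, which can be scripted ahead of a mobile release. A rough sketch with app IDs as placeholders (check realm-cli --help for the exact flags of your CLI version):

```sh
# after authenticating realm-cli with a project API key:
realm-cli pull --remote="<qa-app-id>"     # exports the QA app config, including the synced schema
cd <exported-app-directory>
realm-cli push --remote="<prod-app-id>"   # applies the same (non-breaking) config to production
```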
Moving Schema Changes Between Environments when Using Data Sync
2023-03-03T15:56:39.928Z
Moving Schema Changes Between Environments when Using Data Sync
640
null
[ "aggregation", "node-js", "mongoose-odm" ]
[ { "code": "SwapModel.aggregate([\n {\n $match: {\n organisationId: mongoose.Types.ObjectId(organisationId),\n matchId: null,\n matchStatus: 0,\n offers: {\n $elemMatch: {\n from: { $lte: new Date(from) },\n to: { $gte: new Date(to) },\n locations: { $elemMatch: { $eq: location } },\n types: { $elemMatch: { $eq: type } },\n },\n },\n//problem is HERE\n $or: {\n $map: {\n input: \"$offers\",\n as: \"offer\",\n in: {\n from: { $gte: new Date(\"$$offer.from\") },\n to: { $lte: new Date(\"$$offer.to\") },\n location: { $in: \"$$offer.locations\" },\n type: { $in: \"$$offer.types\" },\n },\n },\n },\n },\n },\n { ...swapUserLookup },\n { $unwind: \"$matchedUser\" },\n { $sort: { from: 1, to: 1 } },\n]);\n$match[{\n _id: ObjectId(\"507f1f77bcf86cd799439011\"),\n from: ISODate(\"2023-01-21T06:30:00.000Z\"),\n to: ISODate(\"2023-01-21T18:30:00.000Z\"),\n matchStatus: 0,\n matchId: null,\n userId: ObjectId(\"ddbb8f3c59cf13467cbd6a532\"),\n organisationId: ObjectId(\"246afaf417be1cfdcf55792be\"),\n location: \"Chertsey\",\n type: \"DCA\",\n offers: [{\n from: ISODate(\"2023-01-23T05:00:00.000Z\"),\n to: ISODate(\"2023-01-24T07:00:00.000Z\"),\n locations: [\"Chertsey\", \"Walton\"],\n types: [\"DCA\", \"SRV\"],\n }]\n}, {\n _id: ObjectId(\"21575faf348660e8960c0d931\"),\n from: ISODate(\"2023-01-23T06:30:00.000Z\"),\n to: ISODate(\"2023-01-23T18:30:00.000Z\"),\n matchStatus: 0,\n matchId: null,\n userId: ObjectId(\"d6f10351dd8cf3462e3867f56\"),\n organisationId: ObjectId(\"246afaf417be1cfdcf55792be\"),\n location: \"Chertsey\",\n type: \"DCA\",\n offers: [{\n from: ISODate(\"2023-01-21T05:00:00.000Z\"),\n to: ISODate(\"2023-01-21T07:00:00.000Z\"),\n locations: [\"Chertsey\", \"Walton\"],\n types: [\"DCA\", \"SRV\"],\n }]\n}]\n\n\nI want the $or to match all documents that have the corresponding from/to/location/type as the current document - the idea is two shifts that could be swapped\n\nIf the offers are known (passed as an array to the function calling `aggregate`), I can do this with:\n\n$or: offers.map((x) => ({\n from: { $gte: new Date(x.from) },\n to: { $lte: new Date(x.to) },\n location: { $in: x.locations },\n type: { $in: x.types },\n }))\noffers$offers", "text": "Hi all, hoping someone can help as I am truly stuck!I have this queryI’m trying to use the results of the $match document to generate an array for $or. My data looks like this:BUT I want to be able to do this in an aggregation pipeline when the offers will only be known from the current document, $offersIs this possible? I’ve tried $in, $map, $lookup, $filter, $getField but can’t get it right and can’t get anything from Google as it thinks I want $in (which is the opposite of what I need).I’m pretty new to MongoDB and am probably approaching this completely wrong but I’d really appreciate any help!", "username": "Laurence_Summers" }, { "code": "{ $match : { /* your current $match */ } } ,\n{ $unwind : \"$offers\" } ,\n{ $lookup : {\n from : /* SwapModel collection */ ,\n let : {\n from : \"$$offers.from\" ,\n to: \"$$offers.to\" ,\n location : \"$$offers.locations\" ,\n type : \"$$offers.types\" ,\n }\n pipeline : [\n { $match : { $expr : $and : [\n { $gte : [ \"$from\" , \"$$from\" } } ,\n { $lte : [ \"$to\" , \"$$to\" } } ,\n { $in : [ \"$locations\" , \"$$location\" } } ,\n { $in : [ \"$types\" , \"$$type\" } } ,\n ] } }\n ]\n} }\n", "text": "It looks like you have some misunderstanding about an aggregation pipeline.All the stages of a pipeline are executed on the server. 
Your offers.map((x) …) JS code cannot be executed in between stages (unless you split your pipeline in 2 and do 2 database access).If you want to find other documents within the same collection based on some documents you will need to do a $lookup. Your $match selects some documents, the the following $lookup will use what it is your in: as the $match stage of the internal lookup. This should look like:I am not too sure about $in above but it is a starting point.", "username": "steevej" }, { "code": "Argument passed in must be a string of 12 bytes or a string of 24 hex characters\nlet : {\n from : \"$offers.from\" ,\n to: \"$offers.to\" ,\n location : \"$offers.locations\" ,\n type : \"$offers.types\" ,\n }\n{ $eq : [ \"$$location\" , \"$location ] }\nand\n{ $eq : [ $$type\" , \"$type\" ] }\n", "text": "I try to test the pipeline with your sample documents and it looks like all of the ObjectId’s are wrong. They all generate the error:It looks like one hex character was added to each.So I used the string in order to test and found some issues with my original pipeline.The let of the $lookup should be replaced byThere was an extra $ sign to each.As suspected my $in arewrong in the first draft. They should be $eq since we $unwind", "username": "steevej" }, { "code": "module.exports.getSwapMatches = async ({\n from,\n to,\n location,\n type,\n offers,\n organisationId,\n}) =>\n SwapModel.aggregate([\n {\n $match: {\n organisationId: mongoose.Types.ObjectId(organisationId),\n matchId: null,\n matchStatus: 0,\n offers: {\n $elemMatch: {\n from: { $lte: new Date(from) },\n to: { $gte: new Date(to) },\n locations: { $elemMatch: { $eq: location } },\n types: { $elemMatch: { $eq: type } },\n },\n },\n $or: offers.map((x) => ({\n from: { $gte: new Date(x.from) },\n to: { $lte: new Date(x.to) },\n location: { $in: x.locations },\n type: { $in: x.types },\n })),\n },\n },\n { ...swapUserLookup },\n { $unwind: \"$matchedUser\" },\n { $sort: { from: 1, to: 1 } },\n ]);\n$or: [{\n from: {\n $gte: new Date(\"2023-01-23T06:30:00.000Z\")\n },\n to: {\n $lte: new Date(\"2023-01-23T18:30:00.000Z\")\n },\n location: {\n $in: [\"Chertsey\", \"Walton\"]\n },\n type: {\n $in: [\"DCA\", \"SRV\"]\n },\n },\n ...\n]\n", "text": "Thanks for taking the time to reply Steve!I do understand the basic principal of pipeline execution, I’m currently using the .map() to build the array used by $ using a passed argument offer or as I don’t know what the approach should be.This is the current working code:so as you can imagine the .map just returns an array to $or so it ends up likefor every item in the offers argumentWhat I want to learn is how to do this without passing an offers argument, just matching in the swaps collection and then for each finding the corresponding swaps", "username": "Laurence_Summers" }, { "code": "offers$offersoffers$offers$matchyour_current_match = {\n organisationId: mongoose.Types.ObjectId(organisationId),\n matchId: null,\n matchStatus: 0,\n offers: { $elemMatch: {\n from: { $lte: new Date(from) },\n to: { $gte: new Date(to) },\n locations: { $elemMatch: { $eq: location } },\n types: { $elemMatch: { $eq: type } },\n } }\n}\n{ $match : your_current_match } ,\n{ $unwind : \"$offers\" } ,\n{ $lookup : {\n from : /* SwapModel collection */ ,\n let : {\n from : \"$offers.from\" ,\n to: \"$offers.to\" ,\n location : \"$offers.locations\" ,\n type : \"$offers.types\" ,\n }\n pipeline : [\n { $match : { $expr : $and : [\n { $gte : [ \"$from\" , \"$$from\" } } ,\n { $lte : [ \"$to\" , \"$$to\" } } ,\n { 
$in : [ \"$$locations\" , \"$location\" } } ,\n { $in : [ \"$$type\" , \"$type\" } } ,\n ] } }\n ]\n} }\nmongosh> userId = ObjectId(\"ddbb8f3c59cf13467cbd6a532\")\nTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters\nmongosh> organisationId = ObjectId(\"246afaf417be1cfdcf55792be\")\nTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters\n", "text": "How to you obtain or compute theoffers argumentfrom your previous post it looks likethe offers will only be known from the current document, $offersYour last questionWhat I want to learn is how to do this without passing an offers argument, just matching in the swaps collection and then for each finding the corresponding swapslooks exactly the same asBUT I want to be able to do this in an aggregation pipeline when the offers will only be known from the current document, $offersAndI’m trying to use the results of the $match document to generate an array for $or.confirms thatyou want to find other documents within the same collection based on some documents (you $matched)And my answer will be the sameIf you want to find other documents within the same collection based on some documents you will need to do a $lookup. Your $match selects some documents, the the following $lookup will use what it is your in: as the $match stage of the internal lookup. This should look like:and the code will be the same (once the corrections I also posted are done)Have you tried the above solution?If it does not work please provide sample documents from both collections and the expected results. Please make sure your documents are usable. The previous documents you published are not usable. Some of the ObjectId are not correct. I get errors when I cut and paste into mongosh:", "username": "steevej" }, { "code": "", "text": "Sorry for my late reply I’ve been on shiftCurrently this query is run knowing the offers. A match is done, then for each result the query in my inital answer is run to find the matching swaps. The goal here to to do this in one query.Here’s an example of a mismatch\nimage634×792 36.7 KB\nThe returned document is for 2023-01-15, and the offers array starts on the 2023-01-17. But the returned match is on the 2023-01-14. This means the returned document can’t possibly swap with the matched document, because the from and to of the matched document aren’t in any offers of the returned document.Does this diagram help explain it at all? I’m not doing a good job of it\nimage1280×720 93.1 KB\n", "username": "Laurence_Summers" }, { "code": "", "text": "This is my last reply in this thread.Sorry but you keep mentioningThe goal here to to do this in one query.And I have repeatedyou will need to do a $lookupmy answer will be the sameand I have supplied code to try and wroteHave you tried the above solution?I have askedplease provide sample documents from both collections and the expected results. Please make sure your documents are usable. 
The previous documents you published are not usable.Screenshots that shows an image of documents is even less usable than documents with wrong ObjectId.Despite writing that you need to do $lookup you stillrun knowing the offersand do multiple accesses to the database.If your $gte/$lte with dates gives you wrong results then\n1 - the order of the parameters are wrong\n2 - the operator is wrong\n3 - the type of data differs between the parametersI really cannot help any further.", "username": "steevej" }, { "code": "db.swaps.aggregate([\n {\n $match: {/*find some documents*/},\n },\n {\n $unwind: \"$offers\",\n },\n {\n $lookup: {\n from: \"swaps\",\n as: \"matches\",\n let: {\n parentId: \"$_id\",\n parentOrganisationId: \"$organisationId\",\n parentUserId: \"$userId\",\n parentLocations: \"$offers.locations\",\n parentTypes: \"$offers.types\",\n parentOffersFrom: \"$offers.from\",\n parentFrom: \"$from\",\n parentTo: \"$to\",\n parentOffersTo: \"$offers.to\",\n parentLocation: \"$location\",\n parentType: \"$type\",\n },\n pipeline: [\n {\n $match: {\n matchStatus: 0,\n matchId: null,\n $expr: {\n $and: [\n {\n $ne: [\"$_id\", \"$$parentId\"],\n },\n {\n $ne: [\"$userId\", \"$$parentUserId\"],\n },\n {\n $eq: [\n \"$organisationId\",\n \"$$parentOrganisationId\",\n ],\n },\n {\n $in: [\"$location\", \"$$parentLocations\"],\n },\n {\n $in: [\"$type\", \"$$parentTypes\"],\n },\n {\n $lte: [\"$$parentOffersFrom\", \"$from\"],\n },\n {\n $gte: [\"$$parentOffersTo\", \"$to\"],\n },\n {\n $anyElementTrue: {\n $map: {\n input: \"$offers\",\n as: \"offer\",\n in: {\n $and: [\n {\n $in: [\n \"$$parentLocation\",\n \"$$offer.locations\",\n ],\n },\n {\n $in: [\n \"$$parentType\",\n \"$$offer.types\",\n ],\n },\n {\n $lte: [\n \"$$offer.from\",\n \"$$parentFrom\",\n ],\n },\n {\n $gte: [\n \"$$offer.to\",\n \"$$parentTo\",\n ],\n },\n ],\n },\n },\n },\n },\n ],\n },\n },\n },\n {\n $lookup: {\n from: \"users\",\n localField: \"userId\",\n foreignField: \"_id\",\n as: \"matchedUser\",\n },\n },\n {\n $set: {\n matchedUser: {\n $ifNull: [\n {\n $first: \"$matchedUser\",\n },\n null,\n ],\n },\n },\n },\n ],\n },\n },\n {\n $group: {\n _id: \"$_id\",\n doc: {\n $first: \"$$ROOT\",\n },\n matches: {\n $push: \"$matches\",\n },\n offers: {\n $push: \"$offers\",\n },\n },\n },\n {\n $set: {\n matches: {\n $reduce: {\n input: \"$matches\",\n initialValue: [],\n in: {\n $concatArrays: [\"$$value\", \"$$this\"],\n },\n },\n },\n },\n },\n {\n $replaceRoot: {\n newRoot: {\n $mergeObjects: [\n \"$doc\",\n {\n matches: \"$matches\",\n offers: \"$offers\",\n },\n ],\n },\n },\n },\n {\n $lookup: {\n from: \"users\",\n localField: \"userId\",\n foreignField: \"_id\",\n as: \"user\",\n },\n },\n {\n $set: {\n user: {\n $ifNull: [\n {\n $first: \"$user\",\n },\n null,\n ],\n },\n },\n },\n {\n $sort: {\n _id: 1,\n },\n },\n]);\n", "text": "Thanks for your time, I have been able to get the solution as below, I strongly suspect there is a better way to do this as I’m clearly not very experienced but it seems to be workingworking demo", "username": "Laurence_Summers" } ]
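For readers following the thread, steevej's $lookup outline has a few unbalanced brackets; a syntactically valid version of the core idea (a self-lookup on the swaps collection driven by each matched document's offers) is sketched below, using the field names from the sample documents. The poster's final pipeline above adds the remaining pieces, such as excluding the same document/user and checking the reverse direction:

```js
db.swaps.aggregate([
  { $match: { /* same filters as the original $match stage */ } },
  { $unwind: "$offers" },
  {
    $lookup: {
      from: "swaps",
      let: {
        from: "$offers.from",
        to: "$offers.to",
        locations: "$offers.locations",
        types: "$offers.types",
      },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                { $gte: ["$from", "$$from"] }, // candidate shift starts inside the offer window
                { $lte: ["$to", "$$to"] },     // ...and ends inside it
                { $in: ["$location", "$$locations"] },
                { $in: ["$type", "$$types"] },
              ],
            },
          },
        },
      ],
      as: "matches",
    },
  },
])
```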
Generating dynamic $or using pipeline variable?
2023-01-20T16:03:55.424Z
Generating dynamic $or using pipeline variable?
1,455
https://www.mongodb.com/…4_2_1024x512.png
[ "node-js" ]
[ { "code": "", "text": "I’ve been struggling with connecting a db to an expressjs server.\nI googled for a solution and found this official doc: Learn how to add MongoDB Atlas as the data store for your applications, by creating NodeJS and Express Web REST API | Complete Tutorial\nThe instructions are good, but the js code is not specialized for nodejs (it’s using import, so I thought it was meant for web coding =D)\nAny idea?", "username": "Nghiem_Gia_B_o" }, { "code": "", "text": "Node.js has supported import for many years now. What version of nodejs are you working with?", "username": "Fredrik_Fager1" } ]
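For completeness, a minimal Express + MongoDB driver setup using import does work on any recent Node.js, as Fredrik says; the connection string and the db/collection names below are placeholders:

```js
// server.mjs (or set "type": "module" in package.json)
import express from "express";
import { MongoClient } from "mongodb";

const uri = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net";
const client = new MongoClient(uri);
const app = express();

app.get("/items", async (req, res) => {
  const items = await client.db("mydb").collection("items").find().limit(20).toArray();
  res.json(items);
});

await client.connect(); // top-level await is fine in an ES module
app.listen(3000, () => console.log("listening on http://localhost:3000"));
```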
How to connect to MongoDB using Nodejs, Expressjs
2023-03-02T22:28:07.006Z
How to connect to MongoDB using Nodejs, Expressjs
411
null
[ "java", "compass", "mongodb-shell", "containers", "spring-data-odm" ]
[ { "code": "obiwan-rest> db.company.getIndexes();\n[ { v: 2, key: { _id: 1 }, name: '_id_' } ]\nobiwan-rest> db.customer.createIndex({\"customerNumber\":1},{unique:true});\ncustomerNumber_1\nobiwan-rest> db.company.getIndexes();\n[ { v: 2, key: { _id: 1 }, name: '_id_' } ]\nobiwan-rest> db.company.getIndexes();\n[ { v: 2, key: { _id: 1 }, name: '_id_' } ]\nobiwan-rest> db.customer.createIndex({\"customerNumber\":1});\nMongoServerError: An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { customerNumber: 1 }, name: \"customerNumber_1\" }, existing index: { v: 2, unique: true, key: { customerNumber: 1 }, name: \"customerNumber_1\" }\n", "text": "I’m having a very strange issue, and I suspect that I must be missing something obvious, but I can’t figure out what.I’m running 5.0.15 in a Docker Desktop k8s cluster, and I can store and retrieve data as expected. However, there are no indexes defined (I’m using Spring Data’s @Indexed annotation). At first I thought that there might be a Spring issue, but I finally tried in mongosh, and I can’t create indexes there, so I think that’s the main problem.When I run this in mongosh:it seems that createIndex() succeeded, but the created index is not shown. In Compass, I can’t see it either.When I create the same index in Compass, it does show up. And if I try to create a similar index, for example, by leaving out the unique option, I do get an error message:I’m not sure what’s happening here. Is there something missing on the server side? Am I using createIndex() wrong?The collection has one document in it, so I would expect index creation to be instantaneous. But even waiting 10 minutes does not make the index show up, so I don’t think it’s anything to do with background index creation.", "username": "Stefan_Bethke" }, { "code": "obiwan-rest> db.company.getIndexes();\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { customerNumber: 1 },\n name: 'customerNumber_1',\n unique: true,\n sparse: false\n }\n]\n", "text": "This is what the index looks like after I’ve created it in Compass:This is exactly what I would expect from the shell command, so I’m not sure what Compass does differently.", "username": "Stefan_Bethke" }, { "code": "obiwan-rest> db.customer.dropIndex(\"customerNumber_1\");\n{ nIndexesWas: 2, ok: 1 }\nobiwan-rest> db.company.getIndexes();\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { customerNumber: 1 },\n name: 'customerNumber_1',\n unique: true,\n sparse: false\n }\n]\n", "text": "If I try to drop the index I created in Compass, this happens:", "username": "Stefan_Bethke" }, { "code": "db.customer.createIndex({\"customerNumber\":1},{unique:true});\ncustomerNumber_1\ndb.company.getIndexes();db.customer.createIndex({\"customerNumber\":1});MongoServerError: An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { customerNumber: 1 }, name: \"customerNumber_1\" }, existing index: { v: 2, unique: true, key: { customerNumber: 1 }, name: \"customerNumber_1\" } name: 'customerNumber_1',obiwan-rest> db.customer.dropIndex(\"customerNumber_1\");obiwan-rest> db.company.getIndexes();", "text": "There is something I do not understand in what your are doing. 
It does not seem a mistake since you do it over and over.You are wrongly creating and removing the index in one collection and testing the existence of the index in another collection. This seems like a clear misunderstanding of what are indexes.You create in collection customerbut check in collection company.And then again check in companydb.company.getIndexes();create in customerdb.customer.createIndex({\"customerNumber\":1});You get the errorMongoServerError: An existing index has the same name as the requested index. When index names are not specified, they are auto generated and can cause conflicts. Please refer to our documentation. Requested index: { v: 2, key: { customerNumber: 1 }, name: \"customerNumber_1\" }, existing index: { v: 2, unique: true, key: { customerNumber: 1 }, name: \"customerNumber_1\" }because you created it in customer before.However in Compass you seem to created correctly in company as shown by name: 'customerNumber_1',And then same think again, you are dropping in customerobiwan-rest> db.customer.dropIndex(\"customerNumber_1\");but checking in companyobiwan-rest> db.company.getIndexes();", "username": "steevej" }, { "code": "", "text": "You are absolutely correct, I was copy/pasting commands and obivously had gotten confused about the collection names.", "username": "Stefan_Bethke" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
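The takeaway from the exchange is simply to create, inspect and drop the index on the same collection; repeating the session with matching collection names gives the expected result (output abbreviated, matching the outputs shown above):

```js
obiwan-rest> db.customer.createIndex({ customerNumber: 1 }, { unique: true });
customerNumber_1
obiwan-rest> db.customer.getIndexes();
[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  { v: 2, key: { customerNumber: 1 }, name: 'customerNumber_1', unique: true }
]
obiwan-rest> db.customer.dropIndex("customerNumber_1");
{ nIndexesWas: 2, ok: 1 }
```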
createIndex() succeeds but index doesn't show with showIndexes()
2023-03-03T09:54:48.879Z
createIndex() succeeds but index doesn&rsquo;t show with showIndexes()
1,247
https://www.mongodb.com/…2_2_1024x592.png
[ "react-native" ]
[ { "code": "const appConfig = {\n\n id: appId,\n\n timeout: 10000,\n\n};\n\nconst myapp = new Realm.App(appConfig)\n\nconst {currentUser} = myapp\n\nconsole.log(\"my app \", myapp.currentUser);\n\nreturn (\n\n <NavigationContainer initialRouteName=\"Auth\">\n\n <Stack.Navigator initialRouteName=\"Auth\">\n\n {\n\n Object.keys(USER.user).length == 0 || !myapp.currentUser ?\n\n \n\n (<Stack.Screen name=\"Auth\" component={AuthNavigator} options={screenCofigWithoutHeader} />)\n\n :\n\n (<Stack.Screen name=\"Home\" component={MyTabs} options={screenCofigWithoutHeader} />)\n\n }\n\n </Stack.Navigator>\n\n </NavigationContainer>\n\n);\n", "text": "I have successfully login and do some crud operation in realm. Now I have login screen and the user enter email and password, the login is successfully but how to render MainNavigator component, I have already built my app other API and I am useing Redux libraries, but now I am testing realm login so how to re-render the component when I login with realm.\nNote is when I first login with realm it required user in realm config but got undefined MainNavigator,\nbut when I closed the app the app get render and I successfully go the dashbaord. So how to re-render the MainNavigator, I tried redux but still not getting the result.You can the MainNavigtore code belowconst appId = ‘MyApp Id’;\nCapture1137×658 27.4 KB\n", "username": "Zubair_Rajput" }, { "code": "**MainNavigator**MainNavigatorconst navigateToMain = () => {\n navigation.navigate('MainNavigator');\n};\nconst login = async () => {\n // Perform login with Realm\n // If login is successful, call navigateToMain\n navigateToMain();\n};\n**createStackNavigatorreact-navigation**MainNavigator**import { createStackNavigator } from 'react-navigation';\n\nconst MainNavigator = createStackNavigator({\n Main: {\n screen: MainScreen\n },\n Settings: {\n screen: SettingsScreen\n }\n});**createAppContainerreact-navigation**MainNavigator**import { createAppContainer } from 'react-navigation';\n\nconst AppContainer = createAppContainer(MainNavigator);\n**AppContainer**render() {\n return (\n <AppContainer />\n );\n}\n", "text": "To render a **MainNavigator** in React Native after a successful login with Realm, you can follow these steps:I hope this helps! Let me know if you have any additional questions.", "username": "Simran_Kumari1" }, { "code": "", "text": "Thanks for the help, but before your message I somehow manage.", "username": "Zubair_Rajput" } ]
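A common fix for the "login succeeds but the navigator only updates after an app restart" symptom is to keep the current user in React state (or use the providers/hooks from @realm/react) instead of reading myapp.currentUser once at render time. A rough sketch built on the question's own components (NavigationContainer, Stack, AuthNavigator, MyTabs are assumed to be imported as in the original code; the onLogIn prop and variable names are assumptions):

```js
import React, { useState } from "react";
import Realm from "realm";

const app = new Realm.App({ id: appId, timeout: 10000 });

export default function RootNavigator() {
  // Holding the user in state makes React re-render when login completes
  const [user, setUser] = useState(app.currentUser);

  const logIn = async (email, password) => {
    const credentials = Realm.Credentials.emailPassword(email, password);
    const loggedIn = await app.logIn(credentials);
    setUser(loggedIn); // switches the navigator from Auth to Home
  };

  return (
    <NavigationContainer>
      <Stack.Navigator>
        {user == null ? (
          <Stack.Screen name="Auth">
            {() => <AuthNavigator onLogIn={logIn} />}
          </Stack.Screen>
        ) : (
          <Stack.Screen name="Home" component={MyTabs} />
        )}
      </Stack.Navigator>
    </NavigationContainer>
  );
}
```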
How to render MainNavigator in React Native Realm Login
2022-04-25T07:12:32.565Z
How to render MainNavigator in React Native Realm Login
3,162
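A rough JavaScript sketch of the re-render approach discussed in the thread above: keep the logged-in user in React state so that a successful `app.logIn()` causes the navigator to switch screens immediately, rather than only after an app restart. The component names follow the thread; the `onLogin` prop, the import path, and the app id are illustrative assumptions, not code from the original app.

```javascript
import React, { useState } from "react";
import Realm from "realm";
import { MainNavigator, AuthNavigator } from "./navigation"; // assumed to exist in the app

const app = new Realm.App({ id: "<your-app-id>" }); // placeholder app id

export default function RootNavigator() {
  // Storing the user in React state means a successful login triggers a re-render.
  const [user, setUser] = useState(app.currentUser);

  const handleLogin = async (email, password) => {
    const credentials = Realm.Credentials.emailPassword(email, password);
    setUser(await app.logIn(credentials));
  };

  return user ? <MainNavigator /> : <AuthNavigator onLogin={handleLogin} />;
}
```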
null
[ "database-tools", "backup" ]
[ { "code": "mongodumpmongodumpmongodmongorestore", "text": "Hello there!I’m using Mongo Cloud Altas to host production and staging databases which are both M10 instances.I have written a node script that retrieves a subset of the production database, anonymises the user data and writes it to the staging database. Up until now this has been adequate for my needs.However, we’ve had some issues with migrations that are falling over with larger datasets so we want to be able to run them against a full production dataset on our staging server so that we can have a high degree of confidence in our migrations when we run them in production.Because we have scheduled backups for our production database we can run these against our staging database using the admin UI of the Atlas web interface. Aside from it being a little too easy to accidentally restore the production database, this works fairly well except that:Any backup made on staging after restoring from a production snapshot fails, creating a fallover snapshot which is 3x larger in size that the production snapshot used to restore the database.Restoring the data from the production snapshot means that the user data on staging isn’t anonymised.Ideally we would like to be able to copy the data from prod to staging on a regular interval (at say, midnight on a Sunday for example), so that we always have a decent set of data on staging for QA and testing purposes, ideally anonymised. My thinking behind this is to ensure we are GDPR complient (although I’m not 100% convinced this is necessary).I have investigated mongodump however I have concerns about running this against the live database due to performance concerns:When connected to a MongoDB instance, mongodump can adversely affect mongod performance. If your data is larger than system memory, the queries will push the working set out of memory, causing page faults.Since Atlas takes regular snapshots for us it would probably make sense to use these instead. One idea I had was write a cron task to download a snapshot, unarchive it and use mongorestore to restore the staging database with the production data. This seems reasonable, however the staging server would have to write the contents of the archive to disk which would use up quite a lot of space and potentially memory.Another thought would be to use the Atlas CLI to schedule a restore from prod to staging using a cron job, which also seems plausible. I am a little worried about developing such a script in case during development I accidentally restore the live database, but perhaps it’s worth the risk?So I guess my questions to the community are:Many thanks for your time!", "username": "Mike_Hayden" }, { "code": "", "text": "Hey Mike,Nice to meet you.This seems like a very interesting use case. I have a couple ideas about how you might be able to rearchitect this process to be a bit more full proof. Have you looked at our Atlas Data Federation and Atlas Data Lake offerings at all?If you’d be interested in chatting, put some time on my calendar here and I can step you through what I’m thinking to see if it’s a good fit. Calendly - Benjamin FlastBest,\nBen", "username": "Benjamin_Flast" }, { "code": "", "text": "Thanks! I’ve booked in a slot with you ", "username": "Mike_Hayden" }, { "code": "", "text": "I have a suggestion. Don’t take things like this “offline”. The community site gets no benefit from this type of transaction, where the problem is defined publicly, but the solution is procured/defined privately. 
):", "username": "Jay_Eno" }, { "code": "", "text": "Good point @Jay_Eno ! I was suggesting it as I thought a conversation would be easier to clarify some details. But let me follow up here with some of the things I was thinking about.Depending on a customers needs, they could created a “Federated Database Instance” and then use “$out to Atlas” with a scheduled trigger to copy data from the source cluster specified in the virtual collection to the target. https://www.mongodb.com/docs/atlas/data-federation/supported-unsupported/pipeline/out/Learn how to set up a continuous copy from MongoDB into an AWS S3 bucket in Parquet.Using the background aggregation option here for Data Federation will be key here so that the connection of the trigger closing does not cancel the query.Separately if consistency as of a specific point in time is important you could use our new Data Lake Service in order to create a consistent snapshot of your collection at a specific point in time based on your backups, and then run the $out to Atlas from that snapshot.Atlas Data Lake, MongoDB’s fully managed storage solution, is optimized for high-throughput analytical queries over large data sets, while maintaining the economics of cloud object storage.Best,\nBen", "username": "Benjamin_Flast" }, { "code": "", "text": "Hi! I’m willing to implement the same testing of production data on a staging database. Could you share please, what solution have you chosen in the end, if any? I believe it would be valuable for the community as well.", "username": "Gleb_Ignatev" } ]
Regularly transferring data from one database to another
2022-10-07T10:35:10.330Z
Regularly transferring data from one database to another
3,596
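A hedged sketch of the approach outlined in the thread above: a Federated Database Instance plus `$out` to Atlas, run on a schedule. This only works when connected to an Atlas Data Federation endpoint whose storage configuration maps the virtual `production` database to the source cluster; the project ID, cluster, database, and collection names are placeholders, and the anonymisation stage is purely illustrative.

```javascript
// Run against an Atlas Data Federation connection (e.g. from a scheduled job or trigger).
db.getSiblingDB("production").users.aggregate(
  [
    { $set: { email: "redacted", name: "redacted" } }, // illustrative anonymisation
    { $out: {
        atlas: {
          projectId: "<atlas-project-id>",
          clusterName: "StagingCluster",
          db: "staging",
          coll: "users"
        }
    } }
  ],
  // The "background aggregation" option mentioned above, so a dropped client
  // connection does not cancel the copy.
  { background: true }
);
```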
null
[ "api" ]
[ { "code": "curl --request PUT \\\n --url 'https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/processTypes/{PROCESS-TYPE}' \\\n --header 'Authorization: Basic {YOUR-ATLAS-API-KEY}' \\\n --header 'Content-Type: application/json' \\\n --data '{\n \"providerSettings\": {\n \"instanceSizeName\": \"{NEW-INSTANCE-SIZE}\"\n }\n }'\n", "text": "I’m currently using an M20 instance and I wanted to confirm if there’s an API that can trigger an immediate scaling process to upgrade to an M30 instance or to scale down to a smaller instance. I’ve reviewed the available APIs, but the one I found only enables auto-scaling, which is not what I need. Instead, I need an API that can directly initiate a scaling operation immediately without relying on auto-scalingI tired this but it’s not working", "username": "Vikas_Rathore" }, { "code": "", "text": "Hi @Vikas_Rathore and welcome to the MongoDB Community forum!!I’ve reviewed the available APIs, but the one I found only enables auto-scaling, which is not what I needThe documentation on Update Cluster Configuration would be helpful for the requirements mentioned.\nIn case of any error observed, please help us with the error message so we could help you further.Best Regards\nAasawari", "username": "Aasawari" } ]
Could someone provide me the documentation or api which I can use to scale up the instance immediately
2023-02-28T13:26:35.290Z
Could someone provide me the documentation or api which I can use to scale up the instance immediately
1,267
null
[ "connecting" ]
[ { "code": "", "text": "Good dayI’ve been trying to make a connection between MongoDB cluster and ETL tool Talend for Big Data, but when I checked the connection a error happendThe error is:MongoTimeoutException: Time out after 30000 ms while waiting to connect.The information I’ve to create the connection iscluster0.ivazw.mongodb.netI hope one of you help me to resolve this problem.Thanks!!", "username": "Analisis_Kapital" }, { "code": "", "text": "Welcome to the community!Can you connect to your mongodb using shell?\nHave you whitelisted your IP?Possible causes discussed in this threadhttps://community.talend.com/s/feed/0D53p00007vCrzdCAC", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for your reply,Yes, I connected using shell & I already checked the text on Talend Community, according to database administrator all ip’s can connect to the cluster, he told me that in IP configuration the address is 0.0.0.0/0 so this address allows any connection.", "username": "Analisis_Kapital" }, { "code": "", "text": "I think its too late to answer this, Hope this will be helpful for someone. If you are using talend 7.3 and a srv mongodb connection, then you need to check the following things.You need to use replica set, add the host url and port.\nyou need to enable the ssl option - Use ssl connection checkbox\nyou need to set Authentication type to negotiate and also need to set Authentication database as ‘admin’\nThen set username and password. You are good to connect to mongodb.", "username": "Ameenudheen_irshad" }, { "code": "", "text": "Thank you for posting the steps. It can always be useful though it may be late for someone .\nI’m using Talend 7.3. I tried the steps you mentioned above but it still doesn’t work for me. I tested the connection using mongosh on the server from where I’m connecting. The only missing step is the tSetKeystore, but I’m not sure how to generate the TrustStore file If it’s required in this case. Any idea?", "username": "PatOnCloud09" }, { "code": "", "text": "Did anyone find the solution to this issue ? I have been struggling with this for a couple of weeks .", "username": "Satyendra_Kumar" }, { "code": "", "text": "What error are you getting?\nDid you try the steps given above for srv type string?", "username": "Ramachandra_Tummala" } ]
Connect MongoDB with ETL Talend for Big Data
2021-03-22T18:09:35.028Z
Connect MongoDB with ETL Talend for Big Data
4,532
null
[ "data-modeling", "reference-pattern" ]
[ { "code": "user.model.ts{\n _id: ObjectId,\n email: string,\n name: string\n}\ntax.model.ts{\n _id: ObjectId,\n name: string\n rate: number\n}\nexpense.model.ts{\n creator: ObjectId,\n reviewer: ObjectId,\n splits: [{\n name: string,\n tax: ObjectId\n }]\n}\nextendedReferencesexpense.model.ts{\n creator: ObjectId,\n reviewer: ObjectId,\n splits: [{\n name: string,\n tax: ObjectId\n }],\n extendedReferences: {\n users: [{\n _id: ObjectId,\n email: string,\n }],\n taxes: [{\n _id: ObjectId,\n rate: number\n }],\n }\n}\nexpense.model.ts{\n creator: {\n _id: ObjectId,\n email: string\n },\n reviewer: {\n _id: ObjectId,\n email: string\n },\n splits: [{\n name: string,\n tax: {\n _id: ObjectId,\n rate: number\n }\n }]\n}\n{\n creator: 1337,\n reviewer: 1337, \n splits: [\n {\n name: \"1st Split\",\n tax: 1\n },\n {\n name: \"2nd Split\",\n tax: 2\n },\n {\n name: \"3rd Split\",\n tax: 1\n },\n ],\n extendedReferences: {\n users: [\n {\n _id: 1337,\n email: \"[email protected]\"\n }\n ],\n taxes: [\n {\n _id: 1,\n rate: 10\n },\n {\n _id: 2,\n rate: 20\n }\n ]\n }\n}\n", "text": "Hi all,I’m are currently implementing the extended reference pattern to increase query speed and remove lookups. I hope somebody can give me feedback regarding best practices about working w/ arrays.My model is something like this (simplified):user.model.tstax.model.tsexpense.model.tsI’m considering the following two approaches:Adding new object extendedReferences containing the extended reference dataexpense.model.tsReplacing ObjectId w/ the extended reference object (like explained in the blog post)expense.model.tsExample for an expense and how I want to store the data using the 1st approach:I’m considering storing a single extended reference separately because the same tax can be used several times in the splits. Which I hope would give me the following advantages:But I’m wondering if that 2nd point is true or premature optimization.\nFurthermore, the downside is application logic for making searches when accessing extended reference data and I’m assuming more complex pipelines when working w/ that data because some find and replace steps are required.What are the recommended best practices for this use case?", "username": "Steve" }, { "code": "user.model.tstax.model.tsexpense.model.ts$lookup", "text": "Hi @Steve,Welcome to the MongoDB Community forums I’m considering storing a single extended reference separately because the exact tax can be used several times in the splits. Which I hope would give me the following advantages:Yes, by including an extended reference to the data that would most frequently be looked up/JOINed, we save a step in processing. By embedding the extended reference data directly in each document, it can simplify your queries and make it easier to access the necessary data without using $lookup.Furthermore, to better understand your question, please share your common example query that you would be using without the extended reference pattern i.e., with the 3 initial collections user.model.ts, tax.model.ts, and expense.model.ts i.e., how would you do the queries using $lookup?Also, it would be helpful if you could provide more context and details about your specific use case here.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Working with arrays using the Extended Reference Pattern (ERP)
2023-02-18T08:50:34.350Z
Working with arrays using the Extended Reference Pattern (ERP)
1,000
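For comparison with the extended-reference variants above, this is roughly what the read would look like with `$lookup` against the three separate collections. The collection names `expenses`, `users`, and `taxes` are assumptions based on the model file names, and `expenseId` is a placeholder for the document being read.

```javascript
db.expenses.aggregate([
  { $match: { _id: expenseId } },
  { $lookup: { from: "users", localField: "creator",    foreignField: "_id", as: "creatorDoc" } },
  { $lookup: { from: "users", localField: "reviewer",   foreignField: "_id", as: "reviewerDoc" } },
  { $lookup: { from: "taxes", localField: "splits.tax", foreignField: "_id", as: "taxDocs" } },
  { $set: {
      creatorEmail:  { $first: "$creatorDoc.email" },
      reviewerEmail: { $first: "$reviewerDoc.email" }
  } },
  { $unset: ["creatorDoc", "reviewerDoc"] }
])
```

Embedding the extended references removes these three `$lookup` stages from the read path, at the cost of keeping the copied fields in sync on writes.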
null
[ "react-native" ]
[ { "code": "", "text": "Hi to cut to the chase the performance of our react native app is terrible and its down to how we are using realm.We are leaning heavily on adding listeners to collections to update the UX, and these listeners fire all too frequently, and with seemingly massive amounts of changes reported.After looking into it, I think this is due to my misunderstanding of how realm works WRT the schema and modifications to entities - Ive done some tests to confirm my suspicions, but Id really appreciate some clarity around the issue so that I can work out a proper solution.The main issue is that our schema has an entity type that is commonly related to a bunch of other entities. Ill give an example.lets say we have many Pets, Toys & Books. Also a couple of Childs.This relationship is common: Pet.child Toy.child Book.childSuch that, if I update a specific Child, I am likely to trigger an update in many Pets, Toys & Books that have a relationship with that ChildSo this is one source of inefficiency - am I right so far?If each item had a ‘foreign key’ type of reference to its Child owner, or the Child itself had a collection of owned items, then this disproportionate update would not occur?Second source of inefficiency is how our app creates/updates entities from the backend. We use a ‘PUT’ type of operation, rather than a ‘PATCH’. That is, the schema is almost identical to the structure of the entities coming from the backend. Our Pet API does return a Pet with a Child relationship. We know that something about the Pet has changed, but we dont know what - so we PUT the entire Pet and Child when we update the database, rather than writing only a specific attribute that has changed.example. Lets say the backend sends us a modified Pet , which is owned by John. The Pets colour has changed for some reason. We write the new Pet and Pet.child to the DB. Realm considers John has been changed because we are writing it, and now every Pet, Toy and Book owner by John is also changed, even though we just updated a single Pets colour.This is what I observe from our collection listeners firing and the nature of the changes they are reportingThe upshot is that whenever something changes it seems that many other types things are updated as a consequence, even though they havent changed.Lastly, another thing we are doing which Id like clarification about is when the backend does not supply the Child belonging to a Pet with the Pet, but only the Child’s id…To be clear, our example Schema looks like this:Pet {\nid: string\nname: string,\ncolour: string,\nowner: Child\n}Child {\nid: string,\nname: string\n}And in some circumstances we would recieve this from the backend:Pet.id = 3, Pet.name = fido, pet.colour=brown, owner.id = 9And we would write this to the DBPet = {\nid: 3,\nname: fido,\ncolour: brown,\nowner: {\nid:9\n}\n}which will write any changes to Pet, and simply maintain the 1:1 relationship with the Child , without writing any changes to the Child itself.In this situation, I believe that Realm still considers the Child to have been modified? Is this correct?phew, thats enough for now. I can anticipate that some comments will be, why is your schema like that? In part its because it mirrors the structure of entities coming to us from the backend, and … we didnt know any better. And also having the relationship in the Pet/Child/Toy is simple and useful. If you are dealing with a list of Books, finding the owner is easy. 
It seems natural to structure the schema this way. Thanks for any help, confirmation, or discussion this generates. Steve", "username": "Steven_Mathers" }, { "code": "", "text": "Still experimenting. I've noticed that if I load a managed entity from Realm, don't change anything, and immediately write it back, I get a single change recorded - one modification of that entity, as you would expect. However, our app doesn't work like that - it makes a REST call to our backend to sync any changes that have occurred recently. So we receive the changed entity, marshall it into a format compatible with our schema, and write it to the DB with a create/modified flag set to update it. When we do this, Realm considers that everything about the entity has changed, and that's when the massive amount of changes propagates through, due to the relationships. How should we approach this?", "username": "Steven_Mathers" } ]
How does realm determine if something has changed (has consequences for performance)
2023-03-02T23:12:01.713Z
How does realm determine if something has changed (has consequences for performance)
750
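One way to reduce the over-reporting described above is to compare incoming values field by field and only assign the ones that actually changed, instead of re-writing the whole object graph received from the backend. A Realm JS sketch using the Pet/Child example from the post; whether this removes the unwanted notifications in your exact setup is something to verify.

```javascript
function applyPetUpdate(realm, incoming) {
  realm.write(() => {
    const pet = realm.objectForPrimaryKey("Pet", incoming.id);
    if (!pet) {
      realm.create("Pet", incoming);               // genuinely new object
      return;
    }
    if (pet.colour !== incoming.colour) pet.colour = incoming.colour;
    if (pet.name !== incoming.name) pet.name = incoming.name;
    // Leave pet.owner untouched unless the owning Child really changed.
    if (incoming.owner && pet.owner?.id !== incoming.owner.id) {
      pet.owner = realm.objectForPrimaryKey("Child", incoming.owner.id);
    }
  });
}
```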
null
[ "queries" ]
[ { "code": "", "text": "can anyone help me?\nSELECT *, ROW_NUMBER() OVER (ORDER BY Id DESC) AS rn FROM tbl_transaction_history WHERE Username = ? AND Acc_name = ? AND DateAndTime = ? AND Remarks = ? AND Type = ‘Bet’ AND Payment_method = ‘Ending’ AND Remarks IN (‘1st Quarter’,‘2nd Quarter’,‘3rd Quarter’,‘4th Quarter’,‘Over Time’) ORDER BY Id DESCplease convert to mongodb?", "username": "Rannie_Prince_Marayag" }, { "code": "", "text": "Hey @Rannie_Prince_Marayag,Welcome to the MongoDB Community Forums! In order for us to better generate a query, could you please share below details:Regards,\nSatyam", "username": "Satyam" } ]
Row_number over ()
2023-02-22T09:28:36.478Z
Row_number over ()
629
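A possible MongoDB translation of the SQL above, assuming MongoDB 5.0+ (for `$setWindowFields` / `$documentNumber`) and that the field names match the SQL; the `?` placeholders become driver-side variables.

```javascript
db.tbl_transaction_history.aggregate([
  { $match: {
      Username: username, Acc_name: accName, DateAndTime: dateAndTime, // placeholders for the "?" parameters
      Type: "Bet", Payment_method: "Ending",
      Remarks: { $in: ["1st Quarter", "2nd Quarter", "3rd Quarter", "4th Quarter", "Over Time"] }
  } },
  { $setWindowFields: {
      sortBy: { Id: -1 },
      output: { rn: { $documentNumber: {} } }   // ROW_NUMBER() OVER (ORDER BY Id DESC)
  } },
  { $sort: { Id: -1 } }
])
```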
null
[ "aggregation", "time-series" ]
[ { "code": "", "text": "I want to ingest data into a MongoDB Time Series collection as efficiently as possible without ingesting duplicate data.At then moment I am having to process the data one record at a time, checking for a matching record by performing a find operation and inserting the record if no match is found. This approach doesn’t seem to be particularly efficient, is there a better way of doing this when using MongoDB Time Series collections?I am aware that when there are duplicates an an aggregation pipeline can be created using a $group stage to filter out the duplicates from users accessing the data but this is tackling the symptom and not the cause.With normal collections you can add a unique index to a collection and then perform upsert efficiently using bulk write however for MongoDB Time Series collection unique indexes are not yet supported and upsert doesn’t appear to be supported even when $setOnInsert has been specified.Are there any plans to support unique indexes and upserts for MongoDB Time Series collection in the near future?", "username": "Peter_B1" }, { "code": "metaFields", "text": "Hi @Peter_B1 and welcome to the MongoDB community forum!!Are there any plans to support unique indexes and upserts for MongoDB Time Series collection in the near future?Both the feature asks, unique indexes and bulk upserts in Time Series collection, are in the pipeline, but I’m not able to say when or how these features will be implemented in the future.However, if you are on MongoDB version 5.1 or above, the update operation is possible for the metaFields values, provided this requires a certain conditions to be fulfilled. Please visit the documentation on Update in Time Series Collection for further detailsAlso, for tracking the feature request you could also put the feature requests into the MongoDB Feedback Engine.To add more, if you can clarify for the above requirement, are you expecting a lot for duplicates in the collection?\nIf, having a unique field is an enforced requirement, working with a regular collection over time series be more effective?Please let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "AasawariHi Aasawari,Thanks for your reply.In relation to your question about duplicates, they don’t happen all the time but can happen when data is resubmitted. We want to be able to insert data in bulk and identify what the new data was so we can inform downstream systems of the new data (and not the duplicates).If it were possible to add a unique index on the key fields then the duplicate records would fail and we would be able to identify this by the error code.Similarly if it were possible to perform an upsert with $setOnInsert I was expecting only the insert operations to be triggered and therefore for the command to be permitted for timeseries collections however this is not the case. 
Had this been permitted, I expect I would have been able to determine which requests resulted in an insert. A unique composite key isn't a requirement; however, being able to insert new records in bulk, ignoring any duplicates, and being able to identify the inserted records is. Using regular collections would work, however they wouldn't have the benefits and optimizations that come with time series collections, such as optimized internal storage and improved query efficiency. If the unique indexes and bulk upserts are both in the pipeline, do you have any open issues for them that I can track? Thanks,\nPeter Baccaro", "username": "Peter_B1" }, { "code": "", "text": "Hi @Peter_B1 and thank you for the detailed reply. \"If the unique indexes and bulk upserts are both in the pipeline do you have any open issues for them that I can track?\" Since this is in the planning stage for future releases, there are no tickets to be watched as of yet. Alternatively, you can raise the feature request in the MongoDB Feedback Engine, where you can track individual requests and their progress. Regards,\nAasawari", "username": "Aasawari" } ]
How to write to MongoDB Time Series collections efficiently?
2023-02-23T17:46:14.043Z
How to write to MongoDB Time Series collections efficiently?
1,421
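Until unique indexes and upserts are available for time series collections, one incremental improvement over a find-then-insert round trip per record is to deduplicate a whole batch with a single query before a bulk insert. A Node.js sketch, assuming the deduplication key is a `sensorId` plus `timestamp` pair (the field names are illustrative, not from the original post):

```javascript
async function insertNewReadings(coll, batch) {
  // One query for the whole batch instead of one find per record.
  const keys = batch.map(d => ({ sensorId: d.sensorId, timestamp: d.timestamp }));
  const existing = await coll
    .find({ $or: keys })
    .project({ sensorId: 1, timestamp: 1, _id: 0 })
    .toArray();

  const seen = new Set(existing.map(d => `${d.sensorId}|${d.timestamp.toISOString()}`));
  const fresh = batch.filter(d => !seen.has(`${d.sensorId}|${d.timestamp.toISOString()}`));

  if (fresh.length) await coll.insertMany(fresh, { ordered: false });
  return fresh; // the documents that were actually new, for downstream notification
}
```

This is not atomic — concurrent writers could still race — but it avoids the per-record round trip.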
null
[ "aggregation" ]
[ { "code": "orderRouter.get('/income', verifyTokenAndAdmin, async (req:Request, res:Response)=>{\n const date = new Date();\n const lastDay = new Date(date.setDate(date.getDate()-1));\n const year = new Date().getFullYear();\n const lastMonth = new Date(date.setMonth(date.getMonth()-1));\n const previousMonth = new Date(new Date().setMonth(lastMonth.getMonth()-1));\n try{\n const income = await Order.aggregate([\n {$match:{createdAt:{$gte: previousMonth}}},\n {$match:{createdAt:{$gte: lastDay}}},\n {$match:{createdAt:{$eq:year}}},\n {$project: {\n month: {$month: \"$createdAt\"},\n day:{$day:\"$createdAt\"},\n year:{$year:\"$createdAt\"},\n sales: \"$netto\",\n }},\n {$group:{\n _id: {\n \"month\":\"$month\",\n \"day\":\"$day\",\n \"year\":\"$year\",\n },\n total:{$sum: \"$sales\"}\n }}\n \n ]);\n console.log(income);\n", "text": "Hello everyone, currently I am trying around with aggregation. Surely I will join the aggregation course, but I have to fullfill my exercise first. I have build a webshop and want to get the income out of my orders-collection. I matched the field createdAt with month and that is working. But I want to get the income of the day and the full year either. Now I am getting the error:MongoServerError: Invalid $project :: caused by :: Unknown expression $dayCode:Thanks for your help", "username": "Roman_Rostock" }, { "code": " day:{$day:\"$createdAt\"},", "text": "Surely I will join the aggregation courseYou should do it before producing any production code.Yes you may match the same field multiple times. Look at $and to see how to.The followingMongoServerError: Invalid $project :: caused by :: Unknown expression $daytells you that day:{$day:\"$createdAt\"},is wrong. And if you google for mongodb $day, the first result will be\nwhich is closer to what you need.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation - Can I match the same field multiple times?
2023-03-02T13:08:16.756Z
Aggregation - Can I match the same field multiple times?
1,145
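Putting the pointers from the thread together, a corrected version of the pipeline might look like the sketch below: the date conditions combined into a single `$match` (an implicit AND, or an explicit `$and`), and `$dayOfMonth` instead of the non-existent `$day`. The field names `createdAt` and `netto` are taken from the original post.

```javascript
const income = await Order.aggregate([
  { $match: { createdAt: { $gte: previousMonth, $lte: new Date() } } }, // one $match, implicit AND
  { $group: {
      _id: {
        year:  { $year: "$createdAt" },
        month: { $month: "$createdAt" },
        day:   { $dayOfMonth: "$createdAt" }   // $day does not exist; $dayOfMonth does
      },
      total: { $sum: "$netto" }
  } },
  { $sort: { "_id.year": 1, "_id.month": 1, "_id.day": 1 } }
]);
```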
https://www.mongodb.com/…394a539ca5b3.png
[ "security" ]
[ { "code": "", "text": "Just like this one?\n\nimage749×577 30.8 KB\n", "username": "sg_irz" }, { "code": "+srvtlsssltruetlssslfalsetls=falsessl=false", "text": "Hi @sg_irz,I’m not too sure if this is what you were after but as per the DNS Seed List Connection Format documentation:Use of the +srv connection string modifier automatically sets the tls (or the equivalent ssl) option to true for the connection. You can override this behavior by explicitly setting the tls (or the equivalent ssl) option to false with tls=false (or ssl=false ) in the query string.if this doesn’t answer your question, could you clarify which tls options this topic is about?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks @Jason_TranIf I use this connection string for my application server. Will it be able to connect to my mongodb cluster? (even without passing through a client certificate)\nimage749×577 30.8 KB\n", "username": "sg_irz" }, { "code": "", "text": "Can we go back a step and clarify what you mean originally by “tls options are not passed”? From the screenshots you posted, it looks to me like a standard code example to connect to Atlas using C#, and I didn’t see any TLS options there. Did you modify this example code, added some options, and found that you cannot connect?If I use this connection string for my application server. Will it be able to connect to my mongodb cluster? (even without passing through a client certificate)If you’re asking about whether you need to supply a TLS certificate to connect to Atlas, the answer is no. Atlas uses LetsEncrypt as certificate authority, and official drivers should be able to connect. The example code should have no trouble connecting to Atlas without passing any additional TLS options.", "username": "Jason_Tran" }, { "code": "", "text": "Can we go back a step and clarify what you mean originally by “tls options are not passed”? From the screenshots you posted, it looks to me like a standard code example to connect to Atlas using C#, and I didn’t see any TLS options there. Did you modify this example code, added some options, and found that you cannot connect?No I did not modify this, this is straight from guide in connecting application to Atlas. Although I did not try to test this connection as I am just trying to compare a mongo atlas cluster to a self managed mongodb cluster.Sorry for the confusion, I just need clarification for my self managed MongoDB if I am able to connect to my cluster (with TLS enabled) without having a client certificate for each of my application server. Just like how mongodb atlas is doing it. Based on reading MongoDB doc, I can do it but I have to pass the tlsInsecure=true option on my connectionstring which is not advisable on prod environment.If you’re asking about whether you need to supply a TLS certificate to connect to Atlas, the answer is no. Atlas uses LetsEncrypt as certificate authority, and official drivers should be able to connect. The example code should have no trouble connecting to Atlas without passing any additional TLS options.Based on this, can I replicate this (about not needing to supply a TLS certificate to connect on mongodb) on my self managed mongodb using LetsEncrypt as the certificate authority?", "username": "sg_irz" } ]
Why is it when connecting to mongo atlas cluster for my application, tls options are not passed?
2023-03-02T03:56:46.006Z
Why is it when connecting to mongo atlas cluster for my application, tls options are not passed?
1,156
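For a self-managed deployment whose server certificate is issued by a public CA (e.g. Let's Encrypt), a driver connection normally only needs TLS enabled — no client certificate and no `tlsInsecure` — as long as the server is not configured to require client certificates. A hedged Node.js sketch with placeholder hostnames:

```javascript
const { MongoClient } = require("mongodb");

const uri =
  "mongodb://db1.example.com:27017,db2.example.com:27017/?replicaSet=rs0&tls=true";

const client = new MongoClient(uri, {
  // Only needed if the signing CA is not already in the OS trust store:
  // tlsCAFile: "/etc/ssl/my-ca.pem",
});

async function main() {
  await client.connect();
  console.log(await client.db("admin").command({ ping: 1 }));
  await client.close();
}
main();
```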
null
[ "serverless" ]
[ { "code": "", "text": "Hi,Just some basic question here. I set up mongodb atlas serverless and shared node(free tier) for my local testing. Both of them are set to set AWS oregon west datacenter. However, the read performance has a big gap. For my 1.8 document (the only document in the collection), the shared node latency is about 1.5 seconds where the serverless latency is about 5.6 seconds. Everything else is same.I’m surprised by this latency gap and any insights is appreciated.", "username": "Weide_Zhang" }, { "code": "", "text": "Hi @Weide_Zhang - Welcome to the community.I presume the only thing you’re changing application end is the connection string (pointing to either the shared tier cluster or serverless instance) when testing. Please correct me if I am wrong here.Besides that, would you be able to provide some information:I’ll try reproduce this behaviour if possible.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi Jason,I’m attaching the document here. to answer your question.Thanks,Weide", "username": "Weide_Zhang" }, { "code": "mongoshexecutionStats: {\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 0,\n totalKeysExamined: 0,\n totalDocsExamined: 1,\n executionStages: {\n stage: 'COLLSCAN',\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 1,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n direction: 'forward',\n docsExamined: 1\n }\n },\n command: { find: 'collection', filter: {}, '$db': 'db' },\n serverInfo: {\n host: 'serverlessinstance0-lb.redacted.mongodb.net',\n port: 27017,\n version: '6.2.0',\n gitVersion: '290112ef1396982a187217b4cb8a943416ad0db1'\n }\nexecutionStats: {\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 4,\n totalKeysExamined: 0,\n totalDocsExamined: 1,\n executionStages: {\n stage: 'COLLSCAN',\n nReturned: 1,\n executionTimeMillisEstimate: 4,\n works: 3,\n advanced: 1,\n needTime: 1,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n direction: 'forward',\n docsExamined: 1\n }\n },\n command: { find: 'collection', filter: {}, '$db': 'db' },\n serverInfo: {\n host: 'ac-iceu1mh-shard-00-01.redacted.mongodb.net',\n port: 27017,\n version: '5.0.15',\n gitVersion: '935639beed3d0c19c2551c93854b831107c0b118'\n }\nexecutionTimeMillis", "text": "Hi @Weide_Zhang,I imported the document and I tried to test this via mongosh but got the below results from the execution stats output for both an M0 and serverless instance.Serverless instance:M0 instance:The executionTimeMillis for both are <5ms.the shared node latency is about 1.5 seconds where the serverless latency is about 5.6 seconds.Can you advise how you got these times? 
Was this from the execution stats output?Regards,\nJason", "username": "Jason_Tran" }, { "code": " let client = MongoDBClient.getInstance(); //this is a singleton \n const start = new Date().getTime();\n await client.connect();\n let elapsed1 = new Date().getTime() - start;\n let fontservice = new FontsService();\n // this is the real mongodb call, \n let data: any = await client.db!.collection(\"collectionname\").findOne({});\n let elapsed2 = new Date().getTime() - start;\n console.log(`elapsed time is ${elapsed2} and connection taks ${elapsed1}`);\ntype or paste code here\n", "text": "Hi Jason,\nThanks for your reply.\nHere is what my client code looks like, first i call connect for every request for my singleton mongodbclient and then i do the fetch", "username": "Weide_Zhang" }, { "code": "", "text": "so seems connection takes about 500ms and then the real difference is in the findOne call where I measured the latency (elapsed 2 and elapsed1) using the nodejs client sdk.", "username": "Weide_Zhang" }, { "code": "let data: any = await client.db!.collection(\"collectionname\").explain(\"executionStats\").find({})\n.find().explain()findOne()find()", "text": "Thanks for clarifying.Since your main purpose is the latency comparisons of the same find operation between the two types of instances, I believe you should try the following tests described below because it would compare the same operation executed server side:Can you run the same queries against both instances using the explain output in “executionStats” verbosity and provide the output for each?:Note: .find() returns a cursor and can be used with .explain().The above is to compare the execution times on both the serverless instance and shared tier instance. If these are relatively the same then the latency gap you’ve mentioned may exist elsewhere outside of the particular find operation being executed.find with no filter (using the nodejs mongo sdk)Please note : Your code is showing that it’s a findOne() and not a find().Regards,\nJason", "username": "Jason_Tran" } ]
Serverless vs shared latency gap
2023-02-28T21:59:22.886Z
Serverless vs shared latency gap
1,266
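The comparison suggested in the thread, written with the Node.js driver: server-side execution time comes from the explain output, while a client-side timer captures the full round trip (connection, network, driver). Only the connection string changes between the two instances; `client` is assumed to be an already-connected MongoClient.

```javascript
const coll = client.db("db").collection("collection");

// Server-side cost of the query itself:
const stats = await coll.find({}).explain("executionStats");
console.log("server executionTimeMillis:", stats.executionStats.executionTimeMillis);

// Client-observed latency (network + driver + server):
const start = Date.now();
await coll.findOne({});
console.log("round trip ms:", Date.now() - start);
```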
null
[ "aggregation" ]
[ { "code": "", "text": "Hi Guys,I have an application that uses a change stream to watch incoming information. However, there is a use case that pipeline of the change stream will be updated from other sources. I cannot predict the input, it is user related.Is there a suggested way to update the pipeline for a change stream? I am using mongocxx. Mongocxx example is mostly welcomed, other language/general suggestions are also appreciated.thanks.", "username": "Kong_Ip" }, { "code": "", "text": "it might be a rare case as i cannot find much information about this problem. however, i think it could a case people will run into.", "username": "Kong_Ip" }, { "code": "", "text": "ok, there is no answer which means it is either not a problem, or it is an unsolvable problem…i will do some work around then lol", "username": "Kong_Ip" }, { "code": "pipeline = [\n {\"$match\": {\"fullDocument.username\": \"alice\"}},\n {\"$addFields\": {\"newField\": \"this is an added field!\"}},\n]\ncursor = db.inventory.watch(pipeline=pipeline)\ndocument = next(cursor)\n", "text": "Hi @Kong_Ip welcome to the community!Currently I don’t believe there is a way to change the pipeline other than creating a new changestream. From the Python example in https://www.mongodb.com/docs/manual/changeStreams/The pipeline is part of the creation of the change stream itself, so I don’t think it can be modified once it’s created.An alternative would be to get everything from the change stream (without any pipeline), pass the output to a second process that can filter it, then pass that output to the final destination. You can design the second process to have modifiable filters (which could be in any language).Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "thanks very much Kevin.", "username": "Kong_Ip" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change Stream with changeable pipeline
2023-03-01T03:54:36.970Z
Change Stream with changeable pipeline
533
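A sketch of the workaround in JavaScript (the thread uses mongocxx, but the idea is driver-agnostic): since the pipeline is fixed when the stream is created, "updating" it means closing the stream and opening a new one with the new pipeline, resuming from the last resume token so no events are missed. `handle()` is a placeholder for your own event processing.

```javascript
let stream;
let lastToken;

function openStream(collection, pipeline) {
  stream = collection.watch(pipeline, lastToken ? { resumeAfter: lastToken } : {});
  stream.on("change", (event) => {
    lastToken = event._id;   // resume token of the last processed event
    handle(event);
  });
}

async function updatePipeline(collection, newPipeline) {
  await stream.close();
  openStream(collection, newPipeline); // picks up after the last seen event
}
```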
null
[]
[ { "code": "", "text": "Hi,\nI have a requirement to store data in MongoDB (Images / audio / video), the file size could be > 16 MB.\nI have gone through the documentation which suggests :\nIf data size < 16 MB\nstore in MongoDB document\nelse\nstore in GridFSI have the following queries:\nQ1) Is it advisable to store data > 16 MB in multiple documents in MongoDB as per required “chunk size”, do custom logic to split the files as per “chunk size” & query the chunks for a given (images / audio / video) file from MongoDB or use GridFS for this ? Which is a better option & why ?Q2) If using GridFS, how do I decide for the chunk size for files of varying sizes ( > 16 MB) to get best performance while quering.Thanks,\nSachin Vyas", "username": "Sachin_Vyas" }, { "code": "", "text": "Hello @Sachin_Vyas,Welcome to the MongoDB Community forum Based on your requirements, it is recommended to use GridFS to store data in MongoDB for files larger than 16 MB. Storing data larger than 16 MB in multiple documents in MongoDB is generally not advisable as it can cause performance issues and increase the risk of data inconsistencies.GridFS is designed to handle large files by storing them as separate documents with a fixed chunk size. The default chunk size in GridFS is 255 KB, but you can specify a different size when creating the GridFS bucket. The optimal chunk size may depend on the type and size of the files you are storing, as well as the resources of your hardware and system.Using GridFS also allows you to take advantage of MongoDB’s powerful querying capabilities to retrieve files efficiently.Although officially supported, GridFS is a convention rather than a specific built-in method, thus the binary files are stored as regular MongoDB documents. In most cases it’s preferable to store binaries outside the database, where the database entry contains the pointer toward the relevant file. However if storing binaries in the database is desired or required for the use case, then GridFS is a valid solution.I hope this helps answer your questions. Let me know if you have any further queries.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Storing data (images / audio / video) > 16 MB in MongoDB or GridFS?
2023-02-26T11:12:38.517Z
Storing data (images / audio / video) > 16 MB in MongoDB or GridFS?
1,152
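A small Node.js sketch of the GridFS route recommended above, overriding the default 255 KB chunk size; the bucket, file, and database names are placeholders and `client` is an already-connected MongoClient.

```javascript
const { GridFSBucket } = require("mongodb");
const fs = require("fs");

const db = client.db("media");
const bucket = new GridFSBucket(db, { bucketName: "videos", chunkSizeBytes: 1024 * 1024 }); // 1 MB chunks

// Upload
fs.createReadStream("./clip.mp4")
  .pipe(bucket.openUploadStream("clip.mp4", { metadata: { contentType: "video/mp4" } }))
  .on("finish", () => console.log("upload complete"));

// Download
bucket.openDownloadStreamByName("clip.mp4").pipe(fs.createWriteStream("./copy.mp4"));
```

Larger chunk sizes mean fewer chunk documents per file but more memory per read and write, so benchmarking with representative files is worthwhile before settling on a value.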
null
[ "crud", "transactions" ]
[ { "code": " ClientSession clientSession = mongoClient.startSession();\n clientSession.startTransaction();\n int batchSize = 100000;\n try {\n for (int i = 1; i <= 1000000; i++) {\n ArrayList<SampleData> sampleDataList = new ArrayList();\n sampleDataList.add(new SampleData(i));\n if (i % batchSize == 0 || i == sampleDataList.size()) {\n sampleDataRepo.mongoCollection().insertMany(clientSession, sampleDataList);\n sampleDataList.clear();\n }\n }\n clientSession.commitTransaction();\n }\n catch (Exception e){\n clientSession.abortTransaction();\n throw new Exception(e.getMessage());\n }\n", "text": "Below is a snippet of my transaction implementation. Currently MongoDB will throw WriteConflict error every time when it is processing the records from 800k onwards.", "username": "Zhen_Wei_Wong" }, { "code": "SampleData()800k", "text": "Hi @Zhen_Wei_Wong,Welcome to the MongoDB Community forums The issue you are facing could be related to MongoDB’s document-level concurrency control. MongoDB uses optimistic concurrency control to ensure consistency in its transactions.Like, if two or more transactions attempt to modify the same document concurrently, MongoDB will throw a WriteConflict error, indicating that one or more transactions failed due to conflicts.To understand more about your error, could you please share the following:Meanwhile please go through this link to read about In-progress Transactions and Write ConflictsBest,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Honestly, i have no idea in what scenario one would use a transaction to insert 1million docs.", "username": "Kobe_W" }, { "code": "{\"t\":{\"$date\":\"2023-03-01T13:30:39.464+08:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"conn23\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":12,\"message\":{\"ts_sec\":1677648639,\"ts_usec\":460201,\"thread\":\"2784:140705341332272\",\"session_dhandle_name\":\"file:collection-14--2788248468824538652.wt\",\"session_name\":\"WT_CURSOR.insert\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"int __cdecl __realloc_func(struct __wt_session_impl *,unsigned __int64 *,unsigned __int64,bool,void *):134:memory allocation of 8583939072 bytes failed\",\"error_str\":\"Not enough space\",\"error_code\":12}}} {\"t\":{\"$date\":\"2023-03-01T13:30:39.465+08:00\"},\"s\":\"F\", \"c\":\"REPL\", \"id\":17322, \"ctx\":\"conn23\",\"msg\":\"Write to oplog failed\",\"attr\":{\"error\":\"UnknownError: WiredTigerRecordStore::insertRecord 12: Not enough space\"}} {\"t\":{\"$date\":\"2023-03-01T13:30:39.474+08:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23089, \"ctx\":\"conn23\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":17322,\"file\":\"src\\\\mongo\\\\db\\\\repl\\\\oplog.cpp\",\"line\":369}} {\"t\":{\"$date\":\"2023-03-01T13:30:39.474+08:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23090, \"ctx\":\"conn23\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"} {\"t\":{\"$date\":\"2023-03-01T13:30:39.475+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn23\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 22 (SIGABRT).\\n\"}}", "text": "Hi, thanks for the warm welcome.I found out that the WriteConflict error is gone after setting --wiredTigerCacheSizeGB 180. 
However when the transaction is getting committed after all the 1million insertions goes through it will throw the error as shown.{\"t\":{\"$date\":\"2023-03-01T13:30:39.464+08:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"conn23\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":12,\"message\":{\"ts_sec\":1677648639,\"ts_usec\":460201,\"thread\":\"2784:140705341332272\",\"session_dhandle_name\":\"file:collection-14--2788248468824538652.wt\",\"session_name\":\"WT_CURSOR.insert\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"int __cdecl __realloc_func(struct __wt_session_impl *,unsigned __int64 *,unsigned __int64,bool,void *):134:memory allocation of 8583939072 bytes failed\",\"error_str\":\"Not enough space\",\"error_code\":12}}} {\"t\":{\"$date\":\"2023-03-01T13:30:39.465+08:00\"},\"s\":\"F\", \"c\":\"REPL\", \"id\":17322, \"ctx\":\"conn23\",\"msg\":\"Write to oplog failed\",\"attr\":{\"error\":\"UnknownError: WiredTigerRecordStore::insertRecord 12: Not enough space\"}} {\"t\":{\"$date\":\"2023-03-01T13:30:39.474+08:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23089, \"ctx\":\"conn23\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":17322,\"file\":\"src\\\\mongo\\\\db\\\\repl\\\\oplog.cpp\",\"line\":369}} {\"t\":{\"$date\":\"2023-03-01T13:30:39.474+08:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23090, \"ctx\":\"conn23\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"} {\"t\":{\"$date\":\"2023-03-01T13:30:39.475+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn23\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 22 (SIGABRT).\\n\"}}", "username": "Zhen_Wei_Wong" }, { "code": "\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":12,\"message\":{\"ts_sec\":1677648639,\"ts_usec\":460201,\"thread\":\"2784:140705341332272\",\"session_dhandle_name\":\"file:collection-14--2788248468824538652.wt\",\"session_name\":\"WT_CURSOR.insert\"\n\"msg\":\"Write to oplog failed\",\"attr\":{\"error\":\"UnknownError: WiredTigerRecordStore::insertRecord 12: Not enough space\"}}\nWiredTigerRecordStore{\"message\":\"Got signal: 22 (SIGABRT).\\n\"}}\n", "text": "Hi @Zhen_Wei_Wong,Thanks for sharing the full error message.The error seems to be related to the WT storage engine. The first part of the msg shows an error due to memory allocation failure and it was unable to allocate certain bytes of memory due to insufficient space.The further part of the msg shows that there was a failure to write to the oplog. The error message shows that there was not enough space available to insert a record into the WiredTigerRecordStore.Finally, there is a message showing that a signal (SIGABRT) was received, which likely triggered the process to abort.Overall the error message indicates a problem likely related to memory or disk space constraints. To resolve this issue, the root cause of the memory allocation failure needs to be identified.However, can you clarify the following:Best,\nKushagra", "username": "Kushagra_Kesav" } ]
WriteConflict error in MongoDB when running batches of insertMany operation with transaction from Quarkus
2023-02-23T10:29:32.892Z
WriteConflict error in MongoDB when running batches of insertMany operation with transaction from Quarkus
1,947
null
[ "queries" ]
[ { "code": "db.customers.createIndex({ active: 1, birthdate: -1, name: 1 });\n{\n \"_id\": {\n \"$oid\": \"5ca4bbcea2dd94ee58162a88\"\n },\n \"username\": \"paul82\",\n \"name\": \"Joseph Dawson\",\n \"birthdate\": {\n \"$date\": {\n \"$numberLong\": \"-60209941000\"\n }\n },\n \"email\": \"[email protected]\",\n \"accounts\": [ 158557 ],\n \"active\": true\n}\ndb.customers.find({ birthdate: { $gt: ISODate('1995-08-01') }, active: true }).sort({ birthdate: -1, name: 1 });\nwinningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: {\n active: 1,\n birthdate: -1,\n name: 1\n },\n indexName: 'active_1_birthdate_-1_name_1',\n isMultiKey: false,\n multiKeyPaths: {\n active: [],\n birthdate: [],\n name: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n active: [\n '[true, true]'\n ],\n birthdate: [\n '[new Date(9223372036854775807), new Date(807235200000))'\n ],\n name: [\n '[MinKey, MaxKey]'\n ]\n }\n }\n },\n rejectedPlans: []\n }\n", "text": "index created:sample document:query:explain output:I didn’t understand why this gave the correct result. I couldn’t make sense of the fact that the birth dates were sorted in the output of the query. Aren’t the birth dates supposed to be sorted within the active prefix?So, how did it manage to provide the birth dates in a sorted manner without using an additional SORT for the ‘birthdate’ field?Is there a place I missed? Can’t i expect the outputs to be sorted by the “active” field?", "username": "recepaltun" }, { "code": "data{ a: 1, b: 1, c: 1, d: 1 }\ndb.data.find( { a: 5 } ).sort( { b: 1, c: 1 } ){ a: 1 , b: 1, c: 1 }db.data.find( { b: 3, a: 4 } ).sort( { c: 1 } ){ a: 1, b: 1, c: 1 }db.data.find( { a: 5, b: { $lt: 3} } ).sort( { b: 1 } ){ a: 1, b: 1 }{ c: 1 }abdb.data.find( { a: { $gt: 2 } } ).sort( { c: 1 } )\ndb.data.find( { c: 5 } ).sort( { c: 1 } )\n{ a: 1, b: 1, c: 1, d: 1 }", "text": "it was like talking to myself but I think the answer is here:An index can support sort operations on a non-prefix subset of the index key pattern. To do so, the query must include equality conditions on all the prefix keys that precede the sort keys.For example, the collection data has the following index:The following operations can use the index to get the sort order:As the last operation shows, only the index fields preceding the sort subset must have the equality conditions in the query document; the other index fields may specify other conditions.If the query does not specify an equality condition on an index prefix that precedes or overlaps with the sort specification, the operation will not efficiently use the index. For example, the following operations specify a sort document of { c: 1 }, but the query documents do not contain equality matches on the preceding index fields a and b:These operations will not efficiently use the index { a: 1, b: 1, c: 1, d: 1 } and may not even use the index to retrieve the documents.", "username": "recepaltun" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sorting by 2nd compound index element
2023-03-02T23:26:27.932Z
Sorting by 2nd compound index element
376
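The rule quoted above is easy to confirm with `explain()`, assuming only the compound index from the post exists: keep the equality condition on the index prefix (`active`) and the plan needs no blocking sort; drop it and a SORT stage appears in the winning plan.

```javascript
// Equality on the "active" prefix: index order satisfies the sort, no SORT stage.
db.customers.find({ active: true, birthdate: { $gt: ISODate("1995-08-01") } })
  .sort({ birthdate: -1, name: 1 })
  .explain("executionStats");

// No equality on "active": the winning plan now needs an in-memory SORT stage.
db.customers.find({ birthdate: { $gt: ISODate("1995-08-01") } })
  .sort({ birthdate: -1, name: 1 })
  .explain("executionStats");
```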
null
[ "aggregation", "queries" ]
[ { "code": "# In DATABASE\n{\n \"_id\": 8144,\n \"merchant_id\": 8144,\n \"name\": \"Google\",\n \"email\": \"[email protected]\",\n}\n{\n \"_id\": 2,\n \"merchant_id\": 2,\n \"name\": \"LENOVO\",\n \"email\": \"[email protected]\",\n \"submerchant_id\": 8144,\n}\n{\n \"_id\": 3,\n \"merchant_id\": 3,\n \"name\": \"HP\",\n \"email\": \"[email protected]\",\n \"submerchant_id\": 8144,\n}\n{\n \"_id\": 4,\n \"merchant_id\": 4,\n \"name\": \"DELL\",\n \"email\": \"[email protected]\",\n \"submerchant_id\": 8144,\n}\n#Output\n{\n \"_id\": 8144,\n \"merchant_id\": 8144,\n \"name\": \"Google\",\n \"email\": \"[email protected]\",\n \"count\": 3, \n # merchant_id : 8144 has 3 submerchants (Lenovo,hp,dell)\n}\ndb.merchant_info.aggregate([\n { '$group': {'_id': '$submerchant_id', 'count': {'$sum': 1}}},\n\n{ '$project': {'merchant_id': 1, 'name': 1, 'email': 1,'count' :1} }])\n \n{ _id: 8144, count: 3 }\n", "text": "I m working with MongoDB but i m facing problem with $project not displaying the data after $group is executed .My database look like thisThe output I want is to display the details of the merchant along with count of submerchantsMY QUERY is belowBUT I m getting the output has thisWhy is it not showing the merchant_id,name,email data ? Where am i going wrong please let me know ?", "username": "Joseph_Anjilimoottil" }, { "code": "", "text": "output of $group stage has only two fields: _id and count, so regardless of $projection, you won’t get them. You need to somehow fetch merchant_id/name/email… based on the _id field of $group output.I’m not familiar with aggregation so not sure how to fix it.", "username": "Kobe_W" }, { "code": "", "text": "Ignore what I wrote in my first edit. Which I keep here for prosperity.I think that the _id of your $group is wrong. From the output you shared, it should be $merchant_id.To get merchant_id, name and email in the output, use $first accumulator.It looks like the monkey felt from the tree. I did not analyzed the sample data enough. Asya’s following post made me realized that.", "username": "steevej" }, { "code": "$project$groupdb.merchant_info.aggregate([\n {$set:{\n submerchant_id:{$ifNull:[\"$submerchant_id\",\"$merchant_id\"]},\n submerchant:{$cond:{if:{$eq:[\"missing\",{$type:\"$submerchant_id\"}]},then:false, else:true}}}\n }, \n {$sort:{submerchant_id:1, submerchant:1}},\n {$group:{\n _id:\"$submerchant_id\", \n name:{$first:\"$name\"}, \n email:{$first:\"$email\"}, \n count:{$sum:1}, \n submerchants:{$push:{_id:\"$_id\",name:\"$name\",email:\"$email\"}}\n }},\n {$project:{\n _id:0, \n merchant_id:\"$_id\", \n name:1, \n email:1, \n count:{$subtract:[\"$count\",1]}, \n submerchants:{$slice:[\"$submerchants\",1,\"$count\"]}}\n }\n] )\n{\n\"name\" : \"Google\",\n\"email\" : \"[email protected]\",\n\"merchant_id\" : 8144,\n\"count\" : 3,\n\"submerchants\" : [\n\t{\n\t\t\"_id\" : 2,\n\t\t\"name\" : \"LENOVO\",\n\t\t\"email\" : \"[email protected]\"\n\t},\n\t{\n\t\t\"_id\" : 3,\n\t\t\"name\" : \"HP\",\n\t\t\"email\" : \"[email protected]\"\n\t},\n\t{\n\t\t\"_id\" : 4,\n\t\t\"name\" : \"DELL\",\n\t\t\"email\" : \"[email protected]\"\n\t}\n]\n}\nfalsetrue$first$project$group$lookup$project_id: null", "text": "This is a cute problem. $project operates on the documents that come out of the stage before it, and your $group stage only outputs two fields, so of course there is no way to manufacture the needed details. 
Here is how I would recommend doing the aggregation you need:Output would be:The trick I used is taking advantage of the fact that the “parent” merchant does not have the “submerchant_id” field. I added it so that it would be there to group by, I added a field to preserve which document was the parent merchant and I sorted it by that field (taking advantage of false sorting before true) so that parent would be first and I could use $first to preserve its name, etc. Then in the last $project stage I reshape the document and correct count (to remove the parent from the count of its submerchants) and remove it from the array of submerchants as well.There are probably many different ways to achieve this, hope this example helps you understand and decide whether this is the right approach for you or not. The way you did it can actually work as well, but after the $group stage you would need to perform $lookup on the same collection to get the fields you are trying to add in $project. That would also work, though you’d need to filter out the parent documents before (or after) the $group otherwise you’ll have an extra document with _id: null and you’d still need to do some reshaping of documents in the last stage.Asya", "username": "Asya_Kamsky" }, { "code": "{\n \"_id\": 70,\n \"merchant_id\": 70,\n \"name\": \"Apple\",\n \"email\": \"[email protected]\",\n \"submerchant_id\": 10,\n}\n{\n \"_id\": 8144,\n \"merchant_id\": 8144,\n \"name\": \"Google\",\n \"email\": \"[email protected]\",\n \"submerchant_id\": 70,\n}\n{\n \"_id\": 2,\n \"merchant_id\": 2,\n \"name\": \"LENOVO\",\n \"email\": \"[email protected]\",\n \"submerchant_id\": 8144,\n}\n{\n \"_id\": 3,\n \"merchant_id\": 3,\n \"name\": \"HP\",\n \"email\": \"[email protected]\",\n \"submerchant_id\": 8144,\n}\n{\n \"_id\": 4,\n \"merchant_id\": 4,\n \"name\": \"DELL\",\n \"email\": \"[email protected]\",\n \"submerchant_id\": 8144,\n}\n", "text": "Hi Asya ,\nYour Solution is Great !\nJust a small difference if the database as\nmerchant 1 → submerchant 1 of merchant 1 (merchant 2 ) → submerchant 1 of merchant 2How does the query change in this case?", "username": "Joseph_Anjilimoottil" }, { "code": "match = { \"$match\" : {\n \"submerchant_id\" : { \"$exists\" : false }\n} }\nlookup = { \"$lookup\" : {\n \"from\" : \"merchant_info\" ,\n \"localField\" : \"merchant_id\" , \n \"foreignField\" : \"submerchant_id\" ,\n \"as\" : \"submerchants\"\n /* An optional pipeline could be added here in order to $project only the fields \n that we want to keep (_id, name, email). Sometimes it is better to do more work \n on the server to download less data and sometimes it is better to do less work \n to download more data. It depends on the use-case and the data. 
If you $project\n to get rid of a few simple fields, useless, if you $project to remove sensitive information\n or to reduce the size of bloated documents please do it.\n */\n} }\n{ _id: 8144,\n merchant_id: 8144,\n name: 'Google',\n email: '[email protected]',\n submerchants: \n [ { _id: 2,\n merchant_id: 2,\n name: 'LENOVO',\n email: '[email protected]',\n submerchant_id: 8144 },\n { _id: 3,\n merchant_id: 3,\n name: 'HP',\n email: '[email protected]',\n submerchant_id: 8144 },\n { _id: 4,\n merchant_id: 4,\n name: 'DELL',\n email: '[email protected]',\n submerchant_id: 8144 } ] }\n", "text": "Now that I understand the problem better, thanks to Asya, I would like to propose an alternative solution which uses $lookup rather than $group/$push to get the submerchants.The first pipeline stage matches the parent merchant using:the fact that the “parent” merchant does not have the “submerchant_id” field.Documents out from this stage will documents with _id:8144, the _id of the $group.Then the $lookup stage (corrected thanks to @Joseph_Anjilimoottil) is using the simplest form:After, this stage the document _id:8144 will be:You still do not have the count but you could easily $set/$addFields or $project to get it. But since you are getting the array, there is no point to add load on the server to get something that your programming language gives you for free.Side discussion:One facet of using $lookup rather that $group that I do not know is performance. As far as I know, $group blocks until all incoming documents are processed. It makes sense since you cannot know if a document is part of any group or not. But it is possible that a $match’ed document with its $lookup could be produced faster if both are served by appropriate indexes and because $match and $lookup are not blocking.How does the query change in this case?May be, $graphLookup if you are using the $lookup alternative.", "username": "steevej" }, { "code": "", "text": "@Asya_Kamsky How does the query change in this case?", "username": "Joseph_Anjilimoottil" }, { "code": "lookup = { \"$lookup\" : {\n \"from\" : \"merchant_info\" ,\n \"localField\" : \"merchant_id\" ,\n \"foreignField\" : \"submerchant_id\" ,\n \"as\" : \"submerchants\"\n \n} }\n", "text": "@steevej your lookup seem to be jumbled , shouldn’t it be like this", "username": "Joseph_Anjilimoottil" }, { "code": "", "text": "your lookup seem to be jumbled , shouldn’t it be like thisYou are right. I must have cut-n-paste the first version which was wrong. I will update my post and indicate that you pointed the error.", "username": "steevej" }, { "code": "$matchsubmerchant_id", "text": "Looks like y’all figured it out - using $lookup will let you get the details of each merchant after grouping the submerchants by their “parent”. $graphLookup could be useful if you want to know all the “trees”. 
Only thing is I don’t think the $match works given that every sample document has a submerchant_id field present…Asya", "username": "Asya_Kamsky" }, { "code": "db.merchant_info.aggregate([\n{'$lookup': {\n 'from': \"merchant_info\",\n 'localField': \"merchant_id\",\n 'foreignField': \"submerchant_id\",\n 'as': \"submerchant_merchants\"\n }\n },\n{ '$addFields': {'Count_merchant': {'$size': \"$submerchant_merchants\"}} },\n{ '$project': {'merchant_id': 1, 'name': 1, 'email': 1,'Count_merchant' :1} }])\n", "text": "Yea $lookup worked !!The thing that still doubts me is if i have over a million or 10 million documents , then loading the submerchants list and then counting the size , wouldn’t it take a lot of processing time ?", "username": "Joseph_Anjilimoottil" }, { "code": "", "text": "The thing that still doubts me is if i have over a million or 10 million documents , then loading the submerchants list and then counting the size , wouldn’t it take a lot of processing time ?Why don’t you try it? In general in this pipeline the $group will be the most intensive stage, not $addFields or $project. $lookup may be depending on how many documents are going through that stage (it happens after $group so there are fewer than what you started with).Asya", "username": "Asya_Kamsky" }, { "code": "$addToSet$pushmerchant_idexplain()$group$name$email$group$push$addToSet$push$group", "text": "Hi All, I’ve been tinkering with this pipeline since I’ve got a very similar use case and I do have millions of documents to process, I’m noticing some interesting things related to performance.In my use case, I need to guarantee the submerchants are unique, so I’m using $addToSet instead of $push.If I don’t index any of the fields involved, performance begins to suffer noticeably on my particular cluster somewhere around 1M records. I added an index on merchant_id and didn’t really notice much improvement, but did notice that the winning query plan (from explain()) elected to use the index.\nAfter that, I decided to add a few more indexes with the other fields involved in the $group operation, like like $name and $email, and also compound indexes with all three fields. The winning query plan was using the compound index that included the most fields. Performance was noticeably way better.I’m confused by this - if the documents are already sorted before grouping, and then the group operation just needs to assemble the submerchants and they’re all “right there” next to each other, why does the $group operation perform so much worse without the compound index? Is mongo doing some sort of subquery on each call to $push/$addToSet as it’s assembling the submerchant array? If so, I’m thinking a non-group based strategy of streaming the documents through the pipeline would be more performant. Anyone know what’s going on during that $push in the $group call?", "username": "Marc_Dostie" }, { "code": "$group", "text": "I’m confused by this - if the documents are already sorted before grouping, and then the group operation just needs to assemble the submerchants and they’re all “right there” next to each other, why does the $group operation perform so much worse without the compound index?Probably because compound index includes all the fields it needs creating a covered index query - meaning everything is available in the index and there is no need to fetch the actual documents themselves.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
$group overriding $project MongoDB
2023-02-22T05:19:25.269Z
$group overriding $project MongoDB
1,271
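The thread above converges on a $match + $lookup + $size pipeline for counting submerchants per parent merchant. A minimal consolidated sketch in mongosh follows; the collection and field names (merchant_info, merchant_id, submerchant_id) come from the discussion, and the $match on a missing submerchant_id is exactly the assumption Asya questions, so adjust it to however parent merchants are actually marked in your data.

db.merchant_info.aggregate([
  // Assumption: parent merchants are the documents without a submerchant_id field
  { $match: { submerchant_id: { $exists: false } } },
  // Pull in every merchant whose submerchant_id points back to this merchant
  { $lookup: {
      from: "merchant_info",
      localField: "merchant_id",
      foreignField: "submerchant_id",
      as: "submerchants"
  } },
  // Count on the server; drop the array in $project if only the count is needed
  { $addFields: { submerchant_count: { $size: "$submerchants" } } },
  { $project: { merchant_id: 1, name: 1, email: 1, submerchant_count: 1 } }
])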
null
[ "aggregation", "queries", "dot-net" ]
[ { "code": "var projection = Builders<ClassA>.Projection.Expression(a => new ClassB {\n\tClassId = a.ClassId,\n\tArrayValues = a.ArrayOfX.OfType<SubclassOfX>().ToList()\n});\n\nvar query = database.GetCollection<ClassA>(\"ClassA\")\n\t.Find(a => a.ClassId == myId)\n\t.Project(projection);\n\t\nvar a = query.ToString();\na.ArrayOfX.Where(x => x is SubclassOfX).Cast<SubclassOfX>().ToList()", "text": "The following code that does a client-side filter and cast of the sub-document elements in an array field in the projection always worked with drivers prior to 2.19 using the LINQ2 provider. Running in 2.19 with the default LINQ3 provider will throw an “Expression not supported: a.ArrayOfX.OfType()” exception. When setting the provider to LINQ2, this works as expected.Same exception occurs when using aggregation pipeline with AsQueryable().Where().Select().Changing the cast to something like a.ArrayOfX.Where(x => x is SubclassOfX).Cast<SubclassOfX>().ToList() also works with LINQ2 but not LINQ3.", "username": "Jasper_Li" }, { "code": "", "text": "Does LINQ3 no longer support expressions with client-side custom code/functions/lambdas? This is a very powerful capability as you can do things in a single operation with very clean code instead of having to do the two steps of materializing the results from the mongodb query into an anonymous class and then doing another client side LINQ copy + transformation over that to get it into its final form.", "username": "Jasper_Li" }, { "code": "ExpressionNotSupportedExceptionIEnumerable.OfType<TResult>$expr", "text": "Hi, Jasper,Welcome to the MongoDB Community Forums!I understand that you’re having a problem with a projection that works with LINQ2 but fails with LINQ3.LINQ2 would attempt to translate as much of the expression to MQL as it could and then run the remainder on the client side. This led to code that was difficult to reason about since you wouldn’t know by looking at the code which portions executed server side and which client side.In LINQ3 we decided to make this explicit by throwing ExpressionNotSupportedException for any expression that we could not translate into MQL. The intention is to require the separation of server-side and client-side projections. More details can be found in CSHARP-4498.In theory we could potentially support IEnumerable.OfType<TResult> in MQL using $expr and type discriminators on the array’s subdocuments. If such a feature would be valuable to you, please file a feature request.Please let us know if you have any additional questions.Sincerely,\nJames", "username": "James_Kovacs" } ]
Client-side casting with LINQ3 projection throws Expression not supported exception
2023-03-02T04:57:35.362Z
Client-side casting with LINQ3 projection throws Expression not supported exception
929
null
[]
[ { "code": "", "text": "Hello.I have faced with a strange case. I have an app that has list of items and each of them contain small image (~50 kb). When we import a data, we add these items into realm and expect them to be synced, but I got an error about limitation: “Error. could not decode next message: error reading body: failed to read: read limited at 16777217 bytes”\nWhat is the correct workaround for a such situation?\nMaybe realm is not good for storing any kind of data, that differ from string, int, double and other simple data?", "username": "Roman_Bayik" }, { "code": "", "text": "Are you importing a lot of images in a single transaction? If that’s the case, then you might want to try to batch the inserts so that every batch is smaller than 16 mb - e.g. if your images are on average 50 kb, you could try inserting them in batches of 100 objects for roughly ~5mb/batch.", "username": "nirinchev" }, { "code": "", "text": "Hello.\nDo you think Realm has some in-house solution for making separated batches or you’re about splitting the data manually? (like to upload first 50 items, then pause, then next 50 and so on)", "username": "Roman_Bayik" }, { "code": "", "text": "The issue is that the smallest unit Realm operates with is a transaction - while a single transaction may be uploaded in multiple chunks, it can either be applied in its entirety or not at all. I’ll follow up with our sync server team to see if the behavior you’re observing is expected and if there’s any immediate improvements we can make, but splitting the transaction on your end will verify if that’s indeed the culprit and allow you proceed until we have a fix in place.", "username": "nirinchev" } ]
Realm Atlas Sync with small images
2023-03-02T20:07:10.538Z
Realm Atlas Sync with small images
1,347
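The batching suggestion above can be sketched roughly as follows with the Realm JavaScript SDK (the poster's SDK and object model are not stated, so the "Item" model name and the batch size are assumptions); each write transaction then stays well under the 16 MB sync message limit.

const BATCH_SIZE = 100; // ~50 kB per image -> roughly 5 MB per transaction

function importItems(realm, items) {
  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    const batch = items.slice(i, i + BATCH_SIZE);
    // One transaction per batch instead of one huge transaction for the whole import
    realm.write(() => {
      for (const item of batch) {
        realm.create("Item", item); // "Item" is a placeholder model name
      }
    });
  }
}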
null
[ "java" ]
[ { "code": "", "text": "Hi there,\nWe have recently upgraded MongoDB from v4.2 to 5.0\nWe have a collection that has a field by name ‘title’ to be unique. Java code has C (create operation), which is expected to throw DuplicateKeyException when a new document with duplicate ‘title’ is inserted.This Exception is not thrown now and hence, posting here so that, someone can point me to the right Exception class (per version 5.0) .\nis com.mongodb.DuplicateKeyException deprecated?\nif so, what is the latest one.", "username": "Ganapathi_Vaddadi" }, { "code": "", "text": "declaration: package: com.mongodb, class: DuplicateKeyException", "username": "Kobe_W" }, { "code": "", "text": "Thank you. So, there is no change in this.", "username": "Ganapathi_Vaddadi" } ]
What's the newer version of com.mongodb.DuplicateKeyException?
2023-02-17T19:33:57.189Z
What's the newer version of com.mongodb.DuplicateKeyException?
621
null
[]
[ { "code": "", "text": "Hi\nI am looking at using Atlas Data federation to be able to query data from multiple databases. All of the data sources are clusters/DBs in the Atlas . Is any of the queried data being saved in AWS by the federation component. Looking to see if this works from out data privacy and localization point of view. All of our clusters are being hosted in Azure right now. Is there any caching of the data happens in the federated instance ? Where will the indexes be stored?\nThank you", "username": "Ilana_shapiro" }, { "code": "", "text": "Hello @Ilana_shapiro ,Thanks for the question!In short, no, there is no data saved in AWS by the federation service. Currently the infrastructure for federation is hosted in AWS, but all data is ephemeral in the system and is deleted once a query has finished.There is no data that will be persisted in the federated instance.But I do have some good news, native Azure support is coming soon too. Once we release it, it should be a very simple switch that allows you to switch to having all processing happen in Azure. The statement from earlier will still stand that is data is immediately released once a query completes, but there will be a slight benefit in terms of less Data Transfer costs.There is no caching of customer data happening. For the use case of using data federation to query a cluster the indexes are stored on the cluster, they get used locally by the cluster when the cluster is returning results to the federated layer but they are not moved or copied.Best,", "username": "Benjamin_Flast" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas Data Federation - what data is being saved outside of the original data sources
2023-03-01T20:38:51.985Z
MongoDB Atlas Data Federation - what data is being saved outside of the original data sources
506
null
[ "crud", "php" ]
[ { "code": "public function store(Request $request) { \n\n $filename=time().'.'.$request->file->getClientOriginalExtension();\n\n $bucket= DB::connection('mongodb')->getMongoDB()->selectGridFSBucket();\n\n $stream = $bucket->openUploadStream( $request->file);\n\n $metadata = $bucket->getFileDocumentForStream($stream); \n\n $contents = $filename ;\n\n fwrite($stream, $contents);\n\n fclose($stream); \n\n return Redirect::back()->with('success', ' your case has been created successfully');\n\n }\n", "text": "Hi team,i’m laravel developer i want to store pdf to mongodb, have done integration connection all i don’t any solution to store pdf. thank you", "username": "Megala_Shekar" }, { "code": "", "text": "why?Please use some storage service provider to store pdf and store these pdf access links in mongo db. It is better and economical", "username": "BIJAY_NAYAK" } ]
I want to store a PDF in MongoDB using Laravel
2022-03-22T09:56:06.438Z
I want to store a PDF in MongoDB using Laravel
2,934
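The question above is Laravel/PHP-specific, and the accepted advice is to keep large binaries in object storage with only links in MongoDB. Purely as an illustration of the GridFS route, here is a rough Node.js sketch (database name, bucket name and file path are made up); the PHP GridFS API follows the same upload-stream idea.

const { MongoClient, GridFSBucket } = require("mongodb");
const fs = require("fs");

async function storePdf(uri, filePath) {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const bucket = new GridFSBucket(client.db("myapp"), { bucketName: "pdfs" });
    // Stream the PDF into GridFS; "finish" fires once all chunks are written
    await new Promise((resolve, reject) => {
      fs.createReadStream(filePath)
        .pipe(bucket.openUploadStream("case.pdf", { metadata: { contentType: "application/pdf" } }))
        .on("error", reject)
        .on("finish", resolve);
    });
  } finally {
    await client.close();
  }
}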
null
[ "compass", "mongodb-shell", "server" ]
[ { "code": "brew services start [email protected]\n==> Successfully started `mongodb-community`\nmongoshmongosh\nCurrent Mongosh Log ID:\t63f0f50dc46c87b2ca8c78fc\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.7.1\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\nmongodbsuperuser", "text": "Hi,I believe that this question has been tackled here Connect ECONNREFUSED 127.0.0.1:27017 in Mongodb Compass but I am not entirely happy with it.I really closely follow the documentation https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-os-x/#installing-mongodb-6.0-edition-edition and https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-os-x/#run-mongodb-community-edition but in spite of that when I tryI obtain the deceptively optimistic messagewhile in fact running mongosh givesAlthough I do believe the solution exists, it is quite intimidating that it is not reflected in the docs. Currently, I am only able to run mongodb locally as a superuser…", "username": "Jiri_Jahn" }, { "code": "❯ brew services start [email protected]\n\n==> Successfully started `mongodb-community` (label: homebrew.mxcl.mongodb-community)\n❯ mongosh\nCurrent Mongosh Log ID:\t63f2912b4d7d8dffb21c4f96\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.7.1\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017```", "text": "I have the same problem", "username": "Walid_MAROUF" }, { "code": "", "text": "The error ECONNREFUSED means mongodb is not listening at the given address and port. The fact that mongodb has been started correctly does not mean that it listens to the given address and port.It is also possible that your firewall settings are preventing mongosh/Compass to connect.The configuration file will tell you which address and port are used by mongod.The commands ss or netstat can also be used to determine on which address/port mongod is listening.", "username": "steevej" }, { "code": "/opt/homebrew/etc/mongod.confsystemLog:\n destination: file\n path: /opt/homebrew/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /opt/homebrew/var/mongodb\nnet:\n bindIp: 127.0.0.1, ::1\n ipv6: true\n27017bindIpmongo.logmongoshmongodb{\"t\":{\"$date\":\"2023-03-02T12:15:44.509+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.514+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.518+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.544+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.549+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered 
PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.549+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.549+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.549+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.550+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":69920,\"port\":27017,\"dbPath\":\"/opt/homebrew/var/mongodb\",\"architecture\":\"64-bit\",\"host\":\"Jiri-MacBook-Air-2.local\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.550+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.550+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.550+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.4\",\"gitVersion\":\"44ff59461c1353638a71e710f385a566bcd2f547\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.550+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"22.3.0\"}}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.550+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/opt/homebrew/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1, ::1\",\"ipv6\":true},\"storage\":{\"dbPath\":\"/opt/homebrew/var/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/opt/homebrew/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.552+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.552+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid 
argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.553+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/opt/homebrew/var/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:44.553+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7680M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.203+01:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1677755745:202842][69920:0x1f00cc140], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.204+01:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1677755745:204727][69920:0x1f00cc140], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.204+01:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1677755745:204945][69920:0x1f00cc140], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.204+01:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.204+01:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"13: Permission denied\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.205+01:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":708}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.205+01:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "Thank you for your reply. 
So let’s try to figure out what’s going on here:Do I get it right that the problem could be in the fact that the port 27017 is not explicitly specified at the bindIp key?For the sake of completeness, I had a glimpse at the mongo.log file mentioned in the config, and this what happens when I try to run the mongosh command after starting the mongodb as a service:Is it helpful somehow? Thank you", "username": "Jiri_Jahn" }, { "code": "{\"t\":{\"$date\":\"2023-03-02T12:15:45.203+01:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1677755745:202842][69920:0x1f00cc140], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.204+01:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1677755745:204727][69920:0x1f00cc140], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n{\"t\":{\"$date\":\"2023-03-02T12:15:45.204+01:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":13,\"message\":\"[1677755745:204945][69920:0x1f00cc140], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: int __posix_open_file(WT_FILE_SYSTEM *, WT_SESSION *, const char *, WT_FS_OPEN_FILE_TYPE, uint32_t, WT_FILE_HANDLE **), 805: /opt/homebrew/var/mongodb/WiredTiger.turtle: handle-open: open: Permission denied\"}}\n", "text": "Is it helpful somehow?Very useful indeed.You have to fix the following file permission errors. I do not know how to do that on iOS.", "username": "steevej" }, { "code": "WiredTiger.turtlesuperusermongodb", "text": "@steevej You were right, I had to change the permission for WiredTiger.turtle and several other files. This state was probably a relict of the time when I started the process as superuser.Now I am a better mongodb user, thank you ", "username": "Jiri_Jahn" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connection to mongodb://127.0.0.1:27017 refused
2023-02-18T16:19:00.476Z
Connection to mongodb://127.0.0.1:27017 refused
14,641
null
[ "aggregation" ]
[ { "code": "", "text": "I would like to know whether is there a way to perform a lookup findind just one document, and stop the lookup when it’s found.It’s something similar to findOne() in mongo, or cross apply in some SQL engines.It’s quite usefull to speed up some executions in some scenarios.The option $lookup/$limit: 1 wouldn’t be a replace for what I ask for, event this should return a document instead of an array like $lookup does.Thank you very much.", "username": "Jaime_de_Roque" }, { "code": "", "text": "I think that $lookup with a $limit 1 sub-pipeline followed by a stage like{ “$set” : { single_document : { “$arrayElemAt” : [ “$lookup_result_with_limit_one” , 0 ] } } }should be close to what you want.I am pretty sure that a sub-pipeline with a $limit stops the $lookup as soon as the count is reached so the performance should be on par to any single (invented name) $findOne stage.", "username": "steevej" }, { "code": "", "text": "Thank you very much for your answer Steeve.Indeed the option $lookup/$limit: 1 is missleading, because I want a limit 1 just for the lookup, and not for the whole aggregation pipeline. I don’t know why well, but when I use sub-pipelines in the extended $lookup, the performance is reduced dramatically.I have faced this situation several times and I have always to do some kind of hack to get the data…", "username": "Jaime_de_Roque" }, { "code": "", "text": "To help you further on this you will need to share1 - some sample documents from all collections involved\n2 - the indexes of all collections involved\n3 - the complete pipeline that you uses", "username": "steevej" } ]
Is there a way to make $lookup behave like findOne?
2023-02-27T10:28:43.830Z
Is there a way to make $lookup behave like findOne?
694
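A sketch of the sub-pipeline suggestion from the thread, using invented collection and field names (orders/customers): the $limit 1 bounds the lookup, and $arrayElemAt unwraps the one-element array into a plain sub-document.

db.orders.aggregate([
  { $lookup: {
      from: "customers",
      let: { custId: "$customer_id" },
      pipeline: [
        { $match: { $expr: { $eq: ["$_id", "$$custId"] } } },
        { $limit: 1 } // stop as soon as one matching document is found
      ],
      as: "customer"
  } },
  // Turn the one-element array into a document (field is missing if there was no match)
  { $set: { customer: { $arrayElemAt: ["$customer", 0] } } }
])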
null
[ "mongodb-shell", "storage" ]
[ { "code": "× mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Thu 2023-03-02 09:46:38 UTC; 12min ago\n Docs: https://docs.mongodb.org/manual\n Process: 2618 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=14)\n Main PID: 2618 (code=exited, status=14)\n CPU: 670ms\n\nMar 02 09:46:38 graylog systemd[1]: Started MongoDB Database Server.\nMar 02 09:46:38 graylog systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nMar 02 09:46:38 graylog systemd[1]: mongod.service: Failed with result 'exit-code'.\nroot@graylog:~# mongod.service: Main process exited, code=exited, status=14/n/a^C\nroot@graylog:~# mongo\nMongoDB shell version v5.0.15\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.244+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.248+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.248+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInte rnalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.248+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.248+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQue ueSize.\"}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.259+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.259+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"ns\":\"config.tenantMigrationDono rs\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.259+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"ns\":\"config.tenantMigration Recipients\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.259+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.259+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":2618,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"grayl og\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.259+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.15\",\"gitVersion\":\"935639beed3d0c19c2551c93854b831107c0b118\",\"openSS LVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.259+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.259+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":2701 7},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.260+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.260+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/ core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.260+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3462M,session_max=33000,eviction=(threads_min=4,threads_max =4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_mini mum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.871+00:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":4671205, \"ctx\":\"initandlisten\",\"msg\":\"This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.\"}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.871+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":4671205,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":6 53}}\n{\"t\":{\"$date\":\"2023-03-02T09:46:38.871+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\n", "text": "Hello,\nI was doing the graylog upgrade but before that mongodb upgrade was also needed or followed the guide but now mongodb doesn’t turn on anymore it gives me this error status 14/n/ahow to fix this?", "username": "Lisi_lis" }, { "code": "{\"t\":{\"$date\":\"2023-03-02T09:46:38.871+00:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":4671205, \"ctx\":\"initandlisten\",\"msg\":\"This version of MongoDB is too recent to start up on the existing data files. Try MongoDB 4.2 or earlier.\"}\n", "text": "how to fix this?By doing what the log messages are telling you.", "username": "steevej" } ]
Not able to start MongoDB mongod.service: Main process exited, code=exited, status=14/n/a
2023-03-02T10:04:27.387Z
Not able to start MongoDB mongod.service: Main process exited, code=exited, status=14/n/a
4,255
null
[]
[ { "code": "", "text": "Hello,I would like to know if there is a best practice minimum value for the oplog window?\nI know a warning is triggered below 1 hour.\nAnd I’m sure the value should different based on what everybody is trying to achieve, but I’m honestly clueless if a 24h window is too much or not.", "username": "Jerome_Pasquier" }, { "code": "", "text": "I don’t quite get you are exactly asking for.The oplog time window is time diff between oldest and newest entry. And this depend on the oplog size and write traffic of your service. There’s no configuration for this.Are you asking about optimal oplog size setting??", "username": "Kobe_W" }, { "code": "mongodoplogSizeMBreplSetResizeOplogreplSetResizeOplogmongodmongod", "text": "Hello @Jerome_Pasquier ,Welcome back to The MongoDB Community Forums! As per the documentation on Oplog SizeIn most cases, the default oplog size is sufficient. For example, if an oplog is 5% of free disk space and fills up in 24 hours of operations, then secondaries can stop copying entries from the oplog for up to 24 hours without becoming too stale to continue replicating. However, most replica sets have much lower operation volumes, and their oplogs can hold much higher numbers of operations.Before mongod creates an oplog, you can specify its size with the oplogSizeMB option. Once you have started a replica set member for the first time, use the replSetResizeOplog administrative command to change the oplog size. replSetResizeOplog enables you to resize the oplog dynamically without restarting the mongod process.New in version 4.4: Starting in MongoDB 4.4, you can specify the minimum number of hours to preserve an oplog entry. The mongod only truncates an oplog entry if:By default MongoDB does not set a minimum oplog retention period and automatically truncates the oplog starting with the oldest entries to maintain the configured maximum oplog size.See Minimum Oplog Retention Period for more information.Of course, the more oplog window you can have the better. However, anecdotally, I would say that a good value is perhaps something that can cover the weekend, in case something happened to your cluster during the weekend period. Realistically, it should be able to comfortably cover any planned maintenance window length you’re planning to have.I know a warning is triggered below 1 hour.I would recommend you to kindly go through below mentioned documentation and thread to learn about Alert conditions, Common Triggers and Fix/Solution for Oplog Issues.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Thank you for your reply @Tarun_GaurI have indeed read the documentation before posting.I was just surprised to see that upon autoscaling to a M30 instance (because we had a surge of new users) the oplog window kept shrinking instead of autoscaling with the instance.\nBefore instance upgrade (and the wave of new users), we were on a the minimum instance (M10 I believe?) and we had around 3 days of oplog window.Our M30 instance has currently 34Go disk free, but the oplog window doesn’t seem to be increasing. 
It’s like the oplog size doesn’t automatically change and is still at the same initial default value.\nThat’s why I wanted to set a value myself but I needed guidance on best practice.Currently and according to real time metrics of Atlas, we have around 90Mo/h written, so that ~2Go/day.\nHaving 34Go of free disk, it’s not an issue at all.However, if I specify 2 days of oplog window, that means 4Go of available logs, but what happens if the disk has less than 4Go? Do I receive a warning? Does the server crash or maybe oplog will automatically shrink to the remaining disk space?Best,\nJerome", "username": "Jerome_Pasquier" }, { "code": "", "text": "what happens if the disk has less than 4Go? Do I receive a warning? Does the server crash or maybe oplog will automatically shrink to the remaining disk space?Replication Oplog alerts can be triggered when the amount of oplog data generated on a primary cluster member is larger than the cluster’s configured oplog size. You can configure the following alert conditions in the project-level alert settings page to trigger alerts.Please go through below link, as it answers your specific questions and also provide solutions for such issues.", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Oplog window best practice value?
2023-02-27T15:12:36.777Z
Oplog window best practice value?
1,623
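For reference, on a self-managed replica set the settings discussed above map to the following mongosh commands (the 4 GB size and 48-hour retention are just the figures from the thread, not a recommendation); on Atlas the oplog size and minimum window are normally changed through the cluster's configuration settings rather than by running the command directly.

// Resize the oplog of the current member to ~4 GB (the size argument is in MB)
db.adminCommand({ replSetResizeOplog: 1, size: 4096 })

// Additionally ask to retain at least 48 hours of oplog (MongoDB 4.4+), disk space permitting
db.adminCommand({ replSetResizeOplog: 1, size: 4096, minRetentionHours: 48 })

// Inspect the resulting oplog window
rs.printReplicationInfo()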
https://www.mongodb.com/…_2_825x1024.jpeg
[ "server" ]
[ { "code": "", "text": "Hello,\ngot the problem with launching mongod\n\nScreenshot 2023-03-02 at 12.53.441174×1456 431 KB\n\nshows me this.\nWhat is the problem?", "username": "guri_shubitidze" }, { "code": "", "text": "Your mongod is already up & running and you tried to start mongod again.Thats why the second run failed with address in useYour mongosh prompt shows you successfully connected to your mongodb\nReplace the file as suggested.Its an environment related file", "username": "Ramachandra_Tummala" } ]
2023-03-02T11:58:14.078Z
989
null
[ "atlas-search" ]
[ { "code": "", "text": "I was doing the Lab: Creating a Search Index with Dynamic Field Mapping where I am supposed to register with the given token. I did that and returned tot he CLI and it asked me to select an organization and project. I did that and selected the mongodb university one (which I had created for this courses). It keeps failing no matter what I did. Then I created a new one, with the exact cluster name, user, password and access that the questions tells me and then chose that one when prompted after registering with the token. Yet, it keep failing. I have no idea what to do. there is no hints or anything. Please help.", "username": "Sagar_Subedi" }, { "code": "", "text": "not everyone has taken that course nor remembers where the page you mention is.\nso please give the link to that lab so we can find which one it is without wandering tons of pages of MDBU content.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "eating a Search Index with Dynamic Field MappingSorry about that, here is the link: MongoDB Courses and Trainings | MongoDB University\nThanks", "username": "Sagar_Subedi" }, { "code": "", "text": "Hi @Satyam, Can you please ping a “learn” team member? I tried a few things and neither I could not pass this step. so, either the instructions are missing a crucial part, or there is a bug in validation.also here are some small problems I noticed:", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Also having this issue. I had to create a new Project and new cluster with their given info (took me WAY too long to find out I wasn’t doing this part incorrectly), but this only changed the error to: “FAIL:”. However, it’s still not allowing the lab to pass. If you go into the settings of the cluster it automatically has the cluster marked as readAndWrite enabled for their default account, so there’s no issues there.Unfortunately, they don’t allow you to skip this one… So, we’ll have to wait until it’s fixed.The instructions for this lesson in particular are very unclear.", "username": "Andrew_Dutson" }, { "code": "", "text": "Also having this issue after recreate the cluster as they ask for (as Andrew_Dutson).\nI have written to [email protected] and I am waiting for the replay.\nIf anyone knows how to fix it, please post it here…", "username": "macabea" }, { "code": "", "text": "Found the solution:You CANNOT have more than one project on the account you’re using. They just added a new error message that clarifies this now.I just created a new account, if you only have the account that you’re learning on, then you can simply delete your current cluster and then project and start over as well.Then you create a new project and cluster with the information given in the lab.Add the sample data set like they instructed in unit 2.Once this is done you authenticate this NEW account with their cluster information and it passes.@macabea @Sagar_Subedi", "username": "Andrew_Dutson" }, { "code": "", "text": "I appreciate the effort of the fixer as the new error messages are better, but absolutely not a solution to the problem at hand.So now, it tells me I have multiple organizations:Error 1: Identified multiple Atlas orgs … create a new dedicated Atlas accountI got 2 and then I tried to move MDBU projects into the other so I would have a single organization. Now, the error shows another face:Error 2: Identified multiple Atlas projects … create a new dedicated Atlas accountI would accept going with a single organization, but this is something else. 
It basically asks you to have only, and only, 1 organization and 1 project in it for your whole MDBU experience.The proposed solution of opening a new account, which requires a new e-mail address, along with the implied solution of deleting all other projects for “learn” users, is simply NOT acceptable. This unit, and similar ones requiring the same, will be my first ones to completely abandon.Hi again @Satyam , I am sorry if I am giving headache Correct me if I am wrong, but in my opinion, this behavior is against the spirit of MongoDB I have experienced past few years.I am hoping this situation is just “temporary” and a better fix is on the way.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hey @Yilmaz_Durmaz,Thanks for bringing this to our attention. Will take this up with the concerned team. Since @macabea has already mailed the issue to [email protected], I’m sure someone from the University Team would have already started looking into it. Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Thanks @Yilmaz_Durmaz for your feedback, we’ve implemented a prototype with less rigorous checking/requirements that we hope to deploy to the existing labs this week. Unfortunately, we will still have some hard requirements as without these we cannot provide the seamless learning environment experience as it depends on these requirements to provide the underpinning automation in our lab infrastructure.It is still our belief and recommendation that you create a separate account for your learnings as it helps ensure there is no possible overlap with existing production or development environments. I realize this may not be the case for many learners but for that subset, we feel it is particularly important to provide guards/rails to avoid any accidental issue/command impacting these important environments. If in the future, these looser restrictions in terms of the lab checking do cause issues with such environments, we equally will revisit the restrictions.I do thank you for your input and hope you appreciate that the team is balancing many different aspects as we implement our new labs.Kindest regards,\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "Thank you @Eoin_Brazil, for the heads up. “learn” is already a great place with you guys behind it. And as a learner of the “old” MDBU, I love testing it (while also learning new things).My suggestion here would be, instead of a new account (and single organization/project), to force users to have an organization with a predefined name, such as “MDBU Learning”, and have them create project resources in it with, again, predefined names for the sake of lab validations (just like you do with cluster name and username/password). And not just for a few labs, but all over the “learn” experience.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you very much.\nI had two projects. 
I removed the one that didn’t match the instructions of this lab, and It passed the checks (or may be it was because of the changes made by Eoin_Brazil).\nAnyway, thank you for your help.", "username": "macabea" }, { "code": "", "text": "I am getting this error:\nMongoServerSelectionError: Server selection timed out after 30000 msThis is after pasting in:\nmongosh -u myAtlasDBUser -p myatlas-001 $MY_ATLAS_CONNECTION_STRING/sample_suppliesMind you, the myAtlasClusterEDU is the only cluster I have under project 0 and I authenticated well, as well as passed the first 2 labs of this lesson", "username": "Sifiso_Lucolo_Dhlamini" }, { "code": "Add Current IP addressAllow access from anywhere", "text": "Hi @Sifiso_Lucolo_Dhlamini,Welcome to the MongoDB Community forums I suspect you might not have whitelisted the IP address of your client connections to the MongoDB Atlas deployment.To do so:Go to cloud.mongodb.com and follow these steps:And then try again to connect to the MongoDB Lab. It will work as expected.Let us know if the issue still persists.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you KushagraI had already tried to whitelist for ‘anywhere’ since I had whitelisted my device at the beginning. It did not resolve my problem though. I do however suspect that it had something to do with the different accounts I had open on the same browser (not certain oif that was the poroblem).At the end of the day, I left it and went to bed. The following day, I decided to open the Atlas account for the lab in an incognito browser (by copying the link and pasting it) and everything went through well. But like I said, I am not sure if that was the problem.Thank you kindly for your response. It is very encouraging to see the support on the platform. I am here to stay.RegartdsSifiso a.k.a Scifi", "username": "Sifiso_Lucolo_Dhlamini" }, { "code": "", "text": "Hi @Sifiso_Lucolo_Dhlamini,open the Atlas account for the lab in an incognito browser (by copying the link and pasting it) and everything went through well. But like I said, I am not sure if that was the problem.I suspect it might be due to the browser cache or cookies. Clearing these may resolve the issue. You can also try logging out of MDBU and logging back in after clearing your cache and cookies.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search Practice Lab keeps failing
2023-01-28T23:26:00.436Z
Atlas Search Practice Lab keeps failing
2,408
null
[ "node-js", "serverless", "field-encryption" ]
[ { "code": "", "text": "Hi,so I was able to write encrypted fields with gcp kms to my serverless instance. I did this without shared lib on macos (m1) but with starting mongocryptd from enterprise bins.So now from what I’ve read, the same thing should be possible without mongocryptd using the shared lib? So in extraOptions.cryptSharedLibPath I’ve passed the downloaded mac shared lib (https://www.mongodb.com/docs/manual/core/queryable-encryption/reference/shared-library/#download-the-automatic-encryption-shared-library), I’ve also tried libmongocrypt (GitHub - mongodb/libmongocrypt: Required C library for Client Side and Queryable Encryption in MongoDB) using homebrew but I always get the error that the connection is refused on port 27020.\nI’m using nodejs 16 with mongodb 4.9.0 client.Thanks\nTom", "username": "Tom_Kremer" }, { "code": "const extraOptions = {\n mongocryptdBypassSpawn: true,\n cryptSharedLibPath: \"full path to mongo_crypt_v1.dylib\",\n cryptSharedLibRequired: true,\n};\n\nconst secureClient = new MongoClient(connectionString, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n monitorCommands: true,\n autoEncryption: {\n keyVaultNamespace,\n kmsProviders,\n encryptedFieldsMap: patientSchema,\n extraOptions: extraOptions,\n },\n});\nMongoError: `cryptSharedLibRequired` set but no crypt_shared library loaded\n", "text": "Update: Updated client config, now shared library cannot be loaded.So I had a wrong setting within extraOptions. Here is my current config:Now I get always the following error:", "username": "Tom_Kremer" }, { "code": "", "text": "Hello Tom and welcome,You are correct that the Shared Library can be used in place of mongocryptd. Libmongocrypt is a driver component that does the cryptographic operations and isn’t related to mongocryptd or the Shared Library. There are langauge specific examples here of how to specify the location of the shared library - https://www.mongodb.com/docs/manual/core/queryable-encryption/quick-start/#specify-the-location-of-the-automatic-encryption-shared-library. Please note that this documentation is for Queryable Encryption but the code snippet there should apply to CSFLE as well. Since you aren’t using mongocryptd you shouldn’t need the mongocryptdBypassSpawn.Cynthia", "username": "Cynthia_Braund" }, { "code": "cryptSharedRequired cryptSharedLibRequired ", "text": "Thanks for the reply. I removed mongocryptdBypassSpawn.I followed that guide but on macos m1 with nodejs 16, I don’t get it to work. Could it be macos security not allowing to load the dylib? I had a similar issue with mongocryptd when I launched manually for the first time.One error I found in the docs here https://www.mongodb.com/docs/manual/core/queryable-encryption/reference/shared-library/#std-label-qe-reference-shared-library is that cryptSharedRequired should be cryptSharedLibRequired as described in the specs: https://github.com/mongodb/specifications/blob/master/source/client-side-encryption/client-side-encryption.rst#extraoptions-cryptsharedlibrequired", "username": "Tom_Kremer" }, { "code": "", "text": "Thank you for finding that error. I will get it fixed! Were you able to get it working?", "username": "Cynthia_Braund" }, { "code": "", "text": "Hi,yes I got it fixed. I had my path variable name incorrect \nThen I got it to work but since I need this within a nodejs debian 10 docker, I moved away from automatic to explicit encryption since the shared lib is not available there (yet?).Best,\nTom", "username": "Tom_Kremer" } ]
Automatic CSFLE using shared crypt not working
2023-02-23T20:15:01.129Z
Automatic CSFLE using shared crypt not working
1,455
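For completeness, a trimmed-down Node.js sketch of the configuration that ended up working in the thread above; reading the library path from an environment variable is just one way to avoid the path-variable mix-up mentioned at the end (the variable name is an assumption, and connectionString, keyVaultNamespace, kmsProviders and encryptedFieldsMap are assumed to be defined as in the earlier snippet).

const { MongoClient } = require("mongodb");

// e.g. CRYPT_SHARED_LIB_PATH=/full/path/to/mongo_crypt_v1.dylib
const cryptSharedLibPath = process.env.CRYPT_SHARED_LIB_PATH;

const secureClient = new MongoClient(connectionString, {
  autoEncryption: {
    keyVaultNamespace,
    kmsProviders,
    encryptedFieldsMap,
    extraOptions: {
      cryptSharedLibPath,           // note the option name: cryptSharedLibPath
      cryptSharedLibRequired: true, // fail fast if the shared library cannot be loaded
    },
  },
});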
https://www.mongodb.com/…a_2_1024x508.png
[ "cxx" ]
[ { "code": "", "text": "Hello! I have a quick question just to make sure I’m not actually missing something.\nI have just upgraded to version 3.8-pre (master) from 3.6. Before, I could get a string element like “element.get_utf8().value.to_string()”. Now get_utf8() is deprecated and the method get_string() gives the following error.\nimage1473×732 118 KB\n\nimg_array is of type bsoncxx::v_noabi::array::view\nAlso there is no to_string method anymore.From bsoncxx::document::view this conversion is implicit and seems to work.As said, is this normal?", "username": "Arreme_N_A" }, { "code": "\n \n #include <mongocxx/pool.hpp>\n #include <mongocxx/uri.hpp>\n \n \nnamespace {\n \n \nstd::string get_server_version(const mongocxx::client& client) {\n bsoncxx::builder::basic::document server_status{};\n server_status.append(bsoncxx::builder::basic::kvp(\"serverStatus\", 1));\n bsoncxx::document::value output = client[\"test\"].run_command(server_status.extract());\n \n \n return bsoncxx::string::to_string(output.view()[\"version\"].get_string().value);\n }\n \n \nvoid watch_until(const mongocxx::client& client,\n const std::chrono::time_point<std::chrono::system_clock> end) {\n mongocxx::options::change_stream options;\n // Wait up to 1 second before polling again.\n const std::chrono::milliseconds await_time{1000};\n options.max_await_time(await_time);\n \n \n auto collection = client[\"db\"][\"coll\"];\n \n ", "text": "Hi @Arreme_N_A,You can refer to the example shown here for this conversion", "username": "Rishabh_Bisht" } ]
No implicit conversion from b_string to string on arrays?
2023-02-25T17:07:14.288Z
No implicit conversion from b_string to string on arrays?
1,249
null
[ "node-js", "crud" ]
[ { "code": " const passwordHash = password == \"\" ? null : await bcrypt.hash(password, 10);\n let result = await accounts.updateOne(\n { _id: ObjectId(accountID) }, [{\n $set: {\n username: \"name\",\n passwordHash: { $ifNull: [passwordHash, \"$passwordHash\"] } }\n } ] );\n", "text": "Hi,\nI’ve recently made the switch from a self-hosted to a M0 cloud-hosted server, try before you buy to see if it is worth it for me kinda thing, and while doing some tests I have a problem with $ifNull.simplified codebut each time I try and update the account the passwordHash never changes, all the other fields do but not the passwordHash.I have been trying to debug this for a few hours now and all the documentation I read is telling me it should work. I don’t remember this being a problem when I was self-hosting the server.I don’t get any errors returned.\nIs there anything I should do/check?", "username": "William_Brown" }, { "code": "$ifNullpasswordHash$db.inventory.aggregate(\n [\n {\n $project: {\n item: 1,\n description: { $ifNull: [ \"$description\", \"Unspecified\" ] }\n }\n }\n ]\n)\n$ifNulldescription\"Unspecified\"description$ifNull", "text": "Hi @William_Brown,Welcome to the MongoDB Community forums Based on your code, it seems like you’re trying to update a document using the $ifNull operator to set the passwordHash field if it is null or missing in the document.passwordHash: { $ifNull: [passwordHash, “$passwordHash”] } }The syntax appears to be slightly incorrect. To ensure accuracy, please refer to the documentation for $ifNull. The correct format requires the field name to be listed first with a $ symbol, followed by the value you wish to use for updating. Here is the query snippet for reference:Here example uses $ifNull to return the description if it is non-null and set it to \"Unspecified\" string if description is null or missing.Meanwhile also note that, in MongoDB 4.4 and earlier versions, $ifNull only accepts a single input expression. Now in MongoDB 5.0 onwards it accepts multiple input expressions.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" } ]
$ifNull not working as expected, any help?
2023-01-23T14:23:33.626Z
$ifNull not working as expected, any help?
934
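One way to sidestep the $ifNull question in the thread above is to build the update document conditionally in application code and only touch passwordHash when a new hash exists. This is a sketch of an alternative, not the thread's accepted answer; field and variable names follow the original post.

const update = { $set: { username: "name" } };
if (password !== "") {
  // Only include the field when there is actually a new password to store
  update.$set.passwordHash = await bcrypt.hash(password, 10);
}
await accounts.updateOne({ _id: new ObjectId(accountID) }, update);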
null
[ "node-js" ]
[ { "code": "", "text": "I’m baffled that if there any errors occur in the middle of a crud operation in Nodejs. What will return???", "username": "Nghiem_Gia_B_o" }, { "code": "try-catcherror middleware", "text": "Hi @Nghiem_Gia_B_o,Welcome to the MongoDB Community forums If an error occurs in the middle of a CRUD operation in Node.js, the function or method that was called will typically throw an error or return an error object.In the case where the document contains {acknowledge: false}, it is possible that this could be interpreted as a successful operation, depending on the specific context and requirements of the application. However, if there was a real error during the operation, the error object returned will typically contain information about the error, such as a message describing the error and a status code indicating the type of error.We need to handle errors appropriately to prevent unexpected behavior or crashes in the application. This can be done by implementing error-handling mechanisms, such as try-catch blocks, error middleware functions, or other error-handling libraries, to handle errors gracefully and provide meaningful responses to users.I hope this answers your questions! Feel free to reach out if you have any further questions.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
If an error occurs in MongoDB CRUD operations, will it return an error in Node.js?
2023-03-02T04:22:55.208Z
If an error occurs in MongoDB CRUD operations, will it return an error in Node.js?
614
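To make the answer above concrete, a minimal Node.js sketch (database and collection names are placeholders): a write either resolves with a result whose acknowledged flag can be inspected, or rejects with an error that can be caught.

const { MongoClient } = require("mongodb");

async function insertUser(uri, user) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const result = await client.db("app").collection("users").insertOne(user);
    if (!result.acknowledged) {
      // e.g. the write was sent with writeConcern { w: 0 }, so the server did not confirm it
      console.warn("Write was not acknowledged by the server");
    }
    return result.insertedId;
  } catch (err) {
    // Driver and server errors (duplicate key, network failure, validation, ...) land here
    console.error("Insert failed:", err.message);
    throw err;
  } finally {
    await client.close();
  }
}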