image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | []
| [
{
"code": "",
"text": "Hello,Is the possibility of creating a dedicated search cluster on the roadmap? Or, being able to manually specify the percentage of memory and CPU resources that search should use?Thank you.",
"username": "Dima"
},
{
"code": "",
"text": "Hi Dima, we cannot specify what percentage Atlas Search should use at this time and we probably would never make this type of configuration available. It would actually be more risky than it might appear at first glance because every search query is a mongod query.We do have dedicated infrastructure on the roadmap, but that would only be recommended or cost effective for very large and demanding use cases. It would also likely incur a latency penalty.Do you have particular concerns about resource consumption in the present architecture?",
"username": "Marcus"
},
{
"code": "",
"text": "Hello Marcus,Thank you for your reply and the explanation.The concern is that we have dedicated search clusters that will do nothing but search, but, as of right now, there is no way to configure these clusters to optimally use resources in this scenario.Thank you.",
"username": "Dima"
},
{
"code": "",
"text": "I see, are you up for a meeting with someone from the team. I’m sure @Ruchir_Mehta @Elle_Shwer or @amyjian would love to interview you to better understand your use case. Can you send me a message?",
"username": "Marcus"
}
]
| Create dedicated search cluster? | 2022-11-24T03:26:27.581Z | Create dedicated search cluster? | 1,568 |
[
"sharding",
"containers",
"time-series"
]
| [
{
"code": "sh.shardCollection(\n \"test.weather\",\n { \"metadata.sensorId\": 1 },\n {\n timeseries: {\n timeField: \"timestamp\",\n metaField: \"metadata\",\n granularity: \"hours\"\n }\n }\n)\n",
"text": "Hi,\ni’m using a docker container with mongodb image 6.0.2 (Docker hub)\ni configured successfull the sharding with 2 shards, and 3 replicas per shard.The sharding is working properly on the normal collections, but when i try to shard a timeseries collections, i got this error:\n\nimage755×29 4.12 KB\nthe command i’m using is the following:How can i solve this problem? looking at the documentation, this feature is avaiable since 5.1.Thanks",
"username": "Andrei_Goncear"
},
{
"code": "db.version()\nfeatureCompatibilityVersiondb.adminCommand({\n\tgetParameter: 1,\n\tfeatureCompatibilityVersion: 1\n})\n",
"text": "Welcome to the MongoDB Community @Andrei_Goncear !Was this a fresh installation of MongoDB server or were you upgrading from an earlier version?Can you check the version of the MongoDB deployment you are connected to using the MongoDB shell:… and the featureCompatibilityVersion (fCV):As you noted, Sharding a Time Series Collection is supported in MongoDB 5.1+. Both of the above commands should report a 6.0 server version.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "featureCompatibilityVersiondb.adminCommand( { setFeatureCompatibilityVersion: \"6.0\" } )\n",
"text": "featureCompatibilityVersionThank you! This was the solution:\nhas worked like a charm!\nThank you a lot",
"username": "Andrei_Goncear"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongodb 6.0.2 - Time Series sharding | 2022-11-24T10:49:06.097Z | Mongodb 6.0.2 - Time Series sharding | 2,121 |
|
null | [
"aggregation",
"atlas-search"
]
| [
{
"code": "[{\n $search: {\n compound: {\n must: [\n {\n wildcard: {\n query: '*',\n path: [\n 'fifa_id',\n 'ussf_id',\n 'email',\n 'competitions.name'\n ],\n allowAnalyzedField: true\n }\n },\n {\n compound: {\n mustNot: [\n {\n exists: {\n path: 'deleted_at'\n }\n }\n ]\n }\n }\n ]\n }\n }\n}, {\n $sort: {\n name_first: -1\n }\n}]\n",
"text": "",
"username": "Mohit_kumar_Sharma"
},
{
"code": "",
"text": "There are some examples here: https://www.mongodb.com/docs/atlas/atlas-search/tutorial/sort-tutorial/Though this is quite similar to your own query. Did you have any issues?",
"username": "Elle_Shwer"
},
{
"code": "[{\n $search: {\n compound: {\n must: [\n {\n wildcard: {\n query: '*',\n path: [\n 'a_id',\n 'sf_id',\n 'email',\n 'tutions.name'\n ],\n allowAnalyzedField: true\n }\n },\n {\n compound: {\n mustNot: [\n {\n exists: {\n path: 'deleted_at'\n }\n }\n ]\n }\n }\n ]\n }\n }\n}, {\n $sort: {\n name_first: 1\n }\n}]\n[{\n $search: {\n compound: {\n must: [\n {\n wildcard: {\n query: '*com*',\n path: [\n 'a_id',\n 'sf_id',\n 'email',\n 'tutions.name'\n ],\n allowAnalyzedField: true\n }\n },\n {\n compound: {\n must: [\n {\n text: {\n query: 'male',\n path: 'gender'\n }\n }\n ]\n }\n },\n {\n wildcard: {\n path: 'name_first',\n query: '*mo*',\n allowAnalyzedField: true\n }\n },\n {\n wildcard: {\n path: 'name_last',\n query: '*sh*',\n allowAnalyzedField: true\n }\n },\n {\n compound: {\n mustNot: [\n {\n exists: {\n path: 'deleted_at'\n }\n }\n ]\n }\n }\n ]\n }\n }\n}, {\n $sort: {\n name_first: 1\n }\n}]\n",
"text": "I read your provide article and found 2 waysBut As described before my sorting perform on alphabetical order. So I used $sort But the issue is $sort work too slow on large dataset for example below:MyDataSetCount Approx: 2 LacsOther Qyery format :",
"username": "Mohit_kumar_Sharma"
},
{
"code": "",
"text": "Did you try it with Stored Source? https://www.mongodb.com/docs/atlas/atlas-search/stored-source-definition/",
"username": "Elle_Shwer"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true\n },\n \"storedSource\": {\n \"include\": [\n \"name_first\"\n ]\n }\n}\n",
"text": "Hi Elle,Yes I tried Source in our indexing like below but result same (too slow):",
"username": "Mohit_kumar_Sharma"
},
{
"code": "",
"text": "Okay, we are working on a solution for this. You can follow progress here. We hope to have something soon!",
"username": "Elle_Shwer"
}
]
| I am using the “wildcard” operator and our clients want the results to be sorted alphabetically. Would you have an example? | 2022-11-26T05:19:47.664Z | I am using the “wildcard” operator and our clients want the results to be sorted alphabetically. Would you have an example? | 1,766 |
null | []
| [
{
"code": "",
"text": "I am using Atlas mongodb for my application, however my application needs to be working while no internet situation also. So, I am planning to have a local mongo server which syncs with Atlas mongo and works during no internet situation. Any idea, how I can do that?",
"username": "sobin_asir"
},
{
"code": "",
"text": "Realm, or now App Services",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "thanks, I am really new to mongodb, it is really appreciated if you can share any documents. Also, my application is .net core and can’t use realm.",
"username": "sobin_asir"
},
{
"code": "netstandard2.0net6.0",
"text": "Have you checked this? Realm .NET SDK — Realm (mongodb.com)latest version supports netstandard2.0 and net6.0. you need to upgrade parts of your app if it won’t compile.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks for the reply. I like to explain my situation. My frontend(raect) & backend(.net core) both hosted in kubernetes (Azure) cluster and for the d/b it uses Atlas Mongo. User uses the browser to connect the application as foo.boo.com. Since it is a web based, the internet is mandatory, but in some rare cases the end user goes offline; I like to cover ‘no internet’ situation. - any idea?",
"username": "sobin_asir"
},
{
"code": "",
"text": "As far as I know, Realm is like SQLite (forgive if you don’t know that too), that is, a very lightweight version MongoDB. But I think it fits to mobile apps as well as apps outside browsers.I am not sure how MongoDB will fit in your situation, but there is this concept of “web workers”, “service workers” and “progressive web applications”. I suggest checking them out.",
"username": "Yilmaz_Durmaz"
}
]
| Atlas mongo to local mongodb | 2022-11-25T18:48:46.888Z | Atlas mongo to local mongodb | 1,580 |
null | [
"student-developer-pack"
]
| [
{
"code": "",
"text": "Hello , i got my github student pack and recieved $50 Atlas Credit after that i completed my developper path and i claimed for my coupon code for the certification but i didn’t recieve any feedback…\nany solution? thanks",
"username": "Khalil_Benkhelil"
},
{
"code": "",
"text": "Hi Khalil, welcome to the forums!Apologies for the delay on your certification voucher. Our US-based team members had a few days off to celebrate Thanksgiving. Expect a response to your application by the end of today (5:00pm EST).Please let me know if you have anymore questions!",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Thanks Aiyana , i really appreciate your feedback.",
"username": "Khalil_Benkhelil"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Didn't receive my coupon code | 2022-11-27T19:08:24.192Z | Didn’t receive my coupon code | 2,171 |
null | [
"aggregation",
"atlas-search"
]
| [
{
"code": "",
"text": "Hi,\nIs there something like a complete schema for what $search aggregation pipeline accepts? Let’s take the compound operator for example. I know what clauses can be used inside the compound operator, but i can’t find anywhere the information about what operators can be used inside the said clauses (must, must not etc).",
"username": "Eduard_Clatinici"
},
{
"code": "$search",
"text": "Hello @Eduard_Clatinici ,Welcome to The MongoDB Community Forums! Please take a look at below link to learn about the stages of $search aggregation pipeline.Learn how to perform specific types of searches on your collection and how to group your query results with the $search aggregation pipeline stage.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "ok, I saw that. But let’s take for example the embeddedDocument operator. What sub operator does it support? I searched in the entire documentation for this hoping I could find it. Thanks!",
"username": "Eduard_Clatinici"
},
{
"code": "",
"text": "As I understand it, EmbeddedDocument’s supports all the operators",
"username": "Elle_Shwer"
}
]
| $search aggregation full schema | 2022-11-25T17:17:08.182Z | $search aggregation full schema | 1,308 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "I have two collections Post (belongs to posts database) and User (belongs to account database). My requirements to do join on these two collection. But I am unable to reproduce my requirements.",
"username": "Muhammad_Umer_Farooq"
},
{
"code": "",
"text": "Hi Muhammad_umer_FarooqBy “joining” you mean running one single query over the different collections in the different databases? If yes I am afraid this is not possible directly across different databases. If the two collections are in the same database, you can use the aggregation operator $unionWith to include another collection in your current aggregation pipeline.\nOnly if you host your databases on Atlas you can add your databases to a Data Lake and then run queries across multiple clusters and databases. But if you are running an on-prem instance, this is not possible.",
"username": "Simon_Bieri"
}
]
| How can I join two collections from different databases in mongoDB? | 2022-11-24T05:21:58.603Z | How can I join two collections from different databases in mongoDB? | 6,120 |
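For the same-database case mentioned in the reply above, a $unionWith stage can pull a second collection into one aggregation. A minimal mongosh sketch follows; the `posts` and `users` collection names and the `source` tag are assumptions for illustration, not taken from the thread:

```js
// Run against the database that holds both collections.
// Each document is tagged with the collection it came from before the union.
db.posts.aggregate([
  { $set: { source: "posts" } },
  {
    $unionWith: {
      coll: "users",
      pipeline: [{ $set: { source: "users" } }]
    }
  }
])
```

Across different databases this still does not apply; that case needs the Atlas Data Lake approach described in the reply.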
null | [
"replication",
"mongodb-shell",
"containers"
]
| [
{
"code": "version: '3.1'\n\nservices:\n\n mongo:\n image: mongo\n restart: always\n # entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n ports:\n - 27017:27017 # admin\n environment:\n MONGO_INITDB_ROOT_USERNAME: admin\n MONGO_INITDB_ROOT_PASSWORD: superlongpassword\n volumes:\n - /host/mongodb:/data/db\n(in mongosh):\ntest> rs.initiate()\nMongoServerError: already initialized (PS: I had tried this before...)\ntest> rs.status()\nMongoServerError: Our replica set config is invalid or we are not a member of it.\n\n",
"text": "I have the following docker-compose file that starts my MongoDB as standalone (in PROD).My goal: convert this instance to the PRIMARY in a replica set.\nI have another instance running, freshly set up, currently doing nothing (if this works, I’ll add a 3rd).My thinking was that, stopping the instance, uncommenting the ‘entrypoint’ line and bringing the instance back up, would convert the standalone instance into a replica set instance, as described in this tutorial .\nHowever, when doing this, I get:Any hints in what I’m doing wrong?",
"username": "Sander_de_Ruiter"
},
{
"code": "",
"text": "What message you got when you ran rs.initiate() first time?\nOnce you start your mongod with replset and run rs.initiate() it should become primary\nCheck this link.I don’t know about dockerI've a Docker instance with mongo. This run as standalone configuration. Now I w…ould to change `entrypoint.sh` to run itself as replica set following [official guide standalone to replica set](http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/)\n\nAny suggestion? I've tried to manually setting up entrypoint.sh but it have crashed my container.",
"username": "Ramachandra_Tummala"
},
{
"code": "environment:\n MONGO_INITDB_ROOT_USERNAME:\n",
"text": "I wish I knew Is there a config file somewhere that stores info about whether a node is in a replica set or not? Can you revert a node in a replica set to again be a standalone version (such that, when started again enabling replica set and rs.initiate(), it will actually say it’s in a replica set)?",
"username": "Sander_de_Ruiter"
},
{
"code": "",
"text": "Mongod.conf is the configuration file where we define replica parameters but in your case whatever commands mentioned at entry point are run when you initialise your docker\nSo comment this entry and restart to get back to standalone but advice you not to do experiments on prod\nAs mentioned before I don’t know about docker setup but many docs available\nCheck this linkThis tutorial will show how to create a replica set in MongoDB, then use Docker compose to make a more straightforward setup.",
"username": "Ramachandra_Tummala"
},
{
"code": "local//run in mongosh\nuse local\ndb.dropDatabase()\ncommandversion: \"3.8\"\nservices:\n mongo-0-a:\n image: mongo:6.0\n command: --wiredTigerCacheSizeGB 0.25 --replSet s0 \n volumes:\n - mongo-0-a:/data/db\nvolumes:\n mongo-0-a:\n",
"text": "The error you are receiving is consistent with the scenario that the replSet was previously initialized and then the --replSet name changed.Once you are back to standalone you need to drop the local database to remove the previous replicaSet configuration which will allow you to try again.I set the parameters using command command in docker-compose:",
"username": "chris"
},
{
"code": "\"--replSet\", \"rs0\"rs.initiate()rs.add()rs.initiate({\n _id: \"rs0\", members: [\n { _id : 0, host : \"mongo1:27017\"},\n { _id : 1, host : \"mongo2:27017\"},\n { _id : 2, host : \"mongo3:27017\"}\n] } )\n",
"text": "\"--replSet\", \"rs0\"if you start your instances with this parameter set, no matter how many instances you have, you use rs.initiate() only once on one of them and then use rs.add() to add other members.you could also use a configuration on initialization if you knew what would their IP addresses be. it is a tedious thing to set up but not impossible (play on compose file):",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Dropping the local db and restarting with the replica set option worked for me.Next issue: I’m running version 6.0.3 and tried to add a Raspberry Pi in the mix. However, the last version that works is 4.4. rs.status() complains that:remote host has incompatible wire version: Server min and max wire version (9,9) is incompatible with client min wire version (17,17).You (client) are attempting to connect to a node (server) with a binary version with which you (client) no longer accept connections. Please upgrade the server’s binary version.Can I downgrade my 6.0.3 version to be 4.4, or is that not advisable? If not, how would you suggest I add a replica set member with 4.4?",
"username": "Sander_de_Ruiter"
},
{
"code": "",
"text": "Dropping the local db and restarting with the replica set option worked for me.Ok, I think I got why that has worked. the problem here is that you do not add some PRIMARY to your replica set. you wouldn’t want to do this every time you try to add another machine as primary.all members can be a primary depending on their priorities (or never if set otherwise) through a voting system amongst the members.as for downgrading, can you please check this answer about running version 6 on your Pi: How to install mongodb 6.0 on Ubuntu 22.04 - #4 by Yilmaz_DurmazPS: as I stated in that other post, I run version 6 on ubuntu-in-a-docker-container, and should in theory run for you too. the responsibility of breaking things is yours.",
"username": "Yilmaz_Durmaz"
}
]
| Convert standalone docker-compose version to replica set primary | 2022-11-25T14:43:14.679Z | Convert standalone docker-compose version to replica set primary | 16,521 |
null | [
"java",
"android"
]
| [
{
"code": "",
"text": "The dependency given for one official java driver throws an error when used … “Invalid build error” .How do we solve this",
"username": "Joe_Annel"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Setting Up Gradle in Android Studio | 2022-11-27T18:39:00.354Z | Setting Up Gradle in Android Studio | 1,533 |
null | []
| [
{
"code": "",
"text": "In Atlas, when I hit the ‘Triggers’ menu item on the left hand side menu, only blank page comes.\nProject id: 62cdfc0917b0ab2bdcb16697\nAm I missing a permission to manage triggers?Thanks\nStrahinja",
"username": "Strahinja"
},
{
"code": "",
"text": "is it “empty” as in “empty list” (hope it is not all-white screen)\nor you are getting “This application has no triggers” message?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I would say it is more like all-white screen than like “empty list”. There is no title, no buttons … Hope the screenshot below will help:\nimage888×879 43.7 KB\n",
"username": "Strahinja"
},
{
"code": "",
"text": "at first glance, this seems an issue from the browser.can you try clearing the browser cache and try again?by the way, I haven’t done a team project before, nor I am in a multi-personal project. I will not be able to test permission issues.so, for the benefit of the next person to join the discussion, what is given your position in this cluster management?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Clearing the browser cache did the job - thanks Yilmaz!If your question is about given roles within the project, my account has the following roles assigned:\nProject Read Only, Project Data Access Read Only, Project Data Access Read Write",
"username": "Strahinja"
},
{
"code": "",
"text": "yep, it was those roles I had in mind if the cleaning were not to work ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Atlas Triggers page is blank | 2022-11-25T19:58:09.771Z | Atlas Triggers page is blank | 1,896 |
null | [
"connector-for-bi"
]
| [
{
"code": "",
"text": "My cluster has been paused due to inactivity. I would like to resume my cluster. I found some article here\nhttps://www.mongodb.com/docs/atlas/pause-terminate-cluster/ and walked through. I installed atlas cli and logged in. After executing below command I had the below error. Unfortunately there is no more explanation and I do not know how to go around it. Could you please help me to resume my cluster.\nMy cluster name is Cluster0$ atlas clusters start Cluster0The error i got : Error: cluster update is not supported, try ‘atlas cluster upgrade’ commandI tried atlas cluster upgrade and I got below error$ atlas cluster upgrade Cluster0Error: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/627be347f6384f7cd1060b7d/clusters/tenantUpgrade: 400 (request “CLUSTER_PROVIDER_DOES_NOT_SUPPORT_BI_CONNECTOR”) The BI Connector is not supported with the specified cluster provider.Could you please help me to resume my cluster ?Ergun",
"username": "Ergun_Oz"
},
{
"code": "",
"text": "Is this your Sandbox cluster?\nIf there is no activity it will stop automatically\nYou can use Atlas UI also.Login to your Atlas account and resume it",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "yes.It is Sandbox cluster. I followed the link but got those errors. I do not know what you by mean Atlas UI and could not find enough information about Atlas UI to resume paused cluster",
"username": "Ergun_Oz"
},
{
"code": "",
"text": "Please check the article shared by you\nThere is CLI and a UI tab.Click on UI and follow the steps\nThe commands you ran are on CLI\nI am referring to your Atlas user interface",
"username": "Ramachandra_Tummala"
}
]
| How to resume paused cluster | 2022-11-27T20:14:30.892Z | How to resume paused cluster | 2,896 |
null | []
| [
{
"code": "const restaurant = await restaurants.findOne({ _id: restaurantID}).then(res => {\n console.log(res);\n })\n",
"text": "How I can compare ObjectId with string value inside my atlas function that is called from a Trigger?“_id” is an ObjectId\n“restaurantID” is a StringAlso, what is the best way to get get a value from DB?",
"username": "Ciprian_Gabor"
},
{
"code": "\"6380b9bd3b0718df59e5b71d\"const restaurant_id_as_string = \"6380b9bd3b0718df59e5b71d\" ;\nconst restaurant_id_as_oid = ObjectId( restaurant_id_as_string ) ;\nconst restaurant = await restaurants.findOne( { _id : restaurant_id_as_oid } ).then(res => {\n console.log(res);\n })\n",
"text": "If restaurandId is a string like \"6380b9bd3b0718df59e5b71d\" you simply have to call the ObjectId constructor like",
"username": "steevej"
},
{
"code": "",
"text": "I have tried this and got this error:\n\nimage1288×348 17.3 KB\n",
"username": "Ciprian_Gabor"
},
{
"code": "const restaurant = await restaurants.findOne( { _id : { \"$convert\" : { \"input\" : restaurantId , \"to\" : \"objectId\" } } ).then(res => {\n console.log(res);\n })\n",
"text": "Sorry, I missed the part that you are not running this in the mongosh or nodejs.Try with $convert:",
"username": "steevej"
},
{
"code": "$oidrestaurantIDrestaurants.findOne({ \"_id\": { \"$oid\": restaurantID } })\n",
"text": "you can also use $oid if restaurantID is a legit ObjectId",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "It does not work inside function. Do I need to add some dependency?\nimage909×314 14.3 KB\n",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Does not work with “$oid” either\n\nimage798×311 13.1 KB\n",
"username": "Ciprian_Gabor"
},
{
"code": "in quotes",
"text": "did you use them in quotes?\nwe use them all the time in queries outside Realm, so logically they should work there too.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "It seems Realm functions fail to process queries as we know them. Even official docs are vague about it. For that, it is unfortunate I do not have an immediate answer for now.I will make a new post and ask about why Realm functions fail like this. this might even be a bug in recent versions (or maybe in all).",
"username": "Yilmaz_Durmaz"
},
{
"code": " const bid = new BSON.ObjectId( restaurantID )\n const result = await col.find({ \"_id\": bid })\n",
"text": "I split myself again, one problem in one hand and a solution in the other.I keep the problem to myself in the other post here (keep watching if interested): Why do Realm functions fail to process well-known operators (error: unknown operator)? especially for “$oid”And here is how your problem should solve (at least worked for me):",
"username": "Yilmaz_Durmaz"
},
{
"code": "const bid = new BSON.ObjectId( restaurantID )",
"text": "const bid = new BSON.ObjectId( restaurantID )I worked! Thank you ",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "I have a side question related to that.Why do you need to convert restaurantId to an OID?If that value comes from a field of another to document, then this value should be stored as an object id. You do not want to keep the string representation of an object id in your database. You want to keep it as an object id.An object id takes less space than its string representation.It is faster to compare 2 object ids compared to comparing the string representation of the same object ids. (That last sentence sounds weird to me too.)And the last and most important reason is you do not have to convert the string representation to an object id when you do things like you are doing. Or when you do $lookup. If you keep restaurantId as a string, you will not be able to use the simple form of $lookup with localField:restaurantId and foreignField:_id. You would have to use a pipeline that $match and $convert-ed version of each.",
"username": "steevej"
},
{
"code": "",
"text": "I can’t speak about how @Ciprian_Gabor is using it, but I can tell this has an important use case: IoT. it is where a full-fledged driver will not usually fit. device IDs will be sent over as strings.In the App Services, you can create “HTTPS Endpoints” through which you can query the database and send back results. It is just like any CRUD API you would know: data flows mostly as strings over requests.for example, you can directly access this endpoint with any value but will get a string as the type of arg1 (I used the sample endpoint function).\nexample endpointPS: by the way, thanks for that question. I was procrastinating to practice this endpoints and functions thing, and that help trick my lazy part ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "What you wrote does not contradict what you I wrote.I wrote about keeping reference to other documents inside the database as the native object id. You wrote about external interface with the data.Yes outside the data layer, most is string. It is not a reason to bloat and slow down your data as a convenience for external entities. The same can be said with dates. You should keep dates as date object rather than the string representation for some of the same reason, take less space and faster to compare. And for dates, natural ordering and rich library. The case in point is the thread I was supposed to filter documents based on month and year between query but I'm not getting 1 month of next year due to the condition i have used any other ways to filter out where the $gte and $lte did not work as expected because dates were strings with the year last. There was also another thread were dates stored as strings needed to $convert for each document in order to be able to compute the date of the week before.So store your data in the appropriate format and when you deal with human then you have no choice but to display or enter the values as string.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Compare ObjectId with string | 2022-11-24T21:40:15.246Z | Compare ObjectId with string | 9,238 |
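As a follow-up to the point above about storing references as real ObjectIds: when the reference field is an ObjectId, the simple form of $lookup joins the collections without any $convert step. A hedged mongosh sketch, where the `orders` and `restaurants` collection names and the `restaurantId` field are assumptions:

```js
db.orders.aggregate([
  {
    $lookup: {
      from: "restaurants",
      localField: "restaurantId",   // stored as ObjectId, not as its string form
      foreignField: "_id",
      as: "restaurant"
    }
  },
  // $lookup always returns an array; unwind it when at most one match is expected.
  { $unwind: "$restaurant" }
])
```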
[
"node-js"
]
| [
{
"code": "const mongodb = app.currentUser.mongoClient(\"mongodb-atlas\");\nconst plants = mongodb.db(\"example\").collection(\"plants\");\n",
"text": "From the below documentation, I have established a connection to mongodb instance. However, the returned object does not have ‘close’ as a function. How do I close this connection?To access a linked cluster from your client application, pass the cluster name to User.mongoClient(). This returns a MongoDB service interface that you can use to access databases and collections in the cluster.",
"username": "Joseph_Bittman"
},
{
"code": "realm.close()User.logOut",
"text": "From the “returns” chain, I would say it does not have a “close” method on its own and closes when you close the realm realm.close(), or garbage collected if it goes out of scope. User.logOut should also close it if it matters.Class: User (mongodb.com)\nUser.mongoClient:MongoDB → db: MongoDBDatabase → collection: Realm.MongoDBCollection → collection methods",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi Folks – There’s no close method associated here as the request is routed through Atlas App Services vs. connecting to the database directly. Atlas App Services serves as a proxy between the client and Atlas cluster and will open/pool/close connections automatically so there is no need to explicitly close the connection. Note, that while this may lead to slightly more connections open than expected at lower levels of usage it means that overall requests made via Atlas App Services will be very efficient with connection usage.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to close app.currentUser.mongoClient connection | 2022-11-27T04:11:07.708Z | How to close app.currentUser.mongoClient connection | 1,676 |
|
null | [
"realm-web"
]
| [
{
"code": "",
"text": "I am unable to get or set my user object custom_data field. I have added the rule for the collection and enabled custom data, but where do I go next",
"username": "Alfred_Lotsu"
},
{
"code": "",
"text": "Welcome to the forums @Alfred_Lotsu!Sorry you are having difficulty - in order for us to help, we need a clear description of the issue, the code you’ve attempted and your troubleshooting. In this case, it’s not clear if you are having the issue in the Realm console or somewhere else. Can you provide more info so we can get a feel for what the issue is?",
"username": "Jay"
},
{
"code": "const mongo = app.currentUser.mongoClient(\"mongodb-atlas\");\n const collection = mongo.db(\"tree\").collection(\"leaves\");\n\n await collection.insertOne(\n {userID: app.currentUser.id},\n )\n await app.currentUser.refreshCustomData()\n console.log(app.currentUser.customData)\n",
"text": "Thanks for the reply.I am hoping that this would match my user object to the custom_data field with the same id, so when I make further changes to my custom_data field, it would apply to my current user.\nAny guidance would be very much appreciated",
"username": "Alfred_Lotsu"
},
{
"code": "tree->leavesuserIduserID",
"text": "@Alfred_Lotsu In general you’re on the right track.It’s important that the App Services UI is configured correctly as well. Check the App Users page under the Custom User Data tab in the console to find and configure custom user data settings, including the custom user data cluster, database, and collection and the userId field used to map custom user data documents to users.Also check the Permissions.In this case you’re storing their data in the tree->leaves collection which is technically legal but I would suggest a better naming scheme, perhaps a collection name of “users”.One mistake I’ve made is inconsistent naming so I would also suggest userId for instead of userID.Check your settings and report back your findings.",
"username": "Jay"
},
{
"code": "",
"text": "Thanks for the reply. I was able to figure it out. The code I attached in the previous reply was in my signup component, at which point “app.currentUser” was still null because it was not verified by logging in. I moved the code to run after logging in and user verification and it is running smoothly now",
"username": "Alfred_Lotsu"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to mutate my custom data field | 2022-11-22T12:40:26.106Z | How to mutate my custom data field | 2,678 |
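Building on the snippet above, once the custom user data document exists it can be updated through the same mongoClient handle and then re-read with refreshCustomData(). A minimal sketch; the `favouriteColour` field is an assumption used only for illustration:

```js
const mongo = app.currentUser.mongoClient("mongodb-atlas");
const collection = mongo.db("tree").collection("leaves");

// Update (or create) the current user's custom data document...
await collection.updateOne(
  { userID: app.currentUser.id },
  { $set: { favouriteColour: "green" } },
  { upsert: true }
);

// ...then pull the fresh copy into app.currentUser.customData.
await app.currentUser.refreshCustomData();
console.log(app.currentUser.customData);
```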
null | [
"queries",
"atlas-functions"
]
| [
{
"code": "{ \"_id\": { \"$oid\": id_string } }\"$convert\"const bid = new BSON.ObjectId(id)\nconst result = await col.find({ \"_id\":bid})\nexports = async function(id){\n\n const service= context.services.get(\"mongodb-atlas\")\n const col=service.db(\"products\").collection(\"current\")\n\n const bid = new BSON.ObjectId(id)\n const result = await col.find({ \"_id\":bid})\n // const result = await col.find({ \"_id\":{\"$oid\":id }})\n return {result:result};\n};\n// console\n// exports(\"637f35605b44ae9eff4da7bf\")\n",
"text": "I am new to App Services and Functions, but learning bit by bit when it comes up to use.Today is one of them where we were trying to help another forum member using our usual MongoDB query instincts. Compare ObjectId with stringour glorious { \"_id\": { \"$oid\": id_string } } query horribly fails with these two lines:error:\n(BadValue) unknown operator: $oidit is not alone in this that \"$convert\" also fails with the same error.I know the following works for the purpose:But why can’t we use these queries as seen in any other areas of MongoDB? especially with the App Services being the closest to the heart.use the following if you want to try live on Atlas:PS: funny part (for me at least), even the result has “$oid” in itresult (JavaScript):\nEJSON.parse(‘{“result”:[{“_id”:{“$oid”:“637f35605b44ae9eff4da7bf”},“name”:“name 1”,“pid”:{“$numberInt”:“123”}}]}’)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I noticed the “Functions” I am mentioning here are not part of “Realm”.Being new in this part of MongoDB, it is easy to confuse these names. I replaced the title to have “Atlas Functions” and other parts in the text with “App Services”.",
"username": "Yilmaz_Durmaz"
}
]
| Why do Atlas Functions fail to process well-known operators (error: unknown operator)? especially for "$oid" | 2022-11-26T12:13:07.377Z | Why do Atlas Functions fail to process well-known operators (error: unknown operator)? especially for “$oid” | 2,396 |
null | []
| [
{
"code": "",
"text": "Hey there, I am kinda new to mongo db but can I make a database in the folder containing my main code files like can I create a serverModels.db or something Kinda like that? like locally",
"username": "TheWeebSamurai_N_A"
},
{
"code": "",
"text": "Yes.A database is a number of collections.\nEach collection contains some documents.\nDocuments are JSON.\nYou can store JSON in a file, so each collection can be its own file.\nThe file can be located anywhere you wish.\nYou then use mongoimport to load each of the collections.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Create a database in the folder containing the main files | 2022-11-27T17:47:20.079Z | Create a database in the folder containing the main files | 1,272 |
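To make the mongoimport step above concrete, each collection file can be loaded with a one-line command. The file, database and collection names below are assumptions for illustration:

```sh
# serverModels.json holds one JSON document per line (add --jsonArray if it is a single array).
mongoimport --uri "mongodb://localhost:27017" \
  --db university --collection serverModels \
  --file ./serverModels.json
```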
null | [
"aggregation",
"node-js",
"mongoose-odm"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"63769c377615fe4cdb4995a6\"\n },\n \"userId\": \"620920aa9ddac2074a50472f\",\n \"toAsset\": {\n \"$oid\": \"63769c117615fe4cdb499515\"\n },\n \"fromAsset\": {\n \"$oid\": \"63769c067615fe4cdb4994d9\"\n },\n \"comment\": \"<p>Linking of Note 0001 to Note 0002.</p>\",\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1668717623761\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1668717623761\"\n }\n },\n \"isEmbedded\": false,\n \"isActive\": true,\n \"__v\": 0\n}\ntoAssetfromAsset{\n \"_id\": {\n \"$oid\": \"6377a8d834671794449f0dca\"\n },\n \"userId\": \"636b73f31527830f7bd7a47e\",\n \"folderId\": \"636b73f31527830f7bd7a482\",\n \"title\": \"Note that hasn't been shared\",\n \"note\": \"<p>Here's a Note that hasn't been shared.</p>\",\n \"typeOfAsset\": \"note\",\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1668786392389\"\n }\n },\n \"updatedAt\": {\n \"$date\": {\n \"$numberLong\": \"1668786392389\"\n }\n },\n \"isActive\": 3,\n \"meta\": [...],\n \"preferences\": [...],\n \"sequence\": 1,\n \"tags\": [],\n \"attributes\": [\n {\n \"$oid\": \"6377a8d834671794449f0dc8\"\n }\n ],\n \"__v\": 0\n}\n{\n \"_id\": {\n \"$oid\": \"6377a8d834671794449f0dc8\"\n },\n \"userId\": \"636b73f31527830f7bd7a47e\",\n \"numberOfViews\": 2,\n \"isFavourite\": false,\n \"isToRead\": false,\n \"typeOfAccess\": \"isOwner\",\n \"sharing\": {\n \"typeOfShare\": \"withUsers\",\n \"sharedWith\": [],\n \"segementsForUrl\": []\n },\n \"__v\": 0\n}\nconst project = {\n $project: {\n _id: 0,\n id: '$_id',\n userId: 1,\n [directionOfLink]: 1,\n // attributes: {\n // $filter: {\n // input: '$assets',\n // as: 'assets',\n // cond: {\n // $and: [\n // { $in: [ '$$assets.attributes.typeOfAccess', ['isOwner', 'asAuthor', 'asReader'] ] },\n // { $eq: [ '$$assets.attributes.userId', context.body.variables.userId ] }\n // ]\n // }\n // }\n // },\n comment: 1,\n createdAt: 1,\n updatedAt: 1,\n isActive: 1,\n score: {\n $meta: 'searchScore'\n }\n }\n}\n\nconst lookup = {\n $lookup: {\n from: 'assets',\n localField: directionOfLink,\n foreignField: '_id',\n // pipeline: [{\n // $match: {\n // 'attributes.userId': context.body.variables.userId,\n // $expr: { $in: [ 'attributes.typeOfAccess', ['isOwner', 'asAuthor', 'asReader'] ] }\n // }\n // }],\n as: directionOfLink\n }\n}\n\nconst addFields = {\n $addFields: {\n something: {\n $filter: {\n input: '$assets',\n cond: {\n $and: [\n { $eq: [ '$$this.attributes.typeOfAccess', ['isOwner', 'asAuthor', 'asReader'] ] },\n { $eq: [ '$$this.attributes.userId', context.body.variables.userId ] }\n ]\n }\n }\n }\n }\n}\n\nconst match = {\n $match: {\n [args.directionOfLink]: new mongoose.Types.ObjectId(args.assetId)\n }\n}\n",
"text": "I’m using an aggregation to return data via a lookup to build the links between documents.At the moment, the linking is working when User A creates links between their own Assets. But if User A is viewing an Asset that’s been shared with them by User B and navigates to one that has a link to an Asset that hasn’t been shared, I don’t need to return those documents.The data for a Link is:The data for an Asset, as in toAsset and fromAsset, is:I’m using Attributes to manage what Assets have been shared with whom, and the data is:Now, the task here is to somehow how return the Assets that have been shared, but after a bunch of different attempts (as per the code that’s been commented out), I’ve so far failed.The code is:Any thoughts would be appreciated.",
"username": "Wayne_Smallman"
},
{
"code": "",
"text": "Anyone have thoughts on this? The project has stalled at the moment because of this problem.",
"username": "Wayne_Smallman"
},
{
"code": "",
"text": "Hello @Wayne_Smallman,Your question is not clear to me, Are there 2 collections or a single? can you please provide example documents and the expected result as per those documents, and also show your external input values?I can see you have assigned stages in variables, Can you show how did you execute the final query?",
"username": "turivishal"
},
{
"code": "const lookup = {\n $lookup: {\n from: 'assets',\n localField: directionOfLink,\n foreignField: '_id',\n as: directionOfLink,\n pipeline: [\n {\n $lookup: {\n from: 'assets_attributes',\n as: 'attributesInAssets',\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $eq: [ '$userId', context.body.variables.userId ] },\n { $in: [ '$typeOfAccess', ['isOwner', 'asAuthor', 'asReader'] ] },\n ]\n }\n }\n }\n ]\n }\n },\n {\n $unwind: '$attributesInAssets'\n },\n {\n $match: {\n $expr: {\n $in: [ '$attributesInAssets._id', '$attributes' ]\n }\n }\n },\n {\n $group: {\n _id: '$_id',\n userId: { $first: '$userId' },\n folderId: { $first: '$folderId' },\n title: { $first: '$title' },\n typeOfAsset: { $first: '$typeOfAsset' },\n createdAt: { $first: '$createdAt' },\n updatedAt: { $first: '$updatedAt' },\n isActive: { $first: '$isActive' },\n attributes: { $first: '$attributes' },\n attributesInAssets: {\n $push: '$attributesInAssets._id'\n }\n }\n },\n {\n $project: {\n _id: 1,\n userId: 1,\n folderId: 1,\n title: 1,\n typeOfAsset: 1,\n attributes: 1,\n attributesInAssets: 1,\n createdAt: 1,\n updatedAt: 1,\n isActive: 1\n }\n }\n ]\n }\n}\n\nconst redact = {\n $redact: {\n $cond: {\n if: {\n $gt: [ {\n $size: `$${directionOfLink}`\n }, 0 ]\n },\n then: '$$KEEP',\n else: '$$PRUNE'\n }\n }\n}\n",
"text": "While I admit it’s possible this isn’t the most efficient approach (I’m no expert), it’s at least working:",
"username": "Wayne_Smallman"
}
]
| Aggregation: Return documents based on fields in a subdocument | 2022-11-19T12:00:10.929Z | Aggregation: Return documents based on fields in a subdocument | 1,307 |
null | [
"replication",
"storage"
]
| [
{
"code": "",
"text": "I’m using MongoDB 3.6 in replicaSet modeI found that the database query speed was slow when many wt files were openedI found this issue https://jira.mongodb.org/browse/WT-8413And I have some questions:",
"username": "kongfu-cat"
},
{
"code": "",
"text": "I have no answers for your questions.WT cannot predict when it will need to read or write to the file. As a good practice, it probably tries to keep the file open as long as possible for efficiency reasons.Andquery speed was slow when many wt files were openedmight simply be the symptom of insufficient resources for the workload which might be the result of the massive number of collections anti-pattern.",
"username": "steevej"
}
]
| [MongoDB 3.6 ] When does WiredTiger open or close the file handles? | 2022-11-27T15:57:18.929Z | [MongoDB 3.6 ] When does WiredTiger open or close the file handles? | 1,529 |
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "Hi There, i hope every one is fine\nWhile connecting to mongo db(local), am getting this error\nerror MongooseServerSelectionError: connect ECONNREFUSED ::1:27017Even i checked in services Mongodb is in runnning state and i restarted as well.My Code is\nmongoose.connect(“mongodb://localhost:27017/database_name”,{useNewUrlParser:true, useUnifiedTopology:true}).then((result) =>{console.log(‘Server Connected’)}).catch(err => console.log(‘error’,err))",
"username": "aswin_lakshmanan"
},
{
"code": "",
"text": "This has been answered many times. Search for ECONNREFUSED.One of 2 reasons:I also read somewhere that some lib were sorting the IP address coming from DNS resolution. This lib might not doing that anymore. If localhost IPv6 is defined as the first one, the connection is tried with this and the IPv4 is not tried. Before, IPv4 was be returned first.If you want to make sure to connnect to IPv4 localhost defined as 127.0.0.1 use 127.0.0.1 rather that localhost.",
"username": "steevej"
}
]
| Mongo Connection Error in Node JS | 2022-11-27T13:51:56.401Z | Mongo Connection Error in Node JS | 4,078 |
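A minimal sketch of the suggested fix, pointing Mongoose at the IPv4 loopback address instead of localhost (the database name is carried over from the original snippet):

```js
const mongoose = require("mongoose");

// 127.0.0.1 avoids the case where localhost resolves to the IPv6 address ::1
// while the local mongod is only listening on IPv4.
mongoose.connect("mongodb://127.0.0.1:27017/database_name", {
  useNewUrlParser: true,
  useUnifiedTopology: true
})
  .then(() => console.log("Server Connected"))
  .catch(err => console.log("error", err));
```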
null | [
"aggregation"
]
| [
{
"code": "",
"text": "I have 2 collections (orders containing products Id and the date of the order (ISODate)) and customers that include the order as a field reference to the order collection.\nAt first, I made a lookup operator to link between the two collections, then I wanted to group the result by month or maybe year, and an error msg:“PlanExecutor error during aggregation:: caused by:: can’t convert from BSON type array to Date” So, I tried with a date as a Date type but I have the same error",
"username": "Malika_Taouai"
},
{
"code": "",
"text": "Post simple example code",
"username": "Jack_Woehr"
},
{
"code": "db.customers.aggregate({$lookup:{\n from: \"orders\",\n localField: \"orders\",\n foreignField: \"_id\",\n as: \"orders\",\n}})\n{ _id: ObjectId(\"63728124290f7a2159df21e5\"),\n name: 'Jay.K',\n age: 32,\n gender: 'male',\n address: ObjectId(\"6372838c290f7a2159df21f0\"),\n contact: ObjectId(\"637284a8290f7a2159df21fa\"),\n paymentMethod: \n [ ObjectId(\"63761403f8fe05d9db646186\"),\n ObjectId(\"63762a01beab2f5d33b4980b\") ],\n orders: \n [ { _id: 1, date: 2021-07-09T00:00:00.000Z, pId: [ 12, 46 ] },\n { _id: 2, date: 2021-10-05T00:00:00.000Z, pID: [ 152, 87, 100 ] },\n { _id: 3, date: 2022-01-10T00:00:00.000Z, pId: [ 212, 646 ] } ] }\n",
"text": "",
"username": "Malika_Taouai"
},
{
"code": "$project : {\n _id: 0,\n Day: {\n $dayOfMonth: \"$orders.date\",\n },\n Month: {\n $month: \"$orders.date\",\n },\n Year: {\n $year: \"$orders.date\",\n }\n}\n",
"text": "",
"username": "Malika_Taouai"
},
{
"code": "",
"text": "Same result: PlanExecutor error during aggregation:: caused by:: can’t convert from BSON type array to Date.I tried to work on the date as a collection and use references to the link between orders and date, and then link with customers to group my data by date, but it didn’t work too.",
"username": "Malika_Taouai"
},
{
"code": "",
"text": "@ Pavel_Duchovny",
"username": "Malika_Taouai"
},
{
"code": "_id: ObjectId(\"63728124290f7a2159df21e5\"),\n name: 'Jay.K',\n age: 32,\n gender: 'male',\n address: ObjectId(\"6372838c290f7a2159df21f0\"),\n contact: ObjectId(\"637284a8290f7a2159df21fa\"),\n paymentMethod: \n [ ObjectId(\"63761403f8fe05d9db646186\"),\n ObjectId(\"63762a01beab2f5d33b4980b\") ],\n orders: \n [ { _id: 1, date: 2021-07-09T00:00:00.000Z, pId: [ 12, 46 ] },\n { _id: 2, date: 2021-10-05T00:00:00.000Z, pID: [ 152, 87, 100 ] },\n { _id: 3, date: 2022-01-10T00:00:00.000Z, pId: [ 212, 646 ] } ] }\ndb.customers.aggregate(\n [\n {\n $match: {\n name: \"Jay.K\",\n },\n },\n {\n $unwind: \"$orders\",\n },\n {\n $group: {\n _id: {\n name: \"$name\",\n orderDate: {\n $dateToString: {\n format: \"%Y\",\n date: \"$orders.date\",\n },\n },\n },\n total: {\n $sum: 1,\n },\n },\n },\n ]\n)\n",
"text": "Assuming you have the order_date field in the orders array in the customers collection, you don’t need the $lookup. Correct me if I’m wrong…",
"username": "Leandro_Domingues"
},
{
"code": "db.customers.aggregate(\n [\n {\n $match: {\n name: \"Jay.K\",\n },\n },\n {\n $unwind: \"$orders\",\n },\n {\n $group: {\n _id: {\n name: \"$name\",\n orderDate: {\n $dateToString: {\n format: \"%Y\",\n date: \"$orders.date\",\n },\n },\n },\n total: {\n $sum: 1,\n },\n },\n },\n ]\n)\n",
"text": "It works!\nThank you ^^",
"username": "Malika_Taouai"
},
{
"code": "$lookuporders:[1]orders: [{ _id: 1,date:\"pudatehere\"}]$unwind",
"text": "Hi, @Malika_Taouai, in your posts, can you please try to create a stripped version of your actual data with both collections having all required fields, but with only 1-2 unrelated fields. this will help us as well as yourself to see the problem.it is nice to hear that the assumption @Leandro_Domingues has made did work.Here is a remark on your problem:$lookup stage returns an array of objects even when you match a single value with a single document. so a single orders:[1] will be orders: [{ _id: 1,date:\"pudatehere\"}].this is why your “$orders.date” does not work as you would expect because it expected an object, instead got an array of objects. depending on the situation you will need an $unwind operation to unpack this array to individual objects, such as grouping on a field of the objects in the array.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can’t convert from BSON type array to Date | 2022-11-22T16:56:28.284Z | Can’t convert from BSON type array to Date | 3,750 |
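To make the remark about $lookup returning an array concrete, here is a hedged sketch of the lookup-based variant of the same grouping. It assumes the orders live in a separate `orders` collection and that `customers.orders` holds their _id values:

```js
db.customers.aggregate([
  {
    $lookup: {
      from: "orders",
      localField: "orders",
      foreignField: "_id",
      as: "orders"
    }
  },
  // $lookup produces an array of matched documents,
  // so unwind it before reading "$orders.date".
  { $unwind: "$orders" },
  {
    $group: {
      _id: { $dateToString: { format: "%Y-%m", date: "$orders.date" } },
      total: { $sum: 1 }
    }
  }
])
```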
[
"connecting",
"atlas-cluster",
"golang"
]
| [
{
"code": "",
"text": "I can’t run my project. I’ve been looking for the solution but still nothing works, can you guys help me? Here’s the error\n\nScreenshot (11)1920×1080 199 KB\n",
"username": "Valen_Rionald"
},
{
"code": "mongodb+srv://mongodb://&tls=false&ssl=false",
"text": "@Valen_Rionald thanks for the question! The errorsocket was unexpectedly closed: EOFcan be caused by many issues, but the most common is that TLS is not enabled. Make sure your connection string is using scheme mongodb+srv://, which enables TLS by default (mongodb:// scheme URIs do not enable TLS by default). Also make sure you’re not explicitly disabling TLS with option &tls=false or &ssl=false.Can you post the connection string you’re using (with the username/password redacted)?",
"username": "Matt_Dale"
},
{
"code": "",
"text": "Here is the connection string\n\nWhatsApp Image 2022-11-22 at 11.59.47 PM1103×34 16.2 KB\n",
"username": "Valen_Rionald"
},
{
"code": "",
"text": "IP restriction is another cause for this kind of error. Go to your cluster and select network access in the security part. make sure you have at least your “current” IP in the list.If this happens all the time, the network you are on (school, work) restricts you to use the required ports.PS: be careful when sharing connection strings with passwords in them.",
"username": "Yilmaz_Durmaz"
}
]
| Can't connect Golang and MongoDB | 2022-11-11T12:32:59.131Z | Can’t connect Golang and MongoDB | 1,965 |
|
null | [
"queries",
"node-js"
]
| [
{
"code": "",
"text": "Hello, I have always created a single connection with one connection string. My question is, how to create multiple connections(MongoDB instances) if an array of connection strings are given in NodeJs get API?Let’s say multiple connection strings will have the same type of database. e.g., my database name is “University” and this database is available in all different locations. And I wanted to write one common API which will provide me with an array of universities from different connections, how to do it?Example connectionString1 = mongodb://localhost:27017\nconnectionString2 = mongodb://localhost:27018\nconnectionString3 = mongodb://localhost:27019Now I wanted to connect with all three connection strings and fetch all records from it and send it\nin a response to one common API, how can I do it in an efficient manner?Your input will help me to understand this structure in better way",
"username": "Prasanna_Sasne"
},
{
"code": "const client = new MongoClient(uri);newconst client1 = new MongoClient(uri1);\nconst client2 = new MongoClient(uri2);\nconst client3 = new MongoClient(uri3);\n",
"text": "This line is the heart of the connection:\nconst client = new MongoClient(uri);new operator will give you a new object each time so you would need:unfortunately, the rest is not this easy. for each function in CRUD, you have to implement your own logic to integrate them.",
"username": "Yilmaz_Durmaz"
}
]
| Multiple MongoDB database Connections in NodeJS | 2022-11-22T00:58:54.060Z | Multiple MongoDB database Connections in NodeJS | 5,104 |
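One way to finish the sketch above is to connect a client per connection string, run the same query against each deployment, and merge the results before responding. A minimal Node.js example; the database and collection names and the flat-merge strategy are assumptions:

```js
const { MongoClient } = require("mongodb");

const uris = [
  "mongodb://localhost:27017",
  "mongodb://localhost:27018",
  "mongodb://localhost:27019",
];

// Query the same collection on every deployment and return one merged array.
async function findUniversitiesEverywhere(filter = {}) {
  const clients = uris.map((uri) => new MongoClient(uri));
  try {
    await Promise.all(clients.map((client) => client.connect()));
    const perCluster = await Promise.all(
      clients.map((client) =>
        client.db("University").collection("universities").find(filter).toArray()
      )
    );
    return perCluster.flat();
  } finally {
    await Promise.all(clients.map((client) => client.close()));
  }
}
```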
[
"atlas",
"react-js",
"data-api",
"delhi-mug"
]
| [
{
"code": "Head of Eng, PhysicsWallah (PW)Software Engineer @ IntuitLead - MUG Delhi NCR | Sr. SWE @ LinkedInLead - MUG Delhi NCR",
"text": "\nDelhi - MUG1920×1080 255 KB\nPlease note: The RSVP above is a waitlist, we will be confirming your attendance a few days before the event.Delhi-NCR MongoDB User Group is organizing a meetup in the last week of November on Saturday, November 26, 2022, at 11:00 AM at LinkedIn Office, Gurugram.The day will include two sessions, In the beginning, Manan Varma, (Head of Engineering @PhysicsWallah) will talk about - How to Scale with MongoDB Atlas.His sessions will be followed by Shreya Prasad, (Software Engineer @Intuit) who will talk about how you to build apps with React and MongoDB Data APIs.We will also have some fun activities like Trivia and Spot the Bug along with an amazing Lunch .This is a meet-up for you where we’re trying to attract newcomers, experienced developers, architects, and startup founders to come, share ideas and case studies as well as meet new people in the community. Come join us for a day of learning lunch and fun…!Please Note: We have limited seats available for the event. RSVP on the event page to express your interest and enter the waitlist. We will reach out to you with a confirmation email to confirm your attendance.Event Type: In-Person\n Location: LinkedIn Office, Gurugram .\n Tower 8th Road, Sikandarpur Upas, DLF Cyber City, DLF Phase 2, Sector 24, Gurugram, Haryana 122022To join the waitlist - Please click on the “ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you RSVPed. Please Note: We have limited seats available for the event. RSVP on the event page to express your interest and enter the waitlist. We will reach out to you with a confirmation email to confirm your attendance.\nmanan verma800×800 70.9 KB\nHead of Eng, PhysicsWallah (PW)–\nshreya prasad596×596 60.2 KB\nSoftware Engineer @ Intuit\nshrey batra800×800 214 KB\nLead - MUG Delhi NCR | Sr. SWE @ LinkedInLead - MUG Delhi NCRJoin the Delhi-NCR group to stay updated with upcoming meetups and discussions.",
"username": "shrey_batra"
},
{
"code": "",
"text": "Hi, Not able to RSVP. Getting error 500",
"username": "Manan_Bedi"
},
{
"code": "",
"text": "exactly , its showing error",
"username": "Arman_Mansury"
},
{
"code": "",
"text": "getting 500 error on RSVP …please add me for event [email protected]",
"username": "Shivam_Vishwakarma"
},
{
"code": "",
"text": "Getting 500 error when clicking on RSVP",
"username": "Shivam_Gupta6"
},
{
"code": "",
"text": "Hello @Manan_Bedi, @Arman_Mansury, @Shivam_Vishwakarma, @Shivam_Gupta6!\nThanks for letting us know!We are actively working to resolve the issue. Will let you know here once it’s resolved and you can RSVP to get added to the waitlist ",
"username": "Harshit"
},
{
"code": "",
"text": "Unable to register, getting a 500 error",
"username": "Tushar_Anand1"
},
{
"code": "",
"text": "Hi everyone, please tag me when the registration issue is resolved. Looking forward to the the exciting event!",
"username": "Nimish_Jain"
},
{
"code": "",
"text": "Hi, I’m getting the same error. ",
"username": "Pragya_Bansal"
},
{
"code": "",
"text": "Hi, please let us know by tagging when RSVP issue resolves, thanks.",
"username": "Sohil_Khanduja"
},
{
"code": "",
"text": "Unable to get rspv showing 500 error",
"username": "Avneet_Singh_20CS133"
},
{
"code": "",
"text": "Please fix this\nThank you",
"username": "Sidharth_Dang"
},
{
"code": "",
"text": "Hi @Sidharth_Dang , @Avneet_Singh_20CS133 , @Sohil_Khanduja , @Pragya_Bansal , @Nimish_Jain , @Tushar_Anand1 , @Shivam_Gupta6 , @Shivam_Vishwakarma , @Arman_Mansury , @Manan_BediThe RSVPs are now fixed, please go ahead and RSVP.\nKeep in mind, this is for waitlist and we will be sending separate confirmation mails to people on waitlist…!Thanks ",
"username": "shrey_batra"
},
{
"code": "",
"text": "RSVP is working now. Thanks a lot!\nLooking forward to attend🙌",
"username": "Nimish_Jain"
},
{
"code": "",
"text": "RSVP is not working properly.",
"username": "Mayank_Agarwal"
},
{
"code": "",
"text": "Hey, @Mayank_Agarwal I just tested and it’s working fine. Can you check again - click on the “RSVP” button and it will turn Green if you RSVPed There’s a known error though if you try seeing the RSVP list by clicking on the number.",
"username": "Harshit"
},
{
"code": "",
"text": "Yes, it is working fine now.",
"username": "Mayank_Agarwal"
},
{
"code": "",
"text": "@Harshit 403 error. RSVP booked out?",
"username": "Gourav_Singh3"
},
{
"code": "",
"text": "+1 to this, couldn’t rsvp earlier as well due to issue saying you can’t login with this ip address ",
"username": "Narayan_Soni"
},
{
"code": "",
"text": "Hey @Gourav_Singh3 and @Narayan_Soni - We are opening 15 more waitlist slots. Please RSVP now to register yourself for the waitlist ",
"username": "Harshit"
}
]
| Delhi-NCR MUG: Building React Applications with Data APIs & Scalability With MongoDB Atlas! | 2022-11-16T00:11:24.828Z | Delhi-NCR MUG: Building React Applications with Data APIs & Scalability With MongoDB Atlas! | 10,639 |
|
null | []
| [
{
"code": "",
"text": "Hi Everyone,I am beginner to IoT and I want to store my raw data to cloud. I came to know MongodB is the best database for this.\n-I want to learn from scratch that how to send the data from device (controller) to cloud.\n-Is there any guide?\n-Any getting started?\n-Any tutoriual?",
"username": "Alihussain_Vohra"
},
{
"code": "",
"text": "Hello @Alihussain_Vohra ,Welcome to The MongoDB Community Forums! Below are some links that could help you learn more about IoT and how you can integrate it with MongoDB.Some video sessions by industry expertsRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| IoT data to mongoDB database | 2022-11-25T10:35:59.063Z | IoT data to mongoDB database | 1,403 |
null | [
"queries"
]
| [
{
"code": "Lagrum: 32 § 1 mom. första stycket a) kommunalskattelagen (1928:370) ABLagrum: 32 § 1 mom. första stycket AB \"Content\": [\n {\n \"analyzer\": \"lucene.swedish\",\n \"minGrams\": 4,\n \"tokenization\": \"nGram\",\n \"type\": \"autocomplete\"\n },\n {\n \"analyzer\": \"lucene.swedish\",\n \"type\": \"string\"\n }\n ]\n{\n\t\"index\": \"test_index\",\n\t\"compound\": {\n\t\t\"filter\": [\n\t\t\t{\n\t\t\t\t\"text\": {\n\t\t\t\t\t\"query\": [\n\t\t\t\t\t\t\"111111111111\"\n\t\t\t\t\t],\n\t\t\t\t\t\"path\": \"ProductId\"\n\t\t\t\t}\n\t\t\t},\n\t\t],\n\t\t\"must\": [\n\t\t\t{\n\t\t\t\t\"autocomplete\": {\n\t\t\t\t\t\"query\": [\n\t\t\t\t\t\t\"AB\"\n\t\t\t\t\t],\n\t\t\t\t\t\"path\": \"Content\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"autocomplete\": {\n\t\t\t\t\t\"query\": [\n\t\t\t\t\t\t\"\\xc2\\xa7\",\n\t\t\t\t\t],\n\t\t\t\t\t\"path\": \"Content\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"autocomplete\": {\n\t\t\t\t\t\"query\": [\n\t\t\t\t\t\t\"32\"\n\t\t\t\t\t],\n\t\t\t\t\t\"path\": \"Content\"\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t},\n\t\"count\": {\n\t\t\"type\": \"lowerBound\",\n\t\t\"threshold\": 500\n\t}\n}\n",
"text": "Hello!I faced with the issue when I try to search for several words including a special character (section sign “§”). Example: AB § 32.\nI want all words “AB”, “32” and symbol “§” to be included in found documents.\nIn some cases document can be found, in some not.\nIf my document contains the following text then search find its:\nLagrum: 32 § 1 mom. första stycket a) kommunalskattelagen (1928:370) ABBut if document contains this text then search doesn’t find:\nLagrum: 32 § 1 mom. första stycket ABFor symbol “§” I use UT8-encoding “\\xc2\\xa7”.Index uses “lucene.swedish” analyzer.Query looks like:The question is what is wrong with search and how can I make it working?",
"username": "Jelena_Arsinova"
},
{
"code": "edgeGramnGram \"Content\": [\n {\n \"analyzer\": \"lucene.swedish\",\n \"minGrams\": 4,\n \"tokenization\": \"nGram\",\n \"type\": \"autocomplete\"\n },\n {\n \"analyzer\": \"lucene.swedish\",\n \"type\": \"string\"\n }\n ]\n",
"text": "The first issue in this field definition for content is the autocomplete definition should use edgeGram rather than nGram, which should almost always be used for left-to-right languages that respect whitespace. Please also add minGram value.If you want to understand how the Lucene analysis worked here, you can try this tool for understanding Atlas Search analysis. It’s not maintained by engineering, but a different team. It could disappear and has no official support. In it, you will discover that the § symbol is stripped out as non-essential to search relevance in the Swedish analyzer. If you need to preserve the symbol, you need to index that field with the Keyword or Whitespace analyzers. The other option is a custom analyzer..Let me know if any of these options work for you!",
"username": "Marcus"
},
{
"code": "",
"text": "Hello!I tried edgeGram. but it doesn’t work for us as well.\nIn our project we would like to search in Swedish text using autocomplete operator with nGram tokenization, since we want to find as in the beginning, as in the middle, as at the end of the word (like mentioned here https://www.mongodb.com/docs/atlas/atlas-search/autocomplete/). We want special characters to be included in found documents as well. May be you could give us an example how the custom analyzer could look like for us?",
"username": "Jelena_Arsinova"
},
{
"code": "content \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"content\": [\n {\n \"type\": \"autocomplete\",\n \"tokenization\": \"nGram\",\n \"minGrams\": 4,\n \"maxGrams\": 7,\n \"foldDiacritics\": false,\n \"analyzer\": \"lucene.whitespace\"\n },\n {\n \"analyzer\": \"lucene.swedish\",\n \"type\": \"string\"\n }\n ]\n }\n }\n}\n",
"text": "Focusing only on the content field, here is an index definition that should work for your requirements. The docs are here. Let me know if this works for you.",
"username": "Marcus"
},
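For reference, the custom-analyzer route mentioned earlier in this thread could look roughly like the sketch below: a whitespace tokenizer plus a lowercase token filter keeps standalone tokens such as "§" instead of stripping them the way lucene.swedish does. The analyzer name swedishKeepSymbols is an assumption for illustration, and whether dropping stemming and stop words is acceptable for your relevance needs would have to be tested.

```json
{
  "analyzers": [
    {
      "name": "swedishKeepSymbols",
      "tokenizer": { "type": "whitespace" },
      "tokenFilters": [ { "type": "lowercase" } ]
    }
  ],
  "mappings": {
    "dynamic": false,
    "fields": {
      "content": [
        {
          "type": "autocomplete",
          "tokenization": "nGram",
          "minGrams": 4,
          "maxGrams": 7,
          "foldDiacritics": false,
          "analyzer": "lucene.whitespace"
        },
        { "type": "string", "analyzer": "swedishKeepSymbols" }
      ]
    }
  }
}
```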
{
"code": "",
"text": "I tried, but it doesn’t work for us as well. No any documents were found. Even if I don’t use a special character in the search, I got empty result in this case.",
"username": "Jelena_Arsinova"
},
{
"code": "ABminGram:4",
"text": "What specifically doesn’t work? If you search for AB and have minGram:4 you not have reached the minimum number of characters. Could you specify the documents, query, and index definition?",
"username": "Marcus"
}
]
| Search including special characters in MongoDB Atlas | 2022-10-20T08:41:34.583Z | Search including special characters in MongoDB Atlas | 4,468 |
null | []
| [
{
"code": "",
"text": "Hi all.\nI am currently doing a project that requires me to build one MongoDB database and one separate Oracle SQL database and then build a GUI to swap between the two database to view the data which is the same for both.I am not sure how to model these so that i can swap between them and update them at the same time? I am using Intellij with Glassfish and netbeans.I have been searching for information to help me understand but i guess im not looking at the right things. Any advice would be really appreciated. Thank you.",
"username": "Jean_Smith"
},
{
"code": "",
"text": "I haven’t done this before, so only have ideas.keeping data in relational format will degrade MongoDB performance whereas having JSON might have issues in Oracle SQL.you probably need to re-design your data structure to suit them both. using ODM/ORM frameworks would also help to use a shared data class.regardless, this might help in using JSON in Oracle SQL: How to Store, Query, and Create JSON Documents in Oracle Database",
"username": "Yilmaz_Durmaz"
}
]
| How Can i use two databases with one GUI | 2022-11-26T15:37:07.046Z | How Can i use two databases with one GUI | 768 |
null | [
"sharding",
"php"
]
| [
{
"code": "php-fpmphp-fpmphp 8.0php7.4-fpmphp-fpminstall mongodb --version 6.0.3 '<======================================================================================='\ndeb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse\nReading package lists...\nBuilding dependency tree...\nReading state information...\nThe following package was automatically installed and is no longer required:\n libssl1.1\nUse 'apt autoremove' to remove it.\nThe following packages will be upgraded:\n mongodb-org mongodb-org-database mongodb-org-mongos mongodb-org-server\n mongodb-org-shell mongodb-org-tools\n6 upgraded, 0 newly installed, 0 to remove and 169 not upgraded.\n1 not fully installed or removed.\nNeed to get 53.3 MB of archives.\nAfter this operation, 2,583 kB disk space will be freed.\nGet:1 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0/multiverse amd64 mongodb-org-shell amd64 6.0.3 [2,986 B]\nGet:2 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0/multiverse amd64 mongodb-org-server amd64 6.0.3 [31.2 MB]\nGet:3 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0/multiverse amd64 mongodb-org-mongos amd64 6.0.3 [22.1 MB]\nGet:4 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0/multiverse amd64 mongodb-org-database amd64 6.0.3 [3,424 B]\nGet:5 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0/multiverse amd64 mongodb-org-tools amd64 6.0.3 [2,768 B]\nGet:6 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0/multiverse amd64 mongodb-org amd64 6.0.3 [2,804 B]\ndebconf: unable to initialize frontend: Dialog\ndebconf: (TERM is not set, so the dialog frontend is not usable.)\ndebconf: falling back to frontend: Readline\ndebconf: unable to initialize frontend: Readline\ndebconf: (This frontend requires a controlling tty.)\ndebconf: falling back to frontend: Teletype\ndpkg-preconfigure: unable to re-open stdin:\nFetched 53.3 MB in 6s (9,359 kB/s)\n(Reading database ... 112930 files and directories currently installed.)\nPreparing to unpack .../0-mongodb-org-shell_6.0.3_amd64.deb ...\nUnpacking mongodb-org-shell (6.0.3) over (5.0.10) ...\nPreparing to unpack .../1-mongodb-org-server_6.0.3_amd64.deb ...\nUnpacking mongodb-org-server (6.0.3) over (5.0.10) ...\nPreparing to unpack .../2-mongodb-org-mongos_6.0.3_amd64.deb ...\nUnpacking mongodb-org-mongos (6.0.3) over (5.0.10) ...\nPreparing to unpack .../3-mongodb-org-database_6.0.3_amd64.deb ...\nUnpacking mongodb-org-database (6.0.3) over (5.0.10) ...\nPreparing to unpack .../4-mongodb-org-tools_6.0.3_amd64.deb ...\nUnpacking mongodb-org-tools (6.0.3) over (5.0.10) ...\nPreparing to unpack .../5-mongodb-org_6.0.3_amd64.deb ...\nUnpacking mongodb-org (6.0.3) over (5.0.10) ...\nSetting up php7.4-fpm (1:7.4.30-5+ubuntu22.04.1+deb.sury.org+1) ...\n\nConfiguration file '/etc/php/7.4/fpm/pool.d/www.conf'\n ==> Modified (by you or by a script) since installation.\n ==> Package distributor has shipped an updated version.\n What would you like to do about it ? 
Your options are:\n Y or I : install the package maintainer's version\n N or O : keep your currently-installed version\n D : show the differences between the versions\n Z : start a shell to examine the situation\n The default action is to keep your current version.\n*** www.conf (Y/I/N/O/D/Z) [default=N] ?\nConfiguration file '/etc/php/7.4/fpm/pool.d/www.conf'\n ==> Modified (by you or by a script) since installation.\n ==> Package distributor has shipped an updated version.\n What would you like to do about it ? Your options are:\n Y or I : install the package maintainer's version\n N or O : keep your currently-installed version\n D : show the differences between the versions\n Z : start a shell to examine the situation\n The default action is to keep your current version.\n*** www.conf (Y/I/N/O/D/Z) [default=N] ?\nConfiguration file '/etc/php/7.4/fpm/pool.d/www.conf'\n ==> Modified (by you or by a script) since installation.\n ==> Package distributor has shipped an updated version.\n What would you like to do about it ? Your options are:\n Y or I : install the package maintainer's version\n N or O : keep your currently-installed version\n D : show the differences between the versions\n Z : start a shell to examine the situation\n The default action is to keep your current version.\n*** www.conf (Y/I/N/O/D/Z) [default=N] ? dpkg: error processing package php7.4-fpm (--configure):\n end of file on stdin at conffile prompt\nSetting up mongodb-org-server (6.0.3) ...\nInstalling new version of config file /etc/mongod.conf ...\nSetting up mongodb-org-shell (6.0.3) ...\nSetting up mongodb-org-tools (6.0.3) ...\nSetting up mongodb-org-mongos (6.0.3) ...\nSetting up mongodb-org-database (6.0.3) ...\nSetting up mongodb-org (6.0.3) ...\nProcessing triggers for man-db (2.10.2-1) ...\nErrors were encountered while processing:\n php7.4-fpm\nneedrestart is being skipped since dpkg has failed\nE: Sub-process /usr/bin/dpkg returned an error code (1)\n",
"text": "I try to undestand what kind of requirement is php-fpm for mongodb. I tried to upgrade mongodb to 6.0.3 (latest) on Ubuntu 22.04. Since the upgrade process failed on php-fpm I must specify that I replaced the php 8.0 which is default on Ubuntu 22.04 with php7.4-fpm. I have no standard php installed on the server, just php-fpm.Can someone analize the following error and tell me what’s going on?",
"username": "Sorin_GFS"
},
{
"code": "apt1 not fully installed or removed.\n",
"text": "There is no such requirement.php7.4-fpm is likely appearing in the output from a previous apt operation:",
"username": "chris"
},
{
"code": "1 not fully installed or removed.libssl1.1mongodb-orgshow the differences between the versionsapt-get purge mongodb-org*\nReading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nNote, selecting 'mongodb-org-database-tools-extra' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-unstable-server' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-shell' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-database' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-unstable' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-unstable-mongos' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-unstable-shell' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-unstable-database-tools-extra' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-server' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-tools' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-mongos' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-unstable-tools' for glob 'mongodb-org*'\nNote, selecting 'mongodb-org-tools-unstable' for glob 'mongodb-org*'\nPackage 'mongodb-org-tools-unstable' is not installed, so not removed\nPackage 'mongodb-org-unstable' is not installed, so not removed\nPackage 'mongodb-org-unstable-mongos' is not installed, so not removed\nPackage 'mongodb-org-unstable-server' is not installed, so not removed\nPackage 'mongodb-org-unstable-shell' is not installed, so not removed\nPackage 'mongodb-org-unstable-tools' is not installed, so not removed\nPackage 'mongodb-org-unstable-database-tools-extra' is not installed, so not removed\nThe following packages were automatically installed and are no longer required:\n libssl1.1 mongodb-database-tools mongodb-mongosh\nUse 'apt autoremove' to remove them.\nThe following packages will be REMOVED:\n mongodb-org* mongodb-org-database* mongodb-org-database-tools-extra* mongodb-org-mongos*\n mongodb-org-server* mongodb-org-shell* mongodb-org-tools*\nThe following held packages will be changed:\n mongodb-org mongodb-org-database mongodb-org-mongos mongodb-org-server mongodb-org-shell\n mongodb-org-tools\n0 upgraded, 0 newly installed, 7 to remove and 168 not upgraded.\n1 not fully installed or removed.\nAfter this operation, 243 MB disk space will be freed.\nDo you want to continue? [Y/n] y\n(Reading database ... 112928 files and directories currently installed.)\nRemoving mongodb-org (6.0.3) ...\nRemoving mongodb-org-database (6.0.3) ...\nRemoving mongodb-org-tools (6.0.3) ...\nRemoving mongodb-org-database-tools-extra (5.0.10) ...\nRemoving mongodb-org-mongos (6.0.3) ...\nRemoving mongodb-org-server (6.0.3) ...\nRemoving mongodb-org-shell (6.0.3) ...\nSetting up php7.4-fpm (1:7.4.30-5+ubuntu22.04.1+deb.sury.org+1) ...\n\nConfiguration file '/etc/php/7.4/fpm/pool.d/www.conf'\n ==> Modified (by you or by a script) since installation.\n ==> Package distributor has shipped an updated version.\n What would you like to do about it ? Your options are:\n Y or I : install the package maintainer's version\n N or O : keep your currently-installed version\n D : show the differences between the versions\n Z : start a shell to examine the situation\n The default action is to keep your current version.\n*** www.conf (Y/I/N/O/D/Z) [default=N] ? 
D\n--- /etc/php/7.4/fpm/pool.d/www.conf 2022-05-18 10:51:16.000000000 +0000\n+++ /etc/php/7.4/fpm/pool.d/www.conf.dpkg-new 2022-08-01 15:06:35.000000000 +0000\n@@ -41,7 +41,8 @@\n\n ; Set permissions for unix socket, if one is used. In Linux, read/write\n ; permissions must be set in order to allow connections from a web server. Many\n-; BSD-derived systems allow connections regardless of permissions.\n+; BSD-derived systems allow connections regardless of permissions. The owner\n+; and group can be specified either by name or by their numeric IDs.\n ; Default Values: user and group are set as the running user\n ; mode is set to 0660\n listen.owner = www-data\n@@ -235,7 +236,7 @@\n ; anything, but it may not be a good idea to use the .php extension or it\n ; may conflict with a real PHP file.\n ; Default Value: not set\n-pm.status_path = /status\n+;pm.status_path = /status\n\n ; The ping URI to call the monitoring page of FPM. If this value is not set, no\n ; URI will be recognized as a ping page. This could be used to test from outside\nlines 5-21/21 (END)\n",
"text": "1 not fully installed or removed.I think that refers to libssl1.1 from above.I tried now to fully remove mongodb-org and I got this when I opted to show the differences between the versions:",
"username": "Sorin_GFS"
},
{
"code": "",
"text": "That is a php thing you’ll have to resolve yourself. It is not related to mongodb, except for the co-incidence of an outstanding operation when you apt install/remove mongodb-org. How you resolve it up to you and your site requirements.It will keep appearing on apt operations until you resolve it, regardless of what package it is (mongodb-org, curl, cowsay etc…)",
"username": "chris"
},
{
"code": "",
"text": "Ok, thank you for your time.",
"username": "Sorin_GFS"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Upgrade mongodb to 6.0.3 on ubuntu 22.04 failed while setting php7.4-fpm | 2022-11-26T09:47:31.747Z | Upgrade mongodb to 6.0.3 on ubuntu 22.04 failed while setting php7.4-fpm | 2,619 |
null | []
| [
{
"code": "db.collection.remove({})",
"text": "I got this warning way too many times, .remove is deprecated, at lease wont be removed on the next big update x)My question is, what do I use instead to fully remove one entry?\nMy collection has only one document that is updated often, by removing it entirely and placing a new one in its place, I’m using db.collection.remove({}) but I keep getting a warning, tried using deleteOne but didnt experiment much with it… Straight to the point, what can I use to not be warned?Thanks in advance",
"username": "Zoo_Zaa"
},
{
"code": "db.collection.deleteOne()db.collection.deleteMany()",
"text": "Hi @Zoo_Zaa ,You should use db.collection.deleteOne() or db.collection.deleteMany()MongoDB Manual: How to delete documents in MongoDB. How to remove documents in MongoDB. How to specify conditions for removing or deleting documents in MongoDB.\nDepands on your intention.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "db.collection.deleteOne({})",
"text": "so I would instead use db.collection.deleteOne({}) to delete that one entry? or must I specify a field for this deleteOne? 'cause its the only entry in the collection…",
"username": "Zoo_Zaa"
},
{
"code": "",
"text": "Its always good to put a filter if you know the criteria…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I hope not to sound lazy, but could you examplify me a working function for removing an entire single present entry(document) in a collection with deleteOne, or deleteMany?Thank you in advance…Any base goes, example: 4 fields, one is an array => That’s roughly alike what I have on the .remove warning.",
"username": "Zoo_Zaa"
},
{
"code": "db.collection.insertOne({item : \"pencil\"})\n\ndb.collection.deleteOne({item : \"pencil\")\n\ndb.collection.insertOne({item : \"pencil\"})\ndb.collection.insertOne({item : \"pen\"})\n\ndb.collection.deleteMany({})\n//All documents gone\n",
"text": "Hi @Zoo_Zaa ,",
"username": "Pavel_Duchovny"
},
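Since the original goal was to swap the collection's only document for a new one, another option worth knowing (not mentioned in the replies above) is replaceOne with upsert, which avoids the delete-then-insert pair entirely. This is only a sketch: the field names are placeholders and it assumes the collection only ever holds that one document.

```js
// replace the single document in place; insert it if the collection happens to be empty
db.collection.replaceOne(
  {}, // empty filter matches the one existing document
  { title: "current state", values: [1, 2, 3], updatedAt: new Date() }, // placeholder fields
  { upsert: true }
)
```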
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| About .Remove... instead? | 2022-11-24T15:03:11.711Z | About .Remove… instead? | 1,843 |
null | [
"aggregation",
"java"
]
| [
{
"code": "db.c2.aggregate([\n {\n '$lookup': {\n 'from': 'organization',\n let: {\n c2_orgId: '$organizationId', c2_conId: '$conId', c2_enabled: '$enabled',\n c2_selected: '$selected'\n },\n pipeline: [\n {\n $match:\n {\n $expr:\n {\n $and:\n [\n { $eq: ['$_id', '$$c2_orgId'] },\n { $eq: ['$$c2_selected', true] },\n { $ne: ['$$c2_enabled', true] },\n { $eq: ['$$c2_name', 'NAME'] },\n { $ne: ['$disabled', true] }\n ]\n }\n }\n }], 'as': 'res'\n }\n },\n { '$unwind': '$res' },\n {\n '$project': {\n 'org.type':1; 'enabled': 1, 'org._id': 1, 'description': 1, 'created': 1, '_id': 1\n }\n }\n])\n",
"text": "Hi all, I have a native query and I want to execute it in java. I know about Java Driver’s MongoDatabase#runCommand but I don’t know how I can convert the query to correcponding Bson object from below query in java or using spring, please?",
"username": "Igor_Nem"
},
{
"code": "",
"text": "The easiest way is to load the pipeline into Compass and then Export to JAVA.I usually keep my aggregations in a resource file that I read at run time and use Document.parse(). This way, I can modify the queries without compiling. And I can use the same file in mongosh. I use special values for variable parts of the query to substitute with current value.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you bro, you saved my life!",
"username": "Igor_Nem"
},
{
"code": "mongotmpl.aggregate(Document.parse('$match...'), Document.parse('$lookup'...))mongotmpl.<method>(Document.parse('{$match...},{$lookup...'))",
"text": "How do you execute your aggregations?\nmongotmpl.aggregate(Document.parse('$match...'), Document.parse('$lookup'...)) or\nmongotmpl.<method>(Document.parse('{$match...},{$lookup...'))?\nIf the second case then which method do you use ?",
"username": "Igor_Nem"
},
{
"code": "query_string = \"\"\"\n{ pipeline : [\n {\n '$lookup': {\n 'from': 'organization',\n let: {\n c2_orgId: '$organizationId', c2_conId: '$conId', c2_enabled: '$enabled',\n c2_selected: '$selected'\n },\n pipeline: [\n {\n $match:\n {\n $expr:\n {\n $and:\n [\n { $eq: ['$_id', '$$c2_orgId'] },\n { $eq: ['$$c2_selected', true] },\n { $ne: ['$$c2_enabled', true] },\n { $eq: ['$$c2_name', 'NAME'] },\n { $ne: ['$disabled', true] }\n ]\n }\n }\n }], 'as': 'res'\n }\n },\n { '$unwind': '$res' },\n {\n '$project': {\n 'org.type':1; 'enabled': 1, 'org._id': 1, 'description': 1, 'created': 1, '_id': 1\n }\n }\n] }\n\"\"\" ;\nDocument query = Document.parse( query_string ) ;\n",
"text": "You do not parse each an every stage. You parse the whole query. Something like:",
"username": "steevej"
},
{
"code": "",
"text": "But how do you execute this BSon?\nmongoTemplate.aggregate(query) ?",
"username": "Igor_Nem"
},
{
"code": "Class Document\n\n java.lang.Object\n org.bson.Document \n\n All Implemented Interfaces:\n Serializable, Map<String,Object>, Bson \n",
"text": "Like any other query. Document.parse gives you a Document. And according to documentation",
"username": "steevej"
},
{
"code": "@Autowired\nprivate MongoTemplate mongoTemplate;\nvoid runQuery() {\n Document query = Document.parse( query_string ) ;\n MongoDatabase db = mongoTemplate.getDb();\n db.**method**(query);\n}\n",
"text": "Didn’t get you still, sorry.\nIn my application for connection with DB I use Spring’s MongoTemplate.\nSo, e.g. my code is loocks like:The question is what a method should I use to run this query?",
"username": "Igor_Nem"
},
{
"code": "",
"text": "The question is what a method should I use to run this query?What method do you use usually?You should use the same one.In more details:I avoid abstract layers like String’s MongoTemplate so I have no idea of what method could be in this context. I just know that usually a query is performed on a collection, so method should be method from MongoDatabase that gives you a MongoCollection like getCollection(). Once you get the MongoCollection object you usually use aggregate() to run an aggregation.But there is no magic. Calling Document.parse() is just a way to build the query. You use the query as you used any other query.If you are not familiar enough with java and mongo together, university.mongodb.com offers a free (as in no fee) course for that.",
"username": "steevej"
},
{
"code": "db.**method**(query);\ndb.find().sort().skip().limit()db.aggregate([ {$match:\"...\"},{$sort:\"...\"},{\"$skip:\"...\"},{$limit:\"...\"}])",
"text": "The question is what a method should I use to run this query?you can say there are two types of methods: cursor and aggregation.the cursor is actually a category name for “find/sort/limit/skip” methods. you can append them to one other: db.find().sort().skip().limit(). “find” is the first in the line to return a cursor.aggregation is a “pipeline”. each method above will be a single stage in the pipeline array: db.aggregate([ {$match:\"...\"},{$sort:\"...\"},{\"$skip:\"...\"},{$limit:\"...\"}]). every stage can be “parsed” separately from different strings as long as they are combined in an array (array or list, which one java driver uses? check documentation). aggregation does not return a cursor unless set explicitly.which one your query fits into?",
"username": "Yilmaz_Durmaz"
},
{
"code": "com.mongodb.MongoCommandException: Command failed with error 40324 (Location40324): 'Unrecognized pipeline stage name: 'pipeline'' on server localhost:27017. The full response is { \"ok\" : 0.0, \"errmsg\" : \"Unrecognized pipeline stage name: 'pipeline'\", \"code\" : 40324, \"codeName\" : \"Location40324\" }\n",
"text": "I use also MongoDatabase object for query execution, it written in my code example.\nThe problem is when I break the initial query to pipeline parts and then execute db.aggregate(part1, part2…), this returns me correct result, but if I use your approach(wrap whole my aggregation chain into one pipeline) then db.find(query) return me an empty response, and db.aggregate(List.of(query)) fail with following error:",
"username": "Igor_Nem"
},
{
"code": "",
"text": "If you would read the conversation a bit more thoroughly then you could know that a talking is about aggregation query wrapped up into one Bson object. How do you think, which method should be used for such case? My tries I wrote in answer above.",
"username": "Igor_Nem"
},
{
"code": "com.mongodb.MongoCommandException: Command failed with error 40324 (Location40324): 'Unrecognized pipeline stage name: 'pipeline'' on server localhost:27017. The full response is { \"ok\" : 0.0, \"errmsg\" : \"Unrecognized pipeline stage name: 'pipeline'\", \"code\" : 40324, \"codeName\" : \"Location40324\" }",
"text": "com.mongodb.MongoCommandException: Command failed with error 40324 (Location40324): 'Unrecognized pipeline stage name: 'pipeline'' on server localhost:27017. The full response is { \"ok\" : 0.0, \"errmsg\" : \"Unrecognized pipeline stage name: 'pipeline'\", \"code\" : 40324, \"codeName\" : \"Location40324\" }Now I see. The thing is that Document.parse needs a document and a pipeline is an array. That is why the query_string is an object with the field pipeline. You need to call getList() to get the array of stages.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks bro, now works like a charm!",
"username": "Igor_Nem"
},
{
"code": "",
"text": "If you would read the conversation a bit more thoroughly then you could know that a talking is about aggregation query wrapped up into one Bson objectI already did that before answering.but did you notice you asked many other questions after you already accepted a solution?I do not know your understanding level of MongoDB, and thus I gave an answer to one of those questions (was in quotes): given that you have a query to parse, which method would you use to execute it?I thought it was clear: if you take it to your original question, the method is “aggregate”. If you want to break it up into pieces it is “cursors”.Do not forget that there is no single way to query MongoDB. There is only a first method you see working and get familiar with, and many others you will hesitate to try because we, as human beings, fear the unfamiliar.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
]
| How to execute aggregation native query in java | 2022-11-17T17:11:12.756Z | How to execute aggregation native query in java | 5,238
[
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "Hello everybody,I am trying to make a Moderation system for a Discord Bot using MongoDB.\nI am trying to put all moderation cases in a specific collection by referencing the moderation collection but I am unsure on how I could do the population.I couldn’t really understand alot of the mongoose documentation on population. I am wondering if I just could put the schema types of the moderation schema right into the Guild one and use it like that.Please note that I have recently started leveraging MongoDB/Mongoose more, so I don’t know a couple things as of yet.\nSchemas:\nimage1162×738 92.6 KB",
"username": "Nikos_Papadiotis"
},
{
"code": "",
"text": "Hello @Nikos_Papadiotis, Welcome to the MongoDB Community Forum,You can find the examples and steps of the mongoose populate in this documentation,\nhttps://mongoosejs.com/docs/populate.htmlPlease provide more details about where you are stuck or getting any errors and what is the expected response you want.",
"username": "turivishal"
}
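In case a concrete sketch helps while reading that page, below is one hedged way the ref-and-populate pattern could be wired up for this use case. The model names (Guild, ModerationCase) and their fields are guesses, since the real schemas are only visible in the screenshot.

```js
const mongoose = require("mongoose");

// the "many" side: one document per moderation case (fields are assumptions)
const moderationCaseSchema = new mongoose.Schema({
  reason: String,
  moderatorId: String,
});

// the "one" side keeps an array of ObjectId refs to the cases
const guildSchema = new mongoose.Schema({
  guildId: String,
  cases: [{ type: mongoose.Schema.Types.ObjectId, ref: "ModerationCase" }],
});

const ModerationCase = mongoose.model("ModerationCase", moderationCaseSchema);
const Guild = mongoose.model("Guild", guildSchema);

// create a case, attach it to the guild, and read it back populated
async function addCase(guildId, reason) {
  const moderationCase = await ModerationCase.create({ reason });
  await Guild.updateOne({ guildId }, { $push: { cases: moderationCase._id } });
  return Guild.findOne({ guildId }).populate("cases");
}
```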
]
| How to populate a ref in mongoose easily? | 2022-11-26T08:01:19.670Z | How to populate a ref in mongoose easily? | 1,588 |
|
null | [
"queries"
]
| [
{
"code": "{\n\t\"_id\" : ObjectId(\"63810333ae6cd2130104bfd7\"),\n\t\"calendarId\" : ObjectId(\"63810333ae6cd2130104bfd6\"),\n\t\"date\" : 26,\n\t\"endDate\" : \"01-01-2024\",\n\t\"endMonth\" : 1,\n\t\"endTime\" : \"5:00 AM\",\n\t\"endYear\" : 2024,\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-11-25T18:02:27.432666Z\",\n\t\"isActive\" : true,\n\t\"location\" : {\n\t\t\"latitude\" : \"127.386\",\n\t\t\"logitude\" : \"138.43\"\n\t},\n\t\"reminder\" : \"30 MIN\",\n\t\"reverseEndDate\" : \"2024-01-01\",\n\t\"reverseStartDate\" : \"2023-09-26\",\n\t\"sortDate\" : 20230926,\n\t\"sortTime\" : 300,\n\t\"startDate\" : \"26-09-2023\",\n\t\"startMonth\" : 9,\n\t\"startTime\" : \"3:00 AM\",\n\t\"startYear\" : 2023,\n\t\"title\" : \"New post test\",\n\t\"updatedAt\" : \"2022-11-25T18:02:27.432696Z\",\n\t\"venue\" : \"MYSORE\"\n}\n{\n\t\"_id\" : ObjectId(\"63810363ae6cd21301348ef9\"),\n\t\"calendarId\" : ObjectId(\"63810df2ae6cd21301ad489b\"),\n\t\"date\" : 28,\n\t\"endDate\" : \"16-08-2022\",\n\t\"endMonth\" : 8,\n\t\"endTime\" : \"5:00 AM\",\n\t\"endYear\" : 2022,\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-11-25T18:48:18.809333Z\",\n\t\"isActive\" : true,\n\t\"location\" : {\n\t\t\"latitude\" : \"127.386\",\n\t\t\"logitude\" : \"138.43\"\n\t},\n\t\"reminder\" : \"30 MIN\",\n\t\"reverseEndDate\" : \"2022-08-16\",\n\t\"reverseStartDate\" : \"2022-08-15\",\n\t\"sortDate\" : 20220815,\n\t\"sortTime\" : 300,\n\t\"startDate\" : \"15-08-2022\",\n\t\"startMonth\" : 8,\n\t\"startTime\" : \"3:00 AM\",\n\t\"startYear\" : 2022,\n\t\"title\" : \"New\",\n\t\"updatedAt\" : \"2022-11-25T18:48:18.809358Z\",\n\t\"venue\" : \"MYSORE\"\n}\n{\n\t\"_id\" : ObjectId(\"638103baae6cd213014b43fd\"),\n\t\"calendarId\" : ObjectId(\"638103baae6cd213014b43fc\"),\n\t\"date\" : 30,\n\t\"endDate\" : \"02-10-2023\",\n\t\"endMonth\" : 10,\n\t\"endTime\" : \"5:00 AM\",\n\t\"endYear\" : 2023,\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-11-25T18:04:42.263385Z\",\n\t\"isActive\" : true,\n\t\"location\" : {\n\t\t\"latitude\" : \"127.386\",\n\t\t\"logitude\" : \"138.43\"\n\t},\n\t\"reminder\" : \"30 MIN\",\n\t\"reverseEndDate\" : \"2023-10-02\",\n\t\"reverseStartDate\" : \"2023-09-30\",\n\t\"sortDate\" : 20230930,\n\t\"sortTime\" : 300,\n\t\"startDate\" : \"30-09-2023\",\n\t\"startMonth\" : 9,\n\t\"startTime\" : \"3:00 AM\",\n\t\"startYear\" : 2023,\n\t\"title\" : \"New post test\",\n\t\"updatedAt\" : \"2022-11-25T18:04:42.263411Z\",\n\t\"venue\" : \"MYSORE\"\n}\n{\n\t\"_id\" : ObjectId(\"638103f9ae6cd2130198b544\"),\n\t\"calendarId\" : ObjectId(\"638103f9ae6cd2130198b543\"),\n\t\"date\" : 1,\n\t\"endDate\" : \"02-10-2023\",\n\t\"endMonth\" : 10,\n\t\"endTime\" : \"5:00 AM\",\n\t\"endYear\" : 2023,\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-11-25T18:05:45.365134Z\",\n\t\"isActive\" : false,\n\t\"location\" : {\n\t\t\"latitude\" : \"127.386\",\n\t\t\"logitude\" : \"138.43\"\n\t},\n\t\"reminder\" : \"30 MIN\",\n\t\"reverseEndDate\" : \"2023-10-02\",\n\t\"reverseStartDate\" : \"2023-10-01\",\n\t\"sortDate\" : 20231001,\n\t\"sortTime\" : 300,\n\t\"startDate\" : \"01-10-2023\",\n\t\"startMonth\" : 10,\n\t\"startTime\" : \"3:00 AM\",\n\t\"startYear\" : 2023,\n\t\"title\" : \"New post test\",\n\t\"updatedAt\" : \"2022-11-25T18:52:55.336916Z\",\n\t\"venue\" : \"MYSORE\"\n}\n{\n\t\"_id\" : ObjectId(\"63811207ae6cd21301772c14\"),\n\t\"calendarId\" : 
ObjectId(\"63811207ae6cd21301772c13\"),\n\t\"endDate\" : \"02-09-2023\",\n\t\"endMonth\" : 9,\n\t\"endTime\" : \"5:00 AM\",\n\t\"endYear\" : 2023,\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-11-25T19:05:43.518031Z\",\n\t\"isActive\" : true,\n\t\"location\" : {\n\t\t\"latitude\" : \"127.386\",\n\t\t\"logitude\" : \"138.43\"\n\t},\n\t\"reminder\" : \"30 MIN\",\n\t\"reverseEndDate\" : \"2023-09-02\",\n\t\"reverseStartDate\" : \"2022-10-01\",\n\t\"sortDate\" : 20221001,\n\t\"sortTime\" : 300,\n\t\"startDate\" : \"01-10-2022\",\n\t\"startMonth\" : 10,\n\t\"startTime\" : \"3:00 AM\",\n\t\"startYear\" : 2022,\n\t\"title\" : \"New post test\",\n\t\"updatedAt\" : \"2022-11-25T19:05:43.518053Z\",\n\t\"venue\" : \"MYSORE\"\n}\n db.calendar_events_db.find({\"$and\":[{startMonth:{\"$lte\":9}},{endMonth:{\"$gte\":9}},{startYear:{\"$lte\":2023}},{endYear:{\"$gte\":2023}}]}).pretty()\n{\n\t\"_id\" : ObjectId(\"63810333ae6cd2130104bfd7\"),\n\t\"calendarId\" : ObjectId(\"63810333ae6cd2130104bfd6\"),\n\t\"date\" : 26,\n\t\"endDate\" : \"01-01-2024\",\n\t\"endMonth\" : 1,\n\t\"endTime\" : \"5:00 AM\",\n\t\"endYear\" : 2024,\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-11-25T18:02:27.432666Z\",\n\t\"isActive\" : true,\n\t\"location\" : {\n\t\t\"latitude\" : \"127.386\",\n\t\t\"logitude\" : \"138.43\"\n\t},\n\t\"reminder\" : \"30 MIN\",\n\t\"reverseEndDate\" : \"2024-01-01\",\n\t\"reverseStartDate\" : \"2023-09-26\",\n\t\"sortDate\" : 20230926,\n\t\"sortTime\" : 300,\n\t\"startDate\" : \"26-09-2023\",\n\t\"startMonth\" : 9,\n\t\"startTime\" : \"3:00 AM\",\n\t\"startYear\" : 2023,\n\t\"title\" : \"New post test\",\n\t\"updatedAt\" : \"2022-11-25T18:02:27.432696Z\",\n\t\"venue\" : \"MYSORE\"\n}\n{\n\t\"_id\" : ObjectId(\"638103baae6cd213014b43fd\"),\n\t\"calendarId\" : ObjectId(\"638103baae6cd213014b43fc\"),\n\t\"date\" : 30,\n\t\"endDate\" : \"02-10-2023\",\n\t\"endMonth\" : 10,\n\t\"endTime\" : \"5:00 AM\",\n\t\"endYear\" : 2023,\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-11-25T18:04:42.263385Z\",\n\t\"isActive\" : true,\n\t\"location\" : {\n\t\t\"latitude\" : \"127.386\",\n\t\t\"logitude\" : \"138.43\"\n\t},\n\t\"reminder\" : \"30 MIN\",\n\t\"reverseEndDate\" : \"2023-10-02\",\n\t\"reverseStartDate\" : \"2023-09-30\",\n\t\"sortDate\" : 20230930,\n\t\"sortTime\" : 300,\n\t\"startDate\" : \"30-09-2023\",\n\t\"startMonth\" : 9,\n\t\"startTime\" : \"3:00 AM\",\n\t\"startYear\" : 2023,\n\t\"title\" : \"New post test\",\n\t\"updatedAt\" : \"2022-11-25T18:04:42.263411Z\",\n\t\"venue\" : \"MYSORE\"\n}\n\n{\n\t\"_id\" : ObjectId(\"638103baae6cd213014b43fd\"),\n\t\"calendarId\" : ObjectId(\"638103baae6cd213014b43fc\"),\n\t\"date\" : 30,\n\t\"endDate\" : \"02-10-2023\",\n\t\"endMonth\" : 10,\n\t\"endTime\" : \"5:00 AM\",\n\t\"endYear\" : 2023,\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-11-25T18:04:42.263385Z\",\n\t\"isActive\" : true,\n\t\"location\" : {\n\t\t\"latitude\" : \"127.386\",\n\t\t\"logitude\" : \"138.43\"\n\t},\n\t\"reminder\" : \"30 MIN\",\n\t\"reverseEndDate\" : \"2023-10-02\",\n\t\"reverseStartDate\" : \"2023-09-30\",\n\t\"sortDate\" : 20230930,\n\t\"sortTime\" : 300,\n\t\"startDate\" : \"30-09-2023\",\n\t\"startMonth\" : 9,\n\t\"startTime\" : \"3:00 AM\",\n\t\"startYear\" : 2023,\n\t\"title\" : \"New post test\",\n\t\"updatedAt\" : \"2022-11-25T18:04:42.263411Z\",\n\t\"venue\" : \"MYSORE\"\n}\n\n",
"text": "query used to filterexpected documentsresultAny way to filter the documents",
"username": "Prathamesh_N"
},
{
"code": "$dateDiff$dateFromPartsdb.collection.aggregate([\n {\n $addFields: {\n is_it_good: {\n $and: [\n {\n $lte: [\n {\n $dateDiff: {\n startDate: {\n $dateFromParts: {\n year: 2023,\n month: 9,\n },\n },\n endDate: {\n $dateFromParts: {\n year: \"$startYear\",\n month: \"$startMonth\",\n },\n },\n unit: \"day\",\n },\n },\n 0,\n ],\n },\n {\n $gte: [\n {\n $dateDiff: {\n startDate: {\n $dateFromParts: {\n year: 2023,\n month: 9,\n },\n },\n endDate: {\n $dateFromParts: {\n year: \"$endYear\",\n month: \"$endMonth\",\n },\n },\n unit: \"day\",\n },\n },\n 1,\n ],\n },\n ],\n },\n },\n },\n {\n $match: {\n is_it_good: true,\n },\n },\n {\n $unset: \"is_it_good\",\n },\n]);\n\n",
"text": "working with date format is not an easy job, especially when you try to calculate differences.the code below uses $dateDiff to calculate the difference, but it needs proper dates. for that purpose, I used $dateFromParts. Also note that logic and comparison operators are also things to be used cautiously as they can wreak havoc if you do not notice what they include/exclude.I also assumed you want a query for documents that have already started before (or starts at) 2023-09 and will end at least 1 month after that.if you change the logic to “ends in that month” (change 1 to 0 in $gte) then you will also get the document with id “63811207ae6cd21301772c14” (“endDate”: “02-09-2023”).use days to fine tune to difference, and do not forget the query depends on the assumptions above and may (and possibly will not) work if you change the requirements too much. Instead, use this as an example and try to understand how it is made.",
"username": "Yilmaz_Durmaz"
},
{
"code": "startDate: { $lte: \"09-2023\" },\n// and\nendDate: { $gte: \"09-2023\" }\n{\n \"_id\": ObjectId(\"63811207ae6cd21301772c14\"),\n \"calendarId\": ObjectId(\"63811207ae6cd21301772c13\"),\n \"endDate\": \"02-09-2023\",\n \"endMonth\": 9,\n \"endTime\": \"5:00 AM\",\n \"endYear\": 2023,\n \"groupId\": ObjectId(\"5f06cca74e51ba15f5167b86\"),\n \"insertedAt\": \"2022-11-25T19:05:43.518031Z\",\n \"isActive\": true,\n \"location\": {\n \"latitude\": \"127.386\",\n \"logitude\": \"138.43\"\n },\n \"reminder\": \"30 MIN\",\n \"reverseEndDate\": \"2023-09-02\",\n \"reverseStartDate\": \"2022-10-01\",\n \"sortDate\": 2.0221001e+07,\n \"sortTime\": 300,\n \"startDate\": \"01-10-2022\",\n \"startMonth\": 10,\n \"startTime\": \"3:00 AM\",\n \"startYear\": 2022,\n \"title\": \"New post test\",\n \"updatedAt\": \"2022-11-25T19:05:43.518053Z\",\n \"venue\": \"MYSORE\"\n }\n$ordb.calendar_events_db.find({\n \"$and\": [\n {\n \"$or\": [\n {\n \"startYear\": 2023,\n \"startMonth\": { \"$lte\": 9 }\n },\n { \"startYear\": { \"$lt\": 2023 } }\n ]\n },\n {\n \"$or\": [\n {\n \"endYear\": 2023,\n \"endMonth\": { \"$gte\": 9 }\n },\n { \"endYear\": { \"$gt\": 2023 } }\n ]\n }\n ]\n}).pretty()\n",
"text": "expected documentsAs I can understand as per your query, you want to get the documents as per below conditions,You expected 2 documents in the result, but why is the below document will not come in the result?\nWhere startDate is “10-2022” and endDate is “09-2023”, it should come in the result as per your query.You can use the below conditions if you want to match separately month and year,Playground",
"username": "turivishal"
},
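One more option the sample documents above already allow for: reverseStartDate and reverseEndDate are zero-padded "YYYY-MM-DD" strings, so plain lexicographic comparison against month boundaries also works. Like the query above, this sketch treats "overlaps September 2023" as the criterion, so it raises the same open question about the 01-10-2022 to 02-09-2023 document.

```js
// events that overlap September 2023, using the zero-padded reverse* date strings
db.calendar_events_db.find({
  reverseStartDate: { $lt: "2023-10" },  // started before October 2023
  reverseEndDate: { $gte: "2023-09" }    // ends in or after September 2023
})
```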
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| I was supposed to filter documents based on month and year between query but I'm not getting 1 month of next year due to the condition i have used any other ways to filter out | 2022-11-25T19:23:58.856Z | I was supposed to filter documents based on month and year between query but I’m not getting 1 month of next year due to the condition i have used any other ways to filter out | 2,162 |
null | [
"100daysofcode"
]
| [
{
"code": "",
"text": "Hello lovely people, I’m here again joining the #100DaysofCode again for a second round, after finishing the first one covering a lot of topics concerning backend development, project management, software engineering and more.Now the second round is here. And in this #100DaysofCode, I’ll be with you all covering System Design, starting from zero and reaching the hero level at the end of the 100 days. As we will go through principles, methods, theorem’s and of course a bunch of examples and real business solutions scenarios that will help you cracking System Design interviews See you all in my first day starting from tomorrow ",
"username": "eliehannouch"
},
{
"code": "",
"text": "Hello amazing people, the first day in our system design journey is here. Be ready, & committed to be a game master in 100 days.",
"username": "eliehannouch"
},
{
"code": "",
"text": "Hello amazing folks, what an amazing day. The counter is increasing and only 98 day left . What I really love about 100daysofcode is the commitment level that increase in me, talking and sharing with my amazing community new topics on a daily basis.And for today we will start discussing Horizontal scaling / Vertical scaling, how they differ and when to implement each of them.Horizontal scaling (aka scaling out) refers to adding additional nodes or machines to your infrastructure to cope with new demands.If you are hosting an application on a server and find that it no longer has the capacity or capabilities to handle traffic, adding a server may be your solution. \n \nimage726×570 58.7 KB\nIncreased performanceIncreased resilience and fault toleranceScaling is easier from a hardware perspectiveFewer periods of downtimeIncreased Initial costsIncreased complexity of maintenance and operation - Multiple servers are harder to maintain than a single server is.Coming from the horizontal scaling which means adding new nodes to handle the system traffic, vertical scaling describes adding more power to your current machines. For instance, if your server requires more processing power, vertical scaling would mean upgrading the CPUs. You can also vertically scale the memory, storage, or network speed.Vertical scaling may also describe replacing a server entirely or moving a server’s workload to an upgraded one.- \nimage514×796 54.8 KB\nCost-effectiveLess complex process communicationLess complicated maintenanceUpgrade limitationsSingle point of failureHigher possibility for downtime",
"username": "eliehannouch"
},
{
"code": "",
"text": "Hello friends, a new day is here and a dose of new knowledge is required to wrap the day in an informative way. Today we will discuss some new topics in this amazing field, starting with a definition on servers than moving to proxy servers exploring their key role and importance in implementing secure systems.A server stores, sends, and receives data. In essence, it “serves” something else and exists to provide services. A computer, software program, or even a storage device may act as a server, and it may provide one service or several.We have several types of servers, and each of them serve a specific purpose. Starting with mail servers, game servers, print servers, proxy servers and more.Today friends, we are going to discuss what a proxy server is all about, it’s key role and the benefits from using it.A special type of servers, that act as a channel between a user and the internet. It separate the end user from the website they are browsing serving as a man in middle between them.How a proxy server work? \nimage900×680 8.35 KB\nProxy servers Benefits ? Improved privacy Improved security Access to blocked resources Cache data to speed up requests Filter content Control of the internet usage inside an organization, home …",
"username": "eliehannouch"
},
{
"code": "",
"text": "Hello community, the 4th day is here, and only 96 day are remaining. A lot of fun, knowledge are coming in this journey. And as everyday today we will discover a new and interesting topic in the system design industry. Micro-services architecture, it’s benefits and how to implement them.Independent deployabilityAdditional options for scaling up applicationsHelp isolate the “blast radius” of service failuresAllows developers to “buy into” a new series of options and choices that app developers can make A Need to Independently Deploy New Functionality with Zero Downtime A Need to Isolate Specific Data and Data Processing Through Data Partitioning A Need to Enable a High Degree of Team Autonomy",
"username": "eliehannouch"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
]
| The Journey of #100DaysofCode Round#2 (@eliehannouch) | 2022-11-20T20:05:38.302Z | The Journey of #100DaysofCode Round#2 (@eliehannouch) | 2,576 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "findfindaggregationfind$arrayElemAtfindinnerNamedb.collection.find({\n $or: [\n {\n $expr: {\n \"$eq\": [\n {\n \"$arrayElemAt\": [\n \"$matches.name\",\n -1\n ]\n },\n \"match 5\"\n ]\n }\n }\n ]\n})\n[\n {\n \"matches\": [\n {\n \"name\": \"match 1\",\n \"ids\": [\n {\n \"innerName\": \"12\"\n },\n {\n \"innerName\": \"3\"\n }\n ]\n }\n ]\n },\n {\n \"matches\": [\n {\n \"name\": \"match 5\",\n \"ids\": [\n {\n \"innerName\": \"123\"\n },\n {\n \"innerName\": \"1234\"\n }\n ]\n },\n {\n \"name\": \"match 5\",\n \"ids\": [\n {\n \"innerName\": \"1\"\n },\n {\n \"innerName\": \"1234\"\n },\n \n ]\n },\n \n ]\n }\n]\n",
"text": "I need to filter documents according to the value of the last element of a nested array using find\nThe reason I need it with find and not aggregation is the fact that the endpoint I’m sending the query to handle it is using only find. pretty weird but I gotta work with that at the moment.I tried to use $arrayElemAt which is what I’ve found so far to handle it with find and I managed to get the first arrays value, but I can’t figure out how to select the ids array and to act according to innerName value.\nAny suggestions?Working query for the first array:Mock data of the use case:",
"username": "orpt"
},
{
"code": "",
"text": "Hello @orpt ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a desired solution?\nIf not, could you please share the desired output with respect to the provided mock data?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "{ \"$getField\" : {\n \"field\" : \"innerName\" ,\n \"input\" : { \"$arrayElemAt\" : [\n { \"$getField\" : {\n \"field\" : \"ids\" ,\n \"input\" : { \"$arrayElemAt\" : [\n \"$matches\" , \n -1\n ] }\n } } ,\n -1\n ] }\n} }\n{ \"$getField\" : {\n \"field\" : \"innerName\" ,\n \"input\" : { \"$last\" : { \"$getField\" : {\n \"field\" : \"ids\" ,\n \"input\" : { \"$last\" : \"$matches\" }\n } } }\n} }\n",
"text": "If you are running 5.0 or more recent, you use $getField together with 2 calls to $arrayElemAt. Something along the following untested lines that results in an expression equals to the innerName of the last ids of the last matches:Note that with 4.0 and above you may use $last rather than $arrayElemAt which gives more concise code that should look like",
"username": "steevej"
},
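To show how those expressions plug back into the original find(), here is a sketch that keeps the field names from the mock data (matches, ids, innerName). It needs MongoDB 5.0+ because of $getField, and the compared values ("match 5", "1234") are only placeholders.

```js
// match documents whose last element of "matches" is named "match 5"
// AND whose last "ids.innerName" inside that last match equals "1234"
db.collection.find({
  $expr: {
    $and: [
      {
        $eq: [
          { $getField: { field: "name", input: { $last: "$matches" } } },
          "match 5"
        ]
      },
      {
        $eq: [
          {
            $getField: {
              field: "innerName",
              input: { $last: { $getField: { field: "ids", input: { $last: "$matches" } } } }
            }
          },
          "1234"
        ]
      }
    ]
  }
})
```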
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Find documents by the last element of nested array | 2022-11-17T20:28:25.416Z | Find documents by the last element of nested array | 2,823 |
null | [
"queries",
"atlas-search",
"text-search"
]
| [
{
"code": "",
"text": "Hi everyone,I would like to obtain the results of a text search with the results of a regex. In particular, assuming that the input string is “substring”, I would like to be able to do the following filter:{ $or : [ { name : { $regex : “.substring.” } }, {$text : { $search: “substring” } } ] }Of course, I’ve previously created the $text index on “name” field, however it gives me a Mongo Error since $or and $text operator cannot be used together.\nHow can I achieve this filtering?Thanks in advance",
"username": "Matteo_Tarantino"
},
{
"code": "[{$match: {\n $text: {\n $search: 'substring'\n }\n}}, {$unionWith: {\n coll: '<COLLECTION-NAME>',\n pipeline: [\n {\n $match: {\n name: {\n $regex: '. substring.'\n }\n }\n }\n ]\n}}]\n",
"text": "Hi @Matteo_Tarantino ,Why do you need this kind of double search if you already perform a text search. In general we recommend using Atlas search for full text searches if it happens that your cluster is an Atlas cluster:Learn how to use a regular expression in your Atlas Search query.If you still insist on running this query using the traditional text indexes , I believe the use of $unionWith aggregation might work:Ty",
"username": "Pavel_Duchovny"
},
{
"code": "$searchnamename$search$search",
"text": "Hi @Pavel_Duchovny ,to the best of my knowledge $search operator does not find a match if the searching string is actually a substring of the field I’m seaching on. In particular:Suppose that you are searching by name and the name field of a document is “Matteo”. If the searching string is “atte” the $search operator discards that document. At leats I’m experiencing this behavior when applying the $search operator in a serverless atlas cluster. Is it correct?",
"username": "Matteo_Tarantino"
},
{
"code": "[{$search: {\n regex: {\n path: 'title',\n query: '.*los.*',\n allowAnalyzedField: true\n }\n}}]\nsample_mflix.movies.*los.*title : \"Broken Blossoms or The Yellow Man and the Girl\"\n...\ntitle : \"The Lost World\"\n",
"text": "Hi @Matteo_Tarantino ,Do you mean when using the “regex” operator in the $search stage of Atlas Search?With regex operator (I linked you with…) you can search for partial expressions of a word:In this example I search the sample_mflix.movies collection using a regex .*los.* and it find strings like:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
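If the cluster is on Atlas, the OR the original question asked for can also be expressed in a single $search stage with a compound of should clauses (at least one has to match). This is only a sketch: the index name default, the collection name and the searched field are assumptions.

```js
db.items.aggregate([
  {
    $search: {
      index: "default",
      compound: {
        should: [
          { text: { query: "substring", path: "name" } },
          { regex: { query: ".*substring.*", path: "name", allowAnalyzedField: true } }
        ],
        minimumShouldMatch: 1
      }
    }
  }
])
```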
{
"code": "",
"text": "This operator concatenates two regular expressions a and b . No character represents this word combiner operator; you simply put b after a . The result is a regular expression that will match a string if a matches its first part and b matches the rest.",
"username": "Richard_Gravener"
},
{
"code": "",
"text": "Hi @Richard_Gravener ,I need an example document and query…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "[$unionWith: coll: ‘COLLECTION-NAME>’, pipeline: [$match: name: $regex: ‘. substring.’ ] ]example document have not show on this platform for this security reason. If want to see please visit …",
"username": "Richard_Gravener"
}
]
| Combine $text with $regex | 2022-03-18T20:26:55.025Z | Combine $text with $regex | 10,824 |
null | [
"aggregation"
]
| [
{
"code": "// user\n{\n \"_id\":1,\n \"cart\":[\n {\"_id\":2,\"type\":\"A\"},\n {\"_id\":3,\"type\":\"B\"}\n ]\n}\n// items\n[\n {\"_id\":2,\"name\":\"item 1\"},\n {\"_id\":3,\"name\":\"item 2\"}\n]\n\n// expected\n{\n \"_id\":1,\n \"cart\":[\n {\"_id\":2,\"type\":\"A\",item:{\"_id\":2,\"name\":\"item 1\"}},\n {\"_id\":3,\"type\":\"B\",item:{\"_id\":3,\"name\":\"item 2\"}}\n ]\n}\n{\n from: \"items\",\n foreignField: \"_id\",\n localField: \"cart._id\",\n let:{type:\"$cart.type\"},\n pipeline:[\n {$project:\n {\n _id:1,\n type:\"$$type\",\n item:\"$$ROOT\"\n }\n }\n ],\n as: \"result\"\n}\ntype:[\"A\",\"B\"]",
"text": "I was trying to help in another post here : Lookup & populate objects in array. I have a working solution, but I wonder if we could do that with a single $lookup operation.The array items we use to $lookup are objects themselves. It would not be a problem if we were just replacing the whole item, but we need to keep the original item (or parts of it at least) and insert the result back into it.I used “let” and “pipeline” and wrote the following query but the “type” was not what I expected:here is the problem: instead of taking the value of the current item, “let” scans the whole array and extracts the “type” from all items, and sends this array to the pipeline: type:[\"A\",\"B\"].Is there a simpler way than $unwind/$group, or without complex $match if that matters, that I missed?",
"username": "Yilmaz_Durmaz"
},
{
"code": "localFields$unwind$group$mapcart$filter_id$first$arrayElemAt$mergeObjectscartitemitems$$REMOVE$projectdb.user.aggregate([\n {\n \"$lookup\": {\n \"from\": \"items\",\n \"localField\": \"cart._id\",\n \"foreignField\": \"_id\",\n \"as\": \"items\"\n }\n },\n {\n \"$addFields\": {\n \"cart\": {\n \"$map\": {\n \"input\": \"$cart\",\n \"in\": {\n \"$mergeObjects\": [\n \"$$this\",\n {\n \"item\": {\n \"$first\": {\n \"$filter\": {\n \"input\": \"$items\",\n \"as\": \"i\",\n \"cond\": { \"$eq\": [\"$$i._id\", \"$$this._id\"] }\n }\n }\n }\n }\n ]\n }\n }\n },\n \"items\": \"$$REMOVE\"\n }\n }\n])\n",
"text": "Hello @Yilmaz_Durmaz,Here don’t need lookup with the pipeline, you can pass an array of ids in localFields property,\nYes, You can use another approach without $unwind and $group stages,Playground",
"username": "turivishal"
},
{
"code": "",
"text": "Thanks, @turivishal, for giving your timeI am a bit lost in the nesting levels, so I will need time to digest it ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is it possible to $lookup without $unwind to use a property of current matched item? | 2022-11-25T11:39:28.164Z | Is it possible to $lookup without $unwind to use a property of current matched item? | 2,684 |
null | [
"spark-connector"
]
| [
{
"code": "change.stream.publish.full.document.only=true_data_id",
"text": "Hi,I’m trying to replicate a MongoDB collection to Delta Lake using the Spark Connector with structured streaming but there is one problem.\nWhen using the option change.stream.publish.full.document.only=true I won’t get the deleted document. But that is expected.\nBut if I omit the option, I only get a row with the _data field. All other fields are null.\nI would at least expect to have the _id field so I can delete the entry.Can someone explain me how to capture deleted documents with structured streaming?Thanks,\nAmer",
"username": "Amer_Aljovic"
},
{
"code": "",
"text": "can you use something like this:It will be a SparkConf setting so “spark.mongodb.read.aggregation.pipeline”:“[{”$match\": {“operationType”: “insert”}]’ for exampleref: MongoDB Connector for Spark V10 and Change Stream - #11 by khang_pham",
"username": "khang_pham"
},
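Independently of Spark, the change stream itself does expose the _id of a deleted document: a delete event has no fullDocument, but it always carries documentKey. A quick mongosh sketch (the collection name is a placeholder) of what such an event looks like, which is the information a pipeline keeping "delete" events would hand to the connector:

```js
// watch only delete events and print the key of each removed document
const cursor = db.myCollection.watch([{ $match: { operationType: "delete" } }]);
while (cursor.hasNext()) {
  const event = cursor.next();
  printjson(event.documentKey); // e.g. { _id: ObjectId("...") }; enough to delete the row downstream
}
```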
{
"code": "",
"text": "I tried this…but pipeline didnt triggeted…where u exactly want to add pipeline in structured streaming…",
"username": "Krishnamoorthy_Kalidoss"
}
]
| Capture id of deleted document with Spark Structured Streaming | 2022-06-01T09:33:11.418Z | Capture id of deleted document with Spark Structured Streaming | 3,194 |
[
"stockholm-mug"
]
| [
{
"code": "Senior Solutions ArchitectSenior Solutions Architect, MongoDBSenior Solutions Architect, MongoDB",
"text": "\nstockholm-mug-kickoff1920×1078 82.7 KB\nStockholm MongoDB User Group is excited to kick-off and launch the user group in the region with their first meetup on 12th April 2022. The meetup is being hosted to bring together the interested developers and MongoDB enthusiasts in the region, introduce everyone to the group, and share the plan for future events.The introduction will be followed by, a quick demo by @emil_nildersen on exploring \"Energy Prices\" with MongoDB Charts. If you don’t know - MongoDB Charts is a quick and simple way to create visualizations for your MongoDB data. The event will close with some mingle time along with food and drinks! We are looking forward to seeing you all soon! In the meantime make sure you join the Stockholm Group to introduce yourself and stay abreast with future meetups and discussions.Event Type: In-Person\nVasagatan 28 · StockholmTo RSVP - Sign in and then please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going.Senior Solutions Architect, MongoDB`–Senior Solutions Architect, MongoDB–Senior Solutions Architect, MongoDB",
"username": "Johannes_Brannstrom"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
]
| Stockholm MUG: Kickoff & Explore Energy Prices with MongoDB | 2022-04-02T00:01:32.442Z | Stockholm MUG: Kickoff & Explore Energy Prices with MongoDB | 4,180 |
|
null | [
"data-modeling",
"react-native",
"schema-validation"
]
| [
{
"code": "export const PlayerSchema = {\n\tname: 'Player',\n\tproperties: {\n\t\t_id: 'objectId?',\n\t\tname: 'string?',\n\t\tposition: 'string?',\n\t\turi: 'string?',\n\t},\n\tprimaryKey: '_id',\n};\n\nexport const TeamSchema = {\n\tname: 'Team',\n\tproperties: {\n\t\t_id: 'objectId?',\n\t\tplayers: 'Player[]',\n\t\tteam_image_url: 'string?',\n\t\tteam_name: 'string?',\n\t\tuser_id: 'string?',\n\t},\n\tprimaryKey: '_id',\n};\n{\n \"title\": \"Player\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"club_id\": {\n \"bsonType\": \"objectId\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"club\": {\n \"bsonType\": \"string\"\n },\n \"position\": {\n \"bsonType\": \"string\"\n },\n \"uri\": {\n \"bsonType\": \"string\"\n }\n }\n}\n{\n \"title\": \"Team\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"team_name\": {\n \"bsonType\": \"string\"\n },\n \"team_image_url\": {\n \"bsonType\": \"string\"\n },\n \"user_id\": {\n \"bsonType\": \"string\"\n },\n \"players\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"objectId\"\n }\n }\n }\n}\n",
"text": "I have defined two collections in my code, Team and Player. I can’t figure out how to turn it into a JSON schema in the realm UI, with the relationships. I want the team to contain a list of players.I think I figured it out like this:",
"username": "Mads_Haerup"
},
{
"code": "TeamSchemaplayers: Player[]Playerexport const PlayerSchema = {\n\tname: 'Player',\n\tproperties: {\n\t\t_id: 'objectId?',\n\t\tname: 'string?',\n\t\tposition: 'string?',\n\t\turi: 'string?',\n // assignee field links the Player back to Team.players.\n assignee: {\n type: 'linkingObjects',\n objectType: 'Team',\n property: 'players'\n }\n\t},\n\tprimaryKey: '_id',\n};\nconst team = useQuery(Team).filtered(`team_name == ${nameOfTeamToFind}`)[0];\nPlayerteam.players.push(newPlayer)\n",
"text": "Hi @Mads_Haerup! You may want to consider an inverse relationship in your schemas.Looks like you’ve got part of it already. TeamSchema has players: Player[], which is correct. However, you need to add a specific field to Player to complete the relationship.To actually add the player to a team, you need to query for your team. If you’re using the Realm React library, it would look something like this:After you query for your team, create a new Player object. Then, push the player to Team.players:Hopefully, some of this helps!",
"username": "Kyle_Rollins"
},
{
"code": "const loggedInTeam = useContext(TeamContext);\n\nconst team = useQuery(\"Team\").filtered(`team_name == ${loggedInTeam`)[0];\n\nconst team = useQuery(\"Team\").filtered(`team_name == 'DC United'`)[0];\n",
"text": "Hi @Kyle_Rollins , Thanks for the answer, that helped me out a lot. Before Closing I just have one more pressing error to solve.I set the logged in team in a context. But when I’m filtering for it, like you showed above\nI get this error:Invalid predicate: ‘team_name == DC United’: syntax error, unexpected identifier, expecting end of fileWith the code below:But with this code, it’s fine.",
"username": "Mads_Haerup"
},
{
"code": "team_name == ${loggedInTeam}loggedInTeamteam_name${loggedInTeam.name}team_name_idconst team = useQuery(Team).filtered(`_id == oid(${loggedInTeam._id})`)[0];\noid()_id",
"text": "I’m glad I could help, @Mads_Haerup!I noticed a couple things that might be causing the error.",
"username": "Kyle_Rollins"
},
{
"code": "",
"text": "@Kyle_Rollins\nI just up messed up the formatting here. I didn’t forget the bracket in my code.LoggedInTeam is not part of my Schema. LoggedInTeam was just a context string. in this example loggInTeam was a string set to ‘DC United’. So essentially loggedInTeam and ‘DC United’ is the same thing, so I was wondering why it didn’t work. Anyways, I’m doing some restructuring, so I will close the original question and open a new, if It is necessary, Thanks for the help.",
"username": "Mads_Haerup"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to create one-to-many relationship React native Realm SDK | 2022-11-22T14:04:28.036Z | How to create one-to-many relationship React native Realm SDK | 2,894 |
null | []
| [
{
"code": "\"dId\":\"869738067258421\"\ncreatedOn:2022-11-22T04:41:36.152+00:00\n\"dId\":\"869738067258421\"\ncreatedOn:2022-11-20T04:41:36.152+00:00\n\"dId\":\"869738067258421\"\ncreatedOn:2022-11-15T04:41:36.152+00:00\n\n\n\"dId\":\"869738067254263\"\ncreatedOn:2022-09-20T04:41:36.152+00:00\n\"dId\":\"869738067254263\"\ncreatedOn:2022-09-15T04:41:36.152+00:00\n\n\"dId\":\"869738067441613\"\ncreatedOn:2022-10-01T04:41:36.152+00:00\n",
"text": "Hi,\nI am having a collection having fields with dId and createdOn fields, Where i am trying to get the last one week records based on the dId with createdOn field. Assume , the record existing with dId:869738067254263 inserted last record on 2022-11-21 and for dId:869738067258421 lastly inserted on 2022-10-13 and i want to get the dId’s and based on the last inserted date to previous 7 days records. I am expecting the output asHope you get my query,\nreport.csv (18.3 KB)Following attachment is the data for your reference.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "\"createdOn\"\"dId\"\"dId\"[\n{\"dId\":\"869738067258421\",createdOn:ISODate(\"2022-11-22T04:41:36.152+00:00\")}, /// <- Latest\n{\"dId\":\"869738067258421\",createdOn:ISODate(\"2022-11-20T04:41:36.152+00:00\")},\n{\"dId\":\"869738067258421\",createdOn:ISODate(\"2022-11-15T04:41:36.152+00:00\")},\n{\"dId\":\"869738067258421\",createdOn:ISODate(\"2022-10-10T04:41:36.152+00:00\")},\n{\"dId\":\"869738067258421\",createdOn:ISODate(\"2021-11-15T04:41:36.152+00:00\")},\n{\"dId\":\"869738067254263\",createdOn:ISODate(\"2022-09-20T04:41:36.152+00:00\")},\n{\"dId\":\"869738067254263\",createdOn:ISODate(\"2022-09-15T04:41:36.152+00:00\")},\n{\"dId\":\"869738067254263\",createdOn:ISODate(\"2022-10-20T04:41:36.152+00:00\")}, /// <- Latest\n{\"dId\":\"869738067254263\",createdOn:ISODate(\"2022-10-12T04:41:36.152+00:00\")},\n{\"dId\":\"869738067254263\",createdOn:ISODate(\"2022-10-18T04:41:36.152+00:00\")},\n]\n\"dId\"[\n {\n '$group': { _id: '$dId', dateArray: { '$addToSet': '$createdOn' } }\n },\n { '$addFields': { latestCreatedOnDate: { '$max': '$dateArray' } } },\n {\n '$addFields': {\n sevenDaysDate: {\n '$dateSubtract': { startDate: '$latestCreatedOnDate', unit: 'day', amount: 7 }\n }\n }\n },\n {\n '$addFields': {\n filteredArray: {\n '$filter': {\n input: '$dateArray',\n as: 'date',\n cond: {\n '$and': [\n { '$gte': [ '$$date', '$sevenDaysDate' ] },\n { '$lte': [ '$$date', '$latestCreatedOnDate' ] }\n ]\n }\n }\n }\n }\n }\n]\nfilteredArray[\n {\n _id: '869738067258421',\n dateArray: [\n ISODate(\"2022-11-20T04:41:36.152Z\"),\n ISODate(\"2021-11-15T04:41:36.152Z\"),\n ISODate(\"2022-11-22T04:41:36.152Z\"),\n ISODate(\"2022-11-15T04:41:36.152Z\"),\n ISODate(\"2022-10-10T04:41:36.152Z\")\n ],\n latestCreatedOnDate: ISODate(\"2022-11-22T04:41:36.152Z\"),\n sevenDaysDate: ISODate(\"2022-11-15T04:41:36.152Z\"),\n filteredArray: [\n ISODate(\"2022-11-20T04:41:36.152Z\"),\n ISODate(\"2022-11-22T04:41:36.152Z\"),\n ISODate(\"2022-11-15T04:41:36.152Z\")\n ]\n },\n {\n _id: '869738067254263',\n dateArray: [\n ISODate(\"2022-09-20T04:41:36.152Z\"),\n ISODate(\"2022-09-15T04:41:36.152Z\"),\n ISODate(\"2022-10-20T04:41:36.152Z\"),\n ISODate(\"2022-10-18T04:41:36.152Z\"),\n ISODate(\"2022-10-12T04:41:36.152Z\")\n ],\n latestCreatedOnDate: ISODate(\"2022-10-20T04:41:36.152Z\"),\n sevenDaysDate: ISODate(\"2022-10-13T04:41:36.152Z\"),\n filteredArray: [\n ISODate(\"2022-10-20T04:41:36.152Z\"),\n ISODate(\"2022-10-18T04:41:36.152Z\")\n ]\n }\n]\n$addFieldsdateArray",
"text": "Hi @MERUGUPALA_RAMES,Where i am trying to get the last one week records based on the dId with createdOn fieldI assume the latest \"createdOn\" field’s value for a particular \"dId\" represents the last inserted time for said \"dId\". Please correct me if I am wrong here in my assumption.Based off the data provided, I created a smaller sample test collection with documents shown below:I have commented the latest creation date values for the distinct \"dId\" value.One possible approach that may achieve what you are after is to utilise the aggregation pipeline stages below:Using the above pipeline stages, the output from my test environment is shown below (I presume the most important information you are after would be shown in the filteredArray field):You can alter the $addFields (or any other) stages accordingly but I have used them here to demonstrate the values at each stage of the pipeline where they are used.Depending on the amount of documents, the dateArray may become quite large. You can try minimising the amount of input documents using a more selective query at the start if it suits your use case.This pipeline was only briefly tested against a small dataset as shown at the top of my reply. If you believe this may work for you, please test thoroughly on a test environment to ensure it meets all you use case(s) and requirements.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "$dateSubtract",
"text": "$dateSubtractHello Jason,\nThanks for the query, that matches my requirement, and currently i am using 4.4.16 V, sorry to say this, i am really dont know how to alter with $dateSubtract in v4.4. it throwing an error. Definetly it willl work in v5+ but my bad, i forgoty to mention my version previously. Thats really my mistake. Is that possible to modify the query or is there any alternative way to use $dateSubtract instead. Kindly do needfull . I am tring from end to use any other alternate way. Kindly do needfull in this.Regards,\nRamesh.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "$subtract[\n {\n '$group': { _id: '$dId', dateArray: { '$addToSet': '$createdOn' } }\n },\n { '$addFields': { latestCreatedOnDate: { '$max': '$dateArray' } } },\n {\n '$addFields': {\n sevenDaysDate: {\n '$subtract': ['$latestCreatedOnDate', 7 * 24 * 60 * 60 * 1000 ]\n }\n }\n },\n {\n '$addFields': {\n filteredArray: {\n '$filter': {\n input: '$dateArray',\n as: 'date',\n cond: {\n '$and': [\n { '$gte': [ '$$date', '$sevenDaysDate' ] },\n { '$lte': [ '$$date', '$latestCreatedOnDate' ] }\n ]\n }\n }\n }\n }\n }\n]\nsevenDaysDate",
"text": "Perhaps using $subtract might work for you :I used (7 * 24 * 60 * 60 * 1000) milliseconds to calculate the 7 day period for the field sevenDaysDate.I only tested this briefly but it generated same output as the previous pipeline I had used in my prior reply.Again, please test thoroughly to ensure you encounter no issues and that it suits all your use case and requirements.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello Jason,\nThanks a lot to provide me the query which i am looking to get the results as per my requirement. It worked for my requirement. Once again thank a log for the support.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to get last 7 days records based on createdOn date field | 2022-11-22T12:45:44.578Z | How to get last 7 days records based on createdOn date field | 5,921 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "db.stores.aggregate([{$lookup: {from: \"deals\", localField: \"_id\", foreignField: \"store\", as: \"deals\"}}])\n{\n \"_id\": {\n \"$oid\": \"636525107e14050c705adca1\"\n },\n \"name\": \"Amazon\",\n \"slug\": \"amazon\",\n \"country\": \"usa\"\n}\n{\n \"_id\": {\n \"$oid\": \"63652803358e6b86f8f88860\"\n },\n \"name\": \"Get free shipping\",\n \"type\": \"coupon\",\n \"code\": \"FREESHIP\",\n \"store\": {\n \"$oid\": \"636525107e14050c705adca1\"\n }\n}\n\n{\n \"_id\": {\n \"$oid\": \"41652803358e6b86f8f88811\"\n },\n \"name\": \"30% off all beds\",\n \"type\": \"campaign\",\n \"store\": {\n \"$oid\": \"636525107e14050c705adca1\"\n }\n}\n",
"text": "Hello!I have this query that lists all stores + all the deals along with it as an array called “deals”.Would it be possible to create an aggregate where I only provide the slug for a store and it gives me the\nwhole store document + the array with all deals?Store:Deals:",
"username": "David_N_A7"
},
{
"code": "db.stores.aggregate([{$match : {\"slug'\" : \"amazon\" }},{$lookup: {from: \"deals\", localField: \"_id\", foreignField: \"store\", as: \"deals\"}}])\n",
"text": "Hi @David_N_A7 ,If I understood correctly you just need to do a $match stage before the lookup:Obviously index {slug : 1, _id : 1} on stores and {store : 1} on deals.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Help with aggregate store and its deals | 2022-11-24T16:12:10.420Z | Help with aggregate store and its deals | 969 |
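A minimal mongosh sketch of the index suggestion from the reply above ("index {slug : 1, _id : 1} on stores and {store : 1} on deals"); the collection names match the thread, but whether these exact compound keys fit your workload is an assumption to verify with explain():

db.stores.createIndex({ slug: 1, _id: 1 })  // supports the $match on slug and supplies _id to the $lookup
db.deals.createIndex({ store: 1 })          // supports the foreignField side of the $lookup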
null | [
"queries"
]
| [
{
"code": "people = [\"CFF0FC9CCB\",\"4093FC8D87\",\"63407DCB5E\",\"FF14E5FA11\",\"9B30FB0595\",\"8C71FB8B73\",\"D39B586686\",\"F20C4D636F\",\"E3AB4638CA\",\"748A6D0C29\"];\nfor(var i=0; i<100000; i++){\n name = people[Math.floor(Math.random()*people.length)];\n user_id = i;\n boolean = [true, false][Math.floor(Math.random()*2)];\n\n d = new Date();\n added_at = d.getFullYear() +\"-\" +(d.getMonth()+1) + \"-\" + d.getDate() + \"T\" + d.getHours() + \":\" + d.getMinutes()+ \":\" + d.getSeconds()+ \".\" + d.getMilliseconds() +\"Z\";\n \n number = Math.floor(Math.random()*10001);\n db.test_collection5.save({\"_id\":number+ObjectId().str.substring(14),\"name\":name, \"user_id\":user_id, \"boolean\": boolean, \"added_at\":added_at, \"number\":number });\n}\n\ndb.test_collection5.createIndex({name:1,added_at:-1},{name:\"index\"});\n{\n \"_id\" : \"0000008626\",\n \"companyId\" : \"7B4B691836\",\n \"distributionId\" : \"9B30F343erB0595\",\n \"code\" : \"sdfdsf23\",\n \"roomTypeCode\" : \"K1\",\n \"deleted\" : false,\n \"state\" : \"Activated\",\n \"version\" : NumberLong(0),\n \"createdDate\" : \"2022-06-04T10:03:53.382Z\",\n \"lastModifiedDate\" : \"2022-06-04T10:03:53.382Z\"\n}\n",
"text": "hi, recently , i do some performance about $or and $in$or always uses sort_merge ,but $in not ;this is my test datagenerate data script:last sqldb.test_collection5.find({name:{$in:[“CFF0FC9CCB”,“4093FC8D87”,“63407DCB5E”,“FF14E5FA11”,“9B30FB0595”,“8C71FB8B73”,“D39B586686”,“F20C4D636F”,“E3AB4638CA”,“748A6D0C29”]},added_at:{$gt:“2022-10-22T07:28:00.782Z”}}).sort({added_at:-1}).limit(1000).explain(“executionStats”);the sql’s explain always can get sort_merge ,this is test data,but in my business data\nlike thiswhen execute $in , then not use sort_mergeso I want know what is the influence factors.thank you for your response .",
"username": "Huang_Huang"
},
{
"code": "added_at : -1 , name : 1\n",
"text": "Hi @Huang_Huang ,A $in is in fact a range operator when it gets multiple values.Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.If we consider the ESR rule to have sort fields before range you should consider:As a better order, this should do an index sort and not in memory.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "added_at : -1 , name : 1\ndb.test_collection5.find({name:{$in:[“CFF0FC9CCB”,“4093FC8D87”,“63407DCB5E”,“FF14E5FA11”,“9B30FB0595”,“8C71FB8B73”,“D39B586686”,“F20C4D636F”,“E3AB4638CA”,“748A6D0C29”]},added_at:{$gt:“1970-01-01T00:00:00.000Z”}}).sort({added_at:-1}).limit(1000).explain(“executionStats”);\n",
"text": "Thank for your response.if I use the index.when i execute the below sql , then return full table , and then match rang name .",
"username": "Huang_Huang"
},
{
"code": "",
"text": "Not sure I understand, can you show me the explain plan?",
"username": "Pavel_Duchovny"
}
]
| When $in can use sort_merge | 2022-11-23T15:49:11.936Z | When $in can use sort_merge | 1,197 |
null | [
"replication",
"sharding"
]
| [
{
"code": "",
"text": "I have been reading about upgrading the 3.6 sharded cluster to 4.0. It is said there that we have to convert master-slave replication to replicaset. I have been wondering if there is an detailed documentation between them. Or can you explain the difference. I am asking this question because I couldn’t find any resource from the net or documentation.",
"username": "Mashxurbek_Muhammadjonov"
},
{
"code": "",
"text": "Master slave replication is deprecated\nCheck these linksreplication - MongoDB: Replica Set - master vs. slave - Database Administrators Stack Exchange.",
"username": "Ramachandra_Tummala"
},
{
"code": "50 membersMongoDB 4.0 removes support for the deprecated master-slave replication. Before you can upgrade to MongoDB 4.0, if your deployment uses master-slave replication, you must upgrade to a replica set.",
"text": "Hello @Mashxurbek_Muhammadjonov ,@Ramachandra_Tummala is right, this replication is deprecated, in addition to thatI have been reading about upgrading the 3.6 sharded cluster to 4.0. It is said there that we have to convert master-slave replication to replicaset.Are you referring to Remove Master-Slave Replication?For some background detail on why master-slave replication was used, please check Replica Set Members v3.6 docsWhile replica sets are the recommended solution for production, a replica set can support up to 50 members in total. If your deployment requires more than 50 members, you’ll need to use master-slave replication. However, master-slave replication lacks the automatic failover capabilities.As mentioned in Remove Master-Slave ReplicationMongoDB 4.0 removes support for the deprecated master-slave replication. Before you can upgrade to MongoDB 4.0, if your deployment uses master-slave replication, you must upgrade to a replica set.To learn more about this, please go through below threadRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| The difference between master-slave replication and replicaset | 2022-11-24T11:45:59.645Z | The difference between master-slave replication and replicaset | 2,229 |
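A minimal sketch, under the assumption that each former master is restarted as a standalone mongod with a --replSet name, of initiating a one-member replica set before the 4.0 upgrade; the host name and replica set name are placeholders, not values from the thread:

// mongod restarted with: mongod --replSet rs0 --dbpath /data/db --port 27017
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "node1.example.net:27017" } ] })
rs.status()  // wait for the member to report PRIMARY, then add former slaves with rs.add()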
null | [
"aggregation",
"queries"
]
| [
{
"code": "db.hej.aggregate([\n {\n \"$addFields\": {\n \"firstName\": {\n \"$reduce\": {\n \"input\": {\n \"$map\": {\n \"input\": {\n \"$split\": [\n \"$firstName\",\n \" \"\n ]\n },\n \"in\": {\n \"$concat\": [\n {\n \"$toUpper\": {\n \"$substrCP\": [\n \"$$this\",\n 0,\n 1\n ]\n }\n },\n {\n \"$toLower\": {\n \"$substrCP\": [\n \"$$this\",\n 1,\n { \"$strLenCP\": \"$$this\" }\n ]\n }\n }\n ]\n }\n }\n },\n \"initialValue\": \"\",\n \"in\": {\n \"$concat\": [\n \"$$value\",\n \" \",\n \"$$this\"\n ]\n }\n }\n }\n }\n }\n ]\n)\n",
"text": "Hello! I wanted to Capitalize the first letter only for all users. I found this snippet that works fine but only when the name is directly on the user while in our structure it’s one level down so profile.firstName I was hoping i could just use profile.firstName but that didn’t work. What do i need to add/change in order for the snippet to target the correct key? (sorry new to mongodb)So in the example below firstName is actually profile.firstName ",
"username": "David_N_A7"
},
{
"code": "",
"text": "Hello @David_N_A7 , Welcome to the MongoDB community forum,Can you please provide an example document structure and the expected result?",
"username": "turivishal"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"637e1662d8095ee22f03b82f\"\n },\n \"profile\": {\n \"firstName\": \"john\",\n \"lastName\": \"Smith\"\n },\n \"age\": 12\n },\n{\n \"_id\": {\n \"$oid\": \"637e1662d8095ee22f03b82f\"\n },\n \"profile\": {\n \"firstName\": \"John\",\n \"lastName\": \"Smith\"\n },\n \"age\": 12\n },\n",
"text": "@turivishal Looks like:Expected rresult after:",
"username": "David_N_A7"
},
{
"code": "firstNamelastName$reduce$mapdb.hej.aggregate([\n {\n $set: {\n \"profile.firstName\": {\n $concat: [\n { $toUpper: { $substr: [\"$profile.firstName\", 0, 1] } },\n { $substr: [\"$profile.firstName\", 1, { $strLenCP: \"$profile.firstName\" }] }\n ]\n },\n \"profile.lastName\": {\n $concat: [\n { $toUpper: { $substr: [\"$profile.lastName\", 0, 1] } },\n { $substr: [\"$profile.lastName\", 1, { $strLenCP: \"$profile.lastName\" }] }\n ]\n }\n }\n }\n])\n",
"text": "The properties firstName and lastName are inside a profile object so don’t need to do any $reduce or $map operators, look at the below pipeline, it will cut the first character from the string and make it upper case and concat with next characters,",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishal thank you! However it doesn’t seem to work? Nothing happens at all when i run the code?\n\nhh1640×809 47.4 KB\n",
"username": "David_N_A7"
},
{
"code": "",
"text": "As per your screenshot, you have added a find() query at the end and I think that replaces the aggregate() query result, can you make sure after removing that find() command?I don’t know more about this editor so try to execute in MongoDB shell or MongoDB compass.I am clearing the thing up, this aggregation query will format the result only not the actual document in the database, if you want to update permanently in the database then you have to use update commands with update with aggregation pipeline.The following page provides examples of updates with aggregation pipelines.",
"username": "turivishal"
},
{
"code": "",
"text": "Yeah was supposed to be permanent overwritten. And its only something i´m gonna run once. All this code is way above my head as a complete beginner hehe… The first code i posted did exactly everything i wanted and needed, only problem was that it only worked if the firstname was direct in root and not as a sub document inside “profile” so my problem was how to write the syntax so it targets firstname inside of profile.",
"username": "David_N_A7"
},
{
"code": ".updateMany().aggregate()",
"text": "The query will be same that i have suggested above, you just need to use .updateMany() method instead of .aggregate().",
"username": "turivishal"
},
{
"code": "",
"text": "updateManyMhm I get an error sadly \nhuu759×332 8.44 KB\n",
"username": "David_N_A7"
},
{
"code": "db.hej.updateMany({}, [\n {\n $set: {\n \"profile.firstName\": {\n $concat: [\n { $toUpper: { $substr: [\"$profile.firstName\", 0, 1] } },\n { $substr: [\"$profile.firstName\", 1, { $strLenCP: \"$profile.firstName\" }] }\n ]\n },\n \"profile.lastName\": {\n $concat: [\n { $toUpper: { $substr: [\"$profile.lastName\", 0, 1] } },\n { $substr: [\"$profile.lastName\", 1, { $strLenCP: \"$profile.lastName\" }] }\n ]\n }\n }\n }\n])\n",
"text": "I forgot to mention, just add the query part, this query might work if you are using MongoDB version 4.2 or greater,My suggestion is don’t copy-paste the query please try to understand the basic concept of the method from the documentation, it is not that hard to understand and in just a few minutes can understand the concept.",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Syntax question (capitalize first letter) | 2022-11-23T14:23:35.628Z | Syntax question (capitalize first letter) | 4,601 |
[
"compass"
]
| [
{
"code": "acc_adj_rateconfiglocalaoec",
"text": "There is a ‘compass’ tag but no category, so I’m writing this in “Other MongoDB Topics”.It seems like I had an update on MongoCompass yesterday, and when I open it today, it has a different theme. It starts with a light theme (which I like) but one thing that stood out as annoying as the font.I have a collection called acc_adj_rate and the readability of this font isn’t so good especially when I have a blurry vision.On the same line, config and local have the similar problem.It’s cool that Mongo decided to set the light theme as a default theme (I think dark theme is overrated lol) but I wish there is a support for the previous theme including the font, or have an option to choose font.I mean, when any big services update, don’t they put in legacy theme and stuff? I had ZERO complaints regarding the font in the previous version, but this version’s font really makes you pay a close attention to circular alphabets like a, o, e, and c .I love Compass. Please consider adding them in. I guess I’ll have to use the previous version of Compass if there is a way to roll it back.",
"username": "Man_Chul"
},
{
"code": "",
"text": "Hi @Man_Chul and welcome back to the community forum!!I appreciate you taking time for a valuable feedback regarding the interface issues you are facing with the current version. You could perhaps raise this as a feedback using the MongoDB feedback engine.In saying so, could you provide the version and OS that you are using, which makes it easier to raise the concern to the respective team.Also, to switch back to the previous version, you could select the appropriate version from MongoDB Compass Download | MongoDB and download the respective version for the use.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| MongoCompass theme update and the font | 2022-11-23T01:54:38.422Z | MongoCompass theme update and the font | 1,599 |
|
null | [
"data-modeling"
]
| [
{
"code": "{\n firstName: John,\n lastName: Doe,\n email: [email protected],\n password: @#@#^#&^*,\n roles: ['client', 'agent', 'owner'],\n}\n",
"text": "Let’s say I have a user document:What is the best way to allow certain properties only on a specific role (one account can have multiple roles)? For example, client can leave a review but owner can’t. If a user only has an owner role, I don’t want the review property in the user document.",
"username": "Darnell_Noel"
},
{
"code": "",
"text": "Hi there Darnell, I’ve been asking the same question for some time and what I found out is, first to create a base user model with the properties that all the user roles share and then use mongoose discriminator to extend on that base model. I don’t really know if it is the best practice or not, if someone from the MongoDB team answers your question or you found some different approach that worked for you it would be great if you share it with me too.",
"username": "Gorkem_Gocer"
}
]
| Best way to model User Schema with varying roles? | 2022-01-08T17:19:49.758Z | Best way to model User Schema with varying roles? | 3,864 |
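A rough sketch of the mongoose discriminator idea mentioned in the last reply; every model and field name here is an illustrative assumption rather than something from the thread, and discriminators assign one kind per document, so they fit best when a role maps to a distinct document shape:

const mongoose = require('mongoose');
const options = { discriminatorKey: 'kind', collection: 'users' };

// Base model holds the properties shared by all roles.
const User = mongoose.model('User', new mongoose.Schema(
  { firstName: String, lastName: String, email: String, password: String, roles: [String] },
  options
));

// Only the Client variant carries review-related fields; an Owner document never gets them.
const Client = User.discriminator('Client', new mongoose.Schema({ reviews: [String] }, options));
const Owner = User.discriminator('Owner', new mongoose.Schema({ listings: [String] }, options));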
null | []
| [
{
"code": "",
"text": "Hello,We are using Prometheus integration in Atlas and scraping the metrics from https://(target):27018/metrics.\nAt times we are getting 429 too many requests error when we hit the endpoint.\nWould like to know what is the current rate limit per Target endpoint.\nIs there a way to increase the rate of requests for the endpoint.",
"username": "prathish_m"
},
{
"code": "",
"text": "Hi @prathish_m - Welcome to the community.Would like to know what is the current rate limit per Target endpoint.I believe the Rate Limiting documentation (Atlas Admin API specific) is related to the 429 error / response you are getting. Currently the limit is 100 requests requests per minute per project.Is there a way to increase the rate of requests for the endpoint.You would probably need to contact the Atlas support team via the in-app chat to find out if this is possible.Hope the above helps.Regards,\nJason",
"username": "Jason_Tran"
}
]
| MongoDB Atlas Prometheus integration: Increase rate limit | 2022-11-12T07:09:34.523Z | MongoDB Atlas Prometheus integration: Increase rate limit | 1,088 |
null | [
"schema-validation"
]
| [
{
"code": "",
"text": "can we set mail a alert for a host if mongodb validation criteria fails?",
"username": "Sreeraj_Vd"
},
{
"code": "",
"text": "Hi @Sreeraj_Vd - Welcome to the community.Would the alerts this topic is about be related to the Atlas Alerts? If so, there currently isn’t an alert for validation criteria failures.Regards,\nJason",
"username": "Jason_Tran"
}
]
| Can we set mail a alert for a host if mongodb validation criteria fails? | 2022-11-18T07:30:04.778Z | Can we set mail a alert for a host if mongodb validation criteria fails? | 1,643 |
null | [
"aggregation"
]
| [
{
"code": "documents\n[\n {\n users: ['A','B'],\n speaker: 'C'\n },\n {\n users: ['A','B'],\n speaker: 'C'\n },\n {\n users: ['D'],\n speaker: 'C'\n },\n {\n users: ['E','F'],\n speaker: 'G'\n },\n {\n users: ['H'],\n speaker: 'C'\n },\n {\n users: ['A','B'],\n speaker: 'C'\n },\n {\n users: ['E','F'],\n speaker: 'G'\n }\n]\n[\n {\n _id: 'C',\n class: [\n {\n _id: ['A','B'],\n sessions: [\n {\n {\n users: ['A','B'],\n speaker: 'C'\n },\n {\n users: ['A','B'],\n speaker: 'C'\n },\n {\n users: ['A','B'],\n speaker: 'C'\n }\n }\n ]\n },\n {\n _id: ['D'],\n sessions: [\n {\n users: ['D'],\n speaker: 'C'\n }\n ]\n },\n {\n _id: ['C'],\n sessions: [\n {\n users: ['H'],\n speaker: 'C'\n }\n ]\n }\n ]\n },\n {\n _id: 'G',\n class: [\n {\n _id: ['E','F'],\n sessions: [\n {\n users: ['E','F'],\n speaker: 'G'\n },\n {\n users: ['E','F'],\n speaker: 'G'\n }\n ]\n }\n ]\n }\n]\n",
"text": "Hye,\nI want to ask if is this possible to run group two times and make the structure like this.here is my sample of my documentsand the end goals to get structure like thisfor first layers group by speaker I am able to do it. but the second layer to push and group by users I am stuck there. hope someone able to help. thanks!",
"username": "Hazim_Ali"
},
{
"code": "\"speaker\"[\n {\n '$group': {\n _id: { speaker: '$speaker', _id: '$users' },\n sessions: { '$push': { users: '$users', speaker: '$speaker' } }\n }\n },\n {\n '$bucket': {\n groupBy: '$_id.speaker',\n boundaries: [\n 'A', 'B', 'C',\n 'D', 'E', 'F',\n 'G', 'H'\n ], // <-- Depending on how 'speaker' is initially valued. This assumes it's only a single character and not an array of characters.\n default: 'other',\n output: { classInitial: { '$push': '$sessions' } }\n }\n },\n {\n '$addFields': {\n class: {\n '$map': {\n input: '$classInitial',\n in: {\n _id: { '$arrayElemAt': [ '$$this', 0 ] },\n sessions: '$$this'\n }\n }\n }\n }\n },\n { '$project': { _id: 1, class: 1 } }\n]\nboundaries'H'$project[\n {\n _id: 'C',\n class: [\n {\n _id: { users: [ 'A', 'B' ], speaker: 'C' },\n sessions: [\n { users: [ 'A', 'B' ], speaker: 'C' },\n { users: [ 'A', 'B' ], speaker: 'C' },\n { users: [ 'A', 'B' ], speaker: 'C' }\n ]\n },\n {\n _id: { users: [ 'D' ], speaker: 'C' },\n sessions: [ { users: [ 'D' ], speaker: 'C' } ]\n },\n {\n _id: { users: [ 'H' ], speaker: 'C' },\n sessions: [ { users: [ 'H' ], speaker: 'C' } ]\n }\n ]\n },\n {\n _id: 'G',\n class: [\n {\n _id: { users: [ 'E', 'F' ], speaker: 'G' },\n sessions: [\n { users: [ 'E', 'F' ], speaker: 'G' },\n { users: [ 'E', 'F' ], speaker: 'G' }\n ]\n }\n ]\n }\n]\n\"class._id\"\"speaker\"",
"text": "Hi @Hazim_Ali - Welcome to the community.I’ve written a test aggregation pipeline that i’ve only tested on the sample documents provided. There is an assumption that the \"speaker\" field’s values are single characters only ranging from A - Z so this may or may not work for you depending on your data / use case.Pipeline:Note: you can increase the boundaries array depending on your use case. I stopped at 'H' just as an exampleI’ve used a $project stage at the end to get as close as possible to your desired output but I would recommend running each stage 1 by 1 to see what the output field is at each stage.Output:The main difference that I have noticed between the above output and your desired output is that the \"class._id\" value contains the \"speaker\" value as well all contained inside an object.Although this may work or there may be improved suggestions in terms of an aggregation to get your desired output, I am curious to understand the use case for this kind of output or if you have considered perhaps doing some post-processing of the data after it is retrieved from the database.Please lastly take into consideration that I have only briefly tested this against the sample data provided and if you believe it could possibly work for you then please test thoroughly to verify it suits all your use case(s) and requirement(s).Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Group aggregate two times on different field | 2022-11-17T18:40:32.124Z | Group aggregate two times on different field | 995 |
[
"atlas-search"
]
| [
{
"code": "",
"text": "I want to count the number of documents/results of the previous stage of the pipeline. This is an excercise about atlas search (Mongo db university).I am supposed to use the $count operator for the second stage, for counting the number of documents , but no matter what I have tried, I have failed.\n\nimage1670×699 57.8 KB\nYou may see the photo I have uploaded, I have used the line: {$count : “results” } , but i receive an error, the count field must be a non-empty string.Can anyone please! tell me what am I doing wrong.",
"username": "Tilemachos_Kosmetsas"
},
{
"code": "",
"text": "Take a look at the hint:\n\nimage1203×169 10.9 KB\nAnd take a look at the solution for anyone who may stumble upon my struggles:\nJust delete my line , and replace simply with: “ANYSTRINGNAME”\nProblem solved.",
"username": "Tilemachos_Kosmetsas"
},
{
"code": "",
"text": "\nimage1334×556 27.2 KB\n",
"username": "Tilemachos_Kosmetsas"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Some help with atlas search pipeline please | 2022-11-24T23:59:42.939Z | Some help with atlas search pipeline please | 1,766 |
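A hedged mongosh sketch of the kind of pipeline the exercise above is about: the argument to $count is simply the name of the output field, so any non-empty string works; the collection, index name, query and path are assumptions:

db.movies.aggregate([
  { $search: { index: 'default', text: { query: 'baseball', path: 'plot' } } },
  { $count: 'results' }  // returns a single document like { results: <total matching documents> }
])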
|
null | [
"kotlin"
]
| [
{
"code": "",
"text": "Hello everyone. Could someone tell how I can write a function to send a push notification from an insert/update Trigger. The trigger will watch over Order collection. My FCM tokens are stored inside UserInfo collection, “FCMToken” field. The idea is when a new Order is created, the correct user will receive a push notification. My app is KMM (Kotlin+SwiftUI).",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "You have to create a Trigger on the collection you want the trigger to be.There you write a function. The function receives the changeEvent as an parameter. So from the object itself you can now access the user that you want to send the notification to. It must be in someway stored in the document.Not all you have to do is send a notification using the firebase-admin library.",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "I am doing this. I dont know how I can use the firebase library. Could you give me an example.",
"username": "Ciprian_Gabor"
},
{
"code": "const firebase_credentials = context.values.get(\"firebase_credentials\");\n const doc = JSON.parse(firebase_credentials);\n admin.initializeApp({\n credential: admin.credential.cert(doc),\n });\n await admin.messaging().send(message);\n",
"text": "You have to add firebase-admin as a dependency. I am using verdion 9.7.0.",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "What I have to put inside firebase_credentials value?",
"username": "Ciprian_Gabor"
}
]
| Send push notifications using Triggers and FCM | 2022-11-24T18:34:38.919Z | Send push notifications using Triggers and FCM | 3,090 |
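A rough sketch of how the pieces discussed above could fit together in an Atlas database trigger function; the service, database and collection names, the order-to-user link, and the notification payload are all assumptions, and firebase_credentials is assumed to hold the Firebase service-account JSON stored as an App Services value/secret:

exports = async function (changeEvent) {
  const admin = require('firebase-admin');
  const order = changeEvent.fullDocument;

  // Find the FCM token of the user this order belongs to (field names are assumptions).
  const users = context.services.get('mongodb-atlas').db('mydb').collection('UserInfo');
  const user = await users.findOne({ _id: order.userId });
  if (!user || !user.FCMToken) return;

  // Initialize the Firebase Admin SDK once per function container.
  if (admin.apps.length === 0) {
    const creds = JSON.parse(context.values.get('firebase_credentials'));
    admin.initializeApp({ credential: admin.credential.cert(creds) });
  }

  await admin.messaging().send({
    token: user.FCMToken,
    notification: { title: 'New order', body: 'An order was created for you' }
  });
};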
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n {\n '$match': {\n 'subProducts.barred': false, \n 'subProducts.showOnHomepage': true\n }\n }, {\n '$project': {\n 'productGroupNumber': '$productGroupNumber'\n }\n }, {\n '$group': {\n '_id': null, \n 'productGroupNumber': {\n '$addToSet': '$productGroupNumber'\n }\n }\n }, {\n '$unwind': '$productGroupNumber'\n }\n]\n",
"text": "I have two tables:productGroups\n{number, name}products (with nested subproduct)\n{number, productGroupNumber, subProducts: {barred, showOnHomepage}}I would like to extract all productGroups which have at least one product that has at least one subproduct that is “!barred && show”.I started getting unique productGroupNumbers from products, but fails to join them to productGroups.Any help appreciated",
"username": "Lasse_Johansen"
},
{
"code": "",
"text": "Couldn’t figure it out, so I made two queries to get around this.",
"username": "Lasse_Johansen"
},
{
"code": "",
"text": "Hello @Lasse_Johansen ,Welcome to The MongoDB Community Forums! I have two tables:Do you mean two collections?I started getting unique productGroupNumbers from products, but fails to join them to productGroups.In case you want to work with relevant fields from two different collections you can use $lookupPlease feel free to reach out in case of any more queries/issues, would be happy to help!Regards,\nTarun",
"username": "Tarun_Gaur"
}
]
| Join and filter | 2022-11-24T14:04:58.026Z | Join and filter | 1,249 |
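A possible shape for the $lookup suggested in the reply, reusing the field names from the question; the concise localField/foreignField-plus-pipeline form shown here needs MongoDB 5.0+, and the matching semantics on the nested subProducts array should be verified against real documents:

db.productGroups.aggregate([
  { $lookup: {
      from: 'products',
      localField: 'number',
      foreignField: 'productGroupNumber',
      pipeline: [ { $match: { 'subProducts.barred': false, 'subProducts.showOnHomepage': true } } ],
      as: 'matchingProducts'
  } },
  { $match: { 'matchingProducts.0': { $exists: true } } },  // keep only groups with at least one qualifying product
  { $project: { number: 1, name: 1 } }
])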
null | []
| [
{
"code": "",
"text": "I have large number of notifications logs on mogodb. What i am doing i am trying to check status of logs every few seconds. So after sometime it is slowing the performance of mogodb, alot of read requests are joining to mongodb. I have almost 500,000 logs in my table. And 2 Cpu’s for mongodb server.",
"username": "Hassan_Asif"
},
{
"code": "mongod",
"text": "Hello @Hassan_Asif ,Could you please share below details for me to understand you use case better?Based on your description of the issue, it may be that your server is overwhelmed with the workload, could you also share below details regarding your server?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hello @Tarun_Gaur thank you for your response. So we have email logs we are storing in mongodb, so we are reading and writing data in large amount. So atleast we have 20 jobs interacting with mongodb at same time.",
"username": "Hassan_Asif"
},
{
"code": "mongod",
"text": "Is your application still experiencing slowness? Number of open connections performing operations does not affect the database’s performance given if the hardware is able to support that load. To understand what could be the cause of this slowness, please share below details:You can also go through below links to check for additional metrics.",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "When we use filter, sort skip and limit. When we pass 50000 to skip then query is every slow.\nwhen we use filter without sort query is fast.",
"username": "Hassan_Asif"
},
{
"code": "executionStats",
"text": "Please run your query with explain in executionStats mode (e.g. `db.collection.explain(‘executionStats’).aggregate(…)) and check the output.Make sure you are using index efficiently to support sort, for details please check use Indexes to Sort Query Results.In case the indexes are working as expected then another area to look into are resources, make sure your server is not having resource crunch, you can also take a look at the output of mongostat.",
"username": "Tarun_Gaur"
}
]
| Mongodb performance when I try to read data every few seconds | 2022-09-26T12:43:54.151Z | Mongodb performance when I try to read data every few seconds | 1,874 |
null | [
"queries",
"indexes"
]
| [
{
"code": "",
"text": "I don’t understand the difference between ‘index’ and ‘search index’ just by reading thru official document. In Atlas, when setting index for a collection, i do not understand in which case I should use ‘index’ or ‘search index’. Should i use ‘search index’ for full text search only?For two example, in mycollection, there is a field called ‘a’ and ‘b’, which is an string type and number type, and the query below will be used.db.mycollection.find({“a”:“xxxxx”})\ndb.mycollection.find({“b”:12345})About “a” and “b” field, i want make each index.\nIs it ‘index’ to create an ‘a’ field? Is it “search index”?\nIs it ‘index’ to create an ‘b’ field? Is it “search index”?",
"username": "Damon_Kim"
},
{
"code": "",
"text": "Hi @Damon_Kim and welcome to the MongoDB community forum!!The Indexes in MongoDB are used for efficient execution for the query by limiting the number of documents getting scanned for the inspect. They enable the server to do less unnecessary work to return the query.\nThe MongoDB Atlas also gives the functionality to create, view and drop indexes. You can refer to the documentation for further understanding on How to use indexes in Atlas.Atlas Search, in contrast, leverages the power of Apache Lucene to enable full text search indexes for your data. It enables you to create full-text indexes that are not available natively in MongoDB server deployed locally on-prem. Notably, this feature requires the use of Atlas.Are you referring to the same index in the above example? To read more on Create an Atlas Search Index, you can refer to the documentation.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "normal \"index\"ing is made on the “whole” value of the field, thus you need to also use whole values when you search. this indexed search will get you pretty fast.But an index will not be used if you try to have a search on partial values, such as regex. in this case, all documents in the collection will be scanned which will in turn take longer.with search index, you give up a bit more data space, but get a partial value (full-text) search capability with the speed of indexing.so, knowing numbers and booleans (others?) are already searched as a whole, it comes down mostly to strings, and then your question just becomes:do you want a faster partial search on those particular fields of that collection?PS: There is a limit of 3-5-10 on the total number of search indexes for M0-M2-M5 clusters. Unlimited for M10+",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you for your answer. I understand.",
"username": "Damon_Kim"
}
]
| What is the difference between 'index' and 'search index'? | 2022-11-21T13:18:09.137Z | What is the difference between ‘index’ and ‘search index’? | 4,444 |
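For the two example queries in the question above, regular indexes are the fitting tool; a short mongosh sketch follows (an Atlas Search index would instead be defined in the Atlas UI or via the search index management APIs and queried through the $search aggregation stage):

db.mycollection.createIndex({ a: 1 })  // serves db.mycollection.find({ a: 'xxxxx' })
db.mycollection.createIndex({ b: 1 })  // serves db.mycollection.find({ b: 12345 })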
null | [
"serverless"
]
| [
{
"code": "tenant_id",
"text": "Hi there,I have essentially the same question as derjanni posted here: How many serverless instances can I have in a project? - specifically a multi-tenant SaaS application wishing to create a serverless instance for each tenant.(For my use-case: each tenant has unique data structures → hence unique collections per tenant rather than a multi-tenant model using shared collections with a tenant_id index. Another factor is desire for maximum separation of data between tenants for security of sensitive data.)I note the guidance provided by Vishal_Dhiman in that thread before the mention of a possible offline discussion:For serverless, here are the current limits:\nMax number of instances per project - 25\nMax number of databases per instance - 50So in total you can have 25x50=1250 databases per project. You can also create many projects per organization. To get to 5000, you can create 4 projects.I’m curious where this discussion ended - in general is there a more flexible approach possible to 1 serverless instance per tenant than having to shard across 50 databases/instance → 25 instances/project → N projects?For the near future, the greatest undesirable overhead of the shard-across-projects model would be the need to build/manage all the following:…vs. a model of 1 serverless instance per tenant would dramatically simplify things, potentially eliminating all 3 of the above requirements in favour of a single instance-per-tenant provisioning model.The theoretical max capacity is also a future concern given the application intent is to support a self-serve freemium model (max 1250 databases/project x 250 projects/organisation → max ~312,500 tenants).Appreciate if any further guidance can be shared on where this discussion ended - or recommendations for this model in general.Big thanks!",
"username": "andyy"
},
{
"code": "",
"text": "(Correctly tagging @derjanni @Vishal_Dhiman - much appreciated if you have any additional conclusions or context to share after the prior thread - thanks!)",
"username": "andyy"
},
{
"code": "",
"text": "much appreciated if you have any additional conclusions or context to share after the prior threadThe outcome was pretty simple, it’s not possible. For us this was just one of the reasons why we did not proceed with Mongo. What we did as a prototype is that we’ve written the provisioning of tenants onto 25 serverless instances with 50 dbs each. To be honest: that’s an ugly hack.If you need to avoid noisy neighbours and have an architectual reason why you cannot pool tenants in collections, databases or instances (which there are numerous reasons for), then Mongo serverless is probably not for you.We have abandoned our prototype and moved on with MySQL8 (AWS Aurora Serverless V2). Again, this restriction wasn’t the main reason, there were others where Mongo did not fit our use case.",
"username": "derjanni"
},
{
"code": "",
"text": "@derjanni disappointing to hear, but very useful to know. thanks for taking the time to share your experience. fingers crossed for this to change in the future!",
"username": "andyy"
}
]
| 1 serverless instance per tenant in a multi-tenant app? | 2022-11-21T10:39:48.570Z | 1 serverless instance per tenant in a multi-tenant app? | 1,595 |
null | [
"app-services-cli"
]
| [
{
"code": "realm-cli",
"text": "What does this mean? It happens after I export my app with realm-cli, and then - without making changes - pushing what’s exported to my repository for automatic deployment back to my realm app. It breaks my schema.",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "Hi Eric,This usually means there is a mismatch of types for the property between your server side schema and your client side schema.Has the type for this property changed in your client side?Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Hi thereThanks for getting back to me. I’ll need to refresh myself with this particular issue as it is a week old now. Unfortunately I don’t have the time to reproduce this myself on demand because it really slows me down.When next I have to perform this work flow again, I’ll post the results here,For now, from what I recall, I can reproduce it like this:\nNote: I start with a working sync configuration having no sync errors on server or client.The only solution I have found is to manually delete all schemas using the Atlas UI, and then deactivating Sync, and re-enabling in development mode, running the client, turning off development mode, re-entering my permission expressions and finally activating sync.As you can imagine this is a huge amount of time to pay for wanting to do something as simple as edit one line of code in a realm function in my own development environment, for instance.",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "Hi there,Any input or thoughts on the above? I’m still trying to find or make a solution for this issue.Thanks!",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "I’d like to bump this issue if possible. We’ve been seeing this happen for us, but only for a specific class, other changes or collection updates seem to work ok.",
"username": "bainfu"
},
{
"code": "",
"text": "I’m also having this issue. Is there a quick fix to this rather than deleting the schema?",
"username": "tobitech"
}
]
| Schema mismatch: Property '###' in class '###' is of type Link on one side and type ObjectId on the other | 2021-04-14T16:18:30.192Z | Schema mismatch: Property ‘###’ in class ‘###’ is of type Link on one side and type ObjectId on the other | 5,001 |
null | [
"change-streams"
]
| [
{
"code": "",
"text": "The project that I work on currently works by tailing the oplog. When the project starts, it queries the oplog and obtains the last entry in the oplog before we began a series of tasks and state changes. We record this oplog position at bootstrap as we want to make sure that when we transition to a later phase that we begin processing oplog entries without lose of events.I’m currently looking into whether we can transition this to using change streams instead. What the code is not yet designed to support is creating a change stream earlier on in the process and reusing that stream in the later phase so what I would like to do is something similar to the oplog approach where I determine a check-point in the stream and restart the stream later using that check-point.So is there a way to query a change stream and ask for the last/most recent resumeToken? I’d like to stash this resume token in our cache and then use this in the later phase to restart the change stream specifying that resume token as the startAfter argument.",
"username": "Chris_Cranford"
},
{
"code": "MongoCursor<ChangeStreamDocument<Document>> cursor = collection.watch().iterator();\nChangeStreamDocument<Document> next = cursor.next();\nBsonDocument resumeToken = next.getResumeToken();\ncursor = collection.watch().resumeAfter(resumeToken).iterator();\n",
"text": "Hi @Chris_Cranford, welcome!So is there a way to query a change stream and ask for the last/most recent resumeToken ?Generally, as you watch a collection you cache the resume token. If there’s any interruptions to the watch process, you can restore the token from the cache. For example using MongoDB Java driver:You can then resume from a token as below:You could perhaps store the resumeToken caches with some metadata information that you could query later on.See also Resume a Change Stream.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hi @wanPerhaps a bit more background here might help illustrate my issue.The Debezium project operates in 2 phases, a snapshot and a streaming phase. In a situation where the user is creating a new connector and performing the snapshot and streaming operations, we first want to record some marker to indicate that anything prior will be included in the snapshot phase and that any changes done after that marker will be picked up by the streaming phase.In the traditional oplog scenario, we could capture the last timestamp from the oplog before we begin the snapshot so we know where we need to start tailing the oplog from when streaming begins.Now to be clear, I’m not talking about restarts of Debezium here as we already cache the resumeToken from the change stream so that its available during restarts, which fits to what you described.The niche problem is more to do with the notion of a user creating a new connector and how that would work during the small window between snapshot and streaming phases. What I need here is to get this marker before I begin the snapshot so that I can control the point where the change stream should effectively start when streaming begins. In other words, I need to guarantee that whatever happens to the database while snapshotting is running is later captured during streaming.It doesn’t sound like with how the API is written there is such a way to obtain this marker as we have traditionally done so by basically getting the last event in the oplog before we begin snapshotting.If that is all true and correct, I think there is maybe only a single option and that is we would need to open the change stream where we would normally get the last event from the oplog and somehow provide this stream to the streaming phase rather than opening it up later like we’re doing.It seems unfortunate there is no way to open a change stream and issue some type of projection or query to have it return the last event in the stream.",
"username": "Chris_Cranford"
},
{
"code": "startAtOperationTimestartAtOperationTimeoplogChangeStreamIterable cs = collection.watch();\n// Unix timestamp in seconds, with increment 1\ncs = cs.startAtOperationTime(new BsonTimestamp(1587009106, 1));\nMongoCursor<ChangeStreamDocument> cursor = cs.iterator();\nclusterTime",
"text": "Hi @Chris_Cranford,Thanks for providing more context.In the traditional oplog scenario, we could capture the last timestamp from the oplog before we begin the snapshot so we know where we need to start tailing the oplog from when streaming begins.If that’s your use case, you could utilise startAtOperationTime instead.Available in MongoDB 4.0+, you can specify a startAtOperationTime to open the cursor at a particular point in time (timestamp). Just make sure the time range is within the oplog range. Using MongoDB Java driver (v3.12) as an example. :See also Change Event’ s clusterTime field.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Awesome, I think I understand. I’ll give this a shot next week and report back with how that works.",
"username": "Chris_Cranford"
},
{
"code": "",
"text": "Hi @wan\nI’m faced with this issue. But I’m using AWS DocumentDB and it only compatibility with MongoDB 3.6.",
"username": "chu_quang"
},
{
"code": "",
"text": "Hi @chu_quang, and welcome to the forum!But I’m using AWS DocumentDB and it only compatibility with MongoDB 3.6.AWS DocumentDB API is an emulation of MongoDB which differs in features, compatibility, and implementation from an actual MongoDB deployment. AWS DocumentDB suggestion of API version support (eg 3.6) is referring to the wire protocol used rather than the full MongoDB feature set for that version.For further questions on AWS DocumentDB I’d suggest to contact AWS.If you want to use the latest MongoDB features and drivers without emulation I’d strongly recommend to use MongoDB Atlas.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thank for your support. I will contact AWS.",
"username": "chu_quang"
},
{
"code": "",
"text": "Hi @Chris_Cranford were you able to resolve this issue/approach?Taking a guess, is this for the Debezium mongodb connector for allowing to load an entire collection upon creation + then start with the CDC via change streams…?Is this “snapshot.mode”: “initial” of Debezium connector for MongoDB :: Debezium Documentation?",
"username": "Hartmut"
},
{
"code": "",
"text": "Thanks for these comments. I found the blog very useful.",
"username": "Bathri_Nathan_Ramanathan"
}
]
| Change stream projection, getting last resume token | 2020-04-15T16:52:22.291Z | Change stream projection, getting last resume token | 11,434 |
null | [
"storage"
]
| [
{
"code": "recordsizeinnodb_page_size",
"text": "Hi.\nI’m trying to run MongoDB on top of ZFS (I know about ext4 and XFS as the recommended solution), and I got stuck a bit with a recordsize ZFS option. To set it properly, I need to know if there is a fixed pagesize WiredTiger operates with. Something like an innodb_page_size for InnoDB MySQL.I couldn’t find a word about it, only logrecord size param of a journal.Thanks in advance.",
"username": "weastur"
},
{
"code": "allocation_size4KBstorage.wiredTiger.collectionConfig.configStringstorage.wiredTiger.indexConfig.configString",
"text": "The parameter I was looking for is referred to as the allocation_size It’s WiredTiger parameter. From the documentationA component of WiredTiger called the Block Manager divides the on-disk pages into smaller chunks called blocks, which then get written to the disk. The size of these blocks is defined by a parameter called allocation_size, which is the underlying unit of allocation for the file the data gets stored in. An application might choose to have data compressed before it gets stored to disk by enabling block compression.By default, for MongoDB it’s equal to 4KBIt could be changed with storage.wiredTiger.collectionConfig.configString , storage.wiredTiger.indexConfig.configString and while create collection.https://source.wiredtiger.com/11.1.0/tune_page_size_and_comp.html",
"username": "weastur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| WiredTiger page size | 2022-11-23T13:44:37.856Z | WiredTiger page size | 1,996 |
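A hedged sketch of passing a WiredTiger configString at collection creation time, as the answer above mentions; the allocation_size value and whether changing it actually helps for a given ZFS recordsize are assumptions to verify against the WiredTiger documentation:

db.createCollection('mycoll', {
  storageEngine: { wiredTiger: { configString: 'allocation_size=4KB' } }
})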
null | [
"aggregation"
]
| [
{
"code": "> { name: \"ball\", category: [\"sport\", \"toys\" ], quantity: 5 }\n> { name: \"gym bar\", category: [\"sport\"], quantity: 3 }\n.aggregate([\n {$unwind: \"$category\"},\n {$group: {category: \"$category\", quantity: {$sum: \"$quantity\"}}}\n ])\n",
"text": "I have products collection with a “category” as array:How to aggregate by category in array ?\nHow to calc quantity of all products in each category ?Expexted result - “sport”: 8, “toys”: 5I tryed this …Error: “The field ‘category’ must be an accumulator object”",
"username": "Aleksander_Podmazko"
},
{
"code": "{$group: {category: \"$category\", quantity: {$sum: \"$quantity\"}}}_id{$group: {_id: \"$category\", quantity: {$sum: \"$quantity\"}}}\n",
"text": "Hello @Aleksander_Podmazko, Welcome to the MongoDB community forum,{$group: {category: \"$category\", quantity: {$sum: \"$quantity\"}}}The _id is the field to set group key for the document, it should be:See documentation for more details:",
"username": "turivishal"
},
{
"code": "",
"text": "Thanks a lot, I unerstood it not right.",
"username": "Aleksander_Podmazko"
}
]
| Aggregation by field inside array | 2022-11-23T11:29:15.499Z | Aggregation by field inside array | 1,523 |
null | [
"android",
"kotlin"
]
| [
{
"code": "",
"text": "Okay so in my application from a Date/Time picker I’m allowing a user to make a choice. That choice is later represented as a LocalDateTime type. Now I want to upload that LocalDateTime to Mongo DB Atlas/Realm. However the type that I’ve defined in the schema is a date (RealmInstant).So my question is, how can I convert a “LocalDateTime” into a “RealmInstant” format?",
"username": "111757"
},
{
"code": "RealmInstant.from()LocalDateTime.now().toInstant(ZoneOffset.UTC)",
"text": "Have you tried RealmInstant.from() with something like LocalDateTime.now().toInstant(ZoneOffset.UTC) to get required parameters ?",
"username": "Mohit_Sharma"
},
{
"code": "RealmInstant.from(date.toInstant(ZoneOffset.UTC).epochSecond, 0)+054860-03-23T17:21:49.000+00:002022-11-21T19:12:42.000+00:00DateTimeFormatter.ofPattern(\"dd MMM yyyy\").format(date)",
"text": "When I try that approach:RealmInstant.from(date.toInstant(ZoneOffset.UTC).epochSecond, 0)Then in the database I get a different date format:A regular RealmInstant date format:\n+054860-03-23T17:21:49.000+00:00Date format when I convert LocalDateTime into RealmInstant:\n2022-11-21T19:12:42.000+00:00As you can see those two do not have the same formatted string in the database.\nAnd when I parse the second example in my app, instead of the correct date, I’m getting a year 1970:DateTimeFormatter.ofPattern(\"dd MMM yyyy\").format(date)",
"username": "111757"
},
{
"code": "localDPdate val localDP = LocalDateTime.parse(\"2022-11-21T19:12:42\")\n val realmDP = RealmInstant.from(localDP.toEpochSecond(ZoneOffset.UTC), localDP.nano)\n val date = LocalDateTime.ofEpochSecond(realmDP.epochSeconds, realmDP.nanosecondsOfSecond, ZoneOffset.UTC)\n\n Log.e(\"Container\", \"Container: $realmDP -- $date.\")",
"text": "Hello Again,This works for me, both localDP & date return the same value, didn’t check on Atlas.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Thanks for your answer, I couldn’t figure that out myself. ",
"username": "111757"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| LocalDateTime to RealmInstant conversion? | 2022-11-21T13:25:56.326Z | LocalDateTime to RealmInstant conversion? | 2,929 |
null | [
"node-js",
"serverless"
]
| [
{
"code": "server.listen(port);MongoServerSelectionError: Server selection timed out after 30000ReplicaSetNoPrimaryMongoNetworkTimeoutError: connection timed out at connectionFailureError useUnifiedTopology",
"text": "Hi,We are having some issues with our Cloud Run instance.Stack: Node.js, Express.js, mongodb driver (4.10.0), MongoDB Atlas on version 6.0.3, Cloud Run using VPC Network with Serverless VPC Access Connector and Cloud NAT to get fixed ipOne of the first things the Cloud Run / container process has to do is connect to the database. Afterwards, we create some indices. Only after that, we start the server with server.listen(port);.On our development environment: one out of ten times when deploying a new revision, it fails to start.In most cases, the connection with the database just times out on startup: MongoServerSelectionError: Server selection timed out after 30000 (with ReplicaSetNoPrimary).In some cases, the connection is established, but the creation of the indices right after fails: MongoNetworkTimeoutError: connection timed out at connectionFailureError In some other cases, the revision is deployed and starts, but from the logs, it’s clear that the connection with the database is unstable. It has many reconnects, which is not normal.On our production environment: it’s much worse. It almost never connects to the database succesfully. If it does, it’s not useable due to all the reconnects, that also often fail.IP whitelisting is not the issue: on development we allow all IPs, and on production we also tested this to see if it made any difference.We use a Cloud NAT to get a fixed IP for all traffic from our Cloud Run instances.Our MongoDB Atlas was hosted on AWS. I moved it to GCP to see if this makes a difference. It doesn’t.Next I wanted to try if VPC peering makes a difference. However, I encountered a problem with setting up the peer network (overlap in CIDR MongoDB and subnet used for NAT). I will test this later on, but don’t know if there is any chance it will fix the problem.Connecting to our databases (dev & production) from our local machine was never a problem. Same goes for our live app that is currently running on AWS: the connection works perfectly. (We are migrating to GCP Cloud Run, our app is already live on AWS)I saw online someone say using the flag useUnifiedTopology fixed a similar issue for them, but iirc this is no longer available on MongoDB 6.Similarly, someone else said to use “CPU is always allocated” on Cloud Run settings: this did not help.Deploying a “hello world” express app that just pings the DB seemed to work at first, but when creating some revisions, it also failed once in a while. So it really seems a problem with the connection itself.It doesn’t seem as if our codebase can have a large impact, because the first thing we do is connect with the DB, and it’s that that fails (and the hello world app failed too). So it must be either something in GCP or MongoDB I think.Does anybody have any idea what might cause this? Did anybody encounter a similar problem?",
"username": "Laurens"
},
{
"code": "",
"text": "I did more testing. The issue seems to be the Cloud NAT on GCP.\nIf we don’t use it, by either not using any VPC connector, or by using VPC peering, the connection is stable.",
"username": "Laurens"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unstable connection between GCP Cloud Run and MongoDB Atlas | 2022-11-23T22:37:43.631Z | Unstable connection between GCP Cloud Run and MongoDB Atlas | 2,744 |
[
"node-js"
]
| [
{
"code": "",
"text": "HelloI first want to say, I really enjoy the University content and the new format you’re going for. I am currently working towards becoming a certified MongoDB developer (Associate Developer Certification), and have been following the learning path for NodeJS developer in the old MongoDB University format. So I just tried signing up for the new MongoDB University and noticed that my current active learning path is not really working. It also have not migrated all of my completed courses (like MM220JS). So I registered for the same learning path but in the new format and I now have two learning paths. However the new doesn’t recognize the old courses I completed for that learning path.How does this migration work/will work at the 1st of December? Should I just start all over in the new format to be sure everything is registered correctly.\nimage2203×844 53.1 KB\nBest Regards\nThor",
"username": "Thor_Thyeborg_Lind"
},
{
"code": "",
"text": "Hey @Thor_Thyeborg_Lind,I first want to say, I really enjoy the University content and the new format you’re going forReally glad to know you are enjoying our content and finding it useful. The MongoDB Node.js Developer Path contains the new MongoDB Courses that have been recently launched along with the new university site. It includes all of the content from the Introduction to MongoDB plus the driver-specific(in this case, Node.js) content. This new learning path contains courses with fresh and most up-to-date content.It also have not migrated all of my completed courses (like MM220JS).Are you continuing to take the path on the old platform or completely moved to the new platform? We are in process of migrating the progress of learners from the old university site to the new one(which is scheduled for Dec 1). So on 1st Dec, you should see your progress shift to the new LMS. But you would continue to see the two paths since they both contain different contents(the old path will contain old courses).Although it’s up to you, it would be better to explore the new courses under the Node.js Developer Path since it contains up-to-date content and will contain courses that are specifically targeted at the Node.js driver. Furthermore, when you complete this learning path, you will receive 50% off an Associate Developer certification exam attempt. Please let us know if you have any further questions. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Thanks for the detailed answer.I think I will start over with the new learning path and maybe even the course just to try the new content. ",
"username": "Thor_Thyeborg_Lind"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Migrating to the new MongoDB University format | 2022-11-18T09:45:31.436Z | Migrating to the new MongoDB University format | 2,165 |
|
null | []
| [
{
"code": "",
"text": "Hi,A simple question to understand: Can we use single MongoDB server for multiple developers installed same application in multiple App servers and connects to same MongoDB?\nBasically we don’t have enough servers to accommodate multiple MongoDB servers, so the plan is to use same MongoDB server for all the developers.Thanks,\nVikas",
"username": "Vikas_Reddy"
},
{
"code": "",
"text": "Hi Vikas_ReddyYou can use one server hosting multiple databases. You can then configure multiple users with only access to a dedicated database (or multiple databases). You can even configure access only to a collection within a database, if you do not want to create multiple databases.",
"username": "Simon_Bieri"
}
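A minimal sketch of the per-developer setup described above; the user name and database name are placeholders, and collection-level access would instead need a custom role with a collection-scoped privilege.

```javascript
// Run with a user that can create users (e.g. one holding userAdminAnyDatabase).
db.getSiblingDB("admin").createUser({
  user: "dev_alice",               // placeholder developer account
  pwd: "changeMe",                 // placeholder password
  roles: [
    { role: "readWrite", db: "alice_app_db" }  // access limited to one database
  ]
});
```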
]
| Single MongoDB setup for multiple developers | 2022-11-24T06:07:58.511Z | Single MongoDB setup for multiple developers | 1,554 |
null | [
"aggregation"
]
| [
{
"code": " {\n \"_id\": \"some-id\",\n \"_class\": \"org.some.class\",\n \"number\": 1015,\n \"timestamp\": {\"$date\": \"2020-09-05T12:08:02.809Z\"},\n \"cost\": 0.9200000166893005\n }\n \"_id\": {\n \"productId\": \"some-id\",\n \"countryCode\": \"DE\"\n },\n \"_class\": \"org.some.class\",\n \"number\": 1015,\n \"timestamp\": {\"$date\": \"2020-09-05T12:08:02.809Z\"},\n \"cost\": 0.9200000166893005\n }\n",
"text": "Hello!\nI have to migrate data from a structuretoThe change that is in the new document is the _id field is replaced by a complex _id object (productId : String, country : String)\nThe country field is to be completed for the entire collection with a specific value - DE.\nThe collection has about 40 million records in the old format and 700k in the new format. I would like to bring these 40 million to this new form. I’m using mongo 3.6, so I’m a bit limited and I’ll probably have to use the aggregate functions to create a completely new collection, and then remove the old one. I will be grateful for help on how to do it - how the query that will do it should look like and how to keep these migrated 700k documents.",
"username": "dirtydb"
},
{
"code": "db.productDetails.aggregate(\n{$match: {_id: {$exists: true}}},\n{$addFields: {\"_id\": {\"productId\": \"$_id\", \"country\": \"DE\"}},\n{$project: {_id: 1, _class: 1, number: 1, timestamp: 1, cost: 1}},\n{$out: \"productDetailsV2\"}\n)\n",
"text": "What I have got so far:but this solution would only work if I didn’t have 700k documents in the new form.",
"username": "dirtydb"
},
{
"code": "_id : { product : 1 , country : 2 }\n_id : { country : 2 , product : 1 }\ncollection.find( { \"_id.product\" : 1 , \"_id.country\" : 2 } )\ncollection.find( { \"_id.country\" : 2 , \"_id.product\" : 1 } )\n{_id: {$exists: true}}",
"text": "The first smart move would be to migrate to a supported version.Be aware that the objectis not equal to the objectdespite the fact thatwill find the same documents as{_id: {$exists: true}}Will match all documents. I think you want { “_id.products” : { “$exists” : false } }.",
"username": "steevej"
},
{
"code": "[ \n {$addFields: {\n \"_id\":{\n $cond:{\n if:{$not:[\"$_id.productId\"]},\n then: {\"productId\": \"$_id\", \"country\": \"DE\"}},\n else:\"$_id\"\n }\n }\n }},\n {$out: \"productDetails_all_new\"}\n]\n",
"text": "quick note: “_id” field is flexible in its data type, however, please try to keep it being an “ObjectId” (or at least simple as GUID, Int, or String).if you still want to go with your change in the “_id” field, check this one:$match is not needed here. $project is also not needed unless you are changing the shape.this will process all documents, old and new, and replaces all old “_id” fields, so try on a test collection first as you might have parts we are not aware of.",
"username": "Yilmaz_Durmaz"
}
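A small follow-up sketch (3.6-compatible shell syntax) for checking the result of the rewrite above — it counts how many documents in the output collection are still in the old scalar-_id format; the collection name is the one used in the previous reply.

```javascript
// Should return 0 once every document carries the embedded productId.
db.productDetails_all_new.find({ "_id.productId": { $exists: false } }).count();

// And the total should match the source collection:
db.productDetails_all_new.find().count();
```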
]
| Migrate to new document structure in mongo 3.6 | 2022-11-23T16:52:11.844Z | Migrate to new document structure in mongo 3.6 | 1,538 |
null | []
| [
{
"code": "",
"text": "How to get the voucher code the exam?\nMongoDB Associate Developer Exam",
"username": "Aesha_Tirghoda"
},
{
"code": "",
"text": "I think MongoDB Team will email you the voucher code after you’ve finished the learning path course, 3-5 days. I just finished the learning path, got the email from MongoDB Team and they said toBe on the lookout for your coupon code to hit your inbox in the next 3-5 days",
"username": "Tobias_Aditya"
},
{
"code": "",
"text": "Hi @Aesha_Tirghoda,Welcome to the MongoDB Community forums Please reach out to our Certification team at [email protected]. They will be happy to assist you!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to get the voucher code the exam? | 2022-11-21T06:27:27.687Z | How to get the voucher code the exam? | 2,933 |
null | [
"replication"
]
| [
{
"code": "use local ;\ndb.runCommand({ \"compact\" : \"oplog.rs\" } );\nMongoServerError: not authorized on local to execute command { compact: \"oplog.rs\", lsid: { id: UUID(\"138f3b1d-c0b8-465f-b4d7-cd6c319af187\") }, $clusterTime: { clusterTime: Timestamp(1669219680, 3), signature: { hash: BinData(0, 868D44FD9FD4E162D053F094FE7D5ACA778891D1), keyId: 7166368587978899460 } }, $db: \"local\" }\n",
"text": "hi,i have a 4 member replica set , 2 data holding member, 1 arbiter, 1 hidden member with no index ,no votes, priority 0 ,i am trying to run compact command on a direct connection to a secondary , with root user permission , as mentioned below, but i am getting an error see below , can any one help find a solution ?command i am runningErrorthanks\nDee",
"username": "gemini_geek"
},
{
"code": "",
"text": "I don’t think root can run compact against system collection oplog.rs\nYou have to give explicit privileges on local db/oplog.rs collection to a custom role and ssign that to your user\nThe dbadmin buitin role given to root can run compact only on non system collections\nPlease check this link",
"username": "Ramachandra_Tummala"
},
{
"code": "use admin;\ndb.createRole(\n {\n role: \"myCustomCompactRole\",\n privileges: [\n {\n resource: { \"db\" : \"local\" , \"collection\" : \"oplog.rs\" },\n actions: [ \"compact\" ]\n }\n ],\n roles: []\n }\n)\n\n\ndb.grantRolesToUser(\"dee-cluster-admin\", [ \"myCustomCompactRole\" ] )\n\n",
"text": "Hi,thanks for your reply, i created a role with privileges , and granted it to my user , than it worked, i am new to Mongodb it seems i was confused between roles and privileges.i did the following, in case some one need a reference",
"username": "gemini_geek"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error executing compact command on a replica member? | 2022-11-23T16:14:31.614Z | Error executing compact command on a replica member? | 1,394 |
null | [
"aggregation",
"serverless"
]
| [
{
"code": "",
"text": "I received a bill that was higher than expected for my server less database. Many of the queries I was making include aggregate lookup operations for documents in other collections (sometimes hundreds of lookups for one query) .According to the serverless pricing information:\n“You are charged one RPU for each document read (up to 4KB) or for each index read (up to 256 bytes).”Just to make things crystal clear, does that mean I am charged one RPU for each aggregate lookup performed? E.g. If I query a document with 100 ids that I want replaced with their corresponding documents and through aggregate operations end up doing 100 lookups, I am charged 101 RPUs (one to query the original document and then 100 for each id lookup) for that query?",
"username": "penguinlover"
},
{
"code": "",
"text": "Hi @penguinlover and welcome to the MongoDB community forum!!does that mean I am charged one RPU for each aggregate lookup performed?The RPU in MongoDB Atlas is charged depending on the number of documents being scanned for the the query and not on the documents being returned from the query.Generally, the RPU that forms the basis of serverless charges concerns about the work needed to be performed by MongoDB to service the work. As an example, if a query were to cause a collection scan and return 0 documents, MongoDB still has to service the collection scan and the appropriate RPU’s would be charged accordingly.However, indexes is one of the many factors which would determine the pricing for the read and write. The other factor which should also be taken into consideration is the document size read as noted in the quote you posted from the Atlas Billing Documentation.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
}
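As an illustration of the index point above — a sketch with hypothetical collection and field names: making sure the foreignField used by a $lookup is indexed keeps each lookup to an index read rather than a per-document collection scan, which is what drives the scanned-document RPU count up.

```javascript
// Hypothetical names: 'orders' documents hold an array of itemIds that are
// looked up in 'items'. Without this index, every $lookup scans 'items'.
db.items.createIndex({ itemId: 1 });

db.orders.aggregate([
  { $lookup: { from: "items", localField: "itemIds", foreignField: "itemId", as: "resolvedItems" } }
]);
```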
]
| MongoDB Atlas serverless billing for aggregate query lookup | 2022-11-20T20:56:42.244Z | MongoDB Atlas serverless billing for aggregate query lookup | 1,942 |
null | [
"aggregation"
]
| [
{
"code": "db={\n \"sched_info\": [\n {\n \"_id\": {\n \"$oid\": \"63739f0de6b9aae9c681aeba\"\n },\n \"cluster_info\": {\n \"attached_clients\": {\n \"bng-emake-9a\": {\n \"IP\": \"10.223.37.24\",\n \"MaxJobs\": 20,\n \"NoRemote\": true,\n \"Speed\": 79.484589\n }\n },\n \"attached_daemons\": {\n \"bng-ea-agt-3a\": {\n \"IP\": \"10.223.36.42\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 90.203011\n },\n \"bng-ea-agt-3b\": {\n \"IP\": \"10.223.36.43\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 55.782074\n },\n \"bng-ea-agt-7a\": {\n \"IP\": \"10.223.36.62\",\n \"MaxJobs\": 18,\n \"NoRemote\": false,\n \"Speed\": 89.654556\n },\n \"bng-ea-agt-7b\": {\n \"IP\": \"10.223.36.63\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 87.926308\n },\n \"bng-ea-agt-7c\": {\n \"IP\": \"10.223.36.64\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 89.026802\n },\n \"bng-ea-agt-7d\": {\n \"IP\": \"10.223.36.65\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 89.416687\n }\n },\n \"cluster_nodes_info\": {\n \"active\": 0,\n \"available_clients\": 1,\n \"available_daemons\": 6,\n \"client_and_daemon\": 7\n },\n \"daemon_cpu_available_and_free_info\": {\n \"active\": 0,\n \"free\": 158,\n \"local\": 0,\n \"pending\": 0,\n \"total_cpu\": 158\n },\n \"description\": {\n \"analysis\": \"free cpu available\",\n \"health\": true\n },\n \"dump_time\": \"Tue Nov 15 19:44:02 2022\",\n \"scheduler_ip\": \"bng-ea-agt-7a\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"63739f72e6b9aae9c681aebb\"\n },\n \"cluster_info\": {\n \"attached_daemons\": {\n \"qnc-ea-agt-175b\": {\n \"IP\": \"10.44.138.82\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 0.0\n }\n },\n \"cluster_nodes_info\": {\n \"active\": 0,\n \"available_clients\": 0,\n \"available_daemons\": 1,\n \"client_and_daemon\": 1\n },\n \"daemon_cpu_available_and_free_info\": {\n \"active\": 0,\n \"free\": 28,\n \"local\": 0,\n \"pending\": 0,\n \"total_cpu\": 28\n },\n \"description\": {\n \"analysis\": \"free cpu available\",\n \"health\": true\n },\n \"dump_time\": \"Tue Nov 15 19:45:53 2022\",\n \"scheduler_ip\": \"qnc-ea-agt-175a\"\n }\n },\n {\n \"_id\": {\n \"$oid\": \"63739fd6e6b9aae9c681aebc\"\n },\n \"cluster_info\": {\n \"attached_clients\": {\n \"bng-ea-agt-76a\": {\n \"IP\": \"10.224.152.170\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 46.928242\n },\n \"bng-ea-agt-76b\": {\n \"IP\": \"10.224.152.171\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 48.899769\n },\n \"bng-ea-agt-76c\": {\n \"IP\": \"10.224.152.172\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 53.941841\n },\n \"bng-ea-agt-76d\": {\n \"IP\": \"10.224.152.173\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 48.147972\n },\n \"bng-ea-agt-77a\": {\n \"IP\": \"10.224.152.174\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 48.398701\n },\n \"bng-ea-agt-77b\": {\n \"IP\": \"10.224.152.175\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 45.801868\n },\n \"bng-ea-agt-77c\": {\n \"IP\": \"10.224.152.176\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 43.399036\n },\n \"bng-ea-agt-77d\": {\n \"IP\": \"10.224.152.177\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 43.504852\n },\n \"bng-ea-agt-78a\": {\n \"IP\": \"10.224.152.178\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 46.324295\n },\n \"bng-ea-agt-78b\": {\n \"IP\": \"10.224.152.179\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 48.12402\n },\n \"bng-ea-agt-78c\": {\n \"IP\": \"10.224.152.180\",\n 
\"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 43.496323\n },\n \"bng-ea-agt-78d\": {\n \"IP\": \"10.224.152.181\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 37.699486\n },\n \"bng-ea-agt-79a\": {\n \"IP\": \"10.224.152.182\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0.0\n },\n \"bng-ea-agt-79b\": {\n \"IP\": \"10.224.152.183\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0.0\n },\n \"bng-ea-agt-79c\": {\n \"IP\": \"10.224.152.184\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0.0\n },\n \"bng-ea-agt-79d\": {\n \"IP\": \"10.224.152.185\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0.0\n },\n \"bng-ea-agt-80a\": {\n \"IP\": \"10.224.152.186\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0.0\n },\n \"bng-ea-agt-80b\": {\n \"IP\": \"10.224.152.187\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0.0\n },\n \"bng-ea-agt-80c\": {\n \"IP\": \"10.224.152.188\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0.0\n },\n \"bng-ea-agt-80d\": {\n \"IP\": \"10.224.152.189\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0.0\n }\n },\n \"attached_daemons\": {\n \"bng-ea-agt-65d\": {\n \"IP\": \"10.224.152.117\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.715843\n },\n \"bng-ea-agt-66a\": {\n \"IP\": \"10.224.152.118\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.045591\n },\n \"bng-ea-agt-66b\": {\n \"IP\": \"10.224.152.119\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.129501\n },\n \"bng-ea-agt-66c\": {\n \"IP\": \"10.224.152.120\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.980595\n },\n \"bng-ea-agt-66d\": {\n \"IP\": \"10.224.152.121\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.966152\n },\n \"bng-ea-agt-67a\": {\n \"IP\": \"10.224.152.122\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.545408\n },\n \"bng-ea-agt-67b\": {\n \"IP\": \"10.224.152.123\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.567656\n },\n \"bng-ea-agt-67c\": {\n \"IP\": \"10.224.152.124\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.602665\n },\n \"bng-ea-agt-67d\": {\n \"IP\": \"10.224.152.125\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.978537\n },\n \"bng-ea-agt-68a\": {\n \"IP\": \"10.224.152.126\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.515978\n },\n \"bng-ea-agt-68b\": {\n \"IP\": \"10.224.152.127\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.063932\n },\n \"bng-ea-agt-68c\": {\n \"IP\": \"10.224.152.128\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.261913\n },\n \"bng-ea-agt-68d\": {\n \"IP\": \"10.224.152.129\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.822884\n },\n \"bng-ea-agt-69a\": {\n \"IP\": \"10.224.152.130\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.858036\n },\n \"bng-ea-agt-69b\": {\n \"IP\": \"10.224.152.131\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.856615\n },\n \"bng-ea-agt-69c\": {\n \"IP\": \"10.224.152.132\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.345226\n },\n \"bng-ea-agt-69d\": {\n \"IP\": \"10.224.152.133\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.255632\n },\n \"bng-ea-agt-70a\": {\n \"IP\": \"10.224.152.134\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.546576\n },\n \"bng-ea-agt-70b\": {\n \"IP\": \"10.224.152.135\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.122202\n },\n \"bng-ea-agt-70c\": {\n \"IP\": 
\"10.224.152.136\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.60383\n },\n \"bng-ea-agt-70d\": {\n \"IP\": \"10.224.152.137\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.3407\n },\n \"bng-ea-agt-71a\": {\n \"IP\": \"10.224.152.150\",\n \"MaxJobs\": 38,\n \"NoRemote\": false,\n \"Speed\": 27.695211\n },\n \"bng-ea-agt-72a\": {\n \"IP\": \"10.224.152.154\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.563095\n },\n \"bng-ea-agt-72b\": {\n \"IP\": \"10.224.152.155\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.593485\n },\n \"bng-ea-agt-72c\": {\n \"IP\": \"10.224.152.156\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.327879\n },\n \"bng-ea-agt-72d\": {\n \"IP\": \"10.224.152.157\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.512634\n },\n \"bng-ea-agt-73a\": {\n \"IP\": \"10.224.152.158\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.542143\n },\n \"bng-ea-agt-73b\": {\n \"IP\": \"10.224.152.159\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.394144\n },\n \"bng-ea-agt-73c\": {\n \"IP\": \"10.224.152.160\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.713577\n },\n \"bng-ea-agt-73d\": {\n \"IP\": \"10.224.152.161\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.258692\n },\n \"bng-ea-agt-74a\": {\n \"IP\": \"10.224.152.162\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.941082\n },\n \"bng-ea-agt-74b\": {\n \"IP\": \"10.224.152.163\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.259108\n },\n \"bng-ea-agt-74c\": {\n \"IP\": \"10.224.152.164\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.658989\n },\n \"bng-ea-agt-74d\": {\n \"IP\": \"10.224.152.165\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.375834\n },\n \"bng-ea-agt-75a\": {\n \"IP\": \"10.224.152.166\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.81979\n },\n \"bng-ea-agt-75b\": {\n \"IP\": \"10.224.152.167\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.066696\n },\n \"bng-ea-agt-75c\": {\n \"IP\": \"10.224.152.168\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.778721\n },\n \"bng-ea-agt-75d\": {\n \"IP\": \"10.224.152.169\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.476954\n }\n },\n \"cluster_nodes_info\": {\n \"active\": 50,\n \"available_clients\": 20,\n \"available_daemons\": 38,\n \"client_and_daemon\": 58\n },\n \"daemon_cpu_available_and_free_info\": {\n \"active\": 1674,\n \"free\": 140,\n \"local\": 50,\n \"pending\": 0,\n \"total_cpu\": 1814\n },\n \"description\": {\n \"analysis\": \"free cpu available\",\n \"health\": true\n },\n \"dump_time\": \"Tue Nov 15 19:49:02 2022\",\n \"scheduler_ip\": \"bng-ea-agt-71a\"\n }\n }\n ],\n \"schedlist\": [\n {\n \"_id\": {\n \"$oid\": \"6371db5f6b3a2232b4e71f58\"\n },\n \"active_schedulers\": {\n \"bng-ea-agt-7a\": {\n \"ice_version\": \"4.1\",\n \"netname\": \"icecc_bng_test\"\n },\n \"qnc-ea-agt-175a\": {\n \"ice_version\": \"4.0\",\n \"netname\": null\n }\n }\n },\n {\n \"_id\": {\n \"$oid\": \"6371db7f244f385faa8d0802\"\n },\n \"active_daemons\": {\n \"bng-ea-agt-7a\": {\n \"ice_version\": \"4.1\",\n \"netname\": \"icecc_bng_test\"\n },\n \"bng-ea-agt-3b\": {\n \"ice_version\": \"4.1\",\n \"netname\": \"icecc_bng_test\"\n },\n \"bng-ea-agt-7d\": {\n \"ice_version\": \"4.1\",\n \"netname\": \"icecc_bng_test\"\n },\n \"bng-ea-agt-7b\": {\n \"ice_version\": \"4.1\",\n \"netname\": \"icecc_bng_test\"\n }\n }\n }\n 
]\n}\ndb.sched_info.aggregate([\n {\n \"$lookup\": {\n \"from\": \"schedlist\",\n \"localField\": {\n $getField: \"$cluster_info.scheduler_ip\"\n },\n \"foreignField\": {\n $getField: \"$active_schedulers.\"\n },\n \"as\": \"testing\"\n }\n }\n])\n",
"text": "i have two collections . schedlist and sched_infoi want to join these two collections to get scheduler info a and daemon info (respective netname and version) from schedlist .\nHow to do lookup based on nested documents. $cluster_info.scheduler_info in local field of lookup throws a error",
"username": "Stuart_S"
},
{
"code": "[{k : \"active_schedulers\", \"v\" : ....}] ",
"text": "Hi @Stuart_S ,Because the lookup value is a field name and not a value it cannot be performed with such a standard lookup.It will have to use a pipeline syntax of lookup with $objectToArray conversion just to get the field name into [{k : \"active_schedulers\", \"v\" : ....}] In order to lookup.Now later on it will need to be assembled back to object using $arrayToObject …As you can see it is overcomplex … What limit you to change the data model to store them together prejoined or at least turn the lookup values to be placed as a value (even additionally) to ease the lookup.At the moment the current model is not designed good for those queriesThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "db.sched_info.aggregate([{\n $lookup: {\n from: 'schedList',\n 'let': {\n sched_ip: '$cluster_info.scheduler_ip'\n },\n pipeline: [\n {\n $addFields: {\n keys: {\n $objectToArray: '$active_schedulers'\n }\n }\n },\n {\n $unwind: {\n path: '$keys'\n }\n },\n {\n $match: {\n $expr: {\n $eq: [\n '$keys.k',\n '$$sched_ip'\n ]\n }\n }\n },\n {\n $project: {\n keys: 0\n }\n }\n ],\n as: 'schedList'\n }\n}])\n",
"text": "Hi @Stuart_S ,Here is a sample idea for doing the join with current structure Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "[\n {\n \"_id\": ObjectId(\"63739f0de6b9aae9c681aeba\"),\n \"cluster_info\": {\n \"attached_clients\": {\n \"bng-emake-9a\": {\n \"IP\": \"10.223.37.24\",\n \"MaxJobs\": 20,\n \"NoRemote\": true,\n \"Speed\": 79.484589\n }\n },\n \"attached_daemons\": {\n \"bng-ea-agt-3a\": {\n \"IP\": \"10.223.36.42\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 90.203011\n },\n \"bng-ea-agt-3b\": {\n \"IP\": \"10.223.36.43\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 55.782074\n },\n \"bng-ea-agt-7a\": {\n \"IP\": \"10.223.36.62\",\n \"MaxJobs\": 18,\n \"NoRemote\": false,\n \"Speed\": 89.654556\n },\n \"bng-ea-agt-7b\": {\n \"IP\": \"10.223.36.63\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 87.926308\n },\n \"bng-ea-agt-7c\": {\n \"IP\": \"10.223.36.64\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 89.026802\n },\n \"bng-ea-agt-7d\": {\n \"IP\": \"10.223.36.65\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 89.416687\n }\n },\n \"cluster_nodes_info\": {\n \"active\": 0,\n \"available_clients\": 1,\n \"available_daemons\": 6,\n \"client_and_daemon\": 7\n },\n \"daemon_cpu_available_and_free_info\": {\n \"active\": 0,\n \"free\": 158,\n \"local\": 0,\n \"pending\": 0,\n \"total_cpu\": 158\n },\n \"description\": {\n \"analysis\": \"free cpu available\",\n \"health\": true\n },\n \"dump_time\": \"Tue Nov 15 19:44:02 2022\",\n \"scheduler_ip\": \"bng-ea-agt-7a\"\n },\n \"schedList\": []\n },\n {\n \"_id\": ObjectId(\"63739f72e6b9aae9c681aebb\"),\n \"cluster_info\": {\n \"attached_daemons\": {\n \"qnc-ea-agt-175b\": {\n \"IP\": \"10.44.138.82\",\n \"MaxJobs\": 28,\n \"NoRemote\": false,\n \"Speed\": 0\n }\n },\n \"cluster_nodes_info\": {\n \"active\": 0,\n \"available_clients\": 0,\n \"available_daemons\": 1,\n \"client_and_daemon\": 1\n },\n \"daemon_cpu_available_and_free_info\": {\n \"active\": 0,\n \"free\": 28,\n \"local\": 0,\n \"pending\": 0,\n \"total_cpu\": 28\n },\n \"description\": {\n \"analysis\": \"free cpu available\",\n \"health\": true\n },\n \"dump_time\": \"Tue Nov 15 19:45:53 2022\",\n \"scheduler_ip\": \"qnc-ea-agt-175a\"\n },\n \"schedList\": []\n },\n {\n \"_id\": ObjectId(\"63739fd6e6b9aae9c681aebc\"),\n \"cluster_info\": {\n \"attached_clients\": {\n \"bng-ea-agt-76a\": {\n \"IP\": \"10.224.152.170\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 46.928242\n },\n \"bng-ea-agt-76b\": {\n \"IP\": \"10.224.152.171\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 48.899769\n },\n \"bng-ea-agt-76c\": {\n \"IP\": \"10.224.152.172\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 53.941841\n },\n \"bng-ea-agt-76d\": {\n \"IP\": \"10.224.152.173\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 48.147972\n },\n \"bng-ea-agt-77a\": {\n \"IP\": \"10.224.152.174\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 48.398701\n },\n \"bng-ea-agt-77b\": {\n \"IP\": \"10.224.152.175\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 45.801868\n },\n \"bng-ea-agt-77c\": {\n \"IP\": \"10.224.152.176\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 43.399036\n },\n \"bng-ea-agt-77d\": {\n \"IP\": \"10.224.152.177\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 43.504852\n },\n \"bng-ea-agt-78a\": {\n \"IP\": \"10.224.152.178\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 46.324295\n },\n \"bng-ea-agt-78b\": {\n \"IP\": \"10.224.152.179\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 48.12402\n },\n \"bng-ea-agt-78c\": {\n \"IP\": \"10.224.152.180\",\n 
\"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 43.496323\n },\n \"bng-ea-agt-78d\": {\n \"IP\": \"10.224.152.181\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 37.699486\n },\n \"bng-ea-agt-79a\": {\n \"IP\": \"10.224.152.182\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0\n },\n \"bng-ea-agt-79b\": {\n \"IP\": \"10.224.152.183\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0\n },\n \"bng-ea-agt-79c\": {\n \"IP\": \"10.224.152.184\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0\n },\n \"bng-ea-agt-79d\": {\n \"IP\": \"10.224.152.185\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0\n },\n \"bng-ea-agt-80a\": {\n \"IP\": \"10.224.152.186\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0\n },\n \"bng-ea-agt-80b\": {\n \"IP\": \"10.224.152.187\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0\n },\n \"bng-ea-agt-80c\": {\n \"IP\": \"10.224.152.188\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0\n },\n \"bng-ea-agt-80d\": {\n \"IP\": \"10.224.152.189\",\n \"MaxJobs\": 40,\n \"NoRemote\": true,\n \"Speed\": 0\n }\n },\n \"attached_daemons\": {\n \"bng-ea-agt-65d\": {\n \"IP\": \"10.224.152.117\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.715843\n },\n \"bng-ea-agt-66a\": {\n \"IP\": \"10.224.152.118\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.045591\n },\n \"bng-ea-agt-66b\": {\n \"IP\": \"10.224.152.119\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.129501\n },\n \"bng-ea-agt-66c\": {\n \"IP\": \"10.224.152.120\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.980595\n },\n \"bng-ea-agt-66d\": {\n \"IP\": \"10.224.152.121\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.966152\n },\n \"bng-ea-agt-67a\": {\n \"IP\": \"10.224.152.122\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.545408\n },\n \"bng-ea-agt-67b\": {\n \"IP\": \"10.224.152.123\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.567656\n },\n \"bng-ea-agt-67c\": {\n \"IP\": \"10.224.152.124\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.602665\n },\n \"bng-ea-agt-67d\": {\n \"IP\": \"10.224.152.125\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.978537\n },\n \"bng-ea-agt-68a\": {\n \"IP\": \"10.224.152.126\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.515978\n },\n \"bng-ea-agt-68b\": {\n \"IP\": \"10.224.152.127\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.063932\n },\n \"bng-ea-agt-68c\": {\n \"IP\": \"10.224.152.128\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.261913\n },\n \"bng-ea-agt-68d\": {\n \"IP\": \"10.224.152.129\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.822884\n },\n \"bng-ea-agt-69a\": {\n \"IP\": \"10.224.152.130\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.858036\n },\n \"bng-ea-agt-69b\": {\n \"IP\": \"10.224.152.131\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.856615\n },\n \"bng-ea-agt-69c\": {\n \"IP\": \"10.224.152.132\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.345226\n },\n \"bng-ea-agt-69d\": {\n \"IP\": \"10.224.152.133\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.255632\n },\n \"bng-ea-agt-70a\": {\n \"IP\": \"10.224.152.134\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.546576\n },\n \"bng-ea-agt-70b\": {\n \"IP\": \"10.224.152.135\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.122202\n },\n \"bng-ea-agt-70c\": {\n \"IP\": \"10.224.152.136\",\n 
\"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.60383\n },\n \"bng-ea-agt-70d\": {\n \"IP\": \"10.224.152.137\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.3407\n },\n \"bng-ea-agt-71a\": {\n \"IP\": \"10.224.152.150\",\n \"MaxJobs\": 38,\n \"NoRemote\": false,\n \"Speed\": 27.695211\n },\n \"bng-ea-agt-72a\": {\n \"IP\": \"10.224.152.154\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.563095\n },\n \"bng-ea-agt-72b\": {\n \"IP\": \"10.224.152.155\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.593485\n },\n \"bng-ea-agt-72c\": {\n \"IP\": \"10.224.152.156\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.327879\n },\n \"bng-ea-agt-72d\": {\n \"IP\": \"10.224.152.157\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.512634\n },\n \"bng-ea-agt-73a\": {\n \"IP\": \"10.224.152.158\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.542143\n },\n \"bng-ea-agt-73b\": {\n \"IP\": \"10.224.152.159\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.394144\n },\n \"bng-ea-agt-73c\": {\n \"IP\": \"10.224.152.160\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.713577\n },\n \"bng-ea-agt-73d\": {\n \"IP\": \"10.224.152.161\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.258692\n },\n \"bng-ea-agt-74a\": {\n \"IP\": \"10.224.152.162\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.941082\n },\n \"bng-ea-agt-74b\": {\n \"IP\": \"10.224.152.163\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.259108\n },\n \"bng-ea-agt-74c\": {\n \"IP\": \"10.224.152.164\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.658989\n },\n \"bng-ea-agt-74d\": {\n \"IP\": \"10.224.152.165\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.375834\n },\n \"bng-ea-agt-75a\": {\n \"IP\": \"10.224.152.166\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.81979\n },\n \"bng-ea-agt-75b\": {\n \"IP\": \"10.224.152.167\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 27.066696\n },\n \"bng-ea-agt-75c\": {\n \"IP\": \"10.224.152.168\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 25.778721\n },\n \"bng-ea-agt-75d\": {\n \"IP\": \"10.224.152.169\",\n \"MaxJobs\": 48,\n \"NoRemote\": false,\n \"Speed\": 26.476954\n }\n },\n \"cluster_nodes_info\": {\n \"active\": 50,\n \"available_clients\": 20,\n \"available_daemons\": 38,\n \"client_and_daemon\": 58\n },\n \"daemon_cpu_available_and_free_info\": {\n \"active\": 1674,\n \"free\": 140,\n \"local\": 50,\n \"pending\": 0,\n \"total_cpu\": 1814\n },\n \"description\": {\n \"analysis\": \"free cpu available\",\n \"health\": true\n },\n \"dump_time\": \"Tue Nov 15 19:49:02 2022\",\n \"scheduler_ip\": \"bng-ea-agt-71a\"\n },\n \"schedList\": []\n }\n]\n",
"text": "Actually this is the output im getting\nEmpty array in the joined field “schedList”",
"username": "Stuart_S"
},
{
"code": "",
"text": "@Stuart_S ,Are you running the query on a MongoDB server? What version? Don’t test on mongo playground or other emulators.I am getting results on 6.0 version.\n\nScreenshot 2022-11-22 at 12.39.501920×1046 102 KB\nThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny\nYa at last i am able to join but why is the entire active scheduler joining (2 records) instead of just one matched record. The match seems to be not working in your screenshot too right?\n\nScreenshot 2022-11-23 at 8.30.35 AM1920×1200 97.8 KB\n",
"username": "Stuart_S"
},
{
"code": "",
"text": "Hi @Stuart_SWhat do you mean 2 records?In your screen there is only one object under schedList array so its 1 record.\n.\nAll the fields are projected beside temporary “keys”.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "hi @Pavel_Duchovny ,\nI mean in the matched array schedList we noticed both the schedulers are present . Shouldnt it match with scheduler_ip from sched_info so only one object should be under active schedulers(the one that matches with the sched_info)",
"username": "Stuart_S"
},
{
"code": "",
"text": "No.It brings the document that was matched. On the application side you can filter out the others.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "ya @Pavel_Duchovny ,Thanks i got how to do that .Now my last thing is , i need to match daemon_ip from sched_info to active_daemons from sched_list collection . And again active daemons are not a single entity? how can i get another array like daemon_list (similar to schedList that we joined) . Should i join them or rather just bring the whole active_daemons list and then filter them out on application side",
"username": "Stuart_S"
},
{
"code": "$addFields: {\n keys: {\n $objectToArray: '$active_deamons'\n }\n }\n",
"text": "@Stuart_S ,If that in a different query you can change the keys array to act on the “active_deamons” array:Then use it to join to the relevant field in the deamons_list.Now to be honest the queries this way are far from sufficient.You should consider changing the data model to store this data together maybe , or at least do 2 queries :Thanks",
"username": "Pavel_Duchovny"
},
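A sketch of what that second join could look like, mirroring the earlier scheduler pipeline but acting on the daemon host names (the keys of cluster_info.attached_daemons and of active_daemons); the $ifNull guards are there because not every document carries those objects, and the field/collection names are taken from the samples earlier in this thread.

```javascript
db.sched_info.aggregate([
  // Collect the attached daemon host names from the object keys.
  { $addFields: { daemonNames: { $map: {
      input: { $objectToArray: { $ifNull: ["$cluster_info.attached_daemons", {}] } },
      as: "d",
      in: "$$d.k"
  } } } },
  { $lookup: {
      from: "schedList",
      let: { names: "$daemonNames" },
      pipeline: [
        { $addFields: { keys: { $objectToArray: { $ifNull: ["$active_daemons", {}] } } } },
        { $unwind: "$keys" },
        { $match: { $expr: { $in: ["$keys.k", "$$names"] } } },
        { $project: { keys: 0 } }
      ],
      as: "daemonList"
  } },
  { $project: { daemonNames: 0 } }
]);
```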
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Should i change json format?.Aggregation lookup for nested documents | 2022-11-22T02:07:24.030Z | Should i change json format?.Aggregation lookup for nested documents | 2,168 |
null | [
"aggregation",
"node-js"
]
| [
{
"code": "offices[0] = \n{ value: {\n .... other fields,\n lawyers: Array of lawyer object\n }\n}\n",
"text": "Hi everybody.I have a firm Object, which has a offices field. This offices is an array of objects that has the following structure:How can I write an aggregation that returns just the firms with LESS than 5 lawyers in total? Take in consideration that a firm can have multiple offices and these offices can have multiple lawyers.\nThank you very much !",
"username": "Teodor_Aspataritei"
},
{
"code": "$map$cond",
"text": "Hi @Teodor_Aspataritei - Welcome to the community.Wondering if you still require assistance with this? If so, would you be able to provide the full sample document as well as the expected output?In the meantime, based off the initial description, perhaps one of the below operators may help although it’s difficult to say without the full structure of the document(s) and whether or not they all have the same structure:Regards,\nJason",
"username": "Jason_Tran"
}
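For what it's worth, a sketch along the lines of the operators mentioned above, assuming the structure shown in the question (offices[i].value.lawyers) and a hypothetical firms collection — it sums the per-office lawyer counts inside a $match so only firms with fewer than 5 lawyers in total come back:

```javascript
db.firms.aggregate([
  { $match: { $expr: {
      $lt: [
        // Total lawyers across all offices of the firm.
        { $sum: { $map: {
            input: { $ifNull: ["$offices", []] },
            as: "office",
            in: { $size: { $ifNull: ["$$office.value.lawyers", []] } }
        } } },
        5
      ]
  } } }
]);
```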
]
| Count total number of objects inside multiple fields | 2022-11-15T16:40:01.634Z | Count total number of objects inside multiple fields | 1,325 |
null | [
"node-js",
"containers"
]
| [
{
"code": "`30-mongo-1 | 2022-11-22T12:49:45.364+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock\n30-mongo-1 | 2022-11-22T12:49:45.364+0000 I NETWORK [listener] Listening on 0.0.0.0\n30-mongo-1 | 2022-11-22T12:49:45.364+0000 I NETWORK [listener] waiting for connections on port 27017\n30-iap-1 | Connecting to Database...\n30-mongo-1 | 2022-11-22T12:49:47.138+0000 I NETWORK [listener] connection accepted from 172.21.0.4:44314 #1 (1 connection now open)\n30-mongo-1 | 2022-11-22T12:49:47.162+0000 I NETWORK [conn1] received client metadata from 172.21.0.4:44314 conn1: { driver: { name: \"nodejs\", version: \"3.6.3\" }, os: { type: \"Linux\", name: \"linux\", architecture: \"x64\", version: \"5.10.124-linuxkit\" }, platform: \"'Node.js v12.20.1, LE (unified)\" }\n30-iap-1 | Connection established.`\n\nso the last line says `Connection established`, which means mongo db is running and up for connections.\nThen after some lines , i see some error as \n\n\n `iap-1 | Error creating database service_configs!\n iap-1 | MongoError: command listCollections requires authentication\n iap-1 | 2022-11-22 12:49 +00:00: MongoError: command find requires authentication`\nversion: \"3.7\"\nservices:\n iap:\n # my app details\n mongo:\n image:mongo:latest\n environment:\n MONGO_INITDB_ROOT_USERNAME: admin\n MONGO_INITDB_ROOT_PASSWORD: <ADMIN_PW>. # i gave admin here\n ports:\n - xxx:27017\n \n\n",
"text": "I am trying to start an application called inential pronghorn and a mongo db related to it.\nthe containers are started and now i see this error in the logsI am not able to figure out, if is related to my database or is coming from my application itself.this is what my docker compose looks likeis this the right way of adding password to mongo db?\nDo i need to create users and roles specifically?",
"username": "insta_girl"
},
{
"code": "",
"text": "Check this thread.It may help as your error is related to authentication",
"username": "Ramachandra_Tummala"
}
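The error itself ("command listCollections requires authentication") usually just means the application is connecting without credentials. A minimal sketch of what the connection could look like from the Node.js side — the host, database name, and the <ADMIN_PW> placeholder are taken from the compose file above, and authSource=admin matches where MONGO_INITDB_ROOT_USERNAME is created:

```javascript
const { MongoClient } = require("mongodb");

// 'mongo' is the compose service name; special characters in the password must be URL-encoded.
const uri = "mongodb://admin:<ADMIN_PW>@mongo:27017/?authSource=admin";
const client = new MongoClient(uri);

async function main() {
  await client.connect();
  // The failing call from the log, now authenticated (database name is a placeholder).
  const collections = await client.db("iap_db").listCollections().toArray();
  console.log(collections.map(c => c.name));
  await client.close();
}

main().catch(console.error);
```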
]
| I am getting this error when mongo is trying to connect to my application | 2022-11-22T13:34:58.489Z | I am getting this error when mongo is trying to connect to my application | 2,551 |
null | [
"node-js",
"replication",
"sharding",
"flexible-sync"
]
| [
{
"code": "",
"text": "Hi everyone,I have an application written in Node with Electron. Each client installs it on his computer alongside a MongoDB database. Each database has 3 collections (coll_A, coll_B, and coll_C).So, if we have 20 clients, we’ll have 20 different databases. We are building a dashboard that will show metrics about all clients and this dashboard will have access to a central remote MongoDB database hosted on the cloud, for example.I need infrastructure or some feature in MongoDB that will allow me to merge all documents from the coll_A of all clients into the coll_A located in the central database. It’s important to note that the dataflow is unidirectional (n clients → remote server)Can I solve this problem with sharding or replica sets? As far I understood from the docs, it is not possible unless I am missing something If there is no such tool, the best way would be to write an algorithm to merge from clients, right?",
"username": "Paulo_Henrique_Favero_Pereira"
},
{
"code": "I need infrastructure or some feature in MongoDB that will allow me to merge all documents from the coll_A of all clients into the coll_A located in the central database.",
"text": "Hi @Paulo_Henrique_Favero_Pereira welcome to the community!If I understand correctly, your use case is:Is this accurate? You also mention:I need infrastructure or some feature in MongoDB that will allow me to merge all documents from the coll_A of all clients into the coll_A located in the central database.Does this mean that you want a copy of all client’s data in the central server?If this is the case, then I don’t think any built-in MongoDB feature is suitable for the use case.I’m wondering, if having a central server is the main use case, why do you need to install a local MongoDB on the client at all? Is it possible to just use a single centralized database and all client connect to it instead?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi,Thanks for the reply.Does this mean that you want a copy of all client’s data in the central server?Yes, this is what we want.I’m wondering, if having a central server is the main use case, why do you need to install a local MongoDB on the client at all? Is it possible to just use a single centralized database and all client connect to it instead?Unfortunately, each client needs its own local database. I agree that this would be the best scenario.I am almost achieving my goal with Debezium and Kafka MongoDb Sink Connector. With Debezium, I have a CDC pipeline and Kafka streaming all my data to the global database.\nFor now, my setup is just failing for update operations. I am going to create a new topic with all the information regarging this issue.",
"username": "Paulo_Henrique_Favero_Pereira"
}
]
| Unify Multiple Local Databases in a Remote Central Database | 2022-11-03T21:01:43.421Z | Unify Multiple Local Databases in a Remote Central Database | 2,006 |
null | [
"monitoring"
]
| [
{
"code": "iotopWTCheck.tThreadWTJourn.FlusherWTCheck.tThreadWTJourn.Flusher",
"text": "My MongoDB has been using so much io cpu recently. I checked iotop and I found these two processes using io cpu: WTCheck.tThread and WTJourn.Flusher. How can I reduce this io cpu? Will adding more RAM to the WiredTiger cache work? WTCheck.tThread peaks at around 99% CPU and WTJourn.Flusher uses around 1-5%. I’m using the EX4 filesystem but my host doesn’t offer the ability for me to change to XFS",
"username": "mental_N_A"
},
{
"code": "",
"text": "Hi , Were you able to find the solution. Facing similar issue in my env as well.",
"username": "venkataraman_r"
},
{
"code": "",
"text": "nope sorryI would assume it was because I had too many documents in the cache. Switching to XFS would’ve helped and allocating more RAM to the cache.",
"username": "mental_N_A"
}
]
| High IO CPU usage by WTCheck.tThread and WTJourn.Flusher | 2021-02-01T19:28:15.534Z | High IO CPU usage by WTCheck.tThread and WTJourn.Flusher | 3,706 |
null | [
"node-js",
"production",
"typescript"
]
| [
{
"code": "mongodb",
"text": "The MongoDB Node.js team is pleased to announce version 4.12.1 of the mongodb package!This version includes a fix to a regression in our monitoring logic that could cause process crashing errors that was introduced in v4.12.0.If you are using v4.12.0 of the Node driver, we strongly encourage you to upgrade.We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "Bailey_Pearson"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB NodeJS Driver 4.12.1 Released | 2022-11-23T18:54:37.774Z | MongoDB NodeJS Driver 4.12.1 Released | 1,867 |
null | [
"java",
"morphia-odm"
]
| [
{
"code": "private static final boolean USE_SLF4J = false\n<logger name=\"org.mongodb.driver.client\" level=\"OFF\" />\nLogger.getLogger(\"org.mongodb.driver\").setLevel(Level.OFF);\n",
"text": "Hello,\n(Java, Maven, Minecraft Paper 1.19.2)Since i’ve upgraded mongodb-driver-sync to 4.7.2 my console have been spamming logs fromBefore i could go in the Loggers file and edit theand that fixed my problem.\nNow i cant do that anymore.What i currently have installedI’ve installed those logging things but it still does not hide(not sure if im doing right) i read the documentation also but i don’t get it.Tried to use this also but notingI don’t really want to downgrade again just to make the console hide.",
"username": "Zekhap_N_A"
},
{
"code": "",
"text": "Couple of questions:",
"username": "Jeffrey_Yemin"
},
{
"code": " .applyToServerSettings(builder -> {\n builder.addServerMonitorListener(new MongoListener(this));\n });\nMongoClientSettings.Builder options = MongoClientSettings.builder()\n.disableLogging(true); //someting like this\n",
"text": "I actually had the mongo-java-driver: 3.12.4 installed. But now when i revert the changes it still prints the logging. Must be java then since i updated that also when i updated the drivers. But i used this to hide the logging HereSince i’m using the Listeners to log the connections i don’t really want the drive to print in the console that it connects to the server. So yes just the driver. I tried to use slf4j before but whatever i did, nothing happend.But i do wish there was like an option to just add a line and the drive wont print anything in the consoleI hope you understand what i mean ",
"username": "Zekhap_N_A"
},
{
"code": " <Logger name=\"org.mongodb.driver\" level=\"off\" additivity=\"false\">\n <AppenderRef ref=\"Out\"/>\n </Logger>\n",
"text": "I should have asked which logging system that you are actually using. But given what you have installed, it sounds like Log4J. Assuming that, what you should do isIf you’re using a different logger than Log4J, it will be a bit different, but same idea.Good luck,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Remove any other SLF4J bridge library. From what I can tell, that would be logback-classicI removed the logback-classic.Configure log4j to disable all driver logging. Looking at Log4J documentation, that would look something like thisI looked at the Log4J docs, i tested with the files log4j.xml and log4j2.xml inside the src/resource/\nI compiled it to a .jar file after and the .xml file was there atleast, but it did not work either.\nDo i need to write some code to make it work / did i put the files at wrong place?This is the logging dependency’s i have now.When i looked through the minecraft paper .pom files they had the libaries",
"username": "Zekhap_N_A"
},
{
"code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Configuration status=\"INFO\">\n <Loggers>\n <Logger name=\"org.mongodb.driver\" level=\"off\" additivity=\"false\">\n <AppenderRef ref=\"Out\"/>\n </Logger>\n </Loggers>\n</Configuration>\n",
"text": "Okey everything works until i add the minecraft paper inside the .pom.\nThen it stops.But… if i use the System.out.println(“test”); it has edited my output atleast\nnot it is 00:00:00 INFO]: [Plugin] [STDOUT] TEST\nBefore it was only 00:00:00 INFO]: [Plugin] TESTMy log4j2.xml right nowI tested with",
"username": "Zekhap_N_A"
}
]
| Hide mongodb logging | 2022-11-20T17:02:01.178Z | Hide mongodb logging | 3,280 |
[]
| [
{
"code": "",
"text": "I have been working on developing a way to store tabular data in mongodb and have not been able to come up with a solution. I am building a project-management system where projects can be edited by different users before being approved. I want each project to have a section where user’s can put their responses in a table like this:\nScreenshot 2022-11-21 at 15.36.191999×891 125 KB\nThe caveat is that I want users to be able to add columns and rows to each of these tables. I have tried modelling this functionality with a one-many-many relationship between projects, “question groups” (each question group representing each one of the 5 tables\", and “questions” (each question being the column headers\". The problem is when I want an “admin” user to be able to add/edit/remove the column headers for every single project. Is there a better way to do this?",
"username": "Thomas_Crosbie-Walsh"
},
{
"code": "{\n_id : doc1,\nquestions:[{ qId :1question : \"What ....\", Response: ... },\n...\n qId : n question : \"What ....\", Response: ... }]\n...\n}\n",
"text": "Hi @Thomas_Crosbie-Walsh ,What would be the problem to keep each row and its reaponse/additional information in its own document:This is flexible model.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
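Regarding the admin concern from the original question — with the row-per-question model suggested above, a column header can be renamed everywhere with a single update. A sketch with hypothetical collection and field names:

```javascript
// Rename the header of question 3 across every project document.
db.projects.updateMany(
  {},
  { $set: { "questions.$[q].question": "New column header" } },
  { arrayFilters: [ { "q.qId": 3 } ] }
);
```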
{
"code": "",
"text": "@Pavel_Duchovny Thank you for your suggestion, I have rebuilt my schema around this and it works well.",
"username": "Thomas_Crosbie-Walsh"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Model for user-editable, multi-tabular data | 2022-11-21T15:54:01.752Z | Model for user-editable, multi-tabular data | 1,428 |
|
null | [
"dot-net"
]
| [
{
"code": "",
"text": "Hi there,\nI need the ability for a customer to have the realm local only. Then subscribe and sync into the cloud. If not required anymore the Customer can go local again and unsubscribe from the sync. The sync method is flexible sync. How can it be archived? Does anyone have any experience with this procedure?Thank’s in advanced",
"username": "Bruno_Zimmermann"
},
{
"code": "",
"text": "Hi @Bruno_Zimmermann unfortunately there is no automated way to do what you are trying to do with flexible sync, and your best option would be to do the conversion yourself, moving objects from one realm to another.That said, we have opened an issue on Github (Add support for realm sync disconnected configuration type · Issue #3110 · realm/realm-dotnet · GitHub) that you can follow and that could help in your case when implemented. The main idea there is to provide a configuration (“disconnected”) that allows to open synced realm without synchronisation.\nIn that case your flow would be:There are two issues with this approach though:For those reason I still think that doing it manually is still the best option in your case.",
"username": "papafe"
}
]
| Move from local to synced and back for flexible sync | 2022-11-22T07:13:58.223Z | Move from local to synced and back for flexible sync | 1,159 |
null | [
"backup",
"ops-manager"
]
| [
{
"code": "",
"text": "Hi,I need to take incremental backup for our system. I searched on the internet and there are so many post for incremental backup. There are some github project or some open source project like percona.Could you please help us?\nWhat is the best way to take incremental backup? There are 4-5 post on this community but it’s not help us. I think --oplog options is for PITR. --oplog option will not reduce our backup size right?Is there any way to take incremental backup on Atlas or OPS manager ? I checked official mongodb web site. https://www.mongodb.com/docs/manual/core/backups/ . Below Backup With Atlas section, Legacy backup is taking incremental backup but deprecated. Thats why i want to ask you.Thanks,\nKadir.",
"username": "Kadir_USTUN"
},
{
"code": "",
"text": "Someone has any advice please ?",
"username": "Kadir_USTUN"
}
]
| Mongodb Incremental Backup | 2022-11-21T15:07:19.770Z | Mongodb Incremental Backup | 2,107 |
null | [
"installation"
]
| [
{
"code": "$ cat /etc/yum.repos.d/mongodb-org-5.0.repo\n[mongodb-org-5.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/5.0/aarch64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-5.0.asc\n$ sudo yum search mongod-org\nLoaded plugins: extras_suggestions, langpacks, priorities, update-motd\nWarning: No matches found for: mongod-org\nNo matches found\n",
"text": "Unable to install mongodb 5.0 on Amazon Linux 2 on arm64 architecture. I am using t4g.medium instance for this.\nThis is repo config file:Searching for mongodb package gives this:There is no info on installing arm64 on amazon linux2 while it is clearly stated in the docs that its supported.",
"username": "Himanshu"
},
{
"code": "",
"text": "You spelled it wrong.",
"username": "Jack_Woehr"
},
{
"code": "$ sudo yum install -y mongodb-org\nLoaded plugins: extras_suggestions, langpacks, priorities,\n : update-motd\namzn2-core | 3.7 kB 00:00\namzn2extra-docker | 3.0 kB 00:00\namzn2extra-kernel-5.10 | 3.0 kB 00:00\nmongodb-org-5.0 | 2.5 kB 00:00\nmongodb-org-5.0/primary_db | 77 kB 00:00\nNo package mongodb-org available.\nError: Nothing to do\n",
"text": "Apologies, I spelled it out wrong here in this question. Here is a more detailed output which I copy pasted from the docs.",
"username": "Himanshu"
},
{
"code": "",
"text": "I just took a look at the docs for installing MongoDB 5 Community Edition on Amazon Linux 2.There is clearly an omission regarding the arm64 architecture. A sentence is incomplete.A MongoDB staffer will have to help you with this. @Stennie_X , do you have any information on this install process?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Subscribing, I’m also having the same issue.",
"username": "Donald_Frederick"
},
{
"code": "",
"text": "5 posts were split to a new topic: Problem installing MongoDB 6.0 on Amazon Linux 2",
"username": "Stennie_X"
},
{
"code": "sudo yum clean metadata\n[mongodb-org-6.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/6.0/aarch64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc\n sudo yum install -y mongodb-org\n",
"text": "Same issue.\nI’m on a t4g.xlarge on AWS.\nSolution:Then use this:run:Not sure that will help you",
"username": "Andrea_Ferrari"
}
]
| Install mongodb-org 5.0 on Amazon Linux 2 aarch64 architecture | 2022-05-21T05:55:11.331Z | Install mongodb-org 5.0 on Amazon Linux 2 aarch64 architecture | 6,920 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "I have 2 collections (orders containing products Id and the date of the order (ISODate)) and customers that include the order as a field reference to the order collection.\nAt first, I made a lookup operator to link between the two collections, then I wanted to group the result by month or maybe year, and an error msg:“PlanExecutor error during aggregation:: caused by:: can’t convert from BSON type array to Date” So, I tried with a date as a Date type but I have the same error",
"username": "Malika_Taouai"
},
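A likely cause, guessed only from the error text since the actual pipeline is not shown here, is that $lookup always produces an array, so a later date operator receives an array instead of a date. A minimal sketch of unwinding before grouping by month, with hypothetical collection and field names:

```javascript
db.customers.aggregate([
  { $lookup: { from: "orders", localField: "order", foreignField: "_id", as: "order" } },
  // $lookup output is an array; unwind it before applying date operators
  { $unwind: "$order" },
  { $group: {
      _id: { year: { $year: "$order.created_at" }, month: { $month: "$order.created_at" } },
      orders: { $sum: 1 }
  } }
]);
```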
{
"code": "",
"text": "Hi @Malika_Taouai ,Can you share sample documents and the aggregation used?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi ^^,\nI accidentally post the same post twice, so I mentioned you in the right one.\nThank…",
"username": "Malika_Taouai"
}
]
| Lookup and group operations | 2022-11-22T09:49:15.931Z | Lookup and group operations | 867 |
null | [
"node-js"
]
| [
{
"code": "",
"text": "My Code: hatebinMy Error: “(node:28581) UnhandledPromiseRejectionWarning: MongoError: Cannot use a session that has ended”This is for my discord bot, but on my pc everything works fine but on my server I get this error.\nAnd I have the exact same mongoose versionHelp pls",
"username": "tomson"
},
{
"code": "mongoose.connection.close();mongoose.connection.close();finallycatchtry",
"text": "Hi @tomson welcome to the community.If I have to guess, it’s due to the mongoose.connection.close(); statement in the function you posted.Mongoose and the MongoDB node driver encourages the use of a global variable to connect once to the database during the lifetime of the application, and discourage the practice of connect-disconnect for every operation. This is because the MongoDB driver keeps a connection pool and will create/reuse connections as needed. Note that this is the reason why the connection object is a global object in Mongoose.Could you try removing the mongoose.connection.close(); statement and see if the issue persists?I would also replace the finally block there with catch to grab any errors in the try block.Best regards,\nKevin",
"username": "kevinadi"
},
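To illustrate the connect-once pattern Kevin describes, here is a minimal sketch for a bot process; the model name and schema are invented for the example and are not from the original code:

```javascript
const mongoose = require('mongoose');

// Hypothetical model – the original thread does not show its schema
const GuildSettings = mongoose.model('GuildSettings',
  new mongoose.Schema({ guildId: String, prefix: String }));

// Connect once when the bot starts; the driver keeps a connection pool alive
async function initDatabase(uri) {
  await mongoose.connect(uri);
  console.log('MongoDB connected');
}

// Reuse the global connection in every command handler and never call
// mongoose.connection.close() between operations
async function getGuildSettings(guildId) {
  try {
    return await GuildSettings.findOne({ guildId });
  } catch (err) {
    console.error('DB query failed', err);
    return null;
  }
}
```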
{
"code": "",
"text": "Thank you for the help brother ",
"username": "Iavor_Kamenov"
},
{
"code": "",
"text": "@kevinadi Thanks lot",
"username": "Sanjay_Makwana"
}
]
| MongoError: Cannot use a session that has ended | 2020-09-07T10:41:58.949Z | MongoError: Cannot use a session that has ended | 14,860 |
[
"schema-validation"
]
| [
{
"code": "",
"text": "I’d like to use validation for an object that has several variants (polymorphic) it would seem that oneOf is the place to implement this (by my understanding), but I don’t see any examples of that in the documentation.\n\nvalidation-q845×234 38.6 KB\nCould someone fill me in on this please?",
"username": "Ilan_Toren"
},
{
"code": "oneOf",
"text": "Hello @Ilan_Toren ,To learn more about oneOf, please go through Video: Schema Validation and read below threads on oneOf usage.https://www.mongodb.com/community/forums/community/forums/t/multiple-json-schemas-with-oneof/97856?u=tarun_gaurIn case you have queries or issues with this, please feel free to reach out with below detailsRegards,\nTarun",
"username": "Tarun_Gaur"
},
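A small hedged example of what such a oneOf validator can look like; the collection, the two variants, and every field name below are made up for illustration rather than taken from the screenshot:

```javascript
db.createCollection("shapes", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["kind", "payload"],
      properties: {
        kind: { enum: ["circle", "rectangle"] },
        // The polymorphic subdocument must match exactly one branch
        payload: {
          oneOf: [
            { bsonType: "object", required: ["radius"],
              properties: { radius: { bsonType: "double" } },
              additionalProperties: false },
            { bsonType: "object", required: ["width", "height"],
              properties: { width: { bsonType: "double" }, height: { bsonType: "double" } },
              additionalProperties: false }
          ]
        }
      }
    }
  }
});
```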
{
"code": "",
"text": "Thanks for answering. Looks good",
"username": "Ilan_Toren"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Schema validation for multiple types of subdocuments | 2022-11-20T20:15:34.238Z | Schema validation for multiple types of subdocuments | 2,660 |
|
null | []
| [
{
"code": "",
"text": "Hi all,\nI keep getting index suggestion alerts for my cluster, and when I check the Performance Advisor, there is nothing there.I’ve used Performance Advisor before to set up indices based on such alerts, but in the past few days my email inbox is flooded with index suggestion alerts, and no corresponding index suggestion in the Atlas UI. I think it’s because the suggestion gets “revoked” quickly - the alert gets opened then closed within just a few minutes, and I get 2 emails each time.I have some scheduled tasks that do heavy lifting - my guess is that as these run, they trigger index suggestions, but those suggestions get closed/revoked once the task is done and the traffic reverts to normal. Completely removing the suggestions from the UI doesn’t seem like the best idea? I’ve tried to load performance advisor immediately after the task runs but I can’t catch them.Some improvement ideas for this feature:Any ideas on how to access these disappearing suggestions so I can act on them?",
"username": "Sitati"
},
{
"code": "",
"text": "Hi @Sitati,Thank you for your question and I’m sorry you’re experiencing this problem. We recently rolled out an update to Atlas to make the Performance Advisor alerts more deterministic. However, if you’re seeing the opposite behavior within the last few days, we may need some additional investigation. Could you please open an Atlas support case including your project ID so we can further investigate?Thank you,\nFrank",
"username": "Frank_Sun"
},
{
"code": "",
"text": "I don’t think my plan lets me raise support cases, I get a screen saying “not enabled for support” when I try to raise a case.This alert spam is still going on all the time, and it’s clearly some flawed logic.I tweaked the alert so it should only come if the conditions for index suggestions are consistent for 10 hours, but I still get a flood of emails showing the alert condition flipping between OPEN and CLOSED every few minutes, which doesn’t make logical sense.\nScreenshot 2022-11-23 at 10.57.18@2x1906×982 227 KB\nI’ll email this to the support email, could you help in getting this followed up on? Looks very much like a bug report rather than a user support case.",
"username": "Sitati"
}
]
| Index suggestions disappear before I get a chance to view them | 2022-11-09T07:07:59.913Z | Index suggestions disappear before I get a chance to view them | 1,401 |
null | [
"sharding",
"upgrading"
]
| [
{
"code": " sharding version: {\n \t\"_id\" : 1,\n \t\"minCompatibleVersion\" : 5,\n \t\"currentVersion\" : 6,\n \t\"clusterId\" : ObjectId(\"57cd1d5d6303e86ab8f6a764\")\n }\ncurrentVersion",
"text": "Background:\nHi, we have upgraded our sharded cluster for years without any problems. Recently we updated from 4.4 to 5.0 and it seemed to work fine (though we only let it burn in for a day or 2). Last night we tried to do the same exact steps to upgrade from 5.0.14 to 6.0.3. It looked like everything was smooth, but we ran into some issues (I’m going to create a separate post about that). We successfully downgraded our db cluster back to 5.0.14. We never set the FCV to 6.0 so we didn’t need to undo that.Question:\nWhen I run sh.status(), I see the following:Is the currentVersion supposed to be 6? Is there something I need to do to downgrade that to 5? I never increased the FCV to 6.0.",
"username": "AmitG"
},
{
"code": "",
"text": "Hi @AmitG and welcome to the MongoDB community forum!!The two variables $minCompatibleVersion and $currentVersion are deprecated schema version versions for config server and they are different from Feature Compatibility Version.\nWith the introduction of of Feature Compatibility Version, these variables are no longer used and are scheduled to be removed at a later date.\nThe server ticket https://jira.mongodb.org/browse/SERVER-68889 mentions the removal of these variables in the near future.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sharding currentVersion after downgrade from 6.0.3 to 5.0.14 | 2022-11-20T16:02:08.148Z | Sharding currentVersion after downgrade from 6.0.3 to 5.0.14 | 2,133 |
null | [
"production",
"php"
]
| [
{
"code": "examples/tools/mongodbcomposer require mongodb/mongodb:1.15.0\nmongodb",
"text": "The PHP team is happy to announce that version 1.15.0 of the MongoDB PHP library is now available. Note that version 1.14.0 has been intentionally skipped to restore version parity between the library and extension.Release HighlightsNew examples/ and tools/ directories have been added to library repository, which contain code snippets and scripts that may prove useful when writing or debugging applications, respectively. These directories are intended to supplement the library’s existing documentation, and will be added to over time.Various backwards compatible typing improvements have been made throughout the library. Downstream impact for these changes are discussed in UPGRADE-1.15.md. Additionally, Psalm has been integrated for static analysis going forward.This release upgrades the mongodb extension requirement to 1.15.0.A complete list of resolved issues in this release may be found in JIRA.DocumentationDocumentation for this library may be found in the PHP Library Manual.InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.",
"username": "jmikola"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB PHP Library 1.15.0 Released | 2022-11-23T04:51:31.321Z | MongoDB PHP Library 1.15.0 Released | 2,031 |
null | [
"production",
"php",
"field-encryption"
]
| [
{
"code": "BackedEnum::from()MongoDB\\BSON\\Binary::__construct()$typeBinary::TYPE_GENERICpecl install mongodb-1.15.0\npecl upgrade mongodb-1.15.0\n",
"text": "The PHP team is happy to announce that version 1.15.0 of the mongodb PHP extension is now available on PECL.Release HighlightsTentative return types have been added to interfaces throughout the extension. Applications that cannot declare a compatible return type in their implementations will need to specify a ReturnTypeWillChange attribute on each method in order to silence deprecation notices on PHP 8.1+.This release adds several new methods to MongoDB\\Driver\\ClientEncryption, which facilitate key management operations on the key vault collection. These methods mirror the existing APIs found in the MongoDB shell.Backed enumerations are now supported during BSON encoding and will serialize as their case value. Round-tripping a backed enum through BSON will require special handling (e.g. converting the value to a case using BackedEnum::from() ). Pure enums, which have no backed cases, cannot be directly serialized. Enums are prohibited from implementing MongoDB\\BSON\\Unserializable and MongoDB\\BSON\\Persistable, but may implement MongoDB\\BSON\\Serializable. MongoDB\\BSON\\Binary::__construct() no longer requires a $type parameter and will default to Binary::TYPE_GENERIC .This release upgrades our libbson and libmongoc dependencies to 1.23.1. The libmongocrypt dependency has been upgraded to 1.5.2.A complete list of resolved issues in this release may be found in JIRA.DocumentationDocumentation is available on PHP.net.InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are attached to the GitHub release notes.",
"username": "jmikola"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB PHP Extension 1.15.0 Released | 2022-11-23T03:56:58.238Z | MongoDB PHP Extension 1.15.0 Released | 7,568 |
null | [
"node-js",
"connecting",
"containers"
]
| [
{
"code": "",
"text": "IN local Db connection is working perfectly but in docker error (Error in DB connection MongoParse Error : URI malformed, cannot be parsed) in the DB connection",
"username": "Nidhi_Savaliya"
},
{
"code": "host.docker.internal/etc/hosts",
"text": "Hi @Nidhi_Savaliya and welcome to MongoDB community forum!!It would be helpful to understand the issue further if you could help with the following details:As a general tip, if you are trying to connect to the localhost, please make sure if the localhost name has the host.docker.internal in the /etc/hosts file for the linux systemBest Regards\nAasawari",
"username": "Aasawari"
}
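A hedged sketch of that tip from the Node.js side (URI, credentials, and database name are placeholders; URL-encoding the password is shown only because unescaped special characters are a common cause of “URI malformed” errors, not something confirmed in this thread):

```javascript
const { MongoClient } = require('mongodb');

// host.docker.internal resolves to the Docker host from inside the container
// (on Linux it may need --add-host host.docker.internal:host-gateway or an /etc/hosts entry)
const user = encodeURIComponent('appUser');
const pass = encodeURIComponent('p@ss/word');   // special characters must be URL-encoded
const uri = `mongodb://${user}:${pass}@host.docker.internal:27017/appdb?authSource=admin`;

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  console.log(await client.db('appdb').command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);
```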
]
| We are getting issue with docker to connect with MongoDb | 2022-11-21T08:58:30.162Z | We are getting issue with docker to connect with MongoDb | 1,396 |
null | [
"mongodb-shell"
]
| [
{
"code": "",
"text": "I just discovered the shell in MongoDB Docs. Awesome.! A lot of space to work. Visually pleasant to work with . Color! . Can I get that on my machine? Is that mongosh 6?",
"username": "Email_Me"
},
{
"code": "",
"text": "The GUI for MongoDB including mongosh is Compass",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "",
"username": "SourabhBagrecha"
}
]
| Web shell in Mongodb docs-awesome-can I get it? | 2022-11-22T22:54:57.575Z | Web shell in Mongodb docs-awesome-can I get it? | 1,177 |
null | []
| [
{
"code": "",
"text": "is there a way to block writes to mongodb but allow reads and deletes if mongodb makes it past a certain memory threashold EX: 95%. Reason is if client keeps writing to the db, and db becomes full mongodb replicas start crashing and cluster can be difficult to recover",
"username": "Daniel_Bernstein1"
},
{
"code": "",
"text": "Hi @Daniel_Bernstein1 welcome to the community!is there a way to block writes to mongodb but allow reads and deletes if mongodb makes it past a certain memory threasholdThis is an intriguing question. What do you mean by memory, exactly? Do you mean disk space, or system RAM?It’s worth mentioning that to WiredTiger, deletes also mean writes.Reason is if client keeps writing to the db, and db becomes full mongodb replicas start crashing and cluster can be difficult to recoverSo do you have a plan for what to do with the incoming writes? Do you plan to just drop the writes? Sorry I’m a little confused on the use case here: if the database is supposed to be read-only to begin with, why writes were allowed in the first place?Having said that, since you’re running a replica set, if a replica set do not have a majority of voting nodes online, it will go into read-only mode (no deletes though, since these are also writes), so perhaps you can deliberately shut off some nodes to induce this read-only state? There’s no built-in MongoDB method to determine the disk space remaining in a server though, so you’ll need to craft an external trigger that monitors this event.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi , thanks for replying I mean disk space.Yes I want to drop incoming writes if disk space is about to get full. We ran into an issue where mongodb client kept writing to the db until full and this resulted in replicas crashing and replica set being difficult to recovering.I can have an external service that monitors the disk usage but how can the service then put the db into a state that allows only reads and deletes",
"username": "Daniel_Bernstein1"
},
{
"code": "",
"text": "Hi @Daniel_Bernstein1Sounds to me like you need a quota system for MongoDB. Unfortunately I don’t think such a feature exists nor in the works.Off the top of my head, this is probably best implemented in the application layer at the moment. That is, you may be able to have the application communicate to a service that monitors disk space availability, and stop any inserts when a threshold is reached.Having said that, if this is an important feature for your use case, please do provide a feedback in the MongoDB Feedback Engine summarizing your idea and use case. Ideas in this feedback engine is constantly being monitored by the development team, and is used to prioritize upcoming work.Best regards\nKevin",
"username": "kevinadi"
}
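To make the application-layer idea concrete, here is a rough sketch of gating inserts on the disk usage the server reports; the threshold, the collection names, and the availability of the fsUsedSize/fsTotalSize fields on your server version are all assumptions:

```javascript
// Ask the server how full the filesystem hosting the dbPath is
async function writesAllowed(client, maxUsedRatio = 0.95) {
  const stats = await client.db('admin').command({ dbStats: 1 });
  return stats.fsUsedSize / stats.fsTotalSize < maxUsedRatio;
}

// Application-level "quota": refuse inserts above the threshold,
// while reads and deletes keep going through the normal code paths
async function safeInsert(client, doc) {
  if (!(await writesAllowed(client))) {
    throw new Error('Writes disabled: disk usage above threshold');
  }
  return client.db('appdb').collection('events').insertOne(doc);
}
```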
]
| Block writes to the db but allow deletes | 2022-11-16T10:27:52.812Z | Block writes to the db but allow deletes | 1,040 |
null | []
| [
{
"code": "",
"text": "We have set up Mongo DB replica and in the app server we are getting following logs , due to which mongo Db record saving operation seems failing\nCache Reader No Keys found for HMAC that is valid for timePlease help us to provide solution to fix this issue",
"username": "prashant_sinha"
},
{
"code": "",
"text": "Any one who faced this issue earlier, pls advise solution to fix this issue",
"username": "prashant_sinha"
},
{
"code": "",
"text": "Hi @prashant_sinha welcome to the community!What’s your MongoDB version, and what’s the replica set topology? I believe a similar-looking issue was fixed in SERVER-40535. If you’re not using the latest version in the MongoDB series you’re using (4.2.23. 4.4.17, 5.0.13, or 6.0.2), please try to upgrade and see if the issue persists.If you’re not using one of the supported series (e.g. 3.6 or older), then I would suggest to upgrade to a supported version, since unsupported versions will not get any more updates/fixes.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi\nMongo DB version is 4.2.18 . Is it required to be upgraded to resolve this issueRegards\nPrashant",
"username": "prashant_sinha"
},
{
"code": "",
"text": "Hi @prashant_sinhaIt’s recommended to upgrade to the newest version in the series you’re using regardless.If you’re still seeing this issue after upgrading to 4.2.23 then we’ll know for sure that the issue you’re seeing is new and not the same one fixed in the SERVER ticket, even though superficially it looks similar.Best regards\nKevin",
"username": "kevinadi"
}
]
| Cache Reader No Keys found for HMAC that is valid for time | 2022-11-11T07:21:01.596Z | Cache Reader No Keys found for HMAC that is valid for time | 6,422 |
null | []
| [
{
"code": "",
"text": "Hi allI am new to MongoDB and Postman and wanted to test out some stuff.Followed the following instructions to setup Postman:MongoDB's new Data API is a great way to access your MongoDB Atlas data using a REST-like interface. In this article, we will show you how to use Postman to read and write to your MongoDB Atlas cluster.but I do receive alwayse a 400 bad request with following error:\n“error”: “mime: expected token after slash”,does someone have any clue what the issue is?Thank you very much for support!",
"username": "Sasa_Kelebuda"
},
{
"code": "",
"text": "try going to environments within Postman and re-save the variables. This worked for me.Make sure that the environment you use when testing the API is “Data API”",
"username": "Luke_Nascimento"
},
{
"code": "",
"text": "or manually enter the values in the headers section of the test",
"username": "Luke_Nascimento"
},
{
"code": "",
"text": "did work for me! thank you very much Luke!",
"username": "Sasa_Kelebuda"
}
]
| Postman "error": "mime: expected token after slash", when calling {{URL_ENDPOINT}}/action/insertOne | 2022-11-21T22:39:51.354Z | Postman “error”: “mime: expected token after slash”, when calling {{URL_ENDPOINT}}/action/insertOne | 2,636 |
null | [
"python",
"indexes",
"performance"
]
| [
{
"code": "{\n _id: <unique_id>\n name: <some str, under 256 chars, not necessarily unique>\n field_a: <some str, under 256 chars, not necessarily unique> \n field_b: <some str, under 256 chars, not necessarily unique>\n attributes: <array of strings - each under 32 chars, max size of array typically under 4 elements>\n ... a bunch of other fields\n}\nfield_a, field_b and _idname\"a\"attributesnamecommand: find { find: \"COLL\", filter: { name: { $regex: \"part\" }}, sort: { name: 1 }, projection: { field_a: 1, field_b: 1, _id: 1 }\n\ncommand: find { find: \"COLL\", filter: { name: { $regex: \"part\" }, attributes: \"a\" }, sort: { name: 1 }, projection: { field_a: 1, field_b: 1, _id: 1 }\n{\n \"name\" : 1,\n \"field_a\" : 1,\n \"field_b\" : 1,\n \"_id\" : 1,\n \"attributes\" : 1\n}\n\"attributes\"{\n \"name\" : 1,\n \"field_a\" : 1,\n \"field_b\" : 1,\n \"_id\" : 1\n}\n{name: 1}field_a, field_b, _id",
"text": "Hello all,\nI have a question about the fields of a compound index, hopefully this great community may enlighten me a bit here My data can be modelled asI need to search regex in name (not ideal, I know) and return field_a, field_b and _id.\nSometimes, not always, I would want to search a regex in name and also return only records that have attribute \"a\" in their attributes list.\nIn both cases I would want to sort by name.\nThese are the 2 typical query commands -Originally I used the following a compound indexwhich would give me a covered query for both queries, not needing to fetch.\nHowever, this is significantly slower (on both types of queries I have) than querying with an index that doesn’t include \"attributes\"Difference in times 100 sec vs 6 sec for 5.1M records.Is it because of how mongoDB indexes arrays? The arrays, again, contain very few elements, so it’s a bit weird for me to see such a huge difference.Another interesting observation is that indexing just {name: 1} rather than a compound index provides slightly better performance in both queries (around 4 sec, vs 6 sec), even though in this case it needs to fetch field_a, field_b, _id from each record instead of having them covered in the index. Why is that?for what it’s worth, I still use mongoDB 4.0.10, driver is pymongo 3.7.1Many thanks! ",
"username": "Keren-Or_Curtis"
},
{
"code": "isMultiKeytrue\"attributes\"PROJECTION_COVERED{\n \"name\" : 1,\n \"field_a\" : 1,\n \"field_b\" : 1,\n \"_id\" : 1\n}\nPROJECTION_SIMPLE\"attributes\"{name: 1}field_a, field_b, _iddb.collection.getIndexes()explain(\"allPlansExecution\")db.collection.stats()",
"text": "Hi @Keren-Or_Curtis - Welcome to the community Regarding your regex, you could possibly further optimize it using a “prefix expression”. More details in the $regex index use documentation.which would give me a covered query for both queries, not needing to fetch.As noted in the Multikey indexes limitations documentation:Multikey indexes cannot cover queries over array field(s).I believe if you run an explain plan against the queries, you will see that the isMultiKey value is true for the index that includes the array field \"attributes\".Just to clarify, would be able to post some details regarding the method you used to determine that this is a covered query?command: find { find: “COLL”, filter: { name: { $regex: “part” }}, sort: { name: 1 }, projection: { field_a: 1, field_b: 1, _id: 1 }For this particular query, the index below results in a PROJECTION_COVERED (based off my test environment with around 140K documents):Based off the test environment containing documents with the same fields and indexes, a PROJECTION_SIMPLE (i.e. not a covered query) will occur due to the multikey index limitation noted in my reply for the index including the additional array field \"attributes\".Another interesting observation is that indexing just {name: 1} rather than a compound index provides slightly better performance in both queries (around 4 sec, vs 6 sec), even though in this case it needs to fetch field_a, field_b, _id from each record instead of having them covered in the index. Why is that?There could be many explanations for the difference in execution timing, such as it’s possible that one index is in memory and one isn’t, or even perhaps hardware-specific reasons. However, in saying so, could you provide the following regarding the above:Please note that with my above testing, I did not add the “other fields” to my test documents or query you had mentioned so results may differ. I am also testing on version 5.0.13 of MongoDB.for what it’s worth, I still use mongoDB 4.0.10, driver is pymongo 3.7.1Lastly, MongoDB version 4.0 has reached end of life on April 2022. Please test again on a support version as MongoDB is constantly being improved, so doing this test on a supported version would be more representative.Regards,\nJason",
"username": "Jason_Tran"
},
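Since version 4.0 labels the stage as PROJECTION either way, one hedged way to apply the advice above is to check totalDocsExamined yourself; a sketch against the collection and fields described in this thread:

```javascript
const res = db.COLL.find(
  { name: { $regex: "part" } },
  { field_a: 1, field_b: 1, _id: 1 }
).sort({ name: 1 }).explain("executionStats");

// 0 documents examined (while results are returned) usually means the query was covered
printjson({
  covered: res.executionStats.totalDocsExamined === 0,
  keysExamined: res.executionStats.totalKeysExamined,
  returned: res.executionStats.nReturned
});
```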
{
"code": "attributesattributesdb.collection.getIndexes()explain(\"allPlansExecution\")db.collection.stats()attributes",
"text": "Hi, thank you for the reply!Regarding your regex, you could possibly further optimize it using a “prefix expression”.Indeed, I try to utilize it whenever possible, however I’m afraid certain usecases require me to search the entire expression.Multikey indexes cannot cover queries over array field(s).But also, on that same page, it saysHowever, starting in 3.6, multikey indexes can cover queries over the non-array fields if the index tracks which field or fields cause the index to be multikey.since attributes is the array field that causes the index to be multikey, at least my first type of query (the one that doesn’t filter by attributes) should have still be covered? (I realized looking more closely at the explain output it indeed isn’t, but wondering why, considering that part of the documentation?)There could be many explanations for the difference in execution timing, such as it’s possible that one index is in memory and one isn’tI will clarify my test a bit - I replaced the index each time, I didn’t add more test indexes. I ran each query at least 3 consecutive times to eliminate the initial longer time of loading the index to memory.\nThat said, I do understand the time difference is perhaps not that meaningful, but the single-field index surely doesn’t provide a covered query, isn’t it a bit weird?MongoDB version 4.0 has reached end of life on April 2022. Please test again on a support version as MongoDB is constantly being improved, so doing this test on a supported version would be more representative.I am dreadfully aware Due to some legacy restrictions I currently have to work with 4.0, I guarantee you I’m working to get these upgraded. If this in itself is considered an issue impeding the results, I understand (and may return to this topic once upgraded)I will provide the outputs per test-index. They are very lengthy as you can imagine so I wanted to attach them as files, regretfully new users cannot upload files so I put it on google drive (honestly that’s a lot of output for a comment)summarizing again,note I am fine with 4-6 seconds, just aiming to understand this behaviour a bit moreThanks!",
"username": "Keren-Or_Curtis"
},
{
"code": "'PROJECTION'PROJECTION_COVEREDtotalDocsExaminedtotalDocsExamined\"no-attributes-index\"\"attributes\"db.getCollection('COLL').find({name: { $regex: \"one\" }}, { field_a: 1, field_b: 1, _id: 1 })\"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 2048,\n \"executionTimeMillis\" : 6023,\n \"totalKeysExamined\" : 5118976,\n \"totalDocsExamined\" : 0\ndb.getCollection('COLL').find({name: { $regex: \"one\" }, attributes: \"a\"}, { field_a: 1, field_b: 1, _id: 1 }).sort({name: 1}) \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 2048,\n \"executionTimeMillis\" : 18527,\n \"totalKeysExamined\" : 5118976,\n \"totalDocsExamined\" : 2048\n\"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 2048,\n \"executionTimeMillis\" : 98361,\n \"totalKeysExamined\" : 5118976,\n \"totalDocsExamined\" : 5118976\n$regexdb.collection.find({'name':{$regex:\"Est\"}},{\"name\": 1, \"field_a\": 1, \"field_b\": 1, \"_id\": 1}).sort({\"name\":1}).explain(\"executionStats\")executionStats: {\n executionSuccess: true,\n nReturned: 8090,\n executionTimeMillis: 994,\n totalKeysExamined: 1000000,\n totalDocsExamined: 0,\n executionStages: {\n stage: 'PROJECTION_COVERED'\ndb.collection.find({'name':{$regex:\"^Est\"}},{\"name\": 1, \"field_a\": 1, \"field_b\": 1, \"_id\": 1}).sort({\"name\":1}).explain(\"executionStats\")executionStats: {\n executionSuccess: true,\n nReturned: 6061,\n executionTimeMillis: 12,\n totalKeysExamined: 6062,\n totalDocsExamined: 0,\n executionStages: {\n stage: 'PROJECTION_COVERED'\nexecutionTimeMillistotalKeysExamined",
"text": "Hi Keren - Thanks for providing those requested details.since attributes is the array field that causes the index to be multikey, at least my first type of query (the one that doesn’t filter by attributes) should have still be covered? (I realized looking more closely at the explain output it indeed isn’t, but wondering why, considering that part of the documentation?)I did some testing with MongoDB version 4.0 and it seems even if the projection is covered, the stage from the execution stats output will still show as 'PROJECTION' so my apologies there for any confusion caused (My test environment using version 5.0.13 displays the particular stage as PROJECTION_COVERED). However, in saying so, you can possibly try inspecting the totalDocsExamined value for your queries to determine if the query is covered or not. If the totalDocsExamined value is 0, then the query was most likely covered (there is an exception for when the query returns no results). This can be seen with the \"no-attributes-index\" output for the query that does not contain \"attributes\":db.getCollection('COLL').find({name: { $regex: \"one\" }}, { field_a: 1, field_b: 1, _id: 1 })Compared with db.getCollection('COLL').find({name: { $regex: \"one\" }, attributes: \"a\"}, { field_a: 1, field_b: 1, _id: 1 }).sort({name: 1}) which isn’t coveredthe queries (both types of queries) take ~100sec with the full index, which is a multikey index, and apparently not covered - perhaps my misunderstanding of the documentation thereIf we inspect the execution stats output for full index output for both queries, we can see that the server scanned 5.1M index keys and also 5.1M documents. Interestingly enough, this number sounds like the whole collection and if this is the case, my guess is that performing a collection scan may be faster in this particular scenario if the same 5.1M are needed to be scanned (without needing to inspect any index keys).:both queries take ~6 seconds with the no-attributes-index, despite it needing to fetch attributes for the query that filters according to it\nboth queries take ~4 seconds with the only-name index, despite it needing to fetch all the projection fieldsI do see why this would appear quite strange as inspecting the execution stats you can see the query / index combinations requiring a fetch are slightly faster than the covered query. The query selectivity drastically impacts the performance. For your reference, on a test environment with 1M documents, I compared 2 $regex queries, one with an anchor:db.collection.find({'name':{$regex:\"Est\"}},{\"name\": 1, \"field_a\": 1, \"field_b\": 1, \"_id\": 1}).sort({\"name\":1}).explain(\"executionStats\"):(with anchor)\ndb.collection.find({'name':{$regex:\"^Est\"}},{\"name\": 1, \"field_a\": 1, \"field_b\": 1, \"_id\": 1}).sort({\"name\":1}).explain(\"executionStats\"):Note the executionTimeMillis difference between an anchored an non-anchored regex queries. Also the number of totalKeysExamined, where the non-anchored regex query is scanning the whole keyspace: 1000000 scanned vs 8000 returned, an average of 0.008 document returned per index key scanned, and the anchored query is scanning 6062 keys and returns 6061 documents, almost 1:1 ratio, which means that the server does not do unnecessary work.On an additional test with anchors, I found that a full index was faster than the single field index.Although I do understand you have stated your particular use case requires the full expression to be searched. 
If this is a frequent operation, you may wish to consider using Atlas Search (although I presume you are on-prem due to the MongoDB version stated).Regards,",
"username": "Jason_Tran"
},
{
"code": "\"name\"\"winningPlan\" : {\n \"stage\" : \"PROJECTION\",\n \"transformBy\" : {\n \"field_a\" : 1.0,\n \"field_b\" : 1.0,\n \"_id\" : 1.0\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"name\" : {\n \"$regex\" : \"one\"\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"name\" : 1,\n \"field_a\" : 1,\n \"field_b\" : 1,\n \"_id\" : 1,\n \"attributes\" : 1\n },\n \"indexName\" : \"playIdx\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"name\" : [],\n \"field_a\" : [],\n \"field_b\" : [],\n \"_id\" : [],\n \"attributes\" : [ \n \"attributes\"\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"name\" : [ \n \"[\\\"\\\", {})\", \n \"[/one/, /one/]\"\n ],\n \"field_a\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"field_b\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"_id\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"attributes\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\nFETCH\"attributes\"\"attributes\"\"name\"explaindb.getCollection('COLL').find({name: { $regex: \"one\" }}, { field_a: 1, field_b: 1, _id: 1 })attributes\"name\"\"name\"\"attributes\"Search regex without attributes filtering : \n\t\t\t\t\t\t\t| Time\t | IdxKey examined\t| Docs examined\t| total returned\n-----------------------------------------------------------------------------------------\n1. Full Idx\t\t\t\t\t| 101.85 |\t5118976\t\t\t| 5118976\t\t| 2048\n2. Idx without attributes\t| 5.845\t | \t5118976\t\t\t| 0\t\t\t\t| 2048\n3. Idx = just name\t\t\t| 4.175\t | \t5118976\t\t\t| 2048\t\t\t| 2048\n\t\t\t\t\nSearch regex with attributes filtering:\n\n\t\t\t\t\t\t\t| Time\t | IdxKey examined\t| Docs examined\t| total returned\n-----------------------------------------------------------------------------------------\n4. Full Idx\t\t\t\t\t| 55.675 |\t5118976\t\t\t| 2559488\t\t| 2048\n5. Idx without attributes\t| 6.08\t | \t5118976\t\t\t| 2048\t\t\t| 2048\n6. Idx = just name\t\t\t| 4.34\t | \t5118976\t\t\t| 2048\t\t\t| 2048\n",
"text": "Hi Jason, thanks for your input once again! It is very helpful.It is interesting to note that the original index not only doesn’t cover the query, it also doesn’t utilize the fact \"name\" is part of the index.\nI can deduce it from the number of examined docs & see it in the query plan of the full index -On an additional test with anchors, I found that a full index was faster than the single field index.True, this is also very important to note, thanks.About the difference between the two other indexes, it looks as follows (numbers are slightly different than before since I changed the attributes so that only half of the DB will have the required attribute “a”)-I think the interesting difference is between 2 & 3. [2] is a covered query, yet it takes slightly more time than [3]. Perhaps the index-size itself is a possible reason for slowdown? The index of 2 & 5 is ~8 times heavier than the simpler index of 3 & 6, so I may answer myself here by saying that despite the fact it may give a more optimized plan, in the practical sense it’s still not the best performanceThanks again! I appreciate all your input, while I will certainly look into Atlas Search (for a hopeful future in which I won’t need to maintain an on-prem DB anymore, as you correctly guessed), I am more interested in more thoroughly understanding the behaviors we’re experiencing, and then decide if/how I’d change the indexes.Best regards",
"username": "Keren-Or_Curtis"
}
]
| Adding the projected fields to compound index (to create covered query) doesn't improve performance | 2022-11-14T16:16:33.055Z | Adding the projected fields to compound index (to create covered query) doesn’t improve performance | 2,757 |
[
"queries",
"data-modeling"
]
| [
{
"code": "this.model.find({\n members: {\n $elemMatch: {\n userId: new ObjectId(userId),\n },\n },\n})\n{\n \"explainVersion\": \"1\",\n \"queryPlanner\": {\n \"namespace\": \"***.groups\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"members\": {\n \"$elemMatch\": {\n \"userId\": {\n \"$eq\": \"61b091ee9b50220e75208eb6\"\n }\n }\n }\n },\n \"queryHash\": \"DCF50157\",\n \"planCacheKey\": \"DCF50157\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"members\": {\n \"$elemMatch\": {\n \"userId\": {\n \"$eq\": \"61b091ee9b50220e75208eb6\"\n }\n }\n }\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"members.userId\": 1\n },\n \"indexName\": \"members.userId_1\",\n \"isMultiKey\": true,\n \"multiKeyPaths\": {\n \"members.userId\": [\n \"members\"\n ]\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"members.userId\": [\n \"[ObjectId('61b091ee9b50220e75208eb6'), ObjectId('61b091ee9b50220e75208eb6')]\"\n ]\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 17,\n \"executionTimeMillis\": 0,\n \"totalKeysExamined\": 17,\n \"totalDocsExamined\": 17,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"members\": {\n \"$elemMatch\": {\n \"userId\": {\n \"$eq\": \"61b091ee9b50220e75208eb6\"\n }\n }\n }\n },\n \"nReturned\": 17,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 18,\n \"advanced\": 17,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"docsExamined\": 17,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 17,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 18,\n \"advanced\": 17,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"members.userId\": 1\n },\n \"indexName\": \"members.userId_1\",\n \"isMultiKey\": true,\n \"multiKeyPaths\": {\n \"members.userId\": [\n \"members\"\n ]\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"members.userId\": [\n \"[ObjectId('61b091ee9b50220e75208eb6'), ObjectId('61b091ee9b50220e75208eb6')]\"\n ]\n },\n \"keysExamined\": 17,\n \"seeks\": 1,\n \"dupsTested\": 17,\n \"dupsDropped\": 0\n }\n },\n \"allPlansExecution\": []\n },\n \"command\": {\n \"find\": \"groups\",\n \"filter\": {\n \"members\": {\n \"$elemMatch\": {\n \"userId\": \"61b091ee9b50220e75208eb6\"\n }\n }\n },\n \"projection\": {},\n \"readConcern\": {\n \"level\": \"majority\"\n },\n \"$db\": \"***\"\n },\n \"serverInfo\": {\n \"host\": \"***\",\n \"port\": 27017,\n \"version\": \"6.0.3\",\n \"gitVersion\": \"f803681c3ae19817d31958965850193de067c516\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"ok\": 1,\n 
\"operationTime\": {\n \"$timestamp\": \"7168789227251957761\"\n }\n}\n",
"text": "We’re trying to optimise our read performance on our MongoDB cluster. We serve a social media like application where users are member of 1 or multiple groups.We were storing who is in which group and whether he/she is an admin of that group in a separate collection. However we noticed it was quite slow to retrieve the group information for the groups the user is member of. (find(+filter) groupMember documents, populate the groups).Therefor we recently migrated all the group members to an array on the group collection documents itself.The schema now looks as following:\n\nScreenshot 2022-11-22 at 13.37.28988×654 64.1 KB\nThe query we execute is simply:We expected this to be much more performed because you don’t need to populate/lookup anything. The opposite is true however, after deploying this change we noticed a performance decrease.We have around 40k group documents where the largest groups have around 3k members, most groups are much smaller however.The groups are indexed and the index is also used. This is an explain plan:Under load the query takes 300-400ms, which is not acceptable for us.However right now we don’t really know anymore what would be the best next step in improving the solution. Mongo does not advise any additional indexes or schema improvements at this moment.What can we do best to get this query really performand?",
"username": "Wouter_Lemcke"
},
{
"code": " \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 17,\n \"executionTimeMillis\": 0,\n \"totalKeysExamined\": 17,\n \"totalDocsExamined\": 17,\n \"executionTimeMillis\": 0,",
"text": "Hi @Wouter_Lemcke ,According to the explain plan the query was blazing fast and did minimal scan of 17 entries returning those 17 entries:The time to run it was also sub ms \"executionTimeMillis\": 0, …So the 300-400ms are probably lost somewhere between the database and the app service returning it…I would recommend investigate where the time is spent.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
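One hedged way to act on “investigate where the time is spent” is to confirm what the server itself records for this query while the load test runs, e.g. with the database profiler; the threshold and namespace filter below are examples only:

```javascript
// Record operations slower than 100 ms while the load test runs
db.setProfilingLevel(1, { slowms: 100 });

// Afterwards, compare the server-side durations with the 300-400 ms seen in the app
db.system.profile.find(
  { ns: /groups/, millis: { $gt: 100 } },
  { op: 1, millis: 1, docsExamined: 1, keysExamined: 1, ts: 1 }
).sort({ ts: -1 }).limit(10);

db.setProfilingLevel(0);   // turn profiling off again
```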
{
"code": "",
"text": "Thanks for the reply @Pavel_Duchovny.The explain was taken from my laptop connecting with the database at a moment there was hardly any load. Does that matter for the results?We have a custom metric around the query, not measuring anything else then retrieving the result. During load it’s quite slow (time is in seconds):\n\nScreenshot 2022-11-22 at 15.39.331464×598 33.2 KB\n",
"username": "Wouter_Lemcke"
},
{
"code": "",
"text": "Hi @Wouter_Lemcke ,This needs to be investigated by a support engineer.Maybe you need to scale the database server during those times to accommodate the specific needs you have.However,.there is nothing on the query specifically that can improve its performance , its already optimal.Thanks",
"username": "Pavel_Duchovny"
}
]
| MongoDB querying array slow | 2022-11-22T13:03:02.352Z | MongoDB querying array slow | 1,989 |
|
null | [
"aggregation",
"queries",
"compass",
"transactions"
]
| [
{
"code": "{'transactions.0.created_at':{$gte: ISODate('2022-09-01')}}{'transactions.0.created_at':{$gte: ISODate('2022-09-01')}, 'transactions.0.created_at':{$lte: ISODate('2022-10-01')}}",
"text": "Hi there,\nThere is a collection sessions with a field transactions that is an array of json-documents. Each element of this array has a field created_at (date of a transaction). I need to match all documents having transactions.0.created_at field between “2022-09-01” and “2022-10-01”.\nSo firstly the match-stage is created.\nWhen it has only 1 condition,\n{'transactions.0.created_at':{$gte: ISODate('2022-09-01')}} ,\nall documents having transactions.0.created_at>=“2022-09-01” are shown, this is ok.\nBut when adding the second condition and getting the match-stage like\n{'transactions.0.created_at':{$gte: ISODate('2022-09-01')}, 'transactions.0.created_at':{$lte: ISODate('2022-10-01')}},\nI’m getting all documents even created in 2021.\nIt seems like OR connection, but as far as I know, all conditions in match-stage are connected via AND. Where can be a problem?",
"username": "Anne_Kim"
},
{
"code": "",
"text": "Please read the following for some explanations:\nhttps://www.mongodb.com/community/forums/t/request-for-explanation-chapter-4-query-operators-lecture/122914/2?u=steevej\nhttps://www.mongodb.com/community/forums/t/exclusive-and-vs-multiple-filter-in-data-explorer-atlas/130214/7?u=steevej",
"username": "steevej"
},
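For readers who skip the links: in a JavaScript object two entries with the same key collapse into one, the last one winning, which is why the filter behaves as if only the $lte condition were applied. The usual fix is to put both operators under a single key, sketched here with the collection and field from this thread:

```javascript
// Both bounds under one key are combined with AND, as intended
db.sessions.aggregate([
  { $match: {
      "transactions.0.created_at": {
        $gte: ISODate("2022-09-01"),
        $lte: ISODate("2022-10-01")
      }
  } }
]);
```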
{
"code": "",
"text": "Now it’s clear, thank you!",
"username": "Anne_Kim"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Matching data between 2 dates in MongoDB Compass | 2022-11-22T11:39:57.186Z | Matching data between 2 dates in MongoDB Compass | 2,613 |
null | [
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "import mongoose, { Types } from 'mongoose';\n\nexport interface UserDocument extends mongoose.Document{\n vorname:string;\n nachname:string;\n username:string;\n email:string;\n street:string;\n number:string;\n plz:string;\n city:string;\n password:string;\n isAdmin:boolean;\n createdAt: Date;\n updatedAt: Date;\n _doc?: any;\n organization: Types.ObjectId;\n }\nconst UserSchema = new mongoose.Schema<UserDocument>({\n vorname:{type:String, required:true},\n nachname:{type:String, required:true},\n username:{type:String, required:true },\n email:{type:String, required:true },\n street:{type:String, required:true },\n number:{type:String, required:true },\n plz:{type:String, required:true },\n city:{type:String, required:true },\n password:{type:String, required:true },\n isAdmin:{type:Boolean, default:false},\n organization: { type: mongoose.Schema.Types.ObjectId, ref: 'Organization' }\n}, \n {timestamps:true}\n)\n\nconst User = mongoose.model<UserDocument>('User', UserSchema);\n\nexport default User;\nuserRouter.put('/:id', verifyTokenAndAuthorization, async (req:Request, res:Response)=>{\n try{\n const updatedUser = await User.findByIdAndUpdate(req.params.id,{\n $set: req.body,\n },{new:true})\n res.status(200).json(updatedUser);\n } catch(error){\n res.status(404)\n throw new Error(\"User not found\");\n }\n});\nuserRouter.get('/find', verifyTokenAndAdmin, async (req:Request, res:Response)=>{\n\n try{\n\n const allUsers = await User.find();\n\n res.status(200).json(allUsers)\n\n console.log(typeof allUsers); //gives back object\n\n } catch(error){\n\n res.status(404).json(\"Users not found\");\n\n }\n\n});\n",
"text": "Hello everyone, like I had understood, the mongoose find() method gives me all documents from a collection. Now I have the issue in my frontend, that I am able to map through my users collection, but when I update userData, I get the error:user.map is not a functionI debugged this error and found out, that find() gives not back an array, like I have expected, but it is an object, so that I cannot use the map() method. What I do wrong? Thanks for your help.\nThat is my model:My Routes:",
"username": "Roman_Rostock"
},
{
"code": "User.find();YourModel.find({}, function (err, docs) {\n // docs is an array of all docs in collection\n});\nCollection.find().toArray()",
"text": "Hi @Roman_Rostock ,I think User.find(); gives a “Query” object back.To my understanding now you will need to build the array in a callback function:Now in a MongoDB Driver we return a cursor and this one has Collection.find().toArray()Thanks\nPavel",
"username": "Pavel_Duchovny"
}
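One small aside that may help while debugging this: typeof reports “object” for arrays as well, so it cannot tell whether allUsers is an array. A hedged sketch of a more direct check, reusing the names from the route shown earlier:

```javascript
const allUsers = await User.find().lean();   // .lean() returns plain JavaScript objects

// Array.isArray distinguishes arrays from other objects; typeof does not
console.log(Array.isArray(allUsers), allUsers.length);

res.status(200).json(allUsers);
```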
]
| Find() gives back an object | 2022-11-22T10:39:22.552Z | Find() gives back an object | 4,193 |
null | [
"node-js"
]
| [
{
"code": "MongoServerSelectionprocess.onprocess.exitnode.js",
"text": "Long time MongoDB user here, first time posting on this forum.I have several NodeJS servers that maintain long-running connections to our MongoDB backend. From time to time I’ve noticed that they seem to drop this connection, throwing a MongoServerSelection error. I assume that perhaps this is due to load, or network fluctuations, or whatever.Honestly, I don’t really care. What I want to happen is for the service to exit so that Kubernetes can restart it. But instead it just seems to hang until I notice and restart the container manually. I have process.on handlers for both “uncaughtException” and “unhandledRejection” which call process.exit, but these don’t seem to be working.So what’s the best way in a node.js app to either (1) reconnect across timeouts (the option to do this seems to have been removed recently) or (2) just exit the server in this situation? I’m using the latest version of the NodeJS MongoDB library.",
"username": "Geoffrey_Challen"
},
{
"code": "",
"text": "As an update, I figured out at least part of what the issue was here, in that my driver script wasn’t restarted properly.However, I’m still seeing these spurious connection drops, even in my local development setup with a local database where they should be no connectivity issues. Is there an option to handle these in the driver through a retry? That would be nice.",
"username": "Geoffrey_Challen"
},
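No definitive answer appears in this thread, but a hedged sketch of the “exit so Kubernetes restarts it” approach is to bound server selection and terminate on the error; the timeout value is arbitrary and the error class is assumed to be exported as in recent 4.x driver versions:

```javascript
const { MongoClient, MongoServerSelectionError } = require('mongodb');

// Fail fast instead of hanging when no server can be selected
const client = new MongoClient(process.env.MONGODB_URI, {
  serverSelectionTimeoutMS: 10000
});

process.on('unhandledRejection', (err) => {
  if (err instanceof MongoServerSelectionError) {
    console.error('Lost the MongoDB deployment, exiting so the pod restarts', err);
    process.exit(1);   // Kubernetes restartPolicy brings the container back up
  }
  throw err;
});
```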
{
"code": "",
"text": "Hi @Geoffrey_Challen have you found a solution? We’re experiencing the same issues.thanks",
"username": "Pini_Usha"
}
]
| Restarting After Spurious Connection Timeouts (NodeJS Client) | 2022-01-26T01:53:01.225Z | Restarting After Spurious Connection Timeouts (NodeJS Client) | 2,166 |
[]
| [
{
"code": "",
"text": "I am trying to create custom role with changeOwnPassword privilege’s but I do not see this privilege’s in any of the built in actions. How do I assign a user with this privileges’? I see documentation that this action is needed for the roles that are applied to user in order to change the user’s password, but how do I assign this action without having it available in the list? Please, advice.\nimage1920×1040 98 KB\n",
"username": "Todd_Garrison"
},
{
"code": "",
"text": "Hi @Todd_Garrison ,In atlas the database users passwords are not changed with the regular MongoDB server commands but with the Atlas UI/API and therefore this specific server role is not relavent for Atlas projects.https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Database-Users/operation/updateOneDatabaseUserInOneProjectTo use this resource, the requesting API Key must have the Project Atlas Admin or Project Charts Admin roles. This resource doesn’t require the API Key to have an Access List.If you need to have a more granular way of managing resources please refer to my article:Learn how to build Service-Based Atlas Cluster Management webhooks/functionality with Atlas Admin API and MongoDB Realm.This approach will allow you to check if the changed user is actually the one who can access this user.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
]
| How do I give user changeOwnPassword priviledge | 2022-11-20T14:44:18.620Z | How do I give user changeOwnPassword priviledge | 943 |
|
null | []
| [
{
"code": "",
"text": "I don’t like introductions because I know I will forget the names and faces as always. So, I prefer not to have them at all.This forgetfulness along with my learning passion comes with a different result: I know a lot of things but I don’t have enough experience with them. I still try connecting the dots and doing something I love helps to do so: helping others.I had a pretty nice year on the Forums trying to solve problems and, although we haven’t made personal connections, have benefitted from other community supporters, new and elder.So, Hello All the first time. Excuse me for I will most possibly forget you. See you around solving problems.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz,\nThank you so much for all your contributions to the community, I have been following your work since the beginning and your posts and answers are very insightful and easy to understand, learning to code and debug errors can sometimes become very challenging and we really appreciate you for taking out the time to empathize with other community members and help them.I had a pretty nice year on the ForumsWe are glad you enjoyed helping others, please keep up the good work. More power to you Thanks and Regards.\nSourabh Bagrecha,\nMongoDB",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Hi Yilmaz,Nice to meet you and thanks for hanging around in the forums helping others! I’ve read this and it resonated with me:This forgetfulness along with my learning passion comes with a different result: I know a lot of things but I don’t have enough experience with them. I still try connecting the dots and doing something I love helps to do so: helping others.I really, really recommend you to read Apprenticeship Patterns, it’s an incredible book that will help you learn faster and better, and leave any imposter syndrome behind.Cheers!",
"username": "Diego_Freniche"
}
]
| Hello All (I got my Anniversary badge and this is my first hello) | 2022-11-16T08:17:12.885Z | Hello All (I got my Anniversary badge and this is my first hello) | 2,245 |
null | [
"atlas-cluster"
]
| [
{
"code": "",
"text": "When we create a new MongoDB cluster in MongoDB Atlas using the API, it is taking more than 20min. We dont have a call back from MongoDB that can communicate the status. Have you found this problem? Any help is appreciated.",
"username": "Madhusudhan_KM"
},
{
"code": "",
"text": "Hi @Madhusudhan_KM,Please contact the Atlas support team via the in-app chat to investigate any operational and billing issues related to your Atlas account. You can additionally raise a support case if you have a support subscription. The community forums are for public discussion and we cannot help with service or account / billing enquiries.Some examples of when to contact the Atlas support team:Best Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Creation of new MongoDB cluster takes too long | 2022-11-22T06:03:52.200Z | Creation of new MongoDB cluster takes too long | 1,599 |
null | [
"migration",
"cluster-to-cluster-sync"
]
| [
{
"code": "",
"text": "Hello,We have 2 MongoDB 4.0 clusters. I’ve been looking for live migrations options. I tried mongomirror, but it turned out that it was meant to work with Atlas clusters, then found out mongosync, but unfortunately, it’s compatible with MongoDB 6. So I’m curious, what do you recommend for data syncing between two non-Atlas clusters? I like to do a live migration and will highly appreciate your ideas on this.Thanks!",
"username": "Ercin_Demir"
},
{
"code": "",
"text": "Hi @Ercin_DemirFirst I would note that MongoDB 4.0 series is out of support by now, so you might want to consider upgrading to a supported version. At the moment, series 4.2 is the oldest supported release series. Unsupported versions will not receive bugfixes or improvements.Since the mongosync route is not available for you, you may be able to roll your own solution using perhaps change streams. You need to implement a method that can read the change stream from the source cluster and apply it to the destination cluster. However you’ll need to debug and maintain this connector, which could be an issue in the long term.Alternatively, if you can spare some time and effort, I think the best method is to use mongomirror to migrate to Atlas, upgrade your Atlas deployment to the latest series (6.0) using automation, then you can dump the migrated data into your own MongoDB 6.0 deployment. With this, you’re basically: 1) upgraded to the latest supported MongoDB version, and 2) now you can use mongosync. It’s up to you if you want to keep your data in Atlas if that’s more convenient for you, but you can always dump & restore them anywhere you wish.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi,Thank you for taking the time to reply on the post. Both of the clusters are not Atlas clusters. We are going to migrate from our self-host MongoDB to our K8s MongoDB managed by Kubernetes Operator model. Since the data is quite large and we don’t want to compromise of losing any data, we want to do a live migration/syncing data. I believe your alternative solution was mainly for Atlas clusters. Did you get have any experience on live migrations on MongoDB 4?I also found out a migration tool called mongopush. I tried that but it didn’t complete properly. Any tools ideas are also welcome.P.S: Upgrading from 4.0 to 6 is not a quick solution as it needs more tests on the side application before going to production.I’m looking forward to your opinion on this. Thanks very much in advance!",
"username": "Ercin_Demir"
},
{
"code": "rs.add(<nodes in the new deployment>)rs.remove(<nodes in the old deployment>)",
"text": "Hi @Ercin_DemirWhile you can do a backup/restore to a new cluster (see Backup and Restore with MongoDB Tools), doing it with a large dataset live is a different thing altogether.I don’t have experience with managing such a move, but if I’m in your shoes, I would perhaps look into the possibility of extending the replica set into the new deployment (e.g. adding the new deployment using rs.add(<nodes in the new deployment>) one by one), then once everything is in sync, remove the nodes from the old deployment (e.g. rs.remove(<nodes in the old deployment>)). This is off-the-top-of-my-head idea, so please take this with a grain of salt and do a thorough testing Hope you’ll find a workable solution.Best regards\nKevin",
"username": "kevinadi"
}
]
| How to sync two on-prem MongoDB clusters | 2022-11-19T01:18:35.184Z | How to sync two on-prem MongoDB clusters | 2,607 |
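To make the "roll your own solution using change streams" idea from this thread concrete, here is a rough PyMongo sketch of such a connector. It is only an outline under several assumptions: both clusters are reachable from one machine, the source is a replica set (change streams require one), an initial data copy has already been done, and the connection strings and namespace are hypothetical placeholders. A production connector would also need resume tokens, retries and error handling.

from pymongo import MongoClient

# Hypothetical connection strings and namespace - substitute your own.
source = MongoClient("mongodb://source-cluster-host:27017/?replicaSet=rs0")
target = MongoClient("mongodb://target-cluster-host:27017/?replicaSet=rs1")
src_coll = source["mydb"]["mycoll"]
dst_coll = target["mydb"]["mycoll"]

# Tail the source collection's change stream and re-apply each event on the target.
with src_coll.watch(full_document="updateLookup") as stream:
    for change in stream:
        op = change["operationType"]
        key = change["documentKey"]
        if op in ("insert", "update", "replace"):
            # Upsert the latest full version of the document into the target cluster.
            dst_coll.replace_one(key, change["fullDocument"], upsert=True)
        elif op == "delete":
            dst_coll.delete_one(key)
        # A real connector would persist change["_id"] (the resume token)
        # so replication can resume safely after a restart.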
null | [
"queries",
"python",
"indexes",
"serverless"
]
| [
{
"code": "published_atArtsTechnology01#Midterms2022#ElonMusk#SuperBowlLVIIlimit=10offset=0published_at# Lists a single episode\n# /podcast/{podcast_id}/episode/{episode_id}\nid_ = f\"{podcast_id}/{episode_id}\"\ndb.episodes.find({\"_id\": id_})\n\n\n# Lists episodes in desc order of published_at\n# /podcast/{podcast_id}/episodes\ndb.episodes\\\n .find({\"podcast_id\": podcast_id})\\\n .sort(\"published_at\", -1)\\\n .offset(offset)\\\n .limit(limit)\n\n\n# SRP for episodes in desc order of published_at\n# /search/episodes?genre=genre_id&popularity=flag&hashtag=hashtag_id\nexpr1 = {\"genres.id\": genre_id}\nexpr2 = {\"popularity\": flag}\nexpr3 = {\"hashtags.id\": hashtag_id}\nexpr = {}\nfor expr_ in [expr1, expr2, expr3]:\n expr = {\"$and\": [expr_, expr]}\ndb.episodes\\\n .find(expr)\\\n .sort(\"published_at\", -1)\\\n .offset(offset)\\\n .limit(limit)\nimport pymongo\n\ndb.episodes.create_index([(\"published_at\", pymongo.DESCENDING)])\n\nfor name in [\n \"podcast_id\",\n \"popularity\",\n \"genres.id\",\n \"hashtags.id\",\n]:\n db.episodes.create_index(\n [(name, pymongo.ASCENDING), (\"published_at\", pymongo.DESCENDING)],\n )\nepisodesSTORAGE SIZE: 4.42GB\nLOGICAL DATA SIZE: 14.71GB\nTOTAL DOCUMENTS: 4944529\nINDEXES TOTAL SIZE: 631.56MB\n",
"text": "Hi,I have a straightforward Atlas serverless installation populated with sample data that I’m looking to scale 10x, however, I am concerned about eventual costs. Looking at the invoice breakdown, most of the costs are due to the high number of WPUs - which is surprising since all of my updates/inserts are point queries.I get the sense that high WPUs may be because of the multiple indexes I have to facilitate various views in my app.I am running a podcast website. Each podcast may have a number of episodes and each episode hasMy app has the following views which take limit=10, offset=0 as default parameters (unless provided) and are sorted by published_at in descending order (newest first).I have the following three views in my app:So, I have added the following indexes:As you can imagine, my inserts are also fairly simple - I monitor the RSS feed of the podcast and whenever a new episode is published, I simply insert it into the DB. That’s about it.Here are some stats about the episodes collection from the Atlas dashboard:Can somebody reason why I may have high WPUs?Thanks!",
"username": "Abhinav_Kulkarni"
},
{
"code": "db.collection.stats()db.collection.getIndexes()",
"text": "Hi @Abhinav_Kulkarni,Generally, the RPU/WPU that forms the basis of serverless charges concerns about the work needed to be performed by MongoDB to service the work. If MongoDB needs to do much work that requires many reads or writes (even though superficially it doesn’t look like it), then the RPU/WPU numbers will reflect this.In saying so, it sounds like the main concern here is regarding the WPU’s. Could you provide the following details:The number of indexes you have defined in the collection will also affect the WPU numbers, since a write operation will need to write to the collection itself and all the associated indexes, as mentioned in the Write Operation Performance pageRegards,\nJason",
"username": "Jason_Tran"
}
]
| How to debug high WPUs for an Atlas serverless instance? | 2022-11-14T12:02:31.499Z | How to debug high WPUs for an Atlas serverless instance? | 1,598 |
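For anyone wanting to gather the details Jason asks for above with PyMongo rather than the shell, a minimal sketch follows; the connection string is a hypothetical placeholder and the database/collection names mirror the episodes collection described in the question.

from pprint import pprint
from pymongo import MongoClient

# Hypothetical connection string - use your serverless instance's SRV URI.
db = MongoClient("mongodb+srv://<user>:<password>@<instance>.mongodb.net")["mydb"]

# Shell equivalent: db.episodes.stats()
pprint(db.command("collStats", "episodes"))

# Shell equivalent: db.episodes.getIndexes()
pprint(db["episodes"].index_information())

As Jason notes, every insert has to update the collection plus each of the five secondary indexes described in the question, which is one reason write units can run higher than the raw document count alone would suggest.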
null | [
"python"
]
| [
{
"code": "> show users;\n{\n \"_id\" : \"test.rohan\",\n \"userId\" : UUID(\"c12dcc3e-d791-4886-8e3d-0c316fd5a009\"),\n \"user\" : \"rohan\",\n \"db\" : \"test\",\n \"roles\" : [\n {\n \"role\" : \"readWrite\",\n \"db\" : \"config\"\n }\n ],\n \"mechanisms\" : [\n \"SCRAM-SHA-1\",\n \"SCRAM-SHA-256\"\n ]\n}\n\n[mongodb]\ndatabase_type = mongodb\nserver = localhost\nport = 27017\ndatabase = test\nprivileged_account = rohan\nprivileged_account_password = root\napplication_account = apprunuser \nconfiguration_file = /etc/mongod.conf\n",
"text": "I am try to connect my python to mongo db but getting this error .\ni have create a user for rohan .Error detail: command SON([(‘authenticate’, 1), (‘user’, ‘rohan’), (‘nonce’, ‘ddd2127c331e77c1’), (‘key’, ‘fc7cc53585111f5050a68990614e3e26’)]\ndatabase.conf file",
"username": "Rohan_kar"
},
{
"code": "",
"text": "The user need to be created on admin db and give necessary privileges on the db you want to access\nYou have created it on test db and gave access to config db?",
"username": "Ramachandra_Tummala"
},
{
"code": "mongodb.createUser(\n\t\t{ \n\t\t\tuser: \"rohan3\",\n\t\t\tpwd: \"root3\",\n\t\t\troles:\n\t\t\t[\n\t\t\t{ role:\"userAdmin\",db:\"flower\"},\n\t\t\t] } );\n",
"text": "So i need to create user inside the admin db .\nfirst i need to do mongo\nuse admin\ncreate db flower;\nthen create userThis process i need to follow ?Thanks for Time",
"username": "Rohan_kar"
}
]
| Error connecting to database! | 2022-11-21T18:51:47.222Z | Error connecting to database! | 1,377 |
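To make the advice in the thread above concrete, here is a minimal PyMongo sketch. It assumes the application user should get readWrite (rather than userAdmin) on the flower database mentioned later in the thread, that the server runs on localhost:27017, and that the throwaway credentials from the thread are reused; adjust all of these for a real deployment.

from pymongo import MongoClient

# 1) As an administrator, create the application user in the admin database.
admin_db = MongoClient("mongodb://localhost:27017")["admin"]
admin_db.command(
    "createUser", "rohan",
    pwd="root",
    roles=[{"role": "readWrite", "db": "flower"}],  # data access, not just user administration
)

# 2) The application then authenticates against admin (authSource=admin).
client = MongoClient("mongodb://rohan:root@localhost:27017/flower?authSource=admin")
print(client["flower"].list_collection_names())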