image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"queries",
"replication",
"sharding",
"indexes"
] | [
{
"code": "",
"text": "Hi, is there any good source like a book where I can learn about the internals of mongoDB. By internals I mean, how mongodb solves the problems like replication, leader election, indexes, writes, reads, caching etc rather than what it can do.\nI know that one of the options is to go through source code, but are there any other sources?",
"username": "amar_nadh"
},
{
"code": "",
"text": "Mastering MongoDB 6.x is a good overview, and is fairly up to date. Chapter 13 covers some of the things that you mention.If you want to dive even deeper, there’s dedicated books to topics such as wiredtiger",
"username": "Eamon_Scullion"
},
{
"code": "",
"text": "@Eamon_Scullion Thanks for the suggestion. I will go through this.",
"username": "amar_nadh"
},
{
"code": "",
"text": "Hey @amar_nadh,Welcome to the MongoDB Community Forums! The best way to learn about MongoDB internals is to check the readme-files in the source code. They are the most up-to-date, technical, and best resource if learning about the internals is what you’re looking for.Replication internals: mongo/README.md at master · mongodb/mongo · GitHubSharding internals: mongo/README.md at master · mongodb/mongo · GitHubStorage execution internals: mongo/README.md at master · mongodb/mongo · GitHubQuery system internals: mongo/README.md at master · mongodb/mongo · GitHubHoping this helps. Please feel free to reach out for anything else as well. Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is there any one stop source to learn the internals of mongoDB? | 2023-01-20T19:02:56.807Z | Is there any one stop source to learn the internals of mongoDB? | 1,268 |
[] | [
{
"code": "",
"text": "Hi, I’m trying to set up the Encryption at Rest using Azure Key Vault, and it fails with an “Invalid Azure API request” error. I provided all the input data needed (Azure AD account credentials, Key Vault credentials and encryption key), but the INVALID_AZURE_API_REQUEST error code does not have give me enough information to troubleshoot it:\n\nScreen Shot 2021-10-06 at 10.26.432572×1474 355 KB\nAny help/insights would be appreciated.",
"username": "jorge.barsoba"
},
{
"code": "",
"text": "Hi Jorge, there could be a number of different things happening here, mind opening up a support case so the team can help you get to the bottom of this issue?",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "INVALID_AZURE_API_REQUESTThanks @Andrew_Davidson for your reply. New case #00834360 created!",
"username": "jorge.barsoba"
},
{
"code": "",
"text": "Hi, how are you guys?I have the same problem when trying to configure my DB to encryption at rest with Azure Key Vault.\nI provide all the information on the fields and when I click save, I receive the same message and I can’t figure out the underling problem.Can you provide me with some information on how to create solve this problem or where I can find some logs?Thanks in advance",
"username": "Andre_Jean_Bastiani"
},
{
"code": "",
"text": "Hi,\nI am also facing same issue, Will you let me know how to open support Case?",
"username": "Rehan_Raza_Khatib_US"
},
{
"code": "",
"text": "I am also facing same issue and I am sure all my input are correct.\n\nScreenshot 2023-01-30 at 4.14.00 PM1920×1007 143 KB\n",
"username": "Ben_Luk1"
},
{
"code": "",
"text": "Hi all,I found that “Key Vault Crypto Officer” role is required for the SP.",
"username": "Ben_Luk1"
}
] | Encryption at Rest with Azure Key Vault error: Invalid Azure API request | 2021-10-06T13:31:14.463Z | Encryption at Rest with Azure Key Vault error: Invalid Azure API request | 3,508 |
null | [] | [
{
"code": "",
"text": "Could not find a way to get the deployment cost estimate for Atlas cluster via API. Is it possible? Found how get basic list of regions, but not much more.\nGoal is to use something like a price calculator API in order to choose deployment size and cost it.\nSomething similar to cloud providers who provide a calculator via API as well as UI.",
"username": "Nuri_Halperin"
},
{
"code": "",
"text": "Hey Nuri,Unfortunately there isn’t currently a way to get the cost estimate for an Atlas cluster config via the API. I’ve located a couple of feedback posts that you can vote for which appear to be what you’re after (more specifically the first one from my assumptions):Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks!\nI voted up the 2 items.Is there a static table of prices in documentation listing all cloud region node costs? (Such as an html doc or something?)",
"username": "Nuri_Halperin"
},
{
"code": "\"View Pricing\"",
"text": "Is there a static table of prices in documentation listing all cloud region node costs? (Such as an html doc or something?)The only page with a static list of pricings I was able to locate was from the following page (and clicking \"View Pricing\") but it’s not for all regions. Each cloud provider tab on the linked page is based off a single region.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Deployment cost via API? | 2023-01-30T19:24:20.423Z | Deployment cost via API? | 521 |
null | [] | [
{
"code": "",
"text": "Hi, I am completely new to MongoDB and Atlas and I am trying to test out this software to see the extent of its usability.For a start, I have 4 collections. The first one is the master list of all the things I am selling, the second is the forecast in 2023, the third is the forecast in 2024 and the last is the current inventory level.Collection 1: Items, Lead time to produce\nCollection 2: Items, 2023 Forecast\nCollection 3: Items, 2024 Forecast\nCollection 4: Items, Current InventoryI used the master list as the base and used lookup field for the forecast and displayed it as a table on the MongoDB Atlas Charts. I want to highlight the items with the following conditions:Are there any way of doing this? I tried creating a calculated field in charts but for some reason, I am not able to write the conditions and when I tried summing 2 columns, it shows as 1 instead of the total sum.",
"username": "H_Han"
},
{
"code": "",
"text": "Hi @H_Han - Welcome to the community Collection 1: Items, Lead time to produce\nCollection 2: Items, 2023 Forecast\nCollection 3: Items, 2024 Forecast\nCollection 4: Items, Current InventoryWould you be able to provide some sample documents from each of the collections? This is to better understand the highlight conditions and calculations you’ve mentioned as well.I’ll need some clarification (which i may possibly get with the sample documents from each collection) regarding the above conditions. For example, “lead time exceed 1 year, current inventory” is not clear for me (is the current inventory collection containing an item id field and boolean field?)I presume you’re trying to achieve something similar to the following Conditional Formatting - Use Case image but please correct me if i’m wrong here.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
}
] | MongoDB Atlas Charts Calculated Field | 2023-01-23T20:26:30.669Z | MongoDB Atlas Charts Calculated Field | 625 |
null | [
"php"
] | [
{
"code": "",
"text": "Any idea how to solve this issue??\nUndefined property: MongoDB\\Driver\\Manager",
"username": "sahil_sekh"
},
{
"code": "",
"text": "Please post additional information, most importantly the full error message and the offending line. It looks like you’re accessing an undefined property, but without knowing what you’re actually doing, there’s no way for us to help you.",
"username": "Andreas_Braun"
},
{
"code": "<?php\n //$m = new MongoDB\\Driver\\Manager();\n $m = new MongoDB\\Driver\\Manager(\"mongodb://192.168.0.105:27017\");\n echo \"Connection to database successfully\";\n $db = $m->mydb;\n echo $db;\n echo \"Database mydb selected\";\n?>\n",
"text": "“i am also facing Undefined property: MongoDB\\Driver\\Manager::$mydb in /var/www/html/index.php on line 5”‘when I want to connect with the MongoDB with PHP’‘I am using PHP 7.4 and MongoDB Version 3.6.3’",
"username": "NAEEM_AKRAM"
},
{
"code": "",
"text": "@NAEEM_AKRAM , “Undefined property” is usually related to missing PHP definitions from a require or require_once. In this case, it looks like the definition of the MongoDB\\Driver\\Manager was not included upstream in your app.We have a great set by step tutorial to get started with PHP and MongoDB. In there, we recommend using the PHP Library instead of using the low-level driver directly.Getting Started with MongoDB and PHP - Part 1 - SetupAnd a follow-up article about how to detect and handle potential PHP errors in the context of MongoDB:This article shows you common mechanisms to deal with potential PHP Errors and Exceptions triggered by connection loss, temporary inability to read/write, initialization failures, and more.Check them out and let us know,",
"username": "Hubert_Nguyen1"
}
] | Undefined property: MongoDB\Driver\Manager | 2021-06-24T21:24:02.278Z | Undefined property: MongoDB\Driver\Manager | 4,298 |
null | [] | [
{
"code": "",
"text": "We have a live instance that we inherited from a previous sale and so we have limited access.\nThe disk is now full and preventing any WriteCOncern operation such as delete or installing TTL (which would reduce space)…\nI am wondering what we can do other than getting into the server and increasing disk space as we are skittish of changing anything before backing up the system.Any ideas ?Thank you in advance !",
"username": "Pascal_Audant"
},
{
"code": "",
"text": "Hi @Pascal_Audant welcome to the community!There are some things I can think of off the top of my head:Those are the immediate solutions that I can think of, but depending on your situation, some of them may not be a practical solution at all. Unfortunately since this is a hardware-related issue, you may not be able to avoid getting into the server.Best regards\nKevin",
"username": "kevinadi"
}
] | What can I do if disk is full? | 2023-01-30T16:12:52.258Z | What can I do if disk is full? | 869 |
null | [] | [
{
"code": "// This function is the endpoint's request handler.\nexports = async function ({ query, headers, body }, response) {\n\n // Verify the event came from Stripe\n\n const stripe = require(\"stripe\")(context.values.get(\"stripeKeyTEST\"));\n const endpointSecret = context.values.get(\"{ENDPOINT-SECRET}\");\n\n const payload = body.text();\n const signature = headers['Stripe-Signature'][0];\n\n let event;\n try {\n\n event = stripe.webhooks.constructEvent(\n payload,\n signature,\n endpointSecret\n );\n \n \n }\n catch (err) {\n\n console.error(err.message);\n throw err.message;\n }\n return \"Hello, World!\";\n};\nCannot access member 'length' of undefined\n",
"text": "Hello. I created an HTTPS endpoint with the following function:The function is based on the snippet in @clueless_dev’s reply:\nContinuing the discussion from Stripe Webhooks and AWS-S3 function call issueHowever, I got the error:The signature was successfully gotten, so, by process of elimination, I concluded the payload is causing the problem. What can I do to fix this?Note: Currently, I’m using a REST API endpoint on AWS to handle the Stripe events, but I’d prefer to do that here on Realm instead.",
"username": "njt"
},
{
"code": "",
"text": "Hmm, can you check whether your endpoint secret is undefined? I ran into this too and supplying a nonempty webhook secret was the fix.This function looks good to me otherwise",
"username": "Andrew_Chen1"
},
{
"code": "body.text()\nBuffer.from(request.body.toBase64(), \"base64\")\n /////////////////////////////\n // Verify stripe webhook call\n /////////////////////////////\n console.log(\"Verifying request\");\n const stripeApiKey = context.values.get(\"stripeApiKey\");\n const stripeWebhookSecret = context.values.get(\"stripeWebhookSecret\");\n\n const stripe = require(\"stripe\")(stripeApiKey, { apiVersion: \"2022-11-15\" });\n const signature = request.headers[\"Stripe-Signature\"][0];\n\n if (signature == undefined) {\n console.error(\"Missing stripe signature\");\n throw Error(\"missing-stripe-signature\");\n }\n\n var event;\n try {\n event = stripe.webhooks.constructEvent(Buffer.from(request.body.toBase64(), \"base64\"), signature, stripeWebhookSecret);\n } catch (e) {\n console.error(`Stripe event could not be verified. Error: ${e.message}`);\n throw Error(\"invalid-signature\");\n }\n console.log(\"Request is valid.\");\n",
"text": "I had the same issue. It turns out the problem is with getting the raw body which is needed for stripe to verify the request properly.won’t do. Instead, it can be achieved by using:Here’s my full stripe request verification code:",
"username": "Daniel_Weiss"
}
] | Help. It seems Stripe does not like body.text() | 2022-10-11T08:42:34.885Z | Help. It seems Stripe does not like body.text() | 2,067 |
null | [
"python"
] | [
{
"code": "",
"text": "We are pleased to announce the 0.7.0 release of PyMongoArrow - a PyMongo extension containing tools for loading MongoDB query result sets as Apache Arrow tables, Pandas and NumPy arrays.This release that adds support for BSON embedded documents and BSON array\ntypes, and supports PyArrow 11.0.See the 0.7.0 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongoArrow 0.7.0 Documentation\nChangelog: Changelog 0.6.3\nSource: GitHubThank you to everyone who contributed to this release!",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | PyMongoArrow 0.7.0 Released | 2023-01-30T20:38:19.399Z | PyMongoArrow 0.7.0 Released | 1,237 |
null | [
"ops-manager"
] | [
{
"code": "",
"text": "MongoDB and OpsManager supported in RHEL 9 version?",
"username": "amit.faibish"
},
{
"code": "",
"text": "If it is MongoDB version 6, then both community and enterprise are supported (can’t say for previous versions):\nInstall MongoDB — MongoDB ManualOps Manager does not have a download for RHEL 9, but you should be able to install package supplied for “RHEL 7, 8” as long as libraries it requires can also be installable from repositories.\nMongoDB Ops Manager Download | MongoDB",
"username": "Yilmaz_Durmaz"
}
] | RHEL 9 with MongoDB? | 2023-01-30T14:05:07.203Z | RHEL 9 with MongoDB? | 1,569 |
null | [] | [
{
"code": "",
"text": "Dears,I managed to make Google Signin in my application and i can see the data on Mongodb App Users. However, why when i return the profile in my application i can see the name but there is no email. Even on MongoDB App Users there is no email only name. Is there a way i can get the email for Google ? Also, for other authenticators?Thank you.",
"username": "laith_ayyat"
},
{
"code": "",
"text": "I’m having same issue. I use the web sdk with OpenId and all I get is the user’s name and an id. I’ve even set the scopes in the Oauth Screen configuration to include the user’s email. The only option I can think of is to link user documents in my collection by the id. I’m not sure if the user name’s must be unique between gmail accounts.",
"username": "thecyrusj13_N_A"
},
{
"code": "",
"text": "Okay. So because we’re using OpenID Connect the way to get around this is to take the response.credential and send it as a parameter/argument to Atlas App Services using a function. You can write your function so that it decodes the credential and stores the decoded information (for example the user’s email address) in one of your collections. You can use the jwt_decode module to do this. Alternatively, you could decode the credential on the client and just send the decoded information as the argument for your function.After that if you want to store the id that App Services stores for the Google user you will need use another function that finds the document you previously added to your collection and then insert the id. You could either do this on the frontend by calling the function directly using the user object returned from App services after you get a Realm Google credential (not to be confused with the response.credential you received from Google) or use a trigger on the backend that uses the authEvent and user object to get the id.By the way I don’t know if this is the best way to do this, but if you’re frustrated because you can’t see the authEvent object consider storing it temporarily in a collection and then view it in MongoDB Compass.",
"username": "thecyrusj13_N_A"
}
] | Google authentication | 2022-06-29T14:12:36.807Z | Google authentication | 1,394 |
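A note for readers following this thread: the last reply above outlines decoding the Google response.credential in an App Services function and persisting the email. The sketch below is a hypothetical illustration of that idea, assuming the jwt-decode package has been added as a dependency; the database, collection, and field names are invented for the example and are not from the original posts.

```javascript
// Hypothetical App Services function: receives the Google ID token string
// (response.credential) as an argument, decodes its payload, and upserts the
// email into an assumed "userProfiles" collection.
exports = async function (googleCredential) {
  const jwt_decode = require("jwt-decode"); // assumed dependency

  // Decodes (does not verify) the JWT payload; verification is assumed to
  // have happened during the App Services authentication step.
  const payload = jwt_decode(googleCredential);

  const users = context.services
    .get("mongodb-atlas")
    .db("myapp")                 // assumed database name
    .collection("userProfiles"); // assumed collection name

  await users.updateOne(
    { email: payload.email },
    { $set: { email: payload.email, name: payload.name } },
    { upsert: true }
  );

  return { email: payload.email, name: payload.name };
};
```

As the reply suggests, the same decoding could instead be done on the client, with only the decoded fields passed to the function as arguments.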
null | [
"java"
] | [
{
"code": "",
"text": "Hi,Currently I am using atlas search for search use case in my java application. I have gone through the aggregation pipeline functionality provided in the java SDK and found that there isn’t a support for $search operator in the SDK(Aggregates.java) like there is a support for $match, $project and so on. I have done one experiment using java code to create BSON documents with required hierarchy maintained and passed it to aggregation pipeline like coll.aggregate(Arrays.asList(new Document(\"$search\", value))). Using this approach things are working as expected. I just want to know on below things,Do let me know what else information is needed from my end.",
"username": "Sachin_Havinal"
},
{
"code": "/*\n * Copyright 2008-present MongoDB, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage com.mongodb.client.model;\n\nimport com.mongodb.MongoNamespace;\nimport com.mongodb.client.model.densify.DensifyOptions;\n",
"text": "Hi @Sachin_Havinal and welcome in the MongoDB Community !You are correct. There is no helper for this stage, yet. So your current approach is the correct one until we have one.You will have to make your own little helper in the meantime. That being said… The Java Driver is open source and we love new contributors. Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hello there,\nIn version 4.7.0 the Java driver added a new builder for Atlas search. Please give it a try.Thank you!\nRachelle",
"username": "Rachelle"
},
{
"code": "",
"text": "Hello all,\nCould you give an example of how to use it, or where I could find the documentation?",
"username": "Alejandro_Lopez"
},
{
"code": "",
"text": "Hi Alejandro,Have a look at this link from Spring Data MongoDB – the section titled “10.12.2 Supported Aggregation Operations” includes a tip on how to use $search with Spring Data MongoDB.",
"username": "Ashni_Mehta"
}
] | MongoDB Atlas Search using mongo-java-driver | 2021-05-31T12:52:29.551Z | MongoDB Atlas Search using mongo-java-driver | 4,311 |
null | [] | [
{
"code": "",
"text": "I am using Realm Flexible Sync on iOS. My app has 5 collections, but I only want to sync 2 of them to the cloud, is it possible? Or I have no choice but to sync ALL the collections? It seems that if I don’t define a subscription, I will not be able to save any data to the collection. Thanks!",
"username": "Richard_Chui"
},
{
"code": "",
"text": "Hey Richard - if you only want to sync 2 of your 5 collections, you would need to use separate realms; a synced realm to manage the object types you want to sync, and a non-synced realm for the non-synced objects. There is no way to write objects to a synced realm that you don’t want to sync.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Thanks so much for your response. Could you point me to the documentation that explains how to use multiple realms within the same app? Thanks!P.S. Using objectTypes in configuration?",
"username": "Richard_Chui"
},
{
"code": "",
"text": "Yes, you’ve got it - to use multiple realms in an app you would need to use objectTypes in the realm configuration to manage the objects in each realm.Unfortunately we don’t have any documentation that deals specifically with using multiple realms within the same app. We’re hoping to add more guidance in the docs in the future, but I don’t have anything to point you at right now.Conceptually, you just initialize both realms with different variable names and just read/write from the one you want wherever you need those objects. If you have a specific question, though, I’m happy to try to answer it.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Will definitely try that out. Thanks for the great help!",
"username": "Richard_Chui"
},
{
"code": "",
"text": "When I am just using the local realm, I don’t need to define anything, ObservedResults will automatically bring in all the objects in the collection.Now when I have both a local and a sync realm, I am not sure exactly what I should do. Should I define two realms at the app level - a localRealm with objectTypes of the local collections, and a syncRealm with objectTypes of the sync collections. And then pass them as environment objects to the child views? So in those child views, I can continue to use ObservedResults with the collection name (no matter local or sync)? A view needs to observe both a local and a sync collection. I am lost…P.S. When the app is first run, Realm will look at my data models and create the default.realm, how would it know which collection(s) to include when I have a local and a sync realm?",
"username": "Richard_Chui"
},
{
"code": "defaultConfiguration",
"text": "One thing you’ll run into - realm’s default environmentObject implementation is to use one realm as an environment object. If you want to use more than one realm as an environment object, you might have a look at this similar thread - it discussed opening two synced realms, which isn’t your use case, but there’s some good advice there about using more than one realm as environment objects: Help on opening multiple Synced Realms (SwiftUI) .I personally haven’t tried to work through this use case before, so I don’t know off the top of my head what this looks like in practice. But in theory, yes, if you define two realms - a non-synced Realm that manages the object types you don’t want to sync, and a synced realm that manages the object types you do want to sync, you should be able to use them wherever you need to access those objects. I don’t think there’s anything wrong with having a view observe more than one ObservedResults collection - those are bound to the collection type anyway so if your app has 5 objects, you could have 5 ObservedResults observing each of those types of objects. But, again, I haven’t tried to do this in practice, so I’m not sure what the actual behavior would be on doing this.For the default realm, you can explicitly define one of your realm configuration to set which is the default realm. These docs show how to use the defaultConfiguration parameter to specifically define a default realm: Open a Default Realm",
"username": "Dachary_Carey"
}
] | Syncing some but not all of the collections in the database | 2023-01-27T16:07:40.537Z | Syncing some but not all of the collections in the database | 1,391 |
[
"dublin-mug"
] | [
{
"code": "Senior Technical Service Engineer, MongoDBSoftware Engineer, IndeedTech Lead, Qstream",
"text": "Howdy Folks,Dublin MUG will have its first meetup of 2023 on 25th January in collaboration with React Dublin group \nDublinMUG25Jan1920×1080 115 KB\n\njosman800×800 115 KB\n@Josman_Perez_Exposit Senior Technical Service Engineer, MongoDBIn this talk, we will create a trivia game with MongoDB Atlas and React.\nThe game will allow us to choose between several categories and will show us questions. The app will have a ranking based on parameters and we allow you to compete with others and check the leaderboard.\nMatheus800×800 109 KB\nMatheus Monte Software Engineer, IndeedIn this talk, we will talk about how important it is to keep a logical organization of the app structure to help you scale not just on features, but in code maintainability.\nRicardo512×512 31.7 KB\nRicardo Luz Tech Lead, QstreamNFT, also known as a non-fungible token, is one of the most buzzing words on the internet; it embraces the web3 potential for artwork, games, and music and will vastly impact intellectual property and copyright.\nWe can now start exploring the possibilities of using a provider like OpenZepellin and React to build robust applications.\nThis talk will share how you can begin to combine frontend applications and NFT as a web3 developer - and a glance into the exciting future of the web.If you are interested in sharing a project or your experience, please register your interest.Event Type: In-Person\nMongoDB, Ballsbridge, DublinTo RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Many Thanks to our speakers and the GitNation Team.GitNation is a foundation contributing to the development of the technological landscape by organizing events which focus on the open source software. They organize meaningful and entertaining JavaScript conferences and meetups, connecting talented engineers, researchers, and core teams of important libraries and technologies.Volunteers will be at the gate for door opening. Please arrive on time.Please sign in at the reception iPad when you enter inThe event will take place in the Office cafeteria on the third floor. Access to the third floor will be given by the volunteer.Doors require access. Contact +353- 899722424 if no one is available to open the door.Please be respectful of the workplace.I welcome you all to join the Dublin MongoDB User group, introduce yourself, and I look forward to collaborating with our developer community for more exciting things Cheers, ",
"username": "henna.s"
},
{
"code": "",
"text": "Thank you all for the amazing time at the event. Some highlights are below:\nReact and MongoDB Event1920×1440 322 KB\n\nMongoDB and React Meetup1920×1440 358 KB\n\nIMG_51901920×2560 350 KB\n",
"username": "henna.s"
}
] | Dublin MUG in collaboration with React Dublin | 2022-12-14T15:57:12.339Z | Dublin MUG in collaboration with React Dublin | 2,698 |
null | [
"atlas-functions"
] | [
{
"code": "Realm functionsAWS Lambda",
"text": "Hi Team,I want to know if Realm functions behave like AWS Lambda or not. AWS lambda starts like thisand how how long does Realm function take until it starts to run my code ?",
"username": "Mohammed_Ramadan"
},
{
"code": "",
"text": "Similar question please…Under the hood, are realm functions using aws Lambda? I’m asking for capacity planning and to understand if these will be as reliable and predictable as lambdas under heavy load?",
"username": "Christopher_Barber"
},
{
"code": "",
"text": "Still no reply? @mongodb_mongodb",
"username": "Zach_Weiss"
},
{
"code": "",
"text": "FaunaDB claims that MongoDB has cold starts. Do you have official answer from the MongoDB team about functions?",
"username": "kulXtreme_N_A"
},
{
"code": "golang",
"text": "Atlas Functions do not suffer from cold starts in the way that lambda functions do.The engine behind App Services transpiles all functions into golang and runs them natively which means that they run immediately without any type of provisioning.",
"username": "Paolo_Manna"
}
] | Does Realm functions have cold start like AWS lambda? | 2022-02-01T11:27:41.626Z | Does Realm functions have cold start like AWS lambda? | 3,158 |
null | [
"java",
"atlas-device-sync"
] | [
{
"code": " Realm previousLocalDB = Realm.getDefaultInstance();\n \n \n \n SyncConfiguration syncConfig = new SyncConfiguration.Builder(userRealm)\n .modules(new ModuloUserAndAnalysis1())\n .initialSubscriptions(new SyncConfiguration.InitialFlexibleSyncSubscriptions() {\n @Override\n public void configure(Realm realm, MutableSubscriptionSet subscriptions) {\n // add a subscription with a name\n \n subscriptions.addOrUpdate(Subscription.create(\"AnalysisSubscription\",\n realm.where(AnalysisModel.class).equalTo(\"partitionId\", partitionId)\n .and().equalTo(\"clientId\", clientId)\n .and().equalTo(\"elementId\", elementId)\n .and().isNull(\"sendDate\")\n ));\n\n }\n })\n .allowWritesOnUiThread(true)\n .allowQueriesOnUiThread(true)\n .build();\n \n Realm realmRemoto = Realm.getInstance(syncConfig);\n \n RealmResults<AnalysisModel> previousAnalysis = previousLocalDB.where(AnalysisModel.class).findAll();\n realmRemoto.executeTransaction((realmInTransaction) -> {\n realmInTransaction.copyToRealmOrUpdate(previousAnalysis);\n });\n \n \n realmRemoto.close();\n previousLocalDB.close();\n",
"text": "I want my app to store data when it’s offline and when it’s on it transfers to the atlas.What I’m doing is I configure a local realm with RealmConfiguration.Builder() and with the help of a timer I create a syncConfiguration flexible with configuration from my atlas and copy from the local Realm to the sync Realm. is this the most correct way?",
"username": "multiface_biometria"
},
{
"code": "",
"text": "The synced Realm will still work while offline, so you don’t need to create two Realms. Just use the synced Realm the way you would normally, even if offline, and all persisted changes will get synchronized when the device comes back online",
"username": "Sudarshan_Muralidhar"
},
{
"code": " protected void onCreate(@Nullable Bundle savedInstanceState)\n {\n super.onCreate(savedInstanceState);\n\n Realm.init(this);\n\n Credentials credentials = Credentials.anonymous();\n User userRealm = app.login(credentials);\n\n\n SyncConfiguration syncConfig = new SyncConfiguration.Builder(userRealm)\n .modules(new ModuloUserAndAnalysis1())\n .initialSubscriptions(new SyncConfiguration.InitialFlexibleSyncSubscriptions() {\n @Override\n public void configure(Realm realm, MutableSubscriptionSet subscriptions) { \n subscriptions.addOrUpdate(Subscription.create(\"AnalysisSubscription\",\n realm.where(AnalysisModel.class).equalTo(\"partitionId\", partitionId)\n .and().equalTo(\"clientId\", clientId)\n .and().equalTo(\"elementId\", elementId)\n .and().isNull(\"sendDate\")\n )); \n }\n })\n .allowQueriesOnUiThread(true)\n .allowWritesOnUiThread(true)\n .build();\n\n\n Realm.setDefaultConfiguration(syncConfig);\n",
"text": "Thanks for the feedback.When I tried to use a realm sync by default it always crashed. the operation of is app always starts offline and writes the data.\nCode I used.",
"username": "multiface_biometria"
},
{
"code": "",
"text": "Do you have an example of the error you are seeing when the app crashes? Also, it would help if you could share your schema if possible.",
"username": "Sudarshan_Muralidhar"
},
{
"code": "{\n \"title\": \"AnalysisModel\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"analysisDate\": {\n \"bsonType\": \"string\"\n },\n \"idImage\": {\n \"bsonType\": \"string\"\n },\n \"cpf\": {\n \"bsonType\": \"string\"\n },\n \"clientId\": {\n \"bsonType\": \"string\"\n },\n \"deviceId\": {\n \"bsonType\": \"string\"\n },\n \"elementId\": {\n \"bsonType\": \"string\"\n },\n \"error\": {\n \"bsonType\": \"string\"\n },\n \"partitionId\": {\n \"bsonType\": \"string\"\n },\n \"latitude\": {\n \"bsonType\": \"double\"\n },\n \"longitude\": {\n \"bsonType\": \"double\"\n },\n \"altitude\": {\n \"bsonType\": \"double\"\n },\n \"speed\": {\n \"bsonType\": \"double\"\n },\n \"sendDate\": {\n \"bsonType\": \"string\"\n }\n }\n}\n",
"text": "Thanks for the feedback.My question is: when I run offlineUser userRealm = app.login(credentials);the app crashes.that’s why I create Realm previousLocalDB = Realm.getDefaultInstance();\nfor when I start offline.My schema",
"username": "multiface_biometria"
},
{
"code": "",
"text": "I would like to know how to create a SyncConfiguration offline, because when I run a User with app.currentUser(); does not work",
"username": "multiface_biometria"
},
{
"code": "",
"text": "You have to be online to log in and create the user object. After that, the user is cached and you can use the realm offline",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "this has not helped he is trying to indicate that it is impossible to run sync configuration in the app class since it needs the user to login in",
"username": "BIASHARA_KIT"
},
{
"code": "app.currentUser",
"text": "Hi @BIASHARA_KIT,Let’s put the things in perspective, so that the answer is clear:",
"username": "Paolo_Manna"
}
] | How can I implement offline and online sync? | 2022-10-20T19:50:18.783Z | How can I implement offline and online sync? | 4,273 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hi Guys,Let’s say I have two collections:I want to run a batch to populate the User Entries with, let’s say, the first 50 random data from the Community Entries. However, on the following runs, I would like to query and add only entries that haven’t been added to the User Entries.In SQL, I used to create a left join, and set the other collection field == to null, which I would get only the remaining item. What would be the equivalent in MongoDB?",
"username": "Thiago_Bernardes1"
},
{
"code": "",
"text": "I was wondering about using $nin and the array of IDs already set to the user, but my concern is that this array will grow daily, and the $nin is going to be pretty big.",
"username": "Thiago_Bernardes1"
},
{
"code": "",
"text": "that is a problem of schema design: embedding or referencing.if your documents are, and will stay, small enough, then prefer embedding all related data in one place.one or more data fields are big in size or grow over time, give them their own document and put references in others.this referencing is similar to relations in SQL, in this case the “left join” you refer. and the equivalent is a “$lookup” operation in an “aggregation” pipeline. (please search these keywords).",
"username": "Yilmaz_Durmaz"
},
{
"code": "\n\n\tconst $match = {\n\t\tuserId: {\n\t\t\t$ne: userId\n\t\t},\n\t\tisBlocked: false\n\t}\n\n\tconst promptsCollection = this.getCollectionName('promo-tbc-prompt-entries');\n\n\tconst $lookupPrompts = {\n\t\tfrom: promptsCollection,\n\t\tlet: {\n\t\t\tpromptUserId: '$userId',\n\t\t\tpromptId: '$_id'\n\t\t},\n\t\tpipeline: [\n\t\t\t{\n\t\t\t\t$match: {\n\t\t\t\t\t$expr: {\n\t\t\t\t\t\t$and: [\n\t\t\t\t\t\t\t{ $eq: ['$promptUserId', '$$promptUserId'] },\n\t\t\t\t\t\t\t{ $eq: ['$promptId', '$$promptId'] },\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t\tas: 'prompts'\n\t}\n\n\tconst $project = {\n\t\t_id: 0,\n\t\tid: '$_id',\n\t\tuserId: 1,\n\t}\n\n\treturn this.aggregate()\n\t\t.match($match)\n\t\t.lookup($lookupPrompts)\n\t\t.append({\n\t\t\t$set: {\n\t\t\t\thasEntries: { '$ne': [{ '$size': '$prompts' }, 0] }\n\t\t\t}\n\t\t})\n\t\t.match({\n\t\t\thasEntries: false\n\t\t})\n\t\t.project($project)\n\t\t.sample(4)\n\t\t;\n\n",
"text": "Hi Yilmaz,As mentioned, the collection will grow a lot - I understand that the NoSQL DB can handle embedded documents; however, you need to be very careful when using it.When you have lots of transactions like reading and writing the embedded array, the document itself will grow drastically. In this scenario, the embedded document is not a good approach for performance.Despite being a non-relational database, it costs much less if you add those references in another collection and need to access it often, runs consolidations, etc.Find below the query that I have created and works exactly life the “left join == null”:I had to append a new column called “hasEntries” and retrieve the size of its prompts, and next I had to add another match to return only those that had size Zero.Maybe it’s not the best performance yet, however this saves the document grow with embedded arrays.",
"username": "Thiago_Bernardes1"
},
{
"code": "",
"text": "Yep, there are many ways to do the same thing in MongoDB, so it is possible to have a better and faster query.By the way, you wrote “populate the User Entries with, let’s say, the first 50 random data from the Community Entries”, so you have a size limit on this array. It is just not clear when you are populating it; always or only when you read!? this may also change how you store data.If you are still developing, I believe you may have a different angle on the problem if you can state it in SQL as you used to have. MongoDB, in the referencing sense, can be a mirror to SQL. And then, explore how embedding may change that behavior.As you may appreciate, it is not always easy to understand what is asked. If you can provide some example data and your SQL query, Forum readers may find another solution to your problem which, in this case, requires you un-mark “solution” from your above post.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks Yilmaz,No worries, I’ve just tried to summarize the scenario as changing the data structure is not an option.There is a reason for the data to be structured in that way, and I was just looking for this similar approach to SQL.The challenge I’ve been always facing is when doing a lookup for an exact match 1:1, if I unwind the data that specific field is null/undefined, however I couldn’t find a way to match this scenario neither $eq: [prop, null] nor $eq: [prop, undefined]The way I’ve found is to leave it as an array, without unwinding, and match with the length of the array which would be zero or 1.Thanks for taking time on it, I really appreciated your suggestion, I was just trying to share something that I hear very often by developers, trying to embed everything in the document, and in a long term find performance issues.",
"username": "Thiago_Bernardes1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Query (equivalente to left join == null) | 2023-01-28T11:13:37.116Z | Query (equivalente to left join == null) | 825 |
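For readers who want the pattern from this thread spelled out in plain mongosh, here is a minimal sketch of the “left join where the right side is null” equivalent: a $lookup into the already-added entries followed by a $match on an empty array. The collection names, field names, and the hard-coded user id are assumptions for illustration, not taken from the original application.

```javascript
// mongosh sketch: find community entries that a given user has NOT added yet.
db.communityEntries.aggregate([
  {
    // Pull in any user entries that reference this community entry for the user.
    $lookup: {
      from: "userEntries",                 // assumed collection name
      let: { entryId: "$_id" },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                { $eq: ["$entryId", "$$entryId"] },  // assumed field names
                { $eq: ["$userId", "someUserId"] }
              ]
            }
          }
        }
      ],
      as: "alreadyAdded"
    }
  },
  // Keep only entries with no match - the SQL "LEFT JOIN ... WHERE right IS NULL".
  { $match: { alreadyAdded: { $size: 0 } } },
  // Pick a random batch, as in the original batch-population requirement.
  { $sample: { size: 50 } }
])
```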
null | [
"atlas-triggers"
] | [
{
"code": "changeEventchangeEvent.updateDescription.updatedFieldsupdatedFields{\n \"firstName\": \"John\",\n \"lastName\": \"Depp\",\n \"children\": [\n { \"age\": 20},\n { \"age\": 23},\n ]\n}\nfirstNameupdatedFields{\n \"firstName\": \"Johnny\",\n \"childrens.0.name\": 20,\n \"childrens.1.name\": 23,\n}\nlastName5.0.14",
"text": "Hi.\nIm using triggers as a way to audit actions on my collections.\nI recently noticed a weird behaviour regarding to the changeEvent object.\nIm getting the updated fields (on update operation) from changeEvent.updateDescription.updatedFields.\nIt usually works as expected, but only on the first update of the document, Im getting extra data in updatedFields that did not get updated, specifically elements inside an array of objects.example document:If I will update the field firstName AND its the first update of this document, the updatedFields will contain:(notice that lastName was not included)I made sure that the query of first update was the same as the rest of the update queries.Im using mongodb version 5.0.14 on mongoDb Atlas",
"username": "Lidor_Shoshani"
},
{
"code": "updatedFields{\n \"firstName\": \"Johnny\",\n \"childrens.0.name\": 20,\n \"childrens.1.name\": 23,\n}\nchildrensnameupdatedFieldsagename",
"text": "Hi @Lidor_Shoshani - Welcome to the community!Thanks for clarifying the environment and MongoDB version I assume you’ve created a Database Trigger for this since it’s on Atlas but please correct me if i’m wrong here.It usually works as expected, but only on the first update of the document, Im getting extra data in updatedFields that did not get updated, specifically elements inside an array of objects.Would you be able to provide the specific update command used?Additonally, I noticed the childrens field containing objects with the name field present in the updatedFields value you provided but not in the original example document. Is this part of the update or was this supposed to be age instead of name?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "agenameage",
"text": "was this supposed to be age instead of name ?Thats my mistake, it’s supposed to be age.I assume you’ve created a Database Triggercorrect.Would you be able to provide the specific update command used?As we are using orm in our app (prisma) the update does include the whole document (and its a bit complicated), however, I did check and the update command is basically the same except the field that I’m updating (and updatedAt that we have), so there should not be a difference between the first update and the rest of them.",
"username": "Lidor_Shoshani"
}
] | Weird triggers behaviour | 2023-01-29T15:08:27.696Z | Weird triggers behaviour | 1,160 |
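As a reference point for the auditing setup discussed in this thread, the sketch below shows a minimal Atlas database trigger function that records updateDescription.updatedFields for each update event. The audit database and collection names are assumptions; the sketch only shows where the updated-fields data comes from and does not explain the extra array paths reported on the first update.

```javascript
// Minimal database trigger function sketch for auditing updates.
exports = async function (changeEvent) {
  if (changeEvent.operationType !== "update") {
    return;
  }

  // Fields the change event reports as modified, including dotted array
  // paths such as "children.0.age".
  const { updatedFields, removedFields } = changeEvent.updateDescription;

  const audit = context.services
    .get("mongodb-atlas")
    .db("audit")             // assumed database name
    .collection("changes");  // assumed collection name

  await audit.insertOne({
    documentId: changeEvent.documentKey._id,
    updatedFields,
    removedFields,
    at: new Date()
  });
};
```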
null | [
"node-js",
"atlas-functions"
] | [
{
"code": "exports = function() {\n const axios = require(\"axios\");\naxios.get('https://api.github.com/users/PowellTravis/repos?per_page=100&page=1')\n .then(function (response) {\n onSuccess(response)\n })\n .catch(function (error) {\n console.log(error);\n });\n}\nfailed to execute source for 'node_modules/axios/index.js': FunctionError: failed to execute source for 'node_modules/axios/lib/axios.js': FunctionError: failed to execute source for 'node_modules/axios/lib/core/Axios.js': FunctionError: failed to execute source for 'node_modules/axios/lib/core/dispatchRequest.js': FunctionError: failed to execute source for 'node_modules/axios/lib/adapters/adapters.js': FunctionError: failed to execute source for 'node_modules/axios/lib/adapters/http.js': TypeError: Cannot access member 'Z_SYNC_FLUSH' of undefined\n at node_modules/axios/lib/adapters/http.js:36:10(187)\n\n at require (native)\n at node_modules/axios/lib/adapters/adapters.js:16:44(41)\n\n at require (native)\n at node_modules/axios/lib/core/dispatchRequest.js:20:48(77)\n\n at require (native)\n at node_modules/axios/lib/core/Axios.js:18:55(63)\n\n at require (native)\n at node_modules/axios/lib/axios.js:17:45(51)\n\n at require (native)\n at node_modules/axios/index.js:22:45(73)\n",
"text": "I am trying to use axios in Functions. The package installs successfully but when I want to use it, I get following errors. I installed the latest axios version. But which version is NodeJS within Functions? Or what am I missing?Here my basic function:and here the error:",
"username": "borabora"
},
{
"code": "axiosv1.2.0",
"text": "Hi @borabora,App Services Functions don’t run in a Node.js process, but in a specific environment that emulates, as close as possible, Node.js v10, so it’s very likely that the latest axios package wouldn’t be compatible.Have you tried older versions? Some users have reported v1.2.0 as working, YMMV.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "what is the correct way to make requests? if i try to use “fetch” the response is “fetch is not define” and with axios 1.2.0 the response if i try to make a request with \"await axios.get(“url”) iserror:\n{“message”:\"‘get’ is not a function\", “name”: “TypeError”}",
"username": "ruben_martin_acebedo"
},
{
"code": "exports = async function(arg) {\n const axios = require('axios').default;\n\n const response = await axios.get('https://jsonplaceholder.typicode.com/posts/1', {\n responseType: 'json',\n headers: {\n \"Accept\": \"application/json\",\n \"Accept-Encoding\": \"deflate\"\n }\n });\n \n return response.data;\n};\n",
"text": "Hi @ruben_martin_acebedo,The following works:",
"username": "Paolo_Manna"
},
{
"code": "{\n \"error\": \"failed to execute source for 'node_modules/axios/index.js': FunctionError: failed to execute source for 'node_modules/axios/lib/axios.js': FunctionError: failed to execute source for 'node_modules/axios/lib/core/Axios.js': FunctionError: failed to execute source for 'node_modules/axios/lib/core/dispatchRequest.js': FunctionError: failed to execute source for 'node_modules/axios/lib/adapters/adapters.js': FunctionError: failed to execute source for 'node_modules/axios/lib/adapters/http.js': TypeError: Cannot access member 'Z_SYNC_FLUSH' of undefined\\n\\tat node_modules/axios/lib/adapters/http.js:36:10(187)\\n\\n\\tat require (native)\\n\\tat node_modules/axios/lib/adapters/adapters.js:16:44(41)\\n\\n\\tat require (native)\\n\\tat node_modules/axios/lib/core/dispatchRequest.js:20:48(77)\\n\\n\\tat require (native)\\n\\tat node_modules/axios/lib/core/Axios.js:18:55(63)\\n\\n\\tat require (native)\\n\\tat node_modules/axios/lib/axios.js:17:45(51)\\n\\n\\tat require (native)\\n\\tat node_modules/axios/index.js:22:45(73)\\n\",\n \"error_code\": \"FunctionExecutionError\",\n \"link\": \"https://realm.mongodb.com/groups/63c732d0ff601149806c78ea/apps/63c927c66cfbe1353c093075/logs?co_id=63d5cb15af6f2c4ff6011455\"\n}\n",
"text": "I tried to run this and got this error just to test axios\n’",
"username": "Tim_Carrender"
},
{
"code": "axios1.2.0",
"text": "Hi @Tim_Carrender,Which version of axios have you uploaded as dependency? As specified above, the sample code I provided works for 1.2.0, other versions may not work within the current Function environment.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Use axios in Functions. Throws errors | 2023-01-25T10:56:47.013Z | Use axios in Functions. Throws errors | 3,792 |
null | [
"data-modeling",
"transactions"
] | [
{
"code": "You don't want to create an unbounded array in your schema.f your application frequently retrieves the transactions data with the customer's information, you should consider embedding so that your application doesn't need to issue multiple queries to resolve the references.",
"text": "In the quiz for lesson 3 on Modelling Data Relationships, I have trouble understanding the difference between a wrong and a correct answer.Which of the following are valid ways to represent this one-to-many relationship between the customer and their transactions?Wrong answer: ‘Then document contains […] an array that holds all their transactions.’Correct answer: ‘Embed the transactions for each customer as an array of subdocuments in the corresponding customer document’.As an explanation for why the first is wrong, it says that You don't want to create an unbounded array in your schema..As an explanation for why the second is correct, it says that f your application frequently retrieves the transactions data with the customer's information, you should consider embedding so that your application doesn't need to issue multiple queries to resolve the references., with an additional warning about the 16MB document size limit.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.I don’t get it. Both are unbound arrays? As more transactions are recorded, the array will grow?What am I missing here?",
"username": "Vegar_Vikan"
},
{
"code": "",
"text": "Nothing?\nNobody knows?",
"username": "Vegar_Vikan"
},
{
"code": "",
"text": "which course is this one? provide us a link to see what those questions have in mind.But, basically, you would embed most used relations and separate others into corresponding collections. for example, transactions from the last 7 to 30 days are fit to be (also) embedded if their numbers are not excessive. or top 10 reviews of a product is also fit to embed. on the other hand, while you can embed today’s weather data with hourly intervals, ever-growing 15-minute weather data is not suitable to embed. you may embed a patient’s weekly heart statistics (as long as it is small enough), but not hourly readings.so, the contexts of those questions are important.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "The question asks for valid ways to connect customers with their transactions.\nI can’t really see the difference between ‘customer document contains an array that holds all their transactions’ and ‘embed the transactions as an array in the customer document’.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.",
"username": "Vegar_Vikan"
},
{
"code": "",
"text": "Thanks for the link. Please include this link in your first question post so others won’t need to scroll down to find it, and also state it is the 2nd question.I also agree there is a vagueness in the answers. besides having an array of objects, both answers also refer to the same limitation with two terms: unbounded array and document size limit of 16MB. I am guessing they had something else in mind but this slipped into the result.@Kushagra_Kesav, I would appreciate it if you tag someone from the “learn” team.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hey @Vegar_Vikan,Thanks for bringing this to our attention. I’ll raise this question with the concerned team.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "¯\\_(ツ)_/¯",
"text": "Please include this link in your first question post so others won’t need to scroll down to find itSorry - can’t find a way to edit my posts ¯\\_(ツ)_/¯Edit: And by that I gained edit-privileges \nEdit2: Nope I did not. Just for my most recent post…",
"username": "Vegar_Vikan"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lesson 3 - Embedded array - ain't wrong and correct answer the same? | 2023-01-20T11:54:52.310Z | Lesson 3 - Embedded array - ain’t wrong and correct answer the same? | 1,478 |
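To make the embedding-versus-referencing distinction behind this quiz question concrete, here is a small illustrative sketch; the documents and field names are invented, not taken from the course. An embedded transactions array is convenient while it stays small and bounded, whereas unbounded growth eventually runs into the 16 MB document limit, which is what pushes the design toward a referenced transactions collection.

```javascript
// Embedded design: transactions live inside the customer document.
// Reasonable only while the array stays small and bounded.
const customerEmbedded = {
  _id: "cust1",
  name: "Ada",
  transactions: [
    { amount: 120, at: new Date("2023-01-05") },
    { amount: 60, at: new Date("2023-01-12") }
  ]
};

// Referenced design: each transaction is its own document, so the customer
// document does not grow as transactions accumulate over time.
const customerReferenced = { _id: "cust1", name: "Ada" };
const oneTransaction = {
  _id: "tx1",
  customerId: "cust1", // reference back to the customer
  amount: 120,
  at: new Date("2023-01-05")
};
```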
null | [
"aggregation",
"queries",
"data-modeling"
] | [
{
"code": "userfriendfriendfriendfrom$lookupuserfriend",
"text": "There are 2 collections in my app, user and friend. The friend collection will be for storing the the friendship relationship between the users of the app.In a relational database, the friend collection will have one row for each friendship, which involves 2 users.In MongoDB, I am advised to store all of a user’s friend in an array field.What is wrong with storing one document per friendship? Is there really no scenario where this schema would be preferred over storing friends in an array?The goal is fast queries. The friend collection will be used as the from collection in a $lookup stage of a find query on the user collection. And the friend collection will have compound index on the friends’ user ids.What is wrong with this schema design?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Hey @Big_Cat_Public_Safety_Act,In MongoDB, I am advised to store all of a user’s friend in an array field.A general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. Hence, you may have been advised to store all of a user’s friends in an array field.What is wrong with storing one document per friendship? Is there really no scenario where this schema would be preferred over storing friends in an array?Storing one document per friendship in MongoDB can lead to an increased number of write operations and result in performance degradation over time, as the size of the collection grows. This is because, with each new friendship, two new documents will have to be inserted and updated.Using an array to store friends is more efficient, as a single update operation can be used to add or remove a friend.Additionally, if you have a large number of friends for a single user, storing them in an array can result in exceeding the document size limit, leading to further performance issues. MongoDB has a hard limit of 16MB per document, thus any schema design that can have a document growing indefinitely will hit this 16MB limit sooner or later. In this case, it may be better to implement a separate collection for friends and use a referenced relationship instead, and then you can use the $lookup approach that you mentioned.In conclusion, the array-based schema is preferred for storing friendships in MongoDB for fast queries, but one document per friendship may be a better choice for certain scenarios, such as if you have a large number of friends for a single user. It ultimately depends on the specific requirements and constraints of your project. I have a similar post where we have discussed some of the things to keep in mind while deciding schema and whether to favor embedding or to use references: Schema design: Many-to-many relationships and normalization - #3 by SatyamYou can further read the following documentation to cement your knowledge of Schema Design in MongoDB.\nData Model Design\nFactors to consider when data modeling in MongoDBPlease let us know if this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Modelling N-to-N relationship with one document per relationship | 2023-01-27T16:01:43.098Z | Modelling N-to-N relationship with one document per relationship | 1,814 |
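For a concrete view of the array-based design recommended in the reply above, the mongosh sketch below keeps a friends array of user ids on each user document and uses a single $lookup to pull the friend profiles when they are needed. The collection name, field names, and sample id are assumptions for illustration only.

```javascript
// Assumed user document shape: { _id: "userA", name: "...", friends: ["userB", "userC"] }
db.user.aggregate([
  { $match: { _id: "userA" } },
  {
    // $lookup with an array-valued localField matches each element of "friends".
    $lookup: {
      from: "user",
      localField: "friends",
      foreignField: "_id",
      as: "friendProfiles"
    }
  },
  { $project: { name: 1, "friendProfiles._id": 1, "friendProfiles.name": 1 } }
])
```

Adding or removing a friendship is then a single $addToSet or $pull on the friends array, which is the write-efficiency point made in the reply.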
null | [
"aggregation",
"data-modeling",
"compass",
"golang"
] | [
{
"code": "parsing time \\\"\\\\\\\"2023-01-10T08:56:02\\\\\\\"\\\" as \\\"\\\\\\\"2006-01-02T15:04:05Z07:00\\\\\\\"\\\": cannot parse \\\"\\\\\\\"\\\" as \\\"Z07:00\\\"type MyDate time.timefunc (v *MyDate) UnmarshalJSON(b []byte) error {\n parsedTime, err := time.Parse(\"\\\"2006-01-02T15:04:05\\\"\", string(b))\n if err != nil {\n return err\n }\n *v = MyDate(parsedTime)\n return nil\n}\ntype MyData struct {\n FieldS string\n FieldI int64\n FieldD *MyDate\n}\nfunc (v FcsDate) MarshalBSON() ([]byte, error) {\n return bson.Marshal(map[string]map[string]interface{}{\n \"$date\": {\"$numberLong\": fmt.Sprintf(\"%v\", time.Time(v).UnixMilli())},\n })\n}\n",
"text": "HiWe have to import several different JSON Exports into MonogDB using Go. What we do is fetching the JSON from a database, unmarshalling it into a struct and save the struct into a Mongo collection. This works. The problem I struggle is the handling of the dates. The dates are stored in the format “YYYY-MM-DDTHH:MM:SS” in the incoming JSON. When using the normal unmarshal function it claims with a parsing time error “parsing time \\\"\\\\\\\"2023-01-10T08:56:02\\\\\\\"\\\" as \\\"\\\\\\\"2006-01-02T15:04:05Z07:00\\\\\\\"\\\": cannot parse \\\"\\\\\\\"\\\" as \\\"Z07:00\\\"”.For this reason, I created then an own typetype MyDate time.timeand a implmented the UnmarshalJSON interface.So far so good, I got my date in my Go structWhen I now try to save this struct to MongoDB, it creates an empty Object for MyDate. Therefore I implmented the MarshalBSON interface:This works, and my date is stored in the collection, but as it seems, it’s not really a date. When checking with Compass, it’s shown as an object, when checking with Atlas it’s shown as a date. When trying to project it using an aggregation pipeline it claims with “can’t convert from BSON type object to Date”. When I look into another collection from another running application holding a date, then the JSON representation is the same {$date:{$numberLong: nnn}}What am I doing wrong? I could use string instead of a date, and then everything works technically fine, but my date is still not a date, it’s a string. What solution would you suggest to solve this issue?",
"username": "Meinrad_Hermanek"
},
{
"code": "{\"$date\": ... }time.TimeMarshalBSON$dateMarshalBSONMarshalBSONMarshalBSONValuetime.Timefunc (v FcsDate) MarshalBSONValue() (bsontype.Type, []byte, error) {\n\treturn bson.MarshalValue(time.Time(v))\n}\n",
"text": "@Meinrad_Hermanek thanks for posting and welcome!The implementation you posted has two issues:To resolve those two issues, replace MarshalBSON with MarshalBSONValue and return the default BSON field encoding for a Go time.Time value:See an example on the Go Playground here.",
"username": "Matt_Dale"
},
{
"code": "",
"text": "Many thanks, this solves my issue!",
"username": "Meinrad_Hermanek"
},
{
"code": "func (v *FcsDate) UnmarshalBSONValue(t bsontype.Type, b []byte) (err error) {\n\tlog.Printf(\"UnmarshalBSONValue: %v, %v\", t, b)\n\tts := int64(binary.BigEndian.Uint64(b))\n\t// ts is in nanoseconds, so we convert it to seconds\n\t*v = FcsDate(time.Unix(ts/1000/1000/1000, (ts%1000)*1000000).UTC())\n\tlog.Printf(\"FcsSate is: %v\", *v)\n\treturn nil\n}\n",
"text": "@Matt_Dale thanks again for your hint.Now, when I try to Unmarshal this bson value back into my FcsDate, I’m struggling again. I tried following:This returns a date, but it’s wrong. For example it returns “1970-06-06T09:59:31.880Z” instead of “1997-05-23T00:00:00.000+00:00”.Do you have any advise?RegardsMeinrad",
"username": "Meinrad_Hermanek"
},
{
"code": "time.Timefunc (v *MyDate) UnmarshalBSONValue(t bsontype.Type, b []byte) error {\n\trv := bson.RawValue{\n\t\tType: t,\n\t\tValue: b,\n\t}\n\n\tvar res time.Time\n\tif err := rv.Unmarshal(&res); err != nil {\n\t\treturn err\n\t}\n\t*v = MyDate(res)\n\n\treturn nil\n}\nUnmarshalValuebsonValueUnmarshalerUnmarshalValue",
"text": "Hey @Meinrad_Hermanek thanks for the follow-up question! To unmarshal the BSON value, create a bson.RawValue and use that to unmarshal the bytes into a Go time.Time value:See an example on the Go Playground here.P.S. I’m surprised that there isn’t a corresponding UnmarshalValue function in the bson package. That seems like an oversight and definitely makes satisfying the ValueUnmarshaler interface a lot less intuitive. There is an open Jira ticket for adding an UnmarshalValue function (see GODRIVER-1892) that I will suggest the Go driver team prioritize (I work on the Go driver team).",
"username": "Matt_Dale"
},
{
"code": "",
"text": "Hey @Matt_Dale many thanks for this solution. If I could vote for the Jira you mentioned, I would.",
"username": "Meinrad_Hermanek"
}
] | Date format Handling from JSON to BSON using GO | 2023-01-10T09:48:54.463Z | Date format Handling from JSON to BSON using GO | 5,491 |
null | [
"queries",
"java",
"spring-data-odm"
] | [
{
"code": "String mongodbURL = env.getProperty(\"spring.data.mongodb.uri\");\n\t\t com.mongodb.client.MongoClient client = MongoClients.create( mongodbURL);\n\n\t MongoDatabase mongoDatabase = client.getDatabase(\"EDD\");\n\t MongoCollection<org.bson.Document> coll = mongoDatabase.getCollection(\"ZipStoreDistanceData\");\n\t \n\t\tMongoCollection<StoreListForZip> zipCodeMongoCollection = mongoDatabase.getCollection(\"ZipStoreDistanceData\",\n\t\t\t\tStoreListForZip.class);\n\t\tBson filter = and(eq(\"_id\", zipcode), lte(\"storeList.distanceInMiles\", distance));\t\t\n\t\tStoreListForZip storeListForZip = zipCodeMongoCollection.find(filter).first();\nStoreListForZip storeListForZip = zipCodeMongoCollection.find(filter).first();Caused by: org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class com.blah.blah.dto.StoreListForZip```\n\n{\"_id\":\"US_00721\",\"countryCode\":\"US\",\"countryZip\":\"US_00721\",\"storeList\":[{\"distanceInMiles\":{\"$numberInt\":\"20\"},\"storeNum\":\"11010\"},{\"distanceInMiles\":{\"$numberInt\":\"20\"},\"storeNum\":\"21111\"},{\"distanceInMiles\":{\"$numberInt\":\"17\"},\"storeNum\":\"31176\"},{\"distanceInMiles\":{\"$numberInt\":\"20\"},\"storeNum\":\"33060\"},{\"distanceInMiles\":{\"$numberInt\":\"17\"},\"storeNum\":\"51176\"},{\"distanceInMiles\":{\"$numberInt\":\"20\"},\"storeNum\":\"53060\"}]}```\n@Data\n@Getter\n@Setter\n@AllArgsConstructor\n@NoArgsConstructor\n@Builder\npublic class StoreListForZip {\n\n\t@JsonProperty(\"id\")\n\t@Id\n\tString id;\n\n\t@JsonProperty(\"countryZip\")\n\tString countryZip;\n\n\t@JsonProperty(value = \"countryCode\")\n\tString countryCode;\n\n\t@JsonProperty(value = \"storeList\")\n\tList<StoreDistance> storeList;\n\n\t\n\n}```",
"text": "I am getting the following exception while trying to fetch data from Mongo DB using a Java application.The line StoreListForZip storeListForZip = zipCodeMongoCollection.find(filter).first();\nis the one which throws exception.\nThe exception is belowmy document looks like below in Mongo DBany insight into why the Java application is throwing this exception ?my POJO class looks like below.",
"username": "ivin_jacob"
},
{
"code": "Caused by: org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class com.blah.blah.dto.StoreListForZip```\n",
"text": "Hi @ivin_jacob,This error message is indicating that there is a problem with the MongoDB codec configuration for a class called “StoreListForZip”. The exception is saying that it can’t find a codec (i.e. a class responsible for encoding and decoding instances of the class to and from BSON) for this class. To resolve this issue, you need to either define your own decoding logic or use a different class that already has a codec registered.For Quick Reference check out this article: Java - Mapping POJOs | MongoDBLink to MongoDB Java Driver: Quick Start - POJOsI hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
}
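For reference, a sketch of the codec registration described in the linked POJO guide, reusing the class and collection names from the question; this is the standard driver pattern, not code posted in the thread:

```java
import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;

import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;

// Add automatic POJO encoding/decoding on top of the default codecs.
CodecRegistry pojoCodecRegistry = fromRegistries(
        MongoClientSettings.getDefaultCodecRegistry(),
        fromProviders(PojoCodecProvider.builder().automatic(true).build()));

// Attach the registry before asking for a typed collection.
MongoDatabase mongoDatabase = client.getDatabase("EDD").withCodecRegistry(pojoCodecRegistry);
MongoCollection<StoreListForZip> zipCodeMongoCollection =
        mongoDatabase.getCollection("ZipStoreDistanceData", StoreListForZip.class);
```

The POJO also needs a public no-argument constructor and conventional getters/setters, which the Lombok annotations in the question should already generate.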
] | Unable to query data set from Mongo DB | 2023-01-29T05:35:03.624Z | Unable to query data set from Mongo DB | 1,510 |
[
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "HelloRecently (last 2 days) we’ve had our application go down and become inaccessible for about 15-30 minutes.It seems like it coincides with the time that a new primary is elected.Looking back through the history, it seems like the swap over usually happens instantly. I see no significant errors in my application during the swap over.The red line here indicates the start of the errors occurring on my backend. You can see the traffic is slowly being bled from the (old) primary. It takes about 30 minutes until eventually the new primary is elected and things resume normallyFor this window of time our application is unable to talk to process requests. All requests timeout for this window of time.\nimage1548×594 48.9 KB\nIt eventually sorted it self out. I put our application into maintenance mode about 20 minutes into the situation which reduced the number of requests to the backend to zero. It seems like this gave it a chance to swap over properly?I am just wondering if this is expected, if I could get some insights into this problem. I am not sure how to diagnose the problem. Everything is fine, until its not, and then in about 15-30 minutes a new primary is elected and then everything is fine again. That’s all I’ve been able to figure out.We have a NestJS application on Heroku, we’re using Mongoose to interact with the database.",
"username": "Tim_Aksu"
},
{
"code": "",
"text": "Noticed that the CPU steal is really high around the start of the problem we see. I cant tell if this is a symptom of the problem of the problem itself. Not sure. Help would be appreciated, thanks.\nimage1560×715 66.1 KB\n",
"username": "Tim_Aksu"
},
{
"code": "Normalized System CPUOpcountersSystem: CPU (Steal) % isSystem: CPU (Steal) % isM10M20",
"text": "Hi @Tim_Aksu,Noticed that the CPU steal is really high around the start of the problem we see. I cant tell if this is a symptom of the problem of the problem itself. Not sure. Help would be appreciated, thanks.Based off the details on this post (I assume these metrics are Normalized System CPU and Opcounters for two nodes), it seems like the CPU increase correlates with the operation count increase which is probably what we expect here. However, as you have mentioned the issue appears to occur when “CPU steal” occurs (I presume everything is okay prior to this but correct me if i’m wrong here). As per the Fix CPU usage issues documentation (specifically regarding CPU steal): System: CPU (Steal) % is occurs when the CPU usage exceeds the guaranteed baseline CPU credit accumulation rate by the specified threshold. For more information on CPU credit accumulation, refer to the AWS documentation for Burstable Performance Instances.Note: The System: CPU (Steal) % is alert is applicable when the EC2 instance credit balance is exhausted. Atlas triggers this alert only for AWS EC2 instances that support Burstable Performance Instances. Currently, these are M10 and M20 cluster types.Based off the details above, it’s likely that your cluster’s cpu usage is exceeding the guaranteed baseline CPU credit accumulate rate (in which it will need to utilise burstable performance) up until a point all the CPU credit balance is exhausted in which CPU steal starts occuring (assuming the same workload continues).Of course this is just my assumption based only on the data available on this post. The following CPU steal details may be useful to you too. However, I would recommend you contacting the Atlas in-app chat support as they have more insight into your cluster. It might be that you may need to optimize your workload (where possible) or test a higher tier cluster (M30 for example) to see if the primary swapping issue still occurs.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Errors thrown during primary swap over, application down 15-30 mins per day? | 2023-01-23T00:57:47.401Z | Errors thrown during primary swap over, application down 15-30 mins per day? | 815 |
|
null | [
"react-native"
] | [
{
"code": "<AppProvider id={process.env.REALM_APP_ID}>\n <UserProvider fallback={<SignIn />}>\n <RealmProvider sync={{\n flexible: true,\n onError: error => console.error(error)\n }}>\n <App />\n </RealmProvider>\n </UserProvider>\n</AppProvider>\nconst data = useQuery(Data)\nconsole.log(data.length)\n\n==> 0\n",
"text": "Hi,I am working on a React Native app, but unfortunately the synchronization between the app and Atlas is not working. The structure of my app follows exactly the template: https://www.mongodb.com/docs/realm/sdk/react-native/bootstrap-with-expo/Nevertheless, I don’t get any results with useQuery(), although records are available in Atlas.I am on:In MongoDB Compass I can see that a collection “state_data” has been created in the database “_realm_sync…”. Why can’t I access the data? What can be the reason for this?Many greetings",
"username": "mazdoa"
},
{
"code": "",
"text": "Quick check: Have you followed the steps in this guide and changed e.g. the App ID (generating via realm-cli should get this right automatically)?",
"username": "otso"
},
{
"code": "",
"text": "I was able to solve the issue by subscribing to the object:realm.subscriptions.update(mutableSubs => {\nmutableSubs.add(realm.objects(…))\n})",
"username": "mazdoa"
},
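For later readers, a sketch of the same fix in context; the Data class comes from the question's useQuery(Data) call, while the hook-based component structure is an assumption:

```javascript
import React, { useEffect } from 'react';
import { useRealm, useQuery } from '@realm/react';

function DataList() {
  const realm = useRealm();
  const data = useQuery(Data); // Data is the Realm object class from the question

  useEffect(() => {
    // Flexible Sync only syncs objects matched by an active subscription,
    // so register one for the Data collection when the component mounts.
    realm.subscriptions.update(mutableSubs => {
      mutableSubs.add(realm.objects(Data));
    });
  }, [realm]);

  return null; // render `data` here
}
```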
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm does not synchronize records | 2023-01-19T01:06:44.015Z | Realm does not synchronize records | 990 |
null | [
"node-js",
"field-encryption"
] | [
{
"code": "const key = await encryption.createDataKey(provider, {\n masterKey,\n // keyAltNames: [credentials.GCP_KEY_NAME],\n});\nTypeError: error constructing KMS message: Failed to create GCP oauth request signature: error:0D0680A8:asn1 encoding routines:asn1_check_tlen:wrong tag\n at ClientEncryption.createDataKey (c:\\DEV\\poc-data-encryption\\node_modules\\mongodb-client-encryption\\lib\\clientEncryption.js:243:40)\n at main (c:\\DEV\\poc-data-encryption\\src\\setup-key.js:72:34)\n at processTicksAndRejections (node:internal/process/task_queues:96:5) {stack: 'TypeError: error constructing KMS message: Fa…ions (node:internal/process/task_queues:96:5)', message: 'error constructing KMS message: Failed to cr… encoding routines:asn1_check_tlen:wrong tag'}\n",
"text": "Hi!\nI’m trying to do a POC to work with MongoDB CSFLE using the tutorial code here (for node.js):main/csfle/node/gcp/reader/In-use encryption sample applications. Contribute to mongodb-university/docs-in-use-encryption-examples development by creating an account on GitHub.A)\nAfter some research, almost works great, but when I reach this last code:It throws an error:I don’t know what to do, since we don’t have so much help online.\nWhere is the problem? My device, MongoDB, node.js driver or GCP?More details:\nWindows 11\nNode.js 16.13.0\nAtlas M0 5.0.14\n“mongodb”: “^4.13.0”,\n“mongodb-client-encryption”: “^2.3.0”B)\nThe documentation about CSFLE don’t talk about required user permissions.\nMy current permissions are:MongoDB User for key setup:\nreadWrite@encryption.__keyVaultMongoDB user for the application:\nread@encryption.__keyVaultGCP Service Account for setup:\nAPI Keys Admin\nCloud KMS Admin\nCloud KMS CryptoKey Encrypter/Decrypter\nCloud KMS CryptoKey Signer/Verifier\nCloud KMS Viewer\nTag User\nViewerGCP Service Account for the application:\n(I didn’t reach this step yet) What should be?",
"username": "Gabriel_Anderson"
},
{
"code": "",
"text": "Nobody? Someone help meee… plz ",
"username": "Gabriel_Anderson"
},
{
"code": "maskterKeyprivate_key<credentials-filename>cat <credentials-filename> | jq -r .private_key | openssl pkcs8 -topk8 -nocrypt -inform PEM -outform DER | base64",
"text": "Ok. I figure out.\nI don’t know why, Private Key in the maskterKey object is not the same value of private_key in the GCP JSON Key object.\nI didn’t see this instructions in the documentation:If you downloaded your credentials in JSON format, you can use the following command to extract the value of your private key, substituting <credentials-filename> with the name of your credentials file:cat <credentials-filename> | jq -r .private_key | openssl pkcs8 -topk8 -nocrypt -inform PEM -outform DER | base64This solved the problem and all works great. ",
"username": "Gabriel_Anderson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas MongoDB CSFLE + GCP: Failed to create GCP oauth request signature | 2023-01-14T22:10:41.522Z | Atlas MongoDB CSFLE + GCP: Failed to create GCP oauth request signature | 1,537 |
null | [
"queries",
"change-streams"
] | [
{
"code": "",
"text": "Having severe speed issue\nwith change streams\ncan anyone help",
"username": "Pavleen_Kaur1"
},
{
"code": "io.on(\"connection\", async()={ \nconst changeStream = await NotificationModel.watch([{\n $match: {\n operationType: { $in: ['insert'] },\n },\n }]);\n\n changeStream.on('change',\n (event) => {\n if (event.fullDocument.userRef.valueOf() === socket.decoded.data.id) {\n notifications({\n id: socket.decoded.data.id,\n }).then((data) => {\n io.sockets.in(socket.decoded.data.id).emit('event', { key: data });\n });\n }\n });\n})\n",
"text": "Using changing streams mongodb inside io.connection which is hugely impacting the socket io performance but want to emit the event to a specific socket id io.sockets.in(socket.id).emit()thats why using it inside :",
"username": "Pavleen_Kaur1"
},
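The thread closed without a posted answer; one common way to avoid opening a change stream per socket connection is to open a single stream and route events to per-user rooms. A sketch under that assumption, reusing the names from the question, not a confirmed fix:

```javascript
// One change stream for the whole server instead of one per connection.
const changeStream = NotificationModel.watch([
  { $match: { operationType: { $in: ['insert'] } } },
]);

changeStream.on('change', (event) => {
  const userId = event.fullDocument.userRef.valueOf();
  notifications({ id: userId }).then((data) => {
    // Emit only to the room named after the user id.
    io.to(userId).emit('event', { key: data });
  });
});

io.on('connection', (socket) => {
  // Each socket joins a room for its user so targeted emits still work.
  socket.join(socket.decoded.data.id);
});
```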
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Speed issue with change streams inside io connection event | 2023-01-28T16:10:01.818Z | Speed issue with change streams inside io connection event | 1,168 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "I’m following the getting Tasks app tutorials. Each time I quit the tasks app and restart it. The list is populated with tasks, but syncing does not work. Sync only works the first time the app runs. In order to get it working again, I have to delete the app from my device and relaunch it. As long as the app is running for the first time sync works. Any ideas?The following is printed to the console:Sync: Connection[2]: Websocket: Expected HTTP response 101 Switching Protocols, but received:HTTP/1.1 404 Not FoundCache-Control: no-cache, no-store, must-revalidateContent-Length: 4Content-Type: text/plain; charset=utf-8Date: Mon, 15 Jun 2020 02:44:13 GMTVary: OriginX-Content-Type-Options: nosniffX-Frame-Options: DENY404",
"username": "Rob_Cummings"
},
{
"code": "",
"text": "We need to know what platform you’re using, how your authentication is set up (and the code) and then how you’re connecting to Realm (code). So we really need to see some code.",
"username": "Jay"
},
{
"code": "",
"text": "After some trial and error I discovered this to be specific to my iPhone. Since it was not occurring in the iOS simulator or on another iOS device I tired.I’m using iOS 13.5.1 and Xcode 11.5 (11E608c)The code was direct from the tutorial.I was able to rectify the problem by deleting the tutorial task app from the iPhone, restarting the iPhone then reinstalling the task app. It appears to be working normally now.",
"username": "Rob_Cummings"
},
{
"code": "",
"text": "Hello @Rob_Cummings,What are you passing for “Partition”?Can you please share iOS snippet code if possible?Thanks in advance.",
"username": "Vishal_Deshai"
},
{
"code": "",
"text": "Sorry guys, I got it working as described in previous post. I’ve since deleted to tutorial project.",
"username": "Rob_Cummings"
},
{
"code": " var app = App.Create(\"My-Realm-App-Id\");\n\n var user = app.CurrentUser;\n\n if(user == null){\n\n user = await app.LogInAsync(Credentials.Anonymous());\n\n var syncConfig = new FlexibleSyncConfiguration(app.CurrentUser);\n\n _realm0 = await Realm.GetInstanceAsync(syncConfig);\n\n }else{\n\n var syncConfig = new FlexibleSyncConfiguration(app.CurrentUser);\n\n _realm0 = Realm.GetInstance(syncConfig);\n\n \n\n }\n\n _realm0.Subscriptions.Update(() =>\n\n {\n\n _realm0.Subscriptions.Add(_realm0.All<Usuarios_data>(),new SubscriptionOptions {Name = \"RegistrarSubscription\"});\n\n });\n",
"text": "Hi, I am using the Realm SDK for Unity and getting the same Error:\n‘’’\nError: Websocket: Expected HTTP response 101 Switching Protocols, but received:\nHTTP/1.1 404 Not Found\n‘’’\nThis is how I am trying to connect to the application:\n‘’’\nasync void OnEnable(){‘’’\nI have already checked that I have set the permissions and enabled Anonymous Authentication, so I do not know what can be failing.\nThanks ",
"username": "RAUL_MC"
},
{
"code": "Sync: Websocket: Expected HTTP response 101 Switching Protocols, but received:\nHTTP/1.1 401 Unauthorized\ncache-control: no-cache, no-store, must-revalidate\nconnection: close\ncontent-length: 212\ncontent-type: application/json\ndate: Fri, 27 Jan 2023 22:59:45 GMT\nserver: mdbws\nstrict-transport-security: max-age=31536000; includeSubdomains;\nvary: Origin\nx-appservices-request-id: 63d45761629bbbb2087a6128\nx-envoy-max-retries: 0\nx-frame-options: DENY\n2023-01-27 17:59:45.015076-0500 Foody23[4788:57544] Sync: Connection[1]: Connection closed due to error\nlet ra = RealmSwift.App(id: \"foody23realm-jfsbk\")\nlet realmUserKey = *** not showing API key in public post *** // DevUser23 API key\nlet credentials = Credentials.userAPIKey(realmUserKey)\n\n do {\n let thisUser = try await ra.login(credentials: credentials)\n var loginConfiguration = thisUser.configuration(partitionValue: uuidFamily, clientResetMode: .discardUnsyncedChanges()) /// uuidFamily is a UUID value I would prefer not to publish in a public post\n loginConfiguration.objectTypes = [Categories.self,\n Commodities.self,\n Units.self,\n Stores.self,\n Accounts.self,\n ShoppingItems.self\n Ledgers.self,\n Journals.self,\n Vehicles.self,\n Packages.self]\n let realm = try await Realm(configuration: loginConfiguration, downloadBeforeOpen: .always)\n} catch {\n fatalError(\"WARNING: Can't run without database access; \\(devDate.timeIntervalSinceNow)\")\n}\n",
"text": "Similar problem here.I’m using the following code in RealmSwiftAny suggestions?",
"username": "Adam_Ek"
},
{
"code": "",
"text": "I took a look at your backend app and you have a lot of successful writes occurring - are you still experiencing this issue?",
"username": "Ian_Ward"
}
] | Sync not working | 2020-06-15T03:07:53.816Z | Sync not working | 4,386 |
null | [
"app-services-user-auth"
] | [
{
"code": " exports = async function(loginPayload) {\n // Get a handle for the app.users collection\n const users = context.services\n .get(\"Syndes\")\n .db(\"account\")\n .collection(\"users\");\n \n //console.log(loginPayload);\n const username = loginPayload.toString();\n\n const user = await users.findOne( {\"userData.username\": username} );\n\n if (user) {\n // If the user document exists, return its unique ID\n return user._id.toString();\n } else {\n // If the user document does not exist, create it and then return its unique ID\n const newDocument = {\n \"userData\": {\n \"username\": username,\n \"email\": \"\",\n \"password\":\" \"\n }\n };\n \n const result = await users.insertOne(newDocument);\n \n return result.insertedId.toString();\n }\n};\n {\"error\":\"SyntaxError: invalid character 'u' looking for beginning of value\",\"error_code\":\"FunctionExecutionError\",\"link\":\"https://realm.mongodb.com/groups/630d71853be32c526f524ebe/apps/630d7789b5b628f76fcf2edd/logs?co_id=631af32e3c46216516bff9a4\"}\n\n",
"text": "Hello, i am trying to build login and register function to my application. The software i use for develop my application only can connect with database. So, i use http endpoint mongodb. But I shown error when i am trying to connect with my application. I am very new to this. Anyone can help.This is my function for auth in the http endpoints:and this is the error message i got:",
"username": "Faiq_Anargya"
},
{
"code": "",
"text": "Did you ever solve this?",
"username": "Tim_Carrender"
}
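The thread has no resolution; the error suggests the endpoint received a body that was not valid JSON. For reference only, a sketch of how an Atlas HTTPS endpoint function typically receives and parses its request body (the username field name is assumed, and this is not a verified fix for this case):

```javascript
// HTTPS endpoint functions receive a request object rather than a bare string.
exports = async function({ query, headers, body }, response) {
  // The body arrives as binary; decode it and parse the JSON the client sent.
  const payload = JSON.parse(body.text());
  const username = payload.username; // assumed field name in the client's JSON

  const users = context.services
    .get("Syndes")
    .db("account")
    .collection("users");

  const user = await users.findOne({ "userData.username": username });
  // ...insert a new document if none exists, as in the original function...
  return user ? user._id.toString() : null;
};
```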
] | Create login and register function to another application | 2022-09-09T08:10:46.243Z | Create login and register function to another application | 2,789 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "node --trace-deprecation src/index.js\n(node:17196) [MONGOOSE] DeprecationWarning: Mongoose: the `strictQuery` option will be switched back to `false` by default in Mongoose 7. Use `mongoose.set('strictQuery', false);` if you want to prepare for this change. Or use `mongoose.set('strictQuery', true);` to suppress this warning.\n at Mongoose.connect (E:\\my-project\\node_modules\\mongoose\\lib\\index.js:405:5)\n at main (E:\\my-project\\src\\index.js:13:18)\n at Object.<anonymous> (E:\\XXXXXXXXXXX\\express-basic-backend-structure\\src\\index.js:10:1)\n at Module._compile (node:internal/modules/cjs/loader:1218:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1272:10)\n at Module.load (node:internal/modules/cjs/loader:1081:32)\n at Module._load (node:internal/modules/cjs/loader:922:12)\n at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)\n at node:internal/main/run_main_module:23:47\n",
"text": "Hello, community\nHope you are doing well.I am building simple express backend api server.\nRun this command in project terminal.And I can see following error.I need your help.Used Express 4.18.2, MongoDB 6.04, Node.js 18.13.0, mongoose 6.84Thanks in advance.",
"username": "mountiv"
},
{
"code": "DeprecationWarning:\n Mongoose: the `strictQuery` option will be switched back\n to `false` by default in Mongoose 7.\n\nUse `mongoose.set('strictQuery', false);` if you want to prepare for this change.\n\nOr use `mongoose.set('strictQuery', true);` to suppress this warning\n",
"text": "it means a breaking change is on the way with mongoose version 7, and it also says what you need to do if your queries are tied tight on this strictness, or experience differences with version 7. check mongoose documentation to find out what it means: Mongoose v6.9.0: Schemasbut do not confuse this with MongoDB itself as this is a behavior of your schemas queries for mongoose.",
"username": "Yilmaz_Durmaz"
}
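A minimal sketch of where the suggested line goes; the connection string is a placeholder:

```javascript
const mongoose = require('mongoose');

// Opt in to the Mongoose 7 behaviour now
// (or pass true to keep the 6.x default and silence the warning).
mongoose.set('strictQuery', false);

mongoose.connect('mongodb://127.0.0.1:27017/mydb');
```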
] | What this mean "[MONGOOSE] DeprecationWarning: Mongoose"? | 2023-01-28T21:53:25.525Z | What this mean “[MONGOOSE] DeprecationWarning: Mongoose”? | 1,962 |
null | [
"aggregation",
"ruby"
] | [
{
"code": "collection.aggregate(\n [\n {\"$match\": {_id: 123}},\n {\"$project\": {\"attributes.key\": 1, \"attributes.value\": 1}},\n {\n \"$project\": {\n attributes: {\n \"$filter\": {\n input: \"$attributes\", as: \"attr\", cond: {\"$in\": [\"$$attr.key\", [1, 2, 3]]}\n }\n }\n }\n }\n ]\n)\n{read: {mode: :secondary_preferred, max_staleness: 120}}explain",
"text": "Hello,\nI have a query like thisAnd want to use replica to run this. I added {read: {mode: :secondary_preferred, max_staleness: 120}} as options. And now I’d like to check/confirm replica was used. I tried to run explain on the result but it doesn’t provide any information like that. Is there a way to check what was used to run aggregate query like this?",
"username": "Alexey_Blinov"
},
{
"code": "secondaryPreferredmaxStalenessSecondsmongodexplainserverInfo",
"text": "Hi @Alexey_Blinov,Specifying a Read Preference of secondaryPreferred will cause the operation to target a secondary in most situations. When combined with a maxStalenessSeconds the operation will take replication lag (“staleness”) into consideration as well.Per your example using the Ruby driver, assuming you (a) have a secondary member available and healthy and (b) that secondary is lagging behind the primary by less than 120 seconds, the operation should target that node.And now I’d like to check/confirm replica was used.Assuming the operation took longer than the slow query threshold (default 100ms) you could check the mongod logs for your secondaries to confirm which node ran the query.I tried to run explain on the result but it doesn’t provide any information like that.The explain results should contain a serverInfo field that would tell you which server ran the operation when it was explained. This doesn’t mean the operation prior to being explained ran on this server, but assuming you used the same read preference and max staleness settings it should allow you to rule out that the operation ran on the primary.",
"username": "alexbevi"
},
{
"code": "serverInfo\"queryPlanner\"",
"text": "Thanks!\nYeah, I was expected to see serverInfo there. But saw only \"queryPlanner\" section. Hence my question.\nI’ll try to run e few more explains. Maybe server info will be there.",
"username": "Alexey_Blinov"
},
{
"code": "cursor.explain()mongoshmongoserverInfoexplainqueryPlanner",
"text": "I’ll try to run e few more explains. Maybe server info will be there.Note that if you’re running cursor.explain() via the mongosh or mongo shell it’s possible the serverInfo would not be available unless you increase the explain Verbosity Mode (default via the shell is queryPlanner)",
"username": "alexbevi"
},
{
"code": "explainexplain",
"text": "Ruby MongoDB driver’s explain do not accept options. It will show everything. And yeah, I was able to find info I need. Just have to call explain right after cursor was used. When I was checking something else and call explain after some time - output was modest.\nThank you",
"username": "Alexey_Blinov"
},
{
"code": "require('mongo')\nclient = Mongo::Client.new('mongodb://mongo-0-a/test?readPreference=secondary')\nxs = client[:foo].find({\"x\":1}).explain(verbosity:'executionStats')\nputs JSON.pretty_generate(xs[\"serverInfo\"])\n",
"text": "",
"username": "chris"
}
] | A way to confirm/check replica was used | 2023-01-10T12:15:09.069Z | A way to confirm/check replica was used | 1,337 |
null | [
"atlas-device-sync",
"mongodb-shell",
"react-native"
] | [
{
"code": "",
"text": "Hello guys, I really need help: This is my first time working with Realm. I’m using my app ID to connect to atlas device sync.\nOn Dev Mode, I can generate all the collections, but whenever I save any data, it’s never synced. The data is kept locally and never goes to the database in the atlas cloud.\nAm I missing something?\n@Andrew_Meyer\nPlease help!",
"username": "Musekwa_Evariste"
},
{
"code": "",
"text": "Hi @Musekwa_Evariste, for starters, I would recommend setting up our template application to see a working example on how to use Realm with sync. Here is a link to our tutorial. When that is working, you can compare that to your current implementation to see if you are missing some sort of configuration in Atlas, or locally in your application code.I hope this helps. Please reply if you are running into any problems.",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "Hi @Andrew_Meyer,Thanks for the tips you gave me. I followed the tutorial you recommended me, and it worked like a charm.Unfortunately, I’m still having the same problem, since my use cases seem to be more complex that the one shown in the tutorial.For instance, I need to implement custom user data. I couldn’t find any tutorial showing how to save custom user data. The one I found only shows how to update and delete some user data.Can you recommend someone who can mentor me on this topic, in case you don’t have enough time to do it? I can pay this service.Locally, Mongo Realm works perfectly without any authentication and device sync enabled. For this app, I need to enable any authentication provider with custom user data and the device sync.Regards.",
"username": "Musekwa_Evariste"
},
{
"code": "",
"text": "Hi @Musekwa_Evariste,I’m part of Technical Support for Realm: we can try to help you here, if you can forward us more details about the specifics of your app, but, it being a public forum, there may be information that you wouldn’t be ready to share so broadly. It’s your decision, however, so feel free to follow up here.There are otherwise consulting packages available, if you feel ready for them, but in my opinion the best option at this stage is to get Developer Support from your Atlas Project page, that’s cheap enough at 49$/month, and has a 30 days trial. There you can create 1-to-1 support cases that will be followed by myself or my colleagues (who will respond depends also by your timezone), and it’s precisely our job to frame the issues you may be experiencing and answer your questions.Let me know how you want to proceed.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Hi @Paolo_Manna,Thanks for this information. This is the kind of support I need. I’m adopting Mongo Realm as one of the main technologies used by my apps. It’s important that I get a such Developer Support.I’ve just registered for the Developer Support from my Atlas Project page and, right now, I’m waiting for the support portal to be available, before I submit a request for further support.I might come back to this comments’ thread if anything goes wrong in the support portal.Thanks to you and @Andrew_Meyer for the support.\nRegards.",
"username": "Musekwa_Evariste"
}
] | Atlas Device Sync | 2023-01-24T04:53:05.075Z | Atlas Device Sync | 1,237 |
null | [] | [
{
"code": "com.mongodb.MongoSocketOpenException: Exception opening socket\n\tat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-3.11.1.jar!/:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-3.11.1.jar!/:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-3.11.1.jar!/:na]\n\tat java.base/java.lang.Thread.run(Unknown Source) ~[na:na]\nCaused by: java.net.SocketTimeoutException: connect timed out\n\tat java.base/java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:na]\n\tat java.base/java.net.AbstractPlainSocketImpl.doConnect(Unknown Source) ~[na:na]\n\tat java.base/java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source) ~[na:na]\n\tat java.base/java.net.AbstractPlainSocketImpl.connect(Unknown Source) ~[na:na]\n\tat java.base/java.net.SocksSocketImpl.connect(Unknown Source) ~[na:na]\n\tat java.base/java.net.Socket.connect(Unknown Source) ~[na:na]\n\tat java.base/sun.security.ssl.SSLSocketImpl.connect(Unknown Source) ~[na:na]\n\tat com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.11.1.jar!/:na]\n\tat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-3.11.1.jar!/:na]\n\tat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-3.11.1.jar!/:na].\n",
"text": "Hi All,Sometimes for one hour we are having connection issue in between Spring BootApplication 2.0.0 Release version with Mongo Db Java driver 3.11.1.\nApplication is deployed in Azure App service.It works good but some of the requests are getting below issue.Please let me know if anyone have similar issue.",
"username": "vamsi_krishna"
},
{
"code": "java.net",
"text": "Hi @vamsi_krishna welcome to the community!It seems that the error was a connection time out from the java.net side and not from the MongoDB driver particularly. I’m not sure why this is so, since the reason may be specific to Azure App Service.Do you see a similar issue when not using the App Service? Also, it could be a limitation of certain pricing tier. Have you tried using a higher tier and see a similar failure?Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "@Kevin.\nThank you for responding to my query.\nYes, we are seeing this issue in Azure APP service. It is working good in local system.\nRegarding app service plans. Sure, will check that one.Thank you.",
"username": "vamsi_krishna"
},
{
"code": "",
"text": "@kevinadiWe are using Production P2V2 App Service Plan in Azure.",
"username": "vamsi_krishna"
},
{
"code": "",
"text": "It is happening in local how to solve it ?",
"username": "Kuldeep_Kumar"
}
] | com.mongodb.MongoSocketOpenException: Exception opening socket | 2020-09-21T20:35:44.498Z | com.mongodb.MongoSocketOpenException: Exception opening socket | 26,553 |
[
"aggregation"
] | [
{
"code": "",
"text": "but i am getting this\nthere should be array on top instead of object\nArray → object → object",
"username": "411_Danish_Ranjan"
},
{
"code": "",
"text": "it is not clear what you have and what you need. provide us with an original document, your query, and the result.",
"username": "Yilmaz_Durmaz"
}
] | I want to use objectToArray without key and value | 2023-01-27T09:15:29.766Z | I want to use objectToArray without key and value | 513 |
|
null | [
"queries"
] | [
{
"code": " If true , mongod will drop the target of renameCollection prior to renaming the collection.",
"text": "According to doc: If true , mongod will drop the target of renameCollection prior to renaming the collection.\nWant to understand if this operation is atomic or not. Or is it possible it may end up dropping the collection and then the command fails to rename the collection?",
"username": "Ashish_Jha2"
},
{
"code": "4.2db.collection.renameCollection()renameCollection()dropTarget : true",
"text": "Hello @Ashish_Jha2, and welcome to the Developer Community Forums!Regarding your question, as noted within our documentation, since MongoDB version 4.2 , when performing db.collection.renameCollection() , the operation obtains an exclusive lock on the source and target collections for the duration of the operation. All subsequent operations on the collections must wait until renameCollection() completes. Seeing as this operation has an intent exclusive lock on both collection, I believe that the rename would occur prior to releasing the lock, therefore, making this an atomic operation. However, an important note to this is the renaming a collection with dropTarget : true is atomic so long as the renamed collection stays within the same database .",
"username": "Cian_Sinclair"
}
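For reference, a small mongosh sketch of the operation being discussed; the database and collection names are placeholders:

```javascript
// Rename source to target in the same database, dropping any existing target first;
// within one database the drop-and-rename is performed atomically.
db.adminCommand({
  renameCollection: "mydb.source",
  to: "mydb.target",
  dropTarget: true
});

// Shell helper equivalent; the second argument is dropTarget.
db.source.renameCollection("target", true);
```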
] | Is renameCollection() command when dropTarget=true an atomic operation? | 2023-01-27T10:00:52.174Z | Is renameCollection() command when dropTarget=true an atomic operation? | 896 |
null | [
"aggregation",
"compass"
] | [
{
"code": "[\n {\n $lookup:\n /**\n * from: The target collection.\n * localField: The local join field.\n * foreignField: The target join field.\n * as: The name for the results.\n * pipeline: Optional pipeline to run on the foreign collection.\n * let: Optional variables to use in the pipeline field stages.\n */\n {\n from: \"asset_earnings\",\n localField: \"_id\",\n foreignField: \"track_id\",\n as: \"output\",\n },\n },\n {\n $unwind:\n /**\n * path: Path to the array field.\n * includeArrayIndex: Optional name for index.\n * preserveNullAndEmptyArrays: Optional\n * toggle to unwind null and empty values.\n */\n {\n path: \"$output\",\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $project: {\n _id: 0,\n track_id: \"$_id\",\n year: \"$output.reporting_year\",\n payout: \"$output.payable_amount\",\n },\n },\n {\n $out:\n /**\n * Provide the name of the output collection.\n */\n \"matches_with_details\",\n },\n]\n",
"text": "I have two Collections:\nasset_earnings collection with 524K documents\ntrack_id collection with 944 documents\nthis aggregation is run on track_id\nI have the Max Time MS set to 2147483647 and the following aggregation still times out:\nEven the lookup alone also timed out.",
"username": "Willy_Hermanto"
},
{
"code": "track_id",
"text": "if you do not have one yet, create an index on track_id and try again. without an index, every lookup operation needs a collection scan which takes longer.also, try using another project stage before the unwind. the unwind makes copies of that full document as many times the size of the array. reducing the number of other fields will reduce the size of the document in memory and increase performance if there is a disk swap in use",
"username": "Yilmaz_Durmaz"
}
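Following the suggestion above, a small mongosh sketch using the collection and field names from the question:

```javascript
// Index the foreign field used by the $lookup so each join becomes an index seek
// instead of a full scan of the large asset_earnings collection.
db.asset_earnings.createIndex({ track_id: 1 });
```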
] | MongDB Compass: operation exceeded time limit | 2023-01-28T06:57:27.245Z | MongDB Compass: operation exceeded time limit | 1,092 |
null | [
"aggregation"
] | [
{
"code": "db.getCollection('songs').aggregate([\n {\n $match: {\n correct:true, \n createdAt : { '$gte' : ISODate('2022-02-01'), '$lte' :ISODate('2022-02-09') },\n\n covers: {\n $elemMatch: {\n \"relation.correct\": false\n }\n }\n }\n },\n {\n $project: {\n \n covers: {\n $filter: {\n \n input: \"$covers\",\n as: \"cover\",\n cond: { $eq: [\"$$cover.relation.correct\", false],\n }\n }\n }\n }\n },\n \n {\n $lookup: {\n from: \"songs\",\n let: { covers_song: \"$covers.song\" },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $eq: [\"$_id\", \"$$covers_song\"] },\n { $eq: [\"$correct\", true] }\n ]\n }\n }\n }\n ],\n as: \"cover_songs\"\n }\n}\n\n \n])\n",
"text": "Hello, I am beginner with MongoDB. I need help with the following pipeline:The result of cover_songs is null elements and I don’t know why? Who can help me?",
"username": "Anja_H"
},
{
"code": "",
"text": "It would really help to help you if you share some sample documents from the collection. You might have a typo in some of the field names.",
"username": "steevej"
},
{
"code": "{\n \"_id\" : ObjectId(\"5acc307fd734b019b873e90c\"),\n \"title\" : \"Listen To Your Heart\",\n \"artists\" : [ \n {\n \"artist\" : ObjectId(\"5acc307ad734b019b871cac8\"),\n \"name_id\" : \"jft428eo3qepqgvrtvw\",\n \"delimiter\" : null\n }\n ],\n \"composers\" : [ \n {\n \"artist\" : ObjectId(\"5acc307ed734b019b873d978\")\n }, \n {\n \"artist\" : ObjectId(\"5acc307ad734b019b8728a0e\")\n }\n ],\n \"search_terms\" : [ \n {\n \"term\" : \"listentoyourheart\"\n }\n ],\n \"legacy_id\" : 16679,\n \"legacy_artists\" : \"Roxette\",\n \"legacy_composers\" : \"Per Håkan Gessle / Mats Persson\",\n \"sources\" : [ \n {\n \"source\" : \"bla bla bla\"\n }, \n \n ],\n \"cover_count\" : 16,\n \"covers\" : [ \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9b0\")\n }, \n \n \"comment_de\" : \"\",\n \"editor\" : \"MIG\",\n \"edited_by\" : \"AHO\",\n \"note\" : \"\",\n \"release_date\" : \"1988\",\n \"createdAt\" : ISODate(\"2005-05-13T22:00:00.000Z\"),\n \"updatedAt\" : ISODate(\"2023-01-13T16:15:17.921Z\"),\n \"languages\" : [ \n \"eng\"\n ],\n \"correct\" : true,\n \"youtube\" : {\n \"modified\" : ISODate(\"2022-12-04T02:04:52.052Z\"),\n \"autogenerated\" : false,\n \"id\" : \"yCC_b5WHLX0\"\n },\n \"original_count\" : 1,\n \"url_key\" : \"Roxette-Listen-To-Your-Heart\",\n \"first_release\" : {\n \"medium\" : \"7''-Vinyl\",\n \"title\" : \"Listen To Your Heart\",\n \"label\" : \"Parlophone\",\n \"record_number\" : \"1363237\",\n \"country\" : \"SE\",\n \"track_number\" : \"A\",\n \"ean\" : \"\",\n \"asin\" : \"\",\n \"medium_description\" : \"Single\"\n },\n \"folksong\" : {\n \"identifiers\" : []\n },\n \"is_folksong\" : false,\n \"live_version\" : false,\n \"highlight\" : {\n \"is\" : false\n },\n \"recorded_date\" : \"\",\n \"keywords\" : [ \n \"roxette\", \n \"listen\", \n \"2\", \n \"yore\", \n \"heart\", \n \"to\", \n \"your\"\n ]\n}\n{\n \"_id\" : ObjectId(\"61fe7e1fe5bbd565edbb9868\"),\n \"covers\" : [ \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2001-08-31T22:00:00.000Z\"),\n \"updatedAt\" : ISODate(\"2001-08-31T22:00:00.000Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc309bd734b019b8793d3c\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2001-08-31T22:00:00.000Z\"),\n \"updatedAt\" : ISODate(\"2001-08-31T22:00:00.000Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"cover_songs\" : []\n}\n",
"text": "Here a sample document: It is a database with original-songs and related covers:I want to know: original songs are correct (song completely\nprocessed), and cover songs where correct = song.relation = falseThe result of my query:Why “cover_songs”: is 0?",
"username": "Anja_H"
},
{
"code": "SyntaxError: Unexpected token, expected \",\" (53:17)\n\n 51 | }, \n 52 | \n> 53 | \"comment_de\" : \"\",\n | ^\n 54 | \"editor\" : \"MIG\",\n 55 | \"edited_by\" : \"AHO\",\n 56 | \"note\" : \"\",\n",
"text": "When I try to cut-n-paste your document into my mongosh I get:It looks like the covers array, which is central to your use-case, is not correct.It will also be nice that the sample documents you supply are the one you used in your sample result. You sample document has _id:5acc307fd734b019b873e90c but your result has _id:61fe7e1fe5bbd565edbb9868. If we do not see the source document of 61fe7e1fe5bbd565edbb9868 there is not way for us to see if the issue is with the document or with the aggregation.",
"username": "steevej"
},
{
"code": " \"_id\" : ObjectId(\"5acc307fd734b019b873e90c\"),\n \"title\" : \"Listen To Your Heart\",\n \"artists\" : [ \n {\n \"artist\" : ObjectId(\"5acc307ad734b019b871cac8\"),\n \"name_id\" : \"jft428eo3qepqgvrtvw\",\n \"delimiter\" : null\n }\n ],\n \"composers\" : [ \n {\n \"artist\" : ObjectId(\"5acc307ed734b019b873d978\")\n }, \n {\n \"artist\" : ObjectId(\"5acc307ad734b019b8728a0e\")\n }\n ],\n \"search_terms\" : [ \n {\n \"term\" : \"listentoyourheart\"\n }\n ],\n \"legacy_id\" : 16679,\n \"legacy_artists\" : \"Roxette\",\n \"legacy_composers\" : \"Per Håkan Gessle / Mats Persson\",\n \"sources\" : [ \n {\n \"source\" : \"https://www.discogs.com/Roxette-Listen-To-Your-Heart/release/1404673\"\n }, \n {\n \"source\" : \"https://en.wikipedia.org/wiki/Listen_to_Your_Heart_(Roxette_song)\"\n }, \n {\n \"source\" : \"https://de.wikipedia.org/wiki/Listen_to_Your_Heart\"\n }\n ],\n \"cover_count\" : 16,\n \"covers\" : [ \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9b0\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"A\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2022-03-06T22:38:31.640Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5ae38d4ec8b438ca3f4a4e51\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2022-03-18T21:12:46.044Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9aa\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9b2\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9af\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"Q\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2023-01-13T16:55:41.785Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9b1\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : 
ObjectId(\"5acc308fd734b019b876b9ae\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2021-11-10T07:34:55.744Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9a9\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-04-20T14:35:05.514Z\"),\n \"updatedAt\" : ISODate(\"2022-04-20T14:35:22.944Z\")\n },\n \"song\" : ObjectId(\"626019f251685b0de4874798\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"M\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"00:23\",\n \"cover\" : \"01:22\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-03-20T10:55:47.538Z\"),\n \"updatedAt\" : ISODate(\"2022-03-20T10:56:18.275Z\")\n },\n \"song\" : ObjectId(\"6236ff8405189150766ebcdc\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9ad\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9ac\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"Q\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9ab\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9a8\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2021-11-09T22:31:55.385Z\"),\n \"updatedAt\" : ISODate(\"2021-11-09T22:31:55.385Z\")\n },\n \"song\" : ObjectId(\"618af6c3a151e6db261f93a8\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-08-30T05:41:01.337Z\"),\n \"updatedAt\" : ISODate(\"2022-08-30T05:42:41.476Z\")\n },\n \"song\" : ObjectId(\"630da214ee9bd995ee18bba5\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n 
\"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2023-01-06T18:08:37.450Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9a7\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-08-17T12:18:27.292Z\"),\n \"updatedAt\" : ISODate(\"2022-08-17T12:19:30.931Z\")\n },\n \"song\" : ObjectId(\"62fcdc699336888e0e569e25\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-07-31T07:01:47.859Z\"),\n \"updatedAt\" : ISODate(\"2022-07-31T07:01:47.859Z\")\n },\n \"song\" : ObjectId(\"62e6285dd1aee7826833d486\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-08-30T05:25:01.705Z\"),\n \"updatedAt\" : ISODate(\"2022-08-30T05:25:01.705Z\")\n },\n \"song\" : ObjectId(\"630d9e1cee9bd995ee184386\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"Q\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9a6\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-08-30T09:20:57.658Z\"),\n \"updatedAt\" : ISODate(\"2022-08-30T09:20:57.658Z\")\n },\n \"song\" : ObjectId(\"630dd5c5ee9bd995ee1e967e\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-08-30T09:27:46.987Z\"),\n \"updatedAt\" : ISODate(\"2022-08-30T09:27:46.987Z\")\n },\n \"song\" : ObjectId(\"630dd784ee9bd995ee1ecb91\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2020-08-03T19:34:16.244Z\"),\n \"updatedAt\" : ISODate(\"2020-08-03T19:37:31.340Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5f286613520cbf88170d8d7d\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-08-30T09:38:54.379Z\"),\n \"updatedAt\" : ISODate(\"2022-08-30T09:38:54.379Z\")\n },\n \"song\" : ObjectId(\"630dd93eee9bd995ee1efed0\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-02-15T20:23:56.733Z\"),\n 
\"updatedAt\" : ISODate(\"2022-02-15T20:23:56.733Z\")\n },\n \"song\" : ObjectId(\"620c0a59b0343c2ea9456d3d\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2021-09-13T05:55:42.481Z\"),\n \"updatedAt\" : ISODate(\"2021-10-09T05:28:12.389Z\")\n },\n \"song\" : ObjectId(\"613ee736d6db78d96a0b160c\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : true,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"comment_de\" : \"\",\n \"comment_en\" : \"\",\n \"createdAt\" : ISODate(\"2022-08-30T06:14:05.816Z\"),\n \"updatedAt\" : ISODate(\"2022-08-30T06:14:05.816Z\")\n },\n \"song\" : ObjectId(\"630daa0fee9bd995ee19a76f\")\n }\n ],\n \"comment_de\" : \"\",\n \"editor\" : \"MIG\",\n \"edited_by\" : \"AHO\",\n \"note\" : \"\",\n \"release_date\" : \"1988\",\n \"createdAt\" : ISODate(\"2005-05-13T22:00:00.000Z\"),\n \"updatedAt\" : ISODate(\"2023-01-13T16:15:17.921Z\"),\n \"languages\" : [ \n \"eng\"\n ],\n \"correct\" : true,\n \"youtube\" : {\n \"modified\" : ISODate(\"2022-12-04T02:04:52.052Z\"),\n \"autogenerated\" : false,\n \"id\" : \"yCC_b5WHLX0\"\n },\n \"original_count\" : 1,\n \"url_key\" : \"Roxette-Listen-To-Your-Heart\",\n \"first_release\" : {\n \"medium\" : \"7''-Vinyl\",\n \"title\" : \"Listen To Your Heart\",\n \"label\" : \"Parlophone\",\n \"record_number\" : \"1363237\",\n \"country\" : \"SE\",\n \"track_number\" : \"A\",\n \"ean\" : \"\",\n \"asin\" : \"\",\n \"medium_description\" : \"Single\"\n },\n \"folksong\" : {\n \"identifiers\" : []\n },\n \"is_folksong\" : false,\n \"live_version\" : false,\n \"highlight\" : {\n \"is\" : false\n },\n \"recorded_date\" : \"\",\n \"keywords\" : [ \n \"roxette\", \n \"listen\", \n \"2\", \n \"yore\", \n \"heart\", \n \"to\", \n \"your\"\n ]\n}\nI changed the skript:\ndb.getCollection('songs').aggregate([\n {\n $match: {\n correct:true, title: \"Listen To Your Heart\",\n createdAt : { '$gte' : ISODate('1999-02-01'), '$lte' :ISODate('2022-02-09') },\n\n covers: {\n $elemMatch: {\n \"relation.correct\": false\n }\n }\n }\n },\n {\n $project: {\n \n covers: {\n $filter: {\n \n input: \"$covers\",\n as: \"cover\",\n cond: { $eq: [\"$$cover.relation.correct\", false],\n }\n }\n }\n }\n },\n \n {\n $lookup: {\n from: \"songs\",\n let: { covers_song: \"$covers.song\" },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $eq: [\"$_id\", \"$$covers_song\"] },\n { $eq: [\"$correct\", true] }\n ]\n }\n }\n }\n ],\n as: \"cover_songs\"\n }\n}\n{\n \"_id\" : ObjectId(\"5acc307fd734b019b873e90c\"),\n \"covers\" : [ \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9b0\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9b2\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n 
\"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9af\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"Q\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2023-01-13T16:55:41.785Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9b1\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9ae\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9ad\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9ac\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"Q\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9ab\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.336Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9a8\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"C\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2023-01-06T18:08:37.450Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9a7\")\n }, \n {\n \"relation\" : {\n \"relation_type\" : \"Q\",\n \"correct\" : false,\n \"timestamps\" : [ \n {\n \"original\" : \"\",\n \"cover\" : \"\"\n }\n ],\n \"createdAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"updatedAt\" : ISODate(\"2019-06-25T19:36:47.337Z\"),\n \"comment_de\" : \"\",\n \"comment_en\" : \"\"\n },\n \"song\" : ObjectId(\"5acc308fd734b019b876b9a6\")\n }\n ],\n \"cover_songs\" : []\n",
"text": "Hello, here the correct sample for _id:5acc307fd734b019b873e90c\n{the result of my query:",
"username": "Anja_H"
},
{
"code": "{ $eq: [\"$_id\", \"$covers_song\"] }{ $lookup : {\n from : \"songs\" ,\n localField : \"covers.song\" ,\n foreignField : \"_id\",\n pipeline : [\n { $match : { correct : true } }\n ] ,\n as : \"cover_songs\"\n} }\n",
"text": "The issue is the use of let:.The let: variable is evaluated once for the top document, not for each element of that needs to be $lookup-ed up. So its value is the array $covers.song so the $match will always evaluate{ $eq: [\"$_id\", \"$covers_song\"] }to false.A small change to your lookup should do what you want. Try:",
"username": "steevej"
},
{
"code": "",
"text": "thank you. I got a failure message? “$lookup with ‘pipeline’ may not specify ‘localField’ or ‘foreignField’”, My MongoDB Version is 4.46.",
"username": "Anja_H"
},
{
"code": "",
"text": "Bummer!Consize syntax in new in 5.0.One way out is to leave out the pipeline: in the $lookup and then do a $project after with a $filter like you did in your existing $project.",
"username": "steevej"
},
{
"code": "\ndb.getCollection('songs').aggregate([\n\n\n\n {\n\n\n\n $match: {\n\n\n\n correct:true,\n\n\n\n createdAt : { '$gte' : ISODate('1999-01-01'), '$lte' :ISODate('2000-12-31') },\n\n\n\n\n\n covers: {\n\n\n\n $elemMatch: {\n\n\n\n \"relation.correct\": false\n\n\n\n }\n\n\n\n }\n\n\n\n }\n\n\n\n },\n\n \n\n \n\n\n\n {\n\n\n\n $project: {\"url_key\" : 1,\n\n\n\n \n\n\n\n covers: {\n\n\n\n $filter: {\n\n\n\n input: \"$covers\",\n\n\n\n as: \"cover\",\n\n\n\n cond: { $eq: [\"$$cover.relation.correct\", false],\n\n\n\n }\n\n\n\n }\n\n\n\n }\n\n\n\n }\n\n\n\n },\n\n\n\n{\n\n $lookup: {\n\n from: \"songs\",\n\n localField: \"covers.song\",\n\n foreignField: \"_id\",\n \n \n\n as: \"coverversionen\",\n\n }\n\n },\n\n {\n $project: {\n \"coverversionen._id\": 1,\n \"coverversionen.url_key\": 1, \n \n\n\n\n coverstrue: {\n\n\n\n $filter: {\n\n\n\n input: \"$coverversionen.correct\",\n\n\n\n as: \"correct\",\n\n\n\n cond: { $eq: [\"$$correct\", true],\n\n\n\n }\n\n\n\n }\n\n\n\n }\n\n\n\n }\n\n\n\n },\n\n \n\n])\n",
"text": "I’ve done it in that way:Is it ok in this way? I still need to see the coverstrue.id und coverstrue.url_key fields?\nCan you help?",
"username": "Anja_H"
},
{
"code": "",
"text": "Is it ok in this way?Code with so many blank lines is not readable.Is it ok in this way?If you get the result you want it is okay. If you do not get the result you want it is not okay? But I really do not know if you get the result you want.Please stop changing fields names. You had cover_songs and not it seems it is conversionen. It is really hard to follow a moving target.I still need to see the coverstrue.id und coverstrue.url_key fields?The sentence ends with a question mark but looks like an affirmation. I really do not know what you need to see.",
"username": "steevej"
},
{
"code": "db.getCollection('songs').aggregate([\n {\n $match: {\n correct:true,\n createdAt : { '$gte' : ISODate('1999-01-01'), '$lte' :ISODate('2000-12-31') },\n covers: {\n $elemMatch: {\n \"relation.correct\": false\n }\n }\n }\n },\n {\n $project: {\"url_key\" : 1,\n covers: {\n $filter: {\n input: \"$covers\",\n as: \"cover\",\n cond: { $eq: [\"$$cover.relation.correct\", false],\n }\n }\n }\n }\n },\n \n{ \n $lookup: {\n from: \"songs\",\n localField: \"covers.song\",\n foreignField: \"_id\",\n as : \"cover_songs\"\n }\n },\n { \"$project\": {\n \"cover_songs.id\": 1,\n coverstrue: {\n $filter: {input: \"$cover_songs.correct\",\n as: \"correct\",\n cond: { $eq: [\"$$correct\", true],\n }\n }\n }\n}\n}\n])\n",
"text": "Hello, thanks at first for your patience. Here is the script without changed field names and without changed blank lines:Yes, it works in the right way, it filters the songs, where correct = true. My problem now is, that I see only as result: coverstrue.0.true, coverstrue1.true, and so on. I need still the information, what the id for the cover is and the field url_key, (artist of the cover-title and the name of the cover-title?) If this solved, then I am finally ready?",
"username": "Anja_H"
},
{
"code": "coverstrue : {\n $filter : {\n input : \"$cover_songs\" ,\n as : \"cover\" ,\n cond : { $eq : [ \"$$cover.correct\" , true ] } ,\n }\n}\n",
"text": "You only see the fields that you $project.After the first $project, only the fields url_key and covers the filtered array will be present.In your final $project, since the input: is $cover_songs.correct you only get the correct field. If you want the other fields of the array elements the input: has to be the whole array. Something like",
"username": "steevej"
},
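Putting the pieces of this thread together, a minimal end-to-end sketch of the pipeline (using only the field names from the sample documents above; adjust the $match conditions such as title and date range to your own needs) could look like this:

```javascript
// Sketch: originals marked correct that have at least one incorrect cover relation,
// joined to their cover songs, keeping only the cover songs that are themselves correct.
db.getCollection("songs").aggregate([
  { $match: {
      correct: true,
      covers: { $elemMatch: { "relation.correct": false } }
  } },
  { $project: {
      url_key: 1,
      covers: {
        $filter: {
          input: "$covers",
          as: "cover",
          cond: { $eq: ["$$cover.relation.correct", false] }
        }
      }
  } },
  // no pipeline inside $lookup, so this also works before MongoDB 5.0
  { $lookup: {
      from: "songs",
      localField: "covers.song",
      foreignField: "_id",
      as: "cover_songs"
  } },
  { $project: {
      url_key: 1,
      cover_songs: {
        $filter: {
          input: "$cover_songs",
          as: "cover",
          cond: { $eq: ["$$cover.correct", true] }
        }
      }
  } }
])
```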
{
"code": "",
"text": "Hey, thank you so much, you are great. It works in the way I want.",
"username": "Anja_H"
}
] | Need Help With Aggregation Pipeline | 2023-01-24T22:13:16.238Z | Need Help With Aggregation Pipeline | 1,090 |
null | [
"queries",
"indexes",
"time-series"
] | [
{
"code": "{\n \"metadata\": {\n \"sensorId\": \"123B\",\n \"type\": \"A\"\n },\n \"timestamp\": ISODate(...),\n \"reading\": 7.0\n}\n[\n {\n \"metadata\": {\n \"sensorId\": \"10\",\n \"type\": \"A\"\n },\n \"timestamp\": 1,\n \"reading\": 1.0\n },\n {\n \"metadata\": {\n \"sensorId\": \"10\",\n \"type\": \"A\"\n },\n \"timestamp\": 1,\n \"reading\": 2.0\n },\n {\n \"metadata\": {\n \"sensorId\": \"20\",\n \"type\": \"A\"\n },\n \"timestamp\": 2,\n \"reading\":1.0\n },\n {\n \"metadata\": {\n \"sensorId\": \"20\",\n \"type\": \"A\"\n },\n \"timestamp\": 2,\n \"reading\":5.0\n },\n {\n \"metadata\": {\n \"sensorId\": \"20\",\n \"type\": \"A\"\n },\n \"timestamp\": 2,\n \"reading\":9.0\n }\n]\n[\n {\n \"metadata\": {\n \"sensorId\": \"10\",\n \"type\": \"A\"\n },\n \"timestamp\": 1,\n \"reading\": 2.0\n },\n {\n \"metadata\": {\n \"sensorId\": \"20\",\n \"type\": \"A\"\n },\n \"timestamp\": 2,\n \"reading\":5.0\n },\n {\n \"metadata\": {\n \"sensorId\": \"20\",\n \"type\": \"A\"\n },\n \"timestamp\": 2,\n \"reading\":9.0\n },\n]\n[\n {\n \"metadata\": {\n \"sensorId\": \"10\",\n \"type\": \"A\"\n },\n \"timestamp\": 1,\n \"reading\": 2.0\n },\n {\n \"metadata\": {\n \"sensorId\": \"20\",\n \"type\": \"A\"\n },\n \"timestamp\": 2,\n \"reading\":9.0\n },\n]\n",
"text": "Hello everyone,I want to use a time-series collection (or any collection for that matter) with the following document schema:The idea is to return the latest reading (or multiple values within a time range) for all sensors of a particular type.For example, I insert the following:I would like to return everything of type “A” with a reading > 1.0. These would be:Similarly, I would like to return everything of type “A” with their latest readings. This would be:From the documentation it is not clear how one would set up indexing and queries for such a thing. Is it even possible to achieve this in an efficient manner?I understand time-series does bucketing, so maybe it is not actually the correct collection type to support these queries. Perhaps a normal collection or a clustered collection would work better?Any help would be greatly appreciated, thank you!",
"username": "Iulian_Nitescu"
},
{
"code": "_id",
"text": "Time-series collections are good to reduce disk usage for all server types, but especially for sharded clusters to distribute data evenly. So, no, other collection types won’t work better.as for the indexing, it reads (similar to automatic _id indexing):When you create a time series collection, MongoDB automatically creates an internal clustered index on the time field ( Time Series — MongoDB Manual)so your question of “range” and “latest” is already part of the answer: you just query your data on timestamp as you normally do for any other field, and the internal index is automatically used meaning it is already efficient.To efficiently query fields other than timestamp, please make sure you read this page: Add Secondary Indexes to Time Series Collections — MongoDB Manual. It is not much different than making compound indexes.",
"username": "Yilmaz_Durmaz"
}
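As a rough sketch of the approach described above (the collection name and index are illustrative, not from the original post), the setup and the two queries could look like this in mongosh:

```javascript
// Sketch: a time-series collection with "timestamp" as timeField and "metadata" as metaField
db.createCollection("readings", {
  timeseries: { timeField: "timestamp", metaField: "metadata" }
})

// Secondary index to support queries by sensor type and time
db.readings.createIndex({ "metadata.type": 1, timestamp: -1 })

// Everything of type "A" with reading > 1.0 inside a time range
db.readings.find({
  "metadata.type": "A",
  reading: { $gt: 1.0 },
  timestamp: { $gte: ISODate("2023-01-01"), $lt: ISODate("2023-02-01") }
})

// Latest reading per sensor of type "A"
db.readings.aggregate([
  { $match: { "metadata.type": "A" } },
  { $sort: { timestamp: -1 } },
  { $group: { _id: "$metadata.sensorId", latest: { $first: "$$ROOT" } } }
])
```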
] | How to return a range of readings (or latest) from all sensors of a type from a time-series collection | 2023-01-28T02:13:02.143Z | How to return a range of readings (or latest) from all sensors of a type from a time-series collection | 896 |
null | [
"aggregation",
"crud"
] | [
{
"code": "db.ingredient.insertMany(\n\t[{\n\t\tname: \"sugar\",\n\t\tinventory: 100,\n\t\tdaysConsume: [\n\t\t 0, 0, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0\n\t\t]\n\t},\n\t{\n\t\tname: \"salt\",\n\t\tinventory: 230,\n\t\tdaysConsume: [\n\t\t 0, 0, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0\n\t\t]\n\t}]\n);\ndb.ingredient.updateOne({_id: ObjectId(\"63d27f53cc3fa8ed2594a6ee\")},[{$set:{\"daysConsume.1\":\"$inventory\"}}]){\n _id: ObjectId(\"63d27f53cc3fa8ed2594a6ee\"),\n name: 'salt',\n inventory: 230,\n daysConsume: [\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }, { '1': 230 }, { '1': 230 },\n { '1': 230 }\n ]\n}\n{\n\t\t_id: ObjectId(\"63d27f53cc3fa8ed2594a6ee\"),\n\t\tname: \"salt\",\n\t\tinventory: 230,\n\t\tdaysConsume: [\n\t\t 0, 230, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0, 0, 0, 0, 0, 0,\n\t\t 0, 0, 0, 0\n\t\t]\n\t}\n",
"text": "My collection is named ingredient, its document data looks like this:now I want to set the daysConsume array of index 1 to the value of field inventory, I use script as this:db.ingredient.updateOne({_id: ObjectId(\"63d27f53cc3fa8ed2594a6ee\")},[{$set:{\"daysConsume.1\":\"$inventory\"}}])after that the document looks like this:every item of the array was updated, this is not what I expected,\nI just want to update the item of index 1, as below:How can I only update the value of index 1?",
"username": "Jayden_Woo"
},
{
"code": "index = 1\nslice_before_index = { $slice : [ \"$daysConsume\" , 0 , index ] }\nslice_after_index = { $slice : [ \"$daysConsume\" , index + 1 , 31 ] }\nconcat_arrays = { $concatArrays : [ slice_before_index , [ \"$inventory\" ] , slice_after_index ] }\ndb.ingredient.update( query , [ { $set : { daysConsume : concat_array } } ] )\n",
"text": "One way is to use $concatArrays with 2 $slice-s…Try the following:You might need to adjust the values in the slice_… with + or - 1 to get exactly what you need. Your sample document started with all 0s so I am not too sure of which 0 I lose or gain.",
"username": "steevej"
},
{
"code": "slice_after_index = { $slice : [ \"$daysConsume\" , index , 31 ] }",
"text": "After some test, it look like slice_after_index needs to beslice_after_index = { $slice : [ \"$daysConsume\" , index , 31 ] }The +1 is removed compared to the original.",
"username": "steevej"
}
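For reference, the pattern from this thread can be written as a single updateOne with the slices inlined. This is only a sketch; as discussed above, verify the slice boundaries (index vs. index + 1 on the second slice) against your own data, since an off-by-one adjustment may be needed:

```javascript
// Sketch only: replace a single array element by position using a pipeline update.
const index = 1;
db.ingredient.updateOne(
  { _id: ObjectId("63d27f53cc3fa8ed2594a6ee") },
  [ { $set: { daysConsume: { $concatArrays: [
      { $slice: ["$daysConsume", 0, index] },       // elements before the index
      ["$inventory"],                               // the replacement value
      { $slice: ["$daysConsume", index + 1, 31] }   // elements after the index (check the +1 as noted above)
  ] } } } ]
)
```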
] | How to update array by index in aggregation? | 2023-01-26T14:05:13.829Z | How to update array by index in aggregation? | 709 |
null | [
"aggregation",
"queries",
"data-modeling",
"indexes"
] | [
{
"code": "$match$lookup$lookup_id_id$lookup",
"text": "Suppose that after $match stage, 10,000 documents will be passed into a $lookup stage. For each of those 10,000 documents, the $lookup will need to join based on its _id. The foreign field to join by will be indexed and almost unique - as it is made up of _id.Will each $lookup for each of the 10,000 documents have O(1) due to the index?So is it safe to assume that this pipeline will be fast and scalable?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "almost uniquethis does not sound good for this lookup.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Can you explain? Does it need to be unique in order to scale?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "_id_id",
"text": "$lookup (aggregation) — MongoDB Manualthough there are some limitations to using indexes in lookup, an index speeds up finding matching documents to O(1) if index keys are selected carefully to provide uniqueness.The _id field is one such key that should be unique alone.O(1) means your document is unique and instantly picked up from the index. if your _id is not unique, you will have more than one reading for the same key leading to an O(n) scale in the worst case (imagine all documents have the same id).",
"username": "Yilmaz_Durmaz"
},
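As a small illustration (the collection and field names here are hypothetical, not from the question), a lookup whose foreignField is the target collection's _id relies on the always-present unique _id index:

```javascript
// Sketch with hypothetical collections: "authorId" in books references "_id" in authors.
// Because _id is uniquely indexed, each per-document lookup is an index point read
// rather than a collection scan.
db.books.aggregate([
  { $match: { published: true } },
  { $lookup: {
      from: "authors",
      localField: "authorId",
      foreignField: "_id",
      as: "author"
  } }
])
```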
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Will $lookup still be slow if it is joining by _id? | 2023-01-27T16:42:45.742Z | Will $lookup still be slow if it is joining by _id? | 901 |
null | [
"production",
"golang",
"transactions"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.10.6 of the MongoDB Go Driver.This release resolves a panic when aborting a transaction. For more information please see the 1.10.6 release notes.You can obtain the driver source from GitHub under the v1.10.6 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team",
"username": "Preston_Vasquez"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Go Driver 1.10.6 Released | 2023-01-27T20:40:07.292Z | MongoDB Go Driver 1.10.6 Released | 1,143 |
null | [
"dot-net",
"java"
] | [
{
"code": "",
"text": "Hello,\nIs there any recommendation on what should be used to connect to Atlas from .NET, Java etc?As per the documentation on Atlas, the data API is a managed middleware layer which would sit between my app and the database. This seems like additional overhead and may add to the response time (to the browser/mobile) for my application.Any thoughts on this?Thanks,\nPrasad",
"username": "Prasad_Kini"
},
{
"code": "",
"text": "Hi Prasad,Your intuition is spot on. If you’re building a server-side application on .NET or Java then you should use the MongoDB C# (https://www.mongodb.com/docs/drivers/csharp/) and Java (https://www.mongodb.com/docs/drivers/java-drivers/) drivers respectively to do so. Building your own server-side backend maximizes for flexibility and allows you to optimize for performance.The Atlas App Services capabilities offer a customizable managed API tier which is essentially an alternative to building your own server-side backend. The Data API (https://www.mongodb.com/docs/atlas/app-services/data-api/generated-endpoints/) offers a specific set of data plane methods whereas Custom HTTPS Endpoints (https://www.mongodb.com/docs/atlas/app-services/data-api/custom-endpoints/) and the GraphQL option (https://www.mongodb.com/docs/atlas/app-services/graphql/) allow you to define custom business logic).Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Atlas Connectivity - Drivers vs Atlas Data API | 2023-01-20T13:39:22.440Z | Mongo Atlas Connectivity - Drivers vs Atlas Data API | 823 |
null | [] | [
{
"code": "",
"text": "I am using Terraform for the mongoatlas cluster provisioning. I am getting following error:\n404 (request “GROUP_NOT_FOUND”) No group with ID xxxxxxx exists. From UI, I can easily create a project, but Terraform provider mongodb/mongodbatlas is not accepting anything in project_id. Any suggestions?",
"username": "Surender_Rathore"
},
{
"code": "",
"text": "Hi @Surender_Rathore, welcome to the forums!I’m afraid this question is not something covered in the current version of the M312 course. I would suggest moving this question to the “Ops and Admin” section as this will allow the community and particularly those with Terraform to see and help you with this issue.Kindest regards,\nEoin",
"username": "Eoin_Brazil"
},
{
"code": "xxxxxxxmongodbatlas_cluster",
"text": "Hi @Surender_Rathore,404 (request “GROUP_NOT_FOUND”) No group with ID xxxxxxx existsIs this the exact error you are getting? If so, xxxxxxx is not a valid project ID. You can head over to your Project Settings page to locate the Project ID.Can you also share the mongodbatlas_cluster resource block in the terraform configuration file you’re using which generates the error you sent? Please redact any personal or sensitive information before sending it across here.Kind Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Happened to me because my session timed out and I tried to create again. And I couldn’t resolve the error.\nAlternative way to Deploy:\nDeploy it in the graphical interface (https://cloud.mongodb.com/) and then go back to the terminal (https://ti.instruqt.com/) and connect to the cluster via ssh:mongosh “mongodb+srv://clusterXXX.YYYY.mongodb.net/myFirstDatabase” --apiVersion 1 --username ",
"username": "Bruna_Rocha"
}
] | 404 (request "GROUP_NOT_FOUND") No group with ID exists | 2021-05-19T09:00:23.342Z | 404 (request “GROUP_NOT_FOUND”) No group with ID exists | 3,338 |
null | [] | [
{
"code": "",
"text": "Hello, so I’m wondering how to set up TLS/SSL on MongoDB because all the sources I’ve tried didn’t work, was wondering if you guys could give a clear example on how to do it?",
"username": "ElectricOrBloxtric_N_A"
},
{
"code": "",
"text": "Hi @ElectricOrBloxtric_N_A welcome to the community!Have you read the page Upgrade a Cluster to Use TLS/SSL? There’s a step-by-step instructions on how to do it on existing deployments.Note that this is just a procedure to turn it on. You need valid TLS/SSL certificates on the server and the client side to make this work, and it’s beyond the scope of the tutorial on how to obtain the correct certificates. Typically those certificates are released by a certificate authority and requires you to prove ownership of a domain name.Best regards\nKevin",
"username": "kevinadi"
},
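For reference, a minimal sketch of the relevant mongod.conf section is shown below. The file paths are placeholders, and the certificate and CA files must actually exist and be valid for your host, which is outside the scope of this thread. Clients would then connect with something like mongosh --tls --tlsCAFile <ca file> --host <hostname matching the certificate>.

```yaml
# mongod.conf (sketch only; paths are placeholders)
net:
  port: 27017
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongod.pem   # combined server certificate + private key
    CAFile: /etc/ssl/ca.pem                   # CA that signed the certificates
```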
{
"code": "",
"text": "I did that, but for some reason it can’t seem to find the “mongod.pem” file",
"username": "ElectricOrBloxtric_N_A"
}
] | How to secure the connection/enable TLS/SSL between the user & the MongoDB server? | 2023-01-26T18:27:29.539Z | How to secure the connection/enable TLS/SSL between the user & the MongoDB server? | 471 |
null | [
"queries"
] | [
{
"code": "db.companies.find( { offices: { $type: 3 } } )\n",
"text": "I am querying for documents where a specific field in each document, say “offices” for example, can be of type object or array. The problem is that when I am querying for documents with “offices” field of type object I am getting documents with “offices” array type. Here is my query;Are array type fields referred as object types as well? Or is it some other problem?",
"username": "Mashxurbek_Muhammadjonov"
},
{
"code": "c.find()\n{ _id: ObjectId(\"63d2849e4ab82600010427db\"), offices: [] }\n{ _id: ObjectId(\"63d284b84ab82600010427dc\"),\n offices: { city: 'Montreal' } }\nc.find( { offices: { $type: 3 } })\n{ _id: ObjectId(\"63d284b84ab82600010427dc\"),\n offices: { city: 'Montreal' } }\n",
"text": "According to documentation arrays are $type:4.Please provide sample documents that do not behave according to documentation.",
"username": "steevej"
},
{
"code": "",
"text": "I am sure you have something else because your query works well. check this Mongo playground (use type 4 for array)",
"username": "Yilmaz_Durmaz"
},
{
"code": "{\n \"company\": \"B\",\n \"offices\": [\n {\n city: \"Tashkent\"\n }\n ]\n },\n {\n \"company\": \"C\",\n \"offices\": {\n \"location\": \"somewhere\"\n }\n }\n",
"text": "Sorry, I forgot to note that the field I am querying by can be an object or an array of objects. I have tried my own case in the playground. I think that because the field contains objects when its of type array, the query still returning those documents.\nExamples of documents:",
"username": "Mashxurbek_Muhammadjonov"
},
{
"code": "unwind",
"text": "Alright, I see what you mean and I am also surprised to see this result. This may have something to do with the unwind operator, but logically that should not be happening in filtering.Here is the updated dataset on playground:\nPlayground - Array of Objects are filtered on both type 3 and 4@steevej, I am of no use now. do you have an idea and/or link for this? or can you tag other members that may help?",
"username": "Yilmaz_Durmaz"
},
{
"code": "$type$type",
"text": "Sorry, I read about the $type operator but forgot on case:",
"username": "Mashxurbek_Muhammadjonov"
},
{
"code": "",
"text": "Thank you very much for your support",
"username": "Mashxurbek_Muhammadjonov"
},
{
"code": "c.find( { $expr : { $eq : [ { $type : \"$offices\"} , \"object\" ] } } )\n",
"text": "Thanks for the question, the complete example and the answer’s link. I forgot the same case.Saru mo ki kara ochiruIf the goal is to filter of the type of the field offices rather than the element of offices when offices is an array. That is to only get company:A and company:C in the playground example. The aggregation version of $type can be use:",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb data types: Filtering Array vs Object vs Array of Object | 2023-01-26T13:32:16.733Z | Mongodb data types: Filtering Array vs Object vs Array of Object | 1,504 |
null | [] | [
{
"code": "",
"text": "Hi MongoDB for Startups community,Rockstart is a global accelerator-VC empowering purpose-driven founders. Through its €22m fund focused on Energy startups, Rockstart is dedicated to finding and investing in top tier founders. Rockstart Energy is for early-stage startups contributing to the energy transition and driving the transformation towards renewable, clean and low carbon energy solutions. The fund is looking to make new investments in the coming months in promising startups, specifically in these three area of interest:Applications page : https://www.rockstart.com/energy/energy-2021/",
"username": "Manuel_Meyer"
},
{
"code": "",
"text": "Hi Manuel, what a lucky coincidence, we will be joining Rockstart in Amsterdam next month!Want me to report back? ",
"username": "maxci"
}
] | Our MongoDB for Startups partner Rockstart is looking for startups in the energy sector | 2021-07-30T16:51:12.134Z | Our MongoDB for Startups partner Rockstart is looking for startups in the energy sector | 5,189 |
null | [
"aggregation"
] | [
{
"code": "{ \"_id\": 393657177, \"data\": [ { \"_id\": 2021, \"payout\": 2938.83633 }, { \"_id\": 2022, \"payout\": 3680.71977 }, { \"_id\": 2019, \"payout\": 3091.98733 }, { \"_id\": 2020, \"payout\": 3184.78084 } ]{ \"_id\": 393657177, \"year\": 2019, \"payout\": 3091.98733 },{ \"_id\": 393657177, \"year\": 2020, \"payout\": 3184.78084 },{ \"_id\": 393657177, \"year\": 2021, \"payout\": 2938.83633 },{ \"_id\": 393657177, \"year\": 2022, \"payout\": 3680.71977 }Preformatted text",
"text": "I have this nested document below:\n{ \"_id\": 393657177, \n \"data\": [ { \"_id\": 2021, \"payout\": 2938.83633 },\n { \"_id\": 2022, \"payout\": 3680.71977 },\n { \"_id\": 2019, \"payout\": 3091.98733 },\n { \"_id\": 2020, \"payout\": 3184.78084 } ]\n}`But I want to combine it into\n{ \"_id\": 393657177, \"year\": 2019, \"payout\": 3091.98733 },\n{ \"_id\": 393657177, \"year\": 2020, \"payout\": 3184.78084 },\n{ \"_id\": 393657177, \"year\": 2021, \"payout\": 2938.83633 },\n{ \"_id\": 393657177, \"year\": 2022, \"payout\": 3680.71977 }How do I do that?Preformatted text",
"username": "Willy_Hermanto"
},
{
"code": "aggregateaggregate pipeline stagesaggregate pipeline operatorsdb.collection.aggregate([\n {\n $unwind: \"$data\"\n },\n {\n $sort: {\n \"data._id\": 1\n }\n },\n {\n $project: {\n \"_id\": 1,\n \"year\": \"$data._id\",\n \"payout\": \"$data.payout\"\n }\n }\n])\n[\n {\n \"_id\": \"393657177\",\n \"payout\": 3091.98733,\n \"year\": 2019\n },\n {\n \"_id\": \"393657177\",\n \"payout\": 3184.78084,\n \"year\": 2020\n },\n {\n \"_id\": \"393657177\",\n \"payout\": 2938.83633,\n \"year\": 2021\n },\n {\n \"_id\": \"393657177\",\n \"payout\": 3680.71977,\n \"year\": 2022\n }\n]\n",
"text": "Hi Willy_Hermanto, welcome to the community.Please try the aggregate method provided by the mongodb to get the result. Have a look at the aggregate docs for aggregate pipeline stages and aggregate pipeline operators.Output:Hoping it is useful!!\nRegards",
"username": "R_V"
}
] | Merging nested documents | 2023-01-27T06:15:44.779Z | Merging nested documents | 314 |
null | [
"storage"
] | [
{
"code": "",
"text": "Hi there,I currently have an M10 cluster running v4.4.18 with a 20GB database residing on it. Atlas takes a daily backup snapshot and keeps it for 7 days. Recently I have been trying to download snapshots to my local machine to do some testing only to find that all the snapshots are corrupt. When I extract the folder from the download and connect to it using mongod I get errors and then the process terminates (I can’t upload the logs as I’m a new user of the community).The common error I get from all these snapshots is:\n{“t”:{\"$date\":“2023-01-26T13:01:13.273+00:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:-31802,“message”:\"[1674738073:273058][9008:140719278152144], file:sizeStorer.wt, WT_SESSION.open_cursor: int __cdecl __win_file_read(struct __wt_file_handle *,struct __wt_session *,__int64,unsigned __int64,void *), 288: C:/databases/productionv2\\sizeStorer.wt: handle-read: ReadFile: failed to read 4096 bytes at offset 24576: Reached the end of the file.\\r\\n: WT_ERROR: non-specific WiredTiger error\"}}If it’s of any relevance the sizeStorer.wt file is exactly 4096 bytes.Right now I have zero faith that any snapshots are of actual use if I ever need to restore to my cluster. With nearly 400,000 users and associated data in the database this is of real concern.Can anybody please advise as to what might be going on and possible solutions. This sort of undermines the exact reason why we’re currently paying for Atlas.Thanks,\nPaul",
"username": "Paul_Kenyon"
},
{
"code": "",
"text": "Hi @Paul_Kenyon welcome to the community!Sorry to hear you’re having issues with Atlas. Since you’re using a dedicated M10 instance, could you contact Atlas in-app chat support team about this issue? They’ll have the resources to escalate any issues for you and also have more visibility into what’s happening with your backups.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin. Thanks for taking time to respond. I did contact the in-app chat and they referred me to here.To answer my problem for anyone else with the same issue…After I’ve downloaded the snapshot I extract the TAR file, and then extract the database from the TAR using 7-Zip on Windows. After comparing a working downloaded snapshot with the most recent ones, I noticed that the largest collection (over 8GB) was showing as 0 bytes, even though the overall snapshot was the right size. I discovered that 7-Zip can’t handle files in a TAR over 8GB by the looks of it (although not documented anywhere), and doesn’t extract them, hence the corrupt database (which is also why a repair removes this collection). So, fingers crossed, all the snapshots are okay. (My previous working snapshots must have this collection at just under 8GB.)I still have an issue to do with compatibility versions but I’ve raised that with the in-app support team.Thanks,\nPaul.",
"username": "Paul_Kenyon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Corrupt Atlas backup snapshots | 2023-01-26T20:39:57.195Z | Corrupt Atlas backup snapshots | 1,176 |
[
"compass"
] | [
{
"code": "",
"text": "Pls I am having issues creating my database in mongodb,I am using replit IDE, no errors in my “console”, “Shell”, my codes are working fine, it said succes when I tried using curl link to create my database in “Shell” so I dont know what is wrong I tried using schema code for linking stuff didn’t work even mongodb compass, isnt working tried configuring open ssh server to see if the issue would be resolved still didn’t work, I have tried soo many things that I am speechless self on what to do\n\n20230120_1058281800×4000 962 KB\n\nI would have uploaded the others but this one explains it more, what I meant, and also I wanted to upload them but i cant because I am a new user, new users can’t upload more than 1 media, I am not because this is my first time visiting mongodb with my phone, I use my laptop mostly for that to visit mongodb,",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "please give more details:",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Okay,good day sir, u asked me for my database server atlas, so I will provide u with a picture of what I think might be the answer\n\nIMG-20230121-WA00291920×3413 628 KB\n\n2. You adked me for which app is my backend app using, okay thats index.js, so it’s javascript I am using4 You asked me if have tried putting logs in endpoints, like console.log e.t.c do u mean in the api url, if u meant so or u didn’t pls can u provide me with examples?Thanks so much for providing feedback to my 1st question, sorry for the long note I typed just I just wanted to explain in details and if that’s what u meant",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "I have found a matching repl to yours: review-backend - Replitfrom the looks, I would say you have not provided a movie id in your url. when you used “curl” you have inserted a movie with an id of 12. so your url should end with “…/api/v1/reviews/movie/12” to get reviews for the movie.there are few issues with it:more importantly, that repl has a couple of design issues on defining endpoints. for example, it loads all reviews at the beginning without a filter, so take it as an example only.now about endpoints. since you are new, here is a quick intro. it is CRUD meaning Create-Read-Update-Delete, and endpoints are the names and methods. for example for a movies database",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "this link, for example, runs the repl I linked and returns few reviews for a movie of id 1: https://review-backend.beaucarnes.repl.co/api/v1/reviews/movie/1and this one is yours, with results: https://review-backend.dglory1.repl.co/api/v1/reviews/movie/12",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Sorry sir pls it didn’t work that api/v1/reviews/movies/12\nIt just said-> {“error”: “not found”}\nIt didn’t create any database on mongodb",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Sir pls for the last post u sent about a brief intro to endpoints can I use that I typed from the https to the post method for the no 1 it was displaying errors",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "app.use(\"/api/v1/reviews\", reviews).../api/v1/reviews/....../api/v1/reviews/movie/12.../api/v1/reviews/new.../api/v1/reviews/123abc",
"text": "api/v1/reviews/movies/12this endpoint is reading all reviews for a given movie with an id.check the “server.js” file. app.use(\"/api/v1/reviews\", reviews) tells you that to operate on reviews your url must have this form .../api/v1/reviews/.... then in “api/reviews.route.js” file you see the rest of the endpoints to append to this url:so your frontend must communicate with this backend server through these endpoints. check “api/reviews.controller.js” to see how requests are processed. when using POST you need to send an object in the request body similar to the one you used with curl in your first screenshot (req.body.movieId, req.body.user, req.body.review). GET and delete needs only an id in the url (req.params.id). PUT needs both an id in the url and object in the body.by the way, this server is open to public as you can see, and anyone can read/write/delete on these endpoints right now. so when you finish learning basics, you may want to learn security basics after.",
"username": "Yilmaz_Durmaz"
},
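For example (the URL and body field names below are taken from the repl discussed in this thread; the review values themselves are placeholders), creating a review from frontend JavaScript could look something like this:

```javascript
// Sketch: POST a new review; the body fields must match what
// reviews.controller.js reads (req.body.movieId, req.body.user, req.body.review).
fetch("https://review-backend.dglory1.repl.co/api/v1/reviews/new", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ movieId: 12, user: "exampleUser", review: "Great movie!" })
})
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error(err));
```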
{
"code": "",
"text": "Okay that means u were saying that there is something wrong with my server.js or something that I am not able to create a database?\nIs there anything I can do to resolve it? My system is down so I have to charge it when I buy light, but still then is there anything I can do in order to resolve it?\nBut sir is there anything wrong with my codes so far, I have cross-checked soo many times still no errors, like I am confused on what to do, is there anyway I can connect with that man beaucamps?",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "are you the owner of dglory1 account? else can you give your repl’s address?except design flaws, there is nothing wrong with both repls I linked. the problem is your expectation it seems.these repls are coded only to create a review with a movie id, get reviews for that movie, and 3 more operations on a single review, that is all. you cannot do anything else. you need to crawl the source code, learn how endpoints work, adapt/expand the source so you can do more.you don’t have to follow that source alone. there are many resources to write your own backend server with different purposes. there are many “To do” application examples also with their frontend applications. MERN stack for example, use Express.js (as these repls do) for backend and React.js for frontend.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I suggest taking the following course. It is free and provided by MongoDB.MongoDB Node.js Developer Learning Path | MongoDB University",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Good day sir I watched the videos on how to create a database using mongodb, so my issues is that the review isn’t created in my website, for people to post review about a particular movie",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "are you the owner of dglory1 account?You did not answer that part, but you do not seem so. I am guessing you were using someone else’s repl all these times because your first screenshot shows the successful creation of a review. you cannot create something on your database using a url that does not belong to you.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "The repli is mine the dglory account is my repli I created the password and username for it so it is mine, its not someone own I am using",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "I will provide u with my repli address soon my system is down now",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "the dglory account is my replAlright then, I checked your list of repls and this one seems your frontend.\nhtmlfrontend - Dglory1 - ReplitThen, these are the main problems:",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Okay I understood what u were saying when u talked about I using the wrong variables, u also said I should refactoring my code, how will I move html changing parts to their function pls is there an example",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "const url = new URL(location.href);\nconst movieId = url.searchParams.get(\"id\")\nconst movieTitle = url.searchParams.get(\"title\")\n\nconst APILINK = \"https://review-backend.dglory1.repl.co/api/v1/reviews/\";\n\ndocument.addEventListener('DOMContentLoaded', returnReviews )\n\nfunction returnReviews() {\n fetch(APILINK + \"movie/\" + movieId)\n .then((res) => res.json())\n .then((data) => {\n console.log(data);\n const main = document.getElementById(\"section\");\n data.forEach((review) => {\n const div_card = document.createElement(\"div\");\n div_card.innerHTML = `\n <div id=\"${review._id}\">\n <p>\n <strong>Review: </strong>${review.review}\n <strong>User: </strong>${review.user}\n </div>`\n main.appendChild(div_card);\n });\n });\n}```\n",
"text": "Here, check this code (click the arrow)Use this instead of your current “movie.js” and your reviews will show up in the page, proving your backend is already working.you just need to learn more aboutfor all these steps, there are countless resources on the internet, however, they are not always in one place. And not all resources might be suitable for your learning pace.I will make an exception and suggest “Dave Gray” from youtube ( DaveGrayTeachesCode) for things including HTML, CSS, Javascript and MongoDB.you can move to any other resource at any time if his style is not for you.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Okay thanks I will try that out",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
] | Database creation | 2023-01-20T11:46:08.897Z | Database creation | 1,056 |
|
null | [
"node-js",
"sharding"
] | [
{
"code": "{\"t\":{\"$date\":\"2022-04-12T08:58:49.368+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22567, \"ctx\":\"establishCursors cleanup\",\"msg\":\"Ending idle connection because the pool meets constraints\",\"attr\":{\"hostAndPort\":\"be-db-0.xxxx:27018\",\"numOpenConns\":41}}\n",
"text": "Hi,\nseeing high CPU on all cores on all primaries on a three shard cluster with 6 distributed mongos. All 5.0.6, latest nodejs (4.5.0). All replicasets are 3 voting 2 non voting and distributed globally on AWS with private network peeriing between them.\nThere are about 20 nodejs applications at each mongos, with a few changeStreams active on reasonably busy sharded collections (~300 updates/second).\nWe have tried various max/minPoolSize options on the applications with no effect. We have also tried setting a number of the mongod/mongos.conf setParameters with little or no change. Each primary mongod is seeing approximately 1200 concurrent connections, and a very high reconnection rate. Have searched everywhere without luck so hoping someone can assist. On mongos, there is a lot of logs like:and then reconnections to the same host pretty much straight away. Have set the maxPoolSize/minPoolSize to 100/10 respectively and there’s still a lot of reconnections.\nCustomer’s cluster has been rather unstable for the past 24 hours. Our workload is almost constant and continuous (RAM/CPU usually close to a flat line) and have checked that has not changed. Cannot identify any underlying AWS network issues and have even thought some noisy neighbor has appeared on the hardware but we’re clutching at straws.\nAny assistance or suggestions very much appreciated.\nRob",
"username": "Rob_Gillan"
},
{
"code": "",
"text": "getting the same issue, did we get any fix?",
"username": "Kapil_Gupta"
}
] | Primary CPU load jump with continuous connection idle refresh pool constraints | 2022-04-12T09:14:33.718Z | Primary CPU load jump with continuous connection idle refresh pool constraints | 2,312 |
null | [
"mongoose-odm",
"connecting"
] | [
{
"code": "const mongoose = require(\"mongoose\");\n\nconst port = process.env.PORT || 5000;\n\nconst connectionParams = {\n useNewUrlParser: true,\n useCreateIndex: true,\n useUnifiedTopology: true \n}\n\nmongoose\n .connect(process.env.ATLAS_URL, connectionParams)\n .then(() => console.log( 'Database Connected' ))\n .catch(err => console.log( err ));\n\n",
"text": "Encounter this error when trying to connect to the cluster on mongoDB atlas.\nMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted. Make sure your current IP address is on your Atlas cluster’s IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/I have whitelisted my IP and I have tried every possible methods I found on the internet, they just don’t work.",
"username": "Lim_Jie_Xi"
},
{
"code": "",
"text": "i have also tried every possible way from the internet, the only solution that works to me is to change internet connection to any other bandwidth like mobile hotspot",
"username": "Ritesh_Gangwar"
}
] | Could not connect to the servers in your MongoDB Atlas cluster | 2021-05-05T14:57:06.370Z | Could not connect to the servers in your MongoDB Atlas cluster | 5,909 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi,I know this isn’t probably where this should be but I really need help.I am using Node.JS\nI am running a local server\nI am connecting to MongoDB shared Cluster\nI want to have a static Ip for when connecting to the cluster\nThe static Ip will be used on my network to allow my server to connect to the cluster without opening every ip and port.if you have any information on how to get a static ip that will never change for a mongodb cluster please let me know.",
"username": "Reece_Barker"
},
{
"code": "",
"text": "Hi @Reece_Barker welcome to the community!Do you mean you want to have static IP for you local server so you don’t have to open the database server to all IPs?In general terms, since MongoDB strongly recommends the use of replica sets for production deployment, using static IPs for the databases will make things harder in the long run. Static IPs will make maintenance harder, and will undermine the high-availability functionality of a replica set. This is mentioned in the Operations Checklist:Use hostnames when configuring replica set members, rather than IP addressesHowever if your goal is to have a static IP for your app, this depends on your internet service provider. Many providers allow you to have static IPs at additional cost.Having said that, these networking concerns are one layer below the services provided by the app and MongoDB. Both will just connect to/from any network interface you deploy them on.Best regards\nKevin",
"username": "kevinadi"
}
] | Static IP MongoDB | 2023-01-26T14:09:27.688Z | Static IP MongoDB | 743 |
null | [
"aggregation",
"crud"
] | [
{
"code": "keyskeykeys[0]keys[]keys[]$unsetkeys$set myCollection.updateMany({}, [\n { $set: { key: \"keys.0\" ? \"keys.0\" : undefined } },\n ]);\n",
"text": "I have a collection of documents whose schema contains an array property (e.g., let’s call it keys) that holds an array of strings.\nHowever, I have since realized that this array is only ever going to contain 1 string or be empty, so I want to update all the documents in my collection such that instead of having an array property called ‘keys’, each one will have a string property called key that either has the value of keys[0] if keys[] contained an element, or undefined if keys[] was empty, and then I want to delete the keys property.I know that I can use the $unset operator to get rid of the keys property at the end. My question is: for the part about setting a new property called key based on what keys contains – how do I specify it when using the $set operator?\nI want to do something like the following:and I am aware that the above syntax is of course wrong. What is the right syntax for this operation?",
"username": "Francesca_Ricci-Tam"
},
{
"code": "await myCollection.updateMany({ keys: { $size: 1 } }, [\n { $set: { key: { $first: \"$keys\" } } },\n ]);\nawait myCollection.updateMany({}, [{ $unset: \"keys\" }]);\n",
"text": "Just to close the issue, thanks to a tip from a colleague, I was able to do the following:",
"username": "Francesca_Ricci-Tam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Setting a new property in a document based on elements of another array property | 2023-01-25T02:37:33.377Z | Setting a new property in a document based on elements of another array property | 905 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "I have a document with a property of type “object” referring subdocs of different schemas. We are using BsonSerializer to deserialize it, but this class is throwing a BsonInternalException error for subdocs with properties of type decimal: Invalid Bson TypeIt looks like decimal deserialization is not supported for decimal types. When should we expect to be supported?DB Sample\n{\n…\nCurrent: { amount: NumberDecimal(“1”) }\n…\n}\nCSharp Sample\nBsonSerializer.Deserialize(bsonDoc.Current.AsBsonDocument.ToString())",
"username": "Daniel_Comanici"
},
{
"code": "BsonInternalExceptionJsonReaderBsonType.Decimal128BsonDocumentToString()BsonDocumentvar bsonDoc = new BsonDocument { { \"amount\", new Decimal128(42.42m)} };\nConsole.WriteLine(bsonDoc[\"amount\"]);\nbsonDoc[\"amount\"]BsonValuebsonDoc[\"amount\"].AsDecimalbsonDoc[\"amount\"].BsonTypeIsDecimalBsonValueJsonReader",
"text": "Hi, @Daniel_Comanici,Welcome to the MongoDB Community Forums. I understand that you’re encountering BsonInternalException when trying to deserialize a JSON string containing a decimal. I have reproduced the issue and filed CSHARP-4496.This bug is due to JsonReader missing support for BsonType.Decimal128. However you don’t need to render the BsonDocument to JSON (which is what ToString() is doing) and then reparse it. You can simply access the BsonDocument.The return value of the bsonDoc[\"amount\"] is a BsonValue. If you need a specific type, you can call bsonDoc[\"amount\"].AsDecimal. You can also inspect bsonDoc[\"amount\"].BsonType if you need to switch on different types. There are also IsDecimal and other methods to determine if the BsonValue is a particular type.Hopefully this helps solve your immediate problem. Please follow CSHARP-4496 for the fix to JsonReader.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | BsonSerializer - Cannot Deserialize NumberDecimal due to Invalid BspnType error | 2023-01-26T19:20:03.753Z | BsonSerializer - Cannot Deserialize NumberDecimal due to Invalid BspnType error | 751 |
[
"data-modeling"
] | [
{
"code": "",
"text": "I was researching on how mongodb tech companies implement theirs paginate cursors and I saw some auto increment on load more data response. I mean, there’s nothing wrong with having auto-incrementing values in collections, but why the id? we do not send the _id for security reasons or these collections of user, groups, etc. don’t use ObjectID or maybe mongodb use a relational database, would be funny but I guess is not the reason?? I need an answer from the mongodb team or I won’t be able to sleep easy for the rest of my life\nimage1829×1000 273 KB\n",
"username": "Darwin_Velez"
},
{
"code": "id",
"text": "Hi @Darwin_Velez welcome to the community!To be honest I’m not sure I understand your question. You mentioned that you wanted to see how MongoDB implements pagination queries? The screenshot and the id you mentioned seems to show this forum, which is using Discourse, a popular forum software that powers many community forums (not only ours!). This is not written by MongoDB, so I cannot comment on how they choose to implement pagination.If pagination using MongoDB is your question, there are some worthwhile reading material, such as:Hope this helps.Best regards\nKevin",
"username": "kevinadi"
}
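For what it is worth, a common cursor-style ("load more") pattern in MongoDB itself, independent of how Discourse implements it, is to page on an indexed unique field such as _id rather than an auto-incrementing counter. A minimal sketch in mongosh (the collection name is hypothetical):

```javascript
// Sketch: keyset ("load more") pagination on _id
const pageSize = 20;

// first page
const firstPage = db.posts.find({}).sort({ _id: 1 }).limit(pageSize).toArray();

// the client remembers the last _id it received...
const lastId = firstPage[firstPage.length - 1]._id;

// ...and asks only for documents after it
const nextPage = db.posts
  .find({ _id: { $gt: lastId } })
  .sort({ _id: 1 })
  .limit(pageSize)
  .toArray();
```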
] | Does mongodb team save _id and id fields? | 2023-01-25T02:24:48.814Z | Does mongodb team save _id and id fields? | 1,036 |
|
null | [] | [
{
"code": "",
"text": "I’m trying to add directoryPerDB: true but when i put it into mongod.conf ther service wont start… i already change the dbPath and i only have the dbs that comes with the instalation…Help",
"username": "Mauricio_Urrego"
},
{
"code": "mongodump",
"text": "HI @Mauricio_UrregoThis can only be set on a brand new(empty) data directory.If you haven’t inserted or configured anything yet, you can remove all the files in the data directory.If you have data to preserve from the database then use mongodump to create a copy first.The configuration options page goes into a little more including replica sets.",
"username": "chris"
},
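A minimal sketch of the relevant mongod.conf section (the dbPath value is a placeholder; as noted above, the data directory must be new and empty, so dump existing data with mongodump first and restore it afterwards):

```yaml
# mongod.conf (sketch only)
storage:
  dbPath: /var/lib/mongodb     # must point at a new, empty data directory
  directoryPerDB: true
```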
{
"code": "",
"text": "This can only be set on a brand new(empty) data directoryThank you so much… its works",
"username": "Mauricio_Urrego"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | directoryPerDB: true | 2023-01-25T16:41:00.734Z | directoryPerDB: true | 687 |
null | [
"aggregation",
"dot-net"
] | [
{
"code": "BsonDocumentvar copyAtoB = new BsonDocument { [\"$set\"] = new BsonDocument { [\"B\"] = \"$A\" } };\nvar setAudit = new BsonDocument\n{\n [\"$set\"] = new BsonDocument\n {\n [\"props.audit.updatedBy\"] = new BsonString(\"me\"),\n [\"props.audit.updatedTimestamp\"] = new BsonDateTime(DateTime.UtcNow)\n }\n};\n\nvar pipeline = new EmptyPipelineDefinition<LeEntity>()\n .AppendStage<LeEntity, LeEntity, LeEntity>(copyAtoB)\n .AppendStage<LeEntity, LeEntity, LeEntity>(setAudit);\n\nvar update = Builders<LeEntity>.Update.Pipeline(pipeline);\nclass LeEntity\n{\n // ... some other props here\n\n [BsonElement(\"A\")]\n public string Value { get; init; } = null!;\n\n [BsonElement(\"B\")]\n public string DefaultValue { get; init; } = null!;\n\n [BsonElement(\"props\")]\n public EntityProps Props { get; init; } = null!;\n}\n\nclass EntityProps\n{\n [BsonElement(\"audit\")]\n public AuditDetails AuditDetails { get; init; } = null!;\n}\n\nclass AuditDetails\n{\n [BsonElement(\"updatedBy\")]\n public string UpdatedBy { get; init; } = null!;\n\n [BsonElement(\"updatedTimestamp\")]\n public DateTime LastUpdate { get; init; }\n}\n\"B\"\"$A\"\"props.audit.updatedBy\"BsonElement",
"text": "As far I’m aware, there’s no way how to use higher level API to write MongoDB aggregation pipeline (using entity types with generics) - so when I use aggregation pipeline, I have to manually build BsonDocuments representing pipeline, e.g.:while entity may looks something like:Is there any helper method which could help me generate values such as \"B\", \"$A\" or \"props.audit.updatedBy\" using my entity type? (effectively reading BsonElement attributes and generating full path)",
"username": "Zdenek_Havlin"
},
{
"code": "IBsonSerializer<T>BsonSerializerRegistryusing System;\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization;\nusing MongoDB.Bson.Serialization.Attributes;\n\nvar serializer = BsonSerializer.LookupSerializer<Person>() as IBsonDocumentSerializer;\nserializer.TryGetMemberSerializationInfo(\"Name\", out var serializationInfo);\nConsole.WriteLine(serializationInfo.ElementName); // writes \"nm\"\n\nclass Person\n{\n public ObjectId Id { get; set; }\n [BsonElement(\"nm\")]\n public string Name { get; set; }\n}\n$getFieldBsonDocumentscoll.Aggregate()coll.AsQueryable()collIMongoCollection<Person>var client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<Person>(\"people\");\n\nvar query1 = coll.Aggregate().Match(x => x.Name == \"James\");\nvar query2 = coll.AsQueryable().Where(x => x.Name == \"James\");\nvar query3 = from p in coll.AsQueryable()\n where p.Name == \"James\"\n select p; \n",
"text": "Hi, @Zdenek_Havlin,Welcome back to the MongoDB Community Forums.I understand you are trying to find the serialized names for your entities’ fields. The mapping of C# properties/fields to database field names is handled by IBsonSerializer<T> classes, which are registered in the global BsonSerializerRegistry.While the registered serializer for a type will correctly map the C# property/field name to the database field name including any conventions, mapping attributes, and custom class maps, it would be up to you to properly traverse the object graph and build up a dotted path. Note that field names can now contain dots themselves and thus you may need to use $getField if your application uses dots in database field names.You mention that you are doing this as you have to handroll your aggregation pipelines using BsonDocuments. I find this surprising. In the vast majority of cases, you should be able to write aggregations either using Fluent Aggregate (coll.Aggregate()) or LINQ (coll.AsQueryable()) where coll is of type IMongoCollection<Person>.Rather than handrolling your aggregation pipelines, I would suggest troubleshooting why your class model cannot be serialized. Feel free to share a self-contained repro of the serialization problem so we can better understand the problem that you’ve encountered.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Determining full mongo property name (path) from [BsonElement] | 2023-01-25T15:33:52.784Z | Determining full mongo property name (path) from [BsonElement] | 1,321 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "We created a realm/app service which fails to start with a BSONObjectTooLarge exception, which is very puzzling since Atlas is supposed to restrict the documents size to 16MB already. Why would Sync fail? Also, how would we identify which document is the root cause as we have an exiting database with many many collections and documents?Synchronization between Atlas and Device Sync has been stopped, due to error:\nfailed to register trigger after 10 attempts: recoverable event subscription error encountered: (BSONObjectTooLarge) PlanExecutor error during aggregation :: caused by :: BSONObj size: 17206212 (0x1068BC4) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: { _data: “826386F28C000000172B022C0100296E5A1004433CBC7ED98647F188BBE90FB8FA55E2463C5F6964003C42543248000004” }",
"username": "Daniel_Comanici"
},
{
"code": "Use DB_NAME\ndb.COLL_NAME.aggregate([ \n\t{ $project: { max: { $bsonSize: \"$$ROOT\" } } },\n\t{ $sort: { max: -1} },\n\t{ $limit: 10 },\n])\n{\n \"clusterTime\": Timestamp(1669788300, 23),\n \"clusterTimeReadable\", \"2022-11-30T06:05:00Z\",\n \"version\": 1,\n \"tokenType\": \"EventToken\",\n \"txnOpIndex\": 0,\n \"fromInvalidate\": false,\n \"uuid\": \"433cbc7e-d986-47f1-88bb-e90fb8fa55e2\",\n \"eventIdentifier\": {\"_id\":\"BT2H\"}\n}\n{\"_id\":\"BT2H\"}",
"text": "This is unfortunately a long-standing limitation with MongoDB Change Streams where the limit of a document is 16 MB but the limit for a Change Event is also 16 MB. This means that if you have a document that is say 14 MB and you update a large field (say 4 MB), then the change event will contain the PreImage (14 MB) and the Update Description (4 MB) and the Change Event will be 18MB.It seems like it should be possible to just “skip” this event, but unfortunately due to the way that MongoDB handles these, it is actually impossible to move past this event without the possibility of skipping events. Additionally, we cant be sure which object caused this issue (since we cant see the full event), so we would “lose” those changes and could end up corrupting data by applying changes to stale objects.The MongoDB server team is in the process of fixing this, and once they have completed the work we will enable users who are on the most recent version of MongoDB (likely 7.0) to be able to move past this error.One thing that could be helpful is to identify which documents are causing this issue. I have found this query to be helpful for identifying large documents in a collection:I was able to decode the resume token you posted which has the following information:My bet would be that the object causing the issue is {\"_id\":\"BT2H\"} (cant find which namespace though). It is not for certain this is the document causing issues, but it is very likely that it is.",
"username": "Tyler_Kaye"
},
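A quick way to confirm the suspect is to measure that document's size directly with $bsonSize (a sketch; the database and collection names are placeholders, since the namespace cannot be read from the resume token):

use DB_NAME
db.SUSPECT_COLLECTION.aggregate([
  { $match: { _id: "BT2H" } },
  { $project: { sizeBytes: { $bsonSize: "$$ROOT" } } }
])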
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Device Sync does not fully start due to error BSONObjectTooLarge | 2023-01-26T19:36:07.154Z | Device Sync does not fully start due to error BSONObjectTooLarge | 957 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi,\nI frequently used to run ‘mongo’ with --eval parameter, mostly in some automation scripts, for example simple ‘mongo --eval 1’ to see if mongod is already fully started.\nWhen I try the same with mongosh the execution of command never ends. It prints some welcome information, then the literal ‘1’ and freezes. I can’t stop it by either ctrl+c, ctrl+d or ctrl+z. Only killing it from another shell helps.\nIT is the same with any other command.\nIs this something that you know about? Maybe I’m doing something wrong?Thanks",
"username": "Andrzej_Podgorski"
},
{
"code": "",
"text": "@Andrzej_Podgorski How are you killing the process (i.e. with which signal)? Can you see if there’s a spike in CPU usage? Which version of mongosh are you using? Can you look at the log file (~/.mongodb/mongosh/_log) and see if there’s any relevant information in there?",
"username": "Anna_Henningsen"
},
{
"code": "",
"text": "I’m able to kill it even with SIGTERM. I don’t see any spike in CPU usage.\nI see a problem on both mongosh 1.0.4 and 1.0.5 on Ubuntu 18.04 on a few different machines. Usage of --nodb or --norc does not change the behavior.\nI don’t see anything relevant in the log. The last entry is\n{“level”:30,“time”:1630050655432,“pid”:15175,“hostname”:\"\",“name”:“mongosh”,“msg”:“mongosh:evaluate-input {“input”:“1”}”,“v”:1}",
"username": "Andrzej_Podgorski"
},
{
"code": "",
"text": "And I have the same on Windows using powershell",
"username": "Andrzej_Podgorski"
},
{
"code": "",
"text": "any ideas here folks?",
"username": "Andrzej_Podgorski"
},
{
"code": "mongosh --eval \"print(1); exit();\"mongosh --eval \"print(1); process.exit();\"",
"text": "@Andrzej_Podgorski The problem is that this is hard to debug without a reproduction – I’m also running Ubuntu but haven’t had an issue like this so far.I guess you could explicitly use something like mongosh --eval \"print(1); exit();\" or mongosh --eval \"print(1); process.exit();\" to explicitly stop the shell? I know it’s not a great solution, but I’m also not sure how else to move forward here.If you are willing to debug this in depth (which we are grateful for but obviously don’t expect of you) on Ubuntu, running mongosh under strace might be pretty helpful (although the log it generates would presumably be huge).",
"username": "Anna_Henningsen"
},
{
"code": "--evalmongosh --quiet --eval 'print(1);'kill <PID>mongosh --quiet --eval 'print(1); exit();'$ mongosh --quiet --eval 'print(1); exit();'\n1\n^CError: Asynchronous execution was interrupted by `SIGINT`\nstrace -ttmongo",
"text": "Hello,\nI have a similar issue. After executing command(s) passed through --eval, mongosh freezes. But in my case it only does it occasionally and it doesn’t freeze forever. Sometimes it freezes for around 30 seconds, sometimes for several minutes. Often it freezes several times in succession and then works fine for a long time (hours), until it breaks again (although I look at it for several days now, I didn’t find anything it may be related to, yet).When I run mongosh --quiet --eval 'print(1);' and it freezes, it can’t be stopped by Ctrl-C, but kill <PID> from another shell works, exaxctly as Andrzej_Podgorski described.However, when I run mongosh --quiet --eval 'print(1); exit();', it CAN be stopped by Ctrl-C, in which case it prints this message and exits:I tried to run it with strace -tt several times. Outputs can be found on the links below (as a new user, I cannot upload it here). The first one is of a run when it freezes for a very long time (I killed it after 30 minutes), the second one, for comparison, of a run when it freezes for just 15-20 seconds.Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.I know this will be very difficult to debug. Hopefully the error message and strace output will help.I’m running it on CentOS 7 (two instances with similar configuration, it happens on both).\nmognod v. 5.0.3\nmongosh v. 1.0.7\nLegacy mongo shell on the same system works well.",
"username": "Vaclav_Bartos"
},
{
"code": "",
"text": "Sorry Anna, I somewhat forgot to respond there, but I think that @Vaclav_Bartos described all of my issues and provided needed traces. Thanks @Vaclav_Bartos for that!",
"username": "Andrzej_Podgorski"
},
{
"code": "",
"text": "Hi all,This issue has gone stale with no real progress. However, the problem seems to be getting worse. With the adoption of mongosh under bitnami charts, more and more people are complaining about it (see [bitnami/mongodb] mongosh connections (used in several probes) freeze and consuming a lot of cpu and memory ressources · Issue #10264 · bitnami/charts · GitHub). How can we get some traction here? @Anna_HenningsenThx,\nAndrei",
"username": "Andrei_Neagoe"
},
{
"code": "exit()process.exit(0)process.exit(0)",
"text": "I have the same issue, with mongodb v5.0.2 and mongosh v1.0.5, running with the official docker image(Docker).\nIn my case, if I put exit() at the end of my script, it will also freezes but can be terminated with Ctrl-C, just as @Vaclav_Bartos said; And if I put process.exit(0) at the end of my script, it never freezes, which is my currently workaround.\nI guess this problem is related to that the JS event loop is not empty after excuting all codes in the script, so the JS process does not terminate. I made that guess because terminating the JS process with process.exit(0) can solve this problem.",
"username": "Starrah_N_A"
},
{
"code": "",
"text": "I just installed mongosh and am looking at refactoring some scripts to use mongosh instead of mongo for automation, and I ran into this problem very hard. I’m responding to this comment specifically because not only am I getting mongosh hanging on evals as other folks are, but I’m getting it even when the eval includes a terminal exit(); call. When I use mongosh interactively instead of calling it in a shell script, I still get it to hang when I issue an exit or quit command. I’m using mongosh 1.6.1 on a MongoDB 6.0.1 server running on an AWS CentOS 7 instance. The only way I can exit mongosh reliably is with ctrl-C, which isn’t something I can use for scripting and automation.This is a major problem for me; the new shell program is (still) unusable but the old one isn’t supported.I’d be willing to work with you (or anyone else at MongoDB) on debugging in depth given the show-stopping nature of this bug. You can contact me with messaging on the forum but I’m happy to share emails or anything else for a tighter communication loop.",
"username": "Tyler_Elkink"
},
{
"code": "",
"text": "@Tyler_Elkink The ideal scenario is probably still a way of fully reproducing this, but I also just tried this on an AWS CentOS 7 machine and couldn’t get it to fail, so yeah, debugging this more directly sounds good. You can contact me at [email protected] and we can try to either debug this over email or set up a short call to look at the problem in a “live” environment.",
"username": "Anna_Henningsen"
},
{
"code": "",
"text": "For those who find this thread in the future; CentOS 7 seems to be the common named element across the “mongosh hangs” problem; since Oracle’s dropping support for 7 shortly and has destabilized future releases this problem should age out. I’m working with a MongoDB employee on tracking down the actual bug, but for those of us dealing with the problem in the meantime, and in my instance at least;TL;DR using process.exit() instead of exit will successfully terminate mongosh.",
"username": "Tyler_Elkink"
},
{
"code": "db._mongo._instanceState.evaluationListener.ioProvider.close = () => {};process.exit()exit",
"text": "Updating with further fixes;\nthanks to Anna’s help, any of the following fixes mongosh failing to terminate on exit (hanging). Tested this on CentOS 7, using MongoDB version 5.0.9, and mongosh 1.5.0, though I suspect any MongoDB 5 and mongosh <1.6 will have the same issue.If you can manage it, I’d suggest the upgrade as the best option.",
"username": "Tyler_Elkink"
}
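For anyone hitting this from automation on an affected version, the workaround that comes out of this thread can be wrapped up as a one-liner (the connection string is a placeholder):

mongosh "mongodb://localhost:27017" --quiet --eval 'print(1); process.exit(0);'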
] | Mongosh --eval freezes the shell | 2021-08-26T19:02:58.268Z | Mongosh –eval freezes the shell | 8,289 |
null | [
"python",
"beta"
] | [
{
"code": "",
"text": "We are pleased to announce the 4.4.0b0 beta release of PyMongo - MongoDB’s Python Driver. This release adds beta support for Queryable Encryption range queries, improvements in type support, and updated our PyMongoCrypt min version to 1.5.0.See the changelog for a high level summary of what’s new and improved or see the 4.4 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongo 4.4.0b0 documentation\nChangelog: Changelog\nSource: GitHub - mongodb/mongo-python-driver at 4.4.0b0 Thank you to everyone who contributed to this release!",
"username": "Julius_Park"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | PyMongo 4.4.0b0 Release | 2023-01-25T23:47:00.678Z | PyMongo 4.4.0b0 Release | 1,495 |
null | [
"dot-net",
"crud"
] | [
{
"code": "MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server. ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream. at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken) at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken) --- End of inner exception stack trace --- at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol1 protocol, ICoreSession session, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.ExecuteAsync[TResult](IRetryableWriteOperation1.ExecuteBatchAsync(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1 requests, BulkWriteOptions options, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl2 funcAsync, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionBase1 filter, UpdateDefinition3 bulkWriteAsync)",
"text": "Hello, I’m facing a problem with an update in millions of documents. The stack is this:MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server. ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream. at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken) at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken) --- End of inner exception stack trace --- at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.ReceiveMessageAsync(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken) at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocolAsync[TResult](IWireProtocol1 protocol, ICoreSession session, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.ExecuteAsync[TResult](IRetryableWriteOperation1 operation, RetryableWriteContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase1.ExecuteBatchAsync(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase1.ExecuteBatchesAsync(RetryableWriteContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatchAsync(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteAsync(IWriteBinding binding, CancellationToken cancellationToken) at MongoDB.Driver.OperationExecutor.ExecuteWriteOperationAsync[TResult](IWriteBinding binding, IWriteOperation1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.ExecuteWriteOperationAsync[TResult](IClientSessionHandle session, IWriteOperation1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.BulkWriteAsync(IClientSessionHandle session, IEnumerable1 requests, BulkWriteOptions options, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.UsingImplicitSessionAsync[TResult](Func2 funcAsync, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionBase1.UpdateManyAsync(FilterDefinition1 filter, UpdateDefinition1 update, UpdateOptions options, Func3 bulkWriteAsync)This updates millions of documents, so at first, we thought it maybe was too much to handle at once (because it always happens with the same “part” of the routine), so we split in multiple updates. We’ve tried with 250k at once and now with 100k, but we still have the error in some cases. The cases have around 26 and 11 million documents, respectively, but as said before, we are only updating 100k each time. So the question is, could this be related to the amount of documents being updated or could be something else? Do you have any leads that can help us?Thanks in advance!",
"username": "Douglas_Breda"
},
{
"code": "InsertManyAsync",
"text": "Observing the same behaviour as mentioned above so I think it is still relevant.Our issue is inserting documents (InsertManyAsync), with the chunk of 50k documents it throws an error, but when split to the chunks of 10k documents, it passes successfully. But it decreases the performance of application.\nIt is not occasional issue, but reproduceable.We use MongoDB.Driver version 2.18.0\nDatabase is in Atlas, with replica sets. With local replica-set database in Docker there is not problem (only for testing purposes).",
"username": "Milos_Koscelansky"
}
] | Attempted to read past the end of the stream in update routine | 2022-07-13T16:39:46.546Z | Attempted to read past the end of the stream in update routine | 2,881 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "We’re using GridFS to store files such as PDFs, Word documents etc. We’ve got a cluster setup and our app stores documents using a GUID for the filename. We’ve recently noticed some documents are missing when trying to retrieve them. I’ve looked at the code and it looks resilient in that once a document is uploaded a Find request is made to it, to ensure it was uploaded and returned to the calling function.Initially I thought it maybe a network issue, so I induced a connection problem by simply turning off the instance of MongoDB connected with the app during the upload of a file which caused an exception as expected which causes the document not to be uploaded at all.In the case of the missing documents we know they have been uploaded because there are references to the GUID from another record which is held in another database which is created after the document is uploaded to MongoDB.Any ideas what it could be? I wonder if it’s a replication issue or the data is getting corrupted, could these potentially be causes for the missing documents?",
"username": "Imran_Azad"
},
{
"code": "",
"text": "I wonder if it’s a replication issueDid a replica set election occurred between the time you have your confirmation (using your find request) and the time you discovered that some files are missing? It could be that the oplog was not propagated to the secondary that became primary. This should not happen if you use write majority and you should since you have implemented some kind of mechanism to ensure it is written. Write majority is a safer mechanism.the data is getting corruptedYou would have more problem that with specific documents.So if you used majority writes or if there was no election and there is no issues except a few documents, then the most logical explication is that someone or something deleted the documents.",
"username": "steevej"
}
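A sketch of the write-majority suggestion above: requesting w=majority in the connection string makes every write, including GridFS uploads, wait for majority acknowledgment (host and credentials are placeholders; the same write concern can also be set through the driver's client settings):

mongodb+srv://user:password@cluster-host/?w=majority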
] | Missing files using GridFS | 2023-01-25T17:11:42.451Z | Missing files using GridFS | 647 |
null | [] | [
{
"code": " \"clusterTime\": { \"$timestamp\": { \"t\": 1674202191, \"i\": 1 } }",
"text": "I was using Confluent Cloud Atlas Source Connector which is dumping JSON data to kafka having\nclusterTime as epochMillis\nbut when I am using MongoDB Source Connector it is giving in below format.\n \"clusterTime\": { \"$timestamp\": { \"t\": 1674202191, \"i\": 1 } }Same goes for other date fields. Can someone know How to make it similar as Atlas Source Connector produces ?",
"username": "Piyush_Mujavadiya"
},
{
"code": "",
"text": "This might have to do with how you set your Output Kafka Record format\n\nimage1330×1488 78.2 KB\nWhat is your configuration setting?",
"username": "Robert_Walters"
}
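For comparison, a sketch of the self-managed source connector settings that control the output record format (property values are illustrative and depend on the connector version; output.json.formatter changes how BSON types such as dates and timestamps are rendered, and SimplifiedJson avoids the extended-JSON wrappers like $timestamp):

{
  "output.format.value": "json",
  "output.json.formatter": "com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson"
}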
] | Confluent MongoDB Atlas Source vs MongoDB Source Connector | 2023-01-20T10:14:01.535Z | Confluent MongoDB Atlas Source vs MongoDB Source Connector | 711 |
null | [
"aggregation"
] | [
{
"code": "db.getCollection('events').aggregate([\n{ $match: { 'action':'signed',\n 'quality':{ $in: [ 'QES', 'AES', 'SES' ] },\n 'timestamp':{'$gte': ISODate(\"2023-01-01T00:00:00Z\"), '$lt': ISODate(\"2023-02-01T00:00:00Z\")},\n 'paid_by':\"2fa321ccaeab411b95066bd7b6f4588d\"}},\n{ $project: { _id: 0, 'quality':1}},\n{ $group : { _id: '$quality', \"count\":{$sum:1}}}\n])\ndb.getCollection('events').createIndex(\n{ \n \"action\" : 1,\n \"quality\" : 1,\n \"timestamp\" : 1,\n \"paid_by\" : 1,\n \"user.business_id\" : 1,\n \"user.username\" : 1,\n \"legislation\" : 1,\n \"tsp_type\" : 1\n},\n { \n name:\"signed_quality_timestamp_paidby_legislation_tsptype\",\n partialFilterExpression: { \n 'action': 'signed'\n }\n }\n)\n",
"text": "Hi,\nI have a very slow aggregation (240 sec), but an index exists, which should match the query perfectly.\nIf I add a hint to the aggregation with the index name it is blazing fast (0.2 sec) on 120M records.Now, if I compare the explain output between the aggregation with and without hint, they look the same.\nBoth use a IXSCAN on the same index.What am I missing?MongoDB 5.0.9QueryIndex:",
"username": "Waldemar_Dick"
},
{
"code": "",
"text": "If your usual use-case is the query shared where you have equality match on paid_by and range match on timestamp, your index would be better with paid_by before timestamp.I don’t think your $project helps but it may negatively impact.",
"username": "steevej"
}
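A sketch of what that reordering could look like, keeping the equality fields (action, paid_by, quality) ahead of the range field (timestamp) and the same partial filter as the original index (the index name is illustrative):

db.getCollection('events').createIndex(
  { "action": 1, "paid_by": 1, "quality": 1, "timestamp": 1 },
  {
    name: "signed_paidby_quality_timestamp",
    partialFilterExpression: { "action": "signed" }
  }
)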
] | Aggregation slow without hint, but winning plan same as with hint | 2023-01-25T07:47:24.114Z | Aggregation slow without hint, but winning plan same as with hint | 604 |
null | [
"dot-net"
] | [
{
"code": " [BsonRepresentation(BsonType.Decimal128)]\n public double? ProductPrice{ get; set; }\n",
"text": "I see issue while storing decimal value in mongodb. I am using C# .Net drivers in my application to insert update in mongodb.I have below property in class with type double.In Code we are assigning value for ProductPrice as 38.59 but in collection it got stored like below which is unexpected. I want to get stored whatever value we are assigning to property. It should get stored without extra decimal points.“ProductPrice” : NumberDecimal(“38.590000000000003”)",
"username": "Ershad_Shaikh"
},
{
"code": "",
"text": "why do you use “double” instead of “decimal”?it does not matter how you store data, it is about how you interpret that data. after all, a decimal number is just another floating point number, but with extra care taken by the language it is interpreted in, which is C# in your case with “decimal” data type. just be careful not to connect with a language having loose or no decimal support.Represents a decimal floating-point number.The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.",
"username": "Yilmaz_Durmaz"
}
] | Decimals are not getting stored properly in database using C# .Net | 2023-01-26T06:33:34.464Z | Decimals are not getting stored properly in database using C# .Net | 1,173 |
null | [
"node-js",
"replication",
"mongoose-odm"
] | [
{
"code": "",
"text": "Hi!Me and my team trying to investigate an error for about a month and sadly without any luck We have cluster of mongodb on premise and an app that is written in nodejs using mongoose(in nestjs).\nWe have around 10 pods of that app and everything works ok until suddenly some of the pods getting “server selection timeout, no primary replicaset”.\nThe weird thing is that if I kill the pods everything works ok again and then after minutes/hours it happens again from different pods.We’re really lost and will be happy for every help or something you can think of ",
"username": "Ben_Hason"
},
{
"code": "",
"text": "have you checked",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks for your reply!Another thing that I forgot to write its that all of our operations to DB happens in transaction.",
"username": "Ben_Hason"
},
{
"code": "",
"text": "I tried to ping the db from the pods that having the error and it works so it doesn’t seems like a network problem…",
"username": "Ben_Hason"
},
{
"code": "",
"text": "check mongodb server config for connection limits. mongodb server logs would also indicate this but connection drops may not be apparent. check the lines where connections accepted/dropped in the timespans of your errors.if limit is 10k and your pods create 100 singleton connections there is no problem. but they create, lets say, 10 individual connection per request, you can only serve 1000 requests at a time, and 1001th person will get timeout.as far as I know, mongoose should be handling connections as singleton.our operations to DB happens in transactionthis might be the issue. or not. a transaction itself should not affect, but check how mongoose uses connections during a transaction.",
"username": "Yilmaz_Durmaz"
}
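A sketch of the singleton-connection idea with mongoose (the URI and numbers are illustrative): connect once at application startup and reuse the default connection everywhere instead of connecting per request.

const mongoose = require('mongoose');

await mongoose.connect('mongodb://db-host:27017/app', {
  maxPoolSize: 100,               // upper bound on sockets this pod keeps open
  serverSelectionTimeoutMS: 30000 // fail fast instead of hanging indefinitely
});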
] | Server selection timeout suddenly happens, mongoose | 2023-01-25T20:59:27.524Z | Server selection timeout suddenly happens, mongoose | 1,180 |
null | [
"dot-net",
"xamarin"
] | [
{
"code": "public class Product : RealmObject\n{\n public string Name { get; set; }\n public decimal Price { get; set; }\n}\n\npublic class ProductsViewModel\n{\n private readonly Realm _realm;\n\n public IEnumerable<Product> Products { get; set; }\n public ProductsViewModel(Realm realm)\n {\n _realm = realm;\n }\n\n public void LoadProducts()\n {\n Products = _realm.All<Product>();\n }\n}\n\n//The xaml page\n<ListView\n ItemsSource=\"{Binding Products}\"\n CachingStrategy=\"RecycleElement\">\n <ListView.ItemTemplate>\n <DataTemplate>\n <TextCell Text=\"{Binding Name}\"/> \n </DataTemplate>\n </ListView.ItemTemplate>\n</ListView>\nProductProductProductViewModelpublic class ProductViewModel\n{\n public ProductViewModel(Product product)\n {\n Name = product.Name;\n FormattedPrice = $\"{product.Price:C}\";\n }\n public string Name { get; set; }\n public string FormattedPrice { get; set; }\n}\nProductsViewModelpublic class ProductsViewModel\n{\n private readonly Realms.Realm _realm;\n\n public IEnumerable<ProductViewModel> Products { get; set; }\n public ProductsViewModel(Realms.Realm realm)\n {\n _realm = realm;\n }\n\n public void LoadProducts()\n {\n Products = _realm.All<Product>().Select(p => new ProductViewModel(p));\n }\n}\nProductViewModel",
"text": "We have a pretty complex application that uses MongoDB App Services and Realm.Realm database has a fantastic feature of lazy-loading data when the data is actually accessed. This has pretty exciting effects. Imagine you have a screen that shows a list of items. Those items can be tens of thousands. In the traditional database or API approach, one would implement some sort of data pagination - either automatic, when the user reaches the end of the list, the next chunk of data is loaded, or manual - the user manually selects the page that they should load. With realm, you can simply fetch all of the items of an entity and directly bind those items to a ListView control in Xamarin. For exampleIn the above example, even if there are tens of thousands of products, the ListView will load instantly and will scroll smoothly because of the lazy-loading magic.The issue arises when we need to transform the native realm object Product into a different class. Imagine, we want to create a new view model out of the Product class, called ProductViewModel :This is a trivial example where we simply format the currency into a string. In our real-world application, the scenario is much more complicated, but it’s good to show the issue we are facing. Once would guess that changing the ProductsViewModel to something like this would do the job:However, this approach is naive and will essentially break the lazy-loading aspect of the list. What it will do is iterate over every item in the list; for every item, it will fetch the name and the price of the product, will create a new ProductViewModel, and only after all of these operations are completed, the list will show any data. If the product collection is extensive, this will take a significant amount of time to complete.How can we approach solving this problem the right way? Essentially, we need some reactive collection transform operator which does the following.If we do not find a solution, we will have to add artificial pagination so that the transformed objects are created only for a subset of items, which won’t take a lot of time. But this is a super undesired approach, as it breaks all of the benefits that the Realm database has.P.S.\nIf we go further, there’s also another issue we do have, which probably is outside of this specific topic and is a more complicated variant of this issue. Imagine you have a list that should show data from 2 different Realm collections. How would we manage this? We need a mechanism to merge 2 Realm collections reactively.",
"username": "Gagik_Kyurkchyan"
},
{
"code": "ProductViewModelProduct",
"text": "Hi @Gagik_Kyurkchyan Unfortunately what you are trying to do is not easily achievable with realm. It obviously depends on how complex your use case is, but I can think of two general solution:Finally, both these approaches starts to get complex if you need two-way data binding.If you have more complicated use cases that you can show we can give a look and maybe give you some suggestions about what you could do with it.",
"username": "papafe"
},
{
"code": "",
"text": "@Gagik_Kyurkchyan still regarding this, I’ve opened an issue in our repo to track something that could be useful in your case.",
"username": "papafe"
},
{
"code": "",
"text": "Hey @papafeNice to see you again And thanks for the quick reply as always.I wouldn’t actually assume that Realm would support something like this out of the box. But rather, there might be some open-source libraries that are meant to solve this problem.I am investigating Dynamic Data currently, which looks promising. I will try digging deeper into it and see if I can create a fully reactive collection pipeline with it.Anyway, thanks for the reply one more time, I will keep digging and update this thread with the outcomes.",
"username": "Gagik_Kyurkchyan"
},
{
"code": "Lazy<>ProductProductViewModelProductsViewModelProductsViewModelpublic class ProductsViewModel : IDisposable\n{\n private readonly Realm _realm;\n\n private readonly CompositeDisposable _bindings = new();\n\n //--1--\n private readonly ObservableCollectionExtended<Lazy<ProductViewModel>> _products;\n\n //--2--\n public ReadOnlyObservableCollection<Lazy<ProductViewModel>> Products { get; }\n\n public ProductsViewModel(Realm realm)\n {\n //--3--\n _products = new();\n Products = new(_products);\n _realm = realm;\n }\n\n public void LoadProducts()\n {\n //--4--\n var results = (IRealmCollection<Product>)_realm.All<Product>().OrderBy(p => p.Name);\n\n _bindings.Add(results\n //--5--\n .ToObservableChangeSet<IRealmCollection<Product>, Product>()\n //--6--\n .Transform(product => new Lazy<ProductViewModel>(() => new ProductViewModel(product)))\n //--7--\n .Bind(_products)\n .Subscribe());\n }\n\n public void Dispose()\n {\n //--8--\n _realm?.Dispose();\n _bindings.Dispose();\n }\n}\nNameValue.NameLazy<ProductViewModel<ListView\n ItemsSource=\"{Binding Products}\">\n <ListView.ItemTemplate>\n <DataTemplate>\n <TextCell Text=\"{Binding Value.Name}\" />\n </DataTemplate>\n </ListView.ItemTemplate>\n</ListView>\nObservableCollectionExtendedProductViewModelObservableCollectionDynamicDataDynamicDataLazy<ProductViewModel>ProductViewModelProductViewModelObservableCollectionExtendedObservableCollectionReadOnlyObservableCollectionDynamicDataDynamicDataIQueryableIRealmCollectionIRealmCollectionINotifyCollectionChangedDynamicDataToObservableChangeSetIObservable<IChangeSet<>>DynamicDataDynamicDataProductLazy<ProductViewModel>_productsSubscribeDynamicDataLazy",
"text": "Hey @papafe and anybody who would face this issue, we’ve solved the issue by using a combination of DynamicData and Lazy<>.We have the same Product and ProductViewModel. Let’s modify the ProductsViewModel logic to ensure everything is reactive and lazy. Here’s how the ProductsViewModel would look like:And inside the page xaml, instead of binding to Name we now bind to Value.Name, as our items are of type Lazy<ProductViewModelLet’s break down the changes.This approach guarantees end-to-end reactiveness and lazy loading. Whenever a new product is added to the database, the DynamicData magic transformation will ensure that the target collection is updated as well. And as we create a Lazy object, the data will actually be loaded whenever it’s accessed. And you won’t have to implement any pagination or lazy loading at the UI level. It’s as simple as binding to a collection. ENJOY ",
"username": "Gagik_Kyurkchyan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Preserve lazy-loading reactivity of realm collections after transforming them to another object | 2023-01-23T13:49:29.767Z | Preserve lazy-loading reactivity of realm collections after transforming them to another object | 1,180 |
null | [
"atlas-cluster",
"atlas"
] | [
{
"code": "atlas cluster upgrade <cluster-name> --tier M20 --diskSizeGB 100 --mdbVersion 6.0400 (request \"INVALID_JSON_ATTRIBUTE\") Received JSON for the providerSettings attribute does not match expected format.",
"text": "Hi,I am trying to upgrade my cluster tier from M10 to M20 with atlas cli.\nI am using the command:\natlas cluster upgrade <cluster-name> --tier M20 --diskSizeGB 100 --mdbVersion 6.0and I am getting: 400 (request \"INVALID_JSON_ATTRIBUTE\") Received JSON for the providerSettings attribute does not match expected format.The docs are very clean and easy to understand but I can’t figure why this is not working.\nI logged in with an api key with permissions of Project Cluster Manager and I can describe my dbs , The database is a dedicated type.atlascli version is: 1.4.0",
"username": "Ofek_Ifergan"
},
{
"code": "cluster upgradecluster update",
"text": "Hi @Ofek_Ifergancluster upgrade is to upgrade a cluser from a shared tier (M0 - M5) to a dedicated cluster (M10+).You will want to use the cluster update command to change the tier on a dedicated instance.",
"username": "chris"
},
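For reference, the command shape that ends up working in this exchange looks like the following (the cluster name and values are placeholders, mirroring the flags from the original command; exact flag support depends on the Atlas CLI version):

atlas cluster update <cluster-name> --tier M20 --diskSizeGB 100 --mdbVersion 6.0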
{
"code": "",
"text": "Thanks!\nIt’s working now",
"username": "Ofek_Ifergan"
}
] | Modifing database tier | 2023-01-25T14:39:55.333Z | Modifing database tier | 1,017 |
null | [
"aggregation",
"queries",
"data-modeling",
"crud"
] | [
{
"code": "db.absences.insertMany( [\n { \"_id\" : 1, \"student\" : \"Ann Aardvark\", sickdays: [ new Date (\"2018-05-01\"),new Date (\"2018-08-23\") ] },\n { \"_id\" : 2, \"student\" : \"Zoe Zebra\", sickdays: [ new Date (\"2018-02-01\"),new Date (\"2018-05-23\") ] },\n] )\ndb.holidays.insertMany( [\n { \"_id\" : 1, year: 2018, name: \"New Years\", date: new Date(\"2018-01-01\") },\n { \"_id\" : 2, year: 2018, name: \"Pi Day\", date: new Date(\"2018-03-14\") },\n { \"_id\" : 3, year: 2018, name: \"Ice Cream Day\", date: new Date(\"2018-07-15\") },\n { \"_id\" : 4, year: 2017, name: \"New Years\", date: new Date(\"2017-01-01\") },\n { \"_id\" : 5, year: 2017, name: \"Ice Cream Day\", date: new Date(\"2017-07-16\") }\n] )\ndb.absences.aggregate( [\n {\n $lookup:\n {\n from: \"holidays\",\n pipeline: [\n { $match: { year: 2018 } },\n { $project: { _id: 0, date: { name: \"$name\", date: \"$date\" } } },\n { $replaceRoot: { newRoot: \"$date\" } }\n ],\n as: \"holidays\"\n }\n }\n] )\n{\n _id: 1,\n student: 'Ann Aardvark',\n sickdays: [\n ISODate(\"2018-05-01T00:00:00.000Z\"),\n ISODate(\"2018-08-23T00:00:00.000Z\")\n ],\n holidays: [\n { name: 'New Years', date: ISODate(\"2018-01-01T00:00:00.000Z\") },\n { name: 'Pi Day', date: ISODate(\"2018-03-14T00:00:00.000Z\") },\n { name: 'Ice Cream Day', date: ISODate(\"2018-07-15T00:00:00.000Z\")\n }\n ]\n},\n{\n _id: 2,\n student: 'Zoe Zebra',\n sickdays: [\n ISODate(\"2018-02-01T00:00:00.000Z\"),\n ISODate(\"2018-05-23T00:00:00.000Z\")\n ],\n holidays: [\n { name: 'New Years', date: ISODate(\"2018-01-01T00:00:00.000Z\") },\n { name: 'Pi Day', date: ISODate(\"2018-03-14T00:00:00.000Z\") },\n { name: 'Ice Cream Day', date: ISODate(\"2018-07-15T00:00:00.000Z\")\n }\n ]\n}\n",
"text": "I am referring to an uncorrelated $lookup. Each document passing through this stage will receive the same array computed in the $lookup. What if the actual size of this array exceeds 100mb? Does it matter? An array is a reference type, so does the size of the reference contribute to the 100mb limit or the size of the actual value of the array?output of $lookup:",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Hi @Big_Cat_Public_Safety_Act ,The limit for one document in MongoDB is for 16mb. This means that if a query correlate data more than 16mb within that document it should fail as far as I know.The 16mb limit is for the entire constructed document on any stage.The 100MB data is for the processed data on each stage (the ones in memory) , match or sort should usually perform on indexes and do not count in this limit. If you need to increase that limit you can use disk paging by allowing: {allowDiskUse : true}Tanks\nPavel",
"username": "Pavel_Duchovny"
},
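In the shell, the option mentioned above is passed as the second argument to aggregate (the collection and pipeline are whatever you are running):

db.collection.aggregate(pipeline, { allowDiskUse: true })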
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | 100mb aggregation limit - does object fields contribute their reference size or actual value size to this limit? | 2023-01-21T23:34:32.054Z | 100mb aggregation limit - does object fields contribute their reference size or actual value size to this limit? | 665 |
null | [
"swift"
] | [
{
"code": "",
"text": "I’m trying to use a XCFramework built with a private pod, that have another public dependencies inside. In that dependencies, all of them seem to work fine, except RealmSwift. When I try to import the XCFramework I built in my class, the following errors appear:I have been searching but I don’t know why this error is showing because it doesn’t seem like it has anything related to my problem, but I ran out of ideas.I have tried install Realm both by Cocoapods and XCFramework. Same result",
"username": "Ruben_Velazquez"
},
{
"code": "",
"text": "I had the same issue, and I came across this formum post\nhttps://developer.apple.com/forums/thread/123253There’s Realm framework and also there’s Realm class in RealmSwift framework, this confuses the linker. According to the post, I added the script at the bottom of Run Script section in Build Phases in building xcframework script:find [Path to .xcframework] -name “*.swiftinterface” -exec sed -i -e ‘s/Realm.RealmSwiftObject/RealmSwiftObject/g’ {} ;",
"username": "Masatoshi_Nishikata"
}
] | Realm XCFramework nested in another XCFramework error | 2023-01-11T09:22:09.444Z | Realm XCFramework nested in another XCFramework error | 1,225 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hi,I’m kind of new working with the C# mongodb driver, basically i want to create a console application the monitors long running queries and emailing it to a certain group who monitors our server. just wanted to ask if how can i execute the command below using a mongodb driver… I’m already ok with the connection to the server , executing the command below im having trouble at… i want to execute the query below and save the result on a file.db.currentOp({ active: true,op : { $ne : “none” },secs_running : { $gt : 60 },})?.inprog",
"username": "Daniel_Inciong"
},
{
"code": "$currentOp$currentOpadmin$currentOpAppendStageMatchusing System;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n\nvar client = new MongoClient();\nvar admin = client.GetDatabase(\"admin\");\n\nvar query = admin.Aggregate()\n .AppendStage<BsonDocument>(\"{$currentOp:{ }}\")\n .Match(x => x[\"active\"] == true && x[\"op\"] != \"none\" && x[\"secs_running\"] > 60);\nforeach (var result in query.ToList())\n{\n Console.WriteLine(result);\n}\n",
"text": "Hi, @Daniel_Inciong,Thank you for your question. You are using a shell function, which executes the $currentOp aggregation stage. In C#, you can do the same thing by executing $currentOp against the admin database. The .NET/C# Driver doesn’t have a helper function for $currentOp, but you can use AppendStage to add it to the pipeline and then add one or more Match stages to filter the results:Hopefully this gets you started in the right direction.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Working with MongoDB Driver on VB.NET | 2023-01-25T07:35:29.261Z | Working with MongoDB Driver on VB.NET | 1,462 |
[
"python",
"data-api"
] | [
{
"code": "'Content-Type: application/json' \n'Access-Control-Request-Headers: *' \n'api-key: <Data API Key>'\n",
"text": "Hello, I hope this is the right topic.\nSo I am trying to post something to my data api that would insert a key. Everything is fine and everything works fine when I do it with python and even on my other lua end.\nHere are some screenshots of the error.Captured with Lightshot\nAlso tried using this header format but it returns a syntax error.And here you can see that it works just fine with python. Screenshot by Lightshot",
"username": "Sin_S"
},
{
"code": "",
"text": "Hi @Sin_S, welcome to the community.\nCan you please post the error that you are getting? I am unable to locate the error in both of your screenshots.PS: Please post the error and the code by pasting the text directly, it will make the post easily discoverable if someone faces a similar issue in the future.Thanks & Regards,\nSourabh Bagrecha,\nMongoDB Community Team",
"username": "SourabhBagrecha"
},
{
"code": "fetch(\"https://data.mongodb-api.com/app/data-ysfzy/endpoint/data/beta/action/insertOne\", {\n mode: \"no-cors\",\n method: \"POST\",\n headers: {\n 'Content-Type': 'application/json',\n 'Access-Control-Request-Headers': '*',\n 'api-key': 'KEYHERE'\n },\n data: JSON.stringify({\n \"collection\": \"Keys\",\n \"database\": \"modlogs\",\n \"dataSource\": \"RGH\",\n \"document\": {\"linkkey\": \"gayy\"}\n })\n}).then (res =>{\n console.log(res)\n})\n",
"text": "Hello. Thank you for replying. Here is my codeAnd here is the error that I get in console\n\n1018×287 13.4 KB\nHere is the error that I get in MongoDB https://i.imgur.com/q0x6dQQ.pngAnd yes the API-Key that I am using is right.",
"username": "Sin_S"
},
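Two things worth noting about the snippet above (this is an observation about the Fetch API, not about the Data API itself): fetch expects the request payload in body, not data, and mode: "no-cors" prevents custom headers such as api-key and Content-Type from being sent, which would explain the "no authentication methods were specified" response. A sketch of the same request with those two changes (the key and document values are placeholders):

fetch("https://data.mongodb-api.com/app/data-ysfzy/endpoint/data/beta/action/insertOne", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Access-Control-Request-Headers": "*",
    "api-key": "KEYHERE"
  },
  // body (not data) carries the request payload
  body: JSON.stringify({
    collection: "Keys",
    database: "modlogs",
    dataSource: "RGH",
    document: { linkkey: "example-value" }
  })
}).then(res => res.json()).then(console.log);

Note that calling the Data API directly from a browser may still run into CORS restrictions; that is a separate question from the missing headers.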
{
"code": "",
"text": "Hello, was the problem solved?Regards,S.",
"username": "Sebastian_Mirski"
}
] | Posting returns no authentication methods were specified | 2022-03-04T06:58:08.597Z | Posting returns no authentication methods were specified | 4,699 |
|
null | [
"react-native"
] | [
{
"code": "app.deleteUser(user){\"errorCode\": 1, \"message\": \"user_not_found\"}<UserProvider>fallbackfallback<UserProvider>app.removeUser(user)** or **user.logOut()app.deleteUser()User must be logged in to be deleted.app.deleteUser()Exception in HostFunction: User is no longer valid.app.removeUser(user); app.deleteUser(_.clone(user));",
"text": "Hello, I’m running into an error using the [app.deleteUser]\n(https://www.mongodb.com/docs/realm/web/create-delete-user/#:~:text=Call%20the%20App.,addition%20to%20clearing%20local%20data.) and useUser hook.I am trying to add functionality to delete a realm user (from the server and from the device). I have tried the following (and encountered the following errors):app.deleteUser(user)\nWhen I call this function, the user gets deleted and the app crashes.I encounter this error: {\"errorCode\": 1, \"message\": \"user_not_found\"}I am having trouble tracing this error, but so far as I can tell, the children of <UserProvider> are getting rendered—I have a fallback, which is not rendering despite the user deletion succeeding.When I restart the app, the app uses the fallback passed to <UserProvider>.app.removeUser(user)** or **user.logOut()\nIf I call either of these functions followed by an app.deleteUser(), I encounter this error: User must be logged in to be deleted.If I add a loading state and call either of these functions after app.deleteUser(), I encounter this error: Exception in HostFunction: User is no longer valid.app.removeUser(user); app.deleteUser(_.clone(user));\nThe app continues to run, but the intended behavior does not occur; the user does not get deleted. (The user merely gets removed/logged out, depending on the function I call before app.deleteUser.)So… I need help: How do I delete a user and ensure that the renders its fallback component? (Post-deletion, it seems the app still thinks a user object exists.)Thanks,\nAlex",
"username": "Alexander_Ye"
},
{
"code": "",
"text": "Updated to realm-js v.0.4.3 and it appears to work now. I was going to something weird with Realm.UserState (which considers a “Removed” state), but now I don’t have to.",
"username": "Alexander_Ye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [Realm React Native] useUser , Delete User Error | 2023-01-25T19:26:11.969Z | [Realm React Native] useUser , Delete User Error | 925 |
null | [
"connector-for-bi"
] | [
{
"code": "",
"text": "Hello everyone,I am trying to setup ODBC on Mongo Standalone Linux Server which is an EC2 Instance with OS ‘Amazon Linux 2’.I have installed mongodb-bi-connector and my mongosqld service is running.I have downloaded the required drivers which were in a compressed file (mongodb-connector-odbc-1.4.3-ubuntu-14.04-64.tar.gz) form here Releases · mongodb/mongo-bi-connector-odbc-driver · GitHubHere’s my mongosqld.conf:\nsystemLog:\npath: ‘/home/ec2-user/mongodb-bi-linux-x86_64-amzn64-v2.14.5/mongosqld.log’\nnet:\nbindIp: ‘127.0.0.1’\nport: 3307Extracted files libmdbodbca.so & libmdbodbcw.so and placed it in /usr/local/lib directory as it is told in the documentaionCreated odbc.ini file and below is the config file[MongoDBODBC]\nDESCRIPTION = ODBC for MongoDB BI Connector\nDRIVER = /usr/local/lib/libmdbodbcw.so\nTRACE = Off\nTRACEFILE = stderr\nREADONLY = yes\nSERVER = myec2instace_public_ip (have also tried mongodb’s hostname)\nPORT = 3307\nUSER =\nPASSWORD =\nDATABASE = stagingDBTested the odbc connection with unixODBC\nexecuted the command iusql -v MongoDBODBCAnd got the following error\n[unixODBC][Driver Manager]Data source name not found, and no default driver specified\n[ISQL]ERROR: Could not SQLDriverConnectCan someone please tell me what am I missing here and what could the possible fixes to resolve this issue.Thank you.",
"username": "junaid_hulhub"
},
{
"code": "/etc/odbc.ini~/.odbc.ini",
"text": "Seem like there is a syntax error or something.The file is readable by the executing user? I just updated permission on /etc/odbc.ini and I get the same error when my user cannot read the file.If you’re not aware you can also create a ~/.odbc.ini that is just for the user vs creating a system wide one.",
"username": "chris"
},
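To see exactly which ini files the driver manager reads (and therefore which file needs to be readable by the executing user), unixODBC can print its search paths:

odbcinst -j                      # shows the odbc.ini / odbcinst.ini paths in use
ls -l /etc/odbc.ini ~/.odbc.ini  # check ownership and read permissions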
{
"code": "",
"text": "UPDATE\nInstead of executing the command sudo iusql -v MongoDBODBCI tried sudo isql -v MongoDBODBC\nand then the connection was made\n±--------------------------------------+\n| Connected! |\n| |\n| sql-statement |\n| help [tablename] |\n| quit |\n| |\n±--------------------------------------+Now the problem is I tested my odbc.ini and commented several lines from the configuration.\nBut the command sudo isql -v MongoDBODBC was still working.the SERVER and PORT variables were also commented in the configuration but still the connection was made. Now this is confusing.However if I execute the command with iusql Igot the same error which I posted earlier.\nI have updated the permissions of odbc.ini and have also tried creating the file ~/.odbc.ini but still the issue persists.Pls help ",
"username": "junaid_hulhub"
},
{
"code": "",
"text": "Out of advise on this one. I had to deliberately break it to get the error in the first place.",
"username": "chris"
}
] | [unixODBC][Driver Manager]Data source name not found, and no default driver specified | 2023-01-24T11:21:52.768Z | [unixODBC][Driver Manager]Data source name not found, and no default driver specified | 2,909 |
null | [
"dot-net",
"realm-studio"
] | [
{
"code": "private async void StartRealm(string p)\n {\n try\n {\n Dictionary<int, Realm> Databases = new Dictionary<int, Realm>\n {\n { 1, await Realm.GetInstanceAsync\n (\n new RealmConfiguration(Path.Combine(p, \"Primary\"))\n {\n ShouldDeleteIfMigrationNeeded = true,\n }\n )\n }\n };\n }\n catch (Exception ex)\n {\n string e = ex.Message;\n }\n }\n",
"text": "In latest Mac Ventura (13.2), latest Visual Studio for Mac 2022 (either Stable or Preview), Realm now fails in MAUI (iOS or Catalyst) on “GetInstance” or “GetInstanceAsync” with a TypeInitializerException for Realms.SharedRealmHandle, reporting a System.DllNotFound exception for @rpath/realm-wrappers.framework/realm-wrappers.\nExample failing function added into MainPage.xaml.cs in brand new MAUI app (where p is the Realm directory path, already proven to be accessible):",
"username": "David_Pressman"
},
{
"code": "",
"text": "Have you tried wiping the bin/obj folders and rebuilding your project? This sounds like the native library is not being included correctly in the project and typically, the issue is with the tooling. If the problem persists, can you share a simple project that reproduces the issue?",
"username": "nirinchev"
},
{
"code": "",
"text": "Thank you for a very quick reply.As I work primarily with Windows I am still a Mac newbie of sorts, though having owned 2 generations of a Mac Mini, the latter being the 2017 Intel version with 256 GB “hard drive”.Last August I successfully used Realm in my Maui iOS and Catalyst projects of the time, but later found the Mini’s hard drive had become so filled I could no longer update XCode and newer versions of VS for Mac required that update. Thus, just days ago, I restored the Mini to its initial state, but now with Ventura. The only apps since installed have been the latest XCode and VS for Mac, plus Beyond Compare. When I discovered that Realm now fails in my updated solution in a manner similar to how it had failed before August (my reported Type Initialization exception), I prepared both .Net 7 and .Net 6 Test solutions, whose only changes from the default Maui programs VS provides is adding Realm and code added into MainPage.xaml.cs.Neither runs, although just once the .Net 7 test ran Catalyst without throwing the error. I was not able to repeat the successful run.I want to attach that .Net 6 project zipped, although you may have to indicate how, as I do not see an obvious way in the forum reply area (Upload only allows pictures.).A possible (though unlikely) complication is my personal area was moved to an attached portable drive (as I had always done before) to alleviate Mini HD usage. My VS projects are also on that drive.It would not be that hard to again reset the Mini and keep everything on it alone. If Realm still failed right after such a reset that reinstalled only XCode and VS, it would surely indicate the problem was not with me. But I would like to see if someone else can actually run the test project both for Catalyst and iOS; I have not been able to.",
"username": "David_Pressman"
},
{
"code": "",
"text": "You can send me the project at [email protected] or if it’s too large, you could upload it to google drive and share the file with me.Our tests run both on iOS and Catalyst and they work for us on CI running Intel Macs and locally on our M1s. That being said, I’m running Monterey and not Ventura, so it’s possible either the OS or something peculiar about your project to be the issue. In any case, if you’re able to share the project, we can start ruling out some causes.",
"username": "nirinchev"
}
] | Realm fails in MAUI iOS and Mac using Visual Studio Mac | 2023-01-24T18:00:56.290Z | Realm fails in MAUI iOS and Mac using Visual Studio Mac | 1,367 |
null | [
"atlas-device-sync",
"field-encryption"
] | [
{
"code": "",
"text": "Currently, data is currently encrypted on the Realm client device but stored unencrypted in MongoDB Atlas.I would like to implement Zero-Knowledge encryption, where the service provider does not know the content of the data, with field encryption for actual data.The private key would be generated on the device and not stored on the server, only accessible to the user.It is unclear if this is possible with the current version of Atlas Device Sync/MongoDB Atlas.Ideally, I would ensure that the data on the device remains accessible for search queries while logged in, and synchronization will not be affected by encryption.I’ve seen this post on Stackoverflow which mentions possibly using 2 databases - one synced with Realm and one not.Is there a feasible approach for encrypting the data with a private key before it is sent to the server for synchronization?",
"username": "Thompson"
},
{
"code": "",
"text": "You could theoretically store all fields that you want encrypted as BinData fields and just store the raw binary in MongoDB. You would not be able to “query” on any of those fields though, so if you need to define a partition key or queryable fields you would need to leave those unencrypted. Then you would have to build the client-side logic to encrypt/decrypt each field before storing it in Realm.Note that MongoDB has an offering for this but unfortunately Device Sync does not support this yet. MongoDB Client-Side Field Level Encryption | MongoDB",
"username": "Tyler_Kaye"
}
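A rough sketch of the encrypt-before-storing idea described above, using Node's crypto module (key management, the Realm object schema, and error handling are omitted; everything here is illustrative and not a built-in Realm or Atlas feature):

const crypto = require('crypto');

// Encrypt a string field into raw bytes before writing it to a synced object.
// The result (IV + auth tag + ciphertext) is stored as a binary field in Realm/Atlas.
function encryptField(plaintext, key) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

// Reverse the process after reading the binary field back from Realm.
function decryptField(stored, key) {
  const iv = stored.subarray(0, 12);
  const tag = stored.subarray(12, 28);
  const ciphertext = stored.subarray(28);
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}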
] | Is Zero-Knowledge Encryption for Realm Sync Client Data Stored on MongoDB Atlas Possible? | 2023-01-25T18:47:54.175Z | Is Zero-Knowledge Encryption for Realm Sync Client Data Stored on MongoDB Atlas Possible? | 1,315 |
null | [
"aggregation"
] | [
{
"code": "{_id: \"1234abcd\", \"numofdocs\": 2, \"date\": \"02/02/2022\"}{{\"_id\" : \"d1\", \"user_id\": \"1234abcd\", \"doc_name\" : \"a.pdf\"},\n {\"_id\" : \"d2\", \"user_id\": \"1234abcd\", \"doc_name\" : \"b.pdf\"}}\n{\n{\"_id\" : \"d1p1\", \"doc_id\" : \"d1\", \" page_size\" : [540,860]},\n{\"_id\" : \"d1p2\", \"doc_id\" : \"d1\", \" page_size\" : [540,860]},\n{\"_id\" : \"d2p1\", \"doc_id\" : \"d2\", \" page_size\" : [545,865]},\n{\"_id\": \"d2p2\", \"doc_id\": \"d2\", \" page_size\" : [545,865]}\n}\n{\n{\"_id\" : \"ap1\", \"page_id\" : \"d1p1\", \"text\" : \" hello a \"},\n{\"_id\" : \"ap2\", \"page_id\" : \"d1p1\", \"text\" : \" hello b \"},\n{\"_id\" : \"ap3\", \"page_id\" : \"d1p2\", \"text\" : \" hello c \"},\n{\"_id\" : \"ap4\", \"page_id\" : \"d1p2\", \"text\" : \" hello d\"},\n{\"_id\" : \"bp1\", \"page_id\" : \"d2p1\", \"text\" : \" hello e\"},\n{\"_id\" : \"bp2\", \"page_id\" : \"d2p1\", \"text\" : \" hello f\"},\n{\"_id\" : \"bp3\", \"page_id\" : \"d2p2\", \"text\" : \" hello g \"},\n{\"_id\" : \"bp4\", \"page_id\" : \"d2p2\", \"text\" : \" hello h \"}\n}\n{_id: \"1234abcd\", numofdocs: 2, date: \"02/02/2022\", \n\"doc\": [{\"doc_name\": \"a.pdf\", \"pages\": [ {\"page_size\": [540,860] , \"paragraph\": [{\"text\":\"hello a\"},{\"text\":\"hello b\"}]},\n {\"page_size\": [540,860] , \"paragraph\": [{\"text\":\"hello c\"},{\"text\":\"hello d\"}]}]},\n{\"doc_name\" : \"b.pdf\", \"pages\": [ {\"page_size\": [545,865] , \"paragraph\": [{\"text\":\"hello e\"},{\"text\":\"hello f\"}]},\n {\"page_size\": [545,865] , \"paragraph\": [{\"text\":\"hello g\"},{\"text\":\"hello h\"}]}]}]\n}\n",
"text": "Hi All,\nI am trying to write a query for nested documents but am not able to get the desired output. If someone can guide me it will be very helpful.I have 4 collections names user, docs, pages, paragraphs\nuser :\n{_id: \"1234abcd\", \"numofdocs\": 2, \"date\": \"02/02/2022\"}docs:pages:paragraphs:and my desired outputso far i have tried with 5 stages like match, lookup, unwind, lookup, unwind, lookup, project\nbut not getting in desired format…i am getting 8 documents in which each paragraph text is attached with page and doc information which is repeating in every document.",
"username": "Sourabh_Dhaker"
},
{
"code": "{\t\"_id\" : \"1234abcd\",\n\t\"numofdocs\" : 2,\n\t\"date\" : \"02/02/2022\", \n\t\"doc\": [\n\t\t{\t\"doc_name\": \"a.pdf\",\n\t\t\t\"pages\" : [\n\t\t\t\t{\t\"page_size\": [540,860] ,\n\t\t\t\t\t\"paragraph\": [{\"text\":\"hello a\"},{\"text\":\"hello b\"}]\n\t\t\t\t},\n\t\t\t\t{\t\"page_size\": [540,860] ,\n\t\t\t\t\t\"paragraph\": [{\"text\":\"hello c\"},{\"text\":\"hello d\"}]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\n\t\t{\t\"doc_name\" : \"b.pdf\",\n\t\t\t\"pages\": [\n\t\t\t\t{\t\"page_size\": [545,865] ,\n\t\t\t\t\t\"paragraph\": [{\"text\":\"hello e\"},{\"text\":\"hello f\"}]\n\t\t\t\t},\n\t\t\t\t{\t\"page_size\": [545,865] ,\n\t\t\t\t\t\"paragraph\": [{\"text\":\"hello g\"},{\"text\":\"hello h\"}]\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t]\n}\nmatch_user = { $match : { _id : \"1234abcd\" } }\n\n/* build the list of paragraphs for pages */\n\nlookup_paragraphs = { $lookup : {\n\tfrom : \"paragraphs\" ,\n\tlocalField : \"_id\" ,\n\tforeignField : \"page_id\" ,\n\tas : \"paragraph\"\n} }\n\n/* build the list of pages with their paragraphs for a documents */ \n\nlookup_pages = { $lookup : {\n\tfrom : \"pages\" ,\n\tlocalField : \"_id\" ,\n\tforeignField : \"doc_id\" ,\n\tas : \"pages\" ,\n\tpipeline : [\n\t\tlookup_paragraphs\n\t]\n} }\n\n/* build list of docs with their pages and paragraphs for users */\n\nlookup_docs = { $lookup : {\n\tfrom : \"docs\" ,\n\tlocalField : \"_id\" ,\n\tforeignField : \"user_id\" ,\n\tas : \"doc\" ,\n\tpipeline : [\n\t\tlookup_pages\n\t]\n} }\n\npipeline = [ match_user , lookup_docs ]\n",
"text": "It always help to understand a document structure when there is a little bit of formatting. This gives the following for your desired output.The solution involves $lookup with pipeline: with a initial $match on user.I left out the cosmetic $project to get the exact format you want.1 - do not store dates as strings, dates as date compared to string takes less space, are faster and provide a rich API\n2 - you would be better off storing your data in your desired output rather than the normalized SQL like tables and forgo completely the hierarchical $lookup.",
"username": "steevej"
},
{
"code": "db.getCollection(\"user\").aggregate(\n [\n {\n \"$match\" : {\n \"request_id\" : \"02e97006-9d0d-41cc-84f6-185dbacafdd5\"\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"docs\",\n \"localField\" : \"_id\",\n \"foreignField\" : \"user_id\",\n \"as\" : \"doc\"\n }\n }, \n {\n \"$unwind\" : {\n \"path\" : \"$doc\"\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"pages\",\n \"localField\" : \"doc._id\",\n \"foreignField\" : \"doc_id\",\n \"as\" : \"doc.pages\"\n }\n }, \n {\n \"$unwind\" : {\n \"path\" : \"$doc.pages\"\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"paragraphs\",\n \"localField\" : \"doc.pages._id\",\n \"foreignField\" : \"page_id\",\n \"as\" : \"doc.pages.paras\"\n }\n }, \n {\n \"$group\" : {\n \"_id\" : \"$doc.pages.doc_id\",\n \"request_id\" : {\n \"$first\" : \"$user_id\"\n },\n \"data\" : {\n \"$push\" : \"$doc\"\n }\n }\n }\n }\n ])\n",
"text": "Thanks, @steevej for the response and yeah for formatting also, next time I will make sure to do formatting.\nThe solution you are telling me will require 4 aggregation operations and it will work but I am looking for single aggregation with multiple stages, I am working in MongoDB compass, so there this solution won’t work I guess if it does then please guide me on how to do it. And thanks for the suggestion.\nI have tried with one aggregation and I got it almost but not exactly the format. Here is the mongo shell extraction code.It is giving me data as a list of pages not nested according to the desired output.\nPlease correct me where am I wrong or needs modifications. And all these 4 collections are in single db.",
"username": "Sourabh_Dhaker"
},
{
"code": "db.getCollection( \"user\" ).aggregate( [\n { $match : { \"request_id\" : \"02e97006-9d0d-41cc-84f6-185dbacafdd5\" } } ,\n { $lookup : {\n from : \"docs\" ,\n localField : \"_id\" ,\n foreignField : \"user_id\" ,\n as : \"doc\" ,\n pipeline : [\n { $lookup : {\n from : \"pages\" ,\n localField : \"_id\" ,\n foreignField : \"doc_id\" ,\n as : \"pages\" ,\n pipeline : [\n { $lookup : {\n from : \"paragraphs\" ,\n localField : \"_id\" ,\n foreignField : \"page_id\" ,\n as : \"paragraph\"\n } }\n ]\n } }\n ]\n } }\n] ) \n",
"text": "It is a single aggregation pipeline with a single access to the database but I use variable to express each stage since it is usually easier to understand. What make you think you call aggregate 4 times? My structured code is equivalent to the monolithic",
"username": "steevej"
},
{
"code": "",
"text": "Thanks a lot @steevej it worked and got the output.",
"username": "Sourabh_Dhaker"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to get query output as nested document using aggregation stages | 2023-01-25T02:48:13.132Z | How to get query output as nested document using aggregation stages | 603 |
null | [
"app-services-cli"
] | [
{
"code": "npx mongodb-realm-cli import --remote=$APP_ID\nDetermining changes\nDeployed app is identical to proposed version, nothing to do\n",
"text": "I am trying to manually deploy a realm app and it just keeps saying it’s already identical to the proposed version:I have made changes in the UI multiple times and nothing works. Has anyone had a problem like this? Definitely feels like a bug as well.",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "nvm I figured this out. works fine.",
"username": "Lukas_deConantseszn1"
},
{
"code": "--include-node-modules--include-package-json--include-hosting",
"text": "Hello @Lukas_deConantseszn1,Thanks for raising your query. The message means that the diff of the imported config against the server-side config is the same.depending on what changed you might need one of:If you resolved this, could you let the community know how you fixed it? Cheers,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "I was in the wrong directory I think so we are good!",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "I have the same issue now. Functions update as expected, but my sync/config.json file seems to not react to changes.",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "Hello @Thomas_Anderl ,Thank you for raising your concern. Could you confirm if this has been fixed for you or could you share more details on what changes you implemented and what errors are you getting?I look forward to your response.Cheers,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hey @henna.s,the issue is not fixed for me. I created a seperate post here:",
"username": "Thomas_Anderl"
}
] | Realm CLI constantly says app is identical to proposed version | 2022-01-25T04:21:00.115Z | Realm CLI constantly says app is identical to proposed version | 4,642 |
null | [
"aggregation",
"python",
"indexes"
] | [
{
"code": "db.collection.aggregate([\n \"$match\": {\n \"$expr\": {\n \"$gte\": [\n \"$created_at\",\n {\n \"$subtract\": [\n {\n \"$dateFromParts\": {\n \"day\": {\n \"$dayOfMonth\": \"$$NOW\"\n },\n \"hour\": 0,\n \"millisecond\": 0,\n \"minute\": 0,\n \"month\": {\n \"$month\": \"$$NOW\"\n },\n \"second\": 0,\n \"timezone\": \"+0530\",\n \"year\": {\n \"$year\": \"$$NOW\"\n }\n }\n },\n {\n \"$multiply\": [\n 1,\n 86400000\n ]\n }\n ]\n }\n ]\n }\n }\n])\n{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"candidate_id\" : 1,\n\t\t\t\"created_at\" : -1\n\t\t},\n\t\t\"name\" : \"candCreatedAtIndex\",\n\t\t\"ns\" : \"proddb.applications\",\n\t\t\"background\" : true\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"establishment_id\" : 1,\n\t\t\t\"created_at\" : -1\n\t\t},\n\t\t\"name\" : \"estCreatedAtIndex\",\n\t\t\"ns\" : \"proddb.applications\",\n\t\t\"background\" : true\n\t},\n\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"created_at\" : -1\n\t\t},\n\t\t\"name\" : \"createdAtIndex\",\n\t\t\"background\" : true,\n\t\t\"ns\" : \"proddb.applications\"\n\t},\n$hintvar dt = new Date();\ndb.applications.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$gte\": [\n \"$created_at\",\n {\n \"$subtract\": [\n {\n \"$dateFromParts\": {\n \"day\": {\n \"$dayOfMonth\": dt\n },\n \"hour\": 0,\n \"millisecond\": 0,\n \"minute\": 0,\n \"month\": {\n \"$month\": dt\n },\n \"second\": 0,\n \"timezone\": \"+0530\",\n \"year\": {\n \"$year\": dt\n }\n }\n },\n {\n \"$multiply\": [\n 1,\n 86400000\n ]\n }\n ]\n }\n ]\n }\n },\n }\n], {\n 'hint': 'createdAtIndex'\n}).explain()\n$hint$matchCOLLSCANPyMongoPyMongodt = datetime.utcnow()\ncursor = applications_col.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$gte\": [\n \"$created_at\",\n {\n \"$subtract\": [\n {\n \"$dateFromParts\": {\n \"day\": {\n \"$dayOfMonth\": dt\n },\n \"hour\": 0,\n \"millisecond\": 0,\n \"minute\": 0,\n \"month\": {\n \"$month\": dt\n },\n \"second\": 0,\n \"timezone\": \"+0530\",\n \"year\": {\n \"$year\": dt\n }\n }\n },\n {\n \"$multiply\": [\n 1,\n 86400000\n ]\n }\n ]\n }\n ]\n }\n },\n }\n], hint='createdAtIndex')\n",
"text": "I am using MongoDB version 4.0 and performing the following aggregate query.I have the below index in place, but its not getting used by the MongoDB query.So, I made use of the $hint method to force MongoDB in using that aggregation query.When I use $hint in MQL, its utilizing the index for the $match query, but its still performing COLLSCAN in PyMongo.Below is the PyMongo code I used.",
"username": "Harshavardhan_Kumare"
},
{
"code": "created_atcreated_at",
"text": "Indexes can only be used when a field is being compared to a constant value. Here you are computing a value on the fly so it’s not possible for index to be used.If you want an index to be used, you can compute the date client side to compare created_at to and then the index on created_at field will be used.Asya",
"username": "Asya_Kamsky"
},
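To illustrate the point about constants, here is a rough mongosh version of the same query with the boundary computed in the client first (the same idea applies in PyMongo by building a datetime in Python, which is what resolved the thread). The timezone handling is simplified and assumed.

```js
// Compute "start of today minus one day" once, in the client.
const start = new Date();
start.setUTCHours(0, 0, 0, 0);                          // midnight (UTC) today; adjust for +0530 if needed
const boundary = new Date(start.getTime() - 86400000);  // go back one day

// The $match now compares created_at to a constant value, so the
// { created_at: -1 } index can be used without any hint.
db.applications.aggregate([
  { $match: { created_at: { $gte: boundary } } }
]);
```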
{
"code": "datetime",
"text": "Thanks for your response @Asya_Kamsky. I stored datetime object to a variable and the query uses indexes now!",
"username": "Harshavardhan_Kumare"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Index `hint` in pymongo aggregation query not working | 2023-01-21T15:47:08.342Z | Index `hint` in pymongo aggregation query not working | 1,420 |
null | [
"replication",
"backup",
"upgrading"
] | [
{
"code": "db.fsyncLock()BackupTAKE SNAPSHOT NOW",
"text": "Hello,I am making a plan for upgrading my Replica Set from v4.2 to v5.0.I want to know if it is possible to take snapshots of my Replica Set after having stopped writes to the database using db.fsyncLock() . When I say take snapshots, I mean navigating to the Backup tab in the dashboard for my cluster and then clicking TAKE SNAPSHOT NOW.Thank you !",
"username": "Edouard_Finet"
},
{
"code": "",
"text": "Any help about this would be much appreciated.From what I can gather from the docs: db.fsyncLock() , it seems like this should not be a problem.",
"username": "Edouard_Finet"
},
{
"code": "fsyncLock",
"text": "You should not need to use fsyncLock to take a snapshot but you have to have journaling on, and on the same filesystem (and snapshot all the files, data and journal).Have you reviewed this page in the docs: MongoDB Backup Methods — MongoDB Manual?",
"username": "Asya_Kamsky"
},
{
"code": "db.fsyncLock()fsyncLock",
"text": "I believe that journaling is enabled as we are currently taking regular scheduled snapshots.I was asking more in general if it is still possible to take snapshots if one has used db.fsyncLock() .The reason that I was thinking of doing this is to guarantee that the snapshot taken is exactly the same state as the database was before fsyncLock was used and so that no more writes to the database could be performed during the upgrade.The situation I am trying to mitigate against is if something goes wrong during the upgrade, I want the snapshot that we recover the database from to be in the same state as before we started the upgrade.",
"username": "Edouard_Finet"
}
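For reference, a sketch of what the lock/unlock bracket around a manual snapshot could look like in mongosh. Whether it is actually needed is covered in the replies above (journaling usually makes it unnecessary); treat this as illustrative rather than a recommended Atlas procedure.

```js
// Flush pending writes and block new ones on this member.
db.fsyncLock();

// ... take the snapshot now (e.g. Backup tab -> "Take Snapshot Now") ...

// While locked, db.currentOp() reports fsyncLock: true.
db.currentOp().fsyncLock;

// Release the lock once the snapshot has been taken.
db.fsyncUnlock();
```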
] | db.fsyncLock() and snapshots | 2023-01-23T15:48:48.978Z | db.fsyncLock() and snapshots | 1,227 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Hello, I am a newbie to mongodb and I am trying to sort venues by name and performers by name in ascending order in the following query after the group stage so results are consistent, as of now I am able to sort by event start date which seems consistent every single time I run the query, I have tried adding sort to this query after the group stage with no success I have also tried to sort after the $project stage, however this feels incorrect since I’m slicing the results in the $project stage, I have also tried to sort inside the lookup pipelines. I really don’t know what else to try, I was reading a bit about facets but I’m not sure that it’s what I want?Another issue that I’m having is that sometimes I get records in venues which are empty, i.e:,\n[{ actual venue data}]This issue is secondary, I’ve been busting my head all day to try to sort this query correctly.Query:\nhttps://pastebin.com/GRMcxusuSample data can be found here:https://pastebin.com/hqmVMgwsThanks in advance",
"username": "Federico_Stange"
},
{
"code": "",
"text": "Hello @Federico_Stange .Welcome to The MongoDB Community Forums! Could you please share below details to understand your use-case better:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "[\n {\n performers: [\n [],\n [\n {\n _id: 701,\n name: 'Atlantic City Beer & Music Festival',\n segment: {\n name: 'Music',\n uid: 'e062293a-9b46-11ed-8aab-0242ac130005'\n },\n bio: null,\n homepage: 'https://www.acbeerfest.com/',\n uid: 'f35adf69-1cb0-41d6-91ab-9cb330566c91',\n genre: {\n uid: '506a0d84-d310-471f-8d56-b863f91dc69b',\n name: 'Undefined'\n },\n type: [],\n subtype: [],\n websites: {\n facebook: 'https://www.facebook.com/acbeerfest/',\n instagram: 'https://www.instagram.com/acbeerfest/',\n twitter: 'https://twitter.com/acbeerfest'\n }\n }\n ],\n [\n {\n _id: 855,\n name: 'Beenie Man',\n segment: {\n name: 'Music',\n uid: 'e062293a-9b46-11ed-8aab-0242ac130005'\n },\n bio: null,\n homepage: 'http://www.beenieman.net/',\n uid: 'cd19510e-f7d4-4f30-a177-eb3382c95fdf',\n genre: {\n uid: '152b15b3-91d4-11ed-a212-0242ac130003',\n name: 'Reggae'\n },\n type: [],\n subtype: [],\n websites: {\n lastfm: 'http://www.last.fm/music/Beenie+Man',\n musicbrainz: '0d85b9f2-802d-48bb-aa85-6a9668869053',\n wiki: 'https://en.wikipedia.org/wiki/Beenie_Man'\n }\n }\n ]\n ],\n events: [\n {\n uid: '4199441d-2c40-4cac-936a-4c357ba83d69',\n title: 'Okeechobee Music & Arts Festival',\n type: 'Festival',\n start: ISODate(\"2023-03-02T00:00:00.000Z\"),\n performers: {\n names: [\n 'Austin Millz',\n 'Baby Keem',\n 'Big Boi',\n 'Biig Piig',\n 'Blunts & Blondes',\n 'Boogie T',\n 'Break Science'\n ],\n total: 62\n },\n venue: 'Sunshine Grove',\n country: 'United States',\n state: 'Florida',\n city: 'Okeechobee'\n },\n {\n uid: 'e46e3dc8-1327-4130-aab3-f09c250e54d0',\n title: \"PSW & Pherm Brewing present a night of Phun with The Last Rwind (DC's Phish Tribute), Pherm Brewing Beer specials and give aways\",\n type: 'Generic',\n start: ISODate(\"2023-03-11T01:00:00.000Z\"),\n performers: { names: [], total: 0 },\n venue: 'Pearl Street Warehouse',\n country: 'United States',\n state: 'District of Columbia',\n city: 'Washington'\n },\n {\n uid: '870dac81-5964-48dd-8bc8-d830f29bc440',\n title: 'Decibel Metal & Beer Fest - 2 Day Pass',\n type: 'Generic',\n start: ISODate(\"2023-04-14T00:00:00.000Z\"),\n performers: {\n names: [\n 'All Out War',\n 'Decibel Metal & Beer Fest',\n 'Drowning Man',\n 'Escuela Grind',\n 'Eyehategod',\n 'Frozen Soul',\n 'Fuming Mouth'\n ],\n total: 15\n },\n venue: 'The Fillmore Philadelphia',\n country: 'United States',\n state: 'Pennsylvania',\n city: 'Philadelphia'\n }\n ],\n venues: [ [] ]\n }\n]\n",
"text": "Hello I dont want empty records, the resulting records should look a bit like the following, I had to unwind and then group again for the sorting to take place, which I think it’s not ideal but at least it works.In this case I looked up for bee, so it matched all performers with .bee. all events containing .bee. and all venues containing .bee., to get rid of the empty records I tried to $match $ne on the record, but then I realized that this would only bring results if ALL the subdocuments would match .bee., in this case as we have all records containing .bee. it would bring the result without the empty records, however if I would search for a venue .*sunshine.grove. using $match after the $unwind stage gets me no records because there are no performers matching .*sunshine.grove. however I do need that the result comes back as a venue, even tho there are no performers named .*sunshine.grove.Additionaly to try to match for non empty records after $unwind, I have tried to filter those out in the $group stage with no success. Here is the updated query:Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.This is the result set, in this case it’s correct, minus the empty record in performer ",
"username": "Federico_Stange"
}
] | Trying to sort multiple groups without success | 2023-01-24T21:40:17.589Z | Trying to sort multiple groups without success | 509 |
null | [
"dot-net",
"cxx",
"field-encryption"
] | [
{
"code": "GlenM@DESKTOP-5K7AURG MINGW64 ~/src/build-mongo-cxx-driver\n$ CMAKE_PREFIX_PATH='c:/packages/libmongocrypt;c:/packages/libmongoc' cmake ../mongo-cxx-driver -DCMAKE_INSTALL_PREFIX=c:/packages/libmongocxx -DBSONCXX_POLY_USE_MNMLSTC=1 -DBUILD_VERSION=3.7.0\n-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19044.\n-- No build type selected, default is Release\n-- The C compiler identification is MSVC 19.29.30147.0\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped\n-- Detecting C compile features\n-- Detecting C compile features - done\nbsoncxx version: 3.7.0\nfound libbson version 1.23.2\n-- Performing Test COMPILER_HAS_DEPRECATED_ATTR\n-- Performing Test COMPILER_HAS_DEPRECATED_ATTR - Failed\nmongocxx version: 3.7.0\nfound libmongoc version 1.23.2\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed\n-- Looking for pthread_create in pthreads\n-- Looking for pthread_create in pthreads - not found\n-- Looking for pthread_create in pthread\n-- Looking for pthread_create in pthread - not found\n-- Found Threads: TRUE\n-- Build files generated for:\n-- build system: Visual Studio 16 2019\n-- instance: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community\n-- Configuring done\n-- Generating done\n-- Build files have been written to: C:/Users/GlenM/src/build-mongo-cxx-driver\n\nGlenM@DESKTOP-5K7AURG MINGW64 ~/src/build-mongo-cxx-driver\n$ cmake --build . --config Release\nMicrosoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework\nCopyright (C) Microsoft Corporation. All rights reserved.\n\n Checking Build System\n Creating directories for 'EP_mnmlstc_core'\n Building Custom Rule C:/Users/GlenM/src/mongo-cxx-driver/src/bsoncxx/third_party/CMakeLists.txt\n Performing download step (git clone) for 'EP_mnmlstc_core'\n -- EP_mnmlstc_core download command succeeded. See also C:/Users/GlenM/src/build-mongo-cxx-driver/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-download-*.log\n No update step for 'EP_mnmlstc_core'\n No patch step for 'EP_mnmlstc_core'\n Performing configure step for 'EP_mnmlstc_core'\n -- EP_mnmlstc_core configure command succeeded. See also C:/Users/GlenM/src/build-mongo-cxx-driver/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-configure-*.log\n Performing build step for 'EP_mnmlstc_core'\n -- EP_mnmlstc_core build command succeeded. See also C:/Users/GlenM/src/build-mongo-cxx-driver/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-build-*.log\n Performing install step for 'EP_mnmlstc_core'\n -- EP_mnmlstc_core install command succeeded. 
See also C:/Users/GlenM/src/build-mongo-cxx-driver/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-stamp/EP_mnmlstc_core-install-*.log\n Performing fix-includes step for 'EP_mnmlstc_core'\n Completed 'EP_mnmlstc_core'\n Building Custom Rule C:/Users/GlenM/src/mongo-cxx-driver/src/bsoncxx/CMakeLists.txt\n element.cpp\nC:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core\\include\\core/string.hpp(309,46): error C2146: syntax error: missing ';' before identifier 'and' [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_shared.vcxproj]\nC:\\Users\\GlenM\\src\\mongo-cxx-driver\\src\\bsoncxx\\..\\bsoncxx/types.hpp(152): message : see reference to function template instantiation 'bool core::v1::operator ==<char,std::char_traits<char>>(core::v1::basic_string_view<char,std::char_traits<char>>,core::v1::basic_string_view<char,std::char_traits<char>>) noexcept' being compiled [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_shared.vcxproj]\nC:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core\\include\\core/string.hpp(309,46): error C2065: 'and': undeclared identifier [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_shared.vcxproj]\nC:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core\\include\\core/string.hpp(309,50): error C2146: syntax error: missing ';' before identifier 'lhs' [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_shared.vcxproj]\nC:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core\\include\\core/string.hpp(309,67): warning C4553: '==': result of expression not used; did you intend '='? 
[C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_shared.vcxproj]\n Building Custom Rule C:/Users/GlenM/src/mongo-cxx-driver/src/bsoncxx/CMakeLists.txt\n element.cpp\nC:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core\\include\\core/string.hpp(309,46): error C2146: syntax error: missing ';' before identifier 'and' [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_testing.vcxproj]\nC:\\Users\\GlenM\\src\\mongo-cxx-driver\\src\\bsoncxx\\..\\bsoncxx/types.hpp(152): message : see reference to function template instantiation 'bool core::v1::operator ==<char,std::char_traits<char>>(core::v1::basic_string_view<char,std::char_traits<char>>,core::v1::basic_string_view<char,std::char_traits<char>>) noexcept' being compiled [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_testing.vcxproj]\nC:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core\\include\\core/string.hpp(309,46): error C2065: 'and': undeclared identifier [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_testing.vcxproj]\nC:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core\\include\\core/string.hpp(309,50): error C2146: syntax error: missing ';' before identifier 'lhs' [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_testing.vcxproj]\nC:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\third_party\\EP_mnmlstc_core-prefix\\src\\EP_mnmlstc_core\\include\\core/string.hpp(309,67): warning C4553: '==': result of expression not used; did you intend '='? [C:\\Users\\GlenM\\src\\build-mongo-cxx-driver\\src\\bsoncxx\\bsoncxx_testing.vcxproj]\n",
"text": "Hello! I’ve compiled libmongocrypt and mongo-c-driver successfully for Windows10 using VS2019. Now, when I attempt to compile mongo-cxx-driver, I get this error:Seems like it’s an silly issue with the 3rd-party EP_mnlstc_core stuff – I get this error in both r3.7.0 and in git-master.Surely there’s an easy fix for this?? Please ",
"username": "Glen_Mabey"
},
{
"code": "EP_mnmlstc_core",
"text": "Hi @Glen_Mabey ,If you are OK building with C++17, you can avoid choosing a polyfill (and relying on EP_mnmlstc_core)\nSee the instructions here - Getting Started with MongoDB and C++ | MongoDB",
"username": "Rishabh_Bisht"
},
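For example, the original configure command from the question could be adjusted along these lines so the C++17 standard library is used instead of the mnmlstc polyfill. The paths are the ones from the question; the exact flags should be checked against the linked instructions.

```sh
$ cmake ../mongo-cxx-driver \
      -DCMAKE_CXX_STANDARD=17 \
      -DCMAKE_PREFIX_PATH='c:/packages/libmongocrypt;c:/packages/libmongoc' \
      -DCMAKE_INSTALL_PREFIX=c:/packages/libmongocxx \
      -DBUILD_VERSION=3.7.0
```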
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Windows build failure mongo-cxx-driver | 2023-01-25T00:55:53.624Z | Windows build failure mongo-cxx-driver | 1,279 |
null | [
"atlas",
"cloud-manager"
] | [
{
"code": "",
"text": "We’re excited to announce that our integration with the Prometheus monitoring solution is complete and ready for use on MongoDB Atlas (clusters m10+) and Cloud Manager.The Prometheus integration allows you to view MongoDB hardware and monitoring metrics in a single location while providing you with the flexibility to create dynamic dashboards to gain insights. For more info view our blog post and docs.",
"username": "Marissa_Jasso"
},
{
"code": "",
"text": "Hello. We’ve enabled prometheus endpoint on our MongoDB community operator. I can’t find any documentation on the metrics. We use version 0.7.5.\nIs there the documentation available somewhere?",
"username": "Petr_Studeny"
},
{
"code": "",
"text": "Here is the HELP strings from the prometheus endpoint.\nI can’t find metrics that could be used for Oplog window calculation.\nmongo_metrics_help.txt (126.7 KB)",
"username": "Petr_Studeny"
}
] | New Prometheus Integration Announcement | 2022-03-18T19:55:21.701Z | New Prometheus Integration Announcement | 3,068 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "I am trying to use mongorestore to move a database to DocumentDB. From the exact same EC2 instance, I run the same mongorestore command from 4.0 and 4.2. The 4.0 run errors based on the dump being in 4.2. When I use the same exact call with the 4.2 shell command, it throws the following error:error connecting to host: could not connect to server: server selection error: server selection timeout, current topology: { Type: Single, Servers: … Last error: connection() : x509: certificate signed by unknown authority }, ] }Is there a diff in the call I am missing?Here is a sample of the call:\nmongorestore --ssl --host docdbcluster-yyyy.cluster-zzzz.us-east-1.docdb.amazonaws.com:27017 -d xxx --sslCAFile rds-combined-ca-bundle.pem --username xxx --password xxx “C:\\MONGODUMPS\\xxx”I know I must be missing something very simple.DK",
"username": "David_Koth"
},
{
"code": "",
"text": "Hi @David_Koth welcome to the community!DocumentDB is not a MongoDB product but rather a re-implementation of an old MongoDB version with notable differences with the genuine MongoDB of similar version. Please see their documentation for more on this subject.Thus using any MongoDB official tools on it is not supported. Even if mongorestore works, I would be suspicious if it doesn’t complain about something, as there are too many differences between it and a genuine MongoDB server.If you need a cloud-hosted database and you’re not locked into a DocumentDB solution, I would encourage you to have a look at MongoDB Atlas, as this is created and supported by MongoDB. You would be able to use the official tools that you’re familiar with Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thanks! We are moving away from MongoDB Atlas as it is not as cost efficient for us.DK",
"username": "David_Koth"
}
] | Mongorestore with DocumentDB Issue | 2023-01-24T20:45:06.826Z | Mongorestore with DocumentDB Issue | 1,306 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi All,I’m trying to diagnose an issue with our mongodb installation (hosted with mongodb) - we enabled auto scaling on our account as we’re syncing a few gb of data and then our account ran out of space cutting off access to the db.However since doing this, the auto-scaling seems to be triggering constantly and constantly increasing the size of our system.So while we’re only storing around ~2gb of data in our system, mongo is reporting ~130gb of disk usage overall. In mongosh / db stats, our filestorage is reporting a datasize of 1662mb but then a fsUsedSize of 131,548MbI’m not quite sure how to fix this as now we’re paying a really high rate for a system that’s only using 2gb but if we reduce the size it’s “out of space” again.All of this happened within a 24h period where it went from a ~2gb instance to 130gb. Any ideas would be appreciated.Thanks\n~Ben",
"username": "Ben_Lewis"
},
{
"code": "",
"text": "Hi @Ben_Lewis,I’m trying to diagnose an issue with our mongodb installation (hosted with mongodb) - we enabled auto scaling on our account as we’re syncing a few gb of data and then our account ran out of space cutting off access to the db.I assume this is an Atlas deployment based on the “auto scaling” mention. If this is the case, I would definitely raise this with the Atlas in-app chat support team (or raise a support case if you have a support subscription) if the auto-scaling upgrade got stuck here.The Atlas support team will have more insight into your cluster / Atlas project.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hey Jason,Thanks for the reply. Yes its a mongodb managed solution.Chat support has been less than helpful and sent me here unfortunately.I can confirm the problem is with the oplog which is reporting 170gb, which is insane given our system is around 2gb stored.But I can’t seem to change the oplog size, or clear it, or do anything… it’s super frustrating.",
"username": "Ben_Lewis"
},
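For anyone hitting the same thing, a couple of mongosh commands that help confirm the oplog is what is consuming the space (this is only a diagnostic sketch; the actual fix is in the next reply):

```js
// Configured size, current size and the time window the oplog covers.
rs.printReplicationInfo();

// Raw collection stats for the oplog (sizes are reported in bytes).
db.getSiblingDB("local").oplog.rs.stats().size;
```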
{
"code": "> atlas clusters advancedSettings update Cluster0 --oplogSizeMB 2500\n> atlas clusters advancedSettings update Cluster0 --oplogMinRetentionHours 0.25 \n> db.runCommand({ compact: \"oplog.rs\", \"force\":true })\n",
"text": "OK, so I managed to figure out some things.for some reason the oplog went haywire and was holding 170gb of data despite being configured for 990mb max (not sure how this is possible). You can’t set the max size in mb via the atlas web GUI.editing the max oplog window in the GUI of atlas did nothing. The cli still reported 24hediting the max oplog window via shell is not possible due to “replSetResizeOplog” not seemingly available on their cloud serversBUTThen compact oplog via shellThanks for your help Jason - it was actually another post you commented on about the oplog that really helped hone in on this one.Seems crazy to me that Atlas support couldn’t have recommended looking at oplog when I reported this problem.",
"username": "Ben_Lewis"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Huge database usage with auto-scaling enabled | 2023-01-24T01:22:21.819Z | Huge database usage with auto-scaling enabled | 849 |
null | [
"dot-net",
"flexible-sync"
] | [
{
"code": "",
"text": "We have a Xamarin.iOS app that runs Flexible Sync. It’s working fine on newer devices but on the iPad Mini with iOS 12.5, it crashes while syncing with no exception message. We use the Realm Nuget 10.18.0. Does Flexible Sync support iOS 12.5?\nAny help is very much appreciated.",
"username": "Sandeepani_Senevirathna"
},
{
"code": "",
"text": "Hi @Sandeepani_Senevirathna, Realm and Flexible Sync should work with iOS 12.5.I have tried to run a flexible sync project in the simulator using iOS 12.4 (couldn’t find precisely iOS 12.5) and iPad mini but it was working fine.\nI have some questions that could help us finding the problem:",
"username": "papafe"
},
{
"code": "",
"text": "Hi @papafe ,I have upgraded the Realm .NET SDK to version 10.19.0 and it’s working fine now.\nIn case you need more information on the issue I will provide them below:",
"username": "Sandeepani_Senevirathna"
},
{
"code": "",
"text": "Glad to hear it’s working now! It could be that you encountered a specific bug that has been fixed in the last version.If you find bugs in the future, feel free to open an issue in the .NET Realm Github repo, as we follow that more closely than the forum.",
"username": "papafe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Synchronization with the .NET SDK not working on iPad Mini with iOS 12.5 | 2023-01-24T07:39:25.334Z | Realm Synchronization with the .NET SDK not working on iPad Mini with iOS 12.5 | 1,219 |
null | [
"aggregation",
"queries",
"crud",
"compass"
] | [
{
"code": "{\n \"_id\": {\"$oid\": \"626b9f8888daf0141b7069f0\" },\n \"tools\": [\n {\"name\": \"A\", \"output\": \"B\"},\n {\"name\": \"C\", \"output\": \"D\"},\n ]\n}\ntools.namedb.collection.updateMany({}, { $set: { \"tools.$[].name\": \"X\" } })\n$toLower\"X\"$toLower [\n {\n $unwind:\n /**\n * unwind tools, to get a document per element\n */\n {\n path: \"$tools\",\n },\n },\n {\n $set:\n /**\n * Perform the toLower operation\n */\n {\n \"tools.name\": {\n $toLower: \"$tools.name\",\n },\n },\n },\n {\n $group:\n /**\n * re create the array\n */\n {\n _id: \"$_id\",\n tools: {\n $push: \"$tools\",\n },\n },\n },\n ]\ndb.collection.updateMany({}, [... pipeline from above ...])\n$unwind is not allowed to be used within an update",
"text": "Hi everybody!I have a very large mongodb database. In this database I have a datastructure similar to this (simplified):I am currently cleaning up this database. What I would like to achieve is to have a uniform lowercase representation of the field tools.name (which is not the case right now). I already tried different approaches, but none of them worked for me. My hope is to find anybody here who can help me out please Here are the approaches which I have tried so far:\nApproach 1:This performs an update operation on the correct fields, but I did not find a way to get the current value of the field, in order to perform a $toLower operation. I only achieved to set it to static values (\"X\").Approach 2:\nUse an aggregation pipeline to deconstruct the array, perform the $toLoweroperation:This almost does the job, at least it shows correct results in the compass pipeline editor. Anyway, the database does not accept the query when I use it in an updateMany statement likeas it reports the error $unwind is not allowed to be used within an update.Can anybody help me with my operation?\nThanks a lot,\nPhilipp",
"username": "Philipp_Kratzer"
},
{
"code": "$unwind$settools[ {$set:{ tools: {$map: {input: \"$tools\", in: { name: {$toLower: \"$$this.name\"}, output:\"$$this.output\"}}}}}]\ntools",
"text": "You’re close! You need to use an aggregation pipeline that doesn’t do any $unwinding - only stages that do one-to-one document transformations are allowed in the update pipeline. All you need to do is $set (aka $addFields) of the tools field like this:This iterates over the tools array replacing each name with lower case version of the same name.Asya",
"username": "Asya_Kamsky"
},
{
"code": "$map",
"text": "Hi Asya!Thank you so much. Your elegant solution works well! I did not come up with a solution using $map - which makes sense now.Thanks,\nPhilipp",
"username": "Philipp_Kratzer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update every sub document: string manipulation | 2023-01-24T18:27:10.475Z | Update every sub document: string manipulation | 853 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "We are seeing intermediate issue of pod crashing with error stating MongoTimeoutError: Server selection timed out after 30000 ms when it is trying to connect mongoDb. On replacing the pod it is able to connect successfully. If we again try to replace the running pod we see above error. It is intermediate, could anyone help us here. Mongoose version used is 5.8",
"username": "Mithun_Shetty1"
},
{
"code": "",
"text": "When you say pod are you referring to K8s?",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "yes @Andrew_Davidson",
"username": "Mithun_Shetty1"
},
{
"code": "let db = mongoose.connection;\ndb.once('open', function () {\n logger.info('MongoDB event open');\n db.on('connected', function () {\n logger.info('MongoDB event connected');\n });\n\n db.on('error', function (err) {\n logger.info('MongoDB event error.: ' + err);\n });\n});\n",
"text": "We have whitelist of all ip via 0.0.0.0 is in place and we use a m30 clusterBelow is the code snippet which we are using to connect mongo atlasmongoose version: “mongoose”: “5.8.7”\nmongo: 4.2.21<<<<< code >>>>>var options = {\nuseNewUrlParser: true,\nuseUnifiedTopology: true,\nuseFindAndModify:false,\nconnectTimeoutMS: 300000,\nsocketTimeoutMS:0 ,\nkeepAlive: 1\n};mongoose.connect(“connection string”, options, function (err) {\nif (err) {\nlogger.critical(\"Error while connecting to mongoDB … \", err);\n} else {\nlogger.info(“mongodb connected successfully.”)\n}\n});",
"username": "Mithun_Shetty1"
},
{
"code": "",
"text": "Tested above code with mongoose version 6.5.x with the above code snippetvar options = {“ssl”:true,“sslValidate”:false,“useNewUrlParser”:true,“useUnifiedTopology”:false,“keepAlive”:1,“connectTimeoutMS”:300000,“socketTimeoutMS”:0}Error while connecting to mongoDB … may be down Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted. Make sure your current IP address is on your Atlas cluster’s IP whitelist: https://docs.atlas.mongodb.com/security-whitelistMongoDB event error.: MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you’re trying to access the database from an IP that isn’t whitelisted. Make sure your current IP address is on your Atlas cluster’s IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/Basically with both version 5.8 and 6.5 we are facing similar issue just that error message are different , could any one help us here who has faced these issues",
"username": "Mithun_Shetty1"
},
{
"code": "",
"text": "It’s not going to be any kind of configuration if it’s intermittent like this.Are there any other suspicious events in the time before it crashes? Have you check the logs for the pod?",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "@Dan_Mckean we don’t see any suspicious things , we are only seeing below error in the log\n\nScreenshot 2022-08-01 at 2.08.20 PM3584×188 38.7 KB\n\n\nScreenshot 2022-07-29 at 7.41.05 PM3158×152 34.1 KB\nError while connecting to mongoDB\nmay\nbe down Could not connect to any servers in your MongoDB Atlas cluster.\nOne common reason is that you’re trying\nto\nfrom an IP that isn’t whitelisted. Make sure vour current IP address is on vour Atlas cluster’s IP whitelist: httos://docs.atlas.monaodb.com/securitv-whiteli\n14:05:26.446Z] [INFO]: [BPA] [cisco-bpa-platform/mw-backup-restorel [ServerName: bpa-ns] [PodName: backup-restore-service-5fb4d8f4dc-bm9vv] [session-id: 1 [c\n10DB event error.: MonaooseserverselectionError: Could not connect to anv servers in vour MonaoDB Atlas cluster. One common reasonWe try to open multiple concurrent connections is that any concerns here ?",
"username": "Mithun_Shetty1"
},
{
"code": "",
"text": "We try to open multiple concurrent connections is that any concerns here ?I think it probably depends on how many multiple is…! I’m not sure of the limits but I’m sure it’s fine for sensible numbers.My suggestion would be to open a support case (top right in Atlas) and see if our support folks can help take a look at things from the Atlas end of things. Perhaps we have some logs on that end.",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "@Dan_Mckean @Andrew_DavidsonOn further investigations below are the observationsSince this issue is reproducible all over the time, we tried packet capture between working pod and non working pod [both of same deployment version], the observation here is in the non working scenario, after the client makes TCP connection it tries to perform TLS client hello but the pod is not getting reply from the mongodb server resulting in timeout and repeated retries. In all retries the TLS client hello is sent but mongodb server is not replying back.We further did a deep dive analysis on the packet capture and found that, when the client is making “Client Hello”, for working pods its taking TLSv1.2 protocol and in a non working its using TLSv1 protocol layer…It is very intermediate, application fails to get connection with TLS1.0 , but with TLS1.2 is able to connect successfullyIn mongo db atlas we have set TLS 1.0 and above , still the pod crashes when at run time TLS1.0 gets used.Could someone help us here , we are clueless how to go ahead",
"username": "Mithun_Shetty1"
},
{
"code": "",
"text": "Good find Is there any possibility of configuring your application to consistently use TLS1.2? That would be the ideal for security reasons.And did you enable it by editing the deployment configuration and changing it as follows?\n\nimage855×307 44.3 KB\nBut yes, in theory Atlas can still support TLS1.0 - if that’s been enabled in the cluster settings (as above) but it’s not being accepted for those incoming connections I’d suggest opening a support case (top right in Atlas) and see if our support folks can help take a look at things from the Atlas end of things.",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "@Dan_Mckeanyes we have enabled deployment configuration from day 1 here is screen shot for the same\n\nimage1756×560 76 KB\nIn that scenario it should work right with TLS1.0, and from the application we are not setting TLS version, could be mongoose driver [5.8.7] thats using it at runtime. Is there a way we could set TLS version when we try to connect via mongoose?Below is the TCP dump for working pod\n\nimage1508×478 92.6 KB\nBelow is the TCP dump for non working pod\n\nimage1246×447 50.5 KB\n",
"username": "Mithun_Shetty1"
},
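One way to answer the "can we set the TLS version from the application" question, sketched for Node.js: raise the process-wide minimum TLS version before mongoose opens any connections (available since Node 11.4). This was not confirmed as the fix in the thread, and the connection string below is a placeholder.

```js
const tls = require("tls");
const mongoose = require("mongoose");

// Refuse to negotiate anything below TLS 1.2 for every outgoing TLS socket,
// including the ones the MongoDB driver opens on mongoose's behalf.
tls.DEFAULT_MIN_VERSION = "TLSv1.2";

mongoose.connect("mongodb+srv://<user>:<password>@<cluster>/<db>", {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});
```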
{
"code": "",
"text": "Sorry - we’ve reached the limits of my ability to help in this area.Are you able to open a support case so that someone can help diagnose why TLS1.0 isn’t working as it should?",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "@MaBeuLux88 could you help us here, we are currently clueless\nIn addition to above issue mentioned we see in logsmongoose connect(…) failed with err: MongoNetworkError: failed to connect to server [hostname:27017] on first connect [Error: read ETIMEDOUT\nat TLSWrap.onStreamRead (internal/stream_base_commons.js:209:20)\nat TLSWrap.callbackTrampoline (internal/async_hooks.js:130:17) {\nname: ‘MongoNetworkError’,\n[Symbol(mongoErrorContextSymbol)]: {}\n}]\nat Pool. (/home/node/app/node_modules/mongodb/lib/core/topologies/server.js:433:11)\nat Pool.emit (events.js:400:28)\nat /home/node/app/node_modules/mongodb/lib/core/connection/pool.js:577:14\nat /home/node/app/node_modules/mongodb/lib/core/connection/pool.js:1021:9Clearly says its a TLS issue , can some one help us here",
"username": "Mithun_Shetty1"
},
{
"code": "",
"text": "Hey all,I read the thread in diagonal but looks like you two have been pretty far in the debugging already. I’d try to use the latest versions (both for Mongoose and MDB) to see if that solves the problem. I’d also try to update & upgrade the OS (linux?) running the pods and make sure they are using the latest libssl package or equivalent.Else open a ticket because it could also be a weird bug that needs proper investigation from the Atlas team.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "[2022-08-05T15:37:59.091Z] [INFO]: [] [platform/activation] [ServerName: ns] [PodName: 7cbb8b8c69-rp9wn] [session-id: ] [correlation-id: ] MongoDB event error.: MongoNetworkError: failed to connect to server [secondaryHostname:27017] on first connect [MongoNetworkError: connection 5 to <<secondaryHostname>>:27017 timed out\n at TLSSocket.<anonymous> (/home/node/app/node_modules/@platform/common-app/node_modules/mongodb/lib/core/connection/connection.js:355:7)\n at Object.onceWrapper (events.js:519:28)\n at TLSSocket.emit (events.js:400:28)\n at TLSSocket.Socket._onTimeout (net.js:495:8)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7) {\n [Symbol(mongoErrorContextSymbol)]: {}\n}]\n\n\n[2022-08-05T15:37:59.208Z] [INFO]: [] [platform/activation] [ServerName: ns] [PodName: -7cbb8b8c69-rp9wn] [session-id: ] [correlation-id: ] MongoDB event error.: MongoNetworkError: failed to connect to server [<<primaryHostname:27017] on first connect [MongoNetworkError: connection 5 to primaryHostname:27017 timed out\n at TLSSocket.<anonymous> (/home/node/app/node_modules/@-platform/-common-app/node_modules/mongodb/lib/core/connection/connection.js:355:7)\n at Object.onceWrapper (events.js:519:28)\n at TLSSocket.emit (events.js:400:28)\n at TLSSocket.Socket._onTimeout (net.js:495:8)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7) {\n [Symbol(mongoErrorContextSymbol)]: {}\n}]\n\n",
"text": "Hey @MaBeuLux88In addition to above issue we noticed below error , noting hostname of one of the primary is getting printed with error MongoNetworkError. Could you please look into these logsWe have also raised case, its been 48 hrs no help from the support team",
"username": "Mithun_Shetty1"
},
{
"code": "",
"text": "Is there any fix out for this issue??",
"username": "inampaki"
}
] | Intermediate MongoTimeoutError: Server selection timed out after 30000 ms | 2022-07-28T15:05:13.895Z | Intermediate MongoTimeoutError: Server selection timed out after 30000 ms | 10,246 |
null | [
"spring-data-odm"
] | [
{
"code": "",
"text": "We are running mongo in the replica set with read and write taking place from the master node. Now it is happening intermittently that the same document can be found with a query and sometimes it didn’t returns the document using exact same query.Env variable:\nLanguage: Java client for fetching mongo documents.\nCluster: Replica set modeClient Verison: spring data mongo 3.0.7\nMongo version: 4.2.9We found that issue is happening when query is run on indexes. It sometimes intermittently not returning the exact document.",
"username": "vipul_tiwari"
},
{
"code": "",
"text": "Hi @vipul_tiwari, welcome to our forums.I’m not certain if this question is directly related to an exercise or lesson in the M312 course, as such it may be better asked in the “Drivers & ODM” topic.In terms of the scenario, you describe there could be many reasons why the same query criteria are returning or not returning a specific document. It is most likely related to the point you mention that reading and writing is taking place and that the specific document and a field used in the criteria is being updated. The field updates are potentially causing it to match the query criteria until it is updated at which point, one or more fields are changed resulting in the document no longer matching the query criteria exactly. This would be expected behavior.Beyond this explanation, I think you should move this post to the other forum to more deeply dive into the reasons why this may be occurring.If you have a question related to any of the lessons or exercises on M312, we’d be happy to answer them in this forum.Kindest regards,\nEoin",
"username": "Eoin_Brazil"
},
{
"code": "",
"text": "Thanks @Eoin_Brazil. We aren’t updating the documents for search criteria, anyways I have moved the post to Drivers and ODM. Let see if they can provide help.",
"username": "vipul_tiwari"
},
{
"code": "",
"text": "hello, did you found the solution? I have an exactly same problem here. I don’t setup replica set. Thanks",
"username": "nguyen_tung"
}
] | Document not found sometimes using same query | 2021-03-24T07:20:00.876Z | Document not found sometimes using same query | 3,696 |
null | [
"security"
] | [
{
"code": "",
"text": "Hi ,Just wanted to ask if there is a way to upgrade or include SCRAM-SHA-256 from an existing user with only SCRAM-SHA-1 authentication mechanism , other then recreating the user id. Thanks",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "So did you try recreating the user with scram-sha-256 mechanism?",
"username": "Ramachandra_Tummala"
},
{
"code": "updateUseruse admin\ndb.runCommand({'updateUser':'user0',pwd:passwordPrompt(), mechanisms:['SCRAM-SHA-256']})\nmechanisms:['SCRAM-SHA-1','SCRAM-SHA-256']",
"text": "Hi @Daniel_Inciong,This can be updated using the updateUser command.Although you may wish to use both SCRAM-SHA mechanisms as you switch to make sure all tools and drivers you are using can connect correctly.In that case use: mechanisms:['SCRAM-SHA-1','SCRAM-SHA-256'] a following update could be done just specifying SCRAM-SHA-256 and that one would not need the password provided as you’d be removing a mechanism.",
"username": "chris"
},
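Spelling out the two-step migration described above (using the same example user from that reply; a password prompt is needed in the first step because new SCRAM-SHA-256 credentials have to be generated):

```js
use admin

// Step 1: keep both mechanisms while older drivers still need SCRAM-SHA-1.
db.runCommand({
  updateUser: "user0",
  pwd: passwordPrompt(),
  mechanisms: ["SCRAM-SHA-1", "SCRAM-SHA-256"]
})

// Step 2: once every client connects with SCRAM-SHA-256, drop SCRAM-SHA-1.
// No password is required here because a mechanism is only being removed.
db.runCommand({ updateUser: "user0", mechanisms: ["SCRAM-SHA-256"] })
```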
{
"code": "",
"text": "hi chris,thank you for your feedback , sorry for the delayed response problem was already solved by updating the driver of the application connecting to the upgraded version.",
"username": "Daniel_Inciong"
}
] | Upgrading from SCRAM-SHA-1 to SCRAM-SHA-256 | 2022-12-14T23:00:33.704Z | Upgrading from SCRAM-SHA-1 to SCRAM-SHA-256 | 3,133 |
null | [
"aggregation",
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "MainTraveller.aggregate([\n{\n $match: {\n _id: mongoose.Types.ObjectId(req.query.id),\n },\n},\n\n{\n $lookup: {\n from: \"subassemblies\",\n localField: \"SubAssemblies.SubAssyId\",\n foreignField: \"_id\",\n as: \"subassemblies\",\n },\n},\n{\n $project: {\n _id: 0,\n ProductId: 0,\n TemplateRevisionNum: 0,\n SerialNum: 0,\n FinishedTasks: 0,\n WorkTasks: 0,\n __v: 0,\n },\n},\n]).then((travellerDetails) => {\nif (travellerDetails.length) {\n return res.send(travellerDetails);\n} else {\n return res.status(500).json({\n message: \"Some error occured while retrieving traveller\",\n });\n}\n});\n};\n {\n \"SubAssemblies\": [\n {\n \"SubAssyId\": \"63aa8b12c0671d8a31cabf33\",\n \"CurrentSerialNum\": 99,\n \"ExchangeHistory\": [\n 11\n ],\n \"_id\": \"63ad718852fb1827115a9be5\"\n }\n ],\n \"subassemblies\": [\n {\n \"_id\": \"63aa8b12c0671d8a31cabf33\",\n \"Name\": \"DDPP PCBA\",\n \"Prefix\": \"DDPP 20\",\n \"TemplateExists\": false,\n \"Templates\": [\n {\n \"RevisionNum\": \"221227140634\",\n \"EffectiveDate\": \"2022-12-27T06:02:31.674Z\",\n \"WorkTasks\": [],\n \"HasTraveller\": false,\n \"_id\": \"63aa8b6ac0671d8a31cabf38\"\n }\n ],\n \"__v\": 0\n }\n ]\n}\n{\n \"subassemblies\": [\n {\n \"_id\": \"63aa8b12c0671d8a31cabf33\",\n \"CurrentSerialNum\": 99,\n \"ExchangeHistory\": [\n 11\n ],\n \"Name\": \"DDPP PCBA\",\n \"Prefix\": \"DDPP 20\",\n \"TemplateExists\": false,\n \"Templates\": [\n {\n \"RevisionNum\": \"221227140634\",\n \"EffectiveDate\": \"2022-12-27T06:02:31.674Z\",\n \"WorkTasks\": [],\n \"HasTraveller\": false,\n \"_id\": \"63aa8b6ac0671d8a31cabf38\"\n }\n ],\n }\n ]\n}\n",
"text": "The result return at the followingI wish to get result like this. how to merge data in lookup?",
"username": "Min_Thein_Win"
},
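In case it helps while waiting for the sample documents: one hedged way to fold the extra SubAssemblies fields into the looked-up array, based only on the field names shown in the question, is a $set with $map/$mergeObjects after the $lookup, for example:

```js
{
  $set: {
    subassemblies: {
      $map: {
        input: "$SubAssemblies",
        as: "sa",
        in: {
          $mergeObjects: [
            // extra fields carried on the embedded reference
            { CurrentSerialNum: "$$sa.CurrentSerialNum", ExchangeHistory: "$$sa.ExchangeHistory" },
            // the matching looked-up sub-assembly document
            { $arrayElemAt: [
                { $filter: { input: "$subassemblies", cond: { $eq: ["$$this._id", "$$sa.SubAssyId"] } } },
                0
            ] }
          ]
        }
      }
    }
  }
},
{ $unset: "SubAssemblies" }
```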
{
"code": "",
"text": "Hey @Min_Thein_Win,Apologies for the late reply. It’s been a while since you posted this. Has this been resolved? If so, it would be great if you can share your approach with the community as it might help others too with a similar issue to yours. If the issue still persists, kindly share a sample document too so that we can try the aggregation at our end as well.Regards,\nSatyam",
"username": "Satyam"
}
] | How to merge two collection array? | 2022-12-29T12:26:30.907Z | How to merge two collection array? | 1,168 |
null | [
"aggregation"
] | [
{
"code": "5$round[{$project: {\n '1905': {$round: [1905,-1]},\n '1915': {$round: [1915,-1]},\n '1925': {$round: [1925,-1]},\n '1935': {$round: [1935,-1]},\n '1945': {$round: [1945,-1]},\n '1955': {$round: [1955,-1]},\n '1965': {$round: [1965,-1]},\n '1975': {$round: [1975,-1]},\n '1985': {$round: [1985,-1]},\n '1995': {$round: [1995,-1]},\n '190,5': {$round: [190.5,0]},\n '191,5': {$round: [191.5,0]},\n '192,5': {$round: [192.5,0]},\n '193,5': {$round: [193.5,0]},\n '194,5': {$round: [194.5,0]},\n '195,5': {$round: [195.5,0]},\n '196,5': {$round: [196.5,0]},\n '197,5': {$round: [197.5,0]},\n '198,5': {$round: [198.5,0]},\n '199,5': {$round: [199.5,0]},\n '19,05': {$round: [19.05,1]},\n '19,15': {$round: [19.15,1]},\n '19,25': {$round: [19.25,1]},\n '19,35': {$round: [19.35,1]},\n '19,45': {$round: [19.45,1]},\n '19,55': {$round: [19.55,1]},\n '19,65': {$round: [19.65,1]},\n '19,75': {$round: [19.75,1]},\n '19,85': {$round: [19.85,1]},\n '19,95': {$round: [19.95,1]},\n '19,005': {$round: [19.005,2]},\n '19,015': {$round: [19.015,2]},\n '19,025': {$round: [19.025,2]},\n '19,035': {$round: [19.035,2]},\n '19,045': {$round: [19.045,2]},\n '19,055': {$round: [19.055,2]},\n '19,065': {$round: [19.065,2]},\n '19,075': {$round: [19.075,2]},\n '19,085': {$round: [19.085,2]},\n '19,095': {$round: [19.095,2]},\n '19,0005': {$round: [19.0005,3]},\n '19,0015': {$round: [19.0015,3]},\n '19,0025': {$round: [19.0025,3]},\n '19,0035': {$round: [19.0035,3]},\n '19,0045': {$round: [19.0045,3]},\n '19,0055': {$round: [19.0055,3]},\n '19,0065': {$round: [19.0065,3]},\n '19,0075': {$round: [19.0075,3]},\n '19,0085': {$round: [19.0085,3]},\n '19,0095': {$round: [19.0095,3]},\n '19,00005': {$round: [19.00005,4]},\n '19,00015': {$round: [19.00015,4]},\n '19,00025': {$round: [19.00025,4]},\n '19,00035': {$round: [19.00035,4]},\n '19,00045': {$round: [19.00045,4]},\n '19,00055': {$round: [19.00055,4]},\n '19,00065': {$round: [19.00065,4]},\n '19,00075': {$round: [19.00075,4]},\n '19,00085': {$round: [19.00085,4]},\n '19,00095': {$round: [19.00095,4]},\n '19,000005': {$round: [19.000005,5]},\n '19,000015': {$round: [19.000015,5]},\n '19,000025': {$round: [19.000025,5]},\n '19,000035': {$round: [19.000035,5]},\n '19,000045': {$round: [19.000045,5]},\n '19,000055': {$round: [19.000055,5]},\n '19,000065': {$round: [19.000065,5]},\n '19,000075': {$round: [19.000075,5]},\n '19,000085': {$round: [19.000085,5]},\n '19,000095': {$round: [19.000095,5]},\n}}]\n \"1905\" : 1900.0,\n \"1915\" : 1920.0,\n \"1925\" : 1920.0,\n \"1935\" : 1940.0,\n \"1945\" : 1940.0,\n \"1955\" : 1960.0,\n \"1965\" : 1960.0,\n \"1975\" : 1980.0,\n \"1985\" : 1980.0,\n \"1995\" : 2000.0,\n \"190,5\" : 190.0,\n \"191,5\" : 192.0,\n \"192,5\" : 192.0,\n \"193,5\" : 194.0,\n \"194,5\" : 194.0,\n \"195,5\" : 196.0,\n \"196,5\" : 196.0,\n \"197,5\" : 198.0,\n \"198,5\" : 198.0,\n \"199,5\" : 200.0,\n \"19,05\" : 19.1,\n \"19,15\" : 19.1,\n \"19,25\" : 19.2,\n \"19,35\" : 19.4,\n \"19,45\" : 19.4,\n \"19,55\" : 19.6,\n \"19,65\" : 19.6,\n \"19,75\" : 19.8,\n \"19,85\" : 19.9,\n \"19,95\" : 19.9,\n \"19,005\" : 19.0,\n \"19,015\" : 19.02,\n \"19,025\" : 19.02,\n \"19,035\" : 19.04,\n \"19,045\" : 19.05,\n \"19,055\" : 19.05,\n \"19,065\" : 19.07,\n \"19,075\" : 19.07,\n \"19,085\" : 19.09,\n \"19,095\" : 19.09,\n \"19,0005\" : 19.0,\n \"19,0015\" : 19.002,\n \"19,0025\" : 19.003,\n \"19,0035\" : 19.003,\n \"19,0045\" : 19.005,\n \"19,0055\" : 19.006,\n \"19,0065\" : 19.006,\n \"19,0075\" : 19.008,\n \"19,0085\" : 19.009,\n \"19,0095\" : 19.009,\n 
\"19,00005\" : 19.0001,\n \"19,00015\" : 19.0002,\n \"19,00025\" : 19.0003,\n \"19,00035\" : 19.0004,\n \"19,00045\" : 19.0005,\n \"19,00055\" : 19.0006,\n \"19,00065\" : 19.0007,\n \"19,00075\" : 19.0008,\n \"19,00085\" : 19.0008,\n \"19,00095\" : 19.0009,\n \"19,000005\" : 19.00001,\n \"19,000015\" : 19.00002,\n \"19,000025\" : 19.00003,\n \"19,000035\" : 19.00004,\n \"19,000045\" : 19.00005,\n \"19,000055\" : 19.00005,\n \"19,000065\" : 19.00006,\n \"19,000075\" : 19.00007,\n \"19,000085\" : 19.00008,\n \"19,000095\" : 19.0001\n$round",
"text": "Hi\nIn the mongodb docs it states:When rounding on a value of 5 , $round rounds to the nearest even value.\nhttps://www.mongodb.com/docs/manual/reference/operator/aggregation/round/#rounding-to-even-valuesHowever, when checking the following aggregation example:The following results are returned:As far as I can tell it does not match the described behaviour in the docs when rounding decimal values.\nAm I missing something or is there a bug in the behaviour for $round?",
"username": "Gerrie_Van_wyk"
},
{
"code": "",
"text": "Hi @Gerrie_Van_wyk - Welcome to the communityI believe the explanation for the behaviour you are seeing can be described in the following MongoDB ticket SERVER-71557 in which is it working as designed due to how floating point numbers work. If you want more precision, consider using Decimal128 numbers.Regards,\nJason",
"username": "Jason_Tran"
},
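A minimal mongosh sketch of the Decimal128 suggestion above — the collection name and values are illustrative only, not from the original thread:

```js
// Illustrative only: doubles cannot store most ".x5" values exactly, so the
// documented round-half-to-even behaviour is only visible with Decimal128.
db.test.aggregate([
  { $project: {
      _id: 0,
      asDouble:   { $round: [ 19.45, 1 ] },                   // 19.45 as a double is not exactly 19.45
      asDecimal:  { $round: [ NumberDecimal("19.45"), 1 ] },  // exact tie -> rounds to the even digit: 19.4
      asDecimal2: { $round: [ NumberDecimal("19.35"), 1 ] }   // exact tie -> rounds to the even digit: 19.4
  }}
])
```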
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Understanding $round | 2023-01-22T14:01:13.856Z | Understanding $round | 613 |
null | [
"swift"
] | [
{
"code": "if let user = app.currentUser {\n let realm = openRealm(user)\n let results = realm.objects(Object.self)\n \n // Performing any read of this results object causes several disk writes \n // to occur per second, resulting in slow UI performance\n\n} else { \n let user = app.login(credentials) \n let realm = openRealm(user)\n let results = realm.objects(Object.self)\n\n // Performing reads of this results object, after logging in, \n // does not cause any disk writes and is much more performant. \n}\n",
"text": "I’ve been wrestling with a performance issue for awhile. While scrolling through a list of Results on iOS, Xcode Instruments shows about 1.5MB/s writes to numerous sync_metadata.realm.lock files. These writes are occurring when reading any data from a realm file. These writes only occur if the realm was initialized from an already logged in user.My code is roughly:",
"username": "Nathan_Apple"
},
{
"code": "app.currentUserlet realm = openRealm(user)let results = realm.objects(Object.self)",
"text": "The question and code are a little unclear.It appears a synced Realm is being used by checking for a user here app.currentUser but then what does this let realm = openRealm(user) do? Does it open a Flex Sync? Full Sync? Something else?This code let results = realm.objects(Object.self) gets all of your objects (reads them) so then is there an additional read or is something else being done with the results?Bearing in mind that a results object always reflects the current status of the underlying data, so it would not surprise me if there was additional disk access but why do you think it’s causing performance issues?e.g. We have a number of databases with megabytes of databases and are not encountering what’s being described (if I understand it correctly)Can you elaborate a bit or better yet, how about a brief sample code that duplicates the issue - what’s the expected result and what result are you getting?",
"username": "Jay"
},
{
"code": "if let user = app.currentUser {\n let realm = try await Realm(configuration: user.flexibleSyncConfiguration)\n let results = realm.objects(Object.self) // about 2000 objects \n\n results.forEach { object in // this causes a disk write for each object accessed\n print(object.date) \n print(object.someIntegerValue) \n }\n\n // This exact snippet takes close to 1 second to run and *writes* 26mb of data to disk \n\n} else { \n let user = app.login(credentials) \n let realm = try await Realm(configuration: user.flexibleSyncConfiguration)\n let results = realm.objects(Object.self) \n\n results.forEach { object in // this does not cause any disk writes for each object accessed\n print(object.date) \n print(object.someIntegerValue) \n }\n\n // This snippet takes 0.06 seconds to run and does not perform any disk writes.\n}\n",
"text": "Each time I access an object in those results, a small write is being performed to a sync_metadata file. But this only occurs if the realm was opened with an already logged in user, not after opening a realm with the user returned from app.login.Maybe this code will explain better:The worst offender in my case is when I need to build a printable report on all the objects in those results, and have to iterate through them all to insert numerous properties into columns of a table. If the user is already logged in when the app launches, generating the report takes ~15 seconds and about 360mb is written to disk. If the user logs in, generating the report takes ~1 second with zero disk writes.The problem occurs everywhere my app reads from live objects. If the user is already logged in when the app launches, reading from live objects causes writes to a sync_metadata file.I’m expecting performance to be the same regardless if the realm is opened with an already logged in user, or one that is opened after the user logs in that session.",
"username": "Nathan_Apple"
},
{
"code": "loginlet user = app.login(credentials) let user = try await app.login(credentials: creds)elseuser = app.currentUserifelseuser = app.currentUser",
"text": "For clarify and testing purposes, I don’t believe this line of code is valid; login is asynchronous and has a closurelet user = app.login(credentials) unless you’re using async/await, which would then belet user = try await app.login(credentials: creds)unless you have a some other code involved.I have no idea if this will help, but here’s our results using your code and roughly 2000 Sync’d Realm objects with two properties per object.Testing your code execution times produced an opposite and expected result. Reading the data with an existing current user was faster than authenticating and then reading. I pretty much copy and pasted your code - the only difference is we use partition sync, not flex sync.The first column in the below chart (in seconds) runs when there is no logged in user, so the authentication code must run, which is the code after the else statement in the question.The second column is if there’s already an existing user logged in user = app.currentUser which is the first section of code after the ifIf we change the speed testing to eliminate the delay caused by the authentication code, I ran the test 5 more times and the reading of the data is identical between the first section and second section of codeIn summary - the auth code causes the second section of code to be slower, as expected. However, if we remove that from the performance test, it’s identical in speed.Here’s a disk writing chart. The first section is on app start, no logged in user and the app is idling.The second section is when the user logs in and reads the data (the code after the else) and the third section is if the current user exists, which is the first section of code. Total Data written is roughly 1.4 Mb\nWrites944×354 7.94 KB\nSummary; we’re not seeing any big difference in disk writes between when a user is already logged in user = app.currentUser or when they log in fresh. Total amount of written data was roughly 1.4Mb in either case.",
"username": "Jay"
},
{
"code": "",
"text": "Your results are exactly what I expect would happen. Here are the graphs I captured in my case:Using external auth service to get JWT credentials, login to app with those credentials, get the results, and iterate through them:\nScreen Shot 2023-01-23 at 5.42.46 PM1093×674 25.6 KB\nversus skipping auth and just using app.currentUser to get results and iterate:\nScreen Shot 2023-01-23 at 5.44.25 PM1057×674 25.9 KB\nHere’s the Instruments capture showing, in a 10 millisecond span, 15 writes to 5 separate sync_metadata.realm.lock files\nScreen Shot 2023-01-23 at 6.06.24 PM1152×709 145 KB\nAnd here’s the entire 6 second capture:\nScreen Shot 2023-01-23 at 6.10.46 PM1206×1016 271 KB\nHere’s the same code ran, but with a fresh user from app.login:\nScreen Shot 2023-01-23 at 6.18.31 PM1188×736 137 KB\nHere’s the exact code I used to capture these screenshots. There are 1659 LogbookEntry objects in the realm, and the number of writes seems to correspond with that.\nScreen Shot 2023-01-23 at 6.30.09 PM1243×611 119 KB\nAs expected, the authentication pipeline takes more time than simply grabbing app.currentUser. But once the code reaches the loop to print the dates, the “else” section is much faster because it isn’t making writes to disk for each item in the loop.Within the rest of the app, if I don’t get a “fresh” user, merely scrolling through a lazy list of Results will cause tens/hundreds/thousands of writes.This is with Realm-Swift v10.34.",
"username": "Nathan_Apple"
},
{
"code": "",
"text": "the “else” section is much fasterWe are not seeing that at all. Over a run of 10 tests, both sections performed exactly the same and we are not seeing any kind of writing or high disk usage in either case (as shown in my prior post)I wonder what the difference is as we copied and pasted your code.So, with the code in the question, the issue is not duplicatable. I suspect there’s something else affecting the performance or operation - possibly other code in the app.May I suggest creating a fresh project? Only add code that writes 2000 objects, and then the code in your question that reads them in and see if it behaves differently.",
"username": "Jay"
},
{
"code": "",
"text": "I’m at a loss for what might be causing my results. The code I posted an image of is the first and only code running when my app launched in the test cases. I even went on to take out the date formatting and printing operations, and just iterated through the results with nothing else being performed.I’ll try a fresh project and report back.",
"username": "Nathan_Apple"
},
{
"code": "class SomeObject: Object {\n @Persisted var _ownerID: String = app.currentUserID // This was causing the writes to the .lock file\n}\n",
"text": "I was able to discover the root cause of my issue. Initializing objects with a _ownerID string, read from the current user ID, was causing the writes to disk.It’s interesting this problem only showed up when referencing a “stale” user object, but not one received from the app.login function.Thanks for your assistance!",
"username": "Nathan_Apple"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Excessive writes occurring when initializing app with app.currentUser | 2023-01-23T02:04:27.658Z | Excessive writes occurring when initializing app with app.currentUser | 948 |
null | [
"aggregation"
] | [
{
"code": " \"_id\": {\n \"$oid\": \"63d00ec0771f06853d860862\"\n },\n \"game_id\": 1,\n \"winner\": \"player1\",\n \"con_id\": \"I\",\n \"map\": \"US\",\n \"player_results\": [\n {\n \"player-id\": \"player1\",\n \"totalMoney\": 15,\n \"totalPoints\": 3\n },\n {\n \"player-id\": \"player2\",\n \"totalMoney\": 15,\n \"totalPoints\": 3,\n \"winner\": false\n }\n ]\n },\n {\n \"_id\": {\n \"$oid\": \"63d00ec4dfdd8cb0e496422f\"\n },\n \"game_id\": 2,\n \"winner\": \"\",\n \"con_id\": \"I\",\n \"map\": \"EU\",\n \"player_results\": [\n {\n \"player-id\": \"player1\",\n \"totalMoney\": 25,\n \"totalPoints\": 3,\n \"winner\": true\n },\n {\n \"player-id\": \"player2\",\n \"totalMoney\": 15,\n \"totalPoints\": 2,\n \"winner\": false\n }\n ]\n },\n{\n \"_id\": {\n \"$oid\": \"63d01ae4b152a993335ef75e\"\n },\n \"player-id\": \"player1\",\n \"totalMoney\": 0,\n \"totalPoints\": 0\n }\n",
"text": "I’m trying to build an aggregation that updates the players document with the points that they receive from the games that they play.Here is the game documentHere is the player documents in the same collectionSo in the ideal scenario it would update the player1 document to have totalPoints of 6 and totalMoney of 40.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "I’m a little surprised that you keep players and documents in the same collection, but ok, let’s go with that.Is there a way to tell which games are new? When you say “update” the players’ document to have total points - how is it known what games are already reflected in their totals? Like here the player has 0, but what if they had 6 points and 40 “money”? How do we know if that’s from this game or previous game?Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Hi Asya,i was planning on separating them into different collections but I was just doing some query testing today so they were in the same collection.Is there a way to tell which games are new?I was hoping to just total their score of all the games they were in, as there will not be very many games maybe 10 max. So I was going to re-total and update the document with the total result instead of tracking which had been counted.",
"username": "tapiocaPENGUIN"
}
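Since the thread ends without a worked answer, here is a minimal, hypothetical mongosh sketch of the re-total approach described above. It assumes the game and player documents have been split into games and players collections (as planned) and that players has a unique index on player-id, which $merge requires for its on field:

```js
db.games.aggregate([
  { $unwind: "$player_results" },
  { $group: {
      _id: "$player_results.player-id",
      totalPoints: { $sum: "$player_results.totalPoints" },
      totalMoney:  { $sum: "$player_results.totalMoney" }
  }},
  { $project: { _id: 0, "player-id": "$_id", totalPoints: 1, totalMoney: 1 } },
  // Overwrite the totals on each matching player document
  { $merge: {
      into: "players",
      on: "player-id",
      whenMatched: "merge",
      whenNotMatched: "discard"
  }}
])
```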
] | Sum fields fields in one document into another document | 2023-01-24T17:55:51.064Z | Sum fields fields in one document into another document | 452 |
null | [
"aggregation",
"python"
] | [
{
"code": "pipeline = [\n {\"$match\": {'_id': user_id}},\n {\"$lookup\": {\"from\": \"merchant\", \"localField\": \"merchantsid\",\n \"foreignField\": \"_id\", \"as\": \"merchants\"}},\n {\"$project\": {\"email\" : 1, \"name\" : 1,\"merchants\" : 1, \"_id\" : 0,\n \"merchants\": {\"$filter\": { \"input\": \"$merchants\", \"as\": \"merchant\",\n \"cond\": { \"$eq\": [ \"merchant.date\", \"null\"]}}}}},\n {\"$sort\": {\"merchants.teid\": -1}},\n ]\nreturn await database.User.aggregate(pipeline).to_list(int(limit))\n[\n {\n \"email\": \"[email protected]\",\n \"name\": \"John Doe\",\n \"role\": \"user\",\n \"merchants\": [\n {\n \"_id\": \"913c5df4-581c-4fa5-9ba5-53c2f255d220\",\n \"teid\": 906220009,\n \"merchant_name\": \"SHOWCASE LTD\",\n \"model\": \"Gateway\",\n \"package\": \"1N5\",\n \"date\": null,\n \"created_at\": \"2023-01-22T21:44:07.789000\",\n \"updated_at\": \"2023-01-22T21:44:07.789000\"\n }\n ]\n }\n]\n",
"text": "Hi there,\nI am new to Mongo and I have a question regarding aggregation. How can I return a document that contains null value in the date field after running a lookup?Here is my code;The above gives me an empty array but the desired result should be something like this;Any help is appreciated.Thanks",
"username": "Patrick_Lwanga"
},
{
"code": "pipeline = [\n {\"$match\": {'_id': user_id}},\n {\"$lookup\": {\"from\": \"merchant\", \"localField\": \"merchantsid\",\n \"foreignField\": \"_id\", \"as\": \"merchants\"}},\n { \"$project\": {\"email\": 1,\"name\": 1,\n \"merchantsid\": {\"$filter\": {\"input\": \"$merchants\",\n \"as\": \"merchant\",\n \"cond\": { \"$eq\": [ \"$merchant.date\", \"1970-01-01T00:00:00\"]}}}}},\n {\"$sort\": {\"merchants.teid\": -1}},\n ]\n[\n {\n \"_id\": \"8aa6c533-d98b-4557-be97-a95990becf1c\",\n \"email\": \"[email protected]\",\n \"name\": \"John Doe\",\n \"merchantsid\": [\n {\n \"_id\": \"4adaadf8-25d9-48dd-b7d3-2494b0d4006a\",\n \"teid\": 3662200014,\n \"merchant_name\": \"SHOWCASE LTD\",\n \"model\": \"Gateway\",\n \"package\": \"1N5\",\n \"date\": \"1970-01-01T00:00:00\",\n \"created_at\": \"2023-01-23T15:27:00.759000\",\n \"updated_at\": \"2023-01-23T15:27:00.759000\"\n }\n ]\n }\n]\n",
"text": "I have found a solution but its not the best one. It requires that the null date field be filled with a default date value and filter against this.\nAt the moment if i try to filter by “” or “null” I get no documents.Here the code but I am still seeking solutions that will filter for dates that have null value.The result is as follows;For context, I have 2 collections, a user and merchant one. There merchant has a reference in the user collection. ie. A user can have many merchants. The lookup gets a user’s and any child merchants as an array. What I want to do now is get users’ who have merchants with null dates.",
"username": "Patrick_Lwanga"
},
{
"code": "\"$eq\": [ \"merchant.date\", \"null\"]\n",
"text": "Null written like the followingis not null but a string with the text null.",
"username": "steevej"
},
{
"code": "[\n {\n \"_id\": \"8aa6c533-d98b-4557-be97-a95990becf1c\",\n \"email\": \"[email protected]\",\n \"name\": \"Patrick Lou\",\n \"merchantsid\": []\n }\n]\n{\"$eq\": [ \"merchant.date\", {\"$type\":10} ]}",
"text": "Hi Steeve,Thanks for your reply.\nI have tried your suggestion but the result is an empty array.Here is the result;I have also tried using type like so;{\"$eq\": [ \"merchant.date\", {\"$type\":10} ]}But I get the same result as above.Any ideas why?Thanks.",
"username": "Patrick_Lwanga"
},
{
"code": "",
"text": "If yourdate field be filled with a default date valueit is not null anymore.",
"username": "steevej"
},
{
"code": "",
"text": "Share sample documents from both collections.Your $sort stage is useless since the field merchants is not present anymore.",
"username": "steevej"
},
{
"code": "[\n {\n \"_id\": \"1c7844ac-ebbb-4520-afb2-6374a4f1a3ac\",\n \"teid\": 366220001,\n \"merchant_name\": \"Hartnells Fresh Food\",\n \"model\": \"Pro\",\n \"package\": \"1N5\",\n \"date\": \"2022-11-10T00:00:00\",\n \"created_at\": \"2023-01-22T15:56:10.746000\",\n \"updated_at\": \"2023-01-22T15:56:10.746000\"\n },\n {\n \"_id\": \"3b4a31f5-9fbd-448f-863a-ae1987678d03\",\n \"teid\": 366220002,\n \"merchant_name\": \"ABC LTD\",\n \"model\": \"Pro\",\n \"package\": \"456\",\n \"date\": \"2022-10-10T00:00:00\",\n \"created_at\": \"2023-01-22T15:56:42.010000\",\n \"updated_at\": \"2023-01-22T15:56:42.010000\"\n },\n {\n \"_id\": \"46c9af68-7361-4cc2-9ee4-6bf4a80da2b8\",\n \"teid\": 366220003,\n \"merchant_name\": \"SURVEY LTD\",\n \"model\": \"Pro\",\n \"package\": \"456\",\n \"date\": \"2022-09-10T00:00:00\",\n \"created_at\": \"2023-01-22T15:56:58.822000\",\n \"updated_at\": \"2023-01-22T15:56:58.822000\"\n },\n {\n \"_id\": \"913c5df4-581c-4fa5-9ba5-53c2f255d220\",\n \"teid\": 366220009,\n \"merchant_name\": \"SHOWCASE LTD\",\n \"model\": \"Gateway\",\n \"package\": \"123\",\n \"date\": null,\n \"created_at\": \"2023-01-22T21:44:07.789000\",\n \"updated_at\": \"2023-01-22T21:44:07.789000\"\n },\n {\n \"_id\": \"7c7584c0-19c9-4c6c-80a0-ee27cddc2b7e\",\n \"teid\": 3662200011,\n \"merchant_name\": \"JACINTA LTD\",\n \"model\": \"Gateway\",\n \"package\": \"123\",\n \"date\": \"1970-01-01T00:00:00\",\n \"created_at\": \"2023-01-23T13:18:01.415000\",\n \"updated_at\": \"2023-01-23T13:18:01.415000\"\n },\n {\n \"_id\": \"1d3bc919-4894-412e-9a67-e9450aefe598\",\n \"teid\": 3662200012,\n \"merchant_name\": \"Vimto LTD\",\n \"model\": \"Gateway\",\n \"package\": \"1N5\",\n \"date\": null,\n \"created_at\": \"2023-01-23T13:18:55.027000\",\n \"updated_at\": \"2023-01-23T13:18:55.027000\"\n },\n {\n \"_id\": \"4adaadf8-25d9-48dd-b7d3-2494b0d4006a\",\n \"teid\": 3662200014,\n \"merchant_name\": \"Drake Services LTD\",\n \"model\": \"Gateway\",\n \"package\": \"1N5\",\n \"date\": \"1970-01-01T00:00:00\",\n \"created_at\": \"2023-01-23T15:27:00.759000\",\n \"updated_at\": \"2023-01-23T15:27:00.759000\"\n }\n]\n[\n{\n \"_id\": \"8aa6c533-d98b-4557-be97-a95990becf1c\",\n \"email\": \"[email protected]\",\n \"name\": \"John Doe\",\n \"disabled\": false,\n \"sellerid\": 101,\n \"role\": \"user\",\n \"created_at\": \"2023-01-22T15:41:26.645000\",\n \"updated_at\": \"2023-01-22T15:41:40.200000\",\n \"merchantsid\": [\n \"1c7844ac-ebbb-4520-afb2-6374a4f1a3ac\",\n \"46c9af68-7361-4cc2-9ee4-6bf4a80da2b8\",\n \"913c5df4-581c-4fa5-9ba5-53c2f255d220\"\n ]\n }\n]\n",
"text": "Hi Steve,OK. Got it. I will remove $sort.Here is the Merchant Collection;Here is the user collection;Thanks.",
"username": "Patrick_Lwanga"
},
{
"code": "\"null\"\"cond\": { \"$eq\": [ \"$merchant.date\", \"1970-01-01T00:00:00\"]} {\n \"_id\": \"4adaadf8-25d9-48dd-b7d3-2494b0d4006a\",\n \"teid\": 3662200014,\n \"merchant_name\": \"SHOWCASE LTD\",\n \"model\": \"Gateway\",\n \"package\": \"1N5\",\n \"date\": \"1970-01-01T00:00:00\",\n \"created_at\": \"2023-01-23T15:27:00.759000\",\n \"updated_at\": \"2023-01-23T15:27:00.759000\"\n }\n\"$merchant.date\"cond: { $eq : [ \"$$merchant.date\" , null ] }\n{$match:{date:null}}\n",
"text": "Your logic was correct since the beginning but you are making a lot of syntax error.As already mentioned \"null\" with quotes is not null as in your data it is the string with the 4 characters n, u, l and l.Also, which gives me doubts about what you share when you write that\"cond\": { \"$eq\": [ \"$merchant.date\", \"1970-01-01T00:00:00\"]}givesIt cannot since the syntax\"$merchant.date\"is wrong. You are missing a $. It should be $$merchant.date, look at https://www.mongodb.com/docs/manual/reference/operator/aggregation/filter/. What is funny is that when you were using the string “null” you had no $ but you added one when testing with the special date. The correct cond: that will work should be:Now that I understand your intent you would be better off adding a pipeline: to your $lookup with awhich should be more efficient than doing $filter once the $lookup is done.Your $sort on teid in this $lookup pipeline will then make sense to sort the merchant within the merchantsid result.",
"username": "steevej"
},
{
"code": "cond: { $eq : [ \"$$merchant.date\" , null ] }{$match:{date:null}}async def some_function(limit=100,):\n user_id = \"8aa6c533-d98b-4557-be97-a95990becf1c\"\n \n stage_match_user_id = {\"$match\": {\"_id\": user_id}}\n \n stage_lookup_related = {\"$lookup\": {\"from\": \"merchant\", \"localField\": \"merchantsid\", \n \"foreignField\": \"_id\", \"as\": \"related_merchants\"}}\n \n stage_project_required = {\"$project\": {\"email\": 1,\"name\": 1, \n \"related\": {\"$filter\": {\"input\": \"$related_merchants\",\n \"as\": \"merchant\",\n \"cond\": { \"$eq\" : [ \"$$merchant.date\" , \"null\" ]}}}}}\n \n pipeline = [stage_match_user_id,stage_lookup_related,stage_project_required]\n \n return await database.User.aggregate(pipeline).to_list(int(limit))\nclass MerchantBaseSchema(BaseModel):\n id: str = Field(default_factory=uuid.uuid4, alias=\"_id\")\n teid: int\n merchant_name: str\n model: str\n package: str \n date: dt | None = None\n created_at: dt | None = None\n updated_at: dt | None = None\n \n class Config:\n orm_mode = True\n allow_population_by_field_name = True\n arbitrary_types_allowed = True\n schema_extra = {\n \"example\": {\n \"teid\": 366220001,\n \"merchant_name\": \"Hartnells Fresh Food\",\n \"model\": \"Pro\",\n \"package\": \"1N5\",\n \"date\": \"2022-11-10T00:00:00\",\n \"created_at\": \"2022-11-10T00:00:00\",\n \"updated_at\": \"2022-11-10T00:00:00\",\n }\n }\n",
"text": "Thanks for your solution.I tried;\ncond: { $eq : [ \"$$merchant.date\" , null ] }I also tried:\n{$match:{date:null}}Both did not work. So I used them on Studio 3T for Mongo and it worked correctly. However, since I am using python the null value is encapsulated in quotes it does not work for the date field.Here is the modified code again.I used the same code above to fetch model or merchant_name and it worked as expected in Python. The only issue is that it does not work with dates in code.\nI have stored the dates as datetime and I have used a model as shown below;The issue seems to lie in python I think. Your code works when used elsewhere but not in python.Do you have any idea why this is the case?Patrick",
"username": "Patrick_Lwanga"
},
{
"code": "\"cond\": { \"$eq\" : [ \"$$merchant.date\" , None]}}}}}",
"text": "I have found the answer.In python null is represented as None.\nThere if i change null to None it works like so.\"cond\": { \"$eq\" : [ \"$$merchant.date\" , None]}}}}}Oh my. Why didn’t I think of this.Thanks for your help and validating the code.",
"username": "Patrick_Lwanga"
},
{
"code": "date: dt | None = None\"cond\": { \"$eq\" : [ \"$merchant.date\" , None ]}{$match:{date:null}}\n",
"text": "It would have been good to know that you are using python right from the start.In python you have to use None rather than null. What is funny is that you seem to know that because in your schema you havedate: dt | None = NoneSo in python, your cond: has to be\"cond\": { \"$eq\" : [ \"$merchant.date\" , None ]}Like I wroteyou would be better off adding a pipeline: to your $lookup with awhich should be more efficient than doing $filter once the $lookup is done.But make it date:None since you are using python.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks Steve,Sorry I didn’t say what language I was using. Now I know. I will add the match pipeline to make it more efficient. Thank you for your prompt replies.",
"username": "Patrick_Lwanga"
},
{
"code": "stage_match_user_id = {\"$match\": {\"_id\": user_id}}\n \nstage_match_null_related = {\"$lookup\": {\"from\": \"merchant\",\n \"pipeline\": [{\"$match\":{\"date\": None}, }], \n \"localField\": \"merchantsid\",\n \"foreignField\": \"_id\", \"as\": \"related_merchants\"}}\nstage_project_required = {\"$project\": {\"email\": 1,\"name\": 1, \"related_merchants\": 1}}\n \npipeline = [stage_match_user_id, stage_match_null_related, stage_project_required]\n \nreturn await database.User.aggregate(pipeline).to_list(int(limit)) \n",
"text": "For completeness here is the final code taking into consideration @steevej advice on efficiency;It works well now. Thanks again Steevej.",
"username": "Patrick_Lwanga"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Filter for Null after Lookup | 2023-01-23T13:11:21.958Z | Filter for Null after Lookup | 2,514 |
null | [
"aggregation",
"queries",
"node-js",
"crud"
] | [
{
"code": " agencyID: { type: Number },\n tourID: { type: Number, unique: true },\n totalVisits: Number,\n hosts: [\n {\n name: {type: String},\n total: {type: Number}\n }\n ]\n agencyID: 123,\n tourID: 9,\n hostName: \"www.google.com\"\ntotalVisitstotalVisits agencyID: 123,\n tourID: 9,\n totalVisits: 1,\n hosts: [\n {\n name: \"www.google.com\",\n total: 1\n }\n ]\nnamewww.google.com agencyID: 123,\n tourID: 9,\n hostName: \"www.mongodb.com\"\n agencyID: 123,\n tourID: 9,\n totalVisits: 2,\n hosts: [\n {\n name: \"www.google.com\",\n total: 1\n },\n {\n name: \"www.mongodb.com\",\n total: 1\n }\n ]\n",
"text": "I have a collection that keeps track of the number of visits a tour (identified by tourID). This is the schema:Given an object like this:I’m trying to increment the totalVisits field if the tours exists, or create a tour with totalVisits = 1 in the same query (upsert). I’ve been able to do this without a problem. This should be the result:However, I also want to do an upsert in the nested array, that has objects identified by name. If the host (in this case www.google.com exists, we should increment the total value of this host. If the host does not exist, then it should push a new object to the array. If now we receive this data, for example,The result should be this:I haven’t been able to find any way to make this second upsert in the nested array.\nAny help?",
"username": "Pau_Mateu_i_Jordi"
},
{
"code": "$concatArrays",
"text": "That’s a bit trickier but you can do that using aggregation expressions with expressive update.I don’t think there’s an example in the docs for exactly this but you can see some code that does this in a Jira ticket comment here: https://jira.mongodb.org/browse/SERVER-1050?focusedCommentId=2305623&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-2305623 - in the forums there are a few examples that are similar, just search for the words upsert and $concatArrays…Asya",
"username": "Asya_Kamsky"
}
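The Jira comment Asya links to is easy to miss, so here is a minimal, untested sketch of that expressive-update technique applied to this schema. The collection name tours is an assumption, and the visit values are just the example from the question:

```js
const visit = { agencyID: 123, tourID: 9, hostName: "www.mongodb.com" };

db.tours.updateOne(
  { agencyID: visit.agencyID, tourID: visit.tourID },
  [
    { $set: {
        totalVisits: { $add: [ { $ifNull: [ "$totalVisits", 0 ] }, 1 ] },
        hosts: {
          $let: {
            vars: { current: { $ifNull: [ "$hosts", [] ] } },
            in: {
              $cond: [
                // is this host already in the array?
                { $in: [ visit.hostName,
                         { $map: { input: "$$current", as: "h", in: "$$h.name" } } ] },
                // yes: bump its counter
                { $map: {
                    input: "$$current",
                    as: "h",
                    in: { $cond: [
                        { $eq: [ "$$h.name", visit.hostName ] },
                        { name: "$$h.name", total: { $add: [ "$$h.total", 1 ] } },
                        "$$h"
                    ] }
                } },
                // no: append a new host entry
                { $concatArrays: [ "$$current", [ { name: visit.hostName, total: 1 } ] ] }
              ]
            }
          }
        }
    } }
  ],
  { upsert: true }
);
```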
] | Upsert Document with increment in nested array | 2023-01-24T12:42:17.244Z | Upsert Document with increment in nested array | 850 |
null | [
"java"
] | [
{
"code": "",
"text": "I see issue as per the subject. How to eradicate it.",
"username": "Suryanarayana_Murthy_Maganti"
},
{
"code": "",
"text": "Your title says whatever you were doing was “completed successfully”.\nAre you sure you copy-pasted the correct issue?",
"username": "Yilmaz_Durmaz"
}
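For anyone landing here later: that line is typically the Java driver's per-command DEBUG logging (logger names under org.mongodb.driver), not an error. If the goal is simply to stop it flooding the logs, raising that logger's level in the SLF4J backend is usually enough. A sketch for Logback, assuming Logback is the backend in use:

```xml
<!-- logback.xml (sketch) -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Hide the driver's per-command "completed successfully" DEBUG messages -->
  <logger name="org.mongodb.driver" level="INFO"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```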
] | Execution of command with request id 21 completed successfully in 76.24 ms on connection | 2023-01-23T22:35:34.893Z | Execution of command with request id 21 completed successfully in 76.24 ms on connection | 776 |
null | [
"golang",
"alpha"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.12.0-alpha1 of the MongoDB Go Driver.This release includes an experimental API for explicitly encrypting a range index, a patch to retry heartbeat on timeout, the deprecation of the x/bsonx package, as well as various performance optimizations to server selection functions. For more information please see the 1.12.0-alpha1 release notes.You can obtain the driver source from GitHub under the v1.12.0-alpha1 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team",
"username": "Preston_Vasquez"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Go Driver 1.12.0-alpha1 Released | 2023-01-24T20:34:19.336Z | MongoDB Go Driver 1.12.0-alpha1 Released | 1,087 |
null | [
"indexes"
] | [
{
"code": "",
"text": "can we add new field as a unique field for existing collections?\nDoes unique validation work?",
"username": "Goutham_Reddy"
},
{
"code": "",
"text": "Do you want toadd a unique index on an existing field?oradd a field for which a unique index exists?Does unique validation work?It works in both.If a unique index exist and add a document with a duplicate value or modify a document with a duplicate value the operation will fail. If you try to create a unique index and duplicates exist the index creation will fail.",
"username": "steevej"
},
{
"code": "sparseunique",
"text": "It sounds like you have a collection where a certain field does not yet exist. You can add a sparse index that’s unique which will enforce uniqueness across all the documents which have this field but will ignore documents which don’t have this field.Does this help?Asya",
"username": "Asya_Kamsky"
},
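A one-line mongosh illustration of the sparse unique index Asya describes (the collection and field names are placeholders, not from the thread):

```js
// Uniqueness is enforced only for documents that actually contain newField;
// existing documents without the field are simply not indexed.
db.myCollection.createIndex( { newField: 1 }, { unique: true, sparse: true } )
```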
{
"code": "",
"text": "Yes, thank you; this was helpful.",
"username": "Goutham_Reddy"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can we add new field as a unique field for existing collections? | 2022-08-08T10:03:34.150Z | Can we add new field as a unique field for existing collections? | 2,698 |
null | [
"connecting",
"mongodb-shell"
] | [
{
"code": "",
"text": "MongoDB shell version v5.0.14\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1",
"username": "Pankaj_Sharma"
},
{
"code": "localhost",
"text": "there are two possible causes for your problem:",
"username": "Yilmaz_Durmaz"
},
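A couple of commands that are often useful for narrowing this down, assuming a Linux host where MongoDB was installed as a systemd service:

```sh
sudo systemctl status mongod      # is the mongod service running at all?
sudo ss -ltnp | grep 27017        # is anything listening on the default port?
```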
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
] | Connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 | 2023-01-24T14:32:07.618Z | Connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 | 1,187 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "We plan to upgrade from 4.2 to 6.0.3 by mongodump and mongorestore. Can we dump from 4.2 and restore to 6.0.3 directly ?\nIs is a must to go through version 4.2 → 4.4 → 5.0 → 6.0 ? This will take considerable long time to upgrade between each versions.\nYour advice / experience is very welcome. Thank you",
"username": "Daniel_Wan"
},
{
"code": "",
"text": "Hi @Daniel_Wan ,\nI suggest you to try a dump & restore from 4.2 to 5.0 (coz Is one major release) and the upgrade ti 6.0.* in a test enviroment, but the best practice as you’ ve mentioned is to through the version 4.2 → 4.4 → 5.0 → 6.0.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "mongodumpmongorestore",
"text": "Welcome to the MongoDB community @Daniel_Wan !The recommended (and most throughly tested) upgrade path is to do in-place upgrades through successive major releases of MongoDB ( 4.2 → 4.4 → 5.0 → 6.0 as you have suggested).However, if you want to fast forward through several major release upgrades and downtime is acceptable, you can consider using mongodump and mongorestore to go directly from your current release to the latest. As with any major system change, I recommend testing this in a representative staging or QA environment before upgrading in production. You may discover a few issues that need to be corrected in the source data (for example, stricter validation of collection options), however I believe these should all be fixable.Please also see this response including alternative approaches such as automation: Replace mongodb binaries all at once? - #3 by Stennie.I suggest you to try a dump & restore from 4.2 to 5.0 (coz Is one major release)4.2 => 5.0 is actually two major releases (4.4 and 5.0). The release versioning scheme changed starting from MongoDB 5.0 so going forward the annual major release versions are #.0 (5.0, 6.0, …).Regards,\nStennie",
"username": "Stennie_X"
},
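For reference, the dump-and-restore route Stennie mentions boils down to the two commands below; the hostnames, credentials and backup path are placeholders:

```sh
# Take a full dump from the existing 4.2 deployment
mongodump --uri="mongodb://user:pass@old-host:27017" --out=/backups/dump-4.2

# Restore it into the new 6.0.3 deployment
mongorestore --uri="mongodb://user:pass@new-host:27017" /backups/dump-4.2
```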
{
"code": "",
"text": "Thanks all !!your feedback is appreciated and we will go for version by version upgrade which looks much more promising",
"username": "Daniel_Wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can we do a mongo 4.2 dump and then restore to mongo 6.0.3 directly | 2023-01-19T19:43:17.066Z | Can we do a mongo 4.2 dump and then restore to mongo 6.0.3 directly | 2,303 |