image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"queries",
"replication"
] | [
{
"code": "",
"text": "Hello everyone,I am currently working on a project that involves using MongoDB as our primary database system. As our data continues to grow, we are starting to experience increased server storage (Data Center Products & Services | Best Providers | Lenovo Deutschland) usage, and we’re looking for some guidance on how to optimize our storage usage.I’m interested in hearing from the MongoDB community about best practices for managing server storage with MongoDB. Specifically, I have the following questions:Any advice or suggestions you can provide would be greatly appreciated.\nThanks in advance for your help!",
"username": "Elive_Joseph"
},
{
"code": "",
"text": "You can do some research if haven’t yet as answering those questions will be comprehensive and take a significant amount of effort.Many related articles can be found online (e.g. official manual), but my two cents.I personally don’t worry too much about storage optimization, as a prod ready database system is supposed to be smart enough to manage disk use on its own. So i generally only follow guidelines from official doc and monitoring, then that’s it.",
"username": "Kobe_W"
}
] | Best Practices for Managing Server Storage with MongoDB | 2023-05-04T11:41:57.975Z | Best Practices for Managing Server Storage with MongoDB | 547 |
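A quick aside on the storage question above: before optimizing anything, it helps to measure where the space is actually going. The mongosh commands below are a minimal sketch of that first step; the database and collection names are placeholders, not taken from the thread.

```js
// Sketch only: inspect storage usage before deciding on any optimization.
const target = db.getSiblingDB("mydb");               // placeholder database
target.stats(1024 * 1024);                            // dataSize / storageSize / indexSize in MB
target.mycollection.stats({ scale: 1024 * 1024 });    // per-collection storage and index breakdown
// WiredTiger normally keeps freed space for reuse; `compact` can return it to the OS,
// but schedule it carefully since it is an intensive, per-collection operation.
target.runCommand({ compact: "mycollection" });
```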
null | [
"api"
] | [
{
"code": "",
"text": "Hey, we’re analyzing migrating our business analytics into mongo charts. The thing is that in our architecture, every client has its own database. From what I’ve gathered, I’d have to manually create all my charts and views for each new client, and modifying a general chart would mean going into each client’s chart and updating it manually. My team and I were wondering if you are planning to release an API to programmatically create and modify charts and chart views, the current mongoDB Charts version is not scalable for multi-db use cases.Cheers.",
"username": "Juan_DIego_Arango"
},
{
"code": "",
"text": "@Juan_DIego_Arango Thanks for your question. We have plans to enable API for Charts, allowing you to create charts and dashboards dynamically. We plan to pick this up in Q3 or early Q4 this year.\nYou do mention charts views as well. Do you have use cases to create/update them through the API? Could you please elaborate?",
"username": "Avinash_Prasad"
},
{
"code": "",
"text": "Great! Hope to see the api soon. In terms of the chart view let me explain to you my scenario: We are a software business, all our data is in a cluster and due to privacy and security reasons each of our clients have its own independent DB inside our cluster. Some visualizations, require a chart view to preprocess data with multiple complex lookups and other operations, operations we can’t perform on the chart query builder due to the no lookup restrictions (we know lookups can be done with click-ops but for some cases it is not enough due to the pipeline we need for an specific visualization). So to create a desired viz or dashboard, we first create its chart view and build the vez from the chart view. Having a lot of clients means I’d have to manually do this process for all of them, if we had an API we could write scripts so that I can create the same chart view-via-dashboard for all my different client DB’s without having to manually create each one of them. We are really excited about using charts because so far, its the only BI tool that allows us to build with click-ops in a NoSQL DB",
"username": "Juan_DIego_Arango"
},
{
"code": "",
"text": "I’d like to add my +1 for this. Our company’s use case / pain point is identical to Juan’s. Glad to know this is on the roadmap.",
"username": "Zach_Buckner"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Charts API, programmatically create and update charts and chart views | 2023-03-08T03:12:09.999Z | Charts API, programmatically create and update charts and chart views | 1,311 |
null | [
"swift",
"atlas-device-sync",
"flexible-sync"
] | [
{
"code": "Server permissions for this file ident have changed since the last time it was used (IDENT)\"document_filters\": { \"write\": { \"_id\": { \"$in\": \"%%user.custom_data.owned_documents\" } } }\nuser.custom_data.owned_documentsUserCusomDataUserDocumentscondition _id == the newly added id### Just after a fresh launch (will download added document and perform reset):\nSession[1]: Binding '<...clipped>/db.realm' to ''\nSession[1]: client_reset_config = false, Realm exists = true, client reset = false\nConnected to app services with request id: \"6453afa84e2b82fb08ee51f8\"\nSession[1]: Begin processing pending FLX bootstrap for query version 0. (changesets: 1, original total changeset size: 4495)\nSession[1]: Integrated 1 changesets from pending bootstrap for query version 0, producing client version 10 in 20 ms. 0 changesets remaining in bootstrap\nSession[1]: Begin processing pending FLX bootstrap for query version 1. (changesets: 1, original total changeset size: 947)\nSession[1]: Integrated 1 changesets from pending bootstrap for query version 1, producing client version 14 in 5 ms. 0 changesets remaining in bootstrap\nSession[1]: Begin processing pending FLX bootstrap for query version 2. (changesets: 1, original total changeset size: 5103)\nSession[1]: Integrated 1 changesets from pending bootstrap for query version 2, producing client version 19 in 4 ms. 0 changesets remaining in bootstrap\nDisconnected\nSession[2]: Binding '<...clipped>/db.realm' to ''\nSession[2]: client_reset_config = false, Realm exists = true, client reset = false\nConnected to app services with request id: \"6453afaa1e186846b76f50d0\"\nSession[2]: Begin processing pending FLX bootstrap for query version 3. (changesets: 1, original total changeset size: 343)\nSession[2]: Integrated 1 changesets from pending bootstrap for query version 3, producing client version 24 in 6 ms. 0 changesets remaining in bootstrap\nSession[2]: Begin processing pending FLX bootstrap for query version 4. (changesets: 1, original total changeset size: 343)\n<...clipped>\nSession[2]: Begin processing pending FLX bootstrap for query version 10. (changesets: 1, original total changeset size: 343)\nSession[2]: Integrated 1 changesets from pending bootstrap for query version 10, producing client version 59 in 6 ms. 0 changesets remaining in bootstrap\n### Just after creating the document by calling the cloud function:\nSession[2]: Begin processing pending FLX bootstrap for query version 11. (changesets: 1, original total changeset size: 0)\nSession[2]: Integrated 1 changesets from pending bootstrap for query version 11, producing client version 65 in 13 ms. 
0 changesets remaining in bootstrap\n### Closed app, and relaunched:\nRealm sync client ([realm-core-13.9.4])\nConnection[1]: Session[1]: Binding '<...clipped>/db.realm' to ''\nConnection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\nConnected to endpoint '<...clipped>:443' (from '<...clipped>:54823')\nConnection[1]: Connected to app services with request id: \"6453b14273a19bf390945f2c\"\nConnection[1]: Session[1]: Received: ERROR \"Server permissions for this file ident have changed since the last time it was used (IDENT)\" (error_code=228, try_again=true, error_action=ClientReset)\nConnection[2]: Session[2]: Binding '<...clipped>/db.realm.fresh' to ''\nConnection[2]: Session[2]: client_reset_config = false, Realm exists = true, client reset = false\nConnection[1]: Disconnected\nConnected to endpoint '54.81.24.155:443' (from '192.168.1.131:54836')\nConnection[2]: Connected to app services with request id: \"6453b1430f4949df9f90f3cf\"\nConnection[2]: Session[2]: Begin processing pending FLX bootstrap for query version 0. (changesets: 1, original total changeset size: 4495)\nConnection[2]: Session[2]: Integrated 1 changesets from pending bootstrap for query version 0, producing client version 9 in 10 ms. 0 changesets remaining in bootstrap\nConnection[2]: Session[2]: Begin processing pending FLX bootstrap for query version 1. (changesets: 1, original total changeset size: 8459)\nConnection[2]: Session[2]: Integrated 1 changesets from pending bootstrap for query version 1, producing client version 12 in 8 ms. 0 changesets remaining in bootstrap\nConnection[3]: Session[3]: Binding '<...clipped>/db.realm' to ''\nConnection[3]: Session[3]: client_reset_config = true, Realm exists = true, client reset = true\nConnection[2]: Disconnected\nConnected to endpoint '<...clipped>:443' (from '<...clipped>:54838')\nConnection[3]: Connected to app services with request id: \"6453b1444e2b82fb08f98717\"\nConnection[3]: Session[3]: Client reset, path_local = <...clipped>/db.realm.fresh, mode = Recover, recovery_is_allowed = true\nConnection[3]: Session[3]: Local changesets to recover: 2\nConnection[3]: Session[3]: Recreated the active subscription set in the complete state (12 -> 12)\nConnection[3]: Session[3]: perform_client_reset_diff is done, old_version.version = 67, old_version.index = 0, new_version.version = 70, new_version.index = 1\nConnection[3]: Session[3]: Tracking pending client reset of type \"Recover\" from 2023-05-04 13:21:08\n",
"text": "We are facing the following issue, why are we getting this?\nServer permissions for this file ident have changed since the last time it was used (IDENT)The device would call a cloud function to create a brand new document for the user.\nThe cloud function would verify some things, create the document and then add it to\nthe user’s user.custom_data.owned_documents array. (which is a collection called UserCusomData)After we receive a success call on the device, we register a new subscription to the UserDocuments’s object with condition _id == the newly added id.\nSo essentially If we have multiple documents, we would have a subscription for each. (why we are not using $in is another discussion and out of scope)So after that, we expect the realm to sync and any new documents to be downloaded, and also, any collection observations to be updated so that the UI updates.Nothing happens. We need to restart the app several times, and sometimes do a fresh install (or just clearing the state to remove the db) so that all of the documents get redownloaded.",
"username": "Georges_Jamous"
},
{
"code": "TTT\"document_filters\": { \"write\": { \"_id\": { \"$in\": \"%%user.custom_data.owned_documents\" } } }owned_documentsUserDocumentsUserDocumentsUserDocumentsowned_documents%%user.custom_data.owned_documentsConnection[1]: Session[1]: Received: ERROR \"Server permissions for this file ident have changed since the last time it was used (IDENT)\" (error_code=228, try_again=true, error_action=ClientReset)",
"text": "Hello @Georges_Jamous,A few things to note here:The logs you have sent are suggesting that the following is happening:The team recognizes that the behavior surrounding handling permissions changes between sessions could be improved (ie, not have the end result be a client reset). In fact, we have a future project planned to have the server make handling this more graceful.Let me know if you have any more questions,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "user.allSessions.forEach { session in\n session.suspend()\n}\ntry! await Task.sleep... // wait 1 second\nuser.allSessions.forEach { session in\n session.resume()\n}\n",
"text": "Hey @Jonathan_Lee , thanks for this.So taking what you said into account, we will try to find a way to re-initiate the sync session to force the new permissions to take effect.Currently the only way (to my knowledge) to do it gracefully in the client (swift in my case) would be something like that:Any suggestions or a better way? also, do you foresee any downside of doing it?thanks",
"username": "Georges_Jamous"
},
{
"code": "",
"text": "Unfortunately, I don’t think there’s really a way of gracefully doing it at the moment - ultimately changing read permissions between sync sessions will result in a client reset error. I would suggest taking a look at the docs for handling client resets and identifying which strategy from there works best for your use case. I do want to reiterate though that this is something the team plans to improve and your feedback here is appreciated .Best,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Understanding why? (ERROR "Server permissions for this file ident have changed) | 2023-05-04T14:17:04.639Z | Understanding why? (ERROR “Server permissions for this file ident have changed) | 948 |
[
"atlas-cluster",
"atlas",
"configuration"
] | [
{
"code": "",
"text": "I’m sharing an Atlas Cluster with several other databases. I know that there’s a way to disable server-side JavaScript at the cluster level. Is it possible to disable server-side JavaScript at the individual database level? This would be convenient in case the other databases in the cluster need to use JavaScript.\n\nserver-side-javascript794×433 21.6 KB\n",
"username": "Sam_Lanza"
},
{
"code": "",
"text": "Hi @Sam_Lanza and welcome to MongoDB community forums!!Currently we do not have a feature to disable the server side JavaScript at the database level. The feature you mentioned applies to the entire cluster configuration.As a workaround, you can look into using a proxy\nproxy between the client and the server, which filters out the server-side operators that can be restricted to reach the server. This approach can help ensure that only the desired operators are allowed to reach the server.If the above workaround does not help, could you share your use case which would help us to assist you further.Nevertheless, we constantly strive to enhance our product and enhance its value for our users. If this feature would be useful for you and your use case, we kindly encourage you to share your thoughts with us via our MongoDB Feedback Engine.We value your feedback, and our team will thoroughly consider it before implementing the feature.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is there a way to turn off server-side JavaScript by Database rather than Cluster? | 2023-04-28T09:00:17.258Z | Is there a way to turn off server-side JavaScript by Database rather than Cluster? | 915 |
|
[
"installation"
] | [
{
"code": "",
"text": "\nCapture d’écran (1)1920×1080 243 KB\n\nDear, iìm trying to install mongodb server 6.0 on my ubuntu 22.04. During the installation process, i did not get error, but lauching mongod “sudo systemctl status mongod”, i have the error that servi failed like you can see in the picture. Can you help me to resove this problem please? Thanks you very much",
"username": "elvis_sounna"
},
{
"code": "",
"text": "Check this link",
"username": "Ramachandra_Tummala"
}
] | MongoDB failed installation | 2023-05-04T10:56:49.854Z | MongoDB failed installation | 1,178 |
|
null | [
"atlas-device-sync",
"app-services-user-auth"
] | [
{
"code": "",
"text": "I’m getting this error continuously in Production and Syncing is not working\nTranslator failed to complete processing batch: failed to update resume token document: client is disconnected.\nCan Anyone help me with this.",
"username": "Abhishek_Matta"
},
{
"code": "translator failed to complete processing batch: failed to update resume token document: client is disconnected\nfailed to complete history scan: error while doing history scan for session: error building download message: connection(cluster0-shard-00-01.wrav6.mesh.mongodb.net:30454[-539103]) incomplete read of message header: read tcp 127.0.0.1:54004->127.0.0.1:30454: use of closed network connection\n",
"text": "The same for usand rarelybut Realm Sync seems to work at least for some users. Though, we noticed that some clients are experiencing unexpected client resets but not sure if related",
"username": "Anton_P"
},
{
"code": "",
"text": "Let me know if you find any solution to this.",
"username": "Abhishek_Matta"
},
{
"code": "",
"text": "G’day @Abhishek_Matta, @Anton_P ,Thank you for raising your concerns. This is a known issue and engineering teams are working on this.@Anton_P I noticed you have multiple projects. Could you please confirm if this is happening for all of them or otherwise which are the ones affected?I would share more information as it comes to light.I appreciate all your patience with us Cheers, ",
"username": "henna.s"
},
{
"code": "",
"text": "Hi @henna.s , the error happens in all environments but I think the frequency is more on the production.",
"username": "Anton_P"
},
{
"code": "",
"text": "Yes, I’m having the same issue in production.",
"username": "Abhishek_Matta"
},
{
"code": "",
"text": "This is happening for us as well in production",
"username": "Amit_Goenka"
},
{
"code": "",
"text": "G’Day Folks @Anton_P, @Abhishek_Matta,I got an update from engineering that this has been fixed. Could you please confirm if this issue has been resolved for you?I look forward to your response.Cheers, ",
"username": "henna.s"
},
{
"code": "",
"text": "We didn’t see the error lately",
"username": "Anton_P"
},
{
"code": "",
"text": "A post was split to a new topic: Atlas Device Sync: Translator Error",
"username": "henna.s"
},
{
"code": "",
"text": "",
"username": "henna.s"
}
] | Realm error: failed to update resume token document | 2022-03-22T07:35:41.087Z | Realm error: failed to update resume token document | 6,701 |
[
"vscode"
] | [
{
"code": "",
"text": "Hello guys!\nI’ve installed the mongodb VS code extension.\nI created a db inside my cluster and a collection inside my db via mongodb playground.\nI’ve noticed that the created db isn’t showing under my cluster.\n\nThough I’m able to see the db with it’s collection on the Atlas cloud.Captured with LightshotWhy I’m not able to see my created db in vs code?\nThank you",
"username": "Karim_sari_eddine"
},
{
"code": "v0.11.1",
"text": "Hello @Karim_sari_eddine ,Welcome to The MongoDB Community Forums! If you’re not currently using the latest version, I recommend updating VSCode to the most recent version available, which is Version: 1.77.3 (Universal) . Additionally, ensure that you have updated the MongoDB Extension to the latest version, MongoDB for VS Codev0.11.1 .Furthermore, please verify that the user you are attempting to log in with has the appropriate database permissions (such as Atlas admin or readWriteAnyDatabase role) to access the cluster data.If the issue persists, please don’t hesitate to provide us with more information, including:Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | I'm not able to see my created db under my cluster | 2023-04-30T18:22:08.539Z | I’m not able to see my created db under my cluster | 818 |
|
[
"dach-virtual-community",
"conference"
] | [
{
"code": "MongoDB Senior Solutions ArchitectIndependent Consultant",
"text": "\nMUG DACH - EDA960×540 103 KB\nDie Umstellung von monolithischen Anwendungen auf Microservice-Architekturen ist alles andere als einfach - lohnt sich aber oftmals! Da die Services in der Regel nicht isoliert arbeiten, ist die Implementierung geeigneter Kommunikationsmodelle wichtig. Um eine enge Kopplung und zahlreiche Punkt-zu-Punkt-Verbindungen zwischen zwei beliebigen Diensten zu vermeiden, besteht ein effektiver Ansatz darin, eine Event-Driven-Architektur (EDA) zu nutzen. Auf diese Weise können beliebige Services Ereignisse veröffentlichen und abonnieren, ohne direkt miteinander zu kommunizieren.Wir laden Euch herzlich ein, an unserer MongoDB User Group teilzunehmen, in der wir die Grundlagen der EDA erklären und zeigen, wie ihr sie mit MongoDB Atlas umsetzen könnt.Wir werden die wichtigsten Konzepte der EDA durchgehen und Euch zeigen, wie Ihr sie in Eure Anwendungen integrieren könnt. Wenn Ihr konkrete Anwendungsfälle oder Fragen habt, bringt sie mit und wir versuchen, alles zu beantworten. Freut Euch auch auf ein unterhaltsames Quiz am Ende, bei dem Ihr Euer Wissen testen und tolle MongoDB-Swag-Preise gewinnen könnt!Event Type: Online\nLink(s):\nVideo Conferencing URLMongoDB Senior Solutions Architect\nIndependent Consultant\nBitte klicken Sie auf den Link ✓ RSVP oben auf dieser Seite um teilzunehmen.\nDer Link ändert sich zur Bestätigung in eine grüne Schaltfläche, für die Anmeldung müssen eingeloggt sein.Treten Sie der virtuellen DACH MongoDB User Group bei, um über anstehende Treffen und Diskussionen auf dem Laufenden zu bleiben.",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Gentle Reminder: The event starts in 50 mins. Please join using this link here: Launch Meeting - Zoom",
"username": "Harshit"
}
] | DACH MUG: Event Driven Architecture & MongoDB: Eine perfekte Kombination | 2023-04-03T14:03:35.681Z | DACH MUG: Event Driven Architecture & MongoDB: Eine perfekte Kombination | 2,566 |
|
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hi Mongo RealmIn the Realm reference Authenticate HTTP Client Requests it states that access tokens expire 30 minutes after MongoDB Realm grants them.My question:Some background:Currently in my non-realm api I enforce the business logic that a user must re-authenticate once their subscription could end by setting the refresh token to expire when the users current subscription period ends (approx 1 month) and supplying the same refresh token to the user until it expires (do not regenerate another refresh token until current one has expired).",
"username": "mba_cat"
},
{
"code": "",
"text": "@mba_cat If you use a custom JWT authentication you can set your own expiry which the system will respect - https://docs.mongodb.com/realm/authentication/custom-jwt/#mongodb-data-exp",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks Ian if I use a custom JWT does this just customise the access token or can the refresh token be customised as well?I assume if I want to use the custom JWT option I would also have to build out my own with endpoints to handle login or is there a way to combine the email/password auth with my own custom JWTs?",
"username": "mba_cat"
},
{
"code": "",
"text": "Refresh tokens have a lifetime of 60 days for username/password and the other builtin providers. If you want to customize anything that is where the custom JWT provider comes in. Although I’d probably recommend a 3rd party provider rather than setting up your own auth endpoints as there are many out there that make it super easy.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi IanI have been experimenting with Mongo Realm authentication and the custom JWT option.As far as I can see custom JWT only allows me to generate a custom JWT that can be used to log a user into Realm and create them - at which point Realm generates its own access and refresh JWT tokens for the user to to use to authenticate to the GraphQL endpoint.What I would like to do is to authenticate directly to the GraphQL endpoints using my own JWTs (which will have the custom expiry) or setup this custom expiry in the JWT tokens that Realm provides to the user.Is there a way to either:To clarify if I cannot set a custom expiry I would need to verify if the user is still a subscriber on each call they make to the endpoint which would slow everything down, if I can customise the tokens I am able to know that the user is a subscriber at least until their current refresh token expires.",
"username": "mba_cat"
},
{
"code": "exp",
"text": "@mba_cat If you use a Custom JWT token and set the exp field then the Realm Cloud will respect that and no longer issue tokens for that user after the expiration limit has been reached. You cannot use your own tokens for requests to Realm Cloud - that would be a large security hole for the system. But we will respect the settings you pass from your custom JWT token.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward that does not appear to be the case. I generated a custom JWT token with an expiry in 1 hour, the refresh token returned to me expires in 1 month as per Mongo Realm default.How do I pass a setting from my custom JWT to Realm so that is respects the expiry?",
"username": "mba_cat"
},
{
"code": "",
"text": "That sounds like unexpected behavior - please open a support ticket",
"username": "Ian_Ward"
},
{
"code": "",
"text": "OkIn the meantime is there an alternative way to customise the Realm JWT tokens - by setting custom data etc?",
"username": "mba_cat"
},
{
"code": "",
"text": "Any updates? It seems like the on the latest version I can still access the realm db after my jwt token expires",
"username": "Tam_Nguyen1"
},
{
"code": "",
"text": "A post was split to a new topic: I am trying to integrate Realm into my project using the authentication facility",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hey \nCurrently we’re still seeing the same behaviour with Custom JWTs.@Ian_Ward - do you know if there is some progress on this internally or if this was expected to be fixed?",
"username": "Rico_Barisch"
},
{
"code": "",
"text": "What behavior? Setting the expiration time of a token? That is released",
"username": "Ian_Ward"
},
{
"code": "expapp.logIn(Credentials.jwt(\"xxx\"))",
"text": "@Ian_Ward Sorry for the confusion. I meant the behaviour that mba_cat stated previously.We’re using Custom JWT Authentication. Our tokens do include the required ‘exp’ field. According to the docs “Custom JWT refresh token expiration is determined by the exp value of the user’s JWT” (https://www.mongodb.com/docs/atlas/app-services/users/sessions/#configure-refresh-token-expiration).Though the users refreshToken returned from app.logIn(Credentials.jwt(\"xxx\")) always is valid for 60 days. So it seems like it ignores the JWTs ‘exp’ field and actually uses the default refresh token expiration time instead.",
"username": "Rico_Barisch"
},
{
"code": "",
"text": "You can now configure the expiration on the App Services configuration. It should be an option under the JWT auth provider",
"username": "Ian_Ward"
},
{
"code": "exp",
"text": "Yes I saw that for non-Custom JWT Auth (and non-Anonymous Auth) it can be customized in the User Settings.But the linked docs states the exception that for “Custom JWT refresh token expiration is determined by the exp value of the user’s JWT.”So from my understanding this cannot be configured (which is good in this/our case) BUT instead should re-use the same ‘exp’ field value from the provided Custom JWT for the refresh token’s ‘exp’ field as well, correct? Or do I get the docs wrong? ",
"username": "Rico_Barisch"
},
{
"code": "",
"text": "You should be able to set the expiration in the UI and that will be respected. Is that not the case?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Setting refresh expiry on the UI works for me, thanks!",
"username": "Tam_Nguyen1"
},
{
"code": "",
"text": "Hey, sorry for my long abstinence.Meanwhile additionally I also added some report at Github: Refresh token ignores Custom JWT expiration time · Issue #6497 · realm/realm-core · GitHubOutcome: Feature works as expected but the docs have been incorrect. (Sad in our case🙉)",
"username": "Rico_Barisch"
}
] | Realm refresh token expiry and customisation | 2021-05-16T09:03:21.455Z | Realm refresh token expiry and customisation | 10,856 |
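For readers following the custom JWT discussion above, here is a minimal Node.js sketch of issuing a token whose exp lines up with the end of a subscription period, which is the mechanism discussed in the thread. It assumes the jsonwebtoken package; the signing key, audience value, and field names are placeholders rather than anything from the thread.

```js
// Sketch only: sign a custom JWT whose expiry matches the subscription end date.
const jwt = require("jsonwebtoken");

function issueCustomJwt(userId, subscriptionEndsAt, signingKey) {
  return jwt.sign(
    {
      sub: userId,            // subject claim identifying the user
      aud: "your-app-id",     // placeholder audience value
    },
    signingKey,               // must match the key configured on the Custom JWT provider
    {
      algorithm: "HS256",
      // expiresIn sets the token's exp claim relative to now (in seconds)
      expiresIn: Math.floor((subscriptionEndsAt - Date.now()) / 1000),
    }
  );
}
```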
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.11.6 of the MongoDB Go Driver.This release fixes the import failure introduced in 1.11.5. This release also includes the patch in the retracted 1.11.5, which fixes a bug that can squash the FullDocument configuration value when merging multiple ChangeStreamOptions structs. For more information please see the 1.11.6 release notes.You can obtain the driver source from GitHub under the v1.11.6 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team",
"username": "Qingyang_Hu1"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Go Driver 1.11.6 Released | 2023-05-04T12:44:23.539Z | MongoDB Go Driver 1.11.6 Released | 715 |
null | [
"aggregation"
] | [
{
"code": "{\n \"groupId\":{\n \"$numberLong\":\"12345\"\n },\n \"detailList\":[\n {\n \"Type\":\"P\",\n \"fromCode\":\"1000000\",\n \"toCode\":\"1100000\"\n },\n {\n \"Type\":\"P\",\n \"fromCode\":\"2000000\",\n \"toCode\":\"2200000\"\n },\n {\n \"Type\":\"M\",\n \"fromCode\":\"3000000\",\n \"toCode\":\"3300000\"\n },\n {\n \"Type\":\"M\",\n \"fromCode\":\"4000000\",\n \"toCode\":\"5500000\"\n }\n ]\n}\nfromCodetoCode",
"text": "I have a set of the value in the array, I need to write a query in the arrays objectBelow is my dataI need to fetch the groupId only when my searching criteria matched the range of the fromCode and toCodeFor example:-\nIf I am searching 1000000-3300000 then there are two types P, M so, I have to return groupId only all the criteria has type P with the searched criteriaWhat will be the query?",
"username": "Prabhat_Gautam"
},
{
"code": "db.collection.aggregate([\n {\n $project: {\n groupId: 1,\n detailList: {\n $filter: {\n input: \"$detailList\",\n as: \"detail\",\n cond: {\n $and: [\n {\n $eq: [\"$$detail.Type\", \"P\"],\n },\n {\n $gte: [\n \"$$detail.fromCode\",\n \"1000000\",\n ],\n },\n {\n $lte: [\n \"$$detail.toCode\",\n \"3300000\",\n ],\n },\n ],\n },\n },\n },\n },\n },\n]);\n{\n _id: ObjectId(\"645370489f19c564d617d4c6\"),\n groupId: 12345,\n detailList: [\n {\n Type: 'P',\n fromCode: '1000000',\n toCode: '1100000'\n },\n {\n Type: 'P',\n fromCode: '2000000',\n toCode: '2200000'\n }\n ]\n}\n$eq$gte$lte",
"text": "Hello @Prabhat_Gautam,Welcome to the MongoDB Community forum I have created a sample collection based on the shared data and written the following aggregation pipeline to get the desired output:and it returned the following result:In the above query, I’m using the $filter operator to filter an array of subdocuments based on certain conditions. This operator takes an input array, an identifier for each element in the array, and a condition that determines which elements to include in the output array. Within the condition, I’ve used the comparison operators $eq, $gte and $lte to compare values and return the result.Hope this helps. Feel free to reach out if you have any further questions.Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Query to Fetch groupId based on Range Criteria in detailList Array Objects | 2023-05-04T08:31:21.835Z | Query to Fetch groupId based on Range Criteria in detailList Array Objects | 348 |
null | [
"atlas-search"
] | [
{
"code": "atlas clusters search indexes create --clusterName myAtlasClusterEDU -f /app/search_index.json",
"text": "Encountering errors since yesterday at “Create a Atlas Search Index With Static Mapping”atlas clusters search indexes create --clusterName myAtlasClusterEDU -f /app/search_index.jsonError: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/64395b6ed81c4023ae7a58c1/clusters/myAtlasClusterEDU/fts/indexes: 400 (request “MAXIMUM_INDEXES_FOR_TENANT_EXCEEDED”) The maximum number of FTS indexes has been reached for this instance size.",
"username": "Naoto_Hayashi"
},
{
"code": "M0M2M5",
"text": "Hello @Naoto_Hayashi,Welcome to the MongoDB Community forums.400 (request “MAXIMUM_INDEXES_FOR_TENANT_EXCEEDED”) The maximum number of FTS indexes has been reached for this instance size.The error message indicates that you have exceeded the maximum number of FTS indexes that you can create.As per the MongoDB Atlas Search M0 (Free Cluster), M2, and M5 Limitations you cannot create more than:Hope it helps. Let us know if you have any further questions or concerns.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MAXIMUM INDEXES FOR TENANT EXCEEDED - MongoDB Atlas Search | 2023-05-04T07:03:05.599Z | MAXIMUM INDEXES FOR TENANT EXCEEDED - MongoDB Atlas Search | 1,006 |
null | [] | [
{
"code": "",
"text": "I have been trying to connect to atlas with my router at home and it won’t let me, now when I try another router if it works, could it be because of the provider since the routers are from different providers?",
"username": "Magdiel_Asicona"
},
{
"code": "https://assets.mongodb-cdn.com/",
"text": "Hey @Magdiel_Asicona,Welcome to the MongoDB Community Forums! There may be a number of reasons why this is happening. Kindly make sure you have whitelisted the IP when trying to connect. Atlas allows client connections only from IP addresses and CIDR address ranges in the IP access list. Atlas also uses a CDN to serve content quickly. If you’re using a firewall, add the following Atlas CDN host to the firewall’s allow list to prevent issues accessing the Atlas UI:\nhttps://assets.mongodb-cdn.com/\nYou can read more about this from the documentation: Attempting to connect from behind a firewallAdditionally, as you pointed out that there are different providers, it may also be that your ISP is blocking the port from which you’re trying to connect to your Atlas Cluster, so it might be worth checking that too.I’m also linking some documentation that you should find helpful: Troubleshoot Connection Issues\nSet Up Atlas ConnectivityHope this helps. If not, please provide any error or warning message that you may be getting while trying to connect to Atlas. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas does not recognize my IP | 2023-05-02T07:40:38.452Z | Atlas does not recognize my IP | 421 |
[] | [
{
"code": "",
"text": "I was completing the courses on MongoDB university until in one lab i started following error in instruqt tool where CLI is assessed so i am not able to access the CLI as the pages isn’t loading and i am unable to complete the lab Please try again in 30 seconds.\nimage1905×847 83.5 KB\n",
"username": "Muhammad_Bilal7"
},
{
"code": "",
"text": "Hey @Muhammad_Bilal7,Welcome to the MongoDB Community forums Thank you for highlighting this. We want to let you that we are aware of this issue and the team has already put a fix in place to resolve it.We hope that you can now access the labs. If you have any further issues or questions, please do not hesitate to reach out to us.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Yes it is working now thanks <3",
"username": "Muhammad_Bilal7"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | # Error: Server Error in instruqt CLI | 2023-05-03T12:51:22.370Z | # Error: Server Error in instruqt CLI | 832 |
|
null | [
"performance",
"storage"
] | [
{
"code": "",
"text": "Hi,Recently our mongo reached 50% Ram utilization, so we decided to increase the infra. Earlier Mongo was running in 16 core 32GB ram, which we increased to 32 core 62GB ram.\nBut we noticed that this time Mongo was utilizing only 42-43% of Ram i.e. around 25GB.With db.serverStatus().wiredTiger.cache we found that\n“bytes currently in the cache” : 25984630208\n“maximum bytes configured” : 32509001728So is there any reason why Mongo is not able to utilize the maximum allotted ram, but instead is stuck around 25GB?",
"username": "Pratik_Singh2"
},
{
"code": "",
"text": "Hi @Pratik_Singh2,Recently our mongo reached 50% Ram utilization, so we decided to increase the infra. Earlier Mongo was running in 16 core 32GB ram, which we increased to 32 core 62GB ram.\nBut we noticed that this time Mongo was utilizing only 42-43% of Ram i.e. around 25GB.WiredTiger will aim to use only 80% of the max configured cache size - One reason this is done so if there are unexpected memory spike in some workload, there are about 20% additional memory ceiling that WT can work with. Since memory is always needed to do any operation, filling the cache to 100% would likely be very detrimental to the situation. The WT cache is only part of the overall MongoDB memory usage requirement. MongoDB uses memory for incoming connections (~1MB of RAM per connection), aggregation pipeline, etc. that are outside of WT cache.In addition to the above, the OS also needs memory for its operation and for the filesystem cache.In saying the above, is the question more for curiosity or is there an issue you are encountering?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Bytes currently in use not able to reach till maximum bytes configured | 2023-02-07T21:31:06.115Z | Bytes currently in use not able to reach till maximum bytes configured | 1,168 |
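As a small illustration of the 80% target mentioned in the answer above, this mongosh snippet computes how full the WiredTiger cache currently is; the field names are the same ones quoted in the question.

```js
// Sketch: compare current WiredTiger cache usage against the configured maximum.
const cache = db.serverStatus().wiredTiger.cache;
const used = cache["bytes currently in the cache"];
const max = cache["maximum bytes configured"];
print(`cache fill: ${(100 * used / max).toFixed(1)}% of the configured maximum`);
// WiredTiger aims to keep this around 80% by evicting pages before the cache fills up.
```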
null | [
"data-modeling",
"compass",
"schema-validation"
] | [
{
"code": "matches{\n $jsonSchema: {\n bsonType: 'object',\n required: [\n 'team_id'\n ],\n properties: {\n team_id: {\n bsonType: 'objectId',\n description: '\\'team_id\\' deve ser um objectId que identifique a equipe do jogo e precisa ser informado.'\n },\n createdAt: {\n bsonType: 'date',\n description: '\\'createdAt\\' deve ser uma data que represente a data de criação do jogo.'\n },\n updatedAt: {\n bsonType: 'date',\n description: '\\'updatedAt\\' deve ser uma data que represente a data da última atualização do jogo.'\n }\n }\n }\n}\nteam_idteams",
"text": "I have the following schema for matches:But I want to mark team_id as a reference to another schema (teams). Is it possible?",
"username": "Marcos_Visentini"
},
{
"code": "",
"text": "Hey @Marcos_Visentini,At this point, based on what you descibed, you cannot reference a schema into another schema using validations. You can submit a feature request in our feedback engine.Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
}
] | Referencing another schema (one-to-many relationship) using MongoDB Compass validation tab | 2023-05-01T11:35:09.735Z | Referencing another schema (one-to-many relationship) using MongoDB Compass validation tab | 924 |
null | [
"golang"
] | [
{
"code": "",
"text": "As part of testing/continuous integration, I would like to make sure that every query in my application uses an index. I already have a “wrapper” function for virtually every find query so if in testing mode, before executing the query, I’d like to make sure that there is an index that supports the query. How do I run the explain command? Or are there “server stats” that would show me how many queries were run without an index?My production dataset is such that queries that don’t have indexes don’t return, so it’s not okay to not have an index for a query.",
"username": "Matthew_Zimmerman"
},
{
"code": "",
"text": "It is a big subject that is very well covered in course M201: MongoDB Performance at https://university.mongodb.com. You may also start with https://docs.mongodb.com/manual/reference/method/cursor.explain/",
"username": "steevej"
},
{
"code": "",
"text": "Excellent question as the golang driver doesn’t seem to have an Explain method.",
"username": "zaai"
}
] | Go - How to use Explain? | 2020-04-25T14:36:39.648Z | Go - How to use Explain? | 4,317 |
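One point worth adding to the thread above: explain is an ordinary database command, so even without a dedicated helper a driver can send it through its generic run-command API (in the Go driver that is Database.RunCommand). The mongosh sketch below shows the command shape; the collection name and filter are placeholders.

```js
// Sketch: run the explain command directly; any driver can send the same document.
db.runCommand({
  explain: {
    find: "mycollection",           // placeholder collection
    filter: { status: "active" },   // placeholder query
  },
  verbosity: "executionStats",
});
// Inspect queryPlanner.winningPlan for IXSCAN vs COLLSCAN to confirm an index was used.
```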
[
"aggregation",
"queries",
"compass",
"mongodb-shell",
"views"
] | [
{
"code": "mongoshmongoshdb.getCollectionNamesgetCollectionInfosmongoshdb.stats.findOne()TypeError: db.stats.findOne is not a function[\n {\n $addFields:\n {\n processed: {\n $dateFromString: {\n dateString: {\n $replaceOne: {\n input:\n \"$__dc_process.last_updated\",\n find: \"@\",\n replacement: \"\",\n },\n },\n },\n },\n },\n },\n {\n $sort:\n {\n processed: -1,\n },\n },\n {\n $group:\n {\n _id: {\n \"campaign\": \"$_projectId\",\n \"bot\": \"$bot\",\n \"render-status\": \"$render-status\",\n \"stream-status\": \"$stream.status\",\n \"processor\": \"$__dc_process.satellite\",\n },\n \"total\": {\n $count: {},\n },\n \"last_updated\": {\n $first: \"$processed\",\n },\n \"last_updated_job\": {\n $first: \"$$ROOT\",\n },\n },\n },\n {\n $out: \"stats\",\n },\n]\ngroup",
"text": "My goal is to create a materialized view using an aggregation on an existing collection. I was following this presentation - Materialized Pre-Aggregations for Exploratory Analytics Queries. There’s an accompanying Github repoI’ve tried these approaches:Both created a collection, which I verified via the Compass UI, the Atlas UI and the mongosh shell. It’s visible when running the command db.getCollectionNames and getCollectionInfos and it’s visible in both Compass and Atlas.\n\nScreenshot 2023-05-03 at 4.25.29 PM1088×334 20.9 KB\n\nCollection info from mongosh terminal\n\nScreenshot 2023-05-03 at 4.29.21 PM958×354 22.6 KB\nRunning db.stats.findOne() outputs this:\nTypeError: db.stats.findOne is not a functionThis is the pipelineI’ve tried disconnecting and reconnecting the shell. In the UI for Compass and Atlas, querying does work and returns a result.I’m not really sure if the fact that the group stage is not first has any effect? The presenter notes that you should not do any filtering beforehand, but I haven’t done that here.Any advice?",
"username": "Abi_Scholz"
},
{
"code": "db.getCollection( \"stats\" ).findOne()\n",
"text": "You chose a collection name, stats, that collide with a function of the db object. This means you will not be able to use the short cut db.stats to get the collection. You will need to use the getCollection() function like:",
"username": "steevej"
},
{
"code": "",
"text": "That was it!!! The above command works perfectly, and I’ll be more careful when naming next time.",
"username": "Abi_Scholz"
},
{
"code": "",
"text": "I prefer to use getCollection in the code. I use the short-cut only in mongosh. So most of the time, special names, like names with dash or names with space, are not an issue.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Aggregation $out stage creates collection, all queries return TypeError: db.collection.. is not a function | 2023-05-03T20:42:34.710Z | Aggregation $out stage creates collection, all queries return TypeError: db.collection.. is not a function | 823 |
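A side note on the materialized-view pattern in the thread above: once the name collision is out of the way, such a view is usually refreshed by re-running the pipeline on a schedule with $merge instead of $out, so existing documents are upserted rather than the whole collection being replaced. This is only a sketch; the source collection name is a placeholder and the grouping stages are the ones from the thread, elided here.

```js
// Sketch: refresh the materialized view by swapping the final $out for a $merge.
db.jobs.aggregate([
  // ...the same $addFields / $sort / $group stages as above...
  { $merge: {
      into: "job_stats",         // a name that does not collide with the db.stats() helper
      on: "_id",                 // the group key doubles as the merge key
      whenMatched: "replace",
      whenNotMatched: "insert",
  } },
]);
```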
|
[
"atlas-cluster"
] | [
{
"code": "",
"text": "\nimage1362×675 33.5 KB\n\nwhen I try to create new cluster on free tier it shows this error. How to resolve and create free tier cluster? Anyone have idea. Thanks in advance",
"username": "Dev_Vendhan"
},
{
"code": "",
"text": "Hi @Dev_Vendhan,I would recommend you contact the Atlas in-app chat support team regarding this.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | I am unable to create Free Tier Cluster | 2023-05-03T17:39:23.738Z | I am unable to create Free Tier Cluster | 721 |
|
null | [
"aggregation"
] | [
{
"code": "",
"text": "I’m doing some conversion from Elasticsearch to Atlas FTS and I’ve just hit a feature I’m not sure exists yet in mongo in a streamlined way. In ES you are able to use aggregators in a “global” context meaning they are NOT influenced by the search query. This is useful in cases where you want to return search results that ARE refined by search criteria, and aggregations that are NOT influenced by the same filters within a single query.IE I provide a search term of “Food” which returns results that contain the term “Food” in their name, as well as facet values that contain the term “Food” (Not faceted values from the results with food in their name).The only workaround I can identify atm for such a use case is by sending 2 separate requests. One for only results, and one using $searchMeta to only get the facets so that I would have control over the filtering context of said facets.Hope this makes sense, any feedback appreciated ",
"username": "Luke_Snyder"
},
{
"code": "",
"text": "Hi @Luke_Snyder In ES you are able to use aggregators in a “global” context meaning they are NOT influenced by the search query.Just to clarify - Would the following elastic search documentation be correct with regards to the elasticsearch global aggregations you had mentioned?The only workaround I can identify atm for such a use case is by sending 2 separate requests. One for only results, and one using $searchMeta to only get the facets so that I would have control over the filtering context of said facets.Would you mind sharing some sample documents as well as the output you achieved using this workaround? This will give me a better idea of what you’re after. Please redact any sensitive or personal information before posting here.Look forward to hearing from you!Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n name: \"Leanne Smith\"\n bio: \"Test Bio\",\n interests: [\"Soccer\", \"Food\"]\n}\n\n{\n name: \"Mike Rob\"\n bio: \"Text goes here\",\n interests: [\"Leadership\"]\n}\n{\n \"$search\": {\n \"compound\": {\n \"should\": [\n {\n \"autocomplete\": {\n \"query\": \"lea\",\n \"path\": \"name\"\n }\n },\n {\n \"text\": {\n \"query\": \"lea\",\n \"path\": \"bio\"\n }\n }\n ],\n \"minimumShouldMatch\": 1\n }\n }\n }\n[\n {\n \"$searchMeta\": {\n \"facet\": {\n \"operator\": {\n \"compound\": {\n \"filter\": [],\n \"should\": [],\n \"minimumShouldMatch\": 0\n }\n },\n \"facets\": {\n \"interests\": {\n \"type\": \"string\",\n \"path\": \"interests\",\n \"numBuckets\": 100\n }\n }\n }\n }\n },\n {\n \"$facet\": {\n \"facets\": [\n {\n \"$replaceRoot\": {\n \"newRoot\": {\n \"interests\": {\n \"$filter\": {\n \"input\": \"$facet.interests.buckets\",\n \"as\": \"interest\",\n \"cond\": {\n \"$regexMatch\": {\n \"input\": \"$$interest._id\",\n \"regex\": \"([Ll][Ee][Aa]).*|([Ll][\\\\. ]*[Ee][\\\\. ]*[Aa][\\\\. ]*).*|(.*[^a-zA-Z0-9][Ll][Ee][Aa].*)\",\n \"options\": \"i\"\n }\n }\n }\n }\n }\n }\n },\n {\n \"$limit\": 1\n }\n ]\n }\n }\n]\n\"hits\": [\n {\n name: \"Leanne Smith\"\n bio: \"Test Bio\",\n interests: [\"Soccer\", \"Food\"]\n }\n],\n\"facets\": {\n \"interests\": {\n \"Leadership\": 1\n }\n}\ninterestsLeadership",
"text": "Yes that is the correct elastic documentation Basically allows you to do the faceting on the entire index as a whole without being subjected to the filters present in the “search” portion of the request.Below are some examples of the workaround and what the results end up looking like. I’ve simplified the example and removed any extraneous stuff out of it so it might not look like it works 1:1 to the code im providing you.Sample Docs:Sample $search request:Sample $searchMeta request:Sample Output:So, the big issue is that the autocomplete and text clauses from the $search request cannot exist in the same request that I’m faceting in. This would limit my faceted results to ONLY the documents that meet the criteria of those clauses, meaning I would never hit the interests value of Leadership.The workaround allows me to retrieve facets for all documents or for a subset of filters that don’t include the search criteria. Then I utilize regex filtering to supply the “matches”. Although, I’m realizing now I can probably improve the second request by just making the values searchable instead of using the regex to scan the results.Hope this makes sense, please let me know if that provides clarity.",
"username": "Luke_Snyder"
},
{
"code": "\"hits\": [\n {\n name: \"Leanne Smith\"\n bio: \"Test Bio\",\n interests: [\"Soccer\", \"Food\"]\n }\n],\n\"facets\": {\n \"interests\": {\n \"Leadership\": 1\n }\n}\n$searchMeta$facetdb>db.search.aggregate(\n{\n '$searchMeta': {\n facet: {\n operator: { autocomplete: { query: 'lea', path: 'interests' } },\n facets: {\n interests: { type: 'string', path: 'interests', numBuckets: 100 }\n }\n }\n }\n})\n{\n count: { lowerBound: Long(\"1\") },\n facet: {\n interests: { buckets: [ { _id: 'Leadership', count: Long(\"1\") } ] }\n }\n}\n",
"text": "Sample Output:That makes sense - thanks for clarifying and providing all those details. I also assume your sample output is the result of the 2 individual requests combined together but please correct me if I am wrong here.Although, I’m realizing now I can probably improve the second request by just making the values searchable instead of using the regex to scan the results.Would something like below work for you as an alternative to the $searchMeta and $facet aggregation pipeline you provided? (i.e. your second request):Output:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "\nLuke_Snyder\n1d\nYes that is the correct elastic documentation :+1: Basically allows you to do the faceting on the entire index as a whole without being subjected to the filters present in the “search” portion of the request.\n\nBelow are some examples of the workaround and what the results end up looking like. I’ve simplified the example and removed any extraneous stuff out of it so it might not look like it works 1:1 to the code im providing you.\n\nSample Docs:\n\n{\n name: \"Leanne Smith\"\n bio: \"Test Bio\",\n interests: [\"Soccer\", \"Food\", \"Basket Weaving\"],\n sports: [\"Basketball\"],\n languages: [\"English\", \"Spanish\"]\n}\n\n{\n name: \"Mike Rob\"\n bio: \"Text goes here\",\n interests: [\"Leadership\"],\n sports: [\"Hockey\"],\n languages: [\"Bavarian\"]\n}\n{\n '$searchMeta': {\n facet: {\n operator: { \n compound: {\n should: [\n { autocomplete: { query: 'ba', path: 'interests' } },\n { autocomplete: { query: 'ba', path: 'languages' } },\n { autocomplete: { query: 'ba', path: 'sports' } }\n ]\n }\n },\n facets: {\n interests: { type: 'string', path: 'interests', numBuckets: 100 },\n languages: { type: 'string', path: 'languages', numBuckets: 100 },\n sports: { type: 'string', path: 'sports', numBuckets: 100 }\n }\n }\n }\n}\nfacet: {\n interests: { buckets: [\n { _id: 'Leadership', count: Long(\"1\") } ,\n { _id: 'Basket Weaving', count: Long(\"1\") },\n { _id: 'Soccer', count: Long(\"1\") } ,\n { _id: 'Food', count: Long(\"1\") } \n ] },\n sports: { buckets: [\n { _id: 'Basketball', count: Long(\"1\") },\n { _id: 'Hockey', count: Long(\"1\") } \n ] },\n languages: { buckets: [\n { _id: 'Bavarian', count: Long(\"1\") } ,\n { _id: 'English', count: Long(\"1\") } ,\n { _id: 'Spanish', count: Long(\"1\") } \n ] },\n }\n",
"text": "Correct, the output I supplied is a combination of the 2 requests which we return as a single object to the user.The suggestion you provided would work when used on a single facet field, but we are often time supplying facet results for up to 10-15 fields in a single request. So, if you adjusted your code for that, the operator would end up being a compound operarator with a bunch of should conditions spanning the various fields, and the results would be unidentifiable as to which facet bucket they belong in. So for example:I would expect the output to end up looking like this, which contains ALL values on the matched documents for the faceted fields. Since the autocomplete is just narrowing down the document matches and the facets are returned based on the values contained in those docs. With the regex, the actual filtering is occurring on the FACET VALUES themselves after they’ve been returned. I believe that is why I did it the way I did. Please correct me if I’m wrong.",
"username": "Luke_Snyder"
},
{
"code": "autocomplete",
"text": "The suggestion you provided would work when used on a single facet field, but we are often time supplying facet results for up to 10-15 fields in a single request. So, if you adjusted your code for that, the operator would end up being a compound operarator with a bunch of should conditions spanning the various fieldsAh yes, there is currently a feedback post with regards to this portion which you can vote for in regards to autocomplete and multiple fields.In this case you could create another feedback post in regards to having something like the global aggregation in elastic search. I will also check with the team if there’s any other workarounds that may help here in the meantime.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "$unionWith",
"text": "Hi Luke,From what I know, there isn’t anything directly available in Atlas search currently that mimics / matches the global aggregation in elastic search but you could create a feedback post which includes your use case details it in which others can vote for the feature.In terms of other workarounds perhaps $unionWith might work for you (it works using the same collection as well) but required MongoDB version 6.0 or higher.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Global Faceting | 2023-04-27T18:06:15.279Z | Global Faceting | 749 |
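To make the $unionWith suggestion above concrete, here is a minimal sketch (MongoDB 6.0 or later) that returns the filtered hits and the index-wide, unfiltered facet counts in one round trip. The collection name, index name, and paths are placeholders based on the earlier examples, not a confirmed recipe from the thread.

```js
// Sketch: combine filtered $search hits with "global" facet counts via $unionWith (6.0+).
db.people.aggregate([
  { $search: {
      index: "default",
      compound: {
        should: [
          { autocomplete: { query: "lea", path: "name" } },
          { text: { query: "lea", path: "bio" } },
        ],
        minimumShouldMatch: 1,
      },
  } },
  { $set: { resultType: "hit" } },
  { $unionWith: {
      coll: "people",
      pipeline: [
        { $searchMeta: {
            index: "default",
            facet: {
              facets: {
                interests: { type: "string", path: "interests", numBuckets: 100 },
              },
            },
        } },
        { $set: { resultType: "facets" } },
      ],
  } },
]);
```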
null | [] | [
{
"code": "# mongod --dbpath /var/tmp/mongotest/\n{\"t\":{\"$date\":\"2023-05-02T09:15:47.365Z\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20574, \"ctx\":\"thread1\",\"msg\":\"Error during global initialization\",\"attr\":{\"error\":{\"code\":5,\"codeName\":\"GraphContainsCycle\",\"errmsg\":\"Cycle in dependency graph: LoadICUData -> LoadIcuPrep -> default -> BeginExpressionRegistration -> addToExpressionParserMap_stdDevSamp -> LoadICUData\"}}}\n",
"text": "Hi, I’m maintaining some MongoDB ports for FreeBSD.I accomplished to compile 7.0-RC0. But on first run it crashes.This is all. No files are created in the dbpath. Any thoughts or advice on this? It happens on amd64 as well as on aarch64.",
"username": "R_K"
},
{
"code": "MONGO_INITIALIZER_GENERALbuildscripts/scons.py --cxx-std=20 --disable-warnings-as-errors --dbg=on --opt=off AR=llvm-ar VERBOSE=on --use-sasl-client --ssl CC=\"clang\" CPPPATH=\"/usr/local/include\" CXX=\"clang++\" LIBPATH=\"/usr/local/lib\" VARIANT_DIR=jem --libc++ --ninja --allocator=system\n_LIBCPP_ENABLE_CXX20_REMOVED_TYPE_TRAITS",
"text": "Thank you for your work on the port.I am trying to repro this locally but it will take time (small VM). That error usually comes incorrect usage of MONGO_INITIALIZER_GENERAL (and related functions) which want to define a DAG for a list of functions to call at startup. It does a topological sort so it is incompatible with a graph with cycles.I had to add define into SConstruct for _LIBCPP_ENABLE_CXX20_REMOVED_TYPE_TRAITS to fix an issue in ASIO with the recent upgrade to C++ 20 in the MongoDB source code.Thanks",
"username": "Mark_Benvenuto"
},
{
"code": "/wrkdirs/usr/ports/databases/mongodb70/work/mongo-r7.0.0-rc0/buildscripts/scons.py\t-C /wrkdirs/usr/ports/databases/mongodb70/work/mongo-r7.0.0-rc0 --cxx-std=20 --disable-warnings-as-errors --libc++ --runtime-hardening=on --use-system-icu --use-system-libunwind --use-system-pcre2 --use-system-snappy --use-system-stemmer --use-system-yaml --use-system-zlib --use-system-zstd -j2 AR=llvm-ar MONGO_VERSION=7.0.0-rc0 VERBOSE=on --use-sasl-client --ssl CC=\"cc\" CCFLAGS=\"-O2 -pipe -fstack-protector-strong -fno-strict-aliasing \" CPPPATH=\"/usr/local/include\" CXX=\"c++\" CXXFLAGS=\"-O2 -pipe -fstack-protector-strong -fno-strict-aliasing \" LIBPATH=\"/usr/local/lib\" LINKFLAGS=\" -fstack-protector-strong \" PKGCONFIGDIR=\"\" PREFIX=\"/usr/local\" destdir=/wrkdirs/usr/ports/databases/mongodb70/work/stage DESTDIR=/wrkdirs/usr/ports/databases/mongodb70/work/stage\n",
"text": "Hi, thanks for your quick reply.The command I’m using is:And added “# define ASIO_HAS_STD_INVOKE_RESULT” to src/third_party/asio-master/asio/include/asio/detail/config.hpp to make asio compile.In the meantime I found that mongod runs fine if I don’t use --use-system-* for various libraries. Currently trying to find which use-system was the cause. It runs at least if I disable all of them.",
"username": "R_K"
},
{
"code": "",
"text": "Succeeded in running mongod without errors by removing --use-system-icu from the build parameters.\nThe port is submitted: https://www.freshports.org/databases/mongodb70/.\nNow testing some of you parameters like --ninja and --allocator=system.What would be the benefits of using those settings?",
"username": "R_K"
},
{
"code": "",
"text": "I filed two issues: https://jira.mongodb.org/browse/SERVER-76814 (Fix mongodb compilation with libc++) and https://jira.mongodb.org/browse/SERVER-76813 (Fix --use-system-icu) for the issues you identified. The fix for --use-system-icu is in the ticket.In terms of the two options, --ninja is a useful for developers as it compiles faster then scons. For packaging, It is simpler to just use scons. The --allocator=system tells MongoDB to use system allocator instead of the vendored GPerfTools/tcmalloc. Since FreeBSD uses jemalloc, which is comparable in performance too tcmalloc, you save a tiny bit of binary size by just using it.",
"username": "Mark_Benvenuto"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb 7.0 RC0 Cycle in dependency graph: LoadICUData | 2023-05-02T09:21:01.823Z | Mongodb 7.0 RC0 Cycle in dependency graph: LoadICUData | 868 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hi, I have problem with connecting to MongoDB, on connecting this error is throwing:MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. —> System.MissingMethodException: void System.Security.Cryptography.Rfc2898DeriveBytes…ctor(string,byte,int,System.Security.Cryptography.HashAlgorithmName)I’m using .NET Framework 4.8.1 and MongoDB 2.19.1.\nHow I can fix this?",
"username": "LulaczTV"
},
{
"code": "MissingMethodExceptionSystem.Security.Cryptography.Rfc2898DeriveBytesRfc2898DeriveBytescsproj",
"text": "Hi, @LulaczTV,Welcome to the MongoDB Community Forums. I understand that you’re receiving a MissingMethodException related to System.Security.Cryptography.Rfc2898DeriveBytes. Taking a look at the MSDN documentation on this particular constructor, it is part of .NET Framework 4.7.2, 4.8, and 4.8.1 as well as .NET Standard 2.1.Rfc2898DeriveBytes is used by both SCRAM-SHA-1 and SCRAM-SHA-256 authenticators. These authenticators are commonly used with MongoDB for challenge-response authentication when a username/password are provided.Please provide a self-contained repro of the issue including the csproj file so that we can investigate further.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": " var client = new MongoClient(\"mongodb://username:password@ip:port/?authMechanism=SCRAM-SHA-1&authSource=admin&connectTimeoutMS=1000&socketTimeoutMS=1000&serverSelectionTimeoutMS=1000\");\n //database\n var database = client.GetDatabase(\"test\");\n //collection\n var collection = database.GetCollection<PlayerModel>(\"playerdatas\");\n //Finding document with player SteamID\n var playerFilter = Builders<PlayerModel>.Filter.Eq(\"SteamID\", player.UserId);\n //here's problem\n var playerData = collection.Find(playerFilter).FirstOrDefault();\n",
"text": "Hi @James_Kovacs,I found out that problem is not on connecting but on finding document in collection.Here’s code where problem appear:if it’s needed, I can upload whole code on github.\nand here’s csproj file (I have no idea if I can send it here as file so i uploaded it)\nhttps://anonfiles.com/Pbt0R4obzc/4Site_Main_csproj",
"username": "LulaczTV"
},
{
"code": "link.xml",
"text": "Hi, @LulaczTV,Thank you for providing the code sample and csproj file. I can see from the csproj file that you are using Unity. Unity performs code stripping to remove unreachable code. Unfortunately it can sometimes get it wrong, especially when code is referenced dynamically at runtime. I would recommend disabling code stripping and see if the problem is resolved. If so, you can try adjusting the level of stripping. You can also create a link.xml file that preserves particular types, methods, and properties.Sincerely,\nJames",
"username": "James_Kovacs"
},
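For anyone else hitting this, a hedged sketch of what such a link.xml could look like; the assembly names are assumptions (they depend on the scripting backend and how the driver is packaged), so adjust them to wherever the stripped constructor actually lives:

```xml
<linker>
  <!-- Keep the driver assemblies and the crypto assembly that
       Rfc2898DeriveBytes lives in from being stripped by the Unity linker. -->
  <assembly fullname="MongoDB.Driver" preserve="all" />
  <assembly fullname="MongoDB.Bson" preserve="all" />
  <assembly fullname="System.Security.Cryptography.Algorithms" preserve="all" />
</linker>
```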
{
"code": "",
"text": "Hi,\nI don’t really know if I can use it because i’m making plugin to the unity game. I tried before to make link.xml but idk if I did it correctly.",
"username": "LulaczTV"
},
{
"code": "",
"text": "If you’re making a game plugin, I would typically recommend using Atlas App Services and/or the Realm .NET SDK rather than the MongoDB .NET/C# Driver.The driver is designed and intended for server-side apps. It would require embedding credentials with read/write access into your plugin. If you need to rotate credentials for any reason, you would have to update all your clients. It is also a security risk as a malicious attacker could extract those credentials from your binary and use them to directly access and modify your data.Using Atlas App Services and/or the Realm .NET SDK would allow you finer grained access control more appropriate for client-side apps. I strongly recommend investigating this approach for your plugin.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "alright, I will try that, thank you so much for replying and help!",
"username": "LulaczTV"
},
{
"code": "",
"text": "Realm looks good, but I have no idea how I can use that in my case. I need to connect database to my discord bot in JavaScript, to website in PHP and game plugin in c#, but I don’t really know from these docs how I can connect it together.",
"username": "LulaczTV"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB connection | 2023-04-28T19:31:49.832Z | MongoDB connection | 815 |
null | [
"python",
"crud",
"transactions"
] | [
{
"code": "client = MongoClient()\ndatabase = client['Eg']\ncollection = database['eg']\nstart = datetime.now()\n\ndf = pd.read_csv(\"eg.csv\")\ndf['_id'] = df[\"Factory_Id\"] + df[\"Order ID\"]\ndata = df.to_dict(orient=\"records\")\n\nrequests = []\nfor doc in data:\n filter = {'_id': doc['_id']}\n update = {'$set': doc}\n request = pymongo.UpdateOne(filter, update, upsert=True)\n requests.append(request)\n\nif requests:\n result = collection.bulk_write(requests)\n",
"text": "Hey, I am a new User of MongoDB. I am working with Large amount of Transaction data which is updated periodically after each day or week. I created my code for this use where i am using pymongo.UpdateOne but as i have large amount of data, I want to use UpdateMany. Here’s the snippet of my code:Here I am using UpdateOne for syncing but i want it sync by once and not use for-loop with the help of “UpdateMany”.",
"username": "Dheeraj_Sain"
},
{
"code": "coll.update_many({}, {\"$set\": {\"new\": 0}})\n",
"text": "As far as I can tell you are already using the API correctly. UpdateMany is only useful for applying a single update across multiple matching documents, for example adding a new field to all documents in the collection:Since your updates are scoped to a single _id UpdateOne (or ReplaceOne) is appropriate.",
"username": "Shane"
}
] | Syncing MongoDB Data | 2023-05-03T12:01:35.637Z | Syncing MongoDB Data | 668 |
null | [
"storage"
] | [
{
"code": "",
"text": "Hi,\nMongo DB is crashing which is causing the application to fail. When extracted mongo Db logs found below error 2022-09-11T00:36:21.977+0000 E STORAGE [WTJournalFlusher] WiredTiger (-28992) [1662856581:400819][7072:1995194080], WT_SESSION.log_flush: journal/WiredTigerLog.0000158373 handle-sync: FlushFileBuffers error: Not enough storage is available to process this command.2022-09-11T00:36:21.989+0000 I - [WTJournalFlusher] Invariant failure: s->log_flush(s, “sync=on”) resulted in status UnknownError: -28992: Not enough storage is available to process this command.\nThis is the second occurrence and After restarting application servers mongo started working as expected.what is the possible reason for this issue? and do we have any work around to avoid future occurences?Attaching logs for your reference.Thanks in advance!",
"username": "Mamatha_K"
},
{
"code": "2022-09-11T00:36:21.849+0000 I COMMAND [conn3987] insert meolutdb.rawDetection ninserted:40 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } 113ms\n2022-09-11T00:36:21.977+0000 E STORAGE [WTJournalFlusher] WiredTiger (-28992) [1662856581:400819][7072:1995194080], WT_SESSION.log_flush: journal/WiredTigerLog.0000158373 handle-sync: FlushFileBuffers error: Not enough storage is available to process this command.\n\n2022-09-11T00:36:21.989+0000 I - [WTJournalFlusher] Invariant failure: s->log_flush(s, \"sync=on\") resulted in status UnknownError: -28992: Not enough storage is available to process this command.\n\n at src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_session_cache.cpp 203\n2022-09-11T00:36:25.065+0000 I COMMAND [conn4275] update meolutdb.eMSStatus query: { _id: \"421d0553-e28b-4feb-a48e-7bfb6ed129fc\" } update: { _id: \"421d0553-e28b-4feb-a48e-7bfb6ed129fc\", _class: \"com.emsgt.lut.il.dal.api.EMSStatus\", subsystem_id: 3, service_id: 10005, service_description: \"Motor Temperature\", timestamp: 1662856584631000000, status: \"OK\", errors: \" 34.00 celsius\", subsystem_type: \"Antenna\" } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:6 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 113ms\n2022-09-11T00:36:26.505+0000 I COMMAND [conn3965] insert meolutdb.rawDetection ninserted:40 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } 119ms\n2022-09-11T00:36:26.632+0000 I COMMAND [conn4001] insert meolutdb.rawDetection ninserted:40 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } 156ms\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe ...\\src\\mongo\\util\\stacktrace_windows.cpp(174) mongo::printStackTrace+0x43\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe ...\\src\\mongo\\util\\log.cpp(136) mongo::logContext+0xa8\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe ...\\src\\mongo\\util\\assert_util.cpp(164) mongo::invariantOKFailed+0x14c\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_session_cache.cpp(203) mongo::WiredTigerSessionCache::waitUntilDurable+0x2bd\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe ...\\src\\mongo\\db\\storage\\wiredtiger\\wiredtiger_kv_engine.cpp(97) mongo::WiredTigerKVEngine::WiredTigerJournalFlusher::run+0x1a8\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe ...\\src\\mongo\\util\\background.cpp(152) mongo::BackgroundJob::jobBody+0x1b1\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe c:\\program files (x86)\\microsoft visual studio 12.0\\vc\\include\\thr\\xthread(188) std::_LaunchPad<std::_Bind<0,void,std::_Bind<1,void,std::_Pmf_wrap<void (__cdecl mongo::dur::JournalWriter::*)(void) __ptr64,void,mongo::dur::JournalWriter>,mongo::dur::JournalWriter * __ptr64 const> > >::_Go+0x1c\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe f:\\dd\\vctools\\crt\\crtw32\\stdcpp\\thr\\threadcall.cpp(28) _Call_func+0x14\n2022-09-11T00:36:50.126+0000 I CONTROL 
[WTJournalFlusher] mongod.exe f:\\dd\\vctools\\crt\\crtw32\\startup\\threadex.c(376) _callthreadstartex+0x17\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] mongod.exe f:\\dd\\vctools\\crt\\crtw32\\startup\\threadex.c(354) _threadstartex+0x102\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] kernel32.dll BaseThreadInitThunk+0xd\n2022-09-11T00:36:50.126+0000 I CONTROL [WTJournalFlusher] \n2022-09-11T00:36:50.126+0000 I - [WTJournalFlusher] \n\n***aborting after invariant() failure\n",
"text": "",
"username": "Mamatha_K"
},
{
"code": "2022-09-11T00:36:21.977+0000 E STORAGE [WTJournalFlusher] WiredTiger (-28992) [1662856581:400819][7072:1995194080], WT_SESSION.log_flush: journal/WiredTigerLog.0000158373 handle-sync: FlushFileBuffers error: Not enough storage is available to process this command.\n",
"text": "Hello @Mamatha_K and welcome to the MongoDB community forums. This log entry seems to be stating that your disk is full and that the process cannot write to the disk. Have you verified that amount of free space you have on the drive you’re writing to?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "@Doug_Duncan : Yes we have 800gb in D drive, 40-50gb is C drive. And mongo DB is around 720gb.",
"username": "Mamatha_K"
},
{
"code": "<dbdata path>/journal/WiredTigerLog.*",
"text": "It looks like you should have enough free space on your drives, but I’m not sure if the numbers given are the amount free or the amount total. You also don’t state where MongoDB data is located.If you have 800G free on the D drive and your MongoDB data is also on the D drive, then you definitely have more than enough free space for most things. You might run into problems if you were to back up to that drive or do some other admin-y type things that would require you to basically copy the data to that drive.If you have 800G total on the D drive and your MongoDB data is also on the D drive, then you might still have enough space for running the database. I would be concerned that I was sitting at 90% usage on the drive however, especially if your database is growing in size. Once you run out of space (as your error indicates) the system will shutdown to mitigate any potential corruption.If you have 40 - 50G free on the C drive and your MongoDB data is also on the C drive, then again you might have enough space for running the database. I would caution again that this is not a lot of space compared to the size of the database.If you have 40 - 50G total on the C drive then your MongoDB data will not be there because it’s just not possible. You can take a look at the <dbdata path>/journal/WiredTigerLog.* files to see how big in size they are on average as that might give more insight into what’s going on.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "WiredTigerLog.*Hi Doug_Duncan : i managed to extract WiredTigerLog.* logs but unfortunately it’s not in user readable format Do we have any modes to parse it?",
"username": "Mamatha_K"
},
{
"code": "",
"text": "\nMicrosoftTeams-image (1)1783×802 10.2 KB\nAttaching memory availability report where mongo is installed for reference.",
"username": "Mamatha_K"
},
{
"code": "",
"text": "\nMicrosoftTeams-image1269×746 15.5 KB\n",
"username": "Mamatha_K"
},
{
"code": "",
"text": "Hi @Mamatha_K,Please share more details on your environment including:specific version of MongoDB serverversion of Windowsfilesystem used for your MongoDB data volume. If NTFS, are you using compression?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "HI @Stennie_X ,MongoDB version used is : 3.2.6\nWindows : Windows Server 2008 R2\nFIle system used is : NTFS, compression is not used. We are using a raid array\nwith two logical drives.\non the disk drive where the mongo data is stored the compression option is not been selected.",
"username": "Mamatha_K"
},
{
"code": "",
"text": "i managed to extract WiredTigerLog.* logs but unfortunately it’s not in user readable formatYou don’t need to look at the contents. My comment was to look to see how big the files were and to see if creating a new file of that size would cause problems for the process.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Mamatha_K,MongoDB 3.2.6 was released in April, 2016 and the 3.2 series reached end of life in September, 2018.As a first step I recommend upgrading to the final 3.2.22 server version as there have been a few years of bug fixes and improvements since the version you are using. Minor/patch releases do not introduce any backward breaking changes or incompatibilities so they are very straightforward.I would also consider planning an upgrade to the latest release series supported for your O/S. Per the MongoDB Platform Support Matrix, MongoDB 4.2 is the last server release series to support Windows Server 2008 R2 (which reached end of life in Jan 2015).If your NTFS volume appears to have sufficient free space but writes are not successful, one option to check might be disk fragmentation and filesystem limits. There are some underlying filesystem limits that might result in large files being unable to grow, similar to SERVER-32808 where the problem was more evident on a compressed NTFS volume.However, since you are using very old and unsupported software versions I would start by moving to later versions as you may be encountering issues which have since been resolved.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,\nThank you for the solution .\nI am planning to upgrade to 3.2.22 version on windows server 2008 R2. I have downloaded zip file (mongodb-win32-x86_64-2008plus-3.2.22.zip) .\nWill it work if i just replace the content? or do we need to follow any installation process?\nPlease help.",
"username": "Mamatha_K"
},
{
"code": "",
"text": "I am getting this error on running mongod on my kali os:{“t”:{“$date”:“2023-05-03T12:38:39.224+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“thread1”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2023-05-03T12:38:39.225+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“thread1”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:17},“isInternalClient”:true}}}\n{“t”:{“$date”:“2023-05-03T12:38:39.226+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“thread1”,“msg”:“Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2023-05-03T12:38:39.227+05:30”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread1”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“namespace”:“config.tenantMigrationDonors”}}\n{“t”:{“$date”:“2023-05-03T12:38:39.227+05:30”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread1”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“namespace”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2023-05-03T12:38:39.227+05:30”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“thread1”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“ShardSplitDonorService”,“namespace”:“config.tenantSplitDonors”}}\n{“t”:{“$date”:“2023-05-03T12:38:39.227+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“thread1”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2023-05-03T12:38:39.228+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:18499,“port”:27017,“dbPath”:“/data/db”,“architecture”:“64-bit”,“host”:“kali”}}\n{“t”:{“$date”:“2023-05-03T12:38:39.228+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“6.0.1”,“gitVersion”:“32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b”,“openSSLVersion”:“OpenSSL 3.0.7 1 Nov 2022”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{“$date”:“2023-05-03T12:38:39.228+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“PRETTY_NAME=\"Kali GNU/Linux Rolling\"”,“version”:“Kernel 6.0.0-kali3-amd64”}}}\n{“t”:{“$date”:“2023-05-03T12:38:39.228+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{}}}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“E”, “c”:“CONTROL”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“NonExistentPath: Data directory /data/db not found. 
Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file.”}}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“initandlisten”,“msg”:“Stepping down the ReplicationCoordinator for shutdown”,“attr”:{“waitTimeMillis”:15000}}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“REPL”, “id”:4794602, “ctx”:“initandlisten”,“msg”:“Attempting to enter quiesce mode”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“-”, “id”:6371601, “ctx”:“initandlisten”,“msg”:“Shutting down the FLE Crud thread pool”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“initandlisten”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“initandlisten”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:20562, “ctx”:“initandlisten”,“msg”:“Shutdown: going to close listening sockets”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“initandlisten”,“msg”:“Shutting down the global connection pool”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4784906, “ctx”:“initandlisten”,“msg”:“Shutting down the FlowControlTicketholder”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“-”, “id”:20520, “ctx”:“initandlisten”,“msg”:“Stopping further Flow Control ticket acquisitions.”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4784918, “ctx”:“initandlisten”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“initandlisten”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“initandlisten”,“msg”:“Shutting down the ServiceEntryPoint”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, “ctx”:“initandlisten”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“initandlisten”,“msg”:“Shutting down the HealthLog”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“initandlisten”,“msg”:“Shutting down the TTL monitor”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:6278511, “ctx”:“initandlisten”,“msg”:“Shutting down the Change Stream Expired Pre-images Remover”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“initandlisten”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“-”, “id”:4784931, “ctx”:“initandlisten”,“msg”:“Dropping the scope cache for shutdown”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“initandlisten”,“msg”:“Now exiting”}\n{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“initandlisten”,“msg”:“Shutting down”,“attr”:{“exitCode”:100}}Any solution ??",
"username": "Neer_Amrutia"
},
{
"code": "",
"text": "Your dbpath dir is missing\nCreate it and you should be able to start the mongod",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Neer_Amrutia ,\nAs mentioned from @Ramachandra_Tummala, you Need to create a dir for data.\nHere Is your error{“t”:{“$date”:“2023-05-03T12:38:39.232+05:30”},“s”:“E”, “c”:“CONTROL”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file.”}}Regards",
"username": "Fabio_Ramohitaj"
}
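A minimal sketch of the fix both replies describe, assuming the default /data/db path and that the user starting mongod should own it; alternatively, point --dbpath (or storage.dbPath in the config file) at a directory that already exists:

```sh
# create the missing dbPath and make it writable by the user running mongod
sudo mkdir -p /data/db
sudo chown -R "$(whoami)" /data/db
mongod --dbpath /data/db
```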
] | Mongo DB crashing with an error: Not enough storage is available to process this command | 2022-09-14T07:23:40.959Z | Mongo DB crashing with an error: Not enough storage is available to process this command | 5,347 |
null | [] | [
{
"code": "",
"text": "I have an existing Atlas account. I want to use it for MongoDB access. I am attempting to work on 2 different computers to make following the tutorial easy. As a result, I can’t pass the Check.Any advice?",
"username": "Aubin_Bakana"
},
{
"code": "",
"text": "Hello @Aubin_Bakana ,Welcome to The MongoDB Community Forums! I am attempting to work on 2 different computers to make following the tutorial easyCan you please share the tutorial that you are following?I can’t pass the Check.What is the error that you are getting?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Attempting to connect CLI to my existing cluster | 2023-05-02T08:46:40.033Z | Attempting to connect CLI to my existing cluster | 359 |
null | [] | [
{
"code": "",
"text": "I’m getting graphql error on first start of my app\nError is:\n“cannot find app using Client App ID”\nAnd when I refresh the app login again it works.Configurations:\nNode js 14,\nelectron 12.0.0\nrealm 10.10.1 (Node js SDK)",
"username": "Abhishek_Matta"
},
{
"code": "const credentials = Realm.Credentials.jwt(token);\n console.log('credentials--->>>>', credentials)\n await app.logIn(credentials);\n return app.currentUser.accessToken\n",
"text": "Here’s my code",
"username": "Abhishek_Matta"
},
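No resolution was posted in this thread, but the error usually points at the client-side App initialization rather than the login call itself. A minimal sketch of the piece worth double-checking against the Client App ID shown in the App Services UI (the id below is a placeholder):

```js
const Realm = require("realm");

// The id must be the Client App ID from the App Services UI
// (e.g. "myapp-abcde"), not the internal ObjectId-style id.
const app = new Realm.App({ id: "myapp-abcde" });
```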
{
"code": "",
"text": "same here{ error : cannot find app using Client App ID }\nthe data API was deployed in region AWS, Paris (EU_WEST_3)\ngenerated an APIKEY\ncannot run any Read or Write request\nI am testing the API with Postman for the moment. No source code to displayhttps://data.mongodb-api.com/app/62a5dfb989fb0cb27658cd06/endpoint/data/v1/What can you do to troubleshoot the issue ?\nOn my end, there is not much that I can do. Any help is appreciated",
"username": "Sandy_L"
},
{
"code": "",
"text": "I am having the same problem when making a request from a web app to https://realm.mongodb.com/api/client/v2.0/app/[APP_ID]/graphql.The response is “cannot find app using Client App ID ‘[APP_ID]’”It had been working for months without a problem. Storing data through webhooks is still working.Any ideas?",
"username": "Diego_V"
},
{
"code": "",
"text": "I’m having the same issue.Did you guys find anything that works?I’m on the free tier, using M0.Thanks",
"username": "Zanek_shaw"
},
{
"code": "",
"text": "Same here, please help!",
"username": "Carlos_Alvidrez"
}
] | Realm graphql throws error: cannot find app using Client App ID | 2022-01-06T04:56:59.213Z | Realm graphql throws error: cannot find app using Client App ID | 4,803 |
null | [
"app-services-cli"
] | [
{
"code": "",
"text": "I’m trying to deploy a Realm App via the CLI and my app is tied to an M2 cluster and only has 5 database triggers in it. And here’s the error I’m getting when I try to deploy via the CLI:\nerror validating trigger: maximum database trigger count for cluster size=‘M0’ is 5Can anyone from MongoDB and the App Services team help investigate? I can provide app ID to someone who can check this out.",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Hi @Lukas_deConantseszn1,Was this cluster was upgraded from an M0 after it was linked to the app? We do have a known issue with upgrades to linked clusters not propagating back to your app, so it would still think it’s linked to an M0. For now you can get around this by unlinking + re-linking the data source, but let me know if that is not an acceptable solution in your case.",
"username": "Kiro_Morkos"
}
] | [HELP] realm-cli deploy is giving nonsense error about maximum database trigger | 2023-05-03T00:51:14.600Z | [HELP] realm-cli deploy is giving nonsense error about maximum database trigger | 734 |
null | [
"ops-manager"
] | [
{
"code": "",
"text": "Hello,I’m trying to add existing mongodb replicasets to a rencently installed Ops Manager, but I’m obtaining always the same error:Error adding automation.\nPORT 27017: Unknown MongoDB version with git version: 72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7, name 4.4.6Mongodb is running on that port and the communication between OpsManager server and the deployment is OK through 27017 port. All server are reachable from the others. The user login is correct.I’m completely stuck on this.Can anyone help?Ops Manager version: 5.0.4.100.20211103T1316Z\nOps Manager MongoDB version: 4.4.8Deployments MongoDB versions: 4.4.8 and 4.4.6",
"username": "Guillermo_San_Juan_Corral"
},
{
"code": "",
"text": "Hi @Guillermo_San_Juan_Corral welcome to the community!Since Ops Manager is part of the Enterprise Advanced subscription, I think the best way forward is to contact support so this can be resolved (Enterprise Advanced subscription includes access to MongoDB support). This is because a lot of Ops Manager & automation issues are tied to the environment, and a deeper knowledge is needed to be able to conclusively get this fixed.If you’re evaluating Ops Manager and would like further help with this, please DM me and I can connect you to the right person.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Finally I have opened a case on Mongo Support as you suggested.Thanks for your time!",
"username": "Guillermo_San_Juan_Corral"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error attaching an existing deployment to OpsManager | 2023-04-28T12:20:47.783Z | Error attaching an existing deployment to OpsManager | 952 |
null | [] | [
{
"code": "",
"text": "Hello there.I have a field on my collections that needs the information of _id. The problem is that _id is auto generated on server so I can’t populate this field before saving it first. Is there a way to achieve this without making 2 calls to the server? (1 for saving and then 1 for update)Just to make a simple analogy, in an sql env, I would create a trigger afterInsert and then populate this field.Thanks in advanced.",
"username": "Rafael_Fogel"
},
{
"code": "",
"text": "You can always generate a value yourself (e.g. a UUID string) and set it as the _id value in db.",
"username": "Kobe_W"
},
{
"code": "",
"text": "I know that, but I think the best way to guarantee uniqueness is leaving this job to the server, right?",
"username": "Rafael_Fogel"
},
{
"code": "",
"text": "not really, use UUID is good enough,",
"username": "Kobe_W"
},
{
"code": "const _id = new ObjectId() ;\n",
"text": "Actually, it is the driver that generates the _id, not the server.Nothing stops you from creating the _id withand then use this _id for your other field.",
"username": "steevej"
},
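A minimal Node.js driver sketch of the pattern described above; collection is assumed to be an existing Collection handle and selfReference is a placeholder field name:

```js
const { ObjectId } = require("mongodb");

const _id = new ObjectId();       // generated client-side by the driver
await collection.insertOne({
  _id,                            // used as the document key
  selfReference: _id,             // and reused in the other field
});
```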
{
"code": "",
"text": "Hi Steeve, thanks for your reply.Ok, so now i have 2 questions:EDIT:I have been reading and considering that process and machine ids are used in the composition of objectId, it shouldn’t cause collisions. Therefore, calling new ObjectId() should be enough for my problem.Thanks for the replys",
"username": "Rafael_Fogel"
}
] | Refer autogenerated _id on another field on insert | 2023-05-02T16:47:11.529Z | Refer autogenerated _id on another field on insert | 666 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "runCommandrunCommandrunCommandtry {\n const collExists = (await this._db.listCollections().toArray()).some(\n (col) => col.name === this._collectionName\n );\n if (collExists) {\n return this._db.runCommand({\n collMod: this._collectionName,\n ...this._validator\n });\n }\n\n return this._db.createCollection(this._collectionName, this._validator);\n} catch (err) {\n throw new Error(err.message);\n}\n",
"text": "I am currently using MongoDB 4.13 and need to update the validator of an existing collection. However, it looks like the runCommand method is not recommended for this task in MongoDB 4.13.I am not using any ORMs like Mongoose, and I am wondering what the best approach is to update the validator of an existing collection in MongoDB 4.13.Does anyone have experience with updating the validator of an existing collection in MongoDB 4.13 without using the runCommand method? If so, could you share an example of how to do this?In case the runCommand method is the only solution, do you recommend implementing it like this?Thank you in advance for any help you can provide!",
"username": "gaming_state"
},
{
"code": "",
"text": "can somebody assist me please?",
"username": "gaming_state"
}
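One possible shape for this with the Node.js driver, sketched under the assumption that runCommand in the snippet corresponds to the driver's generic Db.command() helper and that validator is a plain validation document; it is a starting point, not the only way:

```js
// db: a Db instance from the Node.js driver
// collectionName: name of the target collection
// validator: e.g. { $jsonSchema: { ... } }
async function upsertValidator(db, collectionName, validator) {
  const exists = await db.listCollections({ name: collectionName }).hasNext();

  if (exists) {
    // collMod is a database command, so it goes through Db.command()
    return db.command({ collMod: collectionName, validator });
  }
  return db.createCollection(collectionName, { validator });
}
```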
] | Updating validator of existing collection in MongoDB 4.13 | 2023-05-01T21:59:19.177Z | Updating validator of existing collection in MongoDB 4.13 | 454 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi,I have two types of users, user A who are basically customer and user B who would be serving the As.I am wondering if i should have different collections to store this data or have it in the same collection?",
"username": "Punit_Pal"
},
{
"code": "",
"text": "Hey @Punit_Pal,Welcome to the MongoDB Community Forums! Without knowing the nature of the relationship between A and B(one-to-many, one-to-one, or many-to-many), as well as your common queries, it is very hard to give you a definitive answer. A general rule of thumb while doing schema design in MongoDB is that you should design your database in a way that the most common queries can be satisfied by querying a single collection, even when this means that you will have some redundancy in your database. Thus, it may be beneficial to work from the required queries first, making it as simple as possible, and let the schema design follow the query pattern.I would suggest you experiment with multiple schema design ideas. You can use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.In general, favor denormalization when:and favor normalization when:Do note that these points are just general ideas and not strict rules. I’m sure there are exceptions and counter examples to any of the points above, but generally, it’s more about designing the schema according to what will suit the use case best (i.e. how the data will be queried and/or updated), and not how the data is stored (unlike in most tabular databases where 3NF is considered “best” for most things).You can further read the following documentation to cement your knowledge of Schema Design in MongoDB.\nData Model Design\nFactors to consider when data modeling in MongoDB Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "They can have Many-to-Many relationship",
"username": "Punit_Pal"
},
{
"code": "",
"text": "Hey @Punit_Pal,Thanks for letting me know. The above pointers should help you out a lot while modeling your many-to-many relationship. I’m also linking some conversations from the forums that you might find useful:Hope this helps too! Regards,\nSatyam",
"username": "Satyam"
},
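To make the trade-off concrete for a many-to-many case, two rough shapes the advice above could take; all field names here are placeholders:

```js
// Option A: one "users" collection with a discriminator field
// (handy when A and B share most fields and are usually queried together)
{ _id: ObjectId("..."), kind: "customer", name: "Alice" }
{ _id: ObjectId("..."), kind: "agent",    name: "Bob" }

// Option B: separate collections, with the many-to-many kept as references
// customers: { _id, name, agentIds:    [ObjectId, ...] }
// agents:    { _id, name, customerIds: [ObjectId, ...] }
```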
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to design schema for different kinds of users? | 2023-05-03T04:54:47.022Z | How to design schema for different kinds of users? | 690 |
null | [
"python"
] | [
{
"code": "db.adminCommand({\n \"setDefaultRWConcern\" : 1,\n \"defaultWriteConcern\" : {\n \"w\" : 1\n },\n \"defaultReadConcern\" : {\n \"level\" : \"available\"\n }\n})\ncurs = db_col.with_options(write_concern=WriteConcern(w=0)).insert_many(data)",
"text": "Hi,I have checked docs and done many searches on this topic and thought I had it all figured but it just still doesn’t work out.I have 16 shards in my cluster each is a PSAI have set the default Write Concern to 1I am insert_many with Python drivers and making sure to set the WriteConcern to 0 as with bulk operations.curs = db_col.with_options(write_concern=WriteConcern(w=0)).insert_many(data)Yet still when I shut down an S member of any shard the whole insert stops working until I bring the S back up again.Any idea what else I can do?",
"username": "John_Torr"
},
{
"code": "",
"text": "the whole insert stops workingwhat you mean by this?\ntimeout? exception? blocking forever?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Majority data bearing nodes should be up to acknowledge writes but Arbiter is a non data bearing\nCheck this link",
"username": "Ramachandra_Tummala"
},
{
"code": "'numYields': 0,\n 'waitingForLatch': {'timestamp': '2023-05-03T08:00:09.812+00:00',\n 'captureName': 'FlowControlTicketHolder::_mutex'},\n 'locks': {},\n 'waitingForLock': False,\n 'lockStats': {},\n 'waitingForFlowControl': True,\n 'flowControlStats': {'acquireWaitCount': 1,\n",
"text": "I probably should have mentioned it is MongoDB 6.0.4\nSo it is supposed to work when Write Concern w=1 even with only PA up and S down, Isn’t it?From the client the queries just stop and sit there not doing anything until S is back up again.Here is a snippet from the current_ops which looks interesting",
"username": "John_Torr"
}
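The waitingForFlowControl / FlowControlTicketHolder entries in that currentOp snippet suggest flow control is throttling writes because the majority commit point cannot advance while the secondary is down (the arbiter does not count as data-bearing). A small mongosh sketch for confirming that on the affected shard's primary; disabling flow control is only a stop-gap, and the PSA caveats linked above still apply:

```js
// is flow control enabled on this node?
db.adminCommand({ getParameter: 1, enableFlowControl: 1 })

// is flow control the bottleneck right now?
db.serverStatus().flowControl

// stop-gap only: let writes proceed even though the majority point lags
db.adminCommand({ setParameter: 1, enableFlowControl: false })
```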
] | PSA unable to write when S is stopped | 2023-05-03T04:13:44.120Z | PSA unable to write when S is stopped | 744 |
null | [
"flutter",
"schema-validation"
] | [
{
"code": "",
"text": "Hi I am a newbie in flutter & realm sdk world , I am stuck here with this errorRealmException: Error opening realm at path /data/data/com.didibusiness/files/mongodb-realm/application-8-kdfbo/644a5089f05fa32952ef3b63/default.realm. Error code: 2016 . Message: Schema validation failed due to the following errors: - Property ‘Documents.documentUrl’ of type ‘array’ has unknown object type ‘documentObject’@RealmModel()\nclass _FssaiDocument {\n@PrimaryKey()\n@MapTo(“_id”)\nlate ObjectId id;\nlate ObjectId? userId;\nlate String applicantEmail; //like shg/ individual\nlate _ApplicantType? applicantType;\nlate String? phoneNumber; //phoneNumber of applicant\nlate int\napplicationYears; //number of years for which the application is applied\nlate int fees;\nlate _FssaiAddress?\nfoodPremiseAddress; //address of premise where food is prepared\nlate List<_Documents> documentList = ; //list of documents for fssai\nlate List<_ManufacturingProduct> manufacturingProducts = ;\nlate List statusOfApplication; //based on mital didi & govt\nlate String? created_date;\nlate String? expiry_date;\nlate String? renewal_date;\nlate List<_PaymentStatus> paymentStatus = ;\nlate _Downloads? downloads;\nlate int currentStep = 0;\n}@RealmModel(ObjectType.embeddedObject)\nclass _Documents {\nlate String documentName;\nlate List<_documentObject> documentUrl = ; //document uploaded on S3\nlate List<_videoWatchedObject> videoWatchedStatus = ;\n}@RealmModel(ObjectType.embeddedObject)\nclass _videoWatchedObject {\nlate ObjectId? videoId;\nlate String? videoStatus;\n}@RealmModel(ObjectType.embeddedObject)\nclass _documentObject {\nlate ObjectId id;\nlate String url;\nlate String status = “pending with farmdidi”;\nlate String? error;\n}",
"username": "Suman_Chakravarty"
},
{
"code": "Configuration.flexibleSync(user, [\n ...,\n DocumentObject.schema,\n]);\n",
"text": "DocumentObject.schema\nYou forgot to add the embedded object schema in the Configuration.",
"username": "Gitesh_Kumar"
},
{
"code": "",
"text": "Hey gitesh,\nThank you so much for the explanation ,Indeed I was missing that , sorry for asking such stupid question.Thank you for helping us noobs.",
"username": "Suman_Chakravarty"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm schema validation failed in flutter sdk | 2023-04-27T11:04:20.932Z | Realm schema validation failed in flutter sdk | 1,431 |
null | [
"node-js",
"data-modeling",
"mongoose-odm"
] | [
{
"code": "const categorySchema = new Schema(\n {\n parent: xxxxxxxx // how do i set \"parent\" to be Category or null?\n name: { type: String, required: true },\n },\n { timestamps: true }\n);\nconst productSchema = new Schema(\n {\n categories: xxxxxxxxx // how do i set \"categories\" to be an array of Category? should i import Category interface (see below) or make a ref to Category model?\n name: { type: String, required: true }\n qty: { type: Number, required: true }\n price: { type: Number, required: true }\n },\n { timestamps: true }\n);\ninferface Category {\n parent: Category | null; // is this ok?\n name: String;\n};\ninterface Product {\n categories: Category[];\n name: String;\n qty: Number;\n price: Number;\n};\n",
"text": "hi. i’m working with two basic models: Category and Productthanks!",
"username": "mongu"
},
{
"code": " // how do I set \"parent\" to be Category or null?parentcategorynullconst categorySchema = new mongoose.Schema(\n {\n parent: { type: mongoose.Schema.Types.ObjectId, ref: 'Category', default: null },\n name: { type: String, required: true }\n },\n { timestamps: true }\n);\n\nconst Category = mongoose.model('Category', categorySchema);\nproductSchemaconst productSchema = new mongoose.Schema(\n {\n categories: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Category' }],\n name: { type: String, required: true },\n qty: { type: Number, required: true },\n price: { type: Number, required: true }\n },\n { timestamps: true }\n);\n\nconst Product = mongoose.model('Product', productSchema);\n",
"text": "Hi @mongu,Welcome to the MongoDB Community forums // how do I set \"parent\" to be Category or null?Can you please confirm if my understanding of your use-case is correct? Are you intending to assign the parent field of the category schema to either “category” or “null” by default? If yes then you can use the referencing feature. Sharing the code snippet for reference://How do I set “categories” to be an array of Categories? Should I import the Category interface (see below) or make a ref to the Category model?You can perform the same action for the productSchema as well. Here is a code snippet for your reference:I hope it helps. Let us know if you have any further queries.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "it helps! then i do have to populate, right? do i have a link so i can read how populate works for cases like this? thank you very much, @Kushagra_Kesav !!",
"username": "mongu"
},
{
"code": "",
"text": "Hey @mongu,Here is the link to the Mongoose documentation on populate: Mongoose v7.1.0: Query PopulationBest,\nKushagra",
"username": "Kushagra_Kesav"
},
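A minimal usage sketch building on the schemas above:

```js
// resolve the referenced Category documents on each product
const products = await Product.find().populate("categories");

// resolve one level of parent on each category (null stays null)
const categories = await Category.find().populate("parent");
```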
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Mongoose model reference to self | 2023-04-09T23:51:35.109Z | Mongoose model reference to self | 1,685 |
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "[\n {\n \"_id\": \"one\",\n \"category\": \"bear\",\n \"tags\": [\"202\"],\n \"text\": \"This is some text\"\n },\n {\n \"_id\": \"two\",\n \"category\": \"bear\",\n \"tags\": [\"202-L\"],\n \"text\": \"This is some text\"\n },\n {\n \"_id\": \"three\",\n \"category\": \"bear\",\n \"tags\": [\"204-L\"],\n \"text\": \"This is some text\"\n }\n {\n \"_id\": \"four\",\n \"category\": \"tiger\",\n \"tags\": [\"202\"],\n \"text\": \"This is some text\"\n }\n]\n{\n index: \"text\",\n facet: {\n operator: {\n compound: {\n filter: [\n {\n text: {\n path: [\n \"text\",\n ],\n query: \"some\",\n },\n },\n {\n text: {\n path: \"category\",\n query: \"bear\",\n },\n },\n {\n text: {\n path: \"tags\",\n query: \"202\"\n },\n },\n ],\n },\n },\n facets: {\n tags: {\n type: \"string\",\n path: \"tags\",\n },\n\n },\n },\n}\n{\n text: {\n path: \"tags\",\n query: \"202\" // or [\"202\"]\n }\n}\n{\n text: {\n path: \"tags\",\n query: \"202-L\"\n }\n}\n",
"text": "Having some issues with atlas $search and compound querying of faceted/arrays.Take the following dataset:I’m using the above data with the following $search aggregate:My problem is specifically around the tags field. I only want documents to be returned that explicitly match “202” but that isn’t what is happening.The above returns document “one” and “two” but I would only expect document “one” to be returned. It seems that it is matching the “202” in the tag values and not acknowledging that “202” != “202-L”If I change the filter to the following:It returns “one”, “two”, and “three” when I would only expect it to return document “two”. It appears to be matching the “202” and the “-L” across all documents.I’ve read through the documentation and just can’t figure out what I am missing. How can I go about only matching explicit strings in an array and not partial values?",
"username": "w3e"
},
{
"code": "{\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"mappings\": {\n \"dynamic\": true\n }\n}\n{\n index: 'default',\n text: {\n query: '202',\n path: 'tags'}\n}\n202{\n index: 'default',\n text: {\n query: '202-L',\n path: 'tags'}\n}\n",
"text": "Hey @w3e,Welcome to the MongoDB Community Forums! I replicated your sample documents on my end and used the following index definition:I then used the following search query:and only one and four were returned since they both have 202. Similarly, when I searched:only document two was returned.Hope this helps. If not, it would be good if you can share your index definition as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "$search{\n $match: {\n tags: \"202\"\n }\n}\n",
"text": "@Satyam,First off, thank you for taking the time to give this a look.I think what you have shared exhibits the issue perfectly. I’m using the default index type but I did play around with whitespace and keyword with similar behavior.If I am dealing with tags, (or taxonomy), I would not be interested in substrings or each array string.If I am querying “202” I would only expect/want results that match the full “202” string. I’m not interested in items tagged with “202-L” or “2023”.Likewise, if I query for “202-L” I would not expect “215-L” or something different (I was seeing that the search was treating the “L” as its own word, returning erroneous results.)Using non $search aggregators, it does not behave this way. If I have the aggregator as follows:I only get documents that match “202” not “202-L” or “2023”.I understand this behavior for standard text searches, but if we are dealing with any type of facet, the results are flawed. This is because we need to look at the field’s full array item and not a substring of that item.",
"username": "w3e"
},
{
"code": "",
"text": "Hey @w3e,I think what you have shared exhibits the issue perfectly. I’m using the default index type but I did play around with whitespace and keyword with similar behavior.If I am dealing with tags, (or taxonomy), I would not be interested in substrings or each array string.Yes, I would advise you to play around and read the documentation on different search index types available as well as the various operators that Atlas offers to see which one would serve your use case best. Based on what you described, using lucene.whitespace might be of more use to you than the default lucene.standard.This is because we need to look at the field’s full array item and not a substring of that item.Trying out different approaches is the best way to learn and should help you a lot. I suggest reading up more on other search operators as well as analysers and then deciding which one would be ideal for your use case.Regards,\nSatyam",
"username": "Satyam"
}
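For anyone with the same exact-match requirement on a tag-style field, one possible starting point is to keep the default analyzer for the free-text fields but map the tags field with the keyword analyzer, so each array entry is indexed as a single token and a query for 202 no longer matches 202-L. This is a sketch only; depending on the Atlas Search version, the facet collector may additionally need the field mapped as a facet-capable type:

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "tags": {
        "type": "string",
        "analyzer": "lucene.keyword",
        "searchAnalyzer": "lucene.keyword"
      }
    }
  }
}
```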
] | Compound $search not explicitly matching array value in results | 2023-04-24T22:01:53.760Z | Compound $search not explicitly matching array value in results | 808 |
null | [
"backup"
] | [
{
"code": "",
"text": "How long (roughly) should a snapshot restore take on an Atlas cluster, I have an M2 instance where the database is only 580MB.",
"username": "Jeremy_Sales"
},
{
"code": "",
"text": "Hi @Jeremy_Sales and welcome to MongoDB community forums!!How long (roughly) should a snapshot restore take on an Atlas cluster, I have an M2 instance where the database is only 580MB.Ideally, for a data with a small size, it shouldn’t take more than a few minutes.However, if the restore process takes longer than expected, you might want to check the Atlas UI showing any cause of the delay during the process.Since for M2 and M5, the snapshots are automatically created, can you confirm if you are creating any on demand snapshots for the cluster. If yes, can you share the details for the same.Please take a look at the following documentation for further assistance.Regards\nAasawari",
"username": "Aasawari"
}
] | How long does a cluster snapshot take to restore? | 2023-04-29T00:50:23.022Z | How long does a cluster snapshot take to restore? | 841 |
null | [
"node-js",
"atlas-functions"
] | [
{
"code": "exports = ({ token, tokenId, username }) => {\n const API_KEY = 'my mailgun API key';\n const DOMAIN = 'my mailgun domain';\n \n const axios = require('axios').default;\n axios({\n method: 'post',\n url: DOMAIN,\n auth: {\n username: 'api',\n password: API_KEY\n },\n params: {\n from: \"Test <support@domain>\",\n to: username,\n subject: \"Email Confirmation\",\n template: \"email_confirmation\",\n 'h:X-Mailgun-Variables': JSON.stringify({userToken: token, userTokenId: tokenId})\n }\n }).then(\n response => {\n console.log(response)\n },\n reject => {\n console.log(reject)\n }\n )\n\n return { status: 'pending' };\n};\n> ran at 1681606325251\n> took 359.796052ms\n> logs: \nTypeError: Invalid scheme\n> result: \n{\n \"status\": \"pending\"\n}\n> result (JavaScript): \nEJSON.parse('{\"status\":\"pending\"}')\nTypeError: Invalid scheme",
"text": "I’ve been struggling a bit to implement a custom email/password auth function in Atlas. I’ve tried functions using mailgun.js, mailgun-js, and axios to post a request to Mailgun, but while they’re easy to get running in my local environment I keep getting errors in Atlas. I believe this is because Atlas’s version of Node.js is very out of date, which means a lot of dependencies need to be loaded with long out-of-date versions to run.Currently this is the function I’m running:And the error I’m seeing:What is TypeError: Invalid scheme referencing? Since this same code runs on my local environment perfectly I’m not sure what it could be. I’ve also uploaded my node_modules dependencies so that I’m using the same dependencies in Atlas.",
"username": "Campbell_Affleck"
},
{
"code": " const API_KEY = 'my_mailgun _API_key';\n const DOMAIN = 'my_mailgun_domain';\n context.http.post({\n url: \"https://api:my_mailgun [email protected]_mailgun_domain\",\n headers: {\n \"Content-Type\": [ \"multipart/form-data\" ]\n },\n form: {\n 'from': \"support@domain\",\n 'to': username,\n 'subject': \"Email Confirmation\",\n 'template': \"email_confirmation\",\n 'h:X-Mailgun-Variables': JSON.stringify({userToken: token, userTokenId: tokenId})\n },\n encodeBodyAsJSON: false\n })\n .then(response => {\n // The response body is encoded as raw BSON.Binary. Parse it to JSON.\n const ejson_body = EJSON.parse(response.body.text());\n return ejson_body;\n })\n .catch( error => console.log(error) );\n> ran at 1681612573424\n> took 315.297896ms\n> logs: \nFunctionError: TypeError: invalid character '<' looking for beginning of value\n> result: \n{\n \"status\": \"pending\"\n}\n> result (JavaScript): \nEJSON.parse('{\"status\":\"pending\"}')\n",
"text": "I’ve also tried it using context.http.post() with the following code in the export function, if that might be a better way to go about it:And I’m seeing the following error:I’ve tried to find documentation on these errors but I haven’t been able to find anything. Am I missing something here?",
"username": "Campbell_Affleck"
},
{
"code": "exports = ({ token, tokenId })=>{\n \"use strict\";\n const API_KEY = API_KEY;\n const Domain = `https://api.mailgun.net/v3/${DOMAIN_NAME}/messages`;\n const axios = require('axios').default;\n axios({\n method: 'post',\n url: Domain,\n auth: {\n username: 'api',\n password: API_KEY\n },\n params: {\n from: \"Mailgun Sandbox <[email protected]>\",\n to: \"[email protected]\",\n subject: \"Email Confirmation\",\n template: \"testing\",\n 'h:X-Mailgun-Variables': JSON.stringify({userToken: token, userTokenId: tokenId})\n }\n }).then(\n response => {\n console.log(response);\n console.log('Success');\n\n },\n reject => {\n console.log(reject);\n console.log('Failed');\n }\n );\n return JSON.stringify({ Status: \"Pending\" });\n};\n> ran at 1682956544266\n> took 1.095613339s\n> logs:\n[object Object]\nSuccess\n> result: \n\"{\\\"Status\\\":\\\"Pending\\\"}\"\n> result (JavaScript): \nEJSON.parse('\"{\\\"Status\\\":\\\"Pending\\\"}\"')\nTypeError: Invalid scheme\n",
"text": "Hello @Campbell_Affleck,Welcome to the MongoDB Community forums I tested the following code:and it worked fine for me, returning the output as follows:I also received the email at my recipient email.It appears that the error is occurring from the reject block. Can you ensure that you have provided all the correct credentials?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "https://api.mailgun.net/v3/${DOMAIN_NAME}/messages",
"text": "Hey @Kushagra_Kesav, ahhhhh you’re absolutely right. The error was in the credentials not the code, what a silly error Thank you! I’ve fixed it up and it works perfectly. I’d seen some conflicting posts on the proper DOMAIN string structure, but it looks like https://api.mailgun.net/v3/${DOMAIN_NAME}/messages; is the proper way to go.",
"username": "Campbell_Affleck"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Axios to post to Mailgun within an Atlas function | 2023-04-16T01:04:54.124Z | Using Axios to post to Mailgun within an Atlas function | 1,413 |
null | [] | [
{
"code": "",
"text": "Hello, I see that it is possible to create roles and permissions that are document/field level. I, however, want to do the following and wondering if it is possible:Thanks!",
"username": "Archna_Johnson"
},
{
"code": "",
"text": "Hey @Archna_Johnson,Welcome to the MongoDB Community Forums! I see that it is possible to create roles and permissions that are document/field levelThe document/field level permission you described is for the Atlas App services and not Atlas. It would be good if you can clarify your use case for us to be better able to help you. Also, can you provide any sample documents and explain what exactly are you looking for? This would help us better able to understand your ask.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Thanks for your response. Here is the scenario I am dealing with. Working with a Blazor app hosted in Azure; using Azure AD for authentication. Information for users creating accounts on the app are stored in a collection in MongoDB. These users can create objects which are also stored in a collection in Mongo.We are using Mongo Atlas to store these collections.Users can be part of a group and are allowed to view documents owned by the group. Some users are standalone - we want these users to be able to share the objects they create with other users - either read or write permissions.The fact that the objects the users create are stored in Mongo is opaque to the users of the app.I have been unable to map these requirements to the functionality provided by Mongo. Am I missing something? I am currently considering writing code in the app to support this. Document level permissions are supported in Atlas app services but I think that is not appropriate for my scenario. I’d appreciate your response.Thanks!",
"username": "Archna_Johnson"
},
{
"code": "",
"text": "Hey @Archna_Johnson,We are using Mongo Atlas to store these collections.Thanks for letting me know. The document/field level permissions that you described in your first post pertains to App Services and not Atlas. I would suggest you write this logic on your application side since it would be easier to control and manage. MongoDB also provides built-in roles with pre-defined pairings of resources and permitted actions. For lists of the actions granted, see Built-In Roles. To define custom roles, see Create a User-Defined Role.Regards,\nSatyam",
"username": "Satyam"
}
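For completeness, a minimal mongosh sketch of the user-defined role mentioned above; the database, collection, role, and user names are placeholders. Note that this controls access per database/collection, not per document, which is why the per-object sharing logic discussed here belongs in the application layer:

```js
use admin
db.createRole({
  role: "appObjectsReadOnly",
  privileges: [
    { resource: { db: "app", collection: "objects" }, actions: [ "find" ] }
  ],
  roles: []
})

db.createUser({
  user: "reportingUser",
  pwd: passwordPrompt(),
  roles: [ "appObjectsReadOnly" ]
})
```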
] | Create groups with certain permissions | 2023-04-20T20:32:33.589Z | Create groups with certain permissions | 647 |
[
"node-js",
"atlas-cluster"
] | [
{
"code": "",
"text": "\nimage2834×994 111 KB\n\nGetting this error. Please help me to resolve this.",
"username": "Shubham_Mantri"
},
{
"code": "",
"text": "Hello @Shubham_Mantri,Welcome to the MongoDB Community forums I would recommend contacting Atlas in-app chat support regarding this. Please provide the chat support with a cluster link.You can reach out to them by tapping on the bubble icon in the right corner of the MongoDB Atlas account.\nimage488×540 10.7 KB\nRegards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Can someone please help me I am getting this error on mongodb altas website. Request invalid. Please visit your Clusters and try again | 2023-05-01T17:46:14.093Z | Can someone please help me I am getting this error on mongodb altas website. Request invalid. Please visit your Clusters and try again | 1,203 |
|
null | [
"queries",
"indexes",
"schema-validation"
] | [
{
"code": "[\n {\n \"username\": u1,\n emails: [\n {\n type: \"work\",\n value: [email protected],\n primary: true\n },\n {\n type: \"work\",\n value: [email protected],\n primary: false\n }\n ]\n },\n {\n \"username\": u2,\n emails: [\n {\n type: \"work\",\n value: [email protected],\n primary: true\n },\n {\n type: \"work\",\n value: [email protected],\n primary: false\n },\n {\n type: \"work\",\n value: [email protected],\n primary: false\n }\n ]\n }\n]\n{\n \"username\": u3,\n emails: [\n {\n type: \"work\",\n value: [email protected],\n primary: true\n },\n {\n type: \"work\",\n value: [email protected],\n primary: false\n }\n ]\n }\n",
"text": "I need to write a validator expression while inserting the document in a collection. For example , my collection contains data:While inserting new data , I want to validate that the primary email is not duplicate i.e. email mentioned in the object having “primary:true” should not contain in any document’s primary’s email. For example if I insertIt shouldn’t allow me to insert as user U2 has same primary email. I am struggling with the creation of validator expression. Read about Unique index with partialFilterExpression but having difficulties with that expressions too.",
"username": "Vikram_Tanwar"
},
{
"code": "[\n {\n \"username\": u1,\n primary_email: \"[email protected]\",\n primary_email_type: \"work\"\n other_emails: [\n {\n type: \"work\",\n value: [email protected],\n }\n ]\n },\n {\n \"username\": u2,\n primary_email: \"[email protected]\",\n primary_email_type: \"work\"\n other_emails: [\n {\n type: \"work\",\n value: [email protected],\n },\n {\n type: \"work\",\n value: [email protected],\n }\n ]\n }\n]\ndb.collection.insertMany([\n {\n \"username\": \"u1\",\n \"primary_email\": \"[email protected]\",\n \"primary_email_type\": \"work\",\n \"other_emails\": [\n {\n \"type\": \"work\",\n \"value\": \"[email protected]\"\n }\n ]\n },\n {\n \"username\": \"u2\",\n \"primary_email\": \"[email protected]\",\n \"primary_email_type\": \"work\",\n \"other_emails\": [\n {\n \"type\": \"work\",\n \"value\": \"[email protected]\"\n },\n {\n \"type\": \"work\",\n \"value\": \"[email protected]\"\n }\n ]\n }\n]);\n\nprimary_emaildb.collection.createIndex(\n { \"primary_email\": 1 },\n { unique: true, partialFilterExpression: { primary_email: { $exists: true } } }\n);\nprimary_email$existsprimary_emailprimary_emailprimary_emailprimary_emailpartialFilterExpressionprimary_emailprimary_emaildb.collection.insertOne({\n \"username\": \"u3\",\n \"primary_email\": \"[email protected]\",\n \"primary_email_type\": \"work\",\n \"other_emails\": []\n});\nprimary_emaildb.collection.insertOne({\n \"username\": \"u4\",\n \"primary_email\": \"[email protected]\",\n \"primary_email_type\": \"work\",\n \"other_emails\": []\n});\nprimary_emailprimary_emaildb.collection.insertOne({\n \"username\": \"u5\",\n \"primary_email_type\": \"work\",\n \"other_emails\": []\n});\nprimary_emailprimary_email",
"text": "Hello @Vikram_Tanwar ,Welcome to The MongoDB Community Forums! As of now, the partialFilterExpression applies to the document. It does not apply to which array elements will be indexed, so I don’t believe that having more complex partial filter expression would help you with your current schema design. For reference, please see SERVER-17853.One possible workaround is to record the primary email in a separate top-level field, maybe in addition to the email marked as primary in the array, see an example belowNow, let’s try this with an example, I added below documents to my collectionNow, to validate that the primary_email field is not a duplicate while inserting new data, I used a unique index with a partial filter expression. Here’s an example of how you can create the unique index:This unique index ensures that the primary_email field is unique across the collection only for documents where the field exists. The $exists operator is used in the partial filter expression to match only the documents that have the primary_email field.With this unique index in place, if you try to insert a new document with a primary_email that already exists, MongoDB will throw a duplicate key error and prevent the insertion.Note: The unique index will only apply to documents where the primary_email field exists. If you want to ensure uniqueness across all documents, including those where the primary_email field is missing, you can omit the partialFilterExpression option from the index creation command.Now, let’s run a few test cases to test the uniqueness of the primary_email field:Expected result: The document should be inserted successfully without any errors.\nOutput: Result as expected.Expected result: The insertion should fail with a duplicate key error since the primary_email value already exists in the collection.\nOutput: Result as expected.Expected result: The document should be inserted successfully since the primary_email field is not present, and the unique index only applies to documents where the field exists.\nOutput: Result as expected.These test cases cover scenarios where the primary_email is unique, where it is a duplicate, and where it is not present. You can modify the values and add more test cases as per your requirements.Let me know if this helps!Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Need help with validator expression while inserting a document | 2023-04-27T06:20:14.435Z | Need help with validator expression while inserting a document | 914 |
null | [
"queries",
"crud",
"sharding"
] | [
{
"code": "",
"text": "Hi Team,\nThe following query exexcuted on command prompted (On windows CMD)\nbut getting below error , so where i did mistakeMongoDB Enterprise mongos> db.test.updateMany({}, [{$set:{ createdAt:{$toDate:“$_id”}}},{upsert:false, multi:true }])\n2023-05-01T19:26:46.906+0530 E QUERY [js] Error: the update operation document must contain atomic operators :\nDBCollection.prototype.updateMany@src/mongo/shell/crud_api.js:625:1\n@(shell):1:1MongoDB Enterprise mongos> db.version()\n4.0.4\nMongoDB Enterprise mongos>",
"username": "hari_dba"
},
{
"code": "",
"text": "The square brackets before the $set and after the multi:true.The second argument needs to be an update object/document. With the square brackets, you are passing an array.",
"username": "steevej"
},
{
"code": " },\n \"u\" : {\n \"$set\" : {\n \"createdAt\" : {\n \"$toDate\" : \"$_id\"\n }\n }\n },\n \"multi\" : true,\n \"upsert\" : false\n }\n",
"text": "Getting error afte the square brackets removed before the $set and after the multi:trueThe second argument updated objecct/document “createdAt”: {$toDate: “$_id”} .Here i would want add new field on exiting document based on “_id”Can i modify query please let me know ?MongoDB Enterprise mongos> db.test.updateMany({ }, {$set:{“createdAt”: {$toDate: “$_id” }}},{upsert:false, multi:true })\n2023-05-01T20:27:02.293+0530 E QUERY [js] WriteError: The dollar ($) prefixed field ‘$toDate’ in ‘createdAt.$toDate’ is not valid for storage. :\nWriteError({\n“index” : 0,\n“code” : 52,\n“errmsg” : “The dollar ($) prefixed field ‘$toDate’ in ‘createdAt.$toDate’ is not valid for storage.”,\n“op” : {\n“q” : {})\nWriteError@src/mongo/shell/bulk_api.js:461:48",
"username": "hari_dba"
},
{
"code": "db.test.updateMany({}, [{$set:{ createdAt:{$toDate:\"$_id\"}}},{upsert:false, multi:true }])\ndb.test.updateMany({}, [{$set:{ createdAt:{$toDate:\"$_id\"}}}],{upsert:false, multi:true })\n",
"text": "I understand better your use-case.You want to use update with aggregation because you want your new field be calculated from the existing field _id. That explains why you add square brackets at first.For this to work, you need brackets (unlike what I mentioned). The real issue was that the closing bracket was at the wrong place. It has to be after the 3 closing curly braces rather than after the upsert: and multi: option object. To be clear, rather thantryPlease read Formatting code and log snippets in posts before posting code or documents.",
"username": "steevej"
},
{
"code": "",
"text": "Same error, i used as you mentoned query…MongoDB Enterprise mongos> db.test.updateMany({}, [{$set:{ createdAt:{$toDate:“$_id”}}}],{upsert:false, multi:true })\n2023-05-02T21:32:42.487+0530 E QUERY [js] Error: the update operation document must contain atomic operators :\nDBCollection.prototype.updateMany@src/mongo/shell/crud_api.js:625:1\n@(shell):1:1",
"username": "hari_dba"
},
{
"code": "",
"text": "May be your version is to old and it does not support update with aggregation.",
"username": "steevej"
},
{
"code": "> db.version()\n4.0.13\n\n> db.test.updateMany({}, [{$set:{a:1}}])\n2023-05-03T11:01:38.333+1000 E QUERY [js] Error: the update operation document must contain atomic operators :\nDBCollection.prototype.updateMany@src/mongo/shell/crud_api.js:625:1\n> db.version()\n4.2.22\n\n> db.test.updateMany({}, [{$set:{a:1}}])\n{ \"acknowledged\" : true, \"matchedCount\" : 1, \"modifiedCount\" : 1 }\n",
"text": "Hi @hari_dbaI agree with @steevejI tried this in MongoDB 4.0 series and see an identical error message:Then I tried the same in MongoDB 4.2 series:Since MongoDB 4.0 series is out of support per April 2022, I encourage you to consider moving to a supported version. This will also provide you with the updates with aggregation capabilities that you need.Note: MongoDB 4.2 series is also out of support as per April 2023. I’m using it here for illustration purposes only.Best regards\nKevin",
"username": "kevinadi"
}
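{
"code": "// A client-side alternative that works on MongoDB 4.0 (a sketch, run in the mongo shell):
// compute createdAt from each document's ObjectId timestamp and apply it in batches.
var ops = [];
db.test.find({ createdAt: { $exists: false } }, { _id: 1 }).forEach(function (doc) {
  ops.push({
    updateOne: {
      filter: { _id: doc._id },
      update: { $set: { createdAt: doc._id.getTimestamp() } }
    }
  });
  if (ops.length === 1000) {      // flush in batches to limit memory use
    db.test.bulkWrite(ops);
    ops = [];
  }
});
if (ops.length > 0) {
  db.test.bulkWrite(ops);
}
",
"text": "If upgrading is not immediately possible, one hedged workaround for the original 4.0 deployment is to compute the value on the client instead of in an aggregation pipeline, since an ObjectId carries its creation timestamp. The sketch above assumes the default ObjectId _id values and the collection name from the thread; the batch size and the $exists filter are arbitrary choices, not part of the original discussion."
}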
] | Error: the update operation document must contain atomic operators | 2023-05-01T14:01:12.178Z | Error: the update operation document must contain atomic operators | 2,054 |
[
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hey, please explain how to prevent the problem:\n‘Synchronization between Atlas and Device Sync has beed stopped, due to error:\nnon-recoverable error processing event: the number of unsynced documents for this app has exceeded the maximum’.\nWhatsApp Image 2023-04-29 at 18.14.021600×205 61 KB\nWhat could be the cause? CPU? RAM? Too small oplog size? I’m trying in the metrics to find the causes, but haven’t found anything that stands out.Can someone explain to me how I can speed up the synchronization of documents or increase the limit of unsynchronized documents?",
"username": "Mateusz_Piwowarski"
},
{
"code": "mongoencoding",
"text": "Hi Matuesz,Thanks for posting and welcome to the community.This error means there have been more than 100k documents in Atlas which do not comply with the device sync cloud schema that you have configured on your cloud app. For e.g. if the cloud schema requires a string field for a property but your document has objectId, this will cause a mongoencoding error which you’ll be able to see in your app logs.To rectify this you will need to update the affected documents to bring them into compliance with the schema. The document id will be shown in your app logs under the mongoencoding error mentioned. When this limit of 100k documents is reached you will need to terminate sync and restart it after fixing the problematic documents.Regards\nManny",
"username": "Mansoor_Omar"
},
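{
"code": "// Hypothetical illustration - 'items' and 'ownerId' are placeholder names.
// Suppose the Device Sync schema expects ownerId to be a string, but some
// documents store it as an ObjectId (the kind of mismatch described above).

// 1. List a sample of the offending documents.
db.items.find(
  { ownerId: { $exists: true, $not: { $type: 'string' } } },
  { _id: 1, ownerId: 1 }
).limit(10)

// 2. Convert the values in place (pipeline updates need MongoDB 4.2+).
db.items.updateMany(
  { ownerId: { $type: 'objectId' } },
  [ { $set: { ownerId: { $toString: '$ownerId' } } } ]
)
",
"text": "To make the advice above concrete, here is a hedged mongosh sketch of how one might locate and repair documents whose field type does not match the sync schema before terminating and re-enabling sync. The collection name, field name and the particular type mismatch are assumptions for illustration only; in practice, use the document ids from your app logs to target the real offenders."
},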
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sync between Atlas and Device Sync has been stopped - the number of unsynced documents for this app has exceeded the maximum | 2023-05-03T00:37:04.013Z | Sync between Atlas and Device Sync has been stopped - the number of unsynced documents for this app has exceeded the maximum | 916 |
|
null | [
"node-js"
] | [
{
"code": "",
"text": "Which mongodb version you are using. I tried this but still not working. Please can you help. I am not even getting any error. Getting success print but db not created.",
"username": "Anand_Vaidya1"
},
{
"code": "",
"text": "Hi @Anand_Vaidya1, your problem might not be a connection issue at all. what do you get if you run the code in the question itself?",
"username": "Yilmaz_Durmaz"
},
{
"code": "const { MongoClient } = require('mongodb');\nvar ObjectId = require('mongodb').ObjectId;\n\nconst url = 'mongodb://127.0.0.1:27017';\nconst client = new MongoClient(url);\n\n// Database Name\nconst dbName = 'myProject';\n\nasync function main() {\n // Use connect method to connect to the server\n await client.connect();\n console.log('Connected successfully to server');\n const db = client.db(dbName);\n const collection = db.collection('documents');\n\n client.close();\n // the following code examples can be pasted here...\n\n return 'done.';\n}\nmain();\n",
"text": "I get success log but when I go and check through MongoDb Compass no DB is created.",
"username": "Anand_Vaidya1"
},
{
"code": "",
"text": "Alright now, besides not being a connection problem, your issue is actually not an issue at all but a small detail you missed on the way (that can happen to any of us at any time):follow on with tutorials to see how you insert documents, then you should see your newly created collection and the data in it.if you happen to have other issues, create a new topic and give as many details as possible.cheers ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks a lot. This worked but I need to only create empty tables without default entries. I am providing a library for other modules where my functions need to be clear and crisp.\nIs there a way where I can just do specific actions.\nE.g.\nCreateDB() : THis function will only create db.\nCreateTable(): This function will only create table/ collection.\nWriteTable(), ReadTable() etc.",
"username": "Anand_Vaidya1"
},
{
"code": "CreateDB()CreateTable()WriteTable()WriteTable()",
"text": "Hi @Anand_Vaidya1 welcome to the community!I need to only create empty tables without default entriesThe resource you’re looking for is probably https://www.mongodb.com/docs/manual/reference/method/db.createCollection/In short, MongoDB uses a very different concept to SQL in managing your data.Unlike the typical SQL workflow, MongoDB works very differently. Here, a “collection” is a collection of JSON-like documents. The “schema” here is more like uniformity of keys and values of the documents inside the collection. You don’t define the schema beforehand, the collection will be created automatically when you insert a new document into a non-existing collection, and you can have two documents that look very different from each other inside a single collection. It’s termed “flexible schema” in MongoDB.This is quite different to SQL’s concept of tables and schemas, where you have to define a schema for every table. Thus, the concept of creating functions of CreateDB(), CreateTable(), WriteTable() etc. don’t have direct equivalents in MongoDB.If you need specific help on what you need to do, please provide more details. e.g. what’s the WriteTable() function supposed to do? What’s the overall function of your application? And other relevant details.Best regards\nKevin",
"username": "kevinadi"
}
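{
"code": "// Creating an empty collection explicitly so the database shows up in Compass
// without inserting any documents. Names reuse the thread's earlier example.
const { MongoClient } = require('mongodb');

async function createDbAndCollection() {
  const client = new MongoClient('mongodb://127.0.0.1:27017');
  try {
    await client.connect();
    const db = client.db('myProject');
    // Explicitly create the (still empty) collection.
    await db.createCollection('documents');
    const names = (await db.listCollections().toArray()).map((c) => c.name);
    console.log('collections now present:', names);
  } finally {
    await client.close();
  }
}

createDbAndCollection().catch(console.error);
",
"text": "Following on from the createCollection suggestion above, this is a minimal Node.js sketch of creating an empty collection explicitly, which also makes the database visible with no seed documents needed. The connection string and names come from the earlier snippet in this thread; the wrapper function name is just an illustration, and error handling is kept to a bare minimum."
}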
] | Create a collection with node.js | 2023-04-28T19:51:08.634Z | Create a collection with node.js | 2,065 |
null | [
"queries",
"node-js"
] | [
{
"code": "import BaseModel from \"./BaseModel\";\nexport abstract class UsersModel extends BaseModel {\n // Search single item and and reduce results to minimum by fields\n public static checkExists(userId: string) {\n return !!Meteor.users.findOne({ _id: userId }, { fields: {} });\n }\n // Search single item\n public static checkExists(userId: string) {\n return !!Meteor.users.findOne({ _id: userId });\n }\n // Search multiple items with count\n public static checkExists(userId: string) {\n return !!Meteor.users.find({ _id: userId }).count();\n }\n}\n",
"text": "How to effective check, if document by _id exists in database?Or there is not difference between this methods?\nThanks a lot.",
"username": "klaucode"
},
{
"code": " { fields: {} }reduce results to minimum by fields",
"text": "According to nodejs documentation the following is deprecated: { fields: {} }This is not how youreduce results to minimum by fieldsIf you do not use a projection a document fetch will occur. So yes it is better to projection. I am not too sure what happens internally when you specify an empty projection. You might have a document fetch even if it is not needed. I think it would be better to project _id to make sure the query hits the logic of a covered query.I have no numbers to confirm my gut feeling expressed above.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej _id is added to projection automatically. Only why I asked is, that I wont to be sure, that I’m using right way for that (especially in that case, when there is a lot of documents in collection or document size is bigger and projection /I think/ can be more effective like selection whole document).Thanks a lot for your answer Nobody else answered.",
"username": "klaucode"
},
{
"code": "#read Object ID Document from the collection\nfrom bson.objectid import ObjectId\nHOST = vmhost\nprint(bcolors.OKBLUE + \"[+] Reading the data of the collection\"+ bcolors.ENDC)\nt0 = time.time()\nwith MongoClient(HOST) as client:\n db = client['collection']\n #db.test_col.insert_one({\"foo\": \"bar\"})\n with db.test_col.find_one({'_id': ObjectId('6446d2d85d69b01f140dd453')}) as cursor:\n for c in cursor:\n print(c)\n\nt1 = time.time()\ntotal = t1-t0\nprint(\"Execution Time: {}\".format(total))\n[+] Reading the data of the collection\n{'_id': ObjectId('6446d2d85d69b01f140dd453'), 'foo': 'bar'}\nExecution Time: 0.2895638942718506\n",
"text": "I would agree that the fastest way to do this is:Below I ran a simple test (in python) and got the round trip execution time. Most of the time is network travel.Results",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "_id is added to projection automaticallyYes it does. But an empty projection might be an edge case equivalent to no projection. That is whyI think it would be better to project _idexplicitly to make sure that an empty does not end up being a fetch.",
"username": "steevej"
}
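{
"code": "// Two cheap existence checks with the plain Node.js driver (a sketch; the
// Meteor wrapper in the question uses 'fields' instead of 'projection').

// Option 1: fetch at most the _id, which the _id index can satisfy.
async function existsByFindOne(collection, id) {
  const doc = await collection.findOne({ _id: id }, { projection: { _id: 1 } });
  return doc !== null;
}

// Option 2: ask the server to stop counting after the first match.
async function existsByCount(collection, id) {
  const n = await collection.countDocuments({ _id: id }, { limit: 1 });
  return n > 0;
}
",
"text": "Summarising the thread in runnable form, here are two hedged sketches of an existence check that avoid fetching the whole document: a findOne that projects only _id, and a countDocuments capped at one match. The function names are made up, and which variant is faster for a given workload should be verified with your own measurements, as noted above."
}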
] | I would like to check by _id, if item exists, what is the effective way? | 2023-04-26T14:54:37.559Z | I would like to check by _id, if item exists, what is the effective way? | 948
null | [] | [
{
"code": "",
"text": "How to configure the Log level for the mongocxx?\nI have provided custom logger implementation to instance. However, I could not find any way to configure the log level. Looking forward to some help",
"username": "Milind_T"
},
{
"code": "log_level",
"text": "Hi @Milind_T ,You should be able to set the log level with log_level enum - mongo-cxx-driver/logger.hpp at master · mongodb/mongo-cxx-driver · GitHub\nAPI Doc - MongoDB C++ Driver: mongocxx::logger Class Reference\nExample - mongo-cxx-driver/logging.cpp at master · mongodb/mongo-cxx-driver · GitHub",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "thanks for reply. Yes, I did follow this.I could not figure out where to set this enum - log_levelIs there any global or any where we need to set “log_level” ?I found old issue here: https://jira.mongodb.org/browse/SERVER-9966\nHowever, this does not seem to be applicable as mongo::logLevel does not exist",
"username": "Milind_T"
},
{
"code": "",
"text": "loggerWhich Mongo client are you using?\nIf you want to set log level using database command, you can try db.setLogLevel.\nRefer setLogLevel Doc - db.setLogLevel",
"username": "Monika_Shah"
},
{
"code": "",
"text": "This question is specific to C++ mongo driver.\nWant to enable debug /trace logging for mongo driver (Mongocxx).I guess, db.setLogLevel is for changing log level for database. I need it to configure log level for Mongo Driver (c++)",
"username": "Milind_T"
},
{
"code": "",
"text": "Hello @Milind_T\nYes . db.setLogLevel is for configuring log level for database.\nWhat is your purpose of Mongo Driver(c++) ? If database connectivity, and then db.setLogLevel can be useful.You can try logger.operator() . refer, doc",
"username": "Monika_Shah"
},
{
"code": "",
"text": "This does not help. It is just interface method to invoke logging from Mongo driver.\nI need something to configure the log level for the mongo c++ driver.",
"username": "Milind_Torney"
},
{
"code": "",
"text": "Just wondering if anyone has used logging before in the c++ driver",
"username": "Milind_T"
},
{
"code": "",
"text": "You can look at this example - you’d need to pass a logger to the instance mongo-cxx-driver/connect.cpp at master · mongodb/mongo-cxx-driver · GitHub",
"username": "Rishabh_Bisht"
},
{
"code": "#include <iostream>\n#include <cstdint>\n\n#include <mongocxx/client.hpp>\n#include <mongocxx/instance.hpp>\n#include <mongocxx/logger.hpp>\n\nclass logger final : public mongocxx::logger {\n public:\n explicit logger(std::ostream* stream) : _stream(stream) {}\n\n void operator()(mongocxx::log_level level,\n bsoncxx::stdx::string_view domain,\n bsoncxx::stdx::string_view message) noexcept override {\n if (level >= mongocxx::log_level::k_trace)\n return;\n *_stream << '[' << mongocxx::to_string(level) << '@' << domain << \"] \" << message << '\\n';\n }\n\n private:\n std::ostream* const _stream;\n};\n\nint main(int, char**) {\n mongocxx::instance inst{bsoncxx::stdx::make_unique<logger>(&std::cout)};\n mongocxx::uri uri(\"mongodb+srv://...\");\n\n std::cout << \"Connecting to MongoDB Atlas ...\\n\";\n mongocxx::client conn{uri};\n\n // ...\n}\n",
"text": "@Milind_T, just posting this here as I found the example from @Rishabh_Bisht helpful but it can be useful to see something here you can just copy/paste:",
"username": "alexbevi"
},
{
"code": "",
"text": "thank you! this is helpful",
"username": "Milind_Torney"
}
] | Mongocxx - how to configure the log level | 2023-03-14T10:18:44.848Z | Mongocxx - how to configure the log level | 955 |
null | [
"java"
] | [
{
"code": "",
"text": "Hi Team,\nOur application is built Java 1.8 with mongo driver [mongo-java-driver-3.12.9.jar] 3.12 API.\nWe are seeing some performance issues with mongo community server 4.2 and 4.x versions. Intermittently ,the mongo process is consuming 100% CPU which is causing timeouts and impacting customer experience.\nThe same application code works well with Mongo community server 4.0.x version and no timeouts are seen under same load condition as above.\nI would like to know whether driver and server combination used is a right combination.\nIs there any recommendation which i can try for better performance and avoid the CPU issues.\nHas anyone experienced similar issues, if yes how were they resolved?",
"username": "Nirup_Kumar"
},
{
"code": "",
"text": "Can anybody reply to the above query ?",
"username": "Udaya_Bhaskar_chimak"
}
] | Mongo Java driver 3.12 with mongo server 4.4 | 2023-04-20T10:17:13.382Z | Mongo Java driver 3.12 with mongo server 4.4 | 654 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I want to split data between two computing node shards. Is customised sharding possible which distribute data with 40% on one shard and 60% on another?",
"username": "Monika_Shah"
},
{
"code": "",
"text": "Customized ranges are possible with commands, but you will have to ensure 40%-60%, as the system only knows key ranges.",
"username": "Kobe_W"
}
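{
"code": "// Hypothetical illustration of the range/zone commands referred to above.
// Shard key, shard names, zone names and the boundary value are all placeholders;
// MongoDB enforces the key ranges, not the 40%/60% proportions.
sh.shardCollection('mydb.events', { userId: 1 })

// Associate each shard with a zone.
sh.addShardToZone('shard0000', 'zoneA')
sh.addShardToZone('shard0001', 'zoneB')

// Pin key ranges to zones; pick the boundary so that roughly 40% of your
// keys fall below it and 60% above it, based on your own data distribution.
sh.updateZoneKeyRange('mydb.events', { userId: MinKey }, { userId: 4000 }, 'zoneA')
sh.updateZoneKeyRange('mydb.events', { userId: 4000 }, { userId: MaxKey }, 'zoneB')
",
"text": "As a rough sketch of what the answer above means by customized ranges: zone sharding lets you pin key ranges to specific shards, but the 40/60 proportion only holds if you choose the boundary according to how your keys are actually distributed (and adjust it if the distribution drifts over time)."
}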
] | Is customized sharding possible? | 2023-05-01T07:00:54.610Z | Is customized sharding possible? | 537 |
null | [
"storage"
] | [
{
"code": "",
"text": "when we are out of disk space does mongodb rewrite the oldest document and replace it by a new one or we get an error like : there is no space left on the disk … ?",
"username": "Fatemeh_Ahmadi"
},
{
"code": "",
"text": "If this is an insertion, then exception. If more space is needed but an update but disk really full, exception.",
"username": "Kobe_W"
}
] | What happen to write operation if we are out of disk space in mongodb? | 2023-05-01T09:43:56.840Z | What happen to write operation if we are out of disk space in mongodb? | 711 |
null | [] | [
{
"code": "",
"text": "How did you connect to my cluster without the credentials??\nThe error persists even though I’ve switched my network.",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "You tried to hide it from your post but you forgot one place.",
"username": "steevej"
},
{
"code": "",
"text": "Ohh. Okay…\nThat is dumb.",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "I am still facing the same issue\nVPNs are disabled, Firewalls are down and I am running the most basic commands related to Mongo.",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "May be you are just to far for the AWS region you selected. Try to create a new cluster in a region that is geographically closer to you.",
"username": "steevej"
},
{
"code": "",
"text": "I’m not sure what happened, but it started working. Does server Location really affect the connection so much ??",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "I’m still facing the issue from time to time.\nI’m not sure why.",
"username": "Arvind_Iyer"
},
{
"code": "",
"text": "Does server Location really affect the connection so much ??If it does it has more to do with your connection to world rather than the connection of Atlas to the world.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Regarding MongoServerSelectionError: connect ETIMEDOUT | 2023-05-02T14:01:09.446Z | Regarding MongoServerSelectionError: connect ETIMEDOUT | 401 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n\"id\": 1,\n\"name\": \"channel\",\n\"participants\": [\n{\"profile\": {\"id\": 1, \"name\": \"John\"}, \"status\": {\"id\": 1, \"name\": \"active\"}},\n{\"profile\": {\"id\": 2, \"name\": \"Kate\"}, \"status\": {\"id\": 2, \"name\": \"inactive\"}},\n{\"profile\": {\"id\": 3, \"name\": \"Alex\"}, \"status\": {\"id\": 1, \"name\": \"active\"}},\n{\"profile\": {\"id\": 4, \"name\": \"Glea\"}, \"status\": {\"id\": 1, \"name\": \"active\"}},\n]\n}\n",
"text": "Hello guys,I have been looking at this for few hours, no results what so ever.\nLet’s say I have documents (around 20000 or so) with the following structureI want to fetch only those documents where one of the participants profile.id = 1 or 2 or something that I passHow can I do so?",
"username": "Aleksandre_Bregadze"
},
{
"code": "",
"text": "This is in principal a simple query. So please share what you tried as you might only have a simple detail wrong that we can spot.Also, taking into account that the top document you posted include 2 matching participants, do you want the array elements of profile.id:3 and profile.id:4 in the result document or do you want to filtered out.",
"username": "steevej"
},
{
"code": "",
"text": "I used eventually match to do this and all worked.\nNot sure if search works here though",
"username": "Aleksandre_Bregadze"
},
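{
"code": "// A plain find: channels where at least one participant has profile.id 1 or 2.
// 'channels' and the id values are placeholders taken from the example above.
db.channels.find({ 'participants.profile.id': { $in: [1, 2] } })

// The same condition as a $match stage, if further aggregation stages are needed.
db.channels.aggregate([
  { $match: { participants: { $elemMatch: { 'profile.id': { $in: [1, 2] } } } } }
])
",
"text": "For completeness, here is a hedged sketch of the kind of find/$match that answers the question as described; the collection name is an assumption, and $elemMatch is only strictly required when several conditions must hold on the same array element."
},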
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Search objects where id == objetc.id for object in list field of a document | 2023-04-02T15:26:32.630Z | Search objects where id == objetc.id for object in list field of a document | 634 |
null | [
"cxx"
] | [
{
"code": "In file included from /usr/include/c++/7/x86_64-amazon-linux/bits/c++allocator.h:33:0,\n from /usr/include/c++/7/bits/allocator.h:46,\n from /usr/include/c++/7/string:41,\n from /usr/include/c++/7/stdexcept:39,\n from /usr/include/c++/7/array:39,\n from /usr/include/c++/7/tuple:39,\n from /usr/include/c++/7/mutex:38,\n from /home/lstorino/psrplot/src/mongo.h:6,\n from /home/lstorino/psrplot/src/mongo.cpp:1:\n/usr/include/c++/7/ext/new_allocator.h: In instantiation of ‘void __gnu_cxx::new_allocator<_Tp>::construct(_Up*, _Args&& ...) [with _Up = std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>; _Args = {const std::pair<const std::basic_string<char, std::char_traits<char>, std::allocator<char> >, mongocxx::v_noabi::client>&}; _Tp = std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>]’:\n/usr/include/c++/7/bits/alloc_traits.h:475:4: required from ‘static void std::allocator_traits<std::allocator<_CharT> >::construct(std::allocator_traits<std::allocator<_CharT> >::allocator_type&, _Up*, _Args&& ...) [with _Up = std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>; _Args = {const std::pair<const std::basic_string<char, std::char_traits<char>, std::allocator<char> >, mongocxx::v_noabi::client>&}; _Tp = std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>; std::allocator_traits<std::allocator<_CharT> >::allocator_type = std::allocator<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client> >]’\n/usr/include/c++/7/bits/hashtable_policy.h:2066:37: required from ‘std::__detail::_Hashtable_alloc<_NodeAlloc>::__node_type* std::__detail::_Hashtable_alloc<_NodeAlloc>::_M_allocate_node(_Args&& ...) [with _Args = {const std::pair<const std::basic_string<char, std::char_traits<char>, std::allocator<char> >, mongocxx::v_noabi::client>&}; _NodeAlloc = std::allocator<std::__detail::_Hash_node<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>, true> >; std::__detail::_Hashtable_alloc<_NodeAlloc>::__node_type = std::__detail::_Hash_node<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>, true>]’\n/usr/include/c++/7/bits/hashtable_policy.h:182:58: required from ‘std::__detail::_AllocNode<_NodeAlloc>::__node_type* std::__detail::_AllocNode<_NodeAlloc>::operator()(_Arg&&) const [with _Arg = const std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>&; _NodeAlloc = std::allocator<std::__detail::_Hash_node<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>, true> >; std::__detail::_AllocNode<_NodeAlloc>::__node_type = std::__detail::_Hash_node<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>, true>]’\n/usr/include/c++/7/bits/hashtable.h:1818:18: required from ‘std::pair<typename std::__detail::_Hashtable_base<_Key, _Value, _ExtractKey, _Equal, _H1, _H2, _Hash, _Traits>::iterator, bool> std::_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal, _H1, _H2, _Hash, _RehashPolicy, _Traits>::_M_insert(_Arg&&, const _NodeGenerator&, std::true_type) [with _Arg = const std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>&; _NodeGenerator = std::__detail::_AllocNode<std::allocator<std::__detail::_Hash_node<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>, true> > >; _Key = std::basic_string<char>; _Value = std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>; _Alloc = std::allocator<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client> >; _ExtractKey = std::__detail::_Select1st; _Equal = 
std::equal_to<std::basic_string<char> >; _H1 = std::hash<std::basic_string<char> >; _H2 = std::__detail::_Mod_range_hashing; _Hash = std::__detail::_Default_ranged_hash; _RehashPolicy = std::__detail::_Prime_rehash_policy; _Traits = std::__detail::_Hashtable_traits<true, false, true>; typename std::__detail::_Hashtable_base<_Key, _Value, _ExtractKey, _Equal, _H1, _H2, _Hash, _Traits>::iterator = std::__detail::_Node_iterator<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>, false, true>; std::true_type = std::integral_constant<bool, true>]’\n/usr/include/c++/7/bits/hashtable_policy.h:843:55: required from ‘std::__detail::_Insert_base<_Key, _Value, _Alloc, _ExtractKey, _Equal, _H1, _H2, _Hash, _RehashPolicy, _Traits>::__ireturn_type std::__detail::_Insert_base<_Key, _Value, _Alloc, _ExtractKey, _Equal, _H1, _H2, _Hash, _RehashPolicy, _Traits>::insert(const value_type&) [with _Key = std::basic_string<char>; _Value = std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>; _Alloc = std::allocator<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client> >; _ExtractKey = std::__detail::_Select1st; _Equal = std::equal_to<std::basic_string<char> >; _H1 = std::hash<std::basic_string<char> >; _H2 = std::__detail::_Mod_range_hashing; _Hash = std::__detail::_Default_ranged_hash; _RehashPolicy = std::__detail::_Prime_rehash_policy; _Traits = std::__detail::_Hashtable_traits<true, false, true>; std::__detail::_Insert_base<_Key, _Value, _Alloc, _ExtractKey, _Equal, _H1, _H2, _Hash, _RehashPolicy, _Traits>::__ireturn_type = std::pair<std::__detail::_Node_iterator<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>, false, true>, bool>; std::__detail::_Insert_base<_Key, _Value, _Alloc, _ExtractKey, _Equal, _H1, _H2, _Hash, _RehashPolicy, _Traits>::value_type = std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>]’\n/usr/include/c++/7/bits/unordered_map.h:579:31: required from ‘std::pair<typename std::_Hashtable<_Key, std::pair<const _Key, _Tp>, _Alloc, std::__detail::_Select1st, _Pred, _Hash, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<std::__not_<std::__and_<std::__is_fast_hash<_Hash>, std::__detail::__is_noexcept_hash<_Key, _Hash> > >::value, false, true> >::iterator, bool> std::unordered_map<_Key, _Tp, _Hash, _Pred, _Alloc>::insert(const value_type&) [with _Key = std::basic_string<char>; _Tp = mongocxx::v_noabi::client; _Hash = std::hash<std::basic_string<char> >; _Pred = std::equal_to<std::basic_string<char> >; _Alloc = std::allocator<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client> >; typename std::_Hashtable<_Key, std::pair<const _Key, _Tp>, _Alloc, std::__detail::_Select1st, _Pred, _Hash, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<std::__not_<std::__and_<std::__is_fast_hash<_Hash>, std::__detail::__is_noexcept_hash<_Key, _Hash> > >::value, false, true> >::iterator = std::__detail::_Node_iterator<std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>, false, true>; std::unordered_map<_Key, _Tp, _Hash, _Pred, _Alloc>::value_type = std::pair<const std::basic_string<char>, mongocxx::v_noabi::client>]’\n/home/lstorino/psrplot/src/mongo.cpp:90:73: required from here\n/usr/include/c++/7/ext/new_allocator.h:136:4: error: use of deleted function ‘std::pair<_T1, _T2>::pair(const std::pair<_T1, _T2>&) [with _T1 = 
const std::basic_string<char>; _T2 = mongocxx::v_noabi::client]’\n { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn file included from /usr/include/c++/7/utility:70:0,\n from /usr/include/c++/7/tuple:38,\n from /usr/include/c++/7/mutex:38,\n from /home/lstorino/psrplot/src/mongo.h:6,\n from /home/lstorino/psrplot/src/mongo.cpp:1:\n/usr/include/c++/7/bits/stl_pair.h:292:17: note: ‘std::pair<_T1, _T2>::pair(const std::pair<_T1, _T2>&) [with _T1 = const std::basic_string<char>; _T2 = mongocxx::v_noabi::client]’ is implicitly deleted because the default definition would be ill-formed:\n constexpr pair(const pair&) = default;\n ^~~~\n/usr/include/c++/7/bits/stl_pair.h:292:17: error: use of deleted function ‘mongocxx::v_noabi::client::client(const mongocxx::v_noabi::client&)’\nIn file included from /home/lstorino/psrplot/src/mongo.h:10:0,\n from /home/lstorino/psrplot/src/mongo.cpp:1:\n/home/lstorino/psrplot/lib/mongo-cxx-driver/src/mongocxx/client.hpp:58:20: note: ‘mongocxx::v_noabi::client::client(const mongocxx::v_noabi::client&)’ is implicitly declared as deleted because ‘mongocxx::v_noabi::client’ declares a move constructor or move assignment operator\n class MONGOCXX_API client {\n ^~~~~~\n",
"text": "I’ve compiled my application with gcc 9.4.0 on Ubuntu 20.04 successfully. But with gcc 7.2.1 on Amazon Linux 2 it logs the following error:Mongo-cxx-driver version: 3.6.5I need to compile it on this specific envirioment. What could possibly cause this error?",
"username": "Lucas_Bezerra_Storino"
},
{
"code": "",
"text": "Hi @Lucas_Bezerra_Storino.\nCan you try compiling with the same gcc version on Amazon Linux 2?\nWhat is the C++ standard in use?\nCan you also share the code snippet where this error originates (seems like you’re trying to copy a client object)?",
"username": "Rishabh_Bisht"
},
{
"code": "mongocxx::collection MONGO::getCollection(std::string uri, std::string db, std::string collection) {\n\tauto instance = MONGO::getInstance();\n\n\tif (instance->clients.find(uri) == instance->clients.end()) {\n\t\tinstance->clients.insert({ uri, mongocxx::client(mongocxx::uri(uri)) });\n\t}\n\n\tmongocxx::database database = instance->clients.at(uri)[db];\n\tif (!database) {\n\t\tthrow std::runtime_error(\"Database \" + db + \" not found in \" + uri);\n\t}\n\n\tbool has_collection = database.has_collection(collection);\n\tif (!has_collection) {\n\t\tthrow std::runtime_error(\"Collection \" + collection + \" not found in \" + uri + \"/\" + db);\n\t}\n\n\treturn database[collection];\n}\n",
"text": "Hi! Thanks for the reply and sorry for the delay.\nThe C++ standard is C++17Here is the code snippet",
"username": "Lucas_Bezerra_Storino"
},
{
"code": "...\nprivate:\n\tstatic std::unique_ptr<MONGO> instance;\n\tstatic std::once_flag once_flag;\n\n\tstd::unordered_map<std::string, mongocxx::client> clients;\n\t\n\tstatic MONGO* getInstance() {\n\t\tstd::call_once(once_flag, [] { instance.reset(new MONGO); });\n\t\treturn instance.get();\n\t}\n\n\tstatic mongocxx::collection getCollection(std::string uri, std::string db, std::string collection);\n",
"text": "On header file, in class MONGO",
"username": "Lucas_Bezerra_Storino"
}
] | Error: use of deleted function ‘mongocxx::v_noabi::client::client(const mongocxx::v_noabi::client&) | 2023-04-19T20:18:18.439Z | Error: use of deleted function ‘mongocxx::v_noabi::client::client(const mongocxx::v_noabi::client&) | 1,026 |
null | [
"aggregation",
"node-js",
"serverless"
] | [
{
"code": "_id: ObjectId('641c689e0974d4617351897f'),\nname: 'Bob',\nemail: '[email protected]',\nuser_group_ids: [\n 0: 630ccd1a3a2da7d625cc7ccf\n]\n_id: ObjectId('630ccd1a3a2da7d625cc7ccf'),\nname: 'Editor',\naccess: [\n 0: {\n accessTargetId: 6443fc97748be7952dab922b\n }\n]\n_id: ObjectId('6443fc97748be7952dab922b'),\nname: 'posts',\nurl: '/posts'\ndb.users.aggregate([\n\t{\n\t\t$match: {\n\t\t\t_id: ObjectId.createFromHexString('630ccd943a2da7d625cc7cd4'),\n\t\t},\n\t},\n\t{\n\t\t$lookup: {\n\t\t\tfrom: 'user_groups',\n\t\t\tlocalField: 'user_group_ids',\n\t\t\tforeignField: '_id',\n\t\t\tlet: { userGroupId: '$access' },\n\t\t\tpipeline: [\n\t\t\t\t{\n\t\t\t\t\t$lookup: {\n\t\t\t\t\t\tfrom: 'access_targets',\n\t\t\t\t\t\tlocalField: 'userGroupId.accessTargetId',\n\t\t\t\t\t\tforeignField: '_id',\n\t\t\t\t\t\tas: 'access_targets_out',\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t],\n\t\t\tas: 'user_groups',\n\t\t},\n\t}\n]);\n[\n {\n \"name\": \"Bob\",\n \"email\": \"[email protected]\",\n \"user_group_ids\": [\n {\n \"$oid\": \"630ccd1a3a2da7d625cc7ccf\"\n }\n ],\n \"user_groups\": [\n {\n \"_id\": {\n \"$oid\": \"630ccd1a3a2da7d625cc7ccf\"\n },\n \"name\": \"Editor\",\n \"access\": [\n {\n \"accessTargetId\": {\n \"$oid\": \"6443fc84748be7952dab922a\"\n },\n \"accessRightId\": {\n \"$oid\": \"644635c08de56feeeb600baa\"\n }\n }\n ],\n \"access_targets_out\": []\n }\n ]\n }\n]\n",
"text": "I am working on a fairly big sveltekit project and use MongoDB for NodeJs. I have a serverless Atlas project. I try to get user data with the related user group data and with the (user group) related access data:Example data:users collection:user_groups collection:access_tragets collection:This is my aggregation:And I get this:Why is the ‘access_targets_out’ array empty?The lookup and pipeline documentation states that nested lookups in pipelines are possible.If I uselocalField: ‘$$userGroupId.accessTargetId’,which should be correct accoding to the documentation, I get an error message:FieldPath field names may not start with ‘$’. Consider using $getField or $setField.Any help is pretty much appreciated.",
"username": "Frank_Herzog"
},
{
"code": "let: { userGroupId: '$access' },let: { userGroupId: '$access' },",
"text": "let: { userGroupId: '$access' },Hi does this line works?\nlet: { userGroupId: '$access' },can you project that field and makesure its available and is performing as you expected?",
"username": "Joby_Joseph"
},
{
"code": "let: { userGroupId: '$access' },db.users.aggregate([\n {\n $lookup: {\n from: \"user_groups\",\n localField: \"user_group_ids\",\n foreignField: \"_id\",\n pipeline: [\n {\n $lookup: {\n from: \"access_targets\",\n localField: \"access.accessTargetId\",\n foreignField: \"_id\",\n as: \"access_targets_out\",\n },\n },\n ],\n as: \"user_groups\",\n },\n },\n])\n[\n {\n _id: ObjectId(\"6450e3514c46644e4df1b92e\"),\n name: 'Bob',\n email: '[email protected]',\n user_group_ids: [ ObjectId(\"6450e3b54c46644e4df1b930\") ],\n user_groups: [\n {\n _id: ObjectId(\"6450e3b54c46644e4df1b930\"),\n name: 'Editor',\n access: [ { accessTargetId: ObjectId(\"6450e5104c46644e4df1b933\") } ],\n access_targets_out: [\n {\n _id: ObjectId(\"6450e5104c46644e4df1b933\"),\n name: 'Editor',\n access: {\n accessTargetId: ObjectId(\"6450f2584c46644e4df1b93b\"),\n accessRightId: ObjectId(\"644635c08de56feeeb600baa\")\n }\n }\n ]\n }\n ]\n }\n]\n",
"text": "Hello @Frank_Herzog.Welcome to the MongoDB Community forums As mentioned by @Joby_Joseph, there might be an issue with the let: { userGroupId: '$access' }, line in your aggregation pipeline. However, based on the details you shared, I ran the following aggregation query:and it returned the expected output as follows:Hope this helps. Feel free to let us know if you have any further questions.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "pipelinepipeline",
"text": "KushagraHi Kushagra,great, that works. So, I don’t need to define a variable with ‘let’?\nThe documentations says:The pipeline cannot directly access the joined document fields. Instead, define variables for the joined document fields using the let option and then reference the variables in the pipeline stages.Thats a bit confusing. Maybe, I still don’t get around. Aggregation is very powerfull – and very complex to understand.Cheers\nFrank",
"username": "Frank_Herzog"
},
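{
"code": "// A hedged sketch of the equivalent query written with let + $expr, for comparison
// with the localField/foreignField form in the accepted answer above.
db.users.aggregate([
  {
    $lookup: {
      from: 'user_groups',
      localField: 'user_group_ids',
      foreignField: '_id',
      pipeline: [
        {
          $lookup: {
            from: 'access_targets',
            // expose the joined group's accessTargetId values to the inner pipeline
            let: { targetIds: '$access.accessTargetId' },
            pipeline: [
              { $match: { $expr: { $in: ['$_id', '$$targetIds'] } } }
            ],
            as: 'access_targets_out'
          }
        }
      ],
      as: 'user_groups'
    }
  }
])
",
"text": "To close the loop on the let question: let is only needed when the sub-pipeline has to reference fields of the joined document through $expr; with the concise localField/foreignField join used in the accepted answer (available in MongoDB 5.0+), it is not required. The sketch above shows the same nested lookup written with let and $expr; it should produce the same result, but treat it as an illustration rather than tested production code."
},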
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Nested lookup aggregation | 2023-05-01T16:22:43.402Z | Nested lookup aggregation | 3,315 |
null | [] | [
{
"code": "",
"text": "Out of nowhere, our users let us know that a significant part of our website had stopped working overnight. After a lot of unscheduled 3am work, we found this was because M0, M2 and M5 clusters were upgraded to a new MongoDB version without any warning or agreement from us. After talking to support, they informed us that automatic version upgrades happened to clusters without warning and these could not be disabled without upgrading to a dedicated cluster (+720% more dollar).These MongoDB version upgrades can break your app or website overnight, and means Atlas is no longer a viable solution for us. Without bringing mLab into it - but, you know, bringing mLab into it - we never had such issues on mLab.We did get a $10 coupon from support and they pointed us to these forums, so yeah let me know what you reckon.",
"username": "Daniel_S"
},
{
"code": "",
"text": "Hi Daniel,I very much regret that this happened to you. We clearly need to improve our communications process and evolve our product toward no more breaking changes period. I want to re-earn your trust even if it will be a long road to do so.You should have received numerous heads-up emails about the upgrade of the Atlas shared tier (M0/M2/M5) clusters to MongoDB 4.4. We will be in touch to investigate how that didn’t happen.It’s important to emphasize that the vast majority of applications do not rely on capability in MongoDB 4.2 that changed in 4.4: nevertheless it pains us that there are exceptions and we will make major investments in future to reduce the chance of that happening.-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hi Andrew,\nWe having the exact same crisis now in 2023.\nAtlas upgraded without any notice to v6.0.5 and broke\nour app completely (its not usable without db connection, even if all errors handled properly).It seems latest mongodb client v5.2.0 is not supporting the breaking changes:https://www.mongodb.com/docs/manual/legacy-opcodes/#op_querySame as Daniel we cannot change cluster version as we are using M2 instances.Please assist @Andrew_DavidsonThanks",
"username": "Elis"
},
{
"code": "",
"text": "Same happened today a query of mine was working perfectly fine. But today it stopped working and it broked many parts of the application.",
"username": "M_khaziq"
},
{
"code": "",
"text": "Hi Elis, Khaziq_N_A,We would like to apologize for the inconvenience this caused — it is certainly is not the experience we strive for and we will make sure to reduce the likelihood of this happening in the future.\nPlease reach out via the Chat in the Atlas UI or through support and we will be sure to assist.",
"username": "Alek_Antoniewicz"
},
{
"code": "",
"text": "Same happened here. We couldn’t find a solution. @Elis did you find a solution?Thanks in advance.",
"username": "Jose_Pereira1"
},
{
"code": "",
"text": "Two years later, the same has occurred again - MongoDB has broken my project after forcing an update on my production database under the hood.Without doubt, Atlas is not a stable or viable solution for any production project. Can anyone recommend alternatives, similar to mLab before it was bought out and taken down by MongoDB Atlas?",
"username": "Daniel_S"
}
] | How can Atlas be a stable or viable solution for any project when it forces random and breaking version changes on you overnight? | 2021-01-28T18:35:48.392Z | How can Atlas be a stable or viable solution for any project when it forces random and breaking version changes on you overnight? | 2,100 |
[
"replication",
"database-tools",
"backup"
] | [
{
"code": "",
"text": "We have one replica set and one standalone. We want to take a mongodump from replica set and restore at standalone. The dump operation was successful, but I am getting the following error while restoring.\nHow we can solve this ?\n\nScreenshot 2023-04-25 100630996×42 2.8 KB\n",
"username": "Ayberk_Cengiz"
},
{
"code": "",
"text": "How did you “make the dump” ? (eg. what command used)",
"username": "Kobe_W"
},
{
"code": "",
"text": "(post deleted by author)",
"username": "Ayberk_Cengiz"
},
{
"code": "",
"text": "mongodump --port 27018 -u admin -p “aydvbdvahvd” --authenticationDatabase admin --db catalog -c test_backup --out /Products/mongodb/test_backup_dump/",
"username": "Ayberk_Cengiz"
},
{
"code": "",
"text": "The error looks like the server is checking if current node is a primary which shouldn’t happen as this is a standalone instance.you can double check if this server is indeed running as a standalone machine. In addition, you can try mongoexport instead. This is plain text file and it’s easier to troubleshoot in case of errors. (maybe in the mongodump result file, something related to replica set is there? i don’t know)",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hello Kobe,Problem solved. The cause of the problem is that you have incorrectly configured the replica field in the conf file. After fixing the conf file, we tried the reset operation again. It worked flawlessly. Thank you very much for your support.Kind regards\nAyberk",
"username": "Ayberk_Cengiz"
}
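{
"code": "# mongod.conf on the standalone restore target - a hedged illustration only.
# A standalone must not carry a replication block; one left over from a
# replica-set template makes the node behave like an uninitialized member,
# and mongorestore then fails. Remove or comment it out and restart mongod:
#replication:
#  replSetName: rs0        # placeholder name, whatever was mistakenly configured

storage:
  dbPath: /var/lib/mongo   # example path
net:
  port: 27017
",
"text": "For readers hitting the same error, this is one plausible shape of the misconfiguration described above: the target's mongod.conf still contained a replication section even though the server was meant to run standalone. The replica set name, path and port in the sketch are placeholders; the actual values depend on your deployment."
}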
] | Mongo Restore Operation Failed | 2023-04-25T07:09:15.225Z | Mongo Restore Operation Failed | 818 |
|
null | [
"cxx"
] | [
{
"code": "[00:03:42] /wrkdirs/usr/ports/databases/mongodb70/work/mongo-r7.0.0-alpha/buildscripts/scons.py\t-C /wrkdirs/usr/ports/databases/mongodb70/work/mongo-r7.0.0-alpha --cxx-std=17 --disable-warnings-as-errors --libc++ --runtime-hardening=on --use-system-icu --use-system-libunwind --use-system-pcre --use-system-snappy --use-system-stemmer --use-system-yaml --use-system-zlib --use-system-zstd -j2 AR=llvm-ar MONGO_VERSION=7.0.0-alpha VERBOSE=on --lto=on --use-sasl-client --ssl CC=\"cc\" CCFLAGS=\"-O2 -pipe -fstack-protector-strong -fno-strict-aliasing \" CPPPATH=\"/usr/local/include\" CXX=\"c++\" CXXFLAGS=\"-O2 -pipe -fstack-protector-strong -fno-strict-aliasing \" LIBPATH=\"/usr/local/lib\" LINKFLAGS=\" -fstack-protector-strong \" PKGCONFIGDIR=\"\" PREFIX=\"/usr/local\" destdir=/wrkdirs/usr/ports/databases/mongodb70/work/stage DESTDIR=/wrkdirs/usr/ports/databases/mongodb70/work/stage\n[00:03:43] scons: Entering directory `/wrkdirs/usr/ports/databases/mongodb70/work/mongo-r7.0.0-alpha'\n[00:03:43] scons: Reading SConscript files ...\n[00:03:44] ModuleNotFoundError: No module named 'mongo_tooling_metrics':\n[00:03:44] File \"/wrkdirs/usr/ports/databases/mongodb70/work/mongo-r7.0.0-alpha/SConstruct\", line 26:\n[00:03:44] from mongo_tooling_metrics.client import get_mongo_metrics_client\n[00:03:44] *** Error code 2\n[00:03:44] \n[00:03:44] Stop.\n[00:03:44] make: stopped in /usr/ports/databases/mongodb70\n",
"text": "Hi,I’m trying to build MongoDB 7.0-alpha on FreeBSD. (Maintaining ports for 4.2 - 6.0)\nThe build immediately crashes with:\n“ModuleNotFoundError: No module named ‘mongo_tooling_metrics’”Where does this symbol come from? I don’t find it in the source or in the mongodb github repos.Regards,\nRonald.",
"username": "R_K"
},
{
"code": "",
"text": "I see the same issue with 7.0.0-RC0.",
"username": "R_K"
},
{
"code": "",
"text": "I commented out the lines in SConstruct which uses this feature and now it builds fine.",
"username": "R_K"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is mongo_tooling_metrics in 7.0-alpha? | 2023-02-14T19:55:15.346Z | What is mongo_tooling_metrics in 7.0-alpha? | 1,356 |
null | [
"queries",
"crud",
"indexes"
] | [
{
"code": "db.test.insertMany([\n { items: [{ a: 2, b: 2 }] },\n { items: [{ a: 1, b: 7 }] },\n { items: [{ a: 2, b: 5 }] },\n { items: [{ a: 1, b: 2 }] },\n]);\n\ndb.test.createIndex({ 'items.a': 1, 'items.b': 1 });\n\ndb.test.find({\n items: {\n $elemMatch: {\n a: { $gt: 0 },\n b: { $gt: 0 }\n }\n }\n});\n\n[\n {\n _id: ObjectId(\"642658957af10b1de797d5ef\"),\n items: [ { a: 1, b: 2 } ]\n },\n {\n _id: ObjectId(\"642658957af10b1de797d5ed\"),\n items: [ { a: 1, b: 7 } ]\n },\n {\n _id: ObjectId(\"642658957af10b1de797d5ec\"),\n items: [ { a: 2, b: 2 } ]\n },\n {\n _id: ObjectId(\"642658957af10b1de797d5ee\"),\n items: [ { a: 2, b: 5 } ]\n }\n]\n.sort({ 'items.a': 1 })\n.sort({ 'items.b': 1 })\n.sort({ 'items.a': 1, 'items.b': 1 })\n",
"text": "Hi there,Say I have an array of items in a collection, and I index two properties in the items, I can do queries on the properties and it will use the correct compound index bounds:The documents will also be returned in ascending order of the indexes:When I specify a sort, it will always do an in-memory sort though. Is there a way to sort the results with use of the index? Or, if not, is there a way to return the results in naturally reversed order for the index?I have tried some of the following:If this is not possible, I know I can just split the items array off into a separate collection and index on it, but I just would like to know if this sort of sorting is possible before I do this.Many thanks!",
"username": "Joseph_Dunne"
},
{
"code": "sort()db.test.find({ items: { $elemMatch: { a: { $gt: 0 }, b: { $gt: 0 } } } }).sort({'items.b':1})\n{items.a:1,items.b:1}items.a{a:{$gt:0}}items.b winningPlan: {\n stage: 'SORT',\n sortPattern: { 'items.b': 1 }\n ...\n{a:1,b:1}sort({a:1,b:1})sort({a:-1,b:-1})sort({a:1,b:-1})sort({'items.a':1}db.test.find({ items: { $elemMatch: { a: { $gt: 0 }, b: { $gt: 0 } } } }).sort({'items.a':1})\nsort()winningPlan: {\n stage: 'FETCH',\n filter: {\n ...\nrejectedPlans: [\n {\n stage: 'SORT',\n sortPattern: { 'items.a': 1 },\n ...\nsort({'items.a':-1,'items.b':-1})db.test.find({ items: { $elemMatch: { a: { $gt: 0 }, b: { $gt: 0 } } } }).sort({'items.a':-1,'items.b':-1})\n[\n {\n _id: ObjectId(\"64382e81aa700bea8961598e\"),\n items: [ { a: 2, b: 5 } ]\n },\n {\n _id: ObjectId(\"64382e81aa700bea8961598c\"),\n items: [ { a: 2, b: 2 } ]\n },\n {\n _id: ObjectId(\"64382e81aa700bea8961598d\"),\n items: [ { a: 1, b: 7 } ]\n },\n {\n _id: ObjectId(\"64382e81aa700bea8961598f\"),\n items: [ { a: 1, b: 2 } ]\n }\n]\n",
"text": "Hello @Joseph_Dunne,Welcome to the MongoDB Community forums When I specify a sort, it will always do an in-memory sort though. Is there a way to sort the results with the use of the index?In short, it’s not guaranteed that specifying the sort() method in MongoDB will always result in an in-memory sort.To elaborate, there are certain conditions when MongoDB cannot use the sorted nature of the index and has to perform an in-memory SORT stage. This happens when the query cannot use the “index prefix,” which means that the index cannot guarantee the sorting order of the returned documents. In this situation, MongoDB has to perform an in-memory sort to return the results in the desired order.For example:In the query above, the index {items.a:1,items.b:1} can be used to match documents having items.a greater than 0 for the {a:{$gt:0}} portion of the query.However, there is no guarantee that the returned documents are sorted in terms of items.b.Therefore, MongoDB has no choice but to perform an in-memory sort. The explain() output of this query will have a SORT stage.Whereas in another scenario MongoDB can use the sorted nature of the index if the query specifies sort keys that match the order of the index and the same ordering as the index.For example:the index (i.e. the index {a:1,b:1} can be used for sort({a:1,b:1}) or sort({a:-1,b:-1}) but not sort({a:1,b:-1}) and sort({'items.a':1})Here, in this caseMongoDB can guarantee that the returned documents are sorted in terms of the specified key. This means that the explain() output of the query will not have a SORT stage and the sort() is essentially free.Or, if not, is there a way to return the results in naturally reversed order for the index?You can do it by specifying the sort({'items.a':-1,'items.b':-1}) and it will not perform an in-memory SORT.It will return the following output (reversed order):I hope it clarifies your doubt. Let us know if you have any further questions.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "{ 'items.a': 1 }\n{ 'items.a': 1, 'items.b': 1 }\n",
"text": "Hi there,Many thanks for your response. I have tried again and it is now sorting using the index when I specify one of the following sorts:I recently updated to Mongo version 6, whereas I don’t recall this happening in version 4 and 5. Do you know if this a new feature in version 6?Many thanks,",
"username": "Joseph_Dunne"
}
] | Sorting with $elemMatch on array indexes | 2023-03-31T04:03:05.224Z | Sorting with $elemMatch on array indexes | 1,258 |
null | [] | [
{
"code": " {\n \"_id\": \"59d400b6d8c987b0196efe50\",\n \"name\": \"Natura\",\n \"domains\": [\n \"natura.net\",\n \"natura.com\"\n ]\n }\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"domains\": {\n \"type\": \"string\"\n }\n }\n }\n}\n{\n $search: {\n index: 'companyDomains',\n text: {\n query: \"natura\",\n path: \"domains\",\n fuzzy: {},\n },\n },\n}\n[\n {\n \"_id\": \"5f6210981923bf120bdac7b7\",\n \"name\": \"Laura\",\n \"domains\": [\n \"laura-br.com\"\n ]\n }\n]\ndomains: /natura/i",
"text": "Hi, I’m also having issues using Atlas Search for fields that are array of strings…I have a very simple index and it won’t do even basic search that a regex would.For example, I have in my collection around 32.000 documents on this format:I made a very simple Atlas Search index like this:And even with a very basic search like this, it won’t return correct results:Search:Result:Which is very weird, because the only result returned has almost nothing to do with the search string.If I make a simple regex $match stage with domains: /natura/i, I get 14 results!I’m still trying to understand what’s the issue with Atlas Search and array of strings…",
"username": "Rafael_Levy"
},
{
"code": "naturanatura.netnatura.com\n{\n \"_id\":\"59d400b6d8c987b0196efe50\",\n \"name\":\"Natura\",\n \"domains\":[\"natura.net\",\"natura.com\"]\n},\n\n{\n \"_id\":\"5f6210981923bf120bdac7b7\",\n \"name\":\"Laura\",\n \"domains\":[\"laura-br.com\"]\n}\n{\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"domains\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n{\n index: 'default',\n autocomplete: {\n query: 'natura',\n path: 'domains'\n }\nnatura.netnatura.com",
"text": "Hey @Rafael_Levy,Welcome to the MongoDB Community Forums! Since your query contains natura and you want Atlas Search to return results that include names like natura.net, and natura.com, I would recommend trying to use autocomplete instead of Text Search Operator to see if it suits your use case / requirements.I tried to reproduce this on my end as well to confirm this. I inserted the following documents:This is my index definiton:Then, when I used the following search query:it only returned the document having domains natura.net and natura.com and then the laura one.You can read more about setting up and using autocomplete from the documentation: how to Index fields for AutocompleteHope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "@Rafael_Levy is your search index using the standard anaylzer? If so then you will find that “natura.com” gets ‘tokenized’ to “natura.com”. Whereas if you use the ‘simple’ analyzer it would get tokenized to “natura” and “com”. This would allow you to search on “natura”.You can read more about tokenization here: https://www.mongodb.com/docs/atlas/atlas-search/analyzers/Autocomplete will work in this case but you can read about other advanced options, such as tokenizing email addresses, here: https://www.mongodb.com/docs/atlas/atlas-search/analyzers/tokenizers/#uaxurlemail",
"username": "Junderwood"
}
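If sticking with the text operator is preferred over autocomplete, one option along the lines of the reply above is to change the analyzer used for the domains field so that “natura.com” is split into “natura” and “com”. A hedged sketch of such an index definition (not verified against this exact dataset):

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "domains": {
        "type": "string",
        "analyzer": "lucene.simple",
        "searchAnalyzer": "lucene.simple"
      }
    }
  }
}
```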
] | Atlas Search for fields that are array of strings | 2023-04-27T20:19:10.373Z | Atlas Search for fields that are array of strings | 893 |
[
"compass",
"atlas"
] | [
{
"code": "{\n \"fragment\":\"test\",\n \"datasetId\":\"d00000000000000000000050\",\n \"revision\":\"LATEST\",\n \"file\":{\n \"$binary\":\"/9j/4AAQSkZJRgABAQEAAAAAAAD/4QBCRXhpZgAATU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAAkAAAAMAAAABAAAAAEABAAEAAAABAAAAAAAAAAAAAP/bAEMACwkJBwkJBwkJCQkLCQkJCQkJCwkLCwwLCwsMDRAMEQ4NDgwSGRIlGh0lHRkfHCkpFiU3NTYaKjI+LSkwGTshE//bAEMBBwgICwkLFQsLFSwdGR0sLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLCwsLP/AABEIALwAvAMBIgACEQEDEQH/xAAfAAABBQEBAQEBAQAAAAAAAAAAAQIDBAUGBwgJCgv/xAC1EAACAQMDAgQDBQUEBAAAAX0BAgMABBEFEiExQQYTUWEHInEUMoGRoQgjQrHBFVLR8CQzYnKCCQoWFxgZGiUmJygpKjQ1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4eLj5OXm5+jp6vHy8/T19vf4+fr/xAAfAQADAQEBAQEBAQEBAAAAAAAAAQIDBAUGBwgJCgv/xAC1EQACAQIEBAMEBwUEBAABAncAAQIDEQQFITEGEkFRB2FxEyIygQgUQpGhscEJIzNS8BVictEKFiQ04SXxFxgZGiYnKCkqNTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqCg4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2dri4+Tl5ufo6ery8/T19vf4+fr/2gAMAwEAAhEDEQA/AKGKTbUu2jbXIdNxgFKBTttSKtWTcYoqcLSBakC0CuPVflqVRSIKmRapCBRUw20gp6LWo7D6o3+q2+l+S80bvvbaqpV/Fcv4hTzLq2X+FV3VMpaDjG7OpiNpqFvDcwcLImeWzx3H1pv2fa26sXRZHtLGb/ru2z/vn0+uK0ob/wAxk8z5V2/7tbQXukVEkyxsWmmNaniaGZflqTan92lYmxBAqqz/AO61OYVMqorf8BppFZSOujsQEU0ipmWoiKg6SI0w1JtpHFIZCabmpNtNxSNLmYaSpxbP/epwtn/vVB5tiAU8VJ9mf+9Txbf7VNBYiAqQCni2/wBqpVt/9qqHYjSrApVt/wDaqdbf5fvVSERqtWVWhIKsCKrJIgtc7riJ5yN/0y/2f71dT5e2uZ8Qf66GJfm3L/D9771Jpl09yKxkRbWFdrNuaVv9n738NX3j3Q+fCu5tu7/gX3ayrY/vPLb5VXav+y25ei+hPFa9k+5bmDb/ALSf7v8AkEV1U+wqqvqT2T/LDH/E22tFV+WsyGSZf4du1tv3f4fWtAXNtGqbpF+b/vrdUtGG5IU/ippWmi7haSGNf4t3/oO6pCKykdVHREZWo2Wp8VGayN7lcrUbirBG3/eqMikUmV8UmKkYU2gu43bQFqxto20jiK4WpAlSBaXFMRHtqQLTgtOC0CbBVqULQoqZFq0SKi1OsdCCrKCrIbKs2yGN5G+6qs3/AAGuFu5vtuoPIv7vcqqjbV+bcvG4N83ORXYazJ+78hW2+Z8sv+638qxTpkNx+7h3LF82+Td/qWVtwMf5dPQ1Dvc6qSsrmdZWieY63Hysu3Zt3fKy4/vdRnla2DIlvs2qu7bt3f3vm3UyW7tIW8uPazL8u77230P1OR+VZ5lmmkdvLaTb/d/vN23dj1q1It07lyfU4VX5vl27fu/M3zKWHFZwvYdzsq7lX5dzK3oMc9zn+tSz26TLMzL+9+78vyqvy/w//XrPtoPtc3kWrboIPllkb7vuF9ee9N3GoRSLWn6jc3GrWEawfKs/zt821V2nj9RXYFa57To7aPUrCOFf3sbMr/d/unmukApNEkRNNNSkVGy1IyIimEVMVphpNFIrlabipmXdSbKgsTFIRTwtLtoscTYw0BafilAoAaBUoWkAp4FXYBwFSotMUVMq00SSJU27bUaVDdzQ28M0kzKsW37zf7XSrFa7Kcwe7uN3ytFt2v8A7P4/drA129mtIUg+aOKOdd7L8rNu+bGfoRV2C6hjjhWT5tzN5rL/AM9JMqOO3Bx+FOvbSG/s5oJFZmZt25v70fQ/kBUbnYla3Y5vT2e6j8xmVYml3bvvSMyoc/lwKadZtLST7MrfLBuV2/2vvEsfU/1NWL1E0jw++3/XttWJv+msuI+npwxrk7ZIY1+13n7zc26KP+8zd6taFuR1Y1qGZf8Aj2uZIm+/tjb5v8itbTNQ0i4jeC12xyqrfu2XZJ8vt96uQuNTvoY7ORWsfKuYmZIYJFeSFVO0CUL90n0PNNtLr7RqFhPH8tz58Syr91ZF3f8A66v5kt33VjrbVXXXrBV2r5dz+9ZvvN5mcf1/SupYfM/+9XDi7SHVtyt5jLeQf7qqrbWP1yB+ddzUMh6EePmppWpcUw0mgRER/wCPUw1OajxWYyMrTdtTEVHigBtKKUikxQcoUYp2KAKoEAFPAoAp4FBQ5FqcCmKKkFUQJJKkK7m+7/erCmmfVWddyrbRfN/dZpP4dwb0/pU+rXsKyQ2Xzbm/eu3y7Vj/ANr69KzdOuEjuJrKT+8zW6syszRt/d/XbTSN6cdLmfPa32mTefHumgVvm3bW3LJjP5Vct9XRr6G2VW8qdWbdt+75ff5vfIrZLwrvWT7rL/q/vL97+EKu4VClnbTTJIqxfu2+Rl2+Z349qOXU3UlbU5/xxHt02w2/da8+f/gMRx/WuN1AbWtl/hWJf/Qa9Z1fSodT02a0Zfm27om/uyR/dNebz2yNH9iuP3d5bNtRm/5aKvdfWravczXRmGZPlRVWtvw7bP51zqkny22mxM27+GSdvlVPwzn8qgtNAvru6S2VolX/AJazMy7Y4+5+Xv2UV1F7HaQ2dtpenr5ltaMstw27/XSN8u9j3Oc1ik76mu5mJInnQybWk3bZdzfKu3cG3/N1ycn8a9KT7teeRi2j3/N833tu7dtVV9F7cj869DiP7tP91a06mcgJplPNJTZIw0w09hTCKyKEpuKU0YqRDKSnEUgoOUUCnAUClAqgFAp4pBSrVoCRac8sMMbySMqqq7nZv7q00Vla9Ii2M3mfLB/y1Zf7vpRYIq7OT13VEbVIdQs9zQLEsTyMrbfM57N82MHr0qo+pJf+TJJ5Ec8cu2JlXdIq54PyrwKS7u9LvY/LWdYVjb7v/LRlXp5e7oPaq721t5fmW+7ymWLZMq7WVWbox7E+h4oO+KsrHSwahcyQp5nkLKvy7lZWXcv/AD0LfnWjp11Mu/7Q25d277sf3t3PM
fzHvt+XtXFJdTQ/N5jL8qsi+Wv+sZipK+mV+ozWnFf7mhWP5lVlb7zfN8v8Ttuz39qaZTV1Y9CSaFo/vf8AAv4a43xZBpyxvcqsUlysq/uV2szL33BelQy61NHH+5ZlVfl2su1V3dVXd646+9YdzdXM38O1W+Z/vfMvp95s1MqiQU6ViCC9u5G8tdtuq/8ALGD5WX+I7ivzVqR72Xay/N/G33V3N/e2/X+dZUUbw793yq33F+VWVfSuh05IbiPcrbv93+L5v++e9Y8zNktNTnngtreaZYZ55LmSJokZm3Luk7+1er2x/wBHtv8AdX/0HrXIS6ZDDDNc+Ws0rLtTd96Pap/xrrrI/wCh2bf3oIv/AEAVujlqkpppNPNIRVMyGEUmKUmmkVkWNNNpaTNADDSikNGKVzkHilpM0UXAUGpFNRg1IDVgSA1j64sM1v5EnzRN99d38P8Ate1a5Ncnr8cM29t237q/LtXdtY55pl01qRRW+hXa+Q1tFug2q6/d3KuMc+mahvUtlh/dx7dv7r+Lays3RkrBsotXt5nkjj3Lu/iZW3fw/dZelaw1KG9mS0mVdy/LKyszK347uKi52IynaZWddvmbfl3L/rPLyW6N6c1Ej7f36x/KrfJtkbb93b/F83HJrrFsbaT73y/7W7b9N1VZ9JRVm8tV/e//ABJwaLhbUxIp/tX3vur9/wD759dv0/WrKeTHC8kir5Ua7v8ACsiGV7eZ4G/vMu7+Fvm5NT3d1u2W0bL+8bbL/eVVw351zvVnVTasVZYpmhmv7hdy/wAEbfL8rNWt4Pud0j2zN/eZFb+Hndikubjbb7Y41mXbtePdtZl+lYUWorZ3kNzaw+T5bfPGv8XqK1iFaNtmeryw/K6/d3fxbv4f9o/jVyyZPs8Kr91VVU/3e1c6Nctr23RoZG/eL/d+78vIYetbOkybrdPm/h/9BY10nBNOxp4pDTs01qTIQwikZqU0w1myxDTMU4mmZ/2qgY0GlzTRQTSOQdS0zNOFAx4NP3UwGkLJ/wABqxEGoXiWlu8jSKv8Kf73+zWHFdQyQ+ZNuZdzbWZf129qx/Eer/aLq2trf5oIG3Oy/wATd/wqAXDzWMyqzNt+V1Xdtb/d96u9jopx0N+eS2ZXb5vmVfm3bfvdB8vWufv59O0r5oY908v3Y/7zL/ExqG0kuY49zbv9azRbpG8tdvzZb+LjrnvWEJJru88yZtzMzf7P3flAx2FQ9TW9i0dQ1y7uEijuZfNnZVSONo0j3N0XLf1anrrOtWkjwXErSeWzK25lbayt6r8prMztZ9395qa7bqhuw0Xri5e7aZmXbL/rf++uu3b+H51Wdrldk7f3a2TpEzWOm3sa7vMtlldf9n7v+FRwWfmSJBNt/i+X/gXIzSehaM2ytdRvbhPJZtzN97/Paupi8EuyvJeXe3/ZgVfvf8CrT0i2sbCN2Vdu377f7X+z7da1ftm63dl2/wC9u/h9a0p26k1LpnAXmlXemb2t7lvK/g/uyfL/ADruPB8rzaTZtJ8zfvVb/gLms3UFhuI3Vvvf7v8AtVqeFI0j03avy7Z51/8AH6rqTLVHSBqQ0maaaGzJICajJqQmoialsoQmmUFqbmoY0BNJTN1KDSOWw/NANJmo5GdVdl+ZlVtlFwsOnu7a0jeSaRVVa47U/Ec12rx26tHA3yptb5m/75qldx6/qN0/nRttVvkVWXy417ctToLVLTe0zLJOv93544V9c92q0bKNtWZ88LqqeYv72Rl/4Cq/w+1amlOnlvB8scTK3yqv3v4f978agkgmuJvm2srfKm5f4l7/AFxV63g8lkZf73+983anqXzoFtodtzGq/wAP8X8LSZ4x9Oa5s2j28yfd+82z/eVun45NdTO7/Iyr95dzsq7f89ayJhDNcWEDfMsk+19v+6F+X8QaWpUWupRbTZrvfLa7d38cO75lbvxUcWi6pNJDB5W1pJVX/Hb+FdNceGd372PUJY/L/i/5aL7eYvzVPFHbWELtHJPcSsrRPdXMjPJ5fcR+gzkYDVrZdUO3Ziahewwqmn2+7yrSDykZfu+Yq8/QDB/Ksie8tIbpPOVl/cM3/Au24/T+lRySPJJ8reWsm75W/i+YfpTPJe4tX8mJmn3Kr7tvyxs3Jz2yKzbuPYSDUrtrXUpG+aJm2xeX/D7ewx/M/wB6tOynh/s9/tDSrEzbUb/noy9eF7ZOPyqrqdilvDZ2kc6ws23fGv8AFJtOT8v4/pSSGaO3h3fvFWKKJGb5W3SOG+UfQUDuXp5fLj+X5otq/N/EytXReGPl099v/PzPXGXI3SfL8q/d2/3W2Bv6Gut8KM/2GZW+8s7f+gikmJs6MmkNNJpC1NmaAtTSaaxpCahjEzTM0MajzSLSAGnZpm6jdSOVjyaPvUwmlDUCsRNZ2zfw/wDfPy1Wk0q2/h+7/u/xVezS5qrgZL6bt/3fm/76q1HZp+53Kv8Ay1/8eXb/ACq7upwNX7Qmxl3unp9lfb/Cu5/91a45v9HvrORvuxy+b8395VOFX35r0V23LXLazo7yK88P3l+bb/teq1DdzaLsZ17q73C+RtaNfvP/ABbvT7tV0meRU2yMqqy/Kvzfw1lNJMsjrJ8rfdf/AHV/u1Mks3l7VXav96ldmlySaT5v4V3ffkb+Ftv3Vq9pUvkyQ/3d3zqrfM3u3vWaiuqu0m35vuL/AFqWyX99NJuZYo90rt/u9EX3JoGbd+qXGoQ/L/q/uN97btXnd7kgj8Kinj3NDGq7tqyy7f4dywnFRW94672Zd33v9rb8vFWbObddbZP+Wm7f/wACWncRTij8yN5GX5lZZf5qf5iuo8OHbDc/9df/AGWucib5nX/ZZf8Avn5h/Kt3w86NHef7y/8AAW20riOi3bqQtTd1ITVsuwE01moL1EWqBpCu9R7v9qmmmZoLJd1Jmmmm5qLnGPJozTC1JuouBLmgNUJanBqoCYPS76r7qXdSuFibdSOU2/NURZF+9Ru3f/E//FUrlWOf1bRPOZ7m3+WX+7/e96xIrby2fzP4fv8A+17LXek7qpXdnbTfw/N/eWi5epyEyP8AO33f/wBnpSAPHGkG7azbZZf97sn4D+dbMulzfeVvMVfm2/d3NWPdrNCyLJH80rMz/wCyu7aP5VQFiJfl/wCukqxf4/0qW2bbqW7/AKat/wChVXjl/fW0f/PPbv8A95vmNQ20r/aEb+83/oTf/XpgTzyeXcTbf4Z2b/gO7cP0rd8PHa2pL/daBk/3dp/+tXNz/wCsRv7yr/31HmM/yrd8Nv8ANef9cIl/75Y4/n+lA1udPu/ioL/LUQalY0zUUmoy1G7dTWNSMC1NzQabmgdiQtTSac1RmoOQDSE0lMagkeTShqiNLU3LSH7qA1RGlFK5RLmjdUIpy1NxkpamlqYfvU00mykiT+4tQSwQyN8yq3+9Uifepq0X1G0QtpFo0iMq7Wbd93+8y7aa3hX/AFMlvJ/Eu/d/stWkv+sSt21/1Kf8C/rXZRXM9TknNo47/hEruTZ+/iVd0rbtv8LNuq1baG+lTTSeZugk
iWJP73mK24/pmutT7qVT1P8A1cP/AF1b/wBANbuCSM6VRuoZYagmmj73/fNKa5j0gYf3aaKSg/wUhoTO2m7qGptRcs//2Q==\",\n \"$type\":\"00\"\n },\n \"filename\":\"test.jpeg\",\n \"length\":5299,\n \"sha1\":\"d73b81197a0f5e80f428b2b4d71e070b5ee2f07e\",\n \"uploadDate\":{\n \"$date\":\"2023-04-29T12:20:52.518Z\"\n }\n}\n",
"text": "I am using MongoDB cluster that has been deployed on MongoDB Atlas. I am encountering an issue. When I try to view documents of the collection which consist of Binary data, the MongoDB Atlas hangs and does not show any documents as can be seen in the image below\nMongoDB_Atlas_Error1775×536 17.3 KB\nThe structure of my documents is given below:I have added documents in this collection and I can view them in MongoDB compass but when I try to directly view them in the MongoDB Atlas, I am facing this issue.",
"username": "Usman_Sajid"
},
{
"code": "mongosh",
"text": "Hi @Usman_Sajid and welcome to MongoDB community forums!!I was able to successfully transfer the data provided above to my Atlas cluster. For your convenience, I have attached a screenshot for your reference.\nScreenshot 1945-02-11 at 12.36.36 PM1738×550 54.2 KB\n\nPlease consider the following troubleshooting steps:Regards\nAasawari",
"username": "Aasawari"
},
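One way to narrow down whether the problem lies with the data or with the Atlas UI is to fetch the same documents outside the browser. A small mongosh sketch — the database and collection names are placeholders:

```javascript
// Connect with mongosh using the Atlas connection string, then:
use myDatabase
db.myCollection.find({}, { file: 0 }).limit(5)  // metadata only, excluding the binary "file" field
db.myCollection.findOne()                       // one full document, including the Binary payload
```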
{
"code": "",
"text": "Thanks for your reply.yes, i can view the documents of all other collections in the MongoDB Atlas.I have also tried to connect mongosh with my mongo database cluster deployed on Atlas and I can view the documents of the same collection for which i get only “Loading documents” message in the browser and the documents do not load.I have also shared the Mongo Database cluster details.\n\nmongodb-details2243×502 34.5 KB\n",
"username": "Usman_Sajid"
},
{
"code": "",
"text": "Hi @Usman_SajidThank you sharing the details here.I would suggest you to contact the Atlas in-app chat support team for the assistance in this case.Regards\nAasawari",
"username": "Aasawari"
}
] | MongoDB Atlas just shows "Loading Documents" message when I try to view collection documents with binary data | 2023-04-29T23:14:05.731Z | MongoDB Atlas just shows “Loading Documents” message when I try to view collection documents with binary data | 843 |
|
[
"lebanon-mug"
] | [
{
"code": " Meeting ID: 896 5074 2632\n Passcode: 063037\nBackend & System Engineer | Blockchain engineer | Project Manager | Mentor | Community builder ",
"text": " Hello All, Due to a health emergency. The event has been postponed until further notice.The MongoDB User Group in Lebanon is pleased to invite you to join us in our first event in 2023. Exploring blockchain, it’s features, advantages, and how to build your own one with MongoDB.Event Type: OnlineLocation: Online via Zoom\nVideo Conferencing URLMeeting Credentials:Backend & System Engineer | Blockchain engineer | Project Manager | Mentor | Community builder \n1600×1200 400 KB\n2023-02-13T17:00:00Z→2023-02-13T19:00:00Z",
"username": "eliehannouch"
},
{
"code": "",
"text": "Hello amazing people,\nDue to an urgent health emergency, we are moving today’s workshop (Build Your Blockchain DB with MongoDB) till Monday 13 February from 7:00 PM —> 9:00 pm (EET).Thank you for your understanding,\nCan’t wait to e-meet you all on Monday.Best Regards,\nElie Hannouch",
"username": "eliehannouch"
},
{
"code": "",
"text": "Please take care Elie ",
"username": "Harshit"
},
{
"code": "",
"text": "Hey Everyone,\nThe event has been postponed until further notice. We will keep you updated on the new event date and timings. Sorry for the inconvenience ",
"username": "Harshit"
}
] | Lebanon MUG: Build your own Blockchain DB with MongoDB | 2023-01-24T16:30:39.396Z | Lebanon MUG: Build your own Blockchain DB with MongoDB | 2,632 |
|
null | [
"node-js",
"mongoose-odm",
"serverless"
] | [
{
"code": "",
"text": "Hello All,I am shifting from MongodbCloud atlas M10 cluster to Serverless Intance, My project has developed using mongooseJs,I have done all the setup in serverless instance, but when I am connecting to serverless instance from my node application, I am getting below error,MongoParseError: Load balancer mode requires driver version 4+Can someone please help me understand what exactly I need to upgrade to make the connection from my nodeJs application.Thanks",
"username": "Aliasgar_Bharmal"
},
{
"code": "",
"text": "Hi @Aliasgar_Bharmal,MongoParseError: Load balancer mode requires driver version 4+It seems like you are encountering an error while trying to connect to an Atlas Serverless instance with an outdated version of the Node.js driver.To connect to Atlas Serverless, you will need at least version 4.1.0 of the Node.js driver (as noted in the Minimum Driver Versions for Serverless Instances document). Additionally, if you are using Mongoose, you will need version 6.0 or newer, which depends on the 4.1.x MongoDB Node.js driver.Can you please provide more information about your setup, including the following:Best,\nKushagra",
"username": "Kushagra_Kesav"
}
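In practical terms that means checking which driver and Mongoose versions the project actually pulls in, and upgrading if they are below those minimums. A hedged sketch of the commands (the exact target version depends on the project):

```sh
# See the versions currently installed
npm ls mongodb mongoose

# Upgrade Mongoose to a 6.x release, which itself depends on the 4.1+ Node.js driver
npm install mongoose@6
```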
] | MongoDb Serverless instance connection with Mongoose Js | 2023-05-01T19:29:17.918Z | MongoDb Serverless instance connection with Mongoose Js | 1,124 |
null | [
"change-streams"
] | [
{
"code": "var options = new ChangeStreamOptions();\n options.FullDocument = ChangeStreamFullDocumentOption.UpdateLookup;\n options.FullDocumentBeforeChange = ChangeStreamFullDocumentBeforeChangeOption.WhenAvailable;\n using (var cursor = collection.Watch(options))\n {\n while (cursor.MoveNext())\n {\n //Sending E-mail code\n }\n }\n",
"text": "Hi Team,I am facing one issue with ChangeStream.\nI am using ChangeStream to send email alert for backend deletion from MongoDB collection, however for bulk deletion emails are triggering.\nPlease help me to fix this issue or do we have any other option instead of ChangeStream.Below is my code:Thanks,\nLalitha.C",
"username": "Lalitha_Chevuru"
},
{
"code": "",
"text": "Please help me to fix this issuewhat issue? are you saying you don’t want to get notified by bulk delete event?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Aplogies. Actual issue is mails are not triggering incase of bulk deletion.\nLets say if I delete 50 records from collection at a time, only 20 mails are triggering and query is not getting killed which leads to deadlock.My requirement : Need to send all 50 mails and query should get killed.\nplease note count depends on the number of records deleted, it can be more than 1000 also.I found there is a limitation for ChangeStream\n“Change stream response documents must adhere to the 16MB BSON document limit. Depending on the size of documents in the collection against which you open a change stream, notifications may fail if the resulting notification document exceeds the 16MB limit.”Please let me know any solution is there to solve this issue.Thanks,\nLalitha.C",
"username": "Lalitha_Chevuru"
},
{
"code": "updateDescriptionupdateDescriptionreplacereplace",
"text": "query should get killedwhat you mean by this?This question says in case of bulk writes, you will receive one notification for each of the write event. So you should be ideally notified 50 times unless the before-deleted document exceeds 16MB limit.In the exceed case, the doc doesn’t have any workaround, You will have to reduce the result size from the events.To limit the event size, you can:",
"username": "Kobe_W"
},
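As a concrete illustration of trimming what the change stream returns, the .NET driver lets you open the stream with an aggregation pipeline and control how much of each document is included. This is only a sketch under the assumption that only delete events are needed for the e-mail alerts; it is not a drop-in replacement for the code above:

```csharp
// Ask only for the events this workflow needs (deletes) and skip the pre-image lookup,
// so large documents are not copied into every notification.
var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>()
    .Match(change => change.OperationType == ChangeStreamOperationType.Delete);

var options = new ChangeStreamOptions
{
    FullDocumentBeforeChange = ChangeStreamFullDocumentBeforeChangeOption.Off
};

using (var cursor = collection.Watch(pipeline, options))
{
    while (cursor.MoveNext())
    {
        foreach (var change in cursor.Current)
        {
            // change.DocumentKey still identifies the deleted document for the e-mail
        }
    }
}
```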
{
"code": "",
"text": "Sure … Thank you Kobe.",
"username": "Lalitha_Chevuru"
},
{
"code": "public async Task RealtionalCollectionCollectionChange(CancellationToken cancellationToken)\n {\n var options = new ChangeStreamOptions\n { \n FullDocument = ChangeStreamFullDocumentOption.UpdateLookup,\n FullDocumentBeforeChange = ChangeStreamFullDocumentBeforeChangeOption.WhenAvailable\n }; \n string logHistory = string.Empty; \n\n using (var cursor = await collection.WatchAsync(options, cancellationToken))\n {\n while (await cursor.MoveNextAsync(cancellationToken))\n { \n if (cancellationToken.IsCancellationRequested)\n {\n break;\n }\n\n foreach (var change in cursor.Current)\n {\n if (change.OperationType == ChangeStreamOperationType.Invalidate)\n {\n _logger.LogWarning(\"Change stream cursor has been invalidated\");\n _createLogService.CreateLogs(\"Error\",\"Change stream cursor has been invalidated\");\n break;\n }\n\n var key = change.DocumentKey.GetValue(\"_id\").ToString();\n\n switch (change.OperationType)\n {\n case ChangeStreamOperationType.Insert:\n await InsertIntoHistoryCollection(change);\n await TriggerEmail(change);\n break;\n\n case ChangeStreamOperationType.Delete:\n _logger.LogInformation(\"{Key} has been deleted from Mongo DB\", key);\n logHistory = key + \" has been deleted from Mongo DB\";\n var filter = Builders<BsonDocument>.Filter.Eq(\"_id\", ObjectId.Parse(key.ToString()));\n var document = await collectionHistory.Find(filter).FirstOrDefaultAsync();\n\n try\n {\n await _mailService.SendEmail(change, document, logHistory);\n }\n catch (Exception ex)\n {\n _logger.LogError(ex, \"An error occurred while sending email for {Key} for operation type {OperationType}\", key, change.OperationType);\n _createLogService.CreateLogs(\"Error\", \"An error occurred while sending email for {Key} for operation type {OperationType}\");\n }\n break;\n }\n }\n }\n }\n}\n",
"text": "Hi Kobe,I need help optimizing my change stream code.\nIf possible please go through my code and suggest.\nMy main concern is, as we are using a cursor and connection to keep open to DB, it is using more memory and CPU utilization is becoming high because other applications are getting stuck.\nBelow is my code:Thanks,\nLalitha.C",
"username": "Lalitha_Chevuru"
}
] | Change Stream issue | 2023-04-27T17:02:17.085Z | Change Stream issue | 1,134 |
null | [
"dot-net",
"atlas-device-sync"
] | [
{
"code": "public class Pattern : RealmObject\n {\n\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; }\n\n public Guid UserId { get; set; }\n\n public DateTimeOffset DateCreated { get; set; }\n\n public DateTimeOffset DateUpdated { get; set; }\n\n public string? PatternNumber { get; set; } \n\n public string? Description { get; set; }\n\n public int? BustInInches { get; set; }\n\n public int? WaistInInches { get; set; }\n\n public ObjectId? DecadeTypeId { get; set; }\n\n public ObjectId? GarmentTypeId { get; set; }\n\n public bool IsFavourite { get; set; }\n\n public IList<PatternImage> PatternImages { get; }\n\n }\nFailed to transform received changeset: Schema mismatch: Link property 'PatternImages' in class 'Pattern' points to class 'PatternImage' on one side and to 'Pattern_PatternImages' on the other. (ProtocolErrorCode=201)\npublic class PatternImage : EmbeddedObject\n {\n\n public string StoragePath { get; set; }\n\n public string ImageFilePath { get; set; }\n\n public bool IsMainImage { get; set; }\n\n }\n",
"text": "Hi,A little bit of background. I have created an application which has taken the data from a SQL database and has added it into Mongo. From here I created a Realm based app using dotnet MAUI which has all worked fine until I wanted to add an array of class I called PatternImage as part of primary object called Pattern.I added this in as can be seen below:This is how I see this needing to be configured according to the documentation.I added a PatternImage and this adds fine and appears in the MongoDb without a problem but then disappears from the Realm and I’m getting the following error:Without ripping my code apart, I’m trying to understand what I need to do to fix this and am looking for suggestions.",
"username": "Chris_Boot1"
},
{
"code": "{parent_table_name}_{field_name}",
"text": "Hi, this is happening because of a divergence in the schemas between the backend (App Services UI) and the Realm Data Model. You have an “embedded object” that has a title of “PatternImage” in Realm but in AppServices the name is “Pattern_PatternImage”. This naming tells me that you defined the embedded object in the JSON Schema section for the App Services UI but did not give that object a “title” so we infer it to be the {parent_table_name}_{field_name}.Your best move forward is likely to add a “title” field of “PatternImage” to the embedded object’s schema in the JSON Schema. Note that this will be a “breaking” change and we will prompt you to terminate and re-enable sync.Let me know if this works!\nTyler",
"username": "Tyler_Kaye"
},
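For reference, the relevant fragment of the App Services JSON schema would look roughly like the sketch below once the embedded object is given a title. The property names are taken from the classes and error message above; the rest of the schema is omitted and the exact field casing may differ in a real app:

```json
{
  "title": "Pattern",
  "bsonType": "object",
  "properties": {
    "PatternImages": {
      "bsonType": "array",
      "items": {
        "title": "PatternImage",
        "bsonType": "object",
        "properties": {
          "StoragePath": { "bsonType": "string" },
          "ImageFilePath": { "bsonType": "string" },
          "IsMainImage": { "bsonType": "bool" }
        }
      }
    }
  }
}
```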
{
"code": "",
"text": "Excellent. Thank you so much, this is something I had on a previous project and effectively worked around it but nice to know how to address properly going forward!",
"username": "Chris_Boot1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Subset of data appearing in Mongo but not Realm (despite being added in Realm) | 2023-05-01T20:29:09.447Z | Subset of data appearing in Mongo but not Realm (despite being added in Realm) | 716 |
null | [
"node-js"
] | [
{
"code": "const { MongoClient } = require('mongodb');\n\n// or as an es module:\n// import { MongoClient } from 'mongodb'\n\n// Connection URL\nconst url = 'mongodb://localhost:27017';\nconst client = new MongoClient(url);\n\n// Database Name\nconst dbName = 'myProject';\n\nasync function main() {\n // Use connect method to connect to the server\n await client.connect();\n console.log('Connected successfully to server');\n const db = client.db(dbName);\n const collection = db.collection('documents');\n\n // the following code examples can be pasted here...\n\n return 'done.';\n}\n\nmain()\n .then(console.log)\n .catch(console.error)\n .finally(() => client.close());\n",
"text": "os: win10;\nnode: 18.12.1\nmongodb: “^4.11.0”Program which worked with node.js 16 doesn’t work with node.js 18.\nEven the simplest sample:breaks with :\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017\nat Timeout._onTimeout (D:\\TEST\\mongodb-test\\node_modules\\mongodb\\lib\\sdam\\topology.js:293:38)\nat listOnTimeout (node:internal/timers:564:17)\nat process.processTimers (node:internal/timers:507:7) {\nreason: TopologyDescription {\ntype: ‘Unknown’,\nservers: Map(1) { ‘localhost:27017’ => [ServerDescription] },\nstale: false,\ncompatible: true,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\nsetName: null,\nmaxElectionId: null,\nmaxSetVersion: null,\ncommonWireVersion: 0,\nlogicalSessionTimeoutMinutes: null\n},\ncode: undefined,\n[Symbol(errorLabels)]: Set(0) {}\n}",
"username": "Francek_Prijatelj"
},
{
"code": "",
"text": "if it is only the nodejs version you have upgraded:",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongodb://localhost:27017",
"text": "The error ECONNREFUSED means no server is listening at the given address and port.The address mentioned::1:27017with the connection stringmongodb://localhost:27017means localhost is defined for IPv6. A localhost IPv4 would be 127.0.01. You have to make sure your mongod is listening to IPv6 interfaces or use 127.0.01 in your connection string rather than localhost.",
"username": "steevej"
},
{
"code": "localhost::1c:\\Windows\\System32\\drivers\\etc\\::1 localhost127.0.0.1",
"text": "localhost is defined for IPv6Wow! I completely missed that part.@Francek_Prijatelj you may check 2 forum posts below if you have time for a similar issue.As @steevej stated above, your localhost is bound to ::1 right now. It possibly is a recent change in your c:\\Windows\\System32\\drivers\\etc\\ file (an upgrade or a new program) that activated a ::1 localhost lineYou may remove/comment out it, or use 127.0.0.1 instead, as suggested.Econnrefused ::1:27017Follow up of Econnrefused ::1:27017",
"username": "Yilmaz_Durmaz"
},
{
"code": "const url = 'mongodb://127.0.0.1:27017';\n",
"text": "Withit works fine.Sample code is copied directly from official github site https://github.com/mongodb/node-mongodb-native\nand it works with node16.\nI don’t think localhost is changed globally, as it wouldn’t work with node16 .\nMaybe there is a regression or a change in [email protected] .",
"username": "Francek_Prijatelj"
},
{
"code": "verbatimdns.lookup()--dns-result-orderRespect the OS configuration",
"text": "When we develop an app, the first assumptions we made might not be the best. And that choice must become a more generic one at some point to support the future. even if it breaks things, this is the right way we should choose.Node.js team made that decision change with this about a year ago: dns: default to verbatim=true in dns.lookup() by treysis · Pull Request #39987 · nodejs/node (github.com)One of developers #richardlau has this response on an issue ticket here https://github.com/nodejs/node/issues/40702#issuecomment-958157082For reference the change was #39987 – Node.js no longer re-sorts results of IP address lookups and returns them as-is (i.e. it no longer ignores how your OS has been configured). You can change the behaviour via the verbatim option to dns.lookup() or set the --dns-result-order command line option to change the default.Respect the OS configuration lies at the heart of the change. whether this configuration change was made by users themselves intentionally or an app changes it.Please check this for a priority discussion over IPv4 and IPv6\nnetworking - IPv4 vs IPv6 priority in Windows 7 - Super User",
"username": "Yilmaz_Durmaz"
},
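For completeness, both workarounds mentioned above can be expressed in code. A small sketch for Node.js 17+ — nothing here is specific to the MongoDB driver:

```javascript
// Option 1: keep "localhost" but restore the old IPv4-first resolution order
const dns = require('node:dns');
dns.setDefaultResultOrder('ipv4first');

// Option 2: bypass name resolution entirely in the connection string
const url = 'mongodb://127.0.0.1:27017';
```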
{
"code": "",
"text": "To avoid confusion, it would be helpful to change localhost to 127.0.0.1 on the github page",
"username": "Francek_Prijatelj"
},
{
"code": "localhost",
"text": "could be. But using localhost is a tradition for decades as using names rather than numbers is preferred by us human beings.Plus, IPv6 is relatively new and it is better to leave the job of dealing with problems that come together with using it to the users.It is actually a problem caused by OS developers who choose to activate IPv6 to be “localhost” in the “hosts” file without making it clear that it might break running programs. I was shocked to see this problem when I had it for the first time.Anyways, it is still a good exercise to show not everything goes to our expectations and we will have to dig deeper if we want to fix things. Fortunately, this one is not too deep to swallow us: use the number format, edit hosts file, or add flags.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "3 posts were split to a new topic: TypeError: Object prototype may only be an Object or null: undefined",
"username": "Stennie_X"
},
{
"code": "",
"text": "I’ve been facing this error from an hour,\nwhat if we want to create multiple collections with multiple schemas?",
"username": "Nayab_Rasool"
},
{
"code": "",
"text": "Hi @Nayab_RasoolIt is not clear what your problem is and does not seem to relate to this topic.Can you please give more details so we may have a better understanding!? and we maybe move it to a new discussion topic for broad attention.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "5 posts were split to a new topic: Create a collection with node.js",
"username": "kevinadi"
}
] | MongoDb doesnt' work with node.js 18.12.1 | 2022-11-15T20:21:06.191Z | MongoDb doesnt’ work with node.js 18.12.1 | 10,561 |
null | [
"android",
"storage",
"unity"
] | [
{
"code": " string dbPath = $\"{Application.persistentDataPath}\"; \n config = new RealmConfiguration($\"{dbPath}/{localPlayerId}_db.realm\")\n {\n IsReadOnly = false,\n SchemaVersion = 5001001\n };\n\n try\n {\n Realm = Realm.GetInstance(config);\n }\n catch (RealmFileAccessErrorException ex)\n {\n Debug.LogError($\"Error creating or opening the realm file. {ex.Message}\");\n }\n\n",
"text": "Hi,\nI’ve added Realm to my Unity application for Android. While everything works fine now in the Editor, I ran into problems when running the app on an Android device. So far, I’ve used the persistent data path to store data on the device, and I would like to continue doing so with the Realm database. But when I try to do so I get the exceptionError Unity RealmException: Failed to create fifo at ‘realm_9223144369529052091.cv’: Read-only file systemI am using UnityEngine.Application.persistentDataPath to get the persistent storage path, and this resolves to\n/storage/emulated/0/Android/data/[Package Name]/filesHere is how I initialize Realm:Does anyone have an idea how this problem could be resolved? Does the persistent storage at this location differ somehow from the rest of the Android filesystem? Because when I just use the default path of Realm on Android, storing the db works, but of course, this is not in persistent storage then.",
"username": "Ingo_Scholz"
},
{
"code": "Application.persistentDataPathApplication.persistentDataPathRealmConfiguration.FallbackPipePath",
"text": "The default path that Realm chooses is Context.getFilesDir, which means the files there are persisted. We chose this specifically because Application.persistentDataPath on Android will sometimes resolve to a read-only folder (see this comment).In your case though, the issue may be different - if you’re able to write regular files in the location returned by Application.persistentDataPath but not able to open a Realm in this location, it could be because the filesystem doesn’t allow creating named pipes, in which case you’ll need to set RealmConfiguration.FallbackPipePath to a path where named pipe creation is allowed (e.g. the internal data directory).",
"username": "nirinchev"
},
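To make that suggestion concrete, a hedged Unity/C# sketch of what the configuration could look like on Android. The Java interop calls are the usual way to reach the app's internal files directory, but treat this as illustrative rather than a verified snippet:

```csharp
string dbPath = Application.persistentDataPath;

// Internal app storage (Context.getFilesDir()), where named-pipe creation is allowed
string internalPath;
using (var unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
using (var activity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity"))
using (var filesDir = activity.Call<AndroidJavaObject>("getFilesDir"))
{
    internalPath = filesDir.Call<string>("getAbsolutePath");
}

var config = new RealmConfiguration($"{dbPath}/{localPlayerId}_db.realm")
{
    FallbackPipePath = internalPath
};
```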
{
"code": "Application.persistentDataPath",
"text": "Hi Nicola,thanks a lot for your reply! I’ve tried it and used the same approach to get the internal data directory as you did (also described here). Adding this path to the RealmConfiguration’s FallbackPipePath did the trick.But generally - and this is rather a Unity topic - I wonder why I shouldn’t use the internal data directory for all cases. What is the benefit of Application.persistentDataPath? As I understand it, it may point to storage on an external SD card, depending on some permission settings and whether there is an external SD card at all. /storage/emulated hides a potential FAT file system on the external SD card, and hence the problems you describe with permissions. But “how persistent” is this internal storage? Is it retained when the app is removed and installed again? Is it cloned to a new device if Google’s migration tool is used?And another question which relates to the persistency topic: what happens if the named pipe is deleted? Is it still possible to read the Realm database? Is it recreated?Sorry for all those questions, but I want to make sure that the game data is stored relyably on all possible devices and that I don’t need to migrate the storage location at some point in the future.",
"username": "Ingo_Scholz"
},
{
"code": "",
"text": "The named pipes should be recreated the next time you open the database, so deleting them is not an issue. If you attempt to delete them while in use, they’ll likely be marked for deletion, but not really deleted until the database is closed (this is OS and filesystem specific - on NTFS/windows, you’ll get an exception trying to delete the pipe).Regarding the persistent path: yes, the internal data folder is backed up (per Google’s docs). I guess the only benefit of using the external data path over the internal one is that it’s possibly larger, but it comes with a bunch of downsides - it can be slower or on a filesystem that doesn’t support named pipes/is readonly. It’s on you to decide what is best for your users depending on the amount of data you plan to store in the database. In most cases we’ve seen, the database size is quite small compared to the game installed size, so using the internal storage is fine, but, of course, your mileage may vary.",
"username": "nirinchev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Writing to Android persistent storage with Unity throws Read-only file system error | 2023-04-27T21:50:45.168Z | Writing to Android persistent storage with Unity throws Read-only file system error | 1,866 |
null | [
"queries",
"atlas-search"
] | [
{
"code": "[\n {\n \"$search\": {\n \"compound\": {\n \"should\": [\n {\n \"text\": {\n \"query\": \"Honeycrisp\",\n \"path\": \"name\"\n }\n }, {\n \"autocomplete\": {\n \"query\": \"Honeycrisp\",\n \"path\": \"description\"\n }\n }\n ]\n }\n }\n }\n]\n",
"text": "In the documentation for the compound operator, it reads that the scores for each matching clause in a should statement are summed together:The returned score is the sum of the scores of all the subqueries in the clause.Is there a way I can customize the scoring so that the maximum subquery score is used? I’m using the should operator to search through multiple fields (exactly 4) in a single query. I’m trying to customize the scoring so that documents matching on multiple fields don’t dominate over documents matching on fewer fields when the results are ranked. My should clause uses a mix of autocomplete and text operators.This is an imaginary example similar to what I have:So I want records matching at least one of these fields but I want to limit the relevance score to the maximum score of the subqueries.",
"username": "Xavier_Carlson"
},
{
"code": "",
"text": "Hi @Xavier_Carlson - Welcome to the community I’m trying to get a better understanding of what you’re wanting to achieve so the following is my interpretation of your post details but please correct me if I am incorrect on any of these:In short, it seems you are only after score customisation and the documents themselves being returned are correct?I’m wondering if it’s also possible if you can provide the output you are currently getting to help clarify what you are seeing so that I understand the scenario a bit better.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran,Thanks for your reply. Your interpretation is spot on. I cannot provide the output but I can say the search I’m trying to do is an all-in-one / omni-search through all the text fields in my collection using the should operator. The results are correct but my concern is with the ranking of the results. For instance, some records may lack a “description” but others may have both “name” and “description” and share many of the same terms. So yes I am trying to go for “3.” to account for this.",
"username": "Xavier_Carlson"
},
{
"code": "constantcompound\"Honeycrisp\"\"Honeycrisp\"\"Honeycrisp\"DB>db.search.find({},{_id:0})\n[\n { name: 'Honeycrisp', description: 'Honeycrisp' },\n { name: 'Honeycrisp', description: 'Nothing' },\n {\n name: 'Honeycrisp',\n description: 'Honeycrisp',\n thirdfield: 'Honeycrisp'\n }\n]\n[\n {\n \"$search\": {\n \"compound\": {\n \"should\": [\n {\n \"text\": {\n \"query\": \"Honeycrisp\",\n \"path\": \"name\"\n }\n }, {\n \"autocomplete\": {\n \"query\": \"Honeycrisp\",\n \"path\": \"description\"\n }\n }\n ],\n \"score\": { \"constant\": {\"value\": 1 } }\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"name\": 1,\n \"description\": 1,\n \"thirdfield\": 1,\n \"score\": {\"$meta\": \"searchScore\"}\n }\n }\n]\n[\n { name: 'Honeycrisp', description: 'Honeycrisp', score: 1 },\n { name: 'Honeycrisp', description: 'Nothing', score: 1 },\n {\n name: 'Honeycrisp',\n description: 'Honeycrisp',\n thirdfield: 'Honeycrisp',\n score: 1\n }\n]\n",
"text": "Thanks for confirming @Xavier_Carlson,Will using constant for the scoring option in the compound operator work for you? I have an example below which contains 3 documents:Test data:Based off what we discussed and using a similar pipeline to the one you provided, I assume you want these 3 documents returned but with the same score. Does the following possibly suited to your use case:Output (All same score):If you think this may work for you, please go ahead and test it out on a test environment / larger data set to ensure it meets all your requirements - I have only tested this on 3 sample documents on my test environment.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
}
] | How to use the maximum should clause score | 2023-04-24T20:56:14.507Z | How to use the maximum should clause score | 710 |
null | [
"node-js",
"mongoose-odm",
"connecting"
] | [
{
"code": "MongoParseError: Load balancer mode requires driver version 4+",
"text": "Hi team,I am currently trying to use mongodb with nodejs and keep on getting MongoParseError: Load balancer mode requires driver version 4+ error. For testing purpose, the DB is open to world 0.0.0.0 and still this is not connecting.Any help here?",
"username": "Security_Labs"
},
{
"code": "",
"text": "Welcome to the MongoDB Community Forums @Security_Labs !Based on the error message it sounds like you are trying to connect to an Atlas Serverless instance using an older version of the Node.js driver. For Atlas Serverless you will need a minimum of Node.js 4.1.0 driver (see Minimum Driver Versions for Serverless Instances). Since the title of your topic mentions Mongoose, you will also need Mongoose 6.0 or newer (which depends on the 4.1.x MongoDB Node.js driver).If you aren’t using Atlas Serverless, please provide some more information including:Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hey All,I am also getting the same error, while connecting to Serverless instance, I had just shifted from M10 dedicated cluster to Serverless instance, I am able to connect with both the instance to get dump from M10 cluster and restore in serverless instance, But when I am connecting to server less instance from my node application, I am getting the same error,Can someone please explain to me what exactly, the below statement meansAtlas Serverless you will need a minimum of Node.js 4.1.0 driverDo I need to update my NodeJs version, Mongoose Js version, Or anything else,",
"username": "Aliasgar_Bharmal"
}
] | Moongose connection error with Mongodb Atlas | 2022-01-26T19:25:25.285Z | Moongose connection error with Mongodb Atlas | 4,633 |
null | [
"kotlin"
] | [
{
"code": "RlmItem",
"text": "I’m trying to set up Realm with Device Sync for a new project. I want to add a prefix to my class names for clarity sake (e.g. RlmItem), but I don’t necessarily want that name in the App Services schema. Is there a way that I can manually specify what collection I want the class to sync as?",
"username": "Jacob_Rhoda"
},
{
"code": "\"title\"RlmItemRlmItem\"title\": \"RlmItem\"",
"text": "Hi @Jacob_Rhoda,You can specify the mapping from an Atlas collection to a Realm object by leveraging the \"title\" field in your app configuration’s JSON schema. For example, given the class name RlmItem, you can define a schema for the Atlas collection that you want to sync RlmItem to with \"title\": \"RlmItem\". Please see the docs for more information – there’s also an example at the bottom of the page to illustrate this.Let me know if that works,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "Hi Jonathan,That is helpful and I am able to do so… However I was hoping that I could do it on the client side instead of on the Atlas side. That way different apps could name the internal class whatever they want. Is that possible?Thanks,\nJacob",
"username": "Jacob_Rhoda"
},
{
"code": "\"title\"",
"text": "Jacob,\nI believe only some SDKs support this, while others use the class name as a direct mapping to table name (which as Jonathan notes can be configured by the \"title\" field). What Realm SDK/platform are you using?",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "@Jacob_Rhoda: Thanks for asking this question.Sadly, Kotlin SDK doesn’t support this feature as of now but is under our radar.But, do upvote this issue on Github as it helps us to understand developer needs and balance priorities accordingly.",
"username": "Mohit_Sharma"
},
{
"code": "@PersistedName",
"text": "Looks like this works now with the @PersistedName annotation. Thanks for getting this in so quick!",
"username": "Jacob_Rhoda"
},
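For anyone landing here later, a minimal Kotlin sketch of that annotation. It assumes a recent Kotlin SDK; the class body is illustrative and import paths may differ slightly between SDK versions:

```kotlin
import io.realm.kotlin.types.RealmObject
import io.realm.kotlin.types.annotations.PersistedName
import io.realm.kotlin.types.annotations.PrimaryKey
import org.mongodb.kbson.ObjectId

// Persisted/synced under the name "Item" even though the in-code class is RlmItem
@PersistedName("Item")
class RlmItem : RealmObject {
    @PrimaryKey
    var _id: ObjectId = ObjectId()
    var name: String = ""
}
```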
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Kotlin SDK Sync - Map model class to different name | 2023-04-18T16:05:13.501Z | Realm Kotlin SDK Sync - Map model class to different name | 911 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.4.21 is out and is ready for production deployment. This release contains only fixes since 4.4.20, and is a recommended upgrade for all 4.4 users.Fixed in this release:4.4 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.21 is released | 2023-05-01T18:41:17.465Z | MongoDB 4.4.21 is released | 1,007 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 5.0.17 is out and is ready for production deployment. This release contains only fixes since 5.0.16, and is a recommended upgrade for all 5.0 users.Fixed in this release:5.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.17 is released | 2023-05-01T18:38:09.630Z | MongoDB 5.0.17 is released | 933 |
null | [
"aggregation",
"queries",
"performance"
] | [
{
"code": "{\n \"executionSuccess\": true,\n \"nReturned\": 1180.0,\n \"executionTimeMillis\": 91587.0,\n \"totalKeysExamined\": 1180.0,\n \"totalDocsExamined\": 1180.0,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 1180.0,\n \"executionTimeMillisEstimate\": 78623.0,\n \"works\": 1181.0,\n \"advanced\": 1180.0,\n \"needTime\": 0.0,\n \"needYield\": 0.0,\n \"saveState\": 1117.0,\n \"restoreState\": 1117.0,\n \"isEOF\": 1.0,\n \"transformBy\": {\n \"created_on\": 1.0,\n \"estimated_time_of_completion\": 1.0,\n \"group_name\": 1.0,\n \"hash\": 1.0,\n \"name\": 1.0,\n \"processing\": 1.0,\n \"progress\": 1.0,\n \"total_credits_required_email\": 1.0,\n \"total_credits_required_email_uid\": 1.0,\n \"total_credits_required_uid\": 1.0,\n \"total_emails_found\": 1.0,\n \"total_members\": 1.0,\n \"unlocked_email\": 1.0,\n \"unlocked_email_uid\": 1.0,\n \"unlocked_uid\": 1.0,\n \"_id\": 0.0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 1180.0,\n \"executionTimeMillisEstimate\": 78590.0,\n \"works\": 1181.0,\n \"advanced\": 1180.0,\n \"needTime\": 0.0,\n \"needYield\": 0.0,\n \"saveState\": 1117.0,\n \"restoreState\": 1117.0,\n \"isEOF\": 1.0,\n \"docsExamined\": 1180.0,\n \"alreadyHasObj\": 0.0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 1180.0,\n \"executionTimeMillisEstimate\": 92.0,\n \"works\": 1181.0,\n \"advanced\": 1180.0,\n \"needTime\": 0.0,\n \"needYield\": 0.0,\n \"saveState\": 1117.0,\n \"restoreState\": 1117.0,\n \"isEOF\": 1.0,\n \"keyPattern\": {\n \"user_email\": 1.0\n },\n \"indexName\": \"user_email\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"user_email\": [\n ]\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2.0,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"user_email\": [\n \"[\\\"[email protected]\\\", \\\"[email protected]\\\"]\"\n ]\n },\n \"keysExamined\": 1180.0,\n \"seeks\": 1.0,\n \"dupsTested\": 0.0,\n \"dupsDropped\": 0.0\n }\n }\n }\n}\n",
"text": "I have a standalone MongoDB instance running on a 2 CPU 8GB RAM server. The collection in question has ~38.5k documents with an uncompressed size of 77.7GB (24.7GB storage size) along with 4 indexes (total index size = 2.2MB).Running aggregation queries on this collection takes almost a minute to complete. Normal find() queries take more than 5 minutes. Below is the explain plan execution stats for one of the aggregation query:I also monitored server resource usage while running these queries. CPU seems to fluctuate between 20% - 37% while RAM is constantly at about 90%. The indexes seems to be working properly. Not sure what else to look out for.Is this because of insufficient server resources or some other reason?",
"username": "Amin_Memon1"
},
{
"code": "",
"text": "Is this because of insufficient server resources or some other reason?It is lack of server resources. Double your RAM until you are happy with speed. What is your permanent storage?",
"username": "steevej"
},
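To put rough numbers on that diagnosis, using only the figures quoted above (these are back-of-the-envelope estimates):

```text
average document size ≈ 77.7 GB / 38,500 documents ≈ 2 MB per document
data fetched by this query ≈ 1,180 documents × ~2 MB ≈ 2.4 GB
total RAM on the server (shared with the OS and cache) = 8 GB
```

With documents this large, the FETCH stage has to pull gigabytes from disk for a query that only returns ~1,200 documents, which matches the long execution time even though the index is being used.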
{
"code": "",
"text": "What is your permanent storage?500 GB SSD (AWS EBS)",
"username": "Amin_Memon1"
}
] | Need help finding reasons for slow queries | 2022-06-04T09:15:31.863Z | Need help finding reasons for slow queries | 2,307 |
null | [
"compass"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"64459a3906135f434f43a1d5\"\n },\n \"data\": [\n {\n \"id\": 34541863,\n \"name\": \"\\\"A\\\" Cell Breeding Device\",\n \"type\": \"Spell Card\",\n \"frameType\": \"spell\",\n \"desc\": \"During each of your Standby Phases, put 1 A-Counter on 1 face-up monster your opponent controls.\",\n \"race\": \"Continuous\",\n \"archetype\": \"Alien\",\n },\n {\n \"id\": 64163367,\n \"name\": \"\\\"A\\\" Cell Incubator\",\n \"type\": \"Spell Card\",\n \"frameType\": \"spell\",\n \"desc\": \"Each time an A-Counter(s) is removed from play by a card effect, place 1 A-Counter on this card. When this card is destroyed, distribute the A-Counters on this card among face-up monsters.\",\n \"race\": \"Continuous\",\n \"archetype\": \"Alien\",\n{\n \"id\": 13026402,\n \"name\": \"A-Team: Trap Disposal Unit\",\n \"type\": \"Effect Monster\",\n \"frameType\": \"effect\",\n \"desc\": \"This effect can be used during either player's turn. When your opponent activates a Trap Card, Tribute this face-up card to negate the activation of the Trap Card and destroy it.\",\n \"atk\": 300,\n \"def\": 400,\n \"level\": 2,\n \"race\": \"Machine\",\n \"attribute\": \"FIRE\",\n }, ...\n",
"text": "Hello! I am new at MongoDB and I have imported this data set to play around with:I’ve tried to do the equivalent of “Select * from DB where type = “Effect Monster”” on MongoDB Compass with this:{“data.type”: {$eq: “Effect Monster”}}This should retrieve only rows with type: “Effect Monster”However I keep getting all entries and seems to be ignoring the “where” clause",
"username": "Henry_Zheng"
},
{
"code": "{“data.type”: {$eq: “Effect Monster”}}\ntype: \"Effect Monster\"Effect Monster type{\n \"_id\": {\n \"$oid\": \"64459a3906135f434f43a1d5\"\n },\n \"data\": [\n {\n \"id\": 34541863,\n \"name\": \"\\\"A\\\" Cell Breeding Device\",\n \"type\": \"Spell Card\",\n \"frameType\": \"spell\",\n \"desc\": \"During each of your Standby Phases, put 1 A-Counter on 1 face-up monster your opponent controls.\",\n \"race\": \"Continuous\",\n \"archetype\": \"Alien\"\n }\n ]\n}\ntype: \"Effect Monster\"_id{\n \"_id\": ObjectId(\"64459a3906135f434f43a1d5\"),\n \"data\": [\n {\n \"id\": 34541863,\n \"name\": \"Alex\",\n }\n ]\n},\n{\n \"_id\": ObjectId(\"64459a3906135f434f43a1d5\"),\n \"data\": [\n {\n \"id\": 34541864,\n \"name\": \"Ben\",\n }\n ]\n}\n_id{\n \"_id\": ObjectId(\"64459a3906135f434f43a1d5\"),\n \"data\": [\n {\n \"id\": 34541863,\n \"name\": \"Alex\",\n },\n {\n \"id\": 34541864,\n \"name\": \"Ben\",\n }\n ]\n}\n",
"text": "Hello @Henry_Zheng,Welcome to the MongoDB Community forums I’ve tried to do the equivalent of “Select * from DB where type = “Effect Monster”” on MongoDB Compass with this:This should retrieve only rows with the type: “Effect Monster”\nHowever, I keep getting all entries and seem to be ignoring the “where” clauseThe reason for this is that the query is retrieving documents where the type: \"Effect Monster\" matches and that contain more than one embedded sub-document. To obtain the desired result of the Effect Monster type, you must save each sub-document as a separate document.For example:Document 1:and so on…After that, if you query the data you will get the document with the specific type: \"Effect Monster\" To clarify this further, let’s understand what documents and subdocument are in MongoDB:A document is a group of data or information that is stored in a particular format, usually as a JSON or BSON object. For example, here are the 2 separate documents with specific _id. Please refer to ObjectId to learn more.A row is a similar concept in the context of a relational database. Each row represents a single record or instance of data in the table, and each column represents a specific attribute or property of that record. For example:On the other hand, a subdocument is a document that is nested inside another document. It is essentially a field that contains another document as its value. For example:Here are the 2 subdocuments embedded under the same _id.I suggest you refer to the following resources that should further help you further cement your understanding of these MongoDB concepts:Hope this helps. Please let us know if there are any doubts about this.Best,\nKushagra",
"username": "Kushagra_Kesav"
}
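As a side note, if restructuring the data is not desirable, a similar result can also be obtained by unwinding the array in an aggregation. A hedged mongosh sketch — the collection name cards is an assumption:

```javascript
db.cards.aggregate([
  { $unwind: "$data" },                          // one document per array element
  { $match: { "data.type": "Effect Monster" } }, // keep only the matching elements
  { $replaceRoot: { newRoot: "$data" } }         // promote the element to the top level
]);
```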
] | Rows vs Documents vs Sub-documents | 2023-04-28T06:37:18.267Z | Rows vs Documents vs Sub-documents | 831 |
null | [
"security",
"serverless"
] | [
{
"code": "",
"text": "Can someone please explain the process in detail.",
"username": "Jalaj_Kumar"
},
{
"code": "",
"text": "Hi @Jalaj_Kumar, thanks for posting to the Community Forums.Have you checked out this thread which poses a similar question and solution?",
"username": "yo_adrienne"
},
{
"code": "",
"text": "@Pavel_Duchovny Why do we need to create a user and supply its key and secret access key when there’s an alternative to directly connect with the AWS resource execution role like IAM role of the lambda function? The video Using AWS IAM Authentication with MongoDB 4.4 in Atlas to Build Modern Secure Applications - YouTube clearly explains this process should be password less and no credentials should be sent over to connect.",
"username": "Jalaj_Kumar"
},
{
"code": "",
"text": "As I understand it there are 2 types one is user/password like using the access key and secret key.And another one is by having the driver issuing kind of temp creds from aws api, but I haven’t watched the presentation in full and I am not big expert with IAM auth…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks. Is there someone who can help with this ?",
"username": "Jalaj_Kumar"
},
{
"code": "mongodb+srv://<AWS access key>:<AWS secret key>@my-cluster.8xebk.mongodb.net/myFirstDatabase?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:<session token (for AWS IAM Roles)>\nmongodb+srv://${encodeURIComponent(process.env.AWS_ACCESS_KEY_ID)}:${encodeURIComponent(process.env.AWS_SECRET_ACCESS_KEY)}@my-cluster.8xebk.mongodb.net/myFirstDatabase?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:${encodeURIComponent(process.env.AWS_SESSION_TOKEN)}",
"text": "Hi @Jalaj_Kumar,If you’re connecting to a MongoDB Atlas cluster from your local machine, you will need to supply IAM credentials to the command line.However, if you’re connecting to an Atlas cluster from AWS Lambda, Lambda will automatically retrieve temporary IAM credentials for you and make them accessible via environment variables (Using AWS Lambda environment variables - AWS Lambda).The three environment variables that your code will need are AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. The secret access key will never be sent to Atlas or persisted by the driver (see the video for more details).For example the Atlas UI will publish an AWS IAM connection string that looks like this:If you’re using Node.js on Lambda, you could then fill in the placeholders like this:const uri = mongodb+srv://${encodeURIComponent(process.env.AWS_ACCESS_KEY_ID)}:${encodeURIComponent(process.env.AWS_SECRET_ACCESS_KEY)}@my-cluster.8xebk.mongodb.net/myFirstDatabase?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:${encodeURIComponent(process.env.AWS_SESSION_TOKEN)}Note that in order for this to work, you will need to have configured a database user (https://docs.atlas.mongodb.com/security-add-mongodb-users/) for AWS IAM Authentication, selecting “IAM role” from the “AWS IAM Type” drop-down menu.If you have additional questions or run into any problems, please let us know.Angela@MongoDB",
"username": "Angela_Shulman"
},
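Putting that together, an intentionally simplified Node.js Lambda sketch. The cluster host and database name are placeholders, and the client is created outside the handler so that warm invocations reuse the connection:

```javascript
const { MongoClient } = require('mongodb');

const uri =
  `mongodb+srv://${encodeURIComponent(process.env.AWS_ACCESS_KEY_ID)}:` +
  `${encodeURIComponent(process.env.AWS_SECRET_ACCESS_KEY)}@my-cluster.example.mongodb.net/` +
  `myFirstDatabase?authSource=%24external&authMechanism=MONGODB-AWS` +
  `&authMechanismProperties=AWS_SESSION_TOKEN:${encodeURIComponent(process.env.AWS_SESSION_TOKEN)}`;

const client = new MongoClient(uri); // created once per container, reused across invocations

exports.handler = async () => {
  await client.connect(); // no-op if the client is already connected
  const doc = await client.db('myFirstDatabase').collection('test').findOne({});
  return { statusCode: 200, body: JSON.stringify(doc) };
};
```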
{
"code": "const MONGODB_URI = \"mongodb+srv://${encodeURIComponent(process.env.AWS_ACCESS_KEY_ID)}:${encodeURIComponent(process.env.AWS_SECRET_ACCESS_KEY)}@cluster/database?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:${encodeURIComponent(process.env.AWS_SESSION_TOKEN)}\";",
"text": "Hi @Angela_ShulmanThanks for the explanation. I tried using the below connection string from lambda function and receiving a timeout.I also have the lambda execution role arn added as a database user in my cluster.const MONGODB_URI = \"mongodb+srv://${encodeURIComponent(process.env.AWS_ACCESS_KEY_ID)}:${encodeURIComponent(process.env.AWS_SECRET_ACCESS_KEY)}@cluster/database?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:${encodeURIComponent(process.env.AWS_SESSION_TOKEN)}\";",
"username": "Jalaj_Kumar"
},
{
"code": "",
"text": "Hi @Jalaj_Kumar,Can you ensure that you have allowed IP access (https://docs.atlas.mongodb.com/security/ip-access-list/) and that there are no network connectivity issues?If you still have problems please create an in-app support chat session or open a support case (https://docs.atlas.mongodb.com/support/) so that we can help you with specifics in the context of your Atlas cluster and project. Please link to this forum topic and feel free to ask for me.Kind regards,\nAngela@MongoDB",
"username": "Angela_Shulman"
},
{
"code": "",
"text": "9 posts were split to a new topic: Task timed out after 5.01 seconds - MongoDB Atlas AWS connection issue",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to connect from AWS lambda function to mongo atlas by using an IAM role which is password less? | 2021-03-24T15:39:43.624Z | How to connect from AWS lambda function to mongo atlas by using an IAM role which is password less? | 11,637 |
null | [] | [
{
"code": "",
"text": "I’m trying to create a data model in MongoDB for a windows/unix like folder structure that contain files.\nUsers can have access to these files with permission read, write or both.Is there any best practies modelling this structure in MongoDB?\nCan anyone please point me in the right direction?",
"username": "Eirik_Andersen"
},
{
"code": "",
"text": "Hey @Eirik_Andersen,Welcome to the MongoDB Community forums! From what you described, I think the Tree Model might be a good way for you to design your schema. In this, MongoDB allows various ways to use tree data structures to model large hierarchical or nested data relationships. Using this, you can model your data in the folder-like hierarchy that you mentioned.Regarding the roles, MongoDB provides built-in roles with pre-defined pairings of resources and permitted actions. For lists of the actions granted, see Built-In Roles. To define custom roles, see Create a User-Defined Role.. You can read about Inherited Privilegages, which I feel might be suitable for your use-case.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
}
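For concreteness, here is a small sketch of one of those tree patterns (materialized paths) combined with a simple per-user permissions array. The collection, field, and user names are made up for illustration:

db.nodes.insertMany([
  { _id: "home",       type: "folder", path: null,          permissions: [ { user: "alice", access: "write" } ] },
  { _id: "docs",       type: "folder", path: ",home,",      permissions: [ { user: "alice", access: "write" } ] },
  { _id: "report.pdf", type: "file",   path: ",home,docs,", permissions: [ { user: "bob",   access: "read" } ] }
])

// Everything under the "docs" folder:
db.nodes.find( { path: /,docs,/ } )

// Files a given user may at least read:
db.nodes.find( { type: "file", permissions: { $elemMatch: { user: "bob", access: { $in: [ "read", "write" ] } } } } )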
] | MongoDB data model for folder structure with permissions | 2023-04-28T20:46:18.837Z | MongoDB data model for folder structure with permissions | 967 |
null | [
"dot-net",
"android"
] | [
{
"code": "",
"text": "Open an in memory database with .net maui on WinUI generates the following error:System.InvalidOperationException: “Could not determine a writable folder to store the Realm file. When constructing the RealmConfiguration, provide an absolute optionalPath where writes are allowed.”Tried to use the preserv static class but does not help. The same code is working on Android without the error.",
"username": "Bruno_Zimmermann"
},
{
"code": "",
"text": "Hmmm - you may want to open an issue here and provide some environmental details -Realm is a mobile database: a replacement for SQLite & ORMs - GitHub - realm/realm-dotnet: Realm is a mobile database: a replacement for SQLite & ORMs",
"username": "Ian_Ward"
},
{
"code": "",
"text": "upgrading to the latest version 10.21.1 solved the problem",
"username": "Bruno_Zimmermann"
}
] | Error when try to use InMemory Database in .net maui | 2023-04-14T19:07:02.702Z | Error when try to use InMemory Database in .net maui | 924 |
null | [
"migration"
] | [
{
"code": "",
"text": "Hi Community. I’m looking for some guidance.We have an on-prem MongoDB cluster running version 3.4 and plan to migrate it to MongoDB Atlas 6x. We planned to use the Live Migration, but we just discovered that we need to be in 4.2 (which is not an option now).My doubt is, can a Mongo dump and restore work for me? My doubt is because of the indexes’ compatibility.Has anyone tested this before? Any advice?Thanks.",
"username": "Janx_05"
},
{
"code": "",
"text": "Hi Jose Luis,There can definitely be index incompatibilities when you’re jumping over multiple major versions. It might be easiest to just try and do a test run of dump/restore - usually, metadata issues result in explicit errors.You should also be able to use Live Migrate to migrate from 3.4 to 4.4 or 5.0 on Atlas. If your final destination is 6.0, then you can easily upgrade from 5.0 to 6.0 once you’re on Atlas. Unfortunately, there’s no direct path from older versions to MongoDB 6.0 via Live Migrate (yet).",
"username": "Alexander_Komyagin"
},
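If it helps, a test run could look roughly like this (hostnames, credentials, and paths are placeholders; watch the mongorestore output for index build errors, since that is where 3.4-era index incompatibilities would surface):

# Dump from the on-prem 3.4 cluster
mongodump --host onprem-host:27017 --out /backups/dump-34

# Restore into the Atlas cluster
mongorestore --uri "mongodb+srv://user:password@cluster0.example.mongodb.net" /backups/dump-34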
{
"code": "",
"text": "Thank you very much, Alexander. I will give it a try to both recommendations. Dump/Restore and Live Migrate",
"username": "Janx_05"
}
] | Migration to MongoDB atlas compatibility | 2023-04-27T20:31:21.155Z | Migration to MongoDB atlas compatibility | 846 |
null | [
"python",
"transactions"
] | [
{
"code": "(env) ubuntu@playdb1:~$ sudo pip freeze |grep pymongo\npymongo==3.13.0\ndb.user_messages.findOne()\n{ \"_id\" : ObjectId(\"xxxxxxx\"), \"iid\" : \"xxxxxxx\", \"is_sender\" : true, \"_lm\" : ISODate(\"2015-07-16T15:27:10.685Z\"), \"message_id\" : ObjectId(\"xxxxxxxx\"), \"seq\" : 1, \"status\" : \"read\", \"thread_id\" : ObjectId(\"xxxxxxxx\"), \"time_created\" : ISODate(\"2015-07-16T15:27:10.685Z\"), \"time_read\" : ISODate(\"2015-07-16T15:27:10.685Z\"), \"type\" : \"system\", \"user_id\" : ObjectId(\"xxxxxxxxx\"), \"visibility\" : \"archived\" }\n2023-03-31T06: 25: 12.932+0000 I COMMAND [conn39652] command local.$cmd command: **find { find:** \"oplog.rs\", filter: { ts: { $gte: Timestamp(1680243781,3) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 5, readConcern: { afterClusterTime: Timestamp(1680243781, 3) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: \"secondaryPreferred\" }, $clusterTime: { clusterTime: Timestamp(1680243870, 1), signature: { hash: BinData(0, DDE58FBA0BEECDD6C6FF9677206EC95473E5BF23), keyId: 7213388223488196610 } }, $db: \"local\" } numYields: 0 ok: 0 errMsg: \"**operation exceeded time limit**\" errName:**MaxTimeMSExpired** errCode: 50 reslen: 665 locks: { ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } \nprotocol:op_msg 36589ms \n2023-03-31T06: 25: 13.689+0000 I COMMAND [conn20369] command cureatr.$cmd command: **update { update**: \"threads_garbage\", ordered: true, lsid: { id: UUID(\"dfd49a79-f537-4ecd-952f-154d4d35364b\") }, txnNumber: 4099331, autocommit: false, $clusterTime: { clusterTime: Timestamp(1680243863, 1), signature: { hash: BinData(0, 43057D869446C1D65E1CBCBEAE6F7227FFF446F6), keyId: 7213388223488196610 } }, $db: \"cureatr\" } numYields: 0 ok: 0 errMsg: \"Transaction 4099331 has been aborted.\" errName:**NoSuchTransaction** errCode: 251 reslen: 306 locks: { ReplicationStateTransition: { acquireCount: { w: 1 } } } \nprotocol:op_msg 42957ms \n2023-03-31T06: 30: 42.940+0000 I COMMAND [conn39720] command local.$cmd command: **find { find:** \"oplog.rs\", filter: { ts: { $gte: Timestamp(1680244080, 5) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 5, readConcern: { afterClusterTime: Timestamp(1680244080, 5) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: \"secondaryPreferred\" }, $clusterTime: { clusterTime: Timestamp(1680244213, 1), signature: { hash: BinData(0, 86B28A16186A0F9FA16705757DCC5D617A664CD1), keyId: 7213388223488196610 } }, $db: \"local\" } numYields: 0 ok: 0 errMsg: \"**operation exceeded time limit**\" errName:**MaxTimeMSExpired** errCode: 50 reslen: 665 locks: { ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } \nprotocol:op_msg 2011ms\n2023-03-31T06:31:09.336+0000 I NETWORK [conn39802] received client metadata from 127.0.0.1:35286 conn39802: { driver: { name: \"**PyMongo\", version: \"3.10.1**\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"5.4.0-1096-aws\" }, platform: \"CPython 3.7.5.final.0\" 
}\ndb.messages.count()\n139774535\ndb.user_messages.count()\n276214451\n",
"text": "I would like to let you knowColumns in the document (xxxxxx is the fabricated value)Please see below the latest error log lines:PyMongo Version in the MongoDB logThere is an interesting part that the same code is working fine where records counts are less:If failed when record count is double than above-mentioned count:",
"username": "Shashank_Shekhar5"
},
{
"code": "findOne()139774535276214451errMsg: \"**operation exceeded time limit**\" errName:**MaxTimeMSExpired** errCode: 50\ndb.version()update()find()mongo shell{ \n \"_id\" : ObjectId(\"xxxxxxx\"), \n \"iid\" : \"xxxxxxx\", \n \"is_sender\" : true, \n \"_lm\" : ISODate(\"2015-07-16T15:27:10.685Z\"), \n \"message_id\" : ObjectId(\"xxxxxxxx\"), \n \"seq\" : 1, \"status\" : \n \"read\", \n \"thread_id\" : ObjectId(\"xxxxxxxx\"), \n \"time_created\" : ISODate(\"2015-07-16T15:27:10.685Z\"), \n \"time_read\" : ISODate(\"2015-07-16T15:27:10.685Z\"), \n \"type\" : \"system\", \n \"user_id\" : ObjectId(\"xxxxxxxxx\"), \n \"visibility\" : \"archived\" \n}\npymongo",
"text": "Hey @Shashank_Shekhar5,Welcome to the MongoDB Community forums So, to summarize the issue I have understood so far: when you run a findOne() command on a collection with a record count of 139774535, it works fine, but when you run the same command on a collection with a record count of 276214451, it throws an error.Let me know if I misunderstood the context here.Furthermore, could you please share the following information to help us better understand the problem:MongoDB still showing PyMongo version 3.10.1 whereas the below command shows\nPyMongo Version 3.13 (Need help to properly upgrade PyMongo Version)Please follow the instructions here Installing / Upgrading — PyMongo 4.3.3 documentation to upgrade your pymongo version.Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | ExceedTimeLimit error and not upgrading PyMongo version | 2023-04-03T04:49:07.615Z | ExceedTimeLimit error and not upgrading PyMongo version | 948 |
[
"thailand-mug"
] | [
{
"code": "MongoDB Senior Consulting Engineer",
"text": "\nApr-2023960×540 72.2 KB\nLanguage: Thai\nDate: 30-April-2023\nTime: 20:00 PM - 21:00 PM (ICT)Agenda in ThaiเหมาะสำหรับAgenda in EnglishEvent Type: Online\nLink(s):\nVideo Conferencing URLMongoDB Senior Consulting Engineer",
"username": "Piti.Champeethong"
},
{
"code": "",
"text": "Hi Everyone,Please find the live meetup video recording below.THMUG Live Meetup - April 2023 Edition - YouTubeThank you.",
"username": "Piti.Champeethong"
}
] | Thailand MongoDB User Group: April Edition | 2023-04-25T09:36:32.819Z | Thailand MongoDB User Group: April Edition | 1,346 |
|
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "specializations: [specializationsSchema],\nconst specializationsSchema = new mongoose.Schema({\n specialityId: {\n type: String,\n required: true,\n },\n specialityName: {\n type: String,\n required: true,\n },\n status: {\n type: String,\n enum: ['active', 'inactive'],\n default: 'active'\n }\n})\n",
"text": "here ‘specialityName’ field is added later but not able to add the search index for this",
"username": "curaster_app"
},
{
"code": "{\n _id: ObjectId(\"644f59e524564d038dd192d7\"),\n specialityId: '12334567',\n status: [ 'active', 'inactive' ]\n }\ndb.searchAtlas.updateOne( \n { specialityId: '12334567'}, \n { $set: { \"specialityName\": \"ABC\"}})\n{\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"specialityName\": {\n \"type\": \"document\"\n }\n }\n }\n}\n",
"text": "Hi @curaster_app and welcome to MongoDB community forums!!Based on my understanding from the above posts, I tried to replicate the issue in my test Atlas cluster with version 6.0.5 using the following steps:I was successfully able to create the search index for the fields. Therefore to understand your concern better, could you help me with the few details below.Regards\nAasawari",
"username": "Aasawari"
}
] | Not able to add search index after adding a new field inside a existing field in schema | 2023-04-30T15:29:38.196Z | Not able to add search index after adding a new field inside a existing field in schema | 582 |
null | [
"atlas-cluster",
"golang"
] | [
{
"code": "> mongodb://<user>:<password>@rycaocluster-shard-00-02.apwt4.mongodb.net:27017/2022/03/17 20:18:18 server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: rycaocluster-shard-00-02.apwt4.mongodb.net:27017, Type: Unknown, Last error: connection() error occured during connection handshake: connection(rycaocluster-shard-00-01.apwt4.mongodb.net:27017[-64]) socket was unexpectedly closed: EOF }, ] }",
"text": "DB String - > mongodb://<user>:<password>@rycaocluster-shard-00-02.apwt4.mongodb.net:27017/please note that I have MongoDB Atlas Dedicated Server with Auto Scaling and I have used the primary node of the cluster in the string.Error -\n2022/03/17 20:18:18 server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: rycaocluster-shard-00-02.apwt4.mongodb.net:27017, Type: Unknown, Last error: connection() error occured during connection handshake: connection(rycaocluster-shard-00-01.apwt4.mongodb.net:27017[-64]) socket was unexpectedly closed: EOF }, ] }I searched about this and found a solution -The solution says to provide a tlsCAfile in the DB String, but I am not sure how to configure a tls certificate in MongoDB Atlas. Please tell me if you know how to do that, or if you have any other solutions.\nThanks",
"username": "VIBHANSHU_RATHOD"
},
{
"code": "mongodb+srv://...",
"text": "Hi @VIBHANSHU_RATHOD and welcome in the MongoDB Community !You are supposed to connect to the entire Replica Set, not just the primary. Use the connect button to retrieve the correct connection string starting with mongodb+srv://....As this actually points to a DNS seedlist, this will redirect your connection to the right servers if your cluster ever scales up or down due to the auto scaling.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "it doesn’t work, mongodb doesn’t work with the srv link, any solution? also the same MongoDB worked on a vultr server but is not working on an aws ec2 server.",
"username": "VIBHANSHU_RATHOD"
},
{
"code": "pymongo[srv]pymongo",
"text": "It could be different reasons. Here are the usual one I know about:Can you connect from a mongosh from this same EC2 machine? This would be the first thing to check.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I am aware about pymongo[srv], what is the similar dependency for go driver?",
"username": "VIBHANSHU_RATHOD"
},
{
"code": "mongosh",
"text": "It doesn’t look that there is one in Go. But I never tried myself. I didn’t notice you where using Go earlier. You should be good to go with just the Go Driver.Can you connect from mongosh from the same PC you are trying to connect from the Go driver?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "func GetClient() (*mongo.Client, error) {\n\turi := os.Getenv(\"MONGODB_URI\")\n\tlogrus.Info(\"uri: \", uri)\n\tif mgClient != nil {\n\t\treturn mgClient, nil\n\t}\n\tserverAPI := options.ServerAPI(options.ServerAPIVersion1)\n\tclientOptions := options.Client().ApplyURI(uri).SetServerAPIOptions(serverAPI)\n\tmgClient, err := mongo.NewClient(clientOptions)\n\tif err != nil {\n\t\treturn nil, errors.Wrap(err, \"can't create mongo client\")\n\t}\n\tctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)\n\tdefer cancel()\n\terr = mgClient.Connect(ctx)\n\tif err != nil {\n\t\treturn nil, errors.Wrap(err, \"can't connect mongo client\")\n\t}\n\treturn mgClient, err\n}\n",
"text": "I have the same issue even after I configure AWS Peering.my uri: mongodb+srv://< user >:< password >@< cluster >/?retryWrites=true&w=majority\ngo.mongodb.org/mongo-driver v1.11.4I have another lambda (same account, call the same uri) writing with nodejs work perfectly without pb connection handshake. But lambda golang throw this error.\nDo you have any idea?",
"username": "Thuc_NGUYEN1"
},
{
"code": "go get -u go.mongodb.org/mongo-driver/mongo\nmongodb+srv://go get github.com/miekg/dns\ntls=truemongodb+srv://<user>:<password>@<cluster>/dbname?retryWrites=true&w=majority&tls=true\nmongosh",
"text": "It seems like you are trying to connect to a MongoDB Atlas cluster from an AWS Lambda function using the Go driver. Based on the error you provided and the code snippet, here are a few suggestions to help resolve the issue:If you have followed these suggestions and still face issues, please provide more details about the error messages you are encountering and any additional information about your environment.",
"username": "ibilalkayy"
}
] | Unable to connect to mongodb dedicated cluster with auto scaling from go driver | 2022-03-18T01:12:27.943Z | Unable to connect to mongodb dedicated cluster with auto scaling from go driver | 4,520 |
null | [
"golang"
] | [
{
"code": "bla := []map[string]interface{}{\n {\n Name: \"data1\",\n Fields: map[string]interface{}{\n \"Field1\": true,\n \"Field2\": map[string]interface{}{\n \"Key\": \"foo\",\n },\n \"Field3\": []map[string]interface{}{\n {\n \"Key\": \"Bar\",\n \"InnerField\": []map[string]interface{}{\n {\n \"Key\": \"Bar\",\n \"Array\": []int{1, 2}\n },\n {\n \"Key\": \"Baz\",\n \"Array\": []int{3, 4}\n },\n },\n },\n {\n \"Key\": \"Foo\",\n \"InnerField\": []map[string]interface{}{\n {\n \"Key\": \"Rab\",\n \"Array\": []int{342, 234}\n },\n {\n \"Key\": \"Zab\",\n \"Array\": []int{534, 3453}\n },\n },\n },\n },\n },\n },\n}\nField3FieldNonemap[string]interfacefilter := map[string]interface{}{\n \"Fields.Fields3[].InnerField.Key\": \"Bar\" \n}\nres := db.Get(filter)\nresmap[string]interface{}{\n Name: \"data1\",\n Fields: map[string]interface{}{\n \"Field1\": true,\n \"Field2\": map[string]interface{}{\n \"Key\": \"foo\",\n },\n \"Field3\": []map[string]interface{}{\n {\n \"Key\": \"Bar\",\n \"InnerField\": []map[string]interface{}{\n {\n \"Key\": \"Bar\",\n \"Array\": []int{1, 2}\n },\n {\n \"Key\": \"Baz\",\n \"Array\": []int{3, 4}\n },\n },\n },\n },\n },\n}\nfilter := map[string]interface{}{\n \"Name\": \"data2\" \n}\nres := db.Get(filter)\n\nres == []map[string]interface{} // res is empty\n",
"text": "Hello, I have in go data like:It is only the example, there will be much more fields and the names of each field can be random (like Field3 or FieldNone). I have to keep it as map[string]interface I am not able to make from it struct My question is if I am able to in easy way to, for example, filtering,sorting,paginating over that complicated struct? In my head it should looks like:So I have got res like:or",
"username": "Mm_Kow"
},
{
"code": "package main\n\nimport (\n\t\"fmt\"\n\t\"reflect\"\n)\n\nfunc filterData(data interface{}, filter map[string]interface{}, path []string) interface{} {\n\tif len(path) == 0 {\n\t\tif reflect.DeepEqual(data, filter) {\n\t\t\treturn data\n\t\t}\n\t\treturn nil\n\t}\n\n\tif m, ok := data.(map[string]interface{}); ok {\n\t\tif v, ok := m[path[0]]; ok {\n\t\t\treturn filterData(v, filter, path[1:])\n\t\t}\n\t} else if arr, ok := data.([]map[string]interface{}); ok {\n\t\tresult := []map[string]interface{}{}\n\t\tfor _, e := range arr {\n\t\t\tif r := filterData(e, filter, path); r != nil {\n\t\t\t\tresult = append(result, r.(map[string]interface{}))\n\t\t\t}\n\t\t}\n\t\tif len(result) > 0 {\n\t\t\treturn result\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc main() {\n\t// Your 'bla' data\n\n\tfilter := map[string]interface{}{\n\t\t\"Key\": \"Bar\",\n\t}\n\n\tfilterPath := []string{\"Fields\", \"Field3\", \"InnerField\"}\n\n\tfiltered := filterData(bla, filter, filterPath)\n\n\tfmt.Printf(\"%v\\n\", filtered)\n}\n\nfilterPathfilter",
"text": "It looks like you want to perform operations like filtering, sorting, and pagination on the nested data structures you’ve provided. Since you can’t create a custom struct for this data, one way to do this is by using recursion and the reflect package.Here’s a simple function that filters the data based on a filter map:This function traverses the data structure following the path given by filterPath. If it finds a match for the filter, it will return that element; otherwise, it will return nil.This is just a starting point to help you with filtering. To implement sorting and pagination, you might need to extend this function or create new ones that work with your specific use case. Note that this implementation can be improved in terms of performance and code style, but it should provide a good starting point for handling the nested structures you’ve provided.",
"username": "ibilalkayy"
}
] | Filtering/sorting/paginating complicated data | 2023-04-20T13:23:53.266Z | Filtering/sorting/paginating complicated data | 1,076 |
null | [] | [
{
"code": "",
"text": "We’re getting this error on Atlas while trying to look at collections and none of our Application is being able to connect to Mongo DB Atlas. We’re using a shared M2 Cluster. Cluster is also marked as green for all 3 clusters.",
"username": "Anmol_Malik"
},
{
"code": "",
"text": "Facing same issue. AWS Mumbai (ap-south-1) region",
"username": "Bhuvan_Sharma"
},
{
"code": "",
"text": "Facing the same issue\n\nScreenshot 2023-03-25 at 2.45.09 PM2558×1118 166 KB\n",
"username": "Suraj_Jorwekar"
},
{
"code": "",
"text": "Now Issue is resolved for me guys. Please check. Thanks.",
"username": "Bhuvan_Sharma"
},
{
"code": "",
"text": "We are facing this issue from an hour, is there any possibilities to resolved it as soon as possible.\nWhatsApp Image 2023-03-25 at 2.52.20 PM1600×708 93.8 KB\n",
"username": "Ashutosh_Pal"
},
{
"code": "",
"text": "Hi, Yes you may open a case from support panel. Mongo DB team might need some manual intervention to resolve this issue. Or you can use the chat option to talk to the support. This is the only fastest approach in my knowledge.",
"username": "Bhuvan_Sharma"
},
{
"code": "",
"text": "I have trhe same issue just started yesterday, any way to resolve this?",
"username": "Ian_Duchesne"
},
{
"code": "",
"text": "I’m facing the same problem, any solution?",
"username": "EdoRetro"
},
{
"code": "",
"text": "Hi @Ian_Duchesne and @EdoRetro,Since the OP and previous commenters were from ~1 month ago and was possibly related to some of the incidents noted on the cloud status history page, I would recommend contacting the Atlas in-app chat support team for your own cluster / atlas account as it’s possibly a different issue to the original posters.In saying so, there haven’t been any incident updates since April 29th 2023 - ref : Cloud Status page.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | IMPORTANT - An error occurred while querying your MongoDB deployment. Please try again in a few minutes | 2023-03-25T08:26:19.102Z | IMPORTANT - An error occurred while querying your MongoDB deployment. Please try again in a few minutes | 2,805 |
null | [] | [
{
"code": "",
"text": "I have a scheduled function that is using the Mongo API to update a secret value, which works as expected. Another scheduled function does not see the new value and instead gets the old value until the function is redeployed.Is this expected behavior? Are there other options to rotate a secret value that can be read by a scheduled function?",
"username": "Brad_Tillman"
},
{
"code": "",
"text": "Hi Brad,I understand this is an old thread however I was not able to reproduce the problem.Steps I took:This was done while drafting was enabled in the app.If anyone else runs into any problem with updating the secret via admin API, please provide details.Regards\nManny",
"username": "Mansoor_Omar"
}
] | Reading updated secret values in triggered function | 2022-08-29T13:59:15.381Z | Reading updated secret values in triggered function | 1,735 |
[
"aggregation"
] | [
{
"code": "db.CreatorStats.aggregate([\n {\n $group: {\n _id: null,\n totalAdSpend: { $sum: \"$Ad-Spend\" },\n totalViews: { $sum: \"$Views\" }\n }\n },\n {\n $project: {\n _id: \"null\",\n result: {\n $multiply: [{ $divide: [\"totalAdSpend\", \"totalViews\"] }, 1000]\n }\n }\n }\n]);\n",
"text": "I am currently trying to create a calculated field: where we divide the sum of total ad spend by sum of total views then multiply by 1000. However, this query currently leads to an “Unexpected “:” at character 50”My database looks like below\n\nScreenshot 2023-04-28 at 2.21.40 PM1752×1692 303 KB\n",
"username": "Jaeho_Lee"
},
{
"code": "{ $divide: [\"$totalAdSpend\", \"$totalViews\"] }\n",
"text": "The “Unexpected “:” is a syntax error, which is odd as the query looks well-formed to me.\nOne thing I can see is that you forgot to dollar-prefix the fields, you are diving, i.e:",
"username": "tomhollander"
}
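Putting that fix into the original pipeline gives something like the following sketch (same field names as in the question; _id is set to 0 here simply to drop it from the output):

db.CreatorStats.aggregate([
  { $group: {
      _id: null,
      totalAdSpend: { $sum: "$Ad-Spend" },
      totalViews: { $sum: "$Views" }
  } },
  { $project: {
      _id: 0,
      result: { $multiply: [ { $divide: [ "$totalAdSpend", "$totalViews" ] }, 1000 ] }
  } }
])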
] | Error with Calculated Fields | 2023-04-28T21:22:32.008Z | Error with Calculated Fields | 729 |
|
null | [
"python"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-04-16T23:01:50.516+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:61312\",\"uuid\":\"f570f0d0-96a7-4036-a6c3-65c90800740c\",\"connectionId\":51331,\"connectionCount\":316}}\n{\"t\":{\"$date\":\"2023-04-16T23:01:50.516+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn51330\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:61310\",\"client\":\"conn51330\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"4.3.3\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"5.15.0-69-generic\"},\"platform\":\"CPython 3.10.6.final.0\"}}}\n{\"t\":{\"$date\":\"2023-04-16T23:01:50.516+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn51331\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:61312\",\"client\":\"conn51331\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"4.3.3\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"5.15.0-69-generic\"},\"platform\":\"CPython 3.10.6.final.0\"}}}\n{\"t\":{\"$date\":\"2023-04-16T23:01:50.894+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn51330\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61310\",\"uuid\":\"583efaef-22a9-43e2-9e9c-4c96588e2b2f\",\"connectionId\":51330,\"connectionCount\":315}}\n{\"t\":{\"$date\":\"2023-04-16T23:01:51.016+02:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn51329\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":1040260}}\n{\"t\":{\"$date\":\"2023-04-16T23:01:51.017+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn51331\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61312\",\"uuid\":\"f570f0d0-96a7-4036-a6c3-65c90800740c\",\"connectionId\":51331,\"connectionCount\":314}}\n{\"t\":{\"$date\":\"2023-04-16T23:01:51.017+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn51329\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61298\",\"uuid\":\"9d873de2-eaca-451a-aa45-32c4134d3421\",\"connectionId\":51329,\"connectionCount\":313}}\n{\"t\":{\"$date\":\"2023-04-16T23:02:12.016+02:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. 
An exception is active; attempting to gather more information\\n\"}}\n{\"t\":{\"$date\":\"2023-04-16T23:02:12.017+02:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileStreamFailed: Failed to write to interim file buffer for full-time diagnostic data capture: /var/lib/mongodb/diagnostic.data/metrics.interim.temp\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)39, mongo::AssertionException>\\n\\n\"}}\n{\"t\":{\"$date\":\"2023-04-16T23:02:12.360+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"561CB9EA4C74\",\"b\":\"561CB5094000\",\"o\":\"4E10C74\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.362\",\"C\":\"mongo::stack_trace_detail::(anonymous namespace)::printStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&, mongo::StackTraceSink*) [clone .constprop.362]\",\"s+\":\"1F4\"},{\"a\":\"561CB9EA71B9\",\"b\":\"561CB5094000\",\"o\":\"4E131B9\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"C\":\"mongo::printStackTrace()\",\"s+\":\"29\"},{\"a\":\"561CB9EA1507\",\"b\":\"561CB5094000\",\"o\":\"4E0D507\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"C\":\"mongo::(anonymous namespace)::myTerminate()\",\"s+\":\"D7\"},{\"a\":\"561CBA02BFA6\",\"b\":\"561CB5094000\",\"o\":\"4F97FA6\",\"s\":\"_ZN10__cxxabiv111__terminateEPFvvE\",\"C\":\"__cxxabiv1::__terminate(void (*)())\",\"s+\":\"6\"},{\"a\":\"561CBA0C0929\",\"b\":\"561CB5094000\",\"o\":\"502C929\",\"s\":\"__cxa_call_terminate\",\"s+\":\"39\"},{\"a\":\"561CBA02B995\",\"b\":\"561CB5094000\",\"o\":\"4F97995\",\"s\":\"__gxx_personality_v0\",\"s+\":\"275\"},{\"a\":\"7FD973BE8C64\",\"b\":\"7FD973BD2000\",\"o\":\"16C64\",\"s\":\"_Unwind_GetTextRelBase\",\"s+\":\"1EF4\"},{\"a\":\"7FD973BE9321\",\"b\":\"7FD973BD2000\",\"o\":\"17321\",\"s\":\"_Unwind_RaiseException\",\"s+\":\"311\"},{\"a\":\"561CBA02C107\",\"b\":\"561CB5094000\",\"o\":\"4F98107\",\"s\":\"__cxa_throw\",\"s+\":\"37\"},{\"a\":\"561CB6F83554\",\"b\":\"561CB5094000\",\"o\":\"1EEF554\",\"s\":\"_ZN5mongo13error_details23throwExceptionForStatusERKNS_6StatusE\",\"C\":\"mongo::error_details::throwExceptionForStatus(mongo::Status const&)\",\"s+\":\"2036\"},{\"a\":\"561CB6F98800\",\"b\":\"561CB5094000\",\"o\":\"1F04800\",\"s\":\"_ZN5mongo21uassertedWithLocationERKNS_6StatusEPKcj\",\"C\":\"mongo::uassertedWithLocation(mongo::Status const&, char const*, unsigned int)\",\"s+\":\"2F8\"},{\"a\":\"561CB6A30A8A\",\"b\":\"561CB5094000\",\"o\":\"199CA8A\",\"s\":\"_ZN5mongo14FTDCController6doLoopEv.cold.495\",\"C\":\"mongo::FTDCController::doLoop() [clone .cold.495]\",\"s+\":\"A6\"},{\"a\":\"561CB747F0FC\",\"b\":\"561CB5094000\",\"o\":\"23EB0FC\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_14FTDCController5startEvEUlvE0_JELi0EEET_DpOT0_EUlvE_EEEEE6_M_runEv\",\"C\":\"std::thread::_State_impl<std::thread::_Invoker<std::tuple<mongo::stdx::thread::thread\n",
"text": "Hey guys, we have a single mongo instance (community) running on Ubuntu 20.4 and it keeps crashing randomly. Rebooting the system resolves the error. In the mongodb.log we got the following:Can anyone help us with this? Thank you!",
"username": "Lars_Dittrich"
},
{
"code": "mongod",
"text": "Hi @Lars_Dittrich and welcome to MongoDB community forums!!Could you help me some information regarding the deployment to assist you further.Rebooting the system resolves the error.After the mongod is connected, does the connection ends again abruptly or any request is sent to the mongo client ?would suggest you to make sure you have all the right permissions enabled and enough empty disk space to write to the respective directory. Please visit the documentation on Configuration File Options for further details.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "mongo = pymongo.MongoClient(config['MONGODB']['HOST'], 27017)\n",
"text": "Hi Aasawari,ok, to have some more details about our environment. We have multiple python-based services (pymongo) that make thousands of requests per day…until this error happens everything works fine. Nothing special in the snippet…simple connect…no security enabled…After the error happens the mongo-service is dead…cannot connect with any client…also MongoDB Compass App cant connect…Mongo Community Version is 6.0.5After reboot everything is working again for some days…until this error returns…Do you have any suggestions how i can force MongoDB to write this interim file buffer to test permissions?Thank you!",
"username": "Lars_Dittrich"
},
{
"code": "mongod",
"text": "Hi @Lars_Dittrich and thank you for sharing the above details.Do you have any suggestions how i can force MongoDB to write this interim file buffer to test permissions?The temporary solution as a part of the trouble shooting process would be to turn off the FTDC.Please note that this is not a recommended procedure and does not guarantee a solution since it may be a symptom of another underlying issue.However, if you still seeing an error even after trying to turn off FTDC, could you provide more details regarding your deployment, for example, what hardware are you using, your CPU & RAM size, are you using some container architecture, are the disks local or accessed by network (e.g. NFS), any error in any logs (not just mongod logs), and other details that may pinpoint the underlying issueRegards\nAasawari",
"username": "Aasawari"
},
{
"code": "/var/lib/mongodb/diagnostic.data\nsetParameter:\n diagnosticDataCollectionEnabled: false\n",
"text": "Hi Aasawari, thank you for your response. We already tried to do exactly this and it seems to work.\nSo for everyone with the same problem. Make sure that mongodb has permissions to write to this folder:If this does not help, place this in the mongodb.conf:",
"username": "Lars_Dittrich"
},
{
"code": "",
"text": "Hi Lars, when you say “rebooting the system resolves the error”, does this mean that simply restarting the service does not resolve the error?Also, did the crashes seem to happen during periods of low activity?We are trying to debug a random crash as well. Your forum post seems to come up on my google searches for log message entries.",
"username": "AmitG"
},
{
"code": "",
"text": "Hi Aasawari, is there a JIRA ticket open that you could point me to about this type of temporary solution? I’m trying to learn more about the circumstances of why that may be a solution to see if our circumstances may match.",
"username": "AmitG"
}
] | MongoDB randomly crashes | 2023-04-17T06:58:01.015Z | MongoDB randomly crashes | 1,436 |
null | [
"compass",
"schema-validation"
] | [
{
"code": "{\n $jsonSchema: {\n bsonType: \"object\",\n required: [\"nome\", \"integrantes\"],\n properties: {\n nome: {\n bsonType: \"string\",\n description: \"'nome' deve ser uma string que representa o nome da equipe e precisa ser informado.\"\n },\n integrantes: {\n bsonType: [\"string\"],\n description: \"'integrantes' deve ser um array de string que representa o(s) nome(s) do(s) integrante(s) da equipe e precisa ser informado.\",\n minItems: 1,\n uniqueItems: true\n },\n timestamps: {\n createdAt: \"criadaEm\",\n updatedAt: \"atualizadaEm\"\n }\n }\n }\n}\nParsing of collection validator failed :: caused by :: Unknown $jsonSchema keyword: createdAt",
"text": "This is my schema validator:And the error is:Parsing of collection validator failed :: caused by :: Unknown $jsonSchema keyword: createdAt",
"username": "Marcos_Visentini"
},
{
"code": "timestamps : {\n bsonType: \"object\" ,\n properties: {\n createdAt: {\n bsonType: \"date\" ,\n description: \"criadaEm\"\n }\n updatedAt: {\n bsonType: \"date\" ,\n description: \"atualizadaEm\"\n }\n }\n}\n createdAt: {\n bsonType: \"date\" ,\n description: \"criadaEm\"\n } ,\n updatedAt: {\n bsonType: \"date\" ,\n description: \"atualizadaEm\"\n }\n",
"text": "I am not too familiar with schema-validation but by looking at the first example of the documentation, I think that timestamps, being an object with 2 fields would need to be specified asUnless of course, I misunderstand your intent and what your really want is to have have createdAt and updatedAt as top level attributes like nome and integrantes. If this is the case, then it should be:",
"username": "steevej"
},
{
"code": "createdAtcriadaEmupdatedAtatualizadaEm",
"text": "I want to “enable” the timestamps of any document, renaming createdAt to criadaEm and updatedAt to atualizadaEm. I don’t know if it’s possible to do that in the schema validatior.",
"username": "Marcos_Visentini"
},
{
"code": "",
"text": "MongoDB has no default and automatic createdAt and updatedAt fields. Thanks you MongoDB since I do not want to pay performance penalty to a feature that I do not need in some of my use case.I think mongoose has something in this effect. However I do not know if you can renamed them.",
"username": "steevej"
},
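If the renaming only needs to happen at the ODM level rather than in the $jsonSchema validator, Mongoose's schema-level timestamps option accepts custom field names. A minimal sketch, assuming Mongoose is in use:

const equipeSchema = new mongoose.Schema(
  {
    nome: { type: String, required: true },
    integrantes: { type: [String], required: true }
  },
  {
    // Mongoose maintains these two date fields automatically, under the renamed keys.
    timestamps: { createdAt: "criadaEm", updatedAt: "atualizadaEm" }
  }
);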
{
"code": "",
"text": "Oh, ok! Thank you for clarifying!",
"username": "Marcos_Visentini"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't rename timestamp fields (createdAt and updatedAt) in MongoDB Compass | 2023-04-30T12:01:12.635Z | Can’t rename timestamp fields (createdAt and updatedAt) in MongoDB Compass | 1,204 |
null | [
"queries"
] | [
{
"code": "",
"text": "I have an odd issue - characters that would normally have Spanish-language diacritics are displayed as question marks. I need to be able to search the whole collection for incorrect characters and fix them. Just wondering if anyone has run in to any similar issues before. I’m thinking it is actually an issue with the source file, but I haven’t been able to verify that yet.",
"username": "Matthew_Andersen"
},
{
"code": "?",
"text": "Hi @Matthew_Andersen and welcome to MongoDB community forums!!characters that would normally have Spanish-language diacritics are displayed as question marks.Can you confirm if my understanding is correct here saying that, you are not able to search Spanish diacritics and it results the response with ? as the response?\nCan you also confirm is this is related to MongoDB Atlas Search language Analysers?If yes, you can take a look at the example code from MongoDB language Analyser documentation, since Spanish is a supported language analyser, the expectations is to get correct response.However, to understand further, could you help me in understanding the requirement better by proving the below information.I need to be able to search the whole collection for incorrect characters and fix themCould you please assist me in understanding what is meant by incorrect characters in the above statement?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi Aasawari! I appreciate the response. The issue isn’t with being able to search Spanish-language characters - the accented characters are displaying as the unicode unknown character of a question mark inside a diamond. I was able to do a find on those by searching for “new RegExp(‘\\ufffd’)” in the impacted fields, then did an updateMany coupled with a $replaceAll to correct the character. I believe the problem is with the data ingestion and I’ll have to track that down separately.",
"username": "Matthew_Andersen"
}
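For anyone landing here with the same symptom, a rough sketch of the kind of cleanup described above (collection and field names are hypothetical; $replaceAll needs MongoDB 4.4+, the pipeline-style update needs 4.2+, and you would run one pass per character you are restoring):

db.products.updateMany(
  { name: /\uFFFD/ },
  [ { $set: {
      name: { $replaceAll: { input: "$name", find: "\uFFFD", replacement: "é" } }
  } } ]
)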
] | Diacritics not displaying correctly | 2023-04-26T21:20:32.560Z | Diacritics not displaying correctly | 426 |
null | [] | [
{
"code": "",
"text": "Hey,Currently I have mongo cluster with one RS,\nI want to move couple of collections to a separate and new RS without performing downtime or reducing it to the minimum.I have looked for couple of solutions but either of them gave me easy and clean transition,\ncan you recommend me how to do this kind of change?",
"username": "alex_bondar"
},
{
"code": "",
"text": "Check this post",
"username": "Kobe_W"
},
{
"code": "",
"text": "@Kobe_W it creates a lot of moving parts to do this kind of copy,\nand it is one of many I have to do…looking for something more easy and seamless to the client",
"username": "alex_bondar"
}
] | Moving collections to new rs | 2023-04-27T08:24:03.124Z | Moving collections to new rs | 368 |
null | [
"aggregation",
"queries",
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": " {\n \"_id\": \"61a02dc3e044cc34ce8a3a2f\",\n \"name\": \"Product 1\",\n \"status\": true,\n \"items\": [\n {\n \"_id\": \"61a02dc3e044cc34ce8a3a30\",\n \"foodName\": \"Item 1\",\n \"price\": 10,\n \"status\": true\n },\n {\n \"_id\": \"61a02dc3e044cc34ce8a3a31\",\n \"foodName\": \"Item 2\",\n \"price\": 20,\n \"status\": false\n }\n ]\n }\nstatusitems._id61a02dc3e044cc34ce8a3a31db.collection.update({\n _id: \"61a02dc3e044cc34ce8a3a2f\",\n \"items._id\": \"61a02dc3e044cc34ce8a3a31\"\n},\n{\n $set: {\n \"items.$.status\": {\n $not: [\n {\n $eq: [\n \"$items.$.status\",\n true\n ]\n }\n ]\n }\n }\n})\n {\n \"_id\": \"61a02dc3e044cc34ce8a3a2f\",\n \"items\": [\n {\n \"_id\": \"61a02dc3e044cc34ce8a3a30\",\n \"foodName\": \"Item 1\",\n \"price\": 10,\n \"status\": true\n },\n {\n \"_id\": \"61a02dc3e044cc34ce8a3a31\",\n \"foodName\": \"Item 2\",\n \"price\": 20,\n \"status\": {\n \"$not\": [\n {\n \"$eq\": [\n \"$items.$.status\",\n true\n ]\n }\n ]\n }\n }\n ],\n \"name\": \"Product 1\",\n \"status\": true\n }\n",
"text": "I have documents in my collection in the formatNow lets say I want to toggle the status of where items._id : 61a02dc3e044cc34ce8a3a31. I tried with the positional operator but it simply doesn’t give me the resultThis is the query I triedI get back the below which is not what I wantI mean it doesn’t read the conditions just assigns the condition object as the value. Here is the mongo playground link I tested withIs this not possible. Any alternatives?",
"username": "schach_schach"
},
{
"code": "",
"text": "You need the following:",
"username": "steevej"
},
{
"code": "",
"text": "db.demo.bulkWrite([\n{\nupdateOne: {\nfilter: { items: { $elemMatch: { _id: “61a02dc3e044cc34ce8a3a31” } } },\nupdate: { $set: { “items.$.status”: true } }\n}\n}\n])",
"username": "Durrah_Khan"
},
{
"code": "",
"text": "update: { $set: { “items.$.status”: true } }Not quite a working solution as the problem is toggle a boolean field. What you share sets it to true even when it is already true. When true it has to be set to false.",
"username": "steevej"
},
{
"code": "",
"text": "hi thanks for the reply\nbut how do i access the positional operator in an aggregation pipeline. I’m not quite sure how to do it",
"username": "schach_schach"
},
{
"code": "{ $set : {\n \"items\" : { $map : {\n \"input\" : \"$items\" ,\n \"as\" : \"item\" ,\n \"in\" : { \"$cond\" : {\n \"if\" : { \"$eq\" : [ \"$$item._id\" , \"61a02dc3e044cc34ce8a3a31\" ] } ,\n \"then\" : { \"$mergeObjects : [\n \"$$item\" ,\n { \"status\" : { \"$not\" : [ \"$$item.status\" ] } }\n ] } ,\n \"else\" : \"$$item\"\n } }\n } }\n} }\n",
"text": "Sorry for the delay. I don’t know.I am pretty sure that you can achieve the toggle with $map along the lines:",
"username": "steevej"
},
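As a usage sketch, that $set/$map stage would sit inside a pipeline-style update (MongoDB 4.2+); the collection name here is a placeholder:

db.products.updateOne(
  { _id: "61a02dc3e044cc34ce8a3a2f" },
  [ { $set: { items: { $map: {
      input: "$items",
      as: "item",
      in: { $cond: {
        if: { $eq: [ "$$item._id", "61a02dc3e044cc34ce8a3a31" ] },
        then: { $mergeObjects: [ "$$item", { status: { $not: [ "$$item.status" ] } } ] },
        else: "$$item"
      } }
  } } } } ]
)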
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to toggle a boolean field in an array using $set and positional operator | 2023-04-24T17:46:43.402Z | How to toggle a boolean field in an array using $set and positional operator | 1,489 |
null | [] | [
{
"code": "",
"text": "Hello:I’m trying to install mongodb latest version on Ubuntu (vmware virtual machine).I follow the instructions from\n‘https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/’I get the following message on the when checking the status of the service.\n‘’’\n× mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\nActive: failed (Result: core-dump) since Thu 2023-04-27 17:53:46 PDT; 8s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 6184 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=dumped, signal=ILL)\nMain PID: 6184 (code=dumped, signal=ILL)\nCPU: 17msApr 27 17:53:46 systemd[1]: Started MongoDB Database Server.\nApr 27 17:53:46 systemd[1]: mongod.service: Main process exited, code=dumped, status=4/ILL\nApr 27 17:53:46 systemd[1]: mongod.service: Failed with result ‘core-dump’.\n‘’’Please help pretty new with Linux",
"username": "Al_Maitchoukow"
},
{
"code": "",
"text": "ILL means illegal instructions.Check whether your platform,chip architecture supports the mongodb version you are trying to install\nSearch our forum threads for ILL and check compatibility matrix etc",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks for the response, I’m running an Intel® Xeon® Processor E5-4660 v3 CPU. For what I read and saw online it doesn’t look like is compatible. am I right?‘Intel Xeon Processor E54660 v3 35M Cache 2.10 GHz Product Specifications’",
"username": "Al_Maitchoukow"
},
{
"code": "",
"text": "I’m trying to install mongodb latest version on Ubuntu (vmware virtual machine).Check EVC mode in VMWare, your CPU supports AVX2.",
"username": "chris"
}
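A quick way to check whether the guest VM actually exposes the AVX instruction set (the assumption here is that CPU flags masked inside the VM are the cause of the SIGILL):

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u

If nothing is printed, the VM is hiding AVX from the guest, and recent x86_64 MongoDB builds that require AVX will not start.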
] | I cannot install mongodb on Ubuntu 22.04.2 live server (code=dumped, signal=ILL) | 2023-04-28T01:24:50.329Z | I cannot install mongodb on Ubuntu 22.04.2 live server (code=dumped, signal=ILL) | 2,503 |
null | [
"java",
"connecting"
] | [
{
"code": "\"stack_trace\":\"java.util.concurrent.ExecutionException: com.mongodb.MongoSocketReadException: Exception receiving message\n at java.base/java.util.concurrent.CompletableFuture.reportGet(Unknown Source)\n at java.base/java.util.concurrent.CompletableFuture.get(Unknown Source)\n at com.creativeradicals.openio.pipeline.persist.feed.FeedMultiSaver.accept(FeedMultiSaver.java:107)\n at com.creativeradicals.openio.pipeline.persist.feed.FeedMultiSaver.accept(FeedMultiSaver.java:34)\n at com.creativeradicals.openio.rabbit.base.RequestConsumer.handleDelivery(RequestConsumer.java:42)\n at com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:149)\n at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:104)\n at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n at java.base/java.lang.Thread.run(Unknown Source)\nCaused by: com.mongodb.MongoSocketReadException: Exception receiving message\n at com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:569)\n at com.mongodb.internal.connection.InternalStreamConnection.access$1200(InternalStreamConnection.java:76)\n at com.mongodb.internal.connection.InternalStreamConnection$5.failed(InternalStreamConnection.java:520)\n at com.mongodb.internal.connection.AsynchronousChannelStream$BasicCompletionHandler.failed(AsynchronousChannelStream.java:235)\n at com.mongodb.internal.connection.AsynchronousChannelStream$BasicCompletionHandler.failed(AsynchronousChannelStream.java:203)\n at java.base/sun.nio.ch.Invoker.invokeUnchecked(Unknown Source)\n at java.base/sun.nio.ch.Invoker$2.run(Unknown Source)\n at java.base/sun.nio.ch.AsynchronousChannelGroupImpl$1.run(Unknown Source)\n ... 3 common frames omitted\nCaused by: java.io.IOException: Connection reset\n at java.base/sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishRead(Unknown Source)\n at java.base/sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(Unknown Source)\n at java.base/sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(Unknown Source)\n at java.base/sun.nio.ch.EPollPort$EventHandlerTask.run(Unknown Source)\n ... 1 common frames omitted\"}\n",
"text": "Seeing an abundance of the below in a sharded cluster environment hosted in AWS. Any insight as to how to debug? Have tinkered with tcp keepalive on the servers (currently set to 120) and maxIdleTime on the client without any noticeable change.MongoDB Server Version: 4.4.2\nJava Driver: 'org.mongodb:mongodb-driver-reactivestreams:1.13.1\n‘io.reactivex.rxjava3:rxjava:3.0.3’\nArchitecture: arm64",
"username": "Firass_Almiski"
},
{
"code": "",
"text": "See the discussion containing this suggestion and perhaps that will solve your problem…",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Our application uses docker, and it looks like the version of Java we are on (OpenJDK Runtime Environment AdoptOpenJDK (build 14.0.2+12) has patched the bug mentioned in that thread. Is there any other way to debug these constant connection reset errors and socket exceptions?",
"username": "Firass_Almiski"
},
{
"code": "",
"text": "Hmm, I don’t know an easy way … maybe we can ask @Jeffrey_Yemin",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "There’s no straightforward way to determine the root cause of connection reset errors. It’s not typically a driver bug that causes it. Rather, it’s either something happening in the MongoDB server or in the network between driver and server. I would look first at MongoDB server logs to see if there are any clues there. It’s possible that the server itself is closing the connection for some reason. If not, you’ll need to involve an expert in network administration, perhaps to employ a tool like Wireshark to figure out what’s happening, assuming that you can reproduce the error.One other thought: if you’re able to test outside of Docker, that would be one way to rule that Docker itself as a contributing factor.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Yes, simplification is a useful debugging tool.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Our cluster is running on ARM64 CentOS machines in AWS. I see that the compatibility specs don’t list CentOS under ARM64. Is it worth trying to switch the underlying Operating System to Ubuntu?\n.",
"username": "Firass_Almiski"
},
{
"code": "",
"text": "Seeing an abundance of these Client Disconnect errors reported by our routers:message: {“t”:{\"$date\":“2021-01-19T17:47:02.856+00:00”},“s”:“D1”, “c”:“SHARDING”, “id”:22772, “ctx”:“conn1749”,“msg”:“Exception thrown while processing command”,“attr”:{“db”:“admin”,“headerId”:-697198149,“error”:“ClientDisconnect: operation was interrupted”}} c:SHARDING severity:7 ctx:conn1749 @timestamp:Jan 19, 2021 @ 12:47:22.867 host:mongodb-router-5 s:D1 @timegenerated:Jan 19, 2021 @ 12:47:03.144 host_ip:10.204.0.165 msg:Exception thrown while processing command severity_label:debug type:rsyslog id:22,772 attr.headerId:-697,198,149 attr.error:ClientDisconnect: operation was interrupted attr.db:admin program:mongos @version:1 port:34,092 facility_label:user t.$date:Jan 19, 2021 @ 12:47:02.856 pid:1606 logsource:mongodb-router-5syslogtag:mongos[1606]: facility:1 _id:CqjCG3cB7JtNHvfUX1ig _type:_doc _index:mongodb-2021.01.19 _score: -Any more ideas? These client disconnects are happening frequently. As i stated we’ve followed the administrative guide to a tee. This is very disruptive to our application.",
"username": "Firass_Almiski"
},
{
"code": "",
"text": "It does not sound like a MongoDB problem, but rather, a network, hardware, or hypervisor problem with net connectivity. In 2017 I saw something like this and it turned out to be net connectivity between 2 subnets in separate wings of a factory installation. Have your network people looked at this?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I am facing a similar issue here, while inserting the document or listing the database names from MongoDB server I am facing the exact above issue. Attaching logs below:317 [main] INFO org.mongodb.driver.cluster - Cluster description not yet available. Waiting for 30000 ms before timing out\n682 [cluster-ClusterId{value=‘643f9d4e2e7bc45d1876b7a6’, description=‘null’}-10.7.202.205:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server 10.7.202.205:27017\ncom.mongodb.MongoSocketReadException: Exception receiving message\nat com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:543)\nat com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:428)\nat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:289)\nat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:255)\nat com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:83)\nat com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:33)\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:106)\nat com.mongodb.internal.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:63)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:127)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\nat java.base/java.lang.Thread.run(Thread.java:833)\nCaused by: java.net.SocketException: Connection reset\nat java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:323)\nat java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:350)\nat java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:803)\nat java.base/java.net.Socket$SocketInputStream.read(Socket.java:966)\nat com.mongodb.internal.connection.SocketStream.read(SocketStream.java:89)\nat com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:554)\nat com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:425)\n… 9 more",
"username": "Prithvi_Singh"
}
] | Connection Reset Errors | 2020-12-20T03:19:10.774Z | Connection Reset Errors | 10,601 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi everyone,I’m trying to optimize an aggregation query having a match based on some calculated fields from a lookup stage, using limit and skip for pagination.Let’s consider 2 collections, invoice and invoiceLine.invoice will have a property called status, based the paidAt of the invoiceLines.\ninvoiceLine will have an invoiceId, and a paidAt that is a date.invoice status can have different values depending on all the items paidAt. If one is unpaid, status is unpaid, if all are paid, status is paid for exemple.If i want to query only the 500 first paid invoices, how can I limit my query?For now, the limit operator is located at the end of my aggregation pipeline, but I feel like it term of efficiency, i could do better.I’ve created an index on invoiceId in the invoiceLines collection, that helps, but with a lot of data, it’s still quite slow.If I use limit at the start of my pipeline, I only have 500 invoices to work with, and then some are filtered, so I do not have the 500 expected invoices in the result.I also feel like I can’t use a match in the lookup stage, because the status is calculated using all the invoiceLines paidAt of the invoice.So I don’t really know how to paginate efficiently this use case.Hoping that the question is clear, tell me if you need any complementary information.\nThanks in advance.",
"username": "Lucas_GHANEM"
},
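For concreteness, here is roughly the shape of pipeline being described (collection and field names are guesses based on the description; this is just the current approach written out, not an optimisation):

db.invoice.aggregate([
  { $lookup: { from: "invoiceLine", localField: "_id", foreignField: "invoiceId", as: "lines" } },
  { $addFields: {
      status: {
        $cond: [
          { $allElementsTrue: [ { $map: { input: "$lines", as: "l", in: { $ne: [ "$$l.paidAt", null ] } } } ] },
          "paid",
          "unpaid"
        ]
      }
  } },
  { $match: { status: "paid" } },
  { $skip: 0 },
  { $limit: 500 }
])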
{
"code": "",
"text": "Please share sample documents from all collections so that we can experiment with your use case.",
"username": "steevej"
}
] | Optimize aggregation query limit based on lookup results | 2023-04-25T19:30:29.755Z | Optimize aggregation query limit based on lookup results | 422 |
null | [
"aggregation"
] | [
{
"code": "{ \n \"name\":\"$name\",\n \"value\":\"$value\",\n \"object_first_half_info\":{\n \\\\ this can contain up to 300 items\n },\n \"object_second_half_info\":{\n \\\\ this can also have about 300 items\n }\n}\n{ \n \"name\":\"$name\",\n \"value\":\"$value\",\n \"object_first_half_info\":{\n \\\\ same as before\n },\n \"object_second_half_info\":{\n \\\\ contains data for before + data from object_first_half_info\n }\n}\n",
"text": "Hi,\nThis question is for mongoDB version 4.0.0\nI am trying to move data from one object to another object.\nExample to understand the problem better:\ni have a document that is like → what i want to do is copy everything i have in “object_first_half_info” to “object_second_half_info”. Basically updating everything i got in object A to B while still keeping everything else same.So the data should be like:As i have a lot of fields, i cant do manual work and i have to do this for a lot of documents not just 1.",
"username": "Rajnish_Lather"
},
{
"code": "",
"text": "There are plenty of examples of what you want to do in",
"username": "steevej"
}
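One way this is commonly expressed is with a pipeline-style update and $mergeObjects, sketched below. Note that pipeline updates require MongoDB 4.2+, so on 4.0.0 this exact form is not available and a read-modify-write from the application would be needed instead. The collection name is a placeholder, and the order of the two operands decides which side wins on duplicate keys:

db.items.updateMany(
  {},
  [ { $set: {
      object_second_half_info: {
        $mergeObjects: [ "$object_second_half_info", "$object_first_half_info" ]
      }
  } } ]
)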
] | Copying one object's data to another object in same document without manually entering all values | 2023-04-28T08:28:09.320Z | Copying one object’s data to another object in same document without manually entering all values | 408 |
null | [
"queries"
] | [
{
"code": "{\n \"_id\": \"64482f0adad6e42a7b490168\",\n \"Phoebe\": [\n [\n \"Rachel\",\n \"PAX0\",\n \"FRE2\"\n ],\n [\n \"Ross\",\n \"PAX1\",\n \"FRE1\"\n ]\n ]\n}\n",
"text": "Hello. I have data stored in a mongodb document as below:This indicates Phoebe is the person of interest, and Rachel and Ross are her friends with some data for them stored in their respective arrays.\nThis is basically arrays within arrays within a dictionary.Im stuck trying to do basic CRUD on this.For example, Id like to retrieve only the data at index 0 of the outermost array, which in the example above, would be the data for Rachel.Im using projection but i get error messages. Ive tried using dot notation and specifying the index position (Phoebe.0). Ive tried to use the $ parameter instead of specifying the location (Phoebe.$). I also tried using an aggregate query but this runs the query against every document in the collection, which is just senseless in this context.db.CName.find(\n{‘Phoebe’:{‘$exists’:True}},{‘Phoebe.0’:1}\n)db.CName.find(\n{‘Phoebe’:{‘$exists’:True}},{‘Phoebe.$’:‘Rachel’}\n)db.CName.aggregate(\n[\n{ “$project”: { “matched”: { “$arrayElemAt”: [ “$Phoebe”, 0 ] } } }\n]\n)Any help, guidance would be much appreciated!",
"username": "Subinay_Bedi"
},
{
"code": "",
"text": "Using values as field names is not a good idea. SeeLearn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.",
"username": "steevej"
}
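Independently of that restructuring advice, the "runs against every document" concern from the question can be addressed by putting a $match stage in front of the projection; a small sketch (firstFriend is just an arbitrary output field name):

db.CName.aggregate([
  { $match: { Phoebe: { $exists: true } } },
  { $project: { firstFriend: { $arrayElemAt: [ "$Phoebe", 0 ] } } }
])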
] | CRUD for array data | 2023-04-28T10:06:38.220Z | CRUD for array data | 383 |
null | [
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "MongooseError: Model.find() no longer accepts a callback\n",
"text": "I am getting this error message when trying to run a callback function in the .find() mongoose method. Please suggest any alternative way to perform the operation done by the .find() method.",
"username": "Ankit_Patnaik"
},
{
"code": "MongooseError: Model.find() no longer accepts a callback\nasync/awaitpromises// Before\nconn.startSession(function(err, session) {\n // ...\n});\n\n// After\nconst session = await conn.startSession();\n// Or:\nconn.startSession().then(sesson => { /* ... */ });\n",
"text": "Hi @Ankit_Patnaik,Welcome to the MongoDB Community forums The usage of callback functions has been deprecated in Mongoose 7.x. Therefore, if you were using these functions with callbacks, it is recommended to use either async/await or promises. If async functions do not meet your requirements, you can go for promises.I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "can you please tell me in which file i change?",
"username": "monika_verma"
},
{
"code": "List.find().then(function(lists){\n\n//Return results\n\n})\n",
"text": "if you were finding for examplein the above, the List model is finding everything within the collection and you can tap on the results (lists) and then use the info the way you want. This solves that issue",
"username": "mwebaze_nicholas"
},
{
"code": "const router = require('express').Router();\nconst { Entry } = require(\"../../models\");\n\nrouter.get('/journal', async (req, res) => {\n //did not work (entry is my model)\n // Entry.find({}, (err, result) => {\n // if (err) {\n // res.send(err)\n // }\n // res.send(result)\n // })\n\n//this worked\n Entry.find().then((err, result) => {\n console.log(\"result\")\n if (err) {\n res.send(err)\n }\n res.send(result)\n })\n});\n\nmodule.exports = router;\n",
"text": "My issue occurred in my controllers director in my API routes.",
"username": "Brandon_Espinosa"
}
] | Any alternative for '<Modelname>.find()' function in mongoose | 2023-03-01T04:08:50.227Z | Any alternative for ‘<Modelname>.find()’ function in mongoose | 17,923 |
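For Mongoose 7+, the advice in this thread boils down to awaiting the query instead of passing a callback. A minimal sketch of the Express route from the last post rewritten with async/await; the route path and the `Entry` model are taken from that post, the error handling is an assumption.

```js
const router = require("express").Router();
const { Entry } = require("../../models");

router.get("/journal", async (req, res) => {
  try {
    // Mongoose 7: find() returns a thenable query, no callback argument
    const entries = await Entry.find({});
    res.json(entries);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;
```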
null | [] | [
{
"code": "",
"text": "Hi,In my application we have users who signed up using Google auth or Email/Password. But now I am facing an issue. If any existing user who signed up Email/password if try to sign in with Google login, new user id getting created. I am looking for an option to avoid creating multiple user and link user having same user id by different provider type. I went through documentation and got to know about linkCredentials() function but I guess that’s only applicable if user is authenticated.Please suggest some solution for this, I tried switching to other auth providers but even if I try to authenticate, new id will be created since provider type is different.",
"username": "Nagendra_Kushwah"
},
{
"code": "",
"text": "Did you find a solution for this?",
"username": "Try_Catch_Do_Nothing"
}
] | Link User with multiple provider Type | 2022-04-10T16:59:00.466Z | Link User with multiple provider Type | 1,383 |
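The thread is unresolved; one possible approach, sketched for the Realm JS SDK and hedged (the `app`, `email`, `password`, and `idToken` values are assumptions, not from the thread): `linkCredentials()` only works on a currently authenticated user, so the linking has to happen while the original email/password user is logged in, before a separate Google login ever creates a second account.

```js
// Sketch: link a Google identity to an existing email/password account (Realm JS SDK).
// Works only while that user is logged in; it cannot merge two already-separate accounts.
const user = await app.logIn(Realm.Credentials.emailPassword(email, password));
await user.linkCredentials(Realm.Credentials.google({ idToken }));
// From now on, either login method should resolve to the same user.id.
```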
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 6.0.6-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.5. The next stable release 6.0.6 will be a recommended upgrade for all 6.0 users.Fixed in this release:6.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "Hi, could you point me to steps on how to install this release candidate on Ubuntu 22.04? Does it need to be compiled from source?",
"username": "AmitG"
},
{
"code": "wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu2204-6.0.6-rc1.tgz\n",
"text": "sorry… in case it’s helpful, you can download your files from here https://www.mongodb.com/download-center/community/releases/development:",
"username": "AmitG"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 6.0.6-rc0 is released | 2023-04-27T11:52:02.959Z | MongoDB 6.0.6-rc0 is released | 859 |
null | [
"react-native"
] | [
{
"code": " if (idToken) {\n const credentials = Realm.Credentials.google({\n idToken,\n // authCode: code,\n // idToken: access_token,\n });\n\n realm\n .logIn(credentials)\n .then((user) => {\n console.log(`Logged in with id: ${user.id}`);\n })\n .catch((err) => {\n console.log(\"@realm.login: err\", err);\n });\n }\n}\n{ \"oauth2-google\": { \"name\": \"oauth2-google\", \"type\": \"oauth2-google\", \"disabled\": false, \"config\": { \"clientId\": process.env.GOOGLE_CLIENT_ID, \"openId\": false }, \"secret_config\": { \"clientSecret\": process.env.GOOGLE_CLIENT_SECRET }, \"metadata_fields\": [\"name\", \"first_name\", \"last_name\", \"picture\", \"email\" ], \"redirect_uris\": [], \"domain_restrictions\": [] } } const getUserInfo = async (token) => { let userInfoResponse = await fetch( \"https://www.googleapis.com/oauth2/v2/userinfo\", { headers: { Authorization:userInfoResponse\n .json()\n .then((data) => {\n console.log(\"@userInfoResponse > data:\", data);\n const user = data;\n })\n .catch((err) => console.log(\"@userInfoResponse: err\", err));\n",
"text": "I’m trying to use realm.login.`\nconst [request, response, promptAsync] = Google.useAuthRequest({\nexpoClientId: process.env.GOOGLE_EXPO_CLIENT_ID,\nandroidClientId: process.env.GOOGLE_ANDROID_CLIENT_ID,\niosClientId: process.env.GOOGLE_IOS_CLIENT_ID,\nwebClientId: process.env.GOOGLE_WEB_CLIENT_ID,\n});useEffect(async () => {\n// try {\nif (response?.type === “success”) {\nconst idToken = response.params.id_token;\nconst { access_token, code } = response.params;}, [response]);\n`\nWith the above code, if I do console.log(credentials) it’s an empty object. Is it normal?For Realm,and Google console.\nI have the web application credential for Realm;\nAuthorized JavaScript origins: https://realm.mongodb.com\nAuthorized Redirect URIs: https://ap-southeast-1.aws.stitch.mongodb.com/api/client/v2.0/auth/callback\nAnd 4 more credentials (Web application for Expo Go Proxy, Web application for web client, iOS, and Android)And also have /auth/providers.json (I’m not sure it’s needed even I set it on the Realm UI page)\n{ \"oauth2-google\": { \"name\": \"oauth2-google\", \"type\": \"oauth2-google\", \"disabled\": false, \"config\": { \"clientId\": process.env.GOOGLE_CLIENT_ID, \"openId\": false }, \"secret_config\": { \"clientSecret\": process.env.GOOGLE_CLIENT_SECRET }, \"metadata_fields\": [\"name\", \"first_name\", \"last_name\", \"picture\", \"email\" ], \"redirect_uris\": [], \"domain_restrictions\": [] } }FYI,\nIf I run this code with the access_token, I can receive the user data without any issue.\n const getUserInfo = async (token) => { let userInfoResponse = await fetch( \"https://www.googleapis.com/oauth2/v2/userinfo\", { headers: { Authorization:Bearer ${token}` },\n},\n).catch((err) => console.log(“@getUserInfo: err”, err));};`",
"username": "HeeYoung_Moon"
},
{
"code": "",
"text": "Did you figure out the issue here?",
"username": "Try_Catch_Do_Nothing"
}
] | Code 47 error on realm.logIn(credentials) with Google oAuth | 2022-02-03T16:44:18.976Z | Code 47 error on realm.logIn(credentials) with Google oAuth | 3,312 |
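The thread has no accepted answer. One thing worth checking, sketched and hedged here rather than offered as the fix: the provider config shown has "openId": false while the client passes an ID token, and in the realm-js SDK the credential payload has to match the provider mode (raw ID token when OpenID Connect is enabled, the Google server auth code otherwise). `app` below stands for the Realm.App instance that the post calls `realm`.

```js
// If OpenID Connect is enabled on the Google provider, pass the raw ID token:
const user = await app.logIn(Realm.Credentials.google({ idToken }));

// With openId: false, the provider expects the Google server auth code instead:
// const user = await app.logIn(Realm.Credentials.google({ authCode: code }));

console.log(`Logged in with id: ${user.id}`);
```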
null | [
"node-js",
"serverless"
] | [
{
"code": "",
"text": "Disclaimer: This is my first time setting up a connection between AWS and Atlas. Sorry if this is a new guy question but I did search the forum with no luck.\nI have set up peering between an AWS VPC and an Atlas Serverless VPC, including the routing table and whitelisting my AWS CIDR. Both sides show the peering as available. However, I keep getting “MongoNetworkError: connection 1 to xx.xx.xxx.xxx:2717 close” from my nodejs application. I have double check that the user/pwd are correct. The only clue I have is that when I whitelist network access from anywhere in Atlas as a test, everything works. Any idea what might be the issue? Any additional information I can provide?",
"username": "Stephen_Rich"
},
{
"code": "",
"text": "I have set up peering between an AWS VPC and an Atlas Serverless VPC, including the routing table and whitelisting my AWS CIDR. Both sides show the peering as available. However, I keep getting “MongoNetworkError: connection 1 to xx.xx.xxx.xxx:2717 close” from my nodejs application. I have double check that the user/pwd are correct. The only clue I have is that when I whitelist network access from anywhere in Atlas as a test, everything works. Any idea what might be the issue?I was able to get in touch with support just now (which was nice), but it turns out that peering is not supported in serverless configurations. I really wish they would disable the interface in the GUI if it is an unavailable option.",
"username": "Stephen_Rich"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | AWS VPC Peering First Time Connection Issues (New Guy Question Sorry) | 2023-04-28T20:52:56.688Z | AWS VPC Peering First Time Connection Issues (New Guy Question Sorry) | 787 |
[
"aggregation",
"queries",
"indexes"
] | [
{
"code": "",
"text": "Hello Community,I’m wokring on a draft document on the MongoDB Indexes Simplified as below at my website, could you please review it and let me know for any suggestions/corrections if any.https://lnkd.in/gQpHH5gF\n\nMongoDB Indexdes Simplified !\n\nMongoDB #MongoDBIndexes #mongodb MongoDB Master MongoDB DBA Jobs MongoDB en Español MongoDB Arabic…Thanks,",
"username": "Srinivas_Mutyala"
},
{
"code": "",
"text": "First, thx for writing this great article for students.I didn’t go through everything in it but i noticed this:MongoDB indexes work by creating an index object that references the location of the data in the database. The index object consists of a key and a value. The key is the field or fields that you want to index, and the value is a reference to the location of the data in the database.Generally secondary indexes point to the primary key of the record instead of the ultimate data location. This is to reduce the complexity (and overhead) when records are moving around on disk. Only primary index points directly to the data page on disk.I’m not a Mongodb Employee, so you can verify this with them.",
"username": "Kobe_W"
}
] | MongoDB Indexes Explained - Document for review/suggestions | 2023-04-28T05:43:12.177Z | MongoDB Indexes Explained - Document for review/suggestions | 974 |
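A quick, hedged way to observe the behaviour discussed above in mongosh (the collection and field names here are made up): create a secondary index and inspect the query plan, which shows an index scan stage followed by a fetch of the full documents.

```js
// Hypothetical example collection and field; adjust to your own data.
db.users.createIndex({ email: 1 });

db.users.find({ email: "phoebe@example.com" }).explain("executionStats");
// winningPlan: IXSCAN on { email: 1 } -> FETCH
// i.e. the index entry locates the record, then the full document is fetched.
```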
null | [
"aggregation",
"java"
] | [
{
"code": "\"details\" : { \"details\" : \"{ \\\"batteryLvl\\\": \\\"85\\\", \\\"operation\\\": \\\"DEVICE_STATUS_BATTERY\\\", \\\"doorSerial\\\": \\\"TLKJ3ACT53UTZTNW\\\", \\\"logInstant\\\": \\\"1682531305560\\\", \\\"class\\\": \\\"br.com.loopkey.model.logging.CommBatteryLoggingModel\\\" }\"\n> db.logs.aggregate([ { $match: { $and: [ { \"doorId\": 12804 }, { \"operation\": 130 } ] } }, { \"$project\": { \"doorId\": \"$doorId\", \"details\": \"$details\",\"teste\":{$toString: \"$details\"}} }])\n2023-04-27T00:59:03.603+0000 E QUERY [js] Error: command failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Unsupported conversion from object to string in $convert with no onError value\",\n\t\"code\" : 241,\n\t\"codeName\" : \"ConversionFailure\"\n} : aggregate failed :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\ndoassert@src/mongo/shell/assert.js:18:14\n_assertCommandWorked@src/mongo/shell/assert.js:536:17\nassert.commandWorked@src/mongo/shell/assert.js:620:16\nDB.prototype._runAggregate@src/mongo/shell/db.js:260:9\nDBCollection.prototype.aggregate@src/mongo/shell/collection.js:1062:12\n@(shell):1:1\n\n",
"text": "Hi guys,I need help to get information through the mongo query, but this information is inside a serialized java object.My object have this content:I’m trying to convert this information into a string in several ways, using toString, or convert but I’m not succeeding. Because from there I would break information that I need.When I try to convert using toString function:The version of mongo is 4.0.6. At first I have no way to update this mongo, so I would like to know if there is any way I can force this conversion to be able to treat it as a string?",
"username": "Filipe_Carvalhedo"
},
{
"code": "",
"text": "Hi @Filipe_Carvalhedo and welcome to MongoDB community forums!!The version of mongo is 4.0.6.I believe you’re seeing the effect of SERVER-46079, which is a known issue. And it’s not supported in the latest 6.0 version either, but it’s likely to be resolved in a future release.In the meantime, we recommend upgrading to the latest supported version (refer to the Legacy Support Policy), as it contains the latest upgrades and bug fixes.That being said, we can try to help you with a workaround, which may or may not be suitable for your use case. To do so, could you please provide us with the sample document and the expected output?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "\"\"details\" : { \"details\" : \"{ \\\"batteryLvl\\\": \\\"85\\\", \\\"operation\\\": \\\"DEVICE_STATUS_BATTERY\\\", \\\"doorSerial\\\": \\\"TLKJ3ACT53UTZTNW\\\", \\\"logInstant\\\": \\\"1682531305560\\\", \\\"class\\\": \\\"br.com.loopkey.model.logging.CommBatteryLoggingModel\\\" }\"\"\n",
"text": "Hi @Aasawari, thanks for the answer. I’m using metabase software with mongo db. I need to extract the battery level of the details field. The details field have this content:I need to extract the batteryLvl infomartion, in this case is 85.I know how to extract this information when the data is a string, but in this case is an object that I can’t convert to string.",
"username": "Filipe_Carvalhedo"
}
] | Unsupported conversion from object to objectId in $convert with no onError value | 2023-04-27T01:03:54.890Z | Unsupported conversion from object to objectId in $convert with no onError value | 969 |
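A hedged workaround sketch for the 4.0 cluster in this thread: the `$toString` fails because "$details" is an object, but the nested "$details.details" field is already a string (the serialized JSON), so it can be fed straight into string expressions with no conversion. The `$split` trick below assumes batteryLvl is the first quoted value in that string, as in the sample; a more robust parse would use `$indexOfCP`/`$substrCP`.

```js
// MongoDB 4.0-compatible: operate on the nested string field directly.
db.logs.aggregate([
  { $match: { doorId: 12804, operation: 130 } },
  { $project: {
      detailsText: "$details.details",   // already a string
      // '{ "batteryLvl": "85", ... }' split on '"' -> [ '{ ', 'batteryLvl', ': ', '85', ... ]
      batteryLvl: { $arrayElemAt: [ { $split: ["$details.details", "\""] }, 3 ] }
  } }
])
```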