Dataset columns:
- image_url: string (lengths 113-131); null when the thread has no header image
- tags: list of tag strings
- discussion: list of posts, each { code, text, username }
- title: string (lengths 8-254)
- created_at: string (length 24, ISO 8601 timestamp)
- fancy_title: string (lengths 8-396)
- views: int64 (range 73 to 422k)
https://www.mongodb.com/…3_2_1024x144.png
[ "app-services-hosting" ]
[ { "code": "", "text": "I’m trying to upload the production build files of Angular 8 project. Assets, html and js files are uploading but some .js files and .js.map are not uploading to the hosting of realm\n\nCapture1127×159 4.67 KB\n", "username": "Faizan_Afzal" }, { "code": "", "text": "@Faizan_Afzal Hello, did you find out how to solve this? I just started having this problem with react build.", "username": "SOCAL_First_Time_Homebuyer" } ]
Unable to upload angular build files to Realm hosting
2021-09-08T12:27:03.394Z
Unable to upload angular build files to Realm hosting
3,494
null
[ "aggregation", "queries", "java", "atlas-device-sync", "kotlin" ]
[ { "code": "", "text": "It is been weeks trying to figure how to use Pagination using Realm Sync or Data API (Preview). Nothing Works! Please help with any type of Document on how to Paginate using Realm Sync as i am doing an E-Commerce Application and in need of Pagination as i cannot have many requests to show products. Appreciate the help. Note i am coding Either Java or Kotlin.Thank you.", "username": "laith_ayyat" }, { "code": "", "text": "Hi, Realm Sync does not support pagination. The ideal way to accomplish this would be to sync down the result set you are looking for and then just using the Realm query language to get the subset of data that is in your local database. The reason for this is that syncing on a paginated set of data is a bit of an anti-pattern as its just emulating a REST client but with a lot more overhead and uncertainty about what it means to sync a paginated set of data (if something is inserted in the beggining of the result set should we sync that?).If you cannot sync down the entire query of data, then perhaps you can use GraphQL or the Remote MongoDB client of the SDK: https://www.mongodb.com/docs/realm/sdk/node/examples/query-mongodb/", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks to lazy evaluation, the common task of pagination becomes quite simple. For example, suppose you have a results collection associated with a query that matches thousands of objects in your realm. You display one hundred objects per page. To advance to any page, simply access the elements of the results collection starting at the index that corresponds to the target page.This is an answer on Realm Document, however is there any Example on this!", "username": "laith_ayyat" }, { "code": "", "text": "You can use Sort and Limit to perform pagination on results that are already in a realm: https://www.mongodb.com/docs/realm/reference/realm-query-language/#sort--distinct--limit", "username": "Tyler_Kaye" }, { "code": "", "text": "So, i should ignore the fact i have large data and just use Realm Sync? which is the same as Realm Results right?", "username": "laith_ayyat" }, { "code": "", "text": "Its up to you. You can either sync down all of the data and just paginate locally (should make for very fast scrolling once the data has been synced down) or you can use the MongoDB Query endpoing in the SDK’s which go through Realm Cloud or GraphQL if you want it to be more like a REST client.Do you know roughly how much data you are talking about here?", "username": "Tyler_Kaye" }, { "code": "", "text": "Lets say i have a Recycler View that will have products that show 500 to 1000 products? how would i go from there? i would appreciate the help of any kind.\nI am also looking to filter these products depending on client selections such as Color, Size, etc…Thank you.", "username": "laith_ayyat" }, { "code": "", "text": "Like I said above, it really just depends on your applications constraints and the size of your data. If its just 1000 objects/documents that are all medium sized I imagine, and you dont have any major networking constraints or storage constraints (if this is a normal mobile phone then you shouldn’t be too worried about this amount of data), then the ideal solution would be to sync down all 1000 of the documents and then have your application’s view controller read from the synced realm and perform whatever filtering you would like. 
This will provide a low-latency experience when using the application because when the user is scrollong, adding a new filter, etc, no new data needs to be requested from the server as it is all already stored on the device.", "username": "Tyler_Kaye" } ]
Realm Sync Pagination
2022-05-03T13:29:13.917Z
Realm Sync Pagination
5,305
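A minimal sketch of the "sync everything, then paginate locally" approach Tyler describes. It is shown with the Node.js Realm SDK for brevity; the Java and Kotlin SDKs named in the thread expose the same filter/sort/limit ideas. The Product schema, field names, and page size are assumptions, not taken from the thread.

```js
const Realm = require("realm");

// Hypothetical model; the thread's actual product schema is not shown.
const ProductSchema = {
  name: "Product",
  primaryKey: "_id",
  properties: { _id: "objectId", name: "string", category: "string", price: "double" },
};

async function main() {
  // A plain local realm is opened here; a synced app would pass a sync config instead.
  const realm = await Realm.open({ schema: [ProductSchema] });

  // Results are lazy: filtering and sorting do not copy objects, and only the
  // rows accessed below are materialized, so a "page" is just index arithmetic.
  const results = realm
    .objects("Product")
    .filtered("category == $0", "shoes")
    .sorted("price");

  const pageSize = 100;
  const pageIndex = 2; // third page
  const start = pageIndex * pageSize;
  const end = Math.min(start + pageSize, results.length);
  for (let i = start; i < end; i++) {
    console.log(results[i].name, results[i].price);
  }

  realm.close();
}

main().catch(console.error);
```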
https://www.mongodb.com/…5_2_1024x535.png
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "It’s May the Fourth and in honour of Star Wars Day, MongoDB World tickets are on sale until May the 6th and are reduced to $400. And, you can still avail of the MDBHACK50 (if you’re quick - only for 1st 25 registrants) to get an extra 50% off - so making MongoDB World just $200, which is definitely the best value in the universe!!MongoDB World is where the world's fastest-growing data community comes to connect, explore, and learn. We're looking for speakers who can inspire attendees by introducing them to new technologies, ideas, and solutions. Join us from June 7 to June 9,...", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
May the Fourth be with you! Special rate for World tickets!
2022-05-04T13:00:20.100Z
May the Fourth be with you! Special rate for World tickets!
2,731
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Wow!! Do we have some great news for those thinking of attending MongoDB World in New York on June 7 to 9? Yes we do!!In the hackathon on-boarding, I’ve been mentioning that we’ve a discount code HACK25 for 25% off a MongoDB World ticket…but now (drum roll please!! ) I’m thrilled to announce that we’ve managed to secure a limited discount of 50%!! Yes, 50%!! We have a new code -And the first 25 to use this code, get 50% off registration to MongoDB World! So don’t delay, if you’re thinking of it, then go for it, this won’t be around for very long and it’s exclusively for the Hackathon participants!!We hope to see you in New York! ", "username": "Shane_McAllister" }, { "code": "", "text": "It’s May the Fourth and in honour of Star Wars Day, MongoDB World tickets are on sale today until May the 6th and are reduced to $400. And, you can still avail of the MDBHACK50 (if you’re quick - only for 1st 25 registrants) to get an extra 50% off - so making MongoDB World just $200, which is definitely the best value in the universe!!MongoDB World is where the world's fastest-growing data community comes to connect, explore, and learn. We're looking for speakers who can inspire attendees by introducing them to new technologies, ideas, and solutions. Join us from June 7 to June 9,...", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Thinking of going to MongoDB World? We've an exclusive for hackathon participants!
2022-04-22T23:01:30.026Z
Thinking of going to MongoDB World? We’ve an exclusive for hackathon participants!
2,781
null
[ "data-modeling", "mdbw22-hackathon" ]
[ { "code": "", "text": "I can’t form a team in this Hackathon, due to time limitations. But I like to share this idea and if a team likes to pick up I’d be glad to support you when it comes to data architecture / schema design. If you look at the use of personal pronouns as well as their (frequency of) use in news, can you deduce a correlation to the mood of the news? Can potential findings be used for nowcasting the present or even forecasting events?various3open", "username": "michael_hoeller" }, { "code": "", "text": "I like it! Is it “us” or “them”? Are we to blame for the mess, or others? How much personal responsibility do we take?", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Looking for Hackers: i/we/us vs. you/they/their in News
2022-05-04T06:03:07.251Z
Looking for Hackers: i/we/us vs. you/they/their in News
2,474
null
[ "data-modeling", "mdbw22-hackathon" ]
[ { "code": "", "text": "I can’t form a team in this Hackathon, due to time limitations. But I like to share this idea and if a team likes to pick up I’d be glad to support you when it comes to data architecture / schema design. This is more a fun project but it might show interesting aspects of data mining.\nThe question is whether certain colors occur more frequently in certain languages / regions / cultures and whether a reliable correlation can be proven.\nE.g. (completely personal opinion) I get the impression of “green” when I think of Ireland. The sources which point to a certain color might be of interest, too. Are these daily news messages, pictures, text or tourism infos, campaigns?some analytical skills + various3open", "username": "michael_hoeller" }, { "code": "", "text": "Another great idea. Green is indeed associated with Ireland…but if you live here, it’s more “grey+overcast” ", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Looking for Hackers: Do regions of the world have a favorite color?
2022-05-04T05:47:23.896Z
Looking for Hackers: Do regions of the world have a favorite color?
2,449
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Here are some useful resources for participating in the Hackathon", "username": "Shane_McAllister" }, { "code": "", "text": "Cross-linking a couple of other useful links on the dataset itself:", "username": "webchick" }, { "code": "", "text": "I have built a Python package to download GDELT data. You can install it from:A set of tools to support downloading GDELT dataYou will need a Python interpreter to install it.", "username": "Joe_Drumgoole" }, { "code": "", "text": "We were fortunate to receive some further links and resources are GDELT that will help all participants to understand and get familiar with this fantastic Dataset -The Global Knowledge Graph (GKG) is probably the most relevant, since it is about general purpose extraction of core metadata. You can drop the GCAM field, since it doubles the size of the data and while it contains incredibly rich and deep sentiment data is likely overkill for this kind of analysis:There’s also the entity dataset that uses the NLP API to annotate a subset of articles each day:You can see how this can be used for detection and contextualization:And the geographic graph that could showcase geographic analysis in MongoDB:There are also image annotations:Then there is the video annotation dataset of television news:This can be used for interesting things like tracking tweets on TV news:There is also a rich embedded metadata graph that compiles JSON-LD and META blocks:There is an excerpted version at:We hope the above links and details will inspire you to dig deeper into the GDELT resources. Remember, we will be running GDELT sessions at least twice a week on our livestream - do join us.", "username": "Shane_McAllister" }, { "code": "", "text": "I used the read-only url source, but would it be possible to connect to it from atlas (my account) so that I could produce some charts with the data?", "username": "Ilan_Toren" }, { "code": "", "text": "You can import the data into your own cluster. If you follow the steps from @Joe_Drumgoole and his tools here - gdelttools · PyPI that will help you.", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Resources for participants in the MongoDB World Hackathon '22
2022-04-05T19:38:20.530Z
Resources for participants in the MongoDB World Hackathon ‘22
5,552
null
[ "aggregation", "queries", "node-js", "data-modeling" ]
[ { "code": "{\n \"client\": \"John Doe\",\n \"id\": \"2345656\",\n \"category\": \"Urgent\",\n \"address\": \"Some Street\",\n \"buildingNumber\": \"23\",\n}\n", "text": "Hello,Consider we have the following Shipment document:Let’s say this document is part of a collection that contains thousands of different clients, addresses and a couple of different categories.Suppose we wanted to see all the Urgent shipments and we ran the find() query obtaining the desired result.Here’s the question: When running this main query to retrieve all the Urgent shipments available, what would be the best and the most optimal way to in addition also obtain related results such as:An example of the desired outcome:Example of the first found URGENT element presented[ ] Shipment[#345656] to John Doe, Some Street, 23.Shipments of different category to the same client have been found:\n[ ] Shipment[#845543] to John Doe, Some Street, 23, Category: NORMAL\n[ ] Shipment[#345543] to John Doe, Some Street, 23, Category: NORMALOther shipments to the same building:\n[ ] Shipment[#141123] to Mary White, Some Street, 23, Category: NORMALOther shipments to the same street:\n[ ] Shipment[#741456] to Simon Brown, Some Street, 74, Category: URGENTAfter that the query listing would continue with another URGENT document found:[ ] Shipment[#342346] to Brian Smith, Some Other Street, 90.Shipments of different category to the same client have been found:\n[ ] Shipment[#242345] to Brian Smith, Some Other Street, 90, Category: NORMALNo other shipments to the same building found.Other shipments to the same street:\n[ ] Shipment[#141456] to Susy White, Some Other Street, 103, Category: NORMALAnd etc…What would be the best way to achieve this?Thank you very much for your attention and time, and help,Ren", "username": "RENOVATIO" }, { "code": "", "text": "Publish a couple of sample documents that matches all the possibilities you want. Without real sample documents it is hard to experiment and test.What have you tried so far? How did it fails? Knowing this will prevent us investigating in a direction that you already know does not work.At first glance, it will probably involve $lookup within $facet after a $match for category:Urgent.It is not clear what you want to do when multiple category:Urgent are for the same client.", "username": "steevej" } ]
Aggregate found documents with related documents
2022-05-04T10:10:29.631Z
Aggregate found documents with related documents
1,407
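As a rough illustration of Steeve's suggestion, here is one hedged way the pipeline could be shaped: a $match on the urgent shipments, followed by self-$lookup stages on the same collection to pull in the related documents (per-document $lookup stages are used here rather than $facet, since the related sets differ for every urgent shipment). The collection name "shipments" is assumed; the field names come from the sample document above; this sketch is untested against real data.

```js
db.shipments.aggregate([
  { $match: { category: "Urgent" } },
  // Shipments of a different category to the same client:
  {
    $lookup: {
      from: "shipments",
      let: { client: "$client", category: "$category" },
      pipeline: [
        { $match: { $expr: { $and: [
          { $eq: ["$client", "$$client"] },
          { $ne: ["$category", "$$category"] }
        ] } } }
      ],
      as: "sameClientOtherCategory"
    }
  },
  // Other shipments to the same building (same street and number, other clients):
  {
    $lookup: {
      from: "shipments",
      let: { client: "$client", address: "$address", building: "$buildingNumber" },
      pipeline: [
        { $match: { $expr: { $and: [
          { $eq: ["$address", "$$address"] },
          { $eq: ["$buildingNumber", "$$building"] },
          { $ne: ["$client", "$$client"] }
        ] } } }
      ],
      as: "sameBuilding"
    }
  },
  // Other shipments to the same street (different building number):
  {
    $lookup: {
      from: "shipments",
      let: { address: "$address", building: "$buildingNumber" },
      pipeline: [
        { $match: { $expr: { $and: [
          { $eq: ["$address", "$$address"] },
          { $ne: ["$buildingNumber", "$$building"] }
        ] } } }
      ],
      as: "sameStreet"
    }
  }
])
```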
null
[ "queries", "replication", "storage" ]
[ { "code": "2022-05-01T19:57:03.058+0000 W FTDC [ftdc] Uncaught exception in 'FileStreamFailed: F ailed to write to interim file buffer for full-time diagnostic data capture: /var/data/mon godb/diagnostic.data/metrics.interim.temp' in full-time diagnostic data capture subsystem. Shutting down the full-time diagnostic data capture subsystem.2022-05-01T19:57:04.607+0000 E STORAGE [WTCheckpointThread] WiredTiger error (28) [16514 35024:607232][80:0x7f0b7fe7e700], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_ write, 539: /var/data/mongodb/WiredTiger.turtle.set: handle-write: pwrite: failed to write 1262 bytes at offset 0: No space left on device Raw: [1651435024:607232][80:0x7f0b7fe7e70 0], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_write, 539: /var/data/mongodb/ WiredTiger.turtle.set: handle-write: pwrite: failed to write 1262 bytes at offset 0: No sp ace left on device2022-05-01T19:57:04.607+0000 E STORAGE [WTCheckpointThread] WiredTiger error (28) [16514 35024:607232][80:0x7f0b7fe7e700], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_ write, 539: /var/data/mongodb/WiredTiger.turtle.set: handle-write: pwrite: failed to write 1262 bytes at offset 0: No space left on device Raw: [1651435024:607232][80:0x7f0b7fe7e70 0], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_write, 539: /var/data/mongodb/ WiredTiger.turtle.set: handle-write: pwrite: failed to write 1262 bytes at offset 0: No space left on device\n10624 2022-05-01T19:57:04.608+0000 E STORAGE [WTCheckpointThread] WiredTiger error (28) [1651435024:608377][80:0x7f0b7fe7e700], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_ write, 539: /var/data/mongodb/WiredTiger.turtle.set: handle-write: pwrite: failed to write 1262 bytes at offset 0: No space left on device Raw: [1651435024:608377][80:0x7f0b7fe7e70 0], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_write, 539: /var/data/mongodb/WiredTiger.turtle.set: handle-write: pwrite: failed to write 1262 bytes at offset 0: No space left on device\n10625 2022-05-01T19:57:04.608+0000 E STORAGE [WTCheckpointThread] WiredTiger error (28) [16514 35024:608480][80:0x7f0b7fe7e700], file:WiredTiger.wt, WT_SESSION.checkpoint: __wt_turtle_u pdate, 391: WiredTiger.turtle: fatal turtle file update error: No space left on device Raw : [1651435024:608480][80:0x7f0b7fe7e700], file:WiredTiger.wt, WT_SESSION.checkpoint: __wt_ turtle_update, 391: WiredTiger.turtle: fatal turtle file update error: No space left on device\n 2022-05-01T19:57:04.608+0000 F - [WTCheckpointThread] Fatal Assertion 50853 at src /mongo/db/storage/wiredtiger/wiredtiger_util.cpp 486\n10628 2022-05-01T19:57:04.608+0000 F - [WTCheckpointThread] \\n\\n***aborting after fassert() failure\\n\\n\n", "text": "We are using MongoDB ReplicaSet with three nodes (One Primary and two secondaries). We are performing load testing. We got a weird Disk Space issue on one of the secondaries even though there is plenty of disk space available.2022-05-01T19:57:03.058+0000 W FTDC [ftdc] Uncaught exception in 'FileStreamFailed: F ailed to write to interim file buffer for full-time diagnostic data capture: /var/data/mon godb/diagnostic.data/metrics.interim.temp' in full-time diagnostic data capture subsystem. 
Shutting down the full-time diagnostic data capture subsystem.2022-05-01T19:57:04.607+0000 E STORAGE [WTCheckpointThread] WiredTiger error (28) [16514 35024:607232][80:0x7f0b7fe7e700], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_ write, 539: /var/data/mongodb/WiredTiger.turtle.set: handle-write: pwrite: failed to write 1262 bytes at offset 0: No space left on device Raw: [1651435024:607232][80:0x7f0b7fe7e70 0], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_write, 539: /var/data/mongodb/ WiredTiger.turtle.set: handle-write: pwrite: failed to write 1262 bytes at offset 0: No sp ace left on device", "username": "Vineel_Yalamarthi" }, { "code": "", "text": "Hello @Vineel_Yalamarthi ,Can you share some more information on the environment?Thanks,\nTarun", "username": "Tarun_Gaur" } ]
MongoDB WiredTiger Fatal Exception
2022-05-03T03:56:10.371Z
MongoDB WiredTiger Fatal Exception
2,869
null
[ "swift" ]
[ { "code": "@ObservedResults(Item.self, filter: NSPredicate(format: \"lastName begins with A\")) var itemList.onChange(of: searchText, perform: { searchString in itemList.filter = NSPredicate(format: \"firstName begins with %@\", searchString) })filtersortDescriptorvalue = try! Realm(configuration: configuration ?? Realm.Configuration.defaultConfiguration).objects(ResultType.self)", "text": "Hi,I just started playing with the new property wrappers and I have a question regarding ObservedResults. Let’s say I have a collection of items and I want to display only a subset of it:@ObservedResults(Item.self, filter: NSPredicate(format: \"lastName begins with A\")) var itemListLater I would like to further filter that subset using a search field:.onChange(of: searchText, perform: { searchString in itemList.filter = NSPredicate(format: \"firstName begins with %@\", searchString) })Everything is working as expected but, when I clear the search field, I get back all the items without any filter (all the Item records from realm). And if, for any reason, SwiftUI re-renders the view, then I get the items properly filtered.The comment within the ObservedResults implementation says:A base value to reset the state of the query if a user reassigns the filter or sortDescriptorvalue = try! Realm(configuration: configuration ?? Realm.Configuration.defaultConfiguration).objects(ResultType.self)That means whenever the filter or the sort descriptor is changed, all the items are received, right?Is there any way to have returned only the items as they were queried in the beginning?I hope I made myself clear.Thank you!Horatiu", "username": "horatiu_anghel" }, { "code": "", "text": "Could you send us a small, self contained view/views that shows a good use case for this? I intentionally didn’t allow for sub filtering when designing the feature.", "username": "Jason_Flax" }, { "code": "@ObservedResults(T.self, filter: NSPredicate(format: \"documentDate BETWEEN {startDate, endDate}\")) var results\n\nlet rowView: (T) -> RowView\nlet detailView: (T) -> DetailView\n\n@State var selectedRow: T?\n\n@State var searchText: String = \"\"\n\ninit(@ViewBuilder rowView: @escaping (T) -> RowView, @ViewBuilder detailView: @escaping (T) -> DetailView) {\n self.rowView = rowView\n self.detailView = detailView\n}\n\nvar body: some View {\n \n List(selection: $selectedRow) {\n ForEach(results) { row in\n rowView(row)\n .tag(row)\n }\n }\n .frame(minWidth: 300)\n .listStyle(InsetListStyle())\n .toolbar {\n SearchField(searchText: $searchText)\n .frame(minWidth: 150, idealWidth: 200, maxWidth: .infinity)\n Spacer()\n Button(action: { }, label: { Image(\"plus\") })\n }\n .onChange(of: searchText, perform: { searchString in\n $results.filter = searchString.isEmpty ? nil : NSPredicate(format: \"companyName BEGINS WITH %@\", searchString)\n })\n}\n", "text": "Bellow is a snippet of a more generic implementation of a List:`import SwiftUI\nimport RealmSwiftstruct ListView<T: Object, RowView: View, DetailView: View>: View where T: ObjectKeyIdentifiable {}`Let’s assume we have an invoicing app and we want to retrieve only the documents within a specific month/period. Then we can search for the documents belonging to a specific customer.But, as long as I clear the search field, I get all the results from realm not only those within the date range specified in the predicate. And on next rendering the results are updated correctly. 
For example, if the app window becomes inactive the results are updated and filtered properly.The same thing can be applied to sortDescriptor where we can have an initial sorting and then use a control to sort the results as we want. Like NSTableView sorting headers.", "username": "horatiu_anghel" }, { "code": "", "text": "did this ever get resolved? I am having a similar issue.", "username": "Thomas_Rademaker" }, { "code": "", "text": "I believe using Realm’s .searchable implementation preserves the NSPredicate or Query injected initially into @ObservedResults.", "username": "horatiu_anghel" } ]
ObservedResults filtering and sorting
2021-02-26T19:29:02.911Z
ObservedResults filtering and sorting
5,939
null
[ "aggregation", "java", "compass", "spring-data-odm" ]
[ { "code": "Arrays.asList(new Document(\"$lookup\", \n new Document(\"from\", \"indirizzi\")\n .append(\"localField\", \"indirizzi\")\n .append(\"foreignField\", \"_id\")\n .append(\"as\", \"indirizzi\")), \n new Document(\"$match\", \n new Document(\"name\", \"Diego\")))\n<dependency>\n<groupId> org.springframework.boot </groupId>\n<artifactId> spring-boot-starter-data-mongodb </artifactId>\n</dependency>\n", "text": "Good evening everyone,\nI’m new to MongoDb, I tried to export an aggregation from compass to the java language.I couldn’t find an example where to apply this code in Spring Boot using dependency:Can you kindly tell me how can I do? or a guide where I can find a complete example?\nthank you", "username": "Diego83" }, { "code": "", "text": "Hello @Diego83, welcome to the forum!The Compass export of aggregation query to Java language creates Java code that can be used in an application that accesses MongoDB database using MongoDB Java Driver. You will find examples and the Maven dependency details in the Quick Start sub-topic of the linked page.", "username": "Prasad_Saya" } ]
Spring-data MongoDb and compass aggregation export
2022-05-01T16:36:41.553Z
Spring-data MongoDb and compass aggregation export
4,295
null
[ "replication", "containers", "storage" ]
[ { "code": "**docker logs <DB01_container_ID>** \n<snip>\n2022-04-28T00:55:23.546+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Could not find host matching read preference { mode: \"primary\" } for set graylog\n2022-04-28T00:59:58.027+0000 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for graylog/mongodb_db01:27017,mongodb_db02:27018,mongodb_db03:27019\n2022-04-28T01:00:03.030+0000 W NETWORK [LogicalSessionCacheRefresh] Failed to connect to 10.0.11.2:27017 after 5000ms milliseconds, giving up.\n2022-04-28T01:00:03.033+0000 W NETWORK [ReplicaSetMonitor-TaskExecutor-0] Failed to connect to 10.0.11.5:27018 after 5000ms milliseconds, giving up.\n2022-04-28T01:00:08.033+0000 W NETWORK [LogicalSessionCacheRefresh] Failed to connect to 10.0.11.8:27019 after 5000ms milliseconds, giving up.\n2022-04-28T01:00:08.033+0000 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set graylog\n2022-04-28T01:00:08.033+0000 I NETWORK [LogicalSessionCacheRefresh] Cannot reach any nodes for set graylog. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2022-04-28T01:00:13.539+0000 W NETWORK [LogicalSessionCacheRefresh] Failed to connect to 10.0.11.8:27019 after 5000ms milliseconds, giving up.\n2022-04-28T01:00:18.545+0000 W NETWORK [LogicalSessionCacheRefresh] Failed to connect to 10.0.11.2:27017 after 5000ms milliseconds, giving up.\n2022-04-28T01:00:23.551+0000 W NETWORK [LogicalSessionCacheRefresh] Failed to connect to 10.0.11.5:27018 after 5000ms milliseconds, giving up.\n2022-04-28T01:00:23.551+0000 W NETWORK [LogicalSessionCacheRefresh] Unable to reach primary for set graylog\n2022-04-28T01:00:23.551+0000 I NETWORK [LogicalSessionCacheRefresh] Cannot reach any nodes for set graylog. Please check network connectivity and the status of the set. 
This has happened for 2 checks in a row.\n2022-04-28T01:00:23.551+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Could not find host matching read preference { mode: \"primary\" } for set graylog\n<snip>\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27019 dbpath=/data/db 64-bit host=01d5b6a43712\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] db version v3.6.18\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] git version: 2005f25eed7ed88fa698d9b800fe536bb0410ba4\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] allocator: tcmalloc\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] modules: none\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] build environment:\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] distmod: ubuntu1604\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] distarch: x86_64\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] target_arch: x86_64\n2022-04-28T00:08:14.735+0000 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIpAll: true, port: 27019, ssl: { CAFile: \"/etc/certs/ca.pem\", PEMKeyFile: \"/etc/certs/certandkey.pem\", allowConnectionsWithoutCertificates: true, allowInvalidHostnames: true, mode: \"preferSSL\" } }, replication: { oplogSizeMB: 400, replSetName: \"graylog\" } }\n2022-04-28T00:08:14.737+0000 W - [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.\n2022-04-28T00:08:14.741+0000 I - [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2022-04-28T00:08:14.743+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.\n2022-04-28T00:08:14.743+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=63873M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release=\"3.0\",require_max=\"3.0\"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n[root@dcvsl126 sjillalla]# docker exec -it 24664c0d5a58 mongo\nMongoDB shell version v3.6.18\nconnecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"964473e9-b60b-46b9-b1d1-de99829f62a4\") }\nMongoDB server version: 3.6.18\nWelcome to the MongoDB shell.\nFor interactive help, type \"help\".\nFor more comprehensive documentation, see\n http://docs.mongodb.org/\nQuestions? 
Try the support group\n http://groups.google.com/group/mongodb-user\nServer has startup warnings:\n2022-04-27T20:09:57.767+0000 I CONTROL [initandlisten]\n2022-04-27T20:09:57.767+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2022-04-27T20:09:57.767+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2022-04-27T20:09:57.767+0000 I CONTROL [initandlisten]\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten]\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten]\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten]\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'\n2022-04-27T20:09:57.769+0000 I CONTROL [initandlisten]\ngraylog:OTHER> rs.status()\n{\n \"state\" : 10,\n \"stateStr\" : \"REMOVED\",\n \"uptime\" : 16290,\n \"optime\" : {\n \"ts\" : Timestamp(1650247277, 4),\n \"t\" : NumberLong(111)\n },\n \"optimeDate\" : ISODate(\"2022-04-18T02:01:17Z\"),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"ok\" : 0,\n \"errmsg\" : \"Our replica set config is invalid or we are not a member of it\",\n \"code\" : 93,\n \"codeName\" : \"InvalidReplicaSetConfig\",\n \"operationTime\" : Timestamp(1650247277, 4),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1650247277, 4),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\ngraylog:OTHER> exit\nversion: '3'\nservices:\n db01:\n image: docker-prod.tools.royalsunalliance.ca/mongo:3.6.18\n volumes:\n - /docker/services/mongodb/db01:/data/db\n - /docker/services/mongodb/db01-dump:/data/db/dump\n - /docker/services/mongodb/db01-config/mongod.conf:/etc/mongod.conf\n - /docker/services/elasticsearch-prod/certs/db01:/etc/certs\n ports:\n - \"27017:27017\"\n command: [\"mongod\", \"--sslAllowConnectionsWithoutCertificates\", \"--sslMode\", \"preferSSL\", \"--sslPEMKeyFile\", \"/etc/certs/certandkey.pem\", \"--sslCAFile\", \"/etc/certs/ca.pem\", \"--config\", \"/etc/mongod.conf\", \"--sslAllowInvalidHostnames\"]\n\n db02:\n image: docker-prod.tools.royalsunalliance.ca/mongo:3.6.18\n volumes:\n - /docker/services/mongodb/db02:/data/db\n - /docker/services/mongodb/db02-dump:/data/db/dump\n - /docker/services/mongodb/db02-config/mongod.conf:/etc/mongod.conf\n - /docker/services/elasticsearch-prod/certs/db02:/etc/certs\n ports:\n - \"27018:27018\"\n command: [\"mongod\", \"--port\", \"27018\", \"--sslAllowConnectionsWithoutCertificates\", \"--sslMode\", \"preferSSL\", \"--sslPEMKeyFile\", \"/etc/certs/certandkey.pem\", \"--sslCAFile\", \"/etc/certs/ca.pem\", \"--config\", \"/etc/mongod.conf\", \"--sslAllowInvalidHostnames\"]\n #command: [\"mongod\", \"--config\", \"/etc/mongod.conf\"]\n\n db03:\n 
image: docker-prod.tools.royalsunalliance.ca/mongo:3.6.18\n volumes:\n - /docker/services/mongodb/db03:/data/db\n - /docker/services/mongodb/db03-dump:/data/db/dump\n - /docker/services/mongodb/db03-config/mongod.conf:/etc/mongod.conf\n - /docker/services/elasticsearch-prod/certs/db03:/etc/certs\n ports:\n - \"27019:27019\"\n command: [\"mongod\", \"--port\", \"27019\", \"--sslAllowConnectionsWithoutCertificates\", \"--sslMode\", \"preferSSL\", \"--sslPEMKeyFile\", \"/etc/certs/certandkey.pem\", \"--sslCAFile\", \"/etc/certs/ca.pem\", \"--config\", \"/etc/mongod.conf\", \"--sslAllowInvalidHostnames\"]\nrs.initiate({\n \"_id\": \"graylog\",\n \"version\": 1,\n \"members\" : [\n {\"_id\": 1, \"host\": \"mongodb_db01:27017\"},\n {\"_id\": 2, \"host\": \"mongodb_db02:27018\"},\n {\"_id\": 3, \"host\": \"mongodb_db03:27019\"}\n ]\n })\n", "text": "Hi All,\nSorry if this is the wrong category.I have inherited the infrastructure from someone who is long gone from my company. I am not aware of the configuration/steps used to bringup the MongoDB containers.Basically, we are running MongoDB (v3.6.18) as three containers and are configured as a replicaset. When the docker stack is deployed, two of the containers are up and running, but the third one is taking a long time to come up. The two DBs that are up, are about 1GB each. The third DB is about 637GB. Since the third DB is large, it tries for about 3.5hrs and exits and tries to be recreated. This goes on a loop.The logs from the other two DBs which are up show that they try to reach the other DBs and fails. Similar logs from both DB01 and DB02.For DB03, the following are the initial set of logsWhen I logged into either the DB01 or DB02 containers and check the status, it shows the followingThe docker compose file isand there is a rs.initiate file which I am sure is used for configuring the replicaset, but not sure how.Please let me know how I can recover the MongoDB stack so that the containers are all up and running properly.\nLet me know if you need any more information.", "username": "Sarojini_Jillalla" }, { "code": "mongod.conf", "text": "Hi @Sarojini_Jillalla\nWelcome to the community forum!!Could you help with a few information in regard to the concern.Was this setup in working condition before the issue started to arrive? If yes, could you please help in understanding the changes done in between due to which the issue was seen.Could you share the contents of the mongod.conf file for the replica sets?Was there any change done to the docker compose file when in working condition to now, if yes, could you please share the original docker file.Lastly would suggest to scratch if the database does not contain important data o if the backup for the deployment is already available.Please let me know the following details to assist you better.Thanks\nAasawari", "username": "Aasawari" }, { "code": "replication:\n oplogSizeMB: 400\n replSetName: graylog\n", "text": "Hi @Aasawari,Please find the answers below.for all the dbs.\n3. There is no change to the docker-compose.yml file.Since the DB03 contains huge data (about 640GB), I did not want to tinker with it.Please let me know if you need any more information.Thanks,\nSarojini Jillalla", "username": "Sarojini_Jillalla" }, { "code": "db.stats()rs.status()rs.conf()hostname -f", "text": "Hi @Sarojini_JillallaCould you provide us with below details on the above sent information.As per the above information, DB01, DB02 and DB03 are from a replica set configurations so they should be of similar sizes. 
However, thats not the case with DB03 where the size is much larger than the other two sets. Can you help me understand the method you used to calculate the sizes of the replica sets. Also please provide the output for db.stats() from the database that your application is using from all three nodes.Can you send the output of rs.status() and rs.conf() by logging in into each of the three nodes along with hostname -f for the nodes. This would help in understanding the configurations of the replica set.As per the docker compose file, the docker image looks like a custom made image, can you provide us with the information on how the docker image was created.Thanks\nAasawari", "username": "Aasawari" } ]
MongoDB Docker container not starting properly
2022-04-28T01:22:43.729Z
MongoDB Docker container not starting properly
15,577
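For reference, the diagnostics Aasawari asks for can be collected from each member with a few shell commands, run inside each container (for example via docker exec -it <container> mongo). This is only a sketch of the requested checks; hostname -f is run in the container's OS shell, not in the mongo shell.

```js
// Inside the mongo shell on each of db01 / db02 / db03:
db.stats(1024 * 1024 * 1024)            // size of the current database, scaled to GB
db.adminCommand({ listDatabases: 1 })   // on-disk size of every database on this node
rs.status()                             // member states as this node sees them
rs.conf()                               // the replica set config stored on this node
```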
null
[ "java", "connecting" ]
[ { "code": "", "text": "I’ve been trying to access my atlas database from my java application and I haven’t found any methods that seem to work, I must be doing some small mistake. Could someone tell me how to do it, please? Sorry if this is the wrong category or if it has been posted before.", "username": "keenest" }, { "code": "", "text": "Hello @keenest, welcome to MongoDB community!You can find tutorials regarding connecting to MongoDB database and performing operations on the database at the MongoDB Java Driver website. Here is the link to it: MongoDB Java Driver - Quick Start.Also, here is another blog post: MongoDB & Java - CRUD Operations Tutorial", "username": "Prasad_Saya" }, { "code": "mongojava -version", "text": "I’ve been trying to access my atlas database from my java application and I haven’t found any methods that seem to work, I must be doing some small mistake.Hi @keenest,There are several possibilities for connection issues. Before connecting with a driver, I would try connecting using the latest version of a command-line tool like the mongo Shell or a GUI like MongoDB Compass. This will help confirm that your Atlas whitelist allows connections from your originating IP, and that you have the correct credentials and connection string to access your deployment. After connecting to your Atlas cluster you should try inserting or querying data to ensure the expected read/write permissions are available.The Atlas documentation includes a guide to help Troubleshoot Connection Issues and guides to Connecting to a Cluster using various sources (Shell, Compass, Driver, …).If you are able to connect to your cluster with Compass or the Shell, you can then look into any issues with your driver or code approach.If you still need more assistance, please provide details for your application environment:Thanks,\nStennie", "username": "Stennie_X" }, { "code": "MongoClient mongoClient = MongoClients.create(<connection string>);java.lang.NoClassDefFoundError: com/mongodb/client/MongoClients", "text": "I tried using compass and it works. I’m using mongodb java driver 3.12.6. My java version is 1.8.0_241.My IDE (eclipse) wants me to change “MongoClient” in:\nMongoClient mongoClient = MongoClients.create(<connection string>);\nto “com.mongodb.client.MongoClient”, and if I don’t it displays it as an error. My application doesn’t seem to like me using this method since it outputs this error:\njava.lang.NoClassDefFoundError: com/mongodb/client/MongoClients", "username": "keenest" }, { "code": "java.lang.NoClassDefFoundError", "text": "Hello @keenest,The error java.lang.NoClassDefFoundError indicates that a class required by the code is not available at runtime (but the code compiled fine). You may have to review your setup using the MongoDB Java Driver. 
How have you configured your application?", "username": "Prasad_Saya" }, { "code": "com.mongodb.client.MongoClient mongoClient = MongoClients.create<connections string>);", "text": "I run com.mongodb.client.MongoClient mongoClient = MongoClients.create<connections string>); as soon as the application starts, if that is what you were asking.", "username": "keenest" }, { "code": "maintry(MongoClient client = MongoClients.create(\"mongodb://localhost:27017\")) {\n\t\t\t\n MongoDatabase database = client.getDatabase(\"test\");\n MongoCollection<Document> collection = database.getCollection(\"test\");\n Document doc = collection.find().first();\n System.out.println(doc.toJson());\n}\n", "text": "@keenest, I mean how you have installed the Java Driver?Here is an example of using MongoDB Java Driver I found online: Java MongoDB Example. It uses Eclipse IDE; I tried and it works fine, with driver version 3.12.7. There are lot of other details (which you may want to skip), but follow the below steps from the article:The following MongoDB Java code in the newly created class at step 3.2, ran fine from the main method:", "username": "Prasad_Saya" }, { "code": "", "text": "I didn’t create my project as a maven project, should I just be able to add a pom.xml manually and be able to use it?", "username": "keenest" }, { "code": "Select the project -> Right-click and navigate to Configure -> Convert to Maven Project<dependencies>\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-sync</artifactId>\n <version>3.12.7</version>\n </dependency>\n </dependencies>\n", "text": "add a pom.xml manuallySure.Select the project -> Right-click and navigate to Configure -> Convert to Maven ProjectThis opens the Maven POM window. Select the defaults and Finish. This creates the pom.xml.Add the following to the pom.xml:Build and run the project.", "username": "Prasad_Saya" }, { "code": "Preformatted textSLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/C:/Users/Leo/.p2/pool/plugins/org.eclipse.m2e.maven.runtime.slf4j.simple_1.16.0.20200610-1735/jars/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [file:/E:/Program/eclipse/java-2020-06/eclipse/configuration/org.eclipse.osgi/5/0/.cp/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory] SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/C:/Users/Leo/.p2/pool/plugins/org.eclipse.m2e.maven.runtime.slf4j.simple_1.16.0.20200610-1735/jars/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [file:/E:/Program/eclipse/java-2020-06/eclipse/configuration/org.eclipse.osgi/5/0/.cp/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]", "text": "Sorry for replying a bit late. I’ve converted my project to a maven project but it doesn’t seem to run the right way. When I run the application I get these messages in red a the top of my console:\nPreformatted textSLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in [jar:file:/C:/Users/Leo/.p2/pool/plugins/org.eclipse.m2e.maven.runtime.slf4j.simple_1.16.0.20200610-1735/jars/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [file:/E:/Program/eclipse/java-2020-06/eclipse/configuration/org.eclipse.osgi/5/0/.cp/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory] SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/C:/Users/Leo/.p2/pool/plugins/org.eclipse.m2e.maven.runtime.slf4j.simple_1.16.0.20200610-1735/jars/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [file:/E:/Program/eclipse/java-2020-06/eclipse/configuration/org.eclipse.osgi/5/0/.cp/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]", "username": "keenest" }, { "code": "java.util.logging", "text": "Does the project run or may be those messages could be ignored. I didn’t see any such messages in my console.Those messages are just warnings (not errors). There is documentation online about those messages, e.g., SLF4J: Class path contains multiple SLF4J bindings..As I know the Java driver version v3.12.7 uses java.util.logging, not SLF4J.", "username": "Prasad_Saya" }, { "code": "", "text": "I’ve switched IDE from eclipse to intellij and after some tinkering the application works as it used to before I started using maven. Although the mongodb stuff still doesn’t work.EDIT: The example you sent shows how to connect to a regular mongodb database. The problem I’m having is connecting to an atlas database, which uses a different method.", "username": "keenest" }, { "code": "", "text": "Hello, I’m Ody.\nI’ve met problem in android studio when trying to connect to mongoDB atlas with my application using connectionString.\n{\njava.lang.NoClassDefFoundError: Failed resolution of: Ljavax/naming/directory/InitialDirContext;\n}\ncompass connection runs fine, but this error occurs when using application.\nI’m stopped since few days…\nPlease, can you give some solutions to solve it??\nThanks", "username": "Ody" }, { "code": "", "text": "ci dont know how to connect a mongodb compass to ecplise? can you please help me", "username": "sumithra_shan" } ]
Connecting to my database with java
2020-08-25T20:53:15.408Z
Connecting to my database with java
12,672
null
[ "data-modeling", "mdbw22-hackathon" ]
[ { "code": "", "text": "I can’t form a team on my own in this Hackathon, also I am a little bit limited in time. But I like to share some ideas and if a team likes to pick up I’d be glad to support when it comes to Data Architecture / Schema Design. The hypothesis is that before major, often negative or unpleasant, events. Politicians like to emphasize that these events will NOT occur.\nAn interesting question is: can we establish a relationship between these statements and the final occurrence? Is it possible to identify contraindications and thus possibly predict the occurrence of events?various3open", "username": "michael_hoeller" }, { "code": "", "text": "Hi @michael_hoeller ! I just got in here for the night & am very intrigued by and interested in seeing if or how I could help a team get started with the project that you outline here. I am just not sure where I can start or how I can help. I am a frontend tech degree grad with about 2 1/2 years of study, with some MongoDB university course work under my belt but very limited when it comes to how to tackle a hackathon project of this caliber. If I can help or be a part of helping even just to find a team, I will be more than happy to do it, like you I am pressed for time, so I like to put my best foot forward with the time I do have. Now I am off to catch up on some hackathon material, read some GDELT and CAMEO code documentation and try my best to soak it in and become more useful than I was yesterday. Have a great night and hope to talk soon!", "username": "Jason_Nutt" }, { "code": "", "text": "Hello @Jason_Nutt ,glad to hear that you want to get into this. I was loosely following your 100 DayOfCode challange. As noted I can not run this project but would be happy to support. IMHO it would be a good setup to find two seniors who want to sit in the driving seats and rock the project.To explain more: I am bound more than expected in two other customer projects. I might be too careful at this point but I do not want to over commit and not deliver at the end and in turn negatively affect a team. I can commit to support in terms of schema design and querying - every commitment beyond would be a risk. Also I do not mind when you and others pick the idea and you do it on your own.I like the idea mentioned in the subject so I wanted to share the idea. If no one picks it up now there is a good chance that this will become later a side project Hope this works for you,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Looking for Hackers: Does NO mean YES?
2022-05-02T16:34:51.095Z
Looking for Hackers: Does NO mean YES?
3,946
null
[]
[ { "code": "", "text": "Why does it take such a long time to save a function draft?\nThis happens on the first time I save a draft and it even fails some times.\nAfter the first time the draft on any function is saved immediately.\nI have the same complaint about the time it takes to deploy changes.\nI have this problem on free instances and also on m10 instances.", "username": "michael_schiller" }, { "code": "", "text": "Hi Michael,How long is it taking to save a draft?Is this only happening with one function or any function change or any other type of change?Are there any errors in your deployment history ?Do you have github auto deployment enabled on the app?It might be best to raise this as a support ticket with us so that we can look into the specifics.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "It might take several minutes to save or entirely fail and it happens only on the first save since the last deployment. The strange thing is that if it fails the next attempt to save succeeds immediately.It is noticeable mainly on functions but might happen on other types like schema changes or triggers.No errors on deployment history.I do not use github auto deployment.I do not have a paid support plan.Thx\nMichael", "username": "michael_schiller" }, { "code": "", "text": "As a sample please provide the exact name of the function which experienced this delay.Along with it, it would be helpful if you can provide the time/date (including timezone) when it happened so we can track down the logs on our sideRegards", "username": "Mansoor_Omar" } ]
It takes a long time to save a function draft
2022-05-02T13:04:49.050Z
It takes a long time to save a function draft
1,766
null
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Trying to setup MongoDB in Ubunutu VPS for a MERN app.\nI’ve enabled authentication in the mongodb settings.\nSet up the usernames with password with the roles of read and dbOwner of the target DB.\nAnd I have the mongoURI string setup in the nodeJS server like this:\nmongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource={admindb}\nIn the dry run, without any data in the db, pm2 monit only shows:\n“command find requires authentication”\nDespite that, mongoURI string from above can log me into the mongo and mongo shell without propblem.\nIs there something I’ve forgot?", "username": "Jae_Hong" }, { "code": "", "text": "I would guess you have not correctly set up roles for the MONGO_USERNAME.\nA user can log into MongoDB but not be able to do very much if the user does not have certain roles.\nIf this is your problem, see Built-in Roles and Create a User.", "username": "Jack_Woehr" }, { "code": "", "text": "The roles are read and dbOwner.\nBoth roles should be able to use find function.", "username": "Jae_Hong" }, { "code": "", "text": "If you are able to login with shell can you query your collections or run that find command\nFor authsource what exactly is the value given", "username": "Ramachandra_Tummala" }, { "code": "", "text": "For authsource what exactly is the value givenauthsource is admin, it’s a different db from MONGO_DB.", "username": "Jae_Hong" }, { "code": "", "text": "Ok\nCan you run find query from shell?\nCan you show output of db.getUsers()", "username": "Ramachandra_Tummala" }, { "code": "", "text": "The problem was resolved when readWrite role added to the user with database access.\nIs it because of POST requests?", "username": "Jae_Hong" }, { "code": "find()find()readWrite", "text": "@Jae_Hong you said in your first post “In the dry run, without any data in the db”.\nWhat sort of find() did you try without any data?\nIf the find() implied creating a collection in the empty db, I believe you would need readWrite.\nPerhaps the problem was related to “without any data in the db”.", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
"Command find requires Authentication" despite having correct authentication details in mongoURI
2022-05-02T02:45:20.304Z
&ldquo;Command find requires Authentication&rdquo; despite having correct authentication details in mongoURI
31,999
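For readers hitting the same error, a hedged mongosh sketch of the fix described above: give the application user readWrite (not just read) on the application database, with the user defined in admin. The database name, user name, and password are placeholders.

```js
use admin

// New user with write access to the application database:
db.createUser({
  user: "appuser",
  pwd: "changeMe",
  roles: [ { role: "readWrite", db: "appdb" } ]
})

// Or, for an existing user, grant the role instead of recreating the account:
db.grantRolesToUser("appuser", [ { role: "readWrite", db: "appdb" } ])

// Matching connection string (authSource points at the database holding the user):
// mongodb://appuser:changeMe@localhost:27017/appdb?authSource=admin
```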
null
[ "production", "c-driver" ]
[ { "code": "", "text": "Announcing 1.21.1 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.Announcing libbson 1.21.1No changes since 1.21.0; release to keep pace with libmongoc’s version.Bug Fixes:Thanks to everyone who contributed to this release.", "username": "Jesse_Williamson" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C Driver 1.21.1 Released
2022-03-01T21:21:51.828Z
MongoDB C Driver 1.21.1 Released
3,487
null
[ "production", "cxx" ]
[ { "code": "cxx", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.6.7 .Please note that this version of mongocxx requires the MongoDB C driver 1.17.0 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.The mongocxx 3.6.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx . Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C++11 Driver 3.6.7 Released
2022-05-03T15:51:09.854Z
MongoDB C++11 Driver 3.6.7 Released
2,321
https://www.mongodb.com/…fe0fad6b036.jpeg
[ "queries", "data-modeling", "atlas-cluster", "kotlin", "dublin-mug" ]
[ { "code": "Lead, Developer Relations at MongoDB Technical Service Engineer, MongoDBSoftware Engineer, Zendesk", "text": "G’Day, Folks, It’s great to know we are now starting to get back to normal and more exciting things have started happening now… Dublin MongoDB User Group is excited to kick off it’s very first in-person event on 27th April 2022.The meetup invites anyone interested to understand MongoDB. The event talks are of mixed nature from beginners to intermediates. We are open to any feedback you may have and will take into account developer interest in future events.We have some amazing speakers talking about MongoDB things Rita Rodrigues @Rita_Martins_RodriguLead, Developer Relations at MongoDB Rita is a woman in tech that has designed and developed software for some time. Now she mentors and leads teams. Rita is passionate about helping developers to solve challenging problems. In her free time, she loves spending time with her 2 daughters (Bea and Eva) doing lots of fun artistic arts & crafts, knitting, and cooking.\nProfile Picture-Sheila1080×720 128 KB\nSheila Doyle @Sheila_DoyleTechnical Service Engineer, MongoDBSheila loves to optimize. She is a woman in tech that helps solve problems and is working with the MongoDB core team for almost 4 years now. In her spare time, she likes to play with her dog and do gardening in her backyard.\nIan576×576 142 KB\nIan Arbuckle @Ian_ArbuckleSoftware Engineer, ZendeskIan is a Mobile app developer specializing in Android with experience in the Travel industry. He enjoys spending time traveling, reading, and engaging in activities such as football and fitness.Event Type: In-Person\nMongoDB, Ballsbridge, DublinTo RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.A guard or a volunteer will be at the reception for door opening. Please arrive on time.Please sign in at the reception iPad when you enter inThe event will take place in the Office cafeteria on the third floor. Access to the third floor will be given by the volunteer or the guard.Doors will close at 18:15 PM. Contact +353- 899722424 if you come after thatPlease be respectful of the workplace.I welcome you all to join the Dublin MongoDB User group, introduce yourself, and I look forward to collaborating with our developer community for more exciting things Cheers, ", "username": "henna.s" }, { "code": "", "text": "Hi @GeniusLearner tedXdtc", "username": "Tushti_Joshi" }, { "code": "", "text": "Hello @Tushti_Joshi, Welcome to MongoDB Community and our Dublin, Ireland MUG event I believe you are referring to Sanchit, he is an organizer for our Delhi -User group. Please feel free to join the Delhi MUG and the discussions.Cheers, ", "username": "henna.s" } ]
Dublin MUG: Let’s talk Atlas, Realm and MongoDB Queries
2022-04-12T15:05:52.139Z
Dublin MUG: Let’s talk Atlas, Realm and MongoDB Queries
4,676
null
[]
[ { "code": "", "text": "Hi guys, hope everyone is ok and healthy.Anyone here is gonna be in person at NYC ?Cheers!", "username": "Alexandre_Araujo" }, { "code": "", "text": "", "username": "Stennie_X" } ]
In-Person for MongoDB World 2022
2022-04-03T15:17:21.727Z
In-Person for MongoDB World 2022
3,038
null
[ "app-services-cli" ]
[ { "code": "", "text": "I’m following the mongoDB realm tutorial and when i try to create the app using “realm-cli push”, I get this error “push failed: unable to retrieve data source details”.\nHow can I solve this problem???", "username": "Hisnaider_Campello" }, { "code": "clusterNamedata_sources/mongodb-atlas/config.json", "text": "G’day @Hisnaider_Campello ,Welcome to MongoDB Community Forums👋This error means your cluster is set to a different name. You could either create a new cluster with the same name as mentioned in the documentation or in your cloned repo you can change the value of clusterName in the data_sources/mongodb-atlas/config.json file to the same name as your current cluster.I hope this helps.I look forward to hearing from you.Cheers, ", "username": "henna.s" }, { "code": "", "text": "Thank you very much, you saved my day !!!", "username": "Hisnaider_Campello" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Push failed: unable to retrieve data source details
2022-05-03T22:18:11.261Z
Push failed: unable to retrieve data source details
4,175
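For reference, the fix described above comes down to a single field in the exported app configuration. Below is a minimal sketch of what data_sources/mongodb-atlas/config.json can look like; the surrounding fields vary between app exports, and "Cluster0" is only a placeholder for whatever your Atlas cluster is actually named:

```json
{
  "name": "mongodb-atlas",
  "type": "mongodb-atlas",
  "config": {
    "clusterName": "Cluster0"
  }
}
```

After editing the file, re-running realm-cli push should pick up the corrected cluster name.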
https://www.mongodb.com/…9_2_1024x220.png
[ "aggregation", "swift" ]
[ { "code": "", "text": "I have an aggregation pipeline setup but I’m having trouble with this one part. Here is what I’m trying to do:This is what I currently have which doesnt work. I’ve tried it multiple ways and none of them have worked so please let me know how I can accomplish this!\nScreen Shot 2022-05-03 at 2.00.20 PM2060×444 57.6 KB\n", "username": "Komal_Shrivastava" }, { "code": "", "text": "Why is there “$” in front of “target_gender field?Also it’s hard to tell in the screenshot but are all those brackets {} cause they look like []…", "username": "Asya_Kamsky" }, { "code": "", "text": "This is actually written in swift hence the square brackets. And thank you for catching the $ in front of targetGender, I was moving around so much code I might’ve put it there accidentally. Upon removing the $, it seems to work!", "username": "Komal_Shrivastava" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation $match on $or value
2022-05-03T18:16:47.511Z
Aggregation $match on $or value
1,500
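For readers following along outside Swift, here is a minimal mongosh sketch of the stage discussed above. The collection and field values are placeholders based on the screenshot description (a targetGender field matched against one of two values), not the poster's exact schema:

```js
db.posts.aggregate([
  {
    $match: {
      $or: [
        { targetGender: "all" },     // plain field name on the left, no "$" prefix
        { targetGender: "female" }
      ]
    }
  }
])
```

The "$" prefix is only used when referencing a field's value inside an expression, not when naming the field being matched.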
null
[ "aggregation", "node-js", "atlas-search", "text-search" ]
[ { "code": "_id:62595a1c8c0fce4f9dbfc99e\ncontent:DM Id added to db\ncreatorId:624f98d1db10f12db205b263\nrecipientId:6257e81d2a74d7363f621370\ndmId:6257e81d2a74d7363f621370-624f98d1db10f12db205b263\npostId:6258206b1f681ed05547fa90\ncreatedAt:2022-04-15T11:42:20.753+00:00\nreturn db\n .collection('dms')\n .aggregate([\n {\n $text: { $search: currentUser }\n },\n { $sort: { _id: -1 } },\n { $limit: limit },\n {\n $lookup: {\n from: 'users',\n localField: 'creatorId',\n foreignField: '_id',\n as: 'creator',\n },\n },\n { $unwind: '$creator' },\n { $project: dbProjectionUsers('creator.') },\n ])\n .toArray();\n}\n", "text": "I am on atlas free tier with version → 5.0.7I am trying to build Chat system and for that I used\ndmId like this “recipient_id-sender-id”my objec looks like thisSo from nodejs backend I try to fetch databut this gives me this error -$text is not allowed in this atlas tierany other possible solutions possilbe\nthanks", "username": "anish_jain" }, { "code": "$text$match$textreturn db\n .collection('dms')\n .aggregate([\n {\n $match: { $text: { $search: currentUser } } \n },\n...\n$textM0M2M5$text", "text": "Hi @anish_jain,but this gives me this error -\n$text is not allowed in this atlas tierThanks for confirming your Atlas tier details and providing the aggregation pipeline in use. The above error is a bit odd as the $text operator you have used will not work in the pipeline provided in it’s current format. However, I have taken note of this as it may cause some confusion. In saying so, the pipeline will require a $match stage that contains the $text operator at the beginning similar to the following:Additionally, could you provide the use case for using the $text operator in Atlas deployment? Since this is an Atlas deployment, I would recommend taking a look at using Atlas Search which allows fine-grained text indexing and querying of data on your Atlas cluster. You may find the following documentation useful regarding Atlas Search:I would also recommend reviewing the Text Search in the Aggregation Pipeline documentation which provides another example of the $text operator being used in an aggregation pipeline.Hope the above is useful.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to Search text and returns collection based on that
2022-04-15T12:19:43.089Z
How to Search text and returns collection based on that
2,560
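Since the recommendation above is Atlas Search rather than $text on a shared tier, here is a rough mongosh sketch of what that pipeline could look like. It assumes an Atlas Search index (named "default" here) has already been created on the dms collection; the variable and field names are taken from the post and may need adjusting:

```js
db.dms.aggregate([
  {
    // $search must be the first stage of the pipeline
    $search: {
      index: "default",
      text: {
        query: currentUser,      // e.g. the current user's id
        path: "dmId"
      }
    }
  },
  { $limit: 20 },
  {
    $lookup: {
      from: "users",
      localField: "creatorId",
      foreignField: "_id",
      as: "creator"
    }
  },
  { $unwind: "$creator" }
])
```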
null
[ "java", "crud" ]
[ { "code": "{\n \"_id\": ObjectId(\"6269593b82bde6eb18b8e9b9\"),\n \"dg_mid\": \"021d9bf4-2f3c-4ba9-9963-f34c0e8a3663\",\n \"comp_type\": 0,\n \"comp_users\": [\n {\n \"dg_user\": \"NETWORK-SERVICE\",\n \"comp_sid\": \"S-1-5-20\"\n },\n {\n \"dg_user\": \"DWM-254\",\n \"comp_sid\": \"S-1-5-19\",\n \n },\n\t{\n \"dg_user\": \"DWM-255\",\n \"comp_sid\": \"S-1-5-18\", \n }\n ]\n}\ncomp_users.dg_usercomp_users.$.dg_use", "text": "The following works in Mongo shell:db.computers.update({“dg_mid”: {$eq: “ff4c4ff6-6492-4326-8a41-9a0272c7c265”}}, {$pull: {“comp_users”: {“dg_user”:{$regex: /^DWM/i}}}}`for Document with the following structure:Unfortunately I can’t achieve in Java , trying to use Updates.pullByFilter . Such as:Bson arrayFilter = Filters.regex(“comp_users.dg_user”, Pattern.compile(\"^DWM\", Pattern.CASE_INSENSITIVE));then use updateOne :collection.updateOne(eq(“dg_mid”, “021d9bf4-2f3c-4ba9-9963-f34c0e8a3663”), pullByFilter(arrayFilter))failed because of dotted notation comp_users.dg_user in array filter. I tryed to use instead of dotted notation comp_users.$.dg_use as suggested still getting failure. Any suggestion on what I am doing wrong , appreciated", "username": "Irene_Levina" }, { "code": "{$regex: /^DWM/i}{ $regex: \"^DWM\" , $options : \"i\" }{\"dg_mid\":\"ff4c4ff6-6492-4326-8a41-9a0272c7c265\"}{\"dg_mid\": {\"$eq\": \"ff4c4ff6-6492-4326-8a41-9a0272c7c265\"}}query = new Document( \"dg_mid\" ,\n \"ff4c4ff6-6492-4326-8a41-9a0272c7c265\" ) ;\n\nupdate = new Document( \"$pull\" :\n new Document( \"comp_users\" :\n new Document( \"db_users\" :\n new Document( \"$regex\" : \"^DWM\" ).append( \"$options\" , \"i\" ) ) ) ) ;\n\ncollection.updateOne( query , update ) ;\nquery = Document.parse( \"{'dg_mid':'ff4c4ff6-6492-4326-8a41-9a0272c7c265'}\" ) ;\n\nupdate = Document.parse( \"{'$pull':{'comp_users':{'dg_user':{'$regex':'^DWM','$options':'i'}}}}\" ) ;\n", "text": "I do not use builder classes because of the extra knowledge you need to have to use them.I do not use builder classes because I use mongosh extensively and JS and I like my queries and aggregations to be syntactically closed in each language. Builders are an extra layers that I avoid.For this what I proposed do not use builders, but straight org.bson.Document in a way that is easier for me to use.I start with the equivalent of\n{$regex: /^DWM/i}*\nthat is easier to map into Java. The equivalent is\n{ $regex: \"^DWM\" , $options : \"i\" }.I also use the equivalent\n{\"dg_mid\":\"ff4c4ff6-6492-4326-8a41-9a0272c7c265\"}\nrather than\n{\"dg_mid\": {\"$eq\": \"ff4c4ff6-6492-4326-8a41-9a0272c7c265\"}}.Here it is (n.b. I have not tried to compile or test it):Alternatively but relatively slower, you should get the same result with:", "username": "steevej" }, { "code": " update = Document.parse( \"{'$pull':{'comp_users':{'dg_user':{'$regex':'^DWM','$options':'i'}}}}\" )new UpdateOneModel<>(\n eq(\"dg_mid\", id), update, options.upsert(true)));\n", "text": "THX Steeve, I used suggested:\n update = Document.parse( \"{'$pull':{'comp_users':{'dg_user':{'$regex':'^DWM','$options':'i'}}}}\" )With:Since I would like to use Bulk.write , and it works. Thx for help", "username": "Irene_Levina" } ]
How to pull from array of objects using filter
2022-04-30T04:08:14.642Z
How to pull from array of objects using filter
4,476
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Hello all,Just in case you’ve missed a livestream, we’ve gathered them all in a playlist and you can catch up with them at your leisure (see below)NOTE: The “play” button will play the last playlist video, if you want a particular video, then howver over the video preview and on the top right, you’ll see a dropdown icon for the playlist where you can choose other videos) \nWant anything in particular covered in our livestream? Then reply here and we’ll incorporate.The Hackathon team!", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Missed a livestream? Here's the Playlist
2022-05-03T17:40:16.895Z
Missed a livestream? Here’s the Playlist
2,600
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer Advocate", "text": "We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate[details=Link Details]\nEvent Type: Online\nLink(s):\nLocation\nVideo Conferencing URL[/details]2022-04-26T10:00:00Z", "username": "Shane_McAllister" }, { "code": "", "text": "If you can’t make the livestream - but still have questions, post them here as a reply and I’ll be sure to raise and address them on the livestream for all to benefit from.", "username": "Shane_McAllister" }, { "code": "", "text": "We will be live in just over 1 hour. You can catch it on MongoDB Youtube and MongoDB Twitch or directly here below - please join in and ask any questions via Twitch or Youtube and we’ll pick them up.Tha Hackathon Team", "username": "Shane_McAllister" }, { "code": "", "text": "I am trying to download data via gdeltloader. But getting the following error\ngdelt_issue1366×674 29.4 KB\n", "username": "Avik_Singha" }, { "code": "", "text": "What version of gdelttools are you using? You can find what version by typing `gdeltloader -he.", "username": "Joe_Drumgoole" }, { "code": "", "text": "gdeltloader version is 0.06a5", "username": "Avik_Singha" }, { "code": "", "text": "Can you upgrade to the latest version?", "username": "Joe_Drumgoole" }, { "code": "", "text": "Getting the below issue, when trying to upgrade to latest version. My Python version is 3.9.6.\ngdelt_issue_upgrade1366×730 58.1 KB\n", "username": "Avik_Singha" }, { "code": "python3 -m pip install gdelttools\npython -m pippippippython", "text": "Aha, GDELT is somebody elses package. You need to install the gdeltttools package. To install and upgrade run:I suggest using python -m pip as opposed to pip directly as this ensures you use the pip associated with your default python. Tip of the hat to @Mark_Smith, my colleague for this suggestion.", "username": "Joe_Drumgoole" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Getting started with MongoDB & GDELT - APAC/EMEA Session
2022-04-25T13:50:25.272Z
Getting started with MongoDB & GDELT - APAC/EMEA Session
3,778
null
[ "atlas-device-sync", "change-streams", "atlas-triggers" ]
[ { "code": "matrix.items\"record\": {\n \"inserted\": [\n \"626fa2578c3afa5f231b8bbe\"\n ]\n }\nrecordsmatrix.items\"record\": {\n \"updated\": [\n \"626fa2578c3afa5f231b8bbe\"\n ]\n }\nrecordsfullDocumentmatrix.itemsundefinedmatrix.itemsfullDocument", "text": "I have a Realm Sync app that works both online and offline. When offline, a user would insert a ‘record’ document. Then later update a particular field (matrix.items) of that document (while still offline). When connectivity is restored with Atlas, it appears that the only changeset that occurs is an insert:I have a trigger on the records collection that listens to updates to the matrix.items field. This works fine when the user is updating the document while online (changeset is:)I suspect that Realm Sync may bulk operations together to limit data transfer and as such, when connectivity is restored, it only sends one inserted document containing all the changes made while offline.So I added a trigger that listens to inserts in the records collection. But the fullDocument received by this trigger does not contain the matrix.items field (it is undefined), even though the synced document in Atlas does contain the matrix.items field.Should my trigger on inserted records not receive a fullDocument containing all the updates to the document made while offline? If not, how do I listen to changes to the record document that were made offline so my trigger can fire?", "username": "Laekipia" }, { "code": "", "text": "Hi. You are correct that Realm Sync batches many operations together when possible, and doing so lets us achieve much better performance. I suspect the reason you are not seeing the “updates” to the list is that they may be showing up as Replace events on the change stream. Some of the operations that we perform on lists can only be done using MongoDB Update Pipelines (https://www.mongodb.com/docs/manual/tutorial/update-documents-with-aggregation-pipeline/). These operations result in full-document replaces, so they will appear in the Trigger/ChangeStream as a Replace event.Please let me know if that is the case,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "It worked! Thanks a lot @Tyler_Kaye for being so quick to help!", "username": "Laekipia" }, { "code": "", "text": "Of course glad that we could help.Have a nice day,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi @Tyler_Kaye, actually I may have been a bit quick to celebrate here. I’m not sure it has worked. I’ve put a trigger for ‘replace’ events and the records collection but it doesn’t get triggered when a document gets synced (when connectivity is restored). Any idea what’s going on?", "username": "Laekipia" }, { "code": "", "text": "Hi, apologies if I was not clear, but I think that in order to safely capture all events you will need to enable a trigger for “insert”, “update”, and “replace” events.", "username": "Tyler_Kaye" }, { "code": "matrix.itemsmatrix.itemsupdateDescription.updatedFields{\"updateDescription.updatedFields.matrix\":{\"$exists\":true}}false{\"updateDescription.updatedFields.matrix.items\":{\"$exists\":true}}matrix.items", "text": "Hi @Tyler_Kaye,So I’ve been conducting some further tests and it appears that my issue is not with the type of event the trigger listens to but the match expression used to fire it.I have a handleRecordUpdated trigger (for update events) with a match expression so that it only gets triggered when the matrix.items field is updated. 
That’s to prevent many unnecessary trigger fires when the user updates other fields of the record.I’ve noticed that, whether the update occurred offline or online, the update event always contains ‘matrix.items’ in updateDescription.updatedFields. So that’s the field I want to write my match expression on.For info, that’s what my initial issue was: I was using {\"updateDescription.updatedFields.matrix\":{\"$exists\":true}} as my match expression but when updated offline, this evaluates to false.So my match expression is now: {\"updateDescription.updatedFields.matrix.items\":{\"$exists\":true}}. But this doesn’t seem to work (maybe because matrix.items is an array?). The trigger doesn’t fire anymore.Here is a screenshot of an example matrix field in a record:\nAny idea what’s wrong with my match expression?", "username": "Laekipia" }, { "code": "", "text": "Hi, the first thing that comes to mind is that updateDescription only exists for Update events, so replace events will now all be filtered out.As for how to go about fixing this, I find that with all match expression issues the best way to go about it is to log the change events being made so that you can physically see that they look like. MongoDB change events can be a bit hard to think about, so I would reccomend setting up a trigger without any match expression and having the function code just be “console.log(EJSON.Stringify(changeEvent))”. Then perhaps it will become apparant why your match expression is not firing. If its still not clear, then plase post a sampling of them here and I can help out!Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "recordrecord.matrix.items", "text": "Hi @Tyler_Kaye ,\nAs you suggested, I’ve created a test trigger listening to insert, update and replace events (and no match expression).When I insert a record while offline then update record.matrix.items while still offline and finally restore connectivity, I get the following changeEvents on the trigger:I don’t get any replace events.I’ve seen in another post that people have had issues in the past with match expressions on nested arrays. Would there be a way around? Is there a way to write a match expression where at least one element in updatedFields startsWith the word ‘matrix’?Thanks!", "username": "Laekipia" } ]
Trigger on updates to document made while offline
2022-05-02T10:09:33.882Z
Trigger on updates to document made while offline
3,658
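A minimal sketch of the debugging step suggested in the thread: a trigger function with no match expression that simply logs every change event, so the exact shape of updateDescription.updatedFields (which may include dotted or array-index keys) can be inspected in the app logs before the real match expression is written:

```js
exports = function (changeEvent) {
  // Log the full event (insert, update, or replace) so the field names in
  // updateDescription.updatedFields can be inspected in the application logs.
  console.log(EJSON.stringify(changeEvent));
};
```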
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer Advocate", "text": "We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate", "username": "Shane_McAllister" }, { "code": "", "text": "We will be live in just over 30 minsWatch on MongoDB Youtube and MongoDB Twitch or else just tune in below!Don’t forget - ask any questions in the chat and we’ll endeavour to answer", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Working with GDELT
2022-05-03T12:33:45.252Z
Working with GDELT
3,007
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Hi all (and I mean ALL! We’ve over 400 registered now - this is a great!)It’s week 4 of the Hackathon and I wanted to clearly point out to existing and new participants the “flow” of participation and what to expect in participation(Note: If you’re reading this, then most likely you’ve registered, but if not, then make sure to join the “Hackathon Goup” (you’ll know if you’ve joined as your profile will say so and you’ll have the >_ beside your name (the hackathon Flair))So - please make sure you’ve done some (or all) of the following -Read the About the MongoDB World Hackathon section. This has all the info you need about participating and any past events have all the livestreams recorded and viewable - just select each event and you’ll see the embedded YouTube link (so you haven’t missed out on anything)Check out the Resources for participants in the MongoDB World Hackathon ‘22 and also our special GDELT section which specifically deals with GDELT topics.Have an IDEA for the hackathon and need teammates? Post the idea in the Projects looking for Hackers category OR have skills but no idea? Then post in the Hackers looking for Projects category. As you can see, many people have posted there already, and have been sucessful in finding team mates or teams to join. Don’t be shy, get posting!So, once participating, either as a team, or on your own, you MUST post your project details in the newly created Project Teams category. This is CRUCIAL as we really want to be able to see & help all participants. It’s so CRUCIAL, that everyone who lists here, with a link to their project repo, will get exclusive hackathon swag!!Keep hacking away and join in our weekly livestreams to learn more and interact with the hackathon team. The livestream schedule is updated weekly and can be seen in the event list or the event calendarYou have until May 20th to keep hacking away on your projects - but the Projects submissions for will open from Wednesday 11th May.WIN SWAG - we’ve plenty of other ways to be in the chances of getting swag", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
World Hackathon 101! Need to know info & guide
2022-05-03T13:08:41.600Z
World Hackathon 101! Need to know info & guide
2,547
null
[ "crud" ]
[ { "code": "", "text": "Hello,When I try to insert \\n into a document, mongo translates it to \\n instead which causes issues reading the document data later. I can obviously parse that data with a regex, but is there anyway I can insert the data so that \\n would insert as \\n instead?Thanks", "username": "Joe" }, { "code": "{ _id: 1,\n lines: \n [ 'Ah ! comme la neige a neigé !',\n 'Ma vitre est un jardin de givre.',\n 'Ah ! comme la neige a neigé !',\n 'Qu’est-ce que le spasme de vivre',\n 'À la douleur que j’ai, que j’ai.' ] }\n{ _id: 0,\n text: 'Ah ! comme la neige a neigé !\\\\nMa vitre est un jardin de givre.\\\\n\\\\nAh ! comme la neige a neigé !\\\\nQu’est-ce que le spasme de vivre\\\\nÀ la douleur que j’ai, que j’ai.' }\n", "text": "Rather than storing long text with line breaks, you could consider storing the text in an array of lines.To recreate the text with line breaks, you simply loop over. Some use cases might be simpler with an array. For example, you could render your text html friendly by adding the <br> in the looping and terminal friendly by adding the new line instead.Another advantage, it is easier to view and edit in Compass and shell.This:compared to thisand in Compass\nimage1920×1040 84.6 KB\n", "username": "steevej" } ]
Line break \n shows up as \\n in mongoDB document
2022-05-02T21:44:44.054Z
Line break \n shows up as \\n in mongoDB document
6,976
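A small sketch of how the array-of-lines layout suggested above is turned back into a single string on the way out; the poems collection name is assumed for illustration:

```js
const doc = db.poems.findOne({ _id: 1 });

const plainText = doc.lines.join("\n");   // real line breaks for terminal output
const html = doc.lines.join("<br>");      // HTML-friendly rendering
```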
null
[ "node-js", "python", "flutter", "next-js", "mdbw22-hackathon", "react-js" ]
[ { "code": "", "text": "Worked as a Full-stack developer in some free lance work. Developed and published apps on playstore.Flutter, NodeJs, ReactJS, NextJs, C++, and python. Data Structure and AlgorithmsAccording to Indian Standard Time", "username": "Shivam_Modi1" }, { "code": "", "text": "Hi @Shivam_Modi1 ,Welcome to the MongoDB community.You seem to have great experience developing front-ends and mobile apps.I’ve an idea where I’ve started creating back-end for the same and if you’re interested, Please feel free to jump in to get things rolling on Mobile / front-end side.", "username": "viraj_thakrar" }, { "code": "", "text": "This is great @Shivam_Modi1 and @viraj_thakrarIf you do come together, don’t forget to post your project in the Project Teams categoryVery much looking forward to seeing what you will build", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Shivam_Modi1 is looking for a project!
2022-04-30T07:59:03.348Z
Shivam_Modi1 is looking for a project!
3,225
null
[ "containers" ]
[ { "code": "", "text": "Hi,\nI would like to know what server is actually running in the official mongo docker image. Is it the MongoDB Enterprise Server or the MongDB Community Server?\nIs it a clear instance of one of the server or does it have any additional applications added?Thanks!\nLadislav", "username": "Ladislav_Chvila" }, { "code": "", "text": "Hi @Ladislav_Chvila welcome to the community!The “official” Docker image is a bit of a misnomer since it’s not really officially supported by MongoDB, but rather it’s Docker’s official, so any issues regarding the image should be reported in their specific Github Issues page.I believe for the Docker image, you can choose to have either Community or Enterprise as seen in this line in the source DockerfileThe default seems to be Community though, so you’ll need to do a docker build yourself to get the Enterprise edition. Note that the Enterprise edition is free to try, but it’s part of the Enterprise Advanced subscription so you’ll need that subscription to run it in production.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Docker server product type
2022-05-02T15:26:19.082Z
Docker server product type
1,496
https://www.mongodb.com/…9_2_1024x114.png
[ "atlas-triggers" ]
[ { "code": "", "text": "Hi,\nWe’re using Realm Trigger with AWS Eventbridge setup. For some reason, We’re facing this issue. Please help us with any hint.\nIssue_Realm_trigger1266×142 42.5 KB\n", "username": "Timey_AI_Chatbot" }, { "code": "", "text": "Hi Timey,Could you please provide the exact steps taken to produce this error?Do you have any Match or Project expressions in the trigger?Regards", "username": "Mansoor_Omar" } ]
Issue with creating & updating Realm Trigger
2022-04-08T10:21:26.550Z
Issue with creating & updating Realm Trigger
2,619
null
[ "aggregation" ]
[ { "code": "[\n\t{\n\t\t\"_id\": \"62702be5d87ed16cea3b2315\",\n\t\t\"G\": 3,\n\t\t\"N\": [\n\t\t\t1295748572\n\t\t],\n\t\t\"t\": \"020682390\"\n\t},\n\t{\n\t\t\"_id\": \"62702be5d87ed16cea3b2316\",\n\t\t\"G\": 3,\n\t\t\"N\": [\n\t\t\t1609988849\n\t\t],\n\t\t\"t\": \"200766679\"\n\t},\n\t{\n\t\t\"_id\": \"62702be5d87ed16cea3b2317\",\n\t\t\"G\": 3,\n\t\t\"N\": [\n\t\t\t1083965362\n\t\t],\n\t\t\"t\": \"105764876\"\n\t},\n\t{\n\t\t\"_id\": \"62702be5d87ed16cea3b2318\",\n\t\t\"G\": 3,\n\t\t\"N\": [\n\t\t\t1063897809,\n\t\t\t1144556531,\n\t\t\t1316227135\n\t\t],\n\t\t\"t\": \"200719145\"\n\t}\n]\n{\n\t\"G\": 3,\n\t\"N\": [\n\t\t1295748572\n\t],\n\t\"t\": \"020682390\",\n\t\"N\": [\n\t\t1609988849\n\t],\n\t\"t\": \"200766679\",\n\t\"N\": [\n\t\t1083965362\n\t],\n\t\"t\": \"105764876\",\n\t\"N\": [\n\t\t1063897809,\n\t\t1144556531,\n\t\t1316227135\n\t],\n\t\"t\": \"200719145\"\n}\n", "text": "I have data in an aggregation pipeline stage in the following format:I want to group on “G” and use it as an “id” field, with the rest of the data in the array.My desired result would be:The raw data is coming from SQL Server and I have control over everything but the final output format. There are hundreds of groups (“G”) with members, and each member can have multiple “N” values and one “t” value.Any help or directions is greatly appreciated. I am very experienced in SQL, but still learning NoSQL.", "username": "Bill_Johnson" }, { "code": "mongosh > expected_result = {\n\t\"G\": 3,\n\t\"N\": [\n\t\t1295748572\n\t],\n\t\"t\": \"020682390\",\n\t\"N\": [\n\t\t1609988849\n\t],\n\t\"t\": \"200766679\",\n\t\"N\": [\n\t\t1083965362\n\t],\n\t\"t\": \"105764876\",\n\t\"N\": [\n\t\t1063897809,\n\t\t1144556531,\n\t\t1316227135\n\t],\n\t\"t\": \"200719145\"\n}\n< { G: 3, N: [ 1063897809, 1144556531, 1316227135 ], t: '200719145' }\nmongosh > expected_result\n< { G: 3, N: [ 1063897809, 1144556531, 1316227135 ], t: '200719145' }\n{\n \"G\": 3 ,\n \"data\" : [\n {\n \"N\": [\n\t\t1295748572\n ],\n \"t\": \"020682390\"\n } ,\n {\n \"N\": [\n\t\t1609988849\n ],\n \"t\": \"200766679\",\n } ,\n {\n \"N\": [\n\t\t1083965362\n ],\n \"t\": \"105764876\"\n } ,\n {\n \"N\": [\n\t\t1063897809,\n\t\t1144556531,\n\t\t1316227135\n ],\n \"t\": \"200719145\"\n }\n ]\n}\n", "text": "Your expect result cannot really achieve.In javascript if I try to create a document with your expected result, I get:In most of the JSON implementation, you lose all but the last occurrence of a key:value. The first few t’s get overwritten by the last one.What can be done is to have an array of those repeated keys. For example, the following would be possible:", "username": "steevej" }, { "code": "", "text": "Thanks! I was afraid that was where I would end up.\nI appreciate having someone more knowledgeable keep me from wasting any more time.", "username": "Bill_Johnson" } ]
Is it possible to group on a field and create an array with that field as _id?
2022-05-02T21:39:09.907Z
Is it possible to group on a field and create an array with that field as _id?
1,154
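For completeness, the grouping described as achievable above (one document per "G" value with the per-member fields pushed into an array) can be produced with a stage along these lines; the collection name is a placeholder:

```js
db.members.aggregate([
  {
    $group: {
      _id: "$G",
      data: { $push: { N: "$N", t: "$t" } }   // one array entry per source document
    }
  },
  { $project: { _id: 0, G: "$_id", data: 1 } }
])
```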
null
[ "python", "compass" ]
[ { "code": "", "text": "Hello everyone, I am a beginner in mongoDB. I am developing an application using mongoDB with python and I am currently struggled with some problem. I have been manually creating a view using mongo compass before, but I realized that my system should automatically create a view when I create a collection using python. Is there anyway to solve this problem ( I have looked in stackoverflow 5 years ago and they said there is no libraries for that)?.", "username": "Nattapol_Chiewnawintawat" }, { "code": "db.createView()$lookup$match$projectpymongo.database.create_collection()", "text": "Views are created programmatically by operations like mongosh db.createView()In short, it’s a $lookup a $match and a $projectSee pymongo.database.create_collection() … views are there in the notes. It’s all done in the kwargs to the function.", "username": "Jack_Woehr" }, { "code": "", "text": "I edited my previous reponse quite a bit, @Nattapol_Chiewnawintawat … hope you got the latest ", "username": "Jack_Woehr" } ]
Is there anyway to create view using python
2022-05-03T03:40:38.227Z
Is there anyway to create view using python
2,624
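For reference, the shell form of the operation pointed at above looks like the sketch below; pymongo's create_collection() exposes the same view options (viewOn and pipeline) through its keyword arguments. The view name, source collection, and pipeline here are purely illustrative:

```js
db.createView(
  "active_orders_summary",   // name of the view
  "orders",                  // source collection
  [
    { $match: { status: "active" } },
    { $project: { _id: 0, customer: 1, total: 1 } }
  ]
)
```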
null
[ "100daysofcode" ]
[ { "code": "", "text": "Today I wanted to refresh and restart my 100 days of code journey after a brief layoff due to burnout, mental exhaustion, life, getting a new job I’m not a good fit for…life, family obligations, life…ya know stuff lol. Well, day #1… I spent a few hours going over where I had left off and re-iterating through some things I built… I am so very glad I took the time to do this. I didn’t realize the EXTREME benefit that documenting my journey through utilizing the skills and building out applications, visualizing and documenting the data that I was processing and interacting with the community would bring but lo and behold, Stennie, one of the community leaders, reached out and let me know that I was missed and was indeed a benefit to their community as well. Sometimes it takes only that…to know you don’t go unnoticed. So, thanks Stennie, it is well appreciated and I am hoping to be helpful, inspiring, teachable and involved. At the very least, I will let you know if you have an impact on me. Thanks for letting me share my journey again. I’ll be seeing y’all in the chats. Yay!#100daysofcode Day #1 video kickoff!!\nHere is a link to a video I made to kick off the challenge and commit to the journey. Thanks again mr. @Stennie_X ", "username": "Jason_Nutt" }, { "code": "", "text": "Welcome back @Jason_Nutt! I enjoyed the enthusiasm shared in your previous posts, including your learning journey and portfolio as well as feedback on University courses with fun observations.Great to have you along for the 100 days coding challenge and to share your experience!Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello @Jason_Nutt , Welcome to #100DaysOfCode journey Super excited to have you join us, Hi We look forward to learning from you and building together… Cheers ", "username": "henna.s" }, { "code": "", "text": "It’s good to be back at it. Doing some refreshing today. Thanks a bunch! ", "username": "Jason_Nutt" }, { "code": "", "text": "I’ll be seeing ya’ll through the journey. Thanks for doing this challenge @henna.s . I love what 100 days of code does for accountability. \nSooo, back to it. Refresh refresh refresh. This week will be learning what I think I know and hopefully clearing up some things I may have struggled with at first glance. Byeeee ", "username": "Jason_Nutt" }, { "code": "", "text": "Here I am back at it with y’all. I started to go through Basic cluster administration but thought to myself, I need to go through and make my project that I was building a complete work and fully fleshed out following of everything that MongoDB university plus the community has and will bring to my skill set and knowledge & understanding of building, deploying and utilizing NoSQL databases like MongoDB in future projects. I have a renewed sense that this is going to lead to somethings and projects that I don’t even know about yet. But, at any rate, @Stennie_X…I am looking forward to looking for new badges lol…after all, isn’t that what it’s all about haha?? Day 2 #100DaysOfCode I did what I know, learn by building and creating. My database admin understanding struggles continue. Started over with the basics. Here is the progress on documenting.https://t.co/fWw4B60jKCWe will get there together folx. Blessings to all. #lovecode", "username": "Jason_Nutt" }, { "code": "", "text": "Hi I’m video documenting my #100daysofcode on youtube. 1 hour a day. Here is day #3 First part of the video (About a minute) is finishing up on a form I was making for a friend. 
Then the rest of it is my consistent slow walk back down MongoDB Basics course as to not jump back in the deep end blindly. I will become what I will become in due time. I must solidify knowledge and skills through re-iteration. God bless y’all everybody. Gonna go watch some Resident Alien and wind down yay!!https://twitter.com/JasonNutt14/status/1495954772888854530?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1495954772888854530|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1495954772888854530widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "Day 4 #100DaysOfCode In the spirit of consistency & redundancy I am starting this post similar to the last…hahaha I kill myself. Anyways, like this video description says, I was feeling creative and also, tired of thinking from work. So, as I am committed to coding everyday , I decided to just refactor and add some artsy stuff while re-reading and listening to some relaxing Raffi music as it is my son’s favorite & secretly mine as well. God bless everyone Have a wonderful night \nhttps://twitter.com/JasonNutt14/status/1496309990973489156?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1496309990973489156|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1496309990973489156widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "Day 5 #100DaysOfCode I am just trying to get re-familiar with the technologies that I use and grow from there. I looked through the react docs and spun up a new application from command line. Then I spent the hour playing with and refreshing my memory on what is in a fresh react app and revisited the MongoDB playground a bit. Anyways, I will keep posting and hopefully have breakthroughs along the way. Saturday and Sunday I hope to really put a few good hours in and spend an hour or two looking through posts here in the community. Hope everyone is well. Keep it up and don’t give up.https://twitter.com/JasonNutt14/status/1496683140215189509?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1496683140215189509|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1496683140215189509widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "#100DaysOfCode D-6 @Stennie_X welcome to the party! 100 days of code it up my friend This is a lab I’m trying to figure out where I’m going wrong. This is where I went wrong the last time I tried to tackle learning about MongoDB and databases in general. I did not take the time to fully comprehend the failures and so when I proceeded to the next parts of courses I was already behind the power curb. Anyways I will be sharing this with the community while watching the next video and trying to wrap my head what I am doing wrong or misunderstanding in this lab. 
Also, I think I need that database dabbler badge lol.\nWordness to the turdness!\nSo as not to be idle with my couple hours today, I am proceeding with the next course in the Developer Learning Path … Introduction to the MongoDB Aggregation Framework.\nWordness to the turdness!\nDatabase DabblerParticipate in #100DaysOfCode challenge and share at least one day of learning MongoDB or Realmhttps://twitter.com/JasonNutt14/status/1497038038018535431?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1497038038018535431|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1497038038018535431widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "#100DaysOfCode D-7 I am really happy that I looked back into and rewatched this Jumpstart series 2021 with @Jesse_Hall!\nI had learned soooo much through following and building and experimenting with serverless functions & going back through and creating and modifying my data in Atlas, adding my D&D characters to make something that I haven’t finished yet (Lol what’s new)…but by re-watching all these videos and pulling up those projects, even ones that I had trouble completing or just abandoned, I did get a huge dose of gratitude and felt like, yeah I am NOT starting completely from scratch like it feels sometimes. I have to look at and learn from the things I have struggled with in the past & not been able to understand or didn’t have the foundational knowledge necessary to grasp…and that is what it is alot of the times with programming. I have to re-iterate until the thing that made no sense to me does now. I don’t know if that makes sense to anyone else but it’s very profound to me to know this deep in my bones here y’all lol. Have a great weekend and happy coding journey to all #100daysofcode Warriors, keep it up, we are getting somewhere!!https://twitter.com/JasonNutt14/status/1497406179227623431?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1497406179227623431|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1497406179227623431widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "#100DaysOfCode D-8, I am determined to be as slow as needed to get through these courses with thorough understanding of what is happening. I needed to alter the connection string ( by typing mongosh at the beginning of the string instead of mongo as shown in course ) here in the chapter 1 course in order to connect to our aggregations db but hey, this is farther than I got the last time I did this course. So on to the next portion of the chapter… Today, I learned much about the fundamentals of the aggregation pipeline, $match and filtering documents. Tonight after work, I will try the homework and filter out the movies in the lab by imb rating, genre, rating and language then count the ones that I’ve filtered…if I am understanding correctly. I actually am looking forward to the challenges instead of being frustrated with my slowness. I think this is growth. Hope everyone is learning much through this 100 day challenge, I know I am getting more patient in my learning if nothing else. 
\nhttps://twitter.com/JasonNutt14/status/1497663035015671808?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1497663035015671808|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1497663035015671808widget%3DTweet\n\naggregations_db1035×711 137 KB\n", "username": "Jason_Nutt" }, { "code": "", "text": "#100DaysOfCode D-9 did a crash course for 2022 tutorial w/ @codeSTACKr today in order to make sure I’m current. I also started a document to practice and keep up with my GSAP3 animation, scrollTrigger and timeline skills and not get rusty with git and pushing code to github. That is a skill that took me a very long time to be comfortable (well kinda) so I don’t want to take it for granted that I will just always be able to add commit push and publish and deploy things if I am not doing it on the regular, even if it is just writing the things that frustrate me or that I learned today in an html document and styling it a tad with a bit of animations . That will do for keeping the rust down I think, While I get back to where I think I lefty off with my MongoDB skills. I am re-iterating through so much that I am going, wow, I remember thinking how awesome MongoDB was. It still is!https://twitter.com/JasonNutt14/status/1498127784236961801?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1498127784236961801|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1498127784236961801widget%3DTweet\nBuilding a Study Sheet", "username": "Jason_Nutt" }, { "code": "", "text": "#100DaysOfCode D-10 Today was humbling, I am such a noob still…and that’s got to be ok with me, even something to embrace, because it keeps me from jumping ahead and thinking I can just start to build my fantasy ideas without first painstakingly go through these fundamental courses. Even though I have struggled with the labs sometimes , I know that I am fully grasping the concepts being taught in the course. I also previewed the next chapter on $addFields and just excited about taking it slow until things really click. I have committed to an hour but find it very hard to limit it to just that. Working full time at a call center though, I must force myself into a little rest time or I will be burnt out in no time( speaking from experience)… One day I will have built that D&D campaign and linked character lookup/Create sheets but for now, fundamentals it is. And the growth continues…I hope. Great night everyone. Keep plugging away at your dreams. https://twitter.com/JasonNutt14/status/1498478557546303489?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1498478557546303489|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1498478557546303489widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "#100DaysOfCode D-11 I ended the day with Accumulator Stages w/ $project! Went over again that _id is where to specify what incoming documents should be grouped on. That _id can use all accumulator expressions within $group. $group may be used multiple times within a pipeline…and also, it may be necessary to sanitize incoming data.\nThen we went over the accumulator expressions available within $project…$sum.$avg$max$min$stdDevPop$stdDevSam\n…within $project, these expressions have no memory between documents.\nand we still may have to use $reduce or $map for more complex calculations…\nGoodnight everyone! My brain is donsky for today. 
Have a blessed evening and hope you are all safe and sound. God bless.https://twitter.com/JasonNutt14/status/1498839387320242178?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1498839387320242178|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1498839387320242178widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "Hi @Jason_Nutt,Excellent to see your learning progress! If you do get stumped while taking MongoDB University courses, definitely ask for assistance in the dedicated course forums (eg https://www.mongodb.com/community/forums/c/university/m121/67). There is a MongoDB squad that supports learners in University courses (coincidentally, they are now part of my Community team!) and many helpful community members.I also recommend using the Aggregation Pipeline Builder in MongoDB Compass to develop and test your aggregation queries. Since the output of one pipeline stage is the input for the next stage, complex aggregation queries are more straightforward to troubleshoot if you work on adding one stage at a time.You can use a similar technique in the MongoDB shell with variables (for example, https://www.mongodb.com/community/forums/t/mongoerror-a-pipeline-stage-specification-object-must-contain-exactly-one-field/120870/15?u=stennie), but Compass provides a visual reference with some sample documents.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Nice! Thanks for the guidance and resources @Stennie_X Oh and of course…thanks for letting us earn badges for milestones with #100daysofcode … I’m a sucker for badges as a study incentive. There is something wrong with me lol ", "username": "Jason_Nutt" }, { "code": "", "text": "#100DaysOfCode D-12 First I just want to say, that Javascript is and always will be hard…and quizes on Javascript are hard, and coding is hard …but I am also, very glad to be back to doing some of it and growing. Thanks for inviting me back @Stennie_X ! Anyways, I failed a javascript skill quiz today and unlocked a couple courses that I should retake so I will. I also, reached the $graphLookup Chapter in MongoDB University’s M121: The MongoDB Aggregation Framework. I have to take it slow this time around. Just like I have to go back through some basics with other skills, I must keep solidifying basics until basics become my fundamentals and that is when I can grow. I really appreciate the idea of bringing this #100daysofcode into the underneath (or rather the DownUnder ) world of MongoDB Community…thanks for heading up the surge @henna.s and @Kushagra_Kesav . Keep it up and thanks for encouraging me to get better and be accountable. Many blessings to all.\nhttps://twitter.com/JasonNutt14/status/1499201898506821635?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1499201898506821635|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1499201898506821635widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "#100DaysOfCode D-13 Today I had a fun time learning about Facets, Bucketing & Multi-dimensional grouping. Ok it was not as fun as it sounds but I am enjoying the growth but still need to practice the concepts more. I took a creative moment to say \"Hey dude , you know you haven’t thought about making a new portfolio recently. Maybe it’ll get you energized to study a bit more if you start designing one for this year \"…so I did that. 
So I am proud to be growing in MongoDB and other areas too, hopefully it will all come together to build something mo’ beautiful someday soon. Have a great night everyone! https://twitter.com/JasonNutt14/status/1499569370380521476?ref_src=twsrc^tfw|twcamp^tweetembed|twterm^1499569370380521476|twgr^|twcon^s1_c10&ref_url=https%3A%2F%2Fpublish.twitter.com%2F%3Fquery%3Dhttps3A2F2Ftwitter.com2FJasonNutt142Fstatus2F1499569370380521476widget%3DTweet", "username": "Jason_Nutt" }, { "code": "", "text": "Hi Jason,So happy to know that you are enjoying being back to #100DaysOfCode I absolutely am… I myself touching Android Development after almost a gap of 2 years… Its a tough ride but small steps make a difference I am a self-taught developer, with the help of the community of course and it makes a lot of difference when you are working together… Please feel free to give a shout if you feel stuck on a concept, we have lot of MongoDB folks to jump and help and @Kushagra_Kesav is one of our champ For Javascript, I recently got email from FreeCodeCamp, to learn Javascript while making games… I have not explored it much yet but I hope it provides some help Keep Rocking Cheers ", "username": "henna.s" } ]
The Journey of #100DaysOfCode (@JasonNutt14)
2022-02-19T17:32:50.884Z
The Journey of #100DaysOfCode (@JasonNutt14)
12,572
null
[ "aggregation", "node-js", "mongoose-odm" ]
[ { "code": "{\n $lookup:\n {\n from: 'boleto_cancelado',\n localField: '_id',\n foreignField: 'boleto',\n as: 'boleto_cancelado'\n },\n },\n {\n $lookup:\n {\n from: 'boleto_extrato',\n localField: '_id',\n foreignField: 'boleto',\n as: 'boleto_extrato'\n },\n },\n {\n $match:\n {\n estabelecimento: new ObjectID(req.params.estabelecimento),\n ativo: true,\n \"dados_boleto.data_vencimento\": {$gte: monthStart, $lt: monthEnd},\n 'boleto_extrato.modulo_id': {$exists: false},\n 'boleto_cancelado.modulo_id': {$exists: false},\n }\n },\n {\n $group:\n {\n _id: { mes: { $month: \"$data_cadastro\" }, dia: { $dayOfMonth: \"$data_cadastro\" } },\n quantidade: { $sum: 1 },\n soma_npagos: { $sum: \"$valor.valor_boleto\" },\n }\n },\n {\n $project:\n {\n dia: \"$_id.dia\",\n soma_npagos: \"$soma_npagos\",\n quantidade: \"$quantidade\"\n }\n },\n {\n $sort:\n {\n \"dia\": 1\n }\n }],\n\n \"_id\": {\n \"mes\": 1,\n \"dia\": 19\n },\n \"dia\": 19,\n \"soma_npagos\": 189700,\n \"quantidade\": 2\n },\n {\n \"_id\": {\n \"mes\": 1,\n \"dia\": 28\n },\n \"dia\": 28,\n \"soma_npagos\": 133000,\n \"quantidade\": 1\n },\n {\n \"_id\": {\n \"mes\": 3,\n \"dia\": 29\n },\n \"dia\": 29,\n \"soma_npagos\": 71750,\n \"quantidade\": 1\n }\n\"_id\": {\n \"mes\": 1, //MONTH\n \"dia\": 1 //DAY\n },\n \"dia\": 2, //DAY\n \"soma_npagos\": 0, // SUM\n \"quantidade\": 0 // COUNT\n },\n\"_id\": {\n \"mes\": 1, //MONTH\n \"dia\": 2 //DAY\n },\n \"dia\": 2,\n \"soma_npagos\": 0,\n \"quantidade\": 0\n },\n\"_id\": {\n \"mes\": 1,\n \"dia\": 3\n },\n \"dia\": 3,\n \"soma_npagos\": 0,\n \"quantidade\": 0\n },\n\"_id\": {\n \"mes\": 1,\n \"dia\": 4\n },\n \"dia\": 4,\n \"soma_npagos\": 0,\n \"quantidade\": 0\n },\n\"_id\": {\n \"mes\": 1,\n \"dia\": 5\n },\n \"dia\": 5,\n \"soma_npagos\": 0,\n \"quantidade\": 0\n },\n.........\n \"_id\": {\n \"mes\": 1,\n \"dia\": 19\n },\n \"dia\": 19,\n \"soma_npagos\": 189700,\n \"quantidade\": 2\n },\n {\n \"_id\": {\n \"mes\": 1,\n \"dia\": 28\n },\n \"dia\": 28,\n \"soma_npagos\": 133000,\n \"quantidade\": 1\n },\n {\n \"_id\": {\n \"mes\": 3,\n \"dia\": 29\n },\n \"dia\": 29,\n \"soma_npagos\": 71750,\n \"quantidade\": 1\n }\n", "text": "Hello, I’m using Mongoose in NODEJS and I’ve searched a lot to do this, but I don’t find a way to make this happen in my code, I’m using Mongo 5.x.x (IDK exactly version)Here is the question, I need to bring all month day result (a sum of values) even if sum is null or 0, but in my actual query I only bring dates with some valueMy query:Thats the result:And I want something like that:How can i do that, please?", "username": "Eduardo_Bacarin" }, { "code": "", "text": "See related thread How create an array of dates? - #3 by steevej.", "username": "steevej" } ]
How to get every day of month even if sum is 0 or null?
2022-05-02T13:48:33.266Z
How to get every day of month even if sum is 0 or null?
2,099
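One way to get a row for every day of the month, in the spirit of the thread linked above: leave the aggregation as it is and fill the gaps in application code. A rough Node-style sketch, with field names taken from the post and the month length passed in by the caller:

```js
// `results` is the aggregation output, e.g.
// [ { dia: 19, soma_npagos: 189700, quantidade: 2 }, ... ]
function fillMissingDays(results, daysInMonth) {
  const byDay = new Map(results.map(r => [r.dia, r]));
  const filled = [];
  for (let dia = 1; dia <= daysInMonth; dia++) {
    filled.push(byDay.get(dia) ?? { dia, soma_npagos: 0, quantidade: 0 });
  }
  return filled;
}
```

On MongoDB 5.1 and later the $densify stage can do this server-side, but since the post only mentions "Mongo 5.x.x", the client-side fill is the safer assumption here.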
null
[ "node-js", "data-api" ]
[ { "code": "", "text": "I have been testing out mongodb atlas new Data-API.I have found quite a big hurdle in the way the length of time it takes for a simple document to be returned using /findOneIn comparison , I am using a aws lambda function using the normal node driver to create a database connection to my atlas cluster. Every time the function runs cold the total connection time is about 3 seconds. (warmup, create connection to atlas, query and return the data)\nHowever when the function is warm and the database connection is re-used the round trip time to get data is ranges from 200-800ms (huge difference)I expected that the data-api would perform much better, but on average it is about 3-4 seconds, and never better on subsequent tries.Does it mean the using the data api to http://data.mongodb-api.com/xxxx which i believe behind the scenes invokes another cloud function and therefore is always cold and the db connection is always closed?Using a simple data-set the data-api seems very slow when using the drivers.Any support on what I may be doing wrong would be appreciated.", "username": "Rishi_uttam" }, { "code": "", "text": "Hey Rishi - thanks for your feedback! It would be helpful to understand a little bit more about your scenario. While the API will ineviatably have another hop before reaching the database given it’s going via a HTTPS connection, I’m curious about -3-4s is unexpected and thank you for bringing this to our attention!", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hello, and thanks for the reply.My atlas tier is M1 shared HKG region AWS (i seldom get over 20 + connections and using a database that is currently less then 30MB, the http response payload is only 1-2kb, infact i am just returning a small document for testing.region is Hong Kong AWS. I am also invoking my lamda from Hong Kong, the connecting to lambda is super fast, (as it is with the mongo node driver to my atlas db )but when i use the data-api i find their is no caching of open db connections on mongo’s side? Does the data-api create a new connection each time? this would certaintely make the request slow. 
This has not been documented as far as I know, and I would like to know how the Data API works behind the scenes: is a function being called (similar to realm-web)? Yes, HTTPS does carry some overhead, but still, 4 seconds to findOne is slow; my lambda function using the Node driver is also creating an HTTPS connection to Atlas using the connection string, is it not?My last test was last week, I will try to follow up with some screenshots with dev tools showing the total time.Yes, 3-4 seconds is perhaps OK on the first try, but subsequent tries should be within 200ms; it is for this reason we have refrained from using the Data API as it is too slow right now.My Data API is using the global URI as mentioned in my post: http://data.mongodb-api.com/xxxx… I believe this means it would auto-select the closest region (out of 4).I am happy to conduct more testing should you need, or if you have any results on your side I would very much like to see them.Thanks\nRishi\nI would like to know if others also experience this.", "username": "Rishi_uttam" }, { "code": "", "text": "Hi Rishi - we are selecting the closest region, but the closest one to you is currently located in Sydney, which is why it might be taking longer - we have plans to allow the Singapore region in AWS shortly, so that will help a little bit.We do some caching, I’m curious if you’re still running into this today - we had some issues on our end around mid April and I’m wondering if it was related to that.Scaling up the cluster might help out a little bit, so will optimizing for the payload size by projecting only what you need in each request. How big is your payload for the find?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I’ll send a dev tools report in order to provide more context to this issue.It would be good if there was a deep dive video into the Data API and how the backend works with requests/caching/opening and closing of DB connections and how these are re-used. I did watch some live MongoDB videos, but they don’t discuss this.My payload is literally 1 byte, so scaling up should not make a difference at all; in fact the free tier should be a perfect use case to measure speed and reliability going forward.At the current rate of 4 seconds response time, it is way too long even for simple hobby apps.\nI am getting better response times using the MongoDB Node driver on a cloud function.\nYes, certainly Atlas & Realm need more edge locations for realm-web and the Data API.", "username": "Rishi_uttam" } ]
Data API (beta) - seems very slow to access data
2022-04-21T09:44:51.173Z
Data API (beta) - seems very slow to access data
4,885
null
[ "crud" ]
[ { "code": "{\n \"name\": \"Jimmy Maxwell\",\n \"phone\": \"555-123-4567\",\n \"address\": \"123 Fake Street\"\n}\ndb.person.updateOne({\"name\": \"Jimmy Maxwell\"}, {\"$set\": {\"phone\": \"555-765-4321\"}})\ndb.person.updateOne({\"name\": \"Jimmy Maxwell\"}, {\"$set\": {\"address\": \"321 Maple Ave\"}})\n", "text": "Say I have the following example doc:Say I have two independent operations: one that updates the phone number, and one that updates the address. These two operations are called concurrently (using an async library) at the same time:Is there a potential race condition such that either the phone or address won’t be set properly, or is it safe given that the updates only touch their respective fields?", "username": "Jared_Lindsay1" }, { "code": "", "text": "“In MongoDB, an operation on a single document is atomic.”\nSee Transactions", "username": "Jack_Woehr" } ]
Is there a potential race condition here?
2022-05-02T17:11:50.233Z
Is there a potential race condition here?
2,243
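Since each single-document update is atomic, the two concurrent $set calls on different fields cannot interfere with each other; both fields end up written whichever order they run in. If the two changes ever need to land together as one atomic write, they can simply be combined into a single update, reusing the sample data from the question:

```js
db.person.updateOne(
  { name: "Jimmy Maxwell" },
  { $set: { phone: "555-765-4321", address: "321 Maple Ave" } }
)
```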
null
[ "node-js", "mongoose-odm" ]
[ { "code": "MongoServerError: E11000 duplicate key error collection: DB1.questions index: themes_1 dup key: { themes: null }\nconst mongoose = require(\"mongoose\");\nconst Schema = mongoose.Schema;\nconst conn = require('../../config/mongooseSetup');\n\nconst questionShema = new Schema({\n question: { type: String, required: true, unique: true },\n lang: { type: String, default: \"en\" }\n});\n\nmodule.exports = conn.DB1.model(\"question\", questionShema);\nconst questionShema = new Schema({\n question: { type: String, required: true },\n lang: { type: String, default: \"en\" },\n themes: [\n { type: Schema.Types.ObjectId, ref: \"theme\", required: true },\n ]\n});\n\nmodule.exports = conn.DB2.model(\"question\", questionShema);\n", "text": "Hi guys, I created a small project to learn. I created a connection to 2 DB, and I have 2 schemas with the same name. I can add multiples docs on the first DB but only one on the second because I got the errorI created 2 Schemas, 1 per DB and they are differentSchema 1Schema 2Is this possible in MongoDB?, if I use a different schema name on DB2 works like a charm", "username": "Alexis_73125" }, { "code": "", "text": "Is this possible in MongoDB?I do not think this is a MongoDB issue.Most likely it is a mongoose limitation or bug since:if I use a different schema name on DB2 works like a charmand MongoDB is not aware of your new Schema.Yes the issue manifest as a MongoServerError on DB1.questions with a duplicate key (value null) on the index themes_1.But your mongoose schema for DB1 has no mention of the themes. This may also point to an error from your part. You define the unique index themes_1 on DB1 and you forgot to add it to your mongoose schema.HoweverI can add multiples docs on the first DB but only one on the second because I got the errorseems to indicate that you try to insert documents in DB2 but the error is on DB1. This looks like mongoose tries to insert your DB2 question model into the Mongo’s DB1 database but collection questions. I do not know moogoose, because I am avoiding abstraction layers, so I wonder how the question in mongoose becomes questions with an s in MongoDB. I would hate mongoose even more if it mangles the schema name into the collection name by adding an s.What worry me is that the inconsistency you between your DB1.model where you do not mentioned the field themes yet you have a unique index themes_1 on DB1.questions.", "username": "steevej" }, { "code": "", "text": "I think that the only reason why it works when you change the DB2 schema name it is because you do not have the unique index themes_1 on the DB1 collection with the other name.If you insert with conn.DB2 and you got an error on DB1 then you did not initialized conn.DB2 correctly.If you insert with conn.DB1 and you get a unique index error on themes_1, the it means you defined the unique index on the wrong collection because conn.DB1 model has no field themes.I would be surprised if mongoose let you insert on conn.DB2 without themes or with themes:null since it is marked as required.In your schema code, you shared the require(“mongoose”) part for conn.DB1 but not for conn.DB2. Are both module.exports = … in the same file? 
Or you simply did not shared the require lines for conn.DB2?", "username": "steevej" }, { "code": "// CONNECT TO DATABASES (mongooseSetup.js file)\nmongoose.DB1= mongoose.createConnection(mongoDb1, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n}, (err) => {\n if (err) {\n console.log('Error DB1!!!', err);\n } else {\n console.log('Connection succefull to DB1!!!');\n }\n});\n\nmongoose.DB2 = mongoose.createConnection(mongoDb2, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n}, (err) => {\n if (err) {\n console.log('Connection error', err);\n } else {\n console.log('Connection succefull to DB2!!!');\n }\n});\n\nmodule.exports = mongoose;\nconst mongoose = require(\"mongoose\");\nconst Schema = mongoose.Schema;\nconst conn = require('../config/mongooseSetup');\n\nconst questionShema = new Schema({\n question: { type: String, required: true },\n lang: { type: String, default: \"en\" },\n themes: [\n { type: Schema.Types.ObjectId, ref: \"theme\", required: true },\n ]\n});\n\nmodule.exports = conn.DB1.model(\"question\", questionShema);\nconst mongoose = require(\"mongoose\");\nconst Schema = mongoose.Schema;\nconst conn = require('../../config/mongooseSetup');\n\nconst questionShema = new Schema({\n question: { type: String, required: true, unique: true },\n lang: { type: String, default: \"en\" }\n});\n\nmodule.exports = conn.DB2.model(\"question\", questionShema);\n", "text": "In your schema code, you shared the require(“mongoose”) part for conn.DB1 but not for conn.DB2. Are both module.exports = … in the same file? Or you simply did not shared the require lines for conn.DB2?Yes both are on the same fileI think that the only reason why it works when you change the DB2 schema name it is because you do not have the unique index themes_1 on the DB1 collection with the other name.I tried removing also the unique: true attribute and I also got an error.I made a mistake in my explanation. The first test I did was with Shema 1 (this is the one that includes Theme) and everything worked perfectly. Then I added a Shema 2 for the DB 2 without Themes.My problem is, inserting documents in DB 2 using Shema 2 (the one without Themes)Shema for DB1Shema for DB2So I can continue inserting questions using Shema 1, that’s is fine, but I can only insert 1 doc on Db2 using Shema 2, on the second attempt I got the error ", "username": "Alexis_73125" }, { "code": "use DB1\ndb.getCollectionNames()\ndb.questions.getIndexes()\nuse DB2\ndb.getCollectionNames()\ndb.questions.getIndexes()\nbash> cat Test.js\nmodule.exports = \"foo\"\nmodule.exports = \"bar\"\nbash> node\nnode> test = require( \"./Test\" )\n'bar'\nnode> test\n'bar'\n", "text": "I tried removing also the unique: true attribute and I also got an error.Removing unique:true on the mongoose model does not remove the unique index themes_1 in the MongoDB collection.We are still missing some code.How do you initialize mongoDb1 and mongoDb2?Just to make sure I understand correctly.Point 5 and point 6 are strong indication that mongoDb1 and mongoDb2 are NOT initialized correctly because inserting with mongoose’s conn.DB2 results to an insert in mongo’s DB1.When you change question to something else for DB2, it works because it is not inserting in DB1.questions which has the unique index, but it inserts into another DB1 collection that does not have a unique index. 
That is what I meant by:I think that the only reason why it works when you change the DB2 schema name it is because you do not have the unique index themes_1 on the DB1 collection with the other name.Using mongosh, please share the output of the commands:But I think the issue isYes both are on the same fileCan you really have multiple module.exports in the same file? Yes but only the last one is really exported.The answer is then that you can but only the last one is really exported.Please share your insert code?I also have an answer toI wonder how the question in mongoose becomes questions with an s in MongoDB. I would hate mongoose even more if it mangles the schema name into the collection name by adding an s.This is what it does when you do not explicitly specify a collection name.", "username": "steevej" }, { "code": "", "text": "Removing unique:true on the mongoose model does not remove the unique index themes_1 in the MongoDB collection.This comment solved my problem, thank you very much. What happened was that I had created the Schema some time ago, and I had added documents to it and then I continued adding attributes to the schema, as is the case of “unique: true”. I thought that when I compiled the application everything was re-built, but it was not. I deleted the table Questions on both DB and voila, now I can send data without problems .", "username": "Alexis_73125" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error adding on post a doc to a database. My app is connected to 2 DB. Help!
2022-04-30T04:04:34.697Z
Error adding on post a doc to a database. My app is connected to 2 DB. Help!
3,204
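The turning point in the thread above is that removing unique: true from the Mongoose schema never removes an index that was already built on the server. As a hedged illustration (database, collection, and index names are taken from the E11000 message; adjust them to your setup), the leftover index can be inspected and dropped from mongosh without deleting the collection, which avoids throwing away existing documents:

    // Sketch only: the index name "themes_1" comes from the duplicate key error above.
    use DB1
    db.questions.getIndexes()            // confirm the stale unique index on { themes: 1 }
    db.questions.dropIndex("themes_1")   // remove just the index; existing documents stay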
null
[ "node-js", "next-js", "mdbw22-hackathon" ]
[ { "code": "", "text": "Please do give feedback.https://goals-of-our-generation.vercel.app/How to make this is covered in this blog https://tech-blog.agrittiwari.com/nextjs-and-mongodb-application-v1", "username": "Agrit_Tiwari" }, { "code": "GDELT datasetJust to brief you:", "text": "Hi @Agrit_Tiwari,Welcome to the MongoDB Community forums I checked your Tech blog and the web app you built using MongoDB, NodeJs, and NextJs. It’s really amazing.Please correct me if I’m wrong because I didn’t find the use of the GDELT dataset in your web app.Just to brief you:To successfully qualify for the MongoDB World Hackathon '22, we have two prerequisites: You must be using the GDELT dataset.\n You must be using MongoDB in your project.Please check out some amazing resources that will be really beneficial for you:Happy Hacking,\nKushagra Kesav", "username": "Kushagra_Kesav" }, { "code": "", "text": "Great idea! I’m inspired. Thanks so much for sharing.", "username": "Jason_Nutt" } ]
Built an application with integration of mongodb in nextjs
2022-05-02T02:59:03.677Z
Built an application with integration of mongodb in nextjs
3,528
null
[ "dot-net" ]
[ { "code": "public class Person{\n string id, \n string name,\nlist<string> Addresses. // stores id's from Address document\n//bsonignore\nAddress AddressDetail\n} \n\npublic class Address{\nstring id,\nstring address1,\nstring address2 \n}\n", "text": "My Document structure is as follow.When I load person details , I want to load all the addresses based on id’s present in the List of Person document.Either MongoDB way or C# Way will atleast give me a hint to resolve this.", "username": "Manish_Pandit" }, { "code": "{\n _id: ObjectId(\"6270582dfe5ecac36c5d315d\"),\n Name: \"Jane Doe\",\n Addresses: [\n { AddressLine: \"123 Nowhere Street\", city: \"Somewhere\", state: \"NY\" },\n { AddressLine: \"456 Somewhere Avenue\", city: \"Nowhere\", state: \"NY\" }\n ]\n}\npublic class Person {\n public ObjectId Id { get; set; }\n public string Name { get; set; }\n public IEnumerable<Address> Addresses { get; set; }\n}\n\npublic class Address {\n public string AddressLine { get; set; }\n public string City { get; set; }\n public string State { get; set; }\n}\n", "text": "Hi, @Manish_Pandit,Welcome to the MongoDB Community Forums. I understand that you’re trying to represent a Person with a list of Addresses. The easiest way to model this in MongoDB is with subdocuments:The .NET/C# driver will map collections of subdocuments automatically to your POCOs.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
C# Load Related document based on List<string>
2022-05-02T16:41:59.217Z
C# Load Related document based on List<string>
2,100
null
[ "aggregation", "queries", "crud" ]
[ { "code": "$replaceWithelse$cond$condtruedb.getCollection(\"port_status\").updateOne({\n id: 'abc123'\n }, \n [\n {\n $replaceWith: {\n $cond: {\n if: { $lt: ['$updatedAt', ISODate(\"2022-01-10T19:56:00.025+0000\")] },\n then: { \n id: 'abc123',\n locId: 'aaa',\n updatedAt: '2022-05-02T01:00:00.000Z'\n },\n else: null\n }\n }\n\n }\n ]\n)\n", "text": "I’m currently attempting to use an aggregation pipeline to conditionally update a document based on a date field (the date field is provided by an API, not controlled by Mongo).I’m using the $replaceWith stage, as I’d like to update the entire document if the condition is met. The issue I am having is in the else branch of $cond. I’d like no update to happen if $cond does not return true, but with my query below I am getting an error.", "username": "Greg_Fitzpatrick-Bel" }, { "code": "", "text": "Simply put your updatedAt condition inside your query argument.If condition is true document will found and updated.If condition is false no document will be found and not update will occur.", "username": "steevej" }, { "code": "db.getCollection(\"port_status\").updateOne({\n id: 'abc123',\n updatedAt: {\n $cond: {\n if: { $lt: ['$updatedAt', ISODate(\"2022-01-10T19:56:00.025+0000\")] },\n }\n }\n\n }, \n {\n $set: {\n id: 'abc123',\n locId: 'aaa',\n updatedAt: '2022-05-02T01:00:00.000Z'\n }\n\n }\n)\nunknown operator: $cond", "text": "I’m not sure I’m totally following, do you mean something like this?I’m getting an error: unknown operator: $cond", "username": "Greg_Fitzpatrick-Bel" }, { "code": "", "text": "Yes but you have to follow the correct syntax.See https://www.mongodb.com/docs/manual/reference/operator/query/lt/ and follows the examples.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Conditionally updating document based on date field
2022-05-02T19:09:03.170Z
Conditionally updating document based on date field
4,522
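For completeness, here is one way the advice in this thread could look in full: the date guard moves into the query document (where $lt is a query operator, so no $cond is involved), and a document that does not satisfy the condition is simply not matched, producing matchedCount 0 instead of an error. Field names are copied from the thread; treat this as a sketch rather than the poster's final code.

    // Sketch only: the condition lives in the filter, so stale updates match nothing.
    db.getCollection("port_status").updateOne(
      { id: "abc123", updatedAt: { $lt: ISODate("2022-01-10T19:56:00.025+0000") } },
      { $set: { locId: "aaa", updatedAt: ISODate("2022-05-02T01:00:00.000Z") } }
    )
    // result.matchedCount === 0  -> the stored document was newer, nothing was written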
null
[ "queries", "node-js" ]
[ { "code": "import app from \"./server\";\nimport { MongoClient } from \"mongodb\";\nimport BeautyDAO from \"./dao/beautyDAO\"\n\nconst port = process.env.PORT || 8181;\n\nMongoClient.connect(\n process.env.MDECK_DB_URI,\n {\n maxPoolSize: 100, \n useNewUrlParser: true, \n writeConcern: {wtimeout:3000}\n },\n )\n .catch( err => {\n \n console.error(err.stack);\n\n process.exit(1);\n })\n .then(async client => {\n\n await BeautyDAO.injectDB(client);\n\n app.listen(port, () => {\n console.log(`Server is running on port: ${port}`);\n });\n })\nimport express from \"express\";\nimport bodyParser from \"body-parser\";\nimport cors from \"cors\";\nimport morgan from \"morgan\";\nimport beautyData from \"./api/beautyData.route\"\n\n\nconst app = express();\n\napp.use(cors());\nprocess.env.NODE_ENV !== 'prod' && app.use(morgan(\"dev\"));\napp.use(bodyParser.json());\napp.use(bodyParser.urlencoded({extended: true}));\n\n// register API routes\napp.use(\"/api/v1\", beautyData)\n// app.use(\"/post\", beautyData)\n// app.use(\"/status\", express.static(\"build\"))\n// app.use(\"/\", express.static(\"build\"))\napp.use(\"*\", (req, res) => res.status(404).json({ error: \"not found\" }))\n\nexport default app;\nimport { ObjectId } from \"bson\";\n\nexport let beauty\nlet MetaDeck\n// const DEFAULT_SEARCH = [[\"ratings.viewr.numReviews\", -1]]\n\nexport default class BeautyDAO {\n \n static async injectDB(conn) {\n if(beauty) return;\n try{\n MetaDeck = await conn.db(\"MetaDeck\");\n beauty = await conn.db(\"MetaDeck\").collection(\"beauty\");\n console.log(\"connection to db established\")\n } catch (e) {\n console.error(`unable to establish a connection handle in beautyDAO: ${e}`)\n }\n }\n\n static async getSome() {\n const query = {product: \"calm shoes\" }\n let cursor\n try {\n cursor = await beauty.find(query) \n } catch (e) {\n console.error(`Unable to issue find command, ${e}`)\n return { results: []}\n }\n }\n\n static async postSome() {\n try {\n await beauty.insertOne({product: \"Dyshco Dress\"})\n //return cursor\n } catch (e) {\n console.error(`Unable to issue find command, ${e}`)\n return { results: []}\n }\n }\n\n static async deleteSomething() {\n try {\n await beauty.deleteOne({product: \"Fang Dress\"})\n } catch (error) {\n \n }\n }\n}\nimport {BeautyDAO, beauty} from \"../dao/beautyDAO\";\n\nexport default class BeautyController {\n\n static async apiGetAllData(req, res, next) {\n //res.json({message: \"hi im working on it\"})\n let ans = beauty.find({});\n try {\n res.json(ans)\n } catch (err) {\n res.status(500).json({error: err});\n }\n }\n\n static async postSomething(req, res, next) {\n await BeautyDAO.postSome();\n res.json({message: \"still working on it\"})\n // try {\n // let response = {\n // product: product\n // }\n // res.json(response)\n // } catch (err) {\n // res.status(500).json({error: err});\n // }\n }\n static async apiDeleteSomething(req, res, next){\n let ans = await beauty.deleteOne({product: \"Fang Dress\"})\n try{\n res.json(ans)\n } catch (e){\n res.json({message: \"still working on it\"})\n }\n \n }\n\n}\nimport { Router } from \"express\";\nimport BeautyCtrl from \"./beautyData.controller\";\n\nconst router = new Router();\n\n// router.route(\"/\").get(BeautyCtrl.apiGetAllData);\n// router.route(\"/search\").get(BeautyCtrl.apiSearchData);\n// router.route(\"/product-type\").get(BeautyCtrl.apiGetProdTypeData);\n// router.route(\"/id/:id\").get(BeautyCtrl.apiGetProdById);\n\n\nrouter.route(\"/\").get(BeautyCtrl.apiGetAllData) 
\nrouter.route(\"/addItem\").post(BeautyCtrl.postSomething)\nrouter.route(\"/delete\").delete(BeautyCtrl.apiDeleteSomething)\n\n\nexport default router;\n", "text": "Hi im working on my first app with mongodb. Putting together the api and Ive been able to post and delete from my collection but for some reason I cant get anything to return from .find() I just get back{\n“_events”: {},\n“_eventsCount”: 0\n}posting multiple files sorry in advance.index.jsserver.jsbeautyDAO.jsbeautyData.controller.jsbeautyData.routes.js", "username": "Billy_Best" }, { "code": "", "text": "I put .find({product: { $elemMatch: { $exists: true } }}) and got back the same result making me think the find is seeing the documents in the collection as im getting back status 200?\nReally confused why this one operation seemingly the simplest is not working…", "username": "Billy_Best" }, { "code": "", "text": "Working now had to use .toArray() after my find()\ndb.collection.find().project().toArray()", "username": "Billy_Best" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
CRUD confusion. failing to get documents. status is 200
2022-05-02T04:42:13.437Z
CRUD confusion. failing to get documents. status is 200
2,878
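The self-answer above is the key point: find() returns a cursor, and serializing the cursor object itself is what produced the _events/_eventsCount output. A small sketch of the DAO read with the cursor materialized (it reuses the poster's beauty collection handle but is not their exact final code):

    // Sketch only (Node.js driver): materialize the cursor before returning it.
    // "beauty" is assumed to be the collection handle injected in injectDB() above.
    async function getSome(beauty) {
      try {
        const docs = await beauty.find({ product: "calm shoes" }).toArray()
        return { results: docs }
      } catch (e) {
        console.error(`Unable to issue find command, ${e}`)
        return { results: [] }
      }
    }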
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "Query Targeting: Scanned Objects / Returned has gone above 1000\n \nThe ratio of documents scanned to returned exceeded 1000.0 on XXXX.am0hn.gcp.mongodb.net, which typically suggests that un-indexed queries are being run. To help identify which query is problematic, we recommend navigating to the Profiler tool within Atlas. Read more about the Profiler here.\n const LNowPlaying = require('mongoose').model('NowPlaying');\n var query = LNowPlaying.findOne({\"station\":req.params.stationname, 'timeplay':{$gte: new Date(today).toISOString()}, \"history\":{$in: [y]}}).sort({\"_id\":-1}).limit(100)\n query.exec(function (err, docs) {\n \n //DO STUFF HERE \n\n});\n", "text": "Trying not to be negative however MongoDB LIVE chat support is really not very helpful, and being asked by MongoDB to pay USD$800 to get support for their system is really not cool, when you pay a monthly fee to host your data on their servers.Personally I am getting sick of these query alerts that really don’t give me the customer a way to fix the problem - that if it truly is a problem. Why I say that is because I never got these alerts on the free Atlas which tells me their must be a difference in the way mongodb Atlas is handling data for free/paid atlas clients.Emails like this:Does not help the customer. In fact it makes me angry, and when you go and ask for support to fix the issue I am told - sorry you need to pay USD$800 per day with a min of 2 days for a PDF document report on what the issues are.This makes great business sense if you care about money, but creates a VERY poor customer relations if you care about the customer. I even told the person on the chat. I can’t afford $800USD x 2. It’s just out of the budget.If MongoDB is so smart and the best DB in the industry. Why don’t they have a button that says “FIX ISSUE” and give us the correct query or correct way to be sending the query to the cluster.I have no issue paying for support, but I have an issue when the price is way out of my budget and was told that the $50 per month support would not cover these issues, nor would they be able to help on a code level.I was also told that Mongoose was not recommended by MongoDB - which is fare however how can a query like the one below, be searching over 1000 queries it’s impossible.Then I get an alert the same alert for my campaign collection which only has a total of 8 documents and I get told that it also queried over 1000 documents - that is impossible.Clearly MongoDB alerts is broken and not helpful to myself. Please make your support affordable, and help people because our other options include moving DB, losing us as a Atlas customer or even outsourcing support to a person on Upwork who will do it at $100 and will fix our code if it’s a coding issue.", "username": "Russell_Harrower" }, { "code": "", "text": "Hi Russell,Let me just start by saying up front based on what you’ve shared here, you’ve experienced an unacceptably rocky road with us and that’s definitely not the experience we want to delivering you or any other customers. 
I want to make sure we deeply understand your journey so we can learn from it and will have the team reach out privately.The fact that you feel that we couldn’t help you with our built-in free always-available in-UI chat which offers best effort assistance or with our Atlas Developer support starting at $49 per month (or 20% of your monthly spend whichever is greater) sounds like a real miss to me that is simply not what we’re going for.Query targeting, alerts slow query logging, and index suggestions are all an area we want to improve and also a hard problem, one that we’re working hard to improve. To add some color, the query targeting alert is powered by gauge metrics that the C+±based core database engine aggregates efficiently on the backend without any meaningful overhead on the workload, whereas our slow query logs and index suggestions are built on a slow query log sampling approach downstream from the database engine: the difference in these two nuanced (and from your perspective probably “excuses”) backend details can emerge in sub-optimal ways that we’re very eager to improve upon–I’m sorry we still have some of these gaps. Our goal is to chip away and reduce the chance of these issues emerging over time so that the vast majority of users have an easier experience.I love the idea of a “FIX ISSUE” button and it’s a great north star for us to build the product around: in fact I’d go further and say we should fix issues if we can without even asking anything of you. HOWEVER to be intellectually honest, a general purpose application data platform like MongoDB Atlas offers customers such a comprehensive set of paths for customers to take, to build for such a wide variety of use cases, that it’s easier said than done to get to that vision. In fact it’s iterative and it’s something we’ll always be building toward. Hearing this from you and being reminded of the importance of this with your passion is exactly what we need to hear to keep improving.-Andrew", "username": "Andrew_Davidson" } ]
Too many Alerts - and support does not help
2022-04-30T09:48:53.718Z
Too many Alerts - and support does not help
1,968
null
[ "dot-net", "text-search" ]
[ { "code": "", "text": "I have a text index created on 5 fields of my collection. I use C# to query the data. I would like to know if it is possible to perform text search on only one field from that index instead of searching in all 5 fields.", "username": "Prajakta_Sawant1" }, { "code": "path", "text": "You can specify a single field in the path parameter of the query.", "username": "Marcus" }, { "code": "", "text": "That worked. Thank you @Marcus", "username": "Prajakta_Sawant1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Text Index search query
2022-04-28T08:13:46.703Z
Text Index search query
2,610
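The path option mentioned in the reply belongs to the Atlas Search $search stage rather than to a classic text index (a classic text index searches all of its indexed fields and cannot be limited to one field at query time), so the suggestion assumes an Atlas Search index is in place. A hedged sketch, with "default" as the index name and "title" as a stand-in for whichever of the five fields should be searched:

    // Sketch only: assumes an Atlas Search index named "default" on this collection.
    // "title" is a placeholder for the one field you want to restrict the search to.
    db.getCollection("yourCollection").aggregate([
      { $search: { index: "default", text: { query: "example phrase", path: "title" } } },
      { $limit: 10 }
    ])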
null
[ "dot-net" ]
[ { "code": "log:{\"t\":{\"$date\":\"2022-04-30T15:01:33.010+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2022-04-30T15:01:33.010+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileRenameFailed: \\ufffdܾ\\ufffd\\ufffd\\ufffd\\ufffdʡ\\ufffd\\nActual exception type: class mongo::error_details::ExceptionForImpl<37,class mongo::AssertionException>\\n\"}}\n{\"t\":{\"$date\":\"2022-04-30T15:01:33.232+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FF649112853\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"},{\"a\":\"7FF6491146EE\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":256,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"12E\"},\n{\"a\":\"7FF6491D9347\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"},\n{\"a\":\"7FF6491D9329\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"},{\"a\":\"7FFB93E1D4E8\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"},\n{\"a\":\"7FFB8BFA1AAB\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"95B\"},\n{\"a\":\"7FFB8BFA2317\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11C7\"},\n{\"a\":\"7FFB8BFA40D9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"},\n{\"a\":\"7FF6493C789C\",\"module\":\"mongod.exe\",\"file\":\"d:/a01/_work/12/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"},{\"a\":\"7FFB969DA71D\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11D\"},{\"a\":\"7FFB969649D3\",\"module\":\"ntdll.dll\",\"s\":\"RtlImageNtHeaderEx\",\"s+\":\"483\"},{\"a\":\"7FFB969666E9\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"2D9\"},{\"a\":\"7FFB92E34F38\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"68\"},{\"a\":\"7FFB8B356480\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"},{\"a\":\"7FF64917BEB1\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/47248606fd9a23008b92cb0d96c8c674/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1891,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"},{\"a\":\"7FF64911EEE3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"},{\"a\":\"7FF64800D52B\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"53B\"},{\"a\":\"7FF64800CE8C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.29.30133/include/thread\",\"line\":56,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> 
>,0>\",\"s+\":\"2C\"},{\"a\":\"7FFB93DCFB80\",\"module\":\"ucrtbase.dll\",\"s\":\"o__realloc_base\",\"s+\":\"60\"},{\"a\":\"7FFB94B784D4\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2022-04-30T15:01:33.232+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF649112853\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"}}}\n{\"t\":{\"$date\":\"2022-04-30T15:01:33.232+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6491146EE\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":256,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"12E\"}}}\n{\"t\":{\"$date\":\"2022-04-30T15:01:33.232+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6491D9347\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n{\"t\":{\"$date\":\"2022-04-30T15:01:33.232+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6491D9329\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n", "text": "", "username": "zhu_Mr" }, { "code": "", "text": "This is my working environment:Tencent cloud server,Windows version: windows server 2016 DataCenter。Cpu:Inter(R) Xeon(R) Platinum 8255C ,RAM:8GB", "username": "zhu_Mr" }, { "code": "", "text": "Hello @zhu_Mr\nit will help others to help when you post your message in a more readable format. You can find in Formatting code and log snippets in posts a helpful description from @Stennie_X how to do this.Also it will help to get responses when you elaborate onRegards,\nMichael", "username": "michael_hoeller" } ]
My MongoDB crashed on windows
2022-05-02T03:28:38.125Z
My MongoDB crashed on windows
4,191
null
[ "queries" ]
[ { "code": "", "text": "{\n“_id” : ObjectId(“606b7031a0ccf722226a85ae”),\n“groupId” : ObjectId(“5f06cca74e51ba15f5167b86”),\n“insertedAt” : “2021-04-05T20:16:49.893343Z”,\n“isActive” : true,\n“staffId” : [\n“606b6b44a0ccf72222ce375a”\n],\n“subjectName” : “English”,\n“teamId” : ObjectId(“6069a6a9a0ccf704e7f4b537”),\n“updatedAt” : “2021-04-05T20:16:49.893382Z”\n}\n{\n“_id” : ObjectId(“606b7046a0ccf72222c00c2f”),\n“groupId” : ObjectId(“5f06cca74e51ba15f5167b86”),\n“insertedAt” : “2021-04-05T20:17:10.144521Z”,\n“isActive” : true,\n“staffId” : [\n“606b6c34a0ccf72222c5a4df”,\n“606b6c48a0ccf722228aa035”\n],\n“subjectName” : “Maths”,\n“teamId” : ObjectId(“6069a6a9a0ccf704e7f4b537”),\n“updatedAt” : “2022-04-29T07:57:31.072067Z”,\n“syllabus” : [\n{\n“chapterId” : “626b9b94ae6cd2092024f3ee”,\n“chapterName” : “chap1”,\n“topicsName” : [\n{\n“topicId” : “626b9b94ae6cd2092024f3ef”,\n“topicName” : “1.1”\n},\n{\n“topicId” : “626b9b94ae6cd2092024f3f0”,\n“topicName” : “1.2”\n}\n]\n},\n{\n“chapterId” : “626b9b94ae6cd2092024f3f1”,\n“chapterName” : “chap2”,\n“topicsName” : [\n{\n“topicId” : “626b9b94ae6cd2092024f3f2”,\n“topicName” : “2.1”\n},\n{\n“topicId” : “626b9b94ae6cd2092024f3f3”,\n“topicName” : “2.2”\n}\n]\n}\n]\n}The query to fetch syllabus with chapter id\ndb.subject_staff_database.find( { “_id” : ObjectId( “606b7046a0ccf72222c00c2f” )},{“syllabus” : {\"$elemMatch\" : { “chapterId”:“626b9b94ae6cd2092024f3ee”} } } ).pretty()output{\n“_id” : ObjectId(“606b7046a0ccf72222c00c2f”),\n“syllabus” : [\n{\n“chapterId” : “626b9b94ae6cd2092024f3ee”,\n“chapterName” : “chap1”,\n“topicsName” : [\n{\n“topicId” : “626b9b94ae6cd2092024f3ef”,\n“topicName” : “1.1”\n},\n{\n“topicId” : “626b9b94ae6cd2092024f3f0”,\n“topicName” : “1.2”\n}\n]\n}\n]\n}Now I want to select topicsName array first elemet data along with chapterId and chapterName how to achive it", "username": "Prathamesh_N" }, { "code": "", "text": "Read Formatting code and log snippets in posts and re-publish your documents in a way we can use them to experiment.", "username": "steevej" }, { "code": "\"syllabus.topicNames.$\" : 1\n{ $addFields: {\n \"syllabus.topicNames\": { $first: \"$syllabus.topicNames\" }\n}}\n", "text": "Hi @Prathamesh_N ,What you are looking for is called positional projection :Where you can project first element that match a query :Or use aggregationThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to select the array element
2022-04-30T17:14:59.692Z
How to select the array element
3,573
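Pavel's reply gives the two building blocks without wiring them into the original query, so here is one hedged way to combine them. Note the question's array is spelled topicsName while the reply writes topicNames; the sketch follows the question. Using $first as an array expression requires MongoDB 4.4 or newer.

    // Sketch only: first topic of the matching chapter, plus the chapter fields.
    db.subject_staff_database.aggregate([
      { $match: { _id: ObjectId("606b7046a0ccf72222c00c2f") } },
      { $unwind: "$syllabus" },
      { $match: { "syllabus.chapterId": "626b9b94ae6cd2092024f3ee" } },
      { $project: {
          chapterId: "$syllabus.chapterId",
          chapterName: "$syllabus.chapterName",
          firstTopic: { $first: "$syllabus.topicsName" }   // first element of the topics array
      } }
    ])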
null
[]
[ { "code": "", "text": "Hello,I am curious if the community has any feedback here. I am new to mongo, with little to no experience with SQL. So a bit of guidance here would definitely help me tremendously.I have been looking for more information about the best way to store a fair amount of text as a string in a document. Let’s say that a user simply enters a string of text that could be 10 paragraphs in length. So, like a 10KB string.First of all, would this be OK to do in Mongo? I haven’t been able to find a specific example of this long string of text yet, searching online.Also, in terms of design I have let’s say… 25 different pages that a user will read, learn about, and provide feedback on… sort of how a content management system (CMS), might allow a user to save data entry for a blog post. Considering repeat data entry fields like the above, wouldSo perhaps (1) is the better option… which I think would mean one Schema for one page that contains different questions/answers per each page.Please remember I am a noob here! So I am open to any feedback or questions.Thanks.-Will", "username": "Will_M" }, { "code": "{\n _id: ObjectId(\"626f55009a73985c1a5cba0c\"),\n content: \"Hello world...\",\n created_by: \"some_author_id\",\n created_date: ISODate(\"2022-05-02T03:43:38.637Z\"),\n // ...\n}\n{\n content: [\n \"This is page 1 content, etc.\",\n \"This is page 2 content, and theer are more pages.\",\n //... more pages ...\n ]\n}\n{\n content: [\n { page: 1, text: \"This is page 1 content, etc.\", other_info: \"...\" },\n { page: 2, text: \"This is page 2 content, and there are more pages.\", other_info: \"...\" },\n //... more pages ...\n ]\n}\n", "text": "Hello @Will_M, welcome to the forum!You can store your data in the MongoDB database. In general, you can find some relevant information in the MongoDB Manual (example topics are Data Modeling and Introduction).Here is some information related to your questions:> I have been looking for more information about the best way to store a fair amount of text as a string in a document. Let’s say that a user simply enters a string of text that could be 10 paragraphs in length. So, like a 10KB string.Yes, you can store text in the database. A document can store upto 16 Megabytes of data. The text data can be stored as data type “string”.> I was also thinking about a parent reference for this particular document. So the document would have a field of, “created_by”, and the associated “_id” of the user who input this information.You can store your data using similar structure in a document:> …25 different pages that a user will read, learn about, and provide feedback on…You can store a number of pages within the same document. The content can be structured as an array of strings or an array of sub-documents (as shown below).Or,", "username": "Prasad_Saya" } ]
Form Data entry, long strings of text, and design question
2022-05-01T19:52:32.763Z
Form Data entry, long strings of text, and design question
3,547
https://www.mongodb.com/…e_2_1024x512.png
[ "swift" ]
[ { "code": "", "text": "I am in a similar position to @Alex_Tang1 a year ago (responded to by @Chris_Bush) in that I have an existing iOS (swift) app using a local realm and plan to make this multi-user through using Mongo DB Realm-sync. I too have completed Task Tracker successfully and have created a synched realm for my app with developer mode active so that the server side schemas can be created for me automatically through their definition in my mobile app as intended.I am now following the Quick start with Sync document:I have successfully registered users on the server side via SIWA (Sign in with Apple) on a real device, as well as email/password users and anon users (since SIWA is not an available option on the XCode iPhone simulators).I have also successfully opened a synced Realm (asynchronously of course as suggested!) based on a partition strategy using an _partition field which I have defined in each of my objects locally.\nMy expectation was that I would be able to write new objects in my local app to the synched realm and that they would appear on the server side database as schemas and collections im my flashCardGenie database. I therefore started with one object (the Account object) and wrote an instance of that class to the realm. However as soon as I do that the server logs a permissions error (see below)\n\nimage1234×730 72.3 KB\n\nAnd the error advises:ending session with error: user cannot perform additive schema changes without write access: non-breaking schema change: adding schema for Realm table “Account”, schema changes from clients are restricted when developer mode is disabled (ProtocolErrorCode=206)So sync seems to be in place but given that developer mode is definitely on (as advised at the top of the logs screen above) what is the rationale for the schema changes being prevented in this case. Presumably I have not done something I should have done but I am struggling to discover what!", "username": "Chris_Lindsey" }, { "code": "", "text": "Update to this question:After some more analysis, I have discovered a little more about the point at which the permissions error occurs. It is after the Realm.asynchOpen command returns with a success result to open the session, but that the error is not then reliant on any other local iOS processes.I am guessing that because Development mode is on, once the session opens successfully it is at this point that the server side attempts to create the schemas based on the local iOS definitions and the Account schema happens to be the first one it attempts. It then fails with the permissions error, closes the session and the connection immediately after that.The question therefore remains why Development mode has been disabled despite it being on?", "username": "Chris_Lindsey" }, { "code": "user cannot perform additive schema changes without write access{\n \"%%partition\": \"%%user.id\"\n}\n", "text": "Hi Chris,According to the error this appears to be a Sync permissions issue on the partition being opened.\nuser cannot perform additive schema changes without write accessYour write access for sync is currently this:Please try changing the permission to True to troubleshoot if the change will be allowed.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Hi @Mansoor_Omar ,Yes ! That has fixed it. Hurrah!\nIs this an undocumented pre-requisite for development mode to work, or did I miss this in the docs somewhere? If the latter, please can you give me the link so that I can review. 
Many thanks for your help.,Chris", "username": "Chris_Lindsey" }, { "code": "", "text": "Hi Chris,I didn’t find it explicitly mentioned but it would be implied since enabling development mode means a client needs to write to the cloud schema. Generally this would be used in the initial development stage of the app where permissions can be defaulted to true until the schema is in order. Please know that you should not enable development mode in a production environment when needing to apply breaking schema changes - you can use the “partner collection” strategy for doing so.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create synched realm-sync schemas server side using development mode
2022-04-27T10:51:35.816Z
Create synched realm-sync schemas server side using development mode
2,526
null
[]
[ { "code": "mongod", "text": "I’m new to mongoDB. So will try to explain my 2 issue in fewer technical terms as I’m not aware much.which capture the logs in trial environment under logs/mongod.log but not in prod. I’ve compared the settings by using db.adminCommand({getParameter:\"*\"}) and both are same only. And even set the profile level in PROD to 2 as follow db.setProfilingLevel(2). But still no luck. Do you know any other possibilities for logs are not captured?Since we dont have logs couldn’t able to figure out the exact root cause. Atleast if we able to enable the logs, we would come to know why its going down automatically.TIA,\nTamil", "username": "Tamil_Mani" }, { "code": "", "text": "Sounds like the userid running mongod doesn’t have write permissions to the assigned log directory.\nIt appears you’re using a Unix/Linux-like operating system.\nIf so, you should be using the system facilities to start services like mongod with a configuration file.\nYou might consider a review of how you installed MongoDB and perhaps try to make your setup more orthodox.", "username": "Jack_Woehr" }, { "code": "mongodwhoami\npwd\nls -ld /bnsf/mongodb/data/db/\nls -ld /bnsf/mongodb/logs/\nps -aef | grep [m]ongod\nss -tlnp | grep [2]7017\n", "text": "If I understand correctly you start mongod exactly the same way on trial machine and on production machine. And the way you start mongod is with:./mongod --dbpath /bnsf/mongodb/data/db --logpath /bnsf/mongodb/logs/mongod.logDirectories /bnsf/mongodb/data/db/ and /bnsf/mongodb/logs/ exist on both trial and production machine.You can read the log file mongod.log on trial machine but not on production machine. I am pretty sure that if mongod cannot write to /bnsf/mongodb/logs/mongod.log it fails right away. So I will assume that it does because you would not be in the situation whereAfter every some hours the mongod services goes down automatically.That is the first reason why I suspect that you are not looking at the right place for the logs on the production system. The second reason iscapture the logs in trial environment under logs/mongod.logbecause I suspect that you current directory is /bnsf/mongodb/ on the trial machine but something else on the production machine. You really have to look for /bnsf/mongodb/logs/mongod.log rather than logs/mongod.log. Share the output of the following commands from both trial and production, just after you started mongod.", "username": "steevej" }, { "code": "--forkmongodmongodmongod--logpathstdoutmongodmongodsystemdmongod --version", "text": "Welcome to the MongoDB Community @Tamil_Mani !We’re starting mongoDB with following command,\n./mongod --dbpath /bnsf/mongodb/data/db --logpath /bnsf/mongodb/logs/mongod.logSince your command line does not include the --fork option to Start mongod as a daemon, I expect the mongod process will be terminated when the associated terminal/shell session is closed.If logs aren’t being captured, perhaps mongod has been started without including the --logpath option. If this is the case, logs will be output to stdout in the terminal session used to start mongod.As others have suggested, you should be running your production mongod processes using a system service facility like systemd. If MongoDB was installed using one of the official packages, a service definition should already exist. 
See MongoDB Installation Tutorials for details relevant to your operating system.If you need more specific advice please confirm:Regards,\nStennie", "username": "Stennie_X" }, { "code": "--forkmongodmongod", "text": "This makes so much sense.Since your command line does not include the --fork option to Start mongod as a daemon, I expect the mongod process will be terminated when the associated terminal/shell session is closed.", "username": "steevej" } ]
MongoDB is going down again and again
2022-04-29T14:16:14.757Z
MongoDB is going down again and again
3,319
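Both replies in this thread converge on the same suspicion: the production process is not being started the way the trial one is, and the log output may simply be somewhere else (or going to stdout). One low-risk way to check, while the production mongod is still up, is to ask the running process itself; these are standard admin commands, shown here as a sketch:

    // Sketch only: run in mongosh against the production instance while it is up.
    db.adminCommand({ getCmdLineOpts: 1 })   // exact argv/config in use, including systemLog.path if set
    db.adminCommand({ getLog: "global" })    // recent log lines held in memory, even if no file is written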
null
[ "aggregation", "python", "change-streams" ]
[ { "code": "from pymongo import MongoClient\n\nCHANGE_STREAM_DB='mongodb://localhost:27017/mongodb_pipe?retryWrites=true'\nmongo = MongoClient(CHANGE_STREAM_DB)\n\nstudents = [\n { 'name': 'aaa', 'address': 'Seoul', 'course': 'Statistics', 'score': 96 },\n { 'name': 'bbb', 'address': 'Pusan', 'course': 'Biology', 'score': 83 }]\n\nmongo[mongodb_pipe][collection].insert_many(students)\nprint('student inserted.............................')\n\npipeline = [{'$match': {'operationType': 'insert'}}]\nresults = mongo[mongodb_pipe][collection].watch(pipeline=pipeline)\npymongo.errors.OperationFailure: The $changeStream stage is only supported on replica sets, full error: {'ok': 0.0, 'errmsg': 'The $changeStream stage is only supported on replica sets', 'code': 40573, 'codeName': 'Location40573'}\n", "text": "Hello! I try to implement pymongo changeStream using mongodb.db.collection.watch method. My MongoDB is working on standalone mode on windows 11.\nBelows are my simple pymongo codes.But, watch() method throws Exception,Now I have a question. Is it possible that the $changeStream stage can be supported on standalone mode? If possible, how to configure? Or if the $changeStream stage is possible only on the replica sets, kindly inform me how to set the mongo mode to replica sets mode with pymongo codes. Any reply will be thanksful! Best regards", "username": "Joseph_Hwang" }, { "code": "", "text": "You may convert your stand alone to a replica set.See https://www.mongodb.com/docs/manual/tutorial/convert-standalone-to-replica-set/", "username": "steevej" } ]
pymongo.errors.OperationFailure: The $changeStream stage is only supported on replica sets
2022-05-01T07:52:51.670Z
pymongo.errors.OperationFailure: The $changeStream stage is only supported on replica sets
11,992
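The linked tutorial covers the full procedure; for a single-node development machine the short version is to restart mongod with a replica set name and initiate the set once, after which watch() in the pymongo snippet should stop raising this error. The sketch below is a dev convenience only, not a production replica set, and depending on the driver version the connection string may also need directConnection=true or replicaSet=rs0 appended.

    // Sketch only. First restart the server with a replica set name, e.g.:
    //   mongod --dbpath <your-dbpath> --replSet rs0
    // (or replication.replSetName: rs0 in the config file), then once in mongosh:
    rs.initiate()
    rs.status()   // expect one PRIMARY member; change streams are now available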
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Got a project? Tell us about it, and get Free exclusive World hackathon '22 swag!!Go here - https://www.mongodb.com/community/forums/t/about-the-project-teams-category/160634Get posting!", "username": "Shane_McAllister" }, { "code": "", "text": "Looks Promising, Will surely apply to it ", "username": "Sourabh_Sourabh" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Started a project already? Get Free Swag!
2022-04-29T12:54:51.242Z
Started a project already? Get Free Swag!
3,176
null
[]
[ { "code": "db.runCommanddb.$cmdsdb.runCommand({ $query: { count: \"collection\" } })\ndb.$cmd.findOne({ $query: { count: \"collection\" } })\n$cmd{ \n \"ok\" : 0.0, \n \"errmsg\" : \"command not found\", \n \"code\" : 59.0, \n \"codeName\" : \"CommandNotFound\"\n}\ndb.runCommanddb.runCommanddb.$cmds", "text": "Hi there,We’ve got a weird situation where a certain command is exposed by db.runCommand but not via the db.$cmds virtual collection on an Atlas cluster.We are not able to change the way a query is being run because we are using a third party tool which is connecting directly to MongoDB to run the queries. The queries are for example:andBoth queries work fine on MongoDB 5.0.7 installed locally; however, on MongoDB 5.0.7 in Atlas I get for the second query (via $cmd):The command clearly exists, though, because db.runCommand works just fine.This feels like a bug. It works with the same version of MongoDB locally and works when using db.runCommand, but not via db.$cmds.I accept the third party tool probably shouldn’t be using these commands, but it’s annoying that it all works locally but not against the cluster.Please let me know if there’s something I need to enable/fix to make it work.", "username": "Adam_Gilmore" }, { "code": "", "text": "Hello Adam,Welcome to the MongoDB Community Forums.Couple of follow-up questions, please:Thanks,\nSpencer", "username": "Spencer_Brown" }, { "code": "", "text": "Hi Spencer,It’s an M2 shared cluster with a replica set. But we spun up an M10 test cluster just to see if it was any different, and it wasn’t.We’re using the legacy mongo shell, but the third party tool is communication directly with MongoDB (not via a shell of any sort) and sending down a db.$cmd.findOne() command.The third party tool is the CData MongoDB Connector (we’re trying to get DirectQuery in Power BI working, which the MongoDB BI Connector doesn’t support).We’ve seen what CData’s sending down the wire, and it’s failing on the db.$cmd execution, so we tried that directly and found the issue. On our local instance of MongoDB, using db.$cmd.findOne({ $query: … }) works just fine, so it’s odd.", "username": "Adam_Gilmore" }, { "code": "", "text": "I’ve looked into this a little. It seems that “$query” is an artifact in the legacy mongo shell, and has been deprecated since MongoDB 3.2.Can you tell us what MongoDB driver and version is being used by the CData MongoDB Connector?Also, when you say “we’ve seen what CData is sending down the wire”, what is it sending down the wire, exactly?", "username": "Spencer_Brown" } ]
Possible Atlas bug: $cmd collection doesn't expose all commands
2022-04-26T06:15:10.194Z
Possible Atlas bug: $cmd collection doesn't expose all commands
1,495
null
[ "data-api" ]
[ { "code": "{\n \"dataSource\":\"Cluster0\",\n \"database\":\"v1\",\n \"collection\":\"pipedriveDeals\",\n \"filter\":{\n \"reasonLost\":{\n \"$ne\":\"Existing Customer\"\n }\n },\n \"inquiryDate\":{\n \"$lt\":{\n \"$date\":\"2022-04-15T00:00:00Z\"\n },\n \"$gt\":{\n \"$date\":\"2022-04-01T00:00:00Z\"\n }\n },\n \"sourceFirst\":\"Google Paid\"\n}\n", "text": "Hi there, I can’t figure out how to use the $ne filter correctly in the data api. Here is what I’m trying to query. Everything works up until I add the “$ne” part. Can you help? Thank you!", "username": "spencerm" }, { "code": "\n\n\"filter\":{\n \"reasonLost\":{\n \"$ne\":\"Existing Customer\"\n },\n \"inquiryDate\":{\n \"$lt\":{\n \"$date\":\"2022-04-15T00:00:00Z\"\n },\n \"$gt\":{\n \"$date\":\"2022-04-01T00:00:00Z\"\n }\n },\n \"sourceFirst\":\"Google Paid\"\n}\n", "text": "Hi @spencerm ,Your filter is malformed.See that you have closed the filter after $ne for some reason, extra }…If i am not mistaken this should.work:Ty", "username": "Pavel_Duchovny" } ]
$ne (Not Equal) in data api?
2022-04-30T17:47:55.071Z
$ne (Not Equal) in data api?
2,873
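Pavel's corrected filter is only the fragment; as a hedged sketch, a complete Data API find request could look like the following. The endpoint URL and API key are placeholders to be copied from the Atlas Data API settings page, and the filter keeps the thread's field names.

    // Sketch only (Node.js 18+, global fetch). URL and api-key are placeholders.
    async function findDeals() {
      const res = await fetch(
        "https://data.mongodb-api.com/app/<your-app-id>/endpoint/data/v1/action/find",
        {
          method: "POST",
          headers: { "Content-Type": "application/json", "api-key": "<your-api-key>" },
          body: JSON.stringify({
            dataSource: "Cluster0",
            database: "v1",
            collection: "pipedriveDeals",
            filter: {
              reasonLost: { $ne: "Existing Customer" },
              inquiryDate: {
                $gt: { $date: "2022-04-01T00:00:00Z" },
                $lt: { $date: "2022-04-15T00:00:00Z" }
              },
              sourceFirst: "Google Paid"
            }
          })
        }
      )
      return res.json()
    }
    findDeals().then(console.log).catch(console.error)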
null
[]
[ { "code": " Document model\n {\n ....\n collaborators: ObjectId; // e.g. 0x507f1f77bcf86cd799439011\n } \nCollaborators model \n{\n _id: 0x507f1f77bcf86cd799439011; // refererenced by Document model\n collaborators: [\n {userId: 1, role: \"editor\"},\n {userId: 2, role: \"commenter}\n ]\n}\n", "text": "Hello, I am fairly new to MongoDB and I’m looking for advice on designing the schema before I commit to going down this route. I’m developing a collaborative documentation system, where the user creates a document and invites other users to collaborate, much like Google docs.There are two collections. The first one stores documents and the second one stores lists of collaborators. When the user creates a new document, they assign a list of collaborators to this document. In the simplest form, the schema would look something like thisThe Document schema contains some data but it also maintains a reference to a document in the Collaborators collectionCollaborators collection contains documents that contain an array of roles for the collaborators.I will have an API that fetches all those documents where the logged-in user’s userId is in the list of collaborators referenced by the document. Without much experience with writing efficient queries, I think a two-step lookup will work but it won’t be very efficient.Step 1 → Find all the collaborators lists which contain userId, and obtain their _id field\nStep 2 → Find all documents that have collaborators field containing one of the values found in Step 1Is there a more efficient way to construct this query particularly if the users fetch this list frequently?If I should redesign the schema in some way so that the lookup can be efficient, I’d like to know.", "username": "Vineet_Dixit1" }, { "code": "", "text": "I realized using mongodb aggregation framework is what I needed. I was able to use $lookup and $match stage to achieve what I want. Still not sure how expensive this is given that $lookup will perform left join.Here’s an example if anybody wants to look.Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "Vineet_Dixit1" }, { "code": "", "text": "Hi @Vineet_Dixit1 ,Is there a reason you don’t want to embed the collaborators into the main document model? It seems like a 1 to 1 relationship.You can than query just one collection on the embedded list to find all data in one go without lookup.In general both sides will have similar best practices where 2 main rules apply:I would recommend reading:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Almost the same thing I am facing. MongoDB is getting complicated as much as I study it more and more.It is such a sad thing!I am looking my alternatives in Graph Database. Let’s see if they cover something for me.", "username": "Ali_Muqaddas" }, { "code": "", "text": "Hi @Ali_Muqaddas ,The beauty of outlier documents is that you can have many of them chained. So an outlier filled document will require to open an available new document if such does not yet exist.Now reference can be done in many ways , it doesn’t mean you have to use $lookup. You can do 2 queries for example …I suggest to read the embedding consideration antipatterns for better understanding.Get a summary of the six MongoDB Schema Design Anti-Patterns. 
Plus, learn how MongoDB Atlas can help you spot the anti-patterns in your databases.Specifically focus your reading on massive arrays example and how to best reference it…Mongodb is a great graph database and I personally know huge customers with massive graph data that were successful using MongoDB, the learning curve worth the outcome believe me…Ty\nPavel", "username": "Pavel_Duchovny" } ]
Looking for advice on designing schema for performing efficient queries
2021-09-02T21:36:44.923Z
Looking for advice on designing schema for performing efficient queries
2,104
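Since the accepted direction in this thread is to embed the collaborator roles rather than keep them in a second collection, a small sketch of what that looks like in practice may help: a single multikey index on the embedded userId answers the "all documents this user can see" question without any $lookup. Collection and field names are illustrative.

    // Sketch only: collaborators embedded in the document they belong to.
    db.documents.insertOne({
      title: "Design notes",
      created_by: "owner_user_id",
      collaborators: [
        { userId: 1, role: "editor" },
        { userId: 2, role: "commenter" }
      ]
    })

    db.documents.createIndex({ "collaborators.userId": 1 })   // multikey index over the array
    db.documents.find({ "collaborators.userId": 1 })          // everything user 1 collaborates on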
null
[ "aggregation", "queries" ]
[ { "code": "/**\n * This function fetches a job post using the document \"_id\" : ObjectId('xxx...')\n * Looks up alert_preferences in the settings collection that matches the job poster id, ie., userID\n * @param {String} - string of the job _id from the front end\n * @returns {Object} - document matching the document with an _id of ObjectId(jobId)\n */\n\nexports = async function(jobId){\n console.log('job id ', jobId, new BSON.ObjectId(jobId), `and 3 `,new BSON.ObjectId(JSON.parse(JSON.stringify(jobId))))\n const job_id = BSON.ObjectId(jobId)\n const id = new BSON.ObjectId();\n console.log('screw you job id', job_id, 'generated id ', id)\n\n //create pipeline that gets the job together with the alert preferences stored in the employer's settings\n const pipeline = [\n {\n '$match': {\n _id: BSON.ObjectId(JSON.parse(jobId)) \n }\n }, {\n '$lookup': {\n 'from': 'settings', \n 'localField': 'userID', \n 'foreignField': 'userID', \n 'as': 'settings'\n }\n },\n {\n '$unwind': {\n 'path': '$settings'\n }\n }, \n {\n '$project': {\n 'settings.role': 0, \n 'settings.acquisition_channel': 0, \n 'settings.mode': 0, \n 'settings.geocode_address': 0, \n 'settings.created': 0, \n 'settings._id': 0, \n 'settings.complete': 0,\n 'settings.userID':0\n }\n } \n ]\n \n //get a reference to the jobs collection\n const collection = context.services.get(\"mongodb-atlas\").db(\"kinshealth\").collection(\"jobs\")\n \n try{\n \n //get the results and first item of the object\n const results = await collection.aggregate(pipeline)\n \n console.log('collection ', JSON.stringify(results))\n \n }catch(err){\n console.log(`error fetching id : -> ${err}`)\n }\n};\n", "text": "Folks,I want to run an aggregation pipeline on a MongoDB Realm function using object id on the match stage. I have tried using new BSON.ObjectId(jobId), new BSON.ObjectId(JSON.parse(JSON.stringify(jobId))) and each variation without the ‘new’ keyword without any success [NOTE: jobId is the string version].Please review the code below:Thanks for your help!", "username": "Excel_Health_Careers_Training" }, { "code": "{\n '$match': {\n _id: BSON.ObjectId(jobId) \n }\n }\n...\nconst results = await collection.aggregate(pipeline).toArray()\n\n", "text": "Hi @Excel_Health_Careers_Training ,First what is the permission on this function ? Is it application level or system level?Does the user running this pipeline has permission on the matched id?Now regarding syntax it doesn’t need all extra parsing just passing a string to BSON helper, it should be:Try to add to array in the end of aggregation execution.See more examples :Ty", "username": "Pavel_Duchovny" } ]
How to search for documents using document _id in an aggregation pipeline in a Realm function
2022-04-19T17:47:14.035Z
How to search for documents using document _id in an aggregation pipeline in a Realm function
3,935
null
[ "node-js", "mongoose-odm" ]
[ { "code": "child{ child: { _id: ObjectId, name: 'Ricky' }}{ child: { name: 'Ricky' }}", "text": "Hi there,What does MongoDB call the following child value in the following examples:Mongoose calls these a “subdocument” and “nested path” respectively\nhttps://mongoosejs.com/docs/subdocs.html#subdocuments-versus-nested-pathsMongoDB calls these both a “subdocument” and “embedded document”\nhttps://www.mongodb.com/docs/manual/core/data-modeling-introduction/#embedded-data", "username": "Ricky0" }, { "code": "", "text": "Is there anyone that can advise on the naming convention used? Thanks", "username": "Ricky0" }, { "code": "> document = collection.findOne( { _id : 0 })\n< { _id: 0, child: { age: 1 } }\n> typeof document\n< 'object'\n> child = document.child\n< { age: 1 }\n> typeof child\n< 'object'\n> collection.aggregate( [\n { \"$match\" : { \"_id\" : 0 } } ,\n { $set : { \"type_of_child:\" : { \"$type\" : \"$child\"}}}\n] )\n{ _id: 0, child: { age: 1 }, 'type_of_child:': 'object' }\n", "text": "I like to use object and keep document for entities of a collection.Why object?So collection.findOne({}) gives you a document. And document.child gives you an object.And if not confusing enough, in Java,And in JS", "username": "steevej" }, { "code": "", "text": "Welcome to the MongoDB Community @Ricky0 !`“Embedded document” and “subdocument” are used interchangeably, and you’ll also see references like “embedded subdocument” and “nested document”. Preferred naming conventions also depend on context (for example, language drivers or ODMs can be influenced by the author or phrases that are more idiomatic for a language community).As @steevej illustrated, many object-oriented programming languages use Object as a primitive type that is extended with additional properties, and have class definitions as a template for creation of objects. Structured data without an associated class will be typed as an Object; data with a class will ultimately inherit from Object or an equivalent primitive class.The document nomenclature comes from data modelling and thinking about the shape and relationship of data. Embedded documents represent relationships (1:1 or 1:many) that are candidates for normalisation if you are designing a data model optimised for storage efficiency (or tabular data limitations) rather than application efficiency. Optimal MongoDB data models will support how data is commonly used by your application, which includes choices like appropriately modelling relationships with linking versus embedding.I don’t have a strong preference for using “embedded document” vs “subdocument”, but I do think of these as referring to the shape of data as compared to “objects” which may include additional functional hooks and attributes.For example, Mongoose follows an Object-Document Mapper (ODM) pattern where your code interacts with Mongoose objects that represent MongoDB documents (the data actually stored in your MongoDB deployment). Mongoose objects have associated middleware functions and virtual properties that are not part of the database representation. Mongoose also has some specific schema treatment for Subdocuments versus Nested Paths: both are stored identically in MongoDB so this a Mongoose-specific difference.Regards,\nStennie", "username": "Stennie_X" } ]
"Embedded document" or "subdocument"?
2022-04-22T05:27:16.845Z
“Embedded document” or “subdocument”
8,596
null
[]
[ { "code": "", "text": "How can get a connection string to a remote mongo DB server that has been installed via package manager on linux centos server(Digital Ocean)?", "username": "Adhiraj_Kinlekar" }, { "code": "mongodb://user:[email protected]", "text": "something like mongodb://user:[email protected] should work for a default installation if it’s configured to allow remote connections.", "username": "Jack_Woehr" } ]
Connect to Mongo DB on a remote server
2022-04-30T15:40:35.979Z
Connect to Mongo DB on a remote server
2,397
null
[ "aggregation" ]
[ { "code": "{\n _id: ObjectId(\"61f3882cbd56c6d86dad92d9\"),\n charid: ObjectId(\"6140d7ca11c1853b3d42c1e6\"),\n pool: 'Stamina',\n successes: 7\n},\n{\n _id: ObjectId(\"61f576a7392b0461d801254a\"),\n charid: ObjectId(\"6140d7ca11c1853b3d42c1e6\"),\n pool: 'Composure',\n successes: 1\n},\n{\n _id: ObjectId(\"61f57a0fbb252e061c2a8227\"),\n charid: ObjectId(\"61f577e1bb252e061c2a820f\"),\n pool: 'Composure',\n successes: 9\n}\ncharidtraitspool{\n _id: ObjectId(\"6140d7ca11c1853b3d42c1e6\"),\n traits: {\n Composure: 10,\n Stamina: 7\n }\n}\n$mergeObjects$addFields", "text": "I have the following sample documents:I need to group them all by charid and add a traits field that is an object with the sums of the pool values, like so:How can I do this? I’ve looked at $mergeObjects and $addFields, but while either of those seem like they might work, I’m not exactly sure how to wrangle them into what I want.", "username": "Jared_Lindsay1" }, { "code": "", "text": "I just realized my title doesn’t make sense. My bad. Seems I can’t edit. It should be: “How to merge the sum of different documents based on one field value?”", "username": "Jared_Lindsay1" }, { "code": "aggregate(\n[{\"$group\": {\"_id\": \"$pool\", \"count\": {\"$sum\": \"$successes\"}}},\n {\"$group\": {\"_id\": null, \"docs\": {\"$push\": {\"k\": \"$_id\", \"v\": \"$count\"}}}},\n {\"$replaceRoot\": {\"newRoot\": {\"traits\": {\"$arrayToObject\": [\"$docs\"]}}}}])\n", "text": "Hello, welcome : )If you want the sums in 1 document you can try something like thisQueryPlaymongo (put the cursor in the end of each stage to see what it does)This looks like the expected ouput if this is not what you need send if you can the expected output.", "username": "Takis" }, { "code": "{\n _id: ObjectId(\"6140d7ca11c1853b3d42c1e6\"),\n traits: {\n Composure: 10,\n Stamina: 7\n }\n}\n", "text": "How to you get Composure:10 for _id:“6140d7ca11c1853b3d42c1e6” inI can see that you have one pool:Composure,successes:9 and one pool:Composure,successes:1 and 9 + 1 = 10, but none of the _id or charid of Composure:9 matches the _id of the result.", "username": "steevej" }, { "code": "", "text": "@steevej Simple answer is that was a typo in my example docs. I missed fixing the last one when I posted!@Takis This worked, thanks!", "username": "Jared_Lindsay1" } ]
How to turn an array into an object with summed values?
2022-04-29T15:13:42.612Z
How to turn an array into an object with summed values?
3,273
null
[ "transactions" ]
[ { "code": "", "text": "What is Snaphot Isolation in the context of transactions in Mongo DB", "username": "Adhiraj_Kinlekar" }, { "code": "snapshotmajority", "text": "Hi @Adhiraj_Kinlekar,Snapshot isolation refers to transactions seeing a consistent view of data: transactions can read data from a “snapshot” of data committed at the time the transaction starts. Any conflicting updates will cause the transaction to abort.MongoDB transactions support a transaction-level read concern and transaction-level write concern. Clients can set an appropriate level of read & write concern, with the most rigorous being snapshot read concern combined with majority write concern.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X Thank you for your response. If i understood you right, after my transaction creates a snapshot, if there have been writes to the document that i am trying to write to then it will create write conflicts?Eg 1. User A started a tranasaction and a snapshot has been created.\n2. User B writes to a document with _id of 1234\n3. This leads to User A seeing seeing old data for document with _id of 1234(Because of snapshot)\n4. This leads to a write conflict for User A when he tries to write to the document with _id of 1234", "username": "Adhiraj_Kinlekar" } ]
Isolation in transactions
2022-04-15T17:56:16.889Z
Isolation in transactions
5,153
https://www.mongodb.com/…8012a736abae.png
[ "connecting" ]
[ { "code": "", "text": "Hi! my MongoDB is running on a university virtual machine I need to connect with it through an IP address and MongoDB assigns user and password but unable to connect with it. why?\n\nimage793×915 31.1 KB\n\nD4.png)", "username": "Fatima_Rani" }, { "code": "", "text": "The form field is SSH Hostname. However, 141.76.56.139:27017 indicates an host and a port. Being not recognized as an IP address, a DNS request is made but fails because it is not an host name either.I would try without the :27017 part.", "username": "steevej" } ]
MongoDB Virtual machine connection
2022-04-30T06:49:18.972Z
MongoDB Virtual machine connection
2,905
null
[ "aggregation" ]
[ { "code": "", "text": "Any suggestions for how to group in an aggregation stage documents with a date field by week ending on Sunday?For example, today is 2022-04-29, and the next week ending Sunday is 5/1; therefore, week ending Sunday 2022-05-01 represents the dates 2022-04-25 to 2022-05-01.", "username": "xtian_simon" }, { "code": "", "text": "My approach would be toWhat I do not know is what is the last day of an isoWeek or if it can be changed.However according to ISO 8601 - Wikipedia, it looks like that an isoWeek ends on Sunday.", "username": "steevej" } ]
In an aggregation pipeline, how to group records by date?
2022-04-29T22:23:41.145Z
In an aggregation pipeline, how to group records by date?
1,221
https://www.mongodb.com/…020a326cd82a.png
[ "database-tools", "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer Advocate", "text": "We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate[details=Link Details]\nEvent Type: Online\nLink(s):\nLocation\nVideo Conferencing URL[/details]2022-04-26T10:00:00Z", "username": "Shane_McAllister" }, { "code": "", "text": "What a great session - thanks so much @Mark_Smith and @Michael_LynnFor those who missed it, you can re-watch below", "username": "Shane_McAllister" }, { "code": "0:00 - 13:4813:49 - 22:1022:11 - 25:4625:47 - 27:0527:06 - 27:3327:49 - 28:4030:50 - 31:5532:13 - 32: 5532:55 - 33:3033:59 - 34:29gdeltloader34:30 - 36:1136:58 - 37:32gdeltloader --master --download --overwrite --last 20 --filter export38:08 - 39:4941:05 - 41:3041:31 - 42:26 gdelt_field_file.ff42:27 - 42:52mongoimport.sh42:53 - 45:33mongoimport.sh45:34 - 45:54mongoimport.sh45:55 - 48:4248:43 - 51:3051:31gdelt_reshaper.js", "text": "What a blast that was!? Thanks for the invite @Shane_McAllister… awesome job @Mark_Smith… If you’re reading this and want to skip around… I put this handy time index together.0:00 - 13:48 - Introduction and Information about the Hackathon (Mike)\n13:49 - 22:10 - Introduction to GDELT Dataset\n22:11 - 25:46 - Launch a MongoDB Atlas Cluster\n25:47 - 27:05 - Connecting to the MongoDB Atlas Database (Obtaining the connection string)\n27:06 - 27:33 - Username and password information\n27:49 - 28:40 - Network security information - IP Access List Entry\n30:50 - 31:55 - Installing and using python venv\n32:13 - 32: 55 - gdelttools install\n32:55 - 33:30 - pip tip * don’t skip this \n33:59 - 34:29 - Running gdelttools - getting help, obtaining master file with gdeltloader\n34:30 - 36:11 - Master file (77mb)\n36:58 - 37:32 - Looking at exports - last 20 files with gdeltloader --master --download --overwrite --last 20 --filter export\n38:08 - 39:49 - Looking at the data from a GDELT export in TSV format\n41:05 - 41:30 - Cloning the gdelttools repo - to get some additional helpful tools\n41:31 - 42:26 - Looking at the gdelt_field_file.ff - formats and field types for the GDELT data\n42:27 - 42:52 - mongoimport.sh - Script to import GDELT data into mongodb\n42:53 - 45:33 - Modifying mongoimport.sh to remove hard reference to database name\n45:34 - 45:54 - Running mongoimport.sh - importing data into your MongoDB Atlas cluster\n45:55 - 48:42 - Viewing data in Atlas - Browse Collections\n48:43 - 51:30 - Optimizing mongoimport\n51:31 - Transforming the imported data into improved, document-style shapes using gdelt_reshaper.js", "username": "Michael_Lynn" }, { "code": "", "text": "It was so much fun! Looking forward to the upcoming live streams!", "username": "Mark_Smith" }, { "code": "", "text": "SUPER helpful, thank you!", "username": "webchick" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Getting Started with MongoDB & GDELT - Session 2 - US Session
2022-04-25T16:40:33.151Z
Getting Started with MongoDB & GDELT - Session 2 - US Session
4,557
null
[]
[ { "code": "", "text": "Considering this sample scheme:\n{\n_id: “id”,\ntest: [ [“0”, “1”] , [“0”] ]\n}How can I update these elements by their position (index)? I use dot notation test.0 for the first level, I access the first sublist of the list, but how can I access the elements of this sublist?", "username": "BrainTrance_N_A" }, { "code": "db.collection.updateOne(\n { },\n { $set: { \"test.1.0\": \"99\" } }\n)\ntest[ [ \"0\", \"1\" ], [ \"99\" ] ]", "text": "Hello @BrainTrance_N_A, you can try this approach to update the nested array elements by the index:This will update the test array field to: [ [ \"0\", \"1\" ], [ \"99\" ] ].Hope this helps!", "username": "Prasad_Saya" }, { "code": "", "text": "I’ve tried this approach but I get the error \"Cannot create field \" \" in element \" \".\nI want to clarify that “test” is a nested field.\nThe first dot notation works, but not the second one.\n{’$set’: {‘PARENT_TEST.$.test.0’: ‘99’} this doesn’t raise an error,\nbut this {’$set’: {‘PARENT_TEST.$.test.1.0’: “99”} does.Sorry for not posting the actual data scheme, but it’s confusing because it’s embedded in Python code.", "username": "BrainTrance_N_A" }, { "code": "", "text": "I suggest you include an example document if it has different structure than you had posted earlier.Also, refer this in the manual about updating array fields:", "username": "Prasad_Saya" }, { "code": "This is the scheme:\n\n{\n \n \"id\": \"1\",\n \"name\": \"test\",\n \"vms\": \n [{\n \"onoma\": \"nikolas\",\n \"plirwmi\": [['0', '1'], ['0']]\n \n }]\n \n}\n", "text": "“plirwmi” is the field of interest", "username": "BrainTrance_N_A" }, { "code": "$[<identifier>]vms\"onoma\": \"nikolas\"db.collection.updateOne(\n {}, \n { $set: { \"vms.$[e].plirwmi.1.0\": \"99\" } },\n { arrayFilters : [ { \"e.onoma\" : { $eq: \"nikolas\" } } ] }\n)\n\"plirwmi\": [ [ '0', '1' ], [ '0' ] ]\"plirwmi\": [ [ '0', '1' ], [ '99' ] ]\"vms.$[e].plirwmi.1.0\"1.0plirwmi1plirwmi100[ '0' ]", "text": "@BrainTrance_N_A, you can do the update using the array update operator $[<identifier>] (the link I had provided has the documentation about “filtered positional operator”).Note that you need to specify a condition to identify the element of the array field vms to update. I used the \"onoma\": \"nikolas\" as the filter criteria. The query:This modifies \"plirwmi\": [ [ '0', '1' ], [ '0' ] ] to:\n\"plirwmi\": [ [ '0', '1' ], [ '99' ] ]EDIT ADD:In the expression \"vms.$[e].plirwmi.1.0\", 1.0 represents the index positions of the array field plirwmi and the inner array. 1 is the second element of plirwmi array (index 1; array indexes start from 0). And, the 0 index is the inner array’s ([ '0' ] ) first element.", "username": "Prasad_Saya" }, { "code": "", "text": "Although it works for the above scheme, it raises the same error in the original scheme which virtually has the same structure. I suppose I’ll have to check my code, something is different. In the context of the question your answer is an accepted solution. Thanks!", "username": "BrainTrance_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I update an element which is in a sublist of list by index?
2022-04-28T22:42:53.776Z
How can I update an element which is in a sublist of list by index?
5,524
https://www.mongodb.com/…9dd914cf748.jpeg
[ "100daysofcode" ]
[ { "code": "--- Sharding Status --- \n sharding version: {\n \"_id\" : 1,\n \"minCompatibleVersion\" : 5,\n \"currentVersion\" : 6,\n \"clusterId\" : ObjectId(\"621f583d9f07f427acd036e8\")\n }\n shards:\n { \"_id\" : \"shard1\", \"host\" : \"shard1/localhost:27001,localhost:27002,localhost:27003\", \"state\" : 1 }\n active mongoses:\n \"4.0.5\" : 1\n autosplit:\n Currently enabled: yes\n balancer:\n Currently enabled: yes\n Currently running: no\n Failed balancer rounds in last 5 attempts: 0\n Migration Results for the last 24 hours: \n No recent migrations\n databases:\n { \"_id\" : \"config\", \"primary\" : \"config\", \"partitioned\" : true }\n\nmongos\n", "text": "Hi, Finally I made it here to my first post for #100DaysOfCode wherein I am glad to share my 100 days of leaning and this day seems to be my Day_1.I will be sharing a few of learnings throughout the day. Let’s begin this and would love to have more insights about the same!!Beginning to discuss about sharding, Sharding is a method of distributing data as chunks across various servers and clusters using a key known as Shard key which is common to all the chunks of data. It is a way for horizontal scaling.Architecture of Sharded ClusterAfter you understand the architecture:\nyour configuration file should contain the path for config database as:\n\nScreenshot 1943-12-11 at 5.13.32 PM1428×492 40.9 KB\nOnce all is set, you can view the architecture being set up here: Making the right choice to select the shard key in order to make out queries optimised and more efficient. So here are a few points to remember about it:Wanted to share this insight as well where mongodump and mongorestore have been talked about in association with Shardings. Mongorestore shard collectiong by default or not?We can have more discussions for this would like to welcome you to our community forums for more such discussions.Thanks\nAasawariShared on twitter: https://twitter.com/Aasawar61618175", "username": "Aasawari" }, { "code": "db.<collection_name>.aggregate( [ { stage_1}, { stage 2}, { ... }, ...., { stage N} ] )MongoDB Enterprise Cluster0-shard-0:PRIMARY> db.solarSystem.aggregate( [ { $match: { type: \"Terrestrial planet\" } }, { $project: { _id: 0, name: 1, orderFromSun: 1}}])\n{ \"name\" : \"Earth\", \"orderFromSun\" : 3 }\n{ \"name\" : \"Venus\", \"orderFromSun\" : 2 }\n{ \"name\" : \"Mercury\", \"orderFromSun\" : 1 }\n{ \"name\" : \"Mars\", \"orderFromSun\" : 4 }\nMongoDB Enterprise Cluster0-shard-0:PRIMARY>\nMongoDB Enterprise Cluster0-shard-0:PRIMARY> db.icecream_data.aggregate( [ \n{ \n$project: \n { _id: 0, \n max_high: \n { $reduce: \n { input: \"$trends\", \n initialValue: -Infinity, \n in: \n { $cond: \n [ { $gt: [ \"$$this.avg_high_tmp\", \"$$value\"] }, \n \"$$this.avg_high_tmp\",\n \"$$value\" ] } } } } } ] )\n{ \"max_high\" : 87 }", "text": "Day02 of 100DaysOfCode as 100DaysOfMongoDBShare on twitter: https://twitter.com/Aasawar61618175I started with the basics of aggregation concepts and came across some amazing which makes life easier.Theoretically, aggregation is based out of pipeline concept where output of one stage(series of query/operations) becomes input for the second stage of the pipeline and so on…Aggregation have proved to be widely and immensely used in the real time analytics, Big Data, part of Transformation of the ETL process and various other applications etc.The syntax and structure of an aggregation pipelines\ndb.<collection_name>.aggregate( [ { stage_1}, { stage 2}, { ... 
}, ...., { stage N} ] )\nUntitled Diagram (1)741×211 13.3 KB\nBeginning here with aggregation operators which has$match and $projectwhere $match should make it at the beginning of the the aggregation where one can take the advantage of indexes. Below is an example to showcase the usage of $match and $project operators:A few stages known as cursor stage allows you to calculate, process and evaluate data as per your requirements. The aggregation give you a full freedom to perform operations without having to change the schema of the database.\nSharing a query which helped to find a data from a big collection to figure out the avg max temp in 1000 cities.{ \"max_high\" : 87 }I shared a very basic of the aggregation as I understand and I am sure there is more to it too and will keep posting about my learning and challenges while learning the aggregation framework better.Here are a few challenges I faced while making the pipelines:The approach I followed to overcome this are:Please feel free to add your challenges in learning aggregation and any comments and reviews would be appreciated.Thanks\nAasawari", "username": "Aasawari" }, { "code": "$project$project$set$unset$set/$unset$project$set$unset$project{\n _id: ObjectId(\"6044faa70b2c21f8705d8954\"),\n employee_name: \"Mrs. Jane A. Doe\",\n employee_num: \"1234567890123456\",\n id_validity: \"2023-08-31T23:59:59.736Z\",\n joining_date: \"2021-01-13T09:32:07.000Z\",\n reportedTo: \"Mr. X\", \n amex_card: \"Credit\",\n joined: True\n}\n[\n {\"$set\": {\n \"joining_Date\": {\"$dateFromString\": {\"dateString\": \"$id_validity\"}},\n \"amex_card\": \"CREDIT\", \n }},\n \n {\"$unset\": [\n \"_id\",\n ]},\n]\n{\n transaction_info: { \n date: ISODate(\"2021-01-13T09:32:07.000Z\")\n },\n joined: true\n}\n[ {\"$project\": { \n date: \"$joining_date\",\n \"status\": \n {\"$cond\": \n {\"if\": \"$joined\", \"then\": \"JOINED\", \"else\": \"YET TO JOIN\"}}, \n \"_id\": 0,\n }},\n ]\n", "text": "Day03 of MongoDB as 100DaysOfMongoDBExtending the Aggregations learnings here:The $project stage of aggregation is confusing yet very use pipeline step while we wish to query data in a specific manner. 
On one hand where $project allows us to figure which fields to include and exclude it however seems to be extremely confusing and inflexible at the same time.Later in the 4.2 version of MongoDB, $set and $unset version was introduced which lead me into the ambiguity to use $set/$unset or $project operation.The $set and $unset can be used when you require a minimal change in the output documents where as $project is recommended to be used when you need more changes in the output document than the input fields.Consider the following dataset to understand the usage of $set and $unsetConsider the change is required to convert the field id_validity from test to type Date.Now consider an example where you need information in the format:the best way to use aggregation here would be:", "username": "Aasawari" }, { "code": "> package com.springtest.demo.repo;\n> \n> import com.springtest.demo.models.User;\n> import org.springframework.data.mongodb.repository.MongoRepository;\n> import org.springframework.stereotype.Repository;\n> \n> @Repository\n> public interface UserRepo extends MongoRepository<User, String> {\n> }\n> \n> ackage com.springtest.demo.controller;\n> \n> import com.springtest.demo.models.User;\n> import com.springtest.demo.repo.UserRepo;\n> import org.springframework.beans.factory.annotation.Autowired;\n> import org.springframework.web.bind.annotation.GetMapping;\n> import org.springframework.web.bind.annotation.RestController;\n> \n> import java.util.List;\n> \n> @RestController\n> public class UserController {\n> \n> private final UserRepo userRepo;\n> \n> @Autowired\n> public UserController(UserRepo userRepo){\n> this.userRepo = userRepo;\n> }\n> \n> @GetMapping(\"/users\")\n> public List<User> getUser(){\n> return userRepo.findAll();\n> }\n> \n> }\n com.springtest.demo.DemoApplication : Starting DemoApplication using Java 17.0.2 on Aasawaris-MacBook-Pro.local with PID 72205 (/Users/aasawari.sahasrabuddhe/Downloads/demo/target/classes started by aasawari.sahasrabuddhe in /Users/aasawari.sahasrabuddhe/Downloads/demo)\n> 2022-03-08 15:17:05.679 INFO 72205 --- [ restartedMain] com.springtest.demo.DemoApplication : No active profile set, falling back to default profiles: default\n> 2022-03-08 15:17:05.735 INFO 72205 --- [ restartedMain] .e.DevToolsPropertyDefaultsPostProcessor : Devtools property defaults active! Set 'spring.devtools.add-properties' to 'false' to disable\n> 2022-03-08 15:17:05.735 INFO 72205 --- [ restartedMain] .e.DevToolsPropertyDefaultsPostProcessor : For additional web related logging consider setting the 'logging.level.web' property to 'DEBUG'\n> 2022-03-08 15:17:06.185 INFO 72205 --- [ restartedMain] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.\n> 2022-03-08 15:17:06.218 INFO 72205 --- [ restartedMain] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 30 ms. 
Found 1 MongoDB repository interfaces.\n> 2022-03-08 15:17:06.588 INFO 72205 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8093 (http)\n> 2022-03-08 15:17:06.595 INFO 72205 --- [ restartedMain] o.apache.catalina.core.StandardService : Starting service [Tomcat]\n> 2022-03-08 15:17:06.595 INFO 72205 --- [ restartedMain] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/10.0.16]\n> 2022-03-08 15:17:06.632 INFO 72205 --- [ restartedMain] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext\n> 2022-03-08 15:17:06.633 INFO 72205 --- [ restartedMain] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 897 ms\n> 2022-03-08 15:17:06.861 INFO 72205 --- [ restartedMain] org.mongodb.driver.cluster : Cluster created with settings {hosts=[127.0.0.1:27017], srvHost=cluster0.jrhrm.mongodb.net, mode=MULTIPLE, requiredClusterType=REPLICA_SET, serverSelectionTimeout='30000 ms', requiredReplicaSetName='atlas-292w65-shard-0'}\n> 2022-03-08 15:17:06.978 INFO 72205 --- [hrm.mongodb.net] org.mongodb.driver.cluster : Adding discovered server cluster0-shard-00-02.jrhrm.mongodb.net:27017 to client view of cluster\n> 2022-03-08 15:17:07.011 INFO 72205 --- [hrm.mongodb.net] org.mongodb.driver.cluster : Adding discovered server cluster0-shard-00-00.jrhrm.mongodb.net:27017 to client view of cluster\n> 2022-03-08 15:17:07.012 INFO 72205 --- [hrm.mongodb.net] org.mongodb.driver.cluster : Adding discovered server cluster0-shard-00-01.jrhrm.mongodb.net:27017 to client view of cluster\n> 2022-03-08 15:17:07.174 INFO 72205 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:5, serverValue:26636}] to cluster0-shard-00-01.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:6, serverValue:24652}] to cluster0-shard-00-02.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:24652}] to cluster0-shard-00-02.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:3, serverValue:25610}] to cluster0-shard-00-00.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:2, serverValue:25610}] to cluster0-shard-00-00.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:4, serverValue:26520}] to cluster0-shard-00-01.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=cluster0-shard-00-02.jrhrm.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=183920181, setName='atlas-292w65-shard-0', canonicalAddress=cluster0-shard-00-02.jrhrm.mongodb.net:27017, 
hosts=[cluster0-shard-00-02.jrhrm.mongodb.net:27017, cluster0-shard-00-00.jrhrm.mongodb.net:27017, cluster0-shard-00-01.jrhrm.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-01.jrhrm.mongodb.net:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='AP_SOUTH_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=7, topologyVersion=TopologyVersion{processId=62263dc244611ef569db1e07, counter=3}, lastWriteDate=Tue Mar 08 15:17:07 IST 2022, lastUpdateTimeNanos=196121665163299}\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=cluster0-shard-00-01.jrhrm.mongodb.net:27017, type=REPLICA_SET_PRIMARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=206546822, setName='atlas-292w65-shard-0', canonicalAddress=cluster0-shard-00-01.jrhrm.mongodb.net:27017, hosts=[cluster0-shard-00-02.jrhrm.mongodb.net:27017, cluster0-shard-00-00.jrhrm.mongodb.net:27017, cluster0-shard-00-01.jrhrm.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-01.jrhrm.mongodb.net:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='AP_SOUTH_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=7fffffff0000000000000089, setVersion=7, topologyVersion=TopologyVersion{processId=62263c95fc481516f34bd6f1, counter=6}, lastWriteDate=Tue Mar 08 15:17:07 IST 2022, lastUpdateTimeNanos=196121665168063}\n> 2022-03-08 15:17:07.519 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=cluster0-shard-00-00.jrhrm.mongodb.net:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=13, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=261042452, setName='atlas-292w65-shard-0', canonicalAddress=cluster0-shard-00-00.jrhrm.mongodb.net:27017, hosts=[cluster0-shard-00-02.jrhrm.mongodb.net:27017, cluster0-shard-00-00.jrhrm.mongodb.net:27017, cluster0-shard-00-01.jrhrm.mongodb.net:27017], passives=[], arbiters=[], primary='cluster0-shard-00-01.jrhrm.mongodb.net:27017', tagSet=TagSet{[Tag{name='nodeType', value='ELECTABLE'}, Tag{name='provider', value='AWS'}, Tag{name='region', value='AP_SOUTH_1'}, Tag{name='workloadType', value='OPERATIONAL'}]}, electionId=null, setVersion=7, topologyVersion=TopologyVersion{processId=62263b5055c8d646e1c81237, counter=4}, lastWriteDate=Tue Mar 08 15:17:07 IST 2022, lastUpdateTimeNanos=196121665201937}\n> 2022-03-08 15:17:07.521 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.cluster : Setting max election id to 7fffffff0000000000000089 from replica set primary cluster0-shard-00-01.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.521 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.cluster : Setting max set version to 7 from replica set primary cluster0-shard-00-01.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.521 INFO 72205 --- [ngodb.net:27017] org.mongodb.driver.cluster : Discovered replica set primary cluster0-shard-00-01.jrhrm.mongodb.net:27017\n> 2022-03-08 15:17:07.557 INFO 72205 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8093 (http) with context path ''\n> 
2022-03-08 15:17:07.565 INFO 72205 --- [ restartedMain] com.springtest.demo.DemoApplication : Started DemoApplication in 2.143 seconds (JVM running for 2.532)\n> 2022-03-08 15:17:07.829 INFO 72205 --- [ restartedMain] ConditionEvaluationReportLoggingListener :\n\nUser[id: '60rf4533789h7800gh5h', Aasawari', Sahasrabuddhe']\nUser[Id: '689bjhdg67u83be3e65r', 'Aniket', 'Sharma']\n", "text": "Day04 of #100DaysofCode as #100DaysOfMongoDBA little away from Aggregations today, will be sharing an example for a simple spring boot MVC architecture example, which tries to connect to MongoDB Atlas using a certificate(say X.509 Certificate)The MongoDB Atlas provides the scope for deploying and managing the database to build global applications on major cloud providers.\nHence, creating a cluster over say AWS, and it will allow you to perform CRUD operations on the local application.Here is a small example wherein I tried to insert data using a simple MVC based architecture which takes first and last name as input parameters and can be queried along the same line.The steps used here to create a connection are:The URI by which you are trying to connect to the MongoDB Atlas cluster should be in correct format as per the format.\nIf you look at the following Post where the ssl connection was not getting established.\nIf you look at the URI Format documentation for reference.Try to create a MVC project with Repository class, Model Class and Controller Class in order to do CRUD operations, Insert data and Query Data through the spring boot application:Here is the sample code for the same:Controller Class:Attaching following logs from the connection and data being inserted:Important points to remember here areIf you have any further questions , please do share your feedback.Thanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "", "text": "Day05 of 100DaysOfMongoDB as #100DaysOfCodeAs we know that Replication in MongoDB is asynchronous hence follows a master-slave replication approach.This means the reads from secondary may return data that doesn’t reflect the state of data on the primaryThe complete basis of replication depends on the oplog which is collection that keeps a record of all the operations on the Primary dataset.\nIn the latest version of MongoDB oplog, the entry is deleted only when the oplog has reached specified size or entry is older than configures amount of time.The Primary node of the replica set maintains an oplog which is then copied by the secondary nodes’ oplog and this proves to be advantageous to recover when the Primary nodes steps down and initiates an election. The secondary node with the most recent data will be elected as the primary for the replica set.Consider a failover condition when the Primary node goes does and an election is initiated. 
Discussion a few factors and conditions which effect the election process.If you have any questions with reference to the replica sets or architecture please feel free to reply below.Thanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "setDefaultRWConcern{w: majority}if ((#arbiters > 0) AND (#non-arbiters <= majority(#voting nodes)) then {w:1} else {w:majority}", "text": "Day06 as #100DaysofMongoDB as #100daysofcodeExtending the understanding of Replication, will be discussion few interesting insights from Write Concern in replica set.In simple words, write concern determines the type of acknowledgement received for write operation to a standalone mongod, replica sets and sharded clusters. The Primary node of the replica set keeps a track of how well the secondaries are in sync with the oplog when compared to primary.\nA write concern can specify a number of nodes to wait for, or majority .If a write operation does not have a write concern, the default configuration is chosen. This default configuration is either cluster-wide write concern or implicit default write concern.The cluster wide is set using setDefaultRWConcern command. and when this is not set, the default will be set and this default write concern is known as implicit default write concern. Mostly it is {w: majority}.The calculation for the default write concern is given by:\nif ((#arbiters > 0) AND (#non-arbiters <= majority(#voting nodes)) then {w:1} else {w:majority}\nThe rule stand out to be an exception in case of sharded clusters where the write concern will always be a majority.The Oplog maintenance:The secondary nodes keeps itself in sync with their sync source. This is done via OplogFetcher… This OplogFetcher maintains a buffer known as OplogBuffer. A separate thread, runs an OplogBatcher which pulls from the buffer to the fetcher and creates a new batch to be applied.Will be posting some new understanding about Replication and Topology Coordinators… Stay Tuned.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Day07 as #100DaysofMongoDB as #100daysofcodeWe discussed a few concepts on Write concern in my previous post, for this post, we will begin with read concern.\nA Read concern is specified to determine the level of consistency in any replica set. There are 5 types of Reads concern which are common:For multi-document transaction, which write concern will be preferred and why?\nBy default the read concern is set to local. Showing similar kind of logic as local, the available read concern is not preferred.The Local read concern waits to refresh the routing table when it identifies a stale data and hence maintains the consistency, however, the available read concern, does not provide consistency as it does not wait for cache to be refreshed. 
This also means that the available read concern would has the potential to return orphan documents.The selection of read concern will entirely depend on the architecture of the replica set.Will be continuing more on replication in the upcoming post.Thanks\nAasawari", "username": "Aasawari" }, { "code": "startTransaction: truelsidtxnNumbertxnStateapplyOpstxtStatekCommitedtxnStatekAbortedWithoutPrepareTwo Phase Commit ProtocoltransactionCorrdinatorTransactionCoordinatorprepareTransactionabortTransactioncommitTransactionprepareTransaction", "text": "Day08 as #100DaysofMongoDB as #100daysofcodeStarting on Transactions today.\nThe most essential property which the database transactions should manage are ACID properties and hence committing, aborting or rolling back will be done for its operations as well as to the corresponding data changes.The transactions are started on the server using startTransaction: true parameter. Along with these, there are several other parameters which are set.In order to maintain the synchronisation and consistency among the data, the transactions acquire locks in a mutual exclusive manner, however the locks are release when preparing for transaction and acquired again while committing or aborting.To add operations to the transactions, the user can run commands on the same session. This will be stored in memory and once the write is done, the transaction is informed accordingly.Committing or Aborting a transaction.\nAfter the transaction is committed, the applyOps command is applied to the oplog entry and the commit is done on storage transaction on the OperationContext. Once the commit is called, all associated transaction will commit in the storage engine. and the txtState is updated to kCommited.To abort the transaction, the txnState is changed to kAbortedWithoutPrepare and log the transaction metrics and reset the in memory state.ACID Properties for Sharded Clusters.In order to main the ACID property, MongoDB used the concept of Two Phase Commit Protocol, where the concept of transaction coordinator comes into picture.\nThe transactionCorrdinator is supposed to coordinate between the participating shards and eventually commit or abort the transaction. When the TransactionCoordinator is told to commit a transaction, it must first make sure that all participating shards successfully prepare the transaction before telling them to commit the transaction.Before the coordinator send the commit command, all the majority participating shards must commit the prepareTransaction command, failing to do so, the abortTransaction will be issued.Failovers with TransactionsIf a primary with transactions with a mutual exclusive lock, steps down , the node will abort all the unprepared until it can acquire the RSTL. If a secondary has a prepared transaction when it steps up, it will have to re-acquire all the locks for the prepared transaction.Recovery for failed transactions. : The transaction must ensure the recovery for any failover transaction. This recovery will be done using the algorithm on the oplog entries. 
If the oplog entry contain commitTransaction, the transaction will be immediately committed else will look for prepareTransaction and get operations and prepare and commit it.Thanks\nAasawari", "username": "Aasawari" }, { "code": "Parallel Batch Writer Mode locklastAppliedParallel Batch Writer ModeReplication State Transition LockPRIMARYSECONDARYSECONDARYPRIMARYSECONDARYROLLBACKROLLBACKSECONDARYSECONDARYRECOVERINGParallel Batch Writer Mode lockReplication State Transition Lock", "text": "Day09 as #100DaysofMongoDB as #100daysofcodeStarting the discussion for Concurrency control in ReplicationIn Day07 we discussed about having mutual exclusive lock on the transaction in order to maintain the atomicity of the transactions. There is yet another lock known as Parallel Batch Writer Mode lock which helps in maintaining the concurrency of the operations while a secondary is applying oplog entries.\nThe lastApplied is a timestamp value set when the secondary has completed the batch of writing to the oplog.\nSince secondary node of the replica set writes the oplog in batched, the Parallel Batch Writer Mode lock is acquired and released when the complete batch is written in the entry.There is yet another lock whose purpose is to maintain the concurrency among the transactions, known as, Replication State Transition Lock which is acquired when the nodes are in transition state.\nIt is acquired in exclusive mode for the following replication state transitions: PRIMARY to SECONDARY (step down), SECONDARY to PRIMARY (step up), SECONDARY to ROLLBACK (rollback), ROLLBACK to SECONDARY , and SECONDARY to RECOVERING.Note: The locks must be acquired before the global lock is acquired.\nThe order recommended is:\nAcquire Parallel Batch Writer Mode lock prior to Replication State Transition Lock in shared and exclusive mode respectively. Keep the global lock to be acquired later.Let me know if you have questions related to the same.Thanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "replSetStepDownreplSetRequestVoteslocal.replset.electionforcewaitUntil", "text": "Day10 as #100DaysofMongoDB as #100daysofcodeTaking the replication concept ahead, will be discussing a few of Election concepts in the Replica set.There are a few conditions when an election is initiated in a replica set:Lets understand the Election concept from a voter as well as candidate perspective.When an election is initiated, the candidate runs replSetRequestVotes towards all the replica sets. 
The candidate first votes for itself and then sends replSetRequestVotes towards all the members of the replica sets.\nNow when the message is received by the voter, it evaluated the candidate on various parameters:When a voter casts its vote, it records in local.replset.election collection.The Stepping down of a primary can be conditional or unconditional.\nIn Conditional Stepdown, ifforce is true and now > waitUntil deadline\nAt least one of the updated secondaries, runs for a candidate.For Unconditional StepDownAs long as primary is connected to all the secondaries, it will remain as Primary and will not be eligible for step down.In both the conditions, RSTL lock is acquired and and allow secondaries to catch up without new writes coming in.If you have any thoughts on Replication Election, Will be more than happy to discuss.Thanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "containerised application.apiVersion: v1\ndata:\n password: cGFzc3dvcmQxMjM= #Encoded with Base64 (password)\n username: YWRtaW51c2Vy #Encoded with Base64 (password)\nkind: Secret\nmetadata:\n creationTimestamp: null\n name: mongo-creds\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n labels:\n app: mongo\n name: mongo\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: mongo\n strategy: {}\n template:\n metadata:\n labels:\n app: mongo\n spec:\n containers:\n - image: mongo\n name: mongo\n args: [\"--dbpath\",\"/data/db\"]\n livenessProbe:\n exec:\n command:\n - mongo\n - --disableImplicitSessions\n - --eval\n - \"db.adminCommand('ping')\"\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n successThreshold: 1\n failureThreshold: 6\n readinessProbe:\n exec:\n command:\n - mongo\n - --disableImplicitSessions\n - --eval\n - \"db.adminCommand('ping')\"\n initialDelaySeconds: 30\n periodSeconds: 10\n timeoutSeconds: 5\n successThreshold: 1\n failureThreshold: 6\n env:\n - name: MONGO_INITDB_ROOT_USERNAME\n valueFrom:\n secretKeyRef:\n name: mongo-creds\n key: username\n - name: MONGO_INITDB_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mongo-creds\n key: password\n volumeMounts:\n - name: \"mongo-data-dir\"\n mountPath: \"/data/db\"\n volumes:\n - name: \"mongo-data-dir\"\n persistentVolumeClaim:\n claimName: \"mongo-data\"\nkind: Deployment.replicas : 1livenessProbePersistent VolumesPersistent volume Chainsaasawari.sahasrabuddhe@Aasawaris-MacBook-Pro mongodb % kubectl get all\nNAME READY STATUS RESTARTS AGE\npod/mongo-cd755c96f-67sc8 1/1 Running 0 6d\npod/mongo-client-6c7bc768c4-bp5hq 1/1 Running 0 6d\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d7h\nservice/mongo-nodeport-svc NodePort 10.101.236.24 <none> 27017:32000/TCP 6d\n\nNAME READY UP-TO-DATE AVAILABLE AGE\ndeployment.apps/mongo 1/1 1 1 6d\ndeployment.apps/mongo-client 1/1 1 1 6d\n\nNAME DESIRED CURRENT READY AGE\nreplicaset.apps/mongo-cd755c96f 1 1 1 6d\nreplicaset.apps/mongo-client-6c7bc768c4 1 1 1 6d\n\naasawari.sahasrabuddhe@Aasawaris-MacBook-Pro mongodb %\n\nstatefulset.yaml: \napiVersion: apps/v1\nkind: StatefulSet. #Defining the MongoDB deployment as statefulset\nmetadata:\n name: mongo\nspec:\n selector:\n matchLabels:\n app: mongo\n serviceName: \"mongo\"\n replicas: 4. #Three copies of MongoDB pods will be deployed.\n template:\n metadata:\n labels:\n app: mongo\n spec:\n terminationGracePeriodSeconds: 10. 
#When pod is deleted, it will take this time to terminate\n containers:\n - name: mongo\n image: mongo\n command:\n - mongod\n - \"--bind_ip_all\". #Allowing MongoDB to allow all IPs\n - \"--replSet\"\n - rs0\n ports:\n - containerPort: 27017. # Defining port\n volumeMounts:\n - name: mongo-volume\n mountPath: /data/db\n volumeClaimTemplates:\n - metadata:\n name: mongo-volume\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 1Gi\napiVersion: v1\nkind: Service\nmetadata:\n name: mongo\n labels:\n app: mongo\nspec:\n ports:\n - name: mongo\n port: 27017\n targetPort: 27017\n clusterIP: None\n selector:\n app: mongo\nkubectl create -f .\n\naasawari.sahasrabuddhe@Aasawaris-MacBook-Pro mongodb % kubectl get all\nNAME READY STATUS RESTARTS AGE\npod/mongo-0 1/1 Running 0 2d5h\npod/mongo-1 1/1 Running 0 2d5h\npod/mongo-2 1/1 Running 0 2d5h\npod/mongo-3 1/1 Running 0 6h20m\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/mongo ClusterIP None <none> 27017/TCP 2d5h\n\n\nNAME READY AGE\nstatefulset.apps/mongo 4/4 2d5h\naasawari.sahasrabuddhe@Aasawaris-MacBook-Pro mongodb %\nkubectl exec -it <pod-name> mongo or mongosh", "text": "Day11 as #100DaysofMongoDB as #100daysofcodeStarting a pattern today for the next 10 days n I hope I keep up with it.Calling it Day11 as First Day of using MongoDB in a kubernetes deployment.Kubernetes is an orchestration framework which allows you to scale, automate and manage containerised application.\nBy Containerised, means, to bundle and package the application in one and run on a cloud or different deployment.This is fir for modern applications which are based on Microservice architectures, have frequent deployments or statefulsets.Let’s see a sample deployment where MongoDB is deployed in Kubernetes with a creation of replica sets.The statefulsets and deployments are two ways through which the MongoDB can be deployed in a kubernetes environment.\nThe difference occurs when:\n1. Deployment model: This is preferred when a stateless application is preferred. All the pods of the application use the same volume specified in the deployment.yaml files. The pod names are deployed with a different name every-time the pod is deleted.\n2. Statefulset model : This deployment is preferred when each pod of the deployment used its own independent set and volume. The pods here are deployed with the same name every-time the pods are deleted.Each deployment of the application has config-map and secrets which contains non-sensitive data and sensitive data respectively.Deploying MongoDB as DeploymentSpecifying a few key value pairs defined above:kind: Deployment.. defines the applications is of type deployment.\nreplicas : 1 specifics that the application will have only one replica pod.\nlivenessProbe: To make sure that the pods are not stuck and are redeployed after the timeouts mentioned.\nPersistent Volumes: is a storage provisioned by the admin\nPersistent volume Chains: Works for a dynamic volume provisioning enabled systemOnce the deployments and service files created withkubectl create -f deployment.yaml/service.yaml filesOnce the pods are up and running the deployment will look like the following:In-order to login to the databasekubectl exec deployment/mongo-client -it – /bin/bash\nmongo --host mongo-nodeport-svc --port 27017 -u adminuser -p password123Deploying MongoDB as StatefulsetCreating ststefulset.yaml and service.yaml files deploying MongoDB as deployment files:service.yaml file:After kubectl create -f . 
The deployment will look like the following:You can exec to any of the 4 MongoDB commands using\nkubectl exec -it <pod-name> mongo or mongoshand explore the databases and perform various operations.Will be discussing MongoDB deployment for the next 10 days in this sections of #100DaysOfMongoDBThanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "kubectl exec -it <pod-name> mongo/mongosh/bash do the following steps:\n1. rs.initiate() \n2. rs.add(\"mongo-1.mongo\")\n3. rs.add(\"mongo-2.mongo\")\n4. rs.status()\nrs0:PRIMARY> rs.status()\n{\n\t\"set\" : \"rs0\",\n\t\"date\" : ISODate(\"2022-03-22T13:48:33.664Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(23),\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 3,\n\t\"writeMajorityCount\" : 3,\n\t\"votingMembersCount\" : 4,\n\t\"writableVotingMembersCount\" : 4,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1647956913, 1),\n\t\t\t\"t\" : NumberLong(23)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1647956913, 1),\n\t\t\t\"t\" : NumberLong(23)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2022-03-22T13:48:33.284Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2022-03-22T13:48:33.284Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1647952973, 1),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"electionTimeout\",\n\t\t\"lastElectionDate\" : ISODate(\"2022-03-22T13:48:13.230Z\"),\n\t\t\"electionTerm\" : NumberLong(23),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1647953024, 1),\n\t\t\t\"t\" : NumberLong(21)\n\t\t},\n\t\t\"numVotesNeeded\" : 3,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"numCatchUpOps\" : NumberLong(0),\n\t\t\"newTermStartDate\" : ISODate(\"2022-03-22T13:48:13.268Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"mongo-0.mongo:27017\",\n\t\t\t\"health\" : 0,\n\t\t\t\"state\" : 6,\n\t\t\t\"stateStr\" : \"(not reachable/healthy)\",\n\t\t\t\"uptime\" : 0,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\t\"t\" : NumberLong(-1)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\t\"t\" : NumberLong(-1)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2022-03-22T13:48:33.285Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"1970-01-01T00:00:00Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"authenticated\" : false,\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : -1,\n\t\t\t\"configTerm\" : -1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"mongo-1.mongo:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 39,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1647956913, 1),\n\t\t\t\t\"t\" : 
NumberLong(23)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-03-22T13:48:33Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2022-03-22T13:48:33.284Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2022-03-22T13:48:33.284Z\"),\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1647956893, 1),\n\t\t\t\"electionDate\" : ISODate(\"2022-03-22T13:48:13Z\"),\n\t\t\t\"configVersion\" : 25,\n\t\t\t\"configTerm\" : 23,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"mongo-2.mongo:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 34,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1647953024, 1),\n\t\t\t\t\"t\" : NumberLong(21)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1647953024, 1),\n\t\t\t\t\"t\" : NumberLong(21)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-03-22T12:43:44Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2022-03-22T12:43:44Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2022-03-22T12:43:44.038Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2022-03-22T12:43:44.038Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2022-03-22T13:48:33.286Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2022-03-22T13:48:31.802Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"mongo-1.mongo:27017\",\n\t\t\t\"syncSourceId\" : 4,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 25,\n\t\t\t\"configTerm\" : 23\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"name\" : \"mongo-3.mongo:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 30,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1647953024, 1),\n\t\t\t\t\"t\" : NumberLong(21)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1647953024, 1),\n\t\t\t\t\"t\" : NumberLong(21)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-03-22T12:43:44Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2022-03-22T12:43:44Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2022-03-22T12:43:44.038Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2022-03-22T12:43:44.038Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2022-03-22T13:48:33.285Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2022-03-22T13:48:32.307Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"mongo-1.mongo:27017\",\n\t\t\t\"syncSourceId\" : 4,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 25,\n\t\t\t\"configTerm\" : 23\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1647956913, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1647956913, 1)\n}\nrs0:PRIMARY>\n\naasawari.sahasrabuddhe@Aasawaris-MacBook-Pro kubernetes-mongodb % kubectl run mongo --rm -it --image mongo -- sh\n\nIf you don't see a command prompt, try pressing enter.\n\n#\n# mongo mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo\nMongoDB shell version v5.0.6\nconnecting to: mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017,mongo-2.mongo:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"847d3424-dc49-4242-8443-effdbf732682\") }\nMongoDB server version: 5.0.6\n================\nWarning: the \"mongo\" shell has been 
superseded by \"mongosh\",\nwhich delivers improved usability and compatibility.The \"mongo\" shell has been deprecated and will be removed in\nan upcoming release.\nFor installation instructions, see\nhttps://docs.mongodb.com/mongodb-shell/install/\n================\nWelcome to the MongoDB shell.\nFor interactive help, type \"help\".\nFor more comprehensive documentation, see\n\thttps://docs.mongodb.com/\nQuestions? Try the MongoDB Developer Community Forums\n\thttps://community.mongodb.com\n---\nThe server generated these startup warnings when booting:\n 2022-03-22T13:47:54.264+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n 2022-03-22T13:47:54.797+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n 2022-03-22T13:47:54.797+00:00: You are running this process as the root user, which is not recommended\n---\n---\n Enable MongoDB's free cloud-based monitoring service, which will then receive and display\n metrics about your deployment (disk utilization, CPU, operation statistics, etc).\n\n The monitoring data will be available on a MongoDB website with a unique URL accessible to you\n and anyone you share the URL with. MongoDB may use this information to make product\n improvements and to suggest MongoDB products and deployment options to you.\n\n To enable free monitoring, run the following command: db.enableFreeMonitoring()\n To permanently disable this reminder, run the following command: db.disableFreeMonitoring()\n---\n\nrs0:PRIMARY> cfg.members\n[\n\t{\n\t\t\"_id\" : 0,\n\t\t\"host\" : \"mongo-0.mongo:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 1,\n\t\t\"tags\" : {\n\n\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 1,\n\t\t\"host\" : \"mongo-1.mongo:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 1,\n\t\t\"tags\" : {\n\n\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 2,\n\t\t\"host\" : \"mongo-2.mongo:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 1,\n\t\t\"tags\" : {\n\n\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 3,\n\t\t\"host\" : \"mongo-3.mongo:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 1,\n\t\t\"tags\" : {\n\n\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t}\n]\nrs0:PRIMARY>\nAccessing the MongoDb outside the clusterkubectl expose pod mongo-0 27017 --target 27017 --type LoadBalancer\nkubectl expose pod mongo-1 27017 --target 27017 --type LoadBalancer`\nkubectl expose pod mongo-3 27017 --target 27017 --type LoadBalancer`\nkubectl get svc \nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d7h\nservice/mongo ClusterIP None <none> 27017/TCP 3d5h\nservice/mongo-0 LoadBalancer 10.100.162.172 172.42.43.200 27017:31701/TCP 3h45m\nservice/mongo-1 LoadBalancer 10.109.181.110 172.42.43.208 27017:31111/TCP 3h44m\nservice/mongo-2 LoadBalancer 10.104.155.2 172.42.43.289 27017:30028/TCP 3h44m\nservice/mongo-nodeport-svc NodePort 10.101.236.24 <none> 27017:32000/TCP 7d\nmongo 
mongodb://172.42.43.200:31701,172.42.43.208:31111,172.42.43.289:30028", "text": "Day12 as #100DaysofMongoDB as #100daysofcodeAfter Day11, where we saw how to deploy a database in kubernetes environment using both deployment as well as statefulsets model of deployments.Let’s understand how replica sets are created in kubernetes environment and how to access these replica sets.\nThe replica sets in the kubernetes environment can be accessed both inside and outside of the cluster.Creating a replica setOnce all the pods are up and running, do\nkubectl exec -it <pod-name> mongo/mongosh/bashThe output will look like:This setup consists of one Primary and two secondaries.Now to access these replica sets.Access the replica sets within the same cluster.Accessing the MongoDb outside the clusterIn this case, the pods are needed to be exposed as LoadBalancers and then one can access the replica set outside the kubernetes clusterAfter the three services are up, theAnd then you can access the three replica sets, you can access it using the following urlmongo mongodb://172.42.43.200:31701,172.42.43.208:31111,172.42.43.289:30028Hence these are the two ways where you can connect to the MongoDB replica set and explore more.Let me know if you have suggestions and views on the above topics.Thanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "\nversion: '3'\n\nservices:\n\n shard1svr1:\n container_name: shard1svr1\n image: mongo\n command: mongod --shardsvr --replSet shard1rs --port 27017 --dbpath /data/db\n ports:\n - 50001:27017\n volumes:\n - shard1svr1:/data/db\n\n shard1svr2:\n container_name: shard1svr2\n image: mongo\n command: mongod --shardsvr --replSet shard1rs --port 27017 --dbpath /data/db\n ports:\n - 50002:27017\n volumes:\n - shard1svr2:/data/db\n\n shard1svr3:\n container_name: shard1svr3\n image: mongo\n command: mongod --shardsvr --replSet shard1rs --port 27017 --dbpath /data/db\n ports:\n - 50003:27017\n volumes:\n - shard1svr3:/data/db\n\nvolumes:\n shard1svr1: {}\n shard1svr2: {}\n shard1svr3: {}\nmongod --configsvr --replSet cfgrs --port 27017 --dbpath /data/dbversion: '3'\n\nservices:\n\n mongos:\n container_name: mongos\n image: mongo\n command: mongos --configdb cfgrs/192.168.29.4:40001,192.168.29.4:40002,192.168.29.4:40003 --bind_ip 0.0.0.0 --port 27017\n ports:\n - 60000:27017\ndocker-compose -f configserver.yaml up -d\ndocker-compose -f shardserver.yaml up -d\ndocker-compose -f mongod.yaml up -d\n\naasawari.sahasrabuddhe@M-C02DV42LML85 sharding % docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\na89092ce4415 mongo \"docker-entrypoint.s…\" 12 minutes ago Up 12 minutes 0.0.0.0:60000->27017/tcp mongos\nfea4644bc0e0 mongo \"docker-entrypoint.s…\" 15 minutes ago Up 14 minutes 0.0.0.0:50001->27017/tcp shard1svr1\nda83eb637ae4 mongo \"docker-entrypoint.s…\" 15 minutes ago Up 14 minutes 0.0.0.0:50003->27017/tcp shard1svr3\n4c7f0f0ecf86 mongo \"docker-entrypoint.s…\" 15 minutes ago Up 14 minutes 0.0.0.0:50002->27017/tcp shard1svr2\n39418f832da9 mongo \"docker-entrypoint.s…\" 32 minutes ago Up 31 minutes 0.0.0.0:40003->27017/tcp cfgsvr3\n9297b1bc78ab mongo \"docker-entrypoint.s…\" 32 minutes ago Up 31 minutes 0.0.0.0:40001->27017/tcp cfgsvr1\nb1804155e587 mongo \"docker-entrypoint.s…\" 32 minutes ago Up 31 minutes 0.0.0.0:40002->27017/tcp cfgsvr2\n08bcb43c7ff8 gcr.io/k8s-minikube/kicbase:v0.0.30 \"/usr/local/bin/entr…\" 9 days ago Up 7 hours 127.0.0.1:50295->22/tcp, 127.0.0.1:50296->2376/tcp, 127.0.0.1:50298->5000/tcp, 
127.0.0.1:50299->8443/tcp, 127.0.0.1:50297->32443/tcp minikube\naasawari.sahasrabuddhe@M-C02DV42LML85 sharding %\nmongo mongodb://192.168.29.4:40001192.168.29.4mongos> sh.addShard(\"shard1rs/192.168.29.4:50001,192.168.29.4:50002,192.168.29.4:50003\")\n\nmongos> sh.status()\n--- Sharding Status ---\n sharding version: {\n \t\"_id\" : 1,\n \t\"minCompatibleVersion\" : 5,\n \t\"currentVersion\" : 6,\n \t\"clusterId\" : ObjectId(\"623c84a6a12cb9bd172d459a\")\n }\n shards:\n { \"_id\" : \"shard1rs\", \"host\" : \"shard1rs/192.168.29.4:50001,192.168.29.4:50002,192.168.29.4:50003\", \"state\" : 1, \"topologyTime\" : Timestamp(1648133766, 1) }\n active mongoses:\n \"5.0.6\" : 1\n autosplit:\n Currently enabled: yes\n balancer:\n Currently enabled: yes\n Currently running: no\n Failed balancer rounds in last 5 attempts: 0\n Migration results for the last 24 hours:\n No recent migrations\n databases:\n { \"_id\" : \"config\", \"primary\" : \"config\", \"partitioned\" : true }\n config.system.sessions\n shard key: { \"_id\" : 1 }\n unique: false\n balancing: true\n chunks:\n shard1rs\t1024\n", "text": "Day13 as #100DaysofMongoDB as #100daysofcodeTaking the Replication on kubernetes environment ahead, let us learn about how to do sharding in a kubernetes Environment.A MongoDB sharded cluster consists of:Here, we will be using docker containers to design a sharded structure with one sharded cluster, which is basically a replica set, one config server and one mongos.\nHence, 7 docker containers will be needed.\nAll the members of shard and config map and mongos will be running on different ports.Deploying a sharded replica set using docker containers.The docker compose file will look like:The command here, mongod --shardsvr means the mongod is started as sharded cluster.The config server compose file would look similar with the command as\nmongod --configsvr --replSet cfgrs --port 27017 --dbpath /data/db.\ndefining the mongod as config server.\nLastly the yaml file for mongosAfter all the docker compose have been created doThe containers will look like:After replica sets have been initiated in config server and sharded server with the respective port numbers asmongo mongodb://192.168.29.4:40001 where 192.168.29.4 is the system IP addressLogin to the mongos to add the shards to the mongos.asFurther let us understand how to add data and access the data through these sharded clusters.If you have any questions related to docker containers or deploying replica sets using docker containers, feel free to reach out.Thanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "aasawari.sahasrabuddhe@M-C02DV42LML85 sharding % docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nce46540cdd3d mongo \"docker-entrypoint.s…\" 3 hours ago Up 3 hours 0.0.0.0:50006->27017/tcp shard2svr3\n54fcb0fa8aff mongo \"docker-entrypoint.s…\" 3 hours ago Up 3 hours 0.0.0.0:50004->27017/tcp shard2svr1\n017c62afcacb mongo \"docker-entrypoint.s…\" 3 hours ago Up 3 hours 0.0.0.0:50005->27017/tcp shard2svr2\na89092ce4415 mongo \"docker-entrypoint.s…\" 24 hours ago Up 16 minutes 0.0.0.0:60000->27017/tcp mongos\nfea4644bc0e0 mongo \"docker-entrypoint.s…\" 24 hours ago Up 16 minutes 0.0.0.0:50001->27017/tcp shard1svr1\nda83eb637ae4 mongo \"docker-entrypoint.s…\" 24 hours ago Up 16 minutes 0.0.0.0:50003->27017/tcp shard1svr3\n4c7f0f0ecf86 mongo \"docker-entrypoint.s…\" 24 hours ago Up 16 minutes 0.0.0.0:50002->27017/tcp shard1svr2\n39418f832da9 mongo \"docker-entrypoint.s…\" 25 hours 
ago Up 5 minutes 0.0.0.0:40003->27017/tcp cfgsvr3\n9297b1bc78ab mongo \"docker-entrypoint.s…\" 25 hours ago Up 5 minutes 0.0.0.0:40001->27017/tcp cfgsvr1\nb1804155e587 mongo \"docker-entrypoint.s…\" 25 hours ago Up 5 minutes 0.0.0.0:40002->27017/tcp cfgsvr2\nmongo mongodb:192.168.29.4:60000\nmongos> use shard\nswitched to db shard\nmongos> show collections\nmovies\nmovies2\nmongos> db.movies2.getShardDistribution()\nCollection shard.movies2 is not sharded. $Since the collection is not yet sharded. \n\nmongos> sh.enableSharding(\"shard\")\n{\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1648221933, 2),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1648221933, 2)\n}\n\n\nmongos> db.movies2.getShardDistribution()\nCollection shard.movies2 is not sharded.\nmongos> sh.enableSharding(\"shard\")\n{\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1648223161, 7),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1648223161, 7)\n}\nmongos> sh.shardCollection(\"shard.movies2\", { \"title\" : \"hashed\" } )\n{\n\t\"collectionsharded\" : \"shard.movies2\",\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1648223203, 6),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1648223202, 17)\n}\nmongos> db.movies2.getShardDistribution()\n\nShard shard2rs at shard2rs/192.168.29.4:50004,192.168.29.4:50005,192.168.29.4:50006\n data : 0B docs : 0 chunks : 2\n estimated data per chunk : 0B\n estimated docs per chunk : 0\n\nShard shard1rs at shard1rs/192.168.29.4:50001,192.168.29.4:50002,192.168.29.4:50003\n data : 0B docs : 0 chunks : 2\n estimated data per chunk : 0B\n estimated docs per chunk : 0\n\nTotals\n data : 0B docs : 0 chunks : 4\n Shard shard2rs contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n Shard shard1rs contains 0% data, 0% docs in cluster, avg obj size on shard : 0B\n\nmongos>\nmoviesdb.movies.getShardDistribution()Index not created for the collection moviesdb.movies.createIndex( {\"title\": \"hashed\"})", "text": "Starting Day 14 in continuatin to Day 13, by adding a new sharded collection to the same deployement.Create a similar yaml for shard2 using different name and different port from all the mentioned ports earlier and thenshows 2 sharded deployment with a config server and mongos.use mongo mongodb:192.168.29.4:60000 to get inside mongos and add the shrad2.Lets create sharded collection with sample data.Here, in the above example it shows, how to enable sharding for a database and hence the collection will be sharded.\nAlso, the sharding is enabled when the data is not present in the collection.Let us understand how to create sharding where the data set is already present.Create random generated data in collection moviesnow, when you do : db.movies.getShardDistribution()\nyou get an error sayingIndex not created for the collection moviesFuther when index is created on the collection using\ndb.movies.createIndex( {\"title\": \"hashed\"})\nand then apply sharding to the collection, the sharding can be performed in this case.Let me know if you have any question/views or doubts related to the topic, will be happy to help in all ways we can.Thanks\nAasawariShare 
on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "kubeletsingress", "text": "Let us understand what kubernetes and containerisation implies.Docker:\nTo understand the concept of docker, let’s understand the tern Containers.\nA container is way to package an application with all the required dependencies and configurations need to the the application and hence makes the applicationThese containers are stored on public or private spaces known as registries where the image/application can be pulled and use in different environments.Docker vs Virtual MachineA entire system or a machine is composed of\n-----------------\n| Application |\n-----------------\n-----------------\n| OS…Kernel |\n-----------------\n-----------------\n| Hardware |\n-----------------\nBoth docker and VM being virtualisation tools, the Docker comprises of virtualising the Application Layer where as VM virtualising the Application with the OS Kernel layer.Also, the size of docker images are smaller and are more efficient faster as compared to VM.Kubernetes:\nAn open source orchestration tools which allows you to deploy containers.\nUseful in the microservice architecture applications where the application is composed of various services deployed over a cloud infrastructure.\nThe basic characteristics would be:A kubernetes works on the master-slave model where there is at least one Master node which manager the slave nodes knows and kubelets which helps in running application processes.A Master Node runs several kubernetes process which are necessary to run the application. Eg API Server, Controller Manager, Schedular etc.Service and Ingress in KubernetesThe nodes and pods communicate using the service.\nEach pod communicates using its own IP address but when a pod crash a new IP address is assigned which makes it difficult while making connections to different pods.A service gives a static IP address assigned to each pod which makes it easier.To make the pod or a service in a microservice architecture accessible in a browser. the Node Ip address with a default port is used.To make the pods communicate using the IP address, the request goes through the ingress.Config Map and SecretsNow we know that the pods communicate using the service, the URLs and other configurations necessary are written inside the config maps which makes the mapping easier.Therefore every time a service is changed , only the URLs needs to be modified.\nHowever, the config maps cannot store confidential information, and hence are stored inside the Secrets. The username and passwords to access them are stored inside the secret.yaml files in a base64 encoding mechanism.All the above mentioned theory has been implemented inside the\nIf you have any questions, feel free to post on the forums.Thanks\nAasawari.Share on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "mongotmongodallowDiskUse", "text": "Starting off after a long break from all the relocation to a completely new place. Settling up everything by yourself is always a challenge but the most interesting part of adulting.\nFrom buying the furniture to keeping everything in place for the entire day and not making a mess around is always a challenge. 
Growing up with the adulting everyday and hence has become a part of my learning now.Meanwhile I have been working about a stand alone Spring boot application with MongoDB as a database layer in a Kubernetes Environment.\nStuck at some issues here, while I try to resolve this, I started learning about Atlas Search and various aspects it provides us with.Starting with a new 10 days of MongoDB product provided my MongoDB: Atlas SearchMongoDB Atlas search available on MongoDB 4.2+ versions, provides you with the functionality for indexing and querying the data in the Atlas Cluster.\nIt uses a mongot process with Apache Lucene along with mongod process running different atlas servers.The following documentation here will help you load a sample data and introduce you towards the Atlas Search feature provided my MongoDB.You can perform several queries by loading the sample data and perform various queries over the same data.There are few insights which are restricted in M0, M2 and M5 clusters:You can read more about the restrictions from Atlas M0 (Free Cluster), M2, and M5 LimitationsFeel free is to add questions and suggestions related to Atlas Search.Thanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "", "text": "I recently came across something very interesting important when you come across MongoDB which is Wired Tiger.\nI found it to very effective part of discussion topic and hence this has made it here to my 100DaysofLearning MongoDBWired Tiger data engine is a high performance, scalable, transactional, production quality, open source, NoSQL data engine.Wired Tiger comes with the benefit ofThe question arrises is why do we need a new storage engine?\nThe answer to is as:Getting the data on every query from the disk, is always very costly. hence making use of Hazard pointers in wiredTiger was an effective solution.The WiredTiger architecture remains independent of the way you access the data and also the driver for the query language, only the way data is being stored in the changes.\nThe WiredTiger stores the data in tow formats:WiredTiger supports both row-oriented storage (where all columns of a row are stored together), and column-oriented storage (where groups of columns are stored in separate files), resulting in more efficient memory useLet’s understand more about how the data is stored in WiredTiger and different aspects which it provides in the next upcoming posts.Thanks\nAasawari", "username": "Aasawari" }, { "code": "block.h", "text": "In the previous post, we had discussion on how and why wiredTiger was introduced.\nLet us understand how WiredTiger stored the data.WiredTiger uses the concept of Log Structured Merge Trees, where the update gets stored first into small files and then merged with the larger files in the background so they maintain the read latency of B trees.WiredTiger has B tree structure internally, which maximise the data transfer rate in each I/O and minimise the miss rate.WiredTiger supports static encoding with a configurable Huffman engine, which typically reduces the amount of information maintained in memory by 20-50%.WiredTiger Architecture Guide\nimage536×910 53.9 KB\nBlock Manager: This manages the reading and writing in disk blocks. This is also responsible for the data compression and encryption.Cache: This represents various data structures which makes up in memory B trees. 
The memory exists only temporary during the I/O operation whereas the data is transferred from the disk.Checkpoint: A point of recovery in case of a crash or an unexpected shutdown.\nThis comprise of 5 stages:\n3.1.The prepare stage\n3.2 The data files checkout\n3.3 The history store checkpoint\n3.4 Flushing the files to disk\n3.5 The metadata checkoutColumn store: This is a 64 bit unsigned integer, which has the record Id.Data file format: The format of the WiredTiger data file is given by structures in block.h , defining the overall structure of the file and its blocks.Will include more of the architecture details in the upcoming posts and add more interesting facts about the wired-tiger.Thanks\nAasawari", "username": "Aasawari" }, { "code": "WiredTigerLog.*WiredTigerTmpLogWiredTigerPrepLog", "text": "Extending the concepts of WiredTiger in here:DataHandle and Btrees\nThe Datahandle also known as dhandle represents the B trees. This are created when a collection is created and destroyed when no longer in use.\nThis contains the following information:The lifecycle of dhandle comprise of three stages:For the creating of dhandle, two counter values are created:dhandle cache sweep: The dhandle that have not been in use for a longer period are removed.sweep server:\nIf the session_ref counts to 0, comparison of configured times with current time is calculated and are marked then as dead and the resources are released.\nHowever. if the value is not 0 and the dhandle is not referenced by any session, the servers removes from the global list and frees the remaining resources.Eviction:\nThis is the process of removing old data from the cache. It uses a dedicated set of eviction threads that are tasked. This cannot be triggered by APIs.File System/ Operating System Interface:\nAn abstraction layer allowing main line WiredTiger code to make call to interface.\nHistory Store:\nThis has old version of records and used to service long running transactions.\nLogging:\nThis are write-ahead-log when configured. The sole purpose is to retain the changes made after the last checkpoints and helps in recovery in case of crash.\nThere are three log related files created:MetaData:\nThis is a key value pair with key as the uri string and value as the configuration string which contains other key values pairs describing data encoding for uri.RawStores:\nThis are B trees without the record idSchema:\nThis defines the format of application dataSnapshots:\nThey are implemented by storing set of transactions id committed before transaction started.Rollback:\nThis has modifications which are stable according to stable timestamp and recovered checkpoint snapshots.\nThis scans all tables except the metadata.\nThis involves three phrases:The prerequisites for a rollback is that there should NOT be any transaction activity happening in the WiredTiger.\nThe checks performed includes:This describes the WiredTiger from the architectural perspective.Thanks\nAasawariShare on twitter: https://twitter.com/Aasawari_24", "username": "Aasawari" }, { "code": "", "text": "Discussing on how WiredTiger uses Transactions and how are WiredTiger helpful in Transactions.WiredTiger uses Transactions within the API to start and stop the transactions within a session. 
If the user doesn’t not explicitly enables the transaction, it gets enabled for the operations.The Lifecycle of TransactionsIt can be explained through the following diagram:\nimage1133×580 40.2 KB\nSource: WiredTiger: Transactions (Architecture Guide)The transaction gets committed automatically when when explicitly enabled else, when enabled via WT_SESSION::begin_transaction , it will be active until committed or rolled back.Like any other database, the WiredTiger enforce the ACID properties in the Transactions.Along with the ACID properties, WiredTiger provides a mechanism of Timestamps.\nThese are sequence numbers associated with each operations. Users can assign a read timestamp at the beginning of the transaction. And updates smaller or equal to read timestamp would be visible.\nUsers can use any 64 bit unsigned integer as logical timestamps.A stable timestamp is the minimum timestamp that a new operation can commit it.Along with the timestamp, transaction also provides the Visibility feature.\nThe operation is visible only when both transaction-id and timestamp are visible.In order to read the key, WiredTiger traverses until a visible update ifs found. WiredTiger are organise as singly linked list with latest transaction at head, known as the update chain. If unavailable, WiredTiger will search the history store to check if there is a version to the reader.WiredTiger also has prepared Transactions which work under snapshot isolations.\nBy introducing the prepared stage, a two-phase distributed transaction algorithm can rely on the prepared state to reach consensus among all the nodes for committing.The WiredTiger also has prepared timestamp and durable timestamp which prevents the slow transaction with stable global timestamp.If you have any questions and suggestions related to wired tiger, feel free to post on the community platforms.Thanks\nAasawari", "username": "Aasawari" } ]
The Journey of 100DaysofCode aka 100DaysofMongoDB (@Aasawari_24)
2022-03-02T11:55:22.014Z
The Journey of 100DaysofCode aka 100DaysofMongoDB (@Aasawari_24)
11,098
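The Day 13/14 posts in the thread above enable sharding from the mongo shell (sh.enableSharding, sh.shardCollection, getShardDistribution). For readers following along from a driver instead, a minimal sketch of the same steps in Python with pymongo might look like this; the mongos port (60000) and the shard/movies names come from the thread, while the hostname and everything else is assumed.

```python
from pymongo import MongoClient, HASHED

# Connect to the mongos router (port 60000 as in the thread; host is assumed).
client = MongoClient("mongodb://localhost:60000")

# Equivalent of sh.enableSharding("shard").
client.admin.command("enableSharding", "shard")

db = client["shard"]

# A collection that already holds data needs an index on the shard key
# before it can be sharded -- the same point the Day 14 post makes.
db.movies.create_index([("title", HASHED)])

# Equivalent of sh.shardCollection("shard.movies", { "title": "hashed" }).
client.admin.command("shardCollection", "shard.movies", key={"title": "hashed"})

# Rough stand-in for db.movies.getShardDistribution(): collStats run through
# mongos reports whether the collection is sharded and which shards hold data.
stats = db.command("collStats", "movies")
print(stats.get("sharded"), list(stats.get("shards", {}).keys()))
```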
null
[]
[ { "code": "export const chatroomSchema = {\n name: 'chatroom',\n properties: {\n _id: 'objectId',\n _partition: 'string?',\n date: 'date',\n last_message: 'string',\n title: 'string',\n },\n primaryKey: '_id',\n};\n\nexport const userSchema = {\n name: 'user',\n properties: {\n _id: 'objectId',\n _partition: 'string?',\n chatrooms: 'chatroom[]',\n friends: 'user[]',\n name: 'string',\n photoUrl: 'string',\n username: 'string',\n },\n primaryKey: '_id',\n};\n\nexport const messageSchema = {\n name: 'message',\n properties: {\n _id: 'objectId',\n _partition: 'string?',\n content: 'string',\n chatroom_id: 'string',\n date: 'date',\n sender: 'user?',\n },\n primaryKey: '_id',\n};\n\nexport const memberSchema = {\n name: 'member',\n properties: {\n _id: 'objectId?',\n _partition: 'string?',\n chatroom_id: 'string?',\n participants: 'user[]',\n },\n primaryKey: '_id',\n};\n", "text": "I wanted to enquire as to how to partion my chat data into realms in mongoDB realm. Users can be part of of any number of chatrooms.Right now my chat database has _partition as the partitionValue, and the following collections-Thank You In Advance. Stay Safe!", "username": "Aryaman_Shrey" }, { "code": "", "text": "@Aryaman_Shrey So chat applications are pretty complex but one approach I have seen is to have a partition per chat room. Generally you only want to hold about 10 partitions open at any one time for a mobile app. I’m not sure how many chatRooms you are expecting a user to join but what you can do to get around this limitation is to have a per-user notification partition which would contain a list of potential chatRooms the user could join as well as a Notification object which would say that you have a chat message pending from a chatRoom the user had joined which was closed on the mobile at that time since realm only syncs when the realm reference is active. You would use Realm Database trigger to detect that a new message was pending on the backend and insert a new Notification object into the per-user notification realm which contained a partitionKey value for the chatRoom where the message was pending.", "username": "Ian_Ward" }, { "code": "", "text": "Sorry for the intrusion on this thread, but I am also writing a chat program for MongoDB Realm. I agree with @Ian_Ward that each chat should be given their realm. This is both more scalable, but more importantly more secure as only members of the chat can access the chat Realm. The question I have is one of scalability on writes. I have read somewhere that MongoDB Realm (or maybe the older Realm Cloud) could only support up to 30 concurrent users writing to the same Realm at the same time. To mitigate this scalability issue, we were thinking of have each user write the chat data to their private realm, and having a backend server function copy it to the chat’s shared realm, which would only be read-only to the various members of the chat. I would welcome any feedback.", "username": "Richard_Krueger" }, { "code": "", "text": "@Richard_Krueger That’s correct - the old Realm system would start to slow down around 30 writers per realm because it would tax the conflict resolution algorithm. 
We don’t know what the performance will be with the new MongoDB Realm because we haven’t done granular performance testing yet but that is on deck this quarter and we expect it to be much higher in the new system since we are now leveraging MongoDB for storage.Your proposal of moving chats to different realms with a server side function is a fair workaround.", "username": "Ian_Ward" }, { "code": "", "text": "But does not multiple realms create a whole number of other complexities like querying across realms?", "username": "Anthony_CJ" }, { "code": "", "text": "@Anthony_CJ I don’t think that you would want to query across multiple realms. Usually you already know which realm the object is in before the query.", "username": "Richard_Krueger" }, { "code": "", "text": "I would instead take a look at Flexible Sync as this makes it much easier to implement a chat style application without the copying of data between partitions with triggers that Partition-based Sync required. I would take a look at @Andrew_Morgan 's post on how he built RChat here -Learn how to use Realm Flexible Sync to create an iOS chat appThis will actually get much simpler as we are looking to release queries on arrays hopefully next week!", "username": "Ian_Ward" } ]
MongoDB Realm partition strategy for a chat app
2020-06-22T23:32:56.759Z
MongoDB Realm partition strategy for a chat app
3,610
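The partition-per-chat-room strategy discussed above is a Realm Sync concept, but it can help to see what the synced documents look like on the Atlas side. The sketch below is not the Realm SDK; it simply uses pymongo against the collections whose schemas are quoted in the thread, with an assumed partition-key format and a placeholder connection string.

```python
from datetime import datetime, timezone
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb+srv://user:[email protected]/")  # placeholder URI
db = client["chat"]

# One partition value per chat room; the exact format of the partition string
# is an assumption for illustration.
room_partition = "chatroom=609c1d0a2f8b9c3d4e5f6a7b"

# A message written into that room's partition.
db.message.insert_one({
    "_partition": room_partition,
    "content": "hello!",
    "chatroom_id": "609c1d0a2f8b9c3d4e5f6a7b",
    "date": datetime.now(timezone.utc),
})

# Reading only that partition: Realm clients opening the same partition would
# sync this subset rather than the whole collection.
recent = list(
    db.message.find({"_partition": room_partition})
    .sort("date", DESCENDING)
    .limit(50)
)
print(len(recent))
```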
null
[]
[ { "code": "", "text": "I would like to enable SSL on my Atlas cluster, and I looked through the below docs, however, may I know where is the mongod.cfg file located for Atlas Clusters, and what are the step by step procedure, to trust and enable SSL certificate in Cluster and the application client? Thank you!Security — MongoDB Manual - Checked all the links under TSL/SSL section, still need help in SSL configuration.", "username": "Vijaya_Karthikeyan" }, { "code": "", "text": "Is this Atlas cluster or your own mongod installed on local host?\nAtlas clusters have ssl/TLS enabled by default\nWhat is your cluster type?\nPaid or free\nFor Atlas cluster you can modify only some params from Atlas interface only\nProvide more details on your cluster/setup", "username": "Ramachandra_Tummala" }, { "code": "", "text": "To reinforce this: MongoDB Atlas always requires TLS (SSL) network encryption over the wire: this is fully managed and configured for you so you don’t need to worry about it.By the way TLS is basically the new/next-gen way of referring to what we all previously referred to as SSL", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to setup SSL enabled Atlas Cluster(For Mongo - 4.0 and Mongo 5.0), and where is the mongod.cfg file located to add the certificate details for Atlas clusters?
2022-04-25T04:17:48.995Z
How to setup SSL enabled Atlas Cluster(For Mongo - 4.0 and Mongo 5.0), and where is the mongod.cfg file located to add the certificate details for Atlas clusters?
2,846
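As the replies above note, Atlas always enables TLS on the wire and there is no mongod.cfg to edit for an Atlas cluster; from the client side a standard SRV connection string is enough. A minimal sketch, with placeholder hostname and credentials:

```python
from pymongo import MongoClient

# mongodb+srv URIs imply TLS, so the driver negotiates an encrypted connection
# without any extra flags or certificate files on the client.
client = MongoClient(
    "mongodb+srv://appUser:[email protected]/?retryWrites=true&w=majority"
)

# A simple round trip to confirm the TLS connection and credentials work.
print(client.admin.command("ping"))
```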
null
[ "queries", "java", "containers", "licensing" ]
[ { "code": "", "text": "Hello community, thanks for a such a awesome product for storing no-sql data. My query is , we are using mongodb to store our no SQL data and all the binaries like frontend(react), back end(Java) and database like mongodb, redis are containierized into a single docker image and published for commercial use. Will I need to pay licensing cost to mongodb if I use like this?", "username": "Manikumar_Nune" }, { "code": "", "text": "Welcome to the MongoDB Community @Manikumar_Nune!Please see the Server Side Public License FAQ for questions around licensing.I think a relevant FAQ would be:There will be no impact to anyone in the community building an application using MongoDB Community Server unless it is a publicly available MongoDB as a service. The copyleft condition of Section 13 of the SSPL does not apply to companies building other applications or a MongoDB as a service offering for internal-only use.Since you are planning to redistribute, I recommend investigating the MongoDB Partner Program which includes support & benefits for OEM and other partner integrations.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello stennie, thank you for the response. Still I am confused with the internal use only. I want to commercial my solution. My solution is to build a low code app which will use mongodb as a database to store my data. And I am shipping the solution as a single docker image.", "username": "Manikumar_Nune" }, { "code": "", "text": "Your commercial solution isn’t a MongoDB as a service solution. You are not selling MongoDB clusters to your clients like MongoDB Atlas is doing. So you are good !", "username": "MaBeuLux88" }, { "code": "", "text": "My solution is to build a low code app which will use mongodb as a database to store my data.Hi @Manikumar_Nune,Per the SSPL FAQ, copyleft conditions of Section 13 of the SSPL only apply if you are building a publicly available MongoDB-as-a-service offering.To reiterate @MaBeuLux88’s comment: using MongoDB Community Server as the database for an application that users install on-premises is not providing a public MongoDB-as-a-service offering.The Partner Program I mentioned may be of interest if there is a company or commercial entity that would like potential technical, sales, or marketing benefits to support your solution. It may be too early in your planning or development process to consider applying for a program like this, but it is an option to keep in mind for future.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I have a query regarding new SSPL LICENSE
2022-04-29T02:16:50.026Z
I have a query regarding new SSPL LICENSE
3,214
null
[ "node-js", "graphql" ]
[ { "code": "import { createServer } from \"@graphql-yoga/node\";\nimport { MongoClient } from \"mongodb\";\nimport typeDefs from \"./graphql/schemas\";\nimport resolvers from \"./graphql/resolvers\";\nconst process = require(\"dotenv\").config();\n\nconst uri = process.env.MONGODB_URI;\nconst client = new MongoClient(uri);\n\nclient.connect((err) => console.log(\"db is connected\"));\nconst db = client.db(\"connect\");\nconst users = db.collection(\"users\");\nconst chats = db.collection(\"chats\");\nconst messages = db.collection(\"messages\");\n\nconst server = createServer({\n schema: {\n typeDefs,\n resolvers,\n },\n context: {\n chats,\n users,\n messages,\n },\n});\nserver.start();\n.thenclient.connect()", "text": "Hey guys i’m facing a problem connecting mongoDB database and graphql server\ntogether and mongodb is taking so long to connect.here’s my server.ts codei’m using chats, users and messages collections as context and those are derived after db is connected but db connection is taking much longer and the issue is my server is started before the db is connectedi wanna know why mongodb is taking so long to connect or is there any other way to connect db and start the server. i tried chaining .then on client.connect() but mongodb is taking about a minute to connect. any help would be appreciated.", "username": "The_Aman" }, { "code": "", "text": "This is very odd: is it possible that your ISP or network connection has serious challenges reaching the cloud provider region you’ve deployed your Atlas cluster in? Is the issue ongoing? I recommend opening a support case.By the way what framework are you using for the GraphQL server? are you sure the delay is between that server and the Atlas cluster?", "username": "Andrew_Davidson" } ]
My mongodb connection is taking much longer about 1minute to connect
2022-04-23T09:11:27.696Z
My mongodb connection is taking much longer about 1minute to connect
3,274
null
[ "python", "atlas-cluster" ]
[ { "code": "...\n\ndb_url = os.environ.get('DB_URL')\ndb = os.environ.get('DB')\ndb_collection = os.environ.get('DB_COLLECTION')\n\n\nclient = pymongo.MongoClient(db_url)\ndatabase = client[db]\n\ndef lambda_handler(event, context):\n try:\n logger.debug(f\"Event captured: {json.dumps(event)}\")\n\n user = /* Some code to get data*/\n collection = database[db_collection]\n\n collection.insert_one(user)\n \n return user\n except Exception as e:\n logger.exception(f\"Exception: {e}\")\n\n return {\n 'statusCode': 500\n }\n\n[ERROR]\t2022-04-20T08:20:23.344Z\t718b9e10-cedc-4d22-b6f3-a6f3230471d8\tException: smslogcluster-shard-00-01.cnoue.mongodb.net:27017: timed out,smslogcluster-shard-00-00.cnoue.mongodb.net:27017: timed out,smslogcluster-shard-00-02.cnoue.mongodb.net:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 625fc22906289886d8d3bdfa, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('smslogcluster-shard-00-00.cnoue.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('smslogcluster-shard-00-00.cnoue.mongodb.net:27017: timed out')>, <ServerDescription ('smslogcluster-shard-00-01.cnoue.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('smslogcluster-shard-00-01.cnoue.mongodb.net:27017: timed out')>, <ServerDescription ('smslogcluster-shard-00-02.cnoue.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('smslogcluster-shard-00-02.cnoue.mongodb.net:27017: timed out')>]>\nTraceback (most recent call last):\n File \"/var/task/lambda_function.py\", line 30, in lambda_handler\n collection.insert_one(user)\n File \"/opt/python/pymongo/collection.py\", line 606, in insert_one\n self._insert_one(\n File \"/opt/python/pymongo/collection.py\", line 547, in _insert_one\n self.__database.client._retryable_write(acknowledged, _insert_command, session)\n File \"/opt/python/pymongo/mongo_client.py\", line 1398, in _retryable_write\n with self._tmp_session(session) as s:\n File \"/var/lang/lib/python3.8/contextlib.py\", line 113, in __enter__\n return next(self.gen)\n File \"/opt/python/pymongo/mongo_client.py\", line 1676, in _tmp_session\n s = self._ensure_session(session)\n File \"/opt/python/pymongo/mongo_client.py\", line 1663, in _ensure_session\n return self.__start_session(True, causal_consistency=False)\n File \"/opt/python/pymongo/mongo_client.py\", line 1608, in __start_session\n self._topology._check_implicit_session_support()\n File \"/opt/python/pymongo/topology.py\", line 519, in _check_implicit_session_support\n self._check_session_support()\n File \"/opt/python/pymongo/topology.py\", line 535, in _check_session_support\n self._select_servers_loop(\n File \"/opt/python/pymongo/topology.py\", line 227, in _select_servers_loop\n raise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: smslogcluster-shard-00-01.cnoue.mongodb.net:27017: timed out,smslogcluster-shard-00-00.cnoue.mongodb.net:27017: timed out,smslogcluster-shard-00-02.cnoue.mongodb.net:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 625fc22906289886d8d3bdfa, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('smslogcluster-shard-00-00.cnoue.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('smslogcluster-shard-00-00.cnoue.mongodb.net:27017: timed out')>, <ServerDescription ('smslogcluster-shard-00-01.cnoue.mongodb.net', 27017) server_type: Unknown, rtt: None, 
error=NetworkTimeout('smslogcluster-shard-00-01.cnoue.mongodb.net:27017: timed out')>, <ServerDescription ('smslogcluster-shard-00-02.cnoue.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('smslogcluster-shard-00-02.cnoue.mongodb.net:27017: timed out')>]>END RequestId: 718b9e10-cedc-4d22-b6f3-a6f3230471d8\n", "text": "Hi, I am having problem connecting to MongoDB Atlas from my AWS Lambda Function.My Lambda function looks like this:This code works well when I run in on local machine but when I run it from Lambda Function, I got the error:I’ve tried setting the IP for network access is 0.0.0.0/0, creating user, adding this tag?ssl=true&ssl_cert_reqs=CERT_NONEto the host string, … but none of those works.\nCan anyone help me figure out what’s the problem? Thanks in advance.", "username": "Dat_Nguyen1" }, { "code": "", "text": "Are you sure you’re providing the authentication credentials?", "username": "Andrew_Davidson" }, { "code": "mongodb+srv://<username>:<password>@smslogcluster.cnoue.mongodb.net/test?retryWrites=true&w=majority\n", "text": "Yes.\nMy connection string looks like this:And now I think the problem comes from the VPC which I’m connecting my Lambda function to cuz when I remove the VPC, my lambda function can connect to Mongo. So is there any configuration needed to connect Mongo to Lambda function in a VPC?", "username": "Dat_Nguyen1" }, { "code": "", "text": "Sorry for the delay: as a general rule if you’re using Lambda in a VPC then you can take advantage of either VPC Peering or AWS PrivateLink (Atlas private endpoints) to connect to your Atlas cluster: this works for M10+ dedicated clusters.", "username": "Andrew_Davidson" } ]
pymongo.errors.ServerSelectionTimeoutError when connect to MongoDB Atlas from Lambda
2022-04-20T08:28:59.723Z
pymongo.errors.ServerSelectionTimeoutError when connect to MongoDB Atlas from Lambda
8,385
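The stack trace above is a server-selection timeout, i.e. a network-reachability problem (a Lambda function placed in a VPC with no route to Atlas) rather than a code bug; once VPC peering, a private endpoint, or NAT egress is in place the handler itself barely changes. A hedged sketch of the usual Lambda pattern -- client created once outside the handler, with a shorter server-selection timeout so failures surface quickly -- using the environment-variable names from the thread and a placeholder payload:

```python
import os
from pymongo import MongoClient

# Create the client once, outside the handler, so warm Lambda invocations reuse
# the same connection pool instead of reconnecting on every request.
client = MongoClient(
    os.environ["DB_URL"],
    serverSelectionTimeoutMS=5000,  # fail fast instead of the 30 s default
)
database = client[os.environ["DB"]]
collection = database[os.environ["DB_COLLECTION"]]


def lambda_handler(event, context):
    # Placeholder payload; the real function builds `user` from the event.
    user = {"source": "sketch", "raw_event_keys": sorted(event.keys())}
    collection.insert_one(user)
    return {"statusCode": 200}
```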
https://www.mongodb.com/…4_2_1024x575.png
[ "aggregation", "queries", "mongoose-odm", "atlas-device-sync", "mdbw22-hackathon", "mongodb-world-2022" ]
[ { "code": "record patient data, view /search, update and remove it.it's have real time Data analyze admin pagedaily at 7AM send report notification email to doctor/admin [email protected]@123that helps inform the patient of the doctor appointment time at one day ago.real time Data analyze admin pagesend report to doctor mailadmin/doctor account", "text": "The purpose of this project is to have a real time use case hospital management system with data analytics admin dashboard using MongoDB Atlas & Realm there.You can record patient data, view /search, update and remove it.Not only CURD Operation…it's have real time Data analyze admin page\nImage description1366×768 121 KB\n\nImage description1366×768 103 KB\ndaily at 7AM send report notification email to doctor/admin \nImage description1366×768 123 KB\n\nImage description1366×768 72.5 KB\nGitHub{% github GitHub - jacksonkasi0/Hospital-Management-System-with-mongoDB-Realm: real time hospital management system %}Website Link: Hospital Management SystemE-mail: [email protected]\nPassw0rd: samanta@123##NOTE:Guys, I’m will add a new future.It is a MongoDB trigger function, that helps inform the patient of the doctor appointment time at one day ago.And also, real time Data analyze admin page…\n& send report to doctor mailBut the truth is, I’m will be start to learn about MongoDB Trigger function and Realm.yeah! I will be finish… I’m done as I said.\nEverything is ready as stated above🚀{% youtube hospital management system with Data Analytics Admin Dashboard using MongoDB Atlas & MongoDB Realm - YouTube %}Now new version 2.0part-0: how to create new admin/doctor account{% youtube Real time 🔥 Hospital Management System 🐱‍👤 using MongoDB Atlas & Realm - part 1 - YouTube %}part-1:{% youtube Real time 🔥 Hospital Management System 🐱‍👤 using MongoDB Atlas & Realm - part 2 - YouTube %}part-2:{% youtube Real time 🔥 Hospital Management System 🐱‍👤 using MongoDB Atlas & Realm - part 3 - YouTube %}crudwithout-passing-schema-in-mongoose$group (aggregation)Scheduled TriggersTrigger TypesGmail for NodemailerI made a small mistake at the time, now ok.\nNo longer will it work at exactly 7 AM ( Indian time based )…\nImage description1366×768 117 KB\n\nImage description1366×768 96.6 KB\nThere are many thoughts to improve it, I am going to improve it further with React JS…What is your opinion on this? Can you tell me a little bit! ", "username": "Jackson_Kasi" }, { "code": "", "text": "Hey Jackson, let me know when the site up as I’m getting exceptions with samantha credentials. I’m curious if you are also capturing doctors notes, comments etc. entered during patient visits? I like your project. Thanks, AR", "username": "Albert_Rojas" }, { "code": "", "text": "Hi Jackson - this looks like a great project and lots of detail, however, it doesn’t follow the theme of the Hackathon which is Data as News and using the GDELT dataset.Given your obvious experience with MongoDB, perhaps you could re-configure your project? If healthcare is your space, then perhaps using the GDELT dataset to examine health based events globally?", "username": "Shane_McAllister" }, { "code": "", "text": "yeah! I will start right now …", "username": "Jackson_Kasi" } ]
Real time Hospital Management System using MongoDB Atlas & Realm
2022-04-29T13:49:18.823Z
Real time Hospital Management System using MongoDB Atlas &amp; Realm
5,879
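The project above mentions a $group-based analytics page and scheduled triggers. As a rough illustration of the kind of aggregation such a dashboard might run (collection and field names here are assumptions, not the project's actual schema):

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:[email protected]/")  # placeholder
patients = client["hospital"]["patients"]

# Count admitted patients per department, largest first.
pipeline = [
    {"$match": {"status": "admitted"}},
    {"$group": {"_id": "$department", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in patients.aggregate(pipeline):
    print(row["_id"], row["count"])
```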
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer AdvocateStaff Developer Advocate", "text": "In this session, Staff Developer Advocate Nic Raboy shares the progress of his News Browser Web App that he is building alongside all our hackathon participants.In this session, Nic will focus on the meta scraper he built for adding info and images to the GDELT data?We will be running these sessions each Friday during the hackathon where we will build features onto the WebApp sample app. All repos will be shared, so you can follow along too.We also use this Friday session to share team and indivdual progress - so if there’s anything you want to share on camera about your own hackathon project and progress, with all the Hackathon viewers, please reply to this post and we’ll send you an invite link. Anybody sharing is in-line to get some cool swag!!Join us, it will be fun and you will learn too! What’s not to like!!We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer AdvocateStaff Developer AdvocateEvent Type: Online\nLink(s):\nLocation\nVideo Conferencing URL", "username": "Shane_McAllister" }, { "code": "", "text": "This is starting in just over 90 minutes from now - very much looking forward to it.You can join on MongoDB Youtube and MongoDB Twitch or just watch below.If you have any questions, you can ask them via Chat live during the stream, or if you can’t make it, just reply to this post, and we’ll ask them for you.", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon Live CODING and FUN Hack Friday - week 3!
2022-04-25T16:45:11.107Z
Hackathon Live CODING and FUN Hack Friday - week 3!
2,442
null
[ "realm-web" ]
[ { "code": "\nimport { ObjectId } from \"bson\";\n\nentry = {\n _id: (ObjectId generated by mongodb Atlas)\n field1: value1,\n field2: value2,\n}\n\nentryCopy = {\n _id: (ObjectId will be generated by mongodb Atlas)\n field1: value1,\n field2: value2,\n originalEntryId: new ObjectId(entry._id.id),\n}\n", "text": "Hello, I’m working on a mongodb Realm app on Atlas. (Web SDK).I’m trying to save an object containing on ObjectId with collection.insertOne(myObject)But insertOne always returns “InvalidParameter Error”\nIf I save the myObject without the ObjectId, it works properly.What I want to achieve is to copy entry._id (which is an ObjectId) into entryCopy.\nHow can I make this work ?", "username": "Benoit_Werner" }, { "code": "", "text": "I’m not sure this will work but how about :new ObjectId(entry._id.str)", "username": "David_Boyd" }, { "code": "", "text": "Nope, doesn’t work either ", "username": "Benoit_Werner" }, { "code": "import {BSON} from \"realm-web\";\nlet id = BSON.ObjectID(BSON.ObjectID.generate());\nlet stringify = id.toHexString();\n", "text": "I just moved away from bson-object to realm-web and heres what worked for me.", "username": "Paul_Olson" } ]
How to insert an object containing an ObjectId with Realm
2020-11-27T14:48:24.708Z
How to insert an object containing an ObjectId with Realm
4,758
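The question above concerns the Realm Web SDK specifically, but the underlying operation -- copying one document's ObjectId into another document as a reference before inserting it -- looks like this from a plain driver. Sketch only: database, collection and field names are illustrative and the collection is assumed to be non-empty.

```python
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:[email protected]/")  # placeholder
coll = client["app"]["entries"]

entry = coll.find_one({})  # an existing entry whose _id is an ObjectId

entry_copy = {
    "field1": entry["field1"],
    "field2": entry["field2"],
    "originalEntryId": entry["_id"],  # reuse the ObjectId value directly
}

result = coll.insert_one(entry_copy)  # the copy gets its own fresh _id
print(result.inserted_id, isinstance(entry["_id"], ObjectId))
```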
null
[]
[ { "code": "", "text": "We are about to migrate from one mongo infrastructure to another. In the meantime, is there a way, while we test the new setup, that we can sync the two databases? I imagine a scenario where, while we test the new infrastructure, the old infrastructure will sync with the new database, so they are aligned data-wise, and if anything should go wrong, we could simply change the mongo URI to the old database. What are my options?", "username": "simplenotezy" }, { "code": "", "text": "", "username": "Jack_Woehr" } ]

Syncing of two mongo databases
2022-04-29T10:02:20.386Z
Syncing of two mongo databases
1,357
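One possible answer to the question above (keeping the old and new deployments aligned while the new one is being tested) is to tail a change stream on the old cluster and replay writes into the new one. The sketch below assumes both deployments are replica sets (change streams need an oplog) and deliberately ignores deletes, resume tokens and error handling that a real migration tool would need; connection strings and namespaces are placeholders.

```python
from pymongo import MongoClient

old_coll = MongoClient("mongodb://old-cluster.example.net")["app"]["orders"]
new_coll = MongoClient("mongodb://new-cluster.example.net")["app"]["orders"]

# Tail the old cluster's change stream and upsert every changed document into
# the new cluster, so the two stay aligned while the new setup is under test.
with old_coll.watch(full_document="updateLookup") as stream:
    for change in stream:
        if change["operationType"] in ("insert", "update", "replace"):
            doc = change["fullDocument"]
            new_coll.replace_one({"_id": doc["_id"]}, doc, upsert=True)
```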
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer AdvocateStaff Developer Advocate", "text": "In this session, Staff Developer Advocate Nic Raboy shares the progress of his News Browser Web App that he is building alongside all our hackathon participants.In this session, Nic will focus on the meta scraper he built for adding info and images to the GDELT data?We will be running these sessions each Friday during the hackathon where we will build features onto the WebApp sample app. All repos will be shared, so you can follow along too.We also use this Friday session to share team and indivdual progress - so if there’s anything you want to share on camera about your own hackathon project and progress, with all the Hackathon viewers, please reply to this post and we’ll send you an invite link. Anybody sharing is in-line to get some cool swag!!Join us, it will be fun and you will learn too! What’s not to like!!We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer AdvocateStaff Developer AdvocateEvent Type: Online\nLink(s):\nLocation\nVideo Conferencing URL", "username": "Shane_McAllister" }, { "code": "", "text": "Hello Hackers!! Unfortunately, we need to postpone the live coding session today. Both Nic & Mark are ill and can’t make it.Rest assured, we’ll catch up and maybe have 2 livecoding sessions next week. Keep an eye on the events section", "username": "Shane_McAllister" }, { "code": "", "text": "Hi Shane could you please point me to the events section or the Hackathon calendar.", "username": "Fiewor_John" }, { "code": "", "text": "Sure - the events section is accessed via the links on the top right - see image\nScreenshot 2022-04-29 at 13.41.112424×1290 352 KB\n", "username": "Shane_McAllister" }, { "code": "", "text": "Thank you very much!", "username": "Fiewor_John" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon Live Coding and Fun Hack Friday!
2022-04-20T07:47:45.137Z
Hackathon Live Coding and Fun Hack Friday!
2,477
null
[ "migration" ]
[ { "code": "", "text": "Hi there! I’ve just attempted to migrate from Stitch to Realm, under iOS and Swift.I’m extremely disappointed and frustrated with the process, and also the documentation. There doesn’t seem to be an API guide that matches that of Stitch (there is something but it doesn’t even seem to list all the methods a class defines), and much worse, it looks like now I have to battle all the complexity, quirks, limitations, restrictions, confusing terminology, etc. of Realm, when all I want is to be able to connect to my Atlas cluster, my database and collection, and insert some documents that I defined as Swift structs.Do I understand correctly that now I have to re-implement my simple and lightweight Swift structs as subclasses of the ‘Object’ type, something bridged from Obj-C; define schema (which I really didn’t want since my struct keeps evolving and changing), and add my objects to the default Realm, before I could insert them into a MongoDB collection on Atlas?And what does “collection” even mean any more, since Realm seems to redefine the term completely?What if I really, really, really am not interested in Realm, and especially not in Objective-C, and just want to be able to store Swift structs in MongoDB collections? That’s what I signed up for, that’s why I chose this technology, and it seems that now I have to fight some poorly-integrated chimera of a system in order to accomplish the same thing that used to be so much better and easier before.Please tell me that I’m missing something, and there’s a way to just do what I was able to do before!Thanks a lot, and all the best,A.", "username": "Andras_Puiz" }, { "code": "", "text": "Yes you can definitely still query the Atlas collections directly, get the document, and then store them in your swift structs. You do not have to use Realm Sync with their Object class definitions. You can see an example of this here - https://docs.mongodb.com/realm/sdk/ios/examples/mongodb-remote-access/", "username": "Ian_Ward" }, { "code": "", "text": "Thanks!However, I need to insert records into the collection.With Stitch, I used to be able to do this:let itemsCollection = mongoServiceClient.db(“myDatabase”).collection(“myCollection”, withCollectionType: MyStruct.self)This is what I miss so dearly: the API understanding and working with my Swift structs, without forcing me to map them to new obscure types.Now I’d need to either convert MyStruct into an Object subclass (an absolute no-go since MyStruct embeds a lot of other structs I’ve defined in the app, and now I’d need to redefine them all as compatible Object subclasses), or to encode MyStruct into JSON and somehow insert it into the collection… except that the (mostly undocumented) API seems to disallow that.Is there any way to accomplish inserting a Swift struct into an Atlas collection, like we could with Stitch? If not, than this would be a massive regression. While the Apple ecosystem is moving towards Swift and away from Objective-C, you are doing the opposite. There’s also a major trend to use value types (structs) instead of objects, and boom, MongoDB now wants everything to be a class.I’ve been a paying customer for over a year now, and I chose MongoDB instead of the myriad competitors specifically because it allowed me to insert my structs into a MongoDB instance in the cloud, without too much hassle. 
This now seems to be gone.I understand that Realm integration is a strategic goal (pet project) for the company, but if it comes at the cost of removing useful functionalities and making them way more complicated, then I don’t see how MongoDB is to remain competitive in the Apple ecosystem, and I believe some of this old functionality would need to come back ASAP.My two cents anyway.", "username": "Andras_Puiz" }, { "code": "", "text": "@Andras_Puiz I’m not sure if I can be clearer than my previous post but I’ll state it again - you do not need to migrate to Objects and can continue using structs. There is a mongo-client on the Realm SDK that exposes different APIs for you to create, read, update, and delete documents from an Atlas collection, for example -\nhttps://docs.mongodb.com/realm-sdks/swift/latest/Typealiases.html#/s:10RealmSwift16MongoInsertBlocka", "username": "Ian_Ward" }, { "code": "public typealias MongoInsertBlock = (Result<AnyBSON, Error>) -> Voidstruct", "text": "Thanks again… is there any documentation of public typealias MongoInsertBlock = (Result<AnyBSON, Error>) -> Void?Once again: we used to be able to add a struct to a collection, and there was a clearly-documented API for that. I would need some more guidance about this API you are proposing.", "username": "Andras_Puiz" }, { "code": "", "text": "I still don’t know how I can convert Swift structs into AnyBSON types and back. Any info, please?", "username": "Andras_Puiz" } ]
Migrating from Stitch to Realm (under iOS and Swift)
2021-06-03T23:19:42.273Z
Migrating from Stitch to Realm (under iOS and Swift)
4,172
null
[ "node-js" ]
[ { "code": "", "text": "I have an issue on Windows 11 where Realm Sync has stopped working. It was working and now the login is failing and I am not able to find a resolution. Tried installing the certificates from Let's Encrypt but this has not worked. Any help is appreciated.", "username": "Matthew_Needham" }, { "code": "", "text": "It might be helpful to add some more detail to your question:", "username": "Jack_Woehr" } ]
Failed to login, reason: certificate has expired
2022-04-29T13:59:26.379Z
Failed to login, reason: certificate has expired
1,945
https://www.mongodb.com/…_2_1023x216.jpeg
[ "server" ]
[ { "code": "", "text": "\nScreen Shot 2022-04-28 at 22.55.571919×405 147 KB\n\nHi all, could anyone help me for this prob\nI install mongodb by brew install and when i check by “brew services list” , it return mongodb started\nThen i ran “pecl install mongodb”, everything was successfull… but when i run “brew services list”. it return “[email protected] error 3584”\ni checked log in hombrew mongo folder and it show like the image\nmay i research not enough for this, pls help me to fix this\nThanks so much", "username": "Nguy_n_Trung_D_c" }, { "code": "sudo brew services start mongodb-community", "text": "I think you are starting the Homebrew version of MongoDB incorrectly.\nsudo brew services start mongodb-community", "username": "Jack_Woehr" }, { "code": "", "text": "hi mr, thank for your quick response\ni’ve already fixed it by “chmod -R 777 /otp/homebrew/var/mongo”", "username": "Nguy_n_Trung_D_c" } ]
Mongo return error status after "pecl install mongodb"
2022-04-28T15:59:45.856Z
Mongo return error status after &ldquo;pecl install mongodb&rdquo;
3,799
null
[ "data-modeling" ]
[ { "code": "considerations{\n \"_id\" : \"\",\n \"departmentId\" : \"\",\n \"employees\" : [{\n \"employeeId\" : 213132,\n \"name\" : \"John\",\n \"building\" : \"123\"\n },\n {\n \"employeeId\" : 213132,\n \"name\" : \"John\",\n \"building\" : \"123\"\n }]\n}\n\n{\n \"_id\" : \"1\",\n \"departmentId\" : \"1\",\n \"employeeId\" : 213132,\n \"name\" : \"John\",\n \"building\" : \"123 QWE 213\"\n}\n\n{\n \"_id\" : \"2\",\n \"departmentId\" : \"2\",\n \"employeeId\" : 2132,\n \"name\" : \"Paul\",\n \"building\" : \"123 QWE 213\"\n}\n\n", "text": "Doc 1:Doc 1:", "username": "Porali_K_Sivandhaperumal" }, { "code": "", "text": "Your starting point should beDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "steevej" } ]
Single Doc Design or Multi Doc Design
2022-04-28T18:50:02.081Z
Single Doc Design or Multi Doc Design
1,546
null
[]
[ { "code": "", "text": "While doing a many to many relationship in SQL we create a table that will have the ID of the 2 tables that are forming the relationship. Sometimes in that table that we are creating, we put more data.Example:A table called worker and a table called department. Workers can work in many departments and many workers can work in the same department.However the workers move between department and we need to store the year that they move into another department.In mongoDB how can we represent that year. I am creating id for every worker and adding the department where they have worked but how do I store the year?", "username": "Eneko_Izaguirre_Martin" }, { "code": "", "text": "how do I store the year?How do you do it in SQL? What ever you do in SQL could be done here.However the workers move between department and we need to store the year that they move into another department.So they always switch department on an exact year boundary. Nobody ever moves from one department to the other on May 4th for example?I think it would be better to have a date rather than a year.", "username": "steevej" }, { "code": "", "text": "\n1408×210 5.73 KB\n", "username": "Eneko_Izaguirre_Martin" }, { "code": "{\n \"_id\" : worker_primary_key_value ,\n ...\n}\n{\n \"_id\" : department_primary_key_value ,\n ...\n}\n{\n \"worker_id\" : worker_primary_key_value ,\n \"department_id\" : department_primary_key_value ,\n \"start_date\" : ISODate( \"2022-04-29\" )\n}\n", "text": "Like I wrote, you could do the same:collection Workerscollection Departmentscollection EntityBut you would not leverage the flexible schema nature of MongoDB, seeDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "steevej" } ]
Many to many relation table field into MongoDB
2022-04-29T11:49:38.932Z
Many to many relation table field into MongoDB
2,265
null
[ "aggregation" ]
[ { "code": "", "text": "If I use https://cloud.mongodb.com/ I would like to be able to count using he filter on a collection. How do I do this ? Its fine if the count is less than 20, other wise I get something like 1-20 of many … If I try to enter .count() I cannot get it to do that with “Find” - I havent tried aggregation - but its not clear how to apply find as its not in the pulldownAnd in general if I have a query like {“city”:“BELLEVUE”} how do I make he query case insensitive as I would in SQL ?", "username": "Andrew_Watts" }, { "code": "mongosh", "text": "Hello @Andrew_Watts, you can connect to your Atlas cloud database from the desktop GUI tool like Compass or command line tool mongosh. These have more features and functions.In the cloud UI you can specify a query’s filter in the Find View. There is a field TOTAL DOCUMENTS just above the Find, which shows the document count.I would like to be able to count using he filter on a collection.When you enter and apply the filter, you will see the resulting documents and the TOTAL DOCUMENTS will be updated to the new count.I havent tried aggregation - but its not clear how to apply find as its not in the pulldownTo work with Aggregation queries, you will benefit from working with the Compass or the mongosh.And in general if I have a query like {“city”:“BELLEVUE”} how do I make he query case insensitiveYou can specify Collation option in the Find Options. Collation allows searching in a case-insensitive way.", "username": "Prasad_Saya" } ]
Cloud site IDE use of Find
2022-04-28T21:53:06.453Z
Cloud site IDE use of Find
1,214
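A minimal PyMongo sketch of the collation approach suggested in the thread above, run from a driver instead of the Atlas UI; the connection string and the "restaurants" namespace are placeholders, not details from the thread.

```python
from pymongo import MongoClient
from pymongo.collation import Collation

client = MongoClient("mongodb+srv://user:[email protected]/")  # placeholder URI
coll = client["test"]["restaurants"]                               # hypothetical namespace

# count_documents applies the filter server-side; a strength-2 collation
# compares base letters only, so "Bellevue" and "BELLEVUE" both match.
total = coll.count_documents(
    {"city": "BELLEVUE"},
    collation=Collation(locale="en", strength=2),
)
print(total)
```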
null
[ "upgrading" ]
[ { "code": "", "text": "Hello Experts,After upgrade If we set the FCV to upgraded MongoDB version then can we downgrade or rollback to previous version without any issues if required. Could you please explain in detail.Thanks in advance.", "username": "5e7dda1ea15ba6e59075ac9b05b7f3a" }, { "code": "", "text": "Hi,The Feature Compatibility Version (FCV) is set as the last step in a major version upgrade as it enables features that can persist data incompatible with earlier versions of MongoDB. Incompatible data types will definitely complicate the downgrade process as you will need to modify or remove data that is unsupported by older versions of MongoDB.The Release Notes for every major MongoDB server release include a detailed section on Compatibility Changes and the downgrade instructions also mention incompatible features. For example: Remove FCV 5.0 Persisted Features.If you need time to validate a newly upgraded deployment you should wait before enabling the new FCV version, per the note in the docs:It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period to ensure the likelihood of downgrade is minimal. When you are confident that the likelihood of downgrade is minimal, enable these features.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X ,Thanks for the quick update.\nTwo things are clear from your update.\ni)In a way Down grade is a complicate process.\nii)We should do complete testing after upgrade and before setting FCV to new value.Thanks", "username": "5e7dda1ea15ba6e59075ac9b05b7f3a" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we downgrade MongoDB once FCV is to upgraded version
2022-04-21T10:09:56.246Z
Can we downgrade MongoDB once FCV is to upgraded version
3,181
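For reference, checking and raising the FCV can also be scripted. This is a hedged sketch using PyMongo against a hypothetical deployment; "5.0" is only an example target version and the URI is a placeholder.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI

# Read the current featureCompatibilityVersion (useful before and after an upgrade).
current = client.admin.command(
    {"getParameter": 1, "featureCompatibilityVersion": 1}
)
print(current["featureCompatibilityVersion"])

# Only after the recommended burn-in period, enable the new version's features.
# Downgrading after this point may require removing incompatible data.
client.admin.command({"setFeatureCompatibilityVersion": "5.0"})
```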
null
[ "kafka-connector" ]
[ { "code": "\"change.stream.full.document\" : \"default\"default{\n \"name\" : \"msc-connector-test\",\n \"config\" : {\n \"batch.size\" : \"8192\",\n \"connection.uri\" : \"***\",\n \"connector.class\" : \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"copy.existing\" : \"false\",\n \"collection\" : \"***\",\n \"database\" : \"***\",\n \"heartbeat.interval.ms\" : \"5000\",\n \"heartbeat.topic.name\" : \"__msc_heartbeat-topic\",\n \"key.converter\" : \"org.apache.kafka.connect.storage.StringConverter\",\n \"key.converter.schemas.enable\" : \"false\",\n \"name\" : \"msc-connector-test\",\n \"output.json.formatter\" : \"com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson\",\n \"change.stream.full.document\" : \"default\",\n \"topic.creation.default.partitions\" : \"3\",\n \"topic.creation.default.replication.factor\" : \"3\",\n \"transforms\" : \"dropPrefix\",\n \"transforms.dropPrefix.regex\" : \"(.*)<db_name>(.*)\",\n \"transforms.dropPrefix.replacement\" : \"msc-topic-test\",\n \"transforms.dropPrefix.type\" : \"org.apache.kafka.connect.transforms.RegexRouter\",\n \"value.converter\" : \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter.schemas.enable\" : \"false\"\n }\n}\n{\n \"_id\" : {\n \"_data\" : \"***\"\n },\n \"operationType\" : \"replace\",\n \"clusterTime\" : {\n \"$timestamp\" : {\n \"t\" : 1646896094,\n \"i\" : 1\n }\n },\n \"fullDocument\" : {\n ***\n },\n \"ns\" : {\n \"db\" : \"***\",\n \"coll\" : \"***\"\n },\n \"documentKey\" : {\n \"_id\" : \"***\"\n }\n}\n", "text": "Hello,Faced the following behavior while using connector 1.6.1:\nNeed to publish messages in topic as a diffs, according to documentation it were a common connector use case some time ago. But now, according to docs, for this case the following parameter must be used:\n\"change.stream.full.document\" : \"default\"The default setting returns the differences between the original document and the updated document.Created connector with following configuration:But published messages have following format:There’s no diffs in messages, just an updated fullDocument.\nIs this an expected behavior since 1.6 update or something wrong or missed in connector configuration?", "username": "AGorshkov" }, { "code": "", "text": "Hi,I have faced the same issue and I had to came up with the idea of EventCollection where backend is posting events and connector is just gathering insert operations. It’s a workaround for that lack of diff in messages for “update” operation. Ideally would be if I could relay on connector functionality.Where you able to solve Your issue ?", "username": "Adam_Pogrzebny" } ]
No diffs in messages from MongoDB Kafka source connector
2022-03-10T09:46:05.329Z
No diffs in messages from MongoDB Kafka source connector
2,524
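The connector reads from a MongoDB change stream, so the raw stream behaviour explains part of what is seen above: a replace event (as in the sample message) never carries an updateDescription diff; only update events do. Below is a small PyMongo sketch to observe this locally with a placeholder namespace; it is an illustration of the underlying change stream, not the connector itself.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder; change streams need a replica set
coll = client["test"]["source"]                     # hypothetical namespace

# With the default full-document setting, "update" events expose the diff in
# updateDescription, while "replace" and "insert" events carry fullDocument only.
with coll.watch() as stream:
    for event in stream:
        print(event["operationType"],
              event.get("updateDescription"),
              event.get("fullDocument"))
```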
null
[]
[ { "code": "", "text": "When we order for example a M20 cluster with “2 vCPUs” - does it mean that each of the “3 data bearing servers” has “2 vCPUs”?How can we monitor the CPU usage of the replica sets?", "username": "max_matinpalo" }, { "code": "", "text": "Hi @max_matinpalo,Yes, the amount of RAM, IOPS, CPUs, etc that you see when you order a cluster is the config of one of the 3 data bearing nodes in the RS. They are always all identical - so you are actually Highly Available (HA).You can monitor the CPU usage either in the Real Time tab or the Metrics tab.In the Metrics tab, you have a bunch of options at the bottom related to CPU monitoring:\nimage1039×316 23.7 KB\nCheers,\nMaxime.", "username": "MaBeuLux88" } ]
Replica Sets - CPU usage
2022-04-28T07:25:18.820Z
Replica Sets - CPU usage
1,558
null
[ "python", "atlas-cluster" ]
[ { "code": "", "text": "hi there, I’m learning mongoDB.\nwhen I connected mongoDB atlas with python, got error code in python terminer in pycharm.<ServerDescription (‘cluster0-shard-00-02.3ar0e.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘cluster0-shard-00-02.3ar0e.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)’)>]>1-install dnspython, pymongo\n2- sign up mongodb and try to connect my python to my Cluster0 using MongoDB’s native drivers.\n3- I use MacOS Big Sur 11.1 ver.my knowledge of IT is almost no base… please help me… omg…", "username": "Bright_Knowledge" }, { "code": "CERTIFICATE_VERIFY_FAILEDcertificertifipython --versionpip freeze | grep pymongo", "text": "Hi @Bright_Knowledge,Based on the CERTIFICATE_VERIFY_FAILED message, I suspect you need to update your Python environment’s trusted TLS certifications by installing the certifi package:pip install certifiIf you are still having issues connecting to Atlas after installing certifi, please provide:version of Python reported by: python --versionversion of PyMongo reported by: pip freeze | grep pymongocurrent error message receivedRegards,\nStennie", "username": "Stennie_X" } ]
I can't connect to mongoDB atlas with python with pycharm
2022-04-29T02:15:30.429Z
I can&rsquo;t connect to mongoDB atlas with python with pycharm
2,842
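A short PyMongo sketch of the certifi fix suggested above, assuming a placeholder Atlas URI; pointing the driver at certifi's CA bundle sidesteps macOS Python installs that ship without root certificates.

```python
import certifi
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://user:[email protected]/test",  # placeholder URI
    tlsCAFile=certifi.where(),  # use certifi's CA bundle for TLS verification
)
print(client.admin.command("ping"))  # raises if the TLS handshake still fails
```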
null
[ "aggregation", "queries" ]
[ { "code": "{ \n\"_id\": { \"$oid\": \"1\" },\n\"firstName\": \"John\",\n\"lastName\": \"Doe\", \n\"birthday\": { \"month\": 10, \"day\": 2 }\n}\n{ \n\"_id\": { \"$oid\": \"2\" },\n\"firstName\": \"John\",\n\"lastName\": \"Doe\", \n\"birthday\": { \"month\": 10, \"day\": 5 }\n}\n{ \n\"_id\": { \"$oid\": \"3\" },\n\"firstName\": \"John\",\n\"lastName\": \"Doe\", \n\"birthday\": { \"month\": 8, \"day\": 21 }\n}\n{ \n\"_id\": { \"$oid\": \"4\" },\n\"firstName\": \"Danica\",\n\"lastName\": \"Taylor\", \n\"birthday\": { \"month\": 8, \"day\": 12 }\n}\n{ \n\"_id\": { \"$oid\": \"5\" },\n\"firstName\": \"Daniel\",\n\"lastName\": \"Johnson\", \n\"birthday\": { \"month\": 6, \"day\": 14 }\n}\n{ \n\"_id\": { \"$oid\": \"6\" },\n\"firstName\": \"Pamela\",\n\"lastName\": \"Giesen\", \n\"birthday\": { \"month\": 4, \"day\":22 }\n}\n{ \n\"_id\": { \"$oid\": \"7\" },\n\"firstName\": \"Alicia\",\n\"lastName\": \"Travis\", \n\"birthday\": { \"month\": 2, \"day\": 18 }\n}\n{ \n\"_id\": { \"$oid\": \"3\" },\n\"firstName\": \"John\",\n\"lastName\": \"Doe\", \n\"birthday\": { \"month\": 8, \"day\": 21 }\n}\n{ \n\"_id\": { \"$oid\": \"1\" },\n\"firstName\": \"John\",\n\"lastName\": \"Doe\", \n\"birthday\": { \"month\": 10, \"day\": 2 }\n}\n{ \n\"_id\": { \"$oid\": \"2\" },\n\"firstName\": \"John\",\n\"lastName\": \"Doe\", \n\"birthday\": { \"month\": 10, \"day\": 5 }\n}\n{ \n\"_id\": { \"$oid\": \"7\" },\n\"firstName\": \"Alicia\",\n\"lastName\": \"Travis\", \n\"birthday\": { \"month\": 2, \"day\": 18 }\n}\n{ \n\"_id\": { \"$oid\": \"6\" },\n\"firstName\": \"Pamela\",\n\"lastName\": \"Giesen\", \n\"birthday\": { \"month\": 4, \"day\":22 }\n}\n{ \n\"_id\": { \"$oid\": \"5\" },\n\"firstName\": \"Daniel\",\n\"lastName\": \"Johnson\", \n\"birthday\": { \"month\": 6, \"day\": 14 }\n}\n", "text": "Hello everyone.\nI have, for example, 7 documents:I need to sort them by selected month and day, for example, “month”: 8 “day”: 12. The output should be:Is there a chance to do only in $match? I know how to filter them but only to the highest month. Thank you in advance, I am verry newbie in this.", "username": "Dusan_Manic" }, { "code": "", "text": "Some clarifications are needed.“month”: 8 “day”: 12.The document _id:4 which has month:8 day:12 is not in the output. Is that what you want?Your sort in not clear. What I understand is that the months higher or equal to the specified to comes first and sorted and then then one lower that the specified to come after but also sorted.What about the month equals but day lower than specified day. 
You removed month:8 day:12 from output, what about if you had month:8 day:11?", "username": "steevej" }, { "code": "{ \n\"_id\": {\"$oid\": \"4\"},\n\"firstName\": \"Danica\",\n\"lastName\": \"Taylor\", \n\"birthday\": { \"month\": 8, \"day\": 12 }\n}\n{ \n\"_id\": { \"$oid\": \"3\"},\n\"firstName\": \"Mie\",\n\"lastName\": \"Ragnar\", \n\"birthday\": { \"month\": 8, \"day\": 21 }\n}\n{ \n\"_id\": {\"$oid\": \"1\"},\n\"firstName\": \"John\",\n\"lastName\": \"Doe\", \n\"birthday\": { \"month\": 10, \"day\": 2 }\n}\n{ \n\"_id\": {\"$oid\": \"2\"},\n\"firstName\": \"John\",\n\"lastName\": \"Doe\", \n\"birthday\": { \"month\": 10, \"day\": 5 }\n}\n{ \n\"_id\": {\"$oid\": \"7\"},\n\"firstName\": \"Alicia\",\n\"lastName\": \"Travis\", \n\"birthday\": { \"month\": 2, \"day\": 18 }\n}\n{ \n\"_id\": {\"$oid\": \"6\"},\n\"firstName\": \"Pamela\",\n\"lastName\": \"Giesen\", \n\"birthday\": { \"month\": 4, \"day\":22 }\n}\n{ \n\"_id\": {\"$oid\": \"5\"},\n\"firstName\": \"Daniel\",\n\"lastName\": \"Johnson\", \n\"birthday\": { \"month\": 6, \"day\": 14 }\n}\n{ \n\"_id\": {\"$oid\": \"9\"},\n\"firstName\": \"Brandan\",\n\"lastName\": \"Kendall\", \n\"birthday\": { \"month\": 8, \"day\": 5 }\n}\n", "text": "The document _id:4 which has month:8 day:12 I forgot to write him in an output. As you said if someone selects the 10th month and 3rd day in a month it should be sorted to the higher months till the 12th month after that should start from the 1st to 10th month and 2nd day in that month. Basically a circle. I need to list birthdays from the selected month and sort them from specified month this year to specified month next year. If someone selects the 8th month and 12th day in a month, it should sort customers this way:I have been listed users but only to the 12th month (the highest number), I don`t know with $match and $sort how to do that. Thank you in advance.", "username": "Dusan_Manic" }, { "code": "specified_month = 8\nspecified_day = 12\nset__passed_birthday = { \"$set\" : {\n \"birthday_passed\" : { \"$cond\" : {\n \"if\" : { \"$or\": [\n { \"$lt\" : [ \"$birthday.month\" , specified_month ] } ,\n { \"$and\" : [\n { \"$eq\" : [ \"$birthday.month\" , specified_month ] } ,\n { \"$lt\" : [ \"$birthday.day\" , specified_day ] }\n ] }\n ] }\n \"then\" : 1\n \"else\" : 0 } }\n}}\n\nsort_stage = { \"$sort\" : {\n \"passed_birthdat\" : 1 ,\n \"birthday.month\" : 1 ,\n \"birthday.day\" : 1\n} }\n\npipeline = [ set__passed_birthday , sort_stage ]\n\ncollection.aggregate( pipeline )\n", "text": "I would try the following approach using the aggregation framework.I would set a new field passed_birthday to 1 to all birthday that comes before the specified month and day and would set it to 0 to the others.I would then sort using passed_birthday:1,birthday.month:1,birthday.day:1. 
Documents with passed_birthday 0 will be listed first ordered by month and day and documents with passed_birthday 1 will be listed after.Untested first draft", "username": "steevej" }, { "code": "specified_month = 8\n\nspecified_day = 12\n\n// I had passed_birthday, birthday_passed and other misspelling so I renamed it\n// to __passed\n\n// Some missing , after the 'if:' and 'then:'\n\nset_stage = { \"$set\" : {\n \"__passed\" : { \"$cond\" : {\n \"if\" : { \"$or\": [\n { \"$lt\" : [ \"$birthday.month\" , specified_month ] } ,\n { \"$and\" : [\n { \"$eq\" : [ \"$birthday.month\" , specified_month ] } ,\n { \"$lt\" : [ \"$birthday.day\" , specified_day ] }\n ] }\n ] } ,\n \"then\" : 1 ,\n \"else\" : 0 } }\n}}\n\nsort_stage = { \"$sort\" : {\n \"__passed\" : 1 ,\n \"birthday.month\" : 1 ,\n \"birthday.day\" : 1\n} }\n\n// added unset_stage to cleanup temporary field __passed.\n\nunset_stage = { \"$unset\" : [ \"__passed\" ] }\n\npipeline = [ set_stage , sort_stage , unset_stage ]\n\ncollection.aggregate( pipeline )\n\n// Result when applied on original collection\n{ _id: 4,\n firstName: 'Danica',\n lastName: 'Taylor',\n birthday: { month: 8, day: 12 } }\n{ _id: 3,\n firstName: 'John',\n lastName: 'Doe',\n birthday: { month: 8, day: 21 } }\n{ _id: 1,\n firstName: 'John',\n lastName: 'Doe',\n birthday: { month: 10, day: 2 } }\n{ _id: 2,\n firstName: 'John',\n lastName: 'Doe',\n birthday: { month: 10, day: 5 } }\n{ _id: 7,\n firstName: 'Alicia',\n lastName: 'Travis',\n birthday: { month: 2, day: 18 } }\n{ _id: 6,\n firstName: 'Pamela',\n lastName: 'Giesen',\n birthday: { month: 4, day: 22 } }\n{ _id: 5,\n firstName: 'Daniel',\n lastName: 'Johnson',\n birthday: { month: 6, day: 14 } }\n", "text": "Modified version following testing:", "username": "steevej" }, { "code": "", "text": "Thank you for your fast reply. I will try this and send you feedback.", "username": "Dusan_Manic" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$match $sort data
2022-04-28T13:25:00.365Z
$match $sort data
2,433
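The final pipeline from the thread above, transcribed as a runnable PyMongo script; only the connection and the "people" collection name are assumptions.

```python
from pymongo import MongoClient

client = MongoClient()            # placeholder connection
coll = client["test"]["people"]   # hypothetical namespace

specified_month, specified_day = 8, 12

pipeline = [
    # Flag birthdays that have already passed relative to the selected date...
    {"$set": {"__passed": {"$cond": {
        "if": {"$or": [
            {"$lt": ["$birthday.month", specified_month]},
            {"$and": [
                {"$eq": ["$birthday.month", specified_month]},
                {"$lt": ["$birthday.day", specified_day]},
            ]},
        ]},
        "then": 1,
        "else": 0,
    }}}},
    # ...so upcoming birthdays sort first and passed ones wrap around to the end.
    {"$sort": {"__passed": 1, "birthday.month": 1, "birthday.day": 1}},
    {"$unset": "__passed"},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```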
null
[ "atlas-search" ]
[ { "code": "", "text": "Hey there,I’m creating the geo index in one of my collections. In the UI it says it uses default analyzer which is completely useless for my case (geo-queries only).When switched to simple analyzer the index got a little bit smaller.Is there a way how to disable it completely to save some resources?", "username": "Ikar_Pohorsky" }, { "code": "lucene.keyworddynamic:true", "text": "Welcome back @Ikar_Pohorsky.You can try to use the lucene.keyword analyzer as one option, or you can index only the fields that you need, rather than dynamic:true. Defining a specific subset of fields is the best path for reducing the size of your index.", "username": "Marcus" }, { "code": "keyword'mappings': {\n 'dynamic': False,\n 'fields': {\n 'geometry': {\n 'type': 'geo'\n }\n }\n},\n'$search': {\n 'geoWithin': {\n 'path': 'geometry',\n 'geometry': geometry,\n },\n 'index': INDEX_NAME_AVAILABILITY,\n},\n", "text": "Thanks @Marcus ,setting keyword analyzer didn’t help much - the index size was about the same.my index definition is following:The UI also confirms the dynamic mapping is disabled.But I’d guess - when your search is based purely on geometry:…you don’t need any analyzers, do you? The question is how to disable it completely.", "username": "Ikar_Pohorsky" } ]
How to define Atlas Search index without analyzer?
2022-04-28T12:40:43.712Z
How to define Atlas Search index without analyzer?
1,795
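A hedged end-to-end sketch of the geo-only query discussed above, in PyMongo; the index name, namespace, and polygon are placeholders rather than values from the thread.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:[email protected]/")  # $search runs on Atlas only
coll = client["test"]["listings"]                                  # hypothetical namespace

polygon = {  # hypothetical closed polygon ring (longitude, latitude)
    "type": "Polygon",
    "coordinates": [[[-74.0, 40.7], [-74.0, 40.8], [-73.9, 40.8],
                     [-73.9, 40.7], [-74.0, 40.7]]],
}

pipeline = [
    {"$search": {
        "index": "geo_only",  # assumed index name with only the "geometry" field mapped
        "geoWithin": {"path": "geometry", "geometry": polygon},
    }},
    {"$limit": 10},
]

for doc in coll.aggregate(pipeline):
    print(doc["_id"])
```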
null
[ "python", "connecting" ]
[ { "code": "Invalid URI scheme: mongodb+srv", "text": "Hi,I am trying to connect my python application. Every time I am getting Invalid URI scheme: mongodb+srv error while trying to connect with mongo Atlas.Please help me where i m doing wrong.", "username": "Alok_verma" }, { "code": "import pymongo\nimport dns # required for connecting with SRV\n\nclient = pymongo.MongoClient(\"mongodb+srv://kay:[email protected]/test?w=majority\")\ndb = client.test\n", "text": "Hello @Alok_vermaFrom The Fine Manual:Does your look like this ?", "username": "chris" }, { "code": "", "text": "Please provide the following information so that we can troubleshoot the issue:", "username": "Prashant_Mital" }, { "code": " def get_connection(self):\n try:\n client = MongoClient(MongoDbConnection.CONNECTION_STR)\n db = client.API\n return db\n except Exception as e:\n print(e)\n", "text": "Hi Prashant,please find details belowError: Invalid URI scheme: mongodb+srv\npython flask code which i m trying along with URICONNECTION_STR=“mongodb+srv://username:[email protected]/API?retryWrites=true&w=majority”", "username": "Alok_verma" }, { "code": "", "text": "Hi @Alok_verma and welcome to the forums!python flask code which i m trying along with URIUnfortunately I’m unable to reproduce the error that you’re encountering. Are you able to reproduce the error consistently ? Are you able to connect from the same machine to the Atlas cluster using mongo shell ?Could you also provide the following information :Regards,\nWan.", "username": "wan" }, { "code": "InvalidURI Traceback (most recent call last)\n<ipython-input-86-5c9178705b3c> in <module>()\n 1 from pymongo import MongoClient\n----> 2 client = MongoClient(\"mongodb+srv://XXX:[email protected]/aiq?retryWrites=true&w=majority\")\n\n/Users/shamim/workspaceroot/2020/openrules_dev/lib/python2.7/site-packages/pymongo/mongo_client.pyc in __init__(self, host, port, document_class, tz_aware, connect, **kwargs)\n 398 .. seealso:: :doc:`/examples/server_selection`\n 399 \n--> 400 | **Authentication:**\n 401 \n 402 - `username`: A string.\n\nInvalidURI: Invalid URI scheme: mongodb+srv\n", "text": "Hi wan,here is my error and the version of pymongo\npymongo==3.11.3request your help, also want to understand how do we connect through pymongo using certificate provided by atlas", "username": "sham_khan" }, { "code": "", "text": "You might be missing a package. Last python project I was using dnspython==1.15.0 with Anaconda Since pymongo was 3.7.0, you might need to use a more recent dnspython.", "username": "steevej" } ]
Invalid URI scheme: mongodb+srv
2020-09-24T19:38:05.570Z
Invalid URI scheme: mongodb+srv
34,674
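As the last reply hints, the mongodb+srv scheme only works when dnspython is importable by the interpreter; the srv extra installs it alongside the driver. The URI below is a placeholder.

```python
# Install the driver together with dnspython:
#   python -m pip install "pymongo[srv]"
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://user:[email protected]/test?retryWrites=true&w=majority"  # placeholder
)
print(client.admin.command("ping"))
```

If the error persists with a current driver, the traceback path shown earlier (a Python 2.7 site-packages directory) suggests the import may be resolving an older PyMongo from a different environment.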
https://www.mongodb.com/…2_2_1023x499.png
[ "java", "atlas-device-sync", "android", "kotlin" ]
[ { "code": "build.gradle// Top-level build file where you can add configuration options common to all sub-projects/modules.\nplugins {\n id 'com.android.application' version '7.1.2' apply false\n id 'com.android.library' version '7.1.2' apply false\n id 'org.jetbrains.kotlin.android' version '1.6.10' apply false\n}\n\ntask clean(type: Delete) {\n delete rootProject.buildDir\n}\nbuild-gradleapply plugin:id ...pluginsviewBinding trueviewBinding = true", "text": "How am I currently supposed to set up my new Android Studio 2021.1.1 project to use Realm? The install page is useless. Let me show you what I mean.I am apparently supposed to do this:\n\nimage1578×770 58.9 KB\nBut my project-level build.gradle script, as created by the new project wizard, looks like this:Where am I supposed to put these things?Also, the instructions for the application-level build-gradle file are out of date. For example, instead of apply plugin: I seem to now be supposed to add id ... to the plugins block…? And the syntax now seems to be viewBinding true instead of viewBinding = true.It would be nice if the documentation actually told me correctly what to do… ", "username": "polymath74" }, { "code": "build.gradlebuild.gradleplugins {\n id 'com.android.application'\n id 'kotlin-android'\n id 'kotlin-kapt'\n id 'realm-android'\n}\nbuild.gradlebuildscript {\n\n repositories {\n google()\n mavenCentral()\n }\n dependencies {\n classpath \"io.realm:realm-gradle-plugin:10.10.1\"\n }\n}\n\nplugins {\n id 'com.android.application' version '7.1.2' apply false\n id 'com.android.library' version '7.1.2' apply false\n id 'org.jetbrains.kotlin.android' version '1.6.10' apply false\n}\n", "text": "G’Day @polymath74,Thank you for raising your concerns, they are absolutely correct.The Android Studio has now changed the format for adding plugins.I have tested this on Android Studio Arctic Fox 2020.3.1 Patch 4.In the Android Bumblebee version, you can add dependencies as below in the project-level build.gradle fileI hope the provided information helps. Please let us know if you have any further questions.I look forward to your response.Cheers, ", "username": "henna.s" }, { "code": "", "text": "Thanks for the reply, and sorry for the delay on my side - I’m finally getting time to come back to this.In the meantime, the Kotlin SDK beta is supposedly ready for use, so I thought I would try that instead - but also ran into problems there.I can add the packages to the gradle files as per the install instructions, and this actually worked! 
But when I paste in my object models from the Realm UI, I get errors:\nimage482×547 36.4 KB\n", "username": "polymath74" }, { "code": "build.gradleclasspathbuildscript// Top-level build file where you can add configuration options common to all sub-projects/modules.\n\nbuildscript {\n dependencies {\n classpath \"io.realm:realm-gradle-plugin:10.10.1\"\n }\n}\n\nplugins {\n id 'com.android.application' version '7.1.3' apply false\n id 'com.android.library' version '7.1.3' apply false\n id 'org.jetbrains.kotlin.android' version '1.6.21' apply false\n}\n\ntask clean(type: Delete) {\n delete rootProject.buildDir\n}\nbuild.gradlekotlin-kaptrealm-androidplugins {\n id 'com.android.application'\n id 'org.jetbrains.kotlin.android'\n id 'kotlin-kapt'\n id 'realm-android'\n}\n\nandroid { ...\n", "text": "Ok, I have finally worked out how to get the (older) Java SDK working with Kotlin on Android Studio Bumblebee 2021.1.1 Patch 3, so I’m going to post it here for anyone else playing along at home (and for my own future reference ).In the project-level build.gradle file, I put the gradle plugin classpath line at the top, but I had to embed it in a buildscript section:And in the app-level build.gradle file, I added the kotlin-kapt and realm-android plugins, using the new syntax:", "username": "polymath74" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help, I can't even install the Java SDK into my new Android Studio project
2022-03-18T04:46:57.082Z
Help, I can&rsquo;t even install the Java SDK into my new Android Studio project
4,462
null
[ "aggregation" ]
[ { "code": "generalopd:{\n\n type:String,\n\n required:true\n\n},\n\nopdmedical:{\n\n type:String,\n\n required:true\n\n},\n\nsurgicalopd:{\n\n type:String,\n\n required:true\n\n},\n\ngynaeobsopd:{\n\n type:String,\n\n required:true\n\n},\n\npaediatricsopd:{\n\n type:String,\n\n required:true\n\n},\n\nnephrologyopd:{\n\n type:String,\n\n required:true\n\n},\n\ndental:{\n\n type:String,\n\n required:true\n\n},\n\ntotalopd:{\n\n type:String,\n\n required:true\n\n},\n\n\n\ndate:{\n\n type:Date,\n\n required:true\n\n},\n\nname:{\n\n type:String,\n\n require:true\n\n},\n\nuserId:{\n\n type:Schema.Types.ObjectId,\n\n ref:'user'\n\n}\n", "text": "const opdSchema = Schema({},{timestamps:true});", "username": "Masaud_noor" }, { "code": "", "text": "I want to show totalopd of each month?\nFor Example:\njun totalopd = 7,\nfeb totalopd = 17,\nmarch totalopd = 33\nmay , june …", "username": "Masaud_noor" }, { "code": "totalopd", "text": "Hi @Masaud_noor\nWelcome to the community Forum!!Could you please help me to understand what totalopd field saves in the collection.\nAlso, please share a sample document data to understand better.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "\ndashboard1092×641 46.6 KB\n", "username": "Masaud_noor" }, { "code": "", "text": "Ui\n\nreact745×450 15 KB\n", "username": "Masaud_noor" }, { "code": "totalopd", "text": "Hi @Masaud_noorwhat totalopd field saves in the collection.Could you please help me understand this with a sample document?Thanks\nAasawari", "username": "Aasawari" } ]
How to fetch records month-wise? For example January totalopd, February totalopd. Schema is given in the detail section
2022-04-26T02:19:12.265Z
How to fetch records month-wise? For example January totalopd, February totalopd. Schema is given in the detail section
1,981
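One way to answer the question above, sketched as a PyMongo aggregation under the schema shown (where totalopd is stored as a String, hence $toInt); the database and collection names are assumptions.

```python
from pymongo import MongoClient

client = MongoClient()              # placeholder connection
coll = client["hospital"]["opds"]   # hypothetical namespace

pipeline = [
    # Bucket documents by the calendar month of "date" and add up "totalopd";
    # $toInt is needed because the schema stores totalopd as a String.
    {"$group": {
        "_id": {"year": {"$year": "$date"}, "month": {"$month": "$date"}},
        "totalopd": {"$sum": {"$toInt": "$totalopd"}},
    }},
    {"$sort": {"_id.year": 1, "_id.month": 1}},
]

for row in coll.aggregate(pipeline):
    print(row["_id"], row["totalopd"])
```

The same pipeline can be passed unchanged to Mongoose's Model.aggregate() on the Node side.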
null
[ "node-js", "connecting", "atlas-cluster" ]
[ { "code": "MongoServerSelectionError: connection <monitor> closed\n at Timeout._onTimeout (/Users/_/Desktop/Clone/node_modules/mongodb/lib/sdam/topology.js:318:38)\n at listOnTimeout (node:internal/timers:557:17)\n at processTimers (node:internal/timers:500:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'cluster0-shard-00-00.d1mrz.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-01.d1mrz.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-02.d1mrz.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-lkpac7-shard-0',\n logicalSessionTimeoutMinutes: undefined\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\nservers: Map(3) {...}", "text": "Hi,\nWhen I try to submit a form in a localhost:5000 website, I am getting aerror. I’m unsure why this is happening. I don’t know if it’s because for some reason it’s listing 3 in servers: Map(3) {...}, but I would really appreciate any advice on how to fix this.", "username": "Geenzie" }, { "code": "", "text": "Have you whitelisted your IP?type: ‘ReplicaSetNoPrimary’\nReading time: 3 min read\n", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you! That worked. I had seen something that mentioned whitelisting an IP for a similar issue, but I was not sure where in mongodb that was, so that article was perfect!", "username": "Geenzie" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoServerSelectionError: connection closed
2022-04-25T23:47:02.401Z
MongoServerSelectionError: connection closed
30,926
null
[ "aggregation", "indexes" ]
[ { "code": "", "text": "For example, if I write an aggregation to query some collection, and then average a specific field, it would technically only return 1 document, but could look at N number of objects. Would that examined:returned ratio be N all the time then?", "username": "Kevin_Rathgeber" }, { "code": "", "text": "I have the similar case, where I have and aggregation and use group to get the values of the month of a date range. This is more unpredictable, but it’s very similar to your situation.\nWe need something more specific to evaluate the performance of aggregations, the current one isn’t very accurate for this case.", "username": "Paulo_David" }, { "code": "numberdb> db.numbercoll.find()\n[\n { _id: ObjectId(\"626b4cbb5d078c11bef0fc23\"), a: 1 },\n { _id: ObjectId(\"626b4cbd5d078c11bef0fc24\"), a: 2 },\n { _id: ObjectId(\"626b4cbe5d078c11bef0fc25\"), a: 3 },\n { _id: ObjectId(\"626b4cc15d078c11bef0fc26\"), a: 4 },\n { _id: ObjectId(\"626b4cc25d078c11bef0fc27\"), a: 5 }\n]\nanumberdb> db.numbercoll.aggregate({$group:{_id:null,aAverage:{\"$avg\":\"$a\"}}})\n[ { _id: null, aAverage: 3 } ]\ndb.collection.explain(\"executionStats\").aggregate(...) executionStats: {\n executionSuccess: true,\n nReturned: 5,\n executionTimeMillis: 0,\n totalKeysExamined: 0,\n totalDocsExamined: 5,\n executionStages: {\n stage: 'PROJECTION_SIMPLE',\n nReturned: 5,\n executionTimeMillisEstimate: 0,\n works: 7,\n advanced: 5,\n needTime: 1,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n transformBy: { a: 1, _id: 0 },\n inputStage: {\n stage: 'COLLSCAN',\n nReturned: 5,\n executionTimeMillisEstimate: 0,\n works: 7,\n advanced: 5,\n needTime: 1,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n direction: 'forward',\n docsExamined: 5\n }\n }\n }\na3nReturnedtotalDocsExaminedtotalDocsExaminednReturned", "text": "Hi @Kevin_Rathgeber and @Paulo_David,Would that examined:returned ratio be N all the time then?5 sample documents in my test environment:Performing an aggregation to get the average value of a for all documents:Running a db.collection.explain(\"executionStats\").aggregate(...) on the above aggregation. The execution stats output is as follows:As you can see from the above data, only a singular document is returned in the shell output when running the aggregation to display the average a value as 3. However, based off the execution stats output, we can see that multiple documents were needed to be examined and processed to determine this average. More specifically:\nnReturned: 5 (Number of documents that match the query condition)\ntotalDocsExamined: 5 (Number of documents examined during query execution)Would that examined:returned ratio be N all the time then?I believe it may depend on what you mean by “examined” and “returned” here. As you can see from the above, totalDocsExamined is 5 and nReturned is 5 (ratio of 5:5) but the output of the aggregation is a single document. If you are referring to the totalDocsExamined to the single output document of the aggregation, then it would be N for this aggregation example.We need something more specific to evaluate the performance of aggregations, the current one isn’t very accurate for this case.Does the above example provide any help or insight for your use case? If not, it may be best to create a new topic with more information such as:Depending on your pipeline, you may be able to create indexes for a covered query to avoid document fetches.Regards,\nJason", "username": "Jason_Tran" } ]
Do aggregations impact examined:returned ratio? Should they?
2022-01-20T16:39:35.171Z
Do aggregations impact examined:returned ratio? Should they?
2,941
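To reproduce the execution stats shown above from a driver rather than mongosh, the explain command can wrap the aggregate command; a sketch with a placeholder connection, a hypothetical database, and the thread's numbercoll collection.

```python
import pprint
from pymongo import MongoClient

client = MongoClient()   # placeholder connection
db = client["test"]      # hypothetical database

pipeline = [{"$group": {"_id": None, "aAverage": {"$avg": "$a"}}}]

# Wrap the aggregate command in an explain with executionStats verbosity;
# the reply reports counters such as nReturned and totalDocsExamined.
stats = db.command(
    "explain",
    {"aggregate": "numbercoll", "pipeline": pipeline, "cursor": {}},
    verbosity="executionStats",
)
pprint.pprint(stats)
```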
null
[ "aggregation", "queries" ]
[ { "code": "\"schedule\" : {\n \"monday\" : [\n {\n \"_id\" : ObjectId(\"62667b8f77b24028c80d1d3d\"), \n \"startTime\" : \"12:00\", \n \"endTime\" : \"2:00\", \n \"price\" : 100.01\n }, \n {\n \"_id\" : ObjectId(\"62667b8f77b24028c80d1d3e\"), \n \"startTime\" : \"19:00\", \n \"endTime\" : \"20:00\", \n \"price\" : 101.01\n }\n ], \n \"tuesday\" : [\n {\n \"_id\" : ObjectId(\"62667b8f77b24028c80d1d3f\"), \n \"startTime\" : \"23:00\", \n \"endTime\" : \"23:59\"\n }\n ], \n}\n\"schedule\" : {\n \"monday\" : [\n {\n \"_id\" : ObjectId(\"62667b8f77b24028c80d1d3e\"), \n \"startTime\" : \"19:00\", \n \"endTime\" : \"20:00\", \n \"price\" : 101.01\n }\n ], }\n", "text": "Hi, I have this time slot array.I want to filter the array based on the given time slot.For example, if I provide Monday with the time 19:00. So in that case I want a result whose start time equal to or greater than 19:00So as per the input, I will get a resultSo the first record of the Monday array will not be shown. because it’s time is 12:00 i.e less then 19:00\nSo how can filter it using aggregation? can anyone please guide me?Thanks.", "username": "maitry_Thakkar" }, { "code": "specified_day = \"monday\"\n\nproject_stage = { \"$project\" : { today : \"$schedule.\" + specified_day } }\n\ncollection.aggregate( [ project_stage ] )\n$filter : {\n \"input\" : \"schedule.\" + specified_day ,\n \"as\" : \"filtered\" ,\n \"cond\" : [ ]\n}\n", "text": "Read Formatting code and log snippets in posts and republish your documents. We cannot use them, by cut-n-paste into our system because of the way they are published the quotes are wrong.Also share what you have tried so far and how it fails to provide the correct results. This will help us avoiding to experiment in a wrong direction.Without giving too much thought, you could use a $project stage to get the specified day.Untested snippetBut the above will give all schedule.monday array. Using $filter allows you to only keep the appropriate elements.To filter an array, you need https://www.mongodb.com/docs/manual/reference/operator/aggregation/filter/.", "username": "steevej" } ]
Filter time slot array
2022-04-28T11:08:24.376Z
Filter time slot array
4,553
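Following the $filter direction suggested above, a hedged PyMongo sketch; the namespace is a placeholder and it assumes zero-padded "HH:MM" strings (the sample's "2:00" would need padding for string comparison to behave).

```python
from pymongo import MongoClient

client = MongoClient()                   # placeholder connection
coll = client["booking"]["schedules"]    # hypothetical namespace

day, start = "monday", "19:00"

pipeline = [
    {"$project": {
        # Keep only the requested day's slots whose startTime is >= the cut-off.
        "schedule": {day: {"$filter": {
            "input": "$schedule." + day,
            "as": "slot",
            "cond": {"$gte": ["$$slot.startTime", start]},
        }}},
    }},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```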
null
[]
[ { "code": "", "text": "While studying MongoDB, I got to know Flush, Journal, and Oplog.Flush is once the memory stores the data, it stores it on disk after a certain period of time (default 60 seconds).Journal is before saving data to disk, save it to a Journal record and save it to disk.Oplog is after storing the data completely, store the details of the work in the form of a log in the oplog.Then, when the data write operation comes into the primary node,\ndoes it proceed in the order of flush → journal → oplog?Can you tell me in detail what is the order of the these processes?", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi @Kim_Hakseon,This sounds a lot like the good old MMapV1 storage engine. With WiredTiger, things are a bit different now. It’s well explained in this doc : https://www.mongodb.com/docs/manual/core/wiredtiger/. At least that’s as far as my knowledge goes. Then it’s really low level operations that I don’t know about.Note that snapshots & checkpoints is also one of the fundamental brick that allow MongoDB to perform multi-document ACID transactions.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Can you tell me in detail what is the order of the these processes?What happens when you write to a document is we write to the document and we write a special format of this write to the oplog in memory “simultaneously”. Then, only the oplog is flushed to the journal. This is because the journal is used to replay operations after a crash onto the most recent saved “checkpoint” (what you are calling flush). Only the oplog format is necessary to recreate any writes that happened after the last checkpoint. So the order is really the opposite of what you have - “oplog” → “journal” and then relatively infrequently by comparison “flush” aka checkpoint.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Thank you.\nI will study hard about the page.", "username": "Kim_Hakseon" }, { "code": "", "text": "Oh, my God!\nIf I may use your words to recapitulate,Right?And the Secondary is replicating the Primary’s Oplog on the memory?\n(In the above process, number 1)", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Q. MongoDB's Data Storage Method
2022-04-26T05:50:41.248Z
Q. MongoDB&rsquo;s Data Storage Method
2,181
null
[ "mongodb-shell" ]
[ { "code": "db.createUser(\n { \n user: 'username',\n pwd: passwordPrompt(), \n roles: [ \n { role: \"readWrite\", db: \"db1\" }, \n { role: \"readWrite\", db: \"db2\" } \n ] \n})\nmongosh \"mongodb://username@localhost\"\nAuthentication failed.mongosh \"mongodb://username@localhost/db1\"\n", "text": "I am trying to secure my mongodb installation. I created a user as follows:I can connect by doing:However the following fails with the message Authentication failed.:", "username": "Mike_Robinson" }, { "code": "use db1db1", "text": "Ok, I figured it out. I needed to do use db1 before adding the user to db1.", "username": "Mike_Robinson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can connect only when I don't specify db name
2022-04-28T23:38:11.852Z
Can connect only when I don&rsquo;t specify db name
1,334
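An alternative to recreating the user, sketched in PyMongo: keep the user where it was originally defined and tell the driver which database to authenticate against. Without authSource, a URI ending in /db1 makes the driver authenticate against db1. The credentials and the assumption that the user lives in admin are placeholders.

```python
from pymongo import MongoClient

# authSource names the database that holds the user definition,
# independently of the default database in the URI path.
client = MongoClient("mongodb://username:secret@localhost/db1?authSource=admin")  # placeholder
print(client["db1"].list_collection_names())
```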
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Can we create mongo Trigger on local db without atlas?", "username": "Unique_Karanjit" }, { "code": "", "text": "Yes… MongoDB Triggers (In Atlas/Realm) use Change Streams - a capability built into MongoDB. You can read more about Change Streams here: https://www.mongodb.com/docs/manual/changeStreams/", "username": "Michael_Lynn" }, { "code": "", "text": "Great ! works fine. Thank you Michael !", "username": "Unique_Karanjit" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we create mongo Trigger on local db without atlas?
2022-04-28T17:31:41.099Z
Can we create mongo Trigger on local db without atlas?
3,226
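A minimal PyMongo sketch of a trigger-like change stream on a local deployment; it assumes a single-node replica set (started with --replSet and initiated) and a hypothetical namespace.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder local replica set
coll = client["test"]["events"]                                     # hypothetical namespace

# Watch only inserts, roughly what an Atlas "insert" trigger would receive.
with coll.watch([{"$match": {"operationType": "insert"}}]) as stream:
    for change in stream:
        print(change["fullDocument"])
```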
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Curriculum Service Engineer", "text": "We will be live on MongoDB Youtube and MongoDB TwitchLead Developer Advocate\nSonali512×512 346 KB\nSenior Curriculum Service EngineerEvent Type: Online\nLink(s):\nLocation\nVideo Conferencing URL", "username": "Shane_McAllister" }, { "code": "", "text": "Thanks for eveyone who joined - here’s the recordingPlease rewatch & share!", "username": "Shane_McAllister" }, { "code": "", "text": "Hi @Shane_McAllister I would like to know what format tomorrow’s hack day is going to take for those teams that want to show a little demo of their progress", "username": "Fiewor_John" }, { "code": "", "text": "For the demo, the format is very fluid. You can join the livestream after Nic Raboyu has concluded his Demo.I will send you an invite in the morning to our streaming platform - stream yard and you can join that once the livestream has started.Delighted to hear you are still up for Demoing - many thanks.", "username": "Shane_McAllister" }, { "code": "", "text": "Alright! Thank you Shane", "username": "Fiewor_John" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon Orientation Livestream 3 - APAC/EMEA
2022-04-25T16:42:46.101Z
Hackathon Orientation Livestream 3 - APAC/EMEA
2,903
null
[ "python" ]
[ { "code": "", "text": "We are pleased to announce the 3.0 release of Motor - a coroutine-based API for non-blocking access to MongoDB in Python. Motor 3.0 brings a number of improvements as well as some backward breaking\nchanges. For example, all APIs deprecated in PyMongo 3.X have been removed.\nBe sure to read the changelog and the Motor 3 Migration Guide\nbefore upgrading from Motor 2.x.See the Motor 3.0 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!", "username": "Steve_Silvester" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Motor 3.0.0 Released
2022-04-28T18:35:24.388Z
MongoDB Motor 3.0.0 Released
2,794
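A tiny usage sketch of Motor's asyncio API for anyone picking up 3.0; the URI and namespace below are placeholders.

```python
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def main():
    client = AsyncIOMotorClient("mongodb://localhost:27017")  # placeholder URI
    # Motor mirrors PyMongo's API, but I/O methods are awaitable coroutines.
    await client["test"]["greetings"].insert_one({"msg": "hello"})  # hypothetical namespace
    print(await client["test"]["greetings"].find_one())

asyncio.run(main())
```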
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "", "text": "Hey,I just upgraded to M2. On my localhost I can connect to the database.On my vServer, the connection does not work anymore. It worked before without any flaws.The error message varies between:What I already did:I use the following string:mongodb+srv://user:[email protected]/correctName?retryWrites=true&w=majority", "username": "Malte_Hoch" }, { "code": "mongosh \"mongodb+srv://free.ne23r.mongodb.net/myFirstDatabase\" --apiVersion 1 --username max\n", "text": "Hi @Malte_Hoch and welcome in the MongoDB Community !From the same machine you are trying to connect to your M2 from, can you connect using Mongosh?Do you happen to have a firewall or some antivirus in place maybe? Maybe a company policy that blocks the port 27017?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hey @MaBeuLux88,First of all, thank you for your prompt reply. I could solve the issue by removing “?retryWrites=true&w=majority” from the string. Quite weird, since this worked perfectly fine before the upgrade to M2 and I would have never expected this to be the final solution after investigating the docs, this forum and Stackoverflow! Thank you for your hint anyway.With best regardsMalte", "username": "Malte_Hoch" }, { "code": "", "text": "I just used the “CONNECT” button in Atlas and follow the path for the application I want to use. Then I just copy & paste the command line or the connection string that is provided.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I upgraded to M2: Mongoose Server Selection Error: Could not connect to any servers in Atlas
2022-04-27T16:20:34.390Z
I upgraded to M2: Mongoose Server Selection Error: Could not connect to any servers in Atlas
4,058