Columns: image_url (string, 113-131 chars), tags (sequence), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k)
null
[ "java" ]
[ { "code": "private <T> List<T> constructValuesList(final Object key, final Class<T> clazz, final List<T> defaultValue) {\n List<?> value = get(key, List.class);\n if (value == null) {\n return defaultValue;\n }\n\n for (Object item : value) {\n if (!clazz.isAssignableFrom(item.getClass())) { //NullPointerException if document contain list with null item\n throw new ClassCastException(format(\"List element cannot be cast to %s\", clazz.getName()));\n }\n }\n return (List<T>) value;\n}\nif (item!=null && !clazz.isAssignableFrom(item.getClass()))\n", "text": "When using the mango-java driver: 3.12.1 getting NullPointerException exceptions when reading a collection containing null item from a document,how we can add small fix", "username": "Vi_Os" }, { "code": "null", "text": "Hi @Vi_Os, welcome!getting NullPointerException exceptions when reading a collection containing null item from a document,Could you provide the following information to further clarify the question:Regards,\nWan.", "username": "wan" }, { "code": " Document npe = new Document();\n List<Integer> numbers = Arrays.asList(1, null);\n npe.put(\"numbers\", numbers);\n List<Integer> read = npe.getList(\"numbers\", Integer.class);\nException in thread \"main\" java.lang.NullPointerException\n\tat org.bson.Document.constructValuesList(Document.java:381)\n\tat org.bson.Document.getList(Document.java:349)\n\tat MainApplication.main(MainApplication.java:56)", "text": "Hi @wan\nIn our schema we can`t check value in collection because multiple service can update it\nin our workflow we can change model but i think in mongo-java-driver check for NULL before check type is better solutionYou can reproduce NPE with this code,stack-trace", "username": "Vi_Os" }, { "code": "NullPointerExceptionList<Integer> read = (List<Integer>) npe.get(\"numbers\");\nSystem.out.println(read); // prints [1, null]", "text": "This will not throw NullPointerException:", "username": "Prasad_Saya" }, { "code": "", "text": "Yes, we use Get method but there unchecked cast and no type control.\nIt no good solution because exist method getList, with small bug)", "username": "Vi_Os" } ]
Mongo-java-driver NullPointerException in getList
2020-05-08T16:14:07.132Z
Mongo-java-driver NullPointerException in getList
6,007
null
[ "realm-studio" ]
[ { "code": "", "text": "I was prompted to upgrade to v3.11.0 when launching Realm Studio v3.10.0 today.On doing so, the initial issue I faced was that the Realm I was testing in Xcode could not be opened simultaneously by Studio - the error message displayed was “Realm file is currently open in another process which cannot share access with this process. All processes sharing a single file must be the same architecture.”I closed Xcode & Simulator and opened the Realm file in Studio 3.11.0 only. This notified me that the Realm file was from an earlier version and prompted me to allow update, which I did so.However, the updated Realm file no longer works with the Realm pods installed via CocoaPods. The version of Realm & RealmSwift I have is 4.4.1. I ran pod deintegrate, install and update and still get the same version.Pod outdated gives the following message\nThe following pod updates are availableThe only resolution currently is to uninstall 3.11.0 and reinstall 3.10.0.", "username": "Richard_English" }, { "code": "# Changelog\n\n## vNext\n\n### Enhancements\n\n- None\n\n### Fixed\n\n- The produced checksum in version 13.0.0 was incorrect. ([#1554](https://github.com/realm/realm-studio/issues/1554), since v13.0.0)\n\n### Internals\n\n- Upgraded Realm JS to v11.3.1.\n- Skipping version 13.0.1.\n\n\n## Release 13.0.2 (2022-12-08)\n\n", "text": "@Richard_English Correct - you should downgrade if you are intending to use RealmSwift 4.4.1The new version of Studio is designed to work with RealmSwift 5.0 which we are hoping to GA on the 18th.See the Studio Release notes here:", "username": "Ian_Ward" }, { "code": "", "text": "Thanks, although please note 10.0.0-alpha.4 does not work with Realm Studio 3.11.0 either. The only configuration which is working for me is Realm Studio 3.10.0 with RealmSwift 4.4.1.Hopefully RealmSwift 5.0 remedies this.", "username": "Richard_English" } ]
Realm Studio 3.11.0 upgrade causes incompatibility issues with Realm Pods
2020-05-15T12:54:43.658Z
Realm Studio 3.11.0 upgrade causes incompatibility issues with Realm Pods
4,655
null
[]
[ { "code": "COUNT(born_state_a=1 AND female= 1 AND vegan=1)", "text": "Hi all,\nWould the following use-case be appropriate for MongoDB?I’m looking at storing and querying census data -its something relational DBs can’t handle…What the data looks like relationally:What queries we want to do:Any and all comments are really appreciated. What do you think?", "username": "Adam_Scott" }, { "code": "", "text": "Would the following use-case be appropriate for MongoDB?Sure.I would consider the attribute pattern for question/answer part.Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.Capable of returning result < 1 second-ishThat above depends more on the hardware and the setup than anything else.But with 50k answer/question, the 16Mb limit might be reach.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
"Big" Census Data - MongoDB to the rescue?
2020-05-15T11:00:25.940Z
“Big” Census Data - MongoDB to the rescue?
1,611
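The attribute pattern suggested in the thread above is easier to see with a concrete sketch. The following PyMongo snippet is illustrative only: the connection string, database/collection names, and sample answers are assumptions rather than details from the post; only the question codes (born_state_a, female, vegan) come from the original query.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local server
coll = client["census"]["respondents"]             # hypothetical db/collection names

# One document per respondent; answers stored with the attribute pattern:
# each array element holds a question code (k) and the answer (v).
coll.insert_one({
    "respondent_id": 1,
    "attributes": [
        {"k": "born_state_a", "v": 1},
        {"k": "female", "v": 1},
        {"k": "vegan", "v": 1},
    ],
})

# A single multikey index serves queries on any question/answer pair.
coll.create_index([("attributes.k", ASCENDING), ("attributes.v", ASCENDING)])

# Count respondents matching all three conditions (the COUNT(...) from the post).
count = coll.count_documents({
    "attributes": {
        "$all": [
            {"$elemMatch": {"k": "born_state_a", "v": 1}},
            {"$elemMatch": {"k": "female", "v": 1}},
            {"$elemMatch": {"k": "vegan", "v": 1}},
        ]
    }
})
print(count)
```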
null
[]
[ { "code": "", "text": "Hi!We are trying to build a logging service that limits the number of items for a given property of the log entry (per example only store the latest x number of records for this project).Cap collections would be great but that means that we would need to create as many collections as projects, which doesn’t seem like the best of ideas.The other idea was to simply with every write (they are done in batches), to simply insert the new elements, get the count of items for that given project and simply limit the n oldest ones above 100 based on the number of items we just insertedAny other good approach to solve this efficiently?Thanks!", "username": "Javier_Tarazaga_Gome" }, { "code": "", "text": "(1) Using TTL Indexes you can delete documents automatically based upon a date field. This is a way to control the size of the collection by removing older documents (but not control the number of documents within a collection).TTL indexes are special single-field indexes that MongoDB can use to automatically remove documents from a collection after a certain amount of time or at a specific clock time.(2) Another approach is to use Change Streams. With change streams your application can listen to data change events and and perform some action(s). For example, you can listen for change events for a write operation on a collection and check the number of documents in a collection and delete if the number exceeds a limit.Change streams are available for replica sets and sharded clusters.", "username": "Prasad_Saya" }, { "code": "async function capSourceData(projectId: string) {\n const element = (await repository.logs.find({ ‘subscription.project.id’: projectId }).sort({ _id: -1 \n }).skip(MAX_ITEMS - 1).limit(1).toArray()).map(p => new Log(p));\n\n if (element && element[0]) {\n await repository.logs.deleteMany({\n $and: [\n { _id: { $lt: element[0].exposed.id } },\n { 'subscription.project.id': projectId }\n ]\n });\n }\n}\nawait", "text": "Thanks for the quick response!For now, I started doing this and seems to be working perfectly:I simply fire this promise without the await when adding the new batch of elements into the DB.", "username": "Javier_Tarazaga_Gome" }, { "code": "", "text": "I like solution (2) of @Prasad_Saya.", "username": "steevej" } ]
How to limit the number of items in a collection sorted by a given prop
2020-05-14T10:05:35.971Z
How to limit the number of items in a collection sorted by a given prop
2,623
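Option (2) from the thread above (listen to change events and trim the collection) is described only in prose. Below is a hedged PyMongo sketch of that idea. It assumes a replica set (change streams require one) and a hypothetical database name; the `logs` collection, the `subscription.project.id` field, and the cap of 100 documents follow the poster's own snippet.

```python
from pymongo import MongoClient, DESCENDING

MAX_ITEMS = 100  # assumed per-project cap

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # change streams need a replica set
logs = client["mydb"]["logs"]  # hypothetical database name

def trim_project(project_id):
    """Delete everything older than the newest MAX_ITEMS documents for one project."""
    cutoff = list(
        logs.find({"subscription.project.id": project_id}, {"_id": 1})
            .sort("_id", DESCENDING)
            .skip(MAX_ITEMS - 1)
            .limit(1)
    )
    if cutoff:
        logs.delete_many({
            "subscription.project.id": project_id,
            "_id": {"$lt": cutoff[0]["_id"]},
        })

# Listen for inserts and trim the affected project after each one.
with logs.watch([{"$match": {"operationType": "insert"}}]) as stream:
    for change in stream:
        project_id = change["fullDocument"]["subscription"]["project"]["id"]
        trim_project(project_id)
```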
null
[ "security" ]
[ { "code": "", "text": "I have installed mongo v4.2.6 . I have created a users. sometime it gives authentication failed and after some time it resolved itself and now again same error and i am not getting any solution.", "username": "Prem_Kori" }, { "code": "", "text": "I have never observed this behaviour. I have often seen typing errors and lower vs upper case issues. At other time I have seen people running 2 instances at different times and mixing users from one instances to the other.So it is most likely an user error rather than a server error.More details might help to find out if it is really the case.", "username": "steevej" } ]
Error: Authentication failed
2020-05-14T11:21:22.479Z
Error: Authentication failed
1,672
null
[ "compass" ]
[ { "code": "", "text": "Hello,\nI have a mongodb in a docker on a server accessible from the outside (in ssh).\nHow I should use MongoDb Compass to administer my database", "username": "mathias_drapier" }, { "code": "", "text": "Hi @mathias_drapier, have you set the SSH Tunnel option based on how you SSH into your Docker container?\nimage730×804 84.5 KB\nNOTE, there have been reports that SSH Tunneling for Compass 1.21.0 is not working, but appears to be fixed in 1.21.2. Make sure you’re on the latest version of Compass to avoid issues.", "username": "Doug_Duncan" } ]
How to use MongoDB Compass remotely
2020-05-15T08:54:26.163Z
How to use MongoDB Compass remotely
4,388
https://www.mongodb.com/…b97725d0495a.png
[ "python" ]
[ { "code": " \"unc_path\": \"\\\\server\\share\\path\\file\" db.collection.find({}, {\"unc_path\": 1})", "text": "Hey folks,I have been stumped by what I presume is a pretty simple issue. One of the fields in my documents is a MS windows-style UNC path ie: \"unc_path\": \"\\\\server\\share\\path\\file\"We are using Atlas and if I take a look within compass or the Atlas web UI I see what needs to be there. But when I perform a find for this info, db.collection.find({}, {\"unc_path\": 1}) a python escaped string is returned for each result, ie:\nIs there a way to request that raw text be returned for these values or any other reasonable work around?Thanks!", "username": "James_Randolph1" }, { "code": "reprprint(doc['unc_path']# Note the 'r' prefix in the following string, which allows you to avoid escaping the backslashes:\nIn [1]: my_string = r\"\\not\\a\\path\"\n\nIn [2]: print(my_string)\n\\not\\a\\path\n\nIn [3]: my_string\nOut[3]: '\\\\not\\\\a\\\\path'\n\nIn [4]: a_dict = { 'key': my_string}\n\nIn [5]: a_dict\nOut[5]: {'key': '\\\\not\\\\a\\\\path'}\n\nIn [6]: print(a_dict)\n{'key': '\\\\not\\\\a\\\\path'}\n", "text": "Hi James,What you’re seeing is the way Python prints out string values in dictionaries - this isn’t something PyMongo is doing.Python prints dict values using repr, which shows a Python string with backslashes escaped. If you take your document and run print(doc['unc_path'] you’ll see it without the escaped backslashes.", "username": "Mark_Smith" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Pymongo and Python string escape characters
2020-05-14T20:39:09.555Z
Pymongo and Python string escape characters
3,625
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.7-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.6. The next stable release 4.2.7 will be a recommended upgrade for all 4.2 users.Fixed in this release:4.2 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Luke_Chen" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.7-rc0 is released
2020-05-15T05:54:06.141Z
MongoDB 4.2.7-rc0 is released
1,678
null
[]
[ { "code": "", "text": "Hello guys,There is a kind of professional certification dba beyond associate or even a master program in the future plans from MongoDB team ?Regards,\nAlexandre Araujo", "username": "Alexandre_Araujo" }, { "code": "", "text": "I have been thinking in doing certification, but if you go to linkedin, there is not alot of jobs looking for mongodb people certified.So in short, maybe that is why they have the same certifications (dev and dba) from the beginning.", "username": "Mario_Pereira" }, { "code": "", "text": "That is strange in linkedin with the large adoption of MongoDB. The other curious situation i have been thinking from a dba perspective is due the fact MongoDB has a massive investments in Atlas instead dba stuffs. I want be wrong but i don’t think the future is so promising for a MongoDB DBA.", "username": "Alexandre_Araujo" }, { "code": "", "text": "on DBA perspective, yes you are correct. But still a lot of companies have hybrid scenarios (dev on prem and Atlas or the other way around, in my case i have both)But if you search C100DEV or C100DBA in linkedin jobs, you get 0 jobs or if you search “MongoDB Certified” you will have 2 job posts (1 DBA e 1 DB Architect).More strange, if you go to mongodb careers page if you look they dont even talk about it in their job offers. So, if even mongodb itself does not put that in job offer, why other companies would do it? For the academy/training/hr teams looking at this post, sorry for the harsh feedbackP.s move back to slack. The community was way more interactive and there was job and certification channel.", "username": "Mario_Pereira" }, { "code": "", "text": "there was job and certification channel.@Jamie would it make sense to have a Jobs and/or University/Certification categories here? I believe that the University courses might still be using Slack or another platform.@Mario_Pereira as for moving back to Slack, well I’ll stay out of that conversation. There are pros and cons to all platforms. ", "username": "Doug_Duncan" }, { "code": "", "text": "There is a Careers category.On the main page navigate to About the Community → Careers", "username": "Prasad_Saya" }, { "code": "", "text": "But if you search C100DEV or C100DBA in linkedin jobs, you get 0 jobs or if you search “MongoDB Certified” you will have 2 job posts (1 DBA e 1 DB Architect).I’m pretty happy with it. I know my employer was supportive and appreciative of it. And I’m pretty sure it would also give an edge over another candidate without one.There is a kind of professional certification dba beyond associate or even a master program in the future plans from MongoDB team ?Myself, I wish that were the case.", "username": "chris" }, { "code": "", "text": "You are correct @Prasad_Saya. Being that no one has posted to it I have missed it. Thanks for calling that out.", "username": "Doug_Duncan" }, { "code": "", "text": "There is a kind of professional certification dba beyond associate or even a master program in the future plans from MongoDB team ?Hi @Alexandre_Araujo,Thank you for the feedback. More advanced certifications are on our roadmap, but I do not have a specific timeline to share.would it make sense to have a Jobs and/or University/Certification categories here? I believe that the University courses might still be using Slack or another platform.MongoDB University currently has a separate Discourse instance (https://discourse.university.mongodb.com) for discussion of courses and related topics like certification. 
This University forums are only available once you have enrolled in a course, and have course-specific support discussion.The longer term plan is to consolidate discussion in the MongoDB Community, but that project has some additional coordination, planning, and technical requirements.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Certification
2020-05-13T15:47:38.298Z
MongoDB Certification
2,913
null
[ "monitoring" ]
[ { "code": "serverStatus.opcountersRepl", "text": "The web page: https://docs.mongodb.com/manual/reference/command/serverStatus/\nsays serverStatus.opcountersRepl:reports on database replication operationsIs that on the primary and the replicate? What database replication operations are there on the primary? Are opcountersRepl a superset of opcounters?But then the same web page says:These values will differ from the [opcounters values because of how MongoDB serializes operations during replication.So is a logical operation (eg. delete a set of rows) counted once in in opcounters, but then counted once for each row if the operation is serialized into multiple single row deletes?", "username": "Benjamin_Slade" }, { "code": "opcountersopcountersReplmongod", "text": "Hi @Benjamin_Slade,The opcounters values, for CRUD operations, will be the number of operations that happen when the node is the PRIMARY node for the replica set. The opcountersRepl values, for CRUD operations, will be the number of replicated commands that were run when the node was acting as a SECONDARY member. You might see values in both of these sections on a single node due to the fact that a node can be promoted from a SECONDARY to PRIMARY should there be a reason for an election.Note that a single command on the PRIMARY could trigger multiple commands on the SECONDARY node. This is due to the fact that the replicated commands in the oplog will have a single command for each document that was affected on the PRIMARY.These counters are reset when the mongod process is restarted, so they only show numbers since the last service restart.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks for the clear, concise answer. My only gripe is, I wish this was in the documentation.", "username": "Benjamin_Slade" }, { "code": "", "text": "I’m glad that helped you out @Benjamin_Slade. One thing to note is that the documentation is a github project that accepts pull requests, so if you find errors or things that don’t make sense you can help make the docs better. Each documentation page has an edit icon ( ) in the upper right hand corner that will take you to the page for editing and allow you to submit that change for review. You do have to know restructured text formatting, but that’s not that hard to figure out.", "username": "Doug_Duncan" }, { "code": "", "text": "See my github pull request at: Updated info for opcounters and opcountersRepl by bslade · Pull Request #4107 · mongodb/docs · GitHub", "username": "Benjamin_Slade" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
How are opcountersRepl different from opcounters?
2020-05-13T22:05:46.998Z
How are opcountersRepl different from opcounters?
3,900
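To make the distinction discussed above concrete, here is a small script that reads both counter sections from serverStatus. It assumes only a reachable server and a user allowed to run the command.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
status = client.admin.command("serverStatus")

# Operations this node executed while acting as PRIMARY (or standalone).
print("opcounters:    ", status["opcounters"])

# Operations this node applied from the oplog while acting as SECONDARY.
# A single multi-document write on the PRIMARY can show up here as many
# single-document operations, which is why the two sections rarely match.
print("opcountersRepl:", status["opcountersRepl"])
```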
null
[]
[ { "code": "", "text": "Hi guysNeed help – Can anyone please help me out - what are the possible options for migrating partial and complete data from a Mongodb DB server to Casendra.Kind regards\nG", "username": "Gaurav_Gupta" }, { "code": "", "text": "Hi @Gaurav_Gupta, while I don’t know of any tools that will help you do this as the two platforms are very different, the Migrating Resources portion of Datastax’s Migrating MongoDB to Cassandra post might help lead you to some ideas.My question is why are you considering this migration? What does Cassandra provide your app that MongoDB doesn’t?", "username": "Doug_Duncan" }, { "code": "", "text": "Hi DuncanThanks - we not intent migrating complete mongo server to cansendra. We having some issue in application code due to DBref, hence only those collections which uses Dbref we planning to migrate to cansendra.Kind regards\nGaurav", "username": "Gaurav_Gupta" } ]
MongoDB migration to Cassandra
2020-05-13T20:35:20.704Z
MongoDB migration to Cassandra
1,919
null
[ "aggregation" ]
[ { "code": "/* First collection */\n{\n \"product\": \"test\",\n \"labels\" : [\n {\"code\": \"label1\", \"value\": 42},\n {\"code\": \"label2\", \"value\": 50}\n ]\n}\n\n/* Second collection */\n{\n \"code\": \"label3\",\n \"calculation\" : [\n {\"label\" : \"label1\", \"operation\":\"+\"},\n {\"label\" : \"label2\", \"operation\":\"-\"}\n ]\n}\n{\n \"product\" : \"test\", \n \"labels\" : [\n {\"code\": \"label1\", \"value\": 42},\n {\"code\": \"label2\", \"value\": 50}\n ], \n \"vlabels\" : [\n {\"code\": \"label3\", \"value\": -8}\n ]\n}\n", "text": "Is it possibile to create a pipeline for something like this? I wanted the value in the aggregated collection to be calculated according to the operation in the second collection.In my aggregated collection i want a new field that would be label1 - label2.", "username": "Ciprian_Stanciu" }, { "code": "FirstSecondproduct", "text": "Hello Ciprian_Stanciu How are the two collections, First and Second, linked - by the field product?", "username": "Prasad_Saya" }, { "code": "", "text": "hello @Prasad_Saya ,\nThey are not linked actually. The second collection only holds the calculation method for some virtual labels with values from the first collection.\nTo be clear every product in the first collection has some physical labels with values. And i need to add those virtual ones calculated according to the operation (either ‘+’ or ‘-’).", "username": "Ciprian_Stanciu" }, { "code": "db.first.aggregate( [\n { \n $unwind: \"$labels\" \n },\n { \n $lookup: {\n from: \"second\",\n localField: \"labels.code\",\n foreignField: \"calculation.label\",\n as: \"matches\"\n }\n },\n { \n $unwind: \"$matches\" \n },\n { \n $addFields: { \n op: { \n $arrayElemAt: [ \n {\n $filter: {\n input: \"$matches.calculation\", \n as: \"calc\",\n cond: { $eq: [ \"$$calc.label\", \"$labels.code\" ] }\n } }, \n 0 \n ] \n } \n } \n },\n { \n $addFields: {\n op_value: {\n $switch: {\n branches: [\n { case: { $eq: [ \"$op.operation\", \"+\" ] }, then: \"$labels.value\" },\n { case: { $eq: [ \"$op.operation\", \"-\" ] }, then: { $multiply: [ \"$labels.value\", -1 ] } }\n ]\n }\n }\n } \n },\n { \n $group: { \n _id: { _id: \"$_id\", product: \"$product\" }, \n labels: { $push: \"$labels\" },\n code: { $first: \"$matches.code\" }, \n value: { $sum: \"$op_value\" } \n } \n },\n { \n $project: { \n _id: \"$_id._id\", \n product: \"$_id.product\", \n labels: 1, \n \"vlabels.code\": \"$code\", \n \"vlabels.value\": \"$value\" \n } \n }\n] ).pretty()\n{\n \"_id\" : ObjectId(\"5ebd33aba0b845b22f4c8ace\"),\n \"product\" : \"test\",\n \"labels\" : [\n {\n \"code\" : \"label1\",\n \"value\" : 42\n },\n {\n \"code\" : \"label2\",\n \"value\" : 50\n }\n ],\n \"vlabels\" : {\n \"code\" : \"label3\",\n \"value\" : -8\n }\n}", "text": "Here is the aggregation:The output:", "username": "Prasad_Saya" } ]
Conditional sum aggregation pipeline
2020-05-14T10:01:36.493Z
Conditional sum aggregation pipeline
6,124
null
[ "legacy-realm-cloud" ]
[ { "code": "", "text": "I am looking at building management of our Realm Cloud Users into my application so I am looking for the SDK endpoints to;", "username": "Raymond_Brack" }, { "code": "", "text": "u may can check the System Realm /__admin. There you find the Class User.kind regards", "username": "rouuuge" } ]
Programmatically managing Realm Cloud users
2020-04-23T23:32:45.449Z
Programmatically managing Realm Cloud users
2,240
null
[ "connector-for-bi" ]
[ { "code": "", "text": "Hi there\nIm a bi analyst, new to working with MongoDb’s. I’m using Mongo connector for bi (mongodrdl) to connect my DB to Tableau, and wanted to learn how can i aggregate my data in the schema file. I’ve tried using mongotranslate, but i cant find it’s location. i get the error “mongotranslate is not recognized as an internal or external command”.my command was:mongotranslate “select * from groceries.fruits where _id >100;”\\ --schema schema.drdlIs any one here familiar with Mongo connector for bi? please help", "username": "11132" }, { "code": "", "text": "i get the error “mongotranslate is not recognized as an internal or external command”.Welcome to the community @11132Is the program installed on your pc?\nCan you run that command using full path of exe\nCould be path issuePlease check these linksThe mongotranslate reference page.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I actually cant find the exe, serched in the “Mongo connector for BI” path, but only found “mongodrdl” and “mongosqld”\nWhere could it be?", "username": "11132" }, { "code": "", "text": "What is your OS?\nIt should be under bin where you found other executables\nDid you try to search/findMongoDB BI Connector is required for connecting Tableau Desktop to MongoDB database. Tableau uses MongoDB ODBC driver at client side to talk to MongoDB database. So we need to install MongoDB ODBC driver on the client side. A system DSN has to be...", "username": "Ramachandra_Tummala" }, { "code": "", "text": "My OS is Windows 10 x64\nIn bin i have : “mongodrdl” and “mongosqld”, but no sign for “mongotranslate” .\nTried to reinstall the connector,", "username": "11132" } ]
Mongotranslate and mongodrdl
2020-05-13T10:58:03.318Z
Mongotranslate and mongodrdl
2,652
null
[ "replication" ]
[ { "code": "", "text": "Hi,\nWe have a 4.2.6 based replica set running with 3 members.\nUpon trying to add a 4th member (freshly installed) it will never sync completely…\nIt will remain in STARTUP2 state, retrieving all the data from a secondary node ( db.adminCommand( { replSetGetStatus: 1 } ).initialSyncStatus.databases will show all collections are filled at some point ), but will keep restarting this process 10 times until aborting.\nConnection between the nodes is 10 Gbs private, seems to do the job while syncing the data.\nWe have upped (read it could help) oplog size to 128GB, it did seem to help retrieving all the data but not finishing the sync phase.Anyone with any hints we could have missed ?\nThanks a bundle.", "username": "Philippe_Longere" }, { "code": "", "text": "Hi @Philippe_LongereOften the log will show what the problem is when replication is not completing. And I would suggest this as your first step.Usually rs.add() is the least complex way to add a new member, however you may be interested in this other method for syncing a new member.", "username": "chris" }, { "code": "", "text": "Hi @Philippe_Longere and welcome to the forums.In addition to the information that @chris has provided, I would recommend checking your oplog size to make sure that it covers enough time to make the initial sync and replay any new changes that were made during the time the sync occurred to the new member. It seems like the oplog might be rolling before the sync completes.If the oplog is too small then you will not be able to complete the initial sync of data and you will get errors and the new member will stay in a STARTUP2 state as it never gets all the data.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi Chris, thanks for the input here’s a partial (it tried 10 times) of the InitialSyncStatus error according to rs.status() :\n“initialSyncAttempts” : [\n{\n“durationMillis” : 964585,\n“status” : “OplogStartMissing: error fetching oplog during initial sync :: caused by :: Our last optime fetched: { ts: Timestamp(1589378774, 1118), t: 12 }. source’s GTE: { ts: Timestamp(1589378778, 1501), t: 12 }”,\n“syncSource” : “mongo-p2-priv:27017”\n},mongod log shows about the same :\nFatal assertion 40088 OplogStartMissing: error fetching oplog during initial sync :: caused by :: Our last optime fetched: { ts: Timestamp(1589387475, 1698), t: 12 }. source’s GTE: { ts: Timestamp(1589387476, 1907), t: 12 } at src/mongo/db/repl/replication_coordinator_impl.cpp 743I have more than doubled the oplog size on the Secondary sync gets done from since this morning, to no avail…We had a look at the snapshot solution in the documentation, but it seemed far fetched because we cannot stop our application from running …Best,\nPhilippe", "username": "Philippe_Longere" }, { "code": "", "text": "Hello Philippe,\nI want to clarify two things.\na. usually you need to increase the oplog on all the available mongod members as initial sync can happen from any secondary or primary member in the replicaset.\nb. for the snapshot solution, you can take a snapshot from the secondary as well, so you need not stop your application. 
In my experience, the initial sync process hardly works for large clusters with heavy writes.Thanks\nR", "username": "errythroidd" }, { "code": "", "text": "Hi Doug and Rohit, thank you both for your input (and welcome :)),I now get it we have a oplog problem that we’ll try to tackle by increasing the size and reducing as much as possible the writing during the sync.We are also interested in learning a bit more about the snapshot solution, because the mongo documentation is a bit “light” at least for us non experts on that matter.I understand we should shutdown a secondary, then copy its data files over to the new member, that’s the easy part. One question that stands is “What files should be copied over from S1 to S2 ?”. In /var/lib/mongod we have collection-XXX.wt indeed-XXX.wt WiredTiger*, a journal folder and a few others ?Thanks again for your help, very much appreciated.\ncheers,\nPhilippe", "username": "Philippe_Longere" }, { "code": "", "text": "We are also interested in learning a bit more about the snapshot solution, because the mongo documentation is a bit “light” at least for us non experts on that matter.Stopping a secondary and rsyncing the files being the more straight forward way to seed. In my experience this has not been faster than rs.add()Snapshot tends to be a more advanced system topic than a mongodb one. Snapshots and may require reconfiguring/redeploying your server. Linux LVM, ZFS and storage applicances(netapp) provides methods to take a snapshot and then access the snapshot files. This allows for a no downtime copy of the files.Cloud vendor snapshots work equally well.But this seeding from datafiles method too, requires an adequately sized oplog to cover the duration of copy + catch up.One question that stands is “What files should be copied over from S1 to S2 ?”. In /var/lib/mongod we have collection-XXX.wt indeed-XXX.wt WiredTiger*, a journal folder and a few othersThe entire contents of that data directory to the target data directory(clear it first)", "username": "chris" }, { "code": "", "text": "Hi again,\nSo we’ve been trying the sync for the last 3 hours, it looks like there is something wrong we cannot put our finger on …\nHere’s what seem to work all right (in order) :But operationTime goes up very very/too slowly (it is slower than real time, operationTime goes up something like 1 second every 2 seconds) and process gets aborted after a while (surely after some manager detects it’s way behind and will never catch up) …CPU/Memory/drive speed cannot be an issue given the sizing of the host (which is dedicated to this task, and seems rather quiet during the process), so we’re wondering what could make the application of the oplog slower than expected ? Or how to investigate what is happening ?Thanks guys.", "username": "Philippe_Longere" }, { "code": "rs.printReplicationInfo()", "text": "You need to go back to your logs again. It could still be oplog related.Reading your earlier post you said you increased oplog to 128GB. Now you are saying the it is full at ~48GB. You need to ensure the oplog on you new node(all nodes actually) is the same size too.Out of interest what is the output of rs.printReplicationInfo()", "username": "chris" }, { "code": "", "text": "Hey,\nIndeed (we finally got the end of it this morning) it was still oplog related, there was a (mostly) debug information that was filled very often in an obscure collection, but it looks like that somehow invisible data was in fact filling up the oplog. 
My colleague found about it (for anyone that would get the same kind of issue) by reading the output of :use local\ndb.oplog.rs.find().limit(20)which gets some extract of the oplog that lead us to the culprit by actually seeing this data.Thanks everyone for your input, it made us look in the right direction :).\nTake care.\nPhil.", "username": "Philippe_Longere" } ]
Replica set member not syncing
2020-05-13T14:54:47.832Z
Replica set member not syncing
9,078
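The root cause in the thread above was found by looking at the oplog directly. The sketch below does the equivalent of `rs.printReplicationInfo()` and the `db.oplog.rs.find()` peek from Python; it assumes a connection to a replica-set member and read access to the `local` database.

```python
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed replica-set member
oplog = client["local"]["oplog.rs"]

# Oplog window: time between the oldest and newest entries, similar to
# what rs.printReplicationInfo() reports.
first = oplog.find_one(sort=[("$natural", 1)])
last = oplog.find_one(sort=[("$natural", -1)])
window = last["ts"].time - first["ts"].time  # ts is a BSON Timestamp; .time is seconds
print("oplog window in seconds:", window)

# Peek at the most recent operations to spot what is filling the oplog
# (the "obscure collection" in the thread was found this way).
for entry in oplog.find().sort("$natural", DESCENDING).limit(20):
    print(entry["op"], entry["ns"])
```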
null
[]
[ { "code": "", "text": "Hello,\nWhen dealing with very large collections i came across timeout issues when querying and inserting.\nEven though the collections were indexed and optimized to combine small documents together.The rate of insertion for a given collection a day 172,800 documents per day.I solved this by partitioning the DB into months.\nASDF -> ASDF_10_2018 , ASDF_11_2018 … ASDF_05_2020by doing so the collection size under each db was decreased and i did not encounter any query or insertion issues.The things is that there are multiple DB’s like ASDF and this has been running for more then 2 years.\nMongoDb now as an issues with the number of open files it has to hold in order to maintain my partitioned DB’s and collections.Questions :", "username": "11134" }, { "code": "ulimit -a", "text": "Hi,\nIf this is linux server, maybe increasing limits will be enough. By default number of open files for user is 1024. You can check it with command ulimit -a. You can increase it in /etc/security/limits.conf (for user which is running mongo service). For application servers I usually set it for something like 65000.", "username": "Piotr_Tajdus" } ]
Too many open files
2020-05-14T08:09:41.689Z
Too many open files
3,375
null
[ "configuration" ]
[ { "code": "about to fork child process, waiting until server is ready for connections.\nforked process: 29977\nERROR: child process failed, exited with error number 48\nTo see additional information in this output, start without the \"--fork\" option.\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: localhost # I tested this.(localhost, instance ip, my computer ip)\n\nsecurity:\n authorization: enabled\n javascriptEnabled: false\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n", "text": "Hello!I’m just starting MongoDB, so i need a lot of help. Now, I’m setting ‘net.bindIp’ in mongodb.conf, but it doesn’t seem like it.I know that ‘net.bindIp’ determines what ip to access DB like AWS’s inbound setting.So, I did various tests.Below is error content.So, I have a few questions.Why can’t I connect ‘bindIP : localhost’ with MongoDB Comapss when I can connect ‘bindIp : instance IP’ with MongoDB Compass?Why can’t I set my computer IP with net.bindIp?Below is my mongod.conf setting, and my MongoDB version is v4.2.6I just modified ‘net’ and ‘security’.Thank you!", "username": "DongHyun_Lee" }, { "code": "bind_ipbindIpifconfig -a | grep \"inet\"mongodsshssh", "text": "Welcome to the MongoDB Community, @DongHyun_Lee!I know that ‘net.bindIp’ determines what ip to access DB like AWS’s inbound setting.That’s actually an incorrect assumption. The bind_ip configuration value only determines which local IP address(es) your MongoDB server is listening to. It does not control access from remote IPs – that is the job of a firewall (like your AWS Inbound rules).Why can’t I set my computer IP with net.bindIp?The only valid values for bindIp are local network interfaces for the MongoDB process. For example, on Linux any local IPs would appear in the output of ifconfig -a | grep \"inet\".Why can’t I connect ‘bindIP : localhost’ with MongoDB Comapss when I can connect ‘bindIp : instance IP’ with MongoDB Compass?If you want to connect from your Compass on your local computer to a remote MongoDB deployment on AWS, you need to set up a secure connection. Typically this is done via VPN or SSH port forwarding, so your database instance is not directly exposed to the internet. In this case your mongod instance would only need to listen to localhost (for ssh) and the private IP (for VPN or ssh via a jump host on the same private network).For more information on available security measures, please review the MongoDB Security Checklist.Below is error content.If you review your MongoDB logs, I expect you’ll find a message like:Failed to set up listener: SocketException: Can’t assign requested addressThis message indicates you are trying to bind to an address that is not a valid local network interface, and will be the reason your MongoDB process is unable to start.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you Stennie! I understand . 
Have a nice day!", "username": "DongHyun_Lee" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
I can't set bindIp
2020-05-13T08:39:50.348Z
I can’t set bindIp
5,783
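As an illustration of the SSH port-forwarding approach mentioned above (keep `bindIp: localhost` and never expose mongod directly to the internet), here is a minimal sketch. The host name, user, ports, and credentials are placeholders, not values from the thread.

```python
# First forward a local port to the remote server's loopback interface, e.g.:
#   ssh -L 27018:localhost:27017 ubuntu@my-aws-host
# mongod on the server can then keep bindIp: localhost and is reachable
# only through the encrypted SSH tunnel.

from pymongo import MongoClient

# Connect to the forwarded port on *this* machine; SSH carries the traffic.
client = MongoClient(
    "mongodb://localhost:27018",
    username="appUser",        # placeholder credentials
    password="appPassword",
    authSource="admin",
)
print(client.admin.command("ping"))
```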
null
[ "installation" ]
[ { "code": "", "text": "On the installation screen for Service Configuration using the .msi installer on Windows 10, I cannot get past the error that reports the account domain, name or password is incorrect (for service as a local user). This should go smoothly so I can enroll in a course already started.", "username": "Kurt_Pluntke" }, { "code": "", "text": "Please check this link", "username": "Ramachandra_Tummala" } ]
Service Configuration for new installation on Windows
2020-05-13T22:05:42.709Z
Service Configuration for new installation on Windows
2,832
null
[ "server" ]
[ { "code": "", "text": "Hi,Which versions (4.0.x, 4.2.x, 4.4.x, etc) do you plan to support on 20.04 and when can we expect the packages to be available? Thanks.", "username": "Jean-Francois_Lebeau" }, { "code": "", "text": "HI @Jean-Francois_Lebeau,Ubuntu 20.04 packaging and testing work is planned for the MongoDB 4.4 release cycle.Depending on your system architecture, relevant issues to watch for updates are:I cannot currently offer more precise timing, but issue states in Jira will change as work progresses. For example, SERVER-44070 for x64 platform support is currently in progress.We typically do not create new packages and test environments for O/S releases that predate a MongoDB server release, so I would not expect MongoDB 4.0 or 4.2 packages for Ubuntu 20.04.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Server says as resolved but can’t find documentation for mongodb installation for ubuntu focal 20.04", "username": "Dilber_P_Shakir" }, { "code": "", "text": "Hi @Dilber_P_Shakir,Server says as resolvedIt looks like the JIRA ticket for the documentation is unresolved at this time and still in external review.", "username": "Doug_Duncan" }, { "code": "", "text": "JIRA ticket for the documentationThanks for the reply.\nCan you share link to status tracker?\nThe above link was supposed to track the development in repository which said completed on 4th may, still documentation is lagging.\nExpecting it soon.\nThanks", "username": "Dilber_P_Shakir" }, { "code": "fixVersion4.5.14.4.0-rc4Watch", "text": " Server says as resolved Hi @Dilber_P_Shakir ,The fixVersion for this server issue currently indicates 4.5.1 and 4.4.0-rc4 . That means a technical issue is addressed as of those specific releases. MongoDB 4.4.0 is currently in Release Candidate stage, so not recommended for production deployments. MongoDB 4.5.1 is an early development/unstable release of work in progress for the future 4.6 major release series (see MongoDB Versioning for more info).@Doug_Duncan pointed at the relevant issue (DOCS-13629) tracking the documentation update. You can also find this referenced in the Issue Links on the original issue (SERVER-44070 “is documented by” …).It looks like the technical packaging work is complete but waiting on final testing and documentation.If you want to follow progress on the documentation update you can Watch the issue directly in Jira.Note: when a documentation issue is resolved, the change will go into the next documentation refresh and may not appear immediately. However, there shouldn’t be a significant delay from that point.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Unfortunately I have no idea when the documentation will be updated. The JIRA link I provided was for the documentation project. Once that ticket gets closed then I would imagine it won’t be long before it gets merged into the public documentation.I see that the wonderful @Stennie_X has provided better information while I was typing the above out. ", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Ubuntu 20.04 support
2020-04-25T14:37:09.035Z
Ubuntu 20.04 support
5,191
null
[ "upgrading" ]
[ { "code": "", "text": "I’ve inherited a project that is woefully behind migrating from Windows Server 2008 r2. I’m new to mongoDB, but from reading the release notes it appears as though the migration path is 2.6.11 → 3.0.15 → 3.2.22 → 3.4.24 → 3.6.18 → 4.0.18 → 4.2.6. Is this correct or is a shorter path possible?It also seems that it is just a matter ofAnything else I should know or look out for?Thanks,\nScott", "username": "Scott_Reynolds" }, { "code": "", "text": "Hi @Scott_Reynolds,That is the recommended upgrade path. One major consideration as you upgrade the server is the client versions. If the server is this stale it is likely the client drivers are also. This many major version changes introduces many depreciations which could break your apps.It also seems that it is just a matter ofIt’s not. There are some important step in many of the upgrades and depending on whether you have a standalone, replica set or sharded cluster the procedure can vary.Fully read each upgrade procedure with care.", "username": "chris" }, { "code": "", "text": "Thanks Chris!I should have mentioned that I have a standalone db and that’s what drove me to the simplistic process after reading through the upgrade procedures. I will remember to update the client libraries.Scott", "username": "Scott_Reynolds" }, { "code": "", "text": "Yes, still a few things to take care of along the way.", "username": "chris" }, { "code": "db.adminCommand( { setFeatureCompatibilityVersion: \"4.2\" } )", "text": "Hello Scott,\nyou might want to update the fcv after each upgrade by running the below command\ndb.adminCommand( { setFeatureCompatibilityVersion: \"4.2\" } )\n//version depends on the version you are upgrading toi would modify your steps to the following:-Thanks\nR", "username": "errythroidd" }, { "code": "", "text": "@Scott_ReynoldsOne potential shortcut might be an export/import path. I don’t know if it will work or not, just mentioning it.", "username": "chris" }, { "code": "", "text": "Also note to look to see if your language’s current driver can be used with MongoDB 4.2. @chris alluded to this earlier, but I would hate for your application to stop working because you upgraded MongoDB.Since you’re running a standalone, I will assume that this is not a production instance? Replica sets are generally recommended for production. If this is production data, I would definitely make a back up of the MongoDB data directory before proceeding with the upgrade (you should do this anyways, but this warning definitely applies here as you most likely don’t want to lose your data).", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upgrade advice from 2.6.11 to 4.2.6 (current)
2020-05-13T12:29:13.639Z
Upgrade advice from 2.6.11 to 4.2.6 (current)
3,615
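Since the upgrade steps above include setting the feature compatibility version after each binary upgrade, a short example of reading and setting it may help. It uses the documented getParameter / setFeatureCompatibilityVersion admin commands; the connection string is an assumption, and the version string must match the step you are on (FCV exists from 3.4 onward).

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed standalone being upgraded

# Check the current featureCompatibilityVersion (FCV).
fcv = client.admin.command(
    {"getParameter": 1, "featureCompatibilityVersion": 1}
)
print(fcv["featureCompatibilityVersion"])

# After upgrading the binaries to 3.4, raise the FCV so the next binary
# upgrade (to 3.6) is allowed; repeat with "3.6", "4.0", "4.2" at each step.
client.admin.command({"setFeatureCompatibilityVersion": "3.4"})
```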
null
[ "upgrading" ]
[ { "code": "2020-04-13T17:38:03.113+0000 I ROLLBACK [rsBackgroundSync] Finished waiting for background operations to complete before rollback\n2020-04-13T17:38:03.113+0000 I ROLLBACK [rsBackgroundSync] finding common point\n2020-04-13T17:38:03.314+0000 I ROLLBACK [rsBackgroundSync] Rollback common point is { ts: Timestamp(1586118344, 1), t: 167 }\n2020-04-13T17:38:03.315+0000 I ROLLBACK [rsBackgroundSync] finding record store counts\n2020-04-13T17:38:03.317+0000 I REPL [rsBackgroundSync] Incremented the rollback ID to 107\n2020-04-13T17:38:03.318+0000 I STORAGE [rsBackgroundSync] closeCatalog: closing all databases\n2020-04-13T17:38:03.336+0000 I STORAGE [rsBackgroundSync] closeCatalog: closing storage engine catalog\n2020-04-13T17:38:03.336+0000 I STORAGE [WTOplogJournalThread] oplog journal thread loop shutting down\n2020-04-13T17:38:03.337+0000 F ROLLBACK [rsBackgroundSync] RecoverToStableTimestamp failed. :: caused by :: UnrecoverableRollbackError: No stable timestamp available to recover to. You must downgrade the binary version to v3.6 to allow rollback to finish. You may upgrade to v4.0 again after the rollback completes. Initial data timestamp: Timestamp(1586118674, 1), Stable timestamp: Timestamp(0, 0)\n2020-04-13T17:38:03.337+0000 I ROLLBACK [rsBackgroundSync] Rollback summary:\n2020-04-13T17:38:03.338+0000 I ROLLBACK [rsBackgroundSync] start time: 2020-04-13T17:38:03.108+0000\n2020-04-13T17:38:03.338+0000 I ROLLBACK [rsBackgroundSync] end time: 2020-04-13T17:38:03.338+0000\n2020-04-13T17:28:39.615+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.\n2020-04-13T17:28:39.615+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=16384M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release=\"3.0\",require_max=\"3.0\"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2020-04-13T17:28:40.743+0000 E STORAGE [initandlisten] WiredTiger error (-31802) [1586798920:743663][15291:0x7efdddc6ab80], connection: __log_open_verify, 1028: Version incompatibility detected: unsupported WiredTiger file version: this build requires a maximum version of 2, and the file is version 3: WT_ERROR: non-specific WiredTiger error\n2020-04-13T17:28:40.748+0000 E - [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 486\n2020-04-13T17:28:40.748+0000 I STORAGE [initandlisten] exception in initAndListen: Location28595: -31802: WT_ERROR: non-specific WiredTiger error, terminating\n2020-04-13T17:28:40.748+0000 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2020-04-13T17:28:40.748+0000 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27032.sock\n2020-04-13T17:28:40.749+0000 I CONTROL [initandlisten] now exiting\n2020-04-13T17:28:40.749+0000 I CONTROL [initandlisten] shutting down with code:100\n", "text": "Hello,\nI am trying to upgrade a replicaSet with 4 data bearing members (PSSSA) from version 3.6 to 4.0 (and eventually to 4.2). However during upgrade two members upgraded successfully, but the other two members would not start.When i try to start the mongod using 4.0 binary, below is the error message i see in mongod.logThe error message states that to faciliate rollback, i need to downgrade the binary to 3.6. 
When i start the mongod using 3.6 binary (3.6.17), i encounter the below error.Now i am unable to start the mongod with neither 4.0 nor 3.6. Did anybody face the same issue or similar issue. If yes, how did you resolve it?Thanks\nErrythroidd.", "username": "errythroidd" }, { "code": "", "text": "Hello,\nI was able to solve this when i have restarted the mongod process with latest version of 3.6 binary and in standalone mode. Apparently, the mongo member needed to be stopped cleanly and restarted with 3.6 binary to update all the files with 3.6 file version.Lesson learnt:- Always upgrade to the highest minor version before upgrading to the next version.Thanks\nR", "username": "errythroidd" } ]
Rollback during upgrade from 3.6 to 4.0
2020-04-13T18:07:20.638Z
Rollback during upgrade from 3.6 to 4.0
3,854
null
[ "queries" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"5eba4e63e4c68d0631a26bbe\"\n },\n \"name\": \"Washington\",\n \"byCounty\": [{\n \"DailyStats\": [{\n \"TotalConfirmed\": 5174,\n \"TotalDeaths\": 346,\n \"DailyConfirmed\": 272,\n \"DailyDeaths\": 15,\n \"Date\": \"4/19/2020\"\n }, {\n \"TotalConfirmed\": 5174,\n \"TotalDeaths\": 346,\n \"DailyConfirmed\": 0,\n \"DailyDeaths\": 0,\n \"Date\": \"4/20/2020\"\n }, {\n \"TotalConfirmed\": 5293,\n \"TotalDeaths\": 360,\n \"DailyConfirmed\": 119,\n \"DailyDeaths\": 14,\n \"Date\": \"4/21/2020\"\n }, {\n \"TotalConfirmed\": 5379,\n \"TotalDeaths\": 373,\n \"DailyConfirmed\": 86,\n \"DailyDeaths\": 13,\n \"Date\": \"4/22/2020\"\n }, {\n \"TotalConfirmed\": 5532,\n \"TotalDeaths\": 385,\n \"DailyConfirmed\": 153,\n \"DailyDeaths\": 12,\n \"Date\": \"4/23/2020\"\n }],\n \"StateOrTerritory\": \"US\",\n \"Latitude\": 47.49137892,\n \"Longitude\": -121.8346131,\n \"County\": \"King\",\n \"Population\": 2252782\n }, {\n \"DailyStats\": [{\n \"TotalConfirmed\": 804,\n \"TotalDeaths\": 36,\n \"DailyConfirmed\": 45,\n \"DailyDeaths\": 2,\n \"Date\": \"4/19/2020\"\n }, {\n \"TotalConfirmed\": 835,\n \"TotalDeaths\": 36,\n \"DailyConfirmed\": 31,\n \"DailyDeaths\": 0,\n \"Date\": \"4/20/2020\"\n }, {\n \"TotalConfirmed\": 868,\n \"TotalDeaths\": 38,\n \"DailyConfirmed\": 33,\n \"DailyDeaths\": 2,\n \"Date\": \"4/21/2020\"\n }, {\n \"TotalConfirmed\": 886,\n \"TotalDeaths\": 38,\n \"DailyConfirmed\": 18,\n \"DailyDeaths\": 0,\n \"Date\": \"4/22/2020\"\n }, {\n \"TotalConfirmed\": 879,\n \"TotalDeaths\": 41,\n \"DailyConfirmed\": -7,\n \"DailyDeaths\": 3,\n \"Date\": \"4/23/2020\"\n }],\n \"StateOrTerritory\": \"US\",\n \"Latitude\": 46.45738486,\n \"Longitude\": -120.7380126,\n \"County\": \"Yakima\",\n \"Population\": 250873\n }]\n}\n", "text": "I would like to Select just one of the counties; I am trying “name.ByCounty.County” : “King”. But I get no data in the result set.In addition, how do I access elements of the array of DailyStates that is nested in the by county dataThe document is structured as follows:", "username": "Mark_Friedman" }, { "code": "byCountybyCounty.DailyStats", "text": "Hello @Mark_Friedman I would like to Select just one of the countiesTo select and print an element of the byCounty array field, use the $ array projection operator.how do I access elements of the array of DailyStates that is nested in the by county dataWhat is it you want do with the elements of the byCounty.DailyStats nested array? 
Any specific query you are trying?", "username": "Prasad_Saya" }, { "code": "", "text": "compare one day’s activity to the next;\ncalculate a 5-day moving average of the data", "username": "Mark_Friedman" }, { "code": "County = \"King\"db.collection.find( \n { \"byCounty.County\": \"King\" }, \n { \"byCounty.$\": 1 }\n).pretty()\nSTART_DTEND_DTDateDateCOUNTY = \"King\"\nSTART_DT = ISODate(\"2020-04-19T00:00:00Z\")\nEND_DT = ISODate(\"2020-04-23T00:00:00Z\")\n\ndb.collection.aggregate( [\n { \n $unwind: \"$byCounty\" \n },\n { \n $unwind: \"$byCounty.DailyStats\" \n },\n { \n $match: { \n $expr: { $and: [ \n { $gte: [ { $toDate: \"$byCounty.DailyStats.Date\" }, START_DT ] },\n { $lte: [ { $toDate: \"$byCounty.DailyStats.Date\" }, END_DT ] }\n ] }, \n \"byCounty.County\": COUNTY \n }\n },\n { \n $group: { \n _id: \"Average\", \n average_daily_deaths: { $avg: \"$byCounty.DailyStats.DailyDeaths\" } \n } \n }\n] )\n{ \"_id\" : \"Average\", \"average_daily_deaths\" : 10.8 }", "text": "To print the details for County = \"King\", use the query:To get an average you use an Aggregation Framework query. The five days (for averages) are specified with a range of dates by START_DT and END_DT. You will use the Date object type rather than the string date the Date field has for date comparison.The output:{ \"_id\" : \"Average\", \"average_daily_deaths\" : 10.8 }", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Mark_Friedman,calculate a 5-day moving average of the dataFor ideas on how to get moving averages, take a look at this SO answer by Buzz Moschetti (ex MongoDB Architect), which uses the aggregation framework.Unlike the example that Prasad gave above (which is a good example of getting the average of a group of documents), the SO answer will give you averages for every x consecutive days in your data set. If you have 50 documents (1 document per day) and want 5 day moving averages, then you will get 46 resulting documents back with an average of the proceeding 5 days.", "username": "Doug_Duncan" } ]
New to MongoDB; how do I query this document?
2020-05-12T07:58:56.140Z
New to MongoDB; how do I query this document?
1,451
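For the remaining question in the thread above (comparing one day's activity to the next), the matching county can be projected out and the differences computed client-side. A hedged sketch reusing the document shape from the post; the database and collection names are made up.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection
coll = client["covid"]["states"]                   # hypothetical db/collection names

# Project only the matching county using the positional $ operator.
doc = coll.find_one({"byCounty.County": "King"}, {"byCounty.$": 1})
stats = doc["byCounty"][0]["DailyStats"]

# Day-over-day change in total confirmed cases.
for prev, curr in zip(stats, stats[1:]):
    delta = curr["TotalConfirmed"] - prev["TotalConfirmed"]
    print(f'{prev["Date"]} -> {curr["Date"]}: {delta:+d} confirmed')
```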
https://www.mongodb.com/…ccf892c40da5.png
[]
[ { "code": "", "text": "Hello. There is an array consisting of two string values, but they are without keys. Is it possible to somehow bring them all into the database without looping through the loop and so that the first number in the pair is the key, and the second is the value?\n\nArray428×807 13.7 KB\n", "username": "11131" }, { "code": "[ [ \"12.345\", \"0.1234\" ], [ \"5.6789\", \"0.6789\" ] ]{ _id: 1, array: [ { \"12.345\": \"0.1234\" }, { \"5.6789\": \"0.6789\" } ] }", "text": "Hello Игорь Князьков I think it can be done. But, there are some questions to be answered before that.The array is something like this, confirm: [ [ \"12.345\", \"0.1234\" ], [ \"5.6789\", \"0.6789\" ] ]And you want to store it, yes, as key-value pairs, but all these keys and values within one document? Something like this:\n{ _id: 1, array: [ { \"12.345\": \"0.1234\" }, { \"5.6789\": \"0.6789\" } ] }Is the data in a file or as an array object?What is the MongoDB version?How are you planning to use this data?", "username": "Prasad_Saya" }, { "code": "", "text": "{ _id: 1, array1: [ { “12.345”: “0.1234” }, { “5.6789”: “0.6789” } ], array2:[ { “12.5”: “0.14” }, { “5.69”: “0.89” } ]}", "username": "11131" }, { "code": "array1array2", "text": "{ _id: 1, array1: [ { “12.345”: “0.1234” }, { “5.6789”: “0.6789” } ], array2:[ { “12.5”: “0.14” }, { “5.69”: “0.89” } ]}How do you determine which data goes into array1 and array2 ? You have one input array.", "username": "Prasad_Saya" }, { "code": "", "text": "When they come from the exchange they have different names: asks: and bids: ", "username": "11131" }, { "code": "mongoarray_input = [ [ \"12.345\", \"0.1234\" ], [ \"5.6789\", \"0.6789\" ] ]db.collection.save( { _id: 1, array: array_input } )db.collection.aggregate( [\n { \n $unwind: \"$array\" \n },\n { \n $addFields: { array: [ \"$array\" ] } \n },\n { \n $addFields: { array: { $arrayToObject: \"$array\" } } \n },\n { \n $group: { _id: \"$_id\", array: { $push: \"$array\" } } \n }\n] )\n{\n \"_id\" : 1,\n \"array\" : [\n {\n \"12.345\" : \"0.1234\"\n },\n {\n \"5.6789\" : \"0.6789\"\n }\n ]\n}", "text": "From the mongo shell:array_input = [ [ \"12.345\", \"0.1234\" ], [ \"5.6789\", \"0.6789\" ] ]db.collection.save( { _id: 1, array: array_input } )This saved the array into a document in the collection. Now, we can convert the array elements into key-value pairs using an aggregate query.The output:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Entering a large array into the database
2020-05-13T06:37:39.207Z
Entering a large array into the database
2,966
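The aggregation above reshapes documents that are already stored. If the exchange payload is still in application code, the same key/value conversion can be done before inserting. A minimal sketch: the field names asks/bids follow the poster's description, the numbers are sample values, and note the caveat in the comments about dots in field names.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection
coll = client["market"]["orderbooks"]              # hypothetical db/collection names

# Raw exchange payload: lists of [price, size] string pairs without keys.
asks = [["12.345", "0.1234"], ["5.6789", "0.6789"]]
bids = [["12.5", "0.14"], ["5.69", "0.89"]]

# Turn each pair into a single-key subdocument, matching the aggregation output.
# Caveat: field names containing "." require MongoDB 3.6+ and a recent driver;
# older PyMongo versions reject such keys on insert.
doc = {
    "_id": 1,
    "array1": [{price: size} for price, size in asks],
    "array2": [{price: size} for price, size in bids],
}
coll.insert_one(doc)
```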
null
[]
[ { "code": "", "text": "I donwloaded the file “mongodb-enterprise-server_4.2.6_amd64.deb”\nbut your instruction (mac os) doesn’t help me.\nplz help me how to install on ubuntu18", "username": "sajjad_valisheikhzahed" }, { "code": "", "text": "Hi @sajjad_valisheikhzahed,The setup instructions are very similar for Mac and ubuntu system.Please let me know what steps have you completed so far.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "thanks for your response. actually i was on ubuntu20 and whole error i got was related to version.\nfinally i found your site instruction for installing on ubuntu and it helped me on ubuntu18.\nbut on my other system with ubuntu20 i installed it throw it’s own repo with comand:\n“sudo apt install mongo” -> it installed v3.6.8 on ubuntu20", "username": "sajjad_valisheikhzahed" }, { "code": "", "text": "Hi @sajjad_valisheikhzahed.\nYou can follow steps in mongoDB official documentation given here:I myself followed these steps on my machine.", "username": "vikash_gupta" }, { "code": "", "text": "Hi @sajjad_valisheikhzahed,but on my other system with ubuntu20 i installed it throw it’s own repo with comand:\n“sudo apt install mongo” → it installed v3.6.8 on ubuntu20Please have a look at this post.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Any setup instruction for ubuntu18?
2020-05-11T07:01:09.838Z
Any setup instruction for ubuntu18?
1,231
null
[ "aggregation" ]
[ { "code": "SELECT DISTINCT(ctype, cname, pname) \n FROM histoValues\n WHERE (timestamp >= fromDate and timeStamp <= toDate) \n AND instrument = 35\n GROUP BY ctype, cname, pname\n", "text": "Hi,I have a MongoDB collection named “histoValues” whose fields are:\n_id , ctype, cname, pname, instrument, timestamp, valueI want to select distinct ctype/cname/pname for a given instrument and range of dates (fromDate, toDate).In SQL, the query I would do is:In MongoDB Compass tool, I firstly tried to group on ctype, without any $match.\nIn SQL it would be :\nSELECT DISTINCT(ctype)\nFROM histoValues\nGROUP BY ctypeIn Compass, I used the $group aggregation operator and entered:{\n_id: “$ctype”\n}In the “output after $group stage” window, there is the following result:_id: “Axis”while I have three different ctype values in my collection.What am I doing wrong ?And how do I add fields in my $group operator ?Thanks a lot for your help", "username": "Helene_ORTIZ" }, { "code": "_id{\n $group: {\n \"_id\": {\n ctype: \"$ctype\",\n cname: \"$cname\",\n pname: \"$pname\"\n },\n ...\n }\n}\nctype", "text": "Hi @Helene_ORTIZ, and welcome to the community forums.And how do I add fields in my $group operator ?To group on multiple fields in an aggregation, you would make the _id field an object like the following:this will give you a grouping for each unique combination of values in those three fields.In the “output after $group stage” window, there is the following result:_id: “Axis”while I have three different ctype values in my collection.Without seeing the data, query and results it’s hard to say why you only got a single value back if you have three unique values in the dataset for ctype.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks a lot Doug for your answer.\nIt works fine in Robo3T: the group operator returns 9 documents.\nBut it does not work in MongoDB Compass tool, which returns only one document.\nPlease see attached file\n\nscreenshot1921×1468 180 KB\n", "username": "Helene_ORTIZ" }, { "code": "", "text": "@Helene_ORTIZ can you turn off Sample Mode inside of Compass? I would guess that all the samples have the same three values and that’s why you’re only getting a single result.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi Doug,\nI changed “Number of preview documents” in settings panel and now it works fine, like in Robo3T.\nThanks !\nThe right code is:db.getCollection(‘histoValues’).aggregate([\n{$match: {\n$and: [\n{instrument: 94},\n{timestamp: {$gte: new Date(‘2020-04-01’)}},\n{timestamp: {$lte: new Date(‘2020-04-25’)}}\n]\n}\n},\n{$group: { _id: {ctype: “$ctype”,\ncname: “$cname”,\npname: “$pname”\n}\n}\n}\n])", "username": "Helene_ORTIZ" }, { "code": "", "text": "Glad you got things working in Compass now Hélène!", "username": "Doug_Duncan" } ]
Group on multiple fields
2020-05-11T18:54:36.996Z
Group on multiple fields
12,258
null
[ "golang" ]
[ { "code": "err := coll.FindOne(context.TODO(), bson.M{\"_id\": id}).Decode(result)\nerr := coll.FindOne(context.TODO(), bson.D{{\"_id\", id}}).Decode(result)\n", "text": "Hello, I met a strange case, when I try to Find doc by the ID (here the id is not objectID, it is a string), I implement the Find in two ways:can pass all of the local test but fail on gitlab CI. Failed to find the result.can pass all of the local test and the test on gitlab CI.my local DB version, 3.6.17, the version of db used on gitlab CI testing, 3.6.11.Thanks,James", "username": "Zhihong_GUO" }, { "code": "", "text": "These should be generating the same query. Are there any configuration parameters for your database in CI that could be affecting this? For example, the number of nodes in the cluster, the read/write concern used, the read preference used, etc?", "username": "Divjot_Arora" }, { "code": "", "text": "Hello Divjot,Thank you for the support. A little more information but not sure if it is useful. I tried three test cases: one I use bson.M {\"_id\" : id} in a FindOne, where the id is a string, the other use bson.D{{\"_id\", id}} in a FindOneAndUpdate, where the id is a string, the last one use bson.M{\"_id\", id} in a FindOne, where the id is objectID, all can work locally but the bson.M with string id will always fail on C, while bson.D and bson.M with objectID will always work on CI.After I change bson.M with string id to bson.D with string id, in the FindOne, the CI can successfully run this case, I try a few times and it looks very “stable” pass. Any comments?Thanks,James", "username": "Zhihong_GUO" }, { "code": "// Try to find all projects in list with mongo-driver/mongo\nvar results []bson.M\ncursor, err := ProjectCollection.Find(ctx,\n bson.D{{\n Key: \"_id\",\n Value: bson.D{{Key: \"$in\", Value: obj.ProjectIDs}},\n }},\n)\ncursor.All(ctx, &results)\nfmt.Printf(\"id strings: %v\\n\", obj.ProjectIDs)\nfmt.Printf(\"id types: %T\\n\", obj.ProjectIDs) // showing type of project ids list\nfmt.Printf(\"results: %v\\n\", results)\nid strings: [\"idhexstring1\", \"idhexstring2\", ...]\nid types: []string\nresults []\n//This works in Atlas in db.projects collection\n{\"_id\": {\"$in\": [ObjectId(\"idhexstring1\"), ObjectId(\"idhexstring2\"), ...]} }\n[{document1...},{document2...},...]\n[]primitive.ObjectIdinterface{} is string, not primitive.ObjectID", "text": "Hi. I’ve found a similar case that doesn’t seem to work.For the mongo-driver/mongo code, changing to bson.M does not change the result. Changing the project id list to be of type []primitive.ObjectId yields an error that interface{} is string, not primitive.ObjectID. Can someone confirm that the “$in” search does in fact work with db.Collection.Find() and possibly provide an example?Edit: See @Divjot_Arora suggestion for the answer to my question", "username": "Eric_Solomon" }, { "code": "obj.ProjectIDsObjectIDobj.ProjectIDs[]primitive.ObjectIDinterface{} is string...strings := []string{\"idhexstring1\", \"idhexstring2\", ...}\nobjectIDs := make([]primitive.ObjectID, 0, len(strings)\n\nfor _, str := range strings {\n oid, err := primitive.ObjectIDFromHex(str)\n if err != nil {\n // handle error\n }\n\n objectIDs = append(objectIDs, oid)\n}\nobjectIDsobj.ProjectIDsprimitive.ObjectIDUnmarshalJSON", "text": "Hi @Eric_Solomon,Per the logs you gave, obj.ProjectIDs is a slice of strings, but it seems like the data in the server is of type ObjectID, so the filter won’t match any documents. 
You’re right that obj.ProjectIDs should be of type []primitive.ObjectID. The error you provided (interface{} is string...) seems like a compilation error, not a driver error, suggesting that you’re trying to put strings into a slice of ObjectIDs. If you have a bunch of hex strings that represent object IDs, you can convert them using this code:From there, you can use the new objectIDs slice in your filter. This is a little verbose, so ideally you could restructure your application to make sure the obj.ProjectIDs array is always filled with ObjectIDs from the start. The primitive.ObjectID type has an UnmarshalJSON function that will handle converting hex strings to ObjectIDs, so if you’re getting these string values from a JSON source, you might be able to leverage that.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "Thank you @Divjot_Arora! I had previously tried converting the strings to primitive.ObjectIDs in the way that you described. Doing this a second time caused me to realize that this is in fact correct and that my bug was further down in my collection-processing function during another conversion.", "username": "Eric_Solomon" } ]
Filter bson.M or bson.D in Find
2020-03-13T13:49:04.070Z
Filter bson.M or bson.D in Find
13,831
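A short mongo-shell illustration of the root cause discussed in the thread above: a query value only matches when its BSON type matches what is stored, so a hex string never matches an _id stored as an ObjectId. The collection name and hex value are hypothetical:

    var hex = "5eb9bf88278631f5c0263830";

    // Finds nothing when _id is stored as an ObjectId, because the types differ:
    db.projects.find({ _id: { $in: [ hex ] } })

    // Finds the document, because the string is converted to an ObjectId first:
    db.projects.find({ _id: { $in: [ ObjectId(hex) ] } })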
null
[]
[ { "code": "", "text": "Hi,We are using mongo db client side field level automatic encryption in enterprise mongodb 4.2 in Java environment. In developer environment this are working as expected. Developer started the mongocryptd process locally in his machine and end to end is working ok.However, in integration environment administrator wants to run the mongocryptd process remotely (not in the application server machine) and wants application to use it from remote by specifying the remote url in mongocryptdURL extra options while creating the mongo encrypted client. administrator is suggesting to use “mongodb://remote-machine-address:27020” in :mongocryptdURL. And he wants to run the mongocryptd process remotely in remote-machine-address.So, our question are.\n(a) Is it possible to use such setup ? Can mongocryptd process run in a remote machine and the mongocryptd driver jar can access it remotely. Or it must run locally in same machine where the jvm is.(b) If it can run remotely, then what steps we have to do ?", "username": "Srijeeb_Roy" }, { "code": "mongocryptdmongocryptdmongocryptd", "text": "Welcome to the MongoDB community @Srijeeb_Roy!Is it possible to use such setup ?No. Only local interfaces (localhost or a local unix domain socket) are supported for mongocryptd as at MongoDB 4.4. There is no configuration for remote access.One mongocryptd process can be shared by multiple applications, but they have to run in the same local server environment.Note that mongocryptd is specific to the Automatic Client-Side Field Level Encryption feature using MongoDB Enterprise. The alternative approach of Explicit (Manual) Client-Side Field Level Encryption does not require any additional processes.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "StennieThanks for your reply! Yes, I understand that manual encryption will work and not dependent on mongocryptd process. But that will bring lot of manual code changes in the application as in each read, write, search we have to manually either call encrypt or decrypt. We need to tell the administrator this constraint and ask him to set up the mongocryptd in same machine.", "username": "Srijeeb_Roy" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongocryptd for client side encryption
2020-05-11T18:54:21.054Z
Mongocryptd for client side encryption
3,627
null
[]
[ { "code": "", "text": "Hey,I want to import (really) large amounts of users into a collection, after checking if their email appears in a blacklist collection. The blacklist collection is going to be very large also (talking about hundreds of thousands of documents each).I’m trying to figure out the fastest ways to approach this problem, so far I have these possiblities in mind:So, are there any other potential solutions to my problem, and which solution should produce the fastest query results?", "username": "Tal_Ron" }, { "code": "", "text": "This might not work in all work load. But if the goal is to determine at run-time if a user email is blacklisted or not I would:", "username": "steevej" } ]
Searching for Blacklisted emails in huge collections
2020-05-11T18:58:12.457Z
Searching for Blacklisted emails in huge collections
2,190
null
[]
[ { "code": "use TreeMongoParent;\ndb.categoriesPCO.insert({_id:\"Electronics\",parent:null});\ndb.categoriesPCO.insert({_id:\"Cameras_and_Photography\",parent:\"Electronics\", order:10});\ndb.categoriesPCO.insert({_id:\"Digital_Cameras\",parent:\"Cameras_and_Photography\", order:10});\ndb.categoriesPCO.insert({_id:\"Camcorders\",parent:\"Cameras_and_Photography\", order:20});\ndb.categoriesPCO.insert({_id:\"Lenses_and_Filters\",parent:\"Cameras_and_Photography\", order:30});\ndb.categoriesPCO.insert({_id:\"Tripods_and_supports\",parent:\"Cameras_and_Photography\", order:40});\ndb.categoriesPCO.insert({_id:\"Lighting_and_studio\",parent:\"Cameras_and_Photography\", order:50});\ndb.categoriesPCO.insert({_id:\"Shop_Top_Products\",parent:\"Electronics\", order:20});\ndb.categoriesPCO.insert({_id:\"IPad\",parent:\"Shop_Top_Products\", order:10});\ndb.categoriesPCO.insert({_id:\"IPhone\",parent:\"Shop_Top_Products\", order:20});\ndb.categoriesPCO.insert({_id:\"IPod\",parent:\"Shop_Top_Products\", order:30});\ndb.categoriesPCO.insert({_id:\"Blackberry\",parent:\"Shop_Top_Products\", order:40});\ndb.categoriesPCO.insert({_id:\"Cell_Phones_and_Accessories\",parent:\"Electronics\", order:30});\ndb.categoriesPCO.insert({_id:\"Cell_Phones_and_Smartphones\",parent:\"Cell_Phones_and_Accessories\", order:10});\ndb.categoriesPCO.insert({_id:\"Headsets\",parent:\"Cell_Phones_and_Accessories\", order:20});\ndb.categoriesPCO.insert({_id:\"Batteries\",parent:\"Cell_Phones_and_Accessories\", order:30});\ndb.categoriesPCO.insert({_id:\"Cables_And_Adapters\",parent:\"Cell_Phones_and_Accessories\", order:40});\ndb.categoriesPCO.insert({_id:\"Nokia\",parent:\"Cell_Phones_and_Smartphones\", order:10});\ndb.categoriesPCO.insert({_id:\"Samsung\",parent:\"Cell_Phones_and_Smartphones\", order:20});\ndb.categoriesPCO.insert({_id:\"Apple\",parent:\"Cell_Phones_and_Smartphones\", order:30});\ndb.categoriesPCO.insert({_id:\"HTC\",parent:\"Cell_Phones_and_Smartphones\", order:40});\ndb.categoriesPCO.insert({_id:\"Vyacheslav\",parent:\"Cell_Phones_and_Smartphones\", order:50});\nvar stack=[];\nvar item = db.categoriesPCO.findOne({_id:\"Cell_Phones_and_Accessories\"});\nstack.push(item);\nwhile (stack.length>0){\n var currentnode = stack.pop();\n var children = db.categoriesPCO.find({parent:currentnode._id});\n while(true === children.hasNext()) {\n var child = children.next();\n descendants.push(child._id);\n stack.push(child);\n }\n}\ndescendants.join(\",\")\nCell_Phones_and_Smartphones,Headsets,Batteries,Cables_And_Adapters,Nokia,Samsung,Apple,HTC,Vyacheslav\n", "text": "I have data in MongoDB as Tree Structures model.\nHere is my data:In my practices, I want to remove Node parent means delete all Node Descendants.For example:When I remove \" Cell_Phones_and_Accessories \" node, it must remove Cell_Phones_and_Smartphones , Headset , Battery , Cables_And_Adapter , Nokia , Samsung , HTC , Apple and VyacheslavI have mongo shell to get all Node Descendants. 
Here is my code:Here is output", "username": "Napoleon_Ponaparte" }, { "code": "TOP_LEVEL_PARENT = \"Cell_Phones_and_Accessories\"\n\ndb.categoriesPCO.aggregate( [\n {\n $graphLookup: {\n from: \"categoriesPCO\",\n startWith: \"$parent\",\n connectFromField: \"parent\",\n connectToField: \"_id\",\n as: \"hierarchy\"\n }\n },\n { \n $match: { \n $or: [ \n { \"hierarchy._id\": TOP_LEVEL_PARENT },\n { _id: TOP_LEVEL_PARENT }\n ]\n } \n }\n] \n).forEach( doc => db.categoriesPCO.deleteOne( { _id: doc._id } ) )", "text": "Hello @Napoleon_Ponaparte , somehow the name sounds quite familiar!You can use this aggregate query to get the node and all its descendant nodes - and then delete all of them.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you Sir.\nIt worked for me.", "username": "Napoleon_Ponaparte" } ]
Remove all node descendants when removing a parent node in MongoDB
2020-05-12T03:25:07.609Z
Remove all node descendants when removing a parent node in MongoDB
4,255
null
[ "dot-net" ]
[ { "code": "[BsonKnownTypes(typeof(Cat), typeof(Dog))] <-- why is this needed!?\npublic class Animal \n{\n}\n\npublic class Cat : Animal \n{\n}", "text": "Hello,Can anyone please explain WHY you have to utilize a class map OR [BsonKnownTypes] attribute on our abstract base class to tell mongodb what the possible child classes can be when the discriminator field _t already conveys this information when serialized to the database?", "username": "Michael_Fyffe" }, { "code": "", "text": "As a side note, We are running integration tests against mongodb and the deserialization without the BsonKnownTypes attribute works fine. If anyone could explain this as well it would be awesome!", "username": "Michael_Fyffe" }, { "code": " [BsonKnownTypes(typeof(Cat), typeof(Dog))]\n public class Pet \n {\n public string Name {get; set;}\n }\n public class Cat : Pet \n {\n public string Toy {get; set;}\n }\n public class Dog : Pet \n {\n public string Toy {get; set;}\n }\n public class House\n {\n public ObjectId Id {get; set;}\n public List<Pet> Pets {get; set;}\n }\nHouseCatDogvar mypets = new List<Pet>();\nmypets.Add(new Cat{ Name=\"Izzy\", Toy=\"scratchy\"} );\nmypets.Add(new Dog{ Name=\"Dotty\", Toy=\"plushy\"} );\ncollection.InsertOne(new House { Pets=mypets});\nBsonKnownTypesPetBsonKnownTypes", "text": "Hi @Michael_Fyffe,Can anyone please explain WHY you have to utilize a class map OR [BsonKnownTypes] attribute on our abstract base class to tell mongodb what the possible child classes can be when the discriminator field _t already conveys this information when serialized to the database?You need to define the BSON attribute to tell the serialiser about the classes in the hierarchy for deserialisation. Let’s use an example to show case this, see the following polymorphism:If you insert an instance of House with two pets, a Cat and a Dog instance as a list. i.e.The discriminators are mapped to each of the pet classes. From then on, the mappings exist for the life of the application. If you find the document (deserialise) right after an insert (serialise) it would work because the previous mappings still registered, which is probably what happens in your integration tests. However, if the application try to deserialise the document without prior mappings, the BSON serialiser wouldn’t know the correct mappings. See also Specifying Known Types.See this gist:Polymorphism BsonKnownTypes for code snippet example. If you run the program it should be able to serialise/deserialise. If you then comment out part A (the insert) and the BsonKnownTypes line, and re-run the program again, you should get an error about Pet deserialisation. Now uncomment the BsonKnownTypes to restore the line back, and you should be able to deserialise.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hey @wanAwesome explanation. Is there a way to turn off the automapping feature on insert so our integration test project will catch missing bsonknowntype attributes?", "username": "Michael_Fyffe" }, { "code": "", "text": "Hi @Michael_Fyffe,Could you elaborate more on what are you trying test ? 
Are you trying to test whether your class have the attributes ?\nAlso, it could be useful to provide a code example snippet.Regards,\nWan.", "username": "wan" }, { "code": " [Test]\n public void GetByXXUniqueIdentifier_XXDocumentExist_ReturnsDocument()\n {\n // Arrange\n var sut = GetRepository<IXXDocumentRepository>();\n var document = GetXXDocuments(1).FirstOrDefault();\n sut.AddOne(document); // But this prevents the below line from blowing up\n\n // Act\n var actual = sut.GetByXXUniqueIdentifier(document.XXUniqueIdentifier); // I want this to blow up\n\n // Assert\n actual.ShouldBeEquivalentTo(document);\n }\n", "text": "I’m simply wanting my DataAccess Layer Integration Tests to build more confidence for my team by uncovering missing attributes needed for polymorphism by having the following type of test blow up", "username": "Michael_Fyffe" }, { "code": "// Checking class Pet\nSystem.Attribute[] attrs = System.Attribute.GetCustomAttributes(typeof(Pet)); \n\nforeach (System.Attribute attr in attrs) \n{ \n if (attr is BsonKnownTypesAttribute) \n { \n BsonKnownTypesAttribute a = (BsonKnownTypesAttribute)attr; \n \n foreach(var item in a.KnownTypes) \n {\n // Should return Cat and Dog (including the namespace)\n Console.WriteLine(item.ToString()); \n }\n } \n} \n", "text": "Hi @Michael_Fyffe,to build more confidence for my team by uncovering missing attributes needed for polymorphismDepending on your use case, this could be accomplished via unit test instead of integration test. Essentially you would like to test whether the class(es) have the necessary attributes needed for polymorphism. You can use check whether a class have the necessary attribute(s) using reflection. See also Accessing Attributes by Using Reflection.Using the same example classes above (Pet, Cat, and Dog):Regards,\nWan.", "username": "wan" } ]
C# Driver - Why is BsonKnownTypes attribute needed for polymorphism?
2020-03-31T17:56:48.141Z
C# Driver - Why is BsonKnownTypes attribute needed for polymorphism?
10,500
null
[ "compass" ]
[ { "code": "Mac OS Catalina 10.15.4\n“MongoDB Compass.app” can’t be opened because Apple cannot check it for malicious software.\n", "text": "I installed the MongoDB server Community edition on my Mac without a problem (seemingly). I then installed MongoDB Compass community edition, from the MongoDB download center. The setup completed without any errors, including the last step (moving it into the applications folder). When I try to run Compass (either from the Launchpad, or clicking on it, in the Applications folder), I get the following error message:Anyone else running into this?", "username": "Joseph_Mouhanna" }, { "code": "", "text": "Welcome to the MongoDB community @Joseph_Mouhanna!This warning is because the Compass binary isn’t notarised for macOS yet. We are aware of this and working on resolving the issue.For a workaround, please see Update Compass for new MacOS - #3 by Massimiliano_Marcon.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Problem running Compass on Catalina
2020-05-11T23:56:36.378Z
Problem running Compass on Catalina
4,829
null
[ "queries" ]
[ { "code": "", "text": "I am having data for one month, I need to fetch maximum data for a specific date.ex: I need to fetch the maximum value for o5-Jan-2020 00:00:00 between 05-Jan-20200 23:59:59.\nHow to achieve this condition", "username": "Aruljothy_Sundaramoo" }, { "code": "db.collection.find({\"date\": {\"$gte\": ISODate(\"2020-01-05T00:00:00\"), \"$lte\": ISODate(\"2020-01-05T23:59:59\")}}).sort({dataField: -1}).limit(1)\ndataField", "text": "Hi @Aruljothy_Sundaramoo and welcome to the MongoDB community forums.To do that type of mach, you would use the following:The above will search for all documents that fall on the given date. It will then sort the values in the field dataField in a descending order and return just the top item. This could be done in an aggregation as well.NOTE: Depending on how your data was entered and stored, you might need to adjust values for timezone differences from UTC.", "username": "Doug_Duncan" }, { "code": "", "text": "Nice! Thanks Doug! I was about to recommend similar!", "username": "Michael_Lynn" }, { "code": "", "text": "Thanks mate…but i need to get the max(highest) value, no the last value", "username": "Aruljothy_Sundaramoo" }, { "code": "", "text": "The query will do that. By sorting in a descending fashion you you will get the maximum value of the field.", "username": "Doug_Duncan" }, { "code": "", "text": "bu it tooks the last value not the highest value in the collection of specific date", "username": "Aruljothy_Sundaramoo" }, { "code": "", "text": "Are you sorting on the field that has the values that you are looking for? Can you share your query and some sample data showing that the wrong document is being returned?", "username": "Doug_Duncan" }, { "code": "> db.forumTest.find()\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263830\"), \"date\" : ISODate(\"2020-05-01T00:00:00Z\"), \"value\" : 5 }\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263831\"), \"date\" : ISODate(\"2020-05-01T03:00:00Z\"), \"value\" : 12 }\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263832\"), \"date\" : ISODate(\"2020-05-01T06:00:00Z\"), \"value\" : 2 }\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263833\"), \"date\" : ISODate(\"2020-05-01T09:00:00Z\"), \"value\" : 1 }\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263834\"), \"date\" : ISODate(\"2020-05-01T12:00:00Z\"), \"value\" : 19 }\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263835\"), \"date\" : ISODate(\"2020-05-01T15:00:00Z\"), \"value\" : 3 }\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263836\"), \"date\" : ISODate(\"2020-05-01T18:00:00Z\"), \"value\" : 17 }\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263837\"), \"date\" : ISODate(\"2020-05-01T21:00:00Z\"), \"value\" : 7 }\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263838\"), \"date\" : ISODate(\"2020-05-02T00:00:00Z\"), \"value\" : 55 }\nvaluedb.forumTest.find({\"date\": {\"$gte\": ISODate(\"2020-05-01T00:00:00\"), \"$lte\": ISODate(\"2020-05-01T23:59:59\")}}).sort({value: -1}).limit(1)\n{ \"_id\" : ObjectId(\"5eb9bf88278631f5c0263834\"), \"date\" : ISODate(\"2020-05-01T12:00:00Z\"), \"value\" : 19 }\n", "text": "Given the following simple set of data:Note that there are eight documents for May 1st and one for May 2nd. The document on May 2nd has the highest value in the value field. 
The highest value for May 1st is the document from 12:00:00 and has a value of 19.If I run the following query:I get back the expected document:", "username": "Doug_Duncan" }, { "code": "db.data.find({\"timeStamp\": {\"$gte\": 1589221800000, \"$lte\": 1589308199000}}).sort({value: -1}).limit(1)\n/* 1 */\n{\n \"_id\" : ObjectId(\"5eb9b6424fc5c77aaa81w466\"),\n \"timeStamp\" : 1589229122937.0,\n \"company_id\" : \"10001\",\n \"plant_id\" : \"10001\",\n \"device_id\" : \"10001\",\n \"value\" : {\n \"MEASEAS1\" : 0,\n \"MEASEAS2\" : 2.15729898582806e-41,\n \"MEAS3\" : -151732604633088.0\n }\n}\n", "text": "Seems did some mistaked while converting the time from double. it makes some issues…the date was logged as Double in mongo…now using the above method i can get the value… thanks mate Query UserOUTPUT", "username": "Aruljothy_Sundaramoo" } ]
Fetch data with max and between condition
2020-05-11T18:54:12.018Z
Fetch data with max and between condition
3,476
null
[ "dot-net", "security" ]
[ { "code": "Encryption related exception: A timeout occured after 1000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : \"3\", ConnectionMode : \"Automatic\", Type : \"Unknown\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 3, EndPoint : \"Unspecified/localhost:27020\" }\", EndPoint: \"Unspecified/localhost:27020\", State: \"Disconnected\", Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:27020\n at System.Net.Sockets.Socket.InternalEndConnect(IAsyncResult asyncResult)\n at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)\n at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.<ConnectAsync>d__7.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.TcpStreamFactory.<CreateStreamAsync>d__4.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\n at MongoDB.Driver.Core.Connections.BinaryConnection.<OpenHelperAsync>d__51.MoveNext()\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.<OpenHelperAsync>d__51.MoveNext()\n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\n at MongoDB.Driver.Core.Servers.ServerMonitor.<HeartbeatAsync>d__28.MoveNext()\", LastUpdateTimestamp: \"2020-05-11T11:34:53.7029821Z\" }] }..\"\n", "text": "Hi,We are trying to implement the auto encryption feature in our project. Whenever we configure the settings for auto encryption and try to run the code, we receive the following error:We have setup a replicaset on our machine with port 27017 and 27108 as primary and secondary. We have no idea from where port 27020 is being called. If we don’t use the auto encryption library, then there is no error. Port ‘27020’ is not mentioned anywhere in our files.It seems to be some in built issue in MongoDB encryption.", "username": "b_singh" }, { "code": "mongocryptdmongocryptdmongodcryptdURI", "text": "Welcome to the community @b_singh,Automatic Client-Side Field Level Encryption requires MongoDB Enterprise 4.2 or newer or a MongoDB Atlas 4.2+ cluster. The Enterprise package includes a mongocryptd binary which runs on port 27020 by default. Drivers will attempt to spawn mongocryptd if it isn’t running already. 
You can set specific options include the spawn path and mongodcryptdURI in the driver options.For more information on using automatic encryption with the .NET driver, see Client-Side Field Level Encryption in the driver documentation.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Auto Encryption C# driver error - Unknown port being picked in code
2020-05-11T18:58:15.581Z
Auto Encryption C# driver error - Unknown port being picked in code
2,611
null
[ "database-tools", "installation" ]
[ { "code": "", "text": "I’m trying to import data into an Atlas cluster using mongoimport. When I run the command I get an error “command not found”. I have MongDB Server version 4.2.6 installed.", "username": "joyfulnoiseforyahshu" }, { "code": "", "text": "mogoimport is a command line utility to be run from os prompt\nCan you run it giving full path?\nCould be path issue", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I also have oh-my-zsh installed with the mongodb plugin. Could this be the issue?", "username": "joyfulnoiseforyahshu" }, { "code": "mongoimportzshmongoimportPATHmongoimport", "text": "Hi @joyfulnoiseforyahshu,Assuming you are running mongoimport from a command shell (which sounds like zsh in your case), a “command not found” message indicates the mongoimport binary was not found in the current search path (PATH environment variable).What O/S version are you using and how did you install the MongoDB command line tools?If mongoimport isn’t in your path, you can also provide the full path to the binary as suggested by @Ramachandra_Tummala.Regards,\nStennie", "username": "Stennie_X" }, { "code": "ls \"$(which mongo | sed 's/mongo//')\" | grep mongo\n", "text": "The zsh plugin shouldn’t affect finding an executable in any way.If you’re on a Unix based system, what is returned when you run the following command:", "username": "Doug_Duncan" }, { "code": "", "text": "When I ran the command, it returns a list of the what’s installed in the /usr/local/bin directory (mongo is installed). I’m running OS X (15.4) Catalina.", "username": "joyfulnoiseforyahshu" }, { "code": "mongomongo\nmongod\nmongodump\nmongoexport\nmongofiles\nmongoimport\nmongoreplay\nmongorestore\nmongos\nmongostat\nmongotop\n", "text": "@joyfulnoiseforyahshu were there any other binaries in that folder that started with mongo?Could you state how you installed the MongoDB package? Did you follow the directions given on the MacOS installation page?I run MacOS Catalina as well and this is the list of files I get from running that command:If you don’t see a list similar to the above then you don’t have a full install of the MongoDB tools.", "username": "Doug_Duncan" }, { "code": "", "text": "Thank you. I will reinstall mongo and check that I have everything installed.", "username": "joyfulnoiseforyahshu" }, { "code": "", "text": "It works now. Thank you.", "username": "joyfulnoiseforyahshu" }, { "code": "", "text": "Glad you got things working @joyfulnoiseforyahshu! Appreciate the follow up to state that things are going right.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoimport command not found
2020-05-10T14:30:17.125Z
Mongoimport command not found
32,415
null
[ "stitch" ]
[ { "code": "Code: AccessDenied\nMessage: Access Denied\nRequestId: 46FED9D915F5AFAC\nHostId: YRrZ4uibB6saqH9KwvZo1q+MXEAxjX9rEj+mSJDxyhfKUtXEPW8yQ8gWjaFMHKuG0y3c+hMrp94=\n", "text": "I’m getting this error when I reload on any page in my SPA:403 ForbiddenI should be getting 200 replies to my SPA urlsNo combination of disabling/enabling hosting and various other settings toggles fixes this. I’m guessing it’s an internal Stitch bug.Is this a borked S3 Bucket permissions issue?!I’m dead in the water, help!", "username": "Anthony_McLaughlin" }, { "code": ".js", "text": "Hi Anthony,I’m getting this error when I reload on any page in my SPA:Just to clarify the issue, are you saying that you can access a page (i.e. foobar.mongodbstitch.com) but when you reload the browser you’re getting the 403 forbidden error ?Does the error message elaborate more on what it’s trying to access i.e. perhaps a .js file ?Regards,\nWan.", "username": "wan" }, { "code": "", "text": "The index.html page serves and pulls in all the linked css and js files fine, but when I reload a SPA url e.g. /auth/sign-in instead of replying with the code 200 and serving up the index.html page allowing the SPA to handle the url it gives me the code 403. I know Stitch utilizes S3 under the hood and this is what an error message looks like when permissions are set incorrectly. Maybe something went wrong when uploading new files? I don’t know but I can’t fix it through the Stitch UI.", "username": "Anthony_McLaughlin" }, { "code": "/auth/sign-in", "text": "HI @Anthony_McLaughlin,Just to clarify, the error was triggered when you are accessing your page from [yourapp].mongodbstitch.com, and programmatically reloading to page to a relative page /auth/sign-in ?Could you confirm whether you are still encountering this issue right now?Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi,If I was on say [app].mongodbstitch.com/auth/sign-in and refreshed the page with the browser it would return the Error 403 page outlined in my original post instead of a code 200.It eventually went away after a few days. I have no idea how it got fixed.", "username": "Anthony_McLaughlin" } ]
Getting a 403 Forbidden error message (invalid S3 bucket permissions?!) Stitch SPA - I cannot fix this!
2020-05-05T04:48:50.901Z
Getting a 403 Forbidden error message (invalid S3 bucket permissions?!) Stitch SPA - I cannot fix this!
4,555
https://www.mongodb.com/…8927134a0cc3.png
[ "queries" ]
[ { "code": "", "text": "Hi Friends,I have a use case(POC) where I need to perform a query on two collections(bigger, many fields +4k fields in each collection)\nHow to fire JSON query on this…\nI have SQL query, similar(the ![input2|519x133]( thesame query need to be fired)the SQL query:\nselect t1.X, t1.Y_DT,t1.Z,t1.adj,t1.bjc,t1.jbc,t1.mnk,t2.adj1,t2.bjc1,t2.jbc1,t2.mnk1 from inpt1 t1, input2 t2 where t1.X = t2.X AND t1.Y_DT=t2.Y_DT AND t1.Z = t2.Z;Similar mongodb query needed…Plz find sample input collections(files)…\n ", "username": "Murali_Muppireddy" }, { "code": "$lookup$lookup", "text": "Hi @Murali_Muppireddy and welcome to the community forums.Take a look at the Specify Multiple Join Conditions with $lookup section of the $lookup documentation for an example query.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Doug_Duncan, great let me try out.Thank you", "username": "Murali_Muppireddy" } ]
How to query more than one collection with the condition(s)
2020-05-11T10:45:11.340Z
How to query more than one collection with the condition(s)
2,436
null
[]
[ { "code": "", "text": "Installer for MongDB Enterprise Server on Ubuntu 20.04 Linux x64 is not yet an option in the dropdown menu. Tried using Ubuntu 18.04 Linux x64 but no dice. Would an installer for Ubuntu 20.04 Linux x64 be available soon? Please help. Thanks.", "username": "arquero" }, { "code": "", "text": "Hi @arquero,Let me find that out for you.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "@arquero,Please have a look at this post.Hope it helps!~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Thanks @Shubham_Ranjan", "username": "arquero" }, { "code": "", "text": "", "username": "Shubham_Ranjan" } ]
Installer for MongoDB Enterprise Server on Ubuntu 20.04 Linux x64
2020-05-06T16:16:04.243Z
Installer for MongoDB Enterprise Server on Ubuntu 20.04 Linux x64
1,327
null
[ "atlas-functions", "stitch" ]
[ { "code": "exports = function(arg){\n let collection = context.services.get(\"mongodb-atlas\").db(\"sample_mflix\").collection(\"movies\");\n let doc = collection.findOne({\"title\": arg});\n return {doc};\n};\nconst {\n Stitch,\n AnonymousCredential\n} = require('mongodb-stitch-server-sdk');\n\nStitch.initializeDefaultAppClient('mflix-dskle');\n/*client.auth.loginWithCredential(new AnonymousCredential()).then(user => {\n console.log(user);\n client.close();\n}).catch(err => {\n console.log(err);\n client.close();\n});*/\n\nconst client = Stitch.defaultAppClient;\nconsole.log(\"logging in anonymously\");\n client.auth.loginWithCredential(new AnonymousCredential()).then(user => {\n console.log(`logged in anonymously as user ${user.id}`)\n});\nclient.callFunction(\"getMovies\", [\"Adventures in Babysitting\"]).then(result => {\n console.log(result);\n});\n", "text": "I’m using MongodB Stitch functions to find one document by title on the sample_mflix.movies collection:When I run this in Node.js, I get {doc: null}. Here’s my Node.js code:Is this an issue with authentication? I’m using application authentication.", "username": "joyfulnoiseforyahshu" }, { "code": "exports = async function(arg){ let collection = context.services.get(\"mongodb-atlas\").db(\"sample_mflix\").collection(\"movies\"); let doc = await collection.findOne({\"title\": arg}); return {doc}; };", "text": "exports = function(arg){ let collection = context.services.get(“mongodb-atlas”).db(“sample_mflix”).collection(“movies”); let doc = collection.findOne({“title”: arg}); return {doc}; };findOne is an async function. You either can “await” the result or you can use .findOne(…).then(doc => { … }).try the following:", "username": "Dimitar_Kurtev" }, { "code": "", "text": "Thanks. I also noticed that when I use Anonymous Authentication the function returns an empty array {doc: [ ]}, but when I switch to System authentication I get a response back. Why does this happen?", "username": "joyfulnoiseforyahshu" }, { "code": "", "text": "This usually means that you’ve set some rules on you collection that prevent the Anonymous user to read/modify the document. Check your collection rules. Make sure that “non-owners” can read documents from you collection.", "username": "Dimitar_Kurtev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Stitch Function returns Null value
2020-05-04T18:58:04.350Z
Stitch Function returns Null value
3,353
null
[ "aggregation" ]
[ { "code": "", "text": "Hi Team,How could I write aggregation without exceeds maximum document size?", "username": "sudheer_ramidi" }, { "code": "", "text": "Use $match early, make it the most restrictive. Use $project early to remove unnecessary fields. Try to use index for the first $match. Add intermediate $match as soon as you know you can reduce the working set.", "username": "steevej" } ]
How could I write an aggregation without exceeding the maximum document size?
2020-05-11T07:14:40.309Z
How could I write an aggregation without exceeding the maximum document size?
1,482
null
[ "java", "production" ]
[ { "code": "", "text": "The 3.12.4 MongoDB Java Driver release is a patch to the 3.12.3 release and a recommended upgrade.The documentation hub includes extensive documentation of the 3.12 driver, includingand much more.You can find a full list of bug fixes here .http://mongodb.github.io/mongo-java-driver/3.12/javadoc/ ", "username": "Jeffrey_Yemin" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Java Driver 3.12.4 Released
2020-05-11T11:36:53.098Z
MongoDB Java Driver 3.12.4 Released
2,657
null
[ "java", "production" ]
[ { "code": "", "text": "The 4.0.3 MongoDB Java & JVM Drivers release is a patch to the 4.0.2 release and a recommended upgrade.The documentation hub includes extensive documentation of the 4.0 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.0/apidocs/ ", "username": "Jeffrey_Yemin" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Java Driver 4.0.3 Released
2020-05-11T11:36:00.761Z
MongoDB Java Driver 4.0.3 Released
1,967
null
[ "sharding" ]
[ { "code": "", "text": "Hi Team,We have about 10 shards, 3 configs and 6 MongoS and what could we see is particular nodes of shards are getting most of request.\nLet say we have rep1 to rep10 as shards of the Mongo cluster andMongoD details are below-Let me know if more details required.Thanks in advance.Cheers,\nMukesh Kumar", "username": "Mukesh_kumar" }, { "code": "sh.getBalancerState()", "text": "What we could see is particular shard rep2 is getting most read request and it’s not distributed to other shards.Is this particular collection sharded? Unsharded collections are on the Primary shard for the database.Is the balancer enabled? sh.getBalancerState() If it is disabled then this node could have a disproportionate amount of chunks.Database for which queries are coming(as per the logs of Mongo) are sharded so it should go to across the shards.The collections themselves need to be sharded. Perhaps this is just the way you have written and you mean collections.MongoDB version is- 3.0.4If I don’t say it someone else will. Mongo 3.0.4 was End of Life February 2018. 3.6 is the next current version, along with 4.0 and 4.2If you are staying on 3.0 then you should loook at updating to 3.0.15 for the most up to date version of that release.", "username": "chris" }, { "code": "sh.status()", "text": "In addition to @chris’s answer, take a look at the results of sh.status(). Does that shard have more chunks than the other shards do?", "username": "Doug_Duncan" }, { "code": "rep2jumbosh.status(true)jumbodb.collection.getShardDistribution()", "text": " Database for which queries are coming(as per the logs of Mongo) are sharded so it should go to across the shards. Hi Mukesh,The earlier suggestions from @chris and @Doug_Duncan are great starting points. Definitely identify whether the query volume is targeting sharded or unsharded collections, and review your queries and data distribution for any sharded collections on the affected shard. If there are many unsharded collections with rep2 as their primary shard, this difference in network traffic may be expected.Queries for sharded collections are distributed based on the shard key values. Chunks in a sharded collection are balanced based on roughly equal distribution of chunks according to migration thresholds. There are scenarios where a sharded collection may be balanced by policy but still have an unequal distribution of workload. A good choice of shard key will, on average, distribute your workload appropriately.Some other considerations for balancing:If chunks are balanced as expected but data/docs distribution is not, you may have empty chunks, jumbo chunks, or a poor choice of shard key.Also as mentioned earlier, MongoDB 3.0 is an end of life release series. As always, I’d recommend upgrading to the final minor release (3.0.15) and planning an upgrade to a supported release series (currently 3.6 or higher). This is unlikely to affect your query distribution issue, but MongoDB 3.0.4 was released in June, 2015 and you are missing out on 2 years of maintenance and stability updates for the 3.0 release series.We also checked the stats of Mongo from MongoDB ops managerIf you are using Ops Manager legally in a production environment, you should also be able to raise a case on the Support Portal as per your MongoDB Enterprise Advanced subscription . 
However, MongoDB 3.0 is well out of support so I’ll assume this is a test/development environment.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Doug_Duncan @Stennie_X Thanks for sharing the info.\nWe checked the balancer and data on the collection. We could see collection is sharded but there were 2 documents on that shard(let say sh1). And size of those 2 docs is more than 10MB and whenever a query search these doc, we could see getmore multiple times. And there was a flaw in our code base due to which it was trying to search same doc again and again.We have deleted that doc though code isn’t fixed yet but it seems everything is normal now.Really appreciate your efforts.Thanks,\nMukesh Kumar", "username": "Mukesh_kumar" } ]
Specific Mongo shards are getting most of the requests
2020-05-09T15:40:26.353Z
Specific Mongo shards are getting most of the requests
3,116
null
[]
[ { "code": "", "text": "Hi Team,We have 6 mongos with 3 shards, 3 config servers and about 5 databases on it.\nIt seems we are getting different db.stats() info for a database in MongoS.\nLet say we have mongos[1-6] and in MongoS[1,3,4,5] we are getting db.stats() for a db DB_N1 like below.In MongoS[1,3,45]\ndb.stats()\nrep1A/server1:port,server2:portIn MongoS[2,6]\ndb.stats()\nrep2A/server3:port,server4:portMongoDB version we are using is 2.6.3 and config servers are in sync.Thanks in advance and also let me know if any details require.Cheers,\nMukesh Kumar", "username": "Mukesh_kumar" }, { "code": "", "text": "Hi @Mukesh_kumar and welcome to the MongoDB community forums.While I don’t have an answer to your question, I will ask if you are able to upgrade your MongoDB implementation and test to see if you see similar results. MongoDB 2.6 is a very old version and support for it ended towards the end of 2016. There have been a lot of performance and security improvements since then as well as the addition of a lot of new functionality.Being on a version that old means that there will be less people available to help out determine why you are seeing what you are seeing and to provide support to resolve any issues.", "username": "Doug_Duncan" }, { "code": "", "text": "I understand it’s old version but it’s quiet difficult to understand why different MongoS of same cluster is showing different db stats for a database. Has anyone reported about it?\nCan you please suggest what to debug? I did this to debug but it doesn’t help me.Let me know if I should do something else or I missed some steps. I never faced such issues.Cheers,\nMukesh Kumar", "username": "Mukesh_kumar" }, { "code": "", "text": "Hi Mukesh,The MongoDB 2.6 release series first shipped in March, 2014 and reached end of life in Oct 2016. There have been significant architectural changes and improvements in the six (and soon to be seven) major production release series since then.It is quite possible you are encountering an issue which has been fixed some time in the past 6 years. If your issue is easily reproducible, you should be able to reproduce it in a test/staging environment. Please upgrade to a supported version of MongoDB (currently 3.6 or newer) and test if the problem is still occurring.I would also recommend an immediate upgrade from 2.6.3 to the final 2.6.12 version. Minor releases do not include any backward-breaking changes and there are important stability, security, and performance fixes.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X Thanks for the updates. Also to immediate upgrade from 2.6.3 to the final 2.6.12, Can we stop the mongo v2.6.3 and start it with v.2.6.12 or should I follow upgradation processes like https://docs.mongodb.com/manual/release-notes/3.0-upgrade/Cheers,\nMukesh Kumar", "username": "Mukesh_kumar" }, { "code": "", "text": "Upgrade the binaries and start/restart.", "username": "chris" }, { "code": "", "text": "@chris Thanks for the quick response, do you mean replace 2.6.3 binaries with 2.6.12 and restart the MongoDB servers?", "username": "Mukesh_kumar" }, { "code": "", "text": "Yes. Read the guide for full details.", "username": "chris" }, { "code": "", "text": "@chris Thanks yoy so much for the help.", "username": "Mukesh_kumar" } ]
Getting different shards for db.stats from MongoS
2020-05-07T10:17:53.737Z
Getting different shards for db.stats from MongoS
2,140
https://www.mongodb.com/…50082b690c58.png
[ "compass" ]
[ { "code": "", "text": "Hi,A few days ago compass was working a.o.k, in the past 2 days I am unable to connect to Atlas cluster. The Compass window becomes faded and it just freezes forever after hitting connect (screenshot attached).No problem connecting to localhost.Not being able to access my db for two days is really frustrating?\nAny ideas?Alternatively: Is there any other client that would let me connect to Atlas?Thanks.\nimage795×525 47.1 KB\n", "username": "Sky_Diver" }, { "code": "mongo", "text": "Hi @Sky_Diver,Can you connect to a local MongoDB instance with Compass?As far as connecting to your Atlas instance, you can use the mongo shell. You could also try one of the third part tools out there such as dbKoda or Studio 3t.", "username": "Doug_Duncan" }, { "code": "mongo", "text": "Hey @Doug_Duncan,First off, many thanks for your (swift) reply.As I wrote in my original post, there is no problem connecting to localhost. It has to do just with connecting to Atlas.As for the rest, I was under the impression that you can connect to Atlas via Compass only and not via other tools. Even if this was true at some point, this info appears to be outdated.I have Robo3T installed for a while now, and following your post I fired it up and (embarrassing as it is) I realized I can supply a mongodb+srv connection string. From there I was connected to my Atlas instance successfully and in no time. I therefore didn’t try to connect from the mongo shell.Thanks again for enlightening me. Appreciated! ", "username": "Sky_Diver" }, { "code": "mongo", "text": "A few days ago compass was working a.o.k, in the past 2 days I am unable to connect to Atlas cluster. The Compass window becomes faded and it just freezes forever after hitting connect (screenshot attached).Hi @Sky_Diver,To help reproduce this issue, can you confirm your O/S version and Atlas cluster type (eg M0)?If you want to work with documents in Atlas and are having trouble with client applications, you should still be able to use Atlas’ built-in Data Explorer feature.As for the rest, I was under the impression that you can connect to Atlas via Compass only and not via other tools. Even if this was true at some point, this info appears to be outdated.You can connect to MongoDB Atlas using any drivers or tools that support your cluster’s MongoDB server version and implement TLS/SSL. Free/shared tier clusters (M0, M2, M5) additionally require TLS with SNI support.Robo3T is unfortunately not actively maintained and has a large number of unresolved (and untriaged) issues including compatibility and stability problems. It also embeds a specific version of the mongo shell, which can cause unexpected issues with different major server releases than the embedded shell version. Most MongoDB admin tools use a supported driver API for broader server version compatibility.I definitely would not recommend using Robo3T for working with any production data or essential data until it is more actively maintained.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for your comments @Stennie_Xyou should still be able to use Atlas’ built-in Data Explorer feature.I saw Atlas’s web ui and while it’s indeed a working alternative, it’s cumbersome to use.I definitely would not recommend using Robo3T for working with any production data or essential data until it is more actively maintained.Since I’m taking my first steps with Atlas, I’ll still use Robo3T as it is a working alternative to Compass which fails to connect currently. 
Thanks for the heads up about the issues with Robo3T thought.A few days ago Compass was hanging for 5-6 connection attempts and around the 7th attempt it magically happened to connect. That was the last time I was able to connect via Compass.To help reproduce this issue, can you confirm your O/S version and Atlas cluster type (eg M0)?I’m running on Windows 10 version 1803, OS build: 17134.1130.\nTrying to connect to a M0 Sandbox (General) cluster tier.Hope this helps.", "username": "Sky_Diver" } ]
Compass 1.21.1 freezes when connecting to Atlas
2020-05-10T20:02:40.927Z
Compass 1.21.1 freezes when connecting to Atlas
4,352
null
[]
[ { "code": "", "text": "Looking to integrate libmongoc driver in android app. I didn’t find anywhere how to run the local server. is it possible to run local server in mongoDb c driver, or is it possible to run the client without running server.", "username": "sai_krishna_reddy_ko" }, { "code": "libmongoc", "text": "Welcome to the community @sai_krishna_reddy_ko!The MongoDB C driver (aka libmongoc ) is a client library for connecting to a remote MongoDB deployment, and does not include an embedded server or standalone library implementation.The MongoDB server is not designed to be embedded as a library or used in resource-constrained environments.However, I would recommend looking into Realm Database which is a mobile-first database implementation with optional cloud-based sync. We are currently working on the integration of Realm with MongoDB Stitch and Atlas for the upcoming MongoDB Realm beta.Realm does not presently have a C SDK, but may be of interest if you are able to use Java, Kotlin, or .NET in your Android project.Regards,\nStennie", "username": "Stennie_X" } ]
libmongoc in android app
2020-05-11T03:53:54.099Z
libmongoc in android app
1,792
null
[]
[ { "code": "", "text": "Hi,I am completely new to to MongoDB. In fact as new as that I explored MongoDB for the first time in my life only 3 days back. Taking on the course M001 right now.My sole purpose to come and explore this database is to look out for alternate database platforms to develop an Asset Management Solution based on J2EE. A solution which manages the complete lifecycle of movable and immovable assets like building, computer hardware, vehicles and just kind of assets.We have been working on with a java based product CMDBuild and Openmaint (google it to know more about these products), but found that these two products lack much needed flexibility to customize and innovate a custom solution based on these products.Hierarchical structure (inheritance and polymorphism) is the most important aspect of any asset management system because a new kind of Asset Class is supposed to be created in the system at run time and corresponding database changes and model classes are also supposed to be made available immediately at run time for CRUD operations.Second important aspect is seamless integration with a workflow system which can help in the aspect of process automation from within the same application.Third aspect is to have a robust google map based GIS system integration to pin point the marking of an asset geographically.Fourth is to have a 3D viewer to see the complete connected relationship between all assets. Like a computer kept on which room of a floor of a building situated in a particular complex among numerous complexes.Fifth is integration of a chart based on-screen dynamic reporting system, using which we can view all statistics live in a dashboard and generate beautiful and professional reports and export it in any file format.And the last but not the least point is, does MongoDB provides any IDE for application development. I mean an IDE like Eclipse or Springtoolsource ?So is MongoDB a suitable product to develop such a huge application? If yes, then I would be thankful if some MongoDB expert can show me a path to do that from design and development point of view.Regards,\nDev", "username": "devanshu" }, { "code": "", "text": "Short answer yes. Being schema-less each asset types can have its own document shape easily.You may want to take a look at M320 after M001. Since you mentioned J2EE, M220J covers the Java API.MongoDB is principally a data store. You may use Eclipse or any IDE you please.", "username": "steevej" }, { "code": "", "text": "Thanks @steevej for your reply.\nYour answer leaves me on a positive note to explore further into MongoDB and motivates me to take on the course M320 as soon as I finish M001.Regards,", "username": "devanshu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we develop an asset management application based on MongoDB
2020-05-09T22:04:06.361Z
Can we develop an asset management application based on MongoDB
3,151
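A minimal illustration of the schema-flexibility point made in the thread above: different asset classes can live in one collection as differently shaped documents. This is a hedged sketch only — Python/PyMongo rather than the J2EE stack discussed, with hypothetical field names and a placeholder connection string, not code from the thread.

# Illustrative only: two differently shaped asset documents in one collection.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder deployment
assets = client["asset_mgmt"]["assets"]             # hypothetical db/collection

assets.insert_many([
    {"assetType": "Building", "name": "HQ Block A", "floors": 4,
     "location": {"type": "Point", "coordinates": [77.59, 12.97]}},
    {"assetType": "Laptop", "name": "DEV-042", "ramGb": 16,
     "assignedTo": "jdoe", "parentAsset": "HQ Block A / Room 203"},
])

# Queries can still filter on the fields the shapes share.
for building in assets.find({"assetType": "Building"}):
    print(building["name"])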
https://www.mongodb.com/…628752a61de4.png
[ "compass" ]
[ { "code": "", "text": "I have downloaded the latest stable version of Compass on macOS (Catalina). I am getting the following error message stating that the application cannot be launched because Apple cannot verify whether it contains malicious content and that the app has to be updated. Please review.", "username": "Vijender_P" }, { "code": "", "text": "Welcome to the MongoDB community @Vijender_P! This warning appears because the Compass binary isn’t notarised for macOS yet. We are aware of this and are working on resolving it. For a workaround, please see Update Compass for new MacOS - #3 by Massimiliano_Marcon. Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Not able to launch Compass Application
2020-05-10T06:37:46.439Z
Not able to launch Compass Application
1,928
null
[ "replication" ]
[ { "code": "# mongod.conf\n\n\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n\n\n# network interfaces\nnet:\n port: <%= @port %>\n bindIp: <%= @bind_ip %>\n\n\n\n\n#processManagement:\n\n\n\n#security:\n\n\n\n#operationProfiling:\n\n\n\n#replication:\n\n\n\n#sharding:\n\n\n\n## Enterprise-Only Options:\n\n\n\n#auditLog:\n\n\n\n#snmp:\nexport DB_HOST=mongodb://10.0.1.100:27017/posts,10.0.2.100:27017/posts,10.0.3.100:27017/posts?replicaSet=rs0", "text": "Hello guys,\nI hope everyone is ok. I am sorry to bother you with some basic questions but I am new to mongodb and i am struggling a bit.I am a junior devOps, and i have a practice project which consist in:\n1 aws instance running a nodejs application (just a simple home page) running on port 3000\n2 a mongodb (configured with the application and should connect on port 27017Inside the app configuration, there is a variable called DB_HOST(where, from the instance, we should import the linkmongodb://10.0.1.100:27017/postsand we should be able to connect from the app instance IP:27017/posts to the mongodb and display some random blog posts.Now, the process i followed, was using chef,packer and terraform to spin up aws instances. the app application works just fine, but if i try to access the mongodb, keeps loading the page but nothing happens.Here are the steps i did.Withthe application, i implemented a nginx reverse proxy, so the communication get through port 80 instead of port 3000in mongodb i did thismongod.confand in the attributre i set:default[‘mongodb’][‘port’] = 27017\ndefault[‘mongodb’][‘bind_ip’] = ‘0.0.0.0’doing this i should be able to connect from any ip to the port 27017.Now, doing this, if i ssh inside my app instance and node seed my db, i can do so, but as soon as i run the command npm start, i get an error that there is another service running on port 3000.plus i have another problem, as i need to implement a replicaset, in my DB_HOST i should implemente 3 different mongodb, so i did so:export DB_HOST=mongodb://10.0.1.100:27017/posts,10.0.2.100:27017/posts,10.0.3.100:27017/posts?replicaSet=rs0but doing this, i get an error that i cant pass multiple values.Any idea guys to sort out this? thank you very much for your help.", "username": "Hamza_El_Aouane" }, { "code": "mongodb://host:port,host:port,host:port/database?replicaSet=setName", "text": "the format is mongodb://host:port,host:port,host:port/database?replicaSet=setNameThe monouri format is documented here.", "username": "chris" }, { "code": "export DB_HOST=mongodb://10.0.1.100:27017,10.0.2.100:27017,10.0.3.100:27017?replicaSet=rs0(node:2579) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.\n(node:2579) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. 
To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.\n(node:2579) UnhandledPromiseRejectionWarning: MongoError: no primary found in replicaset or invalid replica set name\n at /home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/topologies/replset.js:616:11\n at Server.<anonymous> (/home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/topologies/replset.js:338:9)\n at Object.onceWrapper (events.js:315:30)\n at emitOne (events.js:116:13)\n at Server.emit (events.js:211:7)\n at Pool.<anonymous> (/home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/topologies/server.js:377:12)\n at emitTwo (events.js:126:13)\n at Pool.emit (events.js:214:7)\n at connect (/home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/connection/pool.js:624:10)\n at callback (/home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connect.js:109:5)\n at runCommand (/home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connect.js:172:5)\n at Connection.messageHandler (/home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connect.js:334:5)\n at emitTwo (events.js:126:13)\n at Connection.emit (events.js:214:7)\n at processMessage (/home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connection.js:364:10)\n at Socket.<anonymous> (/home/ubuntu/AppFolder/app/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connection.js:533:15)\n(node:2579) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)\n(node:2579) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.\n#!/bin/bash\n\n\nexport DB_HOST=mongodb://10.0.1.100:27017,10.0.2.100:27017,10.0.3.100:27017?replicaSet=rs0\n\ncd /home/ubuntu/AppFolder/app\nnode seeds/seed.js\n\n\nsudo npm install\nsudo npm start &\n#!/bin/bash\n\necho '10.0.1.100' >> /etc/hosts\n\nsudo systemctl enable mongod\nsudo systemctl start mongod\n\nmongo mongodb://10.0.1.100 --eval \"rs.initiate( { _id : 'rs0', members: [{ _id: 0, host: '10.0.1.100:27017' }]})\"\nmongo mongodb://10.0.1.100 --eval \"rs.add( '10.0.2.100:27017' )\"\nmongo mongodb://10.0.1.100 --eval \"rs.add( '10.0.3.100:27017' )\"\nmongo mongodb://10.0.1.100 --eval \"db.isMaster().primary\"\nmongo mongodb://10.0.1.100 --eval \"rs.slaveOk()\"\n\n\nsleep 60; sudo systemctl restart metricbeat\nsudo systemctl restart filebeat\n\nsleep 180; sudo filebeat setup -e \\\n -E output.logstash.enabled=false \\\n -E output.elasticsearch.hosts=['10.0.105.100:9200'] \\\n -E setup.kibana.host=10.0.105.101:5601 && sudo metricbeat setup\n", "text": "Hi chris, thank you very much for the time to reply.i tried your codeexport DB_HOST=mongodb://10.0.1.100:27017,10.0.2.100:27017,10.0.3.100:27017?replicaSet=rs0ERROR:this is my configurationbash script for the app to seed the mongodb DB_HOST:Primary and other replicasets DB_HOST:i realised that when my instance is up, it does execute all the command but it does not export the enviroment variable, so i need to do it manually.", "username": "Hamza_El_Aouane" } ]
MongoDB reverse proxy and replica set
2020-05-09T22:02:54.708Z
MongoDB reverse proxy and replica set
5,856
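For reference, the connection URI shape from chris’s answer, written out with the thread’s example hosts. The sketch below uses PyMongo purely to illustrate the format (the original app is Node/Mongoose); hosts, database name, and replica set name are the placeholder values from the thread, not a reachable deployment.

from pymongo import MongoClient

# database name before the "?", replicaSet option after it
uri = "mongodb://10.0.1.100:27017,10.0.2.100:27017,10.0.3.100:27017/posts?replicaSet=rs0"
client = MongoClient(uri)

posts = client.get_default_database()["posts"]   # the "posts" db named in the URI
print(posts.count_documents({}))                 # routed to the current primary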
https://www.mongodb.com/…3_2_1023x781.png
[ "atlas-device-sync", "kotlin", "legacy-realm-cloud" ]
[ { "code": "", "text": "Hello,\nI just started to develop an android version of my ios app. For the iOS app I used realm as my remote database service.\nI am having trouble understanding how to use synced realm objects with kotlin, hence the kotlin docs only specify how to perform CRUD operations on a locally saved realm file. But what about creating a realm configuration describing a remote realm file?. I was expecting some code snippet like the one below, to also be available for kotlin.\nScreenshot 2020-05-05 at 12.12.551496×1142 118 KB\n", "username": "Octavian_Milea" }, { "code": "", "text": "@Octavian_Milea You’ll need to use the Java code for opening a realm along with calling other Sync APIs. We do plan to add a fully native Kotlin SDK but for the time being you’ll need to use Java code in your app until this lands.", "username": "Ian_Ward" }, { "code": "", "text": "Not sure this is correct @Ian_Ward. I have been using Kotlin exclusively for Android and Realm for a long time without problems. You can call any java sdk from Kotlin. And Android studio is actually pretty good at converting java code to Kotlin for you.", "username": "Simon_Persson1" }, { "code": "", "text": "@Simon_Persson1 Correct you can call Java APIs from Kotlin. That was what I was trying to say in my response, that you will need to call the Java version of SyncUser.login and getInstance from Kotlin until we have a more “kotlinified” version. Apologies if that was not clear.", "username": "Ian_Ward" } ]
Working with synced realm objects in Kotlin
2020-05-05T09:15:43.009Z
Working with synced realm objects in Kotlin
2,338
null
[ "legacy-realm-cloud" ]
[ { "code": "SyncConfigurationBase.Initialize(UserPersistenceMode.NotEncrypted);\n\nvar authUrl = new Uri(\"https://MY-REALM-CLOUD-NAME.cloud.realm.io\");\nvar credentials = Credentials.UsernamePassword(\"MY-USERNAME\", \"MY-PASSWORD\", createUser: false);\nvar user = await User.LoginAsync(credentials, authUrl);\n\nvar realmUrl = new Uri(\"realms://MY-REALM-CLOUD-NAME.cloud.realm.io/~/myRealm\");\n\nvar configuration = new FullSyncConfiguration(realmUrl,user: user);\n\n\n//Realm realm = Realm.GetInstance(new RealmConfiguration(Environment.CurrentDirectory+\"/database.realm\"));\nRealm realm = await Realm.GetInstanceAsync();\nrealm.Write(() =>\n {\n realm.Add(new Dog { Name = \"1111\", Age = 2 });\n });\n var oldDogs=realm.All<Dog>().Where(d => d.Age > 1);\n foreach (var d in oldDogs)\n {\n Console.WriteLine(d.Name);\n }\n\n", "text": "I would be glad, if someone could help out, because I have been suffering with this problem for the days, the problem is with setting up Realm Sync of Local and Cloud databases in .Net(C#) Project.I can connect to Cloud and write and retrieve the data, but I can’t figure out, how to have the local Realm on my computer sync with Cloud(in other words, when offline → not connected to Realm, see the updates on Realm Studio). To make the issue more clear, let’s say I don’t connect to Realm Cloud and update the local Realm database, I want to see this update immediately on Realm Object Server.", "username": "gamer_life" }, { "code": "return realm;", "text": "@gamer_life This generally looks correct to me, although I’m not sure why you are return realm;\nare you trying to sync the writes on your local realm to cloud? And you are saying this is not working when you come back online? Can you post some logs here? You can set the log level here:", "username": "Ian_Ward" }, { "code": "return realm;2Fptx-test.de1a.cloud.realm.ioFMY-REALM.cloud.realm.io'MY-REALM.cloud.realm.io:443'MY-REALM.cloud.realm.io", "text": "@Ian_Ward Thank you so much for the feedback. I am sorry, return realm; was inside of function and I was making the code look compact and nice, as a result I forgot to remove that part in this post.\nQ: are you trying to sync the writes on your local realm to cloud?\nA: Exactly, I am trying to have sync between local realm with cloud, unfortunately I can’t fully understand how to accomplish this. I tried with subscribe function from Realm, but I can’t figure out why still it doesn’t keep my local Realm database always synced with Cloud Realm.\nQ: And you are saying this is not working when you come back online?\nA: With above code, all of the entries are being uploaded only to Realm Cloud and I can’t see the local Realm Database with those uploaded entries. 
So, the main idea is that local Realm database(let’s say endpoint) shall be always in sync with Realm Cloud.\nQ: Can you post some logs here?\nA: Below are the logs:Realm sync client ([realm-core-5.23.8], [realm-sync-4.9.5])\nSupported protocol versions: 26-30\nPlatform: Linux Linux 4.15.0-91-generic #92~16.04.1-Ubuntu SMP Fri Feb 28 14:57:22 UTC 2020 x86_64\nBuild mode: Release\nConfig param: max_open_files = 256\nConfig param: one_connection_per_session = 1\nConfig param: connect_timeout = 120000 ms\nConfig param: connection_linger_time = 30000 ms\nConfig param: ping_keepalive_period = 60000 ms\nConfig param: pong_keepalive_timeout = 120000 ms\nConfig param: fast_reconnect_limit = 60000 ms\nConfig param: disable_upload_compaction = 0\nConfig param: tcp_no_delay = 0\nConfig param: disable_sync_to_disk = 0\nUser agent string: 'RealmSync/4.9.5 (Linux Linux 4.15.0-91-generic #92~16.04.1-Ubuntu SMP Fri Feb 28 14:57:22 UTC 2020 x86_64) RealmDotNet/4.3.0.0 (.NET Core 3.1.2) ’\nConnection[1]: WebSocket::Websocket()\nConnection[1]: Session[1]: Binding ‘/home/cool/realm-object-server/90adb87d-8624-4bee-beba-946c0ad1f66f/realms%3A%2F%2Fptx-test.de1a.cloud.realm.io%2F%7E%2FmyRealm’ to ‘/4219523c68c25db5b31ff2b3e48face4/myRealm’\nConnection[1]: Session[1]: Activating\nConnection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false\nOpening Realm file: /home/cool/realm-object-server/90adb87d-8624-4bee-beba-946c0ad1f/realms%3A%2F%2FMY-REALM.cloud.realm.io%2F%7E%2FmyRealm\nConnection[1]: Session[1]: client_file_ident = 2, client_file_ident_salt = 13912707436647382\nConnection[1]: Session[1]: Progress handler called, downloaded = 57, downloadable(total) = 57, uploaded = 873, uploadable = 930, reliable_download_progress = 0, snapshot version = 33\nConnection[1]: Resolving 'MY-REALM.cloud.realm.io:443'\nConnection[1]: Connecting to endpoint ‘3.120.234.243:443’ (1/3)\nConnection[1]: Connected to endpoint ‘3.120.234.243:443’ (from ‘192.168.0.104:414’)\nConnection[1]: Verifying server SSL certificate using root certificates, host name = MY-REALM.cloud.realm.io, server port = 443, certificate =\n-----BEGIN CERTIFICATE-----\nMII… DUE TO CONFIDENTIAL INFO, THE REST OF CERTIFICATE WAS REMOVED\n-----END CERTIFICATE-----Connection[1]: Verifying server SSL certificate using 155 root certificates\nConnection[1]: Server SSL certificate verified using root certificate(29):\n-----BEGIN CERTIFICATE-----\nMII… DUE TO CONFIDENTIAL INFO, THE REST OF CERTIFICATE WAS REMOVED\n-----END CERTIFICATE-----Connection[1]: WebSocket::initiate_client_handshake()\nConnection[1]: WebSocket::handle_http_response_received()\nConnection[1]: Negotiated protocol version: 30\nConnection[1]: Will emit a ping in 43708 milliseconds\nConnection[1]: Session[1]: Sending: BIND(path=’/4219523c68c25db5b31ff2b3e48face4/myRealm’, signed_user_token_size=786, need_client_file_ident=0, is_subserver=0)\nConnection[1]: Session[1]: Sending: IDENT(client_file_ident=2, client_file_ident_salt=1391270743664738180, scan_server_version=15, scan_client_version=30, latest_server_version=15, latest_server_version_salt=7570085601498466528)\nConnection[1]: Session[1]: Sending: MARK(request_ident=2)\nConnection[1]: Session[1]: Received: DOWNLOAD(download_server_version=16, download_client_version=33, latest_server_version=16, latest_server_version_salt=7570085601498466528, upload_client_version=33, upload_server_version=15, downloadable_bytes=0, num_changesets=0, …)\nConnection[1]: Session[1]: Progress handler called, 
downloaded = 57, downloadable(total) = 57, uploaded = 930, uploadable = 930, reliable_download_progress = 1, snapshot version = 34\nConnection[1]: Session[1]: Received: MARK(request_ident=2)\nConnection[1]: Session[1]: Sending: UPLOAD(progress_client_version=34, progress_server_version=16, locked_server_version=16, num_changesets=0)\nooooooooo\nooooooooo2\nooooooooo2\nooooooooo2\nooooooooo2\nooooooooo2\n3333333\n3333333\n3333333\n1111\n22220000000000\n1111\n1111\n1111\n1111\n1111\nConnection[1]: Session[1]: Sending: UPLOAD(progress_client_version=35, progress_server_version=16, locked_server_version=16, num_changesets=1)\nConnection[1]: Session[1]: Upload compaction: original size = 57, compacted size = 57\nConnection[1]: Session[1]: Progress handler called, downloaded = 57, downloadable(total) = 57, uploaded = 930, uploadable = 987, reliable_download_progress = 1, snapshot version = 35", "username": "gamer_life" }, { "code": "", "text": "From the logs I have seen that there is realm-object-server Folder on my computer and it’s there. I assume this is local version of Realm Cloud, but I wonder, how can I pass the file path to Realm sync configuration, so that Realm Cloud folder for certain Project is located inside of Project directory.", "username": "gamer_life" }, { "code": "", "text": "You do not need to pass in the file path. This file is created automatically for you when you call getInstanceAsync. When you make a write to a realm that is opened with a sync configuration you do not need to explicitly call sync - it does this automatically for you in the background. Just keep the realm reference alive and do not let it garbage collect and it will continue syncing.", "username": "Ian_Ward" }, { "code": "User.LoginAsync(credentials, authUrl)", "text": "I think, I wasn’t fully clear with the issue. Let’s consider this consequence, I update the local Realm Cloud database without initiating connection(not connected to Realm Cloud, because there was no internet connection available) and after the connection to Realm Cloud with User.LoginAsync(credentials, authUrl) , I want to see these changes that were made while offline(not connected to Realm Cloud, because there was no internet connection available) synced with Realm Cloud. The problem, I can’t figure out how to access the local Realm Cloud database in code in order to update it while offline and I want to have these offline changes being synced with Realm Cloud once the connection with Realm Cloud is established.", "username": "gamer_life" }, { "code": "", "text": "@gamer_life OK - I understand now. You only need to logOn (and be online) once to validate crednetials, the first time the user logs in. After that, the credentials are cached locally and the synced realm can be opened even when offline by making a call to currentUser as described here:So the steps would be", "username": "Ian_Ward" }, { "code": " User user=User.Current;\n //var serverURL = new Uri(\"/~/myRealm\", UriKind.Relative);\n var serverURL = new Uri(\"/default\", UriKind.Relative);\n \n var configuration = new FullSyncConfiguration(serverURL, user);\n\n var realm = Realm.GetInstance(configuration);\n\n realm.Write(() =>\n {\n realm.Add(new Dog { Name = \"......\", Age = 2 });\n });\n var oldDogs=realm.All<Dog>().Where(d => d.Age > 1);\n foreach (var d in oldDogs)\n {\n Console.WriteLine(d.Name);\n }\n", "text": "@Ian_Ward Thanks a lot for the answer. I have done everything as you have mentioned above, but still for some unknown reason it doesn’t sync. 
Could you please have a look and propose or update the code to resolve this issue. Below is code, which I am using:Console logs are below:Realm sync client ([realm-core-5.23.8], [realm-sync-4.9.5])\nSupported protocol versions: 26-30\nPlatform: Linux Linux 4.15.0-91-generic #92~16.04.1-Ubuntu SMP Fri Feb 28 14:57:22 UTC 2020 x86_64\nBuild mode: Release\nConfig param: max_open_files = 256\nConfig param: one_connection_per_session = 1\nConfig param: connect_timeout = 120000 ms\nConfig param: connection_linger_time = 30000 ms\nConfig param: ping_keepalive_period = 60000 ms\nConfig param: pong_keepalive_timeout = 120000 ms\nConfig param: fast_reconnect_limit = 60000 ms\nConfig param: disable_upload_compaction = 0\nConfig param: tcp_no_delay = 0\nConfig param: disable_sync_to_disk = 0\nUser agent string: 'RealmSync/4.9.5 (Linux Linux 4.15.0-91-generic #92~16.04.1-Ubuntu SMP Fri Feb 28 14:57:22 UTC 2020 x86_64) RealmDotNet/4.3.0.0 (.NET Core 3.1.2) ’\n…\n…", "username": "gamer_life" }, { "code": "", "text": "The code looks fine to me and there is nothing in the logs to indicate what is going wrong. Sorry I can’t be of more help - the .NET tutorial should work for this except for updating the APIs where Visual Studio tells you to", "username": "Ian_Ward" } ]
Problem with setting up Realm Sync of Local and Cloud databases in .Net Project
2020-05-08T16:14:03.876Z
Problem with setting up Realm Sync of Local and Cloud databases in .Net Project
3,589
null
[ "node-js" ]
[ { "code": "async function dbData() {\n\n try{\n\n let dbData= await formModel.find({})\n\n //console.log(dbData) //Console logs succesfully\n\n return dbData\n\n }\n\n catch(err){\n\n console.log(err)\n\n }\n\n \n\n}", "text": "hiIm trying to get data from my DBatlas cluster with Model.find(), but I think something is not working with my async function. function returns a pending promise but im able to console.log the actual cluster objects. How can I store my collection with node.js ?", "username": "Tony_Jerry" }, { "code": "const MongoClient = require('mongodb').MongoClient;\nconst client = new MongoClient( 'mongodb://localhost:27017', { useUnifiedTopology: true } );\n\nasync function getDocs() {\n\n try {\n await client.connect()\n coll = await client.db('testDB').collection('testColl')\n cursor = await coll.find({})\n return cursor.toArray()\n } catch (e) {\n console.error('Error:', e)\n }\n finally {\n client.close();\n }\t\t\n};\n\n(async function() {\n let docsList = await getDocs()\n console.log('Fetched documents:', docsList)\n})();\n", "text": "This is using MongoDB NodeJS driver:", "username": "Prasad_Saya" } ]
Model.find() returns pending promise , but logs actual collection?
2020-05-09T06:59:06.894Z
Model.find() returns pending promise , but logs actual collection?
6,013
null
[]
[ { "code": "", "text": "Hi there! Mary here from the MongoDB Education team . I’d like to share several updates we’ve made to MongoDB University with you all:Users now have access to the entirety of a course’s curriculum, including chapters and exercises, from the moment they enroll. While we recommend going through the content sequentially, this change will allow users to access specific content within a course at any time.There are no longer weekly deadlines. This will allow users to learn at their own pace.Users now have two months to complete a course upon enrollment.These changes have been applied to all our course offerings. We hope these changes will help users learn MongoDB with even more flexibility.As we continue to work to improve MongoDB University, we’d love to hear from you. Feel free to comment below or reach out at [email protected]. – For those of you that aren’t familiar with MongoDB University, we offer completely free, online courses led by Curriculum Engineers! You can check our courses out at university.mongodb.com.", "username": "Mary_Alati" }, { "code": "", "text": "These are indeed great changes to MongoDB University and something students have been asking for since the first course was offered.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks for sharing your feedback! Input from our students was certainly a major factor in the team’s decision to make these updates .Have you taken any courses or certification exams?", "username": "Mary_Alati" }, { "code": "", "text": "Hi @Mary_Alati, I was part of the group that took the original set of courses and spent two years as an online TA just shortly after the courses were offered (2013 - 15). I did take part in, and passed, the beta for the MongoDB DBA certification as well.I unfortunately haven’t taken any of the recent courses as life has kept me busy, but having the ability to take the courses when I have time will hopefully allow me to check the courses out again.", "username": "Doug_Duncan" }, { "code": "", "text": "Fabulous updates, @Mary_Alati! I have personally struggled to stay on top of the weekly deadlines to complete courses, so I’m sure other learners greatly appreciate these changes too. That flexibility is especially critical now more than ever as more folks are leaning into kids at home during work time, demanding work schedules, and adapting to remote work. ", "username": "Jamie" }, { "code": "", "text": "thank you very much. Those were great changes.", "username": "DavidSol" }, { "code": "", "text": "Yes. This is great news.", "username": "chris" }, { "code": "", "text": "Wow! Amazing to hear you were an early user and an online TA.Hopefully, with these updates you’ll be able to dive into some of our new courses. If you do and have any feedback, I’d love to hear it!Also, if you’re ever interested in taking the new Certification Exam just let me know!", "username": "Mary_Alati" }, { "code": "", "text": "Great changes. I struggled several times to complete the courses within the deadlines and were not able to complete it. But now it will be doable and will be able to prepare the best for my upcoming DBA exam. This also gives flexibility to complete the whole topic thoroughly, Like I can complete whole m103 and can go for m201, m310 which makes more sense then referring to just single chapter of each courses.", "username": "viraj_thakrar" }, { "code": "", "text": "Thanks for the feedback Viraj! Excited for you to keep learning with MongoDB University. 
When do you plan on taking the DBA Certification exam?", "username": "Mary_Alati" }, { "code": "", "text": "Any reason on demand access is being removed? It’s been amazing to pick a course and start immediately", "username": "jeremyfiel" }, { "code": "", "text": "Hi @jeremyfiel,I just noticed your comment here. On-demand access has not been removed.Is there a specific course you are unable to access?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "hey @Stennie_X I had an email from the university team and they said it was expiring from last year’s World. If it’s not going away, that’s great news. I’m currently enrolled in M121 and M320 and working my way through each of them without any issues.thanks for the reply!", "username": "jeremyfiel" }, { "code": "", "text": "I had an email from the university team and they said it was expiring from last year’s World.Hi Jeremy,Thanks for clarifying the context. MongoDB University On-Demand access is a separate offering which provides access to courses outside of the scheduled sessions. On-Demand access is only available with an activation code which will have an associated expiry date. Typically this is provided as part of a support subscription or special offer.It sounds like you received On-Demand access as a bonus for attending MongoDB World last year. Unfortunately that offer will have an end date, but there will be future opportunities to earn access.The schedule for general course offerings has been expanded from March, 2020 (as per the announcement you are commenting on), so access to all course materials is now available upon enrolment and you have two months to complete a course without any weekly deadlines.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Updates to MongoDB University: March, 2020
2020-03-25T21:56:49.108Z
Updates to MongoDB University: March, 2020
5,557
null
[ "containers" ]
[ { "code": "# mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval 'printjson(rs.status())'\nMongoDB shell version v3.6.9\nconnecting to: mongodb://xxx.xxx.xxx.xx:xxxxx/\nImplicit session: session { \"id\" : UUID(\"xxxxx-xxxxx-xxxxx-xxxxx-xxxxx\") }\nMongoDB server version: 3.6.9\n{\n \"ok\" : 0,\n \"errmsg\" : \"not authorized on admin to execute command { replSetGetStatus: 1.0, lsid: { id: UUID(\\\"xxxxx-xxxxx-xxxxx-xxxxx-xxxxx\\\") }, $db: \\\"admin\\\" }\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\"\n}\n# mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval 'printjson(rs.status())'\nMongoDB shell version v3.4.16\nconnecting to: mongodb://xxx.xxx.xxx.xx:xxxxx/\nMongoDB server version: 3.4.16\n{\n \"set\" : \"set01\",\n \"date\" : ISODate(\"YYYY-MM-DDT17:25:54.460Z\"),\n \"myState\" : 7,\n \"term\" : NumberLong(1),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"optimes\" : {\n : \n : \n : \n : \n }\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"hostname0:xxxxx\",\n \"health\" : 1,\n \"state\" : 7,\n \"stateStr\" : \"ARBITER\",\n : \n : \n : \n : \n },\n {\n \"_id\" : 1,\n \"name\" : \"hostname1:xxxxx\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n : \n : \n : \n : \n },\n {\n \"_id\" : 2,\n \"name\" : \"hostname2:xxxxx\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 681823,\n : \n : \n : \n : \n }\n ],\n \"ok\" : 1\n}\n", "text": "In MongoDB server version: 3.4.16 we are able to get the data via arbiter node but in same environment but MongoDB server version: v3.6.9, it is throwing error. Any reason, why? No config of replica set is changedMongoDB shell version v3.6.9MongoDB shell version v3.4.16", "username": "Joanne" }, { "code": "mongo --host ... --port ... --eval \"db.isMaster().arbiterOnly\"arbiterOnlytrue", "text": "In MongoDB server version: 3.4.16 we are able to get the data via arbiter node but in same environment but MongoDB server version: v3.6.9, it is throwing error. Any reason, why?Hi,I cannot reproduce this issue using MongoDB 3.6.9 with auth enabled.Can you check that you are definitely connecting to an arbiter via:mongo --host ... --port ... --eval \"db.isMaster().arbiterOnly\"The arbiterOnly result should be true if you are connected to an arbiter.Arbiters do not replicate any data (including user/auth data), but another possibility is that someone has manually created users to secure your arbiter.Regards,\nStennie", "username": "Stennie_X" }, { "code": "db.isMaster()rs.status()mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval 'printjson(rs.status())'mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval \"db.isMaster().arbiterOnly\"", "text": "n MongoDB server version: 3.4.16 we are able to get the data via arbiter node but in same environment but MongoDB server version: v3.6.9, it is throwing error. Any reason, why?mongo --host … --port … --eval “db.isMaster().arbiterOnly”This returned true. But how come this command db.isMaster() returned the result but rs.status() doesn’t? The authentication is enabled in the environment.mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval 'printjson(rs.status())' —> This resulted in error\nmongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval \"db.isMaster().arbiterOnly\" --> This returned true\nHow? Why? Sorry for being naive. 
But need to understand it better.\nIs there any list of commands that work only with authentication enabled or regardless of authentication enabled they work too? Or am I understanding it wrong?", "username": "Joanne" }, { "code": "mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval 'printjson(rs.status())'mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval \"db.isMaster().arbiterOnly\"admindb.isMaster()db.runCommand( { isMaster: 1 } )rs.status()db.adminCommand( { replSetGetStatus: 1, initialSync: 1 } )db.adminCommand(...)rs.status()db.runCommand(...)db.isMaster()admin", "text": "mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval 'printjson(rs.status())' —> This resulted in error\nmongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval \"db.isMaster().arbiterOnly\" --> This returned trueDoes your user have access to run admin commands (commands in the admin database)?The db.isMaster() shell helper runs db.runCommand( { isMaster: 1 } ) behind the scenes.The rs.status() command runs db.adminCommand( { replSetGetStatus: 1, initialSync: 1 } ) behind the scenes.Notice the the db.adminCommand(...) for rs.status() as opposed to db.runCommand(...) for db.isMaster().If the user you are logging in with cannot run queries in the admin database then that would explain the errors you are receiving.It would be helpful to see that user’s privileges and roles.", "username": "Doug_Duncan" }, { "code": "mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval 'printjson(rs.status())'mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval \"db.isMaster().arbiterOnly\"db.listCommands()splitChunk: adminOnlydb.adminCommandsplitVector: startSession: slaveOk top: adminOnly slaveOk ", "text": "Hi Doug,mongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval 'printjson(rs.status())' —> This resulted in error\nmongo --host xxx.xxx.xxx.xx --port xxxxx --ipv6 --eval \"db.isMaster().arbiterOnly\" --> This returned trueIs there any list of commands that work only with authentication enabled or regardless of authentication enabled they work too?If I run db.listCommands() and in output I get whole list of db commands available:\nCan you let me know if my understanding is correctsplitChunk: adminOnly --> only returns result if Authentication is enabled, runs db.adminCommand and should be run from primary host?\nsplitVector: . --> returns result regardless of authentication enabled or not and should be run from primary host?\nstartSession: slaveOk --> returns result regardless of authentication enabled or not and can be run from either primary or secondary host?\ntop: adminOnly slaveOk . 
--> only returns result if Authentication is enabled, and can be run from either primary or secondary host?Really appreciate your help on this!\nThanks in advance.", "username": "Joanne" }, { "code": "rs.status().../mongo/bin/mongodb-3.6.9/mongo --port 27019 --eval 'db.adminCommand({\"replSetGetStatus\": 1, \"initialSync\": 1})'\n/mongo/bin/mongodb-3.6.9/mongo --port 27019 --eval 'db.adminCommand({\"top\": 1})'\nMongoDB shell version v3.6.9\nconnecting to: mongodb://127.0.0.1:27019/\nImplicit session: session { \"id\" : UUID(\"17462515-7c74-44ed-9cef-152ac3463d7b\") }\nMongoDB server version: 3.6.9\n{\n \"ok\" : 0,\n \"errmsg\" : \"not authorized on admin to execute command { top: 1.0, lsid: { id: UUID(\\\"17462515-7c74-44ed-9cef-152ac3463d7b\\\") }, $db: \\\"admin\\\" }\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\"\n}\nadminCommands()rs.status()adminCommand()adminrs.status()splitChucksplitVectortopadminstartSession()rs.status()", "text": "We are not specifying username in command.Sorry about that. I remembered @Stennie_X mentioning auth earlier in the thread and didn’t look at the exact queries being run.I have tested with version 3.6.9 on my Mac and I can run those commands without issue, whether I have auth enabled or not. The weird thing here is that I can run the following command (note that this is the same command that gets run by the shell helper rs.status()):I get an error with the following:Both are adminCommands() on the same Arbiter with auth enabled. Connecting to the secondary, but not logging in, I get the expected error when trying to run rs.status(). Really any adminCommand() call should require that you be authed and allowed to run commands in the admin database.I am not sure why I can run the rs.status() commands against the arbiter and get results back where as you can’t. I am using a slightly older version than you, so not sure which is right in this case.As for the splitChuck and splitVector commands, those are internal only methods and probably shouldn’t be called (if they even can be). The shell helpers sh.splitFind() or sh.splitAt() methods are what you would want to use. Also note that these methods are only needed if you’re using sharding.As for top, you need to be logged in as someone with privileges to run commands in the admin database. This can be run from either a primary or secondary.I’ve not used the startSession() method but does appear that it can be called whether or not you’ve authenticated and can be run from either a primary or secondary. Having said this, if you’re not authenticated, then you probably won’t be able to run many commands from this new session.Hopefully some of this helps, and sorry that I can’t provide and explanation for why the rs.status() call fails for you from an arbiter.", "username": "Doug_Duncan" }, { "code": "rs.status()db.listCommands()adminOnlyslaveOkcommand1: adminOnlydb.adminCommandcommand2: command3: slaveOk command4: adminOnly slaveOk ", "text": "I am not sure why I can run the rs.status() commands against the arbiter and get results back where as you can’t. 
I am using a slightly older version than you, so not sure which is right in this case.Can someone please help us understand about Doug’s findings?And I’d like to understand the output of db.listCommands() the meaning of combinations of adminOnly and slaveOk in case of authorisation enabled or disabled:\nCan you let me know if my understanding is correctcommand1: adminOnly --> only returns result if Authentication is enabled, runs db.adminCommand and should be run from primary host?\ncommand2: . --> returns result regardless of authentication enabled or not and should be run from primary host?\ncommand3: slaveOk --> returns result regardless of authentication enabled or not and can be run from either primary or secondary host?\ncommand4: adminOnly slaveOk . --> only returns result if Authentication is enabled, and can be run from either primary or secondary host?Really appreciate your help on this!\nThanks in advance.", "username": "Joanne" }, { "code": "18:30 $ docker exec -it mongo-0-c mongo\nMongoDB shell version v3.6.9\nconnecting to: mongodb://127.0.0.1:27017\nImplicit session: session { \"id\" : UUID(\"a095286b-f632-4292-a14c-633861261df4\") }\nMongoDB server version: 3.6.9\ns0:ARBITER> rs.status()\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"not authorized on admin to execute command { replSetGetStatus: 1.0, lsid: { id: UUID(\\\"a095286b-f632-4292-a14c-633861261df4\\\") }, $db: \\\"admin\\\" }\",\n\t\"code\" : 13,\n\t\"codeName\" : \"Unauthorized\"\n}\ns0:ARBITER> \nbye\n18:30 $ docker rm -f mongo-0-c \nmongo-0-c\n18:30 $ docker volume rm mongo-psa_mongo-0-c \nmongo-psa_mongo-0-c\n18:30 $ docker-compose up -d mongo-0-c\nWARNING: The Docker Engine you're using is running in swarm mode.\n\nCompose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.\n\nTo deploy your application across the swarm, use `docker stack deploy`.\n\nCreating volume \"mongo-psa_mongo-0-c\" with default driver\nCreating mongo-0-c ... done\n18:31 $ docker exec -it mongo-0-c mongo\nMongoDB shell version v3.6.9\nconnecting to: mongodb://127.0.0.1:27017\nImplicit session: session { \"id\" : UUID(\"63c73dab-8a0d-40d7-a97b-e40cf702d0e8\") }\nMongoDB server version: 3.6.9\nWelcome to the MongoDB shell.\nFor interactive help, type \"help\".\nFor more comprehensive documentation, see\n\thttp://docs.mongodb.org/\nQuestions? 
Try the support group\n\thttp://groups.google.com/group/mongodb-user\ns0:ARBITER> rs.status()\n{\n\t\"set\" : \"s0\",\n\t\"date\" : ISODate(\"2020-05-08T22:31:28.865Z\"),\n\t\"myState\" : 7,\n\t\"term\" : NumberLong(4),\n\t\"syncingTo\" : \"\",\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1588977081, 1),\n\t\t\t\"t\" : NumberLong(4)\n\t\t},\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1588977081, 1),\n\t\t\t\"t\" : NumberLong(4)\n\t\t},\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1588977081, 1),\n\t\t\t\"t\" : NumberLong(4)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t}\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"mongo-0-a:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 9,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1588977081, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1588977081, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2020-05-08T22:31:21Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2020-05-08T22:31:21Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2020-05-08T22:31:27.615Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2020-05-08T22:31:27.579Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"mongo-0-b:27017\",\n\t\t\t\"syncSourceHost\" : \"mongo-0-b:27017\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 7\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"mongo-0-b:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 9,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1588977081, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1588977081, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2020-05-08T22:31:21Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2020-05-08T22:31:21Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2020-05-08T22:31:27.616Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2020-05-08T22:31:27.743Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1588967519, 1),\n\t\t\t\"electionDate\" : ISODate(\"2020-05-08T19:51:59Z\"),\n\t\t\t\"configVersion\" : 7\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"mongo-0-c:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 7,\n\t\t\t\"stateStr\" : \"ARBITER\",\n\t\t\t\"uptime\" : 11,\n\t\t\t\"syncingTo\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 7,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t}\n\t],\n\t\"ok\" : 1\n}\n", "text": "I decided to have a play also.PSA v3.6.9 auth enabled.\nmongo locally on the arbiter could complete rs.status()A fun scenario might be a node that was a replica but then was changed to an arbiter.\nNow I get the unauthorized error!So this arbiter could have been a ‘dirty’ node when it was added. 
Clean out the data dir on the arbiter and I can now rs.status()", "username": "chris" }, { "code": "command1: adminOnlydb.adminCommandcommand2: command3: slaveOk command4: adminOnly slaveOk adminOnlyslaveOkadminOnlyadminrunCommand({...})adminCommand({...})slaveOkrs.slaveOkslaveOkdb.listCommands()", "text": "So this arbiter could have been a ‘dirty’ node when it was added. Clean out the data dir on the arbiter and I can now rs.status()Brilliant thinking @chris ! I’m glad you jumped in. I told you that I learn a lot from you.command1: adminOnly --> only returns result if Authentication is enabled, runs db.adminCommand and should be run from primary host?\ncommand2: . --> returns result regardless of authentication enabled or not and should be run from primary host?\ncommand3: slaveOk --> returns result regardless of authentication enabled or not and can be run from either primary or secondary host?\ncommand4: adminOnly slaveOk . --> only returns result if Authentication is enabled, and can be run from either primary or secondary host?@Joanne sorry that I misunderstood your question. You were wondering about the adminOnly and slaveOk designations and not the actual commands themselves. The best way to learn this is to run the command and see what happens. It doesn’t matter if authentication is enabled or not for these commands to run. If authentication is enabled, then the user running the command would need to have the proper privileges to run them.The adminOnly listed commands must be run in the admin database if using runCommand({...}) or with the adminCommand({...}) from any other database. These can be run from either the primary or the secondary.I am not sure what the slaveOk designation means as I was able to run those commands from both a primary and a secondary that did not have rs.slaveOk ran on it. @Stennie_X, can you provide insight into the commands listed showing slaveOk, and what that label means, on them in the results of db.listCommands()?", "username": "Doug_Duncan" }, { "code": "db.isMaster()rs.status()isMasterpingconnectionStatusauthenticateisMasterisMasterserverStatusslaveOkrs.slaveOkslaveOkdb.listCommands()slaveOkAllowedOnSecondaryadminCommand()admindb.adminCommand()admindb.getSiblingDB(\"admin\").runCommand()admindb.runCommand()db.adminCommand()admin", "text": "how come this command db.isMaster() returned the result but rs.status() doesn’t?A few commands (such as isMaster, ping, connectionStatus, and authenticate) do not require any authentication even if auth is enabled. These are used to support connecting to a deployment.The isMaster command is used by drivers as part of the connection handshake to discover server features and negotiate compatibility. The isMaster response includes information such as the wire protocol version, supported auth mechanisms, compression options, and the role of the current node.The majority of commands (including serverStatus) will require authentication if auth is enabled.A fun scenario might be a node that was a replica but then was changed to an arbiter.\nNow I get the unauthorized error!Yes, this is a variation on the possibility I mentioned earlier:Arbiters do not replicate any data (including user/auth data), but another possibility is that someone has manually created users to secure your arbiter.This can be (ab)used as a feature, although any users created directly on an arbiter are (as of MongoDB 4.4) independent of the rest of the replica set. 
Dropping the data directory on the arbiter (as suggested by @chris) will ensure the arbiter is only used as designed.I am not sure what the slaveOk designation means as I was able to run those commands from both a primary and a secondary that did not have rs.slaveOk ran on it. @stennie, can you provide insight into the commands listed showing slaveOk , and what that label means, on them in the results of db.listCommands() ?The slaveOk flag (aka AllowedOnSecondary) indicates whether a command is permitted on a secondary, but does not mean this command does something useful from an end user point of view. Some commands are internal (although they should be generally noted as such) or require specific context to be useful.The Database Commands section in the MongoDB documentation is the best reference for available commands relevant to your version of MongoDB.Really any adminCommand() call should require that you be authed and allowed to run commands in the admin database.The db.adminCommand() shell helper runs a command against the admin namespace and is a shorthand for db.getSiblingDB(\"admin\").runCommand() or changing to the admin namespace and invoking db.runCommand(). You can run either admin or non-admin commands through the db.adminCommand() shell helper, as long as those are meaningful for the admin namespace.Commands will fail unless the current user has appropriate privileges for the current configuration of access control and authentication.Regards,\nStennie", "username": "Stennie_X" } ]
Getting replica set status via arbiter node
2020-05-07T22:47:45.402Z
Getting replica set status via arbiter node
5,338
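A small sketch of the distinction explained in the thread above: isMaster is part of the connection handshake and succeeds without authentication, while replSetGetStatus (the command behind rs.status()) is an admin command that fails with code 13 when the user lacks privileges. PyMongo is used only as an illustration; the host and credentials are placeholders.

from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("mongodb://user:pass@10.0.0.5:27017/?authSource=admin")  # placeholder

# isMaster needs no auth; on an arbiter, arbiterOnly is True.
print(client.admin.command("isMaster").get("arbiterOnly", False))

try:
    # replSetGetStatus requires privileges on the admin database when auth is on.
    status = client.admin.command("replSetGetStatus")
    print([m["stateStr"] for m in status["members"]])
except OperationFailure as exc:
    print("not authorized:", exc)   # code 13 / Unauthorized, as shown in the thread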
https://www.mongodb.com/…ee1a449b7f64.png
[ "connector-for-bi" ]
[ { "code": "", "text": "I have installed the necessary ODBC driver for use with Power BI, but when I enter my server info I get this error. What is the issue on my end? Is there another alternative I should take?", "username": "B_Ng" }, { "code": "system error: 10060", "text": "Hi, I assume you are referring to the MongoDB ODBC Driver for BI Connector, which also requires a compatible version of the MongoDB Connector for BI to be installed and correctly configured. The system error: 10060 message indicates the ODBC driver is unable to establish a connection to the Connector for BI. I suggest testing that the Connector for BI is working before moving on to the ODBC driver setup. Are you using a local version of the Connector for BI or Atlas-hosted? Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi, I'm using a local version of the Connector for BI. I found that I was missing something involving mongosqld. Is the sampling for the schema supposed to take hours if my database is quite large?", "username": "B_Ng" } ]
PowerBI ODBC connector error
2020-05-03T21:56:04.992Z
PowerBI ODBC connector error
4,273
null
[ "atlas-search" ]
[ { "code": " { $text:\n { $search: \"\\\\\"my cool phrase\\\\\" OR \"\\\\\"my even cooler phrase\\\\\"\" }\n }\n", "text": "I’m using Atlas and have read that Lucene supports AND and OR operators.Can these be used somehow with multiple phrases?Something like:", "username": "Corey_Murphy" }, { "code": "{\n $search: {\n text: { path: \"name\", query: [\"phrase1\", \"phrase2\"]}\n }\n}\ncompound:should: {\n $search: {\n compound: {\n should: [\n { text: { path: \"name\", query: \"phrase1\"} },\n { text: { path: \"name\", query: \"phrase2\"} },\n ]\n }\n }\ncompound:filter: {\n $search: {\n compound: {\n filter: [\n { text: { path: \"name\", query: \"phrase1\"} },\n { text: { path: \"name\", query: \"phrase2\"} },\n ]\n }\n }\n", "text": "Hi Corey -Yes, you can. For OR there are several different ways to do this. The easiest is simply to pass an array instead of a string (phrase1 OR phrase2). Also see the path construction docs.You can also use the compound: should: operator, which can handle more sophisticated use cases. Also see the compound docs.You can use compound: filter: for AND (Phrase1 AND Phrase2):Compound clauses can also be nested to arbitrary depths which will let you build more sophisticated boolean logic.", "username": "Doug_Tarr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to full text search multiple phrases
2020-05-07T20:59:03.626Z
How to full text search multiple phrases
4,669
null
[ "golang" ]
[ { "code": "", "text": "This is a general question regarding client software written using any of the MongoDB drivers. What is the best practice for handling a disconnection between client and server (due to network issues, downed server, etc.)? I’m using the mongo-go-driver and I’m wondering which sorts of methods/errors I should be using/checking to ensure that my client applications aren’t waiting on infinitely-long queries or unable to use my application without me restarting it (to, hopefully, reconnect to the MongoDB instance/replica set/master).", "username": "John_Rinehart" }, { "code": "", "text": "Hi John,The driver monitors the deployment using one goroutine per node in the cluster so generally, there should be nothing required on your end to reconnect in the event of a transient issue. For example, when connecting to a replica set, the driver will detect the new primary in the event of a failover or node restart. In addition, the driver will retry certain read and write operations by default in the event of a transient error.The only thing I can think of checking for that would indicate an issue is a server selection timeout. By default, the driver will try to find a suitable server for an operation for 30 seconds. This is generally enough time for transient network issues to resolve themselves. If your application receives a server selection error, that could be indicative of a more serious issue or a signal that you need to set the server selection timeout to a higher value (e.g. if primary elections are consistently taking 45 seconds, try setting the timeout to 1 minute).The current driver version (1.3.0 at the time of writing) does not return a custom error type for server selection errors, but you can check for the “server selection error” substring. We have a project planned for this quarter to improve the error types returned by the driver, and I plan on adding a concrete type that users can check for as part of that.– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "Divjot, that was an amazing answer. Please see if you can get this added to a FAQ on the official MongoDB documentation. It details that 1) I shouldn’t have to worry about this and 2) If I’m having some issue with my network that there exist some hacky solutions to solve the problem of an unusually long connection time. Thanks!Where should I go to get documentation on the error types/values which I can check for a given version of the mongo-go-driver? Particularly, I’m interested in an error value that I can check to tell if my CreateIndex call is being called on a collection for which that index already exists (ErrIndexExist or similar). 
Sorry, that this is a derivative topic.", "username": "John_Rinehart" }, { "code": "mongo.CommandError", "text": "For others: A list of errors can be found here: mongo/error_codes.yml at master · mongodb/mongo · GitHubAlso, the error can be asserted as a mongo.CommandError: Issue of creating index with default name using Go driver - #2 by Divjot_Arora .", "username": "John_Rinehart" }, { "code": "var Client *mongo.Client\nfunc ConnectDb() bool {\n\tduration := // some duration\n\tctx, cancel := context.WithTimeout(context.Background(), duration)\n\tdefer cancel()\n\n\tmonitor := &event.PoolMonitor{\n\t\tEvent: HandlePoolMonitor,\n\t}\n\n\tclient, err := mongo.Connect(ctx,\n\t\toptions.Client().\n\t\t\tApplyURI(/*atlas connection uri*/).\n\t\t\tSetMinPoolSize(/*min pool size*/).\n\t\t\tSetMaxPoolSize(/*max pool size*/).\n\t\t\tSetHeartbeatInterval(/* some duration*/).\n\t\t\tSetPoolMonitor(monitor))\n\tif err != nil {\n\t\treturn false\n\t}\n\tClient = client\n\treturn Ping()\n}\n\nfunc reconnect(client *mongo.Client) {\n\tfor {\n\t\tif ConnectDb() {\n client = Client\n\t\t\tbreak\n\t\t}\n\t\ttime.Sleep(time.Duration(configuration.AppConfig.MongoDb.ReconnectInterval) * time.Second)\n\t}\n}\n\nfunc HandlePoolMonitor(evt *event.PoolEvent) {\n\tswitch evt.Type {\n\tcase event.PoolClosedEvent:\n\t\tlogging.AppLogger.Error(\"DB connection closed.\")\n\t\treconnect(Client)\n\t}\n}\n\nfunc main() {\n ginRouter.GET(\"/health\", func(context *gin.Context) {\n\t\tPing()\n\t\tcontext.JSON(http.StatusOK, gin.H{\n\t\t\t\"status\": \"System is up and running.\",\n\t\t})\n\t})\n\n}\nfunc Ping() bool {\n\tif err := Client.Ping(context.TODO(), nil); err != nil {\n\t\treturn false\n\t}\n\treturn true\n}\n", "text": "Hi @Divjot_Arora,To achieve: Implement reconnect logic.With several documents, I have seen two field present in the mongoDB docs for node.\nI am unable to find the same variable based configuration in the mongo-go-driver 1.3.1. However as you have mentioned that it has retry mechanism present already, I am unable to implement the same.As a alternative solution, I came up with the below.Testing scenario:Efforts made to resolve:\nI tried using monitor event(PoolClosedEvent) and tried to reconnect.Problem:\nThough this code works but I believe there is a better way provided by you guys for the same considering node.js docs.Thanks & Regards\nAnkush Goyal", "username": "Ankush_Goyal" }, { "code": "Client.DisconnectPoolClosed", "text": "Hi @Ankush_Goyal,Can you answer these questions so I can get a better understanding of what you’re trying to do?What does “Implement reconnect logic” mean in an actual application? In your example, you manually call Client.Disconnect but I assume a production application wouldn’t disconnect a Client that’s still being used. Also, you’re checking for PoolClosed events, but those only happen when the Client is being disconnected or if someone tries to check out a connection using a previously disconnected Client.You mentioned you’ve seen two fields present in the Node driver documentation and can’t find the analogous fields for Go. What are these fields? Maybe we can help you figure out how to do the same in Go.Also, as I mentioned in a previous reply, the driver will automatically handle things like elections and server restarts. 
Obviously, some operations like writes and reads against the primary may fail if the primary node is unavailable for an extended amount of time, but the driver will find the node again when it comes back up.One final note: I recommend upgrading driver versions to 1.3.2 or even better to 1.3.3, which was released two days ago. In 1.3.2, we fixed a regression that was introduced in 1.3.0 and caused a deadlock if a connection encountered a network error during establishment. You can find a list of released versions on Releases · mongodb/mongo-go-driver · GitHub.", "username": "Divjot_Arora" }, { "code": "Client.DisconnectClient.Disconnect", "text": "Hi @Divjot_Arora,For point 1,For point 2,Thanks for the heads on new version release. Will make sure to check it out.Thanks & Regards\nAnkush Goyal", "username": "Ankush_Goyal" }, { "code": "Client.DisconnectPoolClosedmlaunchdb.shutdownServer()rs.stepDown()", "text": "Client.Disconnect isn’t a great way to test this because that’s generally considered a cleanup/tear-down method that closes resources that are in-use and publishes some events like PoolClosed. It’s not really meant to be a transient state that you can “recover” from. If you want to test what happens when the server goes down, you can do this locally. I recommend looking into orchestration tools like mlaunch (http://blog.rueckstiess.com/mtools/mlaunch.html) to help you quickly start up replica sets locally.", "username": "Divjot_Arora" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Gracefully reconnecting with the Go driver
2020-03-01T22:02:06.208Z
Gracefully reconnecting with the Go driver
14,199
https://www.mongodb.com/…a_2_1024x43.jpeg
[]
[ { "code": "", "text": "Hi, I’m getting an error using mongoimport (version r4.2.6) saying that it cannot decode array into a D. I’ve already added the --jsonArray option. The json was created by a python API that I’m running from the command line.here’s the command:\nmongoimport --host Bible-shard-0/bible-shard-00-00-pdnuf.mongodb.net:27017,bible-shard-00-01-pdnuf.mongodb.net:27017,bible-shard-00-02-pdnuf.mongodb.net:27017 --ssl --username rsfarrell --password MY_PWD --authenticationDatabase admin --db Sermons --collection Rick_Warren --jsonArray --type json --file rick.jsonI’ve attached the error.\nmongoimport error1694×72 28 KB\nAny ideas? I’m happy to send over the json but I’m not able to upload it here. Thanks for any help!Rich", "username": "Rich_Farrell" }, { "code": "", "text": "Hi @Rich_Farrell,I’m getting an error using mongoimport (version r4.2.6) saying that it cannot decode array into a D.The error related to an array type from the document(s) in the file.\nYou mentioned that the JSON was created by a Python API. If the documents in the file is consistent in structure, could you provide just one document ?Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi Wan, when I drag/drop the file to this JSON validator (https://jsonlint.com/), it says that it is a valid JSON. \nJSONLint valid - lucilleball.json1119×771 55.7 KB\n", "username": "Rich_Farrell" }, { "code": "", "text": "Hi Wan, I’ve attached the image of the json file. Is there a way for me to send you the actual json? If I edit the file down myself then mongo imports it properly (I’m using Compass / Add Data / Import File from within the Collection). The file size is 21k.\nlucilleball-11825×950 610 KB\n", "username": "Rich_Farrell" }, { "code": "", "text": "Hi Wan, 2nd part of json (I’m only able to upload 1 image at a time).\n\nlucilleball-21797×814 513 KB\n", "username": "Rich_Farrell" }, { "code": "", "text": "Hi @Rich_Farrell it would probably help if you put the file in a pastebin, gist or something similar (if possible) so @wan can download and work with the actual data.", "username": "Doug_Duncan" }, { "code": "", "text": "Brilliant - thank you!", "username": "Rich_Farrell" }, { "code": "", "text": "@wan - here is the pastebin - lucilleball - Pastebin.com", "username": "Rich_Farrell" }, { "code": "json[ [ {\"text\":\"one\"}, {\"text\":\"two\"} ] ]\n[ {\"text\":\"one\"}, {\"text\":\"two\"} ]\nmongoimport", "text": "Hi @Rich_Farrell,Thanks for providing a sample file. Looking at the file, the problem with the json file is that it contains an array of array documents. Currently this is the structure of your file:The format should be as below:Try updating your Python script to output an array of dictionaries instead of an array of array of dictionaries, and try running mongoimport again.Regards,\nWan.", "username": "wan" }, { "code": "wc -l", "text": "Personally, I would modify the Python code to output one document per line as it is done by default by mongoexport. Benefits", "username": "steevej" }, { "code": "", "text": "Thanks Wan and @steevej …I’ve emailed the programmer to ask if he can make your recommended changes.", "username": "Rich_Farrell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoimport error - Failed: cannot decode array into a D
2020-05-06T18:04:38.238Z
Mongoimport error - Failed: cannot decode array into a D
33,358
null
[]
[ { "code": "", "text": "HiI don’t seem to be able to adjust my email? I can on Mongo University. But I don’t have the option it seems on the community forum?ThanksJonny ", "username": "Jonny" }, { "code": "", "text": "It would be helpful to add alternative email Id and changing the email Id. As for some who have logged in via some organization and left might want to shift it to personal email", "username": "Ankush_Goyal" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Unable to change email
2020-03-26T20:21:22.475Z
Unable to change email
3,494
null
[]
[ { "code": "", "text": "Hi Team, just wanted to know if the ticket would be available anytime in future release https://jira.mongodb.org/browse/SERVER-35649?", "username": "Joanne" }, { "code": "REMOVED/etc/hosts.conf", "text": "Hi @Joanne,Welcome to the MongoDB Community!Per the Jira metadata on SERVER-35649, this issue is currently Open in the Backlog and does not have a planned fixVersion yet. I suggest upvoting and watching the issue in Jira for updates.This issue appears to affect replica set members restarted while their hostname is not resolvable via DNS. If a replica set member cannot resolve its current hostname, it will be unable to find itself in the current replica set configuration and transition to a REMOVED state.The workaround is to restart affected replica set members after the DNS issue has been resolved.To mitigate this issue, you could ensure the local member hostname is always resolvable via redundant DNS servers and/or add the hostname to /etc/hosts.conf.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Got it. Thanks for the prompt reply!", "username": "Joanne" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Availability of fix for SERVER-35649
2020-05-07T22:17:56.369Z
Availability of fix for SERVER-35649
1,500
null
[]
[ { "code": "", "text": "Hi everyone,I want to create a local database/cluster in mongodb that I can use only my own device. I couldn’t figure it out.\nShould I follow these instructions: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/ ?\nI installed mongo shell, compass and atlas cluster for the course but I want to do it locally too.Sorry if this question is not related to the course content.Regards,\nBurak", "username": "Burak_Sakallioglu" }, { "code": "", "text": "Should I follow these instructions: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/ ?If you’re on Ubuntu then yes. M103 covers the server more in-depth.", "username": "007_jb" }, { "code": "", "text": "Great! Thank you for your answer ", "username": "Burak_Sakallioglu" }, { "code": "", "text": "", "username": "system" } ]
How to create local database instead of cloud Atlas
2020-05-08T07:54:00.626Z
How to create local database instead of cloud Atlas
1,880
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "I’m having doubt on security layer of realm in iOS. In the realm documentation, I found thisIn fact, in another article, i read that, “For Android, they use AES-256 level of encryption and decryption of all the data stored locally.Whereas for iOS applications, their encryption is based on the iOS CommonCrypto library, which protects the app data and passwords stored in the keychain”.\nIs CommonCrypto implemented under AES or how it is. what is the standard comes under CommonCrypto. or it uses same AES-256 level of encryption and decryption in iOS as well.Regards\nAnson", "username": "anson_tp" }, { "code": "", "text": "@anson_tp Yes both platforms use AES", "username": "Ian_Ward" }, { "code": "", "text": "Welcome to the community @anson_tp!The encryption format is AES-256 verified with a SHA2 HMAC. AES encryption is implemented via native libraries, for example CommonCrypto on iOS (as mentioned in the Realm Swift docs you referenced).There’s a more general description (and some further details) in the Encrypting Realms documentation:iOS, macOS, tvOS and watchOS versions of Realm use the CommonCrypto library whereas the Windows uses the built-in Crypto library and Android platforms use OpenSSL.The encryption library used is an implementation detail that is abstracted via the Realm SDKs. You just need to provide a 64 byte encryption key.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks @Stennie_X . It is very useful.", "username": "anson_tp" }, { "code": "", "text": "Thanks @Ian_Ward for the confirmation., it is useful.", "username": "anson_tp" } ]
Realm Security level
2020-05-07T14:10:06.069Z
Realm Security level
2,467
null
[ "licensing" ]
[ { "code": "", "text": "HI, we are a software service company and are planning to develop an accelerator that will be shared with customers. This tool will collect data from automation tool and store that data in MongoDB. Do we need to use an enterprise edition or it is okay to have a community edition. Do i need to contact for more clarification.", "username": "mahantesh_yalakki" }, { "code": "", "text": "Hey @mahantesh_yalakkiHere are the FAQs to the SSPL.However, Atlas will allow you to use MongoDB without the need to worry about the SSPL.The enterprise license is expensive (IMO as a startup). Which is why I went with Atlas. M10 cost me around $60 a month.When I contacted MongoDB for more info about the community license use, this was the answer I got. At first I did not like paying so much for the DB; however the whole infrastructure is run by MongoDB and now I have one less worry every month. And I can focus on the application now.", "username": "Natac13" }, { "code": "", "text": "Welcome to the community @mahantesh_yalakki!This tool will collect data from automation tool and store that data in MongoDB. Do we need to use an enterprise edition or it is okay to have a community edition.If your commercial application is just storing data in MongoDB, your customers can use either server edition. The SSPL license has important requirements if you are modifying server code or running MongoDB as a service, but does not otherwise impose usage restrictions.MongoDB Enterprise Advanced includes extra security, management, and enterprise tools as well as commercial support, but requires a commercial subscription. As @Natac13 mentioned, you could also consider using MongoDB Atlas for a managed database service running on MongoDB Enterprise.You may also want to find out more information about the MongoDB Partner Program. There are different types of partnerships (Technology, System Integrator, Reseller, and OEM) as well as levels of participation with associated benefits.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello @Stennie_X,\nRegarding your response referring to Enterprise Advanced it sounds like the main advantage to EA is increased security, some tools and support. That implies that, as far as storage limitations, database structure, etc., that there is little difference between the community server and EA. The reason I am asking such question is volunteer work I am involved in with a non-profit organization. All admin, dba, and developer work is supplied by volunteers. One concern is the “growability” of MongoDb community version. However, from what I have read there seems to be no advantage of EA in this area, and that there is no such limit other than possibly the size of each document, and then the storage and memory of the server. I believe community is a good option for an extremely budget challenged project. Have I missed anything in my research regarding community?", "username": "William_Brewbeck" }, { "code": "", "text": "HI @William_Brewbeck,Welcome to the MongoDB Community forum!Your assessment is correct: the Community Edition does not impose any limits on data volume or development features, and you can scale and grow your deployment using standard features like replication and sharding.If your non-profit organisation has limited resources, I would strongly consider using a managed service like MongoDB Atlas. 
Atlas includes security best practices and makes it easy to monitor, backup, and scale a deployment without having a dedicated DBA or extensive experience. There are also some Atlas features like temporary users that might be helpful for giving volunteers access for a limited period of time.You also have the option of a self-managed deployment, but would have to factor in the time and availability of your volunteers for monitoring, securing, and maintaining the deployment.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Licensing when it comes to develop commercial applications
2020-04-13T18:55:41.442Z
Licensing when it comes to develop commercial applications
11,662
null
[]
[ { "code": "", "text": "I have many thousands of records to upsert or adjust. Basically looking for the fastest way to get the data into the collection. I’m using pymongo, but I see only the most basic examples. Basically how should I format a list of records to perform an update_many (inserting if not already there). But if there is abulk operator that can do a better job faster I’ll take it.Each record is relatively small, and there are lots of them and potentially lots already in the collection so the number of matches/clashes is high.", "username": "Brett_Donovan" }, { "code": "mongo", "text": "There is Bulk.find.update() mongo shell method and the corresponding PyMongo bulk interface.How to format the updates with these bulk methods? I think little more details like a sample input document and what/how you are planning to update, will help discuss details further.", "username": "Prasad_Saya" }, { "code": "", "text": "There might be 10 thousands of these…tick = {“date”: dateObject, “price”: round(close,2), “ticker”: ticker}\ndb[locationTarget].update_one({“date”: dateObject, “ticker”: ticker}, {\"$set\": tick}, upsert=True)", "username": "Brett_Donovan" }, { "code": "UpdateOneBulkWriteResultupdate_onebulk_write()requests_list = [\n { UpdateOne( { ... }, { ... }, upsert=True },\n { UpdateOne( { ... }, { ... }, upsert=True },\n ...\n]\n{ UpdateOne( { ... }, { ... }, upsert=True }request_list{ UpdateOne( { 'date': dateObject, 'ticker': ticker }, { '$set': tick }, upsert=True }{ '$set': tick } { '$set': { 'date': dateObject, 'price': round(close,2), 'ticker': ticker } }dateticker$setupsert : Truedatetickerresult = bulk_write(requests_list, ordered=False)ordered=Falseresultmatched_countmodified_countupserted_countupserted_ids", "text": "Method and class used in this bulk update operation, using PyMongo:bulk_write(requests, ordered=True): Requests are passed as a list of write operation instances - UpdateOne in this case. Returns a BulkWriteResult object as result.The class pymongo.operations.UpdateOne(filter, update, upsert=False) represents an update_one operation. This is used with bulk_write().First, make (or build) a update requests list of all the updates:About:tick = { ‘date’: dateObject, ‘price’: round(close,2), ‘ticker’: ticker }Each { UpdateOne( { ... }, { ... }, upsert=True } in the request_list will have the following format:\n{ UpdateOne( { 'date': dateObject, 'ticker': ticker }, { '$set': tick }, upsert=True }The { '$set': tick } is not clear; I think you mean:\n { '$set': { 'date': dateObject, 'price': round(close,2), 'ticker': ticker } }Note that in case the date and ticker field values are not changing, no need to specify them in the $set clause.The Upsert Option:Since you are using the upsert : True update option, be sure that the query filter matches exactly one document. This means that the date and ticker combination must be unique for each document; an index on these two fields will make the update operation efficient.Next, run the bulk update operation.result = bulk_write(requests_list, ordered=False)The option ordered=False specifies that updates are not dependent on any previous individual updates in the list. The individual writes happen at any order and even when there is a failure with a write in between. Also, this has better performance than the ordered writes.The result is of type pymongo.results.BulkWriteResult. The following fields are of interest in this class: matched_count, modified_count, upserted_count, and upserted_ids.", "username": "Prasad_Saya" } ]
Update_many or bulk update
2020-05-07T08:49:58.931Z
Update_many or bulk update
5,963
null
[ "node-js", "react-native" ]
[ { "code": "0 info it worked if it ends with ok\n1 verbose cli [\n1 verbose cli 'C:\\\\Program Files\\\\nodejs\\\\node.exe',\n1 verbose cli 'C:\\\\Program Files\\\\nodejs\\\\node_modules\\\\npm\\\\bin\\\\npm-cli.js',\n1 verbose cli 'install',\n1 verbose cli '--save',\n1 verbose cli 'realm'\n1 verbose cli ]\n2 info using [email protected]\n3 info using [email protected]\n4 verbose npm-session ac314580881bb0e6\n5 silly install loadCurrentTree\n6 silly install readLocalPackageData\n7 http fetch GET 304 https://registry.npmjs.org/realm 1272ms (from cache)\n8 silly pacote tag manifest for realm@latest fetched in 1339ms\n9 timing stage:loadCurrentTree Completed in 1375ms\n10 silly install loadIdealTree\n11 silly install cloneCurrentTreeToIdealTree\n12 timing stage:loadIdealTree:cloneCurrentTree Completed in 1ms\n13 silly install loadShrinkwrap\n14 timing stage:loadIdealTree:loadShrinkwrap Completed in 2ms\n15 silly install loadAllDepsIntoIdealTree\n16 silly resolveWithNewModule [email protected] checking installable status\n17 silly fetchPackageMetaData error for node-addon-api@git+https://github.com/blagoev/node-addon-api.git#rjs Error while executing:\n17 silly fetchPackageMetaData undefined ls-remote -h -t https://github.com/blagoev/node-addon-api.git\n17 silly fetchPackageMetaData\n17 silly fetchPackageMetaData\n17 silly fetchPackageMetaData spawn git ENOENT\n18 silly fetchPackageMetaData error for node-pre-gyp@git+https://github.com/kneth/node-pre-gyp.git#add-node-14.0.0 Error while executing:\n18 silly fetchPackageMetaData undefined ls-remote -h -t https://github.com/kneth/node-pre-gyp.git\n18 silly fetchPackageMetaData\n18 silly fetchPackageMetaData\n18 silly fetchPackageMetaData spawn git ENOENT\n19 http fetch GET 304 https://registry.npmjs.org/ini 170ms (from cache)\n20 silly pacote range manifest for ini@^1.3.5 fetched in 174ms\n21 silly resolveWithNewModule [email protected] checking installable status\n22 http fetch GET 304 https://registry.npmjs.org/request 159ms (from cache)\n23 http fetch GET 304 https://registry.npmjs.org/node-fetch 296ms (from cache)\n24 http fetch GET 304 https://registry.npmjs.org/prop-types 287ms (from cache)\n25 http fetch GET 304 https://registry.npmjs.org/command-line-args 361ms (from cache)\n26 http fetch GET 304 https://registry.npmjs.org/deprecated-react-native-listview 359ms (from cache)\n27 http fetch GET 304 https://registry.npmjs.org/deepmerge 361ms (from cache)\n28 http fetch GET 304 https://registry.npmjs.org/progress 293ms (from cache)\n29 http fetch GET 304 https://registry.npmjs.org/https-proxy-agent 357ms (from cache)\n30 silly pacote range manifest for node-fetch@^1.7.3 fetched in 306ms\n31 silly resolveWithNewModule [email protected] checking installable status\n32 silly pacote range manifest for command-line-args@^4.0.6 fetched in 376ms\n33 silly resolveWithNewModule [email protected] checking installable status\n34 silly pacote range manifest for request@^2.88.0 fetched in 197ms\n35 warn deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142\n36 silly resolveWithNewModule [email protected] checking installable status\n37 silly pacote range manifest for prop-types@^15.6.2 fetched in 325ms\n38 silly resolveWithNewModule [email protected] checking installable status\n39 silly pacote range manifest for https-proxy-agent@^2.2.4 fetched in 390ms\n40 silly resolveWithNewModule [email protected] checking installable status\n41 silly pacote range manifest for progress@^2.0.3 fetched in 
329ms\n42 silly resolveWithNewModule [email protected] checking installable status\n43 silly pacote version manifest for [email protected] fetched in 397ms\n44 silly resolveWithNewModule [email protected] checking installable status\n45 http fetch GET 304 https://registry.npmjs.org/fs-extra 396ms (from cache)\n46 silly pacote range manifest for fs-extra@^4.0.3 fetched in 405ms\n47 silly resolveWithNewModule [email protected] checking installable status\n48 silly pacote version manifest for [email protected] fetched in 421ms\n49 silly resolveWithNewModule [email protected] checking installable status\n50 http fetch GET 304 https://registry.npmjs.org/tar 131ms (from cache)\n51 http fetch GET 304 https://registry.npmjs.org/url-parse 129ms (from cache)\n52 http fetch GET 304 https://registry.npmjs.org/sync-request 165ms (from cache)\n53 silly pacote range manifest for sync-request@^3.0.1 fetched in 169ms\n54 silly resolveWithNewModule [email protected] checking installable status\n55 http fetch GET 304 https://registry.npmjs.org/stream-counter 175ms (from cache)\n56 silly pacote range manifest for url-parse@^1.4.4 fetched in 158ms\n57 silly resolveWithNewModule [email protected] checking installable status\n58 silly pacote range manifest for tar@^6.0.1 fetched in 174ms\n59 silly resolveWithNewModule [email protected] checking installable status\n60 silly pacote range manifest for stream-counter@^1.0.0 fetched in 214ms\n61 silly resolveWithNewModule [email protected] checking installable status\n62 http fetch GET 304 https://registry.npmjs.org/node-machine-id 1042ms (from cache)\n63 silly pacote range manifest for node-machine-id@^1.1.10 fetched in 1053ms\n64 silly resolveWithNewModule [email protected] checking installable status\n65 timing stage:rollbackFailedOptional Completed in 1ms\n66 timing stage:runTopLevelLifecycles Completed in 2529ms\n67 silly saveTree [email protected]\n67 silly saveTree `-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree +-- [email protected]\n67 silly saveTree `-- [email protected]\n68 verbose stack Error: spawn git ENOENT\n68 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:267:19)\n68 verbose stack at onErrorNT (internal/child_process.js:469:16)\n68 verbose stack at processTicksAndRejections (internal/process/task_queues.js:84:21)\n69 verbose cwd C:\\Users\\HP\\Downloads\\testNode\n70 verbose Windows_NT 10.0.18363\n71 verbose argv \"C:\\\\Program Files\\\\nodejs\\\\node.exe\" \"C:\\\\Program Files\\\\nodejs\\\\node_modules\\\\npm\\\\bin\\\\npm-cli.js\" \"install\" \"--save\" \"realm\"\n72 verbose node v12.16.3\n73 verbose npm v6.14.4\n74 error code ENOENT\n75 error syscall spawn git\n76 error path git\n77 error errno ENOENT\n78 error enoent Error while executing:\n78 error enoent undefined ls-remote -h -t https://github.com/blagoev/node-addon-api.git\n78 error enoent\n78 error enoent\n78 error enoent spawn git ENOENT\n79 error enoent This is related to npm not being able to find a file.\n80 verbose exit [ 1, true ]\n", 
"text": "This is the log file for the failed installation", "username": "vivekananda_kalidind" }, { "code": "", "text": "@vivekananda_kalidind I don’t believe we support node 12", "username": "Ian_Ward" }, { "code": "gitnode-addon-api74 error code ENOENT\n75 error syscall spawn git\n76 error path git\n77 error errno ENOENT\n78 error enoent Error while executing:\n78 error enoent undefined ls-remote -h -t https://github.com/blagoev/node-addon-api.git\n78 error enoent\n78 error enoent\n78 error enoent spawn git ENOENT\n79 error enoent This is related to npm not being able to find a file.\nnode-addon-apigit", "text": "Welcome to the community @vivekananda_kalidind,The error message indicates you need to have git installed in order to fetch the node-addon-api installation dependency:The node-addon-api package is part of support for modern versions of Node via N-API, but I don’t see any callout for the git prerequisite in the installation instructions. I’ll follow-up with a GitHub issue for this if there isn’t one already.Realm React Native currently supports Node 10 or higher.Regards,\nStennie", "username": "Stennie_X" } ]
Unable to npm install realm for react native
2020-05-03T11:46:12.185Z
Unable to npm install realm for react native
5,193
https://www.mongodb.com/…d1aad3b4c1a9.png
[ "installation" ]
[ { "code": "", "text": "service MongoDB server failed to start to verify that you have sufficient privileges to start system service", "username": "ABHIMANYU_SHARMA" }, { "code": "", "text": "May be the user does not have privileges\nTry to install as admin", "username": "Ramachandra_Tummala" } ]
Error when installing MongoDB Enterprise Server
2020-05-07T06:33:08.407Z
Error when installing MongoDB Enterprise Server
2,247
https://www.mongodb.com/…03753726b7e4.png
[ "connecting", "serverless" ]
[ { "code": "", "text": "I’m new to MongoDB and currently working on a PoC to present to my Org. I’ve got connection to my cluster successful on Compass. Setup databases and collections but whenever I tried invoking a function, I keep getting timeout error. Locally everything works fine.I’m just wondering if anyone can point me to what I must have missed.See attached screenshots of the error messages, connection and cmd.1840×835 126 KB 21185×623 41.1 KB Capture1041×987 165 KB", "username": "Solomon_UDOH" }, { "code": "", "text": "Hi @Solomon_UDOH, welcome!Setup databases and collections but whenever I tried invoking a function, I keep getting timeout errorBased on the errors, would it be correct to assume that you are trying to execute a Node.js function in Serverless Framework ?\nCan you provide answers to the following:I would suggest to write a simple script to test the database connection. Please see MongoDB Node.js driver: Quick Start tutorial.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Function invocation timeout error after connecting to a cluster
2020-05-04T14:03:58.754Z
Function invocation timeout error after connecting to a cluster
2,512
null
[]
[ { "code": "", "text": "Hello, I’m quite used to SQL, but new to MongoDB.\nBefore asking I searched a lot on the Internet and in this forum, but didn’t come up with even some method to translate the following kind of SQL update into MongoDB:UPDATE Authors SET lastName=REPLACE(lastName, ‘DE’, ‘OF’) WHERE city LIKE ‘%new%’Is there any way to perform such an update through the simple shell? Maye I overlooked some relevant topic, being a newbie, even addressing to a relevant post could help!Thank you very much in advance!", "username": "Davide_Cicuta" }, { "code": "mongo", "text": "You can use the db.collection.updateMany mongo shell method to update a collection. Also, note that there are other update methods.The main arguments of the update method are the filter where you specify the condition, and the update where you set the modified values.There is no “LIKE” operator in MongoDB. But, you can use the $regex for specifying the same query condition.For replacing the character string you need to use the Aggregation Pipeline for the update. Aggregation Framework provides string expression operators to find / replace the characters.", "username": "Prasad_Saya" }, { "code": "", "text": "Wow, thank you very much, I appreciate the links a lot! \nI’ll have look, now I have a path to follow!\nThanks again!!!", "username": "Davide_Cicuta" } ]
Translating SQL update to MongoDB
2020-05-06T21:51:09.250Z
Translating SQL update to MongoDB
1,776
null
[ "data-modeling", "performance" ]
[ { "code": "", "text": "I’ve been making game replay system that saves player actions every 100ms as JSON file. Currently I’m using Java for it so I can convert json to map or text without delay. Average size of json is 8MB and maximum is 15MB. I used to upload json file to internet and whenever player wants to watch it again I downloaded from internet but it’s bad for me. If you consider traffic and usage of internet is bad for me. So I’ve decided to use database and tried Redis but it’s not one for this process. In the end of the day I’ve only one database remaining which is MongoDB.QuestionsNOTE: Decidated server and Mongo server is in same location and machine. You can think of decidated server features like an average system.I’d like to get answers to my questions.", "username": "Baris_Karamanli" }, { "code": "mongofiles", "text": "Hey @Baris_KaramanliSo for your questions:Document Size LimitThe maximum BSON document size is 16 megabytes.The maximum document size helps ensure that a single document cannot use excessive amount of RAM or, during transmission, excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.So no 15mb is fine, but you are close to the limit.This depends on the memory amount for your cluster / system. You want to make sure that your working set and indexes “fit into” the amount of RAM you have. But since you said it is the same machine then you would need to deduct the amount of memory your application is using I think.Do you mean by having a large db? This can be solved by sharding with Mongodb, which will allow you to split a collection(s) across multiple replica sets in the cluster.You can use a TTL index on any collection where you would like mongo to expire the document. So Mongo has you covered there!", "username": "Natac13" }, { "code": "", "text": "Welcome to the MongoDB community @Baris_Karamanli!As @Natac13 noted, you can store up to 16MB per document. However, I’d advise against doing so without consideration of the practical impact. Large documents will occupy more memory in your WiredTiger cache and doing any update on a large document still involves loading the full document into cache.If you are frequently only accessing or updating a small portion of a large document, I would reconsider your data model as you may be able to make more efficient use of RAM and I/O.The Building with Patterns blog series includes some helpful patterns which are also explored in the free online course M320: Data Modelling at MongoDB University.You’ll have to test this with your own deployment, but you can provision appropriately for your workload. 150MB of data every 10 minutes isn’t much, but if that is 150MB for each client of your game the usage will scale up quickly.I think this is similar to the previous two questions. Large documents may require provisioning more server resources, but if this is the best fit for your use case you can scale up server resources and eventually grow into a sharded deployment if appropriate.As @Natac13 mentioned, you can use a Time-To-Live (TTL) index to Expire Documents at a Specific Clock Time.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for the information. That’s really help me a lot. I’ll follow @Stennie and @Natac13 suggestions.", "username": "Baris_Karamanli" } ]
MongoDB performance with large documents
2020-05-06T08:27:51.036Z
MongoDB performance with large documents
11,407
null
[ "stitch" ]
[ { "code": "", "text": "Lately I’m having very bad performance drops on all of my functions. It takes 4-5 seconds to retrieve 10 documents. I thought that it could be the “cold start” of a function, but the performance does not improve when a function is called multiple times in a row. Performance changes day to day, but it’s very inconsistent.\nMy primary is located in Frankfurt, Stitch is set to Ireland and I’m calling functions from Germany.\nI tried running the same query directly against the mongodb cluster, and it executes in less than a ms.\nIs there an ongoing Stitch problem? Am I doing something wrong ?", "username": "Dimitar_Kurtev" }, { "code": "", "text": "Im my case the stitch functions is slow about 60 seconds or 90 seconds with timeout and no result, maybe a infrastructure problem", "username": "Alailson_Ribeiro" }, { "code": "", "text": "I had the same last week. It lasted about an hour and resolved by itself. I think they do some upgrades or something without any warning, which is a little bit annoying for me and my users …", "username": "Dimitar_Kurtev" }, { "code": "", "text": "\nWhatsApp Image 2020-05-06 at 17.50.58364×698 42.7 KB\nI contact chat support and pass more details with my tests and receive this feedback. I hope they get a solution ASAP…\n", "username": "Alailson_Ribeiro" }, { "code": "", "text": "Today, everything seems to be back to normal. Responses take 300-600 ms, not 3-6 seconds.", "username": "Dimitar_Kurtev" } ]
Degraded Stitch Performance
2020-05-05T18:07:20.026Z
Degraded Stitch Performance
2,225
null
[ "indexes" ]
[ { "code": "db.collection.find({\"_id\" : \"669486112345\"}).pretty()\n{\n \"_id\" : \"669486112345\",\n \"subscriber\" : {\n \"msisdn\" : \"669486112345\",\n \"state\" : 1,\n \"createdOn\" : \"2020-04-23 22:13:35.228\",\n \"updatedOn\" : \"2020-04-23 22:13:35.228\",\n \"lbChargeCode\" : \"\",\n \"userClass\" : \"1\",\n \"imei\" : \"\",\n \"tg\" : {\n },\n \"arrayfileds\" : [\n {\n \"subsId\" : \"5217331305876549695359482\",\n \"state\" : 1,\n \"prvsState\" : 0,\n \"createdOn\" : \"2020-04-23 22:16:09.584\",\n \"updatedOn\" : \"2020-04-23 22:16:09.584\",\n \"productId\" : 4000,\n \"opid\" : 0,\n \"reqChrgCode\" : \"49880127010\",\n \"cCode\" : \"49880127010\",\n \"lbPrice\" : 0,\n \"serviceId\" : 3000,\n },\n {\n \"subsId\" : \"5217331305876549695359482\",\n \"state\" : 1,\n \"prvsState\" : 0,\n \"createdOn\" : \"2020-04-23 22:16:09.584\",\n \"updatedOn\" : \"2020-04-23 22:16:09.583\",\n \"productId\" : 4000,\n \"opid\" : 0,\n \"reqChrgCode\" : \"49880127010\",\n \"cCode\" : \"49880127010\",\n \"lbPrice\" : 0,\n \"serviceId\" : 3000,\n },\n {\n \"subsId\" : \"5217331305876549695359482\",\n \"state\" : 1,\n \"prvsState\" : 0,\n \"createdOn\" : \"2020-04-23 22:16:09.584\",\n \"updatedOn\" : \"2020-04-23 22:16:09.583\",\n \"productId\" : 4000,\n \"lbPrice\" : 0,\n \"serviceId\" : 3000,\n }\n ]\n}\n", "text": "Hi All,We have a document as below. Need to have only unique combination of productId and serviceId in “arrayfileds” array.We have added unique index as below, but still duplicates with serviceId , productId combination are being allowed.Could you help is restricting the duplicates in array.Index:db.collection.createIndex( { _id : 1, “subscriber.msisdn”: 1, “arrayfileds.serviceId”: 1, “arrayfileds.productId”: 1 }, { unique: true } )", "username": "sai_krishna" }, { "code": "", "text": "Welcome to the community @sai_krishna !It’s not currently possible, with an index, to ensure uniqueness of the elements within an array as you can see here.\nFor “simple” elements, we could be smart and use the $addToSet operator but for object elements it’s impossible.It’s something already known as you can see SERVER-1068.Two solutions for you:", "username": "Gaetan_MORLET" }, { "code": "", "text": "Hi Gaetan_MORLET,Thanks for the response,– Implement it in your application.Any specific operator can be used to overcome this. 
What can be the approach if we have to implement it in application .Regards,\nSai", "username": "sai_krishna" }, { "code": "", "text": "My method is simple.db.collection.find ({_ id: “669486112345”, arrayfileds: {$elemMatch: {productId: 4000, serviceId: 3000}}}).count()db.collection.updateOne({_id: “669486112345”},{$addToSet: {arrayfileds: {productId: 4000, serviceId: 3000,…}}})To be more efficient during find(), make an index like this:db.collection.createIndex( {\"_id\": 1, “arrayfileds.serviceId”: 1, “arrayfileds.productId”: 1 })\nor\ndb.collection.createIndex( {“subscriber.msisdn”: 1, “arrayfileds.serviceId”: 1, “arrayfileds.productId”: 1 })because in your case “_id” and “subscriber.msisdn” are the same value.This is how i would do Any better idea @Prasad_Saya ?", "username": "Gaetan_MORLET" }, { "code": "productIdserviceIdarrayfiledsNEW_DOC = { // sample element to be added\n \"productId\": 22,\n \"serviceId\": \"service-22\",\n \"othersFields\": \"xyz\"\n}\n\ndb.collection.updateOne(\n { _id: ObjectId(\"5eaab2df1347cc3a123a2878\"),\n arrayfileds: { $not: { $elemMatch: { productId: NEW_DOC.productId, serviceId: NEW_DOC.serviceId } } }\n },\n { $push: { arrayfields: NEW_DOC } }\n)\nproductIdserviceId", "text": "@Gaetan_MORLET Hello To make sure that no duplicate values of the productId and serviceId (of arrayfileds array) are not introduced into the array during update operations, you can try this:This will make sure that the new element is added to the array - only if the productId and serviceId do not exist.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Prasad_Saya, @Gaetan_MORLETThank you very much for the inputs This worked for me.we are also checking with an application for race condition. I will post you the status once tested.\nCould you help with the script to remove the existing duplicates with the same condition.", "username": "sai_krishna" }, { "code": "$groupproductIdserviceIdarrayfieldsproductIdserviceId", "text": "remove the existing duplicatesThis can be done in two steps:Here are couple of approaches, you can use one of them.The aggregation framework’s $group stage allows group documents by the productId and serviceId, and get the counts for each document. When the count is greater than 1, it means that there are duplicates in the array; so filter by the count and and get the unique identifier for the embedded document within the arrayfields array. Use this information to remove the duplicate array elements with an update operation.Another approach is to group by productId and serviceId, and collect all the array elements (into an array). Next, keep the first element and remove the remaining elements - this will remove the duplicates from the array. 
Use this aggregation result to update your collection through an update operation.Please note, I will not be writing scripts for this ", "username": "Prasad_Saya" }, { "code": "db.getCollection(\"collection\").aggregate([{$unwind: {path : \"$arrayfileds\",}},{$group: { _id : { _id: \"$_id\" , ProductId : \"$arrayfileds.productId\" , serviceId:\"$arrayfileds.serviceId\"},dups: { $addToSet: \"$_id\" },count: { $sum: 1 }}},{$match: {count: {\"$gt\": 1}}}])\n\n\nLoop to remove duplicats: \n\nvar duplicates = [];\ndb.getCollection(\"collection\").aggregate([{$unwind: {path : \"$arrayfileds\",}},{$group: { _id : { _id: \"$_id\" , ProductId : \"$arrayfileds.productId\" , serviceId:\"$arrayfileds.serviceId\"},dups: { $addToSet: \"$_id\" },count: { $sum: 1 }}},{$match: {count: {\"$gt\": 1}}}]).forEach(function(doc) {\n doc.dups.shift(); \n doc.dups.forEach( function(dupId){ \n duplicates.push(dupId); \n }\n ) \n});\nprintjson(duplicates);\n", "text": "Hi,I am able to fetch all the “_id” s which are having duplicates in its array with the below aggregation.However, the for loop is failing to remove duplicates. Am i missing something here?Query find duplicates.", "username": "sai_krishna" }, { "code": "", "text": "This worked for me… We are able to overcome duplicated even with RACE condition.Thanks", "username": "sai_krishna" } ]
Unique key on array fields in a single document
2020-04-29T13:35:36.149Z
Unique key on array fields in a single document
18,069
null
[ "queries" ]
[ { "code": "{\n \"name\": \"GT\",\n \"brand\": \"Ford\"\n},\n{\n \"name\": \"Carrera GT\",\n \"brand\": \"Porsche\"\n}\n", "text": "Hey, maybe im nab, but idk how to do this:\nmy data is like:The user will type “ford gt”, how i can find the right doc?\nim using mongoose.", "username": "Vincent_BERNHARDT" }, { "code": "db.cars.createIndex( { name: \"text\", brand: \"text\" } )db.cars.find( { $text: { $search: \"gt\" } } ) // this returns both documentsdb.cars.find( { name: /^gt/i } ) // this returns the \"Ford GT\" only", "text": "There are two ways you can search text: (i) text search and (ii) regex search.The case you had posted probably is a simple one; still, a text search can get both the documents, and a regex search can get the intended one.The search method mostly depends upon the data you have in the fields. A good sampling of data and the application requirement can result in a specific solution. That said, a combination of the two methods might work too.Sample code:Text search requires creating a text index, e.g.,db.cars.createIndex( { name: \"text\", brand: \"text\" } )Get the words from the user input, and search with a word:db.cars.find( { $text: { $search: \"gt\" } } ) // this returns both documentsWith regex search:db.cars.find( { name: /^gt/i } ) // this returns the \"Ford GT\" only", "username": "Prasad_Saya" }, { "code": "", "text": "Yes i was using regex, just had somes problems. Thx u, i have solution rn.Have nice day", "username": "Vincent_BERNHARDT" }, { "code": "", "text": "Hi Prasad, I’m in Compass and trying to do a search for text using $regex. I am successfully able to search for one term “Saddleback” but how do I also search for a 2nd term like “ranch”? I’d like to be able to search AND or OR. Here’s my current Filter that is working: {text:{$regex:“Saddleback”}}Thanks for any help you can provide.Rich", "username": "Rich_Farrell" }, { "code": "{ \"_id\" : 1, \"a\" : \"saddleback\" }\n{ \"_id\" : 2, \"a\" : \"saddleback ranch\" }\n{ \"_id\" : 3, \"a\" : \"java ranch\" }\n{ \"_id\" : 4, \"a\" : \"saddleback kick ranch\" }\n{ \"_id\" : 5, \"a\" : \"database rebel\" }\ndb.collection.find( { a: { $regex: '(?=.*saddleback)(?=.*ranch)' } } ){ \"_id\" : 2, \"a\" : \"saddleback ranch\" }\n{ \"_id\" : 4, \"a\" : \"saddleback kick ranch\" }\ndb.collection.find( { a: { $regex: 'saddleback|ranch' } } )|{ \"_id\" : 1, \"a\" : \"saddleback\" }\n{ \"_id\" : 2, \"a\" : \"saddleback ranch\" }\n{ \"_id\" : 3, \"a\" : \"java ranch\" }\n{ \"_id\" : 4, \"a\" : \"saddleback kick ranch\" }\n", "text": "Here are some examples for AND and OR regex search:Sample documents:AND:db.collection.find( { a: { $regex: '(?=.*saddleback)(?=.*ranch)' } } )returns:OR:db.collection.find( { a: { $regex: 'saddleback|ranch' } } )Note the regex or operator is |. When using make sure there is no space between the words and the operator.returns:Reference: Documentation on MongoDB $regex", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you! I tried different parameters and this one worked: {text: {$regex: ‘Saddleback|ranch’}}", "username": "Rich_Farrell" } ]
Matching values in two fields
2020-04-14T20:34:36.583Z
Matching values in two fields
3,732
null
[]
[ { "code": "db.users.aggregate(\n [\n { $sort : { age : -1, posts: 1 } } // age may not be the first key when serialized in a language/runtime that doesn't guarantee serialized key ordering\n ]\n)\n> x = {a:1, 10:1}\n{ \"10\" : 1, \"a\" : 1 }\n", "text": "I know that the mongo shell docs say $sort parses the fields in left to right order, but what happens when it’s in a language/runtime that doesn’t guarantee object key order?e.g. https://docs.mongodb.com/manual/reference/operator/aggregation/sort/#ascending-descending-sortCan we specify $sort in an array for example to ensure that fields are sorted in the desired order?Also of concern, the doc instructions is not necessarily true for the shell either as evidenced by this open bug from 2013: https://jira.mongodb.org/browse/SERVER-11358 (which is still the case when tested on mongo 4.2.6)", "username": "juniorprogrammer" }, { "code": "mongomongoMapSONmongoMapdata.2data2data", "text": "Welcome to the MongoDB Community @juniorprogrammer!JavaScript objects (and analogous data structures such as dictionaries and hashes in other programming language) do not guarantee ordering of keys. This is definitely a consideration when ordering can be significant in the BSON format used by the MongoDB server.The mongo shell will preserve the order of alphanumeric keys in JavaScript objects, but numeric keys (or strings that look like numbers) will be re-ordered to the front of the object as per your example and the open issue you referenced. This behaviour is part of the JavaScript language implementation used by the mongo shell (currently the MozJS runtime).Most languages provide an order-preserving data type which can be used as an alternative (for example, Map in modern versions of Node.js). Official MongoDB drivers include a helper class if there isn’t a native data type, such as the SON class in the Python driver).The mongo shell (as at MongoDB 4.2) has a historical implementation of Map that differs from modern JavaScript, so this is a lingering issue that still needs to be resolved.However, I strongly recommend avoiding numeric key names as this is both a straightforward workaround for this issue and avoids some syntactic ambiguity between array references and embedded fields. Using dot notation, a reference like data.2 could either be referring to the 3rd element in the 0-based data array or a field 2 embedded within data . Using numeric field names may also result in unexpected outcomes (such as backfilling an array) with the right combination of update syntax and documents.Regards,\nStennie", "username": "Stennie_X" }, { "code": "Mapconst sortMap = new Map();\nsortMap.set('a', 1);\nsortMap.set('10', 1);\nusers.aggregate(\n [\n { $sort : sortMap } \n ]\n)\n", "text": "Hi @Stennie_X,Thanks for that quick reply, so if I’m understanding you correctly here, you are suggesting for node, we use a Map object rather than a plain old JS object? for example something like the below to ensure “a” gets sorted before “10” in case of some strange JS run time:Thanks for the guidance once again!", "username": "juniorprogrammer" }, { "code": "MapMap()", "text": "Thanks for that quick reply, so if I’m understanding you correctly here, you are suggesting for node, we use a Map object rather than a plain old JS object? 
for example something like the below to ensure “a” gets sorted before “10” in case of some strange JS run time:Hi,I’m actually suggesting you avoid using numeric field names so a workaround is not required, but Map() is the correct approach for an order-preserving data structure in JavaScript.You don’t typically see this used in examples since the default behaviour works as expected in JavaScript unless you use numeric field/key names. Semantic names describing the field context are much more common (and helpful for future readers of your data model / code).Regards,\nStennie", "username": "Stennie_X" }, { "code": "Map", "text": "Hi @Stennie_X,Thanks! This is super helpful!I totally understand what you’re saying regarding using alphanumeric rather than numeric field names and we’re currently using meaningful alpha field names. I was just made aware that in the ES standard, it didn’t necessarily dictate any specific order for the enumeration of properties of objects, so I just wanted to be prepared for backup options in case the current V8 engine behavior changes and this has been very helpful for that! I’m glad to know that the driver supports the order-preserving Map data structure in case V8 behavior changes in the future.Reference for anyone else looking at this in the future: Property ordering of [[Enumerate]] / getOwnPropertyNames()Thanks again!", "username": "juniorprogrammer" } ]
Aggregate $sort multiple fields in order
2020-05-06T22:48:30.413Z
Aggregate $sort multiple fields in order
11,902
null
[ "performance" ]
[ { "code": "", "text": "Hi Team,We do same amount of data-load every week on 2 different env.1st env: TEST box : 3.6 version (compatibility version is at 3.4 version)\n2nd env : TEST1 box : 4.0 version (compatibility version is at 3.6 version)every week we need to drop the collection and reload the collection from oracle to mongodb.In 3.6 it takes 3 hrs and 4.0 it is taking 5 hours.Is there any performance issues?\nAs we are using same hardware configuration and there is no change in data. So would like to know why it is taking too much time.collection name \tcount \t size of the collectionemp1\t 69 million \t 13 GB\t\nemp2\t 20 million \t 7gbCan this performance be improved?During slowness issues --> what all check-points we need to check to improvise our db performance?Regards", "username": "Mamatha_M" }, { "code": "", "text": "Team,Any update on this request?", "username": "Mamatha_M" }, { "code": "", "text": "Hi @Mamatha_M,There isn’t enough information to provide suggestions beyond instrumenting your environments so you have something to compare.How are you monitoring your deployments? I suggest you compare MongoDB and resource metrics between the two environments to identify differences that might be worth investigating.If these are both test environments, you could also try running the MongoDB 4.0 load test on the hardware that currently has 3.6. Despite similar hardware specs, there may be some other difference (such as drive speed or health) that may not be obvious from the high level spec.Regards,\nStennie", "username": "Stennie_X" } ]
Dataloading performance issues compared to 3.6 and 4.0 version
2020-05-02T19:52:42.769Z
Dataloading performance issues compared to 3.6 and 4.0 version
1,790
https://www.mongodb.com/…4_2_1024x512.png
[ "performance" ]
[ { "code": "", "text": "The snippet from this document:“If you have and use multiple collections, you must consider the size of all indexes on all collections. The indexes and the working set must be able to fit in memory at the same time”contradicts with FAQ doc for diagnostics:\nhttps://docs.mongodb.com/manual/faq/diagnostics/#memory-diagnostics-for-the-wiredtiger-storage-engine. In this doc it says:“Must my working set size fit RAM?\nNo”Can someone from mongodb help to make it more clear?", "username": "astro" }, { "code": "inMemorySizeGB", "text": "Welcome to the community forums @astro!Your first snippet from the docs is missing the opening context which is:For the fastest processing, ensure that your indexes fit entirely in RAM so that the system can avoid reading the index from disk.However, the must in this snippet is more correctly should . I’ll raise a DOCS pull request to fix the wording.Your concise quote from the FAQ is also correct, but the full FAQ answer includes useful elaboration on cache size and eviction.Both documentation pages are trying to suggest the same outcome: for best performance you will want your commonly used indexes and working set to fit in memory. This is not a strict requirement for most MongoDB storage engines (with one notable exception which I’ll mention in a moment). However, if your working set is significantly larger than available memory, performance will suffer as moving data to and from disk becomes a significant bottleneck.The one exception to this guidance is the In-Memory Storage Engine which is part of MongoDB Enterprise edition. The In-Memory Storage Engine provides predictable latency by intentionally not maintaining any on-disk data, and requires all data to fit within the specified inMemorySizeGB cache.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for clarifying, Stennie.A system with 100GB wotth working set may need ~3 shards(considering 32GB memory on every node). That sums up-to 3*3(PSS)= 9 nodes in the cluster and 12 nodes cluster including config servers. That’s the lot of hardware for 100GB data.Any suggestions how to optimize this case with trade-off b/w performance and less hardware.PS: The 100GB working set is thoughtful consideration, and already reduced to considerable extent. 32GB memory per node is the available configuration", "username": "astro" }, { "code": "", "text": "A system with 100GB wotth working set may need ~3 shards(considering 32GB memory on every node). That sums up-to 3*3(PSS)= 9 nodes in the cluster and 12 nodes cluster including config servers. That’s the lot of hardware for 100GB data.Any suggestions how to optimize this case with trade-off b/w performance and less hardware.Hi Astro,It really depends what flexibility you have in terms of your deployment configuration, how the working set impacts your workload, and anticipated future growth. Your provisioning for 100GB of data is factoring in data redundancy, failover, and performance. You can compromise some or all of those dimensions for cost savings.On a strict cost and effort basis, you could also have a more straightforward setup using a 3 member replica set with 128GB of RAM per server. 
On current hardware purchases, 32GB RAM is a decently spec’d laptop and server class machines are available well above 128GB RAM.Other ideas you could consider:If you don’t have the in-house expertise to do this capacity planning with confidence, I recommend engaging an experienced consultant, either from MongoDB’s Consulting team or one of our partners.Public forum discussion can provide some general advice, but a good consultant will spend the time to understand all of your business requirements and constraints for more holistic recommendations.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Working set MUST fit in memory?
2020-04-26T00:04:29.461Z
Working set MUST fit in memory?
5,502
null
[ "replication" ]
[ { "code": "", "text": "Hi,We have a simple P-S-S configuration and an use case where our source data which comes from a third party pipeline, removes the existing data and replaces it on the primary.Our secondary servers immediately replicates the data, but the functionality we want is redundancy. While our primary is being updated with data (deletion and insertion) , we want the secondaries to have the old data available for clients.Is this possible on our config?", "username": "Nicholas_Bridgemohan" }, { "code": "", "text": "May be this can help https://docs.mongodb.com/manual/core/replica-set-delayed-member/", "username": "steevej" } ]
Redundancy where source data on primary is deleted and replaced regularly
2020-05-06T20:22:10.006Z
Redundancy where source data on primary is deleted and replaced regularly
1,465
null
[ "sharding", "performance" ]
[ { "code": "", "text": "In an extremely write heavy intensive job for multiple days (with only ~4 small required indexes) mongodb will stop for minutes and do this over and over writing to disk heavily. Then when it’s done, it starts again but seemingly with more limited performance each time. What is happening and how can I prevent it?2020-04-12T15:41:34.914-0400 I STORAGE [WTCheckpointThread] WiredTiger message [1586720494:914549][78231:0x7fd3ef33a700], file:collection-17–5766071557703571556.wt, WT_SESSION.checkpoint: Checkpoint has been running for 2021 seconds and wrote: 5435000 pages (179714 MB)", "username": "Matthew_Zimmerman" }, { "code": "", "text": "Let me clarify since I now understand a little further. This was a 5 member replicaset configured as one shard. I have since moved on from this configuration although the underlying message still appears. To essentially “get around” this, I have moved spun up additional instances/shards on the same physical server, thus the performance penalty of “stopping accepting writes while I write out pages to disk” is somewhat further distributed.", "username": "Matthew_Zimmerman" }, { "code": "", "text": "Without more information I would guess that there is not enough memory for your workload.", "username": "steevej" }, { "code": "", "text": "The workload is extreme write heavy with no indexes (will generate those after most content is inserted). Basically I’m trying to figure out why mongodb needs to pause/slow-down to write out checkpoint. Why wouldn’t it be constantly writing these out?Other than adjusting the write concern journal to false and specifying a high maximum 500ms of https://docs.mongodb.com/manual/reference/configuration-options/#storage.journal.commitIntervalMs what else can I do to make it “batch writes”? I can’t turn off journaling anymore when you cluster (can’t run shards without replicasets (even of 1).", "username": "Matthew_Zimmerman" }, { "code": "", "text": "It’s actually the exact opposite. Too much memory let too many dirty pages hang around and then all must be written at the same time. Thank you percona for writing this up: https://www.percona.com/blog/2020/05/05/tuning-mongodb-for-bulk-loads/", "username": "Matthew_Zimmerman" } ]
Write performance drops on 5-config replica, 5 shard (2 arbiter) cluster
2020-04-12T23:07:21.480Z
Write performance drops on 5-config replica, 5 shard (2 arbiter) cluster
3,206
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello Developers,\nHope you are fine. I am new here. I want to convert a MySQL query in MongoDB statement, please help me. My query is…SELECT DISTINCT f1.*\nFROM followers AS f1\nINNER JOIN followers AS f2 on f1.user_id=f2.follower_id and f1.follower_id = f2.user_id\nwhere f1.user_id=any user Id.Thanks.", "username": "Pradeep_Maurya" }, { "code": "", "text": "The following links has information and examples about how to map SQL to MongoDB - both the terminology and the queries. You will be using an Aggregation query to convert the SQL query.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Pradeep_Maurya and welcome to the community.Being that your query is looking to join on multiple fields, this example in the documents can be used as the basis of your query.As @Prasad_Saya, there are a couple of documents that give information on joining documents from multiple collections.Note that depending on your use case however, you might get much better performance by embedding the fields of one document into the other.", "username": "Doug_Duncan" }, { "code": "", "text": "Welcome to the community @Pradeep_Maurya!Rather than directly translating your MySQL schema and queries into MongoDB, I would encourage you to consider the best data model to suit your use case. As @Doug_Duncan mentioned, there are other options in MongoDB (such as embedding) which may offer better performance and less complex queries for common use cases.Some helpful starting points for learning more about data modelling in MongoDB are:If you do have a question about querying in MongoDB, please provide more information to help others understand what you are trying to achieve: version of MongoDB server, example documents, desired outcome, and what you have tried.Regards,\nStennie", "username": "Stennie_X" }, { "code": "{\n\n $lookup:\n\n {\n\n from: \"follows\",\n\n let: { user_id: \"$user_id\", follower_id: \"$follower_id\" },\n\n pipeline: [\n\n { $match:\n\n { $expr:\n\n { $and:\n\n [\n\n { $eq: [ \"$follower_id\", \"$$user_id\" ] },\n\n { $eq: [ \"$user_id\", \"$$follower_id\" ] }\n\n ]\n\n }\n\n }\n\n \n\n }\n\n ],\n\n as: \"friends\"\n\n }\n\n},\n\n{$match:{\"user_id\":user_id}}\n", "text": "Hi @Stennie_X ,Thanks for reply.Actually I have a followers collection. I have put document with user_id and follower_id . I want to find all friends of a user X . (Condition For friend : all user follow to user X and X follow to all these users. ) .\nI hope you understand my condition.I have made this =>\ndb.follows.aggregate([]);But not fulfill my condition . I have get all other user data who have no any friends with my user X\nThanks.", "username": "Pradeep_Maurya" } ]
Convert MySQL query into MongoDB
2020-05-04T20:03:47.879Z
Convert MySQL query into MongoDB
17,636
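To make the join concrete, here is one way the mutual-follow ("friends of user X") query can be written, sketched with Python/PyMongo instead of the shell (not taken from the thread): the database name is an assumption, the collection and field names (follows, user_id, follower_id) follow the posts above, and the final match on a non-empty array is the part that filters out people who do not follow back, which seems to be what was missing from the last attempt.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumption
follows = client["social"]["follows"]                # db name is an assumption

def friends_of(user_id):
    pipeline = [
        # Only edges that start from user X.
        {"$match": {"user_id": user_id}},
        # For each account X follows, look for the reverse edge (they follow X back).
        {"$lookup": {
            "from": "follows",
            "let": {"uid": "$user_id", "fid": "$follower_id"},
            "pipeline": [
                {"$match": {"$expr": {"$and": [
                    {"$eq": ["$user_id", "$$fid"]},
                    {"$eq": ["$follower_id", "$$uid"]},
                ]}}},
            ],
            "as": "reverse",
        }},
        # Keep only rows where the reverse edge exists, i.e. a mutual follow.
        {"$match": {"reverse": {"$ne": []}}},
        {"$project": {"_id": 0, "friend_id": "$follower_id"}},
    ]
    return list(follows.aggregate(pipeline))
```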
null
[ "data-modeling", "golang" ]
[ { "code": "", "text": "In the struct the top level primitives are pretty straight forward, but how do I model more complex properties? I looked up object type and I think it’s bson.D ? See the Tomatoes property in the mflix sample movie collection from atlas I’m a beginner in both Go and MongoDb. Thanks everyone.", "username": "Chris_Kettenbach" }, { "code": "bsonbson.Dbson.Mtype Movie struct {\n Plot string `bson:\"plot\"`\n Cast []string `bson:\"cast\"`\n Tomatoes bson.D `bson:\"tomatoes\"`\n}\nbson.Dtype Viewer struct {\n Rating int32 `bson:\"rating\"`\n NumReviews int32 `bson:\"numReviews\"`\n Meter int32 `bson:\"meter\"`\n}\n\ntype Tomatoes struct {\n Viewer Viewer `bson:\"viewer\"`\n LastUpdated time.Date `bson:\"lastUpdated\"`\n}\n\ntype Movie struct {\n Plot string `bson:\"plot\"`\n Cast []string `bson:\"cast\"`\n Tomatoes Tomatoes `bson:\"tomatoes\"`\n}\n", "text": "Hey @Chris_Kettenbach,I’m not sure if you bumped into bson annotations for native Go data structures. They make it so that you can model and interact with your MongoDB documents directly without having to use primitives like bson.D, bson.M, etc.Here is a quick tutorial I wrote on the subject:Learn how to model MongoDB BSON documents as native Go data structures for seamless interaction with MongoDB collections.In regards to the mflix dataset, you could probably get away with doing something like the following:Of course, I personally wouldn’t want to use bson.D within my struct, so I would more than likely break it up into several data structures. This would leave me with something like the following:You might have to play around with the data structures a little bit because I didn’t test what I provided, but for the most part it should work out fine.Does that answer your question?Best,", "username": "nraboy" }, { "code": " {\n \"_id\": \"573a13faf29313caabdec42e\",\n \"genres\": [\n \"Drama\"\n ],\n \"runtime\": 90,\n \"title\": \"Desde allè\",\n \"tomatoes\": {\n \"viewer\": {}\n }\n }\n", "text": "It looks like this", "username": "Chris_Kettenbach" }, { "code": "package models\n\nimport (\n\t\"go.mongodb.org/mongo-driver/bson/primitive\"\n)\n\ntype Viewer struct {\n\tRating int32 `bson:\"rating,omitempty\" json:\"rating,omitempty\"`\n\tNumReviews int32 `bson:\"numReviews,omitempty\" json:\"numReviews,omitempty\"`\n\tMeter int32 `bson:\"meter,omitempty\" json:\"meter,omitempty\"`\n}\n\ntype Tomatoes struct {\n\tDvd int `bson:\"dvd,omitempty\" json:\"dvd,omitempty\"`\n\t//LastUpdated `bson:\"lastUpdated,omitempty\" json:\"lastUpdated,omitempty\"`\n\tViewer Viewer `bson:\"viewer,omitempty\" json:\"viewer,omitempty\"`\n}\n\ntype Movie struct {\n\tID primitive.ObjectID `json:\"_id,omitempty\" bson:\"_id,omitempty\"`\npackage repositories\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/hcabnettek/filmapi/models\"\n\t\"github.com/joho/godotenv\"\n\t\"go.mongodb.org/mongo-driver/bson\"\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\ntype repo struct{}\ntype movierepo struct{}\n\n// NewMongoRepository constructor function\n", "text": "This is exactly what I tried but the tomatoes field ends up empty. 
Also it didn’t seem to like time.Date for the lastUpdated, compiler says that’s not a type.Thanks for the help!", "username": "Chris_Kettenbach" }, { "code": "package main\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"time\"\n\n\t\"go.mongodb.org/mongo-driver/bson\"\n\t\"go.mongodb.org/mongo-driver/bson/primitive\"\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\ntype Viewer struct {\n\tRating float64 `bson:\"rating,omitempty\" json:\"rating,omitempty\"`\n\tNumReviews int `bson:\"numReviews,omitempty\" json:\"numReviews,omitempty\"`\n\tMeter int `bson:\"meter,omitempty\" json:\"meter,omitempty\"`\n}\n\ntype Tomatoes struct {\n\tDvd time.Time `bson:\"dvd,omitempty\" json:\"dvd,omitempty\"`\n\tLastUpdated time.Time `bson:\"lastUpdated,omitempty\" json:\"lastUpdated,omitempty\"`\n\tViewer Viewer `bson:\"viewer,omitempty\" json:\"viewer,omitempty\"`\n}\n\ntype Movie struct {\n\tID primitive.ObjectID `json:\"_id,omitempty\" bson:\"_id,omitempty\"`\n\tPlot string `json:\"plot,omitempty\" bson:\"plot,omitempty\"`\n\tGenres []string `json:\"genres,omitempty\" bson:\"genres,omitempty\"`\n\tRuntime int `json:\"runtime,omitempty\" bson:\"runtime,omitempty\"`\n\tTitle string `json:\"title,omitempty\" bson:\"title,omitempty\"`\n\tTomatoes Tomatoes `json:\"tomatoes,omitempty\" bson:\"tomatoes,omitempty\"`\n}\n\nfunc main() {\n\tclient, err := mongo.Connect(context.Background(), options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")))\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer client.Disconnect(context.Background())\n\n\tdatabase := client.Database(\"sample_mflix\")\n\tmoviesCollection := database.Collection(\"movies\")\n\n\tctx, _ := context.WithTimeout(context.Background(), 10*time.Second)\n\n\tvar movies []Movie\n\topts := options.Find().SetLimit(3)\n\tcursor, err := moviesCollection.Find(ctx, bson.M{}, opts)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tif err = cursor.All(ctx, &movies); err != nil {\n\t\tpanic(err)\n\t}\n\n\tdata, _ := json.Marshal(movies)\n\tfmt.Println(string(data))\n}\n[\n {\n \"_id\": \"573a1390f29313caabcd4135\",\n \"genres\": [\n \"Short\"\n ],\n \"plot\": \"Three men hammer on an anvil and pass a bottle of beer around.\",\n \"runtime\": 1,\n \"title\": \"Blacksmith Scene\",\n \"tomatoes\": {\n \"dvd\": \"0001-01-01T00:00:00Z\",\n \"lastUpdated\": \"2015-06-28T18:34:09Z\",\n \"viewer\": {\n \"meter\": 32,\n \"numReviews\": 184,\n \"rating\": 3\n }\n }\n },\n {\n \"_id\": \"573a1390f29313caabcd42e8\",\n \"genres\": [\n \"Short\",\n \"Western\"\n ],\n \"plot\": \"A group of bandits stage a brazen train hold-up, only to find a determined posse hot on their heels.\",\n \"runtime\": 11,\n \"title\": \"The Great Train Robbery\",\n \"tomatoes\": {\n \"dvd\": \"0001-01-01T00:00:00Z\",\n \"lastUpdated\": \"2015-08-08T19:16:10Z\",\n \"viewer\": {\n \"meter\": 75,\n \"numReviews\": 2559,\n \"rating\": 3.7\n }\n }\n },\n {\n \"_id\": \"573a1390f29313caabcd4323\",\n \"genres\": [\n \"Short\",\n \"Drama\",\n \"Fantasy\"\n ],\n \"plot\": \"A young boy, opressed by his mother, goes on an outing in the country with a social welfare group where he dares to dream of a land where the cares of his ordinary life fade.\",\n \"runtime\": 14,\n \"title\": \"The Land Beyond the Sunset\",\n \"tomatoes\": {\n \"dvd\": \"0001-01-01T00:00:00Z\",\n \"lastUpdated\": \"2015-04-27T19:06:35Z\",\n \"viewer\": {\n \"meter\": 67,\n \"numReviews\": 53,\n \"rating\": 3.7\n }\n }\n }\n]\n", "text": "Hey @Chris_Kettenbach,So the data structures 
I gave you were a little incorrectly formatted. Some of the data types should have been changed. My fault for not having tested them prior.I ran the following code:It seems that the results were as expected. Take a look at the following output from the code that I ran:Is this what you were looking for?Best,", "username": "nraboy" } ]
How do I model object data types in Go? Like mflix movie tomato property
2020-05-06T13:55:55.465Z
How do I model object data types in Go? Like mflix movie tomato property
3,732
https://www.mongodb.com/…032a856181b3.png
[]
[ { "code": "", "text": "\nimage846×216 13.5 KB\n", "username": "chris" }, { "code": "", "text": "Hmm… I get a couple more letters than you @chris!\nimage738×138 7.4 KB\nNot sure what’s going on. Maybe @Jamie can take a look once she comes online.", "username": "Doug_Duncan" }, { "code": "", "text": "Works fine for me.\n", "username": "kerbe" }, { "code": "<img src=\"https://www.mongodb.com/community/forums/uploads/default/original/2X/a/aa439e59cdc3a3dd2db06490420d9301deef67ad.svg\" alt=\"MongoDB Developer Community Forums\" id=\"site-logo\" class=\"logo-big\">", "text": "Guru Meditation:<img src=\"https://www.mongodb.com/community/forums/uploads/default/original/2X/a/aa439e59cdc3a3dd2db06490420d9301deef67ad.svg\" alt=\"MongoDB Developer Community Forums\" id=\"site-logo\" class=\"logo-big\">", "username": "chris" }, { "code": "shiftrefresh", "text": "Hi @chris, @Doug_Duncan, (and anyone else noticing UI quirks with the site logo):Can you confirm your specific browser version and O/S to help narrow down the problem?There were a few files inadvertently (and briefly) pushed for a theme being developed which has different CSS.These changes should have been backed out. but perhaps your browser may have cached incorrect versions. I just did a quick check on the latest versions of Chrome, Firefox, and Safari on macOS. Safari has slight cropping on the right of the “y” (but not as dramatic as the examples above), but the other browsers appear to be well-behaved at varying page widths.It would also be helpful if you could try shift+refresh on the MongoDB Community page (or more aggressively clearing your browser cache), which should fix the issue if it is related to caching.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Firefox 75.0 / Ubuntu 18.04 (Dev Tools Disable Cache)\nNormal hard refreshes do not update it. I’ll get more aggressive. Loads fine in a new Opera tab.", "username": "chris" }, { "code": "", "text": "https://www.mongodb.com/community/forums/uploads/default/original/2X/a/aa439e59cdc3a3dd2db06490420d9301deef67ad.svgActually opening the URL directly in a tab it is still cropped in firefox. Even after a full cache clear.", "username": "chris" }, { "code": "yCommun", "text": "Hi @Stennie_X the screen shot I provided was from Brave Browser ( Version 1.8.90 Chromium: 81.0.4044.129 (Official Build) (64-bit) on MacOS 10.15.4. I have hard refreshed but the logo is still the same.I did check out on other browsers I have with the following results:Firefox 76.0 the logo shows the full Community without any cutoff.Safari 13.1 the logo shows most of the logo with only a small part of the y cut off.Chrome 81.0.4044.138 shows only Commun (like the screenshot that @chris provided).Brave is my main and preferred browser. The other three browsers have not accessed the community forums before this test so it shouldn’t be a local caching issue. It does seem weird that four browsers however get different results.I can provide screenshots if necessary, just let me know.", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks for all the testing info, everyone. I’ll pass it to the designer and see if we can get a revised version to use. In the meantime, I’ll roll back to our old logo. ", "username": "Jamie" }, { "code": "", "text": "Wow, @Jamie you don’t mess around and get things done.Community has been removed from the logo\nimage920×246 21.8 KB\nI can’t wait for its return. The logo seems so empty now and it feels like something is missing from our cozy little community. 
", "username": "Doug_Duncan" }, { "code": "", "text": "Ditto. All good after a regular refresh.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Someone got a bit too enthusiastic with the Crop tool on the banner?
2020-05-06T13:06:32.254Z
Someone got a bit too enthusiastic with the Crop tool on the banner?
3,573
null
[ "node-js", "connecting", "atlas" ]
[ { "code": "", "text": "I am using glitch.com to learn from freeCodeCamp. The project is hosted on glitch.com for some reason its not able to connect to the mongoDB atlas.I already have 2 collections in a cluster. Is that the maximum limit or is the problem something else. My project is at Glitch :・゚✧", "username": "Balkrishna_Agawral" }, { "code": "", "text": "Hi @Balkrishna_Agawral and welcome to the community forums.My guess is that something else is going on as you should be able to make more than two collections in Atlas on even the smaller tiers, unless you have more data than the storage allows.You state you are having problems connecting to Atlas. Can you please provide information on the error you’re having? Without that anything would be speculation on our part and not helpful.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Balkrishna_Agawral,The Atlas Free Tier (M0) currently allows up to 100 databases and 500 collections in total, and up to 512MB of storage. For more information, please see Atlas M0 (Free Tier), M2, and M5 Limitations.The result of your write operation should return an error code/message which will provide more context on the issue.If your application isn’t able to connect at all, a good starting point would be confirming the credentials in your connection string as well as your Atlas whitelist entries. For troubleshooting steps, please see Connect via Driver in the Atlas documentation.Since Glitch requests may originate from a large range of IP addresses, it is more difficult to configure an effective whitelist. See What IPs must I whitelist so my Glitch project can access my MongoDB database? on the Glitch forums for some suggestions.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "The issue was that I was using an older version of mongoose. upgrading to the new version worked!", "username": "Balkrishna_Agawral" }, { "code": "", "text": "Hi @Balkrishna_Agawral,Thanks for confirming the resolution.If you can also comment with the Mongoose version that wasn’t working and the version that resolved your issue, this could be helpful for other users encountering the same problem in future.Regards,\nStennie", "username": "Stennie_X" }, { "code": "\"mongoose\": \"^4.7.2\",\n\"mongoose\": \"5.7.7\",\n", "text": "4.7.2 was not working5.7.7 is working.", "username": "Balkrishna_Agawral" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issues on Glitch connecting with MongoDB Atlas
2020-05-05T19:20:49.567Z
Issues on Glitch connecting with MongoDB Atlas
4,096
null
[ "mongoose-odm" ]
[ { "code": " Users1.find({\n _id:userId,\n 'log.date' :{\n $lte: to != 'Invalid Date' ? to.getTime() : Date.now(),\n $gte: from != 'Invalid Date' ? from.getTime() : 0\n }\n }\n", "text": "why isnt the ‘find’ working properly & filter based on dates is returning all the data in the DBAlso why is the below not working? why is the date still returning null & not Date.now()\nconst date = req.body.date != null ? req.body.date : Date.now() ;Thanks for the help ", "username": "Balkrishna_Agawral" }, { "code": "", "text": "you can find the complete code onSimple, powerful, free tools to create and use millions of apps.", "username": "Balkrishna_Agawral" } ]
Using find() with Mongoose
2020-05-06T14:16:06.626Z
Using find() with Mongoose
1,946
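The thread above never reaches a resolution, but two common causes with this kind of filter are comparing BSON Dates against numeric timestamps and treating an empty string as a missing date. Purely to illustrate the query shape (Python/PyMongo rather than Mongoose, and not from the thread), with the user collection name assumed and the log.date field taken from the snippet above:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumption
users = client["exercise"]["users"]                  # assumption

def users_with_logs_between(user_id, date_from=None, date_to=None):
    # Fall back to "beginning of time" / "now" when a bound is missing, and keep
    # both bounds as datetime objects so they compare correctly with BSON Dates.
    date_from = date_from or datetime(1970, 1, 1, tzinfo=timezone.utc)
    date_to = date_to or datetime.now(timezone.utc)
    # Matches users that have at least one log entry in the range; trimming the
    # embedded array itself would need $filter in an aggregation pipeline.
    return list(users.find({
        "_id": user_id,
        "log.date": {"$gte": date_from, "$lte": date_to},
    }))
```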
null
[]
[ { "code": "", "text": "currently we are using standalone mongodbInstance with 2core 8gb ram.\nwe have planned to move to replicaSet(3Node replica) based setup to handle more number of readWrite requests.\ni exported data using mongodump. Among these exported collections, some collections has size >400mb, some have 800mb.when i import using mongorestore command, i get runtime outofmemory error.\niam trying this out on 4core8gb ram Instance.To fix this problem , i thought that rate of which the collections are imported + replication happening at the same time can cause this issue. So i wrote a script that can restore all the collection one by one but with some delay.The intention behind having this delay is, this delay will give sufficient time to load a collection and once loaded, the script will pause for sometime, this will allow replication to secondaryNodes.This could freeup ram while loading the nextCollection. And i have added larger delay’s for larger collections.But even at this approach, the ram gets too low. And it takes lot of time to load all the collections due to the delay that i have set.is there a standard way to approach this problem in mongodbReplicaSet? Please help.", "username": "Divine_Cutler" }, { "code": "", "text": "You should have a look at the mongod logs too, you may be on the edge of catastrophe.Follow this to convert the standalone to a replica set.Once you have a replica set configured remember to update the clients connection strings/parameters.", "username": "chris" }, { "code": "", "text": "i checked the links but iam not able to find any details to restoring collections with high volume.the issue iam facing right now goes like this", "username": "Divine_Cutler" } ]
Memory issue: migrating database from standalone to replica set
2020-05-05T21:11:00.171Z
Memory issue: migrating database from standalone to replica set
2,088
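One way to replace the hand-tuned sleeps described above is to let the write concern provide the back-pressure: if every batch has to be acknowledged by a majority of the replica set, the loader can never run far ahead of replication. A rough Python/PyMongo sketch, not from the thread, with the URI, names, and batch size as assumptions; mongorestore itself also has --numParallelCollections and --numInsertionWorkersPerCollection flags to dial concurrency down.

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://node1,node2,node3/?replicaSet=rs0")   # assumption
coll = client["mydb"]["big_collection"].with_options(
    # Each batch is acknowledged only once a majority of members have it,
    # so replication lag throttles the load instead of fixed delays.
    write_concern=WriteConcern(w="majority")
)

def restore(doc_iter, batch_size=1_000):
    batch = []
    for doc in doc_iter:
        batch.append(doc)
        if len(batch) >= batch_size:
            coll.insert_many(batch, ordered=False)
            batch.clear()
    if batch:
        coll.insert_many(batch, ordered=False)
```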
null
[]
[ { "code": "", "text": "Out of curiosity, what is the editor used in the insertMany() lecture?", "username": "Fabio_Gil" }, { "code": "", "text": "Share a screenshot", "username": "007_jb" }, { "code": "", "text": "", "username": "Fabio_Gil" }, { "code": "", "text": "It looks like Sublime Text on a Mac. @Shubham_Ranjan can confirm.", "username": "007_jb" }, { "code": "", "text": "As per my knowledge, it’s Sublime Text .", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
Js editor used in the insertMany() lecture
2020-05-05T12:44:32.694Z
Js editor used in the insertMany() lecture
1,521
null
[ "legacy-realm-cloud" ]
[ { "code": "", "text": "All clients cannot connect, Realm Studio cannot connect, What happened?", "username": "Nic_Young" }, { "code": "", "text": "Hi @Nic_Young,There are currently no widespread incidents according to system monitoring (and as reported on the public Realm Cloud status page).Please create a support case for the team to investigate any issues with your Realm Cloud instances.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you Stennie.\nNo reply from Realm Support. Now it’s over 28 hours.\nMy clients are killing me.\nI have to say the support is terrible\nScreen Shot 2020-05-06 at 07.15.042586×1166 169 KB\n", "username": "Nic_Young" }, { "code": "", "text": "Hi Nic,Apologies for the delay in support response. Issues are triaged based on subscription plan and priority. The $30/month Community plan does not include a support SLA, but we do endeavour to respond as soon as we can. We also have subscriptions available for Developer & Production support.It looks like the issues you’ve filed are between 14 and 11 hours ago and there are multiple issues with different comments. I’ve merged those into a single case to keep the discussion focused and bumped this issue with the support team for investigation.This appears to be a problem specific to your instance, but needs someone from the support team to follow up.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you Stennie!\nEverything works now!\nHave a good day!", "username": "Nic_Young" } ]
Realm Cloud is down for 15 hours!
2020-05-05T12:12:41.886Z
Realm Cloud is down for 15 hours!
2,064
null
[ "sharding" ]
[ { "code": "", "text": "Dear All Experts,Here I have a 300+GB collection, which I need search documents by a specific date, and most likely around 2800 documents related to that one specified date. I just test this query on standalone Mongod, the response time is more than 2 seconds, which I need it to be less than 1 or even 0.5 seconds. Consequently, I’m considering sharding, is that hashed shard key on date field will fit my requirement?Good day!Best regards,\nYuyong", "username": "Yuyong_Liao" }, { "code": "", "text": "Have you try an index on the date before considering sharding. Performance will be affected by the size of the memory on the server. However processing 2800 documents at each query will necessary take some time and the network will become an important part. A schema change might be necessary. There are design patterns, Building with Patterns: A Summary | MongoDB Blog, that might help.", "username": "steevej" } ]
Hashed Shard Key or Ranged Shard Key
2020-05-06T09:45:23.598Z
Hashed Shard Key or Ranged Shard Key
1,335
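Before reaching for sharding, steevej's suggestion of an index is the first thing to try. A minimal sketch (Python/PyMongo; the collection name, field name, and dates are assumptions) of indexing the date field and pulling one day's worth of documents:

```python
from datetime import datetime, timedelta, timezone
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumption
coll = client["mydb"]["events"]                      # assumption

# A single-field index lets the ~2800 documents for one day be located
# without scanning the 300+ GB collection.
coll.create_index([("date", ASCENDING)])

day = datetime(2020, 5, 1, tzinfo=timezone.utc)
query = {"date": {"$gte": day, "$lt": day + timedelta(days=1)}}
docs = list(coll.find(query))

# explain() confirms an IXSCAN is used and shows how many documents were examined.
plan = coll.find(query).explain()
```

If the working set for those queries fits in memory, much of the remaining time is likely the transfer of 2800 documents over the network, which sharding alone will not remove.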
null
[]
[ { "code": "", "text": "I have question on MongoDB High Availability & Data Storage. Could you please anyone give answer for the below metrics\nregarding MOngoDBAvailability Percentage – ?\nRecoverability Duration – :\nDurability Percentage - ?\nData Resiliency Plan - ?", "username": "Matra_Use" }, { "code": "", "text": "What is maximum limit of DB Storage Size Per NodeThat would depends on the node. Memory and disk space dependent.What is Data Retention Period on MongoDBYou choose.In my opinion, your questions are much too vague to be able to provide an answer other than it depends.", "username": "steevej" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB High Availability and Durability
2020-05-06T08:35:23.657Z
MongoDB High Availability and Durability
1,817
null
[ "sharding" ]
[ { "code": "{\n \"Username\": \"Kushal\"\n}\n{\n \"ID\" : 3\n}\n", "text": "Hey, I am Kushal Shah, naive at Mongo.\nI request someone to explain me what would happen if we insert a document without shard key into a sharded collection.\nAs in, suppose my shard key looks like this:I insert a document which looks like the following:Please tell in which chunk the document would be store.If I query, would it need to scan through all the documents of the collection?", "username": "Shah_Kushal" }, { "code": "{\n \"Username\": \"Kushal\"\n}\nUsername: 1mongos> db.users.insert({ID : 3})\nWriteResult({\n\t\"nInserted\" : 0,\n\t\"writeError\" : {\n\t\t\"code\" : 61,\n\t\t\"errmsg\" : \"document { _id: ObjectId('5eb2713cf49a3bda89da595c'), ID: 3.0 } does not contain shard key for pattern { Username: 1.0 }\"\n\t}\n})\n", "text": "Hi @Shah_Kushal,Welcome to the MongoDB community!suppose my shard key looks like this:As is, this wouldn’t be a valid shard key. Trying to shard a collection with this would result in an error.For example, using MongoDB 4.2:“Shard key { Username: \"Kushal\" } can contain either a single ‘hashed’ field or multiple numerical fields set to a value of 1. Failed to parse field Username”However, assuming you sharded on Username: 1, lets see what happens when a document is inserted without the shard key:Every document inserted into a sharded collection must contain the shard key. A document will be associated with a single shard based on the shard key index.If I query, would it need to scan through all the documents of the collection?Queries based on the shard key (or a prefix of a compound shard key) will only target the relevant shards. Queries on a sharded collection that don’t fit that criteria will be broadcast to all shards. For more information, see Targeted Operations vs Broadcast Operations in the MongoDB documentation.To learn more about MongoDB I would recommend taking the free online courses at MongoDB University and following one of the learning paths (DBA or Developer).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Dear @Stennie_X, Immensely thankful to you for your answer. It clears my doubt. I would ask few more. Actually I am writing a term paper on MongoDB, focusing mainly on Replica Sets and Shards.\nApart from the MongoDB manual can you suggest any intuitive material?\nThanks in advance.", "username": "Shah_Kushal" }, { "code": "mlaunchmtoolsmtoolsmlaunch$ mlaunch --shards 2 --repl\nlaunching: \"mongod\" on port 27018\nlaunching: \"mongod\" on port 27019\nlaunching: \"mongod\" on port 27020\nlaunching: \"mongod\" on port 27021\nlaunching: \"mongod\" on port 27022\nlaunching: \"mongod\" on port 27023\nlaunching: config server on port 27024\nreplica set 'configRepl' initialized.\nreplica set 'shard01' initialized.\nreplica set 'shard02' initialized.\nlaunching: mongos on port 27017\nadding shards. can take up to 30 seconds...\n", "text": "Actually I am writing a term paper on MongoDB, focusing mainly on Replica Sets and Shards.\nApart from the MongoDB manual can you suggest any intuitive material?Hi,Can you clarify what sort of information you are looking for? 
It is probably more helpful to start with the documentation and explore specific questions that arise before getting into deeper implementation details that require more context.The MongoDB Manual provides a very comprehensive end user guide including several categories of Frequently Asked Questions.You can learn more about behaviour (such as the question you asked) by setting up a local test deployment or perhaps using MongoDB Atlas.For local test deployments I recommend using mlaunch which is part of the mtools Python package (see: mtools installation guide).I used mlaunch to quickly stand up a local sharded cluster to provide the example message in my original response to you. I already have MongoDB installed and in my path, so creating a cluster with 2 shards looks like:The free M103: Basic Cluster Administration online course at MongoDB University would also give you more insight. You can login using the same credentials as this forum.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Surely I would refer those part of the documentation.I have one more issue. Suppose that, a mongos is playing the role of a balancer redistributing the data over the shards. Now, we switch off that particular instance of mongos. Please tell what would happen then, as in would the balancing process roll back and stop, or would the mongos finish balancing and then stop.Secondly, while a particular chunk is being part of balancing process, suppose a client performs some operation on that chunk, how under the hood, the mongos would reflect those changes?", "username": "Shah_Kushal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Naive doubt regarding sharding
2020-05-06T08:03:09.972Z
Naive doubt regarding sharding
2,945
null
[]
[ { "code": "", "text": "I have switched my 7 year career in utility management and auditing to become a web developer and aspiring to become an all round developer. I started a small business called ACJ Intro Designs (Pty) Ltd to make an income from web development. now my CURRENT mission is to be certified in MongoDB but my compass doesnt go past the “activating plugins” it freezes…", "username": "Angelo_Hedley" }, { "code": "", "text": "Hi @Angelo_Hedley and welcome to the MongoDB community forums!As for the problem with your Compass issues I see you made this post for that.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi @Angelo_Hedley! Welcome to the community!", "username": "Jamie" } ]
Hi, I am Angelo a newly web developer From South Africa
2020-05-03T21:56:19.288Z
Hi, I am Angelo a newly web developer From South Africa
2,456
null
[ "change-streams" ]
[ { "code": "cluster_aggregate.cpprunAggregatewatchaggregateaggregateafterClusterTime", "text": "I’m very interested in the change stream, so I go through some documentation posts on the official website. But there is no document introduce the change stream inner details especially on sharding, so I read the source code in cluster_aggregate.cpp starts from runAggregate function in v4.0. However, I’m not quite understanding the dispatching and merging policy details. Hope to get help here.Here come my basic understandings about change streams in sharding, please let me know if I’m wrong:I’ve some questions about the dispatching and merging policy:", "username": "Vinllen_Chen" }, { "code": "{fullDocument: \"updateLookup\"}$changeStreamDocumentSourceOplogMatch: filters the oplog for all relevant events\nDocumentSourceChangeStreamTransform: transforms oplog entries into equivalent change stream entries\nDocumentSourceCheckInvalidate: determines whether the current event should invalidate the stream, e.g. a collection or database drop\nDocumentSourceCheckShardResumability: if a resume point was specified, checks that the shard's oplog history goes back at least that far\nDocumentSourceEnsureResumeTokenPresent: if a resume token was specified, this stage verifies that it appears in the resumed stream.\nDocumentSourceCloseCursor: after an invalidate, ensures that the cursor is closed and cleaned up correctly.\nDocumentSourceLookupChangePostImage: if {fullDocument: \"updateLookup\"} was specified, obtains and adds the full document into all change stream events of {operationType: \"update\"}\nstartAtOperationTimeresumeAfterstartAftermongoSclusterTimeclusterTimemongoS1$changeStreamshard1{clusterTime: 10:00}shard1clusterTimemongoS'clusterTimeshard1$changeStreammongoSstartAtOperationTimemongoSshard1shard2shard2shard1mongoSshardATshardBT+1mongoS'clusterTimeT+2T+2TT+1startAtOperationTimeresumeAfterstartAftermongoSpostBatchResumeToken", "text": "Hi Vinllen,Thank you for your questions, and for your interest in change streams!Your four-point outline of how change streams operates is broadly accurate. There are a couple of additional stages which may or may not be present depending on whether (for instance) you provided a token at which to resume the stream, or whether you specified the {fullDocument: \"updateLookup\"} option. Below, I have listed all the stages that $changeStream expands to, and where they run in a sharded context.The following stages run on the shards:The following stages run on mongoS:Question 1:A change stream which is opened with no explicit startAtOperationTime, resumeAfter or startAfter is a request to start the stream at the current time and to return all events which occur from that point on. This is why the mongoS adds the current clusterTime to the request it sends to the shards - so that all shards will begin returning results from the same moment in time. Bear in mind that the cluster-wide global logical clock works by attaching the current clusterTime to every message sent between members of the cluster; so in your example, when mongoS1 sends the $changeStream to shard1, it will also send {clusterTime: 10:00} along with the request. When shard1 receives the request and sees that its local clusterTime is outdated, it will adopt the mongoS' clusterTime instead. 
Therefore, the next oplog write on shard1 after it receives the $changeStream request will jump from 09:59 to 10:01 or later, and will be picked up by the stream.To understand why mongoS explicitly sets the startAtOperationTime, consider what would happen in your example if we did not do so. If the mongoS simply passed the request on to the shards, then shard1 would start reporting events at 09:59 and shard2 would start reporting at 10:01. But what if shard2 also had some events that occurred between 09:59 and 10:01, and which should therefore sort between the events from shard1? We would never see those events, which would violate change streams’ guarantee that no events in the cluster-wide sorted stream will ever be omitted.Similarly, it would be semantically incorrect for mongoS to read the current most-recent optime of each shard and choose either the earliest or latest as its starting point. Say the most recent event on shardA was at time T, the most recent event on shardB was at T+1, and the mongoS' current clusterTime is T+2. As discussed above, if the user opens a stream with no resume options then they are requesting a stream that returns everything from now on, i.e. everything that happens after T+2. If we were to consult the shards and start at either T or T+1, then we would be violating this request; we would be starting the stream at a point in the past, which is not what the user asked for. The only way to start a stream at a point in the past is to explicitly supply one of the startAtOperationTime, resumeAfter, or startAfter options.Finally, the ticket SERVER-31767 is not relevant to this issue. SERVER-31767 concerns storage-layer changes necessary to facilitate global reads at a specific point-in-time, but change streams do not use this feature - they read sequentially from the oplog, a collection whose history is always present.Question 2:Yes, there is a “wait policy” for change streams on mongoS; it cannot return an event from any shard until all other shards have caught up with that event. We implement this by tracking the minimum promised sort key from each shard. Each time we receive a response from a shard, the response includes a field called postBatchResumeToken, which is a promise that the shard will never produce an event that sorts earlier than that key. Before we can return the next event to the user, we must ensure that its sort key is lower than or equal to the minimum promised sort key across all shards. The minimum promised sort key can advance even if no events are returned from a particular shard, so an inactive shard cannot indefinitely block events from other shards from being returned to the user.Hope that helps! Please let me know if you have any further questions.Best regards,\nBernard", "username": "Bernard_Gorman" }, { "code": "", "text": "Thank you so so much for your reply, this is very helpful for me to understand change stream. The change stream is a good feature!", "username": "Vinllen_Chen" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
The dispatching and merging policy about change stream in MongoS
2020-04-13T04:01:33.658Z
The dispatching and merging policy about change stream in MongoS
3,218
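Bernard's answer describes the server side; from a driver, the same machinery is visible through the resume token. A small Python/PyMongo sketch (connection string and namespace are assumptions) that watches a collection through mongos, remembers the token, and resumes after a restart:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.net:27017")   # assumption
coll = client["app"]["orders"]                                # assumption

resume_token = None
with coll.watch(full_document="updateLookup") as stream:
    for change in stream:                      # runs until the process stops
        print(change["operationType"], change["documentKey"])
        # The token can advance even while a shard is idle, because mongos
        # tracks each shard's postBatchResumeToken as described above.
        resume_token = stream.resume_token

# Later: pick up exactly where we left off, without skipping or repeating events.
if resume_token is not None:
    with coll.watch(resume_after=resume_token) as stream:
        for change in stream:
            print(change["operationType"])
```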
null
[ "cxx", "c-driver" ]
[ { "code": "ubuntu@ip-10-122-3-175:~$ sudo apt-get install -y libmongoc-devubuntu@ip-10-122-3-175:~$ sudo apt-get install -y build-essential devscripts debian-keyring fakeroot debhelper cmake libssl-dev pkg-config python3-sphinx zlib1g-dev libicu-dev libsasl2-dev libsnappy-dev libzstd-devubuntu@ip-10-122-3-175:~$ dget httphttp://snapshot.debian.org/archive/debian-debug/20191108T152030Z/pool/main/m/mongo-c-driver/mongo-c-driver_1.15.2-1.dscubuntu@ip-10-122-3-175:~$ cd mongo-c-driver-1.15.2/ubuntu@ip-10-122-3-175:~$ export DEB_BUILD_OPTIONS=\"nodoc\"--- a/debian/rules\n+++ b/debian/rules\n@@ -26,8 +26,8 @@ override_dh_auto_configure:\n -DBUILD_VERSION=$(DEB_VERSION_UPSTREAM) \\\n -DENABLE_MONGOC=ON \\\n -DENABLE_BSON=ON \\\n- -DENABLE_MAN_PAGES=ON \\\n- -DENABLE_HTML_DOCS=ON \\\n+ -DENABLE_MAN_PAGES=OFF \\\n+ -DENABLE_HTML_DOCS=OFF \\\n -DENABLE_MAINTAINER_FLAGS=ON \\\n -DENABLE_TESTS=OFF \\\n -DENABLE_ZLIB=SYSTEM\nubuntu@ip-10-122-3-175:~/mongo-c-driver-1.15.2$ fakeroot dpkg-buildpackage --no-signubuntu@ip-10-122-3-175:~$ sudo apt-get install -y build-essential devscripts debian-keyring fakeroot debhelper cmake libboost-dev libsasl2-dev libicu-dev libzstd-dev doxygenubuntu@ip-10-122-3-175:~$ dget http://deb.debian.org/debian/pool/main/m/mongo-cxx-driver/mongo-cxx-driver_3.4.1-1.dscubuntu@ip-10-122-3-175:~$ cd mongo-cxx-driver-3.4.1/ubuntu@ip-10-122-3-175:~/mongo-cxx-driver-3.4.1$ fakeroot dpkg-buildpackage --no-sign", "text": "The MongoDB C++ Driver was recently accepted into the Debian experimental distribution (package page, announcement). The experimental distribution was chosen because the C++ Driver Team has not yet set a stable ABI for the C++ Driver; packages do not automatically migrate out of the experimental distribution, which ensures that they do not become part of a stable Debian release. For the same reasons stated above, we do not intend to request a sync of the package to Ubuntu until after a stable ABI has been established and the package is ready for the Debian unstable distribution.Debian users wishing to make use of the MongoDB C++ Driver packages from the experimental distribution can follow the instructions from the Debian Wiki for installing experimental packages. Users of Debian stable or testing and users of Ubuntu can follow the instructions detailed below in this post to build and install the packages locally.(Note: the following commands were executed on an Ubuntu 18.04 EC2 instance)C Driverubuntu@ip-10-122-3-175:~$ sudo apt-get install -y libmongoc-devUbuntu 18.04 has an old C Driver:ubuntu@ip-10-122-3-175:~$ apt-cache policy libmongoc-1.0-0\nlibmongoc-1.0-0:\nInstalled: (none)\nCandidate: 1.9.2+dfsg-1build1\nVersion table:\n1.9.2+dfsg-1build1 500\n500 Index of /ubuntu bionic/universe amd64 PackagesInstall the build dependencies of the C Driver:ubuntu@ip-10-122-3-175:~$ sudo apt-get install -y build-essential devscripts debian-keyring fakeroot debhelper cmake libssl-dev pkg-config python3-sphinx zlib1g-dev libicu-dev libsasl2-dev libsnappy-dev libzstd-devubuntu@ip-10-122-3-175:~$ dget httphttp://snapshot.debian.org/archive/debian-debug/20191108T152030Z/pool/main/m/mongo-c-driver/mongo-c-driver_1.15.2-1.dsc(Note: You can check the “Versions” section of the mongo-c-driver Debian Package Tracking System status page to find the link to the latest available version. Additional intermediate versions of the C Driver may be available from the Debian Snapshots repository. 
You should also consult the MongoDB C++ Driver installation guide for information on C Driver version requirements and compatibility with C++ Driver versions.)ubuntu@ip-10-122-3-175:~$ cd mongo-c-driver-1.15.2/ubuntu@ip-10-122-3-175:~$ export DEB_BUILD_OPTIONS=\"nodoc\"For earlier versions, you will need to edit the debian/rules file to make the following change:ubuntu@ip-10-122-3-175:~/mongo-c-driver-1.15.2$ fakeroot dpkg-buildpackage --no-signInstall the development and runtime packages:ubuntu@ip-10-122-3-175:~/mongo-c-driver-1.15.2$ cd …\nubuntu@ip-10-122-3-175:~$ sudo apt-get install -y ./libbson-1.0-0_1.15.2-1_amd64.deb ./libbson-dev_1.15.2-1_amd64.deb ./libmongoc-1.0-0_1.15.2-1_amd64.deb ./libmongoc-dev_1.15.2-1_amd64.debC++ Driverubuntu@ip-10-122-3-175:~$ sudo apt-get install -y build-essential devscripts debian-keyring fakeroot debhelper cmake libboost-dev libsasl2-dev libicu-dev libzstd-dev doxygenubuntu@ip-10-122-3-175:~$ dget http://deb.debian.org/debian/pool/main/m/mongo-cxx-driver/mongo-cxx-driver_3.4.1-1.dsc(Note: You can check the “Versions” section of the mongo-cxx-driver Debian Package Tracking System status page to find the link to the latest available version.)ubuntu@ip-10-122-3-175:~$ cd mongo-cxx-driver-3.4.1/ubuntu@ip-10-122-3-175:~/mongo-cxx-driver-3.4.1$ fakeroot dpkg-buildpackage --no-signInstall the resulting packages:ubuntu@ip-10-122-3-175:~/mongo-cxx-driver-3.4.1$ cd …\nubuntu@ip-10-122-3-175:~$ sudo apt-get install -y ./libbsoncxx-dev_3.4.1-1_amd64.deb ./libbsoncxx-noabi_3.4.1-1_amd64.deb ./libmongocxx-dev_3.4.1-1_amd64.deb ./libmongocxx-noabi_3.4.1-1_amd64.deb(Note: If you do not intend to compile against the C++ Driver, you can leave off the -dev packages.)", "username": "Roberto_Sanchez" }, { "code": "", "text": "", "username": "system" } ]
C and C++ Driver for Debian & Ubuntu users
2020-04-13T17:03:28.699Z
C and C++ Driver for Debian &amp; Ubuntu users
5,014
null
[ "performance" ]
[ { "code": "", "text": "my cloudInstance has 4cpu’s 8gb ram.\niam running mongodb4.2 community version in it.\nwill mongodb make use of all cores in my 4cpu’s or wiill it utilize 1 cpu only?", "username": "Divine_Cutler" }, { "code": "", "text": "Yep.The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores. Specifically, the total number of active threads (i.e. concurrent operations) relative to the number of available CPUs can impact performance", "username": "chris" }, { "code": "", "text": "Thank you for the detail.i would like to clarify my understanding/.Lets assume there are 4CPU’s and 8 requests coming in. Then wiredTiger would split these 8 requests into all the 4CPU’s and then serve it? is my understanding correct?\ni’m looking forward for your response.\nif you think some articles can help me with more detail, please share it", "username": "Divine_Cutler" }, { "code": "db.currentOp(true)mongo", "text": "Lets assume there are 4CPU’s and 8 requests coming in. Then wiredTiger would split these 8 requests into all the 4CPU’s and then serve it? is my understanding correct?Hi,WiredTiger is a storage engine, so deals with requests at a lower layer (reading data to/from storage) rather than at the networking level. The core MongoDB server is responsible for handling incoming client requests and coordinating with the storage engine API.The MongoDB server currently uses a thread per connection plus a number of internal threads. You can list all threads (including idle and system) using db.currentOp(true) in the mongo shell.If you have 8 incoming requests, each of those will be handled by a separate connection thread. Your O/S will manage concurrent execution (distributing threads across available CPU cores). Individual operations (for example, a query or index build) will generally run on a single thread. Internal operations such as syncing changes to disk may take advantage of parallel threads if appropriate.In general, multithreading enables higher concurrency for multiple operations on a deployment rather than enabling a single operation to dominate all available resources. Long running read and write operations will also yield to allow other operations to interleave.For more information on concurrency in MongoDB, see FAQ: Concurrency.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.", "username": "system" } ]
Will MongoDB utilize all my 4 CPUs?
2020-05-05T19:09:39.370Z
Will MongoDB utilize all my 4 CPUs?
9,590
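A quick way to see the thread-per-connection model Stennie describes is to issue several operations in parallel from one client and watch all cores get involved; this is an illustration only (Python/PyMongo, with the URI, namespace, and query as assumptions), not something from the thread.

```python
from concurrent.futures import ThreadPoolExecutor
from pymongo import MongoClient

# One MongoClient owns a connection pool; each in-flight operation uses its
# own connection, and the server gives each connection its own thread.
client = MongoClient("mongodb://localhost:27017", maxPoolSize=8)   # assumption
coll = client["mydb"]["events"]                                     # assumption

def one_query(i):
    return coll.count_documents({"bucket": i})

# Eight concurrent requests: the operating system schedules the server-side
# threads across however many cores the host has, four in the question above.
with ThreadPoolExecutor(max_workers=8) as pool:
    print(list(pool.map(one_query, range(8))))
```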
null
[ "java" ]
[ { "code": "", "text": "I want to listen to a changestream for a collection , which java driver should I use ,", "username": "Manojkumar.Julakanti" }, { "code": "", "text": "You can use either of the two:Also note that from the MongoDB Reactive Streams Java Driver Documentation:The Reactive Streams Driver is the canonical asynchronous Java driver for MongoDB, providing asynchronous stream processing with back pressure in line with the Reactive Streams specification. The API mirrors the now-deprecated callback-based MongoDB Async Driver.", "username": "Prasad_Saya" } ]
Reactive streams java
2020-05-05T15:32:15.868Z
Reactive streams java
3,355
null
[]
[ { "code": "", "text": "I have developed a web application with Java & Angular, that uses Mongodb Stitch services (data in atlas). I am drafting an article on how i architected & built the application.Can i submit this to be posted at MongoDB Blog ?", "username": "Suren_Konathala" }, { "code": "", "text": "Hi @Suren_Konathala,Please visit developer.mongodb.com and scroll to the bottom. There, you’ll see a button that says “Share” under the title “Show Your Stuff.”\nScreen Shot 2020-05-05 at 5.32.16 PM1429×582 75.4 KB\nFill in the form and that will start the process for publishing your article on the Developer Hub.Cheers!Jamie", "username": "Jamie" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we write to Atlas Blog?
2020-05-05T20:42:26.822Z
Can we write to Atlas Blog?
1,674
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that fixes a couple of bugs reported since 2.10.3 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.10.4%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.", "username": "Vincent_Kam" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver 2.10.4 Released
2020-05-05T21:40:16.569Z
.NET Driver 2.10.4 Released
1,822
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of 1.3.3 of the MongoDB Go Driver.This release contains several bugfixes. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.3.3 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "Divjot_Arora" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.3.3 Released
2020-05-05T15:50:55.891Z
MongoDB Go Driver 1.3.3 Released
1,791
null
[ "rust" ]
[ { "code": "", "text": "I want begin a new project, mongodb is the only office db driver for rust now, but The mongodb rust driver not async ? Is there a db connect-pool?", "username": "zmlk_lkyx" }, { "code": "", "text": "Hello! I’m one of the engineers who works on the Rust driver. You’re correct that the driver currently doesn’t have async support; we’re hard at work designing how we want to convert the internals of the driver to async, and we plan on releasing an async API alongside the sync one and (changing the sync one to wrap around that) in the coming months.", "username": "Samuel_Rossi" }, { "code": "", "text": "Hi , since deno is written in rust…will there be a official deno wrapper or driver in coming future…", "username": "prasan_kumar" }, { "code": "", "text": "We don’t currently have any plans to integrate the Rust driver with Deno. I haven’t heard anything about plans to make a Deno-specific driver either.", "username": "Samuel_Rossi" }, { "code": "", "text": "Does mongodb-driver async only support tokio and async-std ?\nHow to support new async runtime library?", "username": "zmlk_lkyx" }, { "code": "", "text": "We’re still in the process of implementing the async functionality in the driver, but the plan is to have built-in support for both tokio and async-std out of the box (with a feature flag to select which one to use). We don’t plan on adding support for specifying a custom runtime right now. Is there a specific runtime that you wanted to use with the driver other than tokio or async-std?", "username": "Samuel_Rossi" }, { "code": "", "text": "Now I use may : async-coroutine-runtime, if mongodb support it that’s perfect !may has beautiful Features:", "username": "zmlk_lkyx" }, { "code": "", "text": "Right now, we’re pretty focused on finishing async support for the runtimes we’re already committed to supporting and then implementing the remaining features we consider necessary for our 1.0 release. Once we’re past 1.0, we can look into whether it’s worth adding support for custom runtimes.", "username": "Samuel_Rossi" }, { "code": "", "text": "Hi, it.s good news 1.0 is comming. we have an Project : node + Mongodb, we want to try The new part use Rust, so what is The 1.0 release time? can you first release a alpha version for users test or family with The Async api.", "username": "marklin_delifer" }, { "code": "", "text": "Hello again everyone! We just release the first beta of the driver, v0.10.0, which provides async support for tokio and async-std. We hope you have a chance to try it out!", "username": "Samuel_Rossi" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Is there a db connect-pool for Rust?
2020-02-21T01:36:33.065Z
Is there a db connect-pool for Rust?
6,024
null
[ "rust", "beta" ]
[ { "code": "tokioasync-stdmongodb", "text": "The MongoDB Rust driver team is pleased to announce the first beta of the driver, v0.10.0. This release adds async support to the driver, with the option of using either tokio or async-std as the underlying runtime. You can read more about the release on Github, and the release is published on https://crates.io under the package name mongodb. If you run into any issues, please file an issue on JIRA.Thank you, and we hope you enjoy using the driver!", "username": "Samuel_Rossi" }, { "code": "", "text": "", "username": "system" } ]
Announcing the first Rust driver beta, v0.10.0
2020-05-05T20:25:14.402Z
Announcing the first Rust driver beta, v0.10.0
2,859
null
[]
[ { "code": "", "text": "Got a app situation where I’m executing a bulk-write operation on a sub-collection insert. Between the line of code that writes the data, there’s about another several dozen lines of code that executes, including writing one or more log messages… then I do a fetch of the data but I’m consistently fetching the pre-write version of the record instead of the just-updated version.There’s a slim chance the data will be on-disk and fetched proper, but it’s more likely than not to be absent, according to test results. (About 1 in 10.)fsync() isn’t an option b/c of the overhead. This is an enterprise app so I’m not going to introduce delays, like sleep(), to make-up for a lack of write-behind caching, or collection-level write-locks because the app is asynchronous.I was curious as to what others have done to compensate. As of now, my only option is to return the results of the operation (success, -n- records updated) forcing the user to make a subsequent call to fetch the updated record which I think is an inefficient kludge.Thanks…–miker", "username": "Micheal_Shallop" }, { "code": "", "text": "There is many things that can be happening so nothing that follows might be the right thing.You say the app is asynchronous, then may be you start reading before the asynchronous writing is all done.May be you have read preference for secondaries that are not completely sync yet.I really do not know why fsync() is part of this discussion. Are you read data from disk or something like that? Hope not.", "username": "steevej" }, { "code": "", "text": "The read is contiguous with respect to the write. It’s highly unlikely that a read-request for the same record would be posted asynchronously although it could happen. I mentioned the async nature b/c the app is - which is why I’ll avoid collection-level locking - but the current read is happening after the write within the same “thread”, to misuse the term.My code does favor read-slaves so, yes, it’s definitely the case where the write hasn’t propagated to replication nodes in the cluster. This was implicit in the problem statement; I could have made that clearer.Reading from storage is… do you read from somewhere else for cold data? I could have written my own write-behind cache and then would normally query it first. But I didn’t. Forcing a disk flush prior to the write is the sysadmin response - not a programming solution.I think what I’ll do to mitigate is pre-fetch the record, calc a checksum, then do the update, then loop until I fetch a copy of the updated record s.t. the checksum are not equivalent so long as I’m within a few attempts at reading the updated record successfully. It’s either that or develop the write-behind caching to compensate.I do wish mongo would take a page from mysql and provide the updated record as part of a success response tho – it would make my life a lot easier for sure…", "username": "Micheal_Shallop" }, { "code": "", "text": "If you want to ready what is just written, the Primary is your best bet. The data is going to be hot in cache.If that is not your flavor then what is your read and write concern? 
Sounds like these both need to be majority for what you want to do.Depending on your topology w:2 r:2 might be enough.", "username": "chris" }, { "code": "", "text": "@chris: My Driver\\Manager settings use secondaryPreferred for read-primary and primaryPreferred for secondary preference.I bumped-up my write-concern setting to w=2 and I’ve had 100% hits on the fetch so I’m going to mark that as a solution. I’ve not tested this yet in a production deployment - but with the w=2 setting, I don’t think it will matter as the db topo isn’t going to be radically different.Thanks, all, for stirring up my brain chemistry - much appreciated!–mike", "username": "Micheal_Shallop" }, { "code": "", "text": "Good to hear.Just a note or two.Given a three node replica set majority is essentially the same thing as w:2 or r:2If the topology is PSA then the failure of either data node will cause w: r:2 to fail. Majority would work.Any larger replica sets you can be back in the same scenario you were previously experiencing.But, if you are looking for the answer to the topic ‘The D in ACID’. Majority write concern is the correct answer.", "username": "chris" }, { "code": "", "text": "I wrote the connecting/resource-mgmt code over two years ago - it was good to go back and review. The logic was basically:$w = ($production) ? MAJ : 1;I think I’ll eliminate the conditional and just leave it set to majority.Thanks, again, Chris, for the help!–mike", "username": "Micheal_Shallop" } ]
Looking for the D in ACID
2020-05-04T21:42:15.683Z
Looking for the D in ACID
1,723
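The thread settles on majority write concern; the other standard tool for "read what I just wrote" when reads prefer secondaries is a causally consistent session. The original application is PHP, so this Python/PyMongo sketch is only an illustration of the idea, with the URI and namespace as assumptions.

```python
from pymongo import MongoClient, ReadPreference
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://n1,n2,n3/?replicaSet=rs0")   # assumption
coll = client["app"]["records"].with_options(
    write_concern=WriteConcern(w="majority"),
    read_concern=ReadConcern("majority"),
    read_preference=ReadPreference.SECONDARY_PREFERRED,
)

# Within a causally consistent session, reads are guaranteed to observe the
# session's own earlier majority-committed writes, even on a secondary.
with client.start_session(causal_consistency=True) as s:
    coll.update_one({"_id": 42}, {"$push": {"log": "entry"}}, session=s)
    doc = coll.find_one({"_id": 42}, session=s)   # sees the $push above
```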
null
[ "node-js" ]
[ { "code": "", "text": "SetEnv MONGO_URL mongodb://:@XXXXXXXXX.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=/home/ec2-user/rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=falseApp 2291 output: MongoNetworkError: failed to connect to server [XXXXXX.docdb.amazonaws.com:27017] on first connect [Error: unable to get local issuer certificate\nApp 2291 output: at TLSSocket.onConnectSecure (_tls_wrap.js:1474:34)\nApp 2291 output: at TLSSocket.emit (events.js:310:20)\nApp 2291 output: at TLSSocket.EventEmitter.emit (domain.js:482:12)\nApp 2291 output: at TLSSocket._finishInit (_tls_wrap.js:917:8)\nApp 2291 output: at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:687:12) {\nApp 2291 output: name: ‘MongoNetworkError’,\nApp 2291 output: [Symbol(mongoErrorContextSymbol)]: {}\nApp 2291 output: }]MongoNetworkError: [Error: unable to get local issuer certificate", "username": "Yaseen_Shaik" }, { "code": "", "text": "Please check this linkGetting below error\n\n{ Error: Error: unable to get local issuer certificate\n … at generateError (C:\\Users\\mishrut\\full_stack\\gitlab_rally_alm\\node_modules\\rally\\dist\\request.js:38:11)\n at Request._callback (C:\\Users\\mishrut\\full_stack\\gitlab_rally_alm\\node_modules\\rally\\dist\\request.js:110:20)\n at self.callback (C:\\Users\\mishrut\\full_stack\\gitlab_rally_alm\\node_modules\\request\\request.js:187:22)\n at emitOne (events.js:116:13)\n at Request.emit (events.js:211:7)\n at Request.onRequestError (C:\\Users\\mishrut\\full_stack\\gitlab_rally_alm\\node_modules\\request\\request.js:813:8)\n at emitOne (events.js:116:13)\n at ClientRequest.emit (events.js:211:7)\n at TLSSocket.socketErrorListener (_http_client.js:387:9)\n at emitOne (events.js:116:13)\n errors:\n [ { Error: unable to get local issuer certificate\n at TLSSocket.<anonymous> (_tls_wrap.js:1103:38)\n at emitNone (events.js:106:13)\n at TLSSocket.emit (events.js:208:7)\n at TLSSocket._finishInit (_tls_wrap.js:637:8)\n at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:467:38) code: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY' } ] }\n(node:9540) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Error: unable to get local issuer cert\n(node:9540) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. 
In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.\n{ Error: Error: unable to get local issuer certificate\n at generateError (C:\\Users\\mishrut\\full_stack\\gitlab_rally_alm\\node_modules\\rally\\dist\\request.js:38:11)\n at Request._callback (C:\\Users\\mishrut\\full_stack\\gitlab_rally_alm\\node_modules\\rally\\dist\\request.js:110:20)\n at self.callback (C:\\Users\\mishrut\\full_stack\\gitlab_rally_alm\\node_modules\\request\\request.js:187:22)\n at emitOne (events.js:116:13)\n at Request.emit (events.js:211:7)\n at Request.onRequestError (C:\\Users\\mishrut\\full_stack\\gitlab_rally_alm\\node_modules\\request\\request.js:813:8)\n at emitOne (events.js:116:13)\n at ClientRequest.emit (events.js:211:7)\n at TLSSocket.socketErrorListener (_http_client.js:387:9)\n at emitOne (events.js:116:13)\n errors:\n [ { Error: unable to get local issuer certificate\n at TLSSocket.<anonymous> (_tls_wrap.js:1103:38)\n at emitNone (events.js:106:13)\n at TLSSocket.emit (events.js:208:7)\n at TLSSocket._finishInit (_tls_wrap.js:637:8)\n at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:467:38) code: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY' } ] }\n(node:9540) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): Error: Error: unable to get local issuer cert\nI see you have posted on Stack Exchange too.\nThere are a couple of other threads on this error.\nMaybe the certificate is not from a trusted source, or your company rules are not allowing it.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "The certificate is working fine if I connect via SSL to the database. It is not working if I use it from Apache. I am using Phusion Passenger for Meteor.", "username": "Yaseen_Shaik" }, { "code": "", "text": "This is likely a permission/access issue on /home/ec2-user/rds-combined-ca-bundle.pem.\nWhatever user apache/phusion is running as needs access to that file. Under a user directory this is unlikely to be the case.", "username": "chris" }, { "code": "", "text": "@chris Thank you for the response. I tried the fix below, but am still getting the same error. I moved it to /var/www/medapp/rds-combined-ca-bundle.pem\n-rwxr-xr-x 1 medappuser medappuser 43888 May 5 11:07 rds-combined-ca-bundle.pem\nSetEnv MONGO_URL mongodb://:@XXXXXXXXX.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=/var/www/medapp/rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false", "username": "Yaseen_Shaik" }, { "code": "", "text": "Sorry, I assumed the driver took care of this. You need to do something like this:\nhttp://mongodb.github.io/node-mongodb-native/3.5/tutorials/connect/tls/#validate-server-certificate", "username": "chris" }, { "code": "", "text": "Thanks a lot @chris, it is fixed now. I am supposed to use tls=true&tlsCAFile=/var/www/covidapp/rds-combined-ca-bundle.pem instead of ssl=true&ssl_ca_certs=/var/www/medapp/rds-combined-ca-bundle.pem", "username": "Yaseen_Shaik" }, { "code": "", "text": "Oh great, I missed that. ", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
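For anyone skimming this thread for the fix: the connection string had to use the tls=true and tlsCAFile=... options rather than ssl=true and ssl_ca_certs=..., and the CA bundle has to live somewhere the web-server user can read it. The snippet below is a minimal sketch of the same options driven from the Node.js driver in TypeScript; the credentials, host, and file path are placeholders echoing the thread, not working values.

```typescript
import { MongoClient } from "mongodb";

// Placeholder credentials and host; the CA bundle path must be readable by
// the user the app server (Apache/Passenger) actually runs as.
const uri =
  "mongodb://user:pass@example.docdb.amazonaws.com:27017/appdb" +
  "?tls=true&tlsCAFile=/var/www/medapp/rds-combined-ca-bundle.pem" +
  "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false";

async function ping(): Promise<void> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // Cheap round trip that only succeeds if the TLS handshake and
    // CA validation went through.
    await client.db("admin").command({ ping: 1 });
    console.log("connected with CA-validated TLS");
  } finally {
    await client.close();
  }
}

ping().catch(console.error);
```

The same URI is what would follow SetEnv MONGO_URL in the Apache config, since Meteor passes that value straight through to the underlying driver.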
MongoNetworkError: unable to get local issuer certificate
2020-04-30T19:50:08.708Z
MongoNetworkError: unable to get local issuer certificate
22,564
null
[]
[ { "code": "", "text": "Hi,Thanks all for participating in our virtual event where @Ashutosh_Singh showed us how to create your first CRUD App using MongoDB. For all that have missed the meeting you can see the recording now on the events page: Your First CRUD App Using MongoDB - Other Events - MongoDB Developer Community ForumsCheers,\nSven", "username": "Sven_Peters" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Virtual User Group: Your First CRUD App Using MongoDB
2020-05-05T15:21:53.549Z
Virtual User Group: Your First CRUD App Using MongoDB
3,168