image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [] | [
{
"code": "",
"text": "Sorry if this is already answered somewhere, but I can’t find anything. All I see is “Web Sync is not supported.” I know MongoDB Realm is still relatively new but I’m worried about the lack of this feature. Is it planned?Relatedly—Stitch used to have a Canny.io board, does Realm have a public roadmap? Edit: found it: Realm: Top (68 ideas) – MongoDB Feedback Engine",
"username": "Ted_Hayes"
},
{
"code": "watch()",
"text": "Followup question: what options does the Web SDK have for being notified that there are remote changes? watch()? I guess GraphQL Subscriptions are also not available?",
"username": "Ted_Hayes"
},
{
"code": "watch()",
"text": "Is it planned?I think my best answer is to direct your attention to this topic written by @Ian_Ward (Product Manager on Realm SDKs): The Evolution of Realm-Web with Sync? - please voice your support, wishes and use-case to let us know why and how this would be of interest to you.Followup question: what options does the Web SDK have for being notified that there are remote changes? watch() ? I guess GraphQL Subscriptions are also not available?Right … to my knowledge, we don’t support GraphQL subscriptions and Realm Web does support watching documents.",
"username": "kraenhansen"
},
{
"code": "",
"text": "",
"username": "henna.s"
}
] | Is Web Sync on the roadmap? | 2021-03-26T00:09:16.424Z | Is Web Sync on the roadmap? | 3,225 |
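The exchange above confirms that Realm Web cannot sync but can watch collections for remote changes. A minimal sketch of what that can look like with the Web SDK (the app ID, service name, database, and collection names below are placeholders, not taken from the thread):

```javascript
import * as Realm from "realm-web";

async function watchForRemoteChanges() {
  const app = new Realm.App({ id: "your-app-id" }); // placeholder app ID
  const user = await app.logIn(Realm.Credentials.anonymous());

  // The linked Atlas cluster is reached through the user's MongoDB client.
  const mongo = user.mongoClient("mongodb-atlas"); // service name as configured in the Realm app
  const items = mongo.db("mydb").collection("items");

  // watch() returns an async generator of change events, one per remote write.
  for await (const change of items.watch()) {
    console.log(change.operationType, change.fullDocument);
  }
}
```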
null | [] | [
{
"code": "",
"text": "I have the following json context and I have requirement to read the complete document and identify the distinct keyword under “conversion_token” group and output how many times keyword is occurred, count, classify in the output results;input:{\n“select_emp”: {\n“specification”: {\n“input”: [\n“p_empno”\n],\n“declare_stmt”: {\n“anchorvariable”: [\n“V_ENAME”,\n“V_HIREDATE”,\n“V_TITLE”,\n“V_REPORTSTO”,\n“V_DISP_DATE”,\n“V_INS_COUNT”,\n“CITY_FROM”\n],\n“tablename_variable”: [\n“EMPLOYEE.V_ENAME”,\n“EMPLOYEE.V_HIREDATE”,\n“EMPLOYEE.V_TITLE”,\n“EMPLOYEE.V_REPORTSTO”,\n“EMPLOYEE.V_DISP_DATE”,\n“EMPLOYEE.V_INS_COUNT”,\n“EMPLOYEE.CITY_FROM”\n]\n}\n},\n“body”: {\n“select_stmt1”: {\n“columns”: [\n“FIRSNAME”,\n“HIREDATE”,\n“TITLE”,\n“REPORTSTO”\n],\n“tablename”: [\n“EMPLOYEE”\n],\n“conversion_token”: [\n{\n“keyword”: “NVL”,\n“count”: 1,\n“classify”: 2\n}\n]\n},\n“select_stmt2”: {\n“columns”: [\n“CITY”\n],\n“tablename”: [\n“EMPLOYEE”\n],\n“conversion_token”: [\n{\n“keyword”: “DECODE”,\n“count”: 1,\n“classify”: 3\n}\n]\n},\n“dbms_stmt1”: {\n“dbms_putline”: [\n“P_EMPNO”,\n“V_ENAME”,\n“V_DISP_DATE”,\n“V_REPORTSTO”\n],\n“conversion_token”: [\n{\n“keyword”: “DBMS”,\n“count”: 1,\n“classify”: 2\n}\n]\n},\n“forloop1”: {\n“select_stmt”: {\n“columns”: [\n“EMPLOYEEID”,\n“ROWID”\n],\n“tablename”: [\n“EMPLOYEE”\n],\n“conversion_token”: [\n{\n“keyword”: “DBMS”,\n“count”: 1,\n“classify”: 2\n}\n]\n}\n},\n“merge_stmt1”: {\n“merge_into”: “EMPLOYEE”,\n“merge_using”: {\n“columns”: [\n“EMPLOYEEID”,\n“LASTNAME”,\n“TITLE”,\n“BIRTHDATE”,\n“HIREDATE”,\n“ADDRESS”,\n“CITY”,\n“STATE”,\n“COUNTRY”,\n“POSTALCODE”,\n“PHONE”,\n“FAX”,\n“EMAIL”,\n“BONUS”\n],\n“tablename”: [\n“EMPLOYEE”\n]\n},\n“merge_update”: {\n“columns”: [\n“BONUS”\n],\n“tablename”: [\n“EMPLOYEE”\n]\n},\n“merge_delete”: {\n“columns”: [\n“BONUS”\n],\n“tablename”: [\n“EMPLOYEE”\n]\n},\n“merge_insert”: {\n“columns”: [\n“EMPLOYEEID”,\n“LASTNAME”,\n“FIRSTNAME”,\n“TITLE”,\n“BIRTHDATE”,\n“HIREDATE”,\n“ADDRESS”,\n“CITY”,\n“STATE”,\n“COUNTRY”,\n“POSTALCODE”,\n“PHONE”,\n“FAX”,\n“EMAIL”,\n“BONUS”\n],\n“tablename”: [\n“EMPLOYEE”\n]\n},\n“conversion_token”: [\n{\n“keyword”: “Merge”,\n“count”: 1,\n“classify”: 4\n}\n]\n},\n“exception_handling1”: {\n“dbms_putline”: [\n“P_EMPNO”\n],\n“conversion_token”: [\n{\n“keyword”: “DBMS”,\n“count”: 1,\n“classify”: 2\n}\n]\n}\n}\n}\n}output: it should aggregate the group “keyword”, count, classify and results the final array;{\n“conversion_token”: [{\n“keyword”: “NVL”,\n“count”:1,\n“classify”:2},\n{\n“keyword”: “DBMS”,\n“count”:6,\n“classify”:2},\n{\n“keyword”: “DECODE”,\n“count”:2,\n“classify”:3\n}\n}& my next step is to loop through the same functionality across all the documents inside one collection and display the above output;",
"username": "Nishanth_Bejgam"
},
{
"code": "select_empselect_empconversion_token",
"text": "Hi Nishanth,Is the top level field always called select_emp?Does each document only contain one top level field such as select_emp which contains the statements or could there be more than one?Does conversion_token occur at different depths? (for example, it occurs at a different depth for the nested loops statement than the other statements in your sample document)Ronan",
"username": "Ronan_Merrick"
},
{
"code": "select_empselect_empconversion_token",
"text": "Is the top level field always called select_emp ?\n[A] no, it changes for every different document inside the collection…Does each document only contain one top level field such as select_emp which contains the statements or could there be more than one?\n[A] it contains only one top level fieldDoes conversion_token occur at different depths? (for example, it occurs at a different depth for the nested loops statement than the other statements in your sample document)\n[A] yes, it can be possible. based on the construct it formed ‘conversion_token’ can found in different nested ladder.",
"username": "Nishanth_Bejgam"
},
{
"code": "db.test.aggregate(\n [\n {$project:{_id:0, \"arrayofkeyvalue\":{\"$objectToArray\":\"$$ROOT\"}}},\n {\n $project: {\n item: {\n $filter: {\n input: \"$arrayofkeyvalue\",\n as: \"item\",\n cond: { $ne: [ \"$$item.k\", \"_id\" ] }\n }\n }\n }\n },\n {$project:{ item: {$arrayElemAt: [ \"$item\", 0 ] }}},\n {$project: { body: \"$item.v.body\"}},\n {\n $project: {\n conversion_key: {\n $function: {\n body: function conversion_tokens(object, key) {\n var values = [];\n search_keys(object);\n function search_keys(object) {\n \n if (key in object) values.push(object[key][0]);\n \n for (var property in object) {\n if (object.hasOwnProperty(property)) {\n if (typeof object[property] == \"object\") {\n search_keys(object[property]);\n }\n }\n }\n }\n return values;\n } ,\n args: [\"$body\", \"conversion_token\"],\n lang: \"js\"\n }\n }\n }\n },\n {$unwind:\"$conversion_key\"},\n {\n $group: {\n _id: \"$conversion_key.keyword\",\n count: {$sum:1},\n classify: {$sum:\"$conversion_key.classify\"}\n }},\n {\n $group: {\n _id: \"null\",\n conversion_token: {\n $push: {\n keyword: \"$_id\",\n count: \"$count\",\n classify: \"$classify\"\n }\n }\n\n }\n },\n {\n $project: {\n _id:0,\n conversion_token:1\n }\n }\n])\n{\n \"conversion_token\" : [\n {\n \"keyword\" : \"Merge\",\n \"count\" : 1,\n \"classify\" : 4\n },\n {\n \"keyword\" : \"NVL\",\n \"count\" : 1,\n \"classify\" : 2\n },\n {\n \"keyword\" : \"DECODE\",\n \"count\" : 1,\n \"classify\" : 3\n },\n {\n \"keyword\" : \"DBMS\",\n \"count\" : 3,\n \"classify\" : 6\n }\n ]\n}\n",
"text": "Hi Nishanth,Try this:Result:$function is new to 4.4.Let me know if you’re on a different version.**EDIT: I just wanted to add that while it is possible to query the data in this way, longer term you may want to review your schema design. Having a fixed top-level field name and conversion_token at a fixed depth would allow this query to be greatly simplified. While the above may work, it may not be performant.Ronan",
"username": "Ronan_Merrick"
}
] | MongoDB Basics aggregation help | 2021-03-12T09:40:43.428Z | MongoDB Basics aggregation help | 1,984 |
null | [
"app-services-user-auth",
"security",
"app-services-data-access"
] | [
{
"code": "",
"text": "Hi there,I am facing an important question as a result of following action:\nBy registering a new user signup process, user enters First name and last name as well as email and password.\nThen all this fields expect password are being saved in custom user data collection. This insertion is done by anonymous user since they are in the new user registration process.\nNow the question is how safe is this approach and if it’s not safe how can we secure this insertion. If there is a better approach how to consist these properties please say a bit in details.Kind regards, Behzad",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "Hi @Behzad_Pashaie,Why does an email user needs to authenticate anonymous?Is that part of registration phase?I would suggest using standard sdk registration for user signup.eg. node sdk :https://docs.mongodb.com/realm/sdk/node/advanced/multi-user-applications/Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,\nYes this is part of user registration process which is follows:\nuser fills the sign up form with Name, last name, email and password. By submitting the form we need to save\nthe first name and last name. While Email and pasword are rquired to register the new user with webSDK.\nButthe SDK doesnot provide any solution for saving the custom data at registration and user confrimation step at least as i have gone thorugh many times.\nCan you please say a bit in details how to save first name last name before user confrimation?\nThanks,\nKind regards, Behzad",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "Hi @Behzad_Pashaie,Well you can use a confirmation function flow to save user data in the custom data collection via the atlas service. But if you can’t pass this information to that function I can think of 2 ways:Thanks",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,\nThanks for the answer, Step 1 is exactly what I have implemented. But then the question is anonymous write/insert to that temp collection safe?\nStill a bit unclear.\nhave nice day forward,\nBehzad",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "@Behzad_Pashaie,Make sure you secured it properly and allowing only fields and types that cannot harm your data.You can only allow appending to that collection and the user id could be a unique index…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hei @Pavel_Duchovny,\nthanks .",
"username": "Behzad_Pashaie"
}
] | How to secure anonymous DB Insert in Realm? | 2021-03-24T13:46:32.532Z | How to secure anonymous DB Insert in Realm? | 4,084 |
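One way to avoid giving anonymous users write access at all, in line with the confirmation-function idea above, is to persist what you can from a server-side function so the client never inserts directly. A hedged sketch of a custom email-confirmation function (the service, database, and collection names are assumptions, and extra profile fields such as first/last name would still have to reach the function some other way):

```javascript
// Atlas App Services / Realm custom confirmation function (server side).
exports = async ({ token, tokenId, username }) => {
  const users = context.services
    .get("mongodb-atlas")            // linked cluster service name (placeholder)
    .db("app")                       // placeholder database
    .collection("custom_user_data"); // placeholder collection

  // Persist only whitelisted fields; nothing the client sends is written blindly.
  await users.updateOne(
    { email: username },
    { $set: { email: username } },
    { upsert: true }
  );

  // Keep the account unconfirmed until the user follows the emailed link.
  return { status: "pending" };
};
```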
null | [
"data-modeling",
"crud"
] | [
{
"code": "",
"text": "Hello i have a question,Lets say that i had a column with a number of 5 and it gets increased for example to 10. Is it possible to get this column back to its previous number without knowing what that number was?And if so how could you do this?",
"username": "JasoO"
},
{
"code": "",
"text": "Hi Jasper,\nI think I’m not totally grasping your question.\nIf you want a kind of a roll back, check:Best,",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "Well basically i want a colomn with a number to go to back to its previous value. But in this case i dont know what its previous value was.",
"username": "JasoO"
},
{
"code": "",
"text": "Is it possible to get this column back to its previous number without knowing what that number was?Hi @JasoO,The only way a scenario like this would be possible is if the changed data has been written somewhere. There is no generic “undo” for a write operation that has been committed.I think the general pattern you are looking for is document versioning. For some examples of common approaches (and considerations) see:Some frameworks for data abstraction may support one or more of these versioning variations, but often this is something you have to handle via application logic to suit your use case requirements.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Go back to previous number with column without knowing its previous value | 2021-03-25T16:54:23.478Z | Go back to previous number with column without knowing its previous value | 1,733 |
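If the requirement is only ever “undo the last change”, a lightweight variant of the versioning pattern mentioned above is to copy the current value into a second field inside the same write. A sketch using an aggregation-pipeline update (available since MongoDB 4.2; the collection and field names are illustrative):

```javascript
// Store the old value while writing the new one.
db.items.updateOne(
  { _id: 1 },
  [ { $set: { previousValue: "$value", value: 10 } } ]
);

// Later, roll the field back without knowing what the old number was.
db.items.updateOne(
  { _id: 1 },
  [ { $set: { value: "$previousValue" } } ]
);
```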
null | [
"data-modeling",
"atlas-device-sync"
] | [
{
"code": "export const articlemetaSchema = {\n name: 'articlemeta',\n properties: {\n _id: 'objectId?',\n __v: 'int?',\n authors: 'articlemeta_authors[]',\n catalog_id: 'objectId?',\n content: 'objectId?',\n createdAt: 'date?',\n description: 'string?',\n language: 'articlemeta_language',\n main_image_url: 'string?',\n origin_type: 'string?',\n pub_date: 'date?',\n publication: 'objectId?',\n sub_title: 'string?',\n title: 'string?',\n updatedAt: 'date?',\n url: 'string?',\n },\n primaryKey: '_id',\n};\n\nexport const catalogpublicationSchema = {\n name: 'catalogpublication',\n properties: {\n _id: 'objectId?',\n __v: 'int?',\n _partition_key: 'string?',\n createdAt: 'date?',\n is_master: 'bool?',\n name: 'string?',\n updatedAt: 'date?',\n },\n primaryKey: '_id',\n};\n",
"text": "Here’s my schemaI want to setup a relationship between article_meta.publication and catalogpublication\nbut every time I try to do that I get an error that it is not validatedRequirement is to sync article_meta with publication object not the objectID",
"username": "shawn_batra"
},
{
"code": "",
"text": "Just to confirm, you’re trying this with React Native?Can you explain more about what you mean by “Realm Syncing relationship”? Is this MongoDB Realm Sync (and if so, what is your partition key?)What code is failing, and what is the error?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "No, I’m tryng this in Electron using nodejs sdk.Given above are my schemas in which I want to define a relationship b/w articemeta.publication and catalogPublication._idMy partition key is _partition_key=‘catalog_id=some-ObjectId’Realtionship which I defined",
"username": "shawn_batra"
},
{
"code": "",
"text": "@Andrew_Morgan Did you get my problem?",
"username": "shawn_batra"
},
{
"code": "",
"text": "Hi Shawn, I’ve not used Electron (or Realm Sync with the Node.js SDK) but I can take a look. Could you share the Realm schemas? Are you seeing the error when you try to add that relationship through the Realm UI or from somewhere else?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "@Andrew_Morgan Here’s schemas:\nArticleMeta = {\n“title”: “articlemeta”,\n“properties”: {\n“__v”: {\n“bsonType”: “int”\n},\n“_id”: {\n“bsonType”: “objectId”\n},\n“authors”: {\n“bsonType”: “array”,\n“items”: {\n“bsonType”: “object”,\n“properties”: {\n“_id”: {\n“bsonType”: “objectId”\n},\n“ambiguous_author”: {\n“bsonType”: “bool”\n},\n“author”: {\n“bsonType”: “objectId”\n},\n“createdAt”: {\n“bsonType”: “date”\n},\n“updatedAt”: {\n“bsonType”: “date”\n}\n}\n}\n},\n“catalog_id”: {\n“bsonType”: “objectId”\n},\n“content”: {\n“bsonType”: “objectId”\n},\n“createdAt”: {\n“bsonType”: “date”\n},\n“description”: {\n“bsonType”: “string”\n},\n“language”: {\n“bsonType”: “object”,\n“properties”: {\n“_id”: {\n“bsonType”: “objectId”\n},\n“createdAt”: {\n“bsonType”: “date”\n},\n“display_name”: {\n“bsonType”: “string”\n},\n“language_id”: {\n“bsonType”: “objectId”\n},\n“lcid_String”: {\n“bsonType”: “string”\n},\n“name”: {\n“bsonType”: “string”\n},\n“updatedAt”: {\n“bsonType”: “date”\n}\n}\n},\n“main_image_url”: {\n“bsonType”: “string”\n},\n“origin_type”: {\n“bsonType”: “string”\n},\n“pub_date”: {\n“bsonType”: “date”\n},\n“publication”: {\n“bsonType”: “objectId”\n},\n“sub_title”: {\n“bsonType”: “string”\n},\n“title”: {\n“bsonType”: “string”\n},\n“updatedAt”: {\n“bsonType”: “date”\n},\n“url”: {\n“bsonType”: “string”\n},\n“_partition_key”: {\n“bsonType”: “string”\n}\n}\n}Article= {\n“title”: “article”,\n“properties”: {\n“__v”: {\n“bsonType”: “int”\n},\n“_id”: {\n“bsonType”: “objectId”\n},\n“active_search”: {\n“bsonType”: “bool”\n},\n“article_meta”: {\n“bsonType”: “objectId”\n},\n“catalog_id”: {\n“bsonType”: “objectId”\n},\n“content”: {\n“bsonType”: “objectId”\n},\n“createdAt”: {\n“bsonType”: “date”\n},\n“flagged”: {\n“bsonType”: “bool”\n},\n“owner_id”: {\n“bsonType”: “objectId”\n},\n“rating”: {\n“bsonType”: “int”\n},\n“read”: {\n“bsonType”: “bool”\n},\n“status”: {\n“bsonType”: “string”\n},\n“status_updated_at”: {\n“bsonType”: “date”\n},\n“subscriptions”: {\n“bsonType”: “array”,\n“items”: {\n“bsonType”: “object”,\n“properties”: {\n“_id”: {\n“bsonType”: “objectId”\n},\n“createdAt”: {\n“bsonType”: “date”\n},\n“origin_type”: {\n“bsonType”: “string”\n}\n}\n}\n},\n“updatedAt”: {\n“bsonType”: “date”\n},\n“_partition_key”: {\n“bsonType”: “string”\n}\n}\n}catalogPublication = {\n“title”: “catalogpublication”,\n“properties”: {\n“__v”: {\n“bsonType”: “int”\n},\n“_id”: {\n“bsonType”: “objectId”\n},\n“_partition_key”: {\n“bsonType”: “string”\n},\n“createdAt”: {\n“bsonType”: “date”\n},\n“is_master”: {\n“bsonType”: “bool”\n},\n“name”: {\n“bsonType”: “string”\n},\n“updatedAt”: {\n“bsonType”: “date”\n}\n}\n}SO I have already defined a relationship between article and articlemeta and it is working absolutely fine but when I tried to define relationship between articleMeta and catalogPublication No data is synced and I can not see any errors in the logs.\nThanks",
"username": "shawn_batra"
},
{
"code": "",
"text": "One thing I noticed is that the collection name in the schema is “catalogpublication” but the relationship refers to “catalogpublications”",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "I generated the schema from realm using generate schema and also defined the relationship from realm UI",
"username": "shawn_batra"
},
{
"code": "",
"text": "@Andrew_Morgan I generated the schema from realm using generate schema and also defined the relationship from realm UI and it automatically generates it.One more thing, my catalogPublication and articleMeta does not have same partition value. Can it be a reason for not working?",
"username": "shawn_batra"
},
{
"code": "",
"text": "tbh I’ve not yet used relationships with synced Realms, but I suspect that might be the issue as all synced objects in a Realm should be in the same partition.",
"username": "Andrew_Morgan"
},
{
"code": "articlemetaSchema.publicationcatalogpublication",
"text": "@shawn_batra Your articlemetaSchema.publication field needs to point to the class definition you are trying to create a relationship to - in your case: catalogpublicationYou can read more about this here:",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Yes I have tried that but then I’m not receiving any data but with no error.",
"username": "shawn_batra"
},
{
"code": "",
"text": "Could you please provide some help here because I’m stuck in this.\nI want to define a relationship (explained above) because I want to sync only data which is required.Note: partition Value is different b/w two collections. So is this causing the issue?",
"username": "shawn_batra"
},
{
"code": "",
"text": "@shawn_batra The partition values must match for any objects/documents which you want related to each other. Otherwise you will get an error",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I have shared the all my schema’s. Can you look n tell me if it’s possible or not.I have defined relationship b/w articles and articlemeta and it’s working perfectly\nI have noticed this thing only partition value was same in these two and it worked, now articlemeta and catalogpublication have different partition value and they are not working and I’m not receiving any error.",
"username": "shawn_batra"
},
{
"code": "",
"text": "Related objects must have the same partition value in order to be valid relationships. Hence your error",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Any solutions to this How Can I achieve this? Any workaround?",
"username": "shawn_batra"
},
{
"code": "",
"text": "Yes you can use manual references which are not a realm reference but a string Id or something similar - similar to a foreign key concept - you can see this here in mongo’s documentation -As well as me blathering on here ",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Yes we use manual references in Atlas.\nOkay so According to use in my use case, I will have two local realm one will have article and articlemeta which are defined on partition value and other realm will have catalogpublications which is synced on other partition valueNow locally all data will get synced and I have to query manually from both the collections and map them togethter based on that reference (foreign key concept) Correct?",
"username": "shawn_batra"
},
{
"code": "",
"text": "Now locally all data will get synced and I have to query manually from both the collections and map them togethter based on that reference (foreign key concept) Correct?Yeah that’s correct. You would make a query that took a manual reference field in one realm and used it as a query in a different realm for a specific field. For instance you might haveItem.itemIdYou then take the value of that field and create a query in a different realm which corresponds to an object field which could match",
"username": "Ian_Ward"
}
] | Realm Syncing relationship between two collection | 2021-03-19T03:58:13.524Z | Realm Syncing relationship between two collection | 4,598 |
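A rough sketch of the manual-reference pattern the thread settles on, using the Node.js SDK: open one realm per partition and resolve the stored objectId by primary key. The partition values and variable names here are placeholders based on the discussion, not a verified configuration:

```javascript
const Realm = require("realm");

async function loadArticleWithPublication(user, someArticleMetaId) {
  const articleRealm = await Realm.open({
    schema: [articlemetaSchema], // schema objects as defined earlier in the thread
    sync: { user, partitionValue: "catalog_id=<some-object-id>" },
  });
  const publicationRealm = await Realm.open({
    schema: [catalogpublicationSchema],
    sync: { user, partitionValue: "<publication-partition>" },
  });

  const meta = articleRealm.objectForPrimaryKey("articlemeta", someArticleMetaId);
  // `publication` is stored as a plain objectId, i.e. a manual reference,
  // so it is looked up in the other realm rather than followed as a Realm link.
  const publication = publicationRealm.objectForPrimaryKey(
    "catalogpublication",
    meta.publication
  );
  return { meta, publication };
}
```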
null | [] | [
{
"code": "",
"text": "Hi:\nWould appreciate guidance with regards to staring MongoDB (db version v4.0.3) processes via “systemctl”, when a RHEL 7 server is rebooted. Thank you.Regards,Wajid Syed",
"username": "Wajid_Syed"
},
{
"code": "",
"text": "I suspect you can follow https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/",
"username": "steevej"
}
] | Starting mongodb after server reboot | 2021-03-25T18:48:43.651Z | Starting mongodb after server reboot | 1,444 |
[
"golang"
] | [
{
"code": "",
"text": "Hey Peeps,I’m going to be speaking at the GolangNYC meetup next week. I’ll be sharing my top 10 tips for making remote work actually work…right now…in the chaos of quarantine. Hope to see you there!Find groups that host online or in person events and meet people in your local community who share your interests.If you prefer reading to watching, check out my blog post on the same topic:\nhttps://www.mongodb.com/article/10-tips-making-remote-work-actually-work/",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | GolangNYC meetup next week! Remote Work Tips! | 2021-03-25T18:49:05.232Z | GolangNYC meetup next week! Remote Work Tips! | 2,018 |
null | [] | [
{
"code": "",
"text": "Hi Experts,If any of you using “Mongo 3.6.17 in CentOS 8 Environment” in your projects and not facing any issues ?Please confirm.Thanks,\nKiran",
"username": "Kiran_Pamula"
},
{
"code": "",
"text": "Hi @Kiran_Pamula, we are so glad to have you here in our community.\nFor a better understanding of the issue you are facing, it would be really great if you can provide us some screenshots/error-messages along with the steps to reproduce the same.In case of any doubts, please feel free to reach out to us.Thanks. Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer.",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Hi Sourabh,Thanks for your response.Sometime back we upgraded CentOS 7 to CentOS 8 in our product, as part of which we had to upgrade from mongoDB 3.6.9 to 3.6.17.Post this we are experiencing frequent issues where in after 1-2 days of system run, the mongo secondary members are\nstarting to lag far behind the primary and we have to manually stop the service, followed by deleting the db path and\nthen restart to recover the members. So far we have found this is affects only secondary members.As mentioned, we currently use mongodb(3.6.17) in our environment and some of our operations involves opening and closing connections.\nHowever we already verified that the connections are closed from mongo side but the system keeps holding those connections.\nThis is leading to high number of files being opened by mongo which we can see in our lsof command and because of this,\nour server is crashing and our mongo goes into recovery state. Kindly check and let us know why this is happening in our environmentAttached SOSreport from the affected systems for reference.Also when issue is hit below errors are found:\n2021-02-24T22:50:04.692+0000 I - [listener] pthread_create failed: Resource temporarily unavailable\n2021-02-24T22:50:04.692+0000 W EXECUTOR [conn480782] Terminating session due to error: InternalError: failed to create service entry worker thread\n2021-02-24T22:50:05.589+0000 I - [listener] pthread_create failed: Resource temporarily unavailable\n2021-02-24T22:50:05.589+0000 W EXECUTOR [conn480783] Terminating session due to error: InternalError: failed to create service entry worker threadWe are manually recovering our mongo replica set but after some time it is again facing the same issue and going into recovery mode.One more observation is we are noticing this issue only with CentOS 8, and when we try to use the same MongoDB Version 3.6.17 in CentOS 7,\nthere were no such issues reported.Thanks,\nKiran",
"username": "Kiran_Pamula"
}
] | Anyone using Mongo 3.6.17 in CentOS 8 Environment without any issues? | 2021-03-25T12:59:41.024Z | Anyone using Mongo 3.6.17 in CentOS 8 Environment without any issues? | 1,852 |
null | [
"legacy-realm-cloud",
"migration"
] | [
{
"code": "",
"text": "I am migrating Legacy Realm into MongoDB using guide provided by support team.\nhttps://docs.realm.io/realm-legacy-migration-guide/\nAll the data has copied from realm successfully excluding user (App user).\nis any way to import users from realm to MongoDB ?",
"username": "Swapnil_Jagdale"
},
{
"code": "const RealmWeb = require('realm-web');\nconst app = new RealmWeb.App({ id: 'app-id' })\nconst credentials = RealmWeb.Credentials.function({\n email: person.email,\n password: person.password\n})\nconst result = await app.logIn(credentials)\n``",
"text": "I just did with the help of realm-web sdk.In my case I use the custom-auth function. So before I migrate the data I call the login function for every user I have:",
"username": "rouuuge"
}
] | How to migrate Legacy Realm users to MongoDB | 2021-03-25T08:19:49.523Z | How to migrate Legacy Realm users to MongoDB | 3,952 |
null | [
"app-services-user-auth"
] | [
{
"code": "import { useForm } from \"react-hook-form\";\n// Used 'useForm' hook to simplify data extraction from the //input form\nimport { useAuth } from \"members\";\n\nconst SignUpForm = () => {\n const router = useRouter();\n const { handleSubmit, register } = useForm();\n const { signup } = useAuth();\n\n const signUpAndRedirect = (form) => {\n signup(form.email, form.password);\n router.push(\"/\");\n // after signing up, redirect client back to home\n };\n\n return (\n{/*My form's 'email' and 'password' fields are only accessible in the SignUpForm component*/}\n <div>\n <form onSubmit={handleSubmit(signUpAndRedirect)}>\n ...\n </form>\n </div>\n );\n};\n\nexport default SignUpForm;\nconst authenticateAndRedirect = (form) => {\n login(form.email, form.password);\n router.push(\"/\");\n };\nimport Link from \"next/link\";\nimport { useEffect } from \"react\";\nimport { useRouter } from \"next/router\";\nimport { useAuth } from \"members\";\n\nconst Confirm = () => {\n const router = useRouter();\n const { confirm, login } = useAuth();\n useEffect(() => {\n const token = router.query.token;\n const tokenId = router.query.tokenId;\n\n if (token && tokenId) {\n confirm(token, tokenId);\n login(email, password); // !!! I don't have access to these\n }\n }, [router]);\n//used useEffect() to assure the confirmation only happens once, after the component was rendered.\n\n return (\n <div>\n <h2>\n Thank you for confirming your email. Your profile was \t successfully\n activated.\n </h2>\n <Link href=\"/\">\n <a>Go back to home</a>\n </Link>\n </div>\n );\n};\n\nexport default Confirm;\nconst client = () => {\n const { app, credentials } = useRealm();\n const [currentUser, setCurrentUser] = useState(app.currentUser || false);\n const [isAuthenticated, setIsAuthenticated] = useState(user ? true : false);\n\n \n // Login and logout using email/password.\n\n const login = async (email, password) => {\n try {\n const userCredentials = await credentials(email, password);\n await app.logIn(userCredentials);\n\n setCurrentUser(app.currentUser);\n setIsAuthenticated(true);\n } catch (e) {\n throw e;\n }\n };\n\n const logout = async () => {\n try {\n setUser(null);\n\n // Sign out from Realm and Auth0.\n await app.currentUser?.logOut();\n // Update the user object.\n setCurrentUser(app.currentUser);\n setIsAuthenticated(false);\n setUser(false);\n } catch (e) {\n throw e;\n }\n };\n\n const signup = async (email, password) => {\n try {\n await app.emailPasswordAuth.registerUser(email, password);\n // await app.emailPasswordAuth.resendConfirmation(email);\n } catch (e) {\n throw e;\n }\n };\n\n const confirm = async (token, tokenId) => {\n try {\n await app.emailPasswordAuth.confirmUser(token, tokenId);\n } catch (e) {\n throw e;\n }\n };\n\n return {\n currentUser,\n login,\n logout,\n signup,\n confirm,\n };\n};\n\nexport default client;\n",
"text": "I’m currently building a blog sample app, using NextJS, ApolloClient and MongoDB + MongoRealm. The NextJS skeleton was built after the framework’s official page tutorial.\nAt the moment, new users can signup, by accessing a SignUp form which is routed at ‘pages/signup’. After entering their credentials, they are redirected to the home page. Then, the freshly signed in users have to visit another page(the one associated with ‘pages/login’ root), which contains the login form, which is responsible with their email/password authentication.\nAlso, I’ve set up Realm to send a confirmation email at the user’s email address. The email contains a link to a customized page from my NextJs app, which will handle their confirmation(users also have to be confirmed, after requesting a sign in)The workflow should be established with this. However, I want to automatically login a user, after he/she just logged in(so that they won’t need to sign in and also visit the log in page, when creating their accounts).The problem I’m encountering is that my React component that handles the user confirmation, doesn’t have access to the user instance’s email and password. I need a way to login the user, without having access to his/her credentials.Below, I will try to explain exactly why this access restriction happens in the first place. Although the entire ‘_app.js’ is wrapped in some custom providers, I’ll try to keep things as simple as possible, so I’ll present only what is needed for this topic.My signup.js file looks something like this:My login.js file is built after the same concept, the only difference being that ‘signUpAndRedirect’ is replaced with\n‘authenticateAndRedirect’:And here is my confirm.js file, which is responsible with extracting the token and tokenId from the confirmation URL. This component is normally only rendered when the client receives the email and clicks on the confirmation link(which basically has the form /confirm, where each token is a string and is added into the URL by Realm).And finally, just a quick look into the signup, login and confirm methods that I have access to through my customized providers. I am quite positive that they work correctly:The currentUser will basically represent the Realm.app.currentUser and will be provided to the _app by my providers.So, the problem is that my Confirm component doesn’t have access to the email and password fields.\nI’ve tried to use the useContext hook, to pass data between sibling components, but quickly abandoned this approach, because I don’t want to pass sensitive data throughout my NextJS pages(The only place where I should use the password is during the MongoDB POST request, since it gets encrypted by Realm Web).Is there any way I could solve this issue? Maybe an enitirely different approach?Thank you very much in advance! Any help would be very much appreciated!",
"username": "Andrei_Daian"
},
{
"code": "const handleRegistrationAndLogin = async () => {\n const isValidEmailAddress = validator.isEmail(email);\n setError((e) => ({ ...e, password: null }));\n if (isValidEmailAddress) {\n try {\n // Register the user and, if successful, log them in\n await app.emailPasswordAuth.registerUser(email, password);\n return await handleLogin();\n } catch (err) {\n handleAuthenticationError(err, setError);\n }\n } else {\n setError((err) => ({ ...err, email: \"Email is invalid.\" }));\n }\n};\n",
"text": "Hi @Andrei_Daian,Welcome to MongoDB Community.Have you tried to implement a similar logic to our Realm web tutorial:Here a registration flow is followed by a login attempt:Please let me know if you have any additional questions.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "await app.emailPasswordAuth.registerUser(email, password)return await handleLogin();",
"text": "Thank you very much for replying, Pavel! Is this usable with sending email confirmation method, though? Between await app.emailPasswordAuth.registerUser(email, password) and return await handleLogin(); , it seems I have to interfier the confirmation process. I’m a bit confused about why they’re multiple solutions to the same problem, throughout the officail docs. This is what I’ve used for email/password authentication(Should’ve provided it in my original post) https://docs.mongodb.com/realm/web/manage-email-password-users/#std-label-web-manage-email-password-users I think your above solution treats the User Confirmation Method => Automatically confirm users case. Please correct my if I’m wrong, most probably I’m missing something! ",
"username": "Andrei_Daian"
},
{
"code": "",
"text": "Oh I see, yes the tutorial assumes automatic confirmation,@Andrei_Daian, why wouldn’t you want for the user to re-login after confirming the email.The redirect URL could land them on a login page UI eventually? This is a common procedure in most web apps.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Yes, indeed, in the end I decided just to redirect newly created users to the login page. No need to over complicate it! Was just wondering if it were any way of authenticating the user in a different way, using the tokens or something. But I’ll move to a JWT + Auth0 authentication, using an external service. The point of this project was firstly, to learn the basic approaches of MongoRealm and how to link it with the front-end application. Thank you for your replies and keep up the good work!",
"username": "Andrei_Daian"
}
] | How to automatically login users after Email/Password authentication | 2021-03-24T18:06:11.870Z | How to automatically login users after Email/Password authentication | 5,439 |
null | [] | [
{
"code": "",
"text": "What’s the best way to sync mongodb data to elastic? I’m thinking that my service should publish message to a kafka queue and a worker will fetch it and update to elastic db. But after I research, there’s mongo-connector and logstash. What’s the difference if i use a message queue and what things should i be aware if i use mongo-connector/logstash? ThanksEdit: someone told me that using kafka is more stable than mongo connector and logstash. After i read somemore theres also mongo kafka connector. Whats the difference between setting up my own queue and consumer vs using mongo kafka connector? Thanks",
"username": "Ariel_Ariel"
},
{
"code": "",
"text": "Kafka Connect is a native Kafka service that is built for integrating heterogenous data like MongoDB within Kafka. You could write your own consumer but you’ll have to deal with initial loading of data and keeping track of topic offsets to protect yourself in case of errors. In the end why bother coding all that, just use the service that is already designed for this purpose. The MongoDB Connector for Apache Kafka is the official connector supporting Kafka Connect.In your scenario, simply use the MongoDB connector as a source and point to your MongoDB cluster. Then use the ElasticSearch Sink connector to pull data from the kafka topic to Elastic.",
"username": "Robert_Walters"
}
] | Sync mongodb with elastic | 2021-03-24T16:13:41.197Z | Sync mongodb with elastic | 7,522 |
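For reference, a pipeline like the one described above is driven by two Kafka Connect configurations. The sketch below covers the MongoDB source side (the connector name, connection string, database, and collection are placeholders); an Elasticsearch sink connector is then pointed at the resulting topic in the same way:

```json
{
  "name": "mongo-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb+srv://<user>:<password>@<cluster>/",
    "database": "mydb",
    "collection": "mycollection",
    "publish.full.document.only": "true"
  }
}
```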
null | [
"crud"
] | [
{
"code": "",
"text": "I have document in one collection. once its reaches TTL, I need to remove documents from other collections based on some lookup(i.e, foreignKey reference). How to do this in mongo query",
"username": "Habeeb_Raja"
},
{
"code": "",
"text": "Hi @Habeeb_Raja,I think the best way is to populate the relevant documents with a corresponding TTL field/index as well so they will be removed as well with that document.Otherwise you can use Atlas triggers on “delete” event if the data is in Atlas or MongoDB change streams on delete event and use the delete _id key to run remove commands on other collections.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks @Pavel_Duchovny. I will try with Triggers",
"username": "Habeeb_Raja"
}
] | How to remove data from multiple collections | 2021-03-25T09:06:47.439Z | How to remove data from multiple collections | 5,681 |
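A hedged sketch of the trigger approach suggested above: a database trigger configured on delete events of the parent collection, whose function removes the dependent documents. The service, database, collection, and field names are assumptions:

```javascript
// Atlas database trigger function, fired on delete events of the parent collection.
exports = async function (changeEvent) {
  // For delete events, documentKey holds the _id of the document that was removed.
  const parentId = changeEvent.documentKey._id;

  const db = context.services.get("mongodb-atlas").db("mydb"); // placeholder names
  await db.collection("children").deleteMany({ parentId });
};
```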
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "{\n \"_id\" : ObjectId(\"605a62eadefa2a09797f1ae3\"),\n \"address\" : [\n {\n \"_id\" : ObjectId(\"605a62eadefa2a09797f1ae4\"),\n \"street\" : \"King AS\",\n \"state\" : \"CA\",\n \"zip\" : 12111\n },\n {\n \"_id\" : ObjectId(\"605a62f9defa2a09797f1ae5\"),\n \"street\" : \"123 Leon Street\",\n \"state\" : \"CA\",\n \"zip\" : 12121\n }\n ],\n \"firstName\" : \"James\",\n \"__v\" : 1\n}\nfindOneAndUpdate({ \"_id\": ObjectId(\"605a62eadefa2a09797f1ae3\"), }, { address: { $elemMatch: {\"_id\": ObjectId(\"605a62f9defa2a09797f1ae5\")} } }, {$unset:{address: \"\"}})\n\naddress: [ { _id: 605a697be0f5e50a3561dc44 } ]\n{\n \"_id\" : ObjectId(\"605a62eadefa2a09797f1ae3\"),\n \"address\" : [\n {\n \"_id\" : ObjectId(\"605a62eadefa2a09797f1ae4\"),\n \"street\" : \"King AS\",\n \"state\" : \"CA\",\n \"zip\" : 12111\n }\n ],\n \"firstName\" : \"James\",\n \"__v\" : 1\n}\n",
"text": "i have the following documenti wanted to remove the nested address document completely but somehow leaves me with that id value, instead of being completely empty.i did something like this:but somehow it tends to delete both the nested address object elements. and I’m left with something like this:what i’d ideally want is something like this:\ni’d want to delete specific address only and also, remove that subdocument from the nested array subdocuments. Something like this:lets say i want to delete the second nested documents, then the final documents should look like this:thank you kindly.",
"username": "M_San_B"
},
{
"code": "find({\n \"_id\": ObjectId(\"605a62eadefa2a09797f1ae3\")\n},\n{\n address: {\n \"$elemMatch\": {\n \"$ne\": [\n \"_id\",\n ObjectId(\"605a62f9defa2a09797f1ae5\")\n ]\n }\n }\n})",
"text": "Hi again \nthe answer is driven from this postSincerely,",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "hello,the statement you provided does not remove the nested document.\nits only outputting.Is there any other way?",
"username": "M_San_B"
},
{
"code": ".update({\n \"_id\": ObjectId(\"605a62eadefa2a09797f1ae3\")\n},\n{\n \"$pull\": {\n \"address\": {\n \"_id\": ObjectId(\"605a62f9defa2a09797f1ae5\")\n }\n }\n})",
"text": "My bad, sorry\nthere you go",
"username": "Imad_Bouteraa"
},
{
"code": ".update(\n{},\n{\n \"$pull\": {\n \"address\": {\n \"_id\": ObjectId(\"605a62f9defa2a09797f1ae5\")\n }\n }, \n{\n multi: true\n}\n})\n",
"text": "@Imad_Bouteraa Thank you so much brother. I really appreciate you taking time to help me. Your steps really guided me in finishing up my personal project. It means a lot.So for the answer I think what you wrote was on the right track, but when I tried it did not work,\nso I just looked up $pull and looks like we do not have to pass the _id of the document when modifying the nested array sub documents.Your answer definitely helped tho. Thank you again:so looks like we can get rid of the document _id and only worry about the nested document _id\nand add {multi:true} at the end of the query, like so:",
"username": "M_San_B"
},
{
"code": "_id\"_id\": ObjectId(\"605a62f9defa2a09797f1ae5\")\"_id\": ObjectId(\"605a62eadefa2a09797f1ae3\")\"_id\"\"_id\"\"_id\"",
"text": "Well, without the _id in the query. mongodb will scan the whole collection and “technically” updates every document by removing addresses matching \"_id\": ObjectId(\"605a62f9defa2a09797f1ae5\").\nHowever if you include \"_id\": ObjectId(\"605a62eadefa2a09797f1ae3\") mongodb will use the \"_id\" index and directly targets THE matching document.\nSo, if you know the \"_id\" of the concerned document. you absolutely have to take advantage of the \"_id\" index.\nNow, a lot depends on your context:Sincerely,",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to delete a specific nested subdocument completely from an document | 2021-03-23T22:24:00.430Z | How to delete a specific nested subdocument completely from an document | 29,369 |
null | [
"stitch"
] | [
{
"code": "var admin = require('firebase-admin')\n try {\n admin.initializeApp({credential: admin.credential.cert({\n \"project_id\": \"someproject-id\",\n \"private_key\": \"-----BEGIN PRIVATE KEY-----\\nKEY\\n-----END PRIVATE KEY-----\\n\",\n \"client_email\": \"[email protected]\",\n }\n )});\n } catch(err) {\n console.log(err)\n }\n \n //var token = await admin.auth().createCustomToken(uid)\n \n console.log('token', token) \n }\n",
"text": "I’m trying to initializeApp with firebase-admin and I’ve uploaded the dependencies.I’m creating a trigger function so I can generate a JWT token, to be sent to firebase for authentication.The same code below works for my backend server, but when I try to use it in Stitch function, it returns “Error: Failed to parse private key: Error: Cannot read private key. ASN.1 object does not contain an RSAPrivateKey.” as error.https://firebase.google.com/docs/admin/setup",
"username": "Dave_Teu"
},
{
"code": "",
"text": "Hi Dave – Looks like this may be a place where we want to extend how our dependency resolution works. Would you mind sharing the version of ‘firebase-admin’ that you’re using so we can try to reproduce?",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "I am having the same exact problem with firebase-admin 8.12.1Please help.",
"username": "Aidan_Salvi"
},
{
"code": "",
"text": "Same here 8.13.0",
"username": "Roby_Rodriguez"
},
{
"code": "",
"text": "latest version. you mean you guys actually got it working before?",
"username": "Dave_Teu"
},
{
"code": "",
"text": "Hi,i am getting the same problem “can not parse private key” for firebase-admin v8.9.2.have you got it working?",
"username": "Vishnu_Rana"
},
{
"code": "",
"text": "Can someone on the Mongodb Realm team give us a timeline for fixing this?Its been almost a year since this issue was created.How are we supposed to get confidence and migrate our project to mongodb-realm if something as basic as sending a firebase notification using a function is not possible.Waiting…",
"username": "Hemant_Sharma"
},
{
"code": "",
"text": "We are also facing the same problem while migrating our project, how long will it take to fix it?",
"username": "Prateek_Uttreja"
}
] | Stitch cannot parse private key | 2020-05-28T16:44:39.298Z | Stitch cannot parse private key | 4,526 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "Hi,I have uploaded function dependencies for a particular version. Now, i want to remove the previously uploaded dependencies & upload the new one.But, I am not able to delete the older dependencies? How can i do that?regards",
"username": "Vishnu_Rana"
},
{
"code": "",
"text": "I handle this by pack the dependencies locally and upload it again.",
"username": "rouuuge"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to delete function dependencies? | 2021-03-24T19:08:21.693Z | How to delete function dependencies? | 1,566 |
[
"server",
"containers",
"installation"
] | [
{
"code": "",
"text": "Hi there,I followed the steps mentioned at this page: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/#install-mongodb-community-editionbut it didn’t work (please, see the below picture):\nimage2678×196 30 KB\nIt’s a CentOS 7.Warmest regards.",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "Hi there!What command was this the output of?Can you show me the output of:\nls -l /etc/yum.repos.d/mongodb*andcat /etc/yum.repos.d/mongodb*??Also is the machine behind a proxy that tunnels port 443? a web proxy? A VPN? On a campus or corporate network?",
"username": "Sheeri_Cabral"
},
{
"code": "yum install vimmongodsystemdSystem V Initmongodmkdir /tmp/db && mongod --dbpath /tmp/dbdocker exec -it <container-name> mongodocker run --rm -it centos:7 bash",
"text": "Hi @Abelardo_Leon_Gonzal and welcome in the MongoDB Community !I just followed the steps from your link in a fresh CentOS 7 docker container and I was able to install the latest version of MongoDB (v4.4.3 currently) without any major issue.I just had to install vim with yum install vim because my container was just out of the box and I couldn’t start mongod using systemd or System V Init because this just doesn’t work in a docker container (needs more permissions) - but I was able to start a mongod manually with mkdir /tmp/db && mongod --dbpath /tmp/db and then connect to it from my host using docker exec -it <container-name> mongo.Just FYI, I started my CentOS 7 container with docker run --rm -it centos:7 bash.So I couldn’t reproduce your issue really here. I’m not sure what step is causing this issue. Could you please be more specific?Thanks,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "sudo yum install -y mongodb-orgls -l /etc/yum.repos.d/mongodb*",
"text": "Hi @Sheeri_Cabral,I run the following command: sudo yum install -y mongodb-org\nas mentioned at that page.The output of the command: ls -l /etc/yum.repos.d/mongodb*\nis:-rw-r–r-- 1 root root 200 Jan 5 09:56 /etc/yum.repos.d/mongodb-org-4.4.repo[mongodb-org-4.4]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/centos/$releasever/mongodb-org/4.4/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-4.4.ascIt’s a VPS hosting.Thanks for your help! Best regards.",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "Hi there,I have a VPS account which runs CentOS 7.When I run thesudo yum install -y mongodb-orgin order to install MongoDB to be used with my NodeJS app, it outputs the following lines:Loaded plugins: fastestmirror, langpacks, universal-hooks\nLoading mirror speeds from cached hostfile\nEA4 | 2.9 kB 00:00:00\ncpanel-addons-production-feed | 2.9 kB 00:00:00\ncpanel-plugins | 2.9 kB 00:00:00\nbase | 3.6 kB 00:00:00\ndocker-ce-stable | 3.5 kB 00:00:00\nepel | 4.7 kB 00:00:00\nextras | 2.9 kB 00:00:00\nimunify360 | 2.7 kB 00:00:00\nimunify360-rollout-1 | 3.0 kB 00:00:00\nimunify360-rollout-2 | 3.0 kB 00:00:00\nimunify360-rollout-3 | 3.0 kB 00:00:00\nimunify360-rollout-4 | 3.0 kB 00:00:00\nh ttps://repo.mongodb.org/yum/centos/7/mongodb-org/4.4/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found\nTrying other mirror.\nTo address this issue please refer to the below wiki articleh ttps://wiki.centos.org/yum-errorsIf above article doesn’t help to resolve this issue please use h ttps://bugs.centos.org/.mysql-connectors-community | 2.6 kB 00:00:00\nmysql-tools-community | 2.6 kB 00:00:00\nmysql57-community | 2.6 kB 00:00:00\nremi-php74 | 3.0 kB 00:00:00\nremi-safe | 3.0 kB 00:00:00\nul | 2.9 kB 00:00:00\nul_ipage | 2.9 kB 00:00:00\nupdates | 2.9 kB 00:00:00\nNo package mongodb-org available.\nError: Nothing to [email protected] [~]# ls -l /etc/yum.repos.d/mongodb*:-rw-r–r-- 1 root root 200 Jan 5 09:56 /etc/yum.repos.d/mongodb-org-4.4.repocat /etc/yum.repos.d/mongodb*:[mongodb-org-4.4]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/centos/$releasever/mongodb-org/4.4/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-4.4.ascThanks for your help and best regards.",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "h ttps://repo.mongodb.org/yum/centos/7/mongodb-org/4.4/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not FoundLooks like an Internet connection issue? Some firewall misconfiguration maybe? Looks like you can’t reach the repo.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @Abelardo_Leon_Gonzal,The baseurl looks different to me - in the docs, it’s:baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.4/x86_64/your file has ‘centos’ instead of ‘redhat’ - when I go to MongoDB Repositories I’m not seeing a centos option, but I do see a redhat one.Further, if I go to https://repo.mongodb.org/yum/centos/7/mongodb-org/4.4/x86_64/repodata/repomd.xml I get a 404 Not Found but if I change ‘centos’ to ‘redhat’ and go to:https://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/repodata/repomd.xmlI see XML data.can you try changing your baseurl?",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Hi there,Thanks for replying.After I replacing “centos” by “redhat”, “$releaseever” by “7” and to run the above command, I got this message:Loaded plugins: fastestmirror, langpacks, universal-hooks\nLoading mirror speeds from cached hostfile\n…\nNo package mongodb-org available.\nError: Nothing to doMy mongodb-org-4.4.repo file:[mongodb-org-4.4]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-4.4.ascBest regards.",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "Hi there,Thanks for replying.It’s not related to an Internet connection issue. It’s my fault because I made some typos in the url. I fixed them but it still continues with no be installed.Best regards.",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "Hi there!Did you clear the yum cache after making that change?yum clean allHopefully that will have the installation working!",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Hi there!I executed that command but a strange thing happens.When I re-run the following command:sudo yum install -y mongodb-orgits output was:Loaded plugins: fastestmirror, langpacks, universal-hooks\nDetermining fastest mirrors\n…\nEA4 | 2.9 kB 00:00:00\ncpanel-addons-production-feed | 2.9 kB 00:00:00\ncpanel-plugins | 2.9 kB 00:00:00I don’t understand why in this line appears centos again when I wrote redhat in my mongodb-org-4.4.repo file:http://mirrors.unifiedlayer.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 403 -\nForbiddenIn fact, if I click on this link a xml appears. It seems valid.Trying other mirror.\nTo address this issue please refer to the below wiki article",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "yum clean allI re-runned twice this command and the second time the following output was showed:\nimage619×1233 132 KBThe package #23/25 clearly statesmongodb-org-4.4/primary_dbbut the last message says:No package mongodb-org available.",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "Hi! I’m assuming that this also didn’t work? If I try to find mongo-org in the packages there, it doesn’t exist at\nhttp://mirrors.unifiedlayer.com/centos/7/os/x86_64/Packages/What does this command result in?\ncat /etc/issue",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "cat /etc/issue\\S\nKernel \\r on an \\m",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "Right, sorry, try these too?cat /proc/version\nand\ncat /etc/redhat-release",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Hi!\nYes, you correctly assumed that that also didn’t work.",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "cat /proc/version outputs:Linux version 3.10.0-1127.19.1.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Tue Aug 25 17:23:54 UTC 2020cat /etc/redhat-release:CentOS Linux release 7.9.2009 (Core)",
"username": "Abelardo_Leon_Gonzal"
},
{
"code": "",
"text": "OK, great. So you’re on CentOS.Can you try this?\nsudo rm /etc/yum.repos.d/mongodb-org-4.4.repo\nsudo yum clean allThen edit /etc/yum.repos.d/mongodb-org-4.4.repo and add in:\n[mongodb-org-4.4]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.4/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-4.4.ascand then do\nsudo yum install -y mongodb-organd see if that installs it?If that doesn’t work, you could try downloading the packages manually (there’s a link on the installation page) - MongoDB Repositories - specifically these packages and versions:\nhttps://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/RPMS/mongodb-org-4.4.3-1.el7.x86_64.rpm\nhttps://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/RPMS/mongodb-org-database-tools-extra-4.4.3-1.el7.x86_64.rpm\nhttps://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/RPMS/mongodb-org-mongos-4.4.3-1.el7.x86_64.rpm\nhttps://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/RPMS/mongodb-org-server-4.4.3-1.el7.x86_64.rpm\nhttps://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/RPMS/mongodb-org-shell-4.4.3-1.el7.x86_64.rpm\nhttps://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/RPMS/mongodb-org-tools-4.4.3-1.el7.x86_64.rpm\nhttps://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/RPMS/mongodb-cli-1.9.1.x86_64.rpm\nhttps://repo.mongodb.org/yum/redhat/7/mongodb-org/4.4/x86_64/RPMS/mongodb-database-tools-100.2.1.x86_64.rpmDownload them via wget or curl, and then yum install FILENAME, for example:\nsudo yum install -y mongodb-org-4.4.3-1.el7.x86_64.rpmI’m not sure why you’re having this issue - we’ve had a few people test on CentOS 7 and the instructions are working. ",
"username": "Sheeri_Cabral"
},
{
"code": "--noplugins",
"text": "@Sheeri_Cabral\nI just ran through on a centos:7 container, all looks good to me following the install instructions.@Abelardo_Leon_Gonzal\nAlso try the yum with the --noplugins flag in case there is a yum-plugin messing with things.",
"username": "chris"
},
{
"code": "",
"text": "Yes, I mentioned it in my first post.The first solution didn’t work. I will try to manually download those packages in order to be installed via yum install rpm by rpm.Best regards.",
"username": "Abelardo_Leon_Gonzal"
}
] | Installing MongoDB Cpanel | 2021-01-01T12:53:16.436Z | Installing MongoDB Cpanel | 19,972 |
null | [] | [
{
"code": "",
"text": "{\nclassId: 1\nnumberOfStudents: 4\nStudents:\n[\n{\nstudentId: 1001,\nstudentName: ‘foo’\n},\n{\nstudentId: 1002,\nstudentName: ‘bar’\n},\n{\nstudentId: null\nstudentName: null\n},\n{\nstudentId: null\nstudentName: null\n},\n{\nstudentId: null\nstudentName: null\n},\n{\nstudentId: null\nstudentName: null\n},\n…\n}How can I locate and update specific student by name and update it?\nHow can I delete first student record which matches certain name?",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "Hi @Abhishek_Kumar_Singh,Why do you need it with aggregation pipeline and not a regular update? Are you using pipeline update type?For updates you have arrayFilters to specify criteria for the updated object.Why do you have multiple place holders with null? Why not just to use $push when a new student is added to a class?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I need to conditional update, and I believe conditional update support is not available in regular update right?\nI see your point regarding placeholder, totally agreed.",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "It is supported, look at the link I posted",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I am sorry by conditional update I meant $cond operator like functionality. I couldn’t find anything in this regard.",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "Hi @Abhishek_Kumar_Singh,Can you describe the initial document and the updated one?Im afraid im still missing the point.Thanks",
"username": "Pavel_Duchovny"
}
] | How to find, update, delete a document in an embedded array using aggregation pipeline? | 2021-03-23T21:43:37.447Z | How to find, update, delete a document in an embedded array using aggregation pipeline? | 3,161 |
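To make the arrayFilters suggestion above concrete, here is a sketch against the sample document (the collection name is assumed): the first statement updates the embedded students matched by name, the second removes matching students from the array entirely:

```javascript
// Update every embedded student whose name matches the array filter.
db.classes.updateOne(
  { classId: 1 },
  { $set: { "Students.$[s].studentName": "new name" } },
  { arrayFilters: [ { "s.studentName": "foo" } ] }
);

// Remove embedded students with that name.
// ($pull removes all matching elements, not only the first one.)
db.classes.updateOne(
  { classId: 1 },
  { $pull: { Students: { studentName: "foo" } } }
);
```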
null | [
"queries"
] | [
{
"code": "db.students.updateOne(\n {\n _id: 5,\n grades: { $elemMatch: { grade: { $lte: 90 }, mean: { $gt: 80 } } }\n },\n { $set: { \"grades.$.std\" : 6 } }\n)\ndb.students.findOneAndUpdate(\n{ \n _id : 5, \n \"grades.grade\" : { $lte : 90}, \"grades.mean\" : { $gt : 80} }, \n{ $set : { \"grades.$.std\" : 6}\n})\n",
"text": "Can someone please explain the correct application of ‘$elemMatch’ functionality.\nmongoDb document suggest that elemMatch needs to be used for multi-field condition while updating embedded arrays.Mentioned example:but the same can be achieved with:What is the use case for ‘elemMatch’ operator?",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "Hello @Abhishek_Kumar_Singh,There is a similar topic, it will help you to understand the difference,",
"username": "turivishal"
}
] | '$elemMatch' functionality | 2021-03-24T22:23:51.606Z | ‘$elemMatch’ functionality | 2,619 |
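In short, $elemMatch forces all of the conditions to be satisfied by a single array element, whereas separate dot-notation conditions may each be satisfied by different elements. A small mongo-shell sketch (the sample document is an assumption) shows where the two forms diverge:

```js
// Assumed sample document:
// { _id: 5, grades: [ { grade: 95, mean: 90 }, { grade: 85, mean: 70 } ] }

// Dot notation: grade<=90 is satisfied by the second element and mean>80 by the
// first, so the document matches and the positional $ may update an element
// that does not satisfy both conditions.
db.students.updateOne(
  { _id: 5, "grades.grade": { $lte: 90 }, "grades.mean": { $gt: 80 } },
  { $set: { "grades.$.std": 6 } }
)

// $elemMatch: both conditions must hold on the same element, so this
// document is not modified at all.
db.students.updateOne(
  { _id: 5, grades: { $elemMatch: { grade: { $lte: 90 }, mean: { $gt: 80 } } } },
  { $set: { "grades.$.std": 6 } }
)
```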
null | [
"aggregation",
"dot-net"
] | [
{
"code": "public class Document {\n public string Name { get; set; }\n public string Description{ get; set; }\n public List<string> Children { get; set; }\n}\nConventionRegistry.Register(\"camelCase\", new ConventionPack { \n new CamelCaseElementNameConvention()\n}, _ => true);\npublic class ProjectionDocument {\n \n [BsonElement(\"name\")]\n public string Name { get; set; }\n \n [BsonElement(\"description\")]\n public string Description { get; set; }\n}\nBsonClassMap.RegisterClassMap<ProjectionDocument>(cm =>\n{\n cm.AutoMap();\n});\nIMongoCollection<Document>().Aggregate()\n .Match(matchDefinition)\n .Project(a => new ProjectionDocument {\n Name = a.Name,\n Description = a.Description,\n }\n .Group(a => a.Name, a => new ProjectionDocument {\n Name = a.Key,\n Description = a.First().Description,\n }\n .Sort(Builders<ProjectionDocument>.Sort.Ascending(\"name\"))\n .ToList();\nProject(Builders<Document>.Projection.Expression(a => new ProjectionDocument { ... })).Project(a => new ProjectionDocument { ... })",
"text": "Hello, I am here to understand if I am doing something wrong or it’s the standard beheviour.I have a Document on the database defined with the classThe properties on MongoDb are traslated by the following global rulesSo all the properties is camelCase on the DbHere is the ProjectClassThis projectionClass is also registered as:When I read the data I am doing the follow:This does not sorting right.\nIf I use “Name” instead of “name” it will work.It is the expected behaviour ?\nI am not using a Lamba expression because the sort field definition can be anything.Second question:Is\nProject(Builders<Document>.Projection.Expression(a => new ProjectionDocument { ... }))\nthe same things as\n.Project(a => new ProjectionDocument { ... })?Thanks in advance.\nKind Regards",
"username": "Andrea_Zanini"
},
{
"code": "BuilderCamelCaseElementNameConventionCamelCaseElementNameConventionProjectionDocument[BsonElement]",
"text": "Hi, Andrea,Thanks for reaching out to us with your question about mapping C# property names to database fields.Since you are using a Builder for the sort stage, you should refer to the field by its C# property name “Name” rather than by its database field name “name”. The CamelCaseElementNameConvention should take care of any C# property name to MongoDB field name conversions.ASIDE: Since CamelCaseElementNameConvention is configured for all types, you shouldn’t need to annotate your ProjectionDocument class with field names. Even [BsonElement] isn’t needed since we map all public get/set properties by default.Hope that helps. Let us know if you have any additional questions.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "IMongoCollection<Document>",
"text": "Hello, James\nthank you for your quick answer.So just for better understanding.\nIf I am sorting with a Builder the FieldDefinition refer to the name of the C# property, even if there is BsonElement or the property on the DB is camelCase (globally setted)?I’ve tried do the Sort directly to the IMongoCollection<Document> (with the Builder) and it works using the camelCase property name and not the C# property name.Can you explain me why this difference?Thanks again for your time!",
"username": "Andrea_Zanini"
},
{
"code": "Builders<T>",
"text": "Ok maybe it’s like:If I use the Builders<T> for sorting, projecting etc. the query works with the BsonDocument objInstead, if I use the lamba to perform this operations it’s like working in an upper level (C#) so thats why the property name valid is the C# class property name.I understood correctly?",
"username": "Andrea_Zanini"
},
{
"code": "Builders<BsonDocument>Builders<DOMAIN>DOMAIN",
"text": "Hi, Andrea,If you are working with Builders<BsonDocument> then you would use the field names in the database. If you are working with Builders<DOMAIN> (where DOMAIN is one of your strongly-typed C# classes) then you would use the C# property names.If you have encountered a case where you’re using your C# domain class but require using the database field names, please provide a self-contained repro so we can debug further as that is not the expected behaviour.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "public class CSharpClass {\n public string Name {get; set; }\n public string Description {get; set; }\n .... (like 20 more properties)\n}\n\tprivate static FilterDefinition<T> GetFilterDefinitionByFunction<T>(TreeParser.QueryTreeItem item, List<FilterDefinition<T>> other) where T : class\n\t{\n\t\treturn item.Operator switch\n\t\t{\n\t\t\tFunction.And => Builders<T>.Filter.And(other),\n\t\t\tFunction.Or => Builders<T>.Filter.Or(other),\n\t\t\tFunction.Equal => Builders<T>.Filter.Eq(Mappings.KeywordDbMapping[item.Keyword], GetStringOrNumber(item.Condition)),\n\t\t\tFunction.Contains => double.TryParse(item.Condition, out _)\n\t\t\t\t? Builders<T>.Filter.Eq(Mappings.KeywordDbMapping[item.Keyword], GetStringOrNumber(item.Condition))\n\t\t\t\t: Builders<T>.Filter.Regex(Mappings.KeywordDbMapping[item.Keyword], $\"(?i).*{item.Condition}.*\"),\n\n\t\t\tFunction.EqualOrGreaterThan => Builders<T>.Filter.Gte(Mappings.KeywordDbMapping[item.Keyword], GetStringOrNumber(item.Condition)),\n\t\t\tFunction.EqualOrLesserThan => Builders<T>.Filter.Lte(Mappings.KeywordDbMapping[item.Keyword], GetStringOrNumber(item.Condition)),\n\t\t\tFunction.LesserThan => Builders<T>.Filter.Lt(Mappings.KeywordDbMapping[item.Keyword], GetStringOrNumber(item.Condition)),\n\t\t\tFunction.GreaterThan => Builders<T>.Filter.Gt(Mappings.KeywordDbMapping[item.Keyword], GetStringOrNumber(item.Condition)),\n\t\t\tFunction.Size => Builders<T>.Filter.Size(Mappings.KeywordDbMapping[item.Keyword], GetIntNumber(item.Condition)),\n\t\t\t_ => Builders<T>.Filter.Empty\n\t\t};\n\t}\nvar filterDefinition = GetFilterDefinitionByFunction<CSharpClass>(result.TreeItem, ...);\nMappings.KeywordDbMapping[item.Keyword]\nIMongoDbCollection<CSharpClass>.Aggregate().Match(filterDefinition)\n.Project(a => new NewCSharpDocument {\n .....\n})\nMatch(filterDefinition)",
"text": "Hello James,I will check but I’d like if you can check what I am actually doing with this example.I have a classBecause I have a dynamic filter (the user can search into the documents by keywords + value)\nI have created a method for build the filterDefinition and it is like thisso when I call this BuilderMethod I dothis call use a strongly typed C# class (used also fo the insert operation with the global camelCase convention) and the properties names that I get fromis camelCase property name (because on MongoDb are like that) and it works greatly.But after I do theand addthe sortDefinition use the C# property name.Why the Match(filterDefinition) use the camelCase conventionad MongoDb properties?\nHope to have explained my questionKind Regards,\nAndrea",
"username": "Andrea_Zanini"
},
{
"code": "ProjectionDocumentNameProjectionDocumentName_idToString()var query = coll.Aggregate()\n .Match(Builders<Document>.Filter.Empty)\n .Project(a => new ProjectionDocument\n {\n Name = a.Name,\n Description = a.Description,\n })\n .Group(a => a.Name, a => new ProjectionDocument\n {\n Name = a.Key,\n Description = a.First().Description,\n })\n .Sort(Builders<ProjectionDocument>.Sort.Ascending(\"name\"));\n\nConsole.WriteLine(query.ToString());\naggregate([\n { \"$match\" : { } },\n { \"$project\" : { \"Name\" : \"$name\", \"Description\" : \"$description\", \"_id\" : 0 } },\n { \"$group\" : { \"_id\" : \"$Name\", \"Description\" : { \"$first\" : \"$Description\" } } },\n { \"$sort\" : { \"name\" : 1 } }\n])\n$group{_id: <<STRING>>, Description: <<STRING>>}name$group$sortName = a.First().NameName = a.Keyvar query = coll.Aggregate()\n .Match(Builders<Document>.Filter.Empty)\n .Project(a => new ProjectionDocument\n {\n Name = a.Name,\n Description = a.Description,\n })\n .Group(a => a.Name, a => new ProjectionDocument\n {\n Name = a.First().Name,\n Description = a.First().Description,\n })\n .Sort(Builders<ProjectionDocument>.Sort.Ascending(\"Name\"));\n\nConsole.WriteLine(query.ToString());\naggregate([\n { \"$match\" : { } },\n { \"$project\" : { \"Name\" : \"$name\", \"Description\" : \"$description\", \"_id\" : 0 } },\n { \"$group\" : { \"_id\" : \"$Name\", \"Name\" : { \"$first\" : \"$Name\" }, \"Description\" : { \"$first\" : \"$Description\" } } },\n { \"$sort\" : { \"Name\" : 1 } }\n])\n$group{_id: <<STRING>>, Name: <<STRING>>, Description: <<STRING>>}$sortNamecamelCaseConvention$project{ \"$project\" : { \"Name\" : \"$name\", \"Description\" : \"$description\", \"_id\" : 0 } }$project",
"text": "Hi, Andrea,Thank you for your additional clarifications. I dug into your code a bit more and believe I understand the root cause of the problem.In your aggregation, you are projecting into ProjectionDocument, grouping those results by Name into a new ProjectionDocument, and then attempting to sort those results. Our LINQ provider isn’t smart enough to realize that you’ve implicitly renamed Name to _id in the grouping stage.This is probably somewhat confusing without a concrete example. Let’s consider the following aggregation…ASIDE: In the .NET/C# driver you can see how we translate any query into MQL by calling ToString() on the query.The resulting aggregation is:Note the output of the $group stage is a set of documents of the form {_id: <<STRING>>, Description: <<STRING>>}. There is no mention of the name field in the output of the $group stage and thus the $sort stage is a no op.Let’s consider a slightly different, but equivalent aggregation. Notably we use Name = a.First().Name rather than Name = a.Key:The resulting MQL is:The output of the $group stage now contains documents of the form {_id: <<STRING>>, Name: <<STRING>>, Description: <<STRING>>}. The $sort stage is now able to sort on the Name field as desired.Note that the camelCaseConvention only plays a role in the $project where we are determining database field names from C# property names. e.g. { \"$project\" : { \"Name\" : \"$name\", \"Description\" : \"$description\", \"_id\" : 0 } } After the $project stage, we have transient documents with the correctly-cased field names that can be referred to by their C# property names.Hope this explanation helps.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Hello James,thank you for your detailed clarifications.\nNow I understood perfectly the behaviour!Thanks again,\nAndrea",
"username": "Andrea_Zanini"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C# .NET Core 5.0 - Driver (2.12.0) - Sort ignore BsonElement attribute on projected class | 2021-03-22T20:12:59.325Z | C# .NET Core 5.0 - Driver (2.12.0) - Sort ignore BsonElement attribute on projected class | 10,847 |
null | [
"java",
"security"
] | [
{
"code": "gssapiHostName",
"text": "I am setting up an environment with mongodb and envoy combination. Our current mongodb has configured with kerberos authentication. After configuring mongodb with envoy, I was able to connect using mongodb client by passing gssapiHostName option.Is there any way to pass this option from mongodb java driver ?",
"username": "Surendra_K"
},
{
"code": "gssapiHostNamegssapiHostName",
"text": "Hi @Surendra_K,There is no way to configure gssapiHostName on the client side using the Java driver. However, the server does allow configuration of saslHostName in the event that there is a discrepancy between the learned hostname and the attached principal of the mongod. It serves the same purpose as gssapiHostName, but solved from the server side.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Hi @Jeffery_Yemin,Thanks for the reply. I looked at the GSSAPIAuthenticator code and the issue is with getHostName method. Currently, getHostName method simply returns the hostname from ServerAddress. If ServerAddress has ability capture gssapiHostname and use that value as hostname if available. That could solve the issue. After doing this change in GSSAPIAuthenticator class, it worked for me but I am not really sure about side effects.Thanks,\nSurendra",
"username": "Surendra_K"
},
{
"code": "MongoClientgssapiHostname",
"text": "I don’t think it will be a simple change. Consider that a MongoClient can be configured to connect to a replica set, and if so it attempts to discover all of its members. The driver would need to be configured with a different gssapiHostname for each discovered member.If you’re not able to determine a path forward without a change to the driver, I suggest you contact MongoDB support via support.mongodb.com (I’m assuming you have a support contract since GSSAPI authentication is an Enterprise-only feature).Regards,\nJeff Yemin",
"username": "Jeffrey_Yemin"
}
] | GSSAPI authentication hostname override issue in java | 2021-03-23T20:56:51.373Z | GSSAPI authentication hostname override issue in java | 2,450 |
null | [
"realm-studio"
] | [
{
"code": "",
"text": "Hello everyone. I am new to Mongo Db and Realm- I am helping a team with a mobile app.\nThe app set up with a Realm Cloud project and I need to have access to the database, I have tried to use the Real Studio but there is no more Connect to Cloud project button.Any idea of how can I see or query the Realm Cloud Database?Thanks",
"username": "Miguel_Olvera"
},
{
"code": "",
"text": "Hi Miguel,Thanks for creating your first post and welcome to the community!If you are developing a new MongoDB Realm app, this works by connecting to an Atlas Cluster which is where your database can be queried.If you already have an Atlas cluster linked to your Realm app and want to query data in your cluster, you’ll need to be given a relevant user role on the project that contains the cluster before you can view documents.I have tried to use the Real Studio but there is no more Connect to Cloud project button.I believe you’re referring to the functionality of an older version of the Studio which allowed you to connect to the legacy Realm app server. The newer version of Realm Studio does not have this connect to cloud button as it is meant to only allow accessing local Realm files for troubleshooting purposes.If you are migrating an existing legacy Realm app to MongoDB Realm, please refer to this article for guidance.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Hi @Miguel_Olvera , note that you can also use Realm Studio to connect to local Realm databases. This article explains how to use it with iOS apps.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thanks for the answer but that only opens local files forma single device/user.I want to access the complete Realm Database.Thanks",
"username": "Miguel_Olvera"
},
{
"code": "",
"text": "Hello Mansoor.Is there any functionality that I can use to do these. Where can I access the legacy Realm database on the cloud?Is the solution to migrate to Mongo DB?\nThanks",
"username": "Miguel_Olvera"
},
{
"code": "",
"text": "@Miguel_Olvera Yes, using MongoDB Realm Sync to consolidate all of the data into a MongoDB Atlas database would be the way to view all of the data in one place.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "@Miguel_Olvera You can find older Studio Releases here:\nhttps://studio-releases.realm.io/For instance, 5.0.3 should allow you to connect to legacy Realm Cloud",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Should have mentioned that tried using it but MacOs Catalina doesn’t allow the software to run .Best",
"username": "Miguel_Olvera"
},
{
"code": "",
"text": "Welcome @Miguel_OlveraYou may already be aware but allow me to provide a high-level overview which may or may not help.I don’t think it was mentioned but when Realm became MongoDB Realm, the strategy for setting up and working directly with the database also changed. Previously you would access both local and Realm Cloud realms with Realm Studio. That App is now for local only.Going forward (and once the data is migrated), you need to use the online MongoDB Console to set up your app and work directly with all of the Realm data. This is the login link.Secondly, Realm is a local first database. That means that what is stored in the Realm Cloud may be a copy of the data that is stored locally (data is written locally first and then sync’d later). So you can use Realm Studio to access the local data.However, legacy came in two flavors; Full Sync and Partial (or query) Sync. If that was set up as full sync, all of your data is local and what’s in the cloud is just a copy. If it was partial, you will need to go through a different process. You didn’t mention which is was.If you need to use Realm Studio 5.0.3 and are running Catalina, you can run older versions of macOS (and apps) using VirtualBox.Keep in mind that if you do open a prior Realm file with Realm Studio, it will force an upgrade to the file which means it won’t be compatible with an older SDK, so you end up updating that as well.Loop us in if you have more questions.",
"username": "Jay"
}
] | Realm Cloud Databse - Realm Studio | 2021-03-23T03:16:10.746Z | Realm Cloud Databse - Realm Studio | 4,342 |
null | [] | [
{
"code": "src/mongo/db/db.cpp:340: error: undefined reference to 'mongo::OCSPManager::startThreadPool()'\nsrc/mongo/util/net/ocsp/ocsp_manager.h:49: error: undefined reference to 'mongo::OCSPManager::OCSPManager()'\ncollect2: error: ld returned 1 exit status\nscons: *** [build/opt/mongo/mongod] Error 1\nscons: building terminated because of errors.\nbuild/opt/mongo/mongod failed: Error 1\n",
"text": "Hi!\nI’m trying to build mongodb 4.4.4 with Ubuntu 20.04, using the following command:pip3 install -r etc/pip/compile-requirements.txtpython3 buildscripts/scons.py \ninstall-mongod \n–ssl=off \n–enable-free-mon=off -j 12 \n–disable-warnings-as-errors \nLINKFLAGS=’-static-libstdc++’But, it fails with an error on ocsp manager after a long time building.Am I doing something wrong? I will try again with mongo 4.4.1 and see if something changes between versions.",
"username": "Renan_Castro"
},
{
"code": "--ssl=on--ssl=off--ssl=on-ssl=off",
"text": "Hi @Renan_Castro -Thanks for getting in touch with us about the build issue you are experiencing. Does that error still reproduce for you if you build with --ssl=on instead of --ssl=off? The SSL disabled configuration isn’t a build that gets tested frequently on our side, as all of our builds that we release have SSL enabled.If it does build with --ssl=on, I’d suggest following up with a bug report in the SERVER project on jira.mongodb.com. Please note in any such ticket if the -ssl=off configuration was working for you with a specific release in the past, as that would likely indicate a build regression.Thanks,\nAndrew",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "Thanks, @Andrew_Morrow!\nIt’s actually as you said, the build works with SSL on, and doesn’t with ssl off.I’m doing this on behalf of the Meteor community, as we bundle the Mongo binary with SSL off. I did a patch for this, and submitted a PR(SERVER-59010 Fix SSL off build, OCSPManager shouldn't be used when ssl = off by renanccastro · Pull Request #1401 · mongodb/mongo · GitHub). I will open an issue too!Thanks for your help.",
"username": "Renan_Castro"
},
{
"code": "",
"text": "Thanks for submitting the PR.I recommend mentioning that this is in support of meteor on the PR, as it may help provide some context for the reviewers (which probably won’t be me).Meanwhile, I’d be interested to know a little more about how exactly meteor uses the mongo binaries. Could you provide a brief description?Thanks,\nAndrew",
"username": "Andrew_Morrow"
}
] | Failure to build MongoDB 4.4.4 | 2021-03-24T00:22:18.851Z | Failure to build MongoDB 4.4.4 | 3,857 |
null | [
"aggregation",
"golang"
] | [
{
"code": "cur, err := collection.Aggregate(\n ctx,\n []bson.M{\n bson.M{\"$match\": generateQueryFilter(query)},\n bson.M{\"$set\": bson.M{\"count\": \"$count\"}},\n bson.M{\"$match\": generatePaginationFilter(query)},\n bson.M{\"$sort\": sort},\n bson.M{\"$facet\": bson.M{\n \"data\": []bson.M{bson.M{\"$limit\": query.Limit}},\n }},\n },\n ) \nbson.M{\"$set\": bson.M{\"count\": \"$count\"}}, \n",
"text": "I have run the aggregate command as below:but I’m not sure how to get the count that I calculated fromI want the count before the 2nd $match… Also, I am not aware of any other way I can get the data field besides using facet. Is there any other way? It looks weird that I use facet with only 1 pipeline. I have tried using $set but it seems $set and $limit can’t be used in one pipeline.",
"username": "Ariel_Ariel"
},
{
"code": "",
"text": "Hi @Ariel_Ariel,Can you please share what your expected pipeline output is? I think with that, we can come up with a plan on how to get there.Best,",
"username": "nraboy"
}
] | Count and filter pipelines using mongodb go driver aggregate | 2021-02-20T02:38:09.106Z | Count and filter pipelines using mongodb go driver aggregate | 4,767 |
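One way to address both points in the question above - keeping the pre-pagination count while still getting the limited data - is to compute them in separate $facet sub-pipelines. A sketch in mongo-shell syntax rather than Go; the collection name, sort field and limit are placeholders:

```js
db.items.aggregate([
  { $match: { /* query filter */ } },
  { $facet: {
      // total documents matching the first filter, i.e. the count before the 2nd $match
      totalCount: [ { $count: "count" } ],
      // paginated results
      data: [
        { $match: { /* pagination filter */ } },
        { $sort: { createdAt: -1 } },
        { $limit: 20 }
      ]
  } }
])
```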
null | [
"data-modeling",
"sharding"
] | [
{
"code": "",
"text": "hi all,we store some 20M documents per day, documents have an average size of 3.5KB , storage in MongoDB accounts for some 20GB/day , indexsize is approx 21GB/day\nTo keep the sizes of our indexes reasonable (working set size in RAM, remember) we create collections per week of data.We are not sure how we should define our sharding-key properly, for the following 2 requirements:We had a sharding-key based on a single numeric field in our data, a field that is taken from a timestamp in the data, and of which we use the seconds of each hour in the data-timestamp combined.\nThis yields 3600 possible seconds, so we pre-split our new (empty) collections into 3600 chunks, then move these chunks manually over 108 shards (=33 chunks per shard). This is done prior to any data loading because otherwise the chunk movements go terribly slow.\nWith this sharding-key, our data gets very evenly distributed across all shards and chunks.But our users can launch queries with criteria for these 5 fields, for a specific date or for a date-range. We don’t know which of these 5 fields they are filtering on, it could be one, two, or more of the fields. They very seldomly would select within a very short timerange that would translate (via the sharding-key) to very few chunks. So typically ALL chunks on ALL shards get queried each time, which is obviously far from optimal:While data is being loaded into the current’s week collection, its indexes are present in RAM/cache. So queries on this week’s data go terribly fast (sub-second responses). That’s the good part.But when we launch a query of another (thus older) week, the indexes of that other week’s collection need to be loaded into RAM/cache first. And this needs to be done on ALL shards. If data loading happens to be busy at the same time for this week’s collection, then not all indexes fit in RAM at the same time. It happens that the queries take 10-18mins in such conditions !So we need to reduce the number of shards involved in these typical random queries, and the number of chunks involved.I understood that, in order to help MongoDB identify in which limited number of chunks it should search for some query, we must include some selection-criterium also for the shard key values.But we don’t know for which of the typical 5 selection fields a user would enter criteria.\nSo we don’t know how we possibly might be able to compute a selection-criterium for a sharding key.Any hints/tips are most welcome !\nRob",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "An example:if my 5 selection-fields are e.g. named “A”, “B”, “C”, “D”, “E” ;\nthen I have eg a document with “A=ab1c”, “B=d2ef”, “C=ghab”, “D=12389”, “E=526_ab”Now users might run queries with given selection values (criteria) for eg only the field “A”, or for both fields “B” and “C” , or for only field “E”, or for a partial value of field “B” (like “*2ef”)-> when I define the sharding key on all of the 5 fields, the sharding key can be computed at insert time (when we know the values of all 5 fields) to drop that document in some specific chunk\n-> but when a user specifies query criteria for less than all 5 fields, we cannot compute the corresponding shard key where the original document has been stored",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "I found some web-article suggesting to use multiple sharded collections:Then the application makes queries in these sub-collections based on which filter-key has been specified by the user. Such queries returns a set of “_id” values.\nThe application should ‘intersect’ the obtained “_id” values from these sub-collections.\nThen the application makes a second query in the ‘main’ collection where the full documents are stored, filtering on the “_id” values it got from the previous sub-queries.Each such collection is fully sharded, so these sub-queries and final query are expected to run very fast, compensating the overhead of running multiple queries instead of one single query.",
"username": "Rob_De_Langhe"
}
] | Need advise for sharding-key | 2021-03-24T12:53:04.011Z | Need advise for sharding-key | 2,039 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n id,\n name,\n subjects:\n [ \n { name : 'Maths', marks : '100'},\n { name : 'Science', marks : '90'}\n ]\n}\n",
"text": "I want to delete entry corresponding to Math in embedded array using aggregation pipeline, How can I do so?",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "db.collection.aggregate([\n {\n \"$addFields\": {\n subjects: {\n \"$filter\": {\n \"input\": \"$subjects\",\n \"as\": \"subject\",\n \"cond\": {\n $ne: [\n \"$$subject.name\",\n \"Maths\"\n ]\n }\n }\n }\n }\n }\n])",
"text": "Hi\nIt worked for me",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "Thanks for replying Imad. I was hoping to get rid of only single occurrence, can this be tweaked to accomplish that?",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "@Pavel_Duchovny Hey Pavel, need your 2 cents here if possible please. Is it possible to locate an embedded document and delete it using aggregation pipeline? Apologies for the random tag.",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "It would be best to publish a couple sample documents from the input set and the exact result set that you wish.",
"username": "steevej"
},
{
"code": "subjectsdb.collection.aggregate([\n {\n \"$addFields\": {\n subjects: {\n \"$let\": {\n \"vars\": {\n \"firstMatchedIndex\": {\n \"$arrayElemAt\": [\n {\n \"$arrayElemAt\": [\n {\n \"$filter\": {\n \"input\": {\n \"$zip\": {\n \"inputs\": [\n \"$subjects\",\n {\n \"$range\": [\n 0,\n {\n \"$size\": \"$subjects\"\n }\n ]\n }\n ]\n }\n },\n \"cond\": {\n \"$eq\": [\n {\n \"$arrayElemAt\": [\n \"$$this.name\",\n 0\n ]\n },\n \"Maths\"\n ]\n }\n }\n },\n 0\n ]\n },\n 1\n ]\n }\n },\n in: {\n \"$ifNull\": [\n {\n \"$concatArrays\": [\n {\n \"$slice\": [\n \"$subjects\",\n \"$$firstMatchedIndex\"\n ]\n },\n {\n \"$slice\": [\n \"$subjects\",\n {\n \"$subtract\": [\n {\n \"$add\": [\n \"$$firstMatchedIndex\",\n 1\n ]\n },\n {\n \"$size\": \"$subjects\"\n }\n ]\n }\n ]\n }\n ]\n },\n \"$subjects\"\n ]\n }\n }\n }\n }\n }\n])",
"text": "I was hoping to get rid of only single occurrenceCorrect me if I’m wrong. you want to remove one matching document in each subjects array. if it’s so, try:",
"username": "Imad_Bouteraa"
},
{
"code": "db.collection.aggregate([\n { \n $addFields: {\n subjects: {\n $function: {\n body: \n function(subjects) {\n let flag = false;\n let fn = function(acc, curr) {\n if (curr.name === \"maths\" && flag === false) {\n flag = true;\n return acc.concat([ ])\n }\n return acc.concat([curr])\n }\n return subjects.reduce(fn, [ ])\n },\n args: [ \"$subjects\"],\n lang: \"js\"\n }\n }\n }\n }\n])",
"text": "Hello @Abhishek_Kumar_Singh,The following aggregation will work with MongoDB v4.4 or higher.",
"username": "Prasad_Saya"
}
] | How to 'pull' update in aggregation pipeline - remove a sub-document from an embedded array? | 2021-03-22T23:21:40.373Z | How to ‘pull’ update in aggregation pipeline - remove a sub-document from an embedded array? | 7,097 |
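The same $filter expression shown in the thread above can also be used in an update with an aggregation pipeline (MongoDB 4.2+), so the trimmed array is actually written back rather than only returned by aggregate; like $pull, this form removes every matching element, not just the first. A hedged sketch, where the selector on id is a placeholder:

```js
db.collection.updateOne(
  { id: 1 },  // placeholder selector for the document to modify
  [
    {
      $set: {
        subjects: {
          $filter: {
            input: "$subjects",
            as: "subject",
            // keep everything except the entries named "Maths"
            cond: { $ne: [ "$$subject.name", "Maths" ] }
          }
        }
      }
    }
  ]
)
```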
null | [
"node-js",
"connecting",
"typescript"
] | [
{
"code": "import { MongoClient } from 'mongodb';\n\n\nasync function initDB() {\n const mongoClient = new MongoClient(\n 'mongodb://***:***@192.168.45.20:27020/db?authSource=admin&readPreference=primaryPreferred&ssl=false', {\n useNewUrlParser: true,\n useUnifiedTopology: true\n })\n \n const client = await mongoClient.connect()\n const data = await client.db().collection('providers').find().toArray()\n return data\n}\n\ninitDB().then((result) => {\n console.log(`result`, result)\n}).catch((err) => {\n console.log(`err`, err)\n});\n\nMongoServerSelectionError: getaddrinfo ENOTFOUND bawq-server\n at Timeout._onTimeout (/mnt/d/work/playground/node_modules/mongodb/lib/core/sdam/topology.js:438:30)\n at listOnTimeout (internal/timers.js:554:17)\n at processTimers (internal/timers.js:497:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n setName: 'rs1',\n maxSetVersion: 1,\n maxElectionId: 7fffffff0000000000000001,\n servers: Map(1) { 'bawq-server:27020' => [ServerDescription] },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: 9\n }\n}\nuseUnifiedTopology (node:2464) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring \nengine, pass option { useUnifiedTopology: true } to the MongoClient constructor.\n",
"text": "Dears,When i updated to mongodb from 3.6.2 to 3.6.3 when i connect to the DB it throws error if useUnifiedTopology is set to trueHere is the snippet of the code (ommited username, password and db) using typescriptBut if useUnifiedTopology is set to false, it gives me warning , works fine and it gives me the results i expectedAny help about this?Edit: the problem only exists when using replica set (standalone) when i disable replica set option in config it works fine",
"username": "Ahmed_Naser1"
},
{
"code": "",
"text": "I just upgraded locally from 3.5.x to latest 3.6.3 and was also unable to connect to my local MongoDB instance. I got the same errors. I thought something was wrong with my dev server. At least it’s good to know that I’m not the only one. Back on 3.5 for now and it works.",
"username": "Nick"
},
{
"code": "3.6.4useUnifiedTopology: false3.5",
"text": "I’m having the same issue when I use mongodb native driver 3.6.4. When I set useUnifiedTopology: false it doesn’t cause that selection error. I will downgrade to 3.5. Thanks @Nick for the tip!",
"username": "C_Bess"
},
{
"code": "",
"text": "I’ve resolved by changing the members’ hostname. See more: Mongoose v5.12.2: Connecting to MongoDB (mongoosejs.com).",
"username": "ijerryfeng"
}
] | Npm mongodb 3.6.3 useUnifiedTopology throws MongoServerSelectionError with replicaSet | 2020-12-05T04:43:17.202Z | Npm mongodb 3.6.3 useUnifiedTopology throws MongoServerSelectionError with replicaSet | 12,542 |
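For readers hitting the same getaddrinfo ENOTFOUND error: with the unified topology the driver discovers the members through the host names stored in the replica set configuration, so those names must be resolvable from the client machine. A hedged mongo-shell sketch of pointing a member at a reachable address (run against the primary; the IP and port are taken from the connection string in the question and will differ in other setups):

```js
// inspect the currently configured member host names
cfg = rs.conf()
printjson(cfg.members.map(m => m.host))

// replace an unresolvable name with an address the client can reach
cfg.members[0].host = "192.168.45.20:27020"
rs.reconfig(cfg)
```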
null | [
"atlas-functions",
"serverless"
] | [
{
"code": "exports = async (bsonFileData, fileName, venueId) => {\n // Define dependencies\n const axios = require('axios');\n const crypto = require('crypto');\n \n \n // Define Variables from context\n const cloud_name = context.values.get(\"cloudinaryAccount\"); // <-- Cloudinary Account Name\n const api_key = context.values.get(\"cloudinaryApiKey\"); // <-- Cloudinary Public API Key \n const api_secret = context.values.get(\"cloudinarySecretApiKey\"); // <-- Links to Secret: Cloudinary Secret API Keyimport Foundation\nimport UIKit\nimport RealmSwift\n\n/**\n * @param image - A UIImage probably taken from a UIImagePickerController\n * @param fileName - The File Name you might want to have it stored in repo\n * @param appUser - The currently logged in Realm User\n */\nfunc functionImageUpload(image: UIImage, fileName: String = \"test_file_upload\", appUser: User) {",
"text": "I went through some of the videos and articles done by Lauren Schaefer and Drew DiPalma about what could be done with Realm Serverless, and decided to see if I could run before I start walking. So I decided to write a Realm Serverless Function that could handle image uploads from our mobile client and post them into our Cloudinary account as an example.Basically in the client we create a Data URI from the image and we found out that there are limits to the BSON Parameters that we could send. It seems that limit is around 10KB. So once that was determined, we chunked the fileData argument into a BSONArray of BSON strings.The code works fine with almost any image size if I run it in Node.js (Nice trick using the module.exports = exports from Lauren’s video), and it does work in the Realm Serverless if the image is small (I’ve tried 13KB, not sure that’s the limit), and it seems to not work on images that are around 300KB. It seems that it’s timing out on my axios request. (I imported crypto, and axios as node_modules into my Realm Application). As the console.log shows that it makes it to that request. (… ‘About to upload to: ’)Disclaimer: I’m actually not looking for the response of that you shouldn’t use Realm Serverless Functions for this, (we’re willing to concede this already.), however it would be better to know what the limits are so we can determine what kind of use cases make sense for Realm Serverless Functions.What could be the reason for the timeout as the normal 13KB request takes about < 5 seconds?Realm Serverless Source\niOS Source\n",
"username": "bainfu"
},
{
"code": "execution time limit exceeded.",
"text": "Hey Bain,Does the console show anything after the timeout (e.g. execution time limit exceeded.) when you use the larger image? We do have a 90s execution limit on functions - see here, and it sounds like the large image may be causing that axios request to be longer.It could also be something to dig in with axios - we have a context.http module and I’m curious if you’re seeing the same behavior with that as well.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "@Sumedha_Mehta1 It does. I’m a little surprised that it takes as long as it does. If I run it on my local machine (I know it’s a cliche), with just node itself, the total time is ~ 2-3 seconds for the big file. It also seems to get to the axios call fairly quickly. I don’t mind switching over to the context http, to show it happening, however I just imported axios because the hashing function of the utils.crypto wasn’t working for me, so I imported axios & crypto together. I’ll see if I can turn it around quickly and report back.",
"username": "bainfu"
},
{
"code": "",
"text": "Actually let me clarify that (might help anyone that’s an SDK developer), I only receive ‘execution time limit exceeded’ if I run it from the functions console. If I run it from my iOS app it seems to time out after 60 seconds or so (which might be the RealmSwift SDK not using the same timeout as the function.) If I check the logs, it says ‘operation canceled’. Which makes me think the client cancelled the operation.",
"username": "bainfu"
},
{
"code": " return new Promise((resolve, reject) => {\n const req = https.request(options, response => {\n let responseData = '';\n console.log('cloudinary status code:', response.statusCode);\n console.log('cloudinary headers: ', response.headers);\n \n response.on('data', (chunk) => {\n responseData += chunk;\n });\n \n response.on('end', () => {\n console.log(`Completed Upload Response.`);\n const jsonData = JSON.parse(responseData);\n \n const { asset_id, created_at, format, overwritten, secure_url, url, version } = jsonData;\n\n console.log(`AssetID: ${asset_id}`);\n console.log(`Created At: ${created_at}`);\n console.log(`format: ${format}`);\n console.log(`secure_url: ${secure_url}`);\n console.log(`url: ${url}`);\n console.log(`version: ${version}`);\n console.log(`overwritten: ${overwritten}`);\n \n resolve(jsonData);\n });\n });\n \n req.on('error', error => {\n console.log(`Received an error when uploading:`, error);\n reject(error);\n });\n \n req.write(formData);\n req.end(); \n });",
"text": "@Sumedha_Mehta1 that worked, and is a lot more performant.(For reference or for future travelers) I switched the axios request to the following: (importing https at the top of the function):",
"username": "bainfu"
},
{
"code": "",
"text": "I’m so glad you found my videos useful! ",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Serverless Parameter Upload / External HTTP POST Limits? | 2021-03-23T21:37:27.785Z | Realm Serverless Parameter Upload / External HTTP POST Limits? | 4,156 |
null | [] | [
{
"code": "",
"text": "I’m following the Task Tracker tutorial. The app is spooling up on my ios emulator displaying the login/registration page. When I try to register a user (I’ve even tried just straight up login), either way I receive this error message: Failed to sign up: authentication via ‘local-userpass’ is unsupportedWithin the Atlas UI I’ve made certain that my access rights have been set to anonymous thinking I could get started without signing in (just to see if I can access further into the app).I’ve reviewed the finished code on GitHub, as well as reviewing the tutorial code and everything looks correct.Think is my first Realm experience any help is much appreciated. Anyone have any ideas? Has anyone else run across this? I saw an earlier post similar to this, but apparently the issue resolved itself, so I couldn’t gain any ideas from it.",
"username": "Natalie_Burge"
},
{
"code": "",
"text": "Hi @Natalie_Burge - welcome to the forum!Have you checked if username/password is enabled in your Realm app?\nimage963×380 29.9 KB",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hi @Andrew_Morgan. thanks for replying! After posting I kept working the problem and realized I’d hadn’t properly enabled the email in the Realm app. That was the problem. I’m now stuck at a new problem! I’m not able to add a task. I’ve combed through the code it looks correct. I’ve spent time in the Realm dashboard checking and rechecking my schemas and database connections. Any ideas? Have you run into something similar?",
"username": "Natalie_Burge"
},
{
"code": "",
"text": "Hi @Natalie_Burge Are you getting any error messages in the frontend app and/or the Realm logs?",
"username": "Andrew_Morgan"
},
{
"code": " projectRealm.create(\n \"Task\",\n",
"text": "Yes. In the ios emulator. Exception null is not an object ( evaluating ‘projectRealm.write’)\nFurther under Source it shows code from my TasksProvider.jsconst createTask = (newTaskName) => {\nconst projectRealm = realmRef.current;\nprojectRealm.write( ( ) => {The tiny red arrows are pointing to the 3rd line of code: projectRealm.write( ( ) => {\nIt seems straight forward, but I can’t find any discrepancies between my code and the code example from the tutorial.The front end terminal is displaying the same error.\nERROR TypeError: null is not an object (evaluating ‘projectRealm.write’)In the Realm interface I’ve checked and rechecked my schemas. Per the tutorial instructions I used Realm to create my schemas/JSON/BSON objects. But when I try to sync I receive the following error:\nSchema Errors/WarningsCollections with the following schemas will fail to sync.The following collections have incompatible schemas:If you have any ideas, I’d be grateful for the help. I’m new to coding in general and this my first MondoDB project/ first mobile app project.",
"username": "Natalie_Burge"
},
{
"code": "",
"text": "Are you able to add your version of the app (removing the Realm App Id to a repo (or share a zipped version)?)",
"username": "Andrew_Morgan"
}
] | Failed to sign up: authentication via 'local-userpass' is unsupported + MongoDB Realm | 2021-03-16T21:38:53.044Z | Failed to sign up: authentication via ‘local-userpass’ is unsupported + MongoDB Realm | 3,945 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "I have installed mongodb via homebrew added it to my path variable via export (mac) and checked using echo(mac) but when I run mongo “mongodb+srv://discordbot.5vftj.mongodb.net/myFirstDatabase” --username areze I get the error bash: mongo: command not found can anyone help because I really need this database.",
"username": "Areze_F"
},
{
"code": "",
"text": "Welcome to the community!\nPlease show the output of echo $PATH command\nCan you invoke mogo command from bin directory or by giving full path of mongo.exe",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "fOR THE PATH I GET:\n/Library/Frameworks/Python.framework/Versions/3.9/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/git/bin:/Library/Frameworks/Python.framework/Versions/3.9/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin\nI don’t know how to do the second thing",
"username": "Areze_F"
},
{
"code": "",
"text": "I don’t see mongo in your path\nWhat command you used to update path\nby export method it will not be persistent\nYou have to update in /etc/pathsWhere is your mongodb installed\ncd to that dir.Under bin dir you will see mongo executable\nJust run mongo --version to test if command working or not",
"username": "Ramachandra_Tummala"
}
] | Help installing shell | 2021-03-05T14:23:09.553Z | Help installing shell | 1,918 |
null | [
"swift"
] | [
{
"code": " init(orderID: String, item: String) {\n super.init()\n orderId = orderID\n rlmItem = RealmItem.getRealmItem(item)\n }\n static func getRealmItem(_ name: String) -> RealmItem {\n let realm = try! Realm()\n let item = realm.object(ofType: RealmItem.self, forPrimaryKey: name)\n return item == nil ? RealmItem(name: name) : item\n }\n",
"text": "Hi there,I’m writing a project using RealmSwift,Now I kind of know some basic rules of using Realm, and here’s the quesiton:how can I check if an Object is mamanged already? no seeing some similar APIs;secondly,imagine I have Object called “RealmOrder”, which contains an Object called “RealmItem”now I have an initializer, which looks like this:RealmItem.getRealmItem(item) is defined as below:so basically if RealmItem already has data, it will return the managed object, if not it will create a new one.Now problem is if I want to write this RealmOrder object into realm, I will get trouble like‘Object is already managed by another Realm. Use create instead to copy it into this Realm.’So I want to ask, how to solve this and what’s the best practise?",
"username": "liu_davion"
},
{
"code": "convenience init(orderID: String, item: String) { //note convenience\n self.init() //note self.\n orderId = orderID\n rlmItem = RealmItem.getRealmItem(item)\n}",
"text": "Just a note that iInitializing Realm objects should be this",
"username": "Jay"
},
{
"code": "getRealmItem()",
"text": "may I ask why?I kind of declared init() to be private, so for outsiders, only use getRealmItem() to get the item, now allowing them to create a new one",
"username": "liu_davion"
},
{
"code": "convenience init(myValue: String) {\n self.init() //Please note this says 'self' and not 'super'\n self.myValue = myValue\n}\n",
"text": "From the Realm Documentation:Custom initializers for Object subclasses: When creating your model Object subclasses, you may sometimes want to add your own custom initialization methods for added convenience.Due to some present limitations with Swift introspection, these methods cannot be designated initializers for the class. Instead, they need to be marked as convenience initializers using the Swift keyword of the same name:MyModel: Object {\n@objc dynamic var myValue = “”}",
"username": "Jay"
},
{
"code": "item.realmnil",
"text": "To return to the first question, you should be able to check item.realm – if it is nil then the item should be an unmanaged Object.",
"username": "Andrew_Morgan"
},
{
"code": "init()",
"text": "Hi Jay,but when I use super, it seems everything is fine on my side. May I ask what would be the side effect?Because I didn’t get any compiler errros nor warnings, and the run time seems functional, so I don’t understand why can’t I use super here.I assume init() merely takes no arguments, so it would kind of behave like super.init().anyway, I followed your suggestion, just want to ask the reason behind the scene. Thank you.",
"username": "liu_davion"
},
{
"code": "isUnmanaged",
"text": "Hi Andrew,Thank you. I found tha Java seems have the isUnmanaged API.Second question,‘Object is already managed by another Realm. Use create instead to copy it into this Realm.’I don’t find too much useful information on this, like how do I copy the Object type and send it to Realm?I found two APIs;realm.create(T.Type)\nrealm.create(T.Type, value: Any, update: Realm.UpdatePolicy)but don’t know how to ‘create’ or ‘copy’ a managed object into another Realm.",
"username": "liu_davion"
},
{
"code": "extension User {\n convenience init(_ user: User) {\n self.init()\n partition = user.partition\n userName = user.userName\n userPreferences = UserPreferences(user.userPreferences)\n lastSeenAt = user.lastSeenAt\n conversations.append(objectsIn: user.conversations.map { Conversation($0) })\n presence = user.presence\n }\n}\n\nextension UserPreferences {\n convenience init(_ userPreferences: UserPreferences?) {\n self.init()\n if let userPreferences = userPreferences {\n displayName = userPreferences.displayName\n avatarImage = Photo(userPreferences.avatarImage)\n }\n }\n}",
"text": "I’ve only done it it in Swift, but I added a convenience init to create a new Object instance by copying over the attributes. If the object contains embedded objects then you need similar convenience inits for those too:",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Will this work with nested Object types as well?I have seen something similar on stack overflow, but a few people mentions it doesn’t work with nested objects, though I don’t try yet.",
"username": "liu_davion"
},
{
"code": "",
"text": "If the object contains embedded objects then you need similar convenience inits for those too:also, what you mean by \" If the object contains embedded objects then you need similar convenience inits for those too:\"thank you.",
"username": "liu_davion"
},
{
"code": "UseruserPreferencesUserPreferencesUserPreferencesEmbeddedObjectObject",
"text": "A Realm Object class can contain primitive such as strings and ints, but it can also contain other Realm objectc. In my example, the User class includes the userPreferences attribute which is UserPreferences object. Because UserPreferences is embedded within an Object rather than being a top-level Realm Object, it inherits from EmbeddedObject rather than Object",
"username": "Andrew_Morgan"
},
{
"code": "class UserClass: Objectinit",
"text": "re: self.initMay I ask what would be the side effect?Remember that Realm is backed by ObjC code and not Swift. For example Realm (objects) do not directly support computed properties (to be managed/handled by Realm), which is very common in Swift objects.So when you create a Realm Object class UserClass: Object you are creating a UserClass of type Object, which is itself an NSObject (I am being loose here)A designated initializer must ensure that the properties introduced by its class are initialized before it delegates up to a superclass initializer. Properties cannot be left in an intermediate state which super.init could cause (depending on how the vars are defined within the object)The memory for an object is fully initialized once the initial state of all of its stored properties is known. To satisfy that requirement, a designated initializer must make sure that all of its own properties are initialized (self.init) before it hands off up the chain.Without the initial value, the compiler won’t synthesize initializers for the class - which would be a version of init that calls through to the base class.Side effects? objects that don’t behave as expected, ‘lose’ data and and generally mis-behave - and those issues are very hard to track down. Been there, done that.",
"username": "Jay"
},
{
"code": "",
"text": "Hi Jay, thanks for reply so fruitful!I was under the assumption that a Object instance init() is merely calling 'super.init()` if my other variables are either optional or already had a default value. Looks like I’m wrong.",
"username": "liu_davion"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Check object is managed, and get unmanaged from managed object and write it | 2021-03-20T10:12:45.130Z | Check object is managed, and get unmanaged from managed object and write it | 7,237 |
null | [] | [
{
"code": "{\n \"_id\" : ObjectId(\"605a3a82c8bbb404f4e6b643\"),\n \"address\" : [\n {\n \"_id\" : ObjectId(\"605a3a82c8bbb404f4e6b644\"),\n \"street\" : \"123 Leon Street\",\n \"state\" : \"WE\",\n \"zip\" : 11312\n },\n {\n \"_id\" : ObjectId(\"605a3b3bc8bbb404f4e6b645\"),\n \"street\" : \"123 King Street\",\n \"state\" : \"WE\",\n \"zip\" : 99998\n }\n ],\n \"firstName\" : \"Tetet\",\n \"__v\" : 1\n}\n \"_id\" : ObjectId(\"605a3b3bc8bbb404f4e6b645\")\ndb.contacts.find({\"_id\" : ObjectId(\"605a3a82c8bbb404f4e6b643\")}, {\"address._id\": ObjectId(\"605a3b3bc8bbb404f4e6b645\")}).pretty()\n{\n \"_id\" : ObjectId(\"605a3a82c8bbb404f4e6b643\"),\n \"address\" : [\n {\n \"_id\" : ObjectId(\"605a3b3bc8bbb404f4e6b645\")\n },\n {\n \"_id\" : ObjectId(\"605a3b3bc8bbb404f4e6b645\")\n }\n ]\n}\n",
"text": "lets say i have the id for the second nested object.how to get only the document that is related to that id and not the whole document?so far I have tried:and gotten:I would ideally want to have the nested documents, not just the id. Can anyone help me?\nThank you kindly",
"username": "M_San_B"
},
{
"code": "find({\n \"_id\": ObjectId(\"605a3a82c8bbb404f4e6b643\")\n},\n{\n address: {\n \"$elemMatch\": {\n \"_id\": ObjectId(\"605a3b3bc8bbb404f4e6b645\")\n }\n }\n})[\n {\n \"_id\": ObjectId(\"605a3a82c8bbb404f4e6b643\"),\n \"address\": [\n {\n \"_id\": ObjectId(\"605a3b3bc8bbb404f4e6b645\"),\n \"state\": \"WE\",\n \"street\": \"123 King Street\",\n \"zip\": 99998\n }\n ]\n }\n]",
"text": "Hi,\nwith this query+projectionyou getis it what you want?Sincerely,",
"username": "Imad_Bouteraa"
},
{
"code": "{\n \"_id\": ObjectId(\"605a3b3bc8bbb404f4e6b645\"),\n \"state\": \"WE\",\n \"street\": \"123 King Street\",\n \"zip\": 99998\n}\n",
"text": "is there a way to just get",
"username": "M_San_B"
},
{
"code": "db.collection.aggregate([\n {\n \"$match\": {\n \"_id\": ObjectId(\"605a3a82c8bbb404f4e6b643\")\n }\n },\n {\n \"$unwind\": \"$address\"\n },\n {\n \"$match\": {\n \"address._id\": ObjectId(\"605a3b3bc8bbb404f4e6b645\")\n }\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": \"$address\"\n }\n }\n])db.collection.aggregate([\n {\n \"$match\": {\n \"_id\": ObjectId(\"605a3a82c8bbb404f4e6b643\")\n }\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": {\n \"$first\": {\n \"$filter\": {\n \"input\": \"$address\",\n \"as\": \"item\",\n \"cond\": {\n $eq: [\n \"$$item._id\",\n ObjectId(\"605a3b3bc8bbb404f4e6b645\")\n ]\n }\n }\n }\n }\n }\n }\n])",
"text": "is there a way to just getthere are many and morethis also works",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to only get the array nested subdocuments with that document id and not having to iterate through it | 2021-03-23T19:34:09.652Z | How to only get the array nested subdocuments with that document id and not having to iterate through it | 21,239 |
[
"dot-net",
"production"
] | [
{
"code": "",
"text": "This is a patch release that addresses some issues reported since 2.12.0 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.12.1%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.",
"username": "James_Kovacs"
},
{
"code": "",
"text": "",
"username": "system"
}
] | .NET Driver 2.12.1 Released | 2021-03-23T19:40:17.791Z | .NET Driver 2.12.1 Released | 1,628 |
|
null | [] | [
{
"code": "class DogClass: Object {\n @objc dynamic var _id = ObjectId.generate()\n @objc dynamic var name = \"\"\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\nlet existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: \"60574941858a6fbe9a7fff6e\")let ss: StaticString = \"60574941858a6fbe9a7fff6e\"\nlet theKey = ObjectId(ss)\nlet existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: theKey)\n",
"text": "I am suddenly getting an error when trying to get a specific object from Realm via it’s primary ObjectId key. This may be something I am overlooking or a change I was not aware of so wanted to put it out there.Here’s the errorInvalid value ‘60574941858a6fbe9a7fff6e’ of type ‘Swift.__SharedStringStorage’ for ‘object id’ property ‘DogClass._id’.And my DogClassand the code that causes the crashlet existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: \"60574941858a6fbe9a7fff6e\")I could have sworn this was working previously; the only change I made was updating to RealmSwift 10.7.2This is a local only, non-sync’d Realm, macOS 11.2.3, Realm 10.7.2, XCode 12.4EditI just noticed a similar question over in the MongoDB Realm forumBut is that solution the solution? Seems strange we’re forced to use StaticString as in",
"username": "Jay"
},
{
"code": "et existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: ObjectId(string : \"60574941858a6fbe9a7fff6e\"))",
"text": "let existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: ObjectId(string : \"60574941858a6fbe9a7fff6e\"))",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "@Muhammad_AwaisThank you but isn’t that the same code posted in my question? theKey is an ObjectId with a static string.",
"username": "Jay"
},
{
"code": "",
"text": "no check primary lkey passed as objectId",
"username": "Muhammad_Awais"
},
{
"code": "ObjectId(string:let ss = try! ObjectId(string: \"60574941858a6fbe9a7fff6e\") let regularString = \"6058ed1016f023a5a8dca06a\"\n let objIdString = ObjectId(regularString)\n",
"text": "@Muhammad_Awaislet existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: ObjectId(string : “60574941858a6fbe9a7fff6e”))`That code won’t work because ObjectId(string: can throw as shown in the documentation for ObjectIdDeclaration\nSWIFT\npublic override required init(string: String) throwsSo it would need to include a trylet ss = try! ObjectId(string: \"60574941858a6fbe9a7fff6e\")or as I mentioned, a StaticString. So this for example generates a compiler errorCannot convert value of type ‘String’ to expected argument type ‘StaticString’Hopefully one of the Realm’ers can clarify how to implement this or what I am missing. Let me re-state the questionHow to pass an ObjectId into the primaryKey property of the function .object(ofType: forPrimaryKey:)A static string is useless because it would have to be hard coded (hence: static).",
"username": "Jay"
},
{
"code": "ObjectId(string: \"60574941858a6fbe9a7fff6e\")0..edo {\n try let id = ObjectId(string: yourString)\n let existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: id)\n} catch {\n // Do something to react to the error\n}",
"text": "ObjectId(string: \"60574941858a6fbe9a7fff6e\") can throw because the string you provide may not be of the required length or may contain character outside of the 0..e range.This code should work:",
"username": "Andrew_Morgan"
},
{
"code": "forPrimaryKey: some stringlet existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: \"xxxyyzzz\")",
"text": "@Andrew_MorganThanks for the answer.Was that a recent change? We have existing code in a project that has not been updated in a while where that was not required e.g. forPrimaryKey: some string.Do you know with what SDK version that change was made?I have it working in a current project but that should really be in the documentation so it’s clear thislet existingDog = realm.object(ofType: DogClass.self, forPrimaryKey: \"xxxyyzzz\")is only applicable to non-sync’d realms that do NOT use ObjectID to generate primary keys.Primary Keys",
"username": "Jay"
},
{
"code": "",
"text": "I’m afraid that I don’t know – I’ve always explicitly converted the string to an ObjectId before trying to match with an ObjectId field.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | ObjectID error with .object(ofType: forPrimaryKey:) function? | 2021-03-21T13:57:41.242Z | ObjectID error with .object(ofType: forPrimaryKey:) function? | 6,397 |
null | [
"crud",
"mongoose-odm"
] | [
{
"code": "const companyUserModelSchema = new mongoose.Schema({\n companyName: {\n type: String,\n required: [true, 'You must provide a Company Name'],\n unique: true,\n trim: true,\n },\n companyPicture: {\n type: String,\n default: 'default.jpg',\n },\n adminName: {\n type: String,\n required: [true, 'You must provide an Admin for the Company'],\n unique: true,\n trim: true,\n },\n role: {\n type: String,\n enum: ['admin', 'watcher', 'dev'],\n default: 'admin',\n },\n\n email: {\n type: String,\n required: [true, 'Please provide an Email'],\n unique: true,\n lowercase: true,\n validate: [validator.isEmail, 'Please provide a valide email'],\n },\n password: {\n type: String,\n required: [true, 'Please provide a password'],\n minlength: 8,\n select: false,\n },\n passwordConfirm: {\n type: String,\n required: [true, 'Please confirm your password'],\n validate: {\n //Will only work on SAVE and CREATE\n validator: function (el) {\n return el === this.password;\n },\n message: 'Password are not the same.',\n },\n },\n drivers: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'Driver',\n },\n ],\n passwordChangedAt: Date,\n passwordResetToken: String,\n passwordResetExpires: Date,\n active: {\n type: Boolean,\n default: true,\n select: false,\n },\n});\nexports.updateMe = catchAsync(async (req, res, next) => {\n // 1) Create error if user POSTs password data\n if (req.body.password || req.body.passwordConfirm) {\n return next(\n new AppError(\n 'This route is not for password updates. Please use /updateMyPassword.',\n 400\n )\n );\n }\n\n // 2) Filtered out unwanted fields names that are not allowed to be updated\n const filteredBody = filterObj(\n req.body,\n 'companyPicture',\n 'adminName',\n 'email',\n 'drivers'\n );\n console.log(filteredBody);\n\n // 3) Update user document\n const updatedUser = await CompanyUser.findByIdAndUpdate(\n req.user.id,\n filteredBody,\n {\n new: true,\n runValidators: true,\n }\n );\n\n res.status(200).json({\n status: 'success',\n data: {\n user: updatedUser,\n },\n });\n});\nexports.updateDrivers = catchAsync(async (req, res, next) => {\n // 1) Create error if user POSTs password data\n if (req.body.password || req.body.passwordConfirm) {\n return next(\n new AppError(\n 'This route is not for password updates. Please use /updateMyPassword.',\n 400\n )\n );\n }\n\n // 2) Update user document\n const addedDriver = await CompanyUser.findByIdAndUpdate(\n req.user.id,\n { $addToSet: { drivers: req.body } },\n {\n runValidators: true,\n }\n );\n\n res.status(200).json({\n status: 'success',\n data: {\n user: addedDriver,\n },\n });\n});\n",
"text": "Good morning all,\nFirst time posting on this forum, I hope that I am not asking an existing question… (personally couldn’t find any similar case).\nSo I’m using the MERN stack to build a dashboard, a fleet management dashboard. I currently have have two Models. Company and Driver model and I am referencing the Driver Model to the Company. (I did not choose to embed because the application might scale to thousands of drivers, and I saw that in such case better to normalize).Every time I create a new driver for a specific company, I retrieve the driver’s ID and update the Company document with that ID in order to reference it. It works fine, however, I am using react states to create an array of drivers ID and then pass it to my api call in order to reference all the drivers i’ve added. But states get reset at each page reload or each time I connect from an other device, which is great because I make sure that sensitive data are never stored on the browser for ever.However I’ve noticed that findByIdAndUpdate will take the data of the current state session and overwrite what’s already in the DB. I would like to instead, push each new session state to the existing array of ObjectID in my mongodb document.On the net I found $push and $addToSet could help me achieve this but I am obviously doing something wrong that does not make it work.This is the company schema :And that’s the controller to update the company document (the one that works but does not behave like I want it to behave, well it does but not for the “drivers” field) :And this is what I tried to do after my online research, and just simply does not work :This is not a personal project, but actually my first big order from a real client, and I have to do a demo on Friday, I’m done with everything else, just left with that small issue, I really hope that someone will be able to help me before that, I really have the feeling that I’m being stupid and that I’m just missing something silly to add to my code…I am already truly grateful and thankful.\nJonathan!",
"username": "0rion_MSK"
},
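For reference, the append-without-overwrite behaviour described above is what $addToSet (with $each for several values at once) provides; the same update document works through Mongoose's findByIdAndUpdate or any driver. A minimal pymongo sketch, with the field name taken from the post and the ids and collection name as placeholders:

from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")        # placeholder URI
company_users = client["example_db"]["companyusers"]     # placeholder collection name

company_id = ObjectId("507f1f77bcf86cd799439011")        # placeholder company _id
new_driver_ids = [ObjectId(), ObjectId()]                # placeholder driver _ids

# $addToSet only adds values that are not already present, so repeated requests
# (or a reset client-side state) never overwrite or duplicate existing drivers.
company_users.update_one(
    {"_id": company_id},
    {"$addToSet": {"drivers": {"$each": new_driver_ids}}},
)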
{
"code": "",
"text": "Problem Solved! It was the way I was sending my data from the frontend…",
"username": "0rion_MSK"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | findByIdAndUpdate will delete previous existing data. [MERN] | 2021-03-22T15:39:37.570Z | findByIdAndUpdate will delete previous existing data. [MERN] | 8,894 |
null | [
"licensing"
] | [
{
"code": "",
"text": "Is MongoDB community is free to use ? even for industry or industries have to pay for that , I can’t get the clear picture of it by just reading the docs if anyone can help it would be great.",
"username": "WOLFIE_N_A"
},
{
"code": "",
"text": "Asked and Answered.https://www.mongodb.com/community/forums/search?q=sspl",
"username": "chris"
}
] | Is MongoDB Community free to use? | 2021-03-23T09:02:45.891Z | Is MongoDB Community free to use? | 4,828
null | [
"whitepaper",
"preview"
] | [
{
"code": "",
"text": "Hello EveryoneWe will shortly be publishing a new whitepaper that tracks MongoDB’s evolution from a niche NoSQL database a decade ago into one used by millions of developers today for building new applications.There are many of you in the community that have used MongoDB for a while, and so we wanted to share the paper with you ahead of its broader publication, and also ask for any comments you may have.You can grab the preview from this link (note that it is a pdf): https://webassets.mongodb.com/mongodb_evolved.pdfIf you have comments or feedback, please reply to this message in the section below.",
"username": "Mat_Keep"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Evolved: Preview for the Community | 2021-03-23T10:32:26.075Z | MongoDB Evolved: Preview for the Community | 1,759 |
null | [] | [
{
"code": "",
"text": "I am trying to insert 4 text files from a directory to mongodb. I want that all the contents should be in a single document with key as their file name and value as their contents. I am able to insert them into multiple documents using insert_many. Pl let me know how can I do it to a single document.",
"username": "hari_shekon"
},
{
"code": "\necho “{”\necho \" ‘my-first-file:’\"\necho -n \" ‘\"\ncat my-first-file\necho -n \"’,\"\necho \" ‘my-2nd-file:’\"\necho -n \" ‘\"\ncat my-2n-file\necho -n \"’,\"\n…\necho “}”\n",
"text": "You may write a script that creates a .json document and then you may import this document or integrate it to a .js script.Untested bashscript (will fail with files with single or double quotes) :\n\necho “{”\necho \" ‘my-first-file:’\"\necho -n \" ‘\"\ncat my-first-file\necho -n \"’,\"\necho \" ‘my-2nd-file:’\"\necho -n \" ‘\"\ncat my-2n-file\necho -n \"’,\"\n…\necho “}”\n",
"username": "steevej"
},
{
"code": "{\n \"$schema\": \"http://json-schema.org/draft-04/schema#\",\n \"type\": \"object\",\n \"title\": \"file\",\n \"additionalProperties\": false,\n \"properties\": {\n \"file_load_id\": {\n \"type\": \"string\",\n \"pattern\": \"^[a-fA-F0-9]{24}$\"\n },\n \"files\": {\n \"type\": \"array\",\n \"additionalItems\": true,\n \"uniqueItems\": false,\n \"items\": {\n \"id\": \"file\",\n \"type\": \"object\",\n \"properties\": {\n \"seq_no\": {\n \"type\": \"number\"\n },\n \"name\": {\n \"type\": \"string\"\n },\n \"content\": {\n \"type\": \"string\"\n }\n },\n \"additionalProperties\": false,\n \"required\": [\n \"seq_no\"\n ]\n }\n }\n },\n \"required\": [\n \"file_load_id\"\n ]\n}\nuse file_db;\n\n db.createCollection( \"file\",{\n \"storageEngine\": {\n \"wiredTiger\": {}\n },\n \"capped\": false,\n \"validator\": {\n \"$jsonSchema\": {\n \"bsonType\": \"object\",\n \"title\": \"file\",\n \"additionalProperties\": false,\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"file_load_id\": {\n \"bsonType\": \"objectId\"\n },\n \"files\": {\n \"bsonType\": \"array\",\n \"additionalItems\": true,\n \"uniqueItems\": false,\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"seq_no\": {\n \"bsonType\": \"number\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"content\": {\n \"bsonType\": \"string\"\n }\n },\n \"additionalProperties\": false,\n \"required\": [\n \"seq_no\"\n ]\n }\n }\n },\n \"required\": [\n \"file_load_id\"\n ]\n }\n },\n \"validationLevel\": \"off\",\n \"validationAction\": \"warn\"\n});\nvar file = new Schema({\n file_load_id: {\n type: Schema.Types.ObjectId,\n required: true\n },\n files: [\n new Schema({\n seq_no: {\n type: Number,\n required: true\n },\n name: {\n type: String\n },\n content: {\n type: String\n }\n })\n ]\n});\n{\n \"file_load_id\": ObjectId(\"507f1f77bcf86cd799439011\"),\n \"files\": [\n {\n \"seq_no\": 1,\n \"name\": \"file_no_1\",\n \"content\": \"Lorem\"\n },\n\t\t{\n \"seq_no\": 2,\n \"name\": \"file_no_1\",\n \"content\": \"Lorem\"\n },\n\t\t{\n \"seq_no\": 3,\n \"name\": \"file_no_1\",\n \"content\": \"Lorem\"\n }\n ]\n}\n",
"text": "Hi @hari_shekon,Make sure the Documents size is less than 16MBMongoDB Document sizestep 1 would be create an etl workflow process, if you are going to perform this load frequently. If not, if you can have an one-off manual task.you can either $push or $addtoset to accomplise.$push$addtoSetI have attached a Json Doc, JSON Schema, MongDB Create script(demo) and Mongoose Schema for your reference.JSON SchemaCreate the collection(change the fieldname if you need)Mongoose ScriptA sample collection",
"username": "Dominic_Kumar"
},
{
"code": "// 1. declare function, that will read files in a directory\n// and put it into the new document payload\nfunction provideDocumentPayloadFromFilesInDir(pathToDir) {\n const list = listFiles(pathToDir);\n const doc = { files: [] };\n list.forEach(item => {\n // do not descend into subdirectories\n if (item.isDirectory === false) {\n const file = {\n // generate _id for the file in case\n // you may need to manipulate it later\n _id: new ObjectId(),\n size: item.size,\n name: item.basename,\n // read the contentwith cat() shell method\n content: cat(item.name),\n };\n doc.files.push(file);\n }\n });\n return doc;\n}\n\n// 2. create document payload with files content\nconst doc = provideDocumentPayloadFromFilesInDir(<path>);\n\n// 3. insert new document into collection\nconst targetDb = db.getSiblingDB(<dbName>);\ntargetDb.getCollection(<collectionName>).insertOne(doc);\n",
"text": "You can do this with mongo shell, using the script below.\nTo use it, you need to connect to mongo shell, and .load() script with the following contents:Make sure you do not load huge files, otherwise you may hit BSON Document limit of 16MB.",
"username": "slava"
},
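Since the original question used pymongo, here is an equivalent sketch in Python that builds a single document from all files in a directory, with each file name as a key and its contents as the value (directory path, database and collection names are placeholders; the 16MB document limit mentioned above still applies):

import os
from pymongo import MongoClient

def build_document_from_dir(path):
    doc = {}
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            with open(full, "r", encoding="utf-8") as fh:
                # dots in field names are awkward on older servers, so replace them
                doc[name.replace(".", "_")] = fh.read()
    return doc

client = MongoClient("mongodb://localhost:27017")    # placeholder URI
collection = client["example_db"]["files"]           # placeholder collection
collection.insert_one(build_document_from_dir("/path/to/dir"))  # placeholder path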
{
"code": "",
"text": "Hello slava,\nI am just a poor beginner so my question:How can I start this function “provideDocumentPayloadFromFilesInDir” in the mongo Shell\nor\nHow do I have to start your function from the command line?mongo provideDocumentPayloadFromFilesInDir(\"/home/mongo/loadfile.txt\") ??And the “pathToDir” means “/home/mongo/loadfile.txt” or in a another form?thanks for youe helpregards",
"username": "Stefan"
},
{
"code": "",
"text": "Hello\nokay I load the data with your mongo shell functionregards\nStefan",
"username": "Stefan"
}
] | Insert multiple text files into mongo | 2020-05-14T06:12:42.926Z | Insert multiple text files into mongo | 9,448 |
null | [] | [
{
"code": "get_documentsfor doc in collection.find():import os\nimport json\nimport pymongo\nfrom pymongo.errors import ConnectionFailure\n\nCONFIG_PATH = os.path.join(os.getcwd(), \"config.json\")\n\n\nclass MongoDBConnection:\n \"\"\"Main Class for the connecting to MongoDB\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Initialization for the MongoDBConnection Class.\"\"\"\n\n with open(CONFIG_PATH) as file:\n config_json = json.load(file)\n\n db_connection = config_json[\"mongo_db\"][0]\n self.HOST = db_connection[\"host\"]\n self.PORT = db_connection[\"port\"]\n self.ADMIN_USERS = config_json[\"admin_users\"]\n\n self.db_connected = None\n\n def connected(self) -> bool:\n \"\"\"Confirm the connection with the database.\n\n If there is a connection issue, ConnectionFailure is raised.\n\n :return: The connection status\n \"\"\"\n\n try:\n with pymongo.MongoClient(self.HOST, self.PORT) as client:\n self.db_connected = client.ao_connect\n\n return self.db_connected\n\n except ConnectionFailure as err:\n print(err)\n return False\n\n def get_documents(self, collection: pymongo.collection.Collection) -> list:\n \"\"\"Get all of the documents contained in the collection.\n\n :param collection: The collection Object selected\n :return: Collection names\n \"\"\"\n docs_list = []\n\n collection = self.db_connected[f\"{collection.name}\"]\n for doc in collection.find():\n docs_list.append(doc)\n\n return docs_list\ngbmud11368:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 603e5aa529766aed2c463091, topology_type: Single, servers: [<ServerDescription ('gbmud11368', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('gbmud11368:27017: timed out')>]>",
"text": "Hi all,Having a strange issue, that I cannot figure out and need some help, please.I’m developing a Client-Server application where the clients have access to MongoDB. While developing, the database is in the same machine as my client and server. This works absolutely fine.\nOnce I move the application to another client and run it, I can connect to MongoDB but getting all documents with get_documents function, it hangs on the for loop of the cursor for doc in collection.find():.Here’s the script for MongoDB access:Here’s the error:\ngbmud11368:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 603e5aa529766aed2c463091, topology_type: Single, servers: [<ServerDescription ('gbmud11368', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('gbmud11368:27017: timed out')>]>Let me know if you need any other information.Thank you very much in advance,Andre",
"username": "AndreAnjos"
},
{
"code": "",
"text": "The firewall was clocking port 27017 \nThanks all! ",
"username": "AndreAnjos"
},
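A small follow-up sketch that makes this kind of problem (a firewall blocking port 27017) surface quickly instead of hanging for the default 30 seconds: lower the server selection timeout and issue a ping right after connecting. The host name is taken from the error message above; everything else is a placeholder:

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

client = MongoClient(
    "gbmud11368", 27017,            # host/port from the error in the thread
    serverSelectionTimeoutMS=5000,  # fail after 5 s instead of the 30 s default
)
try:
    client.admin.command("ping")    # cheap round trip that proves the server is reachable
    print("connected")
except ServerSelectionTimeoutError as err:
    print(f"cannot reach MongoDB (firewall or port issue?): {err}")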
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Client failing to return documents MongoDB (NetworkTimeout) | 2021-03-02T19:15:12.892Z | Client failing to return documents MongoDB (NetworkTimeout) | 2,187 |
null | [] | [
{
"code": "#include <cstdint>\n#include <iostream>\n#include <vector>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/client.hpp>\n#include <mongocxx/stdx.hpp>\n#include <mongocxx/uri.hpp>\n#include <mongocxx/instance.hpp>\n#include <bsoncxx/builder/stream/helpers.hpp>\n#include <bsoncxx/builder/stream/document.hpp>\n#include <bsoncxx/builder/stream/array.hpp>\n\nusing bsoncxx::builder::stream::close_array;\nusing bsoncxx::builder::stream::close_document;\nusing bsoncxx::builder::stream::document;\nusing bsoncxx::builder::stream::finalize;\nusing bsoncxx::builder::stream::open_array;\nusing bsoncxx::builder::stream::open_document;\n\nint main()\n{\n mongocxx::instance instance{}; // This should be done only once.\n mongocxx::uri uri(\"mongodb://localhost:27017\");\n mongocxx::client client(uri);\n\n mongocxx::database db = client[\"sliit\"];\n return 0;\n}\ng++ --std=c++17 main.cpp -o test -IC:/mongo-cxx-driver/include/mongocxx/v_noabi -IC:/mongo-cxx-driver/include/bsoncxx/v_noabi -LC:/mongo-cxx-driver /lib -lmongocxx -lbsoncxx -IC:\\local\\boost_1_60_0c:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0x2d): undefined reference to `__imp__ZN8mongocxx7v_noabi8instanceC1Ev'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0x36): undefined reference to `__imp__ZN8mongocxx7v_noabi3uri13k_default_uriB5cxx11E'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0x54): undefined reference to `__imp__ZN8mongocxx7v_noabi3uriC1EN7bsoncxx7v_noabi6string13view_or_valueE'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0x94): undefined reference to `__imp__ZN8mongocxx7v_noabi6clientC1ERKNS0_3uriERKNS0_7options6clientE'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0xb0): undefined reference to `__imp__ZN8mongocxx7v_noabi3uriD1Ev'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0xe4): undefined reference to `__imp__ZN8mongocxx7v_noabi6clientD1Ev'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0xf4): undefined reference to `__imp__ZN8mongocxx7v_noabi8instanceD1Ev'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0x11a): undefined reference to `__imp__ZN8mongocxx7v_noabi3uriD1Ev'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0x140): undefined reference to 
`__imp__ZN8mongocxx7v_noabi6clientD1Ev'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text+0x155): undefined reference to `__imp__ZN8mongocxx7v_noabi8instanceD1Ev'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text$_ZN7bsoncxx7v_noabi7builder6stream8documentC1Ev[_ZN7bsoncxx7v_noabi7builder6stream8documentC1Ev]+0x33): undefined reference to `__imp__ZN7bsoncxx7v_noabi7builder4coreC1Eb'\nc:/users/tharindu/downloads/programs/mingw/bin/../lib/gcc/x86_64-w64-mingw32/9.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\\Users\\Tharindu\\AppData\\Local\\Temp\\ccRL6mIk.o:main.cpp:(\n.text$_ZN7bsoncxx7v_noabi7builder6stream8documentD1Ev[_ZN7bsoncxx7v_noabi7builder6stream8documentD1Ev]+0x1a): undefined reference to `__imp__ZN7bsoncxx7v_noabi7builder4coreD1Ev'\n",
"text": "When I’m trying to compile the following code I’m getting so many issues. But mongoc and mongocxx drivers are installed on my pc.code :compiling command :\ng++ --std=c++17 main.cpp -o test -IC:/mongo-cxx-driver/include/mongocxx/v_noabi -IC:/mongo-cxx-driver/include/bsoncxx/v_noabi -LC:/mongo-cxx-driver /lib -lmongocxx -lbsoncxx -IC:\\local\\boost_1_60_0Error :Can someone please help me regarding this issue?",
"username": "Tharindu_Balasooriya"
},
{
"code": "",
"text": "It looks like you’re following the tutorial here: Tutorial for mongocxx-LC:/mongo-cxx-driver /libI think you have an extraneous space before /lib. Also, I don’t see any use of boost in your code, so you might want to remove that include directive.",
"username": "Bernie_Hackett"
},
{
"code": "main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall bsoncxx::v_noabi::string::view_or_value::view_or_value(char const *)\" (__imp_??0view_or_value@string@v_noabi@bsoncxx@@QAE@PBD@Z) referenced in function _main\nmain.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::options::client::client(void)\" (__imp_??0client@options@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\nmain.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::options::client::~client(void)\" (__imp_??1client@options@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\nmain.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::uri::uri(class bsoncxx::v_noabi::string::view_or_value)\" (__imp_??0uri@v_noabi@mongocxx@@QAE@Vview_or_value@string@1bsoncxx@@@Z) referenced in function _main\nmain.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::uri::~uri(void)\" (__imp_??1uri@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\nmain.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::client::client(class mongocxx::v_noabi::uri const &,class mongocxx::v_noabi::options::client const &)\" (__imp_??0client@v_noabi@mongocxx@@QAE@ABVuri@12@ABV0options@12@@Z) referenced in function _main\nmain.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::client::~client(void)\" (__imp_??1client@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\nmain.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::instance::instance(void)\" (__imp_??0instance@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\nmain.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::instance::~instance(void)\" (__imp_??1instance@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\nuntitled5.exe : fatal error LNK1120: 9 unresolved externals\nNMAKE : fatal error U1077: '\"C:\\Program Files\\JetBrains\\CLion 2020.3.3\\bin\\cmake\\win\\bin\\cmake.exe\"' : return code '0xffffffff'\nStop.\nNMAKE : fatal error U1077: '\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.28.29910\\bin\\HostX86\\x86\\nmake.exe\"' : return code '0x2'\nStop.\nNMAKE : fatal error U1077: '\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.28.29910\\bin\\HostX86\\x86\\nmake.exe\"' : return code '0x2'\nStop.\nNMAKE : fatal error U1077: '\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.28.29910\\bin\\HostX86\\x86\\nmake.exe\"' : return code '0x2'\nStop.\n",
"text": "@Bernie_Hackett I think that happens that I’ve used different type of compilers for mongocxx install and project build . I used the same compiler for both and then I get rid of above issue . But I am still getting a error , can you please help me on fixing this issue",
"username": "Tharindu_Balasooriya"
},
{
"code": "",
"text": " help ???",
"username": "Tharindu_Balasooriya"
},
{
"code": "",
"text": "Hi @Tharindu_Balasooriya,It looks like you resolved your issue from your post on stack overflow here. Please let me know if you’re still running into any problems.",
"username": "Clyde_Bazile_III"
},
{
"code": "",
"text": "@Clyde_Bazile_III\nYes I got it resolved, Thankyou very much",
"username": "Tharindu_Balasooriya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo cxx driver issue - undefined reference to `__imp__ZN7bsoncxx7v_ | 2021-03-19T11:52:25.880Z | Mongo cxx driver issue - undefined reference to `__imp__ZN7bsoncxx7v_ | 6,453 |
null | [] | [
{
"code": "",
"text": "Hi @all,\nin the documentation I read, that REST encryption is only available at the commercial one.\nAs far as I understand this, that this will only affect encryption in the DB itself. But the connection between the client and the server can also be secured via tls with the community version right?Thanks for any clarification, because the documentation are not so 100% clear.",
"username": "MDC_MDC"
},
{
"code": "mongodmongos",
"text": "Welcome to the MongoDB Community @MDC_MDC!in the documentation I read, that REST encryption is only available at the commercial one. As far as I understand this, that this will only affect encryption in the DB itself.The Encrypted Storage Engine which provides native encryption at rest is a feature of MongoDB Enterprise edition. Encryption in this context is referring to the data files that are written to disk: without the encryption key, someone with direct access to encrypted data files (for example, via a backup copy) will not be able to read any of the original data. Encrypting communication over the network is a separate security measure (see the MongoDB Security Checklist for an overview).However, there are disk/volume alternatives you could use with MongoDB Community Edition. If you happen to be using storage services via a major cloud provider (AWS, GCP, Azure), they also have options for encryption of volumes at rest (for example: Amazon EBS Encryption). Encryption at the disk or volume level prevents access to data if someone has physical access but does not have the encryption key. If someone has access to a copy of the data files from an encrypted volume, the contents of those files are not encrypted.the connection between the client and the server can also be secured via tls with the community version right?Yes, TLS/SSL encryption is a common feature for all modern MongoDB server editions.Overview from the page to Configure mongod and mongos for TLS/SSL:MongoDB can use any valid TLS/SSL certificate issued by a certificate authority, or a self-signed certificate. If you use a self-signed certificate, although the communications channel will be encrypted to prevent eavesdropping on the connection, there will be no validation of server identity. This leaves you vulnerable to a man-in-the-middle attack. Using a certificate signed by a trusted certificate authority will permit MongoDB drivers to verify the server’s identity.Regards,\nStennie",
"username": "Stennie_X"
},
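To complement the answer above: once the server requires TLS, clients only need to enable it in their connection options. A hedged pymongo sketch (host and certificate path are placeholders; the tls/tlsCAFile option names assume a reasonably recent driver version):

from pymongo import MongoClient

client = MongoClient(
    "mongodb://db.example.com:27017",   # placeholder host
    tls=True,                           # encrypt traffic between client and server
    tlsCAFile="/path/to/ca.pem",        # CA that signed the server certificate (placeholder path)
)
print(client.admin.command("ping"))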
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Will the community version of MongoDB support ssl connections? | 2021-03-23T06:28:28.798Z | Will the community version of MongoDB support ssl connections? | 3,029 |
null | [] | [
{
"code": "",
"text": "i need explains very much. i love nodejs, and v8. nodejs js engine is v8.",
"username": "anlex_N"
},
{
"code": "mongoevaldb.eval()mongomongoshmongoshmongoshmongosh",
"text": "Welcome to the MongoDB Community @anlex_N!There is a lengthy article in the MongoDB Engineer Journal which explains some of the technical reasoning: Code-generating Away the Boilerplate in Our Migration Back to Spidermonkey.This paragraph from the article provides a nice summary:SpiderMonkey has several qualities that make it the most suitable option for our needs. Most importantly, its process model is most like ours, with a single process for the browser and threads for tabs. This necessitates better tools for constraining memory and managing resource exhaustion. In addition, Firefox (and thereby SpiderMonkey) is available on the greatest number of platforms. In addition to its high performance JIT, it also has a baseline interpreter in C++ that ensures that any architecture capable of hosting a reasonable distribution of Linux can also run Firefox. And as icing on the cake, the Mozilla Foundation offers extended support releases of Firefox that offer security fixes for a year; annual maintenance affords us a substantially more manageable integration than the six-week fix cycles we had with V8.From an end-user point of view, the choice of JavaScript engine affects two broad usages: client-side and server-side. The legacy mongo shell is part of the MongoDB server codebase, so a given release of MongoDB will have the same embedded JavaScript engine in both of these binaries.Server-side JavaScript is generally discouraged (and often disabled) as there are significant security, concurrency, and performance caveats. The MongoDB query language has expanded in successive server releases to replace the majority of use cases where JavaScript would previously been required. The server-side eval command (db.eval() via the mongo shell) was deprecated in MongoDB 3.0 and removed in MongoDB 4.2. The limited contexts where server-side JavaScript can be used are not expected to support the full range of language features (or pace of change) as Node.js.However, for the client-side use case we have introduced a new MongoDB Shell (mongosh) which is built on top of the Node.js REPL. The new MongoDB shell is designed to be embeddable and extensible. You can install mongosh as a standalone download, but you’ll also find mongosh embedded in MongoDB Compass and third party tools (for example: JetBrains: Introducing MongoDB Shell in DataGrip).That’s a rather long explanation, but if you’re a fan of Node.js you should definitely check out mongosh and let us know if you have any feedback: How can we improve the MongoDB Shell?.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "as i know, firefox is written by rust, and your new mognsh is built on top of nodejs repl, oh my god, nodejs is written by C++ mainly. i don’t want to learn rust that is very complex. i just learn and like C++. your mongo team use so many programming language, it is so wrong.",
"username": "anlex_N"
},
{
"code": "",
"text": "Hi @anlex_N,Can you provide more information on the use case or issue you are trying to solve?There is no need to learn the languages that applications or server processes are written in, unless you are planning to contribute to those software projects.If you are developing an application that connects to MongoDB, you would use a Supported Driver/Library in your preferred language framework. This does not require learning any additional programming languages aside from the main one you are using.If you want to explore data using a tool, you can use the new MongoDB Shell (which exposes an interactive command-line interface) or a more visual tool like MongoDB Compass. You do not need to learn Node.js to use the MongoDB Shell; that aspect is a feature for developers who want to extend functionality of their shell environment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "i am learning your mongodb source code. i think spidermonkey is not so very meaningful. i don’t want to learn another js engine. i want to contribute to you in the future.\nalthough i can not master C++ 100%, but if i master rust, i would master c++ 100%. it wll cost me so long time. do you know?\nby the way, based on your docs, your mongodb source code is written by c++ 11 syntax largely. c++ 20 is the stable release now. oh my god. your syntax is so old, and if written by latest c++ syntax, your source code can be decreased very much.",
"username": "anlex_N"
},
{
"code": "mongodb/mongodocs/building.md",
"text": "i am learning your mongodb source code. i think spidermonkey is not so very meaningful. i don’t want to learn another js engine. i want to contribute to you in the future.Hi @anlex_N,The JavaScript engine is only used in limited contexts. You can learn about the server source code without ever touching the code for the embedded JavaScript engine.If you are learning about the server source code, I would focus on a specific area or subsystem of interest. Working with the source code for a large project is more like reading short stories than sitting down with a novel where you read every chapter.by the way, based on your docs, your mongodb source code is written by c++ 11 syntax lI think you may be referring to the MongoDB C++ Driver which supports C++11 or newer.The definitive information for the server source code is on GitHub (mongodb/mongo). You can find the prerequisites (which may vary for major release branches) in docs/building.md. The current requirement is C++17, which is the most recent ISO standard before C++20.The build requirements for a mature product with an established codebase and many supported platforms cannot change as quickly as the latest language features, but the platform team does evaluate the benefits (and risks) of upgrading to newer prerequisites at the appropriate time.C++20 has been in development for a few years, but the language standard was only approved in Sept 2020 and published by ISO in Dec 2020. Adopting a new language standard also requires compilers with feature support, and many still only have partial/experimental C++20 support at this stage.I wish you luck in your further studies but I believe your original question on choice of JavaScript engines has been thoroughly addressed now.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "webpack is my build system, nodejs is my server code, nativescript is my mobile code, those are based on v8. i wish mongodb js engine is v8 very much. i wish all in one SOLUTION. do you know, teacher?\n@Stennie_X, how is v8 process model? i have searched in google, but can not find some material.\nchromium have 4 process models, those are not benefit for you?",
"username": "anlex_N"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why change v8 to spidermonkey in mongodb js engine? | 2021-03-21T05:35:49.944Z | Why change v8 to spidermonkey in mongodb js engine? | 13,866 |
null | [
"swift"
] | [
{
"code": "",
"text": "Hi all,I’m creating an iOS app in Swift.\nI have a collection with a lot of objects. Is there a way to count all the objects of this collection without getting/loading them?Thanks for your help!",
"username": "Julien_Chouvet"
},
{
"code": "exports = function() {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const chatCollection = db.collection(\"ChatMessage\");\n \n return chatCollection.count()\n .then(result => {\n return result\n }, error => {\n console.log(`Failed to count ChatMessage documents: ${error}`);\n });\n}; \n",
"text": "Hi @Julien_Chouvet, as you mention “collection”, I assume that the data is stored in Atlas and you don’t want to load it all into your mobile Realm database.If this is the case, then you could create a Realm function such as this:and then you can invoke that function from the iOS app: https://docs.mongodb.com/realm/sdk/ios/examples/call-a-function/",
"username": "Andrew_Morgan"
},
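The same idea, counting on the server without loading the documents, is available from any driver as well. A pymongo sketch for readers following along outside of a Realm function (the connection string is a placeholder; the database and collection names follow the ones used later in this thread):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")              # placeholder URI
customers = client["Customers"]["Customers"]                   # names modeled on the thread

total = customers.count_documents({})                          # count every document
juliens = customers.count_documents({"firstName": "Julien"})   # count only matching documents
print(total, juliens)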
{
"code": "",
"text": "Perfect!Thanks a lot!",
"username": "Julien_Chouvet"
},
{
"code": "console.log(\"test = \", test[0]);testtest.firstNametest.[\"firstName\"]",
"text": "Hi again !@Andrew_Morgan I tried your solution which works fine.\nI tied to filter the objects in my collection to only return the number of objects matching some criteria. I tried the following code:exports = function(year){\nconst user = context.user;const db = context.services.get(“mongodb-atlas”).db(“Customers”);\nconst customersCollection = db.collection(“Customers”);const test = customersCollection.find({ firstName: “Julien” });console.log(\"test = \", test);return 0\n};In the Realm logs I have the following result:[\n“test = [object Object]”\n]How can I see the objects resulting from the filter function?\nI tried:\nconsole.log(\"test = \", test[0]);And got:[\n“test = undefined”\n]I also tried to get only one result with findOne() and access my customer parameters but still got “undefined”.const test = customersCollection.findOne();\nconsole.log(\"test = \", test.firstName);\nand\nconsole.log(\"test = \", test[“firstName”]);Moreover, when I return test and display the result in my iOS app, I got the following:document([\"_id\": Optional(RealmSwift.AnyBSON.objectId(60154fe947e22b1d7b3ce126)), “firstName”: Optional(RealmSwift.AnyBSON.string(“Julien”))])And when I return test.firstName or test.[\"firstName\"]:document([\"$undefined\": Optional(RealmSwift.AnyBSON.bool(true))])Thanks for your help!",
"username": "Julien_Chouvet"
},
{
"code": "customersCollection.findconsole.log.thenreturn customersCollection.find({ firstName: “Julien” })\n.then (result => {\n // work with the results\n return something\n})",
"text": "customersCollection.find is an asynchronous call, and your console.log call is executing before the database has returned any results. The database returns a promise that resolves once the request has completed. You need to add a .then clause that will run once the database request completes (the promise then resolves). Something like this…",
"username": "Andrew_Morgan"
},
{
"code": "findOne()find()Function call failed: TypeError: 'then' is not a functionfind()",
"text": "This is working with findOne() but with find() I have the error Function call failed: TypeError: 'then' is not a function.\nI searched why is this error happening and apparently find() is not async.",
"username": "Julien_Chouvet"
},
{
"code": "findreturn customersCollection.find({ firstName: “Julien” }).toArray()\n.then (result => {\n // work with the results\n return something\n})",
"text": "Sorry - find returns a cursor which you can them map to an array (which is what returns a promise)…",
"username": "Andrew_Morgan"
},
{
"code": "then()findOne()return customersCollection.findOne()\n .then (result => {\n console.log('test = ', result.lastName);\n return result\n })\nfind().toArray()undefined",
"text": "Thanks! The then() function is working now. I just have a last problem, when I use findOne() I’m able to get an attribute like this:However, when I use find().toArray() I got undefined:return customersCollection.find({ firstName: “Julien” }).toArray()\n.then (result => {\nconsole.log('test = ', result.lastName);\nreturn result.lastName\n})",
"username": "Julien_Chouvet"
},
{
"code": "result",
"text": "result is an array and so you’ll need to index into it.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Of course! That was a stupid question sorry.Thanks a lot for your help!",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm count objects in collection | 2021-03-18T18:02:54.658Z | Realm count objects in collection | 5,348 |
null | [
"queries",
"python"
] | [
{
"code": "file_namefile_name38_Julio's_Fatacy_2008.mkvfile_nameJulio's FatacyJulios Fatacyfile_name38_Julio's_Fatacy_2008.mkvText Indexfile_nameJulio's Fatacy",
"text": "Mates,I am creating a telegram robot where I used mongodb to save details of files from a telegram channel… One of the detail is the file’s name… and is store in the key “file_name”…The bot I am making is a Auto Filter Robot which when some one enters a text in telegram group the bot willl search the text given by the user in all document in file_name key… and returns matching documents…The problem I am facing is some file names are mixed with special character for example, supposs a file name is 38_Julio's_Fatacy_2008.mkv and is saved in the field “file_name” In one document… And If a user types Julio's Fatacy or Julios Fatacy I need to fetch that documents with file_name as 38_Julio's_Fatacy_2008.mkv…Someone told me a text index would work as per as my needs… And I created a Text Index on the filed file_name and tried to search the term Julio's Fatacy but it could’nt find that document…So I would like to know if there is way to slove this? That is searching a sequence of letter from a string filed and returns them even if the sequence is in middle of other letters/charecters like the example name i mentioned above…??Am currently using Motor Asyncio Driver for Python…Regards,\nBen",
"username": "MoviezHood"
},
{
"code": "",
"text": "If you are in Atlas, you should consider trying Atlas Search rather than a text index.",
"username": "Marcus"
},
{
"code": "",
"text": "But sadly I am not using atlas search ",
"username": "MoviezHood"
}
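Without Atlas Search, one common fallback is a case-insensitive regular-expression query built from the user's words, so that Julios Fatacy still matches 38_Julio's_Fatacy_2008.mkv. A sketch using the synchronous pymongo API for brevity (the same filter document works with Motor; the collection name is a placeholder, and note that an unanchored regex like this cannot use an index efficiently):

import re
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
files = client["example_db"]["files"]               # placeholder collection

def fuzzy_word(word: str) -> str:
    chars = [re.escape(c) for c in word if c.isalnum()]
    # allow any run of non-alphanumerics between letters, so "Julios" matches "Julio's"
    return "[^A-Za-z0-9]*".join(chars)

def find_files(query: str):
    pattern = ".*".join(fuzzy_word(w) for w in query.split() if w)
    return list(files.find({"file_name": {"$regex": pattern, "$options": "i"}}))

print(find_files("Julios Fatacy"))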
] | Matching A Sequence Of Characters In A Text Field | 2021-03-22T08:26:02.560Z | Matching A Sequence Of Characters In A Text Field | 2,255
[
"atlas-device-sync"
] | [
{
"code": "",
"text": "This morning I received an email notifying that our app has been paused. This may be nothing but understanding the cause of the error would be helpfulSync to MongoDB has been paused for your application: TaskTrackerThis is a test app that only I ever use, so there are no connected users and the timeframe in which it was paused I was not using or logged into the app.Here’s the error text showing in consoleTranslatorFatalError Error Mar 20 3:29:00-04:00 0ms SyncError6055a43cd66fbb05a38ed587Error: recoverable event subscription error encountered: maximum number of streams (1000) reachedSource: Error syncing MongoDB writeAnd some screenshots of the actual error, and then the activity. As mentioned this was not caused by the app but it appears it’s a server level event - which if so, should not generate an error (I would think) unless it was really an error.Any thoughts?\nConsole Error2414×1836 309 KB\n\nActivity 1848×724 25.9 KB\n\nActivity 21604×748 107 KB\n",
"username": "Jay"
},
{
"code": "",
"text": "same error for App Services",
"username": "V_P"
},
{
"code": "",
"text": "@V_Pthat link is broken.",
"username": "Jay"
},
{
"code": "",
"text": "Hi @Jay, @V_P,This was a temporary issue affecting some clusters, and has since been resolved.If you have urgent questions about operational issues with MongoDB Cloud services such as Realm, I recommend you contact Cloud Support directly as they can provide more insight into your specific clusters.that link is broken.@V_P provided a link to their Realm cluster, so this is expected. Access will be limited to their organisation and MongoDB Production Support employees.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Stennie_XThanks for the update.understanding the cause of the error would be helpfulI was more looking for clarification about the cause of the error and what the error messages mean and what the effect was/is. The errors in console are often a little vague so what action should be taken, if any, is unclear. For example, what does this mean?maximum number of streams (1000) reachedDoes that mean someone s DDOS’ing us or does it mean server failed on a backup (for example)? Should I restart my instance? Is my data intact? Wouldn’t this be more clear…Error: recoverable event subscription error encountered: maximum number of streams (1000) reached. The server temporarily paused a backup. No data was affected and it does not require any further action.I feel as more developers come on board (Realm), understanding what’s going on in the console will be part of the development process (e.g. clearer error messages).For example, when that event occurred and there were 400 users running my app, would it have crashed?Is there a cross reference to error messages and what they mean?",
"username": "Jay"
},
{
"code": "",
"text": "I was more looking for clarification about the cause of the error and what the error messages mean and what the effect was/is. The errors in console are often a little vague so what action should be taken, if any, is unclear. For example, what does this mean?Hi @Jay,This error was due to a backend configuration limit on the sync translator service for your cluster which has been adjusted. The detail on number of translator streams is from a part of the backend sync service you don’t control, so unfortunately there currently isn’t a lot of context for error messages like this unless the person reviewing the logs is a support engineer.The impact is that you would need to restart the sync service per the email notification you received (which should have included a link to follow).I feel as more developers come on board (Realm), understanding what’s going on in the console will be part of the development process (e.g. clearer error messages).I absolutely agree! We have work in progress to improve documentation and messaging, and teams are very aware of the feedback requests from users in the forum and support cases (as well as internal users!).Is there a cross reference to error messages and what they mean?There is a list of Error Codes in the sync client source, but it is missing a lot of context that would go into a complete developer reference. This also doesn’t cover errors that are specific to the sync service infrastructure (like the “TranslatorFatalError” in the first post of this topic).I’m not aware of an available detailed reference for Realm Sync service error messages including impact and resolution, but I’ll definitely help socialise as soon as one is available .Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "FWIW I got this same issue for a few of the Realm apps I manage. While the action that could be taken was clear (restart sync) I had no idea if restarting the sync would just make the error happen again, or if it was something about data trying to be sync’d that was wrong - we have user inputted data in our app.Learning that it was an issue deeper than what we can control is useful - that we SHOULD click the restart sync button, it’s nothing in our particular app, and the error shouldn’t come back. That’s not always the case, and that’s what was unclear IMO.-Sheeri",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Recoverable event subscription error encountered | 2021-03-20T13:32:29.706Z | Recoverable event subscription error encountered | 3,488 |
|
null | [] | [
{
"code": "",
"text": "Hi,The Realm roadmap is no longer publicly available. Previously it stated that:Is it still your intention to offer self-hosted Realm in the future? For platforms that process medical data, the ability to self-host is crucial because many countries require medical data to remain within their borders.BR,\n–Oskari",
"username": "Oskari_Koskimies"
},
{
"code": "",
"text": "@Oskari_Koskimies Thank you for the question. While we do recognize that certain use cases, particularly around data governance, require a self-hosted deployment; right now we are focused on making the cloud-hosted version of Realm Sync the best it can possibly be. I hope this helps",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is self-hosted Realm still on the roadmap? | 2021-03-20T19:48:41.780Z | Is self-hosted Realm still on the roadmap? | 2,950 |
null | [
"dot-net"
] | [
{
"code": " public IDictionary<string, decimal> monthlyBalance { get; set; }",
"text": "Hi,I’m using mongodb c# driver in my .net core application.\nI’ve a dictionary data fields in table which is like\n public IDictionary<string, decimal> monthlyBalance { get; set; }\nbut when storing dictionary in database the value takes in string not in decimal which is i declared in dictionary.so please help me to solve this difficult problem.\nThank you !!",
"username": "Bhavin_Varsur"
},
{
"code": "Dictionary<string,decimal> [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string _id { get; set; }\n public string ledgerTemplateId { get; set; }\n public string companyId { get; set; }\n\n [BsonRepresentation(BsonType.Int32)]\n public int periodId { get; set; }\n public List<CategoryItem> categories { get; set; }\n\n [BsonRepresentation(BsonType.Decimal128)]\n public decimal openingBalance { get; set; }\n\n [BsonRepresentation(BsonType.Decimal128)]\n public decimal currentBalance { get; set; }\n\n [BsonRepresentation(BsonType.Decimal128)]\n public decimal closingBalance { get; set; }\n \n public IDictionary<string, decimal> monthlyBalance { get; set; } = new Dictionary<string, decimal>();\n\n public IDictionary<string, decimal> dayBalance { get; set; } = new Dictionary<string, decimal>();\n",
"text": "Hello,I’m Using mongo c# driver v2.11.3. i just want insert dictionary in database.\nand the dictionary is Dictionary<string,decimal>. i just simply insert string a key of dictionary and decimal a value of dictionary.but when i insert dictionary in database the value of dictionary takes as string.\nit cannot store decimal value of dictionary.\nhere is the dictionary declared.here is the sample database record .\nimage580×712 12.4 KBin above image you can see the value of dictionary is taking as string.\nis there anything which i do for store decimal value of dictionary.\nHow to do that? What should i do for?\nI’'ll eagerly waiting for your response.\nThank you ?",
"username": "Bhavin_Varsur"
},
{
"code": "",
"text": "Have you managed to solve it? I have the exact same issue.",
"username": "speter97"
}
] | MongoDB Dictionary<string,decimal> store | 2020-12-19T06:43:06.218Z | MongoDB Dictionary<string,decimal> store | 6,918 |
null | [
"replication",
"security"
] | [
{
"code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: {{ mongodb_data_dir }}\n journal:\n enabled: true\n# engine:\n# mmapv1:\n wiredTiger:\n engineConfig:\n cacheSizeGB: {{ memory }}\n\n# where to write logging data.\n#systemLog:\n# destination: file\n # logAppend: true\n # path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: {{ mongod_port }}\n bindIp: 127.0.0.1,{{ fqdn }}\n ssl:\n mode: requireSSL\n PEMKeyFile: /certs/tls.pem\n CAFile: /certs/tls.crt\n disabledProtocols: TLS1_0,TLS1_1\n allowConnectionsWithoutCertificates: false\n allowInvalidHostnames: false\nsecurity:\n authorization: enabled\n keyFile: /conf/mongodb/repl.key\n javascriptEnabled: {{'true' if javascript_enabled else 'false'}}\n\n#operationProfiling:\n\nreplication:\n replSetName: {{ replica_set_name }}\n oplogSizeMB: {{ oplog_size_mb }}\n enableMajorityReadConcern: {{'true' if enable_majority_read_concern else 'false'}}\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n\n2021-03-19T16:33:49.630+0000 I NETWORK [conn619] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 127.0.0.1:44320 (connection id: 619)\n2021-03-19T16:33:49.630+0000 I NETWORK [conn619] end connection 127.0.0.1:44320 (8 connections now open)\n2021-03-19T16:33:50.132+0000 I NETWORK [listener] connection accepted from 127.0.0.1:44350 #620 (9 connections now open)\n2021-03-19T16:33:50.132+0000 I NETWORK [conn620] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 127.0.0.1:44350 (connection id: 620)\n2021-03-19T16:33:50.132+0000 I NETWORK [conn620] end connection 127.0.0.1:44350 (8 connections now open)\n2021-03-19T16:33:50.634+0000 I NETWORK [listener] connection accepted from 127.0.0.1:44360 #621 (9 connections now open)\n2021-03-19T16:33:50.634+0000 I NETWORK [conn621] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 127.0.0.1:44360 (connection id: 621)\n2021-03-19T16:33:50.634+0000 I NETWORK [conn621] end connection 127.0.0.1:44360 (8 connections now open)\n2021-03-19T16:33:51.136+0000 I NETWORK [listener] connection accepted from 127.0.0.1:44364 #622 (9 connections now open)\n2021-03-19T16:33:51.136+0000 I NETWORK [conn622] Error receiving request from client: SSLHandshakeFailed: The server is configured to only allow SSL connections. Ending connection from 127.0.0.1:44364 (connection id: 622)\n allowConnectionsWithoutCertificates: true\n allowInvalidHostnames: true\n",
"text": "hi everyone. I’m in a peculiar situation.I have a mongodb cluster with replication with this configuration:TLS is required and the operator which is doing some things is already using the correct certificate.When replication is kicking in however, suddenly, I’m starting see these:I have no idea what this is and where it’s coming from. The localhost connection I mean. It’s not me, so I suspect it’s the replication inside mongo? I have 3 instances. To secondary and a single primary. But the cluster can’t connect and so no-one is primary. I tried specifying a clusterFile too, but that didn’t do anything.Anyone ever see something like this and might have ANY ideas where or what I can do? I’ve been trying to figure this out for a long time now without luck.If I setIt works of course, but that is not desirable. Any help is much appreciated. ",
"username": "Gergely"
},
{
"code": "ssl:\n mode: requireSSL\n",
"text": "Welcome to the community!What is your mongodb version?\nAre you using TLS or SSL?\nIf TLS your config file still pointing to SSLHave you tried withTLS:\nmode: requireTLS",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi!Unfortunately that’s not an option for now, because we have to maintain backwards compatibility with 3.x versions. So switching to TLS is not an option yet. Also, I believe it should work until it’s fully not supported any longer, right?Or are you suggesting to ALSO add a TLS section next to the SSL one? Also, that has duplicate keys.",
"username": "Gergely"
},
{
"code": "mongod --versionrs.status()rs.conf()",
"text": "Hi @Gergely welcome to the community!I think the picture is still incomplete here. Could you provide the following information as a starting point:Best regards,\nKevin",
"username": "kevinadi"
}
] | Mongodb replication with enforced TLS is failing | 2021-03-19T17:10:10.967Z | Mongodb replication with enforced TLS is failing | 7,660 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hello,As previously REALM team recommended Full sync. but can MongoDB Realm support Query based realm sync??Thanks in advance.",
"username": "Vishal_Deshai"
},
{
"code": "",
"text": "Welcome to the community @Vishal_Deshai!can MongoDB Realm support Query based realm sync??MongoDB Realm currently only supports full sync. For more information, please refer to the Getting Started with Sync documentation.The team is considering how to architect more flexible sync options in future, but there isn’t a specific timeline for this yet:Something akin to query-based sync is certainly in our plans but it is difficult to give an exact date; we are certainly working on it. Unfortunately I cannot tell you if it will land for GA for MongoDB Realm but we will endeavor to make that happen. What I can say is that I do recommend moving to full sync if at all possible - of course, it is use case and load dependent but many architectures can be solved by full sync if partitioning and schema design is thought about initially.If you would like more specific suggestions on schema design and partitioning, please provide some details on your use case.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Vishal_Deshai,Recordings from this week’s MongoDB.live conference are now available on-demand via MongoDB’s YouTube channel or the archived MongoDB.live event site.The presentation on Realm Scalable Sync Design has some particularly relevant information for your question.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Stennie_X @Ian_Ward now that Realm is GA, is there a rough idea of when we can expect query-based sync support?Right now, I have a legacy realm app that I want to migrate over to a query-based sync approach. I don’t want to migrate over to a full-sync approach with workarounds and then be locked into that approach when query-based sync is ready. I also don’t want to keep waiting to migrate and have legacy realm be deprecated.Should I just go ahead and migrate to a full-sync realm or wait a little longer if query-based sync is around the corner?",
"username": "Obi_Anachebe"
},
{
"code": "",
"text": "We are still a long way away from launching query-based sync 2.0 with a more flexible syncing API. My suggestion would be to use partition-based sync and not wait as we will not have QBS production ready before the legacy realm cloud shuts down.",
"username": "Ian_Ward"
}
] | Query Based sync support? | 2020-06-12T06:46:00.366Z | Query Based sync support? | 3,096 |
null | [] | [
{
"code": "Warning: Accessing non-existent property 'count' of module exports inside circular dependency",
"text": "With the Node driver almost anything you do will result in a boatload of warnings similar to this:\nWarning: Accessing non-existent property 'count' of module exports inside circular dependency.\nI tried installing [email protected] and that did not seem to fix it.\nI know it has already been reported, but it is driving me insane. I see this every time I run unit tests OVER AND OVER AND OVER AND …\nHOLY CRAP.\nIt’s been over a month since it’s been reported. PLEASE MAKE THE PAIN STOP. PLEASE.",
"username": "David_Welling"
},
{
"code": "npm ls mongodb",
"text": "This was fixed in v3.6.5 and I was confused because I had a dependency that was not updated to the latest driver. My apologies to the driver team.\nIf anybody else gets here, try this to see if you have some hidden older version that is causing the problem:\nnpm ls mongodb",
"username": "David_Welling"
},
{
"code": "",
"text": "Hi @David_Welling!Welcome to the MongoDB Community Forums Thanks for coming back and posting the clarification and potential solution for others!",
"username": "yo_adrienne"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Pretty please fix Accessing non-existent property 'MongoError' of module exports inside circular dependency | 2021-03-19T23:07:41.561Z | Pretty please fix Accessing non-existent property ‘MongoError’ of module exports inside circular dependency | 6,762 |
null | [] | [
{
"code": "",
"text": "According to mongodb atlas cluster api, the get request should return connection strings with the following property “privateSrv”, if : Atlas returns this parameter only if you created a network peering connection to this cluster.I have created network peering between my AWS vpc and atlas mongodb created network container (VPC) On both sides the vpc connection shows Active/Available state. I have also upgraded to M10 dedicated cluster.However the get request only returns “standard” and “standardSrv” connection strings in “connectionStrings” object.Is there anything else I can do (any step I have forgotten to do)? Is there a way to make cluster aware of the peering? Without privateSrv connectionString the whole peering is useless.thanks\nSimon",
"username": "Simon_Obetko"
},
{
"code": "",
"text": "Hi @Simon_Obetko.AWS Atlas clusters use the same DNS for private and standard connections.With GCP and Azure it is different.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
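For illustration, a minimal Node.js sketch (the SRV URI and names are placeholders) showing the same standardSrv string being used from inside the peered VPC, for example from a Lambda handler:

```js
// The standard SRV connection string is a placeholder; on AWS it resolves to the
// peered/private addresses when the caller runs inside the peered VPC.
const { MongoClient } = require("mongodb");

const uri = "mongodb+srv://appUser:<password>@cluster0.abcde.mongodb.net/test?retryWrites=true&w=majority";

exports.handler = async () => {
  const client = await MongoClient.connect(uri);
  try {
    // Simple reachability check from within the VPC (no public internet needed).
    return await client.db("admin").command({ ping: 1 });
  } finally {
    await client.close();
  }
};
```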
{
"code": "",
"text": "Perfect, tested this and works well even when I am using lambda joined into VPC (meaning without internet connection). Pretty nice solution. I was afraid that the DNS record would not be resolvable from lambda joined in VPC, but it seems to be working well.\nThanks for clearing things out.",
"username": "Simon_Obetko"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | PrivateSrv connection string not present | 2021-03-22T11:50:44.162Z | PrivateSrv connection string not present | 1,959 |
null | [
"capacity-planning"
] | [
{
"code": "",
"text": "Hello,\nI’m new user and I’m studying MongoDB. I have installed MongoDB as a Docker container on a Linux device. This device sends documents which contain information about some data, in different DBs of MongoDB. I would like to know what is the maximum size of the DB and of the collections and what happens to the DB once that size is reached. Is the information overwritten?\nI hope you can help me. Thank you",
"username": "FEDERICA_BO"
},
{
"code": "",
"text": "Hi Federica,The maximum size an individual document can be in MongoDB is 16MB with a nested depth of 100 levels.Edit: There is no max size for an individual MongoDB database.You can learn more about MongoDB limits and thresholds here: https://docs.mongodb.com/manual/reference/limits/ ",
"username": "ado"
},
{
"code": "",
"text": "Hi Ado,\nI had seen the table but there are different parameters for the max size, it goes from 1 TB to 32 TB according to the Chunk Size and Average Size of Shard Key Values. I haven’t set either of these two parameters. What values should I consider?\nSorry for the requests but I have some difficulties to understand how MongoDB works.\nThank you",
"username": "FEDERICA_BO"
},
{
"code": "",
"text": "Hey Federica,Sorry I made a mistake in my original reply. As far as the database size goes, there is technically no limit for how big an individual database can be.If you’re using MongoDB Atlas, you won’t ever have to worry about database size as it will scale as you grow.",
"username": "ado"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @FEDERICA_BO!I would like to know what is the maximum size of the DB and of the collections and what happens to the DB once that size is reached.Practical limits & thresholds to consider are documented in the MongoDB Limits and Thresholds page that @ado shared earlier.I had seen the table but there are different parameters for the max size, it goes from 1 TB to 32 TB according to the Chunk Size and Average Size of Shard Key Values. I haven’t set either of these two parameters. What values should I consider?The table you are referring to is specific to Sharding Existing Collection Data Size and per the Important callout for this section, this limitation only applies to initial sharding:These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.Collections are generally sharded well before reaching those collection sizes. Rebalancing TBs of data will take a long while even with great server & network resources. It is best to shard well before it becomes urgent to do so, as data migration will add even more load to a deployment that is already stressed.The estimation of these limits is explained just above the table. When a collection is initially sharded, a calculation is done to determine how to split existing data into chunk ranges based on the shard key with each range representing data sizes close to the configured Chunk Size. A list of initial split points is currently returned in a single BSON document which is subject to the 16MB document size limit.What that table is trying to estimate is the size of collections that can be sharded based on varying shard key sizes or chunk sizes:Use the following formulas to calculate the theoretical maximum collection size.maxSplits = 16777216 (bytes) / maxCollectionSize (MB) > maxSplits * (chunkSize / 2)Chunk Size and Average Size of Shard Key Values. I haven’t set either of these two parameters. What values should I consider?Chunk Size should be left at the default value (64MB) unless you have specific motivation to change this (for example, if you waited too long to shard and need a larger chunk size for initial sharding ). There is no configuration for shard key size: the average size of shard key values will depend on the field(s) you choose for your shard key index and the associated values in the collection being sharded.For more background on practical vs theoretical limits, please see my response on this earlier discussion: Database and collection limitations - #2 by Stennie.If you are concerned about managing capacity planning and scaling yourself, MongoDB Atlas would be a significant help with features like Cluster Auto-Scaling and the ability to adjust cluster resources based on your current requirements.Regards,\nStennie",
"username": "Stennie_X"
},
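As a back-of-the-envelope illustration of those two formulas (the shard key size below is just an example value):

```js
// Example only: estimate the initial-sharding limit for a 512-byte average shard key
// and the default 64 MB chunk size (matches the 1 TB row of the documentation table).
const avgShardKeyBytes = 512;     // assumed average size of shard key values
const chunkSizeMB = 64;           // default chunk size

const maxSplits = 16777216 / avgShardKeyBytes;              // 32768 split points
const maxCollectionSizeMB = maxSplits * (chunkSizeMB / 2);  // 1048576 MB, roughly 1 TB

print(maxSplits, maxCollectionSizeMB);
```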
{
"code": "",
"text": "Thank you guys for you explanation. You have been very kind. Now the topic is clearer.",
"username": "FEDERICA_BO"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Maximum size of database or collection | 2021-03-18T13:55:30.469Z | Maximum size of database or collection | 49,090 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hei there, \nCant find where to modify the content of the emails send to user for Confirming or reseting password. Is this feature available in free tier or?\nKind regards,\nBehzad ",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "Hi Behzad Welcome to the forum.Here’s a quote from the documentation:You can use a custom confirmation function to define your own confirmation flows, such as the following examples:To customize the content of the confirmation email you’ll need to:I hope this helps ",
"username": "kraenhansen"
},
{
"code": "",
"text": "Hi Kræn,\nthanks for the answer. So there is no way Mongo DB provides to handel this in the Realm? I mean to modify the email content from Real.Kind regards, Behzad ",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "Correct. The subject line and URL is configurable, but the content and template is only configurable by implementing a function as mentioned above. You might be interested in sharing your thoughts on this thread: Able to edit Confimation Email Body | Voters | MongoDB",
"username": "kraenhansen"
},
{
"code": "",
"text": "Thanks Kræn \nHonestly the mongo dbRealm could be improved. I find the realm documentation lacking alot …\nWish you a nice day. ",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "Nice day to you, too And thanks for sharing your feedback.\nAre there any other areas (perhaps specific sections) that you’ve felt the need for more / better documentation?",
"username": "kraenhansen"
},
{
"code": "",
"text": " Actually I have chosen Mongodb platform as our main underlying infrastructure form now and and future. But lacking of well described documentation makes the development a slow…Hopefully it will be better.\nI have a question I would be thankful if you could tell me where do I set the user profile data for username/password users? I mean when I signup a user they provide their name and last name,… In documentation I found you user the realm register user function…But I cold not find anywhere in docs how to these profile data. I tried the custom data but that doesnot seem to usefull registering the profile data…\nSorry asking this question here. I am having a deadline soon…\nMany Thanks ",
"username": "Behzad_Pashaie"
},
{
"code": "registerUserapp.logInconst email = \"[email protected]\";\nconst password = \"a-super-secure-password\";\nconst firstName = \"John\";\nconst lastName = \"Doe\";\n// Register the user\nawait app.emailPasswordAuth.registerUser(email, password);\nconst credentials = Credentials.emailPassword(email, password);\n// Authenticate as that user\nconst user = await app.logIn(credentials);\n// Create the user profile\nconst userProfiles = user.mongoClient(\"mongodb-atlas\").db(\"my-database\").collection(\"user-profiles\");\nawait userProfiles.insertOne({ _id: user.id, firstName, lastName });\n",
"text": "I agree, it’s a bit off-topic and perhaps better for another post in the forum.\nYou’re going to get my best answer none the less (using Realm Web / JS) here as an example, since I don’t know what platform you’re developing for.where do I set the user profile data for username/password users?I’d suggest that you use the custom data feature, that you’ve already been looking into. Once you’ve completed the registerUser and app.logIn you can insert a new document in the custom data collection (I’m using “user-profiles” here as an example):You can also setup a trigger to add a document (with some default data) upon user creation: https://docs.mongodb.com/realm/triggers/authentication-triggers#authentication-eventsAnyways - I hope this works out for you. We’re always happy to get concrete feedback (links, sections in docs, etc.) on stuff we can improve and if you have any further questions, feel free to create a new post in the forum ",
"username": "kraenhansen"
},
{
"code": "",
"text": "Thanks a lot for your kind replies. I am working with React / JS \nI have mistakenly posted this with another account on the forum. Sure I will add your solution to that post.\nThe approach you described sounds very good, but one challenge is consider user provides FirstName and LastName at signup. By submitting the sign up form registerUser(email, password). At this point user have to confirm and then log in for user to be marked as Authenticated. How can we persist this FirstName and LastName until the user have first login?\nAnother senario could be user signs up from one browser and l logs in from another browser.\n ",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "I also just came up with an idea. How about if for sign in we only ask for email and password. Then when user is loing for first time we can send/push them to Profile page to update their Personal info…This is a bit tricky as well…I am not sure how convinent is thís solution according user experience…",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "You’re right, since your app requires confirmation you the user has to confirm before you can post to the collection Depending on your use-case you might be able to store it in the local storage.An entirely different approach is to create a function (which requires at least anonymous authentication) or alternatively enable a webhook (which can be called unauthenticated) and when called with the first and last name it will store these values into the collection, in case no profile already exists for that user. But … this feels a bit like a misuse of the features to me, perhaps another person on the forum has a better suggestion?",
"username": "kraenhansen"
},
{
"code": "",
"text": "Yes I agree easily it becomes complicated. yes let see if we get some other ideas \nSure these are thing Realm Dev Team can fix hopefully ",
"username": "Behzad_Pashaie"
},
{
"code": "emailpasswordfirstNamelastName",
"text": "I think for this use case a function is your best bet. +1 for making this a bit easier out of the box though!For this solution you’ll need to:When a user first loads your app (i.e. on the sign in/sign up screen) you’ll need to log in with an\nAnonymous credential so that you can call functions.The user fills out the Sign Up form and provides the following info: email, password, firstName, lastNameOn submit:a. Realm sends the user a confirmation email (or runs a confirmation function)b. You call a function that creates a document in the custom user data collection for the user with all the provided info except for their password. The document could probably also include a boolean “isConfirmed” field though that’s not strictly necessary.The user confirms their email address. This fires an auth trigger (on CREATE) that looks up the user’s document by email, adds the user’s ID to whatever field you specified in the custom data config, and sets isConfirmed to true.The user (still authed as Anonymous) logs in with their now confirmed email & password. I’d suggest passing the email/password credential to User.linkCredentials() on the anonymous user instead of directly logging in - this associates the anonymous user activity with the email/password account. It probably doesn’t make too much difference but it is a bit cleaner, especially if you’d like to add any features that don’t require login.",
"username": "nlarew"
},
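To make the confirmation trigger step a bit more concrete, here is a rough sketch of such an auth trigger function (the database, collection and field names are assumptions, adjust them to your custom user data configuration):

```js
// Realm function attached to an Authentication Trigger (operation type: CREATE).
// Assumes custom user data lives in "my-database.user-profiles" and that the
// configured user ID field is "userId" -- both names are placeholders.
exports = async function (authEvent) {
  const { user } = authEvent;   // the newly created (confirmed) user

  const profiles = context.services
    .get("mongodb-atlas")
    .db("my-database")
    .collection("user-profiles");

  // Link the profile document created at sign-up to the real user ID
  // and mark it as confirmed.
  await profiles.updateOne(
    { email: user.data.email },
    { $set: { userId: user.id, isConfirmed: true } }
  );
};
```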
{
"code": "",
"text": "Great \nThanks Nick & Kræn . Looks very good. I will implement this.\nWish you a good time forward\n ",
"username": "Behzad_Pashaie"
},
{
"code": "",
"text": "Hi Nick @nlarew ,\nhope you are doing well. I have a question regarding step 2 point b.\nCurrently in the app I am working on by default anonymous is activated before any user logs in by providers.\nBy submit button I call a function which inserts the user info expect password in custom data collection. My question is how safe is this first insertion which has been it done by an anonymous user? It seems a security hole. if this is an issue how can we make it safe?\nThen later when user is confirmed a trigger runs a function which updates the corresponding document with normal user id.\nKind regards,\nBehzad Pashaie",
"username": "Behzad_Pashaie"
}
] | How to change the content of User Confirmation Email | 2021-01-22T19:46:51.079Z | How to change the content of User Confirmation Email | 5,297 |
null | [] | [
{
"code": "",
"text": "I notice that $indexStats gather info since last reboot, so the information we can get from it is very poor.Is there any way to avoid this behavior?Mongo is running in a AWS EC2, managed from Mongo MMS: 3 nodes cluster (3 configs, 3 mongoes), 2 shards. Mongo v4.0.6Thanks",
"username": "Ruben_Fernandez_Gonz"
},
{
"code": "",
"text": "Hi @Ruben_Fernandez_Gonz,Welcome to MongoDB Community.I think the only way is to persist this data into a collection periodically…Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
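For example, a small mongo shell sketch (collection and database names are only examples) that could be scheduled to snapshot $indexStats into a history collection before the counters are lost on a restart:

```js
// Run periodically (e.g. via cron or an Ops Manager job). Appends a timestamped
// snapshot of $indexStats so index usage survives instance restarts.
const snapshotTime = new Date();

db.orders.aggregate([{ $indexStats: {} }]).forEach(stat => {
  db.getSiblingDB("monitoring").index_stats_history.insertOne({
    ns: "mydb.orders",            // example namespace
    indexName: stat.name,
    host: stat.host,
    accesses: stat.accesses,      // { ops: <NumberLong>, since: <Date> }
    snapshotTime: snapshotTime
  });
});
```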
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $indexStats reset after instance rebooting | 2021-03-22T11:40:48.905Z | $indexStats reset after instance rebooting | 1,575 |
null | [
"java"
] | [
{
"code": "public class EnumCodecProvider implements CodecProvider {\n @Override\n public <T> Codec<T> get(Class<T> clazz, CodecRegistry registry) {\n log.info(\"Inside EnumCodecProvider: {}\", clazz.getSimpleName());\n if (clazz == ShipmentStatus.class) {\n return (Codec<T>) new ShipmentStatusCodec();\n }\n return null; \n }\n}\nCodecRegistry codecRegistry = CodecRegistries.fromRegistries(\n\t\t\t\tCodecRegistries.fromProviders(new EnumCodecProvider()),\n\t\t\t\tMongoClients.getDefaultCodecRegistry()\n\t\t);\n\n\t\tMongoClientSettings settings = MongoClientSettings.builder()\n\t\t\t\t.uuidRepresentation(UuidRepresentation.STANDARD)\n\t\t\t\t.retryReads(true)\n\t\t\t\t.retryWrites(true)\n\t\t\t\t.codecRegistry(codecRegistry)\n\t\t\t\t.applyConnectionString(connectionString).build();\n\t\treturn MongoClients.create(settings);\n10:32:15.538 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Document \n10:32:15.540 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: MaxKey \n10:32:15.540 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: BsonRegularExpression \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Integer \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Date \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: BsonDbPointer \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Symbol \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: ObjectId \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: BsonTimestamp \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: MinKey \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: String \n10:32:15.541 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: List \n10:32:15.542 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Binary \n10:32:15.542 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Double \n10:32:15.543 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Code \n10:32:15.543 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Long \n10:32:15.543 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Boolean \n10:32:15.543 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: Decimal128 \n10:32:15.543 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: CodeWithScope \n10:32:15.544 INFO 6ad6a0f68e2d c.t.t.p.c.v.codec.EnumCodecProvider - Inside EnumCodecProvider: BsonUndefined\n",
"text": "Hi,\nI have a codec provider for enum suggested by https://dev.to/harithay:EnumCodecProvider was registered this way:During debugging I can see that all BSON types are processed by EnumCustomProvider except my\nenum ShipmentStatus:Could you give me an advice what is missing in the code? How we can force MongoDB to process the enums using custom ShipmentStatusCodec?Thanks,\nElena",
"username": "Elena_Alexandrova"
},
{
"code": "public class EnumCodecProviderTest {\npublic static void main(String[] args) {\n CodecRegistry codecRegistry = CodecRegistries.fromRegistries(\n CodecRegistries.fromProviders(new EnumCodecProvider()),\n MongoClients.getDefaultCodecRegistry()\n );\n\n // I'm assuming that you are using Document as the container, but it's\n // not clear from your description\n Document doc = new Document(\"status\", ShipmentStatus.SHIPPED);\n\n // this is essentially what happens when using MongoClient, but avoids\n // the need to actually use a MongoClient\n String json = doc.toJson(codecRegistry.get(Document.class));\n\n System.out.println(json);\n}\n\npublic static class EnumCodecProvider implements CodecProvider {\n @Override\n public <T> Codec<T> get(Class<T> clazz, CodecRegistry registry) {\n if (clazz == ShipmentStatus.class) {\n //noinspection unchecked\n return (Codec<T>) new ShipmentStatusCodec();\n }\n return null;\n }\n\n}\n\npublic static class ShipmentStatusCodec implements Codec<ShipmentStatus> {\n @Override\n public ShipmentStatus decode(BsonReader reader, DecoderContext decoderContext) {\n return ShipmentStatus.valueOf(reader.readString());\n }\n\n @Override\n public void encode(BsonWriter writer, ShipmentStatus value, EncoderContext encoderContext) {\n writer.writeString(value.name());\n }\n\n @Override\n public Class<ShipmentStatus> getEncoderClass() {\n return ShipmentStatus.class;\n }\n}\n\npublic enum ShipmentStatus {\n SHIPPED\n}\n}\n{\"status\": \"SHIPPED\"}\n",
"text": "Hi @Elena_Alexandrova,Thanks for reaching out. I tried to reproduce your results with the following test:but it printsas expected. Can you provide a complete, minimal reproducer that demonstrates what you’re seeing?Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "@Configuration\n@Slf4j\npublic class MongoDBConfig extends AbstractReactiveMongoConfiguration {\n\t@Override\n\tprotected void configureConverters(MongoCustomConversions.MongoConverterConfigurationAdapter adapter) {\n\t\tadapter.registerConverter(new ShipmentStatusReadConverter());\n\t\tadapter.registerConverter(new ShipmentStatusWriteConverter());\n\t}\n}\n\n@Slf4j\n@Component\n@WritingConverter\npublic class ShipmentStatusWriteConverter implements Converter<ShipmentStatus, String> {\n @Override\n public String convert(ShipmentStatus status) {\n log.info(\"Writing Converter called\");\n return status.getValue();\n }\n}\n\n@Slf4j\n@Component\n@ReadingConverter\npublic class ShipmentStatusReadConverter implements Converter<String, ShipmentStatus> {\n @Override\n public ShipmentStatus convert(String value) {\n log.info(\"Reading Converter called\");\n return ShipmentStatus.fromValue(value);\n }\n}\n\n@Slf4j\npublic enum ShipmentStatus {\n NEW(\"New\"), \n PLANNED(\"Planned\"),\n ASSIGNED(\"Assigned\");\n\n private String value;\n\n ShipmentStatus(String value) {\n this.value = value;\n }\n\n public String value() {\n return this.value;\n }\n\n @JsonValue\n public String getValue() {\n return value;\n }\n\n @JsonCreator\n public static ShipmentStatus fromValue(String value) {\n for (ShipmentStatus e : ShipmentStatus.values()) {\n if (e.value.equalsIgnoreCase(value)) {\n return e;\n }\n }\n throw new UnsupportedEnumValueException(value, ShipmentStatus.class);\n }\n\n @Override\n public String toString() {\n return this.value;\n }\n}\n",
"text": "Thanks, @Jeffrey_Yemin.\nI used ReactiveMongoRepository to store my data.Actually, I’ve already found the solution that works for me. I used the custom converters.",
"username": "Elena_Alexandrova"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cannot store Java enum values in MongoDB | 2021-03-19T13:42:13.444Z | Cannot store Java enum values in MongoDB | 14,208 |
null | [] | [
{
"code": "",
"text": "Can the Server be Shutdown because of the Oplog?If you have one, can I know the case?",
"username": "Kim_Hakseon"
},
{
"code": "mongod",
"text": "Hi @Kim_Hakseon,Are you asking if this is possible in theory or do you have a specific unexpected shutdown event to investigate?A secondary’s oplog could fall too far behind the primary causing that member to become “stale”, but that will not trigger shutdown. Unrecoverable write errors are a possible cause for the mongod process to shutdown because it is not safe to continue … but that is a general I/O safety mechanism.If you have a specific example, please provide more information around the shutdown behaviour you are interested in (eg log snippet and specific version of MongoDB server).Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I think you’re asking, “This is possible in the story.”This question was asked by the client company to us, but I didn’t understand it at all, so I asked.Thank you.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Shutdown due to Oplog | 2021-03-22T03:01:19.611Z | Shutdown due to Oplog | 1,677 |
null | [] | [
{
"code": "M0",
"text": "From what I understand based on the limits mentioned - You can deploy at most one M0 Free Tier cluster per Atlas project.But is there a limit on the number of Atlas projects?",
"username": "_A_P"
},
{
"code": "",
"text": "Hi @_A_P,The limit is:Projects per Atlas Organization - 250Other limits can be view here:Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny so theoretically there can be upto 250 Projects each having an M0 cluster with their independent 500 mb memory and rolling 10 gb bandwidth?",
"username": "_A_P"
},
{
"code": "",
"text": "Hi @_A_P,Can you please explain your end goal? What are you trying to achieve?The atlas free tier clusters are for familiarizing yourself with the product and easily spinning a demo cluater…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I was worried about accidentally hitting the limit but spinning up and down clusters while learning how to use Mongo. Currently, I am simply going through the Dev Certificate learning path.",
"username": "_A_P"
},
{
"code": "",
"text": "By the way, you can use cross-org billing to share a single subscription across multiple Atlas organizations https://docs.atlas.mongodb.com/billing/#cross-organization-billing",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is there a limit on the number of projects? | 2021-03-21T15:17:10.542Z | Is there a limit on the number of projects? | 6,031 |
null | [
"spark-connector"
] | [
{
"code": "",
"text": "Hi,\nWe are having jobs that use the mongo spark connector: https://mvnrepository.com/artifact/org.mongodb.spark/mongo-spark-connector_2.12/3.0.0\nA month ago we set the connector to run with a specific driver 4.0.5. After few days of successful runnings, the jobs fail, and the only way that the process succeeded to run is to upgrade to a new driver version: 4.2.0.Again, after few days of successful running, the process that configures the same with 3.0.0 connector and driver 4.2.0 fail and the only solution that succeeded is upgrading to a newer driver version 4.2.2.Eventually, it seems that if a new version of a driver came up, so an existing older version fails and we can’t figure out why?\nIf that case is common to more users?\nThat is the last configuration that is stable so far, but we can’t be sure it won’t fail once a newer driver version will be launched.\n–packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.0,org.mongodb:mongodb-driver-sync:4.2.2\nCan you please assist?",
"username": "Arik_Sasson"
},
{
"code": "",
"text": "HI @Arik_Sasson,The 3.0.0 spark connector sets the sync java driver version in its pom to be 4.0.5 and is only tested with that combination.I think you may need to provide more information regarding the errors you are seeing, to help understand the cause.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Hey,We got two types of errors:When using mongo driver version 4.2.0 we got:py4j.protocol.Py4JJavaError: An error occurred while calling o119.load.\n: java.lang.NoClassDefFoundError: com/mongodb/client/model/WriteModel\nat com.mongodb.spark.sql.DefaultSource.constructRelation(DefaultSource.scala:89)\nat com.mongodb.spark.sql.DefaultSource.createRelation(DefaultSource.scala:61)\nat org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:342)\nat org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)\nat org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)\nat scala.Option.getOrElse(Option.scala:189)\nat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)\nat org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:221)\nat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\nat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\nat java.lang.reflect.Method.invoke(Method.java:498)\nat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\nat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)\nat py4j.Gateway.invoke(Gateway.java:282)\nat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\nat py4j.commands.CallCommand.execute(CallCommand.java:79)\nat py4j.GatewayConnection.run(GatewayConnection.java:238)\nat java.lang.Thread.run(Thread.java:748)\nCaused by: java.lang.ClassNotFoundException: com.mongodb.client.model.WriteModel\nat java.net.URLClassLoader$1.run(URLClassLoader.java:371)\nat java.net.URLClassLoader$1.run(URLClassLoader.java:363)\nat java.security.AccessController.doPrivileged(Native Method)\nat java.net.URLClassLoader.findClass(URLClassLoader.java:362)\nat java.lang.ClassLoader.loadClass(ClassLoader.java:418)\nat java.lang.ClassLoader.loadClass(ClassLoader.java:351)\n… 19 more\nCaused by: java.util.zip.ZipException: invalid LOC header (bad signature)\nat java.util.zip.ZipFile.read(Native Method)\nat java.util.zip.ZipFile.access$1400(ZipFile.java:60)\nat java.util.zip.ZipFile$ZipFileInputStream.read(ZipFile.java:734)\nat java.util.zip.ZipFile$ZipFileInflaterInputStream.fill(ZipFile.java:434)\nat java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)\nat sun.misc.Resource.getBytes(Resource.java:124)\nat java.net.URLClassLoader.defineClass(URLClassLoader.java:463)\nat java.net.URLClassLoader.access$100(URLClassLoader.java:74)\nat java.net.URLClassLoader$1.run(URLClassLoader.java:369)\n… 24 moreWhen using version 4.0.5 we got Partitioner errors (with all kinds of partitioners)\nfor example this one:Partitioning using the ‘MongoShardedPartitioner’ failed.Please check the stacktrace to determine the cause of the failure or check the Partitioner API documentation.\nNote: Not all partitioners are suitable for all toplogies and not all partitioners support views.%n",
"username": "Almog_Gelber"
},
{
"code": "NoClassDefFoundError: com/mongodb/client/model/WriteModel",
"text": "Hi @Almog_Gelber,Firstly its unclear why when previously running the partitioner would suddenly fail. Without further information then its impossible to determine the cause.For: NoClassDefFoundError: com/mongodb/client/model/WriteModel it indicates that there is an issue in the class path and the required write model is not available. Has the Spark Executor and all the Spark Driver nodes been updated?Again, after few days of successful running, the process that configures the same with 3.0.0 connector and driver 4.2.0 fail and the only solution that succeeded is upgrading to a newer driver version 4.2.2This also appears that the real cause of the error has not been determined.If the Spark job takes multiple days to run and it failed in the initial instance, then just updating the driver wouldn’t necessarily be expected to fix the issue (unless there was a driver bug).So I think the next step would be to determine the root cause of failure and proceed from there.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "I Ross,\nThanks for your quick response.\nActually, if we had known the root cause for it, we would have tried to solve it or find a solution for it.\nThe problem is that we don’t have a thread for the root cause and the flow mentioned in the last reply is the information that we wrote.\nSo since we have made any changes and the process fail twice and the only info that we realize it might be related to the failures, is of new driver upgrade.if you can point us to find the root cause or might encounter the same situation with other mongo users it can be very helpful.\nThanks,",
"username": "Arik_Sasson"
},
{
"code": "",
"text": "Hi @Arik_Sasson,There isn’t enough information here to understand the cause of the failure. Ideally, providing a minimal reproducible example would help as I could replicate the issue to understand the cause.Failing that more information about the spark job is required to understand what the error actually is:Once I understand more about the error, I can help look at ways to mitigate the error.All the best,Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "",
"username": "Almog_Gelber"
},
{
"code": "NoClassDefFoundErrorpy4j.protocol.Py4JJavaError: An error occurred while calling o119.load.\n: java.lang.NoClassDefFoundError: com/mongodb/client/model/WriteModel\nNoClassDefFoundError: com/mongodb/client/model/WriteModel",
"text": "Hi,So to clarify the only error you are seeing is a NoClassDefFoundError :And this error occurs even though running the job previously worked and nothing else has changed?For: NoClassDefFoundError: com/mongodb/client/model/WriteModel it indicates that there is an issue in the class path and the required write model is not available. Has the Spark Executor and all the Spark Driver nodes been updated?One of the Spark Drivers (also known as Spark Workers) is not configured correctly. As far as I can tell either it has a different version of the Mongo Spark Connector installed or a partial installation of the Mongo Spark Connector (without the Mongo Java driver classes).Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "The Mongo version is: 4.0.10",
"username": "Arik_Sasson"
},
{
"code": "",
"text": "@Ross_Lawley\nThanks for the quick response.We are running in client mode using single master node. (and many workers of course)\nWe are doing the installation of the connector using --packages in the spark submit command.\nWhy do you think there could be any issue with the installation? and why it suddenly happens after many successful runs?Thanks,\nAlmog",
"username": "Almog_Gelber"
},
{
"code": "",
"text": "Hi @Ross_Lawley\nShould we supply any other information that could assist to point of the root cause?",
"username": "Arik_Sasson"
},
{
"code": "",
"text": "Hi @Arik_Sasson,I would check all the class paths on each of the worker nodes and ensure that they are as expected. Also, it would be worth double checking each worker node is running the correct version of Spark.Ross",
"username": "Ross_Lawley"
}
] | Upgrade driver versions unstable | 2021-03-16T12:56:16.911Z | Upgrade driver versions unstable | 6,016 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "Can we apply to get a voucher for both the MongoDB exams?",
"username": "_A_P"
},
{
"code": "",
"text": "Hi @_A_PWelcome to the forum!You’ll get a voucher for each learning path that you complete.\nIf you finish both learning paths, you can take both exams.Good luck!Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How many exam vouchers are students eligible for? | 2021-03-20T14:03:14.338Z | How many exam vouchers are students eligible for? | 5,145 |
null | [] | [
{
"code": "",
"text": "Hi,For days I have a problem that has prevented me from going to production and that is that on 4 occasions the database is deleted and I do not know the reason.I keep a normal record, the next day I consult it and it no longer appears, when I check the database it no longer exists. I try again and it comes back and it happens, I am not sure when time passes or if there is something that triggers this event.in mongodb.conf there is the storage directory and normal. I use MongoCompass as a client, does this have something to do with connecting to the remote base? permissions or something similar?I am working with MEAN and everything is in digitalocean, with the support of them from the server everything is ok, there are no reboots or anything abnormal.I appreciate if you can help me identify what happens, because that way I can’t go to production.Thank you.",
"username": "John_Jairo_Dussan_Ra"
},
{
"code": "db.version()mongo",
"text": "Welcome to the MongoDB Community @John_Jairo_Dussan_Ra!If data is being removed unexpectedly I would start by making sure your deployment is properly secured with access control enabled, appropriate firewall rules, and TLS/SSL network encryption.To understand more about your scenario can you please:Confirm the Security Measures you have implemented for your deployment.Confirm the exact MongoDB server version used (i.e. output of db.version() in the mongo shell) and the host O/S version.Describe more specifically what data is missing. Does database mean all of your databases & collections? Are there any databases or collections that are not affected?There are other possibilities, but this is the most likely one to eliminate first.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello,Thanks for the welcome and support.version MongoDB\n4.4.4Ubuntu 20.04 LTS\nRelease: 20.04Please see the attached images where the failure is evidenced.I do not have more databases, for now I only have this project, not that I have uploaded more until I can solve this issue.I thank you for the guidance and help that you can give me.\ndatabases1421×452 14.4 KB",
"username": "John_Jairo_Dussan_Ra"
},
{
"code": "",
"text": "I would be scary.What documents do you have in the database READ__ME_TO_RECOVER_YOUR_DATA. That looks like you have been hacked.",
"username": "steevej"
},
{
"code": "",
"text": "\nNever heard back on this thread.‘Hacked’ is a strong word for unsecured servers on the internet. Ransomed would be appropriate.@John_Jairo_Dussan_Ra your data is gone. Follow @Stennie_X’s advise:If data is being removed unexpectedly I would start by making sure your deployment is properly secured with access control enabled, appropriate firewall rules, and TLS/SSL network encryption.",
"username": "chris"
},
{
"code": "",
"text": "‘Hacked’ is a strong word for unsecured servers on the internet.So true. Ransomed indeed. I am really curious with the follow ups.",
"username": "steevej"
},
{
"code": "READ__ME_TO_RECOVER_YOUR_DATA",
"text": "Hi @John_Jairo_Dussan_Ra,Thank you for the extra details. One aspect you did not confirm was any security measures you have taken, but your screenshot confirms that someone was able to remotely access your deployment, drop the databases, and create a database called READ__ME_TO_RECOVER_YOUR_DATA which will likely have some instructions on paying a “ransom”. See: How to Avoid a Malicious Attack That Ransoms Your Data.You can secure a deployment following the measures in the MongoDB Security Checklist. At a minimum a public deployment should have access control and authentication enabled, TLS/SSL network encryption configured, and appropriate firewall rules to limit network exposure. You should also set up Monitoring and Backup for a production environment and should review the Production Notes if you want to tune your deployment.If that sounds like a daunting list of administrative tasks to take care of before your production launch, I would strongly recommend using MongoDB Atlas. Atlas deploys fully managed MongoDB clusters on AWS, GCP, and Azure with Enterprise-level security, backup, and monitoring features that can be configured via web UI and API. There’s a free tier with 512MB of data if you want to try out the platform or have a development sandbox, and resources can be scaled (or auto-scaled) depending on your cluster tier and configuration. MongoDB Atlas undergoes independent verification of platform security, privacy, and compliance controls (you can find more information in the Trust Centre).If you prefer to manage deployments in your own VPS in the long term, you can always backup your data from Atlas and restore into a self-managed MongoDB deployment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi everyoneThanks for your opinions.effectively some instructions on paying a “ransom”.The site is one of tests in which I am reviewing all these aspects before going to production with the real one.I have a question, how do you manage to violate the security of the server and access the DB?I am applying Security Measures indicated. after that I can erase the base and load again?\nor how can I be sure that the security measures were well applied?Thanks.",
"username": "John_Jairo_Dussan_Ra"
},
{
"code": "",
"text": "Hi @John_Jairo_Dussan_Ra,I have a question, how do you manage to violate the security of the server and access the DB?Typically this happens through a combination of disabling default security measures (only bind to localhost) and not correctly configuring security measures like access control and firewalls. If you do not have access control enabled on your deployment and anyone can connect remotely, those remote connections will have full administrative access to your deployment. If you do not enable network encryption and connect to your deployment remotely over the public internet (without an encrypted path like a VPN or SSH tunnel), all of your data is exchanged in plaintext and could be subject to eavesdropping.As @chris noted, there is no hacking effort required if there are no effective security measures in place.Direct access to your database server should ideally only be allowed from a limited set of origin application IPs that themselves would like be inside the same VPN or firewall perimeter. User access should be authenticated using Role-Based Access Control following the Principle of least privilege. The general security measures are similar for any infrastructure service you want to host and secure.I am applying Security Measures indicated. after that I can erase the base and load again?\nor how can I be sure that the security measures were well applied?You can test to make sure the security measures are effective. For example, if access control is enabled you should not be able to run any commands to view or modify data as an authorised user. If firewall configuration is correct, you should be able to connect from whitelisted IPs but not from any other IPs.Good security involves multiple layers of defence. Normally I would configure & test the inner layers of security (access control, enforcing authentication, TLS/SSL) before opening up to broader levels of exposure (binding to a non-local IP, firewall configuration). Since you have already had some unwelcome connections, I would start by limiting network exposure via your firewall so you can configure and test the other security measures.Regards,\nStennie",
"username": "Stennie_X"
},
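As a small illustration of the access control step (user name, password handling and roles below are placeholders; see the Security Checklist for the full procedure):

```js
// 1) Create an administrative user before turning authorization on (placeholder names):
db.getSiblingDB("admin").createUser({
  user: "siteAdmin",
  pwd: passwordPrompt(),     // prompts for the password instead of embedding it
  roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
});

// 2) Restart mongod with security.authorization: enabled (and bindIp still restricted),
//    then verify that an unauthenticated connection can no longer read data:
db.getSiblingDB("mydb").getCollection("test").findOne();   // should now fail: requires authentication
```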
{
"code": "",
"text": "Hi @Stennie_X,Thank you very much for the explanation, excellent support.",
"username": "John_Jairo_Dussan_Ra"
},
{
"code": "",
"text": "beforePlease follow below document once and check all security aspects in your environmenthttp://www.itsecure.hu/library/image/CIS_MongoDB_3.4_Benchmark_v1.0.0.pdf",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Database deleted auto | 2021-03-18T21:13:43.020Z | Database deleted auto | 16,543 |
null | [
"c-driver"
] | [
{
"code": "cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..-- Building for: NMake Makefiles\n-- The C compiler identification is unknown\nCMake Error at CMakeLists.txt:54 (project):\n The CMAKE_C_COMPILER:\n\n cl\n\n is not a full path and was not found in the PATH.\n\n To use the NMake generator with Visual C++, cmake must be run from a shell\n that can use the compiler cl from the command line. This environment is\n unable to invoke the cl compiler. To fix this problem, run cmake from the\n Visual Studio Command Prompt (vcvarsall.bat).\n\n Tell CMake where to find the compiler by setting either the environment\n variable \"CC\" or the CMake cache entry CMAKE_C_COMPILER to the full path to\n the compiler, or to the compiler name if it is in the PATH.\n\n\n-- Configuring incomplete, errors occurred!\nSee also \"C:/TestDrivers/mongo-c-driver-1.17.4/cmake-build/CMakeFiles/CMakeOutput.log\".\nSee also \"C:/TestDrivers/mongo-c-driver-1.17.4/cmake-build/CMakeFiles/CMakeError.log\".\n",
"text": "I’ am trying to install MongoDB c driver on windows 10 . I follow the guideline in the\nInstalling the MongoDB C Driver (libmongoc) and BSON library (libbson) — libmongoc 1.23.2 .\nBut I came across an issue when building mongoc .\nI entered the following command on mysys2 minGW64.cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..then I received the following errorCan anyone please help me with this?",
"username": "Tharindu_Balasooriya"
},
{
"code": "",
"text": "Have you followed the msys2 directions here?http://mongoc.org/libmongoc/current/installing.html#build-environment-on-windows-with-mingw-w64-and-msys2",
"username": "Bernie_Hackett"
},
{
"code": "",
"text": "Yes, I think that happened because there were multiple CMake versions on my computer, then I removed those version and went with installing mongoc install with Visual Studio, then the problem got solved",
"username": "Tharindu_Balasooriya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | The C compiler identification is unknown ? MongoDB c - windows 10 | 2021-03-17T04:51:44.394Z | The C compiler identification is unknown ? MongoDB c - windows 10 | 7,648 |
null | [
"replication"
] | [
{
"code": "rs.reconfig",
"text": "I’m stuck in a situation where only one of the three nodes in my replica set is healthy and is stuck as secondary. I’m unable to force it to become primary via rs.reconfig as there is no primary for me to run this on.This node was previously primary while my other two nodes were rebuilt. Now it is the only surviving node with the latest data and has become unusable.What is my way out of this?",
"username": "timw"
},
{
"code": "forceforceforceforce",
"text": "Hi @timw,In this situation you can follow the tutorial on forced reconfiguration: Reconfigure a Replica Set with Unavailable Members.Per the description in the tutorial, this is a last resort procedure:The force option forces a new configuration onto the member. Use this procedure only to recover from catastrophic interruptions. Do not use force every time you reconfigure. Also, do not use the force option in any automatic scripts and do not use force when there is still a primary.Regards,\nStennie",
"username": "Stennie_X"
},
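For anyone who lands here later, the forced reconfiguration in that tutorial boils down to something like the following, run in the mongo shell on the surviving member (the member index below is an example, inspect your own rs.conf() first):

```js
// On the surviving (stuck-as-SECONDARY) member:
cfg = rs.conf()

// Keep only the reachable member. Index 0 is an example; check cfg.members
// to select the entry for this host.
cfg.members = [ cfg.members[0] ]

// Force the new configuration, since there is no primary to accept a normal reconfig.
rs.reconfig(cfg, { force: true })

// The member should elect itself primary shortly afterwards; verify with:
rs.status()
```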
{
"code": "",
"text": "That’s great, thanks!",
"username": "timw"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Stuck with secondary only in replica set | 2021-03-21T22:43:24.481Z | Stuck with secondary only in replica set | 4,817 |
null | [
"queries"
] | [
{
"code": "\n{\n\"_id\":{\"$oid\":\"604e242c038a6aef974e61a4\"},\n\"author\":{\"$oid\":\"603c08ff76c5cf37b82d2ba3\"},\n\"title\":\"Alarme Web Mobile\",\n\"licence\":\"Licence 2\",\n\"lastUpdate\":{\"$date\":{\"$numberLong\":\"1616264648000\"}},\n\"startedDate\":{\"$date\":{\"$numberLong\":\"1577833200000\"}},\n\"stateProject\":\"En cours\",\n\"tags\":[\"multimedia\",\"education\",\"graphics\",\"games\",\"system\",\"utilities\"],\n\"sumup\":\"Ips\",\n\"description\":\"Icici bonjour vidifszkfghsdlkfhgdsfhgkjldsfhjgdfnn le tan in\",\n\"links\":[\n {\"title\":\"github\",\"value\":\"http://projet.github.com\"}, \n {\"title\":\"github\",\"value\":\"http://projet.github.com\"}, \n {\"title\":\"github\",\"value\":\"http://projet.github.com\"}, \n {\"title\":\"wiki\",\"value\":\"http://monwiki.com\"}\n],\n\"jobs\":[\n {\n \"type\":\"developer\",\n \"requiredNb\":{\"$numberInt\":\"2\"},\n \"skills\":[\"java\",\"javascript\",\"typescript\"],\n \"nameCollabPeople\":[\n {\"name\":\"Philippe Bulot\",\"_collab\":{\"$oid\":\"60578f742edbff32ee8f1c33\"}}\n ]\n },\n {\n \"type\":\"webmestre\",\n \"requiredNb\":{\"$numberInt\":\"2\"},\n \"nameCollabPeople\":[\n {\"name\":\"Leslie Duciel\",\"_collab\":{\"$oid\":\"60578fe82edbff32ee8f1c34\"}},\n {\"name\":\"Philippe Bulot\",\"_collab\":{\"$oid\":\"60578f742edbff32ee8f1c33\"}}\n ]\n }\n ]\n}\n$pull:{'jobs.$.nameCollabPeople':{name:nameCollab}\n",
"text": "Hello ! I have a document as such :I’d like to remove the object from all nameCollabPeople that match a filter, here a name. Like so :But it does not work…\nI found out how remove on object in one job document matching the type propertie and the name of the subdocument. But not all matching the name…Could someone help me ?",
"username": "Preney_Valere"
},
{
"code": "",
"text": "Hi Preney\nTo update all the elements of an array use $[] instead of $Let us know if it works\nGreets",
"username": "Imad_Bouteraa"
},
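To make that concrete, a sketch of the update using the all-positional operator (the collection name and the nameCollab variable are placeholders from the question):

```js
// Pulls every collaborator whose name matches from the nameCollabPeople array of *all* jobs.
db.projects.updateOne(
  { _id: ObjectId("604e242c038a6aef974e61a4") },        // the example document above
  { $pull: { "jobs.$[].nameCollabPeople": { name: nameCollab } } }
)
```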
{
"code": "",
"text": "Hi !\nYes I finaly used $ instead and it worked in deed, after numerous of trials !!!\nI can go to bed without overthinking ! ^^But Thank you for your answer anyway !!Greets",
"username": "Preney_Valere"
},
{
"code": "",
"text": "\nHave a good night",
"username": "Imad_Bouteraa"
}
] | Update same elements from differents array into same document | 2021-03-21T21:21:03.554Z | Update same elements from differents array into same document | 1,745 |
null | [
"node-js"
] | [
{
"code": "",
"text": "I want to make hands dirty on Mongodb Node Driver 4.0 . How Can I install that using npm ? and When Mongodb Node Driver 4.0 will transfer from Beta to official and get down into system for testing and production using NPM or YARN?",
"username": "Anish_Gupta1"
},
{
"code": "npm",
"text": "Welcome to the MongoDB Community @Anish_Gupta1!You can install the latest beta version using:npm install mongodb@betaOr any tagged version on npm by specifying the tag:npm install [email protected] Mongodb Node Driver 4.0 will transfer from Beta to official and get down into system for testing and production using NPM or YARN?Per the above, beta releases are already available for testing.Timing for a production release will depend on any critical issues found during beta testing, so it would be helpful if you can test in a development or staging environment and provide any feedback on issues encountered for your use case. There is also work in progress to update the 4.x driver documentation, so this will definitely be more complete before a production / GA release.Regards,\nStennie",
"username": "Stennie_X"
},
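Once installed, a minimal smoke test with the 4.x beta looks much like the 3.6 driver (the connection string below is a placeholder):

```js
const { MongoClient } = require("mongodb");

async function main() {
  // Placeholder Atlas SRV string; any MongoDB 3.6+ deployment works.
  const client = new MongoClient("mongodb+srv://user:<password>@cluster0.example.mongodb.net/test");
  await client.connect();
  try {
    const result = await client.db("test").collection("pings").insertOne({ at: new Date() });
    console.log("inserted", result.insertedId);
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```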
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to install mongodb Node Driver 4.0? | 2021-03-21T17:44:20.961Z | How to install mongodb Node Driver 4.0? | 2,346 |
null | [
"queries"
] | [
{
"code": "",
"text": "according to docs, ObjectId consists of 12 bytes.but what it does not explain is what is that 5 random bytes and does it change within the same server instance?so if that 5 random bytes do not change within the same Mongo server instance, it means that all docs inserted in the same running server instance will have the same 5 random bytesnow as the last 3 bytes are incremental, can we conclude that _id field has the absolute insertion order if we only have one replica set with no sharding throughout the lifetime of database and if the system time does not change mistakenly never?",
"username": "Masoud_Naghizade"
},
{
"code": "_id_id\"_id\"\"_id\"\"_id\"_id_id_id_id",
"text": "Hi @Masoud_Naghizade,By default new ObjectIDs are generated by the client/driver (although they can also be generated on the server if the client does not provide an _id). In most cases the driver generates the ObjectID (for _id) and adds this to the document representation before sending the server request.Example from PyMongo documentation on Inserting a Document:When a document is inserted a special key, \"_id\" , is automatically added if the document doesn’t already contain an \"_id\" key. The value of \"_id\" must be unique across the collection.The “automatically added” mention is referring to the driver adding the _id to your document. This allows the driver to provide the _id for a document without waiting for a round-trip to the server to fetch a generated _id. You can also use that _id to prepare multiple related documents in a transaction.ObjectIDs are generally monotonically increasing because of the leading timestamp prefix, but do not strictly reflect insertion order. The granularity of the timestamp is in seconds, so multiple ObjectIDs generated within the same second do not have predictable ordering. The client can also generate ObjectIDs well before the request is sent to the server, or be subject to clock skew for the timestamp component.ObjectId consists of 12 bytes.but what it does not explain is what is that 5 random bytes and does it change within the same server instance?Random bytes change for each call to generate a new ObjectID. This provides some differentiation for ObjectIDs that may be generated concurrently on different application servers.now as the last 3 bytes are incremental, can we conclude that _id field has the absolute insertion orderNo.if we only have one replica set with no shardingSince ObjectIDs are typically generated on the client, the deployment topology does not factor into the ordering of ObjectIDs. Even if the ObjectIDs are generated on the server, multiple ObjectIDs generated in the same second will not have a predictable ordering.For a use case requiring strict insertion order you could use a capped collection to provide a guarantee that results queried in natural order will reflect insertion order. However, capped collections have many associated restrictions in order to achieve this guarantee. They are FIFO (FIrst-In First-Out) collections with a maximum file size and do not allow direct removal or changing document size in updates.If capped collections aren’t a suitable solution, you will have to find an alternative approach to ensure generation of unique monotonically increasing identifiers that reflect insertion order and suit your use case.For an overview of common approaches and their associated benefits & drawbacks, see: Generating Globally Unique Identifiers for Use with MongoDB.Regards,\nStennie",
"username": "Stennie_X"
},
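Two small shell sketches that illustrate the points above (the collection name and size are examples):

```js
// ObjectId timestamps only have one-second granularity, so two IDs generated in the
// same second carry no ordering guarantee between them:
const a = ObjectId();
const b = ObjectId();
print(a.getTimestamp(), b.getTimestamp());   // frequently identical, to the second

// A capped collection guarantees that natural order matches insertion order
// (subject to the usual capped-collection restrictions):
db.createCollection("audit_log", { capped: true, size: 64 * 1024 * 1024 });
db.audit_log.find().sort({ $natural: 1 });   // documents in insertion order
```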
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | ObjectId Guarantees absolute Insertion Order if just one replica set and no sharding throughout lifetime? | 2021-03-03T13:31:45.616Z | ObjectId Guarantees absolute Insertion Order if just one replica set and no sharding throughout lifetime? | 6,070 |
null | [
"monitoring"
] | [
{
"code": "{\"t\":{\"$date\":\"2020-10-09T01:41:38.323+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.964+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.965+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.969+05:30\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.970+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":14072,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"Gideon\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.971+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.971+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.0\",\"gitVersion\":\"563487e100c4215e2dce98d0af2a6a5a2d67c5cf\",\"modules\":[\"enterprise\"],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.971+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 18363)\"}}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.971+05:30\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.974+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"C:/data/db/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:38.975+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3520M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:39.109+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1602187899:108931][14072:140715032993360], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 4 through 5\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:39.224+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1602187899:223627][14072:140715032993360], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 
5\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:39.361+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1602187899:360260][14072:140715032993360], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 4/5248 to 5/256\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:39.597+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1602187899:596630][14072:140715032993360], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 4 through 5\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:39.748+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1602187899:748670][14072:140715032993360], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 5\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:39.828+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1602187899:828511][14072:140715032993360], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.003+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1028}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.004+05:30\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.012+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.080+05:30\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.081+05:30\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.092+05:30\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.379+05:30\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.383+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2020-10-09T01:41:40.383+05:30\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n",
"text": "Hi, I am new to mongodb and I have installed and run mongod command but instead of regular output I am getting some other output. Please help me out.I am running this on windows CMD.",
"username": "smiraldr"
},
{
"code": "",
"text": "How did you start your mongod?What command was used\nPlease check last line.It is waiting for connections\nOpen another cmd prompt window and connect to your mongod",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Well I rechecked the Env variables but turns out my datapath didn’t exist so I had to manually create the data and db folder in my C Drive.It works now! but I just want it’s format in a better way like it is shown in the other tutorials and not in this json format as it is a bit difficult to read .",
"username": "smiraldr"
},
{
"code": "",
"text": "Capture857×248 174 KB\nHow do I get this output instead of the Json form that I’ve got in the above format(Refer Question Log)",
"username": "smiraldr"
},
{
"code": "mongodmongos",
"text": "Starting in MongoDB 4.4, mongod / mongos instances output all log messages in structured JSON format.",
"username": "chris"
},
{
"code": "mongojq",
"text": "Hi @smiraldr,As @chris noted, this is the expected format (also known as “structured logging”) for server logs in MongoDB 4.4+. While the JSON format may appear confronting at first glance, it is designed to be used with standard JSON tools and libraries. This actually represents a significant improvement from the previous log format, especially in terms of filtering and diagnostics. I’ve personally invested an unhealthy amount of time reverse engineering MongoDB log formats and variations .If you want to view startup warnings in a more readable format, you can:Connect to your MongoDB 4.4+ deployment with a mongo 4.4+ shell or applicationPipe your server log to a JSON tool like jq. The Log Messages page that Chris mentioned includes some helpful log parsing examples.Regards,\nStennie",
"username": "Stennie_X"
},
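As a quick illustration of the first option above, here is a minimal sketch (assuming a default local deployment and a 4.4+ shell) that reads startup warnings back in a readable form with the getLog command:
db.adminCommand( { getLog: "startupWarnings" } )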
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongod produces json output instead of usual one | 2020-10-08T20:57:23.769Z | Mongod produces json output instead of usual one | 3,913 |
null | [
"schema-validation"
] | [
{
"code": "",
"text": "Hello,Is there any usage of the description field in schema validation?\nFor example, is there any way that Mongo prints the description field of properties that failed validation?\nIf not, I’m not sure of the purpose of this property. Maybe I’m not using it correctly but setting it as “must be a string and is required” like it’s shown in the MongoDB manual is redundant.Does someone have a good usage of this property?",
"username": "Hugh_Chocart"
},
{
"code": "description",
"text": "Welcome to the MongoDB Community @Hugh_Chocart!The description field is part of the JSON Schema specification. This isn’t used by the current production MongoDB server release (4.4), but may still be useful for admin tools and JSON Schema Validators called from your application code.Data validation should ideally happen as early as possible in the user experience (front-end, then application server) to provide responsive feedback to the end user. Validation by the database server should be the last gate before data is inserted or modified.MongoDB’s JSON Schema implementation includes some minor Extensions and Omissions, but a collection’s JSON Schema validator can be used as the basis for earlier validation with a more informative client library.Server-side support for more descriptive JSON Schema validation errors (SERVER-20547: Expose the reason an operation fails document validation) has been merged into the unstable/development branch of MongoDB (4.9.0) and will be included in the next major release (MongoDB 5.0) that is planned for later this year.With MongoDB 5.0, releases will also shift to a shorter release cycle with quarterly releases and annual Long Term Support (LTS) releases: Accelerating Delivery with a New Quarterly Release Cycle, Starting with MongoDB 5.0.Regards,\nStennie",
"username": "Stennie_X"
},
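For context, a minimal sketch of a collection validator whose properties carry description fields, in the style of the manual example mentioned above; the collection and field names here are illustrative assumptions:
db.createCollection( "users", {
  validator: { $jsonSchema: {
    bsonType: "object",
    required: [ "name" ],
    properties: {
      // description is metadata today; MongoDB 5.0+ surfaces richer validation errors
      name: { bsonType: "string", description: "must be a string and is required" }
    }
  } }
} )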
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Purpose of the schema validation "description" field | 2021-02-26T06:30:37.455Z | Purpose of the schema validation “description” field | 2,970 |
null | [
"node-js",
"connecting"
] | [
{
"code": "PLEASE HELP L L :frowning: few days i try to solve this issue but no success resolved",
"text": "PLEASE HELP L L :frowning: few days i try to solve this issue but no success resolved\ni try to connect my node js app Back-end to mongodbatlasError: querySrv ECONNREFUSED _mongodb._tcp.xxxcluster.v1ulb.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (dns.js:203:19) {\nerrno: undefined,\ncode: ‘ECONNREFUSED’,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.xxxcluster.v1ulb.mongodb.net’\n}\nC:\\Users\\xxx\\touch7\\BackEnd\\node_modules\\mongodb\\lib\\utils.js:691\nthrow error;",
"username": "A.T_Rayan"
},
{
"code": "mongodb://<username>:<password>....",
"text": "Hi @A.T_Rayan,Error: querySrv ECONNREFUSED _mongodb._tcp.xxxcluster.v1ulb.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (dns.js:203:19) {\nerrno: undefined,\ncode: ‘ECONNREFUSED’,The error indicates a possible SRV lookup failure.Could you try using the connection string from the connection modal that specifies all 3 hostnames instead of the SRV record? To get this, please head to the Atlas UI within the Clusters section and follow the below steps:Replace the original connection string you used with the version 2.2.12 or later node.js connection string copied from the above steps and then restart your application.If it returns a different error, please send that error here.In addition to the above, I would recommend also checking out the Atlas Troubleshoot Connection Issues documentation.Note: although the above workaround may allow you to connect, it may be better to resolve any DNS issues into why the SRV record lookup is failing.Best Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I love you !! Seriously !! You saved me and made me the whole week\nThank you very much and even though I tried your solution before and it did not work properly maybe\nbecause the function of creating a collection had something in it that disrupted the query processAGAIN!!! THANK U A LOT",
"username": "A.T_Rayan"
},
{
"code": "",
"text": "Thanks for your kind words & confirming that the issue is resolved.Hope you continue to enjoy using MongoDB.Best Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | querySrv ECONNREFUSED | 2021-03-21T05:01:51.372Z | querySrv ECONNREFUSED | 29,701 |
null | [
"crud",
"atlas-functions"
] | [
{
"code": "postsToApprovepostsToApprovelet postToDeleteQuery = {\n \"_id\": data._id,\n \"postsToApprove._id\": postsToApprove[i]._id\n };\n \n usersCollection.findOneAndDelete(postToDeleteQuery);\n",
"text": "I’m using a Realm Function to update another document, but once it’s been updated, I want to delete the old one.Right now I have in my user document an array called postsToApproveOnce it’s been dealt with (approve or disapprove), it updates another document with a trigger.Afterward, I don’t want that subdocument in postsToApprove anymore.I tried to do:This works, except it deleted my user object instead of the subdocument.How do I findOneAndDelete on a subdocument?Thanks.–Kurt",
"username": "Kurt_Libby1"
},
{
"code": "usersCollection.findOneAndUpdate({\"postsToApprove._id\": postsToApprove[i]._id}{$pull : {\"postsToApprove\": {_id :postsToApprove[i]._id}}} )\n",
"text": "Hi @Kurt_Libby1,I believe what you are looking for is a findOneAndUpdate with a $pull inside the Update clause:As you need to remove array items.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | FindOneAndDelete for subdocument | 2021-03-20T17:31:26.592Z | FindOneAndDelete for subdocument | 2,665 |
null | [
"atlas"
] | [
{
"code": "Mongo error: Error: querySrv ENOTFOUND _mongodb._tcp.cluster0-shard-00-01-xxxxx.mongodb.net\n[0] at QueryReqWrap.onresolve [as oncomplete] (dns.js:206:19) {\n[0] errno: 'ENOTFOUND',\n[0] code: 'ENOTFOUND',\n[0] syscall: 'querySrv',\n[0] hostname: '_mongodb._tcp.cluster0-shard-00-01-xahr1.mongodb.net'\n[0] }\n\n",
"text": "Hi!I have been working with Mongo Atlas and connecting to it fine from my local MacBook pro laptop (using the node.js mongodb library).However, when I try to connect to the database from my remote Ubuntu machine I am never able to connect. Here is the error message:Anyone else having this issue where they are unable to connect to MongoDB from an Ubuntu server? My connection string looks like this:mongodb+srv://AdminUser:[email protected]/test?retryWrites=true&w=majorityThanks!",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "Hi @Jim_Lynch have you verified that the IP address of your Ubuntu machine has been whitelisted in Atlas?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "I am surprised that your SRV connection string has the following format:mongodb+srv://AdminUser:[email protected]/test?retryWrites=true&w=majorityIn SRV strings the shard information is usually not present. I would try withmongodb+srv://AdminUser:[email protected] following should work at as DNS has the following information for your cluster",
"username": "steevej"
},
{
"code": "",
"text": "Thanks @steevej. I had found a random comment on stack overflow that trying to connect to a shard instead of the primary might work, but sadly it didn’t.Any other ideas for something I can try? ",
"username": "Jim_Lynch"
},
{
"code": "",
"text": "Thanks @Doug_Duncan, but I had already added the Ubuntu machine’s IP to the whitelist.",
"username": "Jim_Lynch"
},
{
"code": "mongodb+srv://AdminUser:[email protected] ",
"text": "@Jim_Lynch Did you try as he suggested?\nmongodb+srv://AdminUser:[email protected] Is there a difference in driver version between your host and the ubuntu host ?Copy the connection string from the clusters connect button on cloud.mongodb.com . If you need the older style then select an older version of the driver for the correct string.",
"username": "chris"
},
{
"code": "",
"text": "You can always connect with a shard URI in your case it would be",
"username": "steevej"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Getting Error, "queryServ ENOTFOUND" on Ubuntu | 2020-05-01T06:52:03.789Z | Getting Error, “queryServ ENOTFOUND” on Ubuntu | 52,413 |
null | [
"cxx"
] | [
{
"code": "",
"text": "Are there any alterntaive drivers to mongocxx , for windows os based c++ applications .",
"username": "Tharindu_Balasooriya"
},
{
"code": "mongocxx",
"text": "Hi @Tharindu_Balasooriya,Are you looking for an alternative driver because of installation issues, or do you have other requirements?I appreciate you have been having some difficulties getting your environment set up, but we’ll need some more details to help you in discussions like The C compiler identification is unknown ? MongoDB c - windows 10.mongocxx is the officially supported cross-platform C++ driver and I’m not aware of any actively developed alternatives. An important aspect of official drivers is that they implement MongoDB specifications for consistent behaviour. If you are looking for a different API, there are often abstractions like Object-Document Mappers (ODMs) that build on the official driver.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": " LINK Pass 1: command \"C:\\PROGRA~2\\MICROS~1\\2019\\COMMUN~1\\VC\\Tools\\MSVC\\1428~1.299\\bin\\Hostx86\\x86\\link.exe /nologo @CMakeFiles\\untitled5.dir\\objects1.rsp /out:untitled5.exe /implib:untitled5.lib /pdb:C:\\Users\\Tharindu\\CLionProjects\\untitled5\\cmake-build-debug\\untitled5.pdb /version:0.0 /machine:X86 /debug /INCREMENTAL /subsystem:console kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib /MANIFEST /MANIFESTFILE:CMakeFiles\\untitled5.dir/intermediate.manifest CMakeFiles\\untitled5.dir/manifest.res\" failed (exit code 1120) with the following output:\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall bsoncxx::v_noabi::string::view_or_value::view_or_value(char const *)\" (__imp_??0view_or_value@string@v_noabi@bsoncxx@@QAE@PBD@Z) referenced in function _main\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::options::client::client(void)\" (__imp_??0client@options@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::options::client::~client(void)\" (__imp_??1client@options@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::uri::uri(class bsoncxx::v_noabi::string::view_or_value)\" (__imp_??0uri@v_noabi@mongocxx@@QAE@Vview_or_value@string@1bsoncxx@@@Z) referenced in function _main\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::uri::~uri(void)\" (__imp_??1uri@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::client::client(class mongocxx::v_noabi::uri const &,class mongocxx::v_noabi::options::client const &)\" (__imp_??0client@v_noabi@mongocxx@@QAE@ABVuri@12@ABV0options@12@@Z) referenced in function _main\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::client::~client(void)\" (__imp_??1client@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::instance::instance(void)\" (__imp_??0instance@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\n main.cpp.obj : error LNK2019: unresolved external symbol \"__declspec(dllimport) public: __thiscall mongocxx::v_noabi::instance::~instance(void)\" (__imp_??1instance@v_noabi@mongocxx@@QAE@XZ) referenced in function _main\n untitled5.exe : fatal error LNK1120: 9 unresolved externals\n NMAKE : fatal error U1077: '\"C:\\Program Files\\JetBrains\\CLion 2020.3.3\\bin\\cmake\\win\\bin\\cmake.exe\"' : return code '0xffffffff'\n Stop.\n NMAKE : fatal error U1077: '\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.28.29910\\bin\\HostX86\\x86\\nmake.exe\"' : return code '0x2'\n Stop.\n NMAKE : fatal error U1077: '\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.28.29910\\bin\\HostX86\\x86\\nmake.exe\"' : return code '0x2'\n Stop.\n NMAKE : fatal error U1077: '\"C:\\Program Files (x86)\\Microsoft Visual 
Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.28.29910\\bin\\HostX86\\x86\\nmake.exe\"' : return code '0x2'\n Stop.\n",
"text": "Yes, I’m struggling with install mongocxx on windows. I have some errors that I previously had, but still I am getting so many errors.\nHere is the error that I’m struggling nowCan you help me on this ?",
"username": "Tharindu_Balasooriya"
},
{
"code": "get-started-cxxMONGO_URI",
"text": "Can you help me on this ?HI @Tharindu_Balasooriya,Including this discussion, you have currently have four active topics which all seem related to your install issues.It will be easier for someone to help if you can keep discussion for the same problem focused in a single topic. If you’ve solved an issue and moved on to a new problem and new topic, it would also be helpful to reply to the original topic with your solution (or choose one of the existing posts as a solution as a signal further help is not required).I believe the problem you’ve quoted here is the same as Mongo cxx driver issue - undefined reference to `__imp__ZN7bsoncxx7v_, so it would be best not to fork that discussion. I would also keep this topic focused on your question of alternative C++ drivers for Windows.Your issues appear to be with setting up your build environment, but if you want a comparison point (and are open to using Docker), you may find @wan’s Get Started project helpful: \"Get Started\" with MongoDB Atlas - #2 by wan.The get-started-cxx repo sets up a working Linux development environment with some sample C++ code. You can set the MONGO_URI to any valid MongoDB connection string URI (Atlas, self-hosted, or locally hosted). However, since your end goal appears to be building Windows C++ apps using MinGW64 this may not be a productive direction to explore.Regards,\nStennie",
"username": "Stennie_X"
}
] | Alternatives to mongocxx? | 2021-03-19T11:39:25.709Z | Alternatives to mongocxx? | 4,180 |
null | [
"python"
] | [
{
"code": "command_2 = {\"$project\": {\"_id\": 0,\n \"station_id\": 1,\n \"station_status\": 1,\n \"hour\": {\"$hour\": \"$time\"},\n \"available_bikes\": 1\n }\n }\n my_query.append(command_2) \n",
"text": "Hey Everyone,In my collection there is field called as time and the format of this field is as follows :\ntime: “2019/05/02 00:00:00”.I need to extract hours from this field and group them based on hour. I tried many things including ISODate(), new Date() but getting different errors. Below is my project block followed by one of the errors. Any help in this regard will be appreciated.pymongo.errors.OperationFailure: can’t convert from BSON type string to Date, full error: {‘operationTime’: Timestamp(1616064173, 4), ‘ok’: 0.0, ‘errmsg’: “can’t convert from BSON type string to Date”, ‘code’: 16006, ‘codeName’: ‘Location16006’, ‘$clusterTime’: {‘clusterTime’: Timestamp(1616064173, 4), ‘signature’: {‘hash’: b’\\xe0\\xdd\\x7f\\xd6&\\x1d\\r\\xb5\\xdfv\\x11\\xc3\\x88\\xfc\\xb1L\\x93\\x7f\\xb8\\xe1’, ‘keyId’: 6929970731853807619}}}",
"username": "Gunjan_Gautam"
},
{
"code": "time$hourtimeDate$dateFromStringdb.foo.drop();\ndb.foo.insert({ time: \"2019/05/02 13:00:00\" })\ndb.foo.aggregate([\n{ $project: { \n hour: { \n $hour: { \n $dateFromString: { dateString: \"$time\", format: \"%Y/%m/%d %H:%M:%S\" } \n } \n }\n}}])\n",
"text": "Hi @Gunjan_Gautam,As you’re storing the time field as a string you can’t use the $hour operator directly. You will first need to convert the time to a Date type, which can be done using the $dateFromString operator as such:Try this out in the mongo shell then adapt to Python as needed.",
"username": "alexbevi"
},
{
"code": "",
"text": "Hey @alexbeviIts now running but not able to extract hour. Please have a look at the output. I want hours like 19, 21 etc.available_bikes : 0\nhour : 2019-07-11 05:30:00\nstation_id : 522\nstation_status : In ServiceProcess finished with exit code 0",
"username": "Gunjan_Gautam"
},
{
"code": "// change COLLECTION to the name of your collection\ndb.COLLECTION.aggregate([\n{ $limit: 1 },\n{ $addFields: { \n hour: { \n $hour: { \n $dateFromString: { dateString: \"$time\", format: \"%Y/%m/%d %H:%M:%S\" } \n } \n }\n}}])\n",
"text": "@Gunjan_Gautam try it first using the mongo shell. I’m not sure what the output is you’re sharing but if you can adapt the following and share the results it might help:",
"username": "alexbevi"
},
{
"code": "{\n\t\"$project\" : {\n\t\t\"hour\" : {\n\t\t\t\"$substr\" : [\n\t\t\t\t\"$time\",\n\t\t\t\t11,\n\t\t\t\t2\n\t\t\t]\n\t\t}\n\t}\n}\n",
"text": "Since your time field is a string. You can simply use $substr as in:to get the hour part. See https://docs.mongodb.com/manual/reference/operator/aggregation/substr.",
"username": "steevej"
},
{
"code": "time2019/05/02 00:00:00-/",
"text": "2019-07-11 05:30:00If your time field only contains data in this format then what @steevej wrote is correct. My example expected the date string to be in a 2019/05/02 00:00:00 format, however your latest example uses a different delimiter (- compared to /).",
"username": "alexbevi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't convert from BSON type string to Date | 2021-03-18T10:53:22.646Z | Can’t convert from BSON type string to Date | 31,417 |
null | [
"realm-web",
"typescript"
] | [
{
"code": "",
"text": "Hi. I’m navigating my way through the Realm docs and tutorials to validate it for my next project. Something that’s really not clear to me is whether I can use Realm for a web app? Having read the documentation, my understanding is that Realm cannot be used for Web apps. At the same time, I can see tutorials how to use Realm with Angular or React, which is very confusing.Your official documentation for the Web SDK states:The MongoDB Realm Web SDK enables server-side client applications to access data stored in MongoDB Atlas and interact with MongoDB Realm services like Functions and authentication. The Web SDK supports both JavaScript and TypeScript applications.Source: https://docs.mongodb.com/realm/web/The same page states:The Web SDK does not support JavaScript or TypeScript applications written for the Node.js environment or React Native mobile apps.To sum up: the Web SDK can be used for server-side applications provided they are not Node.js applications. What other server environment can run JS apart from Node.js? Maybe Deno, but I doubt this SDK is for Deno?While I see example how to use Realm with browser Web apps, I am not sure whether this is the intended use of the SDK or simply an unsupported hack?",
"username": "Lukasz_Ciastko"
},
{
"code": "",
"text": "Hi,Thanks for pointing that out. That should say “browser applications” rather than “server-side client applications”. You can use the Web SDK for web apps (webpack, React, etc.). The key point is that you cannot currently use Realm Database or Sync with the Web SDK (i.e. in the browser). You can use MongoDB Realm functionality such as Remote MongoDB Access, GraphQL, calling Realm Functions, and user authentication.Hope this helps.",
"username": "Chris_Bush"
},
{
"code": "",
"text": "Thank you for this explanation.Are there any plans to add support for Realm DB on the Web (e.g. using IndexedDB)? Some competitors already have such solutions, e.g. Firestore or PouchDB.",
"username": "Lukasz_Ciastko"
},
{
"code": "",
"text": "Are there any plans to add support for Realm DB on the Web (e.g. using IndexedDB)?Here’s a couple of links where you might want to share your enthusiasm:",
"username": "kraenhansen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does Realm support browser Web apps? | 2021-03-20T11:40:42.086Z | Does Realm support browser Web apps? | 5,631 |
null | [
"atlas-functions"
] | [
{
"code": " await new Promise((res, rej) => {\n setTimeout(() => {\n rej(JSON.stringify({message: \"An error\", code: 42}))\n }, 300)\n }).catch(err => {\n throw err\n })\n.jsawait user.functions.errorTest().catch(errResponse => {\n console.log(errResponse)\n})\nerrResponse.error{\"code\":42,\"message\":\"An error\"}throw JSON.stringify({message: \"an error message\", code: 42})\"{\\\"code\\\":42,\\\"message\\\":\\\"An error\\\"}\"throw {message: \"an error message\", code: 42}",
"text": "Hi there,I’m having quite the time figuring out how to include error codes in the responses from my Realm functions. A simple string message with name “Error” doesn’t meet my requirements. Unfortunately, the realm function will return a different format based on how it is used.When I catch an error from a promise, and rethrow:and catch it in the client (.js in this case)the errResponse.error property is properly stringified:{\"code\":42,\"message\":\"An error\"}I believe this is the correct behaviour for the realm functions, and is what I expect to happen.From another realm function, when I throw a stringified javascript object:throw JSON.stringify({message: \"an error message\", code: 42})the response is double-stringified JSON:\"{\\\"code\\\":42,\\\"message\\\":\\\"An error\\\"}\"And the only way I can think of working with it is to parse it twice. My client is in swift so this is introducing instability and try catch expressions - which doesn’t make sense.The reason I can’t return like so:throw {message: \"an error message\", code: 42}is because it will be stringified by the system as EJSON which I cannot work with. Also, why should I have to? I think this is a bug.",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "Hello can someone please approve this and make it visible? Really could use some help here.",
"username": "Eric_Lightfoot"
},
{
"code": "gotAProblemexports = function(){\n throw JSON.stringify({message: \"an error message\", code: 42});\n};\nstruct MyError: Decodable {\n let message: String\n let code: Int\n \n}\n\nprivate func errorProne() {\n app.currentUser!.functions.gotAProblem([]) { (_, error) in\n if let error = error {\n let myError = try! JSONDecoder().decode(MyError.self, from: error.localizedDescription.data(using: .utf8)!)\n print(\"Error: \\(myError.message)\")\n }\n }\n}",
"text": "Hopefully, someone can suggest something more elegant, but this works…Realm function (gotAProblem):Swift code using Realm-Cocoa:",
"username": "Andrew_Morgan"
},
{
"code": "asyncexports = async (assetId) => {\n const db = context.services.get(\"mongodb-atlas\").db(\"realm-sync\")\n const userId = context.user.id\n\n let assetIdentity = await AssetIdentities.findOne({ _id: BSON.ObjectId(assetId) })\n\n return new Promise(async (resolve, reject) => {\n ///\n /// Guard conditions\n ///\n const assetListing = await AssetListings.findOne({\n _id: BSON.ObjectId(assetId)\n })\n\n if (!assetListing) {\n return reject(JSON.stringify({\n message: `The asset with id ${assetId} does not exist`,\n code: 404\n }))\n }\n\n if (assetListing.owner_id !== userId) {\n return reject(JSON.stringify({\n message: \"An asset can only be destroyed by the asset's owner\",\n code: 405\n }))\n }\n\n ///\n /// Post-guard execution\n ///\n\n /// Delete Asset Identity\n await AssetIdentities.deleteOne({\n _id: BSON.ObjectId(assetId),\n _partition: `/${userId}/assets`\n })\n\n /// Delete Asset Listing\n await db.collection(\"AssetListing\").deleteOne({\n _id: BSON.ObjectId(assetId)\n })\n\n /// Made it\n resolve()\n }).catch(err => {\n /// Send correctly formatted error object\n /// inside catch block to avoid the client receiving\n /// \"{\\\"code\\\":42,\\\"message\\\":\\\"An error\\\"}\"\n throw err\n })\n}",
"text": "Your code works, however my realm function is declared async and that appears to cause my issue. Although I don’t understand the language well enough to explain why that makes sense, it does in the context of my current solution, which is to return a Promise for every async function I need to write, that throw the JSON.stringified error object from inside their final catch block. I leave an example of what works in case it helps anyone save the time I had to put in experimenting with this.Please add documentation about this to you realm functions page!",
"username": "Eric_Lightfoot"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Cloud Function returning double-stringified JSON string | 2021-03-17T18:49:45.852Z | MongoDB Cloud Function returning double-stringified JSON string | 3,768 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hi all,\nI have a collection such as this Mongo playgroundI need to have a seach with this three condition:How I can go to operate in parameters.options?The parameters can be more than one",
"username": "Francesco_Di_Battist"
},
{
"code": "",
"text": "What did you try so far?Which issues did you get with what you try?What do you mean by the following?other key matchSince parameters and options are both arrays, start by looking at https://docs.mongodb.com/manual/reference/operator/query-array/",
"username": "steevej"
},
{
"code": "",
"text": "Hi @steevej,\nif I try to search for parameters.options.id(“60548851b5a553690d392019”) the key2:true, I expect the first collection (that ave all key of this option at false) second collection (that have exactly key2 of this options at true). The third collection are not in output because, for this parameters options, has key3 at true.Now I try with $filter but doesn’t works. I view array query link",
"username": "Francesco_Di_Battist"
},
{
"code": "$allElementsTrue",
"text": "Hi all,\nI have try to $unwind the parameters and next project to have options array to root,\nsuch us this: Mongo playgroundNext, I have try to check if all key are false and in this case return true to variable with $group.\nThe Idea is to have 3 check that return true and, if all check are true, I can return the id of collection.Do you can help me to solve it?\nI have found the operator $allElementsTrue, but I don’t find the opposite false.\nThe idea is this, for step 2, this: Mongo playground",
"username": "Francesco_Di_Battist"
},
{
"code": "[\n\t{\n\t\t\"$unwind\" : \"$parameters\"\n\t},\n\t{\n\t\t\"$match\" : {\n\t\t\t\"parameters.id\" : ObjectId(\"60548851b5a553690d392019\")\n\t\t}\n\t},\n\t{\n\t\t\"$match\" : {\n\t\t\t\"parameters.options\" : {\n\t\t\t\t\"$elemMatch\" : {\n\t\t\t\t\t\"value\" : \"key3\",\n\t\t\t\t\t\"sel\" : false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\t{\n\t\t\"$match\" : {\n\t\t\t\"parameters.options\" : {\n\t\t\t\t\"$elemMatch\" : {\n\t\t\t\t\t\"value\" : \"key2\",\n\t\t\t\t\t\"sel\" : false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\t{\n\t\t\"$match\" : {\n\t\t\t\"parameters.options\" : {\n\t\t\t\t\"$elemMatch\" : {\n\t\t\t\t\t\"value\" : \"key1\",\n\t\t\t\t\t\"sel\" : false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n]\n",
"text": "Starting with $unwind was a good start. Using a divide and conquer approach, I came up with:You probably could do the parameters.options part inside a single $match. However, I prefer using multiple $match stages when things get complicated. It is easier to understand and modify.",
"username": "steevej"
},
{
"code": "{\n\t\t\"$match\" : {\n\t\t\t\"parameters.id\" : ObjectId(\"60548851b5a553690d392019\")\n\t\t}\n\t}\n",
"text": "You may optimize the pipeline by adding anotheras the first stage. Especially if you have an index. This way you only apply the $unwind to documents that have the correct parameter. The second match will then take care of removing the extra parameters.",
"username": "steevej"
}
] | Multiple match in a sub-sub documents | 2021-03-19T13:26:07.099Z | Multiple match in a sub-sub documents | 3,717 |
null | [] | [
{
"code": "students",
"text": "I have a collection named students containing the following documents\n{_id: 1, marks:[1,2,3,4,5]}\n{_id: 2, marks:[4,2,6,4,5]}\n{_id: 3, marks:[9,6,3,4,5]}\n{_id: 4, marks:[6,2,2,4,7]}\n{_id: 5, marks:[7,2,2,4,7]}I want to get those documents where the value of first element of the marks array is between 3 and 8 . I am using following $expr based aggregate query. But it is returning no document. Please note I want to use $expr only.db.students.aggregate([{\n$match: {\n$expr: {\n“$and”: [{\n“$gte”: [\"$marks.0\", 3]\n}, {\n“$lte”: [\"$marks.0\", 8]\n}]\n}\n}\n}])Please help. Thanks in advanced.",
"username": "Sudarshan_Roy"
},
{
"code": "{ $arrayElemAt: [ <array>, <idx> ] }$first",
"text": "Welcome to MongoDB community,\nin aggregations you should use { $arrayElemAt: [ <array>, <idx> ] } or in your case the alias $first instead of the dot notation\nfor more information checkIf you still struggle with this pipeline, let us know\nGreets",
"username": "Imad_Bouteraa"
},
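A minimal sketch of the pipeline using $arrayElemAt inside $expr, written against the sample students documents above:
db.students.aggregate( [ { $match: { $expr: { $and: [
  { $gte: [ { $arrayElemAt: [ "$marks", 0 ] }, 3 ] },
  { $lte: [ { $arrayElemAt: [ "$marks", 0 ] }, 8 ] }
] } } } ] )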
{
"code": "> c = db.Sudarshan_Roy\ntest.Sudarshan_Roy\n> c.find()\n{ \"_id\" : 1, \"marks\" : [ 1, 2, 3, 4, 5 ] }\n{ \"_id\" : 2, \"marks\" : [ 4, 2, 6, 4, 5 ] }\n{ \"_id\" : 3, \"marks\" : [ 9, 6, 3, 4, 5 ] }\n{ \"_id\" : 4, \"marks\" : [ 6, 2, 2, 4, 7 ] }\n{ \"_id\" : 5, \"marks\" : [ 7, 2, 2, 4, 7 ] }\n> query = { \"marks.0\" : { \"$gte\" : 3, \"$lte\" : 8 } }\n{ \"marks.0\" : { \"$gte\" : 3, \"$lte\" : 8 } }\n> c.find( query )\n{ \"_id\" : 2, \"marks\" : [ 4, 2, 6, 4, 5 ] }\n{ \"_id\" : 4, \"marks\" : [ 6, 2, 2, 4, 7 ] }\n{ \"_id\" : 5, \"marks\" : [ 7, 2, 2, 4, 7 ] }\n",
"text": "You were almost there. On the right track but a little bit too far. It is simpler:",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for your prompt reply. As I already mentioned in my post that I want to use this with $expr in $match aggregation pipeline, instead of find. Thanks again.",
"username": "Sudarshan_Roy"
},
{
"code": "",
"text": "Thanks for your prompt reply. I have applied with both $first and $arrayElemAt. Both are working absolutely fine. But why it is not working with dot notation. Thanks again.",
"username": "Sudarshan_Roy"
},
{
"code": "",
"text": "Honestly, I don’t know. may be @steevej can help on that",
"username": "Imad_Bouteraa"
},
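As a small demonstration of the difference (a sketch against the same students collection): in aggregation expressions a numeric path component is not treated as an array index, so the dot-notation path resolves to an empty array rather than the first element:
db.students.aggregate( [
  { $limit: 1 },
  { $project: { viaDotNotation: "$marks.0", viaArrayElemAt: { $arrayElemAt: [ "$marks", 0 ] } } }
] )
// viaDotNotation comes back as [] while viaArrayElemAt returns the first mark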
{
"code": "$match> query = { \"marks.0\" : { \"$gte\" : 3, \"$lte\" : 8 } }\n{ \"marks.0\" : { \"$gte\" : 3, \"$lte\" : 8 } }\n> match = { $match : query }\n{ \"$match\" : { \"marks.0\" : { \"$gte\" : 3, \"$lte\" : 8 } } }\n> c.aggregate( [ match ] )\n{ \"_id\" : 2, \"marks\" : [ 4, 2, 6, 4, 5 ] }\n{ \"_id\" : 4, \"marks\" : [ 6, 2, 2, 4, 7 ] }\n{ \"_id\" : 5, \"marks\" : [ 7, 2, 2, 4, 7 ] }\n",
"text": "Sorry, I missed the aggregation part. But the same query is also working inside an aggregation $match stage.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to apply $gte, $lte on a specific array element with $expr in $match stage of aggregation pipeline | 2021-03-19T21:00:49.060Z | How to apply $gte, $lte on a specific array element with $expr in $match stage of aggregation pipeline | 23,294 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "I have a model in collection which has userId of type ObjectId but i received in string format when i get changes initially or update. My app crashes if i set type objectId as when i receive userId in string.’Expected object of type object id for property ‘userId’ on object of type ‘device_reading’, but received: 6050956d57f60d3225dfccb7’",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "Hi @Muhammad_Awais could you please share your schema and Object definitions as well as how your creating the objects/documents",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "I resolved the issue. I was seding wrong wrong data in realm queryrealm.objects(SomeObject.self).filter(“userId = ‘string value of id’”)corrected with :\nrealm.objects(SomeObject.self).filter(“userId = %@”, ObjectId(“string value of id”))",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Type objectId but receives string in change listener swift crash | 2021-03-16T12:24:17.796Z | Type objectId but receives string in change listener swift crash | 2,763 |
null | [
"crud"
] | [
{
"code": "",
"text": "Hello Guyz,I am trying to update a field that was initially captured as a string instead of a date type.\nCurrently, the query that insert into the collection has been modified , so that future insert to that field is date. data typeHowever, I am trying to update the previously inserted data, before query modification that still has the string data typeHere is what I tried ,but giving errordb.collection.update_one({“clusterTime”:{\"$type\":“string”}},{\"$set\":{“clusterTime”:datetime.datetime.strptime(’$clusterTime’,’%y-%m-%d’).date()}})I really would appreciate contributions.Thank you.",
"username": "Samson_Eromonsei"
},
{
"code": "db.collection.update(\n {\n \"clusterTime\":{\n \"$type\":\"string\"\n }\n },\n [\n {\n \"$set\":{\n \"clusterTime\":{\n \"$dateFromString\":{\n \"dateString\":\"$clusterTime\",\n \"format\":\"%Y-%m-%d\"\n }\n }\n }\n }\n ]\n)",
"text": "Hi,\nTry thisGood luck",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Updating a pre-existing fields datatype(string=>date) in a collection | 2021-03-19T22:06:19.694Z | Updating a pre-existing fields datatype(string=>date) in a collection | 9,795 |
null | [
"aggregation"
] | [
{
"code": "db.inventory.aggregate([\n {\n $group: {\n _id: {\n \"group_id\": \"$product\"\n },\n \"quantity\": {\n $sum: \"$quantity\"\n }\n }\n },\n {\n \"$match\": {\n \"quantity\": {\n $gt: 0\n }\n }\n },\n {\n $lookup: {\n from: \"inventory\",\n localField: \"_id.group_id\",\n foreignField: \"$product\",\n as: \"records\"\n }\n }\n])\n",
"text": "New to Mongodb Atlas I am trying understand the 100mb limit MongoDb for aggregate pipelines. Trying to find out what this actually means? Does it apply to the size of the database collection we are performing the aggregate on?Bit of background we have the following query on an inventory ledger where we are taking a data set, running a group sum to find out which products are still in-stock (ie amount sum is greater than 0). Based on the result where the product is in stock we return those records by running a lookup in the original collection. The query is provided below.Assume the inventory objects contains about 10 sub fields/record pair. And assume for 1000records/1mb.QUESTION My question is if the inventory collection size reaches 100mb as a JSON object array does this mean the call with fail? ie the max we can run the aggregate on is 100mb x 1000 records = 100,000 records?BTW we are on a server that does not support writing to disk hence the question.",
"username": "Ka_Tech"
},
{
"code": "$group$group{ \"_id\": { \"group_id\": \"XXXXX\"}, \"quantity\": N }\nproduct_id$sort",
"text": "I wish our docs were more clear on this - there is an explanation of this in one of my aggregation talks but it can be hard to dig out. Note that the docs do not say the pipeline is limited to 100MBs, it’s a single stage that’s limited.Think of the pipeline as a stream of documents - each stage takes in documents and then it outputs documents.Some stages cannot output any documents until they have accepted all incoming documents. The most obvious example is $group. (1)That first $group you have needs to see all the documents coming in from the collection. But it only needs to output N documents, where N is the number of distinct product values in the collection. The size of original documents does not matter. The only thing that matters is the size of the “grouped” documents (the ones coming out of this stage), and each of them is just:That’s maybe 53-60 bytes, depending on how long your product field is. So to exceed 100MBs you would need approximately 1.7 million distinct products. More if you remove the sub-object from the _id. I hope you can see that your aggregation will not fail due to 100MBs limit here. All the other stages in the pipeline are “streaming” stages - meaning the batches of documents stream through them and don’t have to be accumulated all at once.Asya(1) the other example is $sort when it’s not supported by an index, and hence causes an in-memory sort.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDb - meaning of pipeline aggregation 100mb limit? | 2021-03-18T09:37:36.085Z | MongoDb - meaning of pipeline aggregation 100mb limit? | 5,208 |
null | [
"queries"
] | [
{
"code": "const userFound = await Contact.find({\n $expr: {\n $and: [\n { $eq: [{ $dayOfMonth: '$dob' }, { $dayOfMonth: new Date() }] },\n { $eq: [{ $month: '$dob' }, { $month: new Date() }] },\n ],\n },\n});\n",
"text": "I am comparing person date of birth which is stored as UTC format by client side with current date at 00:00 on time zone asia/kolkata, but it gives me the previous previous date data. I mean at 00:00:00 at 17-mar-2021 i fired the query, It should check that is there any entries whose date and month in date of birth matches to 17 of March? But it compares with 16 of march, I don’t figure out why this is happening.QueryHere, dob stores the data of birth of person in UTC format (e.g. ‘2021-03-17T00:00:00.000+00:00’) .\nAlso I have tried it in morning its work fine, But at 00:00:00 it won’t. Why it is happening and what its solution?",
"username": "Ujjwal_Kushwaha"
},
{
"code": "timezone",
"text": "The issue is that it won’t give you the response you want until you wait till it’s after midnight in UTC. When you are querying it right after midnight in Kolkata, it’s actually still the previous day in UTC and it’s correct that those records are not being returned.There are a few ways you can correct this - easiest would be to query against stored date converted into your local timezone.Starting with 3.6 both $dayOfMonth and $month accept an optional timezone argument. This should allow you to compare the two dates in the same “context”.Asya",
"username": "Asya_Kamsky"
}
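A minimal sketch of the timezone-aware comparison in the shell; the collection name is an assumption, and the timezone matches the one discussed in the question:
db.contacts.find( { $expr: { $and: [
  { $eq: [ { $dayOfMonth: { date: "$dob", timezone: "Asia/Kolkata" } },
           { $dayOfMonth: { date: new Date(), timezone: "Asia/Kolkata" } } ] },
  { $eq: [ { $month: { date: "$dob", timezone: "Asia/Kolkata" } },
           { $month: { date: new Date(), timezone: "Asia/Kolkata" } } ] }
] } } )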
] | I want to fire a query to get person whose date of birth is matched to current date | 2021-03-16T19:23:07.862Z | I want to fire a query to get person whose date of birth is matched to current date | 3,266 |
null | [
"security",
"monitoring"
] | [
{
"code": "2020-09-17T19:49:47.801+0200 I NETWORK [conn38] received client metadata from [134.122.38.54:58938](http://134.122.38.54:58938/) conn38: { driver: { name: \"PyMongo\", version: \"3.11.0\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"5.4.0-47-generic\" }, platform: \"CPython 3.8.2.final.0\" }\n2020-09-17T19:49:48.029+0200 I NETWORK [listener] connection accepted from [134.122.38.54:58940](http://134.122.38.54:58940/) #39 (4 connections now open)\n2020-09-17T19:49:48.029+0200 I NETWORK [conn39] received client metadata from [134.122.38.54:58940](http://134.122.38.54:58940/) conn39: { driver: { name: \"PyMongo\", version: \"3.11.0\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"5.4.0-47-generic\" }, platform: \"CPython 3.8.2.final.0\" }\n2020-09-17T19:49:48.258+0200 I COMMAND [conn39] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - starting\n2020-09-17T19:49:48.258+0200 I COMMAND [conn39] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - dropping 0 collections\n2020-09-17T19:49:48.266+0200 I COMMAND [conn39] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - finished\n2020-09-17T19:49:48.381+0200 I COMMAND [conn39] dropDatabase config - starting\n2020-09-17T19:49:48.381+0200 I COMMAND [conn39] dropDatabase config - dropping 0 collections\n2020-09-17T19:49:48.383+0200 I COMMAND [conn39] dropDatabase config - finished\n2020-09-17T19:49:48.498+0200 I COMMAND [conn39] dropDatabase local - starting\n2020-09-17T19:49:48.498+0200 I COMMAND [conn39] dropDatabase local - dropping 0 collections\n2020-09-17T19:49:48.504+0200 I COMMAND [conn39] dropDatabase local - finished\n2020-09-17T19:49:48.619+0200 I STORAGE [conn39] createCollection: READ_ME_TO_RECOVER_YOUR_DATA.README with generated UUID: 238fafb1-d410-41e9-8072-8a7939b5a64f\n2020-09-17T19:49:48.741+0200 I NETWORK [conn39] end connection [134.122.38.54:58940](http://134.122.38.54:58940/) (3 connections now open)\n2020-09-17T19:49:48.741+0200 I NETWORK [conn38] end connection [134.122.38.54:58938](http://134.122.38.54:58938/) (2 connections now open)\n\n2020-09-18T12:13:04.144+0200 I NETWORK [conn46] received client metadata from [134.122.38.54:38038](http://134.122.38.54:38038/) conn46: { driver: { name: \"PyMongo\", version: \"3.11.0\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"5.4.0-47-generic\" }, platform: \"CPython 3.8.2.final.0\" }\n2020-09-18T12:13:04.372+0200 I NETWORK [listener] connection accepted from [134.122.38.54:38040](http://134.122.38.54:38040/) #47 (4 connections now open)\n2020-09-18T12:13:04.373+0200 I NETWORK [conn47] received client metadata from [134.122.38.54:38040](http://134.122.38.54:38040/) conn47: { driver: { name: \"PyMongo\", version: \"3.11.0\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"5.4.0-47-generic\" }, platform: \"CPython 3.8.2.final.0\" }\n2020-09-18T12:13:04.602+0200 I COMMAND [conn47] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - starting\n2020-09-18T12:13:04.605+0200 I COMMAND [conn47] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - dropping 0 collections\n2020-09-18T12:13:04.607+0200 I COMMAND [conn47] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - finished\n2020-09-18T12:13:04.722+0200 I COMMAND [conn47] dropDatabase config - starting\n2020-09-18T12:13:04.722+0200 I COMMAND [conn47] dropDatabase config - dropping 0 collections\n2020-09-18T12:13:04.724+0200 I COMMAND [conn47] dropDatabase config - finished\n2020-09-18T12:13:04.839+0200 I COMMAND [conn47] dropDatabase local - 
starting\n2020-09-18T12:13:04.839+0200 I COMMAND [conn47] dropDatabase local - dropping 0 collections\n2020-09-18T12:13:04.841+0200 I COMMAND [conn47] dropDatabase local - finished\n2020-09-18T12:13:04.959+0200 I STORAGE [conn47] createCollection: READ_ME_TO_RECOVER_YOUR_DATA.README with generated UUID: 2b8efc1f-b9e0-49a0-9135-f819205d39f0\n2020-09-18T12:13:05.080+0200 I NETWORK [conn47] end connection [134.122.38.54:38040](http://134.122.38.54:38040/) (3 connections now open)\n2020-09-18T12:13:05.080+0200 I NETWORK [conn46] end connection [134.122.38.54:38038](http://134.122.38.54:38038/) (2 connections now open)\n\n2020-09-18T17:53:05.203+0200 I NETWORK [conn52] received client metadata from [134.122.38.54:32922](http://134.122.38.54:32922/) conn52: { driver: { name: \"PyMongo\", version: \"3.11.0\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"5.4.0-47-generic\" }, platform: \"CPython 3.8.2.final.0\" }\n2020-09-18T17:53:05.431+0200 I NETWORK [listener] connection accepted from [134.122.38.54:32924](http://134.122.38.54:32924/) #53 (4 connections now open)\n2020-09-18T17:53:05.434+0200 I NETWORK [conn53] received client metadata from [134.122.38.54:32924](http://134.122.38.54:32924/) conn53: { driver: { name: \"PyMongo\", version: \"3.11.0\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"x86_64\", version: \"5.4.0-47-generic\" }, platform: \"CPython 3.8.2.final.0\" }\n2020-09-18T17:53:05.664+0200 I COMMAND [conn53] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - starting\n2020-09-18T17:53:05.664+0200 I COMMAND [conn53] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - dropping 0 collections\n2020-09-18T17:53:05.667+0200 I COMMAND [conn53] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - finished\n2020-09-18T17:53:05.781+0200 I COMMAND [conn53] dropDatabase config - starting\n2020-09-18T17:53:05.782+0200 I COMMAND [conn53] dropDatabase config - dropping 0 collections\n2020-09-18T17:53:05.793+0200 I COMMAND [conn53] dropDatabase config - finished\n2020-09-18T17:53:05.908+0200 I COMMAND [conn53] dropDatabase local - starting\n2020-09-18T17:53:05.911+0200 I COMMAND [conn53] dropDatabase local - dropping 0 collections\n2020-09-18T17:53:05.912+0200 I COMMAND [conn53] dropDatabase local - finished\n2020-09-18T17:53:06.027+0200 I STORAGE [conn53] createCollection: READ_ME_TO_RECOVER_YOUR_DATA.README with generated UUID: 49ee438e-fc1b-4ab9-926e-760844110871\n2020-09-18T17:53:06.148+0200 I NETWORK [conn52] end connection [134.122.38.54:32922](http://134.122.38.54:32922/) (3 connections now open)\n2020-09-18T17:53:06.148+0200 I NETWORK [conn53] end connection [134.122.38.54:32924](http://134.122.38.54:32924/) (2 connections now open)\n\n2020-09-22T07:22:40.520+0200 I NETWORK [conn121] received client metadata from [199.58.80.194:48952](http://199.58.80.194:48952/) conn121: { application: { name: \"MongoDB Shell\" }, driver: { name: \"MongoDB Internal Client\", version: \"4.4.1\" }, os: { type: \"Linux\", name: \"Ubuntu\", architecture: \"x86_64\", version: \"20.04\" } }\n2020-09-22T07:22:42.876+0200 I COMMAND [conn121] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - starting\n2020-09-22T07:22:42.876+0200 I COMMAND [conn121] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - dropping 0 collections\n2020-09-22T07:22:42.890+0200 I COMMAND [conn121] dropDatabase READ_ME_TO_RECOVER_YOUR_DATA - finished\n2020-09-22T07:22:43.423+0200 I COMMAND [conn121] dropDatabase config - starting\n2020-09-22T07:22:43.423+0200 I COMMAND [conn121] dropDatabase config - dropping 0 
collections\n2020-09-22T07:22:43.426+0200 I COMMAND [conn121] dropDatabase config - finished\n2020-09-22T07:22:43.764+0200 I COMMAND [conn121] dropDatabase local - starting\n2020-09-22T07:22:43.765+0200 I COMMAND [conn121] dropDatabase local - dropping 0 collections\n2020-09-22T07:22:43.766+0200 I COMMAND [conn121] dropDatabase local - finished\n2020-09-22T07:22:44.347+0200 I NETWORK [conn121] end connection [199.58.80.194:48952](http://199.58.80.194:48952/) (2 connections now open)\n",
"text": "Dear Community,I have installed Mongodb on my VPS but for some time it crashed.Here are the errors:What can I do to fix this?Best regards",
"username": "Rene_Schneider"
},
{
"code": "",
"text": "createCollection: READ_ME_TO_RECOVER_YOUR_DATA.README with generated UUID: 49ee438e-fc1b-4ab9-926e-760844110871Is this open on the internet with no authentication enabled? Because this looks suspiciously like a ransom thing to do.",
"username": "chris"
},
{
"code": "",
"text": "Hi @Rene_Schneider,As @chris correctly noted, the issue is someone connecting to an unsecured deployment (there are no authentication messages in the logs), dropping the databases, and then creating a collection with a ransom note. This cycle happens several times in the log excerpt, so there are likely multiple bad actors who have discovered your unsecured deployment.For a similar recent discussion and advice, please see Database deleted auto - #7 by Stennie.Regards,\nStennie",
"username": "Stennie_X"
}
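For reference, a minimal sketch of the first hardening step implied above: create an administrative user from a local shell, then enable access control; the user name and role here are illustrative assumptions:
use admin
db.createUser( { user: "admin", pwd: passwordPrompt(), roles: [ { role: "root", db: "admin" } ] } )
// then set security.authorization: enabled in mongod.conf, restart mongod, and keep bindIp restricted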
] | MongoDB is crashing | 2020-09-22T19:24:18.852Z | MongoDB is crashing | 5,039 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 3.6.23 is out and is ready for production deployment. This release contains only fixes since 3.6.22, and is a recommended upgrade for all 3.6 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 3.6.23 is released | 2021-03-19T17:10:55.574Z | MongoDB 3.6.23 is released | 3,122 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.2.13 is out and is ready for production deployment. This release contains only fixes since 4.2.12, and is a recommended upgrade for all 4.2 users.\nFixed in this release:",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.2.13 is released | 2021-03-19T17:06:42.540Z | MongoDB 4.2.13 is released | 2,734 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "hello first time i used mongodb realms and 3rd party services. i can send json but i need to upload image can i do that. thank you for your helps",
"username": "Akifcan_Kara"
},
{
"code": "",
"text": "Hi @Akifcan_Kara,Welcome to MongoDB community.Uploading an image will require you to use a storage service like aws s3 for example:Although the following article is branded for old stitch service the concept should be similar:MongoDB Stitch allows for the easy updating and tracking of files which can then be automatically updated to a cloud storage provider like AWS S3. See the code and follow along on the live coding on Twitch.Setup aws s3 service and put the binary image through a realm sdk.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "hello pavel thank you for your response.\ni used a storage service and save the image path to mongodb.",
"username": "Akifcan_Kara"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can i send form data | 2021-03-18T21:52:43.350Z | Can i send form data | 2,849 |
null | [
"java"
] | [
{
"code": "3.12.84.2.1uuidRepresentation \"simple test case\" in {\n val uuid = UUID.randomUUID()\n\n val dbo1 = new BasicDBObject()\n dbo1.put(\"subfield\", uuid)\n\n val dbo2 = new BasicDBObject()\n dbo2.put(\"subfield\", uuid)\n\n dbo1.toJson\n dbo1.equals(dbo2)\n }\nThe uuidRepresentation has not been specified, so the UUID cannot be encoded.",
"text": "I am trying to upgrade us from 3.12.8 to 4.2.1.\nI know that the uuidRepresentation now needs to be set to java legacy if that was used before and it works fine with my main client(s). But unfortunately we have lots of code in tests and elsewhere, that is failing now, due to equality not working anymore and default printing to json not working anymore. A short example case:fails with The uuidRepresentation has not been specified, so the UUID cannot be encoded.What is the way to go to make all these tests work again? Can I not pass in the uuid natively anymore?",
"username": "Yannick_Gladow"
},
{
"code": "equalityBasicDBObjectUUIDUuidRepresentationBasicDBObjectUuidRepresentation",
"text": "It looks to me like equality on BasicDBObject is broken as soon as we put a UUID as a value? There is no way to override the UuidRepresentation. Still it should somehow work, as it is a valid BasicDBObject that I can send to mongo db with a client specified to use a UuidRepresentation.",
"username": "Yannick_Gladow"
},
{
"code": "BasicDBObjectval uuid = new BsonBinary(UUID.randomUUID(), UuidRepresentation.JAVA_LEGACY)\nval dbo1 = new BasicDBObject()\ndbo1.put(\"subfield\", uuid)\nBasicDBObjectuuidRepresentationuuidRepresentation",
"text": "Is the only solution to now everywhere I put uuids into BasicDBObject to use?Can that lead to any compatibility problems if I do that with BasicDBObjects that are written to MongoDB with a client which has java legacy as uuidRepresentation specified? (in 3.x.x version we had no uuidRepresentation specified and did just pass the uuid in)",
"username": "Yannick_Gladow"
},
{
"code": " // store a reference to this registry somewhere that's easy to access\n var registry =\n CodecRegistries.fromRegistries(\n // Put a UuidCodec with representation specified in the first provider, which \n // will override the default one with no representation specified\n CodecRegistries.fromCodecs(new UuidCodec(UuidRepresentation.JAVA_LEGACY)),\n MongoClientSettings.getDefaultCodecRegistry());\n\n var document = new BasicDBObject(\"_id\", UUID.randomUUID());\n\n // Get an Encoder<BasicDBObject> from the registry and pass it to the #toJson method\n String json = document.toJson(registry.get(BasicDBObject.class));\n\n System.out.println(json);\n{\"_id\": {\"$binary\": {\"base64\": \"oUyLWtHF29MJqv5y89Emvg==\", \"subType\": \"03\"}}}\n",
"text": "Hi @Yannick_Gladow,I agree that’s a problem, and I don’t have a solution for you that doesn’t involve a change in your code. But here is an example of what you can do:This prints:Let me know if that works for you and if not I can open up an issue in Jira to consider other options.Regards,\nJeff Yemin",
"username": "Jeffrey_Yemin"
},
{
"code": ".equalsBasicDBObjectSet",
"text": "Yes that is a workaround, but actually the worse thing for me is, that .equals does not work anymore. I can not work with these BasicDBObjects anymore. E.g. when using a Set data structure, which internally relies on object equality it throws the UUID exception. This seems to me a big problem, what do you think?",
"username": "Yannick_Gladow"
},
{
"code": "",
"text": "Yeah, I see what you’re saying, and sorry for my misunderstanding of the main source of pain. I opened https://jira.mongodb.org/browse/JAVA-4051 to see what we can do to address the issue.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "The bug in BasicDBObject equals/hashCode has been fixed and can be tested using the 4.2.3-SNAPSHOT release. If you have a chance please give it a go. The 4.2.3 release is planned for April 6.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issue with UUID after upgrading to 4.2.1 for java client | 2021-03-10T15:27:29.620Z | Issue with UUID after upgrading to 4.2.1 for java client | 5,135 |
null | [
"swift"
] | [
{
"code": "struct AutoView: View {\n @ObservedRealmObject var autoMaker: AutoMaker\n @State private var text = \"\"\n \n var body: some View {\n VStack {\n HStack {\n TextField(\"Car Name\", text: $text)\n Button(action: addItem) {\n Text(\"Add\")\n }\n }\n List {\n ForEach(autoMaker.cars) {car in\n Text(car.name)\n }\n .onDelete(perform: $autoMaker.cars.remove)\n }\n }\n }\n \n func addItem() {\n autoMaker.numberOfAdditions += 1 // Exception caused by this line\n $autoMaker.cars.append(Car(name: text)) // This line works fine when the line above is commented out\n text = \"\"\n }\n}\nclass AutoMaker: Object, ObjectKeyIdentifiable {\n @objc dynamic var name = \"\"\n @objc dynamic var numberOfAdditions: Int = 0\n var cars = RealmSwift.List<Car>()\n \n convenience init (name: String) {\n self.init()\n self.name = name\n }\n}\n",
"text": "In the SwiftUI View code below, I am able to successfully add a Car object to the cars list of my AutoMaker class using the append function. That works fine. But when I try to increment another property on the AutoMaker class, I get an error “attempting to modify object outside of a write transaction .” I understand the error and I know that modifications need to be made within a write transaction, but I’m not sure what that code would look like.I think there are two ways to handle this. One would be to add the write transaction in my addItem function in the view. And the other would be to add a method on my AutoMaker class, like incrementCount(), to do the incrementing, and then just call that function when I need to update the field. The write transaction would then be in that function on the AutoMaker class.So I’m hoping someone can show me what the code would look like in both circumstances. What would that write transaction look like if I put it here in the addItem function of the view, and what might it look like as a function on the AutoMaker class itself?Here is the view code:And here is the AutoMaker class object:(Note, this numberOfAdditions property is not the count of the number of cars. I know I can get that from a computed property. It has another purpose.)Thanks.\n–Tom",
"username": "TomF"
},
{
"code": "do {\n try Realm().write() {\n guard let thawedAutoMaker = autoMaker.thaw() else {\n print(\"Unable to thaw autoMaker\")\n return\n }\n thawedAutoMaker.numberOfAdditions += 1\n }\n} catch {\n print(\"Failed to save autoMaker: \\(error.localizedDescription)\")\n}",
"text": "Hi @TomF,I’ll give you my temporary solution to try, but there should be a cleaner solution available from Realm-Cocoa sometime (hopefully soon):",
"username": "Andrew_Morgan"
},
{
"code": "autoMaker.numberOfAdditions += 1$autoMaker.numberOfAdditions.wrappedValue += 1",
"text": "autoMaker.numberOfAdditions += 1Can you try:\n$autoMaker.numberOfAdditions.wrappedValue += 1\n?",
"username": "Jason_Flax"
},
{
"code": "func addItem() {\n $autoMaker.numberOfAdditions.wrappedValue += 1\n $autoMaker.cars.append(Car(name: text))\n text = \"\"\n }\nfunc incrementNumber() {\n do {\n try Realm().write() {\n guard let thawedAutoMaker = self.thaw() else {\n print(\"Unable to thaw autoMaker\")\n return\n }\n thawedAutoMaker.numberOfAdditions += 1\n }\n } catch {\n print(\"Failed to save autoMaker: \\(error.localizedDescription)\")\n }\n } \nfunc addItem() { \n autoMaker.incrementNumber()\n $autoMaker.cars.append(Car(name: text))\n text = \"\"\n }\n",
"text": "Both solutions worked. Thank Andrew and Jason. It makes sense to put the wrappedValue version in the view code to keep the view simpler:Or if I want to add the incrementing code as a method on the AutoMaker class, I would use Andrew’s code:And then call that in my view:–Tom",
"username": "TomF"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to update an @ObservedRealmObject in a SwiftUI view | 2021-03-19T13:48:17.506Z | How to update an @ObservedRealmObject in a SwiftUI view | 5,805 |
[
"python",
"atlas-search"
] | [
{
"code": "search_similar.html{% extends \"todo/base.html\" %}\n\n{% block content %}\n\n <div class=\"recommendations\">\n <!-- <div class=\"login-page\"> -->\n <div class=\"form\">\n <form action=\"{% url 'search_results' %}\" method=\"get\">\n <input name=\"q\" type=\"text\" placeholder=\"Perfume name...\">\n <input id=\"perfumename\" type =\"submit\" value=\"Find Similar Perfumes\" autocomplete=\"nope\"/>\n </form>\n <form class=\"login-form\" action = \"similar_results/\" method=\"POST\">\n <input type=\"text\" placeholder=\"perfume name\"/> <!-- https://www.youtube.com/watch?v=TRODv6UTXpM&ab_channel=ProgrammingKnowledge -->\n {% csrf_token %}\n <input id=\"perfumename2\" type =\"submit\" value=\"Find Similar Perfumes\" autocomplete=\"nope\"/>\n </form>\n </div>\n <script>\n $(document).ready(function() {\n $(\"#perfumename\").autocomplete({\n source: async function(request, response){\n let data=await fetch(`http://localhost:8000/search_similar?q={request.term}`)\n .then(results => results.json())\n .then(results => results.map(result => {\n return { label: result.name, value: result.name, id:result._id };\n }\n response(data);\n },\n minLength=2,\n select: function(event, ui){\n console.log(ui.item);\n }\n\n })\n }),\n $(document).ready(function() {\n $(\"#perfumename2\").autocomplete({\n source: async function(request, response){\n let data=await fetch(`http://localhost:8000/search_similar?q={request.term}`)\n .then(results => results.json())\n .then(results => results.map(result => {\n return { label: result.name, value: result.name, id:result._id };\n }\n response(data);\n },\n minLength=2,\n select: function(event, ui){\n console.log(ui.item);\n }\n\n })\n })\n </script>\n </div>\n\n{% endblock %}\nautocomplete=\"nope\"",
"text": "Hi MongoDeBians!I want to build an autocomplete form in a Django webapp. I have already been able to do the search bar where I query my MongoDB database but how can I add an autocomplete? I tried to adapt an official tutorial that does it with Javascript:search_similar.html:Even when I have autocomplete=\"nope\" the first search bar still shows up the default autocomplete by chrome and doesn’t show up the one I built in MongoDB. The second doesn’t show up anything, even when I link the id to the script.Do you know how I can handle that?",
"username": "Mike_MPC"
},
{
"code": "minLength=2minLength: 2sourceselect",
"text": "Hi @Mike_MPC, my apologies for the late reply here, but I hope this message finds you well.I re-ordered my response because one was long.I noticed a couple things about your snippet and thought a few bits of information could be helpful.The second detail I would point out is the minLength=2 line. I think that should be minLength: 2. It is a key value pair in the object that is the parameter/argument of the autocomplete function, along with source and select.First, here is a repo containing a hacked together Flask app I forked to port its search functionality to Atlas Search. It’s not exactly Django, which adds some more constructs, but it’s similar so it could be helpful. There’s even an example index definition for autocomplete in the README. Be sure you have set that autocomplete index up in Atlas, otherwise it work work.",
"username": "Marcus"
},
{
"code": "class SearchResultsView(ListView):\n model = Perfume\n template_name = 'todo/search_similar_results.html'\n\n def get_queryset(self):\n query = self.request.GET.get('q')\n print(\"JE SUIS PASSE PAR LA\")\n # object_list = list(collection.find({\"q0.Results.0.Name\": {\"$regex\": str(query), \"$options\": \"i\"}}))\n object_list = list(collection.aggregate(\n {\n \"$search\":{\n \"autocomplete\":{\n \"query\": query,\n \"path\": \"q0.Results.0.Name\",\n \"fuzzy\":{\n \"maxEdits\":2,\n }\n }\n }\n # \"q0.Results.0.Name\": {\"$regex\": query, \"$options\": \"i\"}\n }\n ))\n print('type(object_list): ', type(object_list))\n return object_list\n",
"text": "Many thanks for your answer Marcus, no worries of being late. I was away for a few day as well.Many thanks for your resource and for pointing out the error with minLength, I corrected it but it didn’t launch the autocomplete. So the error must be somewhere before. I’m reviewing my code and I will dive into your GitHub as long as I am sure the error isn’t from my side.I am now reviewing my views.py which contains the query to MongoDB, I don’t know if it is realted to the autocomplete triggering. I guess not, but in such a case here it is:The findone query worked, without triggering the autocomplete, the aggregate doesn’t so I’m reviewing it at the moment. If it doesn’t work I will redisign that with your example.Thanks again!",
"username": "Mike_MPC"
},
{
"code": "0Results$merge",
"text": "Ahh, herein lies the issue @Mike_MPC. Unfortunately, autocomplete does not support positional paths (note: position 0 in the Results array). For this query to work, you would need to unwind the results array if possible. You have a few options in triggers and materialized views via $merge.Let me know if that fixes the issue. You could test by simply changing the path to another field that is already a non-enumerable field.",
"username": "Marcus"
},
{
"code": "object_list = list(collection.aggregate([\n {\n \"$search\":{\n \"autocomplete\":{\n \"query\": str(query),\n \"path\": \"name\",\n \"fuzzy\":{\n \"maxEdits\":2,\n }\n }\n }\n # \"q0.Results.0.Name\": {\"$regex\": query, \"$options\": \"i\"}\n }] \nlist(collection.find({\"q0.Results.0.Name\": {\"$regex\": str(query), \"$options\": \"i\"}}))\n",
"text": "Thanks for that info @Marcus , I just created a non-enumarable field for all documents to deal with this positional path issue. I don’t have any error anymore but it doesn’t trigger the autocomplete either:So you’re saying that I need to unwind the results array if possible. However contrarily to my first attempt which was commented up above:The result I get there is an empty array.",
"username": "Mike_MPC"
},
{
"code": "class SearchResultsView(ListView):\n model = Perfume\n template_name = 'todo/search_similar_results.html'\n\n def get_queryset(self): # new\n query = self.request.GET.get('q')\n print(\"JE SUIS PASSE PAR LA\")\n # object_list = list(collection.find({\"q0.Results.0.Name\": {\"$regex\": str(query), \"$options\": \"i\"}}))\n object_list = list(collection.aggregate([\n {\n '$search': {\n 'index': 'default',\n 'compound': {\n 'must': {\n 'text': {\n 'query': str(query),\n 'path': 'name',\n 'fuzzy': {\n 'maxEdits': 2\n }\n }\n }\n }\n }\n }\n ]\n ))\n print(object_list)\n return object_list\n",
"text": "Okay, now the search query works:This get the results in function of what I have in the get query. However, it doesn’t trigger the autocomplete.",
"username": "Mike_MPC"
},
{
"code": "autocompletetext",
"text": "for autocomplete, you need to use the autocomplete operator instead of the text operator.",
"username": "Marcus"
},
{
"code": "getrestaurants()suggest_restaurants()",
"text": "Okay thanks @Marcus , that make sense. I guess I need to create a new function. But I’ve looked at your code and I didn’t understand how it was triggered.\nIndeed, if getrestaurants() is triggered by the find-restaurants submit action button. I don’t know about suggest_restaurants().",
"username": "Mike_MPC"
}
] | Building an Autocomplete Form Element with Atlas Search and Django | 2021-03-03T17:43:41.976Z | Building an Autocomplete Form Element with Atlas Search and Django | 3,631 |
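For reference, the autocomplete query Marcus describes (an Atlas Search index with an autocomplete mapping on a non-nested field such as name) might look like the following PyMongo sketch. The index, field, database, and collection names are assumptions for illustration.

```python
# Sketch of an Atlas Search autocomplete query with PyMongo.
# Assumes a search index named "default" with an autocomplete mapping on "name";
# the connection string and collection names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
collection = client["perfumes"]["catalog"]

def suggest(term: str, limit: int = 10):
    pipeline = [
        {"$search": {
            "index": "default",
            "autocomplete": {"query": term, "path": "name", "fuzzy": {"maxEdits": 2}},
        }},
        {"$limit": limit},                     # cap the number of suggestions returned
        {"$project": {"_id": 0, "name": 1}},   # return just what the suggestion list needs
    ]
    return list(collection.aggregate(pipeline))
```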
|
null | [] | [
{
"code": "",
"text": "I setup an Azure functions and Mongo DB cluster to POC moving from on premise to Mongo DB Atlas platform.I am able to access the mongo DB Atlas from developer box but not when I deploy to production Azure functions.I saw several entry around the same error I got:\n“A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 …”I am using the latest Mongo DB Driver 2.12.0 and here is the format of connection string:\nmongodb+srv://username:[email protected]/Integra?connect=replicaSet&retryWrites=true&w=majority&connectTimeoutMS=360000&socketTimeoutMs=360000None of this seems to fix the issue connecting from Azure serverless functions. Any idea how to properly connect to Mongo Db Atlas from Azure serverless functions?",
"username": "Toga_Hartadinata"
},
{
"code": "",
"text": "Hi Toga,Is it possible that the dynamic Azure function IP is not on the Atlas IP Access List? If this is the issue, please note that Azure does not support having Functions connect via peering over a private IP: they do offer this workaround here Frequently asked questions about networking in Azure Functions | Microsoft LearnAnother option is to add to your IP Access List all IPs but if you do so it’s important to ensure you’re governing your database credentials appropriately.-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hi Andrew, I guess i prefer to let all IPs in however can you recommend a guidance to increase security for the database credentials.",
"username": "Toga_Hartadinata"
},
{
"code": "",
"text": "If you use Vault then our secrets engine integration could be a great fit: MongoDB Atlas - Secrets Engines | Vault | HashiCorp Developer",
"username": "Andrew_Davidson"
}
] | Azure functions unable to connect with Mongo Db Atlas M10 | 2021-03-15T04:36:51.977Z | Azure functions unable to connect with Mongo Db Atlas M10 | 3,755 |
null | [
"queries",
"graphql"
] | [
{
"code": "query {\n authorizationGroup(query: { name: \"name\" }) {\n _id\n\t\tdescription\n\t\tname\n \n }\n}\nquery {\n authorizationGroup {\n\t\tname_contains(input: \"admin\") {\n name\n description\n }\n }\n}\nquery {\n authorizationGroup(query: { name_contains: \"name*\" }) {\n _id\n\t\tdescription\n\t\tname\n \n }\n}\n",
"text": "I have a need for a new QueryInput:The problem with the query above is it is a FULL match. I want a partial match. Cool custom resolver:The problem with this what if I want to combine it with limits and sorts and such. I want to do this:Notice that the QueryInput is name_contains and can be combined with all other queryables. I have not seen in the documentation explaining this scenario at all. This seems like it is not possible but maybe I missed something.",
"username": "Jorden_Lowe"
},
{
"code": "",
"text": "Hey Jordan - you’re correct in that querables such as limit and sort are not supported out-of-the-box with custom resolvers due to the extremely flexible nature of the return type. However, that flexibility of custom resolvers also gives you the ability to achieve a similar pattern by including booleans/integers for limit and sort within your query and using those to write your function logic.Better text search with GraphQL might be a potential improvement here, you can add it to our Uservoice feedback, which we use to influence our roadmap for GraphQL.",
"username": "Sumedha_Mehta1"
}
] | GraphQL Custom QueryInput | 2021-03-10T22:18:52.140Z | GraphQL Custom QueryInput | 1,821 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "I have a data setdb.users.insertMany([\n{\"_id\":1, “name”:“abcd” },\n{\"_id\":2, “name”:“abcd”},\n{\"_id\":3, “name”:“abcd” },\n{\"_id\":4, “name”:“abcd”},\n{\"_id\":5, “name”:“abcd” },\n{\"_id\":6, “name”:“abcd”},\n{\"_id\":7, “name”:“abcd” },\n{\"_id\":8, “name”:“abcd”},\n{\"_id\":9, “name”:“abcd”},\n{\"_id\":10, “name”:“abcd” },\n{\"_id\":11, “name”:“abcd”},\n{\"_id\":12, “name”:“abcd”},\n{\"_id\":13, “name”:“abcd”},\n{\"_id\":14, “name”:“abcd”},\n{\"_id\":15, “name”:“abcd”},\n{\"_id\":16, “name”:“abcd”},\n])db.users_hirarchy.insertMany([{\n“_id”: “1101”,\n“_from”: 14,\n“_to”: 15\n},{\n“_id”: “1102”,\n“_from”: 14,\n“_to”: 16\n},{\n“_id”: “1103”,\n“_from”: 15,\n“_to”: 3\n},{\n“_id”: “1104”,\n“_from”: 15,\n“_to”: 5\n},{\n“_id”: “1105”,\n“_from”: 15,\n“_to”: 7\n},{\n“_id”: “1106”,\n“_from”: 3,\n“_to”: 1\n},{\n“_id”: “1107”,\n“_from”: 3,\n“_to”: 2\n},{\n“_id”: “1108”,\n“_from”: 3,\n“_to”: 4\n},{\n“_id”: “1109”,\n“_from”: 3,\n“_to”: 4\n},{\n“_id”: “1110”,\n“_from”: 3,\n“_to”: 4\n},{\n“_id”: “1111”,\n“_from”: 3,\n“_to”: 4\n}])What I want to achieve is something like this{\nnodes:[\n{\n“_id” : 3,\n“name”: “abcd”\n},{\n“_id” : 4,\n“name”: “abcd”\n}\n] ,\nhirarchies : [\n{\n“_id” : 3,\n“Hierachy” : [\n{\n“_id” : “1106”,\n“_from” : 3.0,\n“_to” : 1.0,\n“depth” : NumberLong(0)\n},\n{\n“_id” : “1107”,\n“_from” : 3.0,\n“_to” : 2.0,\n“depth” : NumberLong(0)\n} ] },\n{\n“_id” : 4,\n“Hierachy” : [\n{\n“_id” : “1106”,\n“_from” : 3.0,\n“_to” : 1.0,\n“depth” : NumberLong(0)\n} ] } ]I have written a query but it doesn’t give the required results. how to do it using graphlookup.\nmy tried query isdb.users.aggregate([\n{ graphLookup: {\nfrom: \"users_hirarchy\",\nconnectToField: \"_from\",\nstartWith: \"_id\",\nmaxDepth: 0,\nconnectFromField: “_to”,\ndepthField: “depth”,\nas: “hirarchy”\n}\n}\n,\n{project:{\"Data\":{\n\"id\":\"_id\",\n“name”:\"$name\"},“Hierachy”:\"$hirarchy\"}\n}])How can I achieve my desired output?",
"username": "MWD_Wajih_N_A"
},
{
"code": "",
"text": "So, I achieved this using facet in aggregate asdb.users.aggregate([\n{ graphLookup: {\nfrom: \"users_hirarchy\",\nconnectToField: \"_from\",\nstartWith: \"_id\",\nmaxDepth: 0,\nconnectFromField: “_to”,\ndepthField: “depth”,\nas: “hirarchy”\n}\n}, {\n$facet: {\n“Node”: [\n{\n$project: {\n“hirarchy”: 0\n}\n}],\n“hirarchy”: [\n{\n$project: {\n“hirarchy”: 1,\n“_id”: 1\n}\n}]}}\n])",
"username": "MWD_Wajih_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Graphlookup query | 2021-03-19T06:56:43.364Z | Graphlookup query | 1,629 |
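The accepted $graphLookup plus $facet approach above, written out as a runnable PyMongo sketch for reference; the collection and field names follow the thread, while the connection string and database name are placeholders.

```python
# PyMongo version of the $graphLookup + $facet pipeline discussed in this thread.
# Connection string and database name are placeholders; collection/field names follow the thread.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
users = client["test"]["users"]

pipeline = [
    {"$graphLookup": {
        "from": "users_hirarchy",
        "startWith": "$_id",          # start traversal from each user's _id
        "connectFromField": "_to",
        "connectToField": "_from",
        "maxDepth": 0,
        "depthField": "depth",
        "as": "hierarchy",
    }},
    {"$facet": {
        "nodes": [{"$project": {"hierarchy": 0}}],            # user documents without the edges
        "hierarchies": [{"$project": {"_id": 1, "hierarchy": 1}}],  # edges grouped per user
    }},
]
result = list(users.aggregate(pipeline))
```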
null | [
"monitoring"
] | [
{
"code": "",
"text": "Hi, is there any way to get database reports out from atlas. something like the cluster configuration and database size and number of users etc…",
"username": "Sivaram_Prasad_Chenn"
},
{
"code": "",
"text": "Depending on what you’re looking for, other than what you can see in the UI, the API may be the best bet https://docs.atlas.mongodb.com/reference/api-resources/It’d be great to hear more about what you’re aiming to do",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "My customer wants have keep on eye on the disk and cpu use which he wants these details to be shared in the email, most of the things in the metrics but they are looking out in the form of the report that will be generated and can be shared in email…",
"username": "Sivaram_Prasad_Chenn"
},
{
"code": "",
"text": "I’m a DBA, so how can I better use these API methods to get number of users and other information for my customer. I assume these API methods should be called with programming languages… I might not know like DBA can also use tools or something to run these methods… any help on this is highly appreciated",
"username": "Sivaram_Prasad_Chenn"
},
{
"code": "",
"text": "The API methods can be used for deriving these kinds of details but if you’re not keen to script around the API endpoints your better bet might be to use the UI",
"username": "Andrew_Davidson"
}
] | Atlas database reports | 2021-03-12T14:58:24.604Z | Atlas database reports | 2,634 |
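Scripting around the API endpoints, as suggested above, could look roughly like the following Python sketch. The group ID, host and port, API keys, and measurement names are placeholders; verify the exact paths and fields against the current Atlas Administration API reference before relying on them.

```python
# Hedged sketch: pull process measurements (e.g. CPU) from the Atlas Administration API
# so they can be turned into a scheduled e-mail report. Group ID, host:port, API keys,
# and measurement names are placeholders; check the current API docs for exact values.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
auth = HTTPDigestAuth("<public-key>", "<private-key>")

url = f"{BASE}/groups/<GROUP-ID>/processes/<HOST>:<PORT>/measurements"
params = {"granularity": "PT1H", "period": "P1D", "m": "SYSTEM_NORMALIZED_CPU_USER"}

resp = requests.get(url, auth=auth, params=params)
resp.raise_for_status()
for series in resp.json().get("measurements", []):
    points = [p for p in series.get("dataPoints", []) if p.get("value") is not None]
    if points:
        print(series["name"], points[-1])  # latest non-empty data point per measurement
```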
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "How do you query the Token and TokenID from the confirmation URL sent when registering a user?Currently working in iOS. I am able to send the confirmation email successfully. Just do not know how to query the Token and TokenID from the URL.Thanks!",
"username": "Radis"
},
{
"code": "",
"text": "Hi @Radis,As far as I know the link should point back to.a module in your application which will extract the values from the query parameters.Those are appended to the end of the configured confirmation URL.The way to do it depends on your programming language. For iOS we recommend to review universal linking for your application:App Search Programming Guide: Support Universal LinksAs mentioned in our docs.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks Pavel.I was able to query from the confirmation link sent in email on a webpage and then I ran emailPasswordAuth.confirmUser(token, tokenId); on my client from that webpage.\nThis worked just fine. Thanks!",
"username": "Radis"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to query Token and TokenID in confirmation URL | 2021-03-09T12:29:29.821Z | How to query Token and TokenID in confirmation URL | 2,100 |
null | [
"node-js",
"monitoring"
] | [
{
"code": "const listener = require('mongodb').instrument({\n operationIdGenerator: {\n operationId: 1,\n\n next: function() {\n return this.operationId++;\n }\n },\n\n timestampGenerator: {\n current: function() {\n return new Date().getTime();\n },\n\n duration: function(start, end) {\n return end - start;\n }\n } \n}, function(err, instrumentations) {\n // Instrument the driver \n});\noptionsinstrumentationsinstrumentationsEventEmitterforEach",
"text": "Hello,What is the status of the MongoDB native driver APM functionality on Node.js?\nIn the 3.6 docs, you can find it here, but it no longer appears in the docs for 4.x. Will it be removed?Also, the docs for APM in 3.6 seem very wrong. For instance, there is this snippet of code:But if you check at the driver code here, the options argument isn’t used at all.Also, the docs show that you can iterate over the instrumentations argument … but that isn’t the case because instrumentations is the Instrumentation Class instance which extends an EventEmitter which doesn’t have a forEach.The code in 4.0 branch is the same but written in TS. Is this a work in progress? can we reliably use this? I’m currently using this for monitoring but I’m not quite sure if this APM will continue to exist in next versions. The alternative is overwrite prototype of internal MongoDB classes to get these commands which is something I want to avoid doing (but it’s something that some monitoring services actually do).",
"username": "Eddy_Wilson"
},
{
"code": "const client = new MongoClient(URL, options)\nclient.on('commandStarted', event => {/* event handling... */})\nclient.on('commandSucceeded', event => {/* event handling... */})\nclient.on('commandFailed', event => {/* event handling... */})\n// SDAM events can also be listened to in the same way\n\nawait client.connect()\n// etc..instrument",
"text": "Hello @Eddy_Wilson!\nRest assured the status of APM in the node driver is that the functionality is fully supported and will be going forward. However, it does seem we have some confusing documentation that hasn’t made clear what is legacy and what is best practice. I have filed a ticket for this and we will update the docs as soon as we can.Today all drivers share the same monitoring events for a uniform database experience. In node it would like something like this:In 4.0 the instrument method will likely be removed and it will be deprecated in 3.6. However, the 4.0 version is still an in progress beta so if you are targeting production I would hold off on deploying with the beta version of the driver. The code above will still work in either version of the driver.Hopefully this helps! Let me know if there’s anything still unclear.",
"username": "neal"
},
{
"code": "operationIdGeneratortimestampGeneratorrequestIdrequestIdtimestampGeneratorconst DurationStart = new Map();\non(\"commandStarted\", event => DurationStart.set(event.requestId, process.hrtime.bigint()));\non(\"commandSucceeded\", event => {\n const diff = process.hrtime.bigint() - DurationStart.get(event.requestId)\n // do something with diff\n DurationStart.delete(event.requestId)\n});\non(\"commandFailed\", event => DurationStart.delete(event.requestId));\ntimestampGeneratormsoperationIdGeneratorrequestIdSetmethodsoptionscallback: true, promise: true",
"text": "@neal cool, it makes sense.One thing though, so there isn’t a way to provide these instrument options, right? such as operationIdGenerator and timestampGenerator?Right now, I’m logging requestId for slow queries. However, with ~200 node.js servers, this requestId isn’t unique. Is there a non-hacky way to provide these options to the driver?For instance, for timestampGenerator, this pseudo-code is how I implemented it:While this works, it’s pretty hacky. I need a timestampGenerator because ms duration in the event isn’t accurate enough (seems rounded, e.g: duration 7ms may be 6.65ms) and we actually need metrics in 4 digits duration (e.g: 1ms is 1000)And it’s pretty similar thing for a operationIdGenerator to map a requestId to a, for example, uuid/v4 random id. The method above works, but it’s error prone, if there is an error on the event or I forgot to remove it from the Set, then we have a memory leak.Lastly, the instrumentation points won’t be available either in 4.x. Will it? The class description with the methods and options (e.g: callback: true, promise: true). I haven’t seen it in MongoDB native driver for Node.js codebase.",
"username": "Eddy_Wilson"
},
{
"code": "const client = new MongoClient(url, { monitorCommands: true })requestIdmongodrequestIdms",
"text": "I forgot to note in my first reply that you’ll also need to enable monitoring via the constructor like so:Firstly, maybe attaching a comment to your find operations that is some unique string can help with logging/tracing. The requestId is a monotonically increasing value that is an int32 and part of the underlying communication protocol between a driver and mongod. Combining requestId with some unique process information (maybe, pid + hostname) should give you unique values per nodejs server. Additionally, if you want a detailed view into why a query may be slow, I recommend taking a look at the explain API which can be run from the mongo shell as well as the nodejs driver.As of now it appears we do lack support for the specific use case of assigning a custom id to every operation and writing custom timestamp generation logic with higher precision than ms. However, we would be open to adding support for such a use case if the above still doesn’t achieve the functionality you’re searching for.",
"username": "neal"
},
{
"code": "monitorCommandsexplainfindcommandSucceededrequestIddurationcommandStartedrequestIddurationMaprequestIddurationcommandSucceededcommandFailedrequestIdMapcommandFailed",
"text": "Oh, thanks, so that’s what monitorCommands option does.I do actually have an implementation on top of apm that does explain for all these slow find commands that get logged in commandSucceeded event during development.However, we would be open to adding support for such a use case if the above still doesn’t achieve the functionality you’re searching for.It’d be great to have a built-in way to generate a requestId and duration. The way we do it is by subscribing to all 3 events and keeping each commandStarted event requestId and duration in a Map so we can identify the uuid assigned to requestId or diff the duration in later events commandSucceeded or commandFailed. It works well but seems very hacky and makes the application prone to memory leaks (like forgetting to remove the requestId from the Map on commandFailed)",
"username": "Eddy_Wilson"
}
] | What is the status of APM (In Mongo node.js native driver)? | 2021-03-12T11:21:56.591Z | What is the status of APM (In Mongo node.js native driver)? | 3,710 |
null | [
"indexes"
] | [
{
"code": "",
"text": "I want to understand how MongoDB database phases is when it index the data step by step.Do MongoDB save the BSON serialized data first as files and later deserialize that data when needed and index it as B-tree in memory for search?Can someone please explain step by step?",
"username": "sakdkjkj_jjsdjds"
},
{
"code": "",
"text": "Hi @sakdkjkj_jjsdjds,Associated indexes to the fields that are affected by a write operation will be updated on the fly.The transformation between files and memory structure is managed by wired tiger engine and is getting persistent by a periodic checkpoint to files from cache.For this reason maintain of many indexes may slow writes and require more memory for each write.The tradeoff between indexes for queries and too many indexes is crucial in performance tuningThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_DuchovnyThank you very much for taking the time to explain to me the steps! Is it safe to assume that wired tiger cache all the BSON documents when MongoDB is started to initialize or do MonogoDB cache only document with indexes?and changes made to the cached data effect the BSON documents later on disk to be persistent?",
"username": "sakdkjkj_jjsdjds"
},
{
"code": "",
"text": "Hi @sakdkjkj_jjsdjds,Any object that needs to be altered is loaded into cache, also documents that are not indexed.When you just read data it is also loaded. However if data is not accessed it is not loaded to memoryThe index is a different file so it has its own pages to be loaded.The data is written in a different format than pure bson. But eventually in memory they are desirialized to bsons.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Any object that needs to be altered is loaded into cache, also documents that are not indexed.When you just read data it is also loaded. However if data is not accessed it is not loaded to memoryThe index is a different file so it has its own pages to be loaded.@Pavel_Duchovny I see that give a lot of senseBut that means the first data accessed ( not cached yet ) will be a slower operation because BSON have to iterate the collection to find it?Do MongoDB cache only the document or the whole collection ( all docs inside) if the data need to be accessed?And is the cached data freed again after some time if there happens to be no activity for that collection?",
"username": "sakdkjkj_jjsdjds"
},
{
"code": "",
"text": "Yes.If the working set is yet in memory it will have to be fetched from disk.The ideal performance for your primary is if all your working set can fit into 80% of your Wired Tiger cache. If that is not possible due to size limits consider trying to fit at least the indexes in those 80% as this will mean the disk Access be minimal and direct.Example a 32GB server will be by default with 16GB Wired Tiger cache , 80% of the cache will be ~13GB …MongoDB caches pages of WiredTiger the amount if documents that could fit there depands on the size of documents.",
"username": "Pavel_Duchovny"
}
] | Do MongoDB deserialize the BSON from file before indexing? | 2021-03-10T19:18:58.423Z | Do MongoDB deserialize the BSON from file before indexing? | 2,453 |
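To relate the cache sizing discussion above to a running deployment, the WiredTiger cache figures can be read from serverStatus. A small PyMongo sketch follows; the connection string is a placeholder and the metric key names may vary slightly between server versions.

```python
# Sketch: inspect WiredTiger cache usage via serverStatus with PyMongo.
# The connection string is a placeholder; key names may differ slightly across versions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")
cache = status["wiredTiger"]["cache"]

configured = cache["maximum bytes configured"]        # configured WiredTiger cache size
in_cache = cache["bytes currently in the cache"]      # how much of it is currently used
print(f"cache used: {in_cache / configured:.1%} of {configured / 1024**3:.1f} GiB")
```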
null | [
"python"
] | [
{
"code": "",
"text": "mydoc=list(mycol.find({ “pId”:“3020956658” }, {“id1”: 1, “drId”: 0, “name”: 1, “cno”:1, “email”:1, “address”:1,“compid”:0,“comp”:0,“compst”:0,“personalh”:0, “presenth”:0, “pasth”:0,“fh”:1}))The above line is giving error.\nI have tried to follow the order of the different fields.\nKindly help",
"username": "sps"
},
{
"code": "",
"text": "It will be simpler for us tk help you if you could share the error you are having.",
"username": "steevej"
},
{
"code": "import pymongo\nimport time\nimport datetime\nmyclient = pymongo.MongoClient(\"mongodb+srv://pss:[email protected]/<dbname>?retryWrites=true&w=majority\")\nmydb = myclient[\"hcd\"]\nmycol = mydb[\"c61\"]\nx=datetime.datetime.now()\nprint(x)\nts1 = time.time()\nprint(ts1)\nmydoc=list(mycol.find({ \"pId\":\"6993341507\" }, {\"compid\" : 0, \"comp\" :1, \"compst\":1}))\n# )) \n# {\"id1\": 1, \"drId\": 0, \"name\": 1, \"cno\":1, \"email\":1, \"address\":1,\"compid\":0,\"comp\":0,\"compst\":0,\"personalh\":0, \"presenth\":0, \"pasth\":0,\"fh\":1\nts2 = time.time()\nprint(ts2)\ntd= ts2-ts1\nprint('The difference is approx. %s seconds' % td)\nprint(mydoc)\n2021-03-16 16:07:33.998910\n1615891053.9989104\nTraceback (most recent call last):\n File \"E:\\nm21\\readmongoc61.py\", line 12, in <module>\n mydoc=list(mycol.find({ \"pId\":\"6993341507\" }, {\"compid\" : 0, \"comp\" :1, \"compst\":1}))\n File \"C:\\Python\\Python39\\lib\\site-packages\\pymongo\\cursor.py\", line 1207, in next\n if len(self.__data) or self._refresh():\n File \"C:\\Python\\Python39\\lib\\site-packages\\pymongo\\cursor.py\", line 1124, in _refresh\n self.__send_message(q)\n File \"C:\\Python\\Python39\\lib\\site-packages\\pymongo\\cursor.py\", line 999, in __send_message\n response = client._run_operation_with_response(\n File \"C:\\Python\\Python39\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1368, in _run_operation_with_response\n return self._retryable_read(\n File \"C:\\Python\\Python39\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1471, in _retryable_read\n return func(session, server, sock_info, slave_ok)\n File \"C:\\Python\\Python39\\lib\\site-packages\\pymongo\\mongo_client.py\", line 1360, in _cmd\n return server.run_operation_with_response(\n File \"C:\\Python\\Python39\\lib\\site-packages\\pymongo\\server.py\", line 136, in run_operation_with_response\n _check_command_response(\n File \"C:\\Python\\Python39\\lib\\site-packages\\pymongo\\helpers.py\", line 167, in _check_command_response\n raise OperationFailure(msg % errmsg, code, response,\npymongo.errors.OperationFailure: Cannot do inclusion on field comp in exclusion projection, full error: {'operationTime': Timestamp(1615891048, 1), 'ok': 0.0, 'errmsg': 'Cannot do inclusion on field comp in exclusion projection', 'code': 31253, 'codeName': 'Location31253', '$clusterTime': {'clusterTime': Timestamp(1615891048, 1), 'signature': {'hash': b'\\xfeu\\xdb\\xa7c\\xfa=\\xf20\\xed`\\xb6s\\xd3xsm\\xcaV|', 'keyId': 6920164380919201795}}}\n",
"text": "The Code:The Output:",
"username": "sps"
},
{
"code": "01_id# Inclusion filter (any fields not listed are removed, except for _id):\nmydoc=list(mycol.find({ \"pId\":\"3020956658\" }, {\"id1\": 1, \"name\": 1, \"cno\":1, \"email\":1, \"address\":1, \"fh\":1}))\n\n# Inclusion filter that excludes _id:\nmydoc=list(mycol.find({ \"pId\":\"3020956658\" }, {\"id1\": 1, \"name\": 1, \"cno\":1, \"email\":1, \"address\":1, \"fh\":1, \"_id\": 0}))\n\n# Exclusion filter (any fields listed are removed):\nmydoc=list(mycol.find({ \"pId\":\"3020956658\" }, {\"drId\": 0,\"compid\":0,\"comp\":0,\"compst\":0,\"personalh\":0, \"presenth\":0, \"pasth\":0}))\n",
"text": "Cannot do inclusion on field comp in exclusion projectionHi @sps,When you do a projection, you can either do an exclusion filter, with fields that have a value of 0, or an inclusion filter, with fields that have a value of 1. The exception to this is the _id field, which is always included unless it’s specifically excluded.I’m not sure exactly what you’re trying to do, but I think one of the following may do what you want:Some more examples are in the MongoDB tutorialI hope this helps!",
"username": "Mark_Smith"
},
{
"code": "",
"text": "I have clearly understood. It worked perfectly.\nI have started working with mongodb practically, after mid November, 2020.\nI have got enormous help and support from mongodb university and people like you.\nThanks a lot.\nIf I have any more doubts regarding any other topic I will definitely post here again. The terms inclusion and exclusion are self explanatory but got confused somehow about the syntax. Thanks for the elaboration again.",
"username": "sps"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Python projection is not working with MongoDB Free Cloud, what is wrong in the query? | 2021-03-11T09:55:04.169Z | Python projection is not working with MongoDB Free Cloud, what is wrong in the query? | 5,071 |
null | [
"replication",
"security"
] | [
{
"code": "clusterAuthMode: x509authorization: enabledauthenticate db: $external { authenticate: 1, mechanism: \"MONGODB-X509\", user: \"CN=mongod1\", $db: \"$external\" }\n2020-09-11T19:15:50.644+0000 I ACCESS [conn10] Failed to authenticate CN=mongod1@$external from client 172.19.62.122:37102 with mechanism MONGODB-X509: UserNotFound: Could not find user \"CN=mongod1\" for db \"$external\"\n",
"text": "Hi,\nWhen I enable clusterAuthMode: x509 for replica set authentication, with the proper certs in place per this documentation, the replica members can authenticate with each other just fine. However, when I also enable authorization: enabled, the members are unable to communicate with each other. In the logs I see entries like this:This would suggest that a corresponding user needs to be in place for each member of the replica set. In the documentation, it’s clear that a user is needed for clients to use when connecting to the replica set and I do see log entries indicating that my client is successfully authenticating using the user I’ve setup, but I can’t find anything in the docs showing that such a user is needed for internal/membership authorization.Is such a user needed for each member of the replica set for internal/membership authorization, or am I missing something else here? If a user for each member is needed, what db should the user be created in, and with what permissions/roles; where are the docs I’ve overlooked that specify this?Thanks for any help.",
"username": "DanielP"
},
{
"code": "",
"text": "Hi @DanielP and welcome in the MongoDB Community !Are the cluster IP addresses correctly setup in the bind_ip list?\nThere are definitely no “internal users” for internal membership authorization, etc. Users are just for real users. Not for internal purposes.Can you please share your config files & command lines and walk us through the steps you have been through so we can maybe spot the issue?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Appreciate your response!Here is my conf for the 2 data bearing members:net:\nport: 17017\nbindIpAll: true\ntls:\nmode: requireTLS\nCAFile: /certs/myCA.pem\ncertificateKeyFile: /certs/myCert.pem\nreplication:\noplogSizeMB: 4096\nreplSetName: mySet\nenableMajorityReadConcern: false\nsetParameter:\nenableLocalhostAuthBypass: true\nprocessManagement:\nfork: “false”\nstorage:\ndbPath: /data/db\nengine: wiredTiger\njournal:\nenabled: true\nwiredTiger:\ncollectionConfig:\nblockCompressor: snappy\nengineConfig:\ndirectoryForIndexes: true\ncacheSizeGB: 20\nsecurity:\nenableEncryption: true\nencryptionKeyFile: /etc/mongodb.keyfile\nauthorization: enabled\nclusterAuthMode: x509And here is my arb conf:net:\nport: 17017\nbindIpAll: true\ntls:\nmode: requireTLS\nCAFile: /certs/myCA.pem\ncertificateKeyFile: /certs/myCert.pem\nreplication:\noplogSizeMB: 1\nreplSetName: mySet\nenableMajorityReadConcern: false\nsetParameter:\nenableLocalhostAuthBypass: true\nprocessManagement:\nfork: “false”\nstorage:\ndbPath: /data/db\nengine: wiredTiger\nwiredTiger:\ncollectionConfig:\nblockCompressor: snappy\nengineConfig:\ndirectoryForIndexes: true\nsecurity:\nenableEncryption: true\nencryptionKeyFile: /etc/mongodb.keyfile\nauthorization: enabled\nclusterAuthMode: x509When I comment the two lines in the security section, the replica set members can communicate just fine, so it seems to be an issue relating to membership authentication.Thank you for confirming there are no users needed for internal membership authentication; that was my initial assumption but I’ve been grinding on this problem for several days now so I didn’t want to rule anything out, particularly since the error messages I’m seeing in the logs seem to indicate it’s in fact expecting a particular user named after the member that’s trying to connect.",
"username": "DanielP"
},
{
"code": "with mechanism MONGODB-X509: UserNotFound: Could not find user \"CN=mongod1\" for db \"$external\"",
"text": "with mechanism MONGODB-X509: UserNotFound: Could not find user \"CN=mongod1\" for db \"$external\"When you comment security section members are able to communicate means some issue with certificatesFrom mongo documentation“…Certificat Requirements: The Distinguished Name (DN), found in the member certificate’s subject, must specify a non-empty value for at least one of the following attributes: Organization (O), the Organizational Unit (OU) or the Domain Component (DC).”",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I’ve double-checked the certs of the three members; all three have the exact same subject, except for the value of CN; e.g.$ openssl x509 -in myCert.pem -inform PEM -subject -nameopt RFC2253\nsubject= [email protected],CN=mongod1,OU=ops,O=myCompany,L=myCity,ST=myState,C=USI’ve gone over the documentation you’re referencing several times and I’m meeting all the requirements, namely:",
"username": "DanielP"
},
{
"code": "",
"text": "Also, can someone shed light on why I’m seeing errors about:MONGODB-X509: UserNotFound: Could not find user “CN=mongod1”when membership authentication is not using users?",
"username": "DanielP"
},
{
"code": "mongod1mongod2mongod1CN=mongod1",
"text": "some more info:\nWhen I start up the replica set with the following enabled in my conf:security:\nauthorization: enabled\nclusterAuthMode: x509And then access the shell on mongod1, authorize w/ the appropriate user, then run rs.status(), I see entries like this for the other members:{\n“_id” : 2,\n“name” : “mongod2:17017”,\n“health” : 0,\n“state” : 8,\n“stateStr” : “(not reachable/healthy)”,\n“uptime” : 0,\n“lastHeartbeat” : ISODate(“2020-09-18T15:37:18.349Z”),\n“lastHeartbeatRecv” : ISODate(“1970-01-01T00:00:00Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “Could not find user “CN=mongod1” for db “$external””,\n“syncingTo” : “”,\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“configVersion” : -1\n}It looks like mongod2 is refusing to communicate with mongod1 because it can’t find the user CN=mongod1. Based on what @MaBeuLux88 stated earlier about users not being needed for membership authentication, should I assume this error message is a red herring? I have no idea why this isn’t working.",
"username": "DanielP"
},
{
"code": "with mechanism MONGODB-X509: UserNotFound: Could not find user \"CN=mongod1\" for db \"$external\"",
"text": "per @Ramachandra_Tummala’s comment, I’m further scrutinizing the certs. I don’t know if this is relevant, but I’m creating the certs from an easyrsa CA with the following X509 extensions:basicConstraints = CA:FALSE\nsubjectKeyIdentifier = hash\nauthorityKeyIdentifier = keyid,issuer:always\nextendedKeyUsage = clientAuth,serverAuth\nkeyUsage = digitalSignature,keyEnciphermentAnything here that might cause mongod to fail when attempting to use the certificates?Also, can anyone confirm that error messages like with mechanism MONGODB-X509: UserNotFound: Could not find user \"CN=mongod1\" for db \"$external\" can indeed be indicative of problems with the certificates?",
"username": "DanielP"
},
{
"code": "clusterAuthMode: x509",
"text": "Maybe we can work this from the other end; can anyone that is usingclusterAuthMode: x509 with certificates generated with easyrsa3 tell me how they’re doing so, and with what attributes (e.g. which x509-types, etc.)?",
"username": "DanielP"
},
{
"code": "",
"text": "Can you extract CN from your certificate with -noout option of openssl\nIs it matching with your hostname?Also in mongodb university course when we implemented x509 i remember an user is created with subject on external dbPlease check if this link gives any helphttps://dinfratechsource.com/2018/12/16/securing-mongodb-with-x-509-authentication/",
"username": "Ramachandra_Tummala"
},
{
"code": "mongod1mongod1.my.domain",
"text": "@Ramachandra_Tummala thanks for the response; it’s perhaps important to note that the CN value of the certs is the FQDN of the hosts, not the regular hostname (e.g. hostname is mongod1 whereas the CN value is mongod1.my.domain) Is this perhaps the problem? Would it help to generate a cert that has both the FQDN hostname and the regular hostname in the SAN list?Also, your comment about needing to add a user, that seems to be solely for client authentication, not membership authentication.",
"username": "DanielP"
},
{
"code": "",
"text": "b university course when we implemented x509 i remember an user is created with subject on external dbI’m having the exact same problem as yours where replica is returning NoUser error. I have tried both certificates with only clientAuth and both clientAuth and serverAuthMay i know how did you resolve this?",
"username": "Dave_Teu"
},
{
"code": "",
"text": "HelloI did not get this error.I was trying to help another student in the thread you have referred\nPlease show screenshot of the exact error you got\nCheck all your certificates are fine .Values of CN,O fields etc\nRefer to mongo documentation for more details",
"username": "Ramachandra_Tummala"
},
{
"code": "OOUDCnet.tls.clusterFilenet.tls.certificateKeyFiletlsX509ClusterAuthDNOverride",
"text": "It’s okay, I found the error. It was mentioned in the docs but it was all cluttered within sentences so I overlooked.The Organization attributes ( O ’s), the Organizational Unit attributes ( OU ’s), and the Domain Components ( DC ’s) must match those from both the net.tls.clusterFile and net.tls.certificateKeyFile certificates for the other cluster members (or the tlsX509ClusterAuthDNOverride value, if set).I had a different certificateKeyFile generated by letsencrypt when I only enabled TLS for client to server communication.",
"username": "Dave_Teu"
}
] | `clusterAuthMode: x509` fails when `authorization: enabled` | 2020-09-14T20:17:42.356Z | `clusterAuthMode: x509` fails when `authorization: enabled` | 6,282 |
null | [] | [
{
"code": "",
"text": "|1|Scan rate. How fast is the search for certain size of data?\n|2|Does frequent update on the array of objects same as updating the ordinary key value pair?|\n|3|What is the ideal server specs for one node? (CPU, mem, harddisk)|\n|4|How much data each node can handle?|\n|5|How can we optimize our search query? How should we design our schema for fast search?|\n|6|Which do you recommend, a row with sub properties or simple row with sub properties on another table?|\n|7|How should we troubleshoot our cluster in case we encounter abnormality?|\n|8|Is there any limit in row/collection/node size?|\n|9|How should we handle a table with increasing data everyday? What is the right structure for that?|\n|10|Is there any restriction or known problem with regards to nested array?|",
"username": "Amy_Jonson"
},
{
"code": "explain",
"text": "Hello @Amy_Jonson, welcome to the MongoDB Community forum!This is by no means anything but basic questions. Though the questions are very valid and are often asked on this forum and elsewhere. Each question needs to be a post by itself - and this is unusual (I think).I have posted some relevant information for each of the questions and its not very detailed, and some links to documentation. Please feel free to search for each of the topics and you will find detailed answers on this forum and elsewhere on the net.There are also other resources, in addition to the documentation links. If you navigate to the top of this page (press keyboard Home button), there is a horizontal menu with various links to blog posts, tutorials, podcasts, training videos, etc. One important resource is the MongoDB University, where you will find courses for topics of your interest like, Data Modeling and Performance.In MongoDB, data is stored as fields and values within documents. A set of documents are stored together in a collection. And a grouping of these collections is a database. There can be many databases on a MongoDB deployment. As such there are no rows and tables in MongoDB vocabulary. MongoDB has databases, collections and documents (these are analogous to databases, tables and rows in SQL/tabular databases).|1| Scan rate. How fast is the search for certain size of data?Searching and the performance depends upon many factors - the size of data, the indexes on the search criteria fields, the hardware, and the query itself. A good response would be within 100 ms, I believe.|2| Does frequent update on the array of objects same as updating the ordinary key value pair?I think it is not the same, as they are different data and field types. There are different methods to work with different types of fields. The array field type has separate set of methods to insert, update, query, or delete array elements. The frequency of updates on different types of fields depends upon your application. As such an update operation within a document is atomic irrespective of the size and type of field.See Update - Atomicity.|3| What is the ideal server specs for one node? (CPU, mem, harddisk)This is determined based upon your application requirements. How much data are you storing? What kind of queries you are performing? What kind of data is it? How much money you are willing to spend? There are many factors like this.See References below.|4| How much data each node can handle?This is dependent upon your node’s specifications. How much hard disk, RAM, CPU, etc., the node has. In general, the data size is limited by the file systems of the servers. The database data is stored as files. By default, a collection creates at least two files; one for the data and another for the default _id field index. So, the number of files, size of these files and the resources to handle the files are main factors.See References below.|5| How can we optimize our search query? How should we design our schema for fast search?There are query optimization techniques. There are tools like explain which generate query plans and you can study these to figure how the query is performing. In general, proper usage of indexes is an important aspect of the query optimization. Then, how the query is constructed. The server hardware plays a part too (for example, the configuration of RAM).The designing of the schema (or data modeling) affects the performance. 
So, the questions to ask are what kind of data, what kind of application, how much data, what kind of operations, etc.See Analyze Query Performance|6| Which do you recommend, a row with sub properties or simple row with sub properties on another table?This question is related to data modeling. You need to model the data based upon your application requirements. What are the relationships between the data entities (one-to-one, one-to-many, many-to-many)? What are the important operations you will be performing on this data? And, there are many other factors which influence the data design.See Data Model Design - Embedded and Normalized|7| How should we troubleshoot our cluster in case we encounter abnormality?What is this abnormality you have encountered? Based upon the issue these issues can be resolved.|8| Is there any limit in row/collection/node size?MongoDB document have a limitation of 16 Megabyte (MB) maximum size. For larger data sizes per document (for example media files, etc.) there is a feature called as GridFS, which allows data storage larger than 16 MB size.See GridFS and also References below.|9| How should we handle a table with increasing data everyday? What is the right structure for that?What data? How much data? What kind of operations? Without any details it is difficult to say anything. In general, there are design patterns which address various data and operational scenarios. These can be applied to some commonly occurring situations.See Data Modeling Concepts topic Operational Factors and Data Models.|10| Is there any restriction or known problem with regards to nested array?In a MongoDB document, you can nest upto 100 levels. The size of the nested array(s) is limited by the document size of 16 MB. In such cases, you can model data with references.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi Prasad_Saya,Thank you for taking time to answer my questions.Follow up questions:Example we are inserting 1million document of 21kb every day. We do 6 updates on the nested array of each document and with frequent search. What would be the ideal server specs for that? Is it good to store all the documents in one single collection only or should consider creating new collection every day/week/month?We are currently using elasticsearch as data storage but we found out that frequent update especially in nested array is costly that is why we are considering mongodb instead.",
"username": "Amy_Jonson"
},
{
"code": "",
"text": "Hello @Amy_Jonson,we are inserting 1million document of 21kb every day. Is it good to store all the documents in one single collection only or should consider creating new collection every day/week/month?Some thoughts. So, you will have quite a bit of data in a year and more in two years, etc. More data means more disk storage. Accessing more data in your application means more RAM memory - the data and indexes used in queries (a.k.a working set) need to be in memory for performance; reading these from disk for querying will result in poor performance.I don’t know your use case of the data and the queries. Storing documents in multiple collections based upon time is not a new concept. But, you will also have to look at your queries so that you shouldn’t be accessing two collections (all the time) for running your queries - for practical and performance reasons. This sure will require a closer study.We do 6 updates on the nested array of each document and with frequent search.MongoDB array fields can be indexed (these are called as Multikey Indexes), and this can be useful for read and update queries with array field as query filter criteria.What would be the ideal server specs for that?I would think in terms of working sets which will give an idea about the kind of RAM memory you might need. Then the data and index sizes for the disk drive needs. Please look for some finer details, especially about how MongoDB uses the computer’s memory (see FAQ: MongoDB Storage).",
"username": "Prasad_Saya"
},
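A small hypothetical sketch of the multikey-index point above (the records collection and its events array are made-up names, not taken from the thread), plus a rough volume estimate for the stated insert rate:

// Rough volume estimate: 1,000,000 docs/day * 21 KB ≈ 21 GB/day (≈ 7.7 TB/year) before indexes and compression

// Index a field inside the array — MongoDB builds a multikey index automatically
db.records.createIndex( { "events.status": 1 } )

// Reads that filter on the array field can use that index
db.records.find( { "events.status": "open" } )

// Update a single matching array element with the positional $ operator (someId is a placeholder)
db.records.updateOne(
  { _id: someId, "events.status": "open" },
  { $set: { "events.$.status": "done" } }
)

// Or update all matching elements with arrayFilters (MongoDB 3.6+)
db.records.updateOne(
  { _id: someId },
  { $set: { "events.$[e].status": "done" } },
  { arrayFilters: [ { "e.status": "open" } ] }
)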
{
"code": "",
"text": "Hi @Prasad_Saya,Thank you so much for your help. We will take a look on your suggestions.Regards,",
"username": "Amy_Jonson"
}
] | Basic Questions on MongoDB | 2021-03-02T04:03:00.459Z | Basic Questions on MongoDB | 2,466 |
null | [
"atlas-device-sync",
"android"
] | [
{
"code": "",
"text": "Dear Team,I am planning to implement the application using MongoDB. It contains Android app & Web app. In mobile app, I need real-time sync as well as offline functionality. I am planning to use MongoDB Realm Android SDK in mobile with sync implementation as mentioned in - https://docs.mongodb.com/realm/sdk/android/quick-start-sync/My question is -Please let me know if you need any additional details.\nThanks in advance!",
"username": "Samir_Bukkawar"
},
{
"code": "",
"text": "Welcome to the MongoDB community @Samir_Bukkawar!What are the server side databases we can use with this sync?Realm Sync is currently only available as a cloud service with MongoDB Atlas as the backing database.MongoDB Realm has Application Development Services including features like triggers and functions that can be used for integration with other applications and APIs.In forum I can see we can use MongoDB Atlas (AWS cloud), but can I use normal MongoDB which will be hosted on our internal server?No, you cannot currently host your own Realm Sync service. There is a feedback request for an On-premise Solution that you can watch and upvote.Regards,\nStennie",
"username": "Stennie_X"
}
] | Mobile - MongoDB Realm - Sync Compatibility | 2021-03-18T15:24:20.490Z | Mobile - MongoDB Realm - Sync Compatibility | 2,446 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi there,as a developer, in an Atlas eco system, I like to provision a dev database with a subset of a production database which is located in an other project, so that I can develop with a small database holding recent data.Is it possible to write from one project to an other? Has someone done something like this ? Or is there a better solution around, which I missed out?Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @michael_hoeller,I think we talked about it in the past.Build in is only an option to complete restore of a backup to another project. Or complete live migrate.For subset of data you need to use dump and restore.There might be a nice trick to use $out and clone any collection you need to a temp new database by limit or filter and dump restore only this/those dbs.Thank\nPavel",
"username": "Pavel_Duchovny"
},
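A minimal sketch of the $out-plus-dump idea above — the collection name, filter, and connection strings are hypothetical placeholders:

// 1. On the production cluster, copy a filtered/limited subset into a temporary database
//    ($out to a different database in this form requires MongoDB 4.4+)
db.orders.aggregate([
  { $match: { createdAt: { $gte: ISODate("2021-01-01T00:00:00Z") } } },   // recent data only
  { $limit: 10000 },
  { $out: { db: "devSubset", coll: "orders" } }
])

// 2. Then dump only that database and restore it into the dev project's cluster:
//      mongodump   --uri "<prod connection string>" --db devSubset
//      mongorestore --uri "<dev connection string>" --nsFrom "devSubset.*" --nsTo "mydb.*" dump/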
{
"code": "",
"text": "Hello Pavel\n#[quote=“Pavel_Duchovny, post:2, topic:99319”]\nBuild in is only an option to complete restore of a backup to another project. Or complete live migrate.\nFor subset of data you need to use dump and restore.\n[/quote]\nAs you mention this is not an option since it a) either is a full ‘restore’ or, with mongo dump a particular filter without the ability to reflect relations.There might be a nice trick to use $out and clone any collection you need to a temp new database by limit or filter and dump restore only this/those dbs.Yes it it simple to use $out and $merge but as of my tests someone would need to stay in the same Project which is not really what you want in an productive environment. So we are back to the actual question which is aiming on Realm and cross project.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "@michael_hoeller,Realm cannot.link clusters in an app cross project … BUT you got me an idea.Its pretty creative and I have to check it but it might be interesting. Will share more in the coming days when I poc it.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hello @Pavel_DuchovnyWill share more in the coming days when I poc it.thanks a lot, I stay tuned Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "{\n \"databases\": [\n {\n \"name\": \"mydb\",\n \"collections\": [\n {\n \"name\": \"movies\",\n \"dataSources\": [\n {\n \"collection\": \"movies\",\n \"database\": \"sample_mflix\",\n \"storeName\": \"atlasClusterStore\"\n }\n ]\n }\n],\n\"stores\": [\n {\n \"provider\": \"s3\",\n \"bucket\": \"atlas-datalakepavel\",\n \"includeTags\": false,\n \"name\": \"s3store\",\n \"region\": \"us-east-1\"\n },\n {\n \"provider\": \"atlas\",\n \"clusterName\": \"prod\",\n \"name\": \"atlasClusterStore\",\n \"projectId\": \"xxxxxxxx\"\n }\n ]\n// This function is the webhook's request handler.\nexports = async function(payload, response) {\n \n var movies = context.services.get(\"data-lake\").db(\"mydb\").collection(\"movies\");\n \n var res = await movies.aggregate([{$sample : { size: 10}}, \n {\"$out\" : {\n \"s3\" : {\n \"bucket\" : \"atlas-datalakepavel\",\n \"region\" : \"us-east-1\",\n \"filename\" : \"10movies/movies\",\n \"format\" : {\n \"name\" : \"json\"\n }\n }\n }\n }]).toArray();\n\n};\n{\n \"databases\": [\n {\n \"name\": \"mydb\",\n \"collections\": [\n {\n \"name\": \"devMovies\",\n \"dataSources\": [\n {\n \"path\": \"10movies/*\",\n \"storeName\": \"s3store\"\n }\n ]\n }\n ]\n }\n],\n\"stores\": [\n {\n \"provider\": \"s3\",\n \"bucket\": \"atlas-datalakepavel\",\n \"includeTags\": false,\n \"name\": \"s3store\",\n \"region\": \"us-east-1\"\n },\n {\n \"provider\": \"atlas\",\n \"clusterName\": \"dev\",\n \"name\": \"atlasClusterStore\",\n \"projectId\": \"yyyyyyy\"\n }\n ]\n// This function is the webhook's request handler.\nexports = async function(payload, response) {\n \n var movies = context.services.get(\"data-lake\").db(\"mydb\").collection(\"devMovies\");\n \n var res = await movies.aggregate([\n {\n \"$out\": {\n \"atlas\": {\n \"projectId\": \"yyyyyyyyyy\",\n \"clusterName\": \"dev\",\n \"db\": \"dev\",\n \"coll\": \"movies\"\n }\n }\n}]).toArray();\n\n};\n",
"text": "Hi @michael_hoeller,Ok so I tailored a solution. I have tto say its not super stright forward but its creative and mind opening for various things.The key ability I used is that REALM applications can be linked to Atlas Data lakes.Now Atlas Data lakes can read/write data from/to MongoDB clusters and from/to S3 buckets. Having said that, there is a limitation that the Data lake linked can be only from the same project and the Atlas cluster linked in that Data lake can be from the same project as well.However, the s3 buckets can be shared cross Data lakes.In my configuration I have the following topology:My Data lake configuration on prod is mapping the Atlas cluster so our realm app could read from it and use the $out operator to write to S3.My webhook in the prod project realm app is linked to this data lake and therefore can perform a write of sample 10 movies to my s3 store:Once I execute it via a curl or a trigger or any http hook it creates my source files:\n\nScreen Shot 2021-03-18 at 11.14.471550×854 58.4 KB\nNow I map the data lake :No I can do the opsite import via a webhook in the dev realm app connected to this datalake:Once I do that I get the needed data in my dev:\n\nScreen Shot 2021-03-18 at 11.56.422982×1172 333 KB\nHere you can be as flexible as you want and transfer data with logic lookups and many other options.I know this is a lot to digest but I hope it might help.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hello @Pavel_Duchovny\nthank you very much! This fits perfectly well, my initial posting was sparse on details how to create a subset,\nsince I didn’t want to over load the question. Your suggestion goes further and provides to aggregation where I can add the logic to get the subset.\nThanks a again !\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "Copy" data from one project to an other | 2021-03-16T20:29:16.897Z | “Copy” data from one project to an other | 5,428 |
null | [
"queries"
] | [
{
"code": "",
"text": "I want to query a Date field. The query should return all the records that are between 2 dates. When I use the gt and lt operators I get an error in lt. This is the query I'm doing: \nvar list = await db.saveNewRecord.find ({LastReportDate: { gt: ‘2021-03-15T21: 07: 48.000Z’}}, {LastReportDate: {$ lt: '2021- 03-15T21: 50: 48.000Z '}}).\nAnd if I try to use $in, it doesn’t work.Can you help me?",
"username": "Norberto_Massaferro"
},
{
"code": "$gt$$lt$",
"text": "The operators are $gt (you are missing a $) and $lt (you have a white space between $ and lt).Since you have 2 closing braces after the first date, then your query ends there. The 2nd LastReportDate… is taken as a projection or something else but it is not part of your query.You have a couple of white space inside your date and time values that might cause wrong comparison.",
"username": "steevej"
},
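A small illustrative sketch of that filter-vs-projection distinction, in shell syntax (string values are kept as in the original post; see the later replies about preferring Date objects):

// find() takes (filter, projection): a second object is treated as a projection,
// i.e. which fields to return — not as another filter condition
db.saveNewRecord.find(
  { LastReportDate: { $gt: '2021-03-15T21:07:48.000Z' } },   // filter
  { LastReportDate: 1 }                                      // projection
)

// To apply both bounds, put them in one object under the same field:
db.saveNewRecord.find(
  { LastReportDate: { $gt: '2021-03-15T21:07:48.000Z',
                      $lt: '2021-03-15T21:50:48.000Z' } }
)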
{
"code": "",
"text": "Hi Steeve,I try with this but it doesn’t work:var list = await db.saveNewRecord.find({LastReportDate: {$gt: ‘2021-03-15T21:07:48.000Z’}, LastReportDate: {$lt: '2021-03-15T21:50:48.000Z '}}).",
"username": "Norberto_Massaferro"
},
{
"code": "db.saveNewRecord.find({\n\tLastReportDate:{\n\t\t$gt:ISODate('2021-03-15T21:07:48.000Z'),\n\t\t$lt:ISODate('2021-03-15T21:50:48.000Z')\n\t}})\n",
"text": "Hi @Norberto_Massaferro,var list = await db.saveNewRecord.find({LastReportDate: {$gt: ‘2021-03-15T21:07:48.000Z’}, LastReportDate: {$lt: '2021-03-15T21:50:48.000Z '}}).As Steve has mentioned, the second LastReportDate is not part of your query since it is taken as a projection.There appears to be a white space between the Z and the ’ within your $lt value as well.To better troubleshoot this, can you provide:Also, if you are querying a date object, you may wish to try something like this example below:Hope this helps.\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason, thanks for the reply.ISODate (‘2021-03-15T21: 07: 48.000Z’) did not work for me because the use of ISODate gave me an error.But I understood what the problem was, the reality is that it was so basic that I overlooked it. The problem was in the {}, something that they mentioned to me and that although it was obvious I did not see it.The sentence stays like this and it works:\nlist = await db.saveNewRecord.find ({LastReportDate: { gte: '2021-03-15T21: 07: 48.000Z', lte: ‘2021-03-15T21: 50: 48.000Z’}})Thanks for the help.Greetings",
"username": "Norberto_Massaferro"
},
{
"code": "{ LastReportDate: { $gte: \"2021-03-15T21: 07: 48.000Z\", $lte: \"2021-03-15T21: 50: 48.000Z\"}})> c = db.date_test\ndb.date_test\n> c.find() ;\n{ \"_id\" : 1, \"date\" : ISODate(\"2021-03-18T18:47:46.276Z\") }\n> q = { \"date\" : { \"$lt\" : new Date() } }\n{ \"date\" : { \"$lt\" : ISODate(\"2021-03-18T19:00:47.829Z\") } }\n> c.find( q ) ;\n{ \"_id\" : 1, \"date\" : ISODate(\"2021-03-18T18:47:46.276Z\") }\n> q = { \"date\" : { \"$lt\" : \"2021-03-18T19:00:47.829Z\" } }\n{ \"date\" : { \"$lt\" : \"2021-03-18T19:00:47.829Z\" } }\n> c.find( q ) ;\n> \n> date = ISODate( \"2021-03-15T21: 07: 48.000Z\" )\n2021-03-18T15:07:07.128-0400 E QUERY [js] Error: invalid ISO date: 2021-03-15T21: 07: 48.000Z :\nISODate@src/mongo/shell/types.js:65:1\n@(shell):1:8\n> date = ISODate( \"2021-03-15T21:07:48.000Z\" )\nISODate(\"2021-03-15T21:07:48Z\")\n> c.insert( { _id : 2 , date : \"2021-03-18T19:00:47.829Z\" } )\nWriteResult({ \"nInserted\" : 1 })\n> q = { date : { \"$lte\" : \"2021-03-18T 19: 00:47.829Z\" } }\n{ \"date\" : { \"$lte\" : \"2021-03-18T 19: 00:47.829Z\" } }\n> c.find(q)\n> // string wise they are not equal but date wise they should but I get not result\n> q = { date : { \"$gt\" : \"2021-03-18T 19: 00:47.829Z\" } }\n{ \"date\" : { \"$gt\" : \"2021-03-18T 19: 00:47.829Z\" } }\n> // now I got a result but I should not as the date are logically the same\n",
"text": "If your query{ LastReportDate: { $gte: \"2021-03-15T21: 07: 48.000Z\", $lte: \"2021-03-15T21: 50: 48.000Z\"}})works when dates are specified as string then your first postI want to query a Date field.is kind of misleading, Unless, of course, the node driver automatically converts string to Date. In the mongo shell it does not. See:As forISODate (‘2021-03-15T21: 07: 48.000Z’) did not work for me because the use of ISODate gave me an error.I suspect that it is because you have extra spaces before 07 and 48. With the extra spaces:and without the extra spacesSo unless your data also has extra spaces, your query, despite not generating any error, probably produce the wrong result. See the following where my data is string and has no extra space but my query does.All this simply to say that if you date related data, use Date rather than string.",
"username": "steevej"
},
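A small Node.js-style sketch of that last point — store and query real Date values rather than strings (driver setup is omitted; the collection name follows the thread):

// Store the field as a BSON date, not a string
await db.collection('saveNewRecord').insertOne({
  LastReportDate: new Date('2021-03-15T21:07:48.000Z')
})

// Range queries then compare chronologically, so formatting/whitespace quirks cannot skew results
const list = await db.collection('saveNewRecord')
  .find({
    LastReportDate: {
      $gte: new Date('2021-03-15T21:07:48.000Z'),
      $lte: new Date('2021-03-15T21:50:48.000Z')
    }
  })
  .toArray()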
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Problem with query using $lt | 2021-03-17T20:50:28.883Z | Problem with query using $lt | 19,607 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Thanks you @Ian_Ward and @Jason_Flax for the great webinar today! I asked a question about transactions and costs, but I don’t think I got the context around the question right, so I’ll ask it here instead.My app is a workout log app and I am using Realm locally, but want to move over to MongoDbRealm. Since I only use local realms, I have never had to worry about committing to edits to the database, but as I move to synced realms, each transaction causes sync and has a cost attached to it and because of this I will have to restructure the app to be better suited for sync.Currently, each keystroke is saved locally. For a single user, this could easily be thousands of writes during a workout session and each would trigger a sync. From the user perspective, the only time you want to sync the data is when done with your session. Doing commits every keystroke, like I do today will both cause performance issues with sync and be expensive. Pausing sync won’t help as each transaction is still counted. Is there any other way?I am thinking about doing something along these lines:Creating a workout:\n1: The user creates a workout and adds this to a local realm\n2: The user edits the local realm\n3: When the user is done, this is copied to the synced realm (1 commit right?)When editing an existing workout\n1: The user copies a synced workout to the local realm.\n2: The user edits the local realm\n3: When the user is done, the workout is copied to the synced realm.Would this approach work? Do you have any other recommendations for this type of app, where you don’t need realtime sync and do loads of writes?",
"username": "Simon_Persson"
},
{
"code": "workoutsprotocol Workout: Object {\n var name: String { get set }\n var reps: Int { get set }\n var sets: Int { get set }\n var weight: Double { get set }\n}\n\nclass WorkoutDraft: Object, ObjectKeyIdentifiable, Workout {\n dynamic var name: String = \"\"\n @objc dynamic var reps: Int = 0\n @objc dynamic var sets: Int = 0\n @objc dynamic var weight: Double = 0\n}\n\n@objcMembers class SavedWorkout: Object, ObjectKeyIdentifiable, Workout {\n dynamic var name: String = \"\"\n dynamic var reps: Int = 0\n dynamic var sets: Int = 0\n dynamic var weight: Double = 0\n\n convenience init(fromDraft draft: WorkoutDraft) {\n self.init()\n self.name = draft.name\n self.reps = draft.reps\n self.sets = draft.sets\n self.weight = draft.weight\n }\n}\n\nstruct WorkoutFormView<WorkoutType>: View where WorkoutType: Workout & ObjectKeyIdentifiable {\n @ObservedRealmObject var workout: WorkoutType\n @Environment(\\.presentationMode) var presentation\n\n var body: some View {\n Form {\n Stepper(value: $workout.reps, label: { Text(\"reps: \\(workout.reps)\") })\n Stepper(value: $workout.sets, label: { Text(\"sets: \\(workout.sets)\") })\n Stepper(value: $workout.weight, step: 0.5, label: { Text(\"weight: \\(workout.weight)\") })\n }.navigationBarItems(trailing: Button(\"Save\", action: {\n // TODO, this should already be thawed, will be worked out before release\n if let workout = workout.thaw() as? WorkoutDraft, let realm = workout.realm?.thaw() {\n try? syncedRealm.write {\n syncedRealm.add(SavedWorkout(fromDraft: workout))\n }\n try? realm.write {\n realm.delete(workout)\n }\n presentation.wrappedValue.dismiss()\n }\n }))\n }\n}\n\nstruct MainView: View {\n @StateRealmObject var workouts: Results<SavedWorkout>\n @StateRealmObject var unsyncedWorkouts: Results<WorkoutDraft>\n var body: some View {\n NavigationView {\n List {\n ForEach(unsyncedWorkouts) { workout in\n NavigationLink(workout.name, destination: WorkoutFormView(workout: workout))\n }\n ForEach(workouts) { workout in\n NavigationLink(workout.name, destination: WorkoutFormView(workout: workout))\n }\n }.navigationTitle(\"workouts\")\n .navigationBarItems(trailing: Button(\"add\") {\n $unsyncedWorkouts.append(WorkoutDraft())\n })\n }\n }\n}\n\n@main\nstruct App: SwiftUI.App {\n var view: some View {\n MainView(workouts: syncedRealm.objects(SavedWorkout.self),\n unsyncedWorkouts: localRealm.objects(WorkoutDraft.self))\n }\n\n var body: some Scene {\n WindowGroup { view }\n }\n}\n",
"text": "Hey @Simon_Persson, thanks for joining the group! Really appreciated your question during the session as well.It sounds like workouts effectively exist in a “draft” state before you want them synced. I can only give so much advice without seeing your data structures, but, I’ll give it a go based on your suggested approach (pseudocode incoming):This is one solution of many. The “optimal” solution here largely depends on the details of the use case as well as the structure of how you respond to user intent, however, the general idea of consider local data as draft data is common enough.Let me know if this gives you a decent idea of how to move forward.",
"username": "Jason_Flax"
},
{
"code": "",
"text": "Thanks @Jason_Flax!That helps! The objects here are a lot more complex and contains nested structures, but the approach makes sense. It also makes the transition from local to synced a bit easier I think. I had not thought about using separate datatypes for drafts vs synced sharing a common interface, but it sounds like a good idea.For the case of updating, I assume that using realm.add(workout, update: .modified) would do the trick?I am also thinking it might be a good idea to have the workout be self contained. I.e. correspond to one document in MongoDb Atlas, i.e. make sure that all “lists of sets” in the synced workout are of embedded object type. I have run into some problems migrating my local classes to this structure Unable to execute a migration that changes the \"embeddedness\" of an object · Issue #7060 · realm/realm-swift · GitHub. But if I use separate objects for synced vs local realms and share an interface, I might be able to bypass this migration altogether and keep the current “Workout” as a local draft and then simply add a “Synced Workout” that contains a structure better adapted for sync (no floats, object ids, partitions, and embedded objects in lists).",
"username": "Simon_Persson"
},
{
"code": "",
"text": "Thanks again for the additional comments today @Jason_Flax and @Ian_Ward.An additional question about local draft objects. I wasn’t aware of that you couldn’t copy local realm objects to synced objects (inconsistent histories?). My assumption here was that I would be able to copy the synced object to a local realm and use it as a draft object, then copy it back when done editing, but I guess I can’t do this?Again, thanks for the great work on the new platform ",
"username": "Simon_Persson"
},
{
"code": "",
"text": "I wasn’t aware of that you couldn’t copy local realm objects to synced objects (inconsistent histories?)You can copy local realm objects to to synced realms - the question on the meetup was regarding converting a local realm to sync realm - which you need to manually copy the objects over (akin to your draft use case when done).",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks for the clarification! I have had to wait for a recent bug fix in the cocoa sdk (converting lists to embedded objects for local realms), but now it sounds like I am all set. Thank you for your help!",
"username": "Simon_Persson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can I avoid syncing on every commit? | 2021-01-27T17:18:17.717Z | How can I avoid syncing on every commit? | 2,271 |