image_url (string, 113-131 chars, nullable) | tags (sequence) | discussion (list) | title (string, 8-254 chars) | created_at (string, 24 chars) | fancy_title (string, 8-396 chars) | views (int64, 73-422k)
---|---|---|---|---|---|---|
null | [
"queries"
] | [
{
"code": "{\n roomid: \"\",\n questions: {\n q1: {\n user1:\"\"\n }\n }\n}\n",
"text": "is there a way to query the document that match the roomid and match user1?",
"username": "Allen_He"
},
{
"code": "db.collection.find( \n { $and: [ { roomid: \"\" }, { \"questions.q1.user1\": \"\" } ] }\n)\n$and{ roomid: \"\" , { \"questions.q1.user1\": \"\" }AND",
"text": "Hello @Allen_He,To query with multiple conditions you use an $and logical operator. Your query can be formed as:Note that you can also use the query filter as follows - without specifying the $and operator. The results will be same:{ roomid: \"\" , { \"questions.q1.user1\": \"\" }This is because:MongoDB provides an implicit AND operation when specifying a comma separated list of expressions.",
"username": "Prasad_Saya"
}
] | Find and replace with multiple condition query | 2020-11-04T00:09:40.586Z | Find and replace with multiple condition query | 6,096 |
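A runnable form of the two equivalent filters from the reply above, in mongo shell syntax; the sample values ("r1", "alice") are placeholders rather than data from the thread:

```js
// Explicit $and, matching on both the top-level and the nested field.
db.collection.find({
  $and: [ { roomid: "r1" }, { "questions.q1.user1": "alice" } ]
})

// Implicit AND: listing the conditions in one filter document gives the same result,
// because MongoDB ANDs a comma-separated list of expressions by default.
db.collection.find({ roomid: "r1", "questions.q1.user1": "alice" })
```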
null | [
"queries"
] | [
{
"code": "",
"text": "I have a field sometimes it has numbers only and sometimes it has a string of alphanumeric values like 12b or 3a or 23b etc. I want to test the field to see if it has a number then I will just return that number. But if after testing it, it has an alphanumeric value I would like to split the letters and the numbers. I hope the explanation makes sense. Don’t hesitate to ask for clarification. Many thanks",
"username": "Elhadji_M_Ba"
},
{
"code": "",
"text": "Hi @Elhadji_M_Ba welcome to the community.Could you provide an example of what you need? Also, are you looking to do this using the mongo shell, or using some language (node, python, etc.)? An input → output example would be helpful here.As an immediate answer, you should be able to do this using regular expressions. See $regex for MongoDB’s operator, or Regular Expression if you’re not familiar with the subject.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "[{\n \"agendanum\": \"70\"\n},{\n \"agendanum\": \"70a\",\n \"agendasubject\": \"RIGHTS OF THE CHILD\"\n},{\n \"agendanum\": \"70b\",\n \"agendasubject\": \"CHILDREN--UN. GENERAL ASSEMBLY (27TH SPECIAL SESS. : 2002)\"\n},{\n \"agendanum\": \"71a\",\n \"agendasubject\": \"INDIGENOUS PEOPLES\"\n},{\n \"agendanum\": \"71b\",\n \"agendasubject\": \"INDIGENOUS PEOPLES--CONFERENCE (2014 : NEW YORK)\"\n}]\n[{\n \"agendanum\": [\n\t\"70\"\n\t]\n},{\n \"agendanum\": [\n\t\"70\",\n\t\"a\"\n\t]\n \"agendasubject\": \"RIGHTS OF THE CHILD\"\n},{\n \"agendanum\": [\n\t\"70\",\n\t\"b\"\n\t]\n \"agendasubject\": \"CHILDREN--UN. GENERAL ASSEMBLY (27TH SPECIAL SESS. : 2002)\"\n},{\n \"agendanum\": [\n\t\"71\",\n\t\"a\"\n\t]\n \"agendasubject\": \"INDIGENOUS PEOPLES\"\n},{\n \"agendanum\": [\n\t\"71\",\n\t\"b\"\n\t]\n \"agendasubject\": \"INDIGENOUS PEOPLES--CONFERENCE (2014 : NEW YORK)\"\n}]\n",
"text": "ok here is an example. I hope it helps. thanks//Current - numeric portion can be 1 or more digits//Ideal",
"username": "Elhadji_M_Ba"
},
{
"code": "> db.test.find()\n{ \"_id\" : 0, \"num\" : \"70a\", \"txt\" : \"Desc 1\" }\n{ \"_id\" : 1, \"num\" : \"70b\", \"txt\" : \"Desc 2\" }\n\n> db.test.aggregate([\n... {$project: {\n... ret: {\n... $regexFind: {input: '$num', regex: /([0-9]+)([a-z]+)/}\n... },\n... txt: '$txt'\n... }},\n... {$project: {\n... num: '$ret.captures',\n... txt: '$txt'\n... }}\n... ])\n{ \"_id\" : 0, \"num\" : [ \"70\", \"a\" ], \"txt\" : \"Desc 1\" }\n{ \"_id\" : 1, \"num\" : [ \"70\", \"b\" ], \"txt\" : \"Desc 2\" }\n",
"text": "Hi @Elhadji_M_Ba,Apologies for the late reply. Looking at your example, if you’re looking to permanently transform the shape of all the documents in the collection, I would personally do this via a general purpose language e.g. Python, since I think it will be easier to test and extend.However if you only want to project the result, you may be able to achieve it using $regexFind, which allows match captures. Something like:Please note that you need MongoDB 4.2 or newer for this. The regex pattern above also assumes the pattern to follow exactly the example, so it may need further refinements for more complex examples.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Checking a string if it's a number or alphanumeric | 2020-10-07T18:06:18.843Z | Checking a string if it’s a number or alphanumeric | 7,427 |
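A small follow-up sketch, not from the thread: making the letter group optional in Kevin's pipeline also covers values that are purely numeric, such as "70". Field and collection names follow the example above, and MongoDB 4.2+ is still assumed.

```js
db.test.aggregate([
  { $project: {
      // Capture the digits and an optional trailing letter part.
      ret: { $regexFind: { input: "$num", regex: /([0-9]+)([a-z]*)/ } },
      txt: 1
  }},
  { $project: { num: "$ret.captures", txt: 1 } }
])
// "70a" yields num: [ "70", "a" ]; a plain "70" still matches, with an empty second capture.
```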
null | [
"production",
"c-driver"
] | [
{
"code": "",
"text": "I’m pleased to announce version 1.17.2 of libbson and libmongoc,\nthe libraries constituting the MongoDB C Driver.\nFor more information see the 1.17.2 release on GitHub.libmongoc\nBug fixes:libbson\nNo changes since 1.17.1; release to keep pace with libmongoc’s version.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C driver 1.17.2 released | 2020-11-03T23:11:48.654Z | MongoDB C driver 1.17.2 released | 1,939 |
null | [
"queries"
] | [
{
"code": "await Products.find({}).sort({ miShopPrice: -1, discountPrice: -1, normalPrice: -1 })\nbut this returns me:\n [\n {\n \"normalPrice\": 250,\n \"miShopPrice\": 100,\n \"discountPrice\": 0\n },\n {\n \"normalPrice\": 64990,\n \"miShopPrice\": 0,\n \"discountPrice\": 0\n },\n {\n \"normalPrice\": 19500,\n \"miShopPrice\": 0,\n \"discountPrice\": 0\n },\n {\n \"normalPrice\": 1600,\n \"miShopPrice\": 0,\n \"discountPrice\": 0\n }\n]\n [\n {\n \"normalPrice\": 64990,\n \"miShopPrice\": 0,\n \"discountPrice\": 0\n },\n {\n \"normalPrice\": 19500,\n \"miShopPrice\": 0,\n \"discountPrice\": 0\n },\n {\n \"normalPrice\": 1600,\n \"miShopPrice\": 0,\n \"discountPrice\": 0\n },\n {\n \"normalPrice\": 250,\n \"miShopPrice\": 100,\n \"discountPrice\": 0\n }\n]\n",
"text": "I have the following problem, I have to do a sortBy to a mongo DB collection, but with 3 types of fields that are type int / number.I currently have the querymcdvoice\nthe condition is as followsif you have miShopPrice and normalPrice → myShopPrice\nif you have miShopPrice and discountPrice → myShopPrice\nif you have discountPrice and normalPrice → discountPrice\nif you only have normalPrice → normalPricewhat the query should return would be this:could you help me I would really appreciate it",
"username": "Layla_Archie"
},
{
"code": "",
"text": "HelloIf you want to keep the original prices,and just pick a sorting order,you can use many fields.If you want to calculate a price based on some logic,and then sort based on it,\nyou can aggregate on that documents,and use aggregation operators.if (for the rules)to check if a field is missing and decideto add the final price,in which you will sort",
"username": "Takis"
}
] | How to order with 3 fields in mongoDB? | 2020-11-03T09:34:19.796Z | How to order with 3 fields in mongoDB? | 2,510 |
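One way to express the rules Takis outlines, as a hedged mongo shell sketch: it assumes a value of 0 means the field is effectively absent (as in the sample documents), and the effectivePrice field name is invented for illustration.

```js
db.products.aggregate([
  { $addFields: {
      // Pick the price to sort by: miShopPrice wins, then discountPrice, then normalPrice.
      effectivePrice: {
        $switch: {
          branches: [
            { case: { $gt: [ "$miShopPrice", 0 ] },   then: "$miShopPrice" },
            { case: { $gt: [ "$discountPrice", 0 ] }, then: "$discountPrice" }
          ],
          default: "$normalPrice"
        }
      }
  }},
  { $sort: { effectivePrice: -1 } }
])
```

Sorting on the computed field produces the ordering shown in the question: 64990, 19500, 1600, then the document whose effective price is the 100 miShopPrice.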
null | [
"queries"
] | [
{
"code": "db.movies.find({\"_id\": ObjectId(\"5ed07711b61249708c74f684\")}).pretty()...\n\n {\n \"_id\" : ObjectId(\"5ed07711b61249708c74f684\"),\n \"item\" : \"movie\",\n \"year\" : 2013,\n \"cats\" : \"Donni Yen\",\n \"trilogy\" : true,\n **\"title\" : \"Yip Man\"**\n }\n {\n \"_id\" : ObjectId(\"5ed07711b61249708c74f684\"),\n \"item\" : \"movie\",\n **\"title\" : \"Yip Man\"**,\n \"year\" : 2013,\n \"cats\" : \"Donni Yen\",\n \"trilogy\" : true\n }\n",
"text": "Hello everyone, how are you?I am starting my studies in MongoDB and at first I really like to practice CRUD operations interacting via shell.I would like to know if it is possible to change the position of a simple “field: value” in a given collection, as shown below:I wanted to move “title”: “Yip Man” to position 2:I’ve been researching a lot but I couldn’t find this specific issue.Thank you very much if you can help me.",
"username": "Fernando_Xavier_de_B"
},
{
"code": "",
"text": "Hello,and welcome : )The order of the fields are in general the order they had in the inserted document,\nwith some exceptions. See the linkBecause its a Document order is not really important,if you want to print it somewhere,\nyou can print the title as 3rd ,no matter what is the original position.For times that position is important,arrays are more safe way to go.",
"username": "Takis"
}
] | Change "field: value" position | 2020-11-03T19:33:35.742Z | Change “field: value” position | 2,319 |
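A small illustration of the point above, assuming the collection from the question: field access is by name, so the stored position of "title" does not matter when reading or printing the document.

```js
const doc = db.movies.findOne({ _id: ObjectId("5ed07711b61249708c74f684") });
// Print the fields in whatever order you like, regardless of how they are stored.
print(doc.item, doc.title, doc.year);
```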
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of 1.4.3 of the MongoDB Go Driver.This release contains several bugfixes. For more information please see the release notes.You can obtain the driver source from GitHub under the 1.4.3 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Isabella_Siu"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.4.3 Released | 2020-11-03T20:11:25.267Z | MongoDB Go Driver 1.4.3 Released | 1,949 |
null | [
"swift"
] | [
{
"code": "import SwiftUI\nimport RealmSwift\n\nlet app = App(id: \"tasktracker-qczfq\")\n\n@main\nstruct myApp: App {\n \n var body: some Scene {\n WindowGroup {\n TabBarView()\n }\n }\n}\n",
"text": "I’m trying to setup mongoDB Realm following theiOS Tutorial (todo list) but I can’t get it working. This is my code.It throws the error “Argument passed to call that takes no arguments” which makes sense because App is a protocol. How can I solve this issue?",
"username": "Martin122"
},
{
"code": "",
"text": "@Martin122 We just released a SwiftUI tutorial here -I think this should help you. Let us know what you think",
"username": "Ian_Ward"
},
{
"code": "",
"text": "The problem was using SPM to install RealmSwift instead of Cocoa-Pods. Took me a while to figure it out.",
"username": "Martin122"
}
] | mongoDB Realm iOS Tutorial Problem | 2020-11-01T20:14:47.247Z | mongoDB Realm iOS Tutorial Problem | 1,842 |
null | [
"graphql"
] | [
{
"code": "",
"text": "Is there any way to restrict some collection to be exposed over GraphQL apis ?\nExact use case is : I want to expose only few collection of my database over graphql , but want to use all query roles, filters for all.\nIs there any way I can do that ??",
"username": "serverDown_Saturday"
},
{
"code": "",
"text": "If you’re asking whether you can prevent a collection from having a generated GraphQL Schema, there is no way to do that. However, our GraphQL API respects all permissions you set so you can use Rules to disable all reads and writes to the collection.Do you mind explaining why you want to only expose certain collections in your use-case? Additionally, you can submit a suggestion here which we monitor as we build our roadmap - Realm: Top (0 ideas) – MongoDB Feedback Engine",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thanks for your reply. Actually some the of collections are for our internal use.\nI managed to achieve this by adding empty schema object form Realm Web UI.\nWe are using realm in our production now as well. So Just wanted to catch up with all the questions that we have for future decisions to use it.\nCurrently we are facing some issue in production. Logs doesn’t have any information to understand the issue. Is there any way we can get help form support team to evaluate the log ?",
"username": "serverDown_Saturday"
},
{
"code": "",
"text": "I would suggest going through mongodb support or using the Intercom chat to raise this issue.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Intercom gave a sad reply.\nThanks tough",
"username": "serverDown_Saturday"
}
] | Restriction of GraphQl Apis | 2020-10-31T09:46:02.395Z | Restriction of GraphQl Apis | 1,956 |
null | [
"atlas-device-sync",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Javascript Realm SDK\nhow can I open a realm locally, after I have synchronized in the cloud.\nI’m experiencing the following problem:\nI have an instance in the realm cloud, that instance has been canceled, and now I am unable to open the local realm on my app.",
"username": "Royal_Advice"
},
{
"code": "",
"text": "@Royal_Advice What are you trying to do? You can open the local Realm file in Studio to view the data. You can also programmatically open the local realm in read-only mode with the various SDKs by passing a regular Configuration struct rather than a SyncConfiguration struct.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I’m trying to open a local realm that was previously synced to the cloudi have an application and i am using cloud realm, an error occurred on the card and now my instance in the cloud is not working.\nI’m getting the authentication error, because the server is not online, but I should be able to open the realm locally but I can’t.",
"username": "Royal_Advice"
},
{
"code": "",
"text": "@Royal_Advice Realm works offline but if your user session has expired you will need to get a new token from the server which requires network access. Why not re-enable your cloud instance?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "about re-enable: thats what im trying to do but, im update the billing payment but its not automatically worked for me, i open a ticket, but not answer until nowabout open the realm:\nwhen i deactivate my internet i can open the realm locally, but suppose that if i am unable to stabilize a connection to the server, i will not be able to access my realm locally?",
"username": "Royal_Advice"
},
{
"code": "",
"text": "@Royal_Advice If you do not have a valid User or the User has expired then you will not be able to open the synced Realm.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I have a current user, in this case, that I can’t get a connection to the server, will I really be unable to open the realm locally?",
"username": "Royal_Advice"
},
{
"code": "",
"text": "@Royal_Advice CurrentUser is cached locally under the hood - so even if you go offline you can still open the realm",
"username": "Ian_Ward"
},
{
"code": "",
"text": "yes, but if im online, with a currentUser in cache, and i cant authenticate on the server, im unable to open the realmwithout access to the cloud, will I not be able to open my realm locally?",
"username": "Royal_Advice"
},
{
"code": "",
"text": "I’m not sure how your app is coded but if you are calling .login() or using asyncOpen() these APIs require connectivity in order to function. In fact, if you call login and get rejected this may invalidate your previous valid currentUser",
"username": "Ian_Ward"
},
{
"code": "",
"text": "is there anything i can explain better so you can help me?",
"username": "Royal_Advice"
},
{
"code": "",
"text": "Are you calling login() or asyncOpen() when attempting to open the realm? What is the error?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I’m check if currentUser exists in cache, if true i call Realm.open(config), else call login.\nWith my cloud instance offline, i know login will not authenticate, but if im already have an user authenticated in cache with currentUser, when i call Realm.open i get auth error, because i cant make a request to it, but if i turn off the internet, i can connect locally with new Realm(config)am i doing something wrong?\nthanks for responding and for being patient",
"username": "Royal_Advice"
},
{
"code": "new Realm",
"text": "@Royal_Advice Yes new Realm is the way to go in JS to get around this - Realm.open will attempt to make a call to the server side",
"username": "Ian_Ward"
},
{
"code": "…config …\nsync: {\nurl: serverUrlFull,\nerror: (error) => {\nconsole.log(error.name, error.message)\n}\n}",
"text": "look, with Realm.open, if i get auth error from server (in this case with the server offline), will my current user be invalidated? all them, if i have more than 1?…config …\nsync: {\nurl: serverUrlFull,\nerror: (error) => {\nconsole.log(error.name, error.message)\n}\n}",
"username": "Royal_Advice"
},
{
"code": "",
"text": "@Royal_Advice The first time you open a realm (basically the first time you run the app) you should use Realm.open() - on subsequent openings of the app you should use the new Realm API to open the already cached Realm on disk. The way to check which way you should open the Realm is to use to use different code paths on app start that are gated by a check to SyncUser.current() != NULL",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi mate. Can you share me your code to open database locally ? I want my app to open local database when my internet is off. But it doesn’t return anything.",
"username": "Promit_Rai"
},
{
"code": "",
"text": "yes sure, tell me the version you are using please",
"username": "Royal_Advice"
},
{
"code": "",
"text": "I am using:\nrealm: 10.0.0-rc.2,\nRN: 0.63.2,",
"username": "Promit_Rai"
},
{
"code": "",
"text": "The code was for version <= 6\nim trying to figure it out to handle data without connection after i have synced a realm in version >= 10",
"username": "Royal_Advice"
}
] | Local Realm open after synchronized on realm cloud | 2020-07-08T15:56:25.080Z | Local Realm open after synchronized on realm cloud | 8,505 |
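A hedged sketch of the pattern Ian describes, written against the legacy (pre-v10) Realm JS SDK used in this thread; serverUrlFull, authUrl, username and password are placeholders, and API names differ in Realm v10+.

```js
async function openAppRealm() {
  const cachedUser = Realm.Sync.User.current;   // cached on disk after the first successful login
  if (cachedUser) {
    // Subsequent launches: open the local copy directly, no server round-trip required.
    const config = cachedUser.createConfiguration({ sync: { url: serverUrlFull } });
    return new Realm(config);
  }
  // First launch: log in and let Realm.open() wait for the initial download (needs connectivity).
  const creds = Realm.Sync.Credentials.usernamePassword(username, password);
  const user = await Realm.Sync.User.login(authUrl, creds);
  const config = user.createConfiguration({ sync: { url: serverUrlFull } });
  return Realm.open(config);
}
```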
null | [
"replication",
"change-streams"
] | [
{
"code": "",
"text": "Hello all and thanks admins for accepting me.I am having a problem with mongodb (version 4.2), specifically with the Change Stream functionality.I have a ReplicaSet cluster consisting of 1 primary, 1 secondary and 1 Arbiter and, in the code (Spring boot), I have a .watch () process on a collection of interest.Basically, the stream works fine. When an insert / update operation occurs, the event is recognized and the document is streamed correctly.However, when one of the two nodes (either the primary or the secondary) goes down, the watcher stops streaming anything.Update/insert operations continue fine. Therefore, the program keeps interfacing correctly with the database, even after re-election of the primary.However, the stream is blocked. As soon as I restart one of the two nodes, the stream immediately resumes correctly and also shows me the events not streamed previously.Can anyone help me to solve this problem?Thanks in advance.",
"username": "Giovanni_Desiderio"
},
{
"code": "",
"text": "Hi @Giovanni_Desiderio,Welcome to MongoDB community!Starting MongoDB 3.6 change streams are using read concern “majority” by default which means that only majority node commited data will be streamed.When your PSA replica has only one data node there is no majority commited data therefore it won’t streamThe good news for you is 4.2+ changestream has disabled the required need for read concern majority but you have to disable this default behaviour:Btw its a recommended change for better stability for PSA replica sets anyway.we recommend moving to PSS and replacing arbiter with a secondary for overal rs HA…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "First of all, thanks for your answer.\nAre you suggesting that if I set “replication.enableMajorityReadConcern” equals to false within the .conf file of each of my replicaSet’s member, will my collection.watch() be able to stream insert/update anyway?\nHonestly, I already tried it but my .watch() still does not work when one of the two data-bearing node is down… any other tip?",
"username": "Giovanni_Desiderio"
},
{
"code": "",
"text": "Change streams do not report events that haven’t been majority-committed. Change stream will wait until the event has propagated to a majority of nodes. See this section in docs.\nThis is because an event which has not yet been majority-committed may be rolled back , and this also can lead to change stream becoming non-resumable.",
"username": "Katya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Change events stream stops working when a node fails in ReplicaSet | 2020-10-30T01:10:23.210Z | Change events stream stops working when a node fails in ReplicaSet | 4,558 |
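For reference, a minimal mongo shell sketch of the watch loop being discussed (the collection name is illustrative). Events only surface once they are majority-committed, which is why a PSA replica set appears to stall while a data-bearing member is down unless the read concern default Pavel mentions is changed.

```js
const watchCursor = db.orders.watch();
while (!watchCursor.isExhausted()) {
  if (watchCursor.hasNext()) {
    printjson(watchCursor.next());   // insert/update/delete events, once majority-committed
  }
}
```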
null | [
"containers",
"configuration"
] | [
{
"code": "",
"text": "I am deploying mongo db 4.4.1 (windows servercore 1809 image) container on K8S. The memory limit on the ‘pod’ is set at 300M. However the container start fail with out of memory exception. Looking deeper in the logs, it turns out that wired tiger is trying build a cache of size ~3GB which is resulting in the exception.Is this a known issue or am I missing something here?relevant log lines -{“t”:{\"$date\":“2020-11-02T12:49:55.088+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“main”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{\"$date\":“2020-11-02T12:50:01.066+00:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{\"$date\":“2020-11-02T12:50:01.067+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648602, “ctx”:“main”,“msg”:“Implicit TCP FastOpen in use.”}\n{“t”:{\"$date\":“2020-11-02T12:50:01.076+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:6496,“port”:27017,“dbPath”:“C:/PluginDataFolder”,“architecture”:“64-bit”,“host”:“documents-0”}}\n{“t”:{\"$date\":“2020-11-02T12:50:01.076+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:23398, “ctx”:“initandlisten”,“msg”:“Target operating system minimum version”,“attr”:{“targetMinOS”:“Windows 7/Windows Server 2008 R2”}}\n{“t”:{\"$date\":“2020-11-02T12:50:01.076+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“4.4.1”,“gitVersion”:“ad91a93a5a31e175f5cbf8c69561e788bbc55ce1”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“windows”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2020-11-02T12:50:01.076+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Microsoft Windows Server 2019”,“version”:“10.0 (build 17763)”}}}\n{“t”:{\"$date\":“2020-11-02T12:50:01.076+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“net”:{“bindIp”:\"*\"},“security”:{“authorization”:“enabled”},“storage”:{“dbPath”:“C:\\PluginDataFolder”}}}}\n{“t”:{\"$date\":“2020-11-02T12:50:01.089+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=3071M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],”}}\n{“t”:{\"$date\":“2020-11-02T12:50:01.093+00:00”},“s”:“E”, “c”:“STORAGE”, “id”:22435, “ctx”:“initandlisten”,“msg”:“WiredTiger error”,“attr”:{“error”:12,“message”:\"[1604321401:93157][6496:140717780783712], wiredtiger_open: __wt_calloc, 52: memory allocation of 32237280 bytes failed: Not enough space\"}}",
"username": "Hemant_Jain"
},
{
"code": "mongodlxccgroupsstorage.wiredTiger.engineConfig.cacheSizeGBmemLimitMB",
"text": "A memory limit is not the same as available memory(at least in docker, not sure about k8s, but I assume it is the same).To limit mongod wiredtiger cache size, specifically set it in the configuration or via command line arguments.If you run mongod in a container (e.g. lxc , cgroups , Docker, etc.) that does not have access to all of the RAM available in a system, you must set storage.wiredTiger.engineConfig.cacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container. See memLimitMB .",
"username": "chris"
},
{
"code": "",
"text": "To limit mongod wiredtiger cache size, specifically set it in the configuration or via command line argumentsHi @Hemant_Jain,While the documentation note pointed out by Chris is a solid workaround, it does predate an improvement to detection of the memory constraint within a container/cgroup versus the total system memory: https://jira.mongodb.org/browse/SERVER-16571 (fixed in 3.6.13+, 4.0.9+, and 4.2+).However, looking into this change further it appears to have been made specifically for Linux environments and would not help with a deployment running Windows in a container.Can you please raise a new improvement issue in the SERVER project in the MongoDB JIRA issue tracker: http://jira.mongodb.org/browse/SERVER?I can raise an issue on your behalf, but I expect the triage team may need further information on your deployment environment so it would be best for you to report directly.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks @Stennie_X and @chris for the suggestion.\nI shall raise the issue as indicated.",
"username": "Hemant_Jain"
},
{
"code": "",
"text": "@Hemant_Jain\nCould you please confirm - are you starting MongoDB using the Mongodb Enterprise Operator? Or just the mongodb container without an Operator?\nIf it’s the Operator then the ubuntu is used as a based image for Database images and there should be no issues with wired tiger cache calculation.\nAlso as a small “nit” - the wired tiger memory shown in the logs is 32 MB, not GB.",
"username": "Anton_Lisovenko"
},
{
"code": "",
"text": "@Anton_Lisovenko I am not using the enterprise operator rather just the mongodb container.Also as a small “nit” - the wired tiger memory shown in the logs is 32 MB, not GB.Yes, you are right about the failed memory allocation, it failed while allocating a chunk of 32M. But my concern is about the ‘cache size’ wired tiger has decided to build (see the part in bold){“t”:{\"$date\":“2020-11-02T12:50:01.089+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,",
"username": "Hemant_Jain"
},
{
"code": "",
"text": "Raised the bug in Jira - https://jira.mongodb.org/browse/SERVER-52596",
"username": "Hemant_Jain"
}
] | Memory limit on K8S pod is not honored for WiredTigerCacheSize calculation | 2020-11-02T15:47:51.274Z | Memory limit on K8S pod is not honored for WiredTigerCacheSize calculation | 6,231 |
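If the cacheSizeGB workaround quoted above is applied, a quick mongo shell check (a sketch, not from the thread) confirms what cache ceiling the running mongod actually picked:

```js
const cacheBytes = db.serverStatus().wiredTiger.cache["maximum bytes configured"];
print("WiredTiger cache limit: " + Math.round(cacheBytes / 1024 / 1024) + " MB");
```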
null | [] | [
{
"code": "",
"text": "I have 30 years experience with RDBMS, so MongoDB design principles are new to me. I’m also designing my app using TDD principles, which is also new to me. I’m writing an API to do all the database operations.I was writing a test to check that an error is raised if an attempt is made to set the required property of a field that does not exist (in RDBMS terms) or does not have a document using that field (in MongoDB terms).If I were using an RDBMS, then I would expect the RDBMS to raise an error and I wouldn’t even have to think about it. In MongoDB, if I add a required validator to a field which is not used (or doesn’t exist), then the validator is created without an error and any previously created documents fail validation. This behaviour by MongoDB is deliberate because it fits in with the principles of the flexibility of MongoDB.My inclination is to only allow the required validator to be created on a non-existent if there are no documents in the collection. So, any new documents created will need to have a value in this field.If the validator is created on a collection has documents, then they will fail validation and will need to be sorted out. In this scenario, the programmer should amend the documents first and add the validator once this has been done. The likelihood is that the creation of the validator is a programming error (perhaps the name of a field was spelled incorrectly), so an error should be raised.What do you think?",
"username": "Julie_Stenning"
},
{
"code": "$jsonSchema",
"text": "If the validator is created on a collection has documents, then they will fail validation and will need to be sorted out. In this scenario, the programmer should amend the documents first and add the validator once this has been done.Hi @Julie_Stenning,You can use the $jsonSchema query operator to check existing documents for compliance with new or proposed schema validation. You can use this to assert that there are no invalid documents found before applying a new validator.For example usage, see my DBA Stack Exchange answer on How to find all invalid document based on jsonSchema validator?.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks Stennie. That helps me when I want to check whether or not the documents are valid. What is your view about whether or not I should add a required validator to a field that doesn’t exist?",
"username": "Julie_Stenning"
}
] | Good Practice - Setting the required property of a field that "does not exist" | 2020-11-02T22:10:09.858Z | Good Practice - Setting the required property of a field that “does not exist” | 3,878 |
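A sketch of the check Stennie suggests, using a made-up validator (a required title string on an illustrative collection) rather than anything from the thread: querying with $nor and $jsonSchema returns the documents that would fail the proposed rules, so a count of 0 means the validator can be added safely.

```js
const proposedSchema = {
  bsonType: "object",
  required: [ "title" ],
  properties: { title: { bsonType: "string" } }
};

// Documents that do NOT satisfy the proposed schema:
db.movies.find({ $nor: [ { $jsonSchema: proposedSchema } ] }).count()
```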
null | [
"python"
] | [
{
"code": "",
"text": "Are there plans to develop an official Python driver for Realm?",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "Not at this time, although there is a Python driver for Atlas here:",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi @Platon_workaccount - thanks for your question!As Ian says, there aren’t any imminent plans to work on a Python driver. But I was wondering if you could tell us what kind of thing you wanted to build with one?Mark",
"username": "Mark_Smith"
},
{
"code": "",
"text": "I am developing a bioinformatics toolkit and I want to replace SQLite with serverless NoSQL-DBMS, which provides the most pythonic syntax (without any ORM). My other ongoing and planned projects also need this solution.",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "Oh, in that case I think you probably want to look at the Python drivers for MongoDB that @Ian_Ward recommended - I think they’re what you want. Check out MongoDB Atlas for a hosted database-as-a-service!",
"username": "Mark_Smith"
},
{
"code": "",
"text": "I develop software for ordinary personal computers (e.g. laptops). MongoDB is redundant for most of my projects. It is a huge enterprise product for large servers. Atlas is a commercial product that is not suitable for integration into FLOSS. TinyDB would be a good fit, but it is not capable of creating indices. SQLite is the best solution at the moment. But the SQL instructions integrated into Python code look ugly. ORMs give an inadequate complex syntax. Realm looks attractive, but for some reason it does not have a driver of the very popular Python language.",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "Ah, I see.MongoDB is a server product, but I wouldn’t describe it as a huge enterprise product (although it can totally be run that way). We support the Raspberry Pi as a host platform! On the other hand, it sounds like you need an embedded database along the lines of SQLite, but without the SQL.The reason Realm doesn’t currently support Python is because it’s traditionally been a mobile library, and so iOS and Android were the target platforms, hence Java/Kotlin and Swift were the supported language. We’ve been adding more support for other platforms but it’s still relatively early days. As a Python developer myself, I definitely hope we add Python support to Realm in the not-too-distant future!Have you tried using SQLAlchemy on top of SQLite to work with your models? I think it may give you some of what you’re looking for, but you would need to set up the object-table mappings yourself.",
"username": "Mark_Smith"
},
{
"code": "",
"text": "Thanks for the answers!Have you tried using SQLAlchemyIn my practice, ORMs make the code bloated, difficult to maintain, exposed to bugs.",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "Seems like you have a cool use case! I just searched feedback.mongodb.com and didn’t find any feedback about a Python SDK for Realm but I bet you’re not the only one out there who would love to use Realm in their Python apps.I recommend that you create a new suggestion on the Realm section of the feedback site so that others can upvote the idea to help the team get a better feel for how many folks want a Python SDK. I’ll upvote it!",
"username": "nlarew"
},
{
"code": "",
"text": "Unfortunately, https://feedback.mongodb.com/ does not open. Its IP 104.17.30.92 is blocked in Russia.",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Are there plans to develop an official Python driver for Realm? | 2020-10-30T23:39:04.831Z | Are there plans to develop an official Python driver for Realm? | 7,042 |

null | [
"swift"
] | [
{
"code": "platform :ios, '12.0'\n\ntarget 'toto' do\n # Comment the next line if you don't want to use dynamic frameworks\n use_frameworks!\n\n # Pods for toto\n pod 'RealmSwift', '=10.1.0'\n\nend\n\n",
"text": "I have just upgraded to the latest RealmSwift - the pod version ‘=10.1.0’.When I build any project now, I get 6 warnings - or Documentation Issues with the Realm Swift code. The program runs fine, I was just worried about the documentation issues.This is my pod fileCheersRichard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Richard_Krueger the warnings have been fixed but have yet to be merged into the latest version of Realm Cocoa.",
"username": "Lee_Maguire"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Xcode warning - Documentation issues | 2020-10-29T20:18:49.359Z | Xcode warning - Documentation issues | 1,741 |
null | [
"swift"
] | [
{
"code": "import SwiftUI\nimport RealmSwift\n\nlet app = RealmSwift.App(id: \"tasktracker-qczfq\")\n@main\nstruct LandManagementApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n ContentView()\n }\n }\n}\nError:Module 'RealmSwift' has no member named 'App'\nimport SwiftUI\nimport RealmSwift\n\nlet app = App(id: \"tasktracker-qczfq\")\n@main\nstruct LandManagementApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n ContentView()\n }\n }\n}\nError:Argument passed to call that takes no arguments\nError:Protocol type 'App' cannot be instantiated\n",
"text": "I can not use App directly,nor can use RealmSwift.App,what can i do to resolve the conflict?I installed Realm with swift manager package and did not change any build settings.",
"username": "Stephen_Zhuang"
},
{
"code": "",
"text": "Try not putting the “import RealmSwift” statement in this file. You don’t need it there anyways. I have the same problem if I put it in the app file. You can put it in the other files just fine.",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Stephen_Zhuang Realm Sync is not available via SPM, please use Cocoapods if you use to use full MongoDB Realm functionality.",
"username": "Lee_Maguire"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to resolve the conflict between RealmSwift.App and SwiftUI.App in SwiftUI after iOS 14.0? | 2020-10-30T09:42:56.524Z | How to resolve the conflict between RealmSwift.App and SwiftUI.App in SwiftUI after iOS 14.0? | 2,687 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "MongoDB realm won’t save data to the mongo atlas cluster after linking the Realm app to the mongo atlas. When I try registering a user, it does register it and also saves it somewhere but I am not sure where but it does not save it in the mongo atlas cluster. Please, I really need help to solve this. After registering the user I get some error and then it registers it successfully. Here is the error\nScreen Shot 2020-10-28 at 1.40.36 PM898×900 143 KB\n",
"username": "Noah_Nelson"
},
{
"code": "",
"text": "@Noah_Nelson Could you please share how you are trying to save data to Atlas? Also could you please share any logs in the MongoDB Realm UI that may help narrow down the issue.",
"username": "Lee_Maguire"
}
] | MongoDB realm on swift won't save data to the mongo atlas cluster | 2020-10-28T20:47:15.969Z | MongoDB realm on swift won’t save data to the mongo atlas cluster | 2,122 |
null | [] | [
{
"code": "",
"text": "Hi,i’m sony and i’m new to this mongodb stuff. i’m a DBA and mostly work with RDBMS/SQL. Need your help to clearly my mind for some of question i had in mind:i cant find mongodb architecture like i found for oracle architecture, architecture that tell about how mongodb work, the buffer, log wal etc Did mongodb have this arch?if this mongodb have a metadata collection like in rdbms hv metadata table that i can use to query ?how to count the collection in database ? mostly in google i only found how to count document in collection.did mongodb adopt read uncommited concept ? how to make it read committed ?Many Thanks",
"username": "sony_vipassi"
},
{
"code": "db.collection.stats()db.stats()mongodbStatsmajority",
"text": "Welcome to the MongoDB community @sony_vipassi!i cant find mongodb architecture like i found for oracle architecture, architecture that tell about how mongodb work, the buffer, log wal etc Did mongodb have this arch?It depends what level of detail you are looking for.For some high level info on disk write behaviour, see FAQ: MongoDB Storage. For a more technical dive into architecture, see A Technical Introduction to WiredTiger (MongoDB 3.0) and the Engineering Chalk & Talks sessions on the path to transactions in later versions of MongoDB.A general difference from RDBMS is that MongoDB tries to have reasonable defaults so you don’t have to do a lot of tuning of database configuration parameters.After installation (and following the Security Checklist), initial tuning of an on-premise deployment is more focused on your O/S and filesystem settings per the Operations Checklist and Production Notes in the MongoDB documentation.For a more comprehensive introduction I recommend taking the Free DBA training courses at MongoDB University as they will help with foundational knowledge and some hands-on learning.if this mongodb have a metadata collection like in rdbms hv metadata table that i can use to query ?MongoDB does not have the full equivalent of an RDBMS information schema. For example, the information required to interpret a document (field names and types) is embedded in the document format. There is no strict requirement for every document in a collection to have the same fields or field types.There are commands to list databases, collections, and indexes. There is also some basic metadata stored with databases and collections, but that is typically for configuration and diagnostics. For example, see db.collection.stats() .Document schema is intentionally flexible, but you can impose rules on data writes (inserts & updates) using Schema Validation.For a great introduction to data modelling and design patterns (and comparison with RDBMS), please see Data Modeling with MongoDB.how to count the collection in database ? mostly in google i only found how to count document in collection.You can use the db.stats() helper in the mongo shell, or call the underlying dbStats command from any MongoDB driver.did mongodb adopt read uncommited concept ? how to make it read committed ?The concept of durability in a distributed database is more nuanced than “read committed”. For example, you may wish to confirm that a write has been accepted by a majority of the members of a replica set and will not be rolled back. For more information on read concern and isolation levels please review the documentation on Read Isolation, Consistency, and Recency and Read Concern Levels.I know that’s a lot of info to digest, but hopefully this addresses your questions and gets you started on the right learning path.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "majority",
"text": "Hi Stenie,thanks for the explanation. its great.\n*The concept of durability in a distributed database is more nuanced than “read committed”. For example, you may wish to confirm that a write has been accepted by a [ majority ] of the members of a replica set and will not be rolled back. For more information on read concern and isolation levels please review the documentation on Read Isolation, Consistency, and Recency and Read Concern Levels.about this i hv read that and because of that i conclude that mongodb is adopt read uncommited and for like guarantee we must use majority. am i right ?Regards",
"username": "sony_vipassi"
},
{
"code": "setDefaultRWConcern",
"text": "about this i hv read that and because of that i conclude that mongodb is adopt read uncommited and for like guarantee we must use majority. am i right ?Hi,That is correct … and explained further in the two documentation links that you quoted on Read Isolation and Read Concern Levels.As mentioned in the Read Concern documentation, in MongoDB 4.4+ it is possible to configure a global default read or write concern configuration for a replica set or sharded cluster using the setDefaultRWConcern administrative command.Regards,\nStennie",
"username": "Stennie_X"
}
] | Clear my doubt on some question for mongoDB DBA | 2020-11-02T08:12:22.953Z | Clear my doubt on some question for mongoDB DBA | 2,089 |
null | [
"other-languages"
] | [
{
"code": "",
"text": "Please, share an example of the implementation of the Document Versioning Pattern for DenoThe Document Versioning Pattern - When history is important in a document",
"username": "Yuriy_Tigiev"
},
{
"code": "",
"text": "Hi @Yuriy_Tigiev,Welcome to MongoDB community!Not sure what you mean by this pattern to deno?Deno seems to be a third party driver for MongoDB. What difficulty do you have using the presented idea?Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Document Versioning Pattern & Deno | 2020-11-02T18:56:20.473Z | Document Versioning Pattern & Deno | 1,477 |
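Since the pattern itself is driver-agnostic, here is a hedged mongo shell sketch of the Document Versioning Pattern; most drivers, including the Deno driver linked above, expose equivalent calls. The collection and field names are illustrative only: the current document lives in one collection, and each superseded version is copied to a revisions collection before the update.

```js
const current   = db.policies;
const revisions = db.policies_revisions;

function updateWithHistory(id, changes) {
  const existing = current.findOne({ _id: id });
  if (existing) {
    // Archive the outgoing version with its own _id, keeping a pointer to the original.
    const copy = Object.assign({}, existing, { _id: ObjectId(), docId: existing._id });
    revisions.insertOne(copy);
  }
  // Apply the change and bump the revision counter on the live document.
  current.updateOne({ _id: id }, { $set: changes, $inc: { revision: 1 } });
}
```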
null | [
"dot-net",
"atlas-device-sync",
"android",
"legacy-realm-cloud"
] | [
{
"code": "@PrimaryKey\n@Required\nvar _id: Int = 0\n[PrimaryKey]\n[Required]\n[MapTo(\"_id\")]\npublic int Id { get; set; }\n",
"text": "Hi IanThe migration document looks very useful (essential).I have a question…I have a realm on the realm cloud which is accessed by both Android and .Net apps.\nThey currently both use primary key ids of type Long.\nUsing MongoDB Realm, I assume I can’t use Android ObjectId because there is no .Net type.Would the following workANDROID:.NET:Thanks.",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "@Richard_Fairall The .NET SDK support ObjectID - see here:\nhttps://www.mongodb.com/article/realm-database-cascading-deletes#objectids",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks Ian, I finally have connection to a MongoDB realm from Android.\nThanks\nRichard",
"username": "Richard_Fairall"
},
{
"code": "{\n \"title\": \"Attendance\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"_partition\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"int\"\n },\n \"_partition\": {\n \"bsonType\": \"string\"\n },\n \"visitId\": {\n \"bsonType\": \"int\"\n },\n }\n}\n",
"text": "Hi Ian\nI get the following error on saving a scene\nschema for namespace (BookingRealm.Attendance) must include partition key \"BookingKey\"BookingKey is the actual partition key.\nI’ve tried and scoured documents, but I don’t know how to set the partitionKey in Schema.Below is a simple example of an offending schema…",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "I don’t see BookingKey as a field in your document schema? It needs to be present in the document data model itself",
"username": "Ian_Ward"
},
{
"code": " [MapTo(\"_partition\")]\n public string _partition { get; set; } = \"Bookarama\";\n",
"text": "Hi\nI’m Android and Windows.\nI changed the names for the post. It’s Bookarama\n*It needs to be present in the document data model itself…So in my data model…\npublic class Zone : RealmObject, PlaceIf\n{\n[PrimaryKey]\n[MapTo(\"_id\")]\npublic int Id { get; set; }…\\Rich",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "@Richard_Fairall I’ll take a look - could you email me your cloud Realm app URL - the webpage you are viewing for the Realm App? As well as your client side schema? [email protected]",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Realm/io …\nprivate val REALM_INSTANCE = “bookaramasync.de1a.cloud.realm.io”\nprivate val REALM_AUTH_URL = “https://$REALM_INSTANCE”;// + “/auth”\nprivate val REALM_URL = “realms://$REALM_INSTANCE”\nprivate val REALM_PATH = REALM_URL + “/bookarama”When you say ‘client-side schema’ - the only schema I have been involved with here are the ones entered on the Data Access → Schema page for the Bookarama Realm. I have around 20 classes (Documents) each one has a separate schema. Strangely, some could be ‘Saved’ on the Schema page, others not. Some are currently disfunctional because I had to remove the _id and _partition fields in order to save them! Are these what you need?\nbtw the Android app is working - I have viewed the data in Realm Studio - all partition keys are set to the Partition Key.",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "Lets start with URL - can you please email me so I can see what you see? Also, it might be easier to use developer mode in this case since you are migrating to MongoDB Ream - presumable you already have a legacy realm sync app so you just need to switch where you are syncing to and the schema will be set up for you on the server side",
"username": "Ian_Ward"
},
{
"code": "",
"text": "HiIf I can backtrack and create schemas automatically then I will go into Lockdown happy.\nCan I please have the URL for me to switch to?The system is in Developer mode, but I never saw any schemas produced.\nI explained below that the Android app is connecting and working.\nThe Windows app connects but does not sync.\nI don’t know what happens to the Schemas I’ve created already ??Here’s where I’m at for MondoDB,ANDROID\nCreated a cluster, linked the realm app.\nI cloned entire software for the Realm.io app, modified the connection business and thought life was going to be easy.\nThe app creates data on the realm ( not all fields are populated) and runs OK.WINDOWS\nI cloned entire software for the Realm.io app (Visual Studio 19), modified the connection business and it connected but did not sync (no schemas - oh those schemas)\nHere’s where the problems started.I started to create Schemas which are clearly incomplete…\nIt seems that any classes(documents?) which have no data in the realm cannot be saved with the _id and _partition fields.\nI have checked the realm created by the Android app and it shows the empty collections.\nI sent the realm URL earlier.",
"username": "Richard_Fairall"
},
{
"code": "",
"text": "It appears that any of the classes (collections) that are in my data model that have not had any data submitted to the Realm (ie empty on RealmStudio) cannot have Schemas with properties _id and _partition, otherwise they cannot be saved. I get the red ink:\nschema for namespace (BookaramaRealm.Attendance) must include partition key \"Bookarama\"",
"username": "Richard_Fairall"
}
] | Migrating app using both Android & .Net SDKs from Realm Cloud to MongoDB Realm | 2020-10-28T20:32:52.233Z | Migrating app using both Android & .Net SDKs from Realm Cloud to MongoDB Realm | 5,028 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "Hello,I am running Realm for triggering a function on change events. this function does index some documents on another search db. Recently I found out, that my documents in the search db are not updated when I’m using the initializeUnorderedBulkOp (nodejs) operator and do my updates as bulk e.g. bulk,find(…).updateOne(…) => search db not updated.Normal updates without bulk operators work fine.Are there any known limitation when on realm in regard to bulk update operationsmongodb version 4.2.10 Wired Tiger\nnodejs version 12.X\nmongoose versionthanks in advance",
"username": "Sami_Karak"
},
{
"code": "",
"text": "Hi @Sami_Karak - do you mind sharing your entire code snippet - it’s hard to tell what issue you might be running into with Triggers without seeing the function code.",
"username": "Sumedha_Mehta1"
}
] | Realm bulk update not working | 2020-10-29T19:59:38.360Z | Realm bulk update not working | 2,265 |
null | [
"dot-net",
"atlas-device-sync",
"legacy-realm-cloud",
"xamarin"
] | [
{
"code": "PrintGradepublic class PrintGrade : RealmObject\n{\n\t[PrimaryKey]\n\t[MapTo(\"id\")]\n\tpublic int ID { get; set; }\n\n\t[Required]\n\t[MapTo(\"grade\")]\n\tpublic string Grade { get; set; }\n\n\t[Required]\n\t[MapTo(\"name\")]\n\tpublic string Name { get; set; }\n\n\t[MapTo(\"active\")]\n\tpublic bool Active { get; set; }\n}\npublic static class AppConfig\n{\n\t...\n\tpublic const string CommonRealm = \"DriverAppCommon\";\n\t...\n}\n\t\npublic partial class RunSheet : ContentPage\n{\n\t...\n\tprivate readonly string _realmCommon = AppConfig.CommonRealm;\n\tprivate Realm _commonRealm;\n\t...\n\tprotected override async void OnAppearing()\n\t{\n\t\t...\n\t\t_commonRealm = await OpenRealm(_realmCommon, _user).ConfigureAwait(true);\n\n\t\tif (_commonRealm != null)\n\t\t{\n\t\t\tif (!await SynchroniseRealm(_commonRealm, false, true).ConfigureAwait(true))\n\t\t\t\tthrow new Exception(\"Failed to download the reference data required by the app.\");\n\t\t}\n\t}\n\t...\n\tprivate async Task<Realm> OpenRealm(string realmName, User user)\n\t{\n\t\tRealm realm = null;\n\t\ttry\n\t\t{\n\t\t\tvar config = ConnectionServices.GetRealmConfiguration(realmName, user);\n\t\t\trealm = ConnectionServices.ConnectToSyncServer(config);\n\t\t}\n\t\tcatch (Exception ex)\n\t\t{\n\t\t\t...\n\t\t}\n\t\treturn realm;\n\t}\n\tprivate async Task<bool> SynchroniseRealm(Realm realm, bool upload, bool download, int timeout = 0)\n\t{\n\t\tbool synchronised = false;\n\t\ttry\n\t\t{\n\t\t\tvar session = realm.GetSession();\n\t\t\tThread.Sleep(150);\n\t\t\tusing CancellationTokenSource cts = new CancellationTokenSource();\n\n\t\t\tswitch (timeout)\n\t\t\t{\n\t\t\t\tcase -1:\n\t\t\t\t\t// No timeout - wait until finished...\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tcts.CancelAfter(TimeSpan.FromSeconds(timeout));\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (download)\n\t\t\t{\n\t\t\t\tif (timeout == -1)\n\t\t\t\t\tawait SynchroniseRealmData(session, download).ConfigureAwait(true);\n\t\t\t\telse\n\t\t\t\t\tawait SynchroniseRealmData(session, download).CancelAfter(cts.Token).ConfigureAwait(true);\n\t\t\t}\n\t\t\tif (upload)\n\t\t\t{\n\t\t\t\t...\n\t\t\t}\n\t\t\tsynchronised = true;\n\t\t}\n\t\tcatch (OperationCanceledException)\n\t\t{\n\t\t\tAnalytics.TrackEvent(nameof(SynchroniseRealm), new Dictionary<string, string> {\n\t\t\t\t{\"User\", _userName },\n\t\t\t\t{ download ? \"DownLoad\" : \"Upload\", $\"Timed out after {timeout} seconds\" }\n\t\t\t});\n\t\t}\n\t\tcatch (Realms.Exceptions.RealmException ex)\n\t\t{\n\t\t\tCrashes.TrackError(ex, new Dictionary<string, string> { { \"Synchronise Realm\", (IsDownloading ? \"Downloading\" : \"Uploading\") } });\n\t\t}\n\t\tcatch (Exception ex)\n\t\t{\n\t\t\tCrashes.TrackError(ex, new Dictionary<string, string> { { \"Synchronise Realm\", (IsDownloading ? 
\"Downloading\" : \"Uploading\") } });\n\t\t}\n\t\treturn synchronised;\n\t}\n\tprivate static async Task SynchroniseRealmData(Session session, bool download)\n\t{\n\t\tif (download)\n\t\t\tawait session.WaitForDownloadAsync().ConfigureAwait(true);\n\t\telse\n\t\t\tawait session.WaitForUploadAsync().ConfigureAwait(true);\n\t}\n}\n\npublic static class ConnectionServices\n{\n\tprivate const string _commonRealm = \"DriverAppCommon\";\n\n\tpublic static Realm ConnectToSyncServer(FullSyncConfiguration config)\n\t{\n\t\treturn Realm.GetInstance(config);\n\t}\n\tpublic static FullSyncConfiguration GetRealmConfiguration(string realmName, User user)\n\t{\n\t\tFullSyncConfiguration config;\n\t\tUri serverUrl;\n\t\tif (realmName == _commonRealm)\n\t\t{\n\t\t\tserverUrl = new Uri(realmName, UriKind.Relative);\n\t\t\tconfig = new FullSyncConfiguration(serverUrl, user)\n\t\t\t{\n\t\t\t\tObjectClasses = new[] { typeof(PrintGrade) }\n\t\t\t};\n\t\t}\n\t\telse\n\t\t{\n\t\t\t...\n\t\t}\n\t\treturn config;\n\t}\n}\n",
"text": "I have created a global realm for use in a Xamarin Forms app with a PrintGrade class as follows;And I use the following code to configure, open and synchronise the realm;My problem is the common realm never finishes synchronising. I have tried it with both an indefinite and a 2 minute timeout. The realm only contains 13 records so I would expect it to take seconds to synchronise.From what I have read in the documentation a global realm is read-only for all users and the user I pass to the procedures is the same user from previous code who has been authenticated and downloaded their own realm already. Am I missing something?",
"username": "Raymond_Brack"
},
{
"code": " public static Task<Realm> OpenRealm(User user)\n {\n var config = new SyncConfiguration(Partition, user);\n return Realm.GetInstanceAsync(config);\n }",
"text": "@Raymond_Brack Are there any client side and server side logs you can share with us please? Just a hunch, have you tried the asyncOpen API?",
"username": "Ian_Ward"
},
{
"code": "HTTP response: f2530548-b8c3-421e-bcff-bdbfac7e1201 {\"type\":\"https://docs.realm.io/server/troubleshoot/errors#access-denied\",\"title\":\"The path is invalid or current user has no access.\",\"status\":403,\"code\":614}\n",
"text": "@Ian_Ward I get the following error entry in the log;But from what I can determine I am accessing the correct Realm with a valid user;CommonRealmConfig890×292 43.6 KBCommonRealmUser964×266 41.7 KBDo I need to give permission to every user to access the global realm?",
"username": "Raymond_Brack"
},
{
"code": "'*'",
"text": "@Raymond_Brack No you can just use '*' for the userId when you use the applyPermission API to apply the same permission to all users",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward It appears I can’t add the permission using Realm Studio and I couldn’t locate the API documentation. Is there a REST API to handle permissions? I know there is a GraphQL API but this appears to be for querying the data.",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "The API is in the SDK itself, see here:",
"username": "Ian_Ward"
}
] | Can't Synchronise Global Realm | 2020-11-02T04:04:29.970Z | Can’t Synchronise Global Realm | 4,657 |
null | [
"node-js",
"transactions"
] | [
{
"code": "",
"text": "I have an application that predates the transaction api written in Node.js. We have created classes that loosely correspond to database collections. Each class will get its own connection to the database to update the documents. Now, we want to combine several of these updates into a transaction. Do all of the database updates have to be on the same connection (client) to the database as the session for the transaction to work properly?",
"username": "William_Odefey"
},
{
"code": "const client = new MongoClient(uri);\nawait client.connect();\nconst session = client.startSession();\n\n...\n\nconst coll1 = client.db('mydb1').collection('foo');\nconst coll2 = client.db('mydb2').collection('bar');\n\nawait coll1.insertOne({ abc: 1 }, { session });\nawait coll2.insertOne({ xyz: 999 }, { session });\n",
"text": "Hi @William_Odefey, and welcome!Do all of the database updates have to be on the same connection (client) to the database as the session for the transaction to work properly?Transactions are associated with a single session instance. See also MongoDB Transactions.\nSo you won’t be able to use different client (MongoClient) as a session is started from a single client. i.e.Each class will get its own connection to the database to update the documents.Generally you should use a single MongoClient instance for your application lifetime. Each class can have their own collection instance and use a connection from the connection pool, but they should use the same client (as shown on the example above). For example, you could pass the client into a class on construction to share the same client.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Combining database accesses into a transaction | 2020-10-28T16:51:40.439Z | Combining database accesses into a transaction | 1,751 |
null | [
"graphql"
] | [
{
"code": "variables: {query: {lookupId: \"5f62c86c7ad07eaf72d7667d\"}, set: {priceMin: 115, priceMax: 400}}[string]",
"text": "This is the error I got from a recent GQL mutation I’m trying to get working:“reason=“role “non-owner” in “Db.Collection” does not have update permission for document with _id: ObjectID(“5f62c86ddafd4b5d2bfc411b”): could not validate document: \\n\\tpriceMax: Invalid type. Expected: type: undefined, bsonType: decimal, given: [string]\\n\\tpriceMin: Invalid type. Expected: type: undefined, bsonType: decimal, given: [string]”; code=“SchemaValidationFailedWrite”; untrusted=“update not permitted”; details=map[]”But this is the payload I am sending:\nvariables: {query: {lookupId: \"5f62c86c7ad07eaf72d7667d\"}, set: {priceMin: 115, priceMax: 400}}So what is it talking about with [string] ? That doesn’t make any sense. Clearly I am sending a int. Is this a bug?Thanks in advance guys!",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Hey @Lukas_deConantseszn1 - can you give more details about what your graphQL schema looks like and a code snippet of your query (whether it’s from GraphiQL or the client)",
"username": "Sumedha_Mehta1"
}
] | GraphQL Schema validation error message doesn't make sense | 2020-10-30T05:21:46.937Z | GraphQL Schema validation error message doesn’t make sense | 5,443 |
null | [] | [
{
"code": "",
"text": "I’m unable to update documents with the owner permission, I get the following error\nERROR: could not validate document ‘ObjectID(“5f9cda428915f0b29c4bb633”)’ for update",
"username": "AbdulGafar_Ajao"
},
{
"code": "",
"text": "Welcome to the community, @AbdulGafar_Ajao !I do not have a lot of information from your post, but I’m assuming it’s a schema validation error - which means your updated document does not follow the schema defined in Realm.It might be helpful to paste the schema and the document you’re trying to update to better understand what’s going on and confirm that.",
"username": "Sumedha_Mehta1"
}
] | Document update error due to permission issues | 2020-10-31T08:18:18.333Z | Document update error due to permission issues | 2,463 |
null | [
"graphql"
] | [
{
"code": "",
"text": "Hi,\nwith the graphql Api will we need to have an external server for authentication or notifications ?\nOr firebase ?",
"username": "Nabil_Ben"
},
{
"code": "",
"text": "Realm has in-built authentication (email/pass, anonymous, API Key, etc) but you can also build your own authentication system or use a 3rd party (firebase, cognito, auth0) and authenticate to Realm using a JWT. Both approaches are outlined here - https://docs.mongodb.com/realm/authentication/providers/",
"username": "Sumedha_Mehta1"
}
] | With the graphql Api will we need to have an external server for authentication or notifications? | 2020-06-01T20:31:02.293Z | With the graphql Api will we need to have an external server for authentication or notifications? | 2,258 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "I have been playing around with Sync permissions. I have noticed that if one gives a ‘true’ value to the write permissions, it does not matter what restrictions are placed on the read permissions, i.e. write ‘true’ basically implies ‘read’ true, or trumps the read permissions. To me this feels like a bug. Maybe I am missing something.",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Richard_Krueger This is by design - if you have write permissions on a partition then you implicitly have read permissions on that partition. What’s the use case for having write but not read?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward actually there is no use case for write without read. I discovered this issue by messing around with the read permissions and finding out that what I was doing had no effect, because of the “true” in my write permissions. Only after turning off the write permissions was I able to see the effect of my read permissions. Call me old fashion, but this went counter to my Unix purist sensibilities, where read and write are independent variables.My suggestion would be to add something in the documentation to that effect that permissions granted to “write” will automatically convey to “read” permissions.",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Richard_Krueger its here - https://docs.mongodb.com/realm/sync/rules/#overviewWhenever a user opens a synced realm from a client app, Realm evaluates your app’s rules and determines if the user has read and write permissions for the partition. Users must have read permission to sync and read data in a realm and must have write permission to create, modify, or delete objects. Write permission implies read permission, so if a user has write permission then they also have read permission.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "“Write permission implies read permission, so if a user has write permission then they also have read permission.” - I stand corrected.",
"username": "Richard_Krueger"
}
] | MongoDB Realm Sync Permissions | 2020-11-01T02:54:13.160Z | MongoDB Realm Sync Permissions | 2,192 |
null | [
"swift"
] | [
{
"code": "let config = Realm.Configuration()\nlet x = try? Realm.deleteFiles(for: config)\nlet realm = try! Realm()",
"text": "Objective:delete the realm files off disk after app startDiscussion:Realm has a function that allows the developer to delete the local Realm files off diskHowever if realm has been opened or touched in any way e.g. let realm = try! Realm() , realm (in code) is ‘attached’ to those realm files and the function will not allow the files to be deleted.There’s probably an obvious answer but how does one ‘close’ or ‘deinit’ or even ‘nil’ a Realm after it was touched in code so the realm files can be deleted with that function while the application is running.I can manually write other code to delete files but even with that, the Realm is re-created because the Realm (in code) is still alive - it seems that function should be able to handle it.Use CaseHere’s an example;Suppose we have a ToDo application where the user can create a ToDo list and give it a name. For this case the Realm file is named with that name. (note there are obviously other solutions for naming)Then the user decides they don’t want that ToDo list at all and wishes to totally delete it",
"username": "Jay"
},
{
"code": "realm = nil",
"text": "@Jay You guessed it - in Swift, you just set the realm reference to nil - as inrealm = nilThis is a best practice for RealmSwift, especially when dispatching to background threads - once you complete your work on the background, set the reference to nil",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks @Ian_WardI was a bit unclear. Is this the same behavior for a sync’d realm? The local realm files are stored differently for a sync’d realm than an local only which lead to my (unclear) question.Jay",
"username": "Jay"
}
] | 'Close' a Realm to allow file delete | 2020-11-01T13:24:34.081Z | ‘Close’ a Realm to allow file delete | 2,427 |
[
"python"
] | [
{
"code": "",
"text": "HelloI add new column in DB11236×156 15 KBEverything allright2716×222 13.4 KBBut when I try find this user3950×132 14.5 KBThis column is not here11514×68 9.88 KBWhere is my problem?",
"username": "Fungal_Spores"
},
{
"code": "self.idfirst_zero_refcount_ref",
"text": "It’s difficult to tell from the code you’ve provided - it all looks fine, but I guess the problem is elsewhere.Your second screenshot - is that from Compass, or something else?Are you definitely using the exact same value for self.id in both first_zero_ref and count_ref?",
"username": "Mark_Smith"
},
{
"code": "",
"text": "Sorry, I find mistake in my code, maybe someone can delete this topic?Thanks",
"username": "Fungal_Spores"
},
{
"code": "",
"text": "Don’t worry about it. Glad you solved your problem!",
"username": "Mark_Smith"
}
] | Problem with update_one and find_one Pymongo | 2020-10-30T16:23:40.105Z | Problem with update_one and find_one Pymongo | 2,891 |
|
null | [
"app-services-user-auth"
] | [
{
"code": "{ message: ‘auth function result must include id field’,\ncode: 47 }",
"text": "There is a way to handle errors on Custom Function Authentication?\nno matter what i return, always i got this{ message: ‘auth function result must include id field’,\ncode: 47 }",
"username": "Royal_Advice"
},
{
"code": "return \"5f650356a8631da45dd4784c\"\nreturn { \"id\": \"5f650356a8631da45dd4784c\" }\nreturn { \"id\": \"5f650356a8631da45dd4784c\", \"name\": \"James Bond\" }\n",
"text": "Hi @Royal_Advice,A custom function auth requires the function to return either a string of a unique id for authenticated user (so realm could map it to its internal user) or and object with “id” field and the unique value.The above are valid outputs, any other type will result with the received error.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Ok, when all requisites match i return the id, its ok\nbut if i need to handle errors? user exist, password dont match?",
"username": "Royal_Advice"
},
{
"code": "",
"text": "@Royal_Advice just throw an error.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Royal_Advice It sounds like you want to have a meaningful response when authentication fails based on different problems. Unfortunately, Custom Function Authentication offers no way to do this. However, if you look at the last solution in my advice thread, you can set up an Incoming Webhook to call your authentication function directly first to find any potential problems. If you find no problems, you then call login() as usual.Unfortunately, it took many such hacks to get Custom Function Authentication working as desired.",
"username": "Scott_Garner"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to handle errors on Custom Function Authentication? | 2020-11-01T02:24:53.580Z | How to handle errors on Custom Function Authentication? | 3,870 |
null | [
"python"
] | [
{
"code": "",
"text": "I have a flask API and I am saving the data in MongoDB with the help of mongoengine and now I need to update the index programmatically. I don’t want to end up creating a new index each and every time instead I want to update the index with a particular name.Please give me a solution.Thanks in advance.",
"username": "Python_Hub"
},
{
"code": "create_indexkwargsnameModel.create_index([(\"field_one\", 1), (\"other_field\", -1)], name=\"my_index_name\")\n",
"text": "Hi @Python_Hub! Thanks for your question!I don’t think it’s documented, but mongoengine’s create_index method takes a kwargs argument that I believe are just passed on to pymongo under the hood. This means if you want to specify the name for your index, you should be able to with a name keyword argument, as documented in the PyMongo docs.I don’t use mongoengine myself, but I believe it should look something like this:Let me know if this helps!Mark",
"username": "Mark_Smith"
}
] | How to delete index with mongoengine | 2020-10-27T21:27:08.113Z | How to delete index with mongoengine | 2,483 |
null | [
"spring-data-odm"
] | [
{
"code": "",
"text": "Hi expert,\nI am new to MongoDB and I need to know how to store images into MongoDB. I am working with Java and SpringBoot.\nPlease point me to useful and easy to follow tutorials. Tks.",
"username": "Nobody"
},
{
"code": "",
"text": "I use GridFS for this.Here is the javadoc for Package com.mongodb.client.gridfsHere’s the Java driver GridFS tutorial.",
"username": "Jack_Woehr"
}
] | Tutorial needed to store image into MongoDB | 2020-11-02T09:56:13.968Z | Tutorial needed to store image into MongoDB | 2,408 |
null | [
"python"
] | [
{
"code": "",
"text": "Hi,I am trying to retrieve information about indexes on collections in a database using PyMongo. My understanding is that PyMongo supports index_information() instead of getIndexes(). I tried using index_information() on a collection as follows:\nindexes = tdb.tcol.index_information()\nprint(indexes).I only see ‘{}’ as the output. I know there are indexes on collections in the database (Listingandreviews collection in the samples database). Any idea what I might be doing wrong or what is the correct way to display index information on collections (definitions of indexes basically).Thanks much !.",
"username": "Satya_Tanduri"
},
{
"code": "$ python\nPython 3.8.6\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from pymongo import MongoClient\n>>> c = MongoClient('mongodb://foo:bar@localhost/admin')\n>>> print(c.mydb.mycollection.index_information());\n{'_id_': {'v': 2, 'key': [('_id', 1)], 'ns': 'mydb.mycollection'}, 'name_1': {'v': 2, 'unique': True, 'key': [('name', 1)], 'ns': 'mydb.mycollection'}}",
"text": "You probably aren’t correctly authorized. Show a sanitized version of your connection string, e.g., did you put your authorization database in the URI? You can connect and not be able to see anything if you aren’t authorized.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi Jack,I don’t think there is any issue with the connection string. I can query the collections/documents successfully. I am connecting to a Atlas service (free tier/4.4) from my PyMongo client (latest version).\nprint(myclient.tdb.tcol.index_information()) - produces the following o\n{}I can connect to the instance and query the db/collection and retrieve documents and information like avg document sizes etc. Maybe this is an issue with the Atlas Service.Thanks much for looking into this.",
"username": "Satya_Tanduri"
},
{
"code": "",
"text": "Sanity check: Can you see the indexes when you connect to your Atlas instance via Compass?",
"username": "Jack_Woehr"
},
{
"code": "index_information()>>> from pymongo import MongoClient\n>>> c = MongoClient('mongodb+srv://myacct:[email protected]/test?authSource=admin&replicaSet=ClusterN-shard-0&readPreference=primary')\n>>> print(c.mydb.mycollection.index_information());\n{'_id_': {'v': 2, 'key': [('_id', 1)], 'ns': 'mydb.mycollection'}}\n",
"text": "@Satya_Tanduri I don’t think the problem is with Atlas. Below I get index_information() on my Atlas collection (names sanitized, but it works just fine).I cannot of course be sure, but the first thing I would look for is some typographical error in your code.\nAs debugging steps, perhaps you can try to perform the same operation against the same Atlas colleciton with the driver for another language, e.g., PHP or JavaScript/Node.js …",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I can see the indexes when I browse the collections in the console (web). I am not using the Compass though.",
"username": "Satya_Tanduri"
},
{
"code": "index_information()index_information()index_information()index_information()",
"text": "Okay, @Satya_Tanduri …Test cases:Really, the probability is very high that the problem is a typographical error or procedural error of some sort because:If you do an interactive Python session and show the same Atlas session both retrieving collection data and failing to retrieve index_information() and paste that session into this topic, it might help.",
"username": "Jack_Woehr"
},
{
"code": " tdb = myclient[db_name]\n tcol = db[format(coll_name)]\n index_cursor = tcol.list_indexes()\n print (\"\\nindex_cursor TYPE:\", type(index_cursor))\n for index in index_cursor:\n print (\"\\n\")\n print (index, \"--\", type(index))\n print (\"index.keys():\", index.keys())\n print (\"NAME:\", index[\"name\"]) # index name\n print (\"VERSION:\", index[\"v\"]) # index version\nindex_cursor TYPE: <class 'pymongo.command_cursor.CommandCursor'>\n\nSON([('v', 2), ('key', SON([('_id', 1)])), ('name', '_id_')]) -- <class 'bson.son.SON'>\nindex.keys(): ['v', 'key', 'name']\nNAME: _id_\nVERSION: 2\n",
"text": "Hi Jack,Here is a quick update. I tried the following and it seems to work in PyMongo.I got this code snippet from the web (How to use Python to Check if an Index Exists for a MongoDB Collection | ObjectRocket). Now I can see the following output:The id index is the default index I suppose. I see other indexes too now. This is all from the sample databases from MongoDB. Nothing custom there. Thanks much for your help.",
"username": "Satya_Tanduri"
},
{
"code": "_id",
"text": "Yes _id is always there.\nCongratulations, have fun with your project.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Collection.list_indexes() or index_information() returns no data in PyMongo | 2020-10-28T02:55:45.014Z | Collection.list_indexes() or index_information() returns no data in PyMongo | 8,491 |
null | [
"queries"
] | [
{
"code": "[{\n \"_id\": {\n \"$oid\": \"5f9956d8430bef10e05aaffa\"\n },\n \"_userId\": {\n \"$oid\": \"5f89871cea073c27bc192088\"\n },\n \"username\": \"ferdinand\",\n \"connections\": [\n {\n \"_id\": {\n \"$oid\": \"5f9956d8430bef10e05aaffb\"\n },\n \"_userIdCon\": {\n \"$oid\": \"5f8d0ccff7074616e46cf925\"\n },\n \"username_con\": \"haleluya\",\n \"connection_type\": 0\n },\n {\n \"_id\": {\n \"$oid\": \"5f9a7e6dd8bbea1ae0221367\"\n },\n \"_userIdCon\": {\n \"$oid\": \"5f6494c327900b20ec9f0f5b\"\n },\n \"username_con\": \"andibel\",\n \"connection_type\": 2\n },\n {\n \"_id\": {\n \"$oid\": \"5f9fbe785e35584568cfb3fc\"\n },\n \"_userIdCon\": {\n \"$oid\": \"5f682e897a2042380494a854\"\n },\n \"username_con\": \"budinam\",\n \"introduction_con\": \"ini isi pesan saya ya fer dari budi\",\n \"connection_type\": 0\n }\n ],|\n....\n}] \nConnection.findOne({ _userId: userId }, {\"connections\": { $slice : [startLoad, limitLoad] }});\n",
"text": "This the example of my mongodb data:I want to get the data with criteria “userId : 5f89871cea073c27bc192088”\nthen return sub fields of connections that have “connection_type: 0”\nand it just return several data according to the request (limit options)I don’t know how to do this in mongodb.\nI am using mongoose.I have tried this codes:But, it just give me the data without filtering “connection_type: 0”How to add filter “connection_type: 0” ?",
"username": "Cakkavati_Kusuma"
},
{
"code": "connectionsconnection_type",
"text": "Hello @Cakkavati_Kusuma, welcome to the MongoDB community forum.You can use the $ projection operator. This will return the first matching sub-document of connections array with the matching connection_type value.You can also use the $elemMatch (projection operator) to get the same result.",
"username": "Prasad_Saya"
}
] | How to get specified data from array object field with specific condition | 2020-11-02T15:48:04.601Z | How to get specified data from array object field with specific condition | 3,572 |
null | [
"graphql",
"stitch"
] | [
{
"code": "{\n \"userId\": \"%%user.id\"\n}\nuserId",
"text": "I have this definition for the owner role of a document:This role has read and write permissions. The userId field is an ObjectId type. I know the request is authenticated and the user is the same user whose objectId is the userId. The response is an error that I am trying to update a document that I am not the owner of. I am also using graphQL for Stitch. Does the GraphQL Authentication flow allow for using complex document roles and permissions?Anyone know if I am doing something wrong?",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Hi Lukas – The userId field will actually need to be of type String vs. ObjectId. Hope that helps!",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Is there a way to keep it an objectId and still run a match?",
"username": "Lukas_deConantseszn1"
},
{
"code": "{\n \"userId\": {\n \"$oid\":\"%%user.id\"\n }\n}\n",
"text": "This is kind of a big blocker here. I tried using this code instead:It said “expected $oid field to contain 24 hexadecimal characters”. Looks like this JSON expression might actually work if it wasn’t for this validation issue? There has got to be a workaround for this. I don’t want to change all of these userIDs into strings.",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "I’m wondering if using the Realm CLI will work for making this change. Thoughts @Drew_DiPalma?",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Hi Lukas – Currently, the best path is to call a function which does the conversion. We are actually in the process of releasing a new expression for Rules that will convert between OID/String and vice-versa but it is probably two weeks away, and at that point you would be able to just use the expression vs. calling a function.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Hi @Drew_DiPalma can one use a function inside a JSON expression? Or are you referring to running a function on all the data to convert everything?Any updates on this new expression in Rules available for this?Thanks!",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Hi Lukas – You can use %function to call a function from a JSON expression. The expanded JSON syntax is in Code Review now but I don’t have a precise timeframe to share other than that it should be available within the next few weeks.",
"username": "Drew_DiPalma"
}
] | Using document permissions and roles for Stitch with GraphQL | 2020-04-30T12:37:13.428Z | Using document permissions and roles for Stitch with GraphQL | 3,933 |
null | [
"database-tools"
] | [
{
"code": "",
"text": "When importing my a json file using mongoimport, arrays are overwritten due to the field name for the actual array being the same, regardless of the elements in the array containing different data.Is there a way to essentially combine and merge the arrays, instead of overwriting the array, when importing with mongoimport?",
"username": "James_Anderson"
},
{
"code": "",
"text": "Hi @James_Anderson,Welcome to MongoDB community!Although mongoimport can merge documents based on a list of fields into your collection it cannot perform complex transformation like concatenating arrays:https://docs.mongodb.com/database-tools/mongoimport/#replace-matching-documents-during-importWhat I would suggest is to load the documents into a temporary stage collection and perform a post command/script doing a $merge aggregation with $zip of the source and target arrays.Let me know if this is acceptable.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @James_Anderson, although mongoimport can’t do this currently, I think this is a pretty good suggestion. I made a feature request for this on Jira: TOOLS-2765. The feature might not get added very fast - we have a large backlog - but you can follow that ticket to be notified of updates.",
"username": "Tim_Fogarty"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Combine arrays when importing with mongoimport? | 2020-11-02T05:59:16.571Z | Combine arrays when importing with mongoimport? | 3,374 |
null | [
"data-modeling",
"developer-hub",
"anti-patterns"
] | [
{
"code": "",
"text": "Hey Peeps,All three videos of my schema design anti-patterns video series are now live. You can check them out at Schema Design Anti-Patterns Series - YouTube.If you prefer blog posts to videos, check out the blog series that covers the same topics: A Summary of Schema Design Anti-Patterns and How to Spot Them | MongoDBComment here if you have any questions. ",
"username": "Lauren_Schaefer"
},
{
"code": "extended-reference-pattern// collection-a\n{\n name: 'a1',\n epd: {\n profile_key: '12345',\n profile_name: 'helloworld',\n } \n}\n\n// collection-b\n{\n name: 'b1',\n epd: {\n profile_key: '12345',\n profile_name: 'helloworld',\n } \n}\n\n// continue for up to collection-n\n\n// collection epd\n{\n profile_key: '12345',\n profile_name: 'helloworld',\n // and some more fields\n}\nprofile_name// movies\n{\n title: 'fast',\n reviews: [\n // keep top 10 here\n ]\n}\n\n// reviews\n// all the rest of the reviews\nmoviesreviews",
"text": "Hello @Lauren_Schaefer,I enjoyed all 3 parts of your video very much, together with building-with-patterns-a-summary. I think it’s really great, and hope that there would be more of these, especially with some real-world use-case and example would definitely be better.Having said that, most of the videos and guides and pattern only seem to be focusing a lot of the read portion but I thought that maybe there could be a series that focuses of the write? Like what’s the best way to handle the write when there’s need to.For example, as mentioned in extended-reference-pattern, we duplicate a couple of data that is frequently accessed but rarely updated in the collection, so that the read is fast which I agree, but I don’t know what’s the best way to handle if I have to update my collection in the event that the duplicated data is updated. Let me try to give an example of that.I have x number of collections, all having a same subset of data in each collection using the extended-reference-pattern.Assuming that profile_name rarely gets updated, but if it ever does, what would be the best way to handle these kind of updates across numerous collection? Is there a good pattern for this sort of stuff?This is just one example, but I hoped that I did bring the point across where most of the guides only tells the pros of having the patterns, and the cons are (e.g) data duplication. But it doesn’t also tells us how to handle the data duplication properly whenever it does get updated. I thought it was also as important as identify the pattern to resolve the read issue.Another quick example would be subset-pattern, it tells us to place only a couple of document in collection-a so to quickly access a sub-set of information but it doesn’t tell us how to manage effectively/efficiently on updating the docs.Say I have a review that will be promoted to the movies top 10 review, and one of the top 10 review will be demoted to the reviews collection. Is there any recommended pattern/practice to do this sort of thing?I hope I did make some sense here, because those are the things that I don’t quite understand fully.Lastly, thank you for all the videos and blogs post which is very useful, and keep them coming!",
"username": "Joseph_Gan"
},
{
"code": "db.collA.updateMany({'epd.profile_key': '12345'},{$set: {'epd.profile_name': 'new'}})",
"text": "Hey @Joseph_Gan!Great questions! I’ll be thinking about how we can answer these better in the future. In the meantime, let’s get to your specific questions.As with almost everything in MongoDB, the answer is “it depends on your use case.” I’ll walk through one possible solution, but know that there may be a better way based on your use case.Regarding the extended-reference pattern question:\nOne option would be to do handle the updates at the application level. You could do something like db.collA.updateMany({'epd.profile_key': '12345'},{$set: {'epd.profile_name': 'new'}})\nwhenever you make an update to the profile name. You’d have to ensure that your development team knew about this data duplication and handled it correctly every time.Another option would be to use change streams or triggers. These allow you to watch for updates in a collection and take a specified action whenever an update occurs. This would prevent you from worrying about forgetting to update the documents in your application code every time.Another thing to consider is how detrimental data inconsistency would be. You may want to wrap your updates in a transaction to ensure that the update occurs in all of the collections or none of the collections.Regarding the Subset Pattern question:Again, the best way to do this will depend on how you’ve implemented it and your particular use case.I’d likely keep all of the information about every review in the reviews collection – regardless of whether it’s in the movies collection or not. I’d keep the movie name and rating in the reviews array in the movies collection. Whenever you’re updating the movie reviews, check to see if the review is high enough to add the item to the movies collection. If so, remove the bottommost review and add the new review.Again, we could consider whether using a transaction, change stream, and/or trigger would help.",
"username": "Lauren_Schaefer"
},
{
"code": "extended-reference-patternprofile namedb.collA.updateMany({'epd.profile_key': '12345'},{$set: {'epd.profile_name': 'new'}})\ndb.collB.updateMany({'epd.profile_key': '12345'},{$set: {'epd.profile_name': 'new'}})\ndb.collC.updateMany({'epd.profile_key': '12345'},{$set: {'epd.profile_name': 'new'}})\n// and so on\nupdateManybulkWritetriggerschange-streamchange-stream// profile.name gets updated\n// change-stream detected change, running the update across collections (a-z)\n// another operation trigger an update to collection-f doc\n// change-stream update the docs with the updated profile.name but overwrites the previous update (above)\nsubset-pattern",
"text": "Thank you for your response.Regarding the extended-reference-pattern, when you mentioned to update at application level, you meant to say that whenever profile name is updated, it will triggers a update to all collectionsThe cons of this would be that the application would need to know what are the available collections, and update it whenever there is an new collection. Am I right to say so? Bringing the example slightly further, if there is below 100k documents per such collection (maybe 1 or 2 would have more, say up to a million), would there be any different between using updateMany and bulkWrite. When should I use one over another?Going back to the other options as mentioned, triggers don’t look an option to me since it’s a on-prem setup, and my current thinking is to go with something like change-stream, where it listens for the change, and then update all the collections that needs to be updated. Would I need to be mindful of concurrency issue here, where some other actions triggers an update to the document, then the change-stream triggers the update?Thanks for the suggestion regarding the subset-pattern",
"username": "Joseph_Gan"
},
{
"code": "epd",
"text": "@Joseph_Gan When I say make the update at the application level, I mean that whenever the application makes an update to the profile_name, the application would need to make updates to all of the collections where profile_name is set. It would look like the code block you created.Yes, you’re totally right about the cons. The application developers would need to be aware of all of the places profile_name is set and make all of the updates in every piece of the application where the update is made.updateMany() is a wrapper on top of bulkWrite(), so the performance is the same.I’m trying to think through the example you gave regarding concurrency. I can’t think of why you would want to set the profile_name in collection-f explicitly. You would probably want to only make the update in the epd collection and then push the changes out everywhere else.",
"username": "Lauren_Schaefer"
},
{
"code": "updateManyanother operation trigger an update to collection-f doc on a different field// profile.name gets updated\n// change-stream detected change, running the update across collections (a-z)\n// another operation trigger an update to collection-f doc on a different field\ncolF.update(id, { anotherField: 'not profile.name' });\n// change-stream update the docs with the updated profile.name but overwrites the previous update (above)\n// this change-stream update ops updates the `profile.name` but overwrites `anotherField` to its previous state\npatchupdate",
"text": "updateMany() is a wrapper on top of bulkWrite(), so the performance is the same.I was trying to find the docs somewhere but I couldn’t. I remember that you perform a batch execute with bulkWrite? Something similar to the post written here. Am I able to the same for updateMany operations?Regarding the concurrency, I wasn’t making myself clear. I meant that another operation trigger an update to collection-f doc on a different fieldIf I’m using patch to patch specific field, then this should not matter but would be a problem if use update operation?",
"username": "Joseph_Gan"
},
{
"code": "db.foo.updateMany\nfunction(filter, update, options) {\n var opts = Object.extend({}, options || {});\n\n // Pipeline updates are always permitted. Otherwise, we validate the update object.\n if (!Array.isArray(update)) {\n // Check if first key in update statement contains a $\n var keys = Object.keys(update);\n if (keys.length == 0) {\n throw new Error(\n \"the update operation document must contain at least one atomic operator\");\n }\n // Check if first key does not have the $\n if (keys[0][0] != \"$\") {\n throw new Error('the update operation document must contain atomic operators');\n }\n }\n\n // Get the write concern\n var writeConcern = this._createWriteConcern(opts);\n\n // Result\n var result = {acknowledged: (writeConcern && writeConcern.w == 0) ? false : true};\n\n // Use bulk operation API already in the shell\n var bulk = this.initializeOrderedBulkOp();\n\n // Add the updateMany operation\n var op = bulk.find(filter);\n if (opts.upsert) {\n op = op.upsert();\n }\n\n if (opts.collation) {\n op.collation(opts.collation);\n }\n\n if (opts.arrayFilters) {\n op.arrayFilters(opts.arrayFilters);\n }\n\n op.update(update);\n\n try {\n // Update all documents that match the selector\n var r = bulk.execute(writeConcern);\n } catch (err) {\n if (err instanceof BulkWriteError) {\n if (err.hasWriteErrors()) {\n throw err.getWriteErrorAt(0);\n }\n\n if (err.hasWriteConcernError()) {\n throw err.getWriteConcernError();\n }\n }\n\n throw err;\n }\n\n if (!result.acknowledged) {\n return result;\n }\n\n result.matchedCount = r.nMatched;\n result.modifiedCount = (r.nModified != null) ? r.nModified : r.n;\n\n if (r.getUpsertedIds().length > 0) {\n result.upsertedId = r.getUpsertedIdAt(0)._id;\n }\n\n return result;\n}\n",
"text": "Hi @Joseph_Gan,I found that updateMany() is a wrapper for bulk write by using the Mongo Shell.I typedand the following was returned:For more information about bulkWrite, visit https://docs.mongodb.com/manual/core/bulk-write-operations/. For more information about updateMany, visit https://docs.mongodb.com/manual/reference/method/db.collection.updateMany/index.html.If you want to update many documents, you can choose to use bulkWrite or updateMany and you’ll get the same performance.Regarding concurrency: I see what you mean now.MongoDB intelligently handles locking for you. If you have two simultaneous transactions that touch the same document, one transaction will occur before the other. You have two options.See https://docs.mongodb.com/manual/faq/concurrency/ for more details.",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "Hi @Lauren_Schaefer,Thank you for the explanation. It’s much clearer to me now! Appreciate it.Cheers",
"username": "Joseph_Gan"
},
{
"code": "",
"text": "You’re welcome! ",
"username": "Lauren_Schaefer"
}
] | Schema Design Anti-Patterns Video Series is Live on YouTube | 2020-10-13T12:59:22.241Z | Schema Design Anti-Patterns Video Series is Live on YouTube | 4,280 |
null | [
"graphql"
] | [
{
"code": "",
"text": "Hi!I’m having trouble returning multiple types as a custom payload in my custom resolver. For instance, I am able to create a custom resolver that returns an array of objects, together with a boolean. However, if I want to return e.g an array of Documents, together with a boolean, I can’t find out how.I have described the problem further in the following SO post:Thanks in advance!",
"username": "petas"
},
{
"code": "{\n \"bsonType\": \"object\",\n \"title\": \"FindShirts\",\n\n \"properties\": {\n \"shirts\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"title\": \"ShirtObject\",\n \"properties\": {\n \"size\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"isFoo\": {\n \"bsonType\": \"boolean\"\n }\n }\n}\n",
"text": "Can you paste your input and payload type? I was able to do this with a “Custom” payload type with the following schema:GraphiQL Schema -",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hi Sumedha! Thank you for the reply.I’m sorry, I realize that my question wasn’t super clear (I will edit it after writing this comment).\nI’m aware that I can configure objects in the way you do above, with custom fields defined within the schema. However, what I’m interested in doing is to link to an EXISTING schema.Say that I’ve defined a shirt schema (1) already, with fields color and size. Then I want to define a separate schema (2), with a list of these shirt types, together with a boolean. This way, if I adjust the prior schema (1) and add a field “fabric”, I don’t have to add that field to my second schema (2) separately.As I understand it, this is how it works when you create a Custom Resolver through the UI and pick the payload type “Existing Type (List)”. However, if I go with that approach, I cannot have my boolean in the payload.",
"username": "petas"
}
] | Returning multiple types as custom payload in a custom resolver | 2020-10-14T19:15:47.780Z | Returning multiple types as custom payload in a custom resolver | 3,802 |
null | [
"legacy-realm-server"
] | [
{
"code": "default\t10:22:08.381259-0400\tPattyMelt\tSync: Connection[1]: Session[1]: Failed to parse, or apply received changeset: no such row\nException backtrace:\n0 Realm 0x00000001013263e0 _ZN5realm4sync17BadChangesetErrorC1EPKc + 64`\n1 Realm 0x0000000101321fdc _ZNK5realm4sync18InstructionApplier19bad_transaction_logEPKc + 36\n2 Realm 0x000000010132343c _ZN5realm4sync18InstructionApplierclERKNS0_11Instruction3SetE + 0\n3 Realm 0x00000001012ca94c _ZN5realm5_impl17ClientHistoryImpl27integrate_server_changesetsERKNS_4sync12SyncProgressEPKyPKNS2_11Transformer15RemoteChangesetEmRNS2_11VersionInfoERNS2_21ClientReplicationBase16IntegrationErrorERNS_4util6LoggerEPNSE_20SyncTransactReporterEPKNS2_27SerialTransactSubstitutionsE + 3824\n4 Realm 0x00000001012dd83c _ZN5realm5_impl14ClientImplBase7Session20integrate_changesetsERNS_4sync21ClientReplicationBaseERKNS3_12SyncProgressEyRKNSt3__16vectorINS3_11Transformer1<…>\n",
"text": "I am gettig follwing error on ios client and not sure why this is happening. can you please help us:RLMSyncError(_nsError: Error Domain=io.realm.sync Code=6 “Bad changeset (DOWNLOAD)” UserInfo={NSLocalizedDescription=Bad changeset (DOWNLOAD), statusCode=112})When we look the detailed logs found following:Realm sdk version: 5.4.1\nUsing self hosted ROS.",
"username": "Uma_Tiwari"
},
{
"code": "",
"text": "@Uma_Tiwari Would you mind opening an issue you here?Realm is a mobile database: a replacement for Core Data & SQLite - Issues · realm/realm-swiftI will have the iOS team take a look - if you have a repro case, we should be able to solve it quickly",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks Ian. I have created issue on github.",
"username": "Uma_Tiwari"
}
] | iOS Sync Error: Bad changeset (DOWNLOAD) | 2020-10-28T16:51:25.111Z | iOS Sync Error: Bad changeset (DOWNLOAD) | 3,219 |
[
"containers",
"kubernetes-operator"
] | [
{
"code": "",
"text": "Team,I have currently tried MongoDB Enterprise Operator on OpenShift Cluster Platform 4.X on IBM’s ppc64le architecture - via steps mentioned here -MongoDB Enterprise Kubernetes Operator. Contribute to mongodb/mongodb-enterprise-kubernetes development by creating an account on GitHub.However, to my surprise, I have not found ppc64le equivalent docker images for the following -Quay mongodb-enterprise-kubernetes/mongodb-enterprise.yaml at master · mongodb/mongodb-enterprise-kubernetes · GitHubQuay mongodb-enterprise-kubernetes/mongodb-enterprise.yaml at master · mongodb/mongodb-enterprise-kubernetes · GitHubQuay mongodb-enterprise-kubernetes/mongodb-enterprise.yaml at master · mongodb/mongodb-enterprise-kubernetes · GitHubQuay mongodb-enterprise-kubernetes/mongodb-enterprise.yaml at master · mongodb/mongodb-enterprise-kubernetes · GitHubQuay mongodb-enterprise-kubernetes/mongodb-enterprise.yaml at master · mongodb/mongodb-enterprise-kubernetes · GitHubQuay - mongodb-enterprise-kubernetes/mongodb-enterprise.yaml at master · mongodb/mongodb-enterprise-kubernetes · GitHubCan you perhaps tell me, how can I help in enabling ppc64le docker images ?Thanks,\nHarsha.",
"username": "Krishna_Harsha"
},
{
"code": "",
"text": "Team,Can you kindly address this question?Thanks,\nHarsha.",
"username": "Krishna_Harsha"
},
{
"code": "",
"text": "I believe you have to use IBM’s version of the MongoDB Operator. GitHub - IBM/ibm-mongodb-operator: ibm-mongodb-operator",
"username": "Albert_Wong"
},
{
"code": "",
"text": "I understand we can very well make use of IBM’s version MongoDB Operator available via GitHub - IBM/ibm-mongodb-operator: ibm-mongodb-operator .Perhaps we would like to work towards the enablement of ppc64le equivalent docker images for consumption of users directly via MongoDB Community Operator as well as MongoDB Enterprise Operator.So to re-iterate how can we contribute towards it ?Thanks,\nHarsha.",
"username": "Krishna_Harsha"
},
{
"code": "",
"text": "I don’t think it’s on roadmap. I would post your request to https://feedback.mongodb.com/ so that product management is aware of it.",
"username": "Albert_Wong"
},
{
"code": "",
"text": "HiMongoDB Enterprise Operator does not support ppc64le at the moment.For the community, I would suggest contributing to GitHub - docker-library/mongo: Docker Official Image packaging for MongoDB to add a new image\nand for the Community operator, we will follow up with more ideas in this thread.\nMongoDB Community Operator NA for ppc64le arch · Issue #226 · mongodb/mongodb-kubernetes-operator · GitHubWe are going to publish a few changes to our community operator that will allow us to build images for any targetted architecture, supported by MongoDB.",
"username": "Andrey_Belik"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Enterprise Operator NA - ppc64le arch | 2020-09-14T20:16:08.579Z | MongoDB Enterprise Operator NA - ppc64le arch | 3,537 |
|
null | [
"objective-c"
] | [
{
"code": " NSURL *authURL = [NSURL URLWithString:[NSString stringWithFormat:@\"https://*********.cloud.realm.io\"]];\n RLMSyncCredentials *credential = [RLMSyncCredentials credentialsWithUsername:@\"*********\"\n password:@\"********\"\n register:NO];\n\n \n [RLMSyncUser logInWithCredentials:credential\n authServerURL:authURL\n onCompletion:^(RLMSyncUser *user, NSError *error)\n",
"text": "I can do credentialswithemail.but still not found.\n/*******.m:35:4: Unknown type name ‘RLMSyncCredentials’; did you mean ‘RLMCredentials’?All I want is sync.",
"username": "Krikor_Herlopian1"
},
{
"code": "",
"text": "What version of Realm are you using? Can I see a more complete compiler output please?",
"username": "Ian_Ward"
}
] | No known class method for selector 'credentialsWithUsername:password:register:' | 2020-10-28T20:47:46.597Z | No known class method for selector ‘credentialsWithUsername:password:register:’ | 4,044 |
null | [
"objective-c"
] | [
{
"code": "#import <Realm/Realm.h>\n#import \"Log.h\" \nRLMSyncCredentials *credential = [RLMSyncCredentials credentialsWithUsername:@\"sdf\"\n password:@\"fsd@123\"\n register:NO];\n[RLMSyncUser logInWithCredentials:credential\n authServerURL:authURL\n onCompletion:^(RLMSyncUser *user, NSError *error)\n",
"text": "I keep getting /sfsf/fsdfsdfds/rel.m:33:5: Unknown type name ‘RLMSyncCredentials’; did you mean ‘RLMCredentials’?I have in my podspod ‘Realm/Headers’ pod ‘Realm’I also changed credentialswithusername to credentialswithemail same stuff.",
"username": "Krikor_Herlopian1"
},
{
"code": "",
"text": "What version of Realm are you using? Can I see a more complete compiler output please?",
"username": "Ian_Ward"
}
] | Unknown type name 'RLMSyncCredentials'; did you mean 'RLMCredentials'? | 2020-10-28T20:48:22.040Z | Unknown type name ‘RLMSyncCredentials’; did you mean ‘RLMCredentials’? | 2,990 |
null | [
"react-native"
] | [
{
"code": "",
"text": "Hi,I’m building an app that has groups of users. At the start of the app, I open a Realm for every group the user is part of, and persist the Realms throughout the app with React Context Provider. When a user creates a new group, though, I need to open a Realm for that new group, and add it to the list of realms in the Context Provider component.Currently, I’m opening a new Realm for each new group that the user creates, and then I’m manually sticking them inside the existing list of Realms in the Context Provider.However, I’m wondering if it’s possible to “refresh” the Realms after a user creates a new group by simply calling the function that opens all the user’s realms at the beginning of the app and generating a new list of Realms. While this strategy would be the least amount of code, I’m worried that this strategy may produce memory leaks or performance issues since I am not sure how to close the existing, open Realms (or whether they even need to be closed) prior to “refreshing” the Realms.Thank you for the help!",
"username": "Jerry_Wang"
},
{
"code": "",
"text": "@Jerry_Wang It is definitely a best practice to close realms when you are done using them. If you are opening many realms, then I usually suggest a design pattern where the first view a user sees is a listView/recyclerView of all potential realms they could open - they could then select one which triggers the opening of this realm, they interact with it on another Activity, and then when they go back to the original listView which displays all the realms they have permission to access; you but a realm.close() in the onClose() lifecycleYou can sse an example of this in GH here -finalContribute to mongodb-university/realm-tutorial-android-kotlin development by creating an account on GitHub.With a tutorial here -\nhttps://docs.mongodb.com/realm/tutorial/android-kotlin/",
"username": "Ian_Ward"
}
] | Do open realms close themselves after a period of inactivity? How are open Realms closed? | 2020-10-29T14:53:39.151Z | Do open realms close themselves after a period of inactivity? How are open Realms closed? | 1,883 |
null | [
"atlas-device-sync"
] | [
{
"code": "[ 'Open Error - session', Session {} ]\n[\n 'Open Error - error',\n {\n name: 'Error',\n message: 'Bad changeset (UPLOAD)',\n isFatal: true,\n category: 'realm::sync::ProtocolError',\n code: 212,\n userInfo: {}\n }\n]\n[\n 'Open Error - partition',\n '<<THE-PARTITION-PATH-HERE>>'\n]\n",
"text": "Hi All,I’m getting errors as follows:My team report that these errors are happening without changes to any code relating to the pertinent schema.@Ian_Ward - if you need to review the logs for this - errors can be found between Nov 02 11:00:00+11:00 and Nov 02 13:00:00+11:00Thoughts?Benjamin",
"username": "Benjamin_Storrier"
},
{
"code": "",
"text": "In attempting to progress with work - we terminated sync and reinitialized it and this seems to solve the issue.\nNo other changes were made.I can provide further info if required.B",
"username": "Benjamin_Storrier"
},
{
"code": "",
"text": "@Benjamin_Storrier Sure, you want to email me your cloud web URL? the one you are using to view the realm web dashboard? [email protected]",
"username": "Ian_Ward"
}
] | Bad Changeset (UPLOAD) - code 212 | 2020-11-02T01:30:24.473Z | Bad Changeset (UPLOAD) - code 212 | 2,504 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "I have a mongodb 2.6 replica set (1 primary and 1Secondary). I would like to upgrade to latest stable version 4.4. Do I need to follow the migration path i.e 2.6.11 → 3.0.15 → 3.2.22 → 3.4.24 → 3.6.18 → 4.0.18 → 4.2.6->4.4 or is there any way to directly upgrade to 4.4 with no or minimum downtime?",
"username": "Amanullah_Ashraf"
},
{
"code": "",
"text": "Do I need to follow the migration path i.e 2.6.11 -> 3.0.15 -> 3.2.22 -> 3.4.24 -> 3.6.18 -> 4.0.18 -> 4.2.6->4.4This is the recommended upgrade path.or is there any way to directly upgrade to 4.4 with no or minimum downtimeNothing that I have seen. Perhaps you can export and import, but you will want to perform adequate testing. The version to version upgrade path is well documented and tried and tested.As stated in other similar threads, the amount of technical debt here will also apply to your clients/drivers. They will require a similar amount of attention.",
"username": "chris"
},
{
"code": "",
"text": "Hi @Amanullah_Ashraf,This is essentially the same question as Replace mongodb binaries all at once? but with a different starting version (2.6 instead of 3.2).The options I’ve outlined in that discussion apply to MongoDB 2.6 as well.MongoDB 2.6 is below the minimum version for Cloud Manager Automation, but still supported by MongoDB Atlas Live Migration if you wanted to take that approach.Since you are aiming for no or minimal downtime, you will have to manually upgrade through successive major versions (following the documented Upgrade Procedures) or use compatible automation tooling.Regards,\nStennie",
"username": "Stennie_X"
}
] | Upgrade from mongodb 2.6 to 4.4 | 2020-10-25T08:50:40.267Z | Upgrade from mongodb 2.6 to 4.4 | 3,626 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "I have a mongodb 1.8.2 with replica set. ( 1 Primary and 2 Secondary). I would like to upgrade to latest stable version i.e 4.4. Is there any way to do it? As the oldest version available is 2.6? Can I export and import data or take backup of 1.8.2 and restore in 4.4 ?",
"username": "Amanullah_Ashraf"
},
{
"code": "",
"text": "I have a mongodb 1.8.2 with replica set. ( 1 Primary and 2 Secondary). I would like to upgrade to latest stable version i.e 4.4. Is there any way to do it?Yes, see your other topic/thread.As the oldest version available is 2.6?Per the 2.2 documentation 2.2 and other versions are here:Try MongoDB Atlas products free. Developers can choose to use in the cloud or download locally. Either way, our software makes it easy to work with data.",
"username": "chris"
},
{
"code": "mongodumpmongorestoremongodumpmongodumpcollection.metadata.jsonmongodumpmongodump",
"text": "Can I export and import data or take backup of 1.8.2 and restore in 4.4 ?Hi @Amanullah_Ashraf,I addressed this question more generally in your related discussion topic which mentions MongoDB 3.2 as a starting point: Replace mongodb binaries all at once? - #3 by Stennie_X.Upgrading via mongodump and mongorestore should be possible, however the MongoDB 1.8 release series is from 2011 and I’m not aware of anyone attempting to fast forward almost 10 years in a single upgrade.I would try using a newer version of mongodump (perhaps 2.4 or 2.6), as 1.8-era mongodump didn’t support dumping the indexes and collection options (which newer versions save in collection.metadata.json files).The latest version of mongodump is only tested with non-EOL server releases (currently MongoDB 3.6+) and I expect may rely on MongoDB 3.0+ storage engine API commands. I suggested 2.4 or 2.6 mongodump as there were some important bug fixes, but it has been a long while since I’ve worked with server or tool versions of that era.Regards,\nStennie",
"username": "Stennie_X"
}
] | Upgrade from 1.8.2 to 4.4 | 2020-10-25T08:50:29.054Z | Upgrade from 1.8.2 to 4.4 | 2,543 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "While upgrading mongodb from 3.2 to 4.4 replica set. Do I need to download and replace the binaries of all the version from 3.2 to 4.4 all at once and then restart the mongod at the end or I have to restart the mongod after every version download and restart the server, and then download 3.4 and replace and restart, repeat the process till end?",
"username": "Amanullah_Ashraf"
},
{
"code": "",
"text": "Hello again @Amanullah_AshrafYes they have to be started, version by version. More pertinent, you have to follow all the steps in each version’s release notes for the type of upgrade(standalone, replicaset, sharded cluster).Some of the versions introduce change in storage engine, others changes to authentication and yet others changes to default bindings.This is thoroughly documented.",
"username": "chris"
},
{
"code": "mongodumpmongorestore",
"text": "Hi @Amanullah_Ashraf,The recommended (and most throughly tested & documented) approach is an in-place upgrade which minimises downtime and gives you opportunity to test for compatibility issues before upgrading to the next major release.In-place upgrades require upgrading via successive major releases (so from 3.2 => 3.4, 3.4 => 3.6, etc). All members of a replica set or sharded cluster must be completely upgraded to the same version before continuing to the next major version upgrade.There are a few ways you can fast track a migration through multiple major version upgrades.As @chris noted in one of your other discussion topics, you should upgrade your clients/drivers for compatibility with your target server version. I would test and implement driver upgrades before commencing any server upgrades. Given the age of your original deployment, newer drivers may have API compatibility changes requiring an update to your application code. Drivers are generally backward compatible with a great range of MongoDB server versions, but older drivers will be missing support for newer APIs and authentication mechanisms.Be sure to review the relevant Release Notes and Compatibility Changes information in the MongoDB manual, and follow any version-specific Upgrade procedures. There have been a lot of changes since MongoDB 3.2 was released in 2015!Regardless of the upgrade approach you take, be sure to take backups of your deployment so you have a straightforward recovery path in the unlikely event that something goes dramatically amiss.This is the safest upgrade path:Use Cloud Manager Automation (currently supports MongoDB 3.4+) to migrate your on-premise deployment to a supported version of MongoDB. There’s a 30-day free trial of Cloud Manager automation which will probably cover your migration period.Use MongoDB Atlas Live Migration (currently supports MongoDB 2.6+) to upgrade to a modern version of MongoDB hosted in the cloud. You can either restore a backup from your Atlas deployment on-premise, or consider using Atlas as a more convenient management solution going forward.If downtime is acceptable and you have a large number of major releases to upgrade through, you can also consider a migration using mongodump and mongorestore.This approach will require more testing and patience because you are still subject to major version compatibility changes and will encounter different issues depending on the provenance of your data. This approach will also not support upgrading user & auth schema, which is supported via the usual in-place upgrade path.Unlike an in-place upgrade, dump & restore will recreate all data files and indexes so you may run into some (fixable) errors. The most likely complaints will be due to stricter validation of index and collection options which would not cause an issue for an in-place upgrade. I definitely recommend testing this procedure in a staging/QA environment with a copy of your production data to ensure there are no unexpected issues that might otherwise delay your production upgrade.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Replace mongodb binaries all at once? | 2020-10-28T06:36:21.658Z | Replace mongodb binaries all at once? | 6,187 |
null | [] | [
{
"code": "",
"text": "when it comes to develope a new Job porta., Jobs database, Jobs website , job board\nPHP/MySQL into mind if you think of FOSSalso job portal has CVs (documents)\nanyone has any valid points, why mongoDB wins over MySQL in all terms over MySQL\nif the Project is to design and develop a FOSS Job portal",
"username": "sgp_sai"
},
{
"code": "",
"text": "I don’t know if MongoDB is “best” for such a portal, but the following might be the best argument in favor of MongoDB vs. MySQL (both of which I use in web design):Most of your data entities will be largely self-contained. A user profile can easily be expressed as one document, without requiring relational normalization. Likewise, for your job offerings, there will be only one relational element: the one-to-many relation between a job offerer and multiple job offerings.One question will be: which system makes it easier to do the queries and searches you plan?Model your system in both environments and see which you find more comfortable.Experience, even a small amount of experience, beats recommendations every time!",
"username": "Jack_Woehr"
}
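To make the “largely self-contained documents” argument above concrete, here is a purely illustrative sketch; the collection and field names are hypothetical and not taken from this thread:

```js
// Hypothetical job-offering document: one offering per document,
// with a simple reference for the one-to-many employer relation
db.jobs.insertOne({
  title: "Backend Developer",
  employerId: 42,                       // reference to a single employer document
  location: "Remote",
  skills: ["PHP", "MongoDB", "REST"],
  cvRequired: true,
  postedAt: new Date()
})
```

A candidate profile (including CV metadata) can be sketched the same way as a single document, which is exactly the modelling exercise suggested above.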
] | Anybody got a valid reason why mongoDb is best for a Job Portal | 2020-09-25T02:08:27.008Z | Anybody got a valid reason why mongoDb is best for a Job Portal | 1,968 |
null | [
"configuration"
] | [
{
"code": "mongod soft fsize unlimited\nmongod hard fsize unlimited\nmongod soft cpu unlimited\nmongod hard cpu unlimited\nmongod soft as unlimited\nmongod hard as unlimited\nmongod soft nofile 64000\nmongod hard nofile 64000\nmongod soft rss unlimited\nmongod hard rss unlimited\nmongod soft nproc 64000\nmongod hard nproc 64000\nmongod soft memlock unlimited\nmongod hard memlock unlimited\n",
"text": "Hi,I deployed MongoDB 4.4 on Centos 7 (with YUM).I followed the instructions and created the file /etc/security/limits.d/99-mongodb-nproc.conf and wrote this values :Restart the server but still have this warning when execute “mongo” command :2020-10-31T17:36:49.240+01:00: Soft rlimits too low\n2020-10-31T17:36:49.240+01:00: currentValue: 4096\n2020-10-31T17:36:49.240+01:00: recommendedMinimum: 64000I scoured all the forums with Google search and try multiple solutions but still have this warning.Does anyone have an idea?Thank you!Christophe",
"username": "Christophe_QUEVAL"
},
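One quick diagnostic that may help here (the commands are illustrative; the PID lookup depends on your system) is to compare the limits the running mongod process actually has with what was configured:

```sh
# Show the effective limits of the running mongod process
cat /proc/$(pidof mongod)/limits
```

Note that files under /etc/security/limits.d/ are applied by PAM to login sessions, so they do not necessarily affect a mongod started as a systemd service.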
{
"code": "",
"text": "Some forums suggesting reboot of system to effect the changes",
"username": "Ramachandra_Tummala"
},
{
"code": "sudo systemctl start mongodsystemdnproculimitsystemd[Service]",
"text": "Restart the server but still have this warning when execute “mongo” commandWelcome to the community @Christophe_QUEVAL!Can you confirm the command line you are running to restart the server? For example, are you using sudo systemctl start mongod (systemd) as suggested in Install MongoDB Community Edition on Red Hat or CentOS?It looks like you have followed the instructions to create a seperate nproc configuration from the UNIX ulimit Settings page. If you are using the default systemd installation, you can also specify resource limits within the [Service] sections of service scripts.Regards,\nStennie",
"username": "Stennie_X"
},
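A hedged sketch of what setting those limits on the systemd unit could look like (the drop-in path and values are illustrative; adjust to your environment):

```ini
# e.g. /etc/systemd/system/mongod.service.d/limits.conf (systemd drop-in override)
[Service]
LimitNOFILE=64000
LimitNPROC=64000
```

followed by `sudo systemctl daemon-reload` and `sudo systemctl restart mongod` so the new limits are picked up.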
{
"code": "",
"text": "Yes already done, but no changes…",
"username": "Christophe_QUEVAL"
}
] | MongoDB 4.4 and limits | 2020-10-31T19:36:17.009Z | MongoDB 4.4 and limits | 4,437 |
null | [
"atlas-search"
] | [
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"email\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n {\n $search: {\n autocomplete: {\n path: \"email\",\n query: \"ka\",\n },\n },\n },\n",
"text": "Hey guys, how do I get search autocomplete to work with email addresses?Here’s my search index.If I have a document say,{\n_id: “123”\nemail: “[email protected]”\n}and do a search w/ this agit will successfully return the doc above.Even if the query is “kat” or “kate”, the search will return the document.However, I soon as I add the “@” (kate@) the document is not returned. I’m assuming this has something to do with “@” being a special character.So how can I improve the search so autocomplete works on emails?Thanks!",
"username": "Tyler_Bell"
},
{
"code": "%40@{\n $search: {\n autocomplete: {\n path: \"email\",\n query: \"kate%40\",\n },\n },\n },\n",
"text": "Hi @Tyler_Bell,I suggest you try an encoding url representation like %40 instead of symbol @:W3Schools offers free online tutorials, references and exercises in all the major languages of the web. Covering popular subjects like HTML, CSS, JavaScript, Python, SQL, Java, and many, many more.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": " $search: {\n autocomplete: {\n path: \"email\",\n query: \"kate%40\",\n },\n },\n",
"text": "Hi Pavel, thanks so much for the reply.Unfortunately that is not working for me I triedIs there anything else I can try?Thanks!",
"username": "Tyler_Bell"
},
{
"code": "@@$search: {\n autocomplete: {\n path: \"email\",\n query: \"kate@\",\n },\n } \n",
"text": "Hi @Tyler_Bell,Using HTML codes worked for me, for @ I used @ valuePlease let me know if this worked for you.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": " \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"email\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n{\n $search: {\n autocomplete: {\n path: \"email\",\n query: \"kate@\"\n },\n },\n},\n@",
"text": "@hmm that also doesn’t work for me Here’s my indexand here’s my queryQuerying “ka”, “kat”, and “kate” all work, but as soon as I add the “@”, there are no results.",
"username": "Tyler_Bell"
},
{
"code": "",
"text": "@Pavel_Duchovny A friendly bump ",
"username": "Tyler_Bell"
},
{
"code": "",
"text": "Hi @Tyler_Bell,Apperantly autocomplete index does not tokenize @ so you can’t use autocomplete for this search…You would want to use a new text index with text operation and not autocomplete ,since autocomplete is only designed to work with autocomplete indexes.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks Pavel.Do you know if there are any plans of allowing autocomplete to work with emails? It seems like emails would be a common use case.Thanks!",
"username": "Tyler_Bell"
},
{
"code": "",
"text": "Hi @Tyler_Bell,Yes we have plans to introduce an email tokenizer.You can place a comment on https://feedback.mongodb.com …Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks, just posted it as an idea.",
"username": "Tyler_Bell"
},
{
"code": "{ \"_id\" : 1, \"email\" : \"[email protected]\" }\n{ \"_id\" : 2, \"email\" : \"[email protected]\" }\n{ \"_id\" : 3, \"email\" : \"missing\" }\n {\n \"analyzer\": \"emailAutocomplete\",\n \"mappings\": {\n \"dynamic\": true\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"emailAutocomplete\",\n \"tokenFilters\": [\n {\n \"maxGram\": 10,\n \"minGram\": 1,\n \"type\": \"nGram\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"keyword\"\n }\n }\n ]\n}\ndb.email_search.aggregate([\n... {\n... $search: {\n... index: \"KeywordTokenizer_Autocomplete_Analyzer\",\n... text: {\n... path: \"email\",\n... query: \"@\",\n... }\n... }\n... }\n... ])\n{ \"_id\" : 1, \"email\" : \"[email protected]\" }\n{ \"_id\" : 2, \"email\" : \"[email protected]\" }\nminGramminGram",
"text": "@Tyler_Bell One alternative that could work for you Tyler would be to create a custom analyzer with a keyword tokenizer and autocomplete field. It’s important to note that your query operator would need to change from autocomplete to text.Consider the following three documents:The following index definition:The following query:Thanks to these Harshad and @Pavel_Duchovny for the investigation. I am only relaying the message. Hopefully that helps you.In production, I would recommend considering/testing a higher value for minGram depending on your corpus and use case because such a low minGram could quickly expand the size of your index.",
"username": "Marcus"
},
{
"code": "",
"text": "Another gotcha to consider is that the name of the analyzer should be different from your previous index, otherwise you need to delete the index and create it again.",
"username": "Marcus"
},
{
"code": "search",
"text": "Awesome, thank you! I will give this a try!A completely non related question, but is there an ETA on being able to use the search stage after other stages? idea hereThanks for your help!",
"username": "Tyler_Bell"
},
{
"code": "$search",
"text": "There currently is no ETA for $search as a later stage. Such a featrue would require significant changes, so we are doing our best to support a variety of use cases and scenarios to support the community. If you have specific questions you can ask them here or in the feedback portal.Thanks again!",
"username": "Marcus"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Search Autocomplete Type | 2020-10-19T03:23:29.648Z | Mongo Search Autocomplete Type | 7,022 |
null | [
"aggregation",
"java"
] | [
{
"code": "org.mongodb:mongodb-driver-sync:4.1.1 final List<Bson> updates = new ArrayList<>();\n updates.add(Updates.set(\"projectId\", \"p1\"));\n updates.add(Updates.set(\"userId\", \"u1\"));\n updates.add(Updates.set(\"visitorId\", \"v1\"));\n updates.add(Updates.currentTimestamp(\"lastSeenTime\"));\n\n final Document updatedDocument =\n this.visitorsCollection.findOneAndUpdate(\n and(eq(\"projectId\", visitor.getProjectId()),\n or(eq(\"userId\", visitor.getUserId()), eq(\"visitorId\", visitor.getVisitorId()))),\n updates,\n new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER).upsert(true));\nUpdates.currentTimestamp(...)Updates.setOnInsert(...)findOneAndUpdate",
"text": "Hi there!Using a standalone instance of MongoDB 4.4.1 Community and the Java driver (org.mongodb:mongodb-driver-sync:4.1.1), when I execute the following operation:I am getting the following error:Exception in thread “main” com.mongodb.MongoCommandException: Command failed with error 40324 (Location40324): ‘Unrecognized pipeline stage name: ‘$currentDate’’ on server 35.238.203.251:53254. The full response is {“ok”: 0.0, “errmsg”: “Unrecognized pipeline stage name: ‘$currentDate’”, “code”: 40324, “codeName”: “Location40324”}\nat com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175)\nat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:359)\nat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:280)\nat com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:100)\nat com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:490)\nat com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:71)\nat com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:255)\nat com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:202)\nat com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:118)\nat com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:110)\nat com.mongodb.internal.operation.CommandOperationHelper$13.call(CommandOperationHelper.java:712)\nat com.mongodb.internal.operation.OperationHelper.withReleasableConnection(OperationHelper.java:620)\nat com.mongodb.internal.operation.CommandOperationHelper.executeRetryableCommand(CommandOperationHelper.java:705)\nat com.mongodb.internal.operation.CommandOperationHelper.executeRetryableCommand(CommandOperationHelper.java:697)\nat com.mongodb.internal.operation.BaseFindAndModifyOperation.execute(BaseFindAndModifyOperation.java:69)\nat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:195)\nat com.mongodb.client.internal.MongoCollectionImpl.executeFindOneAndUpdate(MongoCollectionImpl.java:785)\nat com.mongodb.client.internal.MongoCollectionImpl.findOneAndUpdate(MongoCollectionImpl.java:765)The error is caused by the use of Updates.currentTimestamp(...). The same happens if I use Updates.setOnInsert(...). Are these actions not allowed with findOneAndUpdate? My purpose is to update some fields or create a new document with specific fields depending on whether a document if found or not.",
"username": "Laurent_Pellegrino"
},
{
"code": "",
"text": "See JAVA-3872: Unrecognized pipeline stage name: ‘$setOnInsert’ for my answer to an almost identical question.",
"username": "Jeffrey_Yemin"
},
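For anyone who lands here without following the link: as I understand the linked ticket, passing a List<Bson> as the update causes the driver to send it as an aggregation-pipeline update (where $currentDate and $setOnInsert are not valid stages), while combining the operators into a single Bson keeps it a classic update document. A sketch reusing the code from the question (my paraphrase, not the exact wording of JAVA-3872):

```java
// Combine the update operators into one Bson instead of passing a List,
// so the driver sends a normal update document rather than a pipeline
final Bson update = Updates.combine(
    Updates.set("projectId", "p1"),
    Updates.set("userId", "u1"),
    Updates.set("visitorId", "v1"),
    Updates.currentTimestamp("lastSeenTime"));

final Document updatedDocument =
    this.visitorsCollection.findOneAndUpdate(
        and(eq("projectId", visitor.getProjectId()),
            or(eq("userId", visitor.getUserId()), eq("visitorId", visitor.getVisitorId()))),
        update,
        new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER).upsert(true));
```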
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unrecognized pipeline stage name: '$currentDate' | 2020-10-27T21:27:05.292Z | Unrecognized pipeline stage name: ‘$currentDate’ | 6,166 |
null | [
"performance"
] | [
{
"code": "{\"_id\":{\"$oid\":\"5f8ffa19a46b9d4179eaf943\"},\"Nome\":\"Eduino Dykstra\",\"Data nascita\":\"March 4, 1980\",\"Nazionalità\":\"Togo\",\"Ruolo\":\"FW\",\"Link calciatore\":\"https://fbref.com/en/players/vjLRENdD/Eduino-Dykstra\",\"Stagioni\":[{\"season\":\"2006-2007\",\"age\":{\"$numberInt\":\"27\"},\"squad\":\"Atlas\",\"country\":\"mx MEX\",\"comp_level\":\"1. Liga MX\",\"lg_finish\":\"18th\",\"games\":{\"$numberInt\":\"2\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"32\"},\"goals\":{\"$numberInt\":\"0\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"0\"},\"goals_pens_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0\"},\"xg\":{\"$numberDouble\":\"0.52\"},\"npxg\":{\"$numberDouble\":\"0.52\"},\"xa\":{\"$numberDouble\":\"0.58\"},\"xg_per90\":{\"$numberDouble\":\"0\"},\"xa_per90\":{\"$numberDouble\":\"0\"},\"xg_xa_per90\":{\"$numberDouble\":\"0\"},\"npxg_per90\":{\"$numberDouble\":\"0\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0\"}},{\"season\":\"2007-2008\",\"age\":{\"$numberInt\":\"28\"},\"squad\":\"Puebla\",\"country\":\"mx MEX\",\"comp_level\":\"1. Liga MX\",\"lg_finish\":\"8th\",\"games\":{\"$numberInt\":\"0\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"0\"},\"goals\":{\"$numberInt\":\"0\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"0\"},\"goals_pens_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0\"},\"xg\":{\"$numberDouble\":\"0\"},\"npxg\":{\"$numberDouble\":\"0\"},\"xa\":{\"$numberDouble\":\"0.81\"},\"xg_per90\":{\"$numberDouble\":\"0\"},\"xa_per90\":{\"$numberDouble\":\"0\"},\"xg_xa_per90\":{\"$numberDouble\":\"0\"},\"npxg_per90\":{\"$numberDouble\":\"0\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0\"}},{\"season\":\"2008-2009\",\"age\":{\"$numberInt\":\"29\"},\"squad\":\"Villarreal\",\"country\":\"es ESP\",\"comp_level\":\"1. La Liga\",\"lg_finish\":\"5th\",\"games\":{\"$numberInt\":\"3\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"98\"},\"goals\":{\"$numberInt\":\"0\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"0\"},\"goals_pens_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0\"},\"xg\":{\"$numberDouble\":\"0.08\"},\"npxg\":{\"$numberDouble\":\"0.08\"},\"xa\":{\"$numberDouble\":\"0.97\"},\"xg_per90\":{\"$numberDouble\":\"0.08\"},\"xa_per90\":{\"$numberDouble\":\"0.97\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.05\"},\"npxg_per90\":{\"$numberDouble\":\"0.08\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.05\"}},{\"season\":\"2009-2010\",\"age\":{\"$numberInt\":\"30\"},\"squad\":\"Nantes\",\"country\":\"fr FRA\",\"comp_level\":\"1. 
Ligue 1\",\"lg_finish\":\"13th\",\"games\":{\"$numberInt\":\"21\"},\"games_starts\":{\"$numberInt\":\"20\"},\"minutes\":{\"$numberInt\":\"1040\"},\"goals\":{\"$numberInt\":\"13\"},\"assists\":{\"$numberInt\":\"2\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"1\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1.18\"},\"assists_per90\":{\"$numberDouble\":\"0.18\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.35\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.18\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.36\"},\"xg\":{\"$numberDouble\":\"10.4\"},\"npxg\":{\"$numberDouble\":\"10.4\"},\"xa\":{\"$numberDouble\":\"0.72\"},\"xg_per90\":{\"$numberDouble\":\"0.94\"},\"xa_per90\":{\"$numberDouble\":\"0.06\"},\"xg_xa_per90\":{\"$numberDouble\":\"1\"},\"npxg_per90\":{\"$numberDouble\":\"0.94\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1\"}},{\"season\":\"2010-2011\",\"age\":{\"$numberInt\":\"31\"},\"squad\":\"FC Khimki\",\"country\":\"ru RUS\",\"comp_level\":\"1. Russian Premier League\",\"lg_finish\":\"15th\",\"games\":{\"$numberInt\":\"27\"},\"games_starts\":{\"$numberInt\":\"2\"},\"minutes\":{\"$numberInt\":\"570\"},\"goals\":{\"$numberInt\":\"10\"},\"assists\":{\"$numberInt\":\"1\"},\"pens_made\":{\"$numberInt\":\"1\"},\"pens_att\":{\"$numberInt\":\"2\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"1\"},\"goals_per90\":{\"$numberDouble\":\"1.66\"},\"assists_per90\":{\"$numberDouble\":\"0.16\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.81\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.5\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.66\"},\"xg\":{\"$numberDouble\":\"9.41\"},\"npxg\":{\"$numberDouble\":\"7.37\"},\"xa\":{\"$numberDouble\":\"0.91\"},\"xg_per90\":{\"$numberDouble\":\"1.56\"},\"xa_per90\":{\"$numberDouble\":\"0.15\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.71\"},\"npxg_per90\":{\"$numberDouble\":\"1.22\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.36\"}},{\"season\":\"2011-2012\",\"age\":{\"$numberInt\":\"32\"},\"squad\":\"Cagliari\",\"country\":\"it ITA\",\"comp_level\":\"1. Serie A\",\"lg_finish\":\"14th\",\"games\":{\"$numberInt\":\"6\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"186\"},\"goals\":{\"$numberInt\":\"1\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0.5\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"0.5\"},\"goals_pens_per90\":{\"$numberDouble\":\"0.5\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0.5\"},\"xg\":{\"$numberDouble\":\"0.43\"},\"npxg\":{\"$numberDouble\":\"0.43\"},\"xa\":{\"$numberDouble\":\"0.54\"},\"xg_per90\":{\"$numberDouble\":\"0.21\"},\"xa_per90\":{\"$numberDouble\":\"0.27\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.48\"},\"npxg_per90\":{\"$numberDouble\":\"0.21\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.48\"}},{\"season\":\"2012-2013\",\"age\":{\"$numberInt\":\"33\"},\"squad\":\"Defensa y Just\",\"country\":\"ar ARG\",\"comp_level\":\"1. 
Primera Div\",\"lg_finish\":\"6th\",\"games\":{\"$numberInt\":\"5\"},\"games_starts\":{\"$numberInt\":\"2\"},\"minutes\":{\"$numberInt\":\"220\"},\"goals\":{\"$numberInt\":\"3\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1.5\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.5\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.5\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.5\"},\"xg\":{\"$numberDouble\":\"1.53\"},\"npxg\":{\"$numberDouble\":\"1.53\"},\"xa\":{\"$numberDouble\":\"0.38\"},\"xg_per90\":{\"$numberDouble\":\"0.76\"},\"xa_per90\":{\"$numberDouble\":\"0.19\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.95\"},\"npxg_per90\":{\"$numberDouble\":\"0.76\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.95\"}},{\"season\":\"2013-2014\",\"age\":{\"$numberInt\":\"34\"},\"squad\":\"Emmen\",\"country\":\"nl NED\",\"comp_level\":\"1. Dutch Eredivisie\",\"lg_finish\":\"12th\",\"games\":{\"$numberInt\":\"14\"},\"games_starts\":{\"$numberInt\":\"12\"},\"minutes\":{\"$numberInt\":\"963\"},\"goals\":{\"$numberInt\":\"5\"},\"assists\":{\"$numberInt\":\"1\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0.5\"},\"assists_per90\":{\"$numberDouble\":\"0.1\"},\"goals_assists_per90\":{\"$numberDouble\":\"0.6\"},\"goals_pens_per90\":{\"$numberDouble\":\"0.5\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0.6\"},\"xg\":{\"$numberDouble\":\"5.32\"},\"npxg\":{\"$numberDouble\":\"5.32\"},\"xa\":{\"$numberDouble\":\"1.09\"},\"xg_per90\":{\"$numberDouble\":\"0.53\"},\"xa_per90\":{\"$numberDouble\":\"0.1\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.63\"},\"npxg_per90\":{\"$numberDouble\":\"0.53\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.63\"}},{\"season\":\"2014-2015\",\"age\":{\"$numberInt\":\"35\"},\"squad\":\"Munchen Gladbach\",\"country\":\"de GER\",\"comp_level\":\"1. Bundesliga\",\"lg_finish\":\"4th\",\"games\":{\"$numberInt\":\"33\"},\"games_starts\":{\"$numberInt\":\"6\"},\"minutes\":{\"$numberInt\":\"967\"},\"goals\":{\"$numberInt\":\"18\"},\"assists\":{\"$numberInt\":\"3\"},\"pens_made\":{\"$numberInt\":\"4\"},\"pens_att\":{\"$numberInt\":\"6\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1.8\"},\"assists_per90\":{\"$numberDouble\":\"0.3\"},\"goals_assists_per90\":{\"$numberDouble\":\"2.1\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.4\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.7\"},\"xg\":{\"$numberDouble\":\"16.2\"},\"npxg\":{\"$numberDouble\":\"13\"},\"xa\":{\"$numberDouble\":\"2.54\"},\"xg_per90\":{\"$numberDouble\":\"1.61\"},\"xa_per90\":{\"$numberDouble\":\"0.25\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.86\"},\"npxg_per90\":{\"$numberDouble\":\"1.3\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.55\"}},{\"season\":\"2015-2016\",\"age\":{\"$numberInt\":\"36\"},\"squad\":\"Schalke 04\",\"country\":\"de GER\",\"comp_level\":\"1. 
Bundesliga\",\"lg_finish\":\"12th\",\"games\":{\"$numberInt\":\"22\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"667\"},\"goals\":{\"$numberInt\":\"6\"},\"assists\":{\"$numberInt\":\"3\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"1\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"1\"},\"goals_per90\":{\"$numberDouble\":\"0.85\"},\"assists_per90\":{\"$numberDouble\":\"0.42\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.27\"},\"goals_pens_per90\":{\"$numberDouble\":\"0.85\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.28\"},\"xg\":{\"$numberDouble\":\"1.26\"},\"npxg\":{\"$numberDouble\":\"1.26\"},\"xa\":{\"$numberDouble\":\"1.71\"},\"xg_per90\":{\"$numberDouble\":\"0.18\"},\"xa_per90\":{\"$numberDouble\":\"0.24\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.42\"},\"npxg_per90\":{\"$numberDouble\":\"0.18\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.42\"}},{\"season\":\"2016-2017\",\"age\":{\"$numberInt\":\"37\"},\"squad\":\"Sheffield Utd\",\"country\":\"eng ENG\",\"comp_level\":\"1. Premier League\",\"lg_finish\":\"9th\",\"games\":{\"$numberInt\":\"5\"},\"games_starts\":{\"$numberInt\":\"3\"},\"minutes\":{\"$numberInt\":\"271\"},\"goals\":{\"$numberInt\":\"3\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"1\"},\"goals_pens_per90\":{\"$numberDouble\":\"1\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1\"},\"xg\":{\"$numberDouble\":\"1.04\"},\"npxg\":{\"$numberDouble\":\"1.04\"},\"xa\":{\"$numberDouble\":\"0.6\"},\"xg_per90\":{\"$numberDouble\":\"0.34\"},\"xa_per90\":{\"$numberDouble\":\"0.19\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.53\"},\"npxg_per90\":{\"$numberDouble\":\"0.34\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.53\"}},{\"season\":\"2017-2018\",\"age\":{\"$numberInt\":\"38\"},\"squad\":\"Utrecht\",\"country\":\"nl NED\",\"comp_level\":\"1. Dutch Eredivisie\",\"lg_finish\":\"6th\",\"games\":{\"$numberInt\":\"34\"},\"games_starts\":{\"$numberInt\":\"3\"},\"minutes\":{\"$numberInt\":\"872\"},\"goals\":{\"$numberInt\":\"14\"},\"assists\":{\"$numberInt\":\"2\"},\"pens_made\":{\"$numberInt\":\"4\"},\"pens_att\":{\"$numberInt\":\"6\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"2\"},\"goals_per90\":{\"$numberDouble\":\"1.55\"},\"assists_per90\":{\"$numberDouble\":\"0.22\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.77\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.11\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.33\"},\"xg\":{\"$numberDouble\":\"13.1\"},\"npxg\":{\"$numberDouble\":\"9.09\"},\"xa\":{\"$numberDouble\":\"2.05\"},\"xg_per90\":{\"$numberDouble\":\"1.45\"},\"xa_per90\":{\"$numberDouble\":\"0.22\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.67\"},\"npxg_per90\":{\"$numberDouble\":\"1.01\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.23\"}},{\"season\":\"2018-2019\",\"age\":{\"$numberInt\":\"39\"},\"squad\":\"Alavés\",\"country\":\"es ESP\",\"comp_level\":\"1. 
La Liga\",\"lg_finish\":\"16th\",\"games\":{\"$numberInt\":\"14\"},\"games_starts\":{\"$numberInt\":\"1\"},\"minutes\":{\"$numberInt\":\"445\"},\"goals\":{\"$numberInt\":\"3\"},\"assists\":{\"$numberInt\":\"4\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0.75\"},\"assists_per90\":{\"$numberDouble\":\"1\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.75\"},\"goals_pens_per90\":{\"$numberDouble\":\"0.75\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.75\"},\"xg\":{\"$numberDouble\":\"1.39\"},\"npxg\":{\"$numberDouble\":\"1.39\"},\"xa\":{\"$numberDouble\":\"0.21\"},\"xg_per90\":{\"$numberDouble\":\"0.34\"},\"xa_per90\":{\"$numberDouble\":\"0.05\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.39\"},\"npxg_per90\":{\"$numberDouble\":\"0.34\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.39\"}},{\"season\":\"2019-2020\",\"age\":{\"$numberInt\":\"40\"},\"squad\":\"Defensa y Just\",\"country\":\"ar ARG\",\"comp_level\":\"1. Primera Div\",\"lg_finish\":\"6th\",\"games\":{\"$numberInt\":\"17\"},\"games_starts\":{\"$numberInt\":\"6\"},\"minutes\":{\"$numberInt\":\"761\"},\"goals\":{\"$numberInt\":\"10\"},\"assists\":{\"$numberInt\":\"1\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"1\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1.25\"},\"assists_per90\":{\"$numberDouble\":\"0.12\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.37\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.25\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.37\"},\"xg\":{\"$numberDouble\":\"8.54\"},\"npxg\":{\"$numberDouble\":\"8.54\"},\"xa\":{\"$numberDouble\":\"0.18\"},\"xg_per90\":{\"$numberDouble\":\"1.06\"},\"xa_per90\":{\"$numberDouble\":\"0.02\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.08\"},\"npxg_per90\":{\"$numberDouble\":\"1.06\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.08\"}}]}\n{\"_id\":{\"$oid\":\"5f8ffa19a46b9d4179eafd2c\"},\"Nome\":\"Eduino Dykstra\",\"Data nascita\":\"March 4, 1980\",\"Nazionalità\":\"Togo\",\"Ruolo\":\"FW\",\"Link calciatore\":\"https://fbref.com/en/players/vjLRENdD/Eduino-Dykstra\",\"Ultima stagione\":{\"season\":\"2019-2020\",\"age\":{\"$numberInt\":\"40\"},\"squad\":\"Defensa y Just\",\"country\":\"ar ARG\",\"comp_level\":\"1. Primera Div\",\"lg_finish\":\"6th\",\"games\":{\"$numberInt\":\"17\"},\"games_starts\":{\"$numberInt\":\"6\"},\"minutes\":{\"$numberInt\":\"761\"},\"goals\":{\"$numberInt\":\"10\"},\"assists\":{\"$numberInt\":\"1\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"1\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1.25\"},\"assists_per90\":{\"$numberDouble\":\"0.12\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.37\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.25\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.37\"},\"xg\":{\"$numberDouble\":\"8.54\"},\"npxg\":{\"$numberDouble\":\"8.54\"},\"xa\":{\"$numberDouble\":\"0.18\"},\"xg_per90\":{\"$numberDouble\":\"1.06\"},\"xa_per90\":{\"$numberDouble\":\"0.02\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.08\"},\"npxg_per90\":{\"$numberDouble\":\"1.06\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.08\"}},\"Penultima stagione\":{\"season\":\"2018-2019\",\"age\":{\"$numberInt\":\"39\"},\"squad\":\"Alavés\",\"country\":\"es ESP\",\"comp_level\":\"1. 
La Liga\",\"lg_finish\":\"16th\",\"games\":{\"$numberInt\":\"14\"},\"games_starts\":{\"$numberInt\":\"1\"},\"minutes\":{\"$numberInt\":\"445\"},\"goals\":{\"$numberInt\":\"3\"},\"assists\":{\"$numberInt\":\"4\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0.75\"},\"assists_per90\":{\"$numberDouble\":\"1\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.75\"},\"goals_pens_per90\":{\"$numberDouble\":\"0.75\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.75\"},\"xg\":{\"$numberDouble\":\"1.39\"},\"npxg\":{\"$numberDouble\":\"1.39\"},\"xa\":{\"$numberDouble\":\"0.21\"},\"xg_per90\":{\"$numberDouble\":\"0.34\"},\"xa_per90\":{\"$numberDouble\":\"0.05\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.39\"},\"npxg_per90\":{\"$numberDouble\":\"0.34\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.39\"}},\"Stagioni\":[{\"season\":\"2006-2007\",\"age\":{\"$numberInt\":\"27\"},\"squad\":\"Atlas\",\"country\":\"mx MEX\",\"comp_level\":\"1. Liga MX\",\"lg_finish\":\"18th\",\"games\":{\"$numberInt\":\"2\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"32\"},\"goals\":{\"$numberInt\":\"0\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"0\"},\"goals_pens_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0\"},\"xg\":{\"$numberDouble\":\"0.52\"},\"npxg\":{\"$numberDouble\":\"0.52\"},\"xa\":{\"$numberDouble\":\"0.58\"},\"xg_per90\":{\"$numberDouble\":\"0\"},\"xa_per90\":{\"$numberDouble\":\"0\"},\"xg_xa_per90\":{\"$numberDouble\":\"0\"},\"npxg_per90\":{\"$numberDouble\":\"0\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0\"}},{\"season\":\"2007-2008\",\"age\":{\"$numberInt\":\"28\"},\"squad\":\"Puebla\",\"country\":\"mx MEX\",\"comp_level\":\"1. Liga MX\",\"lg_finish\":\"8th\",\"games\":{\"$numberInt\":\"0\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"0\"},\"goals\":{\"$numberInt\":\"0\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"0\"},\"goals_pens_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0\"},\"xg\":{\"$numberDouble\":\"0\"},\"npxg\":{\"$numberDouble\":\"0\"},\"xa\":{\"$numberDouble\":\"0.81\"},\"xg_per90\":{\"$numberDouble\":\"0\"},\"xa_per90\":{\"$numberDouble\":\"0\"},\"xg_xa_per90\":{\"$numberDouble\":\"0\"},\"npxg_per90\":{\"$numberDouble\":\"0\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0\"}},{\"season\":\"2008-2009\",\"age\":{\"$numberInt\":\"29\"},\"squad\":\"Villarreal\",\"country\":\"es ESP\",\"comp_level\":\"1. 
La Liga\",\"lg_finish\":\"5th\",\"games\":{\"$numberInt\":\"3\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"98\"},\"goals\":{\"$numberInt\":\"0\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"0\"},\"goals_pens_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0\"},\"xg\":{\"$numberDouble\":\"0.08\"},\"npxg\":{\"$numberDouble\":\"0.08\"},\"xa\":{\"$numberDouble\":\"0.97\"},\"xg_per90\":{\"$numberDouble\":\"0.08\"},\"xa_per90\":{\"$numberDouble\":\"0.97\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.05\"},\"npxg_per90\":{\"$numberDouble\":\"0.08\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.05\"}},{\"season\":\"2009-2010\",\"age\":{\"$numberInt\":\"30\"},\"squad\":\"Nantes\",\"country\":\"fr FRA\",\"comp_level\":\"1. Ligue 1\",\"lg_finish\":\"13th\",\"games\":{\"$numberInt\":\"21\"},\"games_starts\":{\"$numberInt\":\"20\"},\"minutes\":{\"$numberInt\":\"1040\"},\"goals\":{\"$numberInt\":\"13\"},\"assists\":{\"$numberInt\":\"2\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"1\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1.18\"},\"assists_per90\":{\"$numberDouble\":\"0.18\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.35\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.18\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.36\"},\"xg\":{\"$numberDouble\":\"10.4\"},\"npxg\":{\"$numberDouble\":\"10.4\"},\"xa\":{\"$numberDouble\":\"0.72\"},\"xg_per90\":{\"$numberDouble\":\"0.94\"},\"xa_per90\":{\"$numberDouble\":\"0.06\"},\"xg_xa_per90\":{\"$numberDouble\":\"1\"},\"npxg_per90\":{\"$numberDouble\":\"0.94\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1\"}},{\"season\":\"2010-2011\",\"age\":{\"$numberInt\":\"31\"},\"squad\":\"FC Khimki\",\"country\":\"ru RUS\",\"comp_level\":\"1. Russian Premier League\",\"lg_finish\":\"15th\",\"games\":{\"$numberInt\":\"27\"},\"games_starts\":{\"$numberInt\":\"2\"},\"minutes\":{\"$numberInt\":\"570\"},\"goals\":{\"$numberInt\":\"10\"},\"assists\":{\"$numberInt\":\"1\"},\"pens_made\":{\"$numberInt\":\"1\"},\"pens_att\":{\"$numberInt\":\"2\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"1\"},\"goals_per90\":{\"$numberDouble\":\"1.66\"},\"assists_per90\":{\"$numberDouble\":\"0.16\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.81\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.5\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.66\"},\"xg\":{\"$numberDouble\":\"9.41\"},\"npxg\":{\"$numberDouble\":\"7.37\"},\"xa\":{\"$numberDouble\":\"0.91\"},\"xg_per90\":{\"$numberDouble\":\"1.56\"},\"xa_per90\":{\"$numberDouble\":\"0.15\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.71\"},\"npxg_per90\":{\"$numberDouble\":\"1.22\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.36\"}},{\"season\":\"2011-2012\",\"age\":{\"$numberInt\":\"32\"},\"squad\":\"Cagliari\",\"country\":\"it ITA\",\"comp_level\":\"1. 
Serie A\",\"lg_finish\":\"14th\",\"games\":{\"$numberInt\":\"6\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"186\"},\"goals\":{\"$numberInt\":\"1\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0.5\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"0.5\"},\"goals_pens_per90\":{\"$numberDouble\":\"0.5\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0.5\"},\"xg\":{\"$numberDouble\":\"0.43\"},\"npxg\":{\"$numberDouble\":\"0.43\"},\"xa\":{\"$numberDouble\":\"0.54\"},\"xg_per90\":{\"$numberDouble\":\"0.21\"},\"xa_per90\":{\"$numberDouble\":\"0.27\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.48\"},\"npxg_per90\":{\"$numberDouble\":\"0.21\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.48\"}},{\"season\":\"2012-2013\",\"age\":{\"$numberInt\":\"33\"},\"squad\":\"Defensa y Just\",\"country\":\"ar ARG\",\"comp_level\":\"1. Primera Div\",\"lg_finish\":\"6th\",\"games\":{\"$numberInt\":\"5\"},\"games_starts\":{\"$numberInt\":\"2\"},\"minutes\":{\"$numberInt\":\"220\"},\"goals\":{\"$numberInt\":\"3\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1.5\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.5\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.5\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.5\"},\"xg\":{\"$numberDouble\":\"1.53\"},\"npxg\":{\"$numberDouble\":\"1.53\"},\"xa\":{\"$numberDouble\":\"0.38\"},\"xg_per90\":{\"$numberDouble\":\"0.76\"},\"xa_per90\":{\"$numberDouble\":\"0.19\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.95\"},\"npxg_per90\":{\"$numberDouble\":\"0.76\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.95\"}},{\"season\":\"2013-2014\",\"age\":{\"$numberInt\":\"34\"},\"squad\":\"Emmen\",\"country\":\"nl NED\",\"comp_level\":\"1. Dutch Eredivisie\",\"lg_finish\":\"12th\",\"games\":{\"$numberInt\":\"14\"},\"games_starts\":{\"$numberInt\":\"12\"},\"minutes\":{\"$numberInt\":\"963\"},\"goals\":{\"$numberInt\":\"5\"},\"assists\":{\"$numberInt\":\"1\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"0.5\"},\"assists_per90\":{\"$numberDouble\":\"0.1\"},\"goals_assists_per90\":{\"$numberDouble\":\"0.6\"},\"goals_pens_per90\":{\"$numberDouble\":\"0.5\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"0.6\"},\"xg\":{\"$numberDouble\":\"5.32\"},\"npxg\":{\"$numberDouble\":\"5.32\"},\"xa\":{\"$numberDouble\":\"1.09\"},\"xg_per90\":{\"$numberDouble\":\"0.53\"},\"xa_per90\":{\"$numberDouble\":\"0.1\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.63\"},\"npxg_per90\":{\"$numberDouble\":\"0.53\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.63\"}},{\"season\":\"2014-2015\",\"age\":{\"$numberInt\":\"35\"},\"squad\":\"Munchen Gladbach\",\"country\":\"de GER\",\"comp_level\":\"1. 
Bundesliga\",\"lg_finish\":\"4th\",\"games\":{\"$numberInt\":\"33\"},\"games_starts\":{\"$numberInt\":\"6\"},\"minutes\":{\"$numberInt\":\"967\"},\"goals\":{\"$numberInt\":\"18\"},\"assists\":{\"$numberInt\":\"3\"},\"pens_made\":{\"$numberInt\":\"4\"},\"pens_att\":{\"$numberInt\":\"6\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1.8\"},\"assists_per90\":{\"$numberDouble\":\"0.3\"},\"goals_assists_per90\":{\"$numberDouble\":\"2.1\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.4\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.7\"},\"xg\":{\"$numberDouble\":\"16.2\"},\"npxg\":{\"$numberDouble\":\"13\"},\"xa\":{\"$numberDouble\":\"2.54\"},\"xg_per90\":{\"$numberDouble\":\"1.61\"},\"xa_per90\":{\"$numberDouble\":\"0.25\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.86\"},\"npxg_per90\":{\"$numberDouble\":\"1.3\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.55\"}},{\"season\":\"2015-2016\",\"age\":{\"$numberInt\":\"36\"},\"squad\":\"Schalke 04\",\"country\":\"de GER\",\"comp_level\":\"1. Bundesliga\",\"lg_finish\":\"12th\",\"games\":{\"$numberInt\":\"22\"},\"games_starts\":{\"$numberInt\":\"0\"},\"minutes\":{\"$numberInt\":\"667\"},\"goals\":{\"$numberInt\":\"6\"},\"assists\":{\"$numberInt\":\"3\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"1\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"1\"},\"goals_per90\":{\"$numberDouble\":\"0.85\"},\"assists_per90\":{\"$numberDouble\":\"0.42\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.27\"},\"goals_pens_per90\":{\"$numberDouble\":\"0.85\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.28\"},\"xg\":{\"$numberDouble\":\"1.26\"},\"npxg\":{\"$numberDouble\":\"1.26\"},\"xa\":{\"$numberDouble\":\"1.71\"},\"xg_per90\":{\"$numberDouble\":\"0.18\"},\"xa_per90\":{\"$numberDouble\":\"0.24\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.42\"},\"npxg_per90\":{\"$numberDouble\":\"0.18\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.42\"}},{\"season\":\"2016-2017\",\"age\":{\"$numberInt\":\"37\"},\"squad\":\"Sheffield Utd\",\"country\":\"eng ENG\",\"comp_level\":\"1. Premier League\",\"lg_finish\":\"9th\",\"games\":{\"$numberInt\":\"5\"},\"games_starts\":{\"$numberInt\":\"3\"},\"minutes\":{\"$numberInt\":\"271\"},\"goals\":{\"$numberInt\":\"3\"},\"assists\":{\"$numberInt\":\"0\"},\"pens_made\":{\"$numberInt\":\"0\"},\"pens_att\":{\"$numberInt\":\"0\"},\"cards_yellow\":{\"$numberInt\":\"0\"},\"cards_red\":{\"$numberInt\":\"0\"},\"goals_per90\":{\"$numberDouble\":\"1\"},\"assists_per90\":{\"$numberDouble\":\"0\"},\"goals_assists_per90\":{\"$numberDouble\":\"1\"},\"goals_pens_per90\":{\"$numberDouble\":\"1\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1\"},\"xg\":{\"$numberDouble\":\"1.04\"},\"npxg\":{\"$numberDouble\":\"1.04\"},\"xa\":{\"$numberDouble\":\"0.6\"},\"xg_per90\":{\"$numberDouble\":\"0.34\"},\"xa_per90\":{\"$numberDouble\":\"0.19\"},\"xg_xa_per90\":{\"$numberDouble\":\"0.53\"},\"npxg_per90\":{\"$numberDouble\":\"0.34\"},\"npxg_xa_per90\":{\"$numberDouble\":\"0.53\"}},{\"season\":\"2017-2018\",\"age\":{\"$numberInt\":\"38\"},\"squad\":\"Utrecht\",\"country\":\"nl NED\",\"comp_level\":\"1. 
Dutch Eredivisie\",\"lg_finish\":\"6th\",\"games\":{\"$numberInt\":\"34\"},\"games_starts\":{\"$numberInt\":\"3\"},\"minutes\":{\"$numberInt\":\"872\"},\"goals\":{\"$numberInt\":\"14\"},\"assists\":{\"$numberInt\":\"2\"},\"pens_made\":{\"$numberInt\":\"4\"},\"pens_att\":{\"$numberInt\":\"6\"},\"cards_yellow\":{\"$numberInt\":\"1\"},\"cards_red\":{\"$numberInt\":\"2\"},\"goals_per90\":{\"$numberDouble\":\"1.55\"},\"assists_per90\":{\"$numberDouble\":\"0.22\"},\"goals_assists_per90\":{\"$numberDouble\":\"1.77\"},\"goals_pens_per90\":{\"$numberDouble\":\"1.11\"},\"goals_assists_pens_per90\":{\"$numberDouble\":\"1.33\"},\"xg\":{\"$numberDouble\":\"13.1\"},\"npxg\":{\"$numberDouble\":\"9.09\"},\"xa\":{\"$numberDouble\":\"2.05\"},\"xg_per90\":{\"$numberDouble\":\"1.45\"},\"xa_per90\":{\"$numberDouble\":\"0.22\"},\"xg_xa_per90\":{\"$numberDouble\":\"1.67\"},\"npxg_per90\":{\"$numberDouble\":\"1.01\"},\"npxg_xa_per90\":{\"$numberDouble\":\"1.23\"}}]}\n",
"text": "I have 2 databases with 1 collection and the same documents, this databases run in localhost. The documents in the different databases differ only in their structure.Example of Structure 1Example of Structure 2:On these databases I perform the same operations and calculate the times through an application written in Java. The queries aren’t optimized and that’s not what I’m interested in right now.\nBut for example, why in a classic find operation performed through “Link calciatore” for both cases, the operation on the database with structure 2 is faster than the operation on the database with structure 1 ??\nWhat affects performance in this case? Structure?",
"username": "Andrea_Langone"
},
{
"code": "",
"text": "Hi @Andrea_Langone,Can you post the tested queries and their explain plan and evidence of the execution time?Also can you provide collection.stats() output from both.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
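For reference, commands of this shape produce the output being requested (collection and field names are taken from the question; the URL value is a placeholder):

```js
// Collection statistics for the collection in each database
db.Calciatori.stats()

// Execution statistics for the query being compared
db.Calciatori.find({ "Link calciatore": "<player url>" }).explain("executionStats")
```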
{
"code": " {\n \"ns\" : \"FootballStats.Calciatori\",\n \"size\" : 2519591,\n \"count\" : 500,\n \"avgObjSize\" : 5039,\n \"storageSize\" : 745472,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:collection-42--2967289990190938292\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 25,\n \"blocks allocated\" : 25,\n \"blocks freed\" : 0,\n \"checkpoint size\" : 729088,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 0,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 745472,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 6,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 3,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 2766556,\n \"bytes dirty in the cache cumulative\" : 908,\n \"bytes read into cache\" : 0,\n \"bytes written from cache\" : 2523802,\n \"checkpoint blocked page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 0,\n \"eviction walk passes of a file\" : 0,\n \"eviction walk target pages histogram - 0-9\" : 0,\n \"eviction 
walk target pages histogram - 10-31\" : 0,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 0,\n \"eviction walk target pages histogram - 64-128\" : 0,\n \"eviction walks abandoned\" : 0,\n \"eviction walks gave up because they restarted their walk twice\" : 0,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 0,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 0,\n \"eviction walks reached end of tree\" : 0,\n \"eviction walks started from root of tree\" : 0,\n \"eviction walks started from saved location in tree\" : 0,\n \"hazard pointer blocked page eviction\" : 0,\n \"in-memory page passed criteria to be split\" : 0,\n \"in-memory page splits\" : 0,\n \"internal pages evicted\" : 0,\n \"internal pages split during eviction\" : 0,\n \"leaf pages split during eviction\" : 0,\n \"modified pages evicted\" : 0,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 0,\n \"page written requiring cache overflow records\" : 0,\n \"pages read into cache\" : 0,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages read into cache requiring cache overflow entries\" : 0,\n \"pages requested from the cache\" : 2500,\n \"pages seen by eviction walk\" : 0,\n \"pages written from cache\" : 23,\n \"pages written requiring in-memory restoration\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 0\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 121244,\n \"compressed pages read\" : 0,\n \"compressed pages written\" : 22,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 1\n },\n \"cursor\" : {\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 999,\n \"close calls that result in cache\" : 0,\n \"create calls\" : 2,\n \"insert calls\" : 500,\n \"insert key and value bytes\" : 2520528,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 250500,\n \"open cursor count\" : 0,\n 
\"operation restarted\" : 0,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 3502,\n \"search calls\" : 0,\n \"search near calls\" : 1500,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 43,\n \"internal page multi-block writes\" : 0,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 1,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 1,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 2,\n \"page reconciliation calls for eviction\" : 0,\n \"pages deleted\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0\n },\n \"transaction\" : {\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 1,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 20480,\n \"indexSizes\" : {\n \"_id_\" : 20480\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1\n {\n \"ns\" : \"FootballStats_2.Calciatori\",\n \"size\" : 2535188,\n \"count\" : 500,\n \"avgObjSize\" : 5070,\n \"storageSize\" : 737280,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:collection-44--2967289990190938292\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 25,\n \"blocks allocated\" : 25,\n \"blocks freed\" : 0,\n \"checkpoint size\" : 720896,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 0,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 737280,\n 
\"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 8,\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 3,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 2783499,\n \"bytes dirty in the cache cumulative\" : 908,\n \"bytes read into cache\" : 0,\n \"bytes written from cache\" : 2539399,\n \"checkpoint blocked page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 0,\n \"eviction walk passes of a file\" : 0,\n \"eviction walk target pages histogram - 0-9\" : 0,\n \"eviction walk target pages histogram - 10-31\" : 0,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 0,\n \"eviction walk target pages histogram - 64-128\" : 0,\n \"eviction walks abandoned\" : 0,\n \"eviction walks gave up because they restarted their walk twice\" : 0,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 0,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 0,\n \"eviction walks reached end of tree\" : 0,\n \"eviction walks started from root of tree\" : 0,\n \"eviction walks started from saved location in tree\" : 0,\n \"hazard pointer blocked page eviction\" : 0,\n \"in-memory page passed criteria to be split\" : 0,\n \"in-memory page splits\" : 0,\n \"internal pages evicted\" : 0,\n \"internal pages split during eviction\" : 0,\n \"leaf pages split during eviction\" : 0,\n \"modified pages evicted\" : 0,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 0,\n \"page written requiring cache overflow records\" : 0,\n \"pages read into cache\" : 0,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages read into cache requiring cache overflow entries\" : 0,\n \"pages requested from the cache\" : 2500,\n \"pages seen by eviction walk\" : 0,\n \"pages written from cache\" : 23,\n \"pages written requiring in-memory restoration\" : 0,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 0\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 
0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page size prior to compression \" : 127796,\n \"compressed pages read\" : 0,\n \"compressed pages written\" : 22,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 1\n },\n \"cursor\" : {\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 998,\n \"close calls that result in cache\" : 0,\n \"create calls\" : 3,\n \"insert calls\" : 500,\n \"insert key and value bytes\" : 2536125,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 250500,\n \"open cursor count\" : 0,\n \"operation restarted\" : 0,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 3502,\n \"search calls\" : 0,\n \"search near calls\" : 1500,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 43,\n \"internal page multi-block writes\" : 0,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 1,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 1,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 0,\n \"page reconciliation calls\" : 2,\n \"page reconciliation calls for eviction\" : 0,\n \"pages deleted\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0\n },\n \"transaction\" : {\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 1,\n \"indexBuilds\" : [ ],\n \"totalIndexSize\" : 20480,\n \"indexSizes\" : {\n \"_id_\" : 20480\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1\ndb.collection.find({\"Link calciatore\": value })\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"FootballStats.Calciatori\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"Link calciatore\" : {\n \"$eq\" : \"https://fbref.com/en/players/ohHqST5T/Vea-Pete\"\n }\n },\n \"winningPlan\" : {\n \"stage\" : \"COLLSCAN\",\n \"filter\" : {\n \"Link calciatore\" : {\n \"$eq\" : \"https://fbref.com/en/players/ohHqST5T/Vea-Pete\"\n }\n },\n \"direction\" : \"forward\"\n },\n \"rejectedPlans\" : [ ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 1,\n \"executionTimeMillis\" : 2,\n \"totalKeysExamined\" : 0,\n \"totalDocsExamined\" : 500,\n \"executionStages\" : {\n \"stage\" : \"COLLSCAN\",\n \"filter\" : {\n \"Link calciatore\" : {\n \"$eq\" : \"https://fbref.com/en/players/ohHqST5T/Vea-Pete\"\n }\n },\n \"nReturned\" : 1,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 502,\n \"advanced\" : 1,\n \"needTime\" : 500,\n \"needYield\" : 0,\n \"saveState\" : 3,\n \"restoreState\" : 3,\n \"isEOF\" : 1,\n \"direction\" : \"forward\",\n 
\"docsExamined\" : 500\n }\n },\n \"serverInfo\" : {\n \"host\" : \"LAPTOP-UKM9G3CG\",\n \"port\" : 27017,\n \"version\" : \"4.2.3\",\n \"gitVersion\" : \"6874650b362138df74be53d366bbefc321ea32d4\"\n },\n \"ok\" : 1\n {\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"FootballStats_2.Calciatori\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"Link calciatore\" : {\n \"$eq\" : \"https://fbref.com/en/players/ohHqST5T/Vea-Pete\"\n }\n },\n \"winningPlan\" : {\n \"stage\" : \"COLLSCAN\",\n \"filter\" : {\n \"Link calciatore\" : {\n \"$eq\" : \"https://fbref.com/en/players/ohHqST5T/Vea-Pete\"\n }\n },\n \"direction\" : \"forward\"\n },\n \"rejectedPlans\" : [ ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 1,\n \"executionTimeMillis\" : 0,\n \"totalKeysExamined\" : 0,\n \"totalDocsExamined\" : 500,\n \"executionStages\" : {\n \"stage\" : \"COLLSCAN\",\n \"filter\" : {\n \"Link calciatore\" : {\n \"$eq\" : \"https://fbref.com/en/players/ohHqST5T/Vea-Pete\"\n }\n },\n \"nReturned\" : 1,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 502,\n \"advanced\" : 1,\n \"needTime\" : 500,\n \"needYield\" : 0,\n \"saveState\" : 3,\n \"restoreState\" : 3,\n \"isEOF\" : 1,\n \"direction\" : \"forward\",\n \"docsExamined\" : 500\n }\n },\n \"serverInfo\" : {\n \"host\" : \"LAPTOP-UKM9G3CG\",\n \"port\" : 27017,\n \"version\" : \"4.2.3\",\n \"gitVersion\" : \"6874650b362138df74be53d366bbefc321ea32d4\"\n },\n \"ok\" : 1\n",
"text": "Stats of Structure 1:}Stats of Structure 2:}Query tested for both:**Execution time of 500 finds: **\nStructure 1: 418.10612000000003 ms\nStructure 2: 411.49282000000005 msExplain Structure 1:}Explain Structure 2:}",
"username": "Andrea_Langone"
},
{
"code": "",
"text": "Hi @Andrea_Langone,The difference is 8ms for a full collection scan to return 1 doc.This could be from verious reason which can be hard to find.One of them is that on disk collection 1 take more space (by a small amount) since you don’t use and index, which you must, scan 2 can be potentially a bit faster.I wouldn’t investigate this difference any further and suggest to compare indexed executions.Best\nPavel",
"username": "Pavel_Duchovny"
},
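Editor's note: a minimal shell sketch, not part of the original thread, of the indexed comparison Pavel suggests. The collection and field names are taken from the explain output above; everything else is illustrative.

```js
// Create an index on the equality field used by the query
db.Calciatori.createIndex({ "Link calciatore": 1 })

// Re-run the same query with explain on both clusters
db.Calciatori.find(
  { "Link calciatore": "https://fbref.com/en/players/ohHqST5T/Vea-Pete" }
).explain("executionStats")
// With the index in place the winning plan should be an IXSCAN and
// totalDocsExamined should drop from 500 to 1, making the comparison meaningful.
```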
{
"code": "",
"text": "I had a similar question raised. Please check if you can help here as well: MongoDB: Queries running twice slow on NEW server compared to OLD server.",
"username": "Temp_ORary"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why is one database faster than the other? | 2020-10-21T10:10:30.447Z | Why is one database faster than the other? | 2,090 |
null | [
"atlas-triggers"
] | [
{
"code": "{\"updateDescription.updatedFields\":{\"participation.2021.0.status\":\"accepted\"}}{\"updateDescription.updatedFields\":{\"participation.2021.0.status\":{\"$exists\":true}}}{\"updateDescription.updatedFields\":{\"participation.2021.0.status\":{\"$eq\":\"accepted\"}}}",
"text": "I’m also looking for some guidance on limiting a trigger based on the update belonging to a specific field of a subdocument. In my case the Match Expression works if I’m matching on a specific value but fails with more complex operators.Works:\n{\"updateDescription.updatedFields\":{\"participation.2021.0.status\":\"accepted\"}}Does Not Work:\n{\"updateDescription.updatedFields\":{\"participation.2021.0.status\":{\"$exists\":true}}}Also Does Not Work:\n{\"updateDescription.updatedFields\":{\"participation.2021.0.status\":{\"$eq\":\"accepted\"}}}It would be great to get some more detailed information on how Trigger Match Expressions are evaluated and what sort of limitations / workarounds there are.",
"username": "r_schaufelberger"
},
{
"code": "{\n \"$expr\": {\n \"$not\": {\n \"$cmp\": [{\n \"$let\": {\n \"vars\": { \"updated\": { \"$objectToArray\": \"$updateDescription.updatedFields\" } },\n \"in\": { \"$arrayElemAt\": [ \"$updated.k\", 0 ] }\n }},\n \"example.subdocument.nested_field\"\n ]\n }\n }\n }\nexample.subdocument.nested_field",
"text": "Update with solution, thanks to some help from MongoDB Support.Because of the vagaries of how the change stream is formatted, it’s non-trivial to get a trigger to fire on any change to a particular nested field. This general structure, however, worked for me in the trigger’s match expression:(just replace example.subdocument.nested_field with the proper dot-notation path to your field)",
"username": "r_schaufelberger"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Limiting a trigger based on the update belonging to a specific field of a subdocument | 2020-10-14T19:17:06.212Z | Limiting a trigger based on the update belonging to a specific field of a subdocument | 2,350 |
null | [
"queries",
"mongoose-odm"
] | [
{
"code": "",
"text": "Hello i’m update mongoDb 4.2 but still showing issue “No array filter found for identifier ‘ele’ in path ‘membersArray.$.challenge_video.$[ele].status’”, Or please share document how we can update array filter functionality",
"username": "Ravi_kumrawat"
},
{
"code": "",
"text": "Update mongoose client now working perfectly",
"username": "Ravi_kumrawat"
},
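For readers hitting the same error: a minimal, hypothetical sketch of how the identifier used in the update path must be declared in arrayFilters. The collection, filter and field names are illustrative, not from the original post.

```js
db.members.updateOne(
  { _id: memberId },  // hypothetical filter
  { $set: { "challenge_video.$[ele].status": "accepted" } },
  // Every identifier used as $[ele] in the update path must be declared here,
  // otherwise the server raises "No array filter found for identifier 'ele'".
  { arrayFilters: [ { "ele.status": "pending" } ] }
)
```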
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | mongoDb 4.2 showing issue “No array filter found for identifier ‘ele’ in path" | 2020-10-23T10:51:07.123Z | mongoDb 4.2 showing issue “No array filter found for identifier ‘ele’ in path” | 4,780 |
null | [
"atlas-functions",
"app-services-user-auth"
] | [
{
"code": "user.customDatacontext.user.identities[0].iduser.refreshCustomData()user.customDatauser.datacontext.user.identities[0]context.user.identities[0].id",
"text": "Like the user linked below, I need the ability to do user administration client-side.Administrate Realm users on clientWhile Custom Function Authentication opens a door for that, I found it difficult in practice to get everything working in a streamlined way. I’m outlining my process here in the hopes 1) that it might be helpful to others dealing with these issues and 2) that someone can potentially offer more appropriate solutions for some of my problems because many of my solutions feel like hacks.Overall, it feels like everything surrounding Custom Function Authentication is a little half-baked, though hopefully I’m just missing something.Problem: Realm provides no client-side tools for user management.Solution: Create your own user collection, then use a Custom Function Authentication. But…Problem: Managing custom users (writing rules, dealing with permissions, etc.) is difficult because the relationship between Realm Users and custom users isn’t very robust.Solution: Enable Custom User Data and point it to your user collection so that all of your user data is in user.customData. But…Problem: The User ID Field that Realm looks at in the Custom User Data collection doesn’t exist initially. Further, Authentication Triggers do not seem to support Custom Function Authentication, making it difficult to create a relationship between a Realm User and your Custom User Data collection.Solution: Create a function that uses context.user.identities[0].id to find the appropriate entry in your Custom User Data collection and populate it with the Realm User ID. Then call this function client-side after every login, followed by user.refreshCustomData(). Now user.customData will work for server-side rule authoring and for client-side tasks. This is the most frustrating hack by far.Problem: There seems to be no way to fail a Custom Function Authentication gracefully. Either you return a proper value for a new/existing user, or it just fails, so you can’t return any useful info about why the attempt was invalid.Solution: Call the authentication function from a webhook first to find any potential issues, then use Custom Function Authentication only if there are no problems.user.data is empty for Custom Function Authentication users. You can still look in context.user.identities[0], but it could be annoying if you have multiple identities.The Realm Users page doesn’t show any useful info for Custom Function Authentication users. At a minimum it would ideally show the internal id used to create it (context.user.identities[0].id).Deleting a user from the custom collection leaves a Realm user behind. Presumably I could fix this by building a clean-up tool with the Admin API.That’s where I am so far and things are more or less working, though I am very open to feedback or alternative solutions.",
"username": "Scott_Garner"
},
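A rough sketch of the linking function described in point 3 above, to be called after login and before refreshCustomData(). Assumptions: the linked cluster is named "mongodb-atlas" (the common default), and the database, collection and field names are illustrative only.

```js
exports = async function() {
  const users = context.services
    .get("mongodb-atlas")            // name of the linked cluster (assumed)
    .db("myapp")                     // illustrative database name
    .collection("users");            // the custom user data collection

  // id returned by the custom-function auth provider for this identity
  const providerId = context.user.identities[0].id;

  // Store the Realm user id so the Custom User Data "User ID Field" resolves
  return users.updateOne(
    { providerId: providerId },
    { $set: { realmUserId: context.user.id } }
  );
};
```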
{
"code": "// Get Atlas Parameters and application id\n const AtlasPrivateKey = context.values.get(\"AtlasPrivateKey\");\n const AtlasPublicKey = context.values.get(\"AtlasPublicKey\");\n const AtlasGroupId = context.values.get(\"AtlasGroupId\");\n const appId = '<APP-ID>';\n \n \n // Authenticate to Realm API\n const respone_cloud_auth = await context.http.post({\n url : \"https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login\",\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"]},\n body : {\"username\": AtlasPublicKey, \"apiKey\": AtlasPrivateKey},\n encodeBodyAsJSON: true\n \n });\n \n const cloud_auth_body = JSON.parse(respone_cloud_auth.body.text());\n \n // Get the internal appId\n const respone_realm_apps = await context.http.get({\n url : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps`,\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`]\n }\n \n });\n \n const realm_apps = JSON.parse(respone_realm_apps.body.text());\n \n \n var internalAppId = \"\";\n \n realm_apps.map(function(app){ \n if (app.client_app_id == appId)\n {\n internalAppId = app._id;\n }\n });\n \n \n // Get all realm users \n const respone_realm_users = await context.http.post({\n url : `https://realm.mongodb.com/api/admin/v3.0/groups/${AtlasGroupId}/apps/${internalAppId}/users`,\n headers : { \"Content-Type\" : [\"application/json\"],\n \"Accept\" : [\"application/json\"],\n \"Authorization\" : [`Bearer ${cloud_auth_body.access_token}`],\n body : {\n \"email\": \"string\",\n \"password\": \"string\"\n },\n encodeBodyAsJSON: true\n }\n \n });\n \n \n const realm_users = JSON.parse(respone_realm_users.body.text());\n",
"text": "Hi @Scott_Garner,Thank you for sharing your insights. I think this kind of posts can become a useful blog post for our users.Since I was working for a long time with MongoDB Realm (from its initial Stitch days) I can understand how there is no single perfect auth provider which on one hand will offer an easy robust authentication API and on the other hand cover all use cases such as full administration capabilities.I find what you have posted for the Custom Function Authentication very interesting and I need to read those points in depth to understand them.However, I wanted to offer some thoughts and progress I made with Emaill/Password administration and the Admin API from Realm functions/webhooks without exposing the Admin API keys/tokens.The following code can facilitate an access to the API by using Secrets from the Application, this code can be placed in an “admin” webhook with service rules allowing only admins to run it:Additionally, the Email/Password can have a confirmation function. A user can register and be pending until the admin which can be notified via an email triggered by a confirmation function, approves him (can be done in an email having an HTML link/button to run the confirmation flow).Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "A post was split to a new topic: How to handle errors on Custom Function Authentication?",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Custom Function Authentication Problems and Solutions | 2020-09-08T01:16:36.613Z | Custom Function Authentication Problems and Solutions | 4,760 |
null | [
"queries",
"security"
] | [
{
"code": "DatabaseException['user is not allowed to do action [find] on [databaseyyy.products]' (code = 8000)]",
"text": "Hi all,I wanted to share a strange (and worrying) issue with Atlas (tier 0) I am experiencing today .Without having modified anything, not deployed any new version, my server hosted on a GKE cloud is gettingDatabaseException['user is not allowed to do action [find] on [databaseyyy.products]' (code = 8000)]The same SCRAM user can connect and read from MongoDB Compass.Cannot understand what is changed overnight… if you have any suggestion is more than welcome.Thank you,Mario",
"username": "Mario_Callisto"
},
{
"code": "",
"text": "Please check if you are connected to the correct DB in your connect string",
"username": "Ramachandra_Tummala"
}
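A quick, illustrative diagnostic (not from the thread) to verify what the authenticated user is actually allowed to do; run it from the shell or Compass with the same credentials the application uses.

```js
// Shows the authenticated users/roles and, with showPrivileges, the exact actions granted
db.runCommand({ connectionStatus: 1, showPrivileges: true })
// Check that a role granting "find" exists on the "databaseyyy" database,
// and that the connection string points at the intended database / authSource.
```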
] | MongoDB Atlas tier0 - user is not allowed to do action [find] | 2020-10-31T09:45:17.060Z | MongoDB Atlas tier0 - user is not allowed to do action [find] | 5,130
null | [
"queries"
] | [
{
"code": "db.getCollection('test').find(\n { \"array.0\" : {$in : [\"abcd\", \"cdfg\"], \n \"array.1\" : {$in : [\"abcd\",\"cdfg\"] \n }\n )\n",
"text": "How to rewrite the following query without using array.position to return documents if $in values for each array element in the array match the given values. Please note values given are same for all array position element. This query does not pick up index on array so array.0+array.1 index is needed and if array has 10 element its not possible to write query as follows and also create index each element . Instead query should be rewritten in such a way that it return the same result as the following query and use index on array (multi key index).",
"username": "Chiku"
},
{
"code": "db.getCollection(\"test\").find( { \"array\" : { $in : [\"abcd\", \"cdfg\"] } } )\ndb.getCollection(\"test\").find(\n{ \"array.0\" : {$in : [\"abcd\", \"cdfg\"] },\n\"array.1\" : {$in : [\"abcd\",\"cdfg\"] }\n}\n)\n",
"text": "I am not sure I really understand your issue but the following might work.By the way you are missing a couple of closing parenthesis original query. It should beFinally, it is best to enclose code with triple back ticks as the formatting is better and the quotes are not replaced by the fancy html matching quotes.",
"username": "steevej"
},
{
"code": "",
"text": "please ignore type issue …",
"username": "Chiku"
},
{
"code": "{\n code: \"0001\",\n array:[ \"1\",\"2\"]\n}\n\n{\n code: \"0002\",\n array: [\"3\",\"2\"]\n}\n\n{\n code: \"0003\",\n array: [\"1\",\"2\"]\n}\n",
"text": "Thanks for the reply.\nthanks for the reply.Thanks for the reply. The query you have suggested will not give the same result. It will return documents where array has any of the $in values provided but what I am looking for is return the documents only if all the elements of the array has same values. for exampleFor example you query will return all 3 documents if array: {$in : [1,2,]} but what i need is two documents where array.0 = [1,2] and array.1 = [1,2] . Please ignore syntax/{ etc.",
"username": "Chiku"
},
{
"code": "{ \"array\" : { $all : [\"abcd\", \"cdfg\"] } } )\n",
"text": "Hi @Chiku,You will probably need to use an $all expression:If you wish go get only array with only those elements you should use and aggregation and match only on documents with $size of array equals to true.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{\n \"_id\" : ObjectId(\"5f9d53673fbd2c5684bb67b4\"),\n \"code\" : 1.0,\n \"array\" : [ \n 1.0, \n 2.0\n ]\n}\n\n/* 2 */\n{\n \"_id\" : ObjectId(\"5f9d537e3fbd2c5684bb67b5\"),\n \"code\" : 2.0,\n \"array\" : [ \n 3.0, \n 2.0\n ]\n}\n\n/* 3 */\n{\n \"_id\" : ObjectId(\"5f9d53973fbd2c5684bb67b6\"),\n \"code\" : 3.0,\n \"array\" : [ \n 1.0, \n 2.0\n ]\n}\n\n/* 4 */\n{\n \"_id\" : ObjectId(\"5f9d54423fbd2c5684bb67b7\"),\n \"code\" : 4.0,\n \"array\" : [ \n 3.0, \n 3.0\n ]\n}\ndb.getCollection('test').find({\"array.0\":{$in :[1,2]},\n \"array.1\": {$in:[1,2]}\n } \n )\ndb.getCollection('test').find({\"array\":{$all :[1,2]}\n } \n )\ndb.getCollection('test').find({\"array.0\":{$in :[1,2,3]},\n \"array.1\": {$in:[1,2,3]}\n } \n )\n \ndb.getCollection('test').find({\"array\":{$all :[1,2,3]}\n } \n ) \n",
"text": "$all check if all the values are same in the given array i.e. AND function which is not same as checking for $in which is OR function. To explain the issue I am attaching the sample documents and the original query which works but has limitation i.e. it need additional index on each array element and if number of elements increased (at this time two only) index size will increase and query will be complicated . Original query do not use index in array.Documents:-Now if you run following original query for two values and $all it will show you same result but if you check of 3 values it will not . I guess you have tested the solution with two values .This will work i.e. $in and $all:------This will not work if i.e. $in and $all for 3 values will give different result:-",
"username": "Chiku"
}
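For completeness, a hedged sketch (using the sample documents above) of one way to express "every element of the array is in the given set" without enumerating positions; whether it can use the multikey index efficiently should be verified with explain().

```js
// Matches documents where no element of "array" falls outside the allowed set.
// Note: documents with an empty or missing "array" would also match $not/$elemMatch,
// hence the $exists and $ne guards.
db.test.find({
  array: {
    $exists: true,
    $ne: [],
    $not: { $elemMatch: { $nin: [1, 2, 3] } }
  }
})
```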
] | How to Check each element of array has same values | 2020-10-30T16:23:29.656Z | How to Check each element of array has same values | 10,184 |
null | [] | [
{
"code": "",
"text": " Welcome to the MongoDB Developer Community Forums!We’re very excited to help you join, share, and contribute to the MongoDB community. Here are a few FAQs to get you started with the forum.Anything related to learning, using, developing, or otherwise working with MongoDB products, services, and ecosystem. Each category on the site has a general description of the expected discussion topics. If you’re unsure where to post, choose the most likely category.Some common starting points are:As a last resort, there is also Other MongoDB Topics for general MongoDB discussion.Additionally, there are some categories to help you meet and network with other community members, such as:No, these are forums for public community discussion and collaboration. MongoDB team members are active on the forum, but anyone in the community is encouraged to share their experience and advice. There are no SLAs or guarantees of response, and some patience may be needed for questions requiring more specific expertise or asked over weekends or holiday periods.If you have an urgent production issue or are looking for 24x7 SLA-based support, you may want to consider MongoDB Support.Some general suggestions that may help with getting responses:You are likely to get a faster response (or have fewer rounds of back and forth clarification) if you can include more details that might help someone else reproduce or understand your environment: software versions used, error messages, or steps to reproduce. Sample documents, expected output, and attempted queries would be helpful for query or aggregation questions.If a discussion isn’t getting enough responses, adding an additional post with extra information or an update on your progress is a good way to “bump” that topic for attention.If you have a billing or service issue with MongoDB Atlas, please contact the Atlas support team. Even if you do not have a Support Subscription, you should always have access to the free Basic support plan which includes in-app chat support for operational questions specific to your account.Indeed! If you haven’t already, be sure to respond to @leafiebot in your message inbox for a quick walk-through of basic UI elements in this site. Complete the tutorial to earn your first badge! Yes. All community members are expected to follow our MongoDB Code of Conduct to make this a welcoming, friendly community for all.If you see posts that may need assistance from the moderation team (for example, for formatting or content issues), please flag posts for moderator review or assistance.Check out more of the guides and tips in the Getting Started category, including:If you have feedback about your experiences with the site or our community, please post in the Site Feedback category or send us a direct message at [email protected] you are having any trouble with the site or your account, please send us a message at [email protected].",
"username": "system"
},
{
"code": "",
"text": "A post was split to a new topic: Filling Out Your Bio",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Managing and subscribing to notifications",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Trust levels and forum privileges",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Badges and recognition",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: How to Write a Good Post/Question",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Groups and private categories",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Data privacy",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Likes for helpful topics and posts",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Flagging posts for moderation assistance",
"username": "Stennie_X"
},
{
"code": "",
"text": "A post was split to a new topic: Mentioning other users",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "wan"
}
] | Getting Started with the MongoDB Community: README.1ST | 2020-01-21T23:43:21.160Z | Getting Started with the MongoDB Community: README.1ST | 26,838 |
null | [
"installation"
] | [
{
"code": "",
"text": "Hi everyone,First time installer and user of the MongoDB here. I’m currently in a web development course where we’re learning about databases. I’ve installed the MongoDB through Ubuntu, but whenever I try to run mongod I keep getting a \" Aborting after fassert() failure\". I already tried to find a solution through JIRA (https://jira.mongodb.org/browse/SERVER-51860), but Mr. Edwin Zhou could not help me further and directed me to the community.I hope you can find out what I did wrong / what’s going wrong on my system. Thanks in advance!Dennis",
"username": "Dennis_Pierins"
},
{
"code": "",
"text": "Have you completed all the steps of instructions.txt like dirpath creation and change permissions?\nMake sure your dbpath is not on a shared directory\nMake sure no other mongod running on same port\nIt says wt files exist.Did you try to empty the dir and try again\nWhen you run mongod without any parmeters it tries to bring mongod on default port 27017 and default path /data/dbFor sake of testing try to bring it up on a different dbpath say your home dir where yu can read/write\nmongod --port 28000 --dbpath full_path_of_your_homedir",
"username": "Ramachandra_Tummala"
}
] | Aborting after fassert() failure | 2020-10-30T09:44:33.701Z | Aborting after fassert() failure | 3,277 |
null | [
"indexes",
"performance"
] | [
{
"code": "",
"text": "Hey guys, I have a performance issues while trying to find and sort a big amount of data (more than 3m of records, but limit is 10k).\nSo I have the following request:db.getCollection(“events”).find({\n“field1”: {\"$in\": [“value1”, null]},\n“field2”: “value2”,\n“field3”: {\n“$in”: [“type1”, “type2”, “type3”]\n}\n}).sort({“timestamp”: -1}).explain(“executionStats”)I have a single field index of {“timestamp”: -1} and executionTimeMillis is around 11kI tried to create compound index of {“field1”: 1, “field2”: 1, “field3”: 1}, but as I can see this index is not used cos of sort, so I tried to create another one as {“field1”: 1, “field2”: 1, “field3”: 1, “timestamp”: -1}, but it gives me executionTimeMillis around 8k and it’s a bit better (SORT_MERGE is used), but still it’s 3k without sort.Any tricks on how to use sortable field in compound index to improve performance here?",
"username": "Mykyta_Bezverkhyi"
},
{
"code": "{ field2 : 1, timestamp : -1, field1: 1, field3: 1}",
"text": "Hi @Mykyta_Bezverkhyi,Welcome to MongoDB community!The trick with indexing is the order of the fields should follow one thumb rule and it is Equality , Sort , Range.The $in operator is actually considered range so I would try the following index:\n{ field2 : 1, timestamp : -1, field1: 1, field3: 1}Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.If this does not help please provide execution Stats plan.Best\nPavel",
"username": "Pavel_Duchovny"
},
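A small, hypothetical shell session illustrating the suggestion above; the collection name is taken from the original query, the values are illustrative.

```js
// Index following the Equality, Sort, Range ordering suggested above
db.events.createIndex({ field2: 1, timestamp: -1, field1: 1, field3: 1 })

// Re-run the original query and compare the plans
db.events.find({
  field1: { $in: ["value1", null] },
  field2: "value2",
  field3: { $in: ["type1", "type2", "type3"] }
}).sort({ timestamp: -1 }).explain("executionStats")
// Compare totalKeysExamined and executionTimeMillis across the candidate indexes.
```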
{
"code": "",
"text": "Thanks for your answer @Pavel_Duchovny, it answers some of my questions, but I already tried some configurations, and I got the following results:Documents Returned:9420\nIndex Keys Examined:1610000\nDocuments Examined:9420\nActual Query Execution Time (ms):10664\nSorted in Memory:no\nQuery used the following index:\nfield2_1_timestamp_-1_field1_1_field3_1But I managed to improve performance by the following:Documents Returned:9420\nIndex Keys Examined:158261\nDocuments Examined:9420\nActual Query Execution Time (ms):603\nSorted in Memory:no\nQuery used the following index:\nfield1_1_field2_1_timestamp_-1_field3_1Any explanations on why it could be faster? For some reasons I got better results having field1 as the first in my index.",
"username": "Mykyta_Bezverkhyi"
},
{
"code": "",
"text": "@Mykyta_Bezverkhyi, I assume that field1 which has 1 value (nulls are not indexed) can be used as equility by the engine.Therefore fields 1,2 should be.before the sort…Thanks",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB find with sort performance | 2020-10-30T16:23:33.497Z | MongoDB find with sort performance | 8,759 |
null | [
"backup"
] | [
{
"code": "",
"text": "We are currently looking for a solution to secure our Atlas backups.Q1: Is there a plan from MongoDB to provide something similar to AWS Glacier Vault Lock [1] or even a grace period before backups are deleted once and for all?It would be amazing to protect the Atlas backups from being deleted.\nCurrently, if one of our Atlas admins was compromised, the damage for the company would be enormously high. So we need to implement measures against the final deletion of our most mission critical data.Q2: Are there currently any features in Atlas to delay or prevent the deletion of backups by admins?Best,\nMartin[1] Amazon Glacier Introduces Vault Lock | AWS Security Blog",
"username": "MartinLoeper"
},
{
"code": "",
"text": "Hi Martin… Not today but we are working on some things for the future…stay tuned.",
"username": "bencefalo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is there a vault lock for Atlas backups? | 2020-10-30T01:09:14.535Z | Is there a vault lock for Atlas backups? | 2,389 |
null | [
"data-modeling"
] | [
{
"code": " job.schema = {\n title: \"web developer\",\n description: 'lots of words here\",\n company: \"\",\n contact: [\"contact details\"],\n other-stuff: ....\n }\n jobs.schema = {\n jobs: [{\n type: mongoose.Schema.Types.ObjectId,\n ref: \"job\"\n }, \n {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"job\"\n }]\n }\n",
"text": "What I’m trying to conceive of is how should I approach this problem of having a marketplace e.g. where jobs are and can be searched for and then each job.I feel like I would want a model for jobs and that model would be made up an array of references to every job within it’s collection. How while I feel that’s the structure when I say it out loud it seems to not make sense. A single monolithic collection with all the references to each ‘job’ collection seems incorrect.Just a little detail, just to help give some visuals for those who wish it.\n// using a mongoose-esqe seudo code here// ‘Jobs/marketplace’ of roles.Now this seems a little crazy to me, but I’m not really sure how to change it for the better. I’d really appreciate it. I’m doing this for a small side hustle project just to learn more about Mongo, so please don’t feel that any feedback needs to represent a high end finished project just a nice push towards something more practical that I can iterate on over time. I just don’t want to totally break it with something so terribly conceived that I’d have to completely rethink it as my knowledge on the topic matures.Your kind assistance is greatly appreciated. Thank you!!",
"username": "misterhtmlcss"
},
{
"code": "job.schema = {\n title: \"web developer\",\n description: 'lots of words here\",\n company: \"\",\n contact: [\"contact details\"],\n marketPlace : [ \"market1\",\"market2\"]\n other-stuff: ....\n }\n",
"text": "Hi @misterhtmlcss ,Welcome to MongoDB community!I really recommend to any person that starts to design with MongoDB to go over the next two articles:\nhttps://www.mongodb.com/article/schema-design-anti-pattern-summaryA summary of all the patterns we've looked at in this seriesNow regarding your specific schema questions, I think the following section is very related, which is how do I embedded relationships correctly to not overload the document and heavy lifting queries.I would say that it might make more sense to reference each job to its market place. This way you can index an array of the marketplaces the jobs belong to. It can be just an id or maybe if names are unique a market name (it can be an array of small objects as well):This way when you would like to fetch all jobs for a specific market you will query an indexed jobs collection for the relevant jobs.Of course you can still have a collection for market description and details related to the market or which are the current active markets.Let me know if that makes sense.Best\nPavel",
"username": "Pavel_Duchovny"
},
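A minimal sketch (collection and field names assumed from the schema above, not confirmed by the thread) of how the marketplace array would be indexed and queried:

```js
// Multikey index over the array of marketplaces each job belongs to
db.jobs.createIndex({ marketPlace: 1 })

// All jobs listed in a given marketplace
db.jobs.find({ marketPlace: "market1" })
```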
{
"code": "",
"text": "Hi Pavel,Sorry for the slow reply, I don’t work full-time on this side hustle and also I wanted to take some time to actually review your material, the links and think too.Firstly thank you. It is really helpful and actually gave me some ideas about other things I was thinking about too for my work. Awesome. I’m grateful.Also I really do struggle with how to solve different problems without dipping into potential anti-patterns of which the blog article you linked to is really helpful.This I believe solves my issues. I’ll bounce back if I have something more to ask around this issue such as a specific problem that sprouts from this information.Cheers\nRoger",
"username": "misterhtmlcss"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Jobs (like a market place) and job (the role itself) | 2020-10-26T20:36:12.077Z | Jobs (like a market place) and job (the role itself) | 1,900 |
null | [
"atlas-search"
] | [
{
"code": "{\n \"collectionName\": \"projects\",\n \"database\": \"'\"$DB_NAME\"'\",\n \"mappings\":{\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.english\"\n },",
"text": "I come to report an issue we experienced with Atlas FTS search indexes when using mongoDB 4.4. We upgraded our M10 cluster from 4.2 to 4.4, and since then none of our search queries worked anymore.After investigating it, we found that :nested fields don’t workpartial searches don’t work (e.g. “covid” doesn’t match “My covid study project”Only first level (not nested) exact matches are returnedOur FTS indexes were set as “dynamic: false” (see below). Once we switched to “dynamic: true”, everything worked again. In case it matters, we also switched from custom index names to “default” in order to avoid specifying the index on each query.Here is our former FTS index on our “projects” collection:\n",
"username": "Antoine_Cordelois"
},
{
"code": "documentdynamic: true{ \"mappings\":{\n \"dynamic\": false,\n \"fields\": {\n \"contact\": {\n \"type\": \"document\",\n \"fields\" : {\n \"lastname\" : {\n \"type\" : \"string\",\n \"analyzer\" : \"lucene.standard\"\n }\n }\n }}}}\n",
"text": "You need to use the document type for nested documents. Alternatively, using dynamic: true will also find those fields. For example:I am not sure if your partial searches is an artifact of the same issue. If not, can you provide the query you used?",
"username": "Doug_Tarr"
},
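An illustrative query against the "document"-type mapping shown above; the index name "default" is assumed, the search term is made up, and the field path mirrors the example mapping.

```js
db.collection.aggregate([
  {
    $search: {
      text: {
        query: "smith",              // illustrative search term
        path: "contact.lastname"     // nested field declared in the index mapping
      }
    }
  }
])
```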
{
"code": "$search: {\n text: {\n query: ['test'],\n path: ['name']\n }\n }",
"text": "Thank you very much for your answer. I’ll check if it is a side effect of not using document type.The query failign partial matches has been tested in its simplest form, with an aggregation stage like (as I recall):",
"username": "Antoine_Cordelois"
}
] | Updating an Atlas cluster to 4.4 with non dynamic index breaks Full text search | 2020-10-29T14:36:49.359Z | Updating an Atlas cluster to 4.4 with non dynamic index breaks Full text search | 2,118 |
null | [
"backup"
] | [
{
"code": "",
"text": "HiHow to check mongodb history backup, when was last backup taken for dataabasethanks",
"username": "kumaran_rajendran"
},
{
"code": "",
"text": "Hi @kumaran_rajendran,Welcome to MongoDB community!MongoDB servers does not back themselves on their own they can just replicate between each other. To backup your database you need to use one of the following methods, each method has its own ways to view backup information:Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "thanks pavel for ersponse,no my question is if we used mongodump or any other method to backup the database, can i list out the list of history when the last backup was performed against mongodb databasethanks",
"username": "kumaran_rajendran"
},
{
"code": "",
"text": "Hi @kumaran_rajendranThere is no record of this stored in mongodb. So no you cannot list out a history of backups.",
"username": "chris"
},
{
"code": "",
"text": "Hi @kumaran_rajendran,You can though add a script to your post backup process to update the time kofthe last backup as a stored document.Best\nPavel",
"username": "Pavel_Duchovny"
}
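A hedged sketch of the suggestion above: have the script that runs mongodump record a document after each successful backup. The database and collection names here are illustrative only.

```js
// Run by the backup script once mongodump completes successfully
db.getSiblingDB("admin").backup_history.insertOne({
  tool: "mongodump",
  completedAt: new Date()
})

// When was the last backup taken?
db.getSiblingDB("admin").backup_history.find().sort({ completedAt: -1 }).limit(1)
```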
] | MongoDB History Backup information | 2020-10-30T01:10:00.638Z | MongoDB History Backup information | 2,455 |
null | [
"field-encryption"
] | [
{
"code": "",
"text": "This Topic is a continuation of previous topic that i created. I browsed through the materials that was recommended in the previous topic and those were very helpful and i have few questions on the same.Here is my understanding\nThe keys that will be used to encrypt/decrypt the fields in collection will be stored in Mongodb keystore (collection). These keys will be encrypted/decrypted using CMK and CMK can be maintained/stored in any of the external KMS service for example (AWS-KMS).Here are my questionsQuestion1\nLets assume that\n- The CMK that is stored in AWS-KMS is rotated after 6months(but the keys present in mongodb keystore are not rotated) then this means there will be a newCMK key. But the keys present in mongodb keystore is encrypted using the oldCMK key.\n- In this case, will there be any issue if this newCMK key is used to decrypt the keys from mongodb keystore as the keys present in mongodb keystore were encrypted using the oldCMK key?Question2\nLets assume that\n- The keys present in mongodb keystore are rotated\n- After rotation of keys in mongodb keystore, the CMK fetched from AWS-KMS is used to encrypt the rotatedKeys and these newKeys are stored into mongodb keystore.\n- In this case, will the mongodb driver still be able to decrypt the fields that were encrypted by oldKeys?As i don’t have clarity on crypto related topics, i would like to have some clarity on below question as wellQuestion3\n\"if a particular data is encrypted using a key and if this key is rotated (this will give rise to newKey), then will this newKey would still be able to decrypt the data which was encrypted using the oldKey?I feel like all the above questions are similar, but i’m not sure about it. I’m looking forward for the response.Thank you.",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "Hi Divine.I think I can help. Taking your questions in order:Master key rotation using a purpose-generated Amazon KMS master key (“CMK”) rotation: In Amazon (and GCP & Azure) parlance, key service “rotations” are really just versioning techniques. Because many use cases (such as FLE data in MongoDB) are for long term storage with potentially massive numbers of records and previously encrypted data must still continue to be accessible, a CMK rotation will create a new wrapping envelope on any newly created data (field) keys, but all previous “old” encrypted payloads carry enough meta information so they can still be decrypted as well. In this sense, rotation is meant to signifiy that a given key is only used for encryption for a specific period of time. You can read more details here: Rotating AWS KMS keys - AWS Key Management Service, but the take-away is, no, automatic rotation with AWS KMS CMKs do not pose any issue for FLE. They are essentially an opaque implementation detail and from the view of our drivers, old and new field keys can still be decrypted through API calls using the same IAM account privileges as always.Not sure I entirely follow what you’re asking, but the actual (raw/plaintext) data encryption keys (field keys) themselves are not rotated, only the encryption key that protects them on initial generation. You can think of this as similar to what happens on a laptop with disk encryption - when you change your password, a key derivation algorithm is used to create a new wrapping for the disk encryption key, but the disk data encryption key itself does not change; if it did, your whole drive would have to be re-encrypted block by block, and with a sizable volume that could take a considerable amount of time.As mentioned, the field encryption keys themselves are not rotated, but their encrypted versions can be, if desired. One of the advantages of a hardened, mature key service like AWS KMS is that the master key material used to perform the envelope encrypt/decrypt operations never leaves the confines of the service, and depending on the provider may not even leave the backing Hardware Security Module. This is not the case with the local key service (which could be supplied with either an actual local key file or results of a REST call to a remote key/secrets manager like Hashicorp Vault).What may be helpful is to consider real world threat modeling: how likely is it that an attacker would compromise your application server, capturing database credentials and privileged network access and plaintext FLE encryption keys (or the latter and a full snapshot of the production database)? And if that were to occur, it may make sense to rotate your keys, but in many (most?) cases this would be a distant priority from dealing with the fact that your application server were compromised in the first place; responding to a catastrophic breach would take precendent.From a compliance perspective, rotating master keys typically meets the intention of most major regulatory frameworks and industry guidelines.Hope that helps.-Kenn",
"username": "Kenneth_White"
},
{
"code": "",
"text": "@Kenneth_White Not only you have answered my questions but you have constructed the answer in such a way that i would try to respond. Thanks a lot. Have a nice day ",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Doubt in key rotation flow in relation to FLE | 2020-10-30T05:40:05.571Z | Doubt in key rotation flow in relation to FLE | 3,400 |
null | [
"swift"
] | [
{
"code": "SyntaxError: Invalid or unexpected token\n at wrapSafe (internal / modules / cjs / loader.js: 931: 16)\n at Module._compile (internal / modules / cjs / loader.js: 979: 27)\n at Object.Module._extensions..js (internal / modules / cjs / loader.js: 1035: 10)\n at Module.load (internal / modules / cjs / loader.js: 879: 32)\n at Function.Module._load (internal / modules / cjs / loader.js: 724: 14)\n at Function.executeUserEntryPoint [as runMain] (internal / modules / run_main.js: 60: 12)\n at internal / main / run_main_module.js: 17: 47\n",
"text": "Hello,It’s been several days since I tried to migrate the data of my application which is on Realm Cloud to MongoDb Realm by following the tutorial Realm Legacy Migration Guide - Realm Legacy Migration Guide but I am completely blocked.WhenI tried the method described at the end of the tutorial by creating a copyToMongoDb file, I have an error when I run it:I have my application developed in Swift and what I really don’t understand is that I am using the latest version of RealmSwift 10.0 to be able to insert data into MongoDb. However, with RealmSwift 10.0 you cannot connect to Realm Cloud, you have to use an old version for that. How then to get the data on Realm Cloud in order to copy it directly to MongoDb? I have tried downloading the data from Realm Cloud to a local .realm file but I cannot open it using RealmSwift 10.0, is this possible?Thanks very much for your help, I hope my question is clear enough ",
"username": "Arnaud_Combes"
},
{
"code": "",
"text": "I don’t think the intent is to open the Legacy Realm Cloud with Realm Studio 10.Assuming you’re using Partial/Query Sync, if you connect to your existing Realm with Realm Studio 3.11, you can File->Export JSON. Alternatively you can connect to it with with an App, iterate over the objects and ‘convert’ them to the current format and save them locally. You could then upgrade that file with Realm Studio 10.If you select a legacy realm file with v10, here’s what you get.Upgrade1144×320 66 KBOne the file is of the correct type and all of the new properties are in place _id and partitionKey, it’s pretty straightforward to write that data to the MongoDB Realm server.",
"username": "Jay"
}
] | Data migration from Realm Cloud to MongoDb | 2020-10-30T01:08:35.082Z | Data migration from Realm Cloud to MongoDb | 1,554 |
null | [
"compass",
"atlas-data-lake"
] | [
{
"code": "",
"text": "Hello guys, I am having issue of connection timed out error from Compass while trying to connect to S3 bucket. I have checked whitelisted IP’s, AWS Roles, AWS Permissions and followed exact documentation supplied.Please advice.",
"username": "Graeme_Henderson"
},
{
"code": "",
"text": "Hello Graeme, can you send me an email at [email protected] with your project name so I can take a closer look?This issue could be coming from a couple places but should be a relatively quick fix.Best,",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error viewing data from S3 | 2020-10-30T01:04:55.290Z | Error viewing data from S3 | 3,241 |
null | [
"spring-data-odm"
] | [
{
"code": " org.springframework.data.mongodb.UncategorizedMongoDbException: Timeout while receiving message; nested exception is com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message\n at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:107)\n at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2114)\n at org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:464)\n at org.springframework.data.mongodb.core.MongoTemplate.saveDBObject(MongoTemplate.java:1080)\n at org.springframework.data.mongodb.core.MongoTemplate.doSave(MongoTemplate.java:1015)\n at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:961)\n at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:949)\n at in.org.db.access.DBOpMongo.save(DBOpMongo.java:306)\n at in.org.db.access.DBOperation.save(DBOperation.java:186)\n at in.org.gen.objectStore.ObjectStore.saveObj(ObjectStore.java:847)\n at in.org.gen.objectStore.ObjectStore.saveObject(ObjectStore.java:204)\n at in.org.fms.objects.FMObjectAbstract.saveOrUpdate(FMObjectAbstract.java:351)\n at in.org.fms.messaging.Messaging.saveAndSendMsg(Messaging.java:78)\n at in.org.fms.messaging.Messaging.saveAndSendMsg(Messaging.java:71)\n at in.org.fms.process.Impl.FMExpiryProcess$ExpirySubProcess.call(FMExpiryProcess.java:226)\n at in.org.fms.process.Impl.FMExpiryProcess$ExpirySubProcess.call(FMExpiryProcess.java:167)\n at java.util.concurrent.FutureTask.run(FutureTask.java:266)\n at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)\n at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message\n at com.mongodb.internal.connection.InternalStreamConnection.translateReadException(InternalStreamConnection.java:563)\n at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:448)\n at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:299)\n at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259)\n at com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:99)\n at com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:450)\n at com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:72)\n at com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:226)\n at com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:269)\n at com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:131)\n at com.mongodb.operation.MixedBulkWriteOperation.executeCommand(MixedBulkWriteOperation.java:435)\n at com.mongodb.operation.MixedBulkWriteOperation.executeBulkWriteBatch(MixedBulkWriteOperation.java:261)\n at 
com.mongodb.operation.MixedBulkWriteOperation.access$700(MixedBulkWriteOperation.java:72)\n at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:205)\n at com.mongodb.operation.MixedBulkWriteOperation$1.call(MixedBulkWriteOperation.java:196)\n at com.mongodb.operation.OperationHelper.withReleasableConnection(OperationHelper.java:501)\n at com.mongodb.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:196)\n at com.mongodb.operation.BaseWriteOperation.execute(BaseWriteOperation.java:148)\n at com.mongodb.operation.BaseWriteOperation.execute(BaseWriteOperation.java:52)\n at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:213)\n at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:182)\n at com.mongodb.DBCollection.executeWriteOperation(DBCollection.java:356)\n at com.mongodb.DBCollection.replaceOrInsert(DBCollection.java:436)\n at com.mongodb.DBCollection.save(DBCollection.java:425)\n at org.springframework.data.mongodb.core.MongoTemplate$11.doInCollection(MongoTemplate.java:1086)\n at org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:462)\n ... 19 more\nCaused by: java.net.SocketTimeoutException: Read timed out\n at java.net.SocketInputStream.socketRead0(Native Method)\n at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)\n at java.net.SocketInputStream.read(SocketInputStream.java:170)\n at java.net.SocketInputStream.read(SocketInputStream.java:141)\n at com.mongodb.internal.connection.SocketStream.read(SocketStream.java:109)\n at com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:580)\n at com.mongodb.internal.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:445)\n ... 43 more\n",
"text": "Hi,\nI am facing socket timeout exception very frequently in my development environment even no pending running quries in mongodb.\nMongodb and Application server running on different subnet with inbetween firewall.\nApplication not running properly due to this problem. Please help to solve this issue and it is very critical now. Thanks.\nServer Details:\nApp Server IP : 10.10.6.7\nMongodb Server IP : 10.9.7.7\nMongodb Version : 4.2.2Stack Trace :",
"username": "Visva_Ram"
},
{
"code": "",
"text": "Check TCP keepalives on the server.Ensure that the system default TCP keepalive is set correctly. A value of 300 often provides better performance for replica sets and sharded clusters. See: Does TCP keepalive time affect MongoDB Deployments? in the Frequently Asked Questions for more information.ref: https://docs.mongodb.com/manual/administration/production-checklist-operations/\nhttps://docs.mongodb.com/manual/faq/diagnostics/#faq-keepalive",
"username": "chris"
},
{
"code": "",
"text": "Thank you Chris.\nI made all settings and as you mentiontion in the link. But still threads are waiting on socket and finally got timing out. What else I suppose to verify?",
"username": "Visva_Ram"
},
{
"code": "",
"text": "By waiting I suppose you mean idle ?The firewall would be the next thing I would look into. Some will have a maximum socket lifetime. The TCP keepalive would generally keep a connection marked as active.",
"username": "chris"
},
{
"code": "",
"text": "There is no firewall in between the mongodb and application server. There is one switch which spliting the network with vlan id. No other component.",
"username": "Visva_Ram"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Socket timeout exception with no pending queries | 2020-02-22T18:29:31.415Z | Socket timeout exception with no pending queries | 18,517 |
null | [] | [
{
"code": "",
"text": "We are currently running a production workload on one Atlas cluster and a staging workload on another Atlas cluster. We want to synchronize data between those two clusters.Currently, we do a full backup restore periodically every X days. This strategy tends to become harder as the database grows in size over time.Are there standard solutions for mirroring production data onto a staging cluster - possibly continuously / live?Best,\nMartin",
"username": "MartinLoeper"
},
{
"code": "",
"text": "Hi @MartinLoeper,Mirroring data set between MongoDB atlas clusters can potentially be done via the mongomirror tool, however, since it is a standalone process you need to reliably monitor and resume it at your own risk. This tool was designed for live migration purposes therefore was never tested for long duration tests which means we cannot guarantee if it will be sustainable or perffomant solution.https://docs.atlas.mongodb.com/reference/mongomirror/Alternatively, you can setup Atlas triggers linking both clusters to migrate subsets of collections from source to target.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to sync two Atlas clusters | 2020-10-30T01:09:10.189Z | How to sync two Atlas clusters | 5,531 |
null | [
"charts",
"on-premises"
] | [
{
"code": " ✔ existingClientAppIds ([ 'mongodb-charts-qwxzr' ])\n ✔ migrationsExecuted ({})\n ✖ stitchUnconfigured failure { message: 'Error removing all functions:' }\n✖ stitchUnconfigured failure { message:\n 'app \"mongodb-charts-qwxzr\" not found. To reconfigure Charts with a fresh database, delete the mongodb-charts_keys volume and deploy again.' }\n",
"text": "Hey there,\nI have been using mongo charts via the quay.io container for some time now and I love it!\nHowever since a few months and versions I have had massive problems with starting the container.\nI always encounter the following message:Unfortunately this message is not very helpful to me. I don’t know what to do with it…\nWhat I know is that if I restart the container several times (up to 5 to 10 times) it will work at some point…What is the reason? Can I do something?Edit: I have another issue I like to add here. I tried to upgrade from 1.17 to 1.8 and 1.9 and I’m unable to start charts now, because of:Reverting back to 1.17 still works fine…Thanks!",
"username": "Daniel_N_A"
},
{
"code": "",
"text": "Hi @Daniel_N_A -The on-prem installation is a bit quirky, since it involves configuring a local Stitch/Realm server. We see this error during development from time to time too. Usually the easiest thing to do is clean everything up and start again, but if it works on retries then that’s fine too.Note that any version of Charts from the repo higher than 1.9.2 is not supported for on-prem deployment. In our versioning system, 1.17 is a (much) newer version than 1.9. It’s failing since you’re attempting to downgrade which is not possible.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | stitchUnconfigured failure { message: 'Error removing all functions:' } | 2020-10-27T13:34:10.559Z | stitchUnconfigured failure { message: ‘Error removing all functions:’ } | 4,016 |
null | [] | [
{
"code": "",
"text": "Where is the link to the in-browser IDE. Cannot find it.",
"username": "Oscar_Jesus_Labrador_Rubio"
},
{
"code": "",
"text": "Ok, just forget about it. It is in the bottom of the page",
"username": "Oscar_Jesus_Labrador_Rubio"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | Dont know hot to access In-browser IDE | 2020-10-30T01:13:41.387Z | Dont know hot to access In-browser IDE | 1,785 |
[
"queries",
"indexes"
] | [
{
"code": "db.getCollection('directorypage').find(\n\n {\n \"appId\": \"c6979f9b230f\",\"dirPageId\": \"services_1539646679172_5\",\"status\": 1,\n \"$or\": [\n {\"heading\": {\"$regex\": \".*Waverly .*\",\"$options\": \"i\"}},\n {\"summary\": {\"$regex\": \".*Waverly .*\",\"$options\": \"i\"}},\n {\"body\": {\"$regex\": \".*Waverly .*\",\"$options\": \"i\"}}\n ],\n \"loc\": {\n \"$nearSphere\": {\n \"$geometry\": {\n \"type\": \"Point\",\n \"coordinates\": [\n -85.84142,\n 38.130725\n ]\n },\n \"$minDistance\": 0,\n \"$maxDistance\": 1609340\n }\n }\n }).skip(0).limit(10).explain('executionStats');\n",
"text": "Hi ,I hope ,all are doing well .I have a collection with millions of records. and i am trying to fetch using using nearSphere. but one thing that i observed that in ExecutionStats that totalKeysExamined scan is very high with respect to totalDocsExamined. My Query is given below -and i am attaching the result of ExecutionStats in attachment section.Screenshot 2020-10-28 at 6.14.53 PM2448×654 119 KBIs this working fine?",
"username": "Anuj_chauhan"
},
{
"code": "{ appId : 1, dirPageId : 1, status: 1, loc : \"2dsphere\"}\n",
"text": "Hi @Anuj_chauhan,I would say that scanning 386k index keys and 16k docs to return 2 is far from optimized.To help you with this query I would suggest provide the used index for it.I would suggest the following index:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
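For reference, the index suggested in the reply above can be created from the shell with something like the following; the collection name is taken from the query earlier in the thread and the index name is illustrative:

```javascript
// Compound index ending with the 2dsphere key, matching the equality fields
// of the query followed by the geo field.
db.directorypage.createIndex(
  { appId: 1, dirPageId: 1, status: 1, loc: "2dsphere" },
  { name: "appId_1_dirPageId_1_status_1_loc_2dsphere" }
);
```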
{
"code": "Name - appId_dirPageId_status_loc\nKey - {\n \"appId\" : 1,\n \"dirPageId\" : 1,\n \"status\" : 1,\n \"catId\" : 1,\n \"heading\" : 1,\n \"summary\" : 1,\n \"body\" : 1,\n \"address\" : 1,\n \"latitude\" : 1,\n \"longitude\" : 1,\n \"loc\" : \"2dsphere\"\n}\n",
"text": "Thanks for support ,\nScreenshot 2020-10-29 at 1.49.41 PM1214×576 41.5 KB\ni have already applied this indexing. please look in the attached screeenshot.Thanks",
"username": "Anuj_chauhan"
},
{
"code": "",
"text": "Hi @Anuj_chauhan,This is not the same index it has 7 more fields between status and loc… The query will not use it as optimized.This index seems to be insufficient as regex expression with unanchored and case insensitive data cannot use it.Best\nPavel",
"username": "Pavel_Duchovny"
}
] | Regarding ExecutionStats | 2020-10-28T16:52:15.486Z | Regarding ExecutionStats | 1,374 |
|
null | [] | [
{
"code": "",
"text": "Hello, I am tring to re-test the lab at:\nhttps://university.mongodb.com/mercury/M001/2020_October_20/chapter/Chapter_1_What_is_MongoDB_/lesson/5f32dec404e9ffc0285d7076/problemI was able to connect using the IDE last week, and successfully completed this lab, but I am trying it again this week and the connection is failing for some reason:\nI am able to connect using the same command below using the Mongo Shell, but not through the IDE.The command I am using is as follows (from Atlas):\nmongo “mongodb+srv://sandbox.p3tyo.mongodb.net/” --username m001-studentthe response is:MongoDB shell version v4.0.5\nEnter password:\nconnecting to: mongodb://sandbox-shard-00-01.p3tyo.mongodb.net.:27017,sandbox-shard-00-02.p3tyo.mongodb.net.:27017,sandbox-shard-00-00.p3tyo.mongodb.net.:27017/%3Cdbname%3E?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-4j1t2p-shard-0&ssl=true\n2020-10-28T18:35:06.016+0000 I NETWORK [js] Starting new replica set monitor for atlas-4j1t2p-shard-0/sandbox-shard-00-01.p3tyo.mongodb.net.:27017,sandbox-shard-00-02.p3tyo.mongodb.net.:27017,sandbox-shard-00-00.p3tyo.mongodb.net.:27017\n2020-10-28T18:35:06.514+0000 W NETWORK [js] Unable to reach primary for set atlas-4j1t2p-shard-0\n2020-10-28T18:35:06.514+0000 I NETWORK [js] Cannot reach any nodes for set atlas-4j1t2p-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-10-28T18:35:07.701+0000 W NETWORK [js] Unable to reach primary for set atlas-4j1t2p-shard-0\n2020-10-28T18:35:07.701+0000 I NETWORK [js] Cannot reach any nodes for set atlas-4j1t2p-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.\n##etc. through 11 checks\n2020-10-28T18:35:19.421+0000 W NETWORK [js] Unable to reach primary for set atlas-4j1t2p-shard-0\n2020-10-28T18:35:20.603+0000 W NETWORK [js] Unable to reach primary for set atlas-4j1t2p-shard-0\n2020-10-28T18:35:20.604+0000 E QUERY [js] Error: connect failed to replica set atlas-4j1t2p-shard-0/sandbox-shard-00-01.p3tyo.mongodb.net.:27017,sandbox-shard-00-02.p3tyo.mongodb.net.:27017,sandbox-shard-00-00.p3tyo.mongodb.net.:27017 :\nconnect@src/mongo/shell/mongo.js:328:13\n@(connect):1:6\nexception: connect failedPlease help. I cannot find anything from the course material to help me.\nThanks.\nGreg.",
"username": "Greg_Taylor"
},
{
"code": "",
"text": "mongo “mongodb+srv://sandbox.p3tyo.mongodb.net/” --username m001-studentWhat is the status of this cluster when you check on Atlas?",
"username": "steevej"
},
{
"code": "",
"text": "As far as I can tell the connection is good. I am able to view Collections from my Atlas Cluster.\nWhen I click Connect, it gives me different connection strings, but does not do anything else - not sure if that is the norm.\nThanks.",
"username": "Greg_Taylor"
},
{
"code": "",
"text": "Have you tried with this new connection string?I absolutely have no clue why the connection string would change.",
"username": "steevej"
},
{
"code": "",
"text": "@Greg_Taylor Have you configured network access to your Atlas cluster?",
"username": "Yulia_Genkina"
},
{
"code": "",
"text": "@yulia_genkina happy cake day ",
"username": "santimir"
},
{
"code": "",
"text": "I had this exact issue, fixed it by setting up the connection security (it seems I hadn’t added the 0.0.0. IP address)",
"username": "Mairead_Behan"
},
{
"code": "",
"text": "Thanks all. Setting the 0.0.0 ip address fixed the issue. I am thinking that my Ip Address changed, that would be why it worked last week, but stopped working this week. Setting to 0.0.0.0 will make this easier, and I don’t need the training to be super secure.\n:))\nGreg.",
"username": "Greg_Taylor"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Issues connecting from the IDE | 2020-10-28T18:49:23.433Z | Issues connecting from the IDE | 2,202 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "As we are improving the querying experience, we would like to learn more from our users.If you are interested in providing feedback by talking to our product team and participating in the usability studies, leave your contact information in this form.Your information is kept private and never shared outside MongoDB.We look forward to working with you!",
"username": "Katya"
},
{
"code": "",
"text": "",
"username": "Jamie"
}
] | We want your feedback on MongoDB Query Language | 2020-10-29T17:17:09.573Z | We want your feedback on MongoDB Query Language | 1,617 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.2-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.1. The next stable release 4.4.2 will be a recommended upgrade for all 4.4 users.\nFixed in this release:4.4 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.4.2-rc0 is released | 2020-10-29T15:05:13.357Z | MongoDB 4.4.2-rc0 is released | 1,833 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 3.6.21-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 3.6.20. The next stable release 3.6.21 will be a recommended upgrade for all 3.6 users.\nFixed in this release:3.6 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 3.6.21-rc0 is released | 2020-10-29T15:00:55.015Z | MongoDB 3.6.21-rc0 is released | 1,780 |
[
"o-fish"
] | [
{
"code": "",
"text": "Hello!I keep getting this error when accessing either Boardings, Vessels, or Crews, and the results are not loading.e.g. clicking Boarding Records on main menu > BoardingsRepro with local setup using the sandbox, or on the sandbox itselfScreen Shot 2020-10-24 at 8.50.28 AM3098×1670 359 KBShould I create a Github Issue (maybe it’s a recent regression), or is it because I’m trying to access it on a Saturday morning? ",
"username": "Lenmor_LD"
},
{
"code": "",
"text": "Hi! This was fixed on Friday in our sandbox server, can you “git pull” on your local code to get the latest code and test against the sandbox server? (it was bugging me too, so I looked into it and finally resolved it)Thanx!-Sheeri",
"username": "Sheeri_Cabral"
},
{
"code": "asyncToGenerator.js:6 Uncaught (in promise) TypeError: Cannot read property 'filter' of undefined",
"text": "Thanks for looking at it @Sheeri_CabralI’m still getting an error thoughimage3380×1528 581 KBThis time, it is\nasyncToGenerator.js:6 Uncaught (in promise) TypeError: Cannot read property 'filter' of undefined",
"username": "Lenmor_LD"
},
{
"code": "",
"text": "Ah, I got it resolved.\nFor some reason, it was only happening on the user that I was using.\nWhen I created another user, it worked fine.Thanks!",
"username": "Lenmor_LD"
},
{
"code": "",
"text": "That’s weird! Are they different user types, like the one that had an error was a Field Officer, and the one that didn’t is an Agency Administrator? We’re still working out some permissions issues, so it would be helpful to know.I’m so glad you got unstuck!",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "the one that had an error was a Field Officer, and the one that didn’t is an Agency Administrator?Yes, that’s actually true. Good catch!\nI checked, and the one that is getting the error was a Field Officer, and the newly one I created got the default Agency Admin, so it worked",
"username": "Lenmor_LD"
},
{
"code": "",
"text": "If you’re interested, I fixed it today (on the sandbox), the fix was to put an if statement around the block of code so it doesn’t run if inboundPartnerAgencies is null (because the function just returns the agency name and shared agencies, to the agency variable). See Update get-data.js by Sheeri · Pull Request #355 · WildAid/o-fish-web · GitHub",
"username": "Sheeri_Cabral"
},
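The guard described in the post above would look roughly like the sketch below; the exact change is in the linked PR, and everything here apart from the inboundPartnerAgencies property name is illustrative.

```javascript
// Only touch inboundPartnerAgencies when it is actually set on the agency.
let agencies = [agency.name];
if (agency.inboundPartnerAgencies) {
  agencies = agencies.concat(
    agency.inboundPartnerAgencies.map((partner) => partner.name)
  );
}
```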
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | TypeError: agency.inboundPartnerAgencies is undefined | 2020-10-24T12:57:25.826Z | TypeError: agency.inboundPartnerAgencies is undefined | 4,312 |
|
[
"field-encryption"
] | [
{
"code": "",
"text": "i’m looking into the encryption features available in mongodb.\ni came across the concept:clientside-field level encryption and this concept is relevant for my requirement. i need additional details on the implementation flow. The flow chart provided in the pdf i have linked in this post confuses me. i would like to learn more on this topic and if there is any sample code snippet related to implementation please share.\nimage1083×688 41.2 KB\n",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "Hi @Divine_Cutler,We’ve got some sample code here. If you’re keen to learn more about the implementation of the feature, we did a talk on that, too (which also came with a cheatsheet). Hope this helps.Cheers,\nNaomi",
"username": "Naomi_Pentrel"
},
{
"code": "",
"text": "thank you @Naomi_pentrel . i will look into it.is it possible to practice/experiment/playaround with the field-level-encryption feature using the mongodb community softwareversion or will it require mongodb enterprise softwareversion?",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "With a self-hosted community version you’d only be able to do manual client-side field level encryption.For the full/automatic field-level encryption, you can either use the Enterprise edition or an Atlas cluster (the free tier is sufficient). So I’d suggest setting up a free cluster on Atlas and playing around with it there :).",
"username": "Naomi_Pentrel"
},
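As a rough illustration of the automatic mode mentioned above (Enterprise or Atlas only), the Node.js driver takes the encryption settings on the client. The URI, master key and schema map below are placeholders rather than values from this thread, and the mongodb-client-encryption package must be installed alongside the driver.

```javascript
// Sketch of enabling automatic client-side field level encryption in Node.js.
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient(uri, {
    autoEncryption: {
      keyVaultNamespace: "encryption.__keyVault",
      kmsProviders: { local: { key: localMasterKey } }, // 96-byte local key, testing only
      schemaMap: { "mydb.people": peopleEncryptionSchema }, // fields to auto-encrypt
    },
  });
  await client.connect();
  // reads and writes through this client are now encrypted/decrypted transparently
}
```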
{
"code": "",
"text": "@Naomi_Pentrel this is great. thank you. have a nice day. ",
"username": "Divine_Cutler"
},
{
"code": "",
"text": "You too! ",
"username": "Naomi_Pentrel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Requesting for details on mongodb clientside field level encryption | 2020-10-28T10:43:06.901Z | Requesting for details on mongodb clientside field level encryption | 3,570 |
|
null | [
"aggregation"
] | [
{
"code": "aggregate$lookup$outcursorforEachjsonbulkWrite",
"text": "Hi everyone,I’m fairly new to MongoDB and struggling to understand the best way to update a large collection with data from another collection.We have a collection with ~6m items that requires field within each document to be updated to that of an item in a related collection.Initially I wrote an aggregate pipeline which built up the required data via $lookup and used $out to update the collection but it took over 90mins to run locally which isn’t ideal and I suspect is due to the items also containing a lot of data.I started to look at using a cursor and forEach but it still seemed very slow and getting debug output was difficult.Can anyone advise how they would handle a large update such as this? I’m thinking the best way would be to prepare a json payload for use with bulkWrite?",
"username": "Phunky"
},
{
"code": "",
"text": "Hi @Phunky,I think you are right for the initial build I would suggest doing a range query on an indexed field, or collection scan depanding on best read logic and deviding the data into a unique based bulk chunks.Than those bulk chunks can be passed in parallel to multiple write threads to run insert/update simultaneously based on unique key filter (make sure it is indexed on the target collection). Please make sure to use w: majority to keep replica members in sync and avoid cache pressure on primary.To keep this collection up to dat I would suggest using Atlas triggers if you are in atlas or a changestream module so that you will stream changes as they come from the source collection.Best\nPavel",
"username": "Pavel_Duchovny"
},
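An illustrative shape for one of the bulk chunks described above, using an upsert per document on an assumed indexed unique key and a majority write concern:

```javascript
// "chunk" is one slice of source documents; "uniqueKey" stands in for whatever
// indexed unique field identifies a document in the target collection.
const ops = chunk.map((doc) => ({
  replaceOne: {
    filter: { uniqueKey: doc.uniqueKey },
    replacement: doc,
    upsert: true,
  },
}));
db.target.bulkWrite(ops, { ordered: false, writeConcern: { w: "majority" } });
```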
{
"code": "",
"text": "Thanks for the response @Pavel_Duchovny, i’ll take those points into consideration.Thankfully this is just a one-time task we need to run to clean up some problematic data and restructure our existing data structures.",
"username": "Phunky"
}
] | What is the best way to update a large collection with data from another collection? | 2020-10-27T13:35:12.108Z | What is the best way to update a large collection with data from another collection? | 2,115 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "Hi, I’m having the following error message when trying to upload node_modules.tar.gz file to Realm Functions dependencies:\nerror: multipart: NextPart: read tcp → : i/o timeout\nFile size is below the 10mb limit, being 5mb in size. Upload works when the file size is below 2mb.\nMy Realm-Cli version : 1.1.0. Thanks.",
"username": "Mounir_Farhat"
},
{
"code": "",
"text": "Hey Mounir - can you attach your zipped file here so we can take a look and also share what dependencies you’re adding when going from the 2mb -> 5mb file size?",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thanks for your response. Here is the file:\nhttps://we.tl/t-radOg1df5s\nThis is a failed attempt to fit the library opencv4nodejs into the 10mb constraint. What is strange, is that I keep receiving the ‘error: multipart: NextPart: i/o timeout’ message even though the file is under 5mb.",
"username": "Mounir_Farhat"
}
] | Error uploading realm function dependencies | 2020-10-28T16:51:19.695Z | Error uploading realm function dependencies | 2,231 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hey everyone,So I’ve been able to incorporate Touch/Face ID within my app and it works great if there is pre-existing username/password info filled in. I’m trying to figure out the best way to go about setting up the Auth process for this, as Touch ID should only work if the user has already logged in before (so that the credentials are stored).Would the logic be something like this?\n-Setup app.currentUser() and store username/pass\n-Store the username/pass locally somehow (this is the part I’m stuck on)\n-If user has setup/logged in before, then allow Touch ID",
"username": "Aabesh_De"
},
{
"code": "app.curentUser() != nil",
"text": "I would think you app logic, on app startup, should check whether app.curentUser() != nil - is this possible? Why do you need to store the username/password locally?",
"username": "Ian_Ward"
}
] | Working with Touch/Face Auth ID? | 2020-10-15T14:57:55.892Z | Working with Touch/Face Auth ID? | 1,923 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi, I have a stand alone Mongodb database running on ec2. The data size is about 30gb now. I am now wanting to add 2 replica sets with the existing one. Is it possible now ? Or can I only shard of any existing collection coming at this data size ? What kind of precaution will I need ?",
"username": "Md_Mahadi_Hossain"
},
{
"code": "",
"text": "Hi @Md_Mahadi_Hossain,I believe you are mixing some of the trems we use for MongoDB.If you wish to have replica set with 3 nodes by adding 2 replicates members you can convert the standalone to a replica set with the following guide:Please note that you will have to change your applications connection string to a replica set type.However, sharding is adding shards which are additional set of replica sets added with components to stripe your data across those multiple replica sets. Only than you need to shard collections which is set the field that will devide them.I would not shard an environment with 30gb , I would start consider sharding around 1TB or more.I would say that running on aws is best done by MongoDB Atlas as it allows you flexibility , ease of management and live secure migration.Best\nPavel",
"username": "Pavel_Duchovny"
},
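The shell side of that conversion, after restarting the existing mongod with a replSetName and starting the two new members, is roughly the following (hostnames are placeholders for the EC2 instances):

```javascript
// Run against the existing node once it has been restarted with --replSet rs0.
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "node1.example.net:27017" }] });
rs.add("node2.example.net:27017");
rs.add("node3.example.net:27017");
rs.status(); // verify one PRIMARY and two SECONDARY members
```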
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Creating replica set of stand alone db | 2020-10-28T16:51:32.298Z | Creating replica set of stand alone db | 1,754 |
null | [
"o-fish"
] | [
{
"code": "",
"text": "Hello open source projects! During the month of October, let’s do a “Wednesday weigh-in” to see how #hacktoberfest is going for your projects. What’s going on in the codebase? What contributions have you gotten? Are there any issues you’d LOVE to see worked on?",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "The O-FISH project had a great start to Hacktoberfest. We had a fantastic kickoff meeting, where a brief demo of the apps and their architecture were shown. If you missed the video, you can watch the replay.Looking to participate? We got you covered with some lovingly hand-picked issues we think you might enjoy:iOS issues - SwiftUI:Android issues - KotlinWeb issues - React / Node.jsAll repos are looking for more testing scripts if you’re into CI/CD work. There are also documentation issues around - we’re not kidding when we say we have something for everyone!We are awed by the love from the community so far - we have had 45 contributions - @yo_adrienne listed most of them in her recent Weekly Update.And we have a leaderboard - click the image to be taken to it to see the entire board:\n-Sheeri",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "As of right now, we have 94 contributions from 47 different contributors!Android is almost totally complete with Dark mode, though there are some lingering issues.iOS had a crashing bug fixed, and needs more work to get dark mode complete.Web had a bug fixed where an empty license number caused a crew member to not have a name, and it would be awesome to have some unit tests for it.Here’s this week’s leaderboard - click on the image to be taken to the full leaderboard:\n",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Well, here it is, Wednesday again! We had a bit of a dip in contributions last week - which was nice because I was having a hard time keeping up with all the PRs! But now the queues are cleared, and there are more issues to work on, so if you haven’t gotten your hacktoberfest PR’s in just yet, here’s a list of select open issues we’d love for folks to take on:iOS\nThere are several tickets to make sure dark mode is implemented everywhere in the iOS app.\nAnd a really important issue to capture an “Other” value for some fields.Android\nScript to check for strings that aren’t localized\nAnd a really important issue to capture an “Other” value for some fields.React/Node.js\nChange where the Agency Form Data links from\nSearch results are being redirected to the wrong pageOr try your hand at some unit testing - for iOS, Android or React/Node.js.Here’s a more detailed board of contributions by week (Weeks are from Monday-Sunday)\n\nScreen Shot 2020-10-28 at 3.35.56 PM1110×396 23.5 KB\n",
"username": "Sheeri_Cabral"
}
] | Hacktoberfest: Wednesday Weigh-in | 2020-10-07T20:55:51.015Z | Hacktoberfest: Wednesday Weigh-in | 4,249 |
null | [] | [
{
"code": "",
"text": "One of the things that typically causes me to withold account creation on a forum is the required fields in a signup form. One such example of that which gave me pause for account creation was the requirement of a cellphone # in the signup. Company name was also slightly obtuse, but I understand the requirement there to some degree. However, cellphone # seems excessive as I am fairly private about handing that data out. I’d love to see that field made optional so I’m able to remove that data from the site (I understand that it’s not shown on a profile, but all the same I’d rather it be more private than not)",
"username": "crutchcorn"
},
{
"code": "",
"text": "Hi Corbin,Thanks for your feedback! We’re already in the process of revamping the sign up sheet so I’ll pass along your comments to the relevant team.Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "Corbin, I agree. I initially skipped over the Company Name only to find that it was a required field.",
"username": "Lourdes_ovando"
},
{
"code": "",
"text": "I too agree that asking for a cell phone number should be entirely optional. Perhaps for use only when enrolling in 2 factor authentication.Bob",
"username": "Robert_Cochran"
},
{
"code": "",
"text": "Hi all,Just wanted to reply here and let you know that we have a new registration page in the works for our community sites, including the forums. Thanks so much for your feedback!Best,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Jamie"
}
] | Excessive Required Data for Signup | 2020-01-24T17:56:58.184Z | Excessive Required Data for Signup | 4,153 |
null | [
"aggregation"
] | [
{
"code": "[{\n \"_id\": {\n \"$oid\": \"id....\"\n },\n \"codeex\": \"...example\",\n \"one\": \"testexample\",\n \"two\": \"testexample\",\n }\n},{\n \"_id\": {\n \"$oid\": \"id....\"\n },\n \"codeex\": \"...example\",\n \"one\": \"testexample\",\n \"two\": \"testexample\",\n }\n}]\n{\n \"_id\": {\n \"$oid\": \"id....\"\n },\n \"task: { \n id: \"...task.id\",\n name: 'err',\n}\n }\n{\n \"_id\": {\n \"$oid\": \"id....\"\n },\n \"task: { \n id: \"...task.id\",\n name: 'err',\n}\n }\n .aggregate(\n {\n $lookup: {\n from: 'users',\n localField: '_id',\n foreignField: 'tasks.id',\n as: 'user',\n },\n },\n { $unwind: '$user' },\n {\n $group: {\n _id: '$user._id',\n count: { $sum: 1 },\n },\n },\n {\n $group: {\n _id: null,\n total: { $sum: '$count' },\n },\n },\n ).toArray();\n",
"text": "Hello,I don’t know how to aggregate to have the actual result for my two collectionExample my first one is : TaskExample 2 : UsersThe attended result is like a find { } with a projection of all the field of the Tasks in an array for each tasks but with a new value by tasks countUser: wich will be the number of the user who have the task id linkedThanks i know how to use a little bit aggregate etc… but actually i don’t get the right resultThis is my first reflexion thanks",
"username": "Zack_N_A"
},
{
"code": "\n [{\n $lookup: {\n from: 'users',\n localField: '_id',\n foreignField: 'task.id',\n as: 'users'\n }}, \n {$set: {\n usersCount: {$size: \"$users\"}\n }}, \n {$project: {\n users:0\n }}]\n",
"text": "You can get the number of users by project the size of the array you got after lookupI didn’t quite get how do you want the array for each talk to look like, maybe $objectToArray can help",
"username": "Katya"
}
] | Aggregate two collection & find all datafrom the first one | 2020-10-28T16:51:45.611Z | Aggregate two collection & find all datafrom the first one | 1,490 |
null | [
"data-modeling"
] | [
{
"code": "db.followings.aggregate([\n {\n $addFields: {\n userIds: {\n $setUnion: [\n {\n $map: {\n input: \"$followers\",\n in: \"$$this.followerId\"\n }\n },\n {\n $map: {\n input: \"$followings\",\n in: \"$$this.followingId\"\n }\n }\n ]\n }\n }\n },\n {\n $lookup: {\n from: \"users\",\n localField: \"userIds\",\n foreignField: \"_id\",\n as: \"users\"\n }\n },\n {\n $project: {\n userId: 1,\n followers: {\n $map: {\n input: \"$followers\",\n as: \"f\",\n in: {\n $mergeObjects: [\n \"$$f\",\n {\n fullName: {\n $reduce: {\n input: \"$users\",\n initialValue: \"\",\n in: {\n $cond: [\n { $eq: [\"$$this._id\", \"$$f.followerId\"] },\n \"$$this.fullName\",\n \"$$value\"\n ]\n }\n }\n }\n }\n ]\n }\n }\n },\n followings: {\n $map: {\n input: \"$followings\",\n as: \"f\",\n in: {\n $mergeObjects: [\n \"$$f\",\n {\n fullName: {\n $reduce: {\n input: \"$users\",\n initialValue: \"\",\n in: {\n $cond: [\n { $eq: [\"$$this._id\", \"$$f.followingId\"] },\n \"$$this.fullName\",\n \"$$value\"\n ]\n }\n }\n }\n }\n ]\n }\n }\n }\n }\n }\n])\n",
"text": "I have been working on a simple followers/following app. This question is a continuation of my previous question here. This is what I’ve tried so far:Here’s the Mongo playgroundNow, what I want to achieve is to display 2 boolean fields namely isFollowed for Followings and isFollowing for Followers array. Is this possible? I would gladly appreciate any help. Thanks!",
"username": "nirmamalen"
},
{
"code": "db.followings.aggregate([\n {\n $addFields: {\n userIds: {\n $setUnion: [\n {\n $map: {\n input: \"$followers\",\n in: \"$$this.followerId\"\n }\n },\n {\n $map: {\n input: \"$followings\",\n in: \"$$this.followingId\"\n }\n }\n ]\n }\n }\n },\n {\n $lookup: {\n from: \"users\",\n localField: \"userIds\",\n foreignField: \"_id\",\n as: \"users\"\n }\n },\n {\n $project: {\n userId: 1,\n followers: {\n $map: {\n input: \"$followers\",\n as: \"f\",\n in: {\n $mergeObjects: [\n \"$$f\",\n {\n fullName: {\n $reduce: {\n input: \"$users\",\n initialValue: \"\",\n in: {\n $cond: [\n {\n $eq: [\n \"$$this._id\",\n \"$$f.followerId\"\n ]\n },\n \"$$this.fullName\",\n \"$$value\"\n ]\n }\n }\n },\n isFollowing: true\n }\n ]\n }\n }\n },\n followings: {\n $map: {\n input: \"$followings\",\n as: \"f\",\n in: {\n $mergeObjects: [\n \"$$f\",\n {\n fullName: {\n $reduce: {\n input: \"$users\",\n initialValue: \"\",\n in: {\n $cond: [\n {\n $eq: [\n \"$$this._id\",\n \"$$f.followingId\"\n ]\n },\n \"$$this.fullName\",\n \"$$value\"\n ]\n }\n }\n },\n isFollowed: true\n }\n ]\n }\n }\n }\n }\n }\n])\n",
"text": "Hi @nirmamalenWelcome to MongoDB community !I am not sure I fully understood the placing of this filed but I think the following command:I am not a fan of the lookup and the relational schema you have done. I wonder if embedding the followings array in the user document might make more sense.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel,Thanks for your response. For the isFollowed and isFollowing, I need to check whether the logged in user has already followed or being followed by each person in the followers and followings arrays.",
"username": "nirmamalen"
},
{
"code": "",
"text": "Hi @nirmamalen,This sounds like this query will be complicated and insufficient. I would suggest refactoring the module to have people you follow or following embedded in user documents , if the list grows big offload it into another collection with a pointer:Building With Patterns: The Outlier Pattern | MongoDB BlogHope this helps, if not consider splitting this into several queries from the client side.Best\nPavel",
"username": "Pavel_Duchovny"
}
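One possible embedded shape implied by that suggestion, with purely illustrative field names and values:

```javascript
// Each relationship entry carries the flags the UI needs, avoiding the $lookup.
db.users.insertOne({
  _id: ObjectId("5f9a1b2c3d4e5f6a7b8c9d0e"),
  fullName: "Jane Doe",
  followers: [{ userId: ObjectId("5f9a1b2c3d4e5f6a7b8c9d0f"), fullName: "John Roe", isFollowing: true }],
  followings: [{ userId: ObjectId("5f9a1b2c3d4e5f6a7b8c9d0f"), fullName: "John Roe", isFollowed: true }]
});
```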
] | Mongoose Get Mutual Followers/Following | 2020-10-27T13:35:26.707Z | Mongoose Get Mutual Followers/Following | 3,599 |
null | [
"installation"
] | [
{
"code": "replication:\n replSetName: \"rs0\"\n\nnet:\n port: 27017\n bindIp: mongodb-01,mongodb-02,mongodb-03\n{\"t\":{\"$date\":\"2020-10-28T09:12:43.784+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"mongodb-01,mongodb-02,mongodb-03\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"replication\":{\"replSetName\":\"rs0\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2020-10-28T09:12:43.786+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Cannot assign requested address\"}}}\n",
"text": "Hi there,I spent over 24 hours reading all the docs and all the forums but I cant get my Replica Set deployed.I setup 3 ubuntu instances in AWS, all on the same subnet with the same security group.\nI installed MongoDB on all three in the same way\nI tested that all three can talk to each other over local ips\nI added all 3 hostnames on each /etc/hosts file\nI added the same mongodb.conf settings to all three like soBut it’s failing to start after I edit the conf file.PLEASE HELP ME Peter",
"username": "Peter_Smith"
},
{
"code": "net:\n port: 27017\n bindIp: mongodb-01\nnet:\n port: 27017\n bindIp: mongodb-02\nnet:\n port: 27017\n bindIp: mongodb-03\n",
"text": "Hi @Peter_SmithThe bindIP should be the ip address or hostname of the host.The mongod cannot bind an ip addrress of another host.This should be the relevant section on each host.",
"username": "chris"
},
{
"code": "",
"text": "Thanks so much Chris, ill give that whirl ",
"username": "Peter_Smith"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | SocketException: Cannot assign requested address | 2020-10-28T09:53:05.391Z | SocketException: Cannot assign requested address | 9,830 |
[
"node-js",
"field-encryption"
] | [
{
"code": "mongocryptd",
"text": "I am doing client side field level encryption, But while connection getting this error “MongoError: Unable to connect to mongocryptd, please make sure it is running or in your PATH for auto-spawn”, How to solve this problem?!image1366×237 48.1 KB",
"username": "Great_Manager_Instit"
},
{
"code": "",
"text": "Looks like mongocryptd executable is not in your $PATH\nPlease check and update if it is missing",
"username": "Ramachandra_Tummala"
},
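If adjusting PATH is inconvenient, most drivers can also be pointed at the binary directly through the auto-encryption extra options; this Node.js fragment is a hedged sketch and the spawn path is a placeholder:

```javascript
// Tell the driver exactly where to find mongocryptd instead of relying on PATH.
const { MongoClient } = require("mongodb");

const client = new MongoClient(uri, {
  autoEncryption: {
    keyVaultNamespace: "encryption.__keyVault",
    kmsProviders, // defined elsewhere
    extraOptions: { mongocryptdSpawnPath: "/usr/local/bin/mongocryptd" }, // placeholder path
  },
});
```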
{
"code": "",
"text": "Thanks, now it’s working. Earlier I thought since I am encrypting the data at MongoDB cluster so mongocryptd has to be available from their end. But later got to know this has to be run in the client application.",
"username": "Great_Manager_Instit"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoError: Unable to connect to `mongocryptd`, please make sure it is running or in your PATH for auto-spawn | 2020-10-27T13:35:01.375Z | MongoError: Unable to connect to `mongocryptd`, please make sure it is running or in your PATH for auto-spawn | 3,439 |
|
null | [
"etl"
] | [
{
"code": "",
"text": "We have lot of data in Cassandra and we need to move it to Mongo. Is there a tool I can use or how can I easily migrate the data. Some tables I need to move as it is and for some I need to make some changes in the column names. I have written a script but I think it will be slow.Thank you,\nSM",
"username": "Jason_Widener1"
},
{
"code": "",
"text": "Hi @Jason_Widener1,I assume that you will need to write scripts or code using a driver to connect to your Cassandra cluster query data, reshape and bulk load to MongoDB databases and collections .I noticed that Cassandra is supporting JSON format queries:https://cassandra.apache.org/doc/latest/cql/json.htmlAnother way might be to export data to json or csv files and load them to MongoDB , however, here reshaping might be a challenge.However, to leverage MongoDB architecture I would suggest to verify that the data copied from Cassandra is wisely using the document model.Verify you are not hitting any of the known antipatterns.https://www.mongodb.com/article/schema-design-anti-pattern-summaryBest\nPavel",
"username": "Pavel_Duchovny"
},
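For the export-and-reload route mentioned above, the reshaping step can be a small driver script; the file name, field mapping and namespace below are assumptions for illustration only:

```javascript
// Node.js sketch: read rows exported from Cassandra as JSON, rename columns,
// and bulk insert them into MongoDB.
const fs = require("fs");
const { MongoClient } = require("mongodb");

async function load(uri) {
  const rows = JSON.parse(fs.readFileSync("users_export.json", "utf8"));
  const docs = rows.map(({ user_id, full_name, ...rest }) => ({
    _id: user_id,    // reuse the Cassandra key as the document id
    name: full_name, // example of a renamed column
    ...rest,
  }));
  const client = await MongoClient.connect(uri);
  await client.db("appdb").collection("users").insertMany(docs, { ordered: false });
  await client.close();
}
```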
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Migration from Cassandra to Mongo | 2020-10-27T21:11:04.285Z | Migration from Cassandra to Mongo | 4,779 |
null | [
"performance",
"transactions"
] | [
{
"code": "",
"text": "Hello devs, we have a use case where we insert thousands (>20000) of documents (each one small, <1kb) into the same collection within one transaction. Setup is a one-node replica set.We made the following observations:Questions:",
"username": "fran_28"
},
{
"code": "",
"text": "Hi @fran_28,Welcome to MongoDB community!The WiredTiger cache is the main component that translates block level disk representation to the memory structure which your queries and CRUD operations run.By default it will take around 50% of the machine Which is sufficient for most use cases and should not be changed. This is also because the filesystem cache used have the data in compressed disk format and having a sufficient space for that should allow better access other than giving more and more space to WT.The engine will try keeping the cache under 80% full and dirty under %5. Now when dirty cache reach 20 % application threads will be busy evicting cache rather than surving queries, this is why your instance almost halt.I would say you need to find the resource that cause this to reach 20% and scale it (disk,ram,cpu) rather than increasing cache.Transactions do come with a price as the mechanics to isolate reads are expensive and require extra cache. Having transactions small or throttle your transaction rate should ease that.Consider testing different isolation levels as well and maybe combine documents into single objects.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,thank you for your detailed answer. My follow-up questions are:",
"username": "fran_28"
},
{
"code": "",
"text": "Hi @fran_28,As I mentioned before changing those internal values are not recommended if you can scale the env or tune your workload.Those values are set this way to guard you from driving your database to places where it can abruptly stop or get corrupted. Playing with those without a deep inspection from a MongoDB engineering might yield unexpected results.The best way to tune your workload is by load testing and trying verious write concerns and read isolation levels. Lowering the amount of documents per transaction should not lower your consistency if you implement a retry logic.Best\nPavel",
"username": "Pavel_Duchovny"
}
] | Question regarding large transaction and limited WiredTiger cache size | 2020-10-26T19:08:18.675Z | Question regarding large transaction and limited WiredTiger cache size | 3,559 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "I am just trying to install the mongo shell to begin to play around with the interface. I downloaded the files and added the \\bin to the PATH variable of my windows desktop, but the terminal closes immediately after opening when I try to run it. I am on the M0 free tier, and I know that some features are limited, but I thought that this would be something that I would have access to. Any suggestions on what I could try to get this working would be much appreciated!",
"username": "Benjamin_Loshe"
},
{
"code": "",
"text": "What command you have issued?\nAre you trying to connect to your cluster or local mongo instance\nDoes it work with simple commands\nmongo\nor\nmongo --version if this does not work cd to the mongo/bin directory and try",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I would like to connect to my cluster. The terminal opens and closes before I have the chance to type anything at all",
"username": "Benjamin_Loshe"
},
{
"code": "",
"text": "What exactly you mean by terminal?\nhow you are trying to connect to mongodb?You have to run the command at Windows cmd prompt",
"username": "Ramachandra_Tummala"
},
{
"code": "C:\\Users\\Owner>mongo --version\nMongoDB shell version v4.4.1\nBuild Info: {\n \"version\": \"4.4.1\",\n \"gitVersion\": \"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"windows\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n",
"text": "image1022×801 12.3 KBThis is the result that I get when I run the mongo --version command. I was trying to run the application in the screenshot above by double clikcing on it, and it would just open and then close",
"username": "Benjamin_Loshe"
},
{
"code": "",
"text": "You cannot connect by double clicking mongo.exeDo you have the connect string to connect to your cluster?\nPlease run the command at Windows cmd prompt\nSomething like below:\nC:\\Users>mongo “mongodb+srv://myuser:[email protected]/test”Above onnect string is for example only taken from mongodb university course.It varies depending on where your cluster is hosted/exists",
"username": "Ramachandra_Tummala"
}
] | Mongo shell fail to launch | 2020-10-26T19:09:01.938Z | Mongo shell fail to launch | 7,428 |
null | [
"configuration"
] | [
{
"code": "",
"text": "Hello, can someone tell us what the following are:mongoa as in /etc/mongodb/mongoa.conf; this is a process that runs with mongodb; what is standard port?\nmongos as in /etc/mongodb/mongos.conf; this is a process that runs with mongodb; what is standard port?\nmongoc as in /etc/mongodb/mongoc.conf; this is a process that runs with mongodb; what is standard port?For example, mongod is standard db process and runs on port 27017.Not able to find definitions on mongodb or web.Thanks.",
"username": "geo_dezix"
},
{
"code": "mongodmongoamongoc--versionmongod",
"text": "Welcome to the community @geo_dezix!The information you are looking for can be found in the documentation under:mongoa as in /etc/mongodb/mongoa.conf; this is a process that runs with mongodb; what is standard port?mongoc as in /etc/mongodb/mongoc.conf; this is a process that runs with mongodb; what is standard port?These are not standard MongoDB binaries or naming conventions. If you find them on a local deployment, I assume an administrator probably copied or aliased the mongod binary to mongoa (to suggest a role as a replica set arbiter) and mongoc (to suggest a role as a sharded cluster config server).Try running those binaries with --version. If the output starts with “db version v” followed by JSON Build Info, it is likely those are actually mongod binaries.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What are mongoa, mongos, mongoc? | 2020-10-27T21:26:59.281Z | What are mongoa, mongos, mongoc? | 3,505 |
[
"kafka-connector"
] | [
{
"code": "# Generic Connector Configs\ngroup.id=ksql-connect-cluster\nbootstrap.servers=\"https://confluent:broker:uri\"\nsecurity.protocol=SASL_SSL\nsasl.mechanism=PLAIN\nsasl.jaas.config= org.apache.kafka.common.security.plain.PlainLoginModule required \\\n username=\"USERNAME\" password=\"Password;\nproducer.ssl.endpoint.identification.algorithm=https\nproducer.sasl.mechanism=PLAIN\nproducer.request.timeout.ms=20000\nproducer.retry.backoff.ms=500\nproducer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\\n username=\"USERNAME\" password=\"Password\";\nproducer.security.protocol=SASL_SSL\nkey.converter=io.confluent.connect.avro.AvroConverter\nvalue.converter=io.confluent.connect.avro.AvroConverter\nkey.converter.schemas.enable=true\nvalue.converter.schemas.enable=true\nvalue.converter.enhanced.avro.schema.support=true\ninternal.key.converter=org.apache.kafka.connect.json.JsonConverter\ninternal.value.converter=org.apache.kafka.connect.json.JsonConverter\n# Schema Registry credentials\nvalue.converter.schema.registry.url=https://schemaregistryurl\nvalue.converter.basic.auth.credentials.source=USER_INFO\nvalue.converter.schema.registry.basic.auth.user.info=USERNAME:PASSWORD\n",
"text": "Hi,I’m currently integrating MongoDB’s Kafka Source Connector with a Confluent Kafka cluster. My source connector sends the change events stream data from my database into Kafka, however I would like to know how I could integrate this connector with Schema Registry.My setup is using Kafka from a Confluent server, then I have a docker container with KSQL and Kafka Connect embedded. This Kafka Connect currently only has the MongoDB Source Connector.This is my connector.properties file to configure my Kafka Connect:This is how I set up MongoDB Source Connector properties:I configured the converters to use the AvroConverter and also gave the credentials for the Schema Registry, however, when I check the Kafka’s topic to which the events are sent, instead of the schema of the change event streams data, Confluent Schema Registry shows me the following schema:image1516×272 9.53 KBWe want to use KSQL to apply transformations to the messages running through this topic that receives the change events streams, however, when I try to create a stream listening to one of these topics I receive the following error message:image983×147 7.53 KBThe schema of the full document sent in these change events streams is extremely complex with many levels of nested objects and arrays, so having to set these schemas in AVRO manually would be very hard and error prone so we wanted to use KSQL schema inference to create these streams. This is currently not being possible due to the error displayed above which leads me to believe the problem may be in how we’re setting up our connector and consequently how we’re creating our topics and their respective AVRO schemas.Our goal here would be to have an AVRO schema compatible with our change stream events. Is this possible to achieve automatically through the MongoDB Source connector or will I have to create the schemas manually so I can use KSQL schema inference?",
"username": "Miguel_Azevedo"
},
{
"code": "",
"text": "In the current version of the connector the output of the source is always a string so you won’t be able to use the schema registry. However, in the next version of the connector we will support outputting to schema. There is a snapshot build of the connector with this support and demo here. GitHub - RWaltersMA/kafka1.3: The Financial Securities demo shows data flowing from MySQL, MongoDB via Kafka Connect into Kafka Topics. Showcases various improvements in MongoDB Connector for Apache Kafka V1.3. It is not feature complete but should provide some guidance as far as the direction of the connector.Also, have you considered setting publish.full.document.only=true, this will not push all the change stream metadata to the message only the document itself.",
"username": "Robert_Walters"
},
{
"code": "output.schema.value",
"text": "Hi Mongo Team,Thank you so much for the 1.3 release. The new features are exactly what we needed to move forward with the integration between our database and our Kafka infrastructure.If I may give some feedback:Again, thank you for this very important release.\nMiguel Azevedo",
"username": "Miguel_Azevedo"
}
] | Kafka Source Connector, Kafka Schema Registry and KSQL - Schemas inconsistencies | 2020-08-24T15:01:20.047Z | Kafka Source Connector, Kafka Schema Registry and KSQL - Schemas inconsistencies | 4,042 |
|
[
"containers",
"kubernetes-operator"
] | [
{
"code": "albertwong@Alberts-MacBook-Pro mongodb-enterprise-kubernetes % kubectl replace -f crds.yaml\nWarning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nclusterrole.rbac.authorization.k8s.io/mongodb-enterprise-operator-mongodb-webhook replaced\nError from server (NotFound): error when replacing \"crds.yaml\": customresourcedefinitions.apiextensions.k8s.io \"mongodb.mongodb.com\" not found\nError from server (NotFound): error when replacing \"crds.yaml\": customresourcedefinitions.apiextensions.k8s.io \"mongodbusers.mongodb.com\" not found\nError from server (NotFound): error when replacing \"crds.yaml\": customresourcedefinitions.apiextensions.k8s.io \"opsmanagers.mongodb.com\" not found\n",
"text": "Error with trying to install the MongoDB operator on minikubeCrosslinked to https://github.com/mongodb/mongodb-enterprise-kubernetes/issues/161",
"username": "Albert_Wong"
},
{
"code": "",
"text": "Use “apply” as in “kubectl apply” instead.",
"username": "Albert_Wong"
}
] | Minikube 1.14. Problem with CRD installation | 2020-10-27T19:38:24.878Z | Minikube 1.14. Problem with CRD installation | 3,962 |
|
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "",
"text": "Hello community,We are running MongDB on kuberntes using MongoDB Enterprise Kubernetes Operator. Due to security reasons, we are in Air-Gapped environment (no internet access) , we were able set up our private docker register to pull the different images , however while creating a replicasset , we notice that the need of the need to fetch some linux binary from internet, so I was wondering if there is a possibility to bypass that liking setting our private repo manager.\nThank you in advance",
"username": "Asma_BEN_SALAH"
},
{
"code": "",
"text": "I would open a ticket with support and also request that air-gapped environments are formally supported. https://feedback.mongodb.com/",
"username": "Albert_Wong"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Enterprise Kubernetes Operator in air-gapped environment | 2020-05-22T10:03:51.583Z | MongoDB Enterprise Kubernetes Operator in air-gapped environment | 2,985 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "kubectl logs -f deployment/mongodb-enterprise-operator -n mongodb\n---\napiVersion: mongodb.com/v1\nkind: MongoDB\nmetadata:\n name: ops-manager-db-replica\n namespace: mongodb\nspec:\n members: 3\n type: ReplicaSet\n version: 4.2.2-ent\n\n persistent: false\n\n opsManager:\n configMapRef:\n name: ops-manager-connection\n credentials: some-mongodb-secret\napiVersion: mongodb.com/v1\nkind: MongoDBOpsManager\nmetadata:\n name: ops-manager\n namespace: mongodb\nspec:\n # the version of Ops Manager distro to use\n version: 4.2.4\n\n # the name of the secret containing admin user credentials.\n adminCredentials: ops-manager-admin-secret\n\n externalConnectivity:\n type: NodePort\n\n # the Replica Set backing Ops Manager. \n # appDB has the SCRAM-SHA authentication mode always enabled\n applicationDatabase:\n members: 3\n kubectl create secret generic ops-manager-admin-secret \\\n--from-literal=Username=\"[email protected]\" \\\n--from-literal=Password=\"domainadmin1.\" \\\n--from-literal=FirstName=\"Kay\" \\\n--from-literal=LastName=\"K\" -n mongodb\n\n\nkubectl -n mongodb create secret generic some-mongodb-secret --from-literal=\"[email protected]\" --from-literal=\"publicApiKey=e5cd7422-d4ff-4b74-8b9d-f4d9374ab031\"\n",
"text": "Hi,Im trying to implement MongoDB Kubernetes operator enterprise on my local minibike machine. However after applying a replica set it fails and i get the following error:{“level”:“error”,“ts”:1595706141.912315,“caller”:“workflow/failed.go:67”,“msg”:“Failed to prepare Ops Manager connection: Error reading or creating project in Ops Manager: Get http://ops-manager-svc.mongodb.svc.cluster.local:8080/api/public/v1.0/orgs?itemsPerPage=500&pageNum=1: dial tcp 172.18.0.10:8080: connect: connection refused”,“ReplicaSet”:“mongodb/ops-manager-db-replica”,“stacktrace”:“github.com/10gen/ops-manager-kubernetes/pkg/controller/operator/workflow.failedStatus.Log\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/pkg/controller/operator/workflow/failed.go:67\\ngithub.com/10gen/ops-manager-kubernetes/pkg/controller/operator.(*ReconcileCommonController).updateStatus\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/pkg/controller/operator/common_controller.go:233\\ngithub.com/10gen/ops-manager-kubernetes/pkg/controller/operator.(*ReconcileMongoDbReplicaSet).Reconcile\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/pkg/controller/operator/mongodbreplicaset_controller.go:76\\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:246\\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:222\\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:201\\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\\nk8s.io/apimachinery/pkg/util/wait.Until\\n\\t/data/mci/4fe86eb1346d553a9277f028ff80fe35/src/github.com/10gen/ops-manager-kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88”}I have been using the following guides to try and implement this.\n.https://www.mongodb.com/blog/post/running-mongodb-ops-manager-in-kubernetes\n.Introducing the MongoDB Enterprise Operator for Kubernetes | MongoDB Blogreplicaset.ymlops-manager.ymlSecretsIt’s the first time im building a mongodb operator. So a long with the issue i have two extra questionsFirst im not sure if i should be using the enterprise version or the community version, what are the core differences are they both free?How could i use AWS EBS for my volume persistancy?",
"username": "Kay_Khan"
},
{
"code": "",
"text": "First im not sure if i should be using the enterprise version or the community version, what are the core differences are they both free?The community and enterprise contain different application containers. Enterprise is the paid version of MongoDB and comes with apps like Ops Manager in addition to mongoDB database.How could i use AWS EBS for my volume persistancy?The operator should work if storage has been configured correctly.",
"username": "Albert_Wong"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | ReplicaSet Failed State using operator | 2020-07-25T20:42:08.664Z | ReplicaSet Failed State using operator | 2,983 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "",
"text": "Please help to clarify",
"username": "Homer_Najafi"
},
{
"code": "",
"text": "Provide a parameter to the mongo command like mongo “mongodb://mongodb0.example.com:27017/testdb?tls=true”That isn’t a mongodb concern. Look for the answer in the kube documentation.",
"username": "Albert_Wong"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Question about Mongodb Kubernetes Operator | 2020-09-01T20:56:53.576Z | Question about Mongodb Kubernetes Operator | 2,762 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "ops-manager.yamlapiVersion: mongodb.com/v1\nkind: MongoDBOpsManager\nmetadata:\n name: ops-manager\nspec:\n # the number of Ops Manager instances to run. Set to value bigger\n # than 1 to get high availability and upgrades without downtime\n replicas: 1\n\n # the version of Ops Manager distro to use\n version: 4.2.12\n\n # the name of the secret containing admin user credentials.\n # Either remove the secret or change the password using Ops Manager UI after the Ops Manager\n # resource is created!\n adminCredentials: ops-manager-admin-secret\n\n \n statefulSet:\n spec:\n # the Persistent Volume Claim will be created for each Ops Manager Pod\n volumeClaimTemplates:\n - metadata:\n name: mongodb-versions\n spec:\n accessModes: [ \"ReadWriteOnce\" ]\n resources:\n requests:\n storage: 20G\n template:\n spec:\n containers:\n - name: mongodb-ops-manager\n volumeMounts:\n - name: mongodb-versions\n # this is the directory in each Pod where all MongoDB\n # archives must be put\n mountPath: /var/lib/docker/ap-mongodb-om\n\n # the application database backing Ops Manager. Replica Set is the only supported type\n # Application database has the SCRAM-SHA authentication mode always enabled\n applicationDatabase:\n members: 3\n # optional. Configures the version of MongoDB used as an application database.\n # The bundled MongoDB binary will be used if omitted and no download from the Internet will happen\n version: 4.0.18-ent\n persistent: true\n podSpec:\n cpu: '0.50'\n memory: 350M\n persistence:\n single:\n storage: 1G\n",
"text": "Hi there,I am trying to install Mongo Kubernetes Operator and in turn Ops Manager in my Rancher cluster using instructions here - Deploy an Ops Manager Resource — MongoDB Kubernetes Operator 1.18I am currently facing a issue after Step 5. Basically my kubernetes Rancher instance is complaining that“error while running “VolumeBinding” filter plugin for pod “ops-manager-db-0”: pod has unbound immediate PersistentVolumeClaims”For reference please find below the contents of the ops-manager.yamlAny inputs would be appreciated. I see the same issue in my local Rancher cluster too.",
"username": "Varun"
},
{
"code": "",
"text": "Typically this a problem with your container storage. As far as I know, Rancher doesn’t have a container storage product.",
"username": "Albert_Wong"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | No persistent volumes available for this claim | 2020-05-26T11:46:08.896Z | No persistent volumes available for this claim | 5,197 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "",
"text": "I am using MongoDB Enterprise Operator for deploying MongoDB into Kubernetes. Till now I have successfully deployed the database but I am getting problems in exposing it outside kubernetes.I am following this doc: Connect to a MongoDB Database Resource from Outside Kubernetes — MongoDB Kubernetes Operator 1.18\nA/c to the doc, we have to expose each pod as NodePort and connect through replicaSetHorizons. But this requires that the k8s worker nodes have an external IP which is not available in my case.Is there are other ways to expose the replicaset?Thanks",
"username": "Piyush_Kumar"
},
{
"code": "",
"text": "Hello,You can use a load-balancer service; one for each replicaset.",
"username": "Eric_Faure"
},
{
"code": "",
"text": "@Piyush_Kumar\nthere should be some way to expose each pod to the outside. May be continuing the idea of @Eric_Faure you can try creating a LB instance for each of the pod in the replica set?",
"username": "Anton_Lisovenko"
},
{
"code": "",
"text": "@Anton_Lisovenko, can you please elaborate a bit more on what you mean by a load balancer for replica set ? Assume this is going to be a NLB/TCP and I think this is a very common scenario for prod deployments.Any reason that this is not documented and provided in the current documentation ?",
"username": "Kish_V"
},
{
"code": "",
"text": "The official docs recommend the use of nodeports.",
"username": "Albert_Wong"
}
] | Exposing replica set outside Kubernetes | 2020-03-19T06:42:10.987Z | Exposing replica set outside Kubernetes | 4,212 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "",
"text": "I am installing MongoDB on Kubernetes using helm charts. The DB path is not getting set to the value provided in ConfigMap in Values.yaml. I have created persistent volume claim and mount a folder on same.The data gets lost after re-install mongodb.",
"username": "Rakesh_Gupta"
},
{
"code": "",
"text": "I would use the operator. That is the most tested path.",
"username": "Albert_Wong"
}
] | MongoDB on Kubernetes not taking custom configMap | 2020-04-20T20:50:09.819Z | MongoDB on Kubernetes not taking custom configMap | 2,495 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "",
"text": "I tried following the instruction on the MongoDB Operator 1.8 install on OperatorHub. The operator installed just fine. However when I try to go through the UI, the operator install doesn’t work. I’m able to fill out the form to provision the mongod replica set but nothing happens.",
"username": "Albert_Wong"
},
{
"code": "",
"text": "I found out that you have to:Now I have an atlas issue.\nhttps://github.com/mongodb/mongodb-enterprise-kubernetes/issues/149",
"username": "Albert_Wong"
},
{
"code": "",
"text": "Also the MongoDB Operator cannot connect to an Atlas-based organization in cloud.mongodb.com. Must use a Ops Manager-based organization.",
"username": "Albert_Wong"
}
] | MongoDB Operator install via OperatorHub within Red Hat OpenShift 4.5. Does not deploy any mongod! | 2020-10-23T02:42:20.653Z | MongoDB Operator install via OperatorHub within Red Hat OpenShift 4.5. Does not deploy any mongod! | 2,796 |