image_url (string, 113–131 chars, nullable ⌀) | tags (sequence) | discussion (list) | title (string, 8–254 chars) | created_at (string, 24 chars) | fancy_title (string, 8–396 chars) | views (int64, 73–422k)
---|---|---|---|---|---|---
null | [
"aggregation"
] | [
{
"code": "{\n \"uuid\": \"a43fb870-e012-4c99-adf1-02adb58350bc\",\n \"stats\": {\n \"wins\": 1,\n \"games\": 3\n }\n}\n{\n \"uuid\": \"a43fb870-e012-4c99-adf1-02adb58350bc\",\n \"stats\": {\n \"wins\": 8,\n \"kills\": 7\n }\n}\n{\n \"game1\": {\n \"stats\": {\n \"wins\": 1,\n \"games\": 3\n },\n \"game2\": {\n \"stats\": {\n \"wins\": 8,\n \"kills\": 7\n }\n}",
"text": "Hi,I have multiple collections with the following schema:\ncollection ‘game1’:collection ‘game2’:Is it possible to make a query searching based on the UUID field that has the following output?",
"username": "Henrik"
},
{
"code": "$lookup// Setup\ndb.game1.insert({\n \"uuid\": \"a43fb870-e012-4c99-adf1-02adb58350bc\",\n \"stats\": {\n \"wins\": 1,\n \"games\": 3\n }\n});\ndb.game1.createIndex({ uuid: 1 })\n\ndb.game2.insert({\n \"uuid\": \"a43fb870-e012-4c99-adf1-02adb58350bc\",\n \"stats\": {\n \"wins\": 8,\n \"kills\": 7\n }\n});\ndb.game2.createIndex({ uuid: 1 })\ndb.game1.aggregate([\n { $match: { uuid: \"a43fb870-e012-4c99-adf1-02adb58350bc\" } },\n { $lookup: {\n from: \"game2\",\n let: { uuid: \"$uuid\" },\n pipeline: [\n { $match: { $expr: { $eq: [ \"$uuid\", \"$$uuid\" ] } } },\n { $project: { _id: 0, uuid: 0 } },\n ],\n as: \"game2\"\n }},\n { $unwind: \"$game2\" },\n { $project: { _id: 0, game1: { stats: \"$stats\" }, game2: \"$game2\" } }\n]);\ngame1game2game*$lookup$lookup",
"text": "Hi @Henrik ,This can be accomplished using an Aggregation Pipeline with a $lookup stage.For example:Given the above setup, the following pipeline should produce the desired result:The indexes on both collections should allow the matching documents to be queried more efficiently on both the game1 and game2 collections.If more game* collections exist this approach can be further expanded to add additional $lookup stages, however the performance of a pipeline with additional $lookups may not be great and the Data Model may need refinement instead.",
"username": "alexbevi"
},
{
"code": "$lookup",
"text": "Thank you, that was exactly what I was looking for. However your last comment made me think about restructuring the data model.I have multiple games where each game has its own set of stats, as you saw on the original post. The two solutions I have thought of so far is one collection per game (what I currently have) or a single collection where each document has all the stats for all games. The reason I didn’t choose the second option was because it will end up being quite many read and write operations on that collection, but I also think I would want to avoid using $lookup on multiple collections because I might end up having 20 games.Do you have any advice on this?",
"username": "Henrik"
},
{
"code": "$lookup",
"text": "I have multiple games where each game has its own set of stats, as you saw on the original post. The two solutions I have thought of so far is one collection per game (what I currently have) or a single collection where each document has all the stats for all games. The reason I didn’t choose the second option was because it will end up being quite many read and write operations on that collection, but I also think I would want to avoid using $lookup on multiple collections because I might end up having 20 games.Before settling on the data model it would be helpful to evaluate how you intend to interact with your data. If you will be interacting with each game’s collection frequently and only infrequently querying across games the per-game model is likely fine.If you will frequently be interacting with all games’ data by uuid then a single collection may make more sense as you can later shard this data if collection “size” becomes a concern.",
"username": "alexbevi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Joining multiple collections by common field | 2021-06-22T12:24:33.344Z | Joining multiple collections by common field | 27,043 |
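A minimal sketch of how the chained-$lookup approach discussed in the thread above could be extended to one more collection; the game3 collection is hypothetical and not part of the original thread, and the reply's caveat about per-$lookup cost still applies.

```javascript
// Each extra game* collection adds one more $lookup stage on the same uuid.
db.game1.aggregate([
  { $match: { uuid: "a43fb870-e012-4c99-adf1-02adb58350bc" } },
  { $lookup: {
      from: "game2",
      let: { uuid: "$uuid" },
      pipeline: [
        { $match: { $expr: { $eq: [ "$uuid", "$$uuid" ] } } },
        { $project: { _id: 0, uuid: 0 } }
      ],
      as: "game2"
  }},
  { $lookup: {
      from: "game3",                      // hypothetical third collection
      let: { uuid: "$uuid" },
      pipeline: [
        { $match: { $expr: { $eq: [ "$uuid", "$$uuid" ] } } },
        { $project: { _id: 0, uuid: 0 } }
      ],
      as: "game3"
  }},
  // preserveNullAndEmptyArrays keeps the document even if one game has no stats
  { $unwind: { path: "$game2", preserveNullAndEmptyArrays: true } },
  { $unwind: { path: "$game3", preserveNullAndEmptyArrays: true } },
  { $project: { _id: 0, game1: { stats: "$stats" }, game2: "$game2", game3: "$game3" } }
]);
```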
[
"containers"
] | [
{
"code": "",
"text": "Hi All,We are running MongoDB on Docker swarm and it was working fine till yesterday. We have stopped the docker daemon for patching post that we are getting below error.\nimage1569×149 85 KB\nI ran mongodb repair and now am getting below error. Can someone please suggest on this?Oplog entry at { :Timestamp(1624030066,1)) is missing; actual entry found is { : Timestamp(16246-83,3}}\nFatal Assertion 40292 at src/mongo/db/repl/replication_Recover.cpp 220",
"username": "Siva_Reddy_Kotigari"
},
{
"code": "mongodmongodmongod",
"text": "Welcome to the MongoDB Community Forums @Siva_Reddy_Kotigari !The error message indicates a problem reading your WiredTiger data files; perhaps this mongod was incorrectly shutdown during your patching process.Since this mongod has an oplog and is presumably part of a replica set, you should Resync from a healthy member of the same replica set rather than running repair.The repair process salvages data that can be successfully read (ignoring data that cannot) and should only be used on a standalone mongod when you have no other options. Repairing a replica set member with data file issues may result in a data set which is not consistent with other members of the replica set, which in turn will lead to further operational challenges.Copying data from another replica set member or a recent backup is a much more reliable option to return your deployment to a working state.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X,Thanks for the replay.We are running 3 mongo services, now i have copied the files from mongodb2 to mongodb1 and started the services. I can see all services are up and running but in logs am seeing connection refused. Please suggested2021-06-22T12:51:22.910+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongodb_db03:27019\n2021-06-22T12:51:22.911+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to mongodb_db03:27019 - HostUnreachable: Connection refused\n2021-06-22T12:51:22.911+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Dropping all pooled connections to mongodb_db03:27019 due to failed operation on a connection\n2021-06-22T12:51:22.911+0000 I REPL_HB [replexec-5] Error in heartbeat (requestId: 2024) to mongodb_db03:27019, response status: HostUnreachable: Connection refused2021-06-22T12:51:22.910+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to mongodb_db03:27019\n2021-06-22T12:51:22.911+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to mongodb_db03:27019 - HostUnreachable: Connection refused\n2021-06-22T12:51:22.911+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Dropping all pooled connections to mongodb_db03:27019 due to failed operation on a connection\n2021-06-22T12:51:22.911+0000 I REPL_HB [replexec-5] Error in heartbeat (requestId: 2024) to mongodb_db03:27019, response status: HostUnreachable: Connection refusedI CONTROL [initandlisten] options: { config: “/etc/mongod.conf”, net: { bindIpAll: true, port: 27019, ssl: { CAFile: “/etc/certs/ca.pem”, PEMKeyFile: “/etc/certs/cert.pem”, allowConnectionsWithoutCertificates: true, allowInvalidHostnames: true, mode: “preferSSL” } }, replication: { oplogSizeMB: 400, replSetName: “graylog” } }\nW - [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.\nI - [initandlisten] Detected data files in /data/db created by the ‘wiredTiger’ storage engine, so setting the active storage engine to ‘wiredTiger’.\nW STORAGE [initandlisten] Recovering data from the last clean checkpoint.\nI STORAGE [initandlisten] wiredtiger_open config: create,cache_size=63873M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release=\"",
"username": "Siva_Reddy_Kotigari"
}
] | Fatal Assertion 40292 at src/mongo/db/repl/replication_recovery.cpp | 2021-06-22T08:04:28.558Z | Fatal Assertion 40292 at src/mongo/db/repl/replication_recovery.cpp | 3,527 |
|
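A small sketch, not taken from the thread above, of how member health and the heartbeat errors quoted there can be inspected from the shell on any reachable member; the field names follow the standard rs.status() output.

```javascript
// Print each member's state, health flag and last heartbeat message, which
// helps confirm whether mongodb_db03:27019 is actually listening and reachable.
rs.status().members.forEach(function (m) {
  print(m.name, m.stateStr, "health:", m.health,
        m.lastHeartbeatMessage ? "msg: " + m.lastHeartbeatMessage : "");
});
```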
null | [
"connecting",
"devops"
] | [
{
"code": "",
"text": "Hello All,In one of my projects, we are trying to use Azure and MongoDB. Now I have managed to establish connectivity from Azure to Mongo over Endpoint.\nBut then what I realized was the access still not work. Then I white-listed the Public IP of the VM and found the code started working i.e. DB access worked. But this means I couldn’t leverage the Endpoint I created.\nSeems I am missing something. Appriciate if you can help me.Note:- I am using Terraform for all this activity. So any piece of code which does this would be helpful.Please let me know if you need further information from my end.",
"username": "Amit_Vengsarkar"
},
{
"code": "mongo",
"text": "Hi @Amit_Vengsarkar,Welcome to the community But then what I realized was the access still not work. Then I white-listed the Public IP of the VM and found the code started working i.e. DB access worked. But this means I couldn’t leverage the Endpoint I created.Did you try connecting using the Private Endpoint connection or the standard connection?Please also provide the following information as well if possible:Kind Regards,\nJason",
"username": "Jason_Tran"
}
] | Azure MongoDB connection using Endpoint | 2021-06-22T09:23:10.221Z | Azure MongoDB connection using Endpoint | 2,792 |
null | [
"node-js",
"crud"
] | [
{
"code": "const removeOne = {$pull:{ [removeValue] :id}};\n\nconst updateOne = {$push:{[insertValue]:{ $each:[id] , $position: insertPosition}}};\n\nconst result = Order.updateMany(query , removeOne , updateOne).catch(err=>{\n\n console.log(err);\n\n});\n",
"text": "Actually i want to remove one value from an array and insert new value in another array.const query = { “order.userId”: userId };this is my code.",
"username": "kuldeep_saini"
},
{
"code": "{ \n _id: 1, \n numbers: [ 4, 12, 55 ], \n strings: [ \"red\", \"green\", \"blue\", \"white\" ] \n }\nnumbersstringsmongodb.collection.updateOne( \n { }, \n { \n $push: { numbers: 777 }, \n $pull: { strings: \"white\" } \n } \n)",
"text": "Hello @kuldeep_saini, you can do something like this:Suppose you have a document like this:To add to numbers array and remove from strings array the update would be (as run in mongo shell, and the NodeJS driver query syntax is likely to be slightly different):",
"username": "Prasad_Saya"
},
{
"code": "$pull$pushconst result = Order.updateMany(\n query, \n { \n ...removeOne,\n ...updateOne\n }\n).catch(err => {\n console.log(err);\n});\n",
"text": "Hi @kuldeep_saini,Need to merge both $pull and $push object in one and pass it in the second parameter,",
"username": "turivishal"
}
] | How can i update two fields in single query | 2021-06-22T06:17:15.229Z | How can i update two fields in single query | 3,364 |
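Putting the two answers above together, a runnable mongosh sketch of the merged update from the original question. The variable names come from the poster's snippet; the db.orders collection name is an assumption (the original uses a Mongoose Order model, whose call syntax differs only slightly).

```javascript
// $pull and $push go in the same update document, so one updateMany does both:
// remove `id` from one array and insert it at a position in another.
db.orders.updateMany(
  { "order.userId": userId },
  {
    $pull: { [removeValue]: id },
    $push: { [insertValue]: { $each: [id], $position: insertPosition } }
  }
);
```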
null | [
"connecting"
] | [
{
"code": "explain() mongo --host xxx-shard-01-00.abcd.mongodb.net --username xxx*** You have failed to connect to a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network. Error: network error while attempting to run command 'isMaster' on host ",
"text": "Hi folks.\nWe’re using Atlas for our db. We’ve got a global cluster and just sharded some collections and want to physically check the collection and documents exist in the correct region.We can connect to the cluster with the cli and nodejs driver using the +srv address, we can do things like explain() to see where the query is going, but how can I connect to a single member or replica set to physically look at what resides on it?\nI’ve tried mongo --host xxx-shard-01-00.abcd.mongodb.net --username xxx and I get the following*** You have failed to connect to a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network. Error: network error while attempting to run command 'isMaster' on host I’m guessing the instance has no idea about auth and it’s handled by the config servers?Forgot to add, using mongo 4.4.6 and same for shell for mongo cli",
"username": "Paul_Robinson"
},
{
"code": "mongo \"mongodb://USERNAME:[email protected]:27017,my-shard-01-01.abcd.mongodb.net:27017,my-shard-01-02.abcd.mongodb.net:27017/my-database?replicaSet=my-shard-1\" --authenticationDatabase admin --ssl\n",
"text": "As always the case, soon after posting I figure it out.",
"username": "Paul_Robinson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connect to a single member or replica of a sharded cluster | 2021-06-22T10:51:25.845Z | Connect to a single member or replica of a sharded cluster | 2,785 |
null | [
"server",
"monitoring",
"configuration"
] | [
{
"code": "**systemLog.traceAllExceptions**",
"text": "I am using **systemLog.traceAllExceptions** option for production replicaset. This makes the log daemon to write the complete exception lines including C++ codes. I observed very huge size to my log files due to this exception lines. is it really required to enable this option unless we dig into the log file in higher level. what is the real case we required to have this info?",
"username": "Telen_Stanley"
},
{
"code": "systemLog.traceAllExceptionsfalselogrotate",
"text": "Hi @Telen_Stanley,The default log level suffices for typical production troubleshooting.Per the documentation on systemLog.traceAllExceptions, this option is false by default:Default : falsePrint verbose information for debugging. Use for additional logging for support-related troubleshooting.I would only enable this option (or other increased logging) if you are working with a support or development team that has asked for extra debugging detail to investigate a problem, and generally only enable verbose logging long enough to capture a sample of more detailed diagnostic information. If you are encountering the same problem frequently, multiple copies of verbose logging may not provide additional information but will consume a lot of extra disk space.I also recommend setting up Log Rotation using logrotate or a similar utility for your O/S so log file sizes remain manageable.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Stennie_X appreciate your response… thanks Stennie.",
"username": "Telen_Stanley"
}
] | What is the importance to have "systemLog.traceAllExceptions" option | 2021-06-19T14:47:20.131Z | What is the importance to have “systemLog.traceAllExceptions” option | 1,909 |
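For reference, two standard administrative commands related to the advice above (they are generic, not something specific to this thread): rotating the log file on demand and resetting verbosity after a temporary debugging session.

```javascript
// Rotate the server log file now (the old file gets a timestamp suffix).
db.adminCommand({ logRotate: 1 });

// Return the global log verbosity to the default level 0.
db.setLogLevel(0);
```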
null | [
"queries",
"indexes"
] | [
{
"code": "",
"text": "Hi, I have a posts collection and fetch the data depending on three properties.\nI have two types of posts, which are separated with the boolean isCoin:true and isCoin:false.\nI also always use updatedAt:-1 when querying.\nSo, I have two scenarios: in the first scenario I don’t use coinid when fetching, I just fetch them depending on the isCoin property and updatedAt descending; in the second scenario I use coinid and updatedAt descending. So what kind of indexing should I use to optimise those requests?\nIn total I have three variables to be taken into account → isCoin, updatedAt:-1 and coinid(objectid)\nThanks.\n\nSummary:\nI have three properties: isCoin, updatedAt:-1 and coinid(objectid)\nI have two query scenarios\n1-) isCoin:true or false and updatedAt:-1\n2-) coinid(objectid) and updatedAt:-1",
"username": "iktisat_ogrencisi"
},
{
"code": "db.collection.createIndex( { isCoin: 1, updatedAt: -1 } )db.collection.createIndex( { updatedAt: -1, isCoin: 1 } )isCoin",
"text": "Hello @iktisat_ogrencisi, welcome to the MongoDB Community forum!I think, you need to use two indexes - one for each query. One of the factors to consider is the order of the fields you specify in creating the index. For example,db.collection.createIndex( { isCoin: 1, updatedAt: -1 } )\nis different from:\ndb.collection.createIndex( { updatedAt: -1, isCoin: 1 } )This is something you need to figure based upon how your data is. The Query Selectivity is taken into consideration to determine the order of the fields in the index. Note that the order of fields specified in the query filter do not matter in your case. In my opinion, isCoin may not be a very selective, to be used as the first field in the index.You can use the explain method on the query, and it generates a query plan for that query. The query plan tells if the query is using an index or not, or if there are multiple indexes which one of them is being used, or no index is being used at all. Also, there are options/modes to see the information like the amount of time the query takes using the index, etc.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "thank you for answering, Lastly, what if %60 percent of my posts have isCoin:true , should I still use isCoin as second field in index?",
"username": "iktisat_ogrencisi"
},
{
"code": "",
"text": "@iktisat_ogrencisi, I guess you can, as it comes as the second field. The concern would be, if the 60% is a large number of documents it is probably waste of index storage and loading into the memory. You can do some trials with sample data sets and see how it works for you.",
"username": "Prasad_Saya"
},
{
"code": "{isCoin : 1, updatedAt : -1}\n{coinId : 1, updatedAt -1}\n",
"text": "@iktisat_ogrencisi ,Those scenarios require 2 indexes as MongoDB will not use a second field as a leading filter if the first indexed field is absent.Maintaining those indexes shouldn’t be much of overhead and it will speed up the queries so you need to go for it.In summary indexes need to be created for each unique query shape in Equity Sort Range field order .Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you very much ! Ah , I almost forgot to ask, should I always define a new index foreach different sceneario ? for example I also sometimes need to fetch posts depending on ownerid of the post? so there should also be index like {ownerid:!,updatedAt:-1} ?",
"username": "iktisat_ogrencisi"
},
{
"code": "",
"text": "Thank you a lot for your explanation. I really got it now.\nOne last I wonder about it → Should I define another index for another scenario where I query posts for ownerid property ? , or in other words , are indices set depending on their query scenario ? like {ownerid:1,updatedAt:-1}",
"username": "iktisat_ogrencisi"
},
{
"code": "",
"text": "for example I also sometimes need to fetch posts depending on ownerid of the post? so there should also be index like {ownerid:!,updatedAt:-1} ?Yes, a different compound index on the two fields will help improve query performance.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "If you have a lot of properties to index (i would say more than 10 you may consider an attribute pattern schema:Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you a lot Pavel, I learnt so many things today.",
"username": "iktisat_ogrencisi"
},
{
"code": "",
"text": "Thank you a lot Prasad, I learnt so many things today.",
"username": "iktisat_ogrencisi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to index for three properties properly? | 2021-06-21T20:23:01.303Z | How to index for three properties properly? | 2,784 |
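A consolidated sketch of the three compound indexes discussed in the thread above, plus the explain() check suggested in the first reply; the posts collection name is assumed.

```javascript
// One compound index per query shape: equality field first, then the sort field.
db.posts.createIndex({ isCoin: 1, updatedAt: -1 });
db.posts.createIndex({ coinid: 1, updatedAt: -1 });
db.posts.createIndex({ ownerid: 1, updatedAt: -1 });

// Verify a query shape uses the intended index and avoids an in-memory sort:
// look for an IXSCAN stage and the absence of a SORT stage in the plan.
db.posts.find({ isCoin: true }).sort({ updatedAt: -1 })
  .explain("executionStats");
```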
null | [
"java",
"field-encryption",
"spring-data-odm"
] | [
{
"code": "",
"text": "Hi,I wish to use Client side field-level encryption for reactive java (I am using Spring Webflux which provides implementation for Mongo Reactive Repository). The only examples for code snippets to do this at client-end are for Java (Sync). Could someone pl help me with similar examples for Reactive java as well?",
"username": "Shuchi_Gupta"
},
{
"code": "",
"text": "Hello @Shuchi_Gupta, welcome to the MongoDB Community forum!Here are some references, and hope you find them useful:",
"username": "Prasad_Saya"
}
] | Client side Field level encryption with Spring Reactive framework | 2021-06-21T12:45:59.329Z | Client side Field level encryption with Spring Reactive framework | 4,585 |
null | [
"replication",
"backup"
] | [
{
"code": "",
"text": "We are planning to restore one of the PROD replica set mongo database to stage replica set mongo servers. As we are running the PROD on community version, would like to know best options to complete this activity.So can anyone please suggest the options to fulfill this client request?",
"username": "Santosh_Dhanukonda"
},
{
"code": "rs.initiate()",
"text": "Welcome to the MongoDB Community Forums @Santosh_Dhanukonda!You can create a copy of your replica set using any of the Supported Backup Methods for MongoDB server.The MongoDB Manual also includes a tutorial with steps to Restore a Replica Set from MongoDB Backups. During step 5 in this tutorial, rs.initiate(), I recommend using a different replica set name so your staging deployment/configuration is less easily confused with your production environment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Best way to take dump from community version PROD mongo replica set and restore to non-prod | 2021-06-11T03:11:16.370Z | Best way to take dump from community version PROD mongo replica set and restore to non-prod | 2,227 |
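A hedged sketch of the rs.initiate() step with the different replica set name recommended above; the set name stagingRS and the hostname are placeholders, and the node must have been started with the matching --replSetName.

```javascript
// Run once on the restored staging node after mongod is started with
// --replSetName stagingRS (distinct from the production set's name).
rs.initiate({
  _id: "stagingRS",
  members: [
    { _id: 0, host: "stage-mongo-01:27017" }
  ]
});
```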
null | [
"python",
"connecting"
] | [
{
"code": "import pymongo\nfrom pymongo import MongoClient\nimport dns\n\ncluster = MongoClient(f\"mongodb+srv://myusername:[email protected]/datebase?retryWrites=true&w=majority\"\n Traceback (most recent call last):\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\pymongo\\srv_resolver.py\", line 72, in _resolve_uri\n results = resolver.query('_mongodb._tcp.' + self.__fqdn, 'SRV',\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\resolver.py\", line 1100, in query\n return get_default_resolver().query(qname, rdtype, rdclass, tcp, source,\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\resolver.py\", line 1073, in get_default_resolver\n reset_default_resolver()\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\resolver.py\", line 1085, in reset_default_resolver\n default_resolver = Resolver()\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\resolver.py\", line 543, in __init__\n self.read_registry()\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\resolver.py\", line 720, in read_registry\n self._config_win32_fromkey(key, False)\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\resolver.py\", line 674, in _config_win32_fromkey\n self._config_win32_domain(dom)\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\resolver.py\", line 639, in _config_win32_domain\n self.domain = dns.name.from_text(str(domain))\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\name.py\", line 889, in from_text\n return from_unicode(text, origin, idna_codec)\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\dns\\name.py\", line 852, in from_unicode\n raise EmptyLabel\ndns.name.EmptyLabel: A DNS label is empty.\nTraceback (most recent call last):\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\main.py\", line 15, in <module>\n cluster = MongoClient(f\"mongodb+srv://RamboDash:[email protected]/Bot?retryWrites=true&w=majority\")\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\pymongo\\mongo_client.py\", line 639, in __init__\n res = uri_parser.parse_uri(\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\pymongo\\uri_parser.py\", line 500, in parse_uri\n nodes = dns_resolver.get_hosts()\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\pymongo\\srv_resolver.py\", line 102, in get_hosts\n _, nodes = self._get_srv_response_and_hosts(True)\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\pymongo\\srv_resolver.py\", line 83, in _get_srv_response_and_hosts\n results = self._resolve_uri(encapsulate_errors)\n File \"C:\\Users\\invar\\PycharmProjects\\DiscordBot\\venv\\lib\\site-packages\\pymongo\\srv_resolver.py\", line 79, in _resolve_uri\n raise ConfigurationError(str(exc))\npymongo.errors.ConfigurationError: A DNS label is empty.\n\nProcess finished with exit code 1\n",
"text": "I’m getting this error when I try to connect to Atlas MongoHere is my code:And here is full TraceBack:During handling of the above exception, another exception occurred:Regards,",
"username": "Alex_N_A3"
},
{
"code": "",
"text": "The name cucumber.iq05j.mongodb.net does not appear to be a valid active Atlas cluster name.May be you have a typo. Could you share the cluster status window snapshot?",
"username": "steevej"
},
{
"code": "",
"text": "cucumber was just a place holder.\nThe real cluster name is Helpy\n\nClusters _ Atlas_ MongoDB Atlas - Google Chrome 21.06.2021 19_21_001920×1030 118 KB\n",
"username": "Alex_N_A3"
},
{
"code": ";; ANSWER SECTION:\nHelpy.iq05j.mongodb.net. 59\tIN\tTXT\t\"authSource=admin&replicaSet=atlas-leemhu-shard-0\"\n\n;; ANSWER SECTION:\n_mongodb._tcp.Helpy.iq05j.mongodb.net. 59 IN SRV 0 0 27017 helpy-shard-00-00.iq05j.mongodb.net.\n_mongodb._tcp.Helpy.iq05j.mongodb.net. 59 IN SRV 0 0 27017 helpy-shard-00-01.iq05j.mongodb.net.\n_mongodb._tcp.Helpy.iq05j.mongodb.net. 59 IN SRV 0 0 27017 helpy-shard-00-02.iq05j.mongodb.net.\n",
"text": "It is really hard to help debug and a waste of time when you are given false information, placeholder or not.The DNS information for Helpy is correct:Since now that we now that the DNS information is correct, we can suggest other things to check.You either missing some package, or you internet or VPN provider does not support SRV queries. Try connecting with the old style non-SRV URI.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks!!\nI’ve just put a non-SRV link and now it is perfectly working!\nThanks a lot!",
"username": "Alex_N_A3"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | pymongo.errors.ConfigurationError: A DNS label is empty | 2021-06-21T16:06:16.727Z | pymongo.errors.ConfigurationError: A DNS label is empty | 8,705 |
[] | [
{
"code": "",
"text": "We have a physical event project, and there are registration, purchase tickets, check-in, and check-out, etc. processes.There is a problem with limited internet access connectivity, and we don’t rely on the internet.Below is the rough architecture for understand the flow:\narchitecture (1)709×751 52.2 KB\nI have highlighted the red line box, the sync part in the above image, The question is how to sync local server MongoDB databases with atlas server databases?I am not getting the exact way how to do this.Thank you.",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishal,The first thing that comes to mind is change streams. You could build a small app leveraging change stream feature to sync data both ways. Of course, the source and destination would flip depending on which side of the network - limited or unlimited connectivity - you are on. However, there are few caveats to this approach.First, how do you handle conflicting updates and deletes? Does the chronological order take precedence when deciding the source of truth?Second, how big should the oplog window be? Definitely greater than the length of lost-connectivity window. But then, how do you handle inserts/updates/deletes bursts which has the potential to overflow oplog before connectivity is achieved?Anyway, those are some of the things to think about if you go down change-streams path. Hope this helps!Thanks,\nMahi",
"username": "mahisatya"
},
{
"code": "",
"text": "The first thing that comes to mind is change streams. You could build a small app leveraging change stream feature to sync data both ways.Thank you for your suggestion will look into it.First, how do you handle conflicting updates and deletes? Does the chronological order take precedence when deciding the source of truth?There will be always inserts or updates operations, so it should be executed on the basis of oplog date order,But yes there will be conflict for some reason the internet is off and some transactions have been made online and some have been made on the local server.So let me list down the possible queries/operations, I will try to handle there should not be any conflict.",
"username": "turivishal"
},
{
"code": "",
"text": "We have minimised the syncing dependencies to 2/3 collections, so change stream will work perfectly in this situation.Thank you.",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to sync local mongodb databases with atlas server databases? | 2021-06-17T14:45:02.273Z | How to sync local mongodb databases with atlas server databases? | 5,725 |
|
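A minimal Node.js sketch of the change-stream approach agreed on above. It only illustrates the local-to-Atlas direction; the connection strings, database and collection names are placeholders, and real code would also persist the resume token and handle deletes and conflicts as discussed in the thread.

```javascript
// Watches one local collection and replays inserts/updates into Atlas.
const { MongoClient } = require("mongodb");

async function run() {
  const local = await MongoClient.connect("mongodb://localhost:27017");
  const atlas = await MongoClient.connect(process.env.ATLAS_URI);

  const source = local.db("event").collection("checkins");
  const target = atlas.db("event").collection("checkins");

  // fullDocument: "updateLookup" returns the whole document for updates,
  // which lets us upsert it on the Atlas side by _id.
  const stream = source.watch([], { fullDocument: "updateLookup" });
  stream.on("change", async (change) => {
    if (change.operationType === "insert" || change.operationType === "update") {
      await target.replaceOne(
        { _id: change.fullDocument._id },
        change.fullDocument,
        { upsert: true }
      );
    }
  });
}

run().catch(console.error);
```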
null | [
"capacity-planning",
"licensing"
] | [
{
"code": "",
"text": "Hi!,I intend to use MongoDB for my Freelance project, I would use it as a database for it, basically it is web development for small businesses.My intention is to install it on a VPS (lightsail) server to form a “MERS” system.1º- Should I pay if I install MongoDB Community server on this server?2º- Do you require a powerful server ?, I thought to use “MERS” in lightsail with the smallest version of 3.5$ , which provides 512RAM, 1 core and 20SSD.I’ve been reading about the SSPL but I don’t really get it.Thanks in advance",
"username": "Isaac_HP"
},
{
"code": "",
"text": " Welcome to the MongoDB Community Forums @Isaac_HP !1º- Should I pay if I install MongoDB Community server on this server?I believe this is covered in the Server Side Public License FAQ under “What are the implications of this new license on applications built using MongoDB and made available as a service (SaaS)?”:The copyleft condition of Section 13 of the SSPL applies only when you are offering the functionality of MongoDB, or modified versions of MongoDB, to third parties as a service. There is no copyleft condition for other SaaS applications that use MongoDB as a database.If your end users are interacting with your web application and you are not offering MongoDB functionality as a service, the copyleft condition of SSPL would not apply.2º- Do you require a powerful server ?, I thought to use “MERS” in lightsail with the smallest version of 3.5$ , which provides 512RAM, 1 core and 20SSD.Server resources will depend on your application use case and workload, but 512MB of total RAM and 1 virtual CPU isn’t going to support a very large data set, especially if you are planning on running your MERS application in the same VPS.The MongoDB Production Notes include more information on Allocating Sufficient RAM and CPU, but the best way to predict performance is using some representative test data and workload. You can always start with a small VPS and scale up as needed, but if you are planning on accessing a working set with GBs of uncompressed data with reasonable performance (or a large number of client connections) you will need to test & resource appropriately for your performance expectations.I think a more realistic plan would be to run your application server within a small VPS or Lightsail instance and offload your database requirements to a hosted service like MongoDB Atlas which can be provisioned in the same AWS region. You could start with the Atlas free tier (512MB of data) or one of the shared starter clusters which have more storage and additional features like daily backup snapshots. Separating your application server from your database cluster will help you monitor and scale resource usage more effectively.If your freelance project involves building separate web applications for your small business clients (i.e. you aren’t building a mutlti-tenant platform), it is also typical to factor hosting, backup, and support into your pricing model so you can provision infrastructure appropriate to your client’s budgets.Hope that helps!Regards,\nStennie",
"username": "Stennie_X"
}
] | MongoDB Community version inquiries | 2021-06-21T10:23:56.109Z | MongoDB Community version inquiries | 3,402 |
[
"crud"
] | [
{
"code": "",
"text": "\nissue470×527 9.8 KB\nin this i want to remove value from column1 taskIds.",
"username": "kuldeep_saini"
},
{
"code": "db.stores.update(\n { <---- Your filter ----> },\n { $pull: { \"order.columns.column1.taskIds\": \"<---- The value that you want to be removed ----> \"},\n)\ndb.stores.update(\n { <---- Your filter ----> },\n { $pop: { \"order.columns.column1.taskIds\": -1 },\n)\ndb.stores.update(\n { <---- Your filter ----> },\n { $pop: { \"order.columns.column1.taskIds\": 1 },\n)\n",
"text": "Hi @kuldeep_saini, welcome to the community.\nPlease take a look at the $pull operator.\n$pull will remove the specified value from the array.In case you are not sure which specific value you want to remove from the array, but just want to remove the first or last value in the array you can use $pop operator.To remove the first element in the array:To remove the last element in the array:In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Thanks @SourabhBagrecha it worked .\ni will reach to you again if i stuck in somewhere. ",
"username": "kuldeep_saini"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can i remove a value from an array which is nested | 2021-06-21T09:11:32.714Z | How can i remove a value from an array which is nested | 7,316 |
|
null | [
"server"
] | [
{
"code": "mongod",
"text": "I’m trying to create an application which uses MongoDB, so I need to pack MongoDB with the setup. The size of the MongoDB server for Windows is about 254MB, which is huge for a setup and for downloading while the setup is running. When checking, I found that the .pdb files are the large chunks causing the issue. When I checked on the internet I found that these are optional files that are used during development and not during production, but they also contain dlls and other stuff required to debug. I have run mongod after deleting the .pdb files and it runs with no issues. So is it safe to delete them and use it that way? Will it cause any issues in the future?",
"username": "magesh_70268"
},
{
"code": "mongod",
"text": "Welcome to the community @magesh_70268!You are right about pdb files. They are valuable in enabling debugging of compiled applications, and don’t need to be installed on end-user’s machine to make the app work.However, I’m curious to know more about your use case here. Why not use a Atlas M0 free-tier instance instead of embedding mongod in the setup?If you really have to embed the binaries, I suggest taking a look at Mongo2Go for dotnet apps.Mahi",
"username": "mahisatya"
},
{
"code": "",
"text": "Thanks for the help!The application I’m creating requires offline support, so I’m using MongoDB Community server to store and sync the data later when user is online. I’m developing it for cross platforms, so I can’t depend on Mongo2GoMagesh",
"username": "magesh_70268"
},
{
"code": "",
"text": "The application I’m creating requires offline support, so I’m using MongoDB Community server to store and sync the data later when user is online.Hi @magesh_70268,You may also want to look into Realm Sync as a solution for offline data access with automatic data sync to a MongoDB Atlas cluster when an application has connectivity.The Realm Database SDKs provide embeddable cross-platform client libraries with local database support, so you do not have to orchestrate running database server processes with your application. Realm Sync can be added to an offline-first application with a few lines of code. For example, using the Realm .NET SDK: Quick Start with Sync.The Realm Sync service is part of MongoDB Realm, which includes access to some additional features like serverless functions and triggers. For more information see the MongoDB Realm Introduction for Backend Developers.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is it safe to delete the .pdb files while moving to production | 2021-06-04T06:37:09.172Z | Is it safe to delete the .pdb files while moving to production | 4,954 |
null | [
"database-tools"
] | [
{
"code": "date \\+\\%Y\\%m\\%d_\\%H\\%M\\%Sdate \\+\\%Y\\%m\\%d_\\%H\\%M\\%S",
"text": "Hi, we are having to migrate a large database that is in use. So we have a new database stack and I take an initial DB dump of the data (~500GB) using mongodumpmongodump --host=localhost --port=27017 --oplog --gzip --username=admin --authenticationDatabase=admin --out=/data/db/backup/mongodb-dump-date \\+\\%Y\\%m\\%d_\\%H\\%M\\%SAt the same time, I output the timestamp when that dump finishes so I can use that in the incremental backup later on.use local\ndb.oplog.rs.find().sort({$natural:-1}).limit(1).next().tsI then scp that dump over to the new host and I can restore it without isseu (takes 2.5hrs)mongorestore --oplogReplay --port=27017 --username=admin --authenticationDatabase=admin --gzip /data/backup/mongodb-dump-20200917_201959Then, since the old DB is still in use, the plan is to take an incremental backup from the point of the last full dump (using that timestamp above). So if I wait a few hours and run that it looks like thismongodump --host=localhost --port=27017 -d local -c oplog.rs --username=admin --authenticationDatabase=admin --gzip --out=/data/db/backup/mongodb-dump-increment-date \\+\\%Y\\%m\\%d_\\%H\\%M\\%S --query ‘{ “ts” : { $gt : Timestamp(1600376683, 359) } }’ 2>&1 | tee -a /data/db/backup/incremental.logSo when I then scp this incremental backup over and load it, I get the mentioned duplicate key errors.Restoring the incremental dump:mongorestore --oplogReplay --port=27017 --username=admin --authenticationDatabase=admin --gzip /data/db/backup/mongodb-dump-increment-20200922_163727And then I get the errors:Failed: restore error: error applying oplog: applyOps: E11000 duplicate key error collection: countly.app_crashusers5d72120aab9bc2004a137120 index: group_1_uid_1 dup key: { : null, : null }I am using --oplogReplay. Is there another option to run on mongorestore to avoid these errors?thanks",
"username": "Gregor_Philp"
},
{
"code": "",
"text": "Is there any update on it? Facing same issue",
"username": "Akshaya_Srinivasan"
},
{
"code": "",
"text": "Hi,I am also facing a similar issue. Can anyone share some knowledge on this topic. Will be very useful.",
"username": "Mithun_Pillai1"
}
] | Mongorestore - error applying oplog | 2020-09-22T22:45:49.375Z | Mongorestore - error applying oplog | 3,737 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "{\n \"ok\": 0,\n \"errmsg\": \"Unrecognized pipeline stage name: '$search'\",\n \"code\": 40324,\n \"codeName\": \"Location40324\",\n \"name\": \"MongoError\"\n}\n {\n $search: {\n text: {\n query: searchTerm,\n path: \"note\",\n },\n },\n },\n",
"text": "I created a text index via mongodb compass.\nWhile trying to test the search query locally(mongod on Ubuntu 20.04), I am getting this error.Here is the aggregate pipeline snippet,",
"username": "Rahul_Dahal"
},
{
"code": "",
"text": "Hi Rahul, welcome to the community. To make use of Atlas Search you need to use the Atlas API or the Atlas UI (Click “Search in the cluster nav - click “create an [Atlas] Search index,” click “Query” or run the command you posted here.Search indexes are different text indexes.",
"username": "Marcus"
},
{
"code": "$search",
"text": "Ahh…\nSo, the $search is only available in the Atlas and not in the local instance ?",
"username": "Rahul_Dahal"
},
{
"code": "$search$textdb.articles.drop();\ndb.articles.createIndex( { subject: \"text\" } )\ndb.articles.insert(\n [\n { _id: 1, subject: \"coffee\", author: \"xyz\", views: 50 },\n { _id: 2, subject: \"Coffee Shopping\", author: \"efg\", views: 5 },\n { _id: 3, subject: \"Baking a cake\", author: \"abc\", views: 90 },\n { _id: 4, subject: \"baking\", author: \"xyz\", views: 100 },\n { _id: 5, subject: \"Café Con Leche\", author: \"abc\", views: 200 },\n { _id: 6, subject: \"Сырники\", author: \"jkl\", views: 80 },\n { _id: 7, subject: \"coffee and cream\", author: \"efg\", views: 10 },\n { _id: 8, subject: \"Cafe con Leche\", author: \"xyz\", views: 10 }\n ]\n)\ndb.articles.find( { $text: { $search: \"bake coffee cake\" } } )\n$search$text$search",
"text": "Correct, the $search Aggregation Pipeline stage is only available in MongoDB Atlas as an Atlas Search index is required, which cannot be created in a local deployment.If you are looking to create a Text Index these can be queried using the $text operator as follows:Note the $search field of the $text operator is not the same as the $search aggregation pipeline stage.See the Text Search in the Aggregation Pipeline tutorial for more information as well.",
"username": "alexbevi"
}
] | Unrecognized pipeline stage name: '$search' | 2021-06-19T14:16:58.777Z | Unrecognized pipeline stage name: ‘$search’ | 15,007 |
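For completeness, the kind of Atlas Search index definition (created through the Atlas UI or API, as noted in the replies above) that would let the $search query from the first post run. This JSON is just the default dynamic mapping, not something taken from the thread.

```json
{
  "mappings": {
    "dynamic": true
  }
}
```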
null | [] | [
{
"code": "",
"text": "uncaught exception: Error: Updating user failed: User and role names must be either strings or objects :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.updateUser@src/mongo/shell/db.js:1436:11\n@(shell):1:1I have tried to update the password for a user using updateUser method. but it is not working.can any one suggest?Thank you",
"username": "Umang_Dosi1"
},
{
"code": "",
"text": "Hi @Umang_Dosi1, welcome to the community.\nCan you please post the complete command that resulted in this error?In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer",
"username": "SourabhBagrecha"
}
] | Modifying User Passwords | 2021-06-21T09:40:55.791Z | Modifying User Passwords | 1,680 |
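The error quoted in the first post usually means the first argument to updateUser was not a plain string. A hedged sketch of the two standard shell forms; the user name, password and database are placeholders.

```javascript
// First switch to the database the user was created in, e.g.:
// use admin

// Option 1: the dedicated helper.
db.changeUserPassword("appUser", "newSecurePassword");

// Option 2: updateUser with the user name as a string and pwd in the update document.
db.updateUser("appUser", { pwd: "newSecurePassword" });
```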
null | [
"connecting"
] | [
{
"code": "",
"text": "I use the free version, I can connect to monggodb, then after upgrading to m10, I can’t connect to data",
"username": "Ha_Thanh"
},
{
"code": "",
"text": "Hi @Ha_Thanh,Welcome to the community!then after upgrading to m10, I can’t connect to dataAre you able to provide any errors you’re receiving upon attempting to connect to the M10 cluster? In addition to this, can you also provide further information with regards to how you are connecting? (via Driver, MongoDB shell, MongoDB Compass, etc).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "i connected but it only works for http address and https only loads internally, not connecting to the server",
"username": "Ha_Thanh"
},
{
"code": "",
"text": "Do you have any relevant screenshot or error logs that could assist with troubleshooting this issue?",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I installed it but can’t connect to the server https://streamable.com/cbt1py",
"username": "Ha_Thanh"
}
] | After upgrade i cant connect to monggodb | 2021-06-19T12:02:55.703Z | After upgrade i cant connect to monggodb | 2,650 |
null | [
"aggregation"
] | [
{
"code": " {\n $lookup:\n {\n from: <collection to join>,\n let: { <var_1>: <expression>, …, <var_n>: <expression> },\n pipeline: [ <pipeline to execute on the collection to join> ],\n as: <output array field>\n }\n}\n",
"text": "Hi!\nIn aggregationis not working in 4.0 version\nI am using cosmoDB 4.0 version in Azure\nsomeone please help me to solve this issue",
"username": "Ganesh_ND"
},
{
"code": "",
"text": "A very valid reason to consider atlas on azure It will never happen there…",
"username": "Pavel_Duchovny"
},
{
"code": "let",
"text": "is not working in 4.0 version\nI am using cosmoDB 4.0 version in Azure\nsomeone please help me to solve this issueHi @Ganesh_ND,As other commenters have noted, CosmosDB is an incomplete emulation of MongoDB (even for the claimed MongoDB server versions).Some important feature support is lacking, for example index collation:Based on the error message you received, it appears that Cosmos’ API does not recognise the Collation support that was introduced in MongoDB 3.4 (November 2016). It looks like collation support has been on Cosmos’ long term road map since about 2 years ago.The let syntax for Join Conditions & Uncorrelated Sub-queries was added in MongoDB 3.6 (November 2017).As per @Pavel_Duchovny, MongoDB Atlas is available on Azure if you want a managed database service with genuine MongoDB feature support.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $let is not working in Cosmos DB 4.0 | 2021-06-19T17:00:10.308Z | $let is not working in Cosmos DB 4.0 | 4,303 |
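Where the let/pipeline form of $lookup is unavailable, the older single-field equality form (supported since MongoDB 3.2) can sometimes serve as a fallback. The collection and field names below are placeholders, and the thread does not confirm whether Cosmos DB accepts it.

```javascript
// Classic equality join; no "let" variables or sub-pipeline involved.
db.orders.aggregate([
  { $lookup: {
      from: "customers",
      localField: "customerId",
      foreignField: "_id",
      as: "customer"
  }}
]);
```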
null | [
"replication"
] | [
{
"code": "",
"text": "When configuring Replica Set, I know that that uses oplog to synchronize data.However, I wonder if secondary pools the primary’s oplog and stores it in a particular space (ex: cache area or journal) before applying oplog.If there is a separate space, are there options for controlling that space?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon ,AFAIK, Oplog is first stored in the secondary oplog and then applied by 16 replication threads.The defaults are mostly good for almost all workloads so unless there is a support indication to tune this I would stick to defaults.What is highly important to decide is the read and write concerns on the driver side as well as readPreference those will have a main impact on replication behaviour.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you so much.In the other DB (ex: Redis, etc), there was another space that went through before storing it in a log like oplog, so I was asking how MongoDB was doing.Once again, thank you for your kind reply.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Oplog storage method | 2021-06-21T02:19:29.611Z | Oplog storage method | 2,272 |
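A small illustration of the driver-side settings the reply above calls out as the main lever. The majority/secondaryPreferred combination shown is just a common example, not a recommendation from the thread, and the collection is a placeholder.

```javascript
// Per-operation write concern: wait until a majority of members have the write.
db.orders.insertOne(
  { item: "abc", qty: 1 },
  { writeConcern: { w: "majority" } }
);

// Per-operation read preference: allow this read to be served by a secondary.
db.orders.find({ item: "abc" }).readPref("secondaryPreferred");
```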
null | [] | [
{
"code": "",
"text": "Hi …\nI am new to mongoDB, some one please suggest to me which version is better for real time projects and one more thing…\ntill now mongoDB 4.4 is in beta version?is there any issues with 4.4 version?",
"username": "Ganesh_ND"
},
{
"code": "",
"text": "Hi @Ganesh_ND,MongoDB 4.4 is the latest stable/production release series and has been Generally Available (GA) since July, 2020. As with any production release series, I recommend keeping your deployments current with the latest minor/patch release. Minor releases (4.4.x) include bug fixes and improvements and do not introduce any backward-breaking data compatibility issues. The release notes include highlights and a link to the full Changelog for each release in the MongoDB Jira issue tracker.The upcoming production release will be MongoDB 5.0, which is currently in a beta testing / Release Candidate (RC) cycle: Release Notes for MongoDB 5.0 (Release Candidate).Per the RC release notes:While the 5.0 release candidates are available, these versions of MongoDB are for testing purposes only and not for production use .Once a release series is promoted to GA/stable, you’ll find it listed on the first link I shared above.Regards,\nStennie",
"username": "Stennie_X"
}
] | Mongodb version 4.4 | 2021-06-19T21:55:00.943Z | Mongodb version 4.4 | 1,344 |
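Two quick, standard shell checks related to the version discussion above: the server binary version and the featureCompatibilityVersion that gates newer features after an upgrade.

```javascript
// Report the server binary version (e.g. 4.4.x).
db.version();

// Report the featureCompatibilityVersion currently in effect.
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });
```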
null | [
"replication"
] | [
{
"code": "",
"text": "hi, I am newbie for mongodb and want to ask for the replica set is just only can setup in same server? or is can be replica to another server and running it?how I can setup the cluster to another windows server 2019?thanks.",
"username": "Kelvin_Shee"
},
{
"code": "",
"text": "Hi @Kelvin_Shee,I am newbie for mongodb and want to ask for the replica set is just only can setup in same server? or is can be replica to another server and running it?Members of a replica set are normally distributed across different physical hosts to achieve data redundancy as well as automated failover.You would typically only colocate replica set members on the same host server for development or testing purposes, as otherwise you are compromising the failover benefit and will potentially have multiple replica set members competing for the same physical resources.how I can setup the cluster to another windows server 2019?Follow the tutorial to Deploy a Replica Set in the MongoDB server documentation. If you run into any issues during set up, this forum is a good channel for assistance. I suggest including information on the specific version of MongoDB server you are installing, the step of the tutorial or configuration you are currently stuck on, and any relevant error/log messages.Since you are creating a distributed deployment, please also review the MongoDB Security Checklist for available security measures.Regards,\nStennie",
"username": "Stennie_X"
}
] | Replica Set and cluster setup on windows server 2019 | 2021-06-21T06:57:14.041Z | Replica Set and cluster setup on windows server 2019 | 2,823 |
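A sketch of the rs.initiate() call from the linked tutorial for three separate Windows hosts; the hostnames and set name are placeholders.

```javascript
// Run once, on one member, after starting mongod on each host with the same
// --replSetName and a bindIp setting that lets the hosts reach each other.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "winsrv-01.example.local:27017" },
    { _id: 1, host: "winsrv-02.example.local:27017" },
    { _id: 2, host: "winsrv-03.example.local:27017" }
  ]
});
```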
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n _id: \"someID\",\n title: \"someTitle\",\n content: \"someContent\",\n posted_by: \"userID\"\n}\nposted_by\"userA\"posted_by !== \"userA\"_idlimit",
"text": "Hi, for some context, I am trying to build a feed for my application, where data posted by other users is retrieved in random order (not chronological, so you can load up a post from a long time ago, that’s ok) and you can scroll down to load more (pagination).I have some things I’d like to clarify with this implementation.Suppose I have collection of Posts as such:How do I retrieve 10 random posts where posted_by is NOT equal to the current userID, say \"userA\"?And how should I be implementing pagination where 10 different random posts are queried, again where posted_by !== \"userA\"?I understand that I can use _id where it’s a Mongo ObjectID and limit for simple chronological pagination, but how do I incorporate randomness into this as well?Appreciate any help, thank you!",
"username": "Ajay_Pillay"
},
{
"code": "db.Posts.aggregate(\n [ { $sample: { size: 10 } } ]\n)\n",
"text": "Hi @Ajay_Pillay ,One of the built-in ways to do this is by using a $sample operator with a size configured to amount of retrieved documents.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "$sample",
"text": "Hi @Pavel_Duchovny, yes I did come across this but: $sample may output the same document more than once in its result set.Is there any way to prevent duplicates?",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "What do you mean duplicates?To group?",
"username": "Pavel_Duchovny"
},
{
"code": "$random$sample$random",
"text": "Oh no what I mean is on the $random documentation page it says that the selector may return an item more than once (aka items with the same _id)But I can’t have that for what I need to do, I can’t have duplicate items being returned by the query.Is that not what the documentation implies?EDIT: I meant to refer to $sample not $random.",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "$sample is not $random, it’s a different stage; it returns documents once.",
"username": "Pavel_Duchovny"
},
{
"code": "$sample$sample",
"text": "Sorry I made a typo, I meant to refer to $sample.\nimage788×145 9.11 KB\nWhat does this bit mean when it says $sample may return a document more than once? Doesn’t that mean there could be duplicates? Or am I not understanding that correctly?",
"username": "Ajay_Pillay"
},
{
"code": "[{$sample: {\n size: 10\n}}, {$group: {\n _id: \"$_id\",\n result: { $push : \"$$ROOT\"}\n}}, {$replaceRoot: {\n newRoot: {$first : \"$result\" }\n}}]\n",
"text": "Hi @Ajay_Pillay ,Ok I never payed attention to this section. Never noticed a duplicate document and I beilive its a super rare condition if you pick just “10” documents.However, if you really need to ensure uniquness you can do a group by _id and project a new root of the first document only which will verify no document is returned twice.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for the clarification!I need to account for uniqueness because I will be querying for 10 documents at first, and when the user scrolls down the page, I need to query 10 more unique documents, and as this grows the chances of duplicates increases in the subsequent queries.How should I be approaching this? I understand how to make a single query for 10 unique random documents but how should this be used together with pagination?",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "@Ajay_Pillay ,In such a case I suggest that you add a random number to your documents index them and pick 10 random numbers to be queried in a $in query , than pick 10 more making sure they are not already picked before.Otherwise just pick a way to sort the documents randomly and paginate themThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Another option is to run the aggregation on 3000 samples and batch them into 10 documents a batch …If a user presses 300 times on the next run a new query … No way he will notice a returning result ",
"username": "Pavel_Duchovny"
},
{
"code": "$sample",
"text": "Hi @Pavel_Duchovny this is an interesting idea and honestly I don’t think I even need it at 3000, I think 100 is enough. A sample of 100, and then batched into 10 documents a batch.So once I get this random sample, I paginate 10 documents at a time. Once I reach the end of this 100, I will run another sample of 100, and batch as per before.How exactly am I supposed to be doing this batching, and saving the $sample aggregation between queries? For context, I’m running a Meteor/Apollo/React stack, so my GraphQL queries will include an offset and limit argument in my resolvers, and I will use that to handle the batching logic.",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "Aggregation has a batchSize as part of the aggregation command.I don’t know the specific of your driver architecture but you can do just query and fetchNo need for skip and limit.",
"username": "Pavel_Duchovny"
},
{
"code": "db.getCollection('myDB').aggregate([\n { $sample: { size: 100 } }],\n { cursor: { batchSize: 10 } }\n)\n",
"text": "So I tried the following query:But I still see all the 100 samples being returned.Also I don’t quite understand why I don’t need the idea of skip and limit here. Correct me if I am wrong, but from what I understand I run an aggregation for a sample size of 100 documents once. Then, I can choose which batch to send back, based on the offset and limit.So my first query would be an offset of 0 and limit of 10. So I return the first batch, and second query I return the second batch (offset 10, limit 10). But if I run this command a second time, wouldn’t it be a different $sample chosen?",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "I think that the shell overwrite anything below 101 results.Try this with your driver …Now the skip and limit is not related to the batch its to offset and limit the entire query and is done on server side …Skip is a highly unoptimized operation as it needs to first scan all offset before retrieve results while limit just stops after x returned docs",
"username": "Pavel_Duchovny"
}
] | What would be the best approach to randomly query X amount of items from a collection? | 2021-06-14T21:18:43.296Z | What would be the best approach to randomly query X amount of items from a collection? | 16,673 |
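A hedged sketch of the "random number field" idea suggested mid-thread, which avoids re-sampling on every page. The posts collection, the rand field and the client-held lastSeenRand value are assumptions, and the $rand aggregation operator requires MongoDB 4.4.2 or newer.

```javascript
// One-off backfill: give every post a random sort key and index it, so pages
// can be read in a stable pseudo-random order without $sample.
db.posts.updateMany(
  { rand: { $exists: false } },
  [ { $set: { rand: { $rand: {} } } } ]   // pipeline-style update, 4.2+
);
db.posts.createIndex({ rand: 1 });

// Page N: the client remembers the last "rand" value it saw and continues
// from there, excluding the current user's own posts (use 0 for the first page).
db.posts.find({
  posted_by: { $ne: "userA" },
  rand: { $gt: lastSeenRand }
})
.sort({ rand: 1 })
.limit(10);
```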
null | [
"crud"
] | [
{
"code": "",
"text": "Hello all,Is it possible to sync my local MySQL database (only few tables) with MongoDB atlas? Any record being inserted into MySQL should also get populated into MongoDB too.What are the options I have in this scenario?",
"username": "Abhi"
},
{
"code": "",
"text": "Hi @Abhi ,You can use a kafka connector to read data from mysql and write to MongoDB using its own kafka connector.Confluent, founded by the original creators of Apache Kafka®, delivers a complete execution of Kafka for the Enterprise, to help you run your business in real-time.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny thank you for providing the information. Is there any documentation or article I can go through for detailed steps? I’m bit new to mongodb and I’m not aware for Kafka as well.In my mysql dB, I just need one table to sync with mongodb that’s my use-case.",
"username": "Abhi"
},
{
"code": "",
"text": "I didn’t find a single article to cover it all…But if you define mysql CDC as a source:Get started with the MySQL CDC Source (Debezium) Connector for Confluent Cloud.And mongoDB as a sink listening to the same topics it should work as data is flowing in…Get started with the MongoDB Atlas Sink connector for Confluent Cloud.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "(post withdrawn by author, will be automatically deleted in 24 hours unless flagged)",
"username": "Abhi"
},
{
"code": "curl -i -X POST -H \"Accept:application/json\" -H \"Content-Type:application/json\" localhost:8083/connectors/ -d '''{\n \"name\": \"source_mysql_connector\", \n \"config\": { \n \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\",\n \"tasks.max\": \"1\", \n \"database.hostname\": \"host.docker.internal\", \n \"database.port\": \"3306\",\n \"database.user\": \"abhijith\",\n \"database.password\": \"$apr1$o7RbW.xt$8.GZtOoAhXvRqyYGvrPIY1\",\n \"database.server.id\": \"8111999\", \n \"database.server.name\": \"db_source\", \n \"database.include.list\": \"example\", \n \"database.history.kafka.bootstrap.servers\": \"broker:29092\", \n \"database.history.kafka.topic\": \"schema-changes.example\",\n \"database.allowPublicKeyRetrieval\":\"true\",\n }\n}'''\ncurl -i -X POST -H \"Accept:application/json\" -H \"Content-Type:application/json\" localhost:8083/connectors/ -d '''{\n \"name\": \"sink_mongodb_connector\", \n \"config\": { \n \"connector.class\":\"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"tasks.max\":\"1\",\n \"topics\":\"db_source.example.employees\",\n \"connection.uri\":\"mongodb://abhi:[email protected]:27017/\",\n \"database\":\"example\",\n \"collection\":\"employees\",\n \"key.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"key.converter.schemas.enable\": \"false\",\n \"value.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter.schemas.enable\": \"false\"\n }\n}'''\n\"transforms\": \"unwrap\"\n\n\nconnect | [2021-06-21 02:02:40,083] INFO Kafka version: 6.2.0-ccs (org.apache.kafka.common.utils.AppInfoParser)\nconnect | [2021-06-21 02:02:40,083] INFO Kafka commitId: 1a5755cf9401c84f (org.apache.kafka.common.utils.AppInfoParser)\nconnect | [2021-06-21 02:02:40,083] INFO Kafka startTimeMs: 1624240960083 (org.apache.kafka.common.utils.AppInfoParser)\nconnect | [2021-06-21 02:02:40,090] INFO interceptor=confluent.monitoring.interceptor.connector-consumer-sink_mongodb_connector-0 created for client_id=connector-consumer-sink_mongodb_connector-0 client_type=CONSUMER session= cluster=pT-3hoQ9Qfi52n58kayl-Q group=connect-sink_mongodb_connector (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor)\nconnect | [2021-06-21 02:02:40,093] INFO [Producer clientId=confluent.monitoring.interceptor.connector-consumer-sink_mongodb_connector-0] Cluster ID: pT-3hoQ9Qfi52n58kayl-Q (org.apache.kafka.clients.Metadata)\nconnect | [2021-06-21 02:02:40,096] ERROR WorkerSinkTask{id=sink_mongodb_connector-0} Error converting message key in topic 'db_source.example.employees' partition 0 at offset 0 and timestamp 1624240077642: Converting byte[] to Kafka Connect data failed due to serialization error: (org.apache.kafka.connect.runtime.WorkerSinkTask)\nconnect | org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error: \nconnect | \tat org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:324)\nconnect | \tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertKey(WorkerSinkTask.java:530)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:493)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)\nconnect | \tat 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:493)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:473)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:328)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)\nconnect | \tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\nconnect | \tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\nconnect | \tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\nconnect | \tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\nconnect | \tat java.base/java.lang.Thread.run(Thread.java:829)\nconnect | Caused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'Struct': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\nconnect | at [Source: (byte[])\"Struct{id=2}\"; line: 1, column: 8]\nconnect | Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'Struct': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\nconnect | at [Source: (byte[])\"Struct{id=2}\"; line: 1, column: 8]\nconnect | \tat com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)\nconnect | \tat com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722)\nconnect | \tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3560)\nconnect | \tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2655)\nconnect | \tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:857)\nconnect | \tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:754)\nconnect | \tat com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4247)\nconnect | \tat com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2734)\nconnect | \tat org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:64)\nconnect | \tat org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:322)\nconnect | \tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertKey(WorkerSinkTask.java:530)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:493)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)\nconnect | \tat 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:493)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:473)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:328)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)\nconnect | \tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\nconnect | \tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\nconnect | \tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\nconnect | \tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\nconnect | \tat java.base/java.lang.Thread.run(Thread.java:829)\nconnect | [2021-06-21 02:02:40,098] ERROR WorkerSinkTask{id=sink_mongodb_connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)\nconnect | org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:493)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:473)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:328)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)\nconnect | \tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\nconnect | \tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\nconnect | \tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\nconnect | \tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\nconnect | \tat java.base/java.lang.Thread.run(Thread.java:829)\nconnect | Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error: \nconnect | \tat org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:324)\nconnect | \tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\nconnect | \tat 
org.apache.kafka.connect.runtime.WorkerSinkTask.convertKey(WorkerSinkTask.java:530)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:493)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)\nconnect | \t... 13 more\nconnect | Caused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'Struct': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\nconnect | at [Source: (byte[])\"Struct{id=2}\"; line: 1, column: 8]\nconnect | Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'Struct': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\nconnect | at [Source: (byte[])\"Struct{id=2}\"; line: 1, column: 8]\nconnect | \tat com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)\nconnect | \tat com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722)\nconnect | \tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3560)\nconnect | \tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2655)\nconnect | \tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:857)\nconnect | \tat com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:754)\nconnect | \tat com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4247)\nconnect | \tat com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2734)\nconnect | \tat org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:64)\nconnect | \tat org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:322)\nconnect | \tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertKey(WorkerSinkTask.java:530)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:493)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)\nconnect | \tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:493)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:473)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:328)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)\nconnect | \tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)\nconnect | \tat 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\nconnect | \tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\nconnect | \tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\nconnect | \tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\nconnect | \tat java.base/java.lang.Thread.run(Thread.java:829)\n{\n \"connect.name\": \"db_source.example.employees.Envelope\",\n \"fields\": [\n {\n \"default\": null,\n \"name\": \"before\",\n \"type\": [\n \"null\",\n {\n \"connect.name\": \"db_source.example.employees.Value\",\n \"fields\": [\n {\n \"name\": \"id\",\n \"type\": \"long\"\n },\n {\n \"name\": \"name\",\n \"type\": \"string\"\n },\n {\n \"name\": \"team\",\n \"type\": \"string\"\n },\n {\n \"name\": \"birthday\",\n \"type\": {\n \"connect.name\": \"io.debezium.time.Date\",\n \"connect.version\": 1,\n \"type\": \"int\"\n }\n }\n ],\n \"name\": \"Value\",\n \"type\": \"record\"\n }\n ]\n },\n {\n \"default\": null,\n \"name\": \"after\",\n \"type\": [\n \"null\",\n \"Value\"\n ]\n },\n {\n \"name\": \"source\",\n \"type\": {\n \"connect.name\": \"io.debezium.connector.mysql.Source\",\n \"fields\": [\n {\n \"name\": \"version\",\n \"type\": \"string\"\n },\n {\n \"name\": \"connector\",\n \"type\": \"string\"\n },\n {\n \"name\": \"name\",\n \"type\": \"string\"\n },\n {\n \"name\": \"ts_ms\",\n \"type\": \"long\"\n },\n {\n \"default\": \"false\",\n \"name\": \"snapshot\",\n \"type\": [\n {\n \"connect.default\": \"false\",\n \"connect.name\": \"io.debezium.data.Enum\",\n \"connect.parameters\": {\n \"allowed\": \"true,last,false\"\n },\n \"connect.version\": 1,\n \"type\": \"string\"\n },\n \"null\"\n ]\n },\n {\n \"name\": \"db\",\n \"type\": \"string\"\n },\n {\n \"default\": null,\n \"name\": \"sequence\",\n \"type\": [\n \"null\",\n \"string\"\n ]\n },\n {\n \"default\": null,\n \"name\": \"table\",\n \"type\": [\n \"null\",\n \"string\"\n ]\n },\n {\n \"name\": \"server_id\",\n \"type\": \"long\"\n },\n {\n \"default\": null,\n \"name\": \"gtid\",\n \"type\": [\n \"null\",\n \"string\"\n ]\n },\n {\n \"name\": \"file\",\n \"type\": \"string\"\n },\n {\n \"name\": \"pos\",\n \"type\": \"long\"\n },\n {\n \"name\": \"row\",\n \"type\": \"int\"\n },\n {\n \"default\": null,\n \"name\": \"thread\",\n \"type\": [\n \"null\",\n \"long\"\n ]\n },\n {\n \"default\": null,\n \"name\": \"query\",\n \"type\": [\n \"null\",\n \"string\"\n ]\n }\n ],\n \"name\": \"Source\",\n \"namespace\": \"io.debezium.connector.mysql\",\n \"type\": \"record\"\n }\n },\n {\n \"name\": \"op\",\n \"type\": \"string\"\n },\n {\n \"default\": null,\n \"name\": \"ts_ms\",\n \"type\": [\n \"null\",\n \"long\"\n ]\n },\n {\n \"default\": null,\n \"name\": \"transaction\",\n \"type\": [\n \"null\",\n {\n \"fields\": [\n {\n \"name\": \"id\",\n \"type\": \"string\"\n },\n {\n \"name\": \"total_order\",\n \"type\": \"long\"\n },\n {\n \"name\": \"data_collection_order\",\n \"type\": \"long\"\n }\n ],\n \"name\": \"ConnectDefault\",\n \"namespace\": \"io.confluent.connect.avro\",\n \"type\": \"record\"\n }\n ]\n }\n ],\n \"name\": \"Envelope\",\n \"namespace\": \"db_source.example.employees\",\n \"type\": \"record\"\n}",
"text": "@Pavel_Duchovny Thanks for the resources, I was able to setup most of stuff but I am facing a issue with data type, my mongodb sink is not able to convert the response of mysql cdc?Here’s my source connector config command,Here’s my sink connector request,Right after I connect the mongodb sink connector, I get this error, I understood that error is due to mongodb sink connector not able to understand the output of mysql source connector. But I don’t know how to fix it?I tried adding this settings in mysql source connector but didn’t failed,This is my MySQL source connector schema value, I got this from confluent control center,",
"username": "Abhi"
},
{
"code": "",
"text": "I @AbhiI think the issue is converting data of type byte into one of MongoDB types.I think you will need to play with converters on each side to make it righthttps://kafka.apache.org/21/javadoc/index.html?org/apache/kafka/connect/storage/StringConverter.htmlNot a kafka expert so maybe someone else can help\nThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can I sync few tables from MySQL DB to MongoDB? | 2021-06-18T15:27:15.697Z | How can I sync few tables from MySQL DB to MongoDB? | 6,984 |
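The serialization error reported above comes from the sink's key converter trying to parse the stringified Struct key as JSON. A commonly suggested adjustment (not verified in this thread) is to give the sink connector a string key converter, and to complete the Debezium unwrap transform on the source with its type property. These are sketches of config fragments to merge into the curl -d payloads shown earlier; the class names are the standard Kafka Connect and Debezium ones.

    Sink connector fragment:
    "key.converter": "org.apache.kafka.connect.storage.StringConverter"

    Source connector fragment (the "unwrap" transform the poster started to add also needs its type):
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"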
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi there, mi name is Carlos and i’m learning mongodb.\nMy question how to you program having a local bd for all the data you are sharing through mongo atlas. This would be through node.js. I’m new to this kind of distributed systems. Thanks for your help, adn time.",
"username": "Carlos_Diego_Sanchez"
},
{
"code": "",
"text": "Hi @Carlos_Diego_Sanchez,Welcome to the community and glad to hear you’re learning MongoDB My question how to you program having a local bd for all the data you are sharing through mongo atlas.Please correct me if I am wrong in understanding your question here, are you wanting to have the same data that you currently have on Atlas on a local MongoDB deployment?Kind Regards,\nJason",
"username": "Jason_Tran"
}
] | Backing up my mongo atlas in my local host; student learning mongo | 2021-06-19T02:05:30.189Z | Backing up my mongo atlas in my local host; student learning mongo | 1,413 |
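If the goal is simply a copy of Atlas data in a local deployment, one option (a sketch with placeholder connection strings, not a live sync) is the standard database tools:

    mongodump --uri="mongodb+srv://user:password@cluster0.example.mongodb.net/mydb" --out=dump/
    mongorestore --uri="mongodb://localhost:27017" dump/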
null | [
"devops"
] | [
{
"code": "",
"text": "I deleted a significant amount of data from a single collection and would like to reclaim the space (including the index space). I successfully ran compact on that collection for the primary. I then did a stepdown and ran it again so that the other node was also compacted. However, I’m not able to run compact on the third node. The priority of this node in Atlas is 6.5 (vs 7.0) for the others so when I run a stepdown it doesn’t become primary. My question therefore is how can I complete this compact? I’m not sure how to connect just to this single non-primary node to run it there as I keep connecting just to the primary. And MongoDB Atlas keeps giving me permission errors when I try to change the priority to force this third node to be the primary. So, right now I’m stuck with a larger database than I need because of this issue and would like to fix that.",
"username": "Andrew_Pitts"
},
{
"code": "",
"text": "Hi @Andrew_Pitts ,Welcome to MongoDB community.Actually you should avoid using primary compact and run it on secondary.So what I would suggest is to go to the connect tab and use an older connection string format where all hosts are present and place only the specific node in the string (non srv).This will directly connected to the secondary .Than compact on it.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny Thank you! The suggestion to use the older connection string format was what I needed and worked like a charm. I was able to connect to the secondary in that way.For others that may come across this, please note that I needed to remove the replicaSet= part of that string and also that I needed to run rs.secondaryOk() before I could operate on the secondary.",
"username": "Andrew_Pitts"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Compact a Collection | 2021-06-20T04:26:33.221Z | Compact a Collection | 5,160 |
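A sketch of the steps described in this thread, with placeholder hostname, credentials, and collection name: connect directly to the single secondary using its non-SRV host, allow reads on it, then run compact.

    mongo "mongodb://cluster0-shard-00-02.example.mongodb.net:27017/mydb?ssl=true&authSource=admin" --username myUser

    rs.secondaryOk()
    db.runCommand({ compact: "myCollection" })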
[
"crud",
"atlas-device-sync"
] | [
{
"code": "",
"text": "I’ve been battling this for a few days so figured I’d just ask! I’m new to Realms/Mongo so it’s probably something simple. I had a quick search and I couldn’t find something similar.I’m following the guide here:On how to update a document in a linked atlas collection using a realm connection. The docs seem to say create a Document(…) for the query and a Document(…) specifying the values you wish to update.The behaviour I seem to be seeing is that the new Document replaces the matching document rather than updating matching fields. Just wondering what I could be doing wrong?The screenshots attached are from Atlas/Realm website with me trying to query for Document(“name”, “russian”) and updating with Document(“difficulty”, 99). The query in the realm log looks like I would expect and indeed this works if I perform a similar update on the console.I can only upload one photo apparently, so sorry for the bodged paint job \n\n_combined704×1057 36.3 KB\nIs there something I am missing? The only way I can seem to get this to work is to populate all the original fields in the update query which is tedious. This is all on Android but I think it’s something more fundamental that I am missing… Maybe some setting or something silly?Thanks!",
"username": "Steven_Pool"
},
{
"code": "Document(\"$set\", Document(\"difficulty\", 99));\n",
"text": "Hey Steven, welcome to the forums! The update document supports update operators like $set which updates a single field without modifying the rest of the document. You’ll want to use that or another update operator, e.g.The behavior you’re seeing (replace the full document) is expected based on the update documents in the docs example because they don’t include update operators. That said the example is incomplete and we can do better. I just made a docs ticket and will follow up to make sure we get better examples that show update operators.",
"username": "nlarew"
},
{
"code": "",
"text": "Hey Nick! Thanks for the warm welcome Ah that does make sense now that you mention there are different update operators you can perform… I knew it would be something simple! I do now also see the link for the update operators just above. But I wouldn’t have guessed to do Document(\"$set\", Document(…))…Thanks for saving me some more days of frustration! ",
"username": "Steven_Pool"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Updating linked atlas documents problems | 2021-06-19T16:28:57.539Z | Updating linked atlas documents problems | 2,132 |
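A minimal mongosh illustration of the difference discussed above, assuming a languages collection with name and difficulty fields (placeholder names): passing a plain document replaces everything except _id, while an update operator changes only the named field.

    // Replacement: the matched document is overwritten with this document.
    db.languages.replaceOne({ name: "russian" }, { name: "russian", difficulty: 99 })

    // Update operator: only difficulty changes; other fields are untouched.
    db.languages.updateOne({ name: "russian" }, { $set: { difficulty: 99 } })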
|
null | [
"queries"
] | [
{
"code": "",
"text": "Hi !\nI am new to mongoDB, I have a doubt regarding alias\nI want to get custom id name without aggregation\n(select id as userId from users)\nI am using 4.0 version someone please help me",
"username": "Ganesh_ND"
},
{
"code": "db.users.findOne({},{ userId : \"$id\"});\n",
"text": "Hi @Ganesh_NDWelcome to MongoDB community and good luck on your MongoDB journey.You don’t need aggregation to do that and a simple project clause in a find or findOne query will do the trick.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "thanks for response @Pavel_Duchovny \nthis query(db.users.findOne({},{ userId : “$id”})) works in 4.4 but I am using 4.0",
"username": "Ganesh_ND"
},
{
"code": "",
"text": "Yea in this version its better to use $project with aggregation…",
"username": "Pavel_Duchovny"
}
] | Mongodb alias issues | 2021-06-17T20:20:11.155Z | Mongodb alias issues | 3,593 |
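A short example of the aggregation alternative suggested above, which works on MongoDB 4.0 (collection and field names taken from the thread):

    db.users.aggregate([
      { $project: { _id: 0, userId: "$id" } }
    ])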
null | [
"app-services-user-auth",
"typescript"
] | [
{
"code": "",
"text": "Hi there,\nAs mention in the subject can you kindly guide me where/how can I set the user profile data like first name laste name , phone , address…\nI am able to register a user with email and password but could not figure out how to the mentions user info.\nKind regards, Behzad",
"username": "Behzad_Pashaie1"
},
{
"code": "",
"text": "Have you looked into custom user data?When a user logs in, you can populate the data with an Authentication Trigger, or afterwards when you’ve acquired the data with a write to the custom user data collection that you specify.",
"username": "Sumedha_Mehta1"
},
{
"code": "Realms.Sync.UserProfileRealms.Sync.UserUserProfile",
"text": "Hey @Sumedha_Mehta1,I have a question related to this. I’m successfully using custom user data, but there’s also this Realms.Sync.UserProfile in Realms.Sync.User and I’m wondering about this. I know custom user data is useful for things like canRead / canWrite to partitions (or other custom data…). Is UserProfile populated when you use social authentication or how is it intended to be used?Thanks!",
"username": "Derek_Winnicki"
},
{
"code": "",
"text": "Yep this is related to Social Auth (Google, FB) and should be populated if you use those providers",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "it’s curious that typescript definitions show some fileds on the profile object that are not present except email\nCaptura de pantalla de 2021-06-19 10-49-09877×289 36.6 KB\n",
"username": "saulpalv"
}
] | Set user profile data like First-name Last Name - Phone -Address | 2021-01-26T21:14:11.985Z | Set user profile data like First-name Last Name - Phone -Address | 4,729 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Hi, we need to run transactions across multiple documents from different collections, and these collections are all sharded with the same sharding key. Is there a way to collocate the documents with the same sharding key from different collections on to the same shard so that the transactions do not span multiple shards?Also I imagine running transactions in one shard is more preformant than running it across multiple shards, but how much a difference is it?",
"username": "skiptomylu"
},
{
"code": "",
"text": "I am also finding such feature. if mongodb support this then there are many possibilities.\nsuch as $lookup on sharded collections.",
"username": "Ken_Cheng"
}
] | Is there a way to collocate documents of different collections on to the same shard? | 2021-06-14T17:50:18.462Z | Is there a way to collocate documents of different collections on to the same shard? | 1,949 |
[
"crud"
] | [
{
"code": "holesArrayObjectholeGrossround._idArray ( [holeScoreHidden] => Array ( [1] => 7 [2] => 7 [3] => 7 [4] => 8 [5] => 7 [6] => 7 [7] => 7 [8] => 7 [9] => 7 ) [roundId] => 60c642db09080f1b50331b2d [submit] => )findAndModify()hole.nodb.roundTest.findAndModify({\n query: { _id : ObjectId('60c916684bd16901f36efb3a') },\n update: { $set: { \"holes.$[elem].holeGross\" : 8 } },\n arrayFilters: [ { \"elem.no\": 1 } ]\n})\nArrayhole.noechofindAndReplacearrayFiltersarrayFilters: [ <filterdocument1>, ... ]",
"text": "I have the following document structure. I am trying to update specific values inside the holes sub-array:\nimage690×994 76.2 KB\nEach Array element is an Object representing the score on a golf hole, with various properties (fields). I am trying to provide the ability to update the holeGross field ONLY for each score. On my PHP website, this is using a POST form, which pushes an Array of scores and the round._id value too, as:Array ( [holeScoreHidden] => Array ( [1] => 7 [2] => 7 [3] => 7 [4] => 8 [5] => 7 [6] => 7 [7] => 7 [8] => 7 [9] => 7 ) [roundId] => 60c642db09080f1b50331b2d [submit] => )However, initially I am trying to get it working with MongoDB shell syntax. I have tried following the findAndModify() method in the MongoDB shell, which updates 1 field specified, i.e. where hole.no == 1:How can I update all Array elements in this document? Each holeGross to be updated will [likely] be different, so need to match the hole.no (“elem.no”) against the corresponding new score.I expect I will need to also perform a loop, in order to go through the $_POST array. Would this likely be to form echo statements, or can it be directly integrated into the findAndReplace call?Also, can I perform this within a single method call, or would this approach require a call per single field / document that is to be updated? I would prefer a single call, obviously.The documentation says that arrayFilters is:arrayFilters: [ <filterdocument1>, ... ]But I don’t want to pass in a whole document.",
"username": "Dan_Burt"
},
{
"code": "Arrayhole.noholeGrossholes{\n \"_id\" : 1,\n \"player\" : \"John\",\n \"holes\" : [\n {\n \"no\" : 1,\n \"par\" : 3,\n \"holeGross\" : 2\n },\n {\n \"no\" : 2,\n \"par\" : 4,\n \"holeGross\" : 3\n }\n ]\n}\nvar new_vals = [ 9, 5 ]db.collection.updateOne(\n { _id: 1 },\n [\n {\n $set: {\n holes: {\n $map: {\n input: { $range: [ 0, { $size: \"$holes\" } ] },\n in: {\n $mergeObjects: [ \n { $arrayElemAt: [ \"$holes\", \"$$this\" ] }, \n { holeGross: { $arrayElemAt: [ new_vals, \"$$this\" ] } } \n ]\n }\n }\n }\n }\n }\n ]\n)\n",
"text": "How can I update all Array elements in this document? Each holeGross to be updated will [likely] be different, so need to match the hole.no (“elem.no”) against the corresponding new score.Hello @Dan_Burt, you can use the Updates with Aggregation Pipeline feature to update different values of holeGross for all the elements of the holes array.Suppose you have a document with two holes, for example:and the new values of hole gross in an array, for example:var new_vals = [ 9, 5 ]The following update operation will change each element of the array with new values (in your case, you need to supply an array of 8 elements as there are eight holes) in a single call.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "HowThanks, I have tested that this works as I expect in the MongoDB shell.Was the findAndModify / findAndReplace methods the incorrect approach for what I was attempting? Is that only to update a single field, and not multi-dimensional arrays of stuff?",
"username": "Dan_Burt"
},
{
"code": "",
"text": "Also, are these Aggregation methods available in PHP as well?",
"username": "Dan_Burt"
},
{
"code": "findAndModifyfindAndReplaceupdateOnefindOneAndUpdate",
"text": "@Dan_Burt , the findAndModify also has the option to use the Aggregation Updates - so it will also work in the same way. The findAndReplace has no such option.I see that the update methods (updateOne and findOneAndUpdate) of the MongoDB PHP Driver supports the Updates with Aggregation Pipeline feature:",
"username": "Prasad_Saya"
},
{
"code": "**Fatal error** : Uncaught MongoDB\\Driver\\Exception\\BulkWriteException: Unknown modifier: $map. Expected a valid update modifier or pipeline-style update specified as an array in /var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php:228 Stack trace: #0 /var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php(228): MongoDB\\Driver\\Server->executeBulkWrite('golf.roundTest', Object(MongoDB\\Driver\\BulkWrite), Array) #1 /var/www/html/vendor/mongodb/mongodb/src/Operation/UpdateOne.php(117): MongoDB\\Operation\\Update->execute(Object(MongoDB\\Driver\\Server)) #2 /var/www/html/vendor/mongodb/mongodb/src/Collection.php(1075): MongoDB\\Operation\\UpdateOne->execute(Object(MongoDB\\Driver\\Server)) #3 /var/www/html/functions.php(783): MongoDB\\Collection->updateOne(Array, Array) #4 /var/www/html/updateRound.php(19): setRoundScores('60cb07d14bd1690...', Array) #5 {main} thrown in **/var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php** on line **228**$mapfunction setRoundScores($roundId, $scoresArray) {\n \t\n\t$collection = $client->golf->roundTest;\n\t\t\n $match = [\n \t'$match' => [\n \t\t'_id' => new MongoDB\\BSON\\ObjectID( $roundId )\n \t]\n ];\n \t\n $set = [\n \t'$map' => [\n \t\t'input' => [\n \t\t\t'$range' => [ 0, [ '$size' => '$holes']],\n \t\t\t'in' => [\n \t\t\t\t'$mergeObjects' => [\n \t\t\t\t\t[ '$arrayElemAt' => [ '$holes', '$$this']],\n \t\t\t\t\t[ 'holeGross' => [ '$arrayElemAt' => $scoresArray, '$$this']]\n \t\t\t\t]\n \t\t\t]\n \t\t]\n \t]\n ];\n \t\n $updateOne = $collection->updateOne($match,$set);\n \t\n\treturn $updateOne->getUpsertedId();\n\t\t\n}",
"text": "So I tried retaining the existing code, formatted for PHP, i.e. replacing “:” with “=>” and using square braces.It returns:**Fatal error** : Uncaught MongoDB\\Driver\\Exception\\BulkWriteException: Unknown modifier: $map. Expected a valid update modifier or pipeline-style update specified as an array in /var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php:228 Stack trace: #0 /var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php(228): MongoDB\\Driver\\Server->executeBulkWrite('golf.roundTest', Object(MongoDB\\Driver\\BulkWrite), Array) #1 /var/www/html/vendor/mongodb/mongodb/src/Operation/UpdateOne.php(117): MongoDB\\Operation\\Update->execute(Object(MongoDB\\Driver\\Server)) #2 /var/www/html/vendor/mongodb/mongodb/src/Collection.php(1075): MongoDB\\Operation\\UpdateOne->execute(Object(MongoDB\\Driver\\Server)) #3 /var/www/html/functions.php(783): MongoDB\\Collection->updateOne(Array, Array) #4 /var/www/html/updateRound.php(19): setRoundScores('60cb07d14bd1690...', Array) #5 {main} thrown in **/var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php** on line **228**My guess is $map isn’t available to the PHP drivers?The syntax in use is:",
"username": "Dan_Burt"
},
{
"code": "$match$map",
"text": "Hello @Dan_Burt, are you trying to use $match in the aggregate update (match is not valid stage for update)? You need to use regular query filter.I am not familiar with PHP programming. Here is some syntax using $map in PHP:",
"username": "Prasad_Saya"
},
{
"code": "'$match'$filterupdateOne()$match2 = [ '_id' => new MongoDB\\BSON\\ObjectID( $roundId ) ];'$map'",
"text": "But the error is not complaining about that stage. Copying from previous PHP statements, I used the '$match' construct, as thought it aligned to the first $filter parameter supplied to the updateOne() method call.I have tried replacing the 1st stage with this, which looks more like the format in the documentation link you provided for the PHP driver:$match2 = [ '_id' => new MongoDB\\BSON\\ObjectID( $roundId ) ];I just prefer to split these out to separate var’s for readability.And it still ONLY errors about the '$map' stage. Maybe I need to pursue this on Stack Overflow, that might have more PHP-savvy developers to assist.",
"username": "Dan_Burt"
},
{
"code": "holesupdateOne()",
"text": "Or would a simpler (but NOT efficient, I know) method be to replace the entire holes sub-array in the updateOne() call?This would mean sending over loads more data than is necessary… But I don’t get caught up in advanced Aggregation pipelines.Newbie, just enquiring about options and fallbacks…",
"username": "Dan_Burt"
},
{
"code": "mongo$setholes$map$set: {\n holes: {\n $map: { ...\n'$map' => [ ...[\n \"$set\" => [\n \"holes\" => [\n \"$map\" => [\n \"input\" => [ ... ],\n \"in\" => [...]\"\n ]\n ]\n ]\n]\n [\n {\n $set: {\n holes: {\n $map: {\n input: { $range: [ 0, { $size: \"$holes\" } ] },\n in: {\n $mergeObjects: [ \n { $arrayElemAt: [ \"$holes\", \"$$this\" ] }, \n { holeGross: { $arrayElemAt: [ new_vals, \"$$this\" ] } } \n ]\n }\n }\n }\n }\n }\n ]\n['_id' => 'some value ...']",
"text": "@Dan_Burt ,The way you are coding the update in PHP may not be correct, I think.The following is the mongo shell version from my post, where $set is the aggregation stage, holes is the array field being updated, and the $map is the aggregate operator:But, in your PHP version, I only see this:'$map' => [ ...As such PHP MongoDB Driver (latest versions) support the Update with the Aggregate Pipeline along with MongoDB server v4.2 or later.[ Post Updated ]:I think your update should be something like this in PHP:This corresponds to the shell update’s code:And, the filter would be : ['_id' => 'some value ...']",
"username": "Prasad_Saya"
},
{
"code": "$set = [\n '$set' => [\n \t'holes' => [\n \t\t'$map' => [\n \t\t\t'input' => [\n \t\t\t\t'$range' => [ 0, [ '$size' => '$holes']]\n \t\t\t],\n\t\t\t\t'in' => [\n\t\t\t\t\t'$mergeObjects' => [\n\t\t\t\t\t\t[ '$arrayElemAt' => [ '$holes', '$$this' ]],\n\t\t\t\t\t\t[ 'holeGross' => [ '$arrayElemAt' => $scoresArray, '$$this' ]]\n\t\t\t\t\t]\n\t\t\t\t]\n \t\t\t\t\t\n \t\t]\n \t]\n ]\n];\n",
"text": "Yes, I think you are correct - the syntax gets very tricky very quickly!I have a new error code now, having tried to correct the structure:Errors with:Fatal error : Uncaught MongoDB\\Driver\\Exception\\BulkWriteException: The dollar ($) prefixed field ‘$map’ in ‘holes.$map’ is not valid for storage. in /var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php:228 Stack trace: #0 /var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php(228): MongoDB\\Driver\\Server >executeBulkWrite(‘golf.roundTest’, Object(MongoDB\\Driver\\BulkWrite), Array) #1 /var/www/html/vendor/mongodb/mongodb/src/Operation/UpdateOne.php(117): MongoDB\\Operation\\Update->execute(Object(MongoDB\\Driver\\Server)) #2 /var/www/html/vendor/mongodb/mongodb/src/Collection.php(1075): MongoDB\\Operation\\UpdateOne >execute(Object(MongoDB\\Driver\\Server)) #3 /var/www/html/functions.php(804): MongoDB\\Collection >updateOne(Array, Array) #4 /var/www/html/updateRound.php(19): setRoundScores(‘60cb07d14bd1690…’, Array) #5 {main} thrown in /var/www/html/vendor/mongodb/mongodb/src/Operation/Update.php on line 228",
"username": "Dan_Burt"
},
{
"code": "",
"text": "@Dan_Burt, I think the feature to update using the pipeline is still being implemented with the PHP driver.See this JIRA: https://jira.mongodb.org/browse/DRIVERS-626An alternative is to use the “execute database command” to run the update command:",
"username": "Prasad_Saya"
},
{
"code": "$cursor = $database->command([\n 'update': 'collection_name',\n 'updates': [\n [\n 'q': [ '_id' => 1 ],\n 'u': [\n [...] // <-------- see the code below to fill (the brackets stay)\n ]\n ],\n ],\n ]\n]);\n\n$results = $cursor->toArray()[0];\nvar_dump($results);\n... \"$set\" => [\n \"holes\" => [\n \"$map\" => [\n \"input\" => [ ... ],\n \"in\" => [...]\"\n ]\n ]\n ]\n",
"text": "From the execute basic command in MongoDB PHP, I think your update can be like this:This code goes into the ... of the above command:",
"username": "Prasad_Saya"
},
{
"code": "function setRoundScores($roundId, $scoresArray) {\n \t\n $client = new MongoDB\\Client($_ENV['MDB_CLIENT']);\n\n\t$collection = $client->golf->round;\n\n $match = [ '_id' => new MongoDB\\BSON\\ObjectID( $roundId ) ];\n \t\n $set = [\n \t'$set' => [\n \t\t'holes' => [\n \t\t\t'$map' => [\n \t\t\t\t'input' => [\n \t\t\t\t\t'$range' => [ 0, [ '$size' => '$holes']]\n \t\t\t\t],\n\t\t\t\t\t'in' => [\n\t\t\t\t\t\t'$mergeObjects' => [\n\t\t\t\t\t\t\t[ '$arrayElemAt' => [ '$holes', '$$this' ]],\n\t\t\t\t\t\t\t[ 'holeGross' => [ '$toInt' => [ '$arrayElemAt' => [ $scoresArray, '$$this' ]]]]\n\t\t\t\t\t\t]\n\t\t\t\t\t]\n \t\t\t\t\t\n \t\t\t]\n \t\t]\n \t]\n ];\n \t\n $updateOne = $collection->updateOne($match, [$set]);\n \t\n\treturn $updateOne->getModifiedCount();\n\t\t\n}\n'holeGross' =>'$toInt'",
"text": "Received assistance on Stack Overflow. Providing the final working syntax for future use:It was the number of square braces in the 'holeGross' => line. I also had to add an extra layer to this, '$toInt', as passing the var’s using $_POST converted them to strings.",
"username": "Dan_Burt"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update single (same) field in every object of an array | 2021-06-16T08:10:01.955Z | Update single (same) field in every object of an array | 23,840 |
|
null | [
"queries",
"field-encryption"
] | [
{
"code": "",
"text": "Could someone tell me if Wildcard (LIKE) queries are supported in encrypted fileld?Thanks,\nSurya",
"username": "SURYA_KANTA_DALAI"
},
{
"code": "$eq$in",
"text": "Welcome to the community @SURYA_KANTA_DALAI!I assume you are referring to Client-Side Field Level Encryption (CSFLE), which has options for both deterministic encryption and randomized encryption.This feature is designed to protect sensitive data: encrypted field values can only be decrypted by a client with the correct encryption keys. Encrypted field values cannot be evaluated by server queries, so wildcard or regex matches are not supported regardless of the encryption method used.The deterministic encryption algorithm ensures a given input value always encrypts to the same output value each time the algorithm is executed. This method supports a limited set of query operators based on equality comparison ($eq, $in, …) of encrypted values.With randomized encryption a given input value always encrypts to a different output value each time the algorithm is executed. This method provides the strongest guarantees of data confidentiality, which also prevents support for any read operations which must operate on the encrypted field to evaluate the query.For more information, see: Supported Query Operators for CSFLE.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello All,Can we query the randomized encryption field by any chance?",
"username": "khasim_ali1"
},
{
"code": "$exists",
"text": "Hi @khasim_ali1,Randomly encrypted field values are opaque to the server by design. You can use $exists (which does not require decrypting the field value), but you would have to use deterministic encryption if query support for equality comparisons is needed.Queries using an unsupported operator against a randomly encrypted field will return an error.Please refer to the documentation linked from my earlier comment for full details.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Like Queries With Field Level Encrypted Field | 2020-12-22T19:42:11.568Z | Like Queries With Field Level Encrypted Field | 3,737 |
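A small sketch of the behaviour described above, assuming a client configured for automatic CSFLE with an ssn field encrypted deterministically (collection and field names are placeholders):

    // Equality comparison on a deterministically encrypted field is supported:
    db.people.find({ ssn: "901-01-0001" })

    // A wildcard/regex match would have to evaluate the plaintext on the server,
    // so it is not supported against an encrypted field:
    db.people.find({ ssn: /^901/ })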
null | [
"data-modeling",
"crud"
] | [
{
"code": "",
"text": "Hello all,I am trying to build a system which requires storing an actual document file in the database along with other details. For example, assume I need a table where I store the candidate information like name, email, phone etc, and resume file (doc, docx, pdf) who are applying for a job I posted online.I don’t want to use amazon S3 to store the document and use the link in the database. I’m building an application which requires low latency. So having the resume file content , alongside the candidate data helps.In my use-case I don’t actually require the resume file to be frank, rather I need the text inside the resume as a string. So, which is the most efficient way, should I extract the text from resume document file and store it in mangoDB as text (or) store the resume file as binary format. Which approach is more scalable?The only reason I’m choosing to store text is, if I store the resume file itself I have to fetch and extract text from the document using some document parser which further increases my latency. On the other hand I’m thinking that storing such large texts from resume is scalable or not?I’m new to mangoDB. Appreciate any help or suggestions. Note: The resumes document files are less than 1mb.",
"username": "Abhi"
},
{
"code": "",
"text": "Hello @Abhi, welcome to the MongoDB Community forum!MongoDB data in a document can be upto maximum size of 16 Megabytes. A resume’s text is likely to be few (less than 100) kilobytes only, if stored as string (or text) data type. In case you are storing the resume document as a PDF or DOCX file, it is still going to be about few hundred kilobytes. So, you can store the resume as text with “String“ data type and the PDF/DOCX as “Binary data” within the document itself, along with other data like name, address, phone, email, etc. See MongoDB BSON Types.There is also an option store large document files, files greater than 16 MB size, in MongoDB GridFS (see this post for more details Process of storing images in MongoDB).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hello @Prasad_Saya,Thank you for the information. I just have one more doubt, Since there’s not much different on how I store the document. I also need a way to lookup/search basic candidates details like name, phone, email, title etc.,If I query the database directly from the UI/Frontend for these columns, will the latency be a problem, or do I need to index these columns in order to make the search faster. How fast is mangoDB? Or do I need to choose elasticsearch for these?",
"username": "Abhi"
},
{
"code": "db.usersCollection.find( { name: input_name } )name",
"text": "@Abhi , when you query from a browser - for example you enter some search criteria for name field - the search string is passed to the database query in the application server program (this is generally a Java, NodeJS, Python, etc., program and uses appropriate MongoDB Driver software). The program sends the query to the database server where it is executed and the results are returned to the application program which in turn is sent to the client (the browser, in this case).The query on the database will run based upon the search criteria; for example, db.usersCollection.find( { name: input_name } ). For the query to run efficiently and in a performant way, an index can be defined on the collection’s name field. Indexed searches are fast.Reference:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "@Prasad_Saya, Since you mentioned the resumes is around few hundred kilobytes, when I store it in MongoDB is it compressed or stores it in original file size, We also want to calculate the cost and which server to choose, in order to do that we need to know how much disk space will it take when I store them.Let’s say I have around 100 million resumes or something, if each file takes around 100-200 kilobytes the amount of disk space is huge right?. I know that it converts JSON document to BSON, does it also compress my size? Is there a way to calculate that?For example, If I store the resume text in Elasticsearch for each document, it’s taking around 15-20 kilobytes only.",
"username": "Abhi"
},
{
"code": "",
"text": "when I store it in MongoDB is it compressed or stores it in original file size,The data is stored in the document in its original size. The data and the indexes in the database are finally stored as data files. MongoDB uses a storage engine which manages this, and the presently it is the Wired Tiger Storage Engine; see Wired Tiger Storage Engine - Compression. See MongoDB Storage - FAQs for how to find the size of your collection size, and other details (specifically see the section Data Storage Diagnostics).",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "@Prasad_Saya Thank you",
"username": "Abhi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is the best way to store an actual document file? | 2021-06-18T00:35:06.205Z | What is the best way to store an actual document file? | 27,953 |
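A hedged example of the single-document layout suggested above, with placeholder values: profile fields, the extracted resume text as a string, the original file as binary data (well under the 16 MB document limit), and indexes on the fields used for lookups.

    db.candidates.insertOne({
      name: "Jane Doe",
      email: "jane@example.com",
      phone: "+1-555-0100",
      resumeText: "Experienced data engineer ...",   // extracted text, queried directly
      resumeFile: BinData(0, "JVBERi0xLjQK")         // original file bytes, base64-encoded
    })

    db.candidates.createIndex({ name: 1 })
    db.candidates.createIndex({ email: 1 })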
null | [
"dot-net",
"xamarin"
] | [
{
"code": " C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : [xma]: An error occurred on the receiver while executing a post for topic xvs/build/execute-task/AppName.iOS/be46a7d002fCodesign and client buildbe46a7dea89e48e793bc0f9a0cc351c537bea3be9755ca474d2a7810526c87cc25572Me C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : An error occurred on client Build169000292 while executing a reply for topic xvs/build/execute-task/AppName.iOS/be46a7d002fCodesign C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : at Xamarin.Messaging.Client.ApplicationMessageExtensions.<>c__DisplayClass10_0 C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : at System.Reactive.Linq.ObservableImpl.Select C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : --- End of stack trace from previous location where exception was thrown --- C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : at Xamarin.Messaging.Client.MessagingClient.<PostAsync>d__21C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): error : MessagingRemoteException: An error occurred on client Build169000292 while executing a reply for topic xvs/build/execute-task/AppName.iOS/be46a7d002fCodesignC:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): error : AggregateException: One or more errors occurred.C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): error : DirectoryNotFoundException: Could not find a part of the path '/Users/username/Library/Caches/Xamarin/mtbs/builds/AppName.iOS/be46a7dea89e48e793bc0f9a0cc351c537bea3be9755ca474d2a7810526c87cc/bin/iPhone/Debug/device-builds/iphone11.8-14.4.1/AppName.iOS.app/Frameworks/realm-wrappers.framework/_CodeSignature'.C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): error : ~/Library/Caches/Xamarin/mtbs",
"text": "HI there.I have a problem, and i think it may be related to the realm dotnet.This is the error i get: C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : [xma]: An error occurred on the receiver while executing a post for topic xvs/build/execute-task/AppName.iOS/be46a7d002fCodesign and client buildbe46a7dea89e48e793bc0f9a0cc351c537bea3be9755ca474d2a7810526c87cc25572Me C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : An error occurred on client Build169000292 while executing a reply for topic xvs/build/execute-task/AppName.iOS/be46a7d002fCodesign C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : at Xamarin.Messaging.Client.ApplicationMessageExtensions.<>c__DisplayClass10_01.b__1(MqttApplicationMessage m) in C:\\A\\1\\230\\s\\src\\Xamarin.Messaging.Client\\Extensions\\ApplicationMessageExtensions.cs:line 194` C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : at System.Reactive.Linq.ObservableImpl.Select2.Selector._.OnNext(TSource value) in d:\\a\\1\\s\\Rx.NET\\Source\\src\\System.Reactive\\Linq\\Observable\\Select.cs:line 39` C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : --- End of stack trace from previous location where exception was thrown --- C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): warning : at Xamarin.Messaging.Client.MessagingClient.<PostAsync>d__212.MoveNext() in C:\\A\\1\\230\\s\\src\\Xamarin.Messaging.Client\\MessagingClient.cs:line 190`C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): error : MessagingRemoteException: An error occurred on client Build169000292 while executing a reply for topic xvs/build/execute-task/AppName.iOS/be46a7d002fCodesignC:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): error : AggregateException: One or more errors occurred.C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): error : DirectoryNotFoundException: Could not find a part of the path '/Users/username/Library/Caches/Xamarin/mtbs/builds/AppName.iOS/be46a7dea89e48e793bc0f9a0cc351c537bea3be9755ca474d2a7810526c87cc/bin/iPhone/Debug/device-builds/iphone11.8-14.4.1/AppName.iOS.app/Frameworks/realm-wrappers.framework/_CodeSignature'.C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\MSBuild\\Xamarin\\iOS\\Xamarin.Shared.targets(975,3): error : I’ve tried various supposed solutions i’ve found when googling - cleaning buildfolders, deleting bin and obj folders, cleaning the cache on buildhost found at path ~/Library/Caches/Xamarin/mtbs.I have a valid developer certificate, and provisioning profile - so that should not be the issue…Finaly i verified i could actually connect to the buildhost by creating a new blank project, and it worked just fine - until i added realm.I’m all up to date with latest 
version of Xcode and Visual Studio. I’ve spent a lot of time trying to fix this, so I’m hoping someone can help with a solution.",
"username": "Rasmus_B"
},
{
"code": "",
"text": "What makes you think this is Realm related? I don’t see anything that points to Realm in the posted log - am I missing something?",
"username": "nirinchev"
},
{
"code": "",
"text": "DirectoryNotFoundException: Could not find a part of the path ‘/Users/username/Library/Caches/Xamarin/mtbs/builds/AppName.iOS/be46a7dea89e48e793bc0f9a0cc351c537bea3be9755ca474d2a7810526c87cc/bin/iPhone/Debug/device-builds/iphone11.8-14.4.1/AppName.iOS.app/Frameworks/realm-wrappers.framework/_CodeSignature’This part - as mentioned i tried with a blank solution, and after adding realm. Worked without - did not work with",
"username": "Rasmus_B"
},
{
"code": "",
"text": "Ugh, I can see that now - was truncated when I saw it in my inbox This seems to be identical to this issue reported on Github. Unfortunately, the user there didn’t reply, so we haven’t looked at it yet. Can you provide some details about your setup - judging by your comments and the log, it seems like you’re building for device from a Windows machine? If that’s the case, have you tried building from macOS directly? Do you get the same error?",
"username": "nirinchev"
},
{
"code": "",
"text": "No problem - i also did a pretty poor job at formatting it nicely .\nYes, it’s from my windows PC. I tried to set up VS for mac but ran into issues. I’ll give it another go and see if it’ll fix anything.\nIt started before i updated my mac, xcode and VS on my PC, I actually updated to try and fix the issue. Unfortunately i didn’t notice which version(s) the error occured on.",
"username": "Rasmus_B"
},
{
"code": "",
"text": "Ok - I was actually able to deploy it from VS on my Mac…",
"username": "Rasmus_B"
},
{
"code": "",
"text": "Hm, so sounds like might be a bug with the latest build host/VS integration. We’ll definitely look into reproducing and identifying the issue but it might be out of our hands. In the meantime is building from macOS a feasible workaround for you or is it too disruptive for your workflow?",
"username": "nirinchev"
},
{
"code": "",
"text": "Sounds good. Yes - i’ll be able to proceed with my project, so it’s good for now.",
"username": "Rasmus_B"
},
{
"code": "",
"text": "@nirinchev I just upgraded VS for PC, and now i get this error message:PackageInspectionFailed: Failed to load Info.plist from bundle at path /var/installd/Library/Caches/com.apple.mobile.installd.staging/temp.fonrLF/extracted/AppName.iOS.app/Frameworks/realm-wrappers.framework; Extra info about “/var/installd/Library/Caches/com.apple.mobile.installd.staging/temp.fonrLF/extracted/AppName.iOS.app/Frameworks/realm-wrappers.framework/Info.plist”: Couldn’t stat /var/installd/Library/Caches/com.apple.mobile.installd.staging/temp.fonrLF/extracted/AppName.iOS.app/Frameworks/realm-wrappers.framework/Info.plist: No such file or directory",
"username": "Rasmus_B"
},
{
"code": "",
"text": "It appears that this is a regression in VS for Windows: ERROR ITMS-90171: \"Invalid Bundle Structure\" · Issue #11728 · xamarin/xamarin-macios · GitHub and Visual Studio Feedback. 16.10.2 should fix the latter issue and may also fix the former if they stem from the same root cause. We’ll test it out later this week, but it might be something to try if you’ve already upgraded to latest VS.",
"username": "nirinchev"
},
{
"code": "",
"text": "I can confirm it works just fine on 16.10.2 ",
"username": "Rasmus_B"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb realm Xamarin forms issue | 2021-05-20T16:30:10.324Z | Mongodb realm Xamarin forms issue | 6,138 |
[
"dot-net"
] | [
{
"code": "",
"text": "\nSchermata 2021-06-18 alle 10.19.35856×438 35.2 KB\nHello everyone,\nsearching a bit on google i could not find the solution to my problem.\nI would like to create the schema that I posted as an image by c # code.\nBut in a dynamic, not static way.",
"username": "Salvatore_Lorello"
},
{
"code": "",
"text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Create Array of Object on C# | 2021-06-18T08:31:06.686Z | Create Array of Object on C# | 2,223 |
|
null | [
"graphql",
"schema-validation"
] | [
{
"code": "{\n \"title\": \"test\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"aliases\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n }\n }\n}\ntype Test {\n _id: ObjectId\n aliases: [String]\n}\ntype Test {\n _id: ObjectId\n aliases: [String!]\n}\n",
"text": "With JSON schema defined as suchThe generated graphQL schema is:I would expectAm I missing something or is it a bug?",
"username": "aleks"
},
{
"code": "",
"text": "Any comments? Is it a bug or am I not seeing something here?",
"username": "aleks"
},
{
"code": "\t\"title\": \"test\",\n\t\"properties\": {\n\t\t\"_id\": \"12345\",\n\t\t\"aliases\": [\"foo\", \"bar\", \"baz\"]\n\t}\n}\n{\n\t\"title\": \"test\",\n\t\"properties\": {\n\t\t\"_id\": \"12345\",\n\t\t\"aliases\": [\"foo\", null, \"baz\"]\n\t}\n}\n",
"text": "Interesting. Looks like a bug, as both are valid JSON:e.g. this code is valid JSON:And so is this:And I tested this against https://www.jsonschemavalidator.net/ to make sure both of these validated against the JSON schema you provided…",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Hey @aleks, thanks for catching this - I work on the Realm team, and it is something we missed and are looking into resolving. Is this blocking you or your use-case at the moment?An array with all strings should still be valid for both schemas which is the true intent of that type.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hey @Sumedha_Mehta1!\nNo, it is not blocking me. It’s just messing a bit with my type checking, which is a bit of annoyance.I assume that you are interested in getting notified of potential bugs ASAP, yet it took two weeks for a staff member to notice it. Do you have a way in which we (users) can reach out to you directly if we find some issues?",
"username": "aleks"
},
{
"code": "",
"text": "Hey @aleks - The forums are the best place to ask questions is the forum. Another place to provide feedback/improvement suggestions for the cloud side is in our feedback engine. We try to monitor both pretty closely the best we can Realm: Top (68 ideas) – MongoDB Feedback Engine",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "MongoDB support recommended me to file a request to fix this bug. Posting link to request here for anyone coming from search: Array types in generated GraphQL schema should follow JSON schema of a collection or custom type – MongoDB Feedback Engine",
"username": "Ivan_Bereznev1"
}
] | GraphQL schema wrongly allows nullable values as array items | 2021-05-07T12:40:46.263Z | GraphQL schema wrongly allows nullable values as array items | 4,457 |
null | [
"indexes",
"performance"
] | [
{
"code": "db.getCollection('activities').createIndex({\n_userId: 1,\n_userActivityTypeId: 1,\ncontentId: 1,\n_entityId: 1,\n_createdAt: 1\n},\n{\n partialFilterExpression: {\n contentId: { $exists: true }\n },\n name: 'activities__userId__userActivityTypeId__contentId__entityId__createdAt'\n});\nmaxIndexBuildMemoryUsageMegabytes$ dd if=index-87--9072640376711127209.wt of=/dev/null\n7854200+0 records in7854200+0 records out4021350400 bytes (4.0 GB, 3.7 GiB) copied, 84.4896 s, 47.6 MB/s",
"text": "Hi, we are creating an index using the rolling build index pattern (https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/) on a very large collection (aprox 450 M documents).Index is a partial index where all documents at the beginning of the collection does not have the contentId field:The creation is very slow, estimation time to complete is around 100 hours where our oplog time window is currently only 23 hours.To improve the process we did the following tuning but without luck:Looking at the I/O metrics we found that read speed is almost stable at 1.5 MB/s.CPU is not bounded, memory is not under pressure, IOPS are low (largely under the limit of the VM/Disk)During normal operation as secondary node we can observe the read speed go beyond 1.5 MB/s, and also reading an index file manually provide us the speed of 47.6 MB/s:7854200+0 records in\n7854200+0 records out\n4021350400 bytes (4.0 GB, 3.7 GiB) copied, 84.4896 s, 47.6 MB/sAs commented above the index creation process is just scanning the collection and not write anything, as at the beginning all the documents do not have the contentId field. We can observe that the index file size stay always at 4096 bytes.We have been looking at the doc to try to find any parameter that can improve the read speed of the scanning collection but without luck.Does anybody know any way to speed up the collection scan speed? Or if this limitation is documented somewhere?Further Info:Thank you so much,\nFrancesco",
"username": "Francesco_Rivola"
},
{
"code": "",
"text": "Hi @Francesco_Rivola ,Version 4.2 offers hybrid index builds which should enable you to avoid the rolling builds index , so rolling build might nit be necessary:The new builds are not locking the workload as the old ones.Moreover, in 4.4 we are building those indexes in all 3 nodes in parallel , is upgrading a possibility?Otherwise rather than increasing resources and increasing oplog and waiting I don’t have much more ideas.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
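For readers following along: a quick way to watch a build like the one discussed above while it is running is to filter the output of db.currentOp() in the mongo shell. This is a minimal sketch; the exact wording of the "msg" progress counter varies between server versions.

```javascript
// Sketch: list in-flight index builds and how long they have been running.
// "msg" carries a progress message such as "Index Build: scanning collection".
db.currentOp(true).inprog
  .filter(function (op) { return op.msg && /Index Build/.test(op.msg); })
  .forEach(function (op) {
    printjson({ ns: op.ns, msg: op.msg, secs_running: op.secs_running });
  });
```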
{
"code": "",
"text": "Hi @Pavel_Duchovny ,First of all, thank you so much for your reply.Yes, we are aware of the new hybrid index in 4.2. However we were going with rolling builds index because in the past we experienced issues with secondaries suffering during index creation and not being able to keep up in sync with the primary.Yes, we are planning to upgrade to 4.4. In fact, we already upgraded but had to revert to 4.2 due to a bug in MongoDB 4.4.4 (it should be fixed now in 4.4.6). We are definitely interested in parallel index creation and hidden index features.Do you know if the 1.5 MB/sec read speed scanning collection during index creation is due to some technical limitation really not tied to the server resources? I guess in any case there is no parameter/setting to tune that part other that what you suggest.Thank you so much.\nBest regards,\nFrancesco",
"username": "Francesco_Rivola"
},
{
"code": "",
"text": "Hi @Francesco_Rivola ,To analyse a performance of this type we need full logs and diagnostics.This is best covered by a support subscription.Please consider contacting our sales in that direction as it will be the best approach.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Index creation - slow read speed on scan collection | 2021-06-14T07:46:19.841Z | Index creation - slow read speed on scan collection | 3,798 |
[
"queries",
"python"
] | [
{
"code": "bool = mydb.mycollection.find({\"buttons\": {\"$elemMatch\": {\"userid\": 198832506034716672}}}, limit = 1) == 1",
"text": "Hello!\nI’ve been trying to get my query to work but it always fails, so I’m guessing I have a syntax error of some sort. I basically want to check if a specific userid exists, so in my case my query should return True, but it returns false. Can anyone help?My data:\nMy code:\nbool = mydb.mycollection.find({\"buttons\": {\"$elemMatch\": {\"userid\": 198832506034716672}}}, limit = 1) == 1",
"username": "Lily_B"
},
{
"code": "for doc in mydb.mycollection.find( { \"buttons.userid\": 67890 }, limit = 1 ):\n print(doc)\nlimit = 1findprint(mydb.mycollection.count_documents({ \"buttons.userid\": 67890 }))",
"text": "Hello @Lily_B, welcome to the MongoDB Community forum!I basically want to check if a specific userid exists, so in my case my query should return True, but it returns falseThe following query will work in PyMongo:It will print the first matching document, as you have specified the limit = 1 option. The find method returns a cursor, which will be empty if there is no match.Since you are trying to check if there are any documents exist, you can use the following query (instead of); it prints the count of matching documents.print(mydb.mycollection.count_documents({ \"buttons.userid\": 67890 }))",
"username": "Prasad_Saya"
}
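For anyone landing on this thread with the same question: before wiring the filter into PyMongo, it can help to sanity-check it server-side in the mongo shell. A small sketch, reusing the collection and value from the example above:

```javascript
// Sketch: server-side existence check for the same filter.
// countDocuments with a limit of 1 stops counting at the first match,
// so it returns 1 if any matching document exists and 0 otherwise.
db.mycollection.countDocuments({ "buttons.userid": 67890 }, { limit: 1 })
```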
] | $elemMatch not working with Pymongo | 2021-06-18T02:12:40.753Z | $elemMatch not working with Pymongo | 3,223 |
|
[
"atlas-triggers"
] | [
{
"code": " exports = function(authEvent){\n // Only run if this event is for a newly created user.\n if (authEvent.operationType !== \"CREATE\") { return }\n // Get the internal `user` document\n const { user } = authEvent;\n const users = context.services.get(\"mongodb-atlas\")\n .db(\"MyApp\")\n .collection(\"members\");\n const isLinkedUser = user.identities.length > 1;\n if(isLinkedUser) {\n const { identities } = user;\n return users.updateOne(\n { user_id: user.id },\n { $set: { \n email: user.data.email\n } }\n )\n } else {\n return users.insertOne({ _id: user.id, email: user.data.email })\n .catch(console.error)\n }\n};\n",
"text": "Hello,I have a trigger set up to run a function on “Create” with authentication. This function creates a document in my members collection to contain some basic info about the user. The function looks like this:This trigger works correctly if I manually create the user in the ui, but if I create it from the front end it does not fire. The user is created, but the trigger never fires. Here is the trigger:\nimage2024×1471 153 KB\nAny ideas why it may be failing to trigger?",
"username": "Ian_Wilson"
},
{
"code": "",
"text": "Ok, so the trigger only fires after the user is confirmed through the email confirmation function and successful login. Would be helpful to include this in the documentation",
"username": "Ian_Wilson"
},
{
"code": "",
"text": "I have a similar issue.\nI have a collection Products and a Trigger to fire on any Products update.\nIf I update Products on Compass for instance the Trigger is fired and everything works fine. However, i am using an incoming webhook (third party services - http) to do updates to the Products collection and when Products are updated from that webhook then the Trigger doesn’t fire anymore. I have tried several Authentication options for the function to be triggered but still no luck.\nThe funny thing is that when I first completed my job yesterday it was working and since last night it is just not working anymore. This is critical in my processes so if anybody can Help I will be more than happy.\nAttached the setup of the Trigger and the Function.\nThanks!!\n\nScreen Shot 2021-06-18 at 07.40.101902×1624 256 KB\n\n\nScreen Shot 2021-06-18 at 07.39.342444×1446 223 KB\n",
"username": "Ricardo_Soares"
}
] | Realm Auth Trigger not firing from frontend | 2021-02-07T03:19:49.059Z | Realm Auth Trigger not firing from frontend | 2,531 |
|
null | [
"atlas-functions"
] | [
{
"code": "exports = function() {\ncontext.http.post({\nurl: \"https://fcm.googleapis.com/fcm/send\",\nheaders: {\n \"Authorization key\" : \"AAAAI......\",\n \"Content-Type\" : \"application/json\",\n},\nbody: {\n \"to\" : \"/topics/messaging\",\n \"notification\" : {\n \"body\" : \"Body of Your Notification\",\n \"title\": \"Title of Your Notification\"\n },\n },\n encodeBodyAsJSON: true\n});\n};\n",
"text": "I’m trying no notify firebase topic about any change in one of my collection.What is wrong?This is the function :This is the errorran on Thu Jun 17 2021 19:35:07 GMT+0300 (Israel Daylight Time)\ntook 262.907943ms\nerror:\nuncaught promise rejection: http request: “headers” argument must be a object containing only string keys and string array values",
"username": "Kobi_Meridor"
},
{
"code": "exports = function() {\nreturn context.http.post({\n\"Authoritzation\": \"AAAAIkn1XHI:APA91bFO_JzltUR0hGjjQ5yTOAEM1t5rMeXcdoTNexJ8q_uIW3TYf3La39Lyc_v7bW4mkc46qFE1M2zHfKK0yg22lEO5aHXaNyBtk8vybsJugMXuV-brCBdzVtvAEMZsD-02-NnzDsTv\",\n\"url\" : \"https://fcm.googleapis.com/fcm/send\",\n\"headers\": {\n \"Authorization\":[\"key=AAAAI......\"],\n \"Content-Type\" : [\"application/json\"],\n},\n\"body\": {\n \"to\" : \"/topics/messaging\",\n \"notification\" : {\n \"body\" : \"Body of Your Notification2\",\n \"title\": \"Title of Your Notification1\"\n },\n },\n encodeBodyAsJSON: true\n});\n};\n",
"text": "I found the syntax error",
"username": "Kobi_Meridor"
}
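For reference, the error in the first post was about the shape of the headers argument: context.http.post expects every header value to be an array of strings, not a plain string. A minimal sketch of the corrected call shape (the FCM server key is redacted and merely a placeholder):

```javascript
// Sketch: each header value is a string array, and the body is JSON-encoded.
exports = function () {
  return context.http.post({
    url: "https://fcm.googleapis.com/fcm/send",
    headers: {
      "Authorization": ["key=AAAAI......"],   // redacted/placeholder server key
      "Content-Type": ["application/json"]
    },
    body: { to: "/topics/messaging", notification: { title: "Title", body: "Body" } },
    encodeBodyAsJSON: true
  });
};
```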
] | Https POST on trigger function to Firebase | 2021-06-17T16:45:13.776Z | Https POST on trigger function to Firebase | 2,852 |
[
"graphql"
] | [
{
"code": "",
"text": "Hi,\nI just renamed an input field in a custom resolver I have in Realm.\nThe query works perfectly fine if I use it through the GraphiQL explorer in the Realm UI.\nBut if I use the api endpoint to reach the GraphQL api, the schema seems to not be updated.\nThe query gives me an error saying that this new field name does not exist. And I can keep using the old, non existing, input field.Query docs in GraphiQL\n\nScreenshot 2021-06-02 at 12.40.35752×1222 112 KB\nQuery in GraphiQL\n\nScreenshot 2021-06-02 at 12.41.292170×1254 330 KB\nSame query through /graphql endpoint (note error “In field \"articlenumber\": Unknown field.” )\n\nScreenshot 2021-06-02 at 12.45.43644×711 58.4 KB\nIs the schema perhaps cached? I found information about something similar here https://docs.realm.io/sync/graphql-web-access/how-to-use-the-api but, Im not using realm sync and the endpoint suggested does not seem to exist on my graphql endpoint.How do I make the schema also be updated for external api calls?Best Regards",
"username": "clakken"
},
{
"code": "",
"text": "Hi,\nAn update.\nWe deploy the same app in a dev and prod version.\nThe problem described above only occurs in our dev app.\nThe production app reacted properly to the changes of the field name, whilst the development app did not.Closing up on release of these apps, so any help is much apprecieated.\nThanks!",
"username": "clakken"
},
{
"code": "",
"text": "Hi,\nAnother update.\nNow, one week later, the query works fine in the dev app aswell.\nI guess the schema cache got cleared by itself or something.\nI would still like to know what caused this delay, and also, how in the future we could handle it manually to avoid having to wait a week for it to automatically get solved?UPDATE: Any changes being made, now results in this issue. Ive had to delete and redeploy the app in order to be able to use any changes. Could you please inform me of how this cache can be cleared without having to create a new app on every little change made?Thanks!",
"username": "clakken"
},
{
"code": "",
"text": "Hi!Did you deploy changes to your app via GitHub? There appears to be a minor caching issue for apps deployed via GitHub that we are currently resolving. A temporary workaround is to deploy via the UI if possible to invalidate the cache if possible (any change would suffice here and you can keep your current GraphQL changes)",
"username": "Kush_Patel"
},
{
"code": "",
"text": "Hi! Thanks for the information.\nDeploying through the UI worked.\nIs there any estimation when this fix will done?",
"username": "clakken"
},
{
"code": "",
"text": "No problem!\nThe fix should be out by the middle of next week. I will provide an update if we can get this out sooner or if I have a more accurate estimate.Regards,\nKush",
"username": "Kush_Patel"
},
{
"code": "",
"text": "Hi @clakken!This fix should be live in production now. Please feel free to let us know if you notice any strange behavior again. Thank you for pointing this out!Regards,\nKush",
"username": "Kush_Patel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | GraphQL api endpoint query does not match query in GraphiQL in Realm UI | 2021-06-02T08:56:36.611Z | GraphQL api endpoint query does not match query in GraphiQL in Realm UI | 4,268 |
|
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 5.0.0-rc2 is out and is ready for testing.This release candidate requires extra steps when upgrading a sharded installation from 5.0.0-rc0 or 5.0.0-rc1.Note: Only installations which have all these conditions need these extra steps:To upgrade from 5.0.0-rc0/5.0.0-rc1 to 5.0.0-rc2 the following procedure must be followed:To downgrade from 5.0.0-rc2 to 5.0.0-rc0 or 5.0.0-rc1:\nAs always, please let us know of any issues.\n\n-- The MongoDB Team\n\nMongoDB 5.0 Release Notes | Changelog | Downloads",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 5.0.0-rc2 is released | 2021-06-17T15:25:00.539Z | MongoDB 5.0.0-rc2 is released | 2,911 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "Hi,\nWe have a MongoDB Atlas (Version 4.0.25 + M20) with 1 primary and 2 secondaries. Since 2 days ago we have faced an increase in system write IOPS both in the primary and secondary nodes. As we are still well below the 2000 IOPS which is configured for our cluster I don’t understand why we are facing this increment.\nHow could we fix this IOPS issue?\nRegards",
"username": "Mario_Martinez1"
},
{
"code": "",
"text": "Hi @Mario_Martinez1 and welcome in the MongoDB Community !First of all, if you think there is a problem with the platform and it’s not something you are doing, don’t hesitate to contact the support in the bottom right corner or with the support link at the top in Atlas.IOPS means that you are using the disk (Captain Obvious… I know!). I guess you have the usual read & write amount of operations per minutes you are used to. So of course, your write operations generate a certain “baseline” of IOPS on your disks. Nothing we can really do here.\nBut the read operations, it’s a very different story. When you try to access data from your data set, MongoDB will first look if the document is in the RAM, and if it’s not, it will fetch it on disk (trying to keep it simple…). So read IOPS can be avoided if you have more RAM and if you don’t evict too often documents that are part of your working set.If your read queries sometimes run ad hoc queries that access “unusual documents” (meaning docs outside of the working set - docs from 2019 for example), then it means that you will have to evict useful docs from your RAM for these old docs and generate a lot of IOPS to answer these queries. After that, the opposite happens: evicting from RAM the 2019 docs to load the recent docs once you are back on your normal workload.That’s why analytics workload are usually handled on a specific node to avoid impacting the prod workload.I’m not saying that this is what is happening here. But lack of RAM could be a reason for high IOPS.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
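One way to check the "lack of RAM / working set" theory described above is to look at the WiredTiger cache counters on the node in question. A hedged sketch for the mongo shell; counter names can differ slightly between server versions:

```javascript
// Sketch: a few cache counters that hint at working-set pressure.
var c = db.serverStatus().wiredTiger.cache;
printjson({
  bytesInCache:        c["bytes currently in the cache"],
  cacheMaxBytes:       c["maximum bytes configured"],
  pagesReadIntoCache:  c["pages read into cache"],     // grows quickly when reads miss the cache
  pagesWrittenFromCache: c["pages written from cache"] // cache content being flushed to disk
});
```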
{
"code": "",
"text": "Hi @MaBeuLux88 ! Thank you \nI already contacted MongoDB support but as this is a Root Cause Analysis it is out of scope from the free support. And that’s why I decided to create a topic here (Also recommended by the support).The read IOPS are almost at 0 ops/s but the write ones suddenly changed from 0 to 12 ops/s. I know it is not a lot but I wanted to understand why (we didn’t deploy any change that day) and how we could fix it. I also was curious about it because this happened both in the primary and secondary nodes.\n\nScreenshot 2021-06-17 at 09.15.512958×710 163 KB\nRegards\nMario",
"username": "Mario_Martinez1"
},
{
"code": "",
"text": "It’s not the replication (oplog moving forward + replication on the secondaries) or some writes coming from your client applications?",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "MongoDB write requests and Document writes are the same. It just changes the write IOPS.\nAnd we haven’t received an increase in traffic or deployed changes lately.",
"username": "Mario_Martinez1"
},
{
"code": "",
"text": "I’m short on ideas then . No idea. Maybe someone else will have an idea.",
"username": "MaBeuLux88"
}
] | MongoDB Atlas unusual increase of write IOPS | 2021-06-16T13:29:04.974Z | MongoDB Atlas unusual increase of write IOPS | 3,411 |
null | [
"golang"
] | [
{
"code": "\tmodel := mongo.NewUpdateOneModel().SetFilter(\n\t\tbson.M{\n\t\t\t\"anchorLotteryID\": awardInfo.LotteryID,\n\t\t\t\"uin\": winner,\n\t\t},\n\t).SetUpdate(winningRecord).SetUpsert(true)\n\tmodel := mongo.NewUpdateOneModel().SetFilter(\n\t\tbson.M{\n\t\t\t\"anchorLotteryID\": awardInfo.LotteryID,\n\t\t\t\"uin\": winner,\n\t\t},\n\t).SetUpdate(bson.M{\"$set\": winningRecord}).SetUpsert(true)\n",
"text": "I update golang client version from v1.3.5 to v1.5.2.\nBut I find there has failed in using.This is my old using. It failed and error is “update document must contain key beginning with ‘$’”So,I change it toI think it is very important to compatible with old version!!!",
"username": "Bo_Jiang"
},
{
"code": "> db.coll.updateOne({name:\"Maxime\"}, {surname: \"Beugnet\"})\n{name:\"Maxime\"}replaceOne> db.coll.replaceOne({name:\"Maxime\"}, {surname: \"Beugnet\"})\nOR\n> db.coll.updateOne({name:\"Maxime\"}, {$set: {surname: \"Beugnet\"}})\n> db.coll.updateOne({name:\"Maxime\"}, {surname: \"Beugnet\"})\n",
"text": "Hi @Bo_Jiang and welcome in the MongoDB Community !Update operation changed in MongoDB 4.2: https://docs.mongodb.com/manual/reference/method/db.collection.updateOne/#update-with-an-aggregation-pipeline.Now updateOne can take an aggregation. It wasn’t possible before.They also apparently resolved the trap that many people were falling into:This command, that doesn’t work anymore, was actually overwriting the entire document with {name:\"Maxime\"} which is what the replaceOne command is actually doing.The actual command should be eitherSo, to me, it’s normal and expected that the following command fails and raises an error.At least this resolve the confusion between the 2 commands and hopefully people won’t erase their entire document when they are actually just trying to add a new field in them or set an existing field to a new value.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Error. update document must contain key beginning with '$' | 2021-06-17T07:00:00.296Z | Error. update document must contain key beginning with ‘$’ | 15,163 |
null | [
"data-modeling",
"crud"
] | [
{
"code": "db.isMaster().maxMessageSizeBytes\n",
"text": "I have some code the generates a series of BsonDocuments and tries to insert them into a Mongo Collection. On one of the documents, I get this error.I can see that the size is larger than the reported MaxDocumentSize, but if I executedirectly towards the Mongo server, then et reports 48000000, which is significantly higher than the one reported in the error message.I’m using the .net driver 2.11.3 (and have tried updating to 2.12.4 without success).Is there a way to somehow compress the document being sent - like removing whitespaces in the json document?",
"username": "Benny_Skjold_Tordrup"
},
{
"code": "db.isMastermongod48000000",
"text": "Hello @Benny_Skjold_Tordrup,The error message “Size 19103932 is larger than MaxDocumentSize 16777216” is refering to the maximum size of a MongoDB document that can be stored in a collection - which is 16 Megabytes. See Document Size Limit. Since the document you are trying to insert has more than 16 MB, there is the error.The db.isMaster is related to replica sets, and the command returns a document that describes the role of the mongod instance. The db.isMaster.maxMesageSizeBytes refers to - The maximum permitted size of a BSON wire protocol message. The default value is 48000000 bytes. This is not related to the error message you are getting.",
"username": "Prasad_Saya"
},
{
"code": "/* 1 createdAt:6/17/2021, 9:14:01 AM*/\n{\n\t\"_id\" : ObjectId(\"60caf639b13fa2d7f0678ee8\"),\n\t\"id\" : 422699,\n\t\"sourceId\" : \"Airports-535-1\",\n\t\"origin\" : \"Miljøstyrelsen\",\n\t\"noiseClass\" : \"E\",\n\t\"noiseInterval\" : 5,\n\t\"date\" : \"2016-12-19T00:00:00\",\n\t\"hasGeometry\" : true,\n\t\"geometry\" : {\n\t\t\"type\" : \"MultiPolygon\",\n\t\t\"coordinates\" : [\n\t\t\t[\n\t\t\t... bunch of coordinates \n\t\t\t]\n\t\t]\n\t},\n\t\"square\" : {\n\t\t\"Type\" : \"Polygon\",\n\t\t\"Coordinates\" : [\n\t\t\t[\n\t\t\t\t[\n\t\t\t\t\t12.39719995332741,\n\t\t\t\t\t55.445833896222\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t12.55498453345486,\n\t\t\t\t\t55.44134688006085\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t12.563078945878638,\n\t\t\t\t\t55.53103046866473\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t12.404936833922546,\n\t\t\t\t\t55.53553246668553\n\t\t\t\t],\n\t\t\t\t[\n\t\t\t\t\t12.39719995332741,\n\t\t\t\t\t55.445833896222\n\t\t\t\t]\n\t\t\t]\n\t\t]\n\t},\n\t\"areaLevel\" : 60,\n\t\"last_updated\" : ISODate(\"2021-06-17T09:14:08.345+02:00\")\n},\n",
"text": "What do I then do to get this document stored? It is not an option so split it up. The document has this structure:where the “bunch of coordinates” refer top a list of GeoJson coordinates.What is the reason for the 16 Mb max document size?",
"username": "Benny_Skjold_Tordrup"
},
{
"code": "",
"text": "Can documents be compressed somehow? What serializer settings are used for serializing the json document?",
"username": "Benny_Skjold_Tordrup"
},
{
"code": "",
"text": "What is the reason for the 16 Mb max document size?I believe 16 MB is an optimal size for a document to be stored in a collection. I think, for most practical purposes it suffices. For larger data sizes, usually media data like video files or images, GridFS allows large sized storage.In general, the data stored is modeled (or designed) in such a way so as to store, query (also perform other operations) and use it efficiently in an application. This topic is called as Data Modeling. Note that data storage requires disk drive space and while working with data requires RAM memory. An efficient model takes into consideration various factors like - number of documents, the operations on them, size of a document, storage / memory requirements, etc.Can documents be compressed somehow?No, the document size limit applies.",
"username": "Prasad_Saya"
}
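If it helps anyone hitting the same limit: the BSON size of a candidate document can be measured from the mongo shell before deciding whether to restructure it or move it to GridFS. This is just a shell sketch (the thread itself uses the .NET driver), and the collection name and filter are placeholders:

```javascript
// Sketch: measure the BSON size of one document against the 16 MB (16777216 byte) limit.
var doc = db.someCollection.findOne({ id: 422699 });   // hypothetical source of the oversized document
print(Object.bsonsize(doc) + " of " + (16 * 1024 * 1024) + " bytes");
```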
] | Size 19103932 is larger than MaxDocumentSize 16777216 | 2021-06-17T09:26:09.616Z | Size 19103932 is larger than MaxDocumentSize 16777216 | 19,284 |
null | [
"monitoring"
] | [
{
"code": "$clusterTime\" : { \t\t\"clusterTime\"",
"text": "Hi, i do have clusters that are configured in the same way but in one of them there is not showing info about\n$clusterTime\" : { \t\t\"clusterTime\"I search the internet for 2 days and I didn’t find any clue if this part is enabled by settings?",
"username": "wojtas"
},
{
"code": "$clusterTime$clusterTimedb.version()mongo",
"text": "Welcome to the MongoDB Community @wojtas!The $clusterTime document is an internal reference which will only appear in command responses for replica sets and sharded clusters running MongoDB 3.6+. The absence of this field should not impact your use case unless you are trying to use read operations associated with casually consistent sessions.Per the documentation on Command Responses, the $clusterTime is:A document that returns the signed cluster time. Cluster time is a logical time used for ordering of operations. Only for replica sets and sharded clusters. For internal use only.If this information doesn’t help explain the difference you are observing, please provide some further details on each of your environments:What type of deployment is used (standalone, replica set, or sharded cluster)?What specific version of MongoDB is being used (eg as reported by db.version() in the mongo shell)?What command are you running, and what tool or driver version are you using to run this?Is this a self-managed deployment or using a hosted service (for example, MongoDB Atlas)?If using a hosted service, what cluster tier is being used?Regards,\nStennie",
"username": "Stennie_X"
},
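As a quick way to check what a node is actually returning, the fields can be read off any ordinary command response against a replica set member (MongoDB 3.6+). A minimal shell sketch:

```javascript
// Sketch: read operationTime and $clusterTime from a cheap command response.
var res = db.runCommand({ ping: 1 });
printjson({ operationTime: res.operationTime, clusterTime: res.$clusterTime });
```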
{
"code": "db.version()mongors_test:SECONDARY> rs.status() { \t\"set\" : \"rs_test\", \t\"date\" : ISODate(\"2021-06-16T12:47:01.702Z\"), \t\"myState\" : 2, \t\"term\" : NumberLong(54), \t\"syncingTo\" : \"xxxxx:27017\", \t\"syncSourceHost\" : \"xxxxxx27017\", \t\"syncSourceId\" : 2, \t\"heartbeatIntervalMillis\" : NumberLong(2000), \t\"majorityVoteCount\" : 2, \t\"writeMajorityCount\" : 2, ",
"text": "Hi, Thank you for a quick answer\nHere are answers:What type of deployment is used (standalone, replica set, or sharded cluster)?\n3 nodes replica SETWhat specific version of MongoDB is being used (eg as reported by db.version() in the mongo shell)?\nMongoDB shell version v4.2.8\nMongoDB server version: 4.2.8What command are you running, and what tool or driver version are you using to run this?\njust mongo shell and admin user that has root roleIs this a self-managed deployment or using a hosted service (for example, MongoDB Atlas)?\nSelf-hosted on my VMsIf using a hosted service, what cluster tier is being used?\nDEV Tier.additional notes:\nNO back compatibility -\n{ “featureCompatibilityVersion” : { “version” : “4.2” }, “ok” : 1 }SOME RS info\nrs_test:SECONDARY> rs.status() { \t\"set\" : \"rs_test\", \t\"date\" : ISODate(\"2021-06-16T12:47:01.702Z\"), \t\"myState\" : 2, \t\"term\" : NumberLong(54), \t\"syncingTo\" : \"xxxxx:27017\", \t\"syncSourceHost\" : \"xxxxxx27017\", \t\"syncSourceId\" : 2, \t\"heartbeatIntervalMillis\" : NumberLong(2000), \t\"majorityVoteCount\" : 2, \t\"writeMajorityCount\" : 2, \nso writeMajorityCount is set to 2No replication lag",
"username": "wojtas"
},
{
"code": "$clusterTimewriteMajorityCount$clusterTime$clusterTimemongors.status().ok\nrs.status()['$clusterTime']\ndb.version()\nversion()\ndb.isMaster().ismaster\ndb.isMaster().secondary\ndb.serverBuildInfo().gitVersion\ndb.serverStatus().uptime\n",
"text": "Hi @wojtas,I can’t reproduce this issue with a fresh install of MongoDB 4.2.8 and it sounds like your environments should be fine. Absence of the $clusterTime value shouldn’t affect normal usage of your cluster outside of the causal session option I mentioned, so I wouldn’t be particularly concerned although it is a mysterious outcome.I didn’t find any SERVER issues related to this, but you may want to try upgrading to the latest version of MongoDB 4.2 (currently 4.2.14) as minor releases only include bug fixes & stability improvements.Replication lag and writeMajorityCount should not affect the presence of $clusterTime in command responses.On the replica set that doesn’t display the $clusterTime, can you share the output of running the following in the mongo shell:Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi, Thank You for the follow-up.\nI did that (upgrade to 4.2.14) on one slave node and noting wasn’t better - still no\nclusterTime in output - can I try to upgrade all cluster nodes.I wouldn’t be particularly concerned althoughWe are, dev need them to check something on query. EDIT: i asked and the operationTime is needed for them to monitor query time.Here are command output that You asked:~]# mongo\nMongoDB shell version v4.2.8\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“bla bla bla”) }\nMongoDB server version: 4.2.8\nrs_test:SECONDARY> rs.status().ok\n1\nrs_test:SECONDARY> rs.status()[‘$clusterTime’]\nrs_test:SECONDARY> db.version()\n4.2.8\nrs_test:SECONDARY> version()\n4.2.8\nrs_test:SECONDARY> db.isMaster().ismaster\nfalse\nrs_test:SECONDARY> db.isMaster().secondary\ntrue\nrs_test:SECONDARY> db.serverBuildInfo().gitVersion\n43d25964249164d76d5e04dd6cf38f6111e21f5f\nrs_test:SECONDARY> db.serverStatus().uptime\n156415\nrs_test:SECONDARY>I’m just wondering it that a enabled option? I tried to read some test that You have on GIT repo, and i know to have that visible user if he have some AdvencedTime role. But is it possible that admin user that have root role do not have permissions?EDIT2:\nI dig out the all internet \nhttps://jira.mongodb.org/browse/SERVER-43086?attachmentViewMode=list\nI got something similar where my response to a query is simply ending on the OK.\nNo additional info about operationTime and clusterTime and we need that.EDIT3:\nMore info:\nthe funny part is that this output does not work even when --norc is added and the mongo shell was not authorizedserver1 ~]# mongo --norc\nMongoDB shell version v4.2.8\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“bla bla bla”) }\nMongoDB server version: 4.2.8\nrs_test:SECONDARY> rs.status()\n{\n“ok” : 0,\n“errmsg” : “command replSetGetStatus requires authentication”,\n“code” : 13,\n“codeName” : “Unauthorized”\n}\nrs_test:SECONDARY>where on the second serversever2 ~]# mongo --norc\nMongoDB shell version v4.2.8\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(\"bla ble bla \") }\nMongoDB server version: 4.2.8\nrsamdb:SECONDARY> rs.status()\n{\n“operationTime” : Timestamp(1623927100, 1),\n“ok” : 0,\n“errmsg” : “command replSetGetStatus requires authentication”,\n“code” : 13,\n“codeName” : “Unauthorized”,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1623927100, 1),\n“signature” : {\n“hash” : BinData(0,“0RRWUPgvVn+s+EWkXO1dyDd9e5k=”),\n“keyId” : NumberLong(“6923043821483720705”)\n}\n}\n}\nrsamdb:SECONDARY> db.serverBuildInfo().gitVersion\n43d25964249164d76d5e04dd6cf38f6111e21f5f\nrsamdb:SECONDARY>",
"username": "wojtas"
}
] | clusterTime and operationTime are not visble | 2021-06-16T08:23:27.969Z | clusterTime and operationTime are not visble | 6,655 |
null | [
"connecting"
] | [
{
"code": "const CONNECTION_URL = 'mongodb+srv://user:[email protected]/myFirstDatabase?retryWrites=true&w=majority';\n\nconst PORT = process.env.PORT || 5000;\n\nmongoose.connect(CONNECTION_URL, {useNewUrlParser: true, useUnifiedTopology: true})\n\n .then(()=> app.listen(PORT, ()=> console.log(`server runniung on: ${[PORT]}`)))\n\n .catch((error)=> console.log(error.message));\n\nmongoose.set('useFindAndModify', false);\n",
"text": "",
"username": "Martin_Ntalika"
},
{
"code": "querySRV ECONNREFUSED",
"text": "Hi @Martin_Ntalika,Welcome to the Community!querySrv ECONNREFUSED _mongodb._tcp.cluster0.6hfdm.mongodb.netYou may wish to check out the steps listed in this post as a workaround. As noted in the post, the querySRV ECONNREFUSED error you’ve noted in the title of this post possibly indicates a SRV lookup failure. All official MongoDB drivers that are compatible with MongoDB server v3.6+ should support the SRV connection URI meaning that the issue may be related to environment’s network / DNS configuration.By following the instructions in the post, you can try with the alternate string and see if you’re able to connect or get a different error.Hope this helps.Kind Regards,\nJason",
"username": "Jason_Tran"
},
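To make the workaround concrete: the idea is to connect with the legacy (non-SRV) connection string so that no SRV/TXT DNS lookup is needed. A sketch following the original mongoose setup; the hostnames and replica set name below are placeholders - copy the real ones from the Atlas "Connect" dialog ("Node.js driver 2.2.12 or later" option):

```javascript
// Sketch: connect with the standard connection string format instead of mongodb+srv://.
const mongoose = require("mongoose");

const LEGACY_URL =
  "mongodb://user:[email protected]:27017," +
  "cluster0-shard-00-01.example.mongodb.net:27017," +
  "cluster0-shard-00-02.example.mongodb.net:27017/myFirstDatabase" +
  "?ssl=true&replicaSet=atlas-xxxx-shard-0&authSource=admin&retryWrites=true&w=majority";

mongoose.connect(LEGACY_URL, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log("connected without an SRV lookup"))
  .catch((error) => console.log(error.message));
```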
{
"code": "",
"text": "My gratitude to you Jason saved the day",
"username": "Martin_Ntalika"
},
{
"code": "",
"text": "Thanks for the kind words Martin ",
"username": "Jason_Tran"
}
] | querySrv ECONNREFUSED _mongodb._tcp.cluster0.6hfdm.mongodb.net | 2021-06-09T12:38:00.788Z | querySrv ECONNREFUSED _mongodb._tcp.cluster0.6hfdm.mongodb.net | 7,622 |
[
"atlas-device-sync",
"android"
] | [
{
"code": "package org.wildaid.ofish.data\n\nimport android.content.Context\nimport android.util.Log\nimport androidx.lifecycle.MutableLiveData\nimport io.realm.*\nimport io.realm.kotlin.toFlow\nimport io.realm.kotlin.where\nimport io.realm.log.LogLevel\nimport io.realm.log.RealmLog\nimport io.realm.mongodb.App\nimport io.realm.mongodb.AppConfiguration\nimport io.realm.mongodb.AppException\nimport io.realm.mongodb.Credentials\nimport io.realm.mongodb.sync.SyncConfiguration\nimport kotlinx.coroutines.flow.Flow\nimport org.bson.Document\nimport org.bson.types.ObjectId\nimport org.wildaid.ofish.BuildConfig\nimport org.wildaid.ofish.data.report.MPA\n",
"text": "Realm version 10.5.1\nMicrosoftTeams-image (6)2690×1212 350 KB\nfun restoreLoggedUser(): io.realm.mongodb.User? {\nreturn realmApp.currentUser()?.also {\ninstantiateRealm(it)\n}\n}while restoring its not working",
"username": "kunal_gharate"
},
{
"code": "",
"text": "@kunal_gharate : Thanks for reaching out to us, let me have a look and get back to you soon.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "@kunal_gharate : I am able to use the WildAid app with Realm 10.5.1. Have you followed the steps mentioned in the Building and running the app section",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Yes same file i used in my project",
"username": "kunal_gharate"
},
{
"code": "",
"text": "@kunal_gharate : Can you please share the clone project via GitHub?",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "I cant share the source but I can show you on my pc currently this issue with all dependencies i usedi have triedclasspath “io.realm:realm-gradle-plugin:10.4.0”\nclasspath “io.realm:realm-gradle-plugin:10.5.1”\nclasspath “io.realm:realm-gradle-plugin:10.6.0” (beta)",
"username": "kunal_gharate"
}
] | Stuck on login realm using existing user object | 2021-06-16T11:39:04.698Z | Stuck on login realm using existing user object | 2,979 |
|
null | [
"security"
] | [
{
"code": "",
"text": "Why would authentication at MongoDB be needed if my Node.js backend already has 2FA installed?",
"username": "Pieter_16800"
},
{
"code": "",
"text": "Hi @Pieter_16800, can you please provide more details regarding your doubt?\nI am assuming by “2FA installed” you mean that you are handling authentication(like Login/Signup) of your app/website users in your backend using 2FA, but please let me know if I am missing something here.The reason why you would still be needing authentication at the MongoDB side is to protect your MongoDB deployment from unwanted access from anywhere else.Also, suppose your analytics team wants to use the data stored in MongoDB and they want to perform some read operations(like aggregation queries) to gain insights about the product performance, etc, usually in these cases, to avoid the risk of accidental deletion/modification of the existing data you would create a database user with the appropriate read permissions & selected collections on which they want to perform the analytics.Hence, authentication at MongoDB is really important, even if you are implementing 2FA for your users in the backend.In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer",
"username": "SourabhBagrecha"
},
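To make the "appropriate read permissions" example above concrete, here is a minimal mongo shell sketch of creating a read-only database user (for instance for an analytics team). The database name, user name and password handling are placeholders:

```javascript
// Sketch: a database user that can only read one database.
db.getSiblingDB("recipesdb").createUser({
  user: "analytics_reader",
  pwd: passwordPrompt(),                       // prompts for the password instead of hard-coding it
  roles: [ { role: "read", db: "recipesdb" } ] // read-only access, no writes or deletes
});
```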
{
"code": "",
"text": "SourabhHi Sourabh, thanks for your input. And yes, I mean “2FA implemented”, sorry for using sloppy English. I have build a subsription service for cookbook recipes with Nextjs frontend at Vercell using a Sanity backend (Reactjs) for the recipe and news data and a Nodejs backend (using MongoDB) at Heroku for the user profiles. In Sanity Studio food editors can edit recipes and news items. In MongoDB users can store recipes, create there own recipes, send emails with recipe details, create a shopping list. These users first need to create a user account to be authorised for these activities. The implementation of 2FA involves Nodemailer, Sendgrid and JasonWebTokens. You can try for yourself at https://hetkookt.vercel.app/:-) This project is a Proof Of Concept, so no team is involved, just yet. But I sure want to know why this is not safe. Thanks, Pieter Roozen",
"username": "Pieter_16800"
},
{
"code": "Connection-String",
"text": "Hi @Pieter_16800, that’s awesome, MERN stack is my favorite, and I use it in almost all of my full stack projects, and Next.js & Vercel are my go-to choices for SSR(server-side rendering).Having said that, even if you are the only person working on the project, I would still recommend you to implement authorization on your MongoDB server.Also, it may happen that you are leveraging authorization in MongoDB without even knowing it, for e.g.: when you are connecting your Node.js server with your MongoDB deployment, you might be connecting them through a Connection-String, which in fact asks you to add the username & password of the database user that you have created.A typical connection-string would look something like this:mongodb://[username:password@]host1[:port1][,…hostN[:portN]][/[defaultauthdb][?options]]So having a good knowledge of how authorization/authentication works in MongoDB will give you a better understanding of how things work behind the scene and how you can better secure them.I hope it helps.\nIn case you have any doubts, please feel free to reach out to us.Thanks & Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Yes, connecting with a “connection-string” is exactly how I grant access in my case to Heroku, so that is safe, right? What would this whole authentication thing at the same time do any extra? Does it give access to users on developer level to a specific database, or what? Thanks, Pieter",
"username": "Pieter_16800"
},
{
"code": "",
"text": "Hi @Pieter_16800, yes that’s safe, but just make sure that you are only providing the permissions that are necessary to do the job(running the server).There’s a lot more than just the connection string that MongoDB provides, you’ll learn a lot about then in this course: Authentication & Authorization.For e.g., running a single MongoDB instance is not ideal in production, you usually replicate your data among different instances in a single replica-set.\nAlso, when your app grows to a point where running a single replica-set to store all your data, then you might want to shard your very giant collections into multiple replica-sets.In cases like these you need a mechanism to internally authenticate different MongoDB instances among each other for communication and data transfer purposes. Hence, concepts like Internal/Membership Authentication helps you in achieving that.There are a lot of things to learn about Security that MongoDB provides out-of-the-box, such as authentication, access control, encryption, to secure your MongoDB deployments. Learn more.I hope it helps.In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Thank you for your detailed answer. I fully understand now! Cheers, Pieter",
"username": "Pieter_16800"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | My Node.js backend already has 2FA | 2021-06-16T10:03:43.641Z | My Node.js backend already has 2FA | 2,823 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi all Green again, Im trying to implement a “favorite” functionality and I have basically two solutions, but cant say which one would work best. Here is the use case:I have a collection of Aliments that users can browse. The users can then put one or more Aliment as a favorite. When the user enters his favorite section, it must be able to do the same queries done when looking for a “public” Aliment (before is moved to the favorite section). The queries are by name and tags.My two solutions are the following:The act of put an Aliment as favorite will duplicate the aliment and set the user id as the owner of that aliment. In this case I can keep the existing queries and just add the condition on the user id if the queries come from the favorite section where only the favorite Aliment must be shown. I like this solution, but in the event a public aliment gets an update (for example the addition of a translation) the users will not see this update in their favorite collection. Of course these updates are not frequent, so when a new translation is added to a public aliment, I can go and update all the Aliment copied in the favorite collection of each user.Use an AlimentFavorite collection that holds the userId and the AlimentId and use an aggregation + lookup. The query would add the filter conditions (name and tags) and it will do a subpipeline lookup to match only the aliment where an AlimentFavorite exists (more like relational database style). I was also thinking that, if in the future I need to shard, then these lookups could be across shard, which will then degrade the query performance (correct me if Im wrong)Thank youGreen",
"username": "Green"
},
{
"code": "$lookup",
"text": "Hello @Green, I think the first approach you have explained looks more appropriate to me. This is because the updates on the public aliment are not frequent. These updates can be applied to the user favorites once daily, for example, for all users and updated aliments as a batch job. Also, your querying will be efficient (simpler and better performant). It is not an uncommon scenario to store duplicated data across collections, especially if the updates are infrequent.The second approach needs creating data (user and aliment) in a separate collection (this is also duplicating data), and querying with aggregate $lookup can be complicated and tax on the performance.",
"username": "Prasad_Saya"
},
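To make the "batch job" idea above concrete, here is a sketch of propagating an update on a public Aliment to every user's favorited copy. The field names and values are assumptions based on the description in this thread (a copied aliment keeps a reference back to its public source):

```javascript
// Hypothetical identifiers - the real values come from the public Aliment that was just updated.
var publicAlimentId = ObjectId("000000000000000000000000");
var newTranslation  = { lang: "fr", name: "Pomme" };

// Push the change to every user's favorited copy of that aliment.
db.favorites.updateMany(
  { sourceAlimentId: publicAlimentId },          // assumed field linking the copy to its public source
  { $push: { translations: newTranslation } }    // assumed shape of the translations array
);
```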
{
"code": "",
"text": "@Prasad_Saya Thank you! The first approach look better to me too.",
"username": "Green"
}
] | Favorite funcionality | 2021-06-17T07:56:51.390Z | Favorite funcionality | 3,179 |
null | [
"connecting"
] | [
{
"code": "",
"text": "Total newbie to mongodb and not a networking pro.I hve to connect to a mongodb database remotely. As it has restricted IP addresses which can establish a connection with it, everytime it has to whitelist my IP address. But IP addresses keeps on changing. So what are the easiest workaround so as to avoid this each time asking for whitelisting to my manager.",
"username": "geetha_vasu"
},
{
"code": "0.0.0.0/0",
"text": "Hi @geetha_vasu,Welcome to the MongoDB University.We suggest you use 0.0.0.0/0 for all university courses.This isn’t a recommended setting for production environments but will help with access to your course cluster if you do not have a fixed IP address.Thank you,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb remote connection IP whitelisting | 2021-06-17T07:08:27.208Z | Mongodb remote connection IP whitelisting | 2,719 |
null | [
"crud"
] | [
{
"code": "_id: \"xx\"\nstatuses: [{\n status: \"pending\",\n timestamp: \"1 january 10 pm\",\n}, {\n status: \"accepted\",\n timestamp: \"2 january 2 am\",\n}]\n_id: \"xx\"\nstatuses: [{\n status: \"pending\",\n timestamp: \"1 january 10 pm\",\n}, {\n status: \"accepted\",\n timestamp: \"2 january 2 am\",\n}]\nlast_accepted_at: \"2 january 2 am\"\ndb.task.findOneAndUpdate( { last_accepted_at: { $exists: false } }, { $set: { \"last_accepted_at\": \"statuses.$[element].timestamp\" }}, { arrayFilters: [ { \"element.status\": \"accepted\" } ] }, )\nuncaught exception: Error: findAndModifyFailed failed: {\n \"ok\" : 0,\n \"errmsg\" : \"The array filter for identifier 'element' was not used in the update { $set: { last_accepted_at: \\\"statuses.$[element].timestamp\\\" } }\",\n \"code\" : 9,\n \"codeName\" : \"FailedToParse\"\n} :\ndb.task.findOneAndUpdate( { last_accepted_at: { $exists: false } }, { $set: { \"last_accepted_at\": \"statuses.$[element].timestamp\" }}, { arrayFilters: [ { \"element.$.status\": \"accepted\" } ] }, )\nuncaught exception: Error: findAndModifyFailed failed: {\n \"ok\" : 0,\n \"errmsg\" : \"The array filter for identifier 'element' was not used in the update { $set: { last_accepted_at: \\\"statuses.$[element].timestamp\\\" } }\",\n \"code\" : 9,\n \"codeName\" : \"FailedToParse\"\n} :\n",
"text": "I have this document structure:now i would like to add the timestamp of accepted in the root structure.I know how to set value based on another root field, but not from an item of array that I have to filter.I tried:it has errorI also tried:How to achieve this? also I would like to use updateMany since its a migration file to update all old data. Thanks",
"username": "Ariel_Ariel"
},
{
"code": "updateManydb.collection.updateOne(\n { _id: \"xx\", last_accepted_at: { $exists: false } },\n [\n {\n $set: {\n status_accepted: { \n $arrayElemAt: [ {\n $filter: {\n input: \"$statuses\",\n cond: { $eq: [ \"accepted\", \"$$this.status\" ] }\n }\n }, 0 ]\n }\n } \n },\n { \n $set: { \n last_accepted_at: \"$status_accepted.timestamp\", \n status_accepted: \"$$REMOVE\" \n } \n }\n ]\n)",
"text": "Hello @Ariel_Ariel, you can try this update using Aggregation Pipeline. This can be used with updateMany also. Note that this requires MongoDB v4.2 or greater.",
"username": "Prasad_Saya"
},
{
"code": "db.collection.updateOne(\n { _id: \"xx\", last_accepted_at: { $exists: false } },\n [\n {\n $set: {\n last_accepted_at: {\n $let: {\n vars: {\n status_accepted: { \n $arrayElemAt: [ {\n $filter: {\n input: \"$statuses\",\n cond: { $eq: [ \"accepted\", \"$$this.status\" ] }\n }\n }, 0 ]\n }\n },\n in: \"$$status_accepted.timestamp\"\n }\n }\n } \n },\n ]\n)",
"text": "This is slightly different way of doing the same update:",
"username": "Prasad_Saya"
}
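Since the goal in this thread is a one-off migration over all old documents, the same pipeline update also works with updateMany. A sketch that drops the _id filter so every document missing the field is updated (collection and field names taken from the question above):

```javascript
// Sketch: apply the pipeline update to every document that is still missing last_accepted_at.
db.task.updateMany(
  { last_accepted_at: { $exists: false } },
  [
    {
      $set: {
        last_accepted_at: {
          $let: {
            vars: {
              // first array element whose status is "accepted"
              status_accepted: {
                $arrayElemAt: [
                  { $filter: { input: "$statuses", cond: { $eq: [ "accepted", "$$this.status" ] } } },
                  0
                ]
              }
            },
            in: "$$status_accepted.timestamp"
          }
        }
      }
    }
  ]
)
```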
] | Mongo make migration to set value based on value of a subdocument in array | 2021-06-17T02:27:29.959Z | Mongo make migration to set value based on value of a subdocument in array | 2,635 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hello!\nWe have some problems with 3 nodes replicaset cluster. Two or three times a week, one of the secondary nodes loses connection to the master node, starts to give the error mongodb_target_down host: port mongodb error and nothing helps to fix it. We have to stop the process, delete the data, and start it to replicate. At the same time, there is a connection between the servers, there are no network problems, monitoring (prometheus) of many metrics does not give any problems. Can anyone help with this issue? Mongo is 4.4 in docker container",
"username": "Anton_Dvornikov"
},
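When the secondary drops out again, a first-pass health check from the mongo shell can narrow things down before resorting to a full data wipe and resync. A small sketch (run against any reachable member of the replica set):

```javascript
// Sketch: member states, heartbeat messages and replication lag at a glance.
rs.status().members.forEach(function (m) {
  print(m.name + "  " + m.stateStr + "  " + (m.lastHeartbeatMessage || ""));
});
rs.printSecondaryReplicationInfo();   // per-secondary lag behind the primary (4.4+ shell name)
```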
{
"code": "",
"text": "Welcome to the community, @Anton_Dvornikov! Glad to have you here!Recently, we were in a similar situation where Docker failed to mount the data volume and lost some files after a restart. That’s the theory we are going with for now till we dig deeper to find the root cause. Anyway, we downgraded docker to 19.03.9 from latest, on centos 7, and we haven’t experienced it since then. It’s something to try if any of this applies to you. I will be curious to find out the final resolution though.Thanks,\nMahi",
"username": "mahisatya"
},
{
"code": "",
"text": "Hello, Mahi!\nThank you for your answer, we will try to make a downgrade and see what happens, I hope this will help. Have a nice day!",
"username": "Anton_Dvornikov"
}
] | Problem with replication | 2021-06-16T13:24:29.985Z | Problem with replication | 1,813 |
null | [
"queries",
"java"
] | [
{
"code": "",
"text": "I have a nested data model:\ncountry: id\nregion: id\ntime: 1999L\ncontentList: // can have 20000-50000 itemsFilter by country id and get the document with the max time for each unique region. Then slice the contentList (top 1000 items with the highest scores)Finally put all the list items together and sort again by score to return top 1000 items.Yes it’s complicated logic.Someone who has a good solution in Java ?Thank you :slight_smile",
"username": "Yaqin_Chen_Hedin"
},
{
"code": "",
"text": "The conversation continues in another thread for those who are intrested:Hi 🙂 I am totally new to MongoDB. I need help to improve my Java Query. The query is to get all the child array elements to a document with highest time for a distinct region. Since I use Accumulators.first I need to sort first. My question is: is...",
"username": "Yaqin_Chen_Hedin"
}
] | Java query - group Max return a child array field | 2021-06-14T14:48:43.568Z | Java query - group Max return a child array field | 1,922 |
null | [
"aggregation",
"queries",
"data-modeling",
"java"
] | [
{
"code": " Bson match = Aggregates.match(\n Filters.eq(\"country._id\", countryId)\n );\n\n Bson sort = Aggregates.sort(\n Indexes.descending(\"time\", \"region._id\")\n );\n\n Bson group = Aggregates.group(\n \"region._id\",\n Accumulators.max(\"time\", \"$time\"),\n Accumulators.first(\"contentList\", \"$contentList\")\n );\n\n List<Data> dataList = dataMongoCollection.aggregate(\n Arrays.asList(\n match,\n sort,\n group\n )\n ).into(new ArrayList<>());\n{\n \"_id\":\"ObjectId(\"\"60\"\")\",\n \"contentList\":[\n {\n \"colors\":[\n {\n \"displayName\":\"Red\",\n \"reference\":\"red_0\",\n \"value\":\"red\"\n }\n ],\n \"country\":{\n \"_id\":\"countryId\",\n \"name\":\"Sweden\"\n },\n \"region\":{\n \"_id\":\"regionId\",\n \"name\":\"Stockholm\"\n },\n \"score\":20002.4,\n \"time\":NumberLong(16237),\n \"weights\":[\n {\n \"displayName\":\"Red\",\n \"reference\":\"weight_0\",\n \"value\":0.08\n }\n ]\n }\n ],\n \"country\":{\n \"_id\":\"c_0\",\n \"name\":\"Sweden\"\n },\n \"granularity\":\"PT15M\",\n \"region\":{\n \"_id\":\"r_0\",\n \"name\":\"Stockholm\"\n },\n \"time\":NumberLong(1623751979098)\n}\n",
"text": "Hi I am totally new to MongoDB. I need help to improve my Java Query. The query is to get all the child array elements to a document with highest time for a distinct region. Since I use Accumulators.first I need to sort first. My question is: is there a way to skip the sorting part but still get the correct child array for a distinct region with the highest time ?Many thanks in advance Sample document:",
"username": "Yaqin_Chen_Hedin"
},
{
"code": "{ country._id : 1 , time : -1 , region._id : -1 }",
"text": "I do not think that the sort is helping you very much and might add processing time if you do have an index that looks like { country._id : 1 , time : -1 , region._id : -1 }. I am not sure if the order of time and region._id in the index is important. Anyhow I would try without the sort first.Dumping everything into an ArrayList might be slower than using cursor methods as in some circumstances you might be able to start your local processing before all matching documents are done with the pipeline.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you, steevej Appreciated very much.\nSince I use first to extract ContentList I believe that sort is necessary since first just return the first best document not the one with the highest value of time. I have tested only with first without sorting it doesn’t work.Good point with your comment on dumping part Is there a way to extract the contentList for the correct document, i.e. the document with the highest value of time to a distinct region without using first ?",
"username": "Yaqin_Chen_Hedin"
},
{
"code": "$group$groupAccumulators.maxAccumulators.first{ \"country._id\": 1, \"time\": -1, \"region._id\": -1, \"contentList\": 1}\nregion._idexplain[\n {\n '$match': {\n 'country._id': 'c_0'\n }\n }, {\n '$sort': {\n 'time': -1, \n 'region._id': -1\n }\n }, {\n '$group': {\n '_id': '$region._id', \n 'time': {\n '$first': '$time'\n }, \n 'contentList': {\n '$first': '$contentList'\n }\n }\n }\n]\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"test.coll\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"country._id\" : {\n\t\t\t\t\t\t\t\"$eq\" : \"c_0\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"B626CD4F\",\n\t\t\t\t\t\"planCacheKey\" : \"68E972A3\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION_DEFAULT\",\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"contentList\" : 1,\n\t\t\t\t\t\t\t\"region._id\" : 1,\n\t\t\t\t\t\t\t\"time\" : 1,\n\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"country._id\" : 1,\n\t\t\t\t\t\t\t\t\t\"time\" : -1,\n\t\t\t\t\t\t\t\t\t\"region._id\" : -1,\n\t\t\t\t\t\t\t\t\t\"contentList\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"country._id_1_time_-1_region._id_-1_contentList_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"country._id\" : [ ],\n\t\t\t\t\t\t\t\t\t\"time\" : [ ],\n\t\t\t\t\t\t\t\t\t\"region._id\" : [ ],\n\t\t\t\t\t\t\t\t\t\"contentList\" : [\n\t\t\t\t\t\t\t\t\t\t\"contentList\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"country._id\" : [\n\t\t\t\t\t\t\t\t\t\t\"[\\\"c_0\\\", \\\"c_0\\\"]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"time\" : [\n\t\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"region._id\" : [\n\t\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"contentList\" : [\n\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$group\" : {\n\t\t\t\t\"_id\" : \"$region._id\",\n\t\t\t\t\"time\" : {\n\t\t\t\t\t\"$first\" : \"$time\"\n\t\t\t\t},\n\t\t\t\t\"contentList\" : {\n\t\t\t\t\t\"$first\" : \"$contentList\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"hafx\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.4.6\",\n\t\t\"gitVersion\" : \"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\"\n\t},\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1623794133, 5),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1623794133, 5)\n}\n",
"text": "Hi @Yaqin_Chen_Hedin !I think in your case you case use an index on the $group stage because you are in this very particular case documented here in the $group section:But in this case, the index will only work on the $group stage if you ONLY use $first in your $group stage. As you are sorting in descending order right before anyway, the first value of $time should always be the first & max one at the same time. So replace Accumulators.max by Accumulators.first on the $time and create the index:I’m not 100% this will work because you are using region._id as your “group by” field.Can you give it a try and let me know what the explain output of this aggregation tells you?I did a little test with your single document and this aggregation:I get this:But I’m not sure if the $group is actually covered by the index here… But at least the index is used so it’s a good sign.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @MaBeuLux88 \nThank you very much for your response ! Highly appreciated Your query is very similar to mine. I want to avoid the sort step since the data can be very very big. Best regards\nyEH",
"username": "Yaqin_Chen_Hedin"
}
] | Java query - sort, group by with max | 2021-06-15T05:35:31.554Z | Java query - sort, group by with max | 5,893 |
null | [
"data-modeling",
"capacity-planning"
] | [
{
"code": "",
"text": "I need to provide collection for each user, but on atlas creating too many collection on single cluster can slow down writes so i will just spin new cluster programmatically after hitting 300 collections. The problem will be managing too many connections on the backend. To what extent I can scale this strategy?",
"username": "Arnav_Singh1"
},
{
"code": "",
"text": "Hi @Arnav_Singh1 and welcome in the MongoDB Community !Here is a topic that will help a bit I think:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I am creating a mini database. so, each user will have different index and schema. so, the suggested strategy won’t be useful. What will be the pitfalls of my strategy of spinning new cluster after reaching 300 to 400 collection",
"username": "Arnav_Singh1"
}
] | Multi tenant usecase of atlas | 2021-06-16T19:04:40.032Z | Multi tenant usecase of atlas | 5,347 |
null | [
"swift"
] | [
{
"code": "",
"text": "Hi,Am going through the docs for the swift sync driver, and am trying to find details on how to query by an ISODate. Is this conforming to ISO8601 by chance?Mark",
"username": "Mark_Windrim"
},
{
"code": "",
"text": "Hi Mark,Are you using the MongoDB Swift driver or the MongoDB Realm Swift SDK? If using the Realm SDK, are querying the local Realm database?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hi,I’m using the MongoDB Swift Sync Driver (linux) - not using Realm.Mark",
"username": "Mark_Windrim"
},
{
"code": "ISODateISODateDateDate// create some reference dates\nlet now = Date()\nlet yesterday = Date(timeIntervalSinceNow: -1 * 24 * 60 * 60)\nlet twelveHoursAgo = Date(timeIntervalSinceNow: -1 * 12 * 60 * 60)\n\n// insert some data with various date values\ntry collection.insertMany([\n [\"_id\": 1, \"d\": .datetime(yesterday)],\n [\"_id\": 2, \"d\": .datetime(yesterday)],\n [\"_id\": 3, \"d\": .datetime(now)],\n [\"_id\": 4, \"d\": .datetime(now)]\n])\n\n// queries with exact dates\nlet nowDocs = try collection.find([\"d\": .datetime(now)])\nprint(try Array(nowDocs).map { try $0.get() } )\n\nlet yesterdayDocs = try collection.find([\"d\": .datetime(yesterday)])\nprint(try Array(yesterdayDocs).map { try $0.get() } ) \n\n// query with a range\net recentDocs = try collection.find([\"d\": [\"$gt\": .datetime(twelveHoursAgo)]])\nprint(try Array(recentDocs).map { try $0.get() } )\n[{\"d\":{\"$date\":\"2021-06-16T19:01:24.337Z\"},\"_id\":3}, {\"_id\":4,\"d\":{\"$date\":\"2021-06-16T19:01:24.337Z\"}}]\n[{\"_id\":1,\"d\":{\"$date\":\"2021-06-15T19:01:24.337Z\"}}, {\"_id\":2,\"d\":{\"$date\":\"2021-06-15T19:01:24.337Z\"}}]\n[{\"d\":{\"$date\":\"2021-06-16T19:01:24.337Z\"},\"_id\":3}, {\"d\":{\"$date\":\"2021-06-16T19:01:24.337Z\"},\"_id\":4}]\nDateDateFormatter",
"text": "Hi Mark, thanks for reaching out.By ISODate, maybe you are referring to ISODate in the MongoDB shell? That is a convenience helper for creating a new MongoDB datetime. The database itself stores dates as signed 64-bit integers representing the number of milliseconds since the Unix epoch.The MongoDB drivers, Swift included, typically have facilities for converting native date types in each language to/from that format the database uses. In the Swift driver case, the relevant type is Date: Apple Developer DocumentationIf you want to insert or query with a Swift Date object you can do something like the following:This will print:If you need to convert to/from ISO-8601 formatted date strings and Swift Dates I would suggest looking into DateFormatter: Apple Developer DocumentationLet me know if that answers your question or if you need more information!-Kaitlin",
"username": "kmahar"
},
{
"code": "",
"text": "Hi Kaitlin,As always, your answers are incredibly detailed and helpful. Everything is working. That ISODate in the shell was leading me down the wrong path.Thanks,\nMark",
"username": "Mark_Windrim"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to query by ISODate? | 2021-06-16T15:01:05.230Z | How to query by ISODate? | 20,161 |
null | [
"java"
] | [
{
"code": "",
"text": "I have executed the db.serverStatus() in noSQLBooster and in Java Program both gave different outputs.[\n{$date=2021-06-16T04:10:17.993Z}] - in JAVA ProgramISODate(“2021-06-16T09:23:27.075+05:30”) - in noSQLBooster. Can you please let us know why.I want the same value in Java also how to get that?",
"username": "Pradeep_kumar.K"
},
{
"code": "",
"text": "Hi @Pradeep_kumar.K and welcome in the MongoDB Community !Looks like the same date to me almost but expressed in different timezones. I don’t know what noSQLBooster is, but I guess there is a setting in there to set the timezone in which dates are expressed by default… Or it’s just using the default of your system which is most probably set to the Indian TZ while Java is probably using UTC by default.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Why the value of localTime is different when I checked using noSQL Booster and while I am getting the same attribute using Java? | 2021-06-16T04:11:23.236Z | Why the value of localTime is different when I checked using noSQL Booster and while I am getting the same attribute using Java? | 2,447 |
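For completeness, a minimal Java sketch of formatting the serverStatus localTime in a specific timezone, which is what the GUI tool appears to do. The connection string and the Asia/Kolkata zone are assumptions for illustration; the driver always hands back the BSON date as a java.util.Date holding a UTC instant.

import java.util.Date;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import org.bson.Document;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class LocalTimeExample {
    public static void main(String[] args) {
        // Connection string is a placeholder.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            Document status = client.getDatabase("admin").runCommand(new Document("serverStatus", 1));
            // The driver returns BSON dates as java.util.Date, which carries no timezone;
            // toString() just prints it in the JVM default (often UTC on servers).
            Date localTime = status.getDate("localTime");
            String inIst = localTime.toInstant()
                    .atZone(ZoneId.of("Asia/Kolkata"))   // assumed target zone
                    .format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
            System.out.println(inIst); // e.g. 2021-06-16T09:41:23.236+05:30
        }
    }
}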
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "I wanna perform some actions upon inserting a document. The details are like this:\nI have 2 collections called “product” and “import”. The product collection has the field “availability” indicating how many products are there in total. The import collection has the field “amount” indicating how many products are being imported.\nWhen inserting an import document, I would like to increase the field “availability” by the “amount”. So I created a trigger and I was able to achieve what I want. So far so good.\nBut the problem is there is a constraint for the import collection. The “amount” field must be a positive number. So I would like to reject the document if it doesn’t satisfy the condition. But I can’t seem to find the way. Because when the trigger is triggered, the document was already inserted to the collection. What am I supposed to do? Should I delete the document by its _id? That doesn’t seem like a right way to do it. I am thinking of something like “trigger before” like in any RDBMS. Should I perform the validation on client machines? If so, what if somebody inserts documents directly by using the console on the web?I am totally new to mongodb as well as any NoSQL in general. So maybe my mindset of designing database is inappropriate. I will appreciate anyone’s help.",
"username": "Dao_Lac"
},
{
"code": "",
"text": "Hi @Dao_Lac - welcome to the community forum!You’re correct that it’s too late to prevent the write once the trigger has been invoked.You have a couple of options:",
"username": "Andrew_Morgan"
},
{
"code": "amount{\n \"title\": \"Import\",\n \"properties\": {\n \"amount\": { \"bsonType\": \"int\" }\n },\n \"validate\": {\n \"amount\": { \"$gte\": 0 }\n }\n}\nexports = async function(arg){\n const importCollection = context.services.get(\"mongodb-atlas\").db(\"test\").collection(\"import\")\n \n await importCollection.insertOne({ amount: 3 }); // This will succeed because amount >= -1\n await importCollection.insertOne({ amount: -1 }); // This will fail with a schema validation error\n};\n",
"text": "Hey Dao!In addition to what Andrew mentioned, you may also want to try out the change validation that’s built in to Realm’s JSON schemas. You can validate that amount is always greater than zero and, if it isn’t, reject the insert/update.Using the above schema you could test that it works with the following function:",
"username": "nlarew"
}
] | Trigger - Can I reject the document being inserted? | 2021-06-16T14:29:36.416Z | Trigger - Can I reject the document being inserted? | 2,614 |
null | [] | [
{
"code": "",
"text": "HiWhere can I download MongoDB Manual?\nThis link doesn’t work https://docs.mongodb.com/master/mongodb-manual-master.epub",
"username": "111404"
},
{
"code": "",
"text": "Looks like they broke it it is pointing to v5.0.Try:\nhttps://docs.mongodb.com/v4.4/mongodb-manual-master.epub",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Manual Download | 2021-06-16T14:35:54.863Z | MongoDB Manual Download | 2,311 |
null | [
"java",
"monitoring"
] | [
{
"code": "",
"text": "I am interested in seeing org.mongodb.driver.protocol.query debug logs. I believe these are going to come from here:- https://github.com/mongodb/mongo-java-driver/blob/master/driver-core/src/main/com/mongodb/internal/connection/QueryProtocol.java (and they are called when we do collection.find). Doing logging.level.org.mongodb.driver=DEBUG doesn’t seem to help. Is there anything else i need to do? Do i have to go to exact level like logging.level.org.mongodb.driver.protocol.query=DEBUG?",
"username": "Rahul_Singh1"
},
{
"code": "org.mongodb.driver.protocol.commandorg.mongodb.driver",
"text": "For any server past version 3.0, QueryProtocol, and therefore that logger, is no longer used by the driver. The logger you want is org.mongodb.driver.protocol.command. That said, if org.mongodb.driver isn’t picking that up, something seems wrong in your configuration. Best I can do is point you to Logging, which links to general SLF4J logging info.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "@Jeffrey_Yemin :- I believe my configuration is correct, and that is how rest of loggers are enabled , plus this is a very standard way to change log level in any spring application.",
"username": "Rahul_Singh1"
},
{
"code": "",
"text": "@Jeffrey_Yemin :- Logs have started to come now. Before we close this thread, i want one info on protocol.command logs. From what i understand, they log just before query is sent , and just after response is received from database, which essentially means it is capturing the network call time. Am i right?",
"username": "Rahul_Singh1"
},
{
"code": "",
"text": "That’s correct that it is essentially capturing network time (and TLS state machine execution).I’m glad it’s working now. Do you know what the issue was?Regards,\nJeff",
"username": "Jeffrey_Yemin"
}
] | Enable debug logs for Mongo java driver | 2021-06-15T15:53:41.552Z | Enable debug logs for Mongo java driver | 7,867 |
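As a follow-up to the logging discussion above, a hedged Java sketch of the driver's command monitoring API, which can be used instead of (or alongside) the org.mongodb.driver.protocol.command logger to measure the round trip of each command. The connection string parameter is a placeholder; this is one possible approach, not the only way to capture these timings.

import java.util.concurrent.TimeUnit;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.event.CommandFailedEvent;
import com.mongodb.event.CommandListener;
import com.mongodb.event.CommandStartedEvent;
import com.mongodb.event.CommandSucceededEvent;

public class TimingListener implements CommandListener {
    @Override
    public void commandStarted(CommandStartedEvent event) {
        // Fired just before the command is sent to the server.
        System.out.println("started: " + event.getCommandName());
    }

    @Override
    public void commandSucceeded(CommandSucceededEvent event) {
        // Elapsed time covers the network round trip for this command.
        System.out.printf("%s took %d ms%n",
                event.getCommandName(), event.getElapsedTime(TimeUnit.MILLISECONDS));
    }

    @Override
    public void commandFailed(CommandFailedEvent event) {
        System.out.println("failed: " + event.getCommandName());
    }

    public static MongoClient buildClient(String uri) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(uri)) // uri is a placeholder
                .addCommandListener(new TimingListener())
                .build();
        return MongoClients.create(settings);
    }
}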
[
"connector-for-bi"
] | [
{
"code": "",
"text": "I want to connect my database to Tableau. I installed the BI Connector and I have set up my DSN System and tested it like explained here:The connection was tested successfully and also I was able to use it from Tableau desktop. The issue is I can’t see the tables. (only the databases)",
"username": "Sergiu_Corneliu_Dan"
},
{
"code": "",
"text": "Can you click into the databases to drill down to the collections?",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "I can change the databases from the combobox. I get the “Executing query…” window box and after that, nothing is displayed.Captured with Lightshot",
"username": "Sergiu_Corneliu_Dan"
},
{
"code": "",
"text": "How many collections are there?",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "7 collections. I can see them in the mongosqld console.[sampler] mapped schema for 7 namespaces: “MongoDB” (7): [“collection1”, “collection2”, “collection3”, “collection4”, “collection5”, “collection6”, “collection7”]",
"username": "Sergiu_Corneliu_Dan"
},
{
"code": "",
"text": "I am asking our internal BI Connector team for some additional diagnosis. It will take a while as they are based in the US.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "Hi, a few questions on your setup to start:",
"username": "Jeffrey_Sposetti"
},
{
"code": "",
"text": "I think the ticket can be closed. I was expecting Tableau to automatically show the tables but I had to press the Search button. 1 day well spent.Thank you for your help.",
"username": "Sergiu_Corneliu_Dan"
},
{
"code": "",
"text": "I have the same problem. I have the 2.14.3 BI Connector and the latest ODBC connector (just installed). do you have any idea?",
"username": "Karlijn_Berning"
}
] | Tableau does not display the tables | 2020-09-09T20:02:52.721Z | Tableau does not display the tables | 4,857 |
|
null | [
"java",
"change-streams"
] | [
{
"code": "",
"text": "All application instances receiving events from change stream. How can we loadbalance events amongst the instances?",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "There is many ways to do it. You must do it yourself. Here are some ideas, some more applicable than others depending of your use-cases.",
"username": "steevej"
},
{
"code": "",
"text": "Hello Steeve,Regarding point 1, Since its just two instance of same code base running on two replicas I am really confused how can we really point make different instance to listen to different events.Second suggestion will entail introduction of shared cache (a new point of failure), which will be fed by both instances with duplicate data, so there will be a need to handle duplicates.Whats the industry standard here? How is this feature used in production by other developers?",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "I want to emphasis the point:depending of your use-casesRegarding point 1, Since its just two instance of same code base running on two replicas I am really confused how can we really point make different instance to listen to different events.For example, if your use-case relates to phone numbers. You have one server listening to phone numbers that ends with an even number and the other one that ends with odd numbers. So only one type of event is sent to one and the other type sent to the other. So you have the same code base handling 2 unique sets of event. But you might also want one handler listens to inserts and an other one listen to updates. You might do both in the same code base right now. But logically it is 2 code bases into the same server and you do an if to dispatch to one of the 2 internal code bases.Second suggestion will entail introduction of shared cache (a new point of failure), which will be fed by both instances with duplicate data, so there will be a need to handle duplicates.Yes. But most message queue systems handle that. A simple event cache can even be a MongoDB replica set that is fed by your multiple listeners with upserts which kind of make sure there is not duplicate.Whats the industry standard here?Kafka seems to have some momentum. RabbitMQ is also popular.",
"username": "steevej"
}
] | All application instances receiving events from change stream. How can we loadbalance events amongst the instances? | 2021-06-15T08:13:39.117Z | All application instances receiving events from change stream. How can we loadbalance events amongst the instances? | 4,232 |
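A minimal Java sketch of the “one listener per event type” idea from the thread above: each application instance opens the change stream with a different $match filter, so every event is delivered to exactly one instance. The URI, database and collection names are placeholders, and splitting by operationType is just one possible partitioning scheme.

import java.util.Arrays;
import org.bson.Document;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.changestream.ChangeStreamDocument;

public class InsertOnlyListener {
    public static void main(String[] args) {
        MongoCollection<Document> coll = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("mydb").getCollection("mycoll");

        // This instance only receives insert events; a second instance could watch
        // "update" events (or filter on a document field such as an even/odd key)
        // so that each change is handled by exactly one consumer.
        coll.watch(Arrays.asList(Aggregates.match(Filters.eq("operationType", "insert"))))
                .forEach((ChangeStreamDocument<Document> event) ->
                        System.out.println(event.getFullDocument()));
    }
}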
[] | [
{
"code": "",
"text": "I want to know how to add a tag to a new topic like below:\nUntitled1211×422 39.6 KB\n",
"username": "Ping_Pong"
},
{
"code": "optional tags",
"text": "Hi @Ping_Pong,When you create your topic you can add up to 5 tags in the optional tags section which should be displayed to the right of the category selection:\ntagging topics741×111 7.02 KB\nI noticed this is missing from your screenshot, but can’t tell if that is because the image is cropped or if there is another issue. Can you provide a larger screenshot showing the subsequent fields in your “New topic” form?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X ,For some reason, I can see it now. Thanks.",
"username": "Ping_Pong"
},
{
"code": "",
"text": "Hi @Ping_Pong,Thanks for confirming the issue no longer appears to be reproducible.I thought this might be a rendering issue, but I tried a few quick tests resizing windows in different macOS browsers with the New Topic form open and didn’t run into any obvious trouble. Rendering issues are generally browser-specific rather than O/S-specific, so I didn’t test on Windows or Linux.If this happens again please start a new Site Feedback topic including a screenshot and some steps to reproduce (eg browser version, O/S version, forum url, and what you clicked) and we’ll look into the issue.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Forum UI: How to tag a new Topic | 2021-06-12T23:09:48.982Z | Forum UI: How to tag a new Topic | 6,403 |
|
null | [
"queries",
"text-search"
] | [
{
"code": "",
"text": "HelloI have set up a compound index as follows:db.Capture.createIndex({“Timestamp”: -1, “BodyHtml”: “text”})Every time a run a query in Compass I get the following error:failed to use text index to satisfy $text query (if text index is compound, are equality predicates given for all prefix fields?)The query looks like the following:{Timestamp: {$gte: new Date(“2021-05-01T00:00:00.000-04:00”)}, $text: {$search: “Session Timeout”}}What am I missing?Thanks\nTom",
"username": "Tom_Meschede"
},
{
"code": "",
"text": "The query partTimestamp: {$gte: new Date(“2021-05-01T00:00:00.000-04:00”)}is notequality predicates given for all prefix fields",
"username": "steevej"
},
{
"code": "",
"text": "SteveI appreciate your response, but being very new to MongoDb could explain the error a bit more? Also any kind of solution to correct my query?Tom",
"username": "Tom_Meschede"
},
{
"code": "",
"text": "You are looking for a range of Timestamp with $gte. If you were looking at a specific Timestamp with $eq the index could be use.From the query part$text: {$search: “Session Timeout”}I suspect that Session Timeout is the complete value of field. If it is then a normal index might be sufficient.Since your new to MongoDB, I think it is best that you look at\nand take some MongoDB university courses like M201.",
"username": "steevej"
}
] | Failed to use text index to satisfy $text query (if text index is compound, are equality predicates given for all prefix fields?) | 2021-06-15T21:16:33.761Z | Failed to use text index to satisfy $text query (if text index is compound, are equality predicates given for all prefix fields?) | 4,800 |
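To make the thread above concrete: one option is to drop the compound text index and create the text index on BodyHtml alone, so the $text predicate can always use it and the Timestamp range is applied as a filter afterwards (note that a collection can have at most one text index). The sketch below uses the Java driver purely for illustration; the equivalent index and query work the same way from Compass or the shell, and the database name, connection string and cutoff date are placeholders.

import java.util.Date;
import org.bson.Document;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Indexes;

public class TextSearchExample {
    public static void main(String[] args) {
        MongoCollection<Document> capture = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("test").getCollection("Capture");

        // Text index with no leading range field, so $text can always satisfy it.
        capture.createIndex(Indexes.text("BodyHtml"));

        Date since = Date.from(java.time.Instant.parse("2021-05-01T04:00:00Z"));
        capture.find(Filters.and(
                Filters.text("Session Timeout"),
                Filters.gte("Timestamp", since)   // applied as a filter after the text match
        )).forEach(doc -> System.out.println(doc.toJson()));
    }
}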
null | [
"data-modeling"
] | [
{
"code": "{\n Type: Job1 ('Job1', Print, Scan, AddValue, Fine),\n TransactionDate: String,\n TransactionStartTime: String,\n TransactionEndTime: String,\n TimeZone: String,\n BarcodeNumber: String,\n FullName: String,\n EmailAddress: String,\n Balance: Balance\n Job1: {\n 10 more fields\n }\n}\n\n{\n Type: Print ('Job1', Print, Scan, AddValue, Fine),\n TransactionDate: String,\n TransactionStartTime: String,\n TransactionEndTime: String,\n TimeZone: String,\n BarcodeNumber: String,\n FullName: String,\n EmailAddress: String,\n Balance: Balance\n Print: {\n 10 more fields\n }\n}\n{\n Type: Job1,\n TransactionDate: String,\n TransactionStartTime: String,\n TransactionEndTime: String,\n TimeZone: String,\n BarcodeNumber: String,\n FullName: String,\n EmailAddress: String,\n Balance: Balance\n Job1Field1: value,\n Job1Field2: value,\n Job1Field3: value\n so on...\n}\n\n{\n Type: Print,\n TransactionDate: String,\n TransactionStartTime: String,\n TransactionEndTime: String,\n TimeZone: String,\n BarcodeNumber: String,\n FullName: String,\n EmailAddress: String,\n Balance: Balance\n PrintField1: value,\n PrintField2: value,\n Printield3: value,\n and so on...\n}\n",
"text": "Hi Team,We are working on a project. We need your suggestion on defining our schema for the report collection. We have 5 types of reports. Now we are thinking to keep one collection for them instead of 5 collections. Is it a good idea to keep them in a single collection or it’s better to keep them in a different collection? We are finding an alternative as we don’t want to run into performance issues.1 Collection with all types of ReportsSeparate these collections into 5",
"username": "Tudip_Company"
},
{
"code": "\n{\nType: Job1 (‘Job1’, Print, Scan, AddValue, Fine),\nTransactionDate: String,\nTransactionStartTime: String,\nTransactionEndTime: String,\nTimeZone: String,\nBarcodeNumber: String,\nFullName: String,\nEmailAddress: String,\nBalance: Balance\nJobDetails: {\n10 more fields\n}\n}\n",
"text": "Hi @Tudip_Company ,Welcome to MongoDB community.Since MongoDB have a flexible schema it make sense to hold similar documents in the same collection.In this case since lots of fields are same cross job types:The question on how to index this dataor restructure further is dependent on the Access patterns…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks, Pavel for the response. Will it affect the performance for querying larger data.",
"username": "Tudip_Company"
},
{
"code": "",
"text": "Not sure I understand the question?",
"username": "Pavel_Duchovny"
}
] | Single Collection Vs Multiple Collection for Reporting | 2021-06-11T14:32:45.747Z | Single Collection Vs Multiple Collection for Reporting | 3,159 |
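If it helps to make the access-pattern point above concrete: assuming (and this is only an assumption) that the common query is “reports of one Type for one barcode over a date range”, a single compound index on the unified collection keeps those reads fast as it grows. Field names come from the schemas in the thread; the connection details and collection name are placeholders.

import org.bson.Document;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;

public class ReportIndexes {
    public static void main(String[] args) {
        MongoCollection<Document> reports = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("reporting").getCollection("reports");

        // Supports queries such as: all "Print" reports for a given BarcodeNumber,
        // most recent TransactionDate first.
        reports.createIndex(Indexes.compoundIndex(
                Indexes.ascending("Type", "BarcodeNumber"),
                Indexes.descending("TransactionDate")));
    }
}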
null | [
"android"
] | [
{
"code": "override fun getItemCount(): Int = medicineItemList.size // Crashing on this line\n",
"text": "Scenario: creating a record from mobile and then syncing the record with MongoDB altas then update it on web side on same time I am on same list so when I click on list item it is crashingFatal Exception: java.lang.IllegalStateException Access to invalidated List objectio.realm.internal.OsList.nativeSize (OsList.java) io.realm.internal.OsList.size (OsList.java:248) io.realm.ManagedListOperator.size (ManagedListOperator.java:73) io.realm.RealmList.size (RealmList.java:597) com.reach52.healthcare.ui.medicine.adapter.MedicineSubOrderItemAdapter.getItemCount (MedicineSubOrderItemAdapter.kt:53) androidx.recyclerview.widget.RecyclerView.dispatchLayoutStep1 (RecyclerView.java:4044) com.android.internal.os.ZygoteInit.main (ZygoteInit.java:870)i have tried the .isValid() object but i am unable to use that on the above function",
"username": "kunal_gharate"
},
{
"code": "",
"text": "@kunal_gharate : Can you send some more details like GitHub repo(which would be great) or how are you accessing data and passing to adapter, if you have figured out how to fix the issue.",
"username": "Mohit_Sharma"
}
] | Crashed on recyclerview | 2021-06-09T08:20:02.827Z | Crashed on recyclerview | 2,283 |
null | [
"data-modeling",
"python"
] | [
{
"code": "[{ \"_id\": \"Programming\", \"path\": \",Books,\" }, { \"_id\": \"Databases\", \"path\": \",Books,Programming,\" }]Booksdb.categories.find( { path: \"^,Books,\" } )BookspathBookstore",
"text": "Hi guys,I recently asked about this in another community and got some very kind answers, but the problem isn’t solved. They referred me to go in this category to get more viewers. in case you wanna have a look at the original question, the link’s in the comments.The problem is, the docs of mongo in the section for Materialized Path Trees are not translated for Python, only JavaScript, which I’m not familar with yet. I guess maybe I just have to learn basic JS to use this feature.I’d like to store data like in this example : [{ \"_id\": \"Programming\", \"path\": \",Books,\" }, { \"_id\": \"Databases\", \"path\": \",Books,Programming,\" }]\nand with a simple query find all the nodes that are ascending from the path Books: db.categories.find( { path: \"^,Books,\" } ). Note that I in this example already changed the syntax of mongo, as it is written for JS and I’m using Python. Of course, I don’t get any output. Python doesn’t know, what I want from it. It seems like, the features I’m asking for either have a different syntax or don’t even exist for Python.The guys commenting under my original question suggested to use the $regex operator in Python, following this part of the docs: https://docs.mongodb.com/manual/reference/operator/query/regex/. The problem is though, that this doesn’t help with Materialized Path Trees. It only makes it possible to query data and find all the nodes, which have the asked string, like in this case Books, but that means it also for example would find nodes with the path Bookstore. I mean there’s a clean feature to solve this using JS, so I guess I’m just learning this now. Though I would be more than happy if there would be a way to use this feature in Python.Btw. here’s the link to the section of the docs about Materialized Path Trees (which I need translated for Python, if possible). https://docs.mongodb.com/manual/tutorial/model-tree-structures-with-materialized-paths/ I have already asked this in the customer support, but they referred me to the community as they don’t cover this depht of topics apparently.Cheers!",
"username": "Moritz_Honscheidt"
},
{
"code": "",
"text": "Link to original question: Is it possible to use materialized path trees with python?",
"username": "Moritz_Honscheidt"
},
{
"code": "{ \"path\" : { \"$regex\" : \"^,Books,\" } }: steevej@rpi ; python3\nPython 3.7.3 (default, Jul 25 2020, 13:03:44)\n[GCC 8.3.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> query = { \"path\" : { \"$regex\" : \"^,Books,\" } }\n>>> query\n{'path': {'$regex': '^,Books,'}}\n>>>\n",
"text": "What do you want to do that you cannot do with $regex?The query { \"path\" : { \"$regex\" : \"^,Books,\" } }, almost verbatim from the other thread, is valid Python and should matches both _id:Programming and _id:Databases.If not, then we are missing some pieces of the puzzle that might be resolve if your share your Python.",
"username": "steevej"
},
{
"code": "$regexdb.categories.find( { path: /^,Books,/ } )\n/^,Books,/re.compile(\"^,Books,\"){ \"$regex\": \"^,Books,\" }{ \"$regex\": \"^,Books,\" }pathBookstore^,Books,,Books,,Books,toreBookstore",
"text": "Hi Moritz,I’m also a Python programmer! You’ve already received the correct answer from a number of people - use the $regex operator, but I wanted to clarify why this is the correct advice:The JavaScript code you’ve seen in our docs is here:It looks a bit like Python, because they’re quite similar languages, but I wanted to extract one part of it, which is: /^,Books,/. This looks a bit like a string, but it isn’t - it’s a compiled regular expression in JavaScript. It’s the equivalent to the following in Python: re.compile(\"^,Books,\") - which returns a regular expression object.What I’m trying to highlight here is that the example code is already using a regular expression. The JavaScript driver does something quite clever with this regular expression object - it automatically converts it to the following MongoDB query expression: { \"$regex\": \"^,Books,\" }. As far as I know, the Python driver doesn’t do the same thing with Python regular expression objects, so you need to supply { \"$regex\": \"^,Books,\" } instead of a Python-native regular expression object.Your question above stated the following:it also for example would find nodes with the path Bookstore .… but that’s not actually the case. The regular expression ^,Books, will only find paths that begin with ,Books, - including those commas. So it would match ,Books,tore, but not Bookstore.It’s worth reading up on regular expressions, as they’re very powerful for text matching, and you can build expressions that will very specifically match exactly what you’re looking for. The Python Regex Docs are very good for the basics, but I also highly recommend the O’Reilly Regex book if you really want to become an expert.I hope this helps,Mark",
"username": "Mark_Smith"
},
{
"code": "Bookstore",
"text": "this is so nice, thanks Mark!I must have made a mistake, when trying to check if it would find Bookstore as well.Thank you so much.",
"username": "Moritz_Honscheidt"
},
{
"code": "",
"text": "No problem, @Moritz_Honscheidt. Any time!",
"username": "Mark_Smith"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Materialized Path Trees with Python | 2021-06-14T09:40:59.683Z | Using Materialized Path Trees with Python | 3,889 |
null | [
"node-js",
"field-encryption"
] | [
{
"code": "",
"text": "I have been following this guide - How to use MongoDB Client-Side Field Level Encryption (CSFLE) with Node.js/ by Joe Karlsson to test out the MongoDB CSFLE.In doing so, in the step of creating the data key in local key vault store [https://www.mongodb.com/how-to/client-side-field-level-encryption-csfle-mongodb-node/#create-a-data-key-in-mongodb-for-encrypting-and-decrypting-document-fields] the data key successfully is created but the keyAltName is not attached to the data key’s document.I tested this multiple times and there is nothing wrong in my code and I’m following the guide as it is. I can’t understand what is causing this issue. The data key creation is successful but without the keyAltNames field. A help here would be really appreciated.",
"username": "Ravindu_Fernando"
},
{
"code": "",
"text": "Did you get any solution ? I am also facing the same issue.\nI think it’s an issue related to ‘mongdb-client-encryption’ npm module.",
"username": "Navin_Devassy"
},
{
"code": "",
"text": "I couldn’t still find a solution. I also think this is related to the mongodb-client-encryption npm module. I asked the same question on Stack Overflow but still no luck. I’m waiting for some official reply from MongoDB team, I don’t think we can create issues on libmongocrypt repo",
"username": "Ravindu_Fernando"
},
{
"code": "MongoClient.connect(\n connectionString,\n {\n useUnifiedTopology: true,\n },\n async (err, db) => {\n if (err) throw err;\n try {\n await db.db(your_DB_Name).collection('__keyVault')\n .findOneAndUpdate({ _id: dataKeyId }, { $set: { keyAltNames: [keyAltName] } });\n } catch (error) {\n console.log(`failed to add keyaltname ${keyAltName}, ${error.stack}`);\n }\n db.close();\n },\n );\n",
"text": "I did a temporary workaround. Update the local key-vault document after it’s created. I know it’s not the correct method. Hope they fix this issue in their future release.",
"username": "Navin_Devassy"
},
{
"code": "",
"text": "Thanks. This is the only way it seems as of now. How did you get the dataKeyId? Is it the Binary type key ID returned from the createDataKey method?",
"username": "Ravindu_Fernando"
},
{
"code": "",
"text": "Hello all,I believe the issue you’re facing will be fixed soon we have a related ticket scheduled to start soon: NODE-3118. The community forums are a great place to get assistance with learning how to use our tools or some troubleshooting. If you ever encounter an issue you can let us know on our JIRA project here: https://jira.mongodb.org/projects/NODE.Thanks for your patience,\nNeal",
"username": "neal"
},
{
"code": "",
"text": "i am facing the same issue, i don\"t think the problem is resolved, the keyAltName was not in the vaultKeys encryption database\nis there a solution for this ??",
"username": "bilal_meddah"
},
{
"code": "Fix Version",
"text": "Welcome to the MongoDB Community Forums @bilal_meddah!Development & testing for the NODE-3118 issue mentioned in an earlier comment is still in progress if you follow the link through to MongoDB’s Jira issue tracker. There are a few commits linked to the issue but it has not been resolved or targeted for a Node.js driver release yet.If you login to Jira (which uses the same MongoDB Cloud login as the forums) you can Watch specific issues for updates. When an issue is targeted to be resolved in a specific Node.js driver release a Fix Version will be set on the Jira issue. Ultimately the issue will be closed when all changes have been tested & merged. The final “Fix Version/s” value(s) will indicate which driver releases the fix will be included in (or possibly backported to).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi, @Stennie_X\nthank you for the welcome, unfortunately this is not the only problem, i manage to add the keyAltName to the document of vaultKeys, but now what i am facing is weird, the database without encryption enabled is working perfectly, but when i integrate the encryption, sometimes just accept the first request (read op) and after that it not accepting anything, without any logs for errors, and sometimes, it’s not accepting anything, i am using the local provider with nodejs, and the configuration seems good, i don\"t know where the problem, please can you provide me with any solution, methods, by the way i am using mongoose\nthanks",
"username": "bilal_meddah"
}
] | NodeJS - The keyAltNames field is not created when creating the Data Key in MongoDB Client Side Field Level Encryption | 2021-02-13T20:13:39.396Z | NodeJS - The keyAltNames field is not created when creating the Data Key in MongoDB Client Side Field Level Encryption | 4,101 |
null | [
"connecting",
"atlas-functions"
] | [
{
"code": "",
"text": "I’m using a Realm app to query my Atlas cluster through a HTTP Webhook.\nMy Service function is something like this:\nexports = function(payload, response) {\nconst movies = context.services.get(“mongodb-atlas”).db(“sample_mflix”).collection(“movies”);\n… Do some search related stuff here …Everything is working fine but I see the connections (in the Atlas UI) to my cluster keep increasing. Started off at 2 a couple of days ago, and it is at 22 right now. I am the only person testing this app out right now, so there’s shouldn’t be too many connections.Have read about connection pooling and closing connections after a time interval, but I can’t seem to find anyway to do that in the Webhook.Any suggestions on how I can keep my connections from hitting the limit?",
"username": "Sunil_Daman"
},
{
"code": "",
"text": "Hi @Sunil_Daman ,Welcome to MongoDB community.Realm apps use connection pooling to optimise atlas cluster connections.As workloads coming in idle connections might be present but it shouldn’t pose any risk or performance issues to your cluster and will never grow close to cluster limit.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Closing connections to Atlas Cluster from HTTP Webhooks | 2021-06-14T13:38:36.858Z | Closing connections to Atlas Cluster from HTTP Webhooks | 2,214 |
[
"installation"
] | [
{
"code": "",
"text": "Hi everyone, after installing mongo, I tried to run it on hyper terminal and I can’t go ahead from this stage, I already tried changing port using mongod --portXXXX, also I have created the directory C:/data/db and added the path C:\\Program Files\\MongoDB\\Server\\4.4\\bin on environment variables but I’m having the same issue whatsoever, I realized that there this message ““msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”” so I’m assuming there’s a problem regarding the connection to the port, that’s why I tried using another one, but no use, I hope that someone can help me here.\nThanks in advance\n\nCapture1895×945 50.8 KB\n",
"username": "SegaFredo"
},
{
"code": "",
"text": "Waiting for connections means your mongod is up and running\nYou just need to connect to it.Open another terminal and issue mongoWhat you are seeing is a normal behavior on Windows.Your mongod is running in foreground\nSo just leave that terminal and connect from another terminal",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I do not remember having to run mongod for M001.Your are supposed to use the IDE and connect to an Atlas cluster.",
"username": "steevej"
},
{
"code": "",
"text": "Yes, I already figure it out, thanks for your reply!",
"username": "SegaFredo"
},
{
"code": "",
"text": "I think I’d put this in the wrong place, sorry… I’m still figuring out how everything works ",
"username": "SegaFredo"
},
{
"code": "",
"text": "",
"username": "SourabhBagrecha"
}
] | Problem after installation | 2021-06-16T00:39:50.338Z | Problem after installation | 6,498 |
|
null | [
"queries",
"dot-net",
"performance",
"atlas"
] | [
{
"code": "",
"text": "Hi guys, is there any way to use the command or utility “mongoimport” with the .Net Mongo Driver to load a .CSV in mongo atlas for running it in a microservice than read the file in a url or cloud storage.Thanks for your help.",
"username": "Jose_Alejandro_Benit"
},
{
"code": "mongoimportmongoexport",
"text": "Hi @Jose_Alejandro_Benit, thanks for your question!mongoimport is a separate tool that does not depend on or require the use of any driver. You usually use it to load data that’s generated from either mongoexport or some other third-party export tool.What are you trying to do? Any additional information would be great so that we can point you in the right direction!Thanks ",
"username": "yo_adrienne"
},
{
"code": "",
"text": "Thanks Adrienne, we are trying to import masive records into mongo collections using .net driver or nodejs. Right now we are reading de CSV file row by row and doing a “InsertMany” but we want to know if there is another fastest way to do this load.Thanks.\nPd. Excuse me for my english.",
"username": "Jose_Alejandro_Benit"
},
{
"code": "",
"text": "No worries at all, I understand you @Jose_Alejandro_Benit. Thanks for the context around what you’re doing.A few more questions:Thank you! The answers to these questions will help me get a clearer picture of what you’re doing and will help me determine if there are any improvements or changes needed!",
"username": "yo_adrienne"
},
{
"code": "",
"text": "Thanks Adrienne.\nScreenshot_371406×766 141 KB\nAgain, thanks for your help.",
"username": "Jose_Alejandro_Benit"
}
] | Import CSV to Mongo with .Net Mongo-driver using mongoimport | 2021-06-11T14:37:05.731Z | Import CSV to Mongo with .Net Mongo-driver using mongoimport | 4,273 |
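Since the thread above asks specifically about mongoimport: it is a standalone CLI tool rather than part of the .NET driver, so a microservice would invoke it as an external process (or it can be run manually). A hedged example for a CSV with a header row; the URI, database, collection and file path are placeholders:

mongoimport --uri "mongodb+srv://user:password@cluster0.example.mongodb.net/mydb" \
  --collection imports \
  --type csv --headerline \
  --numInsertionWorkers 4 \
  --file ./data.csv

From driver code itself, batching rows into larger InsertMany calls, and making them unordered where acceptable, is usually the main lever for speed.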
[
"replication",
"python",
"connecting"
] | [
{
"code": "",
"text": "Hello, friends. I have a question: I have an Api that connects via Pymongo to the ReplicaSet database.\n\nСнимок экрана от 2021-06-10 22-54-18877×216 20.8 KB\nBUT yesterday I broke my server with the master node and the api couldn’t switch to the new master. Although the sysadmin told me that the nodes themselves agreed among themselves which of them is the main one. What could be the problem ? Why can’t the api automatically connect to the new master node ?I get an error:\nServerSelectionTimeoutError: No replica set members match selector “Primary()”",
"username": "Kaper_N_A"
},
{
"code": "hostsclientmasterClientwlen(hosts.split(','))serverSelectionTimeoutMSserverSelectionTimeoutMS",
"text": "Hi @Kaper_N_A and welcome in the MongoDB Community !That’s all the idea & comments I have for now .\nI hope this will help a bit.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I don’t speak very good English, but I will try to answer your questions correctly .\nI got the old code from the developer for support.I use a 4-node server.\nhosts = ‘db0.mongo_server.com:27017,db1.mongo_server.com:27017,db2.mongo_server.com:27017,db3.mongo_server.com:27017’yes, there are no routing/firewall issues.client and masterClient these are different points for methods inside a self-written class for working with databases and data.\nAs far as I understand, the idea was that methods that modify data in the database should use PRIMARY . And the methods that get the data went to the nearest database and took the data there.\nThe databases are located in different geolocation.\nсlient - used for methods: find, count_documents, aggregate, count.\nmasterClient - used for methods: find_and_modify, drop, create, count,\nmasterClient -used only primery.\nclient -used nearest node.I can’t say anything about point 4 yet. The code was written by a previous developer.\nAs far as I understand, w =4, because the previous developer wants the write confirmation to be from all 4 nodes. I understand this in my case . if one node is unavailable, the record cannot be confirmed ?our version of mongo is 4.0.19 .\nEvent history:\nOur network got into the\nDigitalOcean for a couple of seconds, and it only had access to one node, the api couldn’t connect to one node. Then the network appeared, the replica was reassembled and started working. But the api couldn’t connect to the mongobd replica. Everything was solved by restarting the servers using mongobd. When the servers were rebooted, the api was able to connect to mongobd.our serverSelectionTimeoutMS default 30 secondsNow I’m trying to understand the situation, maybe it’s something else. But if you give any recommendations, I will be happy, as I am not an expert in mongobd. I can try to throw you a list of code where the database is connected and methods are written for inserting updating and deleting data from the database . If it helps.",
"username": "Kaper_N_A"
},
{
"code": "",
"text": "Hi again !I don’t know why your connection failed and why you couldn’t reconnect automatically. This isn’t supposed to work like that and I can’t find the reason without an in-depth analysis of the entire system… which isn’t really possible in a forum…What I can do though is provide another set of recommendations because I see a lot of issues here.You should follow MongoDB University free courses. You will learn a lot and get more confidence in your system in just a few days. All the time you invest in here will be paid back in just a few days, I can guarantee that.4 nodes cluster isn’t a recommended configuration. There is an entire course about Replica Sets in MongoDB University that explains that a lot better than me. Usually it’s 3 or 5 because MongoDB needs to access the majority of the voting members of the replica set to elect a primary and work properly (move forward the majority commit-point…)\nWith 3 nodes, majority == 2. With 4 or 5 nodes, the majority = 3. So in a config with 4 nodes, if 3 healthy nodes can’t communicate correctly, you have no primary. With 5 nodes, you also need 3 nodes. And if 2 nodes are failing, you are still fine. That’s why 5 is better than 4. And 4 isn’t better than 3. Because with 3 or 4 nodes only 1 node can fail until you cannot have a primary anymore… But you have mathematically more probability to fail with 4 nodes than with 3.\nTo sum up, unless you are doing something very specific with hidden nodes, this doesn’t make a lot of sense to use 4 nodes instead of 3.Your developer who developed this application probably didn’t follow the MongoDB University courses and using 2 clients in the same code isn’t the right way to implement this. So I would remove client & masterClient to use only one client CORRECTLY configured with w=“majority” and DEFINITELY NOT w=4 which is clearly a big mistake. Because in that case, if one single node is offline, then you cannot write anymore with your default writeConcern that is set by default at the connection level unless it’s manually overwritten somewhere in a lower level.My colleague @ado explains very well the priority order of the read and write concerns in his blog post. You can overwrite the level of read of write concern all the way down from the connection up to a specific query. You should use this and specify the options for each collection, db or query instead of creating 2 connections.4.0.19 is a bit old now. If you can, update to 4.4.X and get ready because MongoDB 5.0 is coming up soon ! https://docs.mongodb.com/manual/tutorial/upgrade-revision/Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | I can't connect to a new master node in ReplicaSet | 2021-06-10T20:02:05.915Z | I can’t connect to a new master node in ReplicaSet | 5,089 |
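To make the write-concern recommendation above concrete, a single client can be configured once through the connection string; the replica set name below is a placeholder, the hosts are the ones quoted in the thread, and per-operation read preferences can still be set in PyMongo on individual collections or queries:

mongodb://db0.mongo_server.com:27017,db1.mongo_server.com:27017,db2.mongo_server.com:27017,db3.mongo_server.com:27017/?replicaSet=rs0&w=majority&retryWrites=true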
|
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "Hello. I tried to follow this tutorial Ingesting and Visualizing API Data with Stitch and Charts | MongoDB Blog, but it seems to be outdated.I cannot find any services tab https://webassets.mongodb.com/_com_assets/cms/Stitch_Charts_03-61wq20kdrf.pngBasically I’m trying to grab a specific value from remote API json (accessable via https on the web) and store it in a database. This Json api displays data in realtime so I want to grab it every 10 minutes and store the values it grabbed to later display this as a chart to show the history of the values",
"username": "David_Berndtsson"
},
{
"code": "",
"text": "Hi @David_Berndtsson - welcome to the community forum.That post is pretty old (and it dates back to before “Stitch” was renamed to “MongoDB Realm”. This blog series might be a better starting point https://www.mongodb.com/article/coronavirus-map-live-data-tracker-charts/",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thanks, but it doesn’t show any code, or any tutorial on how I can grab data from an API and store it in the database",
"username": "David_Berndtsson"
},
{
"code": "exports = function(payload) {\n const httpService = context.services.get('http');\n var weatherApiInfo = context.values.get('weatherApiInfo');\n let url = `http://api.openweathermap.org/data/2.5/weather?q=${weatherApiInfo.city}&units=metric&appid=${weatherApiInfo.appId}`;\n console.log(\"Fetching \" + url);\n return httpService.get( {url: url}).then(response => {\n \n let json = JSON.parse(response.body.text());\n json.observationDate = new Date(json.dt * 1000);\n \n var collection = context.services.get('mongodb-atlas').db('weather').collection('observations');\n collection.insertOne(json);\n console.log('Inserted document!');\n });\n};\n",
"text": "The old post you referenced including some (now) Realm function code to fetch data from an API and store it in Atlas…A major change since then is that you don’t have to bake your own solution to run that function periodically – you can now use Realm scheduled triggers to keep it all inside your Realm app.You can also take a look at this repo – we’re currently using this to periodically fetch data from some APIs for an internal dashboard.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Grab data from API json and store it in the database | 2021-06-15T15:21:25.210Z | Grab data from API json and store it in the database | 12,373 |
null | [
"mongodb-shell",
"indexes"
] | [
{
"code": "",
"text": "Could someone explain to me exactly what this index means or implies?db.x.createIndex( { “user.login”: 1, “user.date”: -1 }, “myIndex” )The second parameter “myIndex” catches my attention and I don’t understand where to find this kind of thing in the mongo guide. Any reference?Thank you!",
"username": "Veronica_Moreno_Flor"
},
{
"code": "user.loginuser.dateuser.logincreateIndex",
"text": "The first parameter is specifying the shape of the compound index. First, by sorting embedded field user.login in ascending (1) order, and then by sorting user.date in descending (-1) order within the context of user.login. The field order and the direction ( 1 or -1) of sort is important in a compound index.The second parameter to the createIndex function (https://docs.mongodb.com/manual/reference/method/db.collection.createIndex/#options-for-all-index-types) is the name of the index.Thanks,\nMahi",
"username": "mahisatya"
},
{
"code": "",
"text": "Thank you very much, I really couldn’t find the logic, I already read “options for all index types” → name",
"username": "Veronica_Moreno_Flor"
},
{
"code": "",
"text": "",
"username": "Eoin_Brazil"
}
] | Second parameter of db.col.createIndex | 2021-06-15T15:44:46.584Z | Second parameter of db.col.createIndex | 2,019 |
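For reference, the documented shape of the second argument is an options document such as { name: 'myIndex' }, and as the answer above notes, the name is what the string in the question sets. A hedged sketch of the equivalent when using the Java driver, in case that is useful; the connection details, database and collection are placeholders.

import org.bson.Document;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;

public class NamedIndexExample {
    public static void main(String[] args) {
        MongoCollection<Document> x = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("test").getCollection("x");

        // Same compound index; the name is passed explicitly via IndexOptions.
        String name = x.createIndex(
                Indexes.compoundIndex(
                        Indexes.ascending("user.login"),
                        Indexes.descending("user.date")),
                new IndexOptions().name("myIndex"));
        System.out.println("created index: " + name); // prints "myIndex"
    }
}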
null | [
"app-services-user-auth",
"realm-web"
] | [
{
"code": "Error: Request failed (POST https://stitch.mongodb.com/api/client/v2.0/app/my-app-id/auth/providers/local-userpass/register): TypeError: 'captureStackTrace' is not a function (status 400)\n at Function.fromRequestAndResponse (bundle.dom.es.js:2677)\n at async Fetcher.fetch (bundle.dom.es.js:2841)\n at async Fetcher.fetchJSON (bundle.dom.es.js:2858)\n at async EmailPasswordAuth.registerUser (bundle.dom.es.js:1073)\n",
"text": "Hi,I’m experiencing an issue in production. My users cannot sign up using the Email/Password provider. When trying to do so, an error is returned by realm-web :I had never seen that error before and I don’t think there’s been any recent modifications on that part of our webapp. Could it be a bug on MongoDB Realm’s side? Has anyone encountered the same error before?Thanks,Benjamin",
"username": "Benjamin_ARIAS"
},
{
"code": "my-app-id",
"text": "Hi Benjamin,just checking that the actual error had your real app-id in the URL rather than my-app-id?Was there anything in the Realm logs?Was it a transient error, or is it still being seen?",
"username": "Andrew_Morgan"
},
{
"code": "Error:\n\nTypeError: 'captureStackTrace' is not a function\nStack Trace:\n\nTypeError: 'captureStackTrace' is not a function at FetchError (node_modules/node-fetch/lib/index.js:192:25(32)) at events:10:4454(129) at K (_http_client:10:6817(98)) at events:10:4454(129) at M (stream:10:9105(24)) at T (stream:10:8932(184)) at stream:10:9716(51) at onStreamRead (internal/stream_base_commons:10:2040(41))\n",
"text": "Hi Andrew,I do use the right app id but I didn’t want to share it on a forum The issue is still being seen. I don’t have much more info except the error log in the MongoDB Realm dashboard:Do you have any idea what could cause this error ?Thanks,Benjamin",
"username": "Benjamin_ARIAS"
},
{
"code": "",
"text": "Which mode are you using to confirm users?If it’s a function, what does that function look like?\n\nimage925×413 33.7 KB\n",
"username": "Andrew_Morgan"
},
{
"code": " exports = async ({ token, tokenId, username, password }) => {\n const fetch = require('node-fetch');\n\n const addUserToMailerliteGroupAPIEndPoint = 'https://api-endpoint.com';\n const groupId = 'mailerlitegroupid';\n\n try {\n await fetch(`${addUserToMailerliteGroupAPIEndPoint}/add-user-to-group`, {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify({\n groupId,\n email: username,\n name: '',\n }),\n });\n } catch (error) {\n console.log(error);\n // Do nothing\n }\n \n return { status: 'pending' };\n };\n",
"text": "I run a confirmation function. Yeah I thought it might come from that but I do not see where the issue could come from in this function:We currently have implemented a flow where the user gets sent an automatic email from mailerlite when he/she signs up.",
"username": "Benjamin_ARIAS"
},
{
"code": "",
"text": "Okay it came from that function, wrong endpoint !! The error is still very misleading Anyway, thanks for your message, you unlocked me !Best,Benjamin",
"username": "Benjamin_ARIAS"
},
{
"code": "",
"text": "Glad to hear that you found the fix!",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | app.emailPasswordAuth.registerUser() throws an error | 2021-06-14T16:04:22.081Z | app.emailPasswordAuth.registerUser() throws an error | 4,291 |
null | [
"queries",
"java"
] | [
{
"code": "db.probability.aggregate(\n [\n {\n $unwind: “ $probability”\n },\n {\n $group: {\n _id: \"$experimentId\",\n \"probabilityArr\": { $push: \"$probability\" }\n }\n },\n {\n $project: {\n \"description\": 1,\n \"results\": {\n $reduce: {\n input: \"$probabilityArr\",\n initialValue: [],\n in: { $concatArray: [ \"$$value\", \"$$this\" ] }\n }\n }\n }\n }\n ]\n)\n",
"text": "Hi,How to translate this shell script to Java ?",
"username": "Yaqin_Chen_Hedin"
},
{
"code": "",
"text": "I will not answer directly to your question.Here is what I do with pipelines and queries.This way",
"username": "steevej"
},
{
"code": " Bson match = Aggregates.match(\n Filters.eq(\"country._id\", countryId)\n );\n\n Bson sort = Aggregates.sort(\n Indexes.descending(\"time\", \"region._id\")\n );\n\n Bson group = Aggregates.group(\n \"region._id\",\n Accumulators.max(\"time\", \"$time\"),\n Accumulators.first(\"contentList\", \"$contentList\")\n );\n\n List<Data> dataList = dataMongoCollection.aggregate(\n Arrays.asList(\n match,\n group\n )\n ).into(new ArrayList<>());\n",
"text": "Thank you very much for your quick response \nI am very new to MongoDB. I have used POJO to store in the database. Since the document model is a nested one and that there could be thousands list items in the child array field I have a question: wouldn’t it be very slow to use ObjectMapper to convert Document to POJO ? It would also slow down the process to read the stored result from the resource file?My intention is to build on this block of java code to make the query more efficient(to use unwind and reduce):But I found out that it’s quite slow to run the Java query. Is there anyway to improve my existing Java query ? How can I skip sort but still get the document with highest value of time for a certain region._id ?Thank you very much in advance ",
"username": "Yaqin_Chen_Hedin"
},
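One practical way to work on the Java pipeline above is to get it right in the mongo shell first and only then translate it back to the builders. A hedged shell sketch of the same match/sort/group follows (the collection name data stands in for the one behind dataMongoCollection, and someCountryId is a placeholder); it is also worth checking that the group id is passed as $region._id, with the $ prefix, so it is treated as a field path rather than a literal value.

```javascript
db.data.aggregate([
  { $match: { "country._id": "someCountryId" } },
  { $sort: { time: -1, "region._id": -1 } },
  { $group: {
      _id: "$region._id",                  // $ prefix: field path, not a literal
      time: { $max: "$time" },
      contentList: { $first: "$contentList" }
  }}
])

// A compound index covering the $match and the $sort avoids an in-memory sort:
db.data.createIndex({ "country._id": 1, time: -1, "region._id": -1 })
```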
{
"code": "",
"text": "It would also slow down the process to read the stored result from the resource file?I did not express the following correctly.Store back the result in my resource file.I store back the debugged pipeline, the result of the debug session. The result of running the pipeline is always process the normal way. There is no intermediary step involve to process the aggregation result.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you very much for your answer I guess that my very limited knowledge in MongoDB makes it difficult for me to understand everything that you wrote. Do I need to remap the result to my POJO model in Java ? You see, if I use the Java query I get a list of my POJO directly.Do you have a solution regarding my question about the query in Java ?",
"username": "Yaqin_Chen_Hedin"
},
{
"code": "",
"text": "You do not change your POJO model at all.Do you have a solution regarding my question about the query in Java ?I saw your other post. If I have something I will write it over there.",
"username": "steevej"
},
{
"code": "",
"text": "thank you I thought that query deserves as an own topic ",
"username": "Yaqin_Chen_Hedin"
}
] | Java query from shell script | 2021-06-14T20:34:42.011Z | Java query from shell script | 2,597 |
null | [] | [
{
"code": "",
"text": "I am receiving this error randomly. I have used the same code and same python environment for years. I recently moved to a new computer and have to believe something has changed? firewall type thing. anyways has anyone seen this before?[WinError 10054] An existing connection was forcibly closed by the remote host",
"username": "Ryland_Mathews"
},
{
"code": "",
"text": "What is theremote hostif a shared M0, M2 or M5 cluster, then hosts are not necessarily up all the time.",
"username": "steevej"
},
{
"code": "",
"text": "Its really weird. I will run the code and it will work, then when it runs later it errors out. from I can tell its totally random, super frustrating lol",
"username": "Ryland_Mathews"
},
{
"code": "",
"text": "for anyone with this issue, it was being caused because I was trying to drop a collection that didnt exist. somehow a collection that I drop each morning and completely repopulate was deleted. so when it went to go drop the collection that wasnt there it popped that error.",
"username": "Ryland_Mathews"
},
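If the failure really does come from dropping a namespace that is no longer there, a defensive guard is cheap. A minimal mongo-shell sketch with a placeholder collection name; the same idea applies in PyMongo by checking list_collection_names() before calling drop_collection().

```javascript
const name = "daily_snapshot";   // placeholder collection name
if (db.getCollectionNames().includes(name)) {
  db.getCollection(name).drop();
} else {
  print(`Collection ${name} not found, nothing to drop`);
}
```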
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [WinError 10054] An existing connection | 2021-06-14T16:51:19.791Z | [WinError 10054] An existing connection | 5,574 |
null | [
"android",
"atlas-triggers"
] | [
{
"code": "//Trigger function\nexports = function(authEvent){\n // Only run if this event is for a newly created user.\n if (authEvent.operationType !== \"CREATE\") { return }\n\n // Get the internal `user` document\n const { user } = authEvent;\n\n const users = context.services.get(\"mongodb-atlas\")\n .db(\"RecommenderAppDB\")\n .collection(\"UserDetails\");\n\n const isLinkedUser = user.identities.length > 1;\n\n if(isLinkedUser) {\n const { identities } = user;\n return users.updateOne(\n { id: user.id },\n { $set: { identities } }\n )\n\n } else {\n return users.insertOne({ _id: user.id,\n UserID: user.id,\n Email: user.data.email,\n Phone: user.data.phoneNum,\n Gender: user.data.gender,\n DOB: user.data.dateOfBirth\n })\n .catch(console.error)\n }\n};\n",
"text": "Hi, I am new to Realm and I was looking for an example of how to use the AUTHENTICATE trigger snippet for signup. I do not know how to pass the variables from my android application to the trigger snippet. The collection I want to pass the details to, upon signup, is UserDetails having fields:\n{ UserID, Email, Phone, Gender, DOB} and my variables in my android app name are: email, phoneNum, gender, dateOfBirth.I have tried the following code and it triggers after the new email logins but obviously, only the email is being passed correctly since I do not know how to pass the other variables. I can’t find any examples either on the web.How do I pass the variables from my android app using the registerUserAsync() method?Any ideas will b greatly appreciated!",
"username": "Praveer_Ramsorrun"
},
{
"code": "",
"text": "Hi @Praveer_Ramsorrun, welcome to the community forum!You can’t include the extra information a part of the call to register the user.You can create a user object after the registration, and that object will get synced to MongoDB (you can then optionally have a trigger that processes that data).",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hi, thank you for replying. Yes, I understand that extra info cannot be passed with the register method. But, how do I pass data from my app which can then be used in the trigger functions? For example if I want to pass a user’s name with variable uName, to the trigger function. How do I proceed? Apologies if these queries seem trivial, but I cannot find info about these elsewhere.",
"username": "Praveer_Ramsorrun"
},
{
"code": "exports = function({user}) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const userCollection = db.collection(\"User\");\n \n const partition = `user=${user.id}`;\n const defaultLocation = context.values.get(\"defaultLocation\");\n const userPreferences = {\n displayName: \"\"\n };\n \n console.log(`user: ${JSON.stringify(user)}`);\n \n const userDoc = {\n _id: user.id,\n partition: partition,\n userName: user.data.email,\n userPreferences: userPreferences,\n location: context.values.get(\"defaultLocation\"),\n lastSeenAt: null,\n presence:\"Off-Line\",\n conversations: []\n };\n \n return userCollection.insertOne(userDoc)\n .then(result => {\n console.log(`Added User document with _id: ${result.insertedId}`);\n }, error => {\n console.log(`Failed to insert User document: ${error}`);\n });\n}\ncontext.user.idUsercontext.user",
"text": "Hi @Praveer_Ramsorrun,how I tend to handle it is to store that data in a collection. I have an authentication trigger that runs when a user registers. Here’s an example…In general, you can run functions as the current user – and so you can access the user id from the context using context.user.id. You can then use that to fetch data from your User collection.Triggers execute functions as the system user and so this information isn’t available in context.user and so you need to include something within the modified document that can be used to identify the user that updated the collection.",
"username": "Andrew_Morgan"
}
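Following the advice above, one common pattern is to let the Android app call a Realm function right after registration/login to fill in the extra profile fields. This is a hedged sketch only: the function name and argument names are hypothetical, the database/collection names reuse the ones from the original post, and the function is assumed to run as the authenticated user so that context.user.id identifies the caller.

```javascript
// Hypothetical function "updateUserDetails", called by the client after signup.
exports = async function(phoneNum, gender, dateOfBirth) {
  const users = context.services.get("mongodb-atlas")
    .db("RecommenderAppDB")
    .collection("UserDetails");

  return users.updateOne(
    { _id: context.user.id },   // document created for this user by the auth trigger
    { $set: { Phone: phoneNum, Gender: gender, DOB: dateOfBirth } },
    { upsert: true }
  );
};
```

The client then invokes this through the SDK's Functions API once the user is logged in, passing the values it collected in the signup form.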
] | AUTHENTICATE Trigger Snippet Signup Example | 2021-06-11T23:12:50.253Z | AUTHENTICATE Trigger Snippet Signup Example | 3,044 |
null | [
"aggregation"
] | [
{
"code": "[\n {\n _id: \"sdfsdfasdasdfasdf\",\n name: \"Robot James\"\n friends: [\n \n ]\n } \n {\n _id: \"sdfsdfasdasdfasdf\",\n name: \"John Wu\"\n friends: [\n {\n id: \"asdfasdfasdfaadsf\",\n name: \"John Wu-1\"\n friends: [\n {\n id: \"asdfasdfasdfaadsf\",\n name: \"John Wu-1-1\"\n friends: [\n \n ]\n },\n {\n id: \"asdfasdfasdfaadsf\",\n name: \"John Wu-1-2\"\n friends: [\n \n ]\n } \n ]\n }\n ]\n }\n]\n[\n {\n _id: \"sdfsdfasdasdfasdf\",\n name: \"Robot James\"\n friends: [\n \n ]\n } \n {\n id: \"asdfasdfasdfaadsf\",\n name: \"John Wu-1\"\n friends: [\n {\n id: \"asdfasdfasdfaadsf\",\n name: \"John Wu-1-1\"\n friends: [\n \n ]\n },\n {\n id: \"asdfasdfasdfaadsf\",\n name: \"John Wu-1-2\"\n friends: [\n \n ]\n } \n ]\n }\n]\n",
"text": "I have a aggregate statement that returns following result. I want to change this result-1 to result-2. How can I make do it? Please teach me that pipeline.\nresult-1Here, Robot James and John Wu are my friends, Robot James hasn’t any friends, but John wu has friends.\nIf my friends has their friends, I want to add their friends to my friends list, but instead of him, like following,.",
"username": "bill_oneil"
},
{
"code": "",
"text": "Hi @bill_oneil ,Welcome to MongoDB community.In order to provide a pipeline we need to see some base documents example and not just a pipeline result. Additionally please provide the query that creates this result.I believe you will need a $graphLookup to support this but need more info to really help.Thanks\nPav",
"username": "Pavel_Duchovny"
}
] | How can I add friend's friends to my friend list? | 2021-06-15T00:38:24.850Z | How can I add friend’s friends to my friend list? | 3,384 |
null | [
"atlas-functions",
"graphql"
] | [
{
"code": "",
"text": "Hi, We are thinking of using Realm. We’re are growing startup based in India.\nMost of our backend services run in AWS VPC, and our APIs are exposed through AWS API-GW. I have some question regarding integrating Realm with AWS ecosystem -Can I expose Realm GraphQL endpoint and HTTP service through API GW ? The reason why we need this, is our clients who will be using our APIs. So it’s in best interest that domain name and url remains same for clients even though system behind them changes.As I have read, Realm HTTP service is used for low latency APIs, compared to using Lambda function with API GW for REST APIs, where cold start is measure issue. How does Realm manages cold start ? How is Realm function different than AWS Lambda function in terms of provisioned concurrency & pricing?Triggers have hard limitation of 3000 invocation per second which currently good enough number for us, Is there any option for increase in this quota if in future it requires more than that ?How does token resumability(streams) works let’s say for some reason AWS Eventbridge went offline or I change cluster’s configuration ?How to use caching service like Redis with Realm function ?!Any help with these !!",
"username": "Timey_AI_Chatbot"
},
{
"code": "",
"text": "Hi @Timey_AI_ChatbotI will try to answer your questions.Make sure to authenticate with a proper user and use the retrieved tokenThe cost is measured by various factors you can read more hereThe limitations is currently a given thing. If you can present a valid reason to increase please contact support to see if its possible.So resumabilty of a trigger is based on change stream resume tokens so it allows resuming a failed event listening on the database side as long as this event is present in MongoDB oplog.If a function failed or aws event bridge the event will not resume on the particular event and only log an error its on the user to prepare a functionality to resume those for example reupdate all “undone” documents to be rerun by a trigger.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
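For the first question above, a common pattern is to let API Gateway forward the request to a small proxy that holds a Realm access token. A hedged Node sketch follows: the app id is a placeholder and the endpoint paths are the ones documented for MongoDB Realm at the time of the thread, so verify them against the current docs before relying on them.

```javascript
const fetch = require('node-fetch');

const APP_ID = 'your-realm-app-id';   // placeholder
const BASE = `https://realm.mongodb.com/api/client/v2.0/app/${APP_ID}`;

async function runGraphQL(apiKey, query, variables) {
  // 1. Exchange a Realm API key for an access token.
  const login = await fetch(`${BASE}/auth/providers/api-key/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key: apiKey }),
  });
  const { access_token } = await login.json();

  // 2. Call the GraphQL endpoint with the bearer token.
  const resp = await fetch(`${BASE}/graphql`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${access_token}`,
    },
    body: JSON.stringify({ query, variables }),
  });
  return resp.json();
}
```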
] | AWS API-GW integration with Realm Http service & GraphQL service | 2021-06-14T22:18:55.924Z | AWS API-GW integration with Realm Http service & GraphQL service | 2,369 |
null | [
"server",
"configuration"
] | [
{
"code": "",
"text": "To get the default Configuration File of MongoDB Windows MSI Installer \\bin\\mongod.cfg.\nMy install directory is C:\\Program Files\\MongoDB\\4.4\\Server.\nTo configure mongod or mongos using a configuration file , we can specify – config option as follows:mongod --config which is mongod --config C:\\Program Files\\MongoDB\\Server\\4.4\\bin\\mongod.cfg in my installation directory.mongos --config \nwhich is mongos --config C:\\Program Files\\MongoDB\\Server\\4.4\\bin\\mongod.cfg in my installation directory.\nIn the both cases, while configuring both mongod and mongos , I am getting error.\nPlease look into it.\nArindam Biswas.",
"username": "Arindam_Biswas2"
},
{
"code": "",
"text": "What error are you getting\nPlease show us the error or screenshotWhile installing did you install mongod as service or just binaries installed?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "\nErrors while configuring mongod from Command Prompt947×313 20.4 KB\n\n\nIn response to your query , I hereby attach the following screen shots.\n(1) Errors while configuring mongod ( mongos not shown here) from the Command Prompt.\n(2) Installation of mongod as a service.",
"username": "Arindam_Biswas2"
},
{
"code": "",
"text": "Why do you want to start mongod if it is installed as service\nIt may be already up and running on default port 27017\nJust issue mongo\nIf you can connect means it is up\nor you can check from Windows servicesIf you want to start another/your own mongod prepare your own config file giving different values for dbpath,logpath,port etcYour first command is correct but failed to start mongod as you are using default config file but that may be already being used by mongod which started as service\nSo it is not able to create logfile and exiting with error",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "So, when once Windows services is installed, there is no need to use this command line option of this configuration files. We can use it only before installation.\nMy mongo installation is ok. Please see the screen shot.\n\nMongo1008×417 16 KB\n",
"username": "Arindam_Biswas2"
},
{
"code": "",
"text": "Yes as far as default mongod is concerned you don’t have to do anything\nJust connect to mongo and run few commands\ndb\nshow dbsCreate collection or load data and explore other commandsIf you want additional mongod on same box (for testing or any other purpose) create your config file and you can start it with config file or command line parametersex: mongod --port 28000 -dbpath your_home_dir --logpath your_home_dir\nCheck mongo documentation for various options like auth etc",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "While I am transferring data of my mongod log file to my own directory C:\\data\\log , I am getting the following error : \" The process can not access the file because it is being used by other process\" .While I am transferring data from my MongoDB data file to my own directory C:\\data\\db , it is ok and data is being transferred to my directory C:\\data\\db.In the first case , I use the following command in the command prompt\n“C:\\Program Files\\MongoDB\\Server\\4.4\\log\\mongod.log” --logpath “C\\data\\log”In the second case, I use the following command in the command prompt “C:\\Program Files\\MongoDB\\Server\\4.4\\bin\\mongod.exe” --dbpath \"C:\\data\\dbAlso, while I try to configure mongod log file in different port (port :2800 ) usuing the following command, I get an error.\nmongod --port 2800 --logpath C:\\data\\logI find ok when I configure mongod data file in different port (port : 2800) by the following command.\nmongod --port 2800 --dbpath C:\\data\\db\n\nlog file used by other process1004×52 2.41 KB\n\n\nErrors while configuring mongod from Command Prompt947×313 20.4 KB\nPlease find the relevant screen shots.",
"username": "Arindam_Biswas2"
},
{
"code": "",
"text": "Your screenshots not matching with the description you mentioned about the errors while starting mongod on port 2800\nIt is 28000 not 2800In the first snapshot you did not run mongod but tried to set logpath with mongod.log which is invalid command\nSecond snapshot appears to be from earlier post which i have already explained why it is failing\nWhen your default mongod is already up and running why you are trying to run mongod again with default config file\nPlease do not touch default cfg file nor use it to run another mongod.It is kind of master file\nI clearly told you to prepare your own config file under different directory using master file as reference\nSomething like below\nmongod --config “C:\\Users\\xyz\\my_cfg.cfg”\nYou have to create the file my_cfg.cfg and add different parameters then save it\nAlso you seem to run same mongod command multiple times to set different parameters\nIt will not work that way\nAll necessary parameters should be passed in the command line in single attempt\nExample:\nmongod --port 28000 --dbpath your_db_path --logpath your_logpath\nWhen you start mongod on a different port you have to connect specifying port num\nmongo --port 28000I suggest you review mongo documentation\nIt will be confusing initially but will be clear as you go forward",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "mongod --port 28000 --dbpath your_db_path --logpath your_logpathPlease note that I am unable to create my own config file under my own directory. Screen shot enclosed.\n\nInability to create my own config file under different directory748×299 5.33 KB\n",
"username": "Arindam_Biswas2"
},
{
"code": "",
"text": "mongod is expecting a file my_cfg.cfg but you created a directory\nGo to C:\\Users\\arindam1 open a text editor like notepad.Add required parameters.Save the file\nIt has to follow YAML format else you will get errors\nThen run the mongod command again.It should work",
"username": "Ramachandra_Tummala"
},
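For reference, here is a minimal sketch of what my_cfg.cfg could contain for a second mongod instance, following the advice above. This is a YAML configuration file, not shell code; the paths and port are only examples, and the directories must exist before running mongod --config with this file.

```yaml
# my_cfg.cfg - example settings for a second mongod on Windows
storage:
  dbPath: C:\data\db2
systemLog:
  destination: file
  path: C:\data\log2\mongod.log
  logAppend: true
net:
  bindIp: 127.0.0.1
  port: 28000
```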
{
"code": "",
"text": "Thank you for your advice. Sorry, for replying in late. I am going through the YAML standard. I take time to be conversant in writing parameters in YAML format.\nAs we add more parameters in YAML standard , we shall get additional features of MongoDB instances. I am just trying to experiment on it. I may turn up again if I face issues while I shall be experimenting on additional features on MongoDB instances.",
"username": "Arindam_Biswas2"
}
] | Configuration File MongoDB 4.4 Communuty Server | 2021-06-11T16:02:02.638Z | Configuration File MongoDB 4.4 Communuty Server | 6,182 |
null | [
"monitoring",
"scala"
] | [
{
"code": "",
"text": "Hello,I am having trouble figuring out how to use Observables with GridFS to upload and download files into Mongo using scala latest driver (4.2.3). The examples I saw were lacking. We used to use AsyncInputStream, but that got deprecated with the latest version. Any help will be appreciated.",
"username": "Dmitriy_Mestetskiy"
},
{
"code": "/*\n * Copyright 2008-present MongoDB, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.mongodb.scala.gridfs\n\nimport java.io.ByteArrayOutputStream\nimport java.nio.ByteBuffer\n",
"text": "This file has good examples, I’ll try them:",
"username": "Dmitriy_Mestetskiy"
}
] | Observable Integration | 2021-06-14T19:10:13.239Z | Observable Integration | 3,568 |
[
"devops"
] | [
{
"code": "",
"text": "hi guys,I need help for my log files is growing and is just only have 1 log file without separate it by per date.how I can setting up the log with per date save it?thanks.\nimage_2021_06_14T03_11_22_838Z1037×357 23.6 KB\n",
"username": "Kelvin_Shee"
},
{
"code": "use admin\ndb.runCommand( { logRotate: 1} )\n",
"text": "Hi @Kelvin_Shee, welcome to the community!LogRorate command can be used to rename old log file and create new one. Check out the doc for more details.So, in your case, you could run this command to rotate log based on your schedule.Thanks,\nMahi",
"username": "mahisatya"
}
] | MongoDB Log windows version how I can save by per date in a each files | 2021-06-14T03:22:59.705Z | MongoDB Log windows version how I can save by per date in a each files | 2,925 |
|
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hey I’ve recently been looking through the way user authentication/creation is handled within the RChat app and wondered if there was any particular reason custom user data is disabled in the realm config? RChat/config.json at 8be092c1fe776c92e1dff8dc66ceb40788f0bc57 · realm/RChat · GitHubAre there any advantages to simply using the User collection with a partition vs using the custom user data attribute to gain access to these additional fields?Possibly one for @Andrew_MorganThanks",
"username": "Mishkat_Ahmed"
},
{
"code": "",
"text": "Hi @Mishkat_Ahmed - welcome to the community forum.Custom user data is read-only in the mobile app and I wanted to be able to update it.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Custom User Data vs User Document | 2021-06-14T17:02:35.678Z | Custom User Data vs User Document | 2,050 |
null | [] | [
{
"code": "",
"text": "Hi there, successfully created my tasktracker backend app, it shows up in the Realm UI, but there is no Atlas configuration provided and no way for me to access the sample data…help !!",
"username": "Brian_Kathler"
},
{
"code": "realm-cli --versionCluster0",
"text": "Hi @Brian_Kathler – welcome to the community forum!First, just to check that these are the instructions you followed: https://docs.mongodb.com/realm/tutorial/realm-app/ ?What version of the Realm CLI are you using (realm-cli --version)?Did you name your cluster Cluster0?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hi there.Yes, that is the tutorial I followed. I see now that it contained instructions on how to create an Atlas account, which I already have and therefore skipped that section. I am going to try again and see if it works.",
"username": "Brian_Kathler"
}
] | No Atlas configuration for realm-tasktracker-backend | 2021-06-11T17:02:35.232Z | No Atlas configuration for realm-tasktracker-backend | 1,419 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n _id: someID,\n field1: [data],\n field2: [data]\n field3: {\n field3_1: data,\n ...\n }\n}\nfield1field2field3db.findOne({ _id: someID }, { fields: { \"field3.field1\": \"$field1\", \"field3.field2\": \"$field2\", field3: 1 }})\nfieldsfield3field1field2",
"text": "Hi, so I have data as such:Is it possible during a query to to return field1 and field2 under field3? I tried the following but got a path collision error:The syntax may be a little different as I’m using Mongo with Meteor. Here we need to specify the fields key explicitly.Note that field3 does not contain the keys field1 or field2.Appreciate any help, thank you!",
"username": "Ajay_Pillay"
},
{
"code": "db.findOne({ _id: someID }, { fields: { \"field3\" : {\"field1\": \"$field1\", \"field2\": \"$field2\"}}})\ndb.findOne({ _id: someID }, { fields: { \"field3.field1\": \"$field1\", \"field3.field2\": \"$field2\"}})\n",
"text": "Hi @Ajay_PillayHave you tried the following:Or maybe just :Eventually I believe the error is as you specified field3 twice which I am not sure why…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "field3field3_1{\n _id: \"someID\"\n field3: {\n field1: [data],\n field2: [data]\n }\n}\nfield3{\n _id: someID,\n field1: [data],\n field2: [data]\n field3: {\n field3_1: data,\n ...\n }\n}\nfield3field1field2field3field1field2field3field3.field1field3.field2{\n _id: someID,\n field3: {\n field1: [data],\n field2: [data],\n field3_1: data,\n ...\n }\n}\nfield1field2field3field3",
"text": "Hi,Eventually I believe the error is as you specified field3 twice which I am not sure why…I didn’t specify field3 twice anywhere. If you were referring to the JSON data, it’s field3_1, sorry if that wasn’t clear.Both of the suggestions return the following:But all the other data in field3 is not returned. Just to make it clearer, so as per my OP I have the following data (copied verbatim):What I want to do is query the whole of field3 but also include field1 and field2 as sub-fields of field3, even though the data stored does not have field1 and field2 as sub-fields for field3. I’m trying to avoid to have to do this as post-processing where I copy/move the data into field3.field1 and field3.field2. So here’s how the final query should look like:To give some context as to why I want to do this, it’s part of building a Graphql API, and the schema would require that field1 and field2 are sub-fields of field3. I’m trying to avoid having to do unnecessary post-processing to get them as sub-fields of field3. If I can accomplish this with a simple query call that would be great.But I think this might need an aggregation approach instead?Thanks!",
"username": "Ajay_Pillay"
},
{
"code": "// setup\ndb.foo.drop();\ndb.foo.insert({\n _id: 1,\n field1: \"a\",\n field2: \"b\",\n field3: {\n field3_1: \"c\"\n }\n});\ndb.foo.find();\n/*\n{ \n \"_id\" : 1.0, \n \"field1\" : \"a\", \n \"field2\" : \"b\", \n \"field3\" : {\n \"field3_1\" : \"c\"\n }\n}\n*/\n$match$addFieldsfield3$projectfield1field2db.foo.aggregate([\n{ $match: { _id: 1 } },\n{ $addFields: { \n \"field3.field1\": \"$field1\", \n \"field3.field2\": \"$field2\"\n}},\n{ $project: { field1: 0, field2: 0 } }\n])\n/*\n{ \n \"_id\" : 1.0, \n \"field3\" : {\n \"field3_1\" : \"c\", \n \"field1\" : \"a\", \n \"field2\" : \"b\"\n }\n}\n*/\n",
"text": "Hi @Ajay_Pillay,You can do this using an Aggregation Pipeline as follows:Assuming the structure above matches your sample structure, to achieve the result you described you can first $match to filter the results, use $addFields to append to the field3 object and then use a $project to remove field1 and field2 from the top level of the resulting document:",
"username": "alexbevi"
},
{
"code": "$project: { _id: 0, field3: 1 }field3",
"text": "Thank you very much, this works as intended!The only change I needed to make is $project: { _id: 0, field3: 1 } as I only need to return field3.",
"username": "Ajay_Pillay"
},
{
"code": "$project$mergeObjectsdb.foo.aggregate([\n{ $match: { _id: 1 } },\n{ $project: { \n _id: 0,\n field3: { $mergeObjects: [ \"$field3\", \n { field1: \"$field1\", field2: \"$field2\" }\n ]}\n}}\n])\n",
"text": "@Ajay_PillayJust FYI, the solution can be simplified slightly to just use a single $project along with a $mergeObjects as follows:",
"username": "alexbevi"
},
{
"code": "",
"text": "Oh, that’s great, I shall do this instead. Thank you again!",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I return a field under a different key? | 2021-06-13T22:18:10.835Z | How do I return a field under a different key? | 4,654 |
null | [] | [
{
"code": "",
"text": "cheers,does MongoDB.com site have an API that provides list of new releases like e.g. MariaDB has? Downloads REST API - MariaDB.orgthank you,\no",
"username": "Stanislav_Omacka"
},
{
"code": "#!/usr/bin/env bash\n\nINSTALL_FOLDER=\"/home/polux/Softwares\"\nLINUX=\"debian11\"\nSITE_ENTERPRISE=$(wget -qO- https://www.mongodb.com/try/download/enterprise | tr -d '\\n')\nSITE_TOOLS=$(wget -qO- https://www.mongodb.com/try/download/tools | tr -d '\\n')\nCOMPASS_VERSIONS=$(curl -sH \"Accept: application/vnd.github.v3+json\" https://api.github.com/repos/mongodb-js/compass/releases)\n\nCURRENT_MDB_COMPASS=$(dpkg -l | grep \"mongodb-compass \" | tr -s ' ' '\\t' | cut -f3)\nCURRENT_MDB_COMPASS_BETA=$(dpkg -l | grep \"mongodb-compass-beta \" | tr -s ' ' '\\t' | cut -f3 | sed 's/~/-/')\nCURRENT_MONGOSH=$(mongosh --version)\nCURRENT_MONGODB=$(readlink $INSTALL_FOLDER/mongodb-linux-current | grep -oP '\\d+\\.\\d+\\.\\d+')\nCURRENT_TOOLS=$(readlink $INSTALL_FOLDER/mongodb-tools-current | grep -oP '\\d+\\.\\d+\\.\\d+')\n\nCURRENT_MDB_COMPASS=${CURRENT_MDB_COMPASS:-'0.0.0'}\nCURRENT_MDB_COMPASS_BETA=${CURRENT_MDB_COMPASS_BETA:-'0.0.0'}\nCURRENT_MONGOSH=${CURRENT_MONGOSH:-'0.0.0'}\nCURRENT_MONGODB=${CURRENT_MONGODB:-'0.0.0'}\nCURRENT_TOOLS=${CURRENT_TOOLS:-'0.0.0'}\n\n",
"text": "Hi @Stanislav_Omacka and welcome in the MongoDB Community !Not to my knowledge.\nI actually had a need for this to update automatically my PC and I did something way more rudimentary…Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @MaBeuLux88,Thak you for your reply you’ve certainly inspired me!Cheers",
"username": "Stanislav_Omacka"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB downloads API | 2021-06-10T20:56:42.231Z | MongoDB downloads API | 2,118 |
null | [
"schema-validation"
] | [
{
"code": " amount: {\n bsonType: \"string\",\n pattern: \"^\\$[^ ].*\"\n },\namount: {\n bsonType: \"string\",\n pattern: \"^[$].*\"\n },\n",
"text": "Hello,I am attempting to create a pattern that can be used with a $jsonSchema for M036. Everything else is straight forward, but this:-Pattern ProblemThe problem is that the string must start with a $So outside of JSON Schema I have come up with/^$[^ ].*/gHowever when I try theIt blocks both $100 and 100 as an amount. If I take away the pattern, I can insert strings.Any help would be greatly appreciated.NOTE - I tried the below and it worked. Any thoughts on why both worked in a regex simulator, but only one worked in $jsonSchema. 2.5 hours of my life I will not get back ",
"username": "NeilM"
},
{
"code": "",
"text": "No one has any thoughts on this?I did wonder if jsonSchema has a more limited regex syntax.",
"username": "NeilM"
},
{
"code": "$\"^\\\\$\\\\d+\"",
"text": "Hi @NeilM,It blocks both $100 and 100 as an amount. If I take away the pattern, I can insert strings.You need to double escape the $ when defining in jsonSchema. For example, \"^\\\\$\\\\d+\"Regards,\nWan.",
"username": "wan"
},
{
"code": "[^ ].*^\\\\$[^ ].*",
"text": "[^ ].*Thanks for that, I did try it and it worked I went with: -^\\\\$[^ ].*Since it also handled decimal points in the currency, but ignored spaces e.g.$ 180 = Would be rejected\n$180.00 = Would be accepted.\n$180 = Would be accepted.Q. Do you need to double escape other things, apart from $ when using patterns in $jsonSchema?",
"username": "NeilM"
},
{
"code": "$\\\\\\",
"text": "Hi @NeilM,Glad you got it working.Q. Do you need to double escape other things, apart from $ when using patterns in $jsonSchema?Any regex syntax that need to be a literal needs to be double escaped.Let’s take your example, we would like specify a literal $ character, which is a regex syntax for boundary-type assertions indicating the end of an input . We would need escape the character with a single \\ to be literal, however in jsonSchema, the \\ would also needed to be escaped with another slash \\.Hope that helps.Regards,\nWan.",
"username": "wan"
},
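Putting the double-escaping advice above into a runnable shell snippet; the collection name and the exact pattern are illustrative. The pattern accepts a leading literal $ followed by digits and an optional two-digit decimal part.

```javascript
db.createCollection("amounts", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["amount"],
      properties: {
        amount: {
          bsonType: "string",
          pattern: "^\\$\\d+(\\.\\d{2})?$"  // accepts "$180" and "$180.00", rejects "180" and "$ 180"
        }
      }
    }
  }
});

db.amounts.insertOne({ amount: "$180.00" });  // passes validation
db.amounts.insertOne({ amount: "180" });      // rejected by the validator
```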
{
"code": "",
"text": "@wanThank you for that. To be fair an excellent description of the whys and wherefores of handling a literal, and the additional handling required for the JSON schema.It actually means I more of an idea of what I should be doing now, instead of the bumbling around I was doing before.Thanks\nNeil",
"username": "NeilM"
}
] | Use of pattern within JSON schema | 2021-06-03T17:41:25.113Z | Use of pattern within JSON schema | 7,943 |
null | [
"transactions"
] | [
{
"code": "",
"text": "In the official mongodb youtube channel I saw- they recommended not to use transaction unless it is acutely required.\nIs transaction costly in mongodb ?",
"username": "Md_Mahadi_Hossain"
},
{
"code": "",
"text": "Hi @Md_Mahadi_HossainThere is certainly an additional overhead with transactions but it isn’t too computational costly. However, the best practice is not to use a transaction unless you must as using transactions without proper design is definitely a sign of poor planning and will likely lead to poor performance as you may equally be able to achieve the vast majority of what is needed using simple single document atomic updates.I’d recommend visiting Transactions | MongoDB and reviewing the videos there on tansactions to get a better understand of how and of when to use transactions.Kindest regards,\nEoin",
"username": "Eoin_Brazil"
},
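For completeness, a minimal mongo-shell sketch of an explicit multi-document transaction (a replica set or sharded cluster is required; the database, collection and values are placeholders). As the answer above stresses, reach for this only when a single-document atomic update cannot express the change.

```javascript
const session = db.getMongo().startSession();
const accounts = session.getDatabase("bank").accounts;   // placeholder namespace

session.startTransaction({ readConcern: { level: "snapshot" }, writeConcern: { w: "majority" } });
try {
  accounts.updateOne({ _id: "A" }, { $inc: { balance: -100 } });
  accounts.updateOne({ _id: "B" }, { $inc: { balance: 100 } });
  session.commitTransaction();
} catch (error) {
  session.abortTransaction();
  throw error;
} finally {
  session.endSession();
}
```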
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sould I avoid transaction? | 2021-06-13T14:27:44.912Z | Sould I avoid transaction? | 2,325 |
null | [
"aggregation",
"python",
"indexes"
] | [
{
"code": "pipeline = [{'$lookup': {'from': right_coll_name,\n 'let': {'chrom': '$chrom', 'start': '$start', 'end': '$end'},\n 'pipeline': [{'$match': {'$expr': {'$and': [{'$eq': ['$$chrom', '$chrom']},\n {'$lt': [{'$max': ['$$start', '$start']},\n {'$min': ['$$end', '$end']}]}]}}}],\n 'as': right_coll_name}} for right_coll_name in right_coll_names])\ndoc[right_coll_alias] != []left_coll_obj.aggregate(pipeline)",
"text": "Select c1 collection documents, each of which overlaps with at least one c2 document and at least one c3 document by chrom field and by start-end intervals.c1:c2:c3:chrom_1_start_1_end_1\nchrom_1\nstart_1\nend_1Technically, this code is fully functional.Further I successfully filter the merged documents by doc[right_coll_alias] != []. Full Python code is here.Based on the terrible speed, left_coll_obj.aggregate(pipeline) doesn’t use indexes.How do I rework a pipeline to use compound or single indexes?",
"username": "Platon_workaccount"
},
{
"code": "pipeline = [{'$lookup': {'from': right_coll_name,\n 'let': {'chrom': '$chrom', 'start': '$start', 'end': '$end'},\n 'pipeline': [{'$match': {'$expr': {'$and': [{'$eq': ['$$chrom', '$chrom']},\n {'$lt': ['$$start', '$end']},\n {'$lt': ['$start', '$$end']}]}}}],\n 'as': right_coll_name.replace('.', '_')}} for right_coll_name in right_coll_names]",
"text": "If you get rid of min/max, the indexes will also be ignored. Here’s an updated pipeline:",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "In MongoDB 4.4.1 + PyMongo 3.11.0 the issue persists. I would ask the developers to investigate the problem.",
"username": "Platon_workaccount"
},
{
"code": "left_coll_obj.aggregate(pipeline)mongoexplain()explain()$lookup{\"$match\":{\"chrom\":\"chr1\"}}$lookupIXSCAN$lookupchrom1chrom2",
"text": "Hi @Platon_workaccount,Based on the terrible speed, left_coll_obj.aggregate(pipeline) doesn’t use indexes.First, it would be easier to debug performance if you separate the layers (application/db). If you haven’t done so already, I’d suggest to run a single aggregation pipeline directly (using mongo shell) instead of running from your application and see whether there is another possible bottleneck (application layer).Note that you can view detailed information regarding an execution plan of an aggregation pipeline by using explain(). See also Return Information on Aggregation Pipeline Operation. This method is useful for debugging, as it details the processing (also shows which index, if any, the aggregation pipeline operation used).How do I rework a pipeline to use compound or single indexes?If you execute explain() on your current pipeline, most likely it would show COLLSCAN. This is because you haven’t specified any query stages before the $lookup stage. If an entire collection is being loaded without any filtering criteria, a collection scan would have less overhead than iterating an index.For example, you could try adding {\"$match\":{\"chrom\":\"chr1\"}} before the $lookup stage and you should see on the explain output that it would utilise IXSCAN.Having said all that above, depending on your use case, it looks like you’re going to be performing multiple expressive $lookup. In this case, I would suggest to re-consider your data modelling. For example, you may try to store all the data in one collection instead and use a field to filter. i.e. chrom1, chrom2, etc.Regards,\nWan.",
"username": "wan"
},
{
"code": "$eq$lt$lt",
"text": "Before researching separate expressions in Shell, I decided to create application level performance scheme. The picture shows that an expression with a simultaneous presence of $eq, $lt and $lt differs significantly in speed from the other expressions. Could it be a MongoDB bug?\nmongodb_aggregation1920×1050 229 KB",
"username": "Platon_workaccount"
},
{
"code": "explain()db.bed_1.bed.explain().aggregate([{'$lookup': {'from': 'bed_2.bed', 'let': {'chrom': '$chrom', 'start': '$start', 'end': '$end'}, 'pipeline': [{'$match': {'$expr': {'$and': [{'$eq': ['$$chrom', '$chrom']}, {'$lt': ['$$start', '$end']}, {'$lt': ['$start', '$$end']}]}}}], 'as': 'bed_2_bed'}}]){\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"int_sub_big_BED.bed_1.bed\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"8B3D4AB8\",\n\t\t\t\t\t\"planCacheKey\" : \"8B3D4AB8\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"COLLSCAN\",\n\t\t\t\t\t\t\"direction\" : \"forward\"\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$lookup\" : {\n\t\t\t\t\"from\" : \"bed_2.bed\",\n\t\t\t\t\"as\" : \"bed_2_bed\",\n\t\t\t\t\"let\" : {\n\t\t\t\t\t\"chrom\" : \"$chrom\",\n\t\t\t\t\t\"start\" : \"$start\",\n\t\t\t\t\t\"end\" : \"$end\"\n\t\t\t\t},\n\t\t\t\t\"pipeline\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$match\" : {\n\t\t\t\t\t\t\t\"$expr\" : {\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"$$chrom\",\n\t\t\t\t\t\t\t\t\t\t\t\"$chrom\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"$$start\",\n\t\t\t\t\t\t\t\t\t\t\t\"$end\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"$start\",\n\t\t\t\t\t\t\t\t\t\t\t\"$$end\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"platon-VivoBook-ASUSLaptop-X712FA-X712FA\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.4.2\",\n\t\t\"gitVersion\" : \"15e73dc5738d2278b688f8929aee605fe4279b0e\"\n\t},\n\t\"ok\" : 1\n}\ndb.bed_1.bed.explain().aggregate([{'$lookup': {'from': 'bed_2.bed', 'localField': 'name', 'foreignField': 'name', 'as': 'bed_2_bed'}}]){\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"int_sub_big_BED.bed_1.bed\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"8B3D4AB8\",\n\t\t\t\t\t\"planCacheKey\" : \"8B3D4AB8\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"COLLSCAN\",\n\t\t\t\t\t\t\"direction\" : \"forward\"\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [ ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"$lookup\" : {\n\t\t\t\t\"from\" : \"bed_2.bed\",\n\t\t\t\t\"as\" : \"bed_2_bed\",\n\t\t\t\t\"localField\" : \"name\",\n\t\t\t\t\"foreignField\" : \"name\"\n\t\t\t}\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"platon-VivoBook-ASUSLaptop-X712FA-X712FA\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.4.2\",\n\t\t\"gitVersion\" : \"15e73dc5738d2278b688f8929aee605fe4279b0e\"\n\t},\n\t\"ok\" : 1\n}\nCOLLSCANbed_2.bednamename_1chrom_1_start_1_end_1",
"text": "Debug via explain()Query by intervals:db.bed_1.bed.explain().aggregate([{'$lookup': {'from': 'bed_2.bed', 'let': {'chrom': '$chrom', 'start': '$start', 'end': '$end'}, 'pipeline': [{'$match': {'$expr': {'$and': [{'$eq': ['$$chrom', '$chrom']}, {'$lt': ['$$start', '$end']}, {'$lt': ['$start', '$$end']}]}}}], 'as': 'bed_2_bed'}}])One field query:db.bed_1.bed.explain().aggregate([{'$lookup': {'from': 'bed_2.bed', 'localField': 'name', 'foreignField': 'name', 'as': 'bed_2_bed'}}])As I can see, in both cases COLLSCAN was output. But Compass shows the usage of the indexes of the bed_2.bed collection. In case of intersection by a single field name Compass shows hundreds of thousands uses of the name_1 index. When intersecting by intervals there are only a few uses of the chrom_1_start_1_end_1 index.\nMongoDB aggregation index usage1433×653 60.7 KB",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "Will the terrible speed of interval queries be considered a MongoDB bug?",
"username": "Platon_workaccount"
},
{
"code": "$eq$lt$lte$gt$gte$expr",
"text": "Looks like this is fixed in MongoDB 5.0.Starting in MongoDB 5.0, the $eq , $lt , $lte , $gt , and $gte operators placed in an $expr operator can use indexes to improve performance.",
"username": "Platon_workaccount"
},
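To verify the 5.0 behaviour quoted above, the foreign-side predicate can be checked on its own with explain(); the collection, field names and index follow the earlier posts, and the constant values stand in for the variables that $lookup would supply. On 5.0+ the winning plan should be able to show an IXSCAN on the compound index for these $expr comparisons.

```javascript
db.getCollection("bed_2.bed").createIndex({ chrom: 1, start: 1, end: 1 });

// Stand-alone check of the inner $match, with placeholder constants.
db.getCollection("bed_2.bed").find({
  $expr: {
    $and: [
      { $eq: ["$chrom", "chr1"] },
      { $lt: [100, "$end"] },
      { $lt: ["$start", 200] }
    ]
  }
}).explain("queryPlanner");
```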
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using an index to intersect intervals | 2020-04-13T20:29:04.685Z | Using an index to intersect intervals | 3,443 |
null | [] | [
{
"code": "mongodb+srv://mongoldb://",
"text": "hello all,I am trying to connect Apache Nifi with Mongo DB. We have Mongo DB Atlas instance and the connection string starts with mongodb+srv://. Now when I give the same URL in NiFi, it throws a message saying that the Mongo DB URL should start with mongoldb://. Can someone throw some light on this topic pls?",
"username": "Ravi_Teja_Kondisetty"
},
{
"code": "",
"text": "On your Nifi canvas, drag and drop a processor called GetMongo processor. Aftter, You will see where to enter the credentials of the mongo connection and allow to mongo atlas to connect from everywhere or you connect via ssl\n\nScreenshot from 2021-06-13 13-20-21767×566 39.1 KB\n",
"username": "Sunday_Aroh"
}
] | Unable to connect Mongo DB to NiFi | 2020-08-25T12:21:57.400Z | Unable to connect Mongo DB to NiFi | 3,974 |
[] | [
{
"code": "",
"text": "I created the topic below:However, the best place should be “Drivers & ODMs” category, with tags like “dot-net”, like below:But I cannot edit the topic and add any tags.",
"username": "Ping_Pong"
},
{
"code": "",
"text": "Hi @Ping_Pong,Some site permissions (like editing categories & tags) depend on the trust level for your account in the forums. As you spend more time interacting in the forums, you will earn more privileges.However, if you ever need assistance with a post you can always flag it for moderator attention. For category/tag edits use the “Something Else” category and let us know what assistance is needed.I adjusted category & tags for the post you mentioned.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "“Drivers & ODMs” category, with tags like “dot-net”, like below:@Stennie_X,Thanks for your help.But I think this policy of restricting users to such a level is creating more works for everyone, which is not necessary.",
"username": "Ping_Pong"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | UI: Is it possible to change a topic to a different category with different tags | 2021-06-12T23:17:12.150Z | UI: Is it possible to change a topic to a different category with different tags | 4,640 |
|
null | [
"node-js",
"data-modeling"
] | [
{
"code": "?[]['a', null, 'b']null",
"text": "I think, from my browsing of the docs and the realm-js tests it looks like the combination of ? and [] results in a list of optionals? E.g. ['a', null, 'b'].Is there no way of specifying a property as an optional list? E.g the list could be entirely absent or, if present, it had mandatory values (e.g. null wouldn’t be allowed inside the list).",
"username": "Liam_Jones"
},
{
"code": "",
"text": "Hi @Liam_Jones, welcome to the community.Could you please share a code sample?",
"username": "Andrew_Morgan"
},
{
"code": "const ScheduledThing = {\n name: 'ScheduledThing',\n properties: {\n title: 'string',\n schedule: 'Schedule',\n },\n}\n\nconst Schedule = {\n name: \"Schedule\",\n properties: {\n day: \"string?\",\n weeklyTime: \"string?\",\n dailyTimes: \"string[]\", // would like this to be string[]?\n },\n}\n\nconst realm = await Realm.open({\n schema: [Schedule],\n deleteRealmIfMigrationNeeded: true,\n})\n\nconst dailyScheduledThing = {\n name: 'Daily medication',\n schedule: {\n weeklyTime: undefined,\n day: undefined,\n dailyTimes: ['09:00', '10:00', '11:00'],\n },\n}\n\nconst weeklyScheduledThing = {\n name: 'Food shopping',\n schedule: {\n day: \"Wednesdays\",\n weeklyTime: \"16:30\",\n dailyTimes: undefined,\n },\n}\n\nrealm.write(() => {\n realm.create(Schedule.name, dailyScheduledThing)\n realm.create(Schedule.name, weeklyScheduledThing)\n})",
"text": "Sure. Here’s a minimal one. We have 'things that can be scheduled daily or weekly. If they’re daily, you can specify multiple times you do it in a day. If they’re weekly you specify one time and the day of the week you do it on.We could split the Daily/Weekly schedules to separate schemas but that then means we can’t link them from ScheduledThing directly and would have to query for/update them separately. Instead, we want a schema with enough optional properties that we can store both shapes inside it, only setting the relevant properties for the schedule type.I’ve highlighted the thing that would ideally be either ‘undefined’ or an array of strings. The current setup isn’t the end of the world, it’s just we end up with empty arrays stored for dailyTimes on weekly schedules currently and it’d be nice if the whole list could be optional.I hope that makes sense?",
"username": "Liam_Jones"
},
{
"code": "scheduleTypeweeklydaily",
"text": "Lists cannot be optional in Realm. There’s no performance cost associated with them though, so whether they’re null or empty, it would be all the same. If you do need to differentiate between the null and empty case, you’ll need to use a separate property - e.g. scheduleType that can be either weekly or daily.",
"username": "nirinchev"
},
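Applying the suggestion above to the earlier schema, a hedged sketch that keeps the list non-optional and adds a scheduleType discriminator; the property names reuse the ones from the sample and the enum values are illustrative.

```javascript
const Schedule = {
  name: "Schedule",
  properties: {
    scheduleType: "string",  // "daily" or "weekly" distinguishes the two shapes
    day: "string?",
    weeklyTime: "string?",
    dailyTimes: "string[]",  // stays non-optional; simply empty for weekly schedules
  },
};

// Weekly example: the empty list is fine because scheduleType carries the intent.
const weekly = {
  scheduleType: "weekly",
  day: "Wednesdays",
  weeklyTime: "16:30",
  dailyTimes: [],
};
```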
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "string?[]" - list of optionals vs optional list | 2021-06-07T16:26:12.028Z | “string?[]” - list of optionals vs optional list | 3,954 |
null | [
"aggregation",
"queries",
"performance"
] | [
{
"code": "",
"text": "Hello guys.I want to ask a question.If i disable the default compression by wired-tiger on my collection do i get faster or slower execution times on queries (aggregates pipelines)?Does compression have any thing to do with queries execution time in general?",
"username": "harris"
},
{
"code": "",
"text": "Hi @harris ,The compression is done for disk space saving mainly .When query executes it decompress the data anyway.Compressing data allows also the storage filesystem cache to hold more data so there is a bigger chance uncompressed data will have larger scanning times vs the savings in cpu decompress overhead.However, you should test the impact on your hw and data.Thanks",
"username": "Pavel_Duchovny"
}
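A practical way to run the test suggested above is to create a second copy of the collection with block compression disabled and time the same aggregation against both. A hedged shell sketch follows; the collection names are placeholders and the configString is passed straight through to WiredTiger, so check the WiredTiger documentation for the accepted values.

```javascript
// Collection created without block compression, for an A/B comparison.
db.createCollection("events_uncompressed", {
  storageEngine: { wiredTiger: { configString: "block_compressor=none" } }
});

// $merge keeps the pre-created collection (and its storage options) and copies the data in.
db.events.aggregate([{ $merge: { into: "events_uncompressed" } }]);
```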
] | Wired tiger and query execution time | 2021-06-12T22:51:18.899Z | Wired tiger and query execution time | 2,122 |
[
"data-modeling"
] | [
{
"code": "",
"text": "My site will have many products, and each product would have many tags (imagine wordpress’s tag option)\n\nWhat’s the best way to organize these in the database?\nPlease suggest me some tutorials if there’s any.",
"username": "Ishmam_N_A"
},
{
"code": "{\nPictureUrl: ...,\ntags: [ \"party\", \"man\" ... ]\n}\n",
"text": "Hi @Ishmam_N_A ,Welcome to MongoDB community.The classic way to store it is via an embedded array of tags:As long as number of tags is below a few houndreds its fine.You can index tags field with multi key index and simply search terms on tags.Updating array is simply done with array operatorsThanks\nPavel",
"username": "Pavel_Duchovny"
},
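A short sketch of the indexing and querying described above; the collection name, tag values and productId are illustrative.

```javascript
// Multikey index on the embedded tags array.
db.products.createIndex({ tags: 1 });

// Products carrying a given tag (uses the multikey index).
db.products.find({ tags: "party" });

// Products that carry all of several tags.
db.products.find({ tags: { $all: ["party", "man"] } });

// Add or remove tags with array update operators (productId is a placeholder).
db.products.updateOne({ _id: productId }, { $addToSet: { tags: "summer" } });
db.products.updateOne({ _id: productId }, { $pull: { tags: "party" } });
```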
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to store multiple tags in Mongo DB | 2021-06-12T22:08:13.678Z | How to store multiple tags in Mongo DB | 5,311 |
|
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "My app is using a custom authentication provider and I also enabled custom user data by mapping to a user record on my DB. I discovered that this custom user data is encoded inside the JWT token and in my case contained some fields with potential large amount of data. At one point this caused the login to succeed but later calls failed since the token was too large. There is no indication that this is the case and only because I noticed the token was very large that I discovered this cause.",
"username": "michael_schiller"
},
{
"code": "16MB",
"text": "According to the docs, the limit is 16 MBAvoid Storing Large Custom User Data\nCustom user data is limited to 16MB , the maximum size of a MongoDB document. To avoid hitting this limit, consider storing small and relatively static user data in each custom user data document, such as the user’s preferred language or the URL of their avatar image. For data that is large, unbounded, or frequently updated, consider only storing a reference to the data in the custom user document or storing the data with a reference to the user’s ID rather than in the custom user document.Do you know if it was as large as that (I was a little surprised that it was now that high as it was 2K at one point)?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "The problem is not the size of the custom data but the fact that the JWT token included the data and its size was ~170kb",
"username": "michael_schiller"
},
{
"code": "",
"text": "That’s what I figured, I’ll follow up with engineering/docs.Thanks for flagging this.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hello, I’m having exactly this problem. After several days of entry, my app users are no longer able to login (as the JWT token got too big). At the moment I’m only storing ‘email’ via custom user data for the purpose of reading the realm logs better (partition key being email rather than user.id). Is there a better way to go about this?",
"username": "Mari_Robert"
},
{
"code": "",
"text": "The server is storing the entire user record mapped in custom user data inside the JWT token. In my case I did not use it so I disabled this feature. you should decode your JWT token and see what is causing the token to be bloated and maybe consider separating this data from the record.",
"username": "michael_schiller"
},
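A quick way to do the inspection suggested above without any extra library; this only base64-decodes the payload to see which claims are inflating it and does not verify the signature. The sketch assumes Node (Buffer); in a browser, atob() can be used the same way.

```javascript
function inspectJwt(token) {
  const payloadB64 = token.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  const payloadJson = Buffer.from(payloadB64, 'base64').toString('utf8');
  const payload = JSON.parse(payloadJson);

  console.log(`payload size: ${payloadJson.length} bytes`);
  for (const [key, value] of Object.entries(payload)) {
    console.log(`${key}: ~${JSON.stringify(value).length} bytes`);
  }
  return payload;
}
```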
{
"code": "",
"text": "Yeah okay, I can probably disable this feature as well. Thanks @michael_schiller! @Andrew_Morgan is this on the roadmap at this stage?",
"username": "Mari_Robert"
}
] | Warning about custom user data with authentication | 2021-03-21T16:50:17.881Z | Warning about custom user data with authentication | 2,448 |
null | [
"aggregation"
] | [
{
"code": "aggregate([\n { $match: { _id: \"someID\" } },\n { $unwind: \"$array1\" },\n {\n $project: {\n _id: {\n $cond: [{ $in: [someID, \"$array1.array2._id\"] }, \"$array1._id\", null]\n },\n another_field: {\n $cond: [{ $in: [someID, \"$array1.array2._id\"] }, \"$array1.other_field\", null]\n }\n }\n }\n])\n$cond$condarray1$cond$in",
"text": "Hi, so I have an aggregation as such:Is there any way to condense this such that the $cond clause only runs once? Or does Mongo optimize repeated conditions like this such that it is run once anyway?Ideally if I can declare $cond to run only once, then use that result for the projection of various fields, that’ll be ideal. If I have multiple fields relying on this condition, it’s going to cause a lot of duplicate code where the only difference is what field I return as part of array1.I understand that because the truth outcome is different for every field, $cond may not be declared once and reused. What about $in? Can that be run once and reused?",
"username": "Ajay_Pillay"
},
{
"code": "{\"$addFields\" {\"myTempField\" { $in: [someID, \"$array1.array2._id\"] }}}\n",
"text": "You can add one $addFields extra stage after the unwind to hold that value.\nAnd use that value in the project condtion.After project it will go anyways",
"username": "Takis"
},
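The suggestion above applied to the original pipeline: the $in is evaluated once in $addFields (the temporary field name hasId is arbitrary), and because $project only keeps the listed fields, the helper field never reaches the client.

```javascript
aggregate([
  { $match: { _id: "someID" } },
  { $unwind: "$array1" },
  { $addFields: { hasId: { $in: [someID, "$array1.array2._id"] } } },
  {
    $project: {
      _id: { $cond: ["$hasId", "$array1._id", null] },
      another_field: { $cond: ["$hasId", "$array1.other_field", null] }
    }
  }
])
```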
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Condensing this aggregation such that $cond/$in only runs once | 2021-06-12T21:45:11.092Z | Condensing this aggregation such that $cond/$in only runs once | 2,055 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n _id: \"someID\"\n array1: [\n { _id: \"id1\", array2: [\n {_id: \"a\", other_data: \"data\"},\n {_id: \"b\", other_data: \"data\"},\n {_id: \"c\", other_data: \"data\"}\n ]\n }.\n { _id: \"id2\", array2: [\n {_id: \"d\", other_data: \"data\"},\n {_id: \"e\", other_data: \"data\"},\n {_id: \"a\", other_data: \"data\"}\n ]\n }.\n { _id: \"id3\", array2: [\n {_id: \"x\", other_data: \"data\"},\n {_id: \"y\", other_data: \"data\"},\n {_id: \"z\", other_data: \"data\"}\n ]\n }\n ]\n}\narray2id3array2_id[\n {\n _id: \"id3\"\n }\n]\nid2id3[\n {\n _id: \"id2\"\n },\n {\n _id: \"id3\"\n }\n]\narray2[\n {\n _id: \"id1\"\n },\n {\n _id: \"id2\"\n },\n {\n _id: \"id3\"\n }\n]\ndb.findOne({ _id: mainID, \"array1.array2._id\" : { $ne: selectedID } }))",
"text": "Hi, I have some data that looks like this:I wish to find all elements in array1 which do not have an element in array2 with the ID of “a”. So, I wish to retrieve an array as such, since only id3's array2 field does not contain an element with _id “a”.If I repeat this for searching for “b”, I should get the following since only id2 and id3 do not contain “b”:If I search for “q”, I should get the following since they all do not contain “q” in array2:I tried db.findOne({ _id: mainID, \"array1.array2._id\" : { $ne: selectedID } })) but this isn’t giving me the expected result.How should I go about this?Any help is appreciated, thank you!",
"username": "Ajay_Pillay"
},
{
"code": "db.getCollection('myDB').aggregate([\n { $match: { _id: \"someID\" }},\n { $project: { \"array1\": {\n $filter: {\n input: \"$array1\",\n as: \"item\",\n cond: { $ne: [\"$$item.array2._id\", \"id\"]}\n }\n }\n }\n }\n])\n",
"text": "I tried using an aggregate but have only managed to get this far:But this isn’t quite giving me the expected result. I think this is going down the right path but I’m not sure how I’m supposed to be checking the condition correctly. It seems to evaluate to true regardless of what the input string is on the right.I think I need to somehow include another projection filter in the first bit after $ne but I’m not quite sure how to do this correctly.",
"username": "Ajay_Pillay"
},
{
"code": "db.getCollection('myDB').aggregate([\n { $match: { _id: \"someID\" } },\n { $unwind: \"$array1\"},\n {\n $project: {\n \"array1._id\": 1,\n \"array2\": {\n $filter: {\n input: \"$array1.array2\",\n as: \"item\",\n cond: { $ne: [\"$$item._id\" , \"someOtherID\"] }\n }\n }\n }\n }\n])\narray1_idarray1array2\"someOtherID\"",
"text": "Managed to solve this!First I match the document’s ID, then I unwind array1. Next, in the projection I keep the unique _id tied to array1 and filter its array2 to look for all items not equal to my chosen \"someOtherID\".Works like a charm.",
"username": "Ajay_Pillay"
},
{
"code": "_idarray1db.getCollection(\"myDB\").aggregate([\n { $match: { _id: \"someID\" } },\n { $unwind: \"$array1\" },\n {\n $project: {\n \"array1._id\": {\n $cond: [{ $in: [{ _id: \"someIDToCheck\" }, \"$array1.array2\"] }, null, \"$array1._id\"]\n },\n \"array2\": {\n $filter: {\n input: \"$array1.array2\",\n as: \"item\",\n cond: { $ne: [\"$$item._id\", \"someIDToCheck\"] }\n }\n }\n }\n }\n])\nsomeIDToCheckarray2array2someIDToCheck[{\n \"_id\" : \"someID\",\n \"array1\" : {\n \"_id\" : null\n },\n \"array2\" : [ \n {\n \"_id\" : \"someOtherIDNotMatched\"\n }, ...\n ]\n},\n{\n \"_id\" : \"someOtherID\",\n \"array1\" : {\n \"_id\" : \"someHashedID\"\n },\n \"array2\" : [ \n {\n \"_id\" : \"someOtherIDNotMatched2\"\n }, ...\n ]\n}]\narray1someIDToCheckarray2nullarray1._idarray1._id\"array1._id\": {\n $cond: [{ $in: [\"someIDToCheck\" , \"$array1.array2._id\"] }, null, \"$array1._id\"]\n },\n",
"text": "So as a final follow-up, this didn’t quite give me the data I required (it missed a little extra bit, where it returns the _id of the array1 element.The final version is as such:This gets me my desired data, the first result is when some element with the ID of someIDToCheck exists in array2, and the second is when it does NOT exist. The returned array2 will never contain the element with ID of someIDToCheck:For my use case I actually don’t need the array2 result, so I removed that bit. I just need the array1 IDs which do NOT contain someIDToCheck in their respective array2's. If it does contain, it just returns null in the array1._id field.EDIT: Small correction, for the array1._id projection it should be:The first argument should be the string, not an object with the _id as the key and the string as the value. The second argument should be querying on the _id field of the array element.",
"username": "Ajay_Pillay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I search for all elements in a nested array NOT matching a certain parameter? | 2021-06-10T22:40:06.274Z | How do I search for all elements in a nested array NOT matching a certain parameter? | 13,222 |
null | [
"java"
] | [
{
"code": "",
"text": "There is comment option in collection.find , through which you can send a request id, but there is no similar option available while doing writes?Also, i didn’t find a hook, which gets called every time , before request is send out ,in which i can set few common things. CommandListener is there, but it doesn’t allow me to set any properties.Any help in above two queries?",
"username": "Rahul_Singh1"
},
{
"code": "",
"text": "Hi @Rahul_Singh1, Can you share more details about your queries ?? As we can see you are asking 2 questions but i am not getting what are your test case/scenario behind these queries.Please share more details so, community members can will get idea of what type of hook you are asking and what you need while write operations.Thanks.",
"username": "Manoj_Manikrao_Sawan"
}
] | Not able to send Request ID in mutations through Java driver | 2021-06-08T07:50:19.720Z | Not able to send Request ID in mutations through Java driver | 1,731 |