Dataset columns:
  image_url: string (length 113-131)
  tags: sequence
  discussion: list
  title: string (length 8-254)
  created_at: string (length 24)
  fancy_title: string (length 8-396)
  views: int64 (range 73-422k)
null
[]
[ { "code": "", "text": "Hi All,\nI am planning to build a database using MogoDB Community Server that will store roughly 20 GB of data and 8 million documents. I would like to build a Windows 10 tower PC for use with the database and right now I am looking at an AMD CPU with 16 cores and 32 GB of RAM, but do not know if this is overkill, under powered, or just about right. I am also assuming that I can go with a lower end GPU, since I am not interested in gaming, rendering, etc. Also, regarding processing time, I am hoping to get results in seconds rather than minutes if possible. Any advice you can offer would be greatly appreciated.\nThanks in advance!", "username": "David_Geyer" }, { "code": "", "text": "Hi David,Thanks for asking. Not an expert here, take this with a grain of salt.Looking at the Hardware Considerations for Production (https://docs.mongodb.com/manual/administration/production-notes/#hardware-considerations) I think you’re in the safe side, most probably 32GB RAM for just running the MongoDB Community Server are overkill but you know the saying: “The more harddisk and memory you’ll have, the happier you’ll be”.Don’t think GPU is key here, but using the fastest possible M.2 NVMe drives does count. Not all M.2 drives are NVMe, some are SATA and are then capped by SATA max data transfer speeds.Hope this helps.P.S.: hurry up buying hardware because SSDs are getting more expensive (same that happened with graphic cards) because they’re now also used to mine Bitcoins…", "username": "Diego_Freniche" }, { "code": "", "text": "Hi Diego,The link to the “Hardware Considerations” is very helpful. Also, thanks for the heads up on the Bitcoin issue! I have heard yesterday that it was affecting high end graphics cards, but not the SSDs.Dave", "username": "David_Geyer" } ]
Hardware Recommendations For Community Server
2021-05-12T02:12:53.838Z
Hardware Recommendations For Community Server
3,049
null
[ "security" ]
[ { "code": "", "text": "With which algorithm are the passwords of email/password users encrypted? I couldn’t find any information about this.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hi @Jean-Baptiste_Beau passwords are stored in the backend salted and hashed with SHA256. They’re not stored in the client", "username": "Andrew_Morgan" }, { "code": "", "text": "Perfect, thank you for the answer. This should be added somewhere in the MongoDB Realm documentation — maybe it is already and I missed it.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Realm: how are passwords of email/password users encrypted and stored?
2021-05-12T08:24:04.047Z
MongoDB Realm: how are passwords of email/password users encrypted and stored?
2,181
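A generic illustration of the salt-then-hash idea described in the thread above. This is only a sketch in Python, not Realm's actual backend implementation (which is not published); the 16-byte salt length and the return format are assumptions.

import hashlib
import hmac
import os

def hash_password(password: str):
    # Generate a random salt (16 bytes is an arbitrary choice for this sketch)
    salt = os.urandom(16)
    # Hash the salt concatenated with the UTF-8 encoded password using SHA-256
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    # Recompute the digest with the stored salt and compare in constant time
    candidate = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return hmac.compare_digest(candidate, stored_digest)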
null
[ "aggregation", "c-driver" ]
[ { "code": "db.t5.aggregate([ {$lookup: { \n\t\t\t\tfrom: \"t6\",\n\t\t\t\tlet: {id_field: \"$id\", age_field: \"$age\", name_field: \"$name\"},\n\t\t\t\tpipeline: [ { $match:{ $expr:{ \n\t\t\t\t\t\t\t $and: [\n\t\t\t\t\t\t\t { $or: [\n\t\t\t\t\t\t\t { $eq: [ \"$old\", \"$$age_field\" ] },\n\t\t\t\t\t\t\t { $eq: [ \"$no\", \"$$id_field\"] }\n\t\t\t\t\t\t\t ]}\n\t\t\t\t\t\t\t\t ,{ $eq: [\"$alias\", \"$$name_field\"] }\n\t\t\t\t\t\t\t ]\n\t\t\t\t\t\t }\n\t\t\t\t\t }\n\t\t\t\t}],\n\t\t\t\tas: \"joined_result\"\n\t\t }},\n\t\t {$unwind: {path: \"$joined_result\", preserveNullAndEmptyArrays: false}}\n])\nSELECT * FROM t5 \nLEFT JOIN t6 ON (t5.age = t6.old AND t5.id = t6.no OR t5.name = t6.alias);\n", "text": "How to form the complex aggregation pipeline using MongoC driver API. I know BCON_NEW can be used to form the hard-coded pipeline but If we want to form pipeline run time. What are the various ways available? Can you provide some complex example.\nE.g. If I want to build below aggregation pipeline:Please note that below is equivalent query in SQL:Where t5 and t6 are collections/tables and t5 has age, id and name columns/fields. Also, t6 has old, no and alias columns/fields.What are possible ways to build this pipeline run time using MongoC driver API’s. The meaning of ‘run-time’ is that there can be different operations I mean multiple Join clauses. Different AND and OR combinations.Is there any tool to build pipeline?", "username": "Vaibhav_Dalvi" }, { "code": "", "text": "Hi @Roberto_Sanchez @Shane @Paul_Done @Kevin_Albertson @tapiocaPENGUIN Could you please comment on this ? Otherwise please ask someone who is good at C driver.Let me know if problem description is not clear or you need more details.It would be great help. Thanks.", "username": "Vaibhav_Dalvi" } ]
Forming Aggregation pipeline using mongo-c-driver APIs
2021-05-10T04:58:23.754Z
Forming Aggregation pipeline using mongo-c-driver APIs
2,786
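The thread above asks about the C driver (BCON_NEW / bson_append_* style construction). As a shape reference only, here is a hedged Python/pymongo sketch that assembles the same $lookup pipeline at run time from lists of join conditions; the connection string and database name are placeholders, and this is not the C API the poster asked for.

from pymongo import MongoClient

def build_lookup_pipeline(or_conds, and_conds):
    # Each condition is a (foreign_field, local_variable) pair, e.g. ("old", "age_field")
    or_exprs = [{"$eq": ["$" + f, "$$" + v]} for f, v in or_conds]
    and_exprs = [{"$eq": ["$" + f, "$$" + v]} for f, v in and_conds]
    return [
        {"$lookup": {
            "from": "t6",
            "let": {"id_field": "$id", "age_field": "$age", "name_field": "$name"},
            "pipeline": [{"$match": {"$expr": {"$and": [{"$or": or_exprs}, *and_exprs]}}}],
            "as": "joined_result",
        }},
        {"$unwind": {"path": "$joined_result", "preserveNullAndEmptyArrays": False}},
    ]

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
pipeline = build_lookup_pipeline(
    or_conds=[("old", "age_field"), ("no", "id_field")],
    and_conds=[("alias", "name_field")],
)
for doc in client["test"]["t5"].aggregate(pipeline):
    print(doc)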
null
[ "dot-net", "xamarin" ]
[ { "code": "", "text": "I have raised this same issue a while back in Xamarin github issues and after some time decided to raise this in here as well, cause it might be that Realm’s team should fix something to make this work.Basically, the cool Xamarin Hot Restart Feature is not working when you use Realm database.This basically is a developer productivity killer… Considering most of the Xamarin projects I do are using Realm, you can imagine this has become very irritating. Is there anything that Realm team can do to fix this issue?", "username": "Gagik_Kyurkchyan" }, { "code": "", "text": "Hi @Gagik_Kyurkchyan, I checked with the engineering team and they confirmed that they’re aware of the issue. I’d suggest creating a new “idea” in the Realm feedback tool - that way you and other developers can vote for it to flag its priority Realm: Top (68 ideas) – MongoDB Feedback Engine", "username": "Andrew_Morgan" }, { "code": "", "text": "@Andrew_Morgan thanks for the response! I’ve created the idea", "username": "Gagik_Kyurkchyan" } ]
Xamarin Hot restart is not compatible with Realm database
2021-05-08T06:49:01.043Z
Xamarin Hot restart is not compatible with Realm database
3,922
null
[ "aggregation", "security" ]
[ { "code": "readWrite@dbnamedbnameaggregationreadWriteAnyDatabase@adminUnhandledPromiseRejectionWarning: \nMongoError: not authorized on mydb?retryWrites=true&w=majority \nto execute command { aggregate: \"applicationMetadata\", pipeline: [ { $match: {} }, {\n $group: { _id: 1, n: { $sum: 1 } } } ], cursor: {}, lsid: { id: UUID(\"dcb2caef-dccf-4de7-acd9-1713101be14b\") },\n $clusterTime: { clusterTime: Timestamp(1620655876, 2)\n", "text": "I am using MongoDB Atlas. I create a user which has readWrite@dbname permission on a single DB(dbname).\nGetting the following error when I start aggregation\nIt works If I change policy to readWriteAnyDatabase@adminI would like to create custom policy with the least permission on the cluster. I don’t want to let this user access other DBs.\nWhich permission should I assign to this user to be able to run aggregate?", "username": "ismail_yenigul" }, { "code": "readWrite@dbnamedbnamereadWrite@dbnamedbnamereadWriteAnyDatabase@adminreadWrite@dbnamefindfindtestdbfind", "text": "Hi @ismail_yenigul,Welcome to the community!Would you be able to provide the following information to help troubleshoot the error?:UnhandledPromiseRejectionWarning:\nMongoError: not authorized on mydb?retryWrites=true&w=majorityYou have stated originally that the user had the readWrite@dbname permission on the dbname database. However, the error above indicates you are running the aggregate command against a different database name. Have you tried the same command with the user who has readWrite@dbname permissions against database dbname? Since the same command works using readWriteAnyDatabase@admin as opposed to readWrite@dbname, I suspect that the issue may exist with what database the command is being run against.I would like to create custom policy with the least permission on the cluster. I don’t want to let this user access other DBs.You can configure a Custom Role in Atlas so that Database users associated with the custom role can only perform selected actions and roles against certain database(s).Which permission should I assign to this user to be able to run aggregate?You can assign the find action so that database users associated with a custom role with this action are allowed to perform the aggregation command you have provided.Please see the example below of a custom role with the find action allowed for the testdb database:\n\nimage740×169 10.2 KB\nNote : You will be able to find the find action under the category Collection Actions → Query and Write ActionsHope this helps.Kind Regards,\nJason", "username": "Jason_Tran" }, { "code": "readWrite@mydb", "text": "Hi Jason,I realized it was the wrong database after the message. How did I miss that part \nWe are using nodejs with typeorm.\nIt seems we hit bug: mongodb url with query params is incorrectly parsed · Issue #6389 · typeorm/typeorm · GitHub error. typeorm can’t parse mongodb uri correctly, then it considers “mydb?retryWrites=true&w=majority” as db name. This is the reason why we are getting error on aggregate.\nWe updated typeorm release and it is fixed. It works fine with readWrite@mydb\nThanks", "username": "ismail_yenigul" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What permissions are required to run aggregate
2021-05-11T11:34:51.791Z
What permissions are required to run aggregate
4,534
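A minimal pymongo sketch of the count aggregation from the thread above, run explicitly against the database the custom role (or readWrite@mydb) was granted on. The URI and credentials are placeholders; the point is that the database named in the code must match the one the role covers, which was the real issue once the typeorm URI-parsing bug was fixed.

from pymongo import MongoClient

# Placeholder Atlas URI; the user only needs the "find" action (or readWrite) on "mydb"
uri = "mongodb+srv://appUser:<password>@cluster0.example.mongodb.net/mydb?retryWrites=true&w=majority"
client = MongoClient(uri)
db = client["mydb"]  # select the database explicitly rather than relying on URI parsing

# Same pipeline the driver issued in the thread: count all documents in the collection
pipeline = [{"$match": {}}, {"$group": {"_id": 1, "n": {"$sum": 1}}}]
print(list(db["applicationMetadata"].aggregate(pipeline)))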
null
[ "python", "performance" ]
[ { "code": "updateoneupdatemanyfor file in sorted_files:\n df = process_file(file)\n for row, item in df.iterrows():\n data_dict = item.to_dict()\n bulk_request.append(UpdateOne(\n {\"nsamples\": {\"$lt\": 12}},\n {\n \"$push\": {\"samples\": data_dict},\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n ))\n result = mycol1.bulk_write(bulk_request)\n...\n...\nbulk_request.append(UpdateMany(..\n..\n..\n", "text": "I want to know if its faster(importing) using updateone or updatemany with bulk write.My code for importing the data into the collection with pymongo look is this:When i tried update many the only thing i change is this:I didnt see any major difference in insertion time.Shouldnt updateMany be way faster?\nMaybe i am doing something wrong.Any advice would be helpful!\nThanks in advance!Note:My data consist of 1.2m rows .I need each document to contain 12 subdocuments.", "username": "harris" }, { "code": "", "text": "didnt see any major difference in insertion time.Shouldnt updateMany be way faster?Yes it is supposed to be. But, as answered in your other thread, your observed time is not necessarily related to the performance of your server since you have file access and other logic intermixed. Your hardware configuration might also be inadequate for your use case.", "username": "steevej" }, { "code": "for file in sorted_files:\n df = process_file(file)\n for row, item in df.iterrows():\n data_dict = item.to_dict()\n bulk_request.append(UpdateMany(\n {\"nsamples\": {\"$lt\": 12}},\n {\n \"$push\": {\"samples\": data_dict},\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n ))\n result = mycol1.bulk_write(bulk_request)\n", "text": "Yes.i will check everything you replied to me. but i am wondering if this code is right.maybe i am doing something wrong.Is this how the right way to do it?", "username": "harris" }, { "code": "data_dict{\"nsamples\": {\"$lt\": 12}}", "text": "UpdateMany is useful when you want to apply the exact same update operation to multiple documents in a collection. This does not appear to be what you want to use here. It seems like your intention is to add each data_dict sample to the data set once (to a single document), is that correct? If so, then you should be using UpdateOne.As for why you don’t see a performance difference between the two, I suspect that is because the query ({\"nsamples\": {\"$lt\": 12}}) only ever has either 0 or 1 result in which case UpdateOne and UpdateMany are identical.", "username": "Shane" }, { "code": "data_dictincEach group of operations can have at most 1000 operations. If a group exceeds this limit, MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.", "text": "I understand.Yes you are right.Yes what i am trying to do is to add each data_dict to the dataset to a single document and when that specific document get full because the inc then we go to the next document and we do the same again.When we finish every document should have 12 subdocuments inside…One thing more.Should i set a loop for every 1000 updates?Will i see any difference? I say that because of that Each group of operations can have at most 1000 operations. If a group exceeds this limit, MongoDB will divide the group into smaller groups of 1000 or less. 
For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.", "username": "harris" }, { "code": "", "text": "Should i set a loop for every 1000 updates?No, you should not batch at 1000 ops. 1000 ops used to be the bulk write batch size limit but that limit was increased to 100,000 ops starting in MongoDB 3.6 (back in 2017). https://docs.mongodb.com/manual/reference/limits/#mongodb-limit-Write-Command-Batch-Limit-SizeIt’s ideal to pass as many bulk_write operations as possible in a single call. PyMongo will automatically batch the operations together in chunks of 100,000 (or when a chunk reaches a total size of 48MB). The next real limitation is the app’s memory. It might be inefficient or impossible to materialize all the operations in a single call to bulk_write. For example, let’s say you have 12,000,000 ops and each one is 1024 bytes, then you would need at least 12GB of memory. To solve this problem the app can batch manually at 100,000 ops which gives the same MongoDB performance with lower client side memory usage.A further optimization would be to use multiple threads and execute multiple bulk writes in parallel using a single MongoClient shared between them.", "username": "Shane" }, { "code": "StackOverflow", "text": "I have answered this question with code to achieve it on StackOverflow.", "username": "Harshavardhan_Kumare" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Bulk Write UpdateOne or UpdateMany
2021-05-11T08:42:48.487Z
MongoDB Bulk Write UpdateOne or UpdateMany
8,424
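A minimal sketch of the UpdateOne-based bulk write described above, with the pandas file loop replaced by a plain list of sample dicts so the example stays self-contained; the collection names and the bucket size of 12 come from the thread, and the URI is a placeholder.

from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
col = client["mydb1"]["mycol1"]

def bucket_samples(samples, bucket_size=12):
    # One UpdateOne per sample: push into a bucket document that still has room,
    # or upsert a fresh bucket when none matches
    ops = [
        UpdateOne(
            {"nsamples": {"$lt": bucket_size}},
            {"$push": {"samples": s}, "$inc": {"nsamples": 1}},
            upsert=True,
        )
        for s in samples
    ]
    if ops:
        # A single client/server round trip; PyMongo splits into server-side batches itself
        result = col.bulk_write(ops, ordered=True)
        print(result.upserted_count, "upserts,", result.modified_count, "updates")

bucket_samples([{"t": 1, "value": 3.2}, {"t": 2, "value": 3.4}])  # toy data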
null
[ "data-modeling", "swift", "atlas-device-sync" ]
[ { "code": "friendshipUsers=userId1+userId2user1+user2user2+user1", "text": "Hey everyone! I got stuck on making a document available for two users only. My partition is friendshipUsers=userId1+userId2. In a backend function I check if user’s Id is contained in the partition and then I give read/write permissions. However the problem is with opening a realm in the client.A hacky solution would be to check if the client had issues connecting to realm with partition user1+user2 and if so then try with user2+user1.Another thing I can do I suppose is duplicate the document so each user access their own doc and update each one with function when there is a change.Is there a way to have two user read and write to the same document by checking their ids?Thanks ", "username": "dimo" }, { "code": "", "text": "Hi @dimo,\nCould you sort the user ids in the partition (so that the order in the partition string is deterministic?)", "username": "Andrew_Morgan" }, { "code": "private func request() {\n do {\n try friendshipRealm.write {\n var newFriendship = Friendship()\n newFriendship = Friendship(partition: \"friendshipUsers=\\(state.user!._id)+\\(publicUser._id)\",\n requester: state.user!._id,\n receiver: publicUser._id,\n between: [state.user!._id, publicUser._id],\n accepted: false)\n friendshipRealm.add(newFriendship)\n type = .cancel\n }\n } catch {\n state.error = \"Unable to request friendship\"\n }\n}\nvar body: some View {\n FriendButton(publicUser: user)\n .environment(\\.realmConfiguration, app.currentUser!\n .configuration(partitionValue: \"friendshipUsers=\\(state.user!._id)+\\(user._id)\"))\n} \nrequester+receiver", "text": "Hey @Andrew_Morgan,This is how I create the document:This is how I open the realm:The partition is always requester+receiver. I wonder if I am doing the whole thing in a wrong way. Would you recommend to duplicate the document for each user?", "username": "dimo" }, { "code": ".configuration(partitionValue: \n \"friendshipUsers=\\(min(state.user!._id, user._id))+\\(max(state.user!._id, user._id))\"))\n", "text": "I’m assuming that your issue is that each of the 2 friends is using a different ordering of users in the partition key? If so, then you could get around that problem by always using something like…The same logic would be applied when creating the object. That way both users would use the same partition name for the relationship.Alternatively, you could duplicate the data and then user the user id as the partition (that may mean that the app would open fewer partitions which could help with performance)", "username": "Andrew_Morgan" }, { "code": "", "text": "Worked perfectly! I need to learn more about swift I guess…", "username": "dimo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Doc available only for two users?
2021-05-09T15:00:28.834Z
Doc available only for two users?
1,603
null
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "partitionpublicuserteamA", "text": "Hey\nI want to sync different partition values at same time. Let’s say may partition key is partition and an I have different values. I want to sync public,user,teamA,teamB`,… .\nWhat’s the limit for this?\nIs there any limit in Mongo Atlas side that can stop me from syncing partition?\nIf no, Is there any suggestion from limit in client side that can cause performance issues?", "username": "mahdi_shahbazi" }, { "code": "", "text": "The limit you might hit will be on the number of partitions that a given device has open at any given time – as always, mileage varies, but if you stay at 10 open realms/partitions per app/device then you should be fine.", "username": "Andrew_Morgan" }, { "code": "", "text": "@Andrew_MorganCan you elaborate a bit? Why 10? Is that a bandwidth concern or just a hard limit coded in Realm?In the past we’ have 20+ partitions available to sync and Realm didn’t seem to have an issue with that.", "username": "Jay" }, { "code": "", "text": "There’s no hard-coded limit in Realm, it’s down to the number of file descriptors that various mobile devices allow an app to use (each open Realm uses 6-8). More modern devices allow more, and there are differences between iOS and Android (the lower limits tend to be on iOS).", "username": "Andrew_Morgan" } ]
Limit of partition values that can be synced at the same time
2021-05-08T11:43:12.011Z
Limit of partition values that can be synced at the same time
2,058
null
[ "dot-net", "atlas-data-lake" ]
[ { "code": "", "text": "If I implement Atlas Data Lake am I able to query it with the dot net driver? It would seem odd if this wasnt possible", "username": "Graeme_Henderson" }, { "code": "", "text": "Hi @Graeme_Henderson - you should be able to query Atlas Data Lake with any MongoDB driver including the dot net driver.", "username": "Naomi_Pentrel" }, { "code": "", "text": "Ok great but I can’t find any documentation for that. Is the connection the same as the database. Is the query against collections? How are collections defined. If nor collections how is the query done.", "username": "Graeme_Henderson" }, { "code": "", "text": "Sorry for the very late reply, but just to clarify things you can use any MongoDB driver to connect to Atlas Data Lake. Data Lake behaves similarly to a MongoDB cluster but is mostly read-only.Queries are against Databases and Collections just like in a cluster. I’d encourage you to take a look at our new UI to see how you can set it up.", "username": "Benjamin_Flast" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Driver compatibility with Data Lake
2020-10-29T09:09:06.080Z
Driver compatibility with Data Lake
3,361
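Since Atlas Data Lake accepts any MongoDB driver, a hedged pymongo sketch of a query against it might look like the following; the connection string, virtual database, collection, and filter are all placeholders to be replaced with the values shown in the Data Lake Connect dialog and storage configuration.

from pymongo import MongoClient

# Placeholder connection string; copy the real one from the Data Lake "Connect" dialog
uri = "mongodb://<user>:<password>@<your-data-lake-hostname>/?ssl=true&authSource=admin"
client = MongoClient(uri)

# Databases and collections here are the virtual ones defined in the storage configuration
db = client["myVirtualDatabase"]
for doc in db["myVirtualCollection"].find({"year": 2020}).limit(5):
    print(doc)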
null
[ "mongodb-shell", "installation" ]
[ { "code": "", "text": "Hi,Trying to install mongo shell on Apple Mac M1 - Bigsur 11.2Did anyone see this before “zsh: bad CPU type in executable: mongo”can anyone help me please.Thanks,\nAravind.", "username": "Aravind_Adla" }, { "code": "", "text": "I do not think that the new processor of the M1 is supported yet. Check on MongoDB Developer Community Forums for verification.", "username": "steevej" }, { "code": "", "text": "What is your default shell?\nIf it is not zsh try to switch to it and tryIf you are in zsh then it could be version compatibilty issue like [steevej-1495] mentioned.\nIn some cases 32 bit vs 64 bit also cause issues\nCheck this links.May helphttps://discussions.apple.com/thread/250777998", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Did anyone see this before “zsh: bad CPU type in executable: mongo”Hi @Aravind_Adla,This is an older question but I wanted to note that although MongoDB does not run natively on M1 processors at the moment, you can use macOS’ Rosetta Translation Environment to run the Intel binaries. If you don’t have Rosetta installed, I believe you should be prompted to install this the first time you try to run an Intel binary.hi what links should i check?Follow the standard Install MongoDB Community Edition on macOS guide to install MongoDB server & tools via the Homebrew package manager. The only other requirement is that you have installed Rosetta (which is a one-off installation if you want to run any Intel apps).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Zsh: bad CPU type in executable: mongo
2021-02-09T21:17:26.427Z
Zsh: bad CPU type in executable: mongo
16,811
null
[ "connector-for-bi" ]
[ { "code": "", "text": "For different systems that we have, there are more than one mongodb server setup for Q&A purposes. I want to be able to access them in Power BI. I have successfully configured the BI Connector, mongosqld command line parameter, the ODBC driver and connection, and been able to connect to the server that I had provided on the mongosqld command line.I wanted to know if it is possible for the mongosqld to connect to more than one server at a time, or if there is another way to do this? Right now, I can have a different batch file to execute on my local windows computer to establish the connection, though it would be ideal if I can have both servers connected at the same time to use in my reporting.", "username": "Jeffrey_Newman" }, { "code": "", "text": "You can ignore this. We are using a free version of mongodb, possibly the community version, so the ability to connect to multiple of those database servers at the same time is not an option. If we decide to migrate to atlas servers, I believe we would be able to do it, though not sure how that would work.I chatted with a mongodb respresentative named Christopher who was very helpful. I replied to my post so that perhaps it may be helpful to someone else in the future.", "username": "Jeffrey_Newman" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoSQLd multiple servers
2021-05-11T16:30:43.124Z
MongoSQLd multiple servers
3,090
https://www.mongodb.com/…f_2_1024x252.png
[ "atlas-triggers" ]
[ { "code": "", "text": "On Realm log shows below errorsimage1547×381 30.4 KBI have already tried restarting many times.\nIs there any solution to resume the token programmatically after the trigger is suspended?", "username": "Jayvant_Patil" }, { "code": "", "text": "Hi Jayvant – Looks like we can do a better job surfacing the EventBridge error in this case, we’re looking into that. It looks like the root cause of this error is that they MongoDB change event is larger than the maximum size that EventBridge supports. One way to get the size down may be to add a projection (under advanced settings) to ensure you aren’t sending fields that aren’t needed on the EventBridge side.", "username": "Drew_DiPalma" }, { "code": "", "text": "Hi Drew,\nCan we know which document id is causing this issue?", "username": "Jayvant_Patil" } ]
Trigger keeps being suspended with unknown error message
2021-05-06T12:44:00.746Z
Trigger keeps being suspended with unknown error message
3,003
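To make the projection suggestion above concrete, here is an illustrative projection document of the kind that can be set in the trigger's advanced settings; the field names are hypothetical, the idea is simply to exclude large fields so the forwarded change event stays under the EventBridge size limit.

# Hypothetical field names; exclude whatever large fields EventBridge does not need
projection = {
    "fullDocument.largeBinaryBlob": 0,
    "fullDocument.rawSensorReadings": 0,
    "updateDescription": 0,
}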
null
[ "queries", "swift" ]
[ { "code": "class Parent: Object {\n let id = ObjectId()\n let name: String = \"some name\"\n\n let children = List<Child>\n}\n\nclass Child: Object {\n let id = ObjectId()\n let name: String = \"some name\"\n\n}\nvar allLazySequence<FlattenSequence<LazyMapSequence<Results<Parent>, List<Child>>.Elements>> extension Child {\n \n var all: Results<Child> {\n let parents = realm.objects(Parent.self)\n \n // Returns LazyMapSequence<Results<Parent>, List<Child>>\n let children = parents.map({$0.children})\n \n // Returns LazySequence<FlattenSequence<LazyMapSequence<Results<Parent>, List<Child>>.Elements>>\n let all = children.joined()\n \n return all\n }\n }\nextension Child {\n \n var all: Results<Child> {\n let parents = realm.objects(Parent.self)\n \n // Returns LazyMapSequence<Results<Parent>, List<Child>>\n let children = realm.objects(Child.self).filter(\"parent IN %@\", parents)\n \n return children\n }\n }", "text": "Never quite sure where to post this type of question these days but I will give it a go here…The model is as follows (simplified example)How can I get all the parents children back as a since results set ? It seems odd to define the var all as a LazySequence<FlattenSequence<LazyMapSequence<Results<Parent>, List<Child>>.Elements>>The only other way I can think of is to add a property ‘parent’ to the ‘Child’ objects and then query the realm for all children whose ‘parent’ is IN the parents Result set.", "username": "Duncan_Groenewald" }, { "code": "class Parent: Object {\n @objc dynamic var id = ObjectId()\n @objc dynamic var name: String = \"\"\n\n let children = List<Child>()\n}\n\nclass Child: Object {\n @objc dynamic var id = ObjectId()\n @objc dynamic var name: String = \"\"\n\n let linkingParent = LinkingObjects(fromType: Parent.self, property: \"children\")\n}\nlet jayChildren = realm.objects(Child.self).filter(\"ANY linkingParent.name == 'Jay'\")\nfor child in jayChildren {\n print(\" \", child.name)\n}\nnamelet jayChildren = realm.objects(Child. self ).filter(\"ANY linkingParent == %@\", jayObject)", "text": "How can I get all the parents children back as a since results setFirst question is why do you want to get the children back as a Results? What will you being doing with them?Then, let’s fix the objects and include LinkingObjects which generates a path back to the parentSuppose we have a parent with the name Jay and we we to get all Jay’s kids as a Results objectThat being said, getting anything by name is a bit vacuous as there could be a lot of ‘Jay’ in the Parent class. So, if you get the certain parent object, you can match it that way which is guaranteeed to only get that parents children.let jayChildren = realm.objects(Child. 
self ).filter(\"ANY linkingParent == %@\", jayObject)", "username": "Jay" }, { "code": "let parents = realm.objects(Parent.self).filter(some filter)\n\nlet allSomeChildren = parents.map({$0.someChildren(params)}).joined()\n// Get array of predicates\nlet allSomeChildrenPredicates = parents.map({$0.someChildrenPredicate(params)}).reduce([], +)\n\nlet predicate = NSCompoundPredicate(andPredicateWithSubpredicates: allSomeChildrenPredicates)\n\nlet allSomeChildren = realm.objects(Child.self).filter(predicate)\n", "text": "We have extensions to the parent that provide filtered sets of Results based on some rules unique to each Parent.So simply getting the child objects directly for each Parent misses out on any of the parent specific filters.Which gets me thinking that perhaps we should look at using the Parent to just generate the NSPredicates to filter the Child objects - each parent can generate its own compound predicate and in theory we can then just query the Child objects using this filter.It just seems simpler to be able to use the following code structureRather than thisNot sure if there are any limits on the use of NSPredicate in this type of situation.", "username": "Duncan_Groenewald" }, { "code": "let allSomeChildren = parents.map", "text": "A couple of things. Your question askedHow can I get all the parents children back as a since results set ?And my answer above is a solution. However, if there are additional criteria, it would be good to include them in the question so any answers can address that. It’s good to know that:So simply getting the child objects directly for each Parent misses out on any of the parent specific filters.But what, specifically, does that mean? I think we may be able to craft a more helpful answer but understanding the use case would help.Then, and this is very important - using Swift filtering and mapping on Realm objects totally disconnects them from Realm - so this code casts the realm objects to an array, all stored in memorylet allSomeChildren = parents.mapand a large dataset can overwhelm the device because all of the objects are loaded into memory, which overrides the natural lazy-loading aspect of Realm. Those objects also are disconnected and will no longer live update or even really be Realm objects.It may be ok in this use case but just something to keep in mind.", "username": "Jay" } ]
RealmSwift - is it possible to combine the results of a List property into a single Result<>
2021-05-10T00:21:51.677Z
RealmSwift - is it possible to combine the results of a List property into a single Result<>
4,399
null
[ "atlas-functions" ]
[ { "code": "exports = function(){ const XLSX = require(\"xlsx\"); };", "text": "I am trying to use SheetJS js-xlsx as an external dependency. I have uploaded it as described in https://docs.mongodb.com/realm/functions/upload-external-dependencies.I have the following Realm function:\nexports = function(){ const XLSX = require(\"xlsx\"); };\nHowever, I get an error “execution time limit exceeded” when I run it. I’d appreciate any help solving this.", "username": "Ilya_Sytchev" }, { "code": "", "text": "Hi @Ilya_SytchevWelcome to MongoDB community.Have you tried saving this function and calling it from outside of the debug console.Even from another function or a schedule trigger, do you get the same issue?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks. I’ve just tried running this function both from another function and from a scheduled trigger but I got the same error message.", "username": "Ilya_Sytchev" }, { "code": "", "text": "Please verify that this package does not use an unsuprted module:If not please provide the application url and Ill try to lookup.\nIf its urgent please open a support call.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks, I see. It would be great if you could have a look at the package: GitHub - SheetJS/sheetjs: 📗 SheetJS Spreadsheet Data Toolkit -- New home https://git.sheetjs.com/SheetJS/sheetjs", "username": "Ilya_Sytchev" }, { "code": "", "text": "@Ilya_Sytchev,We are looking into that. I involved the realm team.We suspect that the problem is number of code lines in the package…Will update.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "Failed to upload node_modules.tar.gz: unknown: Unexpected token (62:14) 60 | } 61 | > 62 | async function* exploreWalkAsync(dir, path, followSyslinks, useStat, shouldSkip, strict) { | ^ 63 | let files = await readdir(path + dir, strict); 64 | for(const file of files) { 65 | let name = file.name;", "text": "Thanks for the update! I’ve also tried uploading an alternative package (exceljs - npm) but received the following error message:\nFailed to upload node_modules.tar.gz: unknown: Unexpected token (62:14) 60 | } 61 | > 62 | async function* exploreWalkAsync(dir, path, followSyslinks, useStat, shouldSkip, strict) { | ^ 63 | let files = await readdir(path + dir, strict); 64 | for(const file of files) { 65 | let name = file.name;", "username": "Ilya_Sytchev" }, { "code": "", "text": "@Ilya_Sytchev,The dependencies are in beta and far from perfect.We are working on improving stability and predictability…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Any updates on this by any chance?", "username": "Ilya_Sytchev" }, { "code": "", "text": "Hello @Pavel_Duchovny , one more import issue here I am trying to use Sentry SDK in Realm functions. This package depends on tslib. 
But I cannot import tslib, Realm shows following error:Failed to upload node_modules.zip: unknown: Unexpected token (15:36) 13 | // Force a commonjs resolve 14 | import { createRequire } from “module”; > 15 | const commonJSTSLib = createRequire(import.meta.url)(“…/…/tslib.js”); | ^ 16 | 17 | for (const key in commonJSTSLib) { 18 | if (commonJSTSLib.hasOwnProperty(key)) {Does this mean that I better leave the idea to use Sentry SDK in Realm for now?", "username": "Ivan_Bereznev" }, { "code": "", "text": "Hi guys,I was trying to help on first response but the beta dependencies limitations are not owned by me.CC @Drew_DiPalma maybe someone from Realm cloud could help…", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi,\nI ran into the same problem. Using SheetJS in Realm functions is very slow. Running the function in the console works, but it takes more than 40 seconds to execute. Most of the time seems to be taken up when calling ‘const XLSX = require(“xlsx”);’.\nWhen calling the very same function from within my android app I run into a timeout (already after 10 seconds). I opened an issue about it: Can not change the timeout when calling a function · Issue #7455 · realm/realm-java · GitHubI tried using exceljs instead, but I get the same error as Ilya.Would be great to get an update about it, because it is very important for me to be able to send some data as an excel sheet. The sending part with AWS SES works fine. ", "username": "Annika" } ]
Error importing external dependencies
2021-02-02T19:11:05.854Z
Error importing external dependencies
3,862
null
[ "queries", "data-modeling" ]
[ { "code": "{\n _id,\n isPublic,\n userId,\n other fields ... such as name\n}\n", "text": "Hello everyone, Im designing an app where there are public and private documents. The public documents are generated by the owner of the app, while the private data is generated by the user. These documents reside in the same collection with the following structureI have some doubts about the queries against such collection. My requirements are to show to the user both the public and his private documents, without having to select public, private or the combination of the two. So, lets say that I have a query that matches three documents on the name, one for public, one private for the user that issued the query and another one of another user. At this point I have to return only the first two documents. My query will then look like: \"give me all document matching the [name] where userId = [userId] OR isPublic=true.\nI was wondering what would be the best way to set an index to support such query. In general, the where clause should always be present, in each and every query.Thank you!", "username": "Green" }, { "code": "{userId 1 , isPublic : 1}", "text": "Hi @Green,Nice to hear from you again.This exact problem is what Realm and Realm Rules are coming to solve where you can define your sync and read/write rules. However, to leverage this you need to use MongoDB Atlas and Ream applications which I very much encourage you.https://www.mongodb.com/how-to/realm-partitioning-strategies/However, if you wish to use MongoDB you should make sure that the queries add the correct filtering criteria.What I think you can do is to use a $unionWith or 2 queries/aggregations to get the data. You will index the {userId 1 , isPublic : 1} of course and isPublic separately (or even hold public data in antoher collection).Instead of making one query with an OR maybe run 2 or a union of:If you wish the result set to come back in a natural or _id order you can add a sort stage in the end (adding the sorting field _id to the index last part)Let me know what you think,Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny, thank you again for your nice answers! I would like to go for atlas, but Im not familiar with cloud services and Im a little bit scared of the costs. I like the solution with $unionWith, but can this be used to union the same collection? Is there a way to union the result of two queries/aggregations on the db? Also, if I have to query a single document by id, I should still do the union right?\nAnother question, this index is the main index I use to select private and public data. In addition to that, there would be queries where other fields are specified (name, tags, category, etc…). In that case I suppose I need to create a single index on the field I want to query and then mongo will combine the result of the two index scans?Thank you", "username": "Green" }, { "code": "", "text": "Hi @Green,First Atlas provide a nice free 4 ever tier to experience this involves realm as well, you should try it.You can union same collection data with different pipeline filters.Well the union is for public and private data so even if one is filtered and public is not a union needs to be used to bring the additional data set.Now if you just query by id the additional fields could be fetched from the pointed document by the index. 
But if you need to filter by more complex or other criteria you should build additional indexes.We have a notion of covered queries in MongoDB where if index also have project fields it can avoid accessing the doc but if you need to fetch arbitrary or large amount of fields maintain a large index for that is unadvaised…MongoDB can use index intersection in $or but it considered a non selective pattern and should be avoided if possible…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny thank you again! Well things are a little bit more complicated. First, I have languages for public documents that comes in. Public documents have a uniqueId and a language, which makes them uniquely identifiable (I had to add a uniqueId because I need to reference these documents somewhere else and if the user change the language, I should be able to still have a valid reference). In general, I provide documents with uniqueId and language (and let mongo generate the unique _id), while user provide documents with _id generated by mongo and language = their chosen language (not actually used). So in general my queries are:Search:It looks like I need a separate collection for public documents, but then how do I query these two collection with sort and pagination?Thank youGreen", "username": "Green" }, { "code": "db.documents.aggregate( [{$match : { \"language \" : \"en\", \"isPublic\" : true } }, { $unionWith: { coll: \"documents\", pipeline: [ {$match : { \"userId\" : \"xxx\", \"name\" : ... } } ]} },\n{$sort : {\"_id\" : 1}}, {$limit : ...}])\ndb.documents.aggregate( [{$match : { \"_id \" : \"...\", \"userId\" : ... } }, { $unionWith: { coll: \"documents\", pipeline: [ {$match : { \"_id\" : \"xxx\", \"langue\" : ... } } ]} },\n{$sort : {\"_id\" : 1}}, {$limit : ...}])\n{ languege : 1, isPublic : 1, _id}{userId : 1, _id : 1}{ languege : 1, _id : 1}", "text": "Hi @Green,So it sounds like the first section can be designed of 2 queries or one unionWith :Now you can do $sort and $limit and add the _id of the last page to the next fetch match of each stage…The second section sounds like possible union:Same foe second query.The indexes here should be:Let me know if that make sense…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{language : 1, uniqueId : 1}\n", "text": "Hi @Pavel_Duchovny, really thank you this makes sense! Unfortunately, I have introduce a uniqueId instead of the _id, so I have to create the index likeThank you! Just another question, I see there are collation that I can use, but this still implies that I need to use two different documents for the languages (unfortunately embedding is not a choice as the documents could get too big)? I’m asking because I hate the fact that I had to introduce this uniqueId, but I still need a way to uniquely identify the documents also by a general id and the language to select the correct document.Thank you!", "username": "Green" }, { "code": "", "text": "How many different languages can be per document?Not sure what you mean by two documents? 
Collation can be either on a collection or query/operation … If collection level than its default cross collection.Theoretically you can have a collection per language with language prefix…Can you show me an example?Using your own id is also good, but _id is there by default for every doc…Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\"_id\" : <id>}{userId : <userId>}{\"_id\" : <id>}{\"userId:\" \"publicUserId\", \"language\" : \"en\"}{\n _id,\n name,\n description,\n tags,\n language,\n days: [\n {\n day: 1,\n data: [\n {\n nb: 0,\n subData: [\n {\n nb: 0,\n refDoc1: reference to doc with same language,\n name: name of the refDoc1 (as extended reference)\n },\n {\n nb: 1,\n refDoc2: reference to doc with same language,\n name: name of the refDoc2 (as extended reference)\n }]\n }]\n }]\n}\n{uniqueId: 1, language: en}Not sure what you mean by two documents? Collation can be either on a collection or query/operation … If collection level than its default cross collection.\n", "text": "Not so many languages, for the moment only two. The problem is that these documents refers to other documents, of course with the same language. In general I have public documents with uniqueId and language and then private documents with only the id. At the moment, all my documents are in the same collection and I can distinguish them using either the:I was thinking to embed the languages in the same document as subdocuments, and it works for two types of documents, maybe 3, but then I have other documents where this does not work given the complexity of the document itself. Its not just a single field like name description etc that changes, but these documents have a nested structure and inside that structure I reference other documents and it is exactly here where I need to have the changes for language. Example:Using your own id is also good, but _id is there by default for every doc…I would have prefer to use the _id, but then since I have everything in one document I have to use the uniqueId to be able to reference back the documents. For example, I have Document with uniqueId : 1. I have 2 Documents for that uniqueId, one for language “en” and the other for “de”. There are documents that reference one of these two documents, depending on the language, so I will keep the uniqueId and when I have to show the details of the references documents I will do -> {uniqueId: 1, language: en}.What I mean is, how the collation selects the correct documents? Is it creating a separate collection on its own? 
Or you have to specify the language on the documents and then the collation selects the correct documents by index?Thank youGreen", "username": "Green" }, { "code": "", "text": "Hi @Green,I think we need to seperate this discussion into two.So a collation can be specified in a few places:You can specify collation for a collection or a view, an index, or specific operations that support collationDepand on your use case you need to see if you have a default collation for the whole collection or you do a per field one by using an index with the collation.Than in your queries you have to specify the specific collation to match the index.So I am not sure I understand the whole design, same collection reference documents within the same collection?Why can’t the _id of a unique document be placed as a refDoc : <_id value> ?If there is a small amount of languages why not to use a collection per language and than have unique locale indexes per this collection?You can reference id’s from one language to another if needed. Moreover, the same _id can be used in two different collections as well Let me know if that make sense…Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n _id,\n name,\n steps,\n tags,\n ingerdients: [\n {\n uniqueId,\n name\n }\n ]\n}\n{\n _id,\n date,\n recipeUniqueId\n}\n", "text": "Hi @Pavel_Duchovny you are helping me alot! Thank you.\nSo for the schema, lets take an example: I have a Recipe which references one or more Aliments. Both the Recipes and the Aliments have one version for each language. The Aliments are easy, they do not reference any document and are small, so in this case I could embed all the language in a single document (only name and description are going to change between one language and the other. However, I would like to have the same structure for all the multilanguage documents). Now, the Recipe has also its language and of course the Aliments it references should be of the same language too. This is an example of Recipe:Next, this Recipe is kept in another collection, lets call it DailyRecipes. The user can move one or more recipe into this collection. The user is currently using language “en”, so it will add to the DailyRecipes the Recipe [uniqueId: 1, language: en]. An entry in this DailyRecipes would be:Now, the user can travel from the DailyRecipes to the Recipe using the uniqueId and the current language (en) with the query {“uniqueId”: , “language”: “en”}. When the user changes the language, the navigation will use the query {“uniqueId”: , “language”: “de”}. What I dont like about that is this uniqueId, because it is only present for the public documents and not for the private and I would like to be able to treat the public and private documents in the same way in the API. 
What would be nice is to use the primary id and the language to select all the documents, instead of using the uniqueId and language for public and id for private.Thank you!", "username": "Green" }, { "code": "{\n _id,\n recipeId, \n name,\n isPublic : true/false,\n type : [ \"regular\", \"daily\"],\n steps,\n language,\n tags,\n ingerdients: [\n {\n uniqueId,\n name\n }\n ]\n}\n", "text": "Hi @Green,So this “UniqueId” is more a recipeId as it is not unique but can have few instances.Why each document can’t have a recipeId ?Additionally, why a document can’t have a “type” field defining if it is a “regular” recipe or a “DailyRecipe” ?Why not to have the following schema:Now you search for user recipes and public from the same collection.Or I am totally off?Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n _id,\n date,\n recipes: [\n <recipeUniqueId>,\n <recipeUniqueId>,\n ...\n ]\n}\n{\n _id: 1,\n language: en,\n name: Fish Recipe\n}\n{\n _id: 1,\n language: it,\n name: Ricetta pesce\n}\n", "text": "Hi @Pavel_Duchovny, actually no. The Recipes may be public or not, while the DailyRecipes collection is kind of a collection of today recipes. (Sorry the json I posted for the DailyRecipes was wrong):When the user switches languages, the DailyRecipe document should point to the correct recipe with the current language, thats why I have this uniqueId (which is actually a recipe Id created by me). I know that this use case is really corner case for the regular users, but in the future there will be other kind of users that would use this feature to create the same document in multiple language. To me it would me much easier and clear to get rid of the additional uniqueId, but what I did not understand if it is possible with collation or some other mechanism (without having multiple collections) to have:Document “en”:Document in “it”:Having multiple collections could pose a problem when I have to aggregate the data from a public-language specific collection and user specific collection, or am I wrong? (Especially for search queries with sort and limit) (And not for now, but for the future if I have to shard the db I was wondering how these search queries would behave when the data is on different shards).ThanksGreen", "username": "Green" }, { "code": " {\n _id: 1,\n \"en\" : {\n name: Fish Recipe,\n Ingredients : [ ] ,\n ....\n},\n\"it\" : {\nname: Ricetta pesce\n Ingredients : [ ] ,\n}\n...\n}\ndb.recipes.createIndex({_id : 1 , it : 1} , {collation ... } );\n", "text": "Hi @Green,Why not to have the same recipe in one document having a language entry:You project the language you need based on user current preference. Using collation you can create an index per language field with its own collation Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny, that’s what I can do for Aliments and maybe Recipes, but not for other documents as they are probably getting too big … but I have to do some tries on that. Being able to put the stuff in the same document would be great, it will make the API and the code cleaner and partially simplify my indexes.For the moment thank you so much! 
You helped me alot!Green", "username": "Green" }, { "code": "", "text": "Hi @Pavel_Duchovny just one last thing, when referencing another documnet, should the field holding the id be a string or a ObjectId?Thank you!", "username": "Green" }, { "code": "", "text": "@Green,You should use object id of the reference to objectId and string to string for both code simplicity and potential $ lookup ability …", "username": "Pavel_Duchovny" } ]
Private and public documents
2021-05-05T15:48:51.092Z
Private and public documents
4,258
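A pymongo sketch of the $unionWith pattern and indexes Pavel outlines above: public documents in the user's language unioned with the user's private documents, sorted by _id, with the last _id of the previous page used to fetch the next one. The URI is a placeholder and the field names follow the thread.

from pymongo import MongoClient
from bson import ObjectId

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
docs = client["test"]["documents"]

# Indexes suggested in the thread
docs.create_index([("language", 1), ("isPublic", 1), ("_id", 1)])
docs.create_index([("userId", 1), ("_id", 1)])

def fetch_page(user_id, language, last_id=None, page_size=20):
    public_match = {"language": language, "isPublic": True}
    private_match = {"userId": user_id}
    if last_id is not None:  # resume after the last _id returned by the previous page
        public_match["_id"] = {"$gt": last_id}
        private_match["_id"] = {"$gt": last_id}
    pipeline = [
        {"$match": public_match},
        {"$unionWith": {"coll": "documents", "pipeline": [{"$match": private_match}]}},
        {"$sort": {"_id": 1}},
        {"$limit": page_size},
    ]
    return list(docs.aggregate(pipeline))

page = fetch_page(user_id=ObjectId(), language="en")  # ObjectId() is a stand-in for a real user id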
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "How can I get the userId of a newly registered user that is not yet confirm.", "username": "Aaron_Parducho" }, { "code": " realmSync.emailPassword.registerUserAsync(username, password) {\n // re-enable the buttons after user registration returns a result\n if (it.isSuccess) {\n realmSync.currentUser()\n } else {\n Log.i(\"LoginViewModel\", \"Successfully registered user.\")\n // when the account has been created successfully, log in to the account\n }\n }", "text": "@Aaron_Parducho: Yes, have tried getting current user from realm instance post successful sign-up.Something like this :", "username": "Mohit_Sharma" } ]
Email/Password Auth
2021-05-10T07:37:10.683Z
Email/Password Auth
1,609
null
[ "database-tools" ]
[ { "code": "", "text": "Hi Experts,What is the process of deleting multiple records from a collection using a json file. mongoimport utility or any any other utility to full fill this requirement. I am trying the following but getting an error.Command:\nmongoimport --host hostname:port --db=test --collection=collectionName --mode=delete --file=“C:\\MongoDB\\Docs\\Delete.json” --authenticationDatabase $external --ssl --sslCAFile cacert.pem -u xxx --authenticationMechanism PLAIN -pError:\n2021-05-10T17:35:43.404+0200 Failed: error processing document #1: invalid character ‘d’ looking for beginning of value\n2021-05-10T17:35:43.404+0200 0 document(s) deleted successfully. 0 document(s) failed to delete.Json File Contains:db.collectionName.remove({resource:“xxx”,“channel.ItemId”:“yyy”}, true);", "username": "Jagadeesh_Yalamanchi" }, { "code": "", "text": "Delete.jsonIt seems your Delete.json is not actual JSON file but a command hence parser picks up invalid character ‘d’ as it is first character in the file?", "username": "MaxOfLondon" }, { "code": "", "text": "I changed the file name to dl.json and getting the same error. its picking first character in the json file i.e. db.collectionName.remove({resource:“xxx”,“channel.ItemId”:“yyy”}, true);If i modify the data in json file with the dbname ‘test’ over db like below, i am getting different error.\ntest.collectionName.remove({resource:“xxx”,“channel.ItemId”:“yyy”}, true);Error:\nFailed: error processing document #1: invalid character ‘e’ in literal true (expecting ‘r’)If i modify the data in json file just as an import files with only data with out any commands as we are using --mode=delete as like below, i am getting different error.\n{resource:“xxx”,“channel.ItemId”:“yyy”}Error: Failed: invalid JSON input", "username": "Jagadeesh_Yalamanchi" }, { "code": "remove()true{\"resource\":\"xxx\", \"channel.ItemId\":\"yyy\"}{resource:\"xxx\", \"channel.ItemId\":\"yyy\"}", "text": "Hi Jagadeesh,There are two different issues with your attempt but they both boil up to an invalid JSON problem.If i modify the data in json file with the dbname ‘test’ over db like below, i am getting different error.\ntest.collectionName.remove({resource:“xxx”,“channel.ItemId”:“yyy”}, true);The argument of remove() and in fact any mongo operation needs to be a JSON data. JSON data is written as name/value pairs. A name/value pair consists of a field name (in double quotes), followed by a colon, followed by a value. Appling this rule the true part in the argument to remove is missing it’s argument’s name and curly braces. Granted that mongo shell makes it a bit easier to read commands because it allows dropping double quotes for names but sematic of data needs to be followed.If i modify the data in json file just as an import files with only data with out any commands as we are using --mode=delete as like below, i am getting different error.\n{resource:“xxx”,“channel.ItemId”:“yyy”}The JSON data requires names to be double quoted therefore {\"resource\":\"xxx\", \"channel.ItemId\":\"yyy\"} is a valid JSON but {resource:\"xxx\", \"channel.ItemId\":\"yyy\"} is not quite . 
Validity of JSON can be tested with an online validator, for example https://jsonlint.com/\nSince the argument is an external file, its content must be valid JSON, hence the parser complains (so no dropping quotes for names on this occasion, I’m afraid). Hope this helps.\nBest", "username": "MaxOfLondon" }, { "code": "", "text": "Hi MaxOfLondon, Thank you for your time to assist me, and what is a better way to fulfill my requirement? I need to delete a few hundred records from a collection on a remote MongoDB server through the mongo shell, connecting from my laptop.", "username": "Jagadeesh_Yalamanchi" }, { "code": "C:\\Users\\Jagadeesh> mongo Address-Of-Your-Server\n// The mongo shell is started\n>\n// Select the database you want to use\n> use test \n// Simply delete the documents with\n> db.collectionName.deleteMany( { \"resource\":\"xxx\" , \"channel.ItemId\":\"yyy\" } )", "text": "I strongly recommend that you take the M001 course from https://university.mongodb.com/. That sort of thing is well covered. For your requirement you might try:", "username": "steevej" } ]
Delete multiple records using mongoimport
2021-05-10T15:39:06.789Z
Delete multiple records using mongoimport
4,118
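The same deletion done programmatically with pymongo rather than the mongo shell, as a hedged sketch; the host, TLS, and LDAP (PLAIN) options from the original mongoimport command would need to be added to the placeholder URI.

from pymongo import MongoClient

client = MongoClient("mongodb://hostname:27017")  # placeholder; add TLS/auth options as needed
col = client["test"]["collectionName"]

# Equivalent of the shell's deleteMany: remove every document matching both fields
result = col.delete_many({"resource": "xxx", "channel.ItemId": "yyy"})
print(result.deleted_count, "document(s) deleted")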
null
[ "python", "crud", "performance" ]
[ { "code": "for file in sorted_files:\n df = process_file(file)\n\n for row,item in df.iterrows():\n data_dict = item.to_dict()\n mycol1.update_one(\n {\"nsamples\": {\"$lt\": 288}},\n {\n \"$push\": {\"samples\": data_dict},\n\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n )\n", "text": "Hi guys.I use bucket pattern for timeseries data.I use this code for importing my data into my table :The problem is that the insert is very very slow.Is there any way to get things done faster?Is there a way to do this with bulk insert?Thanks in advance guys!", "username": "harris" }, { "code": "", "text": "@Pavel_Duchovny Hello Pavel.I am sorry for disturbing you.Can you help me with that?Is it possible to do multiple updates at once instead of one?", "username": "harris" }, { "code": "Bulk.find.upsert()", "text": "Hi @harris,Sure. You can use a bulk updates using Bulk.find.upsert() syntax:Tou can use unordered updates if the order don’t matter which will be parallel.Let me know if that works for youThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "for file in sorted_files:\n df = process_file(file)\n var bulk = mydb1.mycol1.initializeOrderedBulkOp()\n for row,item in df.iterrows():\n data_dict = item.to_dict()\n bulk.find().upsert().update_one(\n {\"nsamples\": {\"$lt\": 288}},\n {\n \"$push\": {\"samples\": data_dict},\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n )\n bulk.execute\n", "text": "Thank you for you reply @Pavel_Duchovny\nDo you mean something like that", "username": "harris" }, { "code": "", "text": "Well first you need an upsert command it has its own in bulk.Now you need to do the criteria in the find and in the upsert do the push. Accumulated bulk in the item for loop should be executed in outside the loop.Essentially you build a bulk on client side and do the upsert after the loop avoiding the need to update per loop cycle…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny Thanks you for helping me.I cant do it on my own.If its possible can you write me with code what should i do?i know i am asking a lot but i am drowing on my own.", "username": "harris" }, { "code": "for file in sorted_files:\n df = process_file(file)\n bulk = mydb1.mycol1.initializeOrderedBulkOp()\n for row,item in df.iterrows():\n data_dict = item.to_dict()\n bulk.find({\"nsamples\": {\"$lt\": 288}}).upsert().update_one(\n {\n \"$push\": {\"samples\": data_dict},\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n )\n bulk.execute\n", "text": "@Pavel_Duchovny Do you mean something like this:", "username": "harris" }, { "code": "", "text": "The execute should be done each x loops or outside of the loop.You can run a counter and do every 1000 loops an execute and one at the end.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Is it a problem that in the start the table is empty?", "username": "harris" }, { "code": "", "text": "Hi @harris,No problem of running on empty collection but index the filter field for when its get filled.But now that you say that I don’t understand the purpose of this update.If you do $lt of the same number it will keep pushing to the same document creating a huge array.You might be bottleneck by the array pushes … Why not to get the _id and spread the data by a bucket id or something …Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": " bulk_request = []\nfor file in sorted_files:\n df = process_file(file)\n \n for row, item in df.iterrows():\n data_dict = item.to_dict()\n bulk_request=mycol1.update_one(\n 
{\"nsamples\": {\"$lt\": 12}},\n {\n \"$push\": {\"samples\": data_dict},\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n )\n result = mycol1.bulk_write(bulk_request)\n", "text": "What do you think of that?I dont think i see any changes in insert time…", "username": "harris" }, { "code": "", "text": "It looks like you are doing regular updates and not bulk…\nWhat makes you think this code is doing bulk updates…", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yes i edited the answer. I’m sorry I bothered you, i just cant understand how bulk works.Thanks for you patience.I appreciate it alot!", "username": "harris" }, { "code": "bulk_request=[]\nfor file in sorted_files:\n df = process_file(file)\n for row, item in df.iterrows():\n data_dict = item.to_dict()\n bulk_request.append(UpdateOne(\n {\"nsamples\": {\"$lt\": 12}},\n {\n \"$push\": {\"samples\": data_dict},\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n ))\n result = mycol1.bulk_write(bulk_request)\n", "text": "This is my final try", "username": "harris" }, { "code": "", "text": "Why not to do the final bulk write after the main loop?", "username": "Pavel_Duchovny" }, { "code": "", "text": "Why should i do that?If i keep it there i do bulk_write for each file…if i move it to the main loop i do one bulk_write in the end for all files?Is this why i should do the final bulk write after the main loop?Its optimal right?", "username": "harris" }, { "code": "", "text": "Well it depends on expected amount of operations in a single bulk.If it under 1000 you can do just one bulk operation.It is the same collection so the minimum the client to database round trips the better the performance…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "bulk_request=[]\nfor file in sorted_files:\n df = process_file(file)\n for row, item in df.iterrows():\n data_dict = item.to_dict()\n bulk_request.append(UpdateOne(\n {\"nsamples\": {\"$lt\": 12}},\n {\n \"$push\": {\"samples\": data_dict},\n \"$inc\": {\"nsamples\": 1}\n },\n upsert=True\n ))\n result = mycol1.bulk_write(bulk_request)\n", "text": "Why should i check if its under 1000?in my collection its about 1,2m rows importing with banches of 12 so its about 90.000 operations if i understand right…but mongodb i think does the divide on its own.i mean if its 2000 for example it divides the group in half\nAnd one last thing…if i use updatemany instead of updateone hereDo i see any changes in terms of insertion time?", "username": "harris" }, { "code": "", "text": "Hi @harris,To advise any further I need the breakdown of the 1.2 MHow many files are there in this loop?How much rows per file?Is there only one python client processing all this data at once? Can you split it into threads?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "11 files each one consist of 105000 rows…there is only one python client", "username": "harris" } ]
Any way to get faster inserts?
2021-05-09T13:08:19.415Z
Any way to get faster inserts?
13,503
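For reference, a minimal PyMongo sketch of the batched bulk-upsert pattern the thread converges on. The database/collection names, the 288-sample bucket limit, and the connection string are taken from or assumed beyond the posts above; load_rows stands in for the caller's own pandas file-processing loop.

from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")   # connection string assumed
mycol1 = client["mydb1"]["mycol1"]                   # names follow the thread

BATCH_SIZE = 1000   # flush the accumulated operations in groups of 1000

def load_rows(rows):
    """rows: any iterable of dicts built from the source files."""
    requests = []
    for data_dict in rows:
        requests.append(UpdateOne(
            {"nsamples": {"$lt": 288}},           # an open bucket with room left
            {"$push": {"samples": data_dict},     # append the measurement
             "$inc": {"nsamples": 1}},            # keep the bucket counter in sync
            upsert=True,                          # start a new bucket when none match
        ))
        if len(requests) >= BATCH_SIZE:
            mycol1.bulk_write(requests, ordered=False)   # unordered = fewer round trips
            requests = []
    if requests:                                  # flush whatever is left over
        mycol1.bulk_write(requests, ordered=False)

As noted in the thread, an index on nsamples (the upsert filter field) keeps each update from scanning the collection as it fills up.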
https://www.mongodb.com/…686e789f118b.png
[ "queries" ]
[ { "code": "arrayarrays{\n \"_id\": {\n \"$oid\": \"60701a691c071256e4f0d0d6\"\n },\n \"schema\": {\n \"$numberDecimal\": \"1.0\"\n },\n \"playerName\": \"Dan Burt\",\n \"comp\": {\n \"id\": {\n \"$oid\": \"607019361c071256e4f0d0d5\"\n },\n \"name\": \"Roll Up 2021\",\n \"tees\": \"Blue\",\n \"roundNo\": {\n \"$numberInt\": \"1\"\n },\n \"scoringMethod\": \"Stableford\"\n },\n \"holes\": [\n {\n \"holeNo\": {\n \"$numberInt\": \"1\"\n },\n \"holePar\": {\n \"$numberInt\": \"4\"\n },\n \"holeSI\": {\n \"$numberInt\": \"3\"\n },\n \"holeGross\": {\n \"$numberInt\": \"4\"\n },\n \"holeStrokes\": {\n \"$numberInt\": \"1\"\n },\n \"holeNett\": {\n \"$numberInt\": \"3\"\n },\n \"holeGrossPoints\": {\n \"$numberInt\": \"2\"\n },\n \"holeNettPoints\": {\n \"$numberInt\": \"3\"\n }\n }\n ]\n}\nholesroundholes.X.holeGrosscomp._idcomp.roundNoaggregation", "text": "My document has an array of arrays and I would like to filter based on matching 1 value from the lowest arrays.The basic structure of my document is (there are other fields but they aren’t required to demonstrate):In the Atlas UI, it looks like this:image344×662 41 KBThe holes array is made up of another array per hole played.I would like to filter only the round's where holes.X.holeGross equals 2 (i.e. made a birdie on a par 3 for those familiar with golf) for a specific comp._id and comp.roundNo.But I don’t want to loop through and process this. I think an aggregation query or pipeline can be used, but I cannot fathom how to use them, or even how to begin typing stuff into the Atlas UI to start test queries…I am reading the documentation, found some StackOverflow articles and watched a few Youtube videos, but I am struggling to replicate for my particular situation of this additional sub-array.", "username": "Dan_Burt" }, { "code": "roundholeGross{\"holes.0.holeGross\": 2}holes", "text": "In the Atlas UI, when viewing the documents, I can apply this filter to the round collection, which finds matching documents for the 1st hole (0 index) and the holeGross equals 4:{\"holes.0.holeGross\": 2}But how do I apply this across all sub-arrays of holes?", "username": "Dan_Burt" }, { "code": "comp{\n \"comp.id\": ObjectId(\"607019361c071256e4f0d0d5\"),\n \"comp.roundNo\": 2\n}\n", "text": "Think I have worked out the comp filters should be:Is this my first pipeline “stage”?", "username": "Dan_Burt" }, { "code": "db.games.find({\"holes\": {\"$elemMatch\": {\"holeGross\" : 2}}})", "text": "Hi Dan,I am not sure I fully understand if you would like to match documents having holeGross with value 2 in any element of holes array?To start with in screenshot you provided the holes is not an array but an object that complicates things.If it was an array this could be approached with $elemMatch array operator making it quite simple like\ndb.games.find({\"holes\": {\"$elemMatch\": {\"holeGross\" : 2}}}) which should return all documents from games collection that have at least one holeGross of 2", "username": "MaxOfLondon" }, { "code": "holesArrayObjectsObjectObjects{ \"holes.holeGross\": { $lte: 2 } }\n", "text": "Thanks again @MaxOfLondon - you have pinpointed the problem… wrong data types!When doing the initial data modelling, holes was intended to be an Array of Objects. 
Somehow with other coding (translating this through PHP currently), these documents were being saved as Object of Objects, which meant the standard querying methods weren’t working.Using the Atlas UI, I can filter the matching documents with the simple / standard syntax:Which doesn’t require any aggregation.", "username": "Dan_Burt" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filter based on a Sub-Sub-Array
2021-05-09T10:27:09.369Z
Filter based on a Sub-Sub-Array
8,333
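Once holes is stored as a true array of sub-documents, the query from this thread can be expressed directly in PyMongo. This is a sketch: the database/collection names and the round number are assumptions, while the competition id and field names come from the sample document above.

from bson import ObjectId
from pymongo import MongoClient

rounds = MongoClient()["golf"]["rounds"]   # database/collection names assumed

comp_id = ObjectId("607019361c071256e4f0d0d5")

# Any hole in the round with a gross score of 2, for one competition round:
birdie_rounds = rounds.find({
    "comp.id": comp_id,
    "comp.roundNo": 1,
    "holes.holeGross": 2,          # dot notation matches any array element
})

# Equivalent, but lets several conditions apply to the *same* array element:
birdie_on_par3 = rounds.find({
    "comp.id": comp_id,
    "comp.roundNo": 1,
    "holes": {"$elemMatch": {"holePar": 3, "holeGross": 2}},
})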
null
[]
[ { "code": "", "text": "Hello,I am trying to add a field to all the documents in my collection which has >100,000 documents. There is no filter for this operation. Is there any better API that I can use other than collection.updateMany() to ensure that the operation is faster?Thanks,\nParikshit", "username": "Parikshit_Navgire" }, { "code": "// collection before\nmongo shell> db.Parikshit.find()\n{ \"_id\" : 1 }\n{ \"_id\" : 2 }\n{ \"_id\" : 3 }\n// what to update all documents\nmongo shell> query = {}\n// the operations we want to perform\nmongo shell> operations = {\n\t\"$currentDate\" : {\n\t\t\"verified\" : {\n\t\t\t\"$type\" : \"date\"\n\t\t}\n\t},\n\t\"$set\" : {\n\t\t\"user\" : \"steevej\"\n\t}\n}\nmongo shell> db.Parikshit.updateMany( query , operations )\n// the result\nmongo shell> db.Parikshit.find()\n{ \"_id\" : 1, \"user\" : \"steevej\", \"verified\" : ISODate(\"2021-05-11T11:50:56.505Z\") }\n{ \"_id\" : 2, \"user\" : \"steevej\", \"verified\" : ISODate(\"2021-05-11T11:50:56.505Z\") }\n{ \"_id\" : 3, \"user\" : \"steevej\", \"verified\" : ISODate(\"2021-05-11T11:50:56.505Z\") }\n", "text": "Is there any better API that I can use other than collection.updateMany() to ensure that the operation is faster?Not that I know off. For example, you can add/set/update the field *verified to the current date and add/set/update the field user with:", "username": "steevej" } ]
Updating a collection with >100,000 documents
2021-05-10T22:10:07.068Z
Updating a collection with >100,000 documents
2,471
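The same update expressed with PyMongo, as a sketch: a single update_many with an empty filter is already the most direct API for touching every document, and the field names mirror the shell example above.

from pymongo import MongoClient

coll = MongoClient()["test"]["Parikshit"]   # names assumed from the example above

result = coll.update_many(
    {},   # empty filter: every document in the collection
    {
        "$set": {"user": "steevej"},
        "$currentDate": {"verified": {"$type": "date"}},
    },
)
print(result.matched_count, result.modified_count)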
null
[ "installation" ]
[ { "code": "", "text": "Hello,\nI have some problems installing MongoDB as local database on the edge which has the following features: Linux 5.4.66-sunxi armv7l. Its kernel is 32bit.Today I realize that mongodb-server, mongodb-10gen and mongodb-org-server can no longer be installed on 32-bit systems.Has anyone managed to solve this problem?Thank you,\nBest regards,\nFederica", "username": "FEDERICA_BO" }, { "code": "", "text": "Hi @FEDERICA_BO,In case of any other queries, feel free to reach out to us.Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Installation on a 32-bit system
2021-05-11T10:45:00.926Z
Installation on a 32-bit system
4,634
https://www.mongodb.com/…1ae77cd0c58b.png
[ "app-services-user-auth" ]
[ { "code": "", "text": "image748×315 8.16 KBI encounter this can Anyone tell me what is needed to configure to properly use facebook auth in web app.", "username": "Aaron_Parducho" }, { "code": "", "text": "Have you set up permissions for your app in Facebook? Overview - Facebook Login - Documentation - Meta for Developers", "username": "Andrew_Morgan" } ]
FacebookAuth Error
2021-05-10T13:48:42.338Z
FacebookAuth Error
1,549
null
[ "aggregation", "crud" ]
[ { "code": "", "text": "Hi all,I am working on a problem that requires me to update some documents in the database, but I also need to get the full list of all updated documents and their data (so, not just the count or just id-s). This could be solved simply by using two different queries, but I want to avoid that if possible because performance holds a lot of weight in this particular problem.After some digging, I saw the aggregate function and got very close to the desired behavior, but it diverged from what I need at the last step:\nwith the aggregation pipeline, I can match all my documents and alter the fields I need, but if I want the data to be written to the database, I need to use the $out operator.\nAs far as I managed to discover with my testing, if I use the $out operator, I cannot access the data in the calling program (Python 3.8, PyMongo 3.11.4). However, if I don’t use the $out operator, I can access the data in the calling program, but it won’t be written to the database.Does someone have a suggestion about how to solve this problem efficiently? Did I approach this problem from a wrong angle entirely?Thank you in advance.", "username": "Nikola_Socec" }, { "code": "findAndModify", "text": "Hello @Nikola_Socec, welcome to the MongoDB Community forum!See if this is useful to your issue. You can use findOneAndUpdate method (or findAndModify method) to return the updated document. Note these two methods update one document only. With MongoDB v4.2+, all update methods support Updates with Aggregation Pipeline.", "username": "Prasad_Saya" }, { "code": "findOneAndUpdate", "text": "Hello @Prasad_Saya,thank you very much for your reply. Unfortunately, I cannot use findOneAndUpdate because I need to work with multiple documents at once. Also, as far as I understand, updates with aggregation pipelines still return the pymongo.results.UpdateResult object, which can only tell me the numbers of matched and modified documents, but does not give me access to the data within the documents.", "username": "Nikola_Socec" } ]
Updating database and getting back data with aggregate
2021-05-10T07:52:14.276Z
Updating database and getting back data with aggregate
3,079
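A hedged sketch of the options discussed here. find_one_and_update covers the single-document case in one round trip; for many documents there is no single command that also returns the full updated documents, so one common workaround (not the only one) is to tag the matched documents with a batch id during the update and then read them back by that id. Collection and field names are illustrative only.

import uuid
from pymongo import MongoClient
from pymongo.collection import ReturnDocument

coll = MongoClient()["test"]["items"]

# One document: apply the change and get the post-update version back.
updated = coll.find_one_and_update(
    {"status": "pending"},
    {"$set": {"status": "processed"}},
    return_document=ReturnDocument.AFTER,
)

# Many documents: tag them, then fetch exactly the ones that were modified.
batch_id = uuid.uuid4().hex
coll.update_many(
    {"status": "pending"},
    {"$set": {"status": "processed", "batch_id": batch_id}},
)
updated_docs = list(coll.find({"batch_id": batch_id}))

The tag-then-read pattern is two separate operations, so it is not atomic on its own; if that matters, it can be wrapped in a multi-document transaction.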
null
[ "aggregation", "queries" ]
[ { "code": " db.getCollection('inventory').aggregate([\n {\n $match: {\n updatedat: {\n $gte: ISODate(\"2021-05-10T12:00:00Z\"),\n $lte: ISODate(\"2021-05-11T12:00:00Z\")\n }\n } \n },\n { $unwind: \"$sizes\" },\n {\n $match: {\n \"sizes.status\": { $ne: \"REMOVED\" }\n }\n },\n {\n $group: {\n _id: {\n item: \"$item\",\n price: \"$price\",\n updatedat: \"$updatedat\",\n fees: \"$fees\"\n },\n sizes: { $push: \"$sizes\" } \n }\n },\n {\n $project: { _id: 0, sizes: 1, item: \"$_id.item\", price: \"$_id.price\", updatedat: \"$_id.updatedat\", fees: \"$_id.fees\" }\n }\n {\n \"_id\" : ObjectId(\"60996db251b4a0b97ee405ba\"),\n \"item\" : \"A\",\n \"price\" : NumberDecimal(\"80\"),\n \"updatedat\" : ISODate(\"2021-05-10T12:00:00.000Z\"),\n \"fees\" : {\n \"texttext\" : \"sold in!\",\n \"taxes\" : [ \n {\n \"type\" : 1.0,\n \"description\" : \"QC/CA#1234\"\n }, \n {\n \"type\" : 2.0,\n \"description\" : \"QC/CA#2231\"\n }\n ]\n },\n \"sizes\" : [ \n {\n \"size\" : \"S\",\n \"status\" : \"AVAILABLE\"\n }, \n {\n \"size\" : \"M\",\n \"status\" : \"REMOVED\"\n }, \n {\n \"size\" : \"L\",\n \"status\" : \"AVAILABLE\"\n }\n ]\n }\n", "text": "I have a collection with a nested array of subdocuments. I’d like to filter out the subdocuments in this nested array that have the field status: REMOVED, and then return the original document unchanged aside from the filtered subdocument array. I have this working in the following aggregate pipeline:Here is an example document in my collection:This returns what I need, but managing each root level field by grouping them inside _id, and then projecting them in the final stage is tedious. This is also a test dataset, in reality the documents I’ll be manipulating are much more complex.I was wondering if there was a better way to handle this than my solution above.", "username": "Greg_Fitzpatrick-Bel" }, { "code": " db.getCollection('inventory').aggregate([\n {\n $match: {\n updatedat: {\n $gte: ISODate(\"2021-05-10T12:00:00Z\"),\n $lte: ISODate(\"2021-05-11T12:00:00Z\")\n }\n } \n },\n {\n $project: {\n item: 1,\n price: 1,\n sizes: {\n $filter: {\n input: \"$sizes\",\n as: \"size\",\n cond: { \n $ne: [ \"$$size.status\", \"REMOVED\" ] \n }\n }\n }\n }\n },\n {\n $match: {\n $nor: [\n { sizes: { $exists: false } },\n { sizes: { $size: 0 } }\n ]\n }\n }\n])\nitem: 1,\nprice: 1,\n", "text": "I’ve found a better way to handle this, using the $filter operator:I am now wondering if there is a way to prject all root level fields in my $project stage. instead of writtingAs I mentioned above, the data I will be actually using is much more complex, and this will be a very large list.", "username": "Greg_Fitzpatrick-Bel" }, { "code": "$project$addFields", "text": "I am now wondering if there is a way to prject all root level fields in my $project stage. instead of writtingYes, you can. Use the $addFields stage instead of the $project:Adds new fields to documents. $addFields outputs documents that contain all existing fields from the input documents and newly added fields.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filtering a nested array of subdocuments without
2021-05-10T17:52:46.061Z
Filtering a nested array of subdocuments without
20,039
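The same pipeline expressed with PyMongo, keeping every root-level field via $addFields as suggested in the answer; the dates and field names come from the example documents above.

from datetime import datetime
from pymongo import MongoClient

inventory = MongoClient()["test"]["inventory"]   # database name assumed

pipeline = [
    {"$match": {"updatedat": {
        "$gte": datetime(2021, 5, 10, 12, 0, 0),
        "$lte": datetime(2021, 5, 11, 12, 0, 0),
    }}},
    # $addFields keeps every existing field and only replaces `sizes`.
    {"$addFields": {"sizes": {"$filter": {
        "input": "$sizes",
        "as": "size",
        "cond": {"$ne": ["$$size.status", "REMOVED"]},
    }}}},
    # Drop documents whose filtered array ended up missing or empty.
    {"$match": {"sizes.0": {"$exists": True}}},
]

for doc in inventory.aggregate(pipeline):
    print(doc["item"], [s["size"] for s in doc["sizes"]])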
null
[ "crud" ]
[ { "code": "useruser = collection('user').findOne({...})userupdateOne()user.update({...})collection('user').updateOne(user, {...})findOne()", "text": "I’m using Node native language.In my code, I have already found a user document with user = collection('user').findOne({...}).I want to update a field on this user document.I know the updateOne() function, however that takes a query as opposed to a document object, which has to perform another query.I was expecting something like:user.update({...})\nor\ncollection('user').updateOne(user, {...})Do this mean if I want to perform an update on a document I have already found with findOne(), I need to perform another query?", "username": "Jack_Zhang" }, { "code": "updateOne()updateOne_id{ \n _id: 1, \n name: \"John\", \n email: \"[email protected]\", \n country: \"Australia\" \n}\nlet user_john = db.users.findOne( { email: \"[email protected]\" } }db.users.updateOne( \n { _id: user_john._id }, \n { $set: { phone: \"123-456-7890\" } }\n )\n_id: 1{ _id: user_john._id }{ $set: { phone: \"123-456-7890\" } }", "text": "I know the updateOne() function, however that takes a query as opposed to a document object, which has to perform another query.The updateOne method takes a filter and an update document as parameters. But, the update operation on a single document is atomic.Lets take this example, where you have already queried a user. The user document has the unique _id field (or some other unique identifier for the user), and use this as the update operation’s query (or filter).Assume you have a user document like this:You query it by email:let user_john = db.users.findOne( { email: \"[email protected]\" } }Now, update the user to add a new field called as phone.The above update operation adds the new field to the user with _id: 1.", "username": "Prasad_Saya" } ]
Can I update a document I have already found without making another query?
2021-05-10T20:33:36.745Z
Can I update a document I have already found without making another query?
3,332
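A short PyMongo sketch of the pattern described in the answer: keep the _id from the document you already fetched and use it as the filter, so the second operation is a point update on the _id index rather than a repeat of the original search. The names are illustrative.

from pymongo import MongoClient

users = MongoClient()["test"]["users"]

user = users.find_one({"email": "[email protected]"})
if user is not None:
    users.update_one(
        {"_id": user["_id"]},                 # point lookup on the _id index
        {"$set": {"phone": "123-456-7890"}},
    )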
null
[ "node-js", "dot-net", "java", "golang", "scala" ]
[ { "code": "get-started", "text": "Hello Community,Life is an adventure where we are continuously learning. If you are a developer in today’s world you may find it challenging to find the time to learn about new technologies whether that may be new frameworks, new build tools, new modules, new programming language features, etc.Some developers find the time to learn new things after work hours, or on weekends, or on holiday seasons. Whichever your learning mode is, if you’re interested in learning MongoDB with a new programming language, we have just released a series of get-started repositories which may be able to kickstart your learning journey. It comes with the nicety of starting from a working development environment with all the expected dependencies installed.If you’re just beginning your MongoDB journey or wanting to learn another programming language, this project is for you.The MongoDB Get-Started project aims to provide an easy way for developers to get started with prerequisite language-specific development tools and a simple example application using official MongoDB drivers.Each Get-Started repository includes:Currently there are five available get-started repositories:Happy learning! If you have any questions, feel free to submit an issue in the relevant repository. If you would like to contribute to the project, please see Contributing to the MongoDB Get-Started projectRegards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi everyone,I have recently added a new get-started repository for MongoDB C++ driver, you can find the repository here: get-started-cxxIf you’re looking for a way to quickly spin up an environment with MongoDB C++ driver either for learning or debugging purposes, this is the repository for you.If you have any questions related to a specific repository feel free to submit an issue in the related repository. For any other questions, comments, or feedback please feel free to reach out.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi everyone,There are two new projects that have been added recently:If you’re looking for a way to quickly spin up an environment with MongoDB PHP driver or MongoDB Ruby driver, please give these two repositories a try. Please see get-started-readme to find a list of other projects.If you have any questions related to a specific repository feel free to submit an issue in the related repository. For any other questions, comments, or feedback please feel free to reach out.Regards,\nWan.", "username": "wan" } ]
"Get Started" with MongoDB Atlas
2020-10-20T08:36:07.387Z
“Get Started” with MongoDB Atlas
5,138
null
[ "aggregation" ]
[ { "code": "statuscountryorders: [\n { id: 100, status: 'ordered', country: 'US', items: [] },\n { id: 101, status: 'ordered', country: 'UK', items: [] },\n { id: 102, status: 'shipped', country: 'UK', items: [] },\n]\nag: [\n { _id: 'US', status: { ordered: 1} },\n { _id: 'UK', status: { ordered: 1, shipped: 1 } }\n]\n$count$group", "text": "Hi all,this time I am trying to do two steps in the aggregation, but can’t wrap my head around it.I would like to count the status and group them by country.Desired aggregation outcome:I can $count and $group, but I am not sure how to put this together…Thanks,\nbluepuama", "username": "blue_puma" }, { "code": "{ \"_id\" : \"US\", \"counts\" : [ { \"status\" : \"ordered\", \"count\" : 1 } ] }\n{ \"_id\" : \"UK\", \"counts\" : [ { \"status\" : \"shipped\", \"count\" : 1 }, { \"status\" : \"ordered\", \"count\" : 1 } ] }\n[\n\t{\n\t\t\"$group\" : {\n\t\t\t\"_id\" : {\n\t\t\t\t\"country\" : \"$country\",\n\t\t\t\t\"status\" : \"$status\"\n\t\t\t},\n\t\t\t\"count\" : {\n\t\t\t\t\"$sum\" : 1\n\t\t\t}\n\t\t}\n\t},\n\t{\n\t\t\"$group\" : {\n\t\t\t\"_id\" : \"$_id.country\",\n\t\t\t\"counts\" : {\n\t\t\t\t\"$push\" : {\n\t\t\t\t\t\"status\" : \"$_id.status\",\n\t\t\t\t\t\"count\" : \"$count\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n]\n", "text": "The hard part is that you want values (ordered and shipped) to become field names. I tried but the closest I can get iswith the following:This is as far I can go. I am pretty sure, $function could do the rest but I am not familiar enough.", "username": "steevej" } ]
How to $count and $group within aggregation?
2021-05-06T12:52:30.823Z
How to $count and $group within aggregation?
4,600
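One way to finish the reshaping, beyond what the thread shows: after the two $group stages, $arrayToObject (MongoDB 3.4.4+) turns the status/count pairs into real field names, producing documents shaped like the desired output. This is a sketch against an assumed orders collection, pushing {k, v} pairs instead of {status, count}.

from pymongo import MongoClient

orders = MongoClient()["test"]["orders"]   # collection name assumed

pipeline = [
    {"$group": {
        "_id": {"country": "$country", "status": "$status"},
        "count": {"$sum": 1},
    }},
    {"$group": {
        "_id": "$_id.country",
        "counts": {"$push": {"k": "$_id.status", "v": "$count"}},
    }},
    # Convert the array of {k, v} pairs into an embedded document.
    {"$project": {"status": {"$arrayToObject": "$counts"}}},
]

for doc in orders.aggregate(pipeline):
    print(doc)   # e.g. {'_id': 'UK', 'status': {'ordered': 1, 'shipped': 1}}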
null
[ "queries", "node-js", "atlas-device-sync" ]
[ { "code": " realm = await Realm.open(config)realm = new Realm(config) config = {\n schema,\n path: getDBPath(),\n sync: {\n user: app.currentUser,\n partitionValue: new ObjectID(getCatalogId()),\n error: (error) => {\n console.log(error.name, error.message)\n }\n }\n }\n", "text": "Using :Node js sdk\nelectron app\nVue js(renderer)I have an app in which initially I want to sync data from server to local instance and for that I’m opening realm asynchronously and I’m closing it when done with the syncing (shall I close this ? as I want to react to the collection changes as well).\n realm = await Realm.open(config)Then after the syncing is done I’m quering in my app for some data which I have synced\nby opening my realm synchronously (as user can go offline now) by using:\nrealm = new Realm(config)my config objectWhat is the best practice to close the realm and when?The first time a user logs on to your realm app, you should open the realm asynchronously to sync data from the server to the device. After that initial connection, you can open a realm synchronously to ensure the app works in an offline state. Docs linkSo I 'm following the same approach but when opening the same config object with two different methods to open the realm I’m getting the error that \"same instance is opened on the current thread \" which is solved by closing the instance.But I want to react to changes in the collection if any how can i do that?", "username": "shawn_batra" }, { "code": "", "text": "That docs snippet is referring to subsequent launches of the app not within the same app launch. Speaking generally, you’ll want to open the realm on appLaunch and then close it when the app is shutting down. Once the app is open you can observe changes by attaching a change listener to the open realm reference and react to changes.", "username": "Ian_Ward" }, { "code": "realm = new Realm(config)", "text": "Thanks for the reply. When I open a realm instance on the app launch and not close it, I get an error when I try to open it in some other component using realm = new Realm(config) and error I get is:realm already opened on current thread with different schemaWhat should I do for this.?", "username": "shawn_batra" }, { "code": "", "text": "You should open the realm with the same configuration", "username": "Ian_Ward" }, { "code": "realm = await Realm.open( config)realm = new Realm(config)", "text": "I’m opening it the wit the same config but using different method.\nAt launch I use Async open realm = await Realm.open( config) and now if the data is synced so user can go offline and I’m supposed to open it using realm = new Realm(config) right?", "username": "shawn_batra" }, { "code": "", "text": "Any Update on this ticket as I’m still facing the issue?", "username": "shawn_batra" }, { "code": "", "text": "@shawn_batra Can you file an issue here with a reproduction case please - GitHub - realm/realm-js: Realm is a mobile database: an alternative to SQLite & key-value storesI believe this should work as long as you are not changing the configuration.", "username": "Ian_Ward" } ]
Best practice to handle multiple realm
2021-05-06T05:55:23.674Z
Best practice to handle multiple realm
5,280
https://www.mongodb.com/…c19ed7676aa.jpeg
[ "aggregation", "node-js", "atlas", "weekly-update" ]
[ { "code": "", "text": "Welcome to MongoDB $weeklyUpdate!Here, you’ll find the latest developer tutorials, upcoming official MongoDB events, and get a heads up on our latest Twitch streams and podcast, curated by Adrienne Tacke.Enjoy!We have lots of new content to share with you! Take a look at our YouTube channel to see the latest premieres from our Dev Rel team!When you want to analyze data stored in MongoDB, you can use MongoDB’s powerful aggregation framework to do so. In part two of this quick start tutorial for beginners, Lauren Schaefer provides a high-level overview of the aggregation framework and demonstrates how to use it in a Node.js script. She explains how to use aggregate() to analyze data.Joe Karlsson has another popular topic out! When you need to model data, is your first instinct to start breaking it down into rows and columns? It was for Joe, and many others too. When you want to develop apps in a modern, agile way, MongoDB databases can be the best option. In this video, we’ll compare and contrast the terms and concepts in SQL databases and MongoDB.As always, be sure to like, subscribe, and turn on notifications for our YouTube channel so you never miss a video!Want to find the latest MongoDB tutorials and articles created for developers, by developers? Look no further than our DevHub!Maxime Beugnet and John Page use the MongoDB Aggregation Pipeline to apply Benford’s law on the COVID-19 data set from Johns Hopkins University.Part one of Lauren Schaefer’s popular series, Node.js and MongoDB. Here, she’ll walk you though connecting to MongoDB in a Node.js application.Lauren Schaefer shows you how to execute the CRUD (create, read, update, and delete) operations in MongoDB using Node.js in this step-by-step tutorial.Learn how to set up a continuous copy from MongoDB into an AWS S3 bucket in Parquet with Joe Karlsson.Attend an official MongoDB event near you (virtual for now)! Chat with MongoDB experts, learn something new, meet other developers, and win some swag!May 18 (1:00 PM GMT | Nigeria) - How to Scale Your Product With MongoDBMay 21 (11:00 AM GMT | Kenya) - MongoDB Community Mini Workshop in EldoretMay 21 (5:00 PM GMT | Global) - Removing master/slave terminology from Apache LuceneMay 26 (4:00 PM GMT | Global) - Realm Kotlin Multiplatform for Modern Mobile AppsWe stream tech tutorials, live coding, and talk to members of our community every Friday. Sometimes, we even stream twice a week! Be sure to follow us on Twitch to be notified of every stream!Latest Stream\n\nUpcoming Streams\nMay 21, 10am PDT - Removing master-slave terminology from Apache Lucene Follow us on Twitch so you never miss a stream!Latest EpisodeListen to this episode from The MongoDB Podcast on Spotify. With Thunkable, anyone can easily build beautiful apps, program powerful functionality with drag & drop blocks, and upload apps to the Google Play Store and Apple's App Store. In this...Catch up on past episodes:Ep. 53 - The MERN Stack with Beau Carnes of freeCodeCamp\nEp. 52 - Scaling Startups with Blerp and MongoDB\nEp. 51 - Scaling Startups - Funnelytics with Alexey Glazunov(Not listening on Spotify? We got you! We’re most likely on your favorite podcast network, including Apple Podcasts, PlayerFM, Podtail, and Listen Notes )Watch our team do their thang at various conferences, meetups, and podcasts around the world (virtually, for now). Also, find external articles and guest posts from our DevRel team here! UpcomingSr. 
Dev Advocate Adrienne Tacke gives one of her favorite talks “There is NO Developer Uniform!” at Techorama 2021!Staff Dev Advocate Lauren Schaefer will be at Codemotion Online Conference to give their talk “Top Ten Tips for Making Remote Work Actually Work Right Now”!Staff Dev Advocate Lauren Schaefer will be at DevSum Conference to give their talk “Top Ten Tips for Making Remote Work Actually Work Right Now”! Did you know that you get these $weeklyUpdates before anyone else? It’s a small way of saying thank you for being a part of this community. If you know others who want to get first dibs on the latest MongoDB content and MongoDB announcements as well as interact with the MongoDB community and help others solve MongoDB related issues, be sure to share a tweet and get others to sign up today!", "username": "yo_adrienne" }, { "code": "", "text": "Another winner! ", "username": "JoeKarlsson" } ]
$weeklyUpdate #22 (May 10, 2021): Latest MongoDB Tutorials, Events, Podcasts, & Streams!
2021-05-10T17:05:46.550Z
$weeklyUpdate #22 (May 10, 2021): Latest MongoDB Tutorials, Events, Podcasts, & Streams!
2,429
null
[ "queries", "node-js", "atlas-device-sync" ]
[ { "code": "", "text": "Realm slice is not working for the second time.\nMy query is:\nrealm.objects(‘article’).sorted(‘createdAt’,true).slice(0, 30)For the first time it is working but for the second time the query is not working. And I can see there are 1500 records.Thanks in advance", "username": "shawn_batra" }, { "code": "(0, 30)", "text": "Hi @shawn_batra, could you share a bit more info on what you mean by working the first time but not the second – is the slice (0, 30) for both?What error do you see?Which SDK is this using?", "username": "Andrew_Morgan" }, { "code": "", "text": "Thanks for reply. No for the next time it was .slice(30, 30) which is clear now as it requires the end index not the length.\nSo issue is solved thanks.", "username": "shawn_batra" }, { "code": "", "text": "Great to hear you found the solution! Please come back the next time you need a rubber duck ", "username": "Andrew_Morgan" }, { "code": "", "text": "Can you provide some help in this : https://www.mongodb.com/community/forums/t/best-practice-to-handle-multiple-realm/105821/4/", "username": "shawn_batra" } ]
Realm slice not working
2021-05-03T19:40:18.105Z
Realm slice not working
2,249
null
[ "atlas-cluster" ]
[ { "code": "", "text": "I’m dealing with an issue where the cluster refuses to be created in Atlas and instead hangs.\nI’ve tried removing the cluster, deleting the project, deleting the organization, logging in and out and remaking it each time with no success. Anyone face a similar issue that was able to resolve it?", "username": "Andrew_W" }, { "code": "", "text": "SOLVED - Nevermind, it appears that there are cluster creation delays as shown on the status page.Welcome to MongoDB Cloud's home for real-time and historical data on system performance.I think a good UX feature to add would be to add the following to the cluster creation window\nBeneath the estimated time it should read:Taking too long? Check the MongoDB status for possible issues.Hey MongoDB – You hiring? I’m good at idea creation and implementation. I’m on LinkedIn. Let’s connect! Fellow MongoDB community members, let’s connect as well!", "username": "Andrew_W" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cluster hangs on creation
2021-05-10T18:00:22.257Z
Cluster hangs on creation
4,462
null
[]
[ { "code": "fn get_session_id(session: &mut ClientSession) -> Result<Bson, &str> {\n match session.id().get(\"id\") {\n Some(id) => {\n debug!(\"session id: {:?}\", id);\n Ok(id.clone())\n }\n None => {\n error!(\"no session id\");\n Err(\"no session id\")\n }\n }\n}\n\npub async fn some_big_calculation(client: &Client) -> Result<(), Box<dyn std::error::Error>> {\n // Get a handle to a database.\n let database = client.database(\"database\");\n\n // List the names of the collections in that database.\n for collection_name in database.list_collection_names(None).await? {\n info!(\"collection_name: {}\", collection_name);\n let collection: Collection<Document> = database.collection(&collection_name);\n let mut current_session = client.start_session(None).await?;\n // Query the documents in the collection with a filter and an option.\n let filter = doc! { \"some_field\": { \"$exists\": false } };\n let find_options = FindOptions::builder().sort(doc! { \"_id\": -1 }).build();\n let mut cursor = collection\n .find_with_session(filter, find_options, &mut current_session)\n .await?;\n\n let mut last_id: i64 = 0;\n if let Some(last_document) = cursor.with_session(&mut current_session).next().await {\n let document = last_document?;\n last_id = document.get_i64(\"_id\")?;\n\n //do some calculation with the first document here\n // the first document is a special case\n } else {\n error!(\"Cursor not found, collection_name: {}\", collection_name);\n continue;\n }\n\n let wait_time = Duration::minutes(5);\n let mut start = Instant::now();\n let session_id = get_session_id(&mut current_session)?;\n\n // Iterate over the results of the cursor.\n // It is the previous document because we are ordering in descendent order by date/id\n while let Some(previous_document) = cursor.with_session(&mut current_session).next().await {\n let document = previous_document?;\n let previous_id = document.get_i64(\"_id\")?;\n\n let some_field = expensive_calculation();\n let filter = doc! { \"_id\" : last_id };\n let update = doc! {\"$set\" : { \"some_field\": some_field}};\n\n let update_result = collection.update_one(filter, update, None).await?;\n info!(\n \"collection: {}, _id: {}, update_result: {:?}\",\n collection_name, last_id, update_result\n );\n\n last_id = previous_id;\n\n // Check if more than 5 minutes have passed since the last refresh\n match wait_time.checked_sub(start.elapsed()) > Some(0.seconds()) {\n true => {\n debug!(\n \"remaining time: {:?}\",\n wait_time.checked_sub(start.elapsed())\n )\n }\n false => {\n info!(\"5 min passed, refreshing session\");\n start = Instant::now();\n \n let r = database\n .run_command(doc! { \"refreshSessions\": [ {\"id\": &session_id}] }, None)\n .await?;\n info!(\"{:?}\", r);\n }\n }\n }\n }\n\n Ok(())\n}\n", "text": "Hi I’m executing a long task using the rust driver on my MongoDB atlas cluster. I was getting the “CursorNotFound” error, so I modified my code to refresh the session every 5 minutes, but now I’m getting this error “CMD_NOT_ALLOWED: refreshSessions” instead, can someone help on this? what am I doing wrong?And the problem is that if I don’t try to refresh the session I always get this other error:Error { kind: CommandError(CommandError { code: 43, code_name: “CursorNotFound”, message: “cursor id 7688697219134251972 not found”, labels: [] }), labels: [] }", "username": "Adrian_Espinosa" }, { "code": "refreshSessionspingdatabase.run_command_with_session(doc! 
{ \"ping\": 1 }, None, &mut current_session).await?;\npinglistDatabases", "text": "Hi @Adrian_Espinosa!This is not an issue with the Rust driver but rather a limitation of Atlas shared tier, which does not currently support the refreshSessions command. As a workaround, you can issue a ping command using the session which should also refresh it.Note: if you’re on MongoDB < 4.0.7, you’ll need to use a command that requires authentication instead of ping, e.g. listDatabases.", "username": "Patrick_Freed" }, { "code": "database.run_command_with_session(doc! { \"ping\": 1 }, None, &mut current_session).await?;\n&mut current_session while let Some(previous_document) = cursor.with_session(&mut current_session).next().await {\ncurrent_session", "text": "Thank you for your help @Patrick_Freed, but looks like I can’t run the command you recommended:because it requires the &mut current_session, but the current_session was borrowed already in the while loop. This line to be specific:Do you know how I Can fix this error?cannot borrow current_session as mutable more than once at a time\nsecond mutable borrow occurs here…Thanks!", "username": "Adrian_Espinosa" }, { "code": "loop {\n if let Some(doc) = cursor.with_session(&mut session).next().await {\n // do stuff with doc\n }\n db.run_command_with_session(doc! { \"ping\": 1 }, None, &mut current_session).await?;\n}\nping", "text": "That error is in fact caused by a driver bug, thanks for reporting it! I filed RUST-796 to track the work for getting that fixed.In the meantime, you can work around it by doing something like this:This ensures the mutable reference to the session is released before you run the ping to satisfy the borrow checker.", "username": "Patrick_Freed" }, { "code": "", "text": "Thanks for the help @Patrick_Freed!", "username": "Adrian_Espinosa" }, { "code": "", "text": "No problem! Please let us know if you run into any further issues.", "username": "Patrick_Freed" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Rust driver "AtlasError", message: "CMD_NOT_ALLOWED: refreshSessions
2021-05-04T03:56:32.120Z
Rust driver “AtlasError”, message: “CMD_NOT_ALLOWED: refreshSessions
1,948
https://www.mongodb.com/…7229ea2e784d.png
[ "node-js", "crud" ]
[ { "code": "setUserBrand(id:string,ruolo:string){\n\n const mongodb = this.app.currentUser!.mongoClient(\"mongodb-atlas\")\n\n const users = mongodb.db(\"Saw\").collection<any>(\"Users\");\n\n const NEW_RUOLO = {\n\n \"brand\":new Realm.BSON.ObjectId(this.getActiveBrandId()),\n\n \"ruolo\":ruolo\n\n }\n\n const query = { \n\n \"_id\":new Realm.BSON.ObjectId(id)\n\n };\n\n \n\n const update = {\n\n $push: {\n\n \"ruoli\":NEW_RUOLO\n\n }\n\n };\n\n const options = { upsert: false,\n\n 'Content-Type': 'application/json',\n\n \"Authorization\": \"Bearer \"+this.getAccessToken()\n\n };\n\n return users.updateOne(query, update, options);\n\n }\n", "text": "Hello everybody,I’m trying to insert (append) a new object inside an array of object elements.This is the piece of code that issues the update:The schema of the Users collection is the following:I would like to append to the array “ruoli” a new object, but the piece of code written above does not work.Have you any idea why that does not work? Thank you!", "username": "Giacomo_Ondesca" }, { "code": "", "text": "The problem was that the “ruolo” field was not a string, so adding the .toString() I resolved the problem.", "username": "Giacomo_Ondesca" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Append element into array of objects
2021-05-10T13:35:35.579Z
Append element into array of objects
3,445
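For comparison, the same append expressed with PyMongo as a sketch: $push adds one object to the ruoli array of a single user document. The ObjectId values are hypothetical; the database, collection, and field names mirror the snippet above, and each value has to match the type declared in the collection schema (the original problem was a non-string ruolo).

from bson import ObjectId
from pymongo import MongoClient

users = MongoClient()["Saw"]["Users"]   # database/collection from the thread

user_id = ObjectId("60a0b0c0d0e0f0a0b0c0d0e0")    # hypothetical ids
brand_id = ObjectId("60a0b0c0d0e0f0a0b0c0d0e1")

users.update_one(
    {"_id": user_id},
    {"$push": {"ruoli": {"brand": brand_id, "ruolo": "admin"}}},
)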
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.4.6 is out and is ready for production deployment. This release contains only fixes since 4.4.5, and is a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.4.6 is released
2021-05-10T15:31:37.522Z
MongoDB 4.4.6 is released
3,060
null
[ "java", "spring-data-odm" ]
[ { "code": "", "text": "I recently upgraded the java driver in my application and my local session count went from a steady count of ~20 to steady count of ~200. I upgraded from mono-java-driver-2.14.3 (using spring-data-mongodb 1.10.12 ) to mongo-java-driver-3.11.2 (using spring-data-mongodb 2.2.12.RELEASE). The workload hasn’t changed and I don’t believe that anything else in the code base changed. Does anyone have any suggestions as to why the version upgrade would cause a 10x increase in local sessions?", "username": "Casey_O_Neill" }, { "code": "", "text": "It’s not clear what you mean by “local session”. Can you elaborate? How are you measuring this number?", "username": "Jeffrey_Yemin" }, { "code": "db.aggregate( [ { $listLocalSessions: { allUsers: true } } , {$count: \"sessions\"} ] )", "text": "Thanks for responding. Also I’m running MongoDB 3.6.6", "username": "Casey_O_Neill" }, { "code": "", "text": "Thanks.This is probably due to the fact that the 2.14 driver doesn’t create sessions at all, whereas the 3.11 driver does. In particular, it creates sessions in order to support features such as retryable writes and cluster-wide kill ops.For more details, you can look at the driver sessions specification here.Regards,\nJeff", "username": "Jeffrey_Yemin" } ]
10x Increase in Sessions with Java Driver and Spring Upgrade
2021-05-07T17:27:55.897Z
10x Increase in Sessions with Java Driver and Spring Upgrade
2,229
null
[ "replication", "upgrading" ]
[ { "code": "", "text": "We have a three node cluster with one primary and two secondary nodes with 150GB of data and heavy write and read operations.\nWhen cluster was operating in version 3.6, Large amount of slow queries were observed and data packet processing is taking more than 10seconds.\nAs part of performance optimisationa and multi-document transaction, we upgraded mongodb version to 4.0.23 and query processing speed improved, But within 2-3 hours load primary goes down and one of the secondary nodes becomes unhealthy.\nWhy mongodb is surviving with version 3.6 and high load, but not on 4.0?\nCan anyone help here?Oplog : 71GB,\nRAM: 64GB\nInstance: r4.2xLarge EC2 Instance", "username": "Rakhi_Maheshwari" }, { "code": "insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time\n 187 *0 1855 45 196 544|0 5.1% 79.9% 0 34.8G 28.3G 0|456 6|128 3.21m 120m 1264 rs0 PRI Apr 23 17:33:31.297\n 198 *0 1619 21 199 519|0 5.2% 80.1% 0 34.8G 28.3G 0|469 4|128 2.93m 125m 1264 rs0 PRI Apr 23 17:33:32.298\n 172 *0 3200 48 158 613|0 5.7% 80.4% 0 34.8G 28.3G 0|268 1|128 4.17m 122m 1263 rs0 PRI Apr 23 17:33:33.318\n 101 *0 7378 86 14 731|0 4.9% 79.2% 0 34.8G 28.3G 0|343 3|128 6.05m 133m 1262 rs0 PRI Apr 23 17:33:34.296\n 133 *0 2521 42 11 419|0 5.6% 79.9% 0 34.8G 28.3G 0|393 6|128 3.30m 188m 1261 rs0 PRI Apr 23 17:33:35.302\n 136 *0 3022 83 126 536|0 5.5% 80.0% 0 34.8G 28.3G 0|422 2|127 4.09m 178m 1261 rs0 PRI Apr 23 17:33:36.316\n 175 *0 2585 52 175 623|0 5.2% 79.8% 0 34.8G 28.3G 0|449 4|128 3.66m 158m 1268 rs0 PRI Apr 23 17:33:37.298\n 95 *0 2027 31 180 528|0 5.1% 79.9% 0 34.8G 28.3G 0|429 4|128 2.92m 127m 1270 rs0 PRI Apr 23 17:33:38.298\n 83 *0 2418 41 149 523|0 4.9% 79.4% 0 34.8G 28.3G 0|419 4|128 3.65m 146m 1270 rs0 PRI Apr 23 17:33:39.297\n 148 *0 2288 45 171 556|0 4.9% 79.7% 0 34.8G 28.3G 0|453 1|128 3.26m 109m 1270 rs0 PRI Apr 23 17:33:40.307\ninsert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time\n 144 *0 1333 27 242 535|0 5.1% 80.0% 0 34.8G 28.3G 0|489 4|128 2.57m 103m 1271 rs0 PRI Apr 23 17:33:41.297\n 140 *0 1125 22 226 458|0 5.1% 80.4% 0 34.8G 28.3G 0|498 3|128 2.42m 96.7m 1271 rs0 PRI Apr 23 17:33:42.298\n 183 *0 1778 50 210 530|0 5.1% 80.2% 0 34.8G 28.3G 0|474 8|128 2.77m 122m 1271 rs0 PRI Apr 23 17:33:43.296\n 217 *0 1680 30 217 556|0 4.8% 79.9% 0 34.8G 28.3G 0|408 1|127 3.01m 122m 1271 rs0 PRI Apr 23 17:33:44.311\n 200 *0 4268 49 123 611|0 5.2% 80.4% 0 34.8G 28.3G 0|239 2|128 4.68m 136m 1271 rs0 PRI Apr 23 17:33:45.601\n 160 *0 6644 81 24 644|0 5.7% 80.8% 0 34.8G 28.3G 0|303 3|128 6.11m 213m 1271 rs0 PRI Apr 23 17:33:46.300\n 62 *0 5432 85 57 554|0 5.0% 79.9% 0 34.8G 28.3G 0|254 1|126 5.20m 148m 1271 rs0 PRI Apr 23 17:33:47.311\n 89 *0 9086 90 26 545|0 5.2% 80.2% 0 34.8G 28.3G 0|146 3|128 6.07m 130m 1274 rs0 PRI Apr 23 17:33:48.300\n 30 *0 8915 72 7 541|0 4.9% 79.7% 0 34.8G 28.3G 0|243 1|128 6.29m 128m 1279 rs0 PRI Apr 23 17:33:49.318\n 33 *0 13579 56 2 375|0 5.4% 80.1% 0 34.8G 28.3G 0|73 1|123 7.62m 37.7m 1275 rs0 PRI Apr 23 17:33:50.409\ninsert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time\n 23 *0 12968 86 7 380|0 5.8% 80.0% 0 34.8G 28.3G 0|54 2|127 7.81m 111m 1272 rs0 PRI Apr 23 17:33:51.331\n 12 *0 6958 82 14 548|0 5.1% 79.9% 0 34.8G 28.3G 0|268 3|128 4.90m 259m 1273 rs0 PRI Apr 23 17:33:52.298\n 95 *0 8353 105 61 453|0 5.5% 80.2% 0 34.8G 28.3G 0|70 3|128 6.28m 103m 1275 rs0 PRI Apr 23 17:33:53.299\n 95 *0 
8484 103 65 410|0 5.3% 79.7% 0 34.8G 28.3G 0|171 3|128 6.08m 138m 1278 rs0 PRI Apr 23 17:33:54.299\n 90 *0 9836 52 49 163|0 4.6% 79.5% 0 34.8G 28.3G 0|13 1|128 5.71m 89.4m 1279 rs0 PRI Apr 23 17:33:55.343\n^Z\n", "text": "Please find the output of monostat command.", "username": "Rakhi_Maheshwari" }, { "code": "", "text": "Hi @Rakhi_Maheshwari welcome to the community!From the mongostat output you posted, it appears that the node was heavily utilized, up to its configured memory usage limits. I would think that the server was killed by the OOMkiller. Can you confirm if this is the case? Note that by default EC2 instances does not come configured with swap, so if any process takes a lot of memory, the OS will just kill it.If the server continually gets killed by the OOMkiller, it seems that your instance size is too small for the workload you’re putting in. One possible solution is to provision a larger hardware and see if the issue persists.Why mongodb is surviving with version 3.6 and high load, but not on 4.0?You mentioned earlier that you upgraded to 4.0 for multi document transactions. Is your app using this feature? If yes, multi document transactions will incur additional memory load (see Performance Best Practices: Transactions and Read / Write Concerns), especially if there are a lot of them.On another note, I would encourage you to try out the newer MongoDB versions (4.4.5 currently) and see if it helps your situation, since there are many improvements made since 4.0.Best regards,\nKevin", "username": "kevinadi" }, { "code": "Current version 3.6\nMemory consumption Stats:\nrs0:PRIMARY> db.serverStatus().wiredTiger.cache[\"maximum bytes configured\"]\n32212254720\nrs0:PRIMARY> db.serverStatus().tcmalloc.tcmalloc.formattedString\n------------------------------------------------\nMALLOC: 28084039952 (26783.0 MiB) Bytes in use by application\nMALLOC: + 7536099328 ( 7187.0 MiB) Bytes in page heap freelist\nMALLOC: + 374013696 ( 356.7 MiB) Bytes in central cache freelist\nMALLOC: + 2279168 ( 2.2 MiB) Bytes in transfer cache freelist\nMALLOC: + 260880624 ( 248.8 MiB) Bytes in thread cache freelists\nMALLOC: + 114385152 ( 109.1 MiB) Bytes in malloc metadata\nMALLOC: ------------\nMALLOC: = 36371697920 (34686.8 MiB) Actual memory used (physical + swap)\nMALLOC: + 2148909056 ( 2049.4 MiB) Bytes released to OS (aka unmapped)\nMALLOC: ------------\nMALLOC: = 38520606976 (36736.1 MiB) Virtual address space used\nMALLOC:\nMALLOC: 610748 Spans in use\nMALLOC: 449 Thread heaps in use\nMALLOC: 4096 Tcmalloc page size\n------------------------------------------------\nCall ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).\nBytes released to the OS take up virtual address space but no physical memory.\n2021-05-02T18:29:58.804+0000 I COMMAND [conn448567] command admin.$cmd command: isMaster { ismaster: 1, $clusterTime: { clusterTime: Timestamp(1619980198, 12), signature: { hash: BinData(0, ), keyId: } }, $db: \"admin\", $readPreference: { mode: \"primary\" } } numYields:0 reslen:678 locks:{} protocol:op_msg 572ms\n2021-05-02T18:29:58.804+0000 I NETWORK [listener] connection accepted from ip:54178 #457578 (916 connections now open)\n2021-05-02T18:29:58.805+0000 I NETWORK [conn457577] received client metadata from ip:54176 conn457577: { driver: { name: \"mongo-java-driver|legacy\", version: \"3.11.2\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"amd64\", version: \"4.14.219-119.340.amzn1.x86_64\" }, platform: \"Java/Eclipse OpenJ9/1.8.0_252-b09\" 
}\n2021-05-02T18:29:58.805+0000 I COMMAND [conn457572] commanddbname.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ), $db: \"dbname\" } numYields:0 reslen:203 locks:{} protocol:op_query 1018ms\n2021-05-02T18:29:58.805+0000 I COMMAND [conn457573] commanddbname.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ), $db: \"dbname\" } numYields:0 reslen:203 locks:{} protocol:op_query 1018ms\n", "text": "Hi kevinadi,\nThanks for your suggestion.\nFor case of OOMkiller we have checked all logs of /var/log/messages but there is no log message indicating the same. Is there any other way checking the same.\nAnd in context to multi document transaction , we have not yet enabled feature compatability version to 4.0, so there are less chances of additional memory load.Also , we have now downgraded to version 3.6 , and we are running with very low load, but still mongodb nodes are crashing. i.e one of the secondary node got converted into primary and started responding very slow and ultimately reached to unhealthy state, where as the node which was primary originally was not behaving unusual.Please find some additonal stats if and let us know if you can help us out here.Error log Before secondary transiting into Primary:Data size: 94GB\nRAM:64GB\ncache : 30 GB\ninstance size: r4.2xlarge", "username": "Rakhi_Maheshwari" }, { "code": "", "text": "@kevinadi Please let me know if you can guide us here.", "username": "Rakhi_Maheshwari" }, { "code": "dmesg", "text": "Now that you are back on 3.6 and still experiencing the same issue I think you need to focus your attention to the platform.Review and ensure you are implementing the recommendations in the Productions Notes and Operations Checklist, specifically AWS EC2Review your mongod logs and check for warnings at startup and fix them.but still mongodb nodes are crashing.The whole host is ‘crashing’ or mongod ?Are the hosts dedicated for mongodb or are the other softwares co-located on the host.How does other metrics of the host look, Load Average, IO Stat, CPU Usage ?For case of OOMkiller we have checked all logs of /var/log/messages but there is no log message indicating the same. Is there any other way checking the same.Also check dmesg for OOM.", "username": "chris" }, { "code": "", "text": "HI @chrisNow we have done AMI restore, and server is working as expected, but we have still not reached on any conclusion what effect did upgrade to 4.0 and downgrade activity performed on mongodb environment.Please find the stats before AMI backup:\nCPU: Fluctuating between 50% to 6%\nLoad average per 1 min: 29.5Mongod reached in unhealthy/ not reachable state .", "username": "Rakhi_Maheshwari" }, { "code": "", "text": "Load average per 1 min: 29.5That is a normalised load average of 3.69 (29.5/8vcpu) This should below 1 on a healthy system. Given that the actual cpu usage you reported is low you’re bottlenecked elsewhere. This could be memory pressure and swapping to disk(without swap you get very high load before OOM killer) or your disk IO is hitting a limit.", "username": "chris" }, { "code": "", "text": "@chris With same load average , server was running fine till upgrade to downgrade activity. 
Now as we have done AMI restore server is running fine again.\nIs it the case during upgrade to 4.0 and downgrade some configuration might have changed ?\nIs there any way to figure this out?\nNow we are not sure even to proceed with upgrade process to 4.0, because on production environment it will be at risk.Please help us here.", "username": "Rakhi_Maheshwari" }, { "code": "", "text": "With same load average , server was running fine till upgrade to downgrade activity.You may think it is running fine. Its not. A load average that high you are under resourced somewhere. In my opinion the changes you made(upgeade and downgrade) are highlighting this issue, not causing it.", "username": "chris" } ]
MongoDB Replica Set Crashes When Upgrading to Version 4.0
2021-04-23T06:45:09.906Z
MongoDB Replica Set Crashes When Upgrading to Version 4.0
3,478
null
[ "node-js" ]
[ { "code": "", "text": "I’m currently studying the basics of MongoDB’s native JavaScript drivers. I was just reading this post.I understand both Promise and Async Functions are supported in the driver.As I am new to MongoDB and async JavaScript in general, I have no particular preference in which style to use. To keep my project’s codebase consistent, I plan on picking one syntax and using it throughout the project.Does MongoDB encourage the use of a particular syntax style? Again, I know it probably comes down to personal preference but as I have none, I am curious to know if one syntax is recommended over the another, and what are the reasons for it.", "username": "Jack_Zhang" }, { "code": "", "text": "Hey @Jack_Zhang! Great question - MongoDB makes has no opinions about whether you use Promise or Async functions. This is because under the hood they are the same thing! Async function are just syntactic sugar on top of Promises. This means there is no difference as far as MongoDB is concerned which one you choose. My personal recommendation would be to pick one and be consistent within your code base with which ever one you choose. Good luck!", "username": "JoeKarlsson" } ]
JavaScript Native Driver - Which syntax is recommended: Promise or Async Functions?
2021-05-09T00:13:22.980Z
JavaScript Native Driver - Which syntax is recommended: Promise or Async Functions?
1,675
null
[]
[ { "code": "", "text": "Hi,\nI’m learning MongoDB & dashboard charts.\nCan I, and how, filter a chart by a user? I mean, I have a website and I wish to show certain results only for the user logged.\nThe user is a field in the documents that generate the charts, obviously.\nFor example, we create a chart for historical taxes for all users and past years. This chart is embedded into the website with an iframe. I wish to show only the data for the user logged.\nI hope I explain.\nThanks\nBTW, GREAT tool and GREAT database engine.", "username": "Felipe_Fernandez" }, { "code": "", "text": "Hey Felipe -I’m glad you’re enjoying Charts! Yes it is possible to do what you are looking for. Please take a look at this documentation page for details.When the information you are showing is not sensitive, you can just pass the filter in as an iframe parameter. However if the information is sensitive (which I’m assuming is true in your case), you need to use the method described under “Inject User-Specific Filters”. This requires using the Embedding SDK and Authenticated Embedding to make it secure.HTH\nTom", "username": "tomhollander" }, { "code": "", "text": "Thanks. Allways the error is\n{“errorCode”:7,“simple”:“Error loading data for this chart (error code: 7).”,“verbose”:“Error loading data for this chart (error code: 7). User filter is not allowed. See https://dochub.mongodb.org/core/charts-embedding-error-codes for details.”}\nThe filter, AppealYear, is set into the chart filter.\nMongoDB Charts{ $match : { AppealYear : 2019 } }", "username": "Felipe_Fernandez" }, { "code": "$matchhttps://charts.mongodb.com/charts-retm-statistics-fouqe/embed/charts?id=XXXX&theme=light&filter={ AppealYear : 2019 }", "text": "Hi Felipe -I think this is failing because you are specifying $match in your filter. This is implicit; you should just use the body of the match filter, e.g:\nhttps://charts.mongodb.com/charts-retm-statistics-fouqe/embed/charts?id=XXXX&theme=light&filter={ AppealYear : 2019 }Tom", "username": "tomhollander" }, { "code": "", "text": "Hi Tom\nStil {“errorCode”:7,“simple”:“Error loading data for this chart (error code: 7).”,“verbose”:“Error loading data for this chart (error code: 7). User filter is not allowed. See https://dochub.mongodb.org/core/charts-embedding-error-codes for details.”}\nMongoDB Charts{AppealYear : 2019}", "username": "Felipe_Fernandez" }, { "code": "", "text": "Can you share the chart id? Privately if you want: tom.hollander at mongodb.com.", "username": "tomhollander" }, { "code": "", "text": "Thanks. Fixed\nimage1111×1272 93.2 KB", "username": "Felipe_Fernandez" } ]
Export & Filter
2021-05-07T16:02:44.992Z
Export & Filter
2,796
null
[ "atlas-functions" ]
[ { "code": "createPresignedPost", "text": "I’d like to use presigned POST requests in Amazon S3, instead of PUT, to limit the file size (tutorial).The Amazon S3 service does not seem to support the createPresignedPost function — it doesn’t appear in the list of possible actions when setting rules. How can I use it?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "MongoDB team, any idea? Is that possible?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hi Jean-Baptiste,If this action was added relatively recently then it would explain why we have not have added support yet. I also couldn’t find it in the list of S3 API actions documentation.As a workaround we recommend using the AWS SDK.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "presignURL", "text": "@Mansoor_Omar Thank you for your reply.I don’t think it’s a new feature; in the link you sent, I can’t see the supported presignURL neither.It is listed here: Class: AWS.S3 — AWS SDK for JavaScriptDo you think support for this method could be added in the foreseeable future?", "username": "Jean-Baptiste_Beau" } ]
Amazon S3 service: createPresignedPost
2021-03-24T11:03:38.489Z
Amazon S3 service: createPresignedPost
2,860
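Editor's note on the S3 thread above: since the built-in S3 service does not expose `createPresignedPost`, the workaround Manny points to (the AWS SDK as an uploaded dependency) might look like the sketch below inside a Realm function. The bucket name, key prefix, size limit and the `context.values` secret names are assumptions, not part of the thread.

```js
// Hypothetical Realm function using the 'aws-sdk' npm package (v2) as an uploaded dependency.
// Bucket, key, size cap and secret names are placeholders.
exports = async function (fileName) {
  const AWS = require("aws-sdk");

  const s3 = new AWS.S3({
    accessKeyId: context.values.get("awsAccessKeyId"),
    secretAccessKey: context.values.get("awsSecretAccessKey"),
    region: "us-east-1",
  });

  const params = {
    Bucket: "my-upload-bucket",
    Fields: { key: `uploads/${fileName}` },
    // Reject POSTs larger than 5 MB — the size cap a presigned PUT cannot enforce.
    Conditions: [["content-length-range", 0, 5 * 1024 * 1024]],
    Expires: 300, // seconds the presigned POST stays valid
  };

  // createPresignedPost is callback-based in AWS SDK v2, so wrap it in a Promise.
  return new Promise((resolve, reject) => {
    s3.createPresignedPost(params, (err, data) => (err ? reject(err) : resolve(data)));
  });
};
```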
null
[ "compass" ]
[ { "code": "", "text": "How to find the connection string to connect to MongoDB from compass?\nTried installing the shell. I am getting the error MSVCP140.dll was not found. How do I fix it?Note: I am new to MongoDB", "username": "Balasubrahmanyam_Ira" }, { "code": "mongodb://localhost:27017Connect", "text": "Hi @Balasubrahmanyam_Ira. Welcome to the MongoDB community. Where is your MongoDB server running? In Atlas or on your machine?If in Atlas, you can find some details here. If MongoDB is running on your local computer with the default settings, the connection string will be mongodb://localhost:27017. If MongoDB is running locally with the default settings, in Compass you actually don’t need to enter anything. Just click Connect and Compass will connect to the default host and port.Regarding the issue with the dll of the mongo shell, I think I remember there are other threads in this forum. However, I’d suggest you try out our new shell instead: MongoDB Shell Download | MongoDB. It’s currently in beta but we will GA it later this year.", "username": "Massimiliano_Marcon" } ]
Displaying connection string
2021-05-06T11:46:46.223Z
Displaying connection string
2,392
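Editor's note on the connection-string thread above: for reference, the two common shapes of the string — the local default mentioned in the reply and an Atlas SRV string. The Atlas hostname and credentials below are placeholders; the real value comes from the cluster's Connect dialog.

```js
// Default local deployment — what Compass connects to if you just click "Connect":
const localUri = "mongodb://localhost:27017";

// Typical Atlas SRV string (placeholders — copy the real one from Atlas > Connect):
const atlasUri =
  "mongodb+srv://<username>:<password>@cluster0.abcde.mongodb.net/myDatabase?retryWrites=true&w=majority";

// Either string also works with the new mongosh shell mentioned in the reply, e.g.:
//   mongosh "mongodb://localhost:27017"
```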
null
[ "aggregation", "performance" ]
[ { "code": "", "text": "I have a collection of prices. Its size is about 2.5GB, with 2.1 million documents. It increases a little every day as I collect prices from different sources daily. I also run a daily fetch to get the prices and show them to my clients. Until yesterday, this would take around 30 seconds. Today, suddenly, it started taking around 7 minutes! The only operations I do are a $match and a $projection. For the $match I search for a productId and an array of storeIds. Can anyone help me??", "username": "Guilherme_de_Carvalh" }, { "code": "", "text": "Hi @Guilherme_de_Carvalh,Welcome to MongoDB community.We would need to explore . explain for thd explain plans of the query. And query itself…Additionally please provide a getIndexes() from your collection.Now you mentioned storeIds array is this array saves all stores that has the product and kept in one product document??Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Match performance has dropped significantly and suddenly
2021-05-09T19:44:00.130Z
Match performance has dropped significantly and suddenly
1,725
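Editor's note on the performance thread above: the diagnostics Pavel asks for can be produced with the shell commands below. The collection name (`prices`) and field names (`productId`, `storeId`) are taken from the poster's description, but the exact pipeline and index are assumptions for illustration.

```js
// 1. List the existing indexes on the collection.
db.prices.getIndexes();

// 2. Explain the $match + $project pipeline described in the question.
db.prices.explain("executionStats").aggregate([
  {
    $match: {
      productId: "example-product-id",        // placeholder value
      storeId: { $in: [ /* array of store ids */ ] },
    },
  },
  { $project: { storeId: 1, price: 1, date: 1 } },
]);

// 3. If the plan shows a COLLSCAN, a compound index on the $match fields usually helps:
db.prices.createIndex({ productId: 1, storeId: 1 });
```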
null
[ "aggregation" ]
[ { "code": "", "text": "Hello, I created a view containing some places with location, then I need to execute a geoNear pipeline on the view, to find the near places. From the guideline at https://docs.mongodb.com/manual/core/views/#:~:text=Views%20are%20computed%20on%20demand,not%20support%20operations%20such%20as%3A&text=%24geoNear%20pipeline%20stage. it seems view doesn’t support geoNear because “Views are computed on demand during read operations, and MongoDB executes read operations on views as part of the underlying aggregation pipeline…” I am still not quite clear about the explanation, why I can’t execute the geoNear pipeline on a view? and is there any workaround solution? My view is created by a pipeline, i.e. pipeline Pa, if I copy the pipeline Pa in my search function, and add the operation of geoNear as new stage to the pipeline Pa, then I have pipeline Pb, will Pb work? If it can work, I don’t understand why the geoNear will not work on a view which is a result of the Pa?Thanks,James", "username": "Zhihong_GUO" }, { "code": "", "text": "Hi @Zhihong_GUO,A geoNear query needs to run on a geo index and for that to happen it must be the first stage of aggregation. Now when you creating a view you are forcing the query to push all view stages before the query on the view.Therefore it cannot guarantee geoNear will be first and thus forbidden.What you can consider is either have a schedule $merge process to persist the view and geo index it to form a materialized view or $out to temp collection with an index and query it…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny, thanks for the quick answer.", "username": "Zhihong_GUO" } ]
geoNear pipeline on view
2021-05-09T14:03:38.536Z
geoNear pipeline on view
1,408
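Editor's note on the geoNear-on-view thread above: a sketch of the $merge "materialized view" workaround Pavel suggests. The collection names, the geo field and the sample coordinates are placeholders; the view's own stages are represented by a comment.

```js
// 1. On a schedule, materialize the view's pipeline (Pa) into a real collection.
db.places.aggregate([
  // ...the stages the view is defined with (pipeline Pa)...
  { $merge: { into: "places_materialized", whenMatched: "replace", whenNotMatched: "insert" } },
]);

// 2. Index the persisted output — a view cannot carry a geo index, a collection can.
db.places_materialized.createIndex({ location: "2dsphere" });

// 3. $geoNear can now be the first stage of the search pipeline (Pb).
db.places_materialized.aggregate([
  {
    $geoNear: {
      near: { type: "Point", coordinates: [2.3522, 48.8566] }, // placeholder point
      distanceField: "distanceMeters",
      maxDistance: 5000,
      spherical: true,
    },
  },
]);
```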
null
[ "on-premises" ]
[ { "code": "", "text": "That seems like a dead end!\nI want to view some charts from data in my OWN instance of mongodb on MY OWN kubernetes cluster. I do NOT want to use your cloud cluster service.\nI created a project on your cloud service and want to connect charts to my server.\nHow would I do that?", "username": "Klaus_Kobald" }, { "code": "", "text": "@Klaus_KobaldIt is only for MongoDB Altlas. It is indeed a shame that MongoDB are discontinuing the self hosted version. I’ll be on the hunt for a replacement too.", "username": "chris" }, { "code": "", "text": "Please let me know, if you find something. If there is some coding involved it´s fine for me. I even thought using one of those JS frameworks. In fact when I think about the wasted time in setting up charts I could have made a nice tiny dashboard for my needs in the same time. I am not willing to pay a monthly fee. My project is too small.\nIt´s also very stupid from the mongo team: If I as developer could use charts for free, I would promote it to my clients how would then pay for it.", "username": "Klaus_Kobald" }, { "code": "", "text": "Hi Klaus, sorry to hear you have been inconvenienced by this transition. It is true that our cloud-hosted product can’t connect to a locally-hosted MongoDB server. However it is not true that you are required to pay a monthly fee for the cloud version. You can deploy a free-tier Atlas cluster and use Charts below the generous bandwidth threshold and everything will be completely free.Tom", "username": "tomhollander" }, { "code": "", "text": "Hi, so can I use my own MongoDB instance from my servers as datasource in the Atlas Cluster? ", "username": "Klaus_Kobald" }, { "code": "", "text": "You’ll need to copy the data from your personal cluster into the Atlas cluster first.", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
So, on premises charts is going away! Your cloud solution does not allow to connect my own mongodb!
2021-05-08T04:20:31.862Z
So, on premises charts is going away! Your cloud solution does not allow to connect my own mongodb!
4,937
https://www.mongodb.com/…cbe3d5f6d5cb.png
[ "data-modeling" ]
[ { "code": "comp.idObjectIdcompStringplayerName$insertOneResult = $collection->insertOne(\n [\n 'playerName' => $_SESSION['playerName'],\n ...\n 'comp' => [\n 'id' => $_SESSION['compId'],\n ...\n ]\n ]\n);\n", "text": "Early development, I have a collection with the following document structure:image368×629 49.1 KBOf note is comp.idThe “Ernie Els” document was entered manually via the Atlas web UI. It has an actual ObjectId for the comp sub-document.When I use my PHP site to enter new records (still under development, obviously…), the format appears as a String, which is the 2nd document with my playerName.The insert syntax currently is:What do I need to change, please?", "username": "Dan_Burt" }, { "code": "'id' => ObjectId($_SESSION['compId']),'id' => new MongoId($_SESSION['compId']) /* this might throw an Exception if compId is not a valid reference syntax */", "text": "Hi Dan,I would try:\n'id' => ObjectId($_SESSION['compId']),\nor\n'id' => new MongoId($_SESSION['compId']) /* this might throw an Exception if compId is not a valid reference syntax */That code was untested however as I only started learning MongoDB and am thinking of changing implementation in my application.HTH,\nStay safe.", "username": "MaxOfLondon" }, { "code": "'id' => new MongoDB\\BSON\\ObjectID($_SESSION['compId'])compId", "text": "Thanks @MaxOfLondon, I used the latter code:'id' => new MongoDB\\BSON\\ObjectID($_SESSION['compId'])compId must exist at the time of saving this new document, otherwise I have other problems in any case!", "username": "Dan_Burt" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Inserting new sub-document ObjectId's
2021-05-07T20:04:21.701Z
Inserting new sub-document ObjectId&rsquo;s
5,558
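Editor's note on the PHP ObjectId thread above: the fix has a direct counterpart in other drivers. The sketch below is a hypothetical Node.js version of the same insert — the field names are copied from the thread, and the point is the same: wrap the session's string id in an ObjectId before inserting so it is not stored as a plain String.

```js
// Hypothetical Node.js equivalent of the PHP fix above.
const { ObjectId } = require("mongodb");

async function insertPlayer(collection, session) {
  return collection.insertOne({
    playerName: session.playerName,
    comp: {
      // Throws if compId is not a valid 24-character hex string,
      // mirroring the exception the PHP driver may raise.
      id: new ObjectId(session.compId),
    },
  });
}
```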
https://www.mongodb.com/…0_2_1024x482.png
[ "queries" ]
[ { "code": "{ _id: 1, reportId: \"a\", accountId: \"1\" },\n{ _id: 2, reportId: \"b\", accountId: \"1\" }\ndb.collection.createIndex({ reportId: 1, accountId: 1 });\ndb.collection.find({ reportId: \"a\", accountId: \"1\" }).sort({ _id: 1 });\n_id{\n \"stage\": \"FETCH\",\n \"filter\": {\n \"$and\": [\n {\n \"accountId\": {\n \"$eq\": \"1\"\n }\n },\n {\n \"reportId\": {\n \"$eq\": \"a\"\n }\n }\n ]\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 4,\n \"advanced\": 1,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"docsExamined\": 2,\n \"alreadyHasObj\": 0\n}\n{\n \"stage\": \"IXSCAN\",\n \"nReturned\": 2,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 3,\n \"advanced\": 2,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"_id\": 1\n },\n \"indexName\": \"_id_\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"_id\": []\n },\n \"isUnique\": true,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"_id\": [\n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\": 2,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0,\n \"parentName\": \"FETCH\"\n}\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"sample.index1\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"accountId\" : {\n \"$eq\" : \"1\"\n }\n }, \n {\n \"reportId\" : {\n \"$eq\" : \"a\"\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [ \n {\n \"accountId\" : {\n \"$eq\" : \"1\"\n }\n }, \n {\n \"reportId\" : {\n \"$eq\" : \"a\"\n }\n }\n ]\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"_id\" : 1\n },\n \"indexName\" : \"_id_\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"_id\" : []\n },\n \"isUnique\" : true,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"_id\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"SORT\",\n \"sortPattern\" : {\n \"_id\" : 1\n },\n \"memLimit\" : 104857600,\n \"type\" : \"simple\",\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"reportId\" : 1.0,\n \"accountId\" : 1.0\n },\n \"indexName\" : \"reportId_1_accountId_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"reportId\" : [],\n \"accountId\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"reportId\" : [ \n \"[\\\"a\\\", \\\"a\\\"]\"\n ],\n \"accountId\" : [ \n \"[\\\"1\\\", \\\"1\\\"]\"\n ]\n }\n }\n }\n }\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 1,\n \"executionTimeMillis\" : 0,\n \"totalKeysExamined\" : 2,\n \"totalDocsExamined\" : 2,\n \"executionStages\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [ \n {\n \"accountId\" : {\n \"$eq\" : \"1\"\n }\n }, \n {\n \"reportId\" : {\n \"$eq\" : \"a\"\n }\n }\n ]\n },\n \"nReturned\" : 1,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 4,\n \"advanced\" : 1,\n \"needTime\" : 1,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"docsExamined\" : 2,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" 
: 2,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 3,\n \"advanced\" : 2,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"_id\" : 1\n },\n \"indexName\" : \"_id_\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"_id\" : []\n },\n \"isUnique\" : true,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"_id\" : [ \n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\" : 2,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n },\n \"serverInfo\" : {\n \"host\" : \"vt\",\n \"port\" : 27017,\n \"version\" : \"4.4.5\",\n \"gitVersion\" : \"ff5cb77101b052fa02da43b8538093486cf9b3f7\"\n },\n \"ok\" : 1.0\n}\n_id", "text": "I have a collection that contains:I have created compound Index:Executing query:This query uses _id index instead of compound index, and compound index in in rejected plain,Screenshot from 2021-05-08 18-20-591127×531 38.5 KB Screenshot from 2021-05-08 18-25-471132×224 8.5 KBIts really required to add _id field in compound index?", "username": "turivishal" }, { "code": "", "text": "Do you only have 2 documents in the collection? I asked because, if I understand correctly, the query planner execute all plans at first until one gives better performance and since you are sorting may be it is more efficient to use _id with so little number of documents.If you really only have 2 documents, try to populate with extra documents you may delete later, to see if it corrects.", "username": "steevej" }, { "code": "{\n \"stage\": \"SORT\",\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 5,\n \"advanced\": 1,\n \"needTime\": 2,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"sortPattern\": {\n \"_id\": 1\n },\n \"memLimit\": 104857600,\n \"type\": \"simple\",\n \"totalDataSizeSorted\": 67,\n \"usedDisk\": false\n}\n{\n \"stage\": \"FETCH\",\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 2,\n \"advanced\": 1,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"docsExamined\": 1,\n \"alreadyHasObj\": 0,\n \"parentName\": \"SORT\"\n}\n{\n \"stage\": \"IXSCAN\",\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 2,\n \"advanced\": 1,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"reportId\": 1,\n \"accountId\": 1\n },\n \"indexName\": \"reportId_1_accountId_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"reportId\": [],\n \"accountId\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"reportId\": [\n \"[\\\"a\\\", \\\"a\\\"]\"\n ],\n \"accountId\": [\n \"[\\\"1\\\", \\\"1\\\"]\"\n ]\n },\n \"keysExamined\": 1,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0,\n \"parentName\": \"FETCH\"\n}\n/* 1 */\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"sample.index1\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"accountId\" : {\n \"$eq\" : \"1\"\n }\n }, \n {\n \"reportId\" : {\n \"$eq\" : \"a\"\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"SORT\",\n \"sortPattern\" : {\n \"_id\" : 1\n },\n \"memLimit\" : 104857600,\n \"type\" : \"simple\",\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n 
\"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"reportId\" : 1.0,\n \"accountId\" : 1.0\n },\n \"indexName\" : \"reportId_1_accountId_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"reportId\" : [],\n \"accountId\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"reportId\" : [ \n \"[\\\"a\\\", \\\"a\\\"]\"\n ],\n \"accountId\" : [ \n \"[\\\"1\\\", \\\"1\\\"]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [ \n {\n \"accountId\" : {\n \"$eq\" : \"1\"\n }\n }, \n {\n \"reportId\" : {\n \"$eq\" : \"a\"\n }\n }\n ]\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"_id\" : 1\n },\n \"indexName\" : \"_id_\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"_id\" : []\n },\n \"isUnique\" : true,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"_id\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 1,\n \"executionTimeMillis\" : 0,\n \"totalKeysExamined\" : 1,\n \"totalDocsExamined\" : 1,\n \"executionStages\" : {\n \"stage\" : \"SORT\",\n \"nReturned\" : 1,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 5,\n \"advanced\" : 1,\n \"needTime\" : 2,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"sortPattern\" : {\n \"_id\" : 1\n },\n \"memLimit\" : 104857600,\n \"type\" : \"simple\",\n \"totalDataSizeSorted\" : 67,\n \"usedDisk\" : false,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 1,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 2,\n \"advanced\" : 1,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"docsExamined\" : 1,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 1,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 2,\n \"advanced\" : 1,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"reportId\" : 1.0,\n \"accountId\" : 1.0\n },\n \"indexName\" : \"reportId_1_accountId_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"reportId\" : [],\n \"accountId\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"reportId\" : [ \n \"[\\\"a\\\", \\\"a\\\"]\"\n ],\n \"accountId\" : [ \n \"[\\\"1\\\", \\\"1\\\"]\"\n ]\n },\n \"keysExamined\" : 1,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n }\n },\n \"serverInfo\" : {\n \"host\" : \"vt\",\n \"port\" : 27017,\n \"version\" : \"4.4.5\",\n \"gitVersion\" : \"ff5cb77101b052fa02da43b8538093486cf9b3f7\"\n },\n \"ok\" : 1.0\n}\n{ _id: 1 }_id", "text": "Thank you for pointing out, i have added more documents and it shows that compound index is used,Screenshot from 2021-05-08 18-59-211034×507 37.8 KB Screenshot from 2021-05-08 18-59-341033×350 13.3 KBI have another question:Why its SORT in memory? 
when _id have default unique index in order { _id: 1 },Does it really require to add _id field in compound index?", "username": "turivishal" }, { "code": "sort()_iddb.collection.createIndex({ reportId: 1, accountId: 1, _id: 1 });\n", "text": "Actually i found the answer of my question from below documentation, if i am not wrong,Index Intersection and Sort:Index intersection does not apply when the sort() operation requires an index completely separate from the query predicate.So that is why its performing Blocking Sorts Operation and for prevention i must have to add _id field in compound index,Thank you @steevej", "username": "turivishal" } ]
Compound index is rejected when i use sort by _id field
2021-05-08T13:05:24.897Z
Compound index is rejected when i use sort by _id field
4,174
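Editor's note on the compound-index thread above: the conclusion — extend the index with the sort key so the query avoids a blocking in-memory SORT — in shell form, using the `index1` collection name from the explain output, followed by a re-explain to verify.

```js
// Equality fields first, then the sort key (_id), per the ESR guideline.
db.index1.createIndex({ reportId: 1, accountId: 1, _id: 1 });

// Re-run the query with explain: the winning plan should now be IXSCAN + FETCH
// with no separate SORT stage.
db.index1.find({ reportId: "a", accountId: "1" })
  .sort({ _id: 1 })
  .explain("executionStats");
```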
https://www.mongodb.com/…7_2_1024x319.png
[ "atlas-functions" ]
[ { "code": "", "text": "Realm isn’t able to import the dependencies that I have imported using a .tar.gz file. I have also tried with a .zip version of my node_modules pacakage.\nScreen Shot 2021-04-10 at 4.15.41 PM1972×616 74.3 KB\n", "username": "Tyler_Huyser" }, { "code": "", "text": "Hey Tyler -Which of the dependencies (axios or cheerio) is showing up in the error message for 'cannot find module ’ and is this still happening?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Hello Sumedha,The error occurs for both dependencies. I tested by trying to import each independently as well.Best,\nTyler", "username": "Tyler_Huyser" } ]
Realm Function Error: "FunctionError: Cannot find module '[FUNCTION NAME]'"
2021-04-10T20:17:31.624Z
Realm Function Error: &ldquo;FunctionError: Cannot find module &lsquo;[FUNCTION NAME]&rsquo;&rdquo;
2,334
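Editor's note on the dependency-upload thread above: for reference, once a node_modules archive uploads successfully, the packages are pulled into a function with a plain `require`. The sketch below assumes a working upload; the URL and selector are placeholders, not from the thread.

```js
// Hypothetical Realm function using the uploaded axios and cheerio dependencies.
exports = async function () {
  const axios = require("axios");
  const cheerio = require("cheerio");

  const { data: html } = await axios.get("https://example.com"); // placeholder URL
  const $ = cheerio.load(html);
  return $("title").text(); // e.g. the page title, just to prove the modules resolved
};
```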
null
[ "dot-net", "unity" ]
[ { "code": " private const string MONGO_URI = \"mongodb+srv://user:[email protected]/test?retryWrites=true&w=majority\";\n\n private IMongoClient client;\n private IMongoDatabase db;\n\n\n void Start()\n {\n Debug.Log(\"TEST REACHED START\");\n client = new MongoClient(MONGO_URI);;\n Debug.Log(\"TEST MDB CLIENT\" + client);\n\n }\n", "text": "I’m having problems getting MongoClient() in Unity, here is the code:Everything works fine in the Unity editor, the problem is when building to mobile in IL2CPP.XCode debug:TEST REACHED STARTNotSupportedException: ./External/il2cpp/il2cpp/libil2cpp/icalls/mscorlib/System.Reflection/Module.cpp(112) : Unsupported internal call for IL2CPP:Module::GetPEKind - “This icall is not supported by il2cpp.”(Filename: currently not available on il2cpp Line: -1)Unity Android Debug:AndroidPlayer([email protected]:34999) NotSupportedException: /Applications/Unity/Hub/Editor/2019.3.3f1/Unity.app/Contents/il2cpp/libil2cpp/icalls/mscorlib/System.Reflection/Module.cpp(112) : Unsupported internal call for IL2CPP:Module::GetPEKind - “This icall is not supported by il2cpp.”Found a thread in stack overflow from 7 months ago that says the following:I got the DLLs from this repo, maybe they are not updated and mongoDB fixed this issue, i’d really appreciate some help in this, thanks!", "username": "Elizeu_Becker" }, { "code": "", "text": "I am having the same problem, did you solve it?", "username": "IMag_VR" }, { "code": "", "text": "I’m getting this issue now, as it looks like MongoDB doesn’t support it.", "username": "Jonathan_Peplow" }, { "code": "", "text": "I’m having the same issue when building an UWP for HoloLens - seems to be a general problem with IL2CPP", "username": "Stefan_Bock" }, { "code": "", "text": "Anyone find a workaround for this issue?", "username": "christian_aubert" }, { "code": "", "text": "After a lot of tinkering I found a way to make the MongoDB C# Driver work with Unity IL2CPP builds (I tested it on PC and UWP builds, but I guess it works for others too)\nFor it to work I had to make a custom build of the driver, plus pay some attention to other details.\nBut before I go into a bit of detail of my findings, here’s a link to an example project that I uploaded: MongoDB IL2CPP Example Project (don’t forget to change the MongoDBTester.cs to use your instance of MongoDB)So here’s what I did:MongoDB Driver:\nIn ClientDocumentHelper.cs of the driver source is a function CreateOSDocument() which contains a section #if NET452 - I just removed that whole section and tried it out in Unity with the IL2CPP build and it solved the OPs NotSupportedExceptionUnity - Assembly Stripping:\nThe stripping process removed some code that is actually needed, which I fixed with a linker file - link.xml in the Plugins Folder of my example project.Unity - AOT and Reflections:\nThe MongoDB Driver uses reflections for some things, like getting the right serializer for a collection. 
The AOT nature of C++ builds and the way IL2CPP works don’t mix well with that (see Unity Docs - Scripting Restrictions).\nFortunately this is easy enough to work around: If you use generic classes (like Dictionary) you need to specify the Serializer with the BsonSerializerAttribute\nFor constructors the same issue happens when using constructors with arguments - I just ensure that an zero argument public constructor is available for all classes that are used with MongoDB.Unity - IPv4:\nI’m not sure if this has anything to do with Unity or the MongoDB Driver, or maybe just the server why my instance of MongoDB was hosted, but I encountered a problem that my address protocol was not supported. So I wrote a workaround to use IPv4 instead.Additional Note:\nI noticed that auto-mapping of my classes did not always work. I haven’t quite figured out when and why it doesn’t work but if this happens just manually map the class with BsonClassMap.RegisterClassMapHope this helps, let me know if it works for you or if you encounter any other problems.", "username": "Stefan_Bock" }, { "code": "", "text": "Hi Everyone,I know I’m a little late to this thread, but I wanted to point out that MongoDB has a Realm SDK for Unity that is currently in Alpha. As far as I know it is scheduled to have a stable release later this year (2021).You can learn more about it here:https://www.mongodb.com/how-to/getting-started-realm-sdk-unity/I’m hoping this takes some of the pain out of working with MongoDB and Unity.I’m tagging @nirinchev who is the Engineer on the project.Best,", "username": "nraboy" }, { "code": "", "text": "Hey Stefan,This works great in editor. When I try to make a build on Mac or iOS, the app just hangs with this error:Test failed, exception: System.TimeoutException: A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “Automatic”, Type : “Unknown”, State : “Disconnected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “x.x.x.x:27017” }”, EndPoint: “x.x.x.x:27017”, ReasonChanged: “ServerInitialDescription”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, LastHeartbeatTimestamp: null, LastUpdateTimestamp: “2021-04-12T02:39:45.5177210Z” }] }.\nat MongoDB.Driver.Core.Clusters.ClusterAny ideas what’s causing that?", "username": "christian_aubert" }, { "code": "", "text": "Is this the full stack trace? I also had timeout problems, but in my case the error also contained an HeartbeatException in the Servers list of the error message. So I doubt it is the same problem. 
In my case it was the problem with using IPv6, for which I wrote the GetIPv4Host function, are you using that?You could also check Error Handling in the drivers reference.You could try playing around with your connection string to see if you can find a solution with settings, depending on how your MongoDB is configured, like here.", "username": "Stefan_Bock" }, { "code": "", "text": "I also found a post where it is mentioned that it can be related to firewall or security settings.Please let me know if you find anything.(Sorry for double-reply, I’m not allowed to post more than 2 links in one post because I’m new)", "username": "Stefan_Bock" }, { "code": "", "text": "I tried Mac, Windows and iOS builds connecting to a mongodb server running on the local network, no issues whatsoever.When I try the exact same server (I’ve tried windows and Mac) hosted on Linode or AWS or another network with port forwarding/security policies in place, the editor and Windows builds still work without issues, the Mac and iOS builds fail in the same way.", "username": "christian_aubert" }, { "code": "", "text": "Also works just fine in Android. So if it’s a server (latest mongodb community edition) issue, it’s gotta be something only iOS/Mac are being finicky about. And that’s only when connecting over the internet vs connecting to local lan.", "username": "christian_aubert" }, { "code": "Try uploading data: { \"name\" : \"test\", \"someBool\" : true, \"number\" : -1323691331, \"list\" : [{ \"value\" : 0.29091730713844299 }, { \"value\" : 0.75945866107940674 }], \"dictionary\" : { \"test_0\" : { \"value\" : 1984966749 } } }\nUnityEngine.DebugLogHandler:Internal_Log(LogType, LogOption, String, Object)\nUnityEngine.DebugLogHandler:LogFormat(LogType, Object, String, Object[])\nUnityEngine.Logger:Log(LogType, Object)\nUnityEngine.Debug:Log(Object)\nMongoDBTester:ShowMessage(String)\n<TestMongoDB>d__22:MoveNext()\nSystem.Runtime.CompilerServices.AsyncVoidMethodBuilder:Start(TStateMachine&)\nMongoDBTester:TestMongoDB(String)\n<Start>d__17:MoveNext()\nSystem.Runtime.CompilerServices.AsyncVoidMethodBuilder:Start(TStateMachine&)\nMongoDBTester:Start()\n \n(Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 39)\n\nTest failed, exception: System.TimeoutException: A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:01 } }. 
Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"Automatic\", Type : \"Unknown\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 1, EndPoint : \"x.x.x.x:27017\" }\", EndPoint: \"x.x.x.x:27017\", ReasonChanged: \"ServerInitialDescription\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", LastHeartbeatTimestamp: null, LastUpdateTimestamp: \"2021-04-14T01:55:57.1876190Z\" }] }.\n at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException (MongoDB.Driver.Core.Clusters.ServerSelectors.IServerSelector selector, MongoDB.Driver.Core.Clusters.ClusterDescription description) [0x00000] in <00000000000000000000000000000000>:0 \n at MongoDB.Driver.Core.Clusters.Cluster+WaitForDescriptionChangedHelper.HandleCompletedTask (System.Threading.Tasks.Task completedTask) [0x00000] in <00000000000000000000000000000000>:0 \n at MongoDB.Driver.Core.Clusters.Cluster+<WaitForDescriptionChangedAsync>d__57.MoveNext () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.InvokeMoveNext (System.Object stateMachine) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ContextCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.RunInternal (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.Run (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.Run () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction (System.Action action, System.Boolean allowInlining, System.Threading.Tasks.Task& currentTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetResult (TResult result) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.TaskFactory+CompleteOnInvokePromise.Invoke (System.Threading.Tasks.Task completingTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetResult (TResult result) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task+DelayPromise.Complete () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task+<>c.<Delay>b__276_1 (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.TimerCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Timer+Scheduler.TimerCB (System.Object o) [0x00000] in <00000000000000000000000000000000>:0 \n at 
System.Threading.WaitCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ThreadPoolWorkQueue.Dispatch () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback () [0x00000] in <00000000000000000000000000000000>:0 \n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.ConfiguredTaskAwaitable+ConfiguredTaskAwaiter.GetResult () [0x00000] in <00000000000000000000000000000000>:0 \n at MongoDB.Driver.Core.Clusters.Cluster+<SelectServerAsync>d__49.MoveNext () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.InvokeMoveNext (System.Object stateMachine) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ContextCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.RunInternal (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.Run (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.Run () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction (System.Action action, System.Boolean allowInlining, System.Threading.Tasks.Task& currentTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageTwo () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.Finish (System.Boolean bUserDelegateExecuted) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetException (System.Object exceptionObject) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[TResult].SetException (System.Exception exception) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException (System.Exception exception) [0x00000] in <00000000000000000000000000000000>:0 
\n at MongoDB.Driver.Core.Clusters.Cluster+<WaitForDescriptionChangedAsync>d__57.MoveNext () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.InvokeMoveNext (System.Object stateMachine) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ContextCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.RunInternal (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.Run (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.Run () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction (System.Action action, System.Boolean allowInlining, System.Threading.Tasks.Task& currentTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetResult (TResult result) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.TaskFactory+CompleteOnInvokePromise.Invoke (System.Threading.Tasks.Task completingTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetResult (TResult result) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task+DelayPromise.Complete () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task+<>c.<Delay>b__276_1 (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.TimerCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Timer+Scheduler.TimerCB (System.Object o) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.WaitCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ThreadPoolWorkQueue.Dispatch () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback () [0x00000] in <00000000000000000000000000000000>:0 \n--- End of stack trace from previous location where exception was thrown ---\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x00000] in <00000000000000000000000000000000>:0 \n at 
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1+ConfiguredTaskAwaiter[TResult].GetResult () [0x00000] in <00000000000000000000000000000000>:0 \n at MongoDB.Driver.MongoClient+<AreSessionsSupportedAfterSeverSelctionAsync>d__55.MoveNext () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.InvokeMoveNext (System.Object stateMachine) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ContextCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.RunInternal (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.Run (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.Run () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction (System.Action action, System.Boolean allowInlining, System.Threading.Tasks.Task& currentTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageTwo () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.Finish (System.Boolean bUserDelegateExecuted) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetException (System.Object exceptionObject) [0x00000] in <00000000000000000000000000000000>:0 \n\n at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[TResult].SetException (System.Exception exception) [0x00000] in <00000000000000000000000000000000>:0 \n at MongoDB.Driver.Core.Clusters.Cluster+<SelectServerAsync>d__49.MoveNext () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.InvokeMoveNext (System.Object stateMachine) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ContextCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.RunInternal (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.Run (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at 
System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.Run () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction (System.Action action, System.Boolean allowInlining, System.Threading.Tasks.Task& currentTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageTwo () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.Finish (System.Boolean bUserDelegateExecuted) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetException (System.Object exceptionObject) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[TResult].SetException (System.Exception exception) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException (System.Exception exception) [0x00000] in <00000000000000000000000000000000>:0 \n at MongoDB.Driver.Core.Clusters.Cluster+<WaitForDescriptionChangedAsync>d__57.MoveNext () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.InvokeMoveNext (System.Object stateMachine) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ContextCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.RunInternal (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ExecutionContext.Run (System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, System.Object state, System.Boolean preserveSyncCtx) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.Run () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Action.Invoke () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction (System.Action action, System.Boolean allowInlining, System.Threading.Tasks.Task& currentTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetResult (TResult result) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.TaskFactory+CompleteOnInvokePromise.Invoke (System.Threading.Tasks.Task completingTask) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishContinuations () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task.FinishStageThree () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task`1[TResult].TrySetResult (TResult result) [0x00000] in <00000000000000000000000000000000>:0 \n at 
System.Threading.Tasks.Task+DelayPromise.Complete () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Tasks.Task+<>c.<Delay>b__276_1 (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.TimerCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.Timer+Scheduler.TimerCB (System.Object o) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.WaitCallback.Invoke (System.Object state) [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading.ThreadPoolWorkQueue.Dispatch () [0x00000] in <00000000000000000000000000000000>:0 \n at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback () [0x00000] in <00000000000000000000000000000000>:0 \n--- End of stack trace from previous location where exception was thrown ---\n....", "text": "And here’s the lengthy stack trace:", "username": "christian_aubert" }, { "code": "", "text": "To me this sounds a lot like its related to security policies on apple devices. Maybe you can find some information about that? Did you enable internet capabilities for your iOS/macOS builds? (I found this for example)", "username": "Stefan_Bock" }, { "code": "", "text": "“Requires Persistent WiFi” and “Allow downloads over HTTP (nonsecure)” was already enabled for iOS build. Double checked that “NSAllowsArbitraryLoads” and “UIRequiresPersistentWiFi” are set in the info.plist.Network connectivity isn’t the issue either. Tried pinging before connecting to the database and I get the same results on all platforms.The fact that it connects to a LAN database but not a WAN one only on iOS/Mac certainly points to security policies but I can’t see which setting might be missing.", "username": "christian_aubert" }, { "code": "", "text": "I stand corrected. The sample project was set to Mono as scripting backend for Android. When I set it to IL2CPP, I get the same failure on Android as I get on Mac and iOS when connecting to a WAN mongodb instance. Furthermore, it won’t connect to a LAN mongodb instance either.", "username": "christian_aubert" }, { "code": "", "text": "I’m sorry to hear that. Unfortunately I have no idea where the problem could be.\nFor me the project worked with a remote (community) server, with PC and UWP builds.", "username": "Stefan_Bock" }, { "code": "", "text": "I am having the same problem. Implemented the MongoDB fully within the editor but when I builded the project to iOS it didn’t work. it returned this error:NotSupportedException: ./External/il2cpp/builds/libil2cpp/icalls/mscorlib/System.Reflection/Module.cpp(112) : Unsupported internal call for IL2CPP:Module::GetPEKind - \"This icall is not supported by il2cpp.\"", "username": "Nicokkam" }, { "code": "", "text": "I had the same error and made a custom build of the driver. You can find it hereFor more explaination what I did to make it work check my previous reply to this topic.", "username": "Stefan_Bock" }, { "code": "", "text": "Stefan,How did you determine what code gets stripped by the stripping process? Maybe I need to force more code to be included in link.xml?", "username": "christian_aubert" } ]
MongoDB and Unity IL2CPP mobile builds
2020-04-08T20:23:00.002Z
MongoDB and Unity IL2CPP mobile builds
10,859
null
[ "indexes" ]
[ { "code": "", "text": "Hello all,I’m facing some difficulties to create a unique index:Example collection with documents:\n{\"_id\":1,“name”:service1,“results”:{“data”:[{“date”:“2021-05-01”,“result”:10},{“date”:“2021-05-02”,“result”:20}]}},\n{\"_id\":2,“name”:service2,“results”:{“data”:[{“date”:“2021-05-01”,“result”:50},{“date”:“2021-05-02”,“result”:40}]}}I want to be able to avoid duplicated dates in each documents. I tried to create a unique index but it doesn’t work.I tried to create some index:\n1 - {‘results.data.date’: 1}, {‘unique’: true}\nI understand why it doesn’t work. There is no distinction between documents.2 - {‘name’:1, ‘results.data.date’: 1}, {‘unique’: true}\nNote: Name value is unique\nBut why this one doesn’t work.For example, I was expecting that the following update will raise a duplicate error message. Because an existing index value should already exist with name:service1 && results.data.date: 2021-05-02.\nThat’s not the case. This update is successful.{’_id’:1}, {\"$set\":{‘results.data’:{‘date’: ‘2021-05-02’, ‘result’: 22}Final document:\n{\"_id\":1,“name”:service1,“results”:{“data”:[{“date”:“2021-05-01”,“result”:10},{“date”:“2021-05-02”,“result”:20},{“date”:“2021-05-02”,“result”:22}]}}What’s wrong with my understanding ?\nHow to create correctly the index for this feature?Thanks for your help.", "username": "Julien_AVON" }, { "code": "", "text": "Hello @Julien_AVON, welcome to the MongoDB Community forum!I want to be able to avoid duplicated dates in each documentsIndexes on array fields are called as Multikey Indexes. The unique index with array fields is only possible across documents - not within the same document.See note on Unique Multikey Index in MongoDB Manual:For unique indexes, the unique constraint applies across separate documents in the collection rather than within a single document.Because the unique constraint applies to separate documents, for a unique multikey index, a document may have array elements that result in repeating index key values as long as the index key values for that document do not duplicate those of another document.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Building unique index on array of embedded documents
2021-05-07T09:09:25.570Z
Building unique index on array of embedded documents
8,837
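Editor's note on the unique-multikey-index thread above: a sketch of the distinction Prasad quotes, plus a common workaround (not from the thread) for the original goal of keeping duplicate dates out of a single document — make the update itself conditional. The collection name is an assumption.

```js
// The unique constraint only applies across separate documents:
db.services.createIndex({ name: 1, "results.data.date": 1 }, { unique: true });

// To prevent duplicate dates inside one document, guard the $push itself —
// the update matches nothing (and so adds nothing) if that date already exists.
db.services.updateOne(
  { _id: 1, "results.data.date": { $ne: "2021-05-02" } },
  { $push: { "results.data": { date: "2021-05-02", result: 22 } } }
);
```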
null
[ "aggregation", "mongoose-odm" ]
[ { "code": "var UserSchema = new mongoose.Schema({\n likes: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Products' }],\n dislikes: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Products' }],\n})\nvar ProductSchema = new mongoose.Schema({\n title: {\n type: String,\n required: true,\n },\n})\nconst user = await User.findById(req.user.id, 'likes dislikes');\nlet seenProducts = [];\n\nuser.likes.forEach(p => {\n seenProducts.push(new mongoose.Types.ObjectId(p));\n})\n\nuser.dislikes.forEach(p=> {\n seenProducts.push(new mongoose.Types.ObjectId(p));\n})\n\nProduct.aggregate([\n {\n $match: { _id: { $nin: seenProducts } },\n }\n])\n$setUnion$setDifference", "text": "I have two collections:I would like to return all of the products that is not in the User.likes or User.dislikes. Here is how I am currently doing it:It works, but I would like to switch over to using the aggregration pipeline framework to do this if possible. It does not seem like it is easy to compare two collections… I have been trying this for a while with no luck. $setUnion and $setDifference look promising, but I can’t find a way to set the union of the likes & dislikes of the user, and then the difference of that with all products.", "username": "joshua_fonseca" }, { "code": "{\n likes: [\n ObjectId(\"6091e8db8fac308fbebd2988\"),\n ObjectId(\"6091e8e18fac308fbebd298a\")\n ],\n dislikes: [\n ObjectId(\"6091e8de8fac308fbebd2989\"),\n ObjectId(\"6091e8e18fac308fbebd298b\")\n ]\n}\n {\n _id: ObjectId(\"6091e8e18fac308fbebd211b\"),\n title: \"hig\"\n },\n {\n _id: ObjectId(\"6091e8db8fac308fbebd2988\"),\n title: \"abc\"\n },\n {\n _id: ObjectId(\"6091e8e18fac308fbebd298b\"),\n title: \"xyz\"\n },\n {\n _id: ObjectId(\"6091e8e18fac308fbebd291b\"),\n title: \"efg\"\n }\ndb.users.aggregate([\n {\n \"$set\": {\n \"likesAndDislikes\": {\n \"$setUnion\": [\n \"$likes\",\n \"$dislikes\"\n ]\n }\n }\n },\n {\n \"$lookup\": {\n \"from\": \"products\",\n let: {\n usersChoice: \"$likesAndDislikes\"\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n {\n $not: [\n {\n $in: [\n \"$_id\",\n \"$$usersChoice\"\n ]\n }\n ]\n },\n \n ]\n }\n }\n },\n \n ],\n \"as\": \"productsNeitherLikedOrDisliked\"\n }\n },\n {\n $project: {\n productsNeitherLikedOrDisliked: 1,\n _id: 0\n }\n }\n])\n", "text": "Hi Joshua,You are on the right track. $setUnion will combine likes and dislikes array, but unfortunately $setDifference won’t work across collections. We have to deploy $lookup stage to compare and get all the products that are not in likes or dislikes array. Something like this - https://mongoplayground.net/p/MNl2icwWEsM. Let me know if you have any questions.users:products:aggregation:", "username": "mahisatya" }, { "code": "", "text": "Hey, Mahisatya, thank you so much for your reply. Makes sense!", "username": "joshua_fonseca" } ]
How do I compare multiple collections in the aggregation framework?
2021-05-03T03:16:29.178Z
How do I compare multiple collections in the aggregation framework?
6,389
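Editor's note on the collection-comparison thread above: a short usage sketch calling the accepted pipeline from the Mongoose models in the question. The `$match` on the current user's `_id` is an addition (the reply runs the lookup for every user document), and the `from: "products"` collection name is assumed to be what Mongoose derives from the `Products` model.

```js
// Sketch: User is the Mongoose model defined in the question above.
const mongoose = require("mongoose");

async function unseenProducts(userId) {
  const [doc] = await User.aggregate([
    { $match: { _id: new mongoose.Types.ObjectId(userId) } }, // only the logged-in user
    { $set: { likesAndDislikes: { $setUnion: ["$likes", "$dislikes"] } } },
    {
      $lookup: {
        from: "products", // assumed collection name behind the Products model
        let: { usersChoice: "$likesAndDislikes" },
        pipeline: [
          { $match: { $expr: { $not: [{ $in: ["$_id", "$$usersChoice"] }] } } },
        ],
        as: "productsNeitherLikedOrDisliked",
      },
    },
    { $project: { _id: 0, productsNeitherLikedOrDisliked: 1 } },
  ]);
  return doc ? doc.productsNeitherLikedOrDisliked : [];
}
```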
null
[ "atlas-functions" ]
[ { "code": "", "text": "Hello,We were using the http package inside a custom function and noticed that the function was throwing errors we could not reproduce locally.\nAfter some investigation we saw that the “createServer” function of the http package is not working.\nIt should be supported inside the node version mongo is using: HTTP | Node.js v10.24.1 Documentation\nAlso it is stated as fully supported inside the MongoDB documentation: https://docs.mongodb.com/realm/functions/built-in-module-support/So there seems to be a bug here.Best Regards,\nDaniel", "username": "Daniel_Bebber" }, { "code": "", "text": "Thanks Daniel - are you able to share details of your function? Or a repo so we can try to re-create and see what’s happening?", "username": "Shane_McAllister" }, { "code": "", "text": "The http(s) objects are supported except for the Server class. We’ve updated the docs accordingly.", "username": "Caleb_Thompson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[BUG] TypeError: 'createServer' is not a function
2021-05-03T13:49:05.715Z
[BUG] TypeError: &lsquo;createServer&rsquo; is not a function
3,856
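Editor's note on the createServer thread above: only the Server class is unsupported — outbound HTTP from a Realm function still works, for example through the built-in `context.http` client, as in the sketch below. The URL is a placeholder.

```js
// Sketch: outbound requests work even though http.createServer() does not.
exports = async function () {
  const response = await context.http.get({ url: "https://example.com/api/status" }); // placeholder URL
  // The response body is binary; decode and parse it.
  return EJSON.parse(response.body.text());
};
```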
null
[]
[ { "code": "Database Rebel", "text": " Congrats to everyone who logged in on May 4th. You’ve been awarded a Database Rebel badge that you can also use as a title. Shout out to @TimSantos, who hooked up the badge logic to award this automatically to past, present, and future Database Rebels every year! You can select from available titles in the Preferences section of your account. You may have more options depending on which badges you have earned or forum groups you are a member of .Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "And, I will continue flashing my Database Rebel title ", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you @Stennie_X and @TimSantos and MongoDB forum for honoring us with greatest badge ", "username": "turivishal" }, { "code": "", "text": "", "username": "Stennie_X" } ]
May the 4th be with you, always!
2021-05-07T11:17:53.571Z
May the 4th be with you, always!
4,744
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "[\n {\n \"$addFields\": {\n \"trigger_time\": {\n \"$convert\": {\n \"input\": \"$trigger_time\",\n \"to\": \"date\",\n \"onError\": null\n }\n }\n }\n },\n {\n \"$match\": {\n \"event_type\": {\n \"$nin\": [\n null,\n \"\",\n \"AC Lost\",\n \"Device Lost\",\n \"logged into Database\",\n \"logged into Nexus Database\",\n \"logged out of Nexus Database\",\n \"Low Battery\"\n ]\n }\n }\n },\n {\n \"$addFields\": {\n \"trigger_time\": {\n \"$cond\": {\n \"if\": {\n \"$eq\": [\n {\n \"$type\": \"$trigger_time\"\n },\n \"date\"\n ]\n },\n \"then\": \"$trigger_time\",\n \"else\": null\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"__alias_0\": {\n \"hours\": {\n \"$hour\": \"$trigger_time\"\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"__alias_0\": \"$__alias_0\"\n },\n \"__alias_1\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"__alias_0\": \"$_id.__alias_0\",\n \"__alias_1\": 1\n }\n },\n {\n \"$project\": {\n \"y\": \"$__alias_1\",\n \"x\": \"$__alias_0\",\n \"_id\": 0\n }\n },\n {\n \"$sort\": {\n \"x.hours\": 1\n }\n },\n {\n \"$limit\": 5000\n }\n]\n[\n {\n \"$addFields\": {\n \"trigger_time\": {\n \"$convert\": {\n \"input\": \"$trigger_time\",\n \"to\": \"date\",\n \"onError\": null\n }\n }\n }\n },\n {\n \"$match\": {\n \"event_type\": {\n \"$nin\": [\n null,\n \"\",\n \"AC Lost\",\n \"Device Lost\",\n \"logged into Database\",\n \"logged into Nexus Database\",\n \"logged out of Nexus Database\",\n \"Low Battery\"\n ]\n },\n \"trigger_time\": {\n \"$gte\": {\n \"$date\": \"2021-03-29T08:35:47.804Z\"\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"trigger_time\": {\n \"$cond\": {\n \"if\": {\n \"$eq\": [\n {\n \"$type\": \"$trigger_time\"\n },\n \"date\"\n ]\n },\n \"then\": \"$trigger_time\",\n \"else\": null\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"__alias_0\": {\n \"hours\": {\n \"$hour\": \"$trigger_time\"\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"__alias_0\": \"$__alias_0\"\n },\n \"__alias_1\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"__alias_0\": \"$_id.__alias_0\",\n \"__alias_1\": 1\n }\n },\n {\n \"$project\": {\n \"y\": \"$__alias_1\",\n \"x\": \"$__alias_0\",\n \"_id\": 0\n }\n },\n {\n \"$sort\": {\n \"x.hours\": 1\n }\n },\n {\n \"$limit\": 5000\n }\n]\n", "text": "Currently can’t figure out why one pipeline works and the other doesn’t. I got both pipelines from MongoDB charts and they both returned something and displaying charts on MongoDBCharts. However, when I use them in my code, only the first pipeline returns something. I used the same data for all cases. Any suggestions would be greatly appreciated!The first one doesn’t filter the last 30 days (hard coded by Mongo), both pipelines are copied from Mongodb charts and are not altered.The second pipeline", "username": "Karen_Cheung" }, { "code": "", "text": "$convert is new in version 4.0\nwhat version are you using in the machine you use when running your code?", "username": "Rafael_Green" }, { "code": "\"trigger_time\": {\n \"$gte\": {\n \"$date\": \"2021-03-29T08:35:47.804Z\"\n }\n", "text": "I have the Mongo community server 4.4 or above. I also use the first pipeline and it works. 
so I don’t understand why the additional $matchwould not work", "username": "Karen_Cheung" }, { "code": "", "text": "@Karen_Cheung\nthis forum is about basic course about the aggregation framework, so I’m not sure if this is the right place to put this question.\ntry asking in the Working with data forum or in the Developers Tools forum.\ngood luck,\nRafael", "username": "Rafael_Green" }, { "code": "mongo> c.find()\n{ \"_id\" : 1, \"date\" : ISODate(\"2021-05-05T13:08:36.217Z\") }\nmongo> d = { \"$date\" : \"2021-03-29T08:35:47.804Z\" }\n{ \"$date\" : \"2021-03-29T08:35:47.804Z\" }\nmongo> q = { \"date\" : { \"$gte\" : d } }\n{ \"date\" : { \"$gte\" : { \"$date\" : \"2021-03-29T08:35:47.804Z\" } } }\nmongo> c.find( q )\nmongo> d = ISODate( \"2021-03-29T08:35:47.804Z\" )\nISODate(\"2021-03-29T08:35:47.804Z\")\nmongo> q = { \"date\" : { \"$gte\" : d } }\n{ \"date\" : { \"$gte\" : ISODate(\"2021-03-29T08:35:47.804Z\") } }\nmongo> c.find( q )\n{ \"_id\" : 1, \"date\" : ISODate(\"2021-05-05T13:08:36.217Z\") }\n", "text": "I think the shell syntax is a little bit different.Rather than { $date : “2021…Z” } you have to use ISODate( “2021…Z” ).", "username": "steevej" }, { "code": "", "text": "Hi Steevej,I am using the pipeline in my node.js, so I’ll have to use something else other than ISODate(), but I have tried changing it to ISODate like new Date(“2021-03-29T08:35:47.804Z”) in javaScript. However, it still doesnt work.", "username": "Karen_Cheung" }, { "code": "", "text": "Thank you… I’ll try posting there as well.", "username": "Karen_Cheung" }, { "code": "\"trigger_time\": {\n \"$gte\": new Date(\"2021-03-29T08:35:47.804Z\"),\n }\n", "text": "I end up solving my own problem. After a bit of digging and asking. Apparently, Node.js does some funny things with Mongodb when it comes to using ‘$date’, that’s why the pipeline didn’t work.The resolve is to remove ‘$date’ and pass in a date object. For my case,Hope it helps other people", "username": "Karen_Cheung" }, { "code": "\"trigger_time\": {\n \"$gte\": new Date(\"2021-03-29T08:35:47.804Z\"),\n }\n", "text": "For the benefit of all of us, what has changed betweenI have tried changing it to ISODate like\nnew Date(“2021-03-29T08:35:47.804Z”)\nin javaScript. However, it still doesnt work.andThe resolve is to remove ‘$date’ and pass in a date object. For my case,and", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why this Mongochart generated aggregation pipeline doesn't work when I implement it
2021-05-04T03:35:00.125Z
Why this Mongochart generated aggregation pipeline doesn&rsquo;t work when I implement it
2,882
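Restating the fix from the thread as a hedged, self-contained Node.js driver snippet: pass a real Date object instead of the Extended-JSON { "$date": ... } form. The database/collection names and the URI are placeholders, and the snippet assumes trigger_time is already stored as a BSON date (the original Charts pipeline converts strings first).

```javascript
// Sketch: the Extended-JSON form { "$date": "..." } is not expanded by the
// Node.js driver inside a pipeline, so pass a real Date object instead.
const { MongoClient } = require('mongodb');

async function countsByHour(uri) {
  const client = new MongoClient(uri); // uri is a placeholder
  try {
    await client.connect();
    const coll = client.db('test').collection('events'); // db/collection names are placeholders

    return await coll
      .aggregate([
        {
          $match: {
            event_type: { $nin: [null, '', 'AC Lost', 'Device Lost', 'Low Battery'] },
            trigger_time: { $gte: new Date('2021-03-29T08:35:47.804Z') }, // not { $date: ... }
          },
        },
        { $group: { _id: { $hour: '$trigger_time' }, count: { $sum: 1 } } },
        { $sort: { _id: 1 } },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```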
null
[ "kafka-connector" ]
[ { "code": "2021-04-12 14:48:34,634 WARN /connectors/mongo-source/config (org.eclipse.jetty.server.HttpChannel) [qtp1386677799-23] \njavax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig \n\tat org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:410) \n\tat org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346) \n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366) \n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319) \n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205) \n\tat org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:763) \n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:563) \n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) \n\tat org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1612) \n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) \n\tat org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434) \n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) \n\tat org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501) \n\tat org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1582) \n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) \n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349) \n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) \n\tat org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234) \n\tat org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179) \n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) \n\tat org.eclipse.jetty.server.Server.handle(Server.java:516) \n\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) \n\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556) \n\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) \n\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273) \n\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) \n\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) \n\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) \n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) \n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) \n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) \n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) \n\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375) \n\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:773) \n\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:905) \n\tat java.base/java.lang.Thread.run(Thread.java:834) \nCaused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class 
com.mongodb.kafka.connect.source.MongoSourceConfig \n\tat org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow(ResponseWriter.java:254) \n\tat org.glassfish.jersey.servlet.internal.ResponseWriter.failure(ResponseWriter.java:236) \n\tat org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:436) \n\tat org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:261) \n\tat org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) \n\tat org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) \n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:292) \n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:274) \n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:244) \n\tat org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) \n\tat org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) \n\tat org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) \n\tat org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394) \n\t... 35 more \nCaused by: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig", "text": "Dear experts,\nrunning Kafka 2.7.0 by the means of Strimzi operator 0.22.1. Facing an issue with MongoDB Source Connector (by the way, MongoDB Sink Connector is working fine) with both Confluent MongoDB Connector 1.5.0 and 1.4.0.\nThx for your support.\nBest Regards.", "username": "Richard_ORich" }, { "code": "[kafka@data-pulse-connect-7896656445-6hkhd kafka]$ curl -X PUT http://localhost:8083/connector-plugins/MongoSourceConnector/config/validate -H \"Content-Type: application/json\" -d '{\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"tasks.max\": \"1\",\n \"topics\": \"test-topic\"\n}'\n{\"error_code\":500,\"message\":\"java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig\"}", "text": "I get a similar reply from Kafka Connect REST interface. The connection.uri is not valid yet since my MongoDB setup is not ready yet. Could it be the root cause of this error ? 
Or is there an error in the uploaded jar file ?\nThx for your help.", "username": "Richard_ORich" }, { "code": " 946 Defl:N 449 53% 03-30-2021 11:16 08037467 com/mongodb/kafka/connect/source/json/formatter/DefaultJson.class\n 449 Defl:N 236 47% 03-30-2021 11:16 abcf0628 com/mongodb/kafka/connect/source/Configurable.class\n 27178 Defl:N 10773 60% 03-30-2021 11:16 3a0b8370 com/mongodb/kafka/connect/source/MongoSourceTask.class\n 12486 Defl:N 5268 58% 03-30-2021 11:16 3b557e42 com/mongodb/kafka/connect/source/MongoCopyDataManager.class\n 29429 Defl:N 10384 65% 03-30-2021 11:16 4b1378af com/mongodb/kafka/connect/source/MongoSourceConfig.class\n 0 Defl:N 2 0% 03-30-2021 11:16 00000000 com/mongodb/kafka/connect/source/topic/\n 0 Defl:N 2 0% 03-30-2021 11:16 00000000 com/mongodb/kafka/connect/source/topic/mapping/\n 268 Defl:N 191 29% 03-30-2021 11:16 479d034a com/mongodb/kafka/connect/source/topic/mapping/TopicMapper.class\n4987 Defl:N 2241 55% 03-30-2021 11:16 c88397f0 com/mongodb/kafka/connect/source/topic/mapping/DefaultTopicMapper.class\n3564 Defl:N 1436 60% 03-30-2021 11:16 9e0601d8 com/mongodb/kafka/connect/source/MongoSourceConfig$1.class\n1356 Defl:N 613 55% 03-30-2021 11:16 d0f294a9 com/mongodb/kafka/connect/source/MongoSourceConfig$OutputFormat.class\n 415 Defl:N 287 31% 03-30-2021 11:16 70a7057b com/mongodb/kafka/connect/Versions.class", "text": "Once I logged into the kafaconnect pod, I can find the class com.mongodb.kafka.connect.source.MongoSourceConfig from unzip -v /opt/kafka/plugin/mongodb-kafka-connect-mongodb-1.5.0/mongo-kafka-1.5.0-all.jar", "username": "Richard_ORich" }, { "code": "Caused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org/apache/avro/Schema", "text": "Here https://mvnrepository.com/artifact/org.mongodb.kafka/mongo-kafka-connect/1.5.0 it shows that it needs avro and mongodb-driver-sync dependencies\nI do not see any org.avro classes in mongo-kafka-1.5.0-all.jar … are you sure all of them are in the -all.jar?\nno output for command jar tvf mongo-kafka-1.5.0-all.jar | grep avroHere one line from error message output:\nCaused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org/apache/avro/Schema", "username": "Richard_ORich" }, { "code": "", "text": "I would like to inform you that I successfully setup my MongoDB Source Kafka Connector once I added avro and mongodb-driver-sync jar files … and also after lots of tuning.", "username": "Richard_ORich" }, { "code": "com.mongodb.kafka.connect.MongoSourceConnectorcat Dockerfile\nFROM quay.io/strimzi/kafka:0.22.1-kafka-2.7.0\nUSER root: root\nRUN mkdir -p / opt / kafka / plugins\nCOPY ./mongo-plugins/ / opt / kafka / plugins\nUSER 1001\nls -lrt mongo-plugins /\ntotal 2972\n-rw-r - r-- 1 users 2310134 May 5 16:49 mongo-kafka-1.5.0-all.jar\n-rw-r - r-- 1 users 137343 May 5 17:09 mongodb-driver-sync-4.2.1.jar\n-rw-r - r-- 1 users 590599 May 5 17:10 avro-1.10.2.jar\n", "text": "com.mongodb.kafka.connect.MongoSourceConnectorI get the same errors.Avro not foundjava.lang.NoClassDefFoundError: org / apache / avro / Schema\nCaused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org / apache / avro / SchemaMongoSourceConfig not foundCaused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig\nCaused by: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfigHow 
have you built the KafkaConnect? Have you created your own image?I have done it with this Dockerfile.At first I only had mongo-kafka-1.5.0-all.jar and then I added these versions of avro and mongodb-driver-sync and they keep giving me the same errors.Could you tell me how you have built the image and what additional tuning have you done?", "username": "Alberto_Jimenez_Loza" }, { "code": "", "text": "Dear Alberto,\nI guess you have the correct jar files. Did you check the target directory /opt/kafka/plugins/ content in the built image ? Either by the means of docker file command line RUN ls -lR /opt/kafka/plugins or by connecting your pod/container built from your image ?\nBest regards. Richard.", "username": "Richard_ORich" }, { "code": "[kafka@mongodb-connect-cluster-dual-connect-79c96cd5f-p64mn kafka]$ ls -lrt /opt/kafka/plugins/\ntotal 2972\n-rw-r--r-- 1 root root 2310134 May 5 14:49 mongo-kafka-1.5.0-all.jar\n-rw-r--r-- 1 root root 137343 May 5 15:09 mongodb-driver-sync-4.2.1.jar\n-rw-r--r-- 1 root root 590599 May 5 15:10 avro-1.10.2.jar \nNAME DESIRED REPLICAS READY\nmongodb-connect-community 1 True \nmongodb-connect-community-connect-7cf7f798d-n7c2q 1/1 Running 0 31m\nkind: KafkaConnect\nmetadata:\n name: mongodb-connect-community\n annotations:\n strimzi.io/use-connector-resources: \"true\"\nspec:\n image: xxxxxxx\n replicas: 1\n bootstrapServers: mycluster-kafka-bootstrap:9093\n tls:\n trustedCertificates:\n - secretName: mycluster-cluster-ca-cert\n certificate: ca.crt\n config:\n config.storage.replication.factor: 1\n offset.storage.replication.factor: 1\n status.storage.replication.factor: 1\n config.providers: file\n config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider\napiVersion: kafka.strimzi.io/v1alpha1\nkind: KafkaConnector\nmetadata:\n name: mongodb-source-connector-community\n labels:\n strimzi.io/cluster: mongodb-connect-community\nspec:\n class: com.mongodb.kafka.connect.MongoSourceConnector\n tasksMax: 1\n config:\n connection.uri: xxxxxxx\n topic.prefix: mongo\n database: test\n collection: ships\n copy.existing: true\n key.converter: org.apache.kafka.connect.json.JsonConverter\n key.converter.schemas.enable: false\n value.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter.schemas.enable: false\n publish.full.document.only: true\n pipeline: >\n [{\"$match\":{\"operationType\":{\"$in\":[\"insert\",\"update\",\"replace\"]}}},{\"$project\":{\"_id\":1,\"fullDocument\":1,\"ns\":1,\"documentKey\":1}}]\nNAME CLUSTER CONNECTOR CLASS MAX TASKS READY\nmongodb-source-connector-community mongodb-connect-community com.mongodb.kafka.connect.MongoSourceConnector 1 \nCaused by: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig\njavax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig\nCaused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig\nCaused by: java.lang.NoClassDefFoundError: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig\n\njavax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org/apache/avro/Schema\n\nCaused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org/apache/avro/Schema\n\nCaused by: 
java.lang.NoClassDefFoundError: org/apache/avro/Schema\n\nCaused by: java.lang.ClassNotFoundException: org.apache.avro.Schema\n\njavax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org/apache/avro/Schema\n\nCaused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: org/apache/avro/Schema\n", "text": "The .jar libraries are in the container in the path. I connect and they are ok.\nThe path is / opt/kafka/plugins, right?\nIs there a jar file missing?KafkaConnector is created ok and it goes to ReadyState.The definition of my KafkaConnect is:The definition of my KafkaConnector is:KafkaConnector is not Ready.I get the error messages that the Mongodbsource and avro libraries cannot be foundWhat am I doing wrong in the procedure?Best Regards\nAlberto", "username": "Alberto_Jimenez_Loza" }, { "code": "", "text": "Could you actually check the missing class in your avro jar file from cmde line: jar tvf avro-1.10.2.jar | grep Schema.classHere is my output:\n1995 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$ArraySchema.class\n498 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$BooleanSchema.class\n490 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$BytesSchema.class\n494 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$DoubleSchema.class\n5075 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$EnumSchema.class\n2869 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$FixedSchema.class\n490 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$FloatSchema.class\n482 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$IntSchema.class\n486 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$LongSchema.class\n1982 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$MapSchema.class\n4401 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$NamedSchema.class\n486 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$NullSchema.class\n8106 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$RecordSchema.class\n1124 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$SerializableSchema.class\n494 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$StringSchema.class\n4136 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema$UnionSchema.class\n33910 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/Schema.class\n510 Tue Mar 09 17:44:24 CET 2021 org/apache/avro/reflect/AvroSchema.class", "username": "Richard_ORich" }, { "code": " 1995 03-09-2021 17:44 org/apache/avro/Schema$ArraySchema.class\n 498 03-09-2021 17:44 org/apache/avro/Schema$BooleanSchema.class\n 490 03-09-2021 17:44 org/apache/avro/Schema$BytesSchema.class\n 494 03-09-2021 17:44 org/apache/avro/Schema$DoubleSchema.class\n 5075 03-09-2021 17:44 org/apache/avro/Schema$EnumSchema.class\n 2869 03-09-2021 17:44 org/apache/avro/Schema$FixedSchema.class\n 490 03-09-2021 17:44 org/apache/avro/Schema$FloatSchema.class\n 482 03-09-2021 17:44 org/apache/avro/Schema$IntSchema.class\n 486 03-09-2021 17:44 org/apache/avro/Schema$LongSchema.class\n 1982 03-09-2021 17:44 org/apache/avro/Schema$MapSchema.class\n 4401 03-09-2021 17:44 org/apache/avro/Schema$NamedSchema.class\n 486 03-09-2021 17:44 org/apache/avro/Schema$NullSchema.class\n 8106 03-09-2021 17:44 org/apache/avro/Schema$RecordSchema.class\n 1124 03-09-2021 17:44 org/apache/avro/Schema$SerializableSchema.class\n 494 03-09-2021 17:44 org/apache/avro/Schema$StringSchema.class\n 4136 03-09-2021 17:44 org/apache/avro/Schema$UnionSchema.class\n 33910 03-09-2021 17:44 org/apache/avro/Schema.class\n 510 
03-09-2021 17:44 org/apache/avro/reflect/AvroSchema.class\n", "text": "jar tvf avro-1.10.2.jar | grep Schema.classMy output is exactly the same:What image do you use in the dockerfile?I use QuayBest Regards\nAlberto", "username": "Alberto_Jimenez_Loza" }, { "code": "", "text": "Same as yours:\nFROM Quay", "username": "Richard_ORich" }, { "code": "mongodb-source-connector-community mongodb-connect-community com.mongodb.kafka.connect.MongoSourceConnector 1 True\n", "text": "have found the fault in my procedure, it was the path where the plugins are placed.\nIt was not / opt /kafka /plugins but as you indicated your /opt/kafka/plugins/mongodb-kafka-connect-mongodb-1.5.0.\nOnce this is done, the connector works ok and goes to the ready state and the messages from the jar avro and source mongodb libraries do not appear.Best Regards\nAlberto", "username": "Alberto_Jimenez_Loza" }, { "code": "2021-05-06 15:05:19,134 INFO WorkerSourceTask{id=mongodb-source-connector-community-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask) [task-thread-mongodb-source-connector-community-0]\n2021-05-06 15:05:19,172 WARN No topic set. Could not publish the message: {\"_id\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fa\"}, \"copyingData\": true}, \"documentKey\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fa\"}}, \"fullDocument\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fa\"}, \"name\": \"USS Enterprise-D\", \"operator\": \"Starfleet\", \"type\": \"Explorer\", \"class\": \"Galaxy\", \"crew\": 750.0, \"codes\": [10.0, 11.0, 12.0]}} (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-mongodb-source-connector-community-0]\n2021-05-06 15:05:19,175 WARN No topic set. Could not publish the message: {\"_id\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fb\"}, \"copyingData\": true}, \"documentKey\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fb\"}}, \"fullDocument\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fb\"}, \"name\": \"USS Prometheus\", \"operator\": \"Starfleet\", \"class\": \"Prometheus\", \"crew\": 4.0, \"codes\": [1.0, 14.0, 17.0]}} (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-mongodb-source-connector-community-0]\n2021-05-06 15:05:19,177 WARN No topic set. Could not publish the message: {\"_id\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fc\"}, \"copyingData\": true}, \"documentKey\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fc\"}}, \"fullDocument\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fc\"}, \"name\": \"USS Defiant\", \"operator\": \"Starfleet\", \"class\": \"Defiant\", \"crew\": 50.0, \"codes\": [10.0, 17.0, 19.0]}} (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-mongodb-source-connector-community-0]\n2021-05-06 15:05:19,178 WARN No topic set. Could not publish the message: {\"_id\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fd\"}, \"copyingData\": true}, \"documentKey\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fd\"}}, \"fullDocument\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fd\"}, \"name\": \"IKS Buruk\", \"operator\": \" Klingon Empire\", \"class\": \"Warship\", \"crew\": 40.0, \"codes\": [100.0, 110.0, 120.0]}} (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-mongodb-source-connector-community-0]\n2021-05-06 15:05:19,183 WARN No topic set. 
Could not publish the message: {\"_id\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fe\"}, \"copyingData\": true}, \"documentKey\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fe\"}}, \"fullDocument\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995fe\"}, \"name\": \"IKS Somraw\", \"operator\": \" Klingon Empire\", \"class\": \"Raptor\", \"crew\": 50.0, \"codes\": [101.0, 111.0, 120.0]}} (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-mongodb-source-connector-community-0]\n2021-05-06 15:05:19,185 WARN No topic set. Could not publish the message: {\"_id\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995ff\"}, \"copyingData\": true}, \"documentKey\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995ff\"}}, \"fullDocument\": {\"_id\": {\"$oid\": \"6093af3dd3115520492995ff\"}, \"name\": \"Scimitar\", \"operator\": \"Romulan Star Empire\", \"type\": \"Warbird\", \"class\": \"Warbird\", \"crew\": 25.0, \"codes\": [201.0, 211.0, 220.0]}} (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-mongodb-source-connector-community-0]\n2021-05-06 15:05:19,188 WARN No topic set. Could not publish the message: {\"_id\": {\"_id\": {\"$oid\": \"6093af3ed311552049299600\"}, \"copyingData\": true}, \"documentKey\": {\"_id\": {\"$oid\": \"6093af3ed311552049299600\"}}, \"fullDocument\": {\"_id\": {\"$oid\": \"6093af3ed311552049299600\"}, \"name\": \"Narada\", \"operator\": \"Romulan Star Empire\", \"type\": \"Warbird\", \"class\": \"Warbird\", \"crew\": 65.0, \"codes\": [251.0, 251.0, 220.0]}} (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-mongodb-source-connector-community-0]\n2021-05-06 15:05:19,189 WARN No topic set. Could not publish the message: {\"_id\": {\"_id\": {\"$oid\": \"6093bf4dd311552049299601\"}, \"copyingData\": true}, \"documentKey\": {\"_id\": {\"$oid\": \"6093bf4dd311552049299601\"}}, \"fullDocument\": {\"_id\": {\"$oid\": \"6093bf4dd311552049299601\"}, \"name\": \"USS Enterprise-D\", \"operator\": \"Starfleet\", \"type\": \"Explorer\", \"class\": \"Galaxy\", \"crew\": 750.0, \"codes\": [10.0, 11.0, 12.0]}} (com.mongodb.kafka.connect.source.MongoSourceTask) [task-thread-mongodb-source-connector-community-0]\nkind: KafkaConnector\nmetadata:\n name: mongodb-source-connector-community\n labels:\n strimzi.io/cluster: mongodb-connect-community\nspec:\n class: com.mongodb.kafka.connect.MongoSourceConnector\n tasksMax: 1\n config:\n connection.uri:xxxxxxxxxxxx\n topic.prefix: mongo\n database: test\n collection: ships\n copy.existing: true\n key.converter: org.apache.kafka.connect.json.JsonConverter\n key.converter.schemas.enable: false\n value.converter: org.apache.kafka.connect.json.JsonConverter\n value.converter.schemas.enable: false\n publish.full.document.only: true\n pipeline: >\n [{\"$match\":{\"operationType\":{\"$in\":[\"insert\",\"update\",\"replace\"]}}},{\"$project\":{\"_id\":1,\"fullDocument\":1,\"ns\":1,\"documentKey\":1}}]\n", "text": "Once the KafkaConnector works, I have seen in the Kafkaconnect logs that it connects to the MONGODB DB, but it gives this WARN No topic set error message. 
Could not publish the message.It seems that it connects to the MONFGO DB, but then it cannot copy it to a topic because it is not defined.The topic in Kafka is supposed to be built with topic_prefix.database.collection and in my case it is mongo.test.ships.\nThis topic is created on the fly or it is necessary to create it previously.Did you have this error?Am I missing something in the KafkaConnector yaml?Best regards\nThanks\nAlberto", "username": "Alberto_Jimenez_Loza" }, { "code": "", "text": "Sorry, I have checked the operation and everything works OK. Both the Source and SINK connectors work OK.\nThank you", "username": "Alberto_Jimenez_Loza" } ]
Kafka connector: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig
2021-04-12T15:02:41.156Z
Kafka connector: Could not initialize class com.mongodb.kafka.connect.source.MongoSourceConfig
8,803
https://www.mongodb.com/…a_2_1023x254.png
[ "server", "installation" ]
[ { "code": "", "text": "\n27(HQ)2XMWWK`5SG}YM11243×309 43.5 KB\n", "username": "zhao_qi" }, { "code": "mongodmongod", "text": "Hello @zhao_qi, welcome to the MongoDB Community forum!The image you had posted is the logging from the startup of the mongod process. You can open another console and connect with mongo shell or start the GUI tool like Compass and work with the database.[EDIT ADD]You can specify the logpath parameter to the mongod and the log data can directed to a log file - then you wont see it interactively.", "username": "Prasad_Saya" } ]
Mongod --dbpath error
2021-05-07T09:25:53.849Z
Mongod &ndash;dbpath error
2,992
null
[ "python", "crud" ]
[ { "code": "myCollection = [\n { \n _id: '5654xxx',\n id: 1,\n name: \"Troy\",\n additional: \"some\",\n day: \"21 Apr, 2021\"\n },\n { \n _id: '5655xxx',\n id: 2,\n name: \"Thor\",\n day: \"25 Apr, 2021\"\n }\n]\ncursor = myCollection.update_many(\n {\n additional: { '$exists': True } # to update on specific document which has **additional** key\n },\n {\n '$set': { \n 'date': '$day', # here find **day** key from matched document and map to **date** as new key\n 'newKey': 'newValue' # adding **newKey** with **newValue** in the same document\n }\n },\n upsert=False\n)\n[\n { \n _id: '5654xxx',\n id: 1,\n name: \"Troy\",\n additional: true,\n date: \"$day\", # here expecting to map with **day** value which is \"25 Apr, 2021\" but its mapped with $day \n day: \"25 Apr, 2021\",\n newKey: \"newValue\" # it is as expected \n },\n { \n _id: '5655xxx',\n id: 2,\n name: \"Thor\",\n day: \"25 Apr, 2021\"\n }\n]\n", "text": "Hi Team,I stuck on finding solution when update_many with existing document key to set new document key and merge it in the same collection. (Mongo v4.0)Scenario:1 Used pymongo in python service to update2 my collection is3 Tried firing query using pymongo as given below,4 From above 3rd setup the cursor executed and checked in collection, it will shown as,5 In above what expecting the $day value but its mapped as it is without document value scope.Please suggest what else can do here or I missed something here.Thanks.", "username": "Jitendra_Patwa" }, { "code": "", "text": "Hi @Jitendra_PatwaThanks for raising this question, can you confirm which lesson in M220P this relates and can you provide the outputs of the pytest where you might be encountering an issue.If this isn’t related to M220P,. I’d suggest moving this to the Working with Data category as you can find more assistance within that section to help answering your question.Kindest regards,\nEoin", "username": "Eoin_Brazil" } ]
Update and add new key based on existing document key
2021-05-06T15:36:01.049Z
Update and add new key based on existing document key
12,195
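The '$day' value in the question is stored literally because the update document is a classic update, not an aggregation pipeline. On MongoDB 4.2+ the update can be written as a pipeline (square brackets), which lets "$day" be evaluated against each matched document; on 4.0, as in the question, field-to-field copies still need a client-side loop or a server upgrade. The mongosh sketch below assumes a collection named myCollection.

```javascript
// mongosh sketch — requires MongoDB 4.2+ (update with an aggregation pipeline).
// The pipeline form evaluates "$day" per matched document instead of storing the string.
db.myCollection.updateMany(
  { additional: { $exists: true } },                 // only documents that have `additional`
  [
    { $set: { date: "$day", newKey: "newValue" } }   // copy day -> date, add newKey
  ]
);
```

In PyMongo the same pipeline is passed as a Python list of dicts in place of the plain update document.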
null
[ "graphql", "api" ]
[ { "code": "watch()", "text": "I would like to use GraphQL, but I also want to use watch() (How to use watch() on web SDK? - #2 by kraenhansen), but if I’m going to set up the Mongo web API anyway, it seems a little silly to set up everything for GraphQL as well… am I missing something?", "username": "Ted_Hayes" }, { "code": "", "text": "My personal opinion is that GraphQL comes into its own when your web app has to work with multiple data sources. If all of those data sources provide a GraphQL API then it can simplify your code to use GraphQL for everything. If MongoDB was my only data source then I’d use the Realm SDK.", "username": "Andrew_Morgan" }, { "code": "", "text": "Thanks for the reply! That makes sense. One thing that occurred to me is that Apollo client offers a lot more than just the GraphQL connection; if I understand correctly it would basically replace Redux, for example, whereas with the Realm SDK I’d be implementing my own client-side caching etc. Does that make sense?Regardless of that though, my main question is if it’s necessary to use both in order to replicate the missing GraphQL subscription feature.", "username": "Ted_Hayes" }, { "code": "", "text": "I think that your hybrid approach is the way to go if you want to use GraphQL and need a way to receive notifications. You may want to up-vote this feature request to add subscriptions to MongoDB’s GraphQL service https://feedback.mongodb.com/forums/923521-realm/suggestions/40640038-add-graphql-subscriptions", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it silly to use both GraphQL and Mongo API?
2021-05-05T18:08:13.205Z
Is it silly to use both GraphQL and Mongo API?
4,306
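For context on the hybrid approach discussed above, here is a minimal sketch of the change-notification side using the Realm Web SDK's collection.watch(); the App ID, linked data-source name, database, and collection names are placeholders, not values from the thread.

```javascript
// Sketch: receiving change notifications via the Realm Web SDK while GraphQL
// (or anything else) handles the reads. App ID / service / db / collection are placeholders.
import * as Realm from 'realm-web';

async function watchTasks() {
  const app = new Realm.App({ id: 'myapp-abcde' });            // placeholder App ID
  const user = await app.logIn(Realm.Credentials.anonymous()); // assumes anonymous auth is enabled

  const collection = user
    .mongoClient('mongodb-atlas')   // linked data source name (placeholder)
    .db('todo')
    .collection('tasks');

  // collection.watch() yields an async iterator of change events
  for await (const change of collection.watch()) {
    console.log(change.operationType, change.fullDocument);
  }
}
```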
null
[ "atlas-device-sync", "capacity-planning" ]
[ { "code": "", "text": "Hello!We have some concerns about the availability of the Sync service around the world (namely in mainland China). Could you clarify them?Consider the following deployment example:A user is in mainland China and tries to use the mobile app.How the mobile app does service discovery? As far as I understand, there is a worldwide service for it and the whole process looks like the following:Do I understand right? If I do, is that global endpoint actually available from mainland China (we know about their Great Firewall)? Do you do something to keep working if they block the global endpoint some day (maybe accidentally, maybe intentionally)? What if the https://very-long-url-that-points-to-the-cluster-in-Ireland is blocked? Do you do something about it?We are really interested in this topic. Actually, we discard using Google Firebase because it is not available in mainland China. Now we are considering MongoDB Realm and Couchbase. With Couchbase everything is clear. One just installs it where he needs and passes its url to the mobile application.Could you, please, clarify how you solve (or are going to solve) this case?\nThank you in advance!", "username": "111463" }, { "code": "", "text": "@111463 So we generally recommend deploying a local app and placing your Atlas & RealmSyncApp in the same region as Realm Sync for performance reasons - https://docs.mongodb.com/realm/sync/#best-practicesIn the case of China, the closest region would be Singapore - https://docs.mongodb.com/realm/admin/deployment-models-and-regions/If you needed a Global app then the way it works is that the client first figures out which region to connect to first based on latency and will connect and begin transferring sync data. In the case of Chinese users, they will most likely connect to Sydney. Unfortunately, there is nothing Realm can do if you are blocked by the Great Firewall of China.", "username": "Ian_Ward" }, { "code": "", "text": "Thank you, got it. Could you please give some more details so that we can understand the reliability of the whole system?How does the client technically figure out which region to connect to? Is there a single global endpoint that the client asks and the endpoint answers with the list of all regions’s endpoints? (for example global-endpoint.realm.mongodb.com)\nOr does the client hold a list of all endpoints around the world and try them one by one?", "username": "111463" }, { "code": "", "text": "Yes it is a single endpoint that checks latency and client location", "username": "Ian_Ward" }, { "code": "", "text": "Got it. Thank you! Excuse my persistence, but could you please tell if you intentionally check this endpoint on availability from different regions around the world (and in China, of course).Why am I asking? There are two big differences:As you can suppose, when one is picking a platform for building his international business, these differences are really different for him)", "username": "111463" } ]
Sync availability around the world
2021-05-05T15:21:11.151Z
Sync availability around the world
3,589
null
[ "data-modeling", "indexes" ]
[ { "code": "idnametimezoneidcompanyidreviewerrevieweescorecompanyidcompanyidcompanyid", "text": "Hello there - new to this community but love how active it seems and looking forward to learn more.I’ll keep it short, i’m looking for a good model that will fit my requirements. Will give short example of the bottlenecks i’m experiencing.I have 2 ‘tables’, Company and ReviewsCompany has id, name, timezone\nReviews has id, companyid, reviewer, reviewee, score, etc.To keep it simple, assume i have >20k rows in Reviews.\nI am currently, in my application, querying through the Reviews and filtering by the companyid at runtime when a user lands on a page.It is taking ~16 seconds to query and get that data (around 5-6k entities) - I am using google cloud Datastore and transitioning now to mongodb thus looking for best practices on this.Thanks,\nMihai", "username": "Mihai_Oprescu" }, { "code": "", "text": "Hi @Mihai_Oprescu,Welcome to MongoDB community.It sounds as a company might have a large number of reviews (more than thousnds).Therefore it doesn’t make sense to embed those in the company collection (doc size is 16mb max).Storing the reviews in its own collection make sense and indexing the companyid as the relationship value is also goodIf you query the reviews with a desc date you should add this field with -1 direction into the index.It might be worth considering a partitioning strategy per month but I have a question.How is the data presented in the application, do you need all reviews in a single page or just a batch of first documents?What you might do is fetch only X first documents per company and when a user clicks on more get the next from the cursor. Or do sort by _id and limit , performing a $gt and limit for next batches…You can maintain a totalReviews field in the company document incrementing its value each review so users can still get how many reviews there are without the need to count them over and over.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel!Yes, the bigger the company, the more reviews are coming in. I’m actually (pleasently?) surprised by how fast it’s growing but at the same time, some fires are here to be put out I will clarify that the reviews are currently in their own collection and I perform queries on the reviews collection.I’m really glad you brought up the indexing advice. thanks for that ^.Usually, there is a 3 month default range for which the reviews are needed and the user has quick date range options (6 months, 1 year, All Time)image2464×118 15 KBI like the partitioning strategy per month. Furthermore, I was thinking of having user + month partitioning of reviews. For example, I will store a user’s reviews for january in collection user_12345_01 (user_:userid_:month)\nThoughts on that? If i rewrite the model, I might as well try to go for one which I won’t have to rewrite in 6-12 months.Using mongo with nodejs, does the operation .find(…) on an index actually lookup by hash? I clearly need to get more intimately familiar with implementation details of mongodb.Yes, the reason why all reviews for those date ranges need to be queried at once is because there are scores calculated using them.As an immediate solution, I’ve added an in-memory caching for the functions which are bottle-necking right now.Thanks Pavel!", "username": "Mihai_Oprescu" }, { "code": "", "text": "If i create a (dynamic) collection for each user for each month, am i going to create too many indexes and then everything gets slow? 
for each user for each month sort of seems overkill, but I’d like to know the cons of it. A collection per company per month might be a better choice.", "username": "Mihai_Oprescu" }, { "code": "", "text": "Hi @Mihai_Oprescu,\nYes, creating a collection per user per month sounds like an antipattern, as you will have too many collections. Plus I don’t think that a user only needs to see its own reviews… So that doesn’t sound like a good way to store the data. What do you think about a database per company storing only company-specific reviews, or a collection per company’s reviews with a company id prefix? This sounds like reviews are strongly coupled with a company…\nNow why wouldn’t you use a $merge materialized view that will pre-calculate the scores every x minutes, so that the user can query against the prepopulated score… You can actually index the score field to show top scores, and then paging makes sense.\nMongoDB identifies a query shape, and the best index follows the order of Equality, Sort and Range fields. So every field which is used as an equality should come first, then any sort field, and finally the range filters…\nIndexes are btrees, which are sorted, so finding an equality is considerably fast if its cardinality is low. So is sorting, which is just advancing in the index.\nThanks\nPavel", "username": "Pavel_Duchovny" } ]
Modal advice, indices and transitioning from Datastore to Mongodb
2021-05-05T22:18:25.539Z
Modal advice, indices and transitioning from Datastore to Mongodb
1,802
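The Equality–Sort–Range advice above, condensed into a hedged mongosh sketch; the companyid field comes from the question, while createdAt and the example values are assumed names for illustration.

```javascript
// mongosh sketch — compound index following Equality (companyid), Sort (createdAt desc)
// ordering from the thread. Field names other than companyid are assumptions.
db.reviews.createIndex({ companyid: 1, createdAt: -1 });

// A page of recent reviews for one company: the filter hits the equality prefix
// and the sort is satisfied by the index (no in-memory sort).
db.reviews
  .find({ companyid: 12345 })          // placeholder company id
  .sort({ createdAt: -1 })
  .limit(50);

// Adding a date range (e.g. "last 3 months") still uses the same index,
// since the range and the sort are on the same field.
db.reviews
  .find({ companyid: 12345, createdAt: { $gte: new Date("2021-02-01") } })
  .sort({ createdAt: -1 });
```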
null
[]
[ { "code": "", "text": "We have some microservices which are using C# mongo driver and Node.js mongo driver. I found online (https://docs.mongodb.com/drivers/node/faq/) that we can set socketTimeoutMS and connectTimeoutMS.Is it good idea to set these two for both .NET and node applications ? I am just concerned that if we don’t set those, the connections can stay there for a long time and number of connections might increase over the time which might cause the performance issues.Please help!Thanks\nJW", "username": "Jason_Widener1" }, { "code": "", "text": "Hi @Jason_Widener1,Usually the defaults on the driver side should be good for most workloads.However, if you see issues with connections while using latest driver you can test changing those.A healthy connection pool according to our best practices should reuse connections and therefore connections are expected to be mostly stableThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Sounds good. Thank you.", "username": "Jason_Widener1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Adding timeout for mongoDB
2021-05-06T04:33:47.586Z
Adding timeout for mongoDB
4,129
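For reference, a sketch of where those options live in the Node.js driver (4.x option names); the values are purely illustrative — as noted in the thread, the defaults are usually fine.

```javascript
// Sketch only — illustrative values, not recommendations.
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017', { // placeholder URI
  connectTimeoutMS: 10000,  // how long to wait while establishing a connection
  socketTimeoutMS: 45000,   // how long an inactive socket may sit on an operation
  maxPoolSize: 100,         // upper bound on pooled connections (the driver default)
});

async function main() {
  await client.connect();
  const ping = await client.db('admin').command({ ping: 1 });
  console.log(ping);
  await client.close();
}

main().catch(console.error);
```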
null
[ "app-services-user-auth", "realm-web" ]
[ { "code": "", "text": "Hi there,\nI could not find documentation for how to remove user by webSDK. Lets say a user wants to DELETE his/her user account. is there a realm function for this? Kindly tell me a safe right way of doing this.Second question I also wander how can a user deactivate their account for a period of time but not delet it. (From webSDK)Notice I dont mean deleting the user form Realm UI. I know this.Kind regards,\nBehzad Pashaie", "username": "Behzad_Pashaie" }, { "code": "", "text": "Behzad - Did you get the answer you needed in your other post?", "username": "Shane_McAllister" } ]
Remove/Delete user account from webSDK
2021-04-08T17:26:47.828Z
Remove/Delete user account from webSDK
2,366
null
[ "aggregation", "crud" ]
[ { "code": "", "text": "I know that MongoDB uses IX, IS, X, S level locks.Updating the document uses IX locks at the collection level.Since version 4.2, pipeline aggregation can be used for updates.When doing a single document update (like using _id in the filter), does mongodb use IX lock at the collection level even if using the aggregation pipeline, not X lock at the collection level?If used, can it be guaranteed that the update can happen atomically even if it goes through each $set stage?(For example, while going through two $set stages, does another update request that modifies the same document wait in a pending state, or can it update between each $set stages?)", "username": "111479" }, { "code": "", "text": "Hello @111479, welcome to the MongoDB Community forum!When doing an update on a single document the operation is atomic - irrespective of it using the update operators or the update with aggregation pipeline. In fact, all write operations on a single document are atomic: see Update - Atomicity", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lock with update aggregation pipeline
2021-05-06T15:37:37.783Z
Lock with update aggregation pipeline
3,573
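A small mongosh illustration of the atomicity answer above: both $set stages below are applied to the matched document as a single atomic write, so no other writer or reader can observe a state between the stages. The collection and field names are invented for the example (requires MongoDB 4.2+ for pipeline updates).

```javascript
// mongosh sketch (MongoDB 4.2+): a two-stage pipeline update on one document.
// Other operations never observe the state "after stage 1, before stage 2".
db.accounts.insertOne({ _id: 1, balance: 100 });

db.accounts.updateOne(
  { _id: 1 },
  [
    { $set: { balance: { $add: ["$balance", 50] } } },   // stage 1: 100 -> 150
    { $set: { lastBalance: "$balance" } }                // stage 2 sees stage 1's result
  ]
);
// Result: { _id: 1, balance: 150, lastBalance: 150 } — written atomically.
```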
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.4.6-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.5. The next stable release 4.4.6 will be a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.4.6-rc0 is released
2021-05-06T15:14:37.996Z
MongoDB 4.4.6-rc0 is released
2,692
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.14 is out and is ready for production deployment. This release contains only fixes since 4.2.13, and is a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.14 is released
2021-05-06T15:05:31.187Z
MongoDB 4.2.14 is released
2,269
null
[ "queries" ]
[ { "code": "", "text": "My sample table contains data as follows\n{ “_id” : 6, “Last_Name” : “Brady”, “First_Name” : “Eoin”}\n{ “_id” : 1, “Last_Name” : “Curran”, “First_Name” : “Kevin”}\n{ “_id” : 0, “Last_Name” : “Curran”, “First_Name” : “Sam”}\n{ “_id” : 7, “Last_Name” : “Hazzard”, “First_Name” : “Aidan”}\n{ “_id” : 5, “Last_Name” : “Kelly”, “First_Name” : “Ciaran”}\n{ “_id” : 8, “Last_Name” : “Kholi”, “First_Name” : “Virat”}\n{ “_id” : 3, “Last_Name” : “Morgan”, “First_Name” : “Eden”}\n{ “_id” : 2, “Last_Name” : “Morgan”, “First_Name” : “Eoin”}\n{ “_id” : 4, “Last_Name” : “Pollard”, “First_Name” : “Kevin”}\n{ “_id” : 9, “Last_Name” : “Sharma”, “First_Name” : “Rohit”}I am trying to get all documents whose last name less than or equal to C. But it does not bring data starting with “C”.\ndb.sar.find({“Last_Name”:{$lte:“C”}})\n{ “_id” : 6, “Last_Name” : “Brady”, “First_Name” : “Eoin”}\nIt does not return documents with “C” as last name why?It works fine with $gte.\ndb.sar.find({“Last_Name”:{$gte:“C”}})\nit brings last name starting with C and others.\nThanks in advance\nSarma", "username": "Sarma_Bhamidipati" }, { "code": "> db.C.insert( { \"_id\" : \"C\" } )\nWriteResult({ \"nInserted\" : 1 })\n> db.C.insert( { \"_id\" : \"Ci\" } )\nWriteResult({ \"nInserted\" : 1 })\n> db.C.insert( { \"_id\" : \"D\" } )\nWriteResult({ \"nInserted\" : 1 })\n> db.C.find().sort( { \"_id\" : 1 } )\n{ \"_id\" : \"C\" }\n{ \"_id\" : \"Ci\" }\n{ \"_id\" : \"D\" }\n{\"Last_Name\":{$gte:\"C\"}}{\"Last_Name\":{$lt:\"D\"}}", "text": "It does not return documents with “C” as last name why?Most likely because any name other than the letter C itself is considered greater than C. SeeSo you should try with{\"Last_Name\":{$gte:\"C\"}}\n{\"Last_Name\":{$lt:\"D\"}}And thinking about ifdb.sar.find({“Last_Name”:{$gte:“C”}})\nit brings last name starting with C and others.it then confirm what I wrote above any name other than the letter C itself is considered greater than C.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,I have already tried that and it is working.My question is that why it does not bring data when I say $lte : “C”. (less than or equal to C). I am expecting it to bring data for C as well. This works for numeric data but it is not working for string data.\nAny thoughts on this please?", "username": "Sarma_Bhamidipati" }, { "code": "", "text": "Look at the output of my sort above. The name Ci comes after C so it is bigger. And think about it, $lte CANNOT select the same set than $gte EXCEPT for the one that are equals.This works for numeric data but it is not working for string data.Please show me a number that is both $gte and $lte to another one but that is not equal.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,Probably I did not explain my question correctly.Issue : if I run below commanddb.test.find({“name”:{$lte: \"C}}It is not returning any names starting with “C”.$lte : less than or equal right?This is my issue?Thanks\nSarma\nPlease let me know your thoughts.", "username": "Sarma_Bhamidipati" }, { "code": "", "text": "Probably I did not explain my question correctly.No you do. But you do not understand my answer. Look back at my sorted output. The string Ci, or any string that starts with C, comes after C so they are $gte. Since they are not equal they are $gt.$lte : less than or equal right?Yes. 
But if Ci, as in my example above, is $gte to C, then, since it is not equal to C, it cannot be less than or equal to C.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,\nYes, I could follow your example. Thanks for that information.", "username": "Sarma_Bhamidipati" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issues with $gte and $lte for string comparision
2021-05-06T10:42:55.720Z
Issues with $gte and $lte for string comparision
8,358
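The lexicographic comparison rule from the thread, condensed into runnable mongosh queries against the sar collection used in the question.

```javascript
// mongosh sketch: strings compare lexicographically, so "Brady" < "C" < "Curran".
// To include every Last_Name up to and including those starting with "C",
// use an exclusive upper bound on the next letter instead of $lte: "C".
db.sar.find({ Last_Name: { $lt: "D" } }).sort({ Last_Name: 1 });

// Only last names that start with "C":
db.sar.find({ Last_Name: { $gte: "C", $lt: "D" } });
```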
null
[ "replication", "security" ]
[ { "code": "", "text": "Hi Team,Not able to execute Mongo health check commands like rs.status() and replSetGetStatus commands for Mongo read only user . Is there any alternative to do health check for replica sets using read only user? Kindly suggest.Reagards,\nwork4mongo", "username": "390ed733ef3432fd811d" }, { "code": "", "text": "Hi @390ed733ef3432fd811dThe built-in Cluster Monitor role is perfect for this.Provides read-only access to monitoring tools, such as the MongoDB Cloud Manager and Ops Manager monitoring agent.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not able to execute status for replica sets
2021-05-06T10:43:04.325Z
Not able to execute status for replica sets
1,788
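A mongosh sketch of granting that role; the username, password handling, and the extra database name are placeholders.

```javascript
// mongosh sketch — run as a user administrator. Credentials are placeholders.
db.getSiblingDB("admin").createUser({
  user: "healthcheck",
  pwd: passwordPrompt(),                       // prompts instead of hard-coding a password
  roles: [
    { role: "clusterMonitor", db: "admin" },   // allows rs.status(), serverStatus(), etc.
    { role: "read", db: "myAppDb" }            // optional read-only data access (placeholder db)
  ]
});

// Connected as that user, the health check now works:
rs.status();
```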
null
[ "replication" ]
[ { "code": "# systemctl stop mongod\n# systemctl start mongod\n", "text": "Dear all,I’m to write some operating manual for an application relying on MongoDB for its database. As far as my simple tests are concerned (on a single node), stopping and starting MongoDB on Linux is done with the “systemctl” command:But the customer I’m writing this manual for is using a 3 nodes Replica Set cluster (one primary and one secondary node on Linux and one arbiter on Windows). Is the procedure still the same in that case ? I.e stopping and starting MongoDB on each server ? And is the order important ?Best regards,Samuel", "username": "Samuel_VISCAPI" }, { "code": "", "text": "The operation is the same of the 2 linux nodes but you have to adjust for Windows.", "username": "steevej" } ]
How to stop / start a 3 nodes Replica Set cluster?
2021-05-06T13:06:48.095Z
How to stop / start a 3 nodes Replica Set cluster?
2,251
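A hedged addition to the answer above: the systemctl commands are unchanged on the two Linux data-bearing nodes, but it is common practice to step the primary down first so the failover happens deliberately rather than as an apparent failure. The snippet below is mongosh, run against the current primary; the timing value is illustrative.

```javascript
// mongosh sketch, run against the current primary before `systemctl stop mongod`.
// Asks the primary to relinquish its role for up to 120 seconds, so the shutdown
// does not look like an unplanned failure to the application.
rs.stepDown(120);

// After maintenance, once all members are started again, verify the set:
rs.status().members.forEach(m => print(m.name, m.stateStr));
```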
null
[ "configuration" ]
[ { "code": "replica1:PRIMARY> db.adminCommand( { \"setParameter\": 1, \"wiredTigerEngineRuntimeConfig\": \"allocation_size=64KB\"}) { \"ok\" : 0, \"errmsg\" : \"WiredTiger reconfiguration failed with error code (22): Invalid argument\", \"code\" : 2, \"codeName\" : \"BadValue\"} replica1:PRIMARY> db.createCollection( \"users\", { storageEngine: { wiredTiger: { configString: \"allocation_size=64KB\" } } } ) { \"ok\" : 0, \"errmsg\" : \"22: Invalid argument\", \"code\" : 2, \"codeName\" : \"BadValue\"}\n", "text": "Hi, on this link WiredTiger: Tuning page size and compression there is a note that allocation_size can be tuned between 512B and 128 MB How do we modify that variable and start mongod process that will have allocation_size of 16KB for example, the default is 4KB ?This does not work", "username": "Al_Tradingsim" }, { "code": "", "text": "Hi, does anyone have any ideas? Thanks in advance!!", "username": "Al_Tradingsim" }, { "code": "", "text": "What version of mongodb are you using?\nLooks like parameter value is not in required format\nPlease check mongod.log around the time when you ran this command.It may give more details\nSome docs suggest the value shoud be power of 2\nInstead of giving 64kb try 64X1024\nmongo documentation does not give much details but referring to WT docWord of caution as per mongo docWARNING\nAvoid modifying the wiredTigerEngineRuntimeConfig unless under the direction from MongoDB engineers as this setting has major implication across both WiredTiger and MongoDB.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi, Ramachandra, we are using v3.6.", "username": "Al_Tradingsim" }, { "code": "", "text": "What does your mongod.log say?\nDid you try with value 65536 (64X1024)", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I tried that and it’s the same error. I don’t see anything in the error log that can point me in some direction.", "username": "Al_Tradingsim" }, { "code": "", "text": "Hi @Al_TradingsimWhat is the motivation to change this parameter? Are you seeing issues that necessitates changing it?Note that changing internal WiredTiger parameters are not supported nor encouraged, since the defaults were designed and tested for the vast majority of use case. Changing the allocation size could put the deployment into an untested territory.If you’re having certain issues, please describe the issue in more details, along with your MongoDB versions, and what attempts (other than changing WiredTiger parameters) that you have tried.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank you for your response KevinWe are working on a dynamic market scanner. It lets users to define and run custom dynamic scans across our data set. Scanners are filters built upon a set of primitives, (i.e. last price or previous day closing price) but also functions’ result like total volume, that is the sum of all the trades’ size of the day up to the current timestamp.MongoDB’s aggregation pipeline seems to be the perfect match for this new feature, because it can express many of the primitives we need without precomputing the values, which is an essential requirement for a dynamic scanner.So far we found that simple primitives like closing prices are pretty fast, as they essentially need just a lookup across the symbols on a given timestamp. 
Unfortunately this is not the case for aggregated primitives, like the total volume, that has to scan thousands rows of the selected symbols in that day.We tried different setups, and we found that nested documents are faster than flat ones, because they need less disk access. Disk is obviously playing a big role here, and for that reason we have pretty fast and expensive NVMEs disk. We ran a set of benchmarks to test disk performances, and we found that our NVME disks are able to match up the memory bandwidth when reading blocks with a 512KB size : 5,5GB/s vs 8,5 GB/s.This should mean that sequential reads from disk can be as fast as memory, and for our scenario means we should be able to read 3 GB of uncompressed data in almost half a second. It turns out that Mongo is way slower than that.\nWhile exploring the issue, we found that Mongo is actually allocating blocks at 4KB (wiredTiger.block-manager.file allocation unit size).\nSo, we tried the same disk benchmarks with the same block size, assuming this is the block size mongo is reading from the disks. The benchmarks show a 850MB/s maximum bandwidth, ~7 time less than the optimum. This matches what we are seeing on our mongo benchmarks: the aggregation pipeline is 6 time faster on nested documents than on unwound flat ones.So, we are wondering if we can improve the overall mongodb performances by increasing the wiredtiger file allocation unit size to 512K, matching the optimum block size of our disk benchmarks. Is that possible? Is there any other tricks to achieve the NVMEs maximum read speed from Mongo?Let me know what you think and feel free to ask me any more detail.", "username": "Al_Tradingsim" }, { "code": "", "text": "And to answer your question regarding Mongo version, We are on Mongo 3.6", "username": "Al_Tradingsim" }, { "code": "", "text": "Hi @Al_TradingsimI think you have done an impressive amount of work in figuring out how the disk performed with various block sizes.Having said that, there may be further optimizations that could be done on your schema & query design that may or may not necessitate tuning internal knobs. I would suggest to explore all possible optimization avenues (query, indexing, schema design, etc.) before turning into WiredTiger allocation sizes, as this is the riskiest approach and may lead to unintended consequences. Is this something that is possible in your use case?Unfortunately this is not the case for aggregated primitives, like the total volume, that has to scan thousands rows of the selected symbols in that day.Is this the specific query that is not as performant as you need? Could you provide some example documents and the aggregation, and also the required result?We tried different setups, and we found that nested documents are faster than flat ones, because they need less disk access.I’m curious if this means that your working set cannot fit in RAM, since in most cases, you want to avoid having to read from disk as much as possible and do most work from RAM. Could you elaborate on your provisioned hardware?Best regards,\nKevin", "username": "kevinadi" }, { "code": "{sym: 'symbol', price: XXX, size: YYY, timestamp: ZZZ}Personalities : [raid0]\nmd0 : active raid0 nvme3n1p1[3] nvme2n1p1[2] nvme0n1p1[0] nvme1n1p1[1]\n 12501934080 blocks super 1.2 512k chunks\n", "text": "Hi @kevinadi.\nI’m the lead developer at tradingsim working on this issue.\nThe specific query is an aggregation pipeline of timesales data. 
A document is a simple object: {sym: 'symbol', price: XXX, size: YYY, timestamp: ZZZ}. The pipeline works on a subset of the symbols and in a user defined date range, it groups by minutes & select some prices in the group (last, first, max, min) sums the size, and finally it sorts the results. We have tens of thousands of symbols and millions of timesales across many years of data, queried by tens of users concurrently with very low latency requirements (sub second).\nAs said in the previous post by @Al_Tradingsim, we already tried different schemas, starting from flat objects (82 bytes average size) to nested docs in minute blocks (770 bytes on average). The next step will be using nested docs in daily blocks, but this needs substantial changes on the pipeline code and a full data reload, which is a significant effort.\nOur benchmarks show that reading data in blocks of 512Kbytes from our NVMEs is on par with the RAM bandwidth for the same amount of data (5,5 GB/s vs 8,5 GB/s). Therefore, our guess is that big reading blocks can outperform small ones when accessing the disks. This is an optimization that could boost many queries we are actually run, not just this specific pipeline.\nAbout the hardware: we have a cluster of 3 Xeon Gold 5222 servers: 184GB RAM, 12TB RAID-0 NVME disks\nRAID details:", "username": "Ivano_Picco" }, { "code": "allocation_sizeinternal_page_maxleaf_page_maxallocation_sizeinternal_page_maxleaf_page_maxallocation_sizeallocation_size", "text": "Hi @Ivano_Picco, welcome to the community.Actually, the command to change allocation_size failed because it was constrained by at least two other parameters: internal_page_max and leaf_page_max. Both of them must be multiplies of allocation_size. Since internal_page_max defaults to 4k and leaf_page_max defaults to 32k, setting allocation_size larger than 4k will fail. To be able to set allocation_size larger than 4k, you must also increase those two numbers.Having said that, this is a very use-case specific tuning and should only be attempted when everything else from the MongoDB and hardware side failed to produce the desired outcome, since tuning those numbers could have unintended consequences, wasted disk space being one of them. Have you tried experimenting with different read ahead settings? If this is set too high, you might see a lower performance.A key performance indicator is checking the query’s explain results and seeing how many times the query yields (which typically indicates a disk bottleneck), how many documents returned vs. documents examined (which indicates query targeting inefficiencies), are the right index being used, etc. I would start from this area before going deep into WiredTiger tuning.If you haven’t seen it yet, there are also a series of blog posts for time series data which may be worth checking: Time Series Data and MongoDB: Part 1, Part 2, and Part 3.Another tool that could be useful is Keyhole, where you can quickly examine the database’s performance. It can work with seed data where you can supply your example documents to the tests will be more tailored for your use case. See the blog posts linked in the Github description for details into Keyhole’s operation, and also other avenues for MongoDB performance analysis.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi, I’m trying do so something similar (my goal is to see if changing internal_page_max affects my query performance). But I am unable to change any of these parameters. 
With any config (mine is still at the defaults), setting allocation_size to half its current size should work, shouldn’t it? Yet the following command still says “WiredTiger reconfiguration failed with error code (22): Invalid argument”:\ndb.adminCommand ( { “setParameter”: 1, “wiredTigerEngineRuntimeConfig”: “allocation_size=2KB”});", "username": "Guilhem_SEMPERE" } ]
Increase WT allocation_size
2020-07-16T12:32:50.066Z
Increase WT allocation_size
4,484
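Following the explanation above, allocation_size can only be raised if internal_page_max and leaf_page_max are raised with it, since both must remain multiples of it. A minimal sketch, assuming a hypothetical collection name and purely illustrative sizes, and with the same caveat from the thread that tuning WiredTiger internals is unsupported territory:

```javascript
// Hypothetical collection and sizes; all three values are raised together so
// allocation_size still evenly divides both page-max settings.
db.createCollection("sensor_readings", {
  storageEngine: {
    wiredTiger: {
      configString: "allocation_size=64KB,internal_page_max=64KB,leaf_page_max=128KB"
    }
  }
})

// Confirm what WiredTiger actually applied for this collection:
db.sensor_readings.stats().wiredTiger.creationString
```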
null
[ "ruby", "mongoid-odm" ]
[ { "code": "", "text": "Hi\nAs Mongoid latest version 7.2.1 has quite a few issues of saving , updating and also the validation errors. With Rails 6.1.3 and Ruby 3.0.0.\nWhat is the previous stable version of mongoid other than 7.2.1 Which do not have any such issues as 7.2.1.\nSource: https://jira.mongodb.org/browse/MONGOID-5048", "username": "hanish_jadala" }, { "code": "", "text": "Hi @hanish_jadala,Ruby 3 is not currently supported by the MongoDB Ruby Driver or the Mongoid ODM, however is scheduled (see RUBY-2268) .Until this work is done we recommend using a 2.x version of Ruby along with the latest driver and ODM compatible with your version of MongoDB.", "username": "alexbevi" }, { "code": "", "text": "The main error that comes is through the translate method from the MongoidError class which i did a work around with a monkey patchMongoid::Errors::MongoidError.class_eval do\ndef translate(key, options)\n::I18n.translate(“mongoid.#{key}”, **options)\nend\nendAs ruby 3 doesnt accept options directly as the arguments cannot be send as a hash we need to spread it to make it work.", "username": "Manish_Sharma5" }, { "code": "::I18n.translate", "text": "@Manish_Sharma5,FYI there is a PR at ::I18n.translate takes keyword arguments by reidmorrison · Pull Request #4944 · mongodb/mongoid · GitHub being discussed specifically for Mongoid and ::I18n.translate (tracked by MONGOID-5044).", "username": "alexbevi" } ]
What is the previous stable version of Mongoid with Ruby 3.0.0
2021-03-31T09:17:55.753Z
What is the previous stable version of Mongoid with Ruby 3.0.0
4,173
null
[ "queries", "python" ]
[ { "code": "", "text": "Hii am new to mongodb and will like to get some help.i wish to store a json which contains array of “records” in one document.\nE.g. one document called “UK”, which contains top 20 records of temperature. i am writing in python.When i try to update the record using update,\n.update({“date”: “06-05-2021”, “temp”: 10, }, {“date”: “07-05-2021”, “temp”: 11 }, upsert=True)\n, there will also be multiple records created (despite using upsert ) with new id, and also the values in each record contains only 1 value.I also encountered issue when i try to use a converted json which a list of records in python which i encountered “more than 1 parameter is required” issue. Please advise. thank you", "username": "Chun_Leong_Lee" }, { "code": "> db.temp.update({\"country\" : \"US\"},{$push : { temperature : {\"date\": \"07-05-2021\", \"temp\": 11 }}}, {upsert : true});\n{\n acknowledged: true,\n insertedId: { index: 0, _id: ObjectId(\"6093c45f14febd9ff2772ead\") },\n matchedCount: 0,\n modifiedCount: 0,\n upsertedCount: 1\n}\n> db.temp.update({\"country\" : \"US\"},{$push : { temperature : {\"date\": \"06-05-2021\", \"temp\": 10 }}}, {upsert : true});\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\n> db.temp.find({})\n[\n {\n _id: ObjectId(\"6093c45f14febd9ff2772ead\"),\n country: 'US',\n temperature: [\n { date: '07-05-2021', temp: 11 },\n { date: '06-05-2021', temp: 10 }\n ]\n }\n]\n db.temp.update({\"country\" : \"US\"},{$push : { temperature : { $each : [{\"date\": \"05-05-2021\", \"temp\": 10 }], $sort : {date : -1}, $slice: 20}}}, {upsert : true});\n", "text": "Hi @Chun_Leong_Lee,You need to use a filter + $push command to an array:The filter make sure that if upserted the document has a “country” field and the $push insert a document in an array if it is matched … Now if you wish to have only top 20 you can use a $each and $slice like that:Porting this to python should be fairly easy:Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Update an array into One Document
2021-05-06T10:02:41.196Z
Update an array into One Document
3,664
null
[]
[ { "code": "", "text": "Is it a good idea to have multiple mongo clients when reading and writing data in MongoDB. For example, we currently have client in the following way (using .NET Mongo Driver):var client = new MongoClient(configuration);\nvar _db = client.GetDatabase(_database);I am planning to have two separate clients, one for reading and one for writing. Will it help in increasing the performance or will it help the performance negatively ?Thanks,\nJW", "username": "Jason_Widener1" }, { "code": "", "text": "Hi @Jason_Widener1,I am not sure the reason to do so, eventually your database will go through same amount of reads and writes with double connections.Each connection occupied memory and resources so I can see only negative impact.What you can consider is changing the connection readPreference to read from secondary for example and this may potentially ease primary for writes , but be sure to understand that date might be staled as replication is async…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Adding Multiple Clients
2021-05-06T04:39:27.906Z
Adding Multiple Clients
3,577
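Tying together the answer above: the usual alternative to a second client is one shared client plus a read preference, set either in the URI or per connection. A small sketch with placeholder hosts and replica set name (the equivalent options exist in the .NET driver's settings):

```javascript
// Placeholder hosts and replica set name. secondaryPreferred lets reads be
// served by secondaries, so results may lag the primary slightly.
const uri = "mongodb://host1:27017,host2:27017,host3:27017/mydb" +
            "?replicaSet=rs0&readPreference=secondaryPreferred";

// In the mongo shell, the per-connection equivalent is:
db.getMongo().setReadPref("secondaryPreferred")
```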
null
[ "aggregation" ]
[ { "code": "db.t1.aggregate([ {$lookup: { \n from: \"t2\",\n let: {age_field: \"$age\", name_field: \"$name\"},\n pipeline: [ { $match:{ $expr:{ $and:[\n {$eq: [ \"$old\", \"$age_field\" ]},\n {$eq: [ \"$alias\", \"$name_field\"]},\n {$eq: [ 24, \"$age_field\"]}\n ]}}}\n ], \n as: \"joined_result\" }},\n {$unwind: {path: \"$joined_result\", preserveNullAndEmptyArrays: true}}\n ])\n db.t1.aggregate([ {$lookup: { from: \"t2\", let: {age_field: \"$age\", name_field: \"$name\"},\n pipeline: [\n { $match:{ $expr:{ $and:[{ $eq: [ 25, \"$$age_field\" ] },{ $eq: [ \"arun\", \"$$name_field\" ] }]}} }\n ,{ $match:{ $expr:{ $or:[{ $eq: [ 24, \"$$age_field\" ] }]}} }\n ],\n \n as: \"joined_result\" }},\n {$unwind: {path: \"$joined_result\",\n preserveNullAndEmptyArrays: true}},\n ]) \n db.t1.aggregate([ {$lookup: { from: \"t2\",\n let: {age_field: \"$age\", name_field: \"$name\"},\n pipeline:[{ $match:{ $expr: [ {$and:[\n {$eq:[\"$old\",\"$$age_field\" ]},\n {$eq:[\"$alias\",\"$$name_field\"]},\n {$or: {$eq: [24, \"$$age_field\"]}}\n ]}\n ] }}\n ] ,\n as: \"joined_result\" }},\n {$unwind: {path: \"$joined_result\",\n preserveNullAndEmptyArrays: true}},\n ])\n db.t1.aggregate([ {$lookup: { from: \"t2\",\n let: {age_field: \"$age\", name_field: \"$name\"},\n pipeline:[{ $match:{ $expr:[\n {$and:[{$eq:[\"$old\",\"$$age_field\" ]},\n {$eq: [\"$alias\",\"$$name_field\"]}]},\n {$or: {$eq: [24, \"$old\"]}}]}}\n ],\n as: \"joined_result\" }},\n {$unwind: {path: \"$joined_result\",\n preserveNullAndEmptyArrays: true}},\n ])\n", "text": "Hi @slava and experts,I am using MongoDB 4.2.I want to write below query written in postgresql to MongoDB:SELECT * FROM t1 LEFT JOIN t2 ON (t1.age = t2.old AND t1.name = t2.alias OR t1.age = 24);The above query has ‘AND’ and ‘OR’ operation.The query below which has same operator with multiple join condition has equivalent syntax in MongoDB.Postgresql:SELECT * FROM t1 LEFT JOIN t2 ON (t1.age = t2.old AND t1.name = t2.alias AND t1.age = 24);MongoDB:But I am looking for syntax for below query which has $AND and $OR operators\nSELECT * FROM t1 LEFT JOIN t2 ON (t1.age = t2.old AND t1.name = t2.alias OR t1.age = 24);I tried some syntaxes as below but unfortunately those are NOT working as expected:Used Multiple $match stages:$OR is part of $AND:Array of expression:It would be great help …!!Thanks in advance.", "username": "Vaibhav_Dalvi" }, { "code": "(t1.age = t2.old AND t1.name = t2.alias OR t1.age = 24)((t1.age = t2.old AND t1.name = t2.alias) OR t1.age = 24)ANDOR$lookup$match $expr:{ \n\t $or:[\n\t\t { $and: [ \n\t\t\t { $eq: [ \"$old\", \"$$age_field\" ] },\n\t\t\t { $eq: [ \"$alias\", \"$$name_field\" ] }\n\t\t ] },\n\t\t { $eq: [ 24, \"$$age_field\" ] }\n\t ]\n }", "text": "Hello @Vaibhav_Dalvi, welcome to the MongoDB Community forum!I want to write below query written in postgresql to MongoDB:SELECT * FROM t1 LEFT JOIN t2 ON (t1.age = t2.old AND t1.name = t2.alias OR t1.age = 24);The above query has ‘AND’ and ‘OR’ operation.The query condition(t1.age = t2.old AND t1.name = t2.alias OR t1.age = 24)is the same as:((t1.age = t2.old AND t1.name = t2.alias) OR t1.age = 24)Note the SQL AND operator has precedence over the OR operator.Then the $lookup pipeline’s $match stage can be constructed as follows:", "username": "Prasad_Saya" }, { "code": "", "text": "Interesting…This works.Thanks @Prasad_Saya for quick response.", "username": "Vaibhav_Dalvi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Multiple JOIN conditions with different operators in $lookup stage sub-pipeline
2021-05-05T13:02:54.841Z
Multiple JOIN conditions with different operators in $lookup stage sub-pipeline
11,360
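For reference, Prasad's $match condition dropped back into the original pipeline from the thread gives the full equivalent of the LEFT JOIN with the mixed AND/OR condition (same t1/t2 collections and field names as above):

```javascript
db.t1.aggregate([
  { $lookup: {
      from: "t2",
      let: { age_field: "$age", name_field: "$name" },
      pipeline: [
        { $match: { $expr: {
            $or: [
              { $and: [
                  { $eq: [ "$old",   "$$age_field"  ] },
                  { $eq: [ "$alias", "$$name_field" ] }
              ] },
              { $eq: [ 24, "$$age_field" ] }
            ]
        } } }
      ],
      as: "joined_result"
  } },
  { $unwind: { path: "$joined_result", preserveNullAndEmptyArrays: true } }
])
```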
null
[ "atlas-triggers" ]
[ { "code": "", "text": "I want to make trigger such a that when user create contest from app & document will be inserted with expiry time stamp of next 24hours in mongodb. So when that expiry time stamp time arrives it will automatically set value to fast so my contest will end on App.", "username": "Its_Me" }, { "code": "", "text": "Hi @Its_Me,Maybe you could leverage Time To Live (TTL) indexes so your document would be deleted within 60 seconds of its expiry date and be deleted. You can use this event to start a trigger in Realm as well.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks for Answer. I learnt something new by this answer but my problem statement is to change field value like true to false so contest will be disabled. Not deleted", "username": "Its_Me" }, { "code": "stuffexpiration_dateactive: true{\n \"_id\": <same as the other doc in the stuff collection>,\n \"expiration_date\": <date when you want to udpate the field>\n}\neventsdb.events.createIndex( { \"expiration_date\": 1 }, { expireAfterSeconds: 0} )\neventsstuff_idactive: falseactive", "text": "IF your update doesn’t need to be precise down to the second, but can happen somewhere between 0 and 60 seconds after the expiration date, you could use TTL indexes:We have a blog post coming up soon on https://www.mongodb.com/, it’s currently in review ! Ping @Pavel_Duchovny.IF you need a real time update of that active field, you need to implement another solution based on a realm time software.I hope this helps. I tried to stay as concise as possible, but please let me know if you need more details of course. Hopefully the blog post is coming soon Basically this is a derivation of a notification system.Cheers,\nMaxime.EDIT PS: The same concept is explained in this blog post:https://www.mongodb.com/article/triggers-tricks-data-driven-schedule/", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks @MaBeuLux88 such a great support encourages me to keep learning. I will try it. Hopefully i am sure it will work else i will reach you again.", "username": "Its_Me" }, { "code": "", "text": "Thanks a lot for your comment. I’m really happy my post helped you. See you in the next topic !", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to set trigger on the basis of date present in specific document or from many documents the trigger should execute every time
2021-05-04T09:48:40.121Z
How to set trigger on the basis of date present in specific document or from many documents the trigger should execute every time
8,533
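A sketch of the pattern described in the thread above (TTL index plus a delete trigger). Collection, database and field names are assumptions, not the poster's actual schema: a small companion document per contest expires via TTL, and a Realm database trigger on that delete flips the contest to inactive instead of removing anything from the main collection.

```javascript
// One-time setup: expire "events" documents at their expiration_date (within ~60s).
db.events.createIndex({ expiration_date: 1 }, { expireAfterSeconds: 0 })

// Realm database trigger configured on DELETE operations in "events".
exports = async function (changeEvent) {
  const contestId = changeEvent.documentKey._id;     // same _id stored in both collections
  const contests = context.services
    .get("mongodb-atlas")                            // assumed linked service name
    .db("app")                                       // assumed database name
    .collection("contests");                         // assumed collection name
  await contests.updateOne({ _id: contestId }, { $set: { active: false } });
};
```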
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of v1.5.2 of the MongoDB Go Driver.This release contains several bug fixes. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.5.2 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team", "username": "Matt_Dale" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.5.2 Released
2021-05-05T21:29:41.812Z
MongoDB Go Driver 1.5.2 Released
2,019
null
[ "sharding", "configuration" ]
[ { "code": "mongo --eval \"sh.stopBalancer()\" mongos-host:27017\n\n# Repeat below on each shard host:\nmongo --eval \"db.fsyncLock()\" localhost:27018\n\ncp /mongodb/data/collection/3109--6926861682361166404.wt /slow-disc/mongodb/collection/3109--6926861682361166404.wt\nln --force --symbolic /mongodb/data/collection/3109--6926861682361166404.wt /slow-disc/mongodb/collection/3109--6926861682361166404.wt\n\nmongo --eval \"db.fsyncUnlock()\" localhost:27018\n\n# After all shards are done:\nmongo --eval \"sh.startBalancer()\" mongos-host:27017\n/mongodb/data/collection\n/mongodb/data/index\n/mongodb/archive/collection -> /slow-disc/mongodb/collection \n/mongodb/archive/index\nmongo --eval 'sh.shardCollection(\"archive.coll\", shardKey)' mongos-host:27017\nmongodump --uri \"mongodb://mongos-host:27017\" --db=data --collection=coll --archive=- | mongorestore --uri \"mongodb://mongos-host:27017\" --nsFrom=\"data.coll\" --nsTo=\"archive.coll\" --archive=-\nmongo --eval 'db.getSiblingDB(\"data\").getCollection(\"coll\").drop()' mongos-host:27017\n", "text": "I have a MongoDB Sharded cluster with a hybrid storage, i.e. some fast SSD and some slower and cheaper spinning rust.For archiving I like to move some data to the slower disc. For legal reason we have to keep them, they are queried only occasionally.In principle I would do it like this:The indexes shall remain on the fast disc.Would this be a reliable way to archive my data? What happens if the collection is read while move?Another approach would be a file system like this:And then move the collection as this:Main disadvantage: the balancer has to distribute the whole data across the shards. It creates additional load on my shared cluster.Which approach would you recommend?", "username": "Wernfried_Domscheit" }, { "code": "", "text": "If Atlas is an option you could use Online Archive to archive you data automatically to S3. Much cheaper and yet still queryable.", "username": "Joe_Drumgoole" }, { "code": "dbPathmongodfsyncLockdirectoryPerDBdirectoryForIndexes", "text": "Hi @Wernfried_Domscheit,The Online Archive option for Atlas is optimised for archival storage of data that you still may want to query occasionally.Since you have a self-managed sharded cluster, I would look into using Zone Sharding to influence data locality.The use case you’ve described is one of the example scenarios: Tiered Hardware for Varying SLA or SLO.Would this be a reliable way to archive my data? What happens if the collection is read while move?Zone sharding is part of the normal sharded architecture, so application access can continue concurrently with rebalancing activity. This approach allows a DBA to influence data locality and there is no downtime if an admin needs to adjust allocation between existing resource tiers or provision additional fast or slow hardware.In principle I would do it like thisIt is possible to use symlinks, but this may affect your backup strategy (for example, if you are using filesystem snapshots) because a single MongoDB dbPath will span multiple filesystems. I recommend using an agent-based backup approach (i.e. MongoDB Ops Manager or Cloud Manager) to avoid complications of backing up shards spanning multiple filesystems.I would also stop the mongod process (not just fsyncLock) when changing symlinks. 
Removing or replacing files for a running process may lead to unexpected outcomes, and you’re already stopping all writes to make this administrative change.If you are planning on doing this frequently (and decide to take the filesystem approach rather than zone sharding), I would consider using database-level mount points via storage options like directoryPerDB and directoryForIndexes. Managing symlinks at a database granularity is less disruptive and error prone than collection-level changes.Note: In order to change storage level options that affect the physical arrangement of files, you will need to rebuild the data files for your shard replica set members via initial sync. You can do so in a rolling fashion: change the storage options on one secondary at a time, wait for initial sync to complete, and eventually step down and upgrade the primary.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Some more details:\nAtlas is no option. Please don’t start a discussion about it, we analyzed it carefully together with MongoDB and there are several reasons against Atlas (e.g. mongoimport does not support Client-Side Field Level Encryption).The application is rather big. It generates at peak 70’000 documents/second which gives about 100GB (storage size) data every day. For 2 days this data shall be hosted on fast disk, because is it frequently used and potentially modified. Data has to be kept for 6 Months, i.e. in total the DB has a storage size of 20TB - distributed over 6 Shards. The Shards are built as PSA-Replica Sets.After 2 days I like to move the “old” data on daily basis to the slower archive storage, because then it is used only rare.So, I don’t see any option for Zoned Sharding, because the same data needs to be first stored on fast hardware and then on slower (i.e. cheap) hardware.Running a full initial sync of 20 TB every day might not be the best option.Best Regards\nWernfried", "username": "Wernfried_Domscheit" }, { "code": "recentarchivedirectoryPerDBdirectoryForIndexes", "text": "So, I don’t see any option for Zoned Sharding, because the same data needs to be first stored on fast hardware and then on slower (i.e. cheap) hardware.Hi Wernfried,With zoned sharding you would update the zone ranges on a schedule (eg daily) so older data would end up migrating from recent to archive shards. You would not have to coordinate this change across every member of your sharded cluster (as you will for filesystem symlinks).This approach does presume that you would want to query your recent & archived data as a single sharded collection, rather than querying across multiple collections.The extra info you provided in your latest response is that the archived data only needs to be retained for 6 months and indicates that you are concerned about the daily and total volume of data.If you have already modelled your data so you can archive based on a collection naming convention, your first approach (symlinks) sounds more appropriate for your use case than dumping & restoring data (which includes rebuilding indexes).However, choice of an approach is up to you. 
I’m just sharing suggestions based on the information you have provided.The Shards are built as PSA-Replica Sets.I expect you are already aware, but there are some consequences of arbiters that will have a performance impact if you are routinely taking data-bearing members down for maintenance issues like updating symlinks.For more background, please see my comment on Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie.Running a full initial sync of 20 TB every day might not be the best option.Definitely not! My mention of initial sync was in the context of a one-off operation if you wanted to change your storage options to use directoryPerDB and/or directoryForIndexes. Grouping of related files by database or type can be helpful if you want to tune different mount point options.If you are fine maintaining symlinks at a file level, you can skip any notions of changing storage options.Regards,\nStennie", "username": "Stennie_X" }, { "code": "recentarchive", "text": "With zoned sharding you would update the zone ranges on a schedule (eg daily) so older data would end up migrating from recent to archive shards. You would not have to coordinate this change across every member of your sharded cluster (as you will for filesystem symlinks).Yes, I did not consider update the zone ranges. I will give it a try.I create one collection per day. Deleting old data from one big collection takes far to much time. Dropping a daily collection after 6 Months takes only a few seconds.I expect you are already aware, but there are some consequences of arbiters that will have a performance impact if you are routinely taking data-bearing members down for maintenance issues like updating symlinks.MajorityReadConcern is disabled, of course. Having a PSS (1 Primary + 2 Secondary) would be nice but is a significant cost-driver due to storage requirement.Thanks for your suggestions!Wernfried", "username": "Wernfried_Domscheit" }, { "code": "", "text": "Just for information, the Zoned Sharding looks promising and would be the preferred method for me.\nHowever, it fails for larger collections, see https://jira.mongodb.org/browse/SERVER-56116Let’s hope we will get a fix for it.\nWernfried", "username": "Wernfried_Domscheit" }, { "code": "maxSize", "text": "You don’t mention it but do you have any other parameters/settings on your shards?For instance, maxSize or something else? Without seeing the full config database, it’s difficult to speculate about what might be going on.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "No, I am not aware of any. However, today I tried this procedure again and it was working. I am not able to reproduce the error.Best Regards\nWernfried", "username": "Wernfried_Domscheit" } ]
Is it possible to move WiredTiger files to different file system?
2021-04-08T12:36:50.557Z
Is it possible to move WiredTiger files to different file system?
4,959
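A sketch of the zone-sharding tiering Stennie describes, with placeholder shard names and a placeholder date-based shard key. The idea is that a scheduled job moves the boundary between the zones so older chunks drain onto the slower shards:

```javascript
// Tag shards by hardware tier (shard names are placeholders).
sh.addShardToZone("shardSSD1", "recent")
sh.addShardToZone("shardHDD1", "archive")

// Assuming the collection is sharded on a date-based key { day: 1 }:
// everything older than the cutoff belongs to the archive tier.
sh.updateZoneKeyRange("data.coll", { day: MinKey }, { day: ISODate("2021-05-03") }, "archive")
sh.updateZoneKeyRange("data.coll", { day: ISODate("2021-05-03") }, { day: MaxKey }, "recent")

// When the cutoff advances (e.g. daily), remove the old range first by passing
// null as the zone, then re-add it with the new boundary:
sh.updateZoneKeyRange("data.coll", { day: ISODate("2021-05-03") }, { day: MaxKey }, null)
```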
null
[ "dot-net", "replication" ]
[ { "code": "", "text": "I have a MongoDB Replica Set with 3 instances. One of them was in Recovery for a couple of weeks. A .net Core (C#) application continued reading from the instance in Recovery. This .net Core application uses MongoDB.Driver 2.11.6 .My questions are:Thanks.", "username": "Stefano_Curcio" }, { "code": "mongod{\n \"topologyVersion\": {\n \"processId\": ObjectId(\"6091db349e24f4ec09d5b60e\"),\n \"counter\": NumberLong(\"5\")\n },\n \"ok\": 0,\n \"errmsg\": \"node is recovering\",\n \"code\": 13436,\n \"codeName\": \"NotPrimaryOrSecondary\"\n}\nmongod", "text": "Hi, Stefano,Thank you for contacting MongoDB. We understand that one member of a 3-member replica set was in recovering for 2 weeks, but your .NET Core application continued to read from this node.If you connect to a replica set with the .NET/C# driver, it will select a replica set node to read from based on the configured read preference for the operation (possibly defaulted from the connection string). Even if the driver selected a recovering node due to stale topology information, the recovering mongod instance will not service reads and instead will return an error similar to the following:To read from a recovering node for data recovery purposes, you would have to restart the node as a standalone on a different port and then connect directly to that standalone node on that new port.To answer your questions:If you are seeing different behaviour than described above, please provide a self-contained reproduction of the issue so that we can investigate further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "readPreferenceprimarymongodb://user:[email protected]:27017,x.x.x.x:27017,x.x.x.x:27017/?authSource=auth_source", "text": "We do not have a readPreference in the connection string, so the default is primary.\nOur connection string has this shape: mongodb://user:[email protected]:27017,x.x.x.x:27017,x.x.x.x:27017/?authSource=auth_source\nWhile Java applications logged an error (Command failed with error 211 (KeyNotFound): 'Cache Reader No keys found for HMAC that is valid for time: ) there were not errors or warning in the logs of C# applications.I will investigate further.Thank you @James_Kovacs.", "username": "Stefano_Curcio" }, { "code": "?replicaSet=<<replSetName>>recovering?replicaSet=<<replSetName>>", "text": "Hi, Stefano,Thank you for the follow-up. You are correct that the default read preference will be primary if left unspecified.Reviewing your connection string, you do not specify the ?replicaSet=<<replSetName>> option. The driver will connect to the node as a standalone and will use the first host that can be resolved. This is why you continued to connect to the node after it went into recovering.Please try specifying your replica set name in ?replicaSet=<<replSetName>> in your connection string for both the .NET/C# and Java applications. This should resolve the problem that you are observing.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
C# MongoDB.Driver - Replica Set with an instance in Recovery
2021-05-04T13:38:32.386Z
C# MongoDB.Driver - Replica Set with an instance in Recovery
5,915
null
[ "dot-net", "monitoring" ]
[ { "code": "", "text": "We’re having problems with connection pooling and the wait queue with mongo using the mongo c# driver.Is there any way to get metrics from the MongoClient at all? Things like number of active connections, size of connection pool, size of wait queue etc.", "username": "Paul_Allington" }, { "code": "MongoClientConnectionPoolCheckingOutConnectionEventConnectionPoolCheckedOutConnectionEventConnectionPoolCheckingOutConnectionFailedEventConnectionPoolCheckedOutConnectionEventConnectionPoolCheckedInConnectionEventConnectionPoolAddedConnectionEventConnectionPoolRemovedConnectionEventConsole.WriteLineConnectionPoolCheckingOutConnectionFailedEvent", "text": "Hi, Paul,Thank you for reaching out to MongoDB about your question regarding MongoClient metrics.The .NET/C# driver exposes a wide variety of internal metrics via Eventing. This includes creation/destruction of connection pools, connections being added/removed from pools, connections checked in/out of pools, and more. For example, the wait queue is entered when ConnectionPoolCheckingOutConnectionEvent is raised and exited when ConnectionPoolCheckedOutConnectionEvent or ConnectionPoolCheckingOutConnectionFailedEvent is raised. A connection is active between ConnectionPoolCheckedOutConnectionEvent and ConnectionPoolCheckedInConnectionEvent. You can keep track of the total number of connections by monitoring ConnectionPoolAddedConnectionEvent and ConnectionPoolRemovedConnectionEvent.One point to note. Connection pools are per cluster member, not per MongoClient. So if you have a 3 member replica set, you will have 3 connection pools, one for each node in the cluster. There is an additional monitoring connection outside of the connection pools using for Server Discovery and Monitoring (SDAM).For an example of how to build a monitoring solution, you can take a look at the PerformanceCounterEventSubscriber in the source code. You can see how various counters are incremented and decremented in response to various driver events, which can then be forwarded onto whatever monitoring solution (potentially as simple as Console.WriteLine for debugging purposes) you want to use. Note that this particular subscriber neglected to handle the ConnectionPoolCheckingOutConnectionFailedEvent, which means that counts will not be accurate in the face of connection checkout timeouts.Hopefully that gives you a starting point for diagnosing the issues that you are observing. Please let us know if you have any additional questions.Sincerely,\nJames", "username": "James_Kovacs" } ]
Mongo C# Driver Metrics
2021-05-05T08:57:32.317Z
Mongo C# Driver Metrics
4,336
https://www.mongodb.com/…3_2_1024x260.png
[ "connecting", "php" ]
[ { "code": "function Connect ($dbName, $dbURI){\n if (!empty($this->connection)) {\n return; \n } else {\n $options = [\n 'connect' => true,\n 'connectTimeoutMS' => 10000,\n ];\n\n $mongo = new MongoClient(\n $dbURI,\n $options\n );\n \n $this -> connection = $mongo -> $dbName;\n", "text": "I have a problem regarding to the Mongo client connections. I want to understand why is mongo opening new connections for the same client, the same query. I’m currently using PHP 5.6 with Mongo 3.6.17 and the mongo php driver is GitHub - mongodb/mongo-php-driver-legacy: Legacy MongoDB PHP driver.This is the Connect function:Note: Every time reloads the page that uses the MongoClient is opening a new connection, which is not what I want. I’d like to keep the same connection or when the user leaves the page Mongo can close that. Otherwise, I’ll end up with twice the number of connections from a single client. The following image demonstrates that 3161 connections opened, but having around 1000 users in the application doesn’t make sense for me.\nScreenshot 2021-04-27 at 12.26.231181×301 95.5 KB\n", "username": "Luis_Carbajal" }, { "code": "", "text": "You’ve asked the same question just a day ago (Need help with Mongo Client Connections and PHP 5.6). There’s no need to ask the same question multiple times - we’ll reply when we get to it.In your case, you are using the legacy MongoDB extension, which is no longer supported. If you are using PHP 5.6, there are older (now also unsupported) versions of the new driver available that also run on PHP 5.6. Please upgrade and see if that resolves your problem. Thanks!", "username": "Andreas_Braun" }, { "code": "", "text": "@Andreas_Braun there’s nothing to upgrade because we already are in the latest version for PHP (5.6) driver which is (ext 1.7 + lib 1.6) as per: https://docs.mongodb.com/drivers/php. Unfortunately I still have the same issue", "username": "Luis_Carbajal" }, { "code": "MongoClientcomposer show mongodb/mongodb\ncomposer show alcaeus/mongo-php-adapter\nphp --ri mongo\nphp --ri mongodb\n", "text": "The code you’ve posted uses the MongoClient class from the legacy driver as you’ve indicated. As I mentioned, there is a new extension (ext-mongodb) along with a new library (mongodb/mongodb on packagist), which is what we support. The legacy extension is no longer supported. Please run the following commands in your project directory one by one and post the complete output so we can figure out what you’re running:", "username": "Andreas_Braun" }, { "code": "", "text": "\ncurrent-plat3578×1319 403 KB\nAs you can see in that image that every time I do a find with the exact same params in the constructor, meaning the query is the same but mongo allow a new connection instead of reusing. (Doesn’t persist in my case) and for that I still see a lot of connections.I’m using this (new MongoDB\\Client)->test->zips; instead of MongoClient.Thanks @Andreas_Braun", "username": "Luis_Carbajal" }, { "code": "", "text": "Thank you for the information. As a general rule, please copy terminal output and put it into a code block in the reply, this makes it easier for others to read it.Please note that I can only comment toward the behaviour of the MongoDB extension. Connections to MongoDB are persisted internally per process. This means that if you invoke the script from the CLI, a new PHP process is spawned for every run, which will always create a new connection. 
That said, when the PHP process is terminated, the connection is closed, so you shouldn’t see a continuous increase in connections.How is the code above invoked? Do you run it through the CLI (so one connection per process), or do you run it through a webserver to php-fpm or another SAPI?", "username": "Andreas_Braun" }, { "code": "(new MongoDB\\Client)->customers;", "text": "I’m using apache 2 and the code above is running every time the user hits a page that requires a Mongo connection. i.e. GET https://app.xyz/customers.php. Then that page is using (new MongoDB\\Client)->customers; and if I refresh the page Mongo will open another connection without closing the previous one.", "username": "Luis_Carbajal" } ]
Mongo is opening too many Connections. PHP 5.6 and Mongo 3.6
2021-04-28T17:37:41.570Z
Mongo is opening too many Connections. PHP 5.6 and Mongo 3.6
5,197
null
[ "indexes" ]
[ { "code": "{\t\"nReturned\" : 101,\n\t\"executionTimeMillisEstimate\" : 0,\n\t\"totalKeysExamined\" : 101,\n\t\"totalDocsExamined\" : 0,\n\t\"executionStages\" : {\n\t\t\t\"stage\" : \"PROJECTION_COVERED\",\n\t\t\t\"nReturned\" : 101,\n\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\"works\" : 101,\n\t\t\t\"advanced\" : 101,\n\t\t\t\"needTime\" : 0,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 3,\n\t\t\t\"restoreState\" : 2,\n\t\t\t\"isEOF\" : 0,\n\t\t\t\"transformBy\" : {\n\t\t\t\t\t\"CompanyId\" : 1,\n\t\t\t\t\t\"UpdatedDateUtc\" : 1,\n\t\t\t\t\t\"_id\" : 0\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"nReturned\" : 101,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\"works\" : 101,\n\t\t\t\t\t\"advanced\" : 101,\n\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 3,\n\t\t\t\t\t\"restoreState\" : 2,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"CompanyId\" : 1,\n\t\t\t\t\t\t\t\"BetDateUtc\" : -1,\n\t\t\t\t\t\t\t\"WagerEventDateUtc\" : -1,\n\t\t\t\t\t\t\t\"UpdatedDateUtc\" : -1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"CompanyId_1_BetDateUtc_-1_WagerEventDateUtc_-1_UpdatedDateUtc_-1\",\n\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"CompanyId\" : [ ],\n\t\t\t\t\t\t\t\"BetDateUtc\" : [ ],\n\t\t\t\t\t\t\t\"WagerEventDateUtc\" : [ ],\n\t\t\t\t\t\t\t\"UpdatedDateUtc\" : [ ]\n\t\t\t\t\t},\n\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"CompanyId\" : [\n\t\t\t\t\t\t\t\t\t\"[1341.0, 1341.0]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"BetDateUtc\" : [\n\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"WagerEventDateUtc\" : [\n\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"UpdatedDateUtc\" : [\n\t\t\t\t\t\t\t\t\t\"[new Date(9223372036854775807), new Date(1616715300000)]\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"keysExamined\" : 101,\n\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\"dupsDropped\" : 0\n\t\t\t}\t}},\n{\n\t\"nReturned\" : 101,\n\t\"executionTimeMillisEstimate\" : 0,\n\t\"totalKeysExamined\" : 101,\n\t\"totalDocsExamined\" : 0,\n\t\"executionStages\" : {\n\t\t\t\"stage\" : \"PROJECTION_COVERED\",\n\t\t\t\"nReturned\" : 101,\n\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\"works\" : 101,\n\t\t\t\"advanced\" : 101,\n\t\t\t\"needTime\" : 0,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 10242,\n\t\t\t\"restoreState\" : 10242,\n\t\t\t\"isEOF\" : 0,\n\t\t\t\"transformBy\" : {\n\t\t\t\t\t\"CompanyId\" : 1,\n\t\t\t\t\t\"UpdatedDateUtc\" : 1,\n\t\t\t\t\t\"_id\" : 0\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"nReturned\" : 101,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\"works\" : 101,\n\t\t\t\t\t\"advanced\" : 101,\n\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 10242,\n\t\t\t\t\t\"restoreState\" : 10242,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"CompanyId\" : -1,\n\t\t\t\t\t\t\t\"UpdatedDateUtc\" : -1,\n\t\t\t\t\t\t\t\"WagerEventDateUtc\" : -1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"CompanyId_-1_UpdatedDateUtc_-1_WagerEventDateUtc_-1\",\n\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"CompanyId\" : [ 
],\n\t\t\t\t\t\t\t\"UpdatedDateUtc\" : [ ],\n\t\t\t\t\t\t\t\"WagerEventDateUtc\" : [ ]\n\t\t\t\t\t},\n\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"CompanyId\" : [\n\t\t\t\t\t\t\t\t\t\"[1341.0, 1341.0]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"UpdatedDateUtc\" : [\n\t\t\t\t\t\t\t\t\t\"[new Date(9223372036854775807), new Date(1616715300000)]\"\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"WagerEventDateUtc\" : [\n\t\t\t\t\t\t\t\t\t\"[MaxKey, MinKey]\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"keysExamined\" : 101,\n\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\"dupsDropped\" : 0\n\t\t\t}\t}}\n", "text": "Hi, mongodb often use index which include the field not in the filter , and the field will using [MaxKey , MinKey] ,this is not the same as what I saw in the index prefix documentation.Why it will use wrong index ?the query like this :db.Wager.aggregate([{\"$match\": {“CompanyId”: 1341, “UpdatedDateUtc” : {\"$gte\" : ISODate(“2021-03-25T23:35:00Z”)}}},{$project:{“CompanyId”:1,“UpdatedDateUtc”:1,_id:0}},{\"$group\": {\"_id\": 1,“n”: {\"$sum\": 1}}}])I hope this can use this index :{ “CompanyId”: -1, “UpdatedDateUtc” : -1 ,“WagerEventDateUtc” : -1 }but it always will use index :{ “CompanyId” : 1,“BetDateUtc” : -1, “WagerEventDateUtc” : -1, “UpdatedDateUtc” : -1}there is the explain :I think mongodb choose wrong index were because “saveState” and “restoreState” too high , but I’m not sure because I don’t know what these two fields mean.", "username": "111148_1" }, { "code": "", "text": "Hi @111148_1,Welcome to MongoDB community.Both of those indexes can be used for this query as both of them cover the searched fields.MongoDB uses an empirical engine to run all candidates and the return first is the one choosen.The explain plans show same works and same execution times so I don’t see why you will prefer one over the other.If you wish to force an index use a hint on the query.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Mongodb not follow prefix rule
2021-05-05T04:19:28.893Z
Mongodb not follow prefix rule
2,160
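As Pavel says, a hint forces the preferred index; aggregate() accepts a hint option (MongoDB 3.6+), so the count from the thread can be pinned to the CompanyId/UpdatedDateUtc index like this:

```javascript
db.Wager.aggregate(
  [
    { $match: { CompanyId: 1341, UpdatedDateUtc: { $gte: ISODate("2021-03-25T23:35:00Z") } } },
    { $project: { CompanyId: 1, UpdatedDateUtc: 1, _id: 0 } },
    { $group: { _id: 1, n: { $sum: 1 } } }
  ],
  { hint: "CompanyId_-1_UpdatedDateUtc_-1_WagerEventDateUtc_-1" }  // index name from the explain output
)
```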
null
[ "atlas-functions", "atlas-triggers" ]
[ { "code": "", "text": "Is it possible to list my databases in a trigger function on Mongodb Atlas?\nI need to loop through some databases.", "username": "Thiago_Andreazza" }, { "code": "", "text": "Hey Thiago, listDatabases is not currently available at the moment for Functions, but you could get around this by storing a list of your databases in a separate collection and retrieving that list when you need to perform the loop as a workaround.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I think this could work for now. Thank you, Sumedha! ", "username": "Thiago_Andreazza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can I list my databases in an Atlas function?
2021-04-25T13:55:16.557Z
Can I list my databases in an Atlas function?
2,536
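A sketch of the workaround Sumedha suggests, with assumed database and collection names: keep one document per database in a metadata collection and loop over it inside the function.

```javascript
exports = async function () {
  const svc = context.services.get("mongodb-atlas");        // assumed linked service name
  // Metadata collection maintained by you, e.g. { name: "tenant_a" } per database.
  const dbs = await svc.db("meta").collection("databases").find({}).toArray();

  for (const entry of dbs) {
    const coll = svc.db(entry.name).collection("events");   // assumed target collection
    const total = await coll.count({});                     // replace with the real per-database work
    console.log(`${entry.name}: ${total} documents`);
  }
};
```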
null
[ "change-streams" ]
[ { "code": "", "text": "Hi,Our Prod Mongo is a running on version 3.6.0. We currently have a synchronisation service that is reading from the change stream to update index in Elastic search using monstache (GitHub - rwynn/monstache: a go daemon that syncs MongoDB to Elasticsearch in realtime. you know, for search.). After we made some update on our data we got this:ERROR 2021/05/03 16:00:59 Error starting change stream. Will retry: CappedPositionLost: CollectionScan died due to position in capped collection being deleted.And now I am not able to enable the synchronisation again.Do you know a way I can enable the synchronisation process?Thank you in advance", "username": "Jonatan_Aponte" }, { "code": "CappedPositionLostdb.getReplicationInfo()", "text": "Hi @Jonatan_Aponte and welcome in the MongoDB Community !First time I hear about Monstache so I don’t know anything about it. But looks like they are pulling data from change streams and copying them into Elastic.CappedPositionLost sounds like Monstache was stopped or had a temporary incident and couldn’t sync for a bit. When it tried to restart the change stream were it stopped earlier, the last entry they processed couldn’t be found, probably because your Oplog window is too small and had already rolled over that last processed entry.Can you run db.getReplicationInfo() and see how large your oplog is? You need to make sure that your timeDiff is large enough to cover any incident you could have. Else your Elastic will be desync from your MongoDB collections and you will need to recreate your Elastic indexes from scratch as you can’t recover at this point, as the history of the write operations has already rolled over some operations that you couldn’t sync.I don’t know if you are on Community, Enterprise Advanced or Atlas, but if you are on Atlas, I would recommend that you have a look at Atlas Search instead, because the sync is done automatically for you.\nWith this solution, you don’t need to run 3 nodes for MongoDB and 3 nodes for Elastic and x2 the price of your infra & (potentially) licence costs. Atlas Search comes out of the box with an Atlas cluster and doesn’t need extra infra or licensing and the sync is done automatically, which is less troubles…Finally, I would recommend that you upgrade to MongoDB 4.4 as MongoDB 3.6 has now reached EOL last month and isn’t supported anymore. MongoDB Support Policies | MongoDB\nThis is also a trivial thing to do in Atlas.I hope this helps.\nCheers,\nMaxime.", "username": "MaBeuLux88" } ]
CappedPositionLost
2021-05-03T16:53:43.080Z
CappedPositionLost
3,736
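To follow up on the oplog-window point above, the check and, if the window really is too small, the fix look like this; the target size is only an example:

```javascript
// On a replica set member: how many hours of writes does the oplog currently cover?
db.getReplicationInfo().timeDiffHours

// Available since 3.6 on WiredTiger: resize the oplog without a restart
// (size is in megabytes; 51200 MB = 50 GB is purely illustrative).
db.adminCommand({ replSetResizeOplog: 1, size: 51200 })
```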
null
[]
[ { "code": "option.returnedtruestatus:'returned'orders: [\n { id: 100, status: \"shipped\", options: [{ returned: true }] },\n { id: 101, status: \"packed\", options: [{ quick: true }] },\n { id: 102, status: \"ordered\" }\n]\n\ndesired result: [\n { id: 100, status: \"returned\", options: [{ returned: true }] },\n // ^- updated status because returned: true\n { id: 101, status: \"packed\", options: [{ quick: true }] },\n { id: 102, status: \"ordered\" }\n]\nreturned {\n $set: {\n returned: \"$options.returned\"\n }\n },\n {\n $unwind: {\n path: \"$returned\",\n preserveNullAndEmptyArrays: true\n }\n }\n\norders: [\n { id: 100, status: \"shipped\", options: [{ returned: true }], returned: true }, // extracted value from array\n { id: 101, status: \"packed\", options: [{ quick: true }] },\n { id: 102, status: \"ordered\" }\n]\n$set$condstatusstatusreturned", "text": "How can I update a field when a condition is met?if option.returned is true I want to update the field status:'returned'.I can set an additional field returned, extracting the value from the array into the field.But so far all attempts combining $set and $cond to overwrite status have failed for me.How can I update status when returned is set, otherwise keep the previous value?MongoDB PlaygroundThanks,\nbluepuma", "username": "blue_puma" }, { "code": "$ifdb.orders.aggregate([\n {\n $set: {\n status: {\n $cond: {\n if: {\n $arrayElemAt: [\n \"$options.returned\",\n 0\n ]\n },\n then: \"returned\",\n else: \"$status\"\n }\n }\n }\n }\n])\n", "text": "I tried to use $if, of course that didn’t work.", "username": "blue_puma" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to combine $set and $cond? Or use other operator?
2021-05-05T11:03:31.702Z
How to combine $set and $cond? Or use other operator?
6,734
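One hedged way to get the result the thread is after, using an update pipeline (MongoDB 4.2+): set status to "returned" whenever any element of options has returned: true, otherwise keep the existing value. The same $set stage works unchanged inside an aggregate() as in the playground.

```javascript
db.orders.updateMany({}, [
  { $set: {
      status: {
        $cond: [
          // true if any options element has returned: true; a missing options field becomes []
          { $in: [ true, { $ifNull: [ "$options.returned", [] ] } ] },
          "returned",
          "$status"          // otherwise keep the current status
        ]
      }
  } }
])
```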
null
[ "replication", "security", "configuration" ]
[ { "code": "", "text": "Hello so I’m kinda in need of some help.I’m working on Mongodb with Elasticsearch in Graylog. I’m trying to make a high availability setup.\nIt all worked out with 4.4 mongodb but 4.4.5 has been Challenging.The enforce command i found for mangodb primary isn’t working.\ncfg = rs.conf();\ncfg.members[0].priority = 2;\ncfg.members[1].priority = 1;\ncfg.members[2].priority = 1;\nrs.reconfig(cfg);and making a Authorized user for database admin can login but he can’t use commandos such as rs.stepDown() however it worked without the Authorized user.Are all the commands really that different from last update to the newest?", "username": "Knightofmoon_N_A" }, { "code": "rs.stepDown()", "text": "and making a Authorized user for database admin can login but he can’t use commandos such as rs.stepDown() however it worked without the Authorized user.What steps did you take to create this user and what roles did you grant?\nDid the user authenticate successfully?\nWhat is the error returned from rs.stepDown()", "username": "chris" }, { "code": "", "text": "Dear Chris tyvm for your messages on both posts.However i just used the downgraded command seen below\nsudo apt-get install -y --allow-downgrades mongodb-org=4.4.4 mongodb-org-server=4.4.4 mongodb-org-shell=4.4.4 mongodb-org-mongos=4.4.4 mongodb-org-tools=4.4.4for some reason it kept giving me some issues on 4.4.5 which i havn’t seen on 4.4.4 but i now looks like that 4.4.4 the downgrade i did might not have been the best idea either.", "username": "Knightofmoon_N_A" } ]
MongoDB 4.4.5 Help with Primary?
2021-05-05T10:12:37.748Z
MongoDB 4.4.5 Help with Primary?
1,666
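On Chris's question about roles: rs.stepDown() and rs.reconfig() need the replSetStateChange / replSetConfigure actions, which the built-in clusterAdmin role (or the narrower clusterManager) provides. A sketch with placeholder credentials:

```javascript
// Run against the admin database on the primary (placeholder user/password).
db.getSiblingDB("admin").createUser({
  user: "rsadmin",
  pwd: "changeMe",
  roles: [ { role: "clusterAdmin", db: "admin" } ]
})

// After authenticating as that user, the replica set helpers should work:
rs.stepDown()
```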
null
[ "data-modeling" ]
[ { "code": "", "text": "Hello All,Usually I come across suggestions of splitting a use collection into entity collections.\nLike if you have an IOT application, with a “sensor_temperature” collection and documents, because it will grow too much, the idea would be for example, collections likesenson_temperature_houseA\nsenson_temperature_houseB\nsenson_temperature_houseCand so on. even if you have 1million different houses.\nSo house A does not need to see ( or know about) houseB’s data. But does this make any sense?\nIn any case? Any case?\nIf so, which cases/conditions, and why? Thank you so much.\nBest Regards,\nJP", "username": "Joao_Pinela" }, { "code": "", "text": "That sounds like it could be a recipe for disaster or it could be a very good way to doing it, depending on how the data is collected and how it’s used.If you think of each house as a separate tenant/customer/user then maybe there are valid reasons to split the data, but remember that even if it’s all in one collection you can split it (eventually sharding) by house/tenant_id (plus other fields, as it makes sense).So, like always, in MongoDB the answer is “it depends” Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hello Asya,thank you for your feedback one good idea, I would think, is data isolation/ privacy . That is really the only reason, because other than that, you can “split” in one single collection by a simple attribute “house_id” or similar.I don’t see any more benefits. Performance-wise, versus the complexity of coding, doesn’t seem one.\nWould it be THAT much better, if there isn’t a requirement for data isolation for privacy?Thank you again.Best Regards,\nJP", "username": "Joao_Pinela" }, { "code": "yearmonth_yearsensor_temp_2020\nsensor_temp_2021\nOR\nsensor_temp_01_2021\nsensor_temp_02_2021\n", "text": "EDIT: I started to type this hours ago. Then I went to a food break and didn’t see the 2 previous answers.Hi @Joao_Pinela,My answer might not be the only truth but, let’s try.First, I would say that MANY collections in MongoDB is generally a bad idea. I’d say that it’s better to have a few very large collection with many documents in them rather than MANY MANY collections with a limited numbers of documents in them.\nAlso, this will make any aggregation involving the entire data set a lot more complex, because you would have to $unionWith all the collections to calculate the average temperature for example.\nI think if you HAVE to split your data set into a FEW collections, I would use something with a lot less cardinality so the number of collections stays completely under control.\nFor example, I would use the year or month_year.At least here, if you need to calculate the average temperature for 2020 and 2021, if you chose the first option, it’s trivial, it’s more complicated if you choose the second option.If you need the averages per months, I would go for the first or second option, in that case, both aggregations are trivial.I think it’s all coming down to “how are you going to query your data”?Another GREAT pattern for IOT data with too many documents would be to use the bucket pattern.Basically, instead of storing 1 temperature per document, you store the entire day or month of temperatures in a single document using arrays. This can divide your number of documents very significantly. But don’t make jumbo documents either. 
A few hundreds KB top would be my recommendation.Also, I would use Online Archive to archive automatically the old values into S3 to reduce the costs but keep that data queryable using the federated queries that still allow to query both the “hot” data in Atlas and the archived one.I hope this helps.\nCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "thanks @MaBeuLux88 .ok, from your answer I see that splitting per entity , like the following collection names,sensor_temp_house0001_2020\nsensor_temp_house0001_2021\nsensor_temp_house0002_2020\nsensor_temp_house0002_2021\nsensor_temp_house0003_2020\nsensor_temp_house0003_2021\n…\nsensor_temp_house9999_2020\nsensor_temp_house9999_2021it could make sense, depending on the access patterns, and if you don’t have more than maybe 1000 houses, because you could only store 10 years of data (according to the suggested max 10000 collection on the Massive Number of Collections article)which means that if you had many MANY houses, or many users (like 1M users) this pattern is simply not a good idea.I see. depends on entity number.Thank you @MaBeuLux88 and all best regards,\nJP", "username": "Joao_Pinela" }, { "code": "{\n \"_id\": \"house0001_05_2021\",\n \"values\": [\n {\n \"date\": ISODate(...),\n \"v\": 34.3\n }, \n {...}\n ]\n}\n", "text": "The bucket pattern I mentioned in my previous answer is usually the go-to solution for IOT to reduce the number of documents in the collection.\nFor example, maybe you could bucket your sensor readings by house per month.If you take one measurement every hour, you would have 31*24 = 744 values per doc which is totally manageable I think. The document would look something like:Again, it can or cannot be a valid solution, it depends on the access patterns. But this solution would divide the number of documents in the collection by 744.Maybe another solution could be around the granularity. For example maybe after one year, you don’t need to keep all the details and you could squash the readings for one day in a single averaged value. Which could be done with Realm Scheduled Triggers for instance.It’s really down to what the data is for and how it’s consumed.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I see. Thank you for the help and perspective.", "username": "Joao_Pinela" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Any case where "Collection per entity" is a good model?
2021-05-04T15:36:08.110Z
Any case where “Collection per entity” is a good model?
3,613
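The monthly bucket document Maxime sketches can be maintained with a single upsert per reading; the _id convention and names follow his example, and the running counter is an optional assumption:

```javascript
// One bucket per house per month; upsert creates the bucket on the first reading.
db.sensor_temperature.updateOne(
  { _id: "house0001_05_2021" },
  {
    $push: { values: { date: ISODate("2021-05-04T15:00:00Z"), v: 34.3 } },
    $inc:  { count: 1 }            // optional running count of readings in the bucket
  },
  { upsert: true }
)
```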
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "I have a collection and need to query on all documents where the serviceArea.regions.zipCodes have a length greater than 5 or 9, I’ve tried different approaches using $where and also $aggregate function but didn’t have any luck.db.myCollection.find({})serviceArea is an object and zipCodes is an array with elements:eg. serviceArea{regions:[(countryCode:“US”,zipCodes[“50010”,“50011”,“50012”])]}so it should be serviceArea.regions.zipCodes that I need to refer to and give me back results for zipCodes that are greater than 5 or 9Any help would be much appreciated.", "username": "Puneet_Sharma" }, { "code": "", "text": "The following is not clear to me.zipCodes that are greater than 5 or 9Please provide sample documents and result documents. Make sure you include documents that matches and some that do not matches what you want.", "username": "steevej" }, { "code": "", "text": "ZipCodes1181×442 68.9 KBI’ve uploaded an image of the document structure, so basically I want to get back all documents that have serviceArea.regions.zipCodes elements greater then 5 or 9 digits. In this example you will see elements displaying zipCodes with 5 digits.", "username": "Puneet_Sharma" }, { "code": "", "text": "Please provide real documents that we can copy directly into our installation. Retyping documents to test our idea to solve your issue is time consuming.serviceArea.regions.zipCodes elements greater then 5 or 9 digitsYou repeated the same sentence as the original post. It is not clearer. Do you want that in the same query or in 2 different queries?You might be interested in\nand", "username": "steevej" }, { "code": "", "text": "So in the same query, for example if I want to get back documents with zipCodes that have 6, 7, 8, 10, 11 digits. Does that make sense?Eg.\nDocument 1\nZipcodes - 510192\nDocument 2\nZipCodes - 5101932\nDocument 3\nZipCodes - 5101934\nDocument 4\nZipCodes - 51019345\nDocument 5\nZipCodes - 5101934567", "username": "Puneet_Sharma" }, { "code": "{\n\t\"zone\" : \"DC\",\n\t\"active\" : true,\n\t\"akas\" : [\n\t\t{\n\t\t\t\"name\" : \"Testing\"\n\t\t},\n\t\t{\n\t\t\t\"name\" : \"Testing\"\n\t\t}\n\t],\n\t\"displayName\" : \"Testing\",\n\t\"serviceArea\" : {\n\t\t\"regions\" : [\n\t\t\t{\n\t\t\t\t\"zipCodes\" : [\n\t\t\t\t\t\"50010\",\n\t\t\t\t\t\"50011\",\n\t\t\t\t\t\"50012\",\n\t\t\t\t\t\"50013\",\n\t\t\t\t\t\"50014\"\n\t\t\t\t]\n\t\t\t}\n\t\t]\n\t}\n}\n", "text": "Here is an example, does this work?", "username": "Puneet_Sharma" }, { "code": "", "text": "Have you tried to incorporate the information from the 2 links supplied?What have you tried? What issues did you get?", "username": "steevej" } ]
Need help with MongoDB query to return those documents that have a length of array element greater than 5 or 9
2021-04-30T04:30:45.828Z
Need help with MongoDB query to return those documents that have a length of array element greater than 5 or 9
2,497
null
[]
[ { "code": "", "text": "Does anyone know how to downgrade mongodb from 4.4.5 to 4.4.4?", "username": "Knightofmoon_N_A" }, { "code": "", "text": "Hi @Knightofmoon_N_AAs it is the same major version you should be able to just drop back to the 4.4.4 version, I doubt this will impact the issue you are experiencing though.If you need platform specific steps please post your OS.", "username": "chris" } ]
MongoDB 4.4.5 -> MongoDB 4.4.4
2021-05-05T10:23:58.061Z
MongoDB 4.4.5 -> MongoDB 4.4.4
1,543
null
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "", "text": "Hello, I am trying to build a friend request system. The tricky part is that certain user info is available only to friends.Currently I have multiple objects (with mostly duplicated data):I also have an object(collection) Friendship. There I store:Is there a proper and more scalable way of building this? I want to avoid having an array of friend ids under the main user object.Thank you all!", "username": "dimo" }, { "code": "partitionpartitionpartition<key>=<value><key>Userpartition: \"user=878275838475834\"PublicUserpartition: \"manyUsers=all-the-users\"FriendUserpartition: \"friendlyUser=878275838475834\"keykey == \"user\"valuekey == \"manyUsers\" && value == \"all-the-users\"truekey == \"friendlyUser\"value", "text": "HI @dimo, welcome to the community forum!For your partition key, I’d suggest creating a String attribute called partition. User partition as the partion key. partition will be set to <key>=<value>, and <key> can be different for different collections.For the User collection, docs would include partition: \"user=878275838475834\".For the PublicUser collection, docs would include partition: \"manyUsers=all-the-users\".For the FriendUser collection, docs would include partition: \"friendlyUser=878275838475834\".Your sync permissions can then call a Realm function that will make different checks, depending on the “type” of key and the value:", "username": "Andrew_Morgan" }, { "code": "", "text": "Hey @Andrew_Morgan ! Thanks for the quick reply!This will work! I think I got it from a different perspective, I was trying to have a value of “friends-only” and got super confused how I am going to check this. Your suggestion solves everything!Thanks for your time!", "username": "dimo" }, { "code": "", "text": "@dimo glad that this works for you!For future projects, I’ve just published an article on Realm provisioning strategies: https://www.mongodb.com/how-to/realm-partitioning-strategies/", "username": "Andrew_Morgan" }, { "code": "", "text": "Amazing! Thanks again @Andrew_Morgan!I am facing another problem because of the partition strategy. It’s related to accepting, declining and canceling the friend request but I will open another thread because it’s Swift related.", "username": "dimo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Partitioning a friend request system?
2021-04-29T18:20:54.458Z
Partitioning a friend request system?
3,933
null
[ "swift" ]
[ { "code": "", "text": "Hello there,We work on an app released in store and after updating their iOS version to > 14, users started to complain about info not being populated from database. The issue goes away for awhile if they reinstall the app, but after using it for a few minutes it happens again.There’s no crash, there’s no error, it’s like there’s nothing saved in the database. We could reproduce the issue only on the store version, if we attach the debugger it never happens and we were able to see that the information is still in the database.The database uses encryption as documented here.We already updated realm to the latest version (5.4.7).We checked this issue iOS 14 + Xcode 12 (Beta 3, Beta 4, Beta 5 & Beta 6): When Realm is stored in a shared app group container, backgrounding the app triggers: Message from debugger: Terminated due to signal 9 · Issue #6671 · realm/realm-swift · GitHub but it’s not the case for us, we don’t have a shared app group container.Does this happen to anyone else?Thanks,\nMagda", "username": "Magda_Miu" }, { "code": "", "text": "@Magda_Miu Feel free to open an issue on the realm-cocoa repo and I will make sure the team takes a look", "username": "Ian_Ward" }, { "code": "", "text": "realm-cocoa@Ian_Ward we already opened it: Realm Swift - iOS 14 issue - there’s no crash, but saved info in database is not available · Issue #6843 · realm/realm-swift · GitHub Please could you help us? Thank you for you guidance.", "username": "Magda_Miu" }, { "code": "", "text": "@Ian_Ward have you talked with the team assigned to the realm-cocoa repo? Please do you have any updates for us?", "username": "Magda_Miu" }, { "code": "", "text": "Yes we took a look but there isn’t any information in there for us to go on. What kind of troubleshooting steps have you taken? Like does the realm file actually exist? Seeing the logs of the app would help - you may need to add extensive logging yourself to the app to figure out what is going on.", "username": "Ian_Ward" }, { "code": "", "text": "Hi @Ian_Ward We also got this issue. Can you please help to check this issue is reported by our main client and we do not have any fix for this issue.Our observations are this issue happens only above iOS 14.", "username": "Jitendra_Jibhau_Deor" }, { "code": "", "text": "We’ve spent a significant amount of time investigating but we need a reproduction case or something to narrow down the search. The community has been unable to provide this at this time.", "username": "Ian_Ward" }, { "code": "", "text": "We have tried at our end we are not able to repro but on AppStore build one user reported this issue.\n@Ian_Ward", "username": "Jitendra_Jibhau_Deor" } ]
Realm Swift - iOS 14 issue - there's no crash, but saved info in database is not available
2020-10-06T15:45:07.558Z
Realm Swift - iOS 14 issue - there’s no crash, but saved info in database is not available
3,731
null
[ "performance", "atlas-triggers" ]
[ { "code": "", "text": "Do Realm triggers add any overhead to a production cluster, other than the reads and writes they perform on the database? I’m assuming since they are serverless functions they run in their own environment. Is that correct?", "username": "Tyler_Queen" }, { "code": "", "text": "Thats correct they run in their own cordoned off environment.", "username": "Ian_Ward" } ]
How much overhead, if any, do Realm triggers add to a production environment?
2021-05-05T02:17:57.696Z
How much overhead, if any, do Realm triggers add to a production environment?
1,825
null
[ "queries" ]
[ { "code": "\"$and\" : [ {\n \"$or\" : [\n {\n \"firstDateField\" : {\n \"$exists\" : false\n },\n \"secondDateField\" : {\n \"$gte\" : ISODate(\"2021-04-25T21:00:18.547+00:00\")\n }\n },\n {\n \"firstDateField\" : {\n \"$gte\" : ISODate(\"2021-04-25T21:00:18.547+00:00\")\n }\n }\n ]\n },\n {\n \"$or\" : [\n {\n \"firstDateField\" : {\n \"$exists\" : false\n },\n \"secondDateField\" : {\n \"$lt\" : ISODate(\"2021-04-27T14:35:18.547+00:00\")\n }\n },\n {\n \"firstDateField\" : {\n \"$lt\" : ISODate(\"2021-04-27T14:35:18.547+00:00\")\n }\n }\n ]\n }\n ]\n", "text": "Hi,\nI’m trying to get all documents that are between 2 dates with the condition if the first date field does not exists, use another date field, and sort this query with descending order, i got it working but the performance is terrible (asc order is working much better) maybe i got some indexes missing, i tried adding some with no success\nthe sort is on the firstDateFieldthis query runs for 56 seconds, if i remove the part with the “exist with gte” and “exist with lt” the query runs 150msThanks!", "username": "A_H" }, { "code": "", "text": "Hello @A_H, welcome to the MongoDB Community forum!Please include the details of indexes created on the collection in your reply. Also, generate and post the query plan using the explain method (use the “executionStats” mode) on the query (including the sort operation).Also, tell about the MongoDB version, cluster type, and the collection size / number of documents.", "username": "Prasad_Saya" } ]
Help with dates query
2021-05-05T06:33:04.796Z
Help with dates query
1,904
null
[ "atlas-device-sync" ]
[ { "code": "initUserItemsObserver()private func initUserItemsRealm(){\n self.userItemRealm = try ! Realm(configuration: RealmConstants.USER!.configuration(partitionValue: <partition_value>))\n self.userItems = self .userItemRealm?.objects(Items. **self** ).sorted(byKeyPath: \"name\")\n self .initUserItemsObserver()\n}\n_id_idinitUserItemsRealm()_id", "text": "Hi all!I’m developing an iOS app using Realm. I read some docs to understand the functioning of Realm data management, especially with the Sync functionality. However I still have a lack of understanding on some points.My app allows users to choose different items from different categories and then generate some reports containing details about the selected items.Here are two use cases happening in my app:Use case #1To select the items, the user navigate through different categories.Let’s say I have only 2 levels on categories, the user goes to the category 1, then 1.1 which displays a list of items, then it goes to category 2 → 2.1 which displays a list of other items and so on.Currently each time a category is opened, I call the following function which open a Realm and then call the function initUserItemsObserver() which init an observer which do some UI stuffs.I want to know if there is a way to optimize my algorithm in terms of number of request and sync time because I have categories with hundreds/thousands of items.Use case #2Once a report is generated, I save it in a collection. The report object contains an array of _id corresponding to the one of the items.When a user wants to see an old report, I use the _id of the items stored in the array to get the item objects and then generate the report.My first question is: In that case, to get the item objects from their _id is it better to open a Realm with an observer (as in the initUserItemsRealm() ) or to use AsyncOpen ?My second question is: In order to optimize the number of requests and the time, I was thinking of storing the information needed from the items directly in the report object (instead of storing only the _id in the array). Instead of storing directly the information in the report object, I was thinking of using the One-to-Many Relationship to access the information needed from the item objects. However, I read that when using the Relationships « Realm Database executes read operations lazily as they come », but as I will read all the item objects is there a benefit of using the Relationship?My last question is about the lifecycle of the objects. As explained in the « Think Offline-first » paragraph, the changes received from the server are integrated into the local realm. If I understand well, does it mean that after the objects have been downloaded once to generate the report, if the user wants to generate the same report again, realm will take the objects from the local file instead of downloading them from the server?Thanks for your help!", "username": "Julien_Chouvet" }, { "code": "initUserItemsRealm()", "text": "I want to know if there is a way to optimize my algorithm in terms of number of request and sync time because I have categories with hundreds/thousands of items.Realm is an offline first database. That means ALL of the data is stored locally and then sync’d at a later time - typically milliseconds. So, if you’re performing a query, that’s a local function and it will return the data as fast is your drive will return it. 
Fortunately, Realm objects are lazily loaded so even with thousands of items, the results will be populated ‘instantly’.So… I am not sure what your algorithm is but your code looks great to me!My first question is: In that case, to get the item objects from their _id is it better to open a Realm with an observer (as in the initUserItemsRealm() ) or to use AsyncOpen ?In a sync environment, you always open realm the first time with .asyncOpen. See Sync Changes Between Devices - iOS SDKThereafter you can access realm via the code in your question.My second question is: In order to optimize the number of requests and the time, I was thinking of storing the information needed from the items directly in the report objectThis sounds like you are asking about denormalizing your data. In a nutshell that means duplicating your data into smaller or different chunks to improve read performance. I am not really sure it’s necessary in this use case; a lot of that would depend on how long it takes to generate the report in the first place. If it takes 18ms for example, then denormalizing the data is not needed.That technique is really powerful when you are dealing directly with a NoSQL database. While that is what MongoDB uses on the back end for storage, up front here in the drivers seat we are insulated from that and get to play with and query super flexible objects that represent that data in an object oriented way.My last question is about the lifecycle of the objects…If I understand well, does it mean that after the objects have been downloaded once to generate the report,I think this wraps back to your first question; objects are not downloaded once in response to a read or a query. All objects exist on the local drive as well as on the server. So when you run a report, no additional information is downloaded as it’s already there.Back when Realm was not part of MongoDB, they had a thing called a Query based aka Partial sync where the app would only download specific realm data. That changed and now its a 100%. So keep that in mind - local first really means ‘local’; all of your data is stored locally and sync’d at a later time.", "username": "Jay" }, { "code": "Realm(config:)Realm(config:)", "text": "Thanks a lot for your answers @Jay! It’s a lot more clear now.I still have 2 questions that came while reading your answers.1 - When you say:all of your data is stored locallyI’m wondering if it is “simply” stored on my Iphone because as said in the link you provided “Realm avoids copying data into memory except when absolutely required” and if it’s indeed the case, is there a way to limit the size on the data stored?2 - My second question is about the Sync Runtime. Is this metric increased every time I open a realm with Realm(config:)? For example, if I open a realm with Realm(config:) and the data stored locally is the same than the one on the server (no changes have been made), does the Sync runtime increase?\nSame question when I init an observer, is the Sync Runtime increased until the observer is invalidated or just when data are downloaded (if some are)?", "username": "Julien_Chouvet" }, { "code": "", "text": "Hi Julien,to answer #1. You can use Realm Sync Partitioning to control what data is synced to the device (typically based on the user and/or what they’ve asked to see). 
I’ve a new article that will hopefully go live tomorrow that covers various partitioning strategies – I’ll try to remember to circle back here with the link once it’s live (but if I forget, then it will appear in this list: https://www.mongodb.com/learn/?products=Mobile).Cheers, Andrew.", "username": "Andrew_Morgan" }, { "code": "class WineClass: Object {\n @objc dynamic var _partitionKey\n @objc dynamic var varietal = \"\"\n @objc dynamic var rating = \"\"\n}\nWineClass\n _partitionKey = \"US\"\n varietal = \"Cabernet Sauvignon\"\n rating = \"Excellent\"\nWineClass\n _partitionKey = \"Italy\"\n varietal = \"Nebbiolo\"\n rating = \"Good\"\nlet config = user.configuration(partitionValue: \"Italy\")\nRealm.asyncOpen(configuration: config) { result in...\nRealm(config:)let wineResults = realm.objects(WineClass.self) // <- results from disk\nnotificationToken = self. wineResults.observe { changes in\nlet wineResults", "text": "Let me elaborate a bit on question #1. I am sure @Andrew_Morgan will cover it more thoroughly but coding examples are always good.Suppose you have a wine cataloging app. It stores information about wines; the grape (varietal), a rating and the country of origin etc. In this use case, we’re going to use the country of origin as the _partitionKey; Here’s the object:So an WineClass object from the United Stated may look likeone from Italy may look likeSo as you can see we have a single object WineClass, that has different partitions. Note that in the big picture, a partition = a Realm. When you Read realm, the partition you want to read is specifiedSo only the wines from Italy are sync’d - the wines from the US will never touch your disk. So when I mentioned ALL data is sync’d, what was meant was ALL data whose partitions you access from code are synch’d.Is this metric increased every time I open a realm with Realm(config:) ?If the data on the server matches the local data, there’s nothing to sync. When you add an observer, it’s observing something that’s already been lazily loaded, and those objects were stored locally. In other words if you wanted to observe your wines for changes it would look something like this:The let wineResults lazily loads the wines (from disk) and then the observer observes those results. There will be no time impact for that above code, so no it would not impact sync’ing since that’s automagically done in the background.when data are downloadedRemember data is not downloaded upon request; it’s local. Any partitions you accessed when opening Realm (as shown above) already has the data sync’d by the time you’re ready to use it.", "username": "Jay" }, { "code": "an active connection to the sync server", "text": "Thanks again for your answers!I didn’t know that data is sync’d automatically in background.I just have one last question. In the Billing documentation, it is said:Realm counts the total amount of time in which a client application user has an active connection to the sync server even if they are not transferring data at the timeWhat does an active connection to the sync server means?\nDoes it means that as long as my app has opened (at least) one Realm (and thus has a local version which is sync’ing in background), I’m connected to the sync server?", "username": "Julien_Chouvet" }, { "code": "", "text": "Does it means that as long as my app has opened (at least) one Realm (and thus has a local version which is sync’ing in background), I’m connected to the sync server?This is correct. 
For MongoDB Realm Sync:Price: $0.08 / 1,000,000 runtime minutes ($0.00000008 / min)Formula: (# Active Users) * (Sync time (min / user)) * ($0.00000008 / min)Free Tier Threshold: 1,000,000 requests or 500 hours of compute or 10,000 hours of sync runtime (whichever occurs first)", "username": "Jay" }, { "code": "", "text": "Thanks for all your answers @Jay!", "username": "Julien_Chouvet" } ]
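The sync-runtime pricing quoted above is easiest to sanity-check with a small worked example. Only the $0.00000008-per-minute rate comes from the thread; the user count and minutes per day below are made-up inputs.

```python
RATE_PER_MINUTE = 0.00000008       # $ per sync-runtime minute (from the thread)

active_users = 5_000               # assumed
minutes_per_user_per_day = 90      # assumed: a synced realm open ~1.5 h per day
days = 30

runtime_minutes = active_users * minutes_per_user_per_day * days
cost = runtime_minutes * RATE_PER_MINUTE
print(f"{runtime_minutes:,} sync-runtime minutes -> ${cost:.2f} per month")
# 13,500,000 sync-runtime minutes -> $1.08 per month
```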
Understanding Realm data management & Sync
2021-04-28T12:51:13.479Z
Understanding Realm data management & Sync
3,641
null
[ "cxx" ]
[ { "code": "// main.cpp\n#include <bson/bson.h>\n\nint main()\n{\n return 0;\n}\ncmake_minimum_required(VERSION 3.5)\n\nproject(test)\n\nfind_package(bson-1.0 1.7 REQUIRED)\nmessage(STATUS \"find BSON_INCLUDE_DIRS = ${BSON_INCLUDE_DIRS}\")\nadd_executable(main main.cpp)\ntarget_include_directories(main PRIVATE\n ${BSON_INCLUDE_DIRS}\n)\n~/Documents/code/test_cmake$ cmake .\n-- find BSON_INCLUDE_DIRS = \n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/xzc/Documents/code/test_cmake\n~/Documents/code/test_cmake$ make\nScanning dependencies of target main\n[ 50%] Building CXX object CMakeFiles/main.dir/main.cpp.o\nDocuments/code/test_cmake/main.cpp:1:10: fatal error: bson/bson.h: No such file or directory\n 1 | #include <bson/bson.h>\n | ^~~~~~~~~~~~~\ncompilation terminated.\nfind_package(libbson-1.0 1.7 REQUIRED)~/Documents/code/test_cmake$ cmake .\nCMake Warning at /usr/local/lib/cmake/libbson-1.0/libbson-1.0-config.cmake:15 (message):\n This CMake target is deprecated. Use 'mongo::bson_shared' instead.\n Consult the example projects for further details.\nCall Stack (most recent call first):\n CMakeLists.txt:5 (find_package)\n\n\n-- find BSON_INCLUDE_DIRS = /usr/local/include/libbson-1.0\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/xzc/Documents/code/test_cmake\n~/Documents/code/test_cmake$ make\nScanning dependencies of target main\n[ 50%] Building CXX object CMakeFiles/main.dir/main.cpp.o\n[100%] Linking CXX executable main\n[100%] Built target main\nmongo::bson_sharedtarget_link_libraries (hello_bson PRIVATE mongo::bson_shared)find_package (bson-1.0 1.7 REQUIRED)", "text": "According to the online example, I write a demo. Test under Ubuntu 20.04 and Debian 10.CMake seems to find the bson-1.0, but the BSON_INCLUDE_DIRS not set, gcc cant find the bson.h header.using find_package(libbson-1.0 1.7 REQUIRED) works, except it warn about deprecatedI don’t understand the warning. mongo::bson_shared is a library, like target_link_libraries (hello_bson PRIVATE mongo::bson_shared) but in my case I dont even link with libbson, only include it. How can I avoid this warning.Also, is the online example outdate? find_package (bson-1.0 1.7 REQUIRED) does not work.", "username": "x_changnet" }, { "code": "libbson-1.0bson-1.0libbson-1.0bson-1.0bson-1.0find_package() get_target_property(BSON_INCLUDE_DIRS mongo::bson_shared INTERFACE_INCLUDE_DIRECTORIES)\n-- The C compiler identification is GNU 8.3.0\n-- The CXX compiler identification is GNU 8.3.0\n-- Check for working C compiler: /usr/bin/cc\n-- Check for working C compiler: /usr/bin/cc -- works\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Detecting C compile features\n-- Detecting C compile features - done\n-- Check for working CXX compiler: /usr/bin/c++\n-- Check for working CXX compiler: /usr/bin/c++ -- works\n-- Detecting CXX compiler ABI info\n-- Detecting CXX compiler ABI info - done\n-- Detecting CXX compile features\n-- Detecting CXX compile features - done\n-- find BSON_INCLUDE_DIRS = /usr/local/include/libbson-1.0\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/roberto/cmake_test/build\n", "text": "@x_changnet, there are several things going on here. The libbson-1.0 package is an older package the required the consuming project to manually set each of the properties based on variables set by the package. This was not very CMake-like and so the new bson-1.0 package uses a proper CMake target. 
The libbson-1.0 package is deprecated and will be discontinued in a future release. So, you are better off to use the bson-1.0 package, which is planned to be supported long term.That said, to use the bson-1.0 package in the way that you want, that is only for headers and without linking to the library itself, you need to extract the appropriate property from the imported CMake target. You can do that by adding after find_package():When I did that, I get this output from CMake:Please give that a try and follow-up if you are still unable to get it working.", "username": "Roberto_Sanchez" }, { "code": "target_link_libraries (hello_bson PRIVATE mongo::bson_shared)target_include_directories()target_link_libraries (hello_bson PRIVATE mongo::bson_shared)bson.hfind_package (bson-1.0 1.7 REQUIRED)get_target_property(BSON_INCLUDE_DIRS mongo::bson_shared INTERFACE_INCLUDE_DIRECTORIES)", "text": "Thank you for the reply, it really help.I look into INTERFACE_INCLUDE_DIRECTORIES it turns out when target_link_libraries (hello_bson PRIVATE mongo::bson_shared) CMake will read include directories from mongo::bson_shared. target_include_directories() is’t needed at all.In my case, I trying to build a library base on bson, it only need the bson header, not link with it, so target_link_libraries (hello_bson PRIVATE mongo::bson_shared) is missing, leading to gcc complaints bson.h header file missing.Now I use find_package (bson-1.0 1.7 REQUIRED) and extract include directories get_target_property(BSON_INCLUDE_DIRS mongo::bson_shared INTERFACE_INCLUDE_DIRECTORIES), the deprecated warning is gone, and everything works find.", "username": "x_changnet" } ]
Confusion about using libbson in cmake
2021-05-03T04:22:25.004Z
Confusion about using libbson in cmake
3,834
https://www.mongodb.com/…e2ef05edc23e.png
[ "atlas-functions", "realm-web" ]
[ { "code": "clearArgstypeof nullobjectclearArgs", "text": "If we call a realm function from realm-web SDK passing any null parameter, we are getting the following errors.While digging down the clearArgs function, it is found that as typeof null is object, it is throwing the above error. Definition of clearArgs is as follows:I think this is a bug in realm-web. Otherwise, please help how to overcome this error.Just to inform you that I am upgrading my app from MongoDB Stitch to MongoDB Realm and the same function was working in Mongodb Stich.", "username": "Sudarshan_Roy" }, { "code": "", "text": "@Sudarshan_Roy Can you file an issue here please with steps to reproduce - GitHub - realm/realm-js: Realm is a mobile database: an alternative to SQLite & key-value stores", "username": "Ian_Ward" } ]
Problem in passing one or few parameters as null in realm function (Bug!)
2021-04-23T05:46:30.967Z
Problem in passing one or few parameters as null in realm function (Bug!)
2,907
null
[ "capacity-planning" ]
[ { "code": "", "text": "Hello,I need to prepare a technical plan to deploy mongoDB cluster on multi-AZ in mongoDB Atlas and how I can perform a stress test for it, e.g. :Appreciate your support.", "username": "Haytham_Mostafa" }, { "code": "Multi-Cloud, Multi-Region & Workload isolation", "text": "Hi @Haytham_Mostafa,Write operations always happen on the Primary first (then replicated to the secondaries). So 3 or 50 nodes => Same number of write transactions / sec.\nReads could technically scale up if you add more nodes, if you start reading from secondaries (and accept eventual consistency, $nearest, etc), but it’s not a good idea.Replica Set are for High Availability, not for scaling. If you rely on your 3 nodes to provide 30K reads / seconds, if one node fails, sending suddenly 15K reads / secs (instead of 10K) to the 2 remaining nodes might just DDOS them (domino effect).Your first lever to get more reads & writes is vertical scaling: migrate from M10 to M30 in Atlas for example.\nThe second and most efficient lever to get real scaling on both reads and writes is Sharding: multiply the number of replica sets to make them work as a team in parallel.That being said, by default, Atlas deploys 3 node replica sets.\nIf you want more nodes though, you can activate the Multi-Cloud, Multi-Region & Workload isolation option and increase the number of nodes in the region.\nimage1116×842 95.1 KB\nBut please, remember that this is not scaling up. It’s just adding more resilience to your cluster.Sharding (==scaling up) is this way:\nimage994×249 28.2 KB\nCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks a lot for these valuable info.", "username": "Haytham_Mostafa" } ]
Deploy mongoDB on Multi-AZ in MongoDB Atlas
2021-05-04T14:33:21.213Z
Deploy mongoDB on Multi-AZ in MongoDB Atlas
4,460
null
[ "python", "production" ]
[ { "code": "", "text": "We are pleased to announce the 3.11.4 release of PyMongo - MongoDB’s Python Driver. This release fixes a bug that caused MongoClient(s) to mistakenly attempt to create minPoolSize connections to arbiter nodes in a replica set.See the changelog for a high-level summary of what is in this release or see the PyMongo 3.11.4 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!", "username": "Shane" }, { "code": "", "text": "", "username": "system" } ]
PyMongo 3.11.4 Released
2021-05-04T22:34:04.542Z
PyMongo 3.11.4 Released
3,106
null
[ "data-modeling", "swift", "atlas-device-sync" ]
[ { "code": "user_id{\n \"title\": \"DFFilter\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"user_id\",\n \"name\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"user_id\": {\n \"bsonType\": \"string\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"edits\": {\n \"bsonType\": \"objectId\"\n }\n }\n}\n@objc public class DFFilter : Object {\n @objc dynamic public var _id: ObjectId = ObjectId.generate()\n \n // Partition Key\n @objc dynamic var user_id: String = \"\"\n\n @objc dynamic public var name: String = \"\"\n @objc dynamic public var edits: DFEdits? = nil\n \n override public static func primaryKey() -> String? {\n return \"_id\"\n }\n // ...\n let collection = database.collection(withName: \"DFFilter\")\n\n let filterDocument : Document = [\n \"user_id\": [\n \"$ne\": AnyBSON(forUser.id)\n ]\n ]\n\n collection.find(filter: filterDocument) { result in\n switch result {\n case .failure(let error):\n print(error)\n case .success(let documents):\n print(documents)\n completion(documents.map({ document in\n let dfFilter = DFFilter(value: document)\n print (dfFilter)\n return \"foo\" // just a placeholder for now\n }))\n }\n }\nTerminating app due to uncaught exception 'RLMException', reason: 'Invalid value 'RealmSwift.AnyBSON.objectId(6089b6e38c3fafc3e01654b1)' of type '__SwiftValue' for 'object id' property 'DFFilter._id'.' (lldb) po document\n▿ 4 elements\n ▿ 0 : 2 elements\n - key : \"_id\"\n ▿ value : Optional<AnyBSON>\n ▿ some : AnyBSON\n - objectId : 6089b6e38c3fafc3e01654b1\n ▿ 1 : 2 elements\n - key : \"name\"\n ▿ value : Optional<AnyBSON>\n ▿ some : AnyBSON\n - string : \"Hey Hey\"\n ▿ 2 : 2 elements\n - key : \"user_id\"\n ▿ value : Optional<AnyBSON>\n ▿ some : AnyBSON\n - string : \"6089b62f9c0f6a24a1a5794b\"\n ▿ 3 : 2 elements\n - key : \"edits\"\n ▿ value : Optional<AnyBSON>\n ▿ some : AnyBSON\n - objectId : 6089b6e38c3fafc3e01654b2\neditsDFFilterDFEdits", "text": "I’m working on a Swift iOS app using Realm Sync and MongoDB Atlas. It’s a photo editing app, and I want people to be able to create filters that they have write access to, and be able to share them, so that other users can have read-only access to them to download them on their phone.I’m able to sign in, open a realm, create filters, store them, and access them.However, I’d like to run a query for all filters available to download (i.e. those which aren’t owned by me). My data is partitioned by the user_id property.Here is the schema for my filters:And here is my equivalent swift Object:And here is how I’m performing the query:However, the initializer is failing to create a local unmanaged DFFilter object from the BSON Document I’m getting from Realm:Terminating app due to uncaught exception 'RLMException', reason: 'Invalid value 'RealmSwift.AnyBSON.objectId(6089b6e38c3fafc3e01654b1)' of type '__SwiftValue' for 'object id' property 'DFFilter._id'.' Here’s what the BSON document looks like when I print it in the console:I’ve tried search around for answers but I’m coming up blank. This indicates to me that potentially my whole approach to this problem might be mistaken?It is worth pointing out that the edits property of DFFilter which you see in the schema is a different object of type DFEdits. 
I’m not entirely sure how MongoDB can resolve these links?", "username": "Majd_Taby" }, { "code": "user_idpartitionpartition: \"user=7567365873487465783\"partition: \"anyone=read\"", "text": "Hi @Majd_Taby, welcome to the community forum!I’d like to suggest a slightly different strategy (feel free to ignore if it doesn’t fit your use case)…It would be nice if the filters for all users were synced to the mobile app (so that you can still browse them while offline).Your current partitioning key (user_id) doesn’t allow that and so I’d suggest replacing it with one named partition. For the documents that should only be visible to their owner, you’d set partition: \"user=7567365873487465783\" and then you can bind a function to your sync permissions to check that the requesting user is the only one that accesses those docs. The filters (either the originals or a copy created using Realm database triggers) can then be stored in documents where you set partition: \"anyone=read\"). The sync rules would then prevent non-owners from updating other people’s filters.The downside to this approach is that all of the filters (from all users) are stored on the mobile device and so you’d need to assess whether the storage impact is justified by being able to access them when offline.I’ve a new article that will hopefully go live tomorrow that covers various partitioning strategies – I’ll try to remember to circle back here with the link once it’s live (but if I forget, then it will appear in this list: https://www.mongodb.com/learn/?products=Mobile).", "username": "Andrew_Morgan" }, { "code": "init(value:)", "text": "Thanks for the response, @Andrew_Morgan.I don’t think it makes sense for us to store every single shared feature globally on every customer’s device.Let me try to ask the question more directly:Should I safely assume that I can query Documents from MongoDB with a schema of DFFilter, and initialize an object of type DFFilter locally in my client SDK?This really sounds like what init(value:) is intended to be for in the Object class definition, but I’m really surprised that it’s failing when trying to populate the object_id.Is this just a bug in RealmSwift? Or do I have incorrect expectations? Do I need to convert the BSON document to a more direct JSON representation?", "username": "Majd_Taby" }, { "code": "", "text": "I’ve moved this question to the Mobile SDKs forum since I wasn’t aware of that forum’s existence when I decided to post here. Apologies for the confusion: (New Thread)[Generating Unmanaged Realm Objects from Equivalent MongoDB Atlas BSON Documents (Cross-Post)]", "username": "Majd_Taby" }, { "code": "editsobjectIdeditDFEditsDFEditsDFFilterDFFilterDFEditsDFEditsEmbeddedObjectObject", "text": "Just comparing your Atlas schema with your Swift class. In Atlas, edits is an objectId. In your Swift class, edit is an optional DFEdits (so in Atlas it looks like your trying to work with a reference, whereas in Swift, you’re trying to embed the DFEdits within DFFilter).If you want to use embedding (as seems to be the case on the Swift side) then your schema should show the edit data fields being embedded within the DFFilter collection (i.e. there’s no DFEdits collection). Note that on the Swift side, the DFEdits needs to conform to EmbeddedObject rather than Object.", "username": "Andrew_Morgan" }, { "code": "ObjectEmbeddedObject", "text": "Thanks for the follow-up, @Andrew_Morgan.DFEdits documents can exist on their own. DFFilter objects however, must contain a DFEdits reference. 
I did not want to embed the edits within the filter due to the fact that they can exist as their own top-level Objects, so they couldn’t extend EmbeddedObject.It’s hard to remember exactly why I made it optional in DFFilter, but I believe the SDK complained to me about making a custom reference required? I’ll have to validate that.", "username": "Majd_Taby" }, { "code": "ObjectIdedits", "text": "Makes sense, but I believe you need to be consistent between the schema and the Object definition to either embed or use a reference. atm Atlas is using a reference but the Realm Object is embedding. Perhaps your Realm Object should contain an ObjectId for edits too?", "username": "Andrew_Morgan" }, { "code": "", "text": "Noted! Makes sense. I think ObjectId is the solution here", "username": "Majd_Taby" } ]
Generating Unmanaged Realm Objects from Equivalent MongoDB Atlas BSON Documents
2021-04-28T23:06:57.591Z
Generating Unmanaged Realm Objects from Equivalent MongoDB Atlas BSON Documents
3,477
null
[ "atlas-device-sync", "android", "kotlin", "migration" ]
[ { "code": "", "text": "My android app has not been updated for a couple of years and is still using a non-sync Realm V5.15.1. I would like to update it to the latest MongoDb version with sync capabilities for users. I also need to make changes to a lot of the classes and fields. What is the best way to update while making sure users retain their data?I have been unable to find resources dealing with this scenario. Any advice, tips, links or guides will be greatly appreciated.", "username": "Deji_Apps" }, { "code": "", "text": "@Deji_Apps Non-sync realms and sync realms are of a different storage format so in order to migrate to using a sync realm you would need to open two realms in the app, the old non-sync realm and the new synced realm, and then copy the data out of the non-sync realm and insert it into the sync realm as new realm objects. From there you can then close and delete the old non-sync realm and just continue using the sync realm going forward. Sync realms behave just like non-sync realms in terms of APIs - querying, writing, and notifications are all the same so hopefully the real meat of your application code will not need to change.The biggest difference is that you open a sync realm with a SyncConfiguration and need to login a user in order to sync data but most other APIs remain the same.You can quickly compare the differences by looking at the non-sync realm quickstart -And the sync realm quickstart -", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migrating from non-synced realm to MongoDB
2021-05-01T14:07:51.058Z
Migrating from non-synced realm to MongoDB
3,680
null
[ "unity" ]
[ { "code": "Unity-mongo-csharp-driver-dllsclient = new MongoClient(MONGO_URI);\ndb = client.GetDatabase(DATABASE_NAME);\ncollection = db.GetCollection<MongoClass>(COLLECTION_NAME);\nList<MongoClass> fetchedList = collection.Find(i => true).ToList();Realms.Sync.MongoClient 'MongoClient' does not contain a constructor that takes 1 arguments", "text": "Hello, there!I have been developing a mobile game in unity for a while now and it is very DB dependent, so I thought that MongoDB was the best alternative and decided to start studying how it works.My first solution was downloading the Unity-mongo-csharp-driver-dlls, given that it seemed pretty simple to use. I tested several things within the editor and made it work EXACTLY as I wanted by fetching the data using the methods below.and then I was able to do whatever I wanted with the data, like fetch all:List<MongoClass> fetchedList = collection.Find(i => true).ToList();But, once I had the project built to mobile, It stopped working, as explained in this topic by people with similar issues: MongoDB and Unity il2cpp Mobile Builds.So I started studying the MongoDB Realm as an alternative and found it very confusing. I believe it is due to the fact that it is still in alpha, but even so, I followed the steps and imported the package from GitHub as orientated, but couldn’t get much farther than that.What I wish to do is the same as I described above BUT something that still works on mobile builds.I also came across this topic but couldn’t apply the recommendation: Accessing Realm features with Unity and C#.For starters I was not able to specify a MongoClient for some reason. This is the error that I get: Realms.Sync.MongoClient 'MongoClient' does not contain a constructor that takes 1 argumentsAnyway, I described thoroughly what I aim to achieve and the problems to do so. I will be waiting for a reply!", "username": "Nicokkam" }, { "code": "", "text": "Hey, thanks for trying out the early preview of the Realm Unity SDK. Unfortunately, I have mixed answers to your questions. First, the current version of the SDK has only been tested and proven to work as a local database. The sync and remote MongoDB functionalities are not expected to work. Fortunately, most of the issues regarding those have been addressed and we expect to release a new version next week where everything should work in the editor/with the Mono backend.The bad news is that we’re still working through the IL2CPP problems and we don’t expect most of the functionality to work well there yet. We definitely aim to fully support it, but it’s a contained environment, significantly different from the Mono or .NET runtimes and it takes quite a bit of time to adapt to it.", "username": "nirinchev" }, { "code": "", "text": "Thanks for the reply!I see. So, for the time being there isn’t a way to acess a DB in Mongo Atlas through a mobile build? Even with other API’s?", "username": "Nicokkam" }, { "code": "", "text": "You can use the GraphQL API which can be called with a simple HttpClient.", "username": "nirinchev" }, { "code": "", "text": "Could you give me a simple example?\nI have some doubts, like, what url do I have to provide?", "username": "Nicokkam" }, { "code": "", "text": "You can find some examples under the Run GraphQL Operations from a CLI section of the docs. 
Those showcase using curl, but that should translate fairly easily to the HttpClient API.", "username": "nirinchev" }, { "code": "", "text": "@Nicokkam We’d love to hear more about the game you are looking to build with the Realm Unity SDK - drop me a line at [email protected] and I can let you fill you in on our gaming roadmap for Realm", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
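Since the GraphQL route suggested above is plain HTTPS, it works from any runtime that can issue web requests; the sketch below uses Python's requests package purely to show the shape of the two calls. The endpoint URLs follow the pattern documented for MongoDB Realm at the time of this thread but should be copied from the app's own GraphQL page; the app id, credentials and query are placeholders.

```python
import requests

APP_ID = "mygame-abcde"  # placeholder Realm app id
BASE = f"https://realm.mongodb.com/api/client/v2.0/app/{APP_ID}"

# 1. Log in (email/password provider here) to obtain an access token.
login = requests.post(
    f"{BASE}/auth/providers/local-userpass/login",
    json={"username": "player@example.com", "password": "secret"},
)
login.raise_for_status()
token = login.json()["access_token"]

# 2. Run a GraphQL query against the app's generated schema.
resp = requests.post(
    f"{BASE}/graphql",
    json={"query": "query { scores(limit: 10) { player points } }"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json())
```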
MongoDB and Realms with Unity3D for Mobile Builds Recommendation
2021-05-01T20:48:50.295Z
MongoDB and Realms with Unity3D for Mobile Builds Recommendation
4,865
null
[ "unity" ]
[ { "code": "", "text": "Hey\nSo Ive been wondering how to properly implement MongoDB to a Unity project\nIve found some Unity specific DLLs someone probably remade from the base C# driver: GitHub - Julian23517/Unity-mongo-csharp-driver-dlls and with the little documentation and tutorials Ive made leaderboards and a login system: https://twitter.com/HyperLemonPL/status/1385269669364477958?s=19\nAre there any actual official DLLs for Unity, and how to use Realms SDK(I haven’t yet read the post I got sent), because currently the login data is just stored locally in a file and I guess Realms allow for better authentication etc\nAnd also how to hide a login token/passsord to the database from a script?", "username": "Maciej_Krefft" }, { "code": "", "text": "Hi @Maciej_Krefft - welcome to the community forum!You might want to start by watching this video that was published a few days ago: Introduction to the Realm SDK for Unity3D", "username": "Andrew_Morgan" }, { "code": "", "text": "To add onto what @Andrew_Morgan gave you, take a look at this:https://www.mongodb.com/how-to/getting-started-realm-sdk-unity/It is what I had given you on Twitter.I’d encourage you to take it slow first. Get the Unity SDK into your project and become familiar with working with your game data locally. Then when you feel you’re ready, then we can focus on the sync or auth side of things.Just remember, that as of right now April 2021, the Unity SDK is alpha. By the Fall we should have a production release, but in the meantime, there could be bugs.Best,", "username": "nraboy" }, { "code": "", "text": "Thank you, Ill get into it soon\nDoes Realm SDK also allow me to solve the problem of uncovered DB password in code?", "username": "Maciej_Krefft" }, { "code": "", "text": "@Maciej_Krefft The SDK does expose a built-in encryption API -Not sure if that will help your use case?", "username": "Ian_Ward" }, { "code": "", "text": "@Maciej_Krefft We’d love to hear more about the game you are looking to build with the Realm Unity SDK - drop me a line at [email protected] and I can let you fill you in on our gaming roadmap for Realm", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unity gamedev with MongoDB/Realms SDK
2021-04-22T22:46:47.986Z
Unity gamedev with MongoDB/Realms SDK
5,060
null
[ "dot-net", "unity" ]
[ { "code": "", "text": "Hello there!\nWe are currently developing an app with C# via Unity, using the MongoDB C#/.NET Driver. Working with the database itself works flawless (insert, update, read, etc.), but we have problems to access the Mongo DB Realm features. We’re trying to call serverside functions, but had no success. I see that C#/Unity is not officially supported by Realm, but want to ask if there are some known solutions or workarounds. My current solution approach is to my write my code in Javascript (which itself worked, tested via Node.js), and then try to access this JS methods from C#. This should be possible theoretically, but is somewhat difficult, since Unity does not longer support Javascript. Has anyone experiences or suggestions? Thank you in advance!", "username": "Felix_Reichel" }, { "code": "// Sample Connection String (not all options may be used by you)\nvar client = new MongoClient(\"mongodb://<user>:<password>@realm.mongodb.com:27020/?authMechanism=PLAIN&authSource=%24external&ssl=true&appName=realm-application-abcde:mongodb-atlas:local-userpass\");\n// Set your Realm function and any arguments \nvar command = new BsonDocument { { callFunction: \"getEmployeeById\", arguments: [\"5ae782e48f25b9dc5c51c4a5\"] } };\n\nvar mongoDatabase = mongoClient.GetDatabase(\"database\");\nvar result = mongoDatabase.RunCommand<BsonDocument>(command);\n", "text": "Hi @Felix_Reichel!Since Realm natively implements a subset of the MongoDB Wire Protocol, it might be possible to configure your app to connect to Realm using that and the .NET driver.Check out Connect Over the Wire Protocol for more details.But the TL;DR of it is:First, Enable the wire protocol connections in RealmThen, you’d connect to Realm via connection string, using the driver. What’s key here is the appName parameter as that’s how you’d tie it to your Realm app:And the call a function:Hope this helps and let me know if this works (or doesn’t and we can investigate further )!", "username": "yo_adrienne" }, { "code": "", "text": "@Felix_Reichel Realm .NET SDK now supports Unity - https://www.mongodb.com/how-to/getting-started-realm-sdk-unity/", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Accessing Realm Features with Unity and C#
2020-06-18T16:14:17.059Z
Accessing Realm Features with Unity and C#
5,128
null
[ "atlas-device-sync", "kotlin" ]
[ { "code": "", "text": "Hi there!\nThere is a realm-kotlin project on Github.\nAs far as I understand, it does not support the Sync feature currently and one can only use it as a local database.Do I understand properly or may be there is an opportunity to use it with the Sync?\nIf it supports the Sync, could you tell how to enable it?\nIf it does not support the Sync, could you tell if you plan to support it in the future? Maybe some time estimations? Maybe a link to the issue with it?Thank you in advance!", "username": "111463" }, { "code": "", "text": "It does not support sync yet but we are working on it. We are hoping for a beta before the end of the year.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does multiplatform realm-kotlin support realm-sync
2021-05-03T13:13:17.659Z
Does multiplatform realm-kotlin support realm-sync
2,490