Dataset columns: image_url (string, 113-131 chars), tags (list), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k). Each record below lists these fields in this order.
null
[ "react-native" ]
[ { "code": "addressesaddresses: { type: \"list?\", objectType: \"Address\" },addresses: 'Address[]?'", "text": "In the example here: https://www.mongodb.com/docs/realm/sdk/react-native/examples/define-a-realm-object-model/#define-an-embedded-object-propertyHow can I make the addresses field optional in the BusinessSchema?I have tried addresses: { type: \"list?\", objectType: \"Address\" },and addresses: 'Address[]?'but both seem to raise errors.", "username": "Rob_Elliott" }, { "code": "", "text": "To make it required you can just do “addresses: 'Address[]”. I think that it is not a valid schema to make it optional due to some under-the-hood limitations, but curious if there is a reason you need it to be optional? If you are trying to make the list as a whole optional then making it required is no real difference as Realm initializes all lists anyways to empty. If you were trying to make the elements within the list optional, then its worth re-examining why you want that since it is probably safer and better schema design to remove elements from the list that are no longer objects instead of letting them remain there as null entries.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to make a list of embedded objects optional in a Realm Object Model?
2022-07-05T09:48:32.946Z
Is it possible to make a list of embedded objects optional in a Realm Object Model?
2,277
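A minimal Realm JS schema sketch of the shape Tyler describes, with the list of embedded objects declared as required (the only form Realm accepts). The schema and property names follow the docs example referenced in the question; the individual Address fields are assumptions added for illustration.

```javascript
// Hedged sketch: an embedded Address object plus a required list on Business.
// The street/city fields are illustrative assumptions, not taken from the thread.
const AddressSchema = {
  name: "Address",
  embedded: true, // embedded objects live only inside a parent object
  properties: {
    street: "string?",
    city: "string?",
  },
};

const BusinessSchema = {
  name: "Business",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    name: "string",
    // A list of embedded objects cannot be declared optional ("Address[]?" is rejected);
    // Realm initializes the list to empty, so a required list already behaves like an "absent" one.
    addresses: { type: "list", objectType: "Address" }, // shorthand: addresses: "Address[]"
  },
};
```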
null
[ "database-tools", "backup" ]
[ { "code": "Error: 2021-07-26T10:58:40.893-0700 positional arguments not allowed: [{$gt: ISODate(‘2021-07-15T07:00:00.000Z’)}}’]\n2021-07-26T10:58:40.895-0700 try ‘mongodump --help’ for more information\n", "text": "I want to do backup of last 10 days data only based on my column LastModifiyDate in mongo.mongodump --gzip --db=mongo-database --collection=Records --out=F:\\backup\\ --query ‘{LastModifyDate: {$gt: ISODate(‘2021-07-15T07:00:00.000Z’)}}’Error Received:", "username": "NP_User" }, { "code": "queryquery", "text": "Hello @NP_User, It is possible that you are not using proper syntax in specifying the query option. This following post has the solution for a similar question:Also, see the example usage of query option in the documentation:", "username": "Prasad_Saya" }, { "code": "", "text": "", "username": "Mark_Rozovsky" } ]
Mongodump last 10 day data
2021-07-26T18:06:57.730Z
Mongodump last 10 day data
5,697
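As the linked post and documentation suggest, the issue is the --query syntax: newer mongodump releases expect the query value to be Extended JSON, so the ISODate(...) shell helper is not accepted there, and the curly "smart" quotes in the original command also break shell parsing. A hedged sketch of what the command could look like, reusing the database, collection, and field names from the question (quoting shown for a Unix-style shell; Windows cmd or PowerShell needs different escaping, and the --out path would stay F:\backup on Windows):

```sh
mongodump --gzip --db=mongo-database --collection=Records --out=/backup \
  --query='{ "LastModifyDate": { "$gt": { "$date": "2021-07-15T07:00:00.000Z" } } }'
```

For a rolling "last 10 days" window, the $date value would be computed at run time rather than hard-coded.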
null
[ "aggregation" ]
[ { "code": "", "text": "How can we filter the documents with objectId in with Atlas search filter?", "username": "sajan_kumar" }, { "code": "{\n index: '<index-name>',\n compound :{\n filter:[{\n equals: {\n query: someObjectID,\n path: \"<path-of-object-id>\"\n }\n }\n ]\n }\n}\n", "text": "The simplest way is to use the equals operator.", "username": "Marcus" }, { "code": "\"filter\": [{\n \"text\": {\n \"query\": [ objectid1, objectid2, ... ],\n \"path\": \"role\"\n }\n }]\n", "text": "so when we have a loot of object id’s in that case we will have to crate this equals query with loop but then it’s again not a good solution.it would be great if we can use it like this", "username": "sajan_kumar" }, { "code": "db.users.aggregate([\n {\n \"$search\": {\n \"compound\": {\n \"should\": [\n {\n \"equals\": {\n \"path\": \"teammates\",\n \"value\": ObjectId(\"5a9427648b0beebeb69537a5\")\n }\n },\n {\n \"equals\": {\n \"path\": \"teammates\",\n \"value\": ObjectId(\"59b99dbdcfa9a34dcd7881d1\")\n }\n },\n {\n \"equals\": {\n \"path\": \"teammates\",\n \"value\": ObjectId(\"5a9427648b0beebeb69579d0\")\n }\n }\n ],\n \"minimumShouldMatch\": 2\n }\n }\n },\n {\n \"$project\": {\n \"name\": 1,\n \"_id\": 0,\n \"score\": { \"$meta\": \"searchScore\" }\n }\n }\n])\n// untested so some details might be wrong or missing\n\nlots_of_ids = [ 1 , 2 , 3 , 4 , 5 ]\nmapped_ids = lots_of_ids.map( id => ( { \"equals\" : { \"path\" : \"role\" , \"value\" : id } } ) )\ndb.users.aggregate( [ { \"$search\": { \"compound\": { \"should\" : mapped_ids } ] )\n", "text": "Not quite in the format that you wish but this example is taken from the link shared by @Marcus.You need some kind of loop to map your array of IDs [ id_1 , id_2 , … , id_n ] into the required array but that can simply be done, for example in JS, with:", "username": "steevej" }, { "code": "", "text": "yeh, but my point was that if we could filter it in the text search then it would be much better. we don’t need to add an extra loop", "username": "sajan_kumar" }, { "code": "", "text": "how can we use negation in atals search with object id’s?\ne.g. normal mongo query { uid: { $nin: blockedUserIds } }", "username": "sajan_kumar" }, { "code": "$ninmustNotequals", "text": "Ok. The question is starting to expand in scope. If you don’t mind, please accept the answer(s) that are sufficient for wrapping up this topic. But don’t feel any pressure and only once you feel the concepts are clear.As for $nin equivalents, you can use a mustNot clause with equals as listed above:Use the compound operator to combine multiple operators in a single query and get results with a match score.", "username": "Marcus" }, { "code": "", "text": "okay, thanks, no I get it, I was wondering if there is any other way to do the same. but I guess we only have this solution. ", "username": "sajan_kumar" }, { "code": "queryvalue", "text": "query should be value in my query, my apologies.", "username": "Marcus" }, { "code": "[\n {\n \"id\": \"1\",\n \"employeeId\": null,\n \"staus\": false\n },\n {\n \"id\": \"2\",\n \"employeeId\": ObjectId(\"62c2cd1c9fd30bce6f2ccb20\"),\n \"staus\": true\n }\n]\nid: '2'", "text": "In my use case, I have data like some property values will be null,\nfor example:how can I filter property (here employeeId) of type objectId that doesn’t contain null values. 
(in the end it should return only record with id: '2')", "username": "varaprasad_kodali" }, { "code": "", "text": "Enjoy my best friend, the documentation.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas search filter documents with objectId
2022-04-06T15:06:29.477Z
Atlas search filter documents with objectId
8,077
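For the $nin-style question at the end of this thread, the same map trick steevej shows can feed a mustNot clause instead of should. A hedged mongosh sketch, assuming an index named "default", a uid field holding ObjectIds, and reusing IDs from earlier in the thread purely as sample values:

```javascript
// Hedged sketch: exclude documents whose uid matches any blocked id.
const blockedUserIds = [
  ObjectId("5a9427648b0beebeb69537a5"),
  ObjectId("62c2cd1c9fd30bce6f2ccb20"),
];

db.users.aggregate([
  {
    $search: {
      index: "default", // assumption: default index name
      compound: {
        // one equals clause per blocked id; mustNot drops any matching document
        mustNot: blockedUserIds.map(id => ({ equals: { path: "uid", value: id } })),
      },
    },
  },
]);
```

In practice this would usually be combined with a must or filter clause that selects the documents you actually want, since mustNot on its own contributes nothing to the score.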
null
[ "queries" ]
[ { "code": "", "text": "Hi Team,Can you please clarify the below 2 mongo commands. during executing different outputdb.getCollection(‘june_2022_test’).find({\"$and\": [{“timeStamp”: {\"$gt\": ISODate(“2022-06-23T05:53:00.000”)}},{“timeStamp”: {\"$lt\": ISODate(“2022-07-09T04:47:00.000”)}},{ “publisher” : “N”}]}).count()\n0the 2nd command execute is long timewhy the different 1st and 2nd command what is reason", "username": "hari_dba" }, { "code": "", "text": "Many reasons.A COLLSCAN for all documents to determine if publisher:N is true vs an IXSCAN on the range of dates.Only the explain plan can give the whole truth for sure, so please share.", "username": "steevej" } ]
MongoDB query execution different
2022-07-05T13:08:37.201Z
MongoDB query execution different
1,186
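The explain plan steevej asks for can be produced directly from mongosh; if it reports a COLLSCAN, an index covering the equality field and the date range is the usual fix. A hedged sketch using the collection and field names from the question (the index itself is only a suggestion, not something stated in the thread):

```javascript
// Inspect how the query executes (COLLSCAN vs IXSCAN, documents examined, time spent).
db.getCollection("june_2022_test").find({
  timeStamp: {
    $gt: ISODate("2022-06-23T05:53:00.000Z"),
    $lt: ISODate("2022-07-09T04:47:00.000Z"),
  },
  publisher: "N",
}).explain("executionStats");

// Hypothetical compound index: equality field first, then the range field.
db.getCollection("june_2022_test").createIndex({ publisher: 1, timeStamp: 1 });
```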
null
[ "aggregation" ]
[ { "code": "Database$match{\n Price: 500,\n Category: 'A'\n},\n{\n Price: 7500,\n Category: 'A'\n},\n{\n Price: 340,\n Category: 'B'\n},\n{\n Price: 60,\n Category: 'B'\n}\n$group{\n _id: \"$Category\",\n Prices: {\n $addToSet: \"$Price\"\n }\n}\n{\n _id: 'A',\n Prices: [500, 7500]\n},\n{\n _id: 'B',\n Prices: [340, 60]\n}\n$bucketAutogroupBy{\n groupBy: \"$Prices\",\n buckets: 5,\n output: {\n Count: { $sum: 1}\n }\n}\n_id{\n _id: {min: 500, max: 7500, category: 'A'},\n Count: 2\n},\n{\n _id: {min: 60, max: 340, category: 'B'},\n Count: 2\n}...\n", "text": "I need to create an aggregation pipeline that return price ranges for each product category.What I need to avoid is to load all available categories and call the Database again, one by one with a $match on each category. There must be a better way to do it.Product documentsNow I could use a $group stage to group the prices into an array by their category.Which would result inBut If I use $bucketAuto stage after this, I am unable to groupBy multiple properties. Meaning it would not take the categories into account.I have tried the followingThis does not take categories into account, but I need the generated buckets to be organised by category. Either having the category field within the _id as well or have it as another field and have 5 buckets for each distinct category:", "username": "t_s" }, { "code": "output_id db.product.aggregate( [\n {\n $bucketAuto: {\n groupBy: \"$price\",\n buckets: 5,\n output: {\n Count: { $sum: 1},\n \"category\": { $first: \"$category\" }\n }\n }\n }\n ] )\n{ _id: { min: '300', max: '500' }, Count: 1, category: 'A' }\n{ _id: { min: '500', max: '5000' }, Count: 1, category: 'B' }\n{ _id: { min: '5000', max: '7500' }, Count: 1, category: 'B' }\n{ _id: { min: '7500', max: '7500' }, Count: 1, category: 'A' }\n", "text": "Hi @t_s,Welcome to the MongoDB Community forums You could use the output document to get the category field added to the final output result not within the _id because it just takes one field to group the documents.Here is the aggregation pipeline:OutputFor more information around $bucket and $bucketAuto, please refer to the official docs.I hope it answers your questions. Please let us know if you have any follow-up questions.Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use $bucketAuto where the resulting buckets also needs to be grouped by another field
2022-06-30T14:54:03.565Z
How to use $bucketAuto where the resulting buckets also needs to be grouped by another field
1,332
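If a single price range per category is enough (rather than five automatic buckets per category), $bucketAuto can be skipped entirely and the range computed inside $group. A hedged sketch using the field names from the question and the collection name from the answer:

```javascript
// One document per category with its min/max price and a count,
// matching the min/max/category/Count shape sketched in the question.
db.product.aggregate([
  {
    $group: {
      _id: "$Category",
      min: { $min: "$Price" },
      max: { $max: "$Price" },
      Count: { $sum: 1 },
    },
  },
]);
```

Producing several buckets per category in one pipeline would still need something like a $facet with one $bucketAuto branch per known category, which is exactly the per-category fan-out the question was trying to avoid.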
null
[]
[ { "code": "function deleteFileStream(fileKey, next) {\n const deleteParams = {\n Key: fileKey,\n Bucket: bucket_name,\n }\n s3.deleteObject(deleteParams, (error, data) => {\n next(error, data)\n })\n}\nexports.deleteFileStream = deleteFileStream;\nrouter.delete(\"/deleteImage/:key\", (req, res) => {\n const key = req.params.key\n deleteFileStream(key, (error, data) => {\n if (error) {\n return res.send({ error: \"Can not delete file, Please try again later\" });\n }\n return res.send({ message: \"File has been deleted successfully\" });\n });\n})\nfunction user_delete(req, res) {\n User.findByIdAndRemove(req.params.id)\n .then(data => {\n if (!data) {\n return res.status(404).send({\n success: false,\n message: \"User not found with id \" + req.params.id\n });\n }\n res.send({\n success: true,\n message: \"User successfully deleted!\"\n });\n })\n};\nrouter.delete('/delete/:id', deleteUser);\n", "text": "I am saving user data with their image and for storage, I am using s3 bucket of aws. In my upload API, I am saving the URL of the image in the database. I want to implement that if I delete the user the image also be deleted from the s3 bucket. I wrote the delete API too but I need some guidance on how to use it? here is my s3 delete API code.delete a file from s3my router controllerthose apis are working fine in postman. I need some guidance on how to use it in my delete user route so that if I delete a user the image associated with also deleted from my s3 bucket.my delete user API", "username": "Naila_Nosheen" }, { "code": "userModel{\n_id: ObjectId(...),\nuserName: \"Naila_Nosheen\",\nimageURL: \"https://s3.us-west-2.amazonaws.com/mybucket/image01.jpg\",\n...\n}\ndeleteFileStreamuser_deletevar path = require(\"path\")\n\nfunction user_delete(req, res) {\n const user = User.findById(req.params.id);\n const keyName = path.basename(user.imageURL)\n\n // var user.imageURL = \"https://s3.us-west-2.amazonaws.com/mybucket/image01.jpg\"\n // const keyName = path.basename(user.imageURL) // \"image01.jpg\"\n\n //Deleting the user from the DB\n User.findByIdAndRemove(req.params.id)\n .then(data => {\n if (!data) {\n return res.status(404).send({\n success: false,\n message: \"User not found with id \" + req.params.id\n });\n }\n }).then(() => {\n //Deleting the Image from the S3 bucket\n deleteFileStream(keyName, (error, data) => {\n if (error) {\n return res.status(500).send({\n success: false,\n message: error.message\n });\n } \n res.send({\n success: true,\n message: \"<Message>\"\n });\n })\n })\n};\n", "text": "Hi @Naila_Nosheen,Welcome to the MongoDB Community forums The userModel I’m assuming here:I’d suggest you call the deleteFileStream function within the user_delete function and pass the keyName as the parameter before deleting the user from the DB then using the multiple promises you can delete both.I’m putting up the rough code snippet (for reference) Note that this is an untested example and may not work in all cases. Please do the test any code thoroughly with your use case so that there are no surprises.I hope it answers your questions. Please let us know if you have any follow-up questions.Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you for the answer. @Kushagra_Kesav It worked now if I delete user its image also deleted. can you please help me on one thing too? i have image data also in child model. And I want to implement if I delete user the child also deleted with its data image on server. 
I use remove middleware for cascade delete and it is working fine. On deleting the parent (User), the child is also deleted, but its image is not. Here is the link to my code example: How to delete data from child and also its image data on server if we delete parent in nodejs mongodb", "username": "Naila_Nosheen" }, { "code": "", "text": "Hi @Naila_Nosheen, I'm glad to know that it worked and that the issue is resolved. To better understand the other issue, could you please provide the user model in this thread? Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to delete image associated with a document in mongodb using s3 bucket?
2022-07-03T22:41:12.793Z
How to delete image associated with a document in mongodb using s3 bucket?
3,117
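One detail worth flagging in the sample route shown in this thread: Mongoose's findById returns a query/promise, so it has to be awaited (or chained) before imageURL can be read from the document. A hedged async/await variant of the same route, assuming the User model and the deleteFileStream helper shown earlier in the thread:

```javascript
const path = require("path");
const { promisify } = require("util");

// deleteFileStream(key, callback) follows the Node callback convention, so it can be promisified.
const deleteFile = promisify(deleteFileStream);

router.delete("/delete/:id", async (req, res) => {
  try {
    const user = await User.findById(req.params.id); // await, so we get the document, not a Query
    if (!user) {
      return res.status(404).send({
        success: false,
        message: "User not found with id " + req.params.id,
      });
    }

    const keyName = path.basename(user.imageURL); // e.g. "image01.jpg"

    await User.findByIdAndRemove(req.params.id); // delete the user document
    await deleteFile(keyName);                   // then delete the image from S3

    res.send({ success: true, message: "User and image deleted" });
  } catch (error) {
    res.status(500).send({ success: false, message: error.message });
  }
});
```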
null
[ "java", "crud", "connecting" ]
[ { "code": "", "text": "I’m trying to connect my Android app to an Atlas shared cluster using SCRAM (i had tryied with X509 with no succes) using these code:String ConnectionString = “mongodb://” + UserName +“:” + Password + “@sales-shard-00-00.8yajj.mongodb.net:27017,” +\n“sales-shard-00-01.8yajj.mongodb.net:27017,sales-shard-00-02.8yajj.mongodb.net:27017/” +\nDbName + “?ssl=true&replicaSet=atlas-t0tu8r-shard-0&authMethod=MONGODB-X509&authSource=admin&retryWrites=true&w=majority”;\nConnectionString _connectionString = new ConnectionString(ConnectionString);\nMongoClient _client = MongoClients.create(_connectionString);\nMongoDatabase _database = _client.getDatabase(DbName);\nMongoCollection _collection = _database.getCollection(CollectionName);Document sampleDoc = new Document(“_id”,“1”).append(“name”,“Caccola Pelosa”);\n_collection.insertOne(sampleDoc);same code in C# work perfectly and fast…with Java in android studio nothing.can someone help me please ?thanks", "username": "Alberto_Omini" }, { "code": "", "text": "If you already inserted document with _id:1 using the C# code that works, it is normal that the other does not work because you can only have one document with the given _id.Do you have any exception, tracing or log?", "username": "steevej" }, { "code": "V/RenderScript: 0xaed97000 Launching thread(s), CPUs 4\nI/System.out: 17:15:54.665 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[sales-shard-00-00.8yajj.mongodb.net:27017, sales-shard-00-01.8yajj.mongodb.net:27017, sales-shard-00-02.8yajj.mongodb.net:27017], mode=MULTIPLE, requiredClusterType=REPLICA_SET, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500, requiredReplicaSetName='atlas-t0tu8r-shard-0'}\nI/System.out: 17:15:54.674 [main] INFO org.mongodb.driver.cluster - Adding discovered server sales-shard-00-00.8yajj.mongodb.net:27017 to client view of cluster\nI/System.out: 17:15:54.718 [main] INFO org.mongodb.driver.cluster - Adding discovered server sales-shard-00-01.8yajj.mongodb.net:27017 to client view of cluster\nI/System.out: 17:15:54.721 [main] INFO org.mongodb.driver.cluster - Adding discovered server sales-shard-00-02.8yajj.mongodb.net:27017 to client view of cluster\nI/System.out: 17:15:54.726 [main] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}]\nI/System.out: 17:15:54.765 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:1}\nI/System.out: 17:15:54.765 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:2}\nI/System.out: 17:15:54.768 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server sales-shard-00-01.8yajj.mongodb.net:27017\n com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 
'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)\n at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\n at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\n at java.lang.Thread.run(Thread.java:818)\n Caused by: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)\n at com.mongodb.internal.connection.SslHelper.enableHostNameVerification(SslHelper.java:64)\n at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:60)\n at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79)\n at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65)\n at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128)\n \t... 2 common frames omitted\nI/System.out: 17:15:54.770 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-00.8yajj.mongodb.net:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server sales-shard-00-00.8yajj.mongodb.net:27017\nI/System.out: com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)\n at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\n at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\n at java.lang.Thread.run(Thread.java:818)\n Caused by: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)\n at com.mongodb.internal.connection.SslHelper.enableHostNameVerification(SslHelper.java:64)\n at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:60)\n at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79)\n at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65)\n at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128)\n \t... 
2 common frames omitted\nI/System.out: 17:15:54.772 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}]\nI/System.out: 17:15:54.766 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:3}\nI/System.out: 17:15:54.775 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server sales-shard-00-02.8yajj.mongodb.net:27017\n com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)\n at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\n at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\n at java.lang.Thread.run(Thread.java:818)\n Caused by: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)\n at com.mongodb.internal.connection.SslHelper.enableHostNameVerification(SslHelper.java:64)\n at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:60)\n at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79)\n at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65)\n at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128)\n \t... 
2 common frames omitted\nI/System.out: 17:15:54.777 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING}]\nI/System.out: 17:15:54.779 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in 
/system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]\nI/System.out: 17:15:58.020 [main] INFO org.mongodb.driver.cluster - No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@947b2d7 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, ServerDescription{address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, ServerDescription{address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]}. 
Waiting for 30000 ms before timing out\nI/System.out: 17:15:58.028 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:4}\nI/System.out: 17:15:58.030 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]\nI/System.out: 17:15:58.035 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:6}\nI/System.out: 17:15:58.036 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in 
/system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]\nI/System.out: 17:15:58.039 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:5}\nI/System.out: 17:15:58.040 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method 
setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]\nI/System.out: 17:15:58.532 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:7}\nI/System.out: 17:15:58.534 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]\nI/System.out: 17:15:58.538 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:8}\nI/System.out: 17:15:58.542 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:9}\nI/System.out: 17:15:58.541 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class 
Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]\nI/System.out: 17:15:58.545 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 
'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]\nI/System.out: 17:15:59.051 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:10}\nI/System.out: 17:15:59.061 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:12}\nI/System.out: 17:15:59.064 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:11}\nI/System.out: 17:15:59.057 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-00.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}, {address=sales-shard-00-02.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters' appears in /system/framework/core-libart.jar)}}]\nI/System.out: 17:15:59.070 [cluster-ClusterId{value='61cb383ace66a833af94e670', description='null'}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address=sales-shard-00-01.8yajj.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, 
exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of 'javax.net.ssl.SSLParameters\n", "text": "Thanks for your answer Steve…\nthis is whats happen when i try to insert the document", "username": "Alberto_Omini" }, { "code": "", "text": "using SCRAM (i had tryied with X509 with no succes)But the URI you shared still specifyauthMethod=MONGODB-X509", "username": "steevej" }, { "code": "", "text": "This is the connection string now:\nmongodb://Username:[email protected]:27017,sales-shard-00-01.8yajj.mongodb.net:27017,sales-shard-00-02.8yajj.mongodb.net:27017/Stores?ssl=true&replicaSet=atlas-t0tu8r-shard-0&authSource=admin&retryWrites=true&w=majorityand this the log\nI/OpenGLRenderer: Initialized EGL, version 1.4\nI/System.out: 14:18:06.666 [main] INFO org.mongodb.driver.cluster - Cluster created with settings {hosts=[sales-shard-00-00.8yajj.mongodb.net:27017, sales-shard-00-01.8yajj.mongodb.net:27017, sales-shard-00-02.8yajj.mongodb.net:27017], mode=MULTIPLE, requiredClusterType=REPLICA_SET, serverSelectionTimeout=‘30000 ms’, maxWaitQueueSize=500, requiredReplicaSetName=‘atlas-t0tu8r-shard-0’}\n14:18:06.676 [main] INFO org.mongodb.driver.cluster - Adding discovered server sales-shard-00-00.8yajj.mongodb.net:27017 to client view of cluster\nI/System.out: 14:18:06.729 [main] INFO org.mongodb.driver.cluster - Adding discovered server sales-shard-00-01.8yajj.mongodb.net:27017 to client view of cluster\nI/System.out: 14:18:06.730 [main] INFO org.mongodb.driver.cluster - Adding discovered server sales-shard-00-02.8yajj.mongodb.net:27017 to client view of cluster\nI/System.out: 14:18:06.748 [main] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address:27017=sales-shard-00-01.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING}, {address:27017=sales-shard-00-00.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING}, {address:27017=sales-shard-00-02.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING}]\nI/System.out: 14:18:06.809 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:2}\nI/System.out: 14:18:06.809 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:1}\nI/System.out: 14:18:06.811 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.connection - Closing connection connectionId{localValue:3}\nI/System.out: 14:18:06.811 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-00.8yajj.mongodb.net:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server sales-shard-00-00.8yajj.mongodb.net:27017\ncom.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\nat 
com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\nat java.lang.Thread.run(Thread.java:818)\nCaused by: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)\nat com.mongodb.internal.connection.SslHelper.enableHostNameVerification(SslHelper.java:64)\nI/System.out: at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:60)\nat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79)\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128)\n… 2 common frames omitted\nI/System.out: 14:18:06.811 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-01.8yajj.mongodb.net:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server sales-shard-00-01.8yajj.mongodb.net:27017\ncom.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\nat java.lang.Thread.run(Thread.java:818)\nCaused by: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)\nat com.mongodb.internal.connection.SslHelper.enableHostNameVerification(SslHelper.java:64)\nI/System.out: at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:60)\nat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79)\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128)\n… 2 common frames omitted\nI/System.out: 14:18:06.813 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-00.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address:27017=sales-shard-00-01.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING}, {address:27017=sales-shard-00-00.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}}, {address:27017=sales-shard-00-02.8yajj.mongodb.net, type=UNKNOWN, 
state=CONNECTING}]\nI/System.out: 14:18:06.815 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-02.8yajj.mongodb.net:27017] INFO org.mongodb.driver.cluster - Exception in monitor thread while connecting to server sales-shard-00-02.8yajj.mongodb.net:27017\nI/System.out: com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)\nI/System.out: at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\nat java.lang.Thread.run(Thread.java:818)\nCaused by: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)\nat com.mongodb.internal.connection.SslHelper.enableHostNameVerification(SslHelper.java:64)\nat com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:60)\nat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79)\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128)\n… 2 common frames omitted\nI/System.out: 14:18:06.817 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-01.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address:27017=sales-shard-00-01.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}}, {address:27017=sales-shard-00-00.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}}, {address:27017=sales-shard-00-02.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING}]\nI/System.out: 14:18:06.823 [cluster-ClusterId{value=‘61cc600ea2f06c7d697bfe74’, description=‘null’}-sales-shard-00-02.8yajj.mongodb.net:27017] DEBUG org.mongodb.driver.cluster - Updating cluster description to {type=REPLICA_SET, servers=[{address:27017=sales-shard-00-01.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING, 
exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}}, {address:27017=sales-shard-00-00.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}}, {address:27017=sales-shard-00-02.8yajj.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoException: java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}, caused by {java.lang.NoSuchMethodError: No virtual method setEndpointIdentificationAlgorithm(Ljava/lang/String;)V in class Ljavax/net/ssl/SSLParameters; or its super classes (declaration of ‘javax.net.ssl.SSLParameters’ appears in /system/framework/core-libart.jar)}}]this is what’s happen when i try to connect to Atlas", "username": "Alberto_Omini" }, { "code": "", "text": "Try using the SRV style connection string.", "username": "steevej" }, { "code": "defaultConfig {\n minSdkVersion 14\n targetSdkVersion 31\n versionCode 1\n versionName \"1.8\"\n\n testInstrumentationRunner \"androidx.test.runner.AndroidJUnitRunner\"\n}\n\nbuildTypes {\n release {\n minifyEnabled false\n proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'\n }\n}\n", "text": "you mean whti mongodb+srv ?? Java crash immediately…\ni’m using “implementation ‘org.mongodb:mongodb-driver:3.12.1’” and javaandroid {\ncompileSdkVersion 31\nbuildToolsVersion “28.0.3”}", "username": "Alberto_Omini" }, { "code": "", "text": "Share any error message aboutJava crash immediately", "username": "steevej" }, { "code": "", "text": "ConnectionString _connectionstring = new ConnectionString(“mongodb+srv://” + UserName +\":\" + Password+ “@sales.8yajj.mongodb.net/” +\nDbName + “?retryWrites=true&w=majority”);this is the log\nI/OpenGLRenderer: Initialized EGL, version 1.4\nW/art: Unresolved exception class when finding catch block: javax.naming.NamingException\nDisconnected from the target VM, address: ‘localhost:50804’, transport: ‘socket’", "username": "Alberto_Omini" }, { "code": "", "text": "I am out of ideas.Since the error seems related to SSL, I would try back the original URI (without +srv) and remove the ssl parameter.Or upgrade the driver version.", "username": "steevej" }, { "code": "", "text": "Hi,I am also having the same issue with Java Classes and found the best answer from here, thanks.", "username": "swati_jain" } ]
Java for Android with Atlas MongoDB connection
2021-12-28T14:37:50.022Z
Java for Android with Atlas MongoDB connection
4,688
null
[]
[ { "code": "", "text": "To reduce costs on our development and test mongo clusters, I’d like to pause them automatically on a schedule in the evening and restart them in the morning. What’s the best way to achieve this? If scripted would appreciate sharing of scripts / tools to help with this as it doesn’t seem to be a feature which is native (surprisingly, without manual intervention!)Maany thanks in advance", "username": "Nicholas_Davies" }, { "code": "", "text": "Hi @Nicholas_Davies,Welcome to MongoDB community!I believe you refer to Atlas clusters.Here you go a blog discussing this ability using schedule triggersAtlas Cluster Automation Using Scheduled Triggers | MongoDB BlogHope that helps!Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Another thing that might interest you is the Atlas API. SeeandA quick look at using Python to script tasks in your MongoDB Atlas Cluster. This example shows how to pause and resume Atlas Clusters.I presume that the above API and a cron job would do the trick if triggers do not accomplish what you want.", "username": "steevej" }, { "code": "", "text": "This is the link for the new API\nhttps://www.mongodb.com/docs/atlas/reference/api-resources-spec/#operation/updateConfigurationOfOneCluster", "username": "AndrewR" } ]
Auto pausing and restarting a cluster on a schedule
2020-11-06T16:24:12.217Z
Auto pausing and restarting a cluster on a schedule
5,922
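The thread above points at scheduled triggers and the Atlas Admin API but does not show the call itself. Below is a minimal, hypothetical sketch of an Atlas App Services scheduled trigger function in JavaScript, along the lines of the linked blog post. The project ID, cluster name and the value names holding the programmatic API key are placeholders, and the exact context.http options (digestAuth, encodeBodyAsJSON) are assumptions taken from that post rather than something stated in the thread.

// Hypothetical scheduled trigger function: pause a cluster in the evening.
// A second trigger sending { paused: false } would resume it in the morning.
exports = async function() {
  const projectId = "<PROJECT-ID>";                       // placeholder
  const clusterName = "Cluster0";                         // placeholder
  const username = context.values.get("atlasPublicKey");  // placeholder value names
  const password = context.values.get("atlasPrivateKey");

  const response = await context.http.patch({
    url: `https://cloud.mongodb.com/api/atlas/v1.0/groups/${projectId}/clusters/${clusterName}`,
    username: username,
    password: password,
    digestAuth: true,                                     // Atlas Admin API uses HTTP digest auth
    headers: { "Content-Type": ["application/json"] },
    body: { paused: true },
    encodeBodyAsJSON: true
  });

  return EJSON.parse(response.body.text());
};

The API key used here would need project-level permissions to modify clusters.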
https://www.mongodb.com/…_2_1023x536.jpeg
[]
[ { "code": "", "text": "\ncannot connect to RS_11700×890 223 KB\n", "username": "Ashish_Wanjare" }, { "code": "", "text": "You should contact your system admin to see why your private server running on your private network is not running.", "username": "steevej" }, { "code": "", "text": "My RS is deploy on Azure cloud. I want to connect this RS cluster using connection string to my local laptop.\nBut not able connect", "username": "Ashish_Wanjare" }, { "code": "", "text": "Is your cluster Atlas or self managed?What is the connection string you are using?But not able connectis not enough to know what is happening. Any error messages?", "username": "steevej" }, { "code": "", "text": "My RS with a 3 nodes is deploy on RedHat 8 .0 servers (using AWS cloud). Now I want to connect this cluster on Robomongo/MongoDB compass using Connection string (mongodb://Mysuperuserdb:[email protected]:27017,mymongo.demo1.com:27017,mymongo.demo2.com:27017/?replicaSet=replicaset01&authSource=admin\")\n\nMyClusterError1302×667 49.9 KB\n", "username": "Ashish_Wanjare" }, { "code": "", "text": "I have install both MongoDB GUI tools on my personal laptop, I already configured my DNS into /etc/hosts fileSee below connection string for Robomongo GUI toolsmongodb://superuserdb:[email protected]:27017,mymongo.demo1.com:27017,mymongo.demo2.com:27017/?replicaSet=replicaset01&authSource=admin\n\nRobomongo Cluster Connection error1881×772 37.4 KB\n", "username": "Ashish_Wanjare" }, { "code": "", "text": "Can you ping mymongo.demo1.com and mymongo.demo2.com? Please post the output.You have have 3 nodes in your replica set.Can you share the output of rs.status()?Can you connect with mongosh directly (that is without replicaSet= option) to all of the nodes one by one?", "username": "steevej" }, { "code": "", "text": "OutPut of rs.status()\n\nimage833×1022 26.6 KB\n\n\nimage918×1023 24.5 KB\n\n\nimage930×1022 24.9 KB\nping result of “mymongo.demo0.com”\n\nimage1078×636 10.6 KB\n", "username": "Ashish_Wanjare" }, { "code": "", "text": "ping result of “mymongo.demo1.com” & \" mymongo.demo2.com\"\n\nimage1191×752 18.8 KB\n", "username": "Ashish_Wanjare" }, { "code": "", "text": "Hi,\n@steevejI have tried to connect one by one node but not able to connect on MongoDB Compass/Robomongo", "username": "Ashish_Wanjare" }, { "code": "", "text": "Most likely your 172.31.85.0 Network is not properly routed from your workstation. You need to adjust your routing tables. If you do not know how to do that ask your system admin. If they do not know how to do that use Atlas.", "username": "steevej" }, { "code": "10.*172.31.*", "text": "Hi @Ashish_Wanjare,The 10.* and 172.31.* IP addresses you are attempting to connect to are private network addresses that require you to be on the same local network or using a forwarding service (VPN or SSH) in order to connect.Remote connection options are specific to your hosting configuration, but since you are connecting to a replica set deployment I would consider using an Azure VPN Gateway (perhaps Point-to-Site VPN from your laptop).Regards,\nStennie", "username": "Stennie_X" } ]
I am.facing an issue when we connecting to MongoDB
2022-05-25T13:42:06.899Z
I am.facing an issue when we connecting to MongoDB
2,850
null
[]
[ { "code": "", "text": "Hi,We are using mongo 4.2 in our lab environment. And we are doing a performance test in the lab.\nI would like to know what will be the impact of cache eviction in the production environment.\nHow will I know that cache eviction occurred in the system?During our performance test, I could not observe eviction-related messages in the mongo logs.\nHow to check whether cache eviction occurred or not?\nHow to simulate cache eviction?", "username": "Sreedhar_N" }, { "code": "", "text": "Cache eviction means mongod has to hit the disk to serve requests.See Performance of Mongodb pods- sharded deployment - #2 by steevej", "username": "steevej" }, { "code": "", "text": "Hi @Sreedhar_NIn terms of performance, it’s basically what @steevej said.I would like to know what will be the impact of cache eviction in the production environment.Cache eviction means exactly that. It’s basically WiredTiger needing to empty some space in its cache (working memory) to make room for data that needs to be loaded from disk. The data in the cache can be clean (not modified) or dirty (modified). Evicting clean data is relatively straightforward: just delete them from memory (I’m hand-waving here but you get the idea ). Evicting dirty data is more involved, since it needs to ensure that all data are persisted to disk before it can be evicted.This is more or less a universal term across databases. I think MySQL call this a buffer pool with similar ideas and working conditions.During our performance test, I could not observe eviction-related messages in the mongo logs.This is because this metric is WiredTiger specific and not MongoDB specific. It’s internal to WiredTiger, but you can see some statistics from db.serverStatus().wiredTiger.cacheHow to simulate cache eviction?Try to push a workload that’s larger than the available RAM on the machine. However this is a very generalized answer.Having said that, I’m sure you understand that performance testing is a tricky subject that involves a lot of variables: server size, MongoDB configuration, WiredTiger configuration, hardware configuration, specific workload (insertion rate, update rate, read rate, delete rate, etc.), data size, index size, statistical distribution of the data, and many other things that needs to be considered to make sure that tests are repeatable and representative, so I don’t believe there’s a one-size-fits-all question or answer to this complex endeavour Best regards\nKevin", "username": "kevinadi" } ]
How to check cache eviction occurred or not? How to simulate cache eviction?
2022-06-28T11:48:47.529Z
How to check cache eviction occurred or not? How to simulate cache eviction?
2,750
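Since the answer above points to db.serverStatus().wiredTiger.cache for eviction statistics, here is a small mongosh sketch (not taken from the thread) that snapshots a few of the relevant counters. The statistic names are plain strings that can differ between server versions, so treat them as examples and list the available keys on your own deployment.

// mongosh: snapshot a few WiredTiger cache/eviction counters.
const cache = db.serverStatus().wiredTiger.cache;

printjson({
  bytesConfiguredMax: cache["maximum bytes configured"],
  bytesInCache: cache["bytes currently in the cache"],
  trackedDirtyBytes: cache["tracked dirty bytes in the cache"],
  pagesReadIntoCache: cache["pages read into cache"],
  pagesWrittenFromCache: cache["pages written from cache"],
  // Application threads being drafted into eviction work is a common sign of cache pressure.
  pagesEvictedByAppThreads: cache["pages evicted by application threads"]
});

// To see every counter your build exposes:
// Object.keys(cache).forEach(k => print(k));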
null
[ "aggregation", "queries" ]
[ { "code": "[\n {\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 10,\n 20\n ]\n },\n \"proximity\": 0.001\n },\n {\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 10,\n 30\n ]\n },\n \"proximity\": 0.005\n }\n]\n$geoWithin$centerSpheredb.collection.aggregate([\n {\n \"$match\": {\n \"location\": {\n \"$geoWithin\": {\n \"$centerSphere\": [\n [\n 10,\n 20\n ],\n 0.001\n ]\n }\n }\n }\n }\n])\ndb.collection.aggregate([\n {\n \"$match\": {\n \"location\": {\n \"$geoWithin\": {\n \"$centerSphere\": [\n [\n 10,\n 20\n ],\n \"$proximity\"\n ]\n }\n }\n }\n }\n])\nradius must be a non-negative number", "text": "So, imagine these input documents:When I want to filter with $geoWithin and $centerSphere, if I specify static radius, the query works:However, when I set radius dynamically based on some property, the query throws an error:The error thrown is: radius must be a non-negative number. What’s happening here?I created this playground for testing.", "username": "NeNaD" }, { "code": "$centerSphere\"$proximity\"const proximity = 0.001;\n...\n \"$centerSphere\": [\n [\n 10,\n 20\n ],\n proximity\n ]\n...\nproximity", "text": "Hi @NeNaDI believe it’s because $centerSphere requires an actual number. When you put \"$proximity\" there, it treats it as a string. I know you intend it to refer to a specific document field, but I don’t think it has that feature.Alternatively perhaps you can do something like:where proximity is defined as a variable outside of the aggregation. Not sure if your use case allows this, but this is one way I can think of to make it work.Best regards\nKevin", "username": "kevinadi" }, { "code": "aggregateproximityproximity", "text": "Hi @kevinadi,Thanks for the answer! I think it should accept it, since it’s aggregate query, right? Yeah, I can not have static proximity since it has to be fetched on document level from proximity field.", "username": "NeNaD" }, { "code": "aggregate$centerSphere$centerSphere$centerSphere", "text": "I think it should accept it, since it’s aggregate query, right? Specifically in the case of $centerSphere, at least in my mind it does make sense to not be able to reference things: $centerSphere is basically describing a circle on a certain position with a certain radius, and the query reads like: “find me all documents that is within this circle”.However referencing a field in a document would mean that the circle can be made variable based on each document in the collection, so you’re basically running one $centerSphere query for each document in the collection, isn’t it? Best regards\nKevin", "username": "kevinadi" }, { "code": "aggregatedistanceproximity", "text": "That is correct. That is what I need in this case though. Both static and dynamic use cases are useful, and I just though that this can be dynamic inside aggregate.Another way to solve this is to add new property field where I would calculate distance for each document, and then add another stage where I would compare new distance field with proximity field. But, I don’t know how to calculate the distance. Do you happen to know how do to it?I created this post for it, so please add solution there if you know the solution. ", "username": "NeNaD" } ]
$geoWithin and $centerSphere with dynamic radius
2022-07-04T09:48:56.104Z
$geoWithin and $centerSphere with dynamic radius
2,553
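The thread above ends without a way to make the $centerSphere radius per-document. One workaround, not shown in the thread, is to let $geoNear compute the distance in meters and compare it to the stored value with $expr. The sketch below assumes a 2dsphere index on location and that proximity is a radius in radians (as $centerSphere expects), converted to meters with an approximate Earth radius; both assumptions are mine, not the original poster's.

// mongosh sketch: emulate a per-document $centerSphere radius.
const EARTH_RADIUS_METERS = 6378100; // rough mean Earth radius used for the radians conversion

db.collection.aggregate([
  {
    $geoNear: {
      near: { type: "Point", coordinates: [10, 20] }, // the query point
      distanceField: "distMeters",                    // filled in by $geoNear, in meters
      spherical: true
    }
  },
  {
    $match: {
      $expr: {
        $lte: ["$distMeters", { $multiply: ["$proximity", EARTH_RADIUS_METERS] }]
      }
    }
  }
]);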
null
[ "aggregation", "queries" ]
[ { "code": "UsersEvents{\n location: {\n type: \"Point\",\n coordinates: [10, 20]\n },\n ...\n}\n{\n location: {\n type: \"Point\",\n coordinates: [10, 50]\n },\n proximity: 10\n ...\n}\nproximity", "text": "I have 2 collections, Users and Events:User example:Event example:Now, I want to find all Events related to a specific User, which means I should calculate distance between User’s and Event’s location, and check if that distance is lower than Event’s proximity property.Note: I have User’s location as input, so I should only create a filter for Events.I can easily do it if I would know how to calculate the distance between two geolocation points, but I could not find any operator in the docs that does it. How that can be calculated?", "username": "NeNaD" }, { "code": "2dsphere", "text": "Hi @NeNaD,You are looking for $near which will require a 2dsphere index.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "$neargeoWithin$radius$maxDistanceproximity", "text": "Hi @MaBeuLux88,Thanks for the answer! I already checked both $near and geoWithin (I don’t need sorting), but for some reason, it does not work.Note that I have to use aggregate since the $radius/$maxDistance need to be fetched from the proximity field on document level.Can you please check this post and let me know what do you think?", "username": "NeNaD" } ]
Filter documents based on distance
2022-07-04T05:13:11.326Z
Filter documents based on distance
1,732
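For the distance calculation the thread above leaves open, $geoNear can do the work: it computes the distance from a fixed point (the user's location, which the poster says is available as input) into a field of each Event, after which a per-document comparison against proximity is straightforward. A sketch, assuming a 2dsphere index on the Events location field and that proximity is stored in meters:

// mongosh sketch: events whose own proximity radius reaches the given user location.
const userLocation = [10, 20]; // [longitude, latitude] supplied by the application

db.events.aggregate([
  {
    $geoNear: {                                   // must be the first stage
      near: { type: "Point", coordinates: userLocation },
      distanceField: "distanceFromUser",          // distance in meters
      spherical: true
    }
  },
  { $match: { $expr: { $lte: ["$distanceFromUser", "$proximity"] } } }
]);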
null
[ "node-js", "mongoose-odm" ]
[ { "code": "[const mongoose = require('mongoose');\nconst User = mongoose.model('User', new mongoose.Schema({\n name: {\n type: String,\n required: true\n },\n email: {\n type: String,\n required: true,\n },\n password: {\n type: String,\n required: true,\n },\n profilePictureURL: {\n type: String\n }\n}));\nexports.User = User;](https://)\nconst store = multer.diskStorage({\n destination: function (req, file, cb) {\n fs.mkdir(path, { recursive: true}, function (err) {\n if (err) return cb(err);\n cb(null, \"uploads/photos\");\n });\n },\n filename: function (req, file, cb) {\n const name = file.originalname.toLowerCase().split(' ').join('_');\n cb(null, name + '-' + Date.now());\n }\n});\nconst upload = multer({ storage: store }).single('file');\nfunction CreateUser(req, res) {\n const url = req.protocol + '://' + req.get(\"host\");\n let user = new User(\n {\n name: req.body.name,\n email: req.body.email,\n image: url + '/uploads/photo/' + req.file.filename\n }\n );\n user.save()\n .then(data => {\n res.send(data);\n }).catch(err => {\n res.status(500).send({\n success: false,\n message: err.message || \"Some error occurred while creating the user.\"\n });\n });\n};\n\nrouter.post('/create', [upload], CreateUser);\n", "text": "Hello, I want to store the image URL in my MongoDB collection using multer and machine storage. I tried to follow one tutorial but it is not generating the correct URL I am posting my code here too. Can someone please guide me? I am very new to storing data in the database.This is my model file:This is my multer middleware:This is my post router:Can anyone please help me with this code???", "username": "Naila_Nosheen" }, { "code": "", "text": "Hi @Naila_NosheenUnfortunately I have zero knowledge of Multer, and I would assume most people in this forum as well, as this is a MongoDB-specific forum However I found an article that may be of help for you: Image Uploading to MongoDb in Nodejs using Multer. The article in question is using a buffer as I think it intends to store the image binary itself, rather than the image URL as you need. You might need to modify this part a little to fit your purpose. Hopefully this will point you to the right direction.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank you so much for your link ", "username": "Naila_Nosheen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can i store image url in mongodb collection?
2022-06-13T12:47:24.969Z
How can i store image url in mongodb collection?
13,063
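One detail worth flagging in the code quoted above: multer writes into "uploads/photos" while the URL is built with '/uploads/photo/', and the value is saved to an image field although the schema defines profilePictureURL (and also requires password). The sketch below is not from the thread; it assumes an Express app with the post's User model, router and multer upload in scope, and simply keeps the folder, the public path and the schema field consistent.

// Sketch (assumes Express; User, router and upload come from the post's setup).
const UPLOAD_DIR = "uploads/photos";
app.use("/" + UPLOAD_DIR, express.static(UPLOAD_DIR)); // serve the saved files back out

function createUser(req, res) {
  const baseUrl = req.protocol + "://" + req.get("host");
  const user = new User({
    name: req.body.name,
    email: req.body.email,
    password: req.body.password, // required by the schema
    profilePictureURL: `${baseUrl}/${UPLOAD_DIR}/${req.file.filename}`
  });
  user.save()
    .then(data => res.send(data))
    .catch(err => res.status(500).send({ success: false, message: err.message }));
}

router.post("/create", [upload], createUser);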
null
[]
[ { "code": "", "text": "Hi\nI’m trying to setup a replication between 2 mongodb servers, one of the steps is to generate and share keys between mongo servers.\nI did below stepsthen I add below lines in /etc/mongod.confOn node 1 => mongoDb-01net:\nport: 27017\nbindIp: 10.0.0.11\n#security:\nsecurity:\nauthorization: enabled\nkeyFile: /etc/mongodb/keyFile/mongo-key\n#replication:\nreplication:\nreplSetName: \" replicaset01 \"When I try to start the mongodb server, I get below error\n+++++++++++++++++++++++++++++++++++\n{“t”:{\"$date\":“2022-06-28T13:41:59.650+04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:\"-\",“msg”:\"***** SERVER RESTARTED *****\"}\n{“t”:{\"$date\":“2022-06-28T13:41:59.651+04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:\"-\",“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{\"$date\":“2022-06-28T13:41:59.653+04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:\"-\",“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{\"$date\":“2022-06-28T13:41:59.655+04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{\"$date\":“2022-06-28T13:41:59.655+04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{\"$date\":“2022-06-28T13:41:59.656+04:00”},“s”:“I”, “c”:“ACCESS”, “id”:20254, “ctx”:“main”,“msg”:“Read security file failed”,“attr”:{“error”:{“code”:30,“codeName”:“InvalidPath”,“errmsg”:“Error reading file /etc/mongodb/keys/mongo-key: Permission denied”}}}\n{“t”:{\"$date\":“2022-06-28T13:41:59.656+04:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“main”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{\"$date\":“2022-06-28T13:41:59.656+04:00”},“s”:“F”, “c”:“CONTROL”, “id”:20575, “ctx”:“main”,“msg”:“Error creating service context”,“attr”:{“error”:“Location5579201: Unable to acquire security key[s]”}}\n+++++++++++++++++++++++++++++++++++Any Ideas??", "username": "Ahmed_Hosni" }, { "code": "", "text": "Check the path of your keyfile in config file\nTypo keyfile vs keyfiles", "username": "Ramachandra_Tummala" }, { "code": "#security:\nsecurity:\n authorization: enabled\n keyFile: /etc/mongodb/keys/mongo-key\n", "text": "no typo. 
this is what I have in my mongod.conf file", "username": "Ahmed_Hosni" }, { "code": "", "text": "It says invalid path\nPlease your keyfile dirpath and what you mentioned in your config file should match", "username": "Ramachandra_Tummala" }, { "code": "{\"t\":{\"$date\":\"2022-06-28T14:38:05.459+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-06-28T14:38:05.460+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-06-28T14:38:05.460+04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-06-28T14:38:05.464+04:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-06-28T14:38:05.464+04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-06-28T14:38:05.474+04:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20254, \"ctx\":\"main\",\"msg\":\"Read security file failed\",\"attr\":{\"error\":{\"code\":30,\"codeName\":\"InvalidPath\",\"errmsg\":\"Error reading file /etc/mongodb/keyFiles/mongo-key: Permission denied\"}}}\n{\"t\":{\"$date\":\"2022-06-28T14:38:05.475+04:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"main\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-06-28T14:38:05.475+04:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20575, \"ctx\":\"main\",\"msg\":\"Error creating service context\",\"attr\":{\"error\":\"Location5579201: Unable to acquire security key[s]\"}}\n{\"t\":{\"$date\":\"2022-06-28T14:38:31.980+04:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.\",\"nextWakeupMillis\":5\n", "text": "I fix the path in config file, no invalid path now. but I still have Permission denied issue", "username": "Ahmed_Hosni" }, { "code": "", "text": "Why would it say invalid path if your keyfile exists in that path\nDid you do ls -lrt keyfile with full path?\nAs per your post above keyfile is at /etc/mongodb/keys but the directory you created is\n/etc/mongodb/keyFiles\nPlease check again", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I fixed the path issue[root@alt06ymr ~]# ls -ltr /etc/mongodb/keyFiles/mongo-key\n-rw-r–r-- 1 mongodb mongodb 1024 Jun 28 12:02 /etc/mongodb/keyFiles/mongo-key\n[root@alt06ymr ~]#[root@alt06ymr ~]# cat /etc/mongod.conf#security:\nsecurity:\nauthorization: enabled\nkeyFile: /etc/mongodb/keyFiles/mongo-key", "username": "Ahmed_Hosni" }, { "code": "", "text": "How are you starting mongod?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Maybe you can try changing permissions to fit mongodb needs, but by the looks it should be fine. 
And yeah, please state how are you starting mongodMaybe you can try running it manually using \" mongod --keyFile \"", "username": "Tin_Cvitkovic" }, { "code": "", "text": "This is the way I’m starting my mongo service\nsystemctl start mongod.service", "username": "Ahmed_Hosni" }, { "code": "", "text": "Could you please give me the full command to start the service using mongod --KeyFile? I’m little bit new here\nThx", "username": "Ahmed_Hosni" }, { "code": "mongod --keyFile <path-to-keyfile> --replSet <replicaSetName> --bind_ip localhost,<hostname(s)|ip address(es)>\n\n--bind_ip", "text": "I have posted a tutorial from Official Mongodb Docs in my previous reply, you can try following those 9 steps (if you are not bound by live server) - recreating your keyfile and then starting mongod manually using your config file and your keyFile:Include additional options as required for your configuration. For instance, if you wish remote clients to connect to your deployment or your deployment members are run on different hosts, specify the\n--bind_ipFollow the steps 1. - 7.", "username": "Tin_Cvitkovic" }, { "code": "", "text": "I used this command\n[root@alt06ymr ~]# mongod --keyFile /etc/mongodb/keyFiles/mongo-key --replSet replicaset01 --bind_ip 0.0.0.0No I’m getting this{“t”:{\"$date\":“2022-06-28T16:16:11.450+04:00”},“s”:“I”, “c”:\"-\", “id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.”,“nextWakeupMillis”:4000}}", "username": "Ahmed_Hosni" }, { "code": "mongod --keyFile <path-to-keyfile>\n\n", "text": "Hello @Ahmed_Hosni , are you planning on establishing a replica set as it is, or you are running a singleton mongodb ? In case you are using only one instance of mongodb you should not be running a mongod using “–replSet” command line argument. Instead just try running withAlso, in your /etc/mongod.conf file you should check spacing, as mongod.conf file uses YAML file format and you should have properly written your configuration, again please refer to the Official Docs i have linked above.", "username": "Tin_Cvitkovic" }, { "code": "", "text": "Hi @Tin_CvitkovicI plan to use 1 primary server + 1 replica server\nI did not do anything on primary server yet to not disturb the business, but I have to do the same configuration in primary server as well.\nSo, you tell me I can ignore starting up mongodb service using systemctl and use mongod command?", "username": "Ahmed_Hosni" }, { "code": "sudo systemctl enable mongod.service", "text": "Yes you can! You can just enable the mongod service to be enabled at all times, and starting mongod through command line is sufficent. You should only enable the mongod.service itself:sudo systemctl enable mongod.serviceIn that case, if you ask me, I would test the singleton’s auth first and make sure it’s working properly before bringing down the primary for re-config. 
I will be available for you, have you managed to run mongod with authentication enabled ?", "username": "Tin_Cvitkovic" }, { "code": "", "text": "Could you advice how to do that?\nI have the server ready now and this is my mongod.conf[root@alt06ymr ~]# cat /etc/mongod.confsystemLog:\ndestination: file\nlogAppend: true\npath: /var/log/mongodb/mongod.logstorage:\ndbPath: /var/lib/mongo\njournal:\nenabled: trueprocessManagement:\nfork: true # fork and run in background\npidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\ntimeZoneInfo: /usr/share/zoneinfonet:\nport: 27017\nbindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.#security:\nsecurity:\nauthorization: enabled\nkeyFile: /etc/mongodb/keyFiles/mongo-key#operationProfiling:#replication:\nreplication:\nreplSetName: “replicaset01”#sharding:#auditLog:#snmp:\n[root@alt06ymr ~]#", "username": "Ahmed_Hosni" }, { "code": "sudo systemctl status mongod.service\n", "text": "I believe your mongod service should already be enabled , you can check it using:", "username": "Tin_Cvitkovic" }, { "code": "", "text": "[root@alt06ymr ~]# sudo systemctl status mongod.service\n● mongod.service - MongoDB Database Server\nLoaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\nActive: failed (Result: exit-code) since Tue 2022-06-28 15:48:08 +04; 1h 25min ago\nDocs: https://docs.mongodb.org/manualJun 28 15:48:08 alt06ymr systemd[1]: Starting MongoDB Database Server…\nJun 28 15:48:08 alt06ymr mongod[1214]: about to fork child process, waiting until server is ready for connections.\nJun 28 15:48:08 alt06ymr mongod[1331]: forked process: 1331\nJun 28 15:48:08 alt06ymr mongod[1214]: ERROR: child process failed, exited with 1\nJun 28 15:48:08 alt06ymr mongod[1214]: To see additional information in this output, start without the “–fork” option.\nJun 28 15:48:08 alt06ymr systemd[1]: mongod.service: Control process exited, code=exited status=1\nJun 28 15:48:08 alt06ymr systemd[1]: mongod.service: Failed with result ‘exit-code’.\nJun 28 15:48:08 alt06ymr systemd[1]: Failed to start MongoDB Database Server.\n[root@alt06ymr ~]#", "username": "Ahmed_Hosni" }, { "code": "mongod --keyFile <path-to-keyfile>", "text": "mongod --keyFile <path-to-keyfile>Now you can try running this on your command line.", "username": "Tin_Cvitkovic" } ]
Error reading file /etc/mongodb/keys/mongo-key: Permission denied
2022-06-28T09:42:27.459Z
Error reading file /etc/mongodb/keys/mongo-key: Permission denied
13,235
null
[ "php" ]
[ { "code": "", "text": "sudo /Applications/XAMPP/xamppfiles/bin/pecl install mongodb-1.7.4Above command I tried.\nMongodb extension was installed in /usr/local/Cellar/[email protected]/5.6.35/pecl/20131226/mongodb.so\nBut extension not loaded in xampp(php 5.6)", "username": "reka_rajendran" }, { "code": "extension=mongodb.so", "text": "@reka_rajendran I have not tried this on MacOS, but generally with PHP you also have to edit the php.ini to add:extension=mongodb.so", "username": "Jack_Woehr" } ]
How to load mongodb.so extension in xampp php5.6 on mac os
2022-07-04T10:02:52.161Z
How to load mongodb.so extension in xampp php5.6 on mac os
3,189
null
[ "swift" ]
[ { "code": "@ObservedResults(Group.self) var groupssortDescriptoritem.name@ObservedResultslet sortedGroup.items = group.items.sorted(byKeyPath: \"name\")", "text": "Hi,\nI just learning about Swift and Realm, I’m following the Realm Database with SwiftUI QuickStart, which is really helpful.But I was stacked here@ObservedResults(Group.self) var groupsI know I can use sortDescriptor to sort the group, but I’d like to sort the groups.items by item.name, how can I implement with @ObservedResults?I’m also trying let sortedGroup.items = group.items.sorted(byKeyPath: \"name\") . But the onDelete function won’t works since the IndexSet has changed.", "username": "max_N_A1" }, { "code": "struct tempView: View {\n @ObservedRealmObject var user: User\n \n private let sortDescriptors = [\n SortDescriptor(keyPath: \"name\", ascending: true)\n ]\n \n var body: some View {\n \n VStack {\n // Works with no sorting\n // if let activities = user.activties {\n \n // Delete doesn't work with sorting\n if let activities = user.activties.sorted(by: sortDescriptors) {\n List {\n ForEach(activities) { activity in\n ActivityRow(activity: activity.name)\n }\n .onDelete(perform: $user.activties.remove)\n }\n }\n }\n }\n}\n", "text": "I have the same issue, if I sort the onDelete feature doesn’t work. Without sort it works. Here is the code.", "username": "Jimmi_Andersen" }, { "code": "realm-swift/examples/ios/swift/ListSwiftUI/ReminderListRowViewstruct ReminderListRowView: View {\n @ObservedRealmObject var list: ReminderList\n\n var body: some View {\n HStack {\n Image(systemName: list.icon)\n TextField(\"List Name\", text: $list.name)\n Text(list.reminders.sorted(by: \\Reminder.title, ascending: true).first?.title ?? \"N/A\")\n Spacer()\n Text(\"\\(list.reminders.count)\")\n }.frame(minWidth: 100)\n }\n}\nReminderListlist.reminders.first?.title ?? \"N/A\"", "text": "The same issue can be easily reproduced with the official example at realm-swift/examples/ios/swift/ListSwiftUI/.Just replace the current implementation of ReminderListRowView withand then try to remove an added ReminderList from the launched example.Of course it works perfectly when having list.reminders.first?.title ?? \"N/A\" (no sorting) ", "username": "psitsme" }, { "code": "ResultsListListSwiftUIclass ReminderList: Object, ObjectKeyIdentifiable {\n @Persisted var name = \"New List\"\n @Persisted var icon: String = \"list.bullet\"\n @Persisted var reminders: RealmSwift.List<Reminder>\n}\n\nextension ReminderList {\n var sortedReminders: Results<Reminder>? {\n guard\n realm != nil,\n let thawed = thaw(),\n !thawed.isInvalidated\n else {\n return nil\n }\n \n return thawed.reminders.sorted(by: \\Reminder.title, ascending: true)\n }\n}\nstruct ReminderListRowView: View {\n @ObservedRealmObject var list: ReminderList\n\n var body: some View {\n HStack {\n Image(systemName: list.icon)\n TextField(\"List Name\", text: $list.name)\n Text(list.sortedReminders?.first?.title ?? \"N/A\")\n Spacer()\n Text(\"\\(list.reminders.count)\")\n }.frame(minWidth: 100)\n }\n}\n", "text": "Okay, I’ve got a workaround, but not sure about its performance/efficiency.So to get sorted Results from a List the owner must be thawed first. 
In the ListSwiftUI example it can look like this:And later it can be used seamlessly:", "username": "psitsme" }, { "code": "ForEach(items, id: \\.self)\n", "text": "I think to solve the issue you need to include an identifier in the ForEach header:", "username": "Joe_Stella" }, { "code": "", "text": "It seems pretty clear to me now that Realm’s SwiftUI support is experimental / prototype-level, and that many standard expectations of iOS interfaces (such as list sorting, optional selection objects, scene state restoration) require nasty workarounds if able to find one", "username": "Alex_Ehlke" }, { "code": "", "text": "The constructor ObservedResults can take a sort description for sorting.", "username": "Jason_Flax" } ]
How to sort @ObservedResults for List
2022-05-02T14:08:59.507Z
How to sort @ObservedResults for List
4,617
null
[ "aggregation", "containers", "security", "change-streams" ]
[ { "code": "0.0.0.0mongod.cfghost.docker.internalenvENV PORT=1337 DB_HOSTNAME=host.docker.internal DB_USERNAME=root DB_PASSWORD=root DB_NAME=Test DB_PORT=27017\nenvrootrootdbOwnerreadWriteMongoError: command aggregate requires authentication\n\n at MessageStream.messageHandler (/var/www/myapp/node_modules/mongodb/lib/cmap/connection.js:263:20)\n\n at MessageStream.emit (node:events:390:28)\n\n at processIncomingData (/var/www/myapp/node_modules/mongodb/lib/cmap/message_stream.js:144:12)\n\n at MessageStream._write (/var/www/myapp/node_modules/mongodb/lib/cmap/message_stream.js:42:5)\n\n at writeOrBuffer (node:internal/streams/writable:389:12)\n\n at _write (node:internal/streams/writable:330:10)\n\n at MessageStream.Writable.write (node:internal/streams/writable:334:10)\n\n at Socket.ondata (node:internal/streams/readable:754:22)\n\n at Socket.emit (node:events:390:28)\n\n at addChunk (node:internal/streams/readable:315:12)\n\n at readableAddChunk (node:internal/streams/readable:289:9)\n\n at Socket.Readable.push (node:internal/streams/readable:228:10)\n\n at TCP.onStreamRead (node:internal/stream_base_commons:199:23)\n\nEmitted 'error' event on ChangeStream instance at:\n\n at processError (/var/www/myapp/node_modules/mongodb/lib/change_stream.js:567:38)\n\n at ChangeStreamCursor.<anonymous> (/var/www/myapp/node_modules/mongodb/lib/change_stream.js:436:5)\n\n at ChangeStreamCursor.emit (node:events:390:28)\n\n at /var/www/myapp/node_modules/mongodb/lib/core/cursor.js:343:16\n\n at /var/www/myapp/node_modules/mongodb/lib/core/cursor.js:745:9\n\n at /var/www/myapp/node_modules/mongodb/lib/change_stream.js:330:9\n\n at done (/var/www/myapp/node_modules/mongodb/lib/core/cursor.js:458:7)\n\n at /var/www/myapp/node_modules/mongodb/lib/core/cursor.js:542:11\n\n at executeCallback (/var/www/myapp/node_modules/mongodb/lib/operations/execute_operation.js:70:5)\n\n at callbackWithRetry (/var/www/myapp/node_modules/mongodb/lib/operations/execute_operation.js:122:14)\n\n at /var/www/myapp/node_modules/mongodb/lib/operations/command_v2.js:85:9\n\n at /var/www/myapp/node_modules/mongodb/lib/cmap/connection_pool.js:354:13\n\n at handleOperationResult (/var/www/myapp/node_modules/mongodb/lib/core/sdam/server.js:493:5)\n\n at MessageStream.messageHandler (/var/www/myapp/node_modules/mongodb/lib/cmap/connection.js:263:11)\n\n at MessageStream.emit (node:events:390:28)\n\n at processIncomingData (/var/www/myapp/node_modules/mongodb/lib/cmap/message_stream.js:144:12) {\n\n ok: 0,\n\n code: 13,\n\n codeName: 'Unauthorized',\n\n '$clusterTime': {\n\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1656582517 },\n\n signature: {\n\n hash: Binary {\n\n _bsontype: 'Binary',\n\n sub_type: 0,\n\n position: 20,\n\n buffer: Buffer(20) [Uint8Array] [\n\n 219, 242, 28, 129, 253, 119,\n\n 198, 223, 175, 105, 63, 222,\n\n 228, 0, 1, 204, 26, 204,\n\n 65, 185\n\n ]\n\n },\n\n keyId: Long { _bsontype: 'Long', low_: 1, high_: 1645435952 }\n\n }\n\n },\n\n operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1656582517 }\n\n}\n", "text": "I’m trying to use MongoDB for the first time for a Sails.js app running on docker using Docker Desktop on Windows.Please note, I’m connecting my LOCAL SYSTEM’S MongoDB (not a Mongo container) to my app’s docker container.For test purposes, I used 0.0.0.0 to bind all IPs in my mongod.cfg and I used host.docker.internal as the hostname for docker to connect to my system’s MongoDB.Dockerfile has the folowingenv parameters:I used these env variables in my Sails 
datastore to connect to the db.I get the following error when I run my container despite having the proper roles to the user root. root has dbOwner and readWrite rights.Any help would be appreciated. Thanks!", "username": "Maheshkumar_Sundaram" }, { "code": "", "text": "I reinstalled my docker desktop and somehow the issue was gone.Thanks", "username": "Maheshkumar_Sundaram" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Local system’s Mongo authentication in Docker
2022-06-30T11:01:49.049Z
Local system’s Mongo authentication in Docker
4,683
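The thread above was resolved by reinstalling Docker Desktop, so the underlying cause was never confirmed. For anyone hitting the same "command aggregate requires authentication" error, one common culprit (an assumption on my part, not something established in the thread) is that the user was created in the admin database while the connection authenticates against the application database, i.e. a missing authSource. A quick Node.js check using the hostname, credentials and database name from the post:

// Hypothetical connectivity check with the Node.js driver.
const { MongoClient } = require("mongodb");

const uri = "mongodb://root:root@host.docker.internal:27017/Test?authSource=admin";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  // Listing collections requires successful authentication when auth is enabled.
  const collections = await client.db("Test").listCollections().toArray();
  console.log(collections.map(c => c.name));
  await client.close();
}

main().catch(console.error);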
null
[ "aggregation", "golang" ]
[ { "code": "Invalid $set :: caused by :: Use of undefined variable: this\ndb.getCollection('interests').aggregate([\n {\n $lookup:{\n from: \"products\",\n localField: \"product_types.product_id\",\n foreignField: \"_id\",\n as: \"productInterestData\"\n }\n },\n {\n $set: {\n \"product_types\": {\n $map: {\n input: \"$product_types\",\n in: {\n $mergeObjects: [\n \"$$this\",\n { product: {\n $arrayElemAt: [\n \"$productInterestData\",\n {$indexOfArray: [\"$productInterestData._id\", \"$$this.product_id\"]}\n ]\n }}\n ]\n }\n }\n }\n }\n }\n])\npipeline := mongo.Pipeline{\n // join product model\n // lookup field\n bson.D{bson.E{Key: \"$lookup\", Value: bson.M{\n \"from\": \"products\",\n \"localField\": \"product_types.product_id\",\n \"foreignField\": \"_id\",\n \"as\": \"productInterestData\",\n }}},\n // set array\n bson.D{bson.E{Key: \"$set\", Value: bson.E{\n Key: \"product_types\", Value: bson.E{\n Key: \"$map\", Value: bson.D{\n bson.E{Key: \"input\", Value: \"$product_types\"},\n bson.E{Key: \"in\", Value: bson.E{\n Key: \"$mergeObjects\", Value: bson.A{\n \"$$this\",\n bson.E{Key: \"product\", Value: bson.E{Key: \"$arrayElemAt\", Value: bson.A{\n \"$productInterestData\",\n bson.E{Key: \"$indexOfArray\", Value: bson.A{\"$productInterestData._id\", \"$$this.product_id\"}},\n }}},\n },\n }},\n },\n },\n }}},\n // join data\n bson.D{bson.E{Key: \"$unset\", Value: \"productInterestData\"}},\n}\ncursor, err := ctx.Store.GetCollection(ctx, &models.Interest{}).Aggregate(c, pipeline)\n\t\tif err != nil {\n\t\t\tutils.Log(c, \"warn\", err)\n\t\t\treturn\n\t\t}\n", "text": "Hello,\nWhile trying to run the following query, it is working fine on mongo shell, but I have the following on my golang application:", "username": "chapad" }, { "code": "bson.EValuebson.Ebson.EValuebson.Ebson.D{bson.E{Key: \"$set\", Value: bson.E{\n Key: \"product_types\", Value: bson.E{\n Key: \"$map\", Value: bson.D{\n bson.E{Key: \"input\", Value: \"$product_types\"},\n// ...\nbson.DValuebson.D{\n bson.E{Key: \"$set\", Value: bson.D{\n bson.E{Key: \"product_types\", Value: bson.D{\n bson.E{Key: \"$map\", Value: bson.D{\n bson.E{Key: \"input\", Value: \"$product_types\"},\n// ...\nbson.Abson.Amongo.Pipeline", "text": "Hey @chapad, thanks for the question! I think the issue may be related to using bson.E as the Value in another bson.E. Typically a bson.E should never be used as the Value in a bson.E because the generated BSON doesn’t have the expected structure.For example, this part of your aggregation pipeline:should be rewritten using a bson.D as the Values:Recent versions of MongoDB Compass support exporting aggregation pipelines as Go-syntax BSON documents (see the aggregation Export to Language docs). That can be really helpful for translating complex aggregation pipelines that work in the MongoDB shell to Go-syntax aggregation pipelines. Check it out and see if it helps!P.S. 
The top-level element for an “Export to Language” exported aggregation pipelines is a bson.A, but you can update that top-level bson.A to mongo.Pipeline to make it more readable.", "username": "Matt_Dale" }, { "code": "", "text": "Hello,\nThanks for the insight, it is very convenient to export directly from MongoDB Compass.\nUnfortunately, even with the exported code, I still have the same issue (Invalid $set :: caused by :: Use of undefined variable: this).\nIs it something unsupported by the mongo driver, and is it possible to investigate more, or open an issue on the driver?", "username": "chapad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Go driver Use of undefined variable: this
2022-07-01T07:18:01.216Z
Go driver Use of undefined variable: this
2,901
null
[ "aggregation" ]
[ { "code": "", "text": "Hey guys,I am having some trouble building a script that gets the count of domain addresses of my user base, how would I do that?{id:1, email:‘[email protected]’}\n{id:2, email:‘[email protected]’}\n{id:3, email:‘[email protected]’}I kinda need to aggregate down enough so that it shows\n{gmail.com:2, example.com:1}Been googling a lot, find one for mapreduce but it’s already been deprecated.", "username": "Peter_Ma" }, { "code": "$split$arrayElemAt$setdomain$group$sumdomaindb.collection.aggregate([\n {\n \"$set\": {\n \"domain\": {\n \"$arrayElemAt\": [\n {\n \"$split\": [\n \"$email\",\n \"@\"\n ]\n },\n 1\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$domain\",\n \"count\": {\n \"$sum\": 1\n }\n }\n }\n])\n", "text": "You can do it like this:Working example", "username": "NeNaD" } ]
How do I aggregate by substring (email address domains)
2022-07-04T10:13:33.667Z
How do I aggregate by substring (email address domains)
2,619
null
[ "replication" ]
[ { "code": "", "text": "I have an repliaset (RS-A) of 3 nodes each has 1T data, and the oplog for the cluster is about one day.Can I shutdown the whole cluster for 29 days, and open it up, as it’s far more beyong the oplog window, will the RS be running healthy after 29 days?I tried to use the ebs snapshot of RS-A, node-A1 (29 days ago) to create another RS-B and node B1, B2, B3. When i am trying to startup the first mongod process on B1 (with the replSetName in conf file updated), seems the process cannot started successfully, the 4th node is still try to join the RS-A, instead of working as the first node of RS-B. Is it feasible to do it as stated here?Thanks and regards\nMac", "username": "Mac_Ma" }, { "code": "Can I shutdown the whole cluster for 29 days, and open it up, as it’s far more beyong the oplog window, will the RS be running healthy after 29 days?\nI tried to use the ebs snapshot of RS-A, node-A1 (29 days ago) to create another RS-B and node B1, B2, B3. When i am trying to startup the first mongod process on B1 (with the replSetName in conf file updated), seems the process cannot started successfully, the 4th node is still try to join the RS-A, instead of working as the first node of RS-B. Is it feasible to do it as stated here?\nlocal", "text": "I think at the beginning you should start mongod as a standalone and drop the local database (replica set config from RS-A will be there) if it exists in the backup. please follow MongoDB documentation\nRestore a Replica Set ", "username": "Arkadiusz_Borucki" } ]
How to use the harddisk (ebs) snapshot to create a new replicaset
2022-07-04T07:55:50.309Z
How to use the harddisk (ebs) snapshot to create a new replicaset
1,306
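To make the restore procedure referenced above concrete, here is a mongosh sketch of the usual sequence for turning a snapshot of an RS-A member into the first member of RS-B. Hostnames are placeholders; the key point is that the old replica set metadata lives in the local database and has to be dropped while the node runs as a standalone.

// 1) Start mongod on B1 from the restored data files WITHOUT --replSet, then:
db.getSiblingDB("local").dropDatabase();   // removes RS-A's replication metadata

// 2) Shut down, restart mongod with `replication.replSetName: "RS-B"`, reconnect, and:
rs.initiate({
  _id: "RS-B",
  members: [{ _id: 0, host: "node-b1.example.net:27017" }]  // placeholder hostname
});

// 3) Once B1 is PRIMARY, add the remaining members (they can initial-sync, or be
//    seeded from the same snapshot after the same local-database cleanup):
rs.add("node-b2.example.net:27017");
rs.add("node-b3.example.net:27017");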
null
[]
[ { "code": "", "text": "Please help me to delete the topic which I posted in this forum? I am not able to see the flag option as well", "username": "Siva" }, { "code": "", "text": "Hi @Siva,Please help me to delete the topic which I posted in this forum? I am not able to see the flag option as wellThe flag option should be available to all users, but you may have to click on the “…” below the post to show less common actions:Since you only have one other topic created, I’m assuming you no longer require an answer so I have deleted this for you.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to delete the topic I posted in this forum?
2022-07-04T06:28:09.975Z
How to delete the topic I posted in this forum?
2,640
null
[ "queries" ]
[ { "code": "_deletedOn", "text": "Hello, I have a collection where documents can be marked as deleted by having the _deletedOn field set to the date/time that they were deleted. Documents that aren’t deleted either don’t have this field or the value for it is null. I’ve read through the documentation but there doesn’t seem to be anything for specifically omitting results for which a path exists or is non-null.I’m trying to avoid introducing a separate $match stage for search results but it seems that I’ll have to either do that or filter the results after the query returns. Is there a way to do this within the $search pipeline stage?", "username": "Nathan_Knight" }, { "code": "", "text": "Have you look at\nandUse the exists operator to test if a path to an indexed field name exists. If it exists but isn't indexed, the document isn't included in the results.", "username": "steevej" }, { "code": "", "text": "Thank you for your reply, I have looked at those operators and they come close to solving the issue, the problem is that the field can be present in which case I need to be able to check for a null value which is not possible as far as I know.For now I am just filtering it outside of the $search stage which isn’t ideal but at least it works.", "username": "Nathan_Knight" }, { "code": "", "text": "Perhaps the following is better:", "username": "steevej" }, { "code": "", "text": "A post was split to a new topic: Is there any way to replace null values during indexing?", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Filter out results that have a value for a field
2021-04-06T13:33:44.194Z
Filter out results that have a value for a field
8,999
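Pulling the suggestions above together, a hedged sketch of the compound form: the mustNot/exists clause drops documents where an indexed _deletedOn value is present, and, as the thread notes, documents carrying an explicit null may still need the trailing $match the poster ended up using. The index name and the must clause are placeholders for whatever the application actually searches.

db.items.aggregate([
  {
    $search: {
      index: "default",                       // placeholder index name
      compound: {
        must: [
          { text: { query: "coffee", path: "description" } }  // placeholder search clause
        ],
        mustNot: [
          { exists: { path: "_deletedOn" } }  // excludes docs with an indexed _deletedOn value
        ]
      }
    }
  },
  // Safety net: null here matches both a missing field and an explicit null.
  { $match: { _deletedOn: null } }
]);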
null
[ "java", "spring-data-odm" ]
[ { "code": " List<PackageHoliday> findPackageHolidayByTypeOfPackageHolidayContains(String typeOfPackageHoliday); @GetMapping(\"/escortedTours\")\n public List<PackageHoliday> escortedToursNested() {\n return packageHolidayRepository.findPackageHolidayByTypeOfPackageHolidayContains(\"escortedTours\");\n }\n", "text": "Hi,I’m using Spring Boot with MongoDB.I have repository like this: List<PackageHoliday> findPackageHolidayByTypeOfPackageHolidayContains(String typeOfPackageHoliday);And Controller like thisAs you can see, program is returning a list of elements that has typeOfPackeHoliday == escortedTours.But that return me all elements with that type while I want to return just one random from collection. How I can create method that will return me one random element from collection that have typeOfPackageHoliday == escortedTours", "username": "Sefan_Jankovic" }, { "code": "db.collection.aggregate([\n{ \n $match: { typeOfPackageHoliday: \"escortedTours\" } \n},\n{ \n $sample: { size: 1 } \n},\n])\nMongoRepositoryString matchStage = \"{ '$match': { 'typeOfPackageHoliday': { '$eq': ?0 } } }\";\nString sampleStage = \"{ '$sample' : { 'size' : 1 } }\";\nString aggPipeline = matchStage + \", \" + sampleStage;\n\n@Aggregation(pipeline = { aggPipeline } )\nList<PackageHoliday> findByTypeOfPackageHoliday(String typeOfPackageHoliday);\nString inputType = \"escortedTours\";\nList<PackageHoliday> result = packageHolidayRepository.findByTypeOfPackageHoliday(inputType);\n", "text": "Hello @Sefan_Jankovic, welcome to the MongoDB Community forum!You can try this aggregation:From the shell:Using Spring Data MongoDB’s MongoRepository API - in your repository interface define a method like this:And, you call the repository method as:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SpringBoot With MongoRepository
2022-07-03T15:05:58.209Z
SpringBoot With MongoRepository
2,151
null
[]
[ { "code": "", "text": "I am studying MongoDB.\nAnd I read this.MongoDB is capable of multi-core for read jobs, but insert jobs is capable of single-core only.\n(But I think it will work multi-core for bulk write jobs.)The article was written in 2011~2015.\nIs the current MongoDB the same?", "username": "Kim_Hakseon" }, { "code": "mongorestorenumInsertionWorkersPerCollection ", "text": "Hi @Kim_Hakseon,The MongoDB server is (and has always been) multithreaded and can take advantage of multiple cores, but the WiredTiger storage engine (added in 2015) greatly improves concurrency and resource utilisation compared to the older MMAPv1 storage engine which an article circa 2011-2015 is probably referring to. I strongly recommend studying recent articles and documentation, as 7+ year old information is almost certainly outdated.Operations on a single connection are typically not going to use all available cores as it would generally be undesirable for a single connection to consume all server resources. However, more cores can increase the throughput of concurrent operations per the MongoDB Production Notes on WiredTiger:The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores. Specifically, the total number of active threads (i.e. concurrent operations) relative to the number of available CPUs can impact performanceYou can also utilise multiple threads in your client application to parallelise operations such as inserts. For example, mongorestore has options like numInsertionWorkersPerCollection to control concurrent requests.Related discussion: Will MongoDB utilize all my 4 CPUs? - #4 by Stennie.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I could understand it to some extent by looking at your explanation, the writing of the forum you linked, and the manual.Thank you for your quick and accurate reply.\nHave a good day today. ", "username": "Kim_Hakseon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can MongoDB use multi-core?
2022-07-04T01:11:30.354Z
Can MongoDB use multi-core?
3,863
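To illustrate the client-side parallelism mentioned above, here is a small Node.js sketch (not from the thread) that pushes several insertMany batches concurrently so the server has concurrent operations to spread across cores. Batch counts and sizes are arbitrary.

// Node.js sketch: insert in concurrent batches rather than one document at a time.
const { MongoClient } = require("mongodb");

async function parallelLoad(uri) {
  const client = new MongoClient(uri, { maxPoolSize: 8 });
  await client.connect();
  const coll = client.db("test").collection("load");

  // 8 batches of 10,000 small documents each.
  const batches = Array.from({ length: 8 }, (_, worker) =>
    Array.from({ length: 10000 }, (_, i) => ({ worker, i }))
  );

  // Each insertMany can use its own pooled connection, so the batches run concurrently.
  await Promise.all(batches.map(batch => coll.insertMany(batch, { ordered: false })));

  await client.close();
}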
null
[]
[ { "code": "{\n lsid: {\n id: UUID(\"cb05cafc-c6e5-499a-904f-70e5f42506ed\"),\n uid: Binary(Buffer.from(\"d2ca1836aaed04eff4b456c51087fcebefbf828ce3769117ac24ba7e0aa04ba5\", \"hex\"), 0)\n },\n txnNumber: Long(\"1\"),\n op: 'u',\n ns: 'io_blitzz.usertable',\n ui: UUID(\"5430849c-6922-4ecf-a535-cf1d5c804b2c\"),\n o: { '$v': 2, diff: { u: { a: 'simple bbb' } } },\n o2: { _id: ObjectId(\"62a1d82c6ce0604e2a9e1636\") },\n ts: Timestamp({ t: 1656074879, i: 2 }),\n t: Long(\"7\"),\n v: Long(\"2\"),\n wall: ISODate(\"2022-06-24T12:47:59.109Z\"),\n stmtId: 0,\n prevOpTime: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n postImageOpTime: { ts: Timestamp({ t: 1656074879, i: 1 }), t: Long(\"7\") }\n }\n{ \"lsid\" : { \"id\" : UUID(\"439488ea-86e3-407e-bd85-97c84fcf380d\"), \"uid\" : BinData(0,\"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=\") }, \"txnNumber\" : NumberLong(1), \"op\" : \"u\", \"ns\" : \"io.usertable\", \"ui\" : UUID(\"7f1d5ea3-6a83-4fb1-8baa-8e87baa3a8d7\"), \"o\" : { \"$v\" : 1, \"$set\" : { \"surname\" : \"a1 caprio\" } }, \"o2\" : { \"_id\" : ObjectId(\"62b94beb1a070faf11a9eaf0\") }, \"ts\" : Timestamp(1656312210, 4), \"t\" : NumberLong(1), \"v\" : NumberLong(2), \"wall\" : ISODate(\"2022-06-27T06:43:30.533Z\"), \"stmtId\" : 0, \"prevOpTime\" : { \"ts\" : Timestamp(0, 0), \"t\" : NumberLong(-1) }, \"postImageOpTime\" : { \"ts\" : Timestamp(1656312210, 3), \"t\" : NumberLong(1) } }\n", "text": "When there is an update on a document, oplog entry is supposed to generate $set/$unset key and value.\nBut it seems I am getting another format in 5.0.x.If I check on 4.0.x, it works as expected.", "username": "Mandar_Pawar" }, { "code": "", "text": "Operations I am doing is update key ‘a’ to another value ‘simple bbb’.\nExpected tag o is something like o: { ‘$v’: 2, $set: { a: ‘simple bbb’ } }\nBut actual value is o: { ‘$v’: 2, diff: { u: { a: ‘simple bbb’ } } }\nWhat is this diff: and u: tag? This seems to be generated in mongo 5.0.x\nWhen I check oplogs on mongo 4.0.x, I can see $set tag as part of o: correctly", "username": "Mandar_Pawar" }, { "code": "diffu", "text": "Hi @Mandar_Pawar and welcome to the community!!The oplog format is internal and subject to change. Change Streams is the supported API for observing changes in a MongoDB deployment.\nHowever to answer your question,{ ‘$v’: 2, diff: { u: { a: ‘simple bbb’ } } }diff signifies difference in the old and new field entries.\nu signifies the operation type, in this case it would be Update. (there can be others ad d: Delete, i: Insert and so on)Please refer to the update-operators documentation to understand more on the changes for the oplog in the recent versions.Let us know if you have any further questions.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Oplog update entry without $set and $unset
2022-06-27T07:30:32.512Z
Oplog update entry without $set and $unset
1,575
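Since the answer above steers observers toward Change Streams rather than the internal oplog format, a short mongosh sketch of where the $set-style information surfaces there, namely updateDescription.updatedFields, using the collection name from the question:

// mongosh sketch: watch updates without parsing oplog entries.
const watchCursor = db.usertable.watch([], { fullDocument: "updateLookup" });

while (!watchCursor.isClosed()) {
  if (watchCursor.hasNext()) {
    const event = watchCursor.next();
    if (event.operationType === "update") {
      // For an update such as { $set: { a: "simple bbb" } } this prints { a: "simple bbb" },
      // independent of how the server encodes the change internally.
      printjson(event.updateDescription.updatedFields);
      printjson(event.updateDescription.removedFields);
    }
  }
}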
null
[ "connecting" ]
[ { "code": "MongoServerSelectionError: connect ECONNREFUSEDreason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n ...\n}\n", "text": "Hey guys,\nSo I have a free cluster on MongoDB Atlas for my node.js app. When I was developing the app, the connection to the database worked perfectly fine. Now I have deployed this app on a server and I get an error to connect to the remote db:\nMongoServerSelectionError: connect ECONNREFUSED\nandSo after some researchs, I decided to add the server’s ip address to the database’s white list, but still getting the same error.So I’m waiting for some solutions, thanks in advance guys!", "username": "Ewan_Humbert" }, { "code": "ping/// example\nping cluster0-shard-00-00.ze4xc.mongodb.net\ntelnet27017/// example\ntelnet cluster0-shard-00-00.ze4cx.mongodb.net 27017\n", "text": "Hey @Ewan_Humbert - Firstly, welcome to the community So I have a free cluster on MongoDB Atlas for my node.js app. When I was developing the app, the connection to the database worked perfectly fine. Now I have deployed this app on a server and I get an error to connect to the remote dbRegarding the above, could you advise on the following:I would also recommend you to please try performing the initial basic network connectivity tests and provide the output for the cluster you are having trouble connecting to (from the server where you have deployed your app):Note: You can find the hostname in the metrics page of your clusterAdditionally, I would recommend to review the Troubleshoot Connection Issues documentation. You may also find the following blog post regarding tips for atlas connectivity useful too.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hey @Jason_Tran\nThank you so much for your answer!I finally fixed the problem by authorizing the server to connect with the cluster, that’s pretty much it lol.\nFor those who face the same issue, and have access to the cPanel, just go to “SSH authorization”, and permit the cluster’s IP addresses that you find in your error message. And that’s pretty much it.", "username": "Ewan_Humbert" }, { "code": "", "text": "Thanks for posting your fix here and i’m glad to hear that you got it working! It’s interesting to also know that this was deployed via cPanel.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting ECONNREFUSED and ReplicaSetNoPrimary errors
2022-06-30T11:39:29.424Z
Getting ECONNREFUSED and ReplicaSetNoPrimary errors
5,643
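Alongside the ping/telnet checks suggested in this thread, a small driver-level probe run from the deployment server often surfaces the underlying cause; this is a sketch only, and the URI below is a placeholder rather than the poster's cluster:

    const { MongoClient } = require("mongodb");

    async function probe() {
        // substitute the real SRV string from the Atlas "Connect" dialog
        const client = new MongoClient("mongodb+srv://user:[email protected]/test", {
            serverSelectionTimeoutMS: 5000
        });
        try {
            await client.db("admin").command({ ping: 1 });   // the driver connects lazily on the first operation
            console.log("cluster reachable");
        } catch (err) {
            console.error(err);   // the TopologyDescription in the error names the hosts it could not reach
        } finally {
            await client.close();
        }
    }
    probe();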
null
[ "charts" ]
[ { "code": "", "text": "So with the deprecation of this, it will remove the ability to filter charts on ALL fields. This isn’t good at all for my use case and I’m sure others as well.I understand fully that you’re moving to the SDK, but WHY are you not letting authenticated access to be able to filter on all fields?Currently, you have to specify which fields can be filtered. I’m very confused by this restriction. My webapp allows users to add custom fields to forms. I can’t specify these fields as I’ll never know.I went with Atlas because of the schemaless solution it provided. But now it looks like that’s the direction Atlas is going.If this filter capability gets restricted and removed, then you have caused my business to lose a lot of value and that disappoints me, but it’s your product, so I understand. Please provide any help or solutions or anything.Thanks", "username": "Protrakit_Support" }, { "code": "{ employeeName: \"Tom Hollander\" }", "text": "Hi @Protrakit_Support - thanks for raising your concerns. I’ll explain the reason for this restriction, but if it prevents you from using the tool I’d like to learn more about your scenario so we can see if there are other viable solutions.The requirement to explicitly define filterable fields exists to make embedding more secure. To use a contrived example, imagine if I had a chart showing the average employee salary per department. In aggregate, this information may not be considered sensitive. But if someone added a filter to the chart { employeeName: \"Tom Hollander\" } the chart would now show information just about me, which would be revealing sensitive information.Under normal circumstances, the chart filters will only be manipulated by the site’s developer and they will make sure the filters are appropriate. However there is nothing to stop site users from using the developer tools to inject their own filters, so we need to protect against this case by disabling filtering on unexpected fields by default.Can you elaborate on how you are currently using filtering with your verified signature charts? Is this just a usability issue (e.g. it takes a long time to explicitly allow every field you filter on), or do your documents vary so much that it’s not possible to know in advance which fields might exist?thanks\nTom", "username": "tomhollander" }, { "code": "", "text": "Tom,\nThanks for the quick reply. I completely understand from the security side of it now.My app lets users create form fields. These form fields can be different types (date, text, number, etc). I’ll never be able to add those as they’ll be ever changing. Plus, these fields are nested within an object in the collection.If I were able to specify the object without having to select each nested field, then I can definitely make that work. But, you don’t allow that either.I’m using the verified signature so I can filter on these nested fields.", "username": "Protrakit_Support" }, { "code": "", "text": "Got it, thanks. While we do plan on getting rid of the Verified Signature mode eventually, it’s not happening imminently so you can keep using that for the moment. We’ll have a look into what we can do to support your scenario with authenticated embedding before we retire Verified Signature. One idea is to have an opt-in wildcard filter - would that work for you?Tom", "username": "tomhollander" } ]
Authenticated Verified Signature
2022-06-30T13:11:38.111Z
Authenticated Verified Signature
2,463
null
[ "swift" ]
[ { "code": "closeuser.logOut", "text": "According to the documentation, close is not available in Swift SDK.There is no need to manually close a realm in Swift or Objective-C. When a realm goes out of scope and is removed from memory due to ARC , the realm is closed.However when handling a user logout use case, after the user logged out and try to log in again the Realm through a “Realm at path XXX already opened with different sync user.”Apparently user.logOut would not close the Realm for me and I do not think I have any previous Realm ref stored in the memory.I have even tried to delete the Realm database file associated with the sync user but once the user got re-logged into the app. The same error appears.", "username": "NightNight" }, { "code": "", "text": "The error seems to be caused by a different sync config between the local realm and sync realm, so I checked the sync config content; however, the user ID, partition, and a few other fields are all the same because they are both from the same sync user.", "username": "NightNight" }, { "code": ".logout.remove", "text": "Tried both .logout and .remove and both methods turn out to have the same issue after the same user re-logs in after signing out in the same app session.", "username": "NightNight" }, { "code": "", "text": "Seems like you have a reference to your Realm object somewhere, maybe a singleton or a zombie object. What does xcode debug memory graph show you? Do you see any Realm object there?", "username": "Jerome_Pasquier" }, { "code": "", "text": "Hi @Jerome_Pasquier thanks for the response. That’s what I suspect is happening right now. I am new to iOS dev so I could not find where is the Realm object reference in my code. Thank you for the pointer, I can check the memory graph and see if I could find the bug there.", "username": "NightNight" } ]
How to force close Realm database with Swift?
2022-07-02T17:32:13.453Z
How to force close Realm database with Swift?
3,155
null
[ "crud" ]
[ { "code": " \"emailVerifyStatus\": [\n \"ready\",\n \"ready\",\n \"ready\"\n ]\n{\n \"dataSource\":\"Cluster0\",\n \"database\":\"v2\",\n \"collection\":\"allPersons\",\n \"filter\":{\n \"host\":\"hostName\",\n \"emailVerifier.status\":\"ready\"\n },\n \"update\":{\n \"$set\":{\n \"emailVerifier.status\":\"uploading\"\n }\n }\n}\n", "text": "Hi there,I’m struggling to figure out a way to update items within an array via the data-api. I’ve got an array (emailVerifyStatus) with 3 items in it and I want to search for all the items named ‘ready’ and change them to ‘uploading’.I’m using the end point ‘/data/beta/action/updateMany’ with the following code that’s working without an array.However this code is not working within an array. Do you have any Advice? Thank you.", "username": "spencerm" }, { "code": " \"emailVerifyStatus\": [\n \"ready\",\n \"ready\",\n \"ready\"\n ]\n \"$set\":{\n \"emailVerifier.$[]\":\"uploading\"\n }\n\"emailVerifyStatus\":\"ready\"{\"$set\": \n {\"emailVerifyStatus.$[element]\":\"uploading\"}},\n {arrayFilters: [{\"element\":\"ready\"}]\n}\n", "text": "Hi,I assume that’s what your array looks like:To update all the array elements, use the all positional $[] operator\nfor example, your $set stage can look like this:if you want to update just matching array elements \"emailVerifyStatus\":\"ready\" use the filtered positional operator", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data api updateMany within an array
2022-07-02T23:25:44.526Z
Data api updateMany within an array
2,702
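The filtered positional operator in the accepted answer needs an arrayFilters option; in mongosh/driver syntax the complete call looks like the sketch below. Whether the Data API request body accepts an arrayFilters field is not confirmed here - if it does not, the same update can be issued from a driver or an Atlas Function instead.

    db.allPersons.updateMany(
        { host: "hostName" },                                       // filter from the original request
        { $set: { "emailVerifyStatus.$[element]": "uploading" } },  // update only the matching elements
        { arrayFilters: [ { element: "ready" } ] }                  // "element" binds each array entry
    )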
null
[ "swift" ]
[ { "code": "let value1: Decimal128 = 70\nlet value2: Decimal128 = 1.09\nvalue1 / value2 = +6422018348623853211009174311926606E-32\n", "text": "I have this dividing operation which returned result is ok but the value synced to Atlas backend is wrong:The value synced to server is: 12.29721490089025582478677982706510Any ideea what is the issue?", "username": "horatiu_anghel" }, { "code": "", "text": "Hi @horatiu_anghel, it looks like you are using the swift SDK? Which version are you running? We had an issue with Decimal128 values if they had more than 19 significant digits, but that has been fixed for a while now.", "username": "James_Stone" }, { "code": "", "text": "Hi @James_Stone, thank you for your reply! Yes, I am using swift SDK, last version 10.28.1.", "username": "horatiu_anghel" }, { "code": "", "text": "@horatiu_anghel thanks! Could you please open an issue in GitHub - realm/realm-swift: Realm is a mobile database: a replacement for Core Data & SQLite with a code sample of how you are storing the value? The team there should be able to help figure out what is going on.", "username": "James_Stone" }, { "code": "", "text": "Ok! I will open an issue. Thank you!", "username": "horatiu_anghel" }, { "code": "class TestClass: Object {\n @Persisted var someDecimal: Decimal128 = 0.0\n}\nlet test = TestClass()\nlet value1: Decimal128 = 70\nlet value2: Decimal128 = 1.09\ntest.someDecimal = value1/value2\nsomeDecimalprint(test.someDecimal)", "text": "I am not able to duplicate this issue.I created a model (Swift)then populated itand stored it (sync’ing to the server)When I read the object back and print the value of someDecimalprint(test.someDecimal)I get+6422018348623853211009174311926606E-32Perhaps it’s how the model is set up? Can you post that so we can take a look?", "username": "Jay" }, { "code": "", "text": "Hi @Jay,If you try to sync that value on another device is the correct one? Or using MongoDB Compass to see the value stored on Atlas.On device where the value was created everything is ok.", "username": "horatiu_anghel" }, { "code": "", "text": "Sure enough… Either accessing/syncing from another fresh device or wiping the current local data and resync’ing results in incorrect data.\nDecimal128742×206 22.7 KB\nHere’s the git link### How frequently does the bug occur?\n\nAll the time\n\n### Description\n\nI am havi…ng a strange issue when dividing two Decimal128 values. The result on device and printed on console is ok but the value synced to Atlas backend is wrong. I believe it is related to the significant digits.\n\nI have this two values:\n\n```\n@Persisted var grossPrice: Decimal128 = 70\n@Persisted var vatValue: Double = 1.09\n\ngrossPrice / Decimal128(floatLiteral: vatValue) = +6422018348623853211009174311926606E-32\n```\nThe value stored on server is: ```12.29721490089025582478677982706510```.\n\nOn the device where the operation is made, everything is ok but, on other synced devices, all the results are wrong.\n\n### Stacktrace & log output\n\n_No response_\n\n### Can you reproduce the bug?\n\nYes, always\n\n### Reproduction Steps\n\n_No response_\n\n### Version\n\n10.28.1\n\n### What SDK flavour are you using?\n\nMongoDB Realm (i.e. Sync, auth, functions)\n\n### Are you using encryption?\n\nYes, using encryption\n\n### Platform OS and version(s)\n\niOS 15.5\n\n### Build environment\n\nXcode version: 13.4.1", "username": "Jay" } ]
Dividing two Decimal128 produces weird results
2022-06-28T21:28:58.907Z
Dividing two Decimal128 produces weird results
2,046
null
[ "aggregation", "transactions" ]
[ { "code": "_idg_*[_id, ..., Time, ..., g_client_machinename, g_vendor_machinename, g_report_machinename, g_uploaded_at]all_keysunique_id$sortg_uploaded_atresult = coll.aggregate([\n {\n '$project': {\n 'data': {'$objectToArray': \"$$ROOT\"},\n 'g_unique_id': 1,\n 'g_uploaded_at': 1, }\n },\n {'$unwind': \"$data\"},\n {'$project': {'g_uploaded_at': 1, 'g_unique_id': 1, 'key': \"$data.k\", '_id': 0}},\n {'$sort': {'g_uploaded_at': -1}},\n {'$group': {'_id': \"$g_unique_id\", 'all_keys': {'$push': \"$key\"}}},\n {\n '$project': {\n 'all_keys': 1,\n 'all_keys_string': {\n '$reduce': {\n 'input': \"$all_keys\",\n 'initialValue': \"\",\n 'in': {'$concat': [\"$$value\", \"$$this\"]}\n }\n }\n }\n },\n {\n '$group': {\n '_id': \"$all_keys_string\",\n 'all_keys': {'$first': \"$all_keys\"},\n 'g_unique_id': {'$first': \"$_id\"}\n }\n },\n {'$unset': \"_id\"}\n])\n", "text": "My collection is un-nested transaction data. Sometimes the schema changes, and I’d like to query the collection and get the _id of the first record after a change (or the last before. doesn’t matter only so it’s consistent).The data always has a date or time feature, plus a few features I tag on g_* when the record is added to the collection:[_id, ..., Time, ..., g_client_machinename, g_vendor_machinename, g_report_machinename, g_uploaded_at]Below I have an aggregation pipeline which actually returns a row for each change in schema with two features in results:But I need to know when the change happened. I need an explicit sort stage I can reason about. I added $sort before the concat, but I can’t get the g_uploaded_at feature into the output.", "username": "xtian_simon" }, { "code": "{ _id: ObjectId(\"62bb53499fc9e78118c732d5\"), a: { b: 2 } }\n{ _id: ObjectId(\"62bb53529fc9e78118c732d6\"), a: { b: 2, c: 3 } }\n{ _id: ObjectId(\"62bb54239fc9e78118c732d7\"), a: 2 }\n", "text": "My first recommendation would be to $sort on g_uploaded_at as the first stage. The 2 $project and $unwind will end up being a slow memory sort. If you $sort first you could use an index. Even if you do not use an index it should be faster because it is sorting before $unwind which increase the number of documents to sort.If schema changes are important why not simply keep a schema version number in your documents, or something like g_schema_date.Note that since you do not recursively $objectToArray what ever you do, you will only detect new or removed at the root document. Starting with the following documents:without recursively doing $objectToArray your keys of $$ROOT will always be _id and a. You cannot detect if a field change from a simple value to an array or object and you cannot detect that field a.c has been added or removed.", "username": "steevej" }, { "code": "", "text": "If schema changes are important why not simply keep a schema version number in your documents, or something like g_schema_date.I think this comment is more about how I should design my application, and not about the query itself. I need a query to initialize the effective date change for schemas already in MongoDB.Note that since you do not recursively […] you cannot detect that field a.c has been added or removed.As I stated in my OP, my data is not nested. The original data is tabular transaction data from CSV. Not an issue for this project.", "username": "xtian_simon" } ]
Date sorting aggregation with group stage so I can reason about the results
2022-06-26T00:48:32.838Z
Date sorting aggregation with group stage so I can reason about the results
2,206
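To keep a timestamp in the output of the pipeline discussed above, the grouping stages can carry g_uploaded_at with accumulators. A mongosh-syntax sketch (the original is pymongo, but the stages translate directly; db.coll is a placeholder collection name and this is untested against the poster's data):

    db.coll.aggregate([
        { $sort: { g_uploaded_at: -1 } },                  // sort first so an index on g_uploaded_at can help
        { $project: {
            g_unique_id: 1,
            g_uploaded_at: 1,
            keys: { $map: { input: { $objectToArray: "$$ROOT" }, as: "f", in: "$$f.k" } }
        } },
        { $project: {
            g_unique_id: 1,
            g_uploaded_at: 1,
            all_keys_string: {
                $reduce: { input: "$keys", initialValue: "", in: { $concat: [ "$$value", "$$this" ] } }
            }
        } },
        { $group: {
            _id: "$all_keys_string",
            g_unique_id: { $first: "$g_unique_id" },
            last_seen:  { $max: "$g_uploaded_at" },        // newest document with this exact key set
            first_seen: { $min: "$g_uploaded_at" },        // oldest document with this exact key set
            first_id:   { $last: "$_id" }                  // input is sorted descending, so $last is the earliest _id
        } }
    ])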
https://www.mongodb.com/…646e9a91aa8a.png
[ "dot-net", "compass" ]
[ { "code": "", "text": "Hello,I have the following document structure:\nI’m using the latest C# MongoDB driver and I am trying to retrieve only the documents with Value of “11111” for instance.I’ve tried the following filter:var filter = Builders.Filter.Eq(“Parameters.Value”, “11111”);This does not return any documents (yet on Compass { Parameters.Value: “11111” } returns the document in question.What might I be doing wrong? Any help would be appreicated.", "username": "Matan_Cohen" }, { "code": "using MongoDB.Bson;\nusing MongoDB.Driver;\n\nvar client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<BsonDocument>(\"coll\");\n\nvar filter = Builders<BsonDocument>.Filter.Eq(\"Parameters.Value\", 11111);\nvar query = coll.Find(filter);\nforeach (var result in query.ToList())\n{\n Console.WriteLine(result);\n}\n\"11111\"stringint11111", "text": "Hi, @Matan_Cohen,Welcome to the MongoDB Community. I understand that you’re having trouble querying a document by the value in an array element. I wrote the following minimal repro and was able to successfully query documents from a collection with a similar structure:Note that your C# filter contains the value \"11111\" (data type is string) but your schema appears to contain values in the array of type int. In my filter above, I use the numeric value 11111 to query matching documents.When querying data from MongoDB, you must either use the correct data types in your query or explicitly convert values to a common data type prior to comparison.If you continue to have problems with this query, please provide a self-contained repro as well as sample data so we can troubleshoot further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hey, @James_KovacsI’d like to thank you for taking the time to reply to me. Apparently there are sometimes string values for Value and other times there are integer values. I did however try your solution and it does not work, so I assume something else is wrong with my code.I’ll be investigating a bit more before troubling anyone again, but I will definitely come back and give an update once I either give up investigating and seek more help, or figure out the solution to my issue for future reference.Regardless, thank you very much!", "username": "Matan_Cohen" }, { "code": "", "text": "@James_KovacsI’ve actually found user-error in my code. I’m connecting and managing two databases simultaneosly and I was actually applying my queries to the other database which doesn’t even share the same structure.Sorry for your time and thank you again.", "username": "Matan_Cohen" }, { "code": "", "text": "A post was split to a new topic: How can I filter entries based on the FieldValue.Alias and FieldValue.Value?", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
C# MongoDB filter a value from the following structure
2022-03-03T16:25:21.910Z
C# MongoDB filter a value from the following structure
33,516
null
[ "dot-net", "graphql", "legacy-realm-cloud" ]
[ { "code": "", "text": "Platform: Realm Cloud\nClients: .Net client library, GraphQLI’m using .Net nuget package on Win10 platform and GraphQL to access the same DB on a web app.\nAfter I add a new field in a RealmObject class and run, the field will be added to DB automatically.\nBut the GraphQL interface seems cached and return the error Message: Cannot query field “<>” on type “<>”I’ve tried to set a new FullSyncConfiguration.SchemaVersion in .NET but it still doesn’t work.\nHow do I update the schema on GraphQL side? Thanks", "username": "Sing_Leung" }, { "code": "", "text": "@Sing_Leung The schema is cached - you’ll need to hit this endpoint to refresh the cache after making schema changes - https://docs.realm.io/sync/graphql-web-access/using-the-graphql-client#schema-appears-to-be-cached-to-an-old-version", "username": "Ian_Ward" }, { "code": "Use to clear the cached schema of the Realm at if schema caching is enabled (e.g. due to a recent schema change).", "text": "@Ian_Ward is this still the recommended approach? Those docs are not the latest version, I don’t understand how to make the call as your link suggests and I can’t find anything about this for the current latest version of realm.How to use the API - Realm Sync (LEGACY) saysUse DELETE /graphql/schema/:path to clear the cached schema of the Realm atpath if schema caching is enabled (e.g. due to a recent schema change).Is there a code example of this?It would be great if realm could clear the graphql cache automatically when the schema is modified…", "username": "Shea_Dawson" } ]
GraphQL schema update
2020-06-07T11:22:52.360Z
GraphQL schema update
4,663
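For the DELETE /graphql/schema/:path endpoint quoted above (legacy Realm Cloud), a hedged Node.js sketch; the instance host, realm path and Authorization header value are placeholders, and the exact token scheme should be checked against the legacy docs rather than taken from this example:

    const https = require("https");

    function clearSchemaCache(instanceHost, realmPath, accessToken) {
        // e.g. instanceHost = "myapp.cloud.realm.io", realmPath = "/myrealm" (both hypothetical)
        const req = https.request({
            host: instanceHost,
            path: "/graphql/schema/" + encodeURIComponent(realmPath),
            method: "DELETE",
            headers: { Authorization: accessToken }   // assumption: the same token used for GraphQL queries
        }, res => console.log("status:", res.statusCode));
        req.on("error", err => console.error(err));
        req.end();
    }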
null
[]
[ { "code": "", "text": "Apps keep on crashing because of Access to invalidated Collection object\nHow do i check if the object has been invalidated.if (realmObject != null){. —> crashing over here}", "username": "spam_mail" }, { "code": "", "text": "What SDK are you using and do you have a crash log/exception message that you can share?", "username": "nirinchev" }, { "code": "classpath “io.realm:realm-gradle-plugin:10.8.0”\n\n2021-09-13 01:13:09.112 6565-6565/com.app.abc E/REALM_JNI: jni: ThrowingException 9, Access to invalidated Collection object, .\n2021-09-13 01:13:09.112 6565-6565/com.app.abc E/REALM_JNI: Exception has been thrown: Access to invalidated Collection object\n2021-09-13 01:13:09.113 6565-6565/com.app.abc D/AndroidRuntime: Shutting down VM\n2021-09-13 01:13:09.116 6565-6565/com.app.abc E/AndroidRuntime: FATAL EXCEPTION: main\nProcess: com.app.abc, PID: 6565\njava.lang.IllegalStateException: Access to invalidated Collection object\nat io.realm.internal.OsList.nativeSize(Native Method)\nat io.realm.internal.OsList.size(OsList.java:285)\nat io.realm.ManagedListOperator.size(ManagedListOperator.java:72)\nat io.realm.RealmList.size(RealmList.java:599)\nat com.app.abc.activity.MessageActivity.onBackPressed(MessageActivity.java:376)\nat com.app.abc.activity.MessageActivity.onBackClick(MessageActivity.java:178)\nat com.app.abc.activity.MessageActivity_ViewBinding$4.doClick(MessageActivity_ViewBinding.java:90)\nat butterknife.internal.DebouncingOnClickListener.onClick(DebouncingOnClickListener.java:18)\nat android.view.View.performClick(View.java:7448)\nat android.view.View.performClickInternal(View.java:7425)\nat android.view.View.access$3600(View.java:810)\nat android.view.View$PerformClick.run(View.java:28305)\nat android.os.Handler.handleCallback(Handler.java:938)\nat android.os.Handler.dispatchMessage(Handler.java:99)\nat android.os.Looper.loop(Looper.java:223)\nat android.app.ActivityThread.main(ActivityThread.java:7656)\nat java.lang.reflect.Method.invoke(Native Method)\nat com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)\nat com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947)\n", "text": "So we have the Chat functionality and users can delete the chat. When one user deletes the chat, it will not be visible to other user. Issues occurs when one user deletes the chat and the 2nd is still on that screen", "username": "spam_mail" }, { "code": "", "text": "Apps keep on crashing because of Access to invalidated Collection object.", "username": "adamvinh_zi" }, { "code": "", "text": "Is that a question? Are you having the same issue? The OP’s issue was likely due to the client not observing changes in Realm. So when a Realm object was deleted on one client the other client didn’t know about it so when attempting to modify a deleted object, the error was thrown.If you have a question or a problem with your code, perhaps opening a different thread or posting a question on StackOverflow with more details would help us to help you.", "username": "Jay" } ]
Android and iOS Access to invalidated Collection object
2021-09-12T19:58:52.535Z
Android and iOS Access to invalidated Collection object
4,234
null
[ "replication", "sharding" ]
[ { "code": "", "text": "Hi all;\nwe have a 3-node sharded cluster. All of the nodes are also a replica set, and there are 3 config servers on these nodes.\nWe want to restore this cluster to other servers with a file system snapshot. We will stop the secondary nodes, take a file system snapshot, and then copy the data directory to the test servers. Finally, we will open the copy of the production cluster on the test servers. What are the steps for doing this operation?\nOr what is the best way to restore a sharded cluster to new test servers?\nRegards.", "username": "baki_sahin1" }, { "code": "", "text": "Hi, there is good MongoDB documentation describing how to restore a sharded cluster from file system snapshots.\nI think you should follow the steps from the documentation: Restore a Sharded Cluster", "username": "Arkadiusz_Borucki" } ]

Restore sharded cluster with file system snapshot
2022-07-02T00:50:12.072Z
Restore sharded cluster with file system snapshot
1,551
https://www.mongodb.com/…c_2_1024x536.png
[ "java", "atlas-cluster", "transactions", "connector-for-bi", "scala" ]
[ { "code": "\nERROR\nApplication diagnostics message: User class threw exception: org.apache.spark.SparkException: Job aborted. at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:105) at org.apache.spark.rdd.PairRDDFunctions.$anonfun$saveAsNewAPIHadoopDataset$1(PairRDDFunctions.scala:1077) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:414) at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1075) at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopDataset(JavaPairRDD.scala:833) at io.cdap.cdap.etl.spark.batch.RDDUtils.saveHadoopDataset(RDDUtils.java:58) at io.cdap.cdap.etl.spark.batch.RDDUtils.saveUsingOutputFormat(RDDUtils.java:47) at io.cdap.cdap.etl.spark.batch.SparkBatchSinkFactory.writeFromRDD(SparkBatchSinkFactory.java:175) at io.cdap.cdap.etl.spark.batch.BaseRDDCollection$1.run(BaseRDDCollection.java:239) at io.cdap.cdap.etl.spark.SparkPipelineRunner.runPipeline(SparkPipelineRunner.java:383) at io.cdap.cdap.etl.spark.batch.BatchSparkPipelineDriver.run(BatchSparkPipelineDriver.java:227) at io.cdap.cdap.app.runtime.spark.SparkTransactional$2.run(SparkTransactional.java:236) at io.cdap.cdap.app.runtime.spark.SparkTransactional.execute(SparkTransactional.java:208) at io.cdap.cdap.app.runtime.spark.SparkTransactional.execute(SparkTransactional.java:138) at io.cdap.cdap.app.runtime.spark.AbstractSparkExecutionContext.execute(AbstractSparkExecutionContext.scala:229) at io.cdap.cdap.app.runtime.spark.SerializableSparkExecutionContext.execute(SerializableSparkExecutionContext.scala:63) at io.cdap.cdap.app.runtime.spark.DefaultJavaSparkExecutionContext.execute(DefaultJavaSparkExecutionContext.scala:91) at io.cdap.cdap.api.Transactionals.execute(Transactionals.java:63) at io.cdap.cdap.etl.spark.batch.BatchSparkPipelineDriver.run(BatchSparkPipelineDriver.java:158) at io.cdap.cdap.app.runtime.spark.SparkMainWrapper$.main(SparkMainWrapper.scala:87) at io.cdap.cdap.app.runtime.spark.SparkMainWrapper.main(SparkMainWrapper.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:732) Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. 
Client view of cluster state is {type=UNKNOWN, servers=[{address=transmit-staging-cluster-biconnector.o4mkl.mongodb.net:27015, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLException: Unsupported or unrecognized SSL message}}] at com.mongodb.internal.connection.BaseCluster.getDescription(BaseCluster.java:182) at com.mongodb.internal.connection.SingleServerCluster.getDescription(SingleServerCluster.java:41) at com.mongodb.client.internal.MongoClientDelegate.getConnectedClusterDescription(MongoClientDelegate.java:136) at com.mongodb.client.internal.MongoClientDelegate.createClientSession(MongoClientDelegate.java:94) at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.getClientSession(MongoClientDelegate.java:249) at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:172) at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:161) at com.mongodb.DB.executeCommand(DB.java:774) at com.mongodb.DBCollection.getStats(DBCollection.java:2282) at com.mongodb.hadoop.splitter.MongoSplitterFactory.getSplitterByStats(MongoSplitterFactory.java:76) at com.mongodb.hadoop.splitter.MongoSplitterFactory.getSplitter(MongoSplitterFactory.java:127) at com.mongodb.hadoop.MongoInputFormat.getSplits(MongoInputFormat.java:56) at io.cdap.cdap.etl.batch.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:45) at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:131) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at io.cdap.cdap.app.runtime.spark.data.DatasetRDD.getPartitions(DatasetRDD.scala:61) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:300) at scala.Option.getOrElse(Option.scala:189) at org.apache.spark.rdd.RDD.partitions(RDD.scala:296) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2257) at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:83) ... 28 more\n", "text": "Hi Team,I’m trying to create a pipeline in Google Cloud Datafusion to extract data from MongoDB Atlas to load in BigQuery. 
I’m using the google provided Mongo DB driver (v 2.0.0) in order to achieve this but I haven’t had any luck connecting to Atlas.I’m trying to connect via standard connection and I’ve enabled the BI connection for our cluster and I’ve whitelisted the necessary IP’s in the network settings with no luck.The MongoDB pipeline settings looks like this in Datafusion (I’m trying to connect using the host, port and user defined in the BI connection) :\n\nimage1849×969 64.3 KB\nHowever this is not working and I’m getting the following errors in Datafusion logs:The connection seems to be timing out. I’ve tested connecting using MySQL Workbench (using the B.I connection and I could connect fine.Does anyone here have experience with Datafusion and MongoDB Atlas and can help?Thank you", "username": "Marck_Munoz" }, { "code": "", "text": "Hi Marck - We have better ways to move extract data from Atlas to BigQuery like AtlasSQL, Datflow templates. Would be able to help based on the use case. Happy to have quick call to discuss on the same. Please reach out to me on - [email protected]", "username": "paresh_saraf1" } ]
Connecting Datafusion to MongoDB Atlas
2022-06-29T00:56:02.438Z
Connecting Datafusion to MongoDB Atlas
3,316
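A guess at the failure above: the Datafusion plugin speaks the MongoDB wire protocol, while port 27015 belongs to the BI Connector, which expects MySQL-protocol clients, hence the "Unsupported or unrecognized SSL message". The plugin normally needs the ordinary mongod hosts on port 27017; if it cannot take an SRV URI directly, the member hosts can be listed like this (the hostname below is a placeholder):

    const dns = require("dns").promises;

    // An Atlas SRV name such as cluster0.abcde.mongodb.net publishes its member hosts under
    // _mongodb._tcp.<name>; listing them yields host:port pairs for host/port-style connector settings.
    dns.resolveSrv("_mongodb._tcp.cluster0.abcde.mongodb.net")
       .then(records => records.forEach(r => console.log(`${r.name}:${r.port}`)))
       .catch(console.error);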
null
[ "compass", "migration" ]
[ { "code": "", "text": "I couldn’t find a thread related to my situation but if there is one then I apologize for posting a redundant thread.My company was recently acquired by a bigger company and so all of our accounts were changed from [email protected] to [email protected], and so they want all retained employees to change their accounts to the new email or create a new account with the @newcompany if that’s not possible.I created a new account for MongoDB and so now I’m trying to essentially migrate every database and collection in my cluster to my new account while maintaining the same schema.I played around with the MongoDB Compass but I couldn’t see any sort of way to do it.", "username": "Vince_Quach" }, { "code": "", "text": "Hi Vince,You don’t need to do any data plane level movements here: just add your new domain users to your Atlas org and then remove the old ones: and Org Owner level user can do this.By the way, you can also move projects between orgs if you’re an org owner of both the source and destination org!If you really do need to move a cluster to a different project/org for some reason: then the best bet is to use Live Migration, but that doesn’t seem needed in your caseCheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migrating entire cluster to different MongoDB account
2022-07-01T18:11:04.398Z
Migrating entire cluster to different MongoDB account
2,767
null
[ "python", "database-tools" ]
[ { "code": "", "text": "hello there\ni have some huge collections (2-10 million records) what is the fastest way for me to read them in pymongo ?\niterating cursor seems too slow. is there any other alternative ?\nmongoexport looks kinda fast. but using a command line tool in my codebase seems like a bad idea", "username": "Ali_ihsan_Erdem1" }, { "code": "", "text": "Having to read into your client code all your documents from your huge collectionsseems like a bad ideaAnd will be slow because you transfer all your data over the Network.And mongoexport is no magic. It uses the same protocole and API you have accès and does some kind of for Loop using a cursor. Despite being fast because it uses a faster language than python, the whole process might be slower because it has to write to disk and then you have to Read from disk while the direct pymongo route might allow you the local disk I/O.Read about the aggregation framework.", "username": "steevej" }, { "code": "from bson.raw_bson import RawBSONDocument\nfrom pymongo import MongoClient\n\nclient = MongoClient(...)\ncoll = client.db.test\nraw_coll = coll.with_options(codec_options=coll.codec_options.with_options(document_class=RawBSONDocument))\nfor raw_doc in raw_coll.find():\n print(raw_doc)\n# PyMongo must be installed with snappy support via: python3 -m pip install 'pymongo[snappy]'\nclient = MongoClient(compressors=\"snappy\")\n", "text": "To improve the performance of reading large collections you should try using RawBSONDocument:RawBSONDocument is a read-only view of the raw BSON data for each document. It can improve performance because the BSON data is decoded lazily. The raw BSON can also be accessed directly via the RawBSONDocument.raw property.You may also want to try enabling network compression to reduce the bytes sent over the network: mongo_client – Tools for connecting to MongoDB — PyMongo 4.3.3 documentation\nand\nInstalling / Upgrading — PyMongo 4.3.3 documentation", "username": "Shane" } ]
What is the best way to read big collections from mongodb?
2022-06-27T22:05:22.965Z
What is the best way to read big collections from mongodb?
6,272
null
[ "atlas-device-sync" ]
[ { "code": "Exception backtrace:\n<backtrace not supported on this platform>\n", "text": "Facing a sync error but there is no information about when it’s got break please help me with this I have more than 10k records in altas I can not able to find it.E/REALM_SYNC: Connection[1]: Session[1]: Failed to parse, or apply received changeset: ERROR: ArrayInsert: Invalid prior_size (list size = 4, prior_size = 0)2021-05-20 17:46:04.934 24099-24396/com.reach52.healthcare.debug E/REALM_SYNC: Connection[3]: Session[3]: Failed to parse, or apply received changeset: ERROR: ArrayInsert: Invalid prior_size (list size = 4, prior_size = 0)2021-05-20 17:46:05.034 24099-24396/com.rea.healthcare.debug I/REALM_SYNC: Connection[3]: Connection closed due to error\n2021-05-20 17:46:05.035 24099-24396/com.rea.healthcare.debug E/Realm Setup: Received an ObjectServerError.\n2021-05-20 17:46:05.035 24099-24396/com.rea.healthcare.debug D/REALM_SYNC: Connection[3]: Allowing reconnection in 3266722 milliseconds\n2021-05-20 17:46:05.035 24099-24396/com.rea.healthcare.debug V/REALM_SYNC: Using already open Realm file: /data/user/0/com.reach52.healthcare.debug/files/mongodb-realm/master-appidby/60a4c95914263d500c582c0c/s_101.realm", "username": "kunal_gharate" }, { "code": "", "text": "Hi @kunal_gharate, have you tracked down the issue and/or found a way to reproduce it?", "username": "Andrew_Morgan" }, { "code": "", "text": "@Andrew_Morgan 1621937309.287 7365-7426/com.reach52.healthcare.debug E/REALM_SYNC: Connection[1]: Session[1]: Failed to parse, or apply received changeset: ERROR: ArrayInsert: Invalid prior_size (list size = 4, prior_size = 0)I got this issue again but i don’t know how it get produced", "username": "kunal_gharate" }, { "code": "Failed to parse, or apply received changeset: ArrayInsert: Invalid prior_sizeConnection[1]: Session[1]: Failed to parse, or apply received changeset: ArrayInsert: Invalid prior_size (list size = 3, prior_size = 0) (instruction target: SettingsV2RealmModel[ObjectId{6159ef49872d4b24322a1daa}].ts[0], version: 47, last_integrated_remote_version: 1, origin_file_ident: 211, timestamp: 220174678003)\nException backtrace:\n0 Divtracker 0x000000010b690676 _ZN5realm4sync17BadChangesetErrorCI1NS_4util22ExceptionWithBacktraceISt13runtime_errorEEIJNSt3__112basic_stringIcNS5_11char_traitsIcEENS5_9allocatorIcEEEEEEEDpOT_ + 38\n1 Divtracker 0x000000010b68ac92 _ZN5realm4sync12_GLOBAL__N_125throw_bad_transaction_logENSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 34\n2 Divtracker 0x000000010b68aa9f _ZNK5realm4sync18InstructionApplier19bad_transaction_logERKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 1135\n3 Divtracker 0x000000010b698999 _ZNK5realm4sync18InstructionApplier19bad_transaction_logIJmRKjEEEvPKcDpOT_ + 73\n4 Divtracker 0x000000010b698477 _ZN5realm4sync18InstructionApplier20resolve_list_elementINS_4util8overloadIJZNS1_clERKNS0_5instr11ArrayInsertEE4$_20ZNS1_clES8_E4$_21ZNS1_clES8_E4$_22ZNS1_clES8_E4$_23ZNS1_clES8_E4$_24ZNS1_clES8_E4$_25EEEEEvRNS_7LstBaseEmNSt3__111__wrap_iterIPKN5mpark7variantIJNS0_12InternStringEjEEEEESQ_PKcOT_ + 2871\n5 Divtracker 0x000000010b697439 _ZN5realm4sync18InstructionApplier13resolve_fieldINS_4util8overloadIJZNS1_clERKNS0_5instr11ArrayInsertEE4$_20ZNS1_clES8_E4$_21ZNS1_clES8_E4$_22ZNS1_clES8_E4$_23ZNS1_clES8_E4$_24ZNS1_clES8_E4$_25EEEEEvRNS_3ObjENS0_12InternStringENSt3__111__wrap_iterIPKN5mpark7variantIJSI_jEEEEESQ_PKcOT_ + 1177\n6 Divtracker 0x000000010b68ea2c 
_ZN5realm4sync18InstructionApplierclERKNS0_5instr11ArrayInsertE + 636\n7 Divtracker 0x000000010b6b16f4 _ZN5realm4sync18InstructionApplier5applyIS1_EEvRT_RKNS0_9ChangesetEPNS_4util6LoggerE + 100\n8 Divtracker 0x000000010b6ae662 _ZN5realm5_impl17ClientHistoryImpl27integrate_server_changesetsERKNS_4sync12SyncProgressEPKyPKNS2_11Transformer15RemoteChangesetEmRNS2_11VersionInfoERNS2_21ClientReplicationBase16IntegrationErrorERNS_4util6LoggerEPNSE_20SyncTransactReporterE + 946\n9 Divtracker 0x000000010b6bff0d _ZN5realm5_impl14ClientImplBase7Session29initiate_integrate_changesetsEyRKNSt3__16vectorINS_4sync11Transformer15RemoteChangesetENS3_9allocatorIS7_EEEE + 173\n10 Divtracker 0x000000010b68572a _ZN12_GLOBAL__N_111SessionImpl29initiate_integrate_changesetsEyRKNSt3__16vectorIN5realm4sync11Transformer15RemoteChangesetENS1_9allocatorIS6_EEEE + 42\n11 Divtracker 0x000000010b6be9bd _ZN5realm5_impl14ClientImplBase7Session24receive_download_messageERKNS_4sync12SyncProgressEyRKNSt3__16vectorINS3_11Transformer15RemoteChangesetENS7_9allocatorISA_EEEE + 589\n12 Divtracker 0x000000010b6bc15d _ZN5realm5_impl14ClientProtocol22parse_message_receivedINS0_14ClientImplBase10ConnectionEEEvRT_PKcm + 5485\n13 Divtracker 0x000000010b6b6bd4 _ZN5realm5_impl14ClientImplBase10Connection33websocket_binary_message_receivedEPKcm + 52\n14 Divtracker 0x000000010b64ce25 _ZN12_GLOBAL__N_19WebSocket17frame_reader_loopEv + 1509\n15 Divtracker 0x000000010b6c39a0 _ZN5realm4util7network7Service9AsyncOper22do_recycle_and_executeINSt3__18functionIFvNS5_10error_codeEmEEEJRS7_RmEEEvbRT_DpOT0_ + 224\n16 Divtracker 0x000000010b6c3464 _ZN5realm4util7network7Service14BasicStreamOpsINS1_3ssl6StreamEE16BufferedReadOperINSt3__18functionIFvNS8_10error_codeEmEEEE19recycle_and_executeEv + 196\n17 Divtracker 0x000000010b6f4784 _ZN5realm4util7network7Service4Impl3runEv + 484\n18 Divtracker 0x000000010b67cb5d _ZN5realm4sync6Client3runEv + 29\n19 Divtracker 0x000000010b822e8d _ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5realm5_impl10SyncClientC1ENS2_INS7_4util6LoggerENS4_ISB_EEEERKNS7_16SyncClientConfigENS_8weak_ptrIKNS7_11SyncManagerEEEEUlvE0_EEEEEPvSN_ + 45\n20 libsystem_pthread.dylib 0x00007fff6bfee8fc _pthread_start + 224\n21 libsystem_pthread.dylib 0x00007fff6bfea443 thread_start + 15\nSettingsV2RealmModelSettingsV2RealmModel{\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_p\": {\n \"bsonType\": \"string\"\n },\n \"ce\": {\n \"bsonType\": \"bool\"\n },\n \"g\": {\n \"bsonType\": \"int\"\n },\n \"t\": {\n \"bsonType\": \"double\"\n },\n \"te\": {\n \"bsonType\": \"bool\"\n },\n \"ts\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"s\": {\n \"bsonType\": \"objectId\"\n },\n \"t\": {\n \"bsonType\": \"double\"\n }\n },\n \"required\": [\n \"s\",\n \"t\"\n ],\n \"title\": \"TaxesV2RealmModel\"\n }\n }\n },\n \"required\": [\n \"_id\",\n \"_p\"\n ],\n \"title\": \"SettingsV2RealmModel\"\n}\n", "text": "I have the same issue Failed to parse, or apply received changeset: ArrayInsert: Invalid prior_size.Both iOS and Android apps are unable to open Realm for some users because of the error received from the Realm Sync.Full error:I do not know the exact steps to reproduce. We have the SettingsV2RealmModel model that contains a list of embedded objects. After we deployed this new model and performed data migration from the old V1 format we received several reports from our users that they are unable to start the app. 
Erasing databases or the whole Realm folder on the client-side didn’t work. I tried to install the fresh app and log in as a user with the problem and I faced the same issue.We had to terminate sync and start it again. After that, I was able to log in as a previously broken user. Sync termination is a terrible user experience and we don’t have a clean way to perform that on our mobile clients so we can’t do that all the time. Please-please-please fix the issue on your side.SettingsV2RealmModel scheme:Also relates to Realm Data delete from atlas but getting this issue", "username": "Anton_P" }, { "code": "", "text": "I also have a full log from the device but can’t list it here as it may contain some personal info like tokens/URLs/IDs/etc. Please reach me if that may help in the investigation.", "username": "Anton_P" }, { "code": "", "text": "@Anton_P Could you open an issue on Realm Swift and we will take a look into this. Thanks!", "username": "Lee_Maguire" }, { "code": "", "text": "That’s also reproduces on the Android so I decided to post it here but no problem will also open an issue for the iOS repo.### How frequently does the bug occur?\n\nSometimes\n\n### Description\n\nCloned… from here https://www.mongodb.com/community/forums/t/e-realm-sync-connection-1-session-1-failed-to-parse-or-apply-received-changeset/107809/4?u=anton_p\n\nBoth iOS and Android apps are unable to open Realm for some users because of the error received from the Realm Sync.\n\n### Stacktrace & log output\n\n```shell\nConnection[1]: Session[1]: Failed to parse, or apply received changeset: ArrayInsert: Invalid prior_size (list size = 3, prior_size = 0) (instruction target: SettingsV2RealmModel[ObjectId{6159ef49872d4b24322a1daa}].ts[0], version: 47, last_integrated_remote_version: 1, origin_file_ident: 211, timestamp: 220174678003)\nException backtrace:\n0 Divtracker 0x000000010b690676 _ZN5realm4sync17BadChangesetErrorCI1NS_4util22ExceptionWithBacktraceISt13runtime_errorEEIJNSt3__112basic_stringIcNS5_11char_traitsIcEENS5_9allocatorIcEEEEEEEDpOT_ + 38\n1 Divtracker 0x000000010b68ac92 _ZN5realm4sync12_GLOBAL__N_125throw_bad_transaction_logENSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 34\n2 Divtracker 0x000000010b68aa9f _ZNK5realm4sync18InstructionApplier19bad_transaction_logERKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 1135\n3 Divtracker 0x000000010b698999 _ZNK5realm4sync18InstructionApplier19bad_transaction_logIJmRKjEEEvPKcDpOT_ + 73\n4 Divtracker 0x000000010b698477 _ZN5realm4sync18InstructionApplier20resolve_list_elementINS_4util8overloadIJZNS1_clERKNS0_5instr11ArrayInsertEE4$_20ZNS1_clES8_E4$_21ZNS1_clES8_E4$_22ZNS1_clES8_E4$_23ZNS1_clES8_E4$_24ZNS1_clES8_E4$_25EEEEEvRNS_7LstBaseEmNSt3__111__wrap_iterIPKN5mpark7variantIJNS0_12InternStringEjEEEEESQ_PKcOT_ + 2871\n5 Divtracker 0x000000010b697439 _ZN5realm4sync18InstructionApplier13resolve_fieldINS_4util8overloadIJZNS1_clERKNS0_5instr11ArrayInsertEE4$_20ZNS1_clES8_E4$_21ZNS1_clES8_E4$_22ZNS1_clES8_E4$_23ZNS1_clES8_E4$_24ZNS1_clES8_E4$_25EEEEEvRNS_3ObjENS0_12InternStringENSt3__111__wrap_iterIPKN5mpark7variantIJSI_jEEEEESQ_PKcOT_ + 1177\n6 Divtracker 0x000000010b68ea2c _ZN5realm4sync18InstructionApplierclERKNS0_5instr11ArrayInsertE + 636\n7 Divtracker 0x000000010b6b16f4 _ZN5realm4sync18InstructionApplier5applyIS1_EEvRT_RKNS0_9ChangesetEPNS_4util6LoggerE + 100\n8 Divtracker 0x000000010b6ae662 
_ZN5realm5_impl17ClientHistoryImpl27integrate_server_changesetsERKNS_4sync12SyncProgressEPKyPKNS2_11Transformer15RemoteChangesetEmRNS2_11VersionInfoERNS2_21ClientReplicationBase16IntegrationErrorERNS_4util6LoggerEPNSE_20SyncTransactReporterE + 946\n9 Divtracker 0x000000010b6bff0d _ZN5realm5_impl14ClientImplBase7Session29initiate_integrate_changesetsEyRKNSt3__16vectorINS_4sync11Transformer15RemoteChangesetENS3_9allocatorIS7_EEEE + 173\n10 Divtracker 0x000000010b68572a _ZN12_GLOBAL__N_111SessionImpl29initiate_integrate_changesetsEyRKNSt3__16vectorIN5realm4sync11Transformer15RemoteChangesetENS1_9allocatorIS6_EEEE + 42\n11 Divtracker 0x000000010b6be9bd _ZN5realm5_impl14ClientImplBase7Session24receive_download_messageERKNS_4sync12SyncProgressEyRKNSt3__16vectorINS3_11Transformer15RemoteChangesetENS7_9allocatorISA_EEEE + 589\n12 Divtracker 0x000000010b6bc15d _ZN5realm5_impl14ClientProtocol22parse_message_receivedINS0_14ClientImplBase10ConnectionEEEvRT_PKcm + 5485\n13 Divtracker 0x000000010b6b6bd4 _ZN5realm5_impl14ClientImplBase10Connection33websocket_binary_message_receivedEPKcm + 52\n14 Divtracker 0x000000010b64ce25 _ZN12_GLOBAL__N_19WebSocket17frame_reader_loopEv + 1509\n15 Divtracker 0x000000010b6c39a0 _ZN5realm4util7network7Service9AsyncOper22do_recycle_and_executeINSt3__18functionIFvNS5_10error_codeEmEEEJRS7_RmEEEvbRT_DpOT0_ + 224\n16 Divtracker 0x000000010b6c3464 _ZN5realm4util7network7Service14BasicStreamOpsINS1_3ssl6StreamEE16BufferedReadOperINSt3__18functionIFvNS8_10error_codeEmEEEE19recycle_and_executeEv + 196\n17 Divtracker 0x000000010b6f4784 _ZN5realm4util7network7Service4Impl3runEv + 484\n18 Divtracker 0x000000010b67cb5d _ZN5realm4sync6Client3runEv + 29\n19 Divtracker 0x000000010b822e8d _ZNSt3__1L14__thread_proxyINS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN5realm5_impl10SyncClientC1ENS2_INS7_4util6LoggerENS4_ISB_EEEERKNS7_16SyncClientConfigENS_8weak_ptrIKNS7_11SyncManagerEEEEUlvE0_EEEEEPvSN_ + 45\n20 libsystem_pthread.dylib 0x00007fff6bfee8fc _pthread_start + 224\n21 libsystem_pthread.dylib 0x00007fff6bfea443 thread_start + 15\n```\n\nI also have a full log from the device but can’t list it here as it may contain some personal info like tokens/URLs/IDs/etc. Please reach me if that may help in the investigation.\n\n### Can you reproduce the bug?\n\nYes, always\n\n### Reproduction Steps\n\nI do not know the exact steps to reproduce. We have the `SettingsV2RealmModel` model that contains a list of embedded objects. After we deployed this new model and performed data migration from the old V1 format we received several reports from our users that they are unable to start the app. Erasing databases or the whole Realm folder on the client-side didn’t work. I tried to install the fresh app and log in as a user with the problem and I faced the same issue.\n\nWe had to terminate sync and start it again. After that, I was able to log in as a previously broken user. Sync termination is a terrible user experience and we don’t have a clean way to perform that on our mobile clients so we can’t do that all the time. 
Please-please-please fix the issue on your side.\n\n`SettingsV2RealmModel` scheme:\n```json\n{\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_p\": {\n \"bsonType\": \"string\"\n },\n \"ce\": {\n \"bsonType\": \"bool\"\n },\n \"g\": {\n \"bsonType\": \"int\"\n },\n \"t\": {\n \"bsonType\": \"double\"\n },\n \"te\": {\n \"bsonType\": \"bool\"\n },\n \"ts\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"s\": {\n \"bsonType\": \"objectId\"\n },\n \"t\": {\n \"bsonType\": \"double\"\n }\n },\n \"required\": [\n \"s\",\n \"t\"\n ],\n \"title\": \"TaxesV2RealmModel\"\n }\n }\n },\n \"required\": [\n \"_id\",\n \"_p\"\n ],\n \"title\": \"SettingsV2RealmModel\"\n}\n```\n\n### Version\n\n10.14.0\n\n### What SDK flavour are you using?\n\nMongoDB Realm (i.e. Sync, auth, functions)\n\n### Are you using encryption?\n\nNo, not using encryption\n\n### Platform OS and version(s)\n\niOS 15.0\n\n### Build environment\n\n```\nProductName:\tmacOS\nProductVersion:\t11.6\nBuildVersion:\t20G165\n\n/Applications/Xcode.app/Contents/Developer\nXcode 13.1\nBuild version 13A1030d\n\n/usr/local/bin/pod\n1.11.2\nRealm (10.14.0)\nRealmSwift (10.14.0)\nRealmSwift (= 10.14.0)\n\n/bin/bash\nGNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)\n\n/usr/local/bin/carthage\n0.38.0\n(not in use here)\n\n/usr/bin/git\ngit version 2.30.1 (Apple Git-130)\n```", "username": "Anton_P" }, { "code": "", "text": "Was there any resolution here? I’m seeing a very similar issue.There is. changeset referencing a document that no longer exists in the collection. I’m gonna terminate synch, but I’ve got a feeling this will pop up again.", "username": "Ryan_Goodwin" }, { "code": "", "text": "Any recent update about this? we also getting the same error time to time.\nOur object also has some subdocuments and it is a bit complex object. It is difficult to restart the realm sync it is a multi-tenant app and has lots of partitions. (One per each).", "username": "Salinda_Karunarathna" } ]
E/REALM_SYNC: Connection[1]: Session[1]: Failed to parse, or apply received changeset
2021-05-20T11:41:13.177Z
E/REALM_SYNC: Connection[1]: Session[1]: Failed to parse, or apply received changeset
5,326
null
[ "flexible-sync" ]
[ { "code": "$and{\n \"name\": \"owner-write\",\n \"applyWhen\": {},\n \"read\": {\n \"_partitionKey\": \"PUBLIC\"\n },\n \"write\": {\n \"$and\": [\n {\n \"_partitionKey\": \"%%user.id\"\n },\n {\n \"organizationID\": \"%%user.custom_data.organizationID\"\n }\n ]\n }\n}\nQuerySubscription<Product>(name: \"productsList\") { $0._partitionKey == user._id && $0.organizationID == organizationID }", "text": "Hi,I was wondering if $and operator is allowed in permissions JSON. Something like this:And this is the client side query:\nQuerySubscription<Product>(name: \"productsList\") { $0._partitionKey == user._id && $0.organizationID == organizationID }I am trying to add a record using this approach but the server reverts it.Or maybe there is a better solution for my use case.Thank you!", "username": "horatiu_anghel" }, { "code": "{\n \"name\": \"owner-write\",\n \"applyWhen\": {},\n \"read\": {\n \"_partitionKey\": \"PUBLIC\"\n },\n \"write\": {\n \"_partitionKey\": \"%%user.id\"\n \"organizationID\": \"%%user.custom_data.organizationID\"\n }\n}\n", "text": "That is indeed a legal permission syntax (though the $and is uncessary), you could just do this instead:It sounds like your issue is likely that the permissions are working, and you are trying to insert something that permissions says you are not allowed to insert (which will cause the “Reverting” behaviour)", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi @Tyler_Kaye, thank you for the info!Actually, this was my initial setup but I thought that the issue was caused by the missing $and operator.You were right, the problem was somewhere else, I managed to fix it.", "username": "horatiu_anghel" }, { "code": "", "text": "Awesome thats glad to hear. I hope you are enjoying flexible sync! We are hoping to move the “rules” into the main rules tab soon so that it will be a bit easier to define and visualize rules soon!Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Yeah, I like it so far!Excellent, looking forward for the new updates!Thank you,\nHoratiu", "username": "horatiu_anghel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Flexible sync permission using $and operator
2022-06-30T20:44:54.133Z
Flexible sync permission using $and operator
1,747
null
[ "aggregation", "atlas-search" ]
[ { "code": "db.product.aggregate([\n {\n \"$match\": {\n \"tenantId\": \"bbb60d4e-212f-445e-97a7-ddad13395931\",\n \"isArchive\": false,\n \"isActive\": true\n }\n },\n {\n \"$sort\": {\n \"description\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 10\n }\n])\n", "text": "Hello there,I would like to know if it is possible to sort with case insensitive using Atlas Search Index. Ok, you will say “where is your $search?”. The search is optional, sometimes I don`t want to pass, but I really need to guarantee that always will be sorted by description in case insensitive (description is also the field that I am using in Atlas Search Index).It would be perfect if I could achieve this, otherwise I will have to use project and lowercase all.", "username": "Renan_Geraldo" }, { "code": "$toLowerdescription", "text": "Hi @Renan_Geraldo - Welcome to the community.To better understand what you’re possibly after with Atlas search, are you able to give a few sample documents and expected outputs? It would also be great to see how you are currently achieving the desired results without use of Atlas search so that I further understand the context behind this question.It would be perfect if I could achieve this, otherwise I will have to use project and lowercase all.Also, just to clarify here, do you mean using $toLower when projecting so that your description field outputs are all lower case?Regards,\nJason", "username": "Jason_Tran" }, { "code": "db.product.aggregate([\n {\n \"$search\": {\n \"autocomplete\": {\n \"path\": \"description\",\n \"query\": \"Nota\"\n }\n }\n },\n {\n \"$match\": {\n \"tenantId\": \"bbb60d4e-212f-445e-97a7-ddad13395931\",\n \"isArchive\": false,\n \"isActive\": true\n }\n },\n {\n \"$sort\": {\n \"description\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 10\n }\n])\n{\n \"_id\": {\n \"$oid\": \"62bb64108f4e7c44e778c81a\"\n },\n \"productCode\": 430566,\n \"tenantId\": \"bbb60d4e-212f-445e-97a7-ddad13395931\",\n \"codAlfa\": \"1\",\n \"description\": \"Caderno preto \"\n}\n", "text": "Hi @Jason_Tran ,Thank you so much for the response and the greetings.Yes, I will contextualize better. First of all, this is the front end consuming my Api: https://notare.dev.qa.smartpos.net.br/ (I am sorry, it is in Portuguese, but I think it will be easy to explain). As you can see, it is a list of products. This req executes the query that I send above.There is also a search bar. This search bar uses the same requisition, but it passes a query param called “description”. This is the query that will be executed using the Atlas searchSo, sometimes I can do the query using the $search and sometimes not, but what I really want is to make every time the sorting to be case insensitive. So, I was trying to figure out if it is possible to do this with the Atlas search index. 
Passing the query in both manners, it is returning case sensitive.Basically, this is the document:And for the second question, yes I would use the $toLower, but it would not return the lower field to the final user, I would just use to sort.", "username": "Renan_Geraldo" }, { "code": "db.product.aggregate([\n {\n \"$match\": {\n \"tenantId\": \"bbb60d4e-212f-445e-97a7-ddad13395931\",\n \"isArchive\": false,\n \"isActive\": true\n }\n },\n {\n \"$sort\": {\n \"featuredPosition\": 1,\n \"description\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 10\n }\n], {collation: {\n locale: \"pt\"\n}})\n", "text": "Hi,I could achieve it with the collation.", "username": "Renan_Geraldo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to sort case insensitive with Atlas Search?
2022-06-30T21:58:55.035Z
Is it possible to sort case insensitive with Atlas Search?
4,112
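A follow-on note to the collation fix above: for that sort to be index-assisted, the index has to be created with the same collation options that the aggregate call uses. A sketch, with field order purely illustrative and to be adapted to the actual query shape:

    db.product.createIndex(
        { tenantId: 1, isArchive: 1, isActive: 1, featuredPosition: 1, description: 1 },
        { collation: { locale: "pt" } }   // must match the collation passed to aggregate()
    )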
null
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "My Problem:MongooseServerSelectionError: connect ECONNREFUSED ::1:27017\nat Connection.openUri (D:\\website\\Discord-OAuth2\\node_modules\\mongoose\\lib\\connection.js:819:32)\nat D:\\website\\Discord-OAuth2\\node_modules\\mongoose\\lib\\index.js:377:10\nat D:\\website\\Discord-OAuth2\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:32:5\nat new Promise ()\nat promiseOrCallback (D:\\website\\Discord-OAuth2\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:31:10)\nat Mongoose._promiseOrCallback (D:\\website\\Discord-OAuth2\\node_modules\\mongoose\\lib\\index.js:1220:10)\nat Mongoose.connect (D:\\website\\Discord-OAuth2\\node_modules\\mongoose\\lib\\index.js:376:20)\nat Object. (D:\\website\\Discord-OAuth2\\src\\database\\database.js:2:27)\nat Module._compile (node:internal/modules/cjs/loader:1112:14)\nat Module._extensions…js (node:internal/modules/cjs/loader:1166:10) {\nreason: TopologyDescription {\ntype: ‘Unknown’,\nservers: Map(1) { ‘localhost:27017’ => [ServerDescription] },\nstale: false,\ncompatible: true,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\nlogicalSessionTimeoutMinutes: undefined\n},\ncode: undefined\n}My Code:const mongoose = require(‘mongoose’);\nmodule.exports = mongoose.connect(‘mongodb://localhost:27017/discordauth’,\n{ useNewUrlParser: true});Can someone help me?", "username": "Cooler_Typ991" }, { "code": "", "text": "ECONNREFUSEDmeans there is no mongod running at the given address. You need to start mongod first.", "username": "steevej" }, { "code": "", "text": "How I start mongod? with NPM?", "username": "Cooler_Typ991" }, { "code": "", "text": "Technical writers are better than I to explain, so see", "username": "steevej" }, { "code": "", "text": "I just noticed that I haven’t installed MongoDB yet. ", "username": "Cooler_Typ991" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongooseServerSelectionError: connect ECONNREFUSED ::1:27017
2022-07-01T12:47:04.943Z
MongooseServerSelectionError: connect ECONNREFUSED ::1:27017
5,092
null
[ "installation" ]
[ { "code": "", "text": "[centos@ip-172-26-2-104 ~]$ pwd\n/home/centos[centos@ip-172-26-2-104 ~]$ ls -l\ndrwxrwxrwx. 3 centos centos 26 Jul 1 09:05 mongodb[centos@ip-172-26-2-104 ~]$ ls -l mongodb/log\n-rwxrwxrwx. 1 centos centos 0 Jul 1 09:01 aso i have both the data and log directories with 777 permission[centos@ip-172-26-2-104 ~]$ ps -ef | grep mongod\ncentos 15164 14007 0 09:32 pts/5 00:00:00 grep --color=auto mongod[centos@ip-172-26-2-104 ~]$ mongod --dbpath mongodb --port 27017 --logpath mongodb/log --fork --logappend --bind_ip_allabout to fork child process, waiting until server is ready for connections.\nforked process: 15194\nERROR: child process failed, exited with 1\nTo see additional information in this output, start without the “–fork” option.[centos@ip-172-26-2-104 ~]$ mongod --dbpath mongodb --port 27017 --logpath mongodb/log --logappend --bind_ip_all{“t”:{\"$date\":“2022-07-01T09:33:45.030Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“logpath “/home/centos/mongodb/log” should name a file, not a directory.”}}}\n[centos@ip-172-26-2-104 ~]$", "username": "Rayaguru_S_Dash" }, { "code": "", "text": "There is nothing we can add to the error message you got./home/centos/mongodb/log” should name a file, not a directory.”Specify a file for --logpath rather than a directory.", "username": "steevej" }, { "code": "", "text": "I got the fix,\n\nMongod fork issue needs mongod user permission1623×277 88.1 KB\n\nsudo chown -R mongod:mongod Mongod fork issue needs mongod user permission", "username": "Rayaguru_S_Dash" }, { "code": "", "text": "there has to be directory only, i confirmed over testing", "username": "Rayaguru_S_Dash" }, { "code": "", "text": "Mongod fork issue needs mongod user permissionOnly when you start it as mongod user. But you don’t. Your shell prompt and current working directory seems to indicate that you are user centos.there has to be directory onlyWhat do you mean by the above?logpath has to be file. if you redo your ls command you will notice a new file named mongdblog. You have a directory mongolog but it is not being used.", "username": "steevej" }, { "code": "", "text": "Next time you publish terminal output, please do it using Markdown text so that we can cut-n-paste part of it in our answers.", "username": "steevej" } ]
Mongod --fork --logpath /var/log/mongodb.log gives error = ERROR: child process failed, exited with 1
2022-07-01T09:34:30.984Z
Mongod &ndash;fork &ndash;logpath /var/log/mongodb.log gives error = ERROR: child process failed, exited with 1
7,086
null
[ "containers", "upgrading" ]
[ { "code": "{\"t\":{\"$date\":\"2020-12-15T15:27:43.546-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/home1/mongo/db/\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/home1/mongo/log/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2020-12-15T15:27:43.590-05:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/home1/mongo/db/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2020-12-15T15:27:43.590-05:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2020-12-15T15:27:43.590-05:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=15515M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2020-12-15T15:27:43.974-05:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2020-12-15T15:27:43.974-05:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"95: Operation not supported\"}}\n{\"t\":{\"$date\":\"2020-12-15T15:27:43.975-05:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":1123}}\n{\"t\":{\"$date\":\"2020-12-15T15:27:43.975-05:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "I recently was having so many problems with my computer (gdm issues unrelated to mongo), that I wound up upgrading my OS from Ubuntu 20.04 to Pop_OS 20.10. In the process, I needed to reinstall mongo-org. Fortunately, my old database was on a secondary drive and hence did not get refreshed.However, when I try to start mongod, it is failing to start. I’m getting the following message in the log file.I’m guessing that I upgraded mongo in the process of upgrading the rest of the system, and that somehow the old database is not adjusting itself for compatability?Is there some kind of database repair I can run to fix this problem? (Fortunately, I can restore any mission critical data from backup if need be).", "username": "Russell_Almond" }, { "code": "", "text": "Hi Russell_Almond,Share the below detail-\n1.) Version of MongoDB server before upgrade.\n2.) 
Version of MongoDB server after upgrade.Thanks\nBraj Mohan", "username": "BM_Sharma" }, { "code": "", "text": "Is there some kind of database repair I can run to fix this problem? (Fortunately, I can restore any mission critical data from backup if need be).No, you’re going to have to run a mongod that can handle that compatibility level. Best use the last version that you ran. Then upgrade following the procedures in the release note.", "username": "chris" }, { "code": "", "text": "I’m still not sure of the original version, but given that I’m at version 3.6 on my laptop, it looks like a multistep process.Fortunately, in my case, the critical data is backed up as JSON files, so it is probably faster to kill the old database and then rebuild from JSON.Is there an easy way to tell what version of Mongo the database was built under? That might help for a more informative error message.", "username": "Russell_Almond" }, { "code": "", "text": "@Russell_Almond see this post. It show which version it successfully ran under last. Use the most recent metrics file.", "username": "chris" }, { "code": "featureCompatibilityVersionfeatureCompatibilityVersion4.2featureCompatibilityVersion4.4", "text": "Just because this pops up first when searching for the ErrorFailed to start WiredTiger after system upgradeIn our case we had mongo-4.2.8 running and wanted to upgrade to mongo-4.4.4 which should work just fine, but the featureCompatibilityVersion was still set to “4.0” instead of “4.2” so the upgrade did not succeed.You can solve this easily with:", "username": "hb0" }, { "code": "", "text": "thx, this is really helping ", "username": "Stefan_Schmidt" } ]
Failed to start WiredTiger after system upgrade
2020-12-15T20:55:42.153Z
Failed to start WiredTiger after system upgrade
38,994
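The fix described in the later replies above (raising featureCompatibilityVersion before upgrading the binaries) was not shown as commands. A minimal shell sketch, assuming an upgrade from 4.2 to 4.4; adjust the version strings to your own deployment:

```javascript
// Run against the old binaries (e.g. 4.2) before installing the new ones.
// Check the current value:
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

// Raise it to match the release you are currently running (assumed "4.2" here),
// then proceed with the binary upgrade to 4.4:
db.adminCommand( { setFeatureCompatibilityVersion: "4.2" } )
```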
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi everyone,\nI got a set of data that has the same data model/fields. In my model, there is a specific field called “chat_id”. I was hoping anyone could tell me which was the better approach if it is better to store the data having the same “chat_id” in a separate collection or store all data irrespective of its “chat_id” in a single collection.Thanks,\nBen", "username": "MoviezHood" }, { "code": "", "text": "Hey,how many documents (and what is the size of the document) you will have in this collection?\nYou are saying a set of data that has the same data model/fields. Is it a one-to-one relationships model with embedded documents?\nI think one collection ( irrespective of its “chat_id”) for all documents will be a good solution unless there is a specific reason for using multiple collections.", "username": "Arkadiusz_Borucki" } ]
Should I store my data in single collection or multiple collection
2022-07-01T05:01:37.391Z
Should I store my data in single collection or multiple collection
1,473
null
[ "node-js", "serverless" ]
[ { "code": "const getImage = (req: any, res: Response) => {\n gridfsBucket.openDownloadStreamByName(filename).pipe(res);\n}\n", "text": "HiI am porting my express node.js API to AWS Lambda and I am currently stuck by reading and writing files to mongoDB GridFS. The data I would like to store is mostly images and PDF documents.Somehow I can not get the piping of the stream to work.This is how I did it beforeDoes anyone have any hints on how to use GridFS on Lambda?Is piping/streaming not supported on AWS Lambda? Can I download the whole stream inside the lambda and return this object?Cheers! Marc", "username": "Marc_Wittwer" }, { "code": "", "text": "Hey, anyone resolved this already? I’m having the same issue…", "username": "martin_trax" } ]
Download/Upload files using GridFS on AWS Lambda with Node.js
2020-12-29T20:15:02.069Z
Download/Upload files using GridFS on AWS Lambda with Node.js
3,468
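The question above was left unanswered in the thread. One common workaround is to buffer the GridFS download inside the Lambda handler instead of piping it, since a handler returns a payload rather than exposing a writable HTTP response stream. The sketch below is only an illustration and makes several assumptions (the environment variable name, the database name, how the filename arrives in the event, and API Gateway binary handling are all hypothetical):

```javascript
// Hypothetical sketch: buffer a GridFS file inside an AWS Lambda handler.
// Assumes the MongoClient is created once per container and reused across invocations.
const { MongoClient, GridFSBucket } = require('mongodb');

const client = new MongoClient(process.env.MONGODB_URI); // assumed env var

exports.handler = async (event) => {
  await client.connect(); // effectively a no-op once already connected
  const db = client.db('mydb'); // assumed database name
  const bucket = new GridFSBucket(db);

  const filename = event.filename; // assumed to arrive in the event payload
  const chunks = [];
  // GridFS download streams are Node readable streams, so they can be iterated.
  for await (const chunk of bucket.openDownloadStreamByName(filename)) {
    chunks.push(chunk);
  }
  const file = Buffer.concat(chunks);

  return {
    statusCode: 200,
    isBase64Encoded: true, // lets API Gateway return the binary body
    headers: { 'Content-Type': 'application/octet-stream' },
    body: file.toString('base64'),
  };
};
```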
null
[ "node-js", "connecting" ]
[ { "code": "MongoNetworkError: failed to connect to server [<mongodb_host>:27017] on first connect [MongoNetworkError: connection timed out\n at connectionFailureError (/workspace/node_modules/mongodb-core/lib/connection/connect.js:362:14)\n at TLSSocket.<anonymous> (/workspace/node_modules/mongodb-core/lib/connection/connect.js:286:16)\n at Object.onceWrapper (events.js:420:28)\n at TLSSocket.emit (events.js:314:20)\n at TLSSocket.EventEmitter.emit (domain.js:483:12)\n at TLSSocket.Socket._onTimeout (net.js:483:8)\n at listOnTimeout (internal/timers.js:554:17)\n at processTimers (internal/timers.js:497:7) {\n", "text": "We are trying to connect to atlas from google cloud functions which throw this error:Sometimes it connect and sometimes it fails. Let us know the possible cause of this.", "username": "Prateek_Singh1" }, { "code": "", "text": "Prateek what kind of network IP Access List or private networking setup are you using?", "username": "Andrew_Davidson" }, { "code": "", "text": "Thanks Andrew for replying. We are using VPC Network for Egress settings in google cloud functions.", "username": "Prateek_Singh1" }, { "code": "", "text": "Got it, are you connecting to a dedicated Atlas cluster (M10+) witth VPC peering?", "username": "Andrew_Davidson" }, { "code": "", "text": "Yes we are connecting to the dedicated Atlas cluster.", "username": "Prateek_Singh1" } ]
Atlas connection socket timeout error from Google cloud functions
2022-06-28T03:55:54.624Z
Atlas connection socket timeout error from Google cloud functions
2,788
null
[ "aggregation" ]
[ { "code": "Order.aggregate([\n {\n $addfields: {\n newField: {\n $function: {\n body: function(vistaDate){return moment(vistaDate).formato(\"YYYY-MM-DD\")},\n arg: [\"$vistaDate\"],\n lang: \"js\"\n }\n }\n }\n}\n])\n", "text": "hello, I am doing an aggregate query using $function to calculate a date and save it in another formatErrorMongoError: PlanExecutor error during aggregation :: caused by :: ReferenceError: _momentTimezone is not defined : body@:2:24Thanks\nL.", "username": "leonardo_ramirez_zaldivar" }, { "code": "", "text": "You should be able to do this with $dateToString.This kind of cosmetic formatting are better if done in the client code. stateless client code is easier to scale.", "username": "steevej" }, { "code": "", "text": "Greetings, in my case $dateFromString is more adapted but it throws me the following error\nIMG_20220630_2251111920×1436 603 KB\n", "username": "leonardo_ramirez_zaldivar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$function method help
2022-06-30T21:15:27.538Z
$function method help
1,367
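A minimal sketch of the $dateToString suggestion made above, assuming "vistaDate" is stored as a BSON date; the collection name "orders" is an assumption standing in for the Order model:

```javascript
// Format a BSON date as "YYYY-MM-DD" without $function or an external library.
db.orders.aggregate([
  {
    $addFields: {
      newField: {
        $dateToString: { date: "$vistaDate", format: "%Y-%m-%d" }
      }
    }
  }
])
```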
https://www.mongodb.com/…bd25e2db3297.png
[ "dot-net" ]
[ { "code": "FilterDefinition<ProductDbm> filterProduct = Builders<ProductDbm>.Filter.In(\"VariationList._id\", wishList.Select(x => x.VariationID)) \n", "text": "Example the structure of the document is as abovei wanted to query mongodb to access specific field “VariationList._id” of the documentAs code above, i have to harcoded a string “VariationList._id” in order to access the field. I have tried using VaraitionList[-1]._id, but it only returns empty data. How can i write without harcoding the field name ? Thanks", "username": "Muhammad_Aiman" }, { "code": "", "text": "It is not clear what you want with:How can i write without harcoding the field nameSo you want to query a specific field but you do not want to specify the field name.It is better if you publish sample documents in text JSON that we can cut-n-paste into our system.", "username": "steevej" }, { "code": "{\n \"_id\": \"62bc1072b95e3868b279af47\",\n \"CreatedDate\": {\n \"$date\": {\n \"$numberLong\": \"1656492146137\"\n }\n },\n \"CreatedBy\": \"62bc1071b95e3868b279af30\",\n \"LastUpdatedDate\": {\n \"$date\": {\n \"$numberLong\": \"1656492146137\"\n }\n },\n \"LastUpdatedBy\": \"62bc1071b95e3868b279af30\",\n \"Status\": \"Inactive\",\n \"ShopID\": \"62bc1071b95e3868b279af32\",\n \"Name\": \"CHARM KASUT BABY ENAMEL\",\n \"CategoryID\": \"62bc1071b95e3868b279af41\",\n \"ProductSKU\": \"ANIAC_PC039-E1\",\n \"Description\": null,\n \"Stock\": 3,\n \"MinPrice\": 388.36,\n \"MaxPrice\": 391.26,\n \"MinPromotionPrice\": 0,\n \"MaxPromotionPrice\": 0,\n \"PromotionStartDate\": null,\n \"PromotionEndDate\": null,\n \"VariationList\": [\n {\n \"CreatedDate\": {\n \"$date\": {\n \"$numberLong\": \"1656492146137\"\n }\n },\n \"CreatedBy\": \"62bc1071b95e3868b279af30\",\n \"LastUpdatedDate\": {\n \"$date\": {\n \"$numberLong\": \"1656492146137\"\n }\n },\n \"LastUpdatedBy\": \"62bc1071b95e3868b279af30\",\n \"_id\": \"62bc1072b95e3868b279af48\",\n \"Status\": \"Inactive\",\n \"VariationSKU\": \"102102409N\",\n \"Name\": null,\n \"Stock\": 2,\n \"Price\": 391.26,\n \"PromotionPrice\": null,\n \"PromotionStartDate\": null,\n \"PromotionEndDate\": null,\n \"Attributes\": [\n {\n \"ID\": \"62bc1071b95e3868b279af37\",\n \"Value\": \"1.22\"\n },\n {\n \"ID\": \"62bc1071b95e3868b279af34\",\n \"Value\": \"1\"\n },\n {\n \"ID\": \"62bc1071b95e3868b279af38\",\n \"Value\": \"2.1\"\n }\n ],\n \"VariationThumbnail\": null,\n \"VariationImages\": null\n },\n {\n \"CreatedDate\": {\n \"$date\": {\n \"$numberLong\": \"1656492146137\"\n }\n },\n \"CreatedBy\": \"62bc1071b95e3868b279af30\",\n \"LastUpdatedDate\": {\n \"$date\": {\n \"$numberLong\": \"1656492146137\"\n }\n },\n \"LastUpdatedBy\": \"62bc1071b95e3868b279af30\",\n \"_id\": \"62bc1072b95e3868b279af49\",\n \"Status\": \"Inactive\",\n \"VariationSKU\": \"102102412N\",\n \"Name\": null,\n \"Stock\": 1,\n \"Price\": 388.36,\n \"PromotionPrice\": null,\n \"PromotionStartDate\": null,\n \"PromotionEndDate\": null,\n \"Attributes\": [\n {\n \"ID\": \"62bc1071b95e3868b279af37\",\n \"Value\": \"1.21\"\n },\n {\n \"ID\": \"62bc1071b95e3868b279af34\",\n \"Value\": \"1\"\n },\n {\n \"ID\": \"62bc1071b95e3868b279af38\",\n \"Value\": \"2.1\"\n }\n ],\n \"VariationThumbnail\": null,\n \"VariationImages\": null\n }\n ],\n \"ProductThumbnail\": {\n \"Primary\": null,\n \"Secondary\": null\n },\n \"ProductImageList\": [],\n \"Tags\": {\n \"CreatedDate\": {\n \"$date\": {\n \"$numberLong\": \"1656492146137\"\n }\n },\n \"CreatedBy\": \"62bc1071b95e3868b279af30\",\n \"LastUpdatedDate\": {\n \"$date\": {\n 
\"$numberLong\": \"1656492146137\"\n }\n },\n \"LastUpdatedBy\": \"62bc1071b95e3868b279af30\",\n \"TagtoShow\": 0,\n \"TagList\": [\n \"New Product\",\n \"No Tag\",\n \"Featured\"\n ]\n },\n \"Views\": 1,\n \"SaleScore\": 0\n}\n", "text": "Thanks for your attentionThis is the sample of the document. Back to the question, when specifying the field name in Filter.In it works when I used the string “VariationList._id”. I have an object that maps to the document. When specifying the field name in Filter.In using the object, which I write VaraitionList[-1]._id , it returns empty. How can i specify the field name properly using the object ?", "username": "Muhammad_Aiman" }, { "code": "", "text": "I still do not understand. I am giving up. Hopefully someone with former expertise will kick in. I am curious to see the rest.", "username": "steevej" }, { "code": "FilterDefinition<ProductDbm> filterProduct = Builders<ProductDbm>.Filter.In(\"VariationList._id\", wishList.Select(x => x.VariationID)) \nFilterDefinition<ProductDbm> filterProduct = Builders<ProductDbm>.Filter.In(x => x.VariationList[-1]._id, wishList.Select(x => x.VariationID)) \n", "text": "My question might be confusing. My bad.Working code is as belowBut it fails when I did as belowI wanted to know why it fails and how to call the filed name correctly.", "username": "Muhammad_Aiman" } ]
Getting the field name of document without hardcoding field name mongoDB driver c#
2022-06-29T02:12:19.716Z
Getting the field name of document without hardcoding field name mongoDB driver c#
3,359
null
[ "queries", "atlas-search", "text-search" ]
[ { "code": "{\n \"_id\" : ObjectId(\"62ba4bc29aad3560c12161e2\"),\n \"EmpId\" : UUID(\"b7eda8fd-41b8-4ecf-8c24-9dc06eee40a3\"),\n \"RecordName\" : \"{\\\"childid\\\":\\\"62ba4bc29aad3560c12161e2\\\",\\\"childname\\\":\\\"Mike Tyson\\\",\\\"Roll\\\":\\\"J001\\\"}\"\n}\n", "text": "Hi Team,I have created “text” search index for one of the collection. However, following script is not giving result.db.emp.find({\n“EmpId”: UUID(“b7eda8fd-41b8-4ecf-8c24-9dc06eee40a3”),\n$text : {$search:\"“child”\", $caseSensitive:false}\n})Created Index as below:\ndb.emp.createIndex({“EmpId”:1,“RecordName”:“text”})Document as below:I would like to return documents that contains word “child” in RecordName field.", "username": "Yatinkumar_Patel" }, { "code": "db.emp.find({\n“EmpId”: UUID(“b7eda8fd-41b8-4ecf-8c24-9dc06eee40a3”),\n $text : {$search:\"\\\"child\\\"\", $caseSensitive:false}\n})\n", "text": "db.emp.find({\n“EmpId”: UUID(“b7eda8fd-41b8-4ecf-8c24-9dc06eee40a3”),\n$text : {$search:\"“child”\", $caseSensitive:false}\n})updated Find script as below:", "username": "Yatinkumar_Patel" }, { "code": "\"child\"child\"childid\"\"childname\"", "text": "You DO NOT have \"child\" in your document. You do have child but without the double quotes.With double quotes your have \"childid\" and \"childname\".I hope you have a really Really REALLY good reason to have a JSON string as a field value rather than a real JSON object. Just the fact that you are messed up with the quotes is a con point for doing that.", "username": "steevej" }, { "code": "\"RecordName\"", "text": "\"RecordName\"“RecordName” field contains child word. I would like to search the word child from the RecordName field. This is my requirement to store the JSON as String value in the document.", "username": "Yatinkumar_Patel" }, { "code": "$search:\"\\\"child\\\"\"$search:\"child\"", "text": "That was the whole point of my answer.You query is not looking for the word child without quotes. It is looking for “child” within quotes. If you want to search for child without quotes, you have to replace$search:\"\\\"child\\\"\"with quotes to$search:\"child\"without quotes.This is my requirement to store the JSON as String valueThis is nota really Really REALLY good reasonTry to convince the requirement owner to change. BecauseJust the fact that you are messed up with the quotes is a con point for doing that", "username": "steevej" }, { "code": " $text:{$search:\"child\"}\"RecordName\" : \"{\\\"childid\\\":\\\"62ba4bc29aad3560c12161e2\\\",\\\"childname\\\":\\\"Mike Tyson\\\",\\\"Roll\\\":\\\"J001\\\"}\"db.emp.find({\"RecordName\" : /child/})", "text": "I can appreciate your reply, If you can please read very carefully about my question. I need to search word = child if it is there anywhere in the string. $text:{$search:\"child\"} will return docs if there is any docs match with child word. As per my given example, I have a doc with the value \"RecordName\" : \"{\\\"childid\\\":\\\"62ba4bc29aad3560c12161e2\\\",\\\"childname\\\":\\\"Mike Tyson\\\",\\\"Roll\\\":\\\"J001\\\"}\". This doc is not returning through Mongodb text search. This doc is returned when I hit following script: db.emp.find({\"RecordName\" : /child/}). While I need to utilise the Text Index therefore I have raised this issue after considering all other aspects.", "username": "Yatinkumar_Patel" }, { "code": "", "text": "Please read:So basically, you DO NOT have the word child in your RecordName field but you have the substring child. 
You might have the words childid and childname.I do not think you can do what you want with text search and the data you have. Search for child will not work according to the documentation I shared but childid might work.", "username": "steevej" }, { "code": "sexplain()parsedTextQueryqueryPlanner.winningPlanchildidchildnamechildidschildnamesautocompletehighlight", "text": "Hi @Yatinkumar_Patel,Text search is designed to search text based on language heuristics for word patterns. As @steevej suggested, that isn’t suitable for your pattern matching examples.Text search uses the Snowball stemming library to find the root form of words using a set of rules or heuristics. For example, in English words ending with a single s are generally the plural form of a noun.There’s a Snowball online demo you can use to get a sense of word stemming, or you can see this in the explain() output of a MongoDB text search query (look for the parsedTextQuery section in queryPlanner.winningPlan).The examples you have of childid and childname are string patterns (not word patterns) so they both do not match any stemming rules and can only be matched exactly or from a plural form like childids or childnames.If you want to find partial matches or autocomplete,I recommend considering Atlas Search, which integrates the Apache Lucene search library. Atlas Search has configurable text analyzers, operators like autocomplete, and options like highlight to show matches in results.For a comparison of text matching approaches, please review A Decisioning Framework for MongoDB $regex and $text vs Atlas Search.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks @Stennie_X appreciated for your reply. I understood that Text Search will not work for my given example.", "username": "Yatinkumar_Patel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to search text with Text Search Index
2022-06-28T00:34:26.063Z
Unable to search text with Text Search Index
5,422
null
[ "queries", "python" ]
[ { "code": "AddrAddrPriceTypeTypePriceAddrAddr", "text": "I have total 20m documents, and about 17m documents has Addr key.\nThis key is kind of address, about 1m type of value exists.I need to get all documents that Addr exists and classify & sum value(each document has Price, Type key)\nsample code of my script is like thisfor docu in collection.find():\n# TODO - classify by Type and determine to add or not PriceIn this case, which is faster and proper way?I’m confusing which one is right way", "username": "soohyok2011" }, { "code": "", "text": "You definitively NOT want to do with a client code for-loop.You want to use the aggregation framework. IT is made for this kind of stuff.You start with a $match stage for Addr existence.\nYou then use a $group stage with _id:$Type and $sum for Price.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$exists query vs full scan which is faster and proper way?
2022-06-30T05:39:29.710Z
$exists query vs full scan which is faster and proper way?
1,199
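The accepted approach above, written out as a pipeline. Field names are taken from the question; the collection name is an assumption:

```javascript
// $match keeps only documents where Addr exists,
// $group classifies by Type and sums Price per group.
db.collection.aggregate([
  { $match: { Addr: { $exists: true } } },
  {
    $group: {
      _id: "$Type",
      totalPrice: { $sum: "$Price" },
      count: { $sum: 1 }
    }
  }
])
```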
null
[ "dot-net", "golang" ]
[ { "code": "", "text": "This may seem to be an odd question but please hear me out. I am building a cli application in Go that needs to work with a relatively small data set, lets say a few documents less than 20 mb. Internally this may still represent many thousands of objects. Given this relatively low data set, I thought it would be best not to have to use a full blown mondodb server. Rather, since the cli will have access to these documents on disk, I would like to be able to load them into memory and query it with a mongodb client/go-driver as though connnected to a server.At this stage, I’m only concerned with running queries not mutating data (although will most likely become a requirement, but the qeury part is the most important aspect for now).It this possible? It is akin to using a lite version of mongo db, which exists in memory only. To me this makes sense, because once you have a document, why could you not be able to query it the same way you would issue a query to a mongo db server.Currently, I have build prototypes with .net/powershell that uses XML with querying implemented via xpath queries, to prove the via-ability of my idea. I now need to drammatically improve its performance with Go and hopefully gain the querying capabilities of mongdb. I have considered google Protocol Buffers, but that does not seem suited to my usecase (data can’t be represented hierarchichally and message size is limited). Also cosidered MsgPack, but this is also unsuitable for same reason as Protocol Buffers. I then discovered BSON which naturally led me to MongoDb.I suppose another way of asking this question is, is there an in memory only version of mondodb that can be loaded with JSON/BSON documents, then queried with mongodb client/go-driver.Thanks.", "username": "plastikfan_p" }, { "code": "mongod --storageEngine inMemory --dbpath <path>\nstorage:\n engine: inMemory\n dbPath: <path>\n", "text": "Hey @plastikfan_p, thanks for the interesting question! MongoDB does have an in-memory storage engine that you can use by running:or with a configuration file:However, it sounds like you might want an in-process MongoDB server that can be used without starting a separate process, like SQLite. As far as I know, there is no official version of the MongoDB database that can run in-process with a Go application. If you need an in-memory database that runs in a Go process, you could check out HashiCorp’s go-memdb.", "username": "Matt_Dale" } ]
Is it possible to use mongodb client without a mongodb server (via Mongo Go Driver)
2022-05-11T09:55:47.359Z
Is it possible to use mongodb client without a mongodb server (via Mongo Go Driver)
3,961
null
[ "schema-validation" ]
[ { "code": "find", "text": "I’m trying to apply $jsonSchema validation to existing data, and it’s quite hard to debug what’s actually wrong with the data: I can find faulty documents, but it would be nice if there was a way to report what’s actually wrong with them, e.g. having a more detailed reporting like “field X is required but it’s missing”. With large documents it’s a nightmare trying to debug them.Is there some library or a tool that could do that?", "username": "dimaip" }, { "code": "", "text": "Hi @dimaip,Which version of MongoDB are you running? If have a vague memory of the output being improved at some point.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "bsonType", "text": "Hi @dimaip,As @MaBeuLux88 mentioned, JSON Schema validation was improved in MongoDB 5.0 to report more detailed validation errors: Improved Error Messages for Schema Validation in MongoDB 5.0 | MongoDB.There are also client libraries for JSON Schema validation available for most languages: Implementations | JSON Schema. Those may need a slight tweak for MongoDB extensions including the bsonType keyword. For example AJV is a popular (and extensible) Node.js library.Typically you want to perform validation as early as possible to provide fast feedback for the user. There are opportunities to validate and provide feedback client-side, in your application layer, and at the database level. JSON Schema should help with creating reusable validation rules.Regards,\nStennie", "username": "Stennie_X" } ]
Better validation output of $jsonSchema
2022-06-30T17:13:38.908Z
Better validation output of $jsonSchema
2,400
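On server versions before the improved 5.0 validation output, one way to locate failing documents is to query with the same schema wrapped in $nor and narrow it down one keyword at a time. A sketch with a hypothetical schema and collection name:

```javascript
// Hypothetical schema: replace with the validator you are actually applying.
const schema = {
  bsonType: "object",
  required: ["name", "email"],
  properties: {
    name: { bsonType: "string" },
    email: { bsonType: "string" }
  }
};

// Documents that FAIL the schema:
db.users.find({ $nor: [ { $jsonSchema: schema } ] })

// Narrow the cause by testing one keyword at a time, e.g. only the "required" part:
db.users.find({ $nor: [ { $jsonSchema: { bsonType: "object", required: ["email"] } } ] })
```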
null
[ "aggregation", "queries" ]
[ { "code": "exports = async function (request, response) {\n\nconst pipeline = [\n {\n '$project': {\n '_id': 0\n }\n }, {\n '$group': {\n '_id': '$vaultId', \n 'vaultName': {\n '$first': '$vaultName'\n }, \n 'vaultContract': {\n '$first': '$vaultContract'\n }, \n 'prices': {\n '$push': {\n 'readingDate': '$readingDate', \n 'spotPrice': '$spotPrice'\n }\n }\n }\n }, {\n '$sort': {\n '_id': 1, \n 'readingDate': -1\n }\n }\n];\n\n\nrequestResponse = await context.services\n .get(\"mongodb-atlas\")\n .db(\"mydb\")\n .collection(\"mycollection\")\n .aggregate(pipeline).toArray();\n\nreturn requestResponse;\n};\n...\n,\n {\n \"vaultId\": {\n \"$numberInt\": \"552\"\n },\n \"vaultName\": \"RR/BAYC\",\n \"vaultContract\": \"0xcd2e3a66507e94190e3b1521a189ad821c8c3006\",\n \"prices\": [\n {\n \"readingDate\": {\n \"$date\": {\n \"$numberLong\": \"1656295562871\"\n }\n },\n \"spotPrice\": {\n \"$numberDouble\": \"0.18317855743872674\"\n }\n },\n {\n \"readingDate\": {\n \"$date\": {\n \"$numberLong\": \"1656381961889\"\n }\n },\n \"spotPrice\": {\n \"$numberDouble\": \"0.253926321676319\"\n }\n },\n {\n \"readingDate\": {\n \"$date\": {\n \"$numberLong\": \"1656468400214\"\n }\n },\n \"spotPrice\": {\n \"$numberDouble\": \"0.23309041730430674\"\n }\n }\n ]\n }\n...\n...,\n {\n \"vaultId\": \"552\",\n \"vaultName\": \"RR/BAYC\",\n \"vaultContract\": \"0xcd2e3a66507e94190e3b1521a189ad821c8c3006\",\n \"prices\": [\n {\n \"readingDate\": \"1656295562871\",\n \"spotPrice\": \"0.18317855743872674\"\n },\n {\n \"readingDate\": \"1656381961889\",\n \"spotPrice\": \"0.253926321676319\"\n },\n {\n \"readingDate\": \"1656468400214\",\n \"spotPrice\": \"0.23309041730430674\"\n }\n ]\n },\n...\nrequestResponse = await context.services\n .get(\"mongodb-atlas\")\n .db(\"nftx_aggregators\")\n .collection(\"spot_price\")\n .aggregate(pipeline);\n \n let jsonData = JSON.stringify(requestResponse);\n\n\n return jsonData;\n};\n> result: \n\"{}\"\n> result (JavaScript): \nEJSON.parse('\"{}\"')\ntoArray()requestResponse = await context.services\n .get(\"mongodb-atlas\")\n .db(\"nftx_aggregators\")\n .collection(\"spot_price\")\n .aggregate(pipeline).toArray();\n \n let jsonData = JSON.stringify(requestResponse);\n\n\n return jsonData;\n};\n> result: \n\"[{\\\"vaultId\\\":0,\\\"vaultName\\\":\\\"CryptoPunks\\\",\\\"vaultContract\\\":\\\"0x269616d549d7e8eaa82dfb17028d0b212d11232a\\\",\\\"prices\\\":[\n\"$numberInt\"...,\n {\n \"vaultId\": \"552\",\n \"vaultName\": \"RR/BAYC\",\n \"vaultContract\": \"0xcd2e3a66507e94190e3b1521a189ad821c8c3006\",\n \"prices\": [\n {\n \"readingDate\": \"1656295562871\",\n \"spotPrice\": \"0.18317855743872674\"\n },\n {\n \"readingDate\": \"1656381961889\",\n \"spotPrice\": \"0.253926321676319\"\n },\n {\n \"readingDate\": \"1656468400214\",\n \"spotPrice\": \"0.23309041730430674\"\n }\n ]\n },\n...\n", "text": "I have created a simple Realm App that will use a HTTPS endpoint to run a function which runs an aggregation pipeline.The function looks like this…When setting up the HTTPS endpoint I have the option of choosing the return type as JSON or EJSON. When choosing EJSON it works fine but includes additional details in the response that are not required. 
When switching to JSON the response fails.At the moment the response looks like this…but I’m looking for it to beIf I change toI get backI then added toArray() to get that workingThat returns the following… and although it’s missing the \"$numberInt\" and other EJSON items the format isn’t right.How do I go about getting the response to look like", "username": "NFTX_Tech" }, { "code": "coll.find(query, project).sort(sort).toArray()\n .then( docs => {\n response.setBody(JSON.stringify(docs));\n });\n", "text": "Hi @NFTX_Tech and welcome back !Try this instead. Use the response parameter and set the body:You don’t need to return anything.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Return JSON instead of EJSON
2022-06-29T14:31:27.053Z
Return JSON instead of EJSON
2,353
null
[ "queries" ]
[ { "code": "", "text": "please how do i match a field which is not empty", "username": "jimoh_afeez" }, { "code": "", "text": "Hi @jimoh_afeez,Use $exists.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I match a field which is not empty?
2022-06-29T16:25:38.939Z
How do I match a field which is not empty?
10,144
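Depending on whether “not empty” means “the field is present” or “the field is present and not blank”, the filter differs slightly. A small sketch with an assumed field name:

```javascript
// Field exists at all (even if its value is null or ""):
db.collection.find({ myField: { $exists: true } })

// Field exists and is neither null nor an empty string:
db.collection.find({ myField: { $exists: true, $nin: [ null, "" ] } })
```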
null
[ "aggregation", "mongoose-odm" ]
[ { "code": "mainId=a2345e87firstsecondfourthfirstfourth return this.aggregate([\n {\n $lookup: {\n from: 'collectionB',\n localField: 'mainId',\n foreignField: 'mainId',\n as: 'allowed',\n },\n },\n {\n $match: {\n {\n $or: [\n { 'allowed': [] }, // if collectionA.mainId doesn't exist inside collectionB\n { 'allowed.expires': { $lt: new Date(interval) } }, // if CollectionA.mainId exists inside collectionB BUT the field expires is longer than a certain interval\n ],\n },\n },\n },\n ]);\n", "text": "Lets imagine I have:name: “first”\nmainId: a2345e87\nemail: “[email protected]”name: “second”\nmainId: a2345e87\nemail: “[email protected]”name: “third”\nmainId: c2345e87\nemail: “[email protected]”name: “fourth”\nmainId: a2345e87\nemail: “[email protected]”We can notice that mainId=a2345e87 repeats 3 times. Rows first, second and fourth.\nBut only rows first and fourth have same emails.How to retrieve only one row when this happens. (mainId + email are the same) ?Is like a mysql GROUP BY two fields. Tried to do this using the documentation but I failed.\nIt returned me a single field (the one I grouped by, or demands me to use an accumulator).\nI Want all fields, just want to pick a single row when two or more fields are the same.The result I want:name: “first”\nmainId: a2345e87\nemail: “[email protected]”name: “second”\nmainId: a2345e87\nemail: “[email protected]”name: “third”\nmainId: c2345e87\nemail: “[email protected]”I need to plug this solution inside the aggregation I already have:", "username": "Alan" }, { "code": "> db.coll.find()\n[\n {\n _id: ObjectId(\"62ab6a529c4f1c48f52ceb78\"),\n name: 'first',\n mainId: 'a2345e87',\n email: '[email protected]'\n },\n {\n _id: ObjectId(\"62ab6a529c4f1c48f52ceb79\"),\n name: 'second',\n mainId: 'a2345e87',\n email: '[email protected]'\n },\n {\n _id: ObjectId(\"62ab6a529c4f1c48f52ceb7a\"),\n name: 'third',\n mainId: 'c2345e87',\n email: '[email protected]'\n },\n {\n _id: ObjectId(\"62ab6a529c4f1c48f52ceb7b\"),\n name: 'fourth',\n mainId: 'a2345e87',\n email: '[email protected]'\n }\n]\n[\n {\n '$sort': {\n 'name': 1\n }\n }, {\n '$group': {\n '_id': {\n 'a': '$mainId', \n 'b': '$email'\n }, \n 'doc': {\n '$first': '$$ROOT'\n }\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$doc'\n }\n }\n]\n[\n {\n _id: ObjectId(\"62ab6a529c4f1c48f52ceb79\"),\n name: 'second',\n mainId: 'a2345e87',\n email: '[email protected]'\n },\n {\n _id: ObjectId(\"62ab6a529c4f1c48f52ceb7a\"),\n name: 'third',\n mainId: 'c2345e87',\n email: '[email protected]'\n },\n {\n _id: ObjectId(\"62ab6a529c4f1c48f52ceb78\"),\n name: 'first',\n mainId: 'a2345e87',\n email: '[email protected]'\n }\n]\nname", "text": "Hi @Alan,Here is my proposition. 
It’s a weird pipeline but here we go !Input:Pipeline:Result:Note that depending if you sort the name in ascending or descending order, you don’t get the same result because you are retrieving the “first” doc that comes which is order dependant.Without the $sort stage, their is no guarantee that you will always get the same result with the same set of documents as it will be dependant on the physical storage order.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "name", "text": "hat depending if you sort the name in ascending or descending order, you don’t get the same result because you are retrieving the “first” doc that comes which is order dependant.Without the $sort stage, their is no guarantee that you will always get the same result with the same set of documents as it will be dependant on the physical storage order.Thank you very much!", "username": "Alan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Return ony when a selected fields are repeated
2022-06-15T18:39:01.448Z
Return ony when a selected fields are repeated
1,030
null
[ "node-js", "mongoose-odm", "performance" ]
[ { "code": "", "text": "I’ve noticed that MongoDB doesn’t do well when multiple request target the same document for updating at once.Let’s say I have a document that holds a number of quarters. If multiple requests “take a quarter” (decrementing the quarter count down by one), updating and saving the new update - then the end result after all requests are processed isn’t correct.This is prob a terrible example but hopefully, this makes sense.How can we make a document be handled by one request at a time?", "username": "Tommy_Rivera" }, { "code": "", "text": "Transactions", "username": "Jack_Woehr" } ]
How to prevent data inconsistencies when database receives multiple request at once?
2022-06-30T16:45:48.492Z
How to prevent data inconsistencies when database receives multiple request at once?
2,424
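For the “quarters” example above, a single atomic update with a guard in the filter avoids the read-modify-write race without needing a multi-document transaction, since the filter and update run as one atomic operation on the document. A sketch in shell syntax with assumed names; the same pattern applies from Node.js or Mongoose, though return shapes vary slightly between driver versions:

```javascript
// Atomically take one quarter only if at least one is left.
const updated = db.jars.findOneAndUpdate(
  { _id: jarId, quarters: { $gt: 0 } },  // jarId: assumed identifier of the document
  { $inc: { quarters: -1 } },
  { returnDocument: "after" }
);

if (updated === null) {
  // No quarters left (or no such document): nothing was decremented.
}
```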
null
[ "queries", "data-modeling", "atlas-data-lake" ]
[ { "code": "", "text": "Please can you help me\nI am working on a project to migrate data from Dynamo DB to Mongo atlas (which already contains tables), in order to insert the data into the existing tables in mongo and to create the non-existing tables.I have not found an etl that allows me to link the 2 DBs.\nSo I decided to create an S3 bucket in which I will store my data from Dynamo DB.To this end, my project is now to migrate the data from S3 to Mongo atlas.\nI opted for Aws Data Federation, I could connect S3 to Mongo atlas.\nFor the moment I can’t write the tigers that will retrieve the data.On Google I could only find documentation that deals with the migration from mongo atlas to S3 and not in the opposite direction (as in my case).So I ask for your help", "username": "tim_Ran" }, { "code": "", "text": "Hi @tim_Ran and welcome in the MongoDB Community !I think you are looking for this:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I think you are looking for this:Thanks for trying and I’ll get back to you ", "username": "tim_Ran" }, { "code": "", "text": "Alas, I am still at an impasse\nIs it possible to have a test trigger to import data from S3 to Data federation ??", "username": "tim_Ran" }, { "code": "", "text": "In App Services you can create a Trigger on a MongoDB Atlas write operation in a collection but not when something happens in S3.Using $out though, you can write something to an Atlas collection or an S3 bucket.Maybe there is an equivalent service in AWS that listens to write operations in S3 and trigger an event?Maybe this?In this tutorial, you use the console to create a Lambda function and configure a trigger for Amazon Simple Storage Service (Amazon S3). The trigger invokes your function every time that you add an object to your Amazon S3 bucket.From a Lambda function you can use the MongoDB Driver or the Atlas Data API to write stuff into MongoDB.Take a look at this blog post:Learn how to write serverless functions with AWS Lambda and MongoDBAnd this doc to avoid creating a new Connection with each lambda execution (big big anti pattern):Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
How can i import data from AWS S3 buckets into an Atlas cluster with AWS Data Federation
2022-06-28T08:35:56.171Z
How can i import data from AWS S3 buckets into an Atlas cluster with AWS Data Federation
5,169
null
[ "connector-for-bi" ]
[ { "code": "mongosqld2021-10-11T11:58:54.939+0300 I SCHEMA [manager] attempting to initialize schema\n2021-10-11T11:58:54.939+0300 I SCHEMA [manager] sampling schema\n2021-10-11T11:58:59.940+0300 W SCHEMA [manager] error initializing schema: unable to execute command: server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: xx1.mongodb.net:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: OCSP verification failed: no OCSP cache provided }, { Addr: xx2.idn1l.mongodb.net:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: OCSP verification failed: no OCSP cache provided }, { Addr: xx3.mongodb.net:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: OCSP verification failed: no OCSP cache provided }, ] }\nxx1Last error: connection() error occured during connection handshake: \nOCSP verification failed: no OCSP cache provided \nmongosqld", "text": "I have a cluster running on MongoDB Atlas. I want to connect a mongosqld to it, but I’m getting an error that I don’t understand:Note: I changed the cluster shard addresses to xx1 to hide the actual address.The important part:Any ideas what’s going on? It seems to be related to TLS and PKI. There is also a documentation that has few lines about OCSP https://docs.atlas.mongodb.com/setup-cluster-security/#ocsp-certificate-revocation-check, but it’s not particularly helpful.Also, I’m able to connect to cluster using mongo client. So, the problems seems to be related to mongosqld", "username": "Juri_Andrejev" }, { "code": "", "text": "Hi @Juri_Andrejev and welcome in the MongoDB Community !Silly question but… Do you have a BI connector node deployed on this cluster?I’m not sure if this can help a bit or not, but I did this 2 years ago: open-data-covid-19/python/odbc at master · mongodb-developer/open-data-covid-19 · GitHubCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "mongosqldssl:\n enabled: true\n", "text": "The problem was that in my mongosqld config I was missing:After adding that, everything worked.", "username": "Juri_Andrejev" } ]
Can't connect mongosqld to MongoDB Atlas cluster
2021-10-11T10:22:16.557Z
Can&rsquo;t connect mongosqld to MongoDB Atlas cluster
5,306
null
[]
[ { "code": "", "text": "We have setup a MongoDB Atlas cluster in Single Region (us-east-1) with a private link on us-east-1 and us-west-2. After that we have implemented the following steps.", "username": "Sarada_Talluri" }, { "code": "", "text": "Do you have peering between your app tier regional VPCs/VNets (not sure if this is AWS or Azure) so as to be able to reach both regional privatelink enabled nodes from one or the other side?", "username": "Andrew_Davidson" }, { "code": "", "text": "This is in AWS. We do have private link setup on east and west region. When we just move a node from a single region to a multi region, connectivity is working.\nInitially, one node in us-east-1 is primary. We are trying to re-configure to make the node in us-east-2 to primary by assigning a higher priority, the setup works. However the connectivity using private link fails.", "username": "Sarada_Talluri" } ]
Unable to connect to Mongodb Cluster after moving the primary to new node
2022-06-29T15:36:48.175Z
Unable to connect to Mongodb Cluster after moving the primary to new node
1,312
null
[ "react-native" ]
[ { "code": "", "text": "I want to know the best suited type of app which can be build in realm react native,\nI want to build some business app which multiple users and hierarchical system\nso which is best partition sync or flexible. I already read the documentation about partition sync and flexible sync. So please suggest me the choose the best type of sync in multi role app systems like some kind of order taking app, creating bill, invoice, much more with multiple role system.", "username": "Zubair_Rajput" }, { "code": "", "text": "Flexible sync was designed to allow for more expressive roles and has per-document roles and field-level permissions filtering (whereas partition sync is just partition-level roles and filtering). In that sense, I would definitely recommend flexible sync.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "I would love to try and build some kind of app like that, thanks for advocating me.\nStay in touch. One more thing I want to know that can we use flexible sync in production app now?Thanks\nZubair", "username": "Zubair_Rajput" }, { "code": "", "text": "Hi @Zubair_Rajput,Flexible sync recently graduated from Preview to General Availability (Flexible Sync Delivers Device Data to the Cloud in Real-Time) and is now recommended for production use cases.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X Glad to hear this.Thanks\nZubair", "username": "Zubair_Rajput" } ]
What type of App not to be build in realm partition sync?
2022-06-26T06:27:20.123Z
What type of App not to be build in realm partition sync?
2,053
null
[]
[ { "code": "", "text": "Hi Team,We have replication 1 primary 4 secondary want to setup back daily basis automatically script in crontab any body have idea script details please let me knowand also delete old backup files one month ago.Thanks,\nSrihari", "username": "hari_dba" }, { "code": "", "text": "Hello @hari_dba ,I would recommend you to go through MongoDB Backup Methods and go for the one suiting your environment. Note that one of the supported backup option for on-premise installation involves using MongoDB Ops Manager, which is part of an Enterprise Advanced subscription.However I would also recommend you to develop and test any scripting solution in your own deployment, since I believe any script to do this would be very specific toward individual deployment (i.e. any script would not likely be transferrable from one deployment to another), and would undoubtedly be a vital part of your operation. It’s also to ensure that it meets your needs and conform to your company’s backup policies.Thanks,\nTarun Gaur", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Daily Backup on replication
2022-06-28T14:01:11.251Z
Daily Backup on replication
1,136
null
[ "queries", "sharding", "migration" ]
[ { "code": "db.collection_new.insert(db.collection_old.find(), {ordered: false})", "text": "Hi,I have two sharded collections within the same database, say collection_old and collection_new. Both these collections contain the same shard key, and each collection contains ~20 million documents. Now I want to migrate all the documents from collection_old to collection_new. After successful migration, I want to delete the collection_old.Since the collection size is somewhat huge, I am unsure whether the below command will cause some performance issues and, if the insertion fails for some documents, how to get the ids for those documents so that I can fix the errors and retry later.db.collection_new.insert(db.collection_old.find(), {ordered: false})So please let me know if there is any best approach for migrating documents from one shared collection to another within the same database.Thanks in advance", "username": "Allwyn_Jesu" }, { "code": "", "text": "Hi,Any suggestions will help.Thanks", "username": "Allwyn_Jesu" }, { "code": "db.collection.stats().avgObjSizedb.collection.stats().size", "text": "Hello @Allwyn_Jesu ,Could you please help me with below queries for better understanding of this migration?I think, it will be better to check whether there are _id collisions between old & new and fix them beforehand, instead of trying to fix it after the fact? If they are very similar, and colliding _id can be avoided, perhaps a mongodump & mongorestore is the fastest way to achieve this, since you can specify the number of insertion worker.Thanks,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best approach for migration data from one sharded collection to another sharded collection
2022-06-21T15:00:09.134Z
Best approach for migration data from one sharded collection to another sharded collection
2,958
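A server-side alternative to inserting the results of a find() from the client is $merge, which copies documents without pulling them through the application. This is only a sketch: for a sharded target collection the "on" specification must cover the shard key and be backed by a unique index, and $merge does not report per-document failures, so counts should be compared afterwards.

```javascript
// Server-side copy sketch: streams collection_old into collection_new,
// skipping documents whose "on" fields already exist in the target.
db.collection_old.aggregate([
  {
    $merge: {
      into: "collection_new",
      on: "_id",                  // for a sharded target, include the shard key fields here
      whenMatched: "keepExisting",
      whenNotMatched: "insert"
    }
  }
])

// Afterwards, compare counts to spot documents that were skipped:
db.collection_old.countDocuments()
db.collection_new.countDocuments()
```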
null
[ "time-series" ]
[ { "code": "", "text": "Hi,\ni was trying to use Update One Timestamps Strategy for a mongo sink connector for a timesereies collection, it was failing with time series collection but was working with normal collection.Is this a limitation in timeseries collections?", "username": "Kanishka_Viraj" }, { "code": "", "text": "Hi @Kanishka_Viraj,Time series collections have a different internal storage format and there are some limitations depending on your version of MongoDB server. Please review Time Series Collection Limitations in the relevant version of the MongoDB server documentation for more information.A limited range of Update and delete operations is possible in MongoDB 5.0.5 or newer. There are also New time series capabilities coming for MongoDB 6.0.If your use case requires more flexibility than the current time series implementation, you could consider using a normal collection with The Bucket Pattern instead.It would also be helpful if you can share or upvote relevant ideas on the MongoDB Feedback Engine.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using WriteModelStrategy for Timeseries collection
2022-06-30T07:22:24.469Z
Using WriteModelStrategy for Timeseries collection
1,462
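A minimal sketch of the Bucket Pattern mentioned above, for cases where time series collection limitations (such as restricted updates from a sink connector) are a blocker. All field names, the one-hour bucket boundaries, and the 200-samples-per-bucket cap are assumptions:

```javascript
// One bucket document per sensor per hour; a new bucket is created by the
// upsert once the current one reaches 200 samples.
db.readings.updateOne(
  {
    sensorId: "sensor-1",
    bucketStart: ISODate("2022-06-30T07:00:00Z"),
    sampleCount: { $lt: 200 }
  },
  {
    $push: { samples: { ts: ISODate("2022-06-30T07:22:24Z"), value: 23.4 } },
    $inc: { sampleCount: 1 },
    $setOnInsert: { bucketEnd: ISODate("2022-06-30T08:00:00Z") }
  },
  { upsert: true }
)
```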
null
[ "containers" ]
[ { "code": " rs0:PRIMARY> db.serverStatus().connections\n {\n \t\"current\" : 77,\n \t\"available\" : 52351,\n \t\"totalCreated\" : 335185,\n \t\"active\" : 1\n }\n", "text": "Hi - Im running a single mongod in a docker container and every couple of weeks Im no longer available to connect to the server using python/mongo or compass all with the error:com.mongodb.MongoQueryException: Query failed with error code 261 and error message ‘Unable to add session into the cache because the number of active sessions is too high’ on server *******:27017Available sessions looks good:we are running :\nMongoDB shell version v4.0.10\nMongoDB server version: 4.2.8How can I investigate this issue? Thanks\nDuncan", "username": "Duncan_Kerr" }, { "code": "", "text": "Hi @Duncan_Kerr and welcome onboard !Have a look to this ticket, it looks very similar.\nAre you running pymongo 3.10?Maybe you could consider upgrading to MDB 4.4.0 and pymongo 3.11.0?Also, does your server has enough RAM? Do you close the connections & cursors your create correctly?", "username": "MaBeuLux88" }, { "code": "", "text": "enough RAM? Do you close the connections &we just upgraded which was when this started. Our drivers are quite old, but we operate a large mongo estate so we have various contraints on what can be changed.", "username": "Duncan_Kerr" }, { "code": "", "text": "Hi @Duncan_KerrThis issue is related to the fact that you have reached a 1000000 allowed default open sessions and for some reason they are not getting purged (maxSessions parameter).There is probably a session leak as a result of a bug or incompatibility of the driver and the server.Please upgrade to latest compatible drivers and see if after a bounce of the cluster the issue persists.https://docs.mongodb.com/drivers/ - See needed compatible drivers.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi, @Duncan_Kerr Is the issue resolved. Could you share the details here.Regards,\nRohith", "username": "Rohith_Roshan_Devu" } ]
Session resource leak
2020-09-02T14:51:09.911Z
Session resource leak
6,656
null
[ "connecting" ]
[ { "code": "", "text": "We are encountering maxSessions peaking out in mongo db due to which the application in not able to create more connection.As of now we have 3 node cluster Primary secondry secondry.We see all read and write operations happening on Primary node and none of the traffic is going to secondry.Tried shifting read traffic to secondry nodes but that results in stale dataset.Help would be highly appreciated.", "username": "abhishek_sharma3" }, { "code": "", "text": "Hi @abhishek_sharma3This sounds like an issue for either our technical services (support team) or our professional services (consulting team) rather than a question related to this MongoDB University course.In the case of maximum utilisation of any resource, you can either scale up the hardware to support the higher utilisation or you can limit the number of requests being made.If this is an Atlas deployment, I’d suggest moving to a higher tier to support the workload. If this is running on your own hardware, I’d suggest contacting sales as it looks like there are several factors here that require deeper discussion.Kindest regards,\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "Hi Eoin,Can you give me a link of correct support channel.\nWhat is the quickest way to connect with support. We are running community edition on our own amazon kubernetess cluster.Warm regards,\nAbhishek", "username": "abhishek_sharma3" }, { "code": "db.serverStatus().connections.wtdbPathulimit", "text": "Wellcome to the MongoDB Community Forums @abhishek_sharma3!Since you originally posted this issue in a MongoDB University category, it was definitely off-topic (not related to the course) so I moved it to the more general category of Ops and Admin for visibility.Since your issue sounds likely to require more dedicated investigation, Eoin was suggesting Commercial Support or Consulting Services might be more appropriate for resolution of your production issue. 
You can contact MongoDB Sales to discuss options or find a local consultant if you have a preference.If you want to try to solve your issue with public advice in the community forums, you can also continue discussion in the forums but will need to provide more details and be prepared for some back and forth as we try to work out what your issue may be.It would be helpful to know:The most likely starting place for connection limits is checking your ulimit configuration on UNIX-like operating systems, but we need more details about your deployment to provide relevant suggestions.Regards,\nStennie", "username": "Stennie_X" }, { "code": "root@mongodb-1:/# cat /etc/os-release\nNAME=\"Ubuntu\"\nVERSION=\"16.04.6 LTS (Xenial Xerus)\"\nID=ubuntu\nID_LIKE=debian\nPRETTY_NAME=\"Ubuntu 16.04.6 LTS\"\nVERSION_ID=\"16.04\"\n\nUBUNTU_CODENAME=xenial\nroot@mongodb-1:/# mongo\nMongoDB shell version v4.0.12\nconnecting to: mongodb:/Localhost/?gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"4755946b-0f96-43b2-ab24-74a8dfddd828\") }\nMongoDB server version: 4.0.12\nServer has startup warnings:\n2021-07-30T01:18:37.549+0000 I STORAGE [initandlisten]\n2021-07-30T01:18:37.549+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2021-07-30T01:18:37.549+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2021-07-30T01:18:40.448+0000 I CONTROL [initandlisten]\n2021-07-30T01:18:40.448+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2021-07-30T01:18:40.448+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2021-07-30T01:18:40.448+0000 I CONTROL [initandlisten]\npsc:PRIMARY> db.serverStatus().connections\n{\n \"current\" : 1172,\n \"available\" : 837688,\n \"totalCreated\" : 819557,\n \"active\" : 5\n}\n", "text": "Hi Stennie,Thank you so much for prompt response. Here are the details.com.mongodb.MongoQueryException: Query failed with error code 261 and error message ‘cannot add session into the cache’ on server mongodb-1.mongodb.shared.svc.cluster.local:27017 at com.mongodb.operation.FindOperationCount - 711\nSize - 4.9GRegards,\nAbhishek", "username": "abhishek_sharma3" }, { "code": "", "text": "Hi @abhishek_sharma3 Is the issue resolved. If it is please could you share the resolution here. I’m currently facing this issue.Regards,\nRohith", "username": "Rohith_Roshan_Devu" } ]
We are facing issue in production. maxSessions
2021-08-18T14:25:40.700Z
We are facing issue in production. maxSessions
2,904
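A minimal mongosh sketch of how the session pressure behind the "cannot add session into the cache" (code 261) error above could be inspected. The parameter and field names follow the MongoDB server documentation; the values and the config fragment are only illustrative.

```js
// Inspect connection and logical-session usage on the node hitting the error
db.serverStatus().connections               // current / available / totalCreated / active
db.serverStatus().logicalSessionRecordCache // activeSessionsCount and cache statistics
db.adminCommand({ getParameter: 1, maxSessions: 1 }) // session cap (documented default: 1,000,000)

// maxSessions is a startup-only parameter; raising it means restarting mongod, e.g. in mongod.conf:
//   setParameter:
//     maxSessions: 1000000
```

In most cases the better fix is making the application end or reuse its client sessions rather than raising the cap.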
null
[ "aggregation", "queries" ]
[ { "code": "{\n \"_id\" : ObjectId(\"626abbdff6df0a095e8d66d0\"),\n \"identifier\" : \"2022-04-28T17:22:41_Pushy Tanky_Doctorfeelgood\",\n \"createdAt\" : ISODate(\"2022-04-28T16:07:59.068+0000\"),\n \"damageRegion\" : NumberInt(14),\n \"killer\" : {\n \"_id\" : ObjectId(\"000000000000000000000000\"),\n \"createdAt\" : ISODate(\"0001-01-01T00:00:00.000+0000\"),\n \"updatedAt\" : ISODate(\"0001-01-01T00:00:00.000+0000\"),\n \"clanId\" : NumberInt(0),\n \"clanLevel\" : NumberInt(0),\n \"factionId\" : NumberInt(0),\n \"factionSympathy\" : NumberInt(0),\n \"name\" : \"Pushy Tanky\",\n \"class\" : \"\",\n \"combatRank\" : NumberInt(67),\n \"faction\" : \"CityAdmin\",\n \"profession\" : \"\",\n \"characterSlot\" : \"\",\n \"clan\" : {\n \"_id\" : ObjectId(\"000000000000000000000000\"),\n \"createdAt\" : ISODate(\"0001-01-01T00:00:00.000+0000\"),\n \"updatedAt\" : ISODate(\"0001-01-01T00:00:00.000+0000\"),\n \"factionSympathy json:\" : NumberInt(0),\n \"money\" : NumberInt(0),\n \"name\" : \"-17th squad-\",\n \"shortname\" : \"17th squad\"\n }\n },\n \"target\" : {\n \"_id\" : ObjectId(\"000000000000000000000000\"),\n \"createdAt\" : ISODate(\"0001-01-01T00:00:00.000+0000\"),\n \"updatedAt\" : ISODate(\"0001-01-01T00:00:00.000+0000\"),\n \"clanId\" : NumberInt(0),\n \"clanLevel\" : NumberInt(0),\n \"factionId\" : NumberInt(0),\n \"factionSympathy\" : NumberInt(0),\n \"name\" : \"Doctorfeelgood\",\n \"class\" : \"\",\n \"combatRank\" : NumberInt(57),\n \"faction\" : \"Black Dragon\",\n \"profession\" : \"\",\n \"characterSlot\" : \"\",\n \"clan\" : {\n \"_id\" : ObjectId(\"000000000000000000000000\"),\n \"createdAt\" : ISODate(\"0001-01-01T00:00:00.000+0000\"),\n \"updatedAt\" : ISODate(\"0001-01-01T00:00:00.000+0000\"),\n \"factionSympathy json:\" : NumberInt(0),\n \"money\" : NumberInt(0),\n \"name\" : \"\",\n \"shortname\" : \"\"\n }\n },\n \"timeStamp\" : \"2022-04-28T17:22:41\",\n \"updatedAt\" : ISODate(\"2022-04-28T16:07:59.068+0000\"),\n \"weapon\" : \"RAVAGER\",\n \"weaponResourceKey\" : \"2040470A\",\n \"world\" : \"TUNNEL I\"\n}\n target.namekiller.namekiller.namekiller.nametarget.name", "text": "Hey there,im doing my first tries with MongoDB and im actually hitting a wall. Im not sure if I misunderstood the concept behind MongoDB or if I just have issues getting up and running.Im importing data from an API and im saving then as that kind of documents (its data from an online game, so no worries that the field names are little weird)I managed to filter out duplicates where target.name and killer.name is of the same value. What im now trying to do, is basically to query a sorted view. Meaning, I want for every unique killer.name to count its occurrences in killer.name and target.name. I understand, that this will mean to transform to a new document but im having issues to get the “dynamic filtering” going, as I need to have the unique names first.Does it make sense? Am I trying something which is not possible with MongoDB that easily? Its certainly possible that I did a wrong decision.Thanks for your help ", "username": "SantoDE" }, { "code": "", "text": "Hi @SantoDE ,It seems that your documents have only embedded docs and no arrays so it sounds that you just need to use $group on “killer.name” and/or “target.name” with a $count or $sum(1) operator ( do the same eventually)Does that work for you?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "_id", "text": "Hello @Pavel_Duchovny,thanks for the reply I feel like im missing something. 
Do you envision that working in one group phase or in two? Is the _id then each respective value? I think it is. And then, lastly, to add a count to (each?) group stage?", "username": "SantoDE" } ]
Counting occurrences of a given field value inside a document
2022-06-29T11:57:38.753Z
Counting occurrences of a given field value inside a document
2,745
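One possible shape of the $group suggestion above, counting how often each name appears as either killer or target. This is a sketch only: the collection name kills is assumed, and the field paths come from the sample document.

```js
db.kills.aggregate([
  { $project: { names: [ "$killer.name", "$target.name" ] } }, // both roles into one array
  { $unwind: "$names" },
  { $group: { _id: "$names", appearances: { $sum: 1 } } },     // one bucket per unique name
  { $sort: { appearances: -1 } }
])
```

Documents where killer and target carry the same name would count twice here, which matches the thread's note that such duplicates were already filtered out beforehand.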
null
[]
[ { "code": "/mnt/V1/mnt/V2/mnt/V2//etc/mongod.confstorage:\ndbPath: /mnt/V2/mongodb\n\nsystemLog:\npath: /mnt/V2/mongod.log\nsudo chown $USER -R /mnt/V2/mongodb\nsudo chown $USER -R /mnt/V2/mongod.log\nclient[db_name]pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 608eb8fa091e8bf0c343b405, topology_type: Single, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:27017: [Errno 111] Connection refused')>]>", "text": "There are 3 storages on the computer:By default, the databases were stored in the SSD, but the free space there ran out. I decided that the data should be accumulated in the /mnt/V2/ and I edited /etc/mongod.conf:Then I created the appropriate paths and corrected the permissions for them:Then I rebooted the OS.An error occurs after an attempt to connect to DB (client[db_name]):pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 608eb8fa091e8bf0c343b405, topology_type: Single, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:27017: [Errno 111] Connection refused')>]>There was no error before changing the database directory.", "username": "Platon_workaccount" }, { "code": "", "text": "Looks like mongod server did not restart.Share output of ps -aef | grep [m]ongo.Share the content of /mnt/V2/mongod.log if present. Since the log file is directly in /mnt/V2/ then mongod should have write permission in order to create and rotate the log file.", "username": "steevej" }, { "code": "ps -aef | grep mongoplaton 2019 1481 0 15:26 pts/0 00:00:00 grep --color=auto mongo{\"t\":{\"$date\":\"2021-05-02T13:37:28.770+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.819+00:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.819+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.820+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1055,\"port\":27017,\"dbPath\":\"/mnt/V2/mongodb\",\"architecture\":\"64-bit\",\"host\":\"platon\"}}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.820+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.5\",\"gitVersion\":\"ff5cb77101b052fa02da43b8538093486cf9b3f7\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.820+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.820+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/mnt/V2/mongodb\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/mnt/V2/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.857+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /mnt/V2/mongodb\"}}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.857+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the 
MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2021-05-02T13:37:28.863+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}", "text": "ps -aef | grep mongo\nplaton 2019 1481 0 15:26 pts/0 00:00:00 grep --color=auto mongomongod.log:", "username": "Platon_workaccount" }, { "code": "{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /mnt/V2/mongodb\"}: steevej @ xps ; ps -aef | grep mongod\nmongod 7799 1 11 12:23 ? 00:00:00 /usr/bin/mongod -f /etc/mongod.conf\nsteevej 7847 4992 0 12:23 pts/0 00:00:00 grep mongod\n: steevej @ xps ; ps -aef | grep [m]ongod\nmongod 7799 1 1 12:23 ? 00:00:01 /usr/bin/mongod -f /etc/mongod.conf\n", "text": "The server mongod could not restart because:{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /mnt/V2/mongodb\"}I suspect that $USER was not defined correctly when you chown. The directory /mnt/V2/mongodb must be writable by the user running mongod. On my CentOS the user is also named mongod.PS: I asked for grep [m]ongo with the brackets in order to not see the command grep itself in the output. See the difference below:", "username": "steevej" }, { "code": "sudo chown platon -R /mnt/V2/mongodbsudo chown platon -R /mnt/V2/mongod.loggroups platonplaton : platon adm cdrom sudo dip plugdev lxd docker", "text": "Okay, I have specified user name explicitly:\nsudo chown platon -R /mnt/V2/mongodb\nsudo chown platon -R /mnt/V2/mongod.log\nThat didn’t help.About user permissions:\ngroups platon\nplaton : platon adm cdrom sudo dip plugdev lxd dockerGrep with [] does not print any results, so I applied the command without them.", "username": "Platon_workaccount" }, { "code": "sudo chown platon -R /mnt/V2/mongodbsudo chown platon -R /mnt/V2/mongod.log", "text": "Grep with [] does not print any results, so I applied the command without them.No output means no mongo process running.Okay, I have specified user name explicitly:\nsudo chown platon -R /mnt/V2/mongodb\nsudo chown platon -R /mnt/V2/mongod.log\nThat didn’t help.Most likely because platon is not the appropriate user. 
You have to look at the systemd configuration for mongod.", "username": "steevej" }, { "code": "", "text": "But why did it work before changing the DB directory?", "username": "Platon_workaccount" }, { "code": "", "text": "May be because the old directory has been created correctly and the new one, not.", "username": "steevej" }, { "code": "", "text": "Can anyone provide a step-by-step guide to solving the problem?", "username": "Platon_workaccount" }, { "code": "", "text": "Have you solved the problem finally? I meet the same problem after changing the application directory.", "username": "hzmadifeng" }, { "code": "", "text": "Please start a new thread with your issue and provide more details.Unless you really have the same problem. In this case the solution would be the same. Fix the directory permissions by making sure the user (should be mongodb) with which you start mongod can write into it.", "username": "steevej" }, { "code": "/etc/mongod.confstorage:\ndbPath: /mnt/VOLUME_NAME/mongodb\n\nsystemLog:\npath: /mnt/VOLUME_NAME/mongod.log\nCTRL+SCTRL+Xmkdir /mnt/VOLUME_NAME/mongodb\ntouch /mnt/VOLUME_NAME/mongod.log\nsudo chown mongodb -R /mnt/VOLUME_NAME/mongodb\nsudo chown mongodb -R /mnt/VOLUME_NAME/mongod.log\n", "text": "CTRL+S\nCTRL+X", "username": "Platon_workaccount" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Timeout error after database directory change
2021-05-02T15:03:50.143Z
Timeout error after database directory change
14,940
null
[ "aggregation" ]
[ { "code": "\"input\":{\n \"_id\":\"1\naccess_key\":12345,\n \"name\":\"anu\"\n}{\n \"_id\":2,\n \"access_key\":12345,\n \"name\":\"babu\"\n}{\n \"_id\":3,\n \"access_key\":12345,\n \"name\":\"chinu\"\n}{\n \"_id\":4,\n \"access_key\":12345,\n \"name\":\"chinu\"\n}\n{\n \"access_key\":12345,\n \"anu\":1,\n \"babu\":1,\n \"chinu\":2\n}\n", "text": "i want to display the result like this with acces_key and count of name", "username": "priya_gunasekaran" }, { "code": "db.test.aggregate([{\"$group\" : {_id:{\"access_key\":\"$access_key\",\"name\":\"$name\"}, sum:{$sum:1}}},{$sort:{\"sum\":1}}])\n{ \"_id\" : { \"access_key\" : 12345, \"name\" : \"anu\" }, \"sum\" : 1 }\n{ \"_id\" : { \"access_key\" : 12345, \"name\" : \"babu\" }, \"sum\" : 1 }\n{ \"_id\" : { \"access_key\" : 12345, \"name\" : \"chinu\" }, \"sum\" : 2 }\naccess_keydb.test.aggregate([{\"$match\":{\"access_key\":12345}},{\"$group\" : {_id:{\"access_key\":\"$access_key\",\"name\":\"$name\"}, sum:{$sum:1}}},{$sort:{\"sum\":1}}])\n{ \"_id\" : { \"access_key\" : 12345, \"name\" : \"anu\" }, \"sum\" : 1 }\n{ \"_id\" : { \"access_key\" : 12345, \"name\" : \"babu\" }, \"sum\" : 1 }\n{ \"_id\" : { \"access_key\" : 12345, \"name\" : \"chinu\" }, \"sum\" : 2 }\n", "text": "you can use the aggregation pipeline for thisif you want to filter by access_key", "username": "Arkadiusz_Borucki" }, { "code": "db.test.aggregate([{\"$group\" : {_id:{\"access_key\":\"$access_key\",\"name\":\"$name\"},\"name\":{\"$push\": \"$name\"}}},{\"$group\":{\"_id\": \"$_id.access_key\",\"name\":{\"$push\":{\"name\":\"$name\",\"count\":{$size:\"$name\"}}}}}])\n{ \"_id\" : 12345, \"name\" : [ { \"name\" : [ \"anu\" ], \"count\" : 1 }, { \"name\" : [ \"chinu\", \"chinu\" ], \"count\" : 2 }, { \"name\" : [ \"babu\" ], \"count\" : 1 } ] }\n", "text": "if the result should be in 1 document, maybe something like", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "but there is n no of document how i wil count randomly without mention name values", "username": "priya_gunasekaran" } ]
Group multiple count of values in single field to display
2022-06-29T06:38:02.237Z
Group multiple count of values in single field to display
2,939
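To get the keyed output shown in the question ({ access_key, anu: 1, babu: 1, chinu: 2 }) without hard-coding the names, the per-name counts can be folded back into fields with $arrayToObject. This is a sketch assuming the same test collection used in the answers; $replaceWith needs MongoDB 4.2+, older versions can use $replaceRoot instead.

```js
db.test.aggregate([
  // one bucket per (access_key, name) pair
  { $group: { _id: { access_key: "$access_key", name: "$name" }, count: { $sum: 1 } } },
  // collect the per-name counts for each access_key as {k, v} pairs
  { $group: { _id: "$_id.access_key", counts: { $push: { k: "$_id.name", v: "$count" } } } },
  // turn the pairs into real fields on the output document
  { $replaceWith: { $mergeObjects: [ { access_key: "$_id" }, { $arrayToObject: "$counts" } ] } }
])
```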
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "var Claims = mongoose.Schema({\n\n billed_insurances:\n [\n {type: mongoose.Schema.Types.ObjectId, ref: 'Insurances'},\n ]\n});\n\nmodule.exports = mongoose.model(\"Claims\", Claims);\nvar Patients = mongoose.Schema({\n name: \n {\n first_name: {type: String, required: true}, \n last_name: {type: String, required: true}\n },\n insurances: \n [\n carrier: {type: mongoose.Schema.Types.ObjectId, ref: 'Carriers', required: true},\n member_id: {type: String, required: true},\n ]\n});\n\nmodule.exports = mongoose.model(\"Patients\", Patients);\nClaim.findOne({_id : claimId}).populate({path: 'billed_insurances'})\n", "text": "Here’s my Claim Schema:Here’s my Patient SchemaI’m trying to store and populate a reference from ‘insurances’ in Patient to billed_insurances in Claims. Storing it is no problem, but I haven’t been able to successfully populate the billed_insurances array.Here’s my populate call:When I run this, billed_insurances comes back as an empty array. I’d like to keep insurances embedded within the Patient and not extrapolated to it’s own collection since it’s always accessed when other Patient info is accessed. I’ve tried making insurances it’s own subdocument, among other things, however still no luck. How should I accomplish this?", "username": "macintosh1097" }, { "code": " const popluatedClaim = await Claim.findById(insertedClaim._id).populate({\n path: \"billed_insurances\",\n });\n {\n _id: new ObjectId(\"62bbe0968c7777b4c0ce0db8\"),\n billed_insurances: [\n { _id: new ObjectId(\"62bbe0968c7777b4c0ce0db2\"), name: 'Lorem', __v: 0},\n { _id: new ObjectId(\"62bbe0968c7777b4c0ce0db3\"), name: 'Ipsum', __v: 0},\n { _id: new ObjectId(\"62bbe0968c7777b4c0ce0db4\"), name: 'Dolor', __v: 0},\n { _id: new ObjectId(\"62bbe0968c7777b4c0ce0db5\"), name: 'Sit', __v: 0 }\n ],\n __v: 0\n }\nconst mongoose = require(\"mongoose\");\n\nconst claimSchema = mongoose.Schema({\n billed_insurances: [\n { type: mongoose.Schema.Types.ObjectId, ref: \"Insurance\" },\n ],\n});\nconst Claim = mongoose.model(\"Claim\", claimSchema);\n\nconst insuranceSchema = mongoose.Schema({InsuranceClaim^6.4.1", "text": "Hi @macintosh1097, welcome to the community.\nI created a sample to populate an array of ObjectIDs using mongoose and it seems to work as expected.\nThe following query:returned the following populated document.Please take a look at this gist to learn more:\nPlease note that the naming convention I followed was as per the mongoose documentation:The first argument is the singular name of the collection your model is for. Mongoose automatically looks for the plural, lowercased version of your model name. Thus, for the example above, the model Tank is for the tanks collection in the database.Hence I used the model names as Insurance & Claim instead of their plural forms.The version of mongoose that I am using is ^6.4.1 and it worked perfectly fine. Please let me know in case you are using a different version of mongoose and the array is not getting populated using .populate method as shown above.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to Reference and Populate Object Embedded in Another Collection
2022-06-11T04:16:14.644Z
How to Reference and Populate Object Embedded in Another Collection
36,768
null
[ "aggregation", "queries", "node-js", "crud", "mongodb-shell" ]
[ { "code": "db.seats.updateOne({\n \"show_seats.$.showByDate.shows.$.showSeats.$._id\":\"new ObjectId(\"\"62b0d1a72f155a7ad94cc831\"\")\"\n},\n{\n \"$set\":{\n \"show_seats.$[].showByDate.shows.$[].showSeats.$[].seat_status\":false\n }\n})\n{\n \"_id\": {\n \"$oid\": \"62b0c3342f155a7ad94cc81c\"\n },\n \"totalShowByDay\": \"2\",\n \"totalShowDays\": 4,\n \"movieId\": {\n \"$oid\": \"62b04c782828dd04f0d1c1ad\"\n },\n \"screenId\": {\n \"$oid\": \"62b04b8e2828dd04f0d1c1ac\"\n },\n \"createdAt\": 1655751476553,\n \"showId\": {\n \"$oid\": \"62b0c3342f155a7ad94cc6db\"\n },\n \"show_seats\": [{\n \"showByDate\": {\n \"ShowDate\": \"2022-06-20\",\n \"shows\": [{\n \"showTime\": \"2022-06-20T10:00\",\n \"showSeats\": [{\n \"_id\": {\n \"$oid\": \"62b0c3342f155a7ad94cc6dc\"\n },\n \"seat_number\": \"1\",\n \"tag_name\": \"A\",\n \"seat_status\": false,\n \"user_id\": false,\n \"price\": \"110\",\n \"seats_category\": \"CLASSIC\",\n \"show_time\": \"2022-06-20T10:00\"\n }, {\n \"_id\": {\n \"$oid\": \"62b0c3342f155a7ad94cc6dd\"\n", "text": "I’m trying to update a document, this is my code I’m new to the mongodb I’m trying to update a nested array of object this is my query please tell me what is wrong with this queryand this is my data look like", "username": "Abhijith_Vikraman_pillai" }, { "code": "", "text": "The $ does not belong in the query parameter. Only in the update parameter.Verify your syntax and check examples.", "username": "steevej" }, { "code": "seat_statusfalsetrueseat_statusdb.seats.updateOne(\n{\n 'show_seats.showByDate.shows.showSeats._id': ObjectId(\"62b0c3342f155a7ad94cc6dc\")\n},\n{\n '$set': {\n 'show_seats.$[e1].showByDate.shows.$[e2].showSeats.$[e3].seat_status': true\n }\n},\n{\n arrayFilters: [\n { 'e1.showByDate.shows': { '$exists': true } },\n { 'e2.showSeats': { '$exists': true } },\n { 'e3.seat_status': false }\n ]\n}\n)\n{\n \"_id\":\"62b0c3342f155a7ad94cc81c\",\n \"totalShowByDay\":\"2\",\n \"totalShowDays\":4,\n \"movieId\":\"62b04c782828dd04f0d1c1ad\",\n \"screenId\":\"62b04b8e2828dd04f0d1c1ac\",\n \"createdAt\":1655751476553,\n \"showId\":\"62b0c3342f155a7ad94cc6db\",\n \"show_seats\":[\n {\n \"showByDate\":{\n \"ShowDate\":\"2022-06-20\",\n \"shows\":[\n {\n \"showTime\":\"2022-06-20T10:00\",\n \"showSeats\":[\n {\n \"_id\":\"62b0c3342f155a7ad94cc6dc\",\n \"seat_number\":\"1\",\n \"tag_name\":\"A\",\n \"seat_status\":true, /// `seat_status` now true\n \"user_id\":false,\n \"price\":\"110\",\n \"seats_category\":\"CLASSIC\",\n \"show_time\":\"2022-06-20T10:00\"\n },\n {\n \"_id\":\"62b0c3342f155a7ad94cc6dd\"\n }\n ]\n }\n ]\n }\n }\n ]\n}\narrayFilters$[<identifier>]", "text": "Hi @Abhijith_Vikraman_pillai - Welcome to the community.In addition to what steevej has advised, the data / document you provided appears to be incomplete / invalid. The query parameter ObjectId value doesn’t appear to exist in the sample document as well.However, in saying so, I presume you are trying to update the seat_status value from false to true based off my interpretation of your update command. Please take a look at the below example query in which I was able to update the seat_status value on my test environment:Document after the update:You would need to edit the arrayFilters accordingly for your use case.Please note, this was not thoroughly tested and it is highly recommended to test extensively within a test environment before running any commands in production.I would also recommend going over the $[<identifier>] documentation to learn more about the update example above. 
Additionally, If this is a common operation in the database, the following pages may be of use to you regarding data modelling:If you require further assistance with this, could you provide the following information:Hope this helps.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
How to update a nested document using the $ operator in MongoDB
2022-06-20T21:47:54.722Z
How to update a nested document using the $ operator in MongoDB
13,510
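If the original intent was to flip one specific seat by its _id rather than every unbooked seat, the same arrayFilters pattern can target it directly. This sketch reuses the ObjectId and field names from the sample document.

```js
db.seats.updateOne(
  { "show_seats.showByDate.shows.showSeats._id": ObjectId("62b0c3342f155a7ad94cc6dc") },
  { $set: { "show_seats.$[].showByDate.shows.$[].showSeats.$[seat].seat_status": true } },
  { arrayFilters: [ { "seat._id": ObjectId("62b0c3342f155a7ad94cc6dc") } ] }
)
```

Only the named identifier ($[seat]) needs an arrayFilters entry; the bare $[] positions simply walk every element of the outer arrays.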
https://www.mongodb.com/…_2_1024x519.jpeg
[ "mobile-bytes" ]
[ { "code": "{\n\"rules\": {},\n \"defaultRoles\": [\n {\n \"name\": \"owner-write\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": {\n \"owner_id\": \"%%user.id\"\n }\n }\n ]\n}\n{\ndefaultRoles: [],\n rules: {\n Author: [\n { name: \"role1\", applyWhen: { userId: \"abc\" }, read: true, write: { field1.user == %%user.id } },\n { name: \"role2\", applyWhen: true, read: true, write: { field1.user == %%user.id } },\n ]\n }\n }\n", "text": "Hello Everybody,Last week, we talked about Realm Relationships and types and how they are implemented client-side (on mobile).This week I will focus on Realm Sync and discuss the differences between Partition-based Sync and Flexible Sync. This will help you choose the best approach for syncing your mobile data to MongoDB Atlas.Please feel free to follow previous realm-bytes on understanding cluster configuration . After creating your cluster, you create a Realm App and link to clusterThere are two ways to enable Sync in your mobile application:When you have data in Atlas already or can load it easily, Realm will generate your client data models for you. There is also a Sync guide available on your application Dashboard page. The guide explains how to configure your collections for Realm Sync\n1600×812 196 KB\nIf you aren’t starting with data, Development Mode can be very useful for getting started quickly – it allows you to build your mobile app from scratch and sync your data to Atlas. Once you have finished creating your client data models, development mode can be turned off and it will lock in a backend schema based on the models that you’ve created.There are two Sync Types: Flexible Sync and Partition Based.Partition-based Sync allows you to choose a single field called a Partition Key across all collections to divide Atlas data into partitions based on the field’s value.This plays an important role in partition-based Sync. If you have opted for “Generate Schema” in the previous step, you can choose one of the fields from your Schema to split the data across your MongoDB collections into partitions/Realms based on the value. If you opted for Development mode and you are creating your application from scratch, make sure to choose a field that either exists in your client application schema or you create a new one.For complex use-cases, please refer Partitioning Strategies documentationSync permissions is another important concept. This can vary depending on your use-case. For example if I have to use the Book and Author model explained previously, I would want users to read all information but write in their own private realm. For different use-cases, check Sync Permissions and Rules documentation.Please Note - When you have Sync enabled on your cluster. Sync Permissions will serve as the permissions for all requests in the application.Below is a snapshot of a random key and permissions for Book Author Model\n1376×1562 141 KB\nFlexible Sync is in preview but offers far more flexibility in data synchronization across devices and MongoDB Atlas.Please Note: Flexible Sync requires MongoDB 5.0 and aboveFlexible Sync uses subscriptions and permissions to determine which data to sync with your Realm App.Some basic terminology used in Flexible Sync is explained below:This refers to the fields in your client schema that your application can query. These queries will define the data that is synced down to your device and replace the concept of partitions. When you configure Flexible Sync on the backend, the field names are specified there. 
You can choose upto 10 queryable fields.If you choose development mode, you can create fields that are part of your client schema. For example, for Book and Author model, I created the following queryable fields\n1374×622 70.3 KB\nThe query and its metadata are represented by a subscription. For flexible sync, it sends an RQL (Realm Query Language) query that the client app is trying to sync on in comparison to the partition key sent in the Partition-based Sync. Flexible Sync does not support all the operators available in RQL. See Flexible Sync RQL limitations for details.Flexible Sync allows you to define a query in the client, and sync only the objects that match the query. When the client-side makes a query, Realm searches the server-side data set for documents matching the query.The respective Realm SDKs provide a Subscription API to modify the queries. For example, if you are syncing on [author == “Rowling”] and [isRead ==true], you can remove the second query, add another one or update one of these, and the server will re-sync with the new data.Flexible Sync has a more powerful permission system and can be applied on a per-document level in comparison to partition-based permission systems that do not offer granular filtering.When you set up permissions on the backend, you can choose from a provided template or design your own permissions from scratch.For the Book and Author model, I chose “Users can read all data but only write their own data” from the provided options below:\n1402×898 94.1 KB\nThe JSON expression for the selected permission is as below and this will be applied to all collections in the databaseIf there is a requirement to apply more granular rules, those can be applied in the following wayPlease refer to Flexible Sync Role Strategies for more information.I hope the information provided is helpful. Please feel free to share your experience, thoughts of using Realm Sync Types and Modes.Until next week…Cheers ", "username": "henna.s" }, { "code": "", "text": "issions is another important concept. This can vary depending on your use-case. For example if I have to use the Book and Author model explained previously, I would want users to read all information but write in their own private realm. For different use-cases,Is there any guidance on when you should opt for one or the other sync mode? What use-cases would be suitable for each? If I start off with partition-based sync, does that lock me in or will I be able to combine the approaches in the future?If I am not yet syncing data, would it be of benefit to wait until flexible sync is generally available? When is this expected to be available so I can used it in production? A lot of questions, but I find it a bit hard to understand what approaches to take without more information.", "username": "Simon_Persson" }, { "code": "", "text": "Hello @Simon_Persson,Thank you for raising your concerns and these are really great questions Is there any guidance on when you should opt for one or the other sync mode? What use-cases would be suitable for each?This will depend on your application requirement and needs. Flexible-Sync will work for almost all use-cases. For example, I have a news-reader app, I would want all users to read all news, but perhaps save topics to their own private realms. This use-case can be implemented with either of the Sync Type. 
If you want to extend the functionality, for example, users should access only certain titles and articles then Flexible Sync would be a better choice.If I start off with partition-based sync, does that lock me in or will I be able to combine the approaches in the future?You can convert from one type to another. Please refer Alter Sync Configuration in the MongoDB docs. Currently, it’s not possible to have both Flexible and Partition Sync in your application but this may change in the future.If I am not yet syncing data, would it be of benefit to wait until flexible sync is generally available? When is this expected to be available so I can use it in production? A lot of questions, but I find it a bit hard to understand what approaches to take without more information.Unfortunately, we don’t have a timeline available at this moment but will update here once more information comes to light. I would suggest testing your use case with both the options and making your decision from there.The feedback from our users is always appreciated, so please feel free to raise any questions that you may have while working with either of the options.I look forward to your response.Cheers ", "username": "henna.s" }, { "code": "", "text": "I just read the Alter Sync Configuration link and it doesn’t say anything about converting from one type to another?Are there any differences in performance between the two? I know that the legacy query based sync was removed because of performance. Is this resolved with partition based?In my case. The main use case is that each user has their own data. This maps well to partitions. But there might be future use cases where users might want to share data with each other. What I don’t want to run into is that I start with one, and then discover that I have the need for another sync type and can’t change it if that makes sense.On one hand, flexible sync sounds like it has pretty high complexity to me and I prefer to keep things simple. On the other hand, Client Reset is only mentioned in the partition based sync. This was a major headache for me when trying to use the legacy realm sync and I found it pretty much impossible to test during development. Is client reset not a thing with flexible sync?", "username": "Simon_Persson" }, { "code": "## 10.14.0 (YYYY-MM-DD)\n\n### Enhancements\n* None.\n\n### Fixed\n* None.\n\n### Compatibility\n* File format: Generates Realms with format v22. Unsynced Realms will be upgraded from Realm Java 2.0 and later. Synced Realms can only be read and upgraded if created with Realm Java v10.0.0-BETA.1.\n* APIs are backwards compatible with all previous release of realm-java in the 10.6.y series.\n* Realm Studio 11.0.0-alpha.0 or above is required to open Realms created by this version.\n\n\n## 10.13.0 (2022-12-05)\n\n### Enhancements\n* [RealmApp] Added option for working with Device Sync from an internal network. `SyncConfiguration.trustedRootCA(assetPath)` can embed a custom certificate in the app that will be used by Sync. (Issue [#7739](https://github.com/realm/realm-java/pull/7739)).\n* [RealmApp] Added option for working with Device Sync from an internal network. `SyncConfiguration.disableSSLVerification()` makes it possible to turn off local SSL validation. 
(Issue [#7739](https://github.com/realm/realm-java/pull/7739)).\n\n", "text": "Initially, there are performance differences between the two sync types, however during this preview we aim to bring flexible sync in line with the performance profile of partition-based sync, and if successful, flexible sync will become the default going forward. We believe this should be readily achievable because you can basically think of partition-based sync as an incredibly simple version of query-based sync ( give me all documents, across all collections, where this field matches this value). Of course, the performance of sync queries greatly depends on the type of query you run, essentially the big-O notation, if the query is slow to run on MongoDB then it will also be slow to sync.Client reset will still be a thing with flexible sync, however, we realize that it is an undertaking for a developer to implement it themselves which is why we are shipping a new feature that performs the client reset for the developer automatically. The next iteration of the feature will automatically recover the data as well. See discardLocal enhancement here -", "username": "Ian_Ward" }, { "code": "", "text": "Thanks Ian. If flexible sync is something that will become the default going forward, then that is a strong argument to try it out. I strongly suspect that my use case can be accommodated using both models, but it is good to know that this is something you are aiming for.And great info with the client resets and good that you are working on them. Makes me super happy to hear that.", "username": "Simon_Persson" }, { "code": "", "text": "There are some interesting question and answers on Flexible Sync that hopefully will be helpful to you all.Thank you @Reveel for asking these great questions Linking them here for others as well Cheers, ", "username": "henna.s" } ]
Mobile Bytes #5: Understanding Partition-based Sync and Flexible Sync
2022-02-24T17:36:52.594Z
Mobile Bytes #5: Understanding Partition-based Sync and Flexible Sync
8,049
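As a concrete illustration of the Subscription API mentioned in the article, here is a sketch using the Realm JavaScript SDK against an already-open Flexible Sync realm. The Book class and the author/isRead fields are taken from the article's own example queries; other SDKs expose the same idea under slightly different names.

```js
// Subscribe to Rowling's books that have been read (Flexible Sync, Realm JS SDK)
await realm.subscriptions.update((mutableSubs) => {
  mutableSubs.add(
    realm.objects("Book").filtered('author == "Rowling" AND isRead == true'),
    { name: "rowlingReadBooks" }
  );
});

// Dropping the subscription later re-syncs without that data
// await realm.subscriptions.update((subs) => subs.removeByName("rowlingReadBooks"));
```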
null
[ "aggregation", "atlas-search" ]
[ { "code": "", "text": "Hello! I have been building an application centered around atlas search (trying to avoid using elastic). Everything is great until sorting. It seems the recommended approach is to use stored source fields (docs have an example on sorting) but this is extremely slow on large datasets. Sorting with near on dates, and number fields is lightning fast. But if you need to sort on text it is incredibly slow. I hope I am missing something. Any help is appreciated.", "username": "Kyle_Mcarthur2" }, { "code": "", "text": "Hey Kyle! Stored Source is particularly optimized to make sorting on text for larger datasets faster. Do you mind sharing", "username": "Elle_Shwer" }, { "code": "", "text": "Hi @Elle_Shwer , Thank you for responding to me I unfortunately cant share the document or index definition as the client is EXTREMELY protective of their data. What I can share is that the documents have a lot of fields, all of which are indexed (not ideal I know) but their app needs to be able to search on any of these fields.Once the data is loaded into atlas search no more writes occur on that collection. just reads. The query consists of a compound filter / must stage and returns one stored source field per query and we just want to sort on the one field.When doing the near operator on all the other numeric or date fields for sorting its almost instant. As well as any other search queries return instantly. However the sort on the m10 cluster will take 40 seconds or time out. I figured it was the weak m10 cluster so I upgraded to an m40 to run some tests and it got a lot faster but still takes 10 - 15 seconds. The collection size is around 286k documents and the index size is around 600 mb.", "username": "Kyle_Mcarthur2" }, { "code": "", "text": "Hi Kyle, no problem. Based on some research, what you’re experiencing seems possible (hard to say without details). Some options to improve performance:Please note that we are working on significant improvements to minimize the effect of recall set/stored source size, so hopefully we make headway on this naturally being faster for you soon!", "username": "Elle_Shwer" }, { "code": "", "text": "Hi @Elle_Shwer the recall set is large, but there is no way to reduce that with the application requirements as the feature is a table, and the search can be just a few characters that still return a large portion of the dataset. The sort has to occur before the limit stage for pagination.Currently I am storing just one source field, that adds around 50 mb to the index from what I have seen the field is usually a max of like 10 characters so its not big. So it must just be the amount of documents thats causing the issue. The hard part is elastic seems to handle things fine. And as I said before sorting with near on non string fields is really fast. So its a weird experience for the user when they sort on a string and everything slows to a crawl.Thats great to hear that improvements are being made. I really appreciate your input and helpedit: sorry just realized I was logged in with my work mongo account", "username": "kyle_mcarthur1" }, { "code": "", "text": "@Elle_Shwer any other ideas on this? I really want to use mongo search instead of elastic.", "username": "Kyle_Mcarthur2" }, { "code": "", "text": "Would it make sense to discuss this further on a call? 
(If you click on my name and send me a dm with your email, I’ll send over my calendly.)", "username": "Elle_Shwer" }, { "code": "", "text": "@Elle_Shwer sure, for some reason I am not seeing the DM option though.", "username": "Kyle_Mcarthur2" } ]
$sort & Atlas Search
2022-06-28T03:37:07.072Z
$sort & Atlas Search
3,118
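For reference, the stored-source sort pattern under discussion looks roughly like the following. This is a sketch only: the collection, index name, and title field are placeholders, and the index is assumed to use dynamic mappings plus a storedSource block for that field.

```js
// Assumed index excerpt: { "mappings": { "dynamic": true }, "storedSource": { "include": [ "title" ] } }
db.items.aggregate([
  {
    $search: {
      index: "default",
      text: { query: "acme", path: "title" },
      returnStoredSource: true        // return only _id plus the stored fields
    }
  },
  { $sort: { title: 1 } },            // sort on the stored text field
  { $limit: 25 }
])
```

A $lookup back to the source collection is typically added after the $limit when the full documents are needed.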
null
[ "aggregation", "queries", "atlas-search" ]
[ { "code": " ` {` ` $match: {` ` $or: [` ` { zipcode: { $regex: '^' + data, $options: \"si\" } },` ` { city: { $regex: '^' + data, $options: \"si\" } }` ` ]` ` }` ` }`\n", "text": "Hi, I am trying the atlas search. Facing some unexpected behavior. Like I want to search zip code, I write this piece. Then I run the script to execute the query 3000 times so that I can check its performance.{ $search: { index: “zipcode_search”, “autocomplete”: { “path”: “searchKey”, “query”:payload.search } } }In comparison to the above query. I write this query also. So that I can compare how much-indexed search is better than traditional searches.But the results are totally opposite to what I thought.The traditional query works way better than the indexed one. Both of them are giving results.I don’t know, why this is happening?Is my index working in the indexed search query or not, How can I confirm? Or am I doing something work here?The number of documents examined by the $search query is 0 and the other query examined 299 documents there is a total of 42000 documents in the zipcode collection and there is an index on zip code and city.The avg execution time of $search query is higher than $match query.When $search query is running then load on zipcode collection goes up to 100%. But on $match load goes up to 10%.", "username": "varun_garg" }, { "code": "", "text": "It’s hard to say without seeing an example document, know what you are searching on, and knowing the index definition. Do you mind sharing?Also this is a nice blog comparing when to use regex vs. search which may be relevant.", "username": "Elle_Shwer" }, { "code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"searchKey\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\n", "text": "Hi @Elle_Shwer\nthe blog is not accessible. Please provide the permissionthis is the index definitionSo problem statement is that I want to use autocomplete to search the zip code and city. So I created a “searchKey” in each document which will be a combination of zip code and city name. For Eg, if the zip code is “1234” and the city is “ABC” then “searchKey” in that document will be “1234 ABC”. So that I can perform the search on the single key. Then I create the search index on it. Index definition is done above.", "username": "varun_garg" }, { "code": "", "text": "Hi @Elle_Shwer\ns2270×736 23.4 KB\n\nThese are the screenshot of the search indexed query.I want to know that when we run the search query on zip code collection the load on the zip code goes up to 100%, Why?I also want to know why the examined key is 0?", "username": "varun_garg" }, { "code": "", "text": "@Elle_Shwer any thoughts on this? and can you please tell me how can I improve this index?", "username": "varun_garg" }, { "code": "$search$search$searchfind()$match.*", "text": "Hi @varun_garg ,\nThanks for all the info supplied,When $search query is running then load on zipcode collection goes up to 100%. But on $match load goes up to 10%.What do you mean by 100% load? CPU? Also I am not sure how you are tracking utilization per collection, perhaps you mean CPU on a particular node in your cluster? Primary node?The number of documents examined by the $search query is 0Are you referring to the keys-examined metric reported by mongodb? 
Its okay for this to be 0 for any $search aggregation stage, this metric refers to native mongodb indexes, not search indexes.Is my index working in the indexed search query or not, How can I confirm? Or am I doing something work here?If you are getting results from the $search query, it works, $search won’t run a collection scan like a regular find() query would if an index is not present.I want to know that when we run the search query on zip code collection the load on the zip code goes up to 100%, Why?It’s hard to answer this here, if you need further help diagnosing the issue I recommend contacting supportFew thoughts on this issue without seeing further information:", "username": "Oren_Ovadia" }, { "code": "", "text": "First, thank you for replying @Oren_OvadiaWhat do you mean by 100% load? CPU? Also I am not sure how you are tracking utilization per collection, perhaps you mean CPU on a particular node in your cluster? Primary node?Yes, 100% load is referring to CPU.Now I just want to know how can I improve my search index for this scenario?", "username": "varun_garg" }, { "code": "", "text": "Is your production workload similar to your script that runs the query 3000 times?\nIf not then why worry about it?\nIf it is we have learnt that $search scales pretty well when you add cores, if you are concerned about saturating your CPU you can always upgrade your Atlas cluster (or use a different query).If you are showing the suggestions to human users I suggest running the query and looking whether the first 10 documents or so are good matches and ordered like you want. If you use autocomplete it might be easier for you to control the order using other search knobs, fuzzy-ness, boosting exact matches to the top over partial matches and so on.", "username": "Oren_Ovadia" }, { "code": "", "text": "the blog is not accessible. Please provide the permissionHi @varun_garg,The correct blog post link is:Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Autocomplete Search
2022-06-23T08:17:26.423Z
Atlas Autocomplete Search
2,522
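Tying into the advice above about only judging the first handful of suggestions: an autocomplete request normally caps what it returns, which also trims the per-request work. This sketch reuses the thread's index and field names; the collection name zipcodes is assumed.

```js
db.zipcodes.aggregate([
  { $search: { index: "zipcode_search", autocomplete: { path: "searchKey", query: "9021" } } },
  { $limit: 10 },                        // only the suggestions actually shown to the user
  { $project: { _id: 0, searchKey: 1 } } // trim the payload to the display field
])
```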
null
[ "replication", "python", "transactions", "motor-driver" ]
[ { "code": "_socket_from_server_socket_from_serverPRIMARY", "text": "Hello,We are currently using pymongo 3.12.0, but we want to upgrade to 4.1.1.In _socket_from_server in the newer pymongo version, the read_preference is forced to be PRIMARY_PREFERRED when connecting to a replica set of SINGLE topology. The comment says that this is according to “the spec”, that PRIMARY_PREFERRED should be used when connecting directly to a replSet member.Transactions require the read_preference to be set to PRIMARY, but this _socket_from_server method completely overrides any preference we pass.What this means:\nFor our single node sites, transactions will fail because of the read preference not being PRIMARY .\nFor our multi-node sites, everything works swimmingly.We would like to be able to use transactions on our single-node replica sets, but it seems like this is not possible by design.For context, we use multi-node replica sets on our production and staging environments, but we use single-node replica sets in order to have numerous cheap development/testing environments.\nFor these environments, we do not care about data integrity or persistence, but we do need them to be able to mimic the feature set available to our production environments.It would be nice to get some insight on how to resolve this.", "username": "Sanchit_Uttam" }, { "code": ">>> client = MongoClient()\n>>> client.topology_description\n<TopologyDescription id: 62b206b9e17622ce1b043822, topology_type: ReplicaSetWithPrimary, servers: [<ServerDescription ('localhost', 27017) server_type: RSPrimary, rtt: 0.002338583999999977>]>\n>>> with client.start_session() as s, s.start_transaction():client.t.t.find_one({}, session=s)\n... \n{'_id': ObjectId('62b2053d003273b84afb7006')}\n>>> client = MongoClient(directConnection=True)\n>>> client.topology_description\n<TopologyDescription id: 62b20544003273b84afb7007, topology_type: Single, servers: [<ServerDescription ('localhost', 27017) server_type: RSPrimary, rtt: 0.000564334999992866>]>\n>>> with client.start_session() as s, s.start_transaction():client.t.t.find_one({}, session=s)\n... \n{'_id': ObjectId('62b2053d003273b84afb7006')}\n", "text": "Thanks for reporting this issue. 
I cannot reproduce this error using PyMongo 4.1.1 (or any other version):Same with a client connected directly to the primary:Could you provide the code that reproduces the error including the full trackback?", "username": "Shane" }, { "code": "async with await AsyncDatabase.instance()._client.start_session() as session:\n # PRIMARY_PREFERRED doesn't seem to be supported for transactions, so use PRIMARY instead\n # The type hint from pymongo doesn't have the enum values inherit from ReadPreference, so we must cast it here.\n return await session.with_transaction(\n execute_transaction, read_preference=cast(ReadPreference, ReadPreference.PRIMARY)\n )\nbackend/[REDACTED]/services/database_test.py:440: in test_all_or_nothing\n await run_in_transaction(\"test_all_or_nothing\", tx)\nbackend/[REDACTED]/services/database.py:543: in run_in_transaction\n return await session.with_transaction(\n execute_transaction, read_preference=cast(ReadPreference, ReadPreference.PRIMARY)\n )\n..[REDACTED]\nbackend/[REDACTED]/persistence/base.py:532: in _count\n return await self._get_mongo_collection().count_documents(\n../.pyenv/versions/3.10.4/lib/python3.10/concurrent/futures/thread.py:58: in run\n result = self.fn(*self.args, **self.kwargs)\n../.virtualenvs/[REDACTED]/lib/python3.10/site-packages/pymongo/collection.py:1811: in count_documents\n return self._retryable_non_cursor_read(_cmd, session)\n../.virtualenvs/[REDACTED]/lib/python3.10/site-packages/pymongo/collection.py:1817: in _retryable_non_cursor_read\n return client._retryable_read(func, self._read_preference_for(s), s)\n../.virtualenvs/[REDACTED]/lib/python3.10/site-packages/pymongo/mongo_client.py:1371: in _retryable_read\n return func(session, server, sock_info, read_pref)\n../.virtualenvs/[REDACTED]/lib/python3.10/site-packages/pymongo/collection.py:1806: in _cmd\n result = self._aggregate_one_result(sock_info, read_preference, cmd, collation, session)\n../.virtualenvs/[REDACTED]/lib/python3.10/site-packages/pymongo/collection.py:1663: in _aggregate_one_result\n result = self._command(\n../.virtualenvs/[REDACTED]/lib/python3.10/site-packages/pymongo/collection.py:272: in _command\n return sock_info.command(\n../.virtualenvs/[REDACTED]/lib/python3.10/site-packages/pymongo/pool.py:736: in command\n session._apply_to(spec, retryable_write, read_preference, self)\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n\nself = <pymongo.client_session.ClientSession object at 0x10f8fe200>\ncommand = [REDACTED]\nis_retryable = False, read_preference = PrimaryPreferred(tag_sets=None, max_staleness=-1, hedge=None)\nsock_info = SocketInfo(<socket.socket fd=23, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('127.0.0.1', 59648), raddr=('127.0.0.1', 27017)>) at 4522065392\n\n def _apply_to(self, command, is_retryable, read_preference, sock_info):\n self._check_ended()\n self._materialize()\n if self.options.snapshot:\n self._update_read_concern(command, sock_info)\n \n self._server_session.last_use = time.monotonic()\n command[\"lsid\"] = self._server_session.session_id\n \n if is_retryable:\n command[\"txnNumber\"] = self._server_session.transaction_id\n return\n \n if self.in_transaction:\n if read_preference != ReadPreference.PRIMARY:\n> raise InvalidOperation(\n \"read preference in a transaction must be primary, not: \"\n \"%r\" % (read_preference,)\n )\nE pymongo.errors.InvalidOperation: read preference in a transaction must be primary, 
not: PrimaryPreferred(tag_sets=None, max_staleness=-1, hedge=None)\n\nPRIMARYwith_transaction_socket_from_serverPRIMARY_PREFERRED(Pdb) AsyncDatabase.instance()._client.topology_description\n<TopologyDescription id: 62b44a11581b0cbf8945e9ae, topology_type: Single, servers: [<ServerDescription ('localhost', 27017) server_type: RSPrimary, rtt: 0.002651116764172912>]>\n", "text": "Here is some code that begins the transaction and a matching traceback.Code that executes transaction:Traceback:As you can see, The PRIMARY read preference is being given to with_transaction, but at some point (actually in _socket_from_server) it’s being turning into PRIMARY_PREFERRED, which causes the transaction to not execute.Here is the topology description:", "username": "Sanchit_Uttam" }, { "code": "", "text": "Thank you for the additional info! I’ve reproduced the bug and opened a ticket for it here: https://jira.mongodb.org/browse/PYTHON-3333", "username": "Shane" }, { "code": "directConnection=True>>> client = MongoClient(directConnection=False)\n>>> client.topology_description\n<TopologyDescription id: 62bcab02b4fdcaaf57288dfa, topology_type: ReplicaSetWithPrimary, servers: [<ServerDescription ('localhost', 27017) server_type: RSPrimary, rtt: 0.0007047529999795188>]>\n>>> with client.start_session() as s, s.start_transaction():client.t.t.count_documents({}, session=s)\n... \n0\n", "text": "For context, we use multi-node replica sets on our production and staging environments, but we use single-node replica sets in order to have numerous cheap development/testing environments.\nFor these environments, we do not care about data integrity or persistence, but we do need them to be able to mimic the feature set available to our production environments.Note that this bug only occurs when using directConnection=True which is not required for your use case. Instead your apps can connect without directConnection=True (or with directConnection=False) even with a single member replica set. For example:", "username": "Shane" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using pymongo to execute transactions on single-node replica sets
2022-06-20T11:41:34.585Z
Using pymongo to execute transactions on single-node replica sets
3,882
https://www.mongodb.com/…8_2_1024x499.png
[]
[ { "code": "", "text": "Hi,i am trying to run a mongodb community edition(db version v5.0.9) on an azure vm instance (Linux 18.04 LTS). The installation was successful (although i had to manually create the mongod.conf file).However while launching there is a problem with the socket file:\ngrafik1296×632 49.5 KB\nThe ownership of this socket file is correct to mongodb:\n0 srwx------ 1 mongodb mongodb 0 Jun 29 12:54 /tmp/mongodb-27017.sockI´ve also tried with deleting this file and restart the systemctl, didn´t help. Neither the restart of the instance itself.Can someone help me with this?\nThank you in advance", "username": "Delanduer" }, { "code": "", "text": "Could you provide the contents of the config file and how you started the mongodb process?", "username": "tapiocaPENGUIN" }, { "code": "processManagement:\n fork: true\nnet:\n bindIp: localhost\n port: 27017\nstorage:\n dbPath: /var/lib/mongo\nsystemLog:\n destination: file\n path: \"/var/log/mongodb/mongod.log\"\n logAppend: true\nstorage:\n journal:\n enabled: true\nsudo systemctl start mongod", "text": "Thank you for the replay. Here is the content of the config file, i took it from the official recommendation of minimal version And i start the process with sudo systemctl start mongodI just reinstalled mongodb v5 after i successfully installed and launched v4.4 in another instance. Now it works smoothly with an automatically generated config file under /etc. So i assume there was something wrong with the installation although it showed successful.", "username": "Delanduer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Failed to unlink socket file with mongodb ownership on azure instance
2022-06-29T14:23:29.287Z
Failed to unlink socket file with mongodb ownership on azure instance
1,590
null
[ "aggregation", "crud", "views" ]
[ { "code": " hrManager: Object\n _id: \"123456789\",\n name: \"John Doe\"\nhrManager: Array\n 0: Object\n _id: \"123456789\",\n name: \"John Doe\"\ndb.resources.updateOne(\n { name: \"J Mark\" },\n [{\n $set: {\n hrManagers: {\n $objectToArray {\n $map: {\n input: \"$hrManagers\",\n in: {\n _id: \"$$this._id\",\n name: \"$$this.name\",\n startDate: null,\n endDate: null\n }\n }\n }\n }\n }\n }]\n) \n db.resources.updateOne(\n { name: \"J Mark\" },\n {\n $project: {\n hrManagers: [\n {\n _id: '$hrManagers._id',\n name: '$hrManagers.name'\n }\n ]\n }\n }, {\n $merge: {\n into: 'resources',\n on: 'hrManagers',\n whenMatched: 'replace'\n }\n }\n )\n", "text": "I cannot find code to convert a nested object into an array.\nFrom this:To this:Using the $map does not work because it is not an array yet.\nAggregation does not work because I cannot get merge to replace the items.or", "username": "Austin_Summers" }, { "code": "db.collection.update({},\n[\n {\n \"$set\": {\n \"hrManager\": [\n {\n \"_id\": \"$hrManager._id\",\n \"name\": \"$hrManager.name\"\n }\n ]\n }\n }\n])\n", "text": "You can do it like this:Working Example", "username": "NeNaD" }, { "code": "db.resources.updateOne( { \"name\" : \"J Mark\" } , [ { \"$set\" : {\n \"hrManagers\" : [\n { \"_id\" : \"$hrManager._id\" ,\n \"name\" : \"$hrManager.name\" \n }\n ]\n} } ] )\n", "text": "Try this untested snippet:", "username": "steevej" }, { "code": "", "text": "Sorry @NeNaD, our answers interlaced.", "username": "steevej" }, { "code": "", "text": "Hahaha, same solution. Nice! ", "username": "NeNaD" }, { "code": "", "text": "Thank you both!As luck would have it, I was struggling with the first set of code but the second worked fine!\nThe whims of a terminal session.Thanks again!", "username": "Austin_Summers" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Convert object to array
2022-06-29T18:11:05.546Z
Convert object to array
5,425
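The update shown in this thread targets a single document by name. A rough, untested sketch of applying the same pipeline collection-wide is below; the $type filter is an assumption intended to skip documents whose hrManager field has already been converted to an array, so the migration stays idempotent — adjust field and collection names to your own schema.

```js
// Untested sketch: convert every remaining object-valued hrManager into a
// one-element array, skipping documents that already hold an array.
db.resources.updateMany(
  // Assumption: the field is either the old embedded object or the new array form.
  { hrManager: { $type: "object", $not: { $type: "array" } } },
  [
    {
      $set: {
        hrManager: [ { _id: "$hrManager._id", name: "$hrManager.name" } ]
      }
    }
  ]
)
```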
null
[ "aggregation" ]
[ { "code": "\n{\n \"test\": \"some\",\n \"test2\": {\n },\n \"test3\": {\n \"some-key\": {\n },\n \"some-other-key\": {\n \"more-nested-data\": true,\n \"more-nested-emtpy\": null\n }\n }\n}\n\n{\n \"test\": \"some\",\n \"test3\": {\n \"some-other-key\": {\n \"more-nested-data\": true\n }\n }\n}\n", "text": "Is there a way to remove literally all null or empty string values from an object? We have an aggregation which creates an object with empty fields and empty objects should the value be null.\nWhat we wish to do is remove all null properties and empty objects and recreate the object, in order to keep the data as small as possible.e.g. in the following object, only ‘test’ and ‘more-nested-data’ should be taken into account, the rest can be removedwhich should become:I tried a lot, but I think by using objectToArray that something could be done, but I have not found the solution yet. The required aggregation should need to recursively (or by defined levels) remove null properties and empty objects.", "username": "Brian_Marting" }, { "code": "{myKey:0}", "text": "Hi @Brian_Marting,If those documents should be retrieved from the database but only the myKey: null should be gone, we can $project off the field using {myKey:0}.I suspect you tried this but it is not enough? I am not sure whether there is any other way though.", "username": "santimir" }, { "code": "deptest.trainstation.gate.tunnel: $deptestTrainstationTunnel\ndeptest.trainstation.gate.passage: $deptestTrainstationPassage\ndeptest.trainstation.track: $deptestTrainstationTrack\ndeptest: {\n trainstation: {\n gate: {},\n track: 4d,\n ...\n }\n}\n", "text": "We don’t know what field is null, the thing is, the problem is introduced by the $project aggregation itself.\nwe have some project mappings e.g.This is all fine, until icao and iata both are null. This will result in the following object:As you can see here, there is one gate object that is just empty, and can be left out. It does a great job of leaving the properties out, but the parent path is still constructed though\nThe same thing happens in some cases where we convert some strings to dates, should these strings be null, the value will also be null and the property stays present", "username": "Brian_Marting" }, { "code": "db.empty.aggregate([\n {$replaceWith:{\n $arrayToObject:{\n $filter:{\n input:{$objectToArray:\"$$ROOT\"}, \n cond:{$not:{$in:[\"$$this.v\", [null, \"\", {}] ]}}\n }\n }\n }}\n])\ndb.empty.update(\n {},\n [{$replaceWith:{$arrayToObject:{$filter:{\n input:{$objectToArray:\"$$ROOT\"}, \n cond:{$not:{$in:[\"$$this.v\", [null, \"\", {}] ]}}\n }}}}],\n {multi:true}\n)\n", "text": "It’s easy to do what you describe at the top level of the document. It’s a little harder to do it within subdocuments, especially if you don’t know how many levels they may be embedded and/or if some of them might be array of subdocuments.Note that you can update the documents in place, or you can use the same pipeline in aggregation without modifying original documents.For top level fields:Same thing as an update:It’s a little tricker to do it for subdocuments, but possible using the same approach.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "A post was split to a new topic: How do I match a field which is not empty?", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Mongodb aggregation remove null values from object with nested properties
2022-02-02T14:23:45.510Z
Mongodb aggregation remove null values from object with nested properties
12,666
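The accepted pipeline in this thread strips null and empty values at the top level only. A rough, untested sketch of extending it one nesting level deeper is shown below: sub-documents are cleaned first, and the outer filter then drops any that end up empty. Deeper nesting and arrays of sub-documents would still need extra handling.

```js
// Untested sketch: clean nulls/empties one level deep, then drop any
// top-level value that is null, "", or an empty object after cleaning.
db.empty.aggregate([
  { $replaceWith: {
      $arrayToObject: {
        $filter: {
          input: {
            $map: {
              input: { $objectToArray: "$$ROOT" },
              as: "kv",
              in: {
                k: "$$kv.k",
                v: {
                  $cond: [
                    { $eq: [ { $type: "$$kv.v" }, "object" ] },
                    // clean one nested level the same way
                    { $arrayToObject: {
                        $filter: {
                          input: { $objectToArray: "$$kv.v" },
                          cond: { $not: { $in: [ "$$this.v", [ null, "", {} ] ] } }
                        }
                    } },
                    "$$kv.v"
                  ]
                }
              }
            }
          },
          cond: { $not: { $in: [ "$$this.v", [ null, "", {} ] ] } }
        }
      }
  } }
])
```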
null
[]
[ { "code": "roles: Array\n [\"mentor\",\n \"engineer\"]\nroles: Array\n [name: \"mentor\",\n startDate: \"01-01-1900\",\n endDate: \"12-31-2099\"],\n [name: \"engineer\",\n startDate: \"01-01-1900\",\n endDate: \"12-31-2099\"] \n", "text": "I am trying to convert a simple array into an array of arrays while retaining the existing values:into this:Everything I have tried wipes out my existing values.Thanks", "username": "Austin_Summers" }, { "code": "startDateendDate", "text": "Hello @Austin_Summers,Your question is not clear to me, what are you trying to do:It will help us if you post an example document, and whatever query that you tried.", "username": "turivishal" }, { "code": "db.resources.updateOne(\n { name: \"Jon Doe\" },\n [{\n $set: {\n roles: {\n $map: {\n input: \"$roles\",\n in: {\n name: \"$$this.name\",\n startDate: null,\n endDate: null\n }\n }\n }\n }\n }]\n)\n", "text": "Hello @turivishal,I am looking to reformat the existing data structure without losing the existing data.\nThis will be part of a larger set of updates to change and add other fields.\nThe fields will eventually be updated by a tool in current development.I tried a $map that added the date fields, but wiped out the existing data.", "username": "Austin_Summers" }, { "code": "\"$$this\"\"$$this.name\"roles", "text": "name: “$$this.name”,here you need to just use \"$$this\" instead of \"$$this.name\" because roles existing value has an array of strings.", "username": "turivishal" }, { "code": "", "text": "Thank you!\nI just tried that myself. I am still very fuzzy on the use of $$this within aggregations.Thanks again.\nAustin", "username": "Austin_Summers" }, { "code": "$map", "text": "Look at the documentation of $map aggregation array operator,It is explained everything perfectly with the example.", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Convert simple array into nested array
2022-06-29T14:29:00.611Z
Convert simple array into nested array
1,695
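The fix in this thread ("use $$this instead of $$this.name") is only spelled out in prose, so here is a consolidated sketch of what the working update would look like, using the names and date defaults from the original question:

```js
// Sketch of the final form: each plain string in roles becomes a
// sub-document, keeping the original string value as the name.
db.resources.updateOne(
  { name: "Jon Doe" },
  [
    {
      $set: {
        roles: {
          $map: {
            input: "$roles",
            in: { name: "$$this", startDate: null, endDate: null }
          }
        }
      }
    }
  ]
)
```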
https://www.mongodb.com/…_2_1024x574.jpeg
[ "field-encryption", "migration", "atlas-data-lake", "saintlouis-mug" ]
[ { "code": "Principal, Industry Solutions, MongoDBAdvisory Solutions Architect, MongoDBSaint Louis MongoDB User Group, Leader", "text": "\nSaintLouis1920×1078 96.1 KB\nSaint Louis, US MongoDB User Group is excited to launch and announce their first meetup in collaboration with Daughtery.Please join us for a deep dive into MongoDB 6.0 and recent announcements made at MongoDB World 2022. Topics will include Atlas Data Federation, Queryable Encryption, Relational Migrator, and more! We’ll wrap with an open Q&A and a giveaway of YETI Ramblers to a few lucky trivia winners!5:30pm-8:00pm\nDrinks and Sandwiches provided. Come with colleagues and questions!Event Type: Onsite\n Daughtery Business Solutions\nThree, Cityplace Dr, St. 11th floor\nSt. Louis, MO, 63141To RSVP - Go here: MongoDB User Group St. Louis–\n\nimage1120×348 16 KB\nDaugherty is a partner of MongoDB and specializes in helping companies think bigger, expect more and deliver results with 35+ years in the technology industry.As sponsors, Daugherty Business Solutions is providing us with exclusive space where you will get the opportunity to hear a recap of our recent announcements at MongoDB World 2022 and network with your local peers!\nimage800×800 40.4 KB\nPrincipal, Industry Solutions, MongoDB–\n\nimage512×512 48.6 KB\nAdvisory Solutions Architect, MongoDB\nimage512×512 94.1 KB\nSaint Louis MongoDB User Group, LeaderJoin the Saint Louis Group to stay updated with upcoming meetups and discussions.", "username": "Harshit" }, { "code": "", "text": "Excited to see everyone there.", "username": "Steve_Council" }, { "code": "", "text": "Yeahhh!! Back in person user groups!!Look forward to seeing everyone!", "username": "Colin_Tracy" } ]
St Louis MUG: MongoDB 6.0 and MongoDB World Announcements
2022-06-28T21:50:05.888Z
St Louis MUG: MongoDB 6.0 and MongoDB World Announcements
5,700
null
[ "aggregation", "dot-net" ]
[ { "code": "{\n\t\"_id\": \"99a7692c-9687-443d-8f18-f8a28fe9ffbb\",\n\t\"version\": \"Version45eea229-0f5b-4977-bfa2-72bf6a9cca42\",\n\t\"first\": \"First18457307-c556-4231-bfc7-a1cd0ba65325\",\n\t\"last\": \"Laste92586b9-53bd-4cff-84d3-bdc6b33ec4ca\",\n\t\"identity\": {\n\t\t\"_id\": \"[email protected]\",\n\t\t\"type\": \"Email\"\n\t},\n\t\"diProfileId\": \"DiProfileId763b1a03-aefc-44d5-aee1-18e41564df26\",\n\t\"idp\": \"Idp5322c864-c39d-4bc6-9ae0-6bf0f3c79ab5\",\n\t\"type\": \"User\",\n\t\"links\": [\n\t\t{\n\t\t\t\"linkType\": \"App\",\n\t\t\t\"linkPath\": \"0923689a-e009-4d67-8db5-5ba40f840bf3\",\n\t\t\t\"status\": \"Invited\",\n\t\t\t\"inviterId\": \"00000000-0000-0000-0000-000000000000\",\n\t\t\t\"createdOn\": \"2022-06-07T12:09:58.421+00:00\"\n\t\t},\n\t\t{\n\t\t\t\"linkType\": \"App\",\n\t\t\t\"linkPath\": \"0923689a-e009-4d67-8db5-5ba40f840bf4\",\n\t\t\t\"status\": \"Activated\",\n\t\t\t\"inviterId\": \"00000000-0000-0000-0000-000000000000\",\n\t\t\t\"createdOn\": \"2022-06-07T12:09:58.421+00:00\"\n\t\t}\n\t]\n}\n", "text": "I have above document and want to group by based on first link matched status and want to get count based on filter criteria across documents. Please help on this.", "username": "Shyam_Sohane" }, { "code": "db.principals.aggregate([\n {\n $match: {\n links: {\n $elemMatch: { linkPath: /^0923689a-e009-4d67-8db5-5ba40f840bf3/s }\n }\n }\n },\n {\n $project: {\n links: {\n $filter: {\n input: \"$links\",\n cond: {\n $regexMatch: {\n input: \"$$this.linkPath\",\n regex: /^0923689a-e009-4d67-8db5-5ba40f840bf3/s\n }\n }\n }\n }\n }\n },\n {\n $project: {\n links: {\n $arrayElemAt: [\"$links\", 0]\n }\n }\n },\n {\n $unwind: \"$links\"\n },\n {\n $facet: {\n count: [{ $count: \"count\" }],\n group: [{ $group: { _id: \"$links.status\", count: { $sum: 1 } } }],\n page: [{ $sort: { \"identity.id\": 1 } }, { $skip: 0 }, { $limit: 1 }]\n }\n }\n])\n", "text": "I could write a query which works but any optimization options?", "username": "Shyam_Sohane" }, { "code": "$match : { \"links.linkPath\" : \"0923689a-e009-4d67-8db5-5ba40f840bf3\" }\n", "text": "1 - you need an index on links.linkPath2 - you do not need $elemMatch in your $match3 - you do not need regular expression in your $matchThe following should be sufficient.4 - you do not need $regexMatch in the cond: of your $filter, simple $eq should work.5 - you could use $reduce rather than $filter and avoid the subsequent $project and $unwind", "username": "steevej" }, { "code": "0923689a-e009-4d67-8db5-5ba40f840bf3", "text": "0923689a-e009-4d67-8db5-5ba40f840bf3Thanks Steve,Thanks,\nShyam.", "username": "Shyam_Sohane" }, { "code": "", "text": "path is kind of hierarchy path like a/b/cYour sample document nor your description expressed that. Sorry to lead you in the wrong direction.#5 still applyYou use $reduce more or less like your $filter. You start with the value:null, the first $regex match is your new and final value. You ignore all others. So you end up with a value equivalent to links.0 or whatever matches the regex but as an object to you do not need to $project $arrayElemAt and you do not need to $unwind.One more thing, your page facet $sort on identity.id. But in your input document you have identity._id but your $project do not keep that fields. Try with $set stage rather than $project.", "username": "steevej" } ]
Please help me how to do group by based on a first matching array element status
2022-06-27T21:26:20.972Z
Please help me how to do group by based on a first matching array element status
1,359
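Point 5 in this thread ($reduce instead of $filter plus the extra $project/$unwind stages) is never shown. One untested way it could look for this document shape is sketched below, keeping only the first link whose linkPath matches; the linkPath value is copied from the question and is purely illustrative.

```js
// Untested sketch: collapse the links array to the first matching element
// with $reduce, avoiding the $project/$arrayElemAt/$unwind stages.
db.principals.aggregate([
  { $match: { "links.linkPath": /^0923689a-e009-4d67-8db5-5ba40f840bf3/ } },
  { $set: {
      link: {
        $reduce: {
          input: "$links",
          initialValue: null,
          in: {
            $cond: [
              { $and: [
                { $eq: [ "$$value", null ] },
                { $regexMatch: {
                    input: "$$this.linkPath",
                    regex: /^0923689a-e009-4d67-8db5-5ba40f840bf3/
                } }
              ] },
              "$$this",
              "$$value"
            ]
          }
        }
      }
  } },
  { $group: { _id: "$link.status", count: { $sum: 1 } } }
])
```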
null
[ "node-js", "replication", "mongoose-odm" ]
[ { "code": "systemLog: \n destination: file \n path: /opt/homebrew/var/log/mongodb/mongo.log \n logAppend: true \nstorage: \n dbPath: /opt/homebrew/var/mongodb \nnet: \n bindIp: 127.0.0.1 \nreplication: \n replSetName: \"rs0\" \nsecurity: \n authorization: \"enabled\" \n keyFile: /Users/username/mongodb.key \n", "text": "Detail about the issue.I have converted my MongoDB instance to a standalone ReplicaSet using the following config.Using the following Mongo URI to connect to my MongoDB instancemongodb://username:[email protected]:27017/dbName?authSource=admin&replicaSet=rs0using the URI I can connect to my MongoDB through Studio3T but when I use this URI in my Node app through Mongoose it never connects, nor gives any error.Following are the Screenshots for rs.status() output displaying details of member\nmember data from rs.statusAdditional details.\nNodeJs version: v14.19.1\nMongoose version: 6.3.2\nMongoDB version: 5.0.7", "username": "Shivam_Spraxa" }, { "code": "const MongoClient = require('mongodb').MongoClient;\n\nlet run = async function() {\n console.log('start')\n let opt = {poolSize:1, useNewUrlParser: true, useUnifiedTopology: true}\n let conn = await MongoClient.connect(\n 'mongodb://username:[email protected]:27017/test?authSource=admin&replicaSet=rs0',\n opt)\n console.log('db connected')\n console.log(await conn.db('test').collection('test').findOne())\n conn.close()\n console.log('db closed')\n}().catch(console.error)\ntesttest", "text": "Hi @Shivam_SpraxaIt’s been a while since you posted this question. Are you still having issues with connecting using Mongoose?If yes, what if you try to connect using a simple node script, e.g:The snippet above should try to connect to the database test, then print a single document from the collection test. If this snippet works, then we need to look deeper into why Mongoose have issues.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "A post was split to a new topic: Having a problem when trying to connect to replica set with compass and with python code", "username": "Stennie_X" } ]
Mongoose unable to connect with Mongo Standalone replicaSet
2022-05-06T07:18:38.489Z
Mongoose unable to connect with Mongo Standalone replicaSet
3,352
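The debugging snippet in this thread uses the raw Node.js driver; a comparable minimal check with Mongoose, which surfaces the connection error instead of hanging silently, might look like the sketch below. The short serverSelectionTimeoutMS is an added assumption to make failures report quickly, and the URI is the one from the question.

```js
// Minimal Mongoose connectivity check — errors are printed rather than
// left pending, which helps when the app "never connects, nor gives any error".
const mongoose = require('mongoose');

const uri =
  'mongodb://username:[email protected]:27017/dbName?authSource=admin&replicaSet=rs0';

mongoose
  .connect(uri, { serverSelectionTimeoutMS: 5000 })
  .then(() => {
    console.log('Mongoose connected');
    return mongoose.disconnect();
  })
  .catch((err) => {
    console.error('Mongoose connection failed:', err);
  });
```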
null
[ "java" ]
[ { "code": "", "text": "I’m trying to do an API call that has $gt as a query parameter. This works fine either from the API Console in Kinvey or from Postman, however when I try doing this using a Java HttpURLConnection then I’m getting a 400 response: Your browser sent an invalid request.Here’s what the query looks like:\nappdata/myapp/Summary?query={ “code”:“043”, “time”: { “$gt”: 0 } }In Java I’ve also tried encoding the dollar sign:\nappdata/myapp/Summary?query={ “code”:“043”, “time”: { “%24gt”: 0 } }If I remove that gt query parameter then it works fine.Any ideas?", "username": "Mike_Wallace" }, { "code": "java.net.URLEncoder", "text": "Java HttpURLConnectionInstead of translating your URL by hand, you might try using java.net.URLEncoder and see what that gets you.", "username": "Jack_Woehr" }, { "code": "", "text": "I did try that already and didn’t have any luck. I tried using the encoder for the whole string as well as just the query part but I got the exact same problem.", "username": "Mike_Wallace" }, { "code": "", "text": "It was the spaces. For some reason the HttpURLConnection had trouble with them. Once I removed every space in the string it started working.", "username": "Mike_Wallace" }, { "code": "", "text": "It is funny that white spaces are an issue with $gt but not whenremove that gt query parameter then it works fineIt is also funny thatjava.net.URLEncoderdoes not encode the white space correctly.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issues with query parameter and $gt
2022-06-28T21:12:54.238Z
Issues with query parameter and $gt
1,772
null
[ "queries", "indexes" ]
[ { "code": "", "text": "Option A: 1 billion documents in one collection (with indexing)\nOption B: 1 billion documents across multiple collections (the API will need data across multiple collections)\nHappy to hear your feedback", "username": "Bhuvanesh_J" }, { "code": "", "text": "Hi @Bhuvanesh_J,There is not enough information to answer such a question.How many documents are retrieved in each query?What is the document size on average? Do the indexes cover the queried fields efficiently?Do the indexes fit into the memory of the queried instance in their uncompressed form?In general, MongoDB design prefers data that is queried together to be stored in the same collection or document if it is logically justified; this avoids joining collections and multiple data accesses.But there is no one solution that fits all, and it really depends on the use case and access pattern.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Which is more efficient way of retrieving data
2022-06-29T09:59:38.798Z
Which is more efficient way of retrieving data
1,675
null
[ "aggregation", "queries", "python" ]
[ { "code": "", "text": "@michael_hoeller specifically asked me for some training material at MongoDB World. Well Michael here is the workshop material I promised.The deck is here: Introduction to MongoDB.There is a companion github repo:https://github.com/jdrumgoole/intro-to-mongodb-with-pythonIf you want to teach MongoDB this represents about 1 day workshop.", "username": "Joe_Drumgoole" }, { "code": "]", "text": "@Joe_Drumgoole , there’s a typo in the github URL (extra ] at the end, darn you, MarkDown )", "username": "Jack_Woehr" }, { "code": "", "text": "Thanks @Jack_Woehr . Fixed that now ", "username": "Joe_Drumgoole" }, { "code": "", "text": "Hi Joe,I did the same kind of tutorial for C#Learn mongodb quickly with examples. Contribute to iso8859/learn-mongodb-by-example development by creating an account on GitHub.Remi", "username": "Remi_Thomas" } ]
Here is a Workshop Deck I use to Teach MongoDB
2022-06-27T15:20:14.684Z
Here is a Workshop Deck I use to Teach MongoDB
1,665
null
[ "aggregation", "serverless" ]
[ { "code": "[\n // This is just to filter the result set down\n { $match: { timestamp: { $gte: yesterday } } }, \n // Next I sort by timestamp\n { $sort: { timestamp: -1 } },\n // Then group by the username grabbing the whole document\n {\n $group: {\n _id: \"$username\",\n doc: { $first: \"$$ROOT\" },\n },\n },\n // another sort to bring the most recent to the front\n { $sort: { \"doc.timestamp\": -1 } },\n // Grab the most recent 20\n { $limit: 20 },\n // Finally replace the root\n { $replaceRoot: { newRoot: \"$doc\" } },\n ]\n", "text": "Hi,I’m looking at the best way to optimize an aggregate. My main aim is to bring the costs down for mongo serverless, which I think main approach to this would be to reduce the number of RPUs this aggregate costs.Essentially what I have is a collection of Activities that users have done. I would like to see the most recent 20 activities, but only once per user. For Example, If a user named “John” made two actions recently, I’d only see the most recent. The second document would then be a different user. Currently my collection has 1.5 million documentsHere’s an example of my current aggregation:I am considering that it might be better to have a second collection that only has the most recent activity per user. I can then upsert into this table on every new activity. From that point, I could probably do a simple query sorting by timestamp: -1 and limiting to 20.However, its quite hard to work out whats more performant (and whats more costly) as now we are switching between RPU and WPU.It also seems that the explain doesn’t seem to be able to show WPU or RPU, so I am really flying blind when it comes to calculating this stuff.", "username": "Piercy" }, { "code": "", "text": "I ended up creating a new collection and upserting into this collection based on the username. This way I can index both username and timestamp and its a simply sorted query, limited to 20.Far simpler than the aggregation, far less intensive, and as my WPU rate is so low (in comparison). I am sure this will be cheaper and more performant than the aggregation.It would be nice to still get an answer on the best way to handle that aggregation though. It seems to be an example that mongo cannot handle in a very performant way. So i would be interested to see if theres a solution beyond having another collection", "username": "Piercy" }, { "code": "", "text": "Hi @Piercy and welcome to the community!!I ended up creating a new collection and upserting into this collection based on the username.I believe the approach that you described is a materialised view, and I think it’s a valid approach, especially if you’re willing to trade disk space vs. time (e.g. you end up with a relatively simpler query and workflow, with disk space as a price)It would be nice to still get an answer on the best way to handle that aggregation thoughHaving said that, I’m interested to see if the aggregation you posted earlier can be improved. Could you post some additional information:Regarding Serverless pricing, please see Serverless Instance Costs. Notably mentioned in the page, Serverless instances may be more cost effective for applications with low or intermittent traffic.\nThus if your workload is more regular and involve a lot of data, a regular server instance may be more effective in the long run. 
However this is a generalisation, so you might want to double check with your actual expected workload.Please let us know if you have further questions.Thanks\nAasawari", "username": "Aasawari" }, { "code": "twitchtimestamptwitchpiercyttv{\n \"_id\": {\n \"$oid\": \"62b97709a564afe29b87d3a0\"\n },\n \"twitch\": \"piercyttv\",\n \"__v\": 0,\n \"accuracy\": \"\",\n \"difficulty\": 4,\n \"endSongTime\": \"1.79\",\n \"endType\": 2,\n \"fullCombo\": false,\n \"is360\": false,\n \"is90\": false,\n \"noFail\": false,\n \"oneSaber\": false,\n \"practiceMode\": false,\n \"song\": {\n \"_id\": {\n \"$oid\": \"5ecc03ffa468ce001df66aac\"\n },\n \"easy\": true,\n \"normal\": true,\n \"hard\": true,\n \"expert\": true,\n \"expertPlus\": true,\n \"wipMap\": false,\n \"songName\": \"Delightful Introduction\",\n \"songSubName\": \"\",\n \"songAuthorName\": \"Deathpact\",\n \"levelAuthorName\": \"xScaramouche\",\n \"hash\": \"567859C06D0D010987875E2579E08899331F73CC\",\n \"coverUrl\": \"/cdn/847b/567859c06d0d010987875e2579e08899331f73cc.jpg\",\n \"key\": \"847b\",\n \"__v\": 0\n },\n \"timestamp\": {\n \"$date\": \"2022-06-27T09:23:21.000Z\"\n }\n}\ntwitchfullCombo: { $in: [true,false] }, ...return await Activity.find({\n fullCombo: { $in: [true, false] }, // this was added so that 7 can replace 5.\n twitch: userSearch.toLowerCase(),\n })\n .sort([[\"timestamp\", \"descending\"]])\n .skip(skip)\n .limit(count)\n .exec();\n", "text": "Thanks, I am actually considering a regular instance might be more cost effective. For now I am happy optimizing and seeing where I can get the cost down to. My first day cost my 15 USD but I am now down to about 0.6-0.8 USD a day, so optimizing and restructuring the data has a had a large impact. Which regardless of serverless or not, will have a large improvement on performance for my users.While long term I want to get it as cheap as possible (owing to the service having no monetization), for now I am happy that I’ve got it down to a manageable amount.Below is a sample document from the collection, the collection contains 1.5 million documents. The fields we are interested in are twitch and timestamp. Essentially what I am trying to do is retrieve the most recent 20 documents, but only once for each unique twitch. So if twitch user piercyttv has 10 documents, in the most recent list, i’d only retrieve the most recent one, and then return 9 other twitch users most recent documents. That way I get the 20 most recent activities, but only one per user.Expected Output is an array that features the 20 most recent documents (of the above), but only one per twitch user.Indexes on the collection are, usage stats are all from 23rd Jun:A quick discussion on the above indexes but there are some I will likely remove.Firstly, 2 & 3, the two song hash ones. I don’t think I need two, I think this was actually a typo in my code. So ill remove one.6 7 and 8, Originally I had just number 6. However, for this aggregation I wondered if doing 7 would help it as I realised the sort was before the grouping.Regarding 8, this is a tip I learned from a mongo cloud engineer through my work. We were discussing my work project (different from the above project) and he suggested that … if you have a field with a low amount of values, you can create a compound index an use $in with your queries. So, my aim with 8 is to replace 6, and in my queries do: fullCombo: { $in: [true,false] }, .... While I have made this change, looking at my index usage it doesnt seem to have taken effect. 
I wonder if I need to remove index 6 for it to see 8 as the better option now.Here’s an example query that indexes 6 and 8 were designed for. So although the usage stats show it’s using 6, I think 8 should be the better option. While this isn’t related to the original aggregate, I figured it was needed to explain my indexes.Example Query for indexes:I am not sure how to get the average document size. However, if I divide the storage size by the number of documents I get 223 bytes per document.", "username": "Piercy" }, { "code": "return await Activity.find({\n fullCombo: { $in: [true, false] }, // this was added so that 8 can replace 6.\n twitch: userSearch.toLowerCase(),\n })\n .sort([[\"timestamp\", \"descending\"]])\n .skip(skip)\n .limit(count)\n .exec();\n", "text": "I can’t seem to edit my post but I noticed a mistake in the code example. The comment on the line with fullCombo should be for indexes 8 and 6, as below.", "username": "Piercy" } ]
Optimizing an aggregate that gets the most recent 20 documents, but only one per username
2022-06-23T21:32:15.694Z
Optimizing an aggregate that gets the most recent 20 documents, but only one per username
3,139
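The "second collection" approach the poster settled on is only described in prose. A hedged sketch of what the write and read sides could look like is below; the latest_activity collection name and the exact field list are assumptions (field names are taken from the sample document), and activity stands for the incoming document being recorded.

```js
// Write side: keep exactly one "most recent activity" document per user.
db.latest_activity.updateOne(
  { twitch: activity.twitch },
  { $set: {
      timestamp: activity.timestamp,
      song: activity.song,
      difficulty: activity.difficulty,
      endType: activity.endType
  } },
  { upsert: true }
)

// Read side: with an index on { timestamp: -1 } this is a simple
// sorted, limited scan instead of a $group over 1.5M documents.
db.latest_activity.createIndex({ timestamp: -1 })
db.latest_activity.find().sort({ timestamp: -1 }).limit(20)
```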
https://www.mongodb.com/…5_2_1023x378.png
[ "atlas-triggers" ]
[ { "code": "61d2ac6f624571ca88cfd71aaws.partner/mongodb.com/stitch.trigger/61d2ac6f624571ca88cfd71a", "text": "\nimage1470×543 62.6 KB\nI have created 3 triggers for 3 collections each. I have associated AWS Eventbridge to the triggers. The image shows how it is seen from the AWS console. Is there a way to know to which collection each of these event bridges belong to.\nFor example Does 61d2ac6f624571ca88cfd71a in aws.partner/mongodb.com/stitch.trigger/61d2ac6f624571ca88cfd71a mean anything which can be used to identify the collection name.", "username": "schach_schach" }, { "code": "https://realm.mongodb.com/groups/<project_id>/apps/<realm_app_id>/triggers/61d2ac6f624571ca88cfd71a", "text": "Hi Schach,I’m not sure if this information is available on the AWS console but you can use the trigger id provided to look this up in your Realm app and check your trigger configuration.The id is shown in the URL when you navigate to a trigger, example:https://realm.mongodb.com/groups/<project_id>/apps/<realm_app_id>/triggers/61d2ac6f624571ca88cfd71aRegards", "username": "Mansoor_Omar" }, { "code": "", "text": "Hi @Mansoor_OmarThank you for replying.\nUnfortunately I don’t have access to MongoDB Atlas and have to get the trigger created by someone who has access.\nOne solution I found was to enable Schema Discovery on the Event Bus from AWS side then it will have the schema of the event and will have details such as the collection name. But even for that to work the event will have to fired at least once", "username": "schach_schach" } ]
How to identify MongoDB collection name from MongoDB AWS EventBridge Trigger
2022-04-06T07:47:44.846Z
How to identify MongoDB collection name from MongoDB AWS EventBridge Trigger
3,198
null
[ "queries", "golang" ]
[ { "code": "{\n _id: ObjectId('61fbeb4e41691f4d9f012434'),\n time_stamp: ISODate('2022-02-03T14:48:11.000Z'),\n trading_pair: '1INCH-BTC',\n price: 0.0000439,\n status: 'online',\n trading_disabled: false\n}\n{\n _id: ObjectId('61fbeb4e41691f4d9f012432'),\n time_stamp: ISODate('2022-02-03T14:45:11.000Z'),\n trading_pair: '1INCH-BTC',\n price: 0.0000437,\n status: 'online',\n trading_disabled: false\n},\n{\n _id: ObjectId('61fbeb4e41691f4d9f012433'),\n time_stamp: ISODate('2022-02-03T14:47:11.000Z'),\n trading_pair: '1INCH-BTC',\n price: 0.0000438,\n status: 'online',\n trading_disabled: false\n},\n{\n _id: ObjectId('61fbeb4e41691f4d9f012434'),\n time_stamp: ISODate('2022-02-03T14:48:11.000Z'),\n trading_pair: '1INCH-BTC',\n price: 0.0000439,\n status: 'online',\n trading_disabled: false\n},\ntrading_pairInsertManytrading_pairpercentage changetrading_pair5 minutes24 hours7 days", "text": "Here is a document in my collection in mongodb 4.4.12(could not use latest 5.0.6 because of the Intel AVX support required by mongodb 5.0.0 and aboveI really want to use the native mongodb time series but for now i can’t\nNow i got the context out of the wayNow back to the document, here it isand these documents get inserted every 1 minute, sometimes a minute is skipped and 2 minute range before next insert, even though i run the cronjob every 1 minute, but many factors that does not allow every minute alwaysPlease note i insert many documents all at once using same exact ISODate, so i can have like 100 documents all using same ISODate with different and unique values for trading_pair…which is why i want to query by date.Eventually i will move over to using the native timeseries MongoDB's New Time Series Collections | MongoDBWhat i want to do is be able toI want to be able to return only the most recent single document when i search. So it will return all documents from the most recent InsertMany …so i can always get the most recent price data for all trading_pairHow do i perform percentage change to calculate percentage price difference for a trading_pair between different ISODate for like last 5 minutes, 24 hours, 7 days? This part is the one i really want to see how to do. I am new to MongoDB , in the sense i haven’t used in PRODUCTION app before but now it is that time to fo it, and i want to do it right(need your help here)", "username": "Bradley_Benjamin" }, { "code": "", "text": "for context i am using the mongodb golang driver GitHub - mongodb/mongo-go-driver: The Official Golang driver for MongoDB incase that changes anythingIf you need clarifications on anything i can add detailsThanks in advance", "username": "Bradley_Benjamin" }, { "code": "db.collection.aggregate([\n { \n $sort: { trading_pair: 1, time_stamp: -1 } \n },\n { \n $group: { \n _id: \"$trading_pair\",\n latest_date: { $first: \"$time_stamp\" },\n recent_price: { $first: \"$price\" }\n }\n }\n])\n", "text": "Hello @Bradley_Benjamin, welcome to the MongoDB Community forum!This can be used for your first query - get all the trading pairs with the price at the latest date.", "username": "Prasad_Saya" }, { "code": "time_stampInsertManytime_stampmongo-go-driver", "text": "@Prasad_SayaI think i want to sort by time_stamp alone, trading_pairs will not be an index, like i mentioned…there will be multiple rows of documents inserted using InsertMany with same ISODate, so am thinking time_stamp will eb the only secondary indexAm i right? in that case how will the query look like? 
By the way i will have to convert these queries to the mongo-go-driver version", "username": "Bradley_Benjamin" }, { "code": "{\n \"_id\" : \"1INCH-BTC\",\n \"latest_date\" : ISODate(\"2022-02-03T14:49:11Z\"),\n \"recent_price\" : 439\n}\n{\n \"_id\" : \"ANOTHER\",\n \"latest_date\" : ISODate(\"2022-02-03T14:49:11Z\"),\n \"recent_price\" : 999\n}\n", "text": "@Bradley_Benjamin, I can see the output might look like this from the above query. Is that what you are looking for?", "username": "Prasad_Saya" }, { "code": "{\n _id: ObjectId('61fbeb4e41691f4d9f012432'),\n time_stamp: ISODate('2022-02-03T14:45:11.000Z'),\n trading_pair: 'ONE-ONE',\n price: 0.0000437,\n status: 'online',\n trading_disabled: false\n},\n{\n _id: ObjectId('61fbeb4e41691f4d9f012433'),\n time_stamp: ISODate('2022-02-03T14:45:11.000Z'),\n trading_pair: 'TWO-TWO',\n price: 0.0000438,\n status: 'online',\n trading_disabled: false\n},\n{\n _id: ObjectId('61fbeb4e41691f4d9f012434'),\n time_stamp: ISODate('2022-02-03T14:45:11.000Z'),\n trading_pair: 'THREE-THREE',\n price: 0.0000439,\n status: 'online',\n trading_disabled: false\n},\nInsertMany{\n time_stamp: ISODate('2022-02-03T14:45:11.000Z'),\n trading_pair: 'ONE-ONE',\n price: 0.0000437,\n status: 'online',\n trading_disabled: false\n},\n{\n time_stamp: ISODate('2022-02-03T14:45:11.000Z'),\n trading_pair: 'TWO-TWO',\n price: 0.0000438,\n status: 'online',\n trading_disabled: false\n},\n{\n time_stamp: ISODate('2022-02-03T14:45:11.000Z'),\n trading_pair: 'THREE-THREE',\n price: 0.0000439,\n status: 'online',\n trading_disabled: false\n},\n", "text": "But what documents with same date looks like, and i want to return every field except for ObjectIdso i want something like this from query, to return latest InsertMany documents", "username": "Bradley_Benjamin" }, { "code": "", "text": "Anyway to edit my post after i post? New to the forum.\nI do not see an edit button", "username": "Bradley_Benjamin" }, { "code": "_iddb.test.aggregate([\n { \n $sort: { trading_pair: 1, time_stamp: -1 } \n },\n { \n $group: { \n _id: \"$trading_pair\",\n latest: { $first: \"$$ROOT\" },\n }\n },\n {\n $replaceWith: \"$latest\"\n },\n {\n $project: { _id: 0 }\n }\n])\n", "text": "@Bradley_Benjamin, try this query to get all the fields, except the _id.", "username": "Prasad_Saya" }, { "code": "", "text": "Any reason why you still using trading_pair as sort? order does not matter, as long as i get all documents based on the latest ISODate i am fineI need the way i want because the result will be consumed by another API", "username": "Bradley_Benjamin" }, { "code": "allcollection.Find(context.Background(), filter, options.Find().SetProjection(option))\n", "text": "Here is what am using to return all documents in the collectionusing GitHub - mongodb/mongo-go-driver: The Official Golang driver for MongoDBSo will have to modify to use your query format", "username": "Bradley_Benjamin" }, { "code": "", "text": "Any reason why you still using trading_pair as sort?That is because a trading pair may have more than one document (with different dates) and you want the latest one (one document only). The group stage gives you one document per trading pair with the latest date.", "username": "Prasad_Saya" }, { "code": "InsertManytime_stamp", "text": "There is going to be only 1 trading_pair per InsertMany based on time_stamp, there wont be more than 1. 
The timestamp is used to group all documents separated by timesstampwhich is why all i need is to return all documents for latest time_stamp", "username": "Bradley_Benjamin" }, { "code": "", "text": "Just clarify this one thing for me. Can the collection have more than one document with the same trading pair and different timestamps?", "username": "Prasad_Saya" }, { "code": "time_stampInsertManytime_stamptime_stamp", "text": "yes, separated by unique time_stamp that groups all documents belonging to a time togetherto clarify, there is a cronjob every minute that does bulk InsertMany of like 100 documents with same exact ISODDate time_stampThink of it like getting the current price of a crypto based on the time_stamp, so i can return all documents of latest time_stamp so i can extract price data from it", "username": "Bradley_Benjamin" }, { "code": "sortStage := bson.D{{\"$sort\", bson.D{{\"trading_pair\", 1}, {\"time_stamp\", -1}}}}\ngroupStage := bson.D{{\"$group\", bson.D{{\"_id\",\"$trading_pair\"}, {\"latest\", bson.D{{\"$first\", \"$$ROOT\"}}}}}}\nreplaceStage := bson.D{{\"$replaceWith\", \"$latest\"}}\nprojectStage := bson.D{{\"$project\", bson.D{{\"_id\", 0}}}} \n\ncursor, err := collection.Aggregate(ctx, mongo.Pipeline{sortStage, groupStage, replaceStage, projectStage})\n\nvar results []bson.M\n\nif err != nil {\n fmt.Println(\"Failed to Aggregate: \", err)\n}\nif err = cursor.All(ctx, &results); err != nil {\n fmt.Println(\"cursor.All() error:\", err)\n}\n\nfmt.Println(results)\n", "text": "@Bradley_Benjamin, the golang version of the same aggregation query:", "username": "Prasad_Saya" }, { "code": "#2percentage changetrading_pair5 minutes24 hours7 days", "text": "ok will test this in a bit…thanks for the helpAny ideas of how to approach the #2 question?", "username": "Bradley_Benjamin" }, { "code": "#2", "text": "Any ideas of how to approach the #2 question?Lets take the scenario for the calculation of percentage change in the last 5 mins. I think, another idea is to take an example scenario where the change is calculated between the latest time stamp and an hour before that. What is the calculation for the percentage change of the price between the values of these two time stamps?The first information needed is, what formula are you using for this. I Googled generally with this search string “percentage change based on time formula” and found some sites explaining the calculation. I’d like to know what is the formula you want to use (or you have on your mind)? 
That, can be applied to build a query.", "username": "Prasad_Saya" }, { "code": "time_stamp5 minutesPercent change = [(current latest value - value 5 minutes ago)/value 5 minutes ago] * 100\n112.5% up210% down24 hours7 days", "text": "sorry i was blocked from posting for another 20 hours, for posting too much for first day on forumthe query for getting latest documents by latest time_stamp worked great…thanks for thatregarding the percentage change calculation5 minutes percentage change:example:1\ncurrent latest value = 45\nvalue 5 minutes ago = 40Percent change = [(45 - 40)/40] * 100 = 12.5\nthat is 12.5% up2\ncurrent latest value = 45\nvalue 5 minutes ago = 50Percent change = [(45 - 50)/50] * 100 = -10\nthat is 10% downand so on for other periods like 24 hours, 7 days etc", "username": "Bradley_Benjamin" }, { "code": "{\n \"latest_time_stamp\" : ISODate(\"2022-02-03T14:45:00Z\"),\n \"latest_price\" : 999,\n \"trading_pair\" : \"2INCH-BTC\",\n \"prev_time_stamp\" : ISODate(\"2022-02-03T14:40:00Z\"),\n \"prev_price\" : 678,\n \"percent_change\" : 47.34513274336283\n}\ndb.collection.aggregate([\n { \n $sort: { trading_pair: 1, time_stamp: -1 } \n },\n { \n $group: { \n _id: \"$trading_pair\",\n docs: { $push: \"$$ROOT\" }, \n latest_time_stamp: { $first: \"$time_stamp\" }, \n latest_price: { $first: \"$price\" }\n }\n },\n { \n $addFields: {\n prev_doc: {\n $arrayElemAt: [\n { $filter: {\n input: \"$docs\", \n as: \"doc\",\n cond: { $eq: [ \"$$doc.time_stamp\", { $subtract: [ \"$latest_time_stamp\", 5 * 60 * 1000 ] } ] }\n }}, 0\n ]\n }\n }\n },\n {\n $project: {\n trading_pair: \"$_id\",\n _id: 0,\n latest_time_stamp: 1,\n latest_price: 1,\n prev_time_stamp: \"$prev_doc.time_stamp\",\n prev_price: \"$prev_doc.price\",\n percent_change: {\n $divide: [ { $multiply: [ { $subtract: [ \"$latest_price\", \"$prev_doc.price\" ] }, 100 ] }, \"$prev_doc.price\" ]\n }\n }\n },\n])\n", "text": "@Bradley_Benjamin, here is the aggregation query which can return a result like follows for a trading pair (assuming relevant data exists):", "username": "Prasad_Saya" }, { "code": "", "text": "can you help post the query with the mongo-go-driver? like the last one you helped with\nthanks", "username": "Bradley_Benjamin" } ]
How do i search and query based on time?
2022-02-04T09:37:02.151Z
How do i search and query based on time?
38,826
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hi, I hope you can help.\nI’m trying to use an aggregation query for $geoNear with multiple conditions in the query key.\nthe query is super slow (~800ms). But If I use the same query conditions without $geoNear but using $match with exact same condition it works well (~ 25ms). the query has mostly ( $in, $lte, $gte) operators. I don’t understand it works well without $geoNear.I would appreciate it if you guys could help.\nQuery: https://gitlab.com/-/snippets/2343327", "username": "4_Bits_Solutions" }, { "code": "executionStats", "text": "Hello @4_Bits_Solutions,Welcome to the community!! Could you please help me with below things to look into this issue?Regards,\nTarun Gaur", "username": "Tarun_Gaur" }, { "code": "", "text": "Thanks, Tarun for the response. I really appreciate that. Here are the things that might help you look into the issue more closely.Thanks,\nAmit", "username": "4_Bits_Solutions" }, { "code": "\"indexName\" : \"minEnergySpectrum_1\"\n\"indexName\" : \"maxEnergySpectrum_1\"\n\"indexName\" : \"totalAvgRating_1\"\n.\n.\nso on\n\"indexName\" : \"location.coordinates_2dsphere\"\nexecutionStats \t\t\t\"executionSuccess\" : true,\n \t\t\t\"nReturned\" : 223,\n \t\t\t\"executionTimeMillis\" : 90,\n \t\t\t\"totalKeysExamined\" : 4517,\n \t\t\t\"totalDocsExamined\" : 4187\n \"executionSuccess\" : true,\n \t\t\t\"nReturned\" : 223,\n \t\t\t\"executionTimeMillis\" : 809,\n \t\t\t\"totalKeysExamined\" : 89476,\n \t\t\t\"totalDocsExamined\" : 177470\nexplain.executionStats.nReturnednReturnedncursor.explain()Here we can see that number of returned documents are same.\nexplain.executionStats.executionTimeMillisexecutionTimeMillismilliscursor.explain()Here we can see that time taken to execute both queries differ.\nexplain.executionStats.totalKeysExaminedtotalKeysExaminednscannedcursor.explain()Here we can see the difference in index entries scanned of both queries.\nexplain.executionStats.totalDocsExaminedCOLLSCANFETCHHere we can see documents scanned for $geoNear Query are way higher than $match query.\n$orfilterFETCH", "text": "Hi Amit,In the explain query output shared, we can see that in the match query specific indexes are being used such aswhere as in geoNear query only below index is being usedThis changes how the query planner works and basically we cannot say that both query are same or work similarly hence you can see the difference between the outputs in executionStats section of the outputBelow are the parameters from $match queryBelow are the parameters from $geoNear queryexplain.executionStats.nReturned Number of documents that match the query condition. nReturned corresponds to the n field returned by cursor.explain() in earlier versions of MongoDB.explain.executionStats.executionTimeMillisTotal time in milliseconds required for query plan selection and query execution. executionTimeMillis corresponds to the millis field returned by cursor.explain() in earlier versions of MongoDB.explain.executionStats.totalKeysExaminedNumber of index entries scanned. totalKeysExamined corresponds to the nscanned field returned by cursor.explain() in earlier versions of MongoDB.explain.executionStats.totalDocsExaminedNumber of documents examined during query execution. Common query execution stages that examine documents are COLLSCAN and FETCH .NOTE:\ntotalDocsExamined refers to the total number of documents examined and not to the number of documents returned. For example, a stage can examine a document in order to apply a filter. 
If the document is filtered out, then it has been examined but will not be returned as part of the query result set.If a document is examined multiple times during query execution, totalDocsExamined counts each examination. That is, totalDocsExamined is not a count of the total number of unique documents examined.Generally, MongoDB only uses one index to fulfill most queries. However, each clause of an $or query may use a different index, and in addition, MongoDB can use an intersection of multiple indexes. Reference document to check Indexing Strategies.Materialized view is one possibility for performance improvement. It does this by pre-aggregating the complex query into a different collection, and thus the filter part of the final FETCH stage of the geo query is basically pre-calculated so it may improve performance at the expense of disk space. Materialized view is the recommended method for certain workloads where a regular non-materialized view cannot be used due to how it currently works in MongoDB (see View creation ). One of which is that views currently do not support $geoNear stage.Although superficially both queries look similar at a glance, these are two very different queries, with very different execution plan, using very different indexes. Thus they cannot be compared apple-to-apple. The geo query does a lot more work, which is reflected in the explain output.Thanks,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Slow performance when Using Aggregation along with $geoNear with multiple query condition
2022-06-04T07:23:14.604Z
Slow performance when Using Aggregation along with $geoNear with multiple query condition
3,075
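Materialised views are mentioned in this thread as a possible optimisation, but no example appears. Below is an untested sketch of the general shape: $merge pre-applies the expensive non-geo filter into a side collection that the $geoNear query then runs against. Collection names, field names, and the coordinates are placeholders, not taken from the original (unshared) query.

```js
// Untested sketch: pre-filter into a materialised collection.
db.places.aggregate([
  { $match: { status: "active" /* stand-in for the non-geo part of the filter */ } },
  { $merge: { into: "places_prefiltered", whenMatched: "replace", whenNotMatched: "insert" } }
])

// One-time: the output collection needs its own geo index.
db.places_prefiltered.createIndex({ "location.coordinates": "2dsphere" })

// The $geoNear query then runs against far fewer documents.
db.places_prefiltered.aggregate([
  { $geoNear: {
      near: { type: "Point", coordinates: [ -73.99, 40.73 ] },
      distanceField: "dist",
      spherical: true
  } }
])
```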
null
[ "node-js" ]
[ { "code": "", "text": "Hi community,I am new to Mongo Charts and recently built a demo dashboard , but having questions regarding the user setting.I am currently using JWT authentication method to embed the dashboard in my nodejs application.\nHowever, I want to create a user “filter” , which means each user can only see their own data.I tried to refer to the documentation about the Injected User Filter , by using email to define each user,however, I am having errors 17. I wonder what is the correct setting / any demo code to use the user filter.Please advise.", "username": "Super_Chain" }, { "code": "context.tokensubownerIdfunction getFilter(context) {\n return { ownerId: context.token.sub };\n}\n", "text": "Hi @Super_Chain -It sounds like you’re on the right track. As per the docs, error 17 means “injected filter failed to apply”. Can you share your filter function code? It should interrogate the context.token object and return a JavaScript object representing a valid MQL filter.For example, if the user’s email address is in the sub field of the JWT token and you want to filter based on the ownerId field in your collection, the function would look like this:Tom", "username": "tomhollander" }, { "code": "", "text": "Hey Tom !!!Thanks for your reply !!!Just fixed it ! Thanks for explaining the “sub” field, I wasnt know it was referring to the JWT token. I decoded the JWT token by putting the token in jwt.io and found that our payload is not using “sub” , instead using “username” , so after changing that it works perfectly !!!Cheers!", "username": "Super_Chain" }, { "code": "", "text": "Hi Tom,I am using having an new error when I deploy the application :slight_smile…Without changing the node.js code, I found that the dashboard is no longer showing to every users, like this : (suppose when the user login, they can see the numbers for their own data)I checked the network tab and seems every requests are fine with all 200So, I also check to use postman to request the URL with the authorisation as the Bearer Token and there is 404 error - Cannot access member ‘text’ of undefinedHere is the setting for Inject Filter Per User :\n\nScreenshot 2022-06-29 at 12.50.44 PM692×606 44.9 KB\nI use username instead of sub, as based on the jwt decoding the Bearer token :PAYLOAD:DATA\n{\n“username”: “[email protected]”,\n“iat”: 1656477038,\n“exp”: 1656480638\n}Trying to change the setup on MongoDB , but no luck so far …Could you please shed light on the direction , what aspect should I look at for debug ?Cheers !", "username": "Super_Chain" }, { "code": "return { ownerId: \"[email protected]\" };\nownerId", "text": "Hi - so it looks like the charts aren’t showing any error, but they don’t contain any data. The most common cause for this is because all data is being filtered out by your filter function.From the information provided it’s hard to tell exactly what’s going wrong, but I’d guess that either the token or the data isn’t what you expect. One good step to debug would be to change your function so it uses a constant value, e.g:This will result in the same filter for all users which clearly isn’t what want, but depending on whether it renders a chart or not you may get some clues as to the nature of the problem. I take it that ownerId is the field you want to match in your data?Tom", "username": "tomhollander" }, { "code": "", "text": "Oh Tom ! Lifesaver !I thought it was a default setting of using “ownerId”. Turns out this is exactly how we query the data normally. 
Just like {\"email\":\"[email protected]\"}.I got the correct return based on your advice; here is what the filter should look like:// Return a filter based on token attributes, e.g:\nreturn { email: context.token.username };", "username": "Super_Chain" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create a user-specific dashboard Questions
2022-06-23T06:54:01.954Z
Create a user-specific dashboard Questions
3,449
null
[ "containers" ]
[ { "code": "", "text": "Hi, mongodb gurus,I ran into a werid connection and spend a few hours and still cannot fix it.I set up a mongod with tls enabled:\ndocker run -d --rm -v /mnt/mongodb:/data/db -v /etc/pki:/etc/ssl/mongo --network host --name mongodb mongo:4.2 mongod --replSet rs0 --auth --tlsMode requireTLS --clusterAuthMode x509 --tlsCertificateKeyFile /etc/ssl/mongo/tls/certs/test.pem --tlsCAFile /etc/ssl/mongo/ca.pem --bind_ip_all --logpath /data/db/mongo.logthen then mongo to connect to it:\ndocker run mongodb bash\nthen\nmongo --tls --tlsAllowInvalidHostnames --tlsCertificateKeyFile /etc/ssl/mongo/tls/certs/test.pem --tlsCAFile /etc/ssl/mongo/ca.pemand I got the error:\nconnecting to: mongodb://127.0.0.1:27017/localhost?compressors=disabled&gssapiServiceName=mongodb\n2022-06-20T10:32:27.543+0000 E QUERY [js] Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: internal error :\nconnect@src/mongo/shell/mongo.js:353:17\n@(connect):2:6\n2022-06-20T10:32:27.545+0000 F - [main] exception: connect failed\n2022-06-20T10:32:27.545+0000 E - [main] exiting with code 1The weird part is that the whole thing works in my on-prem configuration but failed on azure. Selinux is permissive and fips is disabled. (Both seems irrelevant)the log on server side is simply:\n2022-06-20T10:32:31.722+0000 I NETWORK [listener] connection accepted from 127.0.0.1:60700 #9 (1 connection now open)\n2022-06-20T10:32:31.723+0000 I NETWORK [conn9] end connection 127.0.0.1:60700 (0 connections now open)The server side seems ok. How can we get more meaningful error messages?Cheers,\nFreeman", "username": "Freeman_LIU" }, { "code": "", "text": "Further info:disabled tls and everythig is fine.certificate are verified with no problem.Both mongod and mongo are run in official mongo:4.2 docker on redhat 7.9.", "username": "Freeman_LIU" }, { "code": "", "text": "Tried 5.0 and everyting works smoothly. However, the up application is from 3rd and it depends on 4.2. Is it a bug?", "username": "Freeman_LIU" }, { "code": "", "text": "Turned install with rpm installed of docker and everything is fine.", "username": "Freeman_LIU" } ]
Cannot connect to mongodb with tls
2022-06-20T10:36:05.572Z
Cannot connect to mongodb with tls
2,616
null
[ "aggregation", "queries", "sharding", "performance" ]
[ { "code": "", "text": "Hi allUsing 4.4.13 Mongodb Sharded deployment the goal is the check performance on a storage system using c5n-18xlarge AMI, 4 node cluster with huge storage pool .\nUsing ycsb tool to generate traffic want to be sure that the traffic hits the disk to get my performance numbers\n(72 cpu and 192GB memory I have in each node)\nEach data shard pod is configured with 72GB of memory and 24GB of CPUhowever when i run traffic I am not sure if i am reaching the best perf ? how do i confirm this ? and make sure that IO is hitting the disk with least latency\nhow do i ensure all the pods utilize max cpu and its memory assigned so that then the IO reaches the disk ??\nAny inputs or tips to tune in or help me know this will be grateful !", "username": "Shilpa_Agrawal" }, { "code": "", "text": "A well configured and behaved DB should seldomhits the diskIf it does too often, you should then find out why and try to avoid it.Testing DB performance while hitting the disk is of little value as you do not want to hit the disk when running live traffic. Disks are an order of magnitude slower.You should do your benchmarks with traffic that does not hit the disk and if you hit the disks in production you should re-size your system so that you do not hit the disks.", "username": "steevej" }, { "code": "", "text": "@steevej Thank you for the response, however i am testing a SDS solution so unless application uses some amount of disk if not more I wont be able to conclude on the performance numbers on that sds solution hence the query. I have to reach a point where application is using its memory(cache) and then reaching the disk for some portion at least …Also I am observing the mongo-data-sharded pods at-times overallocated memory based on the traffic is there a way i can limit the pod’s memory so that once it reaches the limit it has to reach out to disk ?I understand as in-memory application it will perform the best when its not even reaching the disk and doing it all good from memory but my use case here is diff so !! any inputs in this regards shall help", "username": "Shilpa_Agrawal" }, { "code": "", "text": "@steevej and et all ,\nI am using aws rhel 8.4 ec2 instances, for best performance on mongodb should i disable THP ? or any other best practices to follow for mongodb sharded deployment via helm way of install ? for best performance please do guide or share some advice", "username": "Shilpa_Agrawal" }, { "code": "", "text": "Hi @Shilpa_Agrawalfor best performance on mongodb should i disable THP ? or any other best practices to follow for mongodb sharded deployment via helm way of install ?Regarding recommended settings for MongoDB, you might find the Production Notes contain all the recommendations. And yes, THP is recommended to not be used as per the Production Notes and the related Disable Transparent Huge Pages (THP) page.Regarding your question:how do i ensure all the pods utilize max cpu and its memory assigned so that then the IO reaches the diskI think you can achieve this basically by ensuring that your workload exceeds the hardware’s capability, e.g. maybe you can try to do a collection scan on a collection that’s way larger than your provisioned RAM? Apologies for the lack of ideas, but your question is basically how to do what we tell people not to do so it’s a bit of an unfamiliar territory Best regards\nKevin", "username": "kevinadi" } ]
Performance of Mongodb pods- sharded deployment
2022-06-23T12:36:21.324Z
Performance of Mongodb pods- sharded deployment
3,137
null
[ "containers" ]
[ { "code": "", "text": "Hi Team,\nWe are in planning to deploy mongodb cluster in gcp.\nFor reason above my paygrade, we aren’t using mongodb atlas.My question is, which is better, deploying mongodb on ubuntu as processes ( 3 ubuntu hosts) or say 3 containers of mongodb on docker. Any guide where such difference is mentioned already or if anyone has any input on this will be highly appreciated.Thanks", "username": "Mayank_Kumar2" }, { "code": "", "text": "Hello @Mayank_Kumar2 ,Welcome to the MongoDB community forum!! Technically from MongoDB’s perspective, as long as it has enough resources to do the workload, it matters little where it is deployed (bare metal or docker instances). It’s more up to your operational requirements than MongoDB’s own requirements. Regarding MongoDB’s operational requirements, please follow the recommendations in the following pages:Let us know if you have any further questions.Thanks,\nTarun Gaur", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb on Docker or Ubuntu host
2022-06-21T17:01:37.235Z
Mongodb on Docker or Ubuntu host
1,724
null
[ "node-js" ]
[ { "code": "", "text": "Hello,I like all MongDB related subjects, so i was looking for a job that i mainly do MongoDB related things.\nBut most jobs online in Greece look for mainly nodejs programmers, or data engineers that mostly do SQL programming.So i thought to see about remote jobs in another country that ask mainly for MongoDB.Do you have any experience with those, has anyone worked in mainly MongoDB related job in remote way? Is it possible?Thank you", "username": "Takis" }, { "code": "", "text": "Hello @Takis,I also like working with MongoDB database. I have a limited exposure and experience learning MongoDB and working with it.Most of the work related to a somewhat specialized database (a document based NoSQL database, and a modern one too) like MongoDB is likely to be part of a larger project - like an app (for example, of MERN stack), a data science or data processing project. These will involve other aspects like a platform (e.g., NodeJS, Python), programming languages (e.g., JavaScript, Java), tools (e.g., presentation, querying, processing, data loading), etc. So, there is possibility “just MongoDB only work” might be limited (but, I may not be correct also).Yes, you can find small projects, and gigs for just specific aspects like, for example data querying, on the international freelancing sites. You can try looking up online, and maybe join one of them and try and get a feel of it. Another thing you can consider is, partnering with other freelancers and provide your specialized service in a larger project.", "username": "Prasad_Saya" }, { "code": "", "text": "Yassou @Takis !What sort of MongoDB-related roles are you considering and what tech skills are you hoping to leverage?Roles requiring Node.js sound likely to be development focus whereas those asking for SQL are probably DBA or data science.I’m not sure about the requirements for working remotely from Greece, but I gather this is more straightforward for EU citizens working in EU countries. Anecdotally I also think companies have more flexibility for remote work since the pandemic.most jobs online in Greece look for mainly nodejs programmers, or data engineers that mostly do SQL programming.I’m not familiar with local/regional job boards for Greece, but as one data point LinkedIn seems to have 500+ jobs mentioning MongoDB in Greece (~150 remote). A quick skim suggests Python & Java are also very popular, if Node.js isn’t one of your strengths or interests.FYI MongoDB also has roles open to Remote EMEA. If there isn’t anything suitable at the moment, definitely worth checking back regularly for new roles or joining the MongoDB Talent Community for updates.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you both for the reply and the links.I am not expert in MongoDB, but i enjoyed it the most, so i was just thinking those alternatives.Those 4 i was thinking, as alternatives.\nAll those are fine to me, except the second if not MongoDB is used, and i end up doing statistics and data science.", "username": "Takis" } ]
MongoDB remote job is it possible?
2022-06-29T00:44:30.370Z
MongoDB remote job is it possible?
2,570
null
[]
[ { "code": "", "text": "Hello,I would like to know if anyone knows the process of storing in firebase backups generated from a mongodb db", "username": "Rahisbel_Herrera" }, { "code": "", "text": "I would like to know how to connect a moralis mongodb instance to mongodb atlasThank you", "username": "Rahisbel_Herrera" }, { "code": "", "text": "Welcome to the MongoDB Community @Rahisbel_Herrera !I would like to know if anyone knows the process of storing in firebase backups generated from a mongodb dbCan you provide more detail on your use case for storing backups in Firebase?If you are using Google Cloud and want to save large binary backups, a more typical destination would be Google Cloud Storage.moralis mongodb instance to mongodb atlasMongoDB Atlas is a managed database service, so you can’t directly connect an external instance with your Atlas cluster outside the use case of migration. If you are interested in migrating or copying data to an Atlas cluster please see the Migrate or Import Data documentation.Regards,\nStennie", "username": "Stennie_X" } ]
Backups moralis - mongodb
2022-06-27T19:19:59.345Z
Backups moralis - mongodb
1,825
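To make the Google Cloud Storage suggestion in the thread above concrete, here is a minimal Node.js sketch of one way to push a mongodump archive into a GCS bucket. The connection string, bucket name and file paths are invented placeholders, and it assumes the mongodump CLI and the @google-cloud/storage package are installed, with Google credentials configured in the environment.

```js
// Hypothetical backup sketch (not from the thread): dump a MongoDB database with
// the mongodump CLI, then upload the archive to a Google Cloud Storage bucket.
const { execFileSync } = require("child_process");
const { Storage } = require("@google-cloud/storage");

const uri = "mongodb://user:pass@host:27017/mydb"; // placeholder connection string
const archivePath = "/tmp/mydb-backup.gz";         // placeholder local path

// 1. Create a compressed archive (requires mongodump to be installed).
execFileSync("mongodump", ["--uri", uri, `--archive=${archivePath}`, "--gzip"]);

// 2. Upload it to GCS (assumes GOOGLE_APPLICATION_CREDENTIALS is configured).
async function uploadBackup() {
  const storage = new Storage();
  await storage.bucket("my-backup-bucket").upload(archivePath, {
    destination: `backups/mydb-${new Date().toISOString()}.gz`,
  });
}

uploadBackup().catch(console.error);
```

Scheduling the script (cron, Cloud Scheduler, etc.) and pruning old archives would be the natural next step.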
null
[ "dot-net" ]
[ { "code": "", "text": "Hi,I just started playing with MongoDB API (Never knew about it) and it is pretty impressive for a small app project I am working on. I am struggling on a slight issue though which I cannot find any manual for (or maybe my search skills are bad).I have created 3 GET endpoints and some functions these all work well and tested via postman. However since coming from .Net and Web API one issue which bugs me and something which I’d like to keep so I don’t need to change my APP URLS is have the querystring data within the URL.Currently the route ends points are like so:myapi.com/endpoint/method/?query=valueI would prefermyapi.com/endpoint/method/name/valueEssentially having the query data with the URL route, is this possible to achieve? it would save my time as I would not need to change how my app functions.", "username": "asim" }, { "code": "", "text": "Hi @asim welcome to the community!I just started playing with MongoDB APII’m a bit unclear on this: do you mean Data API or is it something else?I have created 3 GET endpoints and some functions these all work well and tested via postman. However since coming from .Net and Web API one issue which bugs me and something which I’d like to keep so I don’t need to change my APP URLS is have the querystring data within the URL.How did you create the GET endpoints? I apologize in advance if I’m presuming too much, but the GET endpoint designs are not ones I’m familiar with from the MongoDB side, so I’m guessing this is created using a framework?Could you share more information regarding the framework you’re using and the MongoDB product you’re having trouble with?Best regards\nKevin", "username": "kevinadi" } ]
HTTP EndPoints Data Within Route URL
2022-06-26T22:11:18.419Z
HTTP EndPoints Data Within Route URL
2,858
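The thread above never settles which framework sits in front of MongoDB, so the following sketch uses Express purely as a stand-in to illustrate the two URL styles being compared: query-string data versus route (path) parameters. The endpoint and parameter names are made up for the example.

```js
// Express used only as a stand-in to illustrate the two URL styles the poster
// compares; endpoint and parameter names are invented for the example.
const express = require("express");
const app = express();

// Query-string style: GET /endpoint/method?name=value  -> value read from req.query
app.get("/endpoint/method", (req, res) => {
  res.json({ name: req.query.name });
});

// Route-parameter style: GET /endpoint/method/name/value -> values read from req.params
app.get("/endpoint/method/:field/:value", (req, res) => {
  const { field, value } = req.params;
  res.json({ [field]: value });
});

app.listen(3000);
```

Whether the route-parameter style is achievable depends on the actual product in use; the Atlas Data API, for instance, exposes fixed action-style URLs, which is why the reply asks for the framework first.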
null
[ "queries" ]
[ { "code": ".find({}, { sort: { date: -1 }, limit: 48, skip: 24 })\n\n{\n \"name\": \"find\",\n \"arguments\": [\n {\n \"database\": \"aDatabase\",\n \"collection\": \"aCollection\",\n \"query\": {},\n \"sort\": {\n \"date\": -1\n },\n \"limit\": 48\n }\n ],\n \"service\": \"mongodb-atlas\"\n}\n", "text": "Trying to toss in skip but the logs show in Realm that the skip piece was not sent? Bit confused as practically every MongoDB doc shows skip as something you should be able to use and yet no matter what I try I can’t seem to get skip to go through. I’ve made sure too that the skip is a number and not like a string.Quick example just to highlight with a sort and limit.Where did my friend skip go?Thank you for any help!", "username": "CloudServer" }, { "code": "find({}).sort({date:-1}).skip(24).limit(48)\n", "text": "I gonna skip out of my comfort zone since I don’t use Realm.Other drivers use a different syntax for sort/skip/limit. They chain functions rather than optional parameters. So I would try to use", "username": "steevej" }, { "code": "", "text": "Well still trying to find an answer or solution here. Doing the method suggested above results in the same output. It’s like skip is ignored entirely.", "username": "CloudServer" }, { "code": "", "text": "you must be doing something wrong. I user skip and limit many times and it never failed. you wroteDoing the method suggested above results in the same outputbut indeed you will if you never change the skip value. you will always skip the same documents. you must increase the skip value to go to the next group of documents.", "username": "steevej" }, { "code": "", "text": "Nope was doing everything right and hours later found the real reason. Guess in over 2 years now support for skip has been in development? Absolutely crazy.For anyone who comes across this on a Google Search here you go:Realm Web ----> DOES NOT SUPPORT SKIP**There is no \"skip\" option on MongoDB Collection**. Is that correct?\n\nThe onl…y way to use skip right now would be on aggregation.\n\n## Code Sample\n\nDirectly from realm-web source code:\n\n```js\nclass MongoDBCollection {\n\n find(filter = {}, options = {}) {\n return this.functions.find({\n database: this.databaseName,\n collection: this.collectionName,\n query: filter,\n project: options.projection,\n sort: options.sort,\n limit: options.limit,\n // WHERE IS THE \"skip\" option?\n });\n }\n```\n\n## Version of Realm and Tooling\n\n- Realm JS SDK Version: realm-web 1.0\n- Node or React Native: no", "username": "CloudServer" } ]
Skip not going through on find
2022-06-20T05:07:43.428Z
Skip not going through on find
2,512
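Since the thread above concludes that the Realm Web SDK's collection.find() drops the skip option, a minimal sketch of the aggregation workaround may help the next reader. The database, collection and field names are assumptions carried over from the poster's example, and it presumes an already authenticated realm-web App instance named app.

```js
// Aggregation workaround for paging in the Realm Web SDK, where collection.find()
// drops the "skip" option. Database, collection and field names are assumptions.
const mongodb = app.currentUser.mongoClient("mongodb-atlas"); // assumes an authenticated realm-web App `app`
const collection = mongodb.db("aDatabase").collection("aCollection");

// Second page of 24 results, newest first:
const page = await collection.aggregate([
  { $sort: { date: -1 } },
  { $skip: 24 },
  { $limit: 24 },
]);
```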
null
[ "aggregation" ]
[ { "code": "{\n\t\"fruits\": {\n\t\t\"banana\": [{\n\t\t\t\"name\": \"goodBanana\",\n\t\t\t\"ripe\": true\n\t\t}, {\n\t\t\t\"name\": \"badBanana\",\n\t\t\t\"ripe\": false\n\t\t}]\n\t}\n}\n", "text": "Hey, thank you in advance for reading this post. I basically have data like this:I would like to find documents where there are “goodBanana” in their array of “banana”. I am experimenting with the $in operator but I am having trouble using objects in the expression field. Please let me know the best practice to query this type of data.Thank you.", "username": "Wai_Kun" }, { "code": "fruits: { $elemMatch: {name: \"goodBanana\"}}\n", "text": "I just discovered a solution, I used $elemmatch in this fashion:That appeared to have worked, I am not sure if it’s the optimal solution though.", "username": "Wai_Kun" }, { "code": "", "text": "you can try to use $elemMatch operator:db.test.find({“fruits.banana”:{$elemMatch: {“name”:“goodBanana”}}})", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "Thank you, I just solved it just now! The timing. I still appreciate your help.", "username": "Wai_Kun" }, { "code": "c.find( { \"fruits.banana.name\" : \"goodBanana\" } )\nc.find( { \"fruits.banana.name\" : { \"$in\" : [ \"rottenBanana\", \"badBanada\" ] } } )\nc.find( { \"fruits.banana\" : { \"$elemMatch\" : { \"name\" : \"goodBanana\" , \"ripe\" : \"false\" } } } )\n", "text": "The following should work:You would use $in when you have a list rather than a single value like:You would use $elemMatch when wanting multiple condition to be true for the same element like:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Find an object in nested array of object
2022-06-28T22:30:06.791Z
Find an object in nested array of object
16,480
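The distinction drawn above between dot notation and $elemMatch is easy to miss, so here is a small mongosh sketch (collection name assumed) showing that dot-notation conditions may be satisfied by different array elements, while $elemMatch requires one element to satisfy them all.

```js
// mongosh sketch with an assumed collection name ("fruits") and the sample
// document from the thread.
db.fruits.insertOne({
  fruits: {
    banana: [
      { name: "goodBanana", ripe: true },
      { name: "badBanana", ripe: false },
    ],
  },
});

// Dot notation: each condition may be met by a different array element,
// so this DOES match the document above.
db.fruits.find({ "fruits.banana.name": "goodBanana", "fruits.banana.ripe": false });

// $elemMatch: both conditions must hold on the same element, so this does NOT match.
db.fruits.find({
  "fruits.banana": { $elemMatch: { name: "goodBanana", ripe: false } },
});
```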
null
[ "aggregation" ]
[ { "code": "aggregate([\n {$unwind: '$sizes'},\n {\n $group: {\n _id: \"$_id\",\n brand: {$first: '$brand'},\n title: {$first: '$title'},\n totalShoes: {\n $sum: {\n \n $cond: [{$eq: [\"$sizes\", []]}, 0, \"$sizes.count\"]\n }\n },\n }\n },\n ])\n{\n \"_id\": \"61dc771dc825e9bb0066a20a\",\n \"title\": \"Falcon\",\n \"brand\": \"Adidas\",\n \"sizes\": [\n {\n \"_id\": \"5fbf9730f2192b42589f63b1\",\n \"sizeValue\": 41,\n \"count\": 1\n },\n {\n \"_id\": \"5fbf9730f2192b42589f63b2\",\n \"sizeValue\": 44,\n \"count\": 4\n }\n ]\n },\n \n{\n \"_id\": \"61fabd7d9e38f5770f4f52e5\",\n \"title\": \"568\",\n \"brand\": \"New Balance\",\n \"sizes\": []\n },\n", "text": "As you can see I use $cond inside the $sum operator and true case returns 0, but this causes $sum to skip the model from the list. I need to get something like {title: Falcon, totalShoes: 0, sizes: []} when sizes array is empty.Here is model example.", "username": "Ivan_Kravchenko" }, { "code": "", "text": "The major issue with your aggregation is that after $unwind, sizes is not not an array anymore. See at the examples in the documentation.An $unwind followed by $group with _id:$_id is usually seen as bad and in most cases, $reduce, $map or $filter can be used.There is an option for $unwind for your empty array. It is preserveNullAndEmptyArrays.In your case a simple $reduce with input:$sizes, initialValue:0 and in:{ $add:[$$value,$$this.count]} should be working. But I have not tested.", "username": "steevej" }, { "code": "$unwind$map$sum$settotalShoesdb.collection.aggregate([\n {\n \"$set\": {\n \"totalShoes\": {\n \"$sum\": {\n \"$map\": {\n \"input\": \"$sizes\",\n \"in\": \"$$this.count\"\n }\n }\n }\n }\n }\n])\n", "text": "You don’t have to use $unwind since it’s an expensive operation. You can do the following:Working example", "username": "NeNaD" }, { "code": "aggregate([\n {\n $project: {\n _id: 1,\n title: 1,\n brand: 1,\n sex: 1,\n totalShoes: {\n $reduce: {\n input: \"$sizes\",\n initialValue: 0,\n in: {$add: [\"$$value\", \"$$this.count\"]}\n }\n }\n }\n }\n ]);\n", "text": "Thank you!\nFor someone who needs there is working example.", "username": "Ivan_Kravchenko" }, { "code": "aggregate([\n {\n $set: {\n totalShoes: {\n $sum: {\n $map: {\n \"input\": \"$sizes\",\n \"in\": \"$$this.count\"\n }\n }\n }\n },\n },\n {\n $project: {\n _id: 1,\n title: 1,\n brand: 1,\n sex: 1,\n totalShoes: 1\n },\n }\n ]);\n", "text": "Cheers. Also working solution.\nWho needs working example look at this.", "username": "Ivan_Kravchenko" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I need to get 0 from $sum operator when array is empty inside a group aggregation
2022-06-28T17:36:45.701Z
I need to get 0 from $sum operator when array is empty inside a group aggregation
3,676
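As a quick check on the accepted answers above, this mongosh snippet (sample documents assumed) confirms that the $map/$sum approach keeps models whose sizes array is empty and reports totalShoes: 0 for them, which was the original requirement.

```js
// mongosh verification with assumed sample documents mirroring the thread.
db.shoes.insertMany([
  {
    title: "Falcon",
    brand: "Adidas",
    sizes: [{ sizeValue: 41, count: 1 }, { sizeValue: 44, count: 4 }],
  },
  { title: "568", brand: "New Balance", sizes: [] },
]);

db.shoes.aggregate([
  { $set: { totalShoes: { $sum: { $map: { input: "$sizes", in: "$$this.count" } } } } },
]);
// -> { title: "Falcon", ..., totalShoes: 5 }
// -> { title: "568",    ..., totalShoes: 0 }  // empty array kept, not skipped
```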
null
[ "replication", "mongodb-shell", "containers" ]
[ { "code": "hostname: mongodb1\n\ncontainer_name: mongodb1\n\nimage: mongo:5.0\n\nnetworks:\n\n - replica-docker\n\nexpose:\n\n - 27017\n\nrestart: always\n\nentrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\nhostname: mongodb2\n\ncontainer_name: mongodb2\n\nimage: mongo:5.0\n\nnetworks:\n\n - replica-docker\n\nexpose:\n\n - 27017\n\nrestart: always\n\nentrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\nhostname: mongodb3\n\ncontainer_name: mongodb3\n\nnetworks:\n\n - replica-docker\n\nimage: mongo:5.0\n\nexpose:\n\n - 27017\n\nrestart: always\n\nentrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\nimage: mongo:5.0\n\nnetworks:\n\n - replica-docker\n\ndepends_on:\n\n - mongodb1\n\n - mongodb2\n\n - mongodb3\n\nvolumes:\n\n - .:/scripts\n\nrestart: \"no\"\n\nentrypoint: [ \"bash\", \"/scripts/mongo_setup.sh\"]\nimage: mongo-express:latest\n\ndepends_on:\n\n - \"mongodb1\"\n\n - \"mongosetup\"\n\nnetworks:\n\n - replica-docker\n\nenvironment:\n\n - ME_CONFIG_MONGODB_SERVER=mongodb1\n\nports:\n\n - \"8081:8081\"\n\nrestart: alway\ndate +\"%T\" \"_id\": \"rs0\",\n\n\"version\": 1,\n\n\"members\": [\n\n {\n\n \"_id\": 0,\n\n \"host\": \"mongodb1:27017\",\n\n \"priority\": 2\n\n },\n\n {\n\n \"_id\": 1,\n\n \"host\": \"mongodb2:27017\",\n\n \"priority\": 0\n\n },\n\n {\n\n \"_id\": 2,\n\n \"host\": \"mongodb3:27017\",\n\n \"priority\": 0\n\n }\n\n]\n", "text": "Hello, im devoloping a replica set mongo db with docker.\n----->docker-compose:version: “3”services:mongodb1:mongodb2:mongodb3:mongosetup:mongo-express:networks:\nreplica-docker:---->this is my mongo_setup.sh:#!/bin/bashecho “sleeping for 10 seconds”sleep 10echo mongo_setup.sh time now: date +\"%T\" mongo --host mongodb1:27017 <<EOFvar cfg = {};rs.initiate(cfg);EOF—>But i dont know why , my mongoexpress its not conecting to my replica set, it dosent show any error, but when i go to the browser (localhost:8081) it shows that its not working.Can you guys help me please", "username": "Alexandre_Sousa" }, { "code": "mongoshrs.status()", "text": "Hi @Alexandre_Sousa,Can you connect to the RS using mongosh and share the result of rs.status()?\nCan you share the connection string you are using? What does it looks like?\nWe agree that these 4 machines are all on 4 different physical machines, correct?Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Replication set connect to mongo express
2022-06-27T15:38:14.639Z
Replication set connect to mongo express
3,567
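Before suspecting mongo-express in the thread above, it is worth confirming the replica set actually initialized; this hedged mongosh sketch assumes you can exec into one of the containers, and the connection-string comment simply reuses the host names and replica set name from the compose file.

```js
// Run inside mongosh on one member, e.g. `docker exec -it mongodb1 mongosh`,
// to confirm the replica set initialized and a PRIMARY was elected.
rs.status().members.map((m) => ({ name: m.name, state: m.stateStr }));
// Expected shape if healthy:
// [ { name: "mongodb1:27017", state: "PRIMARY" },
//   { name: "mongodb2:27017", state: "SECONDARY" },
//   { name: "mongodb3:27017", state: "SECONDARY" } ]

// If the set is healthy, a client on the same Docker network would typically use a
// connection string shaped like this (hosts and replSet taken from the compose file):
// "mongodb://mongodb1:27017,mongodb2:27017,mongodb3:27017/?replicaSet=rs0"
```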