Dataset columns: image_url (string, 113–131 chars), tags (list), discussion (list), title (string, 8–254 chars), created_at (string, 24 chars), fancy_title (string, 8–396 chars), views (int64, 73–422k).
null
[]
[ { "code": "", "text": "Hello,I have a function which works fine when running via AppServices page.\nBut when accessing from my iOS app it failed. I got : {“message”:“ObjectId in must be a single string of 12 bytes or a string of 24 hex characters”,“name”:“Error”}It fails here:const team = await teamsCollection.findOne({\"_id\": BSON.ObjectId(teamId)});I log teamId which is “6126dca2a3973c43f39e0174”.\nDo you have an idea?", "username": "Thierry_Bucco" }, { "code": "", "text": "I found the cause. I have to convert my ObjectID to stringValue (from Swift side) when calling function", "username": "Thierry_Bucco" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
AppServices functions problem when calling from iOS sdk
2022-06-21T12:28:44.661Z
AppServices functions problem when calling from iOS sdk
1,266
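The fix in the thread above comes down to sending the ObjectId as its 24-character hex string and rebuilding the ObjectId on the server side. A minimal PyMongo/bson sketch of that round trip; the database and collection names are placeholders:

```python
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
teams = client["test"]["teams"]  # placeholder database/collection

team_id_str = "6126dca2a3973c43f39e0174"   # what the client should send
team_id = ObjectId(team_id_str)            # rebuild the ObjectId server-side
team = teams.find_one({"_id": team_id})

# Going the other way, send str(doc["_id"]) back to the client.
print(str(team_id))  # "6126dca2a3973c43f39e0174"
```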
null
[ "queries", "golang" ]
[ { "code": "var c1 Category\n\tvar setElements bson.D\n\tif len(c.CategoryLargeType) > 0 {\n\t\tsetElements = append(setElements, bson.E{Key: \"category_large_type\", Value: c.CategoryLargeType})\n\t}\n\tif len(c.CategoryMiddleType) > 0 {\n\t\tsetElements = append(setElements, bson.E{Key: \"category_middle_type\", Value: c.CategoryMiddleType})\n\t}\n\tif len(c.CategorySmallType) > 0 {\n\t\tsetElements = append(setElements, bson.E{Key: \"category_small_type\", Value: c.CategorySmallType})\n\t}\n\tsetMap := bson.D{\n\t\t{Key: \"$in\", Value: setElements},\n\t}\n\tfmt.Println(setMap)\n\terr := categoryCollection.FindOne(ctx, setMap).Decode(&c1)\n\tfmt.Println(c1)\n\tfmt.Println(err)\n\tif err != nil {\n\t\treturn http.StatusInternalServerError, Response{Status: http.StatusInternalServerError, Mssage: \"데이터 변환에 실패하였습니다.\", Result: \"\"}\n\t}\n\treturn http.StatusOK, Response{Status: http.StatusInternalServerError, Mssage: \"이미 저장된 데이터가 존재합니다.\", Result: c1}\n", "text": "my code hereexample) select * from TEST where id =\"\" AND title=\"\" AND desc =\"\"how to use example for golang mongodb Find", "username": "kindcode" }, { "code": "", "text": "my error code here\nunknown top level operator: $in. If you have a field name that starts with a ‘$’ symbol, consider using $getField or $setField", "username": "kindcode" }, { "code": "", "text": "Hi @kindcode and welcome in the MongoDB Community !You have a code sample in the doc here:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use the Go MongoDB driver for a multi-condition find (SQL SELECT equivalent)
2022-06-21T07:28:35.221Z
How to use the Go MongoDB driver for a multi-condition find (SQL SELECT equivalent)
1,716
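The error in the thread above comes from using $in as a top-level operator; the SQL-style `WHERE a = ? AND b = ? AND c = ?` is just a filter document with one entry per field, which MongoDB treats as an implicit AND. A PyMongo sketch of the equivalent query, with field names taken from the post and sample values assumed:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
categories = client["test"]["category"]  # placeholder names

# Equivalent of: SELECT * FROM category WHERE large = ? AND middle = ? AND small = ?
filter_doc = {
    "category_large_type": "food",      # sample values
    "category_middle_type": "fruit",
    "category_small_type": "apple",
}
doc = categories.find_one(filter_doc)
print(doc)
```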
https://www.mongodb.com/…e_2_1024x512.png
[ "server" ]
[ { "code": "mongodb-community", "text": "I followed these steps to install MongoDB:When I start running MongoDB, no error pops up. It just says \" ==> Successfully started mongodb-community (label: homebrew.mxcl.mongodb-commu\"But when I check the status, I get a 19968 error:\n\nScreen Shot 2022-06-14 at 9.25.20 PM1360×120 42.2 KB\nAlso, when I check mongo --version, I get this error: “zsh: bad CPU type in executable: mongo”I checked the var/log/mongodb/output.log but that is an empty file.Can anyone help me fix these errors?", "username": "SM_A" }, { "code": "", "text": "What command line did you use to start mongod?", "username": "Jack_Woehr" }, { "code": "", "text": "I used “brew services start [email protected]”", "username": "SM_A" }, { "code": "mongod --config /opt/homebrew/etc/mongod.conf --fork", "text": "Have you tried the manual start mongod --config /opt/homebrew/etc/mongod.conf --fork\nIf so, what was the result?", "username": "Jack_Woehr" }, { "code": "ps -ef | grep mongo", "text": "Also, check ps -ef | grep mongo to see if maybe you have some piece of mongo stuck and running despite the error messages.", "username": "Jack_Woehr" }, { "code": "", "text": "When I try to manual start mongod, I get this error: “zsh: bad CPU type in executable: mongod”\nand when I check “ps -ef | grep mongo”, it says this: “501 43048 42112 0 11:12PM ttys000 0:00.00 grep mongo”", "username": "SM_A" }, { "code": "zsh: bad CPU type in executable: mongod501 43048 42112 0 11:12PM ttys000 0:00.00 grep mongogrepmongo", "text": "zsh: bad CPU type in executable: mongodYou’ve somehow installed the wrong executable, it seems.501 43048 42112 0 11:12PM ttys000 0:00.00 grep mongoThat’s the grep process itself searching for the string mongo … it finds itself.", "username": "Jack_Woehr" }, { "code": "", "text": "Do you have any suggestions on what I should do next? Thanks", "username": "SM_A" }, { "code": "brewbrewbrew", "text": "@SM_A maybe you can try this:If it still doesn’t work, submit a bug report. The Forums here are more and more “community supporting community” and you’ll have better chances of getting serious attention with a well-formatted bug report if there really is a problem.I feel for you I have a Mac, but not an M1, so I can’t try this myself.", "username": "Jack_Woehr" }, { "code": "", "text": "Hmm, searching I find some stuff about using Rosetta during the install.\nSee this Stack Exchange question @SM_A … it’s long but it may be the answer.", "username": "Jack_Woehr" }, { "code": "file", "text": "Hi @SM_A,You will need Rosetta2 emulation installed (a one-off procedure) to run MongoDB 5.0 on M1. NOTE: native M1 binaries are coming with MongoDB 6.0.However, I expect macOS should automatically prompt you to install Rosetta rather than complaining about a bad CPU type.Can you confirm the output of:file $(which mongod)sw_versOn my M1 with a brew install of MongoDB 5.0 the file command returns:/opt/homebrew/bin/mongod: Mach-O 64-bit executable x86_64Do you have any suggestions on what I should do next?You could try manually installing Rosetta 2:softwareupdate --install-rosettaRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks @Stennie_X\nI installed Rosetta. When checking mongo --version now, it looks correct. 
But when I check the status after running Mongo, it still shows an error:\n", "username": "SM_A" }, { "code": "", "text": "I also tried reinstalling mongoDB in a rosetta clone terminal, but this error pops up:\n\nScreen Shot 2022-06-15 at 10.51.39 AM1080×116 40.3 KB\n", "username": "SM_A" }, { "code": "", "text": "Check your mongod.log\nuse brew --prefix to find your var/log location\nMost likely it is failing due to permissions issues on /tmp .sock .lock files", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_Tummala\nI see two logs in my /var/log/mongodb folder:{“t”:{\"$date\":“2022-06-15T12:55:58.842Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}\n{“t”:{\"$date\":“2022-06-15T13:36:36.829Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}\n{“t”:{\"$date\":“2022-06-15T13:37:26.315Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}\n{“t”:{\"$date\":“2022-06-15T13:39:10.033Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}\n{“t”:{\"$date\":“2022-06-15T13:45:00.497Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}\n{“t”:{\"$date\":“2022-06-15T13:45:55.112Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}\n{“t”:{\"$date\":“2022-06-15T13:57:58.049Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}\n{“t”:{\"$date\":“2022-06-15T14:47:17.978Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}\n{“t”:{\"$date\":“2022-06-15T15:00:03.140Z”},“s”:“F”, “c”:“CONTROL”, “id”:20574, “ctx”:\"-\",“msg”:“Error during global initialization”,“attr”:{“error”:{“code”:38,“codeName”:“FileNotOpen”,“errmsg”:“Can’t initialize rotatable log file :: caused by :: Failed to open /opt/homebrew/var/log/mongodb/mongo.log”}}}Not sure what to do with this", "username": "SM_A" }, { "code": "", "text": "Can’t initialize rotatable log file :: caused by :: Failed to open 
/opt/homebrew/var/log/mongodb/mongo.logIt is unable to create the logfile\nCheck the permissions on /opt/homebrew/var/log/mongodb\nYour mongod should be able to write to this dir\nAlso the dbpath directory", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_Tummala\nI changed the permissions to this:\nBut I am still getting the same mongo.log errors and installation errors", "username": "SM_A" }, { "code": "", "text": "Does this directory exist?\nPlease show contents\nDo you have any mongod.log already in that dir with different ownership/permissions", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_TummalaI do not have a mongod.log. These are the logs I see:\n\nScreen Shot 2022-06-15 at 12.28.55 PM1528×300 29.9 KB\n", "username": "SM_A" }, { "code": "", "text": "I meant mongo.log only.On other os it is named as mongod.log\nSo why you are not able to open/read this file?\nCan you show file permissions from terminal prompt instead of explorer view\nls -lrt mongod.log", "username": "Ramachandra_Tummala" } ]
Error Installing on M1 MacBook
2022-06-15T01:27:28.377Z
Error Installing on M1 MacBook
9,961
null
[ "kafka-connector" ]
[ { "code": "INFO An exception occurred when trying to get the next item from the Change Stream: Query failed with error code 286 and error message 'Error on remote shard **.***.**.**:***** :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.' on server **.***.**.**:***** (com.mongodb.kafka.connect.source.MongoSourceTask:597)", "text": "I am using change stream through Mongo Source Connector. Suddenly, the connector has made continuous logs like below and stopped working:INFO An exception occurred when trying to get the next item from the Change Stream: Query failed with error code 286 and error message 'Error on remote shard **.***.**.**:***** :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.' on server **.***.**.**:***** (com.mongodb.kafka.connect.source.MongoSourceTask:597)I got some answer like “Increase the oplog size!” after googling, but I want to know more:NOTE: My mongo collections have been sharded.", "username": "Hyunsang_h" }, { "code": "", "text": "Anyone helps? T.T\nI am waiting for your help!", "username": "Hyunsang_h" }, { "code": "", "text": "There is an article related to invalid resume tokens.", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Resume of change stream was not possible, as the resume point may no longer be in the oplog (by MongoSourceConnector)
2021-11-03T02:31:53.085Z
Resume of change stream was not possible, as the resume point may no longer be in the oplog (by MongoSourceConnector)
3,650
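Outside the Kafka connector, the same failure can be reproduced and handled with a driver change stream: persist the resume token as you go, and when the server reports error 286 (the resume point has aged out of the oplog), drop the token, re-sync the data, and open a fresh stream. A hedged PyMongo sketch with placeholder names:

```python
from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient("mongodb://localhost:27017")
coll = client["test"]["events"]          # placeholder namespace
saved_token = None                       # load from durable storage in practice

try:
    with coll.watch(resume_after=saved_token) as stream:
        for change in stream:
            print(change["operationType"])
            saved_token = stream.resume_token   # persist after each processed event
except OperationFailure as exc:
    if exc.code == 286:                  # resume point no longer in the oplog
        # Re-copy the collection (or otherwise re-sync), then start a new
        # stream without resume_after; the old token is unusable.
        saved_token = None
    else:
        raise
```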
null
[]
[ { "code": "", "text": "Keep getting below error while trying to create kafka source connector through confluent cli locally .\nfor configuration connection.uri: Unable to look up TXT record for host", "username": "Piyali_Ash" }, { "code": "", "text": "Probably is a network issue not related to the connector. A couple of things to try. Your ISP may be blocking the connection string, try using another like a public one 8.8.8.8 is google’s DNS. Get Started  |  Public DNS  |  Google Developers. If this is not the issue, you have a firewall set up that might be blocking the port 27017 for MongoDB.", "username": "Robert_Walters" } ]
Not being able to connect to nonprod cluster from Kafka connector through local Confluent CLI
2022-06-21T04:48:16.075Z
Not being able to connect to nonprod cluster from Kafka connector through local Confluent CLI
1,188
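The "Unable to look up TXT record" message points at DNS resolution of the mongodb+srv URI rather than the connector itself; if SRV/TXT lookups are blocked, either fix DNS as suggested above or switch connection.uri to the non-SRV mongodb://host1,host2,host3/... form. A quick check of the DNS side from the same machine (uses the dnspython package; the hostname is a placeholder):

```python
import dns.resolver  # pip install dnspython (PyMongo also needs it for +srv URIs)

host = "cluster0.abcde.mongodb.net"  # placeholder cluster hostname

# A +srv connection string requires both of these lookups to succeed.
for record_type, name in [("SRV", "_mongodb._tcp." + host), ("TXT", host)]:
    try:
        answers = dns.resolver.resolve(name, record_type)
        print(record_type, "->", [str(r) for r in answers])
    except Exception as exc:
        print(record_type, "lookup failed:", exc)
```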
null
[ "atlas-triggers" ]
[ { "code": "$or$or{\n \"$or\": [\n { \"operationType\": { \"$in\": [\"insert\", \"delete\"] } },\n { \"updateDescription.updatedFields.email\": { \"$exists\": true } },\n { \"updateDescription.updatedFields.filter\": { \"$exists\": true } },\n { \"updateDescription.updatedFields.channels\": { \"$exists\": true } }\n ]\n}\n", "text": "Hi!I’m on MongoDB Atlas and I tried to add a match expression with a top-level $or and it simply doesn’t work.\nI can specify every single one of the or-ed filters individually and they work just fine but as soon as I try to $or them, no more events are passed along to EventBridgeCan you help me figure out what’s wrong?", "username": "CBasch" }, { "code": "", "text": "I do not see an obvious error in the query.Can you include a full example that we can test? Then it will likely be trivial.", "username": "Mah_Neh" }, { "code": "", "text": "What additional information do you require?", "username": "CBasch" } ]
$or not working in trigger match expression
2022-06-01T16:24:04.783Z
$or not working in trigger match expression
2,723
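One way to debug a trigger match expression is to run the identical document as a $match stage on a driver change stream against the same collection and see which events come through. A PyMongo sketch using the $or from the post; the database and collection names are assumptions, and a replica set (or Atlas cluster) is required for change streams:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["app"]["subscriptions"]   # placeholder namespace

match_expr = {
    "$or": [
        {"operationType": {"$in": ["insert", "delete"]}},
        {"updateDescription.updatedFields.email": {"$exists": True}},
        {"updateDescription.updatedFields.filter": {"$exists": True}},
        {"updateDescription.updatedFields.channels": {"$exists": True}},
    ]
}

# Only events matching the expression are delivered; updateDescription is
# present on change events produced by update operations.
with coll.watch(pipeline=[{"$match": match_expr}]) as stream:
    for change in stream:
        print(change["operationType"], change.get("updateDescription"))
```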
https://www.mongodb.com/…c_2_1024x348.png
[ "dot-net", "data-modeling", "python", "schema-version-pattern" ]
[ { "code": "", "text": "Hello all… I am new to mongoDB and I have a couple questions. I plan on asking them one at at time so here goes for my first question.I am currently coding an application in Python… That being said, the switch for me from .net to Python isn’t without its pains. This brings be to my first question.In taking the Developer Courses I have come to one big question… Embed Objects verses using Lookup.I currently am running one database titled KLD9_Bazaar with 7 collections. They areCart\nCountry\nInvoice\nProduct\nSession\nStore\nUserThe User collection is where my question begins. I currently am trying to embed objects within the user object. Below is my current schema for the user collection.\nScreenshot 2022-06-18 at 4.02.08 PM1169×398 57 KB\nSpecifically Address and Card (Credit Card) are embedded objects.First… Is this the best way to do this, or should I use a lookup query to embed the other objects within the user display when returned?Example: John Doe logs in to purchase some items… When he logs in, the lookup is used for both the card (or cards) he uses and his identified addresses (billing and shipping)?It seems that I am trying to over complicate this and I wanted to get some recommendations. I will use the recommendations to further develop the application… The store collection will have product in it… Populate the products in the store by using the lookup for the product collection to add the products when the store is loaded.for example:ABC and XYZ sell shirts. Then the customer looks at ABC store, the products that ABC sells will be loaded by using the lookup query.Is it better to use lookup or embed… and if embedding is better, should it be embedded as an object?Thanks for taking the time to read this and give me some pointers… Once this is answered… I will move on to my next question… Indexing… LOL.Respectfully,\nDave Thompson\nCo-Founder\nGlobal Kloud Solutions", "username": "David_Thompson" }, { "code": "$lookupaddresscard$lookupstore", "text": "Hi,Here are some main pointers:If you use embedded objects, that is faster because it is only one database query. When you use $lookup, that is additional database query.You don’t want to use embedded approach when you will have a lot of embedded objects, because that would not scale. MongoDB has a limit of 16MB for each document in a collection.For your use case I would suggests this:For User collection, go ahead and add address and card as embedded objects. You can add them as objects or as an array of objects, but either way, every user will probably not have more that 5-10 addresses and cards.For Stores and Products collections, use $lookup. 
This would not be a good place to use embedded approach, because each store can potentially have thousands of products, and if you would put thousands of products inside store document, you can potentially reach limit of 16MB for the document.", "username": "NeNaD" }, { "code": "from dataclasses import dataclass\nfrom model.cart import Cart\n\n\n@dataclass\nclass User(object):\n \n \n # __useritems = None\n \n # @staticmethod\n # def get_user_items():\n # if User.__useritems == None:\n # User.__useritems = []\n # return User.__useritems\n \n def __init__(self, first_name, last_name, phone, email, uid, cart):\n self.first_name = first_name\n self.last_name = last_name\n self.phone = phone\n self.email = email\n self.uid = uid\n self.cart = cart\n \n\n def __getattr__(self, key):\n return getattr(self, key)\n\n def __repr__(self):\n return \"first_name: {}, last_name: {}, uid: {}, cart: {}, card: {}\".format(self.first_name, self.last_name, self.uid, self.cart, )\n\n\n first_name: str\n last_name: str\n phone: str\n email: str\n uid: str\n cart: Cart\nclass User(Document):\n email = StringField(required=True)\n first_name = StringField(max_length=50)\n last_name = StringField(max_length=50)\n", "text": "Sorry @NeNaD I have been busy looking at what you said and applying it to my current project… I have now gotten the hang of nesting objects. This brings me to my next question.I have been looking at how to embed a document into the Users database (the address and card field like you recommended) and I believe those are the objects right? (I also forgot to add the cart since each user should have their own cart with products in it)But upon looking up multiple questions I came across the mongoengine…My current classes are written like this:I find the documentation on mongoengine to define classes quite differently. They define classes like this:I cannot find any documentation that tells me what format I need to use… Can you point me in the right direction for clarification? I am dying to read about this and whether I need to use the schema outlined in the mongoengine or if I can use the schema like all the tutorials explain in Python. I.E. can I use the standard class definition or do I need to use the mongoengine verison.From what I can gather, the mongoengine version enforces validation. If I don’t use this format, I’m assuming I need to program in the validation. (the required portion and also max length, etc)Am I on the right track?Respectfully,Dave Thompson", "username": "David_Thompson" }, { "code": "", "text": "Hi @David_Thompson,I am not familiar with Python implementation.You should probably wait for someone that is familiar with Python implementation to answer this.", "username": "NeNaD" }, { "code": "", "text": "Thanks… Should I move it to an entirely new topic?", "username": "David_Thompson" }, { "code": "", "text": "I would say just create a new topic with that question.It’s kinda new question anyway, and it’s more language specific than your original question.", "username": "NeNaD" } ]
Recommend Data Schema
2022-06-18T08:14:29.583Z
Recommend Data Schema
2,684
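To make the $lookup half of that recommendation concrete, here is a minimal PyMongo aggregation that loads one store and joins its products from a separate collection. Collection and field names follow the thread but are assumptions:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["KLD9_Bazaar"]

pipeline = [
    {"$match": {"name": "ABC"}},                # the store being viewed
    {"$lookup": {
        "from": "Product",                      # assumed collection name
        "localField": "_id",                    # store _id referenced by each product
        "foreignField": "store_id",             # assumed field on Product
        "as": "products",
    }},
]
store_with_products = list(db["Store"].aggregate(pipeline))
print(store_with_products)
```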
null
[ "golang" ]
[ { "code": "type MachineScoreType uint8\nfunc (m *MachineScoreType) UnmarshalBSONValue(t bsontype.Type, b []byte) (err error) {\n\tif len(b) == 0 {\n\t\t*m = NOT_SET\n\t\treturn nil\n\t}\n\tif len(b) == 5 && b[0] == 1 && b[1] == 0 && b[2] == 0 && b[3] == 0 {\n\t\t// Empty string case - bizarre what mongo passes us, but this is the empty string\n\t\t*m = NOT_SET\n\t\treturn nil\n\t}\n\tif (b[0] == 0x0003 || b[0] == 0x000a || b[0] == 0x000b || b[0] == 0x000c || b[0] == 0x000d || b[0] == 0x000e) && len(b) > 4 {\n\t\t// If this is a string machine score, it will be proceeded by a few NUL/whitespace chars\n\t\t// e.g. ' offTopicScore'\n\t\t// It also has a trailing space\n\t\t// bytes hex before trimming: 0E0000006F6666546F70696353636F726500\n\t\ts := string(b[4 : len(b)-1])\n\t\t*m = MachineScoreTypeFromString(s)\n\t} else {\n\t\t// Otherwise, this is a uint8\n\t\t*m = MachineScoreType(binary.LittleEndian.Uint16(b))\n\t}\n\n\t// Now we should have a uint8 - check to see if it's valid\n\tif !m.IsValid() {\n\t\t*m = INVALID_SCORE\n\t\treturn errors.WithStack(fmt.Errorf(\"'%s' (%X) is not a valid score type. valid choices: [%s]\", string(b), b, strings.Join(machineScoreStrings, \", \")))\n\t}\n\treturn nil\n}\nUnmarshalJSON\"", "text": "We have a field in mongo which is stored in old data as a string and new data as a uint, so I attempted to support that pattern by using UnmarshalBSONValue as such:This works, but the fact that there are a range of control characters before a string (e.g. it could be 0x000a-0x000e) is confusing. With UnmarshalJSON, a string is prefixed and suffixed with a \".First question - can you help me understand why there are different NUL/control chars before a string?\nSecond and most important question - how should I be doing this in a more standard way? Decoding either as an uint8 or a string? (note - I realize mongo stores uint16 rather than uint8, but just want to be clear about what we’re ultimately trying to get from the function.", "username": "Topher_Sterling" }, { "code": "\\x00purpose: length string terminator\nbytes: [0 ... 3][4 ... N-1][N]\nUnmarshalBSONValuebsontype.Typebson.UnmarshalUnmarshalBSONValueuint8Int32uint8func (m *MachineScoreType) UnmarshalBSONValue(t bsontype.Type, b []byte) (err error) {\n\tif len(b) == 0 {\n\t\t*m = NOT_SET\n\t\treturn nil\n\t}\n\n\tswitch t {\n\tcase bsontype.String:\n\t\t// Ignore the first 4 bytes because they specify the length of the\n\t\t// string field, which has already been trimmed for us. Ignore the\n\t\t// last byte because it is always the terminating character '\\x00'.\n\t\ts := string(b[4 : len(b)-1])\n\t\tif s == \"\" {\n\t\t\t*m = NOT_SET\n\t\t\treturn nil\n\t\t}\n\t\t*m = MachineScoreTypeFromString(s)\n\tcase bsontype.Int32:\n\t\t// Go uint8 values are encoded as BSON Int32 by default.\n\t\t// Decode the value as an int32 and make sure it can fit in\n\t\t// a uint8.\n\t\ti := int32(binary.LittleEndian.Uint32(b))\n\t\tif i < 0 || i > math.MaxUint8 {\n\t\t\t*m = INVALID_SCORE\n\t\t\treturn fmt.Errorf(\"%d cannot be stored as a uint8 value and is not a valid score\", i)\n\t\t}\n\t\t*m = MachineScoreType(i)\n\tdefault:\n\t\t*m = INVALID_SCORE\n\t\treturn fmt.Errorf(\"unsupported BSON type %v\", t)\n\t}\n\n\t// Now we should have a uint8 - check to see if it's valid\n\tif !m.IsValid() {\n\t\t*m = INVALID_SCORE\n\t\treturn errors.WithStack(fmt.Errorf(\"'%s' (%X) is not a valid score type. 
valid choices: [%s]\", string(b), b, strings.Join(machineScoreStrings, \", \")))\n\t}\n\treturn nil\n}\n", "text": "Hey @Topher_Sterling thanks for the question! The extra bytes you’re seeing is the BSON binary encoding for a string. Here’s how the BSON specification describes the string format:string \t::= \tint32 (byte*) “\\x00” \tString - The int32 is the number bytes in the (byte*) + 1 (for the trailing ‘\\x00’). The (byte*) is zero or more UTF-8 encoded characters.The first 4 bytes are 32-bit integer that describe the length of the string. The next N-1 bytes are the actual string bytes. The Nth byte is always the string terminating character \\x00.A more robust implementation of UnmarshalBSONValue should switch logic depending on the bsontype.Type parameter. For BSON strings, it’s probably safe to ignore the length bytes because bson.Unmarshal already trims the input bytes when calling UnmarshalBSONValue to only the bytes in the field being decoded (i.e. you already know the length). For BSON integer values, BSON only supports 32-bit and 64-bit signed integers. A Go uint8 value is encoded as a BSON Int32, so we should read the full 4-byte signed integer and make sure it can safely be converted to a uint8.For example:", "username": "Matt_Dale" }, { "code": "", "text": "Beautiful. This was excellent. I hadn’t thought to look at the bson spec for the offset bytes. I had just figured that the value that came in was the actual value already trimmed and I just missed it.Thank you for both the pointer to the spec and for explaining it. You are appreciated.", "username": "Topher_Sterling" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
UnmarshalBSONValue Help
2022-06-17T20:30:43.840Z
UnmarshalBSONValue Help
2,266
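The string layout described above (a 4-byte little-endian length, the UTF-8 bytes, then a trailing \x00) can be verified with any driver's BSON library. A small Python sketch using the bson package that ships with PyMongo:

```python
import struct
import bson  # ships with PyMongo

doc_bytes = bson.encode({"score": "offTopicScore"})

# Skip the document header (int32 total length), the element type byte
# (0x02 for string) and the field name "score\x00" to reach the string value.
offset = 4 + 1 + len(b"score\x00")
str_len = struct.unpack_from("<i", doc_bytes, offset)[0]    # bytes + trailing NUL
value = doc_bytes[offset + 4 : offset + 4 + str_len - 1]    # drop the '\x00'

print(str_len)        # 14 -> 13 characters plus the terminator
print(value.decode()) # "offTopicScore"
print(doc_bytes[offset + 4 + str_len - 1])  # 0 -> the terminating byte
```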
null
[ "data-api" ]
[ { "code": "", "text": "I tried to filter with a regex, it works in Compas : { “area” : /.no se que./ }\nbut when I tried with the data api I needed to put quotes\nas this : {“area” : “/.no se que./”} .Do anyone knows how can I resolve this problem?\nRegards", "username": "Brahian_Velazquez" }, { "code": "", "text": "As far as I know the short-cut syntax // is only available in JS.Try using the full syntax that uses $regex.", "username": "steevej" }, { "code": "", "text": "Hi, I already tried with the same results\n{“area”: {\"$regex\": /.no se que./}} working\n{“area”: {\"$regex\": “/.no se que./”}} not workingAny other idea?", "username": "Brahian_Velazquez" }, { "code": "sample_mflix.moviescurl --location --request POST 'https://data.mongodb-api.com/app/data-utkat/endpoint/data/beta/action/find' --header 'Content-Type: application/json' --header 'Access-Control-Request-Headers: *' --header 'api-key: clZkRovvPtdfHLVFwvUTMfsqapKsGU9eNoUkcM83vdQB1P1DFfEI57qtwcGJDeSv' --data-raw '{\n \"collection\":\"movies\",\n \"database\":\"sample_mflix\",\n \"dataSource\":\"Free\",\n \"filter\": {\"title\": {\"$regex\": \"Matrix\"}},\n \"projection\": {\"_id\":0, \"title\":1}\n}'\n{\"documents\":[{\"title\":\"The Matrix\"},{\"title\":\"The Matrix Reloaded\"},{\"title\":\"The Matrix Revolutions\"},{\"title\":\"Armitage: Dual Matrix\"}]}\n", "text": "Hi @Brahian_Velazquez and welcome in the MongoDB Community !This works for me against the sample_mflix.movies database from the sample data sets in Atlas.The same filter in Compass returns 4 docs.See the doc of $regex if you need options.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "yes, is working now , thanks", "username": "Brahian_Velazquez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Regex in data api not working
2022-06-15T23:12:23.721Z
Regex in data api not working
3,637
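The same working request as the curl above, sent from Python with the requests package; the URL, API key, and data source name must be replaced with your own:

```python
import requests

url = "https://data.mongodb-api.com/app/<app-id>/endpoint/data/beta/action/find"  # your Data API URL
headers = {
    "Content-Type": "application/json",
    "api-key": "<your-api-key>",
}
payload = {
    "dataSource": "Free",                        # your cluster name
    "database": "sample_mflix",
    "collection": "movies",
    "filter": {"title": {"$regex": "Matrix"}},   # regex as a plain string, not /.../
    "projection": {"_id": 0, "title": 1},
}

resp = requests.post(url, headers=headers, json=payload)
print(resp.json()["documents"])
```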
null
[ "replication" ]
[ { "code": "", "text": "I have three 3 node cluster on-premises ( 1 Primary and 2 Secondary ). I have same 3 node cluster in AZURE ( 1 Primary and 2 Secondary ) with same version and data but with different FQDN in azure. How shall I connect mongodb replica set in azure to mongodb replica set in on-premises so that they can start replicating the data. Later I would like to switch off on-premises mongodb and move finally to azure. My question is:", "username": "Amanullah_Ashraf1" }, { "code": "", "text": "Hi @Amanullah_Ashraf1,Sounds like you are trying to migrate to Atlas.In this case, the Atlas Live Migration Service is what you need.You cannot add directly on-prem nodes in Atlas. This would be a security issue and how would Atlas administer / monitor / upgrade these nodes?See my answer here:If you want to replicate data, you could manually set up Change Streams and push the write operations from on-prem to Atlas, but this is a lot of work (need monitoring, restart algorithm, etc).To keep it simple, I would just migrate the service in one shot when you are ready by starting from an empty cluster in Atlas equivalent (in size) to your current on-prem machines (or bigger) and I would migrate in one shot using the Migration tool.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks for your response. I am migrating to AZURE (IaaS) from On-Premises. Not on Atlas. Thanks", "username": "Amanullah_Ashraf1" }, { "code": "", "text": "Oops. When I saw Azure, my mind immediately switched to Atlas as it’s the easiest way to deploy MongoDB in Azure.Then if your oplog windows is large enough, I would recreate 3 nodes from scratch using the latest full backup in Azure. Then you just have to add these 3 new nodes in your on-prem RS and it will just catch up using the oplog (if they are still in the oplog window).This would avoid the 3 initial sync & costly data transfer (in money & time). You can make them hidden:true and p:0 if you want, it’s up to you. Just remember to update the connection strings in your apps.In the config you will have to add the IP addresses (bind ip).I don’t know much about Azure without Atlas so I’m not super useful. Sorry for the misunderstanding.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to replicate between On-Premises and Azure
2022-06-19T14:02:58.546Z
How to replicate between On-Premises and Azure
2,101
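For completeness, the "manually set up Change Streams" option mentioned above amounts to tailing the source replica set and replaying each write on the target. A deliberately simplified sketch (it ignores restarts, multi-collection ordering, and error handling, which is exactly why it is a lot of work in practice); URIs and namespaces are placeholders:

```python
from pymongo import MongoClient

source = MongoClient("mongodb://onprem-host:27017/?replicaSet=rs0")   # placeholder URIs
target = MongoClient("mongodb://azure-host:27017/?replicaSet=rs0")

src_coll = source["appdb"]["orders"]   # placeholder namespace
dst_coll = target["appdb"]["orders"]

with src_coll.watch(full_document="updateLookup") as stream:
    for change in stream:
        op = change["operationType"]
        if op in ("insert", "update", "replace"):
            doc = change.get("fullDocument")
            if doc is not None:                     # may be absent if since deleted
                dst_coll.replace_one({"_id": doc["_id"]}, doc, upsert=True)
        elif op == "delete":
            dst_coll.delete_one({"_id": change["documentKey"]["_id"]})
```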
null
[ "java", "production" ]
[ { "code": "", "text": "The 4.2.3 MongoDB Java & JVM Drivers release is a patch to the 4.2.2 release.The documentation hub includes extensive documentation of the 4.2 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.2/apidocs/ ", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Hi @Jeffrey_Yemin , will the MongoDB java driver sync version 4.2.3 supports MongoDB 5.0, Although the official documentation says no. But I was able to connect to MongoDB 5.0 from this java driver. I wanted to know what is the limitation of it. Please let me know. Its bit urgent and important for me to know.", "username": "Sweta_Das1" }, { "code": "", "text": "We have never tested the 4.2 driver against MongoDB 5.0, but I’m not aware of a specific failure scenario that would cause an existing application to break. There may be features of MongoDB 5.0 that you won’t be able to use (ones that require corresponding driver changes).", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Our application partners have recently upgraded the Mongo Java Driver to 4.2.3 from 3.11.2 on a performance test environments to test the new driver.\nThey connect to an instance running MongoDB 4.2.17\nwith maxIdleTimeMS set to 3600000 (1 hr ) for many of their heavily used services .\nThe default value for this per MongoDB is 0 but this tweak was made as a workaround for socket exception failures between the application node to the DB host .Before the driver version upgrade , we used to see around 5000 connections and now we have started the seeing 10000-12000 connections . The number of authentications have also gone up by 5x .\nWe are concerned about this increase in the number of connections and authentications .\nQuestion1 :- Is this a known fallout of MongoDB Java-Driver 4.2.3 due to reactive streams ?\nQuestion2 :- What is the best path forward in light of this increase in connections to the Database ?", "username": "Vinay_Setlur" }, { "code": "", "text": "The driver was installed correctly, there were no problems, except for initializing the database on existing projects, I found a solution here", "username": "Marina_Petrenko" }, { "code": "", "text": "Question1 :- Is this a known fallout of MongoDB Java-Driver 4.2.3 due to reactive streams ?No, it shouldn’t have anything to do with reactive streams. Is your application even using reactive streams?Question2 :- What is the best path forward in light of this increase in connections to the Database ?This is not a known issue, so the best path forward is to open a support ticket to investigate further. It’s not the kind of thing that is easily handled in a forum like this one, as it will likely involve client and server log analysis, etc.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Java Driver 4.2.3 Released
2021-04-07T17:57:48.968Z
MongoDB Java Driver 4.2.3 Released
13,351
null
[ "aggregation", "java", "crud" ]
[ { "code": "[\n {\n \"user\": \"abc\",\n \"seconds\": 1111,\n \"time\": ISODate(\"2020-05-05T00:00:00Z\")\n },\n {\n \"user\": \"abc\",\n \"seconds\": 2222,\n \"time\": ISODate(\"2020-05-05T00:00:00Z\")\n }\n]\nsecondstime4.6.0var alerts = db.getCollection(\"alerts\");\n\nalerts.updateMany(eq(\"user\", user),\n set(new Field<>(\"time2\",\n new Document(\"$dateAdd\",\n new Document(\"startDate\", \"$time\")\n .append(\"unit\", \"second\")\n .append(\"amount\", \"$seconds\"))\n )));\n", "text": "With a collection ofI need to add another field, which adds the seconds to the time of individual records.This seems to be possible with dateAdd which is added in version 5 (the database is version 5).However, I am not able to achieve that with MongoDB Java driver 4.6.0 .I have tried:Thanks a lot.", "username": "Ahmed_Ashour" }, { "code": "", "text": "I am not able to achieve thatThere is nothing specific to the driver version for this.But without more information, log, trace or data after update, about what is happening, error, wrong result, warning, we cannot really help.The only thing I can think is that since auto reference fields from the existing documents you might need to use the syntaxThe following page provides examples of updates with aggregation pipelines.", "username": "steevej" }, { "code": "db.alerts.insertMany( [\n { _id: 1, seconds: 86400 },\n { _id: 2, seconds: 172800 }\n] )\ntimesecondsdb.alerts.updateMany({}, [\n {\n $set: { time: {\n $dateAdd: {\n startDate: \"$$NOW\",\n unit: \"second\",\n amount: \"$seconds\"\n }\n } }\n }\n])\nvar alerts = db.getCollection(\"alerts\");\n\nalerts.updateMany(new Document(),\n set(new Field<>(\"time\",\n eq(\"$dateAdd\",\n new Document(\"startDate\", \"$$NOW\")\n .append(\"unit\", \"second\")\n .append(\"amount\", \"$seconds\")))));\n \"time\": {\n \"$dateAdd\": {\n \"startDate\": \"$$NOW\",\n \"unit\": \"second\",\n \"amount\": \"$seconds\"\n }\n }\ndateAdd", "text": "Sorry for not being clear.Let’s say we have:the following works fine on the shell, which sets field time to be now plus the seconds fields.The issue is that I don’t know how to transfer this to Java, for example:produces an object withI am under the impression that dateAdd should be added to the Java driver (in Aggregate class).", "username": "Ahmed_Ashour" }, { "code": "eqnew Document", "text": "The culprit must be the eq() builder. I don’t think it can be used in this context. The $dateAdd is implemented in the server, so it should work.Try to simply replace eq with new Document.", "username": "steevej" }, { "code": "eqnew Documentalerts.updateMany(doc, eq(...))alerts.updateMany(doc, List.of(eq(...)))updateMany(Bson filter, Bson update)updateMany(Bson filter, List<? extends Bson> update)", "text": "Thanks, eq is behaving as new Document.After enabling profiling and checking the different, the reason is:alerts.updateMany(doc, eq(...)) doesn’t workbutalerts.updateMany(doc, List.of(eq(...))) works.In other words:updateMany(Bson filter, Bson update) doesn’t workbutupdateMany(Bson filter, List<? extends Bson> update) does.I would consider this a bug, but I guess the team would be better be able to evaluate it.", "username": "Ahmed_Ashour" } ]
How to update many records with `dateAdd` in Java
2022-06-14T11:55:08.004Z
How to update many records with `dateAdd` in Java
1,778
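The detail that resolved this, that a pipeline update must be sent as a list of stages rather than a plain update document, applies in the other drivers too. The same $dateAdd update in PyMongo (MongoDB 5.0+), using the field names from the example above:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
alerts = client["test"]["alerts"]

# Passing a *list* makes this an aggregation-pipeline update, so "$time" and
# "$seconds" refer to each document's own fields; a plain dict would not.
alerts.update_many(
    {},
    [
        {"$set": {
            "time2": {
                "$dateAdd": {
                    "startDate": "$time",
                    "unit": "second",
                    "amount": "$seconds",
                }
            }
        }}
    ],
)
```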
null
[ "crud" ]
[ { "code": "", "text": "I am using updateOne() method to update a document in database. Then just after updating I am logging results to see updated changes in document, changes are not reflecting. But when I am logging results after a sleep for 100 ms, changes are reflected. I am not able to understand why this delay is there.", "username": "Jatin_Kumar2" }, { "code": "", "text": "Share your code. Most likely there is something wrong with it.", "username": "steevej" }, { "code": "private void updateState(List FilesInfo, string state)\n{\nforeach(File file in FilesInfo)\n{\nfile.State = state;\nfile.UpdateTime = DateTimeService.GetDateTime();\n\ndbrepository.UpdateOne(i=>i.Id.Equals(file.Id), Builders.Update.Set(i=>i.State, state);\n}\n}\n\n", "text": "This function written in c# is used to update state of the file in database.", "username": "Jatin_Kumar2" }, { "code": "", "text": "This is the code that does the update. It is the code doing the verification that’s important.You should be doing your updateOne inside a bulkWrite to minimize database interaction.I do not know c#, but is it possible that UpdateOne is asynchronous like in JS and that you need to await the result. Your sleep simply allows the async operation to complete.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
updateOne() method delay in updating document in database
2022-06-08T04:02:44.757Z
updateOne() method delay in updating document in database
2,023
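The two suggestions above, awaiting the update before reading and batching the per-file updates into one bulk write, look like this in a synchronous PyMongo sketch; in an async driver such as Motor or the C# driver the call must be awaited before the follow-up read. Namespace and field names are assumptions:

```python
from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")
files = client["test"]["files"]   # placeholder namespace

updates = [
    UpdateOne({"_id": f["_id"]}, {"$set": {"state": "PROCESSED"}})
    for f in files.find({"state": "NEW"})
]
if updates:
    result = files.bulk_write(updates)          # one round trip instead of N
    print(result.matched_count, result.modified_count)

# The write is acknowledged when bulk_write returns, so this read sees it.
print(files.count_documents({"state": "PROCESSED"}))
```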
null
[ "database-tools" ]
[ { "code": "mongoimport --db test --collection=entity --type=tsv --fieldFile=entity.ff --columnsHaveTypes --mode=upsert --file=entity_mapping.tsv", "text": "With the following commandmongoimport --db test --collection=entity --type=tsv --fieldFile=entity.ff --columnsHaveTypes --mode=upsert --file=entity_mapping.tsvI get the following error:Failed: type 2 does not support argumentsI’m pretty sure that the syntax is correct as it worked for another collection, so it must be my format file (entity.ff)_id.string()\nname.string()\ndescription.string()\nis_liquid.boolean()\nndb_no.string()\nfdc_id.int64()\nalt.string()\napproved.boolean()\nanydish_category.string()\nanydish_category_id.string()\nanydish_food_label.string()\nis_label_default.boolean()\ngeneric_name.string()\nis_generic_default.boolean()What is wrong with this file?", "username": "Ilan_Toren" }, { "code": "mongoimport --db=test --collection=entity --type=tsv --fieldFile=entity.ff --columnsHaveTypes --mode=upsert --file=entity_mapping.tsv\n=--db test", "text": "Hello @Ilan_Toren ,Can you try above command? I think you were missing = in --db test.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Proper format for mongoimport --fieldFile
2022-06-09T13:39:30.721Z
Proper format for mongoimport --fieldFile
1,478
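If mongoimport keeps rejecting the field file, a small script is a workable fallback for a one-off load. A hedged PyMongo sketch that reads the TSV and converts a couple of the typed columns by hand; the file name and field names follow the post, and the remaining columns are left as strings:

```python
import csv
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
entity = client["test"]["entity"]

fieldnames = [
    "_id", "name", "description", "is_liquid", "ndb_no", "fdc_id", "alt",
    "approved", "anydish_category", "anydish_category_id", "anydish_food_label",
    "is_label_default", "generic_name", "is_generic_default",
]

def to_bool(value):
    return value.strip().lower() in ("true", "1", "yes")

docs = []
with open("entity_mapping.tsv", newline="") as fh:
    for row in csv.DictReader(fh, delimiter="\t", fieldnames=fieldnames):
        row["is_liquid"] = to_bool(row["is_liquid"])
        row["approved"] = to_bool(row["approved"])
        row["fdc_id"] = int(row["fdc_id"]) if row["fdc_id"] else None
        docs.append(row)   # convert the other boolean columns the same way

if docs:
    entity.insert_many(docs)  # use replace_one(..., upsert=True) per row for upsert mode
```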
null
[ "connecting" ]
[ { "code": "", "text": "Hi everyone, I have been having this issue since few days ago, when ever I start my node server I got this connect ECONNREFUSED ::1:27017, my mongo service is running; I have used several approach uninstalling and reinstalling agian momgo", "username": "temitope_Adekeye" }, { "code": "", "text": "my mongo service is runningSince you say service is up can you connect by mongo shell or by other tool like Compass to your mongo DB?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "change localhost word to\n127.0.0.1", "username": "Ayoub_N_A" }, { "code": "::10:0:0:0:0:0:0:1mongodnet.ipv6truenet.bindIpmongodlocalhost:27017127.0.0.1:27017", "text": "Welcome to the MongoDB Community Forums @temitope_Adekeye !Your connection refused message indicates an attempted connection via the IPv6 localhost alias ::1 (0:0:0:0:0:0:0:1).By default mongod does not bind to IPv6 addresses: you would have to set net.ipv6 to true and add appropriate IPv6 addresses in net.bindIp.If you are using the default mongod configuration you should be able to connect to localhost:27017 or the equivalent IPv4 address 127.0.0.1:27017.If you are still having trouble connecting, please provide more information on your environment:Regards,\nStennie", "username": "Stennie_X" }, { "code": "mongoose.connect('mongodb://localhost:27017/MovieApp')\nsystemLog:\n destination: file\n path: /opt/homebrew/var/log/mongodb/mongo.log\n logAppend: true\nstorage:\n dbPath: /opt/homebrew/var/mongodb\nnet:\n bindIp: 127.0.0.1\n", "text": "Hi @Stennie_X,I’m experiencing similar issues when trying to connect with MongoDB using mongoose.Works fine when changing the URL from localhost to 127.0.0.1. My operating system is a M1 based macOS. I’m running MongoDB as a (i.e. the mongod process) macOS Service**My MongoDB confi file stored in “/opt/homebrew/etc” as follows:", "username": "Armindaz" }, { "code": "", "text": "most likely localhost is not defined as 127.0.0.1 but to something else.try ping -n localhost to find out", "username": "steevej" }, { "code": "", "text": "@steevej shows up as 127.0.0.1 for me", "username": "Armindaz" }, { "code": "ps -aef | grep [m]ongod\nss -tlnp | grep [2]7017\n", "text": "Then your error must be different from:ECONNREFUSEDPlease post a screenshot of what you are doing that shows the issue you are having.Also share the output of the following commands:You may replace ss with netstat if have not the former.", "username": "steevej" }, { "code": "", "text": "@Armindaz, any followup on this?", "username": "steevej" }, { "code": "", "text": "hey@Stennie can you help me insetting up net,ipv613", "username": "Raja_Rishabh" }, { "code": "", "text": "Same here. MacOS running on M1 Pro, having localhost in the env file does not seem to work. I changed to 127.0.0.1 and it works perfectly.", "username": "Danijel_Filipovic" }, { "code": "mongo localhostmongo 127.0.0.1", "text": "I have the same problem with macOS Monterey running on Intel: I can connect with 127.0.0.1 but not with localhost using MongoClient.\nMy localhost is well set to 127.0.0.1. mongo localhost and mongo 127.0.0.1 are both working.\nmongod is running with “mongod --config /usr/local/etc/mongod.conf”", "username": "Nicolas_Traut" }, { "code": "sudo launchctl limit maxfiles 2147483647 2147483647", "text": "Note that this problem doesn’t happen to me directly when I start my computer, it only appears after some time of usage. 
I tried to change the rlimits with sudo launchctl limit maxfiles 2147483647 2147483647 but it didn’t appear to fix the issue.", "username": "Nicolas_Traut" }, { "code": "", "text": "as requested:\n\nimage960×1040 62 KB\n", "username": "Elvis_Van" }, { "code": "", "text": "This thread already provides the solution. Read carefully. The most important post is stennie’s about IPv6.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ECONNREFUSED ::1:27017
2021-11-11T10:39:34.749Z
ECONNREFUSED ::1:27017
69,534
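The quickest way to confirm it is the IPv6/IPv4 mismatch rather than a dead server is to target the IPv4 loopback address explicitly and ping, which is the same fix as changing localhost to 127.0.0.1 in the connection string. A PyMongo equivalent:

```python
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# "localhost" may resolve to ::1 first; mongod only listens there if
# net.ipv6 is true and ::1 is in net.bindIp, so target 127.0.0.1 directly.
client = MongoClient("mongodb://127.0.0.1:27017", serverSelectionTimeoutMS=3000)
try:
    print(client.admin.command("ping"))   # {'ok': 1.0} if mongod is reachable
except ServerSelectionTimeoutError as exc:
    print("still unreachable:", exc)
```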
null
[ "queries", "dot-net", "many-to-many-relationship" ]
[ { "code": "UserCars {\n UserId\n CarId\n}\nCars {\n Id\n Make\n Model\n Name\n}\n", "text": "I’m having the darnest time trying to get a particular query right. It’s a fairly simple SQL query, but my searches online (Stack, Google, etc.) on how to do it with the C# Driver API is just evading me. If someone here can help me, it’d be much appreciated. Here’s my scenario:Assuming I have the following 2 collections:I’d like to get all Cars where the UserId = X (using the join collection).Thanks!", "username": "Cellaret_App" }, { "code": "", "text": "Can’t you call usercars and populate carid? sure, you’ll also get userid, but you can play with the structure of the data you get.I know my solution isn’t the best, but it’s something. I’m not an expert.", "username": "Alex_Blattner" }, { "code": "", "text": "i do not know how to express it in c# but you could use Compass to create the aggregation and then export it in c#UserCars.aggregatea match stage with UserId:X\na lookup stage from:Cars with localField:CarId and foreignField:Id", "username": "steevej" } ]
Querying a Collection by a Match on an ID1-ID2 Collection
2022-06-20T05:18:11.000Z
Querying a Collection by a Match on an ID1-ID2 Collection
2,348
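Steeve's suggestion (a $match on the join collection followed by a $lookup into Cars) as a runnable pipeline, shown in PyMongo so the stages are easy to read, and exportable to C# from Compass as mentioned. Field names follow the post:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["test"]          # placeholder database

user_id = "X"                # the user to look up

pipeline = [
    {"$match": {"UserId": user_id}},
    {"$lookup": {
        "from": "Cars",
        "localField": "CarId",
        "foreignField": "Id",        # use "_id" if Cars keeps the default key
        "as": "car",
    }},
    {"$unwind": "$car"},
    {"$replaceRoot": {"newRoot": "$car"}},   # return just the Car documents
]
cars = list(db["UserCars"].aggregate(pipeline))
print(cars)
```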
null
[ "atlas-cluster", "serverless" ]
[ { "code": "", "text": "Let’s say I have a machine inside my private network, behind a Firewall with all traffic to the Internet Restricted.\nWhat Hosts/Ports do I need to allow so an App can connect to my MongoDB Serverless Instance?Currently I have an Instance in “AWS / us-east-1” and I can’t reach it from a server inside our Network, I’m being asked to provide Host & Port to whitelist. Initially I took the name from the connection string.Example: “mongodb+srv://dev:[email protected]/my-awesome-db?retryWrites=true&w=majority”I asked the guys from Networking to whitelist:\nHost: mydb.iuytw.mongodb.net\nPorts: 27015, 27016, 27017However it didn’t work. I’ve read that for clusters you need the lists of clusters but for Serverless I don’t think it’s the same. As I said, all I know is that the region is \"“AWS / us-east-1”. Thanks in advance.", "username": "Cristopher_Rosales" }, { "code": "", "text": "Hi @Cristopher_Rosales - Welcome to the community!In a typical server environment where the topology / architecture is more static / predictable, the adding of hosts or servers to a whitelist will generally work for your use case stated. However, serverless doesn’t currently provide the capability for you to “whitelist” a server / host. This is because in a serverless environment, the situation is a lot more dynamic, where resources are constantly added and removed according to your needs. Thus, the same method that works in a server environment won’t necessarily work the same way (or at all) in a serverless environment.Having said that, if it suits your use case, please consider setting up a private endpoint for connectivity to serverless instances:\nimage1892×1060 210 KB\nHope this helps.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Necessary Firewall rules (Hosts and Ports) to reach Serverless MongoDB
2022-06-10T16:19:19.103Z
Necessary Firewall rules (Hosts and Ports) to reach Serverless MongoDB
1,944
null
[]
[ { "code": "", "text": "I’ve recently upgraded mongo from 3.4 and 3.6 and noticed a significant increase in IOPS. With 3.4 the disk util was 50% and now with 3.6 it’s up to 80-90%.It appears that ever since 3.6 there was a change that causes the journal to flush more to disk.\nhttps://jira.mongodb.org/browse/SERVER-37233The issue listed above states that it is a known issue and is in fact working as designed. However, the issue also claims that the increase in IOPS is isolated to primary members and what I’m seeing is that secondaries are affected as well. The issue also states that in case disk becomes loaded, mongo gives precedence to write ops over flushing to journal, effectively preserving the same expected performance before the upgrade.However, I cannot really rely on that as I’m seeing the disk IO climbing to 90%. Is there a way to handle the increase in IO, reduce it somehow? Has anyone else also come across this issue?", "username": "vonschnappi_N_A" }, { "code": "", "text": "Hi @vonschnappi_N_A welcome to the community!You are correct that the higher disk activity is expected in MongoDB 3.6 compared to 3.4. This is due to improvements to make replication more reliable. Specifically, in 3.6 and newer, MongoDB only replicate journaled data. This makes replication more reliable since if the primary crashes while replicated writes are not journaled, it could end up with “holes” in its oplog.Having said that, MongoDB 3.6 is not supported anymore per April 2021. I would encourage you to upgrade to the latest version (currently 5.0.9) for bugfixes and performance improvements.Best regards\nKevin", "username": "kevinadi" }, { "code": "%util: utilization\nIf this value is below 100%, then the device is completely idle for some of the time. It means there's definitely spare capacity.\n\nHowever if this is at 100%, it doesn't necessarily mean that the device is saturated. As explained before, some devices require multiple outstanding I/O requests to deliver their full throughput. So a device which can handle (say) 8 concurrent requests could still show 100% utilisation when only 1 or 2 concurrent requests are being made to it all the time. In that case, it still has plenty more to give. You'll need to look at the the throughput (kB/s), the number of operations per second, and the queue depth and services times, to gauge how heavily it is being used.\n", "text": "Hi Kevin and thank you for your response.Regarding 3.6 not being supported, your’e absolutely right. I’m actually in the process of reaching mongo latest and so I need to pass through all versions.After some investigation I realized that I was looking at the wrong metric. I was looking at disk util% rather than IOPS. Indeed mongo 3.6 shows high IO but as I am using a provisioned IOPS disk, that shouldn’t be a problem. That the disk is 90% utilized is no cause for concern as long as the disk is able to serve requests.I’m relying on this explanation:Taken from this post.Who ever comes across this post please note that the increase in IO is expected and as @kevinadi explained it’s for making replication and restore from crash more reliable and efficient. Make sure that the disks you give your mongo are provisioned IOPS and have enough of those to serve requests.", "username": "vonschnappi_N_A" } ]
High IO After Upgrade to 3.6
2022-06-15T20:00:34.356Z
High IO After Upgrade to 3.6
1,622
null
[]
[ { "code": "", "text": "How is RAM distributed for the replica sets?For example when we have a M30 cluster with 8GB RAM - does it mean that each of the “3 data bearing servers” has 8GB or just 8GB in total for all 3?", "username": "max_matinpalo" }, { "code": "M30", "text": "Hi @max_matinpalo,For example when we have a M30 cluster with 8GB RAM - does it mean that each of the “3 data bearing servers” has 8GB or just 8GB in total for all 3?As noted on the Atlas Cluster Sizing and Tier Selection - Memory documentation:Memory refers to the total physical RAM available on each data bearing node of your Atlas cluster. For example, an M30 standard replica set is configured with 8 GB RAM on each of the 3 data bearing nodes.Funnily enough the example is also M30 Hope this helps!Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replica Sets - RAM usage
2022-04-28T06:24:53.449Z
Replica Sets - RAM usage
1,578
null
[ "node-js", "crud", "mongoose-odm" ]
[ { "code": "", "text": "Hello everyone, I’m new to mongodb as well as the community, I was looking to replace the mongodb _id with a custom one when i do an insert or insertMany.\nOne way i can think of is looping through each element in the array and add _id field with the id i want, just wanted to know if this approach is right or is there a cleaner and better way to achieve this ?\nThanks", "username": "Ashish_Kumar3" }, { "code": "", "text": "I don’t see any other way. If you want to assign a custom _id to an array of documents, the you have to loop over the documents and assign your custom _id.", "username": "steevej" }, { "code": "", "text": "Thanks steeve, will try this.", "username": "Ashish_Kumar3" } ]
How can I do insertMany with a custom _id field?
2022-06-19T19:03:16.046Z
How can I do insertMany with a custom _id field?
2,912
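For reference, supplying your own _id is just a matter of setting the field on each document before the insert; the server only generates an ObjectId when _id is missing. A PyMongo sketch in which the id scheme is an assumption:

```python
from pymongo import MongoClient
from pymongo.errors import BulkWriteError

client = MongoClient("mongodb://localhost:27017")
items = client["test"]["items"]   # placeholder namespace

raw = [{"name": "a"}, {"name": "b"}, {"name": "c"}]

# One pass over the array to stamp the custom _id, the same idea as the loop
# discussed above. Custom _ids must be unique or the insert raises.
docs = [{"_id": f"item-{i}", **doc} for i, doc in enumerate(raw, start=1)]

try:
    result = items.insert_many(docs)
    print(result.inserted_ids)    # ['item-1', 'item-2', 'item-3']
except BulkWriteError as exc:
    print("duplicate _id:", exc.details["writeErrors"][0]["errmsg"])
```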
null
[ "replication" ]
[ { "code": "", "text": "Hello,we’re managing a small MongoDB 4.2 ReplicaSet cluster (PSA) with about 54GBs of on-disk WireTiger-based data.Today, I was upgrading the replica from <4.0 and MMAPv1 (Meaning I had to delete its whole datadir and let it start up from the primary from scratch).All worked fine after I readded it into the replicaset and it transition into the Startup2 state, meaning it was fetching data from primary.What I didn’t expect is for the primary to crash due to exceeding the default max number of open files (64k)The reason it reached such a high number is probably that our client has a large number of small collections inside the database, resulting in about 74k inodes taken up by the datadir.It seems the server loops over all the files, incrementally sending them over, without ever closing them again.Even now, several hours after the incident, the process’s holding ~73.5k file descriptors open (We had to increase the limit to allow the replica to start up).Is this the intended behavior? Only “solution” to this problem I was able to find online is “Increase the max FDs limit”, which is… Not really a solution, rather, a hotpatch…", "username": "Lukas_Pavljuk" }, { "code": "", "text": "Hi @Lukas_Pavljuk welcome to the community!The reason it reached such a high number is probably that our client has a large number of small collections inside the database, resulting in about 74k inodes taken up by the datadir.I think, if the deployment requires that many files to be open, then it needs that many files to be open The other solution is to artificially limit the amount of number of collections, which is probably not the best either.Another possibility is to split the replica set into 2-3 smaller ones (via sharding or just separate deployments) if it hits some issues due to the number of files open.Having said that, are you seeing any issues (performance or otherwise) due to the large number of open files?Best regards\nKevin", "username": "kevinadi" } ]
MongoDB 4.2.19: Extremely high File Descriptor counts (70k+)
2022-06-17T15:48:21.558Z
MongoDB 4.2.19: Extremely high File Descriptor counts (70k+)
1,473
null
[]
[ { "code": "", "text": "Hello,\nI try to create realm app with flexible sync type using App Services Admin API. I created it. But I have a problem when created. I want to enable development mode. Then I passed this one in json format.> “development_mode_enabled” : true,But development mode not enabled.\nCan I enable that development mode using App Services Admin API?", "username": "Salitha_Shamood" }, { "code": "", "text": "Hi Salitha,Thanks for your question and welcome to the community!Are you doing this through the POST /apps api?\nI don’t believe this is possible with the admin api, would you be able to use realm-cli instead?Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Thanks for reply my question.Yes. I did it through POST method.I can create using realm-cli.\nBut I want to do it using admin api.\nBecause I want do that in server side.", "username": "Salitha_Shamood" }, { "code": "PUT \"/groups/{groupId}/apps/{appId}/sync/config\"\n{ development_mode_enabled: true }\n", "text": "You can use this endpoint.with this payload", "username": "Isuru_Jayathissa" }, { "code": "database_name", "text": "Hi Isuru,It is generally not recommended to use this API since you may make a mistake in including all the configurations that are required of development mode/sync, for instance your payload does not have a database_name property included which is required when turning this feature on in the UI.", "username": "Mansoor_Omar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to enable development mode using App Services Admin API?
2022-06-16T05:14:01.538Z
How to enable development mode using App Services Admin API?
2,021
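The endpoint from this thread, wrapped in a script, looks roughly like the following, with the caveat above that the payload must carry the full sync configuration (at least database_name alongside development_mode_enabled) and that the access token comes from the Admin API login flow. The base URL and ids are placeholders; treat this as a sketch rather than a complete configuration:

```python
import requests

base = "https://realm.mongodb.com/api/admin/v3.0"   # assumed Admin API base URL
group_id = "<atlas-project-id>"
app_id = "<app-internal-id>"
token = "<admin-api-access-token>"   # obtained via the Admin API login flow

resp = requests.put(
    f"{base}/groups/{group_id}/apps/{app_id}/sync/config",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "development_mode_enabled": True,
        "database_name": "my_sync_db",   # required when enabling development mode
    },
)
print(resp.status_code, resp.text)
```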
null
[ "queries" ]
[ { "code": "const { nodeNameSearch, sortOn, sortByAscDesc, skipNumber, takeNumber, nodeFilter, collateralRequired } = query;\nconsole.log(\"nodeNameSearch, sortOn, sortByAscDesc, skipNumber, takeNumber, nodeFilter, collateralRequired: \", nodeNameSearch, sortOn, sortByAscDesc, skipNumber, takeNumber, nodeFilter, collateralRequired);\n\nvar collection = context.services.get(\"mongodb-atlas\")\n .db(\"Nodes\")\n .collection(\"liveNodes\");\n\nvar result = \"\";\n\nif(nodeNameSearch != 'ALL'){\n // try only do a text search first\n result = collection.find({nodeID: {$regex : nodeNameSearch}},\n {\n title: 1, \n nodeId: 1, \n icon: 1, \n nodeType: 1, \n \"nodePriceData.rewardedAsset\": 1,\n \"nodePriceData.collateral\": 1,\n \"nodePriceData.currentAssetPrice\": 1,\n \"nodePriceData.nodeValue\": 1,\n \"resources\":1,\n \"mediaLinks\": {$slice: 5},\n \"supportedOs\": 1, \n \n}\n ).then(rslt => {\n rslt.supportedOs.forEach(function (item, index) {", "text": "Hi,\nWas wondering if someone could help me - I am trying to limit the fields returned through HTTP calls - in the function, I have put the following which I believe to be related the error I get the following error - “FunctionExecutionError”, Any guidance/help would be really appreciated.exports = function({ query, headers, body }, response) {", "username": "pb665" }, { "code": "function isPositiveInteger(str) {\n return ((parseInt(str, 10).toString() == str) && str.indexOf('-') === -1);\n}\n\nfunction log_ip(payload) {\n const log = context.services.get(\"pre-prod\").db(\"logs\").collection(\"ip\");\n let ip = \"IP missing\";\n try {\n ip = payload.headers[\"X-Envoy-External-Address\"][0];\n } catch (error) {\n console.log(\"Can't retrieve IP address.\")\n }\n console.log(ip);\n log.updateOne({\"_id\": ip}, {\"$inc\": {\"queries\": 1}}, {\"upsert\": true})\n .then( result => {\n console.log(\"IP + 1: \" + ip);\n });\n}\n\nexports = function(payload, response) {\n log_ip(payload);\n\n const {uid, country, state, country_iso3, min_date, max_date, hide_fields} = payload.query;\n const coll = context.services.get(\"mongodb-atlas\").db(\"covid19\").collection(\"global\");\n \n var query = {};\n var project = {};\n const sort = {'date': 1};\n \n if (uid !== undefined && isPositiveInteger(uid)) {\n query.uid = parseInt(uid, 10);\n }\n if (country !== undefined) {\n query.country = country;\n }\n if (state !== undefined) {\n query.state = state;\n }\n if (country_iso3 !== undefined) {\n query.country_iso3 = country_iso3;\n }\n if (min_date !== undefined && max_date === undefined) {\n query.date = {'$gte': new Date(min_date)};\n }\n if (max_date !== undefined && min_date === undefined) {\n query.date = {'$lte': new Date(max_date)};\n }\n if (min_date !== undefined && max_date !== undefined) {\n query.date = {'$gte': new Date(min_date), '$lte': new Date(max_date)};\n }\n if (hide_fields !== undefined) {\n const fields = hide_fields.split(',');\n for (let i = 0; i < fields.length; i++) {\n project[fields[i].trim()] = 0\n }\n }\n \n console.log('Query: ' + JSON.stringify(query));\n console.log('Projection: ' + JSON.stringify(project));\n \n coll.find(query, project).sort(sort).toArray()\n .then( docs => {\n response.setBody(JSON.stringify(docs));\n response.setHeader(\"Contact\",\"[email protected]\");\n });\n};\n", "text": "Hi @pb665 and welcome in the MongoDB Community !Where is the rest of the function? 
Can you please fix the code block so it’s complete and correctly displayed?\nI guess you noticed already but this forum supports standard markdown code blocks.Here is an example of a custom function I’m using to implement a REST API. It’s old school because it’s still using the old 3rd Party Service that is now deprecated. So there may be a few details to change. But I think it’s just the function signature (params extraction). The interesting part is just the last 5 lines I guess for you.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks for your reply and apologies for not texting the whole function and late response. I think the learning point for me is that MongoDB function does not support a then function or a foreach ?", "username": "pb665" }, { "code": "", "text": "I’m using “then” in my code block above, so it’s definitely supported.\nSame for forEach. See the doc here for example.", "username": "MaBeuLux88" } ]
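For reference, here is a minimal sketch of the projection form discussed in this thread, reusing the database and collection names from the question; the filter and the list of projected fields are placeholders:

```javascript
// Atlas Function sketch: find(filter, projection) takes the projection as its
// second argument, and the 1/0 flags decide which fields are returned.
exports = async function (payload, response) {
  const coll = context.services
    .get("mongodb-atlas")
    .db("Nodes")
    .collection("liveNodes");

  const docs = await coll
    .find(
      { nodeID: { $regex: "sampleSearch" } },               // placeholder filter
      { title: 1, nodeType: 1, supportedOs: 1, _id: 0 }     // only these fields come back
    )
    .limit(20)
    .toArray();

  response.setBody(JSON.stringify(docs));
};
```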
Get response "FunctionExecutionError" from function
2022-06-13T11:27:01.999Z
Get response “FunctionExecutionError” from function
3,460
null
[ "dot-net", "crud", "containers" ]
[ { "code": "MongoDB.DriverUpdateManyAsyncawait UpdateManyAsyncFilterDefinitionUpdateDefinitionUpdateResult{\"update_modified_count\":3,\"update_matched_count\":3,\"update_is_acknowledged\":true}UpdateManyAsync", "text": "We are using the csharp MongoDB.Driver library, version 2.13.1.During our integration tests, the UpdateManyAsync method occasionally acts weirdly.\nThe flow is the following:Several facts:The only guess we currently have (rather than some read-write replica implementation which we don’t really think exists when using a simple docker-compose setup), is some caching done by the csharp driver.\nIs it possible that it sets some internal state (and therefore the changes are reflected properly when queried in the same flow) and fails to persist it to the actual DB from time to time?Any help will really be appreciated here.", "username": "Ilia_Shkolyar" }, { "code": "", "text": "Hi @Ilia_Shkolyar and welcome back !I could be completely wrong but here is my wild guess as this already happened to me, also in an integration tests / CICD setup.Unit Tests or integration tests are supposed to be completely independent from one another. So on your computer, if you run them one by one => no problem.The DB is reset with default values with X. You run the test. Confirm it’s now Y in your assert statements. All !But often CICD run tests in parallel to reduce build time. So now if Integration Tests 1 and 2 are running in parallel, and they rely on the same MDB collection, you can have a conflict or a race condition that can randomly make your test succeed or fail.Could this be what is happening here? The cache hypothesis doesn’t make sense though because the MDB collection could be altered by any other client so if you send a find command, it always HAS to get that data from the actual collection. Can’t cache anything here.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "tenantIdUpdateManyAsyncawait UpdateManyAsyncUpdateManyAsync", "text": "Hello @MaBeuLux88!First of all thanks for the response.\nWe have a dedicated tenantId field in our Mongo collections in order to fully support a “multi-tenant” approach.\nWe use the same mechanism in our integration tests, so each new test creates a unique tenant id.\nThis means that any DB operations (that indeed can happen in parallel) will not modify values for other tenants/tests.\nSo unfortunately no, this is not the case here.Our first suspicion was that UpdateManyAsync is changing the fields in the background and the “acknowledgment” is just to identify that the operation will happen sometime in the future.\nThis made sense that if the DB is under heavy load the fields could be updated after some time which can cause the tests to fail some time.But as I explained above, we added code that queries the DB and verifies the field values right after the await UpdateManyAsync is called, so that theory is invalid as well.What else can lead the UpdateManyAsync behavior to “revert” its operation from time to time?\nWe are truly out of ideas here…Thanks again for your help!", "username": "Ilia_Shkolyar" }, { "code": "", "text": "If you are in a mutli-doc ACID transaction that is aborted. But apparently that’s not the case here.\nOr if the entire Replica Set performs a rollback operation because the Primary failed => Elect a secondary that was lagging 1s behind (these not replicated operations are now “lost” => Primary comes back online => has to rollback 1s of write operations.But I guess that’s not that either. No nothing else really. 
It’s not a sync issue with unresolved promises and the check is performed before the resolution of the promise?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "async-await", "text": "No promises.\nStandard async-await with C# Mongo driver…", "username": "Ilia_Shkolyar" }, { "code": "", "text": "I don’t know if it’s called something else in C# but the principle is the same.", "username": "MaBeuLux88" } ]
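One way to rule the rollback / acknowledgement hypothesis in or out, sketched in the shell rather than in C#; the collection, tenant id and field names below are placeholders, not the thread's actual schema:

```javascript
// Write with majority write concern, then read the same documents back with
// majority read concern. If the change still "reverts" later, the cause is not
// an unacknowledged or rolled-back write.
db.entities.updateMany(
  { tenantId: "test-tenant-1", state: "old" },
  { $set: { state: "new" } },
  { writeConcern: { w: "majority" } }
)
db.entities.find({ tenantId: "test-tenant-1" }).readConcern("majority")
```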
UpdateMany changes are occasionally 'reverted'
2022-06-15T14:22:18.298Z
UpdateMany changes are occasionally ‘reverted’
2,923
null
[ "upgrading" ]
[ { "code": "", "text": "I am currently running v3.6.3 on one machine. I have another machine on v4.4.5. I want to move the data from the v3.6.3 machine to the v4.4.5 machine.I am using ubuntu 18.04Is this how I should do it:Make a backup (mongoexport)\nUpgrade to mongodb 4.0\nUpgrade to mongodb 4.2\nUpgrade to mongodb 4.4\nMake a backup (mongoexport)\nImport the backup data into v4.4.5 machine (mongoimport)Is all of this needed or can I just mongoexport the data from the v3.6 machine then mongoimport it into the v4.4 machine?", "username": "mental_N_A" }, { "code": "mongodumpmongorestoremongoimportmongoexportmongodumpmongorestoremongodmongodump", "text": "Hi @mental_N_A,You have multiple solutions to upgrade from MDB 3.6 to 4.4. But before you try anything BACKUP EVERYTHING and of course make sure that you can recreate a 3.6.3 node from your backup so you can always come back and try something else.mongodump & mongorestore can backup and restore entire databases in a MongoDB cluster. The BSON file they create might not be compatible if the gap between your MongoDB versions is too large (like 2.4 to 4.4 for example). But because 3.6 is not excessively old, it might work just fine.The big bonus here is that you don’t have to care about recreating your indexes because they will be saved as well in the metadata & they will be rebuilt on the new cluster. And you also don’t need to update 3.6 => 4.0 => 4.2 => 4.4. You can just mongodump in 3.6 and restore in a brand new 4.4 cluster.It’s not 100% guaranteed that it works as you might find compatibility issues but I would give it a shot.This solution is similar to mongodump / mongorestore but instead of using binary files (BSON), you are using plain text JSON files. It’s a lot slower than mongodump / mongorestore & you are not saving the metadata so you will have to recreate the indexes manually.Usually this is the path sysadmins take to avoid down times as it is possible to upgrade your MongoDB Replica Set with a rolling upgrade. This is guaranteed to work this time (unlike the 2 previous methods) but it will require to upgrade from one major version to the next.\nIn your case, you would have to follow these instructions in this order:These pages include step by step instructions and contain valuable warnings to avoid surprises like:\nimage791×186 20.8 KB\nMake sure to read these instructions ahead before planning your upgrades and making sure you don’t have anything specific in your MongoDB deployment that would prevent these steps.EDIT: Adding some links & details about mongodumpCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "100.5.2", "text": "I want to go with the first solution (the dump solution). But when I check the docs for mongodump, it shows only 4.0+ versions to be supported. Should I still go ahead and use the dump solution when upgrading from 3.6?Note: My mongodump version is 100.5.2", "username": "Pra_Deep" }, { "code": "/db/data", "text": "There is probably a reason why the docs says it’s not compatible. If you can afford to be offline, then in your situation I would:I think this would be safe. And in case something goes wrong, you still have your 3.6 nodes completely untouched and safe.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "[email protected]/db/data", "text": "Yes. I can afford to be offline. And my setup is simple, I have one db server only on the same machine as application. 
I didn’t setup cluster explicitly so probably I have one node only.\nTaking cues from your suggestions, is this what you’d suggest?Is that alright?Note: I want to upgrade to 4.4 only because I see that 5.0 is in active development, and I won’t have time to upgrade this again for at least next 6-12 months.", "username": "Pra_Deep" }, { "code": "mongod", "text": "All the versions that didn’t reach the End Of Life dates are in active development. At the moment it’s 4.0, 4.2, 4.4 and 5.0+. All these versions received minor updates, security patches and backports from the current version.5.0 is currently the recommended version for prod environments. Many bugs got fixed since 4.4 so it’s better to be there without these known issues in the way. But I understand the logic. If you are only using a single node, I would recommend to use a single node Replica Set which would unlock a few interesting features of MongoDB like Change Streams or multi-doc ACID transactions. These features rely on the presence of the oplog system collection that only exists in Replica Sets.My recommendation for your upgrade would be to leave the current node untouched. Just shutdown mongod and take a copy of the data folder to another PC.Then you can upgrade to 4.0 and take a mongodump using the latest version. And restore on 4.4 or 5.0 if you feel like it. Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "4.0", "text": "Running into issue while upgrading to 4.0 on Ubuntu 18.04. I am following the guide. On install, I get error - arm64 is not supported.\nBut the guide mentions to add source repo with arch=arm64. What am I missing? How do I fix it?There is already an issue on this forum mentioning the exact issue I am facing but without an answer", "username": "Pra_Deep" }, { "code": "mongodump+mongorestore", "text": "One more follow up question. Is there a difference between mongodump+mongorestore vs “copy the data folder” approach? Will that create a problem if I take that data folder to 4.0 and then keep upgrading from 4.0 → 4.2 → 4.4 and so on without doing the dump+restore.", "username": "Pra_Deep" }, { "code": "", "text": "If your machine isn’t arm64 then don’t add packages for arm64 architectures.“Copy the data folder” is faster than a dump. But you can’t migrate from 4.0.X to 5.0.X with that method. You must migrate from one major version to the next.If you dump/restore, then you can skip major releases in theory but I would recommend to avoid giant gaps and confirm that your mongodump / mongorestore are compatible with the versions you are targeting.", "username": "MaBeuLux88" }, { "code": "3.2 -> 3.4 -> 3.6 -> 4.0 -> 4.2 -> 4.4mongodb.conf", "text": "Got it. The path I chose was to upgrade standalone major version step by step 3.2 -> 3.4 -> 3.6 -> 4.0 -> 4.2 -> 4.4, and I am doing it on the same machine. Upgrade to 5.0 and changing the installation to use replica sets, I will cover later after ensuring that this standalone upgrade works without any issues.\nSo what I understand from your suggestions for this path that - I don’t need to use mongodump/restore as long as I make sure that during the upgrades, the db folder remains the same. Which is always the case because I’m defining db path in mongodb.conf which remains same throughout all the upgrades.I’m sorry for bothering you so much by asking probably obvious questions, I don’t want to make any mistake.", "username": "Pra_Deep" }, { "code": "db.adminCommand( { setFeatureCompatibilityVersion: \"4.2\" } )\n", "text": "Do a full backup before you start. 
Even better, backup the prod, restore it on another server and test your upgrade on that server.Make sure you read all the upgrade documentation for each version because they contain specificities and special commands that needs to be executed to complete the upgrade.For example, this one is common:But there are other useful information in these docs.From what I’m reading, always make sure to upgrade to the latest minor version for each of the major one: for example, prefer 4.2.20 to 4.2.1, etc.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "sudo apt-get update && sudo apt-get upgradesudo apt-get update && sudo apt-get upgrademongod --version5.0.9sudo apt-get purge mongoldb-orgE: Couldn't find package by regex/glob mongoldb-org-5.0.list \nE: Couldn't find package by regex/glob mongoldb-org-4.0.list \nE: Couldn't find package by regex/glob mongoldb-org-3.6.list \nE: Couldn't find package by regex/glob mongoldb-org-3.2.list \n", "text": "I accidentally upgraded 4.0 to 5.0(instead of 4.2).How it happened? I went to link “upgrade standalone to 4.2” and copied the command for keys and source repo but it was set to 5.0 (as it turns out, I have to choose the version no. in the left sidebar). And then as usual I ran sudo apt-get update && sudo apt-get upgrade.What can I do now?I’m on Ubuntu 18.04 currently", "username": "Pra_Deep" }, { "code": "", "text": "I have to say that it’s a tad easier on MongoDB Atlas. It takes like 4 clicks…I have no idea how you can fix this to be honest. Start over? Does the cluster works in 5.0 ? I don’t manage or install MongoDB with packages (only tar balls) and I’m on Debian. I only manage my prod envs on Atlas so I’m kinda useless at this point I’m afraid.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks for all the help. For now, I uninstalled all the mongo related packages one by one. And then installed 4.4 from scratch. So it is done now, upgraded to 4.4.\nEverything seems to work as usual, no issues. But will report back if I face any issues.", "username": "Pra_Deep" } ]
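The feature compatibility step called out above can be checked and bumped from the shell at each hop of the upgrade; the version string is just the 4.0 to 4.2 example:

```javascript
// Confirm where the cluster currently stands before installing the next major version
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Once the new binaries are running cleanly, raise the FCV to match them
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })
```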
How to upgrade from v3.6.3 to v4.4.5?
2021-04-11T13:46:57.708Z
How to upgrade from v3.6.3 to v4.4.5?
21,769
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hello,\nI am using sample_airbnb database with listingsAndReviews collection. I would like to use the string field\nroom_type:“Entire home/apt” to sort my query. Could you please let me know my error in the following script:db.listingsAndReviews.find().sort({room_type:1}).collation({ locale: “en”, caseLevel: true }).pretty()MongoServerError: Executor error during find command :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting.Thanks", "username": "john_Doe3" }, { "code": "", "text": "Hi @john_Doe3,\nEach document in the sample_airbnb’s listingAndReviews collection has around 45 fields.\nSince with more than 5.6k documents and an average size of 17KB/document it would take a lot of memory and time to sort the results.\nDo you need all the 45 fields that are there? If not, a projection on the fields you need might be helpful in your case.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "db.listingsAndReviews.find({projection:{ interaction: 1, name: 1, _id: 0 }}).sort({room_type:1}).collation({ locale: \"en\", caseLevel: true }).pretty()\n", "text": "Hi, thank you so much for your reply.\nI added projection without success. Could you please let me know the error.\nI only need to show some results.Thanks again!", "username": "john_Doe3" }, { "code": "find({projection:{ interaction: 1, name: 1, _id: 0 }}).sort({room_type:1})\ndb.listingsAndReviews.find(\n { }, \n { interaction: 1, name: 1, _id: 0 }\n)\n.sort({ room_type:1 })\n.collation({ locale: \"en\", caseLevel: true })\n.pretty()\n{}find(){ interaction: 1, name: 1, _id: 0 }interactionname_id", "text": "Hi @john_Doe3,That’s not the correct syntax to perform projection on the results.\nTry the following instead:WhereIf you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "Hi Sourahb, many thanks. I still get the same error. However, when I added limit of 500, it works properly. Adding a limit of 5000 wont work. Thanks again for your help!\nhave a good one,", "username": "john_Doe3" }, { "code": "", "text": "Dear Sourabh,\nI created the following collection (test) and would to use $graphLookup. I was wondering if by chance you can tell me what is the error?db.test.insert([ { node:10, connectedTo: [40]}, {node:20, connectedTo:[30,40]}, {node:30, connectedTo:[50]}, {node:40, connectedTo:[10,20,50]}, {node:5,connectedTo:[20,30,40] } ])db.test.aggregate([ {$match: {node:1}} , { $graphLookup: {from : “graph”, startWith: “$node”, connectFromField: “connectedTo”, connectedToField:“node”, as: “demo”} } ]).pretty()\nerror: MongoServerError: Unknown argument to $graphLookup: connectedToField", "username": "john_Doe3" }, { "code": "", "text": "Hi @john_Doe3,\nThe argument name in $graphLookup should be connectToField instead of connectedToField.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "Many thanks for your prompt reply! However, It does not show any output. 
Am I missing something?Atlas atlas-odp3nm-shard-0 [primary] sample_training> db.test.aggregate([ {$match: {node:1}} , { $graphLookup: {from : “graph”, startWith: “$node”, connectFromField: “connectedTo”, connectToField:“node”, as: “demo”} } ]).pretty()", "username": "john_Doe3" }, { "code": "", "text": "It is normal. Your from: collection named graph is empty. You inserted your nodes in a collection named test.", "username": "steevej" }, { "code": "", "text": "My apologies, I have already tried the test without success.\ndb.test.aggregate([ {$match: {node:1}} , { $graphLookup: {from : “test”, startWith: “$node”, connectFromField: “connectedTo”, connectToField:“node”, as: “demo”} } ]).pretty()", "username": "john_Doe3" }, { "code": "", "text": "\nimage1002×50 4.11 KB\n", "username": "john_Doe3" }, { "code": "", "text": "There is no node:1 in your test collection.", "username": "steevej" }, { "code": "", "text": "my my silly mistake. Many thanks for your patience and help!", "username": "john_Doe3" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
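Two common ways around the 32 MB in-memory sort limit hit at the top of this thread, assuming MongoDB 4.4 or newer; the index route is usually preferable, and note that the index must carry the same collation as the query for the sort to use it:

```javascript
// Option 1: allow this one query to spill its sort to disk
db.listingsAndReviews.find({}, { name: 1, room_type: 1 })
  .sort({ room_type: 1 })
  .collation({ locale: "en", caseLevel: true })
  .allowDiskUse()

// Option 2: create an index with a matching collation so the sort is never done in memory
db.listingsAndReviews.createIndex(
  { room_type: 1 },
  { collation: { locale: "en", caseLevel: true } }
)
```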
How to sort by string
2022-06-10T20:33:45.987Z
How to sort by string
4,739
https://www.mongodb.com/…1_2_1024x284.png
[ "crud" ]
[ { "code": " db.products.updateMany({\"images.url\":\"sampleURL\"},{$set : {\"images.url\":\"aaa\"}})", "text": "\nimage2524×702 80.4 KB\n\nwhat i’m doing wrong?\n db.products.updateMany({\"images.url\":\"sampleURL\"},{$set : {\"images.url\":\"aaa\"}})", "username": "Abhi_Raj" }, { "code": "imagesdb.products.updateMany(\n { \"images.url\": \"sampleURL\" },\n { $set : { \"images.0.url\": \"new_url\" } }\n)\n", "text": "Your images property is an array of objects. So you can NOT update the field of an array. But you can update the field of each item in an array.So, if you want to update the first item in an array, you would do it like this:", "username": "NeNaD" }, { "code": "", "text": "Since the query part queries the array you may use", "username": "steevej" }, { "code": "", "text": "Definitely better solution. ", "username": "NeNaD" }, { "code": "", "text": "but isn’t $update for for updating only a single value?", "username": "Abhi_Raj" }, { "code": "", "text": "If $ does not work in updateMany, which I cannot test right now, you may use the arrayFilters: option of updateMany.", "username": "steevej" } ]
Why updatemany function is not working?
2022-06-18T12:47:44.007Z
Why updatemany function is not working?
5,543
null
[ "node-js", "compass", "connecting" ]
[ { "code": "", "text": "windows 10\nnode 16.13.2", "username": "Lamine_Koussi" }, { "code": "", "text": "Show your code and the error message you got.", "username": "Jack_Woehr" } ]
I tried to connect to MongoDB with Node.js and Compass, but a timeout error occurred, is there any solution?
2022-06-18T03:16:58.610Z
I tried to connect to MongoDB with Node.js and Compass, but a timeout error occurred, is there any solution?
1,347
null
[ "aggregation" ]
[ { "code": "{\n ...\n \"assigned\": \"622aafaf9e9e2b0023fb5f92\",\n \"responsible\": \"624b90d8cfa6460059711558\",\n \"name\": \"test task\",\n \"status\": \"pending\"\n ...\n},\n{\n ...\n \"assigned\": \"624b90d8cfa6460059711558\",\n \"responsible\": \"622aafaf9e9e2b0023fb5f92\",\n \"name\": \"another test task\",\n \"status\": \"queued\"\n ...\n}\n{\n\"assigned\": {\n \"pending\": [\n {\n ...\n \"assigned\": \"622aafaf9e9e2b0023fb5f92\",\n \"responsible\": \"624b90d8cfa6460059711558\",\n \"name\": \"test task\",\n \"status\": \"pending\"\n ...\n }\n ],\n}\n\"responsible\": {\n \"queued\": [\n {\n ...\n \"assigned\": \"624b90d8cfa6460059711558\",\n \"responsible\": \"622aafaf9e9e2b0023fb5f92\",\n \"name\": \"another test task\",\n \"status\": \"queued\"\n ...\n }\n ]\n}\n}\nconst sharedStages = [\n { $sort: { createdAt: -1 } },\n { $limit: 20 },\n { $group: {\n _id: \"$status\",\n tasks: { $push: \"$$ROOT\" }\n } }\n]\nconst agg = await Task.aggregate([\n {\n $facet: {\n \"assigned\": [\n { $match: { assignedTo: user.id } },\n ...sharedStages\n ],\n \"responsible\": [\n { $match: { responsible: user.id } },\n ...sharedStages\n ]\n }\n },\n {\n $replaceRoot: {\n newRoot: {\n assigned: \"$assigned\",\n responsible: \"$responsible\"\n }\n }\n },\n])\n$replaceRoot[\n {\n assigned: [\n {\n _id: \"pending\",\n tasks: [{...}]\n ],\n responsible: [\n {\n _id: \"queued\",\n tasks: [{...}]\n }\n ]\n }\n]\n$replaceRoot$arrayToObject$objectToArray", "text": "I have a collection of tasks that I’m trying to group using aggregate.Sample Data:I’d like to have the aggregate query to return the following object:I’ve put this aggregate query together so far, which has almost got me there, but not quite:This query gets me this output, which is close, but I can’t seem to figure out how to get $replaceRoot to put together the status arrays full of tasks without losing some data.I’m still pretty new to Mongo so I could be going about this all wrong too - I’ve tried a bunch of different ways to use $replaceRoot and $arrayToObject and $objectToArray but haven’t found any success.Would appreciate if anyone could help point me in the right direction to make this work, thank you!", "username": "boomography" }, { "code": "assignedresponsibleassignedToassigned", "text": "Hi @boomography and welcome in the MongoDB Community !I’m trying but I’m struggling to understand what you are trying to do exactly.Can you please confirm that:Also, what do you mean by “losing some data”? What’s missing? Is this because of the limit 20 maybe?Also I noticed that you are using assignedTo and assigned. Which one is it?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "assignedresponsibleassignedresponsiblestatusassigned: {\n pending: [...],\n queued: [...],\n completed: [...],\n rejected: [...]\n},\nresponsible: {\n pending: [...],\n completed: [...],\n rejected: [...]\n}\n$unwindtasksassignedToassignedassigned", "text": "Thanks for the welcome, @MaBeuLux88! I apologize for the confusion, I was fighting with this until about 4 AM and posted this when I was half asleep.Yes, one document as I’m only looking to return tasks for a single user. A user can have tasks that they are assigned to or responsible for. What I’m wanting to end up with is a document that represents all of the tasks associated with a given user that is split up by whether they’re responsible for or assigned to the task and then have the respective tasks grouped by their status.That document should have two fields, assigned and responsible. 
Those fields should contain an object with keys named after the status type and the values of those keys being an array of tasks matching the respective type. For example:Tasks can have many statuses and assigned/responsible lists should have an array for each status type that exists within its respective input documents.Also, what do you mean by “losing some data”? What’s missing? Is this because of the limit 20 maybe?If I $unwind tasks after the group stage I will only get one array of tasks back even if there are multiple status types. This may be intended behavior and I’m just not connecting the dots with at this point as I’ve been staring at this for too long.Also I noticed that you are using assignedTo and assigned . Which one is it?It’s only assigned.Hopefully that is more clear, thank you for the help!Cheers,Boom", "username": "boomography" }, { "code": "import random\nfrom pprint import pprint\n\nfrom faker import Faker\nfrom pymongo import MongoClient\n\nfake = Faker()\n\n\ndef rand_person():\n return random.choice([\"Alice\", \"Bob\", \"Max\", \"Boom\", \"Nic\", \"John\", \"Joe\"])\n\n\ndef rand_state():\n return random.choice([\"pending\", \"completed\", \"rejected\", \"queued\"])\n\n\ndef rand_tickets():\n return [{\n 'assigned': rand_person(),\n 'responsible': rand_person(),\n 'name': fake.text(max_nb_chars=20),\n 'status': rand_state()\n } for _ in range(100)]\n\n\nif __name__ == '__main__':\n client = MongoClient()\n db = client.get_database('test')\n tickets = db.get_collection('tickets')\n tickets.drop()\n tickets.create_index(\"assigned\")\n tickets.create_index(\"responsible\")\n tickets.insert_many(rand_tickets())\n\n user = \"Boom\"\n pipeline = [\n {\n '$match': {\n '$or': [\n {\n 'assigned': user\n }, {\n 'responsible': user\n }\n ]\n }\n }, {\n '$group': {\n '_id': None,\n 'docs': {\n '$push': '$$ROOT'\n }\n }\n }, {\n '$project': {\n 'assigned': {\n 'pending': {'$filter': {'input': '$docs', 'as': 'item', 'cond': {'$and': [{'$eq': ['$$item.assigned', user]}, {'$eq': ['$$item.status', 'pending']}]}}},\n 'completed': {'$filter': {'input': '$docs', 'as': 'item', 'cond': {'$and': [{'$eq': ['$$item.assigned', user]}, {'$eq': ['$$item.status', 'completed']}]}}},\n 'rejected': {'$filter': {'input': '$docs', 'as': 'item', 'cond': {'$and': [{'$eq': ['$$item.assigned', user]}, {'$eq': ['$$item.status', 'rejected']}]}}},\n 'queued': {'$filter': {'input': '$docs', 'as': 'item', 'cond': {'$and': [{'$eq': ['$$item.assigned', user]}, {'$eq': ['$$item.status', 'queued']}]}}}\n },\n 'responsible': {\n 'pending': {'$filter': {'input': '$docs', 'as': 'item', 'cond': {'$and': [{'$eq': ['$$item.responsible', user]}, {'$eq': ['$$item.status', 'pending']}]}}},\n 'completed': {'$filter': {'input': '$docs', 'as': 'item', 'cond': {'$and': [{'$eq': ['$$item.responsible', user]}, {'$eq': ['$$item.status', 'completed']}]}}},\n 'rejected': {'$filter': {'input': '$docs', 'as': 'item', 'cond': {'$and': [{'$eq': ['$$item.responsible', user]}, {'$eq': ['$$item.status', 'rejected']}]}}},\n 'queued': {'$filter': {'input': '$docs', 'as': 'item', 'cond': {'$and': [{'$eq': ['$$item.responsible', user]}, {'$eq': ['$$item.status', 'queued']}]}}}\n }\n }\n }\n ]\n\n for res in tickets.aggregate(pipeline):\n pprint(res)\n{'_id': None,\n 'assigned': {'completed': [{'_id': ObjectId('62ace5bae1e2057c790a5d7b'),\n 'assigned': 'Boom',\n 'name': 'Lose someone.',\n 'responsible': 'Bob',\n 'status': 'completed'},\n {'_id': ObjectId('62ace5bae1e2057c790a5d96'),\n 'assigned': 'Boom',\n 'name': 'Tend plant reveal.',\n 
'responsible': 'Joe',\n 'status': 'completed'},\n {'_id': ObjectId('62ace5bae1e2057c790a5dd0'),\n 'assigned': 'Boom',\n 'name': 'State state around.',\n 'responsible': 'Nic',\n 'status': 'completed'}],\n 'pending': [{'_id': ObjectId('62ace5bae1e2057c790a5d8a'),\n 'assigned': 'Boom',\n 'name': 'Ask well prove.',\n 'responsible': 'John',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5d9c'),\n 'assigned': 'Boom',\n 'name': 'Contain development.',\n 'responsible': 'John',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5d9d'),\n 'assigned': 'Boom',\n 'name': 'Mouth strategy.',\n 'responsible': 'Alice',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5da6'),\n 'assigned': 'Boom',\n 'name': 'Clear fire feeling.',\n 'responsible': 'Boom',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5daf'),\n 'assigned': 'Boom',\n 'name': 'Fall bring feel.',\n 'responsible': 'Nic',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5db3'),\n 'assigned': 'Boom',\n 'name': 'Manage but himself.',\n 'responsible': 'Max',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5dc3'),\n 'assigned': 'Boom',\n 'name': 'So suffer accept.',\n 'responsible': 'Nic',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5dd5'),\n 'assigned': 'Boom',\n 'name': 'Ever foot different.',\n 'responsible': 'Bob',\n 'status': 'pending'}],\n 'queued': [{'_id': ObjectId('62ace5bae1e2057c790a5d9f'),\n 'assigned': 'Boom',\n 'name': 'Reason after.',\n 'responsible': 'Alice',\n 'status': 'queued'}],\n 'rejected': [{'_id': ObjectId('62ace5bae1e2057c790a5db7'),\n 'assigned': 'Boom',\n 'name': 'Middle understand.',\n 'responsible': 'John',\n 'status': 'rejected'},\n {'_id': ObjectId('62ace5bae1e2057c790a5dc5'),\n 'assigned': 'Boom',\n 'name': 'South camera get.',\n 'responsible': 'Joe',\n 'status': 'rejected'}]},\n 'responsible': {'completed': [{'_id': ObjectId('62ace5bae1e2057c790a5d90'),\n 'assigned': 'Joe',\n 'name': 'Letter tax agent.',\n 'responsible': 'Boom',\n 'status': 'completed'},\n {'_id': ObjectId('62ace5bae1e2057c790a5db6'),\n 'assigned': 'Bob',\n 'name': 'West hand before.',\n 'responsible': 'Boom',\n 'status': 'completed'},\n {'_id': ObjectId('62ace5bae1e2057c790a5dc9'),\n 'assigned': 'Bob',\n 'name': 'Feeling minute card.',\n 'responsible': 'Boom',\n 'status': 'completed'}],\n 'pending': [{'_id': ObjectId('62ace5bae1e2057c790a5da6'),\n 'assigned': 'Boom',\n 'name': 'Clear fire feeling.',\n 'responsible': 'Boom',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5da3'),\n 'assigned': 'John',\n 'name': 'Democratic it.',\n 'responsible': 'Boom',\n 'status': 'pending'},\n {'_id': ObjectId('62ace5bae1e2057c790a5daa'),\n 'assigned': 'John',\n 'name': 'Account natural.',\n 'responsible': 'Boom',\n 'status': 'pending'}],\n 'queued': [{'_id': ObjectId('62ace5bae1e2057c790a5d7a'),\n 'assigned': 'John',\n 'name': 'Understand not.',\n 'responsible': 'Boom',\n 'status': 'queued'},\n {'_id': ObjectId('62ace5bae1e2057c790a5d94'),\n 'assigned': 'Alice',\n 'name': 'When toward college.',\n 'responsible': 'Boom',\n 'status': 'queued'}],\n 'rejected': [{'_id': ObjectId('62ace5bae1e2057c790a5d83'),\n 'assigned': 'Joe',\n 'name': 'How cup simple back.',\n 'responsible': 'Boom',\n 'status': 'rejected'},\n {'_id': ObjectId('62ace5bae1e2057c790a5d9a'),\n 'assigned': 'Joe',\n 'name': 'They task member.',\n 'responsible': 'Boom',\n 'status': 'rejected'},\n {'_id': ObjectId('62ace5bae1e2057c790a5db1'),\n 'assigned': 'Alice',\n 'name': 
'Operation any.',\n 'responsible': 'Boom',\n 'status': 'rejected'},\n {'_id': ObjectId('62ace5bae1e2057c790a5dcc'),\n 'assigned': 'Joe',\n 'name': 'Interview itself.',\n 'responsible': 'Boom',\n 'status': 'rejected'},\n {'_id': ObjectId('62ace5bae1e2057c790a5dcf'),\n 'assigned': 'Bob',\n 'name': 'Very teach beyond.',\n 'responsible': 'Boom',\n 'status': 'rejected'}]}}\n$groupcreatedAt", "text": "Hey @boomography,Here is my attempt in Python. Just to try different things, I generated a bunch of docs randomly and then I wrote the pipeline in Compass and copy/pasted it in Python.Let me l know what you think:Final result looks like this:Maybe there is a more sexy & optimized approach but I think it works. With $slice and $sortArray (v5.2+) you could refine even more the array if you like. You can also sort the array before the $group by Null.It’s also trivial to add a filter in the $filter conditions if you want the createdAt field > X for instance.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you! I will give this a try shortly.", "username": "boomography" } ]
Struggling to structure aggregate data with $replaceRoot & $group
2022-06-17T10:12:16.964Z
Struggling to structure aggregate data with $replaceRoot & $group
1,783
null
[ "node-js", "replication", "mongoose-odm" ]
[ { "code": "node:internal/errors:465\n ErrorCaptureStackTrace(err);\n ^\n\nTypeError [ERR_INVALID_URL]: Invalid URL\n at new NodeError (node:internal/errors:372:5)\n at URL.onParseError (node:internal/url:563:9)\n at new URL (node:internal/url:643:5)\n at isAtlas (/test/node_modules/mongoose/lib/helpers/topology/isAtlas.js:17:17)\n at MongooseServerSelectionError.assimilateError (/test/node_modules/mongoose/lib/error/serverSelection.js:35:35)\n at /test/node_modules/mongoose/lib/connection.js:813:36\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {\n input: 'host4.example.com:27017',\n code: 'ERR_INVALID_URL'\n}\n\n", "text": "Hey,\nI’m currently facing the issue that mongoose stops working if one of the replica set members given in the connection string is offline.\nI currently have 2 replica set members on two different servers and I just want mongoose to use one or the other, depending on which one is reachable. The problem here is that as soon as I stop one of the servers, it is throwing the following error:When I start the “host4.example.com” server, everything works again.\nAny idea on how to tell mongoose to ignore that one of the servers is not available?Thanks in advance for the help ", "username": "Xge_N_A" }, { "code": "", "text": "Hi @Xge_N_A and welcome in the MongoDB Community !Can you please share the connection string (redact user, password and any sensible data) so we can have an idea of what it looks like?If you could also share the piece of code that helps you connect (the options used, etc) this could help.Also and most importantly: 2 nodes Replica Set are IMPOSSIBLE. This isn’t a valid cluster architecture and should never exist.MongoDB performs elections to elect a Primary node when a majority of the voting members of the Replica Set can be reached.In a 2 nodes RS, majority = 2/2 + 1 = 2… So you need 2 of the 2 nodes to be up and running to elect a Primary. If one of these 2 nodes go down, the remaining node cannot become Primary (because the majority can’t be reached anymore) and if the remaining node happens to be the Primary, it will immediately perform a Step Down operation to become Secondary and prevent any write operation.If you are in a testing environment, setting up a single node RS is perfectly fine and this will unlock all the cool features like Change Streams or Multi-docs ACID Transactions.But a production environment always need minimum 3 data bearing nodes (ie one Primary and 2 Secondaries). So you can afford to lose one node as now the majority = 2 with 3 nodes available.I suggest you have a look to this free Training on the MongoDB University that will explain all the details and subtleties of MongoDB clusters.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Invalid URL Error When One Replica Set Member Is Unreachable
2022-06-16T12:16:49.085Z
Invalid URL Error When One Replica Set Member Is Unreachable
4,135
null
[ "scala" ]
[ { "code": "", "text": "We are looking to transition to Scala 3, and have noticed that the MongoDB Scala Driver is only available for versions 2.x. Is there a timeline on when we could expect to see the MongoDB Scala Driver support Scala 3?", "username": "Richard_Jones" }, { "code": "", "text": "Hi @Richard_Jones,We are currently blocked by gradle#16527 which is preventing our build system from creating the artifacts.We are keeping an eye on that ticket and will look to publish the artifacts once support has been added.Ross", "username": "Ross_Lawley" }, { "code": "", "text": "https://docs.gradle.org/nightly/release-notes.html#scala", "username": "Bilal_Fazlani" }, { "code": "", "text": "Hi, what is the status of Scala 3 support?", "username": "Bilal_Fazlani" }, { "code": "", "text": "Given Gradle supports scala 3 for some time, is this driver going to expand its support for it?", "username": "raul_rodriguez" } ]
Scala 3 Support
2021-07-19T10:19:45.218Z
Scala 3 Support
5,320
https://www.mongodb.com/…_2_1024x573.jpeg
[ "rome-mug" ]
[ { "code": "Sr. Presales Solutions Architect, MongoDBNoSQL Database Solution Architect, SORINT.lab ", "text": "\nRome-MUG1920×1075 100 KB\nRomeMUG è lieta di annunciarvi il secondo meetup rigorosamente in presenza Thursday, June 9, 2022 5:00 PM presso il The Hub - LVenture Group di Roma Termini.\nL’evento includerà due lightning talk con demo, bevande e swag marchiati MongoDB! Event Type: In-Person\n Location: The Hub - LVenture Group di Roma Termini.\n Via Marsala, 29H, 00185 Roma RM, ItalySr. Presales Solutions Architect, MongoDB\n_NoSQL Database Solution Architect, SORINT.lab ", "username": "cesare_laurenziello" }, { "code": "", "text": "\nimage2048×1536 288 KB\n\n\nimage1920×1440 219 KB\n\n\nimage1920×1440 146 KB\n\n\nimage1920×1440 210 KB\n\n\nimage1280×960 198 KB\n", "username": "cesare_laurenziello" }, { "code": "", "text": "Congratulazioni per il MongoDB User Group!\nSaluti alla bella città di Roma!Michael", "username": "michael_hoeller" }, { "code": "", "text": "Grazie @michael_hoeller, ti aspettiamo a Roma!", "username": "cesare_laurenziello" }, { "code": "", "text": "Hello @cesare_laurenziello\nlet’s swap to English. It is 20 years back when I lived in Rome (actually Pomezia).\nThe MongoDB University Team is looking for Beta Testers. They asked me if could ask in my User Group (Germany-Austria-Switzerland) for Beta Testers for some new classes. There are no further requirements needed than appr. 2h of time. If you’d like to share a link which provides access to a questionnaire and the beta test. Please let me know.\nRegards,\nMichael", "username": "michael_hoeller" } ]
RomeMUG: Monthly Meet Up #2
2022-06-01T09:54:21.596Z
RomeMUG: Monthly Meet Up #2
4,353
https://www.mongodb.com/…b9f381f30adc.png
[ "aggregation", "queries", "node-js", "transactions" ]
[ { "code": "", "text": "Hello Everyone !\nI want to join two collections products and transactions using $lookup\nThe structures of the two collections are :\ntransactions collection436×553 7.22 KB\nI want to list all transactions that have status “sold” for every product\nFirst, I want just to have all transactions of each productI tried to do this :\nBut did’t work ?!The goal is return all documents that contain the product id in status of transactions.I try using this with “status.$id” but don’t work.I want to know if there is another way to query nested documents instead of dot notation ?Also How to join two collections in a specific condition (like in SQL) not just with localField and foreighField ?Thanks !", "username": "ayoub_sadiki" }, { "code": "status.$idstatus$lookup", "text": "Hi @ayoub_sadiki and welcome in the MongoDB Community !Using a value as a key isn’t a good idea because this leads to this kind of problems and it creates an instable schema that makes querying a lot more complex.status.$id doesn’t exists in your doc. and the $ shouldn’t be here anyway.To fix this, you will have to use $objectToArray on the status object. So your product id now become a value that you can use in the pipeline (and not a key!).Then, you will be able to use $lookup.To answer your final question:Also How to join two collections in a specific condition (like in SQL) not just with localField and foreighField ?$lookup has 2 forms. The one with a sub pipeline is the one you are after.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Get nested subdocuments in join using $lookup
2022-06-17T15:30:31.538Z
Get nested subdocuments in join using $lookup
2,936
null
[ "dot-net", "atlas-device-sync" ]
[ { "code": "", "text": "Hi, I’m experimenting with flexible-sync technology.\nI would like to download on the mobile app only the records that interest a certain customer, to do this I would have to use a relationship between two objects.\nWhy is this not allowed? On MongoDB the relationships between objects are realized through string fields containing the _id field.\nThis is a limitation that forces me to redesign the data model.Thanks\nLuigi", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "Can you clarify what exactly is not allowed? Generally, relationships are supported by Flexible Sync.", "username": "nirinchev" }, { "code": "_id", "text": "In Flex Sync Page is written:\" Select up to 10 queryable fields from your Realm JSON Schema to construct queries on. Note that the _id field is always queryable.Adding/removing fields while Sync is running can have damaging consequences. Learn more about the risks of adding/removing fields while Sync is enabled.Only top-level, primitive fields containing scalars are provided here.\"this is my code (Provinciale is a Realm Object, not a primitive fields)var associati = Realm.All().Where(a=>a.Provinciale == provinciale);Realm.Subscriptions.Update(() =>\n{\nRealm.Subscriptions.Add(associati);\n});await Realm.Subscriptions.WaitForSynchronizationAsync();this is error received:Received QUERY_ERROR “Client provided query with bad syntax: unsupported query for table “Associato”: matching two properties to each other is not yet supported in server-side queries” (error_code=300, query_version=2)Thanks\nLuigi", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "I see - yes, that’s a limitation we currently have, but are looking to lift it.cc @Ian_Ward for visibility.", "username": "nirinchev" }, { "code": "", "text": "Thanks Nikola.Is this feature expected to be released soon?\nThe lack of this feature prevents migration from PartitionSynch to FlexSynchThanks\nLuigi", "username": "Luigi_De_Giacomo" }, { "code": "", "text": "We don’t have a timeframe as we’re in the early stages of researching it.", "username": "nirinchev" } ]
Flexible Sync - Object relationship
2022-05-30T11:50:07.742Z
Flexible Sync - Object relationship
2,618
null
[ "transactions", "realm-web" ]
[ { "code": "", "text": "Hello everybody,\nis there a way to perform multiple insert operations in a transactional manner using realm web sdk?", "username": "Armando_Marra" }, { "code": "", "text": "I don’t think it’s possible but I could be wrong.", "username": "MaBeuLux88" } ]
Transaction in realm Web SDK
2022-06-16T22:37:39.866Z
Transaction in realm Web SDK
2,155
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "echo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.listsudo apt-get updateSkipping acquire of configured file 'multiverse/binary-arm64/Packages' as repository 'https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 InRelease' doesn't support architecture 'arm64'", "text": "Hi, I’m trying to upgrade Mongo from 3.6.6 to 5.1 on Ubuntu 18.04.6, running on an AWS EC2 instance.I’ve gotten up until 4.2 → 4.4, and am now using this tutorial:I’ve done the following command:echo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.listThen, I ran sudo apt-get update. However, I get the following error:Skipping acquire of configured file 'multiverse/binary-arm64/Packages' as repository 'https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 InRelease' doesn't support architecture 'arm64'I can’t install mongo after this. I saw this:https://jira.mongodb.org/browse/SERVER-37692which says there weren’t arm64 builds, but it got corrected? How should I proceed? Do I need to update my AWS instance? It seems weird that Mongo can’t be installed on Ubuntu…", "username": "Alex_Scott" }, { "code": "uname -mx86_64x86_64", "text": "I am getting the same error on Ubuntu 18.04. On running uname -m I get x86_64\nWhy does source repo mentions arm64. If I change that to x86_64, will that fix it?", "username": "Pra_Deep" }, { "code": "", "text": "Yes I guess this should fix the issue. If your machine isn’t ARM64 then you don’t need these packages anyway.", "username": "MaBeuLux88" } ]
Skipping acquire of configured file 'multiverse/binary-arm64/Packages' as repository 'https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 InRelease' doesn't support architecture 'arm64'
2022-01-29T06:15:01.157Z
Skipping acquire of configured file ‘multiverse/binary-arm64/Packages’ as repository ‘https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 InRelease’ doesn’t support architecture ‘arm64’
5,873
https://www.mongodb.com/…6d71983515d.jpeg
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Hello Hackers!The entries arrived last Friday and we’ve been busy reviewing since. Thank you all so much once again for you submissions and participation, we’ve really enjoyed the last 7 weeks or so of hackathon fun and collaboration!We are nearly finished our deliberations and we have decided to announce the winners from the Community Cafe stage live at MongoDB World on Wednesday morning at 10:30am on June 8th. Can’t make it to NYC? That’s no problem, we don’t want you to miss a single moment so you can now also register virtually for FREE for MongoDB World now! (scroll down to see the Virtual Ticket option)So tune on Wednesday to see the results of all you hard work! We’ll also announce it here on the forums too and closer to the day, we’ll share a link also.Thank you all again, from the whole Hackathon team here at MongoDB - it’s been a blast and we’ve enjoyed having you!\nScreenshot 2022-06-03 at 16.33.20950×906 67.9 KB\n", "username": "Shane_McAllister" }, { "code": "", "text": "Hello Hackers…Thank you all so much again for your participation. Apologies for the delay in posting here - we were hoping to be able to share the event livestream announcing the winners, but we’ve been delayed with getting access to the video. So…not wanting to postpone any further, we are delighted to announce the winners! Please!3rd Place\n|605px;x341px;1600×901 417 KB\nCryptic News - Made a collection with a hourly measure of Tone, Goldstein and Bitcoin Price for the events related with the crypto world (using the URL).\nTeam: @Crist2nd Place\n|450px;x330px;1600×876 189 KB\nGood News - The “GOODNEWS” web app highlights positive news from the GDELT Dataset.\nTeam: @Fiewor_John @Avik_Singha @Sucheta_SinghaAnd the Winner is…Marga Compos - The newsroom’s goal is to show people positive news happening all around the world in an engaging/creative way.\nTeam: @Margarita_Campos_QuinonesAnd you can watch the winning video submission here -Many, many congratulations to our winners! We will be in touch directly with regard to prizes.Thank you all again, from the whole Hackathon team here at MongoDB. We hope to get the chance to do this again and we hope you all enjoyed the experience, and learned something new!", "username": "Shane_McAllister" }, { "code": "", "text": "I am so happy! this is great news! thank you \nI had a lot of fun doing the project and definitely learned a lot.\nI was wondering if there will be a way to see the announcement at the MongoDB World in YouTube or something as I missed it live due to work.", "username": "Margarita_Campos_Quinones" }, { "code": "", "text": "Thank you again! I just realized the name of the project is my name instead of “The Newsroom” which was the intended name.\nI’m so sorry, I must have sent the submission wrong ", "username": "Margarita_Campos_Quinones" }, { "code": "", "text": "It was unbelievable, we were thrilled to know the result. We are so happy. Today I saw the live stream recording.Truly it was an awesome experience participating in the Hackathon. Learned a lot of things.\nJoined live stream for first time. Explored MongoDB Charts, such an awesome tool.Would like to thank @Shane_McAllister , @Mark_Smith , @nraboy , @Angie_Byron , @Joe_Drumgoole , @Stennie_X you all for your great support. Thank you @Angie_Byron once again for sharing such an awesome idea. 
Never thought an online hackathon can be so engaging,\nThank you again @Shane_McAllister , @Mark_Smith , and @nraboy for giving us the opportunity to live stream in such a great platform.Lastly it wouldn’t be possible without great team efforts. Thank you @Fiewor_John and @Sucheta_Singha. Cheers!! Looking forward to participate in future hackathons.Posting lately due to some issues…", "username": "Avik_Singha" }, { "code": "", "text": "Good work everyone! This was fun to watch ", "username": "nraboy" }, { "code": "", "text": "Thank you @Avik_Singha and everyone!\nThis was the experience of a lifetime. Fun and educative to say the least.\nLooking forwards to next time? ", "username": "Fiewor_John" }, { "code": "", "text": "It was a great opportunity to share and study.\nI wanted to thank all the people in the community. Thank you ", "username": "Crist" }, { "code": "", "text": "And here’s the announcement Video from the Community Cafe Stage @ MongoDB World for those who may have missed it live", "username": "Shane_McAllister" } ]
And the winners are
2022-06-03T15:47:15.357Z
And the winners are
4,636
null
[ "queries", "dot-net" ]
[ { "code": "", "text": "Hi,I am struck in a situation and want to know if I can get any solution for this.\nWhile exporting and importing data from MongoDB database, the api’s are taking more than 20000ms. I have done indexing for few fields which are necessary and I have used compound index and single index as well. But I am not able to increase the performance of my apis.Any help or idea is greatly appreciated.Thanks,\nNishchitha", "username": "Nishchitha_A" }, { "code": "", "text": "These are the thing I need to do,", "username": "Nishchitha_A" }, { "code": "", "text": "You are sharing way too little information about your full data set for anyone whom can help.What is total sizes of everything?What is your system configuration? Client? Server? Connection? RAM? Storage?", "username": "steevej" }, { "code": "", "text": "For the my collection I have almost 3000 documents. For bulk imports its taking too much time to fetch the data.\nRAM - 16GB\nSystem Type - x64-based PC\nProcessor - Intel(R) Core™ i5-10210U CPU @ 1.60GHz, 2112 Mhz, 4 Core(s), 8 Logical Processor(s)\nDisks storage Size - 238.47 GB (256,052,966,400 bytes)", "username": "Nishchitha_A" }, { "code": "", "text": "Still not enough information.So client and server is running on same machine and fighting for the same resources.Standalone instance or 3 nodes replica set on the same machine?What is the source of your documents? JSON files, CSV, …Which client are you using for bulk imports? mongoimport, mongosh, nodejs app?3000 documents can be small (ex: 3000 x 1Kb) amount of data or HUGE (3000 x 1Mb) amout.Disk storage size is one thing but must important is storage type, SSD, HD, NAS, SAN? What?Is permanent DB storage on same partition of same disk as source file? Everything has an impact.Is that single machine dedicated to that import but also serve as a web browsing or dev. machine? Any IDE running?Linux or Windows?", "username": "steevej" } ]
Increase the performance while importing and exporting data from MongoDB
2022-06-16T06:44:59.925Z
Increase the performance while importing and exporting data from MongoDB
2,667
null
[ "react-native" ]
[ { "code": "", "text": "Is flexible sync is available for use in production, can we use it in production app", "username": "Zubair_Rajput" }, { "code": "", "text": "Flexible Sync became GA at MongoDB world this past week: https://twitter.com/MongoDB/status/1535308064701747200?ref_src=twsrc^tfwThis means we are recommending it for production applications", "username": "Tyler_Kaye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is flexible sync available for a production app in React Native
2022-06-17T06:03:48.985Z
Is flexible sync available for a production app in React Native
1,474
null
[ "node-js" ]
[ { "code": "nodejs", "text": "Hello,I like MongoDB i really do, and i hope to get a job using MongoDB.\nI like programming also, but i like the database more.So the question is what else to learn to get a job that i mainly work with MongoDB?\nEven if job was MongoDB alone(design database, queries, admin etc) it would be nice for me, but i dont see such jobs in Greece.From my searches i think nodejs can help alot to be backend engineer.\nIf someone can make a list with the popular technologies around MongoDB would be helpful.Thank you", "username": "Takis" }, { "code": "", "text": "Hi,Two main stacks are MEAN (MongoDB, Express, Angular, Node.js) and MERN (MongoDB, Express, React, Node.js).So, you are right. Node.js is really popular these days. And since it’s JavaScript, it’s perfect for MongoDB.For the jobs, I think that LinkedIn is great place, or you can check Indeed. Also, you can create an account on Upwork and search for MongoDB jobs as a freelancer.", "username": "NeNaD" }, { "code": "", "text": "Thank you for your reply, i was thinking MERN also as the easiest way to get a job that i mainly use MongoDB.I like programming so its ok for me, to know those, but in practise one programmer works in all the stack? or knows enough to communicate with the other programmers in the team?For example i can imagine knowing reactjs, but i dont think i am the right person to build and style webpages in a professional way.", "username": "Takis" }, { "code": "", "text": "You can specialize in only frontend or backend if you want. You don’t have to know full-stack. If you would like to work with MongoDB, then it’s better to specialize in backend. You can learn Node.js (Express) for example.Here is a great course that would cover both Node.js (Express) and MongoDB: https://www.coursera.org/learn/server-side-nodejs. You can enroll for free.", "username": "NeNaD" } ]
How to get a MongoDB job?
2022-06-16T18:06:07.794Z
How to get a MongoDB job?
2,385
null
[ "time-series" ]
[ { "code": "", "text": "Hi ! I would like to create a higly scalable iot solution wich save data sended by my devices into a mongoDB database. I don’t know what is the best chose between using MongoDB on a VM or Mongodb Atlas but I would like to do something like that :Do you have any suggestions, advices, recommandations for best practices ?\nIn Addition I never used AWS and I am just starting with Azure, so if you have any tutorials, getting started, guides to deploy that I will take it !\nThank you in advance", "username": "Gaby_P" }, { "code": "", "text": "There are a few ways to do what you are asking within these cloud providers. At a most basic level you can accomplish data movement from the edge to Atlas using MongoDB Realm Sync on the device. Here is a Ream Sync IoT example.Devices (Realm Sync) → MongoDB Atlas (Azure, AWS, GCP)If you want to introduce an event platform like Apache Kafka, you can send data through Kafka into MongoDB Atlas and achieve massive scale. Here is a blog post by Confluent that discusses this architecture IoT Reference Architecture and Implementation Guide Using Confluent and MongoDB Realm | Confluent Build Your Own IoT Platform with Confluent and MongoDB.If you want to use cloud specific platform like Azure Event hubs for example, you can integrate Kafka Connect with that and use the MongoDB Connector for Apache Kafka to move data into MongoDB Atlas. Blog post here.", "username": "Robert_Walters" }, { "code": "", "text": "Tanks you very much for your advices, I will take a look on it !", "username": "Gaby_P" } ]
How to save data from Azure IoT Hub or AWS into MongoDB
2022-06-16T13:46:37.830Z
How to save data from Azure IoT Hub or AWS into MongoDB
2,802
null
[ "queries", "indexes" ]
[ { "code": "", "text": "Currently my explain output is like 4000 lines and I am having difficulty going through it. Any tool which nicely visualises it?", "username": "V_N_A" }, { "code": "", "text": "Hi @V_N_A,MongoDB Compass has a feature to help visualise explain plans:Regards,\nStennie", "username": "Stennie_X" }, { "code": "hint", "text": "Thank you!I wish the explain tab had an option of hint, I want to see how the query behaves with different indexes", "username": "V_N_A" }, { "code": "", "text": "oh no… seems explain tab does not have an option to set the verbosity levels either ", "username": "V_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Any tools to visualise or explain the mongo explain output?
2022-06-17T05:01:01.905Z
Any tools to visualise or explain the mongo explain output?
1,428
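Because the Compass Explain tab discussed above does not expose a hint or verbosity option, one workaround is to run the explain command directly from a driver. The sketch below uses PyMongo; the URI, collection, filter, and index name are hypothetical.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["test"]

# Explain a find command at a chosen verbosity, forcing a specific index with hint.
explain_cmd = {
    "explain": {
        "find": "orders",                      # hypothetical collection
        "filter": {"status": "shipped"},       # hypothetical predicate
        "hint": "status_1_created_1",          # hypothetical index name
    },
    "verbosity": "executionStats",             # or "queryPlanner" / "allPlansExecution"
}
plan = db.command(explain_cmd)

# Print only the parts that are usually interesting instead of the full multi-thousand-line document.
stats = plan["executionStats"]
print(plan["queryPlanner"]["winningPlan"])
print({k: stats[k] for k in ("nReturned", "totalKeysExamined", "totalDocsExamined", "executionTimeMillis")})
```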
https://www.mongodb.com/…aa191c541b29.png
[ "data-modeling" ]
[ { "code": "", "text": "Hi\nOn setting date value as new ISODate(‘2265-06-10’), Mongo is saving Data in Date format instead of ISODate\n\nScreenshot 2022-06-17 at 11.40.54 AM790×160 14.1 KB\nIf any one can help me with that, That would be great", "username": "Sushant_Bansal" }, { "code": "mongosh", "text": "Welcome to the MongoDB Community @Sushant_Bansal !Can you confirm the admin tool & version you are using? It looks like your screenshot may be from Robo3T, so this may be expected (as compared to using mongosh or another admin tool).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks Stennie, for reply.\nYes it the screenshot of robo3T", "username": "Sushant_Bansal" } ]
Mongo 4.0.20 Not able to convert Date to ISODate
2022-06-17T06:11:27.483Z
Mongo 4.0.20 Not able to convert Date to ISODate
1,849
null
[ "replication", "backup" ]
[ { "code": "", "text": "Dear Friends,My current setup is MongoDB 3.4 version with replica set ( Arbiter, Master and Slave)\nI am planning to upgrade to latest version of 5.0.2.\nI am using SSL certificate on database level.\nNote: I am planning for parallel setup not in place upgrade.\nSomeone please clarify the below doubts.Q1: Can backup from 3.4 and restore onto 5.0 directly or do I need to use intermediate version ?\nQ2: Mine is Replica Set with 3 nodes, can we backup databases from Master only instead of replica set and restore onto Stand alone ?\nQ3: Replication will work between different versions of MongoDB ?\nQ4: Is there any possibility directly to import and export the databases between the different MongoDB versions.\nQ5: I am using SSL certificate, Consider the example: Today I took the backup, tomorrow SSL certificate was expired and renewed and unfortunately database corrupted, need to restore the database from yesterday’s backup,\nWhile taking the backup used different SSL certificate and while restoring using the renewed certificate.\nMongoDB allow the restore ?I am new to MongoDB , kindly bare with me and Thanks in advance for your valuable advise. .", "username": "Mahammad_Jilan" }, { "code": "mongodumpmongorestoremongoexportmongoimportmongodumpmongorestore", "text": "Hi @Mahammad_Jilan!! \nWelcome to the community forum!!Can backup from 3.4 and restore onto 5.0 directly or do I need to use intermediate version ?The mongodump and mongorestore is only recommended between the major releases. For more, please refer to the documentation here which explains more on the same.\nIn my test situation, I tried exporting a 3.4 collection using mongoexport and importing it into a 5.0 instance using mongoimport, and it appears to be working. If you choose this strategy, please carry out extensive analysis to verify that there are no unexpected outcomes.Mine is Replica Set with 3 nodes, can we backup databases from Master only instead of replica set and restore onto Stand alone ?If I understand your question correctly, you need to do a mongodump and mongorestore from a Primary member of replica set to a stand alone MongoDB. In that case, yes, this case is possible provided the version of replica set and stand-alone MongoDB are on the same major version. Please visit for documentation hereReplication will work between different versions of MongoDB ?Replication works between different MongoDB versions only for the purpose of upgrading a replica set. It was not designed for long term use, and definitely not designed to take a production workload. Please refer to the following documentation to know about feature compatibility version behaviour.Is there any possibility directly to import and export the databases between the different MongoDB versions.Yes, import and export between the MongoDB versions is possible. Similar to your question, I tried to import and export between different versions of MongoDB and it worked for me, however please test extensively with your data to ensure that no data would be lost or changed by the process.While taking the backup used different SSL certificate and while restoring using the renewed certificate.\nMongoDB allow the restore ?According to my understanding, the TSL/SSL is a transport layer security mechanism and hence it would have no indulgence on how the data is restored. 
Therefore, the certificate expiry should not be an issue and hence should not affect the backup or restore.Please let us know if you have any further questions.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Dear Aasawari,Thank you so much for your response and answers, appreciated.", "username": "Mahammad_Jilan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Backup & Restore, Upgrade and Migration between different MongoDB Versions
2022-05-31T08:18:26.629Z
Backup &amp; Restore, Upgrade and Migration between different MongoDB Versions
4,423
null
[ "aggregation" ]
[ { "code": "db.companies.insert([\n { \"_id\" : 4, \"name\" : \"CompanyABC\"},\n { \"_id\" : 5, \"name\" : \"CompanyLMN\"},\n { \"_id\" : 6, \"name\" : \"CompanyXYZ\"},\n])\n\ndb.user.insert([\n{\n \"_id\": 1,\n \"name\": \"john\",\n \"companies\": [\n {\n \"_id\": 4,\n \"primary\": true\n },\n {\n \"_id\": 5,\n \"primary\": false\n },\n {\n \"_id\": 6,\n \"primary\": false\n }\n ]\n},\n \"_id\": 2,\n \"name\": \"jane\",\n \"companies\": [\n {\n \"_id\": 4,\n \"primary\": false\n },\n {\n \"_id\": 5,\n \"primary\": true\n },\n {\n \"_id\": 6,\n \"primary\": false\n }\n ]\n},\n])\ndb.users.aggregate( [{\n '$lookup' => [\n 'from' => 'companies',\n 'let' => ['company' => '$companies'],\n 'pipeline' => [\n [\n '$match' => [\n '$expr' => [\n '$and' => [\n ['$eq' => ['$_id', '$$company._id']],\n ['$eq' => ['$$company.primary', true]]\n ]\n ]\n\n ]\n ]\n ],\n 'as' => 'actualCompanies',\n ]\n}]\n", "text": "How do i join user with company that are primary to the user? see below for sample the data.I tried doing this below but it returned empty.", "username": "LiNo_Castro" }, { "code": "", "text": "Can a user have more that one primary?I would do that in 2 stages:Should be easier to debug since you can stop after stage 1 and see if the $set field works correctly. Then it is a trivial $lookup.", "username": "steevej" }, { "code": "[\n {\n '$addFields': {\n 'companies': {\n '$filter': {\n 'input': '$companies', \n 'as': 'item', \n 'cond': {\n '$eq': [\n '$$item.primary', true\n ]\n }\n }\n }\n }\n }, {\n '$lookup': {\n 'from': 'companies', \n 'localField': 'companies._id', \n 'foreignField': '_id', \n 'as': 'companies'\n }\n }\n]\n", "text": "To keep things simple, I would do it like that.\nimage1463×742 77.8 KB\nThere is probably a way to make it work with the subpipeline like you tried, but apparently I’m not good enough just yet. \nThe array isn’t helping. Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "thanks Steevej, got the idea now. its pretty much like what MaBeuLux88 suggested below.\nDidn’t think that way initially, it’s probably just that im still having the MySQL way of doing things right now though. =) The debugging by stage will really be handy idea for me going forward.", "username": "LiNo_Castro" }, { "code": "", "text": "@ MaBeuLux88 Thanks, this is how i ended up using.", "username": "LiNo_Castro" }, { "code": " '$expr' => [\n '$and' => [\n ['$eq' => ['$companies._id', '$company._id']],\n ['$eq' => ['$companies.primary', true]]\n ]\n ]\ndb.users.aggregate( [{\n '$lookup' => [\n 'from' => 'companies',\n 'let' => ['company' => '$companies'],\n 'pipeline' => [\n [\n '$match' => [\n '$expr' => [\n '$and' => [\n ['$eq' => ['$companies._id', '$$company._id']],\n ['$eq' => ['$companies.primary', true]]\n ]\n ]\n ]\n ]\n ],\n 'as' => 'actualCompanies',\n ]\n}]\n", "text": "Just before i close this topic, let me clarify that the solution i initially tried is actually legit and would work, i just messed up the ‘$expr’ part. see below for the corrected code:I just replaced the ‘$_id’ with ‘$companies’ then the ‘$$company.primary’ with ‘$companies.primary’. I was just confused and forgot that i was joining companies to users, not users to company. see full code corrected code below:But moving forward i’d probably be using the staged method as suggested by @steevej. Thanks for the replies…", "username": "LiNo_Castro" }, { "code": "", "text": "I got the same confusion when I tried! 
I’m glad you got it to work.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to do Aggregated $lookup in Mongodb on embedded collection of objects
2022-06-16T05:45:03.419Z
How to do Aggregated $lookup in Mongodb on embedded collection of objects
2,100
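For reference, the $filter-plus-$lookup approach that resolved this thread translates directly to the drivers. Below is a rough PyMongo rendering of the same pipeline; it assumes the users and companies collections shown in the thread and is only a sketch.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["test"]

pipeline = [
    # Keep only the embedded company references flagged as primary.
    {"$addFields": {
        "companies": {
            "$filter": {
                "input": "$companies",
                "as": "item",
                "cond": {"$eq": ["$$item.primary", True]},
            }
        }
    }},
    # Join the remaining references against the companies collection.
    {"$lookup": {
        "from": "companies",
        "localField": "companies._id",
        "foreignField": "_id",
        "as": "companies",
    }},
]

# Collection name follows the aggregate call in the thread.
for user in db.users.aggregate(pipeline):
    print(user["name"], [c["name"] for c in user["companies"]])
```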
null
[ "php" ]
[ { "code": " curl_setopt_array($curl, array(\n CURLOPT_URL => $this->apiPath . $action,\n CURLOPT_RETURNTRANSFER => true,\n CURLOPT_ENCODING => '',\n CURLOPT_MAXREDIRS => 10,\n CURLOPT_TIMEOUT => 0,\n CURLOPT_FOLLOWLOCATION => true,\n CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,\n CURLOPT_CUSTOMREQUEST => 'POST',\n CURLOPT_POSTFIELDS => '{\n \"dataSource\": \"' . $this->dataSource . '\",\n \"database\": \"' . $this->database . '\",\n \"collection\": \"' . $this->collection . '\",\n \"filter\": { \n \"type\": { \"$eq\": \"'.$query.'\" } \n }\n }',\n CURLOPT_HTTPHEADER => array(\n 'Content-Type: application/json',\n 'Access-Control-Request-Headers: *',\n 'api-key: ' . $this->apiKey,\n ),\n ));\n $mongodata = json_decode(curl_exec($curl));\nstdClass Object\n(\n [documents] => Array\n (\n [0] => stdClass Object\n (\n [_id] => 6253c5fc22d6ee83815a111a\n [lang] => Bangla\n [type] => top\n [createdAt] => 2022-04-11T00:08:54Z\n [searchTerm] => key\n [customerId] => 747498\n [wpId] => 20\n [results] => stdClass Object\n (\n [parts] => 0\n [pages] => 0\n [productCategory] => 0\n )\n\n )\n )\n)\n\ncurl_setopt_array($curl, array(\n CURLOPT_URL => $this->apiPath . $action,\n CURLOPT_RETURNTRANSFER => true,\n CURLOPT_ENCODING => '',\n CURLOPT_MAXREDIRS => 10,\n CURLOPT_TIMEOUT => 0,\n CURLOPT_FOLLOWLOCATION => true,\n CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,\n CURLOPT_CUSTOMREQUEST => 'POST',\n CURLOPT_POSTFIELDS => '{\n \"dataSource\": \"' . $this->dataSource . '\",\n \"database\": \"' . $this->database . '\",\n \"collection\": \"' . $this->collection . '\",\n \"filter\": { \n \"type\": { \"$eq\": \"'.$query.'\" },\n \"createdOn\": {\n \"$gte\": \"2022-04-11T00:08:54Z\",\n \"$lt\": \"2022-04-11T00:08:54Z\"\n } \n }\n }',\n CURLOPT_HTTPHEADER => array(\n 'Content-Type: application/json',\n 'Access-Control-Request-Headers: *',\n 'api-key: ' . $this->apiKey,\n ),\n ));\n $mongodata = json_decode(curl_exec($curl));\nstdClass Object\n(\n [documents] => Array\n (\n )\n\n)\n", "text": "I am trying to fetch MongoDB data using CURL with PHP. My code is like below.I am getting output like belowBut if I use below codeI am getting empty object like below", "username": "foysal_foysal" }, { "code": "{ \"$date\" : \"2022-04-11T00:08:54Z\" }\n\"2022-04-11T00:08:54Z\"", "text": "Try withrather than\"2022-04-11T00:08:54Z\"The types must match.", "username": "steevej" }, { "code": "curl_setopt_array($curl, array(\n CURLOPT_URL => $this->apiPath . $action,\n CURLOPT_RETURNTRANSFER => true,\n CURLOPT_ENCODING => '',\n CURLOPT_MAXREDIRS => 10,\n CURLOPT_TIMEOUT => 0,\n CURLOPT_FOLLOWLOCATION => true,\n CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,\n CURLOPT_CUSTOMREQUEST => 'POST',\n CURLOPT_POSTFIELDS => '{\n \"dataSource\": \"' . $this->dataSource . '\",\n \"database\": \"' . $this->database . '\",\n \"collection\": \"' . $this->collection . '\",\n \"filter\": { \n \"type\": { \"$eq\": \"'.$query.'\" },\n \"createdOn\": {\n \"$date\" : \"2022-04-11T00:08:54Z\"\n } \n }\n }',\n CURLOPT_HTTPHEADER => array(\n 'Content-Type: application/json',\n 'Access-Control-Request-Headers: *',\n 'api-key: ' . $this->apiKey,\n ),\n ));\n $mongodata = json_decode(curl_exec($curl));\n", "text": "Thanks @steevej . I am using below code.But I am getting empty object. 
Thanks.", "username": "foysal_foysal" }, { "code": "[createdAt] => 2022-04-11T00:08:54Z\"createdOn\": {\n \"$date\" : \"2022-04-11T00:08:54Z\"\n } \n", "text": "According to Date equality in Data API - #13 by Jason_Tran it should work.And then right after I wrote the above and I go check and I found that you are not using the correct field name for your date query.I am getting output like below[createdAt] => 2022-04-11T00:08:54Zbut you query", "username": "steevej" }, { "code": "", "text": "createdAtThanks @steevej . Your solution is working. But I would like to apply a Date range (Start Date & End Date ) to fetch items. Could you please help me in this regard ?", "username": "foysal_foysal" }, { "code": "\"createdOn\": {\n \"$gte\": \"2022-04-11T00:08:54Z\",\n \"$lt\": \"2022-04-11T00:08:54Z\"\n } \n", "text": "It is the same thing. Your date range query was also using the wrong field name.", "username": "steevej" }, { "code": "createdOn", "text": "Thanks @steevej . Should I use createdOn ? or anything else ?", "username": "foysal_foysal" }, { "code": "createdOn\"createdOn\":createdOn", "text": "Should I use createdOn ?I wrote that you areusing the wrong field name .and posted your code that uses\"createdOn\":which produce no results.So to answerShould I use createdOn ?of course NOT. This is thewrong field name .And to answerShould I use … anything else ?The answer is that yes. You have to use the correct field name rather than the wrong field name. And the correct field name is one present in your data. It is:[createdAt] => 2022-04-11T00:08:54ZJust like you did earliercreatedAtThanks @steevej . Your solution is working.", "username": "steevej" }, { "code": "stdClass Object\n(\n [documents] => Array\n (\n [0] => stdClass Object\n (\n [_id] => 6253c5fc22d6ee83815a111a\n [lang] => Bangla\n [type] => top\n [createdAt] => 2022-04-11T00:08:54Z\n [searchTerm] => key\n [customerId] => 747498\n [wpId] => 20\n [results] => stdClass Object\n (\n [parts] => 0\n [pages] => 0\n [productCategory] => 0\n )\n\n )\n )\n)\ncurl_setopt_array($curl, array(\n CURLOPT_URL => $this->apiPath . $action,\n CURLOPT_RETURNTRANSFER => true,\n CURLOPT_ENCODING => '',\n CURLOPT_MAXREDIRS => 10,\n CURLOPT_TIMEOUT => 0,\n CURLOPT_FOLLOWLOCATION => true,\n CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,\n CURLOPT_CUSTOMREQUEST => 'POST',\n CURLOPT_POSTFIELDS => '{\n \"dataSource\": \"' . $this->dataSource . '\",\n \"database\": \"' . $this->database . '\",\n \"collection\": \"' . $this->collection . '\",\n \"filter\": { \n \"type\": { \"$eq\": \"'.$query.'\" },\n \"createdAt\": {\n \"$gte\": \"2022-04-11T00:08:54Z\",\n \"$lt\": \"2022-04-11T00:08:54Z\"\n }\n }\n }',\n CURLOPT_HTTPHEADER => array(\n 'Content-Type: application/json',\n 'Access-Control-Request-Headers: *',\n 'api-key: ' . $this->apiKey,\n ),\n ));\n $mongodata = json_decode(curl_exec($curl));\nstdClass Object\n(\n [documents] => Array\n (\n )\n\n)\n", "text": "Thanks @steevej . I tried your solution before. I am explaining everything once again. I have below item in MongoDB.I am using below code as per your instruction.But I am getting empty object like below.Thanks @steevej .", "username": "foysal_foysal" }, { "code": "{ \"$date\" : \"2022-04-11T00:08:54Z\" }\n", "text": "Good we are making progress.You have now the correct field name in your query. But you still have the wrong type. You are passing dates as string data rather that dates as date data. 
You are still missing the part of my answer that is in the first reply I made:Try withThis makes sure that you string date is converted to a date object.", "username": "steevej" }, { "code": "curl_setopt_array($curl, array(\n CURLOPT_URL => $this->apiPath . $action,\n CURLOPT_RETURNTRANSFER => true,\n CURLOPT_ENCODING => '',\n CURLOPT_MAXREDIRS => 10,\n CURLOPT_TIMEOUT => 0,\n CURLOPT_FOLLOWLOCATION => true,\n CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,\n CURLOPT_CUSTOMREQUEST => 'POST',\n CURLOPT_POSTFIELDS => '{\n \"dataSource\": \"' . $this->dataSource . '\",\n \"database\": \"' . $this->database . '\",\n \"collection\": \"' . $this->collection . '\",\n \"filter\": { \n \"type\": { \"$eq\": \"'.$query.'\" },\n \"createdAt\": {\n \"$gte\": { \"$date\" : \"2022-04-11T00:08:54Z\" },\n \"$lt\": { \"$date\" : \"2022-04-11T00:08:54Z\" }\n }\n }\n }',\n CURLOPT_HTTPHEADER => array(\n 'Content-Type: application/json',\n 'Access-Control-Request-Headers: *',\n 'api-key: ' . $this->apiKey,\n ),\n ));\n $mongodata = json_decode(curl_exec($curl));\nstdClass Object\n(\n [documents] => Array\n (\n )\n\n)\n", "text": "Thanks @steevej . I have to pass 2 Dates (Start Date & End Date). I am using below code now.But I am still getting empty object.Thanks.", "username": "foysal_foysal" }, { "code": "\"$gte\": \"2022-04-11T00:08:54Z\"", "text": "Of course you do.Think about it, there is no way createdAt can be both $gte a given date and $lt than the same given date.The highest date $lt 2022-04-11T00:08:54Z has to be 2022-04-11T00:08:53Z, definitively not\"$gte\": \"2022-04-11T00:08:54Z\"", "username": "steevej" } ]
Getting empty object while fetching data between two Dates
2022-06-13T11:23:21.993Z
Getting empty object while fetching data between two Dates
5,248
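The fix in this thread (using the correct field name and wrapping the bounds in EJSON {"$date": ...} values that span a real interval) is not PHP-specific. Here is a hedged Python equivalent of the final working request; the Data API URL, API key, data source, database, and collection names are placeholders to replace with your own.

```python
import requests

# Placeholder Data API endpoint and credentials.
URL = "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/find"
API_KEY = "<api-key>"

payload = {
    "dataSource": "Cluster0",        # placeholder
    "database": "mydb",              # placeholder
    "collection": "searches",        # placeholder
    "filter": {
        "type": "top",
        # createdAt is a BSON date, so the bounds must be EJSON dates, not plain strings,
        # and the range has to span more than a single instant.
        "createdAt": {
            "$gte": {"$date": "2022-04-11T00:00:00Z"},
            "$lt": {"$date": "2022-04-12T00:00:00Z"},
        },
    },
}

resp = requests.post(
    URL,
    json=payload,
    headers={"Content-Type": "application/json", "api-key": API_KEY},
)
print(resp.json().get("documents", []))
```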
null
[ "queries" ]
[ { "code": "", "text": "Hi\nI have following data set of 5 minutes bar\nDate_Time, symbol, close_price\n2022-06-14 10:00, ES, 5.31\n2022-06-14 10:05, ES, 5.25\n2022-06-14 10:10, ES, 5.29\n2022-06-14 10:15, ES, 5.33I would like to fetch the data so it would get the close_price from previous row for each current row so output will be\nDate_Time, symbol, close_price,prev_close_price\n2022-06-14 10:00, ES, 5.31, NULL\n2022-06-14 10:05, ES, 5.25,5.31\n2022-06-14 10:10, ES, 5.29,5.25\n2022-06-14 10:15, ES, 5.33,5.29in this case uniqueness is expected to be for Date_Time+ Symbol so query need to be accordinglyThanks in advance for looking into thisDhru", "username": "Dhruvesh_Patel" }, { "code": "", "text": "Please provide sample data and sample output in JSON format that we can cut-n-paste into our system so that we can experiment.Also share what you have tried and indicate how it fails to provide the expected result. That will prevent us for investigating in a direction you already know does not work.", "username": "steevej" }, { "code": "{ \n \"end_dt\" : ISODate(\"2022-05-25T09:50:00.000+0000\"), \n \"close\" : 1849.9, \n \"symbol\" : \"1GCM2\"\n}\n{ \n \"end_dt\" : ISODate(\"2022-05-25T09:45:00.000+0000\"), \n \"close\" : 1849.6, \n \"symbol\" : \"1GCM2\"\n}\n{ \n \"end_dt\" : ISODate(\"2022-05-25T09:40:00.000+0000\"), \n \"close\" : 1849.7, \n \"symbol\" : \"1GCM2\"\n}\n{ \n \"end_dt\" : ISODate(\"2022-05-25T09:35:00.000+0000\"), \n \"close\" : 1852.3, \n \"symbol\" : \"1GCM2\"\n}\n{ \n \"end_dt\" : ISODate(\"2022-05-25T09:30:00.000+0000\"), \n \"close\" : 1851.0, \n \"symbol\" : \"1GCM2\"\n}\n{ \n \"end_dt\" : ISODate(\"2022-05-25T09:25:00.000+0000\"), \n \"close\" : 1850.8, \n \"symbol\" : \"1GCM2\"\n}\n{ \n \"end_dt\" : ISODate(\"2022-05-25T09:20:00.000+0000\"), \n \"close\" : 1851.0, \n \"symbol\" : \"1GCM2\"\n}\n", "text": "Here is json formatted dataThanks\nDhru", "username": "Dhruvesh_Patel" }, { "code": "", "text": "The quotes are not usable.See Formatting code and log snippets in posts", "username": "steevej" }, { "code": "", "text": "let me look into and get back to youThanks\nDhru", "username": "Dhruvesh_Patel" }, { "code": "db.pricedata.insertMany([\n\n{end_dt : ISODate(\"2022-05-25T09:50:00.000+0000\"),close : 1849.9,symbol : \"1GCM2\"},\n{end_dt : ISODate(\"2022-05-25T09:45:00.000+0000\"),close : 1849.6,symbol : \"1GCM2\"},\n{end_dt : ISODate(\"2022-05-25T09:40:00.000+0000\"),close : 1849.7,symbol : \"1GCM2\"},\n{end_dt : ISODate(\"2022-05-25T09:35:00.000+0000\"),close : 1852.3,symbol : \"1GCM2\"},\n{end_dt : ISODate(\"2022-05-25T09:30:00.000+0000\"),close : 1851.0,symbol : \"1GCM2\"},\n{end_dt : ISODate(\"2022-05-25T09:25:00.000+0000\"),close : 1850.8,symbol : \"1GCM2\"},\n{end_dt : ISODate(\"2022-05-25T09:20:00.000+0000\"),close : 1851.0,symbol : \"1GCM2\"}\n]);\n", "text": "Let me know if still any issue with format", "username": "Dhruvesh_Patel" }, { "code": "{ \"$lookup\" : { \n \"from\" : \"pricedata\" ,\n \"let\" : {\n \"symbol\" : \"$symbol\" ,\n \"end_dt\" : \"$end_dt\"\n }\n \"pipeline\" : [\n { \"$match\" : {\n \"symbol\" : \"$$symbol\" ,\n \"end_dt\" : { \"$lt\" : $$end_dt\" }\n } } ,\n { \"$sort\" : { \"end_dt\" : -1 } } ,\n { \"$limit\" 1 }\n ] ,\n \"as\" : \"open\"\n} }\n", "text": "My first instinct would be to do a $lookup from:pricedata with a pipeline that $match the symbol, with $lt my own end_dt, with $sorts end_dt and $limit1.Something like: (not able to test at this point)", "username": "steevej" }, { "code": 
"db.getCollection(\"pricedata\").aggregate(\n [\n { \n \"$lookup\" : { \n \"from\" : \"pricedata\", \n \"let\" : { \n \"symbol\" : \"$symbol\", \n \"end_dt\" : \"$end_dt\"\n }, \n \"pipeline\" : [\n { \n \"$match\" : { \n \"symbol\" : \"$$symbol\", \n \"end_dt\" : { \n \"$lt\" : \"$$end_dt\"\n }\n }\n }, \n { \n \"$sort\" : { \n \"end_dt\" : -1.0\n }\n }, \n { \n \"$limit\" :1.0\n }\n ], \n \"as\" : \"prev_close\"\n }\n }\n ], \n { \n \"allowDiskUse\" : false\n }\n);\n", "text": "I am getting prev_close as empty array , our server version is 4.4, will following syntax is compatible with it?", "username": "Dhruvesh_Patel" }, { "code": " \"$match\" : { \"$expr\" : { \"$and\" : [\n { \"$eq\" : [ \"$symbol\" , \"$$symbol\" ] } , \n { \"$lt\" : [ \"$end_dt\" ,\n \"$$end_dt\"\n ] }\n ] } }\n", "text": "The documentation indicatesMongoDB 3.6 adds support for:Executing a pipeline on a joined collection.so it should work.Looking at the documentation, one of my best friend, I remembered:A $match stage requires the use of an $expr operator to access the variables. The $expr operator allows the use of aggregation expressions inside of the $match syntax.With $expr the $match becomes:", "username": "steevej" }, { "code": "", "text": "Look like its working now. Thanks for your help", "username": "Dhruvesh_Patel" }, { "code": "", "text": "One more question I have on this topic. I have used above suggested approach on collection which has large dataset and I can see Aggregation Pipeline is slow and it’s because to get the previous row we are are using\n{ “$lt” : [ “$end_dt” , “$$end_dt” ] } with “$sort” : { “end_dt” : -1.0 }I was wondering is it possible in aggregation pipeline to access previous row based on index of row. (e.g if current row index is n then previous row should be accessed using n-1?)Thanks", "username": "Dhruvesh_Patel" }, { "code": "", "text": "The first thing is to have a search index that support the query if it is a regular use-case.Documents don’t really have a position n. Even if they had, document at n-1 is not necessarily the document you want, it is if the symbol is the same, it is not if the symbol is $ne. You cannot know which document is the previous one unless you $match and $sort.", "username": "steevej" }, { "code": "", "text": "I did have index “symbol:1,end_dt:1” for pricedata collectionBut because of lookup stage is using end_dt:-1 , I end up creating second index as symbol:1,end_dt:-1\nafter adding new index, lookup was faster comparatively previous scenario.\nBut my aggregation query is returning output with symbol asc order and end_dt in descending order.\nwhen add additional stage after lookup with sort as symbol:1,end_dt:1 then it takes forever not sure why", "username": "Dhruvesh_Patel" }, { "code": "{ a: 1, b: -1 }{ a: 1, b: -1 }{ a: -1, b: 1 }", "text": "additional stage after lookup with sort as symbol:1,end_dt:1 then it takes forever not sure whyOnce documents are transformed indexes cannot really used because indexes points to the original documents from the collection, not the modified documents from the previous stage. 
So a late stage $sort is usually a memory sort rather than an index scan, and should be avoided.I am surprised aboutI end up creating second index as symbol:1,end_dt:-1\nafter adding new index, lookup was faster comparatively previous scenario.According to documentation the index symbol:1,end_dt:1 should support symbol:1,end_dt:-1 as writtenFor example, an index key pattern { a: 1, b: -1 } can support a sort on { a: 1, b: -1 } and { a: -1, b: 1 }It would be interesting to see the explain plan. By the ESR rule the original index should be used and the descending one should not make a difference.I would try to $sort:{“symbol”:1 , “end_dt” : -1 } in the inner pipeline. The result should be the same we $match symbol.But my aggregation query is returning output with symbol asc order and end_dt in descending order.I do not think that this has anything to do with the index or the inner pipeline. The aggregation returns the documents in a non specified order when you do not sort. If you want symbol in order you have to $sort. To have $sort use the index it has to done before $lookup.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Fetch the value of column from previous row
2022-06-14T18:06:22.562Z
Fetch the value of column from previous row
6,005
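As a companion to the thread above, this is roughly what the final $lookup-with-$expr pipeline looks like from PyMongo. Collection and field names follow the sample data in the thread, but treat it as a sketch rather than a tuned solution; on MongoDB 5.0+ the same previous-row requirement is often easier with $setWindowFields and $shift.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["test"]

pipeline = [
    {"$lookup": {
        "from": "pricedata",
        "let": {"symbol": "$symbol", "end_dt": "$end_dt"},
        "pipeline": [
            # $match inside the sub-pipeline needs $expr to see the let variables.
            {"$match": {"$expr": {"$and": [
                {"$eq": ["$symbol", "$$symbol"]},
                {"$lt": ["$end_dt", "$$end_dt"]},
            ]}}},
            {"$sort": {"end_dt": -1}},   # most recent earlier bar first
            {"$limit": 1},
        ],
        "as": "prev",
    }},
    # Surface the previous close as a scalar field (missing when there is no earlier bar).
    {"$addFields": {"prev_close": {"$arrayElemAt": ["$prev.close", 0]}}},
    {"$project": {"prev": 0}},
]

for doc in db.pricedata.aggregate(pipeline):
    print(doc["end_dt"], doc["symbol"], doc["close"], doc.get("prev_close"))
```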
null
[ "crud", "transactions" ]
[ { "code": "Don't use transactions in MongoDBisBlocked = trueisClosed = trueawait Promise.all([\n User.updateOne({ _id: data.id }, {\n isBlocked: true,\n }, { session }),\n\n Conversation.updateOne({ user: data.id }, {\n isClosed: true,\n }, { session }),\n]);\n", "text": "Hi,I’m trying to perform two operations within a transaction but it’s really frustrating to see the Don't use transactions in MongoDB advice everywhere. That makes me overthink everything.Here if i want to block a user (isBlocked = true), I should also mark their conversation as closed (isClosed = true).Should i avoid transactions here?", "username": "Chris_Martel" }, { "code": "", "text": "I do not think that blocking a user is a very frequent use-case.So I do not think that any overhead of transaction should be an issue. If you have performance issue then fix it then. Do not worry about performance of uncommon use-case. Make them work correctly and then make them work fast if needed.From what I see, it looks like a user has only 1 conversation. If not you might want to use updateMany.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Should i use Transactions?
2022-06-16T19:37:05.682Z
Should i use Transactions?
1,790
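For anyone weighing the same decision, this is the shape of the equivalent transaction in PyMongo using the driver's with_transaction helper; the URI and the users/conversations collection names are assumptions mirroring the snippet in the thread, not the poster's actual schema.

```python
from pymongo import MongoClient

# Transactions require a replica set or sharded cluster; placeholder URI.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client["app"]

def block_user(user_id):
    # Both updates commit together, or neither is applied.
    def callback(session):
        db.users.update_one(
            {"_id": user_id}, {"$set": {"isBlocked": True}}, session=session
        )
        db.conversations.update_one(
            {"user": user_id}, {"$set": {"isClosed": True}}, session=session
        )

    with client.start_session() as session:
        session.with_transaction(callback)

block_user("some-user-id")  # hypothetical id
```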
null
[]
[ { "code": "", "text": "We are considering to use Linode for deploying our application. We want to use the MongoDB Atlas DBaaS. Are there any good guidelines to setup Linode instances to securely access DB clusters in Atlas? We are specifically looking for guidelines, suggestions and ways to setup DB Network Access from our Linode instances.Thanks in advance.", "username": "Ranganath_Kini" }, { "code": "", "text": "Hi @Ranganath_Kini - Welcome to the community!Connection should be possible from Linode to the Atlas instances but you wouldn’t have any of the cloud provider (GCP,AWS, Azure) integration options like VPC peering or private endpoints. However, Atlas is secure by default as communications are encrypted using TLS and has IP access list capabilities which limits exposure of the Atlas endpoints to certain IP’s which user’s control.We are specifically looking for guidelines, suggestions and ways to setup DB Network Access from our Linode instances.I currently am not aware of any blogs or guides that we have that would assist with this specifically regarding Linode and Atlas. In saying so, perhaps the following documentation regarding Atlas connectivity and security may be of useful to you:It would also be great to know your experience of connectivity from Linode to Atlas as well.Hope the above helps.Regards,\nJason", "username": "Jason_Tran" } ]
Setting Up MongoDB Atlas DB Network Access from Linode Instances
2022-05-30T03:56:34.278Z
Setting Up MongoDB Atlas DB Network Access from Linode Instances
1,410
null
[ "aggregation" ]
[ { "code": "", "text": "I am working on an aggregation pipeline to compile data to push to the frontend for a chart to render, which involves 3 collections (one of which is just a reference to the other 2). Individually on the collections, I am able to get the expected response on the aggregation, but when trying to incorporate the aggregations together with $lookup, I keep getting the error:query failed: (Location40323) A pipeline stage specification object must contain exactly one field.From what I’m reading in other user’s threads on this error, its either a missing bracket, or a property being set outside the appropriate bracket. However, for the life of me, I cannot find any missing brackets. I’m still learning mongoDB, and (coding in general…) so I’m certain its some silly mistake/oversight, but I’ve been running through the code over and over and coming up short. If anyone can help point out the errors I’m making, I’d greatly appreciate it.I’ve made a playground with the example:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "Jowz" }, { "code": "", "text": "Yup, was the brackets, I was tripping myself up adding the {} in the $lookup pipelines. Each time I went to the add the brackets, I was accidentally wrapping each condition within a greater object of the pipeline array, for example :pipeline: [\n{\n{$match:… },\n{$lookup: …}\nJust needed to take a break and come back to it, was staring too long and got lost in the braces…", "username": "Jowz" }, { "code": "", "text": "involves 3 collections (one of which is just a reference to the other 2You know, that sounds a little like a normalized relational schema which isn’t usually the best way to go for MongoDB…. Asya\nP.S. glad you were able to get the syntax resolved.", "username": "Asya_Kamsky" }, { "code": "", "text": "Well the initial collection has a lot of other properties and at scale the documents would exceed the 16mb limit if I nested the other 2 collections, so I tie them together with ref’s. As far as I’m aware, that is the preferred mongo way of approaching it, but I am always open to suggestions.", "username": "Jowz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Pipeline error, code 40323, its gotta be a missing bracket, right?
2022-06-14T22:15:58.302Z
Pipeline error, code 40323, its gotta be a missing bracket, right?
2,348
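For anyone else hitting Location40323, the difference between the broken and working shapes is easier to see in a driver, where each stage must be a single-key document in a list. A small PyMongo illustration follows; the collection and field names are made up.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["test"]

# WRONG: one object holds two stage operators, so the stage has two top-level
# fields and the server raises Location40323. Shown only for contrast; not sent.
broken_subpipeline = [
    {
        "$match": {"status": "active"},
        "$lookup": {"from": "details", "localField": "_id", "foreignField": "parent", "as": "details"},
    }
]

# RIGHT: every stage is its own single-field document in the list.
working_pipeline = [
    {"$match": {"status": "active"}},
    {"$lookup": {
        "from": "details",
        "localField": "_id",
        "foreignField": "parent",
        "as": "details",
    }},
]

results = list(db.items.aggregate(working_pipeline))
print(len(results))
```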
null
[ "realm-web" ]
[ { "code": "Error: Request failed (POST https://stitch.mongodb.com/api/client/v2.0/app/mongo-realm-covid-chartjs-ryfil/functions/call): insert not permitted (status 403)\n at Button.js:2850:24\n at l (index.js:63:40)\n at Generator._invoke (index.js:293:22)\n at Generator.next (index.js:118:21)\n at n (asyncToGenerator.js:3:20)\n at s (asyncToGenerator.js:25:9)\n", "text": "Hello\nMy repo is here: GitHub - coding-to-music/mongo-realm-covid-chartjs: mongo realm covid chartjsYou can login here: https://mongo-realm-covid-chartjs.vercel.appThen navigate to Countries\nThen press Save DataThe schema rules default to allow read and insert\nThe problem is collection Data, the collection Users is allowing insert. Tasks also has rows, but that came from another application, in database tracker.I am getting this message in the browser console (ctrl-shift-I) :", "username": "Tom_Connors" }, { "code": "Rules", "text": "Hi @Tom_Connors,I think you can update the function Authentication to “System” so it has full rights. Or you may be missing some write authorization in the Rules tab.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "It worked, thanks very much for your help", "username": "Tom_Connors" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Functions / call): insert not permitted (status 403)
2022-06-16T05:16:15.579Z
Functions / call): insert not permitted (status 403)
3,590
https://www.mongodb.com/…1_2_1023x401.png
[ "swift" ]
[ { "code": "SwiftRealmRealm Device Synccustom user datarealm permissionsCustom User Datafunc login() {\n isLoading = true\n errorMessage = nil\n \n \n let credentials = Credentials.emailPassword(email: username, password: password)\n \n DispatchQueue.main.async {\n app.login(credentials: credentials) { [weak self] result in\n switch (result) {\n case .failure(let error):\n print(String(describing: error))\n self?.errorMessage = error.localizedDescription\n \n case .success(let user):\n if user.customData.isEmpty {\n let client = user.mongoClient(\"mongodb-atlas\")\n let database = client.database(named: \"UserAPI\")\n let collection = database.collection(withName: \"Users\")\n // Insert the custom user data object\n let customUserData: Document = [\n \"_id\": AnyBSON(user.id),\n \"email\": .string(self!.email),\n \"province\": .string(self!.province),\n \"_partition\": .string(user.id)\n ]\n collection.insertOne(customUserData) {result in\n switch result {\n case .failure(let error):\n print(\"Failed to insert document: \\(error.localizedDescription)\")\n case .success(let newObjectId):\n print(\"Inserted custom user data document with object ID: \\(newObjectId)\")\n }\n }\n }\n }\n self?.isLoading = false\n }\n }\n }\nCustom User DataFailed to insert document: no rule exists for namespace 'UserAPI.Users'Custom User Data", "text": "I’m developing a mobile app using Swift and Realm database.I configured Realm Device Sync and tried to add custom user data to a cluster I created.Even though I watched dozens of tutorials about realm permissions I still can’t figure out what’s wrong with the in-app permissionshere is the authentication function I am using to add Custom User DataBut when I try to create a new user, it successfully creates one. The problem is, when it comes things comes to adding the Custom User Data it returns an error like this:Failed to insert document: no rule exists for namespace 'UserAPI.Users'and when I check the MongoDB logs, I can see the error in more detail:\nError:Action on service ‘mongodb-atlas’ forbidden: no rules have been configured for this service\n{\n“name”: “insertOne”,\n“arguments”: [\n{\n“database”: “UserAPI”,\n“collection”: “Users”,\n“document”: {\n“_id”: “62aaf9ab775b8b21d2343783”,\n“_partition”: “62aaf9ab775b8b21d2343783”,\n“email”: “selam456qwe”,\n“province”: “”\n}\n}\n],\n“service”: “mongodb-atlas”\n}\nFunction Call Location:\nDE-FF\nCompute Used:\n91734180 bytes•ms\nRemote IP Address:\n31.223.94.210my Custom User Data settings:\n(I saw an error like I can’t upload more than one embeded images so I will write them instead:)Cluster Name:\nmongodb-atlasDatabase Name:\nUserAPICollection Name:\nUsersUser ID Field: _idand my app permissions:\n\nScreen Shot 2022-06-16 at 13.06.582942×1154 207 KB\nany help would be appriciated, I’m struggling with this error for 3 daysI read the documentations about syncing and partitioning a couple of times but still can’t figure what’s wrong,thanks in advance.", "username": "GrandSir" }, { "code": "", "text": "wrong image, sorry.\n\nScreen Shot 2022-06-16 at 13.09.511864×1704 204 KB\n", "username": "GrandSir" }, { "code": "", "text": "Hi, could you post the URL to your application on realm.mongodb.com? (It is safe, only employees can read it). I suspect one of the issues may be that you may not have a schema defined on that table, but I can take a peek under the hood.Thanks,\nTyler", "username": "Tyler_Kaye" } ]
Realm device sync "Action on service 'mongodb-atlas' forbidden: no rules have been configured for this service" error
2022-06-16T10:34:23.186Z
Realm device sync &ldquo;Action on service &lsquo;mongodb-atlas&rsquo; forbidden: no rules have been configured for this service&rdquo; error
2,154
null
[ "aggregation" ]
[ { "code": "", "text": "Hello everyone,I have 3 collections “Product”, “Shop” and “Stock”.Here my steps :\n1 - $match “Product” > 6 909 documents\n2 - $lookup “Product” and “Shop” > 6 909 documents\n3 - $unwind this first result > 443 163 documents\n4 - $lookup this first result and Stock\n5 - $unwind this second resultat > 4 253 137 documentsAfter this 5 steps, I tried to add a “$count” stage, but i got “exceeded time limit” !\nI also tried to add “$group” to merge the documents and reduce the number of documents, buf failed again !Do you have any ideas ?Followed are the collections :— Product = 6909 documents\n{\n‘productId’: ‘141013’,\n‘language’: ‘fr’\n},\n{\nproductId: ‘141014’,\nlanguage: ‘fr’\n}\n…— Shop —{\nshopId: ‘101’,\nname: ‘Paris’,\nlanguages: [‘fr’]\n},\n{\nshopId: ‘102’,\nname: ‘Marseille’,\nlanguages: [‘fr’]\n},\n{\nshopId: ‘103’,\nname: ‘Bordeaux’,\nlanguages: [‘fr’]}\n…Lookup between Product and Shop give :{\nproductId: ‘141013’,\nlanguage: ‘fr’,\nlookupShop: [\n{ shopId: ‘101’, name: ‘Paris’, languages: [ ‘fr’ ] },\n{ shopId: ‘102’, name: ‘Marseille’, languages: [ ‘fr’ ] },\n{ shopId: ‘103’, name: ‘Bordeaux’, languages: [ ‘fr’ ] }\n…\n]\n},\n{\nproductId: ‘141014’,\nlanguage: ‘fr’,\nlookupShop: [\n{ shopId: ‘101’, name: ‘Paris’, languages: [ ‘fr’ ] },\n{ shopId: ‘102’, name: ‘Marseille’, languages: [ ‘fr’ ] },\n{ shopId: ‘103’, name: ‘Bordeaux’, languages: [ ‘fr’ ] }\n…\n]\n},\n…After $unwind = 443 163 documents{\nproductId: ‘141013’,\nlanguage: ‘fr’,\nlookupShop:\n{\nshopId:101,\nname: ‘Paris’,\nlanguages:[‘fr’]\n}\n},\n{\nproductId: ‘141013’,\nlanguage: ‘fr’,\nlookupShop:\n{\nshopId:102,\nname: ‘Marseille’,\nlanguages:[‘fr’]\n}\n},\n{\nproductId: ‘141013’,\nlanguage: ‘fr’,\nlookupShop:\n{\nshopId:103,\nname: ‘Bordeaux’,\nlanguages:[‘fr’]\n}\n},\n{\nproductId: ‘141014’,\nlanguage: ‘fr’,\nlookupShop:\n{\nshopId:101,\nname: ‘Paris’,\nlanguages:[‘fr’]\n}\n}\n…— Stock{\nproductId: ‘141013’,\nsizeId: ‘01’,\nshop:\n{\nshopId: ‘101’,\nname: ‘Paris’\n}\n},\n{\nproductId: ‘141013’,\nsizeId: ‘02’,\nshop:\n{\nshopId: ‘101’,\nname: ‘Paris’\n}\n},\n{\nproductId: ‘141013’,\nsizeId: ‘01’,\nshop:\n{\nshopId: ‘102’,\nname: ‘Marseille’\n}\n},\n…Lookup with Stock with keys productId and shopId.\nAfter $unwind = 4 253 137 documents", "username": "emmanuel_bernard" }, { "code": "", "text": "Why are you unwinding after the first lookup? You can pass an array of keys as localField and it will “do the right thing”.But this looks very much like a normalized schema would look - why do you need to so many joins? Can you explain your use case?Asya\nP.S. your documents would be easier to read if you format them as code.\nP.P.S. 
Include the actual pipeline you’re running too…", "username": "Asya_Kamsky" }, { "code": "[\n { _id: 'fr_141013', language: 'fr', productId: '141013' },\n { _id: 'fr_141014', language: 'fr', productId: '141014' }\n]\n[\n {\n _id: 'fr_141013',\n language: 'fr',\n productId: '141013',\n lookupShop: [ { shopId: '407' }, { shopId: '408' } ]\n },\n {\n _id: 'fr_141014',\n language: 'fr',\n productId: '141014',\n lookupShop: [ { shopId: '407' }, { shopId: '409' } ]\n }\n]\n[\n {\n _id: 'fr_141013',\n language: 'fr',\n productId: '141013',\n lookupShop: [ { shopId: '407' }, { shopId: '408' } ],\n lookupStock: [\n {\n sizeId: '05',\n warehouse: {\n shopIds: [\n '594', '562', '520',\n '510', '533', '511',\n '501', '534', '557',\n '503', '548', '504',\n '406', '407', '506',\n '616', '408', '409'\n ]\n }\n },\n {\n sizeId: '04',\n warehouse: {\n shopIds: [\n '594', '562', '520',\n '510', '533', '511',\n '501', '534', '557',\n '503', '548', '504',\n '406', '407', '506',\n '616', '408', '409'\n ]\n }\n },\n[{$match: {\n productId: {\n $in: [\n '100007',\n '100007'\n ]\n }\n}}, {$project: {\n productId: 1,\n language: 1\n}}, {$lookup: {\n from: 'Shop',\n localField: 'language',\n foreignField: 'localeIds',\n pipeline: [\n {\n $match: {\n shopId: {\n $in: [\n '407',\n '408'\n ]\n }\n }\n }\n ],\n as: 'lookupShop'\n}}, {$project: {\n productId: 1,\n language: 1,\n 'lookupShop.shopId': 1\n}}, {$lookup: {\n from: 'Stock',\n localField: 'productId',\n foreignField: 'productId',\n pipeline: [\n {\n $limit: 10\n }\n ],\n as: 'lookupStock'\n}}, {$project: {\n language: 1,\n productId: 1,\n lookupShop: 1,\n 'lookupStock.sizeId': 1,\n 'lookupStock.warehouse.shopIds': 1\n}}]\n", "text": "You are rightn no need to unwind after the first lookup (thanks).I need to check the truthfulness of the documents.I have formated my documents as code.— Product (with $match and $project)— adding $lookup Shop– adding $lookup StockI need to reduce the lookupStock array and keep, for each size, only the values of warehouse.shopIds similar to those lookupShop array’s values.Here is the actual pipelineBest regardsEmmanuel", "username": "emmanuel_bernard" } ]
Max size of $lookup arrays?
2022-06-16T15:49:18.628Z
Max size of $lookup arrays?
1,990
null
[ "aggregation" ]
[ { "code": "", "text": "Hi,\nI have DB with 2 collections - collection A and collection B.\nDoc-A from collection A\nDoc-B1 and Doc-B2 from collection B.Doc-A {x:[y1,y2,y3]\nt: 10\n}\nDoc-B1 {name: y1\ntargets: [z1,z2]\n}\nDoc-B2 {name: y2\ntargets: [z3,z4]\n}How can I get the Doc-A’ (below) - I want all the other fields of doc-A will also be in Doc-A’ ?Doc- A’ { x:[y1,y2],\nt: 10\ntargets: [z1,z2,z3,z4]\n}I tried to unwind but I didn’t succeed to combine them after that into one document\nthanks!", "username": "Tal_Elbaz" }, { "code": "", "text": "Please publish real JSON documents that we can cut-n-paste into our systems for us to experiment.Also share what you have tried and indicate how it fails to produce the expected result. This will save us time by preventing us investigating in the wrong directory.What field from Doc-A refers to Doc-B1? I suspect the y1, y2, y3 from Doc-A to match Doc-B1.name?Have you tried the trivial $lookup from:B that uses localField:x and foreignField:name?", "username": "steevej" } ]
Combination of two docs from two different collection
2022-06-16T07:32:55.989Z
Combination of two docs from two different collection
1,128
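To make the suggested approach concrete, here is a rough PyMongo sketch of the trivial $lookup (localField x to foreignField name) followed by a flattening step; it assumes the A and B collection names and the field layout shown in the question.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["test"]

pipeline = [
    # Join every y-name in Doc-A's x array against B.name.
    {"$lookup": {
        "from": "B",                 # hypothetical collection name
        "localField": "x",
        "foreignField": "name",
        "as": "matched",
    }},
    # Flatten all the matched targets arrays into a single targets array.
    {"$addFields": {
        "targets": {
            "$reduce": {
                "input": "$matched.targets",
                "initialValue": [],
                "in": {"$concatArrays": ["$$value", "$$this"]},
            }
        }
    }},
    {"$project": {"matched": 0}},
]

for doc in db.A.aggregate(pipeline):   # hypothetical collection name
    print(doc)
```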
null
[ "compass" ]
[ { "code": "", "text": "Hi,More often than not, I need to check some metrics happening in my Atlas database, for example number of users signing up (# of docs in a collection).Currently I use Charts to build a dashboard and host it somewhere so that I can quickly check the numbers on phone.Is there any way to get an official MongoDB Atlas or Compass support as a mobile app?\nIf not, can we access charts in view only mode in mobile app?Thanks…!", "username": "shrey_batra" }, { "code": "", "text": "Hi @shrey_batra and welcome in the MongoDB Community !Depending of the rights the Atlas user has on the dashboard, it can be able to edit the charts or not.\nBut if your goal is to just see a few Charts or even a dashboard from your phone, I would just create an App in the Atlas App Service, activate the hosting feature and create a tiny HTML page in which you can embed the iframes of the Charts / dashboard from MDB Charts.In this blog post you have an example of how to make it dynamic but they also explain how to just embed a chart:Learn how to build an animated timeline chart with the MongoDB Charts Embedding SDKCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi thanks,The steps and solution you mentioned I do that already I was hoping to have some type of custom query tool / chart building tool on phone so that we can pull up adhoc results on the fly, without opening our laptops or building the static html page.", "username": "shrey_batra" }, { "code": "", "text": "Oooh I misunderstood the need a bit !Well I think this is a nice feature request for the team. You can submit it here in the Charts section. If you get enough votes, it’s probably the future! https://feedback.mongodb.com/forums/923524-chartsCheers,\nMaxime.", "username": "MaBeuLux88" } ]
Feature Request - MongoDB Atlas / Charts Mobile App
2022-06-15T12:29:02.888Z
Feature Request - MongoDB Atlas / Charts Mobile App
2,271
null
[ "java" ]
[ { "code": "", "text": "Hi MongoDB community, I’m a Java developer and I recently got my MongoDB Certified Developer certificate. But I want to learn more about the in-depth knowledge of MongoDB, so I pulled down the code of Mongo Java driver from GitHub - mongodb/mongo-java-driver: The Official Java driver for MongoDB. The things is there are some many classes, any suggestion where should I get started as an entry point ?", "username": "Junlei_Li" }, { "code": "", "text": "Enjoy MongoDB Courses and Trainings | MongoDB UniversityI was too quick. I imagine that since you gotMongoDB Certified Developer certificateyou already took M220J.Sorry.", "username": "steevej" }, { "code": "", "text": "I pulled down the code of Mongo Java driver from GitHub - mongodb/mongo-java-driver: The Java driver for MongoDB . The things is there are some many classes, any suggestion where should I get started as an entry point ?Hi @Junlei_Li,Was there something specific you are looking to learn about the Java driver?As a general starting point for official MongoDB drivers, I recommend reviewing some of the MongoDB Specifications the drivers implement.These specifications provide context on expected behaviour and rationale for driver APIs.For example:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X Thank you so much for the useful tips, I’m gonna look into these parts.", "username": "Junlei_Li" } ]
Any learning suggestion for the MongoDB Java Driver?
2022-05-05T19:56:39.811Z
Any learning suggestion for the MongoDB Java Driver?
2,103
null
[]
[ { "code": "", "text": "is there a way to get last sync time from primary before primary went down and new secondary is elected as primary?", "username": "Sameer_Kattel" }, { "code": "", "text": "Hi @Sameer_Kattel,I think you have this information in the log when the election is performed.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Last sync time from primary before primary went down
2022-06-15T14:37:31.811Z
Last sync time from primary before primary went down
1,224
null
[]
[ { "code": "", "text": "I am using realm 10.25.2 with Xcode 13.2.1 and I can see crash here:function sharedSchema.in the line:\nmalloc_ptr classes(objc_copyClassList(&numClasses), &free);I try with other version 10.17 and 10.20 with the same result.I use realm from splash.Also see in the log this:didDisappear < MyApp: 0x7f899e85da00>deinit < MyApp.SplashScreenView: 0x7f899e85da00>2022-06-15 20:41:06.705867+0200 MyApp[27444:2084370] [] nw_connection_receive_internal_block_invoke [C26] Receive reply failed with error \"Operation canceled\"2022-06-15 20:41:06.705981+0200 MyApp[27444:2084370] [] nw_connection_receive_internal_block_invoke [C26] Receive reply failed with error \"Operation canceled\"2022-06-15 20:41:06.706148+0200 MyApp[27444:2084370] [] nw_connection_receive_internal_block_invoke [C26] Receive reply failed with error \"Operation canceled\"2022-06-15 20:41:06.706325+0200 MyApp[27444:2084370] [] nw_connection_receive_internal_block_invoke [C26] Receive reply failed with error \"Operation canceled\"SingletonMetadataCache(0x600000249b40): cyclic metadata dependency detected, abortingCoreSimulator 783.5 - Device: iPad Pro (9.7-inch) (3948E767-3034-48B3-A0B2-BBF5E8EB8C99) - Runtime: iOS 13.0 (17A577) - DeviceType: iPad Pro (9.7-inch)Any idea?Thank you.", "username": "Alejandro_Acosta" }, { "code": "", "text": "Showing (only) the error doesn’t really give us much to go on. It’s likely caused by some code you wrote but without seeing that and some further troubleshooting there’s no way to zero in on the issue.A good process for troubleshooting is to add a breakpoint to your code and run the app - if it crashes before that breakpoint, remove it and then add a new breakpoint before that one. Same thing if it crashes afterwards.In your case a good place to start is somewhere around didDisappear - perhaps some code is attempting to access a var that has gone out of scope.", "username": "Jay" } ]
Crash with iOS 13
2022-06-15T18:54:01.326Z
Crash with iOS 13
1,186
null
[ "queries", "crud" ]
[ { "code": "{\n\t\"_id\" : ObjectId(\"62a85e9017cfa90511b22c23\"),\n\t\"field\" : {\n\t\t\"field\" : {\n\t\t\t\"field\" : 208.5,\n\t\t\t\"field\" : 208.5,\n\t\t\t\"field\" : 0\n\t\t},\n\t\t\"field\" : {\n\t\t\t\"_id\" : ObjectId(\"625fea09645b2228bcea11bc\"),\n\t\t\t\"field\" : \"text\"\n\t\t},\n\t\t\"field\" : null,\n\t\t\"field\" : \"text\",\n\t\t\"field\" : \"A8\",\n\t\t\"field\" : \"a8\",\n\t\t\"fields\" : [ ],\n\t\t\"field\" : \"text\"\n\t},\n\t\"field\" : {\n\t\t\"field\" : {\n\t\t\t\"key\" : \"text\",\n\t\t\t\"field\" : \"text\",\n\t\t\t\"field\" : \"text\"\n\t\t},\n\t\t\"_id\" : ObjectId(\"61ba11a19d464624ae8f8a7d\"),\n\t\t\"field\" : \"text\",\n\t\t\"field\" : \"text\",\n\t\t\"field\" : \"text\"\n\t},\n\t\"field\" : {\n\t\t\"field\" : {\n\t\t\t\"field\" : {\n\t\t\t\t\"field\" : 0,\n\t\t\t\t\"field\" : 0\n\t\t\t},\n\t\t\t\"field\" : 0,\n\t\t\t\"fields\" : [ ]\n\t\t},\n\t\t\"fields\" : 0\n\t},\n\t\"field\" : \"text\",\n\t\"field\" : ObjectId(\"62a76ea90ea8c7770e542a9e\"),\n\t\"field\" : ObjectId(\"61ba0e5c63627829303591e0\"),\n\t\"__v\" : 0,\n\t\"field\" : ISODate(\"2022-06-14T10:10:25.754Z\"),\n\t\"field\" : ISODate(\"2022-06-14T10:10:25.754Z\")\n}\n", "text": "I have an insertMany with around 300 documents of an average and it takes on average 4 seconds. Is this due to my tier I am at at loss as to how I can improve the speeds here. I have reduced the document size as much as possible and it looks similar to this:I changed names and values but just wanted to give an idea of its size. Is this normal? what recourse is there, please jebus tell me its a symptom of my tier ", "username": "Ben_Gibbons" }, { "code": "{\n _id: ObjectId(\"62a85e9017cfa90511b22c23\"),\n field: 2022-06-14T10:10:25.754Z,\n __v: 0\n}\n", "text": "By redacting you document to have only root field names, _id, __v and field, it make it totally unusable for us to experiment. Because, in JSON, where duplicated field names are replaced by the last occurrence of a repeated field you end up with the very small document:M0 is shared so the performance you experiment is dependant of what else is happening of the shared server.What is the average size of your 300 documents?What are the total size of your 300 documents?What is your connection between your client and the rest of internet?Where is your location compared to the region of the cluster?All the above influence the performances.One test at 4 seconds might be normal but if you consistently clock at 4s that seems slow. If you are doing load testing on a shared server you are shooting your self and everybody else in the foot.", "username": "steevej" }, { "code": "", "text": "Hi in your experience does upgrading to a dedicated tier on average improve insert performance", "username": "Ben_Gibbons" }, { "code": "", "text": "Dedicated will always be better than shared. And more predictable as all the traffic will be yours. Your performances will not be influence by the other applications using the same shared server.You have not answered any of my questions, so nobody can know for sure if you will gain from the switch. Your bottleneck has not been determined yet.", "username": "steevej" } ]
insertMany on M0 is extremely slow
2022-06-14T10:21:47.368Z
insertMany on M0 is extremely slow
2,848
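One way to act on the follow-up questions above is to time the call yourself and separate client/network overhead from server work. A rough PyMongo harness is below; the URI and document shape are placeholders, and ordered=False simply avoids stop-on-error ordering.

```python
import time
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:[email protected]/")  # placeholder URI
coll = client["test"]["bench"]

# Build ~300 small placeholder documents roughly mirroring the redacted shape above.
now = datetime.now(timezone.utc)
docs = [{"n": i, "createdAt": now, "payload": {"field": "text", "value": 208.5}} for i in range(300)]

start = time.perf_counter()
result = coll.insert_many(docs, ordered=False)   # unordered batch; server is free to parallelise
elapsed = time.perf_counter() - start

print(f"inserted {len(result.inserted_ids)} docs in {elapsed:.2f}s")
# If this is consistently several seconds for ~300 small documents, suspect network
# latency to the cluster region or contention on the shared (M0) tier rather than the write itself.
```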
null
[ "aggregation", "python" ]
[ { "code": "{\n\t\"model\": \"Honda Civic\",\n\t\"license_plate\": \"ABC-1234\",\n\t\"attributes\":\n\t\t{\n \"rented\": \"YES\",\n\t\t\t...lots more data here...\n\t\t}\n}\n mydatabase = client.CARS_DB\n mycollection = mydatabase.RENTAL_LOT_A\n\n listOfRules = mycollection.distinct(\"model\")\n\n for rule in listOfRules:\n\n \tmatch_variable = {\n \"$match\": { 'model': rule }\n \t}\n \tproject_variable = {\n \"$project\": {\n \t'_id': 0,\n \t'model': 1,\n\t\t'license_plate': 1,\n \t'attributes.rented': 1\n }\n \t}\n \tpipeline = [\n match_variable,\n project_variable\n \t]\n \tresults = mycollection.aggregate(pipeline)\n \tfor r in results:\n print(r)\n print(\"- - - - - - - - - - - - - - - - -\")\n{'model': 'Honda Civic', 'license_plate': 'ABC-1234', 'attributes': {'rented': 'YES'}}\n- - - - - - - - - - - - - - - - -\n{'model': 'Toyota Camry', 'license_plate': 'ABC-5678', 'attributes': {'rented': 'YES'}}\n- - - - - - - - - - - - - - - - -\n{'model': 'Honda Civic', 'license_plate': 'DEF-1001', 'attributes': {'rented': 'no'}}\n- - - - - - - - - - - - - - - - -\nMODEL TOTAL\n========================\nHonda Civic 134\nToyota Camry 432\nFord Mustang 93\nHonda Accord 738\nChevorlet Corvette 3\nMODEL TOTAL\n=================================\nHonda Civic, rented 76\nHonda Civic, available 58\nToyota Camry, rented 245\nToyota Camry, available 187\nFord Mustang, rented 60\nFord Mustang, available 33\nHonda Accord, rented 137\nHonda Accord, available 601\nChevorlet Corvette, rented 3\nChevorlet Corvette, available 0\ndb.collection.countDocuments()", "text": "Hi everyone,I’m slowly learning MongoDB with python with help from this site and other tutorials I’ve found online. I need help aggregating and counting my documents.To explain: In my instance of MongoDB, I already have 1000s of documents, with each document tracking a car available from my (fictional) rental company. All the car documents have this format:I’ve learned enough MongoDB/python to build simple pipelines that search the data. Here’s a pipeline that searches all documents, plucks out a car’s model, license plate, and “rented” status:The output is:So far, so good.But here’s what’s vexing me: The above is great if I want all the cars listed individually. But say I want to see the bigger, aggregated picture. I don’t care about the license plate because what I want to see is the equivalent of this:…where the value in the “TOTAL” column is the number of documents where “model” equaled “Honda Civic,” and so on. Better yet would be this:Now I’m aggregating on “model” and “attributes.rented”.I don’t really care about the SQL-like table format, I just want to be able to pull this data out of MongoDB. There’s got to be a way to modify my pipeline, or create something new from scratch. I’ve tried python dictionaries, db.collection.countDocuments(), and a number of other posts from this website; no luck. Can anyone suggest an approach? Thank you.", "username": "redapplesonly" }, { "code": "", "text": "You need to use the $sum accumulator inside a $group stage.Your _id will be an object with $model and $attributes.rented.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Modify MongoDB/python Pipeline to Aggregate Documents by Field?
2022-06-15T14:21:32.325Z
Modify MongoDB/python Pipeline to Aggregate Documents by Field?
1,337
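Spelling out the $group hint for completeness: a single $group stage with a compound _id and a $sum: 1 accumulator produces the per-model rented/available totals in one pass, with no per-model loop. A sketch against the collection described in the thread:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
mycollection = client["CARS_DB"]["RENTAL_LOT_A"]

pipeline = [
    # Group on the pair (model, rented flag) and count documents per group.
    {"$group": {
        "_id": {"model": "$model", "rented": "$attributes.rented"},
        "total": {"$sum": 1},
    }},
    {"$sort": {"_id.model": 1, "_id.rented": 1}},
]

for row in mycollection.aggregate(pipeline):
    status = "rented" if row["_id"]["rented"] == "YES" else "available"
    print(f'{row["_id"]["model"]}, {status}: {row["total"]}')
```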
null
[ "kafka-connector" ]
[ { "code": "", "text": "Hi, I am using com.mongodb.kafka.connect.MongoSourceConnector to stream data from MongoDB into Kafka. My partition key is an ObjectID (ex: company : {\"$oid\": “value”}. I need to extract the value from the object id and set it as the Kafka message key.This is my current configuration:‘{\n“connector.class”: “com.mongodb.kafka.connect.MongoSourceConnector”,\n“publish.full.document.only”: “false”,\n“mongo.errors.log.enable”: “true”,\n“tasks.max”: “1”,\n“output.format.value”: “json”,\n“change.stream.full.document”: “updateLookup”,\n“collection”: “job”,\n“output.schema.key”: “{“name”:“JobKeySchema”,“type”:“record”,“namespace”:“com.rippling.main.avro”,“fields”:[{“name”:“fullDocument”,“type”:{“name”:“fullDocument”,“type”:“record”,“fields”:[{“name”:“company”,“type”:“string”}]}}]}”,\n“output.format.key”: “schema”,\n“mongo.errors.tolerance”: “all”,\n“database”: “infra1”,\n“topic.prefix”: “cdc”,\n“output.json.formatter”: “com.mongodb.kafka.connect.source.json.formatter.ExtendedJson”,\n“name”: “cdc-infra1-job”,\n“copy.existing”: “true”,\n“value.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“key.converter”: “io.confluent.connect.avro.AvroConverter”\n}’The above configuration results in the following message key: H{\"$oid\": “value”}. How can I extract the value only? Setting the type of company as record does not work, I get an error that actual type is object id not record. It looks like the connector converter is serializing the whole object id as a single string.I have also attempted to use StringConverter instead of AvroConverter and got the same result. Any help is appreciated.", "username": "Nizar_Hejazi" }, { "code": "\"transforms\": \"ExtractField\",\n\"transforms.ExtractField.type\": \"org.apache.kafka.connect.transforms.ExtractField$Key\",\n\"transforms.ExtractField.field\": \"$oid\"\n", "text": "This might work, try the ExtractField SMTThis document provides usage information for the Apache Kafka SMT org.apache.kafka.connect.transforms.ExtractField.", "username": "Robert_Walters" } ]
Extract ObjectID value without $oid in MongoDB Kafka SourceConnector
2022-06-15T09:16:01.575Z
Extract ObjectID value without $oid in MongoDB Kafka SourceConnector
3,214
null
[]
[ { "code": "", "text": "Hello MongoTeam , We want to setup MongoDB Source Connector to produce to multiple topics from a single collection, based on some schema property value. Is this configuration possible?", "username": "Piyali_Ash" }, { "code": "", "text": "Check out Dynamic Topic MappingVersion 1.4 of the MongoDB Connector for Apache Kafka focused on customer requested features that give the MongoDB Connector the flexibility to route MongoDB data within the Kafka ecosystem.", "username": "Robert_Walters" }, { "code": "", "text": "After reading , what I understood is dynamic mapping for different topics are possible for multiple collections . I have a single collection and want to read the single collection properties value and then map to multiple topics . This is what I found on Mongo support portal which maps different namespace for different collections.\nCapture972×463 53 KB\n", "username": "Piyali_Ash" } ]
Produce Events to multiple topics from Kafka Source connector
2022-06-16T16:28:11.083Z
Produce Events to multiple topics from Kafka Source connector
2,071
null
[]
[ { "code": "search $search: {\n text: {\n query: search,\n path: [\n 'name',\n 'description',\n 'contact.lastname',\n 'contact.entity',\n 'team.lastname',\n 'team.entity',\n ],\n },\n },\nearly analysis of covid variantsearly analysis of covid variantsearly analysis of covidanalysis of covid variantscovid variantsearly analysissearch", "text": "Hi all,I work on wprn.org with an atlas cluster handling the search. I was writing an issue on my repo to increase the score of the longest matched n-gram but I realized I might have misunderstood what Atlas does under the hood regarding FTS. I would like to double check it.Right, now, here is how I proceed:The downside with my approach is that matching a phrase does not boost the score. Ideally, the longest sequence of words that matches should get the highest score.For instance, if we search for early analysis of covid variants the boost level could be:early analysis of covid variants : 4\nearly analysis of covid : 3\nanalysis of covid variants : 3\ncovid variants: 2\nearly analysis: 2The approach I planned to chose was to insert all the n-grams I find in the search string into my search array. Each of those would be boosted depending on the number of words in the string minus those belonging to the stop words.For a long search string, it would increase big time the number of strings elements I am searching for. So before I commit into this, I wanted to check with the community if there are better approaches.Am I doing it the right way?Subsidiary question: I generate a static score for each item I search that I use as a basic sort. It is based on popularity (views/time) metrics and ratings. Do you guys think it is a good idea to use it as a coefficient of the search boost?", "username": "Antoine_Cordelois" }, { "code": "", "text": "Hi, after more than one year I stumbled on the same problem, did you maybe find a solution?", "username": "Leonardo_Pratesi" } ]
What strategy should I use to implement an efficient full text search?
2021-02-11T18:47:55.608Z
What strategy should I use to implement an efficient full text search?
2,439
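On the phrase-boosting question above, one commonly used pattern is to wrap the existing text clause in a compound operator and add a phrase clause over the same paths with a higher boost, so documents containing the query words in sequence outrank plain bag-of-words matches. The sketch below is only an assumption of how that could look — the database/collection names, the slop value and the boost factor are all illustrative, not what wprn.org actually runs.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # or an Atlas URI
coll = client.wprn.projects  # hypothetical database/collection names

search = "early analysis of covid variants"
paths = ["name", "description", "contact.lastname", "contact.entity",
         "team.lastname", "team.entity"]

pipeline = [
    {
        "$search": {
            "compound": {
                "should": [
                    # Loose match on the individual terms (baseline score).
                    {"text": {"query": search, "path": paths}},
                    # Same terms as a phrase: documents where the words
                    # appear together (within `slop` positions) score higher.
                    {
                        "phrase": {
                            "query": search,
                            "path": paths,
                            "slop": 2,
                            "score": {"boost": {"value": 3}},
                        }
                    },
                ]
            }
        }
    },
    {"$limit": 20},
    {"$project": {"name": 1, "score": {"$meta": "searchScore"}}},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```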
null
[ "aggregation", "compass" ]
[ { "code": "", "text": "Scenario:Let’s say you have 2 collections. You begin writing an aggregation query for the first collection. Then you want to double check some value that is in the second collection. Clicking on the second collection brings the user to it and clears the aggregation query that has been written for the first collection. Just like that.Of course, ideally you would expect it to be saved automatically as a draft for you to come back to it once you have done some multi-tasking, something that other software such as email clients do, but no.To make matters worse, there is no communication to a user about this. There is no prompt informing the user that an aggregation query he/ she spent 30 min designing would be lost and whether it should be saved.This is such a jarring and punishing experience for simply wanting to glance at something else related to the query being written, that it degrades the whole Compass user experience, making it feel like a bomb, rendering one scared to touch anything to not have it explode in his/ her face with an unintended data loss.", "username": "RENOVATIO" }, { "code": "", "text": "Hi @RENOVATIO.Thank you for your feedback. We are aware of this UX annoyance (other users have reported it too: Do not lose aggregation pipeline when switching to another collection or database – MongoDB Feedback Engine) and we intend to fix it.For now, the workaround is to open the second collection in a new tab.\nimage1432×780 160 KB\n", "username": "Massimiliano_Marcon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo Compass clicking on a different collection loses aggregation progress
2022-06-16T10:47:37.033Z
Mongo Compass clicking on a different collection loses aggregation progress
2,255
null
[ "queries" ]
[ { "code": "{\n\t\"_id\" : \"62860d15ae6cd23352fa3683\",\n\t\"chapterName\" : \"chap1\",\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"teamId\" : ObjectId(\"6069a6a9a0ccf704e7f4b537\"),\n\t\"topicsList\" : {\n\t\t\"fromDate\" : \"26-05-2022\",\n\t\t\"fromDateString\" : 20220526,\n\t\t\"toDate\" : \"29-05-2022\",\n\t\t\"toDateString\" : 20220529,\n\t\t\"topicId\" : \"62860d15ae6cd23352fa3684\",\n\t\t\"topicName\" : \"1\"\n\t},\n\t\"userId\" : ObjectId(\"6069a5daa0ccf704e7319d16\")\n}\n{\n\t\"_id\" : \"62860d15ae6cd23352fa3683\",\n\t\"chapterName\" : \"chap1\",\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"teamId\" : ObjectId(\"6069a6a9a0ccf704e7f4b537\"),\n\t\"topicsList\" : {\n\t\t\"fromDate\" : \"01-06-2022\",\n\t\t\"fromDateString\" : 20220601,\n\t\t\"toDate\" : \"03-06-2022\",\n\t\t\"toDateString\" : 20220603,\n\t\t\"topicId\" : \"62860d15ae6cd23352fa3685\",\n\t\t\"topicName\" : \"2\"\n\t},\n\t\"userId\" : ObjectId(\"6069a5daa0ccf704e7319d16\")\n}\n{\n\t\"_id\" : \"62860d15ae6cd23352fa3686\",\n\t\"chapterName\" : \"chap2\",\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"teamId\" : ObjectId(\"6069a6a9a0ccf704e7f4b537\"),\n\t\"topicsList\" : {\n\t\t\"fromDate\" : \"02-07-2022\",\n\t\t\"fromDateString\" : 20220702,\n\t\t\"toDate\" : \"04-07-2022\",\n\t\t\"toDateString\" : 20220704,\n\t\t\"topicId\" : \"62860d15ae6cd23352fa3687\",\n\t\t\"topicName\" : \"1\"\n\t},\n\t\"userId\" : ObjectId(\"6069a5daa0ccf704e7319d16\")\n}\n\ndb.VW_SYLLABUS_PLAN_DB.find({$expr: {$and: [{$gte: [\"topicsList.fromDateString\", 20220526]}, {$lte: [20220526,\"topicsList.toDateString\"]}]}}).pretty()\n\n{\n\t\"_id\" : \"62860d15ae6cd23352fa3683\",\n\t\"chapterName\" : \"chap1\",\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"teamId\" : ObjectId(\"6069a6a9a0ccf704e7f4b537\"),\n\t\"topicsList\" : {\n\t\t\"fromDate\" : \"26-05-2022\",\n\t\t\"fromDateString\" : 20220526,\n\t\t\"toDate\" : \"29-05-2022\",\n\t\t\"toDateString\" : 20220529,\n\t\t\"topicId\" : \"62860d15ae6cd23352fa3684\",\n\t\t\"topicName\" : \"1\"\n\t},\n\t\"userId\" : ObjectId(\"6069a5daa0ccf704e7319d16\")\n}\n{\n\t\"_id\" : \"62860d15ae6cd23352fa3683\",\n\t\"chapterName\" : \"chap1\",\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"teamId\" : ObjectId(\"6069a6a9a0ccf704e7f4b537\"),\n\t\"topicsList\" : {\n\t\t\"fromDate\" : \"01-06-2022\",\n\t\t\"fromDateString\" : 20220601,\n\t\t\"toDate\" : \"03-06-2022\",\n\t\t\"toDateString\" : 20220603,\n\t\t\"topicId\" : \"62860d15ae6cd23352fa3685\",\n\t\t\"topicName\" : \"2\"\n\t},\n\t\"userId\" : ObjectId(\"6069a5daa0ccf704e7319d16\")\n}\n{\n\t\"_id\" : \"62860d15ae6cd23352fa3686\",\n\t\"chapterName\" : \"chap2\",\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"teamId\" : ObjectId(\"6069a6a9a0ccf704e7f4b537\"),\n\t\"topicsList\" : {\n\t\t\"fromDate\" : \"02-07-2022\",\n\t\t\"fromDateString\" : 20220702,\n\t\t\"toDate\" : \"04-07-2022\",\n\t\t\"toDateString\" : 20220704,\n\t\t\"topicId\" : \"62860d15ae6cd23352fa3687\",\n\t\t\"topicName\" : \"1\"\n\t},\n\t\"userId\" : ObjectId(\"6069a5daa0ccf704e7319d16\")\n}\n{\n\t\"_id\" : \"62860d15ae6cd23352fa3683\",\n\t\"chapterName\" : \"chap1\",\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"teamId\" : ObjectId(\"6069a6a9a0ccf704e7f4b537\"),\n\t\"topicsList\" : {\n\t\t\"fromDate\" : \"26-05-2022\",\n\t\t\"fromDateString\" : 20220526,\n\t\t\"toDate\" : \"29-05-2022\",\n\t\t\"toDateString\" : 20220529,\n\t\t\"topicId\" : \"62860d15ae6cd23352fa3684\",\n\t\t\"topicName\" : 
\"1\"\n\t},\n\t\"userId\" : ObjectId(\"6069a5daa0ccf704e7319d16\")\n}\n\n", "text": "I wanted to check fromDateString >= 20220526 and toDateString <= 20220526the query used isThe output I’m gettingany ways to get only one document in this way", "username": "Prathamesh_N" }, { "code": "", "text": "Hello @Prathamesh_N ,It has been long since you have posted this and I think you might have found your solution. If not then you can use limit at the end of your query.Below will be your updated Querydb.VW_SYLLABUS_PLAN_DB.find({$expr: {$and: [{$gte: [“topicsList.fromDateString”, 20220526]}, {$lte: [20220526,“topicsList.toDateString”]}]}}).pretty().limit(1)Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to check whether a given number lies between these two
2022-05-26T09:53:45.029Z
How to check whether a given number lies between these two
1,188
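A note for readers of the thread above: inside $expr, field paths need a leading $ (e.g. "$topicsList.fromDateString"); written without it they are plain string literals, which is why the posted query matched every document. If the intent is simply "does the number fall inside [fromDateString, toDateString]", a regular find without $expr is enough. A minimal PyMongo sketch — the collection name comes from the thread, while the database name and the "lies between" interpretation are assumptions:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.VW_SYLLABUS_PLAN_DB  # database name is an assumption

target = 20220526

# Match documents whose [fromDateString, toDateString] interval contains target.
query = {
    "topicsList.fromDateString": {"$lte": target},
    "topicsList.toDateString": {"$gte": target},
}

for doc in coll.find(query):
    print(doc["_id"], doc["topicsList"]["fromDate"], doc["topicsList"]["toDate"])
```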
null
[ "aggregation", "node-js", "data-modeling" ]
[ { "code": "", "text": "Hi there. First off, sorry for the very noob question here. I’m still trying to get my head round MongoDB coming from a SQL background.I’ve got a NodeJS that’s pulling data from a MongoDB collection. My data looks a bit like this:{\nvalue: “something”,\notherValue: “something else”,\nimages: [{ array of objects }],\nspecs: [{ array of objects }]\n}I’m using an aggregation to unwind both the images and specs arrays to reorder them. This creates about 300 new documents.In my Node application I then iterate through the returned aggregate cursor like this:await aggCursor.forEach(o => {\nvehObj.push(o) // Pushed into an array for later use\n})But this seems to me a bit of a waste because the “value:” and “otherValue:” will always be the same, but will be iterated 300 times. Whereas what I want to do is push the “value:” and “otherValue:” once, and only iterate the “images:” and “specs:” objects.Is there a better way of doing this, or should I be regrouping the “images:” and “specs:” back into an array of objects as part of the aggregation process?Hope this makes sense.Thanks in advance.", "username": "Andy_Bryan" }, { "code": "$unwind", "text": "Hi @Andy_Bryan,Yes $unwind is a waste in this case because you want to sort an array of docs. You can also let the application layer do the job, but data manipulation is supposed to be done by the database by definition.So there is 3 ways to sort an array of docs in MongoDB. The terrible way. The old school but OK way. And there is the pro way.Remember that the aggregation framework is Turing complete. You can literally do anything. I even wrote the Game of Life with it and @John_Page also mined bitcoins with it so… You can reduce your data manipulation in your back-end to the minimum if you can write the right query. Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Brilliant.\nOnce again, many thanks for your help.Andy.", "username": "Andy_Bryan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Noob question about aggregations and aggregation cursors
2022-06-15T12:38:17.265Z
Noob question about aggregations and aggregation cursors
1,534
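For readers of the $unwind thread above: on MongoDB 5.2 and newer, the $sortArray expression can reorder an embedded array in place, so the parent fields are not duplicated per array element. A minimal PyMongo sketch, assuming the images/specs sub-documents carry an "order" field to sort on (that field name, and the database/collection names, are assumptions):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.vehicles  # illustrative database/collection names

pipeline = [
    # Requires MongoDB 5.2+ for $sortArray.
    {
        "$set": {
            "images": {"$sortArray": {"input": "$images", "sortBy": {"order": 1}}},
            "specs": {"$sortArray": {"input": "$specs", "sortBy": {"order": 1}}},
        }
    },
    {"$project": {"_id": 0, "value": 1, "otherValue": 1, "images": 1, "specs": 1}},
]

for doc in coll.aggregate(pipeline):
    # One document per vehicle; the arrays come back already sorted,
    # so "value"/"otherValue" are not repeated once per array element.
    print(doc)
```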
null
[]
[ { "code": "mongodb_ss_wt_cache_pages_queued_for_evictionmongodb_ss_wt_cache_pages_selected_for_eviction_unable_to_be_evicted", "text": "There was an incident in our Mongo cluster wherein secondary got a page fault after which we started seeing spike in pages queued for eviction (mongodb_ss_wt_cache_pages_queued_for_eviction) as well as pages unable to get evicted (mongodb_ss_wt_cache_pages_selected_for_eviction_unable_to_be_evicted). Despite the secondary recovering after ~2.5 hours, these two metrics have been abnormal since the issue.How do we debug this issue? Should we look at any other metric? Would there be any data loss in case of server restart?", "username": "Tejas_Jadhav1" }, { "code": "", "text": "\nimage817×600 65.5 KB\nAttaching the graphs for reference.", "username": "Tejas_Jadhav1" }, { "code": "mongod", "text": "Hi @Tejas_Jadhav1 welcome to the community!The two eviction metrics are for internal WT use (eviction is an internal WT process), where it’s mainly used by MongoDB engineers to troubleshoot issues. However, they are used in combination with other metrics, and rarely, if ever, used as a standalone metric.Typically if they are showing large numbers like what you posted (I would consider millions of anything as large ), it means that the server is trying hard to process backlog of work, i.e. it’s overwhelmed, and is trying to stay on top of the work it’s given.There are many improvements made to WT over the years since version 3.6.2 was released in Jan 2018, including many performance improvements that make the eviction process smarter & more efficient. Also, the 3.6 series was not supported anymore as per April 2021. I would strongly encourage you to upgrade to the latest supported version (4.2 is the oldest series still supported), but upgrading to the latest version (currently 5.0.9) is best.Upgrading to the latest version would also ensure that you don’t experience old bugs that was already fixed.As per Replica Set Deployment Architectures, it’s not recommended to deploy an even number of members. If you have 2 data bearing nodes, I recommend you remove the Arbiter from the replica set.Would there be any data loss in case of server restart?Unless you do a kill -9 of the mongod process, and you’re using majority write concern for your writes, there should be no risk of data loss, unless it’s hardware related.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks for the reply @kevinadiit means that the server is trying hard to process backlog of work, i.e. it’s overwhelmed, and is trying to stay on top of the work it’s given.How can we further debug this? Are there any more metrics that we can look at to identify the cause?I would strongly encourage you to upgrade to the latest supported version (4.2 is the oldest series still supported), but upgrading to the latest version (currently 5.0.9) is best.Yes, we have a plan for this. We might be done with it in the coming weeks. But before that, wanted to get clarity over these anomalous metric that we are seeing on Mongo and would that impact our upgrade process in any way.If you have 2 data bearing nodes, I recommend you remove the Arbiter from the replica set.The other secondary has no voting rights and does not participate in elections. It was created as a backup node in case we see issues on primary and secondary.", "username": "Tejas_Jadhav1" }, { "code": "db.serverStatus()", "text": "How can we further debug this? 
Are there any more metrics that we can look at to identify the cause?I’m afraid there’s not an easy answer here. Basically, I don’t think there’s anything to debug; the database was overwhelmed at some point, but then it managed to clear the backlog of work after some time, and things got better.In terms of other metrics, there are hundreds of them (you can see them in db.serverStatus()). However the best tool in my opinion is mongostat for the overall health of the server, and mongotop to check your collection activities.In most cases like this, if you let the server catch up with work and not overwhelm it further, the situation typically resolves itself (as you’ve seen in this case, I believe).Also as I previously mentioned, things are mostly better behaved in newer MongoDB versions, so you might not need to do anything other than upgrading The other secondary has no voting rights and does not participate in elections. It was created as a backup node in case we see issues on primary and secondary.Sorry I don’t follow; isn’t that the purpose of a secondary? To be able to step up and take over as primary when there’s a problem in the replica set, so that you have high availability? What’s the goal of making this secondary not acting like a secondary? Does it have a lesser hardware spec, or other reasons?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "In most cases like this, if you let the server catch up with work and not overwhelm it further, the situation typically resolves itself (as you’ve seen in this case, I believe).In our case, the page eviction has never been this spiky. Even now it is spiking a lot more than before. If you notice those spikes after 16:00 in the screenshot above, those spikes are still happening now. That looks worrisome.Sorry I don’t follow; isn’t that the purpose of a secondary? To be able to step up and take over as primary when there’s a problem in the replica set, so that you have high availability? What’s the goal of making this secondary not acting like a secondary? Does it have a lesser hardware spec, or other reasons?Yeah, the setup is unusual. We had created that secondary in past when we were seeing some hardware related issues on the primary. But just after creating it, we saw that those failures had recovered and did not happen again. Since this new secondary was already created, we decided to keep it as a backup in case we see any catastrophic hardware failures on both the primary as well as existing secondary. We are planning to remove it now once we are done with MongoDB version upgrade.", "username": "Tejas_Jadhav1" } ]
How to debug page eviction failures?
2022-06-15T15:17:15.079Z
How to debug page eviction failures?
1,485
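For anyone who wants to watch the eviction counters discussed above without a full monitoring stack: they are exposed under the wiredTiger.cache section of serverStatus. A small PyMongo sketch — the exact counter names vary between server versions, so it simply filters on the word "evict":

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

status = client.admin.command("serverStatus")
cache = status["wiredTiger"]["cache"]

# Print every cache counter whose name mentions eviction.
for name, value in sorted(cache.items()):
    if "evict" in name:
        print(f"{name}: {value}")
```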
null
[ "queries" ]
[ { "code": "", "text": "I have realm function1 that’s called from another realm function2. Function1 is not returning any data when called within function2. However, if I make the call from the frontend application, function1 works fine. Just can’t figure out what is causing this behavior, as I would like to keep the logic on the server-side. Let me know if anyone has come across this behavior before.", "username": "Lenin_Kumar" }, { "code": "", "text": "Hi Lenin,Thanks for your question and welcome to the community!What authentication type are you using on the two functions within the settings configuration?Regards", "username": "Mansoor_Omar" } ]
Function works fine in client application but does not work within the App Service module
2022-06-08T03:56:47.495Z
Function works fine in client application but does not work within the App Service module
1,056
null
[ "queries", "indexes", "golang" ]
[ { "code": "{\n\t\"status\": 2,\n\t\"queue\": true,\n\t\"department\": \"apple\",\n \"ts\": ISODate(\"2022-05-10T08:35:52.451Z\")\n}\n{\n\t\"status\": { \"$ne\": 4 },\n\t\"queue\": true,\n\t\"department\": { \"$in\": [\"banana\"] }\n}, \n{ \"sort\": { \"ts\": -1 }, \"limit\": 20}\n{\n\t\"status\": { \"$ne\": 4 },\n\t\"queue\": true\n}, \n{ \"sort\": { \"ts\": -1 }, \"limit\": 20}\ndepartmentts{\n\t\"status\": { \"$ne\": 4 },\n\t\"queue\": true,\n\t\"department\": { \"$in\": [\"banana\"] },\n \"ts\": { \"$lte\": lastTimestamp }\n}, \n{ \"sort\": { \"ts\": -1 }, \"limit\": 20}\nstatusqueuedepartmenttsts{\n\t\"status\": 1,\n\t\"queue\": 1,\n\t\"department\": 1,\n \"ts\": 1\n}\ndepartment{\n\t\"status\": 1,\n\t\"queue\": 1,\n \"ts\": 1\n}\n", "text": "My sample document:I typically have two kinds of queries, where one of the fields in the query is optional:The second type:(notice there is no department field)I also use ts field for pagination:Applying ESR rule:E = status, queue, department\nS = ts\nR = tsNow, what is the ideal index?", "username": "V_N_A" }, { "code": "", "text": "Most likely with the single index status:1,queue:1,department:1,ts:1 you will end up with in memory sort of ts if the department is not specify.Depending of the selectivity of status and queue vs the cardinality of department you might be better off with the index status:1:queue:1,ts:1.Note that $ne:4 might not be able to use the index. Not-equalilty is not the same as Equality. It is closer to Range.", "username": "steevej" }, { "code": "status{\n\t\"queue\": 1,\n\t\"department\": 1,\n \"ts\": 1,\n \"status\": 1,\n}\n", "text": "Note that $ne:4 might not be able to use the index. Not-equalilty is not the same as Equality. It is closer to Range.whoa, TIL! Then order of the status filed should be changed I guess? something like:?", "username": "V_N_A" }, { "code": "", "text": "The best is to try alternatives and to compare the explain plans. 
Start with https://www.mongodb.com/docs/manual/tutorial/analyze-query-plan/.", "username": "steevej" }, { "code": "{\n\t\"status\": { \"$ne\": 4 },\n\t\"queue\": true,\n\t\"department\": { \"$in\": [\"banana\"] }\n},\n{ \"sort\": { \"ts\": -1 }, \"limit\": 20}\nfind({\"status\": {\"$ne\": 4}, \"queue\": true, \"department\": {\"$in\": [\"banana\"]}}).sort({ts:-1}).limit(20)\nInequality operators$ne$nindepartmentSORTSORTSORT{\n\t\"status\": 1,\n\t\"queue\": 1,\n\t\"department\": 1,\n \"ts\": 1\n}\nstatus_queue_department_ts{\"status\": {\"$ne\": 4}, \"queue\": true, \"department\": {\"$in\": [\"banana\"]}}).sort({ts:-1}\nexpRun = db.collections.explain(\"executionStats\")\nexpRun.find({\"status\": {\"$ne\": 4}, \"queue\": true, \"department\": {\"$in\": [\"banana\"]}}).sort({ts:-1})\n...\n executionStats: {\n executionSuccess: true,\n nReturned: 152,\n executionTimeMillis: 0,\n totalKeysExamined: 169,\n totalDocsExamined: 152,\n executionStages: {\n stage: 'FETCH',\n...\n inputStage: {\n stage: 'SORT',\n...\n inputStage: {\n stage: 'IXSCAN',\n...\nSORTdepartment.explainexpRun.find({\"status\": {\"$ne\": 4}, \"queue\": true}).sort({ts:-1})\n...\n executionStats: {\n executionSuccess: true,\n nReturned: 477,\n executionTimeMillis: 2,\n totalKeysExamined: 486,\n totalDocsExamined: 477,\n executionStages: {\n stage: 'FETCH',\n...\n inputStage: {\n stage: 'SORT',\n...\n inputStage: {\n stage: 'IXSCAN',\n...\nSORTSORTstatus_queue_ts...\n executionStats: {\n executionSuccess: true,\n nReturned: 152,\n executionTimeMillis: 1,\n totalKeysExamined: 486,\n totalDocsExamined: 477,\n executionStages: {\n stage: 'SORT',\n...\n inputStage: {\n stage: 'FETCH',\n...\n inputStage: {\n stage: 'IXSCAN',\n...\nSORT{\n \"queue\":1,\n \"department\":1,\n \"ts\":1,\n \"status\":1\n}\nqueue_department_ts_status.explain...\n executionStats: {\n executionSuccess: true,\n nReturned: 152,\n executionTimeMillis: 0,\n totalKeysExamined: 168,\n totalDocsExamined: 152,\n executionStages: {\n stage: 'FETCH',\n...\n inputStage: {\n stage: 'IXSCAN',\n...\nSORT...\n executionStats: {\n executionSuccess: true,\n nReturned: 477,\n executionTimeMillis: 2,\n totalKeysExamined: 526,\n totalDocsExamined: 477,\n executionStages: {\n stage: 'FETCH',\n...\n inputStage: {\n stage: 'SORT',\n...\n inputStage: {\n stage: 'IXSCAN',\n...\nSORT$in.sort()SORTdepartment", "text": "Hey @V_N_A,Just adding a few more points to what @steevej has rightly said. Firstly, I assume that when you wrote the first query ie.you meant this:because the first one is just projecting the documents.Coming back to the post, a general rule of thumb one considers when dealing with compound indexes is that you don’t need an additional index if your query can be covered by the prefix of the existent compound index. Another thing to note is that Inequality operators such as $ne or $nin are range operators, not equality operators, so moving status to the end might help speed up your first query. But in the case of your second query, since there is no department, you will end up with memory sort since prefixes won’t work.About the SORT stage: although having a SORT stage is not necessarily a bad thing if the result set is reasonably small, the best outcome is to avoid having this stage altogether, since it can cause the query to fail if it runs out of memory to do the sorting. 
Using proper indexing can avoid the SORT stage.To demonstrate out all this, I tried the same with the sample document you provided and created a sample collection with 1000 such documents using mgeneratejs, a tool to create example documents by following some patterns you define.If we create an index like the one you suggested here:I named the index status_queue_department_ts and so on the first query ie.the Explain Plan gives the following results:From the execution stats, it shows it examined a total of 169 keys and 152 documents and returned 152 documents, which is not a bad query targeting ratio. Note that the winning plan contains a SORT stage.Now for the second query ie. without the department field, the same .explain, gives the following results:we can see now, that it has returned 477 documents and had to scan 486 keys. This is also not a bad query targeting ratio, but this also contains a SORT stage.Thus for your first index, in conclusion, it’s not the ideal index for the query since the explain output for both queries contains a SORT stage, even though the query targeting metric is not too bad.For your second index ie. status_queue_ts, the first query returned:ie. 486 keys scanned for only 152 returned documents. This is not great, since it means the server needs to examine ~3 documents for each relevant one, thus the server does a lot of unnecessary work. It would be the same if no department is mentioned as is the case of your second query. Note that this also contains a SORT stage, which makes it even more unappealing.Now, we if change our index order tonaming it as queue_department_ts_status, for the first query, .explain returns:We can see it scanned 168 keys and returned 152 documents (not a bad query targeting metric), all without using a SORT stage.For the second query, however, since there is no department, it relies on memory sort:The number of keys scanned increases to 526 with the same 477 documents returned, but all 477 examined documents are returned, so the server does a little unnecessary work. However the presence of the SORT stage negates this.As we can see, the selection of the order of indexes would highly depend on what query you are going to use the most as well as the operators you will use in them. Eg. When $in is used alone, it is an equality operator that does a series of equality matches, but acts like a range operator when it is used with .sort(). Additionally, the use case you posted might require two different indexes if you want to avoid having a SORT stage. One with department, and the other without.I would highly suggest you read the following to cement your knowledge about indexes.Please let us know if there’s any confusion in this. Feel free to reach out for anything else as well.Regards,\nSatyam Gupta", "username": "Satyam" }, { "code": "", "text": "Hey Satyam, thank you for taking your time and replying. It was insanely helpful! Let me read all the resources you linked and get back to you!", "username": "V_N_A" }, { "code": "tsstatus{\n \"queue\":1,\n \"department\":1,\n \"ts\":1,\n \"status\":1\n}\n{\n \"queue\":1,\n \"department\":1,\n \"status\":1,\n \"ts\":1,\n}\n", "text": "Hey @Satyam I had a question regarding range queries and cardinality. What should be the order, is it like high cardinality to low cardinality in case of range fields?for e.g. in the queries above, cardinality for ts is very high compared to status. 
So which is correct for the following:OR", "username": "V_N_A" }, { "code": "explain()_idexplain output", "text": "Hi @V_N_A,What should be the order, is it like high cardinality to low cardinality in case of range fields?Since we don’t know the statistical distribution of the fields in your collection, we can only simulate the possibilities (see the previous answer). You might want to perform a similar explain() experiments using your actual data to arrive at the best scenario that works in your usecase (try having no COLLSCAN, no SORT).That said, please do remember that the selection and ordering of an index depends on a lot of factors. The purpose of an index is to be able to zoom in to a relevant part of the collection and eliminate as much irrelevant documents as possible so that the server don’t need to do unnecessary work to while running the query. When talking about cardinality, fields with low cardinality have a comparably lower value in an index. For example, if a field cardinality is so low that it encompass a third of the collection, it’s less useful as an index vs. a field with high cardinality that can identify a single document (e.g. _id ).\nI believe that your compound index would help with range and sorting, so the ESR rule of thumb should be more relevant for field ordering choice in a compound index, irrespective of the cardinality involved. But I would still suggest you to try creating the indexes and checking the explain output to see which one works better for your use case (since you know your exact dataset and the queries that you are going to use mostly). The resources that I mentioned in previous answer should also help you a lot.Let us know if there is any confusion regarding this. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How would the ESR rule of Indexes behave if one of the fields is optional?
2022-06-04T15:30:30.292Z
How would the ESR rule of Indexes behave if one of the fields is optional?
3,698
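A short PyMongo sketch of the two index shapes the discussion above converges on (one for queries that filter on department, one for queries that omit it), plus one way to pull an explain plan from the driver. The database/collection names are assumptions, and whether a second index is worth its write cost depends on the workload:

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
db = client.test     # assumed database name
coll = db.items      # assumed collection name

# Query shape WITH department: equality on queue/department first,
# sort field ts next, range-like $ne on status last.
coll.create_index(
    [("queue", ASCENDING), ("department", ASCENDING),
     ("ts", ASCENDING), ("status", ASCENDING)],
    name="queue_department_ts_status",
)

# Query shape WITHOUT department.
coll.create_index(
    [("queue", ASCENDING), ("ts", ASCENDING), ("status", ASCENDING)],
    name="queue_ts_status",
)

# Compare plans for the "no department" query.
plan = db.command(
    "explain",
    {"find": "items",
     "filter": {"status": {"$ne": 4}, "queue": True},
     "sort": {"ts": -1},
     "limit": 20},
    verbosity="executionStats",
)
print(plan["queryPlanner"]["winningPlan"])
```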
null
[ "swift", "atlas-device-sync", "flexible-sync" ]
[ { "code": "realmSetupErrorHandler()useThisRealmsConfig.objectTypes.syncManager.errorHandler\n/// SOME GENERAL NOTES. \n /// I am running multiple Realm-Apps on 1 client-side app. Therefore, I test for Flexible-Sync (FS) and Partition-Based-Sync (PBS) - this can also work in the future when both FS and PBS can be used in 1 Realm-App [🤞].\n /// I have multiple Realms on PBS (i.e., per partition) and 1 Realm on FS (since FS currently works with only 1 Realm essentially).\n /// Since '`.syncManager.errorHandler`' is per Realm-App, I start it right after I open all the Realms (per Realm-App) in a single block. Since I'm running 2 Realm-Apps (per PBS & FS) I do this process twice - and only 2x (unless reload Realm-App) and not per error being handled.\n /// My code setup is quite different, but I extracted and refactored my code for this part and to fit your example. I also named properties to be long in order to be verbose for any readers.\n\n\n/// The changes could be entered here (line# 293) in your example (as of 2022-06-11). I would start by adding an unwrapped ‘SyncSession’. I renamed to use '`have`' prepended in the names too.\nguard let haveSelf = self, let haveSession = session, let haveUser = haveSession.parentUser() else { return; }\n\nguard let useThisRealmsSyncConfig = haveSession.configuration()\n else {\n // NOTE: CRASH in DEV, Error-Log in PROD. Reminder I started after opening a Realm.\n fatalError();\n }\n\nprint(\"During-TESTING-GOT:'useThisRealmsSyncConfig' - at this point we now know if this is 'Flexible-Sync'=='\\(useThisRealmsSyncConfig.enableFlexibleSync)' - if 'true' then is 'FS'.\")\n\nlet isFlexibleSyncEnabled = useThisRealmsSyncConfig.enableFlexibleSync\nlet useThisRealmsConfig: Realm.Configuration\n\nif isFlexibleSyncEnabled {\n useThisRealmsConfig = useThisRealmsSyncConfig.user.flexibleSyncConfiguration()\n}\nelse {\n guard let haveThisRealmsPartitionValue = useThisRealmsSyncConfig.partitionValue as? String\n else {\n // NOTE: This should not occur.\n fatalError();\n }\n print(\"During-TESTING-GOT: 'haveThisRealmsPartitionValue', which is: '\\(haveThisRealmsPartitionValue)'.\")\n useThisRealmsConfig = useThisRealmsSyncConfig.user.configuration(partitionValue: AnyBSON(haveThisRealmsPartitionValue), clientResetMode: .manual)\n}\n\n \nprint(\"During-TESTING-GOT: 'useThisRealmsConfig', which its file-URL is: '\\(String(describing: useThisRealmsConfig.fileURL))'.\")\n\nlet useThisRealm: Realm\ndo {\n useThisRealm = try Realm(configuration: useThisRealmsConfig)\n}\ncatch let haveError as Error? {\n fatalError(\"During-TESTING-GOT: an ERROR trying to get a 'Realm', here is the ERROR: '\\(String(describing: haveError?.localizedDescription))'.\");\n}\n\nprint(\"During-TESTING-GOT: 'useThisRealm', which has: '\\(useThisRealm.schema.objectSchema.count)' Schemas.\")\n\n/// This is provided to show how to get all objects to merge later. Also, based on the size of Realm you may want to do similar from the backed-up Realm and query on a per Collection basis. I wish there was an exposed property for last sync'd to prevent unnecessary loading here. \nlet allObjectTypesArray: [Object.Type] = (useThisRealm.configuration.objectTypes?.filter({ $0 is Object.Type }) ?? []) as! [Object.Type]\nvar mergeLaterArray: [ (Object.Type, [Object]) ] = [ ]\nallObjectTypesArray.forEach({ mergeLaterArray.append(($0, Array(useThisRealm.objects($0)))) }) // may want to sort these too.\nprint(\"During-TESTING-GOT: this far! 
Now will test the current Realm (before backing-up) for querying before a Client-Reset. Here are values for: 'allObjectsArray' having '\\(mergeLaterArray.count)' total Classes/Schemas && 'allObjectsArray' having '\\(mergeLaterArray.compactMap({ $0.1 }).flatMap({ $0 }).count)' total objects across all Collections.\")\n\n/// back to your code\nlet syncError = error as! SyncError\nvar fileURL: URL?\n\n", "text": "Hi @Paolo_MannaI just came across your example code, which was interesting to me. First off, before I go on, thanks for posting your example - I am sure it has and will continue to help many developers!I compared your example to my setup and found in general we do a lot of the same. I did pick up your way of choosing an ‘asyncOpen’ or ‘sync’. Previously, I depended on tracking the progress on a Realm’s sync solely. Consequently, that could pause the user longer than needed in some cases, since I have some Realms not worth waiting for and benefit via ‘asyncOpen’.I would like to bring some differences to your attention, that I felt were important for me and could be for some other developers too. If you don’t mind I would appreciate your thoughts - especially if I error-ed or have any oversights (thank you in advance). Below I have presented a question on your use of calling ‘realmSetupErrorHandler()’ per Realm open.In my usage, I like to know which Realm the ‘SyncSession’ is reporting on, for the ‘Error’ being handled. Therefore, I determine the Realm from the ‘SyncSession’ provided (which is the code I am providing). With that, I use such Realm in places thereafter. For example, sometimes I could use it to query before I back up the (reset-needing) Realm, and then merge later. Though for larger Realms it is better to query on a per Collection basis from the backed-up Realm. Merging gets tricky for Flexible Sync (FS), since it is based on query permissions and those results can change based on client-side usage (but that is a topic for another day). Luckily, for Partition-Based-Sync (PBS), it is straight forward. Oh yeah, I also pull the array of ‘ObjectTypes’ (e.g., ‘useThisRealmsConfig.objectTypes’) to iterate through them for querying data, in order for merging into the new (downloaded) syncing Realm.\n• I know for your example, it was testing 1 Realm (and 1 Schema) and therefore did NOT need to provide for additional Realms nor Schemas.Some of my usage/differences to note:\n• I am running multiple Realm-Apps on 1 client-side app. Therefore, I check for Flexible-Sync (FS) and Partition-Based-Sync (PBS) - this can also work in the future when both FS and PBS can be used in 1 Realm-App [].\n• I have multiple Realms on PBS (i.e., per partition) and 1 Realm on FS (since FS currently works with only 1 Realm essentially).\n• Since ‘.syncManager.errorHandler’ is per Realm-App, I start it right after I open all the Realms (per Realm-App) in a single block. Since I’m running 2 Realm-Apps (per PBS & FS) I do this process twice - and only 2x (unless reloading Realm-App) and not per error being handled.\n•• I noticed you do it per error handling (via Realm opens), is that something I need to change? - meaning (e.g.) 
is there some issue during errors that it is better re-pass it again?", "username": "Reveel" }, { "code": "useThisRealmsConfig.objectTypesobjectTypesrealm.schema.syncManager.errorHandler\t// Don't re-do it\n\tguard app.syncManager.errorHandler == nil else { return }\n", "text": "Hi @Reveel,I’ll try to comment some of your statements inline: I won’t be exhaustive, though, just point out a couple of things. A foreword, however: Client Reset is an area in development, there are on the roadmap some modifications that would make life easier for developers, so this conversation will likely be outdated soon.I determine the Realm from the ‘SyncSession’ provided (which is the code I am providing).Of course: the sample code tried to stay simple, and assumed only one realm was opened. Interestingly, this may not be an issue for other SDKs, JavaScript for example has a per-realm error handler.I also pull the array of ‘ObjectTypes’ (e.g., ‘ useThisRealmsConfig.objectTypes ’) to iterate through them for querying dataobjectTypes isn’t mandatory, so it may not be set, and may not reflect what the actual content of the realm is: if you’ve the realm object, I’d rather go for realm.schema to detect the classes that are in the DB.Since ‘ .syncManager.errorHandler ’ is per Realm-App, I start it right after I open all the Realms (per Realm-App) in a single block.As it’s per-app, as you say, why not open it before opening all the Realms? This way, if an error occurs on, say, the first realm while you’re still opening the second, you’d still catch it.I noticed you do it per error handling (via Realm opens)No, I only do it once: at the beginning, there’s the condition:", "username": "Paolo_Manna" }, { "code": "/recovered-realms.objectTypes.objectTypes.objectTypesArray<ObjectBase.Type>Array<Object.Type>EmbeddedObject.objectTypesfalseshouldIncludeInDefaultSchema().create(: : update: .modified).create()", "text": "Hey @Paolo_Manna,Thanks for getting back to me!I’ll answer your points and give some background on my work for this. Then I’d like to ask your thoughts on some other related points - that I’d like to see the Realm team improve on.A little background:\nI had built my original EH (errorHandler) during legacy Realm, which had been modified with MongoDB. I am looking at it again right now since Flexible Sync can trigger more Client Resets. I feel I have tracked most of the changes from legacy to Mongo, that I found in docs. I am assuming some things may have improved with MongoDB, that I might not notice with my setup. As an example, back then I would sometimes receive an error that the recovery Realm file was not (yet) available to move. Hence, I built a device-folder monitoring service to notify me of changes in a given folder (e.g., “/recovered-realms”) … which may not be an issue currently. Additionally, I built processes to check for “error codes” and “error descriptions” trying to match both or one, and report on any discrepancies with docs, or even undocumented codes like error code ‘225’. This way I can track changes and compare them to docs’ error codes. This also helps in situations like error code ‘101’ which can have different descriptions, such as a more common one like: “sync is not enabled for this app”.Reply to your points:\n• Yes, you are correct '.objectTypes’ are optional and also that the Schema is a more solid approach (though a little bit more to translate that into an array for ‘.objectTypes’ ). 
In my app, I ensure maintaining a value to pass for each Realm’s config and for each Realm later.\n•• I found this as a benefit for many uses throughout my app. I do not know if it has improved recently, but previously, when restoring a Realm from the backup (i.e., for Client Reset) I was only able to open that Realm locally once I supplied the array to ‘.objectTypes’ (assumed reopen had to match initial open). I keep a static value for both the full ‘Array<ObjectBase.Type>’ and the ‘Array<Object.Type>’ for merging that exclude ‘EmbeddedObject’.\n•• I think there is a bug or issue currently that I have noticed with Realms on a device that is syncing. Basically, it will show all the models in Realm Studio, even though I supplied a specific array to ‘.objectTypes’ (excluding certain models) and I have even declared ‘false’ for ‘shouldIncludeInDefaultSchema()’ in each class (e.g., Swift SDK).• My original design did start the EH before opening Realms, I recently changed it to be right after opening the Realms. Since, beforehand, you’d be likely to see more of the 100-level error codes (which previously I needed to handle & report since I managed ROS too). Additionally, on the client side I handle these types of errors differently… since in most cases I have a wait, retry, etc. If there are some solid error cases prior to opening a Realm please let me know I’d will improve.• D’oh - my bad. I do remember seeing that when I was initially reviewing your code, but I must have overlooked it when doing my comparison. I should have also copied your code into a project to test it out. Anyhow, thanks for pointing that out.Some questions for you…\n• The way Client Reset works, is that I read all the objects from the backed-up Realm and write them into the new (downloaded) Realm via ‘.create(: : update: .modified)’. That can make it prone for a long process (i.e., for larger Realms) which then carries a slew of potential issues. I always hoped for a way to query on a date (last sync’d) to reduce the processing before merging back in. Previously, to reduce tasking the client-side (and to be thorough), I built a way for the merging to occur on the server side (ROS/legacy) but cannot do that now. Additionally, even if I could, it would be hard without the support of any Realm SDKs available to use (e.g., a ‘.create()’ method) on a serverless Realm Function.\n•• Do you know if that is something Realm might work on or could be submitted to be offered? -for both the ability to query on last synced and Realm SDK support on Realm Functions.• Lastly, it would be nice if we could try to reconnect manually from the client-side, when there is ‘disconnected‘ state. For a Client Reset we would not have to ask the user to restart the app. Any thoughts on this and its possibility?", "username": "Reveel" }, { "code": ".objectTypes.discardLocal.manual", "text": "Hi @Reveel,Some more considerations belowit will show all the models in Realm Studio, even though I supplied a specific array to ‘ .objectTypes ’ (excluding certain models)This is by design, for Partition-based Sync: if a MongoDB collection is under Sync, all its records that match the partition value are included. It would make things complicated to let the client dictate what should or shouldn’t be downloaded. 
Flexible Sync is another story, of course.If there are some solid error cases prior to opening a Realm please let me know I’d will improve.It’s not much about errors that can happen before opening a realm, but the fact that you open multiple realms: imagine this scenarioThe way Client Reset works…As I wrote before, Client Reset handling is still in full development at this time: while I can’t comment on features that haven’t been released nor announced, you can look forward some changes coming in soon…Do you know if that is something Realm might work on or could be submitted to be offered?While the backend is proprietary technology, the Realm SDKs are still open source, you’re welcome to raise issues or provide suggestions on Github.it would be nice if we could try to reconnect manually from the client-side, when there is ‘disconnected‘ state.This certainly makes sense: a Github issue, or even a suggestion on our Feedback portal is always worth consideration.For a Client Reset we would not have to ask the user to restart the app.That’s not necessary right now: if you use the .discardLocal mode, all happens rather automatically, and without the user realising a client reset happened (at the cost of discarding all local changes). Also, in .manual mode, you can close the realm (that’s tricky in Swift, as there’s the iOS memory handling involved), and re-opening it without quitting. My example does that, even though it’s not perfect.", "username": "Paolo_Manna" }, { "code": ".manual.logOut().suspend.invalidate()CFRunLoopbeforeClientResetafterClientReset.discardLocal.manual.discardLocal.discardLocal.manual", "text": "@Paolo_Manna,• I believe the new change (by design) to show all Collections on local Realms is not good in my opinion. A user can now open that Realm from their local device, and then can see all Collections and their Schemas, which would include some that would not apply to them (e.g., maybe an employee or customer). This did not occur in legacy Realm.• The way I start each Realm-App’s EH it does avoid that issue you mentioned with ‘asyncOpen’ - though thanks for being thorough, since pointing it out can help future readers.• Happily awaiting to hear about the changes that are in the works! Hopefully not many (or any ) breaking changes .• OK - I will write something up for Github soon, for both the SDK support on backend and getting a process to improve merging. Regarding the latter… I always felt that Realm could set a flag that is triggered on a Client-Reset to expose needed info on that back-up Realm, to allow us to query for recent sync’d (or even query a metadata file), in order to concisely query such backed-up Realm.• Regarding a ‘reconnect’ to Sync, I will post on the feedback site too, though (IMHO) it seems like GitHub is more active.• Yup - tricky for sure[!]… though I am hoping to know this trick! Yeah I do not want lose any of the User’s data and hence merge it; therefore I use ‘.manual’. I have tried many things; including what I saw in your code too. I had even included logging-out all the users (via ‘.logOut()’ catching potential ‘Error’ too), and have also tried other things too. Unfortunately, with the new Realm (w/ my focus on FS) I am not successful on many Client Resets for Swift. I have not been able to pin-point the nuances for a full-proof success nor identify all of the failures yet. 
I have even tried edge things like adding a ‘.suspend’ before an ‘.invalidate()’, ensuring a read on the re-loggged-in Realm (before merging), and also tried issuing a new user ‘tokens’, etc. (hoping something under the hood might trigger it). I had even played with utilizing ‘CFRunLoop’ – all without consistent success. I would really like to handle this for iOS (to avoid presenting the user a feel that resembles a Windows-like restart )… so I am looking for any advice or direction (to dig deeper) that you can provide - thank you in advance!Can you clear up one last set of questions (assumptions) regarding Client Reset I have?\n• Am I correct that both ‘beforeClientReset’ and ‘afterClientReset’ are only effective for ‘.discardLocal’? -and will not be offered for ‘.manual’ in the future.• Am I correct that if I chose ‘.discardLocal’, since it does attempt to self-correct, that there is NO way to merge data if it fails? – of course, excluding some cases where ‘.discardLocal’ converts to ‘.manual’ (e.g., breaking or destructive schema changes).Thank you for all your time!", "username": "Reveel" }, { "code": "beforeClientResetafterClientReset.discardLocal.manual.manual.discardLocal", "text": "Hi @Reveel,• Am I correct that both ‘ beforeClientReset ’ and ‘ afterClientReset ’ are only effective for ‘ .discardLocal ’? -and will not be offered for ‘ .manual ’ in the future.I don’t think so: as you say, .manual is still the fallback, and is unlikely to change, if nothing else for compatibility reasons.Am I correct that if I chose ‘ .discardLocal ’, since it does attempt to self-correct, that there is NO way to merge data if it fails?I don’t know enough to reliably comment on that, probably the best place to ask would be to open a Github issue (Realm Core would be the best - all SDKs rely on that for low-level behaviour)Hope the discussion helps, even though I doubt its contents will still be relevant for more than a few months…", "username": "Paolo_Manna" }, { "code": "", "text": "@Paolo_MannaOK. Thank you for your time.", "username": "Reveel" } ]
Client Reset Process
2022-06-11T21:37:28.532Z
Client Reset Process
3,410
null
[ "atlas-cluster" ]
[ { "code": "getaddrinfo ENOTFOUND cluster0.hudbd.mongodb.netmongobad auth : Authentication failed.mongodb+srvTLS/SSL is disabled. If possible, enable TLS/SSL to avoid security vulnerabilities.", "text": "I don’t know what half of these terms mean but people sing your praises, I cannot connect via the string and have been trying for over an hour. Currently I get these errorsgetaddrinfo ENOTFOUND cluster0.hudbd.mongodb.net if I choose mongo with “connection string scheme” above it or bad auth : Authentication failed. if i try to click the other option mongodb+srvI’ve managed to be gifted a third error TLS/SSL is disabled. If possible, enable TLS/SSL to avoid security vulnerabilities.", "username": "Andrew_Last" }, { "code": "", "text": "Are you trying to connect by Compass?\nDoes it work with shell?\nBad authentication means wrong combination of userid & password\nWhat is your db & shell version", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Compass, via their own website button from compas I signed up and copied the string. All that compas says now is flash an angry red and yellow error giving me technnical gibberish that a new user is not expected to know.\" TLS/SSL is disabled. If possible, enable TLS/SSL to avoid security vulnerabilities.\" um ok how do i handle this? If you say I need training then this website is built by morons because I could access salesforce and a dozen websites and not need a class.“getaddrinfo ENOTFOUND mongos0.example.com” or “bad auth : Authentication failed.” or my personal favorite for I am a piece of garbage coder and don’t know how to make a good user interface this error takes the cake “connect ECONNREFUSED 127.0.0.1:27017” why is connection refused and what do i do about it?", "username": "Andrew_Last" }, { "code": "", "text": "WTF, now it lets me it at completely random… I just full-on dropped my forehead against the keyboard… and it let me in.", "username": "Andrew_Last" }, { "code": "", "text": "Hahahahaha, awesome !", "username": "Tin_Cvitkovic" }, { "code": "", "text": "No not awesome, because whomeever wrote the python tutorial on your own corporate homepage website forgot to test it outside of their own computer. it breaks.Build world-class Python applications on MongoDB. Use the PyMongo or Motor drivers to create general purpose web apps or PyMongoArrow for MongoDB data analytics. Try our interactive tutorials, check out step-by-step guides, and more.raise NXDOMAIN(qnames=qnames_to_try, responses=nxdomain_responses)\ndns.resolver.NXDOMAIN: None of DNS query names exist: _mongodb._tcp.cluster0.sqdyt.mongodb.net., _mongodb._tcp.cluster0.sqdyt.mongodb.net.local.During handling of the above exception, another exception occurred:", "username": "Andrew_Last" }, { "code": "", "text": "I go to load the API and get an error as it won’t even show what database I’m connected to. FML menu loads and its empty, its an empty box.", "username": "Andrew_Last" }, { "code": "", "text": "Hi @Andrew_Last welcome to the community!Sorry you’re having trouble that seems cryptic to you. I totally sympathize how frustrating it is to see cryptic error messages. We’ve all been there Could you help us understand what you’re trying to do so we can help you achieve it?As I understand it, you’re trying to follow the tutorial in this blog post: Build Applications With MongoDB And Python | MongoDBCould you tell us at what point you got stuck, and the error message you see at that point?Best regards\nKevin", "username": "kevinadi" } ]
Initial setup connection string is broken, gives me one or two of three different errors
2022-06-13T06:22:28.007Z
Initial setup connection string is broken, gives me one or two of three different errors
2,454
null
[ "node-js", "data-modeling", "connecting", "atlas", "configuration" ]
[ { "code": "", "text": "Hi, I’m new working with MongoDB Atlas and I want to know if is there any best practice or properly export and reuse the mongo connection in nodejs, im using the layered structure, so already have the app.js separate from the express server listening, the routes, controllers, models and dao but i dont know how to reuse the db connection from a separate file and where to import it.\n(im using the mongodb native driver)", "username": "Richard_Upton_Pickman" }, { "code": ".envMONGODB_CONNECTION_URLrequire('dotenv').config();\nconst express = require('express');\nconst mongoose = require('mongoose');\n\nmongoose.connect(process.env.MONGODB_CONNECTION_URL).then(() => {\n\n const app = express();\n\n app.use(express.json());\n app.use(express.urlencoded({ extended: true }));\n app.use(express.static(path.join(__dirname, 'public')));\n\n // Add your endpoints\n\n const server = http.createServer(app);\n\n server.listen(process.env.PORT || 3000, () => {\n console.log(`Server is listening on port ${server.address().port}`);\n });\n})\n\n", "text": "Hi,I suggest you to use Mongoose.Generally, you should connect to database before you start the server. With Mongoose, you can just pass the connection string that you can fetch from Atlas. You should not hardcode that connection string because it should not be visible in your code, since your code will probably go in GitHub or somewhere where other people will be able to see it. The good practice is to create .env file and to add an environment variable there (MONGODB_CONNECTION_URL for example), which you can use later in your app without exposing actual secrets. For that, you can use popular dotenv package.Once you connected to MongoDB with Mongoose, you can use Mongoose to query database anywhere in your application, and it will use the connection that you created when starting the server.You can do it like this:", "username": "NeNaD" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Properly export Atlas connection in Node JS
2022-06-15T20:45:42.360Z
Properly export Atlas connection in Node JS
3,486
null
[ "aggregation" ]
[ { "code": "{\n $project: {\n _id: 0,\n visitsTime: 1,\n bracketsTime: 1,\n chartResult: {\n $concatArrays: [\n \"$visitsTime\",\n \"$bracketsTIme\"\n ]\n }\n }\n }\n[\n {\n \"bracketsTime\": [\n {\n \"date\": \"04-08-2022\",\n \"entryHours\": 4.25\n },\n {\n \"date\": \"04-10-2022\",\n \"entryHours\": 6.5\n }\n ],\n \"chartResult\": null,\n \"visitsTime\": [\n {\n \"date\": \"04-20-2022\",\n \"visitHours\": 1\n },\n {\n \"date\": \"04-10-2022\",\n \"visitHours\": 3\n },\n {\n \"date\": \"04-08-2022\",\n \"visitHours\": 2\n },\n {\n \"date\": \"05-26-2022\",\n \"visitHours\": 2\n }\n ]\n }\n]\n", "text": "I’m trying to concat the arrays that are the results of 2 lookup operations within a single aggregation pipeline. The results of both lookups are working fine, but the concat of them is resulting null and I cannot figure out why. Could be that Mongo wont acknowledge the lookup results as an array? But that doesnt make sense since I have access to it within the same project stage as arrays…So I dont know really…The project stage from the pipeline:The current results:Here is a link for a mongo playground I’ve been using for this pipeline:Mongo playground: a simple sandbox to test and share MongoDB queries onlineAny suggestions would be greatly appreciated! thanks", "username": "Jowz" }, { "code": "bracketsTImebracketsTime", "text": "Hi,You have a typo. You typed bracketsTIme instead of bracketsTime.When you update that, you example is working.", "username": "NeNaD" }, { "code": "", "text": "Much appreciated! I’m used to chasing down typos and syntax errors, but for some reason these aggregation pipelines make my eyes cross… Thank you", "username": "Jowz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to concat results from 2 lookup operations within 1 aggregation pipeline?
2022-06-15T20:54:39.988Z
How to concat results from 2 lookup operations within 1 aggregation pipeline?
1,497
null
[ "aggregation", "queries", "node-js" ]
[ { "code": ".find().aggregate()myCursor.batchSize(X).toArray()for...awaitwhile myCursor.hasNext() myCursor.next()", "text": "Hello all, I’m working on a small application to query my Atlas. The size of the collection is trivial now, but I’m looking at performance techniques for the future. In using .find() and .aggregate() the node.js driver says they return cursors, which are iterable objects that I can pull out my docs for processing.My understanding was that either as an option to those methods, or by calling myCursor.batchSize(X) I would be able to set the number of docs I can pull out per iteration (as opposed to .toArray() which I’m using and could become unmanageableIs that correct?Right now, looping over my cursor with for...await or while myCursor.hasNext() myCursor.next() I only get one document at a time. I thought I could at least process them 2, 8, etc at a time with batchSize.", "username": "zacharykane" }, { "code": "batchSizegetMorefind()aggregate()getMoremyCursor.toArray()getMorebatchSizegetMorebatchSizegetMoremyCursor.next()batchSize(1)", "text": "@zacharykane,When setting batchSize for a cursor this is changing the number of documents that will be included in a getMore command’s response. The “optimization” here is limiting the number of network roundtrips required to retrieve the full result set.From our “Iterating a Cursor” tutorial’s section on Cursor Batches:find() and aggregate() operations have an initial batch size of 101 documents by default. Subsequent getMore operations issued against the resulting cursor have no default batch size, so they are limited only by the 16 megabyte message size.When you call myCursor.toArray() the Driver will retrieve and deserialize the results from the server using as many getMore commands as are needed to exhaust the cursor. If you don’t set a batchSize, then after the first 101 results the next getMore will fetch as many results will fit into 16MB before returning. If you set a batchSize of 1 and you have 1000 results then the Driver will call getMore 999 times.Right now, looping over my cursor with for…await or while myCursor.hasNext() myCursor.next() I only get one document at a time. I thought I could at least process them 2, 8, etc at a time with batchSize.The myCursor.next() call iterates the cursor and returns the next result, but this does not involve a network round trip (unless batchSize(1) is set).", "username": "alexbevi" }, { "code": ".next()", "text": "Hey @alexbevi !Okay, so I’m thinking of this incorrectly then. This is more about optimizing the underlying requests to the DB server, and NOT about changing the return value/functioning of the cursors?Using JS iteration techniques or the .next() api will always just return one doc from the query’s cursor no matter what?", "username": "zacharykane" } ]
(Mis)Understanding batchSize?
2022-06-15T05:48:26.615Z
(Mis)Understanding batchSize?
6,711
null
[ "graphql" ]
[ { "code": "{\n link: objectId,\n **create**: object\n}\n{\n link: objectId,\n **update**: object\n}\n{\n \"id\": \"626f2eec9d529fac10934538\",\n \"input\": {\n \"brands\": {\n \"link\": \"626f2eec9d529fac10934531\",\n \"create\": {\n \"_id\": \"626f2eec9d529fac10934531\",\n \"name\": \"Nike\"\n }}\n }\n}\n", "text": "In a relationship, there isBUT there is noWhen creating a new linked object, the linked object gets created fine BUT when updating the same object, GraphQL throws an error saying the _id is duplicated.\nAs an example, this works if the item does not existBUT will fail if you try to update or upsert. It looks like Realm GraphQL only defaults to insert regardless of Update or Upsert. It makes sense to be able to update linked objects if I am allowed to create them .Is there anything I am missing or is there no update for linked objects?", "username": "David_Oabile" }, { "code": " \"brand\": {\n \"ref\": \"#/relationship/mongodb-atlas/products/brands\",\n \"foreignKey\": \"_id\",\n \"isList\": false\n },\n{\n link: objectId,\n create: BrandInsertOne\n}\n", "text": "It will be good to get feedback from the Mongo Realm team on why they have linked relationships to only create BUT NOT update. I know the common answer is to use customer resolvers but seems like a standard workflow.Just to sum up the above if you have defined a relationship:GraphQL will createThis allows you to insert a new parent object embedded with a linked object. BUT when you try to update the same object, you get a duplicate error on the embedded object.", "username": "David_Oabile" } ]
Updating linked object from GraphQL raises duplicate key error
2022-06-12T06:35:58.813Z
Updating linked object from GraphQL raises duplicate key error
2,963
null
[ "crud" ]
[ { "code": "let condition = {\n $cond:{\n if:{ $gte:[{$size:\"Ranking\"},10]},\n then:{$and:[{$pop:{Ranking: -1}},{$push:{Ranking:123}}]},\n else:{$push:{Ranking:345}}\n }\n}\ndb.collection('musicChart').findOneAndUpdate({Song:'Beatbox'},{$expr:{Ranking:{condition}}},{upsert:true})\n", "text": "Hello,\nLet me first provide the context of what is my database and what I want to with it. So, I have a database of song names and their ranks in music chart. What I want to do isBut I got an error of Unknown modifier: $expr.\nI tried to search forums for the answers but I can’t find the exact of what I want to do. So I decided to post here.", "username": "Kyaw_Zayar_Tun" }, { "code": "function updateRank(value) {\n return db.coll.findOneAndUpdate({Song: 'Beatbox'},\n [\n {\n $set: {\n Ranking: {\n $cond: {\n if: {$gte: [{$size: {$ifNull: [\"$Ranking\", []]}}, 10]},\n then: {$concatArrays: [{$slice: [\"$Ranking\", 1, 9]}, [value]]},\n else: {$concatArrays: [{$ifNull: [\"$Ranking\", []]}, [value]]}\n }\n }\n }\n }\n ], {upsert: true, new: true});\n}\n\ndb.coll.drop();\nupdateRank(1);\ndb.coll.updateOne({Song: \"Beatbox\"}, {\"$set\": {Ranking: [1, 2, 3, 4, 5, 6, 7, 8, 9]}});\nupdateRank(10);\nupdateRank(11);\n$ mongosh --quiet test < /tmp/mdb/query.js \ntest [direct: primary] test> function updateRank(value) {\n... return db.coll.findOneAndUpdate({Song: 'Beatbox'},\n..... [\n..... {\n....... $set: {\n......... Ranking: {\n........... $cond: {\n............. if: {$gte: [{$size: {$ifNull: [\"$Ranking\", []]}}, 10]},\n............. then: {$concatArrays: [{$slice: [\"$Ranking\", 1, 9]}, [value]]},\n............. else: {$concatArrays: [{$ifNull: [\"$Ranking\", []]}, [value]]}\n............. }\n........... }\n......... }\n....... }\n..... ], {upsert: true, new: true});\n... }\n[Function: updateRank]\ntest [direct: primary] test> \n\ntest [direct: primary] test> db.coll.drop();\ntrue\ntest [direct: primary] test> updateRank(1);\n{\n _id: ObjectId(\"62aa12362b73caadd158f2de\"),\n Song: 'Beatbox',\n Ranking: [ 1 ]\n}\ntest [direct: primary] test> db.coll.updateOne({Song: \"Beatbox\"}, {\"$set\": {Ranking: [1, 2, 3, 4, 5, 6, 7, 8, 9]}});\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\ntest [direct: primary] test> updateRank(10);\n{\n _id: ObjectId(\"62aa12362b73caadd158f2de\"),\n Song: 'Beatbox',\n Ranking: [\n 1, 2, 3, 4, 5,\n 6, 7, 8, 9, 10\n ]\n}\ntest [direct: primary] test> updateRank(11);{\n _id: ObjectId(\"62aa12362b73caadd158f2de\"),\n Song: 'Beatbox',\n Ranking: [\n 2, 3, 4, 5, 6,\n 7, 8, 9, 10, 11\n ]\n}\n", "text": "Hi @Kyaw_Zayar_Tun and welcome in the MongoDB Community !I think I got it working with all 3 scenarios: doc doesn’t exist, small size, big size.Check out this script and its output below.Result in the console:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks, will check this out!!", "username": "Kyaw_Zayar_Tun" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Conditional Operations with findOneAndUpdate
2022-06-15T13:35:09.896Z
Conditional Operations with findOneAndUpdate
3,890
https://www.mongodb.com/…8_2_1023x307.png
[]
[ { "code": "", "text": "I currently use M0 instances, but something weird happens every day at the same hour\n\nimage1706×512 58.9 KB\n\nThe peak of opcounters commands freezes my project for a few minutes; is it possible that another user who shares the same CPU appears here?", "username": "ocielgp" }, { "code": "", "text": "Hey Ociel,The opcounters you reference here reflect operations that you/your application perform against your M0 instance - another user’s operations would not appear there… Is it possible that you have a cron job or an App Service trigger running somewhere that you are not actively maintaining?I recommend reaching out to our support via the chat icon in the bottom right corner of the Atlas UI.Best,\nChris", "username": "Christopher_Shum" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Metrics High Opcounters command every day
2022-06-04T22:05:23.258Z
MongoDB Metrics High Opcounters command every day
1,551
null
[ "aggregation", "queries", "crud" ]
[ { "code": "db.<collection>.updateMany({}, [{$set: {user_ID: {$floor: { $multiply: [{$rand: {}}, 10000]} } } }] )subjectNames = [ \"Chemistry\", \"Physics\", \"Database Systems\", \"Functional Programming\", . . ., \"10th subject\"]\"Subject\" : \"subject_name\"", "text": "I’ve udes this to update a field with a random number.db.<collection>.updateMany({}, [{$set: {user_ID: {$floor: { $multiply: [{$rand: {}}, 10000]} } } }] )How can I use this to update a field in all 10 documents with a random name from a list of 10 names?e.g.\nsubjectNames = [ \"Chemistry\", \"Physics\", \"Database Systems\", \"Functional Programming\", . . ., \"10th subject\"]In all 10 documents the field name is;\n\"Subject\" : \"subject_name\"Thanks in advance", "username": "Carlos_Harris" }, { "code": "$arrayElemAt$floor$multiply$randdb.collection.aggregate([\n {\n \"$set\": {\n \"subject\": {\n \"$arrayElemAt\": [\n [\n \"Chemistry\",\n \"Physics\",\n \"Database Systems\",\n \"Functional Programming\",\n \"Mathematics\"\n ],\n {\n \"$floor\": {\n \"$multiply\": [\n {\n \"$rand\": {}\n },\n 5\n ]\n }\n }\n ]\n }\n }\n }\n])\n", "text": "Hi,Here is how you can do it:Working example", "username": "NeNaD" }, { "code": "", "text": "Thank you very much for this. Much appreciated", "username": "Carlos_Harris" }, { "code": "", "text": "I forgot to say that “subject” should be “subjects” and it is an array of max 4. The list of subjects has 15 elements in them", "username": "Carlos_Harris" }, { "code": "subjectNames = [ \"Chemistry\", \"Physics\", \"Database Systems\", \"Functional Programming\", . . ., \"10th subject\"]\"subjects\" : [ sub1, sub2, sub3, sub4 ]", "text": "subjectNames = [ \"Chemistry\", \"Physics\", \"Database Systems\", \"Functional Programming\", . . ., \"10th subject\"]\"subjects\" : [ sub1, sub2, sub3, sub4 ]", "username": "Carlos_Harris" } ]
Updating a field with a random name from a list
2022-06-15T13:43:00.790Z
Updating a field with a random name from a list
3,326
null
[ "queries", "mongodb-shell" ]
[ { "code": "", "text": "localDb> db.help.createIndex({fielda:“text”})\nTypeError: db.help.createIndex is not a functionGetting this error in Mongosh", "username": "Girish_V" }, { "code": "mongomongoshhelpmongoshdb.helpdb.help()db.getCollection('help').createIndex({fielda:\"text\"})\n", "text": "In the shell (and this is true both for the legacy mongo as well as for mongosh) the help property of most objects is special as it is used for the integrated contextual help. If in mongosh you do db.help or db.help() the shell prints out the help content for the database class.To get around that, you can do:", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Thank you so much for the root cause …it was confusing when the command did not work , \nyour solution worked …we have just started on Mongodb, will probably change the collection name.\nThanks again", "username": "Girish_V" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not able to use createIndex in mongosh
2022-06-15T09:15:40.690Z
Not able to use createIndex in mongosh
1,504
null
[ "node-js", "crud", "mongoose-odm", "typescript" ]
[ { "code": "{\n \"product\": {\n \"type\": \"Dairy\"\n },\n \"weight\": 2,\n \"quantity\": 1\n}\n{\"product.type\" : \"Vegetable\"}\nconst updateQuery = {\n \"product\": {\n \"type\": \"Vegetable\" //modify this\n },\n \"weight\": 3 //modify this too\n}\nupdateProductById()const updateProductById = async(productId: string, updateQuery) => {\n\n await ProductsModel.findOneAndUpdate(\n {_id: productId},\n {$set: updateQuery},\n {new: true}\n ).exec()\n_idconst updatedProduct = await ProductsModel.findOneAndUpdate(\n { _id: productId },\n { \"product.type\": \"Vegetable\" },\n { new: true }\n).exec()\n{\"product.type\" : \"Vegetable\"}", "text": "Hello,If we consider the following document that contains 1 embedded document:In order to update the product’s type one could do the following:However, let’s suppose we wanted to update the document dynamically, as many both embedded and non-embedded values as we prefer.Ergo, an idea would be to build a query object:and pass it as a parameter to the updateProductById() function:However, this will overwrite the original embedded document, resulting in a regenerated embedded document’s _id key.Thus, we would have to do the following instead:That’s good for that one value only, buthow do you use {\"product.type\" : \"Vegetable\"} dynamically in Javascript passed along with other, non-embedded values?Is it possible to somehow express this key- value pair in a Javascript object?finally, could you tell me please, how to describe this in a Typescript interface?Thank you!", "username": "RENOVATIO" }, { "code": "_id \"product\": {\n \"type\": \"Vegetable\" //modify this\n },\n{ \"product.type\": \"Vegetable\" }{\"product.type\" : \"Vegetable\"}", "text": "resulting in a regenerated embedded document’s _id key.Mongoose is doing that, not mongo.When you update withYou basically say set the field product of the root document to the object {type:Vegetable}.When you update with{ \"product.type\": \"Vegetable\" }You basically say set the field type of the sub-object product.Both type of updates are needed for different use-case.Forhow do you use {\"product.type\" : \"Vegetable\"} dynamically in Javascript passed along with other, non-embedded values?see Find - $text search with $or + other optional values - #2 by steevej", "username": "steevej" }, { "code": "", "text": "I find it really strange that Mongo is advocating for embedded documents, a feature that is integral to its high performance philosophy, yet them being so clunky and uncomfortable to work with. The only use of embedded documents, that with its straightforwardness could match the ease and bliss of working on the document scope itself, is saving the data and forgetting about it or printing it along with the rest of the data. The moment one needs to cherry pick embedded properties, modify them, verify their uniqueness against the whole collection, suddenly they become less attractive.I am grateful for your link you have provided, but I could not find a desired solution whereby it would be possible to construct a single object that could be documented by a Typescript interface as a JS object, an object that would allow to dynamically update the whole or a part of the document or/ and its embedded properties without resetting the embedded _id keys.", "username": "RENOVATIO" }, { "code": "", "text": "Mongoose and Typescript are in your way rather than helping to access MongoDB easily and its full flexibility. 
They even slow down your development because you have to write these extra specifications and find ways to bypass them for use-cases they do not handle well.Like I wrote:Mongoose is doing that, not mongo.and by that, I mean:resetting the embedded _id keys", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Working with embedded documents in Typescript
2022-06-11T11:34:34.311Z
Working with embedded documents in Typescript
4,041
null
[ "queries", "node-js", "mongoose-odm", "transactions" ]
[ { "code": "const sales = await Sale.find({seller, \"createdAt\": {$gte: finalUnix}}).lean().exec();\nconst Sale = new mongoose.Schema({\n seller: {\n type: String,\n required: true,\n },\n product: {\n type: String,\n required: true,\n },\n comment: {\n type: String,\n required: true,\n },\n transaction: {\n type: Object\n },\n createdAt: {\n type: String,\n required: true,\n }\n})\n", "text": "Guys, Im running the following query:seller is the ID of the seller, like a hash, and the createdAt is an unix timestamp.So, the collection has a total of 21k documents and this query returns about 1.100 results. But the problem is that, it is taking about 15-20 seconds to complete and I think that’s a lot of time, because I dont have much documents and results.My schema is:And that’s it. The transaction object stored there doesnt contain much information, (I mean, it is not heavy at all). What could be the problem?Note: I don’t know if it can be the problem but my application is probably running other queries while this one is running. (Because I have scripts working 24/7 collecting and storing data).", "username": "foco_radiante" }, { "code": "", "text": "Post the explain plan so that we have an idea of what is happening.Hardware configuration is also of interest for performance issues.", "username": "steevej" }, { "code": "", "text": "Hey @foco_radiante,As @steevej has mentioned, kindly post the explain plain for us to better understand and help you with the problem. I also noticed that your included schema snippet does not have any index declarations, which could be one of the reasons for the slow queries. I suggest adding the appropriate compound index to support that query if you are not already using them.I also noticed you had a very similar discussion here as well:Did this discussion help resolve this problem or something is still left to be resolved?Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Simple query taking long time to execute
2022-06-01T07:58:00.080Z
Simple query taking long time to execute
3,709
null
[ "replication", "database-tools", "backup" ]
[ { "code": "", "text": "I have a mongo replicaset on version 3.6.2. I want to upgrade it to 5.0. I do not want to perform the step by step upgrade process. Instead I want to create a new setup i.e replicaset in version 5.0.\nI want to migrate the data to the new setup. I am planning to do the below steps:I tried this in the testing environment and it worked just fine. Can this cause any issues with the data. I could not find any info about thi in the mongo docs. Can I face issues later on because of the version difference?", "username": "Ishrat_Jahan" }, { "code": "", "text": "Hello and welcome to the community forum, @Ishrat Jahan!!I do not want to perform the step by step upgrade processWe do not suggest you to upgrade directly from 3.6 to 5.0 as there might be a great difference in the features and enhancements between the versions. Please refer to the following documentation here which explains the same.Therefore, the recommended and tested in-place method is to upgrade the MongoDB version by upgrading sequentially. Hence, for you the upgrade sequence would be 3.6 → 4.0 → 4.2 → 4.4 → 5.0.The reason why we suggest a step wise upgrade is because each major version is released with new features or enhancements of the legacy features. Hence to maintain the data integrity, the step wise upgrade is recommended. Please refer to the following documentation Upgrade to 5.0 to read more on the same.In addition, we have a related discussion on a similar topic MongoDB community that you might want to take a look for further information/documentation.Please do let us know if you have any concerns.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Is there a 4.6 version of mongo ? I think we directly jump from 4.4 → 5 https://www.mongodb.com/docs/manual/release-notes/5.0-upgrade-replica-set/#upgrade-version-path", "username": "Ishrat_Jahan" }, { "code": "", "text": "Thank you for pointing it out @Ishrat_Jahan.It was MongoDB version 5.0 after 4.4. I would edit the post for future inconsistencies.Thanks\nAasawari", "username": "Aasawari" } ]
Dump and Restore between version 3.6 and 5.0
2022-06-10T10:18:29.536Z
Dump and Restore between version 3.6 and 5.0
4,778
null
[ "atlas-device-sync", "android" ]
[ { "code": "", "text": "I have a ‘personalData’ in ‘datas’ table in Atlas. I am using flexible sync with subscription. The conditions for this issue are following:Any clue about why is it so and how to solve it?", "username": "Santosh_Kumar4" }, { "code": "", "text": "Finally I found that if any modification is made on the database directly, it throws ‘Client Reset’ in case of ‘Flexible Sync’.‘Flexible Sync’ works only till all changes are affected by the app. If you use Compass/Function/Webhook to modify the data, it requires ‘Client Reset’.The error received by me on client reset was :\nbad last integrated server version in upload progress (500180358788 > 17) (ProtocolErrorCode=209)Another point to note is that if you use Flexible Sync, you are required to handle Client Reset manually.Therefore my whole day or two is going to be spent in implementing Manual Reset, unless someone can explain the exact meaning of : \" bad last integrated server version\".How to tell the local-realm that the data is deleted/modified in remote-realm and as the deletion/modification happened later than your timestamp, you should discard the local data and re-download remote data?", "username": "Santosh_Kumar4" } ]
Realm.getInstanceAsync doesn't get called for the first time on android app
2022-06-14T07:05:16.704Z
Realm.getInstanceAsync doesn’t get called for the first time on android app
2,010
null
[ "java", "connecting" ]
[ { "code": " try (MongoClient mongoClient = MongoClients.create(uriString);) {\n some crud operations ....\n }\n", "text": "Hi.\nI have a Windows VPS server with MongoDB installed, an admin user, and some databases with collections created.\nThe problem is that I cannot connect to the database at all!\nNo matter whether I use Compass, the shell, or Java code, I cannot connect to the remote server.\nThis is my Java code:\nfinal String uriString = “mongodb://<What should i put here ? >:27017/”;I used the hostname and the IP address but it didn't work.\nCan anyone help me, please?", "username": "ART_OF_CODE" }, { "code": "mongodb://some_uid:some_password@your_server:27017", "text": "final String uriString = “mongodb://<What should i put here ? >:27017/”;mongodb://some_uid:some_password@your_server:27017", "username": "Jack_Woehr" } ]
How can I connect to a MongoDB server running on a Windows server?
2022-02-08T22:50:46.276Z
How can I connect to a MongoDB server running on a Windows server?
2,396
null
[]
[ { "code": "", "text": "Hi, after upgrade mongodb replication cluster to 32Gb and 16 core, it seems usually unstable and crash due to high memory usage. Normally the system always execute query lower than 300ms, but sometimes all the query (whether heavy or light) always get higher than 20000ms and after that the mongod was killed by kernel by oom-killer. I’ve use datadog for system monitor and get the memory peak when mongod of primary go down.\nMongodb cluster info:", "username": "barryd" }, { "code": "", "text": "Datadog Metrics:Image Screen-Shot-2022-05-30-at-17-36-59 hosted in ImgBBMongoops Metrics:Image Screen-Shot-2022-05-30-at-17-35-46 hosted in ImgBB", "username": "barryd" }, { "code": "mongotopmongostatmongodmongodmongodmongodmongod", "text": "Hi @barryd welcome to the community!after upgrade mongodb replication cluster to 32Gb and 16 core, it seems usually unstable and crash due to high memory usageIf I understand correctly, you upgraded the hardware but now you’re seeing worse performance? What was the hardware in the previous iteration?As a basic check, you might want to check out what’s happening via mongotop and mongostat to see if anything is amiss, and also check the mongod logs for anything out of the ordinary. Please see MongoDB Performance Tuning Questions to see if any of the suggestions there can help.For a sanity check, are you running anything else besides a single mongod in any of the nodes there? It’s highly recommended to not run anything else besides mongod in a node (e.g. other servers, apps, another mongod process, etc.) since they can compete with mongod for resources.For this setting in particular, is there a reason why it’s changed from the default? Have you tried setting it back to its default value?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank for your reply, yes we face the problem when upgrade the hardware but I forgot to tell that we had merged and cleaned up multiple replicaset cluster (15) into two clusters and upgrade the hardware of these. Total databases in one cluster now is around 500, but the datasize (not usage disk) after cleaned up is ~300GB and equal to one old cluster.\nAnd we use mongoops and freshed ec2 instances to deploy so all nodes run mongodb only.\nThe problem that I cannot understand is that the execution time is very high (almost every 5 minutes) even with the simplest query (findOne in a collection has no docs take 50second to execute) and sometimes the primary was killed and switch while cpu is not high (40-50%). When the execution time go high all query was affected but the CPU usage is still stable at 40-50%. Every metrics in the mongoops chart is normal, total connection is the same as old cluster ~1500 connections and the cpu usage always lower than 60%.", "username": "barryd" }, { "code": "mongostatmongodiostat", "text": "Hi @barrydOk so let me see if I understand your case correctly:Did I get that correctly?Now the problems:Actually all the problems you see point to typical symptoms of the hardware struggling to keep up with the work. If you have inefficient queries (e.g. queries that returns the majority of a collection, queries without indexes to back them up), then they will put additional burdens as well. 
You can try to optimize the queries, but at some point there’s no way around needing more hardware to service the workload.If my understanding of the before & after state of your deployment is accurate, then one replica set is now serving the work of ~7 replica sets previously, so anecdotally you’ll need 7x the old hardware There are some tools that can help you determine how your deployment is coping with work:Please post the output of these two tools during the busiest period, as typically we can determine if the hardware is really struggling from their output.Also I would recommend you to set the WT cache back to it’s default setting. Making it smaller would amplify some issue, so we should check if setting it back to default makes it slightly better.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "The cases is correct.\nThe problem of execution time is not 5 mins for each query, I mean every 5 minutes something affect the server and cause all query slowed even findOne in a collection has no docs take 50 second to execute.\nAnd the workload of new cluster is not ~7 replica sets previously, because we also reduce the traffic too, the total connections to new cluster is the same as old one. The difference is that the total databases in one cluster is now larger than old one (due to multi-tenantcy architecture).\nHere is the output of these tools in one node of a cluster:\n\nScreen Shot 2022-06-10 at 07.18.57998×139 9.76 KB\n\n\nScreen Shot 2022-06-10 at 07.15.292008×604 84.8 KB\n", "username": "barryd" }, { "code": "", "text": "I don’t think hardware is the problem. One thing we recognize from mongodb logs is when the query slowed, almost of time execute was spend on “schemaLock”.", "username": "barryd" }, { "code": "mongotop", "text": "Yes I don’t see the numbers in mongotop that are outlandish, so it’s possible it’s something else.One thing we recognize from mongodb logs is when the query slowed, almost of time execute was spend on “schemaLock”.Do you mean that this doesn’t happen all the time? E.g., every 5 mintues this will happen consistently? Could you post the log snippet when this happens?Just for due diligence and to ensure you’re not running into a solved issue, I noted that you’re running MongoDB 4.0.10, which is quite old by now. The latest in the 4.0 series is 4.0.28. Is it possible to at least try this version?Having said that, please note that the whole 4.0 series is out of support since April 2022, so you might want to consider moving to a supported version.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Yes it’s not happen all the time but every 5-10 mins consistently.\nSome logs when query was slowed:\nmongodb.log (12.3 KB)I will prepare to upgrade the mongod version, do you have some migrate or upgrade notes to announce?", "username": "barryd" }, { "code": "", "text": "Unfortunately I cannot see the log you posted for some reason. Could you copy the relevant part instead?Upgrading within a major version should be a straightforward binary replacement (e.g. from 4.0.10 to 4.0.28), but it’s slightly more involved to upgrade to a newer major release. To upgrade from 4.0 to 4.2, please have a look at https://www.mongodb.com/docs/manual/release-notes/4.2-upgrade-replica-set/Please note that the supported upgrade path between major versions only covers subsequent versions, e.g. 
to upgrade to 5.0, you’ll need to do 4.0 → 4.2 → 4.4 → 5.0Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Here you are2022-06-08T01:48:02.822+0000 I NETWORK [listener] connection accepted from 172.16.106.98:44388 #68446 (1636 connections now open)\n2022-06-08T01:48:02.836+0000 I ACCESS [conn68446] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44388\n2022-06-08T01:48:02.853+0000 I ACCESS [conn68446] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44388\n2022-06-08T01:48:02.867+0000 I ACCESS [conn68446] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44388\n2022-06-08T01:48:02.881+0000 I ACCESS [conn68446] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44388\n2022-06-08T01:48:02.895+0000 I ACCESS [conn68446] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44388\n2022-06-08T01:48:02.909+0000 I ACCESS [conn68446] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44388\n2022-06-08T01:48:02.923+0000 I ACCESS [conn68446] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44388\n2022-06-08T01:48:03.004+0000 I NETWORK [listener] connection accepted from 127.0.0.1:35332 #68447 (1637 connections now open)\n2022-06-08T01:48:03.315+0000 I COMMAND [conn26646] command db1.products command: find { find: “products”, filter: { is_active: true, is_deleted: false }, sort: { created: -1 }, projection: {}, skip: 119900, limit: 100, returnKey: false, showRecordId: false, lsid: { id: UUID(“c0889d15-d2a2-4891-8f54-bf6d8e11d368”) }, $clusterTime: { clusterTime: Timestamp(1654652882, 10), signature: { hash: BinData(0, 5DB339F8B7F72F5E1A38A2F261F3525DD1C33118), keyId: 7057813452781256841 } }, $db: “db1” } planSummary: IXSCAN { is_active: 1, created: -1 } keysExamined:120000 docsExamined:120000 cursorExhausted:1 numYields:938 nreturned:100 reslen:139555 locks:{ Global: { acquireCount: { r: 939 } }, Database: { acquireCount: { r: 939 } }, Collection: { acquireCount: { r: 939 } } } storage:{ data: { bytesRead: 14742, timeReadingMicros: 14 } } protocol:op_msg 219ms\n2022-06-08T01:48:03.487+0000 I COMMAND [conn68332] command local.oplog.rs command: collStats { collstats: “oplog.rs”, $readPreference: { mode: “secondaryPreferred” }, $db: “local” } numYields:0 reslen:7143 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 56654174 } } protocol:op_query 56654ms\n2022-06-08T01:48:03.487+0000 I COMMAND [conn68364] command local.oplog.rs command: collStats { collstats: “oplog.rs”, $readPreference: { mode: “secondaryPreferred” }, $db: “local” } numYields:0 reslen:7143 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 63527346 } } protocol:op_query 63527ms\n2022-06-08T01:48:03.487+0000 I COMMAND [conn68440] command local.oplog.rs command: collStats { collstats: “oplog.rs”, $readPreference: { mode: “secondaryPreferred” }, $db: “local” } numYields:0 reslen:7143 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 23466243 } } protocol:op_query 23466ms\n2022-06-08T01:48:03.488+0000 I COMMAND [conn2348] 
command db3.balances command: find { find: “balances”, filter: { created: { $lt: new Date(1654534800000) } }, sort: { created: -1 }, projection: {}, limit: 1, lsid: { id: UUID(“6b230d18-ede8-4fba-aa73-cc97a901b51b”) }, $clusterTime: { clusterTime: Timestamp(1654652812, 125), signature: { hash: BinData(0, 74934498926F07BC1FA12E736332F8EAAD2D6A0C), keyId: 7057813452781256841 } }, $db: “db3” } planSummary: IXSCAN { created: 1 } keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:231 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } storage:{ timeWaitingMicros: { handleLock: 28808, schemaLock: 70536054 } } protocol:op_msg 70564ms\n2022-06-08T01:48:03.488+0000 I NETWORK [conn68332] end connection 172.16.106.98:44040 (1636 connections now open)\n2022-06-08T01:48:03.491+0000 I WRITE [conn39654] update db2.orders command: { q: { _id: ObjectId(‘629ffb1ca8338b7e97dce04f’) }, u: { $set: { somefield: “value” } }, multi: false, upsert: false } planSummary: IDHACK keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keysInserted:1 keysDeleted:1 numYields:1 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { w: 3 } } } storage:{ data: { bytesRead: 12714, timeReadingMicros: 11 }, timeWaitingMicros: { handleLock: 2551, schemaLock: 60845939 } } 60850ms\n2022-06-08T01:48:03.491+0000 I COMMAND [conn39654] command db2.$cmd command: update { update: “orders”, updates: [ { q: { _id: ObjectId(‘629ffb1ca8338b7e97dce04f’) }, u: { $set: { somefield: “value” } }, upsert: false, multi: false } ], ordered: true, lsid: { id: UUID(“880af29e-6d0b-41bf-878c-11029ac879d7”) }, txnNumber: 42, $clusterTime: { clusterTime: Timestamp(1654652820, 20), signature: { hash: BinData(0, D892514641C0A21507EBE33BA230A731093091B0), keyId: 7057813452781256841 } }, $db: “db2” } numYields:0 reslen:245 locks:{ Global: { acquireCount: { r: 6, w: 4 } }, Database: { acquireCount: { w: 4 } }, Collection: { acquireCount: { w: 4 } }, Metadata: { acquireCount: { W: 1 } } } storage:{} protocol:op_msg 60850ms\n2022-06-08T01:48:03.493+0000 I COMMAND [conn26295] command db4.costs command: insert { insert: “costs”, documents: [ { status: “pending”, currency: “USD”, _id: ObjectId(‘629fffce9d7f4312150e83c5’), order: ObjectId(‘629f5020a8338bba4adc4c5e’), created: new Date(1654652878704), __v: 0 } ], ordered: true, lsid: { id: UUID(“73feff84-ee1f-4334-a69c-663409705397”) }, txnNumber: 2, $clusterTime: { clusterTime: Timestamp(1654652878, 4), signature: { hash: BinData(0, C99B37FB735574CB01648E132D4B1CB1D56E15EA), keyId: 7057813452781256841 } }, $db: “db4” } ninserted:1 keysInserted:6 numYields:0 reslen:230 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } storage:{ data: { bytesRead: 104055, timeReadingMicros: 2071 }, timeWaitingMicros: { handleLock: 7271, schemaLock: 4776151 } } protocol:op_msg 4786ms\n2022-06-08T01:48:03.496+0000 I COMMAND [conn68441] serverStatus was very slow: { after basic: 0, after advisoryHostFQDNs: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after freeMonitoring: 0, after globalLock: 0, after logicalSessionRecordCache: 0, after network: 0, after opLatencies: 0, after opReadConcernCounters: 0, after opcounters: 0, after opcountersRepl: 0, after oplog: 1694, after repl: 1694, after security: 1694, after storageEngine: 1694, after 
tcmalloc: 1694, after transactions: 1694, after transportSecurity: 1694, after wiredTiger: 1694, at end: 1695 }\n2022-06-08T01:48:03.496+0000 I COMMAND [conn68441] command local.oplog.rs command: serverStatus { serverStatus: 1, advisoryHostFQDNs: 1, locks: 0, recordStats: 0, oplog: 1, $readPreference: { mode: “secondaryPreferred” }, $db: “admin” } numYields:0 reslen:33408 locks:{ Global: { acquireCount: { r: 3 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 1695ms\n2022-06-08T01:48:03.502+0000 I ACCESS [conn68440] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44314\n2022-06-08T01:48:03.503+0000 I NETWORK [conn68364] end connection 172.16.106.98:44112 (1635 connections now open)\n2022-06-08T01:48:03.515+0000 I ACCESS [conn68441] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44318\n2022-06-08T01:48:03.516+0000 I ACCESS [conn68440] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44314\n2022-06-08T01:48:03.530+0000 I ACCESS [conn68440] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44314\n2022-06-08T01:48:03.533+0000 I ACCESS [conn68441] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44318\n2022-06-08T01:48:03.546+0000 I ACCESS [conn68440] Successfully authenticated as principal mms-monitoring-agent on admin from client 172.16.106.98:44314\n2022-06-08T01:48:03.547+0000 I WRITE [conn39369] update db2.orders command: { q: { _id: ObjectId(‘629ffb1ca8338b7e97dce04f’) }, u: { $set: { date: new Date(1654652823741) } }, multi: false, upsert: false } planSummary: IDHACK keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keysInserted:1 keysDeleted:1 writeConflicts:780 numYields:780 locks:{ Global: { acquireCount: { r: 782, w: 782 } }, Database: { acquireCount: { w: 782 } }, Collection: { acquireCount: { w: 782 } } } storage:{} 59783ms\n2022-06-08T01:48:03.549+0000 I COMMAND [conn39369] command db2.$cmd command: update { update: “orders”, updates: [ { q: { _id: ObjectId(‘629ffb1ca8338b7e97dce04f’) }, u: { $set: { date: new Date(1654652823741) } }, upsert: false, multi: false } ], ordered: true, lsid: { id: UUID(“ebed48c8-dc08-46df-9a04-b76f179353b0”) }, txnNumber: 39, $clusterTime: { clusterTime: Timestamp(1654652823, 6), signature: { hash: BinData(0, A69CE8B3E8DBDF008051A969595868A30CB2CBDF), keyId: 7057813452781256841 } }, $db: “db2” } numYields:0 reslen:245 locks:{ Global: { acquireCount: { r: 785, w: 783 } }, Database: { acquireCount: { w: 783 } }, Collection: { acquireCount: { w: 783 } }, Metadata: { acquireCount: { W: 1 } } } storage:{} protocol:op_msg 59785ms\n2022-06-08T01:48:04.327+0000 I COMMAND [conn26646] command db1.products command: find { find: “products”, filter: { is_active: true, is_deleted: false }, sort: { created: -1 }, projection: {}, skip: 120000, limit: 100, returnKey: false, showRecordId: false, lsid: { id: UUID(“c0889d15-d2a2-4891-8f54-bf6d8e11d368”) }, $clusterTime: { clusterTime: Timestamp(1654652883, 8), signature: { hash: BinData(0, 13CB05DE002D356B635E7EB9CE4BBD2FD0D287A4), keyId: 7057813452781256841 } }, $db: “db1” } planSummary: IXSCAN { is_active: 1, created: -1 } keysExamined:120100 docsExamined:120100 cursorExhausted:1 numYields:939 nreturned:100 reslen:141777 locks:{ Global: { acquireCount: { r: 940 } }, Database: { acquireCount: { r: 940 } }, Collection: { 
acquireCount: { r: 940 } } } storage:{ data: { bytesRead: 58688, timeReadingMicros: 42 } } protocol:op_msg 204ms\n2022-06-08T01:48:04.673+0000 I ACCESS [conn68434] Successfully authenticated as principal __system on local from client 172.16.106.98:44278\n2022-06-08T01:48:04.680+0000 I ACCESS [conn68442] Successfully authenticated as principal __system on local from client 172.16.106.98:44326\n2022-06-08T01:48:04.794+0000 I ACCESS [conn68435] Successfully authenticated as principal __system on local from client 172.16.134.149:35916\n2022-06-08T01:48:04.884+0000 I ACCESS [conn68435] Successfully authenticated as principal __system on local from client 172.16.134.149:35916\n2022-06-08T01:48:04.896+0000 I ACCESS [conn68435] Successfully authenticated as principal __system on local from client 172.16.134.149:35916\n2022-06-08T01:48:04.908+0000 I ACCESS [conn68435] Successfully authenticated as principal __system on local from client 172.16.134.149:35916\n2022-06-08T01:48:04.920+0000 I ACCESS [conn68435] Successfully authenticated as principal __system on local from client 172.16.134.149:35916\n2022-06-08T01:48:04.933+0000 I ACCESS [conn68435] Successfully authenticated as principal __system on local from client 172.16.134.149:35916\n2022-06-08T01:48:05.125+0000 I COMMAND [conn26646] command db1.products command: find { find: “products”, filter: { is_active: true, is_deleted: false }, sort: { created: -1 }, projection: {}, skip: 120100, limit: 100, returnKey: false, showRecordId: false, lsid: { id: UUID(“c0889d15-d2a2-4891-8f54-bf6d8e11d368”) }, $clusterTime: { clusterTime: Timestamp(1654652883, 8), signature: { hash: BinData(0, 13CB05DE002D356B635E7EB9CE4BBD2FD0D287A4), keyId: 7057813452781256841 } }, $db: “db1” } planSummary: IXSCAN { is_active: 1, created: -1 } keysExamined:120200 docsExamined:120200 cursorExhausted:1 numYields:939 nreturned:100 reslen:137342 locks:{ Global: { acquireCount: { r: 940 } }, Database: { acquireCount: { r: 940 } }, Collection: { acquireCount: { r: 940 } } } storage:{ data: { bytesRead: 83510, timeReadingMicros: 60 } } protocol:op_msg 204ms", "username": "barryd" }, { "code": "2022-06-08T01:48:03.315+0000 I COMMAND [conn26646] command db1.products command: find { find: “products”, filter: { is_active: true, is_deleted: false }, sort: { created: -1 }, projection: {}, skip: 119900, limit: 100, returnKey: false, showRecordId: false, lsid: { id: UUID(“c0889d15-d2a2-4891-8f54-bf6d8e11d368”) }, $clusterTime: { clusterTime: Timestamp(1654652882, 10), signature: { hash: BinData(0, 5DB339F8B7F72F5E1A38A2F261F3525DD1C33118), keyId: 7057813452781256841 } }, $db: “db1” } planSummary: IXSCAN { is_active: 1, created: -1 } keysExamined:120000 docsExamined:120000 cursorExhausted:1 numYields:938 nreturned:100 reslen:139555 locks:{ Global: { acquireCount: { r: 939 } }, Database: { acquireCount: { r: 939 } }, Collection: { acquireCount: { r: 939 } } } storage:{ data: { bytesRead: 14742, timeReadingMicros: 14 } } protocol:op_msg 219ms\n", "text": "Thanks for the logs. There are some things that caught my eye. Let me explain:What’s interesting about this line is that it’s a pagination query with skip/limit, with it trying to skip 119,900 documents and limit to 100. This is not an efficient query, since it needs to look at all 120,000 documents, throw away 119,900 of them, and only return 100.I found three instances of this query, at the interval of 1 per second. 
But I think this is only part of the problem.The other part is that I suspect you have too many databases & collections sitting in one server, since you mentioned that this is a multi-tenant database. WT does a checkpoint every 60 seconds, where it writes a consistent view of the database to disk. In order to do this, it needs to walk through all open collections open by one. With more collections & databases, the longer this will take. Thus, other operations may queue up behind a long checkpoint. Do you mind telling us how many databases, collections, and indexes you have in the deployment?Hence I believe you’re seeing the effect of WT-5479. This is a known issue, and it’s gradually made better in MongoDB 4.4 series, and also in MongoDB 5.0 series.So to mitigate the issue you’re seeing, I would recommend you to:Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks a lots for your information. We have prepared to upgrade to version 4.4 and keep watching the cluster. Here is some information about total databases, collections, indexes in one repl cluster deployment:\nTotal indexes on the server: 491161\nTotal dbs on the server: 3669\nTotal cols on the server: 88576After upgrade to 4.4 to improve the checkpoint locking time, but how can I determine the maximum limit number of databases, collections, indexes that can cause same issue in the future?", "username": "barryd" }, { "code": "mongod", "text": "how can I determine the maximum limit number of databases, collections, indexes that can cause same issue in the future?Unfortunately there’s no one number I can quote for this, as this will be wildly different between hardware, and the actual workload of the database. The checkpoint process involves writing modified data (inserted, modified, deleted, etc.) to disk, so if you have relatively few of these operations, having a lot of collections may not matter. In another case, you can have a relatively small number of busy collections that can result in long checkpoints.To make any number even less relevant, there’s also a consistent effort from MongoDB & WT to make this situation better and improve the server to handle more work.I would say, as a general rule, to keep an eye on the mongod logs, and regularly monitor them for any slow operations. If you see an increase in number of slow operations, it’s best to act to discover the root cause and mitigate the issue before it gets out of hand.Sorry I can’t be more precise than that. However, having ~90k collections and ~500,000 indexes is definitely a large number by any metric Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "I got it, thank you sir for all your helps.", "username": "barryd" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongod crash by memory peak
2022-05-30T10:33:54.484Z
Mongod crash by memory peak
6,052
null
[ "aggregation" ]
[ { "code": "{\n obj1: {foo: 'bar'}, \n obj2: {foo: 'baar'}\n}\n$eq", "text": "I am writing an aggregation pipeline, that will essentially have docs like this:Can I somehow write a query to get all the documents, where obj1 and obj2 don’t match?I tried with the $eq, but this operator does not seem to work.", "username": "Alex_Bjorlig" }, { "code": "obj1obj2foofoo", "text": "Does both obj1 and obj2 have only the foo as sub-property? Do you want to try to match only foo sub-property or all sub-properties in case of multiple sub-properties?", "username": "NeNaD" }, { "code": "foo$eq", "text": "The foo sub-property was only to make the example easy My objective is to test all sub-properties.I just found out that $eq actually works, but the order of keys inside an object is important!I’m investigating my options…", "username": "Alex_Bjorlig" }, { "code": "$objectToArray$setEqualsdb.collection.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$setEquals\": [\n {\n \"$objectToArray\": \"$object_1\"\n },\n {\n \"$objectToArray\": \"$object_2\"\n }\n ]\n }\n }\n }\n])\nobject$type$setEqualsobjectToArray$eqdb.collection.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$cond\": {\n if: {\n \"$and\": [\n {\n \"$eq\": [\n {\n \"$type\": \"$object_1\"\n },\n \"object\"\n ]\n },\n {\n \"$eq\": [\n {\n \"$type\": \"$object_2\"\n },\n \"object\"\n ]\n }\n ]\n },\n then: {\n \"$setEquals\": [\n {\n \"$objectToArray\": \"$object_1\"\n },\n {\n \"$objectToArray\": \"$object_2\"\n }\n ]\n },\n else: {\n \"$eq\": [\n \"$object_1\",\n \"$object_2\"\n ]\n }\n }\n }\n }\n }\n])\n", "text": "Here is how you can do it:Working exampleHere is the updated answer that would also work if one or both sub-properties are not of type object:Working Example", "username": "NeNaD" }, { "code": "", "text": "Thanks - it actually works ", "username": "Alex_Bjorlig" }, { "code": "$objectToArray", "text": "Hi @NeNaDWe are starting to see some issues when comparing complex objects, with nested arrays and objects. Is $objectToArray capable of working with nested objects - or is there some work-around?", "username": "Alex_Bjorlig" }, { "code": "$objectToArray", "text": "Hi @Alex_Bjorlig,I am not sure. Can you add example documents?If you have nested objects, maybe after $objectToArray it will have the same issue as when you just try to match 2 objects and keys are not in the same order. ", "username": "NeNaD" }, { "code": "{\n objectToCompare: {\n \"foo\": \"bar\",\n \"fooLevels\": {\n \"1\": \"level 1\",\n \"2\": \"level 2\",\n \"3\": \"level 3\"\n }\n }\n}\nfooLevels", "text": "Issues seem to arrive when the 2 objects to compare looks like this:But it actually seems like I can get it “to work” if I project fooLevels directly ", "username": "Alex_Bjorlig" }, { "code": "foo$objectToArray", "text": "It does not work because in this case foo is not an object, it’s a string. And $objectToArray throws an error because it expects input of type object.", "username": "NeNaD" }, { "code": "", "text": "I’m sorry for not being explicit about the data structure. I updated the example…", "username": "Alex_Bjorlig" }, { "code": "", "text": "Hi,I updated my answer. Can you check it?", "username": "NeNaD" } ]
Can I compare 2 objects on the same doc?
2022-06-09T10:27:41.909Z
Can I compare 2 objects on the same doc?
6,192
null
[]
[ { "code": " const { initializeApp } = require('firebase-admin');\n", "text": "I've noticed that function runtime can go from 200ms to 2.5s just by importing modules into a function from a custom dependency.An example is the firebase-admin module.Then adding the single import line to an empty function seems to have a severe impact on its performanceAre there any suggestions or workarounds to improve the performance?Thank you", "username": "Tyler_Collins" }, { "code": "", "text": "Hey Tyler,\nthank you for bringing this up, we're actually actively working on making these initialization times more consistent across functions. We have some pending work that we hope to put in the upcoming release which should already mitigate this. There's not really much you can do on your end at the moment but please let us know if you notice any other delay when using specific dependencies (especially firebase-admin since we added support for it not too long ago).", "username": "Gabriele_Cimato" } ]
Importing NPM modules significantly impacts performance
2022-06-14T13:50:35.623Z
Importing NPM modules significantly impacts performance
1,456
null
[ "aggregation", "dot-net", "java", "crud" ]
[ { "code": "this.productCollection.updateMany(Filters.empty(),\n combine(Updates.set(\"custom-attributes.custom-attribute-integer\", new BsonDocument(\"$toDecimal\", \"$custom-attributes.custom-attribute-integer\")),\n currentTimestamp(\"timestamp\")));\n", "text": "Hi!\nUsing the Mongo Sync Driver for Java, I am trying to change the data type in a field (from integer to decimal:But this syntax is not accepted. I have also tried by using an aggregation pipeline, with no result.\nBy using C# driver, this operation is more straightforward.How can I implement this operation by using the Java drivers?", "username": "catalin" }, { "code": " private static void transformingIntegersIntoDecimal128(MongoCollection<Document> coll) {\n Bson setDecimal128 = set(\"number\", new Document(\"$toDecimal\", \"$number\"));\n Bson setCurrentDate = set(\"currentDate\", new Date());\n UpdateResult updateResult = coll.updateMany(empty(), asList(combine(setDecimal128, setCurrentDate)));\n System.out.println(updateResult);\n }\nasListpackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.result.UpdateResult;\nimport org.bson.Document;\nimport org.bson.conversions.Bson;\nimport org.bson.json.JsonWriterSettings;\n\nimport java.util.Date;\nimport java.util.function.Consumer;\n\nimport static com.mongodb.client.model.Filters.empty;\nimport static com.mongodb.client.model.Updates.combine;\nimport static com.mongodb.client.model.Updates.set;\nimport static java.util.Arrays.asList;\n\npublic class Community {\n\n public static void main(String[] args) {\n String connectionString = System.getProperty(\"mongodb.uri\");\n try (MongoClient mongoClient = MongoClients.create(connectionString)) {\n MongoCollection<Document> coll = mongoClient.getDatabase(\"test\").getCollection(\"coll\");\n System.out.println(\"Dropping collection 'test.coll'\");\n coll.drop();\n System.out.println(\"Insert 2 sample docs...\");\n insertSampleDocs(coll);\n printDocs(coll);\n System.out.println(\"Transform integers into Decimal128...\");\n transformingIntegersIntoDecimal128(coll);\n printDocs(coll);\n }\n }\n\n private static void insertSampleDocs(MongoCollection<Document> coll) {\n coll.insertMany(asList(new Document(\"number\", 42), new Document(\"number\", 69)));\n }\n\n private static void transformingIntegersIntoDecimal128(MongoCollection<Document> coll) {\n Bson setDecimal128 = set(\"number\", new Document(\"$toDecimal\", \"$number\"));\n Bson setCurrentDate = set(\"currentDate\", new Date());\n UpdateResult updateResult = coll.updateMany(empty(), asList(combine(setDecimal128, setCurrentDate)));\n System.out.println(updateResult);\n }\n\n private static void printDocs(MongoCollection<Document> coll) {\n coll.find(empty()).forEach(printDocuments());\n }\n\n private static Consumer<Document> printDocuments() {\n return doc -> System.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));\n }\n}\nDropping collection 'test.coll'\nInsert 2 sample docs...\n{\n \"_id\": {\n \"$oid\": \"62a8ec50afc5b13f7c0ec2fe\"\n },\n \"number\": 42\n}\n{\n \"_id\": {\n \"$oid\": \"62a8ec50afc5b13f7c0ec2ff\"\n },\n \"number\": 69\n}\nTransform integers into Decimal128...\nAcknowledgedUpdateResult{matchedCount=2, modifiedCount=2, upsertedId=null}\n{\n \"_id\": {\n \"$oid\": \"62a8ec50afc5b13f7c0ec2fe\"\n },\n \"number\": {\n \"$numberDecimal\": \"42\"\n },\n \"currentDate\": {\n \"$date\": 
\"2022-06-14T20:15:12.644Z\"\n }\n}\n{\n \"_id\": {\n \"$oid\": \"62a8ec50afc5b13f7c0ec2ff\"\n },\n \"number\": {\n \"$numberDecimal\": \"69\"\n },\n \"currentDate\": {\n \"$date\": \"2022-06-14T20:15:12.644Z\"\n }\n}\n> db.coll.find()\n[\n {\n _id: ObjectId(\"62a8ec50afc5b13f7c0ec2fe\"),\n number: Decimal128(\"42\"),\n currentDate: ISODate(\"2022-06-14T20:15:12.644Z\")\n },\n {\n _id: ObjectId(\"62a8ec50afc5b13f7c0ec2ff\"),\n number: Decimal128(\"69\"),\n currentDate: ISODate(\"2022-06-14T20:15:12.644Z\")\n }\n]\n<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n<maven-compiler-plugin.source>8</maven-compiler-plugin.source>\n<maven-compiler-plugin.target>8</maven-compiler-plugin.target>\n<maven-compiler-plugin.version>3.10.1</maven-compiler-plugin.version>\n<mongodb-driver-sync.version>4.6.0</mongodb-driver-sync.version>\n<mongodb-crypt.version>1.4.0</mongodb-crypt.version>\n<logback-classic.version>1.2.11</logback-classic.version>\n", "text": "Hi @catalin and welcome in the MongoDB Community !Here you go:I think you are just missing the asList that’s changing the behaviour of the update from a “normal” update operation to an update that is using the aggregation pipeline which kinda changes everything here.Here is the operation with all its context in a simple proof of concept / example:Result in the console:Result in Mongosh:For this I’m using these version in Maven:See this pom.xml if you need.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[Java driver] Change data type for a field in a collection
2022-06-14T08:18:22.482Z
[Java driver] Change data type for a field in a collection
2,454
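The decisive detail in the answer above is that wrapping the update document in a list turns the call into an aggregation-pipeline update, which is what makes $toDecimal legal. For readers on other drivers, here is a minimal sketch of the same operation in PyMongo (collection and field names follow the example above; this is an illustration, not the thread's Java code):

```python
from pymongo import MongoClient

# Assumed local connection string; point this at your own deployment.
coll = MongoClient("mongodb://localhost:27017")["test"]["coll"]

# Passing a *list* makes this a pipeline update, so $toDecimal is allowed.
coll.update_many(
    {},  # empty filter: every document
    [{"$set": {"number": {"$toDecimal": "$number"}, "currentDate": "$$NOW"}}],
)
```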
null
[ "atlas", "api" ]
[ { "code": "", "text": "Using the below API I will get the list of all the snapshots but how will I get the snapshot id of the latest on-demand backup from that list?curl -X GET -i --digest -u “{PUBLIC-KEY}:{PRIVATE-KEY}” “https://cloud.mongodb.com/api/atlas/v1.0/groups/6c7498dg87d9e6526801572b/clusters/Cluster0/snapshots”", "username": "Vikas_Rathore" }, { "code": "description", "text": "Hi @Vikas_Rathore,When you create the snapshot, you send a description which is returned when you list the snapshots.Would that work if you choose a specific pattern and then filter on it?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "curl --user \"${{ secrets.PUBLIC_KEY }}:${{ secrets.PRIVATE_KEY }}\" \\ --digest --include \\ --header \"Accept: application/json\" \\ --header \"Content-Type: application/json\" \\ --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/${{ secrets.PROJECT_ID }}/clusters/cluster-1/backup/snapshots?pretty=true\" \\ --data '{ \"description\" : \"On Demand Snapshot\", \"retentionInDays\" : 3 }'", "text": "@MaBeuLux88 Thanks for helping.Could you provide an example of how I can add a filter? Also, I might have multiple snapshots with the same description. can’t we do sorting according to the date it got created?CODE=curl --user \"${{ secrets.PUBLIC_KEY }}:${{ secrets.PRIVATE_KEY }}\" \\ --digest --include \\ --header \"Accept: application/json\" \\ --header \"Content-Type: application/json\" \\ --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/${{ secrets.PROJECT_ID }}/clusters/cluster-1/backup/snapshots?pretty=true\" \\ --data '{ \"description\" : \"On Demand Snapshot\", \"retentionInDays\" : 3 }'", "username": "Vikas_Rathore" }, { "code": "created.date", "text": "Yes the date is provided. See the field created.date.You will have to implement the filter and the sort in your back-end code once the results are collected.Supporting these options would make the API way more complex to support filtering and sorting on all the fields.", "username": "MaBeuLux88" }, { "code": "", "text": "@MaBeuLux88 I am not using this in the backend code but I want to trigger it from a bash script in our CI/CD pipeline", "username": "Vikas_Rathore" }, { "code": "grepsortcut", "text": "Well it’s not the easiest language to process JSON but I think it’s doable. With some grep, sort, cut and a few other magic commands I guess you can achieve this without too much trouble.", "username": "MaBeuLux88" } ]
How to get the snapshot id of the last on-demand backup
2022-06-13T14:56:07.283Z
How to get the snapshot id of the last on-demand backup
3,406
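If grep/sort/cut gets awkward in the CI job, the same filter-and-sort can be done in a few lines of Python. A hedged sketch, reusing the endpoint and digest auth from the curl examples above, and assuming the list response exposes a results array whose entries carry the description and created.date fields mentioned in the replies (adjust the field names to whatever your cluster actually returns):

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders, matching the curl examples above.
BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
GROUP_ID = "<PROJECT-ID>"
CLUSTER = "Cluster0"
auth = HTTPDigestAuth("<PUBLIC-KEY>", "<PRIVATE-KEY>")

resp = requests.get(f"{BASE}/groups/{GROUP_ID}/clusters/{CLUSTER}/snapshots", auth=auth)
resp.raise_for_status()
snapshots = resp.json().get("results", [])

# Keep only the on-demand snapshots tagged via the description field,
# then pick the most recently created one.
mine = [s for s in snapshots if s.get("description") == "On Demand Snapshot"]
latest = max(mine, key=lambda s: s["created"]["date"])  # field name per the reply above
print(latest["id"])  # assumed to be the snapshot id field
```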
null
[ "crud" ]
[ { "code": "", "text": "Hi expertsI have a collection, where I need to upsert only on nested array and not on the documentdb.nesteddemo.insertOne({name:“toplevel”,nested:[{id:1,val:“1”,val2:“2”}]})db.nesteddemo.findOneAndUpdate({_id:ObjectId(“62a6a45bce71e57502b28344”)},{$addToSet:{nested:{id:2,val:2,val2:“2”}}})db.nesteddemo.updateOne(\n{_id:ObjectId(“62a6a45bce71e57502b28344”)},\n{ $set: { “nested.$[elem].val” : -1 } },\n{ arrayFilters: [ { “elem.id”: 3} ],upsert:true }\n)\nthe above are not working\nhere if I want to conditionally perform upsert operation on nested field “nested” based on nested.id fielddoes mongodb support this operation ?", "username": "Ravi_Tatikonda1" }, { "code": "", "text": "does mongodb support this operation ?MongoDB does. Your data does not. You do not have a nested object with id:3.", "username": "steevej" }, { "code": "", "text": "yes steve, I am looking for an upsert , if the nested doc doesnt exists it should add ,if nested doc exists it should update", "username": "Ravi_Tatikonda1" }, { "code": "\"nested.id\" : { \"$ne\" : 3 }", "text": "The arrayFilters is to match existing data you want to update.To implement your use-case you have to use $push or $addToSet and add something like\n\"nested.id\" : { \"$ne\" : 3 } in the query argument.", "username": "steevej" }, { "code": "", "text": "Steve, this will work for new inserts , but if the the document with id 3 exists, we may need to update the nested document, I am looking to see if I can acheive both in a single statment", "username": "Ravi_Tatikonda1" }, { "code": "", "text": "I have to clue on how to do that easily.But I would experiment with something like:", "username": "steevej" } ]
Ability to update nested field based on keys and upsert only document in nested field
2022-06-13T02:55:30.935Z
Ability to update nested field based on keys and upsert only document in nested field
3,657
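To make the suggestion at the end of the thread concrete, here is a hedged PyMongo sketch (the thread itself uses mongosh, but the update documents have the same shape): first try to update the matching array element in place, and only push a new element when nothing matched. Note that this is two statements rather than one, and the pair is not atomic.

```python
from pymongo import MongoClient

coll = MongoClient()["test"]["nesteddemo"]

doc_filter = {"name": "toplevel"}           # however you identify the parent document
target = {"id": 3, "val": -1, "val2": "2"}  # element we want present in "nested"

# Step 1: if an element with this id already exists, update it in place.
res = coll.update_one(
    {**doc_filter, "nested.id": target["id"]},
    {"$set": {"nested.$.val": target["val"]}},
)

# Step 2: nothing matched, so append the element; the $ne guard avoids a
# duplicate if another writer slipped it in between the two statements.
if res.matched_count == 0:
    coll.update_one(
        {**doc_filter, "nested.id": {"$ne": target["id"]}},
        {"$push": {"nested": target}},
    )
```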
null
[ "atlas-cluster", "database-tools", "backup" ]
[ { "code": "", "text": "Hello,I have a mongodb in a ubuntu machine in digitalOcean, and I am trying to restore it to MongoDB Cloud cluster. I downloaded the dump locally in my machine(Mac) and was able to restore to the MongoDB Cloud cluster. If I repeat the same process from the Ubuntu machine I does not work.I am using:\nmongorestore --uri mongodb+srv://userd(http://cluster0.xxxx.mongodb.net)I get :lookup cluster0.xxxx.mongodb.net on 127.0.0.53:53: cannot unmarshal DNS messageI read the recommendations in this forum already, nothing seems to help.thanks.", "username": "Daisy_Etm" }, { "code": "", "text": "It is hard to help when addresses are redacted to replace part of the cluster name with xxxx.Because we cannot test if the addresses is legitimate or if the DNS error is.If you really have xxxx in your cluster name then the DNS error is legitimate because that could not be a valid cluster name.I also do not recognized the syntax userd(http://cluster0.xxxx.mongodb.net), may be it is a Mac only thing that is able to fetch your credentials based from the http: name you specify. You might try to manually put the credentials in the URI.", "username": "steevej" } ]
Can't restore to MongoDB Cloud from Ubuntu in Digital Ocean
2022-06-14T07:47:44.237Z
Can&rsquo;t restore to MongoDB Cloud from Ubuntu in Digital Ocean
1,718
null
[ "migration" ]
[ { "code": "", "text": "In the app I am working on, we are downloading a database from a partner’s API (documents with unique IDs), parsing it into JSON format, and storing the data in our MongoDB collection. Periodically, since updates may be made in the partner’s database, we download their database again to update our collection in MongoDB. Upserting the documents from their database to our collection will handle the cases where the data entry is the same as before (we just write over the existing document with the same ID) or a new one has been added on their side (we create a new document in our collection).\nBut what is the best practice to handle the case where they have removed data entries on their side? How does one check the ID’s of documents in the source MongoDB collection against the target database to see what is now missing/absent on the source side compared to what we have in our target Mongo collection, so that we know which ones to remove on our side as well?", "username": "Francesca_Ricci-Tam" }, { "code": "", "text": "What I would try is:If unsatisfactory I will experiment with using a temporary collection in the target server to hold the new source dataset and $merge. I am still unsure if that can work. May be something like a $merge from new temporary data to target that merge each document, and then another $merge from the target into the new temporary using discard. But not sure.But the big question is why don’t you simply wipe out your old copy to replace it with the new copy.", "username": "steevej" } ]
Comparing target and source datasets to check for removed data in source
2022-06-09T18:37:15.237Z
Comparing target and source datasets to check for removed data in source
2,568
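One way to implement the comparison the question asks about, sketched in PyMongo under the assumption that both id sets fit comfortably in memory: diff the _id values of the freshly downloaded source data against the target collection and delete whatever no longer exists upstream. The collection and variable names here are made up for illustration, and this is not necessarily what the reply's (elided) list of steps had in mind.

```python
from pymongo import MongoClient

target = MongoClient()["app"]["partner_data"]  # hypothetical target collection

# The records you already download and parse from the partner API,
# each carrying the unique id you store as _id in MongoDB.
downloaded_records = []  # placeholder
source_ids = {rec["id"] for rec in downloaded_records}

# The upsert step happens elsewhere; here we only remove documents that
# the partner has deleted on their side.
stale_ids = set(target.distinct("_id")) - source_ids
if stale_ids:
    target.delete_many({"_id": {"$in": list(stale_ids)}})
```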
null
[ "node-js", "typescript" ]
[ { "code": "", "text": "How can I get the typescript types of a collection, mainly not just for type checking but to leverage the benefits of the editor’s intellisense without forcing me to rewrite the entire schema in typescript? I’m using not just Atlas but also realm mongodb. Is there a tool out there that can help me build the types from the schema files of a realm app?Thanks!", "username": "Demian_Caldelas1" }, { "code": "", "text": "Hello @Demian_Caldelas1, you can use the Realm Object Models inside the Realm SDKs menu. Just make sure to select Typescript in the language selector (on the top right side of the screen).", "username": "Diego_Jose_Goulart" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Generate typescript types from the schema files of a realm app?
2022-06-14T10:22:07.970Z
Generate typescript types from the schema files of a realm app?
3,450
null
[ "queries" ]
[ { "code": "db.companies.find({\"relationships.person.first_name\":\"Mark\",\"relationships.is_past\":true}).count()\n\ndb.companies.find({ \"relationships\":\n { \"$elemMatch\": { \"is_past\": true,\n \"person.first_name\": \"Mark\" } } },\n { \"name\": 1 }).count()\n", "text": "This query I wrote returns the output of 448whereas the query that given in the course outputs 256\nWhat am I doing wrong? Aren’t this queries supposed to return the same output?Thanks in Advance!!!", "username": "kesav_kumar" }, { "code": "", "text": "Aren’t this queries supposed to return the same output?No. They are not the same. The first one matches documents that have one element of the array relationship to hane a person with the given name and one element with is_past true. The elements do not need to be the same. In the second one both conditions must be true for the same element.", "username": "steevej" }, { "code": "db.companies.find({\"relationships.person.first_name\":\"Mark\",\"relationships.person.last_name\":\"Zuckerberg\"}).count()\n", "text": "I am sorry I couldn’t completely grasp what you are saying. My understanding of what u said is that in my query the find function returns the documents that satisfies at least one condition whereas in the course query it checks for both the conditions.\nIf that’s what you meant , I think the find function implicitly uses the AND operator so both the condition must be true. To verify in the shell I tried the query below and got the desired output.The above code return only 1 document.If this query takes all the documents that satisfices at least one condition it should return 523 documents as there are so many CEO with the first name “Mark” but only 1 that has the specified last name too.Thanks for the response @steevej ", "username": "kesav_kumar" }, { "code": "", "text": "at least one conditionNot what I wrote. Both conditions must be true in the two cases. In the first case, they do not have to be true for the same element of the array. For the second case, they have to be true for the same arrau element.", "username": "steevej" }, { "code": "", "text": "Would you mind explaining with some examples. Sharing some references/links to look upon would be a great help too!", "username": "kesav_kumar" }, { "code": "", "text": "It is always best to refer to documentation. See", "username": "steevej" } ]
M001:Chapter 4:Array Operators and Subdocuments
2022-06-11T11:34:57.217Z
M001:Chapter 4:Array Operators and Subdocuments
1,843
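A small, hedged repro of the difference described above, written with PyMongo rather than mongosh (the filters are identical). The sample document matches the plain dot-notation query because each condition is satisfied by some element, but not the $elemMatch query, because no single element satisfies both:

```python
from pymongo import MongoClient

coll = MongoClient()["test"]["companies_demo"]  # throwaway collection for the experiment
coll.drop()
coll.insert_one({
    "relationships": [
        {"is_past": True, "person": {"first_name": "Alice"}},
        {"is_past": False, "person": {"first_name": "Mark"}},
    ]
})

# Conditions may be met by *different* array elements -> counts this document.
print(coll.count_documents({
    "relationships.person.first_name": "Mark",
    "relationships.is_past": True,
}))  # prints 1

# Both conditions must hold on the *same* element -> does not count it.
print(coll.count_documents({
    "relationships": {"$elemMatch": {"is_past": True, "person.first_name": "Mark"}}
}))  # prints 0
```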
null
[ "python", "change-streams", "spark-connector" ]
[ { "code": "spark.readspark.readStreamstartingOffsetsearliest# pyspark shell\n\naccount_df = spark.readStream.format(\"mongodb\") \\\n\t\t.option('spark.mongodb.connection.uri', 'mongodb://<HOST>:27017') \\\n \t.option('spark.mongodb.database', 'algorand') \\\n \t.option('spark.mongodb.collection', 'account') \\\n\t\t.option('spark.mongodb.change.stream.publish.full.document.only','true') \\\n \t.option(\"forceDeleteTempCheckpointLocation\", \"true\") \\\n \t.load()\n\nres = account_df.writeStream \\\n .outputMode(\"append\") \\\n .option(\"forceDeleteTempCheckpointLocation\", \"true\") \\\n .format(\"console\") \\\n .trigger(continuous=\"1 second\") \\\n .start().awaitTermination()\n\n# returns empty dataframes\n", "text": "Hi community.I’m experimenting with the new Spark Connector 10.0.2. With the current setup, I’m ingesting data with the MongoDB Kafka sink connector into collections. From there, I would like to further process the data with Spark Structured Streaming.I can successfully batch read the existing data with spark.read. But I was hoping that I could also fetch the existing data with spark.readStream in addition to newly inserted data. However, I only get the data that is inserted after opening the stream with Spark.I have enabled replication on my instance before ingesting any data, but this didn’t change the outcome. Is there a way to achieve this? For example, with Kafka I can define the\nstartingOffsets as earliest.Here is my current code:", "username": "Abasch_Kim" }, { "code": "", "text": "We have a Jira to track the ability to copy existing data. https://jira.mongodb.org/projects/SPARK/issues/SPARK-303Feel free to track and comment.Thanks,\nRob", "username": "Robert_Walters" }, { "code": "", "text": "Hi Rob, thanks for the information! Looking forward to this feature.\nDo I understand this correctly: to be able to use Spark Structured Streaming, replication on the MongoDB instance is needed to enable change streams?", "username": "Abasch_Kim" }, { "code": "", "text": "Change streams requires a MongoDB cluster so yes, that said you can theoretically create a one node cluster if you want to test it.", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Spark Connector 10.0.2 - Read Existing Data as Stream
2022-06-11T20:49:41.499Z
MongoDB Spark Connector 10.0.2 - Read Existing Data as Stream
3,407
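Until the copy-existing-data feature tracked in SPARK-303 lands, a common workaround is to process what is already in the collection with a one-off batch read and let the stream handle only documents written afterwards. A hedged sketch, reusing the option names from the streaming example above (adjust them if your batch read used different ones); handling any overlap or duplicates around the switchover is left to your pipeline:

```python
# One-off batch read of everything already in the collection.
existing_df = spark.read.format("mongodb") \
    .option("spark.mongodb.connection.uri", "mongodb://<HOST>:27017") \
    .option("spark.mongodb.database", "algorand") \
    .option("spark.mongodb.collection", "account") \
    .load()

existing_df.show()  # or write it to whatever sink the stream feeds

# Then start the readStream / writeStream pair exactly as in the snippet
# above; it will pick up changes from this point onward.
```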
null
[ "golang" ]
[ { "code": "[\n {\n \"_id\": \"hoge\",\n \"companyname\": \"ABC\",\n \"empoloyees\": [\n {\n \"name\": \"John Doe\",\n \"email\": \"[email protected]\"\n },\n {\n \"name\": \"Jone Doe\",\n \"email\": \"[email protected]\"\n }\n ]\n },\n {\n \"_id\": \"fuga\",\n \"companyname\": \"XYZ\",\n \"empoloyees\": [\n {\n \"name\": \"Tom Cat\",\n \"email\": \"[email protected]\"\n },\n {\n \"name\": \"Dazy Dog\",\n \"email\": \"[email protected]\"\n }\n ]\n }\n]\n", "text": "Hi there,I have a golang project, which uses mongo driver (“go.mongodb.org/mongo-driver/mongo”).\nMy collections live in Atlas free tier cluster. They have a collection like below.\nI want to filter documents by company name and get elements in a specific index range of the nested array “employees”. (For example, getting only first and second employees who works for ABC company)\nHow can I do that by Mongo query and golang driver?", "username": "Ryo_Koezuka" }, { "code": "$slice", "text": "Hello @Ryo_Koezuka, welcome to the MongoDB Community forum!You can use the $slice projection operator to limit the number of elements in the query result. Here is an example post from this forum: Limit the items returned from array", "username": "Prasad_Saya" }, { "code": "", "text": "Hello @Prasad_Saya , thx for reply!\nActually, I tried slice projection in golang some days ago, but I saw a error saying something like Mongo Atlas free tier users cannot use slice projection…\nIs that expected behavior?", "username": "Ryo_Koezuka" }, { "code": "", "text": "I saw a error saying something like Mongo Atlas free tier users cannot use slice projection…\nIs that expected behavior?I don’t know. You can refer to this for clarification:", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya\nOK, Thanks a lot!", "username": "Ryo_Koezuka" }, { "code": "", "text": "There are not reasons why such a simple operation as $slice is not supported on free tier.Most likely you are not using it correctly.Share your code and we will be able to pin point your error.", "username": "steevej" } ]
Getting elements in a specfic index range of nested array
2022-06-01T03:37:29.672Z
Getting elements in a specfic index range of nested array
2,738
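For completeness, the projection suggested above is plain MQL, so it looks the same in every driver. A hedged PyMongo sketch of the $slice projection (the question is about the Go driver, but the filter and projection documents have exactly the same shape there); the database and collection names are assumptions, and the array field name is kept as spelled in the sample documents:

```python
from pymongo import MongoClient

coll = MongoClient()["test"]["companies"]  # hypothetical database/collection names

doc = coll.find_one(
    {"companyname": "ABC"},
    {"companyname": 1, "empoloyees": {"$slice": [0, 2]}},  # first two array elements
)
print(doc)
```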
null
[ "queries" ]
[ { "code": "", "text": "Hello community,\nMongoDB server version: 4.0.27\nI am trying to connect prometheus-exporter’s container with mongoDB container but facing issue in SSL options within connection string.\nmongodb-exporter: GitHub - percona/mongodb_exporter: A Prometheus exporter for MongoDB including sharding, replication and storage engines\nConnection string: //flexsnap-mongodb/?ssl=true&sslclientcertificatekeyfile=\"cert key pem\"mongodb.cert.pem&sslinsecure=true&sslcertificateauthorityfile=\n“ca cert path”\nI am on Ubuntu 16.04", "username": "Yash_Chavan" }, { "code": "mongodb-exporter", "text": "Hey @Yash_Chavan,Welcome to the MongoDB Community Forums! Which version of mongodb-exporter are you using? Also, can you please send the full error statement/logs that you are getting for us to see and help you out?Regards,\nSatyam", "username": "Satyam" } ]
connection error: ssl certification option
2022-06-01T06:56:02.937Z
connection error: ssl certification option
1,487
null
[]
[ { "code": "", "text": "My question is: I have a query with the mongodb find () method. I’m fetching all my users that match the features I specified in the query. As part of my work, I used the k-mean algorithm on my server side to show users returning from mongodb in a clustered fashion. As the number of Acnak users increases, there will be delays in mongodb. To overcome this delay, I want to apply the K-mean algorithm to the last part of the Mongo answer. So users come to me clustered.I am currently solving this problem temporarily in nodejs. I want to integrate it directly into mongodb for hanging. I have a k-mean algorithm written in JavaScript. I’ll translate this directly to c ++ and integrate it into mongo. But where is the last answer in mongodb?I was wondering if anyone knows the exact source code. Where should I add the K-mean algorithm to the final answer section in Mongod?", "username": "Aytac_Abay" }, { "code": "", "text": "Merhaba Aytaç,\nI created a ticket to track this work here. I’d expect us to take it on when we do a general push for in-database distributed machine learning along with other popular algorithms.I am guessing you’d like to do this in the database either because you would like to reduce the data before returning to the client (e.g. return per cluster summaries instead of each individual customer) or expect performance gains from running in the database, even though you still plan on returning a list of all the customers. Also do you intend to run clustering across all users (making it possible to run it once and use it multiple times) or on different subsets (e.g. viewer A could be looking at customers in Region X, while viewer B looking at Region Y and you want to rerun clustering just for X or Y)? Would you mind clarifying?In the meantime have you considered precomputing centroids using k-means outside MongoDB (e.g. R, Python) then computing squared Euclidian distances between the points and centroids (values from R/Python hardcoded into the query) in MQL then assigning each point to the nearest centroid? That would be a non-trivial MQL query but might possibly help. If you’re rendering all customers it might be easier to do this in Javascript after retrieving the data from MongoDB. Since neighborhood assignment could be done in a single pass vs k-means which takes numerous iterations, that would cut your run time by few orders of magnitude at least. It would also make the results deterministic which you wouldn’t get from k-means with random seed and unless there is a significant change in user count, centroids wouldn’t move much so you would still get meaningful results.Selamlar,Bora", "username": "Bora_Beran" }, { "code": "", "text": "Not sure what happened to the other thread about KDBush but if I remember your last message correctly is it safe to say what you’d like to get is the ID for the grid cell that KDBush index created because you treat those as clusters?I think in many scenarios an algorithm like DBSCAN would be more suitable for spatial clustering but if the goal is to get a cell id, what are your thoughts on S2CellId?", "username": "Bora_Beran" }, { "code": "", "text": "Kdbush is rapidly processing 1m data within milliseconds at all zoom values. It is very fast, but the only problem is it works well on static data. I will try to do it dynamically. Then I tried a method with Mongodb aggregate on over 200,000 people, but the room works slowly. I’ve seen a few methods of postgis and I’ll try them. 
S2CellId I will investigate soon.", "username": "Aytac_Abay" }, { "code": "", "text": "Bora, If you have any other ideas, don’t hesitate to tell them ", "username": "Aytac_Abay" }, { "code": "", "text": "", "username": "Stennie_X" } ]
I want to add the kdBush Algorithm
2021-02-23T18:43:05.321Z
I want to add the kdBush Algorithm
5,023
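To make the "precompute centroids offline, then only assign points to the nearest centroid" idea concrete, here is a hedged NumPy sketch. The centroid values and the lat/lng field names are invented for illustration; in practice the centroids would come from a k-means run in scikit-learn or R as described above, and the assignment is a single pass over the points:

```python
import numpy as np

# Centroids precomputed offline (e.g. scikit-learn KMeans on yesterday's data).
centroids = np.array([[41.01, 28.97], [39.93, 32.85], [38.42, 27.14]])

# Points fetched from MongoDB; assume each user document exposes "lat" and "lng".
users = [{"lat": 41.0, "lng": 29.0}, {"lat": 39.9, "lng": 32.8}]
points = np.array([[u["lat"], u["lng"]] for u in users])

# Squared Euclidean distance from every point to every centroid, then argmin.
d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
labels = d2.argmin(axis=1)  # cluster id per user, computed in one pass
print(labels)
```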
null
[ "aggregation" ]
[ { "code": "const dashboard = await AdminDataModel.aggregate([\n {\n $match: {\n ...keyword\n }\n },\n {\n $lookup: {\n from: \"saveData\",\n let: {\n cadmin_id: \"$_id\" \n },\n\n pipeline: [\n {\n $match: {\n $expr: {\n $eq:\n [\"$c_id\", \"$$cadmin_id\"] \n }\n }\n },\n ],\n as: \"sadmin\"\n }\n },\n {\n $lookup: {\n from: \"users\",\n let: {\n cadmin_id: \"$_id\"\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq:\n [\"$c_id\", \"$$cadmin_id\"]\n }\n }\n }\n ],\n as: \"Datauser\"\n }\n },\n\n {\n $lookup: {\n from: \"workflows\",\n let: {\n cadmin_id: \"$_id\"\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq:\n [\"$c_id\", \"$$cadmin_id\"]\n }\n }\n }\n ],\n as: \"workflow\"\n }\n }, \n ])\n", "text": "When i run this code on Azure cosmos it show an error\n\" MongoServerError: let not supported \"", "username": "Avish_Pratap_Singh" }, { "code": "", "text": "As far as I know Cosmos will always be behind feature wise compared to local instance or Atlas.In the code you shared you do not need the let:/pipeline: feature.All your $lookup stages could forgo the let:/pipeline: and simply use localField:_id and foreignField:c_id.", "username": "steevej" }, { "code": " from: \"sadmins\",\n\n let: {\n\n cadmin_id: \"$_id\"\n\n },\n\n pipeline: [\n\n {\n\n $match: {\n\n $expr: {\n\n $eq:\n\n [\"$c_id\", \"$$cadmin_id\"]\n\n }\n\n }\n\n },\n\n {\n\n $match: {\n\n ...keyword\n\n }\n\n },\n\n {\n\n $lookup: {\n\n from: \"users\",\n\n let: { sadmin_id: \"$_id\" },\n\n pipeline: [\n\n {\n\n $match: {\n\n $expr: {\n\n $eq:\n\n [\"$s_id\", \"$$sadmin_id\"]\n\n }\n\n }\n\n }\n\n ],\n\n as: \"user\"\n\n }\n\n },\n\n \n\n {\n\n $lookup: {\n\n from: \"workflows\",\n\n let: {sadmin_id: \"$_id\"},\n\n pipeline: [\n\n {\n\n $match: {\n\n $expr: {\n\n $eq:\n\n [\"$s_id\", \"$$sadmin_id\"]\n\n }\n\n }\n\n }\n\n ],\n\n as: \"workflow\"\n\n }\n\n },\n\n \n\n {\n\n $project: {\n\n _id: 1, user_name: 1, email: 1, phone: 1, logo: 1, sadmin: 1, address: 1, state: 1, country: 1, pin: 1,\n\n loginValue:1,profilePath:1,total_user: { $size: \"$user\" },\n\n total_workflow: { $size: \"$workflow\" }\n\n }\n\n }\n\n ],\n\n as: \"sadmin\"\n\n },", "text": "Hello Sir,\nyour previous answer solve my 1 one problem\nPlease again help… if we have let/pipeline inside $lookup which is inside pipeline\ncode is below: what should i dow\n$lookup: {", "username": "Avish_Pratap_Singh" }, { "code": "", "text": "As mentioned in my previous postAll your $lookup stages could forgo the let:/pipeline: and simply use localField:_id and foreignField:c_id .Look at the examples from the $lookup documentation that use localField and foreignField.", "username": "steevej" }, { "code": "$lookup$lookup$lookuplet is not supported$lookupletpipeline", "text": "Hi @Avish_Pratap_Singh,Cosmos’s MongoDB API is an independently developed emulation of a subset of MongoDB server features. Some features (like $lookup) are significantly behind their genuine counterparts in MongoDB server releases and others are missing entirely.The Cosmos API version 4.2 documentation notes this incomplete $lookup implementation:The $lookup aggregation does not yet support the uncorrelated subqueries feature introduced in server version 3.6. You will receive an error with a message containing let is not supported if you attempt to use the $lookup operator with let and pipeline fields.MongoDB 3.6 was first released in 2017. 
If you want to use modern MongoDB features on Azure, I recommend looking into MongoDB Atlas on Azure: How To Run MongoDB Atlas On Microsoft Azure | MongoDB.Even if you decide you prefer to use Cosmos’ API, you can use the Atlas free tier to compare behaviour with a genuine MongoDB server to determine whether an issue might be due to differing server implementations rather than your usage or syntax.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoServerError: let not supported database is on Azure
2022-06-08T12:13:47.579Z
MongoServerError: let not supported database is on Azure
3,349
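For anyone hitting the same error on Cosmos: the rewrite suggested above amounts to replacing each let/pipeline $lookup with the localField/foreignField form. A hedged sketch of the reshaped pipeline, using the collection names and output field names from the original code (written here as plain pipeline documents in Python; the stage shapes are identical in the Node.js driver the thread uses):

```python
keyword_match = {}  # whatever the ...keyword filter expands to in the original code

pipeline = [
    {"$match": keyword_match},
    {"$lookup": {"from": "saveData", "localField": "_id", "foreignField": "c_id", "as": "sadmin"}},
    {"$lookup": {"from": "users", "localField": "_id", "foreignField": "c_id", "as": "Datauser"}},
    {"$lookup": {"from": "workflows", "localField": "_id", "foreignField": "c_id", "as": "workflow"}},
]

# e.g. with PyMongo: results = list(db["adminData"].aggregate(pipeline))
# ("adminData" is a hypothetical collection name standing in for AdminDataModel)
```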
null
[ "atlas-device-sync", "performance" ]
[ { "code": "SYNC: [2] Realm sync client ([realm-core-10.1.4], [realm-sync-10.1.5])\nSYNC: [2] Supported protocol versions: 2-2\nSYNC: [2] Platform: macOS Darwin 20.2.0 Darwin Kernel Version 20.2.0: Wed Dec 2 20:40:21 PST 2020; root:xnu-7195.60.75~1/RELEASE_ARM64_T8101 x86_64\nSYNC: [2] Build mode: Release\nSYNC: [2] Config param: max_open_files = 256\nSYNC: [2] Config param: one_connection_per_session = 1\nSYNC: [2] Config param: connect_timeout = 120000 ms\nSYNC: [2] Config param: connection_linger_time = 30000 ms\nSYNC: [2] Config param: ping_keepalive_period = 60000 ms\nSYNC: [2] Config param: pong_keepalive_timeout = 120000 ms\nSYNC: [2] Config param: fast_reconnect_limit = 60000 ms\nSYNC: [2] Config param: disable_upload_compaction = 0\nSYNC: [2] Config param: tcp_no_delay = 0\nSYNC: [2] Config param: disable_sync_to_disk = 0\nSYNC: [2] User agent string: 'RealmSync/10.1.5 (macOS Darwin 20.2.0 Darwin Kernel Version 20.2.0: Wed Dec 2 20:40:21 PST 2020; root:xnu-7195.60.75~1/RELEASE_ARM64_T8101 x86_64) RealmJS/10.1.2 (node.js, darwin, vv12.20.0) '\nSYNC: [2] Connection[1]: WebSocket::Websocket()\nSYNC: [3] Connection[1]: Session[1]: Binding '/Users/duncangroenewald/Development/RealmMigrationMongoDB/mongodb-realm/makespace-development-qkudm/5fffc37714c08f5936e29aef/s_default.realm' to '\"default\"'\nSYNC: [2] Connection[1]: Session[1]: Activating\nSYNC: [4] Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false\nSYNC: [2] Opening Realm file: /Users/duncangroenewald/Development/RealmMigrationMongoDB/mongodb-realm/makespace-development-qkudm/5fffc37714c08f5936e29aef/s_default.realm\nSYNC: [2] Connection[1]: Session[1]: client_file_ident = 1, client_file_ident_salt = 301361383067496991\nSYNC: [1] Connection[1]: Session[1]: last_version_available = 350\nSYNC: [1] Connection[1]: Session[1]: progress_server_version = 68\nSYNC: [1] Connection[1]: Session[1]: progress_client_version = 50\nSYNC: [1] Using already open Realm file: /Users/duncangroenewald/Development/RealmMigrationMongoDB/mongodb-realm/makespace-development-qkudm/5fffc37714c08f5936e29aef/s_default.realm\nSYNC: [2] Connection[1]: Session[1]: Progress handler called, downloaded = 3670, downloadable(total) = 3670, uploaded = 15004029, uploadable = 79150774, reliable_download_progress = 0, snapshot version = 350\nSYNC: [3] Connection[1]: Resolving 'ws.realm.mongodb.com:443'\nSYNC: [3] Connection[1]: Connecting to endpoint '52.64.157.195:443' (1/1)\nSYNC: [4] Connection[1]: Connected to endpoint '52.64.157.195:443' (from '10.0.1.171:50581')\nSYNC: [2] Connection[1]: WebSocket::initiate_client_handshake()\nSYNC: [1] Connection[1]: HTTP request =\nGET /api/client/v2.0/app/makespace-development-qkudm/realm-sync HTTP/1.1\nHost: ws.realm.mongodb.com\nAuthorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2RldmljZV9pZCI6IjVmZmZmMDJmNTczNGRmNWQyNzQ2NGY4OSIsImJhYXNfZG9tYWluX2lkIjoiNWZmZmMyZjA2YzEwMjRmZmU3ZDIzYjYwIiwiZXhwIjoxNjEwNjEwNDg4LCJpYXQiOjE2MTA2MDg2ODcsImlzcyI6IjVmZmZmMDJmNTczNGRmNWQyNzQ2NGY5YiIsInN0aXRjaF9kZXZJZCI6IjVmZmZmMDJmNTczNGRmNWQyNzQ2NGY4OSIsInN0aXRjaF9kb21haW5JZCI6IjVmZmZjMmYwNmMxMDI0ZmZlN2QyM2I2MCIsInN1YiI6IjVmZmZjMzc3MTRjMDhmNTkzNmUyOWFlZiIsInR5cCI6ImFjY2VzcyJ9.VzDFU9zouyEqEYAPCRkZOga4KYnfjOwhKhI4_hQF6B0\nConnection: Upgrade\nSec-WebSocket-Key: hSprx0sKupZR/g39vOZkmw==\nSec-WebSocket-Protocol: com.mongodb.realm-sync/2\nSec-WebSocket-Version: 13\nUpgrade: websocket\nUser-Agent: RealmSync/10.1.5 (macOS Darwin 20.2.0 Darwin Kernel Version 20.2.0: Wed 
Dec 2 20:40:21 PST 2020; root:xnu-7195.60.75~1/RELEASE_ARM64_T8101 x86_64) RealmJS/10.1.2 (node.js, darwin, vv12.20.0)\n\n\nSYNC: [2] Connection[1]: WebSocket::handle_http_response_received()\nSYNC: [1] Connection[1]: HTTP response = HTTP/1.1 101 Switching Protocols\ncache-control: no-cache, no-store, must-revalidate\nconnection: Upgrade\ndate: Thu, 14 Jan 2021 07:18:08 GMT\nsec-websocket-accept: I7CDbiLyq7SKRXjK9JaK8hIaMxk=\nsec-websocket-protocol: com.mongodb.realm-sync/2\nserver: envoy\nupgrade: websocket\nvary: Origin\nx-frame-options: DENY\n\n\nSYNC: [3] Connection[1]: Negotiated protocol version: 2\nSYNC: [2] Connection[1]: Will emit a ping in 48083 milliseconds\nSYNC: [2] Connection[1]: Session[1]: Sending: BIND(path='\"default\"', signed_user_token_size=469, need_client_file_ident=0, is_subserver=0)\nSYNC: [2] Connection[1]: Session[1]: Sending: IDENT(client_file_ident=1, client_file_ident_salt=301361383067496991, scan_server_version=68, scan_client_version=50, latest_server_version=68, latest_server_version_salt=7290643098720209980)\nSYNC: [2] Connection[1]: Session[1]: Sending: MARK(request_ident=2)\nSYNC: [1] Connection[1]: Download message compression: is_body_compressed = 1, compressed_body_size=484573, uncompressed_body_size=1491642\nSYNC: [1] Connection[1]: Received: DOWNLOAD CHANGESET(server_version=75, client_version=52, origin_timestamp=190527672689, origin_file_ident=3, original_changeset_size=610, changeset_size=610)\nSYNC: [1] Connection[1]: Changeset: 3F 00 07 41 70 70 55 73 65 72 3F 01 24 33 35 35 35 42 44 44 34 2D 43 39 45 38 2D 34 35 38 34 2D 39 30 30 42 2D 46 46 35 39 37 35 46 37 35 38 35 33 3F 02 0D 5F 33 36 36 34 30 30 32 31 43 35 3F...\nSYNC: [1] Connection[1]: Received: DOWNLOAD CHANGESET(server_version=77, client_version=52, origin_timestamp=190527672928, origin_file_ident=3, original_changeset_size=266399, changeset_size=266399)\nSYNC: [1] Connection[1]: Changeset: 3F 00 0A 41 73 73 6F 72 74 6D 65 6E 74 3F 01 24 33 39 46 39 39 37 35 32 2D 33 41 45 41 2D 34 31 44 32 2D 41 44 30 43 2D 37 42 33 33 34 32 36 33 41 43 30 44 3F 2D 39 42 38 44 2D 34 43 30 35 2D 39 42 43 34 2D 42 41 46 41 37 36 34 34 39 46 34 42 3F 23 24 32 44 42 44 43 38 33 30 2D 43 36 38...\nSYNC: [1] Connection[1]: Received: DOWNLOAD CHANGESET(server_version=78, client_version=52, origin_timestamp=190527674683, origin_file_ident=3, original_changeset_size=1217095, changeset_size=1217095)\nSYNC: [1] Connection[1]: Changeset: 3F 00 0E 41 73 73 6F 72 74 6D 65 6E 74 49 74 65 6D 3F 01 24 45 44 37 45 33 30 38 42 2D 41 42 36 46 2D 34 38 36 41 2D 38 42 33 43 2D 45 46 37 33 33 38 38 44 39 6 38 2D 39 36 38 38 2D 45 30 43 36 38 30 42 38 44 39 45 46 3F 27 24 41 39 42 43 30 37 31 38 2D 38 35 34 41 2D 34 31 36 41 2D 42...\nSYNC: [2] Connection[1]: Session[1]: Received: DOWNLOAD(download_server_version=78, download_client_version=52, latest_server_version=78, latest_server_version_salt=2596012816928273966, upload_client_version=52, upload_server_version=0, downloadable_bytes=0, num_changesets=4, ...)\nSYNC: [1] Using already open Realm file: /Users/duncangroenewald/Development/RealmMigrationMongoDB/mongodb-realm/makespace-development-qkudm/5fffc37714c08f5936e29aef/s_default.realm\nSYNC: [1] Connection[1]: Session[1]: Scanning incoming changeset [1/4] (32 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning incoming changeset [2/4] (486 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning incoming changeset [3/4] (18970 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning incoming 
changeset [4/4] (84042 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [1/192] (5652 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [2/192] (20010 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [3/192] (20010 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [4/192] (20010 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [5/192] (20010 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [6/192] (20010 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [7/192] (12920 instructions)\n...\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [190/192] (3044 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [191/192] (58 instructions)\nSYNC: [1] Connection[1]: Session[1]: Scanning local changeset [192/192] (217 instructions)\nSYNC: [1] Connection[1]: Session[1]: Indexing incoming changeset [1/4] (32 instructions)\nSYNC: [1] Connection[1]: Session[1]: Indexing incoming changeset [2/4] (486 instructions)\nSYNC: [1] Connection[1]: Session[1]: Indexing incoming changeset [3/4] (18970 instructions)\nSYNC: [1] Connection[1]: Session[1]: Indexing incoming changeset [4/4] (84042 instructions)\nSYNC: [2] Connection[1]: Session[1]: Finished changeset indexing (incoming: 4 changeset(s) / 103530 instructions, local: 192 changeset(s) / 2404763 instructions, conflict group(s): 66)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [1/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [2/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [3/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [4/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [5/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [6/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [7/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [8/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [9/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [10/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [11/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [12/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [13/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [14/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [15/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: 
Transforming local changeset [16/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [17/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [18/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [19/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [20/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [21/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [22/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [23/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [24/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [25/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [26/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [27/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [28/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [29/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [30/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [31/192] through 4 incoming changeset(s) with 66 conflict group(s)\nSYNC: [1] Connection[1]: Session[1]: Transforming local changeset [32/192] through 4 incoming changeset(s) with 66 conflict group(s)\n", "text": "After running a script to load data into a synced MondoDB Realm and the script eventually failing with a “Bad sync process (7)” error after the data load has completed but before the sync has completed I restart the script in query mode and the sync seems to attempt to pick up where it left off but seems to be doing some additional steps and is very very slow - it has been running for a day now.Can anyone explain what the sync is doing when “Transforming local changeset(s)…” and why it might be so slow ?Previously when running the load script it would always fail with the “Bad sync process (7)” error but when restarted in query mode the sync would usually continue uploading change sets until it received another “Bad sync process (7)” error. Enough restarts and the sync would eventually complete.Now the behaviour seems to have changed.Any ideas as to what is going on here ?", "username": "Duncan_Groenewald" }, { "code": "DebugDebug", "text": "Are you using Swift and running in Debug mode? 
I’ve noticed that some sync functions are ~50x slower for me in Debug configurations (see this post).", "username": "Andreas_Ley" }, { "code": "", "text": "Debug mode is very slow - use the prebuilt binaries so you can still run the application in debug mode without the performance hit in Realm.There were some server side issues that have been fixed recently as well that caused client sync to fail.", "username": "Duncan_Groenewald" } ]
Sync performance
2021-01-14T21:29:51.181Z
Sync performance
3,267