image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"mongodb-shell",
"indexes"
] | [
{
"code": "mongo -u <Account><Account>use dbOfInterest \ndb.collectionOfInterest.createIndex({fieldOne:-1})\n",
"text": "I want to add an index on an existing collection in a MongoDB Comunity Edition (mongo version v4.4.3) replica set. This replica set has only a Primary, at the moment, and is configured in this way to be able to use MongoDB change streams.\nI log into the Mongo shell using:\nmongo -u <Account>\nwhere <Account> is an account with role ‘userAdminAnyDatabase’ and ‘ClusterManager’.\nThen I issue the commands:But after issuing the command above the console seems not responding anymore (after several minutes I had to stop it using CTRL+C). I also tried to create the index using Compass but the result is the same, GUI not responding. After killing Compass GUI and restarting it, I found the new index but is it created correctly?\nThe considered collection is a test one with very few documents.\nWhat am I doing wrong?",
"username": "Sergio_Ferlito1"
},
{
"code": "",
"text": "I would expect this from the cli as is waiting for the index build to complete before returning the prompt to you, I cannot comment on the GUI.After killing Compass GUI and restarting it, I found the new index but is it created correctly?The index build will have continued. The specific conditions of when the build fails are outlines in index\nbuilds on populated collections.The considered collection is a test one with very few documents.If the collection is busy the index build will yield for operations, but usually a small collection will index quickly. The index build is logged by mongod so you can see when it started, it’s progress and completion.",
"username": "chris"
},
{
"code": " db.adminCommand(\n {\n currentOp: true,\n $or: [\n { op: \"command\", \"command.createIndexes\": { $exists: true } },\n { op: \"none\", \"msg\" : /^Index Build/ }\n ]\n }\n )\n",
"text": "Thanks Chris.So, If I understand well:Nevertheless, I am very hesitating to start a new index building on a production server, as, as I said, the console does not respond and I have to terminate it. In this case, the index building goes on the same?To monitor the index building process I have to use:",
"username": "Sergio_Ferlito1"
},
{
"code": "",
"text": "Nevertheless, I am very hesitating to start a new index building on a production server, as, as I said, the console does not respond and I have to terminate it. In this case, the index building goes on the same?I am pretty sure it keeps building once you terminate the issuing client. Easy to check in the logs or your current_op query. You can use a terminal multiplexor like screen or tmux (assuming linux) on the server and leave it with confidence.Prior to 4.2 an index build would lock the collection of the indexing unless run with the background option. 4.2+ an exclusive lock is only required at the beginning and end of the index build.Running an Index Build on an active database/collection is going to prolong an index build and may have an impact on running operations.\nFrom the previous link:Index Build Impact on Database PerformanceBuilding indexes during time periods where the target collection is under heavy write load can result in reduced write performance and longer index builds.Consider designating a maintenance window during which applications stop or reduce write operations against the collection. Start the index build during this maintenance window to mitigate the potential negative impact of the build process.",
"username": "chris"
},
{
"code": "",
"text": "I think there is some issue in creating indexes in my configuration.\nI found this and following the provided instructions I added a ‘keyfile’ for replica set members internal authentication, but still, I’m not able to resolve the issue. Moreover, the command to monitor index creation can be executed only with a “root” account, ClusterAdmin role was not sufficient. I wasn’t able to create new indexes so far (tried on other collections).",
"username": "Sergio_Ferlito1"
},
{
"code": "mongod# mongod.conf\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n keyFile: /home/user1/Desktop/mongo.keyfile\n authorization: enabled\n#operationProfiling:\n\nreplication:\n replSetName: AirHeritage\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\nkeyFilemongo.keyfileopenssl rand -base64 756 > <path-to-keyfile>\nchmod 400 <path-to-keyfile>\nsudo chown mongodb:mongodb /home/user1/Desktop/mongo.keyfilels -al /home/user1/Desktop/mongo.keyfile \n-r-------- 1 mongodb mongodb 1024 gen 13 09:19 /home/user1/Desktop/mongo.keyfile\nmongodsudo systemctl stop mongod \nsudo systemctl start mongod \n sudo systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: active (running) since Wed 2021-01-13 09:25:59 CET; 44min ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 10156 (mongod)\n Memory: 348.6M\n CGroup: /system.slice/mongod.service\n └─10156 /usr/bin/mongod --config /etc/mongod.conf\n\ngen 13 09:25:59 eanx-XPS-13-9350 systemd[1]: Started MongoDB Database Server.\nmongo -u root\nuse dbOfInterest\ndb.collectionOfInterest.createIndex({field:1})\n db.collectionOfInterest.createIndex({'field':1})\n{\n \"createdCollectionAutomatically\" : false,\n \"numIndexesBefore\" : 1,\n \"numIndexesAfter\" : 2,\n \"commitQuorum\" : \"votingMembers\",\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1610526522, 5),\n \"signature\" : {\n \"hash\" : BinData(0,\"qSqkVswHQA/IzWGYCd8HwNhXoQk=\"),\n \"keyId\" : NumberLong(\"6877458498892857349\")\n }\n },\n \"operationTime\" : Timestamp(1610526522, 5)\n}\nsudo tail /var/log/mongodb/mongod.log | jqsudo apt install jq db.collectionOfInterest.getIndexes()\n[\n {\n \"v\" : 2,\n \"key\" : {\n \"_id\" : 1\n },\n \"name\" : \"_id_\"\n },\n {\n \"v\" : 2,\n \"key\" : {\n \"field\" : 1\n },\n \"name\" : \"field.name_1\"\n }\nmongo.keyfile",
"text": "Finally solved!!.\nThe issue is indeed related to “replica set node with auth can connect to itself” (more here).\nI had to modify my mongod (file /etc/mongod.conf) configuration as follow:Note keyFile section into YAML mongod.conf file (pay attention to white spaces).\nTo correctly generate this mongo.keyfile I used:Then do:sudo chown mongodb:mongodb /home/user1/Desktop/mongo.keyfileCheck that results are something as:Then stop and restart mongod using:Check status:Then log into mongo console as root (not sure it is necessary root, should suffice ClusterAdmin role):Having done all that before, index creation should have worked without hanging the console with a result as:To check MongoDB logs use:sudo tail /var/log/mongodb/mongod.log | jq(if not installed in your system use sudo apt install jq, jq is very useful to pretty print json files)Finally check indexes on collection with:Note that two key fields are reported: “_id” (by default on collection creation) and “field”!!\nHope this can help someone else having a similar issue.\nOnly move the mongo.keyfile to a more suitable location (someone can suggest where?)",
"username": "Sergio_Ferlito1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo console freezes in attempt to create a new index on a populated collection within a replica set | 2021-01-12T08:48:55.903Z | Mongo console freezes in attempt to create a new index on a populated collection within a replica set | 9,377 |
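The currentOp filter shown in this thread can also be run from a driver. Below is a minimal, hedged PyMongo sketch that polls for in-progress index builds using the same filter; the connection string is a placeholder and would need to match your deployment.

```python
# Minimal sketch: poll for in-progress index builds from PyMongo, using the
# same filter as the adminCommand shown in the thread. URI is a placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI

result = client.admin.command({
    "currentOp": True,
    "$or": [
        {"op": "command", "command.createIndexes": {"$exists": True}},
        {"op": "none", "msg": {"$regex": "^Index Build"}},
    ],
})

# currentOp returns the matching operations in the "inprog" array.
for op in result.get("inprog", []):
    print(op.get("opid"), op.get("msg"))
```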
null | [
"php"
] | [
{
"code": "",
"text": "Included in the PHP 1.8 MongoDB driver library is a component called StreamWrapper. (in the MongoDB\\GridFS namespace.) This component is compatible with the PHP stream_wrapper_register() function. This allows native PHP functions such as fopen() and fwrite() to access GridFS as if it were a file system.I would like to use StreamWrapper but I cannot find any documentation on how to do so. I know you instantiate it using StreamWrapper::register() but that does not work alone. I have taken a look through the code and it seems I need to set a stream context option called ‘collectionWrapper’ but beyond that I am stumped. I have spent a long time looking for any relevant documentation too but have not been able to find any.Do you know either where I can find documentation for the object or can you give me an example of how to use it?",
"username": "John_Godsland"
},
{
"code": "",
"text": "I have taken some more time today to go through the code and experiment. I can tell that the Stream Wrapper is instantiated automatically when you create a new GridFS\\Bucket so my code now initialises a bucket on start-up.But. That does not solve the problem. I can use the openUploadStream() method of the Bucket object to get a file handle and can write to it, but I need to use native methods as I have third-party code that uses the built-in PHP file system functions. Help! ",
"username": "John_Godsland"
},
{
"code": "@internal",
"text": "The MongoDB\\GridFS\\StreamWrapper class is intentionally undocumented, as it’s an internal class used by the library’s own GridFS implementation. The same applies to the CollectionWrapper, ReadableStream, and WritableStream classes in the same namespace. Only MongoDB\\GridFS\\Bucket and related exception classes are intended to be public.The GridFS tutorial in the library documentation should help you get started with the public API.PHP doesn’t currently allow any way to define private functions and classes, but we’ve attempted to signal our intention by adding @internal to their doc blocks.",
"username": "jmikola"
},
{
"code": "",
"text": "I can use the openUploadStream() method of the Bucket object to get a file handle and can write to it, but I need to use native methods as I have third-party code that uses the built-in PHP file system functions.What “native methods” are you referring to? It may be helpful to share some code and highlight exactly what is incompatible. Note that the Bucket API provides two ways to write files. One method gives you a stream resource, which you can use with PHP’s file IO functions, and the second method exhaustively reads a stream that you provide into GridFS (without needing to work with PHP’s functions).",
"username": "jmikola"
},
{
"code": "",
"text": "@jmikola Thanks for the response. This is what I needed to know, especially with regard to why StreamWrapper is undocumented! (Saves me tearing my hair out trying to get it working.)My issue is that I have a third-party library that caches to the file system. (A template library.) I can provide a path for the cache in any supported form but the library uses the built in PHP file handling functions, e.g. fopen(), fwrite(), file_put_contents(), etc. My architecture makes using local file systems on servers impractical. I had thought I could instantiate and use StreamWrapper to provide a file system-alike wrapper for the built-in functions. However, given that isn’t the case I suppose I will need to write my own custom PHP stream wrapper object if I want to use GridFS in this way.",
"username": "John_Godsland"
},
{
"code": "fopenfcloseuploadFromStream",
"text": "I can provide a path for the cache in any supported form but the library uses the built in PHP file handling functions, e.g. fopen(), fwrite(), file_put_contents(), etc.Ah, the GridFS StreamWrapper definitely won’t work out of the box in this scenario. If you’ve looked into it, you might have noticed that the file/path string really isn’t used at all. Excluding the actual data, which will be written to the stream, we supply everything in the stream context (i.e. fourth parameter for fopen). This likely deviates from other stream wrappers you might have encountered, such as AWS SDK’s S3 adapter.I agree that you’ll probably need to create your own stream wrapper that internally uses the library’s Bucket API. You may also see better performance by collecting data in memory (e.g. php://memory) and waiting until fclose to upload that stream to GridFS in one shot via uploadFromStream. Doing so should minimize the amount of interaction with the MongoDB driver.",
"username": "jmikola"
},
{
"code": "fcloseuploadFromStream",
"text": "You may also see better performance by collecting data in memory (e.g. php://memory) and waiting until fclose to upload that stream to GridFS in one shot via uploadFromStream . Doing so should minimize the amount of interaction with the MongoDB driver.That’s an excellent suggestion, thanks. I already batch all the database changes and push them at the end of script processing so this would be straightforward.the GridFS StreamWrapper definitely won’t work out of the box in this scenarioThanks for your help on this. I am going to mark this as the solution!",
"username": "John_Godsland"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I use MongoDB\GridFS\StreamWrapper? | 2021-01-11T17:39:32.400Z | How do I use MongoDB\GridFS\StreamWrapper? | 2,316 |
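The thread above is about the PHP library, but the underlying idea in the accepted answer (buffer the data in memory, then upload it to GridFS in one shot) is driver-agnostic. As a rough, language-neutral illustration only, here is how the same pattern looks with PyMongo's GridFSBucket; the database name, file name and URI are made up for the example.

```python
# Rough illustration of the "buffer in memory, then upload in one shot"
# approach suggested above, using PyMongo's GridFSBucket.
# Database name, file name and URI are illustrative only.
import io

import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI
bucket = gridfs.GridFSBucket(client["test"])

# Collect the rendered output in memory first...
buffer = io.BytesIO()
buffer.write(b"rendered template cache contents")

# ...then push it to GridFS in a single call when the "file" is finished.
buffer.seek(0)
file_id = bucket.upload_from_stream("template_cache/example.tpl", buffer)
print("stored as", file_id)
```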
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n _id: '123',\n subTasks: [\n {\n _id: '444',\n },\n {\n _id: '555',\n subTasks: [\n {\n _id: '666',\n }\n ]\n }\n ]\n }\n- Root node 1\n - Child 1.1\n - Subchild 1.1.1\n - Child 1.2\n- Root node 2\n",
"text": "I have a tree structure.node collection.{\n_id: ‘123’,\n}\n{\n_id: ‘444’,\n}\n{\n_id: ‘555’,\n}\n{\n_id: ‘666’,\n}hierarchy collection.{\n_id: ‘333’,\nparent: ‘123’,\nchild: ‘444’,\n}\n{\n_id: ‘344’,\nparent: ‘123’,\nchild: ‘555’,\n}\n{\n_id: ‘344’,\nparent: ‘555’,\nchild: ‘666’,\n}Something similar to the above. Now i need to recursively fetch the entire tree.\nI want the result to be a single nested json.I tried using graph lookup. But i’m not getting the desired result{\nfrom: ‘hierarchies’,\nstartWith: ‘$_id’ ,\nconnectFromField: ‘parent’,\nconnectToField: ‘child’,\nas: ‘subTasks’,\nmaxDepth: 3,\n}Given the root id of the tree. I need to fetch the entire tree(ie. stack all the children).I don’t need the hierarchy collection’s data to be in the result. I need the nodes to be stacked.Thanks in advance ",
"username": "Ashwin_Ramamurthy"
},
{
"code": "db.col1.aggregate([\n { $sort: { _id: 1 } },\n { $limit: 1 },\n {\n $graphLookup: {\n from: \"col2\",\n startWith: \"$_id\",\n connectFromField: \"child\",\n connectToField: \"parent\",\n depthField: \"level\",\n as: \"subTasks\"\n }\n },\n {\n $unwind: {\n path: \"$subTasks\",\n preserveNullAndEmptyArrays: true\n }\n },\n { $sort: { \"subTasks.level\": -1 } },\n {\n $group: {\n _id: \"$_id\",\n parent: { $first: \"$subTasks.parent\" },\n subTasks: {\n $push: {\n _id: \"$subTasks.child\",\n level: \"$subTasks.level\",\n parent: \"$subTasks.parent\"\n }\n }\n }\n },\n {\n $addFields: {\n subTasks: {\n $reduce: {\n input: \"$subTasks\",\n initialValue: {\n level: -1,\n presentChild: [],\n prevChild: []\n },\n in: {\n $let: {\n vars: {\n prev: {\n $cond: [\n { $eq: [\"$$value.level\", \"$$this.level\"] },\n \"$$value.prevChild\",\n \"$$value.presentChild\"\n ]\n },\n current: {\n $cond: [\n { $eq: [\"$$value.level\", \"$$this.level\"] },\n \"$$value.presentChild\",\n []\n ]\n }\n },\n in: {\n level: \"$$this.level\",\n prevChild: \"$$prev\",\n presentChild: {\n $concatArrays: [\n \"$$current\",\n [\n {\n _id: \"$$this._id\",\n parent: \"$$this.parent\",\n subTasks: {\n $filter: {\n input: \"$$prev\",\n as: \"e\",\n cond: { $eq: [\"$$e.parent\", \"$$this._id\"] }\n }\n }\n }\n ]\n ]\n }\n }\n }\n }\n }\n }\n }\n },\n { $addFields: { subTasks: \"$subTasks.presentChild\" } }\n])\n",
"text": "Hello @Ashwin_Ramamurthy Welcome to MongoDB Community Forum I am not sure but there will be good and easy approach to do this, I am sharing one hack this may cause performance issues in huge documents because this is lengthy but for the knowledge you can try,PlaygroundI have answered this in details here",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Graphlookup to fetch a tree structure | 2021-01-13T06:24:02.580Z | Graphlookup to fetch a tree structure | 7,684 |
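For reference, here is a small PyMongo sketch of the $graphLookup stage used in the answer above, walking the hierarchy collection from parent to child starting at a root node. The connection string is a placeholder and the collection names follow the question loosely; rebuilding the nested tree still requires the extra stages shown in the answer.

```python
# Sketch of the $graphLookup discussed above: starting from a root node, walk
# the hierarchies collection from parent to child. URI and collection names
# are placeholders/loosely based on the question.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI
db = client["test"]

pipeline = [
    {"$match": {"_id": "123"}},  # the root node of the tree
    {
        "$graphLookup": {
            "from": "hierarchies",
            "startWith": "$_id",
            "connectFromField": "child",
            "connectToField": "parent",
            "depthField": "level",
            "as": "subTasks",
        }
    },
]

for doc in db.nodes.aggregate(pipeline):
    print(doc["_id"], [(e["child"], e["level"]) for e in doc["subTasks"]])
```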
null | [
"php"
] | [
{
"code": "",
"text": "I’m encountering a problem where PHP code (using v1.6 of the PHP MongoDB library) is quickly hitting the 1.5k simultaneous-connection limit on my M10 account. Upon further inspection, it looks like it’s possible failing to reuse old connections and also failing to close connections, as when I do a graceful restart of Apache to make it close all of the inactive threads it suddenly drops all of the Mongo connections it was holding open.I’ve done a fair amount of research trying to figure out what’s going on behind the scenes here, connection-wise. I found a bunch of documentation for the old MongoClient class talking about its connection persistence mechanisms (such as the official documentation or this article), but thus far I’ve been completely unable to locate any similar documentation for the newer MongoDB\\Client library. Is there any documentation available on how the new library handles its connection persistence?Specifically, I’d love it if there were a way to see whether any given MongoDB\\Client instance is creating a new connection or reusing an old one. I found this pull request to the PHP MongoDB driver that should allow the persistence function to be turned off in the future, but that’s still a ways out.Does anyone have any experience with this issue?",
"username": "Clint_Olson"
},
{
"code": "MongoDB\\Driver\\ManagerMongoDB\\Client",
"text": "The PHP driver reuses connections as explained in the PHP extension’s Connection handling and persistence documentation. Within a single PHP process, multiple MongoDB\\Driver\\Manager instances reuse the same client object if the constructor arguments of the managers are the same. These connections are kept open until the process terminates, but since they are reused there shouldn’t be an ever-growing number of connections unless you prevent reuse of clients. One way this could happen is by creating each MongoDB\\Client instance with different parameters (e.g., attaching a random key to either options arrays), which leads to each manager instance creating a new internal client that holds connections.The specifics of connections obviously depend on your code, but also on the environment that you deployed your application in. Obviously, separate PHP processes (e.g. those spawned by php-fpm) can’t share client objects. This means that if you run php-fpm with 20 processes and send a single request to each of them, there will be 20 clients that hold connections to the cluster.The PR you linked to disable persistence has been merged, but we don’t advise enabling this flag unless you encounter serious issues. One reason for reusing internal clients is that this reduces the performance impact of discovering and connecting to the cluster when sending multiple requests to a single PHP process (as is typical when running php-fpm behind a web server).To better evaluate what’s happening here, some numbers would be interesting: in the entire deployment, how many PHP processes connect to the cluster? Elaborating on the example above, having 20 web servers running 20 php processes to handle requests would mean that at least 400 clients will be created and connect to MongoDB - you can see how quickly this could reach 1500 clients depending on your deployment.\nFurthermore, do you see connections rise slowly to this 1500 connection limit or do you immediately run into the limit?If you feel confident, you can test the feature of disabling persistent clients by manually compiling the extension and enabling the flag. Since this is pre-release software, be aware that there could be negative performance impacts due to this change, so I don’t recommend running this version in a production environment. With the change, you should see the number of connections drop as they are dropped when a request terminates, but I’ll point out that this could come at the expense of performance when connecting to the same cluster in a subsequent request.",
"username": "Andreas_Braun"
}
] | Is there documentation on MongoDB\Client's connection persistence behavior? | 2021-01-12T23:24:13.611Z | Is there documentation on MongoDB\Client’s connection persistence behavior? | 3,476 |
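The driver-specific details above are about the PHP extension, but the 1.5k limit mentioned in the question is counted on the server side, and the serverStatus command reports it for any driver. A small PyMongo sketch of that check, with a placeholder URI:

```python
# Small sketch: read the server-side connection counters that count against
# the Atlas connection limit discussed above. URI is a placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI

status = client.admin.command("serverStatus")
conns = status["connections"]
print("current:", conns["current"])
print("available:", conns["available"])
print("totalCreated:", conns["totalCreated"])
```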
null | [
"data-modeling"
] | [
{
"code": " {\n \"action_datetime\": \"2007-12-03T10:15:30+01:00\",\n \"issuer\": \"myusername-for-compliance\",\n \"action_trigger\": \"approval\",\n \"action_name\": \"changed\",\n \"details\": {\n\t\"new_amount\": 123.45\n },\n \"object_id\": \"12345678-1234-5678-12345-12345678912345\",\n \"object_type\": \"document\"\n}\n {\n \t\"invoice_id\": \"ABC45678-1234-5678-12345-12345678912345\",\n \t\"protocol_entries\":\n \t[\n \t\t{\n \t\t\t\"action_datetime\": \"2007-12-03T10:15:30+01:00\",\n \t\t\t\"issuer\": \"myusername-for-compliance\",\n \t\t\t\"action_trigger\": \"approval\",\n \t\t\t\"action_name\": \"changed\",\n \t\t\t\"details\": {\n \t\t\t\"new_amount\": 123.45\n \t\t\t},\n \t\t\t\"object_id\": \"12345678-1234-5678-12345-12345678912345\",\n \t\t\t\"object_type\": \"document\"\n \t\t},\n \t\t{\n \t\t\t\"action_datetime\": \"2007-12-03T10:15:30+01:00\",\n \t\t\t\"issuer\": \"myusername-for-compliance\",\n \t\t\t\"action_trigger\": \"payment_system\",\n \t\t\t\"action_name\": \"paid\",\n \t\t\t\"details\": {\n \t\t\t\"iban\": \"...\"\n \t\t\t},\n \t\t\t\"object_id\": \"12345678-1234-5678-12345-12345678912345\",\n \t\t\t\"object_type\": \"document\"\n \t\t}\n \t]\n }\n",
"text": "Hello together,we are designing a protocol system for an invoice handling software system.\nWithin the life cycle of an invoice, many so called protocol entries are created (about 50)for example:I want to monitor all this protocol entries for an invoice.\nOur system will serve many millions of invoices and there for even more protocol entriesMy big question is, how to store that protocol entries in mongodb.There are two main ideas:(1)\na mongo db document represents a protocol entry(2)\na mongo db document represenets an invoice and has a set of protocol entriesUsing (1) will cause only an insert for a new protocol entry with a reference field to invoice\nUsing (2) will cause a read and an updateCan you give me a hint where mongodb performs better?",
"username": "Steffel86"
},
{
"code": "test:PRIMARY> db.coll.update({invoice_id: 123},{$push: {protocol_entries: {a:1, b:2, c:3}}})\nWriteResult({ \"nMatched\" : 0, \"nUpserted\" : 0, \"nModified\" : 0 })\ntest:PRIMARY> db.coll.update({invoice_id: 123},{$push: {protocol_entries: {a:1, b:2, c:3}}}, {upsert:true})\nWriteResult({\n\t\"nMatched\" : 0,\n\t\"nUpserted\" : 1,\n\t\"nModified\" : 0,\n\t\"_id\" : ObjectId(\"5ffe2708142dfb3a77a54e7d\")\n})\ntest:PRIMARY> db.coll.update({invoice_id: 123},{$push: {protocol_entries: {a:3, b:2, c:5}}}, {upsert:true})\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\ntest:PRIMARY> db.coll.find().pretty()\n{\n\t\"_id\" : ObjectId(\"5ffe2708142dfb3a77a54e7d\"),\n\t\"invoice_id\" : 123,\n\t\"protocol_entries\" : [\n\t\t{\n\t\t\t\"a\" : 1,\n\t\t\t\"b\" : 2,\n\t\t\t\"c\" : 3\n\t\t},\n\t\t{\n\t\t\t\"a\" : 3,\n\t\t\t\"b\" : 2,\n\t\t\t\"c\" : 5\n\t\t}\n\t]\n}\ninvoice_id",
"text": "Hi @Steffel86 and welcome in the MongoDB Community !Why are you saying that 2 requires a read + an update? You can use upsert I think here.If it’s the very first update on this invoice, it will create the document (==insert operation). Each following update on this invoice will be an update operation as a matching invoice_id is found.Also, I think I would recommend the second option ─ based on the information you provided ─ as it’s actually a bucket pattern. It will be fine as long as you know FOR SURE that you will never have a crazy invoice with 1000000 updates as this could go over the 16MB limit for a single document and generate an array way too large to be handled.If you have 10M invoices, this will avoid to have a 50*10 = 500M documents collection to store these updates. This will help reduce the size of your indexes, etc.I hope this helps.\nCheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Aggregate protocol entries or not | 2021-01-12T20:03:46.960Z | Aggregate protocol entries or not | 1,489 |
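The upsert example in the answer above is written for the mongo shell; the same single round-trip write (push a protocol entry, creating the invoice document on first use) looks like this from PyMongo. The URI, database and collection names are placeholders.

```python
# Sketch of the same upsert + $push pattern shown above, from PyMongo.
# Connection string, database and collection names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI
invoices = client["test"]["invoice_protocols"]

entry = {
    "action_datetime": "2007-12-03T10:15:30+01:00",
    "issuer": "myusername-for-compliance",
    "action_trigger": "approval",
    "action_name": "changed",
    "details": {"new_amount": 123.45},
}

# One write: appends to the array, or creates the invoice document on first use.
result = invoices.update_one(
    {"invoice_id": "ABC45678-1234-5678-12345-12345678912345"},
    {"$push": {"protocol_entries": entry}},
    upsert=True,
)
print(result.matched_count, result.upserted_id)
```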
null | [] | [
{
"code": "Object.entries()",
"text": "Im working on an aggregation pipeline, using the $accumulator function.What version of javascript can I use when writing the code?I seem to have issues with Object.entries() - and Atlas support send me here ",
"username": "Alex_Bjorlig"
},
{
"code": "interpreterVersion()mongoMozJS-60$accumulator",
"text": "Welcome to the MongoDB community @Alex_Bjorlig!What version of javascript can I use when writing the code?The most straightforward way to check the embedded JavaScript interpreter version is by calling interpreterVersion() in the mongo shell version matching your server release (both embed the same interpreter versions).For MongoDB 4.4, the interpreter version is MozJS-60 (or more precisely from a peek at the source code, MozJS ESR 60.3). This ESR (Extended Support Release) version of MozJS was embedded in Firefox 63, and roughly corresponds to ECMAScript 2017 with some features from newer drafts: ECMAScript 2016 to ES.Next support in Mozilla.If an $accumulator function isn’t working as expected, please share some more information. Example documents, aggregation pipeline, and expected vs actual output would be helpful.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "Object.entries()",
"text": "Hi Stennie.I guess what would be really help-full was a site like https://caniuse.com/.When I look-up Object.entries() it lists the following support:If the javascript engine corresponds to Firefox 63 - then Object.entries() should work Thanks for the help - and I guess my approach should be fine when determining what code I can use?",
"username": "Alex_Bjorlig"
},
{
"code": "$objectToArray$map$filter",
"text": "HI @Alex_Bjorlig,Yes, referring to a guide on browser compatibility should generally work for core syntax. There are a few differences in the embedded JavaScript engine because this doesn’t include some of the browser APIs (like console output).However, if something doesn’t work as expected I would start a forum discussion topic with more details so someone may be able to help with a solution or workaround.For best performance and resource usage you should aim to minimise use of server-side JavaScript and try to use in-built aggregation operators.For example, you can use array expression operators like $objectToArray to convert a document to an array, $map to apply an expression to each element of an array, or $filter to select a subset of an array based on an expression.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Yeah - makes sense with the browser APIs not being there For more advanced use-cases the in-built aggregation operators are pretty limited, or at least they require a lot of steps.So in our application, we are working on replacing some really long aggregation pipelines, and replace them with only 2 $group + $accumulator steps (and we use $filter to remove not-wanted data).If you wan’t I could try to describe our data-format and desired output?",
"username": "Alex_Bjorlig"
}
] | What version of javascript is $accumulator using? | 2021-01-12T10:34:21.965Z | What version of javascript is $accumulator using? | 2,687 |
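The suggestion in this thread to prefer built-in operators over server-side JavaScript can be illustrated with a small, hypothetical example: converting a sub-document to an array with $objectToArray and filtering it, instead of iterating with Object.entries() inside an $accumulator. A PyMongo sketch with made-up collection and field names:

```python
# Sketch of replacing an Object.entries()-style loop with built-in operators:
# $objectToArray turns a sub-document into [{k, v}, ...] and $filter selects
# entries. Collection and field names are made up for illustration.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI
coll = client["test"]["metrics"]

pipeline = [
    {
        "$project": {
            "large_values": {
                "$filter": {
                    "input": {"$objectToArray": "$values"},
                    "as": "entry",
                    "cond": {"$gt": ["$$entry.v", 10]},
                }
            }
        }
    }
]

for doc in coll.aggregate(pipeline):
    print(doc)
```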
null | [
"database-tools"
] | [
{
"code": "",
"text": "Hi all,Would appreciate any help here (MongoDB version 4.2.11.I am running a mongorestore with the following simple string:mongorestore -u auser --password apassword --gzip --archive=replica_set_0.gz --oplogReplayAll works well and all my users collections and databases are restored OK . However, after the last one has finished restoring, I see the following message:2020-12-01T18:12:06.749+0200 admin.tempusers 15.0KBThis just keeps going on and the restore does not finish causing be to cancel the restore.The main issue that I have is that apart from this being puzzling that the oplog has probably not been replayed.Has anyone else come across this, and /or I am just missing something.RegardsJohn",
"username": "John_Clark"
},
{
"code": "",
"text": "Hi @John_Clark,Just wanted to double check if you’ve used --oplog option while making this dump that you’re trying to restore.",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Hi Viraj,Yes we did.RegardsJohn",
"username": "John_Clark"
},
{
"code": "admin.usersadmin.tempusers-vvvvv",
"text": "Hi @John_Clark,Are you still having this problem? It sounds like mongorestore is hanging when restoring admin.users into admin.tempusers. It’s difficult to say what could be causing that without seeing more of the output. Can you share more? Also running the command with -vvvvv will increase the output verbosity.",
"username": "Tim_Fogarty"
},
{
"code": "",
"text": "Hi Tim,Thanks very much for the response. The restore did essentially work and I have not needed to do a similar one since. It was odd, but I will be sure to add the -vvvvv option next time. Hopefully it will not happen again but if it does and I cannot work it out, I will give you a pingBest regardsJohn",
"username": "John_Clark"
}
] | Mongorestore not finishing (tempusers) | 2020-12-02T09:30:45.400Z | Mongorestore not finishing (tempusers) | 3,953 |
[] | [
{
"code": "",
"text": "Termius_GQjKNRzcH01046×220 29 KBThe local database on my server has stopped working. What could be the problem?",
"username": "Ivan_Ermolaev"
},
{
"code": "",
"text": "Please share the log. The status code is not sufficient for reliable diagnostic.",
"username": "steevej"
},
{
"code": "/etc/mongod.conf/data/db",
"text": "Hi @Ivan_Ermolaev and welcome in the MongoDB Community !Could you also please share the /etc/mongod.conf as well just in case.\nIf you are using the default one, please double check that you have a /data/db folder owned by root and nothing else running on the port 27017.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Error enabling local base | 2021-01-11T17:34:35.622Z | Error enabling local base | 1,631 |
null | [] | [
{
"code": "",
"text": "Hello,We’re using an M10 cluster to store all our data.\nWe have several collections including:I wanted to graph how many messages there were over time from users that had entered their phone number but the number of documents in the messages collection makes that complicated.I can’t do a $lookup ‘message.user’ -> $match ‘user.phone’ as the request won’t complete.The only way I’d be able to do this would be by first getting all users with a non null phone and then matching that id list in the messages collection.Is it possible (or will it be possible in the future) to do something like this?Thank you!",
"username": "Thomas_Bianchi"
},
{
"code": "{phone:1}{user_id:1}from datetime import datetime\n\nfrom faker import Faker\nfrom pymongo import MongoClient\n\nfake = Faker()\n\n\ndef random_messages():\n docs = []\n for _id in range(1, 1001):\n doc = {\n '_id': _id,\n 'user_id': fake.pyint(min_value=1, max_value=100),\n 'message': fake.sentence(nb_words=10),\n 'date': datetime.strptime(fake.iso8601(), \"%Y-%m-%dT%H:%M:%S\")\n }\n docs.append(doc)\n return docs\n\n\ndef random_users_with_phones():\n docs = []\n for _id in range(1, 50):\n doc = {\n '_id': _id,\n 'firstname': fake.first_name(),\n 'lastname': fake.last_name(),\n 'phone': fake.phone_number()\n }\n docs.append(doc)\n return docs\n\n\ndef random_users_without_phones():\n docs = []\n for _id in range(51, 101):\n doc = {\n '_id': _id,\n 'firstname': fake.first_name(),\n 'lastname': fake.last_name()\n }\n docs.append(doc)\n return docs\n\n\nif __name__ == '__main__':\n client = MongoClient()\n db = client.get_database('test')\n messages = db.get_collection('messages')\n users = db.get_collection('users')\n messages.drop()\n users.drop()\n messages.insert_many(random_messages())\n users.insert_many(random_users_with_phones())\n users.insert_many(random_users_without_phones())\n print('Import done!')\n\n users.create_index(\"phone\")\n messages.create_index(\"user_id\")\n\n pipeline = [\n {\n '$match': {\n 'phone': {\n '$exists': 1\n }\n }\n }, {\n '$lookup': {\n 'from': 'messages',\n 'localField': '_id',\n 'foreignField': 'user_id',\n 'as': 'messages'\n }\n }, {\n '$unwind': {\n 'path': '$messages'\n }\n }, {\n '$replaceRoot': {\n 'newRoot': '$messages'\n }\n }, {\n '$bucketAuto': {\n 'groupBy': '$date',\n 'buckets': 10,\n 'output': {\n 'count': {\n '$sum': 1\n }\n }\n }\n }\n ]\n\n print('Result aggregation:')\n for doc in users.aggregate(pipeline):\n print(doc)\n",
"text": "Hi @Thomas_Bianchi and welcome in the MongoDB Community !I think it would be more efficient to lookup from users => messages rather than do a lookup from messages => users and then do a match to eliminate all the messages without a user with a phone number.That way, you are not joining 2.5M docs and then eliminate a bunch of them.Let me illustrate with an example. I made a little script in Python to generate a fake database. 50% of the users have a phone number (1 => 50).At the end, I’m also executing my pipeline and I’m showing the results in 10 date buckets. As you can see below, I’m doing:Note that to support the lookup, I also created the {user_id:1} index on the messages collection.image1146×1196 91.2 KBI hope it helps .Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Possible to connect 2 data sources without lookup? | 2021-01-12T10:34:47.621Z | Possible to connect 2 data sources without lookup? | 2,303 |
null | [
"replication",
"cxx"
] | [
{
"code": "",
"text": "Hi,I have an issue with GridFS (See https://jira.mongodb.org/browse/CXX-2148 for details) where writes on the secondary seemed to be applied in arbitrary order, more specifically chunks of a given document are not readable while the files entry already is, despite the fact that (AFAIU) chunks are written before the files entry.The documentation states that “MongoDB provides monotonic write guarantees, by default, for standalone instances and replica set”. Doesn’t that mean that write order on the primary are replicated in the same order on secondaries if there is no sharding ?\nOr maybe it does but it’s not reflected from the point of view of a reader of those writes ?\nI see that there is causally consistent client session that could be used to provide stronger guarantee but I’d like to be sure I understand correctly the default behavior in my case.Thanks in advance.Env: mongo 4.2, cluster with 6 replicas, no sharding.",
"username": "Francois_EE"
},
{
"code": "local.oplog.rsfs.filesfs.chunks",
"text": "Hi @Francois_EE and welcome in the MongoDB Community !Writes operations are replicated on a secondary node in the same order they appear in the oplog: local.oplog.rs collection and it’s also the same order on the primary node.But WHAT you can read depends on which read concern you are using and WHERE (== which node) you are reading from depends on your read preference.As GridFS relies on 2 collections: fs.files and fs.chunks, if you want to write a big file “atomically” to your primary (and replicate that in a similar manner on your secondaries), you would have to use a multi doc transaction which is the real “all or nothing” implementation that you are chasing here apparently.I hope this helps .Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks Maxime.\nFrom what I can read transactions do have an impact on performances. I do a lot of writes and a few reads so I’m sensitive to write performances.\nFrom what I understand, with the default write concern and a read concern of majority, using session would achieve the consistency I’m looking for. Is that correct ?\nFrom a performance standpoint, would it be ok to use a new session for each document upload to gridFS or would it better to share the same session for a set of gridFS uploads ?Thanks again.",
"username": "Francois_EE"
},
{
"code": "",
"text": "Hi @Francois_EE,Actually I have to take some of my comment back because GridFS doesn’t support multi-doc transactions for some reasons. It’s actually the first thing in the GridFS doc. My bad .But yes, you are correct, a causal consistent session will help you “read your own writes” ─ even if these read operations happen to be on a secondary node right after the insertion. Depending on which read concern & write concern combo you are using.It’s all explained in this documentation: https://docs.mongodb.com/manual/core/causal-consistency-read-write-concerns/Also, you will have to solve this replication delay that you have on your RS because it’s not healthy and in case you have too many delayed nodes, your read & write operations with “majority” will time out if they can’t be replicated fast enough.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | Consistency with replicas | 2021-01-11T17:34:16.350Z | Consistency with replicas | 3,262 |
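For the "read your own writes" behaviour discussed in this thread, a causally consistent session combined with majority read/write concerns is what the linked documentation describes. Below is a minimal PyMongo sketch of that combination; the URI, database and collection names are placeholders.

```python
# Minimal sketch of a causally consistent session with majority read/write
# concerns, as discussed above. URI, database and collection are placeholders.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")  # assumed URI

with client.start_session(causal_consistency=True) as session:
    coll = client["test"].get_collection(
        "files_meta",
        read_concern=ReadConcern("majority"),
        write_concern=WriteConcern("majority"),
    )
    # Write, then read back within the same session: causal consistency
    # guarantees the read observes the earlier write.
    coll.insert_one({"_id": "doc-1", "state": "uploaded"}, session=session)
    print(coll.find_one({"_id": "doc-1"}, session=session))
```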
null | [
"data-modeling",
"atlas-device-sync"
] | [
{
"code": "",
"text": "Would like to confirm that there are no limits for how many partitions can be created …Similar to how each user has their own Realm (e.g. /~/someRealm ), which backend apps can process. I’d like to make sure it is possible to handle additional Realm partitions, that could be all users (in multiples) per partition grouping - e.g., /someRealm partitioned by userID & type (e.g. “someType_userID”). Assuming many millions of users as an example usage.Basically, will this large addition of partitions on Realms become a problem?",
"username": "Reveel"
},
{
"code": "",
"text": "@Reveel There are no limits for the amount of partitions you can create. A partition is a Realm file. The only limitation here would be the amount of available storage on a device.",
"username": "Lee_Maguire"
},
{
"code": "",
"text": "@Lee_Maguire:OK - Thank you!Also, are there any performance issues (on larger quantities) to consider?",
"username": "Reveel"
},
{
"code": "",
"text": "@Reveel If you mean by having many Realm files, then no.",
"username": "Lee_Maguire"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Are there any quantity limitation(s) for Partitioned Realms | 2021-01-11T20:33:09.948Z | Are there any quantity limitation(s) for Partitioned Realms | 1,693 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hey guys,I may have a pipeline with 2 or 3 $facet stages in every each with $match, $group, $unwind, $addField stages, alltogether easily around 20 or 30 stages,\nor I optionally share this one large to let’s say 3-5 other, smaller pipelines\n( Pipeline A, Pipeline B, …, Pipeline E ) with a common output collection.My questions:",
"username": "Vane_T"
},
{
"code": "",
"text": "Hi @Vane_T,There are limits to the aggregation pipeline:Moreover, for $facet if the result set is returned or outed somewhere the arrays returned (in a single document) cannot exceed 16mb.Therefore if you use 4.4 cluster and above you may consider $unionWith to create your documents rather than $facet:This stage results in each document as a seperate one so only each document is subject to 16mb limit and not the entire result set.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovnythank you for your response and links, but they are too broad and theoretical for me to help my particular decision I plan to execute this or these pipeline(s) via scheduled triggered functions, planning to run them by every 5-15 minutes, so - I think - with proper index usage some hundred new documents in source collection should not be an issue in aggregation, should be?BTW the linked doc doesn’t provide info about recommended or max number of stages in a pipeline.\nThank you!",
"username": "Vane_T"
},
{
"code": "",
"text": "Hi @Vane_T,There is no max number of stages per say as long as the entire command document does not cross 16mb.For shared and Atlas free tier the limit is 50 stages in a pipeline.Your question is also very broad , if you want my specific opinion you should provide specific queries with specific execution stats/plan.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Maximum number of stages in a pipeline, pipeline design | 2021-01-12T04:22:18.708Z | Maximum number of stages in a pipeline, pipeline design | 3,005 |
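The $unionWith alternative to $facet suggested above (available from MongoDB 4.4) can be sketched as follows; each branch runs its own pipeline and the results come back as separate documents, so only each document is bound by the 16MB limit. Collection names, filters and the URI in this PyMongo sketch are placeholders.

```python
# Sketch of the $unionWith alternative to $facet mentioned above. Requires
# MongoDB 4.4+. Collection names, filters and URI are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI
db = client["test"]

pipeline = [
    {"$match": {"status": "open"}},
    {"$group": {"_id": "$type", "count": {"$sum": 1}}},
    {
        "$unionWith": {
            "coll": "archive",
            "pipeline": [
                {"$match": {"status": "closed"}},
                {"$group": {"_id": "$type", "count": {"$sum": 1}}},
            ],
        }
    },
]

# Each grouped result is emitted as its own document.
for doc in db["events"].aggregate(pipeline):
    print(doc)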
null | [
"sharding",
"devops"
] | [
{
"code": "",
"text": "I would like to ask what is advisable when scaling out sharded cluster.\nI am planning to scale out horizontally my sharded cluster(3 node cluster), and initially, the design is to add a new node. So now, there are 4 nodes in my cluster and each node should only have 3 mongodb shards. This is done by using pacemaker location constraints and max clone to limit 3 processes per shard in each node.3-Node cluster\nP - Primary\nS - Secondary\nNode 1: P - shard 1(Port: 27018), P - shard 2(Port: 27019), P - shard 3(Port: 27020)\nNode 2: S -shard 1(Port: 27018, S -shard 2(Port: 27019), S -shard 3(Port: 27020)\nNode 3: S -shard 1(Port: 27018, S -shard 2(Port: 27019), S -shard 3(Port: 27020)Option 1:\nPort number will increase dynamically.\n4-node cluster\nNode 1: P - shard 1(Port: 27018), P - shard 2(Port: 27019), P - shard 3(Port: 27020)\nNode 2: S -shard 1(Port: 27018), S -shard 2(Port: 27019), S -shard 4(Port: 27021)\nNode 3: S -shard 1(Port: 27018), S -shard 4(Port: 27021), S -shard 3(Port: 27020)\nNode 4: P -shard 4(Port: 27021), S -shard 2(Port: 27019), S -shard 3(Port: 27020)5-node cluster\nNode 1: P - shard 1(Port: 27018), P - shard 2(Port: 27019), P - shard 3(Port: 27020)\nNode 2: S -shard 1(Port: 27018), S -shard 2(Port: 27019), S -shard 4(Port: 27021)\nNode 3: S -shard 1(Port: 27018), S -shard 4(Port: 27021), S -shard 5(Port: 27022)\nNode 4: P -shard 4(Port: 27021), S -shard 5(Port: 27022), S -shard 3(Port: 27020)\nNode 5: P -shard 5(Port: 27022), S -shard 2(Port: 27019), S -shard 3(Port: 27020)Option 2:\nPort number used is only 3.\n4-node cluster\nNode 1: P - shard 1(Port: 27018), P - shard 2(Port: 27019), P - shard 3(Port: 27020)\nNode 2: S -shard 1(Port: 27018), S -shard 2(Port: 27019), S -shard 4(Port: 27020)\nNode 3: S -shard 1(Port: 27018), S -shard 4(Port: 27019), S -shard 3(Port: 27020)\nNode 4: P -shard 4(Port: 27018), S -shard 2(Port: 27019), S -shard 3(Port: 27020)5-node cluster\nNode 1: P - shard 1(Port: 27018), P - shard 2(Port: 27019), P - shard 3(Port: 27020)\nNode 2: S -shard 1(Port: 27018), S -shard 2(Port: 27019), S -shard 4(Port: 27020)\nNode 3: S -shard 1(Port: 27018), S -shard 4(Port: 27019), S -shard 5(Port: 27020)\nNode 4: P -shard 4(Port: 27018), S -shard 5(Port: 27019), S -shard 3(Port: 27020)\nNode 5: P -shard 5(Port: 27018), S -shard 2(Port: 27019), S -shard 3(Port: 27020)Which is better, option 1 or option 2? Which is recommended?\nFor option 2, the issue will be adding of ports in the firewall. More shards that will be added, more ports will be added also in firewall setting.",
"username": "Ralph_Anthony_Plante"
},
{
"code": "mongodmongos",
"text": "Which is better, option 1 or option 2? Which is recommended?Hi @Ralph_Anthony_Plante,In general I wouldn’t recommend multiple shards per host node unless you have a clear allocation of resources for each mongod process and have considered the potential consequences of failover scenarios (for example, making sure that your busiest primaries do not all end up on the same node). A 3 or 4 node replica set may be able to make more effective use of this hardware, but I assume you have done some comparison for your workload.Since applications will be connecting to your cluster via mongos, the assignment of ports for shard servers should only affect administrators.I think your Option 1 is less likely to lead to administrative confusion: it looks like shard 1 is always port 27018, shard 2 is always port 27019, etc. You can minimise firewall reconfiguration by allowing a larger port range with room for future expansion (eg 27018-27028).Regards,\nStennie",
"username": "Stennie_X"
}
] | MongoDB Sharded Cluster Scaleout | 2021-01-11T07:06:09.976Z | MongoDB Sharded Cluster Scaleout | 2,493 |
null | [
"configuration"
] | [
{
"code": "2020-09-13T23:27:09.269+0200 I NETWORK [listener] connection accepted from 10.100.22.100:9808 #2135 (97 connections now open)\n2020-09-13T23:27:09.269+0200 I NETWORK [conn2135] received client metadata from 10.100.22.100:9808 conn2135: { driver: { name: \"NetworkInterfaceTL\", version: \"4.4.0\" }, os: { type: \"Linux\", name: \"Ubuntu\", architecture: \"x86_64\", version: \"18.04\" } }\n2020-09-13T23:27:09.270+0200 I NETWORK [conn2135] end connection 10.100.22.100:9808 (96 connections now open)\n2020-09-13T23:27:09.412+0200 I NETWORK [listener] connection accepted from 10.100.22.100:9830 #2136 (97 connections now open)\n",
"text": "Hi,I want to raise the value of “maxPoolSize” from its default 100, to more or less 200.\nHow can I do this via “mongod.conf” ?I see in the logs of each of my 108 shard servers (36 shards, 3 servers each) that they nearly continuously setup connections to each other, ending connections when they are idle for some time, and this around some limit of 100 connections:Since this is happening nearly continuously on all of the shard servers, I suspect this is slowing down my cluster.\nI found that “maxPoolSize” has a default value of 100. Is this the parameter that I should adjust to a higher value, so that all 108 shard servers can keep their connection active, even when idle for some seconds ?",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "Hi @Rob_De_Langhe,The maxPoolSize of 100 is a driver side parameter which is intended to your application connection to MongoDB.Sharding have different pool size parameters which you can scroll from hereNow please note that default values are tuned for most use cases therefore be cautious when changing them. Please test fully in your load test env mimic production arch and traffic before deploying to prod.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "hi Pavel,thx a lot for your feedback !If I browse to that URL about the “Sharding TaskExecutor PoolMaxConnecting” parameter, it says that it applies only to the router’s “mongos” program.But we see those zillions of logs about repeatedly connections and disconnections between all the shard servers, not to or from any router “mongos” (see my extract higher up in my original post).Those connections between all the shard servers seems to balance around some limit of 100 connections (1 or 2 less than 100) :\nSince we have 36 shards of 3 servers each, thus 108 shard servers, I assume that each of the shard servers tries to maintain connections to all 107 other shard servers. Since they continuously disconnect from some and reconnect to other shard servers, still just below those 100 active connections, I suspect some limit of approx 100 connections is involved.",
"username": "Rob_De_Langhe"
},
{
"code": "ShardingTaskExecutorPoolMinSize",
"text": "Hi @Rob_De_Langhe,The parameter ShardingTaskExecutorPoolMinSize ¶ can be set on a mongod. This parameter is by default 1 therefore shards connection pool will shrink to 1 and grow as it needed.This could result in large amounts of connection and disconnected between the shards but this is expected and should be fine.Not sure why you decided this has any impact on your cluster. If you want you can test increasing the min pool to maintain a larger number of connections idle, but I don’t see how thats is better to be honest.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{\"t\":{\"$date\":\"2021-01-04T13:45:47.800+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): NetworkInterfaceExceededTimeLimit: Remote command timed out while waiting to get a connection from the pool, took 31481ms, timeout was set to 20000ms\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)202, mongo::ExceptionForCat<(mongo::ErrorCategory)1>, mongo::ExceptionForCat<(mongo::ErrorCategory)10> >\\n\"}}\n",
"text": "hi Pavel, thx for the reply, and sorry for not reacting sooner: I am still struggling to get our (now 90-shards on v4.4.1) cluster stable: even when loading a tiny amount of data, several shard servers shutdown with a fatal log message likeSo this wait-time (“to get a connection from the pool”) took 31secs, which is way above the 20secs max, and thus these shards shutdown, leaving only 2 shard servers running per shard.\nVery often, soon after that another shard server has the same shutdown, leaving not enough shard servers to obtain a majority… Aborting the transactions to that shard which in turn aborts my application and forces retries until the shard servers have restarted (I had to install a 1-minute “cron” job on each shard server to let them restart quickly).\nClearly this is not a stable setup…",
"username": "Rob_De_Langhe"
}
] | How set maxPoolSize via mongod.conf? | 2020-09-13T21:31:23.626Z | How set maxPoolSize via mongod.conf? | 5,670 |
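To see what a shard member is currently using for the pool parameter mentioned above, getParameter can be run against that mongod. A PyMongo sketch; the host/port are placeholders and should point at the member you want to inspect, and the parameter names are taken from the thread.

```python
# Sketch: read the current value of ShardingTaskExecutorPoolMinSize (and the
# max) from a mongod, as discussed above. Host/port are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://shard-host:27018", directConnection=True)

params = client.admin.command(
    {
        "getParameter": 1,
        "ShardingTaskExecutorPoolMinSize": 1,
        "ShardingTaskExecutorPoolMaxSize": 1,
    }
)
print(params)
```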
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "Hello there, I just now linked my Github account with MongoDB account, However the may in my profile is my pet name that most of my peers call me with, so I have the same name at all my social networks and even GitHub uses the same too. But for certification purpose I use my real name, but I don’t see any option to change my name in the profile, I even updated my Github account but the changes are not reflected. Could you’ll please update it (or) provide a link for updation.",
"username": "Mrithyunjay_Gurav"
},
{
"code": "",
"text": "Hi @Mrithyunjay_GuravWelcome to the forum!\nApologies for the late response. Were you able to solve the issue?\nYou can find more information on how to change your account settings here: Edit Personal Settings — MongoDB Cloud ManagerBest,Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Updation of profile details: change of profile name | 2020-12-22T09:58:54.631Z | Updation of profile details: change of profile name | 6,218 |
null | [
"connecting"
] | [
{
"code": "",
"text": "Hi all,\nI am doing performance testing of a rest API with more than 100 concurrent users.\nApparently, mongodb does not create more than 103 connection, and throwing errors.\nI am using mongodb community version on Windows 10 and running mongod via CMD.\nLet me know if there is way to create more connection when concurrent users are increased.\nBelow is the output of connections:db.serverStatus().connections\n{\n“current” : 103,\n“available” : 999897,\n“totalCreated” : 103,\n“active” : 78,\n“exhaustIsMaster” : 1,\n“exhaustHello” : 0,\n“awaitingTopologyChanges” : 1\n}",
"username": "major1mong"
},
{
"code": "serverStatusavailablemongoserverStatus()",
"text": "Hi @major1mong,Your serverStatus output shows plenty of available server connections, so I would look into the configuration used by the driver for your REST API.Most MongoDB drivers use connection pools to allow connections to be reused (saving on the overhead of establishing connections) and to manage the number of connections per client (trying to avoid overwhelming a deployment).The 103 limit sounds like you could be hitting a driver connection pool default of 100 connections (which happens to be the default for a few drivers, like Java). The few extra connections would be from monitoring or admin interaction (like your mongo shell session to get serverStatus() output).What specific MongoDB driver & version are you using for your REST API?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie, thanks for you response.\nI am using mongodb-driver-reactivestreams-4.1.1.jar.\nPlease let me know if there is a way to increase the number of connections from a driver configuration on the app side. Thanks.",
"username": "major1mong"
},
{
"code": "",
"text": "I was able to change the maxPoolSize via connection URI and it worked as expected.",
"username": "major1mong"
},
{
"code": "",
"text": "When I change the pool size to 200 or more, I get lot of socket errors. I presume socket errors may be due to lack of resources available in the system.",
"username": "major1mong"
},
{
"code": "net.ipv4.ip_local_port_range = 2000 65535\nnet.ipv4.tcp_fin_timeout = 15\n{\"t\":{\"$date\":\"2021-01-04T13:45:47.800+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): NetworkInterfaceExceededTimeLimit: Remote command timed out while waiting to get a connection from the pool, took 31481ms, timeout was set to 20000ms\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)202, mongo::ExceptionForCat<(mongo::ErrorCategory)1>, mongo::ExceptionForCat<(mongo::ErrorCategory)10> >\\n\"}}",
"text": "hi major1mong, did you find which resource-setting in your system was limiting the number of sockets that can be created for the pools ? We seem to encounter the same situation, our system hasso it would be able to handle (65535-2000)/15 = 4235 sockets per second per IP.\nBut still we encounter lots of cases where a shard server shuts down with a logmessage like this:",
"username": "Rob_De_Langhe"
}
] | Not creating more than 103 connection | 2021-01-02T08:27:41.983Z | Not creating more than 103 connection | 3,139 |
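The fix described in this thread (raising maxPoolSize through the connection string) is a driver-side setting. A PyMongo sketch of the same idea is below; the URI and pool size are placeholders, and as the thread notes, the right value depends on the resources available to the system.

```python
# Sketch of raising the driver connection pool size via the URI, as described
# above, then re-checking the server-side counters. URI and pool size are
# placeholders; the safe value depends on available system resources.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?maxPoolSize=200")

conns = client.admin.command("serverStatus")["connections"]
print("current:", conns["current"], "available:", conns["available"])
```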
null | [
"sharding",
"configuration"
] | [
{
"code": "{\"t\":{\"$date\":\"2021-01-04T13:45:47.800+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:47.800+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): NetworkInterfaceExceededTimeLimit: Remote command timed out while waiting to get a connection from the pool, took 31481ms, timeout was set to 20000ms\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)202, mongo::ExceptionForCat<(mongo::ErrorCategory)1>, mongo::ExceptionForCat<(mongo::ErrorCategory)10> >\\n\"}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31431, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"563F1146B921\",\"b\":\"563F0E788000\",\"o\":\"2CE3921\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1E1\"},{\"a\":\"563F1146CF59\",\"b\":\"563F0E788000\",\"o\":\"2CE4F59\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"},{\"a\":\"563F1146A5F6\",\"b\":\"563F0E788000\",\"o\":\"2CE25F6\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"s+\":\"A6\"},{\"a\":\"563F115F5A16\",\"b\":\"563F0E788000\",\"o\":\"2E6DA16\",\"s\":\"_ZN10__cxxabiv111__terminateEPFvvE\",\"s+\":\"6\"},{\"a\":\"563F11689DB9\",\"b\":\"563F0E788000\",\"o\":\"2F01DB9\",\"s\":\"__cxa_call_terminate,\"s+\":\"39\"},{\"a\":\"563F115F5435\",\"b\":\"563F0E788000\",\"o\":\"2E6D435\",\"s\":\"__gxx_personality_v0\",\"s+\":\"2C5\"},{\"a\":\"7F394622F763\",\"b\":\"7F394621F000\",\"o\":\"10763\",\"s\":\"_Unwind_GetTextRelBase\",\"s+\":\"1E13\"},{\"a\":\"7F394623007D\",\"b\":\"7F394621F000\",\"o\":\"1107D\",\"s\":\"_Unwind_Resume\",\"s+\":\"12D\"},{\"a\":\"563F0F5FDC30\",\"b\":\"563F0E788000\",\"o\":\"E75C30\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL16CommandStateBase8setTimerEv.cold.1687\",\"s+\":\"78\"},{\"a\":\"563F10E17678\",\"b\":\"563F0E788000\",\"o\":\"268F678\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL19ExhaustCommandState11sendRequestESt10shared_ptrINS1_12RequestStateEE\",\"s+\":\"38\"},{\"a\":\"563F10E1B2A1\",\"b\":\"563F0E788000\",\"o\":\"26932A1\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL14RequestManager7trySendENS_10StatusWithISt10unique_ptrINS0_14ConnectionPool19ConnectionInterfaceESt8functionIFvPS6_EEEEEm\",\"s+\":\"C41\"},{\"a\":\"563F10E1BB7E\",\"b\":\"563F0E788000\",\"o\":\"2693B7E\",\"s\":\"_ZZN5mongo15unique_functionIFvNS_6StatusEEE8makeImplIZZNOS_14ExecutorFutureISt10unique_ptrINS_8executor14ConnectionPool19ConnectionInterfaceESt8functionIFvPS9_EEEE8getAsyncIZNS7_18NetworkInterfaceTL19startExhaustCommandERKNS7_12TaskExecutor14CallbackHandleERNS7_24RemoteCommandRequestImplISt6vectorINS_11HostAndPortESaISO_EEEEONS0_IFvRKNS7_26RemoteCommandOnAnyResponseEEEERKSt10shared_ptrINS_5BatonEEEUlT_E0_Li0EEEvOS14_ENUlNS_10StatusWithISE_EEE_clES18_EUlS1_E_EEDaS16_EN12SpecificImpl4callEOS1_\",\"s+\":\"CE\"},{\"a\":\"563F10E4EA09\",\"b\":\"563F0E788000\",\"o\":\"26C6A09\",\"s\":\"_ZN4asio6detail11executor_opINS0_15work_dispatcherIZN5mongo9transport18TransportLayerASIO11ASIOReactor8scheduleENS3_15unique_functionIFvNS3_6StatusEEEEEUlvE_EESaIvENS0_19scheduler
_operationEE11do_completeEPvPSE_RKSt10error_codem\",\"s+\":\"89\"},{\"a\":\"563F10F8F714\",\"b\":\"563F0E788000\",\"o\":\"2807714\",\"s\":\"_ZN4asio6detail9scheduler10do_run_oneERNS0_27conditionally_enabled_mutex11scoped_lockERNS0_21scheduler_thread_infoERKSt10error_code\",\"s+\":\"3B4\"},{\"a\":\"563F10F8F9A5\",\"b\":\"563F0E788000\",\"o\":\"28079A5\",\"s\":\"_ZN4asio6detail9scheduler3runERSt10error_code\",\"s+\":\"115\"},{\"a\":\"563F10F9762E\",\"b\":\"563F0E788000\",\"o\":\"280F62E\",\"s\":\"_ZN4asio10io_context3runEv\",\"s+\":\"3E\"},{\"a\":\"563F10E400C6\",\"b\":\"563F0E788000\",\"o\":\"26B80C6\",\"s\":\"_ZN5mongo9transport18TransportLayerASIO11ASIOReactor3runEv\",\"s+\":\"36\"},{\"a\":\"563F10E0D348\",\"b\":\"563F0E788000\",\"o\":\"2685348\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL4_runEv\",\"s+\":\"C8\"},{\"a\":\"563F10E0D58D\",\"b\":\"563F0E788000\",\"o\":\"268558D\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_8executor18NetworkInterfaceTL7startupEvEUlvE_JELi0EEET_DpOT0_EUlvE_EEEEE6_M_runEv\",\"s+\":\"6D\"},{\"a\":\"563F1161147F\",\"b\":\"563F0E788000\",\"o\":\"2E8947F\",\"s\":\"execute_native_thread_routine\",\"s+\":\"F\"},{\"a\":\"7F39460076DB\",\"b\":\"7F3946000000\",\"o\":\"76DB\",\"s\":\"start_thread\",\"s+\":\"DB\"},{\"a\":\"7F3945D3088F\",\"b\":\"7F3945C0F000\",\"o\":\"12188F\",\"s\":\"clone\",\"s+\":\"3F\"}],\"processInfo\":{\"mongodbVersion\":\"4.4.2\",\"gitVersion\":\"15e73dc5738d2278b688f8929aee605fe4279b0e\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"4.15.0-122-generic\",\"version\":\"#124-Ubuntu SMP Thu Oct 15 13:03:05 UTC 2020\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"563F0E788000\",\"elfType\":3,\"buildId\":\"D18F657A1E06C333C2AEE534E3047044B0653DBF\"},{\"b\":\"7F394621F000\",\"path\":\"/lib/x86_64-linux-gnu/libgcc_s.so.1\",\"elfType\":3,\"buildId\":\"039AE85FEF075EC14FE3528762A0645C8CF73B29\"},{\"b\":\"7F3946000000\",\"path\":\"/lib/x86_64-linux-gnu/libpthread.so.0\",\"elfType\":3,\"buildId\":\"28C6AADE70B2D40D1F0F3D0A1A0CAD1AB816448F\"},{\"b\":\"7F3945C0F000\",\"path\":\"/lib/x86_64-linux-gnu/libc.so.6\",\"elfType\":3,\"buildId\":\"B417C0BA7CC5CF06D1D1BED6652CEDB9253C60D0\"}]}}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F1146B921\",\"b\":\"563F0E788000\",\"o\":\"2CE3921\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1E1\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F1146CF59\",\"b\":\"563F0E788000\",\"o\":\"2CE4F59\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F1146A5F6\",\"b\":\"563F0E788000\",\"o\":\"2CE25F6\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"s+\":\"A6\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"563F115F5A16\",\"b\":\"563F0E788000\",\"o\":\"2E6DA16\",\"s\":\"_ZN10__cxxabiv111__terminateEPFvvE\",\"s+\":\"6\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F11689DB9\",\"b\":\"563F0E788000\",\"o\":\"2F01DB9\",\"s\":\"__cxa_call_terminate\",\"s+\":\"39\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F115F5435\",\"b\":\"563F0E788000\",\"o\":\"2E6D435\",\"s\":\"__gxx_personality_v0\",\"s+\":\"2C5\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7F394622F763\",\"b\":\"7F394621F000\",\"o\":\"10763\",\"s\":\"_Unwind_GetTextRelBase\",\"s+\":\"1E13\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\": {\"a\":\"7F394623007D\",\"b\":\"7F394621F000\",\"o\":\"1107D\",\"s\":\"_Unwind_Resume\",\"s+\":\"12D\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F0F5FDC30\",\"b\":\"563F0E788000\",\"o\":\"E75C30\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL16CommandStateBase8setTimerEv.cold.1687\",\"s+\":\"78\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.267+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10E17678\",\"b\":\"563F0E788000\",\"o\":\"268F678\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL19ExhaustCommandState11sendRequestESt10shared_ptrINS1_12RequestStateEE\",\"s+\":\"38\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10E1B2A1\",\"b\":\"563F0E788000\",\"o\":\"26932A1\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL14RequestManager7trySendENS_10StatusWithISt10unique_ptrINS0_14ConnectionPool19ConnectionInterfaceESt8functionIFvPS6_EEEEEm\",\"s+\":\"C41\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10E1BB7E\",\"b\":\"563F0E788000\",\"o\":\"2693B7E\",\"s\":\"_ZZN5mongo15unique_functionIFvNS_6StatusEEE8makeImplIZZNOS_14ExecutorFutureISt10unique_ptrINS_8executor14ConnectionPool19ConnectionInterfaceESt8functionIFvPS9_EEEE8getAsyncIZNS7_18NetworkInterfaceTL19startExhaustCommandERKNS7_12TaskExecutor14CallbackHandleERNS7_24RemoteCommandRequestImplISt6vectorINS_11HostAndPortESaISO_EEEEONS0_IFvRKNS7_26RemoteCommandOnAnyResponseEEEERKSt10shared_ptrINS_5BatonEEEUlT_E0_Li0EEEvOS14_ENUlNS_10StatusWithISE_EEE_clES18_EUlS1_E_EEDaS16_EN12SpecificImpl4callEOS1_\",\"s+\":\"CE\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"563F10E4EA09\",\"b\":\"563F0E788000\",\"o\":\"26C6A09\",\"s\":\"_ZN4asio6detail11executor_opINS0_15work_dispatcherIZN5mongo9transport18TransportLayerASIO11ASIOReactor8scheduleENS3_15unique_functionIFvNS3_6StatusEEEEEUlvE_EESaIvENS0_19scheduler_operationEE11do_completeEPvPSE_RKSt10error_codem\",\"s+\":\"89\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10F8F714\",\"b\":\"563F0E788000\",\"o\":\"2807714\",\"s\":\"_ZN4asio6detail9scheduler10do_run_oneERNS0_27conditionally_enabled_mutex11scoped_lockERNS0_21scheduler_thread_infoERKSt10error_code\",\"s+\":\"3B4\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10F8F9A5\",\"b\":\"563F0E788000\",\"o\":\"28079A5\",\"s\":\"_ZN4asio6detail9scheduler3runERSt10error_code\",\"s+\":\"115\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10F9762E\",\"b\":\"563F0E788000\",\"o\":\"280F62E\",\"s\":\"_ZN4asio10io_context3runEv\",\"s+\":\"3E\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10E400C6\",\"b\":\"563F0E788000\",\"o\":\"26B80C6\",\"s\":\"_ZN5mongo9transport18TransportLayerASIO11ASIOReactor3runEv\",\"s+\":\"36\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10E0D348\",\"b\":\"563F0E788000\",\"o\":\"2685348\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL4_runEv\",\"s+\":\"C8\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F10E0D58D\",\"b\":\"563F0E788000\",\"o\":\"268558D\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_8executor18NetworkInterfaceTL7startupEvEUlvE_JELi0EEET_DpOT0_EUlvE_EEEEE6_M_runEv\",\"s+\":\"6D\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"563F1161147F\",\"b\":\"563F0E788000\",\"o\":\"2E8947F\",\"s\":\"execute_native_thread_routine\",\"s+\":\"F\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7F39460076DB\",\"b\":\"7F3946000000\",\"o\":\"76DB\",\"s\":\"start_thread\",\"s+\":\"DB\"}}}\n{\"t\":{\"$date\":\"2021-01-04T13:45:48.268+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":\"frame\":\"a\":\"7F3945D3088F\",\"b\":\"7F3945C0F000\",\"o\":\"12188F\",\"s\":\"clone\",\"s+\":\"3F\"}}}\n\n{\"t\":{\"$date\":\"2021-01-04T13:46:01.892+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"main\",\"msg\":\"***** SERVER 
RESTARTED *****\"}\nsetParameter:\n ShardingTaskExecutorPoolMinSize: 90\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.326+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.327+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): NetworkInterfaceExceededTimeLimit: Remote command timed out while waiting to get a connection from the pool, took 30851ms, timeout was set to 20000ms\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)202, mongo::ExceptionForCat<(mongo::ErrorCategory)1>, mongo::ExceptionForCat<(mongo::ErrorCategory)10> >\\n\"}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.369+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn56560\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.100.22.58:48090\",\"connectionId\":56560,\"connectionCount\":823}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.374+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn56561\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.100.22.58:48708\",\"connectionId\":56561,\"connectionCount\":822}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.379+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.100.22.58:53872\",\"connectionId\":57245,\"connectionCount\":823}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.389+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn57245\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.100.22.58:53872\",\"client\":\"conn57245\",\"doc\":{\"driver\":{\"name\":\"NetworkInterfaceTL\",\"version\":\"4.4.2\"},\"os\":{\"type\":\"Linux\",\"name\":\"Ubuntu\",\"architecture\":\"x86_64\",\"version\":\"18.04\"}}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.401+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.168.201.122:17792\",\"connectionId\":57246,\"connectionCount\":824}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31431, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"BACKTRACE: 
{bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"55AD0DFDF921\",\"b\":\"55AD0B2FC000\",\"o\":\"2CE3921\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1E1\"},{\"a\":\"55AD0DFE0F59\",\"b\":\"55AD0B2FC000\",\"o\":\"2CE4F59\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"},{\"a\":\"55AD0DFDE5F6\",\"b\":\"55AD0B2FC000\",\"o\":\"2CE25F6\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"s+\":\"A6\"},{\"a\":\"55AD0E169A16\",\"b\":\"55AD0B2FC000\",\"o\":\"2E6DA16\",\"s\":\"_ZN10__cxxabiv111__terminateEPFvvE\",\"s+\":\"6\"},{\"a\":\"55AD0E1FDDB9\",\"b\":\"55AD0B2FC000\",\"o\":\"2F01DB9\",\"s\":\"__cxa_call_terminate\",\"s+\":\"39\"},{\"a\":\"55AD0E169435\",\"b\":\"55AD0B2FC000\",\"o\":\"2E6D435\",\"s\":\"__gxx_personality_v0\",\"s+\":\"2C5\"},{\"a\":\"7F680F90E763\",\"b\":\"7F680F8FE000\",\"o\":\"10763\",\"s\":\"_Unwind_GetTextRelBase\",\"s+\":\"1E13\"},{\"a\":\"7F680F90F07D\",\"b\":\"7F680F8FE000\",\"o\":\"1107D\",\"s\":\"_Unwind_Resume\",\"s+\":\"12D\"},{\"a\":\"55AD0C171C30\",\"b\":\"55AD0B2FC000\",\"o\":\"E75C30\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL16CommandStateBase8setTimerEv.cold.1687\",\"s+\":\"78\"},{\"a\":\"55AD0D98B678\",\"b\":\"55AD0B2FC000\",\"o\":\"268F678\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL19ExhaustCommandState11sendRequestESt10shared_ptrINS1_12RequestStateEE\",\"s+\":\"38\"},{\"a\":\"55AD0D98F2A1\",\"b\":\"55AD0B2FC000\",\"o\":\"26932A1\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL14RequestManager7trySendENS_10StatusWithISt10unique_ptrINS0_14ConnectionPool19ConnectionInterfaceESt8functionIFvPS6_EEEEEm\",\"s+\":\"C41\"},{\"a\":\"55AD0D98FB7E\",\"b\":\"55AD0B2FC000\",\"o\":\"2693B7E\",\"s\":\"_ZZN5mongo15unique_functionIFvNS_6StatusEEE8makeImplIZZNOS_14ExecutorFutureISt10unique_ptrINS_8executor14ConnectionPool19ConnectionInterfaceESt8functionIFvPS9_EEEE8getAsyncIZNS7_18NetworkInterfaceTL19startExhaustCommandERKNS7_12TaskExecutor14CallbackHandleERNS7_24RemoteCommandRequestImplISt6vectorINS_11HostAndPortESaISO_EEEEONS0_IFvRKNS7_26RemoteCommandOnAnyResponseEEEERKSt10shared_ptrINS_5BatonEEEUlT_E0_Li0EEEvOS14_ENUlNS_10StatusWithISE_EEE_clES18_EUlS1_E_EEDaS16_EN12SpecificImpl4callEOS1_\",\"s+\":\"CE\"},{\"a\":\"55AD0D9C2A09\",\"b\":\"55AD0B2FC000\",\"o\":\"26C6A09\",\"s\":\"_ZN4asio6detail11executor_opINS0_15work_dispatcherIZN5mongo9transport18TransportLayerASIO11ASIOReactor8scheduleENS3_15unique_functionIFvNS3_6StatusEEEEEUlvE_EESaIvENS0_19scheduler_operationEE11do_completeEPvPSE_RKSt10error_codem\",\"s+\":\"89\"},{\"a\":\"55AD0DB03714\",\"b\":\"55AD0B2FC000\",\"o\":\"2807714\",\"s\":\"_ZN4asio6detail9scheduler10do_run_oneERNS0_27conditionally_enabled_mutex11scoped_lockERNS0_21scheduler_thread_infoERKSt10error_code\",\"s+\":\"3B4\"},{\"a\":\"55AD0DB039A5\",\"b\":\"55AD0B2FC000\",\"o\":\"28079A5\",\"s\":\"_ZN4asio6detail9scheduler3runERSt10error_code\",\"s+\":\"115\"},{\"a\":\"55AD0DB0B62E\",\"b\":\"55AD0B2FC000\",\"o\":\"280F62E\",\"s\":\"_ZN4asio10io_context3runEv\",\"s+\":\"3E\"},{\"a\":\"55AD0D9B40C6\",\"b\":\"55AD0B2FC000\",\"o\":\"26B80C6\",\"s\":\"_ZN5mongo9transport18TransportLayerASIO11ASIOReactor3runEv\",\"s+\":\"36\"},{\"a\":\"55AD0D981348\",\"b\":\"55AD0B2FC000\",\"o\":\"2685348\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL4_runEv\",\"s+\":\"C8\"},{\"a\":\"55AD0D98158D\",\"b\":\"55AD0B2FC000\",\"o\":\"268558D\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_8executor18NetworkInterfaceTL7startupEvEUlvE_JELi0EEET_DpOT0
_EUlvE_EEEEE6_M_runEv\",\"s+\":\"6D\"},{\"a\":\"55AD0E18547F\",\"b\":\"55AD0B2FC000\",\"o\":\"2E8947F\",\"s\":\"execute_native_thread_routine\",\"s+\":\"F\"},{\"a\":\"7F680F6E66DB\",\"b\":\"7F680F6DF000\",\"o\":\"76DB\",\"s\":\"start_thread\",\"s+\":\"DB\"},{\"a\":\"7F680F40F88F\",\"b\":\"7F680F2EE000\",\"o\":\"12188F\",\"s\":\"clone\",\"s+\":\"3F\"}],\"processInfo\":{\"mongodbVersion\":\"4.4.2\",\"gitVersion\":\"15e73dc5738d2278b688f8929aee605fe4279b0e\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"4.15.0-128-generic\",\"version\":\"#131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"55AD0B2FC000\",\"elfType\":3,\"buildId\":\"D18F657A1E06C333C2AEE534E3047044B0653DBF\"},{\"b\":\"7F680F8FE000\",\"path\":\"/lib/x86_64-linux-gnu/libgcc_s.so.1\",\"elfType\":3,\"buildId\":\"039AE85FEF075EC14FE3528762A0645C8CF73B29\"},{\"b\":\"7F680F6DF000\",\"path\":\"/lib/x86_64-linux-gnu/libpthread.so.0\",\"elfType\":3,\"buildId\":\"28C6AADE70B2D40D1F0F3D0A1A0CAD1AB816448F\"},{\"b\":\"7F680F2EE000\",\"path\":\"/lib/x86_64-linux-gnu/libc.so.6\",\"elfType\":3,\"buildId\":\"B417C0BA7CC5CF06D1D1BED6652CEDB9253C60D0\"}]}}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0DFDF921\",\"b\":\"55AD0B2FC000\",\"o\":\"2CE3921\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1E1\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0DFE0F59\",\"b\":\"55AD0B2FC000\",\"o\":\"2CE4F59\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0DFDE5F6\",\"b\":\"55AD0B2FC000\",\"o\":\"2CE25F6\",\"s\":\"_ZN5mongo12_GLOBAL__N_111myTerminateEv\",\"s+\":\"A6\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0E169A16\",\"b\":\"55AD0B2FC000\",\"o\":\"2E6DA16\",\"s\":\"_ZN10__cxxabiv111__terminateEPFvvE\",\"s+\":\"6\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0E1FDDB9\",\"b\":\"55AD0B2FC000\",\"o\":\"2F01DB9\",\"s\":\"__cxa_call_terminate\",\"s+\":\"39\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0E169435\",\"b\":\"55AD0B2FC000\",\"o\":\"2E6D435\",\"s\":\"__gxx_personality_v0\",\"s+\":\"2C5\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7F680F90E763\",\"b\":\"7F680F8FE000\",\"o\":\"10763\",\"s\":\"_Unwind_GetTextRelBase\",\"s+\":\"1E13\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, 
\"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7F680F90F07D\",\"b\":\"7F680F8FE000\",\"o\":\"1107D\",\"s\":\"_Unwind_Resume\",\"s+\":\"12D\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0C171C30\",\"b\":\"55AD0B2FC000\",\"o\":\"E75C30\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL16CommandStateBase8setTimerEv.cold.1687\",\"s+\":\"78\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0D98B678\",\"b\":\"55AD0B2FC000\",\"o\":\"268F678\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL19ExhaustCommandState11sendRequestESt10shared_ptrINS1_12RequestStateEE\",\"s+\":\"38\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0D98F2A1\",\"b\":\"55AD0B2FC000\",\"o\":\"26932A1\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL14RequestManager7trySendENS_10StatusWithISt10unique_ptrINS0_14ConnectionPool19ConnectionInterfaceESt8functionIFvPS6_EEEEEm\",\"s+\":\"C41\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0D98FB7E\",\"b\":\"55AD0B2FC000\",\"o\":\"2693B7E\",\"s\":\"_ZZN5mongo15unique_functionIFvNS_6StatusEEE8makeImplIZZNOS_14ExecutorFutureISt10unique_ptrINS_8executor14ConnectionPool19ConnectionInterfaceESt8functionIFvPS9_EEEE8getAsyncIZNS7_18NetworkInterfaceTL19startExhaustCommandERKNS7_12TaskExecutor14CallbackHandleERNS7_24RemoteCommandRequestImplISt6vectorINS_11HostAndPortESaISO_EEEEONS0_IFvRKNS7_26RemoteCommandOnAnyResponseEEEERKSt10shared_ptrINS_5BatonEEEUlT_E0_Li0EEEvOS14_ENUlNS_10StatusWithISE_EEE_clES18_EUlS1_E_EEDaS16_EN12SpecificImpl4callEOS1_\",\"s+\":\"CE\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0D9C2A09\",\"b\":\"55AD0B2FC000\",\"o\":\"26C6A09\",\"s\":\"_ZN4asio6detail11executor_opINS0_15work_dispatcherIZN5mongo9transport18TransportLayerASIO11ASIOReactor8scheduleENS3_15unique_functionIFvNS3_6StatusEEEEEUlvE_EESaIvENS0_19scheduler_operationEE11do_completeEPvPSE_RKSt10error_codem\",\"s+\":\"89\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0DB03714\",\"b\":\"55AD0B2FC000\",\"o\":\"2807714\",\"s\":\"_ZN4asio6detail9scheduler10do_run_oneERNS0_27conditionally_enabled_mutex11scoped_lockERNS0_21scheduler_thread_infoERKSt10error_code\",\"s+\":\"3B4\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0DB039A5\",\"b\":\"55AD0B2FC000\",\"o\":\"28079A5\",\"s\":\"_ZN4asio6detail9scheduler3runERSt10error_code\",\"s+\":\"115\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, 
\"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0DB0B62E\",\"b\":\"55AD0B2FC000\",\"o\":\"280F62E\",\"s\":\"_ZN4asio10io_context3runEv\",\"s+\":\"3E\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0D9B40C6\",\"b\":\"55AD0B2FC000\",\"o\":\"26B80C6\",\"s\":\"_ZN5mongo9transport18TransportLayerASIO11ASIOReactor3runEv\",\"s+\":\"36\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0D981348\",\"b\":\"55AD0B2FC000\",\"o\":\"2685348\",\"s\":\"_ZN5mongo8executor18NetworkInterfaceTL4_runEv\",\"s+\":\"C8\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0D98158D\",\"b\":\"55AD0B2FC000\",\"o\":\"268558D\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_8executor18NetworkInterfaceTL7startupEvEUlvE_JELi0EEET_DpOT0_EUlvE_EEEEE6_M_runEv\",\"s+\":\"6D\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"55AD0E18547F\",\"b\":\"55AD0B2FC000\",\"o\":\"2E8947F\",\"s\":\"execute_native_thread_routine\",\"s+\":\"F\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7F680F6E66DB\",\"b\":\"7F680F6DF000\",\"o\":\"76DB\",\"s\":\"start_thread\",\"s+\":\"DB\"}}}\n{\"t\":{\"$date\":\"2021-01-04T23:39:51.427+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7F680F40F88F\",\"b\":\"7F680F2EE000\",\"o\":\"12188F\",\"s\":\"clone\",\"s+\":\"3F\"}}}\n\n{\"t\":{\"$date\":\"2021-01-04T23:40:01.713+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"main\",\"msg\":\"***** SERVER RESTARTED *****\"}\n",
"text": "hi gurus,\nwe run community version 4.4.2 on Ubuntu 18.04, on 90 shards of 3 LXC-containers each (thus 270 containers), on top of XFS filesystems.\nWhen we start our continuous load stream then soon some shard servers crashed with the following logs:This is not explicitly documented with some solution, but some users posted messages that their identical problem was solved by adding some extra pool-size parameters to the startup. Thus I added the following settings to my “/etc/mongod.conf” files on all shard servers:But this causes even more shard servers to crash, now with the following log message:So I will remove again that “setParameter” from “/etc/mongod.conf” because it only makes things worse.Has anyone some suggestions to fix my initial problem, or what are safe values for the pool-size parameters, please ?thx!\nRob",
"username": "Rob_De_Langhe"
},
{
"code": "{\"t\":{\"$date\":\"2021-01-04T13:45:47.800+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): NetworkInterfaceExceededTimeLimit: Remote command timed out while waiting to get a connection from the pool, took 31481ms, timeout was set to 20000ms\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)202, mongo::ExceptionForCat<(mongo::ErrorCategory)1>, mongo::ExceptionForCat<(mongo::ErrorCategory)10> >\\n\"}}\n",
"text": "It turned out that some of my servers where not (NTP-)time synchronized. Fixing that however, did still not remove the many cases where a shard server did shutdown with a log message like below:",
"username": "Rob_De_Langhe"
}
] | ShardingTaskExecutor-PoolMinSize | 2021-01-05T07:47:47.578Z | ShardingTaskExecutor-PoolMinSize | 2,113 |
null | [] | [
{
"code": "",
"text": "Hi,\nI was trying to build mongodb source code and run unit and integration tests. But looks like the documentation on mongodb wiki is older? Test the MongoDB Server · mongodb/mongo Wiki · GitHub\nI face issues running the the resmoke script\npython buildscripts/resmoke.py --help\ngives error as it needs evergreen module.\nIs there any script or better yet, a docker image with all the dependencies, to build and run mongo tests?Thanks,\nUnmesh",
"username": "Unmesh_Joshi"
},
{
"code": "",
"text": "Hi Unmesh,Thank you for the question. The testing documentation wiki page has been updated with additional setup steps. See the updated section on resmoke.py here.Robert",
"username": "Robert_Guo"
},
{
"code": "",
"text": "I had to run following to install dev-requirements, which installed the correct versions of evergreen and other dependencies…\npython3 -m pip install -r etc/pip/dev-requirements.txtFor building unittests (and generating build/unittests.txt), whats the target?",
"username": "Unmesh_Joshi"
},
{
"code": "",
"text": "It seems to be install-unittests and not unittests.\nIs there a quick developer build option? install-unittests seems to be taking a lot of disk space.",
"username": "Unmesh_Joshi"
},
{
"code": "./buildscripts/scons.py --ninja generate-ninjaninja +<string>ninja",
"text": "Try running ./buildscripts/scons.py --ninja generate-ninja to generate a ninja build file. Then if you run ninja +<string> it will give you a list of unit test targets starting with . This requires the ninja build tool to be installed.",
"username": "Robert_Guo"
},
{
"code": "--link-model=dynamic --install-action=hardlink",
"text": "Building with Ninja is unlikely to be much faster for a full build than building with SCons unless you have a compute cluster and can drive huge amounts of parallel compile through it. Ninja will be faster to decide what must be rebuilt for an incremental build after making source changes, but it still can’t drive more compile jobs than you have accessible cores. However, if disk space is a concern, you can build with --link-model=dynamic --install-action=hardlink. Note that the resulting binaries are not production quality, but that should be fine because it appears that your goal is to run the various tests.",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "Thanks. I am interested in running tests. Mostly to understand some of the implementation aspects for the work I am doing to documents patterns of distributed systems at Patterns of Distributed Systems\nDocker image build with following Dockerfile builds the source code perfectly fine. If I use Ubuntu20.04 base image instead of Ubuntu18.04, I face issues. But thats fine, I can continue to use Ubuntu18.04\nHaving this kind of Docker images to build will be useful for new developers. Something that scylladb folks do.\nJust curious what is the machine configurations developers use to work with mongodb source code?\nThinkpad P1 gen2 does not seem to be good enough. Any recommendations will be helpful.FROM ubuntu:18.04\nRUN apt update\nRUN apt install software-properties-common -y\nRUN apt update\nRUN apt install git -y\nRUN apt install python3.8 -y\nRUN apt install python3-pip -y\nRUN apt install libpython3.8-dev -y\nRUN ln -S -f /usr/bin/python3.8 /usr/bin/python3\nRUN apt install g+±8 -y\nRUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 800 --slave /usr/bin/g++ g++ /usr/bin/g+±8\nRUN apt-get install libssl-dev -y\nRUN apt-get install -y liblzma-dev\nRUN apt-get install libcurl4-openssl-dev -yCopy and setup python dependencies as an optimization.\nCOPY pip /pip\nRUN python3 -m pip install -r /pip/dev-requirements.txtVOLUME build\nCMD [“/bin/bash”]",
"username": "Unmesh_Joshi"
},
{
"code": "",
"text": "With the docker container mentioned above and using link-model=dynamic --install-action=hardlink, I could run tests.\nIs there a way to run individual tests?",
"username": "Unmesh_Joshi"
},
{
"code": "build/install/binscons ... build/install/bin/foo_testfoo_test",
"text": "Yes, just run the relevant test binary out of the build/install/bin directory after building it with SCons. You can also just build the specific test you are interested in by naming it as the target to scons: scons ... build/install/bin/foo_test will build the foo_test binary, which you can then run from the command line.",
"username": "Andrew_Morrow"
},
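Tying the last two replies together, a minimal command sequence might look like the following sketch; foo_test is a placeholder target name, and the flags come from the earlier suggestion about reducing disk usage:

# build just one test binary (foo_test is a placeholder name)
./buildscripts/scons.py --link-model=dynamic --install-action=hardlink build/install/bin/foo_test
# then run it directly from the install directory
./build/install/bin/foo_test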
{
"code": "",
"text": "Replying to your request about developer machines, it varies, but I would consider something like the highest end MacBook Pro to be the bare minimum. Most developers use linux workstations with at least 16 cores for database development, and often quite a bit more than that. You really want a workstation class system to be able to work with the codebase effectively.",
"username": "Andrew_Morrow"
},
{
"code": "mongodb:masterunmeshjoshi:master",
"text": "Thank you. I was able to build using the docker image I built and also run specific unit tests.\nJust so that this does’nt get lost and might be useful for someone new wanting to build mongo db source code, I have created a pull request with instructions to build and run tests using docker containerAdded a Dockerfile to build Ubuntu 18.04 based image with all the required depen…dencies to run mongo build.",
"username": "Unmesh_Joshi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Running mongodb unit and dbtests | 2021-01-06T11:35:06.444Z | Running mongodb unit and dbtests | 5,549 |
null | [] | [
{
"code": "",
"text": "Can you explain the difference between ReadConcern Available vs ReadPreference Nearest?These two things seem to imply the same thing.",
"username": "Altiano_Gerung"
},
{
"code": "",
"text": "It is quite complex and it is best to refer to the professionally written documentation.Using read concern available returns data with no guarantee that the data has been written to a majority of the replica set members (i.e. may be rolled back). Read concern available can return orphan documents.\nandYou certainly may ask for clarification on what is written over there. There is a green button at the bottom right of the page to provide feedback if you feel the manual page is not sufficient.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Altiano_Gerung,I think to answer your question you first need to understand the difference between a readConcern and a readPreference.The readConcern implies what level of durability are you expecting from the documents read. For example readConcern available will return any available document in the connected node, regardless of readPreference (you can be on a secondary or primary based on your preference).The readPreference implies which nodes are eligible for he connection to be established on. For example the nearest means that the connection is tested for ping latency and node with lowest number will be chosen.You can still try to find majority commited data when reading on that node, regardless if its primary or secondary if your readConcern is default majority.Hope it helps.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
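As an illustration of the distinction described above, here is a minimal Node.js driver sketch; the URI, database and collection names are placeholders. readPreference only chooses which member the query is routed to, while readConcern only chooses the durability guarantee of what is returned:

const { MongoClient } = require("mongodb");

// Placeholder connection string - replace with a real cluster URI.
const client = new MongoClient("mongodb+srv://user:[email protected]/test", {
  useUnifiedTopology: true,
});

async function run() {
  try {
    await client.connect();
    // readPreference decides WHICH member serves the read (lowest latency here);
    // readConcern decides WHAT durability guarantee the returned documents carry.
    const coll = client.db("test").collection("example", {
      readPreference: "nearest",
      readConcern: { level: "majority" },
    });
    console.log(await coll.findOne({}));
  } finally {
    await client.close();
  }
}

run().catch(console.error);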
] | ReadConcern Available vs ReadPreference Nearest | 2021-01-10T14:26:41.846Z | ReadConcern Available vs ReadPreference Nearest | 1,737 |
null | [
"java",
"atlas"
] | [
{
"code": " URI uri = URI.create(\"https://cloud.mongodb\" +\n \".com/api/atlas/v1.0/groups/5f6f561e246090346809ec1c/clusters/wso2-apim-cluster/fts/indexes/\");\n\n HttpPost post = new HttpPost(uri);\n Gson gson = new Gson();\n Map<String, String> map = new HashMap<>();\n Map<String, String> mappings = new HashMap<>();\n mappings.put(\"dynamic\", \"true\");\n\n map.put(\"collectionName\", \"test\");\n map.put(\"database\", \"randomDb\");\n map.put(\"mappings\", gson.toJson(mappings));\n map.put(\"name\", \"default\");\n\n\n try {\n HttpEntity stringEntity = new StringEntity(gson.toJson(map));\n post.setEntity(stringEntity);\n\n DigestScheme md5Auth = new DigestScheme();\n HttpClient client = HttpClientBuilder.create().build();\n HttpResponse authResponse = client.execute(new HttpGet(uri));\n final Header challenge = authResponse.getHeaders(\"WWW-Authenticate\")[0];\n md5Auth.processChallenge(challenge);\n final Header solution = md5Auth.authenticate(\n new UsernamePasswordCredentials(\"\", \"\"),\n post\n );\n\n md5Auth.createCnonce();\n post.addHeader(\"content-type\", \"application/json\");\n post.addHeader(solution.getName(), solution.getValue());\n HttpResponse execute = client.execute(post);\n System.out.println(execute.getStatusLine());\n\n for (Header h : execute.getAllHeaders()\n ) {\n System.out.println(h.getName());\n System.out.println(h.getValue());\n System.out.println();\n\n }\n } catch (IOException e) {\n e.printStackTrace();\n } catch (MalformedChallengeException e) {\n e.printStackTrace();\n } catch (AuthenticationException e) {\n e.printStackTrace();\n }",
"text": "I’m trying to invoke mongodb atlas rest api to create atlas search indexes, I tried the request in postman and it works fine, I have written the following Java code but is getting a 400 bad request error, Any idea if im missing anything here?",
"username": "dushan_Silva"
},
{
"code": " Map<String, Object> map = new HashMap<>();\n Map<String, Object> mappings = new HashMap<>();\n mappings.put(\"dynamic\", true);\n\n map.put(\"collectionName\", \"col1\");\n map.put(\"database\", \"abc\");\n map.put(\"mappings\", mappings);\n map.put(\"name\", \"default\");\n",
"text": "Managed to solve this issue, problem was passing a json string for mappings part of the body, changed as followsEverything else remained same, hope this helps someone ",
"username": "dushan_Silva"
},
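For reference, the corrected map above serializes (via Gson) to a request body shaped like this, with mappings as a nested JSON object rather than an escaped string:

{
  "collectionName": "col1",
  "database": "abc",
  "name": "default",
  "mappings": { "dynamic": true }
}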
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I'm trying to invoke the mongodb atlas api using java, I'm getting a 400 bad request error,Details | 2021-01-09T08:38:50.812Z | I’m trying to invoke the mongodb atlas api using java, I’m getting a 400 bad request error,Details | 3,415 |
null | [
"node-js",
"connecting"
] | [
{
"code": "TypeError: Cannot read property 'replace' of undefined\nmatchesParentDomain\nC:/Users/Owner/Dropbox/GitHub/TestChart/node_modules/mongodb/lib/core/uri_parser.js:24\n 21 | */\n 22 | function matchesParentDomain(srvAddress, parentDomain) {\n 23 | const regex = /^.*?\\./;\n> 24 | const srv = `.${srvAddress.replace(regex, '')}`;\n 25 | const parent = `.${parentDomain.replace(regex, '')}`;\n 26 | return srv.endsWith(parent);\n 27 | }\n",
"text": "I’m having the EXACT same error as Getting error upon await client.connect() in node.JS and Error with await client.connect() node.JS - #6. I’ve literally spent two days on this issue. To add some more information, I’ve narrowed it down to a node.js parsing issue, specifically related to the “mongodb+srv”. (If I remove the + it gets past the error, but then of course doesn’t connect). Here is where it fails…on node.js code. And I do have the latest node version…Any thoughts? So frustrating…\nThanks!",
"username": "Gregory_Fay"
},
{
"code": "const MongoClient = require(\"mongodb\").MongoClient;\n\nconst uri = \"mongodb+srv://readonly:[email protected]/covid19\";\n\nconst client = new MongoClient(uri, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nasync function run() {\n try {\n await client.connect();\n const globalAndUS = client.db('covid19').collection(\"global_and_us\");\n const cursor = globalAndUS.find({ country: \"France\" }).sort([\"date\", -1]).limit(2);\n await cursor.forEach(console.dir);\n } finally {\n await client.close();\n }\n}\n\nrun().catch(console.dir);\nnpm install mongodb\nnode index.js\n{\n useNewUrlParser: true,\n useUnifiedTopology: true,\n}\n",
"text": "Hi @Gregory_Fay,Could you please try this piece of code and let me know if this works for you?File: index.jsI executed this by just running:It’s most probably not the cleanest piece of JS code you will see today as it’s not my main language but I got it to work.My guess is that you probably need one of these in your MongoClient:I hope this helps.\nCheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "useNewUrlParser : trueuseUnifiedTopology : true",
"text": "@MaBeuLux88I did do a quick test. I already had\nuseNewUrlParser : true\nin there, but just added\nuseUnifiedTopology : true\nto my existing code and that did not help. I’ll try your full code if you still think that’s valuable when I get back in a bit!Thanks again,\nGreg",
"username": "Gregory_Fay"
},
{
"code": "async function getData(coll, loadOptions) { //async\n\n console.log(\"DEBUG: At getData()...\")\n\n try {\n\n const results = await query(coll, loadOptions); //await\n\n console.log(results);\n\n //return (results);\n\n } catch (err) {\n\n console.log(\"DEBUG: Query Error!!\", err)\n\n handleError(err, 'Failed to retrieve data');\n\n }\n\n }\n\n const uriMongo = \"mongodb+srv://gfay63:[email protected]/dfs?retryWrites=true&w=majority\";\n\n function goforit() {\n\n console.log(\"DEBUG: About to connect... URI: \", uriMongo)\n\n const clientMongo = new MongoClient(uriMongo, { useNewUrlParser: true });\n\n clientMongo.connect(err => {\n\n if (err) {\n\n console.log(\"DEBUG: Error Connecting!!!\", err);\n\n console.error(err);\n\n return;\n\n }\n\n console.log(\"DEBUG: Connected to database!\")\n\n const collection = new clientMongo.database.db(\"dfs\").collection(\"nba-v3-GamesByDate\");\n\n getData(collection,\n\n {\n\n // This is the loadOptions object - pass in any valid parameters\n\n filter: [\"Day\", \"=\", \"2021-01-06T00:00:00\"],\n\n sort: [{ selector: \"GameID\", desc: true }]\n\n }\n\n );\n\n }\n\n );\n\n }",
"text": "@MaBeuLux88\nThank you for your reply! I am about to head to a quick lunch but will check the moment I get back. In the meantime, here is the code I am using…",
"username": "Gregory_Fay"
},
{
"code": "await client.connect();.${srvAddress.replace(regex, '')}",
"text": "@MaBeuLux88Could you please try this piece of code and let me know if this works for you?At this point, I tested your EXACT piece of code (with of course uri/db/collection changed), and the exact same error occurs. I added some debug lines and showed that, as before, this crashes trying to connect…at this line:\nawait client.connect();\n…as an untapped error as described above. From all my previous digging, it is clearly crashing due to the “mongodb+srv” portion of the uri, thus makes sense that this is where it crashed again.Specifically, traceback shows it is crashing with node.js code, exactly at this line of code from:./node_modules/mongodb/lib/core/uri_parser.js:2424 | const srv = .${srvAddress.replace(regex, '')};FYR, I am on node.js version v14.15.4.I can’t begin to show how grateful I would be for someone to help resolve this! I have found several other people that have had this EXACT same issue on this Forum as well as others, and no-one has has a Solution yet.Huge thanks in advance!\nGreg",
"username": "Gregory_Fay"
},
{
"code": "npm install mongodb",
"text": "I’m running on Node v14.13.0 and look like npm install mongodb tells me 3.6.3.\nWhich version of MongoDB are you trying to connect to?Can you test my piece of code with the same URI I provided? Just to see if you get anything different or can actually execute the query? It points to a public COVID-19 that I have made available in readonly so you can play with it and send some query its way. No worries at all :-).More info about it here.By the way, are you able to connect to you cluster with the Mongo Shell from the same PC? I just want to make sure we ruled out all the MongoDB Atlas parameters like user, password and networking issues.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "clientMongo.database.db(\"dfs\")",
"text": "By the way I also saw in your code that you have an extra “database” in your code which is definitely not going to help you: clientMongo.database.db(\"dfs\").",
"username": "MaBeuLux88"
},
{
"code": "clientMongo.database.db(\"dfs\")C:\\Users\\Owner>mongo \"mongodb+srv://gfay63:[email protected]/dfs?retryWrites=true&w=majority\"\nMongoDB shell version v4.4.3\nconnecting to: mongodb://cluster0-shard-00-01.vzkfs.mongodb.net:27017,cluster0-shard-00-00.vzkfs.mongodb.net:27017,cluster0-shard-00-02.vzkfs.mongodb.net:27017/dfs?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=atlas-h1a9j7-shard-0&retryWrites=true&ssl=true&w=majority\nImplicit session: session { \"id\" : UUID(\"10fc15c2-ed60-4c63-ada5-0468ba21819c\") }\nMongoDB server version: 4.2.11\nWARNING: shell and server versions do not match\nMongoDB Enterprise atlas-h1a9j7-shard-0:PRIMARY>\n",
"text": "@MaBeuLux88 Thanks for the replies…\nIt looks like none of my replies are getting approved in a timely manner since I’m new (last night), so not sure what you’ve seen? I replied to your earlier messages several hours ago. Did you see that “At this point, I tested your EXACT piece of code (with of course uri/db/collection changed)” reply yet? It says “Awaiting Approval 2h” for me, but seems like maybe you are seeing it since you’re an employee? New replies…By the way I also saw in your code that you have an extra “database” in your code which is definitely not going to help you: clientMongo.database.db(\"dfs\") .LOL. Good catch. Just a typo I would have figured out if this ever got there! Which version of MongoDB are you trying to connect to?3.6.3. I am a brand new user so just installed everything last week.Can you test my piece of code with the same URI I provided?Yes, I did that, and STILL get the exact same error. By the way, are you able to connect to your cluster with the Mongo Shell from the same PC?I have been connecting from this same PC via Web, MongoDB Compass, and mongoimport via Powershell perfectly all along. Here is a test via Mongo Shell:LMK what’s next! Much appreciated! ",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "@MaBeuLux88Update: I created a raw, pure, new node application, and the connection works. I have no clue why it doesn’t work in my existing project.",
"username": "Gregory_Fay"
},
{
"code": "npx create-react-app my-app\ncd my-app\nnpm install mongodb\nnpm start\nimport logo from './logo.svg';\nimport './App.css';\n\nconst MongoClient = require(\"mongodb\").MongoClient;\n\nconst uri = \"mongodb+srv://readonly:[email protected]/covid19\";\n\nconst client = new MongoClient(uri, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nasync function run() {\n try {\n await client.connect();\n const globalAndUS = client.db('covid19').collection(\"global_and_us\");\n const cursor = globalAndUS.find({ country: \"France\" }).sort([\"date\", -1]).limit(2);\n await cursor.forEach(console.dir);\n } finally {\n await client.close();\n }\n}\n\nrun().catch(console.dir);\n\nfunction App() {\n return (\n <div className=\"App\">\n <header className=\"App-header\">\n <img src={logo} className=\"App-logo\" alt=\"logo\" />\n <p>\n Edit <code>src/App.js</code> and save to reload.\n </p>\n <a\n className=\"App-link\"\n href=\"https://reactjs.org\"\n target=\"_blank\"\n rel=\"noopener noreferrer\"\n >\n Learn React\n </a>\n </header>\n </div>\n );\n}\n\nexport default App;\n[HMR] Waiting for update signal from WDS...\nTypeError: Cannot read property 'replace' of undefined\n at matchesParentDomain (http://localhost:3000/static/js/0.chunk.js:68308:30)\n at http://localhost:3000/static/js/0.chunk.js:68353:12\n at Object.push../node_modules/node-libs-browser/mock/dns.js.exports.lookup.exports.resolve4.exports.resolve6.exports.resolveCname.exports.resolveMx.exports.resolveNs.exports.resolveTxt.exports.resolveSrv.exports.resolveNaptr.exports.reverse.exports.resolve (http://localhost:3000/static/js/0.chunk.js:85584:5)\n at parseSrvConnectionString (http://localhost:3000/static/js/0.chunk.js:68345:7)\n at parseConnectionString (http://localhost:3000/static/js/0.chunk.js:68847:12)\n at connect (http://localhost:3000/static/js/0.chunk.js:78309:3)\n at http://localhost:3000/static/js/0.chunk.js:76368:5\n at maybePromise (http://localhost:3000/static/js/0.chunk.js:85307:3)\n at MongoClient.push../node_modules/mongodb/lib/mongo_client.js.MongoClient.connect (http://localhost:3000/static/js/0.chunk.js:76365:10)\n at run (http://localhost:3000/static/js/main.chunk.js:190:18)\n at Module.<anonymous> (http://localhost:3000/static/js/main.chunk.js:201:1)\n at Module../src/App.js (http://localhost:3000/static/js/main.chunk.js:331:30)\n at __webpack_require__ (http://localhost:3000/static/js/bundle.js:857:31)\n at fn (http://localhost:3000/static/js/bundle.js:151:20)\n at Module.<anonymous> (http://localhost:3000/static/js/main.chunk.js:444:62)\n at Module../src/index.js (http://localhost:3000/static/js/main.chunk.js:545:30)\n at __webpack_require__ (http://localhost:3000/static/js/bundle.js:857:31)\n at fn (http://localhost:3000/static/js/bundle.js:151:20)\n at Object.1 (http://localhost:3000/static/js/main.chunk.js:681:18)\n at __webpack_require__ (http://localhost:3000/static/js/bundle.js:857:31)\n at checkDeferredModules (http://localhost:3000/static/js/bundle.js:46:23)\n at Array.webpackJsonpCallback [as push] (http://localhost:3000/static/js/bundle.js:33:19)\n at http://localhost:3000/static/js/main.chunk.js:1:65\n",
"text": "@MaBeuLux88Next step: I created a very raw, basic, React App…then inserted your code per the final full code below, and the same parser error occurs. What is going wrong!?And the same error in uri_parser due to “mongodg+srv”…I am going with MongoDB locally until this can be resolved… \nThanks again for your help!Greg",
"username": "Gregory_Fay"
},
{
"code": " const MongoClient = require(\"mongodb\").MongoClient;\n\nconst uri = \"mongodb+srv://user:[email protected]/test?retryWrites=true&w=majority\";\n\nconst client = new MongoClient(uri, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nasync function run() {\n try {\n await client.connect();\n const coll = client.db('test').collection('movies');\n const cursor = coll.find();\n await cursor.forEach(console.dir);\n } finally {\n await client.close();\n }\n}\n\nrun().catch(console.dir);\nindex.jsnpm install mongodb\nnode index.js\n",
"text": "Ok I just understood what is wrong here and why this will never work .What I sent you:is back end code.Copy paste this in a new folder in an index.js file and replace the URI with a valid Atlas URI. Then runThis will work, because it’s a backend application, executed by Node.js.What you are trying to do, is use the MongoDB Node.js driver in a front end application which is executed in the browser - so it won’t work. And even if you got it to work, that’s a terrible idea for several reasons.You need a secure channel between your backend code (Node.js server or Java or Python or whatever you want really) which is a trusted environment which can communicate with your Atlas cluster and your front end code. Usually, people use a REST API or a GraphQL API with some security features like authentification, tokens, TLS, etc.Basically, you need something between your front end code - which is just your presentation layer - and your database which contains your precious data. This back end layer is where you will implement a few things:You can choose the traditional path and implement a back end system as I described above, or you can choose the MongoDB Realm path.All the stuff I described above can be implemented easily in MongoDB Realm and allow your front end application to access safely your MongoDB Atlas cluster.I implemented a free COVID-19 REST and GraphQL API using MongoDB Realm and I did 2 blog posts to explain how I did:I hope this helps !Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
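To make the "backend layer" idea above concrete, here is a bare-bones sketch, assuming Express and a placeholder Atlas URI, with none of the security features mentioned above; the React front end would call this HTTP endpoint instead of using the driver directly:

const express = require("express");
const { MongoClient } = require("mongodb");

// Placeholder URI - replace with your own Atlas connection string.
const uri = "mongodb+srv://user:[email protected]/test";
const client = new MongoClient(uri, { useUnifiedTopology: true });

const app = express();

// The front end calls GET /api/movies; only this server talks to Atlas.
app.get("/api/movies", async (req, res) => {
  try {
    const docs = await client.db("test").collection("movies").find().limit(10).toArray();
    res.json(docs);
  } catch (err) {
    res.status(500).json({ error: "query failed" });
  }
});

client.connect().then(() => {
  app.listen(3001, () => console.log("API listening on port 3001"));
});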
{
"code": "",
"text": "@MaBeuLux88\nThank you so much. I was beginning to realize this last night and have been digging into it. This post helps jump start me a bit more, and confirms the issue. That’s what I get for trying to learn too much too fast! But it all makes complete sense now.Thank you!\nGreg",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "Enjoy & keep up the good work !Also if you are learning, checkout our MongoDB free courses on MongoDB University. They will teach you the basics of back end. Especially this one might have your interest if you are trying to build a Node.js back end: MongoDB Courses and Trainings | MongoDB University.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I’ll definitely go through this course. Thank you. ",
"username": "Gregory_Fay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to connect with node.js due to "mongodb+srv"; Get error...TypeError: Cannot read property 'replace' of undefined matchesParentDomain | 2021-01-08T03:58:32.639Z | Unable to connect with node.js due to “mongodb+srv”; Get error…TypeError: Cannot read property ‘replace’ of undefined matchesParentDomain | 45,248 |
null | [
"database-tools"
] | [
{
"code": "",
"text": "I am trying to export data using mongoexport and specific query. Query just filters records in specific column “op”.Below is the query and command I am using.C:\\mongo\\bin\\mongoexport --host Hostname --port 999 -u user -p user --authenticationDatabase admin --db logsadmin --collection userActivity20201119 --query ‘{ op: “LookupAdvancedLaneMakers”}’ --out E:\\ALM\\UserActivity\\userActivity.json2020-12-01T04:46:47.859-0800 too many positional arguments: [op: LookupAdvanc\nedLaneMakers}’]\n2020-12-01T04:46:47.860-0800 try ‘mongoexport --help’ for more information",
"username": "Ashish_Kulkarni"
},
{
"code": "",
"text": "If you look at the documentation at mongoexport — MongoDB Manual you will see you are missing the equal sign between argument keys and values.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steeve,\nThanks for the reply. This command work when I remove query part. I also tried using equal sing. I am getting same error.",
"username": "Ashish_Kulkarni"
},
{
"code": "",
"text": "Try double quotes after query instead of single quote and also enclose --out path in quotes",
"username": "Ramachandra_Tummala"
},
{
"code": "\"--query\\--query '{ op: \\\"LookupAdvancedLaneMakers\\\"}'\n--query \"{ op: \\\"LookupAdvancedLaneMakers\\\"}\"\n--foo=bar--foo bar",
"text": "Hi @Ashish_Kulkarni, you will need to escape the \" characters in the --query option with a backslash \\. The following will work on powershell:As Ramachandra alluded to, on cmd.exe, you can’t use single quotes, so you would have to do the following (which also works on powershell):Having an equals or space between options and option arguments makes no difference. --foo=bar is equivalent to --foo bar.",
"username": "Tim_Fogarty"
}
] | Query fails with too many positional arguments | 2020-12-01T12:54:14.657Z | Query fails with too many positional arguments | 9,165 |
null | [
"database-tools"
] | [
{
"code": "",
"text": "We would like to import UUIDs from CSV using “mongoimport”. According to the documentation it is possible to “type” the columns to import. Unfortunately there is no “type” available for UUIDs. Luckily there is a “binary” type present which enables us to encode the UUID as “base64”.We were able to encode UUIDs as “base64” based on the following code.So far so good…We are able to use “mongoimport” the “–columnsHaveTypes” flag and the fields defined as \" uuid.binary(base64)\". The problem/challenge is to change the “binary subtype”. Right now the default subtype “0” is being used which results in something like this: “BinData(0,“7tW06cUkR2+lOWIRssit5A==”)”. Is there a way to change the “binary subtype” to “4”?Thank you.",
"username": "Thomas_Martens"
},
{
"code": "",
"text": "Hi Thomas,Welcome to the online MongoDB Developer Community Forums!In order to get a better understanding of your question, I have two questions in return:What kind of queries will you run on this data after loading? And how do these queries depend on (the type of) this UUID field?Why do you prefer to store this field as a UUID “type”? Or now with MongoDB as binary? What influence does it have on the way you can work with the data?One question I saw on the forum that might be relevant for you is Problem inserting UUID field with binary subType via Atlas web UIRegards,Emil Zegers",
"username": "Emil_Zegers"
},
{
"code": "",
"text": "Thank you for getting back to me Emil.Currently we are storing spatial features in a number of (manually) sharded PostgreSQL databases. All the features are being identified by a globally unique id (uuid). In order to keep track of the features per shard, we need to maintain an index.We plan to start using MongoDB as a “feature id index” toward the PostgreSQL databases. The collection schema will contain a mapping between the UUID of the feature (key) and a PostgreSQL database (value).The user of our API will request information about a specific UUID (get feature by id). The UUID will be the main entry into our documents of the collection. The query operator will be most likely “$eq” for a single UUID and possibly “$in” for multiple UUIDs.The index will contain billions of entries. I assume (I might be wrong) that storing a “UUID” type instead of a plain “string” might have a significant effect on the storage used. I also wonder if there is a performance impact execution the queries mentioned above.Best regards,\nThomas.",
"username": "Thomas_Martens"
},
{
"code": "",
"text": "Unfortunately there’s no way to cast values in a CSV to a UUID type with mongoimport. I have opened a Jira ticket to add this functionality. You can see the ticket here: TOOLS-2788. You can track that ticket for updates, but it’s unlikely to be something we’ll work on soon.It is possible to import UUIDs from a JSON file by using the extended JSON format. JSON files can support all BSON types.You are correct that using a binary type instead of a plain string will vastly improve the storage used. The documentation says the following:Index keys that are of the BinData type are more efficiently stored in the index if:The documentation makes no differentiation between binary subtypes 0 and 4, so it might be easier to stick with the 0 subtype. But I’m not an expert on how indexes are implemented, so to double check there’s no difference, you may want to ask a question about that in the Working with Data category.",
"username": "Tim_Fogarty"
}
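For readers landing here later: a rough, hypothetical illustration of the extended JSON route Tim describes (the file layout and field names are made up for the example). In extended JSON, a subtype-4 UUID can be written with the $binary form, and such a file can be loaded with a plain mongoimport call, with no --columnsHaveTypes needed:

```json
{"featureId": {"$binary": {"base64": "7tW06cUkR2+lOWIRssit5A==", "subType": "04"}}, "shard": "pg-shard-01"}
```

Each line of the JSON file would hold one document shaped like this, and mongoimport should then store featureId as BinData with subtype 4.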
] | Import UUID from CSV using "mongoimport" | 2020-11-19T09:40:07.298Z | Import UUID from CSV using “mongoimport” | 4,774 |
null | [
"react-native"
] | [
{
"code": "",
"text": "In all the documentation I’ve read so far, a connection is opened, things are done, and then it is closed. What I’m wondering is, should the connection be opened and put in some sort of context, then reused for the lifetime of the running app? Or is the correct approach to do Realm.open as needed, closing it directly after?",
"username": "Gregg_Bolinger"
},
{
"code": "database.tsexport const app = new Realm.App({ id: realmAppId })\n\nlet database: Realm | null = null\n\nexport const connect = async (shouldConnectAsync?: boolean) => {\n const config = {\n sync: {\n user: app.currentUser,\n partitionValue,\n },\n schema,\n }\n\n database = shouldConnectAsync\n ? await Realm.open(config)\n : new Realm(config)\n}\n\nexport const db = () => database\ndbdb()?.write(() => { ... })",
"text": "The way I have it, I have a database.ts file with essentially the following:And then I use the exported db. For example db()?.write(() => { ... }). This may not be the best way to do it, and I kind of like your idea of using a React context to do this, but I think the point is pretty much the same: open it once and use that connection throughout the app.",
"username": "Peter_Stakoun"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Correct way to manage Realm.open in React (Native) | 2021-01-06T22:12:42.390Z | Correct way to manage Realm.open in React (Native) | 1,857 |
null | [
"database-tools"
] | [
{
"code": "mongoimport --db test --collection data--file ~/py/data.json --jsonArray2020-12-06T08:57:22.763+0000 connected to: mongodb://localhost/\n2020-12-06T08:57:23.515+0000 Failed: cannot decode 32-bit integer into a slice\n2020-12-06T08:57:23.517+0000 23000 document(s) imported successfully. 0 document(s) failed to import.\n",
"text": "Hey, I am trying to import documents from a json-file. I am using the following command:\nmongoimport --db test --collection data--file ~/py/data.json --jsonArray\nand upon running this I am receiving this messageIt only imports 23000 documents and there are 33300+ documents in that file. The file is 170MB in size. I am using version 4.4.2 if it maters. Any help will be appreciated.",
"username": "Abhishek_Singh"
},
{
"code": "[\n {\"a\": 1, \"b\": 1},\n {\"a\": 2, \"b\": 2},\n 3,\n {\"a\": 4, \"b\": 4},\n]\ncat ~/py/data.json | jq '.[] | select(type==\"object\" | not)'\n",
"text": "Hi @Abhishek_Singh! This kind of error can occur if the array in your file contains more than just documents. For example, if your file is:mongoimport will fail because of the value 3. Your file must be an array of documents, and nothing else.You can check to see if your array contains any values which aren’t documents by using the jq tool:I hope this helps!",
"username": "Tim_Fogarty"
}
] | [mongoimport] Failed: cannot decode 32-bit integer into a slice while importing from json file | 2020-12-06T11:11:13.503Z | [mongoimport] Failed: cannot decode 32-bit integer into a slice while importing from json file | 4,388 |
null | [
"java"
] | [
{
"code": "",
"text": "I have real problems with the MongoDB Atlas live chat. I have the impression that support\ndoes not know what gradle is.In late January 2021 my M0 Atlas cluster will be automatically upgraded from running MongoDB 4.2 to 4.4. So far no one can tell me what the new gradle entry for MongoDB 4.4 is.Current finds the entry: implementation(“org.mongodb:mongo-java-driver:3.12.7”)Can someone help me and tell me the correct Gradle entry for MongoDB 4.4?Thanks\nAxel",
"username": "Axel_Ligon"
},
{
"code": " dependencies {\n compile 'org.mongodb:mongodb-driver-sync:4.1.1'\n }\n dependencies {\n compile 'org.mongodb:mongodb-driver-reactivestreams:4.1.1'\n }\n",
"text": "Hi @Axel_Ligon,There are 2 MongoDB Java drivers:In the respective installation guides, you can find the Maven and the Gradle entries:So to sum up:orIf you are wondering about compatibility between the driver and MongoDB, it’s here and here.I hope it helps.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks,Withimplementation(“org.mongodb:mongodb-driver-legacy:4.1.1”)seems to run. The push test was positive.\nI hope also under MongoDB 4.4.Thank you.",
"username": "Axel_Ligon"
},
{
"code": "",
"text": "It will, it’s the latest version so all good !",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo-java-driver, gradle entry for MongoDB 4.4 | 2021-01-11T21:47:49.703Z | Mongo-java-driver, gradle entry for MongoDB 4.4 | 13,041 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hello,Several users of my app inform me that my instance is down, I’m using Realm Cloud but It seems it’s no longer possible to create a ticket on Support Portal\nCan you help me ? It’s very critical !Thanks",
"username": "Arnaud_Combes"
},
{
"code": "",
"text": "Hi Arnaud,Thank you for posting this issue. I understand that you have now created a support case in the new MongoDB Support Portal and our team replied to you already.Should you encounter any issues, please update the support case so that our engineers can continue assisting you with this.Kind Regards,\nMarco",
"username": "Marco_Bonezzi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | My instance is down and I can no longer create a ticket! | 2021-01-11T14:00:33.226Z | My instance is down and I can no longer create a ticket! | 3,222 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Hi,\nFrom time to time one or two of my mongos instances gets into the state where it can’t connect replica set:numYields:0 ok:0 errMsg:“Encountered non-retryable error during query :: caused by :: Couldn’t get a connection within the time limit” errName:NetworkInterfaceExceededTimeLimit errCode:202 reslen:342 protocol:op_msg 20038msAfter restart of the mongos everything is fine again. Do you have idea what may be cause of that ?Here is version of my mongo installation:\n[mongosMain] mongos version v4.2.3\n[mongosMain] db version v4.2.3\n[mongosMain] git version: 6874650b362138df74be53d366bbefc321ea32d4\n[mongosMain] OpenSSL version: OpenSSL 1.0.2j-fips 26 [mongosMain] allocator: tcmalloc\n[mongosMain] modules: none\n[mongosMain] build environment:\n[mongosMain] distmod: suse12\n[mongosMain] distarch: x86_64\n[mongosMain] target_arch: x86_64",
"username": "Piotr_Tajdus"
},
{
"code": "",
"text": "Encountered similar issue for couple of mongos after upgrading to 4.0. I would also like to know what caused this and how to fix .",
"username": "Sudheer_Palempati"
},
{
"code": "ShardingTaskExecutorPoolMinSizeShardingTaskExecutorPoolMaxConnecting",
"text": "I still don’t know what is causing it. It doesn’t happen on router which is on the same machine as primary node so probably something with network. I will try to play with ShardingTaskExecutorPoolMinSize and ShardingTaskExecutorPoolMaxConnecting parameters.Here is fragment of my log with network debug when such problem happens:2020-05-13T13:21:18.631+0200 D3 NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 10.122.129.44:27018 lastWriteDate to 2020-05-13T13:21:16.000+0200\n2020-05-13T13:21:18.631+0200 D3 NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 10.122.129.44:27018 opTime to { ts: Timestamp(1589368876, 1), t: 3 }\n2020-05-13T13:21:18.631+0200 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set crkid took 0ms\n2020-05-13T13:21:18.631+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Returning ready connection to 10.122.129.44:27018\n2020-05-13T13:21:18.631+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.44:27018 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.631+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.44:27018 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.45:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.45:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.44:27018 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.44:27018 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.44:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.44:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.43:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:18.833+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.43:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:18.903+0200 D4 CONNPOOL [TaskExecutorPool-0] Updating controller for 10.122.129.44:27018 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: true }\n2020-05-13T13:21:18.903+0200 D4 CONNPOOL [TaskExecutorPool-0] Comparing connection state for 10.122.129.44:27018 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.071+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Updating controller for 10.122.129.43:27018 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:19.071+0200 D4 CONNPOOL [ReplicaSetMonitor-TaskExecutor] Comparing connection state for 10.122.129.43:27018 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Updating controller for 10.122.129.44:27019 with State: { requests: 0, ready: 
1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Comparing connection state for 10.122.129.44:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Updating controller for 10.122.129.45:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Comparing connection state for 10.122.129.45:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Updating controller for 10.122.129.43:27019 with State: { requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\n2020-05-13T13:21:19.107+0200 D4 CONNPOOL [ShardRegistry] Comparing connection state for 10.122.129.43:27019 to Controls: { maxPending: 2, target: 1, }\n2020-05-13T13:21:19.252+0200 D2 ASIO [TaskExecutorPool-0] Failed to get connection from pool for request 19195893: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time\nlimit\n2020-05-13T13:21:19.252+0200 D2 ASIO [TaskExecutorPool-0] Failed to get connection from pool for request 19195894: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time\nlimit\n2020-05-13T13:21:19.252+0200 I NETWORK [conn1969] Marking host 10.122.129.43:27018 as failed :: caused by :: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time limit\n2020-05-13T13:21:19.252+0200 I COMMAND [conn1969] command crkid-prod.crkid_dokument_status command: update { update: “crkid_dokument_status”, ordered: true, txnNumber: 4, $db: “crkid-prod”, $clu\nsterTime: { clusterTime: Timestamp(1589368859, 22), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, lsid: { id: UUID(“f14f5162-2f43-4788-92b8-db2d0b11c46c”)\n} } nShards:1 nMatched:0 nModified:0 numYields:0 reslen:407 protocol:op_msg 19999ms\n2020-05-13T13:21:19.252+0200 I COMMAND [conn3081] command crkid-prod.crkid_dokument_status command: findAndModify { findAndModify: “crkid_dokument_status”, query: { _id: “CRKID#WPL.2019.01.10.00\n4869” }, new: false, update: { $set: { synced: true } }, txnNumber: 18, $db: “crkid-prod”, $clusterTime: { clusterTime: Timestamp(1589368859, 22), signature: { hash: BinData(0, 0000000000000000000\n000000000000000000000), keyId: 0 } }, lsid: { id: UUID(“e791e4a7-afd3-4c7e-9144-fd5248c50047”) } } numYields:0 ok:0 errMsg:“Couldn’t get a connection within the time limit” errName:NetworkInterfac\neExceededTimeLimit errCode:202 reslen:281 protocol:op_msg 19999ms\n2020-05-13T13:21:19.253+0200 D2 ASIO [TaskExecutorPool-0] Failed to get connection from pool for request 19195895: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time\nlimit\n2020-05-13T13:21:19.253+0200 D2 ASIO [TaskExecutorPool-0] Failed to get connection from pool for request 19195896: NetworkInterfaceExceededTimeLimit: Couldn’t get a connection within the time\nlimit",
"username": "Piotr_Tajdus"
},
{
"code": "",
"text": "I am also encountering same error : errName:NetworkInterfaceExceededTimeLimit errCode:202",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "I have upgraded mongo to 4.2.6 and changed size of connection pool and it seems it helped:taskExecutorPoolSize: 0\nShardingTaskExecutorPoolMinSize: 10\nShardingTaskExecutorPoolMaxConnecting: 5",
"username": "Piotr_Tajdus"
},
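For anyone wanting to apply the values above: these are server parameters, so one common way to set them is the setParameter block of the mongos configuration file (or the equivalent --setParameter command-line options). A sketch, using the numbers copied from the post above:

```yaml
# mongos configuration file (sketch); values taken from the post above
setParameter:
  taskExecutorPoolSize: 0
  ShardingTaskExecutorPoolMinSize: 10
  ShardingTaskExecutorPoolMaxConnecting: 5
```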
{
"code": "",
"text": "hi,\nwe run version 4.4.1 on 90 shards of each 3 “mongod” servers each; we encounter many ( ! ) of these errors when we load even a little bit of data.\n@Piotr : with these pool-settings, how many shards can you run ‘stable’ ?Are there anywhere suggested settings documented for a large cluster setup ?",
"username": "Rob_De_Langhe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongos: Couldn't get a connection within the time limit | 2020-04-28T12:29:30.914Z | Mongos: Couldn’t get a connection within the time limit | 14,354 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi,\nI need to model a huge binary tree considering:*Need to know if a node is left child or right child\n*The binary tree can grow indefinitely\n*Updating structure is not possible, It is only allowed to add a new nodes\n*Find specific descendant node by numer and direction (left or right)\n*Find last descendant by direction (left or right)I was reviewing the proposed pattenrs by mongodb:\n°Materialized Paths : https://docs.mongodb.com/manual/tutorial/model-tree-structures-with-materialized-paths/\n°Child References: https://docs.mongodb.com/manual/tutorial/model-tree-structures-with-child-references/I was thinking for my case is better use Materialized Paths but I was worried about the storage of the path because of its lenght (grow indefinitely). In the other hand there is Child References pattern, it is good but I was worried about finding the whole path of a node talking about the performance of the traversal operation.Thats it, which one would you recommendme?, What other ways do you think there are to model this situation?.\nThanks",
"username": "Andres_Chica"
},
{
"code": "{\n \"node\": \"F\",\n \"path\": [\"B\", \"D\"],\n \"left\": \"G\",\n \"right\": \"H\"\n}\n",
"text": "Hey Andres, you are on the right path. This actually gets a little tricky as you have the need for the full path, and your tree can grow indefinite. If this really is a very big (very long path to leaf), you might want to consider a type of mix approach between the two. So you can have like a materialised path of say upto 5-10 levels up from the node in mind (or more, you need to check the performance, don’t go more than 500 i guess). So rather than the full path as a string, your memory consumption of path would be less (per doc), and then you can go the next step if you still are not on the root node. It’s a bit like bucketing / batch paths / pagination.Example graph below ->\naa550×886 22.3 KB\nand assuming we are storing a path of 2 nodes up, the node of F might look like ->Now, if you want the whole path, you can find use $graphLookup to iterate all the way back to node A, using “first” element of path (top most if batch is 2). You need to find the optimal batch number for your “long” paths.Also, I have used an array of parent path and not a string as per mongo docs, as if we use simple regex on paths, it might lead to collection scan, whereas if we use array, we can index the path field (multikey index) and make use of indexed queries and lookup.More suggestions are welcome…! Do let me know how it works…!",
"username": "shrey_batra"
}
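A minimal sketch of the $graphLookup traversal mentioned in the answer above, assuming the tree documents live in a collection called nodes (a hypothetical name) and use the node/path fields from the example document:

```javascript
// Walk from node "F" back towards the root by repeatedly following
// the stored ancestor batches (the "path" arrays).
db.nodes.aggregate([
  { $match: { node: "F" } },
  { $graphLookup: {
      from: "nodes",
      startWith: "$path",         // F's nearest stored ancestors
      connectFromField: "path",   // continue from each ancestor's own path array
      connectToField: "node",
      as: "allAncestors"
  } }
])
```

The multikey index on path recommended above also serves these recursive lookups.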
] | Modeling huge binary tree | 2021-01-11T17:39:49.174Z | Modeling huge binary tree | 3,332 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Is it possible with the MongoDB C# Driver ( v2.10.2 ) to enable profiling along with the ability to query the results?Essentially, i’m trying to mimic the equivalent of db.setProfilingLevel(2) & db.system.profile.find().pretty() from the mongo shell.",
"username": "Michael_Fyffe"
},
{
"code": "mongoprofilevar profileCommand = new BsonDocument(\"profile\", 2);\nvar result = database.RunCommand<BsonDocument>(profileCommand);\nConsole.WriteLine(result);\nsystem.profilevar collection = database.GetCollection<BsonDocument>(\"system.profile\");\nvar doc = collection.Find(new BsonDocument()).FirstOrDefault();\n",
"text": "Is it possible with the MongoDB C# Driver ( v2.10.2 ) to enable profiling along with the ability to query the results?Hi @Michael_Fyffe,Yes, you can run database commands using IMongoDatabase.RunCommand<TResult>, or IMongoDatabase.RunCommandAsync<TResult>.db.setProfilingLevel() is a wrapper on mongo shell for the profile command. So, in this case you can execute the profile command. For example:You can then just query the collection system.profile:See also Database Profiler Output for more information.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Hey @wan,That is awesome. Thank you soo much!Is there a good way to correlate ( using the C# driver ) a query being run with a specific entry in the system.profiles collection when i query it?",
"username": "Michael_Fyffe"
},
{
"code": "",
"text": "Hi @Michael_Fyffe,Is there a good way to correlate ( using the C# driver ) a query being run with a specific entry in the system.profiles collection when i query it?Try adding $comment in your queries, so that you can easily interpret and trace the profile log. For example, using .NET/C# driver you could utilise FindOperation.Comment property.Regards,\nWan.",
"username": "wan"
},
{
"code": " IAsyncCursor<UserData> findCursor = await collection.FindAsync(filter, options);\nawait collection.UpdateOneAsync(filter, updateDefinition, options);\n",
"text": "Hi @wan,I can’t seem to find any examples of how to get or create a FindOperation.\nMy queries look similar to this.:Foutunately, I can see that I can specify a comment within the FindOptionsBase.Comment property.\nHowever, how can I do the same when doing an update command? For example:Here, the UpdateOptions class doesn’t have any comment property.\nCan this just not be done?",
"username": "Jan_Philip_Tsanas"
},
{
"code": "FindOptions MyFindOptions = new FindOptions();\nMyFindOptions.Comment = \"token-001\";\nvar cursor = collection.Find<BsonDocument>(new BsonDocument(), MyFindOptions);\ncollection = database.GetCollection<BsonDocument>(\"system.profile\");\nvar filter = new BsonDocument{{\"command.comment\", \"token-001\"}};\nvar entries = collection.Find(filter);\nvar updateOp = new BsonDocument{\n {\"$set\", new BsonDocument{{\"foo\", \"new\"}} }};\nvar filter = new BsonDocument{\n {\"foo\", \"old\"}, {\"$comment\", \"token-002\"}};\nvar result = collection.UpdateOne(filter, updateOp);\ncollection = database.GetCollection<BsonDocument>(\"system.profile\");\nvar filter = new BsonDocument{{\"command.q.$comment\", \"token-002\"}};\nvar entries = collection.Find(filter);\n",
"text": "Hi @Jan_Philip_Tsanas, welcome!I can see that I can specify a comment within the FindOptionsBase.Comment propertyYes, for example:Then you should be able to query it using the following:Here, the UpdateOptions class doesn’t have any comment property.Currently there is an open ticket for all MongoDB drivers to support this DRIVERS-742. In the mean time for update operations, you can attach $comment query operator on the query predicate. See more info on $comment behaviour. For example:Then you should be able to query it using the following:The above snippet was executed with MongoDB .NET/C# v2.10.2 and MongoDB server v4.2.5.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "A post was split to a new topic: How I can append $comment when working with FilterDefinition",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | How to enable profiling? - C# Driver | 2020-03-12T11:47:14.974Z | How to enable profiling? - C# Driver | 5,708 |
null | [
"database-tools"
] | [
{
"code": "getmore\n\n The number of get more (i.e. cursor batch) operations per second.\n",
"text": "I have a question concerning the mongostat tool.There is one field called getmore for which I need some explanations.Looking in the documentation gives an idea of the meaning, it states:But concretely, what does this exactly mean?An example by someone who clearly understands the idea may be helpful.The same goes for the fields dirty and used .\nSome simple explanation, maybe with an example, would help to clarify what these are about. Because I am not confident that the interpretation (based on the doc) I have is right.",
"username": "Michel_Bouchet"
},
{
"code": "findaggregate",
"text": "Clean (unmodified) refers to data in the cache that is identical to the version stored on disk.Dirty (modified) refers to data in the cache that has been modified and is different from the data stored in disk.The WiredTiger cache contains data that is loaded into memory and ready for immediate use by WiredTiger. If WiredTiger is configured to us...getMore : Use in conjunction with commands that return a cursor, e.g. find and aggregate , to return subsequent batches of documents currently pointed to by the cursor.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "OK. I hadn’t noticed any “clean” field.\nSo that means getMore is incremented each time those actions are executed:\nfind, findOne, it, aggregate ?\nAnd what about “used” ?",
"username": "Michel_Bouchet"
},
{
"code": "findaggregategetMorecollection.find(query).toArray()queryfindgetMorefindOnegetMoreitgetMoreitgetMoretoArray()getMore",
"text": "So that means getMore is incremented each time those actions are executed:\nfind, findOne, it, aggregate ?It’s not quite like that. When you run a database command that gets documents (such as find and aggregate), MongoDB doesn’t necessarily return all the matching documents at once. Instead it returns a batch of say 1,000 documents and a cursor ID. A getMore command is a way to request extra documents from a cursor.So say you’re using the NodeJS driver and you run collection.find(query).toArray(). This will find all documents matching query and put them into the array. Under the hood, the driver is first running the find database command. Then the driver will keep running getMore commands until the cursor is exhausted and all the documents have been returned.So, say you have a 10,000 documents in a collection and you want to get them all. And say the batch size is 1,000. This will result in one find command and 9 getMores.If you have 100 documents and your batch size is 1,000, running a find will never result in a getMore.findOne will never result in a getMore because the minimum batch size is 1. The it command is a MongoDB Shell helper that actually just runs a getMore on the last cursor. So anytime you run it, getMores will increase.Depending on what driver you use, you may never need to actually worry about using getMore. Some drivers have methods like toArray() that will handle all the getMore stuff for you under the hood.As for “used”, that’s the percentage of the WiredTiger cache that is currently used. It goes up when WiredTiger pages in data from disk, and goes down when pages are evicted.",
"username": "Tim_Fogarty"
},
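A small shell experiment (not from the thread) that makes the batching behaviour described above visible, assuming a collection named data with more documents than the chosen batch size:

```javascript
// Force small batches so the cursor needs extra round trips.
var cursor = db.data.find({}).batchSize(5);
cursor.next();   // documents 1-5 arrive with the initial find command
// Once the first batch of 5 is exhausted, the next call to next()
// triggers a getMore, and mongostat's "getmore" column ticks up.
```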
{
"code": "",
"text": "I see, it makes it much clearer. Thank you for this detailed explanation.",
"username": "Michel_Bouchet"
}
] | Question about the mongostat tool | 2021-01-09T06:43:55.146Z | Question about the mongostat tool | 2,496 |
null | [
"realm-studio"
] | [
{
"code": "",
"text": "I am a complete beginner to using Realm but I have got a lot of information in a JSON file that I need to add to a local Realm database. I am using Realm Studio to view the Realm file but I can’t see anyway to take the data from JSON and update the Database.The JSON file should have most of the same property data with perhaps some added ones. I would be looking to try to point from each item in the JSON file to each property and update the data in the realm database if it already exists or add the property if it doesn’t.I generally use python, but I see that that may not be supported. I am completly lost with how to set it up to get the data in. I have been looking for beginners guides but I can’t find any.",
"username": "Ben_Irwin"
},
{
"code": "",
"text": "Hi Ben,\nwhat platform are you running Realm on? Is this a one-off operation or something that needs to be done from your app?",
"username": "Andrew_Morgan"
}
] | Trying to update a Realm db with data from a JSON file | 2021-01-09T03:00:23.002Z | Trying to update a Realm db with data from a JSON file | 3,696 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "[I’m using Realm v10.5.0 with Xcode 11.7]Trying to use a variety of properties (see below) with ‘SyncManager’ Singleton. The API reference states it is available (iOS-Swift-10.5.0-API-Reference-SyncManager-Singleton), but it is not. There is some info in the ChangeLog (to use ‘App’), but it does not work either.Trying to handle these:\nSyncManager.shared.logLevel\nSyncManager.shared.errorHandler\nSyncManager.shared.authorizationHeaderName\nSyncManager.shared.customRequestHeadersSome Others:\nSyncManager.shared.userAgent\nSyncManager.shared.appIDKnown Not to Work:\nSyncManager.shared.pinnedCertificatePaths",
"username": "f_s"
},
{
"code": "let app = App(id: \"my-app-id\")\napp.syncManager.logLevel = .debug\n",
"text": "Heya,The SyncManager is no longer a singleton– there is now a SyncManager per App.\nFor example:User agent and pinned certificate paths also don’t really make sense with the current implementation of MongoDB Realm.Cheers.",
"username": "Jason_Flax"
},
{
"code": "",
"text": "Ah - thank you Jason!The docs are not updated.",
"username": "f_s"
},
{
"code": "",
"text": "Thanks for the heads up, we’ll update the docs soon.",
"username": "Jason_Flax"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | SyncManager.shared for Realm v10+ not accessible (iOS) | 2021-01-11T01:08:23.288Z | SyncManager.shared for Realm v10+ not accessible (iOS) | 2,205 |
null | [] | [
{
"code": "db.myobjects.find({},{\"indexed_attributes\":1})db.serverStatus().wiredTiger.cache[\"bytes read into cache\"]",
"text": "I have a collection that contains documents with _id, indexed_attributes and big_data_blob. The index size is small enough to fit into the cache but all database does not. I use projection to get just the document _id and indexed_attributes field values with db.myobjects.find({},{\"indexed_attributes\":1}). Looking at db.serverStatus().wiredTiger.cache[\"bytes read into cache\"] shows that every time whole documents are read into cache and therefore the query takes more time than it would need to just read the indexed field.Is there some other method to read the content of the indexed fields more efficiently?",
"username": "Tomaz_Beltram"
},
{
"code": "mongoddb.myobjects.find({},{\"indexed_attributes\":1}){'indexed_attributes': 1, '_id': 1}db.coll.find({b: 1},{b: 1, _id:0}).explain()\n\"winningPlan\" : {\n\t\t\t\"stage\" : \"PROJECTION_COVERED\",\n\t\t\t\"transformBy\" : {\n\t\t\t\t\"b\" : 1,\n\t\t\t\t\"_id\" : 0\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"b\" : 1,\n\t\t\t\t\t\"_id\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"b_1__id_1\",\n\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"b\" : [ ],\n\t\t\t\t\t\"_id\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"b\" : [\n\t\t\t\t\t\t\"[1.0, 1.0]\"\n\t\t\t\t\t],\n\t\t\t\t\t\"_id\" : [\n\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n",
"text": "Hi @Tomaz_Beltram and welcome back !Looks like you are trying to make a covered query.Can you please confirm which indexes exist in this collection?If you really want to know what mongod is doing to resolve your query, you can use explain. If you have a covered query, then you won’t see a FETCH stage in your winning plan.To make this query db.myobjects.find({},{\"indexed_attributes\":1}) covered, you would need to have the index {'indexed_attributes': 1, '_id': 1} because you are returning both these values here as ‘_id’ is present by default and needs to be explicitly removed in the projection if you don’t want it. But if you do return it, it needs to be in your index to make it a covered query. You will probably also need to add a filter using your index in the query to trigger the use of the index.This example is a bit silly but…This is what it should look like in the explain plan:I hope this helps.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "\"winningPlan\" : {\n \"stage\" : \"PROJECTION_SIMPLE\",\n \"transformBy\" : {\n \"indexed_attributes\" : 1,\n \"_id\" : 0\n },\n \"inputStage\" : {\n \"stage\" : \"COLLSCAN\",\n \"direction\" : \"forward\"\n }\n}\n",
"text": "Hi Maxime,\nThanks for your quick reply and pointing me to the covered query. The _id and indexed_attributes fields are both indexed in my database. In may query the filter was missing and therefore COLLSCAN was used.If I add a filter to limit the query then I get stage PROJECTION_COVERED. However I noticed that it uses FETCH instead of index also in case that the filter matches all documents, e.g. {“indexed_attributes”: {$exists: true}}. Thanks again for your help.\nwbr Tomaz",
"username": "Tomaz_Beltram"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Get indexed field values (cached) without reading whole documents from disk | 2021-01-08T14:19:16.114Z | Get indexed field values (cached) without reading whole documents from disk | 2,636 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "{\n \"title\": \"Students\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"_parentId\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_parentId\": {\n \"bsonType\": \"string\"\n },\n \"lastName\": {\n \"bsonType\": \"string\"\n }\n}\n_parentIdself.realm = try! Realm(configuration: realmApp.currentUser!.configuration(partitionValue: \"myPartitionValue\"))\n\n// Access all students in the collection matching the partition value\nself.students = self.realm?.objects(Students.self).sorted(byKeyPath: \"lastName\")\n_parentIdlet realm = try! Realm()\nlet students = realm.objects(Students.self)\n_parentId",
"text": "Hi there!\nI’m using Realm sync to build an iOS app.\nI have a collection named Students:where _parentId is the partition value I use to access some objects of the collection, like this:I would like to access all the students in the collection, regardless of their _parentId (partition value). I tried to do the followings but I got nothing:Realm.asyncOpen() { (response) in\nlet students = try! response.get().objects(Students. self )\n}Do you know how can I get all the Students without iterating on all the different partition value (i.e _parentId)?Thanks for your help!",
"username": "Julien_Chouvet"
},
{
"code": "",
"text": "You can’t really get there from here. The partition key is what encapsulates data into Realms - e.g. you can think of a partition as a Realm. Realms are separate, discreet sets of data and you can’t access data across Realms at the same time. The actual file structure on your device has the different partitions (.realm files) stored in different files as well.You can however (as you mentioned), access data in Realm A, then access data in Realm B etc.Also, there are some options with server side code but there may be other options with some additional clarity in the use case.Why would some students have one partitionKey and other students have a different partition key. Why not use the same partition key for all students ‘students_partition’?",
"username": "Jay"
},
{
"code": "",
"text": "Hi @Jay and Thanks for your help!My students have different partition key because they belong to different groups (one for each partition key). So, usually I need to get only the students of one group but I wanted to implement the possibility to search among all the students (regardless on their groups), but to do that I need to get all of them.",
"username": "Julien_Chouvet"
},
{
"code": "students_partition\n student_0\n student_1\n student_2\n\ngroups_partition\n group_0\n students\n ref_to_student_0\n ref_to_student_1\n",
"text": "Part of working with NoSQL databases (which Mongo is under the hood) is organizing data based on what you want to get out of it - your queries. In this case, one parameter is being able to search through all students. That would require a different setupI would suggest keeping all of the students in the ‘students_partition’ and then keep references to the students within they group the belong to. Conceptually it would be:or you could reverse that or even augment it by keeping a reference to the group with each student - it really depends on what additional queries you may want to run.",
"username": "Jay"
},
{
"code": "",
"text": "Thanks for your help! I will investigate this possibility to know if it will work with the other constraints I have from my other collections.Thank you and have a good day ",
"username": "Julien_Chouvet"
}
] | Realm Sync - Get all data from a collection | 2021-01-04T16:53:19.756Z | Realm Sync - Get all data from a collection | 2,216 |
null | [] | [
{
"code": "",
"text": "When I run the command from the shell nothing happens\n",
"username": "Jayesh_Nayak"
},
{
"code": "",
"text": "That’s good news though, it succeeded and is connected to the remote cluster (hosted on Atlas).What is the issue?",
"username": "santimir"
},
{
"code": "",
"text": "Hi @Jayesh_Nayak,You are running this command on your local computer and IDE provided in the courseware is a different environment and it has no connection to your local system. So, whatever work you are doing on your local machine is not going to be reflected in the IDE.Can you share some more information on what you are trying to accomplish here so that we can help you out?~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "hi ShubhamI was going thru the chapter one of the M001 course the last LAB connect to your Atlas cluster. according to the instructions I ran the command on command line and i do not see any ide opned.",
"username": "Jayesh_Nayak"
},
{
"code": "C:\\Users\\Jayesh",
"text": "Hi Jayesh,it is true, you can connect to the Atlas cluster “Sandbox” within your Windows cmd (I saw C:\\Users\\Jayesh) and you can solve most parts of Lab questions in there. Sometimes it shall be enough to fetch answers you are able to get this way and put them back into your open browser tab, where the Lab in question resides.However, other times, that is not enough to get desired credits for your answer (in green). Occasionally we shall have to read the Lab instructions very carefully to determine if we are supposed to use the in-browser IDE presented below the Lab questions.In particular watch out for a green “Run tests” button attached to the in-browser IDE which might be another hint, they expect us to solve part of the Lab within the online IDE presented.Hope it helps a little, Regards, M.",
"username": "Uwe_Scheffer"
},
{
"code": "",
"text": "Hi @Jayesh_Nayak ,In the lab Chapter 1: Connect to your Atlas Cluster, if you scroll to the bottom you will see the IDE which looks like this . You are supposed to run your command in the terminal area. Hope it helps!Screenshot 2021-01-11 at 3.25.17 PM2212×1628 251 KB~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Ide not starting | 2021-01-03T21:34:39.450Z | Ide not starting | 2,189 |
null | [] | [
{
"code": "",
"text": "How do you undo sharding on the database after it has been enabled?",
"username": "Altiano_Gerung"
},
{
"code": "",
"text": "For completeness of the discussion the following is the answer I posted on the MongoDB university forum.Just to clarify, sharding is done at the collection level rather than the database level.I am not aware of any command to undo the sharding of a collection. But you can easily copy the shared data into a new collection. Then drop the original collection, so all indexes and sharding information will be gone. You may then move back all the data under the original name.",
"username": "steevej"
},
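One hedged sketch of the copy step described above, using $out (which always writes an unsharded collection); the collection names here are placeholders:

```javascript
// Copy every document of the sharded collection into a new, unsharded one.
db.myShardedColl.aggregate([
  { $match: {} },
  { $out: "myColl_unsharded" }
])
// After verifying the copy, the original sharded collection can be dropped
// and the data moved back under the original name, as described above.
```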
{
"code": "sh.enableSharding(\"<database name>\")",
"text": "Hello @Altiano_Gerung,The sharded cluster’s main components are the shards, mongos and the config server. Have you set up any of these? After that, there is a command to enable sharding on a database: sh.enableSharding(\"<database name>\"). Then you can shard any of the collections in that database.Sharding is enabled at database level, but actually you shard collections individually - that means a sharded database can have both sharded and unsharded collections. Once sharding is enabled on a database or a collection there is no disable sharding command - it is in general a difficult process. So, what is it you have so far?You have a database which is sharding enabled. Are there any collections in it? Are any of the collections sharded? Is there any data in the collections?Please share some more details to understand.You may be interested in this post:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Sharding is enabled at database level, but actually you shard collections individually - that means a sharded database can have both sharded and unsharded collections.Thanks for the clarification.",
"username": "steevej"
},
{
"code": "",
"text": "thank you for the answer,I simply just want to know if such thing is possible or not.Well, I don’t know about the technical details that make undoing to be that difficult,But let’s say I don’t want to use sharded cluster anymore,\n(I.e. don’t want to have my collection sharded)In general, what is the best approach to migrate my data?",
"username": "Altiano_Gerung"
},
{
"code": "",
"text": "But let’s say I don’t want to use sharded cluster anymore,\n(I.e. don’t want to have my collection sharded)You can make a backup of the sharded collection data (or use a backup you already have) to create a new collection without sharding. After creating the new collection and its data, you can discard the collection which is sharded.Also, see sh.shardCollection - Considerations says:MongoDB provides no method to deactivate sharding for a collection after calling shardCollection.List of Sharding Commands.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Undo enableSharding() | 2021-01-07T14:14:39.624Z | Undo enableSharding() | 4,292 |
null | [
"atlas-device-sync",
"app-services-user-auth"
] | [
{
"code": "",
"text": "I can’t find what to use for ‘EmailPasswordAuth’ pre iOS 13. For example I want to confirm an email, which I should be able to (similarly) use ‘app.emailPasswordAuth.confirmUser()’, but that only seems to be available for iOS13+. Before Realm v10 I could use ‘SyncUser.confirmEmail()’.I am using Realm v10.5.0 on Xcode 11.7",
"username": "f_s"
},
{
"code": "",
"text": "Never mind …There are 2 via an overloaded functions, therefore one of them can be used pre-iOS 13 – the one with completion block vs ‘Future<>’.",
"username": "f_s"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why is app.emailPasswordAuth.confirmUser missing for Pre iOS 13? | 2021-01-10T21:43:26.719Z | Why is app.emailPasswordAuth.confirmUser missing for Pre iOS 13? | 1,754 |
null | [
"installation",
"on-premises"
] | [
{
"code": "`https://docs.mongodb.com/charts/19.12/installation/`\ndocker swarm initdocker pull quay.io/mongodb/charts:19.12.2docker run --rm quay.io/mongodb/charts:19.12.2 charts-cli test-connection 'mongodb://host.docker.internal'`MongoDB connection URI successfully verified.`\ncharts-mongodb-uriecho \"mongodb://host.docker.internal\" | docker secret create charts-mongodb-uri -docker stack deploy -c charts-docker-swarm-19.12.2.yml mongodb-charts $ docker exec -it \\\n $(docker container ls --filter name=_charts -q) \\\n charts-cli add-user --first-name \"<First>\" --last-name \"<Last>\" \\\n --email \"<[email protected]>\" --password \"<Password>\" \\\n --role \"<UserAdmin|User>\"\n`add-user command error: clientAppId not found. No Charts apps configured to add user to.`\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ parsedArgs\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ installDir ('/mongodb-charts')\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ log\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ salt\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ productNameAndVersion ({ productName: 'MongoDB Charts Frontend', version: '1.9.1' })\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ gitHash ('1a46f17f')\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ supportWidgetAndMetrics ('on')\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ tileServer (undefined)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ tileAttributionMessage (undefined)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ rawFeatureFlags (undefined)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ stitchMigrationsLog ({ completedStitchMigrations: [] })\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ featureFlags ({})\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ lastAppJson ({})\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ existingInstallation (false)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ tenantId ('18c9543e-8677-4046-9166-5d54a2a6e1bb')\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ chartsMongoDBUri\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ tokens\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ encryptionKeyPath\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ stitchConfigTemplate\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ libMongoIsInPath (true)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ mongoDBReachable (true)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ stitchMigrationsExecuted ([ 'stitch-1332', 'stitch-1897', 'stitch-2041', 'migrateStitchProductFlag', 'stitch-2041-local', 'stitch-2046-local', 'stitch-2055', 'multiregion', 'dropStitchLogLogIndexStarted' ])\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ minimumVersionRequirement (true)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ stitchConfig\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ stitchConfigWritten (true)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ stitchChildProcess\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ indexesCreated (true)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ stitchServerRunning (true)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ stitchAdminCreated (false)\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✔ lastKnownVersion ('0.9.0')\nmongodb-charts_charts.1.ozvhqbhhmq9n@docker-desktop | ✖ existingClientAppIds failure: An error occurred 
authenticating: invalid username/password\n",
"text": "I have faced with an error in MongoDBChats logs file in windows, it Have thrown this error in the last line:existingClientAppIds failure: An error occurred authenticating: invalid username/passwordI followed the following setups specified on MongoDB docs for MongoDB Charts m installation:The steps I followed are as follows:Expected response:I’m getting back an error:Docker Service Logs:I have done all the provided previous solutions by this community, but still not working\n",
"username": "M.Mustafa_Amarkhil"
},
{
"code": "",
"text": "Looking for answer,\nIs this a community?? ",
"username": "M.Mustafa_Amarkhil"
},
{
"code": "docker system prune",
"text": " Is this a community?? Hi @M.Mustafa_Amarkhil,This is a community forum where others are volunteering their time and experience, so you may have to be patient for responses to questions requiring more specific expertise (especially during or after extended holiday periods).I’m not sure how to resolve your specific error, but apparently it is due to an incomplete install. One of my colleagues suggested deleting dangling containers/images with docker system prune (assuming you don’t have any other Docker assets) and trying the installation again.I would also draw your attention to this note on the download page for MongoDB Charts On-Premises, as this product version is no longer being updated outside of security fixes:Note: MongoDB Charts On-Premises will be end of life on September 1, 2021. If you’re currently using the on-premises version of Charts, there is no need for immediate action. We will continue to provide support, including releasing any important security fixes, until September 1, 2021. Additionally, we will provide a mechanism to assist with migrating on-premises dashboards to the cloud version of Charts.MongoDB Charts is available and will continue to be available as a service within MongoDB CloudThe cloud version of MongoDB Charts is integrated with MongoDB Atlas (including the M0 free tier), continues to be actively updated, and can be configured through the web UI without any local installation.If you’re interested, see Launch MongoDB Charts to get started.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "@Stennie_X\nThank you sir,\nI will try my best and patiently looking for new answers.",
"username": "M.Mustafa_Amarkhil"
},
{
"code": "appauthlogmetadata",
"text": "Hi @M.Mustafa_Amarkhil -This error can occur if your metadata database was previously configured for Charts, but the authentication keys are not present in the volume used by your current Charts instance. Without knowing all the steps you followed it’s hard to say how you got in this state. But if you’re happy to start fresh, you should be able to get past this by deleting the app, auth, log and metadata databases, as well as your Docker volumes, and then reinstalling.HTH\nTom",
"username": "tomhollander"
},
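If it helps anyone following the same steps: dropping the four Charts metadata databases Tom lists could look roughly like this in the mongo shell, run against the metadata MongoDB instance (double-check the database names in your own deployment before dropping anything):

```javascript
// Drop the Charts metadata databases so the next install starts clean.
["app", "auth", "log", "metadata"].forEach(function (name) {
  db.getSiblingDB(name).dropDatabase();
});
```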
{
"code": "",
"text": "Hi @tomhollander\nThank you so much sir,\nthis one is the right answer, inspite of deleting everything, the didn’t work correctly, but by fresh installation this system worked fine.A bundle of thanks again dear sir,",
"username": "M.Mustafa_Amarkhil"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | existingClientAppIds failure: An error occurred authenticating: invalid username/password, I have faced with this error in MongoDB Charts logs file in windows | 2021-01-05T03:12:26.159Z | existingClientAppIds failure: An error occurred authenticating: invalid username/password, I have faced with this error in MongoDB Charts logs file in windows | 4,284 |
null | [
"queries",
"python"
] | [
{
"code": "myCuster.myDB.find({\"color\": myArray})",
"text": "HI! I’m trying to find an elements on my database that has a value equal to any element of my array.\nFor example, I have the array [“red”,“brown”] in my code.\nI want to search on the database any document that has the “color” value equal to any element of my array, so red or brown. But I’ve not found anything useful on MongoDB docs or on internet,\nIn my head sound like:myCuster.myDB.find({\"color\": myArray}) But didn’t work",
"username": "Silvano_Hirtie"
},
{
"code": "myCuster.myDB.find( { \"color\" : { \"$in\" : myArray } } )\n",
"text": "TryPlease make sure you know why it works by visiting https://docs.mongodb.com/manual/reference/operator/query/in/.",
"username": "steevej"
},
{
"code": "d1 = { color : [ \"red\" , \"brown\" ] }\nd2 = { color : [ \"brown\" , \"red\" ] }\nd3 = { color : [ \"red\" , \"brown\" , \"blue\" ] }\nmyCuster.myDB.find({\"color\": myArray})\n",
"text": "And if I may add, experiment with { color : myArray }. Create a documents with the followingto see which document will match",
"username": "steevej"
}
] | [pymongo] Find every document that contains an element of my array | 2021-01-10T21:43:11.658Z | [pymongo] Find every document that contains an element of my array | 4,233 |
null | [
"php"
] | [
{
"code": "$nr = autoinc('oid');\nglobal $db;\n\n$nr = $db->counters->findOneAndUpdate(\n\n ['_id' => $sequence],\n ['$inc' => ['sequence' => 1]],\n [\n 'projection' => ['sequence' => 1],\n 'returnDocument' => MongoDB\\Operation\\FindOneAndUpdate::RETURN_DOCUMENT_AFTER\n ]\n);\n\nreturn $nr->sequence;\n",
"text": "Hello,I try to make a function that increases counters. It increases the counter but after that, I can reload the page and it’s not increasing. When I wait for a few minutes and reload again it increases the counter again.I’m totally confused. why this weird behavior?var_dump($nr);function autoinc($sequence) {}",
"username": "bad_pussycat"
},
{
"code": " [\n 'projection' => ['sequence' => 1],\n 'returnDocument' => MongoDB\\Operation\\FindOneAndUpdate::RETURN_DOCUMENT_AFTER\n ]\n [\n 'projection' => ['sequence' => 1],\n 'returnNewDocument' => true\n]\n",
"text": "Try",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Thank you. After I noticed this isn’t working but calling the PHP from cli. I noticed the webserver send me a cached response. Stupid me Sorry for wasting your time!",
"username": "bad_pussycat"
}
] | Auto increment findOneAndUpdate don't work PHP | 2021-01-10T02:55:12.317Z | Auto increment findOneAndUpdate don’t work PHP | 2,460 |
null | [
"atlas-functions"
] | [
{
"code": "exports = ({limit, offset} = {limit: 10, offset: 20}) => {\n console.log('limit', limit);\n }\nlimit undefinedlimit 10",
"text": "Hello! Realm’s JS feature compatibility list says it supports both destructing assignment and default function parameters. The last example on MDN shows how these can be used together.However, this appears to break in Realm functions:In Realm Functions, clicking Run on this prints limit undefined instead of limit 10 as you would expect.This is valid syntax in general. Let me know if there’s anything else I can do to help debug. ",
"username": "mdierker"
},
{
"code": "exports = ({limit = 10, offset = 20}) => {\n console.log('limit', limit);\n }\n",
"text": "It looks like you can do this inline. I think what I wrote in the first post is still supposed to work (or at least it does in Chrome?) but this works in Realm:",
"username": "mdierker"
}
] | Bug in Realm default function args + destructured assignment | 2021-01-09T19:09:39.572Z | Bug in Realm default function args + destructured assignment | 1,352 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hey everyone,I’m sorry for the total noob question, but after days of docs reading I wasn’t able to decide what is the best way to run scheduled aggregation on data stored in Atlas.I have aggregation pipeline set in Aggregation Pipeline Builder, basically I’d like to run that by schedule and save aggregated data in a separate new db/collection.Should I have to use Realm / Functions or is there an easier way?Thank you in advance!",
"username": "Vane_T"
},
{
"code": "",
"text": "Hi @Vane_T,I have written a blog that exactly demonstrate this ability with Atlas scheduled triggers.This is not hard in stone and its mostly a concept, hopes it is useful:\nSpirits in the Materialized View: Automatic Refresh of Materialized Views | MongoDB BlogLet me know if something is not clear.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovnythank you for you response! I will study it, but not being a developer with only some familiarity with it I was hoping for some copy paste code like ones available in Aggregation Pipeline Builder where we can “Export Pipeline To Language”, so I supposed there might be some such obvious solution for us,\nwhere I copy that code in some format, and trigger its execution with the frequency I want in a schedule.\nI understand it is mainly for devs with a min. knowledge, but not all guys who control a budget are on that level \nI use the code I got from there for Node.js and run it from Visual Studio Code, the MongoDB connection is OK, the code runs there in VSC without error, but the aggregation is not executed at all (checking it in Atlas).\nI was hoping if I can execute it manually then I can copy it and trigger its run by an Atlas trigger.\nAny more idea is really appreciated though and thank you again! ",
"username": "Vane_T"
},
{
"code": "async function()\n{\nvar db = \"db\";\nvar sourceColl = \"sourceColl\";\nvar collection = context.services.get(\"<Atlas-service>\").db(db).collection(sourceColl);\n var pipeline =[...] //Paste pipeline\n \nawait collection.aggregate(pipeline).toArray();\n\n\n}\n",
"text": "Hi @Vane_T,For a straight forward code could look something like , paste your details and pipelineBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "\"<Atlas-service>\"var pipeline =[...]async function aggregation() {await collection.aggregate(pipeline).toArray();",
"text": "Thank you @Pavel_Duchovny,I created a schedule trigger in Atlas, set link data source to Cluster0, added your code to Functions area (above the existing sample code as new function), set “db” as source database name, “sourceColl” as source collection, \"<Atlas-service>\" as “Cluster0” ( like comment is sample function code referred to and which cluster I use ), pasted in pipeline text (not Node.js code, but only the pipeline object), removed a of 2 to set\nvar pipeline =[...]\nproperly - I think this is the correct way…\nI gave a name to the function ( I was asked to do ).\nI have 2 yellow warning related to missing semicolons, I couldn’t figure out why is it missed:\nin line:\nasync function aggregation() {\nand line:\nawait collection.aggregate(pipeline).toArray();After saving I was waiting for its run ( 1 min scheduled) , it didn’t happen, then I clicked on Run button of Console and I got:\n“> ran on Fri Jan 08 2021 21:56:59 GMT+0100 (Central European Standard Time)\n> took 315.087827ms\n> result:\n{\n“$undefined”: true\n}\n> result (JavaScript):\nEJSON.parse(’{”$undefined\":true}’)\nDo you see some mistake I made?\nThank you! ",
"username": "Vane_T"
},
{
"code": "exports = async function ()..",
"text": "Hi @Vane_T,You should place the database name and the source coll as your data source you have built the builder pipeline on.Can you share the pipeline?Additionally I think the function definition should look like exports = async function ()..Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "exports = async function aggregation() {\nvar db = \"WoocommerceRiskViaIntegromat\";\nvar sourceColl = \"ordersExtended\";\nvar collection = context.services.get(\"Cluster0\").db(db).collection(sourceColl);\n var pipeline =[\n {\n '$project': {\n 'businessMeta.client_id': true, \n 'businessMeta.client_name': true, \n 'businessMeta.webshop_id': true, \n 'businessMeta.webshop_name': true, \n 'numVerify.valid': true, \n 'numVerify.international_format': true, \n 'numVerify.country_prefix': true, \n 'numVerify.country_code': true, \n 'numVerify.location': true, \n 'order.id': true, \n 'order.status': true, \n 'order.currency': true, \n 'order.total': true, \n 'order.date_created': true, \n 'order.date_modified': true, \n 'order.date_paid': true, \n 'order.date_completed': true, \n 'order.customer_id': true, \n 'order.customer_ip_address': true, \n 'order.customer_user_agent': true, \n 'order.customer_note': true, \n 'order.payment_method': true\n }\n }, {\n '$sort': {\n 'order.date_modified': -1\n }\n }, {\n '$out': 'ordersProcessedAndSorted'\n }\n ]; //Paste pipeline\n \nawait collection.aggregate(pipeline).toArray();\n\n}\n",
"text": "Hi @Pavel_Duchovny,my latest varsion of full code I inserted above sample function:Thank you!",
"username": "Vane_T"
},
{
"code": "",
"text": "@Vane_T,That’s look correct. What is the problem?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny,result:\n{\n“$undefined”: true\n}\nresult (JavaScript):\nEJSON.parse(’{\"$undefined\":true}’)so simply it doesn’t work, yet! ",
"username": "Vane_T"
},
{
"code": "",
"text": "Can you send us a link to your triggers definition.",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovny\nI’m sorry, I don’t understand how can I send you a link to a trigger definition.\nAlso, I mentioned, I can not run it even from its console:\n/*\nTo Run the function:\n- Type ‘exports();’\n- Click ‘Run’\n*/Anyway it is a Scheduled, Basic trigger type, Enabled, Cluster0 is linked in Link Data Source(s),\nschedule is 1 minutes default and event type is Function",
"username": "Vane_T"
},
{
"code": "",
"text": "Hi @Vane_T,When you are on the trigger definition page just copy the url from the browser and paste it here.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Pavel_Duchovnyhere you are, thank you!\nhttps://cloud.mongodb.com/v2/5fdcf706a8d3a31512f87528#triggers/5ff8c10a106b1ff8837e7844",
"username": "Vane_T"
},
{
"code": "exports = async function() {",
"text": "Hi @Vane_T,I think you have some malformed code for the specific trigger syntax. The current code executes the second function (line 43) which is empty. The idea is to replace this function and not add to it.Sorry if my code snippet mislead you I wrote it from memory.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_DuchovnyI’ve already tried your code earlier both above and below sample code and also I tried with sample deleted, like you suggest.\nI did what you asked in your last and I got 2 warnings for line 1, 1 warning for line 40 and I still got error message in console:\n> ran on Sat Jan 09 2021 17:27:41 GMT+0100 (Central European Standard Time)\n> took 647.080429ms\n> result:\n{\n“$undefined”: true\n}\n> result (JavaScript):\nEJSON.parse(’{\"$undefined\":true}’)\nI see though that in target collection there is 11 docs now, that’s good, so it was executed (at least) one time some times ago.LATER: I generated 1 more document into source collection and now in both source and in aggregation there are also 12 docs, so the triggered function looks working now \nI made some further indentation changes in pipeline, nothing else.Pls. note the 2+1 warning in code editor and the console error message is still very confusing, so my issue is solved, but others might also find it confusing…Thank you for your help again! ",
"username": "Vane_T"
},
{
"code": "",
"text": "Hi @Vane_T,This is not an error. The execution output is expected as the function does not return anything.You can look at the logs tab and see that the trigger runs with no errors.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
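A minimal sketch (reusing the database, collection and pipeline already shown earlier in this thread) of how returning a value from the trigger function gives the console something to print instead of {"$undefined": true}:

exports = async function() {
  const coll = context.services.get("Cluster0")
    .db("WoocommerceRiskViaIntegromat")
    .collection("ordersExtended");
  // $out writes the aggregation results into another collection,
  // so the array returned here is empty.
  await coll.aggregate([
    { '$sort': { 'order.date_modified': -1 } },
    { '$out': 'ordersProcessedAndSorted' }
  ]).toArray();
  // Returning a value makes the console show a result
  // instead of {"$undefined": true}.
  return { ranAt: new Date() };
};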
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Scheduled aggregation best practices | 2021-01-08T10:58:09.125Z | Scheduled aggregation best practices | 6,134 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "It would be great when we create an aggregation pipeline in Aggregation Pipeline Builder,\nbeing able to generate and copy-paste a complete, reusable code to Atlas’s triggered (in my case: scheduled) functions.\nThank you",
"username": "Vane_T"
},
{
"code": "",
"text": "Hi @Vane_T,Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Feature request for Aggregation Pipeline Builder | 2021-01-09T10:16:38.140Z | Feature request for Aggregation Pipeline Builder | 1,363 |
[
"java"
] | [
{
"code": "",
"text": "i have multiple applications ~5 reading and writing onto the same MongoDB collection in atlas, regular 1min intervals.\nand this exception causes my applications to crash all at the same time.\nI’m not sure where to begin to prevent this error from crashing my whole setup. any advice is appreciated. driver I’m using is ‘org.mongodb:mongo-java-driver:3.12.7’\nimage972×233 8.5 KB",
"username": "alex_mindustry"
},
{
"code": "",
"text": "Hello @alex_mindustry, welcome to the MongoDB Community forum.Please post the code which is causing the exception (the stacktrace has the class name and the line numbers of the code causing the exception). Also, tell if your deployment is a replica-set or a sharded cluster.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "hello!\nhere is the code… its just the simple insertOne method:\nimage844×503 82.3 KB\nthank you. sorry for the delay in reply",
"username": "alex_mindustry"
},
{
"code": "",
"text": "also, i think its replica-set.\naccording to this screen shot:\n",
"username": "alex_mindustry"
},
{
"code": "",
"text": "Hello @alex_mindustry,com.mongodb.MongoWriteConcernException - An exception indicating a failure to apply the write concern to the requested write operation.This is a runtime (or unchecked) exception and it can be thrown by the MongoCollection#insertOne() method (the documentation says so). You can catch this exception in your code - and perform appropriate action as per your need. This way the application will not crash.Also see, about the WriteConcern. The write concern can be set for the collection, database or the mongo client connection. It can also be specified for a specific operation. Verify if you have set any or the default values are applied. The main options are the “w” and the “wtimeout” - these can affect the inserts and updates on the collection in a replica-set.At this point itis difficult to conclude what the issue is, but I will try to post more details.",
"username": "Prasad_Saya"
}
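A minimal, hypothetical Java fragment of the handling described above (it assumes collection is the MongoCollection the application already uses); catching the unchecked exception keeps the application from crashing:

import com.mongodb.MongoWriteConcernException;
import org.bson.Document;

try {
    collection.insertOne(new Document("status", "ok"));
} catch (MongoWriteConcernException e) {
    // The write was sent but the requested write concern could not be satisfied;
    // log it and decide whether to retry or alert instead of letting the app crash.
    System.err.println("Write concern not satisfied: " + e.getWriteConcernError());
}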
] | Debugging MongoWriteConcernException | 2021-01-06T18:49:54.905Z | Debugging MongoWriteConcernException | 2,294 |
|
null | [
"python"
] | [
{
"code": "query = {'$and': [{u'tag_list': '33854'}, {u'customs': {u'$not': {u'$elemMatch': {u'k': u'rule_id', u'v': 301}}}}, {u'customer_id': 4275L}, {'status': {'$nin': [u'invalid_domain', u'inexistent_address', u'mailbox_full', u'smsfail', u'whatsappfail']}}, {'customer_id': {'$in': [4275]}, 'opt_out': False}, {u'campaigns': {u'$not': {u'$elemMatch': {u'id': 112129L}}}}]}\ncursor = Contact._get_collection().find(query,{'id': 1}).hint([('customer_id',1), ('tag_list',1), ('status',1), ('opt_out',1)]).batch_size(5000).limit(5000)[1100001:1149999]\n\nfor c in cursor:\n\n contact = Contact._from_son(c)\n\nbulk_operations.append(\n\t\tUpdateOne({\n\t\t\t\t'_id': contact.id,\n\t\t\t\t'campaigns': {\n\t\t\t\t\t'$not':{\n\t\t\t\t\t\t'$elemMatch':{\n\t\t\t\t\t\t\t\t'id': campaign_id,\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}, update_exec)\n\t)\n\n results = Contact._get_collection().bulk_write(bulk_operations, ordered=False)\n# Create your models here.\nclass Contact(mongo.DynamicDocument):\n\n \n STATUS_CHOICES = (('ok', _('Ativo')),\n ('mx', _('Falha na entrega')),\n ('invalid_domain', _('Dominio inválido')),\n ('inexistent_address', _('E-mail não existe')),\n ('mailbox_full', _('Caixa cheia')),\n ('size_limit', _('Limite da mensagem excedido')),\n ('mail_loop', _('E-mail em loop')),\n ('spam', _('Spam')),\n ('unknown', _('Erro desconhecido')),\n ('complaint', _('Reclamação')),\n ('abuse', _('Denúncia de Abuse')),\n ('smsfail', _('Falha na Entrega de SMS')),\n ('whatsappfail', _('Falha na Entrega de WhatsApp')))\n\n\n customer_id = mongo.IntField(verbose_name=_(u'Cliente'), unique_with='email')\n name = mongo.StringField(max_length=255, verbose_name=_(u'Nome'))\n email = mongo.EmailField(verbose_name=_(u'E-mail'))\n campaigns = mongo.ListField(mongo.DictField(), verbose_name=_(u'Campanhas que o usuário participou'))\n customs = mongo.ListField(mongo.DictField(), verbose_name=_(u'Campos customizados do cliente'))\n status = mongo.StringField(choices=STATUS_CHOICES, default='ok', max_length='10', verbose_name=_(u'Status do E-mail'))\n date_created = mongo.DateTimeField(verbose_name=_(u'Criado em'))\n last_updated = mongo.DateTimeField(verbose_name=_(u'Última atualização em'))\n tag_list = mongo.ListField(mongo.StringField(), verbose_name=_(u'Listas que o contato faz parte'))\n \n meta = {\n 'index_background': True,\n 'index_drop_dups': True,\n 'indexes': [\n ('customer_id', 'tag_list', 'status', 'opt_out'),\n ('customer_id', 'tag_list', 'status'),\n ('customer_id', 'tag_list', 'opt_out'),\n ('customer_id', 'customs.k', 'customs.v', 'status', 'opt_out') \n ],\n }\n\n class Meta:\n using = 'mongodb'\n verbose_name = _(u'Contato')\n verbose_name_plural = _(u'Contatos')\n",
"text": "Hello Everyone!I’m having a similar issue to Mongodb update_many and limit - #6.I have a database with 17 MM of records, but I would like to select a range with 5k records inside this collection, that matches with a single query, but I cannot get the first register, because the database does an abend. I’m running a machine at AWS using an instance t3.xlarge with replication.This query that I’m running in pythonI’m using mongoengine to specify my class, so I can describe the attributes below:I must confess that I’ve tried everything to solve this performance issue I’m almost looking for a bount hunt to help to solve this problem.Any help will be very appreciatedThanks so much!",
"username": "rogerio.carrasqueira"
},
{
"code": "",
"text": "because the database does an abendHello @rogerio.carrasqueira, can you tell where does the code get aborted?",
"username": "Prasad_Saya"
}
] | Issue selecting 5k records from 17MM | 2021-01-08T18:57:57.733Z | Issue selecting 5k records from 17MM | 1,772 |
null | [
"queries",
"python"
] | [
{
"code": "",
"text": "Hello.\nPython 3.7.3\npymongo==3.10.1\nMongodb version 4.2.3\nin the collection of 100 million documents and every 30min + 300K documentsHow to make a request similar to MySQL:UPDATE table SET status_flag=1 WHERE status_flag=0 LIMIT 300000;???This is udate all documents\nmy_collection.update_many({“status_flag”: 0}, {\"$set\": {“status_flag”: 1}}).limit(300000)How to make a limit ?",
"username": "Kompaniiets_Denis"
},
{
"code": "bulk_request = [ ]\n\nfor doc in collection.find( { 'status_flag': 0 } ).limit( 3 ):\n bulk_request.append( UpdateOne( { '_id': doc['_id'] }, { '$set': { 'status_flag': 1 } } ) )\n\nresult = collection.bulk_write( bulk_request )\n\nprint(result.matched_count, result.modified_count) \nstatus_flag",
"text": "Hello @Kompaniiets_Denis, welcome to the MongoDB forum.You can use Bulk Write Operations for this batch update. Using PyMongo:PyMongo Definitions:NOTE: Having an index on the query filter field status_flag can improve the query performance.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "index status_flag and ts_utc_start setbulk_request = [ ]\nfor doc in col.find( { ‘status_flag’: count_old } ).sort( [ ( ‘ts_utc_start’, 1 ) ] ).limit( 10 ):\nbulk_request.append( updateOne( { ‘_id’: doc[ ‘_id’ ] }, { ‘$set’: { ‘status_flag’: count_new } } ) )\nresult = col.bulk_write( bulk_request )or\nbulk_request.append( UpdateOne( { ‘_id’: doc[ ‘_id’ ] }, { ‘$set’: { ‘status_flag’: count_new } } ) )gives an error message\nException name ‘updateOne’ is not defined\nor\nException name ‘UpdateOne’ is not defined",
"username": "Kompaniiets_Denis"
},
{
"code": "import pymongo\nfrom pymongo import UpdateOne\nclient = pymongo.MongoClient()\ncollection = client.testDB.testColl\n\nbulk_request = [ ]\nfor doc in collection.find( { 'status_flag': 0 } ).limit( 2 ):\n bulk_request.append(UpdateOne( { '_id': doc['_id'] }, { '$set': { 'status_flag': 1 } } ) )\n\nresult = collection.bulk_write( bulk_request )\nprint(result.matched_count, result.matched_count)",
"text": "Here is your Pyhton code with proper imports:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "@Prasad_Saya thanks for the help 300K documentsfor doc in col.find({“status_flag”: 0}).sort( [(“ts_utc_start”, 1)] ).limit(300000):\nres = col.update_many({\"_id\": doc[\"_id\"]}, {\"$set\": {“status_flag”: 1}})completed in 2min 04 secondsbulk_request = [ ]\nfor doc in col.find( { ‘status_flag’: 0 } ).limit( 300000 ):\nbulk_request.append(UpdateOne( { ‘_id’: doc[’_id’] }, { ‘$set’: { ‘status_flag’: 1 } } ) )\nresult = collection.bulk_write( bulk_request )completed in 24 secondsThis is a cool result.",
"username": "Kompaniiets_Denis"
},
{
"code": "",
"text": "2 posts were split to a new topic: Issue selecting 5k records from 17MM",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Mongodb update_many and limit | 2020-07-10T08:56:25.409Z | Mongodb update_many and limit | 5,821 |
null | [
"database-tools",
"backup"
] | [
{
"code": "mongodump --uri mongodb+srv://admin:<PASSWORD>@cluster0.lab0q.mongodb.net/<DATABASE> \n",
"text": "Hello. I have been strugling with this for days now so hoping someone can help me. I have a free plan on Atlas abd i’m trying to get a dump of my database via the CLI but failing every time. Atlas says I can use the command:but I can’t get it to recognise the --uri option.I am using a Ubuntu client machine and went through the install process here https://docs.mongodb.com/mongocli/master/install but no luck.Is there an easier way to get my data out of this? I currently have no means of backing up my data.Thanks",
"username": "Slowreader87"
},
{
"code": "",
"text": "Post a screenshot of what you are trying that shows the issues you are getting.Without an exact error message we cannot really help.Just in case, and are placeholders and must be replaced with your real admin user password and a real database name. It is the safest to use test as the database name if you are unsure.If you look at the documentation of mongodump at https://docs.mongodb.com/database-tools/mongodump/ you will notice in the examples that they have an equal sign between –uri and the actual URI.",
"username": "steevej"
},
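For example, with the equals sign and the placeholders from the first post filled in, the invocation would look something like this (cluster address taken from the original post, test used as the database name):

mongodump --uri="mongodb+srv://admin:<PASSWORD>@cluster0.lab0q.mongodb.net/test"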
{
"code": "",
"text": "Thanks for your response.If I post a screenshot it will reveal my connection info so i’ll paste from the command line belowI have now followed the instructions on this page to install the Mongo database toolsI then try the following command along with the equal sign, thank you\nmongodump -uri=‘mongodb+srv://<MY_USER>:<MY_PASS>@cluster0.lab0q.mongodb.net/’I then get\n2021-01-09T05:22:28.859+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refusedMy Linux box has 27017 outbound allowed. My application wouldn’t work without this. And I have set up Atlas to allow connections from this IP, again my app wouldn’t work without this. So I’m not sure why it is saying “connection refused”Thanks",
"username": "Slowreader87"
},
{
"code": "",
"text": "I have now managed to get my data out.In the end I used the command:mongodump “mongodb+srv://cluster0.lab0q.mongodb.net/<my_db>” --username <my_username>from within the downloaded version of the Windows MongoDB Tools\nmongodb-database-tools-windows-x86_64-100.2.1Thanks",
"username": "Slowreader87"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help with mongodump | 2021-01-08T20:37:08.060Z | Help with mongodump | 2,719 |
[
"atlas-functions"
] | [
{
"code": "{\n \"objectID\": \"5fa0bdb636a703546ee07c52\",\n \"id\": \"Joe\",\n \"value\": {\n \"id\": \"test\",\n \"dob\": \"20-20\",\n \"purchDate\": \"2020-09-02\",\n \"purchFrom\": \"2020-09-02\",\n \"location\": \"a\",\n \"category\": \"a\",\n \"breed\": \"a\",\n \"weight\": \"a\",\n \"damID\": \"a\",\n \"sireID\": \"a\",\n \"generalList\": [\n {\n \"genDate\": \"10-10\",\n \"genComment\": \"comment1\"\n },\n {\n \"genDate\": \"20-20\",\n \"genComment\": \"comment2\"\n }\n ],\n \"performanceList\": [\n {\n \"perfDateGain\": \"10-10\",\n \"perfWeight\": \"123\",\n \"perfBCS\": \"123\",\n \"perfChange\": \"123\",\n \"perfRate\": \"123\",\n \"perfDateBreed\": \"10-10\",\n \"perfEval\": \"123\",\n \"perfCommentBreed\": \"asd\"\n }\n ],\n \"healthList\": [\n {\n \"healthDate\": \"10-10\",\n \"healthIssue\": \"asd\",\n \"healthAction\": \"asd\",\n \"healthDosage\": \"123\",\n \"healthComment\": \"asd\"\n }\n ]\n }\n}\n",
"text": "Hi all - I’m new to using Mongo so I apologize for anything that looks dumb. I handle updating my records with a Realms webhook that works perfectly, EXCEPT for when I try to update the arrays I store inside my DB objects.Here is an example of a JSON post I am making to this webhook:And here is the function code I’m using to update the DB:\nfunction1470×518 22.1 KB\nLike I said, it has something to do with the way I am accessing the index of my array inside of my object, because everything works fine except for that part. Is there a special way to access array elements inside objects? Thanks!",
"username": "Joewangatang"
},
{
"code": "value_id == \"the value from objectID field\"exports = function(payload, response) {\n const body = EJSON.parse(payload.body.text());\n const id = BSON.ObjectId(body.objectID);\n const coll = context.services.get(\"mongodb-atlas\").db(\"test\").collection(\"coll\");\n return coll.updateOne({'_id': id},{'$set': body.value});\n};\nMongoDB Enterprise Free-shard-0:PRIMARY> db.coll.insert({name:\"Maxime\"})\nWriteResult({ \"nInserted\" : 1 })\nMongoDB Enterprise Free-shard-0:PRIMARY> db.coll.findOne()\n{ \"_id\" : ObjectId(\"5ff7b41bc922979b2f673bcf\"), \"name\" : \"Maxime\" }\ncurl \\\n-H \"Content-Type: application/json\" \\\n-d '{\"objectID\":\"5ff7b41bc922979b2f673bcf\", \"value\": {\"name\":\"Maxime Beugnet\", \"age\": 32}}' \\\nhttps://eu-west-1.aws.webhooks.mongodb-realm.com/api/client/v2.0/app/community-test-oubdb/service/HTTP/incoming_webhook/test\nMongoDB Enterprise Free-shard-0:PRIMARY> db.coll.findOne()\n{\n\t\"_id\" : ObjectId(\"5ff7b41bc922979b2f673bcf\"),\n\t\"name\" : \"Maxime Beugnet\",\n\t\"age\" : 32\n}\nnameage",
"text": "Hi @Joewangatang and welcome in the MongoDB Community !I think what you are trying to do here is to set all the values from your value field into your document in MongoDB with _id == \"the value from objectID field\".If that is correct then I think you can get away from this with a simple function like this:Here is my test.I inserted this doc:Then I sent my cURL command:And then my document was like this:As you can see above, my field name was updated and the new field age was added correctly.I hope I understood what you were trying to do correctly.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "return coll.updateOne({'_id': id},{'$set': {\"value\":body.value}});",
"text": "Hey Maxime,Thank you! This is exactly what I was looking for, except your function replaced the whole document. I modified it to only change the “value” field of my record, as that’s where all the information is stored:return coll.updateOne({'_id': id},{'$set': {\"value\":body.value}});Cheers!",
"username": "Joewangatang"
},
{
"code": "",
"text": "Oops ! But I’m glad I have put you the right direction.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Trouble Parsing JSON Data | 2021-01-07T00:00:08.455Z | Trouble Parsing JSON Data | 7,181 |
|
null | [
"connecting",
"golang"
] | [
{
"code": "",
"text": "Hi, I want to develop a multi-tenant SaaS golang application with tenant per database schema and want to have user authentication per database by leveraging connection pool. I found a similar implementation in java in this link. My question is how to do user authentication per database in Go driver with connection pool? Thanks!",
"username": "Kevin_Meng"
},
{
"code": "",
"text": "you can check the official documentation",
"username": "Alessandro_Sanino"
},
{
"code": "",
"text": "Hi Alessandro,Thanks for the reply. The sample code in the link is not for my case.\nI want to build a SaaS application with “Database per Tenant” approach and I need each tenant to authenticate with different credentials to MongoDB.\nI learned MongoDB completely separated the actions of \"connect” and “authenticate”, means we could leverage connection pool to create a pool of “blank” connections and then borrow a connection from the pool to do authentication for current tenant.\nBut I have no idea how to do this by using Go driver?BR, Kevin",
"username": "Kevin_Meng"
},
{
"code": "",
"text": "Hi Kevin, Any luck with question you posted related to \"I want to build a SaaS application with “Database per Tenant” approach and I need each tenant to authenticate with different credentials to MongoDB.\nI learned MongoDB completely separated the actions of “connect” and “authenticate”, means we could leverage connection pool to create a pool of “blank” connections and then borrow a connection from the pool to do authentication for current tenant.\nBut I have no idea how to do this by using Go driver?”Appreciate response if any. Thanks.",
"username": "putta_anoop"
}
] | User authentication per database | 2020-11-16T08:05:24.775Z | User authentication per database | 2,498 |
[
"swift",
"app-services-data-access"
] | [
{
"code": "exports = async function createNewUserDocument({user}) {\n const cluster = context.services.get(\"mongodb-atlas\");\n const users = cluster.db(\"tracker\").collection(\"User\");\n return users.insertOne({\n _id: user.id,\n _partition: `user=${user.id}`,\n name: user.data.email,\n canReadPartitions: [],\n canWritePartitions: [`user=${user.id}`,`project=${user.id}`],\n memberOf: [\n {\"name\": \"My Project\", \"partition\": `project=${user.id}`}\n ],\n });\n};\n No userID found No user collection No project partition {_id: localUserID},\n {$addToSet: { canWritePartitions: projectPartition, canReadPartitions: projectPartition }}\n);\ntry! userRealm.write() {\n let project = Project(partition: \"project=\\(ObjectId.generate().stringValue)\", name: \"New Project created\")\n guard let user = self.userRealm.object(ofType: User.self, forPrimaryKey: app.currentUser?.id) else { print (\"no users\"); return}\n user.memberOf.append(project)\n //Add projectpartition to users canwrite and canread array\n addProjectRulesToUser(projectPartition: project.partition!)\n}\n",
"text": "Dear All,\nI’m happy to be a new member of the community, this platform seems to have exactly what I’m planning for my app. Currently I’m trying to understand the how mongoldb works.\nI started with the IOS tutorial of creating a TaskApp with the ability to have common project etc.\nI made the app work as per the instructions, great!Now following my future goals, I started to modify part of the project and mongobd backend, as follows:Mongoldb:Functions:\n[modified] - createUserDocument - to make user able to change it own fields(to add new project)[added] - addWriteRulesOnNewProjectexports = async function(projectPartition) {const collection = context.services.get(“mongodb-atlas”).db(“tracker”).collection(“User”);\nconst localUserID = context.user.id\nif (localUserID == null) {\nreturn {error: No userID found};\n}\nif (collection == null) {\nreturn {error: No user collection};\n}\nif (projectPartition == null) {\nreturn {error: No project partition};\n}try {\nreturn await collection.updateOne(} catch (error) {\nreturn {error: error.toString()};\n}};Swift Tutorial TaskProject[added] - in ProjectViewController @ viewdidLoad a right item “add”\n// NEW//////////////navigationItem.rightBarButtonItem = UIBarButtonItem(title: “Add”, style: .plain, target: self , action: #selector (addProject))[added] - in ProjectViewController - function addProject@objc func addProject(){}\nfunc addProjectRulesToUser(projectPartition: String) {\nprint(“Adding project: (projectPartition)”)\nlet user = app.currentUser!\nuser.functions.addProjectRules([AnyBSON(projectPartition)], self.onNewProjectOperationComplete)\n}Results:Issue:mongo db log :\nScreenshot 2021-01-07 at 12.02.012150×770 62.1 KBNow I stop the app, restart the app and tap again and it loads it, but if I create again a new one and tap it, it again crashes at the Realm.async when trying to download the realm locally, I guess.\nScreenshot 2021-01-07 at 12.09.111650×638 135 KBWhat is wrong here?Thank you!\nI love the platform, cannot wait to deploy the real app!",
"username": "Radu_Patrascu"
},
{
"code": "project=xyzproject=project=project=xyz",
"text": "Hi, thanks for your question!It’s a bit hard to tell, but my first thought is that you might be sending the project’s partition, which is project=xyz, in addProjectRulesToUser(). But the function addWriteRulesOnNewProject() seems to prepend another project= to the partition key string, resulting in project=project=xyz. Is it possible to log exactly what you’re receiving in the addWriteRulesOnNewProject() function?Hope this helps!",
"username": "Chris_Bush"
},
{
"code": "",
"text": "Thank you for the response, unfortunately i fixed it just before uploading here but did not update the function, at first it was adding project=project=, i saw it and fixed.The final data you can see in results, it is as per Tutorial, only difference that projects are added dynamically.\nSo to analyze the logic:As per Mongodb Realm documentation, it says if the realm will be opened for the first time, you need to call Real.async, which is also already implemented in the tutorial.\nSo considering that before opening the new realm, the user has all necessary rights and permissions to open the realm, the mongodb Realm is still logging the permission error (sorry for repeating).",
"username": "Radu_Patrascu"
},
{
"code": "canReadPartitions: []createNewUserDocumentcanReadPartitioncanWritePartitiontrueconsole.logif",
"text": "Not certain that it would cause this problem, but I see that you’ve removed canReadPartitions: [user=${user.id}] from createNewUserDocument - try adding that back in as it may mean that you don’t have the correct user data.If that doesn’t fix it, then I’d focus on the Realm log that’s reporting that the user doesn’t have permission to sync partition “project=5ff…cc103”.The test to see whether the the user can sync a partition or not is decided by the functions canReadPartition and canWritePartition.Have you made any changes to those functions?Try returning true from the functions to confirm that that’s what’s triggering the error.Tracing what’s happening in those functions is a little messy but can be done. Add console.log commands to print out things like the name of the function, the user’s ID, whether an if test passes, etc. – the outputs will appear in the Realm logs.Let us know how you get on!",
"username": "Andrew_Morgan"
},
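As a sketch of the tracing suggested above (assuming the tutorial's canReadPartition still makes its decision from the custom user data), the function could log its inputs and its decision so they appear in the Realm logs:

exports = async function(partitionValue) {
  console.log(`canReadPartition: user=${context.user.id}, partition=${partitionValue}`);
  // custom_data is the read-only custom user data snapshot attached to the user
  const readPartitions = (context.user.custom_data && context.user.custom_data.canReadPartitions) || [];
  const allowed = readPartitions.includes(partitionValue);
  console.log(`canReadPartition -> ${allowed}`);
  return allowed;
};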
{
"code": "",
"text": "Hi Radu,Just an addition to the above, I know you’re modifying your own project after getting the tutorial to work, but if you’re looking for a further reference, beyond the docs, and an alternative approach, @Andrew_Morgan who replied also, recently posted a variation on the Tutorial using SwiftUI & Combine - you can read this in our Developer hub HERE.",
"username": "Shane_McAllister"
},
{
"code": "user=${user.id}",
"text": "Hey Andrew,\nI added user [user=${user.id}] back to the canread, but nothing changed.\nCurrent form:\nScreenshot 2021-01-08 at 14.19.06796×698 94.7 KBNo changes to the canRead or canWrite functions.I just changed the permissions on the sync to only true, bypassing completely the canRead/canWrite functions and I still get same error on mongoldb and the app crashes. It’s like the problem is in the Realm.async, not being able to attempt to download or create the realm.Other ideas ? ",
"username": "Radu_Patrascu"
},
{
"code": "",
"text": "Thank you Shane, I’m going to take a look. I’m still going to work with the UiKit, at least for now, but maybe in the future.",
"username": "Radu_Patrascu"
},
{
"code": "",
"text": "I uploaded the CLI and swift file on my git hub, maybe it will be easier to troubleshoot.\nProject Git-Hub Link",
"username": "Radu_Patrascu"
},
{
"code": "",
"text": "Building it now - I’ll let you know what I find",
"username": "Andrew_Morgan"
},
{
"code": "canReadPartitioncanWritePartitionfalsetruetrueUsercanXxxxPartitioncanReadPartitionexports = async function(partitionValue) {\n const cluster = context.services.get(\"mongodb-atlas\");\n const userCollection = cluster.db(\"tracker\").collection(\"User\");\n \n return userCollection.findOne({ _id: context.user.id })\n .then (userDoc => {\n return userDoc.canReadPartitions && userDoc.canReadPartitions.includes(partitionValue);\n }, error => {\n console.log(`Couldn't find user ${context.user.id}: ${error}`);\n return false\n })\n}\ncanWritePartitionexports = async function(partitionValue) {\n const cluster = context.services.get(\"mongodb-atlas\");\n const userCollection = cluster.db(\"tracker\").collection(\"User\");\n\n return userCollection.findOne({ _id: context.user.id })\n .then (userDoc => {\n return userDoc.canWritePartitions && userDoc.canWritePartitions.includes(partitionValue);\n }, error => {\n console.log(`Couldn't find user ${context.user.id}: ${error}`);\n return false\n })\n};\n",
"text": "Hi Radu,I’ve found a solution.As I’d expected, the problem is that canReadPartition and/or canWritePartition are returning false when they should return true. Making them both simple return true (and hitting Deploy) got things working.The issue is that the functions use the current user’s custom data object to see if the partition is listed in there. Changes to custom data only appear when the user refreshes their token (e.g. when logging back into the app).So, on creating a new project, you correctly call your function to add the associated partition to the User collection. When the user then tries to open the project (and your code tries to open the realm), the canXxxxPartition functions are called but they make their decision on “stale” custom user data. When you restart the app, the user has to login again -> refresh of user token -> custom data updated.The fix is for the functions to fetch the User document from the database (rather than relying on the (possibly) stale custom user data).This is the working canReadPartition function:and the working canWritePartition function:Please let me know if this works for you.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "It really was the solution!But why when I changes the app Sync rule to both true it was not working ?",
"username": "Radu_Patrascu"
},
{
"code": "",
"text": "Did you hit the “Deploy” button (and possibly wait for a couple of seconds)?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "(Whenever I create a new Realm app, the first thing I do is disable drafts under Deploy/Configuration as it’s so easy to miss the prompt to manually deploy)",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thanks @Radu_Patrascu for raising this – we’ll update the code in the tutorial so that others don’t hit similar problems when they try extending the mobile apps.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "I think I did, deployed it, but hard to say now, I’ll try tomorrow to break it again and check.Thank you very much, it was really painful to try fixing it myself.Where exactly can I find this info about the difference between context.user and a queried user and other traps like this ? ",
"username": "Radu_Patrascu"
},
{
"code": "",
"text": "You can start here… https://docs.mongodb.com/realm/users/enable-custom-user-dataI use it in frontend apps as a convenient read-only snapshot of what the data looked like when the user last logged in. If I need the data to be 100% up to date (rather than up to 30 minutes stale) then I’d read it from the database.In general, I don’t use it in my backend functions as it’s so easy and cheap to fetch the completely up to date data from the database each time.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help understand Rules - Using TaskApp Tutorial | 2021-01-07T10:04:28.108Z | Help understand Rules - Using TaskApp Tutorial | 5,859 |
|
null | [] | [
{
"code": " const query = [\n {\n $search: {\n index: 'directSearchIndex',\n compound: {\n should: [\n {\n text: {\n query: searchTerm,\n path: 'email',\n },\n },\n {\n text: {\n query: searchTerm,\n path: 'phone',\n },\n },\n {\n text: {\n query: cleanWebsiteUrl,\n path: 'website',\n },\n },\n ],\n },\n },\n },\n {\n $limit: 5,\n },\n {\n $project: {\n _id: 1,\n website: 1,\n email: 1,\n },\n },\n ];\n \n return await User.aggregate(query).exec();\n {\n \"mappings\": {\n \"dynamic\": true\n }\n }\n",
"text": "I’ve successfully created a search index for two of my documents. I’ve also created mongo search aggregation methods for the corresponding index’s. My profiler however is complaining that none of my index’s are being used (keys examined is 0) and I’m not sure what the issue is. Initially the index’s i created were static as i thought that the way i setup the index was the issue but even when i switched to a dynamic index I’m getting no keys examined.The following is a small code snippet for the aggregation pipeline (which I’m assuming is causing the issue)The cleanWebsiteUrl param is just the search term with the protocol stripped off. The index im using now is a dynamic index (copied and pasted from the docs):",
"username": "Bhavdip_Chauhan"
},
{
"code": "",
"text": "Each query can only use one index. You use the search index in the first stage, but the subsequent project stage does not use an index. That stage may be the source of your challenges here.",
"username": "Marcus"
}
] | Being told that no keys are looked through when using Atlas Search | 2021-01-06T19:39:10.576Z | Being told that no keys are looked through when using Atlas Search | 1,886 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "I uploaded two different validation libraries and none works. Tried the latest versions of Joi and ow. ow kept giving “getFileName is not a function error”. I went back to earlier versions, same issue. Joi’s issue is because it uses unsupported apis like WeakSet.It’s a lot of needless work to start writing validation features from scratch. Are there tested alternatives? Please help.Thanks",
"username": "Fatimah_Sanni"
},
{
"code": "",
"text": "Hey Fatimah, are you able to contact support with a ticket? If so, that would be the best way to file this bug.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Was there an outcome to the above? I’m encountering the same issue with Joi",
"username": "Justin_Grocott"
},
{
"code": "",
"text": "None that I know of. Moved to ExpressJS for this reason among others",
"username": "Fatimah_Sanni"
}
] | External Dependencies (Joi, ow) not working | 2020-12-12T20:20:31.522Z | External Dependencies (Joi, ow) not working | 2,243 |
null | [] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"5fe45a9de1cccf001a7c6f7f\"\n },\n \"caseQuantity\": 5,\n \"unitQuantity\": 0,\n \"totalQuantity\": 2000,\n \"currentQuantity\": 2000,\n \"isClaimActive\": \"true\",\n \"claim\": 32,\n \"status\": \"Active\",\n \"purchaseInventoryId\": {\n \"$oid\": \"5fe45a9ce1cccf001a7c6f7e\"\n },\n \"index\": \"1608800909352\",\n \"batchNo\": 1,\n \"unitPrice\": 14.19,\n \"casePrice\": 255.75,\n \"product\": {\n \"$oid\": \"5f8d9a6184c1d0005814ed61\"\n },\n \"productName\": \"Red Cow - Red Cow 18g\",\n \"type\": {\n \"$oid\": \"5f8d931fcc42160023d770e2\"\n },\n \"units\": 400,\n \"agency\": {\n \"$oid\": \"5f8d6f0acc42160023d770c4\"\n },\n \"createdBy\": {\n \"$oid\": \"5f8d6f2dcc42160023d770c5\"\n },\n \"__v\": 0,\n \"reservations\": [{\n \"loadingsheetId\": \"5fe45a9ce1cccf001a7c6f9k\"\n \"reservedTotalQuantity\": 22\n }]\n}\n{\n \"_id\": {\n \"$oid\": \"5fe45a9de1cccf001a7c6f7f\"\n },\n \"caseQuantity\": 5,\n \"unitQuantity\": 0,\n \"totalQuantity\": 2000,\n \"currentQuantity\": 2000,\n \"isClaimActive\": \"true\",\n \"claim\": 32,\n \"status\": \"Active\",\n \"purchaseInventoryId\": {\n \"$oid\": \"5fe45a9ce1cccf001a7c6f7e\"\n },\n \"index\": \"1608800909352\",\n \"batchNo\": 1,\n \"unitPrice\": 14.19,\n \"casePrice\": 255.75,\n \"product\": {\n \"$oid\": \"5f8d9a6184c1d0005814ed61\"\n },\n \"productName\": \"Red Cow - Red Cow 18g\",\n \"type\": {\n \"$oid\": \"5f8d931fcc42160023d770e2\"\n },\n \"units\": 400,\n \"agency\": {\n \"$oid\": \"5f8d6f0acc42160023d770c4\"\n },\n \"createdBy\": {\n \"$oid\": \"5f8d6f2dcc42160023d770c5\"\n },\n \"__v\": 0,\n \"reservations\": [{\n \"loadingsheetId\": \"5fe45a9ce1cccf001a7c6f97\"\n \"reservedTotalQuantity\": 22\n },{\n {\n \"loadingsheetId\": \"5fe45a9ce1cccf001a7c6f98\"\n \"reservedTotalQuantity\": 10\n }]\n }\n",
"text": "I have a Stock document.I need to update my reservation array of objects with a new reservation of reservedTotalQuantity 10. Output should be like below.How can i achieve this using Stock.updateOne() update operation??",
"username": "Shanka_Somasiri"
},
{
"code": "",
"text": "Thanks to @Prasad_Saya was able to get this sorted.",
"username": "Shanka_Somasiri"
},
{
"code": "db.stock.update(\n { _id: ObjectId(\"5fe45a9de1cccf001a7c6f7f\") },\n { $push: { reservations: {\"loadingsheetId\": \"5fe45a9ce1cccf001a7c6f98\", \"reservedTotalQuantity\": 10} } }\n)\ntest:PRIMARY> db.col.insert({reservations: [{table:1, seats: 3}]})\nWriteResult({ \"nInserted\" : 1 })\ntest:PRIMARY> db.col.findOne()\n{\n\t\"_id\" : ObjectId(\"5ff79e3d87f6a6d05484e2c8\"),\n\t\"reservations\" : [\n\t\t{\n\t\t\t\"table\" : 1,\n\t\t\t\"seats\" : 3\n\t\t}\n\t]\n}\ntest:PRIMARY> db.col.update({\"_id\" : ObjectId(\"5ff79e3d87f6a6d05484e2c8\")}, {$push: {reservations: {table: 2, seats: 10}}})\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\ntest:PRIMARY> db.col.findOne()\n{\n\t\"_id\" : ObjectId(\"5ff79e3d87f6a6d05484e2c8\"),\n\t\"reservations\" : [\n\t\t{\n\t\t\t\"table\" : 1,\n\t\t\t\"seats\" : 3\n\t\t},\n\t\t{\n\t\t\t\"table\" : 2,\n\t\t\t\"seats\" : 10\n\t\t}\n\t]\n}\n",
"text": "Hi @Shanka_Somasiri,Indeed, you could do it with $set + $concatArrays but it’s a bit overkill and less efficient than a simple $push which does exactly what you want.Your code should be something like:Here is a generic example I did in the Mongo Shell:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "for (const el of records) {\n console.log(el);\n promiseArray.push(\n Stock.updateOne(\n {\n index: el.index,\n product: el.product,\n batchNo: el.batchNo,\n agency,\n currentQuantity: { $gte: el.loadingTotal },\n },\n [\n {\n $set: {\n reservations: {\n $concatArrays: [\n '$reservations',\n [\n {\n loadingSheetId: sheetAfterSave._id,\n reservedCaseQuantity: el.loadingCaseCount,\n reservedUnitQuantity: el.loadingUnitCount,\n reservedTotalQuantity: el.loadingTotal,\n },\n ],\n ],\n },\n currentQuantity: {\n $add: ['$currentQuantity', -el.loadingTotal],\n },\n },\n },\n {\n $set: {\n unitQuantity: {\n $mod: ['$currentQuantity', el.units],\n },\n },\n },\n {\n $set: {\n caseQuantity: {\n $floor: {\n $divide: ['$currentQuantity', el.units],\n },\n },\n },\n },\n ],\n {\n session: session,\n }\n )\n );\n[distribution] MongooseError: Invalid update pipeline operator: \"$push\"\n[distribution] at castPipelineOperator (/app/node_modules/mongoose/lib/helpers/query/castUpdate.js:141:9)\n[distribution] at castUpdate (/app/node_modules/mongoose/lib/helpers/query/castUpdate.js:39:22)\n[distribution] at model.Query._castUpdate (/app/node_modules/mongoose/lib/query.js:4524:10)\n[distribution] at castDoc (/app/node_modules/mongoose/lib/query.js:4553:18)\n[distribution] at model.Query._updateThunk (/app/node_modules/mongoose/lib/query.js:3735:20)\n[distribution] at model.Query.<anonymous> (/app/node_modules/mongoose/lib/query.js:3833:23)\n[distribution] at model.Query._wrappedThunk [as _updateOne] (/app/node_modules/mongoose/lib/helpers/query/wrapThunk.js:16:8)\n[distribution] at /app/node_modules/kareem/index.js:369:33\n[distribution] at processTicksAndRejections (node:internal/process/task_queues:75:11)\n",
"text": "HI @MaBeuLux88. Thanks for your answer.Ok lets say that i have 4 operations to be done.\n1). Push data to reservations array\n2). Update the currentQuantity\n3). Update the caseQuantity\n4). Update the units QuantityMy current implementation is below.This works perfectly. But seems like i cannot use $push here. Getting below error since update operators cannot be used as an aggregation pipeline operator if i’m not mistaken.Is there any other way I could do this to overcome the overkill and inefficiency here??",
"username": "Shanka_Somasiri"
},
{
"code": "for (const el of records) {\n console.log(el);\n promiseArray.push(\n Stock.updateOne(\n {\n index: el.index,\n product: el.product,\n batchNo: el.batchNo,\n agency,\n currentQuantity: { $gte: el.loadingTotal },\n },\n [\n {\n $set: {\n reservations: {\n $concatArrays: [\n '$reservations',\n [\n {\n loadingSheetId: sheetAfterSave._id,\n reservedCaseQuantity: el.loadingCaseCount,\n reservedUnitQuantity: el.loadingUnitCount,\n reservedTotalQuantity: el.loadingTotal,\n },\n ],\n ],\n },\n currentQuantity: {\n $add: ['$currentQuantity', -el.loadingTotal],\n },\n unitQuantity: {\n $mod: ['$currentQuantity', el.units],\n },\n caseQuantity: {\n $floor: {\n $divide: ['$currentQuantity', el.units],\n },\n },\n },\n }\n ],\n {\n session: session,\n }\n )\n );\n",
"text": "I take that back - my bad. I didn’t understand you where using the aggregation pipeline update $set instead of the update operator $set and for some reason I thought the entire array was going over the network between your client and MDB cluster but that’s not the case here. You are all good !The only comment I have is that I thing you can write this in a bit more compact way if you like. It won’t affect the performances but… I like compact stuff !Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @MaBeuLux88,\nThanks for your advice regarding writing in compact way. Yes it won’t affect the performance but yeah i will look into that.Appreciate your help.Cheers ",
"username": "Shanka_Somasiri"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDb updateOne to update an array | 2021-01-07T05:30:49.249Z | MongoDb updateOne to update an array | 8,201 |
null | [
"aggregation",
"queries",
"performance"
] | [
{
"code": "{ts : 1, field1 : 1}{\n \"$match\":{\n \"ts\":{\n \"$gte\":\"startDate\",\n \"$lt\":\"endDate\"\n },\n \"field1\":\"myvalue\"\n }\n},\n{\n \"$sort\":{\n \"ts\":1\n }\n},\n{\n \"$project\":{\n \"ts\":true,\n \"field1\":true,\n \"field2\":true\n }\n}\n//other stages\n$project{ field2 : { $gt : 0 } }field2$match$match$project$project",
"text": "I have a compound index with {ts : 1, field1 : 1}I have an aggregation pipeline that starts as belowI dont want to read the entire document, so I am using $project to take only required fields.Before I go to the other stages I want to filter out the docs which have { field2 : { $gt : 0 } }.field2 is not indexed. So should I put the above condition in the first $match stage or should I have a separate $match after $project.Will mongodb have to read the entire doc from the disk if I put it in the first stage? Or will mongodb do that anyways and filter out the fields after $project?",
"username": "Dushyant_Bangal"
},
{
"code": "{field1: 1, ts : 1, field2 : 1}",
"text": "Hi @Dushyant_Bangal,Welcome to MongoDB community!First if you have a set of “and” expressions that could be placed in first stage you should do that to pass as minimum data to the next stage as possible. Now if this query is often reoccurring why not to have a compound index on all 3 fields? This will speedup the query regardless of doc access or not.Additionally, the order of the fields in the index matter. We call it Equility Sort and Range order.So in your case n optimal index will be {field1: 1, ts : 1, field2 : 1}.Additionally if an index can cover all return data it can be a covered query and speed performance so having all fields can help avoid document scans.I suggest to read the following material:Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
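A small mongo-shell sketch of the suggestion above (the collection name events and the dates are placeholders); the index follows the Equality, Sort, Range order, and explain can confirm it is picked up by the first stages:

// Equality (field1), Sort (ts), Range (field2)
db.events.createIndex({ field1: 1, ts: 1, field2: 1 });

// Check that the first $match / $sort use the index
db.events.explain("executionStats").aggregate([
  { $match: {
      field1: "myvalue",
      ts: { $gte: ISODate("2021-01-01"), $lt: ISODate("2021-01-08") },
      field2: { $gt: 0 }
  } },
  { $sort: { ts: 1 } },
  { $project: { ts: true, field1: true, field2: true } }
]);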
{
"code": "tsfield1field2{$match:{field2:{$gt:0}}}{$match:{field2:{$gt:0}}}$project",
"text": "The ts field has highest cardinality for me. That used along with field1 narrows down number of docs to about 1450.I have such multiple queries, each with different field in place of field2. So I cannot add compound index for all of them.I am fine with the extra time it might need not being in index.What I mainly want to know is, where should I add the {$match:{field2:{$gt:0}}}?\nWill merging it with first stage cause the entire document to be read for the filtering process or will mongodb just read that single field?\nIf it reads the entire document from disk for filtering, then I feel it would be better to put the {$match:{field2:{$gt:0}}} after $project stage.",
"username": "Dushyant_Bangal"
},
{
"code": "{ field1: 1, ts : 1}",
"text": "Hi @Dushyant_Bangal,The optimisation guide states that all possible filtering needs to be done in earliest stage possible.MongoDB can’t read single fields if they are not indexed. All documents of the first stage are read into memory and passed to the the next stage in memory (excluding sort stage which can use index) .The cardinality within an index can play a smaller role if the index cannot support the sort. Having ts as the first field will result in a blocking in memory sort.You can index your documents how you want of course but I would recommend { field1: 1, ts : 1}Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{ field1: 1, ts : 1}field1ts",
"text": "Thanks @Pavel_Duchovny. I’ll be going with adding the field in the first stage itself.Regarding the indexing, I did have { field1: 1, ts : 1} in the beginning, but then faced few issues like this Match regex is not utilizing index correctly - Working with Data - MongoDB Developer Community ForumsAlso, with this index, I didnt have to create a separate index on field1, query explain showed mongodb was reusing the existing index with min and max value on ts.",
"username": "Dushyant_Bangal"
},
{
"code": "",
"text": "Hi @Dushyant_Bangal,Sometimes an index will be chosen but its performance will be suboptimal.Running from min to max is usually undesirable as it means you are doing a full index scan where your purpose was to use index as a filter to reduce amount of docs to be accessed.A good metrics to look into is numKeysScanned or numDocsScanned vs nreturned.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{ field1: 1, ts:1 }{ ts:1 }{ ts:1, field1: 1 }{ field1: 1, ts:1 }tsfield1ts{ ts:1, field1: 1 }{ field1: 1, ts:1 }{ ts:1 }",
"text": "@Pavel_Duchovny I was trying to keep as less indexes as possible.My initial indexes were { field1: 1, ts:1 } and { ts:1 }When I read about cardinality it made a lot of sense to put { ts:1, field1: 1 } instead of { field1: 1, ts:1 }.My queries are mainly ts range and specific string id of source in field1 OR just ts range. So I figured just { ts:1, field1: 1 } index will do for both.\nBut now it looks like I should rethink.I know this is off topic from this thread, but if I go back to { field1: 1, ts:1 } do you think I should add { ts:1 } as well? Or should I first look at the metrics and then decide?",
"username": "Dushyant_Bangal"
},
{
"code": "{ field1: 1, ts:1 }{ ts:1 }",
"text": "Hi @Dushyant_Bangal{ field1: 1, ts:1 } and { ts:1 } make sense to cover the 2 described queries.Its true that we should avoid having redundant indexes and to many indexes to avoid write overhead they cause as well as memory and disk consumption.However, you should definitely keeps a few indexes that cover your main queries even if they have similar fields.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Non-Indexed match query should be before or after project? | 2021-01-07T05:48:21.414Z | Non-Indexed match query should be before or after project? | 4,603 |
null | [] | [
{
"code": "wget https://static.realm.io/downloads/sync/realm-sync-cocoa-4.7.10.tar.gz",
"text": "Not sure where to raise, so posting here.\nDownload speeds from the static server are really slow.wget https://static.realm.io/downloads/sync/realm-sync-cocoa-4.7.10.tar.gz\ngetting 10-20KB/s. That’s pretty close to dial up speeds.",
"username": "Denis_Tereshchenko"
},
{
"code": "",
"text": "Some users have observed slow downloads - in particular outside North America and Europe. We plan to solve it by distribute the binary files in a different way - see Distribute binaries in npm package · Issue #3492 · realm/realm-js · GitHub",
"username": "Kenneth_Geisshirt"
}
] | Download speed from static hosting | 2021-01-08T05:35:58.182Z | Download speed from static hosting | 1,900 |
null | [
"transactions"
] | [
{
"code": "",
"text": "After reading a paper on Clock-SI: Snapshot Isolation for Partitioned Data Stores Using Loosely Synchronized Clocksrecently, I was surprised to find that the content of the paper is very similar to the elements of mongodb transactions, so I am curious whether MongoDB’s transaction implementation is based on Clock-SI?",
"username": "Ouyang_Tsuna"
},
{
"code": "",
"text": "You can read this paper which describes the logical clock implementation that’s underlying MongoDB transactions.",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Thanks for you replying.",
"username": "Ouyang_Tsuna"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is the implementation of transaction based Clock-SI? | 2021-01-07T19:34:13.372Z | Is the implementation of transaction based Clock-SI? | 2,817 |
null | [] | [
{
"code": "",
"text": "I am using MongoDB 4.2 community edition and working on 3 node ReplicaSet (PSS).\neach node is 10 cores, 42 GB and 5.5TB of storage space.\nI have deleted almost 900GB of data as I can see it in file bytes available for reuse in db.collections.stats()\nIssue is compact command is not reclaiming space. I have ran it 6-7 times.\nIn past also I faced this issue, but compact used to free up some space in 2nd or 3rd attempt. But this time after running for 2 hours it just throws (ok,1) but without gaining any space.\nCan anyone please help me in this regards ?",
"username": "Vinod_Z"
},
{
"code": "",
"text": "Inactive thread, and this is not a solution, but just an additional information:In case of replica sets, you have to run the compact command on each node separately.\ncompact — MongoDB Manual",
"username": "Dushyant_Bangal"
}
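As a small illustration of the note above (the collection name is a placeholder): connect to each replica set member directly, one at a time, and run compact there, since the command only affects the member it is run on:

// run while connected directly to a single member
db.runCommand({ compact: "myCollection" });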
] | Compact command is not reclaiming any space | 2020-07-30T10:55:26.798Z | Compact command is not reclaiming any space | 1,718 |
[
"database-tools"
] | [
{
"code": "",
"text": "so i wanna imort csv file, i’ve search to anywhere but its still eror\nimage962×81 5.35 KB\nimage970×83 5.36 KB\nimage972×79 4.29 KB",
"username": "Fiqri_Firdaus"
},
{
"code": "mongoimport --db=tubes -c=EDOM --type=csv --headerline --file=\"C:\\data\\test1.csv\"",
"text": "Hello @Fiqri_Firdaus, welcome to the MongoDB Community forum.I think there is a syntax error in your mongoimport command. Try this one with your own import file path:mongoimport --db=tubes -c=EDOM --type=csv --headerline --file=\"C:\\data\\test1.csv\"This will work fine with your MongoDB installation on your computer connecting to the default host and port.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Need Help to import csv file | 2021-01-08T04:13:39.982Z | Need Help to import csv file | 1,862 |
|
null | [] | [
{
"code": "",
"text": "Hi All,My use case is that I want to upsert many documents (>1000, large enough that bulk operation seems the only suitable way). As “upsert” suggests, it is either an insert or an update.So I use BulkWrite, with ReplaceOne model, and with ReplaceOptions.upsert(true).\nSo far, the bulk operation only returns a BulkWriteResult with overall inserted, modified, matched counts.My question is: how do I know whether update or insert actually carried out for each operation in the bulk ? (To know which document got updated, and which one got inserted)\nOR is there another alternative to achieve this ? I reckon one can perform update(upsert=true) for individual document but it’s too costly in my use case.Wellcome all comments, feedback and suggestions\nTuanPlease help!",
"username": "Tuan_Dinh1"
},
{
"code": "from pprint import pprint\n\nfrom pymongo import MongoClient, ReplaceOne, ASCENDING\n\n\ndef init_mongodb():\n global coll\n client = MongoClient()\n db = client.get_database('test')\n coll = db.get_collection('coll')\n\n\nif __name__ == '__main__':\n init_mongodb()\n\n # init content of collection for the example\n coll.delete_many({})\n coll.insert_many([\n {'_id': 2, 'name': 'Not-Lauren'},\n {'_id': 3, 'name': 'Not-Mark'}\n ])\n\n print('Collection content BEFORE')\n for doc in coll.find().sort('_id', ASCENDING):\n print(doc)\n\n bulk_ops = [\n ReplaceOne({'_id': 1}, {'name': 'Max'}, upsert=True),\n ReplaceOne({'_id': 2}, {'name': 'Lauren'}, upsert=True),\n ReplaceOne({'_id': 3}, {'name': 'Mark'}, upsert=True)\n ]\n bulk_result = coll.bulk_write(bulk_ops, ordered=True)\n\n print('\\nBulk Result')\n pprint(bulk_result.bulk_api_result)\n\n print('\\nCollection content AFTER')\n for doc in coll.find().sort('_id', ASCENDING):\n print(doc)\nCollection content BEFORE\n{'_id': 2, 'name': 'Not-Lauren'}\n{'_id': 3, 'name': 'Not-Mark'}\n\nBulk Result\n{'nInserted': 0,\n 'nMatched': 2,\n 'nModified': 2,\n 'nRemoved': 0,\n 'nUpserted': 1,\n 'upserted': [{'_id': 1, 'index': 0}],\n 'writeConcernErrors': [],\n 'writeErrors': []}\n\nCollection content AFTER\n{'_id': 1, 'name': 'Max'}\n{'_id': 2, 'name': 'Lauren'}\n{'_id': 3, 'name': 'Mark'}\n{_id: 1}",
"text": "Hi @Tuan_Dinh1 and welcome in the MongoDB Community !I wrote a little test in Python to see what I could do:Which prints:As you can see from the output I get from the bulk operations, I know that {_id: 1} was upserted and the 2 others found a matching document so they performed a replace operation.I hope this helps.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks @MaBeuLux88 for a very detailed explanation (Must say my experience in this community forum is pretty positive so far)My domain language is Java, but I realised the similar thing, there is a BulkWriteUpsert list returned in the BulkWriteResult with only the document that are INSERTED and their indexes on the original list. From there, I was able to figure out which ones have been updated.Another question though: Is there a good way to obtain document “before” it is replaced ?So far, I have gone the long way, before running the upsert, I take the snapshot of the DB, then I run the upsert operation, from the result I work out the documents have been updated (there’s an extra field acting like unique ref). After that, I restore the DB using the previous snapshot and query for the “old” documents using those ref. I can tell it a long way, but can’t think of a faster way.Suggestions ?\nTuan",
"username": "Tuan_Dinh1"
},
{
"code": "upsertreturnNewDocument",
"text": "I was about to suggest to use the Change Streams with a filter on the updates only + updateLookup but you will only get the new version of the documents, not the old one.You could fall back on findOneAndUpdate or findOneAndReplace which both support the options upsert and returnNewDocument.Also, as you already know which filters you will use to replaceOne your documents in your bulk operations, maybe you could also run some find operations before running your bulk. Document that don’t exist yet won’t be matched and you could retrieve the old versions of the one that do exist.I bet these 2 solutions are faster than the restore snapshot one .Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
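A sketch of the findOneAndReplace fallback mentioned above, under assumed names (Node.js 3.x driver, local deployment, collection coll): with upsert: true and the default returnOriginal behaviour, value holds the pre-replace document, or null when the operation turned into an insert.

```javascript
const { MongoClient } = require('mongodb');

async function showPreImage() {
  // Assumed connection string, database and collection names, for illustration only.
  const client = await MongoClient.connect('mongodb://localhost:27017', { useUnifiedTopology: true });
  try {
    const coll = client.db('test').collection('coll');
    // upsert: true + the default returnOriginal: true => `value` is the pre-replace document,
    // or null when no document matched and the operation turned into an insert.
    const res = await coll.findOneAndReplace({ _id: 2 }, { name: 'Lauren' }, { upsert: true });
    if (res.value === null) {
      console.log('No previous version: this operation resulted in an insert.');
    } else {
      console.log('Replaced an existing document, previous version:', res.value);
    }
  } finally {
    await client.close();
  }
}

showPreImage().catch(console.error);
```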
] | How to determine whether an upsert resulted in an insert or update WITH BULK OPERATION | 2021-01-08T01:31:05.769Z | How to determine whether an upsert resulted in an insert or update WITH BULK OPERATION | 6,622 |
null | [
"aggregation"
] | [
{
"code": "{\n \"customerId\": \"111\",\n \"name\": \"Adil\",\n \"satisfactionLevels\": [{\n \"A\": \"Score 1\",\n \"B\": \"R\",\n \"C\": 2,\n \"D\": 1\n }, {\n \"A\": \"Score 2\",\n \"B\": \"S\",\n \"C\": 2,\n \"D\": 2\n }]\n \n}\n{ \n \"customerId\": \"111\",\n \"sScore\": \"20.54\",\n \"intId\": \"1527\" \n \n}\n{\n \"customerId\": \"111\",\n \"name\": \"Adil\",\n \"satisfactionLevels\": [{\n \"A\": \"Score 1\",\n \"B\": \"R\",\n \"C\": 2,\n \"D\": 1\n }, {\n \"A\": \"Score 2\",\n \"B\": \"S\",\n \"C\": 2,\n \"D\": 2\n }, { \n \"customerId\": \"111\",\n \"sScore\": \"20.54\",\n \"intId\": \"1527\" \n \n}]\n \n}\n",
"text": "Hi,\nI was working with the data and got stuck in a problem that might be small for you.\nI want to merge the objects and append it to the Array here are sample collectionsCOLLECTION ACOLLECTION BEXPECTED OUTPUTThat’s the expected output how can I achieve this. Thanks in advance",
"username": "Nabeel_Raza"
},
{
"code": "$lookup {\n $lookup: {\n from: \"colB\",\n localField: \"customerId\",\n foreignField: \"customerId\",\n as: \"colB\"\n }\n }\n$lookup$arrayElemAt {\n $addFields: {\n colB: { $arrayElemAt: [\"$colB\", 0] }\n }\n }\n",
"text": "Hello @Nabeel_Raza,You can use $lookup pipeline stage to join collection B in A,\nExample:For more information see $lookup documentation:It will return array of object, to access object from zero index try $arrayElemAt operator, it will return object from zero index,For more information see $arrayElemAt documentation:",
"username": "turivishal"
},
{
"code": "",
"text": "I think you didn’t understand my question. I want the resultant output in whole. There will be more than one document having same customerId so for that can’t use your stagey. Secondly i want to append the array object which is already exists in Collection A.",
"username": "Nabeel_Raza"
},
{
"code": "customerIdcustomerIdsatisfactionLevelscustomerId",
"text": "@Nabeel_Raza,To better answer this question it might help if you can clarify the following:",
"username": "alexbevi"
},
{
"code": "{\n $lookup: {\n from: \"colB\",\n localField: \"customerId\",\n foreignField: \"customerId\",\n as: \"colB\"\n }\n }\n",
"text": "",
"username": "Nabeel_Raza"
},
{
"code": "$project$reduce$concatArrays// SETUP\ndb.coll1.drop();\ndb.coll1.createIndex({ customerId: 1 });\ndb.coll2.drop();\ndb.coll2.createIndex({ customerId: 1 });\n\ndb.coll1.insertMany([{\n \"customerId\": \"111\",\n \"name\": \"Adil\",\n \"satisfactionLevels\": [{\n \"A\": \"Score 1\",\n \"B\": \"R\",\n \"C\": 2,\n \"D\": 1\n }, {\n \"A\": \"Score 2\",\n \"B\": \"S\",\n \"C\": 2,\n \"D\": 2\n }]\n \n},\n{\n \"customerId\": \"111\",\n \"name\": \"Adil\",\n \"satisfactionLevels\": [{\n \"A\": \"Score 3\",\n \"B\": \"R\",\n \"C\": 5,\n \"D\": 6\n }, {\n \"A\": \"Score 4\",\n \"B\": \"S\",\n \"C\": 8,\n \"D\": 9\n }]\n \n}]);\n\ndb.coll2.insert({ \n \"customerId\": \"111\",\n \"sScore\": \"20.54\",\n \"intId\": \"1527\" \n})\n// PIPELINE\ndb.coll2.aggregate([\n{ $match: { customerId: \"111\" } },\n{ $lookup: {\n from: \"coll1\",\n localField: \"customerId\",\n foreignField: \"customerId\",\n as: \"colB\"\n}},\n{ $project: {\n customerId: 1,\n name: 1,\n satisfactionLevels: { \n $reduce: {\n input: \"$colB.satisfactionLevels\",\n initialValue: [],\n in: { $concatArrays: [ \"$$value\", \"$$this\" ] }\n }\n },\n sScore: 1,\n intId: 1\n \n}}\n]);\n",
"text": "The following pipeline using $project, $reduce and $concatArrays should produce the desired result:",
"username": "alexbevi"
},
{
"code": "db.coll2.aggregate([\n{ $match: { customerId: \"111\" } },\n{ $lookup: {\n from: \"coll1\",\n localField: \"customerId\",\n foreignField: \"customerId\",\n as: \"colB\"\n}},\n{ $project: {\n customerId: 1,\n name: 1,\n satisfactionLevels: { \n $reduce: {\n input: \"$colB.satisfactionLevels\",\n initialValue: [],\n in: { $concatArrays: [ \"$value\", \"$this\" ] }\n }\n },\n sScore: 1,\n intId: 1\n \n}}\n]);\n}]\n \n}",
"text": "Thanks for the reply. You swap the collection. I need to add the items to collection A which already have field “satisfactionLevels”EXPECTED OUTPUT\n{\n“customerId”: “111”,\n“name”: “Adil”,\n“satisfactionLevels”: [{\n“A”: “Score 1”,\n“B”: “R”,\n“C”: 2,\n“D”: 1\n}, {\n“A”: “Score 2”,\n“B”: “S”,\n“C”: 2,\n“D”: 2\n}, {\n“customerId”: “111”,\n“sScore”: “20.54”,\n“intId”: “1527”",
"username": "Nabeel_Raza"
},
{
"code": "",
"text": "@Nabeel_Raza,\nI can understand your expected result for single matching document in collection B, can you provide expected result for 2 or more matching documents in collection B.",
"username": "turivishal"
},
{
"code": "",
"text": "Assume that we have same document in collection B but the values are different.",
"username": "Nabeel_Raza"
},
{
"code": "coll2db.coll2.insertMany([{ \n \"customerId\": \"111\",\n \"sScore\": \"20.54\",\n \"intId\": \"1527\" \n},\n{ \n \"customerId\": \"111\",\n \"sScore\": \"99.47\",\n \"intId\": \"1927\" \n}])\n{ \n \"_id\" : ObjectId(\"5ff6fe56378146f0309052c4\"), \n \"customerId\" : \"111\", \n \"sScore\" : \"20.54\", \n \"intId\" : \"1527\", \n \"satisfactionLevels\" : [\n {\n \"A\" : \"Score 1\", \n \"B\" : \"R\", \n \"C\" : 2.0, \n \"D\" : 1.0\n }, \n {\n \"A\" : \"Score 2\", \n \"B\" : \"S\", \n \"C\" : 2.0, \n \"D\" : 2.0\n }, \n {\n \"A\" : \"Score 3\", \n \"B\" : \"R\", \n \"C\" : 5.0, \n \"D\" : 6.0\n }, \n {\n \"A\" : \"Score 4\", \n \"B\" : \"S\", \n \"C\" : 8.0, \n \"D\" : 9.0\n }\n ]\n}\n{ \n \"_id\" : ObjectId(\"5ff6fe56378146f0309052c5\"), \n \"customerId\" : \"111\", \n \"sScore\" : \"99.47\", \n \"intId\" : \"1927\", \n \"satisfactionLevels\" : [\n {\n \"A\" : \"Score 1\", \n \"B\" : \"R\", \n \"C\" : 2.0, \n \"D\" : 1.0\n }, \n {\n \"A\" : \"Score 2\", \n \"B\" : \"S\", \n \"C\" : 2.0, \n \"D\" : 2.0\n }, \n {\n \"A\" : \"Score 3\", \n \"B\" : \"R\", \n \"C\" : 5.0, \n \"D\" : 6.0\n }, \n {\n \"A\" : \"Score 4\", \n \"B\" : \"S\", \n \"C\" : 8.0, \n \"D\" : 9.0\n }\n ]\n}\n\"customerId\"\"customerId\"",
"text": "@Nabeel_Raza,\nIf you modified my example to add another document to coll2:The results of the pipeline would be 2 documents:What would the expected result be given this scenario if you were returning a single document?You’ve listed \"customerId\" twice in your sample output, however you cannot duplicate fields withina document like that (only 1 field can be called \"customerId\" in a single document at the same level).",
"username": "alexbevi"
},
{
"code": "",
"text": "But the Collection 1 is the main Collection which should be on the top and all it’s fields should be in the output. And you made the Collection B as the main collection.The query should starts with thisdb.coll1.aggregate([\n{ $match: { customerId: “111” } },\n{ $lookup: {\nfrom: “coll2”,\nlocalField: “customerId”,\nforeignField: “customerId”,\nas: “colB”\n}},",
"username": "Nabeel_Raza"
},
{
"code": " {\n \"customerId\": \"111\",\n \"name\": \"Adil\",\n \"satisfactionLevels\": [{\n \"A\": \"Score 1\",\n \"B\": \"R\",\n \"C\": 2,\n \"D\": 1\n }, {\n \"A\": \"Score 2\",\n \"B\": \"S\",\n \"C\": 2,\n \"D\": 2\n }] \n}\n{ \n \"customerId\": \"111\",\n \"sScore\": \"20.54\",\n \"intId\": \"1527\" \n},\n{ \n \"customerId\": \"111\",\n \"sScore\": \"50.54\",\n \"intId\": \"1528\" \n}\n",
"text": "Assume that we have same document in collection B but the values are different.Okay, can you share expected result when your collection have below data:Collection A:Collection B:We got your start query.The query should starts with thisWhat will be the result document as per these inputs?",
"username": "turivishal"
},
{
"code": "{\n \"customerId\": \"111\",\n \"name\": \"Adil\",\n \"satisfactionLevels\": [{\n \"A\": \"Score 1\",\n \"B\": \"R\",\n \"C\": 2,\n \"D\": 1\n }, {\n \"A\": \"Score 2\",\n \"B\": \"S\",\n \"C\": 2,\n \"D\": 2\n },\n { \n\t\"customerId\": \"111\",\n\t\"sScore\": \"20.54\",\n\t\"intId\": \"1527\" \n },\n { \n\t\"customerId\": \"111\",\n\t\"sScore\": \"99.47\",\n\t\"intId\": \"1927\" \n\t}\n\n \n ]\n \n}",
"text": "Collection A{\n“customerId”: “111”,\n“name”: “Adil”,\n“satisfactionLevels”: [{\n“A”: “Score 1”,\n“B”: “R”,\n“C”: 2,\n“D”: 1\n}, {\n“A”: “Score 2”,\n“B”: “S”,\n“C”: 2,\n“D”: 2\n}]}Collection B//1\n{\n“customerId”: “111”,\n“sScore”: “20.54”,\n“intId”: “1527”\n}\n//2\n{\n“customerId”: “111”,\n“sScore”: “99.47”,\n“intId”: “1927”\n}Result:",
"username": "Nabeel_Raza"
},
{
"code": "db.colA.aggregate([\n {\n $match: {\n customerId: \"111\"\n }\n },\n {\n $lookup: {\n from: \"colB\",\n localField: \"customerId\",\n foreignField: \"customerId\",\n as: \"colB\"\n }\n },\n {\n $project: {\n customerId: 1,\n name: 1,\n satisfactionLevels: {\n $concatArrays: [\"$satisfactionLevels\", \"$colB\"]\n }\n }\n }\n])\n",
"text": "The implementation of your query, just need to add $project stage and concat both arrays using $concatArrays operator.Playground",
"username": "turivishal"
},
{
"code": "",
"text": "Oh yeah that was the correct operator. Thanks Turivishal for helping me.\nGreat efforts (y)",
"username": "Nabeel_Raza"
},
{
"code": "$project$addFields$set{$set:{\n satisfactionLevels: { $concatArrays: [\"$satisfactionLevels\", \"$colB\"] }\n } }",
"text": "Except you don’t want $project you want $addFields (or its alias $set) otherwise you lose all the fields that already existed in the original document in Collection A. So last stage should be:",
"username": "Asya_Kamsky"
},
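To make that concrete, a minimal sketch of the full pipeline with that $set stage, reusing the colA/colB names from earlier in this thread (requires MongoDB 4.2+ for $set):

```javascript
db.colA.aggregate([
  { $match: { customerId: "111" } },
  { $lookup: {
      from: "colB",
      localField: "customerId",
      foreignField: "customerId",
      as: "colB"
  } },
  // $set keeps every existing field of the colA document and only rewrites satisfactionLevels.
  { $set: {
      satisfactionLevels: { $concatArrays: ["$satisfactionLevels", "$colB"] }
  } }
])
```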
{
"code": "colBsatisfactionLevelscolB: \"$$REMOVE\"$set$addFields{\n $addFields: {\n colB: \"$$REMOVE\",\n satisfactionLevels: { $concatArrays: [\"$satisfactionLevels\", \"$colB\"] }\n }\n}\n",
"text": "Thank you for your reply, There is no need of colB array field in result because it is already concat in satisfactionLevels, there are 2 options:\nFirst: either we need to remove that field using colB: \"$$REMOVE\" in $set or $addFields stage,Second: Either we can use $project stage and specify required fields for result,I am not sure which is the best, is there any performance issue when we use First? Is First good, is Second good or both are equal in performance, please share your thoughts.",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to add objects into array from another collection | 2021-01-07T10:56:03.657Z | How to add objects into array from another collection | 18,210 |
null | [
"node-js",
"connecting"
] | [
{
"code": "TypeError: Cannot read property 'replace' of undefined at matchesParentDomain (uri_parser.js:24) at uri_parser.js:67const {MongoClient} = require('mongodb');\n\n async function main(){\n const uri = \"mongodb+srv://{username}:{password}@cluster0.5r4og.mongodb.net/{dbname}?retryWrites=true&w=majority\";\n \n\n const client = await new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true});\n \n try {\n await client.connect();\n \n } catch (e) {\n console.error(e);\n } finally {\n await client.close();\n }\n }\n\n main().catch(console.error);\n",
"text": "Hi all,I hope whoever is reading this is having a great day! I am currently trying to connect my mongodb Atlas to my React app using node.JS but I am getting the error TypeError: Cannot read property 'replace' of undefined at matchesParentDomain (uri_parser.js:24) at uri_parser.js:67I have posted my (extremely simple) code below ",
"username": "Tatiana_Wiener"
},
{
"code": "const uri = \"mongodb+srv://{username}:{password}@cluster0.5r4og.mongodb.net/{dbname}?retryWrites=true&w=majority\"{username}\"mongodb+srv://user001:[email protected]/test?retryWrites=true&w=majority",
"text": "const uri = \"mongodb+srv://{username}:{password}@cluster0.5r4og.mongodb.net/{dbname}?retryWrites=true&w=majority\"Hello @Tatiana_Wiener, you can substitute the actual username , password and the dbname in place of {username}, etc. For example,\"mongodb+srv://user001:[email protected]/test?retryWrites=true&w=majority",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi! Thanks for the reply! I do have my username, password, and Db name in the URI, I just omitted it for posting reasons ",
"username": "Tatiana_Wiener"
},
{
"code": "",
"text": "The code worked fine from my laptop using a NodeJS (version 12) app. I was able to connect to my Atlas Cluster account using similar URI.",
"username": "Prasad_Saya"
},
{
"code": "const MongoClient = require('mongodb').MongoClient; \nexport default function AboutUsPage(){\n React.useEffect(() => {\n window.scrollTo(0, 0);\n document.body.scrollTop = 0;\n });\n const classes = useStyles();\n const uri = \"mongodb+srv://{Username}:{Password}@cluster0.5r4og.mongodb.net/{DB}?retryWrites=true&w=majority\";\n const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true});\n connect().catch(console.error);\n \n async function connect() {\n await client.connect();\n\n }\nTypeError: Cannot read property 'replace' of undefinedmain()main()export default function AboutUsPage(){",
"text": "Hi there, I figured out it’s whenever I have any other outside functions. For example. I am trying to connect my MongoDB in another React function using the same methods:I am getting the same error TypeError: Cannot read property 'replace' of undefined Is this because I’m trying to connect it within an exported function?? Like I literally have no idea why I can’t connect to it at all. I’ve tried without the async function as well and get the same thing.Lastly, I have tried literally copy and pasting my previous main() function into a new document and simply calling main() from my new export default function AboutUsPage(){ and it still doesn’t work.",
"username": "Tatiana_Wiener"
},
{
"code": "",
"text": "A post was split to a new topic: TypeError: Cannot read property ‘replace’ of undefined matchesParentDomain",
"username": "Stennie_X"
}
] | Error with await client.connect() node.JS | 2020-11-24T22:40:01.646Z | Error with await client.connect() node.JS | 5,263 |
[
"replication",
"golang"
] | [
{
"code": "",
"text": "I have 3 existing mongo instances serving an existing kubernetes cluster, and added 3 new instances to the replica set while setting up a new kubernetes cluster. The goal is to switch to the 3 new instances and tear down the 3 old instances when I remove my old k8s cluster. codebase is golang, so using the Go Mongo driver. In a new cluster, I have mongo hostnames set to only the new instances I created. From my testing, it seems like the Go Mongo driver programmatically determines the primary node even though I don’t have the primary node in the hostnames list in my new kubernetes cluster and writes go through fine. can someone point me to where in the codebase it does this? I assume it determines the primary node given a hostname and directs writes to the primary node.The Official Golang driver for MongoDB. Contribute to mongodb/mongo-go-driver development by creating an account on GitHub.Thank you!",
"username": "Arun_Srinivasan"
},
{
"code": "",
"text": "Hi @Arun_Srinivasan and welcome in the MongoDB Community !I can’t find exactly where this is implemented in the Go Driver, but at least I can explain that all the MongoDB driver are implementing the same specs which are in this repo and in this spec, probably around here, it says that Mongo clients are “replica set aware” (or something like that) and that your client is doing “server discovery”.That basically means that, constantly, the client checks the topology of your cluster and can discover all the nodes, even if they are not in the seed list you initially provided.It’s always a good practice to provide the entire list of servers in your seed list. But as long as at least one of them is available, your client should be able to ask that node for its configuration and discover the rest.I hope this helps.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
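A small illustration of the seed list advice above, shown with the Node.js driver since the behaviour comes from the shared driver specifications (the host names and replica set name here are hypothetical):

```javascript
const { MongoClient } = require('mongodb');

// List every known member; the client only needs one reachable host to discover the rest,
// including the current primary, via server discovery and monitoring.
const uri = 'mongodb://node1:27018,node2:27018,node3:27018/?replicaSet=rs01';
const client = new MongoClient(uri, { useUnifiedTopology: true });
```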
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does go mongodb driver code determine primary based on given hostnames? | 2021-01-06T18:49:15.832Z | Does go mongodb driver code determine primary based on given hostnames? | 2,665 |
|
null | [
"node-js"
] | [
{
"code": "",
"text": "I’m trying to return the updated or inserted document from the upsert operation.\nsetting, “returnNewDocument: true” is not workingI also have this scheme static-analysis issue when setting that option…\nNo overload matches this callexample code:\nconst r = await client.db(dbName).collection(collName).findOneAndUpdate(objFilter, { objToUpdateOrInsert } }, { upsert: true, returnNewDocument: true });",
"username": "Melody_Maker"
},
{
"code": "const { MongoClient } = require(\"mongodb\");\n\nconst uri = \"mongodb://localhost/test?retrywrites=true&w=majority\";\n\nconst client = new MongoClient(uri, { useUnifiedTopology: true });\n\nasync function run() {\n try {\n await client.connect();\n\n const database = client.db('test');\n const collection = database.collection('coll');\n\n const filter = { title: 'Back to the Future' };\n const update = { $inc: { 'score': 10 }};\n const doc1 = await collection.findOneAndUpdate(filter, update, { upsert: true, returnOriginal: false });\n const doc2 = await collection.findOneAndUpdate(filter, update, { upsert: true, returnOriginal: false });\n\n console.log(doc1);\n console.log(doc2);\n } finally {\n await client.close();\n }\n}\nrun().catch(console.dir);\n{\n lastErrorObject: { n: 1, updatedExisting: false, upserted: 5ff7970200ba0cc6b74ca81c },\n value: {\n _id: 5ff7970200ba0cc6b74ca81c,\n title: 'Back to the Future',\n score: 10\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1610061570 },\n signature: { hash: [Binary], keyId: 0 }\n },\n operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1610061570 }\n}\n{\n lastErrorObject: { n: 1, updatedExisting: true },\n value: {\n _id: 5ff7970200ba0cc6b74ca81c,\n title: 'Back to the Future',\n score: 20\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 3, high_: 1610061570 },\n signature: { hash: [Binary], keyId: 0 }\n },\n operationTime: Timestamp { _bsontype: 'Timestamp', low_: 3, high_: 1610061570 }\n}\nfindOneAndUpdatereturnOriginalreturnNewDocument",
"text": "Hi @Melody_Maker,I gave it a try I was able to do it. Here is my sample code:Here is the result I get:As you can see above, even on the first findOneAndUpdate against an empty collection, I get the resulting document rather than “null”. And on the second call, I get score = 20 which proves again that it works because I get the result of the second operation (10+10).As you probably saw already, the correct parameter is returnOriginal rather than returnNewDocument. You can checkout all the other parameters - including this one - in the MongoDB Node.js documentation for findOneAndUpdate.Often the Mongo Shell methods and the Node.js API look alike, but sometimes, like in this case, they actually differ for some obscur reasons.Cheers ,\nMaxime.",
"username": "MaBeuLux88"
},
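Side note, hedged for later readers: in later major versions of the Node.js driver (4.x and up), returnOriginal was superseded by the returnDocument option, so the equivalent call looks roughly like this sketch (run inside an async function, with collection obtained from a connected client):

```javascript
// Node.js driver 4.x+ style (sketch): 'after' returns the post-update document,
// 'before' returns the pre-update one.
const doc = await collection.findOneAndUpdate(
  { title: 'Back to the Future' },
  { $inc: { score: 10 } },
  { upsert: true, returnDocument: 'after' }
);
```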
{
"code": "",
"text": "Thanks Maxime, works.Saves me from doing…// let id;\n// if (isNil(r.value)) {\n// id = r.lastErrorObject && r.lastErrorObject.upserted; //the newly inserted PK\n// l(“was inserted”, id);\n// } else {\n// id = r.value._id; //return update data;\n// l(“was updated”, id);\n// }\n// return await client.db(dbName).collection(collName).findOne({ _id: new ObjectID(id) });",
"username": "Melody_Maker"
},
{
"code": "updateOnefindOnefindOneAndUpdate",
"text": "If you went along with doing the updateOne and then a findOne operation right after, it’s not a single atomic operation anymore like findOneAndUpdate.So if the “all mighty power of multi-threading” allows another write operation after your updateOne but before your findOne, you might have surprising results.That’s the point of having findOneAndUpdate in the first place !Glade I was here to save the day !Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Node.js: "returnNewDocument: true" not working | 2021-01-07T21:56:55.105Z | Node.js: “returnNewDocument: true” not working | 8,195 |
null | [
"backup",
"devops"
] | [
{
"code": "E STORAGE [WTCheckpointThread] WiredTiger error (28) [1609930584:117123][11189:0x7fbb8a654700], file:collection-4-8251526658303988689.wt, WT_SESSION.checkpoint: __ckpt_process, 641: collection-4-8251526658303988689.wt: fatal checkpoint failure: No space left on device Raw: \n",
"text": "I was just reading a server crash because the Host Machine / server run out of storage space, this a slice of the log:So the two possibilities are either to extend the storage or to remove data. But since it crashes, neither the server cant be accessed nor mongodump be used.Would it be a choice to run rsync to make a remote copy of the database? This is, in case the storage can’t be extended.( This is just out of curiosity, and nothing will be damaged if answer is not strictly correct. )",
"username": "santimir"
},
{
"code": "dbPathrsynccprsyncdbPathmongod",
"text": "Hi @santimir,So the two possibilities are either to extend the storage or to remove data. But since it crashes, neither the server cant be accessed nor mongodump be used.A variation of your second option would be to free up space elsewhere on the filesystem used by your dbPath.For example, perhaps there are large log files that could be compressed and/or archived.Would it be a choice to run rsync to make a remote copy of the database? This is, in case the storage can’t be extended.Yes, you can use rsync to make a copy of files to another server (see: Back up with cp or rsync in the MongoDB manual).I would include all directories & files in your current dbPath, and ideally do so while the files are not actively in use by a mongod process (which doesn’t sound like an issue in this situation).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Just took some time to process the answer. That’s very good!Thanks for the good explanation. I’ve seen your posts on SO too ",
"username": "santimir"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Run out of space in MongoDB's Host | 2021-01-07T00:28:27.822Z | Run out of space in MongoDB’s Host | 4,385 |
null | [] | [
{
"code": "",
"text": "Hello, I need help, I want to get the TOTAL number of one parameter in the document. How can this be done quickly, I tried to do it through “reduce”, tried through $group $sum, it does not work, or the answer is “undefined”, or [ object Promise ], I have been suffering for 5 hours, I can not do such a simple thing.Briefly what I want:\nUser = is collection\nI want to sum the “money” field in all documents in the collection.",
"username": "Ya_Strelok"
},
{
"code": "db.user.aggregate([{$group : { _id : null, total : {$sum : \"$money\"}}}]) \n",
"text": "Hi @Ya_Strelok,Welcome to MongoDB community!I believe a $group of null will allow you to do that:Best\nPavel",
"username": "Pavel_Duchovny"
}
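Regarding the “[ object Promise ]” symptom mentioned in the question: when running this from Node.js rather than the shell, the aggregation result has to be awaited. A minimal sketch, with assumed connection string, database and collection names:

```javascript
const { MongoClient } = require('mongodb');

async function totalMoney() {
  const client = await MongoClient.connect('mongodb://localhost:27017', { useUnifiedTopology: true });
  try {
    // aggregate() returns a cursor and toArray() returns a Promise, so both must be awaited;
    // logging the un-awaited value is what prints "[object Promise]".
    const [result] = await client.db('test').collection('user')
      .aggregate([{ $group: { _id: null, total: { $sum: '$money' } } }])
      .toArray();
    console.log(result ? result.total : 0);
  } finally {
    await client.close();
  }
}

totalMoney().catch(console.error);
```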
] | Total number of one parameter in all documents | 2021-01-07T19:35:37.999Z | Total number of one parameter in all documents | 1,468 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I want to fetch a list of organizations that are in the ABC project.\nThere are two different collections i.e. Organizations & Projects in which project_id is common in both collections. and I don’t know the project_id and I know the only project_name i.e. ABC and want to get data by using this project_name {present in Projects collection} ???",
"username": "Saloni_Salodia"
},
{
"code": "$lookupproject_id// SETUP\ndb.projects.createIndex({ project_name: 1 });\n// newly inserted document will have an _id field automatically generated\ndb.projects.insert({ project_name: \"ABC\" })\n\ndb.organizations.createIndex({ project_id: 1 });\n// associate the _id value from the Projects collection as the project_id field for the Organizations\ndb.organizations.insert({ project_id: db.projects.findOne()._id, organization_name: \"Some Organization\" })\ndb.organizations.insert({ project_id: db.projects.findOne()._id, organization_name: \"Some Other Organization\" })\n// PIPELINE\ndb.projects.aggregate([\n{ $match: { project_name: \"ABC\" } },\n{ $lookup: {\n from: \"organizations\",\n localField: \"_id\",\n foreignField: \"project_id\",\n as: \"organizations\"\n}},\n]);\n",
"text": "Hi @Saloni_Salodia,You can use a $lookup stage in an Aggregation Pipeline to include all related Organizations to a Project based on the common field (project_id) as follows:",
"username": "alexbevi"
},
{
"code": "",
"text": "Hi @alexbevi,\nI got your solution working…!Thank you…! ",
"username": "Saloni_Salodia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Robo 3T: Want to fetch data with project from multiple Collections? | 2021-01-07T08:11:26.356Z | Robo 3T: Want to fetch data with project from multiple Collections? | 3,233 |
null | [] | [
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"caption\": {\n \"type\": \"string\"\n }\n }\n }\n}\n\n[\n {\n \"$search\":{\n \"text\":{\n \"path\":\"caption\",\n \"query\":\"Ingocnitáá\",\n \"fuzzy\":{\n \n }\n },\n \"highlight\":{\n \"path\":\"caption\"\n }\n }\n }\n]\n{caption:\"Ct tyu test Ingocnitáá\"}\nIngocnitaa",
"text": "I am using MongoDB Atlas Search to perform a search in Collection, for this I created a Atlas Search Index:Here is my aggregation:I have below document in my collection:Issue: When I searching Ingocnitaa agreegation returning 0 result.Is there anything wrong with my Search Index? I want an directive insensitive Search with highlight.Quesstion on Stack: Mongodb Atlas Search with directive insensitive - Stack Overflow",
"username": "codetycon"
},
{
"code": "IngocnitaaMongoDB Enterprise > db.search_test_555.find()\n{ \"_id\" : 1, \"caption\" : \"Ct tyu test Ingocnitáá\" }\nMongoDB Enterprise > db.search_test_555.aggregate([\n... {\n... \"$search\":{\n... \"text\":{\n... \"path\":\"caption\",\n... \"query\":\"Ingocnitaa\",\n... \"fuzzy\":{\n...\n... }\n... },\n... \"highlight\":{\n... \"path\":\"caption\"\n... }\n... }\n... }\n... ])\n{ \"_id\" : 1, \"caption\" : \"Ct tyu test Ingocnitáá\" }\n\"default\"\"default\"$search",
"text": "Hi Kishor,I tested this in my lab using the same document, index and query as you - I am getting the results just fine when searching for Ingocnitaa :My index name was \"default\" which is why I didn’t need to specify it in my query. Is your index name something other than \"default\"? If so, you will need to specify it in the $search query.",
"username": "Harshad_Dhavale"
},
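For reference, a sketch of how the index name is passed when it is not "default" (the index name captionIndex below is hypothetical):

```javascript
db.collection.aggregate([
  {
    $search: {
      index: "captionIndex", // only needed when the Atlas Search index is not named "default"
      text: { path: "caption", query: "Ingocnitaa", fuzzy: {} },
      highlight: { path: "caption" }
    }
  }
])
```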
{
"code": "",
"text": "@Harshad_Dhavale Thanks for the reply. I am using M2 (General)\ncluster, DO I need a higher plan?Same query not woking at my end, c heck below screenshot…\nimage1024×768 93.8 KB",
"username": "codetycon"
},
{
"code": "",
"text": "Hi @codetycon - Thanks for sharing the details. M2 tiers have a limit of 5 Atlas Search indexes as documented here; so, as long as you don’t have more than 5 Atlas Search indexes, you should be fine on the M2 instance.The screenshot shows that no index name is specified in the query. Is your index named “default”? If it’s not named “default”, please could you specify the name of the index in the query and try again? I have confirmed that the query returns results successfully in my Atlas UI as well as mongo shell, provided the index name is correctly specified.",
"username": "Harshad_Dhavale"
},
{
"code": "",
"text": "hot shows that no index name is specifieI am using default index, you can see in Screnshot it is not working…",
"username": "codetycon"
},
{
"code": "",
"text": "ALso check this video: https://drive.google.com/file/d/10yrettA7cuMqWMndAKEq0hGsZGNqZP74/view",
"username": "codetycon"
},
{
"code": "",
"text": "Hi @codetycon - thanks for sharing the video clip. One slight difference I noticed in the video and in the original query that you had shared is that the query in the video is missing the “fuzzy”:{} option. When I tested the query on my side, without the fuzzy option, I don’t get the results either. But with the fuzzy option, I get the results. Can you check if adding the fuzzy option changes the query’s behavior?",
"username": "Harshad_Dhavale"
},
{
"code": "fuzzy:{ }\n\n",
"text": "It is working when I usedStrange when I raised query I was using fuzzy . Anyway the issue is resolved, thanks a lot. ",
"username": "codetycon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb Atlas Search with directive insensitive | 2021-01-06T07:19:16.727Z | Mongodb Atlas Search with directive insensitive | 2,176 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Use Case -Metadata Schema below needs to be matched -https://www.openmicroscopy.org/XMLschemas/OME/FC/ome.xsd",
"username": "Rounak_Joshi"
},
{
"code": "",
"text": "Hi @Rounak_Joshi,Welcome to MongoDB community!You should basically ask yourself why not MongoDB?As it is a:Having said that designing your schema is key for scalability down the road.I suggest that if you don’t have much experience with MongoDB you first:Additionally please read the following:\nhttps://www.mongodb.com/article/mongodb-schema-design-best-practicesAll articles: Performance Best Practices: Benchmarking | MongoDB Bloghttps://www.mongodb.com/article/schema-design-anti-pattern-summaryA summary of all the patterns we've looked at in this seriesThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Greetings Pavel , thanks for sending all this information over…Only thing I would like to insist is that is this trillion image metadata…data itself is not even a scope right now, which I am sure will come later. As mentioned, the OME schema for metadata is the more relevant to this use case.\nHaving said that let me go over all the links you sent me and that will enable me to determine the best database schema fit specific to this type of a use case.",
"username": "Rounak_Joshi"
},
{
"code": "",
"text": "Also, this is all on-prem as this is based on image microscopic metadata based on experiments performed in research labs of health institutes",
"username": "Rounak_Joshi"
},
{
"code": "",
"text": "If I may add to @Pavel_Duchovny ideas I will recommend that you took some courses from https://university.mongodb.com. Some of them are low in terms of invested time but gives you a real good idea of what you can do with MongoDB.In particular the courses, M001, M100, M121 and M320 are suitable to have a good idea of the capabilities.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks @steevej for sending this additional information and I will be going through those courses.\nI was ,however , curious to know if anyone here in the community has encountered a similar use case or problem , especially using MongoDB, and if they did , how did they approach solving that.",
"username": "Rounak_Joshi"
},
{
"code": "",
"text": "Hi @Rounak_Joshi,I am not familiar specifically with OME schema or its rules/limitations.Perhaps you van highlight the main aspects?I would say that we do have limitations in MongoDB like a document size cannot exceed 16MBHowever, there are ways to.overcome those either by logically seperating documents or using a gridfs solutionThanks\nPavel",
"username": "Pavel_Duchovny"
}
] | I am trying to decide on a database which would fit for trillion image metadata per year dynamic schema? IS MongoDB the right data base | 2021-01-07T05:24:33.837Z | I am trying to decide on a database which would fit for trillion image metadata per year dynamic schema? IS MongoDB the right data base | 2,086 |
null | [] | [
{
"code": "",
"text": "I am trying to build mongodb source and see following error. It looks like version mismatch for binutils… any specific versions needed?Skipping ranlib for thin archive build/opt/mongo/db/s/libsharding_commands_d.a\nLinking build/opt/mongo/db/mongod\n/bin/ld.gold: internal error in make_view, at …/…/gold/fileread.cc:474\ncollect2: error: ld returned 1 exit status\nscons: *** [build/opt/mongo/db/mongod] Error 1\nscons: building terminated because of errors.\nbuild/opt/mongo/db/mongod failed: Error 1",
"username": "Unmesh_Joshi"
},
{
"code": "ld.gold",
"text": "That appears to be a crash within ld.gold itself, so it represents a problem with the toolchain you are using, not with the MongoDB sources. We don’t really have a required version of binutils. However, unless you are building v4.0 or older, the toolchain we use to build binary releases of v4.2+ is currently based on GCC 8 and binutils 2.30.",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb build failure | 2021-01-07T09:05:00.311Z | Mongodb build failure | 3,310 |
null | [] | [
{
"code": "",
"text": "Hi Team,Am very new to MongoDB, pardon me for incorrect technical terms used.We have 2 replica set in our environment and each replica set has 3 nodes, 1 RS for Prod and another RS for testing.We want to migrate the data from production in to the test environment and once the testing is done we need to revert the testing data back in the testing environment.How can we achieve this ? IF there is an existing document to refer please redirect us.\nThanks for your support.History:We took the complete data folder from one of the production node and moved it under the primary node of test environment. which resulted in complete change of replica set itself in test. We then had to reconfigure the complete replica set by renaming each node to the correct server name. Is this correct process ?",
"username": "Arun_Prasath"
},
{
"code": "",
"text": "Am very new to MongoDB, pardon me for incorrect technical terms used.We have 2 replica set in our environment and each replica set has 3 nodes, 1 RS for Prod and another RS for testing.I give you 5/5 so far We took the complete data folder from one of the production node and moved it under the primary node of test environment. which resulted in complete change of replica set itself in test. We then had to reconfigure the complete replica set by renaming each node to the correct server name. Is this correct process ?Almost the correct procedure.\nStart the nodes without the replicaset enabled and clear out the replica set data.\nThen initialise a new replicaset.The full procedure is Restore a Replica Set from MongoDB Backups",
"username": "chris"
}
] | Restore production data in to Test database for testing | 2021-01-07T11:46:24.725Z | Restore production data in to Test database for testing | 2,434 |
null | [] | [
{
"code": "",
"text": "Hello MongoDB Folk,I am looking for some inputs and thoughts on reading only from Secondary through dedicated mongos.Is there any way to route all queries particularly from mongos that will read data only from secondary nodes of sharded cluster and no write will be entertained.To achieve this below are the tools I am aware of and that too comes with a lot of constraints like no support on index sync and requires a lot of manual interventions in order to manage and replicate a whole new cluster.mongo-connector and seems no more active: GitHub - yougov/mongo-connector: MongoDB data stream pipeline tools by YouGov (adopted from MongoDB)\nmongo-shake - Documentation is not much clear: GitHub - alibaba/MongoShake: MongoShake is a universal data replication platform based on MongoDB's oplog. Redundant replication and active-active replication are two most important functions. 基于mongodb oplog的集群复制工具,可以满足迁移和同步的需求,进一步实现灾备和多活功能。This is the improvement request I have submitted in MongoDB\nhttps://jira.mongodb.org/browse/SERVER-52536\nIf the above improvement seems useful then please up-vote and any other tools or utility recommendations are most welcome.",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "Is there any way to route all queries particularly from mongos that will read data only from secondary nodes of sharded cluster and no write will be entertained.I am not sure what your use-case for this. If to improve performances of the whole system you may want to consider that secondaries have the same workload as the primary. They performed the same amount of writes. So you might not see a lot of differences.Before going to the complex route of having dedicated mongos, I would look at\nand\nfor normal read workload as they may satisfy your use-case.If you want mongos for reporting or analytic workload, you may consider How To Use MongoDB Replica Sets | MongoDB Hidden Nodes",
"username": "steevej"
}
] | Mongos dedicated to route only on secondary nodes of cluster | 2021-01-07T06:30:23.517Z | Mongos dedicated to route only on secondary nodes of cluster | 1,659 |
[
"app-services-data-access"
] | [
{
"code": "",
"text": "Hello. I seem to be having this same issue six months later:\nScreen Shot 2021-01-05 at 5.48.28 PM1978×1224 167 KB\nIn this case I have already created and populated collections on Atlas and now I am trying to get started working with them on the front-end. This is my first foray into Realm.I will try disconnecting and reconnecting as @Ian_Ward suggested above and will post an update if that works, but wanted to point out that this issue may not have been fixed yet.",
"username": "Nick_McCrea"
},
{
"code": "mongodb-atlasFailed to add rule: error processing request\n",
"text": "I was still unable to create rules for my existing Atlas collections after unlinking and relinking. I tried two methods of linking the data source:1) Automatically by creating a new Realm appWhen creating a new Realm app, I am prompted to select the existing cluster as the data source, and a link with the service name mongodb-atlas is automatically configured for me. From the Realm UI, this data source shows my existing collections from Atlas. However, when I attempt to create a rule for any of these collections, I get the same error:I also get this error when I attempt to create rules for a new (non-existent) collection on this data source, as @Jay appears to have been doing in the original post above.2) Manually using the “Link a Data Sources” tool under the “Manage” section of the side barAlthough I can select my Atlas cluster from this tool, once it is linked it does not find any of my existing collections. On this service, however, I can create a new rule on a “new” database and collection - i.e. I can successfully make a rule for a collection that doesn’t exist on a database that doesn’t exist on the Atlas cluster. Not really sure what to make of this.",
"username": "Nick_McCrea"
},
{
"code": "Sync is in Beta: Permissions for this synced collection are set \non the synced cluster. \n",
"text": "Nick,To my knowledge (I am not a MongoDB employee), the current version of Realm has still not released rule based permissions. This is why you see the bannerDevelopers are urged to use Sync permissions instead, which is documented herehttps://docs.mongodb.com/realm/sync/permissionsThese can be a little daunting to understand right out of the gate, and took me several days to fully digest. I did however write a medium article that tries to explain how to implement them.I grew up in Paris France and went to French high-school there. There is a lot I loved about the culture, but one of the most frustrating…\nReading time: 9 min read\nI hope this helpsRichard Krueger",
"username": "Richard_Krueger"
},
{
"code": "realm-cli login\"id\"realm-cli import// <application-name>/services/<service-name>/rules/<database-name>.<collection-name>.json\n\n{\n \"collection\": \"<collection-name>\",\n \"database\": \"<database-name>\",\n \"roles\": [\n {\n \"name\": \"default\",\n \"apply_when\": {},\n \"insert\": true,\n \"delete\": true,\n \"search\": true,\n \"additional_fields\": {}\n }\n ],\n \"schema\": {}\n}\nrealm-cli import",
"text": "Update: I was able to create rules for an existing collection by using the Realm CLI.Install the Realm CLI.Log into the Realm app using realm-cli login. (Requires creating an API key for the app).Create a local application configuration folder for the app. In particular I needed to figure out where to put my rules config for the collection in question, and what to put in it. When creating a new rule set, do not include the \"id\" field. The ID will be created for you when you push the rules up with realm-cli import.I started with a bare minimum rule config that looked like this:Push the application configuration up using realm-cli import.This seems to be enough for me to get unstuck and move on with my work. However I don’t know if it is appropriate to close this issue because the problem with the Realm web UI remains. I am still unable to create new rules for my collections using those tools, which are much more convenient and expedient for the kind of work I am doing right now.",
"username": "Nick_McCrea"
},
{
"code": "",
"text": "Thanks @Richard_Krueger. It looks like I was able to accomplish what I was trying to accomplish. I just posted an update on this thread which I assume will appear as soon as the mods approve it. Still haven’t delved into Sync. There is so much to learn! But as far as I can tell so far, creating these Realm access rules is critical to any aspirations to access Atlas data from the browser.",
"username": "Nick_McCrea"
},
{
"code": "",
"text": "One more update.I suspect the problem was that I was over my storage limit on the free tier of Atlas.One of my existing collections was very large. When I finished uploading it, I checked the metrics and was under the impression I was still under the 512MB storage limit for the free tier, which I am using to explore the system. I either misread the numbers or some kind of further processing occurred on that data (e.g. generating indexes maybe), as today I see that I am a little bit over the quota. It seems that MongoDB blocks most interactions with the system when you are over your limit (e.g. GraphQL queries return an error stating as much). After deleting the large collection in order to continue with my work, I find I am now able to create rules on the remaining existing collections using the Realm UI.This is not the rigorous proof of a solution I would hope for but suffice to say everything seems to be working normally now.",
"username": "Nick_McCrea"
},
{
"code": "",
"text": "Nick,Yea, the free tier is really just there for you to experiment with - it’s pretty easy to blow past the minimum memory requirement once you get serious.I have used realm-cli to set the rules as well. One thing to remember is that if you have dev mode turned on, even if you set the rules through the cli, they won’t be respected. In order for the sync rules to work, you have to have dev mode turned off, which is what you want for production.Richard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to create rules for my existing Atlas collections | 2021-01-05T23:12:09.105Z | Unable to create rules for my existing Atlas collections | 4,692 |
|
null | [
"connecting"
] | [
{
"code": "",
"text": "Mac OS Catalina version 10.15.7\nMongoDB shell version v4.2.3\ngit version: 6874650b362138df74be53d366bbefc321ea32d4\nallocator: system\nmodules: none\nbuild environment:\ndistarch: x86_64\ntarget_arch: x86_64\n*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.2021-01-06T11:50:11.288+0500 E QUERY [js] Error: Authentication failed. :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2021-01-06T11:50:11.293+0500 F - [main] exception: connect failed\n2021-01-06T11:50:11.293+0500 E - [main] exiting with code 1",
"username": "Muhammad_Numan"
},
{
"code": "mongomongo",
"text": "Welcome to the MongoDB community @Muhammad_Numan!*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.\n…\n2021-01-06T11:50:11.288+0500 E QUERY [js] Error: Authentication failed.Based on the quoted messages, it looks like you may not have the correct credentials to connect to an Atlas cluster.Note that the credentials for Atlas User Access (logging in via the Atlas UI) are not the same as those for a Database User. If you are trying to use an email address as the username in your mongo shell connection, it is likely that is an Atlas User rather than a Database User.For more suggestions on authentication issues, please see the Atlas guide to Troubleshoot Connection Issues.If you are still having trouble connecting, please provide an example of the mongo command line you are using (with any user/password/cluster details replaced).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you i have found a solution.",
"username": "Muhammad_Numan"
},
{
"code": "",
"text": "Hi @Muhammad_Numan,Glad you found a solution! It would be helpful if you are able to share the solution here, as others may encounter a similar issue and benefit from your experience.Regards,\nStennie",
"username": "Stennie_X"
}
] | I have a problem connecting to MongoDB can someone guide me to resolve this issue | 2021-01-06T10:05:10.462Z | I have a problem connecting to MongoDB can someone guide me to resolve this issue | 3,237 |
null | [] | [
{
"code": "",
"text": "Hi Team,We are having ebook reading application developed in IOS platform. We want to use swiftLint tool to check whether accessibility enabled for all UIControls which are in storyboard as well as programmatically initialised controls.We had a walk through for SwiftLint rules, but we couldn’t find the rule to check the accessibility enabled/ disabled controls.SwiftLint tool should fail builds if accessibility tags are not present for UIControls in the application.Could you help us to resolve it.",
"username": "karthika_beulah"
},
{
"code": "",
"text": "Hi @karthika_beulah\nWithout knowing about the specifics, If you checked all options, then it’s likely not supported. SwiftLint is an open source tool managed by the community, so you can either make a feature request to the community on github or if you want to contribute try to add it yourself with a PR.Cheers\nBrian",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "Hi @Brian_MunkholmThanks for your response.Our requirement: We are supporting voice over for visually disabled people. Accessibility should be enabled for all UI controls (eg : UIButton, UILabel… etc) in application to support voice over.Use case : When we make a build, if the controls doesn’t have accessibility enabled true property, then the build should fail in Xcode.For achieving this requirement, we like to use SwiftLint tool.\nIf SwiftLint is supporting this feature, could you explain how to use it. If not, Could you take it as feature request?",
"username": "karthika_beulah"
},
{
"code": "",
"text": "Hi @karthika_beulah\nI’m sorry if I was unclear. I don’t have the answer to your specific question. But as the community who maintain the SwiftLint tool are using Github for feature requests, the best approach is for you to request that feature by going to Issues · realm/SwiftLint · GitHub and create a new feature request there. When you do that you will also be notified of the progress of that feature request going forward.\nHope this helps you!",
"username": "Brian_Munkholm"
},
{
"code": "",
"text": "Thank you @Brian_Munkholm",
"username": "karthika_beulah"
}
] | SwiftLint tool should fail builds if accessibility tags are not present for UIControls | 2021-01-07T05:17:22.325Z | SwiftLint tool should fail builds if accessibility tags are not present for UIControls | 2,148 |
null | [
"node-js",
"react-native"
] | [
{
"code": "const Person = {\n name: \"User\",\n primaryKey: \"_id\",\n properties: {\n _id: \"objectId\",\n name: \"string\",\n age: \"number\",\n company: \"Company[]\"\n }\n};\n\nconst Company = {\n name: \"Company\",\n primaryKey: \"_id\",\n properties: {\n _id: \"objectId\",\n name: \"string\",\n boss: {\n type: 'linkingObjects',\n objectType: 'Person',\n property: 'company'\n }\n }\n};\nrealm.objects(\"Person\").filtered(\"age > 30\");",
"text": "Giving the following schema -->After adding some persons and several of them with a company, I try to filter by age and get the ones that are over 30 years with -> realm.objects(\"Person\").filtered(\"age > 30\");The result of this query is the following error -> Maximum call stack size exceeded.I expect to have a list of Persons which meets the requirement but instead I have this error. I have seen that in the release 10.0.0 this error is suppose to be fix for toJSON() and indeed if I try to filter the object by its id everything works fine but once I try to get a list of some objects I get this error.Does somebody have a solution for this?",
"username": "Fabio_GC"
},
{
"code": "",
"text": "Hi @Fabio_GC,Can you confirm the specific Realm SDK version you are using?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello @Stennie_XI´m using -->Realm: 10.1.1React Native: 0.62.2Node: 10.22.1",
"username": "Fabio_GC"
},
{
"code": "",
"text": "After further investigation on this topic I´v seen that this problem is only happening with Jest while executing test.",
"username": "Fabio_GC"
}
] | Maximum call stack size exceeded when filtering objects with inverse relationship | 2020-12-22T19:43:18.276Z | Maximum call stack size exceeded when filtering objects with inverse relationship | 5,538 |
null | [
"capacity-planning"
] | [
{
"code": "",
"text": "What is max no. of database we can have in a single instance of mongodb",
"username": "Avanish_Gupta"
},
{
"code": "",
"text": "Welcome to the MongoDB community @Avanish_Gupta!The maximum number of databases and collections is a practical limit determined by a combination of factors including your system resources, schema design, workload, and performance expectations. If your working set is significantly larger than available RAM, performance will suffer and eventually become I/O bound shuffling data to and from disk. You can scale a single server vertically (adding more RAM and disk), but it will eventually be more economical to scale horizontally (across multiple servers) using sharding.For more details, please see my response on Database and collection limitations - #2 by Stennie_X.Do you have a specific concern around an application or deployment you are designing? If you can share more details around your use case and concerns on the number of databases, there may be more relevant advice to share.Note: I generally only recommend using a standalone instance for development or testing purposes. A replica set provides data redundancy, failover, and high availability features which are typical requirements for production deployments.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Max no. of database | 2021-01-06T08:00:08.850Z | Max no. of database | 3,144 |
null | [
"connecting",
"sharding"
] | [
{
"code": "2021-01-04T12:18:29.355+0000 I CONTROL [main] ***** SERVER RESTARTED *****\n(...)\n2021-01-04T12:18:29.434+0000 I SHARDING [mongosMain] mongos version v4.2.8-8\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] db version v4.2.8-8\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] git version: 389dde50b8368b026e41abeeedc4498c24e27fd6\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] allocator: tcmalloc\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] modules: none\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] build environment:\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] distarch: x86_64\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] target_arch: x86_64\n2021-01-04T12:18:29.434+0000 I CONTROL [mongosMain] options: { config: \"xxx\", net: { bindIp: \"0.0.0.0\", maxIncomingConnections: 3000, port: 27017, processManagement: { fork: true, pidFilePath: \"xxx\" }, security: { clusterAuthMode: \"keyFile\", keyFile: \"xxx\" }, sharding: { configDB: \"rscfg/config1:27019,config2:27019,config3:27019\" }, systemLog: { destination: \"file\", logAppend: true, logRotate: \"reopen\", path: \"/log/mongodb/mongos.log\", quiet: true, verbosity: 0 } }\n2021-01-04T12:18:29.435+0000 I NETWORK [mongosMain] Starting new replica set monitor for rscfg/config1:27019,config2:27019,config3:27019\n2021-01-04T12:18:29.435+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to config3:27019\n2021-01-04T12:18:29.435+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to config1:27019\n2021-01-04T12:18:29.435+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to config2:27019\n2021-01-04T12:18:29.436+0000 I SHARDING [thread1] creating distributed lock ping thread for process mongos1:27017:1609762709:-6643752048932861528 (sleeping for 30000ms)\n2021-01-04T12:18:29.450+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for rscfg is rscfg/config1:27019,config2:27019,config3:27019\n2021-01-04T12:18:29.450+0000 I SHARDING [Sharding-Fixed-0] Updating sharding state with confirmed set rscfg/config1:27019,config2:27019,config3:27019\n2021-01-04T12:18:29.494+0000 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1609762708, 1), t: 1 }\n2021-01-04T12:18:29.495+0000 I NETWORK [shard-registry-reload] Starting new replica set monitor for rs01/node1:27018,node2:27018,node3:27018\n2021-01-04T12:18:29.517+0000 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document\n2021-01-04T12:18:29.612+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to node2:27018\n2021-01-04T12:18:29.612+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to node1:27018\n2021-01-04T12:18:29.612+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to node3:27018\n2021-01-04T12:18:29.692+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for rs01 is rs01/node1:27018,node2:27018,node3:27018\n2021-01-04T12:18:29.692+0000 I SHARDING [UpdateReplicaSetOnConfigServer] Updating sharding state with confirmed set rs01/node1:27018,node2:27018,node3:27018\n2021-01-04T12:18:31.495+0000 I FTDC [mongosMain] Initializing full-time diagnostic data capture with directory 
'/log/mongodb/mongos.diagnostic.data'\n2021-01-04T12:18:31.497+0000 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID(\"b08a780c-0939-41ef-81de-8aee9e754776\"), lastMod: 0 } took 0 ms\n2021-01-04T12:18:31.497+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock\n2021-01-04T12:18:31.497+0000 I NETWORK [listener] Listening on 0.0.0.0\n2021-01-04T12:18:31.497+0000 I NETWORK [listener] waiting for connections on port 27017 ssl\n2021-01-04T12:18:31.509+0000 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions to version\n",
"text": "Hi!setup:This setup was working fine for some time but suddenly any connection to any of the mongoS hangs (local and remote).mongo --host localhost --port 27017 --username root --password\nconnecting to: mongodb://localhost:27017/?compressors=disabled&gssapiServiceName=mongodbBelow are some logs on mongoS on restart:Connecting directly to the data nodes works and couldn’t find any logs that could explain this behaviour in the config and data nodes.Could you please help me understand what can be wrong here?Thank you.",
"username": "Pedro_Albuquerque"
},
{
"code": "",
"text": "Seems this behaviour is described as a bug here: https://jira.mongodb.org/browse/SERVER-53540",
"username": "Pedro_Albuquerque"
},
{
"code": "",
"text": "Hi @Pedro_Albuquerque,The symptoms you are describing sound similar to SERVER-53540 and SERVER-53337, but both are still open investigations that may have different causes (and have not been confirmed as bugs, yet). Proper investigation of your environment will require more details including diagnostic data.I checked in with colleagues investigating the existing issues and confirmed you should open a new SERVER issue if you are able to share further details.Regards,\nStennie",
"username": "Stennie_X"
}
] | Any connection to mongoS hangs | 2021-01-04T19:35:06.869Z | Any connection to mongoS hangs | 3,588 |
null | [
"mongoose-odm"
] | [
{
"code": "for (const el of records) {\n promiseArray.push(\n Stock.bulkWrite(\n [\n {\n updateOne: {\n filter: {\n index: el.index,\n product: el.product,\n batchNo: el.batchNo,\n agency,\n totalQuantity: { $gte: el.loadingTotal },\n },\n update: {\n $push: {\n reservations: {\n loadingSheetId: sheetAfterSave._id,\n reservedCaseQuantity: el.loadingCaseCount,\n reservedUnitQuantity: el.loadingUnitCount,\n reservedTotalQuantity: el.loadingTotal,\n },\n },\n $inc: { totalQuantity: -el.loadingTotal },\n $set: { currentQuantity: \"$totalQuantity\" } // Issue\n },\n },\n },\n ],\n { session: session }\n )\n );\n }\n\n const result = await Promise.all(promiseArray);\n console.log('******** Result Promise ********', result);\n[distribution] CastError: Cast to Number failed for value \"$totalQuantity\" at path \"currentQuantity\"\n[distribution] at SchemaNumber.cast (/app/node_modules/mongoose/lib/schema/number.js:384:11)\n[distribution] at SchemaNumber.SchemaType.applySetters (/app/node_modules/mongoose/lib/schematype.js:1031:12)\n[distribution] at SchemaNumber.SchemaType._castForQuery (/app/node_modules/mongoose/lib/schematype.js:1459:15)\n[distribution] at SchemaNumber.castForQuery (/app/node_modules/mongoose/lib/schema/number.js:436:14)\n[distribution] at SchemaNumber.SchemaType.castForQueryWrapper (/app/node_modules/mongoose/lib/schematype.js:1428:15)\n[distribution] at castUpdateVal (/app/node_modules/mongoose/lib/helpers/query/castUpdate.js:520:19)\n[distribution] at walkUpdatePath (/app/node_modules/mongoose/lib/helpers/query/castUpdate.js:347:22)\n[distribution] at castUpdate (/app/node_modules/mongoose/lib/helpers/query/castUpdate.js:94:7)\n[distribution] at /app/node_modules/mongoose/lib/helpers/model/castBulkWrite.js:70:37\n[distribution] at /app/node_modules/mongoose/lib/model.js:3502:35\n[distribution] at each (/app/node_modules/mongoose/lib/helpers/each.js:11:5)\n[distribution] at /app/node_modules/mongoose/lib/model.js:3502:5\n[distribution] at /app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:5\n[distribution] at new Promise (<anonymous>)\n[distribution] at promiseOrCallback (/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:30:10)\n[distribution] at Function.Model.bulkWrite (/app/node_modules/mongoose/lib/model.js:3500:10) {\n[distribution] stringValue: '\"$totalQuantity\"',\n[distribution] messageFormat: undefined,\n[distribution] kind: 'Number',\n[distribution] value: '$totalQuantity',\n[distribution] path: 'currentQuantity',\n[distribution] reason: AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:\n[distribution]\n[distribution] assert.ok(!isNaN(val))\n[distribution]\n[distribution] at castNumber (/app/node_modules/mongoose/lib/cast/number.js:28:10)\n[distribution] at SchemaNumber.cast (/app/node_modules/mongoose/lib/schema/number.js:382:12)\n[distribution] at SchemaNumber.SchemaType.applySetters (/app/node_modules/mongoose/lib/schematype.js:1031:12)\n[distribution] at SchemaNumber.SchemaType._castForQuery (/app/node_modules/mongoose/lib/schematype.js:1459:15)\n[distribution] at SchemaNumber.castForQuery (/app/node_modules/mongoose/lib/schema/number.js:436:14)\n[distribution] at SchemaNumber.SchemaType.castForQueryWrapper (/app/node_modules/mongoose/lib/schematype.js:1428:15)\n[distribution] at castUpdateVal (/app/node_modules/mongoose/lib/helpers/query/castUpdate.js:520:19)\n[distribution] at walkUpdatePath (/app/node_modules/mongoose/lib/helpers/query/castUpdate.js:347:22)\n[distribution] at castUpdate 
(/app/node_modules/mongoose/lib/helpers/query/castUpdate.js:94:7)\n[distribution] at /app/node_modules/mongoose/lib/helpers/model/castBulkWrite.js:70:37\n[distribution] at /app/node_modules/mongoose/lib/model.js:3502:35\n[distribution] at each (/app/node_modules/mongoose/lib/helpers/each.js:11:5)\n[distribution] at /app/node_modules/mongoose/lib/model.js:3502:5\n[distribution] at /app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:5\n[distribution] at new Promise (<anonymous>)\n[distribution] at promiseOrCallback (/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:30:10) {\n[distribution] generatedMessage: true,\n[distribution] code: 'ERR_ASSERTION',\n[distribution] actual: false,\n[distribution] expected: true,\n[distribution] operator: '=='\n[distribution] }\n[distribution] }\n",
"text": "I am looping through products (variable records) with a bulkWrite updateOne operation on each product.Once I update the records I can see the reservations array is being added to the document, totalQuantity is updated to the expected value (e.g: if the totalQuantity is 2000 and the loadingTotal is 600 then the updated totalQuantity is 1400)As you can see in line $set: { currentQuantity: “$totalQuantity” } I am trying to assign latest totalQuantity value (1400) to currentQuantity after $inc operation. But this is not working. Getting below errorCan someone help me with this issue?? Thanks",
"username": "Shanka_Somasiri"
},
{
"code": " $set: { currentQuantity: \"$totalQuantity\" }totalQuantity",
"text": "Hello @Shanka_Somasiri, welcome to the MongoDB community forum. $set: { currentQuantity: \"$totalQuantity\" }You cannot assign a document field’s (totalQuantity) value in an update operation like that - hence the error. But, you can use the Updates with Aggregation Pipeline to do such an update using the document field’s value.Note that this feature is supported only with MongoDB v4.2 or newer.",
"username": "Prasad_Saya"
},
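A minimal sketch of the update-with-aggregation-pipeline form referred to above; the collection name and numbers are placeholders, not taken from the thread, and it requires MongoDB 4.2+:

```javascript
// Wrapping the update in an array makes it an aggregation pipeline,
// so field references such as "$totalQuantity" can be evaluated.
// Expressions inside a $set stage read that stage's input document,
// so the decrement and the copy are split into two stages: the second
// stage then sees the already-decremented totalQuantity.
db.stocks.updateOne(
  { _id: 1 },
  [
    { $set: { totalQuantity: { $add: [ "$totalQuantity", -600 ] } } },  // pipeline form of $inc
    { $set: { currentQuantity: "$totalQuantity" } }                     // copy the updated value
  ]
)
```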
{
"code": "",
"text": "Thanks @Prasad_Saya for the insight. I will try Updates with Aggregation Pipeline — MongoDB Manual and will update you.Appreciate your help",
"username": "Shanka_Somasiri"
},
{
"code": "[distribution] MongoError: Unrecognized expression '$inc'\n[distribution] at Function.create (/app/node_modules/mongoose/node_modules/mongodb/lib/core/error.js:51:12)\n[distribution] at toError (/app/node_modules/mongoose/node_modules/mongodb/lib/utils.js:149:22)\n[distribution] at /app/node_modules/mongoose/node_modules/mongodb/lib/operations/common_functions.js:376:39\n[distribution] at handler (/app/node_modules/mongoose/node_modules/mongodb/lib/core/sdam/topology.js:913:24)\n[distribution] at /app/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection_pool.js:356:13\n[distribution] at handleOperationResult (/app/node_modules/mongoose/node_modules/mongodb/lib/core/sdam/server.js:493:5)\n[distribution] at commandResponseHandler (/app/node_modules/mongoose/node_modules/mongodb/lib/core/wireprotocol/command.js:123:25)\n[distribution] at MessageStream.messageHandler (/app/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:272:5)\n[distribution] at MessageStream.emit (node:events:376:20)\n[distribution] at processIncomingData (/app/node_modules/mongoose/node_modules/mongodb/lib/cmap/message_stream.js:144:12)\n[distribution] at MessageStream._write (/app/node_modules/mongoose/node_modules/mongodb/lib/cmap/message_stream.js:42:5)\n[distribution] at writeOrBuffer (node:internal/streams/writable:395:12)\n[distribution] at MessageStream.Writable.write (node:internal/streams/writable:340:10)\n[distribution] at TLSSocket.ondata (node:internal/streams/readable:748:22)\n[distribution] at TLSSocket.emit (node:events:376:20)\n[distribution] at addChunk (node:internal/streams/readable:311:12) {\n[distribution] driver: true,\n[distribution] index: 0,\n[distribution] code: 168\n[distribution] }",
"text": "@Prasad_Saya Unfortunately I cannot use pipeline operators such as $inc this way. Getting error",
"username": "Shanka_Somasiri"
},
{
"code": "",
"text": "@Prasad_Saya There was an implementation issue on my behalf. Its working now as expected.Thanks alot for your help. Cheers",
"username": "Shanka_Somasiri"
},
{
"code": "promiseArray.push(\n Stock.updateOne(\n {\n index: el.index,\n product: el.product,\n batchNo: el.batchNo,\n agency,\n totalQuantity: { $gte: el.loadingTotal },\n },\n {\n $push: {\n reservations: {\n loadingSheetId: sheetAfterSave._id,\n reservedCaseQuantity: el.loadingCaseCount,\n reservedUnitQuantity: el.loadingUnitCount,\n reservedTotalQuantity: el.loadingTotal,\n },\n },\n $inc: { totalQuantity: -el.loadingTotal },\n $set: { currentQuantity: }, // How can i get the updated totalQuantity here\n },\n {\n session: session,\n }\n )\n );",
"text": "How can i get the latest totalQuantity in below code after $inc: { totalQuantity: -el.loadingTotal } operation? I am stuck at this point. Any ideas??",
"username": "Shanka_Somasiri"
},
{
"code": "$inc$push$set$add$inc$concatArrays$pushtotalQuantitycurrentQuantity",
"text": "How can i get the latest totalQuantity in below code after $inc: { totalQuantity: -el.loadingTotal } operation? I am stuck at this point. Any ideas??As I had mentioned earlier you cannot do that without using an Update With Aggregation Pipeline. The operators you can use within the pipeline are Aggregation Pipeline Operators - not the Update Operators ($inc, $push and $set).You can use the Aggregation Pipeline Operators - $add instead of $inc, $concatArrays instead of $push and then assign the totalQuantity to the currentQuantity.You need to get familiar with using the Aggregation queries to understand and work with Updates With Aggregation Pipeline. Here are some example posts:",
"username": "Prasad_Saya"
},
{
"code": "promiseArray.push(\n Stock.updateOne(\n {\n index: el.index,\n product: el.product,\n batchNo: el.batchNo,\n agency,\n totalQuantity: { $gte: el.loadingTotal },\n },\n {\n $set: {\n reservations: {\n $concatArrays: [\n { $slice: ['$reservations', 1] },\n [\n {\n loadingSheetId: sheetAfterSave._id,\n reservedCaseQuantity: el.loadingCaseCount,\n reservedUnitQuantity: el.loadingUnitCount,\n reservedTotalQuantity: el.loadingTotal,\n },\n ],\n ],\n },\n },\n },\n {\n session: session,\n }\n )\n );\n",
"text": "Hi @Prasad_SayaThis is my latest code. I am new to aggregate pipeline operations and might need some help.But this does not properly push the objects to reservation array even though a reservation is being added.As you can see in the above screen shot loadingSheetId: sheetAfterSave._id, is missing. Also the quantities are always zero. Any ideas why??",
"username": "Shanka_Somasiri"
},
{
"code": "{\n _id: 1,\n reservations: [ ]\n}\nreservationsvar docToUpdate = { loadingShetId: 12, reservedCaseQty: 200 }db.test.updateOne(\n { _id: 1 },\n [\n {\n $set: {\n reservations: {\n $concatArrays: [ \"$reservations\", [ docToUpdate ] ]\n }\n }\n }\n ]\n)\n{\n \"_id\" : 1,\n \"reservations\" : [\n {\n \"loadingShetId\" : 12,\n \"reservedCaseQty\" : 200\n }\n ]\n}\n$inc: { totalQuantity: -el.loadingTotal }\n$set: { currentQuantity: \"$totalQuantity\" }\ntotalQuantity: { $add: [ \"$totalQuantity\", -el.loadingTotal ] }\ncurrentQuantity: \"$totalQuantity\"\ndb.test.updateOne(\n { _id: 1 },\n [\n {\n $set: {\n reservations: {\n $concatArrays: [ \"$reservations\", [ docToUpdate ] ]\n },\n totalQuantity: { $add: [ \"$totalQuantity\", -el.loadingTotal ] },\n currentQuantity: \"$totalQuantity\"\n }\n }\n ]\n)",
"text": "Hi @Shanka_Somasiri, I am not sure about what you are trying. Here is how you do the update using Aggregation Pipeline, for example.I have an input document:Here is a document I want to push into the reservations array.var docToUpdate = { loadingShetId: 12, reservedCaseQty: 200 }The update query:The updated document:The remaining field updates:Can be coded as follows:So, the update query will become:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "@Prasad_Saya This worked perfectly. I had some issue with my code.\nThank you sooo much again. I was stuck with this for days now and appreciate your help. ",
"username": "Shanka_Somasiri"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB BulkWrite update operation $set not working | 2021-01-04T19:41:10.988Z | MongoDB BulkWrite update operation $set not working | 7,293 |
[
"sharding"
] | [
{
"code": "",
"text": "Hello,We recived the below error, and all mongos stopped its connections.“received StaleShardVersion error :: caused by :: StaleConfig”\nI found this document, where it says this occurs if all the mongos not get updated?Can anyone please help to understand this in detail.",
"username": "Aayushi_Mangal"
},
{
"code": "",
"text": "Hi @Aayushi_Mangal!Happy New Year!Can you add some more context? What is your MongoDB version? Does the error occur while doing inserts, updates, or queries?All the best,– Rodrigo",
"username": "logwriter"
},
{
"code": "",
"text": "Hi Rodrigo,\nHappy New Year!Version using 4.2.8Our all mongos suddenly stopped working, unable to connect and we were receiving constantly these messages:\nNETWORK [conn1524750] DBException handling request, closing client connection: ClientDisconnect: operation was interrupted.and\nSHARDING [PeriodicShardedIndexConsistencyChecker] Attempt 0 to check index consistency for DBname.Collname received StaleShardVersion error :: caused by :: StaleConfigWe logged these jira, but we are not sure if this is required flushrouter config or some hmac key refresh issue.\nhttps://jira.mongodb.org/browse/SERVER-53540So after some digging, I found these articles that stated flush required, because sometime metadata did not get updated across all the cluster.But we did not have any of the drop database and other conditions except getShardDistribution commands.",
"username": "Aayushi_Mangal"
}
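For reference, the routing-table flush mentioned above can be requested manually from a mongos; this is only a generic sketch using the namespace from the log above, not a confirmed fix for the incident described in this thread:

```javascript
// Flush the cached routing metadata for one collection ...
db.adminCommand({ flushRouterConfig: "DBname.Collname" })

// ... or flush everything cached by this router.
db.adminCommand({ flushRouterConfig: 1 })
```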
] | StaleConfig - How to stop this | 2021-01-06T10:36:07.694Z | StaleConfig - How to stop this | 4,026 |
null | [] | [
{
"code": "",
"text": "Sorry, we’re deploying some big changes of our own and are temporarily unable to process your request. We expect to be back up and running very soon.any news about this?",
"username": "Khoren_Ter-Hovhannis"
},
{
"code": "",
"text": "We’re very sorry for the inconvenience, this is now resolved. Please track at https://status.cloud.mongodb.com/ in future",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cluster is not created | 2021-01-06T18:49:38.846Z | Cluster is not created | 2,306 |
null | [
"aggregation"
] | [
{
"code": "db.temp_coll.aggregate([\n{\n \"$match\": {}\n},\n{\n \"$addFields\": { \"size\": {$bsonSize: \"$$ROOT\"}}\n},\n{\n \"$group\": {\n \"_id\": {\n \"groupField\": \"$groupField\",\n },\n \"count\": {\"$sum\": 1},\n \"totalSizeBytes\": {\"$sum\": \"$size\"},\n \"maxId\": {\"$max\": \"$_id\"},\n \"minId\": {\"$min\": \"$_id\"},\n }\n}\n])\n",
"text": "Hi,I saw that mongo 4.4 added that ability to get bsonSize inside aggregation but I’m working with mongo 4.2\nI wanted to know if there is any way to get the size of a field (or the full document) with aggregation.This is the query I want to executeI know there is Object.bsonsize but not sure how can I access (if I can) to a value of a field in the current document (when I do it before the grouping.",
"username": "Roee_Gadot"
},
{
"code": "// Setup\ndb.foo.drop();\ndb.foo.insert({ _id: 1, groupField: \"a\", data: \"abcdef\" })\ndb.foo.insert({ _id: 2, groupField: \"b\", data: \"abcdefghijklmnop\" })\ndb.foo.insert({ _id: 3, groupField: \"a\", data: \"abcdefghijklmnopqrstuvwxyz\" })\ndb.foo.insert({ _id: 4, groupField: \"b\", data: \"abc\" })\ndb.foo.insert({ _id: 5, groupField: \"a\", data: \"abcdefghij\" })\n// Map-Reduce\ndb.runCommand({\n mapReduce: \"foo\",\n map: function(){ \n emit(this.groupField, this) \n },\n reduce: function(key, values) { \n var ret = { count: 1, totalSizeBytes: Object.bsonsize(values[0]), maxId: values[0]._id, minId: values[0]._id };\n var max = ret.totalSizeBytes;\n var min = ret.totalSizeBytes;\n for (var i = 1; i < values.length; i++) {\n var doc = values[i];\n var size = Object.bsonsize(doc);\n ret.totalSizeBytes += size;\n ret.count++;\n if (size > max) { max = size; ret.maxId = doc._id }\n if (size < min) { min = size; ret.minId = doc._id }\n }\n return ret;\n },\n out: { inline: 1 } \n})\n// Output\n{\n\t\"results\" : [\n\t\t{\n\t\t\t\"_id\" : \"a\",\n\t\t\t\"value\" : {\n\t\t\t\t\"count\" : 3,\n\t\t\t\t\"totalSizeBytes\" : 183,\n\t\t\t\t\"maxId\" : 3,\n\t\t\t\t\"minId\" : 1\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"_id\" : \"b\",\n\t\t\t\"value\" : {\n\t\t\t\t\"count\" : 2,\n\t\t\t\t\"totalSizeBytes\" : 113,\n\t\t\t\t\"maxId\" : 2,\n\t\t\t\t\"minId\" : 4\n\t\t\t}\n\t\t}\n\t],\n\t\"timeMillis\" : 23,\n\t\"counts\" : {\n\t\t\"input\" : 5,\n\t\t\"emit\" : 5,\n\t\t\"reduce\" : 2,\n\t\t\"output\" : 2\n\t},\n\t\"ok\" : 1\n}\n",
"text": "I saw that mongo 4.4 added that ability to get bsonSize inside aggregation but I’m working with mongo 4.2\nI wanted to know if there is any way to get the size of a field (or the full document) with aggregation.Unfortunately you cannot use Aggregation to accomplish this in MongoDB 4.2 and earlier.You can however use a Map-Reduce command to perform this operation as follows:",
"username": "alexbevi"
}
] | bsonSize in aggregation | 2021-01-04T19:42:02.324Z | bsonSize in aggregation | 3,060 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "what are the differences in map id functions ?c.MapIdMember(f => f.Email);\nc.MapIdField(f => f.Email);\nc.MapIdProperty(f => f.Email);",
"username": "alexov_inbox"
},
{
"code": "BsonClassMapIdBsonIdBsonMemberMap",
"text": "what are the differences in map id functions ?My understanding of these different BsonClassMap methods is that they all produce the same result (Mapping the Id member via code instead of using the BsonId attribute).All three will produce a BsonMemberMap which represents the mapping between a field or property and a BSON element.",
"username": "alexbevi"
}
] | What are the differences in map id functions | 2020-12-28T21:39:16.415Z | What are the differences in map id functions | 2,399 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hello,I have a given array of randomized numbers. For example [1,2,3…n]\nNow I want to use aggregation and the $addFields function, to give each element one and only on of the values from the given array.\nSo document 1 has randomizedNumber = 1, document 2 has randomizedNumber = 2 and so on till document n.I need this because I want to randomize sorting. And I need the Field, because I dont fetch all documents at once, but in small groups to let the website load smoothly. Therefore I need the constant randomized array.Dont find any functionality. I imagine I had to loop through the given array, but doesnt find any for this in the docs.\nHope somebody can give me input for a solution.Happy new year!",
"username": "Ep_Ch"
},
{
"code": "",
"text": "Hi @Ep_Ch,Welcome to MongoDB community.I would like to help you out but not sure I understand the purpose or the technic you are looking for.One of the main questions is how adding a dynamic field which is unindexed will allow the website to load smoothly, why not to fetch the documents in batches with their natural order (no order).If the idea is the present the data randomly why not do it on the application side by randomly accessing document positions?Once I better understand what you need I can recommend a method.Otherwise, one way to merge the data sets us to get the fetched documents into an array and use $map or $zip to merge them into one.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for your response, Pavel!\nExcuse my bad english - its not my mothertongue.The requirement to sort matches randomly is because fairness to the customer companies. It shall not happened, that some company-announcements for any reason is listened always behind other companies announcements, if filter-matching in the announcement-search is identitcal.\nIts an easy task, when fetch all data at once, but I had to use §skip\" functionality, so that I can fetch chunk-wise. Therefore its important, that the random-value I add as a field to sort in second dimension (first dimension sort is for prioritized filter-criteriy), stays at least for some time the same.Actually I have done this dirty solution, which works, but hasnt good scalebility:I create array and fill it with random numbers with array-length of 2000. the seed for randomization I set every hour, so the array stays constant on the one hand and chunk-wise fetching with §skip works, but also change regulary so no company is prioritized to others in the long term. Then I do $addField with randomNumber and give the value witch $arrayElementAt {randomNumberArray, elementShortID}The elementShortId is a incremental Number every company-announce gets when insert to the DB as a document. This solution works so far, but I will get problems, when the values of the shortId increase. The length of the random array will increase more and more and its a waste of time. From a length of 2000 on you feel allready slowing down the search on the website and thats super bad user-experience.As it is now it will work maybe for a few months, but I need a better concept, because increase of customer-announces so I can achieve the same effect but faster. I cant and shouldnt work with this up-bloated helper-array.So Im searching for alternative concepts.I hope I could described it better in this post here.",
"username": "Ep_Ch"
},
{
"code": "db.coll.find({_id : { $gt : 12211} })\n%/",
"text": "Hi @Ep_Ch,If you just want to show random x documents from a collection why not to use $sample which will return 200-300 random documents?Other than that adding fields dynamically is definitely not scalable when data set grows.What I would suggest is to use the _id or any unique based indexed field where you would populate random numbers generated during document creation.Now when it comes to query you can randomise a seed within the existing range, example : 12211 and query the documents grater or lower than this value.This will give you a random set of documents everytime.If this is not possible you should consider doing a calculated number based on a % or / math games.Read this as wellBecause sometimes you need to grab a random document from your database\nReading time: 5 min read\nBest\nPavel",
"username": "Pavel_Duchovny"
},
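Two generic sketches of the approaches described above (the collection name, field name, and seed value are illustrative assumptions):

```javascript
// Option 1: let the server return a pseudo-random set of documents.
db.announcements.aggregate([ { $sample: { size: 200 } } ])

// Option 2: store an indexed random value on each document at insert time,
// then page from a randomly chosen starting point.
db.announcements.createIndex({ randomKey: 1 })
var seed = Math.random();  // pick a new seed per query or per hour
db.announcements.find({ randomKey: { $gte: seed } })
                .sort({ randomKey: 1 })
                .limit(50)
```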
{
"code": "",
"text": "Thanks for the good input Pavel! I think I can build a fitting solution with that!",
"username": "Ep_Ch"
}
] | Increment over Array to use value for $addFields in aggregation | 2021-01-01T21:55:42.957Z | Increment over Array to use value for $addFields in aggregation | 3,413 |
null | [
"swift",
"atlas-device-sync"
] | [
{
"code": "notificationToken = accounts.observe { [weak self] (changes) in\n guard let accountsTableView = self?.accountsTableView else { return }\n switch changes {\n case .initial:\n accountsTableView.reloadData()\n case .update:\n accountsTableView.reloadData()\n case .error(let error):\n // An error occurred while opening the Realm file on the background worker thread\n fatalError(\"\\(error)\")\n }\n}\n",
"text": "Hi all,I’m porting a Swift iOS demo app from ROS to MongoDB Realm. I have an accounts collection and I initialize a notification handler as follows:Then initial sync works and my tableview gets loaded with the list of expected accounts. However, if I make a change to an account in the MongoDb Collection, then I do not get the update syncing with my IOS app (a breakpoint on the “.update” case is not hit)?If I restart the app the change comes though with the initial sync.The Realm server logs do however show the following entry:> OK\n> Dec 02 8:27:55+00:00\n> 49ms\n> SyncWrite[5fc7133ea9536f9ade3901ea]Source:Write originated from MongoDBLogs:[ “Upload message contained 1 changeset(s)”, “Integrating upload required conflict resolution to be performed on 0 of the changesets”, “Latest server version is now 12” ]Partition:5fc6497676312e5697784cbbWrite Summary:{ “Account”: { “replaced”: [ “5fc4e76983e79e3471f8d3c7” ] } }And my app logs show the following:2020-12-02 08:27:24.554687+0000 mongodb-realm-offline-banking[91583:9949465] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, async open = false, client reset = false2020-12-02 08:27:24.634948+0000 mongodb-realm-offline-banking[91583:9949452] [] nw_protocol_get_quic_image_block_invoke dlopen libquic failed2020-12-02 08:27:24.690874+0000 mongodb-realm-offline-banking[91583:9949465] Sync: Connection[1]: Connected to endpoint ‘3.210.32.164:443’ (from ‘192.168.1.141:62813’)FWIW - The app was fully functioning with ROS. Obviously I’ve updated the SDK to the latest version in order to sync with MongoDB Realm, and the notification handler did not require any code changes.Anyone have any ideas?",
"username": "Ross_Whitehead"
},
{
"code": "notificationToken",
"text": "Hi, @Ross_Whitehead,\nHow do you store notificationToken value? By description it looks like it’s being deallocated right after the subscription.",
"username": "Pavel_Yakimenko"
},
{
"code": "",
"text": "Hi Pavel, thanks for the reply.\nnotificationToken is a class level field on AccountsViewController (The Accounts view contains the table). So should not be deallocated until the AccountsViewController is de-initialized.",
"username": "Ross_Whitehead"
},
{
"code": " import UIKit\n import RealmSwift\n\n class AccountsViewController: UIViewController {\n @IBOutlet weak var accountsTableView: UITableView!\n \n var accounts: Results<Account>\n var realm: Realm\n var notificationToken: NotificationToken? = nil\n let app = App(id: AppConfig.REALM_APP_ID)\n \n required init?(coder aDecoder: NSCoder) {\n let user = app.currentUser\n let ownerId = AppConfig.OWNER_ID\n \n var configuration = user?.configuration(partitionValue: ownerId)\n configuration?.objectTypes = [Account.self]\n \n self.realm = try! Realm(configuration: configuration!)\n self.accounts = realm.objects(Account.self)\n\n super.init(coder: aDecoder)\n }\n \n override func viewDidLoad() {\n super.viewDidLoad()\n setUpRealmNotificationHandler()\n }\n \n override func viewWillAppear(_ animated: Bool) {\n self.parent?.title = \"Your Accounts\"\n }\n \n deinit {\n notificationToken!.invalidate()\n }\n \n fileprivate func setUpRealmNotificationHandler() {\n notificationToken = accounts.observe { [weak self] (changes) in\n guard let accountsTableView = self?.accountsTableView else { return }\n switch changes {\n case .initial:\n accountsTableView.reloadData()\n case .update:\n accountsTableView.reloadData()\n case .error(let error):\n // An error occurred while opening the Realm file on the background worker thread\n fatalError(\"\\(error)\")\n }\n }\n }\n }\n\n extension AccountsViewController: UITableViewDelegate, UITableViewDataSource {\n func tableView(_ tableView: UITableView,\n shouldIndentWhileEditingRowAt indexPath: IndexPath) -> Bool {\n return false\n }\n \n func numberOfSections(in tableView: UITableView) -> Int {\n return accounts.count\n }\n \n func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {\n return 1\n }\n \n func tableView(_ tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat {\n return 10\n }\n \n func tableView(_ tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? {\n let headerView = UIView()\n headerView.backgroundColor = UIColor.clear\n return headerView\n }\n \n func tableView(_ tableView: UITableView,\n cellForRowAt indexPath: IndexPath) -> UITableViewCell {\n \n let cell = tableView.dequeueReusableCell(withIdentifier: \"AccountCell\") as! AccountTableViewCell\n \n cell.selectionStyle = .none\n cell.backgroundColor = UIColor.white\n cell.layer.borderColor = UIColor.gray.cgColor\n cell.layer.borderWidth = 0.25\n cell.layer.cornerRadius = 2\n cell.clipsToBounds = true\n \n let account = accounts[indexPath.section]\n cell.populate(with: account)\n return cell\n }\n \n func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {\n let selectedAccount = accounts[indexPath.section]\n let storyboard = UIStoryboard(name: \"Main\", bundle: nil)\n let controller = storyboard.instantiateViewController(withIdentifier: \"transactionsViewController\") as! TransactionsViewController\n controller.account = selectedAccount\n self.navigationController?.pushViewController(controller, animated: true)\n }\n }\n",
"text": "Here’s the full code…",
"username": "Ross_Whitehead"
},
{
"code": "Realm.asyncOpen(configuration: configuration) { result in\n switch result {\n",
"text": "If you’re using MongoDB Realm Sync (looks like you are), the way you’re connecting to Realm is an issue. It’s not done like it used to be - the first time you connect it needs to be async Open RealmAlso see TipTo open a synced realm, call asyncOpen, passing in the user’s Configuration objectthe codeYou’ve probably got this covered but also ensure the Account object contains both an _id primary key var as well as the partition key.",
"username": "Jay"
},
{
"code": "class AccountsViewController: UIViewController {\n @IBOutlet weak var accountsTableView: UITableView!\n \n var accounts: Results<Account>? = nil\n var notificationToken: NotificationToken? = nil\n var app = App(id: AppConfig.REALM_APP_ID)\n \n required init?(coder aDecoder: NSCoder) {\n super.init(coder: aDecoder)\n }\n \n override func viewDidLoad() {\n let user = app.currentUser\n let ownerId = AppConfig.OWNER_ID\n \n var configuration = user?.configuration(partitionValue: ownerId)\n configuration?.objectTypes = [Account.self, Transaction.self, PaymentRequestEvent.self]\n \n Realm.asyncOpen(configuration: configuration!) { result in\n switch result {\n case .failure(let error):\n print(\"Failed to open realm: \\(error.localizedDescription)\")\n fatalError(\"\\(error)\")\n case .success(let realm):\n print(\"Successfully opened realm: \\(realm)\")\n self.accounts = realm.objects(Account.self)\n self.setUpRealmNotificationHandler()\n }\n }\n super.viewDidLoad()\n }\n \n override func viewWillAppear(_ animated: Bool) {\n self.parent?.title = \"Your Accounts\"\n }\n \n deinit {\n notificationToken!.invalidate()\n }\n \n fileprivate func setUpRealmNotificationHandler() {\n notificationToken = accounts!.observe { [weak self] (changes) in\n guard let accountsTableView = self?.accountsTableView else { return }\n switch changes {\n case .initial:\n accountsTableView.reloadData()\n case .update:\n accountsTableView.reloadData()\n case .error(let error):\n // An error occurred while opening the Realm file on the background worker thread\n fatalError(\"\\(error)\")\n }\n }\n }\n}",
"text": "Jay, thanks for the suggestions. I changed my code to use the asyncOpen method as you advised, but no luck. The notification handler is still not getting fired on updates.And yes, I do have _id and a separate partition key in my objects - and the initial load works as expected.Here’s the changed code:",
"username": "Ross_Whitehead"
},
{
"code": "override func viewDidLoad() {\n super.viewDidLoad()\n self.accountsTableView.delegate = self\n self.accountsTableView.dataSource = self\n setUpRealmNotificationHandler()\n}",
"text": "Did you set your viewController to be the tableView Delegate and Datasource?",
"username": "Jay"
},
{
"code": "",
"text": "Hi Jay, yes I set these in the storyboard. And everything binds correctly with the initial sync of objects appearing in the table. After than when I make a server-side change the updates are not coming through. When I have a moment I’m going to get the realm ios tutorial code up-and-running and see if that reveals some answers. But currently snowed under with other work ATM.",
"username": "Ross_Whitehead"
},
{
"code": "Realm(configuration:queue:)Realm.asyncOpen(configuration: configuration!) { result in\n switch result {\n case .failure(let error):\n print(\"Failed to open realm: \\(error.localizedDescription)\")\n fatalError(\"\\(error)\")\n case .success(let realm): <- realm is on a background thread\n print(\"Successfully opened realm: \\(realm)\")\n self.accounts = realm.objects(Account.self)\n self.setUpRealmNotificationHandler()\n }\n }\nRealm.asyncOpen(configuration: configuration!) { result in\n switch result {\n case .failure(let error):\n print(\"Failed to open realm: \\(error.localizedDescription)\")\n fatalError(\"\\(error)\")\n case .success(let realm): <- realm is on a background thread\n self.configureRealm()\n }\n\nfunc configureRealm() {\n let app = App(id: AppConfig.REALM_APP_ID)\n let user = app.currentUser\n let ownerId = AppConfig.OWNER_ID\n let config = user?.configuration(partitionValue: ownerId)\n let realm = try! Realm(configuration: config)\n\n self.accounts = realm.objects(Account.self)\n self.setUpRealmNotificationHandler()\n self.taskTableView.delegate = self\n self.taskTableView.dataSource = self\n DispatchQueue.main.async {\n self.accountsTableView.reloadData()\n }\n}",
"text": "A shot in the dark here but I think the issue may lie with how Realm is being accessed. Note that with Realm.asyncOpen:The Realm passed to the publisher is confined to the callback queue as if Realm(configuration:queue:) was used.So instead of using the realm returned within that calltry this instead",
"username": "Jay"
},
{
"code": "",
"text": "Ok, this was annoying me so I downloaded and configured the ISO Swift Tutorial app. It has the same problem, which is: the change handler is not fired for modifications. However, it is being fired for deletions and insertions.",
"username": "Ross_Whitehead"
},
{
"code": "",
"text": "Hi Jay, sorry for the delay in trying this out. Other work + xmas.\nUnfortunately, it still does not work. I’m going to give up on this for now. My company is going to do a Realm POC (rather than me just playing) in coordination with MongoDB professional services. Once this is happening I’ll get them to advise. And if do find a resolution I’ll report back.\nMany Thanks, Ross",
"username": "Ross_Whitehead"
},
{
"code": "case .update(_, let deletions, let insertions, let modifications):\n// Always apply updates in the following order: deletions, insertions, then modifications.\n // Handling insertions before deletions may result in unexpected behavior.\n",
"text": "The task app is working for me. However, I don’t believe the downloadable git app includes the observer code. Did yours?You may know this but the order in which the handler handles the events is important. I’ve goofed a couple of times and swapped things around and it appears one event or the other was not being called but they were, I just had them in the wrong order.and",
"username": "Jay"
},
{
"code": "",
"text": "@Ross_Whitehead The likely issue you are running into is that you are using Compass, 3T, or the Atlas collection viewer to make a modifications. This actually translates into delete and replace of that document instead of an actual modification which confuses the Realm’s notification system. We are looking to fix this in both places but for now you should be able to trigger a modification notification by using a mongo shell command or similar",
"username": "Ian_Ward"
},
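As a concrete illustration of the workaround Ian mentions, an in-place update from the mongo shell (the field name is an assumption; the _id is the one from the sync log earlier in this thread) should be reported as a modification rather than a delete-and-replace:

```javascript
// Field-level update operators modify the document in place,
// unlike a GUI edit that may replace the whole document.
db.Account.updateOne(
  { _id: ObjectId("5fc4e76983e79e3471f8d3c7") },
  { $set: { name: "Updated account name" } }  // 'name' is an assumed field in the Account schema
)
```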
{
"code": "",
"text": "OK, that’s good to hear. I appreciate that everything is in beta, so a few “undocumented features” will need to be ironed out. Thanks Pavel, Jay and Ian for taking time to help out.",
"username": "Ross_Whitehead"
}
] | Realm initial sync is working but updates are not? | 2020-12-02T09:22:03.079Z | Realm initial sync is working but updates are not? | 4,963 |
null | [
"security"
] | [
{
"code": "mongo --host x.x.x.x --port 27017 -u \"myuser\" --authenticationDatabase \"mydb\" -p \nMongoDB shell version v4.2.2\n\nconnecting to: mongodb://x.x.x.x:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb\n\n2020-02-16T14:05:50.384+0800 E QUERY [js] Error: Authentication failed. :\n\nconnect@src/mongo/shell/mongo.js:341:17\n\n@(connect):2:6\n\n2020-02-16T14:05:50.386+0800 F - [main] exception: connect failed\n\n2020-02-16T14:05:50.386+0800 E - [main] exiting with code 1\n[root@mongo mongo]# ls -lh collection-11-6825888606219797635.wt\n\n-rw-------. 1 mongod mongod 180G 2月 15 18:19 collection-11-6825888606219797635.wt\n",
"text": "I met a problem:local authentication can be successful but remote authentication failed,my mongo version is 4.2.2 ,the mongodb is single shard and it has run more than 3 months without any problem until today I connect from another machine using mongo shell such as :and it output following error:and the maxsize datafile of the collection is 180GB(only one file is 180G):all the other factors is not changed,such as firewall is disabled,bindip is 0.0.0.0 the password is absolutely right,and even plus --authenticationMechanism also consideredthe unique changed factor is the data is increased day by day ,so I guess it is related to the huge file size but I am not sure.please help me many thanks.",
"username": "1111"
},
{
"code": "",
"text": "Hi @1111,Have you checked the remaning size on the disk?I had a similar issue in the past and what has helped me out was clearing out some space and trying it again. Or you can increase the disk size and check.Let me know if it works for you the same way.",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "in your command line options, authenticationDatabase is “mydb”, but the shell output lists authSource as admin. Are you sure you are authenticating against the right database?",
"username": "errythroidd"
}
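A sketch of a connection that authenticates against the same database the user was created in (the host and names are the placeholders from the original post; adjust as needed):

```javascript
// From the command line, keep the authentication database consistent:
//   mongo --host x.x.x.x --port 27017 -u myuser -p --authenticationDatabase mydb
// or, with a connection string (note authSource):
//   mongo "mongodb://myuser@x.x.x.x:27017/mydb?authSource=mydb"
// From inside an already-open shell, the equivalent is:
db.getSiblingDB("mydb").auth("myuser", passwordPrompt())
```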
] | Local authentication can be successful but remote authentication failed | 2020-02-17T04:02:44.408Z | Local authentication can be successful but remote authentication failed | 7,064 |
null | [] | [
{
"code": "{\n \"id\":\"mongodb\",\n \"website\":\"www.mongodb.com\",\n \"history\":[\n {\"price\":350, \"timestamp\":\"2020-10-21T13:28:06.419Z\"}, \n {\"price\":320,\"timestamp\":\"2020-10-21T13:28:06.419Z\"}, \n {\"price\":310,\"timestamp\":\"2020-10-21T13:28:06.419Z\"}\n ]\n}\nvar bulk = db.items.initializeUnorderedBulkOp();\nbulk.find( { id: array[objId].id } ).upsert().update(\n {\n $setOnInsert: { Name: array[objId].name, website: array[objId].website .history: [{price: array[objid].price, timestamp: ISO}] },\n $set: { history: [{price: array[objId].price, timestamp: ISO}] }\n }\n); \n\nbulk.execute();",
"text": "I’m working with MongoDB to create a stock ticker app, and I’m trying to figure out how to structure the db + queries. Right now, I have a collection of object that look like this:I have a function that calls an API in bulk and returns ~5k stocks with their id. I want to upsert this array as follows: (1) based on the id, update history.price by pushing a new object with price and timestamp (2) if the id doesn’t exist, create a new document with the rest of the API data (id, website) and write that stocks price to a history array as well as the first entry.Right now, I’m thinking about bulk.find.upsert, but I can’t figure out how to pass the whole array to this so that it can upsert based off the ID.Code so far (very basic):",
"username": "lmb"
},
{
"code": "var bulk = db.items.initializeUnorderedBulkOp();\n\nfor (let doc in array) {\n // The bulk find and update query for each matching document in the collection\n}\n\nbulk.execute();\nexecute$set: { history: [{price: array[objId].price, timestamp: ISO}] }history",
"text": "Hello @lmb, welcome to the MongoDB Community forum.Right now, I’m thinking about bulk.find.upsert, but I can’t figure out how to pass the whole array to this so that it can upsert based off the ID.That is done by looping thru the array using a for-loop (assuming JavaScript here), for example:The execute sends the updates to the server all at once.Also, in the code $set: { history: [{price: array[objId].price, timestamp: ISO}] }, you are trying to add (or push) to the history array - so, use the $push array update operator.",
"username": "Prasad_Saya"
},
{
"code": "var bulk = db.items.initializeUnorderedBulkOp();\nfor ( ... ){\nbulk.find( { id: array[objId].id } ).upsert().update(\n {\n $setOnInsert: { Name: array[objId].name, website: array[objId].website .history: [{price: array[objid].price, timestamp: ISO}] },\n $push : { history: {price: array[objId].price, timestamp: ISO}}\n }\n); \n}\n\nbulk.execute();\n",
"text": "Hi @lmb,Welcome to MongoDB community!I think you are pretty close.You should use a $push operation to add the price on update.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Bulk Upsert for historical data | 2021-01-05T19:16:57.652Z | Bulk Upsert for historical data | 2,342 |
[
"app-services-user-auth"
] | [
{
"code": "",
"text": "Wondering why email addresses would be case sensitive as pointed out in the top “Note”. Considering emails addresses are not normally case sensitive, why would MongoDB have set this up this way?",
"username": "Anthony_CJ"
},
{
"code": "",
"text": "Hi @Anthony_CJ,I believe this comes from the implementation of this data being stored in a MongoDB store where by default values you search on are case sensitive.What I would recommend is when a user registraters turn the email to lower case via the app and so on every login. This way any case during login will work (aka case insensitive).Thanks\nPavel",
"username": "Pavel_Duchovny"
},
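A tiny sketch of the normalization Pavel suggests; the registration and login calls themselves are omitted since they depend on the SDK in use:

```javascript
// Normalize once and pass the same value to both registration and login,
// so lookups behave as if they were case insensitive.
function normalizeEmail(email) {
  return email.trim().toLowerCase();
}

const email = normalizeEmail("  [email protected]  ");  // -> "[email protected]"
```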
{
"code": "",
"text": "Absolutely. Have done that. Just thought it was odd as it allows for errors and duplicated email addresses (where case is ignored). Thanks for the response.",
"username": "Anthony_CJ"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why are email addresses for user authentication case sensitive? | 2021-01-06T05:40:53.775Z | Why are email addresses for user authentication case sensitive? | 3,942 |
null | [
"production",
"php"
] | [
{
"code": "",
"text": "The long awaited arrival of PHP8 is here! On November 26, 2020, PHP developers will receive a new major version, with exciting new features like JIT compiler, union types, and attributes. The last major version, 7.0, was released almost exactly five years ago, in December of 2015.MongoDB is happy to announce that we have built out support for PHP8 in our latest version of the PHP driver. Worthy mentions for driver version 1.8.0:No big release would be complete without a little drama, but we were still surprised to hear Microsoft was killing off support for PHP builds on Windows. PHP is the dominant programming language of the internet itself, there are still plenty of students learning PHP in the world. Per the SO developer survey of 2020, 46.77% of PHP developers use Windows as their operating system, with only 28.6% on Linux. The PHP release manager, Sara Golemon, assured the community there was no reason to fret on this thread on github, and another community member has already taken over this critical task.Thanks again to the PHP community for using MongoDB, and let us know if you have any questions.Rachelle",
"username": "Rachelle"
},
{
"code": "",
"text": "So (of course) @Rachelle the new driver also still supports PHP 7?",
"username": "Jack_Woehr"
},
{
"code": "mongodbext 1.9 + lib 1.8",
"text": "Hi @Jack_Woehr,Yes, the MongoDB PHP Driver v1.8 supports PHP 7 and 8. The driver has two components: the high-level end user API provided by the MongoDB PHP library (v1.8.0) and the low-level mongodb PHP extension (v1.9.0) that this library builds on.The documentation includes tables for Language Compatibility and MongoDB Compatibility if you want to confirm supported combinations of PHP driver, PHP language, and MongoDB server versions.The latest PHP 1.8 driver is listed with both component versions: ext 1.9 + lib 1.8.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Installed and working, thanks.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "",
"username": "system"
}
] | PHP8 Support in the MongoDB PHP driver | 2021-01-05T19:46:13.796Z | PHP8 Support in the MongoDB PHP driver | 10,566 |
null | [
"monitoring",
"ops-manager"
] | [
{
"code": "",
"text": "Hi Team,\nCan we generate health script from Ops manager ,trigger it to run daily and send output to mailThanks in advance",
"username": "venkata_reddy"
},
{
"code": "",
"text": "Hi @venkata_reddy,What information are you looking for in an email health update?Ops Manager has configurable alerts to notify you of changes of interest or concern, or you can login to the web UI to view the current status of a deployment.Ops Manager also supports Third-Party Service Integrations including Slack, PagerDuty, New Relic, Datadog, and a few others.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "mongo",
"text": "Hi @venkata_reddy,I think this has been addressed by way of your related questions including Java script to use in windows power shell to connect to MongoDB and check status daily, How to avoid the output part starting from \"Ok:1\", and How to get hostInfo from all nodes in a replica set from Primary.It looks like you are capturing information via the mongo shell rather than using MongoDB Ops Manager.Regards,\nStennie",
"username": "Stennie_X"
}
] | Health script from Ops manager | 2020-12-23T09:38:58.283Z | Health script from Ops manager | 2,812 |
[] | [
{
"code": "trackertaskstracker.tasks",
"text": "Attempting to go through the getting started guide to setup Atlas for MongoDB Realm.We are on step 1) Set Up the MongoDB Collectionsand on this stepFor Database Name, enter tracker and for Collection Name, enter tasks . We’ll define our own permissions in a bit, so don’t select any permissions template. Click Add Collection to finish setting up the tracker.tasks collection.Upon clicking Add Collection, all is does is show a vague errorFailed to add rule: error processing requestAs you can see from the screenshot, our data matches the guide exactly. What now?Failed to add rule2214×1254 231 KB",
"username": "Jay"
},
{
"code": "",
"text": "@Jay That’s an odd error - can you open a ticket with support or use intercom to flag this? We need your appId to investigate on the backend so however you want to pass that along would be great.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Well, unfortunately no to the support ticketNo Support for you992×458 58.5 KBbut am in the queue for in-app chat. Not really sure where to find the app id - I am looking at the page that has the apps listed (there’s also a Create a New App button) but I don’t see an app id.If I select this app, the next page doesn’t seem to show the app id either. I’ll ask in chat.",
"username": "Jay"
},
{
"code": "",
"text": "@Jay So we were able to reproduce your issue and we have a fix but the workaround should be to just wait a few minutes after linking your Atlas cluster and Realm app - then you should be able to create rules. This is a network issue that takes a few minutes to resolve.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_WardI am still in chat. I don’t want to be attacking this from multiple points so let me know how to proceed.We successfully linked the Atlas Cluser to Realm earlier today - that wasn’t the issue. It was when we went to add collections where the error occurred.Should we unlink the cluster and re-link or as the chat agent said to refresh the web page. Or both?I refreshed the page and got the same error",
"username": "Jay"
},
{
"code": "",
"text": "@Jay Yes - I am also speaking with the chat agent Please unlink the Atlas cluster, wait a few minutes, and then relink it - that should get the network back up and connected",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_WardI unlinked the Atlas cluster and it’s been close to an hour. I re-linked it and the error has changed to what’s shown below. I thought an hour would be ‘a few minutes’. Perhaps not - it is more like 24 hours or longer?Failed to add rule 22172×844 93.9 KB",
"username": "Jay"
},
{
"code": "",
"text": "@Jay Can you email me your dashboard URL please? [email protected]",
"username": "Ian_Ward"
},
{
"code": "",
"text": "For those following along. The issue has been identified and a fix is in the works.",
"username": "Jay"
},
{
"code": "",
"text": "2 posts were split to a new topic: Unable to create rules for my existing Atlas collections",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Issue setting up Atlas for MongoDB Realm | 2020-06-10T16:47:07.753Z | Issue setting up Atlas for MongoDB Realm | 2,692 |
|
null | [
"performance",
"capacity-planning"
] | [
{
"code": "",
"text": "I read at https://docs.mongodb.com/realm/sync#use-case-profiles-and-scalability that a realm can handle 30 concurrent writers, but performance drops beyond that because of conflict resolution overhead.I can see why there would be conflicts if users write to the same object, but assuming they all write to a separate object in the same realm, would the overhead still occur?\nAt the realm level, it is a “Multiple writer, multiple reader” case, however at the object level, it is more of a “Single writer, multiple reader” case. Which one is more relevant?",
"username": "Jean-Louis_Dinh"
},
{
"code": "",
"text": "I have close to the same question and while I don’t see a response, I’m wondering if anyone knows the answer?My situation is that I’m creating a scavenger hunt app.When someone creates a hunt, that will be it’s own realm.The hunt has a list of tasks for each person to do.Then each task has a list of posts that each person creates when they do the task.Does the 30 concurrent writers mean that only 30 people can access the hunt realm at a time? Or just that submitting the post can only be 30 per second? Could, say 1000 people be doing the hunt and as long as they aren’t all writing a new post at the same time, it won’t be an issue? Or is it that it would never be an issue because each person is writing a post, but since no one is writing or updating the same post that it would only always be 1 concurrent writer?Any explanation would be appreciated before getting too far into this and realizing that Realm can’t handle it.Thanks.–Kurt",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Guys, all of these are great questions. My understanding is that the scalability can only really be managed by properly architecting the realm partitions. For example, if you were writing a chat program, you would not want to put all of the chats between all of the users in the same realm. You would basically want to segment each chat thread into a separate realm. Ok, but then how would you implement a telegram chat thread with a 100K users? Again, you would have to become clever with the architecture. In that case, each user would write chat entries to their private realm (partition key = user id), and a back end trigger would add it to the chat thread realm, which would only be read-only by the various users of the thread. By the way, you can substitute “chat” thread for “scavenger hunt teams”, and the arguments are identical.I had this exact conversation with Robert Oberhofer from MongoDB about a year ago at the Live event. He is the head of Product Solutions at Realm. According to Robert, you can have as many Realms (now partitions) as possible. That is the key to scalability. What you simply want to avoid is more than 30 users writing simultaneously to the same Realm or partition.I hope this was useful.Richard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "Perfect. That’s very helpful.Realm functions really save the day on this. Being able to design the functions well is another level of separating the business logic that I haven’t done in mobile app development, but it also prepares apps to spread to other platforms much more readily since the business logic doesn’t need to be repeated. ",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "I will second what @Kurt_Libby1 wrote by adding this. Server side functions and triggers are perhaps the single best feature of the new MongoDB Realm product since June 2020. This was something that was really missing in the old Realm Sync product - where you would have to spin up a server and implement the backend processing as a Node.js program. And whenever you force a developer to spin up a server, it takes away from the whole server-less programming story.To the engineering and product management teams at MongoDB, it would be great for a future release of the product if you could architect a mechanism (similar to Cocoa Pods) for installing third party functions and triggers onto a MongoDB Realm Application. This can be done in a rather convoluted way using the Realm Admin API, but requires the developer to assemble a number of keys for that to take place. And it would even be better if you could architect a payment system (similar to the Apple App Store) to go alongside of it. A man can dream…Richard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "In case it is helpful to anyone else in modeling, this is what I’m thinking.Would appreciate any feedback if you see any problems with scaling, but this should make it so that everyone is always writing to their own realm and the only reason there would be more than 30 is if someone logged into their own account on 30+ devices, which is pretty far outside of the scope of this application.Screen Shot 2021-01-05 at 8.22.58 AM1526×1466 399 KB",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "So the Hunt Realm should only be readable by the users who are part of the “hunt”, not all users in the system. Basically, any updates would happen in the user realm and those updates would be marshaled on to the hunt realm by a backend trigger function.I guess you would only need to follow this strategy if a particular hunt had more than 30 people concurrently. You may have thousands of users but never really more than 30 in any particular hunt at a time. If there were the case you probably wound not need the user realm/hunt realm marshaling. But if that is not the case, you probably will.Richard",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "Yeah, the thought is to gate the amount of users in a hunt where they could do an IAP to increase the number and that could become much more than 30 as in the 1000s. That’s where the issue comes in.Technically speaking, I think the hunt realm could be readable by everyone because it will only show if your user object has created or joined the hunt by adding the hunt partition id to user.huntsCreated or user.huntsJoined in the user profile object.The way I’m designing and thinking about it right now, there would be a code (hunt.code) that is input to join a hunt. In the future, I may want to create a listing of available hunts that you have not joined. If that’s the case, it would need to be readable by anyone even if you haven’t joined yet.I don’t think there is a security concern because it is read only and not necessarily sensitive at all, but if you see any holes, I’d love to adjust this sooner rather than later.",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Kurt,The most secure scenario is where you give your user read/write access only to their private user realm. Let the server handle all the other writes to the hunt realms.You will probably have to use Custom User Data to handling the sync permissions for each hunt realm.Richard",
"username": "Richard_Krueger"
}
] | Is scaling limited by concurrent writers per realm or per object? | 2020-12-08T03:24:04.595Z | Is scaling limited by concurrent writers per realm or per object? | 3,807 |
null | [
"mongodb-shell",
"monitoring"
] | [
{
"code": "",
"text": "Hi Team,We know that db.hostInfo() gives System and OS details of the node we connected to.\nAlso rs.status or rs.conf() or rs.isMaster() do not provide System or OS info.If we want to get System and OS details of all the nodes from Primary(without connecting to other nodes), Is it possible ?Could you please suggest me on how to get System and OS details of all the nodes sitting from Primary.Thanks in Advance",
"username": "venkata_reddy"
},
{
"code": "db.hostinfo()hostInfomongors.status().members.forEach(\n function(member) {\n print(`[${member._id}]: ${member.name}`);\n if (member.self) {\n printjson(db.adminCommand('hostInfo').os);\n } else { \n try {\n var mdb = new Mongo(member.name);\n printjson(mdb.adminCommand('hostInfo').os);\n } catch (error) {\n print(\"Failed: \" + printjson(error));\n }\n }\n print(\"\");\n }\n)\ndb.auth()rs.status().membersrs.conf().membersrs.status()selfhealthstate",
"text": "Hi @venkata_reddy,The db.hostinfo() shell helper calls the hostInfo administrative command, which must be executed against each replica set member.However, you can iterate the replica set config and connect to each of the members. Here’s a quick example using the mongo shell:Note: the above snippet can be run via any member of the replica set. It doesn’t include authentication (you will need to call db.auth() with appropriate credentials), but should be a useful starting point. I used rs.status().members instead of rs.conf().members, because rs.status() includes the self field (no need to open a new connection to the current member) and some additional fields like health and state that might be interesting for logging/monitoring purposes.For more robust error handling I recommend implementing this using one of the supported MongoDB drivers.An alternative to rolling your own monitoring solution would be to use MongoDB Cloud Manager (hosted management platform) or MongoDB Ops Manager (on-premises management platform).In one of your earlier questions (Health script from Ops manager) it sounded like you were already using Ops Manager.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "[${member._id}]: ${member.host}[${member._id}]: ${member.name}",
"text": "Hi @Stennie_X,\nGreat information and thanks for the update.\nI got what I have been looking for.\nI modified the following.\ni)print([${member._id}]: ${member.host}) as print([${member._id}]: ${member.name});\nii)new Mongo(member.host) as new Mongo(member.name).getDB(“admin”);\nand used mdb.auth to get authenticated.Thank you for the kind help",
"username": "venkata_reddy"
},
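A minimal sketch of the authenticated variant described in the reply above; the credentials are placeholders and the exact auth flow is an assumption for illustration, not the poster's exact script:

```javascript
// Sketch: iterate replica set members and fetch hostInfo with authentication.
// "clusterAdminUser" / "changeMe" are placeholder credentials, not values from this thread.
rs.status().members.forEach(function (member) {
  print(`[${member._id}]: ${member.name}`);
  try {
    // Connect to the member and authenticate against its admin database
    var mdb = new Mongo(member.name).getDB("admin");
    mdb.auth("clusterAdminUser", "changeMe");
    printjson(mdb.adminCommand("hostInfo").os);
  } catch (error) {
    print("Failed on " + member.name + ": " + error);
  }
  print("");
});
```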
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to get hostInfo from all nodes in a replica set from Primary | 2021-01-04T21:08:27.029Z | How to get hostInfo from all nodes in a replica set from Primary | 4,616 |
null | [] | [
{
"code": "",
"text": "I have had a queation thats been bothering me for awhile, please note i am a beginner learning how to work with mongodb. I have been learning how to use mongodb atlas and i use it alot, however can i instead use mongodb comunity edition on my server instead of atlas?",
"username": "Gerald_Mbuthia"
},
{
"code": "",
"text": "Welcome to the MongoDB community forums @Gerald_Mbuthia!You can learn to develop with MongoDB using Atlas, a deployment of MongoDB on your own server, or both. A key difference is that Atlas takes care of common administrative and operational challenges (installing, configuring, securing, monitoring, scaling, …) so you can focus on getting started with development against a deployment set up with best practices. Atlas also allows you to manage your deployment(s) and configuration via UI or API; installation in your own server environment will generally involve working with command line tools.If you want to install MongoDB into your own server environment, see:If you want to learn more about the operational side of MongoDB, I highly recommend taking some of the free online courses at MongoDB University. For example, M103: Basic Cluster Administration will give you insight and practice setting up local deployments from scratch including standalone, replica sets, and sharded clusters. The M103 course is part of a DBA Learning Path which includes other important topics like Performance, Security, and Diagnostics & Debugging.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using mongodb community in a server | 2021-01-05T19:16:20.140Z | Using mongodb community in a server | 1,903 |
null | [
"app-services-data-access"
] | [
{
"code": "%%user{\n \"share_secret\": \"<value provided in the request>\"\n}\nshare_secretcollection.findOne({});\ncollection.findOne({ share_secret: \"abcdefg\" });\n",
"text": "Hi, MongoDB community! I would like to implement a feature in my Realm app that allows users to share a document using a public link containing a secret string (similar to apps like Dropbox). The user that receives the link shouldn’t need to register, so they are authenticating anonymously.How can I set up query roles for this use case? The examples in the documentation always use the %%user variable do detemine if a user is allowed to access a document. Essentially, I would like to set the role based on some additional value provided by the user (the secret string), not based on the user’s identity or some value stored in the user document. An “Apply When” expression for this role could look like this:Is there a way to pass additional values like this? Can I add restrictions on what the user can query so that share_secret must always be included? For example, I would like to prevent users from querying any document because they could get a document that they are not authorized to read:But I want to allow them to do this:",
"username": "N_A_N_A"
},
{
"code": "{\n \"%%user.custom_data.secretPartitions\": \"%%partition\"\n}\n",
"text": "Let me make a stab at trying to answer this fairly obscure request. First, at this point in time with MongoDB Realm, you would probably have to use Sync Permissions to control access to the “secret” document. In laymen’s terms, this means make your share_secret the partition key value of the document in question.Your sync rules would need something like thisYou would then define a Custom User Data with a secretPartitions array.When a user got the secret on a client device, it would have to call a function on the MongoDB Realm Application server that would add the secret to the secretParitions array in the Custom User Data.The only question I have is that I don’t know how this would work for anonymous users, but in theory it should.For more insight, please consult the article I wrote last month concerning Realm Sync Permissions.MongoDB Realm Sync Permissions ExplainedI hope this was useful, and good luck.Richard Krueger",
"username": "Richard_Krueger"
},
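A rough sketch of the kind of server-side function Richard describes, which records a share secret in the caller's custom user data so the sync rule above can match it. The database, collection, and field names here are assumptions for illustration and would need to match the app's custom user data configuration:

```javascript
// MongoDB Realm function: add a share secret to the calling user's custom user data.
// Assumed configuration: custom user data stored in db "app", collection "custom_user_data",
// keyed by "userId", with a "secretPartitions" array of readable partition values.
exports = async function (shareSecret) {
  const customUserData = context.services
    .get("mongodb-atlas")
    .db("app")
    .collection("custom_user_data");

  // $addToSet keeps the call idempotent if the same secret is submitted twice
  return customUserData.updateOne(
    { userId: context.user.id },
    { $addToSet: { secretPartitions: shareSecret } },
    { upsert: true }
  );
};
```

After the function returns, the client could call user.refreshCustomData() (mentioned later in this thread) before opening the synced realm, so the new permission is picked up without waiting for the periodic refresh.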
{
"code": "",
"text": "Hi Richard, thank you very much! I think that a variation of your idea could work in this case. Instead of storing the secret in the user data, the function would store the user ID in the shared document because the user data is not always up to date (according to the docs, the data is refreshed at least every 30 minutes). That way users can always access the document directly after visiting the invitation link.The only small downside I can think of is that many (expired) anonymous user IDs can accumulate in the shared document over time.",
"username": "N_A_N_A"
},
{
"code": "",
"text": "I am glad that I was of some use. By the way, the client can force a refresh of the customer user data with a call to user.refreshCustomData() - to get over the 30 minute limit.Yes, all of these anonymous users will be like plastic bottles accumulating in the Pacific. You could always run a timer trigger to clean them up on the background.Richard",
"username": "Richard_Krueger"
},
{
"code": "refreshCustomData",
"text": "Thanks! I didn’t know about refreshCustomData. ",
"username": "N_A_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to share documents using a secret link? | 2020-12-30T19:25:24.591Z | How to share documents using a secret link? | 4,774 |
null | [
"aggregation",
"ruby"
] | [
{
"code": "db.first_collection.aggregate( [ { $merge: { into: \"second_collection\" } } ] )\n$db_handle[:first_collection].aggregate[ { \"$merge\" => { :into => \"second_collection\" } } ]\n",
"text": "Trying to append the contents of one collection to another one. On the mongo command line, this is a simpleHowever, using the ruby driver, it seems that this command is not “committed” to the database, and the append actually never happens. The command I use in Ruby:However, nothing seems to happen at the database level… How do I “commit” this to the database?Thank you,\njan.",
"username": "Jan_Aerts"
},
{
"code": "$db_handle[:first_collection].aggregate[ { \"$merge\" => { :into => \"second_collection\" } } ]Mongo::Collection::View::Aggregation$mergeto_a$db_handle[:first_collection].aggregate[ { \"$merge\" => { :into => \"second_collection\" } } ].to_a\n$outrequire 'bundler/inline'\ngemfile do\n source 'https://rubygems.org'\n\n gem 'mongo'\n gem 'test-unit'\nend\n\nclass TestAggregationOut < Test::Unit::TestCase\n def setup\n Mongo::Logger.logger.level = Logger::INFO\n @client = Mongo::Client.new([ '127.0.0.1:27017' ], database: 'test')\n @client[:foo].drop\n @client[:foo].insert_one( { driver: \"ruby\" } )\n @client[:bar].drop\n end\n\n def test_out_without_to_a\n result = @client[:foo].aggregate([ { :$out => \"bar\" } ])\n assert_equal @client[:foo].count, @client[:bar].count, \"Count should match\"\n end\n \n def test_out_with_to_a\n result = @client[:foo].aggregate([ { :$out => \"bar\" } ]).to_a\n assert_equal @client[:foo].count, @client[:bar].count, \"Count should match\"\n end\n\n def test_merge_without_to_a\n result = @client[:foo].aggregate([ { :$merge => { into: \"bar\" } } ])\n assert_equal @client[:foo].count, @client[:bar].count, \"Count should match\"\n end\n \n def test_merge_with_to_a\n result = @client[:foo].aggregate([ { :$merge => { into: \"bar\" } } ]).to_a\n assert_equal @client[:foo].count, @client[:bar].count, \"Count should match\"\n end\n\n def teardown\n @client = nil\n end\nend\n",
"text": "Hi @Jan_Aerts,The issue you’re having is that by calling $db_handle[:first_collection].aggregate[ { \"$merge\" => { :into => \"second_collection\" } } ] the driver is only returning an instance of Mongo::Collection::View::Aggregation, not the results of the aggregation command.Once you interact with this view instance the command will be executed and the $merge will produce results in the target collection.The easiest way to do this is to call to_a on the view instance:Note that the same behavior would be seen with $out stages as well.The following unit test can be used to demonstrate this:",
"username": "alexbevi"
}
] | Aggregation: merge does not commit to database | 2020-12-04T09:25:19.468Z | Aggregation: merge does not commit to database | 3,963 |
null | [
"kafka-connector"
] | [
{
"code": "ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:130)\njava.lang.NoClassDefFoundError: io/confluent/kafka/schemaregistry/client/SchemaRegistryClient\n at java.lang.Class.forName0(Native Method)\n at java.lang.Class.forName(Class.java:348)\n at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:719)\n at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:474)\n at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:467)\n at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)\n at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)\n at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:385)\n at org.apache.kafka.connect.runtime.standalone.StandaloneConfig.<init>(StandaloneConfig.java:42)\n at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:81)\nCaused by: java.lang.ClassNotFoundException: io.confluent.kafka.schemaregistry.client.SchemaRegistryClient\n at java.net.URLClassLoader.findClass(URLClassLoader.java:382)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:419)\n at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:352)\n ... 10 more\n",
"text": "",
"username": "saravana_hariharan"
},
{
"code": "",
"text": "Hi @saravana_hariharan,Could you provide more information? The expected schema registry class does not appear to be available. What version of Kafka / Kafka connect are you running?Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "apache kafka 2.12-2.4.1\nconnector - mongodb connector 1.3.0",
"username": "saravana_hariharan"
}
] | Error running the mongodb connector in apache kafka (NoClassDefFoundError:io/confluent/kafka/schemategistry) | 2020-12-29T06:10:48.397Z | Error running the mongodb connector in apache kafka (NoClassDefFoundError:io/confluent/kafka/schemategistry) | 3,962 |
null | [
"replication",
"performance",
"atlas"
] | [
{
"code": "",
"text": "I am a current Cassandra user evaluating MongoDB Atlas. I’m curious about others’ experiences with MongoDB or MongoDB Atlas.Has it been able to meet your write throughput needs? My applications regularly exceed 150,000 write operations per second. Has anyone had any good or bad experiences with Mongo or Atlas at that level?I feel like basic CRUD operations are going to be great on Mongo (as long as I can meet throughput needs) but I’m wondering if I should rely on Mongo’s more advanced functionality such as aggregations and filters or if I should plan to let my app handle that. Any thoughts?One of Cassandra’s banner features is Multi-DC replication. I’ve read mixed reviews about Atlas’ cross-region replication so this is a particular area of concern for me. Any experiences with this?Last, does anyone have any experiences with Atlas’ VPC peering? Does it work well and is it cost-effective?Thanks!",
"username": "Kiyu_Gabriel"
},
{
"code": "",
"text": "Hi, did you find any answers for this? Even I am in similar doubts now… would like to how mongodb perform for larger read and write as compare to Cassandra",
"username": "Great_Info"
},
{
"code": "",
"text": "Hi there\nThere is no doubt Cassandra offers good scalability for key-value workloads. MongoDB is also highly capable for the most performance and availability-demanding applications, demonstrated by a selection of examples shown on this page: MongoDB At Scale | MongoDBCassandra tends to be well suited for workloads that need to insert data quickly, but where that data is rarely updated, and is accessed only by its primary key or by a limited set of secondary indexes. For queries any more complex than simple point look-ups or range scans, the data will generally need to be replicated from Cassandra to dedicated analytics and search nodes.MongoDB offers a number of capabilities that enable organizations to ship more functional applications faster with lower cost and complexity. These capabilities include its intuitive and flexible document data model, powerful query engine and aggregation pipeline, secondary indexes, transactional ACID guarantees and strong consistency.With native multi-region, multi-cloud sharding and replication, MongoDB Atlas can be securely scaled out to support global applications. This is further enhanced with MongoDB Cloud (MongoDB Cloud | MongoDB) offering MongoDB Atlas, Search, and Data Lake to serve different workloads through a common API, while Realm Database extends the data foundation to the edge.I hope this information is useful, and I’d be keen to hear how you progress in your evaluation of MongoDB. Please don’t hesitate to contact us at MongoDB if you have any questions, or can help in any way.Regards\nMat Keep",
"username": "Mat_Keep"
}
] | MongoDB Atlas Experiences | 2020-05-08T23:05:36.738Z | MongoDB Atlas Experiences | 1,913 |
null | [] | [
{
"code": "myReplSet:PRIMARY> db.dropUser(\"sysadmin3\")\nfalse\nmyReplSet:PRIMARY> \n\t\"roles\" : [\n\t\t{\n\t\t\t\"role\" : \"root\",\n\t\t\t\"db\" : \"admin\"\n\t\t}\n\t]\nmyReplSet:PRIMARY> db.system.users.find().pretty()",
"text": "For some reason that I ignore, I cannot get rid of a user in mongodb.\nThis is how it goes:and I am logged in with a user, whith this kind of role:And if I run this, I can see the user sysadmin3 is present.\nmyReplSet:PRIMARY> db.system.users.find().pretty()Has somebody already seen this?",
"username": "Michel_Bouchet"
},
{
"code": "",
"text": "Are you connected to the correct DB?\nIf user does not exist or db does not exist it would give false\nAlso check db.getUsers() & db.getUser(“user”)",
"username": "Ramachandra_Tummala"
},
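A quick sketch of those checks in the shell, using the user name from the thread; the key point is that dropUser only works against the database where the user was defined (assumed to be admin here):

```javascript
// Switch to the database the user was created on (assumption: admin)
db = db.getSiblingDB("admin");
db.getUsers();              // list the users defined on this database
db.getUser("sysadmin3");    // returns null if the user does not exist on this database
db.dropUser("sysadmin3");   // returns true only when the user existed and was removed
```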
{
"code": "",
"text": "As I tried again in order to follow up on your reply.\nAll suddenly started to work as expected and I can use db.dropUser.I am not aware that I did things differently; but who knows.\nAnd the computer has been rebooted in the meanwhile.Anyway, thanks for bringing me luck.",
"username": "Michel_Bouchet"
}
] | db.dropUser not working | 2021-01-02T07:54:14.636Z | db.dropUser not working | 2,780 |
null | [
"installation"
] | [
{
"code": "root@ubuntu:/home/ubuntu# systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: inactive (dead) since Thu 2020-12-17 03:43:33 UTC; 15min ago\n Docs: https://docs.mongodb.org/manual\n Process: 1816 ExecStart=/usr/bin/mongod --config /home/ubuntu/mongo-1.cfg (code=exited, status=0/SUCCESS)\n Main PID: 1816 (code=exited, status=0/SUCCESS)\n\nDec 17 03:43:10 ubuntu systemd[1]: Started MongoDB Database Server.\nDec 17 03:43:18 ubuntu mongod[1816]: about to fork child process, waiting until server is ready for connections.\nDec 17 03:43:18 ubuntu mongod[1982]: forked process: 1982\nDec 17 03:43:24 ubuntu mongod[1816]: child process started successfully, parent exiting\nDec 17 03:43:33 ubuntu systemd[1]: mongod.service: Succeeded.\nroot@ubuntu:/home/ubuntu# \nubuntu@ubuntu:~$ cat mongo-1.cfg \nstorage:\n dbPath: /mnt/mongoDB-One/DB_Data_1st\n journal:\n enabled: true\nnet:\n bindIp: localhost,192.168.1.2\n port: 22330\nsystemLog:\n destination: file\n path: /mnt/mongoDB-One/DB_Data_1st/mongod.log\n logAppend: true\nprocessManagement:\n fork: true\nreplication:\n replSetName: mngoRepSet\nubuntu@ubuntu:~$ \nroot@ubuntu:/home/ubuntu# tail -20 /mnt/mongoDB-One/DB_Data_1st/mongod.log\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.146+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.160+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.160+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.160+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784930, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.160+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.161+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22372, \"ctx\":\"OplogVisibilityThread\",\"msg\":\"Oplog visibility thread shutting down.\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.161+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.162+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.163+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.164+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.164+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.164+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.164+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, 
\"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.164+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:25.164+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2020-12-17T03:43:33.802+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":8638}}\n{\"t\":{\"$date\":\"2020-12-17T03:43:33.802+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:33.803+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:33.803+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2020-12-17T03:43:33.803+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\nroot@ubuntu:/home/ubuntu# \nroot@ubuntu:/home/ubuntu# cat /lib/systemd/system/mongod.service\n[Unit]\nDescription=MongoDB Database Server\nDocumentation=https://docs.mongodb.org/manual\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nUser=mongodb\nGroup=mongodb\nEnvironmentFile=-/etc/default/mongod\n#(Commented out by me)ExecStart=/usr/bin/mongod --config /etc/mongod.conf\nExecStart=/usr/bin/mongod --config /home/ubuntu/mongo-1.cfg\nPIDFile=/var/run/mongodb/mongod.pid\n# file size\nLimitFSIZE=infinity\n# cpu time\nLimitCPU=infinity\n# virtual memory size\nLimitAS=infinity\n# open files\nLimitNOFILE=64000\n# processes/threads\nLimitNPROC=64000\n# locked memory\nLimitMEMLOCK=infinity\n# total threads (user+kernel)\nTasksMax=infinity\nTasksAccounting=false\n\n# Recommended limits for mongod as specified in\n# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings\n\n[Install]\nWantedBy=multi-user.target\nroot@ubuntu:/home/ubuntu#",
"text": "I am trying to launch mongod with my own configuration from systemd, but it does not work. And I do not understand why. Any help by some more experienced person would be very welcome.This is what I can say to start with:\n1) When started outside of systemd, using the mongod command, the configuration works with no issue.\n2) When I use systemd with the default configuration instead of mine, it also works with no issue.This is the report provided by “systemctl status”, when using my own configuration:Here is the content of the config file:Here is the end of the server log file:This is the content of the mongod.service file:",
"username": "Michel_Bouchet"
},
{
"code": "mongodb:mongodbmongodb:mongodbforkmongo-1.cfgfalse",
"text": "Hello @Michel_BouchetRegrades,\nMichael",
"username": "michael_hoeller"
},
{
"code": "ubuntu@ubuntu:~$ ls -l /mnt/mongoDB-One/DB_Data_1st/mongod.log\n-rw------- 1 mongodb mongodb 402819 Dec 17 07:20 /mnt/mongoDB-One/DB_Data_1st/mongod.log\nubuntu@ubuntu:~$ \n",
"text": "When I launched manually mongod, it was done as root.Running it as mongodb is (also) one of the reasons I am trying to launch from systemd.As far as the permissions are concerned I have run this command:sudo chown -R mongodb:mongodb /mnt/mongoDB-One/DB_Data_1stand the access rights are unchanged. For example:As you can see, that is 600.I presume there is no reason here to have permissions problems. Let me know if I am wrong.I looked at /var/log/mongodb/mongod.log and didn’t see anything suspicious.There is no /var/log/system.Beside, I don’t know how to launch mongod as mongodb outside of systemd.\nBecause I can’t just log in as mongodb.",
"username": "Michel_Bouchet"
},
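On the point about starting mongod as the mongodb user without logging in as it: a hedged sketch of one common approach, running a single command as another account with sudo -u (the config path is the one used earlier in this thread):

```sh
# One-off test: start mongod as the mongodb user, with the same custom config
sudo -u mongodb /usr/bin/mongod --config /home/ubuntu/mongo-1.cfg
```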
{
"code": "processManagement:\n fork: true\nsystemctl status mongodroot@ubuntu:/home/ubuntu# systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Thu 2020-12-17 07:48:24 UTC; 1min 12s ago\n Docs: https://docs.mongodb.org/manual\n Process: 1817 ExecStart=/usr/bin/mongod --config /home/ubuntu/mongo-1.cfg (code=exited, status=14)\n Main PID: 1817 (code=exited, status=14)\n\nDec 17 07:47:37 ubuntu systemd[1]: Started MongoDB Database Server.\nDec 17 07:48:24 ubuntu systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nDec 17 07:48:24 ubuntu systemd[1]: mongod.service: Failed with result 'exit-code'.\nroot@ubuntu:/home/ubuntu#",
"text": "I changed the configuration file as an experiment:I deleted these 2 lines:And this is the result of systemctl status mongod",
"username": "Michel_Bouchet"
},
{
"code": "",
"text": "I am pretty sure that we will find the reason in the mongod.log\nCan you please empty the log and try to start the service… If the log is not too big can you please post it here.\n(Please make sure that there is nothing confidential posted)",
"username": "michael_hoeller"
},
{
"code": "ls -l /mnt/mongoDB-One/DB_Data_1st/* | grep -v mongodbsudo chown -R mongodb:mongodb /mnt/mongoDB-One/DB_Data_1stprocessManagement:\n fork: true\n",
"text": "I may already have an idea. By running this command:ls -l /mnt/mongoDB-One/DB_Data_1st/* | grep -v mongodbI found a few files with root onwnership. Maybe because I once ran mongod as root for test.\nAnyway, I then reran:sudo chown -R mongodb:mongodb /mnt/mongoDB-One/DB_Data_1stand that seems to solve the problem. It means you were right to suspect some permission issue.But there is something else I noticed. If I have these two lines in the config file:then the server starts and then vanishes after a short while. On the other hand if I delete them, all seems to be fine. Do you know if this is normal?",
"username": "Michel_Bouchet"
},
{
"code": "mongo",
"text": "Hello @Michel_Bouchetand then vanishes after a short while.What do you mean with this? The fork sends the process to the background so that would be ok, You can then connect to the mongodb shell via the mongo command.\nOr do you mean that the process dies? In this case please clean the mongod log an, rerun and wait until the process dies and post the log.",
"username": "michael_hoeller"
},
{
"code": "/etc/mongod.confType = Forking",
"text": "Hi @Michel_BouchetI was looking at this the other day on another thread. Systemd expects a service to run in the foreground by default (Type=simple). As the forking process exits systemd will kill the whole cgroup for the service.The Ubuntu .deb has a systemd service that uses simple and a default /etc/mongod.conf with no forking.The RPM mongo version I looked at the other day(Centos 6 or 7) has forking enabled and the systemd has Type = Forking. The RedHat package also “fixes” any permissions errors in the PreExec stage.",
"username": "chris"
},
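If you did want to keep fork: true under systemd, one possible approach (a sketch, not the packaged unit) is to tell systemd that the service forks and where to find the pid file. This assumes processManagement.pidFilePath is also set in the mongod config and that the pid directory is writable by the mongodb user:

```ini
# Drop-in override, e.g. created via `systemctl edit mongod`
# (/etc/systemd/system/mongod.service.d/override.conf)
[Service]
Type=forking
PIDFile=/var/run/mongodb/mongod.pid
```

```yaml
# Matching processManagement section in mongo-1.cfg (assumption for illustration)
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
```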
{
"code": "fork: true\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":\"signal\":15,\"error\":\"Terminated\"}}\n\"CONTROL\", \"id\":23378, \"ctx\":\"SignalHandler\",\"msg\":\"Signal was sent by kill(2)\",\"attr\":{\"pid\":1,\"uid\":0}}\nfork",
"text": "Hi @christhanks for pointing that out. I never had a fork: true in my systemd configs and was not aware of this issue with the deb packages.@Michel_Bouchet in your mongod.log should be something like:For something like this I was looking for, though I would not immediately have thought of the fork\nThanks Chris! You saved me some intense head scratching … I just tried it out to get the loglines…Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "By this, I mean the fact that it does not work when I leave the 2 lines in the config;\nand it works when I remove them.The process going to the background when the 2 lines are in, is what happens when mongod is started by hand (And this is also what I expect).When mongod is started by systemd, it does not behave that way. I don’t know why.\nMaybe I need to follow your suggestions, erasing the logs first.",
"username": "Michel_Bouchet"
},
{
"code": "",
"text": "I see. Then this explains the strange behaviour I found then.\nI just shouldn’t fork when using systemd then.Thanks.",
"username": "Michel_Bouchet"
},
{
"code": "",
"text": "Yep, thanks to Chris we know now ",
"username": "michael_hoeller"
},
{
"code": "",
"text": "As I wrote you on the university forum, it’s very effective here. Problem solved even before I get a chance to participate.",
"username": "steevej"
},
{
"code": "",
"text": "Hey … I know I’m late to the party, but I want to leave my 2cents here.Systemd has a parameter called TimeoutSec (and if you want more control, there is TimeoutStartSec and TimeoutStopSec). In a few words, it controls for how long systemd will wait until service is fully started or stopped.In some situations, if your mongod has been abruptly killed, when you restart it MongoDB has to do some housekeeping tasks before it fully starts the instance. If the time to finish these housekeeping tasks goes beyond the default TimeoutSec (which I believe is 90 secs, but don’t quote me on that), systemd will kill the daemon.After you fixed your permissions, I believe your system was facing the issue I described above.The solution is to add “TimeoutStartSec=3600” to the mongod.service file which tells systemd to kill mongod only after 1 hour if it didn’t fully start.By adding that to the mongod.service, you don’t have to remove the fork: true option from the mongod.conf file.All the best,– Rodrigo",
"username": "logwriter"
},
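A minimal sketch of the change Rodrigo describes, added to the unit's [Service] section (or a drop-in override):

```ini
[Service]
# Allow mongod up to an hour to finish startup recovery before systemd gives up on it
TimeoutStartSec=3600
```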
{
"code": "",
"text": "Yeah, thanks. That was a good advice. I keep the reference and use it sometime.",
"username": "Michel_Bouchet"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Problems firing up mongod from systemd | 2020-12-17T04:36:34.212Z | Problems firing up mongod from systemd | 21,415 |