Dataset columns: image_url (string, length 113-131), tags (sequence), discussion (list), title (string, length 8-254), created_at (string, length 24), fancy_title (string, length 8-396), views (int64, 73 to 422k)
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of version 1.4.6 of the MongoDB Go Driver.This release contains several bug fixes. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.4.6 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team", "username": "benjirewis" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.4.6 Released
2021-02-02T18:14:48.922Z
MongoDB Go Driver 1.4.6 Released
1,722
null
[ "production", "c-driver" ]
[ { "code": "", "text": "Announcing 1.17.4 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.It is my pleasure to announce libbson 1.17.4.No changes since 1.17.3; release to keep pace with libmongoc’s version.It is my pleasure to announce the MongoDB C Driver 1.17.4.Bug fixes:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C driver 1.17.4 released
2021-02-02T16:34:41.071Z
MongoDB C driver 1.17.4 released
2,007
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "What is the best practise when logging out a user in a realm offline first application?In the docs it says:\n\"You can log out any user, regardless of the authentication provider used to log in, using the user.logOut() or user.logOutAsync() methods. Both methods:Because logging out halts synchronization, you should only log out after all local Realm updates have uploaded to the server.\"As I understand this, if a user logs out while being offline, all recently changed data by this user is deleted and no longer present on the next login. How can I prevent this from happening.I could check whether a user has internet connection and only allow users to logout if they have. But what if the connection is poor, but present and not all changes have been uploaded yet? Can I check somehow if the data is up to date with the server, before logging the user out?", "username": "Annika" }, { "code": "SyncSession.uploadAllLocalChanges()SyncSession.uploadAllLocalChanges()logout()logoutAsync()SyncSession.uploadAllLocalChanges()", "text": "Hi Annika,Welcome to the forum.As I understand this, if a user logs out while being offline, all recently changed data by this user is deleted and no longer present on the next login. How can I prevent this from happening.Your understanding is correct here, and demonstrates a tricky use case that we haven’t quite ironed out yet. Right now, I can recommend the following:This will guarantee that no local changes are lost on user logout. However, you should be aware that because SyncSession.uploadAllLocalChanges() blocks until all local changes complete, it could take a while (or forever!) if a user is truly offline or has a lot of changes to upload on a weak connection. So you’ll have to decide based upon your particular use case if it makes sense to cancel a user’s logout until they have a chance to synchronize local changes, or always fall back to a successful logout at the potential cost of losing local changes.Hopefully this helps you solve your problem. Thanks for asking such a thoughtful question!PS: We recognize that this solution is a bit boilerplate-y, so we just opened a new issue on the Android SDK (Add support for logging out but uploading data first · Issue #7289 · realm/realm-java · GitHub) to make this process a little bit easier. Thanks for giving us the idea.", "username": "Nathan_Contino" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Log out best practices
2021-02-02T14:56:45.641Z
Log out best practices
2,807
null
[ "data-modeling" ]
[ { "code": "", "text": "Hey, I’ve been working an online game project for a while. I’m using MongoDB as a primary database. The problem I’m worrying about is game server performance with MongoDB.Let me explain the project more;\nThe game servers are using as microservices. Each game server connects to MongoDB directly. Because of the direct connection, I’m worrying about traffic and sync between services(servers).Example\nThere is a game character who has 100 gold in-game. Whenever he wants to buy something, the gold decreases and updates his document from MongoDB.Problems/QuestionsIs it good practice to update that much even the game has over a thousand player?The update operation might be happen anytime. If any of the other servers(the servers not player in) wants to get player document from MongoDB, the document might be not updated.It might be sound like game system problem but my solution will be depend on MongoDB.Thanks in advance!", "username": "Duck" }, { "code": "", "text": "Hi @Duck,MongoDB can obviously be used for Gaming and for very high write/read workloads. It’s the way you implement for document model (writes by scale of players) and your replication/sharding (distributing your data over multiple servers).", "username": "shrey_batra" }, { "code": "", "text": "Thanks for the advices!I’d want to give more information about current system. Thus, you can understand my situation more.All of the game servers have only one mongo connection. There won’t be any other connection in same server for mongo.Whenever a player joins any of the game servers, it gets player document/module from mongodb and cache inside JVM(Java Memory.). If he/she do something that triggers data update, the server update both cache and mongo document. That’s all!Since this project is new, I do not want to struggle with compilcated packet system across the network and other stuffs related to data and caching(master/slave etc.). If it’s possible, I would want to make the system as simple as possible for now.This depends on your replication process. Each server should be connected to a mongoDB Replica, and your writes must have “majority” concern set. More on this - here.Is “majority” feature solve my situation?", "username": "Duck" }, { "code": "", "text": "So the way I understand is that -If this is the scenario, then this will work perfectly awesome. Assuming no other player is writing/reading any other player’s information (as writing other player’s document may lead to stale cache for that particular player).Write concerns are mostly used to ensure that the write operation has been acknowledged by X number of replicas (data bearing nodes) of your mongodb deployment. For simple example, if you are using Atlas, and you are having 3 nodes in your mongodb replica set, “majority” would mean that atleast 2 nodes would reflect the new write that is going to take place, before the update query returns back to your application. This is necessary when multiple players might be reading same information (other players information) at same time.", "username": "shrey_batra" }, { "code": "", "text": "Ah, that’s what I want to do actually.Assuming no other player is writing/reading any other player’s information (as writing other player’s document may lead to stale cache for that particular player).Actually, from players across the network might change each other documents.\nExample A player from server X can send “gold” to another player from server Y. 
Thus, other servers might update players who are in other server.As writing other player’s document may lead to stale cache for that particular player.For that, I’m using “ChageStream” feature. If any of the documents from “player” collection, the server check if the updated document belongs to players in its server and it it is, the server updates the cache.In addition, since my project is not that big, I’will be using standalone. Even with standalone mode, should I use “majority” for my writings? An online game doesn’t need to avaiable. Therefore, if my project grows, I’d use “sharding” instead of “replication”.", "username": "Duck" }, { "code": "", "text": "No need for majority concerns for standalone. Do tell me how your project works out… Cheers…! ", "username": "shrey_batra" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using MongoDB with microservices
2021-02-02T07:46:08.537Z
Using MongoDB with microservices
3,808
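The purchase flow discussed in the thread above (check gold, decrease it, update the document) can be collapsed into one conditional update, which stays atomic no matter how many game servers write to the same player document. Below is a minimal sketch with the Go driver; the database, collection, and field names, the player id, and the price are placeholders, and the majority write concern only matters on a replica set (on the standalone mentioned later in the thread it can simply be dropped).

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/writeconcern"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Placeholder connection string; the majority write concern is only
	// meaningful against a replica set.
	client, err := mongo.Connect(ctx, options.Client().
		ApplyURI("mongodb://localhost:27017").
		SetWriteConcern(writeconcern.New(writeconcern.WMajority())))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	players := client.Database("game").Collection("players")
	price := 25

	// Only matches while the player still has enough gold, so the decrement
	// can never drive the balance negative, regardless of which server runs it.
	res, err := players.UpdateOne(ctx,
		bson.M{"_id": "player123", "gold": bson.M{"$gte": price}},
		bson.M{"$inc": bson.M{"gold": -price}},
	)
	if err != nil {
		log.Fatal(err)
	}
	if res.ModifiedCount == 0 {
		fmt.Println("purchase rejected: not enough gold (or player not found)")
	} else {
		fmt.Println("purchase applied")
	}
}
```

A change stream on the players collection, as already planned in the thread, then keeps each server's in-memory cache in step with writes made by other servers.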
null
[ "cxx" ]
[ { "code": "CMake Error at src/bsoncxx/CMakeLists.txt:98 (find_package):\n Could not find a configuration file for package \"libbson-1.0\" that is\n compatible with requested version \"1.13.0\".\n\n The following configuration files were considered but not accepted:\n\n /usr/lib/x86_64-linux-gnu/cmake/libbson-1.0/libbson-1.0-config.cmake, version: 1.9.2\n\n\n-- Configuring incomplete, errors occurred!\nSee also \"/usr/local/mongo-cxx-driver-r3.6.2/build/CMakeFiles/CMakeOutput.log\".\nsudo cmake ..\n-DCMAKE_BUILD_TYPE=Release\n-DBSONCXX_POLY_USE_MNMLSTC=1\n-DCMAKE_INSTALL_PREFIX=/usr/local/mongo-cxx-driver-r3.6.2\n-DCMAKE_PREFIX_PATH=/usr/local/mongo-c-driver\n", "text": "Hello everyone,I’m trying to build mongodb cxx driver and I followed the instructions in this link:\nhttp://mongocxx.org/mongocxx-v3/installation/linux/I have installed mongodb c driver, and libbson files are inside it\nWhen I try to compile mongodb cxx driver, I get this error:my build command:Note:\nI have mongo c driver at: /usr/local/mongo-c-driver\nand libbson at: /usr/local/mongo-c-driver/src/libbson\nmy os is Ubuntu 18Help me to fix the error please.", "username": "Nujood_Ahmed" }, { "code": "/usr/lib/usr/local/mongo-c-driver/src/libbson", "text": "@Nujood_Ahmed, the output you provided indicates that you have libbson 1.9.2 installed under /usr/lib. This is the version that is available from the official Ubuntu package repositories for 18.04. However, that should not interfere with a properly installed C driver in another location if you specify CMake to look there. But, the fact that you say that libbson is located at /usr/local/mongo-c-driver/src/libbson seems to indicate that you have not actually built the C driver, but rather simply untarred the source code. You should review the instructions for installing the C++ driver, which have a link to the C driver installation documentation near the beginning. You will need to build and install the C driver before building the C++ driver.", "username": "Roberto_Sanchez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
CMake Error at src/bsoncxx/CMakeLists.txt
2021-02-01T07:38:49.405Z
CMake Error at src/bsoncxx/CMakeLists.txt
3,471
null
[ "node-js", "field-encryption" ]
[ { "code": " { MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27020\n at Timeout.waitQueueMember.timer.setTimeout [as _onTimeout] (/Users/ravindu/Documents/Private Projects/csfle-guides/nodejs/node_modules/mongodb/lib/core/sdam/topology.js:438:30)\n at ontimeout (timers.js:498:11)\n at tryOnTimeout (timers.js:323:5)\n at Timer.listOnTimeout (timers.js:290:5)\n name: 'MongoServerSelectionError',\n reason:\n TopologyDescription {\n type: 'Unknown',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map { 'localhost:27020' => [Object] },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null } }\nconst mongodb = require(\"mongodb\");\nconst { ClientEncryption } = require(\"mongodb-client-encryption\");\nconst { MongoClient, Binary } = mongodb;\n\nmodule.exports = {\n CsfleHelper: class {\n constructor({\n provider = null,\n kmsProviders = null,\n masterKey = null,\n keyAltNames = \"demo-data-key\",\n keyDB = \"encryption\",\n keyColl = \"__keyVault\",\n schema = null,\n connectionString = \"mongodb+srv://my_username:[email protected]/test?retryWrites=true&w=majority\",\n mongocryptdBypassSpawn = false,\n mongocryptdSpawnPath = \"mongocryptd\",\n } = {}) {\n if (kmsProviders === null) {\n throw new Error(\"kmsProviders is required\");\n }\n if (provider === null) {\n throw new Error(\"provider is required\");\n }\n if (provider !== \"local\" && masterKey === null) {\n throw new Error(\"masterKey is required\");\n }\n this.kmsProviders = kmsProviders;\n this.masterKey = masterKey;\n this.provider = provider;\n this.keyAltNames = keyAltNames;\n this.keyDB = keyDB;\n this.keyColl = keyColl;\n this.keyVaultNamespace = `${keyDB}.${keyColl}`;\n this.schema = schema;\n this.connectionString = connectionString;\n this.mongocryptdBypassSpawn = mongocryptdBypassSpawn;\n this.mongocryptdSpawnPath = mongocryptdSpawnPath;\n this.regularClient = null;\n this.csfleClient = null;\n }\n\n /**\n * Creates a unique, partial index in the key vault collection\n * on the ``keyAltNames`` field.\n *\n * @param {MongoClient} client\n */\n async ensureUniqueIndexOnKeyVault(client) {\n try {\n await client\n .db(this.keyDB)\n .collection(this.keyColl)\n .createIndex(\"keyAltNames\", {\n unique: true,\n partialFilterExpression: {\n keyAltNames: {\n $exists: true,\n },\n },\n });\n } catch (e) {\n throw new Error(e);\n }\n }\n\n /**\n * In the guide, https://docs.mongodb.com/ecosystem/use-cases/client-side-field-level-encryption-guide/,\n * we create the data key and then show that it is created by\n * retreiving it using a findOne query. 
Here, in implementation, we only\n * create the key if it doesn't already exist, ensuring we only have one\n * local data key.\n *\n * @param {MongoClient} client\n */\n async findOrCreateDataKey(client) {\n const encryption = new ClientEncryption(client, {\n keyVaultNamespace: this.keyVaultNamespace,\n kmsProviders: this.kmsProviders,\n });\n\n await this.ensureUniqueIndexOnKeyVault(client);\n\n let dataKey = await client\n .db(this.keyDB)\n .collection(this.keyColl)\n .findOne({ keyAltNames: { $in: [this.keyAltNames] } });\n\n if (dataKey === null) {\n dataKey = await encryption.createDataKey(this.provider, {\n masterKey: this.masterKey,\n });\n return dataKey.toString(\"base64\");\n }\n return dataKey[\"_id\"].toString(\"base64\");\n }\n\n async getRegularClient() {\n const client = new MongoClient(this.connectionString, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n return await client.connect();\n }\n\n async getCsfleEnabledClient(schemaMap = null) {\n if (schemaMap === null) {\n throw new Error(\n \"schemaMap is a required argument. Build it using the CsfleHelper.createJsonSchemaMap method\"\n );\n }\n const client = new MongoClient(this.connectionString, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n monitorCommands: true,\n autoEncryption: {\n keyVaultNamespace: this.keyVaultNamespace,\n kmsProviders: this.kmsProviders,\n schemaMap,\n },\n });\n return await client.connect();\n }\n\n createJsonSchemaMap(dataKey) {\n return {\n \"medicalRecords.patients\": {\n bsonType: \"object\",\n encryptMetadata: {\n keyId: [new Binary(Buffer.from(dataKey, \"base64\"), 4)],\n },\n properties: {\n insurance: {\n bsonType: \"object\",\n properties: {\n policyNumber: {\n encrypt: {\n bsonType: \"int\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\",\n },\n },\n },\n },\n medicalRecords: {\n encrypt: {\n bsonType: \"array\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n },\n },\n bloodType: {\n encrypt: {\n bsonType: \"string\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n },\n },\n ssn: {\n encrypt: {\n bsonType: \"int\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\",\n },\n },\n },\n },\n };\n }\n },\n};\n", "text": "I’ve been facing an issue in creating CSFLE enabled client with MongoDB ATLAS Cluster. I’m following the official Client-Side Field Level Encryption Guide. I’ve directly checked out the csfle-guides/nodejs github repository and followed the directions under the README. The only thing I changed in the code was to add the MongoDB ATLAS Connection URL as the database connection string from (helpers.js/ Line: 15).The regularClient connection works fine with ATLAS without any issue. I have even created the Key Vault and the Data Key and stored it on ATLAS using the regularClient connection. But when trying to create a CSFLE Enabled Client connection the program fails with the following error,Though the connection URL is set to ATLAS it tries to connect to a localhost node, even after changing the connection URL.MongoDB ATLAS Cluster Version: 4.4.3My helper.js file (Only change is in line 15 - Connection String)", "username": "Ravindu_Fernando1" }, { "code": "", "text": "The issue here was my mongocryptd process wasn’t running in the background. The issue resolved after I have installed the libmongocrypt library and mongocryptd binary along with mongodb-client-encryption NPM package.", "username": "Ravindu_Fernando1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Unable to create Client-Side Field Level Encryption enabled connection client with ATLAS in NodeJS
2021-02-01T19:28:43.192Z
Unable to create Client-Side Field Level Encryption enabled connection client with ATLAS in NodeJS
4,553
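For readers hitting the same symptom from Go rather than Node.js, the shape of a CSFLE-enabled client is similar, and the same environmental requirement applies: libmongocrypt must be installed and the mongocryptd binary reachable, which was the root cause above. The sketch below is an illustration only; the connection string, key material, and key vault namespace are placeholders, the schema map is left nil, and the Go driver additionally has to be built with the cse build tag.

```go
// Build with: go build -tags cse   (requires libmongocrypt to be installed)
package main

import (
	"context"
	"crypto/rand"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	// 96-byte local master key, generated ad hoc for demonstration only; a
	// real deployment would load key material from a KMS or secure storage.
	localKey := make([]byte, 96)
	if _, err := rand.Read(localKey); err != nil {
		log.Fatal(err)
	}
	kmsProviders := map[string]map[string]interface{}{
		"local": {"key": localKey},
	}

	// nil means the driver falls back to any server-side JSON schema; in the
	// guide this would be the map built by createJsonSchemaMap.
	var schemaMap map[string]interface{}

	autoEnc := options.AutoEncryption().
		SetKeyVaultNamespace("encryption.__keyVault").
		SetKmsProviders(kmsProviders).
		SetSchemaMap(schemaMap).
		// Points the driver at the mongocryptd binary; only needed when it is
		// not already on the PATH.
		SetExtraOptions(map[string]interface{}{"mongocryptdSpawnPath": "mongocryptd"})

	// Replace with the Atlas SRV connection string from the thread.
	client, err := mongo.Connect(ctx, options.Client().
		ApplyURI("mongodb://localhost:27017").
		SetAutoEncryptionOptions(autoEnc))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)
}
```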
null
[ "app-services-cli" ]
[ { "code": "", "text": "realm-cli lerna help$ yarn run mongodb-realm-cli\nUsage Error: Couldn’t find a script named “mongodb-realm-cli”.$ yarn run [–inspect] [–inspect-brk] …luism@DESKTOP-UJOT6VH MINGW64 /e/Users/luism/poligonosdemos (main)\n$ realm-cli login\nbash: realm-cli: command not foundluism@DESKTOP-UJOT6VH MINGW64 /e/Users/luism/poligonosdemos (main)\n$ realm-cli whoami\nbash: realm-cli: command not foundluism@DESKTOP-UJOT6VH MINGW64 /e/Users/luism/poligonosdemos (main)\n$ npx lerna realm-cli whoami", "username": "Luis_Mendes_Machado" }, { "code": "realm-cliyarnyarn global add mongodb-realm-cli", "text": "Welcome to the community forum @Luis_Mendes_Machado!It looks like you either haven’t installed realm-cli yet, or it isn’t in your path.If you want to install using yarn, try: yarn global add mongodb-realm-cli.For more installation & usage details please see the Realm CLI reference documentation.Regards,\nStennie", "username": "Stennie_X" } ]
Realm-cli lerna help
2021-02-01T21:24:00.027Z
Realm-cli lerna help
1,954
null
[]
[ { "code": "", "text": "Hi,\nI have a cluster (3 nodes) with MongoAtlas, and today I see this alert: “Disk I/O % utilization on Data Partition has gone above 90 on nvme1n1”… I see that one node is Down!\nI search help because I don’t understand a lot of things in MongoAtlas. I try to resize my cluster to 40Gb (150 IOs) but 3 hours later I see the label “We are deploying your changes: 0 of 3 servers complete (current action: configuring MongoDB)…”\nHow long for this resize? I don’t understand…\nHow can I start the down node?Regards", "username": "Alvaro_Becerra" }, { "code": "", "text": "Welcome to the community forum @Alvaro_Becerra!For operational issues like this, please login to your account and contact the Atlas Support team for assistance.Regards,\nStennie", "username": "Stennie_X" } ]
Node down in MongoAtlas
2021-02-01T19:28:15.820Z
Node down in MongoAtlas
3,695
null
[ "aggregation", "queries", "performance" ]
[ { "code": "db.getCollection(\"summary\").explain().aggregate([ {\n\n \"$match\" : { }\n }, \n {\n \"$project\" : {\n \"_id\" : NumberInt(1)\n }\n }])\n\n\n\"winningPlan\" : {\n \"stage\" : \"COLLSCAN\", \n \"direction\" : \"forward\"\n },\n", "text": "Hello,I am trying to get all the data projection with “id” field, but execution plan shows it is going for COLLSCAN.\nIdeally it should consider the “_id” index. please advise, if I am missing anything here -", "username": "Amit_G" }, { "code": "{ $sort : { _id : 1} } ", "text": "Hi @Amit_G,MongoDB collection scan is much more optimized than full index scans even for one field projection. Therefore optimzer prefer this method over full index scans.Additionally, in aggregation frame work the stages pass documents in memory from one to another therefore second stage might not use an index that is not used in first.Instead of empty match , consider using { $sort : { _id : 1} } in first stage.You can try to hint the _id index and see if it works better but I doubt itThanks\nPavel", "username": "Pavel_Duchovny" } ]
Aggregation projection on "_id" field going for COLLSCAN
2021-02-02T05:20:18.507Z
Aggregation projection on “_id” field going for COLLSCAN
2,628
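To make the suggestion above concrete, here is a small sketch with the Go driver that adds the leading { $sort: { _id: 1 } } stage and also hints the _id index, so its explain output can be compared against the COLLSCAN plan; the database name and connection string are assumptions, and the collection name follows the thread.

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	summary := client.Database("test").Collection("summary")

	// A leading $sort on _id gives the planner a reason to walk the _id index;
	// the hint below forces it explicitly if the sort alone is not enough.
	pipeline := mongo.Pipeline{
		{{Key: "$sort", Value: bson.D{{Key: "_id", Value: 1}}}},
		{{Key: "$project", Value: bson.D{{Key: "_id", Value: 1}}}},
	}
	opts := options.Aggregate().SetHint(bson.D{{Key: "_id", Value: 1}})

	cur, err := summary.Aggregate(ctx, pipeline, opts)
	if err != nil {
		log.Fatal(err)
	}
	defer cur.Close(ctx)

	var ids []bson.M
	if err := cur.All(ctx, &ids); err != nil {
		log.Fatal(err)
	}
	log.Printf("fetched %d ids", len(ids))
}
```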
null
[ "spark-connector" ]
[ { "code": "Caused by: com.mongodb.MongoCommandException: Command failed with error 2: 'Cannot open a new cursor since too many cursors are already opened' on server server_dns:27017. The full response is {\"ok\": 0.0, \"errmsg\": \"Cannot open a new cursor since too many cursors are already opened\", \"code\": 2}\n&MaxPoolSize=1\n", "text": "I am getting this error:I think that the plugin has too many connections to the database. I have tried appendingto the mongodb URL and there is nothing here about how to limit the number of open cursors. I cannot increase the number of allowed open cursors in the database configuration itself.How do I limit/control the number of open cursors used by the Apache Spark plugin for MongoDB?", "username": "Dan_S" }, { "code": "", "text": "Hi @Dan_S,Welcome to MongoDB community!The maxPoolSize limit amount of connections but not cursors which are controlled by the application when specifying a timeout for them (also they should be timedout on server after 10min)You should be able to kill cursors:Search if on spark side you are not opening cursors with noTimeOut flag or just many simultaneously running queries.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,Thanks for your response.The problem is that in Spark I am using the plugin to MongoDB for Spark which provides a much higher level interface to the database than the pymongo module. I assume (just by observing the behaviour of the plugin) that it is aggressively opening multiple cursors to the database, but I do not believe that this behaviour can be controlled by the plugin’s user.", "username": "Dan_S" }, { "code": "", "text": "Hi @Dan_S,I can see if our spark.team colud help us more. Can you share your MongoDB version, topology and spark connector version?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "Many thanks for the ticket. When working with multiple distributed systems, it often can be difficult to diagnose the root cause of too many cursors. I have a feel that error you are seeing is the symptom of an issue rather than the root cause. If that is the case then adding a configuration wouldn't be the right thing to do.\n\nCould you provide more detail on how you are hitting this issue? What version of MongoDB are you running? What OS? What version of Spark and what version of the Spark connector? Ideally, a minimal reproducible example would help as I could replicate the issue.\n", "text": "Does Ross Lawley (author of this plugin) pay attention to these forums?I include below the response from Ross Lawley to my ticket for this issue on MongoDB’s Jira:Hi Dan S,I believe that this forum is the most appropriate place for this discussion. The database used is Amazon’s DocumentDB, which is supposed to support the same client-server protocol as the database from MongoDB. The number of open cursors allowed for any database is a fixed parameter that corresponds to the size of the EC2 instances on which the managed database is to run. see here Note that the smallest instance allows up to 30 cursors.I have a batch job that runs a Spark application which includes reads from a MongoDB database using the Spark plugin. If there are 4 or more instances of this application running at the same time, while the database is on the smallest tier, they will produce this error. 
From this I reason the following:I may be misunderstanding, but there is an arbitrary number of cursors opened by this plugin to the database, with no mechanism to control this number.", "username": "Dan_S" }, { "code": "", "text": "Hi @Dan_S,Amazon document DB is just an emulator of MongoDB API, all the cursor management and other parts are developed and managed by Amazon.I don’t believe we can help you with this database and you should address Amazon for Answers.I strongly suggest to consider the REAL MongoDB as your backend database (Atlas)Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Spark plugin - too many open cursors
2020-10-28T06:36:17.068Z
Spark plugin - too many open cursors
5,463
null
[ "aggregation", "queries" ]
[ { "code": " {\n \"_id\" : ObjectId(\"60181d30bbc953f2c6f3cece\"),\n \"ProductDetails\" : [ \n {\n \"name\" : \"abc\",\n \"datepurchased\" : ISODate(\"2018-07-29T05:15:21.594Z\")\n }, \n {\n \"name\" : \"xxx\",\n \"datepurchased\" : ISODate(\"2021-07-29T05:15:21.594Z\")\n }, \n {\n \"name\" : \"abc\",\n \"datepurchased\" : ISODate(\"2019-07-29T05:15:21.594Z\")\n }\n ]\n}\n", "text": "Hi All,Consider this as a sample document, and the date purchased value can be in any order.How to filter the records in the array field by name = “abc” and get one recent product by date purchased and how to achieve this without including $unwind and $sort stage?Let me know this can be possible in the mongo db 4.2 version.", "username": "Sudhesh_Gnanasekaran" }, { "code": "ProductDetailsProductDetailsname {\n $addFields: {\n ProductDetails: {\n $arrayElemAt: [\n {\n $filter: {\n input: \"$ProductDetails\",\n cond: { $eq: [\"$$this.name\", \"abc\"] }\n }\n },\n 0\n ]\n }\n }\n }\nProductDetailsProductDetailsname {\n $addFields: {\n ProductDetails: {\n $filter: {\n input: \"$ProductDetails\",\n cond: { $eq: [\"$$this.name\", \"abc\"] }\n }\n }\n }\n }\n{ $max: \"$ProductDetails.datepurchased\" }ProductDetailsProductDetails$indexOfArray {\n $addFields: {\n ProductDetails: {\n $arrayElemAt: [\n \"$ProductDetails\",\n {\n $indexOfArray: [\n \"$ProductDetails.datepurchased\",\n { $max: \"$ProductDetails.datepurchased\" }\n ]\n }\n ]\n }\n }\n }\n {\n $addFields: {\n ProductDetails: {\n $let: {\n vars: {\n p: {\n $filter: {\n input: \"$ProductDetails\",\n cond: { $eq: [\"$$this.name\", \"abc\"] }\n }\n }\n },\n in: {\n $arrayElemAt: [\n \"$$p\",\n { \n $indexOfArray: [\n \"$$p.datepurchased\", \n { $max: \"$$p.datepurchased\" }\n ] \n }\n ]\n }\n }\n }\n }\n }\n", "text": "Hello @Sudhesh_Gnanasekaran,First Stage:Second Stage:You can do it using single stage by $let,", "username": "turivishal" }, { "code": "", "text": "@turivishal Thank you query working perfectly, and how about the performance compared with the unwinding and Sort stage?", "username": "Sudhesh_Gnanasekaran" }, { "code": "explain()", "text": "how about the performance compared with the unwinding and Sort stage?Depends on data in your collection and number of elements in array, You may want to take a look a the explain() of the query, this will tell you exactly where mongodb is spending time, compare both queries performance.", "username": "turivishal" }, { "code": "", "text": "@turivishal Thank you.", "username": "Sudhesh_Gnanasekaran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to filter array records without unwind and sort stage?
2021-02-01T15:35:52.294Z
How to filter array records without unwind and sort stage?
8,611
https://www.mongodb.com/…25ea479aef16.png
[ "swift", "atlas-device-sync" ]
[ { "code": "write'RLMException', reason: 'Realm accessed from incorrect thread.'\nstruct DALconfig {\n static let partitionKey = \"_partition\"\n}\n\n\nextension DataProvider where Self: Object {\n\n static func openRealm(partition: String) -> Realm? {\n if partition.isEmpty {\n DDLogError(\"Database error: empty partition value\")\n fatalError(\"Database error: empty partition value\")\n }\n\n let config = realmApp.currentUser!.configuration(partitionValue: partition)\n let realmConfiguration = setup(config: config) // update with other params.\n\n do {\n return try Realm(configuration: realmConfiguration)\n } catch let error {\n DDLogError(\"Database error: \\(error)\")\n fatalError(\"Database error: \\(error)\")\n }\n }\n\n\n static func createOrUpdateAll(with objects: [Self], update: Bool = true) {\n if objects.isEmpty {\n DDLogError(\"⚠️ 0 objects\")\n return\n }\n\n let objectPartitionValue = objects.first?.value(forKeyPath: DALconfig.partitionKey) as! String\n\n autoreleasepool {\n do {\n let database = self.openRealm(partition: objectPartitionValue)\n try database?.write {\n database?.add(objects, update: update ? .all : .error)\n }\n } catch let error {\n DDLogError(\"Database error: \\(error)\")\n fatalError(\"Database error: \\(error)\")\n }\n }\n }\n\n...\n\n\nlet car = RLMCar(partitionKey: Partition.user)\ncar.brand = \"Datsun\"\ncar.colour = \"Neon Green\"\n\nRLMCar.createOrUpdateAll(with: [car])\nwrite.userRealm?.syncSession?.resume()", "text": "I’m refactoring my DataProvider code to better support synced realms, and all of the example code I find opens and persists a realm DB on the main thread. Then either passes that realm between views or holds it in a Singleton class.Which is fine. Except for the fact that this now locks you into having every DB operation on that main thread.If you obviously try to do a write on a background thread then, you’ll immediately be confronted with:The examples also contradict the documentation that clearly states:\n\nimage866×190 12.1 KB\n \nTo help alleviate this and better support multithreading, I’ve written a wrapper to always (re)open a realm partition for that sole operation.This way, you can perform a write operation regardless of what thread you might be on, because the realm DB will specifically be opened for your transaction on that same thread.One important detail though with this method, is that because the realm DB is opened and destroyed (closed) within the same function scope, the sync engine doesn’t get time to trigger automatically.To solve this, as part of my app init(), I have to open the same realm DB on a main thread and persist it in global memory, so that its sync engine can stay active and be triggered by either the iOS event loop or calling .userRealm?.syncSession?.resume() to refresh a sync operation in the background.This “works”. But… is there perhaps a more elegant solution to this kind of work?", "username": "Sebastian_Dwornik" }, { "code": "", "text": "Hi @Jay ,Regarding your own question around threading off main, re:Writing Sync Data on main thread, did you find a clean and elegant pattern to use a syncing realm within a multithreaded environment?", "username": "Sebastian_Dwornik" } ]
Multithreading Realm Practices
2021-01-30T16:28:29.708Z
Multithreading Realm Practices
3,148
https://www.mongodb.com/…9_2_1024x115.png
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "[1.]partitionValuelet config = app.currentUser!.configuration(partitionValue: \"user=\\(user.id)\")\nlet userRealm = try! Realm(configuration: config)\nwrite_partitionKeyclass RLMCar: Object {\n @objc dynamic var _partitionKey: String = \"\"\n\n convenience init(partitionKey: String) {\n self.init()\n self._partitionKey = partitionKey\n }\n...\n\nlet car = RLMCar(partitionKey: \"user=\\(user.id)\")\nuserRealm?.createOrUpdateAll(with: [car])\n\n_partitionKeypartitionValuelet car = RLMCar(partitionKey: \"user=\\(appSession.userProfile.email)\")\nuserRealm?.createOrUpdateAll(with: [car])\nReceived: ERROR(error_code=212, message_size=22, try_again=0)\nBadChange[2.]_partitionKeypartitionKey == partitionValue[3.]Object", "text": "Help me get this straight:[1.]\nWhen you open a synced realm DB with a specific partitionValue:\neg.Every object transaction to this db (eg. a write), must also have a matching _partitionKey value:\neg.Otherwise… if an objects _partitionKey is different from the opened realm’s partitionValue:\neg.The object will still save correctly to the local realm db on device, but fail to sync, resulting in:With the server log stating BadChange:\n\nimage2888×326 54 KB\n[2.]\nIf you sync from multiple realms into the same MongoDB, the collection view UI on the server will show multiple objects (“Documents”?) with different _partitionKey's. Which would indicate they came from separate realm’s.\n\nimage1376×1446 121 KB\n\nI should probably add error checking in my function wrapper to verify the Object’s partitionKey == partitionValue before writing it.[3.]\nAs a side note: why does Realm insert empty tables of every Object class into the database file, even though there aren’t any actions performed on creating and actually writing those objects?", "username": "Sebastian_Dwornik" }, { "code": "[1.]_partitionKey\"123\"\"abc\"_partitionKey[2.]_partition[3.]", "text": "[1.]\nRegarding the first question: typically, in the documentation, we refer to the property name as “partition key” and the property value as “partition value”. In your example, your partition key is called _partitionKey and the partition values are the values this property can take - such as \"123\", \"abc\", and so on. If you do have the _partitionKey property explicitly defined in your models, then you should set its value to the same value as the value you opened the synchronized Realm with.To avoid confusion though, it is perfectly legal to omit the partition key property from your Swift models. That way, the SDK will not set it to anything and the server will automatically populate it to the expected value.[2.]\nI’m not sure that I understand your second question or is it merely an observation? It is expected that a single MongoDB collection will contain documents from multiple partitions/Realms and indeed the value of the _partition property is what determines which Realm a document is associated with.[3.]\nRealm is inherently a schema-based database. The schema is persisted upon opening the Realm file, regardless of whether there are objects created in a particular table or not. Is there a reason why you’re concerned with this behavior?", "username": "nirinchev" }, { "code": "[1.]_partitionKeyerror_code=212[2.][3.]", "text": "Hi Nikola,\nre:\n[1.]To avoid confusion though, it is perfectly legal to omit the partition key property from your Swift models. 
That way, the SDK will not set it to anything and the server will automatically populate it to the expected value.If I remove the _partitionKey property from my Swift models, I get the dreaded error_code=212, and sync is broken. As it doesn’t match the cloud model schema.If I try to remove it from the cloud model schema, I get:image1598×582 81.3 KBSo that doesn’t make sense to me in its setup.image1286×500 50 KB[2.]\nCorrect, this is an observation. Thank you for confirming.[3.]\nJust curious about this operation. Thanks.", "username": "Sebastian_Dwornik" }, { "code": "_partitionKey_id", "text": "Hey Sebastian,_partitionKey should be in your JSON Schema (the cloud models) - there’s no way around this.It’s very surprising that you’re getting 212 when you remove it from your swift models though. Have you tried doing it recently - it was not possible some months ago, but I’m fairly certain it should be legal today. Your Swift models are absolutely free to define a subset of the properties/classes that exist on the server. The partition key and the _id were the only exceptions, but the team resolved the issues with the partition keys and you no longer have to include them in your client models.If you do get the error with the latest version of the Swift SDK, would you mind opening a Github issue with some details and ideally a repro case so the team can investigate.", "username": "nirinchev" }, { "code": "class RLMCar: Object {\n\n @objc dynamic var _id: String = newUUID\n// @objc dynamic var _partitionKey: String = \"\"\n\n @objc dynamic var serverLastUpdated: String? = nil\n @objc dynamic var clientLastUpdated: String? = nil\n...\n_partitionKey_partitionKeypartitionValue", "text": "Hi Nikola,It works! I just had to delete my app.The server side document synced the object:image798×356 28 KBAnd my local realm file shows the _partitionKey property in the table. But curiously, it’s empty. \nimage3050×286 41.7 KBIs this expected behaviour?\nRealm sync won’t update my local realm object with the servers _partitionKey?As another side thought: when opening a Realm(…) DB, is the main difference in triggering a synced realm vs. a non-synced realm, by providing a partitionValue in its parameters?", "username": "Sebastian_Dwornik" }, { "code": "", "text": "The partition value should get synced back to your client, though that may take a few seconds as it needs to go to the server and back (i.e. it’s not populated on the device). If you don’t see it get synced back, that will be suspicious and I’ll open a bug report with the cloud team to investigate.As for the difference between synced and local Realm - yes, providing a user and a partition key is the main difference. That information will tell Realm that your intent is to synchronize the data with a MongoDB Realm app and initialize the sync mechanism.", "username": "nirinchev" }, { "code": "updatepartitionValue", "text": "If you don’t see it get synced back, that will be suspicious and I’ll open a bug report with the cloud team to investigate.Perfect! Here’s my video proof of the issue.One scenario I did test as well is modifying a synced object on the server DB side. 
Upon the update and sync down to the client app, I noticed that the partitionValue for that single object did fill in.\nimage3018×272 51.2 KB\n", "username": "Sebastian_Dwornik" }, { "code": "_partitionKey_partition_partitionKey", "text": "This may be nothing but in the top part of the question the objects had a _partitionKey property and in the latter posts the property was _partition - and then it’s set back to _partitionKey in a screen shot.What is that set to in the console->Sync settings?", "username": "Jay" }, { "code": "_partition", "text": "@Jay I just changed the string and re-started/reset everything. Wondered if by chance the _partition string mattered to some internal code. But it doesn’t. ¯\\(ツ)/¯", "username": "Sebastian_Dwornik" }, { "code": "class TaskClass: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n //@objc dynamic var _partitionKey = \"\" this will auto populate so not needed\n", "text": "Are you still having the same issue where the _partition (or _partitionKey) is not sync’ing back to the client?Let me preface this be saying we’ve been using this set up for months and it’s always populated and worked correctly. Initially we were manually populating that but with the update that auto-populated, it’s never failed for us.Did you verify the client has a Realm object with a property they matches what’s defined in the server Sync settings? e.g.When the TaskClass object is instantiated and sync’d the _partitionKey value will be automatically populated with whatever Realm we connected to.Then in the Console->Sync setting the partition key name has to match that property name (not the value)partition1568×554 83.2 KB", "username": "Jay" }, { "code": "_partition_partition_partition_partition@objc dynamic var _partition: String = Partition.user\n_partition", "text": "Just like you, I was manually populating the _partition key value in my code on the app side.Then, as @nirinchev stated above:… it is perfectly legal to omit the partition key property from your Swift models.Which I then did. The Realm Sync SDK adds the _partition column to my table automatically. But as seen in my video above, it does not auto-populate the key with any value after an up-sync.But if you then modify that synced object on the server-side, when it then syncs back down, it will have the populated _partition key value.To add more fuel to this process, I then tested re-enabling my _partition property in my object:It crashed sync. hahaha… I guess there is some cache that although my Swift model now matches the .realm file tables exactly, the SDK fails to adapt and still expects my Swift model to omit the _partition property. ", "username": "Sebastian_Dwornik" }, { "code": "//@objc dynamic var _partitionKey = \"\" this will auto populate so not needed", "text": "Ok. I made an error in the code in my above post. 
This was supposed to be commented out showing that even if the property does not exist in code, it’s still populated on the server.//@objc dynamic var _partitionKey = \"\" this will auto populate so not neededNow the interesting part; we are able to duplicate this (different) behavior.@nirinchevIt was working and populating the partition key property in the realm file with a prior Realm version but with 10.5.1, it definitely does not populate the partition key locally.Here’s the console and then the local realm file - it’s the same object but there is no local partition keymia partition key2180×688 90.7 KB", "username": "Jay" }, { "code": "", "text": "Yep, this sounds like a legitimate bug. It shouldn’t materially affect using the SDK, but we should fix it nonetheless.", "username": "nirinchev" }, { "code": "", "text": "Just a tad bit more info. If the local files are totally deleted and the app is run. It successfully sync’s and actually creates the partition key property in the file but does not populate it.@nirinchevI am happy to open a bug report, just not sure where to do it. Git? And is this a Realm-Cocoa SDK issue or does it fall under some other repository? Or should we chat them via the console?The reporting path is a bit unclear so any direction would be appreciated.", "username": "Jay" }, { "code": "", "text": "From the looks of it, doesn’t seem like it’s Cocoa specific. If it’s something that’s negatively impacting your app/development process, then file a support ticket and the team will route it to the Cloud team (and ensure that SLAs are met and you’re continuously updated on the progress). Otherwise, I’ll file a ticket internally and make a note for whoever handles it to report back on this thread. But it’ll be lower priority than if it’s impacting a live app/app in active development.", "username": "nirinchev" }, { "code": "", "text": "@nirinchevWell, they want me to pay to file a support ticket for a bug so I would please ask if you could file a ticket internally.It’s not mission critical for us as we always include the _partitionKey property in our objects but it maybe for @Sebastian_Dwornik or others.", "username": "Jay" }, { "code": "_partition", "text": "Not mission critical here too. (for now)\nJust an observation as I tinker with learning this SDK. But it might be useful at some point.Might the fix also resolve the sync error when I add back the _partition key property to my Swift model after initializing a realm without it?", "username": "Sebastian_Dwornik" }, { "code": "_partition", "text": "What is the error you’re getting after adding the _partition property?", "username": "nirinchev" }, { "code": "", "text": "hmm… let’s leave that alone for now. Seems me trying to reproduce it isn’t working at the moment.", "username": "Sebastian_Dwornik" }, { "code": "", "text": "Cool - if you encounter it again - feel free to file a ticket for the Cocoa team.", "username": "nirinchev" } ]
Do I understand this correctly? partitionKey == partitionValue
2021-01-27T20:49:15.943Z
Do I understand this correctly? partitionKey == partitionValue
5,321
null
[ "dot-net", "connecting", "atlas" ]
[ { "code": "{ ClusterId : \"1\", \n ConnectionMode : \"ReplicaSet\",\n Type : \"ReplicaSet\",\n State : \"Connected\",\n Servers : \n [\n \t{ \n \t\tServerId: \"{ ClusterId : 1, EndPoint : \"mongo atlas Host 1\" }\",\n \t\tEndPoint: \"mongo atlas Host 1\", \n \t\tReasonChanged: \"Heartbeat\", \n \t\tState: \"Connected\", \n \t\tServerVersion: 4.2.11,\n \t\tTopologyVersion: ,\n \t\tType: \"ReplicaSetSecondary\"\n\t}\n \t{ ServerId: \"{ ClusterId : 1, EndPoint : \"mongo atlas host 2\", EndPoint: \"mongo atlas host 2\", \tReasonChanged: \"Heartbeat\", State: \"Connected\", ServerVersion: 4.2.11, TopologyVersion: , Type: \"ReplicaSetSecondary\",\n \t},\n \t { ServerId: \"{ ClusterId : 1, EndPoint : \"mongo atlas host 3 }\", EndPoint: \"mongo atlas host 3\", ReasonChanged: \"InvalidatedBecause:NoLongerPrimary\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", LastHeartbeatTimestamp: null, LastUpdateTimestamp: \"2021-01-15T16:35:37.9491599Z\" }] }\n", "text": "I am using Mongo Db C# driver 2.11.5, and I have a static connection for the mongo client which I store globally. Sometimes the read operations fail. And it gives me error:A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = WritableServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 } }Client view of cluster state isSo the main error which I am getting over here isInvalidatedBecause: NoLongerPrimary.In my connection string, I don’t have added readPrefference so it will be using readPrefference as Primary only.Could someone Help me in how I can resolve this issue, or if there is something I am not doing correctly.", "username": "Shipra_Aggarwal" }, { "code": "", "text": "From the wording of the error the Primary stepped down of failed and based on the topology selecting a new primary.Catch this and reconnect. Replica sets provide HA, so recovering from this allows for maintenance and failure scenarios.", "username": "chris" }, { "code": "", "text": "This error happened saying the primary was down. The documentation says using the topology new primary would be created. So what could be the possible reason this time that a new primary was not created? Is it a frequent error that can happen or it seems to be a temporary issue?", "username": "Shipra_Aggarwal" }, { "code": "", "text": "Replica sets provide HAHi Chris! Thanks for your help. When we say reconnect does it mean to create a new MongoClient? And if yes should I try to perform read operation on primary only or try reconnecting to some replica.\nAnd also do you think this error no longer primary means primary node failure happens. If that is the case ideally Atlas should track that in server logs which they provide. right?", "username": "Shipra_Aggarwal" }, { "code": "", "text": "I updated this post with the Atlas tag.With Atlas first check the cluster activity. Automatic updating is one of those things that happen with Atlas. First check the project’s activity feed before delving in to the logs, lots of events will show here.An election for a new Primary can take a some time, in my experience usually in the single to tens of seconds.You should be able to reuse your existing mongo client. Subsequent calls, i.e. GetDatabase, will succeed when the driver can reconnect.", "username": "chris" }, { "code": "", "text": "Thanks a ton! I will see the cluster activity logs.", "username": "Shipra_Aggarwal" }, { "code": "", "text": "Hi,We saw similar issue on our server too. 
Is there any solution for this issue?", "username": "Pragathi_Kallu" } ]
Mongo DB Connection showing time out error with ReasonChanged: "InvalidatedBecause:NoLongerPrimary"
2021-01-18T12:45:52.691Z
Mongo DB Connection showing time out error with ReasonChanged: “InvalidatedBecause:NoLongerPrimary”
7,237
null
[ "queries", "golang" ]
[ { "code": "", "text": "I’m attempting to write something that enables me to see the raw response that is returned from mongo. Essentially, we run pipeline aggregations with many different data types - none of them a specific kind. Sometimes, it’s difficult to tell why the result isn’t unpacking cleanly into the result struct. In these cases, I’m attempting to create a debug mode that will print out the raw response from mongo (in the same format that you see on the mongo CLI essentially when you run a query - or something close).\nThen you would be able to see what mongo is returning and adjust your query or data type accordingly.I’ve run into a couple problems though - Ultimately, what should I be using to do this? Is there some kind of unmarshaller can I use to accomplish this perhaps? Is there a generic hook for all data types or perhaps I could call the default marshaller function at the end of this function?Any guidance would be appreciated.", "username": "TopherGopher" }, { "code": "monitor := &event.CommandMonitor {\n Started: func(_ context.Context, e *event.CommandStartedEvent) {\n fmt.Println(e.Command)\n },\n Succeeded: func(_ context.Context, e *event.CommandSucceededEvent) {\n fmt.Println(e.Reply)\n },\n Failure: func(_ context.Context, e *event.CommandFailedEvent) {\n fmt.Println(e.Failure)\n },\n}\n\nopts := options.Client().SetMonitor(monitor)\nclient, err := mongo.Connect(ctx, opts)\nsync.Map", "text": "Hi @TopherGopher,The easiest way to do this would be via our Command Monitoring API. If you’re just looking to log commands and server responses, you could do something like this:This will log all of the commands sent to the server as well as the server’s response or an error that occurred. If you have an application that does multiple concurrent operations, the logs may have started/succeed/failed events for different operations interleaved with each other. If you want to log the CommandStartedEvent for an operation along with it’s CommandSucceededEvent/CommandFailedEvent, you can use the RequestID field in the events to correlate them (perhaps using something like Go’s sync.Map type).– Divjot", "username": "Divjot_Arora" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can I view raw input/output from my queries?
2021-01-27T18:03:21.760Z
How can I view raw input/output from my queries?
5,858
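Building on the answer above, the RequestID field is what ties a CommandStartedEvent to its CommandSucceededEvent or CommandFailedEvent, and a sync.Map is enough to keep the in-flight commands around until the reply arrives. A sketch follows; the ping against a local server is only there to produce one correlated log line. Note that the failure hook on event.CommandMonitor is the Failed field, with the Failure string living on the CommandFailedEvent itself.

```go
package main

import (
	"context"
	"log"
	"sync"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/event"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// Maps a RequestID to the command that started it, so the raw reply or
	// failure can be logged next to the command that produced it.
	var inFlight sync.Map

	monitor := &event.CommandMonitor{
		Started: func(_ context.Context, e *event.CommandStartedEvent) {
			inFlight.Store(e.RequestID, e.Command.String())
		},
		Succeeded: func(_ context.Context, e *event.CommandSucceededEvent) {
			if cmd, ok := inFlight.LoadAndDelete(e.RequestID); ok {
				log.Printf("command: %s\nreply: %s", cmd, e.Reply)
			}
		},
		Failed: func(_ context.Context, e *event.CommandFailedEvent) {
			if cmd, ok := inFlight.LoadAndDelete(e.RequestID); ok {
				log.Printf("command: %s\nfailure: %s", cmd, e.Failure)
			}
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().
		ApplyURI("mongodb://localhost:27017").
		SetMonitor(monitor))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// Any operation now has its command and raw server reply logged together.
	client.Database("test").RunCommand(ctx, bson.D{{Key: "ping", Value: 1}})
}
```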
null
[]
[ { "code": "user_id", "text": "We upload user pictures into S3 and store the file on user_id folder (partition key) and each file has a unique name on that folder. When a document is deleted I need the user_id and file name pairs from the deleted object to continue clean up on S3. But I can’t access fullDocument on delete triggers, any workaround for this? I don’t prefer to execute a user initiated function since client’s connection is not reliable.", "username": "ilker_cam" }, { "code": "deleted{\n \"updateDescription.updatedFields.deleted\": {\n \"$exists\": true\n }\n}\ncreateIndex({deleted : 1},{expireAfterSeconds : 0});\n", "text": "Hi @ilker_cam,Welcome to MongoDB community.You are correct that delete trigger cannot access the full document object.I want to offer you a workaround/trick that have worked for me in the past.Port your delete trigger into an update trigger with fullDocument enabled and filter only updates where updated field is a ttl expired field . (Eg. deleted flag)Example :Now in your collection define a 0 sec life time ttl index on this new Field :In your application logic when you need to delete a picture update this field to current time.This will make the documents delete asap while triggering an update event.\nAs a result you will get an update event that a delete is on the way with fullDocument and can cleanup the file.In your application queries make sure to query only objects without the “deleted” flag.Please let me know if that works.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks a lot! I’ll implement your solution, but a fullDocument on delete trigger would save a lot time ", "username": "ilker_cam" }, { "code": "", "text": "@ilker_cam ,This is currently not possible as the change event for delete does not contain this information. So changing this is changing MongoDB server behaviour…", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Accessing document fields on database delete trigger
2021-01-30T21:29:56.391Z
Accessing document fields on database delete trigger
5,881
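The application-side half of the workaround above is the soft delete itself: rather than deleting the document, stamp the deleted field with the current time and let the 0-second TTL index plus the update trigger handle the S3 cleanup. A sketch with the Go driver; the database, collection, and field names (user_id, file) are assumptions for illustration.

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	pictures := client.Database("app").Collection("pictures")

	// Instead of deleting outright, stamp the TTL field with the current time.
	// The 0-second TTL index removes the document shortly afterwards, and the
	// update trigger still sees the full document (user_id, file name), so the
	// S3 cleanup function has everything it needs.
	_, err = pictures.UpdateOne(ctx,
		bson.M{"user_id": "someUser", "file": "photo-123.jpg"},
		bson.M{"$currentDate": bson.M{"deleted": true}},
	)
	if err != nil {
		log.Fatal(err)
	}

	// Everyday reads should then exclude soft-deleted documents.
	cur, err := pictures.Find(ctx, bson.M{
		"user_id": "someUser",
		"deleted": bson.M{"$exists": false},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cur.Close(ctx)
}
```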
null
[ "realm-web" ]
[ { "code": "Realm.App.getApp is not a function<script src=\"https://unpkg.com/[email protected]/dist/bundle.iife.js\"></script>// Create an anonymous credential\nconst credentials = Realm.Credentials.anonymous();\ntry {\n // Authenticate the user\n const user = await app.logIn(credentials);\n console.log(app.currentUser);\n // `App.currentUser` updates to match the logged in user\n return user\n} catch(err) {\n console.error(\"Failed to log in\", err);\n}\n", "text": "Hi Everyone, I am learning to use Mongodb Realm. I am trying to connect Realm App Client in my web sdk. But getting Realm.App.getApp is not a function error. Please see my code below and help me where I am doing wrong here.\nIn HTML,\n<script src=\"https://unpkg.com/[email protected]/dist/bundle.iife.js\"></script>In JS,const id = ‘< My App ID>’; // replace this with your App IDconst config = {\nid,\n};\nconst app = new Realm.App(config);async function loginAnonymous() {}\nloginAnonymous().then(user => {\nconsole.log(“Successfully logged in!”, user)\nconst app = Realm.App.getApp(\"< My App ID>\"); // replace this with your App ID\n// Getting error here as Realm.App.getApp is not a function\n})", "username": "Heartly_Regis" }, { "code": "", "text": "It looks like you’re using an older version, can you try using [email protected] in the HTML instead of 0.8.0?", "username": "Sumedha_Mehta1" } ]
Could not Initialize the Realm App Client
2021-01-31T02:26:44.346Z
Could not Initialize the Realm App Client
2,497
https://www.mongodb.com/…4_2_1024x512.png
[ "kafka-connector" ]
[ { "code": "{\n _id: 123\n name: xyz\n class: 2\n}\n", "text": "Hi All,I have a requirement , where i need to push change to kafka topic using \" Kafka Source Connector\" from a mongo collection . Only if there is a change in specific attribute only …Collection example :I want to push document in kafka topic only in case of any update in “name” . If any update happens in “class” i don’t want to push message.I tried reading below link but as per this we can only capture change at collection level .", "username": "Nitin_Singhal" }, { "code": "[{\n $match: {\n $and: [\n { \"updateDescription.updatedFields.class\": { $exists: true } },\n { operationType: \"update\" }\n ]\n }]\ncurl -X POST -H \"Content-Type: application/json\" --data '\n{\"name\": \"mongo-source-tutorial-update-value-changed\",\n\"config\": {\n\"connector.class\":\"com.mongodb.kafka.connect.MongoSourceConnector\",\n\"connection.uri\":\"mongodb://mongo1:27017,mongo2:27017,mongo3:27017\",\n\"pipeline\":\"[{\\\"$match\\\": { \\\"$and\\\": [{\\\"updateDescription.updatedFields.class\\\": { \\\"$exists\\\" : \\\"true\\\"}},{\\\"operationType\\\":\\\"update\\\"}] } }]\",\"database\":\"UpdateExample\",\"collection\":\"Source\"}}' http://localhost:8083/connectors -w \"\\n\" \n", "text": "You can use the pipeline configuration parameter to specify an aggregation pipeline that should $match the condition you are trying to achieve. In your case you want to match where operationType is Update and the field ‘class’ exists within updateDecription.UpdateFields as follows:Here is an example connector configuration: (note I had to escape the quotes in the pipeline so curl would accept it)", "username": "Robert_Walters" } ]
Can we capture CDC at attribute level instead of whole collection
2021-01-27T11:11:40.094Z
Can we capture CDC at attribute level instead of whole collection
1,832
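One way to validate that kind of pipeline value before wiring it into the connector is to run the same $match against a plain change stream. The sketch below uses the Go driver and keys the filter on the name attribute from the original question; the connection string is a placeholder and the database and collection names follow the connector example.

```go
package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	coll := client.Database("UpdateExample").Collection("Source")

	// Same shape of $match the connector's "pipeline" setting takes, here
	// keyed on the "name" attribute from the original question.
	pipeline := mongo.Pipeline{
		{{Key: "$match", Value: bson.D{
			{Key: "operationType", Value: "update"},
			{Key: "updateDescription.updatedFields.name", Value: bson.D{{Key: "$exists", Value: true}}},
		}}},
	}

	stream, err := coll.Watch(ctx, pipeline,
		options.ChangeStream().SetFullDocument(options.UpdateLookup))
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close(ctx)

	for stream.Next(ctx) {
		log.Printf("name changed: %s", stream.Current)
	}
	if err := stream.Err(); err != nil {
		log.Fatal(err)
	}
}
```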
null
[ "indexes" ]
[ { "code": "", "text": "From version 4.2 of mongodb, foreground and background index has been deleted.https://docs.mongodb.com/manual/reference/command/createIndexes/index.htm\nhttps://docs.mongodb.com/manual/core/index-creation/#index-operationsThen is there no way to background indexing from the 4.2 version?", "username": "Kim_Hakseon" }, { "code": "background", "text": "Then is there no way to background indexing from the 4.2 version?Hi @Kim_Hakseon,Per the documentation page you linked, all index builds in MongoDB 4.2+ use an optimised build path and there is no longer an option for foreground or background builds. The background index build option will be ignored if specified.The new index build process does not have the extensive blocking behaviour that was a concern for foreground index builds in previous server releases.Index build performance in 4.2+ should be similar (or better) than before depending on whether the collection is being actively updated while the index is being built:The optimized index build performance is at least on par with background index builds. For workloads with few or no updates received during the build process, optimized index builds can be as fast as a foreground index build on that same data.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "So in 4.2 version, is it impossible to do anything else in that collection while creating index is in progress like a background index?", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi @Kim_Hakseon,Unlike foreground index builds in prior server releases, 4.2+ index builds do not block database access while an index is being created.There are further details in the two links you shared in your post, but the Behaviour section addresses this specifically (my performance quote above is from the same section):Starting in MongoDB 4.2, index builds obtain an exclusive lock on only the collection being indexed during the start and end of the build process to protect metadata changes. The rest of the build process uses the yielding behavior of background index builds to maximize read-write access to the collection during the build. 4.2 index builds still produce efficient index data structures despite the more permissive locking behavior.The new index build approach effectively delivers the previous benefits of background index builds with the potential efficiency and performance of foreground index builds depending on resource contention.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
After 4.2, background createIndexes
2021-02-01T06:35:30.876Z
After 4.2, background createIndexes
7,160
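
For readers following the index-build thread above, here is a minimal mongosh sketch; the collection and field names are made up for illustration, and per the discussion the old background option is simply ignored on 4.2+.

```javascript
// Collection and field names here are hypothetical.
// On MongoDB 4.2+ this uses the optimized build path described above;
// a { background: true } option, if passed, is accepted but has no effect.
db.orders.createIndex({ customerId: 1 });

// One way to watch an in-progress index build from another connection:
db.currentOp({ "command.createIndexes": { $exists: true } });
```
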
null
[]
[ { "code": "", "text": "MongoDB Sharded Cluster ScalingWe are implementing scale for our sharded cluster environment wherein we add nodes and add shards to the cluster. Each node has only 3 shards. So for example, our minimum number of nodes is 3, so each node has three shards (shards 1-3 are found in each shard). During scale out, we add 1 node and increase number of shards by 1. Each node will still have three shards (node1: shard1,2,3, node2: shard1,2,4, node3: shard1,3,4, node4: shard2,3,4).Now my problem is how to scale in from 4 nodes to 3 nodes.\nHow will we merge data from shard 4 to shards1-3?\nIs this possible?", "username": "Ralph_Anthony_Plante" }, { "code": "", "text": "Welcome to the MongoDB community @Ralph_Anthony_Plante!The MongoDB server documentation includes procedures for Removing Shards from an Existing Cluster, including rebalancing of sharded collections and migration of unsharded collections.For general automation of self-managed deployments, you may want to consider using MongoDB Cloud Manager (cloud service for automation, backup, and monitoring of self-managed deployments) or MongoDB Ops Manager (fully on-premise equivalent of Cloud Manager). Either of these management tools provide APIs (and UIs) to manage multiple MongoDB deployments.Regards,\nStennie", "username": "Stennie_X" } ]
MongoDB Cluster (Sharded) Scale In
2020-12-14T03:53:46.626Z
MongoDB Cluster (Sharded) Scale In
1,363
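
As a rough sketch of the documented shard-removal procedure referenced above (shard and database names are placeholders), the drain is started and then monitored by re-running the same command:

```javascript
// Run against a mongos. "shard4" and "someDb" are placeholder names.
db.adminCommand({ removeShard: "shard4" });   // starts draining chunks off the shard
db.adminCommand({ removeShard: "shard4" });   // run again later to check the "remaining" counts

// Unsharded databases whose primary shard is the draining shard must be moved:
db.adminCommand({ movePrimary: "someDb", to: "shard1" });
// When the removeShard call reports state "completed", the shard is gone.
```
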
null
[ "change-streams" ]
[ { "code": "", "text": "we are using Mongo as our event store in our event sourced project.our database has only one replica just for supporting transactions and we are not gonna use multi replica, shards or any other advanced features of Mongo.in order for projections of events into a report table idea to work, we need to be able to permanently resume “Change Streams” from any point in the history.and we just need the history of insertions, no moreto do that, we store resume token and also operation time of each insertion along with it’s id in a separate collection, and when we want to resume from a certain insertion in the history, we query and find the exact resume token and we resume it from therethis causes a bunch of problems:\n1-storing resume tokens and operation time of each insertion via a change stream is fragile and also takes storage, but works!!!2-we recently hit the oplog size and found out that resumability is possible with oplog so in order to be able to resume from any point in the history, we have to store and keep the whole oplog throughout the life of application, but the size of oplog hits it’s max capacity every 2 hours!\nso it is not possible to save and keep oplog!what should we do?\nit seems there is no way other than implementing change stream from history ourselves on the Mongo.", "username": "Masoud_Naghizade" }, { "code": "", "text": "Hi @Masoud_NaghizadeWelcome to MongoDB community.So you are correct, in order to use the change stream resume token it must be present in the oplog.If current oplog size is covering 2 h consider increasing it anyway (we recommend trying to have an oplog of 12h+ anyway).Now in 4.4 you can define how much window you need and the oplog will try to grow to sustain it. Of course this requires a massive disk size and possibly can have performance overhead.One question I have is why you need to have ability to resume from any point in time? 
Maybe there is a better design.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "so in event sourcing, in order to have multiple reports(event projections) you need to be able to ask your event store to feed all events matching the criteria to your projector.if your projector fails to apply one event in to the projection in the middle of the way, you need to be able to ask it to resume it from there.other people use “EventStore” or “Kafka” as their event store, but Mongo gives the ability to live query your event streams and thats a huge benefit.anyway, thanks for your fast reply, but i think i have to implement my own change streams on top of Mongo", "username": "Masoud_Naghizade" }, { "code": "", "text": "Hi @Masoud_Naghizade,Why not to use our kafka connector to implement it:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "yeah, thanks.that was great and absolutely the solution to my problem.connecting mongo to Kafka topics is the way to stream my events into projectors.thanks again", "username": "Masoud_Naghizade" }, { "code": "", "text": "actually there is a catch event with connecting mongo changes to Kafka which is explained here .if your change stream shuts down and falls behind oplog size, those changes in between wont be announced to Kafka and you can only have the most recent changes pushed to Kafka.actually this option best suits those who just want to see most recent changes through Kafka in an at least once option.thanks anyway, i’m going to implement my own change stream driver on top of Mongo change streams", "username": "Masoud_Naghizade" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change stream permanent oplog
2021-01-30T08:59:29.703Z
Change stream permanent oplog
4,486
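
Two of the options mentioned in the change-stream thread above can be sketched in mongosh; the oplog size, collection names and stored-token shape are illustrative only.

```javascript
// Grow the oplog on a replica-set member (MongoDB 4.0+); size is in MB.
db.adminCommand({ replSetResizeOplog: 1, size: 51200 }); // ~50 GB, example value

// Resume a change stream of insert events from a previously stored token.
// The "resume_tokens" collection and its document shape are assumptions.
const saved = db.getSiblingDB("meta").resume_tokens.findOne({ _id: "events" });
const cs = db.events.watch(
  [ { $match: { operationType: "insert" } } ],
  { resumeAfter: saved.token }  // only works while the token is still covered by the oplog
);
```
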
https://www.mongodb.com/…5_2_582x1024.png
[]
[ { "code": " db.product_data.find( {\"uIdHash\":\"2lgys2yxouhug5xj3ms45mluxw5hsweu\"}).sort({\"userTS\":-1}).explain() db.product_data.find( {\"uIdHash\":\"2lgys2yxouhug5xj3ms45mluxw5hsweu\"}).sort({\"userTS\":-1}).limit(10).explain()", "text": "I am using Mongo 4.2 (stuck with this) and have a collection say “product_data” with documents with the following schema:_id:“2lgys2yxouhug5xj3ms45mluxw5hsweu_itmep53vy”\nuIdHash:“2lgys2yxouhug5xj3ms45mluxw5hsweu”\nuserTS:1494055844000\nsystemTS:1582138336379Case 1:\nWith this, I have the following indexes for the collection:I tried to execute db.product_data.find( {\"uIdHash\":\"2lgys2yxouhug5xj3ms45mluxw5hsweu\"}).sort({\"userTS\":-1}).explain()and these are the stages in result:Screenshot 2021-01-25 at 8.14.27 PM608×1068 39.6 KBOfcourse, I could realize that it would make sense to have an additional compound index to avoid the mongo in-memory ‘Sort’ stage. So here is another case.Case 2:\nNow I have attempted to add another index with those which were existing\n3. {uIdHash:1 , userTS:-1}: Regular and CompoundUp to my expectation, the result of execution here was able to optimize on the sorting stage:<>Screenshot 2021-01-25 at 8.20.24 PM1204×1056 63.9 KBAll good so far, now that I am looking to build for pagination on top of this query. I would need to limit the data queried. Hence the query further translates to db.product_data.find( {\"uIdHash\":\"2lgys2yxouhug5xj3ms45mluxw5hsweu\"}).sort({\"userTS\":-1}).limit(10).explain()The result for each Case now are as follows:Case 1 Limit Result:\nScreenshot 2021-01-25 at 8.22.53 PM1198×1230 69.8 KBThe in-memory sorting does less work (36 instead of 50) and returns the expected number of documents.\nFair enough, a good underlying optimization within the stage.Case 2 Limit Result:\nScreenshot 2021-01-25 at 8.26.24 PM1238×1222 66.8 KB\nSurprisingly, with the compound index in use and the data queried, there is an additional Limit stage added to processing!The doubts now I have are as follows:Why do we need an additional stage for LIMIT, when we already have 10 documents returned from the FETCH stage?What would be the impact of this additional stage? Given that I need pagination, shall I stick with Case 1 indexes and not use the last compound index?", "username": "naman" }, { "code": "", "text": "Hi @naman,Welcome to MongoDB community and thanks for the detailed post!!I believe the fetch stage is required in both scenarios as the documents found by the indexed searches have to be fetched in batches to the client.Now the sort stage when happening in memory can skip a limit stage in the plan. However, the limit plan is very fast compared to in-memory blocking sort.Therefore in almost all cases the second index to avoid the sort will be the optimal for this query.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hey @Pavel_Duchovny, thanks for your response. While the sort stage is quite understood, II was curious to know why is it that the FETCH stage couldn’t skip the LIMIT in the plan based on the indexes used to query the data within. e.g. in the limit based query, the IXSCAN and FETCH both made use of the same index and returned exactly the same amount of documents as asked to limit for, then why add another stage (even if it is quicker)?PS: It took quite some time to get this post live here on the community and in the meanwhile I had posted it on SO#65889806. But because of answers made there, I can’t really pull it down. 
Maybe once we reach a conclusion to the discussion here, someone can answer the thread and help me close it a well. Just learning how this works and would take a note of avoiding that going forward.", "username": "naman" }, { "code": "", "text": "Hi @namanPlease note that the limit operation is done on the cursor level and not on the query.So does in memory sort. Therefore the fetch stage inly fills the server side cursor. As the in memory sort already operate on a cursor there is no need to add a limit. However, an index sort happens before cursor is filled therefore it has to do fetch and limit.If that doesn’t answer your questions please provide full explain (“executionStats”) to us.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \"executionSuccess\": true,\n \"nReturned\": 10,\n \"executionTimeMillis\": 0,\n \"totalKeysExamined\": 10,\n \"totalDocsExamined\": 10,\n \"executionStages\": {\n \"stage\": \"LIMIT\",\n \"nReturned\": 10,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 11,\n \"advanced\": 10,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"limitAmount\": 10,\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 10,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 10,\n \"advanced\": 10,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"docsExamined\": 10,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 10,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 10,\n \"advanced\": 10,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"keyPattern\": {\n \"uIdHash\": 1,\n \"userTS\": -1\n },\n \"indexName\": \"uIdHash_1_userTS_-1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"uIdHash\": [],\n \"userTS\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"uIdHash\": [\n \"[\\\"2lgys2yxouhug5xj3ms45mluxw5hsweu\\\", \\\"2lgys2yxouhug5xj3ms45mluxw5hsweu\\\"]\"\n ],\n \"userTS\": [\n \"[MaxKey, MinKey]\"\n ]\n },\n \"keysExamined\": 10,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n }\n}\n", "text": "executionStatsThe executionStats for the limit with the compound index are as follows:and to further relate what I am trying to convey if you notice the scan and fetch looks sufficient to provide the exact 10 documents that shall be the result effectively. Hope that helps explain better. The complete output with winningPlan and rejectedPlans is accessible here as a gist", "username": "naman" }, { "code": "", "text": "@Pavel_Duchovny not sure if I was able to tag you with the stats in the previous response.", "username": "naman" }, { "code": "", "text": "Hi @naman,The stats show the same as the screen shots and I still have the same explanation.Limit stage is needed as there is no other method performed in memory. 
As opposed to in memory sort.Thanks pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I don’t see a reason to dove any dipper as clearly the addition of that stage has no impact on timing", "username": "Pavel_Duchovny" }, { "code": "", "text": "Limit stage is needed as there is no other method performed in memory.Can I infer from this, that the cursor has to be processed(must) in memory before returning the result and LIMIT here is idempotent in nature(might be specifically for this case)?", "username": "naman" }, { "code": "cursor.sort()kcursor.sort()LIMITallowDiskUsefind()aggregate()", "text": "Hi @naman,The difference you are observing in explain output is expected based on the documented sort behaviour.The extra query planning stage may seem counter-intuitive at first, but consider the following excerpts:If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. A blocking sort indicates that MongoDB must consume and process all input documents to the sort before returning results. See: cursor.sort() Behaviors: Sort and Index Use.If MongoDB cannot obtain the sort order via an index scan, then MongoDB uses a top-k sort algorithm. This algorithm buffers the first k results (or last, depending on the sort order) seen so far by the underlying index or collection access. See: cursor.sort() Behaviors: Limit Results.In a non-blocking, or indexed sort, the sort step scans the index to produce results in the requested order. See: Use Indexes to Sort Query Results.With an in-memory (blocking sort), output is buffered until the sort completes using a top-k sort algorithm. The top-k sort only keeps as many results as are needed for query processing, which in your example would be the limit of 10 documents. Per your Case 1 Limit Result, 24 index key comparisons were needed and 24 documents had to be fetched as input for the in-memory SORT stage which returned 10 documents.With an indexed (non-blocking) sort, results are streamed to subsequent query processing stages and a LIMIT stage is used to stop execution once the limit of 10 documents has been reached. Per your Case 2 Limit Result, only 10 index keys were examined leading to 10 documents fetched and returned. As @Pavel_Duchovny noted, this is a more optimal query.The stages in the query plan are required for correctness. The count of stages is not as important as the amount of work that has to be done when the same pipeline runs against a much larger data set.An indexed sort will effectively have a constant amount of work to do even if there are many more potential matching documents. An in-memory sort will have to iterate all matching documents to produce the sorted result set, so there will be more processing overhead as the number of matching documents (before sorting and limiting) grows.The comparative performance impact may not be very evident for a small number of documents in a test environment, but the indexed sort will be a much more scalable approach for future data growth.In-memory sorts can also be fragile if query limits (or your average document size) change significantly from your test data or original assumptions. There is a memory limit for blocking sort operations (100MB in MongoDB 4.4; 32MB in prior versions). In-memory sorts exceeding this limit will fail with an exception like “Sort operation used more than the maximum 33554432 bytes of RAM. 
Add an index, or specify a smaller limit”.You can include an allowDiskUse query option for find() queries in MongoDB 4.4 (or aggregate() queries in prior server versions) to support buffering in-memory sorts to disk if needed, but that I/O overhead is not going be ideal for performance.Regards,\nStennie", "username": "Stennie_X" }, { "code": "LIMIT", "text": "LIMIT stage is used to stop execution once the limit of 10 documents has been reachedThank you for the detailed answer @Stennie_X, I was really looking for this and yeah could agree with Pavel that the latter looked more optimal anyway by the work done under explanation. The additional details are really useful too. ", "username": "naman" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unwanted limit stage in querying data
2021-01-25T19:49:45.595Z
Unwanted limit stage in querying data
4,442
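
For reference, the winning setup from the thread above can be reproduced in mongosh as follows, using the collection, field and value names quoted in the question:

```javascript
// Compound index that serves both the equality filter and the sort order:
db.product_data.createIndex({ uIdHash: 1, userTS: -1 });

// Paginated query: the IXSCAN streams documents already in sorted order,
// so a cheap LIMIT stage replaces the in-memory (blocking) SORT.
db.product_data.find({ uIdHash: "2lgys2yxouhug5xj3ms45mluxw5hsweu" })
  .sort({ userTS: -1 })
  .limit(10)
  .explain("executionStats");
```
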
null
[]
[ { "code": "React", "text": "Hi,Feedback regarding this web page: What Is The MERN Stack? Introduction & Examples | MongoDB\nThe page has 3 minor typos relating to AngularJS (presumably carried over from the MEAN page this was based upon) which should be replaced by React. Please proof read and update.\ne.g.It was suggested via twitter to feedback here so that this can be fixed.Regards\nJason King @jsonking", "username": "Jason_King" }, { "code": "", "text": "Welcome to the MongoDB community forum @Jason_King and thanks for the feedback (well spotted!).I’ll pass those changes on to our web team to correct. It looks like those Angular references were indeed copied over from the original “MEAN Stack” page.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MERN Stack webpage typos
2021-01-31T19:19:28.101Z
MERN Stack webpage typos
1,538
https://www.mongodb.com/…2_2_1024x576.png
[ "realm-web" ]
[ { "code": "const Realm = require(\"realm\");\nconst BSON = require(\"bson\");\n\nconst app = new Realm.App({ id: \"debuggers-lzxyc\" });\n\nconst TaskSchema = {\n name: 'Task',\n properties: {\n _id: 'objectId',\n _partition: 'string?',\n name: 'string',\n status: 'string',\n },\n primaryKey: '_id',\n};\n\nconst email=\"[email protected]\";\nconst password=\"nikhil103\";\n\n// Create an email/password credential\nconst fun=async ()=>{\n\ntry {\n //const credentials = await app.emailPasswordAuth.registerUser(email, password);\n const credentials = Realm.Credentials.emailPassword(\nemail,\npassword\n);\n const user = await app.logIn(credentials);\n console.log(JSON.stringify(user));\n console.log(\"Successfully logged in!\", user.id);\n const mongodb = app.currentUser.mongoClient(\"mongodb-atlas\");\n console.log(app.currentUser);\n const users = mongodb.db(\"Debuggers\").collection(\"Events\");\n const result = await users.insertOne({\n title:\"jhasvchjvas\",\n desc:\"cascas\"\n });\n console.log(result);\n\n /*\n realm.write(() => { //Creating the data\n const newTask = realm.create(\"Task\", {\n _id: new BSON.ObjectID(),\n name: \"go grocery shoppijcdbhjsdbcng\",\n status: \"Open\",\n });\n }); */\n\nconsole.log(\"Successfully done\");\n\n} catch (err) {\n console.error(\"Failed to log in\", err);\n}\n}\nfun()\n", "text": "I have been working on mongoDb Realm for My Android Application and Web App. So the Inserting Document part while logging in works fine for my android application but gives error in node.jsError: Insert Not Permitted Code-13Code:Output:\nimage1920×1080 231 KB", "username": "Debuggers_NITKKR" }, { "code": "newTask", "text": "You’re using MongoDB Atlas client – Have you set up your user with the correct rights to allow for read/write into your collection?Also, it looks like you’re manually adding the “_id” field to your TaskSchema schema and then generating a BSON Object ID in your newTask function however “_id” and value is automatically created for you when you insert a new document.Try removing those fields from your schema and function, verify user credentials in your Atlas configuration (for now choose readWriteAnyDatabase) and let me know.", "username": "Andrew_W" }, { "code": "", "text": "image836×370 14.9 KB\nThis are the rules for User Collection.And I am been able to insert document via my Android Application but doesnot permit me to insert through my web application .\nAnd in the user collection data is being added as custom user data, so the id being stored in it is the id created while registering the user.Thank U for your response!", "username": "Debuggers_NITKKR" }, { "code": "", "text": "In the Atlas configuration online, you’re looking for settings like this:\nScreen Shot 2021-01-30 at 11.20.17 AM1300×338 28.3 KB Screen Shot 2021-01-30 at 11.22.05 AM809×462 31.9 KB\n Do not keep the IP address as 0.0.0.0 when you plan to deploy to production.\nThe users (or alternatively custom roles), need to be configured on your backend to be permitted to alter databases/collections.\nScreen Shot 2021-01-30 at 11.27.28 AM744×901 73 KB", "username": "Andrew_W" }, { "code": "", "text": "image1582×125 8.27 KBUpdated the Database, Still No Change. 
", "username": "Debuggers_NITKKR" }, { "code": "", "text": "image1627×119 9.66 KB", "username": "Debuggers_NITKKR" }, { "code": "", "text": "And your mongoDB url in your app is using the proper connection url and config?\n\nScreen Shot 2021-01-31 at 3.00.50 AM1534×938 97 KB\nAlso, did you see this thread on StackOverflow?Beyond that I’m stumped. Hopefully someone reading the thread will catch something we both overlooked.", "username": "Andrew_W" }, { "code": "", "text": "Yeah, It has been resoved with ur previous solution. Thank u so much for replying", "username": "Debuggers_NITKKR" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Insert Not Permitted: Code 13 in node js realm app
2021-01-29T19:27:49.255Z
Insert Not Permitted: Code 13 in node js realm app
8,224
https://www.mongodb.com/…b6903cd4562d.png
[ "aggregation", "mongoose-odm" ]
[ { "code": "", "text": "I need help with this. This is how my ratings schema looks.\nNeed the aggregation logic for this.Screenshot 2021-01-30 170601580×566 16.9 KB", "username": "Atul_tiwari" }, { "code": "movie_id", "text": "Hello @Atul_tiwari,Welcome again to MongoDB Developer Forum,You could build aggregation pipeline (aggregate()) using below pipeline stages,Calculate - Total Ratings Count, Average Ratings.You can use $sort for sorting by ratingfor pagination you can use $skip and $limit“List - All Comments” This is little unclear, how you want list of all commentsFor more better answer you need to describe your expected result.", "username": "turivishal" } ]
Help with aggregation count, average, sort and pagination
2021-01-30T19:17:55.605Z
Help with aggregation count, average, sort and pagination
3,371
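
Because the ratings schema in the thread above is only shown as a screenshot, the following mongosh pipeline is a guess at the shape the reply suggests; field names such as movie_id, rating and comment are assumptions.

```javascript
// Total count and average rating for one movie (field names assumed):
db.ratings.aggregate([
  { $match: { movie_id: someMovieId } },          // someMovieId is a placeholder
  { $group: {
      _id: "$movie_id",
      totalRatings:  { $sum: 1 },
      averageRating: { $avg: "$rating" },
      comments:      { $push: "$comment" }        // "list all comments" variant
  } }
]);

// Listing individual ratings, sorted by rating with skip/limit pagination:
db.ratings.aggregate([
  { $match: { movie_id: someMovieId } },
  { $sort:  { rating: -1 } },
  { $skip:  0 },                                  // page offset
  { $limit: 10 }                                  // page size
]);
```
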
null
[ "queries", "data-modeling" ]
[ { "code": "Student {\n id,\n name,\n surname,\n birthDate,\n address\n}\n\nTeacher {\n id,\n name,\n surname,\n birthDate,\n address,\n courseInformation: {}\n}\n\nEnrollment {\n student: {\n name,\n surname,\n address\n },\n teacher: {\n name,\n surname\n },\n exercises: {...}\n }\n", "text": "Hi everyone, I’m a real newbie to the noSQL world and I wanted to play with it in combination with DDD. I have some doubts about updating data that is partially duplicated from one aggregate to another. Lets take an example:I’m modelling an online personal lecture system with Teachers and Students. Students can enroll to the course proposed by a Teacher. The Enrollment is private between the Teacher and the Student. The Teacher can assign exercises to the Student through the Enrollment. The Student can submit the completed exercise. After the Enrollment period is done, the System remove the Enrollment.In Mongo, I would model three main Documents: Teacher, Student and Enrollment. I Explicitly created an Enrollment Aggregate/Document, so that the Students and the Teachers can directly enquire all the active Enrollments.The Mongo Documents:Assume that for some reason the Enrollment is interested in keeping the address consistent with the Student address.\nMongo now has multi document ACID transactions, but I’m trying to follow DDD principles and make asynchronous updates across Aggregates/Documents.\nMy question is, prior multi document ACID transactions, how one can maintain consistency between data copied across multiple documents?\nWas some sort of messaging mechanism used? I’m quite afraid of implementing a messaging system due to the difficulties of atomic and reliable aggregate update + message send.Thank you", "username": "Green" }, { "code": "", "text": "Hi @Green,Welcome to MongoDB community .Keeping atomic consistency across multiple documents is best achieved with transactions and there are many challenges of achieving it without it.A good approach will be to try and reduce the places we need to update a value even in the price of referencing another document in a read.For example why wouldn’t you hold the address on the student document and use a extended reference pattern to fetch the most up to date address from student collection pointing to its _id. Additionally, do you need to keep the address of the enrollment time or the most up to date one?If you still need to keep the documents in sync you have several options:Another option is to use change streams available from 3.6 and to have a module listening for updates to addresses and update the enrollment documents.Let me know if that helps.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "Enrollment {\n student_id,\n student: {\n name,\n surname\n },\n teacher_id,\n teacher: {\n name,\n surname\n },\n exercises: {...}\n}\n", "text": "Hi @Pavel_Duchovny, thank you for your reply. As a premise, I’m just experimenting with DDD. I fully agree with your point about transactions, but I would like to follow the DDD principles by asynchronously updated aggregates and try to clearly separate the access to the domain model by the services. I would like to isolate as much as possible so that each service talks to only one aggregate.\nOn your point about the address, of course, the address is not probably going to change frequently, but I take this just as an example.\nAbout, the extended reference, I would model it this way right?Of course the name and surname are not going to change, so I dont have to worry about. 
For the address it means that when I access the Enrollment I need two queries correct? One for the enrollment and the second lookup for the address.Regarding the change streams, I read about that, but this looks a lot like CQRS, which is something that frighten me a little bit Thank you!", "username": "Green" } ]
Update embedded document
2021-01-30T11:39:00.232Z
Update embedded document
4,075
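
A possible shape for the "extended reference plus lookup" read discussed above, written as a single aggregation rather than two round trips; collection and field names follow the schemas sketched in the thread but are not definitive.

```javascript
// Fetch one enrollment and attach the student's current address.
db.enrollments.aggregate([
  { $match: { _id: enrollmentId } },              // enrollmentId is a placeholder
  { $lookup: {
      from: "students",
      localField: "student_id",
      foreignField: "_id",
      as: "studentDoc"
  } },
  { $unwind: "$studentDoc" },
  { $addFields: { "student.address": "$studentDoc.address" } },
  { $project: { studentDoc: 0 } }                 // drop the temporary joined array
]);
```
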
null
[ "database-tools" ]
[ { "code": "mongodump --ssl --host=localhost --port=33444 -u=\"User\" -p=\"Password\" --sslPEMKeyFile=/etc/ssl/mongodb/client/client.pem --sslCAFile=/etc/ssl/mongodb/server/server.pem --sslPEMKeyPassword=password --db=mydb --archive=./backups/backup_time.gz --gzipmongodb-database-toolsv100.2.1mongodb-org-tools4.4.2", "text": "Before upgrading from 4.4.2 to 4.4.3, on my Debian system, i was able to connect to my local database, with mongodump command like this:mongodump --ssl --host=localhost --port=33444 -u=\"User\" -p=\"Password\" --sslPEMKeyFile=/etc/ssl/mongodb/client/client.pem --sslCAFile=/etc/ssl/mongodb/server/server.pem --sslPEMKeyPassword=password --db=mydb --archive=./backups/backup_time.gz --gzipAfter upgrade, i’m getting this error:Failed: can’t create session: could not connect to server: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: localhost:38917, Type: Unknow\nn, State: Connected, Average RTT: 0, Last error: connection() : x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0 }, ] }Can someone help me? Sorry for my bad Language!P.S. Downgrade mongodb-database-tools to v100.2.1 and mongodb-org-tools to 4.4.2 mongodump command working as espected!", "username": "Liber_Design" }, { "code": "GODEBUG=x509ignoreCN=0GODEBUG=x509ignoreCN=0 mongodump ....export GODEBUG=x509ignoreCN=0\nmongodump ...\n", "text": "The error explains you can use an environment variable GODEBUG=x509ignoreCN=0 to match on the CommonName.Either:\nGODEBUG=x509ignoreCN=0 mongodump ....\norIt seems the SANs should contain the servername. So updating your server certs to use SANs should resolve this more permanently.", "username": "chris" }, { "code": "{env: {GODEBUG: 'x509ignoreCN=0'}}", "text": "Thx for your fast reply! The SANs in my certificate is follow:distinguished_name = req_distinguished_name\nx509_extensions = v3_req\nprompt = no\n[req_distinguished_name]\nC = IT\nST = CA\nL = Milano\nO = Ferrari\nOU = Web Development\nCN = localhost\n[v3_req]\nkeyUsage = critical, digitalSignature, keyAgreement\nextendedKeyUsage = serverAuth\nsubjectAltName = @alt_names\n[alt_names]\nDNS.1 = www.localhost.com\nDNS.2 = localhost.com\nDNS.3 = localhostBtw, i have put env variables in my node_env path like: {env: {GODEBUG: 'x509ignoreCN=0'}} and problem is solved!", "username": "Liber_Design" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't connect with mongodump after upgrading from 4.4.2 to 4.4.3
2021-01-30T10:27:22.785Z
Can&rsquo;t connect with mongodump after upgrading from 4.4.2 to 4.4.3
8,990
null
[ "golang" ]
[ { "code": " result, err := col.UpdateOne(\n\t\tcontext.Background(),\n\t\tbson.M{\"_id\": `{$regex : /^` + companyID + `_/i}`, \"count\": `{\"$lt\": 100}`},\n\t\tbson.D{\n\t\t\t{Key: \"$push\", Value: bson.D{{Key: \"events\", Value: adding.FeedEvent{ID: savedID, Type: adding.FeedTypeNote}}}},\n\t\t\t{Key: \"$set\", Value: bson.D{{Key: \"end\", Value: ts}}},\n\t\t\t{Key: \"$inc\", Value: bson.D{{Key: \"count\", Value: 1}}},\n\t\t\t{Key: \"$setOnInsert\", Value: bson.D{{Key: \"_id\", Value: companyID + \"_\" + strconv.FormatInt(ts, 10)}}},\n\t\t\t{Key: \"$setOnInsert\", Value: bson.D{{Key: \"start\", Value: ts}}},\n\t\t},\n\t\t&options.UpdateOptions{Upsert: &valTrue, BypassDocumentValidation: &valTrue},\n\t)\nQuery<FeedBucket> q1 = repo.createQuery(FeedBucket.class, \"{'_id':/^\" + companyId + \"_/,'count':{$lt:100}}\");\n UpdateOperations<FeedBucket> ops = repo.getMorphiaDatastore().createUpdateOperations(FeedBucket.class)\n .push(\"events\", new FeedEvent(saved.getId().toString(), saved.getType()))\n .set(\"end\", saved.getTs())\n .inc(\"count\")\n .setOnInsert(\"_id\", companyId + \"_\" + saved.getTs())\n .setOnInsert(\"start\", saved.getTs());\n repo.getMorphiaDatastore().updateFirst(q1, ops, true);\n", "text": "I am trying to implement Bucket Pattern (Paging with the Bucket Pattern - Part 2 | MongoDB Blog) in golang. In Java it was no problem at all, and in golang I am stocking with Error:“Error: 66 Performing an update on the path ‘_id’ would modify the immutable field ‘_id’”.Is it possible to disable this validation (BypassDocumentValidation has no effect on it)? Or did I missed something?same in JAVA, works perfectly", "username": "firedrago" }, { "code": "bson.M{\"_id\": bson.D{{Key: \"$regex\", Value: primitive.Regex{Pattern: \"^\" + companyID + \"_\", Options: \"i\"}}}, \"count\": bson.D{{Key: \"$lt\", Value: 100}}},", "text": "my fault …\ncorrect filter must be -bson.M{\"_id\": bson.D{{Key: \"$regex\", Value: primitive.Regex{Pattern: \"^\" + companyID + \"_\", Options: \"i\"}}}, \"count\": bson.D{{Key: \"$lt\", Value: 100}}},", "username": "firedrago" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Paging with the Bucket Pattern and custom _id
2021-01-30T21:29:33.193Z
Paging with the Bucket Pattern and custom _id
2,104
null
[ "queries" ]
[ { "code": "foo{ _id: 1, done: false },\n{ _id: 2, done: false }\n...\ndonefalsetruefindAndModifydb.foo.findAndModify({\n query: { done: false },\n update: { done: true }\n}\n", "text": "Imagine I have a collection called foo with documents like this:I have X concurrent threads that want to flip a document’s done from false to true. It is important that no document is modified more than once. To do this, I am using findAndModify like this:I am satisfied that findAndModify guarantees that my documents cannot be updated twice, even by two concurrent threads.However, I am noticing that when I have thousands of documents and hundreds of threads performing this update, the rate at which documents are processed is much faster than towards the end.I want to gain a better understanding of what Mongo is doing under the hood.Does Mongo effectively following this algorithm:Therefore, is the case that for small numbers of documents, and large numbers of threads, I am seeing contention due to collisions in which documents are being locked?If so, is there anything I can do to reduce the likelihood of these collisions? I guess I could pre-partition my documents according to the threads I have (e.g. add a field called “thread” with the name of the thread) and include this in the query to the findAndModify to guarantee there were no collisions.", "username": "Rupert_Madden-Abbott" }, { "code": "query: { done: false }updateMany", "text": "Hello @Rupert_Madden-Abbott, welcome to the MongoDB Community forum.The scenario is that the update operation happens for a set of documents which match the same condition - query: { done: false }. The updateMany method can be used to submit the operation as one command to the database server. The update happens atomically on each of the documents matching the condition on the server.", "username": "Prasad_Saya" }, { "code": "updateManydb.foo.findAndModify({\n query: { done: false },\n update: { done: true, someImportantPieceOfInformation: \"a value known only to running thread\" }\n}\nupdateMany", "text": "Thanks very much for your response.I am deliberately not using updateMany because I also need to store a unique piece of information against each document. This piece of information is generated by each thread dynamically and is not known in advance for all documents.So my query actually looks like this:Therefore, I cannot use updateMany.Another way of putting it is that I am implementing a variation of a queue in Mongo but one in which receives could come from any part of the queue instead of just the tip.", "username": "Rupert_Madden-Abbott" }, { "code": "", "text": "update: { done: true, someImportantPieceOfInformation: “a value known only to running thread” }How does the thread know which document to update? What is the relationship between the thread and a document?", "username": "Prasad_Saya" }, { "code": "", "text": "How does the thread know which document to update? What is the relationship between the thread and a document?It doesn’t know and it doesn’t need to. It just needs to update any document that isn’t done.That is really part of my question: When you provide findAndModify with a query that matches multiple documents (because it does not matter which one is updated as long as exactly one is updated), then how does Mongo choose which document you get back? 
I know that it does choose one but the docs don’t say how (unless you sort the query in which case they do).", "username": "Rupert_Madden-Abbott" }, { "code": "while true {\n\tcreate/get a thread _and_ the important_info\n\tsubmit the update using the thread:\n\t\tdoc = db.collection.findAndModify( { query: { done: false }, update: { done: true, importantInfo: important_info } } )\n\tif doc is null\n\t\tall docs are updated (no update happened)\n\telse\n\t\tthe updated doc\n}", "text": "Assume there are some documents in the collection. To submit each of the update using a thread, I am assuming that your process is like this:", "username": "Prasad_Saya" }, { "code": "", "text": "Yes, that is a valid example.", "username": "Rupert_Madden-Abbott" } ]
findAndModify and concurrency
2021-01-27T12:06:53.944Z
findAndModify and concurrency
7,170
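
A minimal mongosh sketch of the claim step discussed above, using findOneAndUpdate (the shell helper equivalent of findAndModify); the importantInfo value and the optional thread field are placeholders taken from the discussion.

```javascript
// Atomically claim one not-yet-done document for this worker.
const doc = db.foo.findOneAndUpdate(
  { done: false },
  { $set: { done: true, importantInfo: infoForThisWorker } },  // placeholder value
  { sort: { _id: 1 }, returnNewDocument: true }                // explicit order removes ambiguity
);
if (doc === null) {
  // every document has already been claimed
}

// Pre-partitioning variant to reduce contention between workers
// ("thread" is a hypothetical field added when the documents are created):
// db.foo.findOneAndUpdate({ done: false, thread: myWorkerName }, ...)
```
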
null
[ "queries", "atlas-search" ]
[ { "code": "", "text": "Hi Team,Working with the mongo aggregation on particular collection which consist of billions of documents with index on its primary keys and/or nested keys. Want to know whether this much of data in document can find over nested object aggregators that have unaware keys in request? If it is how this will results and how much time it will took to get final output. Thinking in example of NLP for unstructured data rendering from document collection. Please give suggestion, will help a lot.", "username": "Jitendra_Patwa" }, { "code": "", "text": "Hi @Jitendra_Patwa,Welcome to MongoDB communityIn order to better answer your question. I would need more details on the data.Is it unstructured in a way that you don’t know the structure and name of fields or is it dynamic and you can structure the data to better work for querying.My main point is to understand whether you require an index strategies for :The way you process the incoming data,their types and query pattern should allow us to locate the best method.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "[\n {\n \"_id\": \"5bfd1...\",\n \"source\": \"Citrus\",\n \"data\": {\n \"name\": \"Orange\",\n \"color\": \"Orange\",\n \"suppliers\": {\n \"name\": \"Punjab\"\n },\n \"quantity\": \"6T\"\n }\n },\n {\n \"_id\": \"5bfd2...\",\n \"source\": \"Citrus\",\n \"data\": {\n \"name\": \"limes\",\n \"color\": \"Yellow\",\n \"suppliers\": {\n \"name\": \"Gujarat\"\n },\n \"quantity\": \"5T\"\n }\n },\n {\n \"_id\": \"5bfd3...\",\n \"source\": \"Citrus\",\n \"data\": {\n \"name\": \"limes\",\n \"color\": \"Green\",\n \"suppliers\": {\n \"name\": \"Gujarat\",\n \"zone\": \"North\",\n \"cities\": [\n \"A\",\n \"B\",\n \"C\"\n ]\n },\n \"quantity\": \"5T\"\n }\n },\n {\n \"_id\": \"5bfd4...\",\n \"source\": \"Tropical\",\n \"data\": {\n \"name\": \"bananas\",\n \"color\": \"Yellow\",\n \"suppliers\": {\n \"name\": \"Maharashtra\",\n \"zone\": \"NorthEast\",\n \"vendor\": {\n \"vendorname\": \"Villare\"\n }\n },\n \"quantity\": \"5T\"\n }\n },\n {\n \"_id\": \"5bfd5...\",\n \"source\": \"Tropical\",\n \"data\": {\n \"name\": \"bananas\",\n \"color\": \"Yellow\",\n \"suppliers\": {\n \"name\": \"Maharashtra\",\n \"zone\": \"South\",\n \"vendor\": {\n \"vendorname\": \"Robb\"\n }\n },\n \"quantity\": \"5T\"\n }\n },\n {\n \"_id\": \"5bfd6...\",\n \"source\": \"Berries\",\n \"data\": {\n \"name\": \"kiwifruit\",\n \"color\": \"Green\",\n \"suppliers\": {\n \"name\": \"TamilNadu\",\n \"zone\": \"South\",\n \"vendor\": {\n \"vendorname\": \"Tamil\",\n \"resides\": [\"Chennai\",\"Thiruchi\"],\n \"transport\": {\n \"motor\": \"Truck\",\n \"ferry\": \"Boat\"\n }\n }\n },\n \"quantity\": \"5T\"\n }\n }\n]\n", "text": "Hi Pavel,Thanks for describing in more details for my needs. 
Basically, looking for the option for atlas search based on some aggregated queries which is not structure in documents, likewise NLP does.Please find below fruit collection that have structured source and data key which consist of fruit suppliers in the country.I may want to query that in following ways,\n1 Find the berries which supplied by boat in Chennai.\n2 Suppliers who supply fruit from Gujarat\n3 Find fruit which have 5T quantity supply\n…etcIn above 3rd query, we know that in data field have “quantity” node which then find out or match by value 5T.In 2nd we only know about the name of supplier but we doesn’t know in which name of fields in document it will belongs to also the same in 1st one where boat is in nested object and we only know the value “boat” , “berries” and “Chennai”. Here I faced problem if unstructure field or value is in incoming request how it will be used to query by only text or conditional text statements.Thanks.", "username": "Jitendra_Patwa" }, { "code": "wildcardcompound[\n {\n \"$search\": {\n \"wildcard\": {\n \"path\": \"suppliers*\",\n \"query\": \"Boat\"\n }\n }\n }\n", "text": "Hi @Jitendra_Patwa,I am not a 100% sure I got your entire requirement or challenge.But in Atlas search you can do a dynamic mapping of fields for a collection and therefore allow us to Search for words/text or phrases without necessarily know the fields we search on.Moreover, we have several operators like wildcard searches or compound where we could point to a root path or subpath that we think the search should be performed.For example:Use a wildcard operator in an Atlas Search query to match any character.This query with a dynamic index find documents where a value of one of the fields is boat … With the use of compound operator you can have several wildcard operations filters:Use the compound operator to combine multiple operators in a single query and get results with a match score.Let me know if this help.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Handling unstructured querying in documents
2021-01-29T19:25:48.453Z
Handling unstructured querying in documents
1,732
null
[]
[ { "code": "", "text": "I need to update a field in one collection using a field value from another collection. What is the best way to accomplish the following SQL in MongoDB 3.6?update action a set serial_nr=(select serial_nr from device d where a.device_id=d.device_id);", "username": "Don_Gathman" }, { "code": "deviceactiondb.action.aggregate( \n[\n { \n $lookup: {\n from: \"device\",\n localField: \"device_id\",\n foreignField: \"device_id\",\n as: \"R\"\n } \n },\n { \n $match: { $expr: { $gt: [ { $size: \"$R\" }, 0 ] } } \n }\n]\n).forEach( doc => db.action.updateOne( { _id: doc._id }, { $set: { serial_nr: doc.R[0].serial_nr } } ) )\n$setserial_nraction", "text": "Assuming there are two collections, device and action:Note the $set update operator will create a field called as serial_nr in the action's document, in case the field doesn’t exist (otherwise just replaces the existing value).", "username": "Prasad_Saya" }, { "code": "", "text": "Very helpful, thank you!", "username": "Don_Gathman" }, { "code": "", "text": "update action a set serial_nr=(select serial_nr from device d where a.device_id=d.device_id);how do we modify this to :\nupdate action a set Numberofdevices=(select count(serial_nr) from device d where a.device_type=d.device_type);In which I need to update counts in main collection", "username": "Vikas_Shah" } ]
Update with Correlated Subquery
2020-03-05T18:03:19.024Z
Update with Correlated Subquery
5,161
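
For the unanswered follow-up about counts, the same lookup-plus-update pattern from the accepted answer could plausibly be adapted like this (an untested sketch reusing the thread's collection and field names):

```javascript
// Set Numberofdevices on each action document to the number of devices
// that share its device_type.
db.action.aggregate([
  { $lookup: {
      from: "device",
      localField: "device_type",
      foreignField: "device_type",
      as: "R"
  } }
]).forEach(doc =>
  db.action.updateOne(
    { _id: doc._id },
    { $set: { Numberofdevices: doc.R.length } }
  )
);
```
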
null
[ "data-modeling" ]
[ { "code": "user: {}\nbusiness: {\n owner: user_id,\n locations: [ businessLocation_id, ... ]\n}\nbusinessLocation: {\n businessId,\n employees: [employee_id, ...]\n address: {},\n ...\n}\nemployee: {\n businessLocation_id,\n user_id,\n role\n}\norder: {\n businessLocation_id,\n employee_id,\n *providers: [],\n customer: {},\n recipient: {}\n}\n", "text": "So I’m trying to structure out my schema_version 1 and I wanted to run it by everyone to let me know if I’m on the right track or way off. I know querying is a big thing so I’ll quickly break down relationships and how the application would be used, then my current schema, and then considerations for a schema_version 2. With that said here’s the app flow:A User can can add a Business which can have several BusinessLocations which can have many employees and many orders.\nAn order will have one customer, one recipient, be created by an employee, and have providers.An employee is a user that has certain role for permissions and belongs to a business location.Here are my current collections: (Assume all documents for each collection have a “_id” by default)*Do I need providers in their own document or can I get away with an embedded array of objects that are only to that order if I know that there is a limit per order and I don’t need to know if the provider is part of other orders?Indexes - Missing any? Wrong ones?Links/Refs - Am I missing any? Are any unnecessary?Thinking ahead, I will eventually want to create flexibility for enterprise; An enterprise (organization) may have many brands which could own many businesses with many locations. But this isn’t v1, just something to keep in mind when I eventually may need to migrate data.Thank you SO much! I know this is a doozy! ", "username": "Andrew_W" }, { "code": "Users Collection\n{\nUserId,\nEmployeeInfo : {\nEmployeeId,\nBusinessId,\nRoles : [],\nPermissions :[]\n}\nMoreUserDetails,\n...\n}\n\nBusinesses Collection\n{\nBusinessId,\nBusinessesLocations : [{addresses}]\n...\n}\n\nOrders Collection\n{\n OrderId ,\nBusiness : { BusinessId, CurrentAdress},\nUserId/EmployeeId,\nproviders: [providerId],\n customer: {},\n recipient: {}\n...\n}\n\nProviders Collection\n{\nProviderId,\nProviderDetails \n...\n}\n", "text": "Hi @Andrew_W,The main approach I take into account when I start building a schema is first I would think of the critical paths of application query/crud will work and what is the relationship magnetuted between the entities as well as if there is a justification of each to be stored as a document of its own.The following blogs are very useful to start.\nhttps://www.mongodb.com/article/mongodb-schema-design-best-practiceshttps://www.mongodb.com/article/schema-design-anti-pattern-summaryBest practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Now lets break down your needs and enteties:A User can can add a Business which can have several BusinessLocations which can have many employees and many orders .\nAn order will have one customer , one recipient , be created by an employee, and have providers .Ok based on the above I see the following potential schema:The main Idea is that a user is a main entity which with some additional info considered an employee in some application views. A business have addresses but there is not justification of having locations without a business object. 
Orders have both pointers the business and the user/employee that created it.Providers can be referred via ids as I believe their ammount is eventually limited per order.In terms of indexes this is what I thought, but its dependent on your specific queries:\nUsers - {userId }, {BusinessId, EmployeeId}, {EmployeeId}, {Roles?}\nBusinesses - {BusinessId}, {location geo index?}\nOrders - {OrderId}, {Business.BusinessId, Business.EmployeeId}, {Providers?}\nProviders - ProviderIdLet me know if that makes sense.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you @Pavel_Duchovny! I’ll take some time to digest all this and get back to you. ", "username": "Andrew_W" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Users, Businesses, Orders, Locations, Oh My!
2021-01-29T19:26:07.350Z
Users, Businesses, Orders, Locations, Oh My!
2,845
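
The index suggestions listed in the reply above could be expressed as mongosh commands roughly like this; treat them as a starting point tied to the sketched field names, not a definitive set.

```javascript
db.users.createIndex({ UserId: 1 });
db.users.createIndex({ "EmployeeInfo.BusinessId": 1, "EmployeeInfo.EmployeeId": 1 });
db.users.createIndex({ "EmployeeInfo.EmployeeId": 1 });

db.businesses.createIndex({ BusinessId: 1 });
// Optional geo index, only if BusinessesLocations stores GeoJSON points:
// db.businesses.createIndex({ "BusinessesLocations.location": "2dsphere" });

db.orders.createIndex({ OrderId: 1 });
db.orders.createIndex({ "Business.BusinessId": 1, UserId: 1 });

db.providers.createIndex({ ProviderId: 1 });
```
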
null
[]
[ { "code": "", "text": "Hello,How to get user name that from which user the query is coming , logged in MongoDB log.", "username": "Aayushi_Mangal" }, { "code": "", "text": "Hello @Aayushi_Mangal, you can try using the mtools. It has a tool called mlogfilter which:slices log files by time, merges log files, filters slow queries, finds table scans, shortens log lines, filters by other attributes, convert to JSON.", "username": "Prasad_Saya" }, { "code": "", "text": "Hello @Prasad_Saya,That is really nice tool. But I am not sure that I will be able to get user details from log.", "username": "Aayushi_Mangal" } ]
How to find user name from mongo logs
2021-01-29T14:03:37.095Z
How to find user name from mongo logs
2,564
null
[ "containers" ]
[ { "code": "", "text": "Hi everyone. I’m sorry but I’m an enduser and not a developer. I use an app in a docker container and the app stores data in a mongo DB. I need to modify a list of users stored in the DB (the app doesn’t provided for that itself). I have access to the MongoDB directory where the all the container files and wiredtiger files are stored. I’m able to bring up mongoDB outside the docker but can’t figure out how to tell Mongo to use the containers in that directory. Is this at all possible? How would I do that. Remember, I’m not a developer so keep that in mind when you answer. I really appreciate an help you can give. I’ve been struggling with this for quite a while now. Thanks in advance", "username": "Roger_Andrews" }, { "code": "", "text": "Never mind. I figured out where MongoDB stored its’ data. I just replaced all of it with the directory that was holding with my database and then used studio-3t to edit it.", "username": "Roger_Andrews" } ]
Connect to MongoDB by directory
2021-01-28T19:12:27.907Z
Connect to MongoDB by directory
1,795
null
[ "node-js", "replication" ]
[ { "code": "const { MongoClient } = require(\"mongodb\");\n\n// Replace the following with your MongoDB deployment's connection\n// string.\nconst uri =\n \"mongodb+srv://<clusterUrl>/?replicaSet=rs&writeConcern=majority\";\n\nconst client = new MongoClient(uri);\n\n// Replace <event name> with the name of the event you are subscribing to.\nconst eventName = \"<event name>\";\nclient.on(eventName, event => {\n console.log(`received ${eventName}: ${JSON.stringify(event, null, 2)}`);\n});\n\nasync function run() {\n try {\n await client.connect();\n\n // Establish and verify connection\n await client.db(\"admin\").command({ ping: 1 });\n console.log(\"Connected successfully\");\n } finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n }\n}\nrun().catch(console.dir);\nserverOpeningserverClosed", "text": "I want to connect to a MongoDB replica set (only one instance to be able to use Change Streams) while being able to be notified of connection lost/reconnect. I followed what described here:I tried subscribing to events:I would like to be able to get notified when the MongoDB connection is lost and/or restored and accordingly send a notification mail with “nodemailer”.\nI was able to configure what desired in “mongoose” but, so far, not using official MongoDB drivers for NodeJS.", "username": "Sergio_Ferlito1" }, { "code": "const eventName2 = \"serverClosed\";\nclient.on(eventName2, event => {\n console.log(`.....received ${eventName2}: ${JSON.stringify(event, null, 2)}`);\n});\n.....received serverClosed: {\n \"topologyId\": 0,\n \"address\": \"localhost:27017\"\n}\n", "text": "Hello @Sergio_Ferlito1,I tried the same code with “serverClosed” event, it works fine. I am using MongoDB 4.2, NodeJS v12 and the MongoDB driver 3.6. I am connecting to a standalone server on localhost and default port.This printed to the console:Updated:Also, works fine when connected to Atlas Cluster with a replica-set.", "username": "Prasad_Saya" } ]
MongoClient NodeJS open/close connection events
2021-01-29T14:46:26.427Z
MongoClient NodeJS open/close connection events
18,104
null
[ "aggregation" ]
[ { "code": "", "text": "Hello,I am getting cursorTimeout error, so I increased cursorTimeoutMillis .I am aware that the default timeout is 10min, but strange is, on one cluster it is working fine and on another identical cluster it is giving me this cursor not found.I am not sure why my timeout setting is not working.Why same query with same data is not giving that error. The error-free cluster is trafic free and kind of test cluster, does that also matter?", "username": "Aayushi_Mangal" }, { "code": "", "text": "Hi Ayush,Could you refer the links belowhttps://jira.mongodb.org/browse/SERVER-34053**Do you want to request a *feature* or report a *bug*?**\nReport a bug\n\n**Wha…t is the current behavior?**\nI recently bumped my Mongoose version from 5.5.7 to 5.9.6 and I spotted a issue with cursors.\nThe error is `MongoError: cursor id XXX not found` (even with `cursor.addCursorFlag('noCursorTimeout', true)`.\n\n**If the current behavior is a bug, please provide the steps to reproduce.**\nYou can reproduce the bug by running this script (for a few hours) :\n```\nconst mongoose = require('mongoose');\n\nconst sleep = seconds => new Promise(resolve => setTimeout(resolve, seconds * 1000));\n\n(async () => {\n\tawait mongoose.connect('mongodb://localhost/test', { useCreateIndex: true, useNewUrlParser: true, useUnifiedTopology: true, poolSize: 10 });\n\n\tconst Doc = mongoose.model('Test', new mongoose.Schema({\n\t\tcounter: { type: Number, default: 0 }\n\t}));\n\n\tfor(let i = 0; i < 10000; ++i)\n\t\tawait (new Doc()).save();\n\n\tconst cursor = Doc.find({}).cursor();\n\tcursor.addCursorFlag('noCursorTimeout', true);\n\tawait cursor.eachAsync(async doc => {\n\t\tdoc.counter++;\n\t\tawait doc.save();\n\t\tconsole.log(`Doc ${doc._id} saved, sleeping for 10 seconds...`);\n\t\tawait sleep(10);\n\t});\n\tawait cursor.close();\n})();\n```\n\n**What is the expected behavior?**\nThe error must not occur.\n\n**What are the versions of Node.js, Mongoose and MongoDB you are using?**\nNode v12.16.1\nMongoose v5.9.6\nMongoDB v4.0.16\n\nI think the error has to do with the MongoDB idle session timeout (https://docs.mongodb.com/manual/reference/method/cursor.noCursorTimeout/) but I am no expert.Cheers", "username": "Shanka_Somasiri" }, { "code": "", "text": "Thank you, it helped!", "username": "Aayushi_Mangal" } ]
"code" : 43, "codeName" : "CursorNotFound" - Cursor timeout is not working as expected
2021-01-12T13:16:37.511Z
&ldquo;code&rdquo; : 43, &ldquo;codeName&rdquo; : &ldquo;CursorNotFound&rdquo; - Cursor timeout is not working as expected
7,375
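
The issues linked in the thread above point at two different server limits, so a fix may need both of the following; the values are illustrative and should be tuned for the cluster in question.

```javascript
// Raise the idle-cursor timeout (milliseconds) on the cluster that errors:
db.adminCommand({ setParameter: 1, cursorTimeoutMillis: 3600000 }); // 1 hour, example value

// Long-running cursors can also be reaped along with their logical session
// (roughly 30 minutes); periodically refreshing the session avoids that.
// sessionId would come from the driver/session in use.
db.adminCommand({ refreshSessions: [ sessionId ] });
```
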
null
[ "aggregation", "golang" ]
[ { "code": "", "text": "Hello,I have build a pipeline PA which will create a data set DSA. I need to count the number of items in DSA and return to the client the total number, so here I created a pipeline PB,PB = append(PA, bson.M{\"$count\": “number”}).In the mean time, I need to return a subset of DSA to client, so I used another pipeline PC, PC will looks like:PC = append(PA, bson.M{ “$skip”: offset })So here the PB and PC will work on the same collection, I run PB to get data set DSA then count the total items in DSA; then I run PC to get data set DSA again then return subset in DSA from the location offset. I am not sure if there are something can be improved, for example, I create a dataset DSA by PA, then only run the {\"$count\" : “number”} on DSA to get total, then run the { “$skip”: offset } on DSA to get the subset after location offset.Thanks for the support!James", "username": "Zhihong_GUO" }, { "code": "", "text": "Hello @Zhihong_GUO, you can try the following approach.Assume, your source collections is “collectionA”. Apply, the pipeline PA on the “collectionA” to create a Materialized View, called as “DSA”.Then, you can run one Aggregation query on this view DSA to get the two outputs - one is the count of DSA and the second is the subset of the DSA.How to get two outputs with one query? Use $facet stage with two facets (“$facet processes multiple aggregation pipelines within a single stage on the same set of input documents.”).Updated:An option is to use a View. The view would be the “DSA”, which is created once. And run the same aggreagtion query using the facets I had mentioned above. But, the view is not persistent, like a collection. The definition says:A MongoDB view is a queryable object whose contents are defined by an aggregation pipeline on other collections or views. MongoDB does not persist the view contents to disk. A view’s content is computed on-demand when a client queries the view.", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya, many thanks for the suggestion. The facet operator seems very promising solution for my problem.\nBest,\nJames", "username": "Zhihong_GUO" }, { "code": "$facet$match$facet$facetCOLLSCAN", "text": "@Zhihong_GUO FYIFrom the DocumentationThe $facet stage and its sub-pipelines cannot make use of indexes, even if its sub-pipelines use [ $match ] or $facet first stage in the pipeline. The $facet stage will always perform a COLLSCAN during execution.", "username": "Sudhesh_Gnanasekaran" } ]
How to run two different pipelines from the data returned by the common pipeline
2021-01-28T10:39:19.908Z
How to run two different pipelines from the data returned by the common pipeline
3,771
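
The $facet suggestion above might look like this in mongosh terms; paStages, offset and pageSize stand in for the Go pipeline PA and its paging parameters.

```javascript
// One pass over the shared pipeline, producing both outputs at once.
db.collectionA.aggregate([
  ...paStages,                       // the stages that build the DSA data set
  { $facet: {
      total: [ { $count: "number" } ],
      page:  [ { $skip: offset }, { $limit: pageSize } ]
  } }
]);
// Note (from the follow-up above): $facet sub-pipelines cannot use indexes,
// so any index-friendly $match should stay in the shared stages before $facet.
```
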
null
[]
[ { "code": "", "text": "Hi there,\nto populate my test projects in Atlas I currently restore an old database whenever I feel the need of “clean” data. The production environment often grows and the database get’s to big to be restored in a small dev environment. Which option would be best to populate my dev DB?Thanks a lot for any suggestion\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hi @michael_hoeller,What is your ultimate goal ?To get all collections with partial data into dev ?Or some collections with all data ?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovnythe first: to get all collections with partial data into dev. The get consistent (partial) data to dev some relations need to be fulfilled.Michael", "username": "michael_hoeller" }, { "code": "", "text": "@michael_hoeller,Sounds like a dump with query parameter to export specific documents from all collections.Another possibility is to use a fetch and bulk insert via a script considering your relationships logic.If the clusters are in the same project you can use Realm functions to copy data between them Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_DuchovnySounds like a dump with query parameter to export specific documents from all collections.I thought about this just this is a simple query on one collection, to keep consistency I need to cover relations to other collections. Just to maintain queries is do able but complex, also often some data needs to be anonymized when moved from prd to somewhere.If the clusters are in the same project you can use Realm functions to copy data between them They are not, I’d assume due to access rights that in most cases prd and dev will be in different projects.\nThis would be interesting to if possible.Another possibility is to use a fetch and bulk insert via a script considering your relationships logic.Same as with realm.In the past I mainly stripped down a local copy and keep a dump of this db as blue print for quickly setup dev dbs. This is surely not an ideal process, and the above mentioned issues drove me to write this post - hoping that I missed a cool trick.", "username": "michael_hoeller" } ]
Populate a dev db from prd
2021-01-28T15:40:01.357Z
Populate a dev db from prd
1,945
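One way to implement the "fetch and bulk insert via a script" idea from the thread above is a small Node.js script that copies a filtered subset from the production cluster into the dev cluster. The connection strings, database/collection names, and the filter (documents from the last 30 days) are hypothetical placeholders, not part of the original discussion.

```javascript
// copy-subset.js -- sketch: copy a filtered subset of selected collections to dev.
// Assumes Node.js with the official "mongodb" driver installed.
const { MongoClient } = require("mongodb");

const PRD_URI = "mongodb+srv://user:[email protected]/"; // placeholder
const DEV_URI = "mongodb+srv://user:[email protected]/"; // placeholder

async function copySubset() {
  const prd = await MongoClient.connect(PRD_URI);
  const dev = await MongoClient.connect(DEV_URI);
  try {
    const since = new Date(Date.now() - 30 * 24 * 3600 * 1000); // last 30 days
    for (const collName of ["orders", "customers"]) {           // assumed names
      const docs = await prd.db("app").collection(collName)
        .find({ createdAt: { $gte: since } })                   // assumed field
        .toArray();
      if (docs.length > 0) {
        await dev.db("app").collection(collName).insertMany(docs);
      }
      console.log(`${collName}: copied ${docs.length} documents`);
    }
  } finally {
    await prd.close();
    await dev.close();
  }
}

copySubset().catch(console.error);
```

Relationship-preserving or anonymizing logic would go inside the loop (for example, copying only customers referenced by the selected orders); mongodump/mongorestore with --query remains the simpler option when no transformation is needed.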
null
[]
[ { "code": "", "text": "Hi there,\nI am currently running an M50 with about 1GB collection size, 50 GB indices and 897GB disk usage. Note: the collection size was much bigger before, that's why I needed an M50, but I have online-archived most of the data now.Based on the collection size, the data should fit into an M30 and even an M10. However, I am struggling with downscaling to M30 with 100GB. I am receiving the following error message:“The selected disk size of 100.0 GB is smaller than the amount of data currently used (897.3117332458496 GB)”.Are the 897 GB just allocated disk space that cannot be downscaled, or actual data used besides the collection/indices?Thanks,\nMatthias", "username": "Matthias_Stumpp" }, { "code": "", "text": "Hi @Matthias_Stumpp,Welcome to the MongoDB community.I believe that although data was deleted during the archiving process, it is still allocated on disk.You should compact your secondaries using the compact command, connecting to each secondary one by one. Finally, when both secondaries have dropped in disk size, fail over the primary using “Test Failover” and compact it.When all nodes are below the 100GB mark, downgrade the instances to 150GB.I would not recommend going to M10 directly; keep the database on M30 as a minimum for a demanding workload.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel,thanks for helping out.After following the steps, downgrading finally worked.Thanks again,\nMatthias", "username": "Matthias_Stumpp" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Downscale M50 to M10
2021-01-29T08:57:57.530Z
Downscale M50 to M10
2,132
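A sketch of the compact step suggested above, run in mongosh while connected directly to one secondary at a time (not through the replica-set connection string). The database and collection names are assumptions.

```javascript
// Connect directly to a single secondary, e.g.:
//   mongosh "mongodb://secondary-host:27017" -u ... -p ...
const mydb = db.getSiblingDB("mydb");           // assumed database name

// compact releases unused space inside the data files back to the OS
mydb.runCommand({ compact: "mycollection" });   // assumed collection name

// Check the on-disk footprint afterwards:
mydb.mycollection.stats().storageSize;
```

Repeat for each secondary, then use "Test Failover" so the former primary steps down and can be compacted the same way before attempting the downscale.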
null
[]
[ { "code": "", "text": "I have been using MongoDB Atlas version 4.2 with my production site for the last 6 months and all queries I have written are compatible with 4.2. Suddenly, since 28/01/2021, my MongoDB Atlas version changed to 4.4 without any indication or approval. On our live site users are facing issues accessing features.Is there any proper way to handle it? Any suggestion? I don’t want to use the latest version, as my site is working fine with the version I am using. When I want to use new features I will definitely upgrade the version, and during that I am going to check compatibility with the existing system as well. But without any indication, I never want to be moved to the latest version of the MongoDB Atlas database.Do you have any idea about the frustration level of my team, my client and the many other people who are attached to the application? We don’t even know how much of the application will break or stop working.", "username": "JD_Old7" }, { "code": "", "text": "Hi @JD_Old7,Welcome to the MongoDB community.A major version upgrade should definitely not be triggered without user approval or intention. (Only in case of a hard stop for version End Of Life, but that is surely not the case for 4.2.)I would suggest opening a high severity case for our support to investigate the course of events.To downgrade a cluster you will have to create a new cluster on 4.2 and either restore a backup from 4.2 or live migrate the current deployment.Having said that, consider upgrading your drivers and try to run sanity checks, as it might work for you.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Jaydipsinh,Pavel is referring to our M10+ clusters where you select the version.I suspect you are referring to our shared tier clusters which are not version-configurable. We sent a series of heads-up email-based notices to users of our shared tier clusters over the last few months notifying that the upgrade to 4.4 was coming.We greatly regret the instability this has caused you: 99% of applications experience no change between 4.2 and 4.4 and we’re making improvements to deliver more version-to-version stability in the future.We have clearly failed you here and I regret that. We will be in touch via email to ensure that we investigate whether somehow you may not have received our heads-up notices (that would not be acceptable).I will also ask you for your current state by email as our support team is here to help.-Andrew", "username": "Andrew_Davidson" } ]
Mongo Atlas new version migrate without any indication or notification
2021-01-29T06:47:30.009Z
Mongo Atlas new version migrate without any indication or notification
2,015
null
[ "aggregation" ]
[ { "code": "{\n _id: '$_id',\n program_name: {\n $first: '$program_name'\n },\n title: {\n $first: '$title'\n },\n description: {\n $first: '$description'\n },\n episode: {\n $first: '$episode'\n },\n seasons: {\n $first: '$seasons'\n },\n episodes: {\n $first: '$episodes'\n },\n dur: {\n $first: '$dur'\n },\n '4k': {\n $first: '$4k'\n },\n hlghdr: {\n $first: '$hlghdr'\n },\n progress: {\n $first: '$progress'\n },\n image: {\n $first: '$image'\n },\n watchlist: {\n $first: '$watchlist'\n },\n type: {\n $first: '$type'\n },\n data: {\n $push: '$data'\n },\n assets: {\n $push: '$assets'\n },\n count: {\n $first: '$count'\n },\n subscribed: {\n $first: '$subscribed'\n },\n plan: {\n $first: '$plan'\n },\n token: {\n $first: '$token'\n },\n usersettings: {\n $first: '$usersettings'\n }\n}\n[{$match: {\n _id: ObjectId('5f168c368281ec7f46d0f35a'),\n type: 'program',\n status: 'active',\n schedule_on: {\n $lte: ISODate('2021-01-28T09:02:38.711Z')\n }\n}}, {$addFields: {\n seasons: {\n $size: '$assets'\n }\n}}, {$unwind: {\n path: '$assets',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n translations1: {\n $filter: {\n input: '$translations',\n as: 'translations',\n cond: {\n $and: [\n {\n $eq: [\n '$$translations.language',\n 'en'\n ]\n }\n ]\n }\n }\n }\n}}, {$addFields: {\n hlghdr: {\n $cond: {\n 'if': {\n $in: [\n 'hlg hdr',\n '$hdr'\n ]\n },\n then: true,\n 'else': false\n }\n },\n dur: '$duration',\n program_name: {\n $arrayElemAt: [\n '$translations1.details.title',\n 0\n ]\n },\n description: {\n $arrayElemAt: [\n '$translations1.details.description',\n 0\n ]\n }\n}}, {$project: {\n items: 0,\n updated_on: 0,\n updated_by: 0\n}}, {$addFields: {\n images: {\n $cond: [\n {\n $ifNull: [\n '$images',\n false\n ]\n },\n '$images',\n []\n ]\n }\n}}, {$unwind: {\n path: '$assets.details',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'assets.details.videoid:': {\n $toObjectId: '$assets.details.video'\n }\n}}, {$unwind: {\n path: '$assets.details',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'assets.details.video': {\n $toObjectId: '$assets.details.video'\n }\n}}, {$lookup: {\n from: 'assets',\n localField: 'assets.details.video',\n foreignField: '_id',\n as: 'data'\n}}, {$unwind: {\n path: '$data',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'data.watchlist': null,\n 'data.progress': null,\n 'data.4k': {\n $cond: {\n 'if': {\n $eq: [\n '$data.assets_inputs.resolution',\n '4k'\n ]\n },\n then: true,\n 'else': false\n }\n },\n 'data.license_token': '$data.assets_inputs.license_token',\n 'data.dur': '$data.duration',\n 'data.translations1': {\n $filter: {\n input: '$data.translations',\n as: 'translations',\n cond: {\n $and: [\n {\n $eq: [\n '$$translations.language',\n 'en'\n ]\n }\n ]\n }\n }\n }\n}}, {$lookup: {\n from: 'users',\n localField: 'data._id',\n foreignField: 'watched.asset_id',\n as: 'users'\n}}, {$addFields: {\n 'data.hlghdr': {\n $cond: {\n 'if': {\n $in: [\n 'hlg hdr',\n '$data.hdr'\n ]\n },\n then: true,\n 'else': false\n }\n },\n 'data.dur': '$data.duration',\n 'data.title': {\n $arrayElemAt: [\n '$data.translations1.details.title',\n 0\n ]\n },\n 'data.description': {\n $arrayElemAt: [\n '$data.translations1.details.description',\n 0\n ]\n },\n 'data.images': {\n $cond: [\n {\n $ifNull: [\n '$data.images',\n false\n ]\n },\n '$data.images',\n []\n ]\n }\n}}, {$addFields: {\n 'data.image': {\n $filter: {\n input: '$data.images',\n as: 'images',\n cond: {\n $and: [\n {\n $eq: [\n '$$images.aspect',\n '16:9'\n ]\n }\n ]\n }\n }\n 
},\n 'data.episode_ai_id': '$data.episode'\n}}, {$addFields: {\n 'data.image': '$data.image.image',\n 'data.episode': '$assets.details.episode',\n 'data.season': '$assets.season'\n}}, {$unwind: {\n path: '$users',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'data.progress': {\n $filter: {\n input: '$users.watched',\n as: 'watched',\n cond: {\n $eq: [\n '$$watched.asset_id',\n '$data._id'\n ]\n }\n }\n },\n usersettings: {\n $cond: [\n {\n $ifNull: [\n '$users.usersettings',\n false\n ]\n },\n '$users.usersettings',\n null\n ]\n }\n}}, {$unwind: {\n path: '$data.image',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'data.image': {\n $cond: [\n {\n $ifNull: [\n '$data.image',\n false\n ]\n },\n '$data.image',\n null\n ]\n }\n}}, {$unwind: {\n path: '$data.progress',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'data.progress1': '$data.progress.progress'\n}}, {$addFields: {\n 'data.progress1': {\n $cond: [\n {\n $ifNull: [\n '$data.progress',\n false\n ]\n },\n '$data.progress.progress',\n 0\n ]\n }\n}}, {$unwind: {\n path: '$data.assets_inputs.metadata',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'data.progress': '$data,progress1',\n 'data.type': '$data.assets_inputs.metadata.category',\n subscribed: false,\n plan: 'free',\n token: '',\n 'data.manifest': '$data.assets_inputs.manifest',\n 'assets.details.videoidstring': {\n $toString: '$assets.details.video'\n }\n}}, {$project: {\n 'data.category': 0,\n 'data.duration': 0,\n 'data.resolution': 0,\n 'data.translations': 0,\n 'data.translations1': 0,\n 'data.hdr': 0,\n 'data.items': 0,\n 'data.geo': 0,\n 'data.tags': 0,\n 'data.metadata': 0,\n 'data.schedule_on': 0,\n 'data.produced_on': 0,\n 'data.updated_on': 0,\n 'data.status': 0,\n 'data.assets': 0,\n 'data.friendlyname': 0,\n 'data.images': 0,\n 'data.progress1': 0\n}}, {$sort: {\n 'data.episode': 1\n}}, {$sort: {\n 'data.season': 1\n}}, {$group: {\n _id: '$_id',\n program_name: {\n $first: '$program_name'\n },\n title: {\n $first: '$title'\n },\n description: {\n $first: '$description'\n },\n episode: {\n $first: '$episode'\n },\n seasons: {\n $first: '$seasons'\n },\n episodes: {\n $first: '$episodes'\n },\n dur: {\n $first: '$dur'\n },\n '4k': {\n $first: '$4k'\n },\n hlghdr: {\n $first: '$hlghdr'\n },\n progress: {\n $first: '$progress'\n },\n image: {\n $first: '$image'\n },\n watchlist: {\n $first: '$watchlist'\n },\n type: {\n $first: '$type'\n },\n data: {\n $push: '$data'\n },\n assets: {\n $push: '$assets'\n },\n count: {\n $first: '$count'\n },\n subscribed: {\n $first: '$subscribed'\n },\n plan: {\n $first: '$plan'\n },\n token: {\n $first: '$token'\n },\n usersettings: {\n $first: '$usersettings'\n }\n}}, {$addFields: {\n episodes: '$data',\n currentepisode: {\n $filter: {\n input: '$assets',\n as: 'assets',\n cond: {\n $and: [\n {\n $eq: [\n '$$assets.details.videoidstring',\n '5ed5081b46c9074616122f79'\n ]\n }\n ]\n }\n }\n }\n}}, {$unwind: {\n path: '$currentepisode',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n currentepisodeid: {\n $filter: {\n input: '$episodes',\n as: 'episodes',\n cond: {\n $and: [\n {\n $eq: [\n '$$episodes._id',\n ObjectId('5ed5081b46c9074616122f79')\n ]\n }\n ]\n }\n }\n }\n}}, {$unwind: {\n path: '$currentepisodeid'\n}}, {$addFields: {\n 'currentepisodeid.episode_ai_id': {\n $toObjectId: '$currentepisodeid.episode_ai_id'\n }\n}}, {$lookup: {\n from: 'assets_inputs',\n localField: 'currentepisodeid.episode_ai_id',\n foreignField: '_id',\n as: 'currentepisode_assets_inputs'\n}}, {$unwind: 
{\n path: '$currentepisode_assets_inputs',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n 'currentepisode_assets_inputs.sprites': {\n $ifNull: [\n '$currentepisode_assets_inputs.sprites',\n false\n ]\n },\n 'currentepisode_assets_inputs.manifest': {\n $ifNull: [\n '$currentepisode_assets_inputs.manifest',\n false\n ]\n }\n}}, {$redact: {\n $cond: {\n 'if': {\n $and: [\n {\n $eq: [\n '$currentepisode_assets_inputs.sprites',\n false\n ]\n },\n {\n $eq: [\n '$currentepisode_assets_inputs.manifest',\n false\n ]\n }\n ]\n },\n then: '$$PRUNE',\n 'else': '$$DESCEND'\n }\n}}, {$addFields: {\n count: {\n $add: [\n '$currentepisode.details.episode',\n 1\n ]\n },\n season: '$currentepisode.season'\n}}, {$addFields: {\n next: {\n $filter: {\n input: '$assets',\n as: 'assets',\n cond: {\n $and: [\n {\n $eq: [\n '$$assets.details.episode',\n '$count'\n ]\n },\n {\n $eq: [\n '$season',\n '$$assets.season'\n ]\n }\n ]\n }\n }\n }\n}}, {$unwind: {\n path: '$next',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n next: {\n $filter: {\n input: '$data',\n as: 'data',\n cond: {\n $and: [\n {\n $eq: [\n '$$data._id',\n '$next.details.video'\n ]\n }\n ]\n }\n }\n },\n currentepisode: {\n $filter: {\n input: '$data',\n as: 'data',\n cond: {\n $and: [\n {\n $eq: [\n '$$data._id',\n '$currentepisode.details.video'\n ]\n }\n ]\n }\n }\n }\n}}, {$unwind: {\n path: '$currentepisode',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n title: '$currentepisode.title',\n description: '$currentepisode.description'\n}}, {$unwind: {\n path: '$currentepisode_assets_inputs.sprites',\n preserveNullAndEmptyArrays: true\n}}, {$addFields: {\n title: '$currentepisode.title',\n description: '$currentepisode.description',\n episode: {\n $subtract: [\n '$count',\n 1\n ]\n },\n dur: '$currentepisode.dur',\n '4k': '$currentepisode.4k',\n hlghdr: '$currentepisode.hlghdr',\n progress: '$currentepisode.progress',\n image: '$currentepisode.image',\n watchlist: '$currentepisode.watchlist',\n type: '$currentepisode.type',\n manifest: '$currentepisode_assets_inputs.manifest',\n token: '$currentepisode_assets_inputs.license_token',\n spriteOutputPath: {\n $arrayElemAt: [\n '$currentepisode_assets_inputs.sprites.outputs.outputPath',\n 0\n ]\n },\n spriteName: '$currentepisode_assets_inputs.sprites.spriteName',\n vttName: {\n $concat: [\n {\n $arrayElemAt: [\n '$currentepisode_assets_inputs.sprites.outputs.outputPath',\n 0\n ]\n },\n '$currentepisode_assets_inputs.sprites.vttName'\n ]\n },\n license_server: {\n fairplay: {\n acquisition_url: 'https://drm-fairplay-licensing.axtest.net/AcquireLicense',\n certificate_url: 'https://vtb.axinom.com/FPScert/fairplay.cer'\n },\n widevine: {\n acquisition_url: 'https://drm-widevine-licensing.axtest.net/AcquireLicense'\n },\n playready: {\n acquisition_url: 'https://drm-playready-licensing.axtest.net/AcquireLicense'\n }\n },\n cdn: [\n 'https://travelxp.s.llnwi.net'\n ],\n device: '',\n manifesttype: ''\n}}, {$project: {\n data: 0,\n assets: 0,\n count: 0,\n currentepisode: 0,\n 'episodes.assets_inputs': 0,\n currentepisodeid: 0,\n currentepisode_assets_inputs: 0,\n 'episodes.license_token': 0\n}}]\n", "text": "Hi, I’m getting error at this stageBelow is my aggregation pipeline.\nAny help would be grateful.\nThanks", "username": "Sudarshan_Chavan" }, { "code": "", "text": "Hi @Sudarshan_Chavan,In order to better help you with this issue it will be best o get a sample document that fails this aggregation so we could reproduce it.What is the full error and with what tool are you 
getting it?Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{$addFields: {\n 'data.progress': '$data,progress1', <- This line\n 'data.type': '$data.assets_inputs.metadata.category',\n subscribed: false,\n plan: 'free',\n token: '',\n 'data.manifest': '$data.assets_inputs.manifest',\n 'assets.details.videoidstring': {\n $toString: '$assets.details.video'\n }\n", "text": "Hi @Pavel_Duchovny,There was typo in query.There should have been dot instead of comma in '$data,progress1’Thanks for help.", "username": "Sudarshan_Chavan" } ]
Path collision at data._id remaining portion _id
2021-01-28T09:37:50.909Z
Path collision at data._id remaining portion _id
13,697
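For reference, the corrected stage from the thread above, with the comma typo replaced by a dot so the field path resolves; everything except the fixed path is copied from the original stage.

```javascript
{
  $addFields: {
    "data.progress": "$data.progress1",   // was "$data,progress1" -- comma instead of dot
    "data.type": "$data.assets_inputs.metadata.category",
    subscribed: false,
    plan: "free",
    token: "",
    "data.manifest": "$data.assets_inputs.manifest",
    "assets.details.videoidstring": { $toString: "$assets.details.video" }
  }
}
```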
null
[ "transactions" ]
[ { "code": "{\n \"_id\": \"601022171517ee00d48ce6ad\",\n \"caseQuantity\": 4,\n \"unitQuantity\": 395,\n \"totalQuantity\": 2000,\n \"currentQuantity\": 1995,\n \"isClaimActive\": \"true\",\n \"claim\": 32,\n \"status\": \"Active\",\n \"purchaseInventoryId\":\"601022151517ee00d48ce6ac\",\n \"index\": \"1611670005862\",\n \"batchNo\": 1,\n \"unitPrice\": 14.19,\n \"casePrice\": 255.75,\n \"product\": \"5f8d9a6184c1d0005814ed61\",\n \"productName\": \"Red Cow - Red Cow 18g\",\n \"type\": \"5f8d931fcc42160023d770e2\",\n \"units\": 400,\n \"agency\": \"5f8d6f0acc42160023d770c4\",\n \"createdBy\": \"5f8d6f2dcc42160023d770c5\",\n \"__v\": 0\n}\n const { records } = req.body;\n const agency: any = req.currentUser!.agency;\n let stockItemsNoUpdate: any[] = [];\n const promiseArray: Query<any>[] = [];\n let recordCounter = -1;\n let response = {};\n\n const session = await mongoose.startSession();\n session.startTransaction({\n readPreference: 'primary',\n readConcern: { level: 'local' },\n writeConcern: { w: 'majority' },\n });\n\n const existingLoadingSheet = await LoadingSheet.findById(\n req.params.id\n ).session(session);\n\n if (!existingLoadingSheet) {\n session.endSession();\n throw new NotFoundError('Loading Sheet Not Found');\n }\n\n if (existingLoadingSheet.get('agency')._id != agency) {\n session.endSession();\n throw new AccessRestrictedError();\n }\n\n const oldLoadingsheetStockValuesArray = [...existingLoadingSheet?.records];\n const oldLoadingsheetStockValuesObj = oldLoadingsheetStockValuesArray.reduce(\n function (result, item) {\n var key = item.stockId;\n result[key] = {\n loadingCaseCount: item.loadingCaseCount,\n loadingUnitCount: item.loadingUnitCount,\n loadingTotal: item.loadingTotal,\n stockId: item.stockId,\n product: item.product,\n index: item.index,\n batchNo: item.batchNo,\n type: item.type,\n };\n return result;\n },\n {}\n );\n\n try {\n existingLoadingSheet.set({ records, isUnloaded: false });\n await existingLoadingSheet.save({ session: session });\n\n for (const el of records) {\n recordCounter++;\n const oldLoadingTotal =\n oldLoadingsheetStockValuesObj[el.stockId] != null\n ? 
oldLoadingsheetStockValuesObj[el.stockId].loadingTotal\n : 0;\n const diff_qty = el.loadingTotal - oldLoadingTotal;\n if (diff_qty === 0) {\n stockItemsNoUpdate.push({\n recordIndex: recordCounter,\n stockId: el.stockId,\n nModified: 1,\n });\n }\n promiseArray.push(\n Stock.updateOne(\n {\n _id: mongoose.Types.ObjectId(el.stockId),\n agency,\n },\n [\n {\n $set: {\n currentQuantity: {\n $add: [\n '$currentQuantity',\n {\n $switch: {\n branches: [\n {\n case: { $gt: [diff_qty, 0] },\n then: {\n $cond: [\n { $gte: ['$currentQuantity', diff_qty] },\n -diff_qty,\n 0,\n ],\n },\n },\n {\n case: { $lt: [diff_qty, 0] },\n then: { $abs: diff_qty },\n },\n ],\n default: 0,\n },\n },\n ],\n },\n },\n },\n {\n $set: {\n unitQuantity: {\n $mod: ['$currentQuantity', el.units],\n },\n },\n },\n {\n $set: {\n caseQuantity: {\n $floor: {\n $divide: ['$currentQuantity', el.units],\n },\n },\n },\n },\n ],\n { session: session }\n )\n );\n }\n\n const promiseResults = await Promise.all(promiseArray);\n for (const el of stockItemsNoUpdate) {\n promiseResults[el.recordIndex]['nModified'] = 1;\n }\n\n recordCounter = -1;\n stockItemsNoUpdate = [];\n\n for (const result of promiseResults) {\n recordCounter++;\n if (result.nModified === 0) {\n stockItemsNoUpdate.push(records[recordCounter]);\n }\n }\n\n if (stockItemsNoUpdate.length > 0) {\n await session.abortTransaction();\n session.endSession();\n\n response = {\n status: 'updateFailed',\n data: {\n failed: stockItemsNoUpdate,\n },\n };\n\n return res.status(200).send(response);\n }\n\n await session.commitTransaction();\n session.endSession();\n } catch (error) {\n console.log(error);\n await session.abortTransaction();\n session.endSession();\n throw new Error(\n `Error occured while trying to update the loading sheet. ${error}`\n );\n }\n\n response = {\n status: 'success',\n data: {\n sheet: existingLoadingSheet,\n },\n };\n\n res.send(response);\n }\n", "text": "Hi,I have a stock document.Imagine that two users are updating the currentQuantity of the above document at the same time.I have used the below code to run the stock update. (Stock.updateOne)Above code works perfectly and gives me the expected output. This is because i am the only one who is using the document. But My concern is will this avoid any concurrency issues if multiple users start using the same document simultaneously??Some help would be much appreciated as I am new to MongoDB??", "username": "Shanka_Somasiri" }, { "code": "", "text": "Hi @Shanka_Somasiri,I haven’t reviewed the code. But in general when multiple transactions try to update the same documents they might acquire a lock on the updated documents, if they can’t whitin thr configured limits (default 5ms) they will abort.Additionally other transactions might fail on write conflicts if another in progress transactions modified documents they are about to update.More details see here:\nhttps://docs.mongodb.com/manual/core/transactions-production-consideration/#acquiring-locksThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "const existingLoadingSheet = await LoadingSheet.findById(\n req.params.id\n ).session(session); \nexistingLoadingSheet.set({ records, isUnloaded: false });\nawait existingLoadingSheet.save({ session: session });\n Stock.updateOne() statement;", "text": "Thanks for your input @Pavel_Duchovny.I went through the link you posted. 
My only concern is i run below 3 statements.1st statement is getting a loading sheet; // line #152nd statement updating the loading sheet // line #48and 3rd statement being update the stock in the warehouse; Stock.updateOne() statement; // line #65Acording to the https://docs.mongodb.com/manual/core/transactions-production-consideration/#acquiring-locks ; i know that my 3rd statement will cater to update the same document by multiple users.Will the findById() along with the transaction session as shown in statement 1, cater to multiple users??I am asking this because let’s imagine;I have users A and B.\nA and B both are looking at the same loadingsheet document (eg: 111)\nBefore A executes statement 2, B reads from statement 1.\nBy the time A finishes the statement 2 the loading sheet has new values, but unfortunately B has old values and can carry out the transaction but with wrong data.In the above case will it be a problem??Cheers", "username": "Shanka_Somasiri" }, { "code": "", "text": "@Shanka_Somasiri,We have a specific section that explains how to read and lock a document for this scenario.The main idea is that when you load a sheet you will adda locking field which will result in a tx lock. Therefore, other transactions will lock or abort when reading. I suggest for a more smooth experience wait a small period and retry this abort again by reloading the user view.Once the transaction is commited by user A user B will use a read commited read isolation and will only read the data after a successful commit or rollback.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "const existingLoadingSheet = await LoadingSheet.findById(\n req.params.id\n ).session(session); \n\nif (!existingLoadingSheet) {\n session.endSession();\n throw new NotFoundError('Loading Sheet Not Found');\n} // 1\n\nif (existingLoadingSheet.get('agency')._id != agency) {\n session.endSession();\n throw new AccessRestrictedError();\n} // 2\n\nconst oldLoadingsheetStockValuesArray = [...existingLoadingSheet?.records];\nconst oldLoadingsheetStockValuesObj = oldLoadingsheetStockValuesArray.reduce(\n function (result, item) {\n\tvar key = item.stockId;\n\tresult[key] = {\n\t loadingCaseCount: item.loadingCaseCount,\n\t loadingUnitCount: item.loadingUnitCount,\n\t loadingTotal: item.loadingTotal,\n\t stockId: item.stockId,\n\t product: item.product,\n\t index: item.index,\n\t batchNo: item.batchNo,\n\t type: item.type,\n\t};\n\treturn result;\n },\n {}\n); // 3\n\nexistingLoadingSheet.set({ records, isUnloaded: false });\nawait existingLoadingSheet.save({ session: session }); // 4 \n \n` // and run Stock.updateOne() with session` // 5\n", "text": "Hi @Pavel_Duchovny,This was really helpful.But still I am not sure how I can use findOneAndUpdate() and achieve findById() functionality shown by below steps in my original code. In the code I do check;1). whether the loading sheet exists\n2). whether the sheet is related to the user’s agency\n3). convert the loading item records\n4). update the loadingsheet document itself\n5). 
update stock documentWon’t just by running the findById(), loadingsheet.save() and Stock.updateOne() in the same transaction (with session) lock all the loading sheet and stock document until the transaction is commited or aborted so that no other users can perform on those documents??My knowledge is really low on this case and please bare with me.Cheers", "username": "Shanka_Somasiri" }, { "code": "findOneAndUpdate({ _id : ....},{$set : {active : \"true\"}).session(...)", "text": "Hi @Shanka_Somasiri,Not sure why you can’t use filters by id in findOneAndUpdate({ _id : ....},{$set : {active : \"true\"}).session(...)Once the documents you query are modified whithin a transaction atomically other will not be able to use the documents with snapshot read isolation until transaction commit.You can proceed with other writes and commit/rollback based on your logic.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "if (!existingLoadingSheet) {\n session.endSession();\n throw new NotFoundError('Loading Sheet Not Found');\n} // 1\n\nif (existingLoadingSheet.get('agency')._id != agency) {\n session.endSession();\n throw new AccessRestrictedError();\n} // 2\n\nconst oldLoadingsheetStockValuesArray = [...existingLoadingSheet?.records];\nconst oldLoadingsheetStockValuesObj = oldLoadingsheetStockValuesArray.reduce(\n function (result, item) {\n\tvar key = item.stockId;\n\tresult[key] = {\n\t loadingCaseCount: item.loadingCaseCount,\n\t loadingUnitCount: item.loadingUnitCount,\n\t loadingTotal: item.loadingTotal,\n\t stockId: item.stockId,\n\t product: item.product,\n\t index: item.index,\n\t batchNo: item.batchNo,\n\t type: item.type,\n\t};\n\treturn result;\n },\n {}\n); // 3", "text": "I think my concern was how I can check the below conditions and provide an error message back to the client while using findOneAndUpdate.", "username": "Shanka_Somasiri" }, { "code": "", "text": "The same way, findOneAndUpdate will return no result if the document searched does not exist.", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,I just finished altering my code and it works perfectly. You are a life saver.\nThank you sooo much for your assistance.Appreciate it loads. <3Cheers", "username": "Shanka_Somasiri" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Concurrency in MongoDB transactions
2021-01-27T13:50:23.573Z
Concurrency in MongoDB transactions
12,124
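A condensed Node.js/Mongoose sketch of the pattern discussed above: read-and-lock the loading sheet inside the transaction with findOneAndUpdate, then update the stock, and commit or abort. It assumes Mongoose plus the LoadingSheet and Stock models from the thread; the simplified quantity logic and error messages are illustrative, not the poster's final code.

```javascript
// Sketch only: lock-then-update inside one transaction (Mongoose).
async function updateLoadingSheet(sheetId, agency, records) {
  const session = await mongoose.startSession();
  try {
    let sheet;
    await session.withTransaction(async () => {
      // findOneAndUpdate both reads and writes the sheet, so a concurrent
      // transaction touching the same sheet will conflict and abort/retry.
      sheet = await LoadingSheet.findOneAndUpdate(
        { _id: sheetId, agency },                 // filter doubles as the access check
        { $set: { records, isUnloaded: false } },
        { new: true, session }
      );
      if (!sheet) throw new Error("Loading sheet not found or access denied");

      for (const el of records) {
        const res = await Stock.updateOne(
          { _id: el.stockId, agency, currentQuantity: { $gte: el.loadingTotal } },
          { $inc: { currentQuantity: -el.loadingTotal } },
          { session }
        );
        if (res.nModified === 0) {
          throw new Error("Insufficient stock");  // throwing aborts the transaction
        }
      }
    });
    return sheet;
  } finally {
    session.endSession();
  }
}
```

Because the sheet is modified first, any second user loading the same sheet in another transaction either waits briefly for the lock or aborts with a write conflict, which is the behaviour described in the linked "read and lock a document" section.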
null
[]
[ { "code": "const collection = context.services.get('mongodb-atlas');", "text": "I’m trying to create a scheduled trigger to clear a collection weekly, but I am unable to get the service…\nconst collection = context.services.get('mongodb-atlas'); is returning undefined when I log it to the console, and when I try using it, it just says “Cannot access member ‘db’ of undefined”. I’ve also tried setting the service name to “Cluster0” and “mongodb-datalake”, neither of which worked.If someone could lend a hand on what I’m doing wrong and how I’m meant to do this, that would be awesome. Thanks.", "username": "Adam_Morris" }, { "code": "", "text": "Hi @Adam_Morris,Welcome to the community.You should place there the name of your linked Atlas cluster service, as shown under Linked Clusters.If you still have issues please provide the trigger URL from the browser.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Scheduled trigger service returning undefined
2021-01-27T21:21:42.401Z
Scheduled trigger service returning undefined
3,218
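A sketch of a scheduled-trigger function that clears a collection, as discussed above. The string passed to context.services.get must match the service name of the linked cluster shown in the Realm/App Services UI ("mongodb-atlas" is the common default, but an app may use a different name); the database and collection names here are placeholders.

```javascript
// Realm / Atlas App Services function, run by a weekly scheduled trigger.
exports = async function () {
  // The argument is the *service name* of the linked cluster, not the
  // cluster name itself ("Cluster0") and not a data lake service.
  const cluster = context.services.get("mongodb-atlas");      // assumed service name
  const collection = cluster.db("mydb").collection("weekly_cache"); // placeholders

  const result = await collection.deleteMany({});
  console.log(`Deleted ${result.deletedCount} documents`);
  return result.deletedCount;
};
```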
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 3.6.22-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 3.6.21. The next stable release 3.6.22 will be a recommended upgrade for all 3.6 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 3.6.22-rc0 is released
2021-01-28T16:24:28.144Z
MongoDB 3.6.22-rc0 is released
2,144
null
[ "dot-net" ]
[ { "code": "", "text": "Realm dot-net Upgraded (through Nuget) from 10.0.0-beta.2 to 10.0.0-beta.5\nClean and rebuild.My RealmObject ‘Accommodation’ has a date field:\n[MapTo(“date”)]\npublic DateTimeOffset Date { get; set; }In beta.2 the following query always worked:\nvar result2 = _Realm.All(Accommodation) ();\n(The editor does not accept greaterThan or LessThan chars)In beta.5 it fails.\nEvery ‘Accommodation’ object returned has ‘Date’ zeroed: {01/01/0001 00:02:41 +00:00}Switching back to beta.2 everything works again.Realm Studio view of the realm file shows dates in Accommodation classes are correct,\n(which they have to be for Beta.2 to work again).A different class returns dates correctly (named ‘ArrivalDate’)\nCould it be internal confusion due to the date simply being named ‘Date’ in the latest beta?\n[MapTo(“date”)]\npublic DateTimeOffset Date { get; set; }Any ideas?", "username": "Richard_Fairall" }, { "code": "date", "text": "Interesting, we do have tests for that but it’s possible you’re hitting a corner case we didn’t handle correctly. What you can try is to rename the property to date and remove the MapTo attribute. If that fixes the issue, it will indeed point to a bug with mapped properties. If it doesn’t, then we would probably need a repro case - can you by any chance share your project or reproduce the issue in a separate one?", "username": "nirinchev" }, { "code": "", "text": "I’ll try and reproduce it in another project which I can clear.\nThis project is a monster and has ‘live’ personal data on test, otherwise I’d be happy to clear the Realm.\nUnfortunately the class I mentioned is used in numerous queries as well as c# code.I’ll see what I can do right now.", "username": "Richard_Fairall" }, { "code": "", "text": "I have another class with\n[MapTo(“date”)]\npublic DateTimeOffset Date { get; set; }\nand a query returns an entry withe the date intact!\nThe only difference is Accommodations number 4700, the other class has only one item.btw I tried another realm theogh GetDefaultInstance() and it still failed on the dates.", "username": "Richard_Fairall" }, { "code": "Accommodation", "text": "Very peculiar! 
It could also be related to the schema of the Accommodation class versus your other models - perhaps a particular order of the properties is causing incorrect values to be returned.", "username": "nirinchev" }, { "code": "", "text": "I’ll have a look at both areas.\nThey are all in same order on Schema and Class.Baffling one this, especially as beta.2 works.\nHowever, in the case of the second class I tried, I created several objects on realm just now.\nThe accommodations were created a week ago.\nI can’t imagine there’s a bson date incompatibility between the betas.I’m trying the Date to date mod on Accommodation…\nUnfortunately removing the MapTo and renaming Date to date still fails.", "username": "Richard_Fairall" }, { "code": "", "text": "OK interesting development.I accessed one Accommodation object, updated the ‘Date’ with the correct datetime.The query now returns a count of 1.It appears beta.5 doesn’t like old dates !", "username": "Richard_Fairall" }, { "code": "", "text": "That is indeed peculiar - I’ll ask the Core database folks and see if there have been any file format changes from beta.2 and get back to you.", "username": "nirinchev" }, { "code": "", "text": "Firstly, I checked out another PC running the app, untouched for days, and it works OK.So, on this problem version…\nI deleted the local realm file, installed beta.2, started the app and it works fine.\nSo, I deleted the local realm file, installed beta.5, started the app, and it fails with zeroed dateTimeOffsets.I can continue working with good ol’ beta.2.", "username": "Richard_Fairall" }, { "code": "", "text": "Talked with the Core folks and this is not something expected or previously reported. Let me try to summarize your observations:This is however inconsistent with your latest observation that deleting the Realm file and creating it from scratch with beta.5 still exhibits the problem, so there may be something else at play here.It does seem like a bug though and to be able to proceed with investigating it, it will greatly help if:", "username": "nirinchev" }, { "code": "", "text": "btw Earlier, after changing dates in code using beta.5, reverting to beta.2 failed. Phew!\nI can send you the current beta.2 realm file. I can send the class file and corresponding schema.\nTo which Email address should I send the data?\nI can also send the Appid and Web Url from the dashboard if necessary.", "username": "Richard_Fairall" }, { "code": "", "text": "Oh, I didn’t realize that was a synced Realm! That’s another data point, although not one that explains what’s going on Can you send them over to [email protected] and I’ll try to get a local repro.Thanks!", "username": "nirinchev" }, { "code": "", "text": "Now working with fix in beta.6\nThanks .", "username": "Richard_Fairall" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Dot-net 10.0.0-beta.5 DateTimeOffsets returned zeroed
2021-01-19T17:42:42.949Z
Dot-net 10.0.0-beta.5 DateTimeOffsets returned zeroed
2,438
null
[ "sharding", "performance" ]
[ { "code": "{\"t\":{\"$date\":\"2021-01-28T13:57:52.235+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn82106\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"appName\":\"MongoDB Shell\",\"command\":{\"_configsvrMoveChunk\":1,\"ns\":\"cdrarch.cdr_icx_isup_20201220\",\"min\":{\"SHARD_MINSEC\":67.0},\"max\":{\"SHARD_MINSEC\":68.0},\"shard\":\"db_rs030\",\"lastmod\":{\"$timestamp\":{\"t\":836,\"i\":1}},\"lastmodEpoch\": {\"$oid\":\"6012685e0a3dbbba22c99476\"},\"toShard\":\"db_rs068\",\"maxChunkSizeBytes\":67108864,\"secondaryThrottle\":{},\"waitForDelete\":true,\"forceJumbo\":false,\"writeConcern\":{\"w\":\"majority\",\"wtimeout\":15000},\"lsid\":{\"id\":{\"$uuid\":\"0addce49-3078-42ed-bef6-32016ea86c73\"},\"uid\":{\"$binary\":{\"base64\":\"YtJ8CVGJPpojGlBhlVfpmkB+TWiGCwPUvkGEjp5tty0=\",\"subType\":\"0\"}}},\"$replData\":1,\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1611838654,\"i\":5}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"mUIAh6c1xe1e5RmLCS2d02ecNgI=\",\"subType\":\"0\"}},\"keyId\":6921625911045390337}},\"$audit\":{\"$impersonatedUsers\":[{\"user\":\"mongo-admin\",\"db\":\"admin\"}],\"$impersonatedRoles\":[{\"role\":\"root\",\"db\":\"admin\"}]},\"$client\":{\"application\":{\"name\":\"MongoDB Shell\"},\"driver\":{\"name\":\"MongoDB Internal Client\",\"version\":\"4.4.2\"},\"os\":{\"type\":\"Linux\",\"name\":\"Ubuntu\",\"architecture\":\"x86_64\",\"version\":\"18.04\"},\"mongos\":{\"host\":\"ndecar01:30004\",\"client\":\"10.100.22.4:49376\",\"version\":\"4.4.2\"}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1611838654,\"i\":4}},\"t\":41}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":537,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":3}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":5}},\"Global\":{\"acquireCount\":{\"r\":2,\"w\":3}},\"Database\":{\"acquireCount\":{\"r\":2,\"w\":3}},\"Collection\":{\"acquireCount\":{\"r\":2,\"w\":3}},\"Mutex\":{\"acquireCount\":{\"r\":5}}},\"flowControl\":{\"acquireCount\":3,\"timeAcquiringMicros\":3},\"writeConcern\":{\"w\":\"majority\",\"wtimeout\":15000,\"provenance\":\"clientSupplied\"},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":17631}}{\"t\":{\"$date\":\"2021-01-28T14:34:50.150+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":21993, \"ctx\":\"MoveChunk\",\"msg\":\"moveChunk data transfer progress\",\"attr\":{\"response\":{\"waited\":true,\"active\":true,\"sessionId\":\"db_rs030_db_rs047_6012bd570a3dbbba22e1ec76\",\"ns\":\"cdrarch.cdr_icx_isup_20201220\",\"from\":\"db_rs030/ndecar12:30303,ndecar14:30301,ndemec01:30302\",\"fromShardId\":\"db_rs030\",\"min\":{\"SHARD_MINSEC\":46.0},\"max\":{\"SHARD_MINSEC\":47.0},\"shardKeyPattern\":{\"SHARD_MINSEC\":1.0},\"state\":\"ready\",\"counts\":{\"cloned\":0,\"clonedBytes\":0,\"catchup\":0,\"steady\":0},\"ok\":1.0,\"$gleStats\":{\"lastOpTime\":{\"ts\":{\"$timestamp\":{\"t\":1611840856,\"i\":2}},\"t\":1},\"electionId\":{\"$oid\":\"7fffffff0000000000000001\"}},\"lastCommittedOpTime\":{\"$timestamp\":{\"t\":1611840856,\"i\":2}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1611840887,\"i\":32}},\"t\":41}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1611840888,\"i\":62}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\"subType\":\"0\"}},\"keyId\":0}},\"operationTime\":{\"$timestamp\":{\"t\":1611840888,\"i\":1}}},\"memoryUsedBytes\":0,\"docsRemainingToClone\":0}}", "text": "hi,using MongoDB version 4.4.2 on 
Ubuntu 18.04, XFS filesystems, 36 bare metal servers:we have to adopt the mechanism of disabling the balancer permanently, and create new collections (on a weekly basis, for data of that week) then pre-split the initial chunk into 3600 chunks and pre-migrate these 3600 chunks evenly across our 90 shards using the “moveChunk” command. Only when this is completed, we can safely start using such collection.This rarely goes fast (like 1 chunk movement per second), but typically takes 10-20 secs per chunk, even with a fully idle cluster. So 20secs x 3600 chunks means … 20 hours.The only information about what’s happening can be found in the logs (examples below are for collection named “cdr_icx_isup_20201220” in dbase “cdrarch”):{\"t\":{\"$date\":\"2021-01-28T13:57:52.235+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn82106\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"appName\":\"MongoDB Shell\",\"command\":{\"_configsvrMoveChunk\":1,\"ns\":\"cdrarch.cdr_icx_isup_20201220\",\"min\":{\"SHARD_MINSEC\":67.0},\"max\":{\"SHARD_MINSEC\":68.0},\"shard\":\"db_rs030\",\"lastmod\":{\"$timestamp\":{\"t\":836,\"i\":1}},\"lastmodEpoch\": {\"$oid\":\"6012685e0a3dbbba22c99476\"},\"toShard\":\"db_rs068\",\"maxChunkSizeBytes\":67108864,\"secondaryThrottle\":{},\"waitForDelete\":true,\"forceJumbo\":false,\"writeConcern\":{\"w\":\"majority\",\"wtimeout\":15000},\"lsid\":{\"id\":{\"$uuid\":\"0addce49-3078-42ed-bef6-32016ea86c73\"},\"uid\":{\"$binary\":{\"base64\":\"YtJ8CVGJPpojGlBhlVfpmkB+TWiGCwPUvkGEjp5tty0=\",\"subType\":\"0\"}}},\"$replData\":1,\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1611838654,\"i\":5}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"mUIAh6c1xe1e5RmLCS2d02ecNgI=\",\"subType\":\"0\"}},\"keyId\":6921625911045390337}},\"$audit\":{\"$impersonatedUsers\":[{\"user\":\"mongo-admin\",\"db\":\"admin\"}],\"$impersonatedRoles\":[{\"role\":\"root\",\"db\":\"admin\"}]},\"$client\":{\"application\":{\"name\":\"MongoDB Shell\"},\"driver\":{\"name\":\"MongoDB Internal Client\",\"version\":\"4.4.2\"},\"os\":{\"type\":\"Linux\",\"name\":\"Ubuntu\",\"architecture\":\"x86_64\",\"version\":\"18.04\"},\"mongos\":{\"host\":\"ndecar01:30004\",\"client\":\"10.100.22.4:49376\",\"version\":\"4.4.2\"}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1611838654,\"i\":4}},\"t\":41}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":537,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":3}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":5}},\"Global\":{\"acquireCount\":{\"r\":2,\"w\":3}},\"Database\":{\"acquireCount\":{\"r\":2,\"w\":3}},\"Collection\":{\"acquireCount\":{\"r\":2,\"w\":3}},\"Mutex\":{\"acquireCount\":{\"r\":5}}},\"flowControl\":{\"acquireCount\":3,\"timeAcquiringMicros\":3},\"writeConcern\":{\"w\":\"majority\",\"wtimeout\":15000,\"provenance\":\"clientSupplied\"},\"storage\":{},\"protocol\":\"op_msg\",\"durationMillis\":17631}}(notice the last field “durationMillis” = 17secs)thx in advance for any suggestions !\nRob", "username": "Rob_De_Langhe" }, { "code": "{\"t\":{\"$date\":\"2021-01-28T14:51:30.876+01:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":22016, \"ctx\":\"MoveChunk\",\"msg\":\"Starting chunk migration donation\",\"attr\":{\"requestParameters\":\"ns: cdrarch.cdr_icx_isup_20201220, [{ SHARD_MINSEC: 579.0 }, { SHARD_MINSEC: 580.0 }), fromShard: db_rs030, toShard: 
db_rs040\",\"collectionEpoch\":{\"$oid\":\"6012685e0a3dbbba22c99476\"}}}{\"t\":{\"$date\":\"2021-01-28T14:51:30.882+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22080, \"ctx\":\"MoveChunk\",\"msg\":\"About to log metadata event\",\"attr\":{\"namespace\":\"changelog\",\"event\":{\"_id\":\"ndecar14:30301-2021-01-28T14:51:30.882+01:00-6012c1620a3dbbba22e39230\",\"server\":\"ndecar14:30301\",\"shard\":\"db_rs030\",\"clientAddr\":\"\",\"time\":{\"$date\":\"2021-01-28T13:51:30.882Z\"},\"what\":\"moveChunk.start\",\"ns\":\"cdrarch.cdr_icx_isup_20201220\",\"details\":{\"min\":{\"SHARD_MINSEC\":579.0},\"max\":{\"SHARD_MINSEC\":580.0},\"from\":\"db_rs030\",\"to\":\"db_rs040\"}}}}{\"t\":{\"$date\":\"2021-01-28T14:51:31.303+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":21993, \"ctx\":\"MoveChunk\",\"msg\":\"moveChunk data transfer progress\",\"attr\":{\"response\":{\"waited\":true,\"active\":true,\"sessionId\":\"db_rs030_db_rs040_6012c1620a3dbbba22e39231\",\"ns\":\"cdrarch.cdr_icx_isup_20201220\",\"from\":\"db_rs030/ndecar12:30303,ndecar14:30301,ndemec01:30302\",\"fromShardId\":\"db_rs030\",\"min\":{\"SHARD_MINSEC\":579.0},\"max\":{\"SHARD_MINSEC\":580.0},\"shardKeyPattern\":{\"SHARD_MINSEC\":1.0},\"state\":\"steady\",\"counts\":{\"cloned\":0,\"clonedBytes\":0,\"catchup\":0,\"steady\":0},\"ok\":1.0,\"$gleStats\":{\"lastOpTime\":{\"ts\":{\"$timestamp\":{\"t\":1611841888,\"i\":24}},\"t\":21},\"electionId\":{\"$oid\":\"7fffffff0000000000000015\"}},\"lastCommittedOpTime\":{\"$timestamp\":{\"t\":1611841891,\"i\":3}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1611841890,\"i\":66}},\"t\":41}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1611841891,\"i\":3}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\"subType\":\"0\"}},\"keyId\":0}},\"operationTime\":{\"$timestamp\":{\"t\":1611841891,\"i\":3}}},\"memoryUsedBytes\":0,\"docsRemainingToClone\":0}}{\"t\":{\"$date\":\"2021-01-28T14:51:31.424+01:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":22026, \"ctx\":\"range-deleter\",\"msg\":\"Submitting range deletion task\",\"attr\":{\"deletionTask\":{\"_id\":{\"$uuid\":\"382b9f30-c249-4b6f-8361-e258353eb532\"},\"nss\":\"cdrarch.cdr_icx_isup_20201220\",\"collectionUuid\":{\"$uuid\":\"ac286098-d15b-4797-aaba-a399c3d7757b\"},\"donorShardId\":\"db_rs030\",\"range\":{\"min\":{\"SHARD_MINSEC\":579.0},\"max\":{\"SHARD_MINSEC\":580.0}},\"whenToClean\":\"now\"},\"migrationId\":{\"uuid\":{\"$uuid\":\"382b9f30-c249-4b6f-8361-e258353eb532\"}}}}{\"t\":{\"$date\":\"2021-01-28T14:51:31.424+01:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":21990, \"ctx\":\"range-deleter\",\"msg\":\"Scheduling deletion of the collection's specified range\",\"attr\":{\"namespace\":\"cdrarch.cdr_icx_isup_20201220\",\"range\":\"[{ SHARD_MINSEC: 579.0 }, { SHARD_MINSEC: 580.0 })\"}}{\"t\":{\"$date\":\"2021-01-28T14:51:31.424+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22080, \"ctx\":\"MoveChunk\",\"msg\":\"About to log metadata 
event\",\"attr\":{\"namespace\":\"changelog\",\"event\":{\"_id\":\"ndecar14:30301-2021-01-28T14:51:31.424+01:00-6012c1630a3dbbba22e392be\",\"server\":\"ndecar14:30301\",\"shard\":\"db_rs030\",\"clientAddr\":\"\",\"time\":{\"$date\":\"2021-01-28T13:51:31.424Z\"},\"what\":\"moveChunk.commit\",\"ns\":\"cdrarch.cdr_icx_isup_20201220\",\"details\":{\"min\":{\"SHARD_MINSEC\":579.0},\"max\":{\"SHARD_MINSEC\":580.0},\"from\":\"db_rs030\",\"to\":\"db_rs040\",\"counts\":{\"cloned\":0,\"clonedBytes\":0,\"catchup\":0,\"steady\":0}}}}}{\"t\":{\"$date\":\"2021-01-28T14:51:31.428+01:00\"},\"s\":\"I\", \"c\":\"MIGRATE\", \"id\":22019, \"ctx\":\"MoveChunk\",\"msg\":\"Waiting for migration cleanup after chunk commit\",\"attr\":{\"namespace\":\"cdrarch.cdr_icx_isup_20201220\",\"range\":\"[{ SHARD_MINSEC: 579.0 }, { SHARD_MINSEC: 580.0 })\",\"migrationId\":{\"uuid\":{\"$uuid\":\"382b9f30-c249-4b6f-8361-e258353eb532\"}}}}{\"t\":{\"$date\":\"2021-01-28T14:51:31.761+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22080, \"ctx\":\"MoveChunk\",\"msg\":\"About to log metadata event\",\"attr\":{\"namespace\":\"changelog\",\"event\":{\"_id\":\"ndecar14:30301-2021-01-28T14:51:31.761+01:00-6012c1630a3dbbba22e392e1\",\"server\":\"ndecar14:30301\",\"shard\":\"db_rs030\",\"clientAddr\":\"\",\"time\":{\"$date\":\"2021-01-28T13:51:31.761Z\"},\"what\":\"moveChunk.from\",\"ns\":\"cdrarch.cdr_icx_isup_20201220\",\"details\":{\"min\":{\"SHARD_MINSEC\":579.0},\"max\":{\"SHARD_MINSEC\":580.0},\"step 1 of 6\":0,\"step 2 of 6\":6,\"step 3 of 6\":387,\"step 4 of 6\":33,\"step 5 of 6\":31,\"step 6 of 6\":426,\"to\":\"db_rs040\",\"from\":\"db_rs030\",\"note\":\"success\"}}}}{\"t\":{\"$date\":\"2021-01-28T14:51:31.770+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn940\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"appName\":\"MongoDB Shell\",\"command\":{\"moveChunk\":\"cdrarch.cdr_icx_isup_20201220\",\"shardVersion\":[{\"$timestamp\":{\"t\":1962,\"i\":1}},{\"$oid\":\"6012685e0a3dbbba22c99476\"}],\"epoch\":{\"$oid\":\"6012685e0a3dbbba22c99476\"},\"configdb\":\"configReplSet/ndecar01:30001,ndecar02:30002,ndemec01:30003\",\"fromShard\":\"db_rs030\",\"toShard\":\"db_rs040\",\"min\":{\"SHARD_MINSEC\":579.0},\"max\":{\"SHARD_MINSEC\":580.0},\"maxChunkSizeBytes\":67108864,\"waitForDelete\":true,\"forceJumbo\":0,\"takeDistLock\":false,\"writeConcern\":{},\"lsid\":{\"id\":{\"$uuid\":\"0addce49-3078-42ed-bef6-32016ea86c73\"},\"uid\":{\"$binary\":{\"base64\":\"YtJ8CVGJPpojGlBhlVfpmkB+TWiGCwPUvkGEjp5tty0=\",\"subType\":\"0\"}}},\"$clusterTime\":{\"clusterTime\":{\"$timestamp\":{\"t\":1611841890,\"i\":65}},\"signature\":{\"hash\":{\"$binary\":{\"base64\":\"qi1oG+Z170E479Tzh38zpe9KbVk=\",\"subType\":\"0\"}},\"keyId\":6921625911045390337}},\"$audit\":{\"$impersonatedUsers\":[{\"user\":\"mongo-admin\",\"db\":\"admin\"}],\"$impersonatedRoles\":[{\"role\":\"root\",\"db\":\"admin\"}]},\"$client\":{\"application\":{\"name\":\"MongoDB Shell\"},\"driver\":{\"name\":\"MongoDB Internal 
Client\",\"version\":\"4.4.2\"},\"os\":{\"type\":\"Linux\",\"name\":\"Ubuntu\",\"architecture\":\"x86_64\",\"version\":\"18.04\"},\"mongos\":{\"host\":\"ndecar01:30004\",\"client\":\"10.100.22.4:49376\",\"version\":\"4.4.2\"}},\"$configServerState\":{\"opTime\":{\"ts\":{\"$timestamp\":{\"t\":1611841890,\"i\":65}},\"t\":41}},\"$db\":\"admin\"},\"numYields\":0,\"reslen\":333,\"locks\":{\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":1}},\"Global\":{\"acquireCount\":{\"r\":1}}},\"protocol\":\"op_msg\",\"durationMillis\":899}}", "text": "An example where the chunk movement went fast (approx 1 sec for the chunk with shard-value range 579-580) :So, the main difference seems to be that this data transfer is happening only once here, whereas in the (unfortunately many) other cases it is attempted tens of times, obviously slowing down the whole process.", "username": "Rob_De_Langhe" } ]
How can I inspect the moveChunk slow performance?
2021-01-28T13:38:34.766Z
How can I inspect the moveChunk slow performance?
2,646
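A compressed mongosh sketch of the weekly pre-split/pre-distribute routine described in the thread, using far fewer chunks than the real 3600 for readability. The collection name, shard list handling, and round-robin placement are assumptions; note that waitForDelete: true (visible in the logs above) makes each migration wait for the donor's range deletion before returning, which adds to per-chunk latency even when the chunks are empty.

```javascript
// Pre-split a new weekly collection and spread its empty chunks across shards.
const ns = "cdrarch.cdr_icx_isup_20210104";   // assumed new weekly collection
const shards = db.getSiblingDB("config").shards.find().toArray().map(s => s._id);

sh.stopBalancer();                             // balancer stays disabled, as in the thread
sh.shardCollection(ns, { SHARD_MINSEC: 1 });

const nChunks = 60;                            // 3600 in the real setup
for (let i = 1; i < nChunks; i++) {
  sh.splitAt(ns, { SHARD_MINSEC: i });
}
for (let i = 0; i < nChunks; i++) {
  db.adminCommand({
    moveChunk: ns,
    find: { SHARD_MINSEC: i + 0.5 },           // any value inside the chunk's range
    to: shards[i % shards.length],             // simple round-robin placement
    _waitForDelete: false                      // empty chunks: nothing to delete on the donor
  });
}
```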
null
[ "node-js", "mongoose-odm", "connecting" ]
[ { "code": "const express = require(\"express\");\nconst app = express();\nconst port = 4000;\n\nconst mongoose = require(\"mongoose\");\n\nvar uri = \"mongodb+srv://User:[email protected]/MyDatabase?retryWrites=true&w=majority\";\n\nmongoose.connect(uri,\n { useUnifiedTopology: true, useNewUrlParser: true }\n );\n \n const db = mongoose.connection;\ndb.on(\"error\", console.error.bind(console, \"connection error:\"));\ndb.once(\"open\", function( ) {\n console.log(\"hurray! we connected\");\n});\nconst mongoose = require(\"mongoose\");\n\nconst userTaskSchema = mongoose.Schema({\n taskTitle: {\n type: String,\n required: true,\n },\n taskDescription: {\n type: String,\n },\n});\nmodule.exports = mongoose.model(\"userTaskModel\", userTaskSchema);\n", "text": "Hi - I am new to learning NodeJS with mongodb. So please excuse and ask me if I didn’t provide enough information to look for help on this.I am trying to connect with MongoDB Atlas using Nodejs, ExpressJS and mongoose schema.The REST client in VS code that says 404 not found error and the connection: closeserver.js code:user_tak_model.jsand when i start executing, server.js\nit says\nHurray! we are connceted but REST client says 404 error . please help", "username": "PP_SS" }, { "code": "", "text": "Are you able to connect by shell using the connect string in your uri?", "username": "Ramachandra_Tummala" }, { "code": "var db = \"mongodb://localhost:27017/example\";\n\nmongoose.connect(db, { useNewUrlParser: true, useUnifiedTopology: true });`\n", "text": "Thanks, I restarted the system, after that the DB got coneted to the server. not sure what went wrong before.now I changed the DB connection asin shell,\nwhen i use example\nit switchd to that databaseuse collections is empty. why is that? the video tutorial that i followed for learning it says ther were collections created from mongoose chema. but all i see is empty in shell. why,please?", "username": "PP_SS" }, { "code": "", "text": "The uri you used earlier is different from what you used nowYou are connecting to a mongodb on your local host\nPlease clarify which is the correct oneDid you meant show collections in the next line?\nuse collections means it will put you into collections DB\nHave you created any collections under example?\nIf yes you should be able to see with show collections", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi Ramachandra - Thanks for the help. I have completely changes my code to use localhost. Sorry for the confusion that it may caused.I will create a new thread if I have question with relevant code.It was my mistakee that in a hurry I typed\n“Use collections” instead of “Show Collections” in the earlier post. very sorry.", "username": "PP_SS" } ]
Mongodb Atlas 404 error
2021-01-22T19:47:03.318Z
Mongodb Atlas 404 error
7,036
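On the follow-up question in the thread (why `show collections` is empty): Mongoose creates a collection lazily, on the first document insert, not when the schema or model is defined. A minimal sketch reusing the model file from the thread; the route path is an assumption, and a 404 from the REST client usually just means no route matched the requested URL/method, which is separate from the database connection itself.

```javascript
// app.js -- assumes ./user_task_model exports the mongoose model from the thread.
const express = require("express");
const mongoose = require("mongoose");
const UserTask = require("./user_task_model");

const app = express();
app.use(express.json());

mongoose.connect("mongodb://localhost:27017/example", {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

// POST /tasks with {"taskTitle": "..."} -- the first successful insert is what
// makes the "usertaskmodels" collection appear in `show collections`.
app.post("/tasks", async (req, res) => {
  try {
    const task = await UserTask.create(req.body);
    res.status(201).json(task);
  } catch (err) {
    res.status(400).json({ error: err.message });
  }
});

app.listen(4000, () => console.log("listening on 4000"));
```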
null
[ "queries", "compass" ]
[ { "code": " [{\n \"_id\": {\n \"$oid\": \"600f0760679d827471a8240a\"\n },\n \"is_company\": true,\n \"company_name\": \"DI MARMORE INDUSTRIA E COMERCIO LTDA\",\n \"cnpj\": \"05391213000105\",\n \"ie\": \"675140204112\",\n \"phones\": [\n \"1141380000\",\n \"1141380000\",\n \"11992690000\",\n \"11994540000\"\n ],\n \"address\": {\n \"street\": \"RUA SALETE\",\n \"number\": \"70\",\n \"bairro\": \"JARDIM SALETE\",\n \"city\": \"TABOAO DA SERRA\",\n \"uf\": \"SP\",\n \"cep\": \"06787999\"\n },\n \"site\": \"dimarmore.com.br\",\n \"email\": \"[email protected]\",\n \"nfe_status\": true,\n \"nfce_status\": false,\n \"cte_status\": false,\n \"mdfe_status\": false,\n \"token\": \"1-abc123defg4567-algoMaisAqui\"\n },{\n \"_id\": {\n \"$oid\": \"600f0760679d827471a8240b\"\n },\n \"is_company\": false,\n \"full_name\": \"NILTON GONÇALVES MEDEIROS\",\n \"cpf\": \"04298414122\",\n \"rg\": \"128768792\",\n \"phones\": [\n \"1155550108\",\n \"1195555555\"\n ],\n \"address\": {\n \"street\": \"AV. PREFEITO DONALD\",\n \"number\": \"9999\",\n \"complemento\": \"ETAPA 3\",\n \"bairro\": \"NOVA CAIEIRAS\",\n \"city\": \"CAIEIRAS\",\n \"uf\": \"SP\",\n \"cep\": \"07704999\"\n },\n \"email\": \"[email protected]\",\n \"nfe_status\": false,\n \"nfce_status\": false,\n \"cte_status\": true,\n \"mdfe_status\": true,\n \"token\": \"2-abc123defg4567-algoMaisAqui\"\n },{\n \"_id\": {\n \"$oid\": \"600f0fa9679d827471a8240e\"\n },\n \"is_company\": true,\n \"company_name\": \"SISTROM SISTEMAS WEB LTDA\",\n \"cnpj\": \"11000000999999\",\n \"im\": \"0009999\",\n \"phones\": [\n \"1199999999\"\n ],\n \"address\": {\n \"street\": \"AV. PREFEITO DONALD\",\n \"number\": \"9999\",\n \"complemento\": \"CASA 1\",\n \"bairro\": \"NOVA ORLEANS\",\n \"city\": \"CAIEIRAS\",\n \"uf\": \"SP\",\n \"cep\": \"07704999\"\n },\n \"site\": \"sistron.com.br\",\n \"email\": \"[email protected]\",\n \"issuers\": [\n {\n \"company_name\": \"ALEXPRESS AGENCIAMENTO AEREO LTDA\",\n \"cnpj\": \"59999999999999\",\n \"ie\": \"111799999999\",\n \"phones\": [\n \"1155555555\",\n \"1155555555\"\n ],\n \"address\": {\n \"street\": \"AV. CUPECÊ\",\n \"number\": \"9999\",\n \"bairro\": \"JARDIM CUPECÊ\",\n \"city\": \"SÃO PAULO\",\n \"uf\": \"SP\",\n \"cep\": \"04365000\"\n },\n \"site\": \"alexpress.com.br\",\n \"email\": \"[email protected]\",\n \"nfe_status\": false,\n \"nfce_status\": false,\n \"cte_status\": true,\n \"mdfe_status\": true,\n \"token\": \"3-abc123defg4567-algoMaisAqui\"\n },\n {\n \"company_name\": \"LW LOGÍSTICA E TRANSPORTES LTDA \",\n \"cnpj\": \"13599999999999\",\n \"ie\": \"146099999999\",\n \"phones\": [\n \"1155555555\",\n \"1155555555\"\n ],\n \"address\": {\n \"street\": \"RUA RANULFO PRATA\",\n \"number\": \"9999\",\n \"bairro\": \"CIDADE ADEMAR\",\n \"city\": \"SÃO PAULO\",\n \"uf\": \"SP\",\n \"cep\": \"04389999\"\n },\n \"email\": \"[email protected]\",\n \"nfe_status\": false,\n \"nfce_status\": false,\n \"cte_status\": true,\n \"mdfe_status\": true,\n \"token\": \"4-abc123defg4567-algoMaisAqui\"\n }\n ]\n }]\n{\"company_name\":1, \"issuers.company_name\":1, \"full_name\": 1, ...}", "text": "Hello everyone!In this fictional collection of Customers with arrays:How do I select certain fields inside or outside the array? 
See, to refer to the token “3-abc123defg4567-somethingHere” that may be inside or outside the array, I use the following syntax in MongoDB Compass:{$or: [{token: “3-abc123defg4567-algoMaisHere” }, {“issuers.token”: “3-abc123defg4567-algoMaisHere”}]}My question is: How to select some fields using {\"company_name\":1, \"issuers.company_name\":1, \"full_name\": 1, ...}I tried to put this option in several ways, but it still returned all fields of the document or returned nothing.", "username": "Nilton_Medeiros" }, { "code": "{\"company_name\":1, \"issuers.company_name\":1, \"full_name\": 1, ...}{\"company_name\":1, \"issuers.company_name\":1, \"full_name\": 1, ... }10", "text": "Hello @Nilton_Medeiros, welcome to the MongoDB Community forum.My question is: How to select some fields using {\"company_name\":1, \"issuers.company_name\":1, \"full_name\": 1, ...}You can use projection to select specific fields (or exclude some fields). In MongoDB Compass, in the Documents tab, click the Filter Options. The Project area is where you can specify what fields you want in the output document. For example, you can specify {\"company_name\":1, \"issuers.company_name\":1, \"full_name\": 1, ... }. Use 1 to include and 0 to exclude fields.See the Compass documentation topic for details: Set which fields are returned.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks! This is fantastic!", "username": "Nilton_Medeiros" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I select only a few fields in the query for a collection with array?
2021-01-27T19:08:16.083Z
How do I select only a few fields in the query for a collection with array?
5,631
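The same filter plus projection expressed as a mongosh query, for anyone running it outside Compass. The collection is assumed to be named customers; the token value is taken from the sample document in the thread.

```javascript
db.customers.find(
  // filter: the token may live at the top level or inside the issuers array
  {
    $or: [
      { token: "3-abc123defg4567-algoMaisAqui" },
      { "issuers.token": "3-abc123defg4567-algoMaisAqui" }
    ]
  },
  // projection: 1 includes a field, 0 excludes it (inclusion and exclusion
  // cannot be mixed, except for _id)
  {
    company_name: 1,
    full_name: 1,
    "issuers.company_name": 1
  }
);
```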
null
[ "replication", "configuration" ]
[ { "code": "{\n\t\"operationTime\" : Timestamp(1611808548, 1),\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Quorum check failed because not enough voting nodes responded;\n required 2 but only the following 1 voting nodes responded: 192.168.1.2:22330; \n the following nodes did not respond affirmatively: 192.168.1.2:22331\n failed with stream truncated\",\n\t\"code\" : 74,\n\t\"codeName\" : \"NodeNotFound\",\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1611808548, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"PPsSmtIsQGW/nLZkWYh0J3jENLs=\"),\n\t\t\t\"keyId\" : NumberLong(\"6922656793390743556\")\n\t\t}\n\t}\n}\n", "text": "I am puzzled by a little problem while setting up a MongoDB replica set.I have already run this:rs.initiate()I therefore have one member in my replica set.Then, to add one more member, I want to run:rs.add(‘192.168.1.2:22331’)But here is the message I get:I am obviously missing something.\nThe message sounds weird. How could there be more than one voting member when there is only one member?", "username": "Michel_Bouchet" }, { "code": "", "text": "Your primary is not able to communicate with the node you are trying to add.\nIs the node up and running? Can you connect to it?\nIt could be some firewall or network issue.\nDoes mongodb.log show more details?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for the reply. Yes, all nodes are up and running.I just solved the issue.I was using this for the server certificates:extendedKeyUsage = serverAuthI changed it to:extendedKeyUsage = serverAuth, clientAuthNow it works. I guess this was needed for all the servers to communicate with each other.", "username": "Michel_Bouchet" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I can't add a second member to a replica set
2021-01-28T04:47:31.200Z
I can&rsquo;t add a second member to a replica set
3,799
null
[ "java", "change-streams" ]
[ { "code": "", "text": "Hey, I’ve been making an online game project for a while. I use MongoDB almost for everything related to database. I’d like to explain how I cache player data in game.Basically, when player joins the game, the mongo document saves in server memory as a cache and I use cache for almost everything realted to player data. For now, I use change stream to update cache; when the “player” collection update, insert or delete, change stream will update server cache.Should I really trust “Change Stream” for a long time? Normally, online games are using sockets and rest API for datas but I’ve found the Change Stream more suits what I’m doing. In the future, if the project grows up, I’ll probably use sharding and other complicated things for database and I really don’t want to change my bone caching system. Therefore, I’d want to know I should trust change stream for a long time or not.", "username": "Duck" }, { "code": "", "text": "Hi @Duck,I think change streams with all the recommended resume-ability options implemented should be a reliable solution.It is vastly used in many MongoDB products. However, its important to note all the production considerations and trying to keep the open change streams to a minimal number as long as the performance is setisfied:Please note there is a specific section for Sharded cluster there.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Should I trust Change Stream?
2021-01-28T09:43:11.269Z
Should I trust Change Stream?
3,380
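A minimal sketch of the resumable change-stream pattern recommended above, in mongo-shell style JavaScript; the players collection and the applyToCache helper are hypothetical:

```
// open the stream; fullDocument: "updateLookup" returns the full document on updates
const stream = db.players.watch([], { fullDocument: "updateLookup" });

let resumeToken;
while (stream.hasNext()) {
  const event = stream.next();
  resumeToken = event._id;   // persist this token somewhere durable
  applyToCache(event);       // hypothetical: update the in-memory player cache
}

// after a disconnect or restart, resume from the saved token instead of rebuilding the cache:
// db.players.watch([], { resumeAfter: resumeToken });
```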
null
[ "atlas-functions", "app-services-data-access" ]
[ { "code": "", "text": "Hi,\nI am currently using a custom function to insert a document into a collection. I want to do this on server side as I need some data I don’t want to retrieve on the client side. However, I discovered that even if the function that finally inserts the document is executed with the permission of the user calling, when inserting rules and schema seem to be bypassed. When running manual validation added documents are shown as errors due to an obvious error on my side, but that’s not the point.\nIs there really no validation or is there something I might have missed? As always it is quite hard to find proper documentation. I tried to specifically set a flag collection.insertOne(doc, {bypassDocumentValidation: false}), but it just did nothing.Best regards", "username": "Daniel_Rollenmiller" }, { "code": "", "text": "Hey Daniel -Is there really no validation or is there something I might have missed?Validation and permissions are enforced when you are running your function as a user and your function requires Application Authentication.When you run a function as a System User it should bypass all rules and schema. So will setting your function authentication to System (docs reference). If you are not running as a system user and still facing your issue, it might be helpful to see your permissions and schema set up, as well as what document you are trying to insert.As always it is quite hard to find proper documentation.We’re always looking for ways to make our docs more helpful. Do you mind letting us know what you were searching for/where you were looking and found it unhelpful?", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I just double checked and the function is on Application Authentication as we need some info about the user calling it anyway. Rules and schemas should be fine as I tried the same query from the frontend but from there it was rejected due to schema validation. I’ll look further into it and double check everything.", "username": "Daniel_Rollenmiller" } ]
Schema and rule validation bypassed in Custom Functions?
2021-01-22T08:36:10.400Z
Schema and rule validation bypassed in Custom Functions?
3,661
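To make the behaviour discussed above concrete, here is a sketch of a Realm function that inserts through the calling user's context. With the function's authentication set to Application Authentication the collection's rules and schema are applied, whereas System functions bypass them. The service name "mongodb-atlas" is the default linked-cluster name, and the database and collection names are placeholders:

```
exports = async function(doc) {
  // runs as the calling user under Application Authentication,
  // so collection rules and schema validation apply to this insert
  const coll = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("mycoll");
  return coll.insertOne(doc);
};
```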
null
[]
[ { "code": "", "text": "Una duda de MongoDB, alguien ha hecho operaciones con campos dentro de un documento que está embebido en un arreglo de documentos?, ejemplo: [{“Precio”:12, “cantidad”:3}, {“Precio”:12, “cantidad”:3},…], buscó multiplicar Precio *cantidad en cada elemento del arreglo, y sumar sus productos respectivos.", "username": "Emmanuel_Diaz" }, { "code": "", "text": "", "username": "Jack_Woehr" }, { "code": "", "text": "Ya logré hacer la operación, sacando la lista de objetos con $unwind, y agrupando estos objetos con $group, una vez agrupados, realicé la suma con $sum, y a su vez pude multiplicar por cada documento los campos “Precio” y “cantidad”. ¿alguien sabe como puedo actualizar el documento inicial con el resultado de la opción descrita previamente?. ¡Saludos!", "username": "Emmanuel_Diaz" }, { "code": "$unwind$group$sumPreciocantidad", "text": "@Stennie_X can you help me help on this? @Emmanuel_Diaz writes:I managed to derive the list of objects via $unwind, and group these objects via $group, and once they were grouped, extract the sum using $sum, and then for each document multiply the field Precio (Price) by cantidad (Quantity). Can someone tell me how to update the original document with the result of the operation described? Thanks!", "username": "Jack_Woehr" }, { "code": "$merge$merge", "text": "Can someone tell me how to update the original document with the result of the operation described?Hola @Emmanuel_Diaz! (and thanks for the translation @Jack_Woehr!)It would be very helpful to have more specific details such as your version of MongoDB server, sample documents, and aggregation pipeline in order to provide relevant suggestions.However, for general approaches to saving documents manipulated by aggregation back to the original collection consider:MongoDB 4.2+ supports an aggregation $merge stage which outputs results to a collection with options for how to merge when a result document matches an existing document in the output collection.You can iterate the aggregation result cursor in your application code and perform the equivalent logic of $merge by saving updates to affected documents (possibly as a bulk write if you have many document updates). This is an option for any version of MongoDB that supports your aggregation pipeline features, but adds some overhead for fetching aggregation results over the network and then sending them back to your MongoDB deployment.MongoDB 4.2+ also supports Updates with Aggregation Pipeline with a limited selection of stages. Based on your description of unwinding & grouping this may not be suitable for your current aggregation pipeline usage, but I’m listing here for completeness.Regards,\nStennie", "username": "Stennie_X" } ]
Operaciones al iterar en un arreglo con documentos
2021-01-23T23:22:33.053Z
Operaciones al iterar en un arreglo con documentos
5,746
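A sketch of the pipeline described in the thread above, including the $merge write-back suggested for MongoDB 4.2+. The collection name and the array field name are assumptions; the Precio and cantidad fields come from the post:

```
db.pedidos.aggregate([
  { $unwind: "$items" },
  { $group: {
      _id: "$_id",
      total: { $sum: { $multiply: [ "$items.Precio", "$items.cantidad" ] } }
  } },
  // write the computed total back onto the original documents
  { $merge: { into: "pedidos", on: "_id",
              whenMatched: "merge", whenNotMatched: "discard" } }
])
```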
null
[ "atlas-functions" ]
[ { "code": "const url = context.values.get(\"secret_url\") + \"/endpoint\";\n const data = {\n number_recommendations: 400,\n interests: profile,\n price_min: price_min || 10,\n price_max: price_max || 500,\n };\n const config = {\n method: \"post\",\n url,\n headers: {\n \"Content-Type\": \"application/json\",\n },\n data,\n };\n\n const recommendationsRequest = await axios(config); \n", "text": "I have encountered a very strange issue on a realm function. My function uses the latest axios library to make a POST request to another server. On that server, I know that the request is completing successfully. I have had to add log statements right before the response gets sent, and I can see those statements being logged, so I know the request is successful and being sent back to the realm function context. Also, when testing in postman, this same POST request succeeds no problem. So I KNOW the axios call should resolve. However, it does not resolve. Below is the function code making the call:I’ve tried logging and nothing happens after the axios await.Anybody had similar issues? Seems like a bug with realm perhaps.", "username": "Lukas_deConantseszn1" }, { "code": "context.http", "text": "Hi @Lukas_deConantseszn1,I’ve never used this module on Realm.To send http calls from functions you should use context.http:Please try the following according to syntax and example.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "exports = async function(arg) {\nvar axios = require('axios');\ntry {\n const resp = await axios.get('https://api.url......&apiKey=<api_key>');\n console.log(resp.data[0]); \n } catch (err) {\n console.error(err);\n }\n};\n\n", "text": "Hi Lukas, is it possible that this is due to the API that you are using? The following worked for me and returned a request payload:(This is with Axios 0.19.2)", "username": "Sumedha_Mehta1" }, { "code": "", "text": "@Pavel_Duchovny I might try using the http.context.post then.@Sumedha_Mehta1 thanks for trying out axios. I wouldn’t doubt that it worked for you, it used to work for me and it honestly has stopped working for no clear reason. So I feel the possible bug would be hard to reproduce in nature. Not very helpful I know ", "username": "Lukas_deConantseszn1" } ]
Realm Function axios call never waits despite HTTP request seemingly finishing
2021-01-19T23:10:34.629Z
Realm Function axios call never waits despite HTTP request seemingly finishing
3,939
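A sketch of the same request rewritten with the built-in context.http client pointed to in the answer, which avoids the external axios dependency. The secret_url value name and the payload fields are taken from the thread; the response shape of the downstream service is an assumption:

```
exports = async function(profile, price_min, price_max) {
  const url = context.values.get("secret_url") + "/endpoint";

  const response = await context.http.post({
    url: url,
    headers: { "Content-Type": ["application/json"] },  // header values are arrays of strings
    body: JSON.stringify({
      number_recommendations: 400,
      interests: profile,
      price_min: price_min || 10,
      price_max: price_max || 500
    })
  });

  // response.body is binary; decode it before parsing
  return EJSON.parse(response.body.text());
};
```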
null
[ "queries" ]
[ { "code": "db.dtest4.insertMany([{\n \"time1\" : ISODate(\"2021-01-23T23:05:36.910Z\"),\n \"time2\" : ISODate(\"2021-01-24T00:05:48.339Z\")\n}\n,{\n \"time1\" : ISODate(\"2021-01-26T20:03:13.344Z\"),\n \"time2\" : ISODate(\"2021-01-24T00:05:48.339Z\")\n}\n,{\n \"time1\" : ISODate(\"2021-01-23T23:05:49.339Z\"),\n \"time2\" : ISODate(\"2021-01-24T00:05:48.339Z\")\n}\n,{\n \"time1\" : ISODate(\"2021-01-23T23:04:12.710Z\"),\n \"time2\" : ISODate(\"2021-01-24T00:05:48.339Z\")\n}\n,{\n \"time1\" : ISODate(\"2021-01-24T01:26:58.906Z\"),\n \"time2\" : ISODate(\"2021-01-24T00:05:48.339Z\")\n}]);\n\ndb.dtest4.find({time1: {\"$lt\": \"$time2\"}});\n\n/* 1 */\n{\n \"acknowledged\" : true,\n \"insertedIds\" : [ \n ObjectId(\"60117e8292324b130b58f5f1\"), \n ObjectId(\"60117e8292324b130b58f5f2\"), \n ObjectId(\"60117e8292324b130b58f5f3\"), \n ObjectId(\"60117e8292324b130b58f5f4\"), \n ObjectId(\"60117e8292324b130b58f5f5\")\n ]\n}\n\nFetched 0 record(s) in 3ms\n", "text": "I’m obviously missing something very obvious. Why doesn’t this query return the records where time1 < time2?Returns:", "username": "Eric_Olson" }, { "code": "db.dtest4.find({time1: {\"$lt\": \"$time2\"}});> db.dtest4.find({\"$expr\": { \"$lt\" : [ \"$time1\" , \"$time2\" ] }});\n{ \"_id\" : ObjectId(\"6011c0934b17c80f1720f498\"), \"time1\" : ISODate(\"2021-01-23T23:05:36.910Z\"), \"time2\" : ISODate(\"2021-01-24T00:05:48.339Z\") }\n{ \"_id\" : ObjectId(\"6011c0934b17c80f1720f49a\"), \"time1\" : ISODate(\"2021-01-23T23:05:49.339Z\"), \"time2\" : ISODate(\"2021-01-24T00:05:48.339Z\") }\n{ \"_id\" : ObjectId(\"6011c0934b17c80f1720f49b\"), \"time1\" : ISODate(\"2021-01-23T23:04:12.710Z\"), \"time2\" : ISODate(\"2021-01-24T00:05:48.339Z\") }\n", "text": "db.dtest4.find({time1: {\"$lt\": \"$time2\"}});I think you cannot compare 2 fields together with the simple syntax. Try with $expr:", "username": "steevej" }, { "code": "db.dtest4.find({time1: {lt: {subtract:{[ISODate(\"2021-01-24T00:05:48.339Z\"),3600000]}}})", "text": "Thanks steeve!It seems like the “simple syntax” only works with constants as arguments. I tried this too: db.dtest4.find({time1: {lt: {subtract:{[ISODate(\"2021-01-24T00:05:48.339Z\"),3600000]}}}) and it also returns no records, but the $expr version works. Is this limitation documented somewhere?Thanks again for your help!", "username": "Eric_Olson" } ]
Date comparisons don't work?
2021-01-27T18:01:07.956Z
Date comparisons don&rsquo;t work?
1,882
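Following up on the last question in the thread above: field-to-field (and field-to-expression) comparisons have to go through $expr, so the "time1 is more than one hour older than time2" variant would look roughly like this (the collection name comes from the thread):

```
db.dtest4.find({
  $expr: {
    // $subtract on a date and a number of milliseconds yields a date
    $lt: [ "$time1", { $subtract: [ "$time2", 60 * 60 * 1000 ] } ]
  }
})
```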
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "ErrorMessage:AppSync.: failed to insert change into history during initial sync of AppSync.after copying 0 documents: failed to flush changes for ns “AppSync.” to unsynced documents cache during initial sync: connection(…) incomplete read of message header: context deadline exceededCan someone explain to me what this message means and the ways to solve it.Thank you!", "username": "John_Matanda" }, { "code": "__realm_sync", "text": "failed to flush changesHi John,It has been a while since this was posted, were you able to find a solution to this issue?If not, were there any other errors appearing in your the app? Please check your Realm error logs to see if there are any other unexpected errors that could provide more context e.g. duplicate key errors.Did you make any changes to class structure in the code before this error appeared? If so, please ensure that these updates have also been done to your Realm schema. Additive changes can be persisted through Realm sync development mode.If you’re working in a non-production environment, using the steps below you can try terminating and re-enabling sync to rectify schema inconsistencies due to changes in your class definitions:Did you see any warning notifications for your Atlas cluster or see any monitoring metrics in your dashboard that show resource issues? If this is the case, you may need to upgrade to a larger Atlas tier.If you’re still seeing this error please provide details of the current situation and a link to your Realm app so I can have a further look into it.Regards\nManny", "username": "Mansoor_Omar" } ]
Failed to insert change into history during initial sync
2020-12-29T20:16:27.969Z
Failed to insert change into history during initial sync
2,938
null
[]
[ { "code": "", "text": "What is a best practice for having a collection that I effectively want to be append only?I’d like to record transactions into a ledger where I can insert and not update or delete (for any user, ideally).What general recommendations are there for this scenario? I searched for existing threads, but please point me to one if this has already been discussed.Thank you!\nJeremy", "username": "Jeremy_Buch" }, { "code": "findinsert", "text": "Hi @Jeremy_Buch and welcome back !I think I would create a user and limit its permissions. I would create a custom role with the actions find and insert on the resources (db or collections) you need.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thanks Maxime! I saw a discussion about limiting the actions that users can take and this approach can definitely solve the problem from a management perspective based on permissions.Thanks for confirming!JeremyPS - thanks! I built out a MongoCDC implementation last year early summer and haven’t had to dig into mongo more since then until now.", "username": "Jeremy_Buch" }, { "code": "", "text": "If you are running on Atlas, you need to create a custom role:image1143×554 40.6 KBimage771×659 38.6 KBThen when you create the user, you can assign the custom role like this:image683×1108 84.4 KBI hope this helps Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What are best practices for immutable inserts? A transaction log collection, for example? Disabling update by user or database collection
2021-01-27T19:26:58.255Z
What are best practices for immutable inserts? A transaction log collection, for example? Disabling update by user or database collection
4,869
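For completeness, the same append-only role built in the Atlas UI above can be created in the shell on a self-managed deployment; a sketch, with the database, collection, user, and password as placeholders:

```
use admin
db.createRole({
  role: "appendOnlyLedger",
  privileges: [
    { resource: { db: "finance", collection: "ledger" },
      actions: [ "find", "insert" ] }   // read and append, no update/remove
  ],
  roles: []
})
db.createUser({
  user: "ledgerWriter",
  pwd: "<choose-a-strong-password>",
  roles: [ { role: "appendOnlyLedger", db: "admin" } ]
})
```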
null
[ "server", "configuration" ]
[ { "code": "diagnosticDataCollectionEnabled: falsestorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n\nsystemLog:\n destination: file\n logAppend: true\n path: /mnt/web/log/mongodb/mongod.log\n\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n authorization: \"enabled\"\n\nsetParameter:\n diagnosticDataCollectionEnabled: false\n", "text": "I’ve installed mongodb 4.4.3 on my Raspberry Pi and I’ve noticed there are some IO operations even though no client is connected and no queries from my side.\nFirst I’ve noticed FTDC writing every ~8seconds, so I set diagnosticDataCollectionEnabled: false . But there still remain writes into WiredTiger.wt (.turtle and index) every minute.\nWhat does it do, is it some journal? Can I disable it?\nIt is my personal dev webserver, there won’t be much writing/reading from my side, so I dont see the point in mongo doing some unnecessary writing, since it will be just slowly killing my SSD.\n(Btw. I’m a nub, never really worked with mongo)My mongod.conf:", "username": "Martin_Beran" }, { "code": "storage.syncPeriodSecs", "text": "ok, after some digging I found out the one minute is storage.syncPeriodSecs, but I still have no idea why it is saving anything if there were no changes :-/ because I don’t think changing the syncPeriodSecs is the right way to fix it", "username": "Martin_Beran" }, { "code": "storage.journal.enabled:false830 ?sys mongodb 0.00 B 40.00 K 0.00 % 0.00 % mongod --config /etc/mongod.conf [WTCheck.tThread]\nWiredTiger.wt", "text": "seriously nobody knows? I have set storage.journal.enabled:false, the data are on binded external ssd, so I also set noatime and nodiratime… I checked iotop and it writes 40kB every minute, that would be 20.5GB a year for absolutely no reasonI dont call any updates or inserts, the content of the WiredTiger.wt is still the same, why does it flush every minute then?\nI’ve spent last 3 days googling and reading and I still have no clue why does it do that and nobody nowhere can tell me\nI’m desperate and also really angry already", "username": "Martin_Beran" } ]
Mongod config - WTCheck.tThread writes on disk into WiredTiger.wt every minute
2021-01-26T19:54:59.533Z
Mongod config - WTCheck.tThread writes on disk into WiredTiger.wt every minute
2,549
null
[]
[ { "code": "", "text": "How to create a user and grant him read-only access for all databases available in mongodb(2.2/2.4/2.6) instance. I am new to mongodb, please provide detail step by step process.user name: sec.ai\npassword: test\ndatabases: school, local, admin, employee, deptt", "username": "Amanullah_Ashraf1" }, { "code": "readAnyDatabase", "text": "Hi @Amanullah_Ashraf1,Welcome to MongoDB community.Please note that MongoDB versions 2.2 - 2.6 is not supported for a long time.In modern MongoDB versions you need to use a built-in role readAnyDatabase for this:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "More about the currently supported MongoDB versions here.Whichever MongoDB product you’re using, find the support policy quickly and easily.At the time I’m writing this, MongoDB 3.6, 4.0, 4.2 and 4.4 are supported. Note that MongoDB 3.6 will reach its end of life in April 2021. So it’s time to update .Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Can I use ’ readAnyDatabase ’ for older version of Mongo DB like 1.8, 2.2, 2.4, 2.6", "username": "Amanullah_Ashraf1" }, { "code": "", "text": "@Amanullah_Ashraf1,Looking at 2.6 docs it does:\nBuilt-In Roles — MongoDB ManualCheck for others as well.But you shouldn’t use a 6+ year old versions.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Wow 1.8!\nReleased in 2011 and EOL in September 2012. It’s really time to do something here.", "username": "MaBeuLux88" } ]
Create readonly user for all databases
2021-01-24T06:44:35.704Z
Create readonly user for all databases
10,050
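A sketch of the grant described in the answer above, for a currently supported MongoDB version; the username comes from the question, the password is a placeholder:

```
use admin
db.createUser({
  user: "sec.ai",
  pwd: "<password>",
  // readAnyDatabase gives read-only access to every database except local and config
  roles: [ { role: "readAnyDatabase", db: "admin" } ]
})
```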
null
[ "transactions" ]
[ { "code": "", "text": "HI,\nI was going through the HybridClock implementation and I see that the logical clock ticks only on the primary node. The value is replicated through oplog entries to the secondaries.\nIn a sharded cluster, primary of each shard is where the cluster time will be incremented. I think this works fine in a single replica set, which is controlled by Raft. With the Raft leader controlling the clusterTime, it is guaranteed to be monotonically increasing across the Raft cluster.\nBut in multi-shard transactions, there are multiple raft clusters (one replica set per shard). Then it is not enough for the shard primary to control cluster time ticking. It must be the transaction co-ordinator, which should chose cluster time, getting all the cluster times of shard primaries involved in the transaction and then chosing the value which is greatest and then incrementing it. Is this how its implemented in mongo?\nThe hybridclock paper https://dl.acm.org/doi/pdf/10.1145/3299869.3314049, talks about per shard scenarios, but not of how it works in multi document distributed transactions.Also , this looks similar to how its implemented in LogCabin, the reference implementation of Raft, but I don’t see Logcabin referred in the above mentioned paper. Is there any difference than whats implemented in logcabin? LogCabin.<wbr />appendEntry(5, \"Cluster Clock, etc\") - ongardie.net", "username": "Unmesh_Joshi" }, { "code": "", "text": "It must be the transaction co-ordinator, which should chose cluster time, getting all the cluster times of shard primaries involved in the transaction and then chosing the value which is greatest and then incrementing it.You are exactly correct about how that works.All of the shards write “prepare” and choose their prepare time independently and report this time to the coordinator. The coordinator chooses the max of these times as the commit timestamp.", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
HybridClock ticking in Mongodb with multi-document transactions
2021-01-26T07:56:07.844Z
HybridClock ticking in Mongodb with multi-document transactions
2,813
null
[ "replication", "configuration" ]
[ { "code": "--shardsvrQuery failed with error code 211 and error message 'Cache Reader No keys found for HMAC that is valid for time: { ts: Timestamp(1585205456, 422) } with id: 6802955028354040016' on server our-db-server.domain.com:27017", "text": "We followed the instructions to Convert a Cluster with a Single Shard into a Replica Set but as soon as we restarted the first Secondary (of a total of 3 secondaries + 1 primary) without the --shardsvr option, all database clients (which are connecting already directly to the replSet without problems instead to the mongoS routers) received the following error while querying the database:Query failed with error code 211 and error message 'Cache Reader No keys found for HMAC that is valid for time: { ts: Timestamp(1585205456, 422) } with id: 6802955028354040016' on server our-db-server.domain.com:27017Therefore, we have immediately reversed the change.\nThis error makes it impossible for us to convert the single-shard cluster into a standalone replSet.\nHow to proceed?\nThanks!", "username": "Kay_Agahd" }, { "code": "", "text": "Hi, Kay! Did you ever find a solution to this problem? I’m hitting the same error but in a somewhat different scenario: I have a replica set (not a sharded cluster) using the MMAPv1 engine and am switching to WiredTiger. As soon as I add a new node running WiredTiger, I hit this error in my application.", "username": "Noach_Magedman" }, { "code": "", "text": "I couldn’t post more than two links on this forum so you can check out what I discovered on stackoverflow: Error while converting a mongoDB Cluster into a Replica Set - Stack Overflow.My situation is identical to Noach_Magedman’s. Could potentially be an issue related to MMAP.It’s likely the client library for connecting to MongoDB is getting caught in some sort of expired clusterTime loop and is not correctly updating its clusterTime when receiving these errors. Perhaps disconnecting and finding a different node would help.", "username": "Fulton_Byrne" } ]
Error while converting a Cluster into a Replica Set
2020-03-26T07:45:23.281Z
Error while converting a Cluster into a Replica Set
10,088
null
[ "mongodb-shell" ]
[ { "code": " updateMonthlySales_05 = function(startDate) {\n db.bakesales.aggregate( [\n { $match: { date: { $gte: new Date(\"2018-05-01\") } } },\n { $group: { _id: { $dateToString: { format: \"%Y-%m\", date: \"$date\" } }, sales_quantity: { $sum: \"$quantity\"}, sales_amount: { $sum: \"$amount\" } } },\n { $merge: { into: \"monthlybakesales_06\", whenMatched: \"replace\", whenNotMatched: \"insert\" } }\n ] );\n};\n", "text": "I made a function as per the function in on Demand Materialized View topic in MongoDB doc.My Function isand I call it by\nupdateMonthlySales_05 (new ISODate(“1970-01-01”));Now, I want to find where my function (updateMonthlySales_05 ) is stored and what was its definition. How can I look for it in future perspective?", "username": "MWD_Wajih_N_A" }, { "code": "updateMonthlySales_05 mongo", "text": "Hello @MWD_Wajih_N_A, welcome to the MongoDB Community forum.The function updateMonthlySales_05 is created in the mongo shell, and it is available in that environment, until you exit the shell. It is just a JavaScript function. The example used the function to demonstrate the On-Demand Materialized Views.But, you can execute a JavaScript file (which is. already created) from the shell. While editing a function you can also use an external editor by configuring the mongo shell.", "username": "Prasad_Saya" }, { "code": "", "text": "exit the shell.I really appreciate you taking time out. Now, as you can see, I am using Robo to create this function but its not getting saved and hence is not getting queried either.", "username": "MWD_Wajih_N_A" }, { "code": "", "text": "Hello @MWD_Wajih_N_A, I don’t use a Robo tool , so I don’t know how it works (perhaps the tool comes with a Help or User Guide and there should be a link or a button or a menu option on the GUI itself).", "username": "Prasad_Saya" } ]
View definition of custom function made
2021-01-27T08:31:47.988Z
View definition of custom function made
2,250
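One practical takeaway from the thread above: the function only lives in the current mongo shell session, so keep its definition in a plain JavaScript file and load it whenever it is needed. A sketch, with the file path as a placeholder:

```
// the .js file contains the updateMonthlySales_05 definition shown in the thread
load("/path/to/updateMonthlySales.js")

// the function now exists in this shell session
updateMonthlySales_05(new ISODate("1970-01-01"))
```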
null
[]
[ { "code": "{\n \"_id\" : ObjectId(\"5fe9b45d3613f35f9c7fe012\"),\n \"status\" : \"creation\",\n \"type\" : \"opened\",\n \"invitedUsers\" : [],\n \"members\" : [ \n {\n \"_id\" : ObjectId(\"5fb71aebc0f0ea402cd8e007\"),\n \"role\" : \"capitain\",\n \"side\" : \"black\",\n }\n ]\n}\n{\n \"_id\" : ObjectId(\"5fe9b45d3613f35f9c7fe013\"),\n \"status\" : \"creation\",\n \"type\" : \"invitations\",\n \"invitedUsers\" : [ObjectId(\"5fceb82c2917434bb8ef4966\")],\n \"members\" : [ \n {\n \"_id\" : ObjectId(\"5fb71aebc0f0ea402cd8e007\"),\n \"role\" : \"capitain\",\n \"side\" : \"black\",\n }\n ]\n}\nupdateOne(\n {\n _id:ObjectId(\"5fe9b45d3613f35f9c7fe012\"),\n status:'creation',\n $or : [{type:'opened'},{ $and: [{type:'invitations'}, {$elemMatch: {invitedUsers:ObjectId(\"5fceb82c2917434bb8ef4966\")}} ] }],\n members: {$size: 1},\n },\n {\n $set: {status:'configuration'},\n $push: {\n members: {\n _id: _id,\n side:\n { \n $switch: {\n branches: [\n { case : { $eq: [\"$members.0.side\", 'white']}, then: \"black\" },\n { case : { $eq: [\"$members.0.side\", 'black']}, then: \"white\" },\n { case : { $eq: [\"$members.0.side\", 'aleatoire']}, then: \"aleatoire\" },\n ],\n default: \"\"\n }\n },\n role:'challenger',\n tchatMode:'ON'\n }\n }\n }\n )\nupdateOne(\n {\n _id:ObjectId(\"5fe9b45d3613f35f9c7fe013\"),\n status:'creation',\n $or : [{type:'opened'},{ $and: [{type:'invitations'}, {$elemMatch: {invitedUsers:ObjectId(\"5fceb82c2917434bb8ef4966\")}} ] }],\n members: {$size: 1},\n },\n {\n $set: {status:'configuration'},\n $push: {\n members: {\n _id: _id,\n side:\n { \n $switch: {\n branches: [\n { case : { $eq: [\"$members.0.side\", 'white']}, then: \"black\" },\n { case : { $eq: [\"$members.0.side\", 'black']}, then: \"white\" },\n { case : { $eq: [\"$members.0.side\", 'aleatoire']}, then: \"aleatoire\" },\n ],\n default: \"\"\n }\n },\n role:'challenger',\n tchatMode:'ON'\n }\n }\n }\n )\n", "text": "Hello, I am blocked with the creation of an update request that need to filter on array containing an exact element and that need to set a value based on the value of another field of the document. Any help will be appreciated.This is my data which represents a group that is ‘opened’ to be joined:and another variant of a group that users can join only on ‘invitations’ if their user id is in the invitedUsers array:I am trying to create the join action with an updateOne request to the DB collection\nThis request will filter onthe request willI would like to do it in one request because I am concerned by concurrent access (2 users that would like to join at the same time).What I have tried is:And same with the second data (invitations case) changing only the _id of the group:But it seems that we can not use $elemMatch in the filter query (Robo 3T reports error: ‘unknown top level operator: $elemMatch\"’) nor use members.0.side in the update query (Robo 3T reports error: 'The dollar () prefixed field ‘$switch’ in ‘members…side.$switch’ is not valid for storage.’)This will have take me few minutes to be written in sql but I want to use mongoDB, understand the mongoDB concepts/usages and improve my capabilities to write all kinds of MongoDB requests. 
I have tried to find a solution in the mongoDB documentation and over the net but I found nothing to help me.Any help will be appreciated.", "username": "Nicolas_L" }, { "code": "db.groups.updateOne({_id:ObjectId(\"5fe9b45d3613f35f9c7fe012\"), \nstatus:'creation',\n$or : [{type:'opened'},{type:'invitations',invitedUsers : ObjectId(\"5fceb82c2917434bb8ef4966\")}]\n},\n [{$set: {\n status: 'configuration',\n members : {$concatArrays : [\"$members\",[{\n _id : ObjectId(),\n side : {$arrayElemAt : [\"$members.side\",0]},\n role:'challenger',\n tchatMode:'ON'\n }]]}\n}}])\n$elemMatch", "text": "Hi @Nicolas_L,Welcome to MongoDB Community.I think the best way to attack this scenario is using Pipeline updates available in 4.2+.This way you will be able to use the strong ability of aggregations. I made the following update as an example:Please note that you don’t have to use $elemMatch to match a single filed or value in an array and use equal it.Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "updateOne(\n {\n _id:ObjectId(\"5fe9b45d3613f35f9c7fe112\"),\n status:'creation',\n $or : [{type:'opened'},{type:'invitations',invitedUsers : ObjectId(\"5fceb82c2917434bb8ef4966\")}],\n members: {$size: 1},\n },\n [\n {\n $set: {\n status:'configuration',\n members: {\n $concatArrays : [ \"$members\",[{\n _id: ObjectId(\"5fceb82c2917434bb8ef4966\"),\n side: {\n $switch: {\n branches: [\n { case : { $eq: [ {$arrayElemAt : [\"$members.side\",0]} , 'white']}, then: \"black\" },\n { case : { $eq: [ {$arrayElemAt : [\"$members.side\",0]} , 'black']}, then: \"white\" },\n { case : { $eq: [ {$arrayElemAt : [\"$members.side\",0]}, 'aleatoire']}, then: \"aleatoire\" },\n ],\n default: \"\"\n }\n \n },\n role:'challenger',\n }] ]\n }\n }\n }\n ]\n )\n", "text": "Thank you a lot @Pavel_Duchovny,So, this is my final code (for those who would have a similar concern):So I do not need to use elemMatch within the array filter and I need to use an aggregation pipelines ( ) to get expressing conditional updates based on current field values or updating one field using the value of another field(s).I have keeped the switch statement because I need to toggle black / white between capitain and challenger, the arrayElemAt help me a lot in this logical statement.I have done some unit test with opened and invitations cases and it is working as expected Again, thanks you", "username": "Nicolas_L" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help update a document based on a document array containing a value and set a value based on another document field logic
2021-01-27T06:53:02.945Z
Help update a document based on a document array containing a value and set a value based on another document field logic
2,875
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hello,I am working on migrating my application from Realm Cloud to MongoDB Realm, as Realm Cloud will\nclose before the end of the year and I have a few questions about it:During the migration, I will copy all the data from Realm Cloud to MongoDb, but how to be sure all users have synced their data to Realm Cloud ? If a user was offline last timed he used my application, data will not have been transferred to the Cloud and therefore they will not be on the new MongoDB server. Is there a way to know if all the data have been uploaded to Realm Cloud before performing the migration?With Realm Cloud, local data is stored in a * .realm file. But what about MongoDB Realm ? Will it replace/delete the .realm file or will it create a new file ? If the .realm file still exists, I plan to transfer it on a storage server in case offline data was not synchronized during the migration, so I could always get them back.MongoDB Realm sync is still in beta, when will it no longer be ? When Realm Cloud will be closed, will it work ? Do I have to wait before migrating? Because sync is essential for my app.Thanking you for your help.", "username": "Arnaud_Combes" }, { "code": "%22Task%20Tracker%22_idpartition", "text": "Is there a way to know if all the data have been uploaded to Realm Cloud before performing the migration?There is not. If a user was working offline the last time the used the app, that data would not have been sync’d but there’s no way to know that, unless you accounted for it programmatically in some way.With Realm Cloud, local data is stored in a * .realm file. But what about MongoDB Realm ? Will it replace/delete the .realm file or will it create a new file ? If the .realm file still exists, I plan to transfer it on a storage server in case offline data was not synchronized during the migration, so I could always get them back.Several parts to that question.MongoDB Realm sync is still in beta, when will it no longer be ? When Realm Cloud will be closed, will it work ? Do I have to wait before migrating? Because sync is essential for my app.A Mongo employee can address those questions but you should begin the migration now and not wait.Keep in mind that there are a number of differences between Realm Cloud and MongoDB Realm, including adding _id and partition properties to you objects. There’s a whole thread on the process that you should definitely review. See Migrating from Legacy Realm Sync to MongoDB Realm Guide", "username": "Jay" }, { "code": "", "text": "Thanks for your feedback and all the details !\nI have already made good progress on the migration, so I will do it as soon as everything is ready ", "username": "Arnaud_Combes" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migrating from Realm Cloud to Mongodb Realm without losing offline data
2021-01-25T14:26:35.689Z
Migrating from Realm Cloud to Mongodb Realm without losing offline data
3,256
null
[ "python", "connecting" ]
[ { "code": "DB_URI = \"mongodb+srv://flask_app_user:[email protected]/flask_app?retryWrites=true&w=majority\"\n\ndef create_app():\n app = Flask(__name__)\n app.secret_key = os.environ.get('SECRET_KEY', 'replace_me_32437264278642')\n app.config['MONGODB_SETTINGS'] = {\n 'host': os.environ.get('MONGODB_URI', DB_URI)\n }\n MongoEngine(app)\n socketio.init_app(app)\n SSLify(app)\n\n return app\npymongo.errors.InvalidURI: Invalid URI scheme: URI must begin with 'mongodb://'\n", "text": "I am trying to connect with MongoDB atlas from my flask app using flask-mongoengine.But I am getting an error,How can I use mongo atlas with flask_mongoengine? I don’t want to stick with flask_mongoengine. I don’t want to change that.", "username": "Tanush_Software" }, { "code": "mongodb://mongodb://", "text": "Hi @Tanush_Software,Welcome to MongoDB community.I would recommend trying a standard MongoDB connection starting mongodb:// .You can retrieve this one by using the connect tab and choosing an old driver version or shell , once you find the string using mongodb:// use it and fill in the placeholders required to fit your db and user.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "python -m pip install --upgrade 'pymongo>=3.6'\n", "text": "Upgrading to pymongo>=3.6 should resolve this issue:", "username": "Shane" } ]
How to connect with atlas using MongoEngine?
2021-01-26T06:06:24.846Z
How to connect with atlas using MongoEngine?
4,317
null
[]
[ { "code": "rs.add( { host: ip_of_new_instance + \":27017\", priority: 0, votes: 0 } )OTHERSTARTUP2dbPathcollection-*index-*SECONDARYmongo$ mongo db_name$ mongo 'mongodb://username:[email protected]:27017/db_name?authSource=admin&replicaSet=rs0'db.oplog.rs.find().tailable( { awaitData : true } )itoperationTime$clusterTimedb.adminCommand( { getParameter: '*'})$clusterTime", "text": "I have a 3-node Replica Set (primary + 2 secondaries). They are all running Mongo 4.0.18 using the MMAPv1 engine. I am trying to switch the replica set over to use WiredTiger.I read through the MongoDB tutorial on how to Change Replica Set to WiredTiger. That tutorial instructs how to change each node in situ: take it offline, reconfigure it, bring it back online. I am not following those instructions as-is, but instead want to introduce new nodes to the replica set and (when all seems well) decommission the older nodes from the set.I launched a new AWS EC2 instance with Mongo configured for WiredTiger and manually added it to the replica set, following the Add Members to a Replica Set tutorial. (At essence, rs.add( { host: ip_of_new_instance + \":27017\", priority: 0, votes: 0 } ))The new node switches state from OTHER to STARTUP2, populates its dbPath folder with many new collection-* and index-* files, and eventually switches state to SECONDARY. All looks well. I can see all of the collections/documents via the mongo shell when running $ mongo db_name from the new node, and I can still access the primary by running $ mongo 'mongodb://username:[email protected]:27017/db_name?authSource=admin&replicaSet=rs0'.HOWEVER, the moment the new node transitions from STARTUP2 to SECONDARY, my application starts to fail, reporting the Mongo error:Cache Reader No keys found for HMAC that is valid for time: { ts: Timestamp(1591711351, 1) } with id: 6817586637606748161I have not been able to reproduce this Mongo error outside of the application (Rocket.Chat, built on the Meteor framework). Perhaps the problem lies there. Or perhaps the application is doing something I haven’t tried from the mongo shell, e.g. tailing the oplog. [Update: I tried it but am not sure if I’m doing it right: db.oplog.rs.find().tailable( { awaitData : true } ) returns a dozen documents before prompting for it]If, however, I start the new-node process from scratch, changing just one thing – set the storage.engine to mmapv1 instead of wiredTiger – then all works well. My application functions properly. I don’t know why the application works with all mmapv1 nodes but fails when there is a wiredTiger node, especially since the engine is a node-internal thing, undisclosed to the client.I notice a strange discrepency between running mmapv1 and wiredTiger. The node running wiredTiger includes two keys (operationTime and $clusterTime) in the response to certain commands (e.g. db.adminCommand( { getParameter: '*'})). None of the mmapv1 nodes (new or old) include those keys in their responses. Since the Mongo error message in my application’s logs includes a reference to time, I’m very suspicious that the presence of $clusterTime only on the wiredTiger node is somehow related to the underlying problem.I’m not sure how to troubleshoot this. I’ve been googling for solutions, but I have not found any strong leads – only a few references to that error message, none of which seem entirely on target:", "username": "Noach_Magedman" }, { "code": "", "text": "Noach, great trouble shooting. 
This is many months later, but I am assuming you were not using internal keyfile auth for the replica set memberships?", "username": "Fulton_Byrne" } ]
Adding WiredTiger node to MMAPv1 replica set leads to "No keys found for HMAC"
2020-06-09T19:09:49.572Z
Adding WiredTiger node to MMAPv1 replica set leads to &ldquo;No keys found for HMAC&rdquo;
2,948
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "Hi.I’ve just deployed MongoDB Community Edition to a CentOS 8 system using the repo detailed here:I noticed that the installation prompted a number of Python 2 packages to be installed as a dependency.Querying the mongodb-org-server package confirms the dependency:\nrpm -qR mongodb-org-server | grep python\npython2As Python 2 was sunset at the beginning of last year:The official home of the Python Programming LanguageCentOS 8 uses Python 3 by default.I’m not sure what the dependency on Python 2 is but, are there any known plans to move to Python 3?I couldn’t find anything in either these forums or the issue tracker.Thanks in advance.", "username": "INVADE_International" }, { "code": "", "text": "It’s an interesting one too. There does not seem to be anything in the package that requires python.", "username": "chris" }, { "code": "", "text": "Should I raise this in the issue tracker? Thanks.", "username": "INVADE_International" }, { "code": "", "text": "I don’t see why not. ", "username": "chris" }, { "code": "", "text": "I don’t seem to be able to authenticate.Does the issue tracker use the same credentials as the forums?", "username": "INVADE_International" }, { "code": "", "text": "Yes it does, for me at least. I think all forum accounts use sso now, that should work there too.https://jira.mongodb.org/plugins/servlet/samlsso?tracker=Y8TL8&idp=2&redirectTo=%2FIf it doesn’t work for you I might have time later to create an issue.", "username": "chris" }, { "code": "", "text": "New issue created:\nhttps://jira.mongodb.org/browse/SERVER-54057Thanks.", "username": "INVADE_International" } ]
Mongodb-org-server el8 package has dependency on python2
2021-01-22T10:58:04.760Z
Mongodb-org-server el8 package has dependency on python2
2,428
null
[ "backup" ]
[ { "code": "", "text": "how to MongoDB database restore old server to new server", "username": "Hemanth_perepi" }, { "code": "", "text": "Hello @Hemanth_perepi and welcome to the community.This really depends on the form of your backup. There are a few methods to extract data from mongodb using mongoexport and mongodump tools, as well as taking copies of the datafiles themselves while the database is offline or via file system snapshots.So if you let us know what you have, we can point you in the right direction", "username": "chris" } ]
How to restore a MongoDB database from one server to another server
2021-01-26T12:17:43.888Z
How to restore a MongoDB database from one server to another server
1,857
null
[ "dot-net" ]
[ { "code": "{\n cat: 1,\n items: [\n {\n userid: 1,\n ... 50+ more fields\n },\n {\n userid: 2,\n ... 50+ more fields\n },\n ]\n}\npublic void UpdateItem(string cat, int userId, Item itemToUpdate)\n{\n // psevdo code update: \n var u = Builders<Col>.Update.ReplaceItem(f => f.items[-1], itemToUpdate);\n Cols.UpdateOne(c => c.cat == 1 && c.items.Any(i => i.userid == 2), u);\n}\n", "text": "exists document with nested array:How i can replace item with userID == 2 and cat == 1?", "username": "alexov_inbox" }, { "code": "", "text": "Hi @alexov_inbox,You should utelize array filters inside inside an update.Here is a C# blog with some examples:How to update complex documents containing arrays in MongoDB with C#Please let me know if you need more than this?Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "Builders<Member>.Update.Set(x => x.Friends[-1].Name, \"Bob\");\n Builders<Member>.Update.Set(x => x.Friends[-1].Name, myItem.Name1)\n& Builders<Member>.Update.Set(x => x.Friends[-1].Field2, myItem.Field2)\n& Builders<Member>.Update.Set(x => x.Friends[-1].Field3, myItem.Field3)\nBuilders<Member>.Update.Set(x => x.Friends[-1], myItem)\n", "text": "I know how update a single item in an array. know about ‘[-1]’ filter and etc.\nAll examples in your documents going to:But i have ready Item object with 50 filled fields.\nI want to understand is it possible replace item? i dont want write 50+ lines code for each fields as shown above like:Maybe eat like:", "username": "alexov_inbox" }, { "code": "arrayFilters db.Col.update(\n {Cat : 1},\n { $set : items.$[userid] : {REPLACED_DOCUMENT} } },\n { arrayFilters: [ { \"userid\": 2 } ]}}\n)\n", "text": "Hi @alexov_inbox,This us exactly where arrayFilters come in handy:The c# blog I showed show how to use array filters in the end of the blog.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "Builders<Col>.Update.Set(f => f.items[-1], myItem);", "text": "oh thx its realy work with \"Set’ + “arrayFilter”\nBuilders<Col>.Update.Set(f => f.items[-1], myItem);", "username": "alexov_inbox" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB C# Replace nested array item by object
2021-01-25T12:13:12.787Z
MongoDB C# Replace nested array item by object
15,702
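For reference, the shell-level equivalent of the accepted approach above: when replacing a whole array element with arrayFilters, the identifier inside $[...] has to match the one used in the filter. The collection, cat, and items.userid names come from the thread; replacementItem stands for the fully populated item object:

```
const replacementItem = { userid: 2 /* , ...50+ more fields... */ };

db.Col.updateOne(
  { cat: 1, "items.userid": 2 },
  { $set: { "items.$[elem]": replacementItem } },   // replace the whole matched element
  { arrayFilters: [ { "elem.userid": 2 } ] }
)
```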
null
[ "queries" ]
[ { "code": "", "text": "BSON Document Size is 16 MB, but i normally search using find method of mongotemplate with record of 2 year, then i get error of 16 MB limitation. Example search from 1 jan 2019 to 31 Dec 2020, then i got errorbut if i search individual, then got the result like 1 jan 2019 to 31 Dec 2019 , this give me result. (5.5 lakh record it fetch successfully within 2 min.)When search with 2 year, then i got 16 mb limitation errorNote : This is simple JSON like Employee collection having employee no , date etc.i am not talking about GridFS APIPlease help me in this.", "username": "Kunal_Talwadiya" }, { "code": "", "text": "Hi @Kunal_Talwadiya,Can you share the code and query you are performing?Its possible that you try to manipulate all data into a single document and cross 16mb .Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Mongo 16 MB Limitation
2021-01-25T19:45:54.361Z
Mongo 16 MB Limitation
1,703
null
[ "aggregation", "indexes" ]
[ { "code": " db.MyColl.aggregate([\n {$lookup: {\n let: {\"a\": \"$ma\", \"b\": \"$mb\"},\n pipeline: [\n {$match: {\n $expr: {$and: [{$eq: [\"$la\", \"$$a\"]}, {$eq: [\"$lb\", \"$$b\"]}]}\n }},\n {$project: { la: 1, lb: 1, _id: 0}}\n ],\n from: \"MyOtherColl\",\n as: \"subs\"\n }}\n ])\ndocsExaminedexplain()$expr.aggregate([{$match: {la: \"va\", lb: \"vb\", _id: 0}}, {$project:...}])$expr.aggregate([{$match: {$expr: ...}}, {$project:...}])", "text": "Does MongoDB lookup stage in aggregation support covered query optimization? For example, with pipelineand indexdb.MyOtherColl.createIndex({“la”: 1, “lb”: 1})inner query is not covered according to docsExamined.If I explain() inner query itself, it seems that $expr breaks this: .aggregate([{$match: {la: \"va\", lb: \"vb\", _id: 0}}, {$project:...}]) will be covered, while equivalent with $expr (.aggregate([{$match: {$expr: ...}}, {$project:...}]) will be not.Is this a limitation of covered queries or am I missing something?", "username": "Ales_Kete" }, { "code": "", "text": "Hi @Ales_Kete,I don’t believe that with this query shape and the expr the engine will know to do a covered query.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I have my own interrogation with this.The query is not covered but does the index still used?From what I understand in https://docs.mongodb.com/manual/reference/operator/query/expr/#behavior, the index should be used as the query involves equality matches.Would keysExamined be a more accurate measure of the work done compared to docsExamined?Fetching the matching documents for the subsequent stages should not be a major issue unless the documents are huge.", "username": "steevej" }, { "code": "keysExamineddocsExamined", "text": "@steevej: Ratio of keysExamined and docsExamined is approximately one, so I assume index is used. But, IMHO, fetching of small documents can also impact performance in situation, when index itself would fit into memory, but index and documents would not and documents in collection are not sorted by index used in query. In such case, fetching document would cause cache pages reads and eviction and thus would impact performance.", "username": "Ales_Kete" }, { "code": "$expr", "text": "First, thank you for your input, Pavel!Is there perhaps a form of this (or similar) query in which such subquery would be covered or is $expr by itself showstopper?", "username": "Ales_Kete" }, { "code": "MyCollMyOtherColldb.MyColl.createIndex(...) // index supporting covered query for `a`\na = db.MyColl.find({}, {...}).sort({\"ma\": 1, \"mb\": 1})\nb = db.MyOtherColl.find({}, {...}).sort({\"la\": 1, \"lb\": 1})\nab(ma, mb)(la, lb)", "text": "Just for future reference, if anyone finds this useful:I ended up doing separate queries into MyColl and MyOtherColl with records sorted by lookup-match keys and doing the rest of the work in code:// Then advancing through a and/or b depending on (ma, mb) to (la, lb) comparison result", "username": "Ales_Kete" } ]
Using covered query in lookup stage in aggregation pipeline
2021-01-18T12:45:03.935Z
Using covered query in lookup stage in aggregation pipeline
3,732
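A sketch of the workaround the poster lands on above: two independent covered reads merged in application code. Each query filters on nothing, projects only indexed fields, excludes _id, and sorts by the index, which is what lets it stay covered (collection, field, and index names come from the thread; covered reads also require that the indexed fields are not arrays):

```
// assumes indexes { ma: 1, mb: 1 } on MyColl and { la: 1, lb: 1 } on MyOtherColl
const a = db.MyColl.find({}, { ma: 1, mb: 1, _id: 0 }).sort({ ma: 1, mb: 1 });
const b = db.MyOtherColl.find({}, { la: 1, lb: 1, _id: 0 }).sort({ la: 1, lb: 1 });

// then advance the two sorted cursors in step in application code,
// comparing (ma, mb) against (la, lb) to emulate the lookup match
```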
null
[]
[ { "code": "", "text": "My testing shows that zstd will give me better compression as well as faster database access times, so I’d like to upgrade to 4.2+ and migrate to the new compression algorithm.Server is an unsharded 3 member replica set running v3.6.8 CE. Database is ~4.5TB, about 1/2 of which is indexes. Data size is ~16TB. The hardware probably doesn’t matter too much, but the primary is a 96 core 512GB memory epyc server with nvme drives on software raid.I understand the incremental upgrade process from v3.6 to v4.2+, but what is the process to change the database compression while minimizing downtime? The database is far too large for a simple dump/restore. Replica restore from scratch is extremely slow and the oplog isn’t nearly big enough. Current oplog size of 150GB only shows ~7hrs, while a full replica restore seems like it takes days if not weeks.I’m guessing that this is going to involve some combination of speeding up the replica sync and increasing the oplog. I would appreciate any thoughts or suggestions.", "username": "Eric_Miller" }, { "code": "localinitialSyncSourceReadPreference", "text": "Current oplog size of 150GB only shows ~7hrs, while a full replica restore seems like it takes days if not weeks.I don’t think this should matter on v3.6 see the below link/quote. If it does, this should worry you as this is the exact same process you’d need to follow if a member completely dies and need to be sync’d from scratch.Changed in version 3.4: Initial sync pulls newly added oplog records during the data copy. Ensure that the target member has enough disk space in the local database to temporarily store these oplog records for the duration of this data copy stage.I’m guessing that this is going to involve some combination of speeding up the replica sync and increasing the oplog. I would appreciate any thoughts or suggestions.If oplog does indeed limit you then yes. Increase it.Edit:but what is the process to change the database compression while minimizing downtime?A restore from scratch would be the approach. If go to 4.4 you can specify initialSyncSourceReadPreference which may increase the initial sync performance if the primary load is impacting the your current throughput.", "username": "chris" }, { "code": "", "text": "Database is ~4.5TB, about 1/2 of which is indexes. Data size is ~16TB. The hardware probably doesn’t matter too much, but the primary is a 96 core 512GB memory epyc server with nvme drives on software raid.Probably time to setup sharding, and/or clean up some indexes.", "username": "chris" }, { "code": "", "text": "Hi Chris - thanks for taking the time to look at this question.A quick clarification: I’m not too concerned about the undersized oplog in my daily operations because I use lvm snapshots and I can easily bring up a new member well within the current oplog window. Snapshot + rsync is easily an order of magnitude (if not two) faster than initial sync. I know there are other reasons to have a right-sized oplog, but this is working for now.I’d prefer to not shard yet, and the indexes are what they are. I’ve had to make tradeoffs for application efficiency as any db admin has I suppose.Can you elaborate on how an initial sync gets me to zstd from zlib? It is my understanding that the initial sync is going to use the same creation scheme as the source, but maybe there is a way to change that?If a new initial sync is the right way to get to zstd, then how can I speed it up (and I mean drastically)? 
Setting the read preference isn’t going to make that much difference and at the current rate I’d need a HUGE oplog, maybe even bigger than my database.Edit I just realized that perhaps I need to create another database and copy the old zlib collections into new zstd ones. That will require a fair bit of downtime though.", "username": "Eric_Miller" }, { "code": "$ for l in a b c ; do mongo --quiet --host mongo-0-${l} --eval \"db.getMongo().setSecondaryOk(); db.adminCommand('listDatabases').totalSize\" ; done\n159854592\n159002624\n107610112\n", "text": "Can you elaborate on how an initial sync gets me to zstd from zlib? It is my understanding that the initial sync is going to use the same creation scheme as the source, but maybe there is a way to change that?The storage engine of the node will determine its own compression. So the instance you are reinitializing should have the compressors configured.Stop the node to update. Remove the data files. Add your desired compressor configuration. Start the node.With a set of sampledata you can observe the difference with the third node with the blockcompressor enabled.All of these are brand new replicaset members before chaning the 3rd node.If a new initial sync is the right way to get to zstd, then how can I speed it up (and I mean drastically)? Setting the read preference isn’t going to make that much difference and at the current rate I’d need a HUGE oplog, maybe even bigger than my database.The manual says it starts the oplog copy when the initial sync starts. So I don’t think that will be an issue.\nBut this is a lot of data to scan from compressed blocks and transmit. I don’t have any tricks to speed this up.", "username": "chris" } ]
How do I migrate a large db from zlib to zstd?
2021-01-25T19:48:58.182Z
How do I migrate a large db from zlib to zstd?
3,547
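For the per-collection route mentioned in the final edit above (copying zlib collections into new zstd ones), target collections can be created with an explicit block compressor regardless of the node-level default; a sketch, with the collection name as a placeholder and MongoDB 4.2+ assumed for zstd support:

```
db.createCollection("events_zstd", {
  storageEngine: { wiredTiger: { configString: "block_compressor=zstd" } }
})
// then copy documents across (for example with an aggregation $merge or a scripted
// bulk insert), rebuild the indexes on the new collection, and swap names afterwards
```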
null
[]
[ { "code": "", "text": "Hi All,I’m deepening my knowledge regarding the different features MongoDB provides and I’m wondering if it could be suitable for OLTP processing with an in-memory engine.In particular, the configuration I’m looking at is based on a 3 replica set with:Please let me know if you need any further information.Regards,", "username": "Giutor" }, { "code": "", "text": "Welcome to the MongoDB community @Giutor!It would be helpful to have some elaboration on your specific requirements or concerns for suitability, but the general answer is yes.There’s more detail (including links to further reading) in the “When should I use MongoDB?” entry in the MongoDB FAQ:The MongoDB data platform can be used across a range of OLTP and analytical apps.With the MongoDB Server and MongoDB Atlas Data Lake, you can address a wide range of application requirements.The In-Memory Storage Engine included in MongoDB Enterprise provides more predictable latency if your data, indexes, and oplog can fit entirely in-memory. The In-Memory Storage Engine intentionally avoids disk I/O, so if you need data persistence the recommended deployment is a replica set with a hidden priority 0 member configured with the default WiredTiger storage engine as you have suggested. See In-Memory Storage Deployment Architectures for more information.If you need stronger durability guarantees for your OLTP use case or have a working set which may exceed available RAM, a deployment with the default WiredTiger storage engine would likely be a more appropriate choice than in-memory.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello @Stennie_X, thank you very much for your reply!I apologize for the delayed answer, I’ve tried to retrieve some more information.My OLTP workload consists of a Real-Time pipeline with Apache Storm as the actor in charge of the writing operations.\nThe Storm topology, depending on the tuple it is processing, should retrieve a previous message written on the DB through a “find” based on the index keys and, depending on the values of some fields, performes an update on the retrieved data.\nWe are talking about millions of operations per minute and, on the other hand, there are some analytical workloads performed using the MongoDB Aggregation framework by different instances of Microservices deployed on K8s.The response time for both the OLTP and OLAP workloads is critical, this is the reason why I was thinking about an in-memory engine deployment.I hope to have cleared the scenario a bit more, in order to understand if I’m missing any drawbacks that could lead the solution not to be the best in terms of feasibility.Regards,\nGiutor", "username": "Giutor" } ]
MongoDB in-memory engine for OLTP
2021-01-20T02:33:18.804Z
MongoDB in-memory engine for OLTP
2,647
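A hedged sketch of the deployment referenced above (MongoDB Enterprise only): in-memory data-bearing members plus one hidden, priority-0 member on the default WiredTiger engine for persistence. The 64 GB size and the member index are placeholders, not values from the thread.

  # mongod.conf on the in-memory members
  storage:
    engine: inMemory
    inMemory:
      engineConfig:
        inMemorySizeGB: 64

  // mongosh, marking the WiredTiger member as hidden with no votes for elections as primary
  cfg = rs.conf()
  cfg.members[2].priority = 0
  cfg.members[2].hidden = true
  rs.reconfig(cfg)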
null
[ "cxx" ]
[ { "code": "/usr/bin/ld: /opt/vcpkg/installed/x64-linux/lib/libmongoc-1.0.a(mongoc-client.c.o): in function `_mongoc_get_rr_search':\nmongoc-client.c:(.text+0x1fd): undefined reference to `__res_nsearch'\n/usr/bin/ld: mongoc-client.c:(.text+0x2fe): undefined reference to `ns_initparse'\n/usr/bin/ld: mongoc-client.c:(.text+0x36a): undefined reference to `ns_parserr'\n/usr/bin/ld: mongoc-client.c:(.text+0x42d): undefined reference to `ns_parserr'\n/usr/bin/ld: /opt/vcpkg/installed/x64-linux/lib/libmongoc-1.0.a(mongoc-client.c.o): in function `srv_callback':\nmongoc-client.c:(.text+0x58e): undefined reference to `__dn_expand'\ncollect2: error: ld returned 1 exit status\nfind_package(libmongocxx REQUIRED)\nfind_package(libbsoncxx REQUIRED)\ninclude_directories(${LIBMONGOCXX_INCLUDE_DIR})\ninclude_directories(${LIBBSONCXX_INCLUDE_DIR})\n\ntarget_link_libraries(PROJECT_NAME ${LIBMONGOCXX_LIBRARIES})\ntarget_link_libraries(PROJECT_NAME ${LIBBSONCXX_LIBRARIES})\n", "text": "Hi everyone. I am trying to link MongoDB C++ Driver version 3.4.0-5#1 uploaded with vcpkg with g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0. I have linking error. Here is the full log of errors:Here what I have in CMake:what could be the root cause of that?", "username": "al-udmi_1" }, { "code": "-lresolvinclude_directories(${LIBBSONCXX_INCLUDE_DIR})target_link_libraries(PROJECT_NAME ${LIBBSONCXX_LIBRARIES})find_package()find_package(libmongocxx REQUIRED)\n\ntarget_link_libraries(PROJECT_NAME mongo::mongocxx_shared)\nexamples/projects/mongocxx/cmake/shared/CMakeLists.txt\nexamples/projects/mongocxx/cmake/static/CMakeLists.txt\n", "text": "You do not describe how your C driver and C++ driver packages are built. Or perhaps that is the meaning of “vcpkg”, though I am not familiar with that. In any event, it seems like linker does not know to link with -lresolv. The cause of this might be that you are using include_directories(${LIBBSONCXX_INCLUDE_DIR}) and target_link_libraries(PROJECT_NAME ${LIBBSONCXX_LIBRARIES}). Since you are using CMake, the recommended approach is to instead use the CMake targets that become available when the find_package() call returns. Also, libmongocxx depends on libbsoncxx, so only the former is required. You should be able to do something like this:Additional details can be found in the C++ driver source distribution in these files:", "username": "Roberto_Sanchez" }, { "code": "", "text": "HI thanks for the info, actually I resolve the issue only by building from source.", "username": "al-udmi_1" } ]
C++ linking error
2021-01-24T10:43:17.688Z
C++ linking error
3,445
https://www.mongodb.com/…9f9a136624e6.png
[ "server" ]
[ { "code": "", "text": "I have system configuration:Windows 8.1\nProcessor: Intel(R) Core™ i3-3110M CPU @ 2.40GHz 2.40GHz\nSystem Type: 32-bit Operating System, x64 based processorTried from MongodB Community Center, but giving below errorI got this link: Try MongoDB Atlas Products | MongoDB and download from:\nhttp://downloads.mongodb.org/win32/mongodb-win32-x86_64-2012plus-4.2.12-signed.msibut it shows below error:Is there any other option to download mongodb latest version?", "username": "turivishal" }, { "code": "", "text": "Hi @turivishal,Windows 8.1 isn’t supported by Microsoft since January 9, 2018. So I recommend you update your system to a newer version of Windows that is more secured and regularly updated.MongoDB stopped the support for all 32 bits OS a while ago because it’s limited to about 2GB of RAM which is simply not enough to run something in production and it was too complicated / costly to support both 32 and 64 bits systems in the code.So maybe you can use Docker or a VM to simulate a 64 bits system ─ but you will likely have similar issues with these ─ and then deploy MongoDB in it but I think it’s time for an update to 64 bits .You can use MongoDB Atlas to deploy your cluster in the cloud (Free Tier available).Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Thank you @MaBeuLux88,I think it’s time for an update to 64 bitsYou are right i would go with this.", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to install the latest MongoDB version on a Windows 32-bit system?
2021-01-25T15:23:09.362Z
How to install the latest MongoDB version on a Windows 32-bit system?
17,499
null
[]
[ { "code": "", "text": "Hi,I am working on Lab2 of Chapter1. I am not able to connect to Mongodb via IDE.I have db user created as m001-student. It doesn’t ask me to enter the password as mentioned in the instructions of the lab. I did review related topics to this as well.Here is the screenshot.Regards,\nSushma", "username": "Lakshmi_Ponnada" }, { "code": "", "text": "You have to run the command in terminal area.Looks like you are in editor area of IDE\nAfter pasting or typing the connect string hit enter", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Duh!! you are right!!Thanks so much for your response!! ", "username": "Lakshmi_Ponnada" }, { "code": "", "text": "I’m coming up with the same error and I’m not sure where I’m going wrong.", "username": "Bertan_Ozbek" }, { "code": "", "text": "Please show the screenshot\nAre you running the command in right area of IDE?\nDid you hit enter?\nDoes your cluster name starts with Sandbox?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi,Being weekend i could not respond. Sorry!I am glad Ramachandra responded to you. Are you still having issue?All i had to do is instead of giving the connect command in the editor, i have it in the Terminal window which is right below the editor window. Also the gave the Database name as ‘test’.Let me know if that helps.Regards,\nSushma", "username": "Lakshmi_Ponnada" }, { "code": "", "text": "Hi @Bertan_Ozbek,Let us know if you are still facing any issue.", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hey Hi,\nI am having following issue. Kindly help me out on this0. MongoDB1160×683 20.2 KB", "username": "Shri_Biyani" }, { "code": "", "text": "What is the name of your cluster?From the error message, you did not named it Sandbox as required.", "username": "steevej" }, { "code": "", "text": "Plz find the attachment and let me know the name of Cluster, because I didn’t changed name of cluster. plz help me this I am struggling from 2 days on this.", "username": "Shri_Biyani" }, { "code": "", "text": "Your cluster is named Cluster0 but the requirements are for it to be called Sandbox. It is best to terminate this cluster and create one with the appropriate name.", "username": "steevej" }, { "code": "", "text": "Hi Shri,You named your cluster as Cluster0. you cannot rename your cluster. Instead terminate the cluster and recreate one an name it as sandbox.Steps to terminate and recreate the cluster:\nNavigate to your clusters and click on three dots(…). you will see Terminate. Click on Terminate.\nonce you terminate. You will see an option to create cluster.\nFollow the steps to create the cluster.\nName your cluster as sandbox.Hope this helps. Let me know if you still see issues.Regards,\nSushma", "username": "Lakshmi_Ponnada" }, { "code": "", "text": "This part of the lecture and discussion was very confusing because it looks like it is already named “sandbox.”", "username": "Susan_Cragin" }, { "code": "", "text": "mongo “mongodb+srv://cluster0.uejju.mongodb.net/sandbox” --username m001-studentThat should do it, and it doesn’t.", "username": "Susan_Cragin" }, { "code": "", "text": "Hi Susan,The connect string has sandbox as part of the URI. Before you connect, you should name the cluster as sandbox. Once you name your cluster to sandbox, then your connect string will work.Hope this helps with your confusion.Regards,\nSushma", "username": "Lakshmi_Ponnada" }, { "code": "", "text": "In this URI,mongodb+srv://cluster0.uejju.mongodb.net/sandboxthe cluster is named cluster0. 
The part of the URI where you have sandbox written is default database to which mongo will connect. Please consult https://docs.mongodb.com/manual/reference/connection-string/ for more information about the format of a URI.The name cluster0 is the default name that Atlas used. You must change this name to sandbox as the requirements ask. Since a cluster cannot be renamed, you will have to terminate this cluster and create a new one with the requested name.", "username": "steevej" }, { "code": "", "text": "hello if you are up there / please help me completing the lab2 taks as i am stuck on it since days please help me??", "username": "shraddha_yadav" }, { "code": "", "text": "hello coders please help me connect to solve my issue so that i can complete my lab2 tasks i am stuck on it?/", "username": "shraddha_yadav" }, { "code": "", "text": "Hi Shradda,What is your issue? Can you help us understand where you are stuck? Brief description and a screenshot would help.Regards,\nSushma", "username": "Lakshmi_Ponnada" }, { "code": "", "text": "I have given right details and click on enter but it doesn’t work for me. why ?\nimage1238×589 19 KB", "username": "Harish_Thunga1" } ]
M001: Chapter1-Lab2: connect to Sandbox fails from IDE
2020-11-19T19:41:49.417Z
M001: Chapter1-Lab2: connect to Sandbox fails from IDE
4,712
null
[ "queries" ]
[ { "code": "", "text": "Is there a query like: if document is in collection update it, if not insert the documentI want to do this with multiple documents in one query like insertMany", "username": "mental_N_A" }, { "code": "", "text": "Hi @mental_N_AThis is called an upsert:", "username": "chris" }, { "code": "", "text": "Is it possible to do this with multiple documents like insertMany?", "username": "mental_N_A" }, { "code": "db.collection.update()db.collection.update()", "text": "Quite succinctly from the above link:When you specify the option upsert: true:", "username": "chris" }, { "code": "Query([document1, document2, document3])", "text": "I want to do it like Query([document1, document2, document3]). It checks if each document is in the collection, if it’s not in the collection it inserts it, if it is then it’ll update it.", "username": "mental_N_A" }, { "code": "", "text": "Try with the following:", "username": "steevej" } ]
"if document is in collection update it, if not insert the document" query
2021-01-25T16:41:14.386Z
“if document is in collection update it, if not insert the document” query
5,271
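A minimal mongosh sketch of the upsert approach linked in the replies above; the collection name "items" and the doc/docs variables are placeholders, not from the thread. The bulkWrite form covers the "multiple documents like insertMany" case in one round trip:

  // single document: insert if missing, update if present
  db.items.updateOne(
    { _id: doc._id },
    { $set: doc },
    { upsert: true }
  )

  // many documents at once
  db.items.bulkWrite(
    docs.map(doc => ({
      updateOne: {
        filter: { _id: doc._id },
        update: { $set: doc },
        upsert: true
      }
    }))
  )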
https://www.mongodb.com/…4_2_1024x231.png
[ "app-services-user-auth", "realm-web" ]
[ { "code": "", "text": "Hello guys,\nI have been today developing a feature where the user could save in a document his avatar picture in base64, im aware of the 2mb limit, my images are really small 200x200 (50kb aprox) And it is $set in a document that is the one linked to user data, then it come on custom_data too.The document look right in db too:\n\nScreenshot 2020-12-18 at 04.13.061746×394 33.7 KB\nThe problem that I realised is that after doing a request (my one is via graphql customResolver) the value is set ok, the UI works ok but if I logOut I cannot login again,\nThe issue is similar to this other issue but my case is not the same solution since I started working with realm-web 1 week ago and i have latest: customData - bad base64\nScreenshot 2020-12-18 at 04.14.511774×1140 428 KB\nI can see two request on the browser, first is:Login (response payload)\naccess_token: “eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2Rld”\ndevice_id: “5fdc285ed0e95c18e86df550”\nrefresh_token: “eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2RhdGEiOm51bGwsImJhYXNfZGV2aWNlX2lkIjoiNWZkYzI4NWVkMGU5NWMxOGU4NmRmNTUwIiwiYmFhc19kb21haW5faWQiOiI1ZmQ1MTgyYzI4NjQxZjNmNzg3YzUxNDQiLCJiYWFzX2lkIjoiNWZkYzI4YjU4Y2EwYTBlMzE0MmZkYzJiIiwiYmFhc19pZGVudGl0eSI6eyJpZCI6IjVmZGMyODVkZDBlOTVjMThlODZkZjRkNCIsInByb3ZpZGVyX3R5cGUiOiJsb2NhbC11c2VycGFzcyIsInByb3ZpZGVyX2lkIjoiNWZkNTFkNzAyODY0MWYzZjc4ODExM2Y2In0sImV4cCI6MTYxMzQ0Nzg2MSwiaWF0IjoxNjA4MjYzODYxLCJzdGl0Y2hfZGF0YSI6bnVsbCwic3RpdGNoX2RldklkIjoiNWZkYzI4NWVkMGU5NWMxOGU4NmRmNTUwIiwic3RpdGNoX2RvbWFpbklkIjoiNWZkNTE4MmMyODY0MWYzZjc4N2M1MTQ0Iiwic3RpdGNoX2lkIjoiNWZkYzI4YjU4Y2EwYTBlMzE0MmZkYzJiIiwic3RpdGNoX2lkZW50Ijp7ImlkIjoiNWZkYzI4NWRkMGU5NWMxOGU4NmRmNGQ0IiwicHJvdmlkZXJfdHlwZSI6ImxvY2FsLXVzZXJwYXNzIiwicHJvdmlkZXJfaWQiOiI1ZmQ1MWQ3MDI4NjQxZjNmNzg4MTEzZjYifSwic3ViIjoiNWZkYzI4NWVkMGU5NWMxOGU4NmRmNTBkIiwidHlwIjoicmVmcmVzaCJ9.9MpuF2TW9wU6TYMVf-888JnvOH8wiQTFR-96KsYiG4k”\nuser_id: “5fdc285ed0e95c18e86df50d”The second show as “profile” and it fail with ERR CONNECTION CLOSED. the payload is huge and i have tried to decrypt the token and it show the base64 inside, i think the SDK is not supporting it or similar?Pasted here because token is so long due base64 image.\nhttps://pastebin.ubuntu.com/p/snTXJt8qYp/FYI maybe this issue was not fully fixed? Realm Web: Custom data base64 fixed by kraenhansen · Pull Request #3055 · realm/realm-js · GitHub", "username": "Juan_Jose_N_A" }, { "code": "", "text": "I have a few follow-up questions: Can you share an example of a base64 encoded image that I can use to reproduce this? Can you share a bit more about the failing GET /profile request, does the server respond with an error code or message?To me it sounds like you’re hitting a (perhaps undocumented) limit on the custom user data documents. From your dev tools screenshot it looks like the authentication succeeded, but the server is unable to complete the fetch of the profile because the access token (which includes the image) is too large.If at all possible, I would suggest that you upload the image somewhere else (perhaps S3 or Google’s storage) and reference it by a URL. If that’s not ideal for your use-case, perhaps store it in a different collection than the custom user data and find the document manually after a successful login to avoid it being included in the access token.", "username": "kraenhansen" }, { "code": "", "text": "Juan,MongoDB Realm does offer a 3rd party service to Amazon S3 for this exact problem. 
It is documented here: However, images are currently limited to a max of 4MB. I have tried this service, and I can say that it definitely works.Richard Krueger", "username": "Richard_Krueger" }, { "code": "", "text": "Hi guys, I’m sorry but I could not provide more data about this, since we are leaving Realm for our MVP and are no longer working with that codebase. Thanks anyway, and I hope this helps anyone who runs into the same issue.", "username": "Juan_Jose_N_A" } ]
Login blocked when custom_data contains base64?
2020-12-18T04:26:32.150Z
Login blocked when custom_data contains base64?
2,908
null
[ "php" ]
[ { "code": "{\n \"type\" : \"search\",\n\"date_added\" : \"2020-10-12 11:21:36\",\n\"product_data\" : {\n \"name\" : \"red floral dress\",\n \"total_products\" : \"178\"\n }\n} \ntotal_productsdate_added", "text": "I am storing my Data to Mongo Atlas Cloud using PHP Library.All Data is inserted properly, but i am facing issue with date type. Sample of my data is:In above sample, total_products is type STRING but should be INT and date_added is type STRING but should be type DATE or timestampWhile passing data from PHP (in JSON format) to MongoDB, it automatically convert everything to STRING.What i can do to keep the data type Same as i want.I am using PHP 7.2, so how can i define a schema for my Collection.One last thing, I have a collection of size 1.5 GB , is there any way to manage all data according to my Schema?Thanks", "username": "Hashir_Anwaar" }, { "code": "", "text": "Hello @Hashir_Anwaar, can you tell how you are inserting the JSON data? What is the code and the data you are inserting?", "username": "Prasad_Saya" }, { "code": "$mongoData = array(\n 'type' => $post['type'], *//STRING TYPE*\n 'product_data' => $post['event_data'], *//Object*\n 'date_added' => $post['event_time'] *//STRING TYPE, But should be date/timestamp*\n );\n\n $mongoCollection = 'myCollection';\n\n $this->mongodb->insert($mongoCollection, $mongoData);\n$insert = $collection->insertOne($data);", "text": "My Controller Code:My MongoDB Library Code is:", "username": "Hashir_Anwaar" }, { "code": "date", "text": "Here is post explaining how you can insert date object into a MongoDB document: Inserting and retriving dates and timestamps in mongodb using PHP", "username": "Prasad_Saya" }, { "code": "$utcdatetime = new MongoDB\\BSON\\UTCDateTime($orig_date*1000);date2021-01-25T07:25:59.765+00:00DateTimestamp", "text": "Thanks for the link, but when i use that approach $utcdatetime = new MongoDB\\BSON\\UTCDateTime($orig_date*1000); it change my date and also store it as date type in format 2021-01-25T07:25:59.765+00:00For example, if my date is today’s date, then multiplying it with 1000 cause date change…Also, type is Date but what if i want to store it as type Timestamp?Thanks a lot", "username": "Hashir_Anwaar" }, { "code": "1000", "text": "if my date is today’s date, then multiplying it with 1000 cause date change…Can you tell why you are multiplying with 1000?", "username": "Prasad_Saya" }, { "code": "1000TimeStamp", "text": "Can you tell why you are multiplying with 1000 ?bcz of the example… but i dnt need that tough just need to save it as TimeStamp type now", "username": "Hashir_Anwaar" }, { "code": "time_tordinal", "text": "Hi @Hashir_AnwaarIf you are referring to the BSON type Timestamp, don’t use that, stick with the Date type. If you are not then ignore the rest of this response.A Date will give resolution down to the millsecond, a Timestamp is at second resolution with an ordinal counter. Primarily the Timestamp is used in a Mongodb replicaset members oplog.BSON has a special timestamp type for internal MongoDB use and is not associated with the regular Date type. This internal timestamp type is a 64 bit value where:NoteThe BSON timestamp type is for internal MongoDB use. For most cases, in application development, you will want to use the BSON date type. See Date for more information.", "username": "chris" } ]
Date inserted as STRING but should be DATE or TIMESTAMP
2021-01-06T07:29:56.583Z
Date inserted as STRING but should be DATE or TIMESTAMP
19,533
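A hedged PHP sketch of the approach discussed above: store the timestamp as a BSON Date via MongoDB\BSON\UTCDateTime (which also accepts a DateTime object) and cast the count to an integer before inserting. It assumes the mongodb/mongodb library with $collection being a MongoDB\Collection; the array keys follow the question.

  <?php
  $data = [
      'type'         => $post['type'],
      'product_data' => [
          'name'           => $post['event_data']['name'],
          // cast so it is stored as an int, not a string
          'total_products' => (int) $post['event_data']['total_products'],
      ],
      // UTCDateTime accepts a DateTime object or milliseconds since the epoch;
      // the string "2020-10-12 11:21:36" is parsed in the server's timezone
      'date_added'   => new MongoDB\BSON\UTCDateTime(new DateTime($post['event_time'])),
  ];
  $collection->insertOne($data);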
null
[ "queries" ]
[ { "code": "", "text": "Hi,I have a document that has lots of text fields. I also have a text index that includes all of the fields, which allows me to perform a case and diacritic insensitive search across all fields with ease.However, I also want to perform an equivalent case and diacritic insensitive search across a single field, but am stuck. I cannot add a new text index, and I cannot use regex to be diacritic insensitive.It is similar to this, but I need the diacritic insensitivity as well.I thought that this must be a common scenario, but maybe I am missing an obvious solution?If there is not an easy way to support this out of the box, then what are the recommended ways to do this?Thanks for any help!", "username": "Michael_Sudnik" }, { "code": "", "text": "Hi @Michael_Sudnik,Welcome to MongoDB community.Have you considered using a pipeline aggregation with first pipeline doing the text match and second/third pipeline doing a specific field filtering.There is an example for similar behaviour here:In your case you can run a. $regex on a specific filed match or add the field with $regexFind and match on only true values.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "db.test.find()\n{ \"_id\" : 1, \"a\" : \"café\" }\n{ \"_id\" : 2, \"a\" : \"Café\" }\n{ \"_id\" : 3, \"a\" : \"Cafe\" }\n{ \"_id\" : 4, \"a\" : \"cafe\" }\n\ndb.test.find({ a: \"cafe\" }).collation( { locale: \"en_US\", strength: 1 } )\n// returns all the four documents\nstrength12strength1db.test.find({ a: { $regex: \"caf[eé]\", $options: \"i\" }}){ \"_id\" : 4, \"a\" : \"cafe\" }\n{ \"_id\" : 3, \"a\" : \"Cafe caffe\" }\n{ \"_id\" : 1, \"a\" : \"café x\" }\n{ \"_id\" : 2, \"a\" : \"y Café\" }\n", "text": "Hello @Michael_Sudnik, welcome to the MongoDB Community forum.You can use Collation to perform case and diacritic insensitive search on a single field. For example,Note that in the above query, the document field is not indexed.You can also specify collation as an option when creating an index on a field (see Create Index - Collation as an Option.I think you cannot use case and diacritic insensitive search using a Regex search. You can only do a case-insensitive search. In case you need to do a search with both options, you can specify the query like this, for example:db.test.find({ a: { $regex: \"caf[eé]\", $options: \"i\" }})This matches documents like the following:", "username": "Prasad_Saya" }, { "code": "", "text": "Hi, thanks for your reply. Performing a find on a single field by specifying the collation (with or without an index) does not seem to perform a “containing” search. In your example, I would want to be able to search for “caf” and all 4 results should be found. Ideally, I would want to use an index as the data will get pretty big even after doing any additional initial match stages to reduce the data size.A regex solution is an option, but I do not like the idea of the only option being to not use the index. It’s also not ideal to have to adjust my expressions to support different diacritic characters in this way.It just feels like there should be a way to do a “contains” for your first option…", "username": "Michael_Sudnik" }, { "code": "", "text": "Have you considered using a pipeline aggregation with first pipeline doing the text match and second/third pipeline doing a specific field filtering.So, in this case, results would be returned where the single field does not match the search at all. 
To filter the results additionally I would need to construct a case and diacritic insensitive search as described by @Prasad_Saya. I guess this is the best solution so far, in terms of performance and getting the functionality that I require.", "username": "Michael_Sudnik" } ]
Query for a field containing a string, case-insensitive and diacritic-insensitive
2021-01-25T02:36:57.989Z
Query for a field containing a string, case-insensitive and diacritic-insensitive
12,503
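A hedged sketch of the two-stage pipeline suggested in the thread above: a $text stage first (it uses the existing text index and is already case- and diacritic-insensitive), then a per-field regex filter. The collection and field names are placeholders, and the single-field regex is not index-assisted, with accent variants spelled out by hand as noted in the replies.

  db.items.aggregate([
    // uses the existing text index; case- and diacritic-insensitive
    { $match: { $text: { $search: "cafe" } } },
    // then restrict the "contains" match to one field
    { $match: { name: { $regex: "caf[eé]", $options: "i" } } }
  ])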
null
[]
[ { "code": "{\n total: Number\n leaves: Number\n fakes: Number\n inviterID: String\n fake: Boolean\n}\n{\n invites: Number\n leaves: Number\n fakes: Number\n inviterID: String\n fake: Boolean\n}\ninvitestotal - leavestotal", "text": "I want to convert this schematoinvites will be calculated using total - leaves. What will be the fastest way to transfer 70 million documents to this layout using that calculation and also deleting the total property?", "username": "mental_N_A" }, { "code": "", "text": "Hi @mental_N_A,Why not to do a gradually transition for all documents through the application, where the application will have a condition to see if there is a total field in the documen. Then you will calculate the “invites” value and transform the object that use it in new form in app and save to database for future use.This will not require an unnecessary 70m document migration which might be painful even with multithreading bulk update.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Fastest way to convert 70 million documents
2021-01-24T22:55:08.678Z
Fastest way to convert 70 million documents
1,877
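The lazy, in-application migration recommended above avoids touching all 70 million documents at once. If a one-shot migration is still wanted, here is a hedged sketch using an aggregation-pipeline update (MongoDB 4.2+); the collection name "invites" is a placeholder, and the filter makes the command safe to re-run since already-migrated documents no longer have "total":

  db.invites.updateMany(
    { total: { $exists: true } },
    [
      { $set: { invites: { $subtract: ["$total", "$leaves"] } } },
      { $unset: "total" }
    ]
  )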
null
[ "connecting", "security" ]
[ { "code": "$ ps -ef | grep mongod\nubuntu 4908 1 5 07:44 ? 00:00:03 mongod --tlsMode requireTLS --tlsCertificateKeyFile Server2.cert --tlsCAFile RootCA.pem --auth --dbpath /mnt/mongoDB-One/DB_X509 --logpath /mnt/mongoDB-One/DB_X509/mongod.log --fork --bind_ip 192.168.1.2\nubuntu 4951 3223 0 07:45 pts/0 00:00:00 grep --color=auto mongod\n$ \n$ sudo netstat -tulpn | grep mongod\ntcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 4612/mongod \n$ \n$ mongo --tls --host localhost --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem\n$ mongo --tls --host 127.0.0.1 --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem\n$ mongo --tls --host 192.168.1.2 --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509\nMongoDB shell version v4.2.0\nconnecting to: mongodb://192.168.1.2:27017/?authMechanism=MONGODB-X509&authSource=%24external&compressors=disabled&gssapiServiceName=mongodb\n2021-01-21T16:26:42.690+0900 E QUERY [js] Error: couldn't connect to server 192.168.1.2:27017, connection attempt failed: SocketException: Error connecting to 192.168.1.2:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):3:6\n2021-01-21T16:26:42.693+0900 F - [main] exception: connect failed\n2021-01-21T16:26:42.693+0900 E - [main] exiting with code 1\n$ \n$ openssl req -new -newkey rsa:4096 -nodes -keyout Server.key.pem -out Server.req.pem -subj /C=US/ST=CA/O=ServerCA/CN=localhost\n$ openssl x509 -req -days 365 -in Server.req.pem -CA IntermedCA.pem -CAkey IntermedCA.key.pem -set_serial 01 -out Server.pem -extfile X509_v3.ext\n$ cat X509_v3.ext\nauthorityKeyIdentifier=keyid,issuer\nbasicConstraints=CA:FALSE\nkeyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment\nsubjectAltName = @alt_names\n[alt_names]\nDNS.1 = localhost\nIP.1 = 127.0.0.1\nIP.2 = 192.168.1.2\n$", "text": "I have a mongod daemon running on one machine.I can confirm it, by running these two commands:While being on the same machine I can launch mongo shell, using one of these two commands:But from a different machine, the following is not working:Why could that be?Some more information which may be useful:The server certificate has been created using these 2 commands.With X509_v3.ext being :", "username": "Michel_Bouchet" }, { "code": "", "text": "Hi @Michel_BouchetThe mongo server is only listening on 127.0.0.1 that is the loopback ip address only accessible on that host.By default mongod only binds to localhost for security reasons.Here is what you need:", "username": "chris" }, { "code": "$ mongod --tlsMode requireTLS --tlsCertificateKeyFile Server.cert --tlsCAFile RootCA.pem --auth --dbpath /mnt/mongoDB-One/DB_X509 --logpath /mnt/mongoDB-One/DB_X509/mongod.log --fork --bind_ip 127.0.0.1,192.168.1.2$ ps -ef | grep mongod\nubuntu 2868 1 1 01:45 ? 
00:00:10 mongod --tlsMode requireTLS --tlsCertificateKeyFile Server.cert --tlsCAFile RootCA.pem --auth --dbpath /mnt/mongoDB-One/DB_X509 --logpath /mnt/mongoDB-One/DB_X509/mongod.log --fork --bind_ip 127.0.0.1,192.168.1.2\nubuntu 3031 2608 0 01:54 pts/0 00:00:00 grep --color=auto mongod\n$ \n$ mongo --tls --host localhost --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem\n$ mongo --tls --host 127.0.0.1 --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem\n$ mongo --tls --host 192.168.1.2 --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem\n$ mongo --tls --host localhost --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509\n$ mongo --tls --host 127.0.0.1 --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509\n$ mongo --tls --host 192.168.1.2 --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509\n$ mongo --tls --host 192.168.1.2 --tlsCertificateKeyFile Client.cert --tlsCAFile RootCA.pem --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509\nMongoDB shell version v4.2.0\nconnecting to: mongodb://192.168.1.2:27017/?authMechanism=MONGODB-X509&authSource=%24external&compressors=disabled&gssapiServiceName=mongodb\n2021-01-22T11:12:02.292+0900 E NETWORK [js] SSL peer certificate validation failed: Certificate trust failure: Invalid Extended Key Usage for policy; connection rejected\n2021-01-22T11:12:02.293+0900 E QUERY [js] Error: couldn't connect to server 192.168.1.2:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: Certificate trust failure: Invalid Extended Key Usage for policy; connection rejected :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):3:6\n2021-01-22T11:12:02.296+0900 F - [main] exception: connect failed\n2021-01-22T11:12:02.296+0900 E - [main] exiting with code 1\n$", "text": "OK. Though I was not aware of this security limitation, what you write is also my understanding but it still does not work.I launch mongod with this command:$ mongod --tlsMode requireTLS --tlsCertificateKeyFile Server.cert --tlsCAFile RootCA.pem --auth --dbpath /mnt/mongoDB-One/DB_X509 --logpath /mnt/mongoDB-One/DB_X509/mongod.log --fork --bind_ip 127.0.0.1,192.168.1.2and I can confirm:Then I can fire up mongo shell locally using any of the following:But I expect the last command to also work from a different computer and it doesn’t. Can you see why?This is how it goes:", "username": "Michel_Bouchet" }, { "code": "2021-01-22T11:12:02.292+0900 E NETWORK [js] SSL peer certificate validation failed: Certificate trust failure: Invalid Extended Key Usage for policy; connection rejected\n2021-01-22T11:12:02.293+0900 E QUERY [js] Error: couldn't connect to server 192.168.1.2:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: Certificate trust failure: Invalid Extended Key Usage for policy; connection rejected :\n", "text": "I’m not sure why this would work on the mongod host but not remotely, maybe the tls implementation? 
What host OS is on the remote client.", "username": "chris" }, { "code": "", "text": "The remote client is a MacBook Air running macOS Catalina Version 10.15.7.", "username": "Michel_Bouchet" }, { "code": "", "text": "As per mongodb documentation to use x.509keyUsage = digitalSignature extendedKeyUsage = clientAuth\nCould this be the issue? As error says something about EKU", "username": "Ramachandra_Tummala" }, { "code": "openssl x509 -in Server.cert -noout -text", "text": "Are you able to share the X509 extension section from.openssl x509 -in Server.cert -noout -text", "username": "chris" }, { "code": "$ openssl x509 -in Server.cert -noout -text\nCertificate:\n Data:\n Version: 3 (0x2)\n Serial Number: 1 (0x1)\n Signature Algorithm: sha256WithRSAEncryption\n Issuer: C = US, ST = DC, O = IntermedCA, CN = echo\n Validity\n Not Before: Jan 22 03:21:07 2021 GMT\n Not After : Jan 22 03:21:07 2022 GMT\n Subject: C = US, ST = CA, O = ServerCA, CN = echo\n Subject Public Key Info:\n Public Key Algorithm: rsaEncryption\n RSA Public-Key: (4096 bit)\n Modulus:\n 00:f5:b8:9e:a3:6c:91:0c:b6:43:af:51:c1:13:e4:\n .............\n Exponent: 65537 (0x10001)\n X509v3 extensions:\n X509v3 Authority Key Identifier: \n keyid:82:E7:CC:17:6E:A3:9A:53:AC:5B:E1:82:08:ED:52:D3:FC:EA:F1:24\n DirName:/C=US/ST=NY/O=RootCA\n serial:01\n\n X509v3 Basic Constraints: \n CA:FALSE\n X509v3 Key Usage: \n Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment\n X509v3 Subject Alternative Name: \n DNS:localhost, IP Address:127.0.0.1, IP Address:192.168.1.2\n Signature Algorithm: sha256WithRSAEncryption\n 81:b3:2b:ac:f3:46:fc:85:28:ab:4e:16:17:d8:f1:25:da:71:\n .............\n$\n", "text": "This is it:I noticed this issue is not appearing on other clients. And adding keyUsage and extendedKeyUsage as mentioned by Ramachandra_Tummala doesn’t seem to make any difference, as much as I’ve tried.", "username": "Michel_Bouchet" }, { "code": "cat mongodb-test-server1.crt mongodb-test-server1.key > test-server1.pemcat mongodb-test-server1.crt mongodb-test-ia.crt mongodb-test-server1.key > test-server1.pem", "text": "In doing what @Ramachandra_Tummala suggested the the other Extended Key usage is missing:\nextendedKeyUsage = serverAuth, clientAuthThere is a good set of examples in the security appendix, hopefully that can help.I think the only thing I would change is in the Appendix B, Section B, Step 4:\ncat mongodb-test-server1.crt mongodb-test-server1.key > test-server1.pem\nto\ncat mongodb-test-server1.crt mongodb-test-ia.crt mongodb-test-server1.key > test-server1.pem", "username": "chris" }, { "code": "", "text": "I noticed this issue is not appearing on other clients. And adding keyUsage and extendedKeyUsage as mentioned by Ramachandra_Tummala doesn’t seem to make any difference, as much as I’ve tried.What platforms are the other clients on?", "username": "chris" }, { "code": "", "text": "The other clients are on Ubuntu and Debian.", "username": "Michel_Bouchet" }, { "code": "", "text": "Yes, it finally works. 
This time I added extendedKeyUsage = serverAuth, clientAuth to both the server and the client certificates. And I can now also connect from my Mac running macOS Catalina. Thanks again for the precious tips!", "username": "Michel_Bouchet" }, { "code": "", "text": "Trying out and testing a bit more seems to show that only having this on the server certificate is enough: extendedKeyUsage = serverAuth", "username": "Michel_Bouchet" }, { "code": "", "text": "Hi @Michel_Bouchet Glad you got there!", "username": "chris" } ]
Problems connecting to mongod from a different machine, using X509
2021-01-21T08:17:35.283Z
Problems connecting to mongod from a different machine, using X509
6,611
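For reference, a sketch of the X509_v3.ext file from the thread above with the line that resolved the issue added. serverAuth on the server certificate was reported as sufficient; clientAuth can be added on the client side as suggested in the replies.

  authorityKeyIdentifier=keyid,issuer
  basicConstraints=CA:FALSE
  keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
  # the missing piece (use clientAuth in the client certificate's extensions)
  extendedKeyUsage = serverAuth
  subjectAltName = @alt_names
  [alt_names]
  DNS.1 = localhost
  IP.1 = 127.0.0.1
  IP.2 = 192.168.1.2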
null
[ "queries", "data-modeling" ]
[ { "code": "", "text": "I want to replicate a filter experience on data. I am able to find it on one website but that is on the frontend, I want to create it using the mongo aggregation framework.\nURL - https://www.yatra.com/hotels/hotels-in-mumbai\nThe important thing about this filter is that it returns a count as zero also. In addition to that when the next filter applied the query should do the OR operation within its collection for counts and AND operation with other collections.\nFinally, the API should return data in the form of {id: filterName: count} for all the collections.", "username": "Saransh_Exports" }, { "code": "", "text": "Hi @Saransh_Exports;Could you give an example of what you need?? May be I will be able to help you out? I just want to make sure what I understood by going through your question is correct.Cheers\nShanka", "username": "Shanka_Somasiri" }, { "code": "", "text": "Hi @Saransh_Exports,It will be great to have a better example. But if you are talking about a facted multi data filter and counting like the provided site has in its “filter” section, you should use $facet and $bucket or $bucketAuto with a $size to calc the array sizes :Please review the example above and let us know if this is what you are looking for.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you for taking an interest.I have asked the same question in the below link with one closest example with dummy data.\naggregation framework - How to fetch counts from a query from a mongodb collection? - Stack Overflow.\nKeep me updated in case of any query or success.", "username": "Saransh_Exports" }, { "code": "", "text": "Hi Pavel,Thanks for the revert.$facet would definitely help. Main challenge is to calculate count = zero based on any filter criteria. Also, I am not sure on decide of bucket boundaries as my data have string values. Sharing an example below via stack-overflow.", "username": "Saransh_Exports" }, { "code": " collection.aggregate([ {$match : { _user : \"...\", \nmaterial : { $in : [silver, mixed] }, \ntype : { $in : [beads, jewels] },\n shape : \"round\" },\n {\n $facet: {\n categorizedByUser: [\n {\n $group: {\n _id: \"$_user\",\n count: {\n $sum: 1\n }\n }\n }\n ],\n categorizedByMaterial: [\n {\n $group: {\n _id: \"$material\",\n count: {\n $sum: 1\n }\n }\n }\n ],\n categorizedByShape: [\n {\n $group: {\n _id: \"$shape\",\n count: {\n $sum: 1\n }\n }\n }\n ],\n categorizedByType: [\n {\n $group: {\n _id: \"$type\",\n count: {\n $sum: 1\n }\n }\n }\n ]\n }\n }\n])\n", "text": "Hi @Saransh_Exports,I ve noticed that someone provided count example via the aggregation, so there is no need for bucketing.However for your queries on filter you just need to filter the data in a previous $match stage before the facet:If you need or logic use $or in the match.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "It helped, for counting the criteria that won’t match I am matching the query result with original distict types of values present.\nThank you.", "username": "Saransh_Exports" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filter experience
2021-01-22T01:59:28.390Z
Filter experience
2,755
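A hedged sketch of the "zero count" handling mentioned in the last reply above: merge the $facet output with the full list of known values in application code, defaulting missing entries to 0. The variable names and the Node.js driver usage are assumptions; categorizedByMaterial matches the facet name used earlier in the thread.

  const facets = await coll.aggregate(pipeline).toArray();   // pipeline as in the thread
  const allMaterials = await coll.distinct("material");      // every possible value
  const counted = Object.fromEntries(
    facets[0].categorizedByMaterial.map(d => [d._id, d.count])
  );
  const materialCounts = allMaterials.map(m => ({ id: m, count: counted[m] ?? 0 }));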
null
[ "php" ]
[ { "code": "", "text": "I am facing issue with PHP7.4.9, MongoDb does not load, I have already added extension and .dll file.", "username": "jb_technology" }, { "code": "", "text": "Can you tell us more?", "username": "Jack_Woehr" } ]
Install or enable PHP's mongodb extension
2021-01-22T07:56:11.153Z
Install or enable PHP’s mongodb extension
1,716
null
[ "mongodb-shell", "installation" ]
[ { "code": "", "text": "Hi - I’m relatively new to the developer course. I am trying to open the command prompt. So I unzipped the download file and clicked on mongo.exe. A window opened and then closed. I do not have a command prompt I can work with. Am I supposed to use the windows command prompt? That didn’t work…the windows command prompt did nothing (error message). Can somebody help me so I can get the mongo shell running? Thanks.", "username": "David_Ponzio" }, { "code": "", "text": "When you have anerror messageit is best to share with us the content of the error message. Usually, it helps to diagnose the issue.Yes, Windows command prompt is the way to invoke mongo.Most likely you did not adjust your %PATH% to include mongo’s bin directory.", "username": "steevej" }, { "code": "", "text": "I am not sure if you need to reboot the PC after adding the additional path.Also be aware that if mongo import tools are required for your course, and you installed v4.4, they are no longer incorporated into that install and need to be installed separately, with their installed directory path added in as well.", "username": "NeilM" } ]
Installing MongoDB on computer
2021-01-22T21:40:56.303Z
Installing MongoDB on computer
2,335
null
[ "installation" ]
[ { "code": "E: The repository 'https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/3.2 Release' does not have a Release file.\nN: Updating from such a repository can't be done securely, and is therefore disabled by default.\nN: See apt-secure(8) manpage for repository creation and user configuration details.", "text": "Hey, is it possible to install MongoDB 3.2 on my computer which is on Ubuntu 20.04 LTS?I tried the installation shown on the documentation but it does not find the release file.Console output below:", "username": "Mathieu_Sylvestre" }, { "code": "", "text": "Your combination does not show up under supported platforms", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Mathieu_Sylvestre,Welcome to the community.As @Ramachandra_Tummala point out this is not a supported platform. Mongodb 3.2 was End of Life September 2018.Mongodb 4.4 is the only version supported on Ubuntu 20.04.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Install MongoDB 3.2 on Ubuntu 20.04 LTS
2021-01-23T00:12:15.216Z
Install MongoDB 3.2 on Ubuntu 20.04 LTS
4,815
null
[ "monitoring" ]
[ { "code": "", "text": "Hi,In administering multiple environments in other persistence layers, we normally rely on tools such as schema compare (found in redgate, quest, modeling tools, or could be done by querying metadata directly) to look for “drift” across various environments in the schema definition (DEV <-> TST <-> UAT <-> PRD).I’m wondering if something similar exists in MongoDB or is offered by a 3rd party tool, to facilitate keeping the various environments or databases in synch and ensure they have consistent definitions and if not, whether it’s due to an in-flight release, or an unintended difference that needs to be reconciled.A second part of the question is whether there are dates associated with collections. This question arises in the context of trying to better understand the origin of objects that come up on a difference report (see prior paragraph), and being unsure who created them, when, or why, and whether they are meant to be deployed further or not. Per the documentation, it looks like collection create date is not a part of the metadata. If so, is there auditing that could be enabled that would trigger a write to a new “audit” type collection perhaps – with the change, who made it, and the date – and whether a library for this already exists, and what the possible performance overhead may be with this?Thanks\nEugene", "username": "GenePHL" }, { "code": "", "text": "Hello @GenePHL, welcome to the MongoDB Community forum.This is some basic information.The server logs (Log Messages) have the information about all the actions on a server. Logs have timestamps and log components specifying the functional categorization of the messages. I see that the mtools (a tool which is used to study logs, apply filters, format, etc.) can be used to get relevant information, for example the authentication / access by a particular user on a particular date/time.The user authentication and authorization (MongoDB Security) when enabled and configured will provide who did what and when information in the logs. Security can also control which individual or groups can do what - in an organized and centralized manner.Also see Security Auditing and a related post: Update auditLog configuration without restart.", "username": "Prasad_Saya" } ]
Auditing new/changed collections, indexes, release automation
2021-01-23T05:55:20.511Z
Auditing new/changed collections, indexes, release automation
1,794
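A hedged sketch of enabling the audit log mentioned above (a MongoDB Enterprise / Atlas feature), which records schema-changing events such as createCollection and createIndex with a timestamp and the authenticated user. The path and the event filter are illustrative placeholders:

  auditLog:
    destination: file
    format: JSON
    path: /var/log/mongodb/audit.json
    filter: '{ atype: { $in: [ "createCollection", "createIndex", "dropCollection" ] } }'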
null
[ "indexes", "performance" ]
[ { "code": "find({user_email: \"[email protected]\", list_type: \"bulk\"})", "text": "Lets say I have 3 indexes defined on a collection:Considering there can be multiple documents with same user_email, and list_type has only 2 values (“bulk” and “single”), if I make a query:find({user_email: \"[email protected]\", list_type: \"bulk\"})Which one of the indexes would be used? And on what basis does mongo determine which one to use?", "username": "Amin_Memon" }, { "code": "(list_type, 1) + (user_email, 1)findexplain()", "text": "Hello @Amin_Memon, here are some clarifications.Which one of the indexes would be used?In theory, any of the indexes could be used. In case, if there was a compound index on (list_type, 1) + (user_email, 1), it would be a candidate too.In your case, ideally, a compound index comprising the two fields should be created and that would be used. Since, there is a possibility of using the two fields in two combinations, you need to determine which is the best. This can be determined in the way the data is. For a given query, using the two fields, find how many documents will be selected with the first field alone. The best option is when the query returns the least number of documents using the first field. See Query Selectivity for details.Practically, the way to determine which index would be used is to generate a query plan on the find query using the explain() method. You can analyze the plan and find out which index is used, how and see statistics like how much time it has taken to execute the query. See Analyze Query Performance for details.And on what basis does mongo determine which one to use?MongoDB’s query optimizer generates query plans for all the indexes that could be used for a given query. In the example scenario, all the three indexes are possible candidates.The optimizer uses each of the index plans and executes them for a certain period of time and chooses the best performing candidate (this is determined based upon factors like - which returned most documents in least time, and other factors). Based upon this it will output winning and rejected plans and these can be viewed in the plan output.MongoDB caches the plans for a given query shape. Query plans are cached so that plans need not be generated and compared against each other every time a query is executed and get the winning plan.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What determines which index is used when there are multiple indexes (with common fields) defined for a collection?
2021-01-23T09:20:37.261Z
What determines which index is used when there are multiple indexes (with common fields) defined for a collection?
3,781
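A minimal mongosh sketch of the two steps suggested in the thread above: create the compound index (leading with the more selective field, here user_email) and then inspect the winning plan with explain(). The collection name "lists" is a placeholder.

  db.lists.createIndex({ user_email: 1, list_type: 1 })

  db.lists.find(
    { user_email: "[email protected]", list_type: "bulk" }
  ).explain("executionStats")
  // check winningPlan / rejectedPlans, totalKeysExamined and totalDocsExamined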
null
[ "java", "morphia-odm" ]
[ { "code": "", "text": "HelloI saw that Morphia provides a query builder, is it possible to use MQL instead of the query builder?Something like that (that says deprecated),but for all read/write operations\nAggregate,Update,FindAndModify,Delete etchttps://morphia.dev/morphia/2.1/querying-old.html#_raw_queryingAlso why use ODM ? (if i dont want the query builder)\nPOJO and Mongodb Java driver is harder to use?Thank you", "username": "Takis" }, { "code": "", "text": "Morphia predates the driver’s POJO support by several years is the answer for why is there both. Morphia served as the inspiration for the early forms of the driver’s POJO support (though they’ve drifted as such things do). When I first started the POJO support forever ago, my goal was to make it so that the driver supported most needs and for Morphia to be a thin veneer on top. I think that’s probably come to fruition. I think there are still things that Morphia does “better” than the driver but, of course, such things are a matter of taste and preference.The driver leans more heavily on the BSON types than I prefer but I understand and agree with the reasoning (as best as I understand them no longer being part of the team). Morphia has “better” support for references (whatever your feeling on those may be) and a nicer aggregation API (again with the BSON types). But these are all seriously subjective and i’m clearly biased toward Morphia. Morphia might be “too much” for an application in which case the driver is a perfectly valid and reasonable choice. It just depends on what you need and how you prefer to work.Using an ODM, whether it’s Morphia or the driver’s POJO support (which Morphia now builds on actually), allows you, as a developer, to work with higher level abstracts (Objects) and let the ODM manage getting data in and out of your domain objects. This simplifies your application and offloads the intricate details of interacting with a driver off to experts who can better manage and track how to do so.", "username": "Justin_Lee" }, { "code": "", "text": "Thank you for the reply,the information,and for Morphia/Java driver.\nMorphia is very popular and its looks very nice and clear to me.I just didn’t know the difference.Also i dont know if we can run a MQL query if it is in Document class of the java driver,or in List of Documents(pipeline)?Morphia supports MQL? Or all queries should be written in its own query builder?", "username": "Takis" }, { "code": "java.sql.PreparedStatementQuery", "text": "Morphia is not currently set up to accept, say, a json string representing a query as you might write in the shell and execute. I have given some thought to something along those line (along the lines of java.sql.PreparedStatement or hibernate’s named queries, e.g.) but it’s a pretty low priority so far. There are more interesting and significant features and fixes to deal with first. so, yes, the appropriate route with Morphia is via the APIs defined by Query.", "username": "Justin_Lee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MQL and ODM (Morphia)
2021-01-22T18:17:48.788Z
MQL and ODM (Morphia)
4,224
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.12 is out and is ready for production deployment. This release contains only fixes since 4.2.11, and is a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.12 is released
2021-01-22T23:37:49.435Z
MongoDB 4.2.12 is released
3,347
null
[]
[ { "code": "", "text": "In an ATLAS replica set environment (M40), what is the best way to reclaim disk space (compact storage) for large documents that have been removed from GridFS?", "username": "Chris_Hills" }, { "code": "", "text": "Hi Chris,As a collection namespace is continuously re-used the WiredTiger storage engine will naturally re-claim that space over time. If you’d like to more pro-actively explore options feel free to contact MongoDB support,-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi @Andrew_Davidson,Thanks for the reply.Just to clarify, we’re working with some large amounts of data that get stored in grid.fs for a short period of time - up to about a day before being deleted. Using Atlas is relatively new to us, our experience of using a simple non-replicated Mongo install, was that we had to pro-actively reclaim the disk space used by the grid.fs data after it had been deleted. The disk space was not automatically reclaimed by the operating system.By using Atlas (M10 or above) is the space from deleted grid.fs documents automatically released. This is pertinent in terms of our Atlas costs, if we have a cluster with 30GB of storage for instance, over-time is this going to get exhausted from the space assigned to grid.fs documents, that have been subsequently deleted, not getting freed up automatically.Chris", "username": "Chris_Hills" }, { "code": "", "text": "The data will be reclaimed automatically as long as you’re using the same collections for the gridFS environment over time. Separately, if you need to proactively free space you should be able to use https://docs.mongodb.com/manual/reference/command/compact/", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best way to reclaim disk space when using ATLAS M40
2021-01-11T12:32:54.991Z
Best way to reclaim disk space when using ATLAS M40
5,516
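A minimal sketch of the compact command referenced above. For GridFS the data lives in the fs.files and fs.chunks collections (assuming the default bucket name); on a replica set it is typically run against one member at a time. The database name "mydb" is a placeholder.

  db.getSiblingDB("mydb").runCommand({ compact: "fs.chunks" })
  db.getSiblingDB("mydb").runCommand({ compact: "fs.files" })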
null
[]
[ { "code": "", "text": "My MongoDB has recently been using lots of CPU on IO Wait. I believe by switching to XFS filesystem it should stop these issues. How can I successfully do this without corrupting any of my current data? I am on Ubuntu 18.04. My disk is currently ext4.", "username": "mental_N_A" }, { "code": "", "text": "First, youand then you", "username": "steevej" }, { "code": "", "text": "If you have enough space elsewhere you can shutdown mongod and move all the files in the data directory out of the way. And move them back when after reformatting.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I successfully switch to XFS filesystem without corrupting any data?
2021-01-22T16:07:54.196Z
How do I successfully switch to XFS filesystem without corrupting any data?
2,966
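A hedged outline of the "move the files out of the way" approach suggested above, assuming the dbPath sits on its own partition. The device name, paths and service user are placeholders; take a verified backup first, and remember to update /etc/fstab for the new filesystem.

  sudo systemctl stop mongod
  sudo rsync -a /var/lib/mongodb/ /data/mongodb-backup/   # copy the dbPath somewhere safe
  sudo umount /var/lib/mongodb
  sudo mkfs.xfs -f /dev/sdb1                              # destructive: reformats the partition
  sudo mount /dev/sdb1 /var/lib/mongodb
  sudo rsync -a /data/mongodb-backup/ /var/lib/mongodb/
  sudo chown -R mongodb:mongodb /var/lib/mongodb
  sudo systemctl start mongod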
null
[ "atlas-device-sync", "atlas-functions" ]
[ { "code": "const { v4: uuidv4 } = require('uuid');\nconsole.log(uuidv4());\nTypeError: 'slice' is not a function\n at rng (node_modules/uuid/dist/rng.js:29:10)\n at exports (function.js:12:15)\n at apply (<native code>)\n at function_wrapper.js:5:9\n at <anonymous>:11:1\n", "text": "hi,I installed the npm package uuid. Now I like to generate a new id like:but this produce following error:thx", "username": "rouuuge" }, { "code": "", "text": "Hi Rouuuge - thanks for posting.To help us see more details of where the issues lies - would you have a repo to share?", "username": "Shane_McAllister" }, { "code": "exports = async function() {\n const { v4: uuidv4 } = require('uuid');\n console.log(uuidv4());\n}\n{\n \"dependencies\": {\n \"dayjs\": \"^1.9.3\",\n \"uuid\": \"^8.3.2\"\n }\n}\n", "text": "hi,Currently not possible to share it but here a small snipped how to repoduce my realm-function:and following node-modules that I imported:thx for help!", "username": "rouuuge" } ]
Create uuid in realm mongodb functions
2021-01-20T17:55:48.574Z
Create uuid in realm mongodb functions
2,364
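The error in the thread above appears to come from the uuid package expecting runtime APIs the Realm Functions environment does not provide. A hedged, dependency-free workaround is to build a v4-style string from Math.random (not cryptographically strong), or to use BSON.ObjectId() if a Mongo-style id is acceptable:

  exports = function() {
    const uuid = "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, c => {
      const r = Math.random() * 16 | 0;
      const v = c === "x" ? r : (r & 0x3) | 0x8;
      return v.toString(16);
    });
    console.log(uuid);
    return uuid;
  };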
null
[ "aggregation" ]
[ { "code": "{\n $lookup: {\n from: \"parts\",\n let: {\n family: name,\n isParent: false,\n },\n pipeline: [\n {\n $match: {\n family: \"$family\",\n isParent: \"$isParent\",\n },\n },\n ],\n as: \"products\",\n },\n },\n", "text": "Greetings! I’m new to MongoDB but I’ve looked at the documentation for a lot of the aggregation related commands. I was wondering about a portion of an aggregatetion I am currently working with:// name is a predefined variable for a part nameI’m just a little confused on how to properly configure this properly. Currently it adds the products array to my current aggregate function, but it is completely devoid of content.", "username": "Ether" }, { "code": "$lookuppipeline\"$$<variable>\"$lookuppipeline$match$expr$expr$match$expr$match$lookup$exprfromfrom$match$exprlet$match {\n $lookup: {\n from: \"parts\",\n pipeline: [\n {\n $match: {\n family: name,\n isParent: false\n }\n }\n ]\n as: \"products\",\n }\n }\n", "text": "Hey @Ether Welcome to MongoDB Community Forum,Look at the instruction provided in the join-conditions-and-uncorrelated-sub-queries,Optional. Specifies variables to use in the pipeline field stages. Use the variable expressions to access the fields from the documents input to the $lookup stage.The pipeline cannot directly access the input document fields. Instead, first define the variables for the input document fields, and then reference the variables in the stages in the pipeline .NOTETo reference variables in pipeline stages, use the \"$$<variable>\" syntax.The let variables can be accessed by the stages in the pipeline, including additional $lookup stages nested in the pipeline .But in your case you are passing external inputs not from references, so no need to use let here, you can add direct condition in your $match stage,", "username": "turivishal" }, { "code": " const series = await db\n .collection(\"part\").aggregate([\n {\n $match: { family: name, isParent: true },\n },\n {\n $lookup: {\n from: \"part\",\n pipeline: [{ $match: { family: name, isParent: false } }],\n as: \"products\",\n },\n },\n ])\n", "text": "Thank you for the reply, I was wondering about the external inputs. Could $lookup be used like this within the same collection? I have a master document that needs to embed an array of children documents, and they are all within the same collection. I applied your change to my query and it still returns an empty products array.To be completely transparent, this is what I currently have:This returnsimage1327×420 53.4 KBI can’t understand why the parent document isn’t populating with the children. 
name = ‘nvidia_3080’, so it should match for all the children document that aren’t parents and slot them into the products array…", "username": "Ether" }, { "code": "$graphLookup,", "text": "Could $lookup be used like this within the same collection?Yes you canand also there is a $graphLookup, stage, Performs a recursive search on a collection, with options for restricting the search by recursion depth and query filter.https://docs.mongodb.com/manual/reference/operator/aggregation/graphLookup/I applied your change to my query and it still returns an empty products array.It would be more easy if post post some sample documents, one for parent and one for child.", "username": "turivishal" }, { "code": "", "text": "Understood, here are some sample documents:child product:\n{\n_id: 136\ntype: “video_card”\nfamily: “nvidia_3080”\nisParent: false\n}parent product:\n{\n_id: 369\ntype: “video_card”\nfamily: “nvidia_3080”\nisParent: false\nvendorList:[123,321]\n}", "username": "Ether" }, { "code": "isParent: false {\n _id: 136,\n type: \"video_card\",\n family: \"nvidia_3080\",\n isParent: false\n },\n {\n _id: 369,\n type: \"video_card\",\n family: \"nvidia_3080\",\n isParent: true,\n vendorList: [123, 321]\n }\n$lookupdb.part.aggregate([\n {\n $match: {\n family: \"nvidia_3080\",\n isParent: true\n }\n },\n {\n $lookup: {\n from: \"part\",\n pipeline: [\n {\n $match: {\n family: \"nvidia_3080\",\n isParent: false\n }\n }\n ],\n as: \"products\"\n }\n }\n])\n", "text": "parent product :\n{\n_id: 369\ntype: “video_card”\nfamily: “nvidia_3080”\nisParent: false\nvendorList:[123,321]\n}Make sure isParent: false is true for parent document!Query with $lookup:Playground", "username": "turivishal" }, { "code": "", "text": "I understand everything about this, thank you very much. The isParent being false was a typo.I only get an empty array returned currently on the aggregation resoluton. It must be a quirk with the Node.js MongoDB Driver API, I am looking further into it.However, I will mark your answer as correct because in vanilla MongoDB that is indeed what I am looking for. I just can’t get it to work in Node.js.", "username": "Ether" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to properly utilize $lookup with alternate join conditions?
2021-01-21T21:49:51.602Z
How to properly utilize $lookup with alternate join conditions?
13,662
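A note for readers who hit the same empty-products symptom from Node.js as the original poster: the accepted pipeline also works through the official Node.js driver, but aggregate() hands back a cursor that has to be materialized before you can see the joined documents. The sketch below is illustrative only; the client variable, database name, and logging are assumptions and are not taken from the thread.

```js
// Minimal sketch; assumes a connected MongoClient called `client` and the
// "part" collection from the thread (the database name "inventory" is made up).
const coll = client.db("inventory").collection("part");

const name = "nvidia_3080"; // a plain external value, not a field reference

const cursor = coll.aggregate([
  { $match: { family: name, isParent: true } },
  {
    $lookup: {
      from: "part", // self-join on the same collection
      pipeline: [{ $match: { family: name, isParent: false } }],
      as: "products",
    },
  },
]);

// aggregate() returns a cursor, not documents; materialize it before reading.
const series = await cursor.toArray();
console.log(series[0].products); // should contain the child documents
```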
null
[ "python", "performance" ]
[ { "code": " try: \n results = await self.mongo.insert_many(prepared_hosts, ordered=False) \n except BulkWriteError as e:\n details: dict = e.details\n self.logger.info(f\"There were some host insertion errors:\"\n f\"\\nwriteErrors: {len(details.get('writeErrors'))}\"\n f\"\\nnInserted: {details.get('nInserted')}\"\n f\"\\nwriteConcernErrors: {details.get('writeConcernErrors')}\"\n f\"\\nnUpserted: {details.get('nUpserted')}\"\n f\"\\nnMatched: {details.get('nModified')}\"\n f\"\\nnRemoved: {details.get('upserted')}\"\n )\n", "text": "Hi,I have built a service similar to Shodan which catalogues and indexes internet hosts. I’m currently really struggling with updating documents at scale and would appreciate some advice.Current Architecture (all distributed, all running latest Mongo version 4.4)3 x Mongos3 x configsvr in replicaset (configReplSet)10 x shardsvr in replicaset (shardReplSet)Note: all nodes communicate over at least a 1GBps line and have an uptime of > 99.9%.The cluster has a unique index on the “ip” fieldI use Motor as my Python-Mongo adapter, which itself sits on top of Pymongo.The service itself will perform a scan and aggregate details about a host, it will then prepare them for insertion into the mongo cluster. I use a simple try/catch block for initial inserts:Note: self.mongo.insert_many is a wrapper around Pymongo’s insert_manyInside the catch block I aggregate the the writeErrors and any that throw 11000 (# E11000 duplicate key error collection) are prepared for update. I And this is where the problems begin…Attempt 1Because each job was producing anywhere from 100,000 to 10,000,000 documents for processing, this bottleneck more often than not caused the job to run for days.Attempt 2This meant the jobs were finished in a very reasonable time, but even with 256 workers I could only manage approximately 1000 updates/second and the queue simply ballooned out of control, spiralling upwards of 200,000,000 documents within a few days.And that is where I am, today. I would appreciate any thoughts on the matter, whether it’s process, architecture, both or whatever.", "username": "Darrel_Rendell" }, { "code": "upsert: true", "text": "Hi @Darrel_Rendell and welcome in the MongoDB Community !From what I’m reading, it sounds like you want to insert the document if it doesn’t exist and update it if it already exists in your database. So it sounds like you would like to use an upsert which does exactly that.Also, because you apparently have big batches, you want to use the BulkWrite operation combined with many insertOne with the upsert: true option.This will avoid this logic of try / fail / retry with an update that you have implemented. As the IP address is your key apparently, you can use this for your filter for the upsert operation.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "{ \n \"ip\": str,\n \"ports\": Array[int],\n \"banners\": Array[Object],\n \"certificates\": Array[Object],\n \"first_scanned\": ISODate,\n \"last_scanned\": ISODate,\n \"headers\": Array[Object]\n}\n", "text": "Hi Maxime,Thanks for your reply!Upserting is something I considered, but the challenge I have is that when I’m processing documents, the assumption is the IP hasn’t been seen before, so it is initialised with a default object:If a document with the provided IP exists (unique index), we then have to merge the documents. This is because inserting elements into banners, certificates or headers almost always requires modifying the ports array. 
If I did an upsert, it would set the new array element but not allow me to merge the ports.I hope this makes sense.", "username": "Darrel_Rendell" }, { "code": "from datetime import datetime\nfrom pprint import pprint\n\nfrom faker import Faker\nfrom pymongo import MongoClient, UpdateOne\n\nfake = Faker()\n\n\ndef rand_host():\n return {\n \"ip\": fake.ipv4(),\n \"ports\": [rand_port(), rand_port()],\n \"banners\": [{\"banner\": \"my first banner\"}],\n \"certificates\": [{\"cert\": \"cert 1.0\"}],\n \"first_scanned\": datetime.now(),\n \"last_scanned\": datetime.now(),\n \"headers\": [{\"header\": \"some header\"}]\n }\n\n\ndef rand_port():\n return fake.pyint(min_value=1, max_value=65535)\n\n\nif __name__ == '__main__':\n client = MongoClient()\n db = client.get_database('shodan')\n hosts = db.get_collection('hosts')\n\n # clean the db\n hosts.delete_many({})\n\n # init my hosts collection with one host\n hosts.create_index(\"ip\", unique=True)\n hosts.insert_one(rand_host())\n\n print('Print the existing document in my collection:\\n')\n init_doc = hosts.find_one()\n pprint(init_doc)\n init_ip = init_doc.get('ip')\n\n print('\\nIP already known:', init_ip)\n\n print('\\nNow I will try to insert 2 new hosts but the first one will have the same IP address than the one already in my collection.')\n\n hosts.bulk_write([\n # this first document will be updated. $setOnInsert won't do anything as it's not an insert.\n UpdateOne({'ip': init_ip},\n {'$setOnInsert': {'first_scanned': datetime.now()},\n '$addToSet': {'ports': {'$each': [rand_port(), rand_port()]},\n 'banners': {'banner': 'my second banner'},\n \"certificates\": {\"cert\": \"cert 2.0\"},\n \"headers\": {\"header\": \"some other header\"}\n },\n '$set': {'last_scanned': datetime.now()}\n }, upsert=True),\n # this ip address doesn't exist in my collection so it's an insert.\n UpdateOne({'ip': fake.ipv4()},\n {'$setOnInsert': {'first_scanned': datetime.now()},\n '$addToSet': {'ports': {'$each': [rand_port(), rand_port()]},\n 'banners': {'banner': 'my second banner'},\n \"certificates\": {\"cert\": \"cert 2.0\"},\n \"headers\": {\"header\": \"some other header\"}\n },\n '$set': {'last_scanned': datetime.now()}\n }, upsert=True)\n ], ordered=False)\n\n print('Final result in my hosts collection:\\n')\n for doc in hosts.find():\n pprint(doc)\n print()\nPrint the existing document in my collection:\n\n{'_id': ObjectId('6006fec1f502dc4efd9da90c'),\n 'banners': [{'banner': 'my first banner'}],\n 'certificates': [{'cert': 'cert 1.0'}],\n 'first_scanned': datetime.datetime(2021, 1, 19, 16, 46, 9, 164000),\n 'headers': [{'header': 'some header'}],\n 'ip': '147.19.133.207',\n 'last_scanned': datetime.datetime(2021, 1, 19, 16, 46, 9, 164000),\n 'ports': [29658, 6283]}\n\nIP already known: 147.19.133.207\n\nNow I will try to insert 2 new hosts but the first one will have the same IP address than the one already in my collection.\nFinal result in my hosts collection:\n\n{'_id': ObjectId('6006fec1f502dc4efd9da90c'),\n 'banners': [{'banner': 'my first banner'}, {'banner': 'my second banner'}],\n 'certificates': [{'cert': 'cert 1.0'}, {'cert': 'cert 2.0'}],\n 'first_scanned': datetime.datetime(2021, 1, 19, 16, 46, 9, 164000),\n 'headers': [{'header': 'some header'}, {'header': 'some other header'}],\n 'ip': '147.19.133.207',\n 'last_scanned': datetime.datetime(2021, 1, 19, 16, 46, 9, 165000),\n 'ports': [29658, 6283, 27037, 11895]}\n\n{'_id': ObjectId('6006fec10ef49ae99654a648'),\n 'banners': [{'banner': 'my second banner'}],\n 'certificates': 
[{'cert': 'cert 2.0'}],\n 'first_scanned': datetime.datetime(2021, 1, 19, 16, 46, 9, 166000),\n 'headers': [{'header': 'some other header'}],\n 'ip': '79.222.50.147',\n 'last_scanned': datetime.datetime(2021, 1, 19, 16, 46, 9, 166000),\n 'ports': [47533, 38525]}\nfirst_scanned$setOnInsert$set$addToSet$pushordered=False", "text": "Then you need to use the updateOne operation, also with an upsert, within a BulkWrite operation.Let me try to illustrate with an example in Python.Console output:As you can see in my 2 final documents, the one which already existed in my collection has been updated correctly with the new ports, certificates and headers. The first_scanned field is untouched as it’s not an insert operation (thanks to the $setOnInsert).The second one is brand new as its IP address was never seen before.If you don’t want to update the document but rather completely replace it, you could use replaceOne maybe? Depends what you want to do exactly.\nYou could also replace some values with $set instead of using the $addToSet or $push, which work with arrays and will just append the new values to the array.I hope it helps and this is what you needed.Cheers,\nMaxime.PS EDIT: I added ordered=False on the Bulk Op as it’s more efficient in your scenario and you won’t stop on the first error you might have.", "username": "MaBeuLux88" }, { "code": "", "text": "@MaBeuLux88 this worked beautifully, thank you so much! Updating has gone from literal days to a few hundred seconds.", "username": "Darrel_Rendell" }, { "code": "", "text": "Wow! Thanks so much for this feedback! It makes my day!From days to seconds, that’s what I call optimization now!Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Updating 100s of Millions of Documents
2021-01-19T10:46:05.973Z
Updating 100s of Millions of Documents
11,433
null
[ "aggregation", "indexes" ]
[ { "code": "{\n _id: // ObjectId\n color: \"green\",\n size: 34,\n category: \"1-1-1\"\n}\ncountDocuments({color: 'green'})countDocuments({category: '1-1-1'})countDocuments({size: 34})countDocuments({color: 'green', category: '1-1-1'})countDocuments({color: 'green', category: '1-1-1', size: 34})", "text": "Let’s say we have a collection with million of documents, all with the following shape:I have the following requirements for counting the documents:countDocuments({color: 'green'})\ncountDocuments({category: '1-1-1'})\ncountDocuments({size: 34})\ncountDocuments({color: 'green', category: '1-1-1'})\ncountDocuments({color: 'green', category: '1-1-1', size: 34})How many indexes do I need to count the docs effectively?Thanks for the help in advance ", "username": "Alex_Bjorlig" }, { "code": "colorcategorysizecolorcolor+categorycolor+category+sizeexplainexplaincountDocumentsdb.collection.aggregate([\n { $match: <query> },\n { $group: { _id: null, n: { $sum: 1 } } }\n])\n", "text": "Hello @Alex_Bjorlig,You will require at least three indexes to support the above five queries. Those would be single field indexes on fields color , category and size respectively. These three indexes can support the five queries you had posted.The second alternative is instead of the single field index on color, you can create a compound index on one (or both) of these - color+category and color+category+size.But, there are couple of things to consider:You will need some test data, create these indexes and try your queries. Then measure their usage and performance - for example, you can use explain to generate the query plans and study them.I don’t think you can run explain on the countDocuments method, but countDocuments - mechanics says that underlying this method is the aggregation query:And, you can run explain on the aggregation query.", "username": "Prasad_Saya" }, { "code": ".explain()explain()", "text": "Hi @Prasad_SayaThanks for providing the detailed answer. I will continue to monitor the queries. Currently I feel a bit limited by the fact that I can not use .explain() in compas on aggregations. And when I use the explain() on aggregations in code, I need to read the raw json manually. But I hope this will change in the near future ", "username": "Alex_Bjorlig" }, { "code": "colorcolor+categorycolor+category+sizecolorcolor+category+sizecolorcolor+category{ color: 1, category: 1, size: 1 }{ category: 1 }{ size: 1 }", "text": "The second alternative is instead of the single field index on color , you can create a compound index on one (or both) of these - color+category and color+category+size .Hi @Prasad_Saya,A compound index on multiple fields can support all the queries that search a prefix subset of those fields, so the most efficient option to cover the three queries including color would be a single compound index on color+category+size.If this compound index exists, you would want to drop any prefix indexes like color or color+category as they would add unnecessary overhead.All five queries would optimally be supported by three indexes:There is some further information in Create Compound Indexes to Support Several Different Queries.Regards,\nStennie", "username": "Stennie_X" } ]
How many indexes are needed for a count query to be effective?
2021-01-20T13:17:59.415Z
How many indexes are needed for a count query to be effective?
8,553
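To make the index advice above concrete, the mongosh sketch below creates the three recommended indexes and then inspects the plan of the aggregation that countDocuments() runs under the hood. The collection name products is an assumption; adapt it to your own schema.

```js
// Hypothetical collection name; the index keys follow the thread's recommendation.
db.products.createIndex({ color: 1, category: 1, size: 1 });
db.products.createIndex({ category: 1 });
db.products.createIndex({ size: 1 });

// countDocuments(filter) is equivalent to this $match + $group aggregation,
// and unlike countDocuments() the aggregation can be explained directly:
db.products.explain("executionStats").aggregate([
  { $match: { color: "green", category: "1-1-1", size: 34 } },
  { $group: { _id: null, n: { $sum: 1 } } },
]);
```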
null
[ "atlas-functions" ]
[ { "code": " const body = EJSON.parse(payload.body.text());\n if(body[i].userID == \"\"){\n var id = BSON.ObjectId(body[i].objectID);\n coll.deleteOne({\"_id\": id});\n }\n", "text": "I have a basic function containing this code:My webhook is receiving an array of objects (This code is inside a for loop, the “i” indexing works fine). If the userID field of the object is null/an empty string, then I want to delete that object. Any suggestions? I feel like this is really simple.", "username": "Joewangatang" }, { "code": "var toDelete = false;\n if( !body[i].userID ){ \n toDelete = true;\n}\nelse if (body[i].userID.toString() == \"\") {\n toDelete = true;\n}\n\n\n If (toDelete){\n...\n}\n", "text": "Hi @Joewangatang,Isn’t your condition needs to be as follows:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trouble Evaluating String in Realm Function
2021-01-21T20:20:10.949Z
Trouble Evaluating String in Realm Function
1,658
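For completeness, the question's loop body and the suggested falsy check can be combined into a single webhook function. This is only a sketch: the service, database, and collection names are placeholders, and error handling is left out.

```js
// Placeholder names: "mongodb-atlas" service, "mydb" database, "items" collection.
exports = async function (payload, response) {
  const coll = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("items");

  const body = EJSON.parse(payload.body.text());

  for (let i = 0; i < body.length; i++) {
    // A falsy check covers undefined, null, and "" in a single test.
    if (!body[i].userID) {
      await coll.deleteOne({ _id: BSON.ObjectId(body[i].objectID) });
    }
  }
};
```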
null
[]
[ { "code": "userspostscommentsgroupsref_ids: [{\n objectId: {\n type: Schema.Types.ObjectId,\n required:true\n },\n ref: {\n type: String,\n enum: ['users', 'posts', 'comments', 'groups'],\n required: true\n }\n}]\n$lookUp[\n {\n 'ref_ids': [\n {\n 'object_id': 'hash',\n 'ref': 'user',\n 'refObject': {\n '_id': 'hash',\n 'name': 'Piyush'\n }\n },\n {\n 'object_id': 'hash',\n 'ref': 'post',\n 'refObject': {\n '_id': 'hash',\n 'text': 'We might look this up'\n }\n }\n ]\n },\n {\n 'ref_ids': [\n {\n 'object_id': 'hash',\n 'ref': 'post',\n 'refObject': {\n '_id': 'hash',\n 'name': 'We might look this up'\n }\n },\n {\n 'object_id': 'hash',\n 'ref': 'comment',\n 'refObject': {\n '_id': 'hash',\n 'text': 'We won\\'tlookthisup'\n }\n }\n ]\n }\n]\n", "text": "Consider I have multiple collections users , posts , comments , groups , I want to refer to all these different types, using a json array containing object Ids and reference.What could be the possible disadvantages of using such a schema?How would you approach querying this using $lookUp , it seems possible?Expected Output should be like below -https://stackoverflow.com/questions/65839495/what-are-the-possible-disadvantages-of-referencing-multiple-collections-using-js", "username": "crossdsection" }, { "code": "{ post_id,\n auther_id,\n post_text,\n comments: [ { user_id, user_name, comment}, ....]\nuserid,\nusername,\ngroups : [ { groupid , groupinfo}]\n", "text": "Hi @crossdsection,Welcome to MongoDB community.It seems that the idea behind your schema is more oriented on a relational schema design where data is normalised and referenced.This design looses the advantages of MongoDB to embeed data within a main document to query it without the need of doing joins aka lookups.Lookups complex code and cause performance overhead.Thinking about a blog like schema I don’t think that comments have justification to be on their own without a post for example.I would think that a post document should embed comments , each one can be a complex object with the commentor details:The user documents can have ids of recent posts.I think that groups should exist in users objects like “tags” as a group is probably only collection of users labled with this group details. Group id can be indexed for faster user find within a group:I recommend reading following blogs:\nhttps://www.mongodb.com/article/mongodb-schema-design-best-practices/https://www.mongodb.com/article/schema-design-anti-pattern-summaryA summary of all the patterns we've looked at in this seriesThanks\nPavel", "username": "Pavel_Duchovny" } ]
Creating an aggregation query possible ?
2021-01-22T10:13:08.130Z
Creating an aggregation query possible ?
2,099
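To illustrate the embedded alternative sketched in the reply above, here is one possible shape for the post and user documents. The field names follow that sketch; the ObjectId values and group details are invented for the example.

```js
// Illustrative document shapes only, not a prescribed schema.
db.posts.insertOne({
  author_id: ObjectId("5f4e3d2c1b0a090807060504"), // hypothetical user _id
  post_text: "We might look this up",
  comments: [
    {
      user_id: ObjectId("5f4e3d2c1b0a090807060505"),
      user_name: "Piyush",
      comment: "Nice post!",
    },
  ],
});

db.users.insertOne({
  _id: ObjectId("5f4e3d2c1b0a090807060504"),
  username: "Piyush",
  groups: [{ groupid: 1, groupinfo: { name: "mongodb-users" } }],
});

// Index group membership so finding all users in a group stays fast.
db.users.createIndex({ "groups.groupid": 1 });
```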
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Any plan to support password-less authentication, sometimes referred to as “link authentication” (like 이메일 링크를 사용하여 자바스크립트에서 Firebase 인증)?", "username": "Marco_Ancona" }, { "code": "", "text": "There are no near-term plans to support it directly in MongoDB Realm, although if you decided to go with a 3rd party service like Auth0 or Firebase Auth, you could still use it with Realm by using Custom JWT Auth", "username": "Sumedha_Mehta1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Any plan for password-less authentication?
2021-01-21T22:26:26.814Z
Any plan for password-less authentication?
2,393
null
[ "java" ]
[ { "code": "", "text": "HelloWhen i run a aggregate method of the driver i get a AggregateIterableImpl\nHow to get the same using runCommand({“aggregate” …})\nOr with any command that returns cursor like “find” etcI want to convert runCommand that return a cursor document ,to a Java driver cursor.\nIts important because without we cant run any command that return cursor.I am looking for how the java driver does it in code,but i haven’t found it.Thank you", "username": "Takis" }, { "code": "", "text": "Hi there,There is no way to do this with the public driver API. Can you explain your use case, and what prevents you from using the existing helper method for aggregate?Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Its useful when someone gives us a MQL command to run\nWithout it runCommand is only for commands that dont return cursorsI personally need it because i am making a query builder that generates\nMQL commands and i want to get the results in a java driver cursorrunCommand(myQueryGenerator()) => java driver cursorI know its possible to use getMore etc,but the point is to give those\nresults to a java user,to use them as he already knows.i am reading the code of the java driver , is it hard to make it?\nany help appreciated", "username": "Takis" }, { "code": "", "text": "There is no way to do it with the current public API of the driver.I’m still not clear from your description why you need to generate the full command document. Your best bet is to generate just the aggregation pipeline and use the driver’s existing aggregate helper.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Thank you for the reply,i will go for the pipeline only,the big part is the pipeline,i was thinking that also now.\nBut in general i believe it would be useful,to have it,commands are the same in all drivers,\nand it would be helpfull to be able to run them all with runCommand", "username": "Takis" } ]
Java runCommand cursor document , to driver cursor object
2021-01-22T00:29:28.079Z
Java runCommand cursor document , to driver cursor object
2,655
null
[ "c-driver" ]
[ { "code": "", "text": "Hi,could somebody point me to a basic tutorial for the libmongoc driver that covers the corresponding commands to CREATE DB, CREATE TABLE and INSERT?I’m in desperate need of practical libmongoc beginners tutorial/examples designed for people still thinking in SQL terms.", "username": "bugblatterbeast" }, { "code": "", "text": "OK, I was able to understand the examples now.Took me a while to realize that databases and collections are created automatically and that I don’t have to specify table fields like I’m used to.", "username": "bugblatterbeast" }, { "code": "", "text": "Have you looked at the libmongoc CRUD tutorial this walks through connecting to a database and then simple CRUD operations.In MongoDB databases / collections are created when a document is inserted into them.I’m in desperate need of practical libmongoc beginners tutorial/examples designed for people still thinking in SQL terms.I would check out the SQL to MongoDB mapping chart on the MongoDB docs page.Edit: I just saw your update…glad you figured it out!", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Oh, thank you very much for your quick response. I will definitely consult those links you’ve mentioned.\nEdit: 1st link seems to be exactly what I was looking for.", "username": "bugblatterbeast" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Looking for practical beginners tutorial for libmongoc
2021-01-21T21:40:00.554Z
Looking for practical beginners tutorial for libmongoc
2,054