image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"node-js",
"compass",
"mongodb-shell"
]
| [
{
"code": "// MongoDB\nconst mongoURI = `mongodb://Admin:${process.env.MONGODB_ADMIN_PW}@localhost:27017/?authMechanism=DEFAULT`;\nconst { MongoClient } = require('mongodb');\nconst mongo = new MongoClient(mongoURI);\n\n// ...\n\n// Database stuff\nconst findLighthouse = async (uname) => {\n try {\n await mongo.connect();\n\n const db = mongo.db('lighthouses');\n const lighthouses = db.collection('lighthouses');\n\n const lighthouse = lighthouses.findOne({ \"user.username\": uname });\n\n return lighthouse;\n } finally {\n await mongo.close();\n }\n}\n\n// ...\n\nserver.get('/lighthouse', async (req, res) => {\n res.render('lighthouse', {\n isLoggedIn: true,\n username: 'DragonOfDojima',\n pageTitle: 'A Lighthouse!',\n lighthouse: await findLighthouse('dragonofdojima')\n })\n})\nawait mongo.connect();4.12.0mongodblocalhost127.0.0.1console.log",
"text": "Hi there, I’ve been trying to deal with this timeout error all day, and I’m not sure what else to do. I have Compass installed as well, and I have no issues in Compass. It’s just the module that has an issue, it seems.Here’s my code that I use:The user does exist in the collection in the database, I’ve tried it with mongosh. Works fine there. Whenever I try to connect to Mongo via my code, though, I always get a timeout error before it can really do anything. The issue seems to be at await mongo.connect();I’m currently using MongoDB Community 6.0.3 for the MongoDB server, and version 4.12.0 of the mongodb NPM package. I have no problem connecting to the server via Compass or mongosh. The service is also running fine. I’ve tried everything from changing localhost to 127.0.0.1 to trying to use IPV6 to setting the server selection timeout higher, nothing works. I’m always arriving at a timeout. The environment variable is also there and correct, I’ve tested with a console.log. Any help would be greatly appreciated!",
"username": "Autumn_Rivers"
},
{
"code": "",
"text": "Share the connection string you use with mongosh and Compass.Share the user profile with mongosh.Most likely some options are different.",
"username": "steevej"
},
{
"code": "",
"text": "Tremendous thanks for your response! I didn’t even realize mongosh was using a different connection string, but it is still odd that the one Compass uses doesn’t work in my code… ah, well. Can’t win em all. Again, thank you!",
"username": "Autumn_Rivers"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Receiving Timeout error with NodeJS and MongoDB Community 6.0.3 on Windows 10 | 2022-11-21T21:15:38.989Z | Receiving Timeout error with NodeJS and MongoDB Community 6.0.3 on Windows 10 | 3,165 |
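A minimal Node.js sketch of the kind of check the thread above converges on — reuse the exact URI that already works in mongosh/Compass, and let server selection fail fast so a bad URI surfaces immediately. The URI options, database, collection, and username below are placeholders rather than the poster's confirmed values.

```js
// Hedged sketch, not the poster's exact fix: reuse the URI that already works
// in mongosh/Compass and surface server-selection failures quickly.
const { MongoClient } = require('mongodb');

// Placeholder URI — substitute the connection string copied from mongosh/Compass.
const uri = `mongodb://Admin:${process.env.MONGODB_ADMIN_PW}@127.0.0.1:27017/?authSource=admin&directConnection=true`;

async function findLighthouse(uname) {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
  try {
    await client.connect();
    // Awaited so a timeout or auth error is thrown here rather than later.
    return await client
      .db('lighthouses')
      .collection('lighthouses')
      .findOne({ 'user.username': uname });
  } finally {
    await client.close();
  }
}

findLighthouse('dragonofdojima').then(console.log).catch(console.error);
```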
null | [
"data-modeling",
"java"
]
| [
{
"code": "",
"text": "The issue is reducing MongoDB thread monitoring time in java .\nI would like to run component test code as stated below. For the component test I would like to make asleep(at most 30 sec not 1 minute) and check if expired documents are removed or not. But it doesnt work. If I run thread.sleep(60000) (1 minute) It works but it is too much delay for a component test. Do you have any solution for this?\n‘’’\nrunCommand(new BsonDocument(“setParameter”, new BsonInt32(1)).append(“ttlMonitorEnabled”, new BsonBoolean(false)));\nrunCommand(new BsonDocument(“setParameter”, new BsonInt32(1)).append(“ttlMonitorSleepSecs”, new BsonInt64(5)));\nrunCommand(new BsonDocument(“setParameter”, new BsonInt32(1)).append(“ttlMonitorEnabled”, new BsonBoolean(true)));\nThread.sleep(15000);\nlong afterCount = getCounter(TrainedDbModel.class);// then\nassertThat(afterCount).isEqualTo(1);\npublic void runCommand(final Bson command) {\nfinal MongoDatabase db = mongoClient.getDatabase(“admin”);\ndb.runCommand(command);\n}\n‘’’",
"username": "seda_tankiz"
},
{
"code": "",
"text": "Hi @seda_tankizI believe this is the same question as in Reduce/chance Mongodb TTL time (60 seconds) in java - #4 by seda_tankiz ?I have provided a reply in that thread, hope it helps.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Reduce MongoDB Thread ttl time | 2022-11-21T10:54:33.104Z | Reduce MongoDB Thread ttl time | 1,266 |
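The thread above drives the test from the Java driver, but the server command is the same from any client; a hedged mongosh sketch of the underlying idea is to lower the TTL monitor interval and then poll with a deadline instead of relying on a fixed Thread.sleep(). The database, collection name, and target count below are hypothetical.

```js
// mongosh sketch (the thread itself uses the Java driver; the server command
// is identical). Database/collection names and the expected count are hypothetical.
db.getSiblingDB('admin').runCommand({ setParameter: 1, ttlMonitorSleepSecs: 5 });

// Poll with a deadline instead of a fixed sleep — TTL deletion timing is not
// guaranteed even with a short monitor interval.
const models = db.getSiblingDB('test').trainedModels;
const deadline = Date.now() + 30000;                 // give up after 30 s
while (models.countDocuments({}) > 1 && Date.now() < deadline) {
  sleep(1000);                                       // mongosh built-in, in ms
}
```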
null | [
"java"
]
| [
{
"code": "",
"text": "Hello, I would like to change mongo db ttl time which is 60 seconds. How can I do it in java?\nThank you in advance",
"username": "seda_tankiz"
},
{
"code": "ttlMonitorSleepSecssetParametermongoddb.adminCommand({setParameter:1, ttlMonitorSleepSecs:60})\nadminsample_mflixadmin",
"text": "Welcome to the MongoDB Community @seda_tankiz !I assume you are referring to how often the Time-To-Live (TTL) Monitor thread checks to see if there are any matching documents to remove, which is every 60 seconds by default.The TTL sleep interval is controlled by a global ttlMonitorSleepSecs parameter that can be changed via the setParameter administrative command or passed as a configuration option when starting mongod.The MongoDB shell command would be:The MongoDB Java driver documentation includes an example of running a command, which you should be able to adapt: Run a Command (Java Driver). Administrative commands must be run in the admin database, so replace sample_mflix with admin in this code example.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you Stennie,\nAs you got the topic it is about MongoDB thread monitoring time.\nI would like to run below code for component test. For the component test I would like to sleep(at most 30 sec not 1 minute) and check if expired documents are removed or not. But it doesnt work. If I run thread.sleep(60000) (1 minute) It works but it is too much delay for a component test. Do you have any solution for this?\n‘’’\nrunCommand(new BsonDocument(“setParameter”, new BsonInt32(1)).append(“ttlMonitorEnabled”, new BsonBoolean(false)));\nrunCommand(new BsonDocument(“setParameter”, new BsonInt32(1)).append(“ttlMonitorSleepSecs”, new BsonInt64(5)));\nrunCommand(new BsonDocument(“setParameter”, new BsonInt32(1)).append(“ttlMonitorEnabled”, new BsonBoolean(true)));\nThread.sleep(15000);\nlong afterCount = getCounter(TrainedDbModel.class);// then\nassertThat(afterCount).isEqualTo(1);\npublic void runCommand(final Bson command) {\nfinal MongoDatabase db = mongoClient.getDatabase(“admin”);\ndb.runCommand(command);\n}\n‘’’",
"username": "seda_tankiz"
},
{
"code": "",
"text": "Hi,\nIs there any solution for my problem explained above?\nThanks,\nSeda",
"username": "seda_tankiz"
},
{
"code": "",
"text": "Hi @seda_tankizEven if you set the TTL thread timing (say, to 30 seconds), there is no guarantee that the documents will be removed exactly 30 seconds later. This is mentioned in https://www.mongodb.com/docs/manual/core/index-ttl/#timing-of-the-delete-operationThe TTL index does not guarantee that expired data will be deleted immediately upon expiration. There may be a delay between the time that a document expires and the time that MongoDB removes the document from the database.The TTL feature is heavily tested in the server codebase, so in my opinion, you don’t have much value in testing this feature yourself. Although we can never prove the absence of bugs, I believe it’s reasonable from your point of view to assume that the TTL index will do its job as expected. Although I would recommend you to keep upgrading to the latest supported versions for bugfixes and improvements Hope this answers your question!Best regards\nKevin",
"username": "kevinadi"
}
]
| Reduce/chance Mongodb TTL time (60 seconds) in java | 2022-11-11T15:21:21.483Z | Reduce/chance Mongodb TTL time (60 seconds) in java | 2,635 |
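A small mongosh sketch of the timing caveat this thread circles around: even a short expireAfterSeconds value is only acted on the next time the TTL monitor wakes up (every ttlMonitorSleepSecs, 60 s by default), so expiry is never exact. Collection and field names here are hypothetical.

```js
// mongosh sketch — a throwaway TTL experiment; names are hypothetical.
const demo = db.getSiblingDB('test').ttlDemo;
demo.createIndex({ createdAt: 1 }, { expireAfterSeconds: 10 });
demo.insertOne({ createdAt: new Date(), note: 'expires soon' });

// The TTL monitor wakes up every ttlMonitorSleepSecs (60 s by default), so
// this document may survive well past its 10 s TTL before it is removed.
```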
null | [
"server"
]
| [
{
"code": "",
"text": "Hi Team,We are using mongo 4.2.21.Starting from 4.2 mongoDB added a calculated writeMajorityCount. The document explains about how its calcualted by mongo automatically and what would happen if majority is not available.But what is missing is,Also is there a way i can change this Parameter. It is needed because we dont want to enable writeMajority .Thanks,\nVenkataraman",
"username": "venkataraman_r"
},
{
"code": "writeMajorityCountdropDatabase()",
"text": "Hi @venkataraman_r welcome back!The main reason for writeMajorityCount is that MongoDB is mainly designed to work as a replica set. To ensure writes won’t be rolled back is to have it propagated to the majority of voting nodes. However with arbiters this is tricky since an arbiter is a voting node with no data. See Implicit Default Write Concern for the formula when arbiters are involved.dropDatabase() is one command that defaults to majority write concern. However I don’t think there’s a list of commands requiring majority write concern. Note that you can specify write concern for db.dropDatabase() but if you don’t it defaults to majority.I assume you’re using a PSA setup? This is expected since if the other data bearing member is down, the arbiter cannot acknowledge the write. Thus the command will just wait for a secondary acknowledgment that doesn’t arrive. The page Mitigate Performance Issues with PSA Replica Set would have more details into how to overcome this.I think the increased CPU is tangent to the write concern issue you’re seeing. However the first port of call is usually ensuring you’re following the production notes for the optimal setup. The blog post Performance Best Practices: Hardware and OS Configuration might also be of interest to you.we dont want to enable writeMajority .Write concern majority becomes the default in MongoDB 5.0, since it provides much greater assurances that your writes won’t be rolled back. However this depends on your use case. I understand that some use case might not need the majority assurance since it’s ok if some data got rolled back and you need high writing speed.However another advantage of write concern majority is that it provides a measure of control for data going into the replica set. Without this, it’s easy to overwhelm the servers with writes that it cannot replicate fast enough, leading to issues such as a secondary falling off the oplog.Having said that, if you understand all the risks for disabling write concern majority, for most commands, you can specify a write concern setting to deviate from the default.Hope this is useful!Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "enableMajorityReadConcern",
"text": "Thanks Kevin,Thank you for your reply.I assume you’re using a PSA setup? This is expected since if the other data bearing member is down, the arbiter cannot acknowledge the write. Thus the command will just wait for a secondary acknowledgment that doesn’t arrive. The page Mitigate Performance Issues with PSA Replica Set would have more details into how to overcome this.#2. Yes, we are using PSA 4 data bearing member and 1 ARB. We carefully reviewed the performance and decided to disable enableMajorityReadConcern and w:1 default . We dont want durability but higher throughput.\nBut the problem I notice from 4.2 is that, mongo provides the rs.conf().getLastErrorDefaults to set the write majority and timeout. Then we expect mongo to honor that value. Instead mongo does create a computation and creates this writeMajorityCount. what do you think about this discrepancy?#3 Yes we follow the production checklist. But my question is will REPL suffer or contribute to high CPU.\nDoes REPL wait untill the oplog is written to the majority of the members due to this new writemajoritycount ?Having said that, if you understand all the risks for disabling write concern majority, for most commands, you can specify a write concern setting to deviate from the default.As I said, starting from mongo 4.2, there is no server option to disable the write concern majority. We set the following but still mongo calculates as 3.\n“getLastErrorDefaults” : {\n“w” : 1,\n“wtimeout” : 0\n},",
"username": "venkataraman_r"
},
{
"code": "dropDatabase()dropDatabase()w:1mongoshwriteMajorityCount",
"text": "what do you think about this discrepancy?I believe those defaults are for mainly write operations, and dropDatabase() is not really a normal write operation. I think those are treated slightly differently, and you can always set the write concern setting if you want to ensure that dropDatabase() is called with w:1.However if this is not what you think should happen, please provide a feedback into the MongoDB Feedback Engine so this can be explored by the product team.Does REPL wait untill the oplog is written to the majority of the membersPlease correct me if I’m wrong here, but what I understand as REPL is Read-Eval-Print-Loop for interactive discovery, a feature in most scripted languages like Python or Node. Do you mean doing CRUD operations on mongosh or similar by this? This depends on the write concern setting for that particular write. If you set it to w:1 it should not wait for replication. If you set it to w:majority then it will wait until the majority have replicated the write in their journal.As I said, starting from mongo 4.2, there is no server option to disable the write concern majority.Actually the default write concern was changed to majority in MongoDB 5.0, so 4.2 and 4.4 still defaults to w:1Also in MongoDB 4.4 there is a new command setDefaultRWConcern to set this cluster-wide.We set the following but still mongo calculates as 3.If you mean the value of writeMajorityCount, I believe that’s only informational and does not reflect the actual write concern setting. In your case, it is informing you that for a majority write to be acknowledged, you need at least 3 nodes.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "mongosh",
"text": "Please correct me if I’m wrong here, but what I understand as REPL is Read-Eval-Print-Loop for interactive discovery, a feature in most scripted languages like Python or Node. Do you mean doing CRUD operations on mongosh or similar by this? This depends on the write concern setting for that particular write. If you set it to w:1 it should not wait for replication. If you set it to w:majority then it will wait until the majority have replicated the write in their journal.No REPL, I meant Replication.",
"username": "venkataraman_r"
},
{
"code": "",
"text": "Hi Kevin,As I told you, we never send any queries with w:Majority from our clients. But, you can see the following log , operation done on system.sessions by mongo is doing w:majority and taking long time. So it looks like the w:majority is done at different places in mongo as well which is impacting performance time to time. Please check the r/w acquire count.This is from Primary\n/var/log/mongodb-27951.log:2022-11-16T04:34:27.802+0000 I COMMAND [conn8648] command config.$cmd command: update { update: “system.sessions”, ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: “majority”, wtimeout: 15000 }, $db: “config” } numYields:0 reslen:3160 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 30 } }, ReplicationStateTransition: { acquireCount: { w: 30 } }, Global: { acquireCount: { w: 30 } }, Database: { acquireCount: { w: 30 } }, Collection: { acquireCount: { w: 30 } }, Mutex: { acquireCount: { r: 60 } } } flowControl:{ acquireCount: 30, timeAcquiringMicros: 7 } storage:{} protocol:op_msg 679msand sedondary of a different replica-set as well./var/log/mongodb-27958.log:2022-11-16T04:34:27.547+0000 I COMMAND [conn8893] command config.$cmd command: update { update: “system.sessions”, ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: “majority”, wtimeout: 15000 }, $db: “config” } numYields:0 reslen:1899 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 17 } }, ReplicationStateTransition: { acquireCount: { w: 17 } }, Global: { acquireCount: { w: 17 } }, Database: { acquireCount: { w: 17 } }, Collection: { acquireCount: { w: 17 } }, Mutex: { acquireCount: { r: 34 } } } flowControl:{ acquireCount: 17, timeAcquiringMicros: 2 } storage:{} protocol:op_msg 1612ms",
"username": "venkataraman_r"
},
{
"code": "config.system.sessions",
"text": "Let’s go back a little:And let me go back to your earlier question:Will REPL has any issue if the majority is not available.REPL being replication, then yes. Majority not available will hinder replication. You need a primary to be able to write. Using arbiters have known performance impact when the replica set is in a degraded state.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "config.system.sessions",
"text": "After We migrated from 4.0 to 4.2, we see the response time is higher and time to time we see queries taking more than 500ms which result in performance degradation in our application. Note that, there is no change in the client side, we use 3.12.9 java sync driver. and no changes in the DBSchema. we are seeing IDHACK queries also taking more time. we tried to set the FCV to 4.0 and gives some what better but still not equal to 4.0 performance.Not saying its the only culprit. Since we are analysing what are the slow queries this also getting logged. when we reviewed the 4.2 changes, the writeMajorityCount also one of the addition. So checking if these w:majority query can impact the performance issue we are experiencingWill provide by tomorrow.",
"username": "venkataraman_r"
},
{
"code": "2022-11-19T08:13:11.923+0000 D2 COMMAND [conn424] run command config.$cmd { update: \"system.sessions\", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" }\n2022-11-19T08:13:11.970+0000 D2 REPL [conn424] Waiting for write concern. OpTime: { ts: Timestamp(1668845591, 1583), t: 48 }, write concern: { w: \"majority\", wtimeout: 15000 }\n2022-11-19T08:13:11.974+0000 I COMMAND [conn424] command config.$cmd command: update { update: \"system.sessions\", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" } numYields:0 reslen:7137 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 183 } }, ReplicationStateTransition: { acquireCount: { w: 366 } }, Global: { acquireCount: { r: 183, w: 183 } }, Database: { acquireCount: { w: 183 } }, Collection: { acquireCount: { w: 183 } }, Mutex: { acquireCount: { r: 366 } } } flowControl:{ acquireCount: 183, timeAcquiringMicros: 65 } storage:{} protocol:op_msg 51ms\n2022-11-19T08:13:11.976+0000 D2 COMMAND [conn424] run command config.$cmd { delete: \"system.sessions\", ordered: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" }\n2022-11-19T08:13:11.988+0000 D2 REPL [conn424] Waiting for write concern. OpTime: { ts: Timestamp(1668845591, 1588), t: 48 }, write concern: { w: \"majority\", wtimeout: 15000 }\n2022-11-19T08:13:11.989+0000 I COMMAND [conn424] command config.$cmd command: delete { delete: \"system.sessions\", ordered: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" } numYields:0 reslen:230 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 244 } }, ReplicationStateTransition: { acquireCount: { w: 488 } }, Global: { acquireCount: { r: 244, w: 244 } }, Database: { acquireCount: { w: 244 } }, Collection: { acquireCount: { w: 244 } }, Mutex: { acquireCount: { r: 246 } } } flowControl:{ acquireCount: 244, timeAcquiringMicros: 57 } storage:{} protocol:op_msg 12ms\n2022-11-19T08:13:45.030+0000 D2 COMMAND [LogicalSessionCacheRefresh] run command config.$cmd { update: \"system.sessions\", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" }\n2022-11-19T08:13:45.088+0000 D2 REPL **[LogicalSessionCacheRefresh] Waiting for write concern. OpTime: { ts: Timestamp(1668845625, 368), t: 48 }, write concern: { w: \"majority\", wtimeout: 15000 }**\n2022-11-19T08:13:45.094+0000 I COMMAND [LogicalSessionCacheRefresh] command config.$cmd command: update { update: \"system.sessions\", ordered: false, allowImplicitCollectionCreation: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" } numYields:0 reslen:3160 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 226 } }, ReplicationStateTransition: { acquireCount: { w: 452 } }, Global: { acquireCount: { r: 226, w: 226 } }, Database: { acquireCount: { w: 226 } }, Collection: { acquireCount: { w: 226 } }, Mutex: { acquireCount: { r: 452 } } } flowControl:{ acquireCount: 226, timeAcquiringMicros: 69 } storage:{} protocol:op_msg 63ms\n2022-11-19T08:13:45.094+0000 D2 COMMAND [LogicalSessionCacheRefresh] run command config.$cmd { delete: \"system.sessions\", ordered: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" }\n2022-11-19T08:13:45.127+0000 D2 REPL [LogicalSessionCacheRefresh] **Waiting for write concern. 
OpTime: { ts: Timestamp(1668845625, 457), t: 48 }, write concern: { w: \"majority\", wtimeout: 15000** }\n2022-11-19T08:13:45.130+0000 I COMMAND [LogicalSessionCacheRefresh] command config.$cmd command: delete { delete: \"system.sessions\", ordered: false, writeConcern: { w: \"majority\", wtimeout: 15000 }, $db: \"config\" } numYields:0 reslen:230 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 387 } }, ReplicationStateTransition: { acquireCount: { w: 774 } }, Global: { acquireCount: { r: 387, w: 387 } }, Database: { acquireCount: { w: 387 } }, Collection: { acquireCount: { w: 387 } }, Mutex: { acquireCount: { r: 459 } } } flowControl:{ acquireCount: 613, timeAcquiringMicros: 152 } storage:{} protocol:op_msg 35ms```\n",
"text": "Hi Kevin,I enabled the loglevel and found almost every 5 mins there is a response time increase and which makes some of our queries to become timedout. Looks like starting from 4.2, mongo Added MajorityService and in every 5mins LogicalSessionRefresh its doing a w:majority with timeout of 15000. During the same time the response (in mongotop) goes very high (from 200ms to 8s).disabling the disableLogicalSessionCacheRefresh doesnt have any improvement.Having the same clients but only mongoDB downgraded to 4.0.27 from 4.2 is giving the expected performance. So for sure, there is something in 4.2 which is impacting the performance. We tried changing the FCV to 4.0 on a 4.2 mongo but didnt provide any better results.",
"username": "venkataraman_r"
},
{
"code": "",
"text": "Hi @venkataraman_rI think the log snippet you posted is a symptom rather than a cause. I believe those are internal commands for server sessions and I don’t think they are the main cause of the slowdowns.During the same time the response (in mongotop) goes very high (from 200ms to 8s).Could you post the output of mongostat & mongotop and some logs during this period? If you can provide the output of mongostat & mongotop on the 4.0 and 4.2 deployments, we might be able to see what’s the difference between them.Having the same clients but only mongoDB downgraded to 4.0.27 from 4.2 is giving the expected performance.Just to make sure we’re on the same page, you’re doing the upgrades & downgrades on the same piece of hardware, or are you doing something like a blue/green deployment where the 4.0 deployment is in one hardware and the 4.2 deployment in another hardware?Best regards\nKevin",
"username": "kevinadi"
}
]
| Mongo document about writeMajorityCount is not very clear | 2022-11-07T07:31:38.241Z | Mongo document about writeMajorityCount is not very clear | 2,150 |
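A hedged mongosh sketch of the setDefaultRWConcern command Kevin mentions (it exists from MongoDB 4.4 onward, so it would not apply to the poster's 4.2 cluster), together with a per-operation override; the database and collection names are placeholders.

```js
// mongosh sketch — MongoDB 4.4+ only. Run against the admin database.
db.adminCommand({
  setDefaultRWConcern: 1,
  defaultWriteConcern: { w: 1 },   // opt out of the majority default cluster-wide
});

// Individual operations can still override whatever the default is:
db.getSiblingDB('app').events.insertOne(
  { ts: new Date() },
  { writeConcern: { w: 'majority', wtimeout: 5000 } }
);
```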
[
"queries",
"php"
]
| [
{
"code": "{\n\"_id\":\n{\n\"$oid\":\"637a7e16e490c4baadd9437f\"\n},\n\"civilid\":\"12769219\",\n\"Blocks\":\n{\n\"BaseBlock\":\n{\n\"nonce\":{\"$numberInt\":\"0\"},\n\"index\":{\"$numberInt\":\"0\"},\n\"timestamp\":\"11-20-2022 22:39:30.082200\",\n\"candidate_id\":\"0\",\"previousHash\":\"0\",\n\"hash\":\"d19f8d1d4bba2c22a82c1f3691f364fac86e992075bbd6a6ad239188d7a242e7\"\n},\n\"FirstBlock\":\n{\n\"nonce\":{\"$numberInt\":\"107675\"},\n\"index\":{\"$numberInt\":\"1\"},\n\"timestamp\":\"11-20-2022 22:39:30.082300\",\n\"candidate_id\":\"103\",\n\"previousHash\":\"d19f8d1d4bba2c22a82c1f3691f364fac86e992075bbd6a6ad239188d7a242e7\",\n\"hash\":\"0000ac30bf8b97a9525d64a10a990ccf2b1fe4ee0c7c91fe665e572f0a39e11f\"},\n\"SeconedBlock\":\n{\n\"nonce\":{\"$numberInt\":\"124478\"},\n\"index\":{\"$numberInt\":\"2\"},\n\"timestamp\":\"11-20-2022 22:39:30.082300\",\n\"candidate_id\":\"103\",\n\"previousHash\":\"0000ac30bf8b97a9525d64a10a990ccf2b1fe4ee0c7c91fe665e572f0a39e11f\",\n\"hash\":\"0000a709f4faf36466cbd37ec34e09b64db309e306d238b586d4a6ae669f28da\"\n}}}\n",
"text": "trying to get the value of candidate_id and count how many votes has been voted for a candidate\nhow do I access the candidate_id in php and count it\nMongoDB Structurethe display area in votes in intI tried searching the web and reading the documents with no luck as I am new in MongoDB and PHP",
"username": "Firas_Albusaidy"
},
{
"code": "",
"text": "Hello @Firas_Albusaidy and Welcome to the MongoDB community forums!Could you tell us more about your data?\nI was curious to know why ‘Blocks’ was not an array of objects? Also, how many blocks do you think that object would contain? (there’s a 16MB size limitation for MongoDB documents. It’s quite large, but not infinite)",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "Hi Hubert,\nit was an array but I thought I couldn’t access it unless it is an object ,no more blocks will be added\nthis is just for demonstrating my graduation projectI will update the code of Blocks to be an array on top",
"username": "Firas_Albusaidy"
},
{
"code": "",
"text": "I see! Thank you.\nIf you use an array, you’ll be able to load the array (or the whole document) from MongoDB to PHP and loop over all the PHP objects in the array as usual.That said, I would encourage you to learn about our MongoDB Aggregation Pipeline that could also be used to to perform computations like the one you seek. When data grows large enough, it’s not always convenient (or possible) to compute values on the client.It’s a more advanced topic, but we have great tutorials and free courses to learn about this. They will teach you the technical foundations in the relevant order to learn efficiently. I have gone through a similar training and loved it!Learn about the MongoDB Aggregation Pipeline, and how each stage transforms the document as it goes through the pipeline.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "Thank you for that, I would lovely learn more about it",
"username": "Firas_Albusaidy"
},
{
"code": "$cursor4 = $collection3->aggregate([['$project' => ['_id' => 0, 'Blocks.FirstBlock.candidate_id' => 1]]]);\n\n$result = $cursor4->toArray();\n\nforeach ($result as $row)\n\n{\n\n$canid[] = $row[\"candidate_id\"];\n\n}\n",
"text": "So I just finished the Course and understood how to use aggregate but when i use the aggregation and use var_dump it just shows me thishow do I save the candidate_id value to a array or variable ?I tried using foreach loop like thisthe aggregation code and for loop",
"username": "Firas_Albusaidy"
},
{
"code": "// find a specific object\n\n $filter = [ '_id'=> new \\MongoDB\\BSON\\ObjectId(\"637a7e16e490c4baadd9437f\") ];\n $cursor = $collection->find($filter);\n\n foreach($cursor as $document)\n {\n // accessing properties of the MongoDB document. first element directly\n $candidate_id = $document->Blocks[0]->candidate_id;\n\n // via a lopp\n foreach( $document->Blocks as $block ) {\n $candidate_id = $block->candidate_id;\n }\n\n // convert MongoDB document to PHP Object via JSON conversion\n $phpobject = json_decode( json_encode( $document ) );\n\n foreach($phpobject->Blocks as $block)\n {\n $candidate_id = $block->candidate_id;\n }\n }\njson_decode( json_encode( $document ) );",
"text": "the results you’re getting are MongoDB document objects that are based on BSON. BSON is an important concept to learn, especially if your data is strongly typed. However, it looks visually complex when using var_dump() as it encodes a lot of information about data type.Individual properties can be accessed as follow. I’m assuming that Blocks is an array, as previously discussed.using json_decode( json_encode( $document ) ); you can convert the document to a PHP object via JSON translation, but it’s important to note that some internal typing information will be lost.The PHP object is easier to look at in a debugger or with var_dump, but as you can see, how to access the properties is similar to how you would do it with the MongoDB document",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Accessing object value from nested objects in mongodb with php | 2022-11-21T16:13:30.894Z | Accessing object value from nested objects in mongodb with php | 4,277 |
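For reference, a hedged mongosh version of the vote count this thread is after, matching the posted document shape (Blocks as an object whose FirstBlock carries the chosen candidate_id, one document per voter). The database and collection names are placeholders, and the PHP driver accepts the same pipeline expressed as nested arrays.

```js
// mongosh sketch matching the posted document shape; 'voting' and 'chains'
// are placeholder names, not the poster's actual database/collection.
db.getSiblingDB('voting').chains.aggregate([
  // One document per voter; the chosen candidate sits in Blocks.FirstBlock.
  { $group: { _id: '$Blocks.FirstBlock.candidate_id', votes: { $sum: 1 } } },
  { $sort: { votes: -1 } },
]);
```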
null | [
"student-developer-pack"
]
| [
{
"code": "",
"text": "Hello,I am receiving this error when trying to log in as my github account when trying to access MongoDB University: “We were unable to log you in with that login method. Log in with the current social provider linked to your account, either Google or GitHub. Otherwise, enter your email address and click “Forgot Password” to reset your password and unlink from the social provider.”I also already set my email address to be public on my github account.",
"username": "David_Jetsupphasuk"
},
{
"code": "",
"text": "Hi David,Welcome to the forums! I’m sorry you’re experiencing this issue. I’m going to reach out to you via DM to try to resolve this.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "In case anyone else encounters this issue, it was resolved by following the password reset instructions.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Cannot access MongoDB University with Github student developer pack | 2022-11-18T04:03:08.481Z | Cannot access MongoDB University with Github student developer pack | 2,513 |
null | [
"aggregation",
"views",
"data-api"
]
| [
{
"code": "",
"text": "Good Day,I’m attempting to run a $merge within an aggregation using the Data API. The solution that was provided on this thread is very high-level → Data API now fails with A pipeline with this stage must be run as a System UserCould someone please break this down and share the step by step process? There is no documentation that has been provided to assist with this.Kind Regards",
"username": "Subscriptions_Riskbloq"
},
{
"code": "",
"text": "I have the same issue, any help would be appreciated. Also, the solution in linked thread feels more like a workaround. It’s been a couple of months since it was posted, maybe there is a better way to do this now?",
"username": "pjak"
}
]
| Data API aggregation $merge not working anymore | 2022-11-15T20:25:05.388Z | Data API aggregation $merge not working anymore | 1,886 |
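The workaround referenced in the linked thread is to move the $merge pipeline out of the Data API and into an App Services function (or custom HTTPS endpoint) configured to run as System. A hedged sketch is below; the linked data source name, database, collection, and pipeline are placeholders, not values from the thread.

```js
// App Services function sketch — set the function (or custom HTTPS endpoint)
// to "Run As System" in the App Services UI. Names below are placeholders.
exports = async function () {
  const events = context.services
    .get('mongodb-atlas')              // name of the linked data source
    .db('reports')
    .collection('raw_events');

  // Pipelines containing $merge/$out must run as system user, hence the wrapper.
  return events
    .aggregate([
      { $group: { _id: '$type', total: { $sum: 1 } } },
      { $merge: { into: 'event_totals', whenMatched: 'replace' } },
    ])
    .toArray();
};
```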
null | [
"python",
"atlas-cluster",
"spark-connector"
]
| [
{
"code": "mongo_conn = \"mongodb+srv://<username>:<password>@cluster0.afic7p0.mongodb.net/?retryWrites=true&w=majority\"\nconf = SparkConf()\n# Download mongo-spark-connector and its dependencies.\nconf.set(\"spark.jars.packages\",\"org.mongodb.spark:mongo-spark-connector:10.0.5\")\nconf.set(\"spark.jars.packages\",\"org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.1\")\n # Set up read connection :\nconf.set(\"spark.mongodb.read.connection.uri\", mongo_conn)\nconf.set(\"spark.mongodb.read.database\", \"mySecondDataBase\")\nconf.set(\"spark.mongodb.read.collection\", \"TwitterStreamv2\")\n # Set up write connection\nconf.set(\"spark.mongodb.write.connection.uri\", mongo_conn)\nconf.set(\"spark.mongodb.write.database\", \"mySecondDataBase\")\nconf.set(\"spark.mongodb.write.collection\", \"TwitterStreamv2\")\nSparkContext.getOrCreate(conf=conf)\ndf = spark \\\n .readStream \\\n .format(\"kafka\") \\\n .option(\"kafka.bootstrap.servers\", \"localhost:9092\") \\\n .option(\"startingOffsets\", \"earliest\") \\\n .option(\"kafka.group.id\", \"group1\") \\\n .option(\"subscribe\", \"twitter\") \\\n .load()\ndef write_row(batch_df , batch_id):\n batch_df.write.format(\"mongodb\").mode(\"append\").save()\n pass\n\nsentiment_tweets.writeStream.foreachBatch(write_row).start().awaitTermination()\nERROR:py4j.clientserver:There was an exception while executing the Python Proxy on the Python Side.\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.7/dist-packages/py4j/clientserver.py\", line 617, in _call_proxy\n return_value = getattr(self.pool[obj_id], method)(*params)\n File \"/usr/local/lib/python3.7/dist-packages/pyspark/sql/utils.py\", line 272, in call\n raise e\n File \"/usr/local/lib/python3.7/dist-packages/pyspark/sql/utils.py\", line 269, in call\n self.func(DataFrame(jdf, self.session), batch_id)\n File \"<ipython-input-34-a3fa83af6c03>\", line 2, in write_row\n batch_df.write.format(\"mongodb\").mode(\"append\").save()\n File \"/usr/local/lib/python3.7/dist-packages/pyspark/sql/readwriter.py\", line 966, in save\n self._jwrite.save()\n File \"/usr/local/lib/python3.7/dist-packages/py4j/java_gateway.py\", line 1322, in __call__\n answer, self.gateway_client, self.target_id, self.name)\n File \"/usr/local/lib/python3.7/dist-packages/pyspark/sql/utils.py\", line 190, in deco\n return f(*a, **kw)\n File \"/usr/local/lib/python3.7/dist-packages/py4j/protocol.py\", line 328, in get_return_value\n format(target_id, \".\", name), value)\npy4j.protocol.Py4JJavaError: An error occurred while calling o159.save.\n: java.lang.ClassNotFoundException: \nFailed to find data source: mongodb. 
Please find packages at\nhttps://spark.apache.org/third-party-projects.html\n \n\tat org.apache.spark.sql.errors.QueryExecutionErrors$.failedToFindDataSourceError(QueryExecutionErrors.scala:587)\n\tat org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:675)\n\tat org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:725)\n\tat org.apache.spark.sql.DataFrameWriter.lookupV2Provider(DataFrameWriter.scala:864)\n\tat org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:256)\n\tat org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:247)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.base/java.lang.reflect.Method.invoke(Method.java:566)\n",
"text": "Hi,I am working on a project where I have the following data pipeline:Twitter → Tweepy API (Stream) → Kafka → Spark (Real-Time Sentiment Analysis) → MongoDB → TableauI was able to get tweets stream using Tweepy into Kafka Producer and from Producer into Kafka Consumer. I then used data stream in Kafka Consumer as the data source, I created a “Streaming Data Frame” in Spark (PySpark) , performed real-time pre-processing & sentiment analysis, the resultant “Streaming Data Frame” needs to go into MongoDB, this is where the problem lies.I am able to write “static” PySpark Data Frame into MongoDB, but not the streaming Data Frame.Details are below:Reading Kafka Data Frame (Streaming)Skipping Pre-Processing & Sentiment Analysis CodeWriting Data Stream to MongoDBWhere sentiment_tweets is the resultant Streaming Data Frame. The code above doesn’t work.",
"username": "Ayesha_Nasim"
},
{
"code": "conf.set(\"spark.jars.packages\",\"org.mongodb.spark:mongo-spark-connector:10.0.5,org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.1\")",
"text": "Resolved. I was overwriting mongodb configuration with that of kafka.\nBelow is the correct format:conf.set(\"spark.jars.packages\",\"org.mongodb.spark:mongo-spark-connector:10.0.5,org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.1\")",
"username": "Ayesha_Nasim"
}
]
| Streaming Data From Spark to MongoDB | 2022-11-21T13:11:03.858Z | Streaming Data From Spark to MongoDB | 2,621 |
null | [
"sharding",
"migration"
]
| [
{
"code": "",
"text": "Our Current Instances:MongoDB 3.6MongoDB 4.0We ould like to Migrate Mongo database from current platform (MongoDB 3.6) to new platform MongoDB 5.0 (EOL Oct 2024) or 6.0 (EOL July 2025) .What is the exact steps to migrate our database from existing hardware into new hardware with the latest version?Thanks,\nArun.",
"username": "BHARATHARUN_RAMASAMY"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @BHARATHARUN_RAMASAMY !I strongly recommend not doing multiple major changes concurrently (for example: upgrading major release versions and migrating to new hardware). If you encounter any unexpected issues or performance problems, it will be more challenging to figure out which change might have introduced the issue.if you want to upgrade your sharded clusters while maintaining availability, you will have to plan upgrades between successive major releases: 3.6 => 4.0, 4.0 => 4.2, 4.2 => 4.4, 4.4 => 5.0, 5.0 => 6.0. The most straightforward upgrade path would be using automation tooling such as MongoDB Ops Manager or MongoDB Cloud Manager.The relevant documentation to review is:Release Notes and upgrade procedures for the relevant versions of MongoDB server in your upgrade plan.Migrate a Sharded Cluster to Different HardwareRegards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,Thanks a lot for sharing your suggestions. We will take this into our considerations and we will slowly upgrade into the latest versions in new UAT environment first and perform the full functional testing between each upgrade.Secondly, We are planning to move the whole sharded cluster to completely newer hardware in different location. In this case, How we can migrate the whole setup from one environment to another? How long will it take roughly for 5 databases each 1 TB in size?And, Is there any option to migrate without any downtime or with minimal down time?Thanks,\nArun.",
"username": "BHARATHARUN_RAMASAMY"
},
{
"code": "",
"text": "How long will it take roughly for 5 databases each 1 TB in size?It’s completely impossible to tell because it depends on the hardware and the distance (latency) between the 2 locations.If you want to migrate with zero downtime, it’s possible but you’ll have extra steps. I would take a full backup and restore it in the new location on a new node, then add this node in the replica set (repeat for each RS). Continue to add more nodes in your config until you can start removing nodes in the old location. Check that all the nodes are up-to-date.You can play with priority settings to make sure the primary stays on the desired side until you are ready to “move”.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi Maxime,Is it possible to migrate directly from 3.6 to the latest 6.0 with downtime?Thanks,\nArun.",
"username": "BHARATHARUN_RAMASAMY"
},
{
"code": "",
"text": "The latest version can only support from 6.0 to 4.2.\nMaybe you can use the mongoexport v3.6 and then import with mongoexport 100.5.4 (versioning changed, tools are now independent from the server releases). But there is no guarantee that this is going to work. I would give it a try on a couple of small collections though.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for sharing this info! It’s really helpful to me\nIf we upgrade into 4.2 then, can we migrate to 6.0 on new hardware from 4.2 to 6.0? If yes, how we can achieve this? Could you please recommend the best approach?",
"username": "BHARATHARUN_RAMASAMY"
},
{
"code": "",
"text": "Hi @BHARATHARUN_RAMASAMY,Sorry for the delayed answer, I was on extended paternity leave and kinda busy as you can imagine!\nI think @Stennie_X already covered your question about upgrading in this answer here in this topic but please feel free to update us with your new status now and let me know if your migration was successful or not.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| How to migrate MongoDB Shared Cluster to another Hardware | 2022-07-26T02:45:22.308Z | How to migrate MongoDB Shared Cluster to another Hardware | 4,657 |
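A hedged mongosh sketch of the priority approach Maxime describes — keep the primary on the old hardware while the new-site members catch up, then shift the preference. The member indexes and priority values below are placeholders that would need to match the actual replica set configuration.

```js
// mongosh sketch — member indexes and priorities are placeholders.
let cfg = rs.conf();
cfg.members[3].priority = 0.5;   // newly added node in the new location
cfg.members[4].priority = 0.5;   // keep new-site members unlikely to become primary yet
rs.reconfig(cfg);

// Once the new members are SECONDARY and fully caught up, flip the preference:
cfg = rs.conf();
cfg.members[0].priority = 0.5;   // old-location member steps back
cfg.members[3].priority = 2;     // preferred primary on the new hardware
rs.reconfig(cfg);
```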
[
"swift"
]
| [
{
"code": "final class TaskHelper: ObservableObject\n{\n private var taskToken: NotificationToken!\n var taskResults: Results<Tasks>\n \n init(filters: [String], quadrant: String, sort:Int, autoAdvance: Bool, focus: Int)\n {\n \n let realm = try! Realm()\n \n if filters == [NSLocalizedString(\"_none\", comment: \"\")] || filters.isEmpty || filters == [] || filters == [\"\"]\n {\n taskResults = realm.objects(Tasks.self)\n .sorted(by: sort.getSortDescriptor())\n .getfocus(quadrant: quadrant, focus: focus, autoAdvance: autoAdvance)\n }\n else\n {\n taskResults = realm.objects(Tasks.self)\n .sorted(by: sort.getSortDescriptor())\n .getfocus(quadrant: quadrant, focus: focus, autoAdvance: autoAdvance)\n .filter(NSPredicate(format: \"filter IN %@\", filters as CVarArg))\n }\n \n \n \n lateInit()\n }\n \n func lateInit(){\n taskToken = taskResults.observe { [weak self] _ in\n self?.objectWillChange.send()\n }\n }\n \n deinit {\n taskToken.invalidate()\n }\n \n var taskArray: [Tasks]\n {\n taskResults.map(Tasks.init)\n }\n \n}\nScrollView{\n LazyVStack{\n ForEach(taskHelper.taskResults)\n { task in\n TaskRow(task: task, quadrant: quadrant)\n }\n }\n }\n",
"text": "Hi there,\nI am working on an ios todo list that can be sorted and filtered in various way. I created an observableobject that can be intialized with different variables so that as the variables change, the results are updated and passed on to a list. However when the variables change and the result is updated, it maxes out the phone cpu and make the whole application hang. Any Advice?\nScreenshot 2022-11-14 at 1.46.33 PM1242×972 164 KB\nHere is my code:",
"username": "Deji_Apps"
},
{
"code": ".sorted(by: sort.getSortDescriptor()).sorted(byKeyPath: \"name\", ascending: false).getfocus(quadrant: quadrant, focus: focus, autoAdvance: autoAdvance)",
"text": "If your dataset is large, one of the things that can affect negatively affect performance is to use high level Swift functions on Realm collections.Realm Results are Live Objects which are lazily-loaded - meaning that huge datasets will have very low memory impact. However, as soon as Swift high level functions are used, that lazy-loading stops and ALL of the data is loaded into memory which can overwhelm it, as in this case.For example, this function loads everything up into memory.sorted(by: sort.getSortDescriptor())whereas the Realm sorted(by: function.sorted(byKeyPath: \"name\", ascending: false)is memory friendly and maintains the lazy-loading nature of Realm.Then this is a quandary.getfocus(quadrant: quadrant, focus: focus, autoAdvance: autoAdvance)as it’s not a Realm function at all as far as I know.What is that? What happens when it’s removed and the same or similar actions are performed?",
"username": "Jay"
},
{
"code": "",
"text": "Thanks for your response Jay.\nThe problem is that there are various ways the list can be sorted. Is there any way to let user change the sort keypath? I used the sort.getSortDescriptor() to provide the keypath as chosen by the user. And the getFocus is an extension on RealmResults to filter the list based on user input e.g quadrant. I comment it out as you suggest, but my app was just as slow unfortunately.",
"username": "Deji_Apps"
},
{
"code": "let results = realm.objects(PersonClass.self).sorted(byKeyPath: \"name\").sorted(byKeyPath:.sorted(by:.getfocus(quadrant: quadrant,taskToken = taskResults.observe { [weak self] _ in\n self?.objectWillChange.send()\n}\nself?.objectWillChange.send()final class TaskHelper: ObservableObject\n{\n var taskResults: Results<Tasks>\n \n init(filters: [String], quadrant: String, sort:Int, autoAdvance: Bool, focus: Int) {\n let realm = try! Realm()\n\n Task {\n let startTime = Date()\n let results = try await realm.objects(Tasks.self)\n let elapsed = Date().timeIntervalSince(startTime)\n print(\"Load took: \\(elapsed * 1000) ms\")\n }\n } \n}\ntry await realm.objects(Tasks.self).sorted(byKeyPath: \"some path\")",
"text": "@Deji_Apps Realm sorting is flexible so yes, you can sort by any keypath(s) of the object.You may be doing that now but as is, it’s invoking a high level Swift sort which may be part of the issue.As an proof-of-concept, we have a Realm with 100,000 Person objects. Each Person Object has a name, address etc - pretty straight forward. If I call this functionlet results = realm.objects(PersonClass.self).sorted(byKeyPath: \"name\")The results return in about a half-second. Noting we are using the Realm sort .sorted(byKeyPath:, not the Swift sort .sorted(by:.Note that when posting questions, having “mystery code” can make understanding the issue difficult. As mentioned above, there was an extension .getfocus(quadrant: quadrant, on your object which we don’t know what that does. Good it was removed.Likewise there’s this codeWhich adds an additional air of mystery as we don’t know what self?.objectWillChange.send() does either.It may be an issue though because it will fire after all the results are loaded so perhaps it’s doing something on the UI or working with the data in some way causing the app to become sluggish.Try a little test using async/await calls. Replace your TaskHelper temporarily with this to see how long it takes to populate results with all of your tasks.Then, add a Realm sort, and specify some simple property to sort by (where ‘some path’ is)try await realm.objects(Tasks.self).sorted(byKeyPath: \"some path\")and run it againReport back your findings.",
"username": "Jay"
},
{
"code": "DispatchQueue.main.async {\n do {\n let startTime = Date()\n let realm = try Realm()\n self.taskResults = realm.objects(Tasks.self)\n let elapsed = Date().timeIntervalSince(startTime)\n print(\"Load took: \\(elapsed * 1000) ms\")\n }\n catch{\n print(\"Realm error: \\(error)\")\n }\n }\nfunc getSortDescriptor() -> [RealmSwift.SortDescriptor]\n {\n if self == 0\n {\n return [SortDescriptor(keyPath: \"complete\", ascending: true), SortDescriptor(keyPath: \"position\", ascending: true)]\n }\n if self == 1\n {\n return [SortDescriptor(keyPath: \"complete\", ascending: true), SortDescriptor(keyPath: \"task\", ascending: true)]\n }\n else if self == 2\n {\n return [SortDescriptor(keyPath: \"complete\", ascending: true), SortDescriptor(keyPath: \"date_added\", ascending: true)]\n }\n else if self == 3\n {\n return [SortDescriptor(keyPath: \"complete\", ascending: true), SortDescriptor(keyPath: \"date_added\", ascending: false)]\n }\n else if self == 4\n {\n return [SortDescriptor(keyPath: \"complete\", ascending: true), SortDescriptor(keyPath: \"due_date\", ascending: false)]\n }\n else\n {\n return [SortDescriptor(keyPath: \"complete\", ascending: true), SortDescriptor(keyPath: \"due_date\", ascending: true)]\n }\n }\n func getfocus(quadrant: String, focus: Int, autoAdvance: Bool) -> RealmSwift.Results<Tasks>\n {\n // MARK: overdue only\n if focus == 0 //overdue only\n {\n if autoAdvance\n {\n if quadrant == \"q1\"\n {\n let list = self as! Results<Tasks>\n return list.where{\n ($0.quadrant == quadrant &&\n $0.due_date < Date() &&\n $0.due_date != nil &&\n $0.complete == false &&\n $0.deleted == false)\n \n ||\n \n ($0.quadrant == \"q2\" &&\n $0.due_date < Date() &&\n $0.due_date != nil &&\n $0.complete == false &&\n $0.deleted == false)}\n }\n else if quadrant == \"q3\"\n ...\n }\n else\n {\n let list = self as! Results<Tasks>\n return list.where{\n $0.quadrant == quadrant &&\n $0.due_date < Date() &&\n $0.due_date != nil &&\n $0.complete == false &&\n $0.deleted == false}\n }\n }\n \n // MARK: due today only\n else if focus == 1\n {\n ...\n }\n }\n",
"text": "Unfortunately I have not dove into Swift’s async/await yet, so I don’t fully understand it. When I tried the code you posted, it crashes with the error : \"Thread 2: “Realm accessed from incorrect thread.” However I got some results from this if it helps:Without sorting or filtering, Load took: 0.030040740966796875 ms\nWith sorting and filtering, Load took: 0.1760721206665039 msAlso you are absolutely right about mystery code, I’ll try to provide a better explanation for them.\nBecause the user should be able to change how the list is filtered and sorted at the click of a button, I wrote extensions to be able to dynamically provide the sort keypaths.and the getFocus is an extension on RealmResults to filter the results based on user choice:Does this help understand my situation better? I am still new to coding for IOS and I appreciate your patient.",
"username": "Deji_Apps"
},
{
"code": "let taskResults = self.getTasks()sortTypeself == 1func getTasks() -> Results<Task> {\n switch self.sortType {\n case 0:\n let results = realm.results(Task.self).sorted(byKeyPath: \"position\", ascending: true\n return results\n case 1:\n ...\n}\nfunc getfocus(quadrant: String",
"text": "Without sorting or filtering, Load took: 0.030040740966796875 ms\nWith sorting and filtering, Load took: 0.1760721206665039 msBased on those results, which is super fast and about what I expected, that would not cause the app to hang and stutter, so we can eliminate Realm from the equation.1 -Per the above, the sorting is being done using high level Swift functions - you will be better off using Realm sorting functions. Also, the sorting is a bit odd and SortDescriptors are not really needed - I would simplify that something likelet taskResults = self.getTasks()which calls the following - I added a sortType to switch on as self == 1 was unclear2 -func getfocus(quadrant: StringIt’s not really clear why Results are being extended as the code doesn’t really extend the capability of results - it’s really just filtering data. It’s not ‘wrong’ but it may be more managable to just filter the data as needed using Realm filterslet quads = [quadrant, “q2”, “q3”]\nlet results = realm.objects(Task.self).where { $0.quadrant.in(quads) && $0.due_date < Date() && …}I want to stress that this is all just guesswork - clearly the initial issue described has nothing to do directly with Realm per se - loading your data in 0.03 is pretty darn quick.",
"username": "Jay"
},
{
"code": "",
"text": "Oh - btw… I just noticed the speed test you posted above.The code I posted above to test how long it took Realm to load using the async/await so the end time would calculate once the data was fully loaded. Your version of the code doesn’t wait for the data to fully load (it’s loading asynchronously) so it won’t be accurate.If you want to do it that way, add an observer to the results and when that fires (the .initial) the results are loaded and the total time can be calculated. See my SO answer to this question for example code.Either way though, I am confident the performance issues are not tied directly to Realm.",
"username": "Jay"
}
]
| Filtering results causes app to hang and stutter | 2022-11-14T19:16:51.015Z | Filtering results causes app to hang and stutter | 2,251 |
null | [
"sharding"
]
| [
{
"code": "The following packages have unmet dependencies:\n mongodb-org-mongos : Depends: libssl1.1 (>= 1.1.1) but it is not installable\n mongodb-org-server : Depends: libssl1.1 (>= 1.1.1) but it is not installable\n mongodb-org-shell : Depends: libssl1.1 (>= 1.1.1) but it is not installable\n",
"text": "Hello,These days Ubuntu published their new long term supported version, 22.04, and it is not possible to install mongodb because they don’t support libssl1.1 anymore:I googled it, but I did not find anywhere nobody who could solve this problem. Any ideas?\nThank you!",
"username": "Truc_Oto"
},
{
"code": "",
"text": "Are you following these directions?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Yes, but it’s a dependency problem. I tried mongo 4.4 and 5.0, to no avail: same error on both.",
"username": "Truc_Oto"
},
{
"code": "",
"text": "Aha. I’m on 20.0x and haven’t upgraded to 22 yet. I’m sure @Stennie_X will have the answer!",
"username": "Jack_Woehr"
},
{
"code": "echo \"deb http://security.ubuntu.com/ubuntu impish-security main\" | sudo tee /etc/apt/sources.list.d/impish-security.list\nsudo apt-get update\nsudo apt-get install libssl1.1\n",
"text": "This worked for me",
"username": "Vikrant_Banwal"
},
{
"code": "",
"text": "Just apt installing openssl 1.1 sounds like a recipe for disaster.\nUbuntu 22.04 already has openssl 3 installed.\nI think a MongoDB team member had best respond to this issue with a recommended path\n… @Stennie_X ?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi folks,Ubuntu 22.04 was released less than a week ago so doesn’t have official MongoDB packages available yet. The build team is aware and will set up appropriate packaging & testing infrastructure to validate new packages.Relevant Jira issues to watch are:I don’t have a specific ETA to share at the moment, but I’d generally recommend waiting for essential software packages to be available before committing to major O/S upgrades.If you are an early adopter of a new O/S release, suggested interim workarounds would be:Run your MongoDB deployment on separate hosts with supported O/S versions (eg 20.04 LTS)Run MongoDB in a container/VM with a supported O/S versionUse a hosted version of MongoDB (eg MongoDB Atlas) so you have fewer direct dependencies on O/S updatesAll of these approaches use official binaries, so you are less likely to run into novel issues.For a development environment you could also consider:Building MongoDB from sourceTrying to install or build missing dependenciesHowever, I would be very wary of mixing & matching packages intended for different O/S versions (especially for a production environment) as those combinations have not been thoroughly tested.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Ok, thank you! Will wait for the support then. Out of ignorance, may I ask if generating a snap instead of a deb could solve the dependency problem and other possible library conflicts?",
"username": "Truc_Oto"
},
{
"code": "",
"text": "Thanks, It worked for me as well!!",
"username": "Sachin_Toppa"
},
{
"code": "sudo docker run -dp 27017:27017 -v local-mongo:/data/db --name local-mongo --restart=always mongolocal-mongosudo docker exec -it local-mongo shmongo",
"text": "Hi there, hope you’re doing well. I’m currently using MongoDB in Docker and it is good enough while waiting for new update from the MongoDB team.to start MongoDB (automatically restart):\nsudo docker run -dp 27017:27017 -v local-mongo:/data/db --name local-mongo --restart=always mongoTo access into running local-mongo container:\nsudo docker exec -it local-mongo shThen type mongo and you’re good to go!",
"username": "Qu_c_D_t_Tr_n"
},
{
"code": "",
"text": "This worked for me as well. Still waiting for offcial release.",
"username": "Rajesh_Jaswal"
},
{
"code": "",
"text": "Can you drop the process of installing mongodb on docker ??",
"username": "Alexander_Joshua"
},
{
"code": "sudo docker run -dp 27017:27017 -v local-mongo:/data/db --name local-mongo --restart=always mongo",
"text": "Hi Alexander, I’ve already posted the process above. I’ll re-post it in case you didn’t see it:\nsudo docker run -dp 27017:27017 -v local-mongo:/data/db --name local-mongo --restart=always mongo",
"username": "Qu_c_D_t_Tr_n"
},
{
"code": "",
"text": "Thank you. Will try it now",
"username": "Alexander_Joshua"
},
{
"code": "sudo apt-get install mongo<tab>\nmongocli mongodb-mongosh ...\nsudo apt-get install mongodb-mongosh \nReading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nRecommended packages:\n libssl1.1\n",
"text": "As a sidenote, you can still download some of the mongodb binaries from the official ubuntu repo (not recommended by mongodb though), if you just playing around it is fine imho. Example:You can see it skips libssl1.1",
"username": "Mah_Neh"
},
{
"code": "",
"text": "Is there any ETA on 22.04 packages? Or even RHEL 9?Any rough estimate? It’s the only blocker we have now for both distros.",
"username": "Phill_N_A"
},
{
"code": "",
"text": "Hey fellas.\nIs there already any “official” solution to that problem?\nI also struggle with Ubuntu 22.04 LTS, cannot install MongoDB on it.",
"username": "P_T"
},
{
"code": "",
"text": "Hi folks! Ubuntu 22.04 has been out for almost 3 months now, when will we be able to install the official version of mongo without all sorts of hacks and tweaks?",
"username": "chaosmos"
},
{
"code": "\nmongodb-org-mongos_5.0.9_amd64.deb\n\nmongodb-org-server_5.0.9_amd64.deb\n\nmongodb-org-shell_5.0.9_amd64.deb\n\n\nDepends: libc6 (>= 2.29), libcurl4 (>= 7.16.2), libgcc-s1 (>= 4.2), liblzma5 (>= 5.1.1alpha+20110809), libssl1.1 (>= 1.1.1)\n\n\nDepends: libc6 (>= 2.29), libcurl4 (>= 7.16.2), libgcc-s1 (>= 4.2), liblzma5 (>= 5.1.1alpha+20110809)\n\n\nmongodb-org-mongos_5.0.9_amd64.deb-newpackage.deb\n\nmongodb-org-server_5.0.9_amd64.deb-newpackage.deb\n\nmongodb-org-shell_5.0.9_amd64.deb-newpackage.deb\n\n\nsudo dpkg -i *.deb\n\n\nsudo apt install --no-install-recommends mongodb-org mongodb-org-database mongodb-org-tools mongodb-mongosh-shared-openssl3\n\n\nJun 11 18:14:11 systemd[1]: Started MongoDB Database Server.\n\nJun 11 18:14:11 systemd[48437]: mongod.service: Failed to determine user credentials: No such process\n\nJun 11 18:14:11 systemd[48437]: mongod.service: Failed at step USER spawning /usr/bin/mongod: No such process\n\nJun 11 18:14:11 systemd[1]: mongod.service: Main process exited, code=exited, status=217/USER\n\nJun 11 18:14:11 systemd[1]: mongod.service: Failed with result 'exit-code'.\n\n\nsudo adduser mongodb\n\n\nJun 11 18:15:53 systemd[1]: Started MongoDB Database Server.\n\nJun 11 18:15:53 mongod[48504]: /usr/bin/mongod: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory\n\nJun 11 18:15:53 systemd[1]: mongod.service: Main process exited, code=exited, status=127/n/a\n\nJun 11 18:15:53 systemd[1]: mongod.service: Failed with result 'exit-code'.\n\n\nsudo apt-get install libssl-dev\n\n\nsudo find / -type f -name libcrypto.so*\n\n\nsudo ln -s /usr/lib/x86_64-linux-gnu/libcrypto.so.3 /usr/lib/x86_64-linux-gnu/libcrypto.so.3 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1\n\n\nJun 11 18:23:59 mongod[48822]: /usr/bin/mongod: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory\n\n\nsudo find / -type f -name libssl.so.*\n\n\nsudo ln -s /usr/lib/x86_64-linux-gnu/libssl.so.3 /usr/lib/x86_64-linux-gnu/libssl.so.1.1\n\n\nJun 11 18:26:29 systemd[1]: Started MongoDB Database Server.\n\nJun 11 18:26:29 mongod[48845]: /usr/bin/mongod: /lib/x86_64-linux-gnu/libcrypto.so.1.1: version `OPENSSL_1_1_0' not found (required by /usr/bin/mongod)\n\nJun 11 18:26:29 mongod[48845]: /usr/bin/mongod: /lib/x86_64-linux-gnu/libssl.so.1.1: version `OPENSSL_1_1_0' not found (required by /usr/bin/mongod)\n\nJun 11 18:26:29 mongod[48845]: /usr/bin/mongod: /lib/x86_64-linux-gnu/libssl.so.1.1: version `OPENSSL_1_1_1' not found (required by /usr/bin/mongod)\n\n\n$ sudo find / -type f -name libcrypto.so*\n\n/usr/lib/x86_64-linux-gnu/libcrypto.so.3\n\n/snap/core20/1518/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1\n\n\n$ sudo find / -type f -name libssl.so.*\n\n/usr/lib/x86_64-linux-gnu/libssl.so.3\n\n/snap/core20/1518/usr/lib/x86_64-linux-gnu/libssl.so.1.1\n\n\n$ sudo rm /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 /usr/lib/x86_64-linux-gnu/libssl.so.1.1\n\n$ sudo ln -s /snap/core20/1518/usr/lib/x86_64-linux-gnu/libssl.so.1.1 /usr/lib/x86_64-linux-gnu/libssl.so.1.1\n\n$ sudo ln -s /snap/core20/1518/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1\n\n\n$ sudo service mongod start && sudo service mongod status\n\n● mongod.service - MongoDB Database Server\n\nLoaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n\nActive: active (running) since Sat 2022-06-11 18:32:20 CEST; 49ms ago\n\nDocs: 
https://docs.mongodb.org/manual\n\nMain PID: 48894 (mongod)\n\nMemory: 760.0K\n\nCPU: 16ms\n\nCGroup: /system.slice/mongod.service\n\n└─48894 /usr/bin/mongod --config /etc/mongod.conf\n\nJun 11 18:32:20 systemd[1]: Started MongoDB Database Server.\n#!/bin/bash\n# name: debedit.sh\n# author: showp1984\n# description: unpacks, edits control file and repacks deb-files with gz or xz compression\n# prerequisites: xz-utils && nano\n# No guarantees. Use at your own peril.\n\nunset DECOMPXZ\nunset DECOMPGZ\nFILES=\"{post,pre}{inst,rm} conffiles md5sums control\"\n\necho \"Uncompressing deb...\"\nar x $*\n\nFILEXZ=\"control.tar.xz\"\nif [ -f \"$FILEXZ\" ]; then\n DECOMPXZ=1\n echo \"Found: $FILEXZ | using XZ decomp...\"\n tar --xz -xvf $FILEXZ\nfi\n\nFILEGZ=\"control.tar.gz\"\nif [ -f \"$FILEGZ\" ]; then\n DECOMPGZ=1\n echo \"$FILEGZ exists.\"\n tar xzf $FILEGZ\nfi\n\nnano control\n\nif [[ \"$DECOMPXZ\" == 1 ]]; then\n echo \"Repacking $FILEXZ...\"\n tar --ignore-failed-read -cvJf $FILEXZ $FILES\n\n echo \"Repacking deb with xz files...\"\n ar rcs \"${*}-newpackage.deb\" debian-binary $FILEXZ data.tar.xz\nfi\n\nif [[ \"$DECOMPGZ\" == 1 ]]; then\n echo \"Repacking $FILEGZ...\"\n tar --ignore-failed-read -cvzf $FILEGZ $FILES\n\n echo \"Repacking deb with gz files...\"\n ar rcs \"${*}-newpackage.deb\" debian-binary $FILEGZ data.tar.gz\nfi\n\necho \"Cleanup...\"\nrm -r control md5sums debian-binary control.tar.xz data.tar.xz control.tar.gz data.tar.gz postrm preinst prerm conffiles postinst\n\necho \"Done!\"\n",
"text": "I got mongodb 5.0.9 installed on ubuntu 22.05… but boy… it was a hassle.First you have to remove the libssl1.1 dep from:My bash script (that’s added below) will open the ‘nano’ editor and offer you to edit the control file specifying the deps. Your are looking for a line like this (starting with “Depends” and containing a reference to “libssl1.1”):Remove the reference to libssl1.1 and save & quit. It should look like this:The script will automagically repackage the files and generate you a new package with ‘-newpackage.deb’ attached to its name:Now put them in a folder and install them manually with dpkg:After that run:Hint: You don’t need ‘mongodb-mongosh’ from the deps as it’s being replaced by ‘mongodb-mongosh-shared-openssl3’.Afterwards I couldn’t start the mongod service:Which was because of a missing user (after looking at ‘/etc/systemd/system/mongod.service’), so I added it:Edit, try, fail, repeat:Missing libcrypto from ssl 1.1…I didn’t have the 3.0 on the system, fix that:Now search for the file:link it:And we got the same error for ssl1.1…Search, link, start…And error (obviously - it’s 3.0, not 1.1…):But didn’t I see a snap with 1.1 libs on the system when running the find commands? - Sure did!Let’s remove the previous link and they those…SUCCESS:Script:",
"username": "showp1984"
},
{
"code": "$ sudo mkdir /var/log/mongodb/\n$ sudo touch /var/log/mongodb/mongod.log\n$ sudo chown mongodb:mongodb /var/log/mongodb/mongod.log\n$ sudo mkdir /var/lib/mongodb\n$ sudo chown mongodb:mongodb /var/lib/mongodb\n",
"text": "I needed two more things mongodb was complaining about in the logs.\nYMMV!Add log dir & file:Add db dir:Have a nice weekend! ",
"username": "showp1984"
}
]
| Installing mongodb over Ubuntu 22.04 | 2022-04-25T13:18:46.093Z | Installing mongodb over Ubuntu 22.04 | 227,582 |
null | [
"production",
"server"
]
| [
{
"code": "",
"text": "MongoDB 5.0.14 is out and is ready for production deployment. This release contains only fixes since 5.0.13, and is a recommended upgrade for all 5.0 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 5.0.14 is released | 2022-11-21T17:35:43.962Z | MongoDB 5.0.14 is released | 2,502 |
null | [
"production",
"server"
]
| [
{
"code": "",
"text": "MongoDB 4.4.18 is out and is ready for production deployment. This release contains only fixes since 4.4.17, and is a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 4.4.18 is released | 2022-11-21T17:32:04.242Z | MongoDB 4.4.18 is released | 2,470 |
null | [
"queries",
"swift",
"kotlin"
]
| [
{
"code": "suspend fun getRestaurantsOrders(): CommonFlow<List<Order>> {\n val userId = appService.currentUser!!.id\n val restaurant = realm.query<Restaurant>(\"userID = $0\", userId).first().find()!!\n return withContext(Dispatchers.Default) {\n realm.query<Order>(\"restaurantID = $0 && totalQuantity > 0 SORT(discount DESC)\", restaurant._id.toString())\n .asFlow().map {\n it.list\n }.asCommonFlow()\n }\n }\nfunc getOrdersList()\n {\n Task{\n do{\n try await repo.getRestaurantsOrders().watch(block: {orders in\n\n self.restaurantActiveOrdersList = orders as! [Order] //breaks here\n\n })\n }catch{\n print(\"error\")\n }\n }\n }\n",
"text": "I have a kmm project, I am able to get the data from the kotlin version, but for iOS not.The data from the repo that I am trying to get:The way that I am trying to get data from ios:I am getting an exception break with code: Thread 4: EXC_BAD_ACCESS (code=EXC_I386_GPFLT)Any idea what I am doing wrong? I am new at swift, thanks in advance!",
"username": "AfterFood_Contact"
},
{
"code": "",
"text": "@AfterFood_Contact : Can you please share the complete stacktrace ?",
"username": "Mohit_Sharma"
}
]
| Not able to get data on ios from kotlin kmm repo | 2022-11-12T23:37:35.825Z | Not able to get data on ios from kotlin kmm repo | 1,530 |
null | [
"android",
"kotlin"
]
| [
{
"code": "",
"text": "I have a kotlin multiplatform mobile app in android and swiftui, what is the best approach to implement a geospatial/geolocation using realm?",
"username": "Daniel_Gabor"
},
{
"code": "func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {\n if let loc: CLLocationCoordinate2D = manager.location?.coordinate {\n print(\"you are here: \\(loc.latitude) \\(loc.longitude)\")\n }\n}\n",
"text": "This is kind of an open ended question; a “best approach” will depend on your use case. Realm itself has no knowledge of locations and no geospatial capability - it stores data and can notify your app of changes to that data, and that’s about it.If you want to implement some type of geofencing or location capability, you would rely on the devices API to provide your app that data, and then store the results in Realm.At a high level, for example, your devices API can tell you where the device is by asking it’s Location services for a location. In Swift it may be the CLLocationManager to which you could then get coordinatesBeyond that, it would be up to you to implement what is done what those coordinates and the UI.",
"username": "Jay"
},
{
"code": "",
"text": "hey there, Product for Realm & Device Sync here. I am interested in building this feature - I’d love to pick your bring on your use case if you’re keen. If you could drop me a line at [email protected] and we can set up a time to chat.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hello, thank you for the reply!I would like to know if Realm has already implemented that. Like there is in Mongo db: https://www.mongodb.com/docs/manual/geospatial-queries/.What I need is just to find out if a user point (latlong) is inside a polygon.Thank you for the info and time!",
"username": "Daniel_Gabor"
},
{
"code": "CLLocationCoordinate2D",
"text": "@Daniel_Gabor(Realm-ers feel free to correct me)At this time Realm SDK’s do not currently directly support GeoJSON types for geospatial queries - here’s what is supported: android property types or swift property typesThat being said as one option you can directly interface with Atlas via the Realm Swift SDK’s MongoClient with the Query APIAdditionally, I believe server functions can also be called which would also provide access to that functionality.You can also craft your own - we did a little test project a while back using Swifts CLLocationCoordinate2D and Type Projection to work with geospatial data. There’s some example code at that link as well.You have @Ian_Ward attention in this topic so run with that - having Geospatial ‘stuff’ in the Realm SDK would be (IMO) an important and exciting feature to add!",
"username": "Jay"
},
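To make the point-in-polygon idea discussed above concrete, here is a hedged mongosh sketch of how the check could run against Atlas itself (for example from an App Services function, or via the SDK's MongoClient). The `zones` collection, its `area` polygon field, and the sample coordinates are assumptions for illustration, not part of the original project.

```js
// One-time: a 2dsphere index on the polygon field
db.zones.createIndex({ area: "2dsphere" });

// Which stored polygons contain the user's current location?
const userPoint = { type: "Point", coordinates: [78.4867, 17.3850] }; // [lng, lat]
db.zones.find({
  area: { $geoIntersects: { $geometry: userPoint } }
});
```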
{
"code": "",
"text": "Hello Jay, I want to add as property to the realm object.\nWouldn’t this work? :Kotlin Multiplatform GeoJson library and Turfjs port",
"username": "Daniel_Gabor"
},
{
"code": "",
"text": "It looks promising but I can’t comment on third party libraries (especially ones I have no experience with).",
"username": "Jay"
},
{
"code": "",
"text": "We don’t have this natively yet but we are interested in building it - you can see a workaround another user leveraged here -### How frequently does the bug occur?\n\nAll the time\n\n### Description\n\nI'm tryin…g to implement an array of arrays (A GeoJSON LineString).\n\nThis is supported in MongoDB, BSON Schema (when I generate a schema), but I'm unable to model the schema in the JS SDK.\n\nI've attempted to use a `double[][]` and a `mixed[]` but both don't seem to work.\n\ni.e.\n\n```\nexport const LineString = {\n name: 'LineString',\n embedded: true,\n properties: {\n type: { type: 'string', default: 'LineString' },\n coordinates: 'mixed[]',\n },\n};\n```\n\nAny suggestions on how I model this?\n\n### Stacktrace & log output\n\n_No response_\n\n### Can you reproduce the bug?\n\nYes, always\n\n### Reproduction Steps\n\n_No response_\n\n### Version\n\n11.1.0\n\n### What SDK flavour are you using?\n\nAtlas Device Sync\n\n### Are you using encryption?\n\nNo, not using encryption\n\n### Platform OS and version(s)\n\nAndroid\n\n### Build environment\n\nWhich debugger for React Native: ..\n\n\n### Cocoapods version\n\n_No response_",
"username": "Ian_Ward"
}
]
| Geospatial/geolocation for KMM Realm Mongodb | 2022-11-18T20:04:11.820Z | Geospatial/geolocation for KMM Realm Mongodb | 3,072 |
null | []
| [
{
"code": "",
"text": "Hello All,This is Srinivas a Database Architect & MongoDB SME from Hyderabad, India.It’s an honor for me to join this amazing community, as a MongoDB User Group Leader in Hyderabad, India.I love learning new things related to Database Technology - especially MongoDB and as part of that process, I’ve completed my MongoDB DBA, Developer & SI Architect Certifications As I strongly believe in the power of networking - it’s a great platform for me to connect great talent around the MongoDB Community.You can learn more about me at https://www.linkedin.com/in/mutyalasrinivas/ - please do send a connection request if you’re okay with it.My blog : http://dbversity.com/Thank you,\nSrinivas Mutyala",
"username": "Srinivas_Mutyala"
},
{
"code": "",
"text": "Hey @Srinivas_Mutyala,\nWelcome to the MongoDB Community!We really hope you have a great time leading the Hyderabad Community! ",
"username": "Harshit"
},
{
"code": "",
"text": "Thank you Harshit !!",
"username": "Srinivas_Mutyala"
},
{
"code": "",
"text": "Hi @Srinivas_MutyalaWelcome to MongoDB community and congratulations being MUG leader.Looking forward to Working together.Darshan",
"username": "DarshanJayarama"
}
]
| Srinivas Mutyala - MUG Leader - Introduction | 2022-11-16T15:20:50.392Z | Srinivas Mutyala - MUG Leader - Introduction | 2,361 |
null | []
| [
{
"code": "repomd.xml<location href=\"repodata/primary.xml.gz\"/>primary.xml$ curl -Ls https://repo.mongodb.org/yum/redhat/8/mongodb-org/4.4/x86_64/repodata/primary.xml.gz | gunzip | fgrep -B1 -A2 '<name>mongodb-org</name>'\n<package type=\"rpm\">\n <name>mongodb-org</name>\n <arch>x86_64</arch>\n <version epoch=\"0\" ver=\"4.4.18\" rel=\"1.el8\"/>\n$ curl -Ls https://repo.mongodb.org/yum/redhat/8/mongodb-org/5.0/x86_64/repodata/primary.xml.gz | gunzip | fgrep -B1 -A2 '<name>mongodb-org</name>'\n<package type=\"rpm\">\n <name>mongodb-org</name>\n <arch>x86_64</arch>\n <version epoch=\"0\" ver=\"5.0.14\" rel=\"1.el8\"/>\n",
"text": "Between November 16th and 17th, 2022, a problem was introduced to the RedHat 8 repositories in repo.monogodb.org, where only the latest versions are listed for the 4.4 and 5.5 series.The following repomd.xml files reference <location href=\"repodata/primary.xml.gz\"/>:You can see this with the following commands:",
"username": "Eric_Hontz"
},
{
"code": "",
"text": "FYI: I checked today (November 21st), and it seems the issue has been fixed.",
"username": "Eric_Hontz"
}
]
| RHEL8 yum repository listings only include latest versions for 4.4 and 5.0 series | 2022-11-18T16:46:22.379Z | RHEL8 yum repository listings only include latest versions for 4.4 and 5.0 series | 1,092 |
[
"swift"
]
| [
{
"code": "",
"text": "I ve been watching Realm for KMM. For this sample:Demo Conference Manager App using Flexible Sync. Contribute to mongodb-developer/mongo-conference development by creating an account on GitHub.I can see that for the iOS code, you use var repo=RealmRepo() in every view, wouldn’t this affect the performance if we would have multiple views?What is the best approach in this case? Singleton class?Thank you in advance!@Mohit_Sharma",
"username": "Daniel_Gabor"
},
{
"code": "",
"text": "Hello @Daniel_Gabor :Thanks for reaching out. Yes, creating Realm per view is overhead, thanks for pointing that out, didn’t consider while creating a simple app.You can use any DI framework ( or vanilla factory) with gives you Singleton instance of Realm.",
"username": "Mohit_Sharma"
}
]
| Usage of var repo=RealmRepo() affecting performance? | 2022-11-18T18:44:52.920Z | Usage of var repo=RealmRepo() affecting performance? | 1,535 |
|
[
"replication",
"transactions",
"field-encryption",
"storage"
]
| [
{
"code": "● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: inactive (dead) since Fri 2022-10-28 07:11:17 UTC; 54min ago\n Docs: https://docs.mongodb.org/manual\n Process: 787 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=0/SUCCESS)\n Main PID: 787 (code=exited, status=0/SUCCESS)\n\nOkt 28 07:11:09 lsrocketchat systemd[1]: Started MongoDB Database Server.\nOkt 28 07:11:10 lsrocketchat mongod[787]: about to fork child process, waiting until server is ready for connections.\nOkt 28 07:11:10 lsrocketchat mongod[899]: forked process: 899\nOkt 28 07:11:16 lsrocketchat mongod[787]: child process started successfully, parent exiting\nOkt 28 07:11:17 lsrocketchat systemd[1]: mongod.service: Succeeded.\n0\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.359+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.362+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.392+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.599+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.599+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.599+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.599+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.599+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":899,\"port\":27017,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"lsrocketchat\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.599+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.2\",\"gitVersion\":\"94fb7dfc8b974f1f5343e7ea394d0d9deedba50e\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 
2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.599+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"20.04\"}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.599+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"processManagement\":{\"fork\":true,\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"replication\":{\"replSetName\":\"rs01\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\",\"engine\":\"wiredTiger\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.618+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-10-28T07:11:10.618+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3466M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:15.991+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":5373}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:15.991+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":1666940823,\"i\":2}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:15.992+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":5380106, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger oldestTimestamp\",\"attr\":{\"oldestTimestamp\":{\"$timestamp\":{\"t\":1666940523,\"i\":2}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.011+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22383, \"ctx\":\"initandlisten\",\"msg\":\"The size storer reports that the oplog contains\",\"attr\":{\"numRecords\":8171,\"dataSize\":1884993}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.011+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22384, \"ctx\":\"initandlisten\",\"msg\":\"Scanning the oplog to determine where to place markers for truncation\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.037+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22382, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger record store oplog processing finished\",\"attr\":{\"durationMillis\":26}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.150+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled 
for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.158+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.158+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.160+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.476+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.476+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5380103, \"ctx\":\"initandlisten\",\"msg\":\"Unpin oldest timestamp request\",\"attr\":{\"service\":\"_wt_startup\",\"requestedTs\":{\"$timestamp\":{\"t\":1666940523,\"i\":2}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.476+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/var/lib/mongodb/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.489+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigStartingUp\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.490+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6005300, \"ctx\":\"initandlisten\",\"msg\":\"Starting up replica set aware services\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.490+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280500, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to create internal replication collections\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.495+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":200}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.495+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280501, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to load local voted for document\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.495+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280502, \"ctx\":\"initandlisten\",\"msg\":\"Searching for local Rollback ID document\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.498+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21529, \"ctx\":\"initandlisten\",\"msg\":\"Initializing rollback ID\",\"attr\":{\"rbid\":1}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.498+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280504, \"ctx\":\"initandlisten\",\"msg\":\"Cleaning up any partially applied oplog 
batches & reading last op from oplog\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.499+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6608200, \"ctx\":\"initandlisten\",\"msg\":\"Initializing cluster server parameters from disk\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.499+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21544, \"ctx\":\"initandlisten\",\"msg\":\"Recovering from stable timestamp\",\"attr\":{\"stableTimestamp\":{\"$timestamp\":{\"t\":1666940823,\"i\":2}},\"topOfOplog\":{\"ts\":{\"$timestamp\":{\"t\":1666940823,\"i\":2}},\"t\":5},\"appliedThrough\":{\"ts\":{\"$timestamp\":{\"t\":0,\"i\":0}},\"t\":-1}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.499+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21545, \"ctx\":\"initandlisten\",\"msg\":\"Starting recovery oplog application at the stable timestamp\",\"attr\":{\"stableTimestamp\":{\"$timestamp\":{\"t\":1666940823,\"i\":2}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.499+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5466604, \"ctx\":\"initandlisten\",\"msg\":\"Start point for recovery oplog application exists in oplog. No adjustment necessary\",\"attr\":{\"startPoint\":{\"$timestamp\":{\"t\":1666940823,\"i\":2}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.499+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21549, \"ctx\":\"initandlisten\",\"msg\":\"No oplog entries to apply for recovery. Start point is at the top of the oplog\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.499+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280505, \"ctx\":\"initandlisten\",\"msg\":\"Creating any necessary TenantMigrationAccessBlockers for unfinished migrations\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.501+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280506, \"ctx\":\"initandlisten\",\"msg\":\"Reconstructing prepared transactions\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.502+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280508, \"ctx\":\"ReplCoord-0\",\"msg\":\"Attempting to set local replica set config; validating config for startup\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.502+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280509, \"ctx\":\"ReplCoord-0\",\"msg\":\"Local configuration validated for startup\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.502+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"ReplCoord-0\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigSteady\",\"oldState\":\"ConfigStartingUp\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.502+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21392, \"ctx\":\"ReplCoord-0\",\"msg\":\"New replica set config in use\",\"attr\":{\"config\":{\"_id\":\"rs01\",\"version\":1,\"term\":5,\"members\":[{\"_id\":0,\"host\":\"127.0.0.1:27017\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":1,\"tags\":{},\"secondaryDelaySecs\":0,\"votes\":1}],\"protocolVersion\":1,\"writeConcernMajorityJournalDefault\":true,\"settings\":{\"chainingAllowed\":true,\"heartbeatIntervalMillis\":2000,\"heartbeatTimeoutSecs\":10,\"electionTimeoutMillis\":10000,\"catchUpTimeoutMillis\":-1,\"catchUpTakeoverDelayMillis\":30000,\"getLastErrorModes\":{},\"getLastErrorDefaults\":{\"w\":1,\"wtimeout\":0},\"replicaSetId\":{\"$oid\":\"635934063e8ef6387bb84510\"}}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.503+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21393, \"ctx\":\"ReplCoord-0\",\"msg\":\"Found self in config\",\"attr\":{\"hostAndPort\":\"127.0.0.1:27017\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.503+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, 
\"ctx\":\"ReplCoord-0\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"STARTUP2\",\"oldState\":\"STARTUP\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.503+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21320, \"ctx\":\"ReplCoord-0\",\"msg\":\"Updated term\",\"attr\":{\"term\":5}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.503+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21306, \"ctx\":\"ReplCoord-0\",\"msg\":\"Starting replication storage threads\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.504+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280512, \"ctx\":\"ReplCoord-0\",\"msg\":\"No initial sync required. Attempting to begin steady replication\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.504+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"ReplCoord-0\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"RECOVERING\",\"oldState\":\"STARTUP2\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.504+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280507, \"ctx\":\"initandlisten\",\"msg\":\"Loaded replica set config, scheduled callback to set local config\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.505+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21299, \"ctx\":\"ReplCoord-0\",\"msg\":\"Starting replication fetcher thread\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.505+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21300, \"ctx\":\"ReplCoord-0\",\"msg\":\"Starting replication applier thread\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.505+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21301, \"ctx\":\"ReplCoord-0\",\"msg\":\"Starting replication reporter thread\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.505+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280511, \"ctx\":\"ReplCoord-0\",\"msg\":\"Set local replica set config\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.505+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21224, \"ctx\":\"OplogApplier-0\",\"msg\":\"Starting oplog application\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.505+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.506+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"OplogApplier-0\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"SECONDARY\",\"oldState\":\"RECOVERING\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.506+00:00\"},\"s\":\"I\", \"c\":\"ELECTION\", \"id\":4615652, \"ctx\":\"OplogApplier-0\",\"msg\":\"Starting an election, since we've seen no PRIMARY in election timeout period\",\"attr\":{\"electionTimeoutPeriodMillis\":10000}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.506+00:00\"},\"s\":\"I\", \"c\":\"ELECTION\", \"id\":21438, \"ctx\":\"OplogApplier-0\",\"msg\":\"Conducting a dry run election to see if we could be elected\",\"attr\":{\"currentTerm\":5}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.506+00:00\"},\"s\":\"I\", \"c\":\"ELECTION\", \"id\":21444, \"ctx\":\"ReplCoord-0\",\"msg\":\"Dry election run succeeded, running for election\",\"attr\":{\"newTerm\":6}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.506+00:00\"},\"s\":\"I\", \"c\":\"ELECTION\", \"id\":6015300, \"ctx\":\"ReplCoord-0\",\"msg\":\"Storing last vote document in local storage for my election\",\"attr\":{\"lastVote\":{\"term\":6,\"candidateIndex\":0}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"ELECTION\", \"id\":21450, \"ctx\":\"ReplCoord-0\",\"msg\":\"Election succeeded, assuming primary 
role\",\"attr\":{\"term\":6}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"ReplCoord-0\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"PRIMARY\",\"oldState\":\"SECONDARY\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21106, \"ctx\":\"ReplCoord-0\",\"msg\":\"Resetting sync source to empty\",\"attr\":{\"previousSyncSource\":\":27017\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21359, \"ctx\":\"ReplCoord-0\",\"msg\":\"Entering primary catch-up mode\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015304, \"ctx\":\"ReplCoord-0\",\"msg\":\"Skipping primary catchup since we are the only node in the replica set.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21363, \"ctx\":\"ReplCoord-0\",\"msg\":\"Exited primary catch-up mode\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21107, \"ctx\":\"ReplCoord-0\",\"msg\":\"Stopping replication producer\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21239, \"ctx\":\"ReplBatcher\",\"msg\":\"Oplog buffer has been drained\",\"attr\":{\"term\":6}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.508+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21239, \"ctx\":\"ReplBatcher\",\"msg\":\"Oplog buffer has been drained\",\"attr\":{\"term\":6}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.509+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21343, \"ctx\":\"RstlKillOpThread\",\"msg\":\"Starting to kill user operations\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.509+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21344, \"ctx\":\"RstlKillOpThread\",\"msg\":\"Stopped killing user operations\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.509+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21340, \"ctx\":\"RstlKillOpThread\",\"msg\":\"State transition ops metrics\",\"attr\":{\"metrics\":{\"lastStateTransition\":\"stepUp\",\"userOpsKilled\":0,\"userOpsRunning\":1}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.509+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4508103, \"ctx\":\"OplogApplier-0\",\"msg\":\"Increment the config term via reconfig\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.509+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015313, \"ctx\":\"OplogApplier-0\",\"msg\":\"Replication config state is Steady, starting reconfig\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.509+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"OplogApplier-0\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReconfiguring\",\"oldState\":\"ConfigSteady\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.509+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21353, \"ctx\":\"OplogApplier-0\",\"msg\":\"replSetReconfig config object parses ok\",\"attr\":{\"numMembers\":1}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.509+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":51814, \"ctx\":\"OplogApplier-0\",\"msg\":\"Persisting new config to disk\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.510+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40440, \"ctx\":\"initandlisten\",\"msg\":\"Starting the TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.510+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015315, \"ctx\":\"OplogApplier-0\",\"msg\":\"Persisted new config to disk\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.510+00:00\"},\"s\":\"I\", 
\"c\":\"REPL\", \"id\":6015317, \"ctx\":\"OplogApplier-0\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigSteady\",\"oldState\":\"ConfigReconfiguring\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.510+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21392, \"ctx\":\"OplogApplier-0\",\"msg\":\"New replica set config in use\",\"attr\":{\"config\":{\"_id\":\"rs01\",\"version\":1,\"term\":6,\"members\":[{\"_id\":0,\"host\":\"127.0.0.1:27017\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":1,\"tags\":{},\"secondaryDelaySecs\":0,\"votes\":1}],\"protocolVersion\":1,\"writeConcernMajorityJournalDefault\":true,\"settings\":{\"chainingAllowed\":true,\"heartbeatIntervalMillis\":2000,\"heartbeatTimeoutSecs\":10,\"electionTimeoutMillis\":10000,\"catchUpTimeoutMillis\":-1,\"catchUpTakeoverDelayMillis\":30000,\"getLastErrorModes\":{},\"getLastErrorDefaults\":{\"w\":1,\"wtimeout\":0},\"replicaSetId\":{\"$oid\":\"635934063e8ef6387bb84510\"}}}}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.510+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21393, \"ctx\":\"OplogApplier-0\",\"msg\":\"Found self in config\",\"attr\":{\"hostAndPort\":\"127.0.0.1:27017\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.510+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015310, \"ctx\":\"OplogApplier-0\",\"msg\":\"Starting to transition to primary.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.510+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015309, \"ctx\":\"OplogApplier-0\",\"msg\":\"Logging transition to primary to oplog on stepup\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.510+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20657, \"ctx\":\"OplogApplier-0\",\"msg\":\"IndexBuildsCoordinator::onStepUp - this node is stepping up to primary\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.512+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21331, \"ctx\":\"OplogApplier-0\",\"msg\":\"Transition to primary complete; database writes are now permitted\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.512+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015306, \"ctx\":\"OplogApplier-0\",\"msg\":\"Applier already left draining state, exiting.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.512+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":400}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.515+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40445, \"ctx\":\"TopologyVersionObserver\",\"msg\":\"Started TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.516+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.516+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.516+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.519+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.519+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23378, 
\"ctx\":\"SignalHandler\",\"msg\":\"Signal was sent by kill(2)\",\"attr\":{\"pid\":1,\"uid\":0}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.519+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23381, \"ctx\":\"SignalHandler\",\"msg\":\"will terminate after current cmd ends\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.519+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"SignalHandler\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.519+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"SignalHandler\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.519+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.520+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"FLECrudNetwork\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.520+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.520+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40441, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.520+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123005, \"ctx\":\"ShardSplitDonorService-0\",\"msg\":\"Rebuilding PrimaryOnlyService due to stepUp\",\"attr\":{\"service\":\"ShardSplitDonorService\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.521+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123005, \"ctx\":\"TenantMigrationDonorService-0\",\"msg\":\"Rebuilding PrimaryOnlyService due to stepUp\",\"attr\":{\"service\":\"TenantMigrationDonorService\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.521+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123005, \"ctx\":\"TenantMigrationRecipientService-0\",\"msg\":\"Rebuilding PrimaryOnlyService due to stepUp\",\"attr\":{\"service\":\"TenantMigrationRecipientService\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.522+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40447, \"ctx\":\"TopologyVersionObserver\",\"msg\":\"Stopped TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.522+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.523+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784903, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the LogicalSessionCache\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.523+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"SignalHandler\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping 
further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784907, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the replica set node executor\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"ReplNodeDbWorkerNetwork\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5074000, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the replica set aware services.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123006, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"numInstances\":0,\"numOperationContexts\":0}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"TenantMigrationDonorServiceNetwork\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123006, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"numInstances\":0,\"numOperationContexts\":0}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"TenantMigrationRecipientServiceNetwork\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123006, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"numInstances\":0,\"numOperationContexts\":0}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"ShardSplitDonorServiceNetwork\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.527+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21328, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down replication subsystems\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.528+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21302, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping replication reporter thread\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.528+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21303, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping replication fetcher thread\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.528+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21304, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping replication applier thread\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.913+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ShutdownInProgress: Shutdown in progress\",\"nextWakeupMillis\":600}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.507+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21107, \"ctx\":\"BackgroundSync\",\"msg\":\"Stopping replication 
producer\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.512+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21225, \"ctx\":\"OplogApplier-0\",\"msg\":\"Finished oplog application\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.512+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5698300, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping replication applier writer pool\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.512+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21307, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping replication storage threads\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.512+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"OplogApplierNetwork\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.512+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.512+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"ReplCoordExternNetwork\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"ReplNetwork\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"SignalHandler\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":5}}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-10-28T07:11:17.513+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784920, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the LogicalTimeValidator\"}\n\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.913+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ShutdownInProgress: Shutdown in progress\",\"nextWakeupMillis\":600}}",
"text": "Hi,im new in this sector Linux/Mongodb and hope some can help me.\nWe installed an Rocketchatserver with mongodb.\nThere is running wiredtiger. If i understand it right is this an Replication function.\nIf i install it runs fine. But after one restart the Mongodb wont start anymore.I Have Uused this manual and installe dmongodb 6after an restartof the Vm a got this if i look in the Monogdb log i got This. i hope it got not so long and i thnk this point could be the Problem, but i didnt found any good soloution.\n{\"t\":{\"$date\":\"2022-10-28T07:11:16.913+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ShutdownInProgress: Shutdown in progress\",\"nextWakeupMillis\":600}}Did anyone has an idea what the Problem could be?\nif you need anything else please say it and if possible where i cant geht this infos.ThanksThomas",
"username": "Thomas_Bersch"
},
{
"code": "",
"text": "Hello,\ndon’t any of you have an idea?\nI really need help with this.ThanksThomas",
"username": "Thomas_Bersch"
},
{
"code": "processManagement:\n #fork: true\n",
"text": "hey,i had the same problem, please try to disable the fork: true in /etc/mongod.conf.it worked for me.\nit seems to have something to do with the init script and daemon mode.\nhttps://www.mongodb.com/docs/manual/reference/configuration-options/#mongodb-setting-processManagement.forki am currently testing rocket.chat. let’s see if the customization leads to any problems.best. m.",
"username": "profile"
},
{
"code": "",
"text": "Thank you so much!!! Yea that was the Problem why is that in the Manual discript. Why did thy not corret the Problem. From the Errors out the logs i cannot read that.\nwhat made you think it might have something to do with the fork?Thanks\nThomas",
"username": "Thomas_Bersch"
},
{
"code": "",
"text": "you are welcome!simple trial and error.i chose the manual install on ubuntu 20.04 with mongodb 5.0.13, rocket 5.3.1 and node version\n14.19.3 instead of docker or snap and thought that’s where the problem came from, but i’m only now seeing that you installed it with docker. so it surprises me that not so many people have this problem.best matthias",
"username": "profile"
},
{
"code": "",
"text": "Hi Matthias,trail and error works only if you realy khnow what are you doing. Im New in linux world and had no clue where i can begin to search. Only the logs were for me a point for searching.Where did you see, that i install rocketchat with Docker. I have done the normal installation no Docker or snap. It is recommended with docker but i dont used it.",
"username": "Thomas_Bersch"
},
{
"code": "",
"text": "hey thomas,sorry, i read that wrong.best\nmatthias",
"username": "profile"
}
]
| Problems with Wiredtiger/MOngoDB and Rocketchat | 2022-10-28T08:14:06.787Z | Problems with Wiredtiger/MOngoDB and Rocketchat | 4,737 |
|
null | [
"aggregation",
"queries",
"dot-net",
"data-modeling"
]
| [
{
"code": "{\n _id: ObjectId('12345')\n UserId: 1\n BalanceChanged: \"10\"\n BalanceCurrent: \"10\"\n Date: 2022-11-18T01:00:00\n ...\n},\n{\n _id: ObjectId('12346')\n UserId: 1\n BalanceChanged: 5\n BalanceCurrent: \"15\"\n Date: 2022-11-18T02:00:00\n ...\n}\n{\n _id: ObjectId('12345')\n UserId: 1\n BalanceCurrent: \"15\"\n History: [\n {\n BalanceChanged: \"10\"\n Date: 2022-11-18T01:00:00\n },\n {\n BalanceChanged: 5\n Date: 2022-11-18T02:00:00\n ...\n },\n ]\n}\n",
"text": "Hi,I have a collection with around 5M documents. This collection contains the history of a certain balances together with the current balance like this:When I want the current balance of all users, i am facing performance issues:I was wondering if I could do this another way where I can get the current balance of all users in lets say a few milliseconds?I am aware that I can create a inner collection like this:But I’m worried about running into limits (max document size for instance). Some users already have 50K documents in less then 3 months.Or is there another way to query this kind of data in a few milliseconds?Thanks!",
"username": "JohnHope"
},
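As a point of reference for the question above, here is a minimal mongosh sketch of one way to read the latest balance per user in a single aggregation. The collection name `balances` and the compound index are assumptions based on the fields shown in the post, not taken from the original application.

```js
// Compound index so the sort can be served by the index rather than in memory
db.balances.createIndex({ UserId: 1, Date: -1 });

// Latest BalanceCurrent per user
db.balances.aggregate([
  { $sort: { UserId: 1, Date: -1 } },
  { $group: {
      _id: "$UserId",
      balanceCurrent: { $first: "$BalanceCurrent" },
      asOf: { $first: "$Date" }
  } }
]);
```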
{
"code": "",
"text": "(takes around 2 seconds with a desc index on “Date”)The above looks huge but it may be normal. What is the total size of your databases? What are the specs of your installation? How did you obtain the 2secs? Can you provide the explain plan? What other indexes do you have? How many unique UserId?",
"username": "steevej"
},
{
"code": "{\n _id: ObjectId('12345')\n UserId: 1 ,\n BalanceCurrent: \"15\" ,\n Month : 2022-11-01T00:00:00\n History: [\n {\n BalanceChanged: \"10\"\n Date: 2022-11-18T01:00:00\n },\n {\n BalanceChanged: 5\n Date: 2022-11-18T02:00:00\n },\n ]\n}\n",
"text": "About beingworried about running into limits (max document size for instance)You could use the bucket pattern to store monthly balance history. So you would go midway from having one history document per balance change to one document per month.Something like:You could also have 2 collections; CurrentBalance and BalanceHistory. It is not because we can keep things together that we have to. The motto is to keep together things that are access together, but I see getting my current balance as a different use-case from getting the history.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Some about data modeling | 2022-11-18T17:34:45.810Z | Some about data modeling | 1,239 |
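A minimal mongosh sketch of the two-collection layout suggested in the replies above. The collection names (CurrentBalance, BalanceHistory), the numeric balance values and the extra field names are illustrative assumptions, not taken from the original post.

// On every balance change: update one small per-user document and append one history document.
db.CurrentBalance.updateOne(
  { UserId: 1 },
  { $inc: { BalanceCurrent: 5 }, $set: { UpdatedAt: new Date() } },
  { upsert: true }
);
db.BalanceHistory.insertOne({ UserId: 1, BalanceChanged: 5, Date: new Date() });

// "Current balance of all users" then only scans the small collection,
// independent of how many history entries each user accumulates.
db.CurrentBalance.find({}, { _id: 0, UserId: 1, BalanceCurrent: 1 });

The trade-off is one extra write per change in exchange for a read that no longer depends on the size of the history.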
null | [
"connecting",
"mongodb-shell"
]
| [
{
"code": "",
"text": "Hi All,\nI am getting servertimeout error when trying to connect mongodb through shell. Please help me out to figure it.\n\nimage948×385 14.5 KB\n",
"username": "balaji_d1"
},
{
"code": "",
"text": "just go to mongodb atlas admin panel.Go in security tab>Network Access> Then whitelist your IP by adding itstrong text",
"username": "Benjamin_Katlego"
},
{
"code": "",
"text": "The connection is to 127.0.0.1, doing anything on Atlas will not help.Is mongod running?If it is, then your firewall might block the connection.",
"username": "steevej"
},
{
"code": "",
"text": "I am using community version.",
"username": "balaji_d1"
},
{
"code": "",
"text": "Is your mongod up?\nCan you check status of mongod service from taskmanager",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "yes it is up… But also getting server timeout .",
"username": "balaji_d1"
},
{
"code": "",
"text": "Is your mongod running on default port?\nCheck mongod.log",
"username": "Ramachandra_Tummala"
}
]
| Getting server timeout when making connection through shell | 2022-11-18T11:47:51.809Z | Getting server timeout when making connection through shell | 1,886 |
null | [
"mongodb-shell"
]
| [
{
"code": "MongoDB: 6.0.3Mongosh: 1.6.0mongosh.confmongosh.conf mongosh:\n enableTelemetry: false\nmongosh --port **** --host **** 'enableTelemetry' => true,",
"text": "Hi, I am using MongoDB: 6.0.3 with Mongosh: 1.6.0 and tried the mongosh.conf configuration to set my value but I don’t see it is being used.\nI took the information from here:\nThis is my mongosh.conf:And I’m connecting to mongosh like this:\nmongosh --port **** --host ****And when checking the config I see this:\n 'enableTelemetry' => true,Any idea why is the conf not working?\nThanks!",
"username": "Oded_Raiches"
},
{
"code": "config",
"text": "I can’t find a corresponding ticket for that but I seem to remember that the configuration parameters and values set with the configuration file are not shown when accessed by individual users with the config API. Telemetry should be, however, disabled.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Hi @Massimiliano_Marcon , thanks for the quick reply! I will take your word for it \nI only found this ticket that may be relevant: https://jira.mongodb.org/browse/MONGOSH-1141?jql=text%20~%20\"Telemetry\"But not sure it answers to the same thing I found.\nBTW, would this be fixed in the future?",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "Yes, that is exactly the ticket I could not find!Yes, while it’s currently not the highest priority, we expect to fix it in the future.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| /etc/mongosh.conf is not used | 2022-11-21T08:34:41.861Z | /etc/mongosh.conf is not used | 1,805 |
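As a per-user alternative to the global /etc/mongosh.conf discussed above, telemetry can also be inspected and disabled from inside a mongosh session; a sketch of the shell's config API:

// Inside mongosh:
config.get("enableTelemetry")         // show the current value
config.set("enableTelemetry", false)  // persist the setting for the current OS user
disableTelemetry()                    // one-shot helper with the same effect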
null | [
"aggregation",
"queries"
]
| [
{
"code": "Students\n{\n \"_id\": {\n \"$oid\": string\n },\n \"student_id\": int,\n \"grades\": string,\n \"student_name\": string,\n...\n}\n",
"text": "Hi guys, say I’m using MongoDB standard and I have an attribute with something like letter grades of a student using a structure like:I want to go through the database and group the students by storing them in sets based on each letter grade (assuming that each student has a unique name) like:\nA+ : [Chris, Ada, Lee], A- : [John, Lisa], …\nHow would I structure the query through using something like addToSet() (without having to manually type each letter grade)?",
"username": "Owen_Shi"
},
{
"code": "db.collection.aggregate([{$group: {\n _id : \"$grades\",\n student_names : { $addToSet: \"$student_name\"}}}])\n\n",
"text": "Hi @Owen_Shi ,You can use the $group stage with a $addToSet operator:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Listing through Querying (aggregation?) | 2022-11-20T23:39:35.618Z | Listing through Querying (aggregation?) | 1,201 |
null | [
"aggregation",
"dot-net"
]
| [
{
"code": "public sealed class TextValue\n{\n [BsonId]\n public required string Id { get; set; }\n\n public required string Value { get; set; }\n\n public required string Title { get; set; }\n\n public required DateTime UpdatedAt { get; set; }\n\n public required string UpdatedBy { get; set; }\n\n public List<TextValue>? PreviousVersions { get; set; }\n}\n{\n\"$project\":\n {\n \"_id\":0,\n \"Title\":1,\n \"UpdatedAt\":1,\n \"UpdatedBy\":1,\n }\n}\n{\n \"_id\":ObjectId('637a6d1781189a7076f5980b'),\n \"Value\":\"SOMETHING\",\n \"Title\":\"Some random title\",\n \"UpdatedAt\":2022-11-28T06:00:00.000+00:00,\n \"UpdatedBy\":\"robo215\",\n \"PreviousVersions\":[\n {\n \"_id\":ObjectId('637a6d1a81189a7076f5980c'),\n \"Title\":\"The previous title\",\n \"UpdatedAt\":2022-11-28T06:00:00.000+00:00,,\n \"UpdatedBy\":\"jdoe\",\n }\n ]\n}\n{\n \"Value\":\"SOMETHING\",\n \"Title\":\"This is something else\"\n}\n{\n \"_id\":ObjectId('637a6d1781189a7076f5980b'),\n \"Value\":\"SOMETHING\",\n \"Title\":\"This is something else\",\n \"UpdatedAt\":2022-11-29T06:00:00.000+00:00,\n \"UpdatedBy\":\"nguy\",\n \"PreviousVersions\":[\n {\n \"_id\":ObjectId('637a6d1a81189a7076f5980c'),\n \"Title\":\"The previous title\",\n \"UpdatedAt\":2022-11-28T06:00:00.000+00:00,,\n \"UpdatedBy\":\"jdoe\",\n },\n {\n \"_id\":ObjectId('637a6d1a81189a7076f5980f'),\n \"Title\":\"Some random title\",\n \"UpdatedAt\":2022-11-28T06:00:00.000+00:00,\n \"UpdatedBy\":\"robo215\",\n }\n ]\n}\n",
"text": "Hello Everyone,\nI am fairly new to MongoDB so please excuse the question if it is fairly basic. I have been searching the issue for hours on google with no luck.I am looking to maintain a history of a document as part of the document with the latest representation being the root. To do so in the C#, i have a list of the class itself as part of the class. Effectively whenever i update the main document i want to insert its current state (less the list and _id) into the array. This is easy enough to do on the application side but this is problematic because of terrible performance. The information will be coming from an external solution with no idea of the existing value (and its ID) and i don’t want to make a call to the database to find the record to then update it and make a second call to save the update due to performance concerns. I am scratching my head on how to do it as part of the update query to make this efficient in mass. Any pointers would be greatly appreciated.Playing with the aggregation pipelines, it appears i can use $project to limit the fields i want down to what i need for the insert to the array but can’t seem to find how i could use that in conjunction with an update.The behavior i am lookin for is as follows:I have an existing record as such:Then i receive a new value in the app dto such as:My app will resolve the actioning user and time but what i am looking to do is find the record “SOMETHING” and update the titles and store the previous version in the history as:",
"username": "Robo215"
},
{
"code": "db.collection.updateOne({\"title\" : \"concat\"},\n[{$addFields : {title : \"concat1\", previousVersion : { $concatArrays :[{$ifNull :[\"$$ROOT.previousVersion\",[]]}, [\"$$ROOT\"]]}}},{$project : {\"previousVersion.previousVersion\" : 0}}])\ndb.collection.findOne()\n\n{ _id: ObjectId(\"637b372db68dcba8d6b608d8\"),\n indexes: [ 'x', 'y' ],\n attributes: [ { k: 'x', v: 'v' } ],\n metrics: { a: 1, b: 1 },\n title: 'concat' }\n\ndb.collection.updateOne({\"title\" : \"concat\"},\n[{$addFields : {title : \"concat1\", previousVersion : { $concatArrays :[{$ifNull :[\"$$ROOT.previousVersion\",[]]}, [\"$$ROOT\"]]}}},{$project : {\"previousVersion.previousVersion\" : 0}}])\n\ndb.collection.findOne()\n\n{ _id: ObjectId(\"637b372db68dcba8d6b608d8\"),\n indexes: [ 'x', 'y' ],\n attributes: [ { k: 'x', v: 'v' } ],\n metrics: { a: 1, b: 1 },\n title: 'concat1',\n previousVersion: \n [ { _id: ObjectId(\"637b372db68dcba8d6b608d8\"),\n indexes: [ 'x', 'y' ],\n attributes: [ { k: 'x', v: 'v' } ],\n metrics: { a: 1, b: 1 },\n title: 'concat' } ] }\n\ndb.collection.updateOne({\"title\" : \"concat1\"},\n[{$addFields : {title : \"concat2\", previousVersion : { $concatArrays :[{$ifNull :[\"$$ROOT.previousVersion\",[]]}, [\"$$ROOT\"]]}}},{$project : {\"previousVersion.previousVersion\" : 0}}])\n\n\n{ _id: ObjectId(\"637b372db68dcba8d6b608d8\"),\n indexes: [ 'x', 'y' ],\n attributes: [ { k: 'x', v: 'v' } ],\n metrics: { a: 1, b: 1 },\n title: 'concat2',\n previousVersion: \n [ { _id: ObjectId(\"637b372db68dcba8d6b608d8\"),\n indexes: [ 'x', 'y' ],\n attributes: [ { k: 'x', v: 'v' } ],\n metrics: { a: 1, b: 1 },\n title: 'concat' },\n { _id: ObjectId(\"637b372db68dcba8d6b608d8\"),\n indexes: [ 'x', 'y' ],\n attributes: [ { k: 'x', v: 'v' } ],\n metrics: { a: 1, b: 1 },\n title: 'concat1' } ] }\n",
"text": "Hi @Robo215 ,I think that you should consider using the version design pattern rather than nesting it in a document:The Document Versioning Pattern - When history is important in a documentBut if you wish to go the aggregation update route, I did manage to come with something complex:This solution is using the agg pipeline within updates, and concat previous version $$ROOT document to the next (omitting the projection of previousVersion inside previousVersion itself).So for the following example:Thanks\nPavel",
"username": "Pavel_Duchovny"
}
]
| Add section of existing document to a nested array in the same document as part of update | 2022-11-20T20:12:49.918Z | Add section of existing document to a nested array in the same document as part of update | 1,243 |
null | [
"dot-net",
"flexible-sync"
]
| [
{
"code": " //To keep pages list reference\n private ObservableCollection<IQueryable<ChatMessage>> messagePages = new \n ObservableCollection<IQueryable<ChatMessage>>();\n //To keep listener reference and dispose it later\n private List<IDisposable> listeners = new List<IDisposable>();\n\n public async Task GetMessagePageAsync(string conversationId, DateTimeOffset startDate, DateTimeOffset endDate)\n {\n var messageDb = new MessageDB();\n var messagesPage = await messageDb.GetMessagesForTimePeriod(conversationId, startDate, endDate);\n if (messagesPage != null)\n {\n messagePages.Add(messagesPage);\n var listener1 = messagePages[messagePages.Count() - 1].SubscribeForNotifications(RealmChangeListener);\n listeners.Add(listener1);\n }\n }\n internal async Task<IQueryable<ChatMessage>> GetMessagesForTimePeriod(string conversationId, DateTimeOffset startDate, DateTimeOffset dayDate )\n {\n //Getting realm instance\n Realm realm = await _dbService.GetDbInstanceAsync();\n return realm.All<ChatMessage>().Where(o => o.ConversationID == conversationId && o.Timestamp >= dayDate && o.Timestamp <= startDate);\n }\n",
"text": "Realm Sdk: realm-dotnet version 10.15.1\nEnvironment: Xamarin FormsHello,\nI am working with an app where I need to manage multiple pages of the same collection. But the problem is that if I use single collection for all pages it is working fine. but when I use multiple collections and add their reference in the list to keep a reference and keep the listener alive, It does not work. Following is the code example:Following is the MessageDb.GetMessagesForTimePeriod:",
"username": "Ahmad_Pasha"
},
{
"code": "RealmChangeListner",
"text": "Hi @Ahmad_Pasha, thanks for your message.Just to be sure I got everything correctly, the issue here is that RealmChangeListner is not invoked, am I correct?\nIf so, could it be that the listeners collection is getting collected when moving to another page maybe?",
"username": "papafe"
},
{
"code": "",
"text": "@papafe Thanks for reply,\nEven If I don’t change the page, It still doesn’t work except for one time when the listener is set.\nAnd could be called second time if new changes arrives during the first call.",
"username": "Ahmad_Pasha"
},
{
"code": "",
"text": "I see. A couple of things:",
"username": "papafe"
}
]
| Subscribe for notification is not working | 2022-11-18T06:11:27.720Z | Subscribe for notification is not working | 2,160 |
null | []
| [
{
"code": "",
"text": "I am using MongoDB community edition on my localhost Ubuntu machine. But every time I restart my system, the databases I created previously are gone.How to prevent this?",
"username": "Robin_S"
},
{
"code": "",
"text": "That is very unlikely if it is installed correctly, started correctly and terminated correctly.How do you start mongod?How do you create your databases? mongosh? java program? nodejs program?How do you verify that the databases are created correctly?Can you share a screenshot of running mongosh that shows the content of one of your database?",
"username": "steevej"
},
{
"code": "sudo systemctl start mongodmongoshCurrent Mongosh Log ID: *************\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.0\nUsing MongoDB: 6.0.3\nUsing Mongosh: 1.6.0\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n------\n The server generated these startup warnings when booting\n Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n vm.max_map_count is too low\n------\n\n------\n Enable MongoDB's free cloud-based monitoring service, which will then receive and display\n metrics about your deployment (disk utilization, CPU, operation statistics, etc).\n \n The monitoring data will be available on a MongoDB website with a unique URL accessible to you\n and anyone you share the URL with. MongoDB may use this information to make product\n improvements and to suggest MongoDB products and deployment options to you.\n \n To enable free monitoring, run the following command: db.enableFreeMonitoring()\n To permanently disable this reminder, run the following command: db.disableFreeMonitoring()\nuse shop\nswitched to db shop\nshop> db.products.insertOne({\"productName\": \"A Computer\"})\n{\n acknowledged: true,\n insertedId: ObjectId(\"637b23fcc334f2258856a\")\n}\nshop> db.products.find()\n[\n {\n _id: ObjectId(\"637b23fcc334f2258856a\"),\n productName: 'A Computer'\n }\n]\n\n",
"text": "I start mongod by using sudo systemctl start mongod , I create database using mongosh.This is the output I get after using mongosh command:This is how I am creating a database:But when I restart my PC the shop database is gone.",
"username": "Robin_S"
},
{
"code": "",
"text": "Restarting mongod will not delete dbs\nShow output of show dbs.Restart and show output of the same command again\nAlso your mongod is running with access control disabled\nTry to secure your mongod with --auth and check if you are observing same behaviour",
"username": "Ramachandra_Tummala"
}
]
| Databases getting deleted on system restart | 2022-11-20T16:40:13.530Z | Databases getting deleted on system restart | 1,939 |
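When databases seem to vanish after a reboot, one quick check is to confirm which configuration and data directory the running mongod is actually using; connecting to a second mongod started by hand with a different --dbpath would make previously created databases appear to be gone. A sketch in mongosh (requires appropriate privileges, and the fields shown depend on how the server was started):

// Show the options the server was started with,
// e.g. parsed.config and parsed.storage.dbPath when they are set explicitly.
db.serverCmdLineOpts().parsed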
null | [
"java",
"python",
"sharding"
]
| [
{
"code": "",
"text": "Hi All,We are trying to pump messages from kafka to Mongodb . Kafka is running on Ubuntu while Mongo DB is installed on RHEL 8 servers. MongoDB is a shareded one and we have set up mongos instances to connect to MongoDB.The load from kafka to MongoDB via kafka-connector is failing with error when connection uri is set to mongos instance:Bulk write operation error on server. Write errors: [BulkWriteError{index=0, code=61, message=‘Failed to target upsert by query :: could not extract exact shard key’, details={}}]However, load succeeds when we set the the mongo db instances as connection uri.Please helpFull error:/python-code-test-data-gen$ {“mongodb-sink-connector”:{“status”:{“name”:“mongodb-sink-connector”,“connector”:{“state”:“RUNNING”,“worker_id”:\"\"},“tasks”:[{“id”:0,“state”:“FAILED”,“worker_id”:\"\",“trace”:“org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:618)\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:235)\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:204)\\n\\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:200)\\n\\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:255)\\n\\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\\n\\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:829)\\nCaused by: org.apache.kafka.connect.errors.DataException: com.mongodb.MongoBulkWriteException: Bulk write operation error on server . Write errors: [BulkWriteError{index=0, code=61, message=‘Failed to target upsert by query :: could not extract exact shard key’, details={}}]. \\n\\tat com.mongodb.kafka.connect.sink.StartedMongoSinkTask.handleTolerableWriteException(StartedMongoSinkTask.java:168)\\n\\tat com.mongodb.kafka.connect.sink.StartedMongoSinkTask.bulkWriteBatch(StartedMongoSinkTask.java:111)\\n\\tat java.base/java.util.ArrayList.forEach(ArrayList.java:1541)\\n\\tat com.mongodb.kafka.connect.sink.StartedMongoSinkTask.put(StartedMongoSinkTask.java:76)\\n\\tat com.mongodb.kafka.connect.sink.MongoSinkTask.put(MongoSinkTask.java:90)\\n\\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:584)\\n\\t… 10 more\\nCaused by: com.mongodb.MongoBulkWriteException: Bulk write operation error on server . Write errors: [BulkWriteError{index=0, code=61, message=‘Failed to target upsert by query :: could not extract exact shard key’, details={}}]. 
\\n\\tat com.mongodb.internal.connection.BulkWriteBatchCombiner.getError(BulkWriteBatchCombiner.java:167)\\n\\tat com.mongodb.internal.connection.BulkWriteBatchCombiner.throwOnError(BulkWriteBatchCombiner.java:192)\\n\\tat com.mongodb.internal.connection.BulkWriteBatchCombiner.getResult(BulkWriteBatchCombiner.java:136)\\n\\tat com.mongodb.internal.operation.BulkWriteBatch.getResult(BulkWriteBatch.java:224)\\n\\tat com.mongodb.internal.operation.MixedBulkWriteOperation.executeBulkWriteBatch(MixedBulkWriteOperation.java:363)\\n\\tat com.mongodb.internal.operation.MixedBulkWriteOperation.lambda$execute$2(MixedBulkWriteOperation.java:260)\\n\\tat com.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$2(OperationHelper.java:575)\\n\\tat com.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:600)\\n\\tat com.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$3(OperationHelper.java:574)\\n\\tat com.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:600)\\n\\tat com.mongodb.internal.operation.OperationHelper.withSourceAndConnection(OperationHelper.java:573)\\n\\tat com.mongodb.internal.operation.MixedBulkWriteOperation.lambda$execute$3(MixedBulkWriteOperation.java:232)\\n\\tat com.mongodb.internal.async.function.RetryingSyncSupplier.get(RetryingSyncSupplier.java:65)\\n\\tat com.mongodb.internal.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:268)\\n\\tat com.mongodb.internal.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:84)\\n\\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:212)\\n\\tat com.mongodb.client.internal.MongoCollectionImpl.executeBulkWrite(MongoCollectionImpl.java:443)\\n\\tat com.mongodb.client.internal.MongoCollectionImpl.bulkWrite(MongoCollectionImpl.java:423)\\n\\tat com.mongodb.kafka.connect.sink.StartedMongoSinkTask.bulkWriteBatch(StartedMongoSinkTask.java:104)\\n\\t… 14 more\\n”},",
"username": "Rahul_M"
},
{
"code": "",
"text": "Hi Rahul_M ,Were you able to fix this issue… Im facing the same issue with sharded mongo collection",
"username": "vidhya_saravanan"
}
]
| Mongo Db kafka connector error while writing to sharded MongoDB via mongos instance | 2022-09-02T12:00:57.972Z | Mongo Db kafka connector error while writing to sharded MongoDB via mongos instance | 2,332 |
null | [
"text-search"
]
| [
{
"code": " Post:\n {\n _id: ObjectId,\n text: String,\n _comments: [ Comment ],\n _author: [ User ],\n }\n Comment:\n {\n _id: ObjectId,\n text: String,\n _subComments: [ Comment ],\n _author: [ User ],\n _post: Post\n }\n User:\n {\n _id: ObjectId,\n name: {\n firstName: String,\n lastName: String,\n }\n }\n",
"text": "Hello everybody, I have a question about Atlas Search:For example given these collections:Is it possible to do a full text search for Posts where the text match the query, OR its Comments text match the query, OR the Post author (User) full name (firstName + lastName) matches the query, OR the Comments author full name matches the query?Basically searching for multiple collections within one search, and generate a match score based on number of matches and accuracy.If this is not possible, do you know if this is possible with a third party service?Thanks!",
"username": "Ignacio_Montero"
},
{
"code": "",
"text": "I have the same question… some direction please.Thanks",
"username": "Maheep_Tathgur"
},
{
"code": "",
"text": "Bumping this post up! I have the same query.",
"username": "Mathews_Joseph"
},
{
"code": "",
"text": "I also have this question.Is mongo planning on adding multiple-collection $search to Atlas full-text $search?",
"username": "Tyson_Jeffreys"
},
{
"code": "$search$searchMeta$unionWith$lookup",
"text": "Please review the How to Run Queries Across Collections documentation for Atlas Search.Please note that you’ll need a cluster running MongoDB 6.0 or higher to specify the Atlas Search $search or $searchMeta stage in the $unionWith and $lookup pipeline stages.",
"username": "Jason_Tran"
}
]
| Full text search in multiple collections | 2021-01-15T17:46:22.988Z | Full text search in multiple collections | 6,210 |
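A sketch of the approach referenced in the last reply: on MongoDB 6.0+ an Atlas Search stage can run inside $unionWith, so one aggregation can search two collections. The index name "default", the posts/comments collection names and the "text" field path are assumptions based on the question, and each collection needs its own search index.

db.posts.aggregate([
  { $search: { index: "default", text: { query: "lighthouse", path: "text" } } },
  { $set: { source: "posts" } },
  { $unionWith: {
      coll: "comments",
      pipeline: [
        { $search: { index: "default", text: { query: "lighthouse", path: "text" } } },
        { $set: { source: "comments" } }
      ]
  } }
])

Note that the relevance scores come from separate indexes, so they are not directly comparable across the two collections.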
null | [
"aggregation"
]
| [
{
"code": "{\"_id\":{\"$oid\":\"6379204a8cf5677554c26c1b\"},\"_partition\":\"6378eb74f6613d5d4192da79\",\"name\":\"silas test\"}\n{\"_id\":{\"$oid\":\"6378eb74f6613d5d4192da79\"},\"_partition\":\"6378eb74f6613d5d4192da79\",\"name\":\"test media\"}\n{\n \"_id\":{\n \"$oid\":\"6379ee6c770a8f43afc8e3e4\"},\n \"_partition\":\"6378eb74f6613d5d4192da79\",\n \"types\":[\n {\n \"mediaReference\":[\n {\"$oid\":\"6379204a8cf5677554c26c1b\"}\n ],\n \"mediatype\":\"podcast\"\n },\n {\n \"mediaReference\":[\n {\"$oid\":\"6378eb74f6613d5d4192da79\"}\n ],\n \"mediatype\":\"movie\"\n }\n ],\n \"username\":\"silas\"\n}\n{\n \"_id\":{\n \"$oid\":\"6379ee6c770a8f43afc8e3e4\"},\n \"_partition\":\"6378eb74f6613d5d4192da79\",\n \"types\":[\n {\n \"mediaReference\":[\n {\n \"_id\": {\n \"$oid\":\"6379204a8cf5677554c26c1b\"\n },\n \"_partition\":\"6378eb74f6613d5d4192da79\",\n \"name\":\"silas test\"\n }\n ],\n \"mediatype\":\"podcast\"\n },\n {\n \"mediaReference\":[\n {\n \"_id\": {\n \"$oid\":\"6378eb74f6613d5d4192da79\"\n },\n \"_partition\":\"6378eb74f6613d5d4192da79\",\n \"name\":\"test media\"\n }\n ],\n \"mediatype\":\"movie\"\n }\n ],\n \"username\":\"silas\"\n}\n",
"text": "I have two collections in my mongoDb database, one called media, one called users.Documents in the media collection:Document in the users collection:Now im looking for a way to use the mongoDb aggregation pipeline to get the following result:Basically im searching for a way to paste the refferenced document into the mediaReference object inside the users document.i would appreciate any help",
"username": "Silas_Jeydo"
},
{
"code": "db.users.aggregate([{\n $unwind: {\n path: '$types'\n }\n}, {\n $lookup: {\n from: 'media',\n localField: 'types.mediaReference',\n foreignField: '_id',\n as: 'types.mediaReference'\n }\n}, {\n $group: {\n _id: '$_id',\n merged: {\n $first: '$$ROOT'\n },\n types: {\n $push: '$types'\n }\n }\n}, {\n $addFields: {\n 'merged.types': '$types'\n }\n}, {\n $replaceRoot: {\n newRoot: '$merged'\n }\n}])\n",
"text": "Hi @Silas_Jeydo ,I have to say that it sounds like you should review your data mode, if you need the data to be queried together you should consider storing it together, even if it means duplicating data from “media” into users collection.However the following query will do what you need but its over-complex in my opinion and might be resource intesive:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you for the fast response!My reason for choosing this type of data model was that one user could store for example 100 podcasts and 100 movies in the media collection. To put all of them together in one document seemed too big for me.\nAlso one document in the media collection which has currently only the id, partiton and a name in it will have many more object entries.\nConsidering this, would you still restructure the data model or use the aggregation pipeline?Thank you for your help",
"username": "Silas_Jeydo"
},
{
"code": "",
"text": "Hi @Silas_Jeydo ,I don’t think that 200 elements in a document with this amount of sub fields is big or unadvisable.We do want to contain arrays under 1000 elements.We offer an outlier pattern to deal with large specific use cases that requires data breaking into multiple buckets:The Outlier Pattern helps when there's exceptionally large records occasionally occurring in your data setThanks\nPavel",
"username": "Pavel_Duchovny"
}
]
| Aggregate nested objects | 2022-11-20T12:10:32.966Z | Aggregate nested objects | 3,884 |
null | [
"crud"
]
| [
{
"code": "Character.updateMany(\n { user: userId },\n { $set: { \"regionRaid.$[element].clear\": false } },\n { arrayFilters: [{ \"element.clear\": { $eq: true } }] }\n_id : 63770ae21f37cb738724765c\nuser : 636231e72889df2615d636c3\nname : \"name\"\nlevel : 1548.33\nregionRaid : \n[ region :\"valtan\", clear : false, id : 1 ], \n[ region :\"iliakan\", clear : false, id : 2 ], \n[ region :\"vyakis\", clear : false, id : 3 ]\nDateToReset : \"Wed Nov 23 2022 13:32:34 GMT+0900 \"\nupdatedAt : \"Fri Nov 18 2022 15:26:51 GMT+0900\"\n",
"text": "i want to replace all the regionRaid.clear : “true” to “false”\ni tried to read all doc and reference but it can’t help me",
"username": "qkrwjdtn09"
},
{
"code": "_id : 63770ae21f37cb738724765c\nuser : 636231e72889df2615d636c3\nname : \"name\"\nlevel : 1548.33\nregionRaid : \n[ region :\"valtan\", clear : false, id : 1 ], \n[ region :\"iliakan\", clear : false, id : 2 ], \n[ region :\"vyakis\", clear : false, id : 3 ]\nDateToReset : \"Wed Nov 23 2022 13:32:34 GMT+0900 \"\nupdatedAt : \"Fri Nov 18 2022 15:26:51 GMT+0900\"\n",
"text": "Is your sample documentthe input or the result?If it is the input then none of clear field is true so no document is updated. If it is the result, then the code works because clear is false for all. We do know if any were true before.",
"username": "steevej"
},
{
"code": "[ region :\"valtan\", clear : true, id : 1 ], \n[ region :\"iliakan\", clear : false, id : 2 ], \n[ region :\"vyakis\", clear : true, id : 3 ]\n",
"text": "Sorry, I forgot the description of that clear is updated by client’s input.\nwhen the updateMany is excuted, clear value can be true or false.i want all the clear value become false.",
"username": "qkrwjdtn09"
},
{
"code": "mongosh > array = [ region :\"valtan\", clear : true, id : 1 ]\n> SyntaxError: Unexpected token, expected \",\" (1:13)\n\n> 1 | a = [ region :\"valtan\", clear : true, id : 1 ]\n | ^\nmongosh > object = { region :\"valtan\" , clear : true, id : 1 }\n< { region: 'valtan', clear: true, id: 1 }\n",
"text": "The shared data is not valid JSON:The following is the correct syntax:Your updateMany() is appropriate and should work. If it does not then check your query part, may be your userId is wrong. Often people make the mistake of passing a string when an ObjectId should be used.",
"username": "steevej"
},
{
"code": "> db.collection.updateMany( {}, { $set: { \"regionRaid.$[].clear\": false}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\n> db.collection.find()\n[\n {\n _id: ObjectId(\"6377594b41a44b142239964a\"),\n user: ObjectId(\"636231e72889df2615d636c3\"),\n name: 'name',\n level: 1548.33,\n regionRaid: [\n { region: 'valtan', clear: false, id: 1 },\n { region: 'iliakan', clear: false, id: 2 },\n { region: 'vyakis', clear: false, id: 3 }\n ],\n DateToReset: ISODate(\"2022-11-23T04:32:34.000Z\"),\n updatedAt: ISODate(\"2022-11-18T06:26:51.000Z\")\n }\n]\n\n",
"text": "Hi @qkrwjdtn09 and welcome to the MongoDB community forum.I tried to replica the update query in my local environment, and I was successfully able to update the recordsHowever, the above query is tested based on the sample document provided. I would recommend you to perform testing for the complete dataset.\nLet us know if you have any further queries.Regards\nAasawari",
"username": "Aasawari"
}
]
| updateMany is not working with arrayFilters | 2022-11-18T07:54:41.707Z | updateMany is not working with arrayFilters | 2,261 |
null | [
"crud"
]
| [
{
"code": "",
"text": "Hello , I need query for converting statusDate into Date in an history array.",
"username": "Gayathri_Subramanyam"
},
{
"code": "",
"text": "This question is already answered in your other post Updating Data Type From String to Date - #3 by steevej.If you still have an issue, share what your tried and explain how it fails to deliver the appropriate result.And before posting, please read Formatting code and log snippets in posts so that your documents and code is formatted correctly.",
"username": "steevej"
},
{
"code": "",
"text": "The query is changed status object as array we need status as object in history array and statusDate as Date in status object .we need some changes in query .",
"username": "Gayathri_Subramanyam"
}
]
| Update Query For String to Date in an Array | 2022-11-18T12:28:55.498Z | Update Query For String to Date in an Array | 3,081 |
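A sketch of one way to do this conversion with an aggregation-pipeline update and $toDate. The document shape used here, history as an array of subdocuments each holding a status object with a string statusDate, is an assumption since the real layout was not shown in the thread; adjust the paths to match.

db.collection.updateMany({}, [
  { $set: {
      history: {
        $map: {
          input: "$history",
          in: { $mergeObjects: [
            "$$this",
            { status: { $mergeObjects: [
              "$$this.status",
              { statusDate: { $toDate: "$$this.status.statusDate" } }
            ] } }
          ] }
        }
      }
  } }
])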
null | [
"aggregation"
]
| [
{
"code": "[{\n \"userref\": \"AAA\",\n \"sessionref\" : \"S1\",\n \"results\": [{\n \"gameref\": \"Spades\",\n \"dateplayed\": ISODate(2022-01-01T10:00:00),\n \"score\": 1000\n }, {\n \"gameref\": \"Hearts\",\n \"dateplayed\": ISODate(2022-01-02T10:00:00),\n \"score\": 500\n }, {\n \"gameref\": \"Clubs\",\n \"dateplayed\": ISODate(2022-01-05T10:00:00),\n \"score\": 200\n }]\n}, {\n \"userref\": \"AAA\",\n \"sessionref\" : \"S2\",\n \"results\": [{\n \"gameref\": \"Spades\",\n \"dateplayed\": ISODate(2022-02-02T10:00:00),\n \"score\": 1000\n }, {\n \"gameref\": \"Clubs\",\n \"dateplayed\": ISODate(2022-05-02T10:00:00),\n \"score\": 200\n }]\n}, {\n \"userref\": \"BBB\",\n \"sessionref\" : \"S1\",\n \"results\": [{\n \"gameref\": \"Clubs\",\n \"dateplayed\": ISODate(2022-01-05T10:00:00),\n \"score\": 200\n }]\n}]\n",
"text": "My collection, userresults, has documents which are unique by userref and sessionref together. A session has a selection of game results in a results array. I have already filtered the results to return those userresults documents which contain a result for game “Clubs”.What I need to do within my aggregation is select the userresult document FOR EACH USER that contains the most recently played game of Clubs, ie in this case it will return the AAA/S2 document and the BBB/S1 document.I’m guessing I need a group on the userref as a starting point, but then how do I select the rest of the document based on the Clubs date?Thanks!",
"username": "Fiona_Lovett1"
},
{
"code": "db.collection.aggregate([\n {\n '$unwind': {\n 'path': '$results'\n }\n }, {\n '$match': {\n 'results.gameref': 'Clubs'\n }\n }, {\n '$group': {\n '_id': {\n 'user': '$userref',\n 'gameref': '$results.gameref'\n },\n 'mostrecentdate': {\n '$max': '$results.dateplayed'\n }\n }\n }\n ])\n[\n {\n _id: { user: 'AAA', gameref: 'Clubs' },\n mostrecentdate: ISODate(\"2022-05-02T04:30:00.000Z\")\n },\n {\n _id: { user: 'BBB', gameref: 'Clubs' },\n mostrecentdate: ISODate(\"2022-01-05T04:30:00.000Z\")\n }\n]\n",
"text": "Hi @Fiona_Lovett1 and welcome to MongoDB community forum!!Based on the above example document shared, the below query might be helpful in achieving the desired output:However, please note that, the above query has only been tested on the sample documents provided. I would recommend testing thoroughly on your own test environment to verify it suits all your use case and requirements before running against any production data. collection.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| Choose single document based on dates in subdocument | 2022-11-11T16:15:47.022Z | Choose single document based on dates in subdocument | 809 |
null | [
"aggregation"
]
| [
{
"code": "db.xiaoxu.aggregate([\\{ $match: {fld4: null\n }\n },\\{$group: {_id: \"$fld4\",total: {$sum: 1\n }\n }\n }\n])\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"POCDB.xiaoxu\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"fld4\": {\n \"$eq\": null\n }\n },\n \"queryHash\": \"7937EE4F\",\n \"planCacheKey\": \"C77A1A63\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"transformBy\": {\n \"fld4\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"fld4\": {\n \"$eq\": null\n }\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"fld4\": 1\n },\n \"indexName\": \"fld4_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"fld4\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"fld4\": [\n \"[undefined, undefined]\",\n \"[null, null]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 550000,\n \"executionTimeMillis\": 1332,\n \"totalKeysExamined\": 550001,\n \"totalDocsExamined\": 550000,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 550000,\n \"executionTimeMillisEstimate\": 158,\n \"works\": 550001,\n \"advanced\": 550000,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 570,\n \"restoreState\": 570,\n \"isEOF\": 1,\n \"transformBy\": {\n \"fld4\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"fld4\": {\n \"$eq\": null\n }\n },\n \"nReturned\": 550000,\n \"executionTimeMillisEstimate\": 121,\n \"works\": 550001,\n \"advanced\": 550000,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 570,\n \"restoreState\": 570,\n \"isEOF\": 1,\n \"docsExamined\": 550000,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 550000,\n \"executionTimeMillisEstimate\": 40,\n \"works\": 550001,\n \"advanced\": 550000,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 570,\n \"restoreState\": 570,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"fld4\": 1\n },\n \"indexName\": \"fld4_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"fld4\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"fld4\": [\n \"[undefined, undefined]\",\n \"[null, null]\"\n ]\n },\n \"keysExamined\": 550001,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0,\n \"indexDef\": {\n \"indexName\": \"fld4_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"fld4\": []\n },\n \"keyPattern\": {\n \"fld4\": 1\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"direction\": \"forward\"\n }\n }\n }\n }\n }\n },\n \"nReturned\": 550000,\n \"executionTimeMillisEstimate\": 1217\n },\n {\n \"$group\": {\n \"_id\": \"$fld4\",\n \"total\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"total\": 72\n },\n \"totalOutputDataSizeBytes\": 229,\n \"usedDisk\": false,\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 1330\n }\n ],\n \"serverInfo\": {\n \"host\": \"vmt30129\",\n \"port\": 51001,\n \"version\": \"5.0.2\",\n \"gitVersion\": \"6d9ec525e78465dcecadcff99cce953d380fedc8\"\n },\n 
\"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"xiaoxu\",\n \"pipeline\": [\n {\n \"$match\": {\n \"fld4\": null\n }\n },\n {\n \"$group\": {\n \"_id\": \"$fld4\",\n \"total\": {\n \"$sum\": 1\n }\n }\n }\n ],\n \"cursor\": {},\n \"$db\": \"POCDB\"\n },\n \"ok\": 1\n}\ndb.xiaoxu.aggregate([\n { $match: {fld4: null\n }\n },\n {$group: {_id: null,total: {$sum: 1\n }\n }\n }\n])\n \n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"POCDB.xiaoxu\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"fld4\": {\n \"$eq\": null\n }\n },\n \"queryHash\": \"2B634F0D\",\n \"planCacheKey\": \"FC6E7CF8\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"OR\",\n \"inputStages\": [\n {\n \"stage\": \"COUNT_SCAN\",\n \"keyPattern\": {\n \"fld4\": 1\n },\n \"indexName\": \"fld4_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"fld4\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"indexBounds\": {\n \"startKey\": {\n \"fld4\": undefined\n },\n \"startKeyInclusive\": true,\n \"endKey\": {\n \"fld4\": undefined\n },\n \"endKeyInclusive\": true\n }\n },\n {\n \"stage\": \"COUNT_SCAN\",\n \"keyPattern\": {\n \"fld4\": 1\n },\n \"indexName\": \"fld4_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"fld4\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"indexBounds\": {\n \"startKey\": {\n \"fld4\": null\n },\n \"startKeyInclusive\": true,\n \"endKey\": {\n \"fld4\": null\n },\n \"endKeyInclusive\": true\n }\n }\n ]\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 550000,\n \"executionTimeMillis\": 424,\n \"totalKeysExamined\": 550002,\n \"totalDocsExamined\": 0,\n \"executionStages\": {\n \"stage\": \"OR\",\n \"nReturned\": 550000,\n \"executionTimeMillisEstimate\": 60,\n \"works\": 550002,\n \"advanced\": 550000,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 551,\n \"restoreState\": 551,\n \"isEOF\": 1,\n \"dupsTested\": 550000,\n \"dupsDropped\": 0,\n \"inputStages\": [\n {\n \"stage\": \"COUNT_SCAN\",\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1,\n \"advanced\": 0,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 551,\n \"restoreState\": 551,\n \"isEOF\": 1,\n \"keysExamined\": 1,\n \"keyPattern\": {\n \"fld4\": 1\n },\n \"indexName\": \"fld4_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"fld4\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"indexBounds\": {\n \"startKey\": {\n \"fld4\": undefined\n },\n \"startKeyInclusive\": true,\n \"endKey\": {\n \"fld4\": undefined\n },\n \"endKeyInclusive\": true\n }\n },\n {\n \"stage\": \"COUNT_SCAN\",\n \"nReturned\": 550000,\n \"executionTimeMillisEstimate\": 18,\n \"works\": 550001,\n \"advanced\": 550000,\n 
\"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 551,\n \"restoreState\": 551,\n \"isEOF\": 1,\n \"keysExamined\": 550001,\n \"keyPattern\": {\n \"fld4\": 1\n },\n \"indexName\": \"fld4_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"fld4\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"indexBounds\": {\n \"startKey\": {\n \"fld4\": null\n },\n \"startKeyInclusive\": true,\n \"endKey\": {\n \"fld4\": null\n },\n \"endKeyInclusive\": true\n }\n }\n ]\n }\n }\n },\n \"nReturned\": 550000,\n \"executionTimeMillisEstimate\": 362\n },\n {\n \"$group\": {\n \"_id\": {\n \"$const\": null\n },\n \"total\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"total\": 72\n },\n \"totalOutputDataSizeBytes\": 229,\n \"usedDisk\": false,\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 415\n }\n ],\n \"serverInfo\": {\n \"host\": \"vmt30129\",\n \"port\": 51001,\n \"version\": \"5.0.2\",\n \"gitVersion\": \"6d9ec525e78465dcecadcff99cce953d380fedc8\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"xiaoxu\",\n \"pipeline\": [\n {\n \"$match\": {\n \"fld4\": null\n }\n },\n {\n \"$group\": {\n \"_id\": null,\n \"total\": {\n \"$sum\": 1\n }\n }\n }\n ],\n \"cursor\": {},\n \"$db\": \"POCDB\"\n },\n \"ok\": 1\n}\n",
"text": "–this genergate not covered query cause poor performance–this genergate covered query",
"username": "jing_xu"
},
{
"code": "db.xiaoxu.aggregate([{ $match:{fld4:null}},{$group:{_id:\"$fld4\",total:{$sum:1}}}])\ndb.xiaoxu.aggregate([{ $match:{fld4:null}},{$group:{_id:null,total:{$sum:1}}}])\n",
"text": "the difference is the covered query:–this genergate not covered query cause poor performance–this genergate covered query",
"username": "jing_xu"
}
]
| Mongodb 5.0 $group null and $field indicates different execution plan | 2022-11-21T02:16:31.973Z | Mongodb 5.0 $group null and $field indicates different execution plan | 1,040 |
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": "server started on port 8080\n/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:122\n op.cb(new error_1.MongoNetworkError(`connection ${this.id} to ${this.address} closed`));\n ^\n\nMongoNetworkError: connection 1 to 34.71.95.215:27017 closed\n at Connection.handleIssue (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:122:23)\n at TLSSocket.<anonymous> (/home/overlord/github_sleepywakes_thunderroost/node_modules/mongoose/node_modules/mongodb/lib/cmap/connection.js:63:39)\n at TLSSocket.emit (node:events:394:28)\n at node:net:662:12\n at TCP.done (node:_tls_wrap:580:7)\n[nodemon] app crashed - waiting for file changes before starting...\n",
"text": "Greetings,I have been trying to connect my GCP VM application for some time. I’ve whitelisted the IPs of my personal machine and the external IP (34.68.254.120) of the virtual machine. This VM IP is pingable.When I start my app on the VM server, it crashes with the following error:When I open the MongoDB whitelist to 0.0.0.0/0, my VM server starts and I’m able to do a web preview of my app through GCP on port 8080.What’s the interpretation of this error message? Thanks in advance for any help!",
"username": "SleepyWakes"
},
{
"code": "",
"text": "I’ve just tried creating another VM instance from scratch, and I have the same problem – server crash with error above.I should also note that when I open the whitelist to 0.0.0.0/0, I can access through GCP’s web preview, but I CANNOT access the VM’s external IP address (the one I whitelisted in Mongodb) through my browser. It times out. I’m not sure if this is related, or if it’s a second problem I will have to resolve after getting the whitelist issue resolved.Thanks for any guidance. I’m fully stuck.",
"username": "SleepyWakes"
},
{
"code": "(issue.isClose) {",
"text": "I’ve been looking at the Mongoose code that generated the error, in:\n…node_modules\\mongoose\\node_modules\\mongodb\\lib\\cmapI’m not experienced enough to know what is fully going on, but it appears the connection is closing (line before is if (issue.isClose) {). The IP address in the error message (34.71.95.215) appears to be the Google Data Center where my VM is running. I’m not sure what this means by thought it my help for anyone who has ideas on next steps for me.",
"username": "SleepyWakes"
},
{
"code": " handleIssue(issue) {\n if (this.closed) {\n return;\n }\n if (issue.destroy) {\n this[kStream].destroy(typeof issue.destroy === 'boolean' ? undefined : issue.destroy);\n }\n this.closed = true;\n for (const [, op] of this[kQueue]) {\n if (issue.isTimeout) {\n op.cb(new error_1.MongoNetworkTimeoutError(`connection ${this.id} to ${this.address} timed out`, {\n beforeHandshake: this.hello == null\n }));\n }\n else if (issue.isClose) {\n op.cb(new error_1.MongoNetworkError(`connection ${this.id} to ${this.address} closed`));\n }\n else {\n op.cb(typeof issue.destroy === 'boolean' ? undefined : issue.destroy);\n }\n }\n this[kQueue].clear();\n this.emit(Connection.CLOSE);\n }\nissue.isClose",
"text": "If it helps, here’s the code within the mongoose file listed above:with the issue.isClose line throwing the error.",
"username": "SleepyWakes"
},
{
"code": "",
"text": "Another update on the off-chance that someone is reading this. I did an IP check through Cloud Shell directly from the VM, and it gives me a different IP address than GCP’s “external IP.” Whitelisting this new IP allows me to connect. Woohoo. (I’m curious why this is the case, but just happy it is finally working.)However, I still cannot access either the GCP VM stated external IP address (‘cannot connect’ error) or the Cloud Shell reported IP address (‘took too long to connect’ error) through Chrome. I can connect through GCP’s Web Preview, though.",
"username": "SleepyWakes"
},
{
"code": "",
"text": "Glad to hear you were able to eventually connect Steve.However, I still cannot access either the GCP VM stated external IP address (‘cannot connect’ error) or the Cloud Shell reported IP address (‘took too long to connect’ error) through Chrome.Regarding the above statement, I just want to clarify what you mean by “through Chrome”. Do you mean the Atlas UI itself isn’t able to be accessed from that same machine?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks, Jason. A coder friend came over just now and dug into my problem. I was using Cloud Shell, and he discovered that we needed to SSH into the VM so that the actual IP address was used. All of my file cloning, etc., done through Cloud Shell, was not putting it on the server itself. Once he cloned my files using SSH, all was good.",
"username": "SleepyWakes"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable to Connect with GCP Virtual Machine | 2022-11-14T14:27:13.814Z | Unable to Connect with GCP Virtual Machine | 2,682 |
null | [
"kotlin"
]
| [
{
"code": "orderList.addAll(allProducts)\norderList.removeAll(allProducts)\nclass Product : EmbeddedRealmObject {\nvar name: String = \"\"\nvar category: String = \"\"\nvar productDescription: String? = \"\"\nvar price: Float = 0F\nvar imagine: String? = null\n}\n",
"text": "Why Kotlin list removeAll doesnt not work in this example:The code above will add the products but not remove them.\norderList is a mutableList. allProducts is a list of ProductsProduct:",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Issue Solved. I had to implement equals() function on my object",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Why Kotlin list removeAll does not work in this example | 2022-11-17T21:16:08.053Z | Why Kotlin list removeAll does not work in this example | 1,760 |
null | []
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"63776ec45efa596fb04c6e8b\"\n },\n \"number\": \"102345890\",\n \"Result\": \"failure\"\n},\n{\n \"_id\": {\n \"$oid\": \"63776ec45efa596fb04c6e8b\"\n },\n \"number\": \"102245890\",\n \"Result\": \"failure\"\n},\n{\n \"_id\": {\n \"$oid\": \"63776ec45efa596fb04c6e8c\"\n },\n\"number\": \"100345890\",\n \"Result\": \"success\"\n},\n{\n \"_id\": {\n \"$oid\": \"63776ec45efa596fb04c6e8f\"\n },\n \"number\": \"101345890\",\n \"Result\": \"success\"\n}\n{\n \n \"failednumbers\":[ \"102345890\",\"102245890\"]\n \"successnumbers\":[\"100345890\",\"101345890\"]\n}\n",
"text": "I have a collection like thisI want the output to be like below:How do achieve this?I tried using groupby but i am not getting desired results",
"username": "sai_sankalp"
},
{
"code": "",
"text": "This is almost the same thing as your other thread Need to append values based on one column value - #5 by steevej.The difference is that the _id of $group is $Result.If you still have issues, share what you tried and explain how it fails to deliver the desired result.",
"username": "steevej"
},
{
"code": "reports_col.aggregate([{\"$group\": {\"_id\": \"$Result\", \"number\": {\"$push\": \"$number\"}}}, {\"$project\": { \"Result\":\"$_id\",\"number\":1,\"Reportgeneratedat\":datetime.now(),\"_id\":0 }},{\"$out\": \"updatedreports\"}])\n\n",
"text": "Hi @steevej ,\nThanks for the quick reply.\nAfter using,my MongoDB collection looks like below:\n\nmdb11259×114 12.4 KB\nBut i want to column names to be like failednumbers and successnumbers with corresponding data below and I want to make my mongodb collection to look like below:\nmdb2954×197 20.5 KB\nThanks in advance",
"username": "sai_sankalp"
},
{
"code": "\"$group\" : {\n \"_id\" : null ,\n \"failednumbers\" : { \"$push\" : { \"$each\" : \"$failure\" } } ,\n \"successnumbers\" : { \"$push\" : { \"$each\" : \"$success\" } }\n}\n",
"text": "I think you simply need an extra $group along the lines of:",
"username": "steevej"
},
{
"code": "\"Unrecognized expression '$each', full error: {'ok': 0.0, 'errmsg': \\\"Unrecognized expression '$each'\\\", 'code': 168, 'codeName': 'InvalidPipelineOperator'}\"\n reports_col.aggregate([{\"$group\": {\"_id\": \"$Result\", \"number\": {\"$push\": \"$number\"}}},{\"$group\" : {\"_id\" : \"null\" ,\"failednumbers\" : { \"$push\" : { \"$each\" : \"$failure\" } } ,\"successnumbers\" : { \"$push\" : { \"$each\" : \"$success\" } }}},{\"$project\": { \"Result\":\"$_id\",\"number\":1,\"Reportgeneratedat\":datetime.now(),\"_id\":0 }},{\"$out\": \"updatedreports\"}])\n\n",
"text": "Hi @steevej ,\nTried the above query ,and i am getting the following errorMy query,",
"username": "sai_sankalp"
},
{
"code": "{ \"$push\" : \"$success\" }\n",
"text": "From the docs it looks like it is only available inside update operations.8-(I will need to take a deeper look at this.Try to simply useYou will probably end up with an array of array you may then use project to only keep the appropriate element.",
"username": "steevej"
},
{
"code": "/* the original $group */\n\ngroup_Result = { \"$group\" : {\n \"_id\" : \"$Result\" ,\n \"number\" : { \"$push\": \"$number\" }\n} }\n\n/* the $group by _id:null to get a single document */\ngroup_null = { \"$group\" : {\n \"_id\" : null ,\n \"failure\" : { $push : {\n \"$cond\" : [ { $eq : [ \"$_id\" , \"failure\" ]} , \"$number\" , null ]\n } } ,\n \"success\" : { $push : {\n \"$cond\" : [ { $eq : [ \"$_id\" , \"success\" ]} , \"$number\" , null ]\n } }\n} }\n{ _id: null,\n failure: [ [ '102345890', '102345890', '102245890' ], null ],\n success: [ null, [ '100345890', '101345890' ] ] }\nfilter_nulls = { \"$set\" : {\n \"failure\" : { \"$filter\" : { \"input\" : \"$failure\" , \"cond\" : { \"$not\" : { \"$eq\" : [ \"$$this\" , null ] } } } } ,\n \"success\" : { \"$filter\" : { \"input\" : \"$success\" , \"cond\" : { \"$not\" : { \"$eq\" : [ \"$$this\" , null ] } } } }\n} }\nset_0 = { \"$set\" : {\n \"failure\" : { \"$arrayElemAt\" : [ \"$failure\" , 0 ] } ,\n \"success\" : { \"$arrayElemAt\" : [ \"$success\" , 0 ] } \n} }\npipeline = [ group_Result , group_null , filter_nulls , set_0 ]\n{ _id: null,\n failure: [ '102345890', '102345890', '102245890' ],\n success: [ '100345890', '101345890' ] }\n",
"text": "It is getting closer.Running the above 2 stages produces:Then the cosmetic surgery:Running the pipelineprovides the following results:",
"username": "steevej"
},
{
"code": "",
"text": "Thanks @steevej ,\nThis solution given by you works fine now.Thanks for taking out your time and providing me the solution.\nI got to learn many things from this solution ",
"username": "sai_sankalp"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Converting rows to columns | 2022-11-18T12:26:51.574Z | Converting rows to columns | 2,504 |
null | [
"node-js"
]
| [
{
"code": "node-mongodb-nativenode-mongodb-nativenode-mongodb-nativenode-mongodb-native",
"text": "Over time I noted down performance timings while using different versions of node-mongodb-native driver.I noticed that performance goes down badly. For example, I run a simple get query on the same database, same collection having 9 records, with the same version of node (16.6.0) and same version of mongodb (5.x.x).It looks like folks are just keep adding things and nobody cares about performance.I don’t expect any answer, I just wanted to express my sadness…",
"username": "Sorin_GFS"
},
{
"code": "",
"text": "Hello @Sorin_GFS, Welcome again to the MongoDB community forum,I have tested the connection and a find query in each version (4.3.0, 4.6.0, 4.12.0) of the node-mongodb-native and same configuration as yours, I did not found any performance issues in connection or a find query.You can check and execute the same code from the GitHub repository,Test difference versions of node-mongodb-native driver - GitHub - turivishal/node-mongodb-native-performance: Test difference versions of node-mongodb-native driverIt looks like folks are just keep adding things and nobody cares about performance.I checked there are no major updates specific in DB connection from the 4.3.0 to 4.12.0 versions, you can also check in the release notes:The Official MongoDB Node.js Driver. Contribute to mongodb/node-mongodb-native development by creating an account on GitHub.Secondly, there are many possibilities why it is taking more execution time that is not related to node-mongodb-native driver, check the similar question that may help you,Thanks.",
"username": "turivishal"
},
{
"code": "nodemonnodemon",
"text": "You can check and execute the same code from the GitHub repository,GitHub - turivishal/node-mongodb-native-performance: Test difference versions of node-mongodb-native driverThank you for this work. It gave me a starting point because indeed using the driver directly there are no visible performance differences between this three versions of driver.check the similar question that may help youHere I saw a guy having similar problem while using nodemon for hot reloading, which is also my case. I hope this is the problem because there is no nodemon in production. I need to create a test that works in production and if there I can’t find this delay the problem is solved. Anyway, thanks for pointing me in the right direction.",
"username": "Sorin_GFS"
},
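{
"code": "// Added sketch: a standalone timing check along the lines of the linked repository,\n// run with plain `node` (no nodemon). URI, db and collection names are placeholders.\nconst { MongoClient } = require('mongodb');\n\n(async () => {\n  const client = new MongoClient('mongodb://127.0.0.1:27017');\n\n  let t = Date.now();\n  await client.connect();\n  console.log('connect:', Date.now() - t, 'ms');\n\n  t = Date.now();\n  const docs = await client.db('test').collection('docs').find({}).toArray();\n  console.log('find:', Date.now() - t, 'ms,', docs.length, 'docs');\n\n  await client.close();\n})();\n",
"text": "Sketch added for illustration: a minimal standalone timing test of connect and find with the Node.js driver, usable both locally and in production; the connection string, database and collection names are placeholders."
},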
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Node mongodb native performance | 2022-11-19T18:33:05.306Z | Node mongodb native performance | 2,122 |
null | []
| [
{
"code": "",
"text": "Is 1 primary + 2 secondary + 1 hidden a bad setup? This doc suggests keeping nodes in odd no., does hidden replicas count?",
"username": "Safwan"
},
{
"code": "",
"text": "You want an odd number of voting members to ensure you have a majority during an election.Since you may configure your hidden node to be non-voting you should be fine.If you really want your hidden node to vote during an election, it would be best to have an arbiter.Depending of the usage for your hidden member, you might be interested in the following which I was made aware recently (thank you @chris):",
"username": "steevej"
},
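{
"code": "// Added sketch (mongosh): making the extra member hidden and non-voting.\n// The member index 3 is an assumption - check rs.conf() for the real position.\ncfg = rs.conf()\ncfg.members[3].hidden = true\ncfg.members[3].priority = 0   // hidden members must have priority 0\ncfg.members[3].votes = 0      // leaves 3 voting members, an odd number\nrs.reconfig(cfg)\n",
"text": "Sketch added for illustration of the reply above: how a hidden member is typically configured as non-voting from mongosh, so the 1 primary + 2 secondaries remain the three voting members. The member index used here is an assumption."
},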
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is hidden replica treated as a normal replica? | 2022-11-18T20:49:32.081Z | Is hidden replica treated as a normal replica? | 1,206 |
null | [
"queries",
"performance"
]
| [
{
"code": "",
"text": "Scenario - We have application which is doing CRUD operations on database. From past 2 years we are using mongo 3.6.9 and system is working fine without any issues but when we upgraded the database from 3.6.9 to 4.0.27 and then we started getting high query response time for find operation but it is not consistent in nature ,sometimes we are getting high query response in 5 hours or 7 hours or even 11 hours.Testing Environment:\njava client driver on application side - 3.12.9\nmongo version - 4.0.27\nStorage Engine - mmap\nReplica-set : 7 members (4 non-arbiter and 3 arbiter , all voting members)\nOne of the member CMD as an example:\nmongod --keyFile=/mongodb.key --storageEngine mmapv1 --nojournal --noprealloc --smallfiles --ipv6 --bind_ip_all --port 27035 --dbpath=/mmapv1-tmpfs-27035 --replSet rs-app_shardAB-ipv6-7 --quiet --slowms 500 --logpath /data/db/mongo-27035.log --oplogSize 3221 --setParameter diagnosticDataCollectionEnabled=true --logappend --logRotate reopenMessages:\nAs a sample, we got these kind of messages on mongo secondary logs given below:2022-02-02T02:55:54.392+0000 I COMMAND [conn554] command drasessions_1.drasessions command: find { find: “drasessions”, filter: { _id: { sessionid: “ClpGx3:172.16.241.40:15124:1643368779:0080300316” } }, limit: 1, singleBatch: true, $db: “drasessions_1”, $clusterTime: { clusterTime: Timestamp(1643770525, 464), signature: { hash: BinData(0, A9E0739EB1E3BBA9EF776A9FCEC9342E9457D221), keyId: 7042384422720503811 } }, lsid: { id: UUID(“8b321501-be08-4fa8-ada5-367cc1eb555e”) }, $readPreference: { mode: “nearest” } } planSummary: IDHACK keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:239 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 28911648 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } protocol:op_msg 28911msTroubleshooting performed so far:We have already checked network latency/CPU/RAM/disk space on VM and blade level, so there was no issue also we tested mongo 3.6.9 in same environment and configuration but there is no issue observed.We enabled mongostat and also attached it as a reference and found below suspect point:For one of the secondary there are no CRUD operation for 26 sec (*0 *0 *0 *0 in mongostat observed), then suddenly high CRUD operation(*2444 110 *5781 *1816 in mongostat observed) were found on that secondary, there is no connection lost message found in mongostat for that secondary. 
This pattern is common whenever we are getting a high response time (28 sec) on that secondary.\nFor the Primary it is always showing *0 for the insert operation all the time, but on the secondaries replication is happening for inserts, as given below:\nhost insert query update delete getmore command flushes mapped vsize res faults qrw arw net_in net_out conn set repl time\n[2606:ae00:3001:8311:172:16:244:59]:27032 *245 36 *595 *155 0 65|0 0 6.63G 4.49G 0 0|0 0|0 24.9k 124k 144 rs-app_shardAB-ipv6-4 SEC Feb 3 17:06:46.063\n[2606:ae00:3001:8311:172:16:244:60]:27032 *0 *0 *0 *0 0 63|0 0 6.61G 4.49G 0 0|0 0|0 11.3k 71.9k 184 rs-app_shardAB-ipv6-4 SEC Feb 3 17:06:46.068\n[2606:ae00:3001:8311:172:16:244:b]:27032 *227 40 *589 *155 0 64|0 0 6.75G 4.48G 0 0|0 0|0 26.6k 157k 144 rs-app_shardAB-ipv6-4 SEC Feb 3 17:06:46.075\n[2606:ae00:3001:8311:172:16:244:c]:27032 *0 39 827 150 68 119|0 0 6.79G 4.49G 0 0|0 0|0 592k 1.49m 371 rs-app_shardAB-ipv6-4 PRI Feb 3 17:06:45.627\nlocalhost:27032 *0 41 795 162 70 120|0 0 6.79G 4.49G 0 0|0 1|0 589k 1.50m 371 rs-app_shardAB-ipv6-4 PRI Feb 3 17:06:47.584\nQueries:\nWhy is there no CRUD on one of the secondaries for 26 sec when there is no connection lost between primary and secondaries?\nWhy is the insert operation always showing 0 on the Primary in mongostat, even though replication for inserts is happening on the secondaries?\nWhy is this behavior not observed in 3.6.9, where the same configuration and environment was used?\nKindly reply to the above queries; it would really help us to proceed further, as this is becoming a big blocker for us to use MongoDB in our environment.\nAttachments:\nWe have attached the mongostat output for one of the occurrences when we are getting a high query response, and rs.status for one of the replica sets.\nmogostat.txt (45.2 KB)\nrs.status.txt (9.1 KB)",
"username": "Kapil_Gupta"
},
{
"code": "",
"text": "can anyone reply to the above issue. Iam also facing the same in my setupUdaya",
"username": "Udaya_Bhaskar_chimak"
},
{
"code": "",
"text": "Welcome to the MongoDB community @Udaya_Bhaskar_chimak!Unless you work on the same deployment as the @Kapil_Gupta, I recommend starting a new discussion topic with details relevant to your environment including:For the original question in this topic regarding MongoDB 4.0 (which reached end of life in April 2022) and MMAP (which was deprecated and ultimately removed in MongoDB 4.2) I suggest upgrading to a supported version of MongoDB (currently 4.2 or later) to see if the problem is still reproducible.Regards,\nStennie",
"username": "Stennie_X"
}
]
| High query response time for find operation in mongo 4.0.27 with mmap storage engine | 2022-02-04T12:27:45.483Z | High query response time for find operation in mongo 4.0.27 with mmap storage engine | 3,728 |
null | [
"node-js",
"react-native"
]
| [
{
"code": "",
"text": "Hello, I am working on a project that has two endpoints, one is a mobile application and the other one is a web-based admin site. For the website, I was intending to use React js and a Nodejs restful API with MongoDB for data storage. For the mobile application, I wanted to use react native since am from a react js background therefore it will be pretty easy for me to learn and use it for my project.Now my worry is here, I want to use my mongo database for both endpoints. The admin should update data from the website that should be accessed from the mobile application and similarly the mobile users should update their individual data that can also be accessed from the web app. Mobile users should also be able to use the app when offline and data sync when the connection is restored. Kindly advise the best approach for this.",
"username": "brian_murithi"
},
{
"code": "",
"text": "Hi. Your use case is one of the most common ones we see in customers and fits nicely into our ecosystem. Using App Services you can have your mobile clients connect to sync and your web clients can connect via GraphQL, Data API, or Query Anywhere.Do you have any specific questions?",
"username": "Tyler_Kaye"
},
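{
"code": "// Added sketch: a web/admin-side read over the Atlas Data API, as one of the options\n// mentioned above. App id, API key, data source, database, collection and field names are placeholders.\nconst res = await fetch(\n  'https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/findOne',\n  {\n    method: 'POST',\n    headers: { 'Content-Type': 'application/json', 'api-key': '<api-key>' },\n    body: JSON.stringify({\n      dataSource: 'Cluster0',\n      database: 'school',\n      collection: 'students',\n      filter: { username: 'some-student' }\n    })\n  }\n);\nconst { document } = await res.json();\n",
"text": "Sketch added for illustration: roughly what a Data API call from the web side could look like for the setup described above. Every name in it is a placeholder, and the exact endpoint URL should be taken from the App Services UI."
},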
{
"code": "",
"text": "Hello, thank you for getting back.\nNow my second issue is working with the mobile app service. The documentation is too less for a beginner , or lets say am not the docs only person. I have been trying working with realm flexible sync for 2 weeks now but don’t seem to get it right .My recent try seems to connect successfully, but I don’t really understand how everything else works from adding to db, querying, and other operations.Here is my scenario .I want the mobile app to be used by students, each student should login with their username and password that should be given to them by the admins (the admins create the account from the website portal).After login in, each student should access data related to them, the app should also store user instance so they may continue using the app even if online.There should be an exam module that will be queried to the app. The students will pick their answers from the multiple choices and on submit(when offline or online), the answers marked and results returned.The app should be able to store individual student previous scores.Kindly help me on working around this with the flexible sync. At least give me an approach for this.Thank you in advance",
"username": "brian_murithi"
},
{
"code": "\"student_id\" == ${student_id}",
"text": "Hi. Have you played with one of the tutorials yet? I would highly reccomend doing so since it looks like you are still understanding sync: how it works, what needs to be set up, etc.What you are trying to do is very possible. The easiest way I can think about doing this would be:",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "How do I work with subscriptions, give me a code example of working with subscriptions, then inserting on my data, then when opening my realm, do I define a realm name??",
"username": "brian_murithi"
},
{
"code": "",
"text": "For the questions, they are nit tired to any student, and therefore should be accessed by every student under a specific topic if that topic is active by that time. So how do i query these questions too??",
"username": "brian_murithi"
},
{
"code": "How do I work with subscriptions, give me a code example of working with subscriptions, then inserting on my data, then when opening my realm, do I define a realm name??\nFor the questions, they are nit tired to any student, and therefore should be accessed by every student under a specific topic if that topic is active by that time. So how do i query these questions too??\ndb.questions.find({topic: \"history\"})\"topic\" == \"history\"",
"text": "Hi, ill start out with responding to your question:These are the basic building blocks of flexible sync and there are examples of using these throughout the documentation and in our tutorials. I will link these below though.I recommend starting with one of the Realm Sync tutorials: Sync Tutorials | MongoDBFor React-Native specifically, try:The React Native Tutorial goes into more detail than the Quick Start, including a template app (Task Tracker).If you get stuck on a specific implementation detail or concept, please share more context including:As for your second comment:How would you ideally “query” this data. Sync works very similarily to just querying and pulling in the data. If your query would be something like db.questions.find({topic: \"history\"}) then to subscribe to that query in sync all you would need to do is make the “topic” field a queryable field and then subscribe to \"topic\" == \"history\"",
"username": "Tyler_Kaye"
},
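{
"code": "// Added sketch (Realm JS SDK, flexible sync): subscribing every user to the questions\n// of an active topic. The Question object name and the queryable field are assumptions.\nawait realm.subscriptions.update(mutableSubs => {\n  mutableSubs.add(\n    realm.objects('Question').filtered('topic == $0', 'history'),\n    { name: 'history-questions' }\n  );\n});\n",
"text": "Sketch added for illustration of the last paragraph above: what the shared, non-per-student subscription could look like in the JS SDK, assuming a Question object with topic as a queryable field."
},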
{
"code": "",
"text": "Hello about the react native tutorial, I am trying to run the command but I get the error “app create failed: failed to authenticate with Mongo DB cloud API: you are not authorized for this resource”I am on ubuntu Linux",
"username": "brian_murithi"
},
{
"code": "",
"text": "Hi. You’ll need to log in to the cli. If you just run the command it doesnt know if you actually have permissions to do anything. Please see this page or navigate to the “Deployment” section of a realm app you are using and then click “cli” to give you a copy and pastable login command.",
"username": "Tyler_Kaye"
},
{
"code": "import React, {\n createContext,\n useContext,\n useEffect,\n useRef,\n useState,\n} from 'react';\nimport {ResultSchema} from '../database/schemas';\nimport {useAuth} from '../provider/AuthProvider';\n\nconst QueriesContext = createContext();\n\nconst QueriesProvider = ({children}) => {\n // initialize state\n const [data, setData] = useState([]);\n const [isLoading, setIsLoading] = useState(true);\n const {user} = useAuth();\n const realmRef = useRef(null);\n\n // Fetch data\n useEffect(() => {\n if (user == null) {\n console.error('Null user? Needs to log in!');\n return;\n }\n\n // Enables offline-first: opens a local realm immediately without waiting\n // for the download of a synchronized realm to be completed.\n const OpenRealmBehaviorConfiguration = {\n type: 'openImmediately',\n };\n\n const config = {\n schema: [ResultSchema],\n sync: {\n user: user,\n flexible: true,\n initialSubscriptions: {\n update: (subs, realm) => {\n subs.add(realm.objects('Result').filtered('chv_id == \"+user.id+\"'));\n },\n },\n newRealmFileBehavior: OpenRealmBehaviorConfiguration,\n existingRealmFileBehavior: OpenRealmBehaviorConfiguration,\n },\n };\n\n // Open realm and sync\n Realm.open(config).then(realm => {\n realmRef.current = realm;\n const subs = realm.subscriptions;\n subs.update(mutableSubs => {\n mutableSubs.add(\n realm.objects('Result').filtered('chv_id == \"+user.id+\"'),\n );\n });\n realm.subscriptions.waitForSynchronization();\n realm.objects(ResultSchema.name);\n });\n\n return () => {\n // cleanup function\n closeRealm();\n };\n }, [user]);\n\n // View results\n let getResults = () => {\n const realm = realmRef.current;\n return realm.objects(ResultSchema.name);\n };\n\n // Create new results\n let addResults = () => {\n const realm = realmRef.current;\n realm.subscriptions.waitForSynchronization();\n realm.write(() => {\n realm.create('Result', {\n _id: ObjectId('637799211d66c64212188801'),\n chv_id: user.id,\n final_assessment_score: 0,\n module: 'Hypertension',\n post_test_score: 57,\n pre_test_score: 44,\n support_supervision_score: 0,\n });\n });\n };\n\n // close realm\n const closeRealm = () => {\n const realm = realmRef.current;\n if (realm) {\n realm.close();\n realmRef.current = null;\n setData([]);\n }\n };\n\n // Render the children within the QueriesContext's provider. The value contains\n // everything that should be made available to descendants that use the\n return (\n <QueriesContext.Provider\n value={{\n data,\n addResults,\n getResults,\n }}>\n {children}\n </QueriesContext.Provider>\n );\n};\n\nexport default QueriesProvider;\n\nconst useData = () => {\n const data = useContext(QueriesContext);\n if (data == null) {\n throw new Error('useData() called outside of a QueriesProvider?'); // an alert is not placed because this is an error for the developer not the user\n }\n return data;\n};\n\nexport {useData};\n\ntype or paste code here\n",
"text": "Hello, I have tried so much to have my flexible sync working, I have the log with the response okay but seems like am still missing something.\nWhen I query the data I get result.length as 0 and even if I insert a record I still get 0.\nI also don’t see any data on my atlas collections.\nNode version 18.12.1\nReact version 18.1.0\nRealm version ^11.1.0Here is my flexible sync code:",
"username": "brian_murithi"
},
{
"code": "subs.add(realm.objects('Result').filtered('chv_id == \"+user.id+\"'));subs.add(realm.objects('Result'))filtered(\"chv_id == $0\", user.id)\n",
"text": "Hi. Can you please send a link to your app in the console? I suspect your issue is that you are inserting data that is outside of your query. You subscription is subs.add(realm.objects('Result').filtered('chv_id == \"+user.id+\"')); but I am not sure that is doing what you want. Try replacing that with just subs.add(realm.objects('Result')) and I suspect things will start working.I believe you are likely trying to add a subscription like:",
"username": "Tyler_Kaye"
},
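{
"code": "// Added sketch: with a per-user subscription in place, an inserted object only syncs\n// if it matches that subscription, so chv_id has to be set to the subscribed user id.\nrealm.write(() => {\n  realm.create('Result', {\n    _id: new Realm.BSON.ObjectId(),\n    chv_id: user.id,              // must satisfy the subscription filter\n    module: 'Hypertension',\n    pre_test_score: 44,\n    post_test_score: 57,\n    final_assessment_score: 0,\n    support_supervision_score: 0\n  });\n});\n",
"text": "Sketch added for illustration of the point above about writes falling outside the query: the field names and values echo the addResults example earlier in the thread, and the created object must match the active subscription to stay synced."
},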
{
"code": "subs.add(realm.objects('Result').filtered()); ",
"text": "Hey , My issue was my subscription so I changed the subscription to: subs.add(realm.objects('Result').filtered(chv_id == “${user.id}”)); \nand it now works perfectly. Thanks.Back to the general collections that need to be available for all users, do I set subscriptions when querying them…??",
"username": "brian_murithi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Realm sync and restful api best approach? | 2022-11-12T16:46:18.069Z | Realm sync and restful api best approach? | 3,143 |
[
"android",
"kotlin"
]
| [
{
"code": "",
"text": "I’ve seen that a supported type for a list inside a realm is “RealmList”, is that correct?\nNow in my android application, I have a class that contains a property of a type List. So to be able to use a List inside a Realm, I need to change that type to a RealmList?Now my questions is, how can I specify a default value for a RealmList?\n\nScreenshot_2999×147 8.23 KB\n",
"username": "111757"
},
{
"code": "Person Object {\n RealmList myDogs = several DogObjects\n}\n",
"text": "So to be able to use a List inside a RealmNot exactly… ? A List is a property of a Realm Object. To use a List, you would need to add a List property to an object. The object is then stored in Realm. To get to that list, you would need to query/load that object and then work with it’s List propertyA good example is a situation where there is a person object and a dog object. A person can have meny dogs - psuedo-code:So to see a persons dogs, load the person, then you can get the list of dogs.There’s a lot more to this with inverse-relationships, embedded objects etc. Going through the Getting Started Guide would provide additional information.",
"username": "Jay"
},
{
"code": "{\n \"bsonType\": \"object\",\n \"properties\": {\n ...\n \"images\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [\n \"images\"\n ....\n ],\n \"title\": \"Diary\"\n}\n",
"text": "Hey Jay, thanks for answering to my question. I’ve found one function named: realmListOf() which does creates a list of a type RealmList. Which is great! Now my question is, how can I represent that RealmList object in the schema of a realm?This is how my current schema looks like. And for now I’ve specified a list to have a string type, until I figure out how to specify a list of multiple strings in that same schema:",
"username": "111757"
},
{
"code": "userIdList UserIdClass\n users_id which is a String\nChatRoom\n userIdList which is a RealmList of UserIdClass objects\n\"userIdList\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"title\": \"UserIdClass\",\n \"bsonType\": \"object\",\n \"required\": [\n \"users_id\"\n ],\n \"properties\": {\n \"users_id\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n",
"text": "A RealmList is treated very much as an array. So a RealmList of type String would essentially be an array of strings. You can have an array of primitives like string, or an array of managed or embedded objects as well.For example, in a Chat Room style app, there may be an object that represents a Chat Room with a Realm List property called userIdList of the users in that room.The object that contains the userId (and other info) object looks like thisThe parent property isIn Atlas, the schema may look like this",
"username": "Jay"
},
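{
"code": "\"images\": {\n  \"bsonType\": \"array\",\n  \"items\": {\n    \"bsonType\": \"string\"\n  }\n}\n",
"text": "Sketch added for illustration: for the earlier images question, a RealmList of plain strings (rather than of embedded objects) would look roughly like this in the App Services schema. Treat it as an assumption to be double-checked against what Development Mode generates."
},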
{
"code": "",
"text": "One thing to noteIf this is a synched app and the app is in Developer Mode on the Realm Console, the objects you build in code in the SDK will be created in Atlas automatically. e.g. if you craft an object in the SDK that has a RealmList property, then when the app sync’s that object and schema will be created in Atlas.That makes crafting objects much easier.",
"username": "Jay"
},
{
"code": "",
"text": "I’ve just tried using the approach with Development Mode, and indeed a new collection schema was created automatically from the model class from my app. However that schema contained only the name of the model class, and none of the properties.",
"username": "111757"
},
{
"code": "Development Mode",
"text": "A couple of thingsI have gone through the getting started guides numerous times and while they are a little thin, they are accurate so that documentation Atlas Device Sync contains everything you need to set your app up on the site in development mode.Next, under the Development Mode subheading, you can choose to toggle Development Mode on/off. Enabling Development Mode allows you to define schemas directly in your client application code and is suitable if you are in development and do not have application data in Atlas yet.The second thing is that if either a) the above guide was not followed or b) the Model is not correctly set up - either or both of those will cause the model to not show up in Atlas correctly.So please go through the guide and also post your model here (the actual model code from your app)",
"username": "Jay"
},
{
"code": "open class Diary : RealmObject {\n @PrimaryKey\n var _id: ObjectId = ObjectId.create()\n var ownerId: String = \"\"\n var mood: String = Mood.Neutral.name\n var title: String = \"\"\n var description: String = \"\"\n var images: String = \"\"\n var date: RealmInstant = RealmInstant.from(System.currentTimeMillis(), 0)\n\n @RequiresApi(Build.VERSION_CODES.O)\n var localDate: LocalDate =\n Instant.ofEpochMilli(date.epochSeconds).atZone(ZoneId.systemDefault()).toLocalDate()\n}\n",
"text": "Yeah I’ve read the docs couple of times, but they are missing a lot of practical things, especially for beginners who’re just getting started with the SDK. Nevertheless, I’ll go though the whole process once again. Here’s my model class from the code. Thank you for your responses Jay, really helpful. Btw, here you can see that I’m using a RealmInstant type for the date. However I’ve been having troubles converting a LocalDateTime type into a RealmInstant. I’ve been getting a wrong format when trying to make a conversion, check out my question on stackoverflow here. I’ve added a bounty ",
"username": "111757"
},
{
"code": "",
"text": "One other thing - you may have some kind of sync error, so check your console logsRealm console->App services tab->Click you app->View all logs activityWhile the errors shown there (if any) are usually cryptic and you’ll need a Rosetta stone to decipher what the error actually means, at least you’ll know if you’re getting errors and some direction as to what they are.",
"username": "Jay"
}
]
| RealmList Constructor? | 2022-10-29T15:29:40.744Z | RealmList Constructor? | 2,928 |
|
null | [
"queries",
"node-js",
"api"
]
| [
{
"code": "import clientPromise from \"../../../../lib/mongodb\"; \n\nimport { ObjectId } from \"mongodb\";\nexport default async function handler(req, res)\n{\n \n switch (req.method) {\n case 'GET': {\n return findUserById(req, res);\n }\n \n case 'DELETE': {\n return deleteUserById(req, res);\n }\n }\n \n}\n\n async function findUserById(req, res) \n {\n let id = ObjectId(req.query.userId)\n console.log(id)\n console.log(`logging this... in api ${req.query.userId}`)\n try {\n \n \n \n //connect to database\n const client = await clientPromise;\n const db = client.db()\n\n //get details\n const details = await db.collection(\"users\").find({_id:id})\n console.log(JSON.stringify(details))\n if (!details) {\n return res.status(402).json({ error: { message: 'Edit your Profile. No details found' } });\n }\n\n return res.json({details})\n\n } catch (error) {\n //return an error\n return res.json({\n message: new Error(error).message,\n success: false\n })\n }\n}\nFindCursor {\n _events: [Object: null prototype] {},\n _eventsCount: 0,\n _maxListeners: undefined,\n [Symbol(kCapture)]: false,\n [Symbol(topology)]: Topology {\n _events: [Object: null prototype] {\n topologyDescriptionChanged: [Array],\n connectionPoolCreated: [Function (anonymous)],\n connectionPoolClosed: [Function (anonymous)],\n connectionCreated: [Function (anonymous)],\n connectionReady: [Function (anonymous)],\n connectionClosed: [Function (anonymous)],\n connectionCheckOutStarted: [Function (anonymous)],\n connectionCheckOutFailed: [Function (anonymous)],\n connectionCheckedOut: [Function (anonymous)],\n connectionCheckedIn: [Function (anonymous)],\n connectionPoolCleared: [Function (anonymous)],\n commandStarted: [Function (anonymous)],\n commandSucceeded: [Function (anonymous)],\n commandFailed: [Function (anonymous)],\n serverOpening: [Function (anonymous)],\n serverClosed: [Function (anonymous)],\n serverDescriptionChanged: [Function (anonymous)],\n topologyOpening: [Function (anonymous)],\n topologyClosed: [Function (anonymous)],\n error: [Function (anonymous)],\n timeout: [Function (anonymous)],\n close: [Function (anonymous)],\n serverHeartbeatStarted: [Function (anonymous)],\n serverHeartbeatSucceeded: [Function (anonymous)],\n serverHeartbeatFailed: [Function (anonymous)]\n },\n _eventsCount: 25,\n _maxListeners: undefined,\n bson: [Object: null prototype] {\n serialize: [Function: serialize],\n deserialize: [Function: deserialize]\n },\n s: {\n id: 0,\n options: [Object: null prototype],\n seedlist: [Array],\n state: 'connected',\n description: [TopologyDescription],\n serverSelectionTimeoutMS: 30000,\n heartbeatFrequencyMS: 10000,\n minHeartbeatFrequencyMS: 500,\n servers: [Map],\n sessionPool: [ServerSessionPool],\n sessions: Set(0) {},\n credentials: [MongoCredentials],\n clusterTime: [Object],\n connectionTimers: Set(0) {},\n detectShardedTopology: [Function: detectShardedTopology],\n detectSrvRecords: [Function: detectSrvRecords],\n srvPoller: [SrvPoller]\n },\n [Symbol(kCapture)]: false,\n [Symbol(waitQueue)]: Denque {\n _head: 1,\n _tail: 1,\n _capacity: undefined,\n _capacityMask: 3,\n _list: [Array]\n }\n },\n [Symbol(namespace)]: MongoDBNamespace { db: 'myFirstDatabase', collection: 'users' },\n [Symbol(documents)]: [],\n [Symbol(initialized)]: false,\n [Symbol(closed)]: false,\n [Symbol(killed)]: false,\n [Symbol(options)]: {\n readPreference: ReadPreference {\n mode: 'primary',\n tags: undefined,\n hedge: undefined,\n maxStalenessSeconds: undefined,\n minWireVersion: undefined\n },\n fieldsAsRaw: {},\n 
promoteValues: true,\n promoteBuffers: false,\n promoteLongs: true,\n serializeFunctions: false,\n ignoreUndefined: false,\n bsonRegExp: false,\n raw: false,\n enableUtf8Validation: true\n },\n [Symbol(filter)]: {},\n [Symbol(builtOptions)]: {\n raw: false,\n promoteLongs: true,\n promoteValues: true,\n promoteBuffers: false,\n ignoreUndefined: false,\n bsonRegExp: false,\n serializeFunctions: false,\n fieldsAsRaw: {},\n enableUtf8Validation: true,\n writeConcern: WriteConcern { w: 'majority' },\n readPreference: ReadPreference {\n mode: 'primary',\n tags: undefined,\n hedge: undefined,\n maxStalenessSeconds: undefined,\n minWireVersion: undefined\n }\n }\n}\n",
"text": "using find method by providing _idcoderesponsePlease do let me know where am I going wrong?",
"username": "Agrit_Tiwari"
},
{
"code": "",
"text": "I have exactly the same issue, did you find a solution for this?",
"username": "Vincent_Asea"
},
{
"code": "let sitesCol = database.collection(\"sites\");\nlet site = await sitesCol.find({ name: \"udidact.com\" }).toArray();\n",
"text": "You need to add .toArray() to your call. The problem is mainly on node js side. See example below:Hopefully that helps.",
"username": "Programming_with_Patrice"
},
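{
"code": "// Added sketch: two common ways to get documents rather than a FindCursor.\nconst details = await db.collection('users').findOne({ _id: id }); // single document or null\n// or, when several documents are expected:\nconst users = await db.collection('users').find({ _id: id }).toArray(); // array of documents\n",
"text": "Sketch added for illustration: since the route above looks up a single user, findOne() (which resolves directly to the document or null) is the other usual fix besides appending .toArray(); the names reuse those from the code earlier in the thread."
},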
{
"code": "",
"text": "Thank you for the answer. Quick Start should include this as well.",
"username": "Servet_Hosaf"
}
]
| I am receiving a FInd Cursor object instead of expected document object | 2022-04-25T03:57:03.235Z | I am receiving a FInd Cursor object instead of expected document object | 9,115 |
null | [
"replication",
"java",
"crud",
"spring-data-odm",
"kubernetes-operator"
]
| [
{
"code": "mongodb://username:[email protected],kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local/kompas2?readPreference=primaryPreferred&ssl=false\n06:48:04.405 INFO org.mongodb.driver.connection : Opened connection [connectionId{localValue:120, serverValue:49559}] to kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017\n06:48:04.698 INFO org.mongodb.driver.cluster : No server chosen by WritableServerSelector from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=382123, setName='kompas2mongo', canonicalAddress=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, hosts=[kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017], passives=[], arbiters=[], primary='kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017', tagSet=TagSet{[]}, electionId=null, setVersion=2, topologyVersion=null, lastWriteDate=Mon Nov 07 06:47:54 GMT 2022, lastUpdateTimeNanos=4233730641707005}, ServerDescription{address=kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadException: Prematurely reached end of stream}}]}. Waiting for 30000 ms before timing out\n06:48:04.699 INFO org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=374840, setName='kompas2mongo', canonicalAddress=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, hosts=[kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017], passives=[], arbiters=[], primary='null', tagSet=TagSet{[]}, electionId=null, setVersion=2, topologyVersion=null, lastWriteDate=Mon Nov 07 06:48:04 GMT 2022, lastUpdateTimeNanos=4233734419530953}\n06:48:04.700 INFO org.mongodb.driver.cluster : Exception in monitor thread while connecting to server kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017\ncom.mongodb.MongoSocketOpenException: Exception opening socket\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70)\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:143)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:188)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:144)\nat java.base/java.lang.Thread.run(Thread.java:832)\nCaused by: java.net.ConnectException: Connection refused\nat java.base/sun.nio.ch.Net.pollConnect(Native Method)\nat java.base/sun.nio.ch.Net.pollConnectNow(Net.java:589)\nat java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542)\nat java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)\nat java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:333)\nat java.base/java.net.Socket.connect(Socket.java:648)\nat 
com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:107)\nat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79)\nat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65)\n... 4 common frames omitted\n06:48:34.701 ERROR o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [/api] threw exception [Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=REPLICA_SET, servers=[{address=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, type=REPLICA_SET_SECONDARY, roundTripTime=0.4 ms, state=CONNECTED}, {address=kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=REPLICA_SET, servers=[{address=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, type=REPLICA_SET_SECONDARY, roundTripTime=0.4 ms, state=CONNECTED}, {address=kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]] with root cause\ncom.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. 
Client view of cluster state is {type=REPLICA_SET, servers=[{address=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, type=REPLICA_SET_SECONDARY, roundTripTime=0.4 ms, state=CONNECTED}, {address=kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]\nat com.mongodb.internal.connection.BaseCluster.createTimeoutException(BaseCluster.java:413)\nat com.mongodb.internal.connection.BaseCluster.selectServer(BaseCluster.java:118)\nat com.mongodb.internal.connection.AbstractMultiServerCluster.selectServer(AbstractMultiServerCluster.java:50)\nat com.mongodb.internal.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:105)\nat com.mongodb.internal.binding.ClusterBinding$ClusterBindingConnectionSource.<init>(ClusterBinding.java:100)\nat com.mongodb.internal.binding.ClusterBinding.getWriteConnectionSource(ClusterBinding.java:92)\nat com.mongodb.client.internal.ClientSessionBinding.getWriteConnectionSource(ClientSessionBinding.java:93)\nat com.mongodb.internal.operation.OperationHelper.withReleasableConnection(OperationHelper.java:619)\nat com.mongodb.internal.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:185)\nat com.mongodb.internal.operation.MixedBulkWriteOperation.execute(MixedBulkWriteOperation.java:76)\nat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:187)\nat com.mongodb.client.internal.MongoCollectionImpl.executeSingleWriteRequest(MongoCollectionImpl.java:1009)\nat com.mongodb.client.internal.MongoCollectionImpl.executeReplaceOne(MongoCollectionImpl.java:567)\nat com.mongodb.client.internal.MongoCollectionImpl.replaceOne(MongoCollectionImpl.java:550)\nat org.springframework.data.mongodb.core.MongoTemplate.lambda$saveDocument$18(MongoTemplate.java:1539)\nat org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:553)\nat org.springframework.data.mongodb.core.MongoTemplate.saveDocument(MongoTemplate.java:1507)\nat org.springframework.data.mongodb.core.MongoTemplate.doSave(MongoTemplate.java:1443)\nat org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:1385)\nat org.springframework.data.mongodb.repository.support.SimpleMongoRepository.save(SimpleMongoRepository.java:94)\nat jdk.internal.reflect.GeneratedMethodAccessor313.invoke(Unknown Source)\nat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\nat java.base/java.lang.reflect.Method.invoke(Method.java:564)\nat org.springframework.data.repository.core.support.RepositoryMethodInvoker$RepositoryFragmentMethodInvoker.lambda$new$0(RepositoryMethodInvoker.java:289)\nat org.springframework.data.repository.core.support.RepositoryMethodInvoker.doInvoke(RepositoryMethodInvoker.java:137)\nat org.springframework.data.repository.core.support.RepositoryMethodInvoker.invoke(RepositoryMethodInvoker.java:121)\nat org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:529)\nat org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:285)\nat org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:599)\nat 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\nat org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.doInvoke(QueryExecutorMethodInterceptor.java:163)\nat org.springframework.data.repository.core.support.QueryExecutorMethodInterceptor.invoke(QueryExecutorMethodInterceptor.java:138)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\nat org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:80)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\nat org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\nat org.springframework.data.repository.core.support.MethodInvocationValidator.invoke(MethodInvocationValidator.java:98)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\nat org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)\nat com.sun.proxy.$Proxy163.save(Unknown Source)\nat kompas2.recommendation.persistence.CustomCompanyRecommendationRepository.save(CompanyRecommendationRepository.kt)\nat kompas2.recommendation.persistence.CustomCompanyRecommendationRepository.save(CompanyRecommendationRepository.kt:21)\nat kompas2.recommendation.persistence.CustomCompanyRecommendationRepository$$FastClassBySpringCGLIB$$6cfc1920.invoke(<generated>)\nat org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)\nat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:779)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)\nat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\nat org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:137)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\nat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\nat org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:692)\nat kompas2.recommendation.persistence.CustomCompanyRecommendationRepository$$EnhancerBySpringCGLIB$$e7db7271.save(<generated>)\nat kompas2.recommendation.event.RecommendationCompanyUpdated.receive(RecommendationCompanyUpdated.kt:45)\nat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\nat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\nat java.base/java.lang.reflect.Method.invoke(Method.java:564)\nat org.springframework.context.event.ApplicationListenerMethodAdapter.doInvoke(ApplicationListenerMethodAdapter.java:344)\nat org.springframework.context.event.ApplicationListenerMethodAdapter.processEvent(ApplicationListenerMethodAdapter.java:229)\nat org.springframework.context.event.ApplicationListenerMethodAdapter.onApplicationEvent(ApplicationListenerMethodAdapter.java:166)\nat 
org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176)\nat org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169)\nat org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143)\nat org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:421)\nat org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:391)\nat kompas2.company.events.CompanyUpdatedPublisher.publish(CompanyUpdated.kt:41)\nat kompas2.company.application.CompanyBasicsWriter.update(CompanyBasicsWriter.kt:118)\nat kompas2.company.application.CompanyBasicsWriter.updateMany(CompanyBasicsWriter.kt:110)\nat kompas2.company.application.CompanyBasicsWriter.updateBasics(CompanyBasicsWriter.kt:103)\nat kompas2.company.application.CompanyBasicsWriter.validateAndUpdateBasics(CompanyBasicsWriter.kt:41)\nat kompas2.company.application.CompanyBasicsWriter$$FastClassBySpringCGLIB$$e20886b1.invoke(<generated>)\nat org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)\nat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:779)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)\nat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\nat org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89)\nat kompas2.misc.metrics.SkipFirstMeanTiming$measureExecutionTime$2.invoke(SkipFirstMeanTiming.kt:25)\nat kompas2.misc.metrics.SkipFirstMeanTimer.record$lambda-1(SkipFirstMeanTimer.kt:29)\nat io.micrometer.core.instrument.AbstractTimer.recordCallable(AbstractTimer.java:138)\nat kompas2.misc.metrics.SkipFirstMeanTimer.record(SkipFirstMeanTimer.kt:29)\nat kompas2.misc.metrics.SkipFirstMeanTiming.measureExecutionTime(SkipFirstMeanTiming.kt:25)\nat jdk.internal.reflect.GeneratedMethodAccessor213.invoke(Unknown Source)\nat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\nat java.base/java.lang.reflect.Method.invoke(Method.java:564)\nat org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:634)\nat org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:624)\nat org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:72)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)\nat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\nat org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)\nat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\nat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\nat org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:692)\nat kompas2.company.application.CompanyBasicsWriter$$EnhancerBySpringCGLIB$$d206123c.validateAndUpdateBasics(<generated>)\nat 
kompas2.company.web.CompanyBasicsController.updateBasics(CompanyBasicsController.kt:29)\nat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\nat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\nat java.base/java.lang.reflect.Method.invoke(Method.java:564)\nat org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)\nat org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)\nat org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)\nat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)\nat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)\nat org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)\nat org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067)\nat org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963)\nat org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)\nat org.springframework.web.servlet.FrameworkServlet.doPut(FrameworkServlet.java:920)\nat javax.servlet.http.HttpServlet.service(HttpServlet.java:684)\nat org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)\nat javax.servlet.http.HttpServlet.service(HttpServlet.java:764)\nat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)\nat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\nat org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)\nat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\nat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:327)\nat org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:115)\nat org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:81)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:121)\nat org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:115)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:126)\nat org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:81)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:105)\nat 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:149)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat kompas2.security.JwtFilter.doFilterInternal(JwtFilter.kt:43)\nat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:103)\nat org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:89)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90)\nat org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75)\nat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:110)\nat org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:80)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:55)\nat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)\nat org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336)\nat org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:211)\nat org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:183)\nat org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:358)\nat org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:271)\nat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\nat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\nat org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)\nat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)\nat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\nat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n06:48:53.850 INFO org.mongodb.driver.cluster : No server chosen by WritableServerSelector from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, 
serverDescriptions=[ServerDescription{address=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=379787, setName='kompas2mongo', canonicalAddress=kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, hosts=[kompas2mongo-1.kompas2mongo-svc.test.svc.cluster.local:27017, kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017], passives=[], arbiters=[], primary='null', tagSet=TagSet{[]}, electionId=null, setVersion=2, topologyVersion=null, lastWriteDate=Mon Nov 07 06:48:04 GMT 2022, lastUpdateTimeNanos=4233774899280824}, ServerDescription{address=kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]}. Waiting for 30000 ms before timing out\napiVersion: mongodbcommunity.mongodb.com/v1\nkind: MongoDBCommunity\nmetadata:\n name: kompas2mongo\nspec:\n members: 2\n type: ReplicaSet\n version: \"4.2.6\"\n security:\n authentication:\n modes: [\"SCRAM\"]\n[2022-11-07T06:48:04.107+0000] [.info] [src/mongoctl/processctl.go:Update:3263] <kompas2mongo-0> [06:48:04.107] <DB_WRITE> Updated with query map[] and update [{$set [{agentFeatures [StateCache]} {lastVersion 2} {nextVersion 2}]}] and upsert=true on local.clustermanager\n2022-11-07T06:48:04.376+0000 I CONNPOOL [RS] Ending connection to host kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 due to bad connection status: HostUnreachable: Connection closed by peer; 1 connections to that host remain open\n2022-11-07T06:48:04.377+0000 I NETWORK [conn9] end connection 10.244.1.183:47630 (9 connections now open)\n2022-11-07T06:48:04.379+0000 I REPL [replication-20] Restarting oplog query due to error: HostUnreachable: error in fetcher batch callback :: caused by :: Connection closed by peer. Last fetched optime: { ts: Timestamp(1667803684, 102), t: 219 }. 
Restarts remaining: 1\n2022-11-07T06:48:04.379+0000 I CONNPOOL [replication-20] dropping unhealthy pooled connection to kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017\n2022-11-07T06:48:04.379+0000 I CONNPOOL [RS] Connecting to kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017\n2022-11-07T06:48:04.379+0000 I REPL [replication-20] Scheduled new oplog query Fetcher source: kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 database: local query: { find: \"oplog.rs\", filter: { ts: { $gte: Timestamp(1667803684, 102) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 219, readConcern: { afterClusterTime: Timestamp(0, 1) } } query metadata: { $replData: 1, $oplogQueryData: 1, $readPreference: { mode: \"secondaryPreferred\" } } active: 1 findNetworkTimeout: 7000ms getMoreNetworkTimeout: 10000ms shutting down?: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler request: RemoteCommand 304917 -- target:kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 db:local cmd:{ find: \"oplog.rs\", filter: { ts: { $gte: Timestamp(1667803684, 102) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 219, readConcern: { afterClusterTime: Timestamp(0, 1) } } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: {type: \"NoRetryPolicy\"}\n2022-11-07T06:48:04.379+0000 I NETWORK [listener] connection accepted from 10.244.1.136:54320 #49559 (10 connections now open)\n2022-11-07T06:48:04.380+0000 I NETWORK [conn49559] received client metadata from 10.244.1.136:54320 conn49559: { driver: { name: \"mongo-java-driver|sync|spring-boot\", version: \"4.2.3\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"amd64\", version: \"4.19.0-17-amd64\" }, platform: \"Java/Azul Systems, Inc./14.0.2+12\" }\n2022-11-07T06:48:04.387+0000 I REPL [replication-18] Error returned from oplog query (no more query restarts left): HostUnreachable: error in fetcher batch callback :: caused by :: Error connecting to kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 (10.244.1.183:27017) :: caused by :: Connection refused\n...\n2022-11-07T06:48:37.471+0000 I REPL_HB [replexec-4] Heartbeat to kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 failed after 2 retries, response status: HostUnreachable: Error connecting to kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 (10.244.1.183:27017) :: caused by :: Connection refused\n2022-11-07T06:48:37.979+0000 I ELECTION [replexec-3] Not starting an election, since we are not electable due to: Not standing for election because I cannot see a majority (mask 0x1)\n2022-11-07T06:49:20.851+0000 I ELECTION [replexec-4] Starting an election, since we've seen no PRIMARY in the past 10000ms\n2022-11-07T06:49:20.851+0000 I ELECTION [replexec-4] conducting a dry run election to see if we could be elected. 
current term: 219\n2022-11-07T06:49:20.851+0000 I REPL [replexec-4] Scheduling remote command request for vote request: RemoteCommand 305346 -- target:kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: \"kompas2mongo\", dryRun: true, term: 219, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1667803684, 102), t: 219 } }\n2022-11-07T06:49:20.851+0000 I ELECTION [replexec-3] VoteRequester(term 219 dry run) received a yes vote from kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017; response message: { term: 219, voteGranted: true, reason: \"\", ok: 1.0, $clusterTime: { clusterTime: Timestamp(1667803684, 102), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1667803684, 102) }\n2022-11-07T06:49:20.851+0000 I ELECTION [replexec-3] dry election run succeeded, running for election in term 220\n2022-11-07T06:49:20.851+0000 I ELECTION [conn11] Received vote request: { replSetRequestVotes: 1, setName: \"kompas2mongo\", dryRun: true, term: 219, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1667803684, 102), t: 219 } }\n2022-11-07T06:49:20.851+0000 I ELECTION [conn11] Sending vote response: { term: 219, voteGranted: true, reason: \"\" }\n2022-11-07T06:49:20.852+0000 I REPL [replexec-3] Scheduling remote command request for vote request: RemoteCommand 305347 -- target:kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 db:admin cmd:{ replSetRequestVotes: 1, setName: \"kompas2mongo\", dryRun: false, term: 220, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1667803684, 102), t: 219 } }\n2022-11-07T06:49:20.852+0000 I ELECTION [conn11] Received vote request: { replSetRequestVotes: 1, setName: \"kompas2mongo\", dryRun: false, term: 220, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1667803684, 102), t: 219 } }\n2022-11-07T06:49:20.852+0000 I ELECTION [conn11] Sending vote response: { term: 220, voteGranted: true, reason: \"\" }\n2022-11-07T06:49:20.852+0000 I ELECTION [replexec-4] VoteRequester(term 220) received a yes vote from kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017; response message: { term: 220, voteGranted: true, reason: \"\", ok: 1.0, $clusterTime: { clusterTime: Timestamp(1667803684, 102), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1667803684, 102) }\n2022-11-07T06:49:20.852+0000 I ELECTION [replexec-4] election succeeded, assuming primary role in term 220\n",
"text": "Hi, I have Spring backend app connected to Mongodb (configured with MongoDB Kubernetes Operator) on Kubernetes.Sometimes (not for every write operation) there is problem with connection between database & backend app.I attach the logs & config files. I hope you will guide me where I can read more to understand this problem.Connection uri:Error from backend appMongoDB operator config:Logs from mongo:Election after secAnd after that database connection is restoredAlso on stackoverflow:\nhttps://stackoverflow.com/questions/73455042/spring-with-mongodb-losing-connection",
"username": "Lukasz_Milunas"
},
{
"code": "Error connecting to kompas2mongo-0.kompas2mongo-svc.test.svc.cluster.local:27017 (10.244.1.183:27017)rs.status()spec:\n members: 2\n type: ReplicaSet\n version: \"4.2.6\"\n",
"text": "Hi @Lukasz_Milunas and welcome to the MongoDB community!!From the logs shared above, it looks like, there are some issues in the kubernetes environment or deployment (i.e. “HostUnreachable” from a replica set node, and “Connection refused” from the application side)\nIt would be helpful if you could share a few more details to understand further:kubectl get pods -n namespace -owideDo you see the connection automatically recovering itself from after some amount of time. If yes, how long it take for the pod to make successful connection again?Can you share the output for rs.status() for a successful and unsuccessful connection.Also, please make sure you are following the right documentation to set up the replica set using the kubernetes deployment.\nYou can verify the steps from the documentation on Deploy a Replica Set in Kubernetes Environment.Since MongoDB version 4.2.6 is an old version, please upgrade to the latest in 4.2 series which is currently 4.2.23 to ensure you’re not seeing a fixed issue. I also would recommend you to have a minimum of 3 replica set deployment as per Replica Set Deployment ArchitecturesThanks\nAasawari",
"username": "Aasawari"
},
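As a quick way to gather the replica set details requested above, a small mongosh snippet (run against any reachable member; the fields are standard rs.status() output) summarizes the health each node reports for its peers:

rs.status().members.forEach(m =>
  print(m.name, m.stateStr, "health:", m.health, m.lastHeartbeatMessage || "")
);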
{
"code": "14:24:47.543 INFO org.mongodb.driver.cluster : Exception in monitor thread while connecting to server\n14:26:00.244 INFO org.mongodb.driver.cluster : Monitor thread successfully connected to server \nrs.status()",
"text": "Hi @Aasawari, thank you very much for response. Unfortunately I can’t see any pattern. What would you recommend looking at besides the logs?YesYes, ~ 1,2 secIt can be hard for unsuccessful. Usually I’m finding errors in logs and not while they occurs. I’ll add output for successful rs.status()Thanks! It definitely helps with electing new primary (and faster connection recovery). Unfortunately don’t solve problem with losing connections.Since MongoDB version 4.2.6 is an old version, please upgrade to the latest in 4.2 series which is currently 4.2.23 to ensure you’re not seeing a fixed issueThanks, will do ",
"username": "Lukasz_Milunas"
}
]
| No server chosen by WritableServerSelector | 2022-11-07T08:14:34.550Z | No server chosen by WritableServerSelector | 5,091 |
null | [
"crud"
]
| [
{
"code": "creatededited \"created\": {\n \"$timestamp\": {\n \"t\": 1631414145,\n \"i\": 5\n }\n },\n \"edited\": {\n \"$timestamp\": {\n \"t\": 1631414145,\n \"i\": 6\n }\n },\nedited",
"text": "I’m trying to work with the TIMESTAMP… I’m posting here because I can’t find ANY documentation or examples online or this site. If anybody knows any documentation that would be great!I’m using MongoDB is a basic CMS, for like blog posts. I’m saving created and edited TIMESTAMPS for each post. I have some mongoDB web hook functions that expose these posts to an API, making a headless CMS. The CMS works great, but those timestamps look like:Questions:I want to both display these in useful way in the blog post headers and order by them. What is the best way to format this? How do I do it? I think I’d prefer formatting api side, but I’m open to whatever.I know the ObjectId contains the created date somehow, is it more standard to use that somehow?I have triggers that format the blog posts… how do I automatically stamp the edited?I’m not super opinionated on this, what’s the easiest standard way to work with and format TIMESTAMPs with mongodb?Thanks for any help!General question: I always find MongoDB documentation and community very frustrating. Just trying to do basic thing, like format a timestamp, on other platforms I can find examples and documentation in seconds. Why is it so difficult with MongoDB? What am I doing wrong?Is there a Discord for MongoDB?",
"username": "Tim_N_A"
},
{
"code": "creatededitedcreatededitedTimestampvar createdDt = new Date()\ndb.posts.insertOne({ title: \"Working with dates\", created: createdDt })\ndb.posts.findOne()\n// returns\n{\n \"_id\" : ObjectId(\"619474dd9043453a52147dd7\"),\n \"title\" : \"Working with dates\",\n \"created\" : ISODate(\"2021-11-17T03:19:56.186Z\")\n}\n",
"text": "I’m using MongoDB is a basic CMS, for like blog posts. I’m saving created and edited TIMESTAMPS for each post.I suggest save the created and edited time fields as Date data type (instead of Timestamp type).For example,",
"username": "Prasad_Saya"
},
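Building on the suggestion above, a minimal mongosh sketch (assuming a posts collection with a created field stored as a Date, as in the example) of sorting by the date and formatting it for display in one pass:

db.posts.aggregate([
  { $sort: { created: -1 } },   // newest posts first
  { $project: {
      title: 1,
      createdDisplay: { $dateToString: { format: "%Y-%m-%d %H:%M", date: "$created" } }
  } }
])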
{
"code": "post.edited = new Date().toJSON();",
"text": "Thanks!Date() itself serialized as a similar useless object to TIMESTAMP. But storing this post.edited = new Date().toJSON(); creates that more meaningful iso date strong… with presumably I can work with on the frontend.I think this approach will work for me for now… is this standard?",
"username": "Tim_N_A"
},
{
"code": "date",
"text": "… is this standard?I think it is not. MongoDB data allows storing date as a date type. It is efficient storage, can be converted to any other forms (e.g., to string), has all the date/time related information, can be used for searching and sorting (compare with other dates), extract and use the different date fields (day, month, year, hour, min, sec, etc.) from it, perform date arithmetic, etc. All these operations can be applied using various query or aggregation operators (see Date Expression Operators).",
"username": "Prasad_Saya"
},
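As a rough illustration of the operators mentioned above (field and collection names reuse the earlier example; $dateDiff needs MongoDB 5.0+), extracting date parts and doing date arithmetic looks like this:

db.posts.aggregate([
  { $set: {
      year:      { $year: "$created" },
      month:     { $month: "$created" },
      ageInDays: { $dateDiff: { startDate: "$created", endDate: "$$NOW", unit: "day" } }
  } }
])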
{
"code": "const document = {\n\"_id\": 'doc_id',\n\"created\": {\n \"$timestamp\": {\n \"t\": 1631414145,\n \"i\": 5\n }\n },\n}\nconsole.log(new Date(document?.created?.$timestamp.t * 1000))\n",
"text": "I might be too late for this but to convert mongodb timestamp to js date what I did is",
"username": "Siddiqui_Affan"
}
]
| How to use and format the TIMESTAMP in javascript? | 2021-11-16T21:11:47.498Z | How to use and format the TIMESTAMP in javascript? | 18,064 |
null | [
"python",
"production"
]
| [
{
"code": "",
"text": "We are pleased to announce the 0.6.2 release of PyMongoArrow - a PyMongo extension containing tools for loading MongoDB query result sets as Apache Arrow tables, Pandas and NumPy arrays.This is a minor release that brings support for PyArrow 10.0. We did not\npublish 0.6.0 or 0.6.1 due to technical errors.See the changelog for a high level summary of what’s new and improved or see the 0.6.2 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongoArrow 0.6.2 Documentation\nChangelog: Changelog\nSource: GitHubThank you to everyone who contributed to this release!",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "I took the MongoDB university’s PyMongoArrow course yesterday, and then realised that support for many types is still not there.On the other hand, the same functionality (with support for all Python types) is already provided by Pandas through one of its DataFrame constructors. The list of Python “dict” objects provided in the output of pymongo’s “find()” method (see Build A Python Database With MongoDB | MongoDB | MongoDB) can be directly given as input to the DataFrame constructor.So, what is the need for, or advantage of, using PyMongoArrow?",
"username": "Sanjay_Dasgupta"
},
{
"code": "",
"text": "Hi @Sanjay_Dasgupta, thank you for the question, and for opening Documentation should describe advantages over DataFrame constructor (of Pandas) · Issue #107 · mongodb-labs/mongo-arrow · GitHub.For completeness, we’re tracking this issue in https://jira.mongodb.org/browse/ARROW-129, summarized as:We should list the pros and cons of using this library versus using the PyMongo API directly, highlighting the benchmarks as well as the limitations.We should give examples showing how the same tasks could be accomplished with each.",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| PyMongoArrow 0.6.2 Released | 2022-11-16T23:29:46.389Z | PyMongoArrow 0.6.2 Released | 1,709 |
null | [
"golang"
]
| [
{
"code": "",
"text": "Hello! The MongoDB Developer Experience team is running a survey of Go developers, and we’d love to hear from you! The survey will take about 5-10 minutes to complete. We’ll be using the feedback from this survey to help us understand how Go developers use MongoDB and the MongoDB Go Driver and how to improve the MongoDB Go developer experience.You can find the survey here.Let us know if you have any questions!",
"username": "Matt_Dale"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| The 2022 MongoDB Go Developer Survey | 2022-11-18T21:46:25.735Z | The 2022 MongoDB Go Developer Survey | 1,354 |
null | []
| [
{
"code": "",
"text": "I need to make my superior understand how effective mongodb support, or we switch to cassandra.",
"username": "David_Baldonado"
},
{
"code": "",
"text": "They’re great, like really great.Source: Me, Migrating on-prem to Atlas.",
"username": "chris"
},
{
"code": "",
"text": "Hello @David_Baldonado,I can only second @chris The support is great. Of cause there is a difference in reaction time between enterprise and free tier - but that should be obvious. The enterprise support worked super good every time I needed it. You are asked to add a severity to your case. Sometimes I even with an low S4 I got a response in hours with is much faster than the SLA.\nAs a consultant in the area of noSQL DBs I deal also with other DBs than MongoDB - I can easily state that the MongoDB support is the best around.Disclaimer: This is all personal experience and I speak on my own.Regrads,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "I’ll expand on my comment and echo @michael_hoeller a bit.We were just starting the move to Atlas and I believe we received some limited Enterprise support with this on boarding. As I was working on our staging environment support requests were accordingly low or medium, response times always exceeded expectations. We also were given a pre-release version of mongomirror to see if recent changes would make a difference to our transfer times, that extra effort sticks with me.It is seldom I have experienced support this good.",
"username": "chris"
},
{
"code": "",
"text": "Thank you all for sharing your experience with mongodb support \nI’m a bit frustrated for the past days before my superior insisted on using Cassandra for non-technical reasons. We are developing software for a client expecting 10,000 users, and my superior doubts that mongodb can’t handle that many users. I’m not good at explaining things technically, I’ve been using mongodb for 4yrs without studying its architecture, I just happen to enjoy using it on my projects back in college and end up evangelizing it at work without knowing much about its technicality. That way I’m in this situation.",
"username": "David_Baldonado"
},
{
"code": "",
"text": "Hi @David_Baldonadodoubts that mongodb can’t handle that many users.Only based on a number of users any response is vague. The use case is important to understand and to design a well fitting database schema. However 10000 users sounds not as a stressful use case. I have seen and used MongoDB setups with millions of users, terra-bytes of data and high traffic volumes. But again, this is kind of unprofessional to go further, the message at this point is: as long as nothing has been made awfully wrong 10000 users should not be an issue at all.\nRegards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thank you, you’re a life saver, I feel confident talking to my superior about this.",
"username": "David_Baldonado"
},
{
"code": "",
"text": "Well, in case you want to go to a tech discussion it is always good to prepare the pro and cons.\nI strongly suggest to underpin your suggestion with researched arguments.\nIn case you need support: find a reliable MongoDB consultant in your area or contact MongoDB Professional Services . You also can ask questions here in the forum, though this can not replace professional consulting to find the best option.\nRegards,\nMichael",
"username": "michael_hoeller"
}
]
| How helpful is mongodb support? | 2022-11-18T11:43:18.146Z | How helpful is mongodb support? | 1,837 |
null | [
"flutter"
]
| [
{
"code": "",
"text": "I can’t seem to find any examples on how I would be able to listen to and get notifications on sync state (e.g. syncing, synced, disconnected). The only example in the document shows that I can listen to the sync progress on Realm.open(onProgressCallback) by passing a ProgressCallback.Am I correct in assuming that ProgressCallback will continue to monitor the sync progress for the entire life of the opened realm? Or is it a once off check on first opened? When the realm has been synced on that initial open, will the callback be continued to get notifications around the progress? If this the case, I could potentially check that 0 transferabble data means it is synced. Meaning that the transferred bytes will jump up and down between 0 and some numbers throughout the life of the opened realm as it continues to sync updates in the background.I would ideally want to check whether the realm is in a synced state to the remote server before I do some work. I could test the internet connection myself but it won’t guarantee that the realm has been synced.Any help would be appreciated, thanks!",
"username": "lHengl"
},
{
"code": "waitForUpload()waitForDownload() await realm.syncSession.waitForUpload();\n await realm.syncSession.waitForDownload();\nrealm.syncSession.pause();\nrealm.syncSession.resume();\n",
"text": "Hi @lHengl, Thanks for your interest to Ream Sync.\nonProgressCallback occurs only while Realm.open is working. Once the Realm.open completes this event handler is dettached. You can be sure that the realm is fully synced after Realm.open completes.\nIf you want to be sure at any time that the realm is fully synced you can wait waitForUpload() and waitForDownload() to complete before to do your work.If you don’t have internet connection these methods will continue to wait until the connection becomes available and then they will complete.If you want to be sure that the syncing is not in progress while you are doing your work you can use “pause” to stop syncing:And then after you finish the work call “resume” to allow syncing:I hope this answer is useful for your scenario.\nIf not, feel free to write!",
"username": "Desislava_St_Stefanova"
},
{
"code": "SysncSessionstateconnectionStateconnectionStateChangespause()resume()waitForUpload/waitForDownloadgetProgressStream",
"text": "@lHengl\nYou can also take a look at the other options provided by SysncSession API:",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "Perfect thanks, didn’t realise syncSession was a thing, I don’t remember seeing anything about it in the documentation.This should do it, I will write some code and see how I go, thanks alot.",
"username": "lHengl"
}
]
| How can I listen to sync state for Flutter SDK? | 2022-11-16T22:34:45.820Z | How can I listen to sync state for Flutter SDK? | 2,049 |
null | [
"atlas",
"flutter"
]
| [
{
"code": "",
"text": "So after some research on how to access the cloud atlas database from a cilent application in Flutter, I found an unofficial package flutter_mongodb-realm.The package uses the deprecated MongoDb Stitch API to access Atlas Cloud data remotely online - which I didn’t know existed until I came across this package. Apparently it has been replaced by the Realm API.So, why then can I not find an obvious answer on how I can access the remote data without first syncing to Atlas Cloud Sync?I’m looking for a way to sync user data, but I also need a way for the user to access “public” or other user’s data on the cloud without having to sync the realms.Seems like the Atlas Data API is the only way around this currently.Is there a future plan for the Realm SDK to bring back the ability to query the Cloud database remotely without syncing first?",
"username": "lHengl"
},
{
"code": "",
"text": "What’s your coding platform?",
"username": "Jay"
},
{
"code": "",
"text": "The coding platform is Dart / Flutter.",
"username": "lHengl"
}
]
| Can I access Atlas Cloud data without syncing the data first through the Realm SDK? | 2022-11-17T05:24:07.091Z | Can I access Atlas Cloud data without syncing the data first through the Realm SDK? | 1,896 |
null | [
"node-js"
]
| [
{
"code": "",
"text": "This is a pretty general question but the concept of “Live Objects” keeps coming up in the Realm DB docs. We are building an Electron app using Realm local nodejs. We haven’t been able to conceptually understand when the concept of Live Objects would apply to what we are doing with Realm / JS. Does this apply more if we were using something like React where Live Objects may “do something” / update the UI on its own? Because with our basic Electron app (no UI framework) we just pass back and forth between the main and renderer (UI) processes Realm _id primary keys to fetch and push the data back and forth to and from the UI. So we can’t seem to understand where the concept of a Live Object would apply?Is it as simple as if we stored a Realm object as a global variable then if that Object updates/changes that global variable of that Realm Object would update and stay “live and up to date”? Sorry for the noob question!Thanks,\nShawn",
"username": "Shawn_Murphy"
},
{
"code": "",
"text": "Im also doing the same thing (sending liveObject.toJSON via ipc to a renderer with React) It seems like we can’t send live object through ipc…Next step would be to addListener in electron and then send the changes to React…Is there a better way ?",
"username": "valentin_cournee"
},
{
"code": "",
"text": "@valentin_cournee It sounds like your use case, implementation and the actual question may be a bit different from the OP’s - I believe they were more asking in general what a ‘live object’ is. It sounds like you have more of a coding specific question. It may be best create a new question with your code and a description of what you’re attempting to do.e.g. you askedIs there a better way ?A better what to do what? What isn’t working with the way you’re doing it? Providing some additional details may result in an answer - or at least a suggestion.",
"username": "Jay"
},
{
"code": "",
"text": "Hi Jay , thank you for your answer. In my opinion we both have the same question. How to better apply the concept of live objects.In their question :“Because with our basic Electron app (no UI framework) we just pass back and forth between the main and renderer (UI) processes Realm _id primary keys to fetch and push the data back and forth to and from the UI. So we can’t seem to understand where the concept of a Live Object would apply?”My project does actually … the same…I guess our questions are quite the same (unless im wrong):\nHow to make a better use of the “live” property of Realm Objects.?Is it as simple as if we stored a Realm object as a global variable then if that Object updates/changes that global variable of that Realm Object would update and stay “live and up to date”? Sorry for the noob question! → If yes im interested in the answer (Can we create a global live object usable by both the back and frontend ?)My idea was slightly different but has the same goal:I obviously think that React is by nature made to be updated every time its data change. Its seems the perfect combination with Realm Objects. But we seems to loose this benefit by using Electron and IPCmessages… → Should we skip Electron and implement Realm directly in the UI ? (or in both ?)In both our case we have to trigger an IPC event each time there is an update in the realm. Is there something better to do?(i hope my question is better now)",
"username": "valentin_cournee"
},
{
"code": "",
"text": "Understood. If you look at the OP’s question, there was no answer and no activity for 6 months, which is why I suggested posting something new with details on your more specific use case.IMO It seems like there’s a lot of redundancy in what’s being described.Realm has full node.js SDK support so supporting Javascript and the like is baked in. Cross platform development is part of what Realm is, including React Native support and a Web SDKThe act of converting Realm objects to JSON makes them not live objects (now it’s JSON!) but it’s not clear why that’s being done or what the use case it. Could it benefit from live objects? Maybe!You’ve already got listeners in Electron so adding additional Realm listeners seems redundant - unless it provides additional functionality that you don’t already have - but again, it’s not clear what that would be.Is it as simple as if we stored a Realm object as a global variable then if that Object updates/changes that global variable of that Realm Object would update and stay “live and up to date”? Sorry for the noob question!Sure. But why? What would a global var do for you over say, in a friend-tracker all, observing friends within a 1-mile radius of you and being notified if a friend leaves or comes into your radius.In both our case we have to trigger an IPC event each time there is an update in the realmThat sounds like you’re “manually” triggering an IPC event - for what purpose? We’re using event driven programming here so why not let the server do the heavy lifting? e.g. If a user updates their favorite food, that should trigger an event automatically and all other users who are interested in that user should know about it - and Realm does just that from the UI to the backend local storage to the synced data.I am obviously speaking at the 10,000’ level but as you can see - there are a LOT of variable and use cases so how Realm can benefit any specific use case will depend on what’s needed.That probably doesn’t help a lot (lol) but just food for thought if you do post a question.",
"username": "Jay"
},
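To make the event-driven pattern discussed above concrete, a minimal sketch of a Realm collection listener in the Electron main process forwarding plain snapshots to the renderer; mainWindow, the Friend schema and setFriends are assumptions for illustration, not part of the thread:

// Electron main process, after opening the realm
const friends = realm.objects("Friend");
friends.addListener((collection, changes) => {
  // forward a plain JSON snapshot to the renderer whenever the live collection changes
  mainWindow.webContents.send("friends-updated", collection.toJSON());
});

// Renderer (React): re-render from the pushed snapshot
ipcRenderer.on("friends-updated", (_event, friendsJson) => setFriends(friendsJson));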
{
"code": "",
"text": "robably doesn’t help a lot (lol) but just food for thought if you do post a question.Hi jay thank you for you answer.Ok 1rst point is :Ipc communication in electron doesnt allow us to send live object.If i try to send a live object to my UI i will just receive an empty one… That’s the reason for the conversion.And that’s the biggest part of the problem in fact. It would be wonderful to send the live object to the UI and let them do thei magic (as React can update itself when data change).Actually unless im wrong, if i want to use / read / update a live object in react the only solution is to use the WebSDK … in that case… the problem is reverted: electron wont be able to access the live objects.Lets say i have a table of all the users in my UI.Actually if a users change its data somewhere else (on another app) … i can listen to this change in electron (as its “connected” to realm and its lives objects)But in order to update my UI i have to send the new data from electron to react through BrowserWindow.webcontent.send (IPC)… and to do that… convert it first toJSON…Am i missing something?So if i understood correctly :?",
"username": "valentin_cournee"
},
{
"code": "",
"text": "At this point we’re getting into high level overall design, which really goes beyond what we can do here in the forums.I think you have a better understand of the concept of Live Objects (which was the initial question) and how and what they do. The question still is though, do you even need that ability? It seems you have listeners already and are passing around JSON so ?In a bigger picture, we don’t know what the use case is:As you can see, lots of questions we don’t have the answers to and honestly, even if we did, that barely scratches the surface of a total app design. So, don’t spend time answering those questions as it’s just an example of considerations of a project.I would suggest digging into the Web SDK, node.js and the React SDK with a simple test project and see if it fits the bill - should only take a couple hours to eval those products and if it looks good, expand the project a bit. If it doesn’t then you’ve eliminated those variables.",
"username": "Jay"
},
{
"code": "",
"text": "bill - should only take a couple hours to eval those products and if it looks good, expand the project a bit. If it doesn’t then you’ve eliminated those variables.Ok,Thank you for your time I hope it helped some other’s too.",
"username": "valentin_cournee"
}
]
| Realm DB local - nodejs for Electron desktop app - how do Live Objects work? | 2022-05-11T03:50:32.114Z | Realm DB local - nodejs for Electron desktop app - how do Live Objects work? | 3,354 |
null | []
| [
{
"code": "",
"text": "We have a setup MongoDB cluster that consists of a total of 5 nodes of which 3 are on-premise and 2 nodes on cloud server. We have configured 2 nodes on cloud servers to be hidden and priority-0 to avoid delays in writes due to latency.In case of an outage at our DC, the only nodes that will remain in our cluster will be the hidden nodes that are on the cloud. In such a scenario, can we somehow make the hidden nodes primary and secondary? And later add more nodes to it?",
"username": "Map_Sec"
},
{
"code": "priorityhidden",
"text": "Hi @Map_SecThe procedure is outlined in reconfigure-a-replica-set-with-unavailable-members. The remaining members will need priority and hidden correctly set too. You will be wanting to add another member immediately to avoid running with a two node replicaSet.As these nodes are hidden any connection uris would have to be updated as the client drivers would not receive the topology update when the replicaSet is reconfigured.You definitely want to test this procedure in a non-prod environment before you need to execute this.We have configured 2 nodes on cloud servers to be hidden and priority-0 to avoid delays in writes due to latency.Unless you are changing votes as well, hidden members will still take part in writeConern acknowledgment so your experience may not match you expectations if one or two nodes in the DC become unavailable.",
"username": "chris"
},
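As a rough mongosh illustration of the linked forced-reconfiguration procedure, run on one of the surviving cloud members; the member _ids below are placeholders for whatever the two cloud nodes actually are:

cfg = rs.conf()
cfg.members = cfg.members.filter(m => [3, 4].includes(m._id))   // keep only the reachable cloud nodes
cfg.members.forEach(m => { m.hidden = false; m.priority = 1; m.votes = 1; })
rs.reconfig(cfg, { force: true })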
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Manually change hidden node to primary when all other nodes fail | 2022-11-18T13:34:48.323Z | Manually change hidden node to primary when all other nodes fail | 1,202 |
null | []
| [
{
"code": "{ \"_id\" : ObjectId(\"56e0a3a2d59feaa43fba49d5\"), \"timestamp\" :2022-11-14T17:36:06.555+00:00, \"city\" : \"London \", \"mobilenumber\" : \"983xxxxxxxx\" }\n { \"_id\" : ObjectId(\"56e0a3a2d59feaa43fba49d6\"), \"timestamp\" : 2022-11-14T17:36:06.555+00:00, \"City\" : \"London\", \"mobilenumber\" : \"943xxxxxxxx\" }\n { \"_id\" : ObjectId(\"56e0a3a2d59feaa43fba49d7\"), \"timestamp\" : 2022-11-14T17:36:06.555+00:00, \"City\" : \"Dublin\", \"mobilenumber\" : \"9324xxxxxxxx\" }\n{ \"_id\" : ObjectId(\"56e0a3a2d59feaa4ba49d7\"), \"timestamp\" : 2022-11-14T17:36:06.555+00:00, \"City\" : \"Dublin\", \"mobilenumber\" : \"91233xxxxxxxx\" }\n{ \"_id\" : ObjectId(\"56e0a3a2d59feaa43fba49d5\"), \"timestamp\" :2022-11-14T17:36:06.555+00:00, \"city\" : \"London \", \"mobilenumber\" : [\"983xxxxxxxx\",\"943xxxxxxxx\"] }\n \n { \"_id\" : ObjectId(\"56e0a3a2d59feaa43fba49d7\"), \"timestamp\" : 2022-11-14T17:36:06.555+00:00, \"City\" : \"Dublin\", \"mobilenumber\" : [ \"9324xxxxxxxx\", \"91233xxxxxxxx\"] }\n",
"text": "This is a collection in my Mongodb.But i want the result to be like below:I want all the mobile numbers of london in one list under mobile number column and same thing even for dublin\nHow do i achieve this in mongodb?\nThanks in advance",
"username": "sai_sankalp"
},
{
"code": "",
"text": "There is a few things about your sample documents that need to be resolve first.In some documents, the city field is speed city lower-case c, while in others it is spelled City. Is that a typo or you really have different spelling. Field names are case sensitives.The timestamp of all input documents are the same. Is that always the case? If not do you still group by city if the timestamp is different? Or do you group by city/timestamp If the timestamps are different and you still group by city, which timestamp do you keep? The smallest or the biggest?You have 2 documents with the same _id in the source documents. That is not really possible so it makes hard to know which one you keep in the result set. Do you really what to keep one of the source document’s _id?Is there any other fields from the source documents that needs to be in the result set?What ever the answers to the above the solution will make use of $group and $push, so take a look at the documentation:",
"username": "steevej"
},
{
"code": "reports_col.aggregate([{\"$group\": {\"_id\": \"$City\", \"MobileNumber\": {\"$push\": \"$mobilenumber\"}}}, {\"$project\": { \"City\":\"$_id\",\"MobileNumber\":1,\"_id\":0 }},{\"$out\": \"updatedreports\"}])\n\n",
"text": "Hi @steevej,\nThe mistakes occured as what i provided in above example is just a sample data which i created on my own just for better understanding to the reader of what exactly i wanted.\nAnyways, I have tried the $group and $push as suggested by you and it is working fine.\nThis is my query now:But applying the $group and $push to millions of collections will make the process slow,right?\nIs there anyway other than $group and $push to make it more time efficient ?",
"username": "sai_sankalp"
},
{
"code": "group = { \"$group\" : { \"_id\" : \"$City\" } }\nlookup = { \"$lookup\" : {\n \"from\" : \"reports_col\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"City\" ,\n \"as\" : \"MobileNumber\" ,\n \"pipeline\" : [ { \"$project\" : { \"mobilenumber\" : 1 , \"_id\" : 0 } } ]\n} }\n/*\n After $lookup MobileNumber is an array of objects, but we want an array\n of phone numbers. So we $map to convert each object to the value of the field mobile number.\n*/\nmap = { \"$set\" : { \n \"MobileNumber\" : { \"$map\" : { /* details left out */ } }\n} }\nreports_col.aggregate( [ group , lookup , map ] )\n",
"text": "But applying the $group and $push to millions of collections will make the process slow,right?Yes but $group is how you group documents together and this is your use-case.Is there anyway other than $group and $push to make it more time efficient ?The index {City:1,mobilenumber:1} might help. With the index may be $sort:{City:1} before $group may improve.A funky idea would be to use self $lookup after a simpler $group as like the following untested code:I have no clue if the above is faster but I suspect that it might take less memory. A $group stage output its documents when all input documents are processed. We still have a $group but since we only keep the city name, this group might be faster and definitively takes less memory.",
"username": "steevej"
},
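For completeness, one way the $map whose details were left out above could look, assuming the $lookup pipeline projects only the mobilenumber field as in the sketch, together with the suggested index (reports_col stands for whatever the underlying collection is called):

map = { "$set" : {
    "MobileNumber" : { "$map" : {
        "input" : "$MobileNumber" ,
        "as" : "m" ,
        "in" : "$$m.mobilenumber"
    } }
} }

db.reports_col.createIndex( { City : 1 , mobilenumber : 1 } )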
{
"code": "",
"text": "Hi @steevej ,\nThis above solution works for me .It was time efficient too ",
"username": "sai_sankalp"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Need to append values based on one column value | 2022-11-14T12:41:03.841Z | Need to append values based on one column value | 1,589 |
null | [
"aggregation"
]
| [
{
"code": "Id: 12548\nname: jhon\nexpence: 200$\nday: 22-3-2022\n\nid: 15426\nname: mary\nexpence: 150$\nday: 15-4-2022\n{\n $setWindowFields: {\n partitionBy: {“$month”:”$day”},\n sortBy: { day: 1 },\n output: {\n sumExpence: {\n $sum: \"$expencel\",\n window: { documents: [ -1, 0 ] }\n },\n previousDateTime: {\n $push: \"$date\",\n window: { documents: [ -1, 0 ] }\n }\n }\n }\n }\n",
"text": "I would like to calculate the percentage increase of some data in the database between two months. But as much as I think about it, I can’t find the aggregate.more or less\ncalculate increase percentage between two monthSomething idea??\nthank you",
"username": "Pilasu_Jorda"
},
{
"code": "rangeunit[\n{\n $setWindowFields: {\n sortBy: { day: 1 },\n output: {\n thirtyDaysAgoDate: {\n $last: \"$date\",\n window: { range: [30 * 24, 31 * 24], unit: 'hour' }\n },\n thirtyDaysAgoValue: {\n $last: \"$expencel\",\n window: { range: [30 * 24, 31 * 24], unit: 'hour' }\n }\n }\n }\n },\n{\n $set: {thirtyDayChange: {$divide: [{$subtract: ['$totalValue','$thirtyDaysAgoValue']}, '$totalValue']}}\n}\n]\n[30 * 24, 31 * 24]$last$last$first",
"text": "",
"username": "David_Aideyan"
}
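For the original question, one possible month-over-month shape, shown as a rough mongosh sketch: it assumes the collection is called expenses, that day is stored as a Date and expence as a number, and that $setWindowFields/$shift are available (MongoDB 5.0+).

db.expenses.aggregate([
  { $group: {
      _id: { year: { $year: "$day" }, month: { $month: "$day" } },
      total: { $sum: "$expence" }
  } },
  { $setWindowFields: {
      sortBy: { "_id.year": 1, "_id.month": 1 },
      output: { prevTotal: { $shift: { output: "$total", by: -1 } } }
  } },
  { $set: {
      pctIncrease: {
        $cond: [
          { $gt: ["$prevTotal", 0] },
          { $multiply: [{ $divide: [{ $subtract: ["$total", "$prevTotal"] }, "$prevTotal"] }, 100] },
          null
        ]
      }
  } }
])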
]
| Calculate percentage betwwen two months | 2022-04-20T12:03:18.263Z | Calculate percentage betwwen two months | 2,359 |
[
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "If i add a new value to age field entire mongose is getting again printed in cluster along with updating age value. Problem is old & new data are again coming in mongose . its hard to manage there is only 5 ,i updated a value 2 times that made to 15Screenshot (493)1920×1080 133 KB",
"username": "Abhinand_Abhinand_tp"
},
{
"code": "",
"text": "Most likely your mongoose code is wrong.As an advice, please update the title to a shorter one and use text in the post to explain the issue. A title is a title, it is not a paragraph.Please share your code, this way we can see what is wrong with it.Please read Formatting code and log snippets in posts before posting. We need to cut-n-paste your code and data to experiment.",
"username": "steevej"
}
]
| Problem updating field value in Mongoose | 2022-11-18T14:38:01.474Z | Problem updating field value in Mongoose | 2,337 |
|
null | [
"queries"
]
| [
{
"code": "",
"text": "Hello Everyone,We are working on a query for fetching the feed, that we usually see in a social media app.The Logic we want to put there is, this would be the orderAlso, time is also a critical aspect when calculating the priority,for example, lets suppose the current time is 4 PM, someone from my contacts posted at 1 PM, lets call it post 1 & there is someone else in the following list who posted at 2 PM lets call it post 2, then we need the follower post to come first in the feed rather than the post 1But if we are applying sorting based on contacts then it’s returning all the posts for my contacts first irrespective of the time they are posted on.Is there any approach using which we can sort based on the combination or conditional-based sorting?Please help",
"username": "Ankit_Arora"
},
{
"code": "",
"text": "time is also a critical aspect when calculating the priorityIt seems to me that time is the only priority because the followingsomeone from my contacts posted at 1 PM, lets call it post 1 & there is someone else in the following list who posted at 2 PM lets call it post 2 , then we need the follower post to come first in the feed rather than the post 1means to me that a more recent post always comes first no matter who sent it.Can you give an example where a post with 1st Priority should be displayed before a post with 3rd priority that is more recent?",
"username": "steevej"
},
{
"code": "",
"text": "Let’s suppose 2 Posts,then Post X should come first",
"username": "Ankit_Arora"
},
{
"code": "",
"text": "That seems to contradict your first example.ThisLet’s suppose 2 Posts,then Post X should come firstseems to indicate that an older post from higher priority comes first despite the time.While the followingfor example, lets suppose the current time is 4 PM, someone from my contacts posted at 1 PM, lets call it post 1 & there is someone else in the following list who posted at 2 PM lets call it post 2 , then we need the follower post to come first in the feed rather than the post 1seems to indicate that a more recent post comes first despite the priority.",
"username": "steevej"
}
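Whichever of the two orderings discussed above is intended, the usual building block is to compute a numeric relationship score first and then sort on it. A rough sketch; authorId, postedAt, contactIds and followingIds are hypothetical names supplied by the application, and the sort keys can be swapped or blended depending on the answer to the clarifying question above:

db.posts.aggregate([
  { $set: {
      priorityScore: {
        $switch: {
          branches: [
            { case: { $in: ["$authorId", contactIds] },   then: 1 },   // contacts: 1st priority
            { case: { $in: ["$authorId", followingIds] }, then: 2 }    // following: 2nd priority
          ],
          default: 3                                                   // everyone else
        }
      }
  } },
  { $sort: { priorityScore: 1, postedAt: -1 } }
])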
]
| Mongodb priority based sorting | 2022-11-16T18:02:25.617Z | Mongodb priority based sorting | 1,996 |
null | [
"aggregation"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"636b8caa92758a885c06798a\"\n },\n \"org_id\": {\n \"$oid\": \"636a064d8137cb5f659b26d9\"\n },\n \"name\": \"store 1\",\n \"location\": \"merkato\",\n \"type\": \"store\",\n \"inventory\": [\n {\n \"product_id\": 6,\n \"quantity\": 10\n },\n {\n \"product_id\": 5,\n \"quantity\": 3\n },\n {\n \"product_id\": 7,\n \"quantity\": 9\n }\n ]\n}{\n \"_id\": {\n \"$oid\": \"636a064d8137cb5f659b26da\"\n },\n \"org_id\": {\n \"$oid\": \"636a064d8137cb5f659b26d9\"\n },\n \"product_list\": [\n {\n \"product_name\": \"A30\",\n \"product_id\": 5,\n \"category\": \"phone\",\n \"model\": \"SM-7500\",\n \"price\": 190.00,\n \"active\": true,\n \"brand\": \"Samsung\",\n \"description\": null\n },\n {\n \"product_name\": \"A33\",\n \"product_id\": 6,\n \"category\": \"phone\",\n \"model\": \"SM-8500\",\n \"price\": 270.00,\n \"active\": true,\n \"brand\": \"Samsung\",\n \"description\": null\n },\n {\n \"product_name\": \"S20\",\n \"product_id\": 7,\n \"category\": \"phone\",\n \"model\": \"SM-2500\",\n \"price\": 385.00,\n \"active\": true,\n \"brand\": \"Samsung\",\n\n \"description\": null\n }\n ],\n \"product_count\": 7\n} \"name\": \"store 1\",\n \"location\": \"merkato\",\n \"inventory\": [\n {\n \"product_id\": 6,\n \"product_name\": \"A33\",\n \"price\": 270.00,\n \"quantity\": 10\n },\n {\n \"product_id\": 5,\n \"product_name\": \"A30\",\n \"price\": 190.00,\n \"quantity\": 3\n },\n {\n \"product_id\": 7,\n \"product_name\": \"S20\",\n \"price\": 385.00,\n \"quantity\": 9\n }\n ]",
"text": "I’m getting started with MongoDB and having trouble merging two nested arrays in different collections.\nthe first collection.\nstores (a sample store)The second collection\nproducts (shared across multiple stores) there is a shared objectId org_id that will be used to join the documents.The desired outputThank you in advance.",
"username": "Khalil_Ahmed"
},
{
"code": "",
"text": "Is org_id unique in the products collection?If not, is org_id and product_id a unique tuple in the same products collection?",
"username": "steevej"
},
{
"code": "",
"text": "Yes, org_id is unique in the products collection.",
"username": "Khalil_Ahmed"
},
{
"code": "$lookup : {\n from : \"products\" ,\n localField : \"org_id\" ,\n foreignField : \"org_id\" ,\n as : \"_products\"\n}\n$set : {\n \"inventory\" : { \"$map\" : {\n \"input\" : \"$inventory\" ,\n \"as\" : \"inventory_product\" ,\n \"in\" : { \"$mergeObjects\" : [ \"$inventory_product\" , { \"$filter\" : {\n \"input\" : \"$_products.0.product_list\" ,\n \"cond\" : { \"$eq : [ \"$inventory_product.product_id\" , \"$product.product_id\" ] ,\n \"as\" : \"product\" ,\n \"limit\" : 1\n } } ] }\n } }\n}\n",
"text": "You start by doing a $lookup in products something like:You then use a $set stage that uses $map on inventory that uses a $filter on _products.0.product_list. Something like:You might need some cleanup with a final $map to remove fields from products that you do not want, a pipeline with a $project in the $lookup might achieve the same result.",
"username": "steevej"
}
]
| Merging two nested arrays from two different collection | 2022-11-10T18:04:20.702Z | Merging two nested arrays from two different collection | 1,341 |
null | [
"queries",
"node-js"
]
| [
{
"code": "\"userA\".friends{ link: \"[email protected]\" }db.collection.findOne({email: userlogged }, {\"friends\": {$in: { [usertoaccept] }}});users.findOne({email: userlogged}).then(result =>{\n\tfriendz = result.friends;\n\tif (friendz){\n\t\tusers.findOne({ email: userlogged},{$in: { \"friends\": { $in: [usertoaccept, userid] }}}).then(resulty =>{\n\t\t\tconsole.log(resulty);\n\t\t\tconsole.log(\"Ressssabove\");\n\t\t\tfriendzz = resulty.friends[0];\n\t\t\t\n\t\t\tthatlogic = \"{ link:\" + \" '\" + usertoaccept + \"' \" + \"}\";\n\t\t\t\n\t\t\tconsole.log(thatlogic);\n\t\t\tconsole.log(friendzz);\n\t\t\t\n\t\t\tfor (let i = 0; i < resulty.friends.length; i++){\n\t\t\t\tif(resulty.friends[i] === resulty.friends[i].link.thatlogic){\n\t\t\t\t\t\tconsole.log(\"Resulty found\");\n\t\t\t\t\t}else{\n\t\t\t\t\t\tconsole.log(\"No resulty found.\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t})\n\t}\n})\n{ link: '[email protected]' }\n{ link: '[email protected]' }\n",
"text": "Hi everyone, I have been trying to do something for the past 48 hours, with not much luck…I’m trying to query an array within a specific document, say \"userA\".friends, the .friends field is an array with a list of friends simple as { link: \"[email protected]\" }, and I’m trying to find one within array, if exists already do one thing, if not does another, such as add the friend or remove the request field pertaining the user to be accepted…But I haven’t been able to succesfully query this array, I was wondering if there is something such as db.collection.findOne({email: userlogged }, {\"friends\": {$in: { [usertoaccept] }}});Does anybody know how to proceed?Thank you!EDIT:\nI got as far as this… When querying the array result, trying to compare the result of friend[i] even with a match its coming back with no result…code:Giving then the console result(y) of:Still… “No resulty found.” ",
"username": "Zoo_Zaa"
},
{
"code": "resulty.friends[i].link.thatlogicthatlogic = \"{ link:\" + \" '\" + usertoaccept + \"' \" + \"}\";",
"text": "This is basic JS logic. You are using ===, it means that type and value must match.The variable thatlogic is constructed as a string.The variable friendzz is object return by findOne().I am not too savvy in JS, but I am not sure ifresulty.friends[i].link.thatlogicis really equal tothatlogic = \"{ link:\" + \" '\" + usertoaccept + \"' \" + \"}\";",
"username": "steevej"
}
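A hedged sketch of the query the original post seems to be after, reusing its variable names: match on the array element directly on the server instead of comparing strings in application code.

users.findOne({ email: userlogged, friends: { $elemMatch: { link: usertoaccept } } })
  .then(result => {
    if (result) {
      // usertoaccept is already in the friends array
    } else {
      // not a friend yet - add the friend or handle the request here
    }
  });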
]
| How to query array of specific entry(document) in collection? | 2022-11-17T15:50:45.793Z | How to query array of specific entry(document) in collection? | 1,016 |
null | [
"compass"
]
| [
{
"code": "",
"text": "I am currently working on implementing SSO (Okta) to my organisation. I have configured a good range of applications and services now, including MongoDB Atlas.MongoDB Atlas is now configured with SAML using Okta as my IdP. However, I am wondering if I can take this further and delve into controlling database access on my production cluster. So, when a user logs in for the first time (via Okta SAML), it also creates a database user (database access tab) with specific roles and access. From here, they are able to connect to the database (using mongodb compass) with their account that Okta has created.Is this possible? Has anyone else got a similar use case?Be great to hear back and thank you for your time ",
"username": "Matthew_Mentlak"
},
{
"code": "",
"text": "Thanks, Matthew. We are looking into database authentication via OpenID Connect so that a database user can directly authenticate with the database via Compass through their identity provider credentials. Hopefully, this can address most of the use case you are describing ",
"username": "Salman_Baset"
},
{
"code": "",
"text": "Hello,Thank you very much for your reply. That’s good to know and I look forward to seeing this directly work.I have been researching today about inline hooks and SAML assertion with Okta and group attribute mapping.I have created group mapping with defines role assignment for the platform. I am wondering, on top of this question if yourself, or anyone knows how to assert a database access user with this too?Thanks",
"username": "Matthew_Mentlak"
},
{
"code": "",
"text": "We do not support database authentication directly with SAML and have no plans of doing so. If the reason you are looking for SAML authentication for database is to manage the full life cycle of identities with your corporate identity provider, you may consider using:\na) LDAP (https://www.mongodb.com/docs/atlas/security-ldaps/)\nb) Hashicorp Key Vault (HashiCorp Vault & MongoDB Atlas | MongoDB)",
"username": "Salman_Baset"
},
{
"code": "",
"text": "Okay brilliant.Thank you for that information.",
"username": "Matthew_Mentlak"
},
{
"code": "",
"text": "Hello,Hope you’re well. Been reading your comment back and want to confirm it is no authentication to a database.It is to create a database access user on the Atlas platform.Is this still a no?",
"username": "Matthew_Mentlak"
},
{
"code": "",
"text": "Please see this link on the database authentication in Atlas .Once you authenticate with Atlas control plane using SAML, you can create database users using one of several supported methods (details in link above) such as SCRAM, X.509, LDAP, and AWS-IAM. If you are able to use LDAP or AWS-IAM, you can possibly use your identity provider credentials to authenticate with the database.",
"username": "Salman_Baset"
}
]
| Okta Integration with MongoDB Atlas / Compass | 2022-11-16T17:05:34.358Z | Okta Integration with MongoDB Atlas / Compass | 1,670 |
null | []
| [
{
"code": "",
"text": "Hi, Everyone. I’m new to this topic, so sorry, if this is a stupid question\nI want to build a free text search using Atlas Search. It should look like:\ni have to fields for search name, email.\nAnd for example, if I pass mike I want to get something like mike@… mike23 23mike@…. BUT NOT mikke, miike ← I get it if using autocomplete.\nWhat is the best way to build a case-insensitive search? using regex? if so how to do a case-insensitive search?\nThanks for helping!",
"username": "Mike_Kravets"
},
{
"code": "",
"text": "If you are looking for exact matches - I’d consider one of these options in this tutorial.If specifically, you are looking to search on emails, I’d suggest using our email tokenizer.To test our which analyzer works best, check out this tutorial.",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "Thanks a lot, but I want to have a partial match. For example\nif I pass Mike Krav – I want to get → Mike Kravets MikeKravert, BUT NOT Mike Kral Mike Kraavv\nFor example like on the screen, I get some correct results, but also a Simon Halep - what is not good for me\n\nindex1490×916 96.3 KB\n",
"username": "Mike_Kravets"
},
{
"code": "",
"text": "Can you share the index and query you are using that is driving those results?",
"username": "Elle_Shwer"
},
{
"code": "\"wholeName\": [\n {\n \"type\": \"string\"\n },\n {\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 4,\n \"tokenization\": \"nGram\",\n \"type\": \"autocomplete\"\n }\n ]\n{\n \"$search\": {\n \"index\": \"free-text\",\n \"autocomplete\": {\n \"path\": \"wholeName\",\n \"query\": f\"{search_value}\",\n }\n}\n",
"text": "index:query",
"username": "Mike_Kravets"
},
{
"code": "",
"text": "Is the goal to search on names or emails or both? And if both, is there one that is higher priority to match or must it match both in order to be returned?I’m pondering if maybe you want to use a compound query and set a minimumShouldMatch… And perhaps for your whole name field use keyword analyzer but for your email, use maybe standard analyzer with a email tokenizer?",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "yes, I want to search by email fields and wholeName, I put only wholeName just for example, sorry if it was misunderstood.\nSo, first I would like make it works only with wholeName",
"username": "Mike_Kravets"
},
{
"code": "",
"text": "For that also should work regex, isn’t it? but regex based on keywords is case-sensitive, which is not good for me. and if I tried to do regex based on simple or standard, it wasn’t work\nThank you for helping",
"username": "Mike_Kravets"
},
{
"code": "",
"text": "Hi @Mike_Kravets , to clarify, are you looking for exact matching for wholeName and then partial matching/autocomplete with email?",
"username": "amyjian"
},
{
"code": "",
"text": "I want to have something like a case-insensitive $regex in MongoDB. but not sure what is the best way how to do it.{\"$regex\": search_field, “$options”: “i”}}",
"username": "Mike_Kravets"
}
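One possible way to get case-insensitive, substring-style matching without the fuzziness of autocomplete is a custom analyzer that lowercases the whole field as a single token, queried with the wildcard operator (lowercasing the user input in the application). A sketch, not a tested index, adapted to the wholeName field and the free-text index from above:

{
  "mappings": {
    "dynamic": false,
    "fields": {
      "wholeName": { "type": "string", "analyzer": "lowercaseKeyword" }
    }
  },
  "analyzers": [
    {
      "name": "lowercaseKeyword",
      "tokenizer": { "type": "keyword" },
      "tokenFilters": [ { "type": "lowercase" } ]
    }
  ]
}

{
  "$search": {
    "index": "free-text",
    "wildcard": {
      "path": "wholeName",
      "query": "*mike krav*",
      "allowAnalyzedField": true
    }
  }
}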
]
| What is the best way to build a case-insensitive search? | 2022-11-14T17:12:30.229Z | What is the best way to build a case-insensitive search? | 2,704 |
null | []
| [
{
"code": "com.mongodb.MongoCommandException: Command failed with error 8000 (AtlasError): 'you are over your space quota, using 512 MB of 512 MB",
"text": "Hi everyone,\nI’m getting the error com.mongodb.MongoCommandException: Command failed with error 8000 (AtlasError): 'you are over your space quota, using 512 MB of 512 MB when using Kafka connector against Mongodb Atlas, but actually, the storage size of my database is 80,93MB and the total indez size is 15,79MB.\nCan anyone help me? thanks in advance",
"username": "Laura_Fernandez"
},
{
"code": "dataSizeindexSizedb.stats()",
"text": "Hi @Laura_Fernandez,The storage quotas for free and shared clusters are based on summing the dataSize and indexSize for all databases: How does Atlas calculate storage limits for shared clusters (M0, M2, M5)?.You can confirm current usage via the “Data Size” chart for a cluster in the Atlas UI or using the db.stats() command in the MongoDB shell.Regards,\nStennie",
"username": "Stennie_X"
},
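A rough mongosh sketch of that calculation across all databases on the cluster (permissions allowing):

let totalBytes = 0;
db.adminCommand({ listDatabases: 1 }).databases.forEach(d => {
  const s = db.getSiblingDB(d.name).stats();
  totalBytes += s.dataSize + s.indexSize;
  print(d.name, ((s.dataSize + s.indexSize) / 1024 / 1024).toFixed(1), "MB");
});
print("total:", (totalBytes / 1024 / 1024).toFixed(1), "MB");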
{
"code": "{\n \"db\": \"myFirstDatabase\",\n \"collections\": 10,\n \"views\": 0,\n \"objects\": 932195,\n \"avgObjSize\": 528.4847633810523,\n \"dataSize\": 492650854,\n \"storageSize\": 799703040,\n \"totalFreeStorageSize\": 0,\n \"numExtents\": 0,\n \"indexes\": 18,\n \"indexSize\": 1058549760,\n \"indexFreeStorageSize\": 0,\n \"fileSize\": 0,\n \"nsSizeMB\": 0,\n \"ok\": 1\n}\ndataSizeindexSizeMongoError: you are over your space quota, using 2048 MB of 2048 MB\n",
"text": "Useless reply.This is my db.stats() result:dataSize + indexSize = 1.4 GB.\nBut I still received error:My cluster is M2",
"username": "Ng_c_Tri_u_Vo"
}
]
| Space quota does not match Storage Size | 2022-02-04T18:01:10.303Z | Space quota does not match Storage Size | 8,322 |
null | [
"swift",
"atlas-device-sync",
"app-services-user-auth"
]
| [
{
"code": "'crypto' module: error signing messagemakeJWT()",
"text": "Hi,I have successfully implemented the Apple Sign In/Sign Up flows into my iOS app. However, I am not sure how to proceed regarding the account deletion which, as you may know, has been made mandatory by Apple:So, only deleting the user account through the Atlas Services is not enough.\nI took a look at the revoke flow and tried to follow the tutorial here (for Firebase) but I am encountering an error 'crypto' module: error signing message when running the makeJWT() function.I think I have an issue with the encoding of the private key if I save it directly as a string. AFAIK I cannot read a file from the function…\nDid anyone try to implement the revoke flow on MongoDB/Realm?",
"username": "Sonisan"
},
{
"code": "",
"text": "Ok so… it has been a mess but I have finally managed to code the whole flow with App Services and Swift. I’ll try to summarize the steps on a repository because there is a lot of unnecessary trial and error involved…EDIT: GitHub - sonisan/apple-token-revoke-in-realm: Description of the steps on how to revoke the Apple account of a user logged with Apple Sign In on MongoDB - Realm.\n@Ian_Ward in case any of this might be interesting to add in the doc…",
"username": "Sonisan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Apple Sign in: revoke token | 2022-11-17T12:13:48.039Z | Apple Sign in: revoke token | 2,281 |
null | [
"serverless"
]
| [
{
"code": "",
"text": "Our team is looking into MongoDB serverless.However, according to this document, multi-region is not supported.Any ideas about when/if multi-region will be supported?",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "Hi Alex,Correct, currently multi-region is not supported for serverless instances.Unfortunately we would not be able to provide you with a timeline for when this will be supported for serverless instances. In saying so, if you would like this feature to be added in Atlas for serverless instances, I would recommend you to file a feature request via the MongoDB Feedback Engine. From that platform, the Product Management team will be able to monitor the requests. You can also keep track of the progress of your request, and make this visible to other users who can vote on it as well.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Serverless Instance Limitations; Multiø-region | 2022-11-17T10:56:32.231Z | Serverless Instance Limitations; Multiø-region | 1,752 |
null | [
"queries",
"dot-net"
]
| [
{
"code": "",
"text": "I want to achieve the below linq query using builder in MongoDB:var filterList = new List();\nfilterList.Add(“Note1”);\nfilterList.Add(“Note2”);\nvar notes = MongoContext._database.GetCollection(“NotesCollection”).Find(new BsonDocument()).ToList();\nvar result = notes.Where(note => filterList.Any(filter => note.Title.Contains(filter)));The above code is returning the result from the NotesCollection where the Title contains any string from the list.I dont want to get the entire NotesCollection and apply lambda expression on it to get the result. Instead im using the Builder from mongo and applying the lambda filter condition to get only the required data from collection.var builder = Builders.Filter.Where(note => filterList.Any(filter => note.Title.Contains(filter)));\nMongoContext._database.GetCollection(“NotesCollection”).Find(builder).ToList();This is throwing me an error saying \"System.ArgumentException: 'Unsupported filter: Any(value(System.String[]).Where({document}{Title}.Contains({document}))).'\n\"Kindly let me know how to achieve this in mongo",
"username": "leo_lawrence"
},
{
"code": "",
"text": "Hi Any help on this query is highly Appreciated.",
"username": "leo_lawrence"
}
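The driver rejects the LINQ expression above as an unsupported filter because it cannot translate a Where over the in-memory list into a single server-side condition. The query shape it is trying to express is an $or of per-term regex conditions, shown here in mongosh form; in the .NET driver the same shape can usually be built by combining one filter per term (for example with Builders<T>.Filter.Or over Filter.Regex filters) rather than a single lambda over the list.

const filterList = ["Note1", "Note2"];
db.NotesCollection.find({
  $or: filterList.map(f => ({ Title: { $regex: f } }))
})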
]
| MongoDB Builder - Unsupported filter error | 2022-11-17T10:31:05.958Z | MongoDB Builder - Unsupported filter error | 1,834 |
null | [
"backup"
]
| [
{
"code": "",
"text": "Hi,I am looking best backup and recovery strategy for my 3 node prod mongodb cluster hosted in AWS.Some of my questions:Thanks\nSreedhar Y",
"username": "Sreedhar_Y"
},
{
"code": "",
"text": "Hello @Sreedhar_Y ,The backup and recovery requirements of a given system vary to meet the cost, performance and data protection standards the system’s owner sets. You should consult any internal backup policy(s) your organisation may have to determine the requirements of the backups taken.In saying the above, you can go through below links to help you understand the available backup methods and some backup strategies:Regards,\nTarun",
"username": "Tarun_Gaur"
},
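As a concrete starting point from those docs for a self-managed 3-node replica set, a point-in-time dump that includes the oplog, and the matching restore; hosts, replica set name and paths below are placeholders, not taken from the thread:

mongodump --uri="mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0" \
  --oplog --gzip --archive=/backups/mydb-$(date +%F).gz

mongorestore --gzip --archive=/backups/mydb-2022-11-14.gz --oplogReplay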
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| What is best backup/recovery strategy of 3 node MongoDB replica hosted in AWS | 2022-11-14T18:06:59.901Z | What is best backup/recovery strategy of 3 node MongoDB replica hosted in AWS | 1,627 |
null | []
| [
{
"code": "",
"text": "Hello,I started to have slow queries today. My connections have gone up, though still under 30, and I do see constant network activity as well I think. I can post screen shots of my activity if that will help, I’m just not sure what to look for. As far as I can tell my issues correspond to the connections going from about 4 to 17. This may have nothing to do with it but that’s what jumped out at me.Thanks for any ideas",
"username": "Justus_Cook"
},
{
"code": "",
"text": "I did some more searching and saw some things saying maybe I hit a download limit and its being throttled. So I moved the data to a new DB and seems to be back to normal. Connections are the same even though I only have 1 connection that should be active. Could the node driver make a difference in performance?",
"username": "Justus_Cook"
},
{
"code": "",
"text": "Hi Justus - Welcome to the community I did some more searching and saw some things saying maybe I hit a download limit and its being throttled. So I moved the data to a new DB and seems to be back to normal.Do you have more details regarding your cluster tier or the limit you’re referring to? I presume it would be related to the Atlas M0 (Free Cluster), M2, and M5 Limitations but correct me if I am wrong here.Connections are the same even though I only have 1 connection that should be active. Could the node driver make a difference in performance?Could you clarify what the context here is in terms of performance? Are you referring to the connection count? Most MongoDB drivers use connection pools to manage database requests so connections do not overwhelm server resources and established connections can be reused.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "To answer your questions, I do have a M0 cluster, but I have never experienced this kinda of sluggishness some results taking 30 seconds plus.I was just throwing ideas out there about the driver being the issue, but I have no evidence of that. Maybe these screen shots can shed some light.I uploaded an image of the last 48 hour or so when the DB started being very slow.\n(I cant upload all the images I have as a new user) To me these seem like smaller but it the lag was very high.I use to use connection incorrectly and exceeded the maximum connections, but since fixing that code I haven’t had issues until now. I don’t have great understand of the metrics and what they should mean for performance, that’s why I mention connections since that’s the only known issue I have had.I was making more write than before with some new code, but that had been running a week or so and didn’t seem to cause issues. I also started making a React site that might have been using my API to access the DB a little more than I thought. The only reason I not sure that’s the case because even after disabling all that code I continued to have very bad performance.\nI hope this can help with coming up with a solution or at least figure out my problem.last 48 of original DB:\n\noldspike2520×1422 149 KB\nThanks,",
"username": "Justus_Cook"
},
{
"code": "M0M2/M5M0",
"text": "Thanks for providing those details Justus.I was making more write than before with some new code, but that had been running a week or so and didn’t seem to cause issues.As per the limitations documentation currently M0 free clusters and M2/M5 shared clusters limit the total data transferred into or out of the cluster in a rolling seven-day period. Specifically for M0 this limit is 10 GB in and 10 GB out per period. The limit itself noted here is for the cluster as a whole as well. This means that if the total sum of network data transferred for all nodes exceeds the limit, then the throttling will occur.One of the key point here is that the limit is based off a rolling seven-day period.So I moved the data to a new DB and seems to be back to normal.I understand that you’ve moved from one cluster to another in which the issue was resolved. If you encounter this again on the new cluster, I would recommend contacting the Atlas support team via the in-app chat to see if they are able to confirm if you’ve past that particular limit.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "@Jason_Tran Thanks for the advice. I didn’t know I had that support option at this tier so that would great. I didn’t see any alerts triggered like I got for the connection before so I didn’t think i would get throttled I guess. I will probably try to reach out and see if they can tell me what might have happened anyway.Thanks again!",
"username": "Justus_Cook"
},
{
"code": "",
"text": "No problem Justus - As per the Get help with Atlas docs:MongoDB Atlas offers several packages to support your organization or project. Atlas includes a free Basic support plan, or you can upgrade to a paid plan for extended support services. Details on support plans are available through the UI as part of the procedure to change your support plan or by contacting MongoDB.All the best,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Suddenly sluggish Atlas | 2022-11-17T19:17:15.427Z | Suddenly sluggish Atlas | 1,496 |
[]
| [
{
"code": "",
"text": "Can anyone can help me to display the data in my backend server\n\nimage872×657 34.6 KB\n\nIt shows only {“data”:}, But my collection having some data int it",
"username": "Vimal_Kumar_G"
},
{
"code": "",
"text": "\nimage652×540 5.4 KB\n",
"username": "Vimal_Kumar_G"
},
{
"code": "",
"text": "Please do not post your code as an image. We cannot cut-n-paste it.Are you using mongoose? If not, then find() returns a cursor. You have to consume the cursor. Calling toArray() is one way to do it.",
"username": "steevej"
},
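{
"text": "To make the cursor point above concrete, a minimal hedged sketch with the native Node.js driver - the URI, database and collection names are placeholders. find() returns a cursor, and toArray() is one way to consume it; if Product is a Mongoose model instead, awaiting find() already resolves to an array and toArray() is not needed.",
"code": "const { MongoClient } = require('mongodb');

async function listProducts() {
  const client = new MongoClient('mongodb://127.0.0.1:27017'); // placeholder URI
  try {
    await client.connect();
    // find() returns a cursor; toArray() consumes it into a plain array
    return await client.db('ecanteen').collection('products').find({}).toArray();
  } finally {
    await client.close();
  }
}"
},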
{
"code": "origin:\"http://localhost:3000\"\nres.json({message: \"Welcome to E-canteen\"});\nconsole.log(`Server is running on port ${PORT}`);\ntry {\n\n const products = await Product.find([])\n\n res.status(200).send({data:products})\n\n} catch (err) {\n\n res.status(400).send({ error: err})\n\n}\ntry {\n\n const products = await Product.aggregate([\n\n { $match: {}},\n\n { $group: {\n\n _id: '$category',\n\n products: {$push: '$$ROOT'}\n\n }},\n\n {$project: { name: '$_id',products: 1, _id: 0}}\n\n ])\n\n res.status(200).send({data: products})\n\n} catch (err) {\n\n res.status(400).send({error: err})\n\n}\n",
"text": "Server.jsconst express = require(“express”);const bodyParser = require(“body-parser”);const cors = require(“cors”);const db = require(’./db’);const app = express();const productRouter = require(’./routes/productRouter.js’)var corsOptions = {}app.use(cors(corsOptions));app.use(bodyParser.json());app.use(bodyParser.urlencoded({extended: true}));db.on(‘error’, console.error.bind(console, ‘MongoDB Connection Failed:’))app.get(\"/\",(req,res)=>{});const PORT = process.env.PORT || 6505;app.listen(PORT, () => {});app.use(’/api/’,productRouter)productRouter.js\nconst express = require(‘express’)const router = express.Router()const Product = require(’…/models/productModel’)router.get(’/’, async (req, res) => {})router.get(’/products-by-categories’, async (req,res) =>{})module.exports = router;",
"username": "Vimal_Kumar_G"
},
{
"code": "const products = await Product.find([])",
"text": "Why did you added the empty array parameter to find()const products = await Product.find([])where you simply hadconst products = await Product.find()Your code does not show how you connect to your database server. It is probably done inconst Product = require(’…/models/productModel’)SoAre you using mongoose?Have you triedCalling toArray() is one way to do it.Share a screenshot that shows the data you have in your database. The ideal is a screenshot using mongosh so we can see the database and the collection use.",
"username": "steevej"
}
]
| Data present in my collection is not displayed in my backend server | 2022-11-17T14:23:56.034Z | Data present in my collection is not displayed in my backend server | 2,057 |
|
null | [
"replication"
]
| [
{
"code": "",
"text": "We are using MongoDB ReplicaSet of 3 nodes and we want to isolate the resources used by a sepcific database for example•\tLimit the CPU\n•\tLimit the Memory\n•\tIsolate the disk on a specific database data will be stored (Different Volume/Disk)Is this possible to achive this and if yes can you please provide your feedback and comments, we will really appreciate any help.Thank you !!!",
"username": "Andreas_Andreou"
},
{
"code": "",
"text": "ForIsolate the disk on a specific database data will be stored (Different Volume/Disk)seeyou then may certainly make this directory a symlink to another volume.",
"username": "steevej"
}
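,
{
"text": "A hedged sketch of the disk-isolation part only: with storage.directoryPerDB enabled, every database gets its own sub-directory under dbPath, and that sub-directory can be a symlink to another volume. Note that directoryPerDB cannot simply be switched on for an existing dbPath - it needs a resync or dump/restore - and MongoDB has no per-database CPU or memory limits; those can only be applied to the whole mongod process (for example via cgroups or containers).",
"code": "# mongod.conf (sketch)
storage:
  dbPath: /var/lib/mongodb
  directoryPerDB: true

# With mongod stopped, move one database's directory to another volume
# and leave a symlink behind (paths are placeholders):
#   mv /var/lib/mongodb/bigdb /mnt/fastdisk/bigdb
#   ln -s /mnt/fastdisk/bigdb /var/lib/mongodb/bigdb"
}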
]
| Isolate resources on a specific database | 2022-11-17T06:19:40.336Z | Isolate resources on a specific database | 1,019 |
null | [
"crud",
"transactions"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"6374aaa6d30e9e74980796d9\"\n },\n \"isMigrated\": true,\n \"isDeleted\": false,\n \"paymentId\": {\n \"$numberLong\": \"1\"\n },\n \"paymentDetails\": {\n \"tag\": [\n {\n \"vehicleNumber\": \"MH03S6692\",\n \"vehicleClass\": \"Car/Jeep/Van\",\n \"tagAccountNumber\": {\n \"$numberLong\": \"20001121\"\n },\n \"transactionAmount\": {\n \"$numberDecimal\": \"806\"\n },\n \"tagLineItems\": [\n {\n \"transactionAmount\": 200,\n \"appTxnCode\": \"TagBal\"\n },\n {\n \"transactionAmount\": 325.1,\n \"appTxnCode\": \"TollBal\"\n },\n ]\n }\n ]\n }\n}\n",
"text": "Hello everyone I am trying to modify the data type of transactionAmount in tagLineItems but unable to update the data type.",
"username": "Prudhvi_Raj1"
},
{
"code": "",
"text": "This looks a lot like Update the DataType for a field inside the Nested Array.",
"username": "steevej"
}
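,
{
"text": "For illustration, one common way to do this kind of conversion is an update with an aggregation pipeline (MongoDB 4.2+) that rebuilds the nested arrays with $map and converts the field with $toDecimal. A hedged mongosh sketch - the collection name is a placeholder, and it should be tried on a copy of the data first:",
"code": "db.payments.updateMany({}, [
  { $set: {
      'paymentDetails.tag': {
        $map: {
          input: '$paymentDetails.tag',
          as: 't',
          in: { $mergeObjects: [ '$$t', {
            tagLineItems: {
              $map: {
                input: '$$t.tagLineItems',
                as: 'li',
                in: { $mergeObjects: [ '$$li',
                  { transactionAmount: { $toDecimal: '$$li.transactionAmount' } } ] }
              }
            }
          } ] }
        }
      }
  } }
])"
}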
]
| Updating Data type from int to decimal | 2022-11-17T06:10:24.464Z | Updating Data type from int to decimal | 1,272 |
null | [
"aggregation",
"dot-net"
]
| [
{
"code": "",
"text": "Hi There,\nWe added Analytics node to handle the extra load on the atlas cluster. Initially we added one analytics node but faced the outage in analytics node. So we added two analytics node per MongoDB recommendation. So now we want to switch the traffic from one analytics node to second analytics node.\nCan anyone suggest how to do that?\nThanks,\nAarzoo",
"username": "AARZOO_MANOOSI"
},
{
"code": "{'usage':'analytics'}mongodb://host0.example.com/database?replicaSet=rsName&readPreference=secondary&readPreferenceTags=usage:analytics",
"text": "Hi @AARZOO_MANOOSIThis is a good use case for replica set tags. See the tutorial how to set them up.So you can setup a tag like {'usage':'analytics'} and then use that for the readPreferenceTags for the connection.mongodb://host0.example.com/database?replicaSet=rsName&readPreference=secondary&readPreferenceTags=usage:analytics",
"username": "chris"
},
{
"code": "",
"text": "Thanks much and I was guessing this option.",
"username": "AARZOO_MANOOSI"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to connect explicitly to analytics node if there are multiple analytics node? | 2022-11-15T16:43:22.583Z | How to connect explicitly to analytics node if there are multiple analytics node? | 2,197 |
null | [
"transactions",
"field-encryption",
"storage"
]
| [
{
"code": "systemctl status mongod.service× mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; preset: disabled)\n Active: failed (Result: exit-code) since Thu 2022-11-17 21:54:35 CET; 33s ago\n Docs: https://docs.mongodb.org/manual\n Process: 1265 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 1269 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 1282 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 1283 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=62)\n CPU: 1.369s\n\nnov 17 21:54:34 tony-laptop systemd[1]: Starting mongod.service - MongoDB Database Server...\nnov 17 21:54:34 tony-laptop mongod[1283]: about to fork child process, waiting until server is ready for connections.\nnov 17 21:54:34 tony-laptop mongod[1303]: forked process: 1303\nnov 17 21:54:35 tony-laptop mongod[1283]: ERROR: child process failed, exited with 62\nnov 17 21:54:35 tony-laptop mongod[1283]: To see additional information in this output, start without the \"--fork\" option.\nnov 17 21:54:35 tony-laptop systemd[1]: mongod.service: Control process exited, code=exited, status=62/n/a\nnov 17 21:54:35 tony-laptop systemd[1]: mongod.service: Failed with result 'exit-code'.\nnov 17 21:54:35 tony-laptop systemd[1]: Failed to start mongod.service - MongoDB Database Server.\nnov 17 21:54:35 tony-laptop systemd[1]: mongod.service: Consumed 1.369s CPU time.\n\n/tmp/mongodb-27017.sock# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\n#security:\n\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.376+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.379+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.380+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.380+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.407+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.407+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.407+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.407+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.407+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1303,\"port\":27017,\"dbPath\":\"/var/lib/mongo\",\"architecture\":\"64-bit\",\"host\":\"tony-laptop\"}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.407+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.3\",\"gitVersion\":\"f803681c3ae19817d31958965850193de067c516\",\"openSSLVersion\":\"OpenSSL 3.0.5 5 Jul 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel90\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.407+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Fedora release 37 (Thirty Seven)\",\"version\":\"Kernel 6.0.8-300.fc37.x86_64\"}}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.407+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017},\"processManagement\":{\"fork\":true,\"pidFilePath\":\"/var/run/mongodb/mongod.pid\",\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongo\",\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.409+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongo\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:34.409+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening 
WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3406M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.753+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1344}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.753+01:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.782+01:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.782+01:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":65530,\"recommendedMinimum\":102400,\"maxConns\":51200},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"4.4\\\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). 
If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"initandlisten\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"initandlisten\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"initandlisten\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for 
shutdown\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"initandlisten\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"initandlisten\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.788+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"initandlisten\",\"msg\":\"Closing 
WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.818+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":30}}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.818+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"initandlisten\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.818+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.818+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-11-17T21:54:35.818+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":62}}\n\nmongod --version\ndb version v6.0.3\nBuild Info: {\n \"version\": \"6.0.3\",\n \"gitVersion\": \"f803681c3ae19817d31958965850193de067c516\",\n \"openSSLVersion\": \"OpenSSL 3.0.5 5 Jul 2022\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"rhel90\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n\n",
"text": "After i’v update my machine to Fedora 37, (patch update) mongod.service not work.\nsystemctl status mongod.serviceI have the /tmp/mongodb-27017.sockI’v already remove / install mongod & other stuff but steel not working.mongod.conf file:in my log file i have this :What can i do ?",
"username": "Belingheri_N_A"
},
{
"code": "{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"4.4\\\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}sudo yum -y remove mongodb-org-serversudo yum -y install \"mongodb-org-server < 6\"",
"text": "{\"t\":{\"$date\":\"2022-11-17T21:54:35.787+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20573, \"ctx\":\"initandlisten\",\"msg\":\"Wrong mongod version\",\"attr\":{\"error\":\"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \\\"featureCompatibilityVersion\\\", version: \\\"4.4\\\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures.\"}}The issue is that somehow you have upgraded from 4.4 straight though to 6.0. Fedora is also not a supported platformThat aside, if you want to run 6.0 you will have to first perform the upgrade procedure to 5.0, once that is successful you can complete the upgrade to 6.0.First:\nsudo yum -y remove mongodb-org-serverFollow the release notes/instructions for 5.0 upgrade BUT replace the yum install line:\nsudo yum -y install \"mongodb-org-server < 6\"\nthen :After this you will have a happy 6.0 install.",
"username": "chris"
}
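,
{
"text": "A hedged sketch of the intermediate step described above: once the 5.0-series binaries are installed and mongod starts cleanly, the featureCompatibilityVersion has to be raised before moving on to 6.0 (the linked release notes remain the authoritative procedure):",
"code": "# 1. With the 5.0 binaries running, bump FCV from 4.4 to 5.0
mongosh --eval 'db.adminCommand({ setFeatureCompatibilityVersion: \"5.0\" })'

# 2. Upgrade the packages to 6.0 and restart mongod
sudo yum -y install mongodb-org-server

# 3. Once 6.0 is running and healthy, bump FCV again
mongosh --eval 'db.adminCommand({ setFeatureCompatibilityVersion: \"6.0\" })'"
}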
]
| Mongod.service: Control process exited, code=exited, status=62 | 2022-11-17T21:11:43.617Z | Mongod.service: Control process exited, code=exited, status=62 | 4,411 |
null | []
| [
{
"code": "",
"text": "I created an M10 cluster, configured Azure Private Endpoint (both Atlas and Endpoint are shown as available), but when I go to cluster → connect, I don’t see an option to select the private endpoint, just set up allowed IP addresses.Edit: I’m after the connection string to use in Compass.What am I missing?",
"username": "Ran_Shenhar"
},
{
"code": "",
"text": "I also tried just adding “-pl-0” to the regular connection string - no luck",
"username": "Ran_Shenhar"
},
{
"code": "",
"text": "Hi @Ran_Shenhar - Welcome to the community I created an M10 cluster, configured Azure Private Endpoint (both Atlas and Endpoint are shown as available), but when I go to cluster → connect, I don’t see an option to select the private endpoint, just set up allowed IP addresses.The first thing I could guess at this point would be if the private link is active in a different region to where the cluster’s nodes are deployed. Could you confirm if the cluster’s region is the same as where the private link is active?You may find the following Private Endpoint - Limitations documentation useful.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "\nimage709×375 39.9 KB\n",
"username": "Ran_Shenhar"
},
{
"code": "",
"text": "Hi Ran,Thanks for the screenshot. I believe the private endpoint connection string option won’t appear unless you’ve deployed the private link on the same regions as the cluster’s nodes correctly. If you believe the region(s) of the cluster and the private link(s) associated with the cluster are correct and you are still not seeing the private endpoint connection string option, I would recommend contacting the Atlas Support team via the in-app chat to verify if there are any possible issues with the configuration of the private endpoint from the Atlas end.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks Jason - you are correct.\nI moved the cluster to same Azure region, and everything works now.",
"username": "Ran_Shenhar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Private endpoint active, but no connection string? | 2022-11-17T18:41:06.945Z | Private endpoint active, but no connection string? | 1,734 |
null | []
| [
{
"code": "",
"text": "Hi All,The Geospacial Map component lacks a lot of detail of the real world. When using something like google maps or open street maps, buildings can be seen, vital to the project I am currently working on.Is it possible to change the map background of Geospacial charts? Or perhaps project the data onto another?Thanks,\nMatt",
"username": "Matt_B1"
},
{
"code": "getData()",
"text": "Hi @Matt_B1 -The map tiles we use are from HERE and they do include buildings for many parts of the world, but presumably you are working in an area where they have insufficient coverage. You might want to consider providing feedback directly to HERE using their tool at https://mapfeedback.here.com/.We don’t offer a way of using your own map tiles with our charts. If you are using embedding, one option you could consider is to use the getData() method to load the raw data for a chart, which you can then render using your choice of tool such as Google Maps.HTH\nTom",
"username": "tomhollander"
},
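{
"text": "To illustrate the getData() route, a hedged sketch with the Charts Embedding SDK - the base URL and chart ID are placeholders, and the exact shape of the returned data depends on the chart, so it should be inspected before wiring it into another map library such as Google Maps or Leaflet:",
"code": "import ChartsEmbedSDK from '@mongodb-js/charts-embed-dom';

const sdk = new ChartsEmbedSDK({
  baseUrl: 'https://charts.mongodb.com/charts-project-xxxxx' // placeholder
});
const chart = sdk.createChart({ chartId: '00000000-0000-0000-0000-000000000000' }); // placeholder

await chart.render(document.getElementById('chart'));
const data = await chart.getData(); // raw data behind the chart
// ...feed `data` into Google Maps / Leaflet / OpenStreetMap tiles instead of the built-in map"
},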
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Changing the Geospacial Map. Is it possible? | 2022-11-17T10:42:32.059Z | Changing the Geospacial Map. Is it possible? | 1,518 |
null | [
"python",
"production",
"field-encryption"
]
| [
{
"code": "",
"text": "We are pleased to announce the 4.3.3 release of PyMongo - MongoDB’s Python Driver. This release documents support for CSFLE on-demand credentials for cloud KMS providers, authentication support for EKS Clusters, and fixes a number of bugs.See the changelog for a high level summary of what’s new and improved or see the 4.3 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongo 4.3.3 Documentation\nChangelog: Changelog\nSource: GitHubThank you to everyone who contributed to this release!",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| PyMongo 4.3.3 Release | 2022-11-17T21:48:08.933Z | PyMongo 4.3.3 Release | 2,545 |
null | [
"atlas-search"
]
| [
{
"code": " \"_id\" : ObjectId(\"...\"),\n ...,\n \"profiles\" : [ \n {\n \"_id\" : ObjectId(\"...\"),\n \"type\": \"type\",\n \"firstName\" : \"abc\",\n \"lastName\" : \"abc\",\n \"pageName\": \"abc\",\n },\n ...\n ],\n$search: {\n index: \"profile_name\",\n compound: {\n must: [{ \n wildcard: {\n query: [\"*someQuery*\"], \n path: [\"profiles.firstName\", \"profiles.lastName\", \"profiles.pageName\"], \n allowAnalyzedField: true\n }\n }\n ]\n}\n}\n",
"text": "I am a little stuck,I have users collection defined as such:Every user can have multiple profiles, I have created an index “profile_name” to search for profiles based on firstName, lastName and pageName.how can I return only those matching profiles instead of the entire document… Currently the whole document is returned (if document contains a profile that matches the criteria but it has 30 profiles it returns all 30 profiles)since I cannot use $search after unwinding, how should I proceed?",
"username": "sabataitis"
},
{
"code": "",
"text": "Hi there, can you please share the index you created?",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "A multi-stage aggregation pipeline might help.Use the indexed search as first stage to find related documents, then use $unwind stage on selected documents, then $match on the next stage again to get the related sub-documents, add a grouping signature then group together by that signature.This is not a perfect solution though but might at least help for a while until you get a better solution.",
"username": "Yilmaz_Durmaz"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"profiles\": {\n \"fields\": {\n \"_id\": {\n \"type\": \"objectId\"\n },\n \"firstName\": {\n \"analyzer\": \"diacriticFolder\",\n \"searchAnalyzer\": \"diacriticFolder\",\n \"type\": \"string\"\n },\n \"lastName\": {\n \"analyzer\": \"diacriticFolder\",\n \"searchAnalyzer\": \"diacriticFolder\",\n \"type\": \"string\"\n },\n \"pageName\": {\n \"analyzer\": \"diacriticFolder\",\n \"searchAnalyzer\": \"diacriticFolder\",\n \"type\": \"string\"\n },\n \"type\": {\n \"type\": \"string\"\n }\n },\n \"type\": \"document\"\n }\n }\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"diacriticFolder\",\n \"tokenFilters\": [\n {\n \"type\": \"icuFolding\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"keyword\"\n }\n }\n ]\n}\n",
"text": "Here is my indexhow can I return only the matched sub documents? using multi stage aggregation is not an option as I can not match matched sub documents without running the search query, which can only be ran once.",
"username": "sabataitis"
},
{
"code": "",
"text": "I’m not sure it is possible to only return matched sub documents. Will poke around on this. Perhaps highlighting would help?",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "is there any way to return an _id of matched sub-document together with highlights?",
"username": "sabataitis"
},
{
"code": "",
"text": "or perhaps is there a way to sort the matching documents so they would appear at the top?",
"username": "sabataitis"
},
{
"code": "[\n {\n \"$search\": {\n \"index\": \"profiles_names\",\n \"text\": {\n \"query\": \"typea\",\n \"path\": {\n \"wildcard\": \"*\"\n }\n }\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$profiles\",\n \"preserveNullAndEmptyArrays\": false\n }\n },\n {\n \"$match\": {\n \"profiles.type\": \"typea\"\n }\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": \"$profiles\"\n }\n }\n]\n[\n {\n \"username\": \"usera\",\n \"profiles\": [\n { \"type\": \"typea\", \"pageName\": \"abc\" },\n { \"type\": \"typeb\", \"pageName\": \"cde\" }\n ]\n },\n {\n \"username\": \"userb\",\n \"profiles\": [\n { \"type\": \"typea\", \"pageName\": \"asd\" },\n { \"type\": \"typec\", \"pageName\": \"zxc\" }\n ]\n },\n {\n \"username\": \"userc\",\n \"profiles\": [\n { \"type\": \"typeb\", \"pageName\": \"jkl\" },\n { \"type\": \"typec\", \"pageName\": \"bnm\" }\n ]\n }\n]\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"profiles\": {\n \"fields\": {\n \"pageName\": {\n \"analyzer\": \"diacriticFolder\",\n \"searchAnalyzer\": \"diacriticFolder\",\n \"type\": \"string\"\n },\n \"type\": {\n \"analyzer\": \"diacriticFolder\",\n \"searchAnalyzer\": \"diacriticFolder\",\n \"type\": \"string\"\n }\n },\n \"type\": \"document\"\n }\n }\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"diacriticFolder\",\n \"tokenFilters\": [\n {\n \"type\": \"icuFolding\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"keyword\"\n }\n }\n ]\n}\n",
"text": "I think you misunderstood the use cases of the search indexing. Actually of any indexing. Indexes are used for the preliminary elimination of unmatched documents. When we use an index for the first time, the resulting set of documents no longer are part of the index. that is also why we try to put any query that can use indexes at the top.Processing indexes is fast and in case the result of this first stage satisfies your needs in 1 step then you can use the result as is. otherwise, you have to go through the remaining procedure of finding what you need. Unfortunately, the remaining part is nasty without indexes but not hopeless, because the ugly part lies only in the time required to complete operations.The following aggregation pipeline does something similar to what you too should do. search by using the search index at the first stage, then unwind the profiles array, then search again with $match (nasty part, you need to use it on fields manually), then do whatever else you need to do in the remaining stages. I chose to flatten the result by replacing the root element. you can select fields and continue only with themThe above query does not need, but here are the data I used and the search index:",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you for taking your time to reply. I understand your proposal, the very problem I am facing is that my search index has removal of accents, which I can not $match again after $unwinding… I’ve followed the suggestions of highlighting to retrieve any unique field that I could identify nested profile by…I have managed to implement a solution using highlighting, I am querying by fields of pageName, firstName and lastName. After that I am highlighting unique field of slug, which lets me identify the matching profile.The problem with that though is when I enter a query: “karasukrainoje”, it finds the pageName, but only highlights the slug “karasukrainoje”, when the whole slug is “karasukrainoje.1”, is there any way to highlight all of it?",
"username": "sabataitis"
},
{
"code": "",
"text": "I haven’t used accents/collation before but if you haven’t configured it to store full documents in the index, you would be getting your actual documents after the search. and then you could work on the result set as usual without fancy additions. Even if you stored them, I think they are stored as is, only the keywords for the index should be changing for the accents.Can you give us some accented documents to work with so we may at least try to see it from your viewpoint?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "A quick note: Another option here is to redesign your document schema to store “profiles” in a separate collection. this will increase complexity but will give the capability to have better indexing on them.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I am sorry haven’t had the time to pull some documents to test with,is there a way to store only matched sub-documents? I’ve tried storing documents with index but again it would store all of the profiles instead of the matching ones. Am I missing something",
"username": "sabataitis"
},
{
"code": "_id",
"text": "Take your time about giving sample documents.if your question is about my quick note of storing profiles independently, it goes like this:make a migrator app, fetch a document, for each profile create a profile document in a new “profiles” collection, get its _id and use it to replace profile array in main document, and after processing whole array patch the corresponding document (profiles field) in the database. this is the migration part.drop current search index from main collection and create new search index on this new “profiles” collection.rewrite your application to use “profiles” collection when you do search on profiles.it involves at least 3 different operations, one involves changing main documents. so please be careful with the implementation since it can cause unwanted data loss",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "im sorry the reply was in regards to your previous comment…ref\nimage789×283 76.5 KB\n",
"username": "sabataitis"
},
{
"code": "",
"text": "Ah, ok then. This is what I mentioned:\nReturn Stored Source Fields — MongoDB AtlasThis way index will store those fields and will return them immediately instead of returning the full documents to which these fields belong.But it seems, either way, you will need subsequent matching stages on documents returned from index search stage: https://www.mongodb.com/docs/atlas/atlas-search/performance/index-performance/#storing-source-fieldsWhat I meant is, by the way, even though the index works with accents differently, the returned documents from this stage should retain their accents on which you may have a match stage with collation.PS: I am not 100% confident how stored fields works as I haven’t used them before. I just trust the documentation PS again: I might even be mostly wrong and this one might be what you need. please check on how this storing option works thoroughly.",
"username": "Yilmaz_Durmaz"
},
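{
"text": "To make the stored-source idea concrete, a hedged sketch using the field names from this thread (syntax should be double-checked against the linked docs): the index can store just the profile fields, and the query can then ask for only those stored fields back instead of the full documents. Note the follow-up $match is a plain match, so it does not apply the diacritic folding that the search analyzer does.",
"code": "// In the index definition, next to \"mappings\":
//   \"storedSource\": { \"include\": [ \"profiles._id\", \"profiles.firstName\", \"profiles.lastName\", \"profiles.pageName\" ] }

db.users.aggregate([
  { $search: {
      index: 'profile_name',
      wildcard: {
        query: '*someQuery*',
        path: ['profiles.firstName', 'profiles.lastName', 'profiles.pageName'],
        allowAnalyzedField: true
      },
      returnStoredSource: true
  } },
  // The matching profile still has to be picked out of the (now much smaller) documents:
  { $unwind: '$profiles' },
  { $match: { 'profiles.pageName': /someQuery/i } }
])"
},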
{
"code": "",
"text": "Potentially can use a Materialized view for this? Tutorial hereStored Source isn’t a bad idea either… but it’s kind of for a different use case? (e.g. looking for more performant queries)",
"username": "Elle_Shwer"
}
]
| Atlas $search how to return only matching sub-documents | 2022-11-15T11:04:17.162Z | Atlas $search how to return only matching sub-documents | 3,236 |
[]
| [
{
"code": "",
"text": "So to give you guys a better picture I am building an API where I have data from movies. I have a db with 4 collections. The problem I am facing is that when I test the requests I get an empty array & null responses.\nWhat could be the issue I face? Does someone know if it is not exporting the data due to a db issue or is not connecting because of code?\nI am using mongo & express.\n\n2022-11-17 (4)1920×1080 244 KB\nCould it be the error within the code, my db or somehow Postman?",
"username": "Hermann_Rasch"
},
{
"code": "",
"text": "\nmodels_js1920×1080 220 KB\n",
"username": "Hermann_Rasch"
}
]
| While exporting and trying to test for request to see what data should appear, I got null and empty arrays | 2022-11-17T18:57:43.029Z | While exporting and trying to test for request to see what data should appear, I got null and empty arrays | 1,114 |
|
null | [
"python"
]
| [
{
"code": "",
"text": "Hi Team I registered for ondemand certification at Mongodb university as part of the student developer pack on github but due to changes can’t redeem that privilege on the new site. Please help!!!",
"username": "Byron_Odhiambo"
},
{
"code": "",
"text": "Hi @Byron_Odhiambo,Welcome to the MongoDB University forums Please email [email protected] and the team will help you out!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi Byron,Were you provided with a certification voucher? Or are you having trouble signing into Github through MongoDB?Can you provide a link to the sign-in page you’re trying to use? Thank you!",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Ondemand certification | 2022-11-17T08:37:01.418Z | Ondemand certification | 2,311 |
null | []
| [
{
"code": "/var/log/mongodb$ service mongod status\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/etc/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: signal) since Mon 2022-11-14 23:52:05 PST; 1h 5min ago\n Docs: https://docs.mongodb.org/manual\n Process: 3534572 ExecStart=/usr/bin/numactl --interleave=all /usr/bin/mongod --config /etc/mongod.conf (code=killed, signal=ABRT)\n Main PID: 3534572 (code=killed, signal=ABRT)\n{\"t\":{\"$date\":\"2022-11-14T23:51:17.944-08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2022-11-14T23:51:18.227-08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileStreamFailed: Failed to write to interim file buffer for full-time diagnostic data capture: /var/lib/mongodb/diagnostic.data/metrics.interim.temp\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)39, mongo::AssertionException>\\n\"}}\n{\"t\":{\"$date\":\"2022-11-14T23:51:19.962-08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1668498679:962505][3534572:0x7fd50e2e8700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 161627, snapshot max: 161627 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 615587\"}}\n{\"t\":{\"$date\":\"2022-11-14T23:51:20.091-08:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":28,\"message\":\"[1668498679:996781][3534572:0x7fd50e2e8700], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_write, 614: /var/lib/mongodb/WiredTiger.turtle.set: handle-write: pwrite: failed to write 1496 bytes at offset 0: No space left on device\"}}\n{\"t\":{\"$date\":\"2022-11-14T23:51:20.105-08:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":28,\"message\":\"[1668498680:105455][3534572:0x7fd50e2e8700], file:WiredTiger.wt, WT_SESSION.checkpoint: __posix_file_write, 614: /var/lib/mongodb/WiredTiger.turtle.set: handle-write: pwrite: failed to write 1496 bytes at offset 0: No space left on device\"}}\n{\"t\":{\"$date\":\"2022-11-14T23:51:20.426-08:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":28,\"message\":\"[1668498680:426187][3534572:0x7fd50e2e8700], file:WiredTiger.wt, WT_SESSION.checkpoint: __wt_turtle_update, 448: WiredTiger.turtle: fatal turtle file update error: No space left on device\"}}\n{\"t\":{\"$date\":\"2022-11-14T23:51:20.426-08:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":-31804,\"message\":\"[1668498680:426345][3534572:0x7fd50e2e8700], file:WiredTiger.wt, WT_SESSION.checkpoint: __wt_turtle_update, 448: the process must exit and restart: WT_PANIC: WiredTiger library panic\"}}\n{\"t\":{\"$date\":\"2022-11-14T23:51:20.426-08:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23089, \"ctx\":\"Checkpointer\",\"msg\":\"Fatal 
assertion\",\"attr\":{\"msgid\":50853,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp\",\"line\":538}}\n{\"t\":{\"$date\":\"2022-11-14T23:51:20.426-08:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23090, \"ctx\":\"Checkpointer\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n",
"text": "i have been noticing mongodb failing randomly , and i am unable to understand the logsMongod Service ------>This is the error ---->Is it beacuse of lack of space?",
"username": "Stuart_S"
},
{
"code": "No space left on device",
"text": "Is it beacuse of lack of space?Yes, the errorNo space left on devicemeanslack of space",
"username": "steevej"
},
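{
"text": "For anyone hitting the same symptom, a couple of commands that make the situation visible (the paths come from the log output in this thread):",
"code": "# How full is the volume holding the dbPath?
df -h /var/lib/mongodb

# What is taking the space inside it?
du -sh /var/lib/mongodb/* | sort -h

# Log growth is a common culprit as well
du -sh /var/log/mongodb"
},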
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDb keeps crashing randomly | 2022-11-16T02:59:16.284Z | MongoDb keeps crashing randomly | 2,214 |
null | [
"crud"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"636368f52cc3504dc02d1c92\"\n },\n \"isDeleted\": false,\n \"isMigrated\": true,\n \"createdBy\": {\n \"userId\": 1631,\n \"userName\": \"saratg\"\n },\n \"modifiedBy\": {\n \"userId\": 1647,\n \"userName\": \"sabangalurupos\"\n },\n \"hexTagId\": \"918907048020000003E9\",\n \"tagSerialNo\": \"137438954473\",\n \"history\": [\n {\n \"status\": {\n \"statusId\": 1,\n \"status\": \"INVENTORYRETAILER\",\n \"statusDate\": \"2015-10-13 14:17:31\"\n },\n \"location\": {\n \"locationId\": 2,\n \"status\": \"Sub Agent\"\n },\n \"retailer\": {\n \"retailerId\": 10004668,\n \"name\": \"Bangaluru Sales\",\n \"retailerType\": {\n \"typeId\": 3004,\n \"name\": \"SalesRetailer\"\n }\n }\n }\n ]\n}\n",
"text": "Hello Everyone ,For the below mentioned document,i want to change datatype of statusDate from string to Date in history array .",
"username": "Gayathri_Subramanyam"
},
{
"code": "",
"text": "This question looks identical to Update the Key in the nested array.Are you taking the same course? Working on the same project? The solution should be same.",
"username": "steevej"
}
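,
{
"text": "For illustration, one common way to do this conversion is an update with an aggregation pipeline (MongoDB 4.2+) that rebuilds the history array and converts the string with $dateFromString. A hedged mongosh sketch - the collection name is a placeholder and the format string assumes the 'YYYY-MM-DD HH:MM:SS' layout shown in the sample document; try it on a copy first:",
"code": "db.tags.updateMany({}, [
  { $set: {
      history: {
        $map: {
          input: '$history',
          as: 'h',
          in: { $mergeObjects: [ '$$h', {
            status: { $mergeObjects: [ '$$h.status', {
              statusDate: {
                $dateFromString: {
                  dateString: '$$h.status.statusDate',
                  format: '%Y-%m-%d %H:%M:%S'
                }
              }
            } ] }
          } ] }
        }
      }
  } }
])"
}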
]
| Updating Data Type From String to Date | 2022-11-17T05:50:10.788Z | Updating Data Type From String to Date | 2,110 |
null | []
| [
{
"code": "",
"text": "Hi,I am looking to configure mobile alert if mongodb instance/replica down.Thanks\nSreedhar Y",
"username": "Sreedhar_Y"
},
{
"code": "SMS",
"text": "Hi @Sreedhar_Y,If this is for an Atlas cluster, please review the following documentation:Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks Jason. It’s not Atlas cluster.Thanks\nSreedhar Y",
"username": "Sreedhar_Y"
},
{
"code": "",
"text": "Hi @Sreedhar_YMonitoring tools may or may not include mobile alerts or may be limited to just SMS. Most will have integrations to Instant Messaging platforms(Slack/MS Teams). Often for notifications you will want to integrate a feature rich tool to support on-call rotations and escalations such as PagerDuty or OpsGenie.I have had good success monitoring MongoDB with Datadog paired with OpsGenieMongoDB has a list of SaaS monitoring tools that you could look at. There are many self hosted platforms that will have a MongoDB plugin/integration; Zabbix, Prometheus, Nagios to name a few common ones.",
"username": "chris"
}
]
| How to configure mobile alerts when mongodb replica or instance down | 2022-11-15T00:07:41.810Z | How to configure mobile alerts when mongodb replica or instance down | 1,347 |
[]
| [
{
"code": "",
"text": "“MC”: “[‘B04-E13’, ‘B04-L05A2’, ‘B04-P01A0E’, ‘D05-H16A’, ‘D05-H19C’, ‘P14-A05’, ‘P14-E02C’]”\nThis is a field in my collection ,\nWhen I change the data type in the field MC from string to array, the result always appears:\n“IP”: [ undefined ],\nHere is my code:\n\nimage904×63 3.67 KB\nHow can I fix it? As a rookie in this field, I don’t know why the problem occurs",
"username": "M_M-m"
},
{
"code": "JSON.parse(doc.MC)db.getCollection(\"test_ge\"). updateOne({_id : doc._id},{$set : {IP : JSON.parse(doc.MC)}});\n",
"text": "Hi @M_M-mI think you must use something like JSON.parse(doc.MC) to parse this.I haven’t tested this code! Its just a pseudo suggestion.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks, I will try it",
"username": "M_M-m"
},
{
"code": "",
"text": "\nffff1141f52e2675d7392b2c62b8470982×87 3.55 KB\n[Error] SyntaxError: missing ; before statement\nThe method does not seem to work very well, maybe there is something wrong with my application?",
"username": "M_M-m"
}
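,
{
"text": "A possible explanation, offered as a hedged sketch: the stored string uses single quotes, which JSON.parse() rejects, so normalising the quotes first may help; the 'missing ; before statement' error also suggests the snippet was pasted into a tool that did not accept it as-is, whereas in mongosh something like the following should run (collection and field names come from the earlier snippet - try it on a copy first):",
"code": "db.getCollection('test_ge')
  .find({ MC: { $type: 'string' } })
  .forEach(function (doc) {
    // turn the single-quoted string into a real array by normalising the quotes first
    var parsed = JSON.parse(doc.MC.replace(/'/g, '\"'));
    db.getCollection('test_ge').updateOne({ _id: doc._id }, { $set: { IP: parsed } });
  });"
}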
]
| How to change the data type from string to array | 2022-11-17T09:47:10.720Z | How to change the data type from string to array | 3,171 |
|
null | [
"aggregation",
"replication",
"java",
"spring-data-odm",
"views"
]
| [
{
"code": "",
"text": "I am creating a view in Mongo Db in my Springboot application.Below is the code of same[{\n$sort: {\nevent_trigger_date: -1\n}\n}, {\n$group: {\n_id: {\n“profile_id”: “$profile_id”\n},\ndata: {\n$first: “$$ROOT”\n}\n}\n}, {\n$unset: “_id”\n}, {\n$replaceRoot: {\nnewRoot: “$data”\n}\n}, {\n$project: {\n“profile_id”: 1\n}\n}, {\n$lookup: {\nfrom: ‘profile_event’,\nlocalField: ‘profile_id’,\nforeignField: ‘profile_id’,\nas: ‘profile_event_data’\n}\n}, {\n$group: {\n_id: {\n“profile_id”: “$profile_id”\n},\ndata: {\n$first: “$$ROOT”\n}\n}\n}, {\n$replaceRoot: {\nnewRoot: “$data”\n}\n}, {\n$project: {\nprofile_id: 1,\nprofile_event_data: 1,\nevent_type_set: {\n$concatArrays: [“$profile_event_data.event_type”]\n}\n}\n}, {\n$addFields: {\n_id: {\n$concat: [“ACTIONS_NOT_COMPLETED_0X:”, “$profile_id”]\n},\nevent_type: “ACTIONS_NOT_COMPLETED_NX”,\nevent_trigger_date: “$$NOW”,\nevent_occurence: 0,\ntrigger_status: “SILENT”\n}\n}, {\n$unset: “event_exists”\n}, {\n$lookup: {\nfrom: ‘profile_personal_info’,\nlocalField: ‘profile_id’,\nforeignField: ‘profile_id’,\nas: ‘personal_info’\n}\n}, {\n$project: {\nprofile_id: 1,\nevent_type: 1,\nevent_trigger_date: 1,\nevent_occurence: 1,\ntrigger_status: 1,\nevent_type_set: 1,\npersonal_info: {\n$arrayElemAt: [“$personal_info”, 0]\n}\n}\n}, {\n$addFields: {\noldest_personal_info_created_date: {\n$trunc: {\n$divide: [{\n$subtract: [“$$NOW”, ‘$personal_info.created_date’]\n}, 1000 * 60 * 60 * 24]\n}\n}\n}\n}, {\n$addFields: {\ncreated_date: {\n$trunc: {\n$divide: [{\n$subtract: [“$$NOW”, ‘$event_trigger_date’]\n}, 1000 * 60 * 60 * 24]\n}\n}\n}\n}, {\n$project: {\nevent_type: 1,\nprofile_id: 1,\nevent_trigger_date: 1,\nprofile_event_data: 1,\nevent_type_set: 1,\nevent_occurence: 1,\ntrigger_status: 1,\ncategory_value: {\n$cond: {\nif: {\n$eq: [“$oldest_personal_info_created_date”, null]\n},\nthen: “$created_date”,\nelse: “$oldest_personal_info_created_date”\n}\n}\n}\n}, {\n$project: {\nprofile_id: 1,\nevent_type: 1,\nevent_type_set: 1,\nevent_trigger_date: 1,\nevent_occurence: 1,\ntrigger_status: 1,\ncategory_value: 1,\n“event_exists”: {\n$in: [“ACTIONS_NOT_COMPLETED_NX”, “$event_type_set”]\n}\n}\n}, {\n$match: {\nevent_exists: {\n$ne: true\n}\n}\n}, {\n$unset: [“event_exists”, “event_type_set”]\n}]I want to add allowDiskUse: true condition as i get following errorStacktrace: | / java.lang.Exception: [profile_event_view@stage [replica set: ]] Database error! | ___/ Mongo Server error (MongoQueryException): Query failed with error code 292 and error message ‘PlanExecutor error during aggregation :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting.’How can i add allowDiskUse: true in my code in order to avoid above error?",
"username": "Sanjay_Naik"
},
{
"code": "",
"text": "I think that the way to do this is to allow disk use not on view creation but on view use. See https://www.mongodb.com/docs/manual/core/views/#index-use-and-sort-operations. I’m not sure of the syntax for doing that in Spring Data MongoDB, though.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "db.mydb.aggregate([....,{$sort:{\"a\":-1}}],{allowDiskUse:true})\ndb.createView(\"newview\",\"mydb\",[....,{$sort:{\"a\":-1}}],{allowDiskUse:true})\n",
"text": "is it possible to create view in Mongo DB using Springboot with AggregatePipeline because i believe in Aggregate Pipeline we can pass {allowDiskUse:true}Problem is we can use allowDiskUse on find aggregation query.For example -below worksbut below doesnt workIs there any way i can avoid Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting.’ in view @Jeffrey_Yemin .Please help me out here .Stuck in this from 1 week.Regards",
"username": "Sanjay_Naik"
},
{
"code": "implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'\n\t\t\tdatabase.createView (\n\t\t\t\t\tCommonConstants.PROFILE_EVENT_VIEW,\n\t\t\t\t\tCommonConstants.PROFILE_EVENT,Arrays.asList(new Document(\"$group\", \n\t\t\t\t\t\t new Document(\"_id\", \n\t\t\t\t\t\t \t new Document(\"profile_id\", \"$profile_id\"))\n\t\t\t\t\t\t \t .append(\"data\", \n\t\t\t\t\t\t \t new Document(\"$first\", \"$$ROOT\"))), \n\t\t\t\t\t\t \t new Document(\"$unset\", \"_id\"), \n\t\t\t\t\t\t \t new Document(\"$replaceRoot\", \n\t\t\t\t\t\t \t new Document(\"newRoot\", \"$data\")), \n\t\t\t\t\t\t \t new Document(\"$project\", \n\t\t\t\t\t\t \t new Document(\"profile_id\", 1L))));",
"text": "@Jeffrey_Yemin even if we have any java driver it will be helpfulBelow is the dependency we have usedBelow is the code snippet we use to create view",
"username": "Sanjay_Naik"
},
{
"code": "@Meta(allowDiskUse = true)\nList<Client> findAllOrderByClientName();",
"text": "",
"username": "VISHNU_P"
}
]
| How to add {allowDiskUse: true} in spring mongo view creation | 2022-04-07T13:36:27.413Z | How to add {allowDiskUse: true} in spring mongo view creation | 7,268 |
null | [
"mongoid-odm"
]
| [
{
"code": "",
"text": "Hello all,i ran in a problem with pagination and large data-sets:QueryExceededMemoryLimitNoDiskUseAllowed]: Executor error during findusually one can set allow_disc_use: true or similar. But in mongoid i cannot find this option.\nIs this not implemented or where do I have to set this parameter?Best regards,\nAndreas",
"username": "Andreas_Ulrich"
},
{
"code": "",
"text": "Hi, if your sort operation would exceed the 32MB memory limit, you can specify allowDiskUse in both find and aggregate, for example:",
"username": "Zhen_Qu"
},
{
"code": "",
"text": "Oops, sorry. For aggregation pipeline, each stage has a memory limit of 100 MB.",
"username": "Zhen_Qu"
},
{
"code": "",
"text": "Hi,yes - that’s the way it’s supposed to work, but unfortunately not in mongoid.\nThe ruby driver has the option for use_disk_space but I can’t find any way to set it in mongoid.Fallback to the default client forces me to use aggregation on my own, which means I loose all the mongoid magic.",
"username": "Andreas_Ulrich"
},
{
"code": "@Meta(allowDiskUse = true)\nList<Client> findAllOrderByClientName();",
"text": "Hii , this allowDiscUse method possible for findAllBy Crud repo methodthis is possible",
"username": "VISHNU_P"
}
]
| Allow Disc Usage for where / sort queries | 2022-01-10T12:23:37.133Z | Allow Disc Usage for where / sort queries | 5,717 |
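The reply above mentions that allowDiskUse can be specified in both find and aggregate but the example itself is missing. A hedged mongosh illustration (collection and field names are placeholders; the cursor form of allowDiskUse assumes MongoDB 4.4 or newer, and a Mongoid/Ruby equivalent is not attempted here):

// find with a sort that may spill to disk
db.items.find({}).sort({ createdAt: -1 }).allowDiskUse()

// aggregation with the same option
db.items.aggregate(
  [ { $sort: { createdAt: -1 } } ],
  { allowDiskUse: true }
)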
null | [
"aggregation",
"queries",
"node-js",
"java"
]
| [
{
"code": "desksdeskdeskdesksdesk[0] = \n{ value: {\n .... other fields,\n students: Array of `student` object\n }\n}\nstudents: array {_id: string (not ObjectId), name: string}classRoomsLESS than 5",
"text": "Hi everybody.I have a classRoom Object, which stores under desks fields multiple desk objects. This desk object, part of desks array mentioned above, has the the following structure:I want to mention that students: array {_id: string (not ObjectId), name: string}\nI want to achieve the following:\nReturn just those classRooms that has a total of students that is LESS than 5.Do you have any suggestions for that ?\nThank you very much !",
"username": "Teodor_Aspataritei"
},
{
"code": "",
"text": "Hi @Teodor_Aspataritei and welcome to the MongoDB community forum!!It would be helpful in understanding and replicating in local environment, if you could share the following informationBest Regards\nAasawari",
"username": "Aasawari"
},
{
"code": " {\n '$unwind': {\n 'path': '$desks'\n }\n }, {\n '$match': {\n 'desks.value.students': {\n '$type': 'array'\n }\n }\n }, {\n '$addFields': {\n 'totalStudents': {\n '$sum': {\n '$size': '$desks.value.students'\n }\n }\n }\n }, {\n '$match': {\n 'totalStudents': {\n '$lt': 5\n }\n }\n }\n",
"text": "Guys here is the solution that I created and it works !Sharing in case anyone else needs it.",
"username": "Teodor_Aspataritei"
}
]
| Aggregation: Return just objects which contains maximum 5 elements cumulated in different arrays | 2022-11-16T07:47:41.773Z | Aggregation: Return just objects which contains maximum 5 elements cumulated in different arrays | 1,090 |
null | [
"sharding",
"upgrading"
]
| [
{
"code": "",
"text": "Hi, I’m using the MongoDB 3.4.24 version and I’m trying to upgrade it to 3.6. However, when I am trying to install the server, shell, mongos, tools and mongodb-org packages it is saying that the packages under the version 3.6.23 are not found.\nUp to this command I took the following steps:\n1 - wget -qO - https://www.mongodb.org/static/pgp/server-3.6.asc | sudo tee /etc/apt/trusted.gpg.d/mongodb-3.gpg\n2 - echo “deb [ arch=amd64 ] MongoDB Repositories bionic/mongodb-org/3.6 multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list\n3 - sudo apt-get update\n4 - sudo apt-get install -y mongodb-org=3.6.23 mongodb-org-server=3.6.23 mongodb-org-shell=3.6.23 mongodb-org-mongos=3.6.23 mongodb-org-tools=3.6.23\nI got the error after the 4th step.",
"username": "Mashxurbek_Muhammadjonov"
},
{
"code": "",
"text": "The problem solved. It appears I had to import a specific gpg key from ubuntu keyserver.(key: 2930adae8caf5059ee73bb4b58712a2291fa4ad5)\nsudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930adae8caf5059ee73bb4b58712a2291fa4ad5",
"username": "Mashxurbek_Muhammadjonov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Upgrading from 3.4 version to 3.6 version | 2022-11-17T04:10:31.554Z | Upgrading from 3.4 version to 3.6 version | 2,256 |
null | [
"queries",
"dot-net"
]
| [
{
"code": " [\n {\n \"_id\": {\n \"$oid\": \"61c474fd740cd6a46a7e8166\"\n },\n \"GroupIds\": [\n {\n \"$oid\": \"31c482ff6836e438631995ed\"\n },\n {\n \"$oid\": \"11c482ff6836e438631995ee\"\n },\n {\n \"$oid\": \"61bb96fb4c3d7106f5b9587a\"\n }\n ],\n \"Username\": \"Test\"\n },\n {\n \"_id\": {\n \"$oid\": \"61c474fd740cd6a46a7e8166\"\n },\n \"GroupIds\": [\n {\n \"$oid\": \"15c482ff6836e438631995ed\"\n },\n {\n \"$oid\": \"61c482ff6836e438631995ee\"\n },\n {\n \"$oid\": \"61bb96ee4c3d7106f5b95879\"\n }\n ],\n \"Username\": \"Test1\"\n },\n {\n \"_id\": {\n \"$oid\": \"21c474fd740cd6a46a7e8166\"\n },\n \"GroupIds\": [\n {\n \"$oid\": \"61c482ff6836e438631995ed\"\n },\n {\n \"$oid\": \"61c482ff6836e438631995ee\"\n },\n {\n \"$oid\": \"61bb96ee4c3d7106f5b95879\"\n }\n ],\n \"Username\": \"Test2\"\n },\n {\n \"_id\": {\n \"$oid\": \"31c474fd740cd6a46a7e8166\"\n },\n \"GroupIds\": [\n {\n \"$oid\": \"61c482ff6836e438631995ed\"\n },\n {\n \"$oid\": \"61c482ff6836e438631995ee\"\n },\n {\n \"$oid\": \"61bb96fb4c3d7106f5b9587a\"\n }\n ],\n \"Username\": \"Test3\"\n }\n]\n public async Task<List<List<ObjectId>>> UsersByGroupIdV3(List<List<string>> targetGroupses)\n {\n List<FilterDefinition<User>> query= new List<FilterDefinition<User>>();\n foreach (var targetGroups in targetGroupses)\n {\n \n if (targetGroups[0] != \"allcountry\")\n {\n\n query.Add(Builders<User>.Filter.Eq(x => x.GroupIds[0], new ObjectId(targetGroups[0])));\n\n }\n if (targetGroups[1] != \"allCity\")\n {\n query.Add(Builders<User>.Filter.Eq(x => x.GroupIds[1], new ObjectId(targetGroups[1])));\n }\n if (targetGroups[2] != \"allDistrict\")\n {\n query.Add(Builders<User>.Filter.Eq(x => x.GroupIds[2], new ObjectId(targetGroups[2])));\n }\n } \n \n var match = Builders<User>.Filter.And(query);\n \n var users = await _userRepository.Find(match);\n\n var group = users.Result.Select(i => i.GroupIds);\n return group.ToList();\n }\n[{[\"31c482ff6836e438631995ed\",\"11c482ff6836e438631995ee\",\"61bb96fb4c3d7106f5b9587a\"]},{[\"15c482ff6836e438631995ed\",\"61c482ff6836e438631995ee\",\"61bb96ee4c3d7106f5b95879\"]}]\n{[{\"America\", \"Alaska\", \"College\"}],[{\"Germany\", \"Hessen\", \"Kreis\"}]}\n",
"text": "Hello, I want to make multiple group based searches in mongo db,\nI can make a single group based SearchThere is a list of nested groups in the targetGroupses parameter\nExamplevar match = Builders.Filter.And(query);If there is 1 list, it works fine, but if there is more than 1, how can I run the query?\nI want to bring those in the America, Alaska, College or Germany Hessen Kreis groupsI want to fetch what’s in this group from mongo db with c#",
"username": "Mehmet_Ceylan"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| C# Mongodb how do I query a list with subset | 2022-11-16T21:30:44.145Z | C# Mongodb how do I query a list with subset | 1,389 |
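No answer was posted before the topic closed. A sketch of the query shape the poster seems to be after, written in mongosh with the sample ObjectIds from the thread (the collection name users and the country/city/district mapping in the comments are assumptions): each target triple becomes one positional AND condition, and the triples are combined with $or. In C#, the equivalent would presumably be wrapping each per-group Filter.And in a single Builders<User>.Filter.Or.

db.users.find({
  $or: [
    { // e.g. America / Alaska / College
      "GroupIds.0": ObjectId("61c482ff6836e438631995ed"),
      "GroupIds.1": ObjectId("61c482ff6836e438631995ee"),
      "GroupIds.2": ObjectId("61bb96fb4c3d7106f5b9587a")
    },
    { // e.g. Germany / Hessen / Kreis
      "GroupIds.0": ObjectId("31c482ff6836e438631995ed"),
      "GroupIds.1": ObjectId("11c482ff6836e438631995ee"),
      "GroupIds.2": ObjectId("61bb96ee4c3d7106f5b95879")
    }
  ]
})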
null | [
"connecting"
]
| [
{
"code": "",
"text": "When I am connecting my application from the official network (it is the fixed network, with IP address 202.141.222/32) my application is connected successfully and can save data into my collection. but when I am doing work from home(my home network is cellular, with the IP address 199.160.98.81/32) it is showing me the error of connection failure, the application can’t connect with the database. Even I have changed the DNS of the cellular network also.\nWhile connecting to my cellular network, I also added an IP access list that “allows access from anywhere” but showed the same error, when I tried to connect the application to the database. I have spent much time on and tried to sort it out but couldn’t connect to the database using my cellular network. So if anybody will help me to understand this error",
"username": "Iram_Barkat"
},
{
"code": "",
"text": "Hi @Iram_BarkatIf you’re able to connect to Atlas from some network and not from others, then it’s highly likely that the issue resides in the problematic network. Perhaps there are some restrictions in your cellular network? I’ve seen many cases where some network disallows connection to non-HTTP/HTTPS ports (80 and 443). Since MongoDB uses port 27017, maybe this is restricted by the cellular network.Unfortunately there’s not much Atlas can do for this. The best way forward is to contact your cellular provider and asking for clarification about this from their end. Perhaps you can use a VPN product to work around this issue?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi last whole week I spent finding out the problem with my cellular network 's technical team. They have done some testing on their end. They told me that HTTP/non-HTTP ports are allowed at their end. Today they conclude that it needs to check on your end if the MongoDB server has some source-based restriction against mine cellular network Source IPs i.e. 119.160.0.0/16. so please check it, it will be helpful for me.",
"username": "Iram_Barkat"
},
{
"code": "",
"text": "They told me that HTTP/non-HTTP ports are allowed at their endTo connect to MongoDB Atlas, the port 27017 must be allowed from the network. Is it possible to double check with your provider that this port is allowed outgoing traffic?if the MongoDB server has some source-based restriction against mine cellular network Source IPs i.e. 119.160.0.0/16. so please check it, it will be helpful for me.This is totally under your control in the Atlas IP whitelist setting. Please see Configure IP Access List Entries on how to do this.However I find it curious that you can connect from a fixed network successfully but not from your cellular network. Perhaps you can try connecting using a VPN while using your cellular connection and see if using VPN enables you to connect?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "What do you get when you follow https://www.mongodb.com/community/forums/t/not-able-to-connect-to-atlas-cluster-given/199232/4?u=steevej.",
"username": "steevej"
},
{
"code": "",
"text": "let me check with the solutions you have discussed. Thank you",
"username": "Iram_Barkat"
}
]
| Faced problem in connecting database from my cellular network | 2022-11-07T12:20:17.955Z | Faced problem in connecting database from my cellular network | 1,700 |
null | [
"crud",
"atlas-functions",
"atlas-triggers"
]
| [
{
"code": "exports = async function (changeEvent) {\n const { ns, updateDescription, fullDocument } = changeEvent;\n const dbName = ns.db;\n \n // Construct full name from partial names\n const { _id, firstName, middleNames, lastName } = fullDocument;\n let fullName = (firstName + \" \" + middleNames + \" \" + lastName).replace(/\\s+/g, ' ').trim()\n\n // Connect to clients collection\n const clients = context.services\n .get(\"Dev\")\n .db(dbName)\n .collection(\"clients\");\n \n // Persist to client document\n const query = { _id: _id };\n const update = {\n $set: {\n fullname: fullName,\n },\n };\n const options = { upsert: true };\n await clients.updateOne(query, update, options);\n}\n",
"text": "Is there a way to find out the current cluster name inside of a trigger function? I have a database trigger that updates a field in a document when certain fields are changed in said document - however this is a trigger I would like to deploy across a number of clusters. I can get the current database name from ns in a changeEvent, but I cannot access the cluster name, and I don’t want to have to maintain versions of the code per cluster.Below is an abridged sample of a function - I can get the dbName, but I cannot get the cluster name - you can see in this example that it is hardcoded to Dev. How do I use the same function code in Dev, Demo, QA etc without hardcoding the cluster name or having to create a new project per cluster so that the cluster name is consistent?",
"username": "Ben_Giddins"
},
{
"code": "",
"text": "Well this is unusual - I had two applications - the default one created when I created my first trigger from Atlas, and another one I created directly in App Services.In the Trigger application, I needed to refer to the data source by the cluster name “Dev”.In the App Services created application - I was able to use the default “mongodb-atlas”. If I just ensure I create applications before creating the first trigger, my question is moot as the data source can be referred to by “mongodb-atlas”.",
"username": "Ben_Giddins"
}
]
| Get current cluster name inside of trigger function | 2022-11-16T20:10:21.350Z | Get current cluster name inside of trigger function | 2,541 |
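For reference, the change described in the resolution above amounts to one line in the trigger function: look up the linked data source by its service name instead of the cluster name, so the same function code works in every environment (this assumes the linked cluster keeps the default service name).

// "mongodb-atlas" is the default name of the linked data source in App Services
const clients = context.services.get("mongodb-atlas").db(dbName).collection("clients");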
null | [
"atlas-triggers"
]
| [
{
"code": "resource \"mongodbatlas_event_trigger\" \"client_full_name\" {\n project_id = \"62xxxxxxxxxxxxxxxxxxxx05\"\n app_id = \"63xxxxxxxxxxxxxxxxxxxx64\"\n name = \"clientFullName\"\n type = \"DATABASE\"\n function_id = \"63xxxxxxxxxxxxxxxxxxxx83\"\n disabled = false\n config_operation_types = [\"INSERT\", \"UPDATE\", \"REPLACE\"]\n config_database = \"Dev\"\n config_collection = \"clients\"\n config_service_id = \"62xxxxxxxxxxxxxxxxxxxxcf\"\n config_full_document = true\n config_full_document_before = false\n}\n",
"text": "I’m using mongodbatlas_event_trigger from the mongodb/mongodbatlas Terraform provider to create a database trigger. I’ve already created a database trigger through the Atlas UI, and am now attempting to replicate that through Terraform, however I get the following error on apply:Error: error creating MongoDB EventTriggers (62xxxxxxxxxxxxxxxxxxxx05): POST https://realm.mongodb.com/api/admin/v3.0/groups/62xxxxxxxxxxxxxxxxxxxx05/apps/63xxxxxxxxxxxxxxxxxxxx64/triggers: 400 (request “”) a database trigger requires an associated Atlas cluster serviceMy resource block is as follows:Which value in the resource block would I have incorrect to be throwing that error?(I use vars for values, they’re only hardcoded in the redacted example for debugging).",
"username": "Ben_Giddins"
},
{
"code": "",
"text": "Found it - it was config_service_id. I was using the cluster_id output from cluster creation in Terraform - turns out it expects the id of the data link added to the application.",
"username": "Ben_Giddins"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Which attribute is causing this error when creating an Event Trigger using Terraform? | 2022-11-16T21:19:18.960Z | Which attribute is causing this error when creating an Event Trigger using Terraform? | 2,105 |
null | [
"transactions"
]
| [
{
"code": "",
"text": "Hello.I have two questions related to transactions in MongoDB.Before version 4.0 (where transactions with ACID guarantees were inserted in multiple documents) how did MongoDB work with the transactions part? Or how did MongoDB manage the consistency of operations? Before version 4.0 did MongoDB already provide this guarantee of ACID properties?Were there any transactions implicit in the CRUD operations before version 4.0?I ask this because I noticed that since version 3.2 it was already possible to use some parameters to adjust the consistency level of operations using write concern, read concern, journal and read preference.",
"username": "morcelicaio"
},
{
"code": "\"majority\"\"majority\"",
"text": "Hello @morcelicaio ,According to this documentation of MongoDB server version 4.0.New in version 4.0.In MongoDB, an operation on a single document is atomic. Because you can use embedded documents and arrays to capture relationships between data in a single document structure instead of normalizing across multiple documents and collections, this single-document atomicity obviates the need for multi-document transactions for many practical use cases.However, for situations that require atomicity for updates to multiple documents or consistency between reads to multiple documents:Starting in version 4.0 , MongoDB provides the ability to perform multi-document transactions against replica sets.In terms of atomicity, yes MongoDB provided atomic operations even before 4.0. However this is only for single documents. In contrast, MongoDB 4.0 allows multi-document transactions. To learn more about how MongoDB was providing atomicity before v4.0, please go through MongoDB ACID Transactions general availability blog.I ask this because I noticed that since version 3.2 it was already possible to use some parameters to adjust the consistency level of operations using write concern, read concern, journal and read preference.This was used mostly for durability and not consistency, for reference MongoDB v3.2 write Acknowledgement documentation.Avoid using a \"majority\" write concern with a (P-S-A) or other topologies that require all data-bearing voting members to be available to acknowledge the writes. Customers who want the durability guarantees of using a \"majority\" write concern should instead deploy a topology that does not require all data bearing voting members to be available (e.g. P-S-S).Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hello @Tarun_Gaur ,Those atomic operations you comment before version 4.0. Did they already provide ACID guarantees?When you say read consistency for multiple documents. In this case you say readings of several documents in different collections or in the same collection?",
"username": "morcelicaio"
},
{
"code": "",
"text": "Those atomic operations you comment before version 4.0. Did they already provide ACID guarantees?Yes, for single document.When you say read consistency for multiple documents. In this case you say readings of several documents in different collections or in the same collection?Yes, because the transaction is against the session so you can ready multiple documents from different collections, below is a blob from Transactions Manual v6.0For situations that require atomicity of reads and writes to multiple documents (in a single or multiple collections), MongoDB supports multi-document transactions. With distributed transactions, transactions can be used across multiple operations, collections, databases, documents, and shards.Please note that MongoDB v3.0 and v4.0 are out of support so if you are still using them I would recommend you to upgrade to at-least MongoDB v4.2.",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thanks for the instructions @Tarun_Gaur .Other questions are as follows:Does “snapshot” read concern only work when it’s set within a transaction? Or outside of transactions is it also possible to use this read concern?About defining causal consistency. Is it possible to define causal consistency for the operation only within a session? Or outside the session if I set write concern “majority” and read concern “majority” am I already implicitly defining the operation as causally consistent?Outside of transactions the default write concern is “w:1” ?\nOutside transactions the default read concern is readConcernLevel: “local” ?Regards,\nCaio",
"username": "morcelicaio"
},
{
"code": "Transactions",
"text": "I would recommend you to go through below links as these can explain about Transactions in great detail and better than I can in short posts.",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Two questions about transactions / data consistency levels in MongoDB | 2022-11-03T14:29:01.977Z | Two questions about transactions / data consistency levels in MongoDB | 2,307 |
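As a concrete illustration of the multi-document transactions discussed above, here is a minimal mongosh sketch modelled on the server documentation's core API example (database, collection and field names are placeholders; it assumes a MongoDB 4.0+ replica set):

// start a session and get session-bound collection handles
const session = db.getMongo().startSession();
const orders = session.getDatabase("shop").orders;
const stock = session.getDatabase("shop").stock;

session.startTransaction({
  readConcern: { level: "snapshot" },
  writeConcern: { w: "majority" }
});
try {
  orders.insertOne({ item: "abc", qty: 1 });
  stock.updateOne({ item: "abc" }, { $inc: { qty: -1 } });
  session.commitTransaction();   // both writes become visible atomically
} catch (e) {
  session.abortTransaction();    // neither write is applied
  throw e;
} finally {
  session.endSession();
}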
null | [
"aggregation",
"flutter"
]
| [
{
"code": "",
"text": "Hi folks,I’m looking for an alternative to Firebase since I’ve come to a point where Firebase no longer support my needs. MongoDb has the following that I need:So I’m doing a bit of research on the power of MongoDb Realm that will work with my Flutter/Dart application. I’m just a little bit confused, and hoping to clarify my findings. Here is a summary of my understanding so far:In short, can someone please confirm that the only way for an externally authenticated user to access data is via the following?Bonus question - is there any driver (or anything) that allows externally authenticated user to access data other than the two mentioned?I hope I’m making sense, remember I am new and just doing research at this moment. Go easy on me, thanks!",
"username": "lHengl"
},
{
"code": "realm-cli",
"text": "Hi @lHengl - Welcome to the community Glad you’ve stumbled across MongoDB as part of your research! Please see my comments below in regards to some of your statements:The Atlas App Services - Users & Authentication documentation may provide more details for you. In addition to this, there is also the Atlas App Services Command Line Interface ( realm-cli ) which allows you to programmatically manage your Apps as well.In short, can someone please confirm that the only way for an externally authenticated user to access data is via the following?There are several authentication providers. Regarding the Realm SDK, you can view some example methods for the authentication providers in the corresponding SDK documentation, for example:For the Data API:Data API endpoints run in the context of a specific user, which allows your app to enforce rules and validate document schemas for each request.By default, endpoints use Application Authentication, which requires each request to include credentials for one of your application users, like an API key or JWT. You can also configure other custom authentication schemes to fit your application’s needs.I would also refer to the Atlas App Services Pricing - Users and Auth post, specifically:However it’s worth noting that we’re not trying to provide a full-featured identity management platform and for more advanced features you may still want to integrate something like Cognito, Auth0, or AAD via our JWT authentication provider.I am not sure if this meets your criteria for “externally authenticated” users but I would also go over federated authentication as well just in case.Bonus question - is there any driver (or anything) that allows externally authenticated user to access data other than the two mentioned?Generally the end users of the application should access the data through your application or API. They typically would not have direct access to the database in which drivers generally do (have direct access through authentication of a Database User in the case of Atlas).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks jason.I appreciate your thoughtful response.I think i get it now after some mulling over it a while.Coming from Firebase I was expecting the MongoDB drivers to be something similar to the Firebase client SDKs. So basically Firebase made it simple by wrapping their version of “data api” with an SDK for a given programming language such as dart. As opposed to Atlas app services data api, which is pure REST api.So if i wanted to emulate what Firebase did with their SDK i would need to write my own wrapper package for a language that abstracts away the http requests…Seems like a lot of work… or is it? I suppose it’s just a matter of abstracting the end points.I’ll just stick to writing RESTful http requests for now.",
"username": "lHengl"
}
]
| How can externally authenticated app user access MongoDb Atlas data? | 2022-10-19T04:02:16.087Z | How can externally authenticated app user access MongoDb Atlas data? | 2,400 |
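Since the thread above ends with plain REST calls against the Data API, here is a hedged sketch of what such a request can look like from JavaScript (Node 18+ fetch). The app id, API key, database, collection and field names are placeholders, and the exact base URL and version segment should be copied from the app's own Data API settings rather than from this sketch; other authentication schemes exist, as noted in the reply above.

async function findProfile(ownerId) {
  const resp = await fetch(
    "https://data.mongodb-api.com/app/<your-app-id>/endpoint/data/v1/action/findOne",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "api-key": process.env.DATA_API_KEY   // placeholder credential
      },
      body: JSON.stringify({
        dataSource: "mongodb-atlas",
        database: "mydb",
        collection: "profiles",
        filter: { ownerId: ownerId }
      })
    }
  );
  const { document } = await resp.json();
  return document;
}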
null | [
"replication"
]
| [
{
"code": "",
"text": "What if primary node dropped and then it recovered by time, can it automatically get back to be primary again using Automatic Failover feature",
"username": "sherif_hany1"
},
{
"code": "",
"text": "Hi @sherif_hany1 welcome to the community!Yes you can do this: Adjust Priority for Replica Set MemberIn basic terms, you setup the replica set with the desired node having a higher priority than the rest. When that node went offline, then back online, the other nodes will see that it has higher priority, and will automatically elect that node to be primary. This is automatic; you don’t have to do anything special for this to happen.However I must point out that it’s best that all replica set nodes are provisioned with identical hardware, since a replica set is mainly designed to give you high availability. That is, it’s best that no single node has a lot more RAM & disk speed compared to the other nodes to act as the “primary”. Note that the secondaries are doing just as much work as the primary in terms of writing, and as designed, they need to be able to take over as the new primary in a moment’s notice.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Automatic Failover | 2022-11-16T14:13:36.312Z | Automatic Failover | 1,240 |
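A short mongosh sketch of the priority adjustment mentioned above (which member gets the higher priority is an assumption; index 0 is used here):

cfg = rs.conf()
cfg.members[0].priority = 2   // preferred member; the others keep the default priority 1
rs.reconfig(cfg)
// after the preferred member recovers and catches up, it is re-elected primary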
null | [
"database-tools"
]
| [
{
"code": "counttimerowcountitercounttop",
"text": "Hi. I want to suggest a couple of improvements for mongotop.I could implement them if the improvements are considered ok.",
"username": "weastur"
},
{
"code": "",
"text": "Hi @weastur welcome to the community!I think you have interesting suggestions here. However I would like to mention that for product suggestions, we have the MongoDB Feedback Engine where all ideas are collected. The development team monitors this, and you can also vote for improvements that you think deserve to be in the product.Please, if you don’t mind, could you describe the ideas in the Feedback Engine? Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongotop improvements | 2022-11-16T15:32:38.065Z | Mongotop improvements | 1,437 |
null | []
| [
{
"code": "",
"text": "Hello everyone. How I can notify some specific users that a new Document is inserted or a field inside the document is changed. For example: when a user adds a new Order, the respective Restaurant will be notified on their phone.",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hey, I would highly recommend looking into Database Triggers. Attached are some helpful links:You could define a trigger on the collection for “insert” events or “update/replace” events and execute a function that either (a) calls out to a push notification service or (b) adds data to a secondary collection that is being listened to on the device",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "what are the services that I can use for Mongodb and Realm?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Also, do you have any example on how I can send push notification to specifics users?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "You can write a function that will be run within the “Trigger” such that it runs on every insert to the collection and performs some action (hit an endpoint, update a document in mongodb, etc). Here are some examples:Here is a nice article detailing how to use push notifications:In a serverless application, one of the important features that we must implement for the success of our application is push notifications.\nReading time: 9 min read\n",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thanks for the info. In the article Firebase is being used. Does MongoDB has something similar to what Firebase do? I know that change streams exists, but they consume some memory and they have to be connected every time. Does MongoDB have anything else implemented? Can I send push notification to a specific user directly in a trigger function?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "I would like to take a very different approach to this problem. If you are using Kotlin SDK, you can setup stream of information flowing into the app whenever a change occur something like [this].(Job-Tracker/RealmRepo.kt at 6fde3d672b20991c417057a7c8a775f717e1cf88 · mongodb-developer/Job-Tracker · GitHub)And once you have the update, trigger a local push notification or toast to inform user about the change. In the code sample shared above I am adding a toast whenever new Job is interested into the document.Do let me know if this helps.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Thanks for the response. Its a great idea! I have some questions:\nI do not need any trigger for this?\nIt works for adding a new document or updating a field?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "No triggers are required for this. Do check this out.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Does someone know if this would work if my app is not open or in standby mode?",
"username": "Ciprian_Gabor"
}
]
| Send notification to user when document is inserted | 2022-11-15T20:57:41.876Z | Send notification to user when document is inserted | 3,796 |
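To make option (b) from the reply above concrete, here is a hypothetical Atlas database trigger function for the new-Order case: it writes to a notifications collection that the restaurant's device can listen to. The database, collection and field names are assumptions, and a push service could be called instead (for example via context.http) as in the linked article.

exports = async function (changeEvent) {
  const order = changeEvent.fullDocument;   // available for insert events
  const notifications = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("notifications");

  await notifications.insertOne({
    restaurantId: order.restaurantId,   // who should be notified
    orderId: order._id,
    message: "A new order was placed",
    createdAt: new Date()
  });
};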
null | []
| [
{
"code": "",
"text": "Hi Team,We see the the following query taking more time in secondary. There is no change in the indexing for the collection from the previous releases. With 4.0 we are not seeing much time. but we are currently using 4.2 and seeing this more frequently. We are not sure if this causing any impact with respect to the performance issues we are seeing in 4.2. Can you please tell us why this flooding frequently and how to avoid this?2022-11-09T19:46:05.056+0000 I COMMAND [conn5243758] command local.oplog.rs command: getMore { getMore: 3744042275538770656, collection: “oplog.rs”, batchSize: 13981010, maxTimeMS: 5000, term: 2, lastKnownCommittedOpTime: { ts: Timestamp(1668023164, 1355), t: 2 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: “secondaryPreferred” }, $clusterTime: { clusterTime: Timestamp(1668023164, 1369), signature: { hash: BinData(0, B5927874271B36B9316E6F71FCB7AEC44C84E677), keyId: 7150119648661340164 } }, $db: “local” } originatingCommand: { find: “oplog.rs”, filter: { ts: { $gte: Timestamp(1667995887, 563) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 2, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: “secondaryPreferred” }, $clusterTime: { clusterTime: Timestamp(1667995888, 584), signature: { hash: BinData(0, 908904E7F425925423D0DA35F74D736722331DB9), keyId: 7150119648661340164 } }, $db: “local” } planSummary: COLLSCAN cursorid:3744042275538770656 keysExamined:0 docsExamined:3 numYields:1 nreturned:3 reslen:2082 locks:{ ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 2 } } } storage:{} protocol:op_msg 418ms",
"username": "venkataraman_r"
},
{
"code": "",
"text": "Hi @venkataraman_r and welcome to the MongoDB forum!!for better understanding of the above error message, if would be very helpful if you could share the following details:if this causing any impact with respect to the performance issues we are seeing in 4.2.What impact do you see on the performance while switching between the MongoDB versions.Is there a chance that you application is trying to read the oplogs from the server? There are some framework that does this by default (e.g. Meteor).Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "/var/log/mongodb-27951.log:2022-11-16T04:37:35.348+0000 I COMMAND [conn8303] command local.oplog.rs command: getMore { getMore: 2277507442850471832, collection: \"oplog.rs\", batchSize: 13981010, maxTimeMS: 5000, term: 2, lastKnownCommittedOpTime: { ts: Timestamp(1668573449, 2688), t: 2 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: \"secondaryPreferred\" }, $clusterTime: { clusterTime: Timestamp(1668573449, 2844), signature: { hash: BinData(0, 3D6BC62F737C84288EF8DC5062F68D70A314756B), keyId: 7166105568476659716 } }, $db: \"local\" } originatingCommand: { find: \"oplog.rs\", filter: { ts: { $gte: Timestamp(1668488972, 7) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 2, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: \"secondaryPreferred\" }, $clusterTime: { clusterTime: Timestamp(1668488984, 2), signature: { hash: BinData(0, 052BE08BB97364A3B36D0B64AE8387624FD8D06E), keyId: 7166105568476659716 } }, $db: \"local\" } planSummary: COLLSCAN cursorid:2277507442850471832 keysExamined:0 docsExamined:66 numYields:6 nreturned:66 reslen:25336 locks:{ ReplicationStateTransition: { acquireCount: { w: 7 } }, Global: { acquireCount: { r: 7 } }, Database: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 7 } } } storage:{} protocol:op_msg 5553ms/var/log/mongodb-37952.log:2022-11-16T04:38:30.433+0000 local.oplog.rs 593ms\n/var/log/mongodb-37952.log:2022-11-16T04:38:30.432+0000 local.oplog.rs 593ms\n/var/log/mongodb-37717.log:2022-11-16T04:39:48.082+0000 local.oplog.rs 590ms\n/var/log/mongodb-27960.log:2022-11-16T04:34:41.634+0000 local.oplog.rs 1136ms\n/var/log/mongodb-27960.log:2022-11-16T04:34:41.597+0000 local.oplog.rs 2794ms\n/var/log/mongodb-27960.log:2022-11-16T04:34:41.592+0000 local.oplog.rs 2780ms\n/var/log/mongodb-27960.log:2022-11-16T04:34:39.552+0000 local.oplog.rs 750ms\n/var/log/mongodb-27959.log:2022-11-16T04:34:49.841+0000 local.oplog.rs 1700ms\n/var/log/mongodb-27959.log:2022-11-16T04:34:47.917+0000 local.oplog.rs 2864ms\n/var/log/mongodb-27959.log:2022-11-16T04:34:47.896+0000 local.oplog.rs 834ms\n/var/log/mongodb-27959.log:2022-11-16T04:34:47.696+0000 local.oplog.rs 2645ms\n/var/log/mongodb-27959.log:2022-11-16T04:34:44.543+0000 local.oplog.rs 731ms\n/var/log/mongodb-27959.log:2022-11-16T04:34:44.538+0000 local.oplog.rs 728ms\n/var/log/mongodb-27958.log:2022-11-16T04:32:17.638+0000 local.oplog.rs 556ms\n/var/log/mongodb-27957.log:2022-11-16T04:38:30.430+0000 local.oplog.rs 590ms\n/var/log/mongodb-27954.log:2022-11-16T04:34:41.534+0000 local.oplog.rs 2731ms\n/var/log/mongodb-27954.log:2022-11-16T04:34:41.250+0000 local.oplog.rs 2446ms\n/var/log/mongodb-27953.log:2022-11-16T04:38:54.832+0000 local.oplog.rs 760ms\n/var/log/mongodb-27953.log:2022-11-16T04:38:54.816+0000 local.oplog.rs 744ms\n/var/log/mongodb-27953.log:2022-11-16T04:36:43.602+0000 local.oplog.rs 921ms\n/var/log/mongodb-27953.log:2022-11-16T04:36:43.598+0000 local.oplog.rs 917ms\n/var/log/mongodb-27953.log:2022-11-16T04:36:43.584+0000 local.oplog.rs 903ms\n/var/log/mongodb-27953.log:2022-11-16T04:34:49.867+0000 local.oplog.rs 1719ms\n/var/log/mongodb-27953.log:2022-11-16T04:34:49.830+0000 local.oplog.rs 1682ms\n/var/log/mongodb-27953.log:2022-11-16T04:34:49.830+0000 local.oplog.rs 1270ms\n/var/log/mongodb-27952.log:2022-11-16T04:39:11.530+0000 local.oplog.rs 651ms\n/var/log/mongodb-27952.log:2022-11-16T04:38:35.177+0000 local.oplog.rs 
729ms\n/var/log/mongodb-27952.log:2022-11-16T04:37:51.275+0000 local.oplog.rs 1193ms\n/var/log/mongodb-27951.log:2022-11-16T04:38:30.438+0000 local.oplog.rs 598ms\n/var/log/mongodb-27951.log:2022-11-16T04:37:35.348+0000 local.oplog.rs 5553ms\n/var/log/mongodb-27737.log:2022-11-16T04:39:48.089+0000 local.oplog.rs 595ms\n/var/log/mongodb-27737.log:2022-11-16T04:39:48.088+0000 local.oplog.rs 594ms\n/var/log/mongodb-27737.log:2022-11-16T04:39:48.086+0000 local.oplog.rs 589ms\n/var/log/mongodb-27730.log:2022-11-16T04:33:25.480+0000 local.oplog.rs 897ms\n/var/log/mongodb-27730.log:2022-11-16T04:33:25.472+0000 local.oplog.rs 888ms ```",
"text": "Hi Aasawari,I see its happening on both Primary and Secondary. We dont read anything from oplog directly.After we switched to mongodb4.2 (no changes from the client side as we uses compatible driver 3.12.9) , we see the query response is going crazy. We are trying to see what are the contributing factors and we see these queries are taking more time. Also we started to see IDHACK which suppose to be an optimized query that also taking mre time due to this./var/log/mongodb-27951.log:2022-11-16T04:37:35.348+0000 I COMMAND [conn8303] command local.oplog.rs command: getMore { getMore: 2277507442850471832, collection: \"oplog.rs\", batchSize: 13981010, maxTimeMS: 5000, term: 2, lastKnownCommittedOpTime: { ts: Timestamp(1668573449, 2688), t: 2 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: \"secondaryPreferred\" }, $clusterTime: { clusterTime: Timestamp(1668573449, 2844), signature: { hash: BinData(0, 3D6BC62F737C84288EF8DC5062F68D70A314756B), keyId: 7166105568476659716 } }, $db: \"local\" } originatingCommand: { find: \"oplog.rs\", filter: { ts: { $gte: Timestamp(1668488972, 7) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 2, readConcern: { afterClusterTime: Timestamp(0, 1) }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: \"secondaryPreferred\" }, $clusterTime: { clusterTime: Timestamp(1668488984, 2), signature: { hash: BinData(0, 052BE08BB97364A3B36D0B64AE8387624FD8D06E), keyId: 7166105568476659716 } }, $db: \"local\" } planSummary: COLLSCAN cursorid:2277507442850471832 keysExamined:0 docsExamined:66 numYields:6 nreturned:66 reslen:25336 locks:{ ReplicationStateTransition: { acquireCount: { w: 7 } }, Global: { acquireCount: { r: 7 } }, Database: { acquireCount: { r: 7 } }, Mutex: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 7 } } } storage:{} protocol:op_msg 5553ms",
"username": "venkataraman_r"
}
]
| Oplog.rs getMore taking more time on secondary PlanSummary:COLSCAN | 2022-11-09T20:04:54.720Z | Oplog.rs getMore taking more time on secondary PlanSummary:COLSCAN | 1,538 |
null | [
"node-js",
"production",
"change-streams"
]
| [
{
"code": "const changeStream = collection.watch();\nfor await (const change of changeStream) {\n console.log(“Received change: “, change);\n}\nconst changeStream = collection.watch();\nfor await (const change of changeStream.cursor) {\n console.log(“Received change: “, change);\n}\nmongodb",
"text": "The MongoDB Node.js team is pleased to announce version 4.12.0 of the mongodb package!ChangeStreams are now async iterables and can be used anywhere that expects an async iterable. Notably, change streams can now be used in Javascript for-await loops:Some users may have been using change streams in for-await loops manually by using a for-await loop with the ChangeStream’s internal cursor. For example:The change stream cursor has no support for resumabilty and consequently the change stream will never attempt to resume any errors. We strongly caution against using a change stream cursor as an async iterable and strongly recommend using the change stream directly.Version 4.7.0 of the Node driver released an improvement to our server monitoring in FAAS environments by allowing the driver to skip monitoring events if there were more than one monitoring events in the queue when the monitoring code restarted. When skipping monitoring events that contained a topology change, the driver would incorrectly fail to update its view of the topology.Version 4.12.0 fixes this issue by ensuring that the topology is always updated when monitoring events are processed.This release also modifies the data structures used internally in the driver to use linked lists in places where random access is not required and constant time insertion and deletion is beneficial.Many thanks to @ImRodry for helping us fix the documentation for our deprecated callback overloads in this release!We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "Bailey_Pearson"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| NodeJS Driver 4.12.0 Released | 2022-11-16T20:50:43.665Z | NodeJS Driver 4.12.0 Released | 1,534 |
null | [
"server"
]
| [
{
"code": "/bin/launchctl bootstrap gui/502 /Users/niravvachhani/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist",
"text": "Getting this error while starting mongo db community 6.0 on my mac mini.Failure while executing; /bin/launchctl bootstrap gui/502 /Users/niravvachhani/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist exited with 5.brew services start [email protected] used this comman.",
"username": "Akshay_Vispute"
},
{
"code": "",
"text": "Check your mongod.log\nIt may give more details on why it is failing to start",
"username": "Ramachandra_Tummala"
},
{
"code": "vi /usr/local/var/log/mongodb/mongo.log\nsudo chown yourloginname:wheel thedirectory/*\ndb.adminCommand( { setFeatureCompatibilityVersion: \"5.0\" } ) \n",
"text": "I had this when upgrading from 4.4 to 5.0 and I suspect it’s similar for you.Roll back to your previous working version of mongo (e.g. 5.x)Start Mongo 5Start a Mongo Shell and typeCheck the logs to verify the database has detected 5.0Uninstall 5.xRe-install 6.xIt should now work?",
"username": "Rob_Wilson"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
]
| Hi there I am trying to start the Mongo db community 6.0 on my mac mini OS 12.5.1 but it's showing the error any help is appreciated | 2022-09-30T03:37:36.946Z | Hi there I am trying to start the Mongo db community 6.0 on my mac mini OS 12.5.1 but it’s showing the error any help is appreciated | 2,417 |
[
"aggregation",
"queries",
"dot-net",
"data-modeling"
]
| [
{
"code": " public BsonDocument Premium = new BsonDocument()\n .Add(\"isPremium\", false)\n .Add(\"PremiumEnds\", DateTime.Now)\n .Add(\"Level\", int.Parse(\"0\"))\n .Add(\"RatioText\", double.Parse(\"1.0\"))\n .Add(\"RatioVoice\", double.Parse(\"1.0\"));\n public BsonDocument Premium = new BsonDocument()\n .Add(\"isPremium\", false)\n .Add(\"PremiumEnds\", DateTime.Now)\n .Add(\"Level\", int.Parse(\"0\"))\n .Add(\"RatioText\", double.Parse(\"1.0\"))\n .Add(\"RatioVoice\", double.Parse(\"1.0\"))\n .Add(\"Key\", \"Value\");\npublic int Prestige { get; set; }public static async Task<UserMongo> GetUserAsync(ulong id)\n{\n var user = Users.Find(a => a.Id == id).FirstOrDefault();\n if (user != null) return user;\n user = await SignUp(id);\n return user;\n}\npublic static async Task<bool> UpdateAsync(UserMongo user)\n{\n await Users.ReplaceOneAsync(a => a.Id == user.Id, user);\n return await Task.FromResult(true);\n}\n[BsonIgnoreIfDefault] \npublic BsonDocument Premiums = new BsonDocument {\n { \"isPremium\", false },\n { \"PremiumEnds\", DateTime.Now },\n { \"Level\", int.Parse(\"0\") },\n { \"RatioText\", double.Parse(\"1.0\") },\n { \"RatioVoice\", double.Parse(\"1.0\") }\n};\n",
"text": "Hello i got an issue with MongoDB model builder like example i have UserModel for adding or updating existing document in mongo\nhere is one BsonDocumentand if document like this exist in user document like in first screenshot here is an issue\nif i add to this BsonDocument new Key/Value likeafter getting user info and updating some values this will not change the Existing BsonDocument\nbut if i add to UserModel like public int Prestige { get; set; } outside BsonDocument after updating user new row will appears in documentGetting User FunctionUpdating User FunctionWhat is the problem maybe someone has issue like this?also tryed building model like thisAfter inserting for first time and adding new Key/Value to this BsonDocument after updating this document not apply new Key/Value inside BsonDocument",
"username": "Workout_Latvia"
},
{
"code": "_idvar id = new ObjectId(\"123....abc\");\nvar filter = new BsonDocument { { \"_id\", id } };\nId_id",
"text": "You might be missing the complimentary field _id in your BsonDocument, or it is not in the required format. if this is the case, your replace function will return false when you try to compare it.when you define a class model, you define Id property mapped to _id so it is easy to miss it while using BsonDocument.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "It’s not a full model, it’s only peace of one BsonDocument, I have 2 unique keys it’s BsodId _id and userid which is long number that’s generated by discord and model is big like 7 nested BsonDocumets its only one of them.If it’s needed I can show full model file",
"username": "Workout_Latvia"
},
{
"code": "_idId",
"text": "I am not a pro for C#, so I can’t promise a resolution. But there are some considerations to help to identify the problem. unless it is a bug in the driver, we may find a solution by trying a few things.this may help you identify the problem by yourself. if not, share this stripped-down version so we here may peek into it.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "The problem is it’s working pretty well but after adding new key value to BsonDocument it’s not adding new key value to document but if I wanna modify some values inside created keys this working good and also adding new bson documents is also updating existed model",
"username": "Workout_Latvia"
},
{
"code": "BsonDocumentBuilders{id:number,class:tring}_id{\n MongoClient dbClient = new MongoClient(URI);\n\n var database = dbClient.GetDatabase(\"testme\");\n var collection = database.GetCollection<BsonDocument>(\"grades\");\n\n var filterbuilder = Builders<BsonDocument>.Filter;\n var updatebuilder = Builders<BsonDocument>.Update;\n\n // get a document\n var firstuserfilter = filterbuilder.Eq(doc => doc[\"id\"], 2);\n var firstuser = collection.Find<BsonDocument>(firstuserfilter).First();\n Console.WriteLine(firstuser.ToJson());\n\n // updating some fields\n var updatefilter = filterbuilder.Eq(\"_id\", firstuser[\"_id\"]);\n var update = updatebuilder.Set(\"class\", \"3D\");\n collection.UpdateOne(updatefilter, update);\n Console.WriteLine(collection.Find<BsonDocument>(firstuserfilter).First().ToJson());\n\n //replacing whole document\n firstuser[\"class\"] = \"3E\";\n var replacefilter = filterbuilder.Eq(\"_id\", firstuser[\"_id\"]);\n collection.ReplaceOne(replacefilter, firstuser);\n Console.WriteLine(collection.Find<BsonDocument>(firstuserfilter).First().ToJson());\n}\n",
"text": "Actually, I am still failing to see what the actual problem you are facing. But it comes to my mind that you might be using your filtering wrong for BsonDocument.I have a running code to use Builders for both “update” and “replace” operation. Can you please check if it helps (my document is simple {id:number,class:tring} plus automatic _id):",
"username": "Yilmaz_Durmaz"
},
{
"code": "using System;\nusing System.Collections.Generic;\nusing MongoDB.Bson;\n\nnamespace InteractionFramework.Models\n{\n public class UserMongo\n {\n public ulong Id { get; set; }\n public ulong PheonixCoin { get; set; }\n public ulong CSCoin { get; set; }\n public int Prestige { get; set; }\n public ulong XP { get; set; }\n public int Level\n {\n get\n {\n return (int)Math.Sqrt(XP / 115);\n }\n }\n public string QiwiBillID { get; set; }\n public ulong VoiceActive { get; set; }\n public ulong Messages { get; set; }\n \n public BsonDocument Penalty { get; set; } = new BsonDocument()\n .Add(\"violations\", new BsonArray())\n .Add(\"warns\", new BsonArray())\n .Add(\"mute\", new BsonArray())\n .Add(\"ban\", new BsonArray());\n \n public BsonDocument Admin { get; set; } = new BsonDocument()\n .Add(\"isAdmin\", false)\n .Add(\"stats\", new BsonDocument()\n .Add(\"violations\", new BsonArray())\n .Add(\"warns\", new BsonArray())\n .Add(\"mute\", new BsonArray())\n .Add(\"ban\", new BsonArray())\n .Add(\"ticket\", new BsonArray()))\n .Add(\"admin_warns\", new BsonArray())\n .Add(\"admin_at\", DateTime.Now)\n .Add(\"start_at\", long.Parse(\"0\"))\n .Add(\"online_today\", long.Parse(\"0\"))\n .Add(\"online_week\", long.Parse(\"0\"))\n .Add(\"online_month\", long.Parse(\"0\"))\n .Add(\"online_total\", long.Parse(\"0\"));\n \n public BsonDocument Premium = new BsonDocument()\n .Add(\"isPremium\", false)\n .Add(\"PremiumEnds\", DateTime.Now)\n .Add(\"Level\", 0)\n .Add(\"RatioText\", double.Parse(\"1\"))\n .Add(\"RatioVoice\", double.Parse(\"1\"));\n\n public BsonDocument Cases { get; set; } = new BsonDocument()\n .Add(\"Bronze\", int.Parse(\"0\"))\n .Add(\"Silver\", int.Parse(\"0\"))\n .Add(\"Gold\", int.Parse(\"0\"))\n .Add(\"Platinum\", int.Parse(\"0\"))\n .Add(\"Emerald\", int.Parse(\"0\"))\n .Add(\"Donate\", int.Parse(\"0\"));\n \n public BsonDocument lfg { get; set; } = new BsonDocument()\n .Add(\"SteamID32\", uint.Parse(\"0\"))\n .Add(\"FaceitUrl\", string.Empty)\n .Add(\"MMRank\", int.Parse(\"0\"))\n .Add(\"WGRank\", int.Parse(\"0\"))\n .Add(\"DZRank\", int.Parse(\"0\"))\n .Add(\"FCRank\", int.Parse(\"0\"))\n .Add(\"PEmoji\", int.Parse(\"0\"));\n \n public BsonDocument personalRole { get; set; } = new BsonDocument()\n .Add(\"DonateRoleID\", long.Parse(\"0\"))\n .Add(\"RoleEnds\", DateTime.Now)\n .Add(\"AutoRenew\", int.Parse(\"0\"));\n \n public BsonDocument profileSettings { get; set; } = new BsonDocument()\n .Add(\"HideBalance\", int.Parse(\"0\"))\n .Add(\"HidePheonix\", int.Parse(\"0\"))\n .Add(\"AgentId\", int.Parse(\"1\"))\n .Add(\"BackgroundId\", int.Parse(\"1\"))\n .Add(\"CardId\", int.Parse(\"1\"))\n .Add(\"Icon1\", int.Parse(\"0\"))\n .Add(\"Icon2\", int.Parse(\"0\"))\n .Add(\"Icon3\", int.Parse(\"0\"))\n .Add(\"Icon4\", int.Parse(\"0\"));\n \n public List<InventoryEntry> Inventory { get; set; } = new List<InventoryEntry>();\n \n public class InventoryEntry\n {\n public int Id;\n public int Category;\n public DateTime EndTime;\n \n public InventoryEntry(int id, int category, DateTime endTime)\n {\n Id = id;\n Category = category;\n EndTime = endTime;\n }\n }\n public class Punishment\n {\n public ulong AdminID;\n public ulong Date;\n public ulong Unmute;\n public string Reason;\n\n public Punishment(ulong adminId, ulong date, ulong unmute, string reason)\n {\n AdminID = adminId;\n Date = date;\n Unmute = unmute;\n Reason = reason;\n }\n }\n\n }\n}\n\n",
"text": "Here is my Full model for User",
"username": "Workout_Latvia"
},
{
"code": "_id{\n MongoClient dbClient = new MongoClient(URI);\n\n var database = dbClient.GetDatabase(\"testme\");\n var collection = database.GetCollection<UserMongo>(\"player\");\n\n var filterbuilder = Builders<UserMongo>.Filter;\n var updatebuilder = Builders<UserMongo>.Update;\n\n // create 2 users , comment after first run\n var firstuser=new UserMongo();\n firstuser.Id=1;\n collection.InsertOne(firstuser);\n var seconduser=new UserMongo();\n seconduser.Id=2;\n collection.InsertOne(seconduser);\n // read users\n Console.WriteLine(collection.Find<UserMongo>(_=>true).FirstOrDefault().ToJson());\n\n // read first user back\n var firstuserfilter = filterbuilder.Eq<ulong>(doc => doc.Id, 1);\n var firstuserU = collection.Find<UserMongo>(firstuserfilter).FirstOrDefault();\n Console.WriteLine(firstuserU.ToJson());\n // read second user back\n var seconduserfilter = filterbuilder.Eq<ulong>(doc => doc.Id, 2);\n var seconduserR = collection.Find<UserMongo>(seconduserfilter).FirstOrDefault();\n Console.WriteLine(seconduserR.ToJson());\n\n // change first user by updating some fields \"on the database\"\n // var updatefilter = filterbuilder.Eq<ulong>(\"_id\", 1);\n var updatefilter = filterbuilder.Eq<ulong>(doc=>doc.Id, 1);\n var update = updatebuilder.Set(\"XP\", 100);\n collection.UpdateOne(updatefilter, update);\n Console.WriteLine(collection.Find<UserMongo>(firstuserfilter).FirstOrDefault().ToJson());\n\n // change document \"in app\" and change it on database replacing the whole document\n seconduserR.QiwiBillID = \"42-7\";\n // var replacefilter = filterbuilder.Eq(\"_id\", seconduserR.Id);\n var replacefilter = filterbuilder.Eq(doc=>doc.Id, seconduserR.Id);\n collection.ReplaceOne(replacefilter, seconduserR);\n Console.WriteLine(collection.Find<UserMongo>(seconduserfilter).FirstOrDefault().ToJson());\n}\n",
"text": "I have used your code and created following working code where I create 2 players with all default values except _id field.It works fine and changes the users’ data by two methods: partial update and full replacement.Please check it out and tell me if this reflects the workings of your app. I suspect either the “_id” field is the culprit or your update function has a flaw.",
"username": "Yilmaz_Durmaz"
}
]
| C# Driver updating BsonDocument | 2022-11-15T19:44:27.958Z | C# Driver updating BsonDocument | 4,389 |
|
null | [
"atlas-search"
]
| [
{
"code": "{\n $search: {\n compound: {\n must: [\n {\n text: {\n query: searchQuery.source,\n path: \"all_content\",\n },\n },\n {\n near: {\n origin: {\n type: \"Point\",\n coordinates: [Number(long), Number(lat)],\n },\n path: \"address.point\",\n pivot: 1000,\n },\n },\n ],\n },\n },\n }\n",
"text": "Hello everyone,I need to implement text search along with location search (via lat, long). For this, I have implemented\nthis (https://www.mongodb.com/docs/atlas/atlas-search/near/#examples)But the expected results are not coming.Sharing the query,Now What I expect here is, for the given query only those documents should be returned which are in the given lat long range of 1000 m. but this is not how it works I guess,whatever I am passing in pivot is not working, every time I got all the documents matching with the given query, Please let me know what’s wrong here",
"username": "Ankit_Arora"
},
{
"code": "pivotgeoWithincompoundneargeoWithingeoWithin[{\n \"$search\": {\n \"geoWithin\": {\n \"circle\": {\n \"center\": {\n \"type\": \"Point\",\n \"coordinates\": [-75.2, 13]\n },\n \"radius\": 200000\n },\n \"path\": \"position\"\n }\n }\n},\n{\n $project: {\n _id: 0,\n position: 1,\n score: { $meta: \"searchScore\" }\n }\n}]\n[\n {\n position: { type: 'Point', coordinates: [ -76.3, 12.2 ] },\n score: 1\n },\n { position: { type: 'Point', coordinates: [ -75.2, 13 ] }, score: 1 },\n {\n position: { type: 'Point', coordinates: [ -74.3, 13.7 ] },\n score: 1\n },\n {\n position: { type: 'Point', coordinates: [ -75.7, 12.5 ] },\n score: 1\n },\n {\n position: { type: 'Point', coordinates: [ -74.2, 13.7 ] },\n score: 1\n },\n {\n position: { type: 'Point', coordinates: [ -75.4, 13.1 ] },\n score: 1\n }\n]\nnear[\n {\n '$search': {\n index: 'default',\n near: {\n path: 'position',\n origin: { type: 'Point', coordinates: [ -75.2, 13 ] },\n pivot: 1\n }\n }\n },\n {\n '$project': { _id: 0, position: 1, score: { '$meta': 'searchScore' } }\n },\n { '$limit': 3 }\n]\npivot[\n { position: { type: 'Point', coordinates: [ -75.2, 13 ] }, score: 1 },\n {\n position: { type: 'Point', coordinates: [ -75.4, 13.1 ] },\n score: 0.00004106335836695507\n },\n {\n position: { type: 'Point', coordinates: [ -75.7, 12.5 ] },\n score: 0.00001287592385779135\n }\n]\ncompoundgeoWithinnear[\n {\n '$search': {\n index: 'default',\n compound: {\n must: [\n {\n geoWithin: {\n circle: {\n center: { type: 'Point', coordinates: [ -75.2, 13 ] },\n radius: 200000\n },\n path: 'position'\n }\n }\n ],\n should: [\n {\n near: {\n path: 'position',\n origin: { type: 'Point', coordinates: [ -75.2, 13 ] },\n pivot: 1\n }\n }\n ]\n }\n }\n },\n {\n '$project': { _id: 0, position: 1, score: { '$meta': 'searchScore' } }\n }\n]\nradiusnear[\n { position: { type: 'Point', coordinates: [ -75.2, 13 ] }, score: 2 },\n {\n position: { type: 'Point', coordinates: [ -75.4, 13.1 ] },\n score: 1.0000410079956055\n },\n {\n position: { type: 'Point', coordinates: [ -75.7, 12.5 ] },\n score: 1.0000128746032715\n },\n {\n position: { type: 'Point', coordinates: [ -74.3, 13.7 ] },\n score: 1.0000079870224\n },\n {\n position: { type: 'Point', coordinates: [ -74.2, 13.7 ] },\n score: 1.0000075101852417\n },\n {\n position: { type: 'Point', coordinates: [ -76.3, 12.2 ] },\n score: 1.0000066757202148\n }\n]\n",
"text": "Hi @Ankit_Arora,Now What I expect here is, for the given query only those documents should be returned which are in the given lat long range of 1000 m. but this is not how it works I guess,pivot is a value which is used to calculate scores of Atlas Search result documents based off a formula. Perhaps if you’re wanting to limit the results within a particular range, the geoWithin operator may suit your use case better. If you then want to do scoring on these filtered documents based on how close they are to a particular point (based off your example), you could then use the compound operator to include both near and geoWithin .Hopefully the below examples helps clarify the above.Using geoWithin only:Output (Note the same score value of 1 for all these documents):Using near only (limiting to 3 documents for brevity):Output (Note the pivot value used and scores):Using the compound operator containing both geoWithin and near:Output (Documents only within the specified circle radius returned sorted by score in which the near operator is used to assist with for nearest to furthest):Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks @Jason_Tran, will update you on this",
"username": "Ankit_Arora"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Geo Location search not working, pivot is of no use | 2022-11-01T09:30:06.928Z | Geo Location search not working, pivot is of no use | 1,726 |
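Editor's note: the accepted answer above shows the geoWithin, near, and compound stages directly in the shell. As a hedged sketch of how that compound query might be issued from application code while keeping the text clause from the original question, here is a PyMongo version; the index name, database and collection names, and the search_nearby helper are assumptions for illustration, and the address.point field must be indexed as type "geo" in the Atlas Search index for the geo operators to work.

```python
# Hedged sketch (PyMongo): text search restricted to a radius, scored by distance.
# Index/collection names and the helper function are assumptions, not from the thread.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:<password>@cluster0.example.mongodb.net")
coll = client["mydb"]["places"]

def search_nearby(query_text, lng, lat, radius_m=1000):
    pipeline = [
        {
            "$search": {
                "index": "default",
                "compound": {
                    "must": [
                        # Text relevance on the same field as the original question.
                        {"text": {"query": query_text, "path": "all_content"}},
                        # Hard constraint: only documents within radius_m metres.
                        {
                            "geoWithin": {
                                "circle": {
                                    "center": {"type": "Point", "coordinates": [lng, lat]},
                                    "radius": radius_m,
                                },
                                "path": "address.point",
                            }
                        },
                    ],
                    # Soft clause: boost documents closer to the origin point.
                    "should": [
                        {
                            "near": {
                                "origin": {"type": "Point", "coordinates": [lng, lat]},
                                "path": "address.point",
                                "pivot": 1000,
                            }
                        }
                    ],
                },
            }
        },
        {"$limit": 20},
    ]
    return list(coll.aggregate(pipeline))
```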
null | [
"server",
"release-candidate"
]
| [
{
"code": "",
"text": "MongoDB 5.0.14-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.13. The next stable release 5.0.14 will be a recommended upgrade for all 5.0 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 5.0.14-rc0 is released | 2022-11-16T17:41:05.796Z | MongoDB 5.0.14-rc0 is released | 1,882 |
null | [
"python",
"database-tools"
]
| [
{
"code": "",
"text": "Hello, I am working with a collection that has more than ten million documents and counting. Each record has a few fields with short values as well as a field with an array of 768 floating point numbers. I am trying to figure out what the fastest way to export these records so I can then read them and use them to update entries in a separate SaaS product.I was originally thinking of using mongoexport to export this collection in documents in chunks of 100,000. I thought that perhaps I could run one mongoexport command per CPU core in order to speed up this operation and I wrote a retry mechanism that re-runs the mongoexport command if it times out or if there is some other kind of intermittent failure.After doing some testing with smaller collections I deployed this export system on a 64 core EC2 instance on us-east-2. I scaled my MongoDB cluster to an M40. The first two dozen chunks downloaded fine but soon I started seeing failures. Checking the M40 metrics I can see that for one of the shards disk utilization is at 100%, kernel CPU utilization is about 300% and user CPU utilization is at around 60%.Eventually, even with the retry mechanism in place, mongoexport operations were failing just as they started approaching a complete download of a chunk of 100,000 documents.Questions:Thanks so much for your help!",
"username": "John_David_Eriksen"
},
{
"code": "",
"text": "ten million documents and countingSorry hoping in. Not my area but this might help others to know: are the documents all under constant change like user data? or some accumulated data taken daily or so.also knowing the server version along with the error message might help too. that and if all shard has the same cpu/disk/network capabilities (twins) or some has lower capacities.by the way, lowering the chunk size might need more trips for you but also may solve the problem.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This thought was bugging me about a missing detail in your case: your sharding key being a poorly chosen one along with time-related daily entries can cause a huge amount of data to accumulate on a single shard.this time data is such an important case that MongoDB has added the ability to create “Time Series Collections” to store them since v5. Time Series — MongoDB ManualIf this is the case for your data, you need to be softer on that range of data while exporting. And you may also try re-sharding and/or converting them to Time Series Collections.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz, thank you so much for your reply.are the documents all under constant change like user data? or some accumulated data taken daily or so.These documents are read-only. They are only written to occasionally as part of seldom-run batch data migration oprtaions. Every night additional documents are added.also knowing the server version along with the error message might help too. that and if all shard has the same cpu/disk/network capabilities (twins) or some has lower capacities.Server version is 5.0.13. I don’t have the error message from mongoexport but I can potentially rewrite my code to expose that. I am using MongoDB Atlas M40. There is a primary node and two slave nodes. I assume that all have the same capacity. Monitoring shows that really only the primary node is exhibiting heavy disk and CPU utilization.by the way, lowering the chunk size might need more trips for you but also may solve the problem.Thank you, I will try that.Thanks for letting me know about time series collections and the danger of poorly chosen sharding keys. Taking a look at our cluster it doesn’t look like we are using horizontal scaling or sharding. When I say “sharding” I mean the techniques described here: https://www.mongodb.com/docs/manual/sharding/For 10 million + documents would you recommend horizontal scaling?",
"username": "John_David_Eriksen"
},
{
"code": "",
"text": "There is a primary node and two slave nodes\n…\nit doesn’t look like we are using horizontal scaling or shardingNo sharded clusters. Check.Sharding is needed when the total “size” does matter. 768 floating point numbers and other fields do not seem to make more than 10KB for each document. But you are saying “ten million documents” which would correspond to a 100GB size. So the decision is a rough one here for a sharding.you said you try exporting 100.000 at a time this would correspond to about 1GB which is not much until you said trying 64 of them concurrently. Things go a little blurry here. M40 should deal with these sizes and maybe also the load from 64 operations as well.But in return, you need bandwidth to get responses from each of those operations before timeouts start to happen. depending on the speed (plus other settings) anything can happen.It would help if you give a full line of what is that error with “256” code. I could not find a direct reference and traversing source code is not easy with just that number, because there are lots of other keywords having 256 in them; especially “SCRAM-SHA-256” takes a lot of space.assuming your data includes timestamps, converting to time series is still a valid suggestion for your documents for future purposes.also, I would like to hear the result if you try smaller numbers when fetching data.",
"username": "Yilmaz_Durmaz"
}
]
| Fast and reliable way to export a collection with approximately ten million documents? | 2022-11-15T20:55:49.009Z | Fast and reliable way to export a collection with approximately ten million documents? | 3,144 |
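Editor's note: two hedged sketches related to the thread above. The first illustrates the "export in chunks of 100,000 with a retry wrapper" approach the original poster describes, but run serially rather than 64 at a time, which is the direction the replies suggest; the URI, collection name, chunk size, and timeout are placeholders. Note also that --skip/--limit without a stable sort does not strictly guarantee non-overlapping chunks, so a range filter on an indexed field is usually the safer way to partition a large export.

```python
# Hedged sketch: chunked mongoexport with a simple retry loop (serial, not 64-way).
# URI, collection name, chunk size and timeout are placeholders, not from the thread.
import math
import subprocess

URI = "mongodb+srv://user:<password>@cluster0.example.mongodb.net/mydb"
COLLECTION = "embeddings"
TOTAL_DOCS = 10_000_000
CHUNK = 100_000
MAX_RETRIES = 3

def export_chunk(skip, limit, out_path):
    cmd = [
        "mongoexport",
        "--uri", URI,
        "--collection", COLLECTION,
        "--skip", str(skip),
        "--limit", str(limit),
        "--out", out_path,
    ]
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            subprocess.run(cmd, check=True, timeout=1800)
            return
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as exc:
            print(f"chunk skip={skip} failed on attempt {attempt}: {exc}")
    raise RuntimeError(f"chunk skip={skip} gave up after {MAX_RETRIES} attempts")

for i in range(math.ceil(TOTAL_DOCS / CHUNK)):
    export_chunk(i * CHUNK, CHUNK, f"chunk_{i:05d}.json")
```

The second sketch illustrates the time series suggestion from the same thread; it assumes the documents carry a timestamp field (here called createdAt, an invented name) and uses PyMongo's create_collection to pass the timeseries options through to the server.

```python
# Hedged sketch: creating a time series collection (assumes a "createdAt" field).
from pymongo import MongoClient

db = MongoClient("mongodb+srv://user:<password>@cluster0.example.mongodb.net")["mydb"]
db.create_collection(
    "embeddings_ts",
    timeseries={"timeField": "createdAt", "granularity": "hours"},
)
```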