Columns: image_url (string, 113-131 chars) | tags (sequence) | discussion (list) | title (string, 8-254 chars) | created_at (string, 24 chars) | fancy_title (string, 8-396 chars) | views (int64, 73-422k)
null
[ "aggregation" ]
[ { "code": "{$lookup: {\n from: 'suspects',\n localField: 'suspects.id',\n foreignField: '_id',\n as: 'example1'\n}}\n_id\nfullName\nphoneNumber\n_id\nfullName\nphoneNumber\n_id\ncrimeNumber\neventDate\nsuspects: [\n {\n id // _id value in the suspects document\n note\n +++ populating data from suspects document\n\n lawyer: {\n id // _id value in the lawyers document\n note\n +++ populating data from lawyers document\n }\n }\n ...\n ...\n]\n", "text": "Hello everyone, I started using mongodb after a long time, but I could not write the code to perform the following operation.At first, I tried this code but didn’t get the desired result and couldn’t continue due to the example shown as an empty array.lawyers documentsuspects documentcase document", "username": "kibar" }, { "code": "", "text": "You lookup from suspects but localField seems to refer to a field from suspects. It is the other way around. You will need to share the whole pipeline and real sample documents.", "username": "steevej" } ]
Aggregation within nested array
2023-09-07T19:35:35.618Z
Aggregation within nested array
224
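Illustrative sketch for the thread above (not taken from the original posts): one way the populate could be written if the pipeline is run on the case collection, so that localField resolves against the embedded suspects array. Collection and field names follow the post; the output field names are made up:

db.getCollection('case').aggregate([
  { $lookup: {
      from: 'suspects',            // collection holding the full suspect documents
      localField: 'suspects.id',   // ids embedded in the case document's suspects array
      foreignField: '_id',
      as: 'suspectDocs'            // matched suspect documents land here as an array
  } },
  { $lookup: {
      from: 'lawyers',
      localField: 'suspects.lawyer.id',
      foreignField: '_id',
      as: 'lawyerDocs'
  } }
])

Merging the looked-up documents back into each array element would still need an extra $set/$map stage; the point here, echoing the reply, is that the lookup has to start from the case collection for those localField paths to mean anything.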
https://www.mongodb.com/…13a0ba5b23ca.png
[ "atlas" ]
[ { "code": "", "text": "Hello,I want to give Project access (Project Read Only) to a user in Atlas.When I give a Project access (Project Read Only) to a User it gives by default that Organization access (Organization Member) to that user.When I remove a user from the Organization Access it removes from the Project Access.It is okay but why it shows the below access to that user of my organization?Activity Feed (Organization): It shows my organization’s details to that user, including invoice and billing and all projects.Access Manager (Organization): It shows all the users of my organization.Activity Feed (Project): It shows all the activities of the project, I understand this I have given the project access so it will show activity.Access Manager (Project): It shows all the users of my project.I don’t want to show the above details to a user who has access to only a Project.", "username": "turivishal" }, { "code": "", "text": "Hello, welcome to the community.There is currently no way to customize these roles. You can try to request some resources for this at the following link:https://feedback.mongodb.com/", "username": "Samuel_84194" }, { "code": "", "text": "Look this Granular Permissions – MongoDB Feedback Engine", "username": "Samuel_84194" } ]
How to customize the access of organization in Atlas?
2023-09-08T08:54:52.351Z
How to customize the access of organization in Atlas?
362
null
[ "replication" ]
[ { "code": "", "text": "Hi! Thanks for someone who can help me, I have an M30 tier and in the primary replica set I have just 100gb after deleting and compacting the database, however, the secondary replica set still has 190gb and I was reading documentation about using the command of mongo --host clustername.host but it’s not possible to have access, Is there any way that the secondary storage has the same size as the principal one?", "username": "Jerry_Sebastian" }, { "code": "def resolveDNS(url):\n import dns.resolver\n domain = url\n\n try:\n answers = dns.resolver.resolve(f\"_mongodb._tcp.{domain}\", \"SRV\")\n except dns.resolver.NoAnswer:\n logging.exception(\"No SRV record found\")\n exit()\n except dns.resolver.NXDOMAIN:\n logging.exception(f\"Domain {domain} does not exist\")\n exit()\n\n return [str(cluster.target).rstrip(\".\") for cluster in answers]\nconnectionString = f\"mongodb://{username}:{password}@{cluster}:27017/{database}?authSource=admin&directConnection=true&retryReads=true\"\n", "text": "Good afternoon, welcome to the MongoDB community.If you have deleted documents, for example, and want to reduce the size of your disk, you need to connect to each node in your replicaset and run compact, after which you can reduce the size of your cluster’s disk. You can resolve your cluster’s DNS to receive the nodes’ IP and connect. I have a script that does this automatically, follow the DNS resolution partand the conn stringThat’s a way ;DI’m available if needed.", "username": "Samuel_84194" } ]
Replica Set with different storage size
2023-09-08T14:18:29.920Z
Replica Set with different storage size
307
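A minimal sketch of the approach described above, using the Node driver (mongosh works the same way): connect straight to one node with directConnection and run compact against each affected collection, then repeat on the other nodes. Host, credentials, database and collection names are placeholders, and on Atlas this is only available on dedicated tiers with suitable privileges.

const { MongoClient } = require('mongodb');

const client = new MongoClient(
  'mongodb://user:pass@cluster0-shard-00-01.xxxxx.mongodb.net:27017/?tls=true&authSource=admin&directConnection=true'
);

await client.connect();
// compact is applied on the node we are directly connected to, one collection at a time
const result = await client.db('myDatabase').command({ compact: 'myCollection' });
console.log(result);
await client.close();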
null
[ "field-encryption" ]
[ { "code": "", "text": "We plan to use client-side field-level encryption for some confidential fields in our product. To generate and manage the Customer Master key, we want to use Hashicorp Vault. KMS providers currently supported are only: Amazon Web Services KMS and Locally Managed Keyfile.To work with Hashicorp Vault, it seems, we need to choose Locally Managed Keyfile as the KMS provider. This means that the Master key will be fetched from Vault in memory and then used in the code to encrypt/decrypt the DEK (Data Encryption Key). Ideally, the decryption of DEK should happen in the vault itself as a best practice, and master key should not be brought out of Vault.Is there a way to achieve this? There are numerous articles around encryption at rest and integration with Hashicorp vault, but none of them is for CSFLE. Need help if anyone is using CSFLE.Thanks", "username": "Anu_Madan" }, { "code": "", "text": "Were you able to solve this issue ?", "username": "John_Moser" }, { "code": "", "text": "No. We couldn’t find a way around. We chose not to use CSFLE.", "username": "Anu_Madan" }, { "code": "", "text": "Is there any one who implemented csfle using hasicorp vault ?", "username": "Navaneethakumar_Balasubramanian" }, { "code": "", "text": "Hello Navaneethakumar,We do have support for using a KMIP key provider, which can be used with HashiCorp Vault enterprise. We have a tutorial on how to set it up in our docs and this blog post covers Vault Enterprise specifically. I hope that helps.Cynthia", "username": "Cynthia_Braund" }, { "code": "\nMap<String, Object> extraOptions = new HashMap<String, Object>();\nextraOptions.put(\"cryptSharedLibPath\", \"<Full path to your Automatic Encryption Shared Library>\"));\n", "text": "Hey Hi Cynthia,Thanks for the links , I have already referred the TUTORAIL link and trying to implement with the help of that page only.But , Looks like this feature works only with Hashicorp enterprise edition. As mentioned in the blog link . We have requested for vault license and vault setup .Few more queries around this topic :Regards,\nNavaneethakumar", "username": "Navaneethakumar_Balasubramanian" }, { "code": "", "text": "Hi Navaneethakumar,That is correct about Vault, only their enterprise edition is KMIP enabled. If you aren’t using Automatic Encryption you don’t need to have the Shared Library or include the path to it. Just a quick note about terminology, AES is the encryption algorithm that is used by CSFLE and it is used regardless of Automatic or Explicit encryption.Thanks,Cynthia", "username": "Cynthia_Braund" } ]
Client side Field Level encryption - integration with Hashicorp vault
2020-08-20T11:44:54.167Z
Client side Field Level encryption - integration with Hashicorp vault
3,389
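For reference alongside the KMIP answer above, a hedged Node.js sketch of pointing ClientEncryption at a KMIP provider (which is what the Vault Enterprise integration uses). The endpoint, certificate paths and key vault namespace are placeholders, not values from the thread:

const { MongoClient, ClientEncryption } = require('mongodb'); // driver v6+; older drivers import ClientEncryption from mongodb-client-encryption

const keyVaultClient = new MongoClient(process.env.MONGODB_URI);
await keyVaultClient.connect();

const clientEncryption = new ClientEncryption(keyVaultClient, {
  keyVaultNamespace: 'encryption.__keyVault',
  kmsProviders: {
    kmip: { endpoint: 'vault.example.com:5696' }   // Vault Enterprise KMIP listener
  },
  tlsOptions: {
    kmip: {
      tlsCAFile: '/path/to/ca.pem',
      tlsCertificateKeyFile: '/path/to/client.pem'
    }
  }
});

// The DEK is wrapped by the KMIP provider itself, so the customer master key never leaves Vault
const dataKeyId = await clientEncryption.createDataKey('kmip');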
null
[ "aggregation", "queries" ]
[ { "code": "[\n {\n \"$sort\": {\n \"date_created\": -1\n }\n },\n {\n \"$limit\": 50\n }\n ]\n", "text": "I have an aggregation query where the first operation is $sort. I tried to improve this by indexing the field that was being sorted. I also added the option { allowDiskUse: true }. However, both of those things had no effect at all on the speed, it took the same amount of time.My pipeline looks like this:Any suggestions?", "username": "Fornida_Ecom" }, { "code": "", "text": "What does explain tell you the query plan is?", "username": "John_Sewell" }, { "code": " \"queryPlanner\": {\n \"plannerVersion\": 1,\n \"namespace\": \"erp-zayntek-prod.quotes\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {},\n \"optimizedPipeline\": true,\n \"winningPlan\": {\n \"stage\": \"LIMIT\",\n \"limitAmount\": 50,\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"date_created\": 1,\n \"number\": 1\n },\n \"indexName\": \"date_created_1_number_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"date_created\": [],\n \"number\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"backward\",\n \"indexBounds\": {\n \"date_created\": [\n \"[MaxKey, MinKey]\"\n ],\n \"number\": [\n \"[MaxKey, MinKey]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 50,\n \"executionTimeMillis\": 0,\n \"totalKeysExamined\": 50,\n \"totalDocsExamined\": 50,\n \"executionStages\": {\n \"stage\": \"LIMIT\",\n \"nReturned\": 50,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 51,\n \"advanced\": 50,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 1,\n \"limitAmount\": 50,\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 50,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 50,\n \"advanced\": 50,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"docsExamined\": 50,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 50,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 50,\n \"advanced\": 50,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 0,\n \"restoreState\": 0,\n \"isEOF\": 0,\n \"keyPattern\": {\n \"date_created\": 1,\n \"number\": 1\n },\n \"indexName\": \"date_created_1_number_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"date_created\": [],\n \"number\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"backward\",\n \"indexBounds\": {\n \"date_created\": [\n \"[MaxKey, MinKey]\"\n ],\n \"number\": [\n \"[MaxKey, MinKey]\"\n ]\n },\n \"keysExamined\": 50,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n },\n \"allPlansExecution\": []\n }\n\n", "text": "", "username": "Fornida_Ecom" }, { "code": "", "text": "That seems to be hitting the index, whats the server spec, collection size and timings that you are seeing?", "username": "John_Sewell" }, { "code": "", "text": "Server Spec:\n10GB Storage, 2GB RAM, 2 vCPUS (its the M10)Collection size:Timing is taking about 2900 msUsing $project helps some but hopefully there is another solution", "username": "Fornida_Ecom" }, { "code": "", "text": "That does seem slow, do you have a sample document? 
Just out of curiosity:What are you using to run the query\nI assume your internet connection is moderately fast\nHow exactly are you timing the query", "username": "John_Sewell" }, { "code": "\n{\n \"_id\": {\n \"$oid\": \"635e9bf1bfa48f5dbc06cf59\"\n },\n \"collection\": \"quotes\",\n \"client\": \"zt\",\n \"companyId\": \"fcf0fae9-ba57-ed11-8c36-000d3a8d9b01\",\n \"date_created\": {\n \"$date\": \"2022-10-30T17:04:52.251Z\"\n },\n \"number\": \"Q-100166\",\n \"id\": \"Q-100166\",\n \"headers\": {\n \"currencyCode\": \"USD\",\n \"salesperson\": \"\",\n \"externalDocumentNumber\": \"Q-100166\",\n \"shippingAgentCode\": \"FEDEX\",\n \"shippingAgentServiceCode\": \"GRD\",\n \"customerNumber\": \"STE999\",\n \"email\": \"\",\n \"phoneNumber\": \"\",\n \"_customerName\": \"\",\n \"paymentTermsId\": \"CREDITCARD\",\n \"_paymentTermsCode\": \"CREDITCARD\",\n \"sellToAddressLine1\": \"\",\n \"sellToCity\": \"\",\n \"sellToState\": \"\",\n \"sellToCountry\": \"USA\",\n \"shipToAddressLine1\": \"\",\n \"shipToCity\": \"\",\n \"shipToState\": \"\",\n \"shipToCountry\": \"USA\",\n \"shipToName\": \"Test Company\"\n },\n \"externalId\": \"Q-100166\",\n \"lines\": [\n {\n \"itemVariantId\": \"71FA277D-6458-ED11-8C36-000D3A8D9B01\",\n \"description\": \"Test NZ Item\",\n \"lineObjectNumber\": \"TNZITEM-1\",\n \"itemId\": \"48CFCD6F-6458-ED11-8C36-000D3A8D9B01\",\n \"quantity\": 1,\n \"unitCost\": 1,\n \"unitPrice\": 2,\n \"meta\": {},\n \"_gp_profit\": 1,\n \"_gp_margin\": 50,\n \"_gp_ext_profit\": 1,\n \"_gp_ext_margin\": 50,\n \"_ext_cost\": 1,\n \"_ext_price\": 2\n },\n {\n \"itemVariantId\": \"15003548-6458-ED11-8C36-000D3A8D9B01\",\n \"description\": \"Test Serial Item\",\n \"lineObjectNumber\": \"TSERIALITEM-1\",\n \"itemId\": \"78177632-6458-ED11-8C36-000D3A8D9B01\",\n \"quantity\": 1,\n \"unitCost\": 2,\n \"unitPrice\": 4,\n \"meta\": {},\n \"_gp_profit\": 2,\n \"_gp_margin\": 50,\n \"_gp_ext_profit\": 2,\n \"_gp_ext_margin\": 50,\n \"_ext_cost\": 2,\n \"_ext_price\": 4\n }\n ],\n \"marketplace\": \"zt Quote\",\n \"creator\": \"person\",\n \"flags\": [],\n \"errors\": [],\n \"SO\": {\n \"id\": \"06126cf2-7458-ed11-8c34-6045bdd449df\",\n \"number\": \"SO-DF-0020009\"\n },\n \"expires\": {\n \"$date\": \"2022-11-06T16:44:49.609Z\"\n },\n \"status\": \"converted\",\n \"margins\": [\n {\n \"label\": \"Revenue\",\n \"value\": 6,\n \"format\": \"currency\",\n \"hide\": false\n },\n {\n \"label\": \"Cost\",\n \"value\": 3,\n \"format\": \"currency\",\n \"hide\": false\n },\n {\n \"label\": \"Burden\",\n \"value\": \"5.5%\",\n \"format\": \"number\",\n \"hide\": false\n },\n {\n \"label\": \"Profit\",\n \"value\": 3,\n \"format\": \"currency\",\n \"hide\": false,\n \"vclass\": \"\"\n },\n {\n \"label\": \"GP Percent\",\n \"value\": 0.445,\n \"format\": \"gp\",\n \"hide\": false,\n \"vclass\": \"\"\n }\n ],\n \"expired\": false,\n \"complete\": false,\n \"deleteAfterSuccess\": true,\n \"flagged_by_processor\": true,\n \"openFlag\": true\n}\n\n", "text": "I am using the node js mongodb library.I am starting to wonder about internet connection. I’m not sure how though, it was fast without sort. 
Today i switched internet connections and it might be faster now.I am calculating the time delta before and after the query is run.Sure, sample document below:", "username": "Fornida_Ecom" }, { "code": "", "text": "To eliminate connectivity why not try piping the output of the query to a new collection and see how long that takes?Just add an $out stage to the aggregation query and see how long that takes…you have a slight overhead of doing a write, but it could help eliminate a possible issue.", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation Query with Sort is slow
2023-09-07T20:20:34.013Z
Aggregation Query with Sort is slow
242
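To make the $out suggestion above concrete, the check could look roughly like this (the scratch collection name is made up); if writing the 50 documents server-side completes quickly, the ~2.9 s is being spent shipping and deserializing results rather than in the sort itself:

await db.collection('quotes').aggregate([
  { $sort: { date_created: -1 } },
  { $limit: 50 },
  { $out: 'quotes_sort_timing_check' }   // server-side write; nothing is returned to the client
]).toArray();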
null
[]
[ { "code": "", "text": "Hi All, I installed MongoDB today on my AWS Linux 2 server. It worked fine first time when I installed, was getting connected. However, it is not starting again and throwing failed connection when running sudo systemctl status mongod.I installed using yum commands. Any help would really be appreciated.Thanks.", "username": "Himanshu_Sethi" }, { "code": "/etc/mongod.conf'sudo systemctl restart mongod'", "text": "Hey @Himanshu_Sethi,Welcome to the MongoDB Community!However, it is not starting again and throwing failed connection when running sudo systemctl status mongod.In case the above steps don’t work, uninstall and reinstall MongoDB following the official documentation for your Linux distribution.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "It is started working. Thank you.", "username": "Himanshu_Sethi" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Getting MongoNetworkError: connect ECONNREFUSED 127.0.0.1:2701 error message
2023-09-06T20:06:41.820Z
Getting MongoNetworkError: connect ECONNREFUSED 127.0.0.1:2701 error message
391
null
[ "node-js", "mongoose-odm", "connecting", "next-js" ]
[ { "code": " import mongoose from \"mongoose\";\n \n const MONGO_URI =\n process.env.NODE_ENV === \"development\"\n ? process.env.MONGO_URI_DEVELOPMENT\n : process.env.MONGO_URI_PRODUCTION;\n \n console.log(`Connecting to ${MONGO_URI}`);\n \n const database_connection = async () => {\n if (global.connection?.isConnected) {\n console.log(\"reusing database connection\")\n return;\n }\n \n const database = await mongoose.connect(MONGO_URI, {\n authSource: \"admin\",\n useNewUrlParser: true\n });\n \n global.connection = { isConnected: database.connections[0].readyState }\n console.log(\"new database connection created\")\n \n };\n \n export default database_connection;\nyarn run devreusing database connection", "text": "I am using NextJS to build an app. I am using MongoDB via mongoosejs to connect to my database hosted in mongoAtlas.My database connection file looks like belowI have seen this MongoDB developer community thread and this GitHub thread.The problem seems to happen only in dev mode(when you run yarn run dev). In the production version hosted on Vercel there seems to be no issue. I understand that in dev mode the server is restarted every time a change is saved so to cache a connection you need to use as global variable. As you can see above, I have done exactly that. The server even logs: reusing database connection, then in mongoAtlas it shows like 10 more connections opened.How can I solve this issue or what am I doing wrong?", "username": "Perminus_Gaita" }, { "code": "", "text": "Did you get this figured out?", "username": "Daniel_Lewis2" }, { "code": "", "text": "I just stop and start my dev server periodically.", "username": "Perminus_Gaita" } ]
NextJs + Mongoose + Mongo Atlas multiple connections even with caching
2023-01-23T08:05:37.705Z
NextJs + Mongoose + Mongo Atlas multiple connections even with caching
2,970
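A commonly used variant of the caching shown above, sketched here for illustration rather than as a guaranteed fix: cache the connection promise itself on global, so concurrent API-route calls during dev-mode hot reloads share one pending connect instead of each opening their own.

import mongoose from "mongoose";

let cached = global.mongooseConn;
if (!cached) {
  cached = global.mongooseConn = { conn: null, promise: null };
}

const database_connection = async () => {
  if (cached.conn) return cached.conn;
  if (!cached.promise) {
    // started once; every caller awaits the same pending connection
    cached.promise = mongoose.connect(process.env.MONGO_URI, { authSource: "admin" });
  }
  cached.conn = await cached.promise;
  return cached.conn;
};

export default database_connection;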
null
[ "java", "spring-data-odm", "time-series" ]
[ { "code": "", "text": "We are using mongo 6.0.3 currently. During large data ingestions using Spring Data every now and then get this error during batch insert. It does not happen with all batches and retrying seems to help for most cases.I am just curious why this happens as timeseries should not have unique index and definitely can not create a index when trying to so it is a timeseries collection.Any help on insight would be appreciated.", "username": "Janari_Parts" }, { "code": "", "text": "Hey @Janari_Parts,Welcome to the MongoDB Community!During large data ingestions using Spring Data every now and then get this error during batch insertRegards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "db.getCollection(\"logs\").find({_id: {\"$gt\": ObjectId('64f88e78d0d43d0a25ffffff'), \"$lt\": ObjectId('64f88e78d0d43d0a27000000')}}, {\"_id\":1})\n{\n \"_id\" : ObjectId(\"64f88e78d0d43d0a2662c042\")\n}\n{\n \"_id\" : ObjectId(\"64f88e78d0d43d0a2662c043\")\n}\n{\n \"_id\" : ObjectId(\"64f88e78d0d43d0a2662c044\")\n}\n{\n \"_id\" : ObjectId(\"64f88e78d0d43d0a2662c045\")\n}\n{\n \"_id\" : ObjectId(\"64f88e78d0d43d0a2662c046\")\n}\n{\n \"_id\" : ObjectId(\"64f88e78d0d43d0a2662c047\")\n}\n{\n \"_id\" : ObjectId(\"64f88e78d0d43d0a2662c048\")\n}\n{\n \"_id\" : ObjectId(\"64f88e78d0d43d0a2662c049\")\n}\n", "text": "We have a standard collection (non time series) that we perform single inserts to and we occasionally receive an “E11000 duplicate key error” exception on the id index too. My issue may not be the same as yours as it is not a time series collection and not java or spring data but it may be the similar given that they’re both duplicates on the id key. If this should be a separate topic I can do that, please advise.We insert roughly 4 million records per day and this error happens 0 to 3 times per week. We are not supplying the _id in the inserted document, we rely on the mongo driver to do that for us. We run two processes, one on each of two servers, performing these inserts simultaneously, I expect that these two process would not interfere with each other because of the random per-process value component in the _id.Example provided below with exception error and list of all document _ids with the same time and process values. The _id counter is incrementing as expected and the logged duplicate does exist in the incrementing series.I’m hoping that someone has some insight or guidance on this problem that will help me solve it.PHP Fatal error: Uncaught MongoDB\\Driver\\Exception\\BulkWriteException: E11000 duplicate key error collection: cet.logs index: id dup key: { _id: ObjectId(‘64f88e78d0d43d0a2662c043’) }Ubuntu 20.04.4 LTS\nphp-mongodb 1.6.1\nmongo-php-library 1.12.0\nalcaeus/mongo-php-adapter 1.2.2\nphp7.4-cli 7.4.3\nmongodb-org-server 5.0.20", "username": "Chris_Feldhaus" }, { "code": "Write errors: [BulkWriteError{index=6887, code=11000, message='E11000 duplicate key error collection: data.system.buckets.data dup key: { _id: ObjectId('64ebe3806fb730fe1d39cad3') }', details={}}]. 
\n\tat org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:107)\n\tat org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:2789)\n\tat org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:555)\n\tat org.springframework.data.mongodb.core.MongoTemplate.insertDocumentList(MongoTemplate.java:1456)\n\tat org.springframework.data.mongodb.core.MongoTemplate.doInsertBatch(MongoTemplate.java:1316)\n\tat org.springframework.data.mongodb.core.MongoTemplate.doInsertAll(MongoTemplate.java:1285)\n\tat org.springframework.data.mongodb.core.MongoTemplate.insertAll(MongoTemplate.java:1258)\n\tat org.springframework.data.mongodb.repository.support.SimpleMongoRepository.insert(SimpleMongoRepository.java:240)\n\tat jdk.internal.reflect.GeneratedMethodAccessor144.invoke(Unknown Source)\n", "text": "The error looks like this:We ingest a large file 9GB so it tends to happen a few thousand times, but the objects inserted are small. It works fine 95% of the time, so most of the data is ingested without problem and if we add a retry on this failure it can try again 5-20 times and it passes it in.We split everything into Chunks, it did not make a difference on the chunk size either 100 or 10k both get errors. We usually have 5000 insertions per second. We let mongo generate the ID itself so we don’t actually add an _id field.", "username": "Janari_Parts" } ]
Timeseries collection gives "E11000 duplicate key error index" on _id field
2023-09-08T08:17:17.042Z
Timeseries collection gives “E11000 duplicate key error index” on _id field
327
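The retry-on-duplicate-key workaround mentioned in the first post, sketched with the Node driver purely for illustration (the thread itself uses Spring Data, and this does not address the underlying server-side bucket collision):

async function insertBatchWithRetry(collection, batch, maxRetries = 20) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await collection.insertMany(batch);
    } catch (err) {
      // only retry when every reported failure is a duplicate-key error (code 11000)
      const duplicateKeyOnly =
        err.code === 11000 ||
        (Array.isArray(err.writeErrors) && err.writeErrors.every(e => e.code === 11000));
      if (!duplicateKeyOnly || attempt === maxRetries) throw err;
      // otherwise fall through and retry the batch, as described above
    }
  }
}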
null
[ "aggregation", "node-js", "sharding", "transactions", "field-encryption" ]
[ { "code": "mongodbbson@6.0.0kerberos2.0.11.xzstd1.1.01.0.0mongodb-client-encryption6.0.02.3.0mongodb-client-encryption3.x-5.xmongodb-client-encryptionmongodb-client-encryptionmongodb-client-encryptionsockssocksmongodmongossockssockspeerDependencysocksfindOneAndXnullincludeResultMetadataModifyResultnullincludeResultMetadata: true// This has the same behaviour as providing `{ includeResultMetadata: false }` in the v5.7.0+ driver\nawait collection.findOneAndUpdate({ hello: 'world' }, { $set: { hello: 'WORLD' } });\n// > { _id: new ObjectId(\"64c4204517f785be30795c92\"), hello: 'world' }\n\n// This has the same behaviour as providing no options in any previous version of the driver\nawait collection.findOneAndUpdate(\n { hello: 'world' },\n { $set: { hello: 'WORLD' } },\n { includeResultMetadata: true }\n);\n// > {\n// > lastErrorObject: { n: 1, updatedExisting: true },\n// > value: { _id: new ObjectId(\"64c4208b17f785be30795c93\"), hello: 'world' },\n// > ok: 1\n// > }\nsession.commitTransaction()session.abortTransaction()MongoClientwithSessionwithTransactionawait client.withSession(async session => {})voidawait session.withTransaction(async () => {})withTransactionifwithTransactionundefinedwithTransactionvoidwithTransactionvoidwithTransactionMongoClientMongoClientMongoClientMongoInvalidArgumentErrorMongoClient// pre v6\nconst session = client1.startSession();\nclient2.db('foo').collection('bar').insertOne({ name: 'john doe' }, { session }); // no error thrown, undefined behavior\n\n// v6+\nconst session = client1.startSession();\nclient2.db('foo').collection('bar').insertOne({ name: 'john doe' }, { session });\n// MongoInvalidArgumentError thrown\nencryptdecryptcreateDataKeyClientEncryptionMongoCryptErrorMongoErrorMongoCryptErrorErrorError.causeMongoErrorMongoCryptErroruseNewUrlParseruseUnifiedTopology'1', 'y', 'yes', 't'true'-1', '0', 'f', 'n', 'no'false// Incorrect\nconst client = new MongoClient('mongodb://localhost:27017?tls=1'); // throws MongoParseError\n\n// Correct\nconst client = new MongoClient('mongodb://localhost:27017?tls=true');\ntls=true&tls=falseMongoClienttlsCAFiletlsCertificateKeyFiletlsCRLFileMongoClientMongoClientconst client = new MongoClient(CONNECTION_STRING, {\n tls: true,\n tlsCAFile: 'caFileName',\n tlsCertificateKeyFile: 'certKeyFile',\n tlsCRLFile: 'crlPemFile'\n}); // Files are not read here, but file names are stored on the MongoClient\n\nawait client.connect(); // Files are now read and their contents stored\nawait client.close();\n\nawait client.connect(); // Since the file contents have already been cached, the files will not be read again.\ntlsCAFiletlsCertificateKeyFiletlsCRLFiledb.command()admin.command()optionsreadConcernwriteConcern.command()ConnectionPoolCreatedEvent.optionsoptionsConnectionPoolCreatedEvent{\n\tmaxPoolSize: number,\n\tminPoolSize: number,\n\tmaxConnecting: number,\n\tmaxIdleTimeMS: number,\n\twaitQueueTimeoutMS: number\n}\n'mongodb://host?readPreferenceTags=region:ny&readPreferenceTags=rack:r1&readPreferenceTags=';\n// client.options.readPreference.tags\n[{ region: 'ny' }, { rack: 'r1' }, {}];\nreadPreferenceTagsGridFSBucketWriteStreamWritableGridFSBucketWriteStreamwrite()end()'close''drain''finish'_write_finalWritable.write().end()'finish''drain'GridFSFilegridFSFile// If our event handler is declared as a `function` \"this\" is bound to the stream.\nfs.createReadStream('./file.txt')\n .pipe(bucket.openUploadStream('file.txt'))\n .on('finish', function () {\n console.log(this.gridFSFile);\n });\n\n// If our event handler is declared 
using big arrow notation,\n// the property is accessible on a scoped variable\nconst uploadStream = bucket.openUploadStream('file.txt');\nfs.createReadStream('./file.txt')\n .pipe(uploadStream)\n .on('finish', () => console.log(uploadStream.gridFSFile));\nGridFSBucketWriteStream.ERRORGridFSBucketWriteStream.FINISHGridFSBucketWriteStream.CLOSEGridFSBucketReadStreamGridFSBucketReadStreamGridFSBucketReadStream.ERRORGridFSBucketReadStream.DATAGridFSBucketReadStream.CLOSEGridFSBucketReadStream.ENDcreateDataKeycreateDataKeyDataKeyinsertedIddb.addUser()admin.addUser()addUsercreateUsercreateUserAddUserOptionscreateUserconst db = client.db('admin');\n// Example addUser usage\nawait db.addUser('myUsername', 'myPassword', { roles: [{ role: 'readWrite', db: 'mflix' }] });\n// Example equivalent command usage\nawait db.command({\n createUser: 'myUsername',\n pwd: 'myPassword',\n roles: [{ role: 'readWrite', db: 'mflix' }]\n});\ncollection.stats()collStatscollStatsawait db.command()$collStatsCollStatsOptionsWiredTigerDataBulkWriteResultBulkWriteResult.nInsertedBulkWriteResult.insertedCountBulkWriteResult.nUpsertedBulkWriteResult.upsertedCountBulkWriteResult.nMatchedBulkWriteResult.matchedCountBulkWriteResult.nModifiedBulkWriteResult.modifiedCountBulkWriteResult.nRemovedBulkWriteResult.deletedCountBulkWriteResult.getUpsertedIdsBulkWriteResult.upsertedIdsBulkWriteResult.getUpsertedIdAt(index: number)BulkWriteResult.getInsertedIdsBulkWriteResult.insertedIdssslCAtlsCAFilesslCRLtlsCRLFilesslCerttlsCertificateKeyFilesslKeytlsCertificateKeyFilesslPasstlsCertificateKeyFilePasswordsslValidatetlsAllowInvalidCertificatestlsCertificateFiletlsCertificateKeyFilekeepAlivekeepAliveInitialDelayMongoErrorMongoErrorAutoEncrypterMongoClient.autoEncrypterAutoEncrypterMongoClient.autoEncrypterMongoClientClientEncryption.onKMSProvidersRefreshClientEncryption.onKMSProvidersRefreshmongodb-client-encryptiononKMSProviderRefreshEvalOptionsevalEvalOptionsonKMSProvidersRefreshmongodb", "text": "The MongoDB Node.js team is pleased to announce version 6.0.0 of the mongodb package!The main focus of this release was usability improvements and a streamlined API. Read on for details![!IMPORTANT]\nThis is a list of changes relative to v5.8.1 of the driver. ALL changes listed below are BREAKING.\nUsers migrating from an older version of the driver are advised to upgrade to at least v5.8.1 before adopting v6.The minimum supported Node.js version is now v16.20.1. We strive to keep our minimum supported Node.js version in sync with the runtime’s release cadence to keep up with the latest security updates and modern language features.This driver version has been updated to use [email protected]. BSON functionality re-exported from the driver is subject to the changes outlined in the BSON V6 release notes.[!NOTE]\nAs of version 6.0.0, all useful public APIs formerly exposed from mongodb-client-encryption have been moved into the driver and should now be imported directly from the driver. These APIs rely internally on the functionality exposed from mongodb-client-encryption, but there is no longer any need to explicitly reference mongodb-client-encryption in your application code.The driver uses the socks dependency to connect to mongod or mongos through a SOCKS5 proxy. socks used to be a required dependency of the driver and was installed automatically. 
Now, socks is a peerDependency that must be installed to enable socks proxy support.Previously, the default return type of this family of methods was a ModifyResult containing the found document and additional metadata. This additional metadata is unnecessary for the majority of use cases, so now, by default, they will return only the found document or null.The previous behavior is still available by explicitly setting includeResultMetadata: true in the options.See the following blog post for more information.Each of these methods erroneously returned server command results that can be different depending on server version or type the driver is connected to. These methods return a promise that if resolved means the command (aborting or commiting) sucessfully completed and rejects otherwise. Viewing command responses is possible through the command monitoring APIs on the MongoClient.The await client.withSession(async session => {}) now returns the value that the provided function returns. Previously, this function returned void this is a feature to align with the following breaking change.The await session.withTransaction(async () => {}) method now returns the value that the provided function returns. Previously, this function returned the server command response which is subject to change depending on the server version or type the driver is connected to. The return value got in the way of writing robust, reliable, consistent code no matter the backing database supporting the application.[!WARNING]\nWhen upgrading to this version of the driver, be sure to audit any usages of withTransaction for if statements or other conditional checks on the return value of withTransaction. Previously, the return value was the command response if the transaction was committed and undefined if it had been manually aborted. It would only throw if an operation or the author of the function threw an error. Since prior to this release it was not possible to get the result of the function passed to withTransaction we suspect most existing functions passed to this method return void, making withTransaction a void returning function in this major release. Take care to ensure that the return values of your function match the expectation of the code that follows the completion of withTransaction.Providing a session from one MongoClient to a method on a different MongoClient has never been a supported use case and leads to undefined behavior. To prevent this mistake, the driver now throws a MongoInvalidArgumentError if session is provided to a driver helper from a different MongoClient.Driver v5 dropped support for callbacks in asynchronous functions in favor of returning promises in order to provide more consistent type and API experience. In alignment with that, we are now removing support for callbacks from the ClientEncryption class.Since MongoCryptError made use of Node.js 16’s Error API, it has long supported setting the Error.cause field using options passed in via the constructor. Now that Node.js 16 is our minimum supported version, MongoError has been modified to make use of this API as well, allowing us to let MongoCryptError subclass from it directly.These options were removed in 4.0.0 but continued to be parsed and silently left unused. 
We have now added a deprecation warning through Node.js’ warning system and will fully remove these options in the next major release.Prior to this change, we accepted the values '1', 'y', 'yes', 't' as synonyms for true and '-1', '0', 'f', 'n', 'no' as synonyms for false. These have now been removed in an effort to make working with connection string options simpler.In order to avoid accidental misconfiguration the driver will no longer prioritize the first instance of an option provided on the URI. Instead repeated options that are not permitted to be repeated will throw an error.This change will ensure that connection strings that contain options like tls=true&tls=false are no longer ambiguous.In order to align with Node.js best practices of keeping I/O async, we have updated the MongoClient to store the file names provided to the existing tlsCAFile and tlsCertificateKeyFile options, as well as the tlsCRLFile option, and only read these files the first time it connects. Prior to this change, the files were read synchronously on MongoClient construction.[!NOTE]\nThis has no effect on driver functionality when TLS configuration files are properly specified. However, if there are any issues with the TLS configuration files (invalid file name), the error is now thrown when the MongoClient is connected instead of at construction time.Take a look at our TLS documentation for more information on the tlsCAFile, tlsCertificateKeyFile, and tlsCRLFile options.These APIs allow for specifying a command BSON document directly, so the driver does not try to enumerate all possible commands that could be passed to this API in an effort to be as forward and backward compatible as possible.The db.command() and admin.command() APIs have their options types updated to accurately reflect options compatible on all commands that could be passed to either API.Perhaps most notably, readConcern and writeConcern options are no longer handled by the driver. Users must attach these properties to the command that is passed to the .command() method.The options field of ConnectionPoolCreatedEvent now has the following shape:The following connection string will now produce the following readPreferenceTags:The empty readPreferenceTags allows drivers to still select a server if the leading tag conditions are not met.Our implementation of a writeable stream for GridFSBucketWriteStream mistakenly overrode the write() and end() methods, as well as, manually emitted 'close', 'drain', 'finish' events. Per Node.js documentation, these methods and events are intended for the Node.js stream implementation to provide, and an author of a stream implementation is supposed to override _write, _final, and allow Node.js to manage event emitting.Since the API is still a Writable stream most usages will continue to work with no changes, the .write() and .end() methods are still available and take the same arguments. The breaking change relates to the improper manually emitted event listeners that are now handled by Node.js. The 'finish' and 'drain' events will no longer receive the GridFSFile document as an argument (this is the document inserted to the bucket’s files collection after all chunks have been inserted). 
Instead, it will be available on the stream itself as a property: gridFSFile.Since the class no longer emits its own events: static constants GridFSBucketWriteStream.ERROR, GridFSBucketWriteStream.FINISH, GridFSBucketWriteStream.CLOSE have been removed to avoid confusion about the source of the events and the arguments their listeners accept.The GridFSBucketReadStream internals have also been corrected to no longer emit events that are handled by Node’s stream logic. Since the class no longer emits its own events: static constants GridFSBucketReadStream.ERROR, GridFSBucketReadStream.DATA, GridFSBucketReadStream.CLOSE, and GridFSBucketReadStream.END have been removed to avoid confusion about the source of the events and the arguments their listeners accept.Previously, the TypeScript for createDataKey incorrectly declared the result to be a DataKey but the method actually returns the DataKey’s insertedId.The deprecated addUser APIs have been removed. The driver maintains support across many server versions and the createUser command has support for different features based on the server’s version. Since applications can generally write code to work against a uniform and perhaps more modern server, the path forward is for applications to send the createUser command directly.The associated options interface with this API has also been removed: AddUserOptions.See the createUser documentation for more information.The collStats command is deprecated starting in server v6.2 so the driver is removing its bespoke helper in this major release. The collStats command is still available to run manually via await db.command(). However, the recommended migration is to use the $collStats aggregation stage.The following interfaces associated with this API have also been removed: CollStatsOptions and WiredTigerData.The following deprecated properties have been removed as they duplicated those outlined in the [MongoDB CRUD specification|https://github.com/mongodb/specifications/blob/611ecb5d624708b81a4d96a16f98aa8f71fcc189/source/crud/crud.rst#write-results]. The list indicates what properties provide the correct migration:The following options have been removed with their supported counterparts listed after the → TCP keep alive will always be on and now set to a value of 30000ms.The removed functionality listed in this section was either unused or not useful outside the driver internals.MongoError and its subclasses are not meant to be constructed by users as they are thrown within the driver on specific error conditions to allow users to react to these conditions in ways which match their use cases. The constructors for these types are now subject to change outside of major versions and their API documentation has been updated to reflect this.As of this release, users will no longer be able to access the AutoEncrypter interface or the MongoClient.autoEncrypter field of an encrypted MongoClient instance as they do not have a use outside the driver internals.ClientEncryption.onKMSProvidersRefresh was added as a public API in version 2.3.0 of mongodb-client-encryption to allow for automatic refresh of KMS provider credentials. 
Subsequently, we added the capability to automatically refresh KMS credentials using the KMS provider’s preferred refresh mechanism, and onKMSProviderRefresh is no longer used.This cleans up some dead code in the sense that there were no eval command related APIs but the EvalOptions type was public, so we want to ensure there are no surprises now that this type has been removed.We invite you to try the mongodb library immediately, and report any issues to the NODE project.", "username": "Warren_James" }, { "code": "node_modules/mongodb/lib/operations/find_and_modify.js:33\n options.includeResultMetadata ??= false;\n ^^^\n\nSyntaxError: Unexpected token '??='\n at wrapSafe (internal/modules/cjs/loader.js:984:16)\n at Module._compile (internal/modules/cjs/loader.js:1032:27)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)\n at Module.load (internal/modules/cjs/loader.js:933:32)\n at Function.Module._load (internal/modules/cjs/loader.js:774:14)\n at Module.require (internal/modules/cjs/loader.js:957:19)\n at require (internal/modules/cjs/helpers.js:88:18)\n at Object.<anonymous> (/Users/mikestorey/source/learn/mongo-developer/node_modules/mongodb/lib/collection.js:21:27)\n at Module._compile (internal/modules/cjs/loader.js:1068:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)\n", "text": "Seems like a defect was shipped. MacOS 12.6.8 - the following error prevents any functionality", "username": "Mike_Storey" }, { "code": "", "text": "Hi @Mike_Storey, what Node.js version are you using? That token is the nullish coalescing assignment operator. Node.js added support for it in v15.0.0 and our lowest supported Node.js version for v6 of the Node Driver is v16.20.1.", "username": "Warren_James" }, { "code": "", "text": "Duoh - yep that was it! Got wrapped around the axle on this one. Maybe create something that is indexed by google with that error, I did a lot of searching and came up empty.", "username": "Mike_Storey" } ]
MongoDB NodeJS Driver 6.0.0 Released
2023-08-28T20:34:16.687Z
MongoDB NodeJS Driver 6.0.0 Released
1,386
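A small usage sketch of one of the breaking changes described in the release notes above (database, collection and field names are illustrative): in v6, withSession and withTransaction resolve to whatever the callback returns, so results can be passed straight out of the transaction.

const client = new MongoClient(uri);

const insertedId = await client.withSession(session =>
  session.withTransaction(async () => {
    const res = await client.db('test').collection('orders').insertOne({ status: 'new' }, { session });
    return res.insertedId;   // propagated out through withTransaction and then withSession
  })
);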
null
[ "node-js", "replication", "connecting", "atlas-cluster", "server" ]
[ { "code": "", "text": "I was on the M0 (shared, free) cluster, and very occasionally, I’d (seemingly randomly) get an error saying ServerSelectionError & ReplicaSetNoPrimary.Here’s my setup:But anyway, since I don’t know the exact cause of this problem, I thought it could be related to my M0 cluster, so I decided to upgrade to M10. The errors seem to be gone, but again I have no idea if it’ll appear again or what caused it. It’s a randomly appearing error.If anyone has any insight, that would be very helpful.", "username": "Pyra_Metrik" }, { "code": "ServerSelectionError/ReplicaSetNoPrimary", "text": "Hey @Pyra_Metrik,Welcome to the MongoDB Community!There could be a few potential causes for the occasional ServerSelectionError/ReplicaSetNoPrimary such as connection pooling issues, or intermittent network outage.However, feel free to reach out in case you face such an issue again.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Kushagra_Kesav thanks for your answer.Where can I read more about connection pooling issues? What exactly does that mean? And is it at all related to M0 cluster?", "username": "Pyra_Metrik" }, { "code": "", "text": "Hey @Pyra_Metrik,Where can I read more about connection pooling issues? What exactly does that mean?Connection pooling is a technique used by database drivers and clients to maintain a cache/pool of open connections that can be reused, rather than opening and closing connections for every request. To read more about Connection Pool, please refer to the documentation.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "@Kushagra_KesavSo it looks an instance of MongoClient (I’m using the node driver) manages its own connection pool to the MongoDB cluster.So if the cluster is shared, I’m guessing there may be sometimes higher latency in establishing a connection? Would it be reasonable to assume that such a latency could cause the serverSelectionError/ReplicaSetNoPrimary errors?", "username": "Pyra_Metrik" } ]
Elusive bug -- ReplicaSetNoPrimary but only sometimes
2023-09-08T03:48:52.339Z
Elusive bug – ReplicaSetNoPrimary but only sometimes
377
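Not a root-cause fix for the thread above, but a sketch of the client-side settings that are typically reviewed when intermittent server-selection errors appear; the values below are illustrative, and all options are standard MongoClient options in the Node driver:

const client = new MongoClient(process.env.MONGODB_URI, {
  serverSelectionTimeoutMS: 30000, // how long the driver waits to find a usable primary (default 30s)
  maxPoolSize: 50,                 // cap on pooled connections per host
  maxIdleTimeMS: 60000,            // recycle pooled connections idle longer than this
  retryWrites: true,
  retryReads: true
});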
null
[ "crud", "kotlin" ]
[ { "code": "private lateinit var db: MongoDatabase\nprivate lateinit var collection: MongoCollection<MyObject>\nprivate lateinit var service: MyService\n\n@BeforeEach\nfun setUp() {\n db = mockk()\n collection = mockk()\n service = MyService(db)\n}\n\n@Test\nfun example() {\n val expectedObject = MyObject()\n every { db.getCollection(\"myCollection\").withDocumentClass<MyObject>() } returns collection\n every { collection.findOneAndUpdate(any<Document>(), any<Document>()) } returns expectedObject\n\n val obj = service.doSmth(\"Test\")\n\n assertThat(obj).isEqualTo(expectedObject)\n}\nclass MyService(db: MongoDatabase) {\n fun doSmth(newValue: String): MyObject {\n val collection = db.getCollection(\"myCollection\").withDocumentClass<MyObject>()\n return collection.findOneAndUpdate(Document(mapOf(\"_id\", \"test\")), Document(mapOf(\"test\", newValue)))\n } \n}\n", "text": "Hello,\nI am currently working with the synchronized MongoDB driver and Kotlin. Thereby I find it quite difficult to mock different methods like findOneAndUpdate. But that would be important for my tests. For mocking I use MockK. Example:The code will not run successfully this way, because for the method call findOneAndUpdate of the mocked collection, no answer can be found. In the code that is tested, the following is called:This test does work with the find method.\nActually it should work like this, but probably I am not paying attention to something important. Can someone help me with this?Many greetings,\nFinn", "username": "Finn-Lasse_Reichling" }, { "code": "val ktor_version: String by project\nval kotlin_version: String by project\nval logback_version: String by project\n\nval prometeus_version: String by project\nplugins {\n kotlin(\"jvm\") version \"1.9.10\"\n id(\"io.ktor.plugin\") version \"2.3.4\"\n id(\"org.jetbrains.kotlin.plugin.serialization\") version \"1.9.10\"\n}\n\ngroup = \"com.example\"\nversion = \"0.0.1\"\n\napplication {\n mainClass.set(\"com.example.ApplicationKt\")\n\n val isDevelopment: Boolean = project.ext.has(\"development\")\n applicationDefaultJvmArgs = listOf(\"-Dio.ktor.development=$isDevelopment\")\n}\n\nrepositories {\n mavenCentral()\n}\n\ndependencies {\n implementation(\"io.ktor:ktor-server-core-jvm\")\n implementation(\"io.ktor:ktor-server-cors-jvm\")\n implementation(\"io.ktor:ktor-server-metrics-micrometer-jvm\")\n implementation(\"io.micrometer:micrometer-registry-prometheus:$prometeus_version\")\n implementation(\"io.ktor:ktor-server-content-negotiation-jvm\")\n implementation(\"io.ktor:ktor-serialization-kotlinx-json-jvm\")\n implementation(\"io.ktor:ktor-server-netty-jvm\")\n implementation(\"ch.qos.logback:logback-classic:$logback_version\")\n implementation(\"org.mongodb:mongodb-driver-kotlin-sync:4.10.2\")\n\n testImplementation(\"io.ktor:ktor-server-tests-jvm\")\n testImplementation(\"org.jetbrains.kotlin:kotlin-test-junit:$kotlin_version\")\n testImplementation(\"io.mockk:mockk:1.13.7\")\n testImplementation(\"org.assertj:assertj-core:3.24.2\")\n}\n\npackage com.example.services\n\nimport com.example.models.MyClass\nimport com.mongodb.client.model.Filters.eq\nimport com.mongodb.client.model.Updates\nimport com.mongodb.kotlin.client.MongoDatabase\n\nclass MyService(private val db: MongoDatabase) {\n fun doSmth(value: String): MyClass? 
{\n val collection = db.getCollection<MyClass>(\"myCollection\").withDocumentClass<MyClass>()\n\n return collection.findOneAndUpdate(eq(MyClass::id.name, \"myConstantKeyValue\"), Updates.set(\"myField\", value))\n }\n}\npackage com.example.models\n\nimport org.bson.codecs.pojo.annotations.BsonId\n\ndata class MyClass(@BsonId val id: String, val myField: String)\n\npackage com.example.services\n\nimport com.example.models.MyClass\nimport com.mongodb.kotlin.client.MongoCollection\nimport com.mongodb.kotlin.client.MongoDatabase\nimport io.mockk.every\nimport io.mockk.mockk\nimport org.assertj.core.api.Assertions.assertThat\nimport org.bson.Document\nimport org.junit.Before\nimport org.junit.Test\n\nclass MyServiceTest {\n private lateinit var db: MongoDatabase\n private lateinit var collection: MongoCollection<MyClass>\n private lateinit var myService: MyService\n\n @Before\n fun setUp() {\n db = mockk()\n collection = mockk()\n myService = MyService(db)\n }\n\n @Test\n fun `should find and update MyClass object`() {\n val expectedMyClass = MyClass(\"myId\", \"myNewValue\")\n every { db.getCollection<MyClass>(\"myCollection\").withDocumentClass<MyClass>() } returns collection\n every { collection.findOneAndUpdate(any<Document>(), any<Document>()) } returns expectedMyClass\n\n val myClass = myService.doSmth(\"myNewValue\")\n\n assertThat(myClass).isEqualTo(expectedMyClass)\n }\n}\nio.mockk.MockKException: no answer found for MongoCollection(#2).findOneAndUpdate(Filter{fieldName='id', value=myConstantKeyValue}, Update{fieldName='myField', operator='$set', value=myNewValue}, FindOneAndUpdateOptions{projection=null, sort=null, upsert=false, returnDocument=BEFORE, maxTimeMS=0, bypassDocumentValidation=null, collation=null, arrayFilters=null, hint=null, hintString=null, comment=null, let=null}) among the configured answers: (MongoCollection(#2).findOneAndUpdate(any(), any(), eq(FindOneAndUpdateOptions{projection=null, sort=null, upsert=false, returnDocument=BEFORE, maxTimeMS=0, bypassDocumentValidation=null, collation=null, arrayFilters=null, hint=null, hintString=null, comment=null, let=null}))))\n", "text": "Here are some more detailed examples and a specific error message.build.gradle.ktsMyServiceMyClassMyServiceTestError Message", "username": "Finn-Lasse_Reichling" } ]
Kotlin Synchronized Driver: Mocking updateOne
2023-09-08T11:57:54.320Z
Kotlin Synchronized Driver: Mocking updateOne
396
null
[ "aggregation", "change-streams" ]
[ { "code": "for await (const change of collection.watch()) {\n console.log(change);\n}\nconst pipeline = [ { $match: { \"name\": \"Stella\" } } ];\nfor await (const change of collection.watch(pipeline)) {\n console.log(change);\n}\n{\n _id: <some-id>,\n name: <some-name>,\n dependents: [\n {\n name: <some-name>,\n relation: <some-relation>,\n },\n ],\n date_time_created: <some-timestamp>,\n}\n", "text": "I’m trying to understand how to filter the results from the change stream. Currently, each insert, update, or delete from the collection is sent to my app:Based on the docs, adding $match in the aggregation pipeline should allow me to pick a specific document to watch and ignore every other documents. However, this doesn’t seem to work as any updates done in the collection (whether it is on the target document, or any other document) is still being sent to my app. Can anyone help me with this? Here’s the code:Here’s the sample document schema:", "username": "Christopher_Tabula" }, { "code": "", "text": "Hi. I would recommend reading through the documentation on Change Events. The TLDR though is that MongoDB returns Change Events and if you want to filter the events returned by watch() you need to “filter” on the “change events”. This has fields like FullDocument, UpdateDescription, etc depending on the event type (update, insert, delete, replace)Let me know if you have any specific questions but the gist is that you need to update your match expression to filter on these events", "username": "Tyler_Kaye" }, { "code": "", "text": "Sorry but this does not even closely close to a good answer. the Change Events page you linked does not contains information about how to correctly use pipelines. why should i filter myself based on the fields themself such ‘FullDocument’,etc. based on your answer i understand that ‘pipelines’ option for watch() does not work as a pipeline, so for what they exists?", "username": "louski.a" }, { "code": "", "text": "Hi,What is it exactly that you are trying to accomplish? If you explain what you would like to do, I would be more than happy to help give specific examples, but the documentation on Change Events and Match Expressions should be the proper place to start. The Watch() API takes a filter that is a $match on the series of Change Events that are sent back. Therefore, if you wanted to filter on all events in which name is “stella”, you would add a match expression of `{“fullDocument.name”: “stella” } and ensure that your trigger it change stream is using the FullDocument option for updates.Thanks,\nTyler", "username": "Tyler_Kaye" } ]
Filtering change stream
2022-11-12T08:10:04.222Z
Filtering change stream
2,882
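Putting the advice above back into the original snippet: the $match has to target fields of the change event (fullDocument, operationType, updateDescription, ...), and for update events fullDocument is only populated when the updateLookup option is set.

const pipeline = [
  { $match: { 'fullDocument.name': 'Stella' } }
];

for await (const change of collection.watch(pipeline, { fullDocument: 'updateLookup' })) {
  console.log(change);
}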
null
[ "aggregation" ]
[ { "code": "", "text": "Hi,I need to update a collection field based on the values from another collection, For eg:\ncollection 1 - employee\nvalues\n{\"_id\":1,“name”:“aaa”, “department”:11}\n{\"_id\":2,“name”:“bbb”, “department”:12}collection 2 - department\n{\"_id\":11,“name”:“dept1”}\n{\"_id\":12,“name”:“dept2”I need the result collection as\nvalues\n{\"_id\":1,“name”:“aaa”, “department”:{\"_id\":11,“name”:“dept1”}}\n{\"_id\":2,“name”:“bbb”, “department”:{\"_id\":12,“name”:“dept2”}}Please let me know how can I update the existing column or create new column in the collection to get the result like the aboveThanks,", "username": "suja_j" }, { "code": "", "text": "To that sort of things, you use the aggregation framework. You can read about it https://docs.mongodb.com/manual/aggregation/. MongoDB University also offer the M121 course related to it.You will need a $lookup aggregation stage. Find more at https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/It is not clear if you want to add the field to the employee collection or if you want a third collection. If the latter, you can achieve that with $out stage. More at https://docs.mongodb.com/manual/reference/operator/aggregation/out/I recommend to go via $out even if the goal is to add the field in the first collection. This way you do not screw up existing data and it gives you the opportunity to verify the results and simply swap the collection when you are happy.I also recommend using Compass to develop the pipeline as it makes it easy to experiment. You can export the pipeline to your favorite programming language once you are done. This being said, I prefer to keep my pipeline in their natural json form in a file that I can reuse in the shell or in my favorite language.", "username": "steevej" }, { "code": "", "text": "You can perform a join, but a join is based a particular equality or an expression. If _id in the second table, is 10 + _id in the first table, then the join might be possible.However, the piped value is returned as an array, therefore, the array needs to be ‘unwinded’ and then manipulated into the document you want.Finally, you need to push this data into a new collection, which is quite easy once you actually get the data.", "username": "Susnigdha_Bharati" }, { "code": "", "text": "You can perform a join, but a join is based a particular equality or an expression. If _id in the second table, is 10 + _id in the first table, then the join might be possible.It is $lookup in MongoDB. Join are SQL. I really do not understand what you mean by if _id is 10+ in the first table. Could you please elaborate?However, the piped value is returned as an array, therefore, the array needs to be ‘unwinded’ and then manipulated into the document you want.The exact stage is called $unwind as documented in https://docs.mongodb.com/manual/reference/operator/aggregation/unwind/and then manipulated into the document you want.Quite right, manipulated with $project and/or $addFields.And finallyFinally, you need to push this data into a new collection, which is quite easy once you actually get the data.That’s the $out stage already mentioned.", "username": "steevej" }, { "code": "", "text": "$lookup is the operator to perform a join, you’re right.I really do not understand what you mean by if _id is 10+ in the first table. Could you please elaborate?A person wants to join the documents, but a person needs some sort of expression to actually compare one document with another. 
I thought the expression is -_id(document in coll 1) = 10 + _id(document in coll 2)", "username": "Susnigdha_Bharati" }, { "code": "", "text": "The _id in collection 1 is not related at all to the _id of collection 2.What’s related is the field “department” in collection 1 and _id of collection 2.", "username": "steevej" }, { "code": "", "text": "Hi Susnigdha! I was just wondering whether you are working anywhere and would like to know about the Tech Stack that you specialize in. I did notice that you have a deep interest in Automation and Cyber Security so I was just curious whether you’d be interested in guiding and mentoring. Waiting to hear from you. It’d be great if we can talk over the phone.", "username": "Rahul_Chomal" } ]
Update a collection field based on another collection
2020-06-02T10:54:56.685Z
Update a collection field based on another collection
20,501
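The $lookup / $unwind / $out combination recommended above, written out against the sample documents from the thread (the $out target name is made up; inspect it and then swap or rename the collection once the result looks right):

db.employee.aggregate([
  { $lookup: { from: 'department', localField: 'department', foreignField: '_id', as: 'department' } },
  { $unwind: '$department' },                  // $lookup returns an array; flatten it to a single sub-document
  { $out: 'employee_with_department' }
])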
null
[ "replication", "python", "atlas-cluster" ]
[ { "code": "ServerSelectionTimeoutError\nNo replica set members found yet, Timeout: 30s, Topology Description: <TopologyDescription id: 64ef55b92030ab4b7a836e07, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cluster0-shard-00-00.if2rk.mongodb.net', 27017) server_type: Unknown, rtt: None>, <ServerDescription ('cluster0-shard-00-01.if2rk.mongodb.net', 27017) server_type: Unknown, rtt: None>, <ServerDescription ('cluster0-shard-00-02.if2rk.mongodb.net', 27017) server_type: Unknown, rtt: None>]>\nmongodb+srv://{MONGO_DB_USER}:{MONGO_DB_PASSWORD}@{MONGO_DB_HOST}/{MONGO_DB_DATABASE}?retryWrites=true&w=majority&replicaSet={MONGO_DB_REPLICA_SET}&readPreference=PrimaryPreferred\n", "text": "Hi Mongodb community.I am facing the following error on my app which uses pymongo 4.5.The strange thing is this happens only sometimes, not always. Like about 1% among total tries.This is my connection string.Thanks in advance!", "username": "Alphanomics_LLC" }, { "code": "ServerSelectionTimeoutErrormongodb+srv://...+srvtlsssltruetlssslfalsetls=falsessl=false", "text": "Hey @Alphanomics_LLC,Welcome to the MongoDB Community!The strange thing is this happens only sometimes, not always. Like about 1% among total tries.Could you share how frequently you are encountering the ServerSelectionTimeoutError issue? Is it occurring a few times within a day or perhaps once a week?Also, please review the Troubleshoot Connection Issues documentation and verify some configurations, such as adding the client’s IP (or IP ranges) to the Network Access List. You may also find the following blog post regarding tips for atlas connectivity useful.This is my connection string.\nmongodb+srv://...Please note, the use of the +srv connection string modifier generally automatically sets the tls (or the equivalent ssl ) option to true for the connection. You can override this behavior by explicitly setting the tls (or the equivalent ssl ) option to false with tls=false (or ssl=false ) in the query string. For more info, please refer here.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
ServerSelectionTimeoutError - pymongo4.5
2023-08-30T15:23:42.241Z
ServerSelectionTimeoutError - pymongo4.5
423
null
[ "replication", "ops-manager" ]
[ { "code": "", "text": "I have installed the operator MongoDB Enterprise Operator in OpenShift 4.9 and I have deployed the Ops Manager and MongoDB enabling TLS and SCRAM authentication, everything was fine until I realized that the user “mms-automation-agent” rotated password constantly which generated the following error on mongodb instances:“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:false,“principalName”:“mms-automation-agent”,\"authenticationDatabase \":“admin”,“remote”:“10.128.2.122:33044”,“extraInfo”:{},“error”:\"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch “}}”}In Ops Manager the processes were shown with a red square indicating “The primary of this replica set is unavailable”Is there a way to disable automatic password rotation for “mms-automation-agent”?Or maybe it is a bug?MongoDB version is 5.0.1-ent", "username": "Cristiano_R" }, { "code": "", "text": "I have a similar error using MongoDB Docker Container with no authentication enabled. Since yesterday I got:“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:false,“principalName”:“xxx_user”,“authenticationDatabase”:“admin”,“remote”:“172.20.0.7:43540”,“extraInfo”:{},“error”:“AuthenticationFailed: SCRAM authentication failed, storedKey mismatch”}}", "username": "Andreas_Patock" }, { "code": "", "text": "I am also getting similar error, any solution found. “AuthenticationFailed: SCRAM authentication failed, storedKey mismatch”", "username": "Aayushi_Mangal" }, { "code": "", "text": "I am in first host addin second node with rs.add and log file on second Node{“t”:{“$date”:“2023-05-18T22:01:54.798-03:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn81”,“msg”:“client metadata”,“attr”:{“remote”:“10.100.180.11:37288”,“client”:“conn81”,“doc”:{“driver”:{“name”:“NetworkInterfaceTL”,“version”:“4.4.20”},“os”:{“type”:“Linux”,“name”:“PRETTY_NAME=\"Debian GNU/Linux 11 (bullseye)\"”,“architecture”:“x86_64”,“version”:“Kernel 5.10.0-21-amd64”}}}}\n{“t”:{“$date”:“2023-05-18T22:01:54.799-03:00”},“s”:“I”, “c”:“ACCESS”, “id”:20249, “ctx”:“conn81”,“msg”:“Authentication failed”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:“__system”,“authenticationDatabase”:“local”,“remote”:“10.100.180.11:37288”,“extraInfo”:{},“error”:“AuthenticationFailed: SCRAM authentication failed, storedKey mismatch”}}\n{“t”:{“$date”:“2023-05-18T22:01:54.801-03:00”},“s”:“I”, “c”:“ACCESS”, “id”:20249, “ctx”:“conn81”,“msg”:“Authentication failed”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:false,“principalName”:“__system”,“authenticationDatabase”:“local”,“remote”:“10.100.180.11:37288”,“extraInfo”:{},“error”:“AuthenticationFailed: SCRAM authentication failed, storedKey mismatch”}}\n{“t”:{“$date”:“2023-05-18T22:01:54.801-03:00”},“s”:“I”, “c”:“NETWORK”, “id”:22944, “ctx”:“conn81”,“msg”:“Connection ended”,“attr”:{“remote”:“10.100.180.11:37288”,“connectionId”:81,“connectionCount”:0}}\n{“t”:{“$date”:“2023-05-18T22:01:55.797-03:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“10.100.180.11:37294”,“connectionId”:82,“connectionCount”:1}}\n{“t”:{“$date”:“2023-05-18T22:01:55.798-03:00”},“s”:“I”, “c”:“NETWORK”, “id”:51800, “ctx”:“conn82”,“msg”:“client metadata”,“attr”:{“remote”:“10.100.180.11:37294”,“client”:“conn82”,“doc”:{“driver”:{“name”:“NetworkInterfaceTL”,“version”:“4.4.20”},“os”:{“type”:“Linux”,“name”:“PRETTY_NAME=\"Debian GNU/Linux 11 (bullseye)\"”,“architecture”:“x86_64”,“version”:“Kernel 5.10.0-21-amd64”}}}}\n{“t”:{“$date”:“2023-05-18T22:01:55.799-03:00”},“s”:“I”, 
“c”:“ACCESS”, “id”:20249, “ctx”:“conn82”,“msg”:“Authentication failed”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:true,“principalName”:“__system”,“authenticationDatabase”:“local”,“remote”:“10.100.180.11:37294”,“extraInfo”:{},“error”:“AuthenticationFailed: SCRAM authentication failed, storedKey mismatch”}}I am using this version because my server processor do not have de processor flag for newer version.", "username": "Nelson_Takashi_Yunaka" }, { "code": "", "text": "I have installed mongodb 5.0 using mongodb statefulset and fcv is set to 5.0. When I am trying to upgrade it to 6.0, pod is not coming up and it is stuck in bootstrap init container:\nkubectl get pods|grep faal\nfaal-mongodb-0 1/1 Running 0 20h\nfaal-mongodb-1 1/1 Running 0 20h\nfaal-mongodb-2 0/1 Init:2/3 0 87mNo error is seen in the log:\nkubectl logs -f faal-mongodb-2 -c bootstrap\n2023/08/25 05:31:21 Peer list updated\nwas \nnow [faal-mongodb-0.faal-mongodb.default.svc.cluster.local faal-mongodb-1.faal-mongodb.default.svc.cluster.local faal-mongodb-2.faal-mongodb.default.svc.cluster.local]\n2023/08/25 05:31:21 execing: /work-dir/on-start.sh with stdin: faal-mongodb-0.faal-mongodb.default.svc.cluster.local\nfaal-mongodb-1.faal-mongodb.default.svc.cluster.local\nfaal-mongodb-2.faal-mongodb.default.svc.cluster.localWhen I exec to the pod, I can see the same authentication error in logs.txt:\n{“t”:{“$date”:“2023-08-31T08:29:19.359+00:00”},“s”:“I”, “c”:“ACCESS”, “id”:20249, “ctx”:“conn1118555”,“msg”:“Authentication failed”,“attr”:{“mechanism”:“SCRAM-SHA-256”,“speculative”:false,“principalName”:“__system”,“authenticationDatabase”:“local”,“remote”:“10.244.24.36:40500”,“extraInfo”:{},“error”:“AuthenticationFailed: SCRAM authentication failed, storedKey mismatch”}}Does anybody have any clue?", "username": "Kavita_Kumari" }, { "code": "", "text": "Getting this same error in one of the config VM.“AuthenticationFailed: SCRAM authentication failed, storedKey mismatch”Does anybody have any solution?", "username": "Debalina_Saha" } ]
AuthenticationFailed: SCRAM authentication failed, storedKey mismatch
2022-03-29T00:00:20.164Z
AuthenticationFailed: SCRAM authentication failed, storedKey mismatch
9,603
null
[]
[ { "code": "{\n\t\"_id\" : ...\n\t\"d\" : [\n\t\t{\n\t\t\t\"h\" : [ ]\n\t\t},\n\t\t{\n\t\t\t\"h\" : [\n\t\t\t\tBinData(0,\"...\"),\n\t\t\t\tBinData(0,\"...\")\n\t\t\t]\n\t\t}]\n}\n", "text": "Hi,I have such a document:I want to update all document in a way, that I want to concatenate all binary array elements into 1 binary field, so the “d” property has only 1 BinData() valuePlease note, that currently, this is an array of objects, but I think this was a bad design decision", "username": "norbert_NNN" }, { "code": "", "text": "Hi there, any idea regarding this problem?\nThank you", "username": "norbert_NNN" } ]
Concatenate 2D array of binary data
2023-09-06T11:40:19.096Z
Concatenate 2D array of binary data
181
null
[ "queries", "atlas-search" ]
[ { "code": "titletitle{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"_id\": {\n \"type\": \"objectId\"\n }\n \"title\": {\n \"type\": \"string\"\n }\n }\n }\n}\ntitleJon test shared?$search: {\n index: \"default\",\n regex: {\n query: \"(.*)jon test(.*)\",\n path: \"title\",\n allowAnalyzedField: true\n }\n }\n$search: {\n index: \"default\",\n regex: {\n query: \"(.*)shared\\?(.*)\",\n path: \"title\",\n allowAnalyzedField: true\n }\n }\nFailed to run this query. Invalid pipeline", "text": "Hi,I’ve been trying to search by regex but I can’t figure out how to use it exactly?I created an index having a field title and this is the configuration of the index and title field.For example, the value title is Jon test shared?\nAt the moment, my current query return nothing.Atlas search even through error when I was trying to search a special character.—> Failed to run this query. Invalid pipeline", "username": "Linh_Nguyen4" }, { "code": "lucene.standardlucene.keyword", "text": "When you use ‘string’ as the field type, in your case uses lucene.standard as the analyzer, which splits (and removes) whitespace. You will need to choose a different analyzer, like lucene.keyword (note that it does not lowercase) to have the whitespace and other special characters available within a token that comes out of the analyzer.", "username": "Erik_Hatcher" }, { "code": "", "text": "@Erik_Hatcher Many thanks for the answer. It works for me!", "username": "Linh_Nguyen4" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Searching special characters by regex does not work?
2023-09-07T02:50:35.242Z
Searching special characters by regex does not work?
505
null
[ "node-js", "mongodb-shell", "atlas", "react-js" ]
[ { "code": "js\n\nimport dotenv from \"dotenv\";\ndotenv.config();\n./mern/server/loadEnvironment.mjs:1\n\njs\n^\nReferenceError: js is not defined\nMongoParseError('Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\" \n", "text": "Hello all,I am trying to get back into the swing of full-stack and am using this guide: How To Use MERN Stack: A Complete Guide | MongoDB\nas a refresher.I am having an issue after going through this code back and forth a couple times. Super simple but annoyingly evasive for the search engines.The loadEnvironment.mjs file has the following code:VS Code kicks back the following:I am only copy pasting the code from the guide at this point because i’ve gone through the code and tried to figure it out myself.commenting out or removing the ‘js’ line throws error:But my ATLAS_URI does in fact start with the latter and is copied from the Atlas UI that provides it.I cannot go further by myself right now and get too frustrated with all the information search engines kick back that i cannot get to the bottom of such a seemingly simple question.Thank you to anyone that sees this and helps.", "username": "Joe_Morales" }, { "code": "", "text": "Removing the js line is certainly the way to go.As for the Invalid Scheme error it would be best if you share the code that uses the URI and the URI it self. If the URI is wrong we need to see it to know what is wrong with it.But my ATLAS_URI does in fact start with the latterMay be you have other characters like spaces or quotes. Share the dotenv file.", "username": "steevej" }, { "code": "", "text": "I’d first like to thank you @steevej for responding. Having any feedback is great for me.I did remove js and amongst other things I have sorted I am having trouble with the mongo atlas authentication now haha.I will look into it more after work another day. I may have to open a new thread for a different question, not sure how forums worksthanks again!", "username": "Joe_Morales" }, { "code": "", "text": "Hello Joe and Steve,I have been using the same tutorial as Joe, and I am facing the exact same problem, the MongoParseError.Hoping to get more help with this issue.owly", "username": "owly_dabs" } ]
How to Use MERN Stack ReferenceError: js not defined
2023-08-29T21:36:11.522Z
How to Use MERN Stack ReferenceError: js not defined
509
null
[ "python", "atlas-cluster" ]
[ { "code": " \"File \"C:\\Users\\Al Ghani Computer\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\srv_resolver.py\", line 82, in get_options raise ConfigurationError(str(exc)) pymongo.errors.ConfigurationError: The resolution lifetime expired after 21.221 seconds: Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.\" ", "text": "this is my code` \"import pymongo\nfrom pymongo import MongoClient\nMONGO_URI = ‘mongodb+srv://saadatbbaig:*********@cluster0.rcawijs.mongodb.net/test’client = MongoClient(MONGO_URI)print(client.list_database_names())\" `i have hided the password in uri string for security reason.SO Here is problem when i run this code it give me this error , i have tried change dns , check out firewell bloackage but nothing worked , here is the error: \"File \"C:\\Users\\Al Ghani Computer\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\srv_resolver.py\", line 82, in get_options raise ConfigurationError(str(exc)) pymongo.errors.ConfigurationError: The resolution lifetime expired after 21.221 seconds: Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.; Server 192.168.43.235 UDP port 53 answered The DNS operation timed out.\" Kindly help me out , i need this to be resolved today,Thank you", "username": "saadat_baig" }, { "code": "", "text": "Are you able to see the ip from “ping cluster0.rcawijs.mongodb.net” ?192.168.43.235This seems to be your local DNS resolver address and looks like it is not able to get IP address of that server FQDN.cluster0.rcawijs.mongodb.net is not a public name so some local configuration are needed to access the endpoint. (at least i can’t resolve the name on my mac)", "username": "Kobe_W" }, { "code": "", "text": "192.168.43.235This seems to be your local DNS resolver address and looks like it is not able to get IP address of that server FQDN.cluster0.rcawijs.mongodb.net is not a public name so some local configuration are needed to access the endpoint. (at least i can’t resolve the name on my mac)can you please elaborate more , and how i am going to git rid of this?", "username": "saadat_baig" }, { "code": ";QUESTION\ncluster0.rcawijs.mongodb.net. IN ANY\n;ANSWER\ncluster0.rcawijs.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-uftgjk-shard-0\"\ncluster0.rcawijs.mongodb.net. 60 IN SRV 0 0 27017 ac-uk0pdt4-shard-00-00.rcawijs.mongodb.net.\ncluster0.rcawijs.mongodb.net. 60 IN SRV 0 0 27017 ac-uk0pdt4-shard-00-01.rcawijs.mongodb.net.\ncluster0.rcawijs.mongodb.net. 
60 IN SRV 0 0 27017 ac-uk0pdt4-shard-00-02.rcawijs.mongodb.net.\n", "text": "This seems to be your local DNS resolver address and looks like it is not able to get IP address of that server FQDN.cluster0.rcawijs.mongodb.net is not a public name so some local configuration are needed to access the endpoint.The cluster is public and correct. An error often made with clusters is to think that they have an IP address. They do not. They have 2 types of DNS records TXT and SRV. The TXT provides connection strings options and the SRV point to the initial hosts to contact for the cluster. These hosts will resolve to IP addresses.See the DNS records for the said cluster:I have tried change dnsTry again with Google’s 8.8.8.8 or 8.8.4.4.You might also have the wrong python module for SRV.If it stills fail with the SRV, try to use the old style using the host provided in the DNS response above.", "username": "steevej" }, { "code": "", "text": "Try again with Google’s 8.8.8.8 or 8.8.4.4.i did this but didn’t worked", "username": "saadat_baig" }, { "code": " File \"C:\\Users\\Al Ghani Computer\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\srv_resolver.py\", line 82, in get_options raise ConfigurationError(str(exc)) pymongo.errors.ConfigurationError: The resolution lifetime expired after 21.606 seconds: Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.; Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.; Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.; Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.; Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.", "text": " File \"C:\\Users\\Al Ghani Computer\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\srv_resolver.py\", line 82, in get_options raise ConfigurationError(str(exc)) pymongo.errors.ConfigurationError: The resolution lifetime expired after 21.606 seconds: Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.; Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.; Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.; Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.; Server 8.8.8.8 UDP port 53 answered The DNS operation timed out.; Server 8.8.4.4 UDP port 53 answered The DNS operation timed out.\nafter changing dns got this error", "username": "saadat_baig" }, { "code": "", "text": "The cluster is public and correct. An error often made with clusters is to think that they have an IP address. They do not. They have 2 types of DNS records TXT and SRV. The TXT provides connection strings options and the SRV point to the initial hosts to contact for the cluster. These hosts will resolve to IP addresses.Right, i am able to get those records from online tools. 
It doesn’t have a A record, so ping doesn’t show anything.", "username": "Kobe_W" }, { "code": "", "text": "@saadat_baigDid you check this?", "username": "Kobe_W" }, { "code": "", "text": "yes i did increased the timeout but still getting same error", "username": "saadat_baig" }, { "code": "", "text": "WhenGoogle’s 8.8.8.8 or 8.8.4.4andincreased the timeoutdoes not work. Make sure you do nothave the wrong python module for SRV.If it still fails.use the old style using the host provided in the DNS response above.If it still fails. Change network.", "username": "steevej" }, { "code": "motormongo+srvServer 8.8.8.8 UPD port 53 answered The DNS operation timed out.connectTimeoutMS", "text": "Hello everyone! I know this is a bit old topic but I have some questions for you.\nI’m using motor to connect to my db from python, using mongo+srv.\nOften all is good and I can connect to my DB, but sometimes it throws the Server 8.8.8.8 UPD port 53 answered The DNS operation timed out. (this usually happens 2/3 times a day).\nAs you can notice I’m using the Google DNS, I also put my connectTimeoutMS to a big value, sometimes this is useful, and I can connect after a bunch of seconds (sometimes mins), but sometimes also having a big connection timeout isn’t enough.\nI’m wondering: for now it’s all good, I’m developing, so no problems, but what will happen when I will put my code into production?Do you know why? Is there ant way to get rid of this annoying DNS problem once for all?Thank you in advance", "username": "Francesco_Maccari" }, { "code": "", "text": "what will happen when I will put my code into production?We don’t know. You control the prod environment (e.g. network, ), not us.Do you know why? Is there ant way to get rid of this annoying DNS problem once for all?Just as the msg says, the dns query times out. Why? no idea. maybe some packets are lost. Network is not reliable by nature. that’s why we usually consider P95 case instead of P100, because anything can happen in extreme cases.", "username": "Kobe_W" }, { "code": "", "text": "Yeah, but this is not an extreme case. I used MongoDB in the past when the srv protocol wasn’t implemented, same network, same DNS and I never had these types of problems.\nIt seems very strange that sometimes it works and sometimes not.\nI’m also using FireBase for other projects and I didn’t notice nothing similar.If you need some more logs to analyze the problem I can give it to you.\nIn the meanwhile I will try to host my dev environment on a dedicated server to see if I have the same issues.", "username": "Francesco_Maccari" }, { "code": "", "text": "We are recently seeing the same issue when testing locally.", "username": "Tim_Xie" } ]
Getting DNS operation timed out error
2023-04-21T15:01:08.290Z
Getting DNS operation timed out error
3,515
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "My query is to understood Log recording by MongoDB.To understand process, I have execute a set of queries each 3 times and every time, planCache is cleared . To get all process logging, I have also configured mongod with --profile 2 --slowms 1 --slowOpSampleRate 0.5 . also.But, when I check log entries for these queries, it shows few getMore entries of first instance, few getMore queries of second instance, and single log record of find or aggregate command.Why log file does not show all query processes for the test queryWhy does it not shows all logs of all query processing steps?", "username": "Prof_Monika_Shah" }, { "code": "", "text": "sometime, even no log record exists for main query command command.find / command.aggregate", "username": "Prof_Monika_Shah" }, { "code": "", "text": "Please see", "username": "steevej" } ]
Why MongoDB does not log all search queries and their getMore records?
2023-09-07T19:13:23.965Z
Why MongoDB does not log all search queries and their getMore records?
278
null
[ "queries" ]
[ { "code": "", "text": "one of my aggregation query using match (a,b), graphlookup and unwind has explain output with index being used (idx_a_1_b_1) shows keys & docs examined is 0 with docs returned is around 300. So, my question is why does the docs and keys examined are 0 event though documents are returned?", "username": "Sateesh_Bammidi" }, { "code": "db.collection.explain.aggregate(...)", "text": "Hi @Sateesh_Bammidi welcome to the community!This is very hard to determine without actual examples, do you mind posting:Best regards\nKevin", "username": "kevinadi" }, { "code": "{ \"_id\" : ObjectId(\"64f60cda03c66e64ab567374\"), \"t\" : ISODate(\"2023-03-19T04:31:28.490Z\"), \"s\" : \"I\", \"c\" : \"COMMAND\", \"id\" : 51803, \"ctx\" : \"conn59\", \"msg\" : \"Slow query\", \"attr\" : { \"type\" : \"command\", \"ns\" : \"LE.L\", \"command\" : { \"getMore\" : NumberLong(\"39043538923983074\"), \"collection\" : \"L\", \"lsid\" : { \"id\" : UUID(\"8cd9cc5b-e51c-425f-aeca-ca0b60ce43ad\") }, \"$db\" : \"LE\" }, \"originatingCommand\" : { \"aggregate\" : \"L\", \"pipeline\" : [ { \"$match\" : { \"rs\" : { \"$elemMatch\" : { \"rfield\" : { \"$gte\" : 30001, \"$lte\" : 70000 } } } } }, { \"$unwind\" : \"$rs\" }, { \"$match\" : { \"rs.rfield\" : { \"$gte\" : 30001, \"$lte\" : 70000 } } }, { \"$group\" : { \"_id\" : \"$rs.rid\", \"rfield\" : { \"$first\" : \"$rs.rfield\" } } } ], \"cursor\" : { }, \"lsid\" : { \"id\" : UUID(\"8cd9cc5b-e51c-425f-aeca-ca0b60ce43ad\") }, \"$db\" : \"LE\" }, \"planSummary\" : \"IXSCAN { rs.rfield: 1 }\", \"cursorid\" : NumberLong(\"39043538923983074\"), \"keysExamined\" : 0, \"docsExamined\" : 0, \"cursorExhausted\" : true, \"numYields\" : 0, \"nreturned\" : 80171, \"reslen\" : 2634625, \"locks\" : { }, \"remote\" : \"10.1.3.163:55496\", \"protocol\" : \"op_msg\", \"durationMillis\" : 44 } }\n", "text": "I also have same question.When I refer log for queries executed, it shows log records with 0 keysExamined as well as 0 docsExamined, but nReturned is non-zero number. All these queries have “getMore” as command.Sample log record is shown below:Does it mean no keys or docs examined during getMore operation? But, all keys and docsExamined during previous query execution?", "username": "Prof_Monika_Shah" }, { "code": "", "text": "Please see", "username": "steevej" } ]
Index used but keys & docs examined is 0 with docs returned is around 300
2021-09-10T18:28:11.593Z
Index used but keys &amp; docs examined is 0 with docs returned is around 300
2,358
null
[ "node-js", "react-native" ]
[ { "code": "export class Result extends Realm.Object<Result> {\n _id: Realm.BSON.ObjectId = new Realm.BSON.ObjectId();\n createdAt: Date = new Date();\n time!: number; // in ms\n scramble!: string;\n owner_id!: string;\n\n static primaryKey = '_id';\n}\n", "text": "Hi,\nI’m creating an Rubiks Cube TImer app. This app will have multiple users, from which each one will need to save his own Results and Sessions.Result will consist of actual result time, scramble algorithm and so on.Session will hold up a group of Results.User will never query for each particular solve, instead he will always need to take whole Session with all solves that are connected to this Session.How should I make a schema for that?\nFor now I’ve made Result schema:Here comes real question. How should I connect sessions with results? Sessions/Results needs to be tied to only one user. In MongoDB in Node.js I know how to make this happen. Here I’m not really sure.Should I create session and just make an array of results in there embedded?\nOr should I create session, and each result whilst creating should be bound to Session by some flag in Schema?", "username": "Rafal_Nawojczyk" }, { "code": "SessionsolvesolveResults", "text": "The question is a little vague.Conceptual questions are often very difficult to answer or address because without understanding the entire use case, we may send you down the wrong path - only you know about the app concept. Because of that, it’s often better to post specific coding questions rather than app design questions. But, let me see if we can help a bit.I see the Result class but what is a Session and a solve as mentioned in the question, do you have a proposed model you can post?If a solve is specific to one Session, never shared with any other Sessions then making it an embedded object would make sense, since it doesn’t need to be a separately managed object.connect sessions with resultsI would be careful with that naming; Realm has a Results object and that ends up being a naming collision and cause confusing code. Perhaps naming it something else may be a good idea.Sessions/Results needs to be tied to only one userOk, but when you say ‘tied to’ what does that mean exactly. Meaning it doesn’t need to be managed or queried?Try to address the above and we’ll take a look.", "username": "Jay" }, { "code": "", "text": "Ok, I will try to explain it better. There is only(changed that) two Schemas: Solve and Session.Each user will create their own sessions(probably like 10-15 per user), and each Solve needs to be connected to one Session.So one Session can have multiple Solves.App will almost always query for all Solves that are connected with certain Session.Sometimes there is need to edit certain Solve.That’s why I thought that I should embed Solves into Session. Is there any example Schema for that? 
In DOC’s there is only example where I need to manually specify Schema in Typescript, which now is not mandatory, since babel plugin does that for us.", "username": "Rafal_Nawojczyk" }, { "code": "SolvesSessionSessionsSolve", "text": "It sounds to me like you’re well on your way.At a high level, theres a session with a List of solves that are related to the sessionThe code in the docs is almost exactly that with a User having a list of Posts (akin to Session with a list of Solves)In your case, since the Solves are very specific to a Session and will never be shared with other Sessions and it doesn’t sound like a large amount of data in each Solve, they could easily be embedded objects.See One-to-Many relationshipI only mention the amount of data as there is a limit on the size of a property, and if the data is going to get lengthy or large, it may be better to have the solves as separately managed objects. But it doesn’t sound like that’s needed.", "username": "Jay" } ]
How should I manage relations?
2023-09-07T16:39:00.910Z
How should I manage relations?
385
null
[ "queries", "node-js", "sharding" ]
[ { "code": "", "text": "Hi,I am newer to MongoDB and am trying to make sense of sharding and how that might work within my application. For context, I am planning to have a website that sorts data based on user access rights. IE each user should only see the data that they have acquired through our company, and accidental mixing of data would not be acceptable. Based on recommendation, I have been looking into potentially creating a DB per customer setup as it provides the highest level of separation and security. I know that creating a collection per customer is not a good option, and highly not recommended. However, it seems like it is possible to setup a similar configuration by forcing sharding to separate data by customer.What I am struggling with is understanding how this can be achieved. Additionally, part of the reason I am looking into this is that I am trying to compare cost of db per customer vs shard per customer and I am having a hard time understanding cost difference for sharding. This would include some sort of configuration for high availability and failover. How would I go about structuring sharding per customer + high availability/failover and how would this compare to a db per customer setup?For further reference, each customer would have 1 to a few hundred nearly identical structured datasets of max only a few Mb’s each.Thanks!", "username": "Philip_Mallinger" }, { "code": "", "text": "This is not what sharding is used for, sharding is used to horizontally scale the data or geographically separate data. It breaks up data based on a shard key and separates it across the different shards. So each shard has a portion of the data.Now for the user access rights there is a few things you can do. You can have a users collection and collection that has data and put a “owner” or “user” field on the data. Then query the data by “owner” that way you only get data for a specific user. This will work but with many clients hitting one DB it could cause some issues along with no logical separation. One DB having issues creates issues for all customers.The other solution you mentioned would also work, having many DBs per client, this will make access management easier as well. You will just have to manage creating all the DBs for new clients, along with the users, permission, settings, etc.This would include some sort of configuration for high availability and failoverMongoDB by default is highly available with a properly configured replica set it can have node failure and automatically elect the new primary.", "username": "tapiocaPENGUIN" } ]
Creating a DB per User vs. Sharding per User
2023-09-07T16:43:15.888Z
Creating a DB per User vs. Sharding per User
311
https://www.mongodb.com/…370203952756.png
[ "node-js" ]
[ { "code": "const dbDuplicateKeyHandler = (error) => {\n console.log(error); // ==> print \n // index: 0,\n // code: 11000,\n // keyPattern: { email: 1 },\n // keyValue: { email: '[email protected]' },\n // statusCode: 500,\n // status: 'error',\n // [Symbol(errorLabels)]: Set(0) {}\n\n \n console.log(error.keyPattern.name); // ==> print undefined\n \n // const message = `Duplicated field value: (${error.keyValue.name}) Please use another value .`;\n \n return new AppError(400, `Duplicated field value: (${error.keyValue.name}) Please use another value .`); // ==> error.keyValue.name is undefined\n};\n", "text": "I’m building a node.js server, my problem is that after finishing the front-end part of my web application I wanted to test my error handlers again (I did it first when building the server on Postman), so in the signup form I entered a duplicated username then everything worked as expected but when I tried to signup with a duplicated email the keyValue is undefined for some reasonshere is the error handler in which I printed the error to see what is wrong, when I printed the error I got the data but then it become undefined :here is a screenshot of the error when trying with a duplicated email :\n", "username": "Marya_Harbi" }, { "code": "", "text": "The fields you’re pulling out are different than in the example you output unless I’m missing something.I guess you need to look at what fields are available before you pull it out, as depending on the source of the error (i.e. which field triggered the error) you’ll have a differently shaped error object.", "username": "John_Sewell" }, { "code": " console.log(\"THE ERROR IS ==>\",error); \n console.log(\"error.keyPattern.name IS ==> \",error.keyPattern.name);\n console.log(\"error.keyValue.name IS ==> \",error.keyValue.name);\n> console.log(\"error.keyPattern.name IS ==> \",error.keyPattern.name);\n> console.log(\"error.keyValue.name IS ==> \",error.keyValue.name);\n", "text": "All the fields of the form is filled with data no is missing, in addition there is a validator which check if there is any data is missing from the form before submitting it.let me explain my problem again, first I entered a duplicated username which is in my case (username), here is the output of these theree lines :when trying with a new username and duplicated email both ofare undefined", "username": "Marya_Harbi" }, { "code": "", "text": "Ahh, so when violating two restrictions only one is listed in the error raised?", "username": "John_Sewell" }, { "code": "", "text": "I’ve not tried violating two restrictions (username and email) yet,\nI once just tested the duplicated username (error handling works fine), the second time the turn was for testing just the duplicated email (got undefined for key.Value.name)\nDo you have any clue where I’ve done it wrong ?", "username": "Marya_Harbi" }, { "code": "keyValue: { email: '[email protected]' }keyValue: { username: 'john' }", "text": "It looks like the shape of the error object changes depending on the error.When you have a duplicate username you get this:\nkeyValue: { email: '[email protected]' }\nand with a username you get:\nkeyValue: { username: 'john' }The key provided within the keyValue section has the field name and field value, so you cannot have a generic lookup, you need to check what the key is and look at that.I can’t see the API documentation for that error, maybe someone can link to it if they know where it is.", "username": "John_Sewell" } ]
keyValue is undefined in E11000 duplicate key error
2023-09-07T12:17:09.968Z
keyValue is undefined in E11000 duplicate key error
247
null
[ "atlas-cluster" ]
[ { "code": "", "text": "So I have been reading up on EU law, and if I understand what the EU.Now because our business allows anyone in the world to access our data and we collect data from people all around the world, I am wondering how MongoDB atlas complies with these EU laws.For example:\nThe GDPR requires that all data collected on citizens must be either stored in the EU , so it is subject to European privacy laws, or within a jurisdiction that has similar levels of protection.But from my understanding Altas does not do this, it stores it on a cluster of servers that we the customer selected when we signed up.So how would we try and be GDPR compliant when using MongoDB when it only affects a small percentage of our clients.Is it possible to clone an atlas in realtime and have the data in the EU, and connect EU customers to that - I can see a number of issues with this method though as data would still be synced back to the US servers.Would love to hear suggestions and ways to solve this issue.", "username": "Russell_Harrower" }, { "code": "", "text": "Hi @Russell_HarrowerBut from my understanding Atlas does not do this, it stores it on a cluster of servers that we the customer selected when we signed up.MongoDB Atlas itself is GDPR compliant, as mentioned in the GDPR FAQ page in the Trust Center.However, users of MongoDB Atlas must also ensure that their processes relating to data are in compliance with GDPR. Please refer to GDPR: Impact to Your Data Management Landscape: Part 3 blog post to see how MongoDB’s products and services can support users to be GDPR compliant.Below is an excerpt from the post that would be relevant to your question about data sovereignty:To support data sovereignty requirements, MongoDB zones allow precise control over where personal data is physically stored in a cluster. Zones are also the basis for Atlas’s fully managed Global Clusters. Clusters can be configured to automatically “shard” (partition) the data based on the user’s location – enabling administrators to isolate EU citizen data to physical facilities located only in those regions recognised as complying with the GDPR.See also MongoDB Atlas: Manage Global Clusters. In addition, I’d suggest to review the GDPR blog series for more information.I would recommend you to engage a consultant specialising in these areas to ensure your compliance.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "EU laws prioritize privacy with robust regulations like the General Data Protection Regulation (GDPR). These laws empower individuals, ensuring control over their personal data and requiring organizations to handle it responsibly. Violations result in significant fines, reinforcing the EU’s commitment to safeguarding privacy rights in the digital age.", "username": "Educatoroid_Educating" } ]
EU laws + privacy
2022-09-01T23:40:37.069Z
EU laws + privacy
2,574
null
[ "node-js", "containers" ]
[ { "code": "", "text": "A few days ago, i update from [email protected] to [email protected]\nI started seeing massive memory leaks so decided to downgrade to [email protected]\nBut after doing that, i cannot connect to mongodb atlas anymore, but get an error\npayload.split is not a function\ninside lib/cmap/auth/scram.js atI could quickly see, the error was at\npayload.split(‘,’);\nYou cannot use split on a Buffer, so I assume it was supposed to be\npayload.toString().split(‘,’);I need to build this in docker, so just hiot fiing the .js file locally is not a solution, so I simply downgraded to [email protected]\nI still get the same error with [email protected] desperate so I downgraded to the original version [email protected]\nand i STILL get the errorSo my best guess is some package mongodb depend on, has been updated and it’s breaking the mongodb\ndriver …\nIs there a status on when this is fixed ?", "username": "Allan_Zimmermann" }, { "code": "", "text": "Tried deleting package-lock.json and node_modules then it started working.\nSo something in my package-lock.json must have been messed up … Pheeww …", "username": "Allan_Zimmermann" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
All mongodb drivers < 6.0.0 seems broken, and 6.0.0 is unusbale due to memory leaking
2023-09-07T13:39:27.284Z
All mongodb drivers &lt; 6.0.0 seems broken, and 6.0.0 is unusbale due to memory leaking
342
null
[ "replication" ]
[ { "code": "", "text": "Hi everyone…My production replica set replication lags shows sometime -1 or -2,\nsyncedTo: ‘Fri Aug 25 2023 11:39:18 GMT-0400 (Eastern Daylight Time)’,\nreplLag: '-1 secs (0 hrs) behind the primary ’\nand we set in read preference is primary preferred…\nplease explain the cause ?", "username": "sindhu_K" }, { "code": "", "text": "Hi everyone…My production replica set replication lags shows sometime -1 or -2,\nsyncedTo: ‘Fri Aug 25 2023 11:39:18 GMT-0400 (Eastern Daylight Time)’,\nreplLag: '-1 secs (0 hrs) behind the primary ’\nand we set in read preference is primary preferred…\nplease explain the cause ?", "username": "sindhu_K" } ]
Replication lag showing Negative value
2023-08-25T15:47:14.748Z
Replication lag showing Negative value
456
https://www.mongodb.com/…0f853f8e2054.png
[]
[ { "code": "", "text": "\"The domain, user name and /or password is incorrect. \" Unable to install MongoDB Service as a local or domain user.\n\n", "username": "Arindam_Biswas2" }, { "code": "", "text": "The accont name and account password should be set. You can set it at System Setting > Account.I think it is a bit confusing. It’s asking for Windows account and password not Microsoft account and password.Anyway, I had installed MongoDB Community 7.0.1 successfully as a local user.", "username": "franli0" } ]
MongoDB Service as a local or domain user
2022-11-07T07:40:00.834Z
MongoDB Service as a local or domain user
3,154
null
[ "installation", "field-encryption" ]
[ { "code": "{\"t\":{\"$date\":\"2023-09-05T23:13:42.465+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.468+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.485+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.489+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.489+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.489+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.489+03:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":7091600, \"ctx\":\"thread1\",\"msg\":\"Starting TenantMigrationAccessBlockerRegistry\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.489+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":95589,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"MacBook-Pro-Rawan.local\"}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.489+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"7.0.0\",\"gitVersion\":\"37d84072b5c5b9fd723db5fa133fb202ad2317f1\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.489+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"22.1.0\"}}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.489+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.492+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.492+03:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"setup bind :: caused by :: Address already in 
use\"}}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.493+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.495+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.495+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.495+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.495+03:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.495+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.495+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.495+03:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.495+03:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.496+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.496+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.496+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.496+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.496+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.496+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.496+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-09-05T23:13:42.496+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n", "text": "Hi , i just downloaded mongodb v7.0.1, and I’m facing a problem with mongod command i will show you the error:", "username": "Rawan_Kr" }, { "code": "{\"t\":{\"$date\":\"2023-09-05T23:13:42.492+03:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"setup bind :: caused by :: Address already in use\"}}}mongodmongod", 
"text": "Hi Rawan,{\"t\":{\"$date\":\"2023-09-05T23:13:42.492+03:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20568, \"ctx\":\"initandlisten\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"setup bind :: caused by :: Address already in use\"}}}Do you have a mongod instance already running? Or is the port that you’re trying to run it on already in use (For e.g. by another mongod or another process?).Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "No , i just downloaded it and trying the command mongod", "username": "Rawan_Kr" } ]
Error when typing mongod command on my terminal MacOS M1
2023-09-05T20:24:05.038Z
Error when typing mongod command on my terminal MacOS M1
459
https://www.mongodb.com/…e_2_1024x128.png
[ "node-js" ]
[ { "code": "listSearchIndexes{name: string}", "text": "Would it be possible to improve the types for the listSearchIndexes, so we can have the correct type definitions? Currently, it just returns {name: string}.\nCleanShot 2023-09-06 at 07.59.43@2x1374×172 31.4 KB\n", "username": "Alex_Bjorlig" }, { "code": "", "text": "Thanks for reporting this issue @Alex_Bjorlig. I’ve filed NODE-5611 to improve this behavior.", "username": "alexbevi" }, { "code": "", "text": "I did think about opening a JIRA issue, but it feels more comfortable here Btw - I have implemented a flow that defines Atlas search indexes in version control, and syncs to Atlas if there are changes on each deployment. Works great ", "username": "Alex_Bjorlig" } ]
Better Atlas search index typescript types in Node.js driver
2023-09-06T06:01:58.219Z
Better Atlas search index typescript types in Node.js driver
375
null
[]
[ { "code": "", "text": "Hi Team,I have hosted my database in AWS EC2 (ubuntu) server,I am facing Slowness in my database and server consume more CPU and memory, What is the cause? Is this query related issue? How to fix this issue.Thanks,\nKrishnakumar K", "username": "KRISHNAKUMAR_K" }, { "code": "`\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.089+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.090+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.093+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.093+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.297+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.297+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.297+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.297+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.297+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":5492,\"port\":21497,\"dbPath\":\"/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\"ip-192-168-3-34\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.297+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.6\",\"gitVersion\":\"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\"openSSLVersion\":\"OpenSSL 1.1.1f 31 Mar 2020\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2004\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.297+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.297+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"0.0.0.0\",\"port\":21497},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.298+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/var/lib/mongodb\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.298+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-08-31T12:31:21.298+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=15217M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.613+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1315}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.613+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.660+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":65530,\"recommendedMinimum\":102400,\"maxConns\":51200},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.665+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.665+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.667+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp 
directory\"}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.940+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.940+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/var/lib/mongodb/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.944+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.944+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.946+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-21497.sock\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.946+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"0.0.0.0\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:22.946+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":21497,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:45.891+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.168.3.34:34430\",\"uuid\":\"8e9deb4f-1628-4c71-9f53-c893b9da16e9\",\"connectionId\":1,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:45.908+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"192.168.3.34:34430\",\"client\":\"conn1\",\"doc\":{\"driver\":{\"name\":\"nodejs|Mongoose\",\"version\":\"4.16.0|6.11.1\"},\"platform\":\"Node.js v18.0.0, LE\",\"os\":{\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.19.0-1028-aws\",\"type\":\"Linux\"}}}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:45.982+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.168.3.34:34440\",\"uuid\":\"e6c8afba-29f8-49c1-b3ff-940cb98477ea\",\"connectionId\":2,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:46.032+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn2\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"192.168.3.34:34440\",\"client\":\"conn2\",\"doc\":{\"driver\":{\"name\":\"nodejs|Mongoose\",\"version\":\"4.16.0|6.11.1\"},\"platform\":\"Node.js v18.0.0, LE\",\"os\":{\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.19.0-1028-aws\",\"type\":\"Linux\"}}}}\n{\"t\":{\"$date\":\"2023-08-31T12:31:46.034+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.168.3.34:34446\",\"uuid\":\"4ad51d65-c201-4cc6-a39a-c56ea21b0cc1\",\"connectionId\":3,\"connectionCount\":3}}`\n", "text": "Below is the logs.", "username": "KRISHNAKUMAR_K" }, { "code": "mongostat", "text": "Hi @KRISHNAKUMAR_K and welcome to MongoDB community forums!!MongoDB is designed to use memory for caching frequently accessed data, so it’s generally expected that memory usage is higher than CPU usage. 
In saying so, there could be multiple reasons on why you are seeing this:Please feel free to reach out in sace of any queries.Warm Regards\nAasawari", "username": "Aasawari" } ]
MongoDB consumes more CPU and memory
2023-09-01T05:48:24.954Z
MongoDB consumes more CPU and memory
447
null
[ "graphql" ]
[ { "code": "", "text": "I want to use the MongoDB Atlas trigger, but it should only apply to Device Sync. For example, Suppose the insert, update, and delete are comming from Device Sync. In that case, the trigger will be executed, but if comming from Graphql or direct insert, update, and delete, the trigger will not be executed. How can I do that?", "username": "Chris_Ian_Fiel" }, { "code": "", "text": "Hi Chris,Thanks for posting and welcome to the community.One way to achieve this is to add a boolean field to your documents called “deviceSyncOrigin” and set it to true when you’re making a write via a sync client. You can then set a match expression on the trigger if this field is set to true.Hope that helps.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Hello @Mansoor_Omar, Thanks, that’s awesome! One last question: I know the trigger pricing is based on every time the trigger is executed. If the expression does not match, will you still be billed?", "username": "Chris_Ian_Fiel" }, { "code": "", "text": "Hi Chris,It would only be included if the trigger executes which would need the match expression to be matched.Here is a link to the App Services billing page for your convenience:Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trigger only for Device Sync
2023-09-07T01:19:35.073Z
Trigger only for Device Sync
323
null
[ "swift" ]
[ { "code": "[email protected]@ObservedResultsForEachForEachkeyPaths@ObservedResultsNavigationLink@ObservedRealmObjectidForEachIdentifiable@ObservedResultskeyPaths@ObservedResultskeyPath@ObservedResults@ObservedRealmObjectEquatable@ObservedRealmObjectEquatable@ObservedRealmObject@ObservedRealmObjectNavigationLink@ObservedRealmObjectnil// SwiftUI app for macOS\nimport SwiftUI\nimport RealmSwift // 10.25.0\n\n@main\nstruct TestApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n ContentView()\n }\n }\n}\n\nstruct ContentView: View {\n @State var selection: UInt64? = nil\n @ObservedResults(Car.self) var cars\n \n var body: some View {\n NavigationView {\n List {\n ForEach(cars) { car in\n let _ = print(\"Computing `NavigationLink` for `\\(car.name)`\")\n NavigationLink(destination: CarDetailView(car: car), tag: car.id, selection: $selection) {\n CarCell(car: car)\n }\n }\n }\n Text(\"Choose a car\")\n .foregroundColor(.secondary)\n }\n .toolbar {\n ToolbarItem(placement: .navigation) {\n Button(action: add10KCars) {\n Label(\"Add 10 000 cars\", systemImage: \"plus\")\n }\n }\n }\n }\n \n func add10KCars() {\n let numberOfCars = cars.count\n let realm = try! Realm()\n try! realm.write {\n for index in 0..<10_000 {\n realm.add(Car(\"Car \\(numberOfCars+index+1)\"))\n }\n }\n }\n}\n\nstruct CarDetailView : View {\n @ObservedRealmObject var car:Car\n \n init(car:Car) {\n print(\"Initializing `CarDetailView` for `\\(car.name)`\")\n self.car = car\n }\n\n var body: some View {\n let _ = print(\"Computing `CarDetailView` for `\\(car.name)`\")\n TextField(\"Name\", text: $car.name)\n .padding()\n TextField(\"Model\", text: $car.model)\n .padding()\n }\n}\n\nstruct CarCell : View {\n\n @ObservedRealmObject var car:Car\n \n init(car:Car)\n {\n print(\"Initializing `CarCell` for `\\(car.name)`\")\n self.car = car\n }\n\n var body: some View {\n let _ = print(\"Computing `CarCell` for `\\(car.name)`\")\n Text(car.name)\n }\n}\n\nclass Car : Object, ObjectKeyIdentifiable {\n\n @Persisted var name = \"Car \\(Date.now.timeIntervalSince1970.description)\"\n @Persisted var model = \"\"\n\n convenience init(_ name:String) {\n self.init()\n self.name = name\n }\n}\n// SwiftUI app for macOS\nimport SwiftUI\nimport RealmSwift // 10.25.0\n\n@main\nstruct TestApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n ContentView()\n }\n }\n}\n\nstruct ContentView: View {\n\n @State var selection: UInt64? = nil\n @ObservedResults(Car.self, keyPaths: [\"name\"]) var cars\n \n var body: some View {\n NavigationView {\n List {\n ForEach(cars) { car in\n let _ = print(\"Computing `NavigationLink` for `\\(car.name)`\")\n NavigationLink(destination: selection == car.id ? CarDetailView(car: car) : nil, tag: car.id, selection: $selection) {\n CarCell(name: car.name)\n }\n }\n }\n Text(\"Choose a car\")\n .foregroundColor(.secondary)\n }\n .toolbar {\n ToolbarItem(placement: .navigation) {\n Button(action: add10KCars) {\n Label(\"Add 10 000 cars\", systemImage: \"plus\")\n }\n }\n }\n }\n \n func add10KCars() {\n let numberOfCars = cars.count\n let realm = try! Realm()\n try! 
realm.write {\n for index in 0..<10_000 {\n realm.add(Car(\"Car \\(numberOfCars+index+1)\"))\n }\n }\n }\n}\n\nstruct CarDetailView : View, Equatable {\n\n @ObservedRealmObject var car:Car\n \n init(car:Car) {\n print(\"Initializing `CarDetailView` for `\\(car.name)`\")\n self.car = car\n }\n \n var body: some View {\n let _ = print(\"Computing `CarDetailView` for `\\(car.name)`\")\n TextField(\"Name\", text: $car.name)\n .padding()\n TextField(\"Model\", text: $car.model)\n .padding()\n }\n\n static func == (lhs: CarDetailView, rhs: CarDetailView) -> Bool {\n return lhs.car.id == rhs.car.id\n }\n}\n\nstruct CarCell : View {\n\n let name:String\n \n init(name:String) {\n print(\"Initializing `CarCell` for `\\(name)`\")\n self.name = name\n }\n \n var body: some View {\n let _ = print(\"Computing `CarCell` for `\\(name)`\")\n Text(name)\n }\n}\n\nclass Car : Object, ObjectKeyIdentifiable {\n\n @Persisted var name = \"Car \\(Date.now.timeIntervalSince1970.description)\"\n @Persisted var model = \"\"\n\n convenience init(_ name:String) {\n self.init()\n self.name = name\n }\n}\n", "text": "I’ve been trying to resolve some crippling performance issues in a SwiftUI app for macOS with a few thousand objects.Unfortunately, the documentation of Realm’s Property Wrappers is not very detailed and much of SwiftUI’s behavior remains opaque, so I’d appreciate if someone could confirm my findings or suggest better ways to improve performance.Workarounds I’ve found:It seems to me that there are quite a few design decisions in SwiftUI that currently make it extremely hard to integrate Realm in a way that is both simple and performant. I hope Apple will improve on this by implementing fine-grained invalidation and better-performing UI elements for macOS.The following article helped me better understand what’s going on in SwiftUI: Understanding how and when SwiftUI decides to redraw views – Donny WalsSimple example demonstration the performance issuesWith some performance enhancements (but still not great)", "username": "Andreas_Ley" }, { "code": "List", "text": "On macOS, SwiftUI’s List is not lazy-loading.Is that a question or a statement?According to WWDC List contents are always loaded lazily and I believe the lazy loading issue of NavigationLink was fixed in XCode 11.somethingWhat versions of XCode and Swift are you using? Have you considered LazyVStack? Did you use the Instruments tool to profile the app?A bit more info may lead to a clear explanation or perhaps even a solution.", "username": "Jay" }, { "code": "ListNavigationLink", "text": "Is that a question or a statement?A statement. While the documentation suggests that List should indeed be lazy-loading, it clearly is currently not in macOS.\nThe code I’ve posted makes it easy to verify this.And I believe the lazy loading issue of NavigationLink was fixed in XCode 11.somethingAFAIK, NavigationLink is supposed to not be lazy-loading. Running the code confirms that.What versions of XCode and Swift are you using?The most recent ones (Xcode 13.3 under macOS 12.3.1, Apple Swift version 5.6).Have you considered LazyVStack?Yes, and that does indeed load lazily. However, it looks quite different and has other issues (like bad scrolling performance).Did you use the Instruments tool to profile the app?Yes. 
That helped track down the above mentioned issues.", "username": "Andreas_Ley" }, { "code": "", "text": "How about with latest beta?", "username": "Alex_Ehlke" }, { "code": "", "text": "I currently don’t have a machine running macOS Ventura beta builds, so I can’t say.", "username": "Andreas_Ley" }, { "code": "NavigationLinkList@ObservedResults@ObservedResults", "text": "macOS Ventura brought some improvements, among them the ability to create a NavigationLink with a lazy destination. While drawing performance in general seems to have been slightly improved, the biggest issue (List not being lazy) has not been resolved yet.The recently released Realm v10.34.0 also contributed a speedup by reducing the number of times the view body has to be computed when a view with @ObservedResults is initialized for the first time. @ObservedResults still seems to trigger more view updates than strictly necessary though.I’ll continue to look into this and will keep you posted (if you don’t mind). ", "username": "Andreas_Ley" }, { "code": "", "text": "I’ve had to stop using ObservedResults and replace with async loading into a view model. I’ve also implemented pagination into list.", "username": "Alex_Ehlke" }, { "code": "", "text": "’ve had to stop using ObservedResultsThat’s interesting as we really don’t have any issues using ObservedResults - they seem to behave pretty well albeit possibly refreshing the view slightly more than is needed.implemented paginationWe regularly have thousands of objects within a Results with very little memory impact - so we’ve not needed pagination. I am curious what the use case is for pagination with lazy-loaded objects?It certainly doesn’t hurt but we’ve not seen a need for the additional code since a realm kind of paginates on it’s own, only loading data when it’s needed.", "username": "Jay" }, { "code": "ObservedResultsonAppear", "text": "I’ve implemented pagination, too (although I’m using it with ObservedResults). In combination with SwiftUI’s onAppear, it’s fairly simple to load an additional batch of items before the scroll hits bottom.The main issue from my point of view is still the non-lazy view loading on macOS and the opaque equality comparison done by SwiftUI to determine which objects have changed (especially with NSObject-based instances). Without the workarounds, my list views would still take several seconds to load (which is not surprising when several thousand views have to be created and rendered).Looking forward to macOS Sonoma, although my hopes are not that high. Apple hasn’t responded to a single bug report I’ve submitted this year, and there were plenty of those…", "username": "Andreas_Ley" }, { "code": "", "text": "I can’t document the issues I’ve had right now sorryIn terms of pagination, I’m dealing with lists of hundreds of thousands of objects", "username": "Alex_Ehlke" } ]
Performance issues with SwiftUI on macOS
2022-04-02T16:31:48.324Z
Performance issues with SwiftUI on macOS
5,217
null
[ "swift" ]
[ { "code": "", "text": "I have a couple Realms I bundle with the app and use as read-only fixture data. Their zipped size is dramatically smaller than their uncompressed + compacted size. Is it possible to unzip them in memory and access them from memory, instead of unzipping on disk? Sometimes users have 200MB RAM to spare, but not 200MB disk. Apple requires reporting the full on-disk size for on-demand resources which scares users off.", "username": "Alex_Ehlke" }, { "code": "", "text": "How are you bundling the zipped Realm? How are you relaying that your app + data takes 200MB on disk - and what’s ‘scary’ about 200Mb? e.g. my Wunderground app is 150Mb and it’s not really scary. Also, couldn’t you unzip the file, copy the data to an in-memory realm and then remove the file?", "username": "Jay" }, { "code": "", "text": "I couldn’t remember the name off hand - it’s the new Background Assets system, which replaces On-Demand Resources.When you download an app that uses Background Assets, the App Store adds a new interstitial which tells the user the estimated/max size of the app on disk after the downloads finish AND after they are decompressed into their “resting state” on disk. This new interstitial is much “louder” for users than the tiny download size info hidden at the bottom of App Store pages and requires a confirmation from the user that they’re OK with it. Users see it as a warning.I get feedback from users that my app at around 30-350MB total is heavy/puts them off, and I get this feedback more often now that I’ve adopted Background Assets. Users don’t understand that these don’t need to be downloaded on future app updates unlike bundled assets either so this new interstitial is a net negative.If I can keep the Background Assets zipped on disk, I could shave near 200MB.I’m not making a point about whether or not 200-300MB is scary, it’s only a description of the feedback I get from users, not my own feelings.Unzipping in memory and using it as an in-memory realm sounds like the solution, minus removing the file which would then need to be downloaded again on next launch, thx.", "username": "Alex_Ehlke" } ]
ZIP'd read-only Realms?
2023-09-06T13:55:02.829Z
ZIP’d read-only Realms?
332
null
[ "replication", "python" ]
[ { "code": "", "text": "Currently using Django Rest with Djongo 1.3.6 as an ORM. Djongo uses pymongo 3.11.4 to connect to mongo. The replica set we are connected to is 4.2.20. It doesn’t look like 3.11.4 is on any of the compatibility charts for 4.2 (that we’re on) or 4.4 , which is what we’re being asked to move to.The problem is Djongo isn’t actively maintained and simply upgrading pymongo just uncovers a lot of deprecated commands and connectivity issues that break Djongo.Are there specific problems with keeping on 3.11.4 and connecting to a 4.4 cluster ?\nthanks!", "username": "Joe_Garrity" }, { "code": "", "text": "Hi @Joe_Garrity, we don’t officially support the 3.x versions of PyMongo anymore, but 3.11.4 does support MongoDB 3.6+.", "username": "Steve_Silvester" } ]
Will pymongo==3.11.4 work with mongo 4.4?
2023-09-06T19:43:00.619Z
Will pymongo==3.11.4 work with mongo 4.4?
317
https://www.mongodb.com/…c_2_1024x341.png
[]
[ { "code": "", "text": "mongodb cluster resumed after inactivity pause, however mongodb for vscode not showing databases list under connections side panemongodb connection is successfully established\ncollection list can be retrieved\nimage1900×634 37.8 KB\n", "username": "Yakov_Kravchenko" }, { "code": "v1.2.1", "text": "Hello @Yakov_Kravchenko ,Welcome to The MongoDB Community Forums! If you’re not currently using the latest version, I recommend updating VSCode to the most recent version available, which is Version: 1.81.1 . Additionally, ensure that you have updated the MongoDB Extension to the latest version, MongoDB for VS Codev1.2.1 .Furthermore, please verify that the user you are attempting to log in with has the appropriate database permissions (such as Atlas admin or readWriteAnyDatabase role) to access the cluster data.If the issue persists, please don’t hesitate to provide us with more information, including:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "v1.2.1", "text": "B for VS Codev1.2.1 .upgrading vsCode to 1.81.1 and MongoDB ext to 1.2.1 resolved the issue\nthank you so much", "username": "Yakov_Kravchenko" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb cluster resumed after inactivity pause however vscode not showing db and collections
2023-09-04T10:44:10.795Z
Mongodb cluster resumed after inactivity pause however vscode not showing db and collections
234
null
[ "mongodb-shell" ]
[ { "code": "", "text": "I’m working on importing data from a collection to Power BI. The new named connector is working great, but when the preview shows up, the schema is based on some old test records left in the collection. Here’s what I’ve tried so far:", "username": "Joel_Zehring" }, { "code": "", "text": "Hi @Joel_Zehring I think I received this error when I didn’t have enough permissions to run this command. I am very confident that once we get this schema regenerated, all will be well for you. But your second bullet point, that is something to do with our older BI Connector, and not for the new Custom Power BI Connector (Atlas SQL)- so they are not related. I wanted to let you know this to alleviate some confusion.For Atlas SQL, you should have a federated database (either one you created or one that was created through the SQL Quickstart). You could either create a new virtual collection in your Data Federation configuration or you could delete and recreate your Quick Start Federated DB - this should also give you an updated SQL Schema, reflecting any new schema changes in your source collection.email me if you’d like to do a screen share session and we can make sure everything is set.\[email protected]", "username": "Alexi_Antonino" }, { "code": "", "text": "Hi @Alexi_Antonino having same issue with connector to powerBI. First I have the issue similar to SQL Atlas Interface Error Connect Power BI - #2 by Alexi_Antonino and then I try to create a schema but get error \"No such command: ‘sqlGenerateSchema’. I am sure I have admin access.", "username": "Manish_Gupta5" }, { "code": "", "text": "when i run sqlGetSchema i get the default answer: { “ok” : 1, “metadata” : { }, “schema” : { } }has anyone managed to generate the schema successfully? i also can’t run the sqlGenerateSchema command", "username": "Matheus_Brito" }, { "code": "", "text": "Hello @Matheus_Brito welcome to the community. A few things to help.First, when you run the sqlGeneratSchema, you must do so from the admin db. Here are some instructions that might help. You may need permissions to do this, let me know if you get stuck.\n\nScreenshot 2023-08-25 at 9.15.00 AM1285×722 152 KB\nAlso, when using Shell, I needed to change my results output to show more verbose response. If you enter this command in: config.set(‘inspectDepth’, Infinity)\nIt will be able to show the whole schema back.Hope this helps.Alexi", "username": "Alexi_Antonino" }, { "code": "", "text": "@Alexi_Antonino thanks for you reply! I’m going to try these commands but first I want to clarify some points:I have a collection with almost 300k documents with polymorphic data and I’m not sure how big should be the sampleSize. To sample between ALL documents should I set it to 0, right?To be sure I have a schema containing all the fields I need, instead of running sqlGenerateSchema, can I run sqlSetSchema with an exported schema from MongoDB Compass? All I have to do is paste the generated JSON’s schema inside the sqlSetSchema command?", "username": "Matheus_Brito" }, { "code": "", "text": "@Alexi_Antonino I try to run sqlGenerateSchema from the admin db and I’m getting “not authorized” response. The user who run these commands need any specific role or permission? I can’t find this info at documentation.", "username": "Matheus_Brito" } ]
No such command: 'sqlGenerateSchema'
2023-06-27T16:20:11.256Z
No such command: ‘sqlGenerateSchema’
1,173
null
[ "node-js" ]
[ { "code": "", "text": "I have a very simple node js application which connects to a free shared MongoDB. Looking at the metrics of the server I can see a daily 60 MB inbound and 10 MB outbound traffic to the shared instance. This traffic occurs even if there is no interaction with the database.\nCan anyone explain why this happens? Is this intended? Is there any way to avoid this constant traffic?Just for reference and why this can be a problem. The application is hosted on an EC2 instance within a AWS Free Tier account and on the same region as the database server. The AWS Free Tier includes 1 GB of regional traffic transfer, that is, traffic within the same region but across availability zoned (data centres). It turns out that the shared MongoDB is hosted in the same region but in a different availability zone. So with 70 MB of daily traffic I exceed the 1 GB monthly limit pretty fast.", "username": "T3rm1" }, { "code": "", "text": "Hi @T3rm1,I have a very simple node js application which connects to a free shared MongoDB. Looking at the metrics of the server I can see a daily 60 MB inbound and 10 MB outbound traffic to the shared instance. This traffic occurs even if there is no interaction with the database.I assume you’re talking about a M0 free-tier Atlas cluster in this scenario but please correct me if I am wrong. Please contact the Atlas in-app chat support regarding the network usage question as they’ll have more insight into your cluster.In saying the above, you could also try to remove all network access list entries and ensure connections are at 0 and monitor to see if anything might be coming from the application side (perhaps unexpected operations) as a troubleshooting step.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Ok, I tried that but I got in contact with a person who didn’t really understand the issue. In the end he said that this behaviour is normal.\nI think this is really bad design then. 2.1 GB of traffic each month on an idle connection is too much.", "username": "T3rm1" }, { "code": "NetworkheartbeatFrequencyMSconnectionsheartbeatFrequencyMS100000Networkclose()heartbeatFrequencyMS20000pymongo>>> client = pymongo.MongoClient(uri, heartbeatFrequencyMS=100000, tlsCAFile=certifi.where())\n>>> client.close()\n\n/// Waited several minutes here\n\n>>> client = pymongo.MongoClient(uri, heartbeatFrequencyMS=20000, tlsCAFile=certifi.where())\n>>> client.close()\nheartbeatFrequencyMS", "text": "I think this is really bad design then. 2.1 GB of traffic each month on an idle connection is too much.I believe the behaviour (network traffic shown on the Network metrics chart in the Atlas UI) you’ve mentioned in specific reference to the M0 tier cluster is expected assuming you are not passing through any operations to the server from your application. Firstly, a few points:My guess in this scenario (again, assuming you are not performing any operations from your application), is that the “idle” connection(s) still need to communicate with the cluster which generally involves a series of checks. Please see further details on the server monitoring specs documentation.I’m going to use the heartbeatFrequencyMS option in the pymongo driver as an example on my test M0 cluster to help demonstrate.In the following screenshots, you will see a single client connect at 2 different points for several minutes each time. 
The first block where the connections spike will be when the heartbeatFrequencyMS value is set to 100000 - Note the Network metrics during this time. After close()-ing the connection and waiting several minutes, another connection is made again but with a heartbeatFrequencyMS value set to 20000 (20% of the first connection) in which you can see higher network usage:\nimage1522×1102 41.5 KB\nFor reference, I executed this code in a test python environment using the pymongo driver:Note: Not all drivers behave exactly the same and the above is only for demonstration purposes regarding the network metric on a driver connection performing CRUD operations.It’s important to note that I am not advising you to change the heartbeatFrequencyMS option here. This is only to demonstrate that even though my connection is not performing any read or write operations, the connection(s) still requires network usage.I would go over the monitoring spec documentation linked earlier in my comment as I believe this would be useful here.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks a lot for your effort.In that case this must be the cause. Still I think 60 MB of traffic within 24 hours on a 100% idle connection (no commands sent) is huge.I’m by no means an expert but that doesn’t sound right. There is no need to have constant communication with replica servers that are not used at all. This should only happen if there is some kind of a problem with the connection to the main server. Please not that I always refer to client - server communication. What Atlas does internally so ensure high availability doesn’t concern me at all.Anyway, it is as it is and I can’t change it. That’s unfortunate. I’ll have a look at the heartbeat frequency. Maybe this can be increased and reduces the traffic.", "username": "T3rm1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
70 MB of daily traffic with no interaction
2023-08-22T09:39:17.515Z
70 MB of daily traffic with no interaction
449
null
[ "queries" ]
[ { "code": "", "text": "I will get the CREATED_BY field for one scenerio and I am not receiving this for other scenerios. So\nif I receive this field, I need to fetch the data which are created by this value( getting from CREATED_BY) else I need to fetch all the data.\nI have tried below and it is not working\nsrQuery = createdBy?{“CREATED_BY\":\"[email protected]”}:“”;\nmycollection.find({“ERROR_FLAG”: “E”, srQuery}).lean().sort({ “createdAt”: -1 }))", "username": "Rajalakshmi_R" }, { "code": "var theVarable = 'john';\ndb.getCollection('Test').find({\n $expr:{\n $or:[\n {$eq:['$name', theVarable]},\n {$eq:['', theVarable]}\n ]\n }\n})\n", "text": "One way is to use an $expr and $or condition to check if it’s either blank or the value you set?So I can set theVariable to either blank or ‘john’ and it’ll return a record, if you pass in ‘bob’ then it’ll fail and you get nothing.", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Need to frame a find query based on a variable
2023-09-06T15:03:33.158Z
Need to frame a find query based on a variable
177
null
[ "data-modeling", "performance" ]
[ { "code": "", "text": "Hi, I am trying to achieve same functionality in my app - to avoid fetching any document twice - and unfortunately in my case I expect much more user activity.Let’s say the user will be served with 50 profiles each day and will also go through all of them, needing to fetch another batch at least on the next day. Simple calculation can estimate that “suggestedUsers” array would grow to 503012 = 18.000 after a year.As I was doing some research, Bloom filter with false positive would be acceptable and I could also hold separate dictionary with dictionary [ filterPosition(Int) : recycleDate(Date) ] for eventual recycling. But here I am not sure how the query would really look like and if there is even a way how to make it perform at scale.Any idea how to address the same topic on a larger scale where suggested profiles count grows to thousands?Thank you!", "username": "Lukas_Smilek" }, { "code": "", "text": "Hi @Lukas_Smilek,Will be happy to brainstorm and help.Can you share more details on the schema and query velocity and patterns?This will help me get up to speed.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "ParseObjectParseUserconst bloomFilter = dummyArray(128000); function dummyArray(N) {\n var a=Array(N),b=0;\n while(b<N) a[b++]=b;\n return a;\n }\n", "text": "Hello Pavel,many thanks for your prompt response. Currently I am trying to implement a back-end solution for this app idea: FelseThere are various separate query types where user search for partners:As the concept should scale up to millions users, otherwise there would not be much results of queries anyway and the concept idea would get lost, I have to implement a skip list that would be robust enough till let’s say 1 million users.I ended up going for Parse Server with MongoDB and idea of bloom filters and am currently investigating feasibility of following:Each user user would have two following documents:struct PrsUser: ParseUser {//: These are required for ParseObject.var objectId: String?\nvar createdAt: Date?\nvar updatedAt: Date?\nvar ACL: ParseACL?\n//: These are required for ParseUser.var username: String?\nvar email: String?\nvar password: String?\nvar authData: [String: [String: String]?]?//TBD custom fields to save on the server\nvar minimumAge: Int? = 20 //min search age\nvar maximumAge: Int? = 30 //max search age\nvar maximumDistance: Int? 
= 20 //distance in km//Inverted bloom filters for the containedIn(field, Int) indexed query option\n//for each 0 in a bloom filter there is a Int marking a position\n//example [0,1,2,3,6,7,8,9] ← here are the bits 4 and 5 used\n//this will have to be a bit array for fresh new user [0, 1, 2, …, 32000] or even 64000 if possible\nvar datingBloomFilter: [Int]?\nvar tandemBloomFilter: [Int]?\nvar travelBloomFilter: [Int]?\nvar compatriotsBloomFilter: [Int]?}struct PrsProfile: ParseObject {//Those are required for Parse Objectvar objectId: String?\nvar createdAt: Date?\nvar updatedAt: Date?\nvar ACL: ParseACL?//indexed field that are in queries>\nvar age: Int?\nvar gender: Int\nvar bloomFilterHashIndex: Int <— this Indeger is used for a indexed query comparison\n…about 10 various fields not indexed separately, only as combinations in a compound index of each query type//additional information not indexed fields\nvar name: String\n…other profile information}I have build compound indexes for each search type (date, language tandem, travel buddy,…) only in the profile class and these seems than the index size is reasonably small (5MB per 100k profile documents)the query works also well as I investigated and described here on Parse Platform forum.→ totalKeysExamined 302\n→ totalDocsExamined 0\n→ nReturned 0\n→ executionTimeMillis 13so there are 0 documents scanned if there is no hit in indexed results. The issue I am struggling with is the possible array size for each bloomFilter in the User document:As the user himself would be responsible for his bloom filters I can do following:Although the flexible bloom filter size could save some space and query response time, it would need additional arrays of already served ids. These could be stored as a file somewhere, not needed in the document itself (accessed only few times).Unfortunately, when I pass such a large array of “available hash integers” to the query, the query response time gets quite higher → 4sec for 64000 array size when I trigger the query at 10QPS lets say. This does not feel as a scalable solution. Keeping array bellow 32000 helps to reduce response down to 1sec or less, but then the bloom filter gets higher false positive.Advantage of splitting bloom filter for each query type is there… user will have much less “dating” results near him compare to “language tandem” results for english all over the world. So the bigger the searched profile pool is, the less problem it is to have false positive. Example… I would not mind that 50% of profiles are wrongly skipped when there are still 50% of other language tandem partners out there and on the other hand, for the dating partners I will not be able to fill the bloom filter so fast… So there is a high chance there would be a good user experience with bloom filters even of size of 32000bits. Although having it larger is only better.The issue is still maintaining and updating the bloom filter. this has to be updated every time an user is served by new profiles. 
He would go through the hashes/seeds and remove them from “array of available Integers.” This seems to be heavy operation when I save such array in to an document (4+sec response time)So cut the long story short I am currently investigating:If that would be a case I could think of scenario where the bloom filter would be saved as binary data and the cloud function of Parse Server would first read it, translate it into a “array of available integers” and then run the query.as I have no experience in build such complex system (mechanical engineer doing his first free time project) I can imagine that this could lead to less space consumption in the database, but would ultimately not speed up the query (still comparing a huge array with the bloomFilterHashIndex of the profile) and also would lead to high cpu load while deserialise a bloom filter of 32000+ bits (generating huge array each time the function is called). Out of curiosity I generated a dummy array inside cloud function const bloomFilter = dummyArray(128000); with following function, it seems still twice so fast than loading an saved array of 128000 from the User document…Deserialising the raw bit data into a “available integer array” therefore could gain both speed and disc space savings. Even it seem that big portion cannot be speeded up, recomputing of that big array…So one of the biggest doubts I have is: **is this even reasonable to try with MongoDB or should I rather go and learn ElasticSearch from scratch? Because there you can specify “must_not” and the I could feed in the bloom filter values without need of inverting them.Any comments, ideas, critics are welcomed! On the interned I found many people trying to achieve such functionality, but no real solution where it can be done with MongoDB without blowing the server after reaching million of users.Thank you!", "username": "Lukas_Smilek" }, { "code": "enum Bit { case zero, one\n func asInt() -> Int {\n return (self == .one) ? 1 : 0\n }\n }\n \n func generateBloomFilterData(){\n let startDate = Date()\n var randomlyFilledBloomFilter: [Bit] = []\n var bool: Bit = .zero\n for _ in 0...64000 {\n let random = arc4random_uniform(2)\n if random == 0 {\n bool = .zero\n } else {\n bool = .one\n }\n randomlyFilledBloomFilter.append(bool)\n }\n \n let numBytes = 1 + (randomlyFilledBloomFilter.count - 1) / 8\n var bytes = [UInt8](repeating: 0, count: numBytes)\n\n for (index, bit) in randomlyFilledBloomFilter.enumerated() {\n if bit == .one {\n bytes[index / 8] += UInt8(1 << (7 - index % 8))\n }\n }\n \n let data = Data(bytes)\n let base64String = data.base64EncodedString()\n\n--> this is saved directly in to the User document in database\n} \n const buf = new Buffer.from(request.user.get(\"daBlm\"), 'base64');\n \n var blm = Array();\n\n for (var i = 0; i < buf.length; i++) {\n //iterating through 8-bit chunks\n const byte = buf[i].toString(2);\n for (var l = 0; l < byte.length; l++) {\n //push int to array for each bloom 0 value - not used bit\n if (byte[l] == \"0\") {\n blm.push(i*8 + l);\n }\n }\n }\n \n dateQuery.containedIn(\"bf\", blm);\n", "text": "Hello,for anyone facing the same issue as me, I believe I achieved pretty solid solution:as user is responsible for his own bloom filter, he prepares prepares the array of bools, where he hash the UID of the served users into one Integer and based on that “seeds” he updates his bloom filter. That array of bools is then encoded to base64. 
For testing and dummy bloom filter generation in Swift client I used this snippet:That way takes the bloom filter only 5kB instead of previous approx. 300kB. As I am using 4 separate filters on each user, this is huge improvement in the data transfer speed. Also upon firing the query cloud function the user’s document gets loaded much faster.the second step is to decode the bloom filter from base64 to “array of available bits.” As Parse Server uses javaScript in the cloud code I used following part of the query code:To my huge surprise this does not take much computing effort even for a dummy bloom filter with no seeds - empty one - where the loop above has to generate array of 64000 integers. With this is the user able to query only documents that has hashed UID of a value that is not yet present in his bloom filter. After he gets served on his client device with new profile documents, he recalculates his bloom filter and saves it into his User document in MongoDB database. The next query will then take this updated bloom filter, so even he would trigger next query from other device, he would not get served the same profiles again.I believe this is pretty robust and scalable solution. Any comments or ideas?Thank you!", "username": "Lukas_Smilek" }, { "code": "", "text": "Hello Lukas,I am also a newbie MongoDB. As i understood that you are initing the bloom filter from client side and send it to server side to save to MongoDB as a base64 encoded string, isn’t it? Can you help to answer some questions?Thank you so much!", "username": "John_Vi" }, { "code": "", "text": "Mongo Support BloomFIlter ? ?", "username": "Yuvraj_Verma" } ]
Scaling a data model using bloom filters
2021-04-13T10:57:28.576Z
Scaling a data model using bloom filters
6,483
null
[ "golang", "field-encryption" ]
[ { "code": "bullseye#16 0.425 deb https://libmongocrypt.s3.amazonaws.com/apt/debian bullseye/libmongocrypt/1.8 main\n#16 0.501 Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]\n#16 0.582 Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]\n#16 0.604 Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]\n#16 0.624 Ign:4 https://libmongocrypt.s3.amazonaws.com/apt/debian bullseye/libmongocrypt/1.8 InRelease\n#16 0.718 Get:5 http://deb.debian.org/debian bullseye/main arm64 Packages [8071 kB]\n#16 0.780 Get:6 https://libmongocrypt.s3.amazonaws.com/apt/debian bullseye/libmongocrypt/1.8 Release [1142 B]\n#16 0.838 Get:7 https://libmongocrypt.s3.amazonaws.com/apt/debian bullseye/libmongocrypt/1.8 Release.gpg [866 B]\n#16 5.579 Get:8 http://deb.debian.org/debian-security bullseye-security/main arm64 Packages [241 kB]\n#16 5.707 Get:9 http://deb.debian.org/debian bullseye-updates/main arm64 Packages [14.9 kB]\n#16 6.117 Fetched 8537 kB in 6s (1507 kB/s)\n#16 6.117 Reading package lists...\n#16 6.394 Reading package lists...\n#16 6.659 Building dependency tree...\n#16 6.734 Reading state information...\n#16 6.784 E: Unable to locate package libmongocrypt\n", "text": "Looking for some assistance installing libmongocrypt to take advantage of Client Side Field Level Encryption. We use the mongo-go-driver and it is my understanding that this doesn’t come packaged with libmongocrypt, so we are required to install this separately.Our version of Debian is bullseye.I followed the install steps in the docs but I’m running into issues (see logs below).Is anyone able to assist me? Thanks", "username": "Kevin_Rathgeber" }, { "code": "", "text": "Hi Kevin,Thank you for posting. This thread shows an alternative method for installing libmongocrypt that may be helpful and solved the issue for another user.Thanks,Cynthia", "username": "Cynthia_Braund" }, { "code": "libmongocrypt-devlibmongocryptsudo apt-get install -y libmongocrypt-dev", "text": "@Kevin_Rathgeber the package name on Debian is libmongocrypt-dev (not libmongocrypt). 
Try using sudo apt-get install -y libmongocrypt-dev.The documentation will be updated to fix the package name.", "username": "Kevin_Albertson" }, { "code": "#17 102.5 # go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:296:16: could not determine kind of name for C.mongocrypt_crypt_shared_lib_version\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:305:20: could not determine kind of name for C.mongocrypt_crypt_shared_lib_version_string\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:169:11: could not determine kind of name for C.mongocrypt_ctx_rewrap_many_datakey_init\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:263:12: could not determine kind of name for C.mongocrypt_ctx_setopt_contention_factor\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:159:11: could not determine kind of name for C.mongocrypt_ctx_setopt_key_material\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:257:12: could not determine kind of name for C.mongocrypt_ctx_setopt_query_type\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:72:3: could not determine kind of name for C.mongocrypt_setopt_append_crypt_shared_lib_search_path\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:64:3: could not determine kind of name for C.mongocrypt_setopt_bypass_query_analysis\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:404:11: could not determine kind of name for C.mongocrypt_setopt_encrypted_field_config_map\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:77:4: could not determine kind of name for C.mongocrypt_setopt_set_crypt_shared_lib_path_override\n#17 102.5 /root/go/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/mongocrypt/mongocrypt.go:81:2: could not determine kind of name for C.mongocrypt_setopt_use_need_kms_credentials_state\n", "text": "@Kevin_Albertson That seems to have solved the issue but now I’m getting the following error when building:", "username": "Kevin_Rathgeber" }, { "code": "pkg-config$ pkg-config --modversion libmongocrypt\n1.8.2\nlibmongocryptlibmongocrypt", "text": "That error suggests the version of libmongocrypt is older than is supported by the Go driver.I expect Go driver 1.11.7 requires libmongocrypt 1.5.2 or higher.Try confirming the version of libmongocrypt with pkg-config:If the version of libmongocrypt shows as older, it may be due to another conflicting install of libmongocrypt. That may have happened if libmongocrypt was installed through the main package repository. Debian 11 packages libmongocrypt 1.1.0", "username": "Kevin_Albertson" }, { "code": "root@68b18b5afa62:/# pkg-config --modversion libmongocrypt\n1.1.0\nroot@9b981271015b:/# pkg-config --modversion libmongocrypt\n1.8.2\n", "text": "@Kevin_Albertson Yep, it looks like it installed version 1.1.0What’s the best way to get a newer version? For install, I followed these instructions you linked on another thread. Thanks!Edit: I think I figured out my issue. 
I’m running this docker build locally to test but I’m on arm64 so it’s not finding the right version (since it’s amd64) and just installing the default shipped with bullseye. When I spun up a docker container with linux/amd64, it appears to have installed the correct version.Edit 2:\nSo far so good, i was able to build this now that I’m on the right platform. Thanks for all the help @Kevin_Albertson and @Cynthia_Braund!", "username": "Kevin_Rathgeber" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to install libmongocrypt for Debian
2023-09-05T17:31:06.118Z
Unable to install libmongocrypt for Debian
477
null
[ "aggregation", "mongodb-shell", "spark-connector", "scala" ]
[ { "code": "val pipeline = \"\"\"[{$unionWith: {coll: \"course\", pipeline: [{$project: { _id: 0, courseTitle: 1, teacher: 1}}]}}]\"\"\"\n val spark = SparkSession.builder\n .appName(\"Data Aggregation\")\n .master(\"local[*]\")\n .config(\"spark.mongodb.input.uri\", \"mongodb://localhost:27017\")\n .config(\"spark.mongodb.input.database\", \"school\")\n .config(\"spark.mongodb.input.collection\", \"teacher\")\n .config(\"partitioner\",\"com.mongodb.spark.sql.connector.read.partitioner.SamplePartitioner\")\n .config(\"pipeline\", pipeline)\n .getOrCreate()\nval data = MongoSpark.load(spark)\n", "text": "Hello,I’m trying to aggregate data from several collections into one dataframe through the union step in Spark-Scala.\nI have a database named school and two collections: teacher and course, and I’d like to combine (concatenate) the data from these two collections into a single dataframe.\nSo, I create the pipeline, and add it in the configuration process, but it seem didn’t work.Here is my code:Data loading:My configuration:PS: I have try this aggregation with mongosh and it work.Thanks a lot.", "username": "Falcon_MS" }, { "code": "", "text": "Can you share the error that you are getting with this?", "username": "Prakul_Agarwal" }, { "code": "[\n {\n _id: 1,\n name: 'Elena Gilbert',\n email: '[email protected]',\n teacher: 'Harry'\n },\n {\n _id: 2,\n name: 'Alaric Steven',\n email: '[email protected]',\n teacher: 'Harry'\n }\n]\n[\n { _id: 1, c_id: 1, courseTitle: 'Python', teacher: 'Harry' },\n { _id: 2, c_id: 2, courseTitle: 'Java', teacher: 'Harry' }\n]\n[\n {\n _id: 1,\n name: 'Elena Gilbert',\n email: '[email protected]',\n teacher: 'Harry'\n },\n {\n _id: 2,\n name: 'Alaric Steven',\n email: '[email protected]',\n teacher: 'Harry'\n },\n { courseTitle: 'Python', teacher: 'Harry', courseId: 1 },\n { courseTitle: 'Java', teacher: 'Harry', courseId: 2 }\n]\ndb.student.aggregate([ { $unionWith: { coll: \"course\", pipeline: [ { $project: { _id: 0, courseId: \"$c_id\", courseTitle: 1, teacher: 1 } }] } }] )\n", "text": "Hi Prakul,\nThank you for your reply.The code contains no errors.\nThe problem I’m raising is this: logically, with the pipeline configuration, I should get the union of my teacher and course collection in the data variable. But, it’s just the teacher data collection that’s loaded (which is defined in the spark session), which means it hasn’t taken the pipeline request into account.Here is my two collections data:What I expect from the union aggregation pipeline is this:And I am able to get this result with mongosh from the console with this command:But not with spark data loader.", "username": "Falcon_MS" } ]
Aggregate multiple collections with Spark
2023-09-04T09:43:34.550Z
Aggregate multiple collections with Spark
368
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hello everyone,\nI have been trying to open the MongoDB lab terminal since yesterday morning, but I keep encountering this error: “mongodb-15778-mlcjw6kmpn3j.env.play.instruqt.com took too long to respond.” It is possible that the MongoDB team is already aware of this issue.", "username": "Adil_Imran" }, { "code": "", "text": "Hi Adil,Welcome to the forums! Apologies that you encountered this issue. Do you mind submitting a support ticket to [email protected]? Be sure to include the error code you’re getting. Thank you!", "username": "Aiyana_McConnell" } ]
Lab terminal is not working
2023-09-06T13:05:23.123Z
Lab terminal is not working
360
https://www.mongodb.com/…_2_1024x575.jpeg
[ "compass" ]
[ { "code": "", "text": "Hi,I have installed mongodb compass application, In which could not establish a connection. Also mongodb server status returns the failed status. I am using ubuntu 22 64 bit version. Please help me to fix this. I have shared my issue screenshot below.\nscreen1366×768 163 KB\n", "username": "Tamilselvi_S" }, { "code": "", "text": "ILL means illegal instructions\nCheck whether the mongodb version you are installing is supported by your OS or not\nAlso check CPU microarchitecture reqmnts from compatibility matrix\nYou can check our forum threads for similiar issues faced by others", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I installed mongodb version 4.4.8 this only works for me. I tried below commands, which works but I could not install latest versions like 5, 6.When installing for V5 or 6, I got below error message.Illegal instruction (core dumped)There is any option to use V5 or V6. Please verify and let me know.wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -sudo apt-get updatesudo apt-get install mongodb-org=4.4.8 mongodb-org-server=4.4.8 mongodb-org-shell=4.4.8 mongodb-org-mongos=4.4.8 mongodb-org-tools=4.4.8", "username": "Tamilselvi_S" } ]
Could not start mongodb in ubuntu
2023-09-06T06:53:47.058Z
Could not start mongodb in ubuntu
357
null
[ "dot-net", "polymorphic-pattern" ]
[ { "code": "public class BaseClass\n{\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string Id { get; set; }\n\tpublic string Title { get; set; }\n}\n\npublic class Child1 : BaseClass\n{\n\tpublic string Description { get; set; }\n\tpublic int Value { get; set; }\n}\n\npublic class Child2 : BaseClass\n{\n\tpublic string Creator { get; set; }\n\tpublic int Amount { get; set; }\n\tpublic int NumberOfUsers { get; set; } \n}\n\npublic class MyCustomSerializer : IBsonSerializer<BaseClass>, IBsonDocumentSerializer\n{\n\tpublic Type ValueType => typeof(BaseClass);\n\n\tprivate readonly IBsonDocumentSerializer _baseSerializer = new BsonClassMapSerializer<BaseClass>(BsonClassMap.LookupClassMap(typeof(BaseClass)));\n\n\tpublic BaseClass Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n\t{\n\t\t// Deserialize to the relevant child class\n\t}\n\n\tpublic void Serialize(BsonSerializationContext context, BsonSerializationArgs args, BaseClass value)\n\t{\n // This doesn't give a value to _id field so the document is inserted with _id: null\n\t\t_baseSerializer.Serialize(context, args, value);\n\t}\n\n\tpublic void Serialize(BsonSerializationContext context, BsonSerializationArgs args, object value)\n\t{\n\t\tSerialize(context, args, (BaseInteractionInstance)value);\n\t}\n\n\tpublic bool TryGetMemberSerializationInfo(string memberName, out BsonSerializationInfo serializationInfo)\n\t{\n\t\treturn _baseSerializer.TryGetMemberSerializationInfo(memberName, out serializationInfo);\n\t}\n\n\tobject IBsonSerializer.Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n\t{\n\t\treturn Deserialize(context, args);\n\t}\n}\n", "text": "Hi all,\nI’m using the C# driver and needed to write a custom serializer for a collection that contains different document types (all children of the same base type)\nIn the deserialize method I figure out the document type and deserialize to the relevant class\nIn the serialize method I wanted to use the driver’s default behavior since I didn’t need to do anything special, so I created an instance of a BsonClassMapSerializer and used its serialize method\nThe problem is that when I do this it doesn’t generate a value for the _id property\nSo the question is do I need to do it myself because I’m using a custom serializer, or am I just doing something wrong here with how I’m serializing?\nI thought that using the ClassMapSerializer will take care of this for meSome code example for better understanding:", "username": "Dor_Ben-Senior" }, { "code": "if (string.IsNullOrEmpty(value.Id))\n {\n value.Id = ObjectId.GenerateNewId().ToString();\n }\n", "text": "For now I fixed it by adding this in the Serialize method:but leaving this thread open because I do want to know if I’m using the base serializer wrong or if this is the correct way to do it", "username": "Dor_Ben-Senior" } ]
C# - Custom serializer not giving value to _id
2023-09-06T11:52:33.610Z
C# - Custom serializer not giving value to _id
322
null
[ "react-native", "react-js", "typescript" ]
[ { "code": "", "text": "Hi there,Congrats on getting Realm JS version 12 out of the door!I can see there is a commit in the Babel Plugin repo that adds support for v12. Would it be please possible to release that? We want to try the new version to see if it fixes some problems we are having, but also anybody trying Realm JS with babel will stumble upon this now.Thanks!", "username": "Jakub_Duras" }, { "code": "", "text": "Any chance of getting the new version?People trying Realm must be stumbling upon this, and it’s likely pretty hard for them to understand those resulting errors.Thanks", "username": "Jakub_Duras" } ]
Publish a new version of Babel plugin for v12 compatibility
2023-08-21T23:04:46.186Z
Publish a new version of Babel plugin for v12 compatibility
498
https://www.mongodb.com/…9a467e83cafb.png
[ "compass", "atlas", "configuration" ]
[ { "code": "rs.conf()rs.reconfig(conf)", "text": "Hi all, hope I used the right category I’m trying to add tags to my cluster’s replica sets in order to customize the read preference\nI followed this guide: Configure Replica Set Tag Sets — MongoDB Manual\nI can get the config using the rs.conf() command (initially I had an issue with this as well because my user wasn’t an admin)\nBut then after changing the tags it fails when trying to run rs.reconfig(conf) to save the new config\nIf I try to run this command from my windows terminal I just get this error over and over:Reconfig did not succeed yet, starting new attempt…If I try the same from the shell in the Compass app I get the actual authorization error:MongoServerError: not authorized on admin to execute command { replSetReconfig: { … }, lsid: { id: UUID(“05c1654a-0f8b-4a05-a329-ee96075b878f”) }, $clusterTime: { clusterTime: Timestamp(1687437929, 1), signature: { hash: BinData(0, 96D7D9BB520E42DBA42D75D214C2BB4B3FD670DC), keyId: 7222560349487104006 } }, $db: “admin” }I tried playing around with the user roles and tried assigning any role that seemed relevant, but it didn’t help. These are the roles currently defined on the user:\nAlso probably relevant to mention that I’m using the M10 cluster tierAny help will be appreciated ", "username": "Dor_Ben-Senior" }, { "code": "", "text": "Also probably relevant to mention that I’m using the M10 cluster tierThe correct roles to make these changes are “clusterAdmin” and “replSetManager”.As per my understanding, these roles are available in M20 and above.", "username": "Anuj_Garg" }, { "code": "", "text": "This is an unsupported command in Atlas. If you can expand on what you are trying to achieve we may be able to provide a more appropriate answer.Likely what you are looking for is provided by Pre-defined Replica Set Tags this does require a dedicated tier (M10+)", "username": "chris" }, { "code": "", "text": "This is an unsupported command in Atlas. If you can expand on what you are trying to achieve we may be able to provide a more appropriate answer.Likely what you are looking for is provided by Pre-defined Replica Set Tags this does require a dedicated tier (M10+)I want to be able to spread my read requests across the 3 replicas I have (primary + 2 secondaries) using a customized read perference\nBecause by using the PrimaryPreferred preference it only uses the primary and never gets to the secondaries, and by using SecondaryPreferred it never uses the primary\nMy idea in general was to assign a number for each replica (0-2), and from my code rotate which tag is preferred for each call - first read prefer #0, second read prefer #1, etc. (the actual logic may be a little more complex and will actually prefer secondaries over the primary, but that’s the general idea)\nThe pre-defined tags don’t help me because they are exactly the same for all the replicas, so I need to be able to add a custom tag on each replica", "username": "Dor_Ben-Senior" }, { "code": "", "text": "For anyone interested - I was able to accomplish what I need by randomizing the read preference between PrimaryPreferred and SecondaryPreferred when I make calls, I gave the secondaries a 67% chance of being chosen and so I got a pretty even distirbution of the queries across the replicas", "username": "Dor_Ben-Senior" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Authorization error when trying to change replica sets config
2023-06-22T14:09:20.988Z
Authorization error when trying to change replica sets config
836
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "export default async function search(req, res) {\n try {\n const payload = await collection.aggregate(pipeline).toArray()\n return res.status(200).send(payload)\n } catch ({ message, stack }) {\n return res.status(400).send({ message, stack })\n }\n}\n", "text": "My front-end app makes requests to my Express.js app. The Express app uses the MongoDB Node.js driver to make queries to my (Atlas) MongoDB. When the front-end app cancels a request (e.g. AbortController, CancelToken) how do I make the Express endpoint kill the corresponding MongoDB query?Relevant endpoint code:", "username": "Mask_Hoski" }, { "code": "", "text": "Hi @Tarun_Gaur,I had not found a solution so am grateful for your reply.The suggestion you describe is not automatable, if I understand it correctly. Users of my web app cancel many requests they send to my Express app. To implement your approach, I would have to manually enter commands into a Mongo shell. That won’t work for me, because I need this to happen in real time – in the Express app.Scenario: As the user pans/zooms a map, the web app requests new data from the Express app. Users often pan/zoom multiple times per second. Some queries can take several seconds for Mongo to process. I want to cancel/abort all of the orphan Mongo queries to conserve resources.This seems like a fairly typical use case. Is the solution really as you suggest? That is, in a separate process from the running Express app manually open a Mongo shell and run commands to search for and abort these wasteful queries?Thank you for sharing your expertise.", "username": "Mask_Hoski" }, { "code": " const res = await client.db('admin').command({\n killOp: 1,\n op: 1234 //operation id to kill here\n });\n const currentop = await client.db('admin').aggregate([\n {'$currentOp': {}}\n ]).toArray()\n console.log(currentop)\n", "text": "Hi @Mask_HoskiYou can use the killOp command to kill an existing operation. In node, it will look something like:See Run a Command for more examples on running a command using the node driver.However to know which operation to kill, you’ll need to know it’s operation id first You can use the $currentOp aggregation stage to do this. For example:You probably need to wrap these two command/aggregation in separate endpoints in your app.Hope this helps!Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hey, did you ever find a solution to this? I’m experiencing the same issue but cannot find any documentation how to abort a query.", "username": "Elliot_Wilkinson" } ]
How do I cancel a running query?
2022-11-09T17:58:11.330Z
How do I cancel a running query?
2,508
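Tying the replies in the thread above together: to know which server-side operation belongs to a cancelled HTTP request, one option is to tag the query with the driver's comment option and then look it up via $currentOp before calling killOp. A rough sketch, assuming a per-request requestId generated by the application (illustrative only); running $currentOp and killOp also requires the appropriate privileges, which may be restricted on shared Atlas tiers.

    // tag the long-running query with a per-request identifier
    const cursor = collection.aggregate(pipeline, { comment: requestId });

    // elsewhere (e.g. when the client aborts), find and kill that operation
    async function killByComment(client, requestId) {
      const ops = await client.db("admin")
        .aggregate([{ $currentOp: {} }, { $match: { "command.comment": requestId } }])
        .toArray();
      for (const op of ops) {
        await client.db("admin").command({ killOp: 1, op: op.opid });
      }
    }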
https://www.mongodb.com/…7_2_1024x472.png
[ "queries" ]
[ { "code": "", "text": "I’m getting an error I’m not able to get my database anymore because of this collections error.\nimage1925×888 46.1 KB\n", "username": "John_Rodney_Bargayo" }, { "code": "", "text": "Hi John,Please contact the Atlas support team via the in-app chat to investigate regarding this error. You can additionally raise a support case if you have a support subscription. The team would have more insight into your Atlas account / cluster. Please provide them with the project and cluster name in question.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Okay, thanks for the information. The issue is now resolved.", "username": "John_Rodney_Bargayo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[ERROR]An error occurred while querying your MongoDB deployment. Please try again in a few minutes
2023-09-05T20:36:48.052Z
[ERROR]An error occurred while querying your MongoDB deployment. Please try again in a few minutes
249
https://www.mongodb.com/…9cbfa0a0d17.jpeg
[ "queries", "data-modeling" ]
[ { "code": "", "text": "Hi there,\nso I could show you better how it looks like the objects within an Array:\n{\n_id: ObjectID(‘63fdd955213f362bcfd809e2’),\nDescription: ‘String’,\nTitle: ‘String’,\nDirector: { Bio: ’ String’, Name: ‘String’, Birthdate: ’ String’, Deathdate: ’ String’ },\nGenre: {Name: ‘String’ , Description: 'String ‘},\nActors: “String”,\nImageURL: ’ String’\n}\nThe issue I encounter is that one movie does not render the Director object, somehow the other movies have not issue at all but I am not understanding why. The information will be rendered with the exception of that object.\n\ncomparing-director-obj1000×735 80.9 KB\n\nmovie-obj-director-not-showing678×739 202 KB\nIs there anything I am not seing, has anyone faced something similar in the past?", "username": "Hermann_Rasch" }, { "code": "", "text": "Is this from a lab or sample application? Where is the code that populates the UI, can you share the documents for one that displays properly and one that does not?", "username": "John_Sewell" }, { "code": "// Movie card component\n function MovieCard ({ movies, movie, user, updateUser }) {\n const [inFavoriteMovies, setInFavoriteMovies] = useState(user && user.FavoriteMovies.includes(movie._id));\n const token = window.localStorage.getItem(\"token\");\n // add Fav Movie function\n const addFavoriteMovie = () => {\n fetch(`https://movies-couch-api.vercel.app/users/${user.Username}/favMovies/${movie._id}`, {\n method: \"POST\",\n headers: {Authorization: `Bearer ${token}`}\n })\n .then(response => {\n if (response.ok) {\n return response.json();\n } else {\n alert(\"Failed adding the Movie to Favorite Movies\");\n return false;\n }\n })\n .then(user => {\n if(user) {\n alert(\"Movie added to Favorite Movies\");\n setInFavoriteMovies(true);\n updateUser(user);\n }\n })\n .catch(e => {\n alert(e);\n console.log(e);\n });\n } \n // Remove-favMovies\n const removeFavoriteMovie = () => {\n fetch(`https://movies-couch-api.vercel.app/users/${user.Username}/favMovies/${movie._id}`, {\n method: \"DELETE\",\n headers: {Authorization: `Bearer ${token}`} \n })\n .then(response => {\n if (response.ok) {\n return response.json();\n } else {\n alert(\"Failed removing movie from Favorite list\");\n return false;\n }\n })\n .then(user => {\n if(user) {\n alert(\"Movie deleted from Favorite Movies\");\n setInFavoriteMovies(false);\n updateUser(user);\n }\n })\n .catch(e => {\n console.log(e);\n alert(e);\n });\n }\nreturn (\n <Card className=\"movie-card\" style={{ width:\"18rem\"}}>\n <Card.Img variant=\"top\" src={movie.ImageURL} alt=\"movie-poster\"/> \n <Card.Body className=\"movie-card-body\">\n <Card.Title>{movie.Title}</Card.Title>\n <Card.Text>\n <br /><br />\n {movie.Director.Name} \n <br /><br />\n {movie.Genre.Name} \n </Card.Text>\n <br />\n <Link to={`/movies/${encodeURIComponent(movie._id)}`}>\n <Button className=\"movie-card-button\" variant=\"outline-warning\">Open</Button>\n <br/> <br/>\n {inFavoriteMovies ? 
<Button onClick={(e) => {\n e.preventDefault();\n removeFavoriteMovie(movie._id);\n }} \n className=\"movie-card-button\" variant=\"outline-warning\"\n >Remove from Favorite Movies</Button> :\n <Button onClick={(e) => {\n e.preventDefault();\n console.log(movie._id); \n addFavoriteMovie(movie._id);\n }} \n className=\"movie-card-button\" variant=\"outline-warning\"\n >Add to Favorite Movies</Button>\n }\n </Link>\n </Card.Body>\n </Card>\n );\nthis does not render the director object\n{\n_id: ObjectID(\"6378afed9eda6047940371ec\"),\nDescription: \"The Lord of the Rings: The Fellowship of the Ring is a 2001 epic fantasy adventure film directed by Peter Jackson from a screenplay by Fran Walsh, Philippa Boyens, and Jackson,\tbased on 1954s The Fellowship of the Ring, the first volume of the novel The Lord of the Rings by J. R. R. Tolkien. The film is the first installment in The Lord of the Rings trilogy. It features an ensemble cast including Elijah Wood, Ian McKellen, Liv Tyler, Viggo Mortensen, Sean Astin, Cate Blanchett, John Rhys-Davies, Billy Boyd,\tDominic Monaghan, Orlando Bloom, Christopher Lee, Hugo Weaving, Sean Bean, Ian Holm, and Andy Serkis. Set in Middle-earth, the story tells of the Dark Lord Sauron, who seeks the One Ring, which contains part of his might, to return to power. The Ring has found its way to the young hobbit Frodo Baggins. The fate of Middle-earth hangs in the balance as Frodo and eight companions (who form the Fellowship of the Ring) begin their journey to Mount Doom in the land of Mordor, the only place where the Ring can be destroyed.\",\nTitle: \"The Lord of the Rings: The Fellowship of the Ring\",\nDirector: { \nBio: ’\"Peter Jackson is a New Zealand film director, screenwriter and producer. He is best known as the director, writer and producer of the Lord of the Rings trilogy (2001–2003) and the Hobbit trilogy (2012–2014), both of which are adapted from the novels of the same name by J. R. R. Tolkien.\", \nName: \"Peter Jackson\", \nBirthdate: \"31-10-1961\" , \nDeathdate: \"not available\" },\nGenre: {\nName: \"epic fantasy \" ,\n Description: \"Epic fantasy is a subgenre of fantasy defined by the epic nature of its setting or by the epic stature of its characters, themes, or plot. The term 'high fantasy' was coined by Lloyd Alexander in a 1971 essay, 'High Fantasy and Heroic Romance', which was originally given at the New England Round Table of Children's Librarians in October 1969.\"},\nImageURL: \"https://www.themoviedb.org/t/p/w600_and_h900_bestv2/6oom5QYQ2yQTMJIbnvbkBL9cHo6.jpg\"\n}\nthis object does render perfect-\n{\n_id: ObjectID(\"6378b0b2a51c9e2ddc6f79bd\"),\nDescription: \"The Lord of the Rings: The Return of the King is a 2003 epic fantasy adventure film directed by Peter Jackson from a screenplay by Fran Walsh, Philippa Boyens, and Jackson, based on 1955s The Return of the King, the third volume of the novel The Lord of the Rings by J. R. R. Tolkien. Continuing the plot of the previous film, Frodo, Sam and Gollum are making their final way toward Mount Doom in Mordor in order to destroy the One Ring, unaware of Gollums true intentions, while Merry, Pippin, Gandalf, Aragorn, Legolas, Gimli and the rest are joining forces together against Sauron and his legions in Minas Tirith. The Return of the King was financed and distributed by American studio New Line Cinema, but filmed and edited entirely in Jacksons native New Zealand, concurrently with the other two parts of the trilogy. 
It premiered on 1 December 2003 at the Embassy Theatre in Wellington and was theatrically released\ton 17 December 2003 in the United States, and on 18 December 2003 in New Zealand.\",\nTitle: \"The Lord of the Rings: The Return of the King\",\nDirector: { \nBio: ’\"Peter Jackson is a New Zealand film director, screenwriter and producer. He is best known as the director, writer and producer of the Lord of the Rings trilogy (2001–2003) and the Hobbit trilogy (2012–2014), both of which are adapted from the novels of the same name by J. R. R. Tolkien.\", \nName: \"Peter Jackson\", \nBirthdate: \"31-10-1961\" , \nDeathdate: \"not available\" },\nGenre: {\nName: \"epic fantasy \" ,\n Description: \"Epic fantasy is a subgenre of fantasy defined by the epic nature of its setting or by the epic stature of its characters, themes, or plot. The term 'high fantasy' was coined by Lloyd Alexander in a 1971 essay, 'High Fantasy and Heroic Romance', which was originally given at the New England Round Table of Children's Librarians in October 1969.\"},\nImageURL: \"https://www.themoviedb.org/t/p/w600_and_h900_bestv2/rCzpDGLbOoPwLjy3OAm5NUPOTrC.jpg\"\n}\n\n", "text": "So the object shared was just a mock up. This is from an application I built.\nThe code that displays the movie-cards(displayed objects):example of the movies (object)Additionally I do not have actors in the object.", "username": "Hermann_Rasch" }, { "code": "{movie.Director.Name}\n", "text": "Comparing the two objects they both look like they should supply the data so it’ll work:(Image uploading seems broken so I can’t upload a beyond compare comparison, but both seem to have the required properties to show director)And the code from what I can see should be navigating the structure correctly so something is going on.In one case something it not working, can you put some debugging in your app to alert a json stringify of the object that’s being processed, then you can see what the difference is between the two.", "username": "John_Sewell" } ]
My db might have a bug, one object doesn't render info correctly
2023-09-05T15:49:13.608Z
My db might have a bug, one object doesn't render info correctly
342
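For the rendering question in the thread above, the debugging suggested in the last reply amounts to a couple of temporary lines inside MovieCard, to see exactly what the component receives for the failing movie (purely a debugging aid, not part of the original code):

    // log the whole movie object the card was given
    console.log(JSON.stringify(movie, null, 2));
    // and whether the nested Director object survived the fetch/parse step
    console.log("Director:", movie.Director && movie.Director.Name);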
null
[ "aggregation", "queries", "node-js" ]
[ { "code": " {\n \"category\":\"A\",\n \"list\": [\n {\n \"item\": 1,\n \"sub_list\": [ 11 ]\n },\n {\n \"item\": 2,\n \"sub_list\": [13, 43]\n },\n ],\n }\nsub_listcategoryitemdb.collection.update({\n \"category\": \"A\", // \"B\" will create/upsert new document\n \"list.item\": 1 // 2 will add new obj to list\n},\n ???\n{\n upsert: true\n})\n", "text": "I have a collection containing documents as such:How do I add a number to the sub_list given category & item? and if it doesn’t exist then upsert and create", "username": "MRM" }, { "code": "", "text": "Hello @MRM,You cannot directly update/upsert an array within an array using a normal update query in MongoDB. You will need to use the update with an aggregation pipeline to achieve this complex update/upsert operation.For reference, refer to this question,", "username": "turivishal" } ]
Update/upsert array within array using update aggregate?
2023-09-06T07:52:48.173Z
Update/upsert array within array using update aggregate?
279
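A sketch of the pipeline-style update hinted at in the reply above, covering the "append a number to sub_list of item 1 in category A" case. The value 99 is illustrative, and the "create the item or document when missing" branch would need an extra $cond / $concatArrays step on top of this.

    db.collection.updateOne(
      { category: "A" },
      [
        {
          $set: {
            list: {
              $map: {
                input: { $ifNull: ["$list", []] },
                in: {
                  $cond: [
                    { $eq: ["$$this.item", 1] },
                    // append 99 to the matching item's sub_list
                    { $mergeObjects: ["$$this", { sub_list: { $concatArrays: [{ $ifNull: ["$$this.sub_list", []] }, [99]] } }] },
                    "$$this"
                  ]
                }
              }
            }
          }
        }
      ],
      { upsert: true }
    )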
null
[]
[ { "code": "", "text": "Hello, I am needing to upgrade Mongodb (which was installed using apt) from 3.6.8 - ubuntu default - to version 4.0 or higher.In checking the v4.0 manual (Install MongoDB Community Edition on Ubuntu — MongoDB Manual) only 18.04 bionic is supported. Also according to the manual “If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 4.0 using your package manager.”(https://www.mongodb.com/docs/manual/release-notes/4.0-upgrade-replica-set/).If version 3.6 needs to be upgraded to 4.0 (to then be upgraded beyond that as version 4.4 is supported on 20.04) and should be done using the package manager but version 4.0 isn’t supported on 20.04 can someone point me to the procedure I should follow? Thanks.", "username": "JoTa" }, { "code": "", "text": "Any ideas on this? Can anyone share a link or experience on what they did?", "username": "JoTa" }, { "code": "", "text": "Really interested in anyone’s experience relating to this.", "username": "JoTa" }, { "code": "", "text": "I’m currently running into the same issue. We have a Graylog environment running on Ubuntu 20.04, installed it at the time using the official Ubuntu apt-repo, and now it seems we cannot upgrade because we can’t follow the documented upgrade path from MongoDB 3.6.8 → 4.0 → 4.2 → 4.4 → 5.0. Which in turn means we cannot upgrade Graylog to its latest version.Has there been any updates or progress made on this issue? Are Ubuntu 20.04 users stuck now or is there a way to get around this?", "username": "Arian_Huisman" }, { "code": "docker -exec ...sudo docker run -d --name mongodb40 -p 27017:27017 -v /var/lib/mongodb.docker:/data/db -v /etc/mongodb.docker.yml:/etc/mongodb.yml mongo:4.0.28 --config /etc/mongodb.ymlsudo docker exec -it mongodb40 mongo -u <user> -p --authenticationDatabase <auth-db>sudo docker run -d --name mongodb42 -p 27017:27017 -v /var/lib/mongodb.docker:/data/db -v /etc/mongodb.docker.yml:/etc/mongodb.yml mongo:4.2.24 --config /etc/mongodb.yml", "text": "For anyone running into this issue as well: I managed to get around this by using dockerized mongodb’s for versions 4.0 en 4.2. Both of these also come with the accompanying mongo-shell, which you can start with a docker -exec ... command.Steps are roughly:You can then start the mongo-docker container andwhich (in my case) was this:\nsudo docker run -d --name mongodb40 -p 27017:27017 -v /var/lib/mongodb.docker:/data/db -v /etc/mongodb.docker.yml:/etc/mongodb.yml mongo:4.0.28 --config /etc/mongodb.yml\nYou can then connect with the mongo-shell provided with the container:\nsudo docker exec -it mongodb40 mongo -u <user> -p --authenticationDatabase <auth-db>\nwith the appropriate user and authenticationdatabase and execute the necessary command to upgrade the database (i.e. check and set the featureCompatibilityVersion).\nAfter that, shut down the container and repeat with a container for the 4.2 version:\nsudo docker run -d --name mongodb42 -p 27017:27017 -v /var/lib/mongodb.docker:/data/db -v /etc/mongodb.docker.yml:/etc/mongodb.yml mongo:4.2.24 --config /etc/mongodb.yml\nSame trick for the shell, upgrade-command etc.You now have that copied mongodb-datafolder upgraded to version 4.2, and you can thenWhen all is done you can uninstall the docker-engine.Hope this helps anyone else running into the same situation.", "username": "Arian_Huisman" } ]
Upgrading MongoDB from 3.6.8 to 4.0 when installed with Apt - Ubuntu 20.04 focal
2022-04-14T15:20:47.418Z
Upgrading MongoDB from 3.6.8 to 4.0 when installed with Apt - Ubuntu 20.04 focal
2,512
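To make the featureCompatibilityVersion step in the walkthrough above concrete, these are the shell commands run inside each temporary container's mongo shell (setting "4.0" in the 4.0 container, then "4.2" in the 4.2 container) before moving on to the next version:

    // check the current value
    db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
    // raise it to match the binary you just started
    db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })   // then "4.2" in the 4.2 container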
https://www.mongodb.com/…9_2_1024x500.png
[ "dot-net" ]
[ { "code": "", "text": "I cannot pass this lab because there is no bank database\n\nimage1902×929 69 KB\nThat’s why I am stuck in this stage and cannot continue to the next tasks.", "username": "Djakhongir_Kurbanov" }, { "code": "", "text": "Similar to another recent post where you are not in the mongo shell.\nTry running mongosh first to open the mongo shell.", "username": "John_Sewell" }, { "code": "", "text": "Connection refused\n\nimage922×105 4.88 KB\n", "username": "Djakhongir_Kurbanov" }, { "code": "", "text": "Which course / lesson / lab are you on?", "username": "John_Sewell" }, { "code": "mongosh", "text": "Hi,This is happening to me as well.As the thread title says, the lab is “Lab: Creating a Single Field Index in MongoDB”.mongosh return a ECONNREFUSED error making impossible to progress in the labThank you", "username": "Ruben_Rodriguez_Alcalde" }, { "code": "", "text": "I had a play as well and it seems the labs environment was a touch upset and not connecting.", "username": "John_Sewell" }, { "code": "", "text": "Hi All,Can you please check again if you are still getting the same issue?In case you are not getting connected automatically to the Atlas cluster after launching a lab, I would recommend you to send an email to [email protected] with details such as:Happy Learning, cheers!\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi @Tarun_Gaur,It’s working for me now.Thank you to you too @John_Sewell", "username": "Ruben_Rodriguez_Alcalde" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Cannot pass the Lab "Creating a Single Field Index in MongoDB"
2023-09-03T06:05:33.175Z
Cannot pass the Lab "Creating a Single Field Index in MongoDB"
456
null
[ "java", "crud" ]
[ { "code": "BasicDBObject Bson filter1 = and(\n eq(\"platform\", platform),\n eq(\"channel\", channel)\n );\n double percent = 20;\n final Bson update = set(\"percent_view\", new BasicDBObject(\"$multiply\", Arrays.asList(\"$viewtime\", percent)));\n\n collection.updateOne(filter1, update);\n", "text": "I am using MongoDB with Java driver. I am trying to update a field by multiplying another with a number and setting it. But instead of the result of the multiplication, the field is being set by a BasicDBObject. How do I make it so that the result is set?Please give the answer using the Update builders of the Java driver?", "username": "khat33b" }, { "code": "Atlas atlas-b8d6l3-shard-0 [primary] test> db.testMultiply.find()\n[\n {\n _id: ObjectId(\"64f6db5af8128204652e1551\"),\n platform: 'platform',\n channel: 'channel',\n view: 50\n }\n]\npercent_viewimport java.util.Arrays;\nimport org.bson.Document;\nimport com.mongodb.MongoClient;\nimport com.mongodb.MongoClientURI;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.conversions.Bson;\nimport java.util.concurrent.TimeUnit;\nimport org.bson.Document;\nimport com.mongodb.client.AggregateIterable;\n\nMongoClient mongoClient = new MongoClient(\n new MongoClientURI(\n \"\"\n )\n);\nMongoDatabase database = mongoClient.getDatabase(\"test\");\nMongoCollection<Document> collection = database.getCollection(\"testMultiply\");\n\nAggregateIterable<Document> result = collection.aggregate(Arrays.asList(new Document(\"$match\", \n new Document(\"channel\", \"channel\")\n .append(\"platform\", \"platform\")), \n new Document(\"$addFields\", \n new Document(\"percent_view\", \n new Document(\"$multiply\", Arrays.asList(\"$view\", 20L))))));\n", "text": "Hi @khat33b and welcome to MongoDB community forums!!If I understand your question correctly you have the sample document that looks like:and you are trying to update the field value percent_view as view * 20.Please give the answer using the Update builders of the Java driver?Could you please explain why you want to limit the use of Update builder operators?It’s more straightforward to achieve this with the aggregation pipeline. You can check the code below to see how it’s done using the aggregation pipeline:If you prefer to continue using the update builders, one potential solution is to extract the field value at the application end, carry out the mathematical operation, and then use $set to assign the calculated value.Please let us know if you need further help.Regards\nAasawari", "username": "Aasawari" } ]
How to update a field in MongoDB using Java driver with the value of another field in the collection?
2023-08-30T06:58:15.225Z
How to update a field in MongoDB using Java driver with the value of another field in the collection?
473
https://www.mongodb.com/…c_2_1024x605.png
[ "node-js" ]
[ { "code": "", "text": "I’m doing the lab of Lession 1 in MongoDB CRUD Operations: Insert and Find Documents (for Node.js).Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.I keep getting the following error:\n\nimage2394×1416 246 KB\nWhich cluster and database does the auto-grader expects me to use?The lab starts without any MongoDB connection so I used my connection string for myAtlasClusterEDU. And seems like the only database that has an accounts collection is sample_analytics, so that’s the db I used.Does any one know how to resolve this? Thanks!", "username": "Luke_Li" }, { "code": "atlas auth login", "text": "I encountered a similar problem.\ntried with the atlas auth login , and selected the default project to the MDB_EDU project, authorize it from the cli, still the lab couldn’t check fo the data that was inserted,reporting an incorrect solution, despite I have checked both in atlas, and compass inserted the document to the right database,", "username": "WilliamCheong" }, { "code": "", "text": "Same issue here getting error “document couldn’t found in database”.", "username": "Suresh_Pradhana" }, { "code": "", "text": "Hi All,Can you please check again if you are still getting the same issue?In case you are, I would recommend you to send an email to [email protected] with details such as:Happy Learning, cheers!\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi Tarun,The issue has been resolved, and it’s working now. Thanks.", "username": "Suresh_Pradhana" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Unable to pass lab check
2023-09-01T03:20:14.834Z
Unable to pass lab check
528
null
[ "queries" ]
[ { "code": "account_id: 111333,\nlimit: 12000,\nproducts: [\n \"Commodity\",\n \"Brokerage\"\n ],\n\"last_updated\": new Date()\n(' root@mongodb:/app# } bash: syntax error near unexpected token {' root@mongodb:/app# account_id: 111333, bash: account_id:: command not found root@mongodb:/app# limit: 12000, bash: limit:: command not found root@mongodb:/app# products: [ bash: products:: command not found root@mongodb:/app# \"Commodity\", bash: Commodity,: command not found root@mongodb:/app# \"Brokerage\" bash: Brokerage: command not found root@mongodb:/app# ], bash: ],: command not found root@mongodb:/app# \"last_updated\": newDate() bash: syntax error near unexpected token }' root@mongodb:/app# db.accounts.insertOne({account_id:111333,limit:12000,products:[\"Commodity\",\"Brokerage\"],\"last_updated\":newDate()}) bash: syntax error near unexpected token ", "text": "I am using the mongoDb Insert Practice lab, however I get the error as belowroot@mongodb:/app# db.accounts.insertOne(\nbash: syntax error near unexpected token `newline’\nroot@mongodb:/app# {bash: syntax error near unexpected token (' root@mongodb:/app# } bash: syntax error near unexpected token }’\nroot@mongodb:/app# db.accounts.insertOne({\nbash: syntax error near unexpected token {' root@mongodb:/app# account_id: 111333, bash: account_id:: command not found root@mongodb:/app# limit: 12000, bash: limit:: command not found root@mongodb:/app# products: [ bash: products:: command not found root@mongodb:/app# \"Commodity\", bash: Commodity,: command not found root@mongodb:/app# \"Brokerage\" bash: Brokerage: command not found root@mongodb:/app# ], bash: ],: command not found root@mongodb:/app# \"last_updated\": newDate() bash: syntax error near unexpected token (’\nroot@mongodb:/app# })\nbash: syntax error near unexpected token }' root@mongodb:/app# db.accounts.insertOne({account_id:111333,limit:12000,products:[\"Commodity\",\"Brokerage\"],\"last_updated\":newDate()}) bash: syntax error near unexpected token {account_id:111333,limit:12000,products:[“Commodity”,“Brokerage”],“last_updated”:newDate’", "username": "Rajesh_Rajaraman" }, { "code": "File details and existing Atlas cluster match, using stored file details to connect to Atlas.\nmongosh mongodb+srv://catalina-student-crud1-lesson1:********@instruqttest.3xfvk./sample_analytics\nError: querySrv ENOTFOUND _mongodb._tcp.instruqttest.3xfvk.\n./run.sh: line 34: [: -gt: unary operator expected\nRe-running the ./run.sh as a network error occurred.\n", "text": "Same error here, I think they have are a problem setting up the environment, because really we are not inside a mongo shell. If we run “run.sh” file, we see this error:", "username": "Carlos_Rodriguez_Antolin" }, { "code": "", "text": "I also encountered the same error. How do I resolve it and move to the next exercise or lesson?", "username": "Vernon_Tebong_Mbah" }, { "code": "> use sample_analytics", "text": "Hi, I have the same issue. I tried to resolve it by running shell connection string in command line. Also I needed to activate “ALLOW ACCESS FROM ANYWHERE” beforehand.When connected I switched to db sample_analytics:\n> use sample_analytics\nAfter I ran the insert command.\nThe document was inserted and I can find it in DB (Atlas UI) but when I click “Check” I get:\n“Incorrect solution\nThe document were not found in the database. Please try again”.I hope it will be fixed soon.", "username": "Iryna_N_A1" }, { "code": "", "text": "In my case, I don’t have any errors, but I can’t see my data. 
If I run db.accounts.find() it lists all data without the new one. It looks like my data is not being inserted.", "username": "angelomribeiro" }, { "code": "", "text": "Same here, tried connecting via connection string & added the doc that way, however it still says “The document were not found in the database. Please try again” even though I can see the doc in the Atlas UI & Compass", "username": "Mike_Polyakovsky" }, { "code": "", "text": "Same issue here with mongodb-indexes lesson 2 lab. Seems like an issue accross all labs?", "username": "Julian_Andres_Munoz_Montoya" }, { "code": "", "text": "I encountered the same issue, so I tried using a MongoDB shell connection string and attempted to insert a document; however, I still received an error stating ‘document not found in the database.’", "username": "Suresh_Pradhana" }, { "code": "", "text": "Yes I encountered the same error:", "username": "Michael_Bell" }, { "code": "", "text": "It worked fine today; the check passed", "username": "Suresh_Pradhana" }, { "code": "", "text": "Hi All,Can you please check again if you are still getting the same issue?In case you are not getting connected automatically to the Atlas cluster after launching a lab, I would recommend you to send an email to [email protected] with details such as:Happy Learning, cheers!\nTarun", "username": "Tarun_Gaur" } ]
MongoDB Insert Document Practice Lab
2023-09-04T11:31:08.685Z
MongoDB Insert Document Practice Lab
597
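The bash errors in the thread above come from typing database-shell commands at the operating-system prompt: start mongosh against the lab cluster first, then run the insert there. Also note the missing space in the attempted "newDate()" — it must be "new Date()". A sketch with placeholder connection details:

    // run inside mongosh after connecting, e.g. mongosh "mongodb+srv://<cluster-host>/" --username <user>
    use sample_analytics
    db.accounts.insertOne({
      account_id: 111333,
      limit: 12000,
      products: ["Commodity", "Brokerage"],
      last_updated: new Date()
    })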
https://www.mongodb.com/…34549b31d371.png
[]
[ { "code": "", "text": "Hi everyone,\nI was unable to install mongodb on Centos 7. The repository file’s content is listed below.[mongodb-org-6.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-6.0.ascThis problem will appear when I try to install. Please refer below\naaa800×317 5.12 KB\nPlease advise me on how to resolve this issue. Thank you.", "username": "arshraf_jalil" }, { "code": "", "text": "Hi @arshraf_jalil,\nTry It as baseurl:https://repo.mongodb.org/yum/redhat/7/mongodb-org/6.0/x86_64/And then try to reinstallRegards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Looks like your host is only pulling 32bit repos (i386). A 64bit host is required for mongodb.", "username": "chris" }, { "code": "", "text": "You are correct @chris . Thank you very much. I was able to install MongoDB after switching from 32bit to 64bit operating system", "username": "arshraf_jalil" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't install Mongodb on Centos 7
2023-09-05T09:45:25.059Z
Can't install Mongodb on Centos 7
410
null
[ "aggregation", "python" ]
[ { "code": "<correlation-id>", "text": "Hello, I am currently working on a Python tool to move cold data from our production cluster to an S3 bucket using data federation. Like this tutorial MongoDB Atlas Data Federation Tutorial: Federated Queries and $out to AWS S3 I want to delete the cold data from our cluster, after the aggregation pipeline finished.However, it seems that I do not get any indication from PyMongo if the pipeline finished successfully or if an error occured. As per Atlas documentation on the $out stage the error output is only written to the S3 bucket. So the delete could potentially drop data which has not been transfered, which is a huge no-go for our project.I also could not find a way to get the <correlation-id> described in the documentation, so currently it seems that the only option would be to search the S3 bucket for any folder that matches the error folder path described in the documentation.Is it possible to directly get a success/fail message from the aggregation pipeline?", "username": "Hermann_Baumgartl" }, { "code": "", "text": "Hi @Hermann_Baumgartl,I’ll send a DM to you regarding this.Regards,\nJason", "username": "Jason_Tran" } ]
Data federation error handling when moving data to S3
2023-09-05T08:15:34.666Z
Data federation error handling when moving data to S3
311
null
[ "atlas-search" ]
[ { "code": "const threadSchema = new Schema(\n {\n title: String,\n content: String,\n name: String,\n email: String,\n date: { type: Date, required: true },\n company: String,\n comments: [\n {\n type: Schema.Types.ObjectId,\n ref: \"Comments\",\n },\n ],\n }\n);\nconst CommentsSchema = new Schema({\n content: String,\n name: String,\n email: String,\n date: Date,\n});\n", "text": "Hey,So I’ve basically got a schema for a message board type application, where there are threads with referenced comments from another collection in the same database. The schema looks like this:and then comments:Is there a way for me to create a search index that will also populate the info from the comments collection and search across the 2 merged collections?Thanks,\nAndre", "username": "andre_c" }, { "code": "", "text": "Hi @andre_c,Is there a way for me to create a search index that will also populate the info from the comments collection and search across the 2 merged collections?Just to clarify, are you talking about an Atlas Search index here? If so, I am wondering if the How to Run Atlas Search Queries Across Collections documentation suits your use case(s).Additionally, I’m curious about the “populate the info from the comments collection” portion of your statement - Do have an example you can detail of what you’re trying to achieve here?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hey,Thanks for the reply!It is indeed an atlas search index im referring to.\nI’ve tried to get $lookup queries to run, but I can’t get the query path to recognise the comments array for each thread.Perhaps I explained myself poorly. I mean that as comments on each thread are a reference using objectID they might need populating in order to be searched through.I just want the returned results of a search to include any hits from the actual content in the referenced comments on each main thread.Thanks,\nAndre", "username": "andre_c" }, { "code": "\"comments\"\"thread\"$lookup", "text": "Thanks for providing that information and clarification Andre - I think I have a better idea of what you’re trying to achieve now.I just want the returned results of a search to include any hits from the actual content in the referenced comments on each main thread.Could you provide a few more details so that I can see why it may not be recognising the \"comments\" array for each \"thread\":Regards,\nJason", "username": "Jason_Tran" } ]
How to index/search referenced documents from another collection?
2023-09-04T18:05:30.751Z
How to index/search referenced documents from another collection?
328
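A sketch of the cross-collection pattern referenced in the thread above (the "$lookup with $search" approach from the linked documentation), assuming the Mongoose models map to "threads" and "comments" collections, an Atlas Search index named "commentsIndex" on the comments content field, and a MongoDB/Atlas version that allows $search inside a $lookup sub-pipeline; the query term is illustrative only.

    db.threads.aggregate([
      {
        $lookup: {
          from: "comments",
          localField: "comments",   // the array of referenced ObjectIds
          foreignField: "_id",
          as: "matchingComments",
          pipeline: [
            { $search: { index: "commentsIndex", text: { query: "refund", path: "content" } } }
          ]
        }
      },
      // keep only threads where at least one referenced comment matched the search
      { $match: { "matchingComments.0": { $exists: true } } }
    ])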
https://www.mongodb.com/…6_2_1024x189.png
[]
[ { "code": "", "text": "\nmongo-client issue1760×326 28.7 KB\n", "username": "Gaurav_Singh_jethuri" }, { "code": "mongomongosh", "text": "mongo has been replaced by mongosh for some time now. So try that instead.", "username": "chris" } ]
After running mongo container using mongo image, I am not able run mongo client inside container. output: bash: mongo: command not found
2023-09-05T15:10:57.869Z
After running mongo container using mongo image, I am not able run mongo client inside container. output: bash: mongo: command not found
373
null
[ "compass", "atlas-cluster", "golang" ]
[ { "code": "import (\n ...\n\t\"github.com/rgzr/sshtun\"\n\t...\n)\n...\n\tsshTun := sshtun.New(27017, \"EC2 IP\", 27017)\n\tsshTun.SetUser(\"mongoproxy\")\n\tsshTun.SetPassword(\"...\")\n\tsshTun.SetLocalEndpoint(sshtun.NewTCPEndpoint(\"127.0.0.1\", 27017))\n\tsshTun.SetRemoteEndpoint(sshtun.NewTCPEndpoint(\"...mongodb.net\", 27017))\n\n\tsshTun.SetTunneledConnState(func(tun *sshtun.SSHTun, state *sshtun.TunneledConnState) {\n\t\tlog.Printf(\"# TunneledConnState: %+v\", state)\n\t})\n\n\tvar connected atomic.Bool\n\n\t// We set a callback to know when the tunnel is ready\n\tsshTun.SetConnState(func(tun *sshtun.SSHTun, state sshtun.ConnState) {\n\t\tswitch state {\n\t\tcase sshtun.StateStarting:\n\t\t\tlog.Printf(\"STATE is Starting\")\n\t\tcase sshtun.StateStarted:\n\t\t\tconnected.Store(true)\n\t\t\tlog.Printf(\"STATE is Started\")\n\t\tcase sshtun.StateStopped:\n\t\t\tconnected.Store(false)\n\t\t\tlog.Printf(\"STATE is Stopped\")\n\t\t}\n\t})\n\n\tgo func() {\n\t\tfor {\n\t\t\tif err := sshTun.Start(context.Background()); err != nil {\n\t\t\t\tlog.Printf(\"SSH tunnel error: %v\", err)\n\t\t\t\ttime.Sleep(time.Second * 10) \n\t\t\t}\n\t\t}\n\t}()\n\n\tfor !connected.Load() {\n\t\ttime.Sleep(time.Second)\n\t}\n\n ...\nfunc NewMongoClientWithTunneling(cfg *config.Config) MongoClient {\n\tctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\n\tdefer cancel()\n\n\tdataSource := fmt.Sprintf(\n\t\t\"mongodb+srv://%s:%s@%s/%s?retryWrites=true&w=majority\",\n\t\tcfg.Mongo.User,\n\t\tcfg.Mongo.Password,\n\t\t\"127.0.0.1\",\n\t\tcfg.Mongo.DBName,\n\t)\n\n\tc, err := mongo.Connect(ctx,\n\t\toptions.\n\t\t\tClient().\n\t\t\tApplyURI(dataSource).\n\t\t\tSetMinPoolSize(uint64(cfg.Mongo.Options.MinConnections)).\n\t\t\tSetMaxPoolSize(uint64(cfg.Mongo.Options.MaxConnections)).\n\t\t\tSetMaxConnIdleTime(10*time.Minute).\n\t\t\tSetMonitor(apmmongo.CommandMonitor()),\n\t)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t// Check the connection\n\tif err := c.Ping(ctx, nil); err != nil {\n\t\tpanic(fmt.Sprintf(\"Failed to connect to MongoDB: %v\", err))\n\t}\n...\n2023/08/31 18:37:01 error parsing uri: lookup _mongodb._tcp.127.0.0.1 on 168.126.63.1:53: no such host\n", "text": "I’m trying to use atlas mongoDB using ssh tunnel with AWS EC2.\nThe language is Go.I can guarantee that there will be no problems with EC2. It works when accessing with mongoDB compass.However, implementing tunneling-connection directly in Go does not work well.The tunneling code is:The connection code is as follows:When executing the above code, the following error occurs.\nThis is probably because the work done with srv cannot be done on the local host.I’m wondering what I did wrong and how compass handles this.I need your help…The test environment is M2 Pro ventura.", "username": "myyrakle_N_A" }, { "code": "mongodb+srv://_mongodb._tcp.<hostname>mongodb+srv://mongodb://dataSource := fmt.Sprintf(\n\t\"mongodb://%s:%s@%s/%s?retryWrites=true&w=majority\",\n\tcfg.Mongo.User,\n\tcfg.Mongo.Password,\n\t\"127.0.0.1\",\n\tcfg.Mongo.DBName,\n)\n", "text": "@myyrakle_N_A welcome and thanks for the question!When using the mongodb+srv:// URI scheme, the Go driver expects to be able to look up a DNS SRV record named _mongodb._tcp.<hostname>. That won’t work when using an IP address, so using the mongodb+srv:// scheme is generally incompatible with using IP addresses.Does the connection work if you use scheme mongodb:// instead?For example:", "username": "Matt_Dale" } ]
I am trying to access mongodb through ssh tunneling
2023-08-31T09:40:15.557Z
I am trying to access mongodb through ssh tunneling
598
null
[ "compass" ]
[ { "code": "", "text": "Unable to connect to Atlas Cluster via Compass - getting error “read ECONNRESET”.\nConnection was working fine since last 6 months but suddenly started throwing error today. All configuration related to IP access, etc. is correctly in place and I am also able to connect to other clusters.It is only one cluster that is throwing error. Need help urgently.", "username": "Ashish_Kapoor" }, { "code": "", "text": "Hello @Ashish_Kapoor ,Welcome to The MongoDB Community Forums! Can you please confirm a few details for me to have a better understanding of this issue?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi Tarun,PFB the answers inline:MongoDB Compass Version being used\n1.39.3 (latest version on mac)Atlas Cluster tier\nServerlessCheck if the number of connections in Atlas cluster exceeds the cluster tier limit\n0 connections at the momentUse the Url provided in the Atlas connection tab for the particular compass version\nUsing the same URL since last several months. Was working 2 days back but not today. It is same as shown in connection tab in Atlas.Make sure the user you are trying to login with is having required roles.\nUsing the same user since last several months. Was working 2 days back but not today. Even if I use a wrong username and password, I get connection error instead of auth error.I have tried connecting to this cluster via mongodb tools and atlas cli too. Both throw same error.\nI also set up the data API and tried calling it via postman. Getting the same error - “Failed to find documents: FunctionError: error connecting to MongoDB service cluster: failed to ping: connection() error occurred during connection handshake: EOF”.I would mention here again that if I create a new cluster, which uses the same user and network access configuration, it works right away. Only the existing cluster has issues. I think it was recently automatically updated to mongodb version 7 and that has broken it. Can it be downgraded or at least restarted?", "username": "Ashish_Kapoor" }, { "code": "", "text": "I would advise you to bring this up with the Atlas support team or connect with support via the in app chat support available in the Atlas UI. They may be able to check if anything on the Atlas side could have possibly caused this issue. In saying so, if a chat support is raised, please provide them with the following:Regards,\nTarun", "username": "Tarun_Gaur" } ]
Getting error "read ECONNRESET" while trying to connect to Atlas cluster. This issue exists for only 1 cluster, others in the same account are working fine
2023-09-05T14:37:34.233Z
Getting error "read ECONNRESET" while trying to connect to Atlas cluster. This issue exists for only 1 cluster, others in the same account are working fine
826
null
[]
[ { "code": "", "text": "I am using the this query:\ndb.getCollection(“collectionName”).count({created_date:{ $gte:‘07-01-2023’,\n$lte: ‘07-31-2023’}}), but its not returning right count.", "username": "Arif_Iqbal" }, { "code": "", "text": "Hello @Arif_Iqbal ,Welcome to The MongoDB Community Forums! Can you please share below details for me to test the query with respect to your documents?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "As I’m sure others will say as well, storing your dates in an illogical format will mean that doing anything meaningful with that data is non-trivial.Convert them to proper dates, it’ll take up less storage, you’ll be able to query using them and it’ll be quicker.", "username": "John_Sewell" } ]
Get the count of records between two date, where dates value are in string with times
2023-09-05T10:59:49.704Z
Get the count of records between two date, where dates value are in string with times
185
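Until the created_date values in the thread above are stored as real BSON dates (the durable fix suggested in the last reply), the strings can be parsed on the fly and counted in an aggregation. String comparison with $gte/$lte on "MM-DD-YYYY" values is lexical, which is why the original count is wrong. The format string below assumes "MM-DD-YYYY HH:MM:SS" style values and would need adjusting to whatever is actually stored:

    db.getCollection("collectionName").aggregate([
      { $addFields: { _created: { $dateFromString: { dateString: "$created_date", format: "%m-%d-%Y %H:%M:%S", onError: null } } } },
      { $match: { _created: { $gte: ISODate("2023-07-01"), $lte: ISODate("2023-07-31T23:59:59Z") } } },
      { $count: "total" }
    ])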
null
[]
[ { "code": "", "text": "Hi All,Please confirm the steps to restore 4.2 Version backup to 7 version Mongo.\nIs direct restore from 4.2 version work in Version 7.", "username": "Anuraj_T" }, { "code": "", "text": "Hello @Anuraj_T ,The recommended and tested procedure is to upgrade the MongoDB version by upgrading sequentially. Hence, for you the upgrade sequence would be 4.2 → 4.4 → 5.0 → 6.0 → 7.0.Kindly go through below thread to understand more on this.Let me know in case of any queries/issues, would be happy to help you! Cheers!\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Restore backup from Mongo 4.2 to 7
2023-09-05T11:03:51.208Z
Restore backup from Mongo 4.2 to 7
436
null
[ "transactions", "php", "storage" ]
[ { "code": "{\"t\":{\"$date\":\"2023-09-04T06:22:05.513+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.100.0.13:56080\",\"uuid\":{\"uuid\":{\"$uuid\":\"09c2c7ca-63b9-491b-a006-45c57e8a057d\"}},\"connectionId\":375,\"connectionCount\":32}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:05.513+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn375\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.100.0.13:56080\",\"client\":\"conn375\",\"doc\":{\"driver\":{\"name\":\"mongoc / ext-mongodb:PHP / PHPLIB \",\"version\":\"1.24.1 / 1.16.1 / 1.16.0 \"},\"os\":{\"type\":\"Linux\",\"name\":\"Ubuntu\",\"version\":\"22.04\",\"architecture\":\"x86_64\"},\"platform\":\"PHP 8.1.2-1ubuntu2.13 cfg=0x03515620c9 posix=200809 stdc=201710 CC=GCC 11.3.0 CFLAGS=\\\"\\\" LDFLAGS=\\\"\\\"\"}}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:05.514+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":6788700, \"ctx\":\"conn375\",\"msg\":\"Received first command on ingress connection since session start or auth handshake\",\"attr\":{\"elapsedMillis\":0}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:05.569+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn374\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.100.0.13:56078\",\"uuid\":{\"uuid\":{\"$uuid\":\"c309384d-51fe-4dd5-8ca5-755e40098aaf\"}},\"connectionId\":374,\"connectionCount\":31}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:05.837+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.100.0.13:56092\",\"uuid\":{\"uuid\":{\"$uuid\":\"ff59fcb5-221e-4b2c-bc6b-47d0b5fcd98e\"}},\"connectionId\":376,\"connectionCount\":32}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:05.837+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn376\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.100.0.13:56092\",\"client\":\"conn376\",\"doc\":{\"driver\":{\"name\":\"mongoc / ext-mongodb:PHP / PHPLIB \",\"version\":\"1.24.1 / 1.16.1 / 1.16.0 \"},\"os\":{\"type\":\"Linux\",\"name\":\"Ubuntu\",\"version\":\"22.04\",\"architecture\":\"x86_64\"},\"platform\":\"PHP 8.1.2-1ubuntu2.13 cfg=0x03515620c9 posix=200809 stdc=201710 CC=GCC 11.3.0 CFLAGS=\\\"\\\" LDFLAGS=\\\"\\\"\"}}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:05.840+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":6788700, \"ctx\":\"conn376\",\"msg\":\"Received first command on ingress connection since session start or auth handshake\",\"attr\":{\"elapsedMillis\":2}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:05.892+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn376\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.100.0.13:56092\",\"uuid\":{\"uuid\":{\"$uuid\":\"ff59fcb5-221e-4b2c-bc6b-47d0b5fcd98e\"}},\"connectionId\":376,\"connectionCount\":31}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:05.892+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn375\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"10.100.0.13:56080\",\"uuid\":{\"uuid\":{\"$uuid\":\"09c2c7ca-63b9-491b-a006-45c57e8a057d\"}},\"connectionId\":375,\"connectionCount\":30}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:07.788+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn34\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"selfmadelabs.$cmd\",\"command\":{\"update\":\"labstats\",\"ordered\":true,\"lsid\":{\"id\":{\"$uuid\":\"fa5453b3-980c-4e17-a8c6-46ae49513c04\"}},\"$db\":\"selfmadelabs\"},\"numYields\":1372,\"reslen\":60,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":5096}},\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"w\":5096}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":5096}},\"Global\":{\"acquireCount\":{\"w\":5096}},\"Database\":{\"acquireCount\":{\"w\":5096}},\"Collection\":{\"acquireCount\":{\"w\":5096}}},\"flowControl\":{\"acquireCount\":5096},\"storage\":{},\"cpuNanos\":3069647525,\"remote\":\"10.100.0.12:59344\",\"protocol\":\"op_msg\",\"durationMillis\":3115}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:10.145+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"conn27\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":22,\"message\":{\"ts_sec\":1693808530,\"ts_usec\":145725,\"thread\":\"1:0x7f2df5821640\",\"session_name\":\"WT_SESSION.get_value\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__wt_txn_context_prepare_check:19:not permitted in a prepared transaction\",\"error_str\":\"Invalid argument\",\"error_code\":22}}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:10.145+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23083, \"ctx\":\"conn27\",\"msg\":\"Invariant failure\",\"attr\":{\"expr\":\"c->get_value(c, &value)\",\"error\":\"BadValue: 22: Invalid argument\",\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp\",\"line\":1975}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:10.145+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23084, \"ctx\":\"conn27\",\"msg\":\"\\n\\n***aborting after invariant() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2023-09-04T06:22:10.145+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn27\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"\\n\"}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:10.145+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn27\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n{\"t\":{\"$date\":\"2023-09-04T06:22:10.280+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"conn27\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"5633C7E1AFB4\",\"b\":\"5633C03FE000\",\"o\":\"7A1CFB4\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_117getStackTraceImplERKNS1_7OptionsE.constprop.0\",\"C\":\"mongo::stack_trace_detail::(anonymous namespace)::getStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&) [clone .constprop.0]\",\"s+\":\"224\"},{\"a\":\"5633C7E1CC68\",\"b\":\"5633C03FE000\",\"o\":\"7A1EC68\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"C\":\"mongo::printStackTrace()\",\"s+\":\"38\"},{\"a\":\"5633C7E177EA\",\"b\":\"5633C03FE000\",\"o\":\"7A197EA\",\"s\":\"abruptQuit\",\"s+\":\"6A\"},{\"a\":\"7F2E09AA8520\",\"b\":\"7F2E09A66000\",\"o\":\"42520\",\"s\":\"__sigaction\",\"s+\":\"50\"},{\"a\":\"7F2E09AFCA7C\",\"b\":\"7F2E09A66000\",\"o\":\"96A7C\",\"s\":\"pthread_kill\",\"s+\":\"12C\"},{\"a\":\"7F2E09AA8476\",\"b\":\"7F2E09A66000\",\"o\":\"42476\",\"s\":\"raise\",\"s+\":\"16\"},{\"a\":\"7F2E09A8E7F3\",\"b\":\"7F2E09A66000\",\"o\":\"287F3\",\"s\":\"abort\",\"s+\":\"D3\"},{\"a\":\"5633C7E09A47\",\"b\":\"5633C03FE000\",\"o\":\"7A0BA47\",\"s\":\"_ZN5mongo12_GLOBAL__N_19callAbortEv\",\"C\":\"mongo::(anonymous 
namespace)::callAbort()\",\"s+\":\"1B\"},{\"a\":\"5633C7E0AF98\",\"b\":\"5633C03FE000\",\"o\":\"7A0CF98\",\"s\":\"_ZN5mongo17invariantOKFailedEPKcRKNS_6StatusES1_j\",\"C\":\"mongo::invariantOKFailed(char const*, mongo::Status const&, char const*, unsigned int)\",\"s+\":\"279\"},{\"a\":\"5633C4D159D5\",\"b\":\"5633C03FE000\",\"o\":\"49179D5\",\"s\":\"_ZN5mongo31WiredTigerRecordStoreCursorBase4nextEv.cold\",\"C\":\"mongo::WiredTigerRecordStoreCursorBase::next() [clone .cold]\",\"s+\":\"2BB\"},{\"a\":\"5633C5D16C51\",\"b\":\"5633C03FE000\",\"o\":\"5918C51\",\"s\":\"_ZN5mongo3sbe9ScanStage7getNextEv\",\"C\":\"mongo::sbe::ScanStage::getNext()\",\"s+\":\"2D1\"},{\"a\":\"5633C58A81E7\",\"b\":\"5633C03FE000\",\"o\":\"54AA1E7\",\"s\":\"_ZN5mongo3sbe11FilterStageILb0ELb0EE7getNextEv\",\"C\":\"mongo::sbe::FilterStage<false, false>::getNext()\",\"s+\":\"57\"},{\"a\":\"5633C5C23915\",\"b\":\"5633C03FE000\",\"o\":\"5825915\",\"s\":\"_ZN5mongo3sbe14LimitSkipStage7getNextEv\",\"C\":\"mongo::sbe::LimitSkipStage::getNext()\",\"s+\":\"D5\"},{\"a\":\"5633C584E2EC\",\"b\":\"5633C03FE000\",\"o\":\"54502EC\",\"s\":\"_ZN5mongo13fetchNextImplINS_7BSONObjEEENS_3sbe9PlanStateEPNS2_9PlanStageEPNS2_5value12SlotAccessorES8_PT_PNS_8RecordIdEb\",\"C\":\"mongo::sbe::PlanState mongo::fetchNextImpl<mongo::BSONObj>(mongo::sbe::PlanStage*, mongo::sbe::value::SlotAccessor*, mongo::sbe::value::SlotAccessor*, mongo::BSONObj*, mongo::RecordId*, bool)\",\"s+\":\"4C\"},{\"a\":\"5633C584F62C\",\"b\":\"5633C03FE000\",\"o\":\"545162C\",\"s\":\"_ZN5mongo15PlanExecutorSBE11getNextImplINS_7BSONObjEEENS_12PlanExecutor9ExecStateEPT_PNS_8RecordIdE\",\"C\":\"mongo::PlanExecutor::ExecState mongo::PlanExecutorSBE::getNextImpl<mongo::BSONObj>(mongo::BSONObj*, mongo::RecordId*)\",\"s+\":\"38C\"},{\"a\":\"5633C584A98B\",\"b\":\"5633C03FE000\",\"o\":\"544C98B\",\"s\":\"_ZN5mongo15PlanExecutorSBE7getNextEPNS_7BSONObjEPNS_8RecordIdE\",\"C\":\"mongo::PlanExecutorSBE::getNext(mongo::BSONObj*, mongo::RecordId*)\",\"s+\":\"5B\"},{\"a\":\"5633C50322E0\",\"b\":\"5633C03FE000\",\"o\":\"4C342E0\",\"s\":\"_ZN5mongo12_GLOBAL__N_17FindCmd10Invocation3runEPNS_16OperationContextEPNS_3rpc21ReplyBuilderInterfaceE\",\"C\":\"mongo::(anonymous namespace)::FindCmd::Invocation::run(mongo::OperationContext*, mongo::rpc::ReplyBuilderInterface*)\",\"s+\":\"E70\"},{\"a\":\"5633C747D9B0\",\"b\":\"5633C03FE000\",\"o\":\"707F9B0\",\"s\":\"_ZN5mongo14CommandHelpers20runCommandInvocationEPNS_16OperationContextERKNS_12OpMsgRequestEPNS_17CommandInvocationEPNS_3rpc21ReplyBuilderInterfaceE\",\"C\":\"mongo::CommandHelpers::runCommandInvocation(mongo::OperationContext*, mongo::OpMsgRequest const&, mongo::CommandInvocation*, mongo::rpc::ReplyBuilderInterface*)\",\"s+\":\"60\"},{\"a\":\"5633C74818ED\",\"b\":\"5633C03FE000\",\"o\":\"70838ED\",\"s\":\"_ZN5mongo14CommandHelpers20runCommandInvocationESt10shared_ptrINS_23RequestExecutionContextEES1_INS_17CommandInvocationEEb\",\"C\":\"mongo::CommandHelpers::runCommandInvocation(std::shared_ptr<mongo::RequestExecutionContext>, std::shared_ptr<mongo::CommandInvocation>, bool)\",\"s+\":\"CD\"},{\"a\":\"5633C3E8D3F0\",\"b\":\"5633C03FE000\",\"o\":\"3A8F3F0\",\"s\":\"_ZN5mongo12_GLOBAL__N_120runCommandInvocationESt10shared_ptrINS_23RequestExecutionContextEES1_INS_17CommandInvocationEE\",\"C\":\"mongo::(anonymous namespace)::runCommandInvocation(std::shared_ptr<mongo::RequestExecutionContext>, 
std::shared_ptr<mongo::CommandInvocation>)\",\"s+\":\"B0\"},{\"a\":\"5633C3E8F996\",\"b\":\"5633C03FE000\",\"o\":\"3A91996\",\"s\":\"_ZN5mongo12_GLOBAL__N_113InvokeCommand3runEv\",\"C\":\"mongo::(anonymous namespace)::InvokeCommand::run()\",\"s+\":\"236\"},{\"a\":\"5633C3E97311\",\"b\":\"5633C03FE000\",\"o\":\"3A99311\",\"s\":\"_ZN5mongo12_GLOBAL__N_114RunCommandImpl11_runCommandEv\",\"C\":\"mongo::(anonymous namespace)::RunCommandImpl::_runCommand()\",\"s+\":\"2A1\"},{\"a\":\"5633C3E98DA6\",\"b\":\"5633C03FE000\",\"o\":\"3A9ADA6\",\"s\":\"_ZN5mongo12_GLOBAL__N_114RunCommandImpl8_runImplEv\",\"C\":\"mongo::(anonymous namespace)::RunCommandImpl::_runImpl()\",\"s+\":\"96\"},{\"a\":\"5633C3E91736\",\"b\":\"5633C03FE000\",\"o\":\"3A93736\",\"s\":\"_ZN5mongo12_GLOBAL__N_114RunCommandImpl3runEv\",\"C\":\"mongo::(anonymous namespace)::RunCommandImpl::run()\",\"s+\":\"136\"},{\"a\":\"5633C3E9A8F9\",\"b\":\"5633C03FE000\",\"o\":\"3A9C8F9\",\"s\":\"_ZN5mongo12_GLOBAL__N_119ExecCommandDatabase12_commandExecEv\",\"C\":\"mongo::(anonymous namespace)::ExecCommandDatabase::_commandExec()\",\"s+\":\"1E9\"},{\"a\":\"5633C3EA0918\",\"b\":\"5633C03FE000\",\"o\":\"3AA2918\",\"s\":\"_ZN5mongo19makeReadyFutureWithIZNOS_11future_util10AsyncStateINS_12_GLOBAL__N_119ExecCommandDatabaseEE13thenWithStateIZZNS3_14executeCommandESt10shared_ptrINS3_13HandleRequest16ExecutionContextEEENUlvE0_clEvEUlPT_E_EEDaOSC_EUlvE_EENS_6FutureINS_14future_details17UnwrappedTypeImplINSt13invoke_resultISF_JEE4typeEE4typeEEESF_\",\"s+\":\"48\"},{\"a\":\"5633C3EA122C\",\"b\":\"5633C03FE000\",\"o\":\"3AA322C\",\"s\":\"_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplINS1_8FakeVoidEE16makeContinuationIvZZNOS9_4thenINS_19CleanupFuturePolicyILb0EEEZNS_12_GLOBAL__N_114executeCommandESt10shared_ptrINSE_13HandleRequest16ExecutionContextEEEUlvE0_EEDaT_OT0_ENKUlvE1_clEvEUlPNS1_15SharedStateImplIS8_EESQ_E_EENS7_ISK_EESM_EUlS3_E_EEDaOSK_EN12SpecificImpl4callEOS3_\",\"C\":\"mongo::unique_function<void (mongo::future_details::SharedStateBase*)>::makeImpl<mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::makeContinuation<void, mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::then<mongo::CleanupFuturePolicy<false>, mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#2}>(mongo::CleanupFuturePolicy<false>, mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#2}&&) &&::{lambda()#3}::operator()() const::{lambda(mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*, mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*)#1}>(mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#2}&&)::{lambda(mongo::future_details::SharedStateBase*)#1}>(mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::makeContinuation<void, mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::then<mongo::CleanupFuturePolicy<false>, mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#2}>(mongo::CleanupFuturePolicy<false>, mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#2}&&) &&::{lambda()#3}::operator()() 
const::{lambda(mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*, mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*)#1}>(mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#2}&&)::{lambda(mongo::future_details::SharedStateBase*)#1}&&)::SpecificImpl::call(mongo::future_details::SharedStateBase*&&)\",\"s+\":\"1FC\"},{\"a\":\"5633C3E37B27\",\"b\":\"5633C03FE000\",\"o\":\"3A39B27\",\"s\":\"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv\",\"C\":\"mongo::future_details::SharedStateBase::transitionToFinished()\",\"s+\":\"107\"},{\"a\":\"5633C3EABD7C\",\"b\":\"5633C03FE000\",\"o\":\"3AADD7C\",\"s\":\"_ZNO5mongo14future_details10FutureImplINS0_8FakeVoidEE17propagateResultToEPNS0_15SharedStateImplIS2_EE\",\"C\":\"mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::propagateResultTo(mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*) &&\",\"s+\":\"1AC\"},{\"a\":\"5633C3E999A3\",\"b\":\"5633C03FE000\",\"o\":\"3A9B9A3\",\"s\":\"_ZZN5mongo15unique_functionIFvPNS_14future_details15SharedStateBaseEEE8makeImplIZNS1_10FutureImplINS1_8FakeVoidEE16makeContinuationIvZZNOS9_4thenINS_19CleanupFuturePolicyILb0EEEZNS_12_GLOBAL__N_114executeCommandESt10shared_ptrINSE_13HandleRequest16ExecutionContextEEEUlvE_EEDaT_OT0_ENKUlvE1_clEvEUlPNS1_15SharedStateImplIS8_EESQ_E_EENS7_ISK_EESM_EUlS3_E_EEDaOSK_EN12SpecificImpl4callEOS3_\",\"C\":\"mongo::unique_function<void (mongo::future_details::SharedStateBase*)>::makeImpl<mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::makeContinuation<void, mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::then<mongo::CleanupFuturePolicy<false>, mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#1}>(mongo::CleanupFuturePolicy<false>, mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#1}&&) &&::{lambda()#3}::operator()() const::{lambda(mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*, mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*)#1}>(mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#1}&&)::{lambda(mongo::future_details::SharedStateBase*)#1}>(mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::makeContinuation<void, mongo::future_details::FutureImpl<mongo::future_details::FakeVoid>::then<mongo::CleanupFuturePolicy<false>, mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#1}>(mongo::CleanupFuturePolicy<false>, mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)::{lambda()#1}&&) &&::{lambda()#3}::operator()() const::{lambda(mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*, mongo::future_details::SharedStateImpl<mongo::future_details::FakeVoid>*)#1}>(mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous 
namespace)::HandleRequest::ExecutionContext>)::{lambda()#1}&&)::{lambda(mongo::future_details::SharedStateBase*)#1}&&)::SpecificImpl::call(mongo::future_details::SharedStateBase*&&)\",\"s+\":\"93\"},{\"a\":\"5633C3E37B27\",\"b\":\"5633C03FE000\",\"o\":\"3A39B27\",\"s\":\"_ZN5mongo14future_details15SharedStateBase20transitionToFinishedEv\",\"C\":\"mongo::future_details::SharedStateBase::transitionToFinished()\",\"s+\":\"107\"},{\"a\":\"5633C3EA18D1\",\"b\":\"5633C03FE000\",\"o\":\"3AA38D1\",\"s\":\"_ZN5mongo12_GLOBAL__N_114executeCommandESt10shared_ptrINS0_13HandleRequest16ExecutionContextEE\",\"C\":\"mongo::(anonymous namespace)::executeCommand(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)\",\"s+\":\"641\"},{\"a\":\"5633C3EA238B\",\"b\":\"5633C03FE000\",\"o\":\"3AA438B\",\"s\":\"_ZN5mongo12_GLOBAL__N_116receivedCommandsESt10shared_ptrINS0_13HandleRequest16ExecutionContextEE\",\"C\":\"mongo::(anonymous namespace)::receivedCommands(std::shared_ptr<mongo::(anonymous namespace)::HandleRequest::ExecutionContext>)\",\"s+\":\"43B\"},{\"a\":\"5633C3EA2E78\",\"b\":\"5633C03FE000\",\"o\":\"3AA4E78\",\"s\":\"_ZN5mongo12_GLOBAL__N_115CommandOpRunner3runEv\",\"C\":\"mongo::(anonymous namespace)::CommandOpRunner::run()\",\"s+\":\"48\"},{\"a\":\"5633C3E9580C\",\"b\":\"5633C03FE000\",\"o\":\"3A9780C\",\"s\":\"_ZN5mongo23ServiceEntryPointCommon13handleRequestEPNS_16OperationContextERKNS_7MessageESt10unique_ptrIKNS0_5HooksESt14default_deleteIS8_EE\",\"C\":\"mongo::ServiceEntryPointCommon::handleRequest(mongo::OperationContext*, mongo::Message const&, std::unique_ptr<mongo::ServiceEntryPointCommon::Hooks const, std::default_delete<mongo::ServiceEntryPointCommon::Hooks const> >)\",\"s+\":\"37C\"},{\"a\":\"5633C3E878E0\",\"b\":\"5633C03FE000\",\"o\":\"3A898E0\",\"s\":\"_ZN5mongo23ServiceEntryPointMongod13handleRequestEPNS_16OperationContextERKNS_7MessageE\",\"C\":\"mongo::ServiceEntryPointMongod::handleRequest(mongo::OperationContext*, mongo::Message const&)\",\"s+\":\"50\"},{\"a\":\"5633C537E604\",\"b\":\"5633C03FE000\",\"o\":\"4F80604\",\"s\":\"_ZN5mongo9transport15SessionWorkflow4Impl13_dispatchWorkEv\",\"C\":\"mongo::transport::SessionWorkflow::Impl::_dispatchWork()\",\"s+\":\"144\"},{\"a\":\"5633C537EE57\",\"b\":\"5633C03FE000\",\"o\":\"4F80E57\",\"s\":\"_ZZNO5mongo14future_details10FutureImplISt10unique_ptrINS_9transport15SessionWorkflow4Impl8WorkItemESt14default_deleteIS6_EEE4thenINS_19CleanupFuturePolicyILb0EEEZNS5_15_doOneIterationEvEUlT_E_EEDaSE_OT0_ENKUlOS9_E_clESI_.isra.0\",\"C\":\"mongo::future_details::FutureImpl<std::unique_ptr<mongo::transport::SessionWorkflow::Impl::WorkItem, std::default_delete<mongo::transport::SessionWorkflow::Impl::WorkItem> > >::then<mongo::CleanupFuturePolicy<false>, mongo::transport::SessionWorkflow::Impl::_doOneIteration()::{lambda(auto:1)#1}>(mongo::CleanupFuturePolicy<false>, mongo::transport::SessionWorkflow::Impl::_doOneIteration()::{lambda(auto:1)#1}&&) &&::{lambda(std::unique_ptr<mongo::transport::SessionWorkflow::Impl::WorkItem, std::default_delete<mongo::transport::SessionWorkflow::Impl::WorkItem> >&&)#1}::operator()(std::unique_ptr<mongo::transport::SessionWorkflow::Impl::WorkItem, std::default_delete<mongo::transport::SessionWorkflow::Impl::WorkItem> >&&) const [clone 
.isra.0]\",\"s+\":\"47\"},{\"a\":\"5633C53806E5\",\"b\":\"5633C03FE000\",\"o\":\"4F826E5\",\"s\":\"_ZN5mongo9transport15SessionWorkflow4Impl15_doOneIterationEv\",\"C\":\"mongo::transport::SessionWorkflow::Impl::_doOneIteration()\",\"s+\":\"535\"},{\"a\":\"5633C5380FCD\",\"b\":\"5633C03FE000\",\"o\":\"4F82FCD\",\"s\":\"_ZZN5mongo15unique_functionIFvNS_6StatusEEE8makeImplIZNS_9transport15SessionWorkflow4Impl18_scheduleIterationEvEUlS1_E_EEDaOT_EN12SpecificImpl4callEOS1_\",\"C\":\"mongo::unique_function<void (mongo::Status)>::makeImpl<mongo::transport::SessionWorkflow::Impl::_scheduleIteration()::{lambda(mongo::Status)#1}>(mongo::transport::SessionWorkflow::Impl::_scheduleIteration()::{lambda(mongo::Status)#1}&&)::SpecificImpl::call(mongo::Status&&)\",\"s+\":\"5D\"},{\"a\":\"5633C5382E24\",\"b\":\"5633C03FE000\",\"o\":\"4F84E24\",\"s\":\"_ZZN5mongo15unique_functionIFvNS_6StatusEEE8makeImplIZNS_9transport15SessionWorkflow4Impl15_captureContextES3_EUlS1_E_EEDaOT_EN12SpecificImpl4callEOS1_\",\"C\":\"mongo::unique_function<void (mongo::Status)>::makeImpl<mongo::transport::SessionWorkflow::Impl::_captureContext(mongo::unique_function<void (mongo::Status)>)::{lambda(mongo::Status)#1}>(mongo::transport::SessionWorkflow::Impl::_captureContext(mongo::unique_function<void (mongo::Status)>)::{lambda(mongo::Status)#1}&&)::SpecificImpl::call(mongo::Status&&)\",\"s+\":\"94\"},{\"a\":\"5633C74D5C24\",\"b\":\"5633C03FE000\",\"o\":\"70D7C24\",\"s\":\"_ZZN5mongo15unique_functionIFvvEE8makeImplIZNS_9transport26ServiceExecutorSynchronous11SharedState8scheduleENS0_IFvNS_6StatusEEEEEUlvE0_EEDaOT_EN12SpecificImpl4callEv\",\"C\":\"mongo::unique_function<void ()>::makeImpl<mongo::transport::ServiceExecutorSynchronous::SharedState::schedule(mongo::unique_function<void (mongo::Status)>)::{lambda()#2}>(mongo::transport::ServiceExecutorSynchronous::SharedState::schedule(mongo::unique_function<void (mongo::Status)>)::{lambda()#2}&&)::SpecificImpl::call()\",\"s+\":\"C4\"},{\"a\":\"5633C74D8CDD\",\"b\":\"5633C03FE000\",\"o\":\"70DACDD\",\"s\":\"_ZN5mongo9transport12_GLOBAL__N_17runFuncEPv\",\"C\":\"mongo::transport::(anonymous namespace)::runFunc(void*)\",\"s+\":\"3DD\"},{\"a\":\"7F2E09AFAB43\",\"b\":\"7F2E09A66000\",\"o\":\"94B43\",\"s\":\"pthread_condattr_setpshared\",\"s+\":\"513\"},{\"a\":\"7F2E09B8BBB4\",\"b\":\"7F2E09A66000\",\"o\":\"125BB4\",\"s\":\"clone\",\"s+\":\"44\"}],\"processInfo\":{\"mongodbVersion\":\"7.0.0\",\"gitVersion\":\"37d84072b5c5b9fd723db5fa133fb202ad2317f1\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"5.15.0-82-generic\",\"version\":\"#91-Ubuntu SMP Mon Aug 14 14:14:14 UTC 2023\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"5633C03FE000\",\"elfType\":3,\"buildId\":\"36667979E9E7865B\"},{\"b\":\"7F2E09A66000\",\"path\":\"/lib/x86_64-linux-gnu/libc.so.6\",\"elfType\":3,\"buildId\":\"69389D485A9793DBE873F0EA2C93E02EFAA9AA3D\"}]}}},\"tags\":[]}\n{\"t\":{\"$date\":\"2023-09-04T06:22:10.280+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn27\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"5633C7E1AFB4\",\"b\":\"5633C03FE000\",\"o\":\"7A1CFB4\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_117getStackTraceImplERKNS1_7OptionsE.constprop.0\",\"C\":\"mongo::stack_trace_detail::(anonymous namespace)::getStackTraceImpl(mongo::stack_trace_detail::(anonymous namespace)::Options const&) [clone .constprop.0]\",\"s+\":\"224\"}}}\n", "text": "This happens 3-4 times every day. If I restart the server it is good for sometime. 
But then this error happens and server crashes. Then the container restarts.What could be the root cause of this error ?", "username": "Sibidharan_Nandakumar" }, { "code": "db.selfmade.ninja | {\"t\":{\"$date\":\"2023-09-04T07:54:55.867+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn214\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.18.0.1:38568\",\"client\":\"conn214\",\"doc\":{\"driver\":{\"name\":\"mongoc / ext-mongodb:PHP / PHPLIB \",\"version\":\"1.24.1 / 1.16.1 / 1.16.0 \"},\"os\":{\"type\":\"Linux\",\"name\":\"Ubuntu\",\"version\":\"22.04\",\"architecture\":\"x86_64\"},\"platform\":\"PHP 8.1.2-1ubuntu2.13 cfg=0x03515620c9 posix=200809 stdc=201710 CC=GCC 11.3.0 CFLAGS=\\\"\\\" LDFLAGS=\\\"\\\"\"}}}\ndb.selfmade.ninja | {\"t\":{\"$date\":\"2023-09-04T07:54:55.868+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":6788700, \"ctx\":\"conn214\",\"msg\":\"Received first command on ingress connection since session start or auth handshake\",\"attr\":{\"elapsedMillis\":0}}\nconn214db.selfmade.ninja | {\"t\":{\"$date\":\"2023-09-04T08:13:20.370+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"conn214\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":22,\"message\":{\"ts_sec\":1693815200,\"ts_usec\":370599,\"thread\":\"1:0x7f380caf0640\",\"session_name\":\"WT_SESSION.get_value\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__wt_txn_context_prepare_check:19:not permitted in a prepared transaction\",\"error_str\":\"Invalid argument\",\"error_code\":22}}}\ndb.selfmade.ninja | {\"t\":{\"$date\":\"2023-09-04T08:13:20.370+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23083, \"ctx\":\"conn214\",\"msg\":\"Invariant failure\",\"attr\":{\"expr\":\"c->get_value(c, &value)\",\"error\":\"BadValue: 22: Invalid argument\",\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp\",\"line\":1975}}\ndb.selfmade.ninja | {\"t\":{\"$date\":\"2023-09-04T08:13:20.370+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23084, \"ctx\":\"conn214\",\"msg\":\"\\n\\n***aborting after invariant() failure\\n\\n\"}\ndb.selfmade.ninja | {\"t\":{\"$date\":\"2023-09-04T08:13:20.370+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn214\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"\\n\"}}\ndb.selfmade.ninja | {\"t\":{\"$date\":\"2023-09-04T08:13:20.370+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"conn214\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n", "text": "I was checking the logs on another crash and I found. This crash is happening every once in a while.the context connection conn214 threw the error after over 20 mins from its first occurrence in log and then after 20 mins, it threw this followed by the stacktrace.", "username": "Sibidharan_Nandakumar" }, { "code": "", "text": "@Satyam Can you help me on this? 
Thank you.", "username": "Sibidharan_Nandakumar" }, { "code": "{\"t\":{\"$date\":\"2023-09-04T13:36:44.119+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.120+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.120+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.121+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.121+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.121+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.121+00:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":7091600, \"ctx\":\"main\",\"msg\":\"Starting TenantMigrationAccessBlockerRegistry\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.121+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"d89ca2c6d76b\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.121+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"7.0.0\",\"gitVersion\":\"37d84072b5c5b9fd723db5fa133fb202ad2317f1\",\"openSSLVersion\":\"OpenSSL 3.0.2 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2204\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.121+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.121+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"*\"},\"repair\":true,\"storage\":{\"dbPath\":\"/data/db\"}}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.123+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/data/db\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.123+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.123+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=31637M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.753+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":630}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.753+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.753+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22316, \"ctx\":\"initandlisten\",\"msg\":\"Repairing size cache\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.753+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22327, \"ctx\":\"initandlisten\",\"msg\":\"Verify succeeded. Not salvaging.\",\"attr\":{\"uri\":\"table:sizeStorer\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.753+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22246, \"ctx\":\"initandlisten\",\"msg\":\"Repairing catalog metadata\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.754+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22327, \"ctx\":\"initandlisten\",\"msg\":\"Verify succeeded. Not salvaging.\",\"attr\":{\"uri\":\"table:_mdb_catalog\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.756+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.757+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":65530,\"recommendedMinimum\":1677720,\"maxConns\":838860},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.757+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":21027, \"ctx\":\"initandlisten\",\"msg\":\"Repairing collection\",\"attr\":{\"namespace\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.757+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22327, \"ctx\":\"initandlisten\",\"msg\":\"Verify succeeded. 
Not salvaging.\",\"attr\":{\"uri\":\"table:collection-0-737012485405667727\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.758+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20295, \"ctx\":\"initandlisten\",\"msg\":\"Validating internal structure\",\"attr\":{\"index\":\"_id_\",\"namespace\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.759+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20303, \"ctx\":\"initandlisten\",\"msg\":\"validating collection\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuid\":{\"uuid\":{\"$uuid\":\"f2df65c2-9924-4b78-af36-ecef5d4b9212\"}}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.759+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20296, \"ctx\":\"initandlisten\",\"msg\":\"Validating index consistency\",\"attr\":{\"index\":\"_id_\",\"namespace\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.759+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20306, \"ctx\":\"initandlisten\",\"msg\":\"Validation complete for collection. No corruption found\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuid\":{\"uuid\":{\"$uuid\":\"f2df65c2-9924-4b78-af36-ecef5d4b9212\"}}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.759+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":21028, \"ctx\":\"initandlisten\",\"msg\":\"Collection validation\",\"attr\":{\"results\":{\"ns\":\"admin.system.version\",\"uuid\":{\"$uuid\":\"f2df65c2-9924-4b78-af36-ecef5d4b9212\"},\"nInvalidDocuments\":0,\"nNonCompliantDocuments\":0,\"nrecords\":1,\"nIndexes\":1,\"keysPerIndex\":{\"_id_\":1},\"indexDetails\":{\"_id_\":{\"valid\":true}}},\"detailedResults\":{\"valid\":true,\"repaired\":false,\"warnings\":[],\"errors\":[],\"extraIndexEntries\":[],\"missingIndexEntries\":[],\"corruptRecords\":[]}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.759+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4934002, \"ctx\":\"initandlisten\",\"msg\":\"Validate did not make any repairs\",\"attr\":{\"collection\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.760+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":21},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.760+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.760+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":21029, \"ctx\":\"initandlisten\",\"msg\":\"repairDatabase\",\"attr\":{\"db\":\"admin\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.760+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":21027, \"ctx\":\"initandlisten\",\"msg\":\"Repairing collection\",\"attr\":{\"namespace\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.760+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22327, \"ctx\":\"initandlisten\",\"msg\":\"Verify succeeded. 
Not salvaging.\",\"attr\":{\"uri\":\"table:collection-0-737012485405667727\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.760+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20295, \"ctx\":\"initandlisten\",\"msg\":\"Validating internal structure\",\"attr\":{\"index\":\"_id_\",\"namespace\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.761+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20303, \"ctx\":\"initandlisten\",\"msg\":\"validating collection\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuid\":{\"uuid\":{\"$uuid\":\"f2df65c2-9924-4b78-af36-ecef5d4b9212\"}}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.761+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20296, \"ctx\":\"initandlisten\",\"msg\":\"Validating index consistency\",\"attr\":{\"index\":\"_id_\",\"namespace\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.761+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20306, \"ctx\":\"initandlisten\",\"msg\":\"Validation complete for collection. No corruption found\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuid\":{\"uuid\":{\"$uuid\":\"f2df65c2-9924-4b78-af36-ecef5d4b9212\"}}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.761+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":21028, \"ctx\":\"initandlisten\",\"msg\":\"Collection validation\",\"attr\":{\"results\":{\"ns\":\"admin.system.version\",\"uuid\":{\"$uuid\":\"f2df65c2-9924-4b78-af36-ecef5d4b9212\"},\"nInvalidDocuments\":0,\"nNonCompliantDocuments\":0,\"nrecords\":1,\"nIndexes\":1,\"keysPerIndex\":{\"_id_\":1},\"indexDetails\":{\"_id_\":{\"valid\":true}}},\"detailedResults\":{\"valid\":true,\"repaired\":false,\"warnings\":[],\"errors\":[],\"extraIndexEntries\":[],\"missingIndexEntries\":[],\"corruptRecords\":[]}}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.761+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4934002, \"ctx\":\"initandlisten\",\"msg\":\"Validate did not make any repairs\",\"attr\":{\"collection\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.762+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6608200, \"ctx\":\"initandlisten\",\"msg\":\"Initializing cluster server parameters from disk\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.762+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.762+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20537, \"ctx\":\"initandlisten\",\"msg\":\"Finished checking dbs\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the 
MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784908, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"initandlisten\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"initandlisten\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"initandlisten\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":5093807, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down all TenantMigrationAccessBlockers on global shutdown\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"TenantMigrationBlockerNet\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":6529201, \"ctx\":\"initandlisten\",\"msg\":\"Network interface redundant shutdown\",\"attr\":{\"state\":\"Stopped\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"initandlisten\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting 
down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784930, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"initandlisten\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"initandlisten\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.763+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"initandlisten\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.764+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22324, \"ctx\":\"initandlisten\",\"msg\":\"Closing WiredTiger in preparation for reconfiguring\",\"attr\":{\"closeConfig\":\"leak_memory=true,use_timestamp=false,\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.772+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795905, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":8}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.956+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795904, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger re-opened\",\"attr\":{\"durationMillis\":184}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.956+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22325, 
\"ctx\":\"initandlisten\",\"msg\":\"Reconfiguring\",\"attr\":{\"newConfig\":\"compatibility=(release=10.0)\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.959+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795903, \"ctx\":\"initandlisten\",\"msg\":\"Reconfigure complete\",\"attr\":{\"durationMillis\":3}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.959+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"initandlisten\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,use_timestamp=false,\"}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.966+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":7}}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.966+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"initandlisten\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.966+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.966+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.966+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-09-04T13:36:44.966+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\n", "text": "I tried to repair the db, still the issue happens. Here is the repair log FYI", "username": "Sibidharan_Nandakumar" }, { "code": "", "text": "I dumped all data and moved to a different server, still the problem continues - same error happens and container restarts every now and then.", "username": "Sibidharan_Nandakumar" }, { "code": "", "text": "I tried going back to mongodb 6.0, 6.0.9 also, still the error persists. The container fails with the same log every now and then.", "username": "Sibidharan_Nandakumar" } ]
My MongoDB server crashes often with this error; I am using MongoDB 7.0
2023-09-04T07:04:06.149Z
My MongoDB server crashes often with this error; I am using MongoDB 7.0
670
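For reference alongside the repair log in the thread above (which shows mongod validating each collection during --repair), the same per-collection check can be run by hand from mongosh. This is only a sketch; it will not address the prepared-transaction invariant itself, and the database and collection names depend entirely on the deployment, which the thread does not name.

// mongosh: run validation on every collection in the current database
db.getCollectionNames().forEach(function (name) {
  // full:true does a complete scan; slower, but matches what the repair log reports
  const result = db.getCollection(name).validate({ full: true });
  print(name + ": " + (result.valid ? "valid" : "INVALID, see errors/warnings in result"));
});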
null
[ "node-js", "mongoose-odm" ]
[ { "code": "import mongoose from \"mongoose\";\n\nconst institutionSchema = mongoose.Schema(\n {\n name: {\n type: String,\n required: true,\n min: 2,\n max: 50,\n },\n students: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"User\",\n unique: true,\n sparse: true,\n },\n ],\n logoPath: {\n type: String,\n default: \"\",\n },\n },\n { timestamps: true }\n);\n\nconst Institution = mongoose.model(\"Institution\", institutionSchema);\n\nexport default Institution;\n", "text": "Hey,\nI’m a new developer and started my journey working with MongoDB.\nI have this mongoose scheme and for some reason I still manage to have duplicated items in the students array (student is another User scheme):How can I fix this?\nWhat are those validators for?Thanks ", "username": "Arie_Levental" }, { "code": "students// Institution document\n{\n _id: 1,\n name: \"University of MongoDB\",\n students: [\n ObjectId(\"601af221b06858b7b8e35672\"), // John's student ID \n ObjectId(\"601af221b06858b7b8e35672\") // John's student ID added again\n ]\n}\n$addToSetstudents: [\n {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"User\",\n unique: true,\nunique:truestudents", "text": "Hey @Arie_Levental,Welcome to the MongoDB Community!I have this mongoose scheme and for some reason I still manage to have duplicated items in the students arrayBased on the schema, it seems there is a “User collection” that contains “User” documents (students).The “Institution collection” contains “Institution” documents, with each document having an array of student ObjectIds in the student’s field as a reference.So, to better understand the issue you are facing - Is the same student ObjectId being added multiple times to the students array in an Institution document?For example:To prevent this, you can check if it already exists before adding or using $addToSet operator to only add unique values.The unique:true on the students field does not help here, since that only ensures the field value itself is unique per document, not the array elements.I hope it clarifies your doubts. In case of any further questions feel free to reach out!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
How to avoid duplicated values?
2023-09-05T15:32:43.912Z
How to avoid duplicated values?
285
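A minimal sketch of the $addToSet approach suggested in the reply above, written against the Institution model from the question. The require path and the id values are assumptions, not taken from the thread.

const { Types } = require('mongoose');
const Institution = require('./models/Institution'); // assumed path to the schema from the question

// Adds the student reference only if it is not already in the array, so no duplicates are created
async function enrollStudent(institutionId, studentId) {
  return Institution.findByIdAndUpdate(
    institutionId,
    { $addToSet: { students: new Types.ObjectId(studentId) } },
    { new: true }
  );
}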
null
[ "connector-for-bi" ]
[ { "code": "", "text": "I connected to a client collection with the Power BI Atlas SQL (Beta) connector. I received the following error on 2 fields\nOLE DB or ODBC error: [Expression.Error] Data source error occurred. SQLSTATE: 22003 NativeError: 0 Error message: ODBC: ERROR [22003] [MongoDB][API] integral data “3350717864” was truncated due to overflow.\nHas anyone encountered a similar message. Is there a maximum value that is allowed from a field? Or is there some internal casting of data types and an error being thrown as the result of the cast is leading to truncation?\nIf anyone can help I would really appreciate it.", "username": "JASON_BOUGAS" }, { "code": "", "text": "Hi @JASON_BOUGAS welcome to the community! Quick question - where/when are you getting this error? Is it after connection but within Power Query? You mention it is on 2 fields, does the column say “error” then you received this message when clicking on “error”. What is the data type of these 2 fields? And do you think the data (and data type) within these fields differs greatly between documents?I will see if there is a known max value. You might also want to Get or Generate the SQL Schema for this collection to see if the assigned data type of these fields makes sense.\n\nScreenshot 2023-08-25 at 9.15.00 AM1285×722 152 KB\n", "username": "Alexi_Antonino" } ]
ODBC data truncated due to overflow error with Power BI Connector
2023-09-05T15:04:03.820Z
ODBC data truncated due to overflow error with Power BI Connector
350
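Some context on the error in the thread above: 3350717864 is larger than 2147483647, the maximum signed 32-bit integer, so any 32-bit integer mapping will overflow on that value, which is consistent with the truncation message. If it helps to locate the offending documents, a sketch in mongosh follows; the collection and field names are placeholders, since the thread does not name them.

// mongosh: list a few documents whose value cannot fit in a signed 32-bit integer
const INT32_MAX = 2147483647;
db.myCollection.find({ myField: { $gt: INT32_MAX } })
  .limit(10)
  .forEach(doc => printjson(doc));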
null
[]
[ { "code": "", "text": "This is the response: ‘mongo’ is not recognized as an internal or external command,\noperable program or batch file.\nThe thing I am not understanding is that since my mongo shell is an earlier version (1.6 something) I am not sure that it is running. I replaced my first db string as well where the password was required.\nI even tried another version and still got the same response.\nAny thoughts?", "username": "Hermann_Rasch" }, { "code": "", "text": "Do you have mongo.exe under bin?\nTry to connect from there\nIf it works it could be path issue", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks, I believe that was it", "username": "Hermann_Rasch" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I tried the connection string in my command line and got this error
2022-11-30T15:24:20.284Z
I tried the connection string in my command line and got this error
897
null
[]
[ { "code": "2023-09-04T17:05:35.643+0800 I NETWORK [listener] Error accepting new connection on /tmp/mongodb-27017.sock: Bad file descriptor\n2023-09-04T17:05:35.643+0800 I NETWORK [listener] Error accepting new connection on 127.0.0.1:27017: Bad file descriptor\n2023-09-04T17:05:35.643+0800 I NETWORK [listener] Error accepting new connection on /tmp/mongodb-27017.sock: Bad file descriptor\n2023-09-04T17:05:35.643+0800 I NETWORK [listener] Error accepting new connection on 127.0.0.1:27017: Bad file descriptor\n2023-09-04T17:05:35.643+0800 I NETWORK [listener] Error accepting new connection on /tmp/mongodb-27017.sock: Bad file descriptor\n2023-09-04T17:05:35.643+0800 I NETWORK [listener] Error accepting new connection on 127.0.0.1:27017: Bad file descriptor\n2023-09-04T17:05:35.643+0800 I NETWORK [listener] Error accepting new connection on /tmp/mongodb-27017.sock: Bad file descriptor\n2023-09-04T17:05:35.643+0800 I NETWORK [listener] Error accepting new connection on 127.0.0.1:27017: Bad file descriptor\n", "text": "I start my mongoDB. it’s running ,but i can’t connect to it\nHere’s the error description in mongod.log:I searched and didn’t seem to be able to find a solution, I’ve tried to delete the file “/tmp/mongodb-27017.sock” and change it’s permission to “mongod:mongod”, and it didn’t work.\nI’ve also tried repairing, and it didn’t work", "username": "Bai_Li" }, { "code": "NETWORK [listener] Error accepting new connection on /tmp/mongodb-27017.sock: Bad file descriptor\nNETWORK [listener] Error accepting new connection on 127.0.0.1:27017: Bad file descriptor\n/tmpls -ld /tmpsudo chmod 777 /tmp", "text": "Hey @Bai_Li,Welcome to the MongoDB Community!Could you share the MongoDB version you are using, the OS, and whether you have deployed it on a VM or Docker?I’ve tried to delete the file “/tmp/mongodb-27017.sock” and change it’s permission to “mongod:mongod”, and it didn’t work.Can you check if the /tmp directory has write permissions for the mongod user/group. Run ls -ld /tmp to verify. If not, run sudo chmod 777 /tmp to allow write access.Also check for any other processes that may be blocking port 27017 or have the socket file open.In case the above steps don’t resolve the issue, try reinstalling MongoDB freshly which will overwrite the socket file, and restart everything cleanly.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi, Kushagra_KesavI’m using ubuntu 20.04 ,and MongoDB version is 4.0.28, and I deployed it on a server, not vm or docker.the /tmp directory has already changed to “chmod 777 /tmp”there’s no port blocking 27017, It actrully worked fine until today", "username": "Bai_Li" }, { "code": "", "text": "Hey @Bai_Li,MongoDB version is 4.0.28It seems you are using the outdated version of the MongoDB server. We recommend updating your MongoDB version to the latest release. MongoDB 4.0 is no longer supported, and upgrading to a newer version can provide improved stability, bug fixes, and additional features. You can refer to the EOL Support Policies for more information on MongoDB versions and their support status.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
My MongoDB didn't seem to work properly
2023-09-04T09:48:32.528Z
My MongoDB didn't seem to work properly
301
null
[ "connector-for-bi" ]
[ { "code": "", "text": "Hi,I can’t connect to Tableau Prep Builder using MongoDB BI Connector from Atlas. I have the BI Connector and an analytical node enabled.When I try to connect to Tableau Prep Builder I obtain the following error message:“Can’t connect to \nDetailed Error Message:\nAn Error ocurred while communicating with MongoDB BI Connector [MySQL][ODBC 8.0 (w) Driver][mysqld-5.7.12 mongosqld v2.14.3] This command is not supported in the prepared statement protocol yet Unable to connect to the MongoDB BI Connector server . Check that the server is running and that you have access privileges to the requested database.”The user I’m using is an admin inside Atlas so I should have the privileges to connect to the DB. I configured the ODBC correctly because my databases are showing and when I press “Test” I obtain as a test result “Connection Succesful”.I also tried to connect to Tableau Online but I obtain the same message. Because of this, I don’t think that the problem is in my version of MySQL driver or my Visual C++ Redistributable version.I hope you can help me.Thank you.", "username": "Fryderyk_Chopin" }, { "code": "", "text": "Hello Fryderyk_Chopin,Did you manage to solve this somehow? I’m facing a similar situation trying to connect Tableau Online with my MongoDB database(using the same, MongoDB BI connector from Atlas).Thank you! ", "username": "Diana_Mihaela_Gherghinoaica" }, { "code": "", "text": "Hi!Nope, I created a SQL datawarehouse and then connected Tableau from there. I don’t use BI Connector.", "username": "Fryderyk_Chopin" }, { "code": "", "text": "Hello both @Fryderyk_Chopin and @Diana_Mihaela_Gherghinoaica - We now have an Atlas SQL custom connector for Tableau. It is compatible with Desktop, Prep and Server. And we plan to get it working with the Online product in the future. Here is a quick demo video to watch. And here is some online documentation to help you decide if you’d like to try it out.", "username": "Alexi_Antonino" } ]
Unable to connect to Tableau with BI Connector
2021-08-18T05:05:32.048Z
Unable to connect to Tableau with BI Connector
3962
null
[ "queries", "node-js", "compass", "indexes", "serverless" ]
[ { "code": "db.getCollection('users').getIndexKeys()\n\n[\n { _id: 1 },\n { email: 1 },\n { oauthId: 1 },\n { _fts: 'text', _ftsx: 1 },\n { amplify_id: 1 }\n]\nmongodb.collection('users').find(\n { amplify_id: \"123\" },\n )\nawait mongodb.collection('users').find(\n { amplify_id },\n ).explain()\n\ntotalDocsExamined: 1964,\nawait mongodb.collection<MongoUser>('users').indexes();\n\n{\n v: 2,\n key: { amplify_id: 1 },\n name: 'amplify_id_1',\n unique: true,\n sparse: true\n }\ntotalDocsExamined: 1,\n", "text": "Summary: Indexes created inside mongoDB compass are not used by Node driver. Using serverless.Explanation/proof:Running the following command in MongoDB compass gets me the following result:I filter on the amplify_id field for each operation, which looks just like this:And when I use the “explain” keyword on a “find” operation in my NODE script, the result is this:However, when I do this in MongoDB Compass, I get 1 doc examined, as expected. Same query.Node recognizes that an index exists:…but does not use it.This index was created in MongoDB compass. And sure enough, when I delete the index, then re-create it inside Node or with the mongo shell, it works:This is a huge problem and and can easily lead to unexpectedly high RPUs (billing going through the roof) and slow performance, and is not straightforward to detect.I pointed out another bug with the Node driver not using TEXT indexes. The workaround was the same; create the index IN NODE, not in MongoDB Compass.I’ll add that I really enjoy working in the Compass and it has been just fine for doing operations on data, but I won’t be using it anymore for deeper operations like creating indexes", "username": "Justin_Jaeger" }, { "code": "{\"amplify_id\" : 123},\n{\"amplify_id\" : 124},\n{\"amplify_id\" : 125},\n{\"amplify_id\" : 126},\n{\"amplify_id\" : 127}\n{\"amplify_id\": 1}mongoshdb.collection.getIndexes()explain(\"executionStats\")db.collection.find({\"amplify_id\": 123}).explain(\"executionStats\")\nmongosh", "text": "Hi @Justin_Jaeger,Thanks for providing those details.I’m going to try reproduce this but this is my understanding of the general steps needed to reproduce this behaviour:(Note: The tests will be performed against a serverless instance)With the above steps, I am comparing the same index but created one created via mongosh and another via MongoDB compass.If you believe something is missing or have any extra information that may help this, please let me know.Look forward to hearing from you.Best Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi Jason,Thanks for attending to this!These steps are correct, but at #6, the key is to compare the results of the query inside of the Node driver AND MongoDB compass). These should produce different execution stats.After creating the index in MongoDB Compass, I’d expect when you query in there, it will use the index. Whereas when you query in Node it won’t use the index.", "username": "Justin_Jaeger" }, { "code": "mongodb.collection('users').createIndex({ name: 'text' })\nmongodb.collection('users').find({$text:{$search:'hello'}}).explain()\n", "text": "I’d also like to point out one more bug I’m observing which is similar. As stated, I fixed the indexing problem above by creating all my indexes in the mongo shell instead of compass. Except one. My text index was still not being recognized. 
That is, until I deleted it, and re-created it in Node.Reproduce:", "username": "Justin_Jaeger" }, { "code": "mongoshmongosh", "text": "@Justin_Jaeger this is a very strange scenario you’ve described as mongosh, as well as MongoDB Compass all use the Node.js driver to communicate with your cluster.To help us better understand the issue you’ve described can you share:This will help us ensure our reproductions match your environment as closely as possible.", "username": "alexbevi" }, { "code": "mongoshconst { MongoClient } = require(\"mongodb\");\nconst client = new MongoClient(\"mongodb+srv://xyz.mongodb.net/\");\nasync function run() {\n try {\n await client.connect();\n const database = client.db(\"test\");\n const coll = database.collection(\"amplify\"); \n var result = await coll.find({ amplify_id: 1 }).explain();\n console.log(result);\n } finally {\n await client.close();\n }\n process.exit(0);\n}\nrun().catch(console.dir);\n", "text": "For the moment I’ve tried this using MongoDB Compass 1.39.3, mongosh 1.10.6 and Node.js driver v6.0.0 and performed the following:At the moment everything appear to be working as expected. If it helps the script I used in Node is:", "username": "alexbevi" } ]
BUG: Node driver does not use indexes created with MongoDB Compass (proof)!
2023-09-03T08:12:46.482Z
BUG: Node driver does not use indexes created with MongoDB Compass (proof)!
597
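As a companion to the reproduction script shared in the thread above, a sketch of reading executionStats from the Node.js driver to see whether the planner chose an index scan. The connection string, database name and amplify_id value are placeholders, and on a serverless instance the explain output can be nested slightly differently.

const { MongoClient } = require('mongodb');

async function checkPlan() {
  const client = new MongoClient('mongodb+srv://user:pass@cluster.example.mongodb.net/'); // placeholder URI
  try {
    await client.connect();
    const users = client.db('test').collection('users');

    // Ask the server how it actually executed this exact query
    const stats = await users.find({ amplify_id: '123' }).explain('executionStats');
    console.log('totalDocsExamined:', stats.executionStats.totalDocsExamined);
    console.log('winningPlan:', JSON.stringify(stats.queryPlanner.winningPlan, null, 2));
  } finally {
    await client.close();
  }
}

checkPlan().catch(console.error);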
null
[ "aggregation", "queries" ]
[ { "code": "db.mycollection.find({},{\"entries\":{\"$slice\":[0,10]},\"totalEntries\":{\"$size\":\"$entries\"}});\n", "text": "I have two MongoDB instances, one is local, and other is for dev testing. Both have version 5.0.0I send same query to both databases:In local database i got expected answer with list of entries and it size.\nBut in dev database i got error “Query failed with error code 2 with name ‘’ and error message ‘Unknown projection operator $size’ on server”Do mongoDB have some mechanism to fix wrong queries, and it fix it in first situation?\nOr how it can be explained, that same query give error only in one situation?", "username": "Roman_Buzuk1" }, { "code": "", "text": "Hey @Roman_Buzuk1,Welcome to the MongoDB Community!I have two MongoDB instances, one is local, and the other is for dev testing. Both have version 5.0.0\n‘Unknown projection operator $size’ on server”Could you share the sample documents, so we can troubleshoot the query in our environment to assist you better? Also, please share the specific sub-version of both the deployed servers.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Unknown projection operator $size
2023-09-05T10:53:30.765Z
Unknown projection operator $size
223
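One way to narrow down the discrepancy discussed above is to confirm the exact build and feature compatibility version of each deployment, and to try the equivalent aggregation pipeline, since expression-style projections inside find() depend on server support. A sketch in mongosh, reusing the collection and field names from the question:

// Compare the two deployments precisely
db.version();
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

// Equivalent aggregation form of the projection from the question
db.mycollection.aggregate([
  {
    $project: {
      entries: { $slice: ["$entries", 0, 10] },
      totalEntries: { $size: "$entries" }
    }
  }
]);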
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "HI Team,Unable to track mongdump log. We need to know whether mongodump is completed or not in onpremises. how to log the mongodump activity. Please advise.", "username": "Kiran_Joshy" }, { "code": "mongodump --verbose\n", "text": "", "username": "Suresh_Pradhana" }, { "code": "", "text": "Hi Suresh,Thanks for the reply.\nCurrently i am using cronjob for running backup in onprem. I need log for this activity. after finishing the mongodump backup. I am unable see whether the backup complted or not. Need logging to log file. Is it possible?", "username": "Kiran_Joshy" }, { "code": "mongodump>>mongodumpcrontab -e\nmongodumpstdoutstderr>>0 2 * * * mongodump --uri \"mongodb://username:password@hostname:port/database\" >> /path/to/backup.log 2>&1\n0 2 * * *mongodumpmongodump --uri \"mongodb://username:password@hostname:port/database\"mongodump>> /path/to/backup.logstdoutstderr/path/to/backup.log2>&1stderrstdoutmongodump", "text": "I don’t know how to do it but This may help you, This is from chatGPT .Yes, it’s possible to log the output of the mongodump command to a log file when running it as a cron job. This allows you to keep track of whether the backup was completed successfully and review any error messages if there are issues during the backup process.To log the output of a command to a file, you can use the >> operator to append the command’s output to a log file. Here’s how you can modify your cron job to log the mongodump output:Open your crontab for editing by running the following command:Add your mongodump command to the crontab file, and redirect both standard output (stdout) and standard error (stderr) to a log file using the >> operator. For example:In this example:0 2 * * * schedules the mongodump command to run daily at 2:00 AM. You can adjust the schedule as needed.mongodump --uri \"mongodb://username:password@hostname:port/database\" is your mongodump command.>> /path/to/backup.log appends both stdout and stderr to the specified log file (/path/to/backup.log). You should replace this with the actual path where you want to store the log file.2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1) so that both standard output and standard error are logged to the same file.Save the crontab file and exit the text editor.With this configuration, the output and any error messages generated by the mongodump command will be appended to the specified log file. You can check this log file to monitor the backup process and review any potential issues.Make sure the directory where you want to store the log file is writable by the user running the cron job to avoid permission issues.", "username": "Suresh_Pradhana" }, { "code": "", "text": "Hi @Kiran_Joshy,\nYes, you need only to redirect the output in a file.Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodump Status
2023-09-05T05:53:35.085Z
Mongodump Status
406
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "\n//Student.js\nconst mongoose = require('mongoose');\n\nconst studentSchema = new mongoose.Schema({\n name: {\n type :String\n },\n rollno: {\n type :String\n },\n mobileno:{\n type :String\n },\n classId: mongoose.Schema.Types.ObjectId,// Defines the field as an ObjectId reference\n});\n\nmodule.exports = mongoose.model('Student', studentSchema);\n\n~~``~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n//studentController.js\nconst Student = require('../models/Student');//import the Student model\nconst Class = require('../models/Class'); // Import the Class model\n //Read all students in a class with standard and division\n\nexports.getStudentsByClassStandardDivision = async (req, res) => {\n try {\n const standard= req.params.standard;\n const division=req.params.division;\n \n // Find all students in the specified class based on standard and division\n const students = await Student.find({ 'classId.standard': standard, 'classId.division': division });\n \n res.status(200).json(students);\n \n } catch (error) { \n console.error('Error getting students by class, standard, and division:', error);\n res.status(500).json({ error: 'Error getting students by class, standard, and division' });\n }\n };", "text": "Data Base name:studentdata//Class.js\nconst mongoose = require(‘mongoose’);const classSchema = new mongoose.Schema({\nstandard: {\ntype :String\n},\ndivision: {\ntype :String\n} ,\n});module.exports = mongoose.model(‘Class’, classSchema);", "username": "Rameesa_Hassan" }, { "code": "Studentconst mongoose = require('mongoose');\n\nconst studentSchema = new mongoose.Schema({\n name: String,\n rollno: String,\n mobileno: String,\n classId: mongoose.Schema.Types.ObjectId,\n});\n\nconst classSchema = new mongoose.Schema({\n standard: String,\n division: String,\n});\n\nconst studentModel = mongoose.model('Student', studentSchema);\nconst classModel = mongoose.model('Class', studentSchema);\nconst ObjectId = mongoose.Types.ObjectId;\n\nawait studentModel.insertMany([\n {\n name: 'Mykola',\n rollno: 'no1',\n mobileno: '+38099',\n classId: new ObjectId('64e4abd5ea1b087ea47d0009'),\n },\n {\n name: 'Sashko',\n rollno: 'no2',\n mobileno: '+38066',\n classId: new ObjectId('64e4abd5ea1b087ea47d0006'),\n }\n]);\n\nawait classModel.insertMany([\n {\n _id: new ObjectId('64e4abd5ea1b087ea47d0001'),\n standard: 'std-1',\n division: 'A',\n },\n {\n _id: new ObjectId('64e4abd5ea1b087ea47d0002'),\n standard: 'std-2',\n division: 'B',\n }\n]);\nstudentawait studentModel.findOne();\n{\n \"_id\": \"64e50e3557a6feae0403312e\",\n \"name\": \"Mykola\",\n \"rollno\": \"no1\",\n \"mobileno\": \"+38066\",\n \"classId\": \"64e4abd5ea1b087ea47d0001\",\n \"__v\": 0\n}\nstudentclassclass64e4abd5ea1b087ea47d0001studentclassId.standardclassId.divisionawait studentModel.find({ \n 'classId.standard': 'std-2', \n 'classId.division': 'B' \n});\nstudentclassconst studentSchema = new mongoose.Schema({\n /* other fields unchanged */\n classId: {\n type: mongoose.Schema.Types.ObjectId,\n ref: 'Class'\n },\n});\nfindOneclassawait studentModel.findOne({ \n classId: new ObjectId('64e4abd5ea1b087ea47d0001')\n}).populate('classId');\n{\n \"_id\": \"64e50e3557a6feae0403312e\",\n \"name\": \"Mykola\",\n \"rollno\": \"no1\",\n \"mobileno\": \"+38066\",\n \"classId\": {\n \"_id\": \"64e4abd5ea1b087ea47d0001\",\n \"standard\": \"std-1\",\n \"division\": \"A\",\n \"__v\": 0\n },\n \"__v\": 0\n}\nclassstudentclassId.standardclassId.divisionmongoose.set('debug', true);\nawait studentModel.aggregate([\n {\n $lookup: {\n from: 'classes',\n 
localField: 'classId',\n foreignField: '_id',\n as: 'classId'\n }\n },\n {\n $unwind: '$classId'\n },\n {\n // at this stage you can match the document, just like you wanted :)\n $match: {\n 'classId.standard': 'std-2',\n 'classId.division': 'B'\n }\n }\n]);\n[\n {\n \"_id\": \"64e50e3557a6feae0403312f\",\n \"name\": \"Sashko\",\n \"rollno\": \"no2\",\n \"mobileno\": \"+38095\",\n \"classId\": {\n \"_id\": \"64e4abd5ea1b087ea47d0002\",\n \"standard\": \"std-2\",\n \"division\": \"B\",\n \"__v\": 0\n },\n \"__v\": 0\n }\n]\nstudentconst studentSchema = new mongoose.Schema({\n name: String,\n rollno: String,\n mobileno: String,\n classId: mongoose.Schema.Types.ObjectId,\n classStandard: String, // new field\n classDivision: String, // new field\n});\nawait studentModel.find({\n classStandard: 'std-2',\n classDivision: 'B'\n});\n", "text": "Hello, @Rameesa_Hassan! Welcome to the MongoDB community! You get empty array because in your mongoose-find method you query fields, that Student model does not know about.Moreover, your code and data model can be improved and I will show you how exactly with examples.To reproduce your case, I am using your model schemas:Then I will create few sample documents do to the tests:Okay. Now we are all set! Let’s do some queries!\nLet’s see how your student document looks now in the database:Output:Notice, that student object does not contain any data about class document, except for its reference of ObjectId type. MongoDB and Mongoose do not know into what collection they have to look to find class data. Moreover, they do not know if any document actually has _id with value 64e4abd5ea1b087ea47d0001.This command below returns empty array, because zero student documents that have field classId.standard, and zero documents, that have field classId.division. Nothing matched - nothing returned (empty array).To make your student model aware of the class model (and collection), you need to explicitly mention it in your schema (see more details in the official Mongoose doc):Now, with this change if you do that same findOne operation again, you will get the same output - with no class document joined. That’s because you also need to calll populate() method:Output:Okay. Now data from class document is joined to student document. That means, that now we can use classId.standard and classId.division like some query above, right? No .This is why, under the hood, Mongoose, makes two queries to the database and then merges two results after retrieval. You can check that, if you enable debug mode on your Mongoose connection:To get the data your need, you need to run an aggregation pipeline, like this:Output:With MongoDB aggregation you can get the result more efficiently. Avoid using .populate() mehtod .Additionally, I can suggest you to add some additional fields to your student schema:It is completely OK to have a denormalized data structure in MongoDB. With these changes, you can easily query your data much faster and write queries easier, like this:", "username": "slava" }, { "code": "", "text": "Thank you for your response", "username": "Rameesa_Hassan" } ]
Get empty array when trying to read all students with standard and division
2023-08-22T11:52:35.136Z
Get empty array when trying to read all students with standard and division
606
null
[ "replication", "indexes" ]
[ { "code": "db.users.getIndexes()\n[\n {\n \"key\": {\n \"_fts\": \"text\",\n \"_ftsx\": 1\n },\n \"name\": \"search_terms_text\",\n \"ns\": \"db.users\",\n \"v\": 2,\n \"default_language\": \"english\",\n \"language_override\": \"language\",\n \"textIndexVersion\": 3,\n \"weights\": {\n \"search_terms\": 1\n }\n }\n]\ndb.users.dropIndex(\"search_terms_text\")\n[\n {\n \"$clusterTime\": {\n \"clusterTime\": {\"$timestamp\": {\"t\": 1645773616, \"i\": 473}},\n \"signature\": {\n \"hash\": {\"$binary\": {\"base64\": \"CH0lkqS6u83NeFP5Xd3y3NDVJKw=\", \"subType\": \"00\"}},\n \"keyId\": 7014029826021392388\n }\n },\n \"nIndexesWas\": 2,\n \"ok\": 1,\n \"operationTime\": {\"$timestamp\": {\"t\": 1645773616, \"i\": 473}}\n }\n]\ndb.users.createIndex(\n { search_terms: \"text\" },\n { default_language: \"none\" }\n)\nCommand failed with error 85 (IndexOptionsConflict): 'Index with name: search_terms_text already exists with different options' on server ***. The full response is {\"operationTime\": {\"$timestamp\": {\"t\": 1645773721, \"i\": 183}}, \"ok\": 0.0, \"errmsg\": \"Index with name: search_terms_text already exists with different options\", \"code\": 85, \"codeName\": \"IndexOptionsConflict\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1645773721, \"i\": 183}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"njwxfSeASmTReFUzs/tZFBJv34k=\", \"subType\": \"00\"}}, \"keyId\": 7014029826021392388}}}\ndb.users.getIndexes()\n[\n {\n \"key\": {\n \"_fts\": \"text\",\n \"_ftsx\": 1\n },\n \"name\": \"search_terms_text\",\n \"ns\": \"db.users\",\n \"v\": 2,\n \"default_language\": \"english\",\n \"language_override\": \"language\",\n \"textIndexVersion\": 3,\n \"weights\": {\n \"search_terms\": 1\n }\n }\n]\n1. index build: done building index search_terms_text on ns db.users\n2. Deferring table drop for index 'search_terms_text' on collection 'db.users.$search_terms_text (8082e0dc-6416-43f1-97a6-26101033e5ea)'. Ident: 'db/index-183-7199585454386394577', commit timestamp: 'Timestamp(1645773616, 473)'\n3. index build: inserted 551 keys from external sorter into index in 0 seconds\n4. index build: collection scan done. scanned 17 total records in 0 seconds\n5. index build: starting on db.users properties: { v: 2, key: { _fts: \"text\", _ftsx: 1 }, name: \"search_terms_text\", ns: \"db.users\", weights: { search_terms: 1 }, default_language: \"english\", language_override: \"language\", textIndexVersion: 3 } using method: Hybrid\n6. 
Completing drop for ident db/index-183-7199585454386394577 (ns: db.users.$search_terms_text) with drop timestamp Timestamp(1645773616, 473)\n", "text": "I am trying to change the text index setting, but the existing text index is not deleted.ReproduceA text index is created as shown below.I did dropIndex with the index name as shown below, and got the result that it was successful.When I try to re-create the text index after deletion, I get an error message stating that it already exists as shown below.When I call getIndexes() again, I could see that the text index was not deleted and remained.DB LogsWhen I check the DB log in that time period, it is as follows.Environmentmongodb version: 4.2.12\nreplica set: primary 1, secondary 2QuestionThanks to everyone who replied.", "username": "Joe_Cho" }, { "code": "", "text": "Hi @Joe_Cho and welcome in the MongoDB community !Initially I thought it could be a MongoDB Atlas Search index that was automatically being recreated because its definition is in Atlas - but I just tested and actually these Lucene indexes don’t even appear in the normal list of indexes so that’s not it.The only thing I can think of is that maybe you have Mongoose or Java with POJOs running on top of this collection and they have an index definition applied directly in the config of the Object files that represent that collection and they recreate the index automatically if it detects that it doesn’t exists.To be honest, that the only thing I can think of.Let me know if you find the solution !Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi~ @MaBeuLux88 Thanks for your reply.Currently, I am using GitHub - mongodb/mongo-go-driver: The Official Golang driver for MongoDB and I am not creating any index in the application. And as far as I know, mongo-go-driver does not automatically create indexes. If it’s not automatically recreated in mongodb as you explained, I can think of it as regenerating somewhere. I’ll take a look again.Thanks,\nJoe, Cho", "username": "Joe_Cho" }, { "code": "", "text": "Hey\nWere you able to figure out what was recreating it?\nI have encountered a similar situation wherein indexes are being created on its own and i dont have any indexes mentioned in my config files.", "username": "Robinraj_Rajan" }, { "code": "", "text": "Hi @Robinraj_Rajan,Can you describe your tech stack? We are not talking about some internal / system collections here, right?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "5.13.15background.getIndexes() {\n \"v\" : 2.0,\n \"key\" : {\n \"_fts\" : \"text\",\n \"_ftsx\" : 1.0\n },\n \"name\" : \"name_text\",\n \"background\" : true,\n \"weights\" : {\n \"name\" : 1.0\n },\n \"default_language\" : \"english\",\n \"language_override\" : \"language\",\n \"textIndexVersion\" : 3.0\n }\n", "text": "Hello!I am getting the same as OP, successful drop but the index is not actually dropped.\nWe are using node js with mongoose version 5.13.15, but i cannot see that this index has been created through mongoose.There is also a background property set to true if that helps, even though i did not find something wrong with this property.\nIndex object from .getIndexes()Do you have any ideas?", "username": "vasilis_m0" } ]
Could not drop text index
2022-02-25T07:36:14.089Z
Could not drop text index
4,911
null
[ "queries", "node-js" ]
[ { "code": "for await (const doc of collection.find(query)) {\n if(check_clash(doc.date, user_date))\n break;\n}\nawait collection.find(query).getInBatches(20,(docs)=>{\n for(const d in docs)\n if(check_clash(d.date, user_date))\n return false; // break out \n})\n", "text": "I have a data set of dates and I’d like to check if a user input date clashes with an existing one,\nso far this has been my approach:but I have 2 concerns:Will this work even for a larger data set?Is there any way to do this in batches?\ni.e. since I’m guessing every time 1 doc is fetched it takes some amount of time whereas if we take e.g. 20 docs at a time, compare them & then take the next batch it should take less amount of time since there are less calls to the database?maybe a sudo code would look something like this?P.S. I tried the approach in this question: Batching data with find but a solution pointed out:It is never a good idea to use skip in mongo queries", "username": "MRM" }, { "code": "db.getCollection(\"Test\").find({_id:{$gt:XXX}}).sort({_id:1}).forEach(theDoc =>{\n XXX = theDoc._id,\n checkClash(theDoc.date, user_date)....etc\n})\n\n", "text": "Depending on the rule of check_clash you may be able to implement this as an aggregation pipeline and run it server side.In regards to the point of .skip, the alternative is to sort your data (make sure there is a supporting index) and then keep track of the last item processed, when you resume you just get data more than the item last found.So you could use the _id field as the primary key for this and something like this (pseudocode)", "username": "John_Sewell" }, { "code": "{\n\"month_year\" : \"01-2020\",\n\"dates\" : {\n \"1\": [...], // array of Booking IDS whose range falls within this date\n \"5\": [...],\n \"28\":[...]\n},\n...\n{\n \"_id\": ... , // Booking ID\n \"from\": \"2023-09-04T04:30:00.000Z\",\n \"till\": \"2023-09-05T06:00:00.000Z\",\n ...\n},\n\n...\nconst user_range_start=new Date(Date.UTC(2020, 0, 1) // 01-01-2020\nconst user_range_end=new Date(Date.UTC(2020, 2, 23) // 23-03-2020\n\nlet id_arr = []\nawait DB.collection(MONTHLY_COLLECTION).find(\n {\n \"month_year\": {\n \"$in\": month_year_between(user_range_start, user_range_end) // [`01-2020`, `02-2020`, `03-2020` ]\n }\n }\n)\n.forEach(doc => {\n for (let date in doc[\"dates\"])\n id_arr.push(...doc[\"dates\"][date])\n})\n\n\nawait DB.collection(BOOKING_COLLECTION).find({\n \"_id\": {\n \"$in\": id_arr\n }\n})\n.forEach(doc => {\n if(check_clash(doc.from, doc.till, user_range_start, user_range_end)\n return false\n})\n", "text": "I’ve actually abstracted my problem in the original question to simplify things but the core problem is that I have a user input date range and bookings also with a date range as such:MONTHLY_COLLECTION:BOOKING_COLLECTIONand my approach has been to first get all the Booking IDs then check if they clash with the user input range as suchMy only problem with this approach is that it is exhaustive till 101 documents since we might need to check for more than thatCould you please answer in context with this?", "username": "MRM" } ]
Loop through data in batches?
2023-09-05T07:43:19.692Z
Loop through data in batches?
258
null
[ "queries" ]
[ { "code": "", "text": "I wanted to send the Alerts in my Google Chat space, For that I configure Webhook in Atlas UI, but when Alert trigger, Message is not coming to Google Chat ?\nAny reason behind why message is not come to my Google Chat space.", "username": "Shivam_Tiwari2" }, { "code": "", "text": "Hi @Shivam_Tiwari2,It sounds like you’ve configured the webhook integration in the Atlas project. Correct me if I am wrong here. In this case, have you configured the Alert itself to send to the webhook URL (as per step 3 in the below example screenshot)? :\nimage2018×1340 181 KB\nRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Yes @Jason_Tran I have configured the Webhook.\n\nimage1187×742 37.1 KB\n\nand I also configued Webhook in Integration\n\nimage1726×815 70.8 KB\n", "username": "Shivam_Tiwari2" }, { "code": "", "text": "Thanks for sending those screenshots confirming.I haven’t used the alerts configured with webhooks specific to Googlechat but I’ll try to check to see if this is supported.Regards,\nJason", "username": "Jason_Tran" }, { "code": "text{\n \"created\": \"2023-09-04T23:29:36Z\",\n \"alertConfigId\": \"REDACTED\",\n \"groupId\": \"REDACTED\",\n \"eventTypeName\": \"FTS_INDEX_BUILD_COMPLETE\",\n \"links\": [\n {\n \"rel\": \"self\",\n \"href\": \"https://cloud.mongodb.com/api/atlas/v1.0/groups/REDACTED/alerts/REDACTED\"\n }\n ],\n \"id\": \"64f6686065adf706c973cdf4\",\n \"humanReadable\": \"Project: TEST\\n\\nOrganization: TEST\\n\\n----------------------------------------\\n\\nINFORMATIONAL\\nSearch Index Build Complete\\n\\nCreated: 2023/09/04 23:29 GMT\\n\\n INDEX: default in db.collection\\n\\n\\n----------------------------------------\\n\\n\",\n \"updated\": \"2023-09-04T23:29:36Z\",\n \"status\": \"INFORMATIONAL\"\n}\n", "text": "Looks like Googlespace is expecting a text field in the response based off : Created a text messageThe response / message sent from Atlas for an alert based off my testing:The response / outgoing alert message fields cannot be configured. You can perhaps make a feedback post for this or check with Google if google space can accept these responses and convert them to text for the google space / chat.", "username": "Jason_Tran" }, { "code": "", "text": "Thank you @Jason_Tran for this answer.", "username": "Shivam_Tiwari2" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does MongoDB Atlas support Google Chat webhooks?
2023-09-01T06:16:59.350Z
Does MongoDB Atlas support Google Chat webhooks?
335
null
[ "aggregation", "queries", "data-modeling", "time-series" ]
[ { "code": "s> db.stats_proc_diskstats1.find()\n[\n {\n timestamp: ISODate(\"2023-08-21T20:00:20.108Z\"),\n metadata: { disk: 'sda', hostname: 'server1' },\n reads_completed: 22893701,\n ...snipped a bunch of statistics... \n _id: ObjectId(\"64e3c2546a88d7217de0dd52\")\n }\n]\ns> db.stats_proc_diskstats2.find()\n[\n {\n timestamp: ISODate(\"2023-08-21T20:08:05.146Z\"),\n metadata: { hostname: 'server1' },\n diskstats: [\n {\n md: { disk: 'sda' },\n reads_completed: 22893701,\n ...snipped a bunch of statistics...\n }\n ...snipped, repeats for all the disks in the server...\n ],\n _id: ObjectId(\"64e3c425011a5d2aaa109dce\")\n }\n]\n", "text": "i’m dipping my toe into timeseries collections and i’m afraid a subtle nuance is unclear to me about which data model is betteras i understand it the first block is the general model for a time series collection. where you have some metadata and a bunch of values and each time step is a single documentas an alternate one could have an array and then use an aggregate frame work to unwind that array and basically produce the same outputthe first is surely easier to query and not unwinding an array during a query is probably less loading on the mongo server. but the question is, which one is better?to frame the scope of that bold question, this collection would number into the 100’s of millions. where i’m collecting disk performance statistics every five minutes from hundreds of servers with a 90 day retentioni’m sure there are processing/storage trade off’s inside mongo for each solution, but i my knowledge is still pretty light on the mongo internals", "username": "michael_didomenico" }, { "code": "", "text": "Hi @michael_didomenico and welcome to MongoDB community forums!!but the question is, which one is better?When deciding between organising your data as one document per timestamp or using the array model, your choice should be guided by how you plan to retrieve and work with your data.\nIf you prefer straightforward and easy queries, creating a document for each timestamp is a good option. This approach simplifies your queries and avoids unnecessary complexity and processing. It’s like having individual files for each moment in time, making it easier to find and access specific data.On the other hand, if you’re concerned about data storage and want to prevent your collection from growing excessively over time, the array model might be more suitable. With this approach, data is stored together in a single document, which helps manage storage space. 
However, it comes with a tradeoff - writing complex queries to extract specific information from the array can be more challenging.So, the choice ultimately depends on your data retrieval needs and how you want to balance simplicity in querying with efficient data storagei’m sure there are processing/storage trade off’s inside mongo for each solution, but i my knowledge is still pretty light on the mongo internalsYes, you are right here that this would involve a processing time trade off but you could make the choice based on how you would like to process the queries.\nIn saying so, we know that time series collections can handle large number of documents, therefore huge collection size should not be a concern.\nThe recommendation we have for you is to make use of the TTL Indexes which could help you to clear up space by removing the old data.Please feel free to reach out in case of further questions.Warm Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Timeseries schema design
2023-08-21T20:33:11.849Z
Timeseries schema design
536
null
[ "transactions" ]
[ { "code": "deleteMany()deleteMany()", "text": "Hey all, currently I am implementing the database operations for deleting all the information related to a particular user when a user wants to delete his/her account; this involves deleting the user’s data that spread across multiple collections, and ideally I would like to perform deleteMany() operations across multiple collections in an all-or-nothing transaction in order to ensure data consistency because some of these data are interconnected, such as the user’s posts, likes, etc.The number of deleteMany() operations to be included in the transaction would likely vary from anywhere around tens of thousands to around tens of millions, my question is that I read in this post (Performance Best Practices: Transactions and Read / Write Concerns | MongoDB Blog) that “For operations that need to modify more than 1,000 documents, developers should break the transaction into separate parts that process documents in batches.”, and I am wondering if this limit applies to document delete operations or only document update operations. I also read on this page (https://www.mongodb.com/docs/manual/core/transactions-production-consideration/#oplog-size-limit) that starting in version 4.2, MongoDB can create as many oplog entries as necessary instead of limiting all the oplog to a single 16 MB object, so I am also not sure if the 1,000 documents limit is still valid. Either way, I would like to seek some advice and recommendations on how to approach this issue.Thanks a lot in advance!", "username": "fried_empanada" }, { "code": "", "text": "Using a very big transaction is almost never good idea. Big transactions consume more resources and can cause performance impact.", "username": "Kobe_W" }, { "code": "_id", "text": "Hi @Kobe_W,I see, those are very good points. I actually have a follow-up question with regards to the second point: say if I delete the single user profile document belonging to user A, but do not delete the resources related to that user in other collections (those resources would reference that user by that user profile document’s ObjectId), then when a new user, say user B, signs up, is there any chance that the ObjectId originally assigned to user A’s user profile document will be “recycled” and assigned to user B’s user profile document, and therefore cause the system to mis-assign user A’s stale/left-over resources to user B and essentially allow user B to access user A’s resources? Basically I think this boils down to the question of whether or not deleted documents’ ObjectIds (the _id field) will be “recycled” and re-used by new documents in the future?Thanks for the help again!", "username": "fried_empanada" }, { "code": "", "text": "is there any chance that the ObjectId originally assigned to user A’s user profile document will be “recycled”for each signup, a new object id is generated and assigned to that specific user. 
so no recycle.", "username": "Kobe_W" }, { "code": "", "text": "Take a look at how an object id is constructed:From a comment elsewhere someone pointed out that the driver may generate this if inserting in a driver, or the server will if run on the server via shell etc.", "username": "John_Sewell" }, { "code": "", "text": "I do not want to be picky about the details, no really I want to be picky B-)The shell is simply a client application that uses a driver, and it is also the driver that generates the ObjectId, not the server.", "username": "steevej" }, { "code": "", "text": "Technically correct, is the best form of correct I was more leaning towards something like an aggregation out redirect on the server side where you project out the ID fields etc, so the server generates, but worded it badly.As you say though, the shell is just an application, using a driver!It’s worth knowing how an ID is created, and can also be useful to know when a document ID was generated from the imbedded timestamp (taking the knowledge that it may have been generated on the client into account)", "username": "John_Sewell" }, { "code": "", "text": "I see, gotchu, thank you all very much for the explanation and insights! I checked this page https://www.mongodb.com/docs/manual/reference/method/ObjectId/ you shared, it looks the ObjectId is constructed from information from the current timestamp, random value unique to machine and process, and an incrementing counter; if I understand this correctly, then there should not be any recycling of old deleted ObjectId right? Thank you very much for all your help again!", "username": "fried_empanada" }, { "code": "", "text": "there should not be any recycling of old deleted ObjectId right?yes, this is correct.", "username": "Kobe_W" } ]
Large number of deletes in a transaction
2023-07-21T22:03:33.213Z
Large number of deletes in a transaction
761
null
[ "aggregation", "queries", "python", "indexes" ]
[ { "code": "", "text": "Does anyone here know the general concept of how to find the second most efficient index setup when given an aggregation?For example lets say I have something like this:\ndb.collection.find(name: “Daniel”).sort(age: 1, height: 1)I understand the ESR rule and that ultimately the most effective index for this would be:\ndb.collection.createIndex(name:1, age:1, height:1)What confuses me is how I would go about finding the second most effective index. I thought that in order to find the second most effective I would simply invert all values. In this case everything would become -1.However, according to the python practice certification problem I completed (which is extremely similar to my example with the only difference being the field names), instead it was stated that the proper second most effective answer would be:\ndb.collection.createIndex(name:1, age:-1, height:-1)Can someone explain why this is? Does this have to do with ESR? I am wondering if maybe I simply need to invert the groupings in reverse order of ESR so that if I had any range values I would invert them. If I did not have range values then I invert all sort values as I do here.I have been looking for information on this all over and have not been able to find anything useful the entire time. Any information on this would be helpful, thanks.", "username": "Daniel_R" }, { "code": "....\n winningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { x: 1, y: 1, z: 1 },\n indexName: 'x_1_y_1_z_1',\n isMultiKey: false,\n multiKeyPaths: { x: [], y: [], z: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n x: [ '[1, 1]' ],\n y: [ '[MinKey, MaxKey]' ],\n z: [ '[MinKey, MaxKey]' ]\n }\n }\n },\n....\n...\n winningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { x: 1, y: 1, z: 1 },\n indexName: 'x_1_y_1_z_1',\n isMultiKey: false,\n multiKeyPaths: { x: [], y: [], z: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {\n x: [ '[1, 1]' ],\n y: [ '[MaxKey, MinKey]' ],\n z: [ '[MaxKey, MinKey]' ]\n }\n }\n },\n...\n...\n winningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { x: 1, y: -1, z: -1 },\n indexName: 'x_1_y_-1_z_-1',\n isMultiKey: false,\n multiKeyPaths: { x: [], y: [], z: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {\n x: [ '[1, 1]' ],\n y: [ '[MinKey, MaxKey]' ],\n z: [ '[MinKey, MaxKey]' ]\n }\n }\n....\ny:1, z:1Sy:1, z:1y:-1, z:-1y:1:, z:-1y:-1, z:1yzwinningPlan: {\n stage: 'SORT',\n sortPattern: { y: 1, z: 1 },\n memLimit: 33554432,\n type: 'simple',\n inputStage: {\n stage: 'COLLSCAN',\n filter: { x: { '$eq': 1 } },\n direction: 'forward'\n }\n },\n", "text": "Hi @Daniel_R and welcome to MongoDB community forums!!I would like to explain the indexes and the query using a more simple yet details format.The query would utilise the below index definitions completely.\nCase 1: ERS Rule:\nConsider the query as: db.testI.find({x:1}).sort({y:1, z:1})\nAccording to the ESR rule, the index definition would look like: db.testI.createIndex({x:1, y:1, z:1})The part of the explain output shows that the index definition has been utilised:Case 2: The following query db.testI.find({x:1}).sort({y:-1, z:-1}) would also utilise the same index definition used in Case 1 with equally effective result.The explain output 
would look like:In this case, the direction is marked as backward while in the former case it is marked a s forward.For the above query, if we follow the ESR rule, the most effective index would be db.testI.createIndex({x:1, y:-1, z:-1})\nThe explain output would look like:In conclusion, the y:1, z:1 forms the S in ESR as a group. Thus, as long as it’s consistent (either y:1, z:1 or y:-1, z:-1 ) they should work the same way. This is not the case for y:1:, z:-1 or y:-1, z:1 though, since the ordering of y & z are opposites of each other.\nFor example, if the index is defined as:\ndb.testI.createIndex({x:1, y:-1, z:-1}) and the query is db.testI.find({x:1}).sort({y:1, z:1}) it would not use the index. The explain output would look like:I hope I was able to answer your questions. Please feel free to reach out in case of any further queries.Regards\nAasawari", "username": "Aasawari" } ]
How To Find Most Efficient Index Setup (Python Certification Exam)
2023-08-30T09:31:10.699Z
How To Find Most Efficient Index Setup (Python Certification Exam)
429
null
[ "aggregation", "queries", "crud" ]
[ { "code": "{\n name : \"A\",\n\touterArr : [\n\t\t{\n\t\t\touterId : 1,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Normal'}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\touterId : 2,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Normal'}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\touterId : 3,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Normal'}\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t]\n}\ndb.xyz.updateOne(\n {\n $and: [\n {\"name\" : \"A\"},\n { \"outerArr.outerId\": { $in: [1, 2, 3] } },\n { \"outerArr.innerArr.date\": \"2023-08-3\" },\n { \"outerArr.innerArr.type\": \"Normal\" }\n ]\n },\n {\n \"$set\": {\n \"outerArr.$[i].innerArr.$[j].type\": \"Reserved\"\n }\n },\n {\n \"arrayFilters\": [\n { \n \"outerArr.outerId\": { \"$in\": [1, 2, 3] }\n \n },\n {\n \"innerArr.date\": \"2023-08-3\"\n }\n ]\n }\n)\n{\n\touterArr : [\n\t\t{\n\t\t\touterId : 1,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Reserved'}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\touterId : 2,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Normal'}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\touterId : 3,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Normal'}\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t]\n}\n", "text": "HiMy schema is like belowI want to update multiple sub documents provided in outerId list [1, 2, 3 ] in a document A in a atomic fashion. Either all mentioned outerId should be modified or none.Hep me to write a update command to change type to “Reserved” for a given date. “2023-08-3” to each of outerId in list.I have triedHowever this command fails if json is like below. The problem is outerId 2 and 3 matches conditions but not outerId 1. 
Write should have failed because outerId 1 not matching the query (My expectation), but when i run write succeeds.\nHow to make it work for outerId 1 AND outerId 2 AND outerId 3 (logical AND among each outerId in list)Please help in writing proper update", "username": "Manjunath_k_s" }, { "code": "$in$orouterId$all$elemMatchinnerArrconst innerArr = {\n \"$elemMatch\": {\n \"date\": \"2023-08-3\",\n \"type\": \"Normal\"\n }\n};\ndb.xyz.updateOne(\n {\n \"name\": \"A\",\n \"outerArr\": {\n \"$all\": [\n {\n \"$elemMatch\": {\n \"outerId\": 1,\n \"innerArr\": innerArr\n }\n },\n {\n \"$elemMatch\": {\n \"outerId\": 2,\n \"innerArr\": innerArr\n }\n },\n {\n \"$elemMatch\": {\n \"outerId\": 3,\n \"innerArr\": innerArr\n }\n }\n ]\n }\n },\n {\n \"$set\": {\n \"outerArr.$[i].innerArr.$[j].type\": \"Reserved\"\n }\n },\n {\n \"arrayFilters\": [\n { \"i.outerId\": { \"$in\": [1, 2, 3] } },\n { \"j.date\": \"2023-08-3\" }\n ]\n }\n)\n", "text": "Hello @Manjunath_k_s,The $in operator returns true if the value of the field matches any of the values from the array, same as $or condition,You can try $all and $elemMatch operators, but you need to specify the separate condition for each outerId.", "username": "turivishal" }, { "code": "", "text": "Hi Vishal,Thanks for suggestion.Is it possible to give the solution you have provided in spring mongodb syntax ? Atleast the query format ?", "username": "Manjunath_k_s" }, { "code": "javaspring-data-odm", "text": "Hello @Manjunath_k_s,I don’t know more about spring syntax, so I would suggest you ask a new topic with the java and spring-data-odm tags so the related person will help you.", "username": "turivishal" } ]
Need help to write an updateOne query for my schema
2023-09-02T16:57:18.533Z
Need help to write an updateOne query for my schema
394
null
[ "connecting", "serverless", "next-js" ]
[ { "code": "ERROR\tUnhandled Promise Rejection \n\n{\n \"errorType\":\"Runtime.UnhandledPromiseRejection\",\n \"errorMessage\":\"MongoServerSelectionError: Server selection timed out after 30000 ms\",\n \"reason\":{\n \"errorType\":\"MongoServerSelectionError\",\n \"errorMessage\":\"Server selection timed out after 30000 ms\",\n \"reason\":{\n \"type\":\"ReplicaSetNoPrimary\",\n \"servers\":{\n \n },\n \"stale\":false,\n \"compatible\":true,\n \"heartbeatFrequencyMS\":10000,\n \"localThresholdMS\":15,\n \"setName\":\"atlas-j7739j-shard-0\",\n \"maxElectionId\":null,\n \"maxSetVersion\":null,\n \"commonWireVersion\":0,\n \"logicalSessionTimeoutMinutes\":null\n },\n \"stack\":[\n \"MongoServerSelectionError: Server selection timed out after 30000 ms\",\n \" at Timeout._onTimeout (/var/task/node_modules/mongodb/lib/sdam/topology.js:278:38)\",\n \" at listOnTimeout (node:internal/timers:569:17)\",\n \" at process.processTimers (node:internal/timers:512:7)\"\n ]\n },\n \"promise\":{\n \n },\n \"stack\":[\n \"Runtime.UnhandledPromiseRejection: MongoServerSelectionError: Server selection timed out after 30000 ms\",\n \" at process.<anonymous> (file:///var/runtime/index.mjs:1250:17)\",\n \" at process.emit (node:events:526:35)\",\n \" at emit (node:internal/process/promises:149:20)\",\n \" at processPromiseRejections (node:internal/process/promises:283:27)\",\n \" at process.processTicksAndRejections (node:internal/process/task_queues:96:32)\"\n ]\n}\n\n\"LAMBDA_RUNTIME Failed to post handler success response. Http response code\":\"400.\n", "text": "Is there any particular reason why I am intermittently getting this error in my Next.js Vercel deployment when connected to MongoDB? This isn’t happening too often but I would like to understand what the issue is and fix it. I’m passing the options { useNewUrlParser: true, useUnifiedTopology: true } to the MongoClient. It appears this is a somewhat common problem on Vercel, but I haven’t found any solutions. Any ideas? Thanks.", "username": "Adam_Romero" }, { "code": "", "text": "Hi @Adam_Romero,It appears this is a somewhat common problem on Vercel, but I haven’t found any solutions. Any ideas?Difficult to decisively say at the moment what the issue could be. To confirm from the Atlas perspective, have you checked your cluster at the time of these errors to see if there were any possible issues (restarts, resource exhaustion, etc)?You could also try having another application outside of Vercel connecting to the same cluster to see if it generates the same error and the same time for troubleshooting purposes. This might help narrow down where / what the issue could be.Regards,\nJason", "username": "Jason_Tran" } ]
Intermittently getting MongoServerSelectionError on my Next.js Vercel deployment
2023-09-02T01:26:54.074Z
Intermittently getting MongoServerSelectionError on my Next.js Vercel deployment
555
null
[ "node-js", "replication", "mongoose-odm", "atlas-cluster" ]
[ { "code": "ab64f03c-c8f9-43fb-8cb0-20197289717f\tINFO\tMongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/\n at _handleConnectionErrors (/var/task/node_modules/mongoose/lib/connection.js:791:11)\n at NativeConnection.openUri (/var/task/node_modules/mongoose/lib/connection.js:766:11)\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async /var/task/db.js:12:7 {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-4ugleph-shard-00-00.btwmhhb.mongodb.net:27017' => [ServerDescription],\n 'ac-4ugleph-shard-00-01.btwmhhb.mongodb.net:27017' => [ServerDescription],\n 'ac-4ugleph-shard-00-02.btwmhhb.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-xsqkyc-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\nconst mongoose = require('mongoose');\nconst config = require('config');\nconst chalk = require('chalk');\n\n// Mongodb ServerUrl\n\nconst connectUrl = 'const connectUrl = mongodb+srv://abbakid:<password>@abbakid-dev.erpttih.mongodb.net/Abbakid?retryWrites=true&w=majority';\n\n\n(async () => {\n try {\n await mongoose.connect(connectUrl, {\n useNewUrlParser: true,\n useUnifiedTopology: true\n });\n console.log(chalk.yellow('MongoDB connected...', connectUrl));\n } catch (err) {\n console.log(err);\n console.log(chalk.red('Error in DB connection: ' + err));\n }\n})();\n\nmodule.exports = mongoose;\n", "text": "Mongodb connection diconnected after 2-3 hrs.My backend application is running on AWS Lamba on the top of API gateway. Its a monoletic application which has 20+ endpoints.Network access - 0.0.0.0/0 (includes your current IP address)I am trying to connect mongo from there - First 2-3 hrs it is working fine. Then I am getting this below error. -When I ran the same code from Ec2, it is working fine.How I am connecting mongo -Any idea what I am doing wrong.", "username": "Atique_Ahmed" }, { "code": "", "text": "Hi @Atique_Ahmed - Welcome to the community.Firstly, thanks for providing all those details including the error and connection portion of the code.When I ran the same code from Ec2, it is working fine.I’d like to gather a few more details regarding this scenario:Regards,\nJason", "username": "Jason_Tran" } ]
MongoDB connection disconnected after 2-3 hrs
2023-08-31T07:16:43.248Z
MongoDB connection disconnected after 2-3 hrs
484
null
[ "aggregation", "crud" ]
[ { "code": "", "text": "In the JavaScript Web API to Atlas App Services I’d like to use updateMany with an aggregation pipeline, but am getting the error: cannot transform type primitive.D to a BSON Document: WriteArray can only write a Array while positioned on a Element or Value but is positioned on a TopLevel.Is the aggregation pipeline supported within updateMany? I’m a bit confused by which documentation I should be looking at.If it’s not supported, do I have any options for conditionally updating multiple documents without looping through on the client side, reading each document, testing the condition, then updating?", "username": "Daniel_Bernasconi" }, { "code": "updateMany()<update>db.collection.updateMany(\n <filter>,\n [ <update> ] // aggregation pipeline\n)\nprimitive.D", "text": "Hey @Daniel_Bernasconi,Is the aggregation pipeline supported within updateMany? I’m a bit confused by which documentation I should be looking at.The aggregation pipeline within updateMany() is fully supported from MongoDB 4.2 onwards. As stated in the MongoDB documentation, the <update> argument must be an array of one or more pipeline stages.If you are encountering an issue, please make sure you are running MongoDB server version 4.2 or later, and everything is correct in your pipeline stages.error: cannot transform type primitive.D to a BSON Document: WriteArray can only write a Array while positioned on a Element or Value but is positioned on a TopLevel.As for the error that cannot transform type primitive.D to a BSON Document, it appears your update clause may not be formatted correctly. Could you please share more details about your update clause so we can assist accordingly? In addition to these, please also share the sample documents and the expected output.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": " // Error\n taskCollection.updateMany(\n { sectionId : sId },\n [{ $set: { rows: {\n $add: [\n \"$rows\",\n { $subtract: [\n { $min: [ 7, { $add: [ \"$rowStart\", \"$rows\", -1 ] } ] },\n { $max: [ 3, { $add: [ \"$rowStart\", 1 ] } ] }\n ]},\n ],\n }}}],\n );\n // Error\n taskCollection.updateMany(\n { },\n [{$set: { rows : 3 }}],\n );\n // No error\n taskCollection.updateMany(\n { },\n {$set: { rows : 3 }},\n );\n", "text": "Thanks Kushagra. I’m using the hosted Atlas App Services (cloud.mongodb.com) and the version is 6.0.9.The code that is generating that error is as follow:However, I get the same erorr with this:I get no error if I do a $set without aggretation:", "username": "Daniel_Bernasconi" }, { "code": "", "text": "Hey @Daniel_Bernasconi,Could you confirm if you are talking about Atlas Functions?If yes, could you share the whole code snippet and the exact error message you are encountering?Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Kushagra_Kesav,No, this is not an Atlas Function. This is a JavaScript code that I’m running within a React app within a browser on a PC, using the Realm-Web API.I’ve got numerous database API calls that work fine, and have been developing the application for a few months, but this is the first time I’ve tried to use aggregation in an updateMany call.The code is as in my previous post, and the error I’m getting is: “cannot transform type primitive.D to a BSON Document: WriteArray can only write a Array”Thanks", "username": "Daniel_Bernasconi" } ]
updateMany with aggregation pipeline
2023-09-02T06:05:06.500Z
updateMany with aggregation pipeline
446
null
[ "aggregation", "queries" ]
[ { "code": "// users collection\n[\n {\n name: \"M\",\n email: \"\",\n phone: \"\",\n events: [\n {\n _id: \"1234567\"\n }\n ]\n },\n {\n name: \"N\",\n email: \"\",\n phone: \"\",\n events: [\n {\n _id: \"1234567\"\n }\n ]\n },\n {\n name: \"0\",\n email: \"\",\n phone: \"\",\n events: [\n {\n _id: \"8907867\"\n }\n ]\n }\n]\n\n", "text": "Hi,I have a collection with users. The “events” array is embedded in each user document. If an event is deleted globally, i want to run an aggregation that find all the docs with the event _id in them and then delete the event from each remaining document “events” array.Something like this:say i want to delete event _id: 1234567 from the users who have that event in their embedded events array. How can i go about doing that with an aggregation pipeline? Or, is there a simpler way to achieve this?", "username": "Michael_Murray4" }, { "code": "", "text": "You could probably use the $pull operator:See examples in that page for something similar to your scenario.This is to actually update the document, I assume you don’t just want to use aggregation queries to present data with explicit events removed each time?", "username": "John_Sewell" }, { "code": "", "text": "Hi John,You are correct l, thank you!!", "username": "Michael_Murray4" } ]
Delete object from multiple arrays during aggregation
2023-09-04T18:16:10.376Z
Delete object from multiple arrays during aggregation
190
null
[ "upgrading" ]
[ { "code": "mongodsudo systemctl start mongodsystemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Thu 2022-03-03 07:54:19; 2min 37s ago\n Docs: https://docs.mongodb.org/manual\n Process: 1452 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=62)\n Main PID: 1452 (code=exited, status=62)\n\nMar 03 09:54:17 ubuntu systemd[1]: Started MongoDB Database Server.\nMar 03 09:54:19 ubuntu systemd[1]: mongod.service: Main process exited, code=exited, status=62/n/a\nMar 03 09:54:19 ubuntu systemd[1]: mongod.service: Failed with result 'exit-code'.\nmongod", "text": "Hi, I recently upgraded :from v4.2.9 to v5.0.6 in Ubuntu Desktop 20.04.3.However, now I can not start mongod as it failed after being started using sudo systemctl start mongod.Here is the log :It is said that status code 62 means : Returned by mongod if the datafiles in --dbpath are incompatible with the version of mongod currently running.Is MongoDB v5.0.6 incompatible with Ubuntu Desktop 20.04.3 ?\nHow can I fix and start mongod normally ?", "username": "marc" }, { "code": "Active: failed (Result: exit-code) since Thu 2022-03-03 09:54:19; 2min 37s ago\n", "text": "Sorry, let me correct thus line :", "username": "marc" }, { "code": "", "text": "Hi @marc,Skipping over major release versions for an in-place upgrade (i.e. using the same data files) is currently not supported. Any required changes to data files are performed as part of each major version upgrade.If you want to perform an in-place upgrade from MongoDB 4.2 to MongoDB 5.0 you need to:Upgrade MongoDB 4.2 to MongoDB 4.4 (note the important final step to Enable backwards-incompatible 4.4 features).Upgrade MongoDB 4.4 to MongoDB 5.0Is MongoDB v5.0.6 incompatible with Ubuntu Desktop 20.04.3 ?Per the MongoDB Production Notes, Ubuntu 20.04 is currently supported for MongoDB 4.4 and 5.0. Ubuntu 20.04 is a Long Term Support (LTS) Ubuntu release, so it won’t reach End-of-Life until April, 2025.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Status Code 62 suggests that the data directory defined in the /etc/mongod.conf file under the key storage->dbPath is not compatible with this version. In case you don’t need to keep the backup of your database, you can just erase the path specified in /etc/mongod.conf and then uninstall and reinstall the mongodb.So the steps are:In case you need to keep backup of your data, please make sure to copy data directory first.", "username": "Tarif_Ezaz" } ]
MongoDB v5.0.6 exited with status code 62
2022-03-03T10:14:31.892Z
MongoDB v5.0.6 exited with status code 62
8,076
https://www.mongodb.com/…180a3b5a4025.png
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "DB_CONNECT = 'mongodb+srv://ID:[email protected]/?retryWrites=true&w=majority';\nconst dotenv = require(\"dotenv\").config();\nmongoose.set(\"strictQuery\", false);\n\nmongoose.connect(process.env.DB_CONNECT, {\n\n useUnifiedTopology: true,\n\n useNewUrlParser: true,\n\n}).then(console.log('connect sucess to mongodb'))\n\nbot.ticketTranscript = mongoose.model('transcripts',\n\n new mongoose.Schema({\n\n Channel : String,\n\n Content : Array\n\n })\n\n)\nthrow new MongoParseError('Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"');\n\nMongoParseError: Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"\n", "text": "Hello,\nI have an error that blocks me, have you ever encountered this error?File .env =>index.jsError =>Thanks you in advance", "username": "bill" }, { "code": "process.env.DB_CONNECT", "text": "Hello @bill, Welcome to the MongoDB community forum,Can you please make sure by consol print this variable process.env.DB_CONNECT has the correct connection string?", "username": "turivishal" }, { "code": "'mongodb+srv://ID:[email protected]/?retryWrites=true&w=majority';\n", "text": "Hi, thanks you for answer,console displayWith ’ ';It’s normal ?", "username": "bill" }, { "code": " useUnifiedTopology: true,useNewUrlParser: true", "text": "It looks good, check out the documentation, I think you are missing something debug your code step by step,\nhttps://mongoosejs.com/docs/connections.htmlOut of the question, If you are using mongoose latest version then don’t need to pass useUnifiedTopology: true, and useNewUrlParser: true in connection because by default it set true", "username": "turivishal" }, { "code": "", "text": "It’s normal ?it notremove the quotes and if it still does not work remove leading and trailing spacss", "username": "steevej" }, { "code": "", "text": "The trailing semicolon is probably erroneous too.", "username": "steevej" }, { "code": "strictQueryfalsemongoose.set('strictQuery', false);mongoose.set('strictQuery', true);", "text": "[MONGOOSE] DeprecationWarning: Mongoose: the strictQuery option will be switched\nback to false by default in Mongoose 7. Use mongoose.set('strictQuery', false); if you want to prepare for this change. Or use mongoose.set('strictQuery', true); to suppress this warning.Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”both the error will be gone\nfirst use the\n// mongoose.set(‘strictQuery’, true) in top\nand remove the extra space in the link of mondodb", "username": "Sachin_Pandey" }, { "code": "", "text": "remove the ‘;’ from the last of .env file, and it will work", "username": "Tushar_Kumar2" }, { "code": "", "text": "why are you storing string value in env??? and also why ‘;’\nThis should be-DB_CONNECT = mongodb+srv://ID:[email protected]/?retryWrites=true&w=majority", "username": "Shadab_Ahmed" } ]
Error: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
2023-04-27T18:36:41.786Z
Error: Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”
1,429
null
[ "change-streams", "kafka-connector" ]
[ { "code": "If the resume token is no longer available then there is the potential for data loss.\nSaved resume tokens are managed by Kafka and stored with the offset data.\n\nTo restart the change stream with no resume token either: \n * Create a new partition name using the `offset.partition.name` configuration.\n * Set `errors.tolerance=all` and ignore the erroring resume token. \n * Manually remove the old offset from its configured storage.\n\nResetting the offset will allow for the connector to be resume from the latest resume\ntoken. Using `startup.mode = copy_existing` ensures that all data will be outputted by the\nconnector but it will duplicate existing data.\n=====================================================================================\n (com.mongodb.kafka.connect.source.MongoSourceTask)\n[2023-08-24 08:38:18,471] INFO Unable to recreate the cursor (com.mongodb.kafka.connect.source.MongoSourceTask)\n[2023-08-24 08:38:18,477] INFO Watching for collection changes on 'avd.vehicles' (com.mongodb.kafka.connect.source.MongoSourceTask)\n[2023-08-24 08:38:18,478] INFO New change stream cursor created without offset. (com.mongodb.kafka.connect.source.MongoSourceTask)\n[2023-08-24 08:38:18,480] WARN Failed to resume change stream: The $changeStream stage is only supported on replica sets 40573\n\n", "text": "Hi, I’m trying to use the MongoDB Kafka Source Connector v1.11.0 but it’s not working and the logs keep printing the following in an endless loop", "username": "Siya_Sosibo" }, { "code": "$changeStream stage is only supported on replica sets", "text": "$changeStream stage is only supported on replica setsThe source needs to be a replica set and can not be a stand alone MongoDB node", "username": "Robert_Walters" } ]
MongoDB Kafka Source Connector unable to recreate the cursor
2023-08-29T19:20:39.301Z
MongoDB Kafka Source Connector unable to recreate the cursor
462
null
[ "aggregation" ]
[ { "code": "{\n \"Meta.FlowId\" : 1,\n \"Meta.UpstreamMessageId\" : 1,\n \"MessageType\" : 1,\n \"Meta.TrackingId\" : 1,\n \"Status\" : 1,\n \"HandleResult.HandleStatus\" : 1\n}\ndb.getCollection('FlowMessageInfo').explain(\"executionStats\").aggregate([\n { $match: { $and: [{'Meta.FlowId' : UUID('ce5d9c36-68be-4d3d-95af-7904a9fab34a')}, \n {'Meta.UpstreamMessageId': {$ne: null}},\n {'MessageType': {$ne: 'PublishDoneMessage'}}]}\n },\n { $group: {_id: '$Meta.TrackingId',\n all: {$sum: {$cond: [{$ne: ['$Status', 'ContainsInvalidEntities']}, 1, 0]}}, \n processed: {$sum: {$cond: [{$and: [{$or: [{$eq: ['$Status', 'Processed']}, {$eq: ['$Status', 'Failed']}]}, {$ne: ['$HandleResult.HandleStatus', 'Deferred']}]}, 1, 0]}},\n failed: {$sum: {$cond: [{$eq: ['$Status', 'Failed']}, 1, 0]}},\n invalids: {$sum: {$cond: [{$or: [{$eq: ['$Status', 'ContainsInvalidEntities']}, {$eq: ['$HandleResult.HandleStatus', 'MissingSourceData']}]}, 1, 0]}}}\n }, \n { $group: {_id: null, \n all: {$sum: 1},\n processed: {$sum: {$cond: [{$eq: ['$all', '$processed']}, 1, 0]}},\n failed: {$sum: {$cond: [{$ne: ['$failed', 0]}, 1, 0]}},\n invalids: {$sum: '$invalids'}}\n }\n], {allowDiskUse: true})\n/* 1 */\n{\n \"stages\" : [ \n {\n \"$cursor\" : {\n \"query\" : {\n \"$and\" : [ \n {\n \"Meta.FlowId\" : UUID(\"ce5d9c36-68be-4d3d-95af-7904a9fab34a\")\n }, \n {\n \"Meta.UpstreamMessageId\" : {\n \"$ne\" : null\n }\n }, \n {\n \"MessageType\" : {\n \"$ne\" : \"PublishDoneMessage\"\n }\n }\n ]\n },\n \"fields\" : {\n \"HandleResult.HandleStatus\" : 1,\n \"Meta.TrackingId\" : 1,\n \"Status\" : 1,\n \"_id\" : 0\n },\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"Caps.FlowMessageInfo\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"Meta.FlowId\" : {\n \"$eq\" : UUID(\"ce5d9c36-68be-4d3d-95af-7904a9fab34a\")\n }\n }, \n {\n \"MessageType\" : {\n \"$not\" : {\n \"$eq\" : \"PublishDoneMessage\"\n }\n }\n }, \n {\n \"Meta.UpstreamMessageId\" : {\n \"$not\" : {\n \"$eq\" : null\n }\n }\n }\n ]\n },\n \"queryHash\" : \"C55B39EF\",\n \"planCacheKey\" : \"896E0A8A\",\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n \"HandleResult.HandleStatus\" : 1,\n \"Meta.TrackingId\" : 1,\n \"Status\" : 1,\n \"_id\" : 0\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"Meta.FlowId\" : 1,\n \"Meta.UpstreamMessageId\" : 1,\n \"MessageType\" : 1,\n \"Meta.TrackingId\" : 1,\n \"Status\" : 1,\n \"HandleResult.HandleStatus\" : 1\n },\n \"indexName\" : \"Meta.FlowId_1_Meta.UpstreamMessageId_1_MessageType_1_Meta.TrackingId_1_Status_1_HandleResult.HandleStatus_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"Meta.FlowId\" : [],\n \"Meta.UpstreamMessageId\" : [],\n \"MessageType\" : [],\n \"Meta.TrackingId\" : [],\n \"Status\" : [],\n \"HandleResult.HandleStatus\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"Meta.FlowId\" : [ \n \"[UUID(\\\"ce5d9c36-68be-4d3d-95af-7904a9fab34a\\\"), UUID(\\\"ce5d9c36-68be-4d3d-95af-7904a9fab34a\\\")]\"\n ],\n \"Meta.UpstreamMessageId\" : [ \n \"[MinKey, undefined)\", \n \"(null, MaxKey]\"\n ],\n \"MessageType\" : [ \n \"[MinKey, \\\"PublishDoneMessage\\\")\", \n \"(\\\"PublishDoneMessage\\\", MaxKey]\"\n ],\n \"Meta.TrackingId\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"Status\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"HandleResult.HandleStatus\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n },\n 
\"rejectedPlans\" : []\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 994520,\n \"executionTimeMillis\" : 13026,\n \"totalKeysExamined\" : 994523,\n \"totalDocsExamined\" : 0,\n \"executionStages\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"nReturned\" : 994520,\n \"executionTimeMillisEstimate\" : 358,\n \"works\" : 994523,\n \"advanced\" : 994520,\n \"needTime\" : 2,\n \"needYield\" : 0,\n \"saveState\" : 8039,\n \"restoreState\" : 8039,\n \"isEOF\" : 1,\n \"transformBy\" : {\n \"HandleResult.HandleStatus\" : 1,\n \"Meta.TrackingId\" : 1,\n \"Status\" : 1,\n \"_id\" : 0\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 994520,\n \"executionTimeMillisEstimate\" : 107,\n \"works\" : 994523,\n \"advanced\" : 994520,\n \"needTime\" : 2,\n \"needYield\" : 0,\n \"saveState\" : 8039,\n \"restoreState\" : 8039,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"Meta.FlowId\" : 1,\n \"Meta.UpstreamMessageId\" : 1,\n \"MessageType\" : 1,\n \"Meta.TrackingId\" : 1,\n \"Status\" : 1,\n \"HandleResult.HandleStatus\" : 1\n },\n \"indexName\" : \"Meta.FlowId_1_Meta.UpstreamMessageId_1_MessageType_1_Meta.TrackingId_1_Status_1_HandleResult.HandleStatus_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"Meta.FlowId\" : [],\n \"Meta.UpstreamMessageId\" : [],\n \"MessageType\" : [],\n \"Meta.TrackingId\" : [],\n \"Status\" : [],\n \"HandleResult.HandleStatus\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"Meta.FlowId\" : [ \n \"[UUID(\\\"ce5d9c36-68be-4d3d-95af-7904a9fab34a\\\"), UUID(\\\"ce5d9c36-68be-4d3d-95af-7904a9fab34a\\\")]\"\n ],\n \"Meta.UpstreamMessageId\" : [ \n \"[MinKey, undefined)\", \n \"(null, MaxKey]\"\n ],\n \"MessageType\" : [ \n \"[MinKey, \\\"PublishDoneMessage\\\")\", \n \"(\\\"PublishDoneMessage\\\", MaxKey]\"\n ],\n \"Meta.TrackingId\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"Status\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"HandleResult.HandleStatus\" : [ \n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\" : 994523,\n \"seeks\" : 3,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n }\n }\n }, \n {\n \"$group\" : {\n \"_id\" : \"$Meta.TrackingId\",\n \"all\" : {\n \"$sum\" : {\n \"$cond\" : [ \n {\n \"$ne\" : [ \n \"$Status\", \n {\n \"$const\" : \"ContainsInvalidEntities\"\n }\n ]\n }, \n {\n \"$const\" : 1.0\n }, \n {\n \"$const\" : 0.0\n }\n ]\n }\n },\n \"processed\" : {\n \"$sum\" : {\n \"$cond\" : [ \n {\n \"$and\" : [ \n {\n \"$or\" : [ \n {\n \"$eq\" : [ \n \"$Status\", \n {\n \"$const\" : \"Processed\"\n }\n ]\n }, \n {\n \"$eq\" : [ \n \"$Status\", \n {\n \"$const\" : \"Failed\"\n }\n ]\n }\n ]\n }, \n {\n \"$ne\" : [ \n \"$HandleResult.HandleStatus\", \n {\n \"$const\" : \"Deferred\"\n }\n ]\n }\n ]\n }, \n {\n \"$const\" : 1.0\n }, \n {\n \"$const\" : 0.0\n }\n ]\n }\n },\n \"failed\" : {\n \"$sum\" : {\n \"$cond\" : [ \n {\n \"$eq\" : [ \n \"$Status\", \n {\n \"$const\" : \"Failed\"\n }\n ]\n }, \n {\n \"$const\" : 1.0\n }, \n {\n \"$const\" : 0.0\n }\n ]\n }\n },\n \"invalids\" : {\n \"$sum\" : {\n \"$cond\" : [ \n {\n \"$or\" : [ \n {\n \"$eq\" : [ \n \"$Status\", \n {\n \"$const\" : \"ContainsInvalidEntities\"\n }\n ]\n }, \n {\n \"$eq\" : [ \n \"$HandleResult.HandleStatus\", \n {\n \"$const\" : \"MissingSourceData\"\n }\n ]\n }\n ]\n }, \n {\n \"$const\" : 1.0\n }, \n {\n \"$const\" : 0.0\n }\n ]\n }\n }\n }\n }, \n {\n \"$group\" : {\n \"_id\" : {\n \"$const\" : null\n },\n \"all\" : {\n \"$sum\" : {\n 
\"$const\" : 1.0\n }\n },\n \"processed\" : {\n \"$sum\" : {\n \"$cond\" : [ \n {\n \"$eq\" : [ \n \"$all\", \n \"$processed\"\n ]\n }, \n {\n \"$const\" : 1.0\n }, \n {\n \"$const\" : 0.0\n }\n ]\n }\n },\n \"failed\" : {\n \"$sum\" : {\n \"$cond\" : [ \n {\n \"$ne\" : [ \n \"$failed\", \n {\n \"$const\" : 0.0\n }\n ]\n }, \n {\n \"$const\" : 1.0\n }, \n {\n \"$const\" : 0.0\n }\n ]\n }\n },\n \"invalids\" : {\n \"$sum\" : \"$invalids\"\n }\n }\n }\n ],\n \"serverInfo\" : {\n \"host\" : \"ip-10-93-178-93.us-west-2.compute.internal\",\n \"port\" : 27017,\n \"version\" : \"4.2.12\",\n \"gitVersion\" : \"5593fd8e33b60c75802edab304e23998fa0ce8a5\"\n },\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1693474550, 6),\n \"signature\" : {\n \"hash\" : { \"$binary\" : \"DLFvEwE9WWem+6i/CKdJchM/hbc=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(7217799378239488001)\n }\n },\n \"operationTime\" : Timestamp(1693474550, 6)\n}\n", "text": "Hi,I am trying to wrap my head around covered queries in context of aggregations, but firstly, mongodb cluster specs:I tried to cover my query with compound index, but the explain shows “PROJECTION_DEFAULT” instead of “PROJECTION_COVERED”, it takes ~12 seconds to execute on 1 million docs, and have to use disk.Index:Aggregation query:Result from explain:Do you have any idea how i can cover this query? I assume that the low performance of this query is caused by disk swapping that could be avoided by query covering.\nThanks.", "username": "Inclouds_N_A" }, { "code": "", "text": "As the documentation on the covering aggregation queries is non existent, could you please provide me with some courses/books that i could read on this matter? Or some another source of knowledge?", "username": "Inclouds_N_A" } ]
Aggregation query not covered
2023-08-31T09:40:43.599Z
Aggregation query not covered
274
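One detail worth noting in the explain output above: totalDocsExamined is 0, so the $match/$project part of the pipeline is already being answered from the index alone even though the projection stage reports PROJECTION_DEFAULT rather than PROJECTION_COVERED; the remaining cost is scanning roughly a million index keys plus the two $group stages. A way to sanity-check coverage in isolation is to run the equivalent find() with the same projection and compare explain outputs. The sketch below is only an illustration: the collection name messages is an assumption, and the filter is reconstructed from the index bounds shown in the posted plan.

    // mongosh sketch - hypothetical collection name
    db.messages.find(
      {
        "Meta.FlowId": UUID("ce5d9c36-68be-4d3d-95af-7904a9fab34a"),
        "Meta.UpstreamMessageId": { $ne: null },
        "MessageType": { $ne: "PublishDoneMessage" }
      },
      { _id: 0, "Meta.TrackingId": 1, "Status": 1, "HandleResult.HandleStatus": 1 }
    ).explain("executionStats")
    // Compare totalDocsExamined and the top-level stage name with the aggregation's
    // explain; if both examine only index keys, the index is doing the same work.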
null
[ "compass" ]
[ { "code": "", "text": "My application maintains logs in MongoDB through Serilog and it was working great so far.Recently, MondoDB server was moved to a different server for our DEV environment. Post this, the following tasks were performed at our end:All the developers’ IP addresses were whitelisted for connectivity and we are able to read/write using Compass client.Our application’s web configurations were updated with the new connectionstringHowever, logging from our application in DEV (Visual studio) environment has stopped. Any attempt to check for possible cause has lead us nowhere.Please suggest what could be the issue and if I should get in touch with our networking people for this.", "username": "Stak" }, { "code": "", "text": "What errors are being thrown by your dev app and in mongodb log file?", "username": "Kobe_W" }, { "code": "", "text": "That’s the issue, serilog doesn’t throw any error which is making it difficult to pinpoint the root cause.", "username": "Stak" }, { "code": "", "text": "I just enabled debugging/diagnostics for Serilog and below is the generated exception:2023-09-04T07:08:46.0982059Z Exception while emitting periodic batch from Serilog.Sinks.MongoDB.MongoDBSink: System.AggregateException: One or more errors occurred. —> MongoDB.Driver.MongoCommandException: Command insert failed: Unsupported OP_QUERY command: insert. The client driver may require an upgrade. For more details see https://dochub.mongodb.org/core/legacy-opcode-removal.I guess this is due to version incompatibility between the MongoDB sink and the server. For now, I’ve asked IT team to install the server version which worked before.", "username": "Stak" } ]
MongoDB with Serilog not working
2023-09-01T12:24:35.189Z
MongoDB with Serilog not working
419
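The quoted exception ("Unsupported OP_QUERY command: insert ... legacy-opcode-removal") is the server rejecting the legacy wire-protocol opcodes removed in MongoDB 5.1+, which fits the diagnosis of an older Serilog sink/driver talking to a newer server. A quick, hedged way to confirm the mismatch from the shell (exact numbers will differ in your environment):

    // Run against the new DEV server with mongosh
    db.version()                                 // server version, e.g. 6.0.x
    db.runCommand({ hello: 1 }).maxWireVersion   // wire-protocol range the server accepts
    // If the sink's bundled .NET driver predates OP_MSG-only servers,
    // upgrading Serilog.Sinks.MongoDB (and its MongoDB.Driver dependency)
    // is the alternative to downgrading the server.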
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "F:\\Node js\\mongo db\\node_modules\\mongoose\\lib\\connection.js:755\n err = new ServerSelectionError();\n ^\n\nMongooseServerSelectionError: connect ECONNREFUSED ::1:27017\n at _handleConnectionErrors (F:\\Node js\\mongo db\\node_modules\\mongoose\\lib\\connection.js:755:11)\n at NativeConnection.openUri (F:\\Node js\\mongo db\\node_modules\\mongoose\\lib\\connection.js:730:11)\n at runNextTicks (node:internal/process/task_queues:60:5)\n at listOnTimeout (node:internal/timers:538:9)\n at process.processTimers (node:internal/timers:512:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 4030433,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED ::1:27017\n at connectionFailureError (F:\\Node js\\mongo db\\node_modules\\mongodb\\lib\\cmap\\connect.js:383:20)\n at Socket.<anonymous> (F:\\Node js\\mongo db\\node_modules\\mongodb\\lib\\cmap\\connect.js:307:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16) {\n errno: -4078,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '::1',\n port: 27017\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\n\nNode.js v18.14.0\n[nodemon] app crashed - waiting for file changes before starting...\n[nodemon] restarting due to changes...\n[nodemon] starting `node index.js`\nServer is running on port : 5000\nF:\\Node js\\mongo db\\node_modules\\mongoose\\lib\\connection.js:755\n err = new ServerSelectionError();\n ^\n\nMongooseServerSelectionError: connect ECONNREFUSED ::1:27017\n at _handleConnectionErrors (F:\\Node js\\mongo db\\node_modules\\mongoose\\lib\\connection.js:755:11)\n at NativeConnection.openUri (F:\\Node js\\mongo db\\node_modules\\mongoose\\lib\\connection.js:730:11)\n at runNextTicks (node:internal/process/task_queues:60:5)\n at listOnTimeout (node:internal/timers:538:9)\n at process.processTimers (node:internal/timers:512:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 4708510,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED ::1:27017\n at connectionFailureError (F:\\Node js\\mongo db\\node_modules\\mongodb\\lib\\cmap\\connect.js:383:20)\n at Socket.<anonymous> (F:\\Node js\\mongo db\\node_modules\\mongodb\\lib\\cmap\\connect.js:307:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit 
(node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16) {\n errno: -4078,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '::1',\n port: 27017\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\n\nNode.js v18.14.0\n[nodemon] app crashed - waiting for file changes before starting...```\n\n\n\n\n\n\n\nplease help i am stuck from last 2 nights.", "text": "Hey it’s my node js code </>\nconst express= require(‘express’);\nconst mongoose = require(‘mongoose’);\nconst app = express();\nconst mongoDB = “mongodb://localhost:27017/ecomm”;mongoose.connect(mongoDB,{ useNewUrlParser: true })\n.then(()=>console.log(‘connection successfully’))app.listen(5000,()=>{\nconsole.log(‘Server is running on port : 5000’);\n})</>i am getting to error when i execute this code to connect mongodb from node js", "username": "Rishav_Saxena" }, { "code": "", "text": "Try with 127.0.0.1 instead localhost in your code", "username": "Ramachandra_Tummala" }, { "code": "", "text": "It really worked , thank a lot", "username": "Basith_nizam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I am getting an error connecting MongoDB with Node.js
2023-03-05T05:05:17.310Z
I am getting an error connecting MongoDB with Node.js
2,555
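The accepted fix works because the stack trace shows the driver dialing ::1 (the IPv6 loopback): on newer Node.js versions "localhost" may resolve to ::1 first, while mongod commonly listens only on 127.0.0.1. A minimal sketch of the adjusted connection, reusing the database name from the original snippet:

    const mongoose = require('mongoose');

    // Use the IPv4 loopback explicitly instead of relying on how "localhost" resolves
    const mongoDB = 'mongodb://127.0.0.1:27017/ecomm';

    mongoose.connect(mongoDB)
      .then(() => console.log('connection successful'))
      .catch((err) => console.error('connection error:', err));

Alternatively, enabling IPv6 on mongod (net.ipv6: true and adding ::1 to bindIp) avoids touching the application code.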
null
[]
[ { "code": "", "text": "MongoServerSelectionError: read ECONNRESET. It looks like this is a MongoDB Atlas cluster. Please ensure that your Network Access List allows connections from your IP.", "username": "Nithin_Reddy_Nagapur" }, { "code": "", "text": "Hi @Nithin_Reddy_Nagapur and welcome to MongoDB community forums!!Could you confirm if you have performed the needed steps to whitelist the IPs in your Atlas cluster.\nYou can follow the procedure to Add Your Connection IP Address to Your IP Access List and let us know if the issue still persists.\nIf after performing the above steps you are sill seeing the similar error, could you help us with the following information:Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "I’m new to this MongoDB. I am trying to create the Atlas cluster repeatedly. But the error is coming. So, please clear this issue.", "username": "VENKATANATHAN_P_R" }, { "code": "", "text": " Hi @VENKATANATHAN_P_RIn general it is preferable to start a new discussion to keep the details of different environments/questions separate and improve visibility of new discussions. That will also allow you to mark your topic as “Solved” when you resolve any outstanding questions.Mentioning the url of an existing discussion on the forum will automatically create links between related discussions for other users to follow.Please have a look at How to write a good post/question for some ideas on best practices.I also recommend reading Getting Started with the MongoDB Community: README.1ST for some tips to help improve your community outcomes.Regards,\nAasawari", "username": "Aasawari" } ]
I am unable to connect to the cluster
2023-08-30T21:27:16.407Z
I am unable to connect to the cluster
811
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": " const articleSchema={\n title:String,\n content:String\n};\nconst Article=mongoose.model(\"Article\",articleSchema);\n\napp.get(\"/articles\",function(req,res){\n Article.find().then(function(foundarticles){\n res.send(foundarticles);\n }).catch(function(err){\n console.log(err);\n });\n});\n\napp.post(\"/articles\",function(req,res){\n console.log(req.body.title);\n console.log(req.body.content);\n const newArticle=new Article({\n title:req.body.title,\n content:req.body.content\n });\n newArticle.save();\n});\n", "text": "", "username": "Hassan_Subhani" }, { "code": "", "text": "Hey @Hassan_Subhani,Welcome to the MongoDB Community!MongoDB: Document failed validationThis error typically occurs when you try to save/insert a document that does not match the validation rules defined in your Mongoose schema.Some common reasons why this error happens:Please make sure the document being saved has all required fields populated.In case of any further help, please share the error log message you are encountering and the workflow of posting the data via API. This will help us to assist you better.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
MongoDB: Document failed validation
2023-08-27T12:19:21.046Z
MongoDB: Document failed validation
555
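When it is not obvious which rule was violated, logging the full error from save() usually points at the offending field; on MongoDB 5.0+ servers the write error for a JSON Schema validation failure also carries an errInfo document with details (availability depends on the server version, so treat that part as an assumption). A minimal sketch building on the route posted in the thread:

    app.post("/articles", function (req, res) {
      const newArticle = new Article({
        title: req.body.title,
        content: req.body.content
      });

      newArticle.save()
        .then(saved => res.send(saved))
        .catch(err => {
          // errInfo (when present) describes exactly which schema rule failed
          console.log(err.errInfo ? JSON.stringify(err.errInfo, null, 2) : err);
          res.status(400).send("Document failed validation");
        });
    });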
null
[ "time-series", "bucket-pattern" ]
[ { "code": "{\n \"Value\": 5.3,\n \"TimeStamp\": \"2023-08-22T12:34:56Z\",\n \"Meta\": {\n \"DeviceId\": \"ID123\",\n \"SensorId\": \"Sensor456\",\n \"TemporaryTimeSeries\": true\n }\n}\n", "text": "Hi there.I’m using time-series collection to store the measurements of a sensor.\nI have a collection in which I store the avarage values of one hour of measuraments.\nIn order to valorized the current hour I insert every 10 minutes a temporary row with the partial values. My Meta is an object and one of its properties is “bool TemporaryTimeSeries”. When TemporaryTimeSeries is true the time-series with the partial values will be deleted by the logic of the program. I do not perform an update because I can’t update values or time stamp of a time-series. To reach the same result I delete the temporary time series that will be replaced with the updated one. Once the hour change i perform an update on the time-series with TemporaryTimeSeries at true, setting it to false.Here an example of a time-series that i am working withThe problem is that when I update TsTemporary to false MongoDb does not update the buckets structures even if the Meta is the same.\nWhat happen is that I have different buckets with the same Meta.\nThat’s lead me to have as many bucket as the time-series that i collected.Is this behavior fixable?\nIf not what can i do?Thanks", "username": "Michele_Bandini" }, { "code": "\"metadata.TemporaryTimeSeries\"false", "text": "Hey @Michele_Bandini,Welcome to the MongoDB Community!The problem is that when I update TsTemporary to false MongoDb does not update the buckets structures even if the Meta is the same.\nWhat happens is that I have different buckets with the same Meta.\nThat led me to have as many buckets as the time-series that I collected.\nIs this behavior fixable?Let me see if I understand the issue - it sounds like when you update the \"metadata.TemporaryTimeSeries\" field to false in your time series collection, the change is not reflected in your bucket collection structure.If yes then - as of now, we don’t have the functionality to merge buckets in our internal collections. If this functionality is important to you, I recommend you to submit feedback through our MongoDB Feedback EngineHowever, the internal system bucket documents are for internal use only and are subject to change. For most use cases, you only need to interact with the time series collection itself rather than the underlying bucket structure.Please let us know if you have any other questions.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Updating a time-series meta but the buckets are not updated
2023-08-22T13:28:03.514Z
Updating a time-series meta but the buckets are not updated
384
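For reference, the delete-then-insert workaround described in the question could look roughly like the sketch below. The collection handle measurements is hypothetical, the field names follow the posted document, and the assumption is that the partial row for the in-progress hour can be identified purely by its meta fields (time-series deletes are least restricted when the filter only touches the metaField):

    // 1. Remove the previous partial row for this device/sensor
    await measurements.deleteMany({
      "Meta.DeviceId": "ID123",
      "Meta.SensorId": "Sensor456",
      "Meta.TemporaryTimeSeries": true
    });

    // 2. Insert the refreshed partial average for the current hour
    await measurements.insertOne({
      TimeStamp: new Date(),
      Value: 5.7,
      Meta: { DeviceId: "ID123", SensorId: "Sensor456", TemporaryTimeSeries: true }
    });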
null
[ "queries" ]
[ { "code": "users.findOne()", "text": "{ “message”: “Operation users.findOne() buffering timed out after 10000ms”, “success”: false }", "username": "100_Jatin_Godnani" }, { "code": "", "text": "Hi @100_Jatin_Godnani,I’ve sent you a DM regarding this post. In saying so, please provide more information so that we can better asisst including:Redact any personal or sensitive information before posting here.Regards,\nJason", "username": "Jason_Tran" } ]
Atlas cluster Nextjs error
2023-08-30T20:08:19.322Z
Atlas cluster Nextjs error
250
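For context, the "buffering timed out" message is Mongoose queueing the findOne() because no connection was established within the default 10-second window; in Next.js API routes it typically means the query ran before (or without) awaiting mongoose.connect(). A hedged sketch of the usual guard (the URI env var and model name are placeholders):

    const mongoose = require('mongoose');

    // Ensure the connection exists before any model call
    async function connectDB() {
      if (mongoose.connection.readyState === 1) return; // already connected
      await mongoose.connect(process.env.MONGODB_URI);
    }

    // usage inside an API route handler:
    // await connectDB();
    // const user = await User.findOne({ email });

Disabling buffering with mongoose.set('bufferCommands', false) makes the same mistake fail immediately instead of after the timeout, which can make the root cause easier to spot.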
null
[]
[ { "code": "", "text": "Hi,\nI’m using MongoDB on GCP.\nThere is no way to restart the DB on the management screen.\nIs it possible to restart it using a command?\n(When the CPU reaches 100%, I want to deal with it by rebooting instead of upgrading the CPU)", "username": "Anju_Asano" }, { "code": "", "text": "Hi @Anju_Asano - Welcome to the community I’m using MongoDB on GCP.\nThere is no way to restart the DB on the management screen.\nIs it possible to restart it using a command?Is this a MongoDB Atlas cluster hosted on GCP? If so, you cannot restart a node via command. I would check into the Fix CPU Usage Issues documentation for some details about possible triggers and solutions.(When the CPU reaches 100%, I want to deal with it by rebooting instead of upgrading the CPU)I believe it would be more beneficial to figure out what is causing the CPU to reach 100% and try to deal with that as opposed to aiming for a restart each time the CPU reaches 100%. Are you aware of any particular operations that may be causing the CPU to reach 100%? If so, please provide any details regarding this.You can also check out the Monitor and Improve Slow Queries documentation which may help in this scenario.Regards,\nJason", "username": "Jason_Tran" } ]
How to restart MongoDB
2023-09-01T05:07:43.779Z
How to restart MongoDB
346
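Following the advice above to chase the workload rather than restart: the operations driving CPU can usually be spotted and, if necessary, terminated from the shell (permissions on Atlas allowing). The threshold and opid below are illustrative.

    // List operations that have been running for more than 5 seconds
    db.currentOp({ active: true, secs_running: { $gt: 5 } })

    // Terminate a specific runaway operation by its opid (use with care)
    db.killOp(12345)

The Query Profiler and Performance Advisor in Atlas cover the same ground without shell access.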
null
[]
[ { "code": "", "text": "Hi,Since a couple of hour my primary node on my free tier plan is not responding anymore, is there a way to restart it from the interface?", "username": "Jean-Pierre_Boutherin" }, { "code": "", "text": "Hi @Jean-Pierre_Boutherin - Welcome to the community.Since a couple of hour my primary node on my free tier plan is not responding anymore, is there a way to restart it from the interface?To answer your question regarding the restart of the primary node - No, you will not be able to restart this.Please contact the Atlas in-app chat support regarding this as they will have more insight into the cluster.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Primary server stuck
2023-09-03T20:57:42.451Z
Primary server stuck
310
null
[]
[ { "code": "", "text": "mongo-db prometheus discovery endpoint doesn’t return private endpoints for peered network.We are using VPC peering to connect with Mongo Atlas. With the recent account about, prometheus integration. We added scrape config to mongo-db discovery API. However, scraping times out. Upon checking further it is found that discovery API returns public endpoints not private ones. Hence connection is failing. Is there a way that discovery API can send private endpoints.", "username": "Aslam_Khan" }, { "code": "", "text": "Same issue here, we need to use a static file in order to scrape mongodb through the private endpoint…", "username": "Clement_Mondion" }, { "code": "", "text": "Discovery just returns an empty list for me. Are these known issues? Would really love to have prometheus integration working.", "username": "Kara_Spencer" }, { "code": "", "text": "connect with Mongo Atlas. With the recent account about, prometheus integration. We added scrape config to mongo-db discovery API. However, scraping times out. Upon checking further it is found that discoveHi Aslam,Private endpoints are not supported for the Prom integration today. However, this is a great idea for a future enhancement. I’ll add this as an enhancement request for the backlog and keep you all updated as we make progress.Thank you!\nFrank", "username": "Frank_Sun" }, { "code": "", "text": "Hi @Aslam_Khan, @Kara_Spencer, and @Clement_Mondion,I just wanted to follow up on this thread and update you all that the Atlas Prometheus integration does now support VPC peering. In Atlas, when configuring your Prometheus integration, you can choose the discovery API target type to be either public or private. Hope this helps!Thank you,\nFrank", "username": "Frank_Sun" }, { "code": "", "text": "Hi @Frank_Sun,Thanks for the follow up, does that work also when using vpc endpoints ?Best regards,\nClément", "username": "Clement_Mondion" }, { "code": "", "text": "Hi @Clement_Mondion,This does not yet support PrivateLink.Thanks,\nFrank", "username": "Frank_Sun" }, { "code": "", "text": "Hey @Frank_SunIs there an update or estimate on the support for PrivateLink?Thanks for your feedback,\nDavid", "username": "David_R" }, { "code": "", "text": "Not sure if can help. I have opened an “idea” here", "username": "Jacopo_Secchiero" }, { "code": "", "text": "Hey @Kara_Spencer. I am trying to set up prometheus integration and facing the same issue. Have you managed to find a solution for it?", "username": "Vlad_Musaelyan" } ]
MongoDB Prometheus discovery endpoint doesn't return private endpoints for peered network
2022-03-21T08:10:47.135Z
MongoDB Prometheus discovery endpoint doesn't return private endpoints for peered network
4,632
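Until private endpoints are supported end to end, the static-file workaround mentioned above boils down to replacing the discovery API with Prometheus file-based service discovery that lists the private hostnames reachable over the peered VPC. Everything below is a placeholder sketch; keep the scheme and auth settings from the sample configuration Atlas generates for the integration and only swap the target source.

    # prometheus.yml (sketch)
    scrape_configs:
      - job_name: mongodb-atlas-private
        file_sd_configs:
          - files:
              - /etc/prometheus/atlas_private_targets.json

    # /etc/prometheus/atlas_private_targets.json (placeholder hostnames/ports)
    # [ { "targets": [ "<node-1-private-hostname>:<metrics-port>",
    #                  "<node-2-private-hostname>:<metrics-port>" ] } ]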
null
[ "queries", "node-js", "crud" ]
[ { "code": "let obj = { \"ordine\": [...data.order] };\n\nresult = await restaurantsCollection.updateOne(\n { 'nameRestaurant': data.nameRestaurant },\n {\n $push: { 'restaurant.$[elem].tavolo.temp.$[elem2].ordini': obj }\n },\n {\n arrayFilters: [\n { 'elem.tavolo.numTavolo': data.numTavolo },\n { 'elem2.active': true }\n ]\n }\n );\n", "text": "I can’t add an object containing a property associated with an array of objects to mongoDB, below I put my code, I’ve already used $push elsewhere and it worked, but here it doesn’t work, it returns the matchedCount: 1 and that the modifiedCount: 0.", "username": "Samuele_Cervietti" }, { "code": "", "text": "Hello @Samuele_Cervietti, Welcome Back,It would be easy to understand your problem if you could provide more information:", "username": "turivishal" }, { "code": "{\n \"restaurant\":[\n {\n \"tavolo\":{\n \"numTavolo\":1,\n \"temp\":[\n {\n \"id\":1213876346,\n \"active\":true,\n \"ordini\":[]\n }\n ]\n }\n },\n {\n \"tavolo\":{\n \"numTavolo\":2,\n \"temp\":[\n {\n \"id\":1213812346,\n \"active\":true,\n \"ordini\":[]\n }\n ]\n }\n },\n ]{\n \"tavolo\":{\n \"numTavolo\":3,\n \"temp\":[\n {\n \"id\":1213876326,\n \"active\":true,\n \"ordini\":[]\n }\n ]\n }\n }\n}\nobj={ \"ordine\" : [ {\"name\" : 'Paolo' }, {\"name\" : 'Marco' }, {\"name\" : 'Luca' } ] }\n{\n \"restaurant\":[\n {\n \"tavolo\":{\n \"numTavolo\":1,\n \"temp\":[\n {\n \"id\":1213876346,\n \"active\":true,\n \"ordini\":[\n { \n \"ordine\" : [ \n {\"name\" : 'Paolo' }, \n {\"name\" : 'Marco' }, \n {\"name\" : 'Luca' } \n ] \n \n }\n ]\n }\n ]\n }\n },\n {\n \"tavolo\":{\n \"numTavolo\":2,\n \"temp\":[\n {\n \"id\":1213812346,\n \"active\":true,\n \"ordini\":[]\n }\n ]\n }\n },\n ]{\n \"tavolo\":{\n \"numTavolo\":3,\n \"temp\":[\n {\n \"id\":1213876326,\n \"active\":true,\n \"ordini\":[]\n }\n ]\n }\n }\n}\n", "text": "This is the pattern of my collection:The input value is:The expected result is, as can be seen in the first case of numTavolo=1, an object with ordine properties of the array type is inserted into the ordini array, which in turn contains a series of objects. I put the result below:", "username": "Samuele_Cervietti" }, { "code": "data.numTavoloresult = await restaurantsCollection.updateOne(\n { \"nameRestaurant\": \"abc\" },\n {\n \"$push\": {\n \"restaurant.$[elem].tavolo.temp.$[elem2].ordini\": { \n \"ordine\" : [ {\"name\" : 'Paolo' }, {\"name\" : 'Marco' }, {\"name\" : 'Luca' } ] \n }\n }\n },\n {\n \"arrayFilters\": [\n { \"elem.tavolo.numTavolo\": 1 },\n { \"elem2.active\": true }\n ]\n }\n)\n", "text": "Hello @Samuele_Cervietti,The query looks good, it is working for me,matchedCount: 1 and that the modifiedCount: 0.As you are saying it matches the document but is not modified, Probably the issue must be in input data.numTavolo inside arrayFilters.Try executing the query by specifying static values and see what happens,See your working query,Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "turivishal" } ]
Problem with $push
2023-09-02T16:24:33.774Z
Problem with $push
369
null
[ "dot-net", "data-modeling" ]
[ { "code": "", "text": "I’m developing an app and I’m currently using OID and because of the timestamp publicly available in OID I wanted to switch to UUID but after some research I found that UUID is bad for indexing and now I’m confused, is UUID really bad for Indexing ? I can take small performance hit.", "username": "Ay.Be" }, { "code": "", "text": "I found that UUID is bad for indexinglink for the research?", "username": "Kobe_W" }, { "code": "", "text": "I saw this video\nand there is this benchmark", "username": "Ay.Be" } ]
ObjectID vs UUID?
2023-09-02T23:27:18.918Z
ObjectID vs UUID?
667
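On the trade-off raised in the question: an ObjectId's creation time can be read back by anyone who sees the id, while a random UUIDv4 carries no timestamp but gives up the roughly-increasing ordering that keeps _id index inserts localized. A small Node.js sketch of both options (this assumes a driver version that exports the BSON UUID class; older drivers may need the bson package directly):

    const { ObjectId, UUID } = require('mongodb');

    const oid = new ObjectId();
    console.log(oid.getTimestamp()); // creation time is recoverable from the id itself

    // Random UUID stored as BSON Binary subtype 4: no embedded timestamp,
    // at the cost of more scattered _id index insert positions
    const doc = { _id: new UUID(), name: 'example' };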