image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---|
null | [
"java",
"field-encryption"
] | [
{
"code": "\n private ClientEncryption createKMIPncryptionClient() {\n MongoClientSettings kvmcs = MongoClientSettings.builder().applyConnectionString(CONNECTION_STR).build();\n ClientEncryptionSettings ces = ClientEncryptionSettings.builder()\n .keyVaultMongoClientSettings(kvmcs)\n .keyVaultNamespace(VAULT_NS.getFullName())\n .kmsProviders(kmipKmsProviders)\n .build();\n System.out.println(\"=> Creating KMIP encryption client.\");\n return ClientEncryptions.create(ces);\n }\nprivate Map<String, Map<String, Object>> generateKmipKmsProviders(byte[] masterKey) {\n System.out.println(\"=> Creating KMIP Key Management System using the master key.\");\n Map<String, Map<String, Object>> kmsProviders = new HashMap<String, Map<String, Object>>();\n Map<String, Object> providerDetails = new HashMap<>();\n providerDetails.put(\"endpoint\", \"localhost:8200\");\n kmsProviders.put(KMIP, providerDetails);\n return kmsProviders;\n\nException in thread \"main\" com.mongodb.MongoClientException: Exception in encryption library: Unrecognized SSL message, plaintext connection?\n\tat com.mongodb.client.internal.Crypt.wrapInClientException(Crypt.java:363)\n\tat com.mongodb.client.internal.Crypt.decryptKeys(Crypt.java:344)\n\tat com.mongodb.client.internal.Crypt.executeStateMachine(Crypt.java:286)\n\tat com.mongodb.client.internal.Crypt.createDataKey(Crypt.java:174)\n\tat com.mongodb.client.internal.ClientEncryptionImpl.createDataKey(ClientEncryptionImpl.java:93)\n\tat csfle.ClientSideFieldLevelEncryption.readOrCreateDEKUsingKMIP(ClientSideFieldLevelEncryption.java:139)\n\tat csfle.ClientSideFieldLevelEncryption.demo(ClientSideFieldLevelEncryption.java:75)\n\tat csfle.ClientSideFieldLevelEncryption.main(ClientSideFieldLevelEncryption.java:45)\nCaused by: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?\n\tat sun.security.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)\n\tat sun.security.ssl.InputRecord.read(InputRecord.java:527)\n\tat sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)\n\tat sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)\n\tat sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:757)\n\tat sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)\n\tat java.io.OutputStream.write(OutputStream.java:75)\n\tat com.mongodb.client.internal.KeyManagementService.stream(KeyManagementService.java:75)\n\tat com.mongodb.client.internal.Crypt.decryptKey(Crypt.java:349)\n\tat com.mongodb.client.internal.Crypt.decryptKeys(Crypt.java:339)\n\t... 6 more\n",
"text": "Hello, Am trying to do POC on CSFLE using Java application. Where am trying to write some data into MongoDB / Reading the same by Encrypting few fields .When am using KMSProviders as local and keeping my master key file in my local working as expected , Fields are getting encrypted properly.When am using the KMIP based KMS Providers (Particularly with Hasicorp vault) . Am getting exception while connecting to Vault .Here am using the endpoint as local (as i have hashicorp vault running in my local machine in DEV mode).while running the application , I haven’t passed any JVM parameters . Below is the exception am getting.",
"username": "Navaneethakumar_Balasubramanian"
},
{
"code": "",
"text": "I’m not sure what the root cause is, but that SSLException about a plaintext connection is weird. I think KMIP requires TLS, even for local dev environment. Is the Hashicorp vault running locally configured that way?Instructions are here, in case you haven’t seen them: https://www.mongodb.com/docs/manual/core/csfle/tutorials/kmip/kmip-automatic/#specify-your-certificates",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "No , Hashicorp vault running in local doesn’t have the configuration to enable TLS. In this case we cant use KMIP(via Hashicorp vault running in local) is it ?Also as per the link specify your certificate. Will it access only PKCS12 format of certs ?",
"username": "Navaneethakumar_Balasubramanian"
},
{
"code": "",
"text": "In this case we cant use KMIP(via Hashicorp vault running in local) is it ?Use of “localhost:8200” might suggest the application is connecting to the vault server (not using KMIP). Hashicorp Vault needs to run the KMIP Secrets Engine. The default port is 5696.KMIP - Secrets Engines | Vault | HashiCorp Developer may help to enable the KMIP Secrets Engine and generate client certificates.",
"username": "Kevin_Albertson"
},
{
"code": " private <bytes> void readOrCreateDEKUsingKMIP(ClientEncryption encryption) {\n\n // start-datakeyopts\n BsonDocument masterKeyProperties = new BsonDocument(); // an empty key object prompts your KMIP-compliant key provider to generate a new Customer Master Key\n System.out.println(\"=> Empty Master Key Properties :\" + masterKeyProperties);\n // end-datakeyopts\n\n List<String> list_DEK_ALIAS_NAME= Arrays.asList(DEK_ALIAS_NAME.split(\"\\\\,\"));\n System.out.println(\"=> DEK as a list :\" + list_DEK_ALIAS_NAME);\n\n BsonDocument key = encryption.getKeyByAltName(DEK_ALIAS_NAME);\n if (key != null) {\n BsonBinary DEK_FROM_DB = (BsonBinary) key.get(\"_id\");\n System.out.println(\"=> Retrieved Data Key From Encryption DB:\" + DEK_FROM_DB);\n }\n else{\n System.out.println(\"=> No DEK present in the Encryption DB for the given DEK Alias Name :\" + DEK_ALIAS_NAME );\n BsonBinary DEK_CREATED_INTO_DB = encryption.createDataKey(KMIP, new DataKeyOptions().masterKey(masterKeyProperties).keyAltNames(list_DEK_ALIAS_NAME));\n System.out.println(\"=> Created DEK for \"+DEK_ALIAS_NAME+\" : \" + DEK_CREATED_INTO_DB.toString());\n }\n\n }\nprivate Map<String, Map<String, Object>> generateKmipKmsProviders(byte[] masterKey) {\n System.out.println(\"=> Creating KMIP Key Management System using the master key.\");\n Map<String, Map<String, Object>> kmsProviders = new HashMap<String, Map<String, Object>>();\n Map<String, Object> providerDetails = new HashMap<>();\n String vault_endpoint=\"vault-dev-xxx.xxx.xxxxxxx.xxxxxxxx.net:5696\";\n providerDetails.put(\"endpoint\", vault_endpoint);\n kmsProviders.put(KMIP, providerDetails);\n return kmsProviders;\n\n }\nException in thread \"main\" com.mongodb.MongoClientException: Exception in encryption library: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)\n\tat com.mongodb.client.internal.Crypt.wrapInClientException(Crypt.java:363)\n\tat com.mongodb.client.internal.Crypt.decryptKeys(Crypt.java:344)\n\tat com.mongodb.client.internal.Crypt.executeStateMachine(Crypt.java:286)\n\tat com.mongodb.client.internal.Crypt.createDataKey(Crypt.java:174)\n\tat com.mongodb.client.internal.ClientEncryptionImpl.createDataKey(ClientEncryptionImpl.java:93)\n\tat csfle.ClientSideFieldLevelEncryption.readOrCreateDEKUsingKMIP(ClientSideFieldLevelEncryption.java:151)\n\tat csfle.ClientSideFieldLevelEncryption.demo(ClientSideFieldLevelEncryption.java:75)\n\tat csfle.ClientSideFieldLevelEncryption.main(ClientSideFieldLevelEncryption.java:45)\nCaused by: java.net.SocketException: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)\n\tat javax.net.ssl.DefaultSSLSocketFactory.throwException(SSLSocketFactory.java:248)\n\tat javax.net.ssl.DefaultSSLSocketFactory.createSocket(SSLSocketFactory.java:255)\n\tat com.mongodb.client.internal.KeyManagementService.stream(KeyManagementService.java:58)\n\tat com.mongodb.client.internal.Crypt.decryptKey(Crypt.java:349)\n\tat com.mongodb.client.internal.Crypt.decryptKeys(Crypt.java:339)\n\t... 
6 more\nCaused by: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)\n\tat java.security.Provider$Service.newInstance(Provider.java:1617)\n\tat sun.security.jca.GetInstance.getInstance(GetInstance.java:236)\n\tat sun.security.jca.GetInstance.getInstance(GetInstance.java:164)\n\tat javax.net.ssl.SSLContext.getInstance(SSLContext.java:156)\n\tat javax.net.ssl.SSLContext.getDefault(SSLContext.java:96)\n\tat javax.net.ssl.SSLSocketFactory.getDefault(SSLSocketFactory.java:122)\n\tat com.mongodb.client.internal.KeyManagementService.stream(KeyManagementService.java:57)\n\t... 8 more\nCaused by: java.io.IOException: DerInputStream.getLength(): lengthTag=109, too big.\n\tat sun.security.util.DerInputStream.getLength(DerInputStream.java:599)\n\tat sun.security.util.DerValue.init(DerValue.java:391)\n\tat sun.security.util.DerValue.<init>(DerValue.java:332)\n\tat sun.security.util.DerValue.<init>(DerValue.java:345)\n\tat sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:1938)\n\tat java.security.KeyStore.load(KeyStore.java:1445)\n\tat sun.security.ssl.SSLContextImpl$DefaultManagersHolder.getKeyManagers(SSLContextImpl.java:851)\n\tat sun.security.ssl.SSLContextImpl$DefaultManagersHolder.<clinit>(SSLContextImpl.java:758)\n\tat sun.security.ssl.SSLContextImpl$DefaultSSLContext.<init>(SSLContextImpl.java:913)\n\tat sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n\tat sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n\tat sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n\tat java.lang.reflect.Constructor.newInstance(Constructor.java:423)\n\tat java.security.Provider$Service.newInstance(Provider.java:1595)\n\t... 14 more\n\n",
"text": "@Kevin_Albertson /@Jeffrey_Yemin - Now i have got my HasiCorp vault setup(Enterprise) with license. As mentioned in the Initial update am still using both of those Methods.Along with the below MethodWhere im having an empty master key , and DEK alias (ex:MyDEK-1) , Since my Master Key is NULL , It goes to ELSE part on the above code.When trying to create a DEK. It uses the ENCRYPTION returned from createKMIPncryptionClient method given in my first update.This time generateKmipKmsProviders method is updated with Actual Hashicorp vault URI.Also am passing the certificate parameters for connecting the vault. As Java system property.image828×529 20.7 KBNow am getting error as belowHashicorp vault provides us only PEM formatted file. I have converted that into PKCS12 and added that into local java trust store location. Which i passed as an argument.Please help me in this proceed further.Thanks,",
"username": "Navaneethakumar_Balasubramanian"
},
{
"code": "Caused by: java.io.IOException: DerInputStream.getLength(): lengthTag=109, too big.\n",
"text": "Searching for that error suggests it may be an error reading the certificate. Example: Java Certificate Issue - IOException: DerInputStream.getLength(): lengthTag=109, too big | Jira | Atlassian Documentation",
"username": "Kevin_Albertson"
}
] | CSFLE - Using Hashicorp vault | 2023-09-07T06:50:52.663Z | CSFLE - Using Hashicorp vault | 535 |
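Since the resolution in this thread hinges on TLS client certificates and the KMIP Secrets Engine port, here is a minimal, untested sketch of the same KMIP configuration expressed with the MongoDB Node.js driver (the thread itself uses Java). The Vault hostname, certificate paths, and key vault namespace are assumptions, and the KMIP Secrets Engine is assumed to be enabled on its default port 5696:

```javascript
const { MongoClient, ClientEncryption } = require('mongodb');

// Key vault collection and KMIP endpoint (Vault's KMIP Secrets Engine, default port 5696)
const keyVaultNamespace = 'encryption.__keyVault';                      // assumption
const kmsProviders = { kmip: { endpoint: 'vault.example.net:5696' } };  // placeholder host

const keyVaultClient = new MongoClient('mongodb://localhost:27017');

const clientEncryption = new ClientEncryption(keyVaultClient, {
  keyVaultNamespace,
  kmsProviders,
  tlsOptions: {
    kmip: {
      tlsCAFile: '/path/to/vault-kmip-ca.pem',                       // CA for the KMIP listener (placeholder)
      tlsCertificateKeyFile: '/path/to/kmip-client-cert-and-key.pem' // client cert from the KMIP scope (placeholder)
    }
  }
});

// An empty masterKey asks the KMIP-compliant provider to create a new Customer Master Key:
// await clientEncryption.createDataKey('kmip', { masterKey: {} });
```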
null | [
"compass",
"atlas-cluster"
] | [
{
"code": "",
"text": "querySrv ENODATA _mongodb._tcp.jvuwithjesus.ygys3.mongodb.net",
"username": "Jimmy_h_Vu"
},
{
"code": "mongodb+srv://",
"text": "Are you using a mongodb+srv:// connection string? If you are, it’s possible that the DNS servers of your internet service provider can’t resolve SRV records. Try switching your DNS to 8.8.8.8 and 8.8.4.4 (Google DNSs) and see if that works.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "This is my connection string : mongodb+srv://hellojesus1:[email protected]/jesusdatabase?retryWrites=true&w=majority\nI still have error:\nquerySrv ENODATA _mongodb._tcp.jvuwithjesus.ygys3.mongodb.net",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "The connection string is correct. You have toTry switching your DNS to 8.8.8.8 and 8.8.4.4 (Google DNSs) and see if that works.",
"username": "steevej"
},
{
"code": "",
"text": "I tried to switch to you dns but It still doesn’t work.\nIt still has same error.\nIs there any other way?\nPlease help !!!",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "Did you try long form of string instead of srv?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I tried to switch to you dns but It still doesn’t work.Please post a screenshot of the DNS configuration you tried.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,\nCan you show me the screen shot of google dns connection.\nI really don’t know where I replace google dns.\nPlease Help!!!",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "I used this connection string:\nmongodb+srv://hellojesus1:[email protected]/jesusdatabase?retryWrites=true&w=majority\nbut it doen’t work.\nPlease help!!!",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "Check this linkA free, global DNS resolution service that you can use as an alternative to your current DNS provider.Long form of connect string you can get from your Atlas account.Choose old version of shellIs password given in your connect string correct?\nI am getting different error2022-05-21T16:43:41.648+0530 I NETWORK [js] Marking host jvuwithjesus-shard-00-02.ygys3.mongodb.net:27017 as failed ::\ncaused by :: Location40659: can’t connect to new replica set master [jvuwithjesus-shard-00-02.ygys3.mongodb.net:27017],\nerr: Location8000: bad auth : Authentication failed.",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This is my creen shot of network ip:",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "This is not your DNS server setting. It is your Atlas network access list. Go to the Google Developers link provided by @Ramachandra_Tummala and follow the instructions to change your DNS settings.Alternatively, do as suggested by @Ramachandra_Tummala and use the Long form.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,\nI already connected your DNS settings 8.8.8.8 and 8.8.8.4 but It did not work and I could not use my internet to access any websites.\nIs there other ways?\nPlease help!!!",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "Did you do the dns settings on your laptop/pc from network-adapter?Did you try long form(old style) connect string method?\nFrom your Atlas account choose connect to shell and select old shell version in drop down\nWhat other options you see in your Compass?\nDoes fill individual fields option exist?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I already connected your DNS settings 8.8.8.8 and 8.8.8.4It is 8.8.4.4 and please help us help you by providinga screenshot of the DNS configuration you tried.",
"username": "steevej"
},
{
"code": "",
"text": "\nimage1920×1080 412 KB\n",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "I do not see where you specified 8.8.8.8 or 8.8.4.4",
"username": "steevej"
},
{
"code": "",
"text": "\nimage1920×1080 428 KB\n",
"username": "Jimmy_h_Vu"
},
{
"code": "",
"text": "That’s the correct DNS. But your previous screenshot shows that you are not using this Wi-Fi network interface. You have to set the DNS on the network interface you are using.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,\nI really don’t understand what you mean about network interface. Can you give an example or explainations about it.\nThanks for your time.\nPlease help!!!",
"username": "Jimmy_h_Vu"
}
] | I can not connect my mongodb compass with my cluster | 2022-05-19T08:20:18.630Z | I can not connect my mongodb compass with my cluster | 9,709 |
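As a hedged illustration of the “long form” (non-SRV) connection string suggested above, which skips the SRV DNS lookup entirely: the third shard hostname appears in the error output earlier in this thread, while the other two hostnames and the replica set name are placeholders that must be copied from the Atlas connect dialog, and the password is redacted:

```
mongodb://hellojesus1:<password>@jvuwithjesus-shard-00-00.ygys3.mongodb.net:27017,jvuwithjesus-shard-00-01.ygys3.mongodb.net:27017,jvuwithjesus-shard-00-02.ygys3.mongodb.net:27017/jesusdatabase?ssl=true&replicaSet=<replicaSetName>&authSource=admin&retryWrites=true&w=majority
```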
[
"ops-manager"
] | [
{
"code": "sudo -u mongod mongod -f /etc/mongod.conf",
"text": "Trying to set up a test mongodb environment for Ops manager installation - following the following steps to the T: Install a Simple Test Ops Manager Installation — MongoDB Ops Manager upcoming\nAlthough as soon as finishing editing the mongod.conf, and running the sudo -u mongod mongod -f /etc/mongod.conf command, it provides the output “Illegal Instruction” and MongoDB fails to startplease find the conf file attached:\n\nimage711×295 2.98 KB",
"username": "Gareth_Furnell"
},
{
"code": "sudo -u mongod mongod -f /etc/mongod.confmongod",
"text": "Although as soon as finishing editing the mongod.conf, and running the sudo -u mongod mongod -f /etc/mongod.conf command, it provides the output “Illegal Instruction” and MongoDB fails to startThis indicates that you’re likely running mongod on and unsupported platform.see Supported Platforms for more details.",
"username": "chris"
},
{
"code": "",
"text": "Chris! Thanks for the feedback, yes i checked that as the last idea in my mind and confirmed it, thank you for the solution as well, will be testing out on rhel 7.9 and ops man 6.0",
"username": "Gareth_Furnell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Ops Manager Test Environment Documentation Not Working | 2023-10-24T09:17:48.094Z | Ops Manager Test Environment Documentation Not Working | 181 |
|
[
"node-js",
"mongoose-odm",
"vscode"
] | [
{
"code": " const timeoutError = new error_1.MongoServerSelectionError(`Server selection timed out after ${options.serverSelectionTimeoutMS} ms`, this.description);\n ^\n",
"text": "MongoServerSelectionError: connect ECONNREFUSED ::1:27017\nat EventTarget. (C:\\Users\\Abzal\\OneDrive - Lovely Professional University\\Desktop\\Node\\node_modules\\mongodb\\lib\\sdam\\topology.js:276:34)\nat [nodejs.internal.kHybridDispatch] (node:internal/event_target:757:20)\nat EventTarget.dispatchEvent (node:internal/event_target:692:26)\nat abortSignal (node:internal/abort_controller:369:10)\nat TimeoutController.abort (node:internal/abort_controller:403:5)\nat Timeout. (C:\\Users\\Abzal\\OneDrive - Lovely Professional University\\Desktop\\Node\\node_modules\\mongodb\\lib\\utils.js:1010:92)\nat listOnTimeout (node:internal/timers:569:17)\nat process.processTimers (node:internal/timers:512:7) {\nreason: TopologyDescription {\ntype: ‘Unknown’,\nservers: Map(1) {\n‘localhost:27017’ => ServerDescription {\naddress: ‘localhost:27017’,\ntype: ‘Unknown’,\nhosts: ,\npassives: ,\narbiters: ,\ntags: {},\nminWireVersion: 0,\nmaxWireVersion: 0,\nroundTripTime: -1,\nlastUpdateTime: 586271229,\nlastWriteDate: 0,\nerror: MongoNetworkError: connect ECONNREFUSED ::1:27017\nat connectionFailureError (C:\\Users\\Abzal\\OneDrive - Lovely Professional University\\Desktop\\Node\\node_modules\\mongodb\\lib\\cmap\\connect.js:379:20)\nat Socket. (C:\\Users\\Abzal\\OneDrive - Lovely Professional University\\Desktop\\Node\\node_modules\\mongodb\\lib\\cmap\\connect.js:285:22)\nat Object.onceWrapper (node:events:632:26)\nat Socket.emit (node:events:517:28)\nat emitErrorNT (node:internal/streams/destroy:151:8)\nat emitErrorCloseNT (node:internal/streams/destroy:116:3)\nat process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n[Symbol(errorLabels)]: Set(1) { ‘ResetPool’ },\n[cause]: Error: connect ECONNREFUSED ::1:27017\nat TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16) {\nerrno: -4078,\ncode: ‘ECONNREFUSED’,\nsyscall: ‘connect’,\naddress: ‘::1’,\nport: 27017\n}\n},\ntopologyVersion: null,\nsetName: null,\nsetVersion: null,\nelectionId: null,\nlogicalSessionTimeoutMinutes: null,\nprimary: null,\nme: null,\n‘$clusterTime’: null\n}\n},\nstale: false,\ncompatible: true,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\nsetName: null,\nmaxElectionId: null,\nmaxSetVersion: null,\ncommonWireVersion: 0,\nlogicalSessionTimeoutMinutes: null\n},\ncode: undefined,\n[Symbol(errorLabels)]: Set(0) {},\n[cause]: MongoNetworkError: connect ECONNREFUSED ::1:27017\nat connectionFailureError (C:\\Users\\Abzal\\OneDrive - Lovely Professional University\\Desktop\\Node\\node_modules\\mongodb\\lib\\cmap\\connect.js:379:20)\nat Socket. (C:\\Users\\Abzal\\OneDrive - Lovely Professional University\\Desktop\\Node\\node_modules\\mongodb\\lib\\cmap\\connect.js:285:22)\nat Object.onceWrapper (node:events:632:26)\nat Socket.emit (node:events:517:28)\nat emitErrorNT (node:internal/streams/destroy:151:8)\nat emitErrorCloseNT (node:internal/streams/destroy:116:3)\nat process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n[Symbol(errorLabels)]: Set(1) { ‘ResetPool’ },\n[cause]: Error: connect ECONNREFUSED ::1:27017\nat TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16) {\nerrno: -4078,\ncode: ‘ECONNREFUSED’,\nsyscall: ‘connect’,\naddress: ‘::1’,\nport: 27017\n}\n}\n}Screenshot (26)1920×1080 317 KB",
"username": "Shaik_Abzal"
},
{
"code": "",
"text": "Looks like you’re trying to connect to a local mongo instance (localhost). Can you connect to the instance with Compass to verify that the server is running correctly?",
"username": "John_Sewell"
}
] | Connect with db | 2023-10-25T07:53:00.238Z | Connect with db | 177 |
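A minimal sketch of the workaround commonly suggested for ECONNREFUSED ::1:27017: newer Node.js releases may resolve “localhost” to the IPv6 address ::1 first, while a default mongod often listens only on 127.0.0.1, so connecting to the IPv4 address explicitly (after confirming the server is running, as suggested above) avoids the mismatch. The database name is a placeholder:

```javascript
const mongoose = require('mongoose');

async function main() {
  // Use the explicit IPv4 loopback address instead of "localhost"
  await mongoose.connect('mongodb://127.0.0.1:27017/test');
  console.log('connected:', mongoose.connection.readyState === 1);
  await mongoose.disconnect();
}

main().catch(console.error);
```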
|
null | [
"queries"
] | [
{
"code": "",
"text": "Hi. I recently faced a problem below:\nWhen I create doc, mongo returns me info about inserted doc. I take the id and try to find this doc by id that i received. and when base is on some load it returns null.\nI want to understand why this happens. And what to do if i want to update the document in short time after creation.",
"username": "Igor_Tikush"
},
{
"code": "",
"text": "You received acknowledgement that the document was created but you cannot find the document.Many reasons may cause that.#1 for us is very hard to checkWe could check #2 or #3 only if you share your code.What do you mean bybase is on some load it returns nullWhat is base? What do you mean by some load?",
"username": "steevej"
},
{
"code": "",
"text": "Hi, thank you for reply\nBy base I mean database server. By some load I mean when database server receives a lot of requests and kind of busy.\nThere is no mistakes in my code. When database server is not on load - i can find the document.",
"username": "Igor_Tikush"
},
{
"code": "",
"text": "Since you are confirming that it is not #1, #2 or #3, then it must be #4If you do not write with majority commit, then you have to enforce reading from primary.",
"username": "steevej"
},
{
"code": "mongodmongodmongod",
"text": "Hi @Igor_Tikush and welcome to the community!!I agree to what @steevej mentions and also would need a few more details to have more clarity on the issue mentioned.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "I also encounter with that problem sometimes in my current project!\nand also don’t know why.\nsame logic in code but different result.",
"username": "Luan_Nguyen1"
},
{
"code": "",
"text": "Hi @Luan_Nguyen1 and welcome to MongoDB community forums!!In general it is preferable to start a new discussion to keep the details of different environments/questions separate and improve visibility of new discussions. That will also allow you to mark your topic as “Solved” when you resolve any outstanding questions.Mentioning the url of an existing discussion on the forum will automatically create links between related discussions for other users to follow.Hence, the recommendation would be to post the details in a separate posts with all the relevant information.Regards\nAasawari",
"username": "Aasawari"
}
] | Cant find doc right after creation | 2022-06-28T05:01:07.242Z | Cant find doc right after creation | 3,623 |
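A short sketch of the two remedies mentioned above, using the Node.js driver: acknowledge writes with a majority write concern so the document is replicated before the insert returns, and/or route the immediate follow-up read to the primary. The database and collection names are placeholders:

```javascript
const { MongoClient } = require('mongodb');

async function run(uri) {
  // Writes wait for a majority of replica set members to acknowledge
  const client = new MongoClient(uri, { writeConcern: { w: 'majority' } });

  // Reads from this collection handle are routed to the primary
  const coll = client.db('test').collection('docs', { readPreference: 'primary' });

  const { insertedId } = await coll.insertOne({ createdAt: new Date() });
  const doc = await coll.findOne({ _id: insertedId }); // should no longer come back null
  console.log(doc);

  await client.close();
}
```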
null | [
"app-services-cli"
] | [
{
"code": "slm_database_nameDevDBProdDBaddTeamMember.js const collection = context.services.get(\"mongodb-atlas\").db(context.environment.values.slm_database_name).collection(\"User\");\n\ndata_sources/mongodb-atlas/TestDBdata_sources/mongodb-atlas/TestDB/Foo/relationships.json{\n \"events\": {\n \"ref\": \"#/relationship/mongodb-atlas/TestDB/GameEvent\",\n \"source_key\": \"events\",\n \"foreign_key\": \"_id\",\n \"is_list\": true\n }\n}\nsync/config.json{\n ...\n \"database_name\": \"TestDB\",\n ...\ndatabase_nameslm_database_name",
"text": "I have a single Atlas App Service that I want to have different environments (development, production, etc). I want to have each environment connect to a separate Atlas database. When I originally setup the app, I used the Atlas database named “TestDB”I’ve read in How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions | MongoDB as well as other posts on how to use the environment.json to specify an environment variable (in my case: slm_database_name which is set to something like DevDB or ProdDB based on the environment) to specify the database. And I can use that variable in the server side functions that I write, e.g. in addTeamMember.js:Great!However, there are still places where the non-environment variable based database is referenced, and I can’t figure out how to set/change those. For example, if I do a “realm-cli pull”, there is an entire directory named data_sources/mongodb-atlas/TestDB, and within that I have files such as data_sources/mongodb-atlas/TestDB/Foo/relationships.json (where Foo has a relationship to another object) and the relationship is hard-coded with “TestDB” e.g.:or in sync/config.json has:Can someone tell me: once you setup the environment variable, what do you need to do in either the web UI or the json configuration files to use the environment variable that specifies the database? Or maybe I’m doing it wrong to begin with? If I use the environment name database_name instead of slm_database_name, will it magically work?Thanks.",
"username": "Alex_Tang1"
},
{
"code": "",
"text": "Using realm-cli for deployment, you can replace the hardcoded database name.\nSee my post here:",
"username": "Mikael_Gurenius"
},
{
"code": "\"%(%%environment.values.database)\"%(%%environment.values.database)demoapp/data_sources/mongodb-atlas/%(%%environment.values.database)/Demo/rules.json",
"text": "\"%(%%environment.values.database)\"Thank you for that information. One followup question however, is the file really stored in a directory with the string %(%%environment.values.database)? Your example shows:demoapp/data_sources/mongodb-atlas/%(%%environment.values.database)/Demo/rules.jsonWhich seems a little odd to me.Thanks again!",
"username": "Alex_Tang1"
},
{
"code": "",
"text": "Unfortunately, this doesn’t work. Atlas app services translates the variable upon setting and then hard-codes it. Changing the environment from Development to Testing or some other type doesn’t change the database thereafter. Also I’m getting errors when trying to enable sync such as:recoverable event subscription error encountered: failed to configure namespaces for sync: failed to configure namespace ns=‘%%environment.values.slm_database_name.User’ for sync: error ensuring namespace exists: (InvalidNamespace) ‘.’ is an invalid character in a db name: %%environment.values.slm_database_name",
"username": "Alex_Tang1"
},
{
"code": "",
"text": "Yes.image642×578 39.3 KB",
"username": "Mikael_Gurenius"
},
{
"code": "relationships.json\n{\n \"previousContext\": {\n \"ref\": \"#/relationship/mongodb-atlas/%(%%environment.values.database)/ReportingContexts\",\n \"source_key\": \"previousContext\",\n \"foreign_key\": \"_id\",\n \"is_list\": false\n }\n}\n\nrules.json\n{\n \"collection\": \"ReportingContexts\",\n \"database\": \"%(%%environment.values.database)\",\n \"roles\": []\n}\n",
"text": "True, changing environment after deployment will not change your database. These values are for deployment with realm-cli. On the other hand, you can deploy multiple versions of the same app to the same project, each with any environment/database you want.I’m using it like below, no problems with partition based sync (haven’t tried flexible).",
"username": "Mikael_Gurenius"
}
] | Configure different Atlas Database per environment | 2023-10-23T16:44:21.710Z | Configure different Atlas Database per environment | 275 |
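For reference, a hedged sketch of the environment value files that back %%environment.values.database when pulled with realm-cli; the file names follow the App Services environments/ convention, and the database names (DevDB, ProdDB) are the placeholders used earlier in this thread.

environments/development.json:

```json
{
  "values": {
    "database": "DevDB"
  }
}
```

environments/production.json:

```json
{
  "values": {
    "database": "ProdDB"
  }
}
```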
null | [
"aggregation",
"queries",
"crud"
] | [
{
"code": "'raw_features': {'0 items|$0.00': [['0 items', 0.766865203825603]],\n 'our price': [['$14.95 - save 25%! *off rrp',\n 0.7352671338104433]]}\ndb.collection.updateOne({'_id': ObjectId('000826efaba4ee3f242cb3ea')}, {$rename : { 'raw_features.0 items|$0.00' : \"raw_features.0 items0_00\"} } )\n",
"text": "I have documents in a collection where it has nested fields with period and dollar sign in the field name. I want to rename those field names.I tried this update command but didn’t work",
"username": "Jeewan_Sooriyaarachchi"
},
{
"code": "",
"text": "Take a look at this command when dealing with field names with embedded reserved characters.",
"username": "John_Sewell"
}
] | Rename nested fields with period and dollar | 2023-10-25T04:53:04.471Z | Rename nested fields with period and dollar | 154 |
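For completeness, a possible mongosh sketch for the original question: since the plain $rename above does not accept keys containing “.” or “$”, an aggregation-pipeline update with $getField / $setField / $unsetField (available from MongoDB 5.0) can copy the value to a safe key and then drop the old one. The _id and key names come from the first post; treat this as an untested sketch:

```javascript
db.collection.updateOne(
  { _id: ObjectId('000826efaba4ee3f242cb3ea') },
  [
    // 1. Copy the value of the awkward key into a new, safe key
    {
      $set: {
        raw_features: {
          $setField: {
            field: '0 items0_00',
            input: '$raw_features',
            value: { $getField: { field: { $literal: '0 items|$0.00' }, input: '$raw_features' } }
          }
        }
      }
    },
    // 2. Remove the old key that contains "." and "$"
    {
      $set: {
        raw_features: {
          $unsetField: { field: { $literal: '0 items|$0.00' }, input: '$raw_features' }
        }
      }
    }
  ]
);
```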
null | [] | [
{
"code": "{\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": xxxxxxxxxxxxxxxxxxxxxx,\n \"i\": 1\n }\n },\n \"ok\": 0,\n \"errmsg\": \"Error in $cursor stage :: caused by :: operation was interrupted\",\n \"code\": 11601,\n \"codeName\": \"Interrupted\",\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1697618635,\n \"i\": 1\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": \"xxxxxxxxxxxxxxxxxxxxxxxxxxx\",\n \"$type\": \"00\"\n },\n \"keyId\": {\n \"$numberLong\": \"xxxxxxxxxxxxxxxxxxxxx\"\n }\n }\n }\n}\n",
"text": "",
"username": "Anurag_Shankar"
},
{
"code": "Error in $cursor stage :: caused by :: operation was interrupted",
"text": "Hello @Anurag_Shankar ,Welcome to The MongoDB Community Forums! I notice that you have not had a response on this topic yet, were you able to find a solution?Error in $cursor stage :: caused by :: operation was interruptedThe “operation was interrupted” error typically occurs when an operation is terminated or interrupted before it can complete (Mostly with long-running operations). This interruption can be caused by various factors, such as:To assist you further, could you please share your MongoDB Server logs? Additionally, may I ask if this is the first time you have encountered this error, and if not, how frequently this error is occurring?Additionally, can you share steps/more details for me to replicate this?Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | I am getting error from mongo can anyone give me insight about this error | 2023-10-18T08:58:21.021Z | I am getting error from mongo can anyone give me insight about this error | 260 |
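A small illustrative sketch in mongosh, assuming the interruption comes from long-running operations being killed or timing out as discussed above: bound the operation yourself with maxTimeMS so failures are predictable, and inspect which operations have been running for a long time. The collection name, pipeline, and thresholds are placeholders:

```javascript
const pipeline = [{ $match: { status: 'A' } }]; // placeholder pipeline

// Fail deterministically after 60 seconds instead of being interrupted later
db.orders.aggregate(pipeline, { maxTimeMS: 60000 });

// List operations that have been active for more than 30 seconds
db.currentOp({ active: true, secs_running: { $gt: 30 } });
```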
null | [
"node-js",
"crud",
"indexes"
] | [
{
"code": "[\n {\n key: { myId1: 1 },\n name: \"myId1_1\",\n unique: true,\n sparse: true\n },\n {\n key: { myId2: 1 },\n name: \"myId1_2\",\n unique: true,\n sparse: true\n }\n ]\n[{\n \"updateOne\":{\n \"filter\":{\"myId1\":\"myId1_0\"},\n \"update\":{\"$setOnInsert\":{\"myId1\":\"myId1_0\"},\"$set\":{\"att1\": \"100\", \"updated\":1697585425844}},\n \"upsert\":true},\n \"updateOne\":{\n \"filter\":{\"myId1\":\"myId1_1\"},\n \"update\":{\"$setOnInsert\":{\"myId1\":\"myId1_1\"},\"$set\":{\"att1\": \"200\", \"updated\":1697585425844}},\n \"upsert\":true},\n}]\nawait dbClient.db(dbName).collection(colName).bulkWrite(bulkWriteList, { ordered: false });\n",
"text": "Hi Experts,\nI am using bulkwriter to upsert batch documents into a collection with 2 unique sparse indexes.\nDuring the testing, the CPU usage was almost 100% and there was timeout error when the bulkwrite size is 100 only.However, I did the similar testing on a collection with single unique index. There is no such perfromance issue at all. THe CPU usage was only below 10%.I am quite confused with the issue.\nAppreciate any feedback.More information are provided as below.platform information:\nMongoDB Atlas M20: 6.0.11\nNode.js:\nversion 20\nmongodb: 6.1.0Indexes on the collections:Here is the list for bulkwrite: The number of bulksize is 100, and there is same index id for a whole batch.node.js for bulkwirte",
"username": "frand_li"
},
{
"code": "",
"text": "Hi @frand_li and welcome to MongoDB community forums!!As mentioned in the MongoDB Documentation for clusters, M10 and M20 operate on a burstable performance infrastructure, which utilizes a virtual (shared) core and dynamically allocates CPU credits according to their instance size.To understand further could you help me understand if you are observing this issue constantly or does this happen in a specific condition.\nAlso, how large is the data set that you are trying to update using the bulk write operation?The recommendation here would be to look for CPU steal % and consider upscaling the cluster as the cluster would have possible exhausted the credits and hence resulting in high utilisation.Also, I would recommend reach out to the MongoDB Support to deeper insights and understanding the cluster information.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": " {\n myId1: 1,\n myId2: 1\n },\n {\n unique: true,\n partialFilterExpression: {\n $or: [{ myId1: { $exists: false } }, { myId2: { $exists: false } }]\n }\n }\n",
"text": "AasawariHi, Aasawari,\nThanks for your quick response.\nThe bulkwrite size was 100 only and the number of documents in the collection was only 10,000.\nI tried to upsert 10,000 documents with connection poolsize 5, which means there were up to 5 concurrent process to do bulkwrite at same time.If the collection has only 1 unique index set for a collection, then there is no performance problem at all even if the bulkwrite size is set to 1000.But once there are 2 unique indexes created as sparse above or compound indexs as below, the performance issue occured with timeout error even if the bulksize was set to 100.With same data and same sending patterns, the only difference is the indexes. It seemed like that the indexes of sparse or compound didn’t work at all. It should not be caused by the performance bollte neck of M20.Is there anything wrong with the using above 2 types of indexes?\nMany thanks.",
"username": "frand_li"
},
{
"code": " {\n myId1: 1,\n myId2: 1\n },\n {\n unique: true,\n partialFilterExpression: {\n $or: [{ myId1: { $exists: true} }, { myId2: { $exists: true} }]\n }\n }\n",
"text": "Here are my new findings.\nWhen indexes were created as below, there is no performance problem to bulkwrite documents with myId1. All 1000 documents can be upserted isntantly at even M10.\nHowver, there will be timeout issue with bulkwritting only 500 documents with the second unique index, aka, myId2 even on M60 server.Is it possible a bug? Or it was caused by misusing the compound index?BTW, there was a typo in previous reply about the compound indexes json by setting ture as false:\n$or: [{ myId1: { $exists: false } }, { myId2: { $exists: false } }].",
"username": "frand_li"
},
{
"code": "",
"text": "Hi @frand_li and thank you for getting back.Based on the information provided earlier, I’m attempting to recreate the problem on a local setup. It would be greatly appreciated if you could provide additional details for a clearer understanding.With respect to the defined indexes, it appears that “myId1” and “myId2” field values should be unique. In the context of the bulkWrite operation, are you attempting to filter a single document in a single operation and subsequently update the “att1” and “updated” values using this bulkWrite operation?Furthermore, I would like to request the following information:Regards\nAasawari",
"username": "Aasawari"
}
] | Upsert by Bulkwrite Performance issue with 2 sparse indexes | 2023-10-18T02:14:49.574Z | Upsert by Bulkwrite Performance issue with 2 sparse indexes | 269 |
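One way to narrow this down, sketched in mongosh: check whether the upsert filter is actually served by an index. The compound partial index above has myId1 as its first key and a partialFilterExpression, so a filter on myId2 alone is generally not eligible for it, and each upsert may fall back to a collection scan, which would explain the CPU spike. The collection name and field value below are placeholders taken from the post:

```javascript
// Inspect the winning plan for the upsert's filter; look for IXSCAN vs COLLSCAN
db.items.find({ myId2: 'myId2_0' }).explain('executionStats');

// If it is a COLLSCAN, a dedicated index on myId2 (here sparse + unique, matching the
// original design) lets the upserts locate documents without scanning the collection
db.items.createIndex({ myId2: 1 }, { unique: true, sparse: true });
```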
null | [] | [
{
"code": "",
"text": "I’m getting this error when trying the MongoDB course",
"username": "Bujji_Phanikiran_Vanjarapu"
},
{
"code": "",
"text": "Hi @Bujji_Phanikiran_Vanjarapu ,Welcome to The MongoDB Community Forums! Can you please check again if you are still getting the same issue?In case if you are still getting the error after launching a lab, I would recommend you to send an email to [email protected] with details such as:Happy Learning, cheers! \nTarun",
"username": "Tarun_Gaur"
}
] | Instruqt :: Instruqt: connection closed | 2023-10-24T12:29:53.170Z | Instruqt :: Instruqt: connection closed | 146 |
null | [
"mongodb-shell"
] | [
{
"code": "db.adminCommand({setParameter: 1, mongodb_max_varchar_length: 0})db.runCommand({ sqlGenerateSchema: 1, sampleNamespaces: [\"test.cars\"], sampleSize: 2, setSchemas: true})",
"text": "I try to connect to power BI, before, everything was correct but after I did copy of database - no.\nSo now I have two databases and when I try to connect I get error:DataSource.Error: The table has no visible columns and cannot be queried.I tried many different thing.MongoServerError: (Unauthorized) not authorized on admin to execute command { setParameter: 1, mongodb_max_varchar_length: 0, apiVersion: “1”, lsid: { …MongoServerError: command not foundSo before everything works correct because I made custom Data Federation now I try to use automatic Federation. Can someone help me?mongo 6.1",
"username": "Donis_Rikardo"
},
{
"code": "",
"text": "As always - the right question always has an answer.",
"username": "Donis_Rikardo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoServerError: sqlGenerateSchema command not found | 2023-10-25T02:19:05.988Z | MongoServerError: sqlGenerateSchema command not found | 164 |
null | [
"aggregation"
] | [
{
"code": "[\n {\n $match: {\n $and: [\n {\n \"identifier.arrayField\": {\n $in: [\"0000000000001\"],\n },\n },\n {\n $and: [\n {\n someArrayField: {\n $exists: true,\n },\n },\n {\n $and: [\n {\n someArrayField: {\n $elemMatch: {\n itemId: {\n $in: [\n \"65353aeabf6d627ba9bdc5cc\",\n ],\n },\n },\n },\n },\n {\n anotherField: {\n $ne: \"65361492e800c0150c7a7ed\",\n },\n },\n ],\n },\n ],\n },\n {\n anIndexedField: \"value\",\n },\n ],\n },\n },\n {\n $facet: {\n items: [\n {\n $sort: {\n \"sort_field\": 1,\n },\n },\n {\n $skip: 0,\n },\n {\n $limit: 10,\n },\n ],\n count: [\n {\n $count: \"count\",\n },\n ],\n ets: [\n {\n $group: {\n _id: null,\n earliestTimestamp: {\n $min: \"$createDate\",\n },\n },\n },\n ],\n },\n },\n {\n $addFields: {\n total: {\n $arrayElemAt: [\"$count.count\", 0],\n },\n earliestTimestamp: {\n $arrayElemAt: [\n \"$ets.earliestTimestamp\",\n 0,\n ],\n },\n },\n },\n]\n",
"text": "I’m trying to put together an aggregation pipeline that filters a large data set, then returns a paginated subset, along with the total count and the earliest timestamp within the collection (regardless of the page). Would very much like to avoid a FETCH.The resulting explanation is extremely long but I can paste that in if it helps.Ideally, I would like to avoid fetching the entire matching collection for the sake of a count and getting the earliest timestamp.",
"username": "Reka_Burmeister"
},
{
"code": "",
"text": "Why not just make 2 calls, one for the earliest and total and another for the data you want (try and avoid skip if you can and do a filter instead)? Do you need the total every time in case more documents are added as you’re paging? If that’s the case can you guarantee that the added document will be after the previously shown results?This avoids the complexity and possibly limitations of a $facet stage.You seem to have a lot of the query written already, what’s the problem you’re facing? If you’re seeing non-index hits then your index is not matching your filtering.",
"username": "John_Sewell"
},
{
"code": "",
"text": "I was thinking it might have to be that - was just hoping that everything can be done “nicely” within the aggregation but what I would like resembles a tree structure of pipelines rather than a linear one.The query is written and working but the performance benchmarking vs the earlier version (doing separate calls) showed that the new solution is worse. It was very surprising as that version fetched the collection and did the sort + pagination in java instead. Still, it’s faster.We think the problem is a FETCH after the IXSCAN - so the index is being used but the facet then seems to do a fetch for all hits.I have another aggregation that’s similar but isn’t supposed to fetch any items, just their count, grouping them into 5 different facets after applying the groups’ individual matches - again, this is slower than doing 5 different calls.As you say, might just be a limitation of the facet - I confess I didn’t read its documentation thoroughly enough.",
"username": "Reka_Burmeister"
}
] | Help with an aggregation pipeline | 2023-10-24T18:39:25.451Z | Help with an aggregation pipeline | 143 |
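A hedged sketch of the two-call alternative discussed above (Node.js driver syntax): one aggregate that computes only the total and the earliest timestamp, and one find that returns just the requested page. Field names follow the pipeline in the first post, and the full $match is abbreviated to matchStage:

```javascript
const matchStage = { anIndexedField: 'value' /* ...rest of the original $match... */ };

// Call 1: summary only, no page of documents materialized
const [summary] = await coll.aggregate([
  { $match: matchStage },
  { $group: { _id: null, total: { $sum: 1 }, earliestTimestamp: { $min: '$createDate' } } }
]).toArray();

// Call 2: the current page
const items = await coll.find(matchStage)
  .sort({ sort_field: 1 })
  .skip(0)
  .limit(10)
  .toArray();
```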
null | [
"node-js"
] | [
{
"code": "",
"text": "I am getting the same error and my node is also v20.8.0",
"username": "Akash_Kumar9"
},
{
"code": "",
"text": "I am getting the same error and my node is also v20.8.0",
"username": "Harsh_Prajapati"
},
{
"code": "",
"text": "Hey @Harsh_Prajapati / @Akash_Kumar9,Could you please let me know if the problem still persists for you? If it does, please share the Node.js driver/Mongoose version you are using and the error log you are encountering.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "monicaveloso@SJC-UBU-001459:~/about/FruitsProject$ node -v\nv21.0.0\nmonicaveloso@SJC-UBU-001459:~/about/FruitsProject$ node app.js\n(node:1436578) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.\n(Use `node --trace-deprecation ...` to show where the warning was created)\nnull\nmonicaveloso@SJC-UBU-001459:~/about/FruitsProject$ nodejs app.js\n/home/monicaveloso/about/FruitsProject/node_modules/mongodb/lib/operations/add_user.js:16\n this.options = options ?? {};\n ^\n\nSyntaxError: Unexpected token ?\n at Module._compile (internal/modules/cjs/loader.js:723:23)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)\n at Module.load (internal/modules/cjs/loader.js:653:32)\n at tryModuleLoad (internal/modules/cjs/loader.js:593:12)\n at Function.Module._load (internal/modules/cjs/loader.js:585:3)\n at Module.require (internal/modules/cjs/loader.js:692:17)\n at require (internal/modules/cjs/helpers.js:25:18)\n at Object.<anonymous> (/home/monicaveloso/about/FruitsProject/node_modules/mongodb/lib/admin.js:4:20)\n at Module._compile (internal/modules/cjs/loader.js:778:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)\n \"name\": \"fruitsproject\",\n \"version\": \"1.0.0\",\n \"lockfileVersion\": 3,\n \"requires\": true,\n \"packages\": {\n \"\": {\n \"name\": \"fruitsproject\",\n \"version\": \"1.0.0\",\n \"license\": \"ISC\",\n \"dependencies\": {\n \"mongodb\": \"^5.9.0\"\n }\n },\n \"node_modules/@mongodb-js/saslprep\": {\n \"version\": \"1.1.0\",\n \"resolved\": \"https://registry.npmjs.org/@mongodb-js/saslprep/-/saslprep-1.1.0.tgz\",\n \"integrity\": \"sha512-Xfijy7HvfzzqiOAhAepF4SGN5e9leLkMvg/OPOF97XemjfVCYN/oWa75wnkc6mltMSTwY+XlbhWgUOJmkFspSw==\",\n \"optional\": true,\n \"dependencies\": {\n \"sparse-bitfield\": \"^3.0.3\"\n }\n },\n \"node_modules/@types/node\": {\n \"version\": \"20.8.7\",\n \"resolved\": \"https://registry.npmjs.org/@types/node/-/node-20.8.7.tgz\",\n \"integrity\": \"sha512-21TKHHh3eUHIi2MloeptJWALuCu5H7HQTdTrWIFReA8ad+aggoX+lRes3ex7/FtpC+sVUpFMQ+QTfYr74mruiQ==\",\n \"dependencies\": {\n \"undici-types\": \"~5.25.1\"\n }\n },\n \"node_modules/@types/webidl-conversions\": {\n \"version\": \"7.0.2\",\n \"resolved\": \"https://registry.npmjs.org/@types/webidl-conversions/-/webidl-conversions-7.0.2.tgz\",\n \"integrity\": \"sha512-uNv6b/uGRLlCVmelat2rA8bcVd3k/42mV2EmjhPh6JLkd35T5bgwR/t6xy7a9MWhd9sixIeBUzhBenvk3NO+DQ==\"\n },\n \"node_modules/@types/whatwg-url\": {\n \"version\": \"8.2.2\",\n \"resolved\": \"https://registry.npmjs.org/@types/whatwg-url/-/whatwg-url-8.2.2.tgz\",\n \"integrity\": \"sha512-FtQu10RWgn3D9U4aazdwIE2yzphmTJREDqNdODHrbrZmmMqI0vMheC/6NE/J1Yveaj8H+ela+YwWTjq5PGmuhA==\",\n \"dependencies\": {\n \"@types/node\": \"*\",\n \"@types/webidl-conversions\": \"*\"\n }\n },\n \"node_modules/bson\": {\n \"version\": \"5.5.1\",\n \"resolved\": \"https://registry.npmjs.org/bson/-/bson-5.5.1.tgz\",\n \"integrity\": \"sha512-ix0EwukN2EpC0SRWIj/7B5+A6uQMQy6KMREI9qQqvgpkV2frH63T0UDVd1SYedL6dNCmDBYB3QtXi4ISk9YT+g==\",\n \"engines\": {\n \"node\": \">=14.20.1\"\n }\n },\n \"node_modules/ip\": {\n \"version\": \"2.0.0\",\n \"resolved\": \"https://registry.npmjs.org/ip/-/ip-2.0.0.tgz\",\n \"integrity\": \"sha512-WKa+XuLG1A1R0UWhl2+1XQSi+fZWMsYKffMZTTYsiZaUD8k2yDAj5atimTUD2TZkyCkNEeYE5NhFZmupOGtjYQ==\"\n },\n \"node_modules/memory-pager\": {\n \"version\": \"1.5.0\",\n \"resolved\": \"https://registry.npmjs.org/memory-pager/-/memory-pager-1.5.0.tgz\",\n 
\"integrity\": \"sha512-ZS4Bp4r/Zoeq6+NLJpP+0Zzm0pR8whtGPf1XExKLJBAczGMnSi3It14OiNCStjQjM6NU1okjQGSxgEZN8eBYKg==\",\n \"optional\": true\n },\n \"node_modules/mongodb\": {\n \"version\": \"5.9.0\",\n \"resolved\": \"https://registry.npmjs.org/mongodb/-/mongodb-5.9.0.tgz\",\n \"integrity\": \"sha512-g+GCMHN1CoRUA+wb1Agv0TI4YTSiWr42B5ulkiAfLLHitGK1R+PkSAf3Lr5rPZwi/3F04LiaZEW0Kxro9Fi2TA==\",\n \"dependencies\": {\n \"bson\": \"^5.5.0\",\n \"mongodb-connection-string-url\": \"^2.6.0\",\n \"socks\": \"^2.7.1\"\n },\n \"engines\": {\n \"node\": \">=14.20.1\"\n },\n \"optionalDependencies\": {\n \"@mongodb-js/saslprep\": \"^1.1.0\"\n },\n \"peerDependencies\": {\n \"@aws-sdk/credential-providers\": \"^3.188.0\",\n \"@mongodb-js/zstd\": \"^1.0.0\",\n \"kerberos\": \"^1.0.0 || ^2.0.0\",\n \"mongodb-client-encryption\": \">=2.3.0 <3\",\n \"snappy\": \"^7.2.2\"\n },\n \"peerDependenciesMeta\": {\n \"@aws-sdk/credential-providers\": {\n \"optional\": true\n },\n \"@mongodb-js/zstd\": {\n \"optional\": true\n },\n \"kerberos\": {\n \"optional\": true\n },\n \"mongodb-client-encryption\": {\n \"optional\": true\n },\n \"snappy\": {\n \"optional\": true\n }\n }\n },\n \"node_modules/mongodb-connection-string-url\": {\n \"version\": \"2.6.0\",\n \"resolved\": \"https://registry.npmjs.org/mongodb-connection-string-url/-/mongodb-connection-string-url-2.6.0.tgz\",\n \"integrity\": \"sha512-WvTZlI9ab0QYtTYnuMLgobULWhokRjtC7db9LtcVfJ+Hsnyr5eo6ZtNAt3Ly24XZScGMelOcGtm7lSn0332tPQ==\",\n \"dependencies\": {\n \"@types/whatwg-url\": \"^8.2.1\",\n \"whatwg-url\": \"^11.0.0\"\n }\n },\n \"node_modules/punycode\": {\n \"version\": \"2.3.0\",\n \"resolved\": \"https://registry.npmjs.org/punycode/-/punycode-2.3.0.tgz\",\n \"integrity\": \"sha512-rRV+zQD8tVFys26lAGR9WUuS4iUAngJScM+ZRSKtvl5tKeZ2t5bvdNFdNHBW9FWR4guGHlgmsZ1G7BSm2wTbuA==\",\n \"engines\": {\n \"node\": \">=6\"\n }\n },\n \"node_modules/smart-buffer\": {\n \"version\": \"4.2.0\",\n \"resolved\": \"https://registry.npmjs.org/smart-buffer/-/smart-buffer-4.2.0.tgz\",\n \"integrity\": \"sha512-94hK0Hh8rPqQl2xXc3HsaBoOXKV20MToPkcXvwbISWLEs+64sBq5kFgn2kJDHb1Pry9yrP0dxrCI9RRci7RXKg==\",\n \"engines\": {\n \"node\": \">= 6.0.0\",\n \"npm\": \">= 3.0.0\"\n }\n },\n \"node_modules/socks\": {\n \"version\": \"2.7.1\",\n \"resolved\": \"https://registry.npmjs.org/socks/-/socks-2.7.1.tgz\",\n \"integrity\": \"sha512-7maUZy1N7uo6+WVEX6psASxtNlKaNVMlGQKkG/63nEDdLOWNbiUMoLK7X4uYoLhQstau72mLgfEWcXcwsaHbYQ==\",\n \"dependencies\": {\n \"ip\": \"^2.0.0\",\n \"smart-buffer\": \"^4.2.0\"\n },\n \"engines\": {\n \"node\": \">= 10.13.0\",\n \"npm\": \">= 3.0.0\"\n }\n },\n \"node_modules/sparse-bitfield\": {\n \"version\": \"3.0.3\",\n \"resolved\": \"https://registry.npmjs.org/sparse-bitfield/-/sparse-bitfield-3.0.3.tgz\",\n \"integrity\": \"sha512-kvzhi7vqKTfkh0PZU+2D2PIllw2ymqJKujUcyPMd9Y75Nv4nPbGJZXNhxsgdQab2BmlDct1YnfQCguEvHr7VsQ==\",\n \"optional\": true,\n \"dependencies\": {\n \"memory-pager\": \"^1.0.2\"\n }\n },\n \"node_modules/tr46\": {\n \"version\": \"3.0.0\",\n \"resolved\": \"https://registry.npmjs.org/tr46/-/tr46-3.0.0.tgz\",\n \"integrity\": \"sha512-l7FvfAHlcmulp8kr+flpQZmVwtu7nfRV7NZujtN0OqES8EL4O4e0qqzL0DC5gAvx/ZC/9lk6rhcUwYvkBnBnYA==\",\n \"dependencies\": {\n \"punycode\": \"^2.1.1\"\n },\n \"engines\": {\n \"node\": \">=12\"\n }\n },\n \"node_modules/undici-types\": {\n \"version\": \"5.25.3\",\n \"resolved\": \"https://registry.npmjs.org/undici-types/-/undici-types-5.25.3.tgz\",\n \"integrity\": 
\"sha512-Ga1jfYwRn7+cP9v8auvEXN1rX3sWqlayd4HP7OKk4mZWylEmu3KzXDUGrQUN6Ol7qo1gPvB2e5gX6udnyEPgdA==\"\n },\n \"node_modules/webidl-conversions\": {\n \"version\": \"7.0.0\",\n \"resolved\": \"https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-7.0.0.tgz\",\n \"integrity\": \"sha512-VwddBukDzu71offAQR975unBIGqfKZpM+8ZX6ySk8nYhVoo5CYaZyzt3YBvYtRtO+aoGlqxPg/B87NGVZ/fu6g==\",\n \"engines\": {\n \"node\": \">=12\"\n }\n },\n \"node_modules/whatwg-url\": {\n \"version\": \"11.0.0\",\n \"resolved\": \"https://registry.npmjs.org/whatwg-url/-/whatwg-url-11.0.0.tgz\",\n \"integrity\": \"sha512-RKT8HExMpoYx4igMiVMY83lN6UeITKJlBQ+vR/8ZJ8OCdSiN3RwCq+9gH0+Xzj0+5IrM6i4j/6LuvzbZIQgEcQ==\",\n \"dependencies\": {\n \"tr46\": \"^3.0.0\",\n \"webidl-conversions\": \"^7.0.0\"\n },\n \"engines\": {\n \"node\": \">=12\"\n }\n }\n }\n}\n\nconst { MongoClient } = require(\"mongodb\");\n\n// Replace the uri string with your connection string.\nconst uri = \"mongodb://localhost:27017\";\n\nconst client = new MongoClient(uri);\n\nasync function run() {\n try {\n const database = client.db('fruitsDb');\n const fruits = database.collection('fruits');\n\n // Query for a Banana\n const query = { name: 'Banana' };\n const banana = await fruits.findOne(query);\n\n console.log(banana);\n } finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n }\n}\nrun().catch(console.dir);\n",
"text": "Hello! I’m having this error too How I’m trying running:My package_lock.jsonMy app.jsI have tried delete node_modules and package_lock.json and restart. Have tried to downgrad the mongodb version to ^5.2.0. I don’t know what else to do please SOS",
"username": "Monica_Moreira_Lopes_Veloso"
},
{
"code": "",
"text": "The syntax error seems to come from the file add_user.js.Could you please share this file.",
"username": "steevej"
},
{
"code": "\"use strict\";\nObject.defineProperty(exports, \"__esModule\", { value: true });\nexports.AddUserOperation = void 0;\nconst crypto = require(\"crypto\");\nconst error_1 = require(\"../error\");\nconst utils_1 = require(\"../utils\");\nconst command_1 = require(\"./command\");\nconst operation_1 = require(\"./operation\");\n/** @internal */\nclass AddUserOperation extends command_1.CommandCallbackOperation {\n constructor(db, username, password, options) {\n super(db, options);\n this.db = db;\n this.username = username;\n this.password = password;\n this.options = options ?? {};\n }\n executeCallback(server, session, callback) {\n const db = this.db;\n const username = this.username;\n const password = this.password;\n const options = this.options;\n // Error out if digestPassword set\n // v5 removed the digestPassword option from AddUserOptions but we still want to throw\n // an error when digestPassword is provided.\n if ('digestPassword' in options && options.digestPassword != null) {\n return callback(new error_1.MongoInvalidArgumentError('Option \"digestPassword\" not supported via addUser, use db.command(...) instead'));\n }\n let roles;\n if (!options.roles || (Array.isArray(options.roles) && options.roles.length === 0)) {\n (0, utils_1.emitWarningOnce)('Creating a user without roles is deprecated. Defaults to \"root\" if db is \"admin\" or \"dbOwner\" otherwise');\n if (db.databaseName.toLowerCase() === 'admin') {\n roles = ['root'];\n }\n else {\n roles = ['dbOwner'];\n }\n }\n else {\n roles = Array.isArray(options.roles) ? options.roles : [options.roles];\n }\n let topology;\n try {\n topology = (0, utils_1.getTopology)(db);\n }\n catch (error) {\n return callback(error);\n }\n const digestPassword = topology.lastHello().maxWireVersion >= 7;\n let userPassword = password;\n if (!digestPassword) {\n // Use node md5 generator\n const md5 = crypto.createHash('md5');\n // Generate keys used for authentication\n md5.update(`${username}:mongo:${password}`);\n userPassword = md5.digest('hex');\n }\n // Build the command to execute\n const command = {\n createUser: username,\n customData: options.customData || {},\n roles: roles,\n digestPassword\n };\n // No password\n if (typeof password === 'string') {\n command.pwd = userPassword;\n }\n super.executeCommandCallback(server, session, command, callback);\n }\n}\nexports.AddUserOperation = AddUserOperation;\n(0, operation_1.defineAspects)(AddUserOperation, [operation_1.Aspect.WRITE_OPERATION]);\n//# sourceMappingURL=add_user.js.map\n",
"text": "Its a file from mongo_db!. The path is “/home/monicaveloso/about/FruitsProject/node_modules/mongodb/lib/operations/add_user.js:16”The file is",
"username": "Monica_Moreira_Lopes_Veloso"
},
{
"code": "",
"text": "This is valid JS according to MDN. I am puzzled. I have try it in a normal (that is not a constructor) and it worked fine.I have tagged your post with node-js in hope of somebody with more JS experience see it.",
"username": "steevej"
},
{
"code": "monicaveloso@SJC-UBU-001459:~/about/FruitsProject$ node app.js\n(node:1436578) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.\n(Use `node --trace-deprecation ...` to show where the warning was created)\nnull\nmonicaveloso@SJC-UBU-001459:~/about/FruitsProject$ nodejs app.js\n/home/monicaveloso/about/FruitsProject/node_modules/mongodb/lib/operations/add_user.js:16\n this.options = options ?? {};\n ^\n\nSyntaxError: Unexpected token ?\n",
"text": "So, still trying I discovery part of the problem.As shown in my first message output:The node app.js works just fine, but mongodb is giving me just null instead of working or thrown an error, and it makes me think that node wasnt working. Then I tried to run with nodejs, and nodejs isnt in the same version than node. Giving me the sintax error.So its just a problem of discovering why mongodb is giving me the null output. (my app.js is described in my first comment).",
"username": "Monica_Moreira_Lopes_Veloso"
},
{
"code": "const database = client.db('fruitsDb');\n const fruits = database.collection('fruits');\n\n // Query for a Banana\n const query = { name: 'Banana' };\n const banana = await fruits.findOne(query);\n",
"text": "If I understand the following correctlySo its just a problem of discovering why mongodb is giving me the null output.you want to know why the const banana is evaluated to null in the following code:I think you simply forgot to open the connection.I am sorry if I focused on the syntax error rather than on the null banana issue butSo I really really thought that the issue was about the syntax error with ??. You should have created a new thread with a title like My query returns null. Why?To terminate, if the issue is not the missing open(), the null banana could be that",
"username": "steevej"
},
{
"code": "",
"text": "My initial error was the one I initially reported. However, I persisted to resolve the issue, and I eventually identified the problem. The first line of my previous message was:“So, still trying I discovery part of the problem.”I discovered that when I ran “nodejs app.js,” it was giving me a problem related to ‘this.options = options ?? {};’ because the version of Node.js I had installed differed from ‘node.’Running “node app.js” yielded a null result, which led me to use the ‘nodejs’ command instead, mistakenly thinking that “node app.js” wasn’t working because of null result.I have no intention of shifting away from the original problem, so I will open another forum thread if the need arises.Thank you for your attention!",
"username": "Monica_Moreira_Lopes_Veloso"
}
] | Getting Error This.options = options ? {}; | 2023-10-11T09:14:56.117Z | Getting Error This.options = options ? {}; | 385 |
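A minimal sketch, assuming the null result came from no matching document or from querying before a usable connection was established (as suggested above): open the connection explicitly before findOne and keep the rest of the app.js from this thread unchanged. The URI uses the explicit IPv4 loopback address as a precaution; everything else mirrors the original code:

```javascript
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://127.0.0.1:27017');

async function run() {
  try {
    await client.connect(); // explicit connection, per the advice above
    const fruits = client.db('fruitsDb').collection('fruits');

    const banana = await fruits.findOne({ name: 'Banana' });
    console.log(banana); // still null? then no matching document exists in fruitsDb.fruits
  } finally {
    await client.close();
  }
}

run().catch(console.dir);
```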
null | [
"node-js",
"mongoose-odm",
"serverless"
] | [
{
"code": "let connection = await mongoose.createConnection(process.env.MONGO_DB_URL, {useNewUrlParser: true, useUnifiedTopology: true});MongoNetworkError: read ECONNRESET",
"text": "I am using Node.js with Mongoose to connect with these settings: let connection = await mongoose.createConnection(process.env.MONGO_DB_URL, {useNewUrlParser: true, useUnifiedTopology: true});\nRandomly once a week or so I will start getting connection errors of MongoNetworkError: read ECONNRESET for between 10 minutes to 1.5 hours on some not all of my connections. I am using the serverless instance with MongoDB and was informed by support that there are downtimes for the serverless instance.I don’t currently have a retry connection flow and would like to implement one but since these tasks need to be handled quickly to represent data to the app users, does anyone have thoughts on what an optimized retry flow would look like for this? Is there any other connection settings you would recommend like increasing timeouts?Thanks",
"username": "Trevor_Theodore"
},
{
"code": "MongoNetworkError: read ECONNRESET",
"text": "Hi @Trevor_Theodore,Randomly once a week or so I will start getting connection errors of MongoNetworkError: read ECONNRESET for between 10 minutes to 1.5 hours on some not all of my connectionsCan you clarify what this means to your application. For example, are you “production down” during these times or is there just a lot of “noise” in the logs regarding connections being reset?I don’t currently have a retry connection flow and would like to implement one but since these tasks need to be handled quickly to represent data to the app users, does anyone have thoughts on what an optimized retry flow would look like for this?All modern MongoDB Drivers (including the Node.js driver that is included with Mongoose) have Retryable Writes and Retryable Reads enabled by default. These should automatically handle most transient network failures without the need for additional retry logic.Is there any other connection settings you would recommend like increasing timeouts?Before trying to tune any settings it would help if we understood what the impact of these errors are on your active workload.",
"username": "alexbevi"
},
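For reference, a hedged sketch of making the driver's retry defaults explicit and tuning timeouts on the connection from the original post; the numeric values are illustrative assumptions, not recommendations, and none of this helps during a genuine service disruption:

```javascript
const mongoose = require('mongoose');

async function connect() {
  // Retryable reads/writes are already on by default in modern drivers; this makes them
  // explicit and tunes how long the driver waits for a reachable server before failing.
  const connection = await mongoose.createConnection(process.env.MONGO_DB_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    retryWrites: true,               // driver default
    retryReads: true,                // driver default
    serverSelectionTimeoutMS: 30000, // illustrative value
    socketTimeoutMS: 60000           // illustrative value
  });
  return connection;
}
```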
{
"code": "",
"text": "Hey @alexbevi thanks for the response.For users who are trying to retrieve information on active web pages, they may not be served that information across multiple websites during the time the connections are down. Additionally, if an app user attempts a POST request action that action will error out during that time. Webhooks retry themselves so that isn’t really much of an issue. Otherwise, yes it’s a lot of noise in the logs.",
"username": "Trevor_Theodore"
},
{
"code": "",
"text": "For users who are trying to retrieve information on active web pages, they may not be served that information across multiple websites during the time the connections are down. Additionally, if an app user attempts a POST request action that action will error out during that time.Unfortunately if there is genuinely a service disruption (as opposed to a transient network error) custom retry logic wouldn’t mitigate this issue. The serverless team is looking into improvements in this area though ",
"username": "alexbevi"
}
] | MongoNetworkError: read ECONNRESET on Serverless Instance | 2023-10-20T17:53:40.015Z | MongoNetworkError: read ECONNRESET on Serverless Instance | 273 |
[] | [
{
"code": "",
"text": "Dear MongoDB Team,Greetings! I’ve been trying to claim the MongoDB benefits from the GitHub Student Pack, particularly the $50 Atlas Credit. However, each time I click on “Request your access code” to proceed, I’m consistently met with the following message: “We have a little problem. We are experiencing difficulties issuing a code. Please check back later to receive your Atlas code.”I’ve attempted this for over a day now, encountering the same issue repeatedly. I’m certain that both my student verification and GitHub account are valid and active, so I’m quite puzzled as to why I’m facing this hurdle.I’m keen on utilizing the Atlas Credit for my academic projects and learning. Any assistance or guidance you can provide would be greatly appreciated.Thank you for your assistance in this matter! I look forward to your response.Best regards\nimage1511×965 73.3 KB",
"username": "luolin0826"
},
{
"code": "",
"text": "Hi there and welcome to the forums! Sorry you encountered this issue. We’re in the process of adding more Atlas codes to our system so if you check back later today or tomorrow, you should be able to generate an Atlas code without issue.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Hi again! You should be able to generate an Atlas promo code now.",
"username": "Aiyana_McConnell"
}
] | Issue Claiming MongoDB Atlas Credit from the GitHub Student Pack | 2023-10-24T03:04:06.863Z | Issue Claiming MongoDB Atlas Credit from the GitHub Student Pack | 165 |
|
null | [
"crud"
] | [
{
"code": "[{\n \"_id\" : 206, \n \"Events\" : [\n {\n \"_t\" : \"PageReviewed\", \n \"UpdateDate\" : null,\n },\n {\n \"_t\" : \"PageMetadataSet\", \n \"UpdateDate\" : null,\n \"Metadata\" : {\n \"Category\" : \"Any caSE cateGory\",\n \"Name\" : \"Any CaSe name\",\n \"Culture\" : \"*\",\n \"Environment\" : \"*\",\n \"Platform\" : \"*\",\n \"Value\" : true,\n \"ValueTypeName\" : \"System.Boolean\"\n }\n }\n ]\n},\n{\n \"_id\" : 207, \n \"Events\" : [\n {\n \"_t\" : \"PageReviewed\",\n \"UpdateDate\" : null,\n },\n {\n \"_t\" : \"PageMetadataSet\", \n \"UpdateDate\" : null,\n \"Metadata\" : {\n \"Category\" : \"Any casing Category\",\n \"Name\" : \"Any casing Name\",\n \"Culture\" : \"*\",\n \"Environment\" : \"*\",\n \"Platform\" : \"*\",\n \"Value\" : false,\n \"ValueTypeName\" : \"System.Boolean\"\n }\n }\n ]\n}]\nconst lowercase = (value) => {\n return value.toLowerCase();\n}\n\ndb.getCollection(\"vsm.commits.bck\").updateMany(\n {\n \"Events._t\": /MetadataSet/\n },\n { \n $set : {\n \"Events.$[m].Metadata.Name\": lower(\"Events.$[m].Metadata.Name\")\n }\n }, \n {\n arrayFilters: [\n {\"m.Metadata\" : {$exists: true}}\n ]\n }\n)\n\"Events.1.Metadata.Name\" : \"events.$.metadata.name\"\n",
"text": "Hi, this is the first time I have written here for help, I hope to be the more precise I can.Having the following documentsI would need a way to update the nested property Category and Name inside the Metadata object contained in the Events array.\nI’ve tried the updateMany approach with the array element, but I cannot access the original value of the property to be lowercased:Unfortunately, I cannot achieve the desired update since the reference to the item inside the $set to get the original value is not resolved. ($set : { “Events.$[m].Metadata.Name”: lower(“Events.$[m].Metadata.Name”) })The Name field has been updated with the string specified ",
"username": "Alessio_Ansinelli"
},
{
"code": " \"Events.$[m].Metadata.Name\": \"events.$[m].metadata.name\"\ndb.test.updateMany(\n {\n },\n [\n {\n $set:{\n 'A':{$toLower:'$myField'}\n }\n }\n ]\n)\ndb.getCollection(\"Test\").updateMany(\n{\n \"Events._t\": /MetadataSet/\n},\n[\n {\n $set:{\n Events:{\n $map:{\n input:'$Events',\n as: 'thisEvent',\n in:{\n $cond:[\n {$gt:['$$thisEvent.Metadata.Name', null]},\n {\n $mergeObjects:[\n '$$thisEvent',\n {\n 'Metadata':{\n $mergeObjects:[\n '$$thisEvent.Metadata',\n {'Name':{$toLower:'$$thisEvent.Metadata.Name'}}\n ]\n }\n }\n ]\n },\n '$$thisEvent'\n ]\n },\n }\n }\n }\n }\n]\n)\n",
"text": "I had a play with this as it was something I’ve not had to do before…One issue here is that you’re creating the lower function and this is running on the string “Events.$[m].Metadata.Name” before this is being sent to the server so the query you’re sending to the server is:Which is not what you want, you want to get the server to perform a lower function on the value that’s evaluated.The next thing I though was to use the aggregate operator $toLower so I put that in, this meant swapping to an aggregation based update (note the wrapping square brackets):Unfortunately, when swapping to this, you cannot use arrayFilters…So, checking some SO and other forum replies of similar questions, you can instead use a $map to manipulate the data we have and replace the current array entries with the new value.\nSo the plan was to use a $map and then we can merge the current object with another where we’ve swapped the case of the field to lower.\nUnfortunately, a $merge will not merge child elements, only the top level it seems to merging with a basic object with just the Name field set, removed all other data.\nAnother issue was that not all documents have this field, we want to only set it on array elements that have the field. We can get round this with a $cond.So in summary (and that was a lot of rambling, apologies) this is what I came up with:We’re running an update, replacing the Events field with a map of the Events field, that just returns the same array element if it does not have the Metadata.Name field set, of it if does, then it changes the element to the current value, merged with a merge of an object with the lowercase name set into the current Metadata object.Here it is in mongo playground, but as an aggregate as that does not support updates with aggregate syntax, plus it’s how I stumbled to a solution…Mongo playground: a simple sandbox to test and share MongoDB queries onlineI’ll wait while someone else comes up with a one line equivalent…/Edit - refs I took a look at",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update property of a nested object in array with its lowercased value | 2023-10-24T15:48:22.308Z | Update property of a nested object in array with its lowercased value | 156 |
null | [
"dot-net"
] | [
{
"code": "{ \n \"objects\": [\n {\n \"id\": \"some_id\",\n \"custom_data\": { \"some_key\": 1, \"other_string\": \"a\" }\n },\n {\n \"id\": \"totally_different_id\",\n \"custom_data\": { \"totally_different_object\": 942, \"stuff\": { \"a\" : 5 } }\n }\n ]\n}\npublic partial class ServerObjects: IRealmObject {\n [MapTo(\"objects\")] public IList<ServerItem> Objects;\n}\n\npublic partial class ServerItem: IEmbeddedObject {\n [MapTo(\"id\")] public string Id {get; set;}\n [MapTo(\"custom_data\")] RealmValue CustomData; //?????????\n}\n\npublic partial class SomeIdCustomData: IEmbeddedObject {\n///some fields\n}\n\npublic partial class TotallyDifferentIdCustomData: IEmbeddedObject {\n///some fields\n}\nIEmbeddedObjects are not supported by RealmValue yet",
"text": "Hi there,I have data design question, i have a json object gathered from a server, looking like this:So I want to mirror this layout to a realm object like this:Somehow I need to map different custom data to their corresponding classes. I can have a mapping from id to their data type, so i can parse json according to that, but after putting them in a realm value, it says IEmbeddedObjects are not supported by RealmValue yet error. If I make custom data classes IRealmObjects then they have their own objects dangling around the db. I want them to be embedded.What is the best approach to handle these kind of problems? I can change the design completely.Thanks.",
"username": "Santavik"
},
{
"code": "",
"text": "I want them to be embeddedEmbedded objects cannot exist on their own. They are not managed objects, and do not have an official object id (although it can have one, it’s just not used by Realm).I am not sure it’s clear what you want embedded - embedded objects are often part of a List, although they could be defined as a single object property as well.Can you clarify the question?",
"username": "Jay"
}
] | Using IEmbeddedObject as RealmValue | 2023-10-24T08:50:15.991Z | Using IEmbeddedObject as RealmValue | 176 |
null | [
"queries",
"kotlin"
] | [
{
"code": "frogfun updateAge(age: Int)\n{\n realm.write {\n val frog: Frog? =\n this.query<Frog>(\"_id == $0\", currentFrogId).first().find()\n frog?.age = age\n }\n}\nfun updateName(name: String)\n{\n realm.write {\n val frog: Frog? =\n this.query<Frog>(\"_id == $0\", currentFrogId).first().find()\n frog?.name = name\n }\n}\nquery",
"text": "I am just starting with Realm. I use the Realm Kotlin SDK. Expanding on this information.If I want to update different properties of the same frog at different times. E.g.Do I have to do the query every time, or is it possible to perform it once and use the result for multiple updates?",
"username": "OliverClimbs"
},
{
"code": "",
"text": "is it possible to perform it once and use the result for multiple updates?It depends on how your code is crafted.If for example, you were to query the frog at keep it alive at a high level, like a class var, then any functions could update that frog at any time, as long as the frog doesn’t go out of scope.Likewise, if you were to query a bunch of frogs and keep the returned results as a class var then any of those frogs could be updated at any time.Another example would be if you were to keep the _id of the frog as a class var, you can update an object via it’s unique _id.",
"username": "Jay"
}
] | Multiple updates of same object | 2023-10-24T07:18:05.147Z | Multiple updates of same object | 179 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hello guys, I’m trying to implement a local database with Realm in my project that vite + electron but I’m facing several errors.\nIn the official documentation it asks to install crack, but crack is only for create-react-app.\nI’ve tried booting several ways and they all point to this error:[vite] Internal server error: Failed to resolve entry for package “realm”. The package may have incorrect main/module/exports specified in its package.json: No known conditions for “.” specifier in “realm” packageI just do the “import Realm from ‘realm’” and this error appears.\nPlease, help ",
"username": "Gustavo_Maia"
},
{
"code": "",
"text": "you have to use realm as native module",
"username": "lamido_tijjani"
}
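For anyone landing here, one common way to treat Realm as a native module when bundling Electron's main process with Vite is to mark it as external so the bundler leaves the require('realm') call (and its native bindings) alone. This is only a sketch of that idea; the exact config depends on your Electron/Vite plugin setup, and nothing here comes from official Realm docs:

```javascript
// vite.config.js (main-process build), a sketch only
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      external: ['realm'] // keep the native module out of the bundle; load it at runtime
    }
  }
});
```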
] | Realm in a Electron + Vite project | 2023-09-01T10:08:19.836Z | Realm in a Electron + Vite project | 591 |
null | [
"serverless"
] | [
{
"code": "mongoClient.getDatabase(DB_NAME).getCollection<ItemDescription>(COLLECTION_NAME).find(Filters.eq(CustomOrder::product_id.name, \"item\"))",
"text": "My apologies if the question is too basic but the description is not entirely clear to meRead operations to the database.Atlas calculates RPUs based on the number of read operations, document bytes read (in 4KB increments), and index bytes read (in 256 byte increments) per operation. RPUs are calculated on a daily basis and start at 0 each day.Just to see if I get it right:Case 1 - I searched database and got a result of size 3KB - that’s 1 read operation? Or two because its ready operation + size of the result\nCase 2 - I searched collection and got a result of size 4,097 B- that’s 2 read operations\nCase 3 - Im exploring the database using Atlas and opened a collection with 20 documents each having size of 4KB - that’s 20 read operations just by clicking on database name? And if tap to go to the next page will be another 20 operations?\nCase 4 - using atlas or programmatically - is connecting to the database and retrieving the basic state of databases (storage size, collections) counts as one read operation or is free?mongoClient.getDatabase(DB_NAME).getCollection<ItemDescription>(COLLECTION_NAME).find(Filters.eq(CustomOrder::product_id.name, \"item\"))In the example above, assuming it returns single or multiple results of size less than 4KB - would it count as 1 read operation or 3 because of getDatabase(DB_NAME) + getCollection + result ?Also, is there a way to put hard size limits on returned results besides limiting how many documents to return?\nThanks!",
"username": "Dominykas_Zaksas"
},
{
"code": "",
"text": "Hi Dominykas_ZaksasThank you for the question. Here are a few public resources to help get a better understanding of how RPUs are calculated:Atlas calculates RPUs based on the number of read operations, document bytes read (in 4KB increments), and index bytes read (in 256 byte increments) per operation. Keep in mind that you are charged $0.10 for the first million RPUs. Therefore your first 100k RPUs (until you hit $0.01 per day) are “free”.To answer the questions above:Case1: The number of RPUs would be dependent on how many documents are being scanned. Not on the number of documents that are returned. Please see the 3rd link pasted above for more details on how RPUs change with and without indexes.Case2: Same answer as above. The RPUs would be based on how many documents and indexes were scanned to find the documents that were returned.Case3: Correct, you are charged RPUs for viewing this page. Although, you won’t be charged 20 RPUs because a single operation can fetch multiple documents.Case4: No, commands that are use for administrative purposes do not count towards RPUsThere is no way to put a limit on how many documents are scanned. You can however, set alerts on RPUs so that you receive alerts if it crosses a certain threshold. You can also make sure that every operation you run uses an Index to mitigate expensive operations that might have to scan all of the documents.Please let me know if you have any further questions.Best,\nAnurag",
"username": "Anurag_Kadasne"
},
{
"code": "",
"text": "I see, so it’s very easy to make mistakes if not using index data or billing alerts\nTo me, a dedicated server sounds like the safest option as it is flat fee + data transfer a.k.a. size of returned results and nothing elseOne more question - for the free tier and dedicated server - is it possible to get data on RPUs or write operations even if that info is seemingly irrelevant?Thanks, Anurag, for explaining stuff and great URLs - I’ve missed them",
"username": "Dominykas_Zaksas"
},
{
"code": "",
"text": "Although it is easy to miss indexing, the cost of the not having indexing should not be much (depends on how many documents you have and how many times you are running inefficient queries). Therefore, testing your workload on serverless for a day before deciding to move to dedicated could be a good path to go down. Also, if your workload is not predictable, serverless is the recommended option as it scales quicker than dedicated. Having said that dedicated is a great option if your workload is predictable.At this point, we don’t have a way to get RPUs on Free Tier and Dedicated Tiers. This is on our roadmap. For WPUs, you can estimate WPUs per document by estimating the size of your writes per document and dividing it by 1kb. For example, if you write 4kb, that would be 4WPUs. You are ignoring the writes to the index using this method, however, you will get a good estimate of the WPUs.If you are running your workload on Serverless, you can get your RPU estimate by:Hope this helps, please let me know if you have any questions.",
"username": "Anurag_Kadasne"
}
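One way to see how much scanning (and therefore roughly how many RPUs) a query causes is to look at its execution stats; a mongosh sketch, with the collection name assumed and the filter mirroring the Kotlin example earlier in the thread:

```javascript
// RPUs track documents and index bytes examined, not just what is returned,
// so compare totalDocsExamined/totalKeysExamined with nReturned.
const stats = db.getCollection("items")              // assumed collection name
  .find({ product_id: "item" })                      // same filter as the Kotlin example
  .explain("executionStats");

printjson({
  totalDocsExamined: stats.executionStats.totalDocsExamined, // high value suggests a collection scan, hence more RPUs
  totalKeysExamined: stats.executionStats.totalKeysExamined, // index keys read
  nReturned: stats.executionStats.nReturned
});
```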
] | What exactly counts as a read operation? (serverless) | 2023-10-24T12:24:04.857Z | What exactly counts as a read operation? (serverless) | 183 |
null | [
"ops-manager"
] | [
{
"code": "",
"text": "Hello,Any chance to install ops manager on air gaped environment ?\ndue to company policy all of related database environment is strict to outside connections.i have already test it but since the ops manager need preflight check before starting up the instance. there is no available web gui on the local.also whenever i set the download binary option to local mode, button of new deployment unable to click.thank you!",
"username": "Alif_Irawan"
},
{
"code": "",
"text": "Any chance to install ops manager on air gaped environment ?Yes. Installing ops manager itself does not require any internet connectivity.also whenever i set the download binary option to local mode, button of new deployment unable to click.All versions that you want to deploy will need to be downloaded and copied to the Ops Managers Version Directory as well as MongoDB Database Tools.Some documentation on running in local modeAnd about MongoDB ToolsAs always you should open a case with MongoDB Support as they are better placed to support your organisation in a timely and thorough manner.",
"username": "chris"
},
{
"code": "",
"text": "Blockquote All versions that you want to deploy will need to be downloaded and copied to the Ops Managers Version Directory as well as MongoDB Database Tools.All versions that i downloaded is only for 6.0 version for linux platform, do i need to download whole package from mongodb CE & Enterprise edition ?",
"username": "Alif_Irawan"
},
{
"code": "",
"text": "All versions that i downloaded is only for 6.0 version for linux platform, do i need to download whole package from mongodb CE & Enterprise edition ?No just for the platform,OS and version that you are deploying to.",
"username": "chris"
}
] | Offline Installation Ops Manager | 2023-10-23T13:18:54.229Z | Offline Installation Ops Manager | 168 |
null | [
"aggregation"
] | [
{
"code": "[\n {\n name: \"product1\",\n cat: [\n {\n title: \"cat1\"\n },\n {\n title: \"cat2\"\n }\n ],\n price: 123\n },\n {\n name: \"product2\",\n cat: [\n {\n title: \"cat1\"\n }\n ],\n price: 100\n }\n]\n[\n {\n _id: \"cat1\",\n items: [\n {name: \"product1\", price: 123},\n {name: \"product2\", price: 100}\n ]\n },\n {\n _id: \"cat2\",\n items: [\n {name: \"product1\", price: 123}\n ]\n }\n]\n",
"text": "I have an array like this:How do I convert to this:I know I can use $group operator but I only get _id field, please help me.",
"username": "Hi_n_Nguy_n_Duy"
},
{
"code": "",
"text": "Is that an array of documents or an array within a document?I’d first $unwind the cat field, then use $group to reform into your needed format.If you have multiple in a document then you’ll need to unwind that first.Run in steps, unwind first, look at the output, then apply the group on top of that.",
"username": "John_Sewell"
},
{
"code": "const products = await _shop_product.aggregate([\n { $sort: { _id: -1 } },\n { $match: filter },\n { $project: { _id: 0, cat_name: \"$categories.name\", product_list: { name: \"$name\", price: \"$price\" } } },\n { $unwind: \"$cat_name\" },\n { $group: { _id: \"$cat_name\", product_list: { $push: { name: \"$product_list.name\", price: \"$product_list.price\" } } } },\n { $project: { product_list: { $slice: [\"$product_list\", 6] } } }\n ])\n",
"text": "Thank you so much. This is my codeIt’s working. Thanks",
"username": "Hi_n_Nguy_n_Duy"
},
{
"code": "",
"text": "Excellent, glad you got it working!Good luck!",
"username": "John_Sewell"
}
] | Group by nested array object and group items to array | 2023-10-23T10:03:36.000Z | Group by nested array object and group items to array | 172 |
[
"replication",
"change-streams",
"bengaluru-mug"
] | [
{
"code": "",
"text": " Announcing Unleashing the Potential of AWS + MongoDB \nBangalore MUG - Design Kit- 1stApr-41920×1080 318 KB\n Date: 17th June 2023\n Time: 10AM to 2PM Hello tech enthusiasts, developers, and cloud aficionados! We are thrilled to invite you to the most exciting technology meetup of the year: Unleashing the Potential of AWS + MongoDB Solutions! Join us for an immersive journey into the world of AWS and MongoDB solutions. Event Highlights: Deploying MongoDB ReplicaSet using CloudFormation \nLearn how to automate the deployment of a scalable and fault-tolerant MongoDB ReplicaSet in the cloud using CloudFormation. Discover best practices and implementation strategies from Naveen Kumar, an experienced cloud practitioner. Get ready to revolutionize your data infrastructure! Leveraging AWS Kinesis with MongoDB ChangeStream \nUncover the power of AWS Kinesis, a fully managed data streaming service, in combination with MongoDB ChangeStream. Explore how real-time data changes can be captured and processed, enabling you to build robust event-driven architectures for your data pipelines. Delivered by Jones and Darshan, AWS and MongoDB experts. MongoDB Atlas and Amazon SageMaker \nExperience the seamless integration of MongoDB Atlas, a fully managed MongoDB service, with Amazon SageMaker, a powerful machine learning platform. Learn how to leverage this integration to build intelligent applications, analyze data, and make data-driven decisions like never before. Delivered by Babu Srinivasan, a cloud solutions architect. Why Attend?\n Gain insights into AWS and MongoDB integration.\n Learn from industry experts and practitioners.\n Network with like-minded professionals and expand your connections.\n Stay up-to-date with the latest advancements in MongoDB and AWS technologies.\n Acquire practical knowledge and skills applicable to your projects. Registration and Details\nSecure your spot at Unleashing the Potential of AWS + MongoDB today! Limited seats are available. Don’t miss out on this incredible opportunity to explore the limitless possibilities of AWS and MongoDB. Join us at Unleashing the Potential of AWS + MongoDB and unlock new horizons for your tech career! See you there! For any inquiries, please contact our team#UnleashingAWSandMongoDB #CloudSolutions #DataRevolution #TechEvents #MongoDB #AWSKinesis #SageMakerEvent Type: In-Person\nLocation: Amazon Development Centre India Pvt. Ltd Aquila Building, Bagmane Constellation Business Park (BCBP), Mahadevpura",
"username": "DarshanJayarama"
},
{
"code": "",
"text": "Not able to see RSVP option",
"username": "AMRUTABANDHU_CHAUDHURY"
},
{
"code": "",
"text": "the RSVP with a tick symbol is the button to RSVP",
"username": "Megha_Arora"
},
{
"code": "",
"text": "For registration… do i have to log in with my email id ? Right?",
"username": "Abhishek_Singh_N_A"
},
{
"code": "",
"text": "Hi @Abhishek_Singh_N_A,Welcome to MUG Bangalore community. Yes please login by email ID and click on the RSVP link you are all set.Looking forward to see you in the event.Thanks,\nDarshan",
"username": "DarshanJayarama"
},
{
"code": "",
"text": "Utilizing AWS and MongoDB technologies can revolutionize data management and scalability for your business. Combined with MongoDB, AWS (Amazon Web Services) offers a robust and flexible solution for storing, processing, and analyzing large data volumes.Storage, compute, and database solutions are all available through Amazon Web Services. Pay only for the resources you use on AWS, and benefit from high availability and data durability with scalable infrastructure.The MongoDB NoSQL database, on the other hand, is highly scalable and flexible, enabling efficient handling of structured and unstructured data. Due to its document-based data model, horizontal scaling, and flexible schema, it is ideal for handling diverse and rapidly evolving data sets.With AWS and MongoDB integrated, you can take advantage of several benefits. To begin with, AWS offers managed MongoDB services such as Amazon DocumentDB and MongoDB Atlas, which simplify database management tasks, such as provisioning, scaling, and backups. Your resources can then be used to develop applications and innovate.Additionally, AWS offers a wide range of services that complement MongoDB, including Amazon Elastic Compute Cloud (EC2) for flexible compute resources, Amazon Simple Storage Service (S3) for scalable object storage, and Amazon Kinesis for real-time data streaming. With these services, you can build comprehensive data pipelines, process data at scale, and leverage advanced analytics and machine learning.Using AWS and MongoDB together facilitates global data distribution, allowing you to replicate data across multiple regions for improved performance and resilience. AWS also offers security features such as encryption, identity management, and compliance certifications, ensuring your data remains secure and compliant.With AWS and MongoDB technologies, your business can handle large-scale data workloads efficiently, scale seamlessly, and drive innovation. Your data-driven initiatives can be enhanced in a variety of ways using this powerful combination, whether you are building modern applications, implementing real-time analytics, or managing complex data pipelines.",
"username": "Simriti_Pandey"
},
{
"code": "",
"text": "Hi @DarshanJayarama ,\nI have logged in and have clicked on the RSVP link. Not sure but i did not receive any mail . Just thought of checking if any mail confirmation needed to come to the event. Thank you Regards,\nVenu",
"username": "venu_muvvala"
},
{
"code": "",
"text": "Is the registration still open as I cannot see the RSVP button?",
"username": "Sahil_Jamwal"
},
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "Ahto_Kivimagi"
},
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "Ahto_Kivimagi"
}
] | Unleashing the Potential of AWS + MongoDB Technologies | 2023-05-17T11:44:18.354Z | Unleashing the Potential of AWS + MongoDB Technologies | 4,445 |
|
null | [
"replication",
"mongodb-shell",
"transactions",
"containers",
"storage"
] | [
{
"code": "services:\n db:\n image: docker.io/bitnami/mongodb:6.0.7\n container_name: RocketChat-DB\n hostname: rocketchat-db\n security_opt:\n - no-new-privileges:true\n healthcheck:\n test: [\"CMD\", \"mongosh\", \"--eval\", \"db.adminCommand('ping')\"]\n interval: 10s\n timeout: 10s\n retries: 5\n start_period: 20s\n environment:\n MONGODB_REPLICA_SET_MODE: primary\n MONGODB_REPLICA_SET_NAME: rs0\n ALLOW_EMPTY_PASSWORD: 1\n volumes:\n - /volume2/docker/RC:/bitnami/mongodb\n restart: always\n\n rocketchat:\n image: rocketchat/rocket.chat:latest\n container_name: RocketChat\n hostname: rocketchat\n security_opt:\n - no-new-privileges:true\n environment:\n MONGO_URL: mongodb://rocketchat-db:27017/rocketchat?replicaSet=rs0\n MONGO_OPLOG_URL: mongodb://rocketchat-db:27017/local?replicaSet=rs0\n ROOT_URL: http://xxxxx.local:3000\n PORT: 3000\n DEPLOY_METHOD: docker\n volumes:\n - /volume2/docker/RC/uploads:/app/uploads\n ports:\n - 3000:3000\n restart: always\nmongodb 08:13:06.13 \nmongodb 08:13:06.14 Welcome to the Bitnami mongodb container\nmongodb 08:13:06.14 Subscribe to project updates by watching https://github.com/bitnami/containers\nmongodb 08:13:06.14 Submit issues and feature requests at https://github.com/bitnami/containers/issues\nmongodb 08:13:06.14 \nmongodb 08:13:06.15 INFO ==> ** Starting MongoDB setup **\nmongodb 08:13:06.19 INFO ==> Validating settings in MONGODB_* env vars...\nmongodb 08:13:06.26 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=1. For safety reasons, do not use this flag in a production environment.\nmongodb 08:13:06.29 INFO ==> Initializing MongoDB...\nmongodb 08:13:06.33 INFO ==> Deploying MongoDB with persisted data...\nmongodb 08:13:06.37 INFO ==> ** MongoDB setup finished! **\n\nmongodb 08:13:06.42 INFO ==> ** Starting MongoDB **\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.489Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5760901, \"ctx\":\"-\",\"msg\":\"Applied --setParameter options\",\"attr\":{\"serverParameters\":{\"enableLocalhostAuthBypass\":{\"default\":true,\"value\":true}}}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.489Z\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.489+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.492+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.497+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.501+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.502+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.502+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.502+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.503+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1,\"port\":27017,\"dbPath\":\"/bitnami/mongodb/data/db\",\"architecture\":\"64-bit\",\"host\":\"rocketchat-db\"}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.504+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.7\",\"gitVersion\":\"202ad4fda2618c652e35f5981ef2f903d8dd1f1a\",\"openSSLVersion\":\"OpenSSL 1.1.1n 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"debian11\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.504+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 11 (bullseye)\\\"\",\"version\":\"Kernel 4.4.180+\"}}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.504+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/opt/bitnami/mongodb/conf/mongodb.conf\",\"net\":{\"bindIp\":\"*\",\"ipv6\":false,\"port\":27017,\"unixDomainSocket\":{\"enabled\":true,\"pathPrefix\":\"/opt/bitnami/mongodb/tmp\"}},\"processManagement\":{\"fork\":false,\"pidFilePath\":\"/opt/bitnami/mongodb/tmp/mongodb.pid\"},\"replication\":{\"enableMajorityReadConcern\":true,\"replSetName\":\"rs0\"},\"security\":{\"authorization\":\"disabled\"},\"setParameter\":{\"enableLocalhostAuthBypass\":\"true\"},\"storage\":{\"dbPath\":\"/bitnami/mongodb/data/db\",\"directoryPerDB\":false,\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"logRotate\":\"reopen\",\"path\":\"/opt/bitnami/mongodb/logs/mongodb.log\",\"quiet\":false,\"verbosity\":0}}}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.505+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"/bitnami/mongodb/data/db\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:06.505+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening 
WiredTiger\",\"attr\":{\"config\":\"create,cache_size=31506M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:08.528+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":{\"ts_sec\":1697962388,\"ts_usec\":527910,\"thread\":\"1:0x7f23f7a5dcc0\",\"session_dhandle_name\":\"file:WiredTiger.wt\",\"session_name\":\"connection\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__posix_open_file:812:/bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open\",\"error_str\":\"Operation not permitted\",\"error_code\":1}}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:08.543+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":{\"ts_sec\":1697962388,\"ts_usec\":543363,\"thread\":\"1:0x7f23f7a5dcc0\",\"session_dhandle_name\":\"file:WiredTiger.wt\",\"session_name\":\"connection\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__posix_open_file:812:/bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open\",\"error_str\":\"Operation not permitted\",\"error_code\":1}}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:08.558+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":{\"ts_sec\":1697962388,\"ts_usec\":558304,\"thread\":\"1:0x7f23f7a5dcc0\",\"session_dhandle_name\":\"file:WiredTiger.wt\",\"session_name\":\"connection\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__posix_open_file:812:/bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open\",\"error_str\":\"Operation not permitted\",\"error_code\":1}}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:08.561+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2023-10-22T08:13:08.561+00:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"1: Operation not permitted\"}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:08.561+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":708}}\n{\"t\":{\"$date\":\"2023-10-22T08:13:08.561+00:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n\n",
"text": "The problem appeared after the disk on which the docker-compose was deployed failed.\nBut the database was on another disk and it was not affected.\nNow, when building a new docker-compose and specifying the path to the database, the following error occurs. But mongo’s versions are no different.docker-compose:logs:",
"username": "Anton_Morgan"
},
{
"code": "",
"text": "the problem turned out to be incorrect folder permissions",
"username": "Anton_Morgan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade | 2023-10-22T08:17:55.422Z | Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade | 373 |
null | [] | [
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "Charles_Karen"
},
{
"code": "",
"text": "Good work @Charles_Karen for hiding your spam inside a ChatGPT answer. I think your forgot that I am following you for the exact reason of catching your spams as soon as possible.But I will let your spam visible a few days so that others know whom to boycott.",
"username": "steevej"
},
{
"code": "",
"text": "Seven days of SPAM exposure is enough. Your post is flagged as SPAM.",
"username": "steevej"
}
] | How to Spam Disk I/O for MongoDB Workloads on Rack Servers? | 2023-10-16T09:09:22.588Z | How to Spam Disk I/O for MongoDB Workloads on Rack Servers? | 195 |
null | [
"crud"
] | [
{
"code": "[][]",
"text": "I am testing about insertMany (or Bulkwrite insertOne) in hashed shard environmentMy environment is a 5-node cluster\n‘shard1/node1:27021,node2:27021,node3:27021’,\n‘shard2/node2:27022,node3:27022,node4:27022’,\n‘shard3/node3:27023,node4:27023,node5:27023’,\n‘shard4/node1:27024,node4:27024,node5:27024’,\n‘shard5/node1:27025,node2:27025,node5:27025’,I am trying to study about difference between range key and hashed key sharding.##chunksize\nuse config\ndb.settings.updateOne({_id:“chunksize” },{$set:{_id:“chunksize”,value:5}},{upsert:true})##Range key\nsh.shardCollection(“test.testloop”,{“key”:1},true)##Hashed key\nsh.shardCollection(“test.testloop”,{_id:“hashed”})\nsh.shardCollection(“test.testloop”,{“key”:“hashed”})And I find that insertMany (or Bulkwrite insertOne) in Hashed-key environment is much more slower than in Range-key environment.\nI want to know if it is normal. According to the document,the Hashed-key environment should have good performance in write action.#insertMany\nvar loop = []\nvar feed = (min,max) => {\nfor (var i=min; i<= max; ++i) {\nloop.push({key:i})\n}\ndb.testloop.insertMany(loop)\n}\nfeed(1,100000)#bulkWrite\nvar loop = []\nvar feed = (min,max) => {\nfor (var i=min; i<= max; ++i) {\nloop.push({insertOne:{key:i}})\n}\ndb.testloop.bulkWrite(loop,{ordered:“false”})\n}\nfeed(1,100000)",
"username": "Chiu_Chun_Yu"
},
{
"code": "",
"text": "I suspect that the fact that your key are consecutive is detrimental to the performance of one and beneficial to the performance of the other.I can only suggest to try with random keys.",
"username": "steevej"
},
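To make the comparison concrete, a mongosh sketch of the random-key variation suggested above; it mirrors the insert loop from the question but removes the monotonically increasing keys:

```javascript
// Same volume as the original test, but with non-sequential key values.
var loop = [];
for (var i = 1; i <= 100000; ++i) {
  loop.push({ key: Math.floor(Math.random() * 100000000) });
}

var start = new Date();
db.testloop.insertMany(loop, { ordered: false });
print("elapsed ms: " + (new Date() - start));
```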
{
"code": "",
"text": "According to the document,the Hashed-key environment should have good performance in write actionwhere do you read this, can you share me the link?I doubt if there’s any big difference between hash shard and range shard, after all, hash is using a hash value, which is a string/integer. (then the value can be compared to decide the “range”).",
"username": "Kobe_W"
},
{
"code": "",
"text": "If you mean using “_id”, I have tried and the performance is still much more slower than range key.\nBoth “_id” and key named “key” are tested as a shard key in this case.##Hashed key\nsh.shardCollection(“test.testloop”,{_id:“hashed”})\nsh.shardCollection(“test.testloop”,{“key”:“hashed”})ref: https://www.mongodb.com/docs/manual/core/hashed-sharding/The field you choose as your hashed shard key should have a good cardinality, or large number of different values. Hashed keys are ideal for shard keys with fields that change monotonically like ObjectId values or timestamps. A good example of this is the default “_id” field, assuming it only contains ObjectId values.",
"username": "Chiu_Chun_Yu"
},
{
"code": "",
"text": "For Hashed sharding, the workload will separated into different shards (I know it will reducing Targeted Operations vs. Broadcast Operations, but my case is insert only). For my environment (5-node cluster), I expect the insert workload separated into 5 different hosts (In range key, the insert workload is focus in one host only).I doubt if there’s any big difference between hash shard and range shard, after all, hash is using a hash value, which is a string/integer. (then the value can be compared to decide the “range”).Yes. But the true is hash shard uses 3-4min but the range shard uses only 10 seconds for same size insert in my test.ref: https://www.mongodb.com/docs/manual/core/hashed-sharding/Hashed sharding provides a more even data distribution across the sharded cluster at the cost of reducing Targeted Operations vs. Broadcast Operations. Post-hash, documents with “close” shard key values are unlikely to be on the same chunk or shard - the mongos is more likely to perform Broadcast Operations to fulfill a given ranged query. mongos can target queries with equality matches to a single shard.",
"username": "Chiu_Chun_Yu"
},
{
"code": "",
"text": "There is something strange as I am very surprised to seeIn range key, the insert workload is focus in one host onlyyetrange shard uses only 10 seconds for same size insertWhere is your mongos running?Where are the config server replica set instances running?You run multiple mongod instances on the same machine, that is not really realistic. Do you know which instance of which shard was the primary at the time of the test. Anyway, when replication and write heavy use-cases are involved all mongod instances have to handle the same workload because all nodes replicate the write operations. I suspect that in one case (hash key) your configuration is struggling with context switches because each shard is equally involved. And with range key, when a shard is involve, there is no context switch and the workload is evenly distribute on the hardware.I am pretty sure you would be better off without sharding.In principle, you run only 1 instance of mongod on each machine so that it gets all the RAM and CPU to itself. Running multiple instances on the same hardware is detrimental because the cache of each instance is smaller and context switches increase.To run a 5-shard cluster the minimum number of machines would 5 (shards) * 3 + 3 (config server) + 1 (mongos) = 19 nodes.",
"username": "steevej"
}
] | About insertMany (or Bulkwrite insertOne) in hashed shard environment | 2023-10-21T15:51:13.164Z | About insertMany (or Bulkwrite insertOne) in hashed shard environment | 214 |
null | [
"aggregation"
] | [
{
"code": "var pipeline = [\n {\n \"$match\": {\n \"shipmentDate\": {\n \"$gte\": ISODate(\"2023-10-02 00:00:00.000+02:00\")\n \"$lte\": ISODate(\"2023-10-23 00:00:00.000+02:00\")\n }\n }\n }, \n {\n \"$lookup\": {\n \"from\": \"bags\",\n \"localField\": \"bag\",\n \"foreignField\": \"_id\",\n \"as\": \"bags\"\n }\n }, \n {\n \"$match\": {\n \"bags\": {\n \"$exists\": true,\n \"$ne\": []\n }\n }\n }, \n {\n \"$unwind\": {\n \"path\": \"$bags\"\n }\n }, \n {\n \"$group\": {\n \"_id\": {\n \"zipcode\": \"$address.zipcode\",\n \"categoryId\": \"$categoryDetails.categoryId\",\n \"categoryName\": \"$categoryDetails.name\",\n \"courierId\": \"$bags.courier.userId\",\n \"name\": \"$bags.courier.name\"\n },\n \"count\": {\n \"$sum\": 1\n },\n \"totalValue\": {\n \"$sum\": \"$priceData.price\"\n }\n }\n }, \n {\n \"$project\": {\n \"_id\": 0,\n \"courier\": \"$_id.name\",\n \"zipcode\": \"$_id.zipcode\",\n \"category\": \"$_id.categoryName\",\n \"count\": \"$count\",\n \"totalValue\": \"$totalValue\"\n }\n }, \n {\n \"$sort\": {\n \"courier\": 1\n }\n }\n ];\ncouriergroupbags.courier",
"text": "Hello, I am running the following mongodb aggregation pipeline query:There is a problem with the courier field- it is never displayed in the output (even though it is projected). After disabling stages 1 by 1 I have tracked the issue down to the group stage. After this stage the output doesn’t contain the courier details (nor courierId, nor name). What could be causing these issues? It needs to be said that bags.courier is potentially null, could that be breaking the grouping?",
"username": "Vladimir"
},
{
"code": "",
"text": "Got a sample document?",
"username": "John_Sewell"
},
{
"code": "$lookup{\n \"_id\" : ObjectId(\"653697dc72a5ebd93f88715e\"),\n \"courier\" : {\n \"name\" : \"John Doe\",\n \"email\" : \"[email protected]\",\n \"cell\" : \"243244\",\n \"gender\" : \"MALE\",\n \"userId\" : ObjectId(\"642ce61139d0a2b1ca526d97\"),\n \"isDelivery\" : true,\n \"_id\" : ObjectId(\"653697dc72a5ebd93f88715f\")\n },\n \"creationDate\" : ISODate(\"2023-10-23T15:57:16.965+0000\"),\n \"assignmentDate\" : ISODate(\"2023-10-23T22:00:00.000+0000\"),\n \"__v\" : 0\n}\n",
"text": "@John_Sewell Hello, thank you for your reply. Here is a sample BAG document that gets $lookup’ed but name does not get projected in the end. The “courier” subdocument is optional and may not exist.",
"username": "Vladimir"
},
{
"code": "",
"text": "That’s weird, I just tested it and it worked, I created a sample document to lookup to bags and removed the dates and filtering:Mongo playground: a simple sandbox to test and share MongoDB queries onlineIf you have a pair of documents that don’t work, can you create a playground with them in?",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Group stage skipping on fields | 2023-10-23T13:09:59.319Z | Group stage skipping on fields | 132 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "documentExample = {\n id: ObjectId,\n name: string,\n myArrayElement: {\n isSuccess: boolean,\n severity: string,\n date: Date\n }[]\n}\n",
"text": "Hello everyone! I’m new to using MongoDB, and I’m trying to create an aggregation query where I need to sum some values inside an array.Here’s what my document looks like:documentExample = {\nid: ObjectId,\nname: string,\nmyArrayElement: {\nisSuccess: boolean,\nseverity: string,\ndate: Date\n}\n}Hello everyone! I’m new to using MongoDB, and I’m trying to create an aggregation query where I need to sum some values inside an array.Here’s what my document looks like:What I want to achieve is a query that filters all documents containing an element inside ‘myArrayElement’ that matches a certain date or any other conditions. I also want to get the following counts:What is the best way to construct this query, considering that the database might contain a large amount of data? Does anyone have any insights? Thanks.",
"username": "satoru_turing"
},
{
"code": "",
"text": "It would be nice if you could share some sample documents and the expected results. Having to create our own documents in order to experiment is way more tedious that simply using the one you share.",
"username": "steevej"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"6534743d813c22c7e43282dd\"\n },\n \"name\": \"Jhon Doe\",\n \"inclusionDate\": {\n \"$date\": \"2023-06-22T03:57:48.576Z\"\n },\n \"isActive\": true,\n \"integrationHistory\": [\n {\n \"isSuccess\": false,\n \"integrationDate\": {\n \"$date\": \"2020-11-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true,\n \"severity\": \"warning\"\n },\n {\n \"isSuccess\": false,\n \"integrationDate\": {\n \"$date\": \"2020-12-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true,\n \"severity\": \"error\"\n },\n {\n \"isSuccess\": true,\n \"integrationDate\": {\n \"$date\": \"2021-01-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true\n },\n {\n \"isSuccess\": true,\n \"integrationDate\": {\n \"$date\": \"2021-02-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true\n },\n {\n \"isSuccess\": true,\n \"integrationDate\": {\n \"$date\": \"2021-03-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true\n },\n {\n \"isSuccess\": true,\n \"integrationDate\": {\n \"$date\": \"2021-04-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true\n },\n {\n \"isSuccess\": true,\n \"integrationDate\": {\n \"$date\": \"2021-05-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true\n },\n {\n \"isSuccess\": true,\n \"integrationDate\": {\n \"$date\": \"2021-06-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true\n },\n {\n \"isSuccess\": true,\n \"integrationDate\": {\n \"$date\": \"2021-07-01T00:00:00.000Z\"\n },\n \"statusSentToClient\": true\n }\n ]\n}\nintegrationHistory\"integrationHistory.integrationDate\"{\n \"countSuccessItems\": number, // Number of items where \"integrationHistory.isSuccess\" is true\n \"countItemsWithError\": number, // Number of items where \"integrationHistory.isSuccess\" is false\n \"countStatusSentToClient\": number, // Number of items where \"integrationHistory.statusSentToClient\" is true\n \"countSeverityWarning\": number, // Number of items where \"integrationHistory.severity\" is 'warning'\n \"countSeverityError\": number // Number of items where \"integrationHistory.severity\" is 'error'\n}\n",
"text": "Hello. Sure, this is what my document looks like:My collection contains a lot of documents, and the process runs once every month, saving the status of the integration in integrationHistory . What I need to do is filter by date and obtain some indicators of the integration process by month.What I want is a query that filters all documents matching the \"integrationHistory.integrationDate\" with a specific date. As a result, I want to achieve something like this:",
"username": "satoru_turing"
},
{
"code": "pipeline = [ ] ;\n\nproject_history = { \"$project\" : {\n\t\"integrationHistory\" : 1 ,\n\t\"_id\" : 0\n} } ;\n\npipeline.push( project_history ) ;\n\nunwind_history = { \"$unwind\" : \"$integrationHistory\" } ;\n\npipeline.push( unwind_history ) ;\n\nmatch_date = { \"$match\" : {\n\t\"integrationDate\" : date_to_match \n} } ;\n\npipeline.push( match_date ) ;\n\nsuccess_facet = { \"$group\" : {\n\t\"_id\" : \"$integrationHistory.isSuccess\" ,\n\t\"count\" : { \"$sum\" : 1 }\n} } ;\n\nseverity_facet = { \"$group\" : {\n\t\"_id\" : \"$integrationHistory.severity\" ,\n\t\"count\" : { \"$sum\" : 1 }\n} } ;\n\nsent_facet = { \"$group\" : {\n\t\"_id\" : \"$integrationHistory.statusSentToClient\" ,\n\t\"count\" : { \"$sum\" : 1 }\n} } ;\n\nfacet = { \"$facet\" : {\n\t\"success\" : [ success_facet ],\n\t\"severity\" : [ severity_facet ],\n\t\"sent\" : [ sent_facet ]\n} } ;\n\npipeline.push( facet ) ;\n\n/* For filter_results and project_results, I only did it for isSuccess field\n as the other fields follow the same pattern. */\nfilter_results = { \"$project\" : {\n\t\"countSuccessItems\" : { \"$arrayElemAt\" : [ { \"$filter\" : {\n\t\t\"input\" : \"$success\" ,\n\t\t\"cond\" : { \"$eq\" : [ true , \"$$this._id\" ] }\n\t} } , 0 ] } ,\n\t\"countItemsWithError\" : { \"$arrayElemAt\" : [ { \"$filter\" : {\n\t\t\"input\" : \"$success\" ,\n\t\t\"cond\" : { \"$eq\" : [ false , \"$$this._id\" ] }\n\t} } , 0 ] } ,\n} }\n\npipeline.push( filter_results ) ;\n\nproject_results = { \"$project\" : {\n\t\"countSuccessItems\" : \"$countSuccessItems.count\" ,\n\t\"countItemsWithError\" : \"$countItemsWithError.count\" \n} }\n\npipeline.push( project_results ) ;\n",
"text": "There is many ways to achieve that. The following seems to the simplest and easiest to develop, test and understand.1 - It first uses $project to weed out the fields that are not needed for the use-case.\n2 - Then we simply $unwind to make things easier to work with.\n3 - Next is a $facet with 3 paths.\n3.a - One $facet path for the isSuccess field\n3.b - One $facet path for the severity field\n3.c - One last $facet path for the statusSentToClient\n4 - 2 $project stages to produce the results in the desired format. It could probably be done in one.The nice thing about $facet is you can develop and test each path individually before putting them together in the $facet stage.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you, it worked.",
"username": "satoru_turing"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Best way to do $group aggregate summing array property | 2023-10-20T20:55:17.318Z | Best way to do $group aggregate summing array property | 241 |
null | [
"atlas-search"
] | [
{
"code": "{\n name: index,\n definition: {\n \"mappings\": {\n \"fields\": {\n \"embedding\": [\n {\n \"dimensions\": 1536,\n \"similarity\": \"euclidean\",\n \"type\": \"knnVector\"\n }\n ]\n }\n }\n }\n }\n",
"text": "Hello,\nI want to create Atlas Search Index with filter option, is it possible to do it? For example, I have a collection with metadata field and I want to do search on it only when it’s equal to “1234” for other documents it shouldn’t make any research. I will implement this for langchain similarity search.",
"username": "Dilara_Bayar"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"<field-name>\": {\n \"type\": \"knnVector\",\n \"dimensions\": 1536,\n \"similarity\": \"euclidean\"\n }\n }\n }\n}\n",
"text": "Hey @Dilara_Bayar,Based on my knowledge, it is possible to configure your Atlas Search index definition on one specific field. Here is the sample syntax for your reference:I want to create Atlas Search Index with filter option, is it possible to do it? For ex, I have a collection with metadata field and I want to do search on it only when it’s equal to “1234However, configuring the index based on just a single value is not currently supported. May I ask what specific use case you are trying to accomplish using the Atlas vector search?Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
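As a side note on the query side (separate from the index definition above): at query time the knnBeta operator accepts a filter clause, which may be closer to the LangChain-style "only search where metadata equals 1234" requirement. A rough sketch only, assuming the metadata field is also indexed as a searchable string and that queryEmbedding holds the 1536-dimension query vector; field names, index name and operator choice are assumptions:

```javascript
db.collection.aggregate([
  {
    $search: {
      index: "default",            // assumed index name
      knnBeta: {
        path: "embedding",
        vector: queryEmbedding,    // 1536-dimension query vector
        k: 10,
        filter: {
          text: {                  // restrict the kNN search to matching documents
            path: "metadata",
            query: "1234"
          }
        }
      }
    }
  }
]);
```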
] | Atlas Search Index with Only Documents That has Specific Field Value | 2023-10-17T04:40:35.872Z | Atlas Search Index with Only Documents That has Specific Field Value | 222 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi there,I transferred data from one collection to another with a different structure.\nI need to verify the data consistency between these two collections.\nIs it possible to reformat a collection from a template to avoid doing the work manually ?Do you think $objectToArray aggregation could be a good idea ?thanks",
"username": "Emmanuel_Bernard1"
},
{
"code": "",
"text": "Thats a pretty lose requirement to give much feedback. On a daily basis I run reconciliations between environments to ensure data migrations have gone well but it relies on knowing the data format of the source and target data.\nTypically we run a top level check on doc counts and totals using group operations, then delve further into the data on a period and type basis.\nWhat do your source and target documents look like?",
"username": "John_Sewell"
},
{
"code": "{\n \"seasonId\": \"H2021\",\n \"seasonRef\": \"7\",\n \"universeId\": \"1\",\n \"name\": \"SCLARK\",\n \"madeIn\": [\n {\n \"countryIso\": \"TN\",\n \"countryLabel\": \"TUNISIE\"\n }\n ],\n \"laundryCareSymbols\": {\n \"washing\": \"30D\",\n \"bleaching\": \"NO\",\n \"drying\": \"NO_TUMBLE\",\n \"ironing\": \"MEDIUM\",\n \"professionalCleaning\": \"NO_DRY_CLEAN\"\n },\n \"segmentations\": [\n {\n \"prefix\": \"segmentation\",\n \"segmentationId\": \"a26630\",\n \"name\": \"Top\",\n \"segmentationType\": \"PURCHASE_FAMILY\",\n \"segmentationCode\": \"a26630_PURCHASE_FAMILY\",\n \"parent\": {\n \"prefix\": \"segmentation\",\n \"segmentationId\": \"a26\",\n \"name\": \"Chemisier\"\n }\n },\n {\n \"prefix\": \"segmentation\",\n \"segmentationId\": \"7002\",\n \"name\": \"Hiver\",\n \"segmentationType\": \"SEASONALITY\",\n \"segmentationCode\": \"7002_SEASONALITY\"\n },\n {\n \"prefix\": \"segmentation\",\n \"segmentationId\": \"w32115\",\n \"name\": \"Chemises / Chemisiers\",\n \"segmentationType\": \"WEB_FAMILY\",\n \"segmentationCode\": \"w32115_WEB_FAMILY\",\n \"parent\": {\n \"prefix\": \"segmentation\",\n \"segmentationId\": \"w32\",\n \"name\": \"Chemisiers / Tuniques\"\n }\n },\n {\n \"prefix\": \"segmentation\",\n \"segmentationId\": \"fr_OMNICANAL_FAMILY_32_OMNICANAL_FAMILY_32115\",\n \"name\": \"Chemises / Chemisiers\",\n \"segmentationType\": \"OMNICANAL_FAMILY\",\n \"segmentationCode\": \"32115\",\n \"parent\": {\n \"prefix\": \"segmentation\",\n \"segmentationId\": \"fr_OMNICANAL_FAMILY_32\",\n \"name\": \"Chemisiers / Tuniques\"\n }\n }\n ]\n}\n{\n \"universeId\": \"1\",\n \"seasonId\": \"H2021\",\n \"seasonRef\": \"7\",\n \"model\": {\n \"modelId\": \"17260627\",\n \"modelName\": \"SCLARK\"\n },\n \"madeIn\": [\n {\n \"alpha2Code\": \"TN\",\n \"alpha3Code\": \"TUN\",\n \"name\": \"Tunisie\"\n }\n ],\n \"laundryCareSymbol\": {\n \"washing\": \"30D\",\n \"bleaching\": \"NO\",\n \"drying\": \"NO_TUMBLE\",\n \"ironing\": \"MEDIUM\",\n \"professionalCleaning\": \"NO_DRY_CLEAN\"\n },\n \"productSegmentations\": [\n {\n \"segmentationId\": \"fr_TYPE_MATIERE_TMAT_TYPE_MATIERE_C\",\n \"segmentationType\": \"TYPE_MATIERE\",\n \"segmentationCode\": \"C\",\n \"segmentationName\": \"Chaine et Trame\",\n \"parent\": {\n \"segmentationId\": \"fr_TYPE_MATIERE_TMAT\",\n \"segmentationType\": \"TYPE_MATIERE\",\n \"segmentationCode\": \"TMAT\",\n \"segmentationName\": \"TMAT\"\n }\n },\n {\n \"segmentationId\": \"fr_PURCHASE_FAMILY_26_PURCHASE_FAMILY_630\",\n \"segmentationType\": \"PURCHASE_FAMILY\",\n \"segmentationCode\": \"630\",\n \"segmentationName\": \"Top\",\n \"parent\": {\n \"segmentationId\": \"fr_PURCHASE_FAMILY_26\",\n \"segmentationType\": \"PURCHASE_FAMILY\",\n \"segmentationCode\": \"26\",\n \"segmentationName\": \"Chemisier\"\n }\n },\n {\n \"segmentationId\": \"fr_OMNICANAL_FAMILY_29_OMNICANAL_FAMILY_29517\",\n \"segmentationType\": \"OMNICANAL_FAMILY\",\n \"segmentationCode\": \"29517\",\n \"segmentationName\": \"Débardeurs\",\n \"parent\": {\n \"segmentationId\": \"fr_OMNICANAL_FAMILY_29\",\n \"segmentationType\": \"OMNICANAL_FAMILY\",\n \"segmentationCode\": \"29\",\n \"segmentationName\": \"Tops / T-shirts\"\n }\n }\n ]\n}\n",
"text": "Hi John,This is what the documents look like; I’ve only taken a part.Source :Target :A few fields are similar (same name and same type).\nFor others, I need to rename, reorder or remove them from an embedded document or an array.\nI can use $set to reorder and rename fields; however, for embedded and array fields it is more complex!Finaly, I want to export the data in two CSV files with the same format and be able to compare these files.\nI’m attempting to create a JSON template to facilitate the transformation and make the processus faster than doing it manually.Regards",
"username": "Emmanuel_Bernard1"
},
{
"code": "",
"text": "Sorry Emmanuel, Ive had to travel over last few days and so not had time to reply. Ill take a look at your reply when i get time though later in the week when im back.",
"username": "John_Sewell"
},
{
"code": "",
"text": "I’m getting back into things Emmanuel, sorry for disappearing but I had some family matters to attend to.With the two sets of data I’d look at the following approaches:For #3 I find notepad++ and Excel to be superb in creating these kinds of thing, I also make a lot of use of Pivot tables in excel when reconciling data between systems, so when exporting data you can pull it into Excel as a CSV and then pivot key dimensions to check counts.I can’t think of a magic bullet for this, if you have two projections to compare then it can be a manual work process, but you can start with smaller datasets which can make life easier.",
"username": "John_Sewell"
},
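As a minimal mongosh sketch of the top-level count reconciliation described above, assuming hypothetical collection names sourceProducts and targetProducts and grouping on fields both structures share (seasonId, universeId):

    const countsBySeason = (name) =>
      db.getCollection(name).aggregate([
        { $group: { _id: { seasonId: "$seasonId", universeId: "$universeId" }, count: { $sum: 1 } } },
        { $sort: { "_id.seasonId": 1, "_id.universeId": 1 } }
      ]).toArray();

    const src = countsBySeason("sourceProducts");
    const tgt = countsBySeason("targetProducts");
    // Compare the two arrays entry by entry and report any mismatching counts.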
{
"code": "",
"text": "Hi John,I hope your family is doing well.Thank you very much for your answers.\nStarting with smaller datasets is a good approach.",
"username": "Emmanuel_Bernard1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Check data consistency between two collections | 2023-10-09T15:40:54.414Z | Check data consistency between two collections | 323 |
null | [
"node-js",
"next-js"
] | [
{
"code": "",
"text": "Hi community! I’m currently developing a site with node js and next ja for which I have created two collections (“users” and “messages”) so my question is the following:\nWhat would be the best approach for creating a real time inbox?\nIt wouldn’t be like a real time chat. What I need is messages to appear in the inbox in real time and show a notification when a user sends a message to another.\nI’m not sure if I should use a trigger for the “messages” collection, or change streams so I can monitor the collection in real time.\nMy problem is that I expect the site to have a lot of users using it simultaneously so I’m worried about how resource demanding this approach would be.\nI’ve also tried using socket io to fire an event when a message is sent, but so far I haven’t been able to make the message get received in the frontend inbox component in real time, only by the users that the message goes to.\nI would serious appreciate any guidance or advise as I’ve been struggling with this for several days now.\nThanks in advance",
"username": "becker135"
},
{
"code": "",
"text": "It’s very much related to how the frontend works. But if you have a long lived tcp connection between your client and the servers, either change stream events or a “push” can achieve this. A message to the recipient can be sent whenever the trigger is pulled.Check https://www.youtube.com/watch?v=vvhC64hQZMkI haven’t been able to make the message get received in the frontend inbox component in real time, only by the users that the message goes to.i don’t know what this means.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks a lot for your response.I’m sorry I wasn’t very clear. What I meant to say is that I tried to emit an event with Socket.IO when a new message is sent, but I couldn’t make the new message be received only by the message recipient; instead, it was sent to all users.\nCurrently, I store all site messages in this collection with the fields “from_userID” and “to_userID.”\nWould it be a good practice to either set a change stream event or a trigger so I can update the “messages” state in the frontend when a new message is received? Or would that be too resource demanding?Thanks a lot for your help. It is greatly appreciated",
"username": "becker135"
},
{
"code": "",
"text": "I tried to emit an event with Socket.IO when a new message is sent, but I couldn’t make the new message be received only by the message recipient; instead, it was sent to all users.i never use this, but it looks like you are broadcasting the events. Are you sure you have correctly written the code with their SDK? Their documentation should have explained how to use the APIs.Would it be a good practice to either set a change stream event or a trigger so I can update the “messages” state in the frontend when a new message is received? Or would that be too resource demanding?The answer will depend on your traffic, your resource provisioning, and many more. Actually for most of real time chats i have heared of, they all use sort of own push mechanism (e.g. websocket) instead of relying on database change stream. I think the key difference is whether you control everything or you delegate the trigger to your db tool. Maintaining change streams of course consume resources.Probably using your own push logic is a bit easier, and given it’s 100% controlled by you, troubleshooting is also easier.",
"username": "Kobe_W"
},
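For what it's worth, a rough Node.js sketch of the change-stream idea being discussed, filtered so that only messages addressed to one user are pushed to that user's socket; the collection, field and helper names are assumptions for illustration only:

    // Watch only inserts whose recipient is the connected user.
    const pipeline = [
      { $match: { operationType: "insert", "fullDocument.to_userID": currentUserId } }
    ];
    const changeStream = db.collection("messages").watch(pipeline);
    changeStream.on("change", (change) => {
      // Push the new message to this recipient's socket only, e.g. with Socket.IO rooms:
      // io.to(`user:${currentUserId}`).emit("newMessage", change.fullDocument);
    });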
{
"code": "",
"text": "You might want to look at kafka for real time delivery of messages.",
"username": "steevej"
},
{
"code": "",
"text": "Dear @farzana_kashif, you replied with SPAM in a thread I participated. What an error since I am a real and dedicated SPAM hater. I have started following you so that I catch your next attempt faster. I will not flag your post as SPAM yet. I want to expose you. I want people to know whom and what site to boycott.",
"username": "steevej"
},
{
"code": "",
"text": "Ok, @farzana_kashif, 3 days of SPAM exposure is enough. Your post has been flagged.",
"username": "steevej"
}
] | What would be the best approach for a real time inbox? | 2023-10-17T22:41:16.314Z | What would be the best approach for a real time inbox? | 245 |
null | [
"queries",
"crud",
"transactions"
] | [
{
"code": "",
"text": "Hello.\nOn LESSON 4: QUERYING ON ARRAY ELEMENTS IN MONGODB\nAt Lab: Querying on Array Elements in MongoDBdb.transactions.find({\ntransactions: {\n$elemMatch: { amount: { $lte: 4500 }, transaction_code: “sell” },\n},\n})Does not retrieve what is expected.\nAny ideas?\nRegards\nJorge",
"username": "Jorge_Gerardo_Fernandez_Lugo"
},
{
"code": "",
"text": "Does not retrieve what is expected.Please share the documents you are expecting to retrieve.Any ideas?1 - you are connected to the wrong server\n2 - you are using the wrong database\n3 - there is no collection named transactions\n4 - there is no document that has an array named transactions that matches",
"username": "steevej"
},
{
"code": "",
"text": "Hi Jorge,The $elemMatch operator will return any document for which at least 1 array item meets the query. I went through the lab, and it is indeed what is returned so it seems to work as expected.If you are seeing something different or some errors, please open a ticket with the MongoDB University Team by sending an email to [email protected] you,Davenson Lombard",
"username": "Davenson_Lombard"
}
] | Transactions elemMatch not retrieve what is expected | 2023-10-20T14:38:47.783Z | Transactions elemMatch not retrieve what is expected | 219 |
null | [] | [
{
"code": "",
"text": "Hello everyoneI want to be able to make a payment with card on my App. How I can block some fields until my payment is completed so other users dont complete the payment and there are no products to sell.For example: I have 1 croissant to sell. Two users are looking at the same time. They both are paying with the card. It takes about 30 seconds to verify the payment. The fastest verified user will take the croissant and the other user will pay but will not have the croissant.Do I have to block the croissant when one user is trying to pay?What is the best flow?Have a nice day!",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hi I believe this is the best way (pessimistic), the block is temporary until payment confirmation. Can be placed inside a transaction in a moderate way. What do you think?[image]",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "How I can block some fields of table in MongoDB Realm?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hi @Ciprian_GaborI think the use case is classic transaction:That’s the very very general gist of it. Note that this kind of workflow is frequently cited as examples for transaction use in many other database products as well, so you’ll be able to find a related tutorial realtively easily.In MongoDB, transaction is not very different. See Transactions for more details.Best regards\nKevin",
"username": "kevinadi"
},
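A very rough Node.js sketch of the reserve-then-confirm flow outlined above (the thread asks about Realm with Kotlin/Swift, so this only illustrates the general server-side idea; collection and field names are assumptions):

    const session = client.startSession();
    try {
      await session.withTransaction(async () => {
        // Atomically reserve the croissant only if stock is still available.
        const res = await db.collection("products").updateOne(
          { _id: productId, stock: { $gt: 0 } },
          { $inc: { stock: -1 } },
          { session }
        );
        if (res.modifiedCount === 0) throw new Error("Sold out");
        await db.collection("orders").insertOne(
          { productId, userId, status: "PENDING_PAYMENT", createdAt: new Date() },
          { session }
        );
      });
    } finally {
      await session.endSession();
    }
    // If the payment later fails or times out, release the reservation
    // ($inc the stock back and cancel the pending order).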
{
"code": "",
"text": "Thank you for you answer Kevin!Does anyone have an example of this for Kotlin or Swift. I am working with a KMM app and MongoDB Realm.",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hello everyone. Should I use write function from Realm to make the transaction possible?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "To prevent multiple users from purchasing the same product simultaneously in your app:Integrate a payment gateway like Stripe for secure transactions. This way, you maintain fairness and user experience in your app.",
"username": "David_Sadler"
}
] | How to block data to complete payment | 2022-12-13T22:13:30.634Z | How to block data to complete payment | 1,579 |
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "sample_mflix.moviesknnBetaknnVectorgenres{{queryEmbedding}} \"collection\": \"movies\",\n \"database\": \"sample_mflix\",\n \"dataSource\": \"Cluster0\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"vector_01\",\n \"knnBeta\": {\n \"vector\": {{queryEmbedding}},\n \"path\": \"plot_embedding\",\n \"k\": 5\n }\n }\n },\n {\n \"$set\": {\n \"score\": {\n \"$meta\": \"searchScore\"\n }\n }\n },\n {\n \"$project\": {\n \"embedding\": 0\n }\n }\n ]\n}\nComedygenresgenres{\n \"collection\": \"movies\",\n \"database\": \"sample_mflix\",\n \"dataSource\": \"Cluster0\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"vector_01\",\n \"knnBeta\": {\n \"vector\": {{queryEmbedding}},\n \"path\": \"plot_embedding\",\n \"k\": 5,\n \"filter\": {\n \"in\": {\n \"path\": \"genres\",\n \"value\": [\n \"Comedy\"\n ]\n }\n }\n }\n }\n },\n {\n \"$set\": {\n \"score\": {\n \"$meta\": \"searchScore\"\n }\n }\n },\n {\n \"$project\": {\n \"embedding\": 0\n }\n }\n ]\n}\nrated{\n \"collection\": \"movies\",\n \"database\": \"sample_mflix\",\n \"dataSource\": \"Cluster0\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"vector_01\",\n \"knnBeta\": {\n \"vector\": {{queryEmbedding}},\n \"path\": \"plot_embedding\",\n \"k\": 5,\n \"filter\": {\n \"text\": {\n \"path\": \"rated\",\n \"query\": \"PASSED\"\n }\n }\n }\n }\n },\n {\n \"$set\": {\n \"score\": {\n \"$meta\": \"searchScore\"\n }\n }\n },\n {\n \"$project\": {\n \"embedding\": 0\n }\n }\n ]\n}\ngenresvector_01{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"genres\": {\n \"analyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"plot_embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n }\n },\n \"storedSource\": {\n \"include\": [\n \"title\",\n \"plot\",\n \"genres\"\n ]\n }\n}\nmust",
"text": "I’m using sample_mflix.movies collection from Sample Mflix Dataset with knnBeta and knnVector index. First, I followed this tutorial of semantic search, and as a next step, I want to filter the movies by genres array field, before doing the semantic search.(In the following queries {{queryEmbedding}} is the embedding array - It’s environment variable in Postman.)The regular vector search works fine:Some of the documents have Comedy in genres array.\nWhen I try to filter on genres field, I get an empty result. This query doesn’t work, and I don’t know why:Interestingly, the text filter works so I can filter on rated field:Additionally, I tried modifying the index to include the genres field. This is the definition of vector_01 index:I tried with a single filter as in the examples above and with must in a compound filter. The results are the same.How can I filter on arrays while using knnBeta at the same time?",
"username": "Zacheusz_Siedlecki"
},
{
"code": "textphrasecompoundmust{\n \"collection\": \"movies\",\n \"database\": \"sample_mflix\",\n \"dataSource\": \"Cluster0\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"vector_01\",\n \"knnBeta\": {\n \"vector\": {{queryEmbedding}},\n \"path\": \"plot_embedding\",\n \"k\": 5,\n \"filter\": {\n \"text\": {\n \"query\": \"Comedy\",\n \"path\": \"genres\"\n }\n }\n }\n }\n },\n {\n \"$set\": {\n \"score\": {\n \"$meta\": \"searchScore\"\n }\n }\n },\n {\n \"$project\": {\n \"plot_embedding\": 0\n }\n }\n ]\n}\n{\n \"collection\": \"movies\",\n \"database\": \"sample_mflix\",\n \"dataSource\": \"Cluster0\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"vector_01\",\n \"knnBeta\": {\n \"vector\": {{queryEmbedding}},\n \"path\": \"plot_embedding\",\n \"k\": 5,\n \"filter\": {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"query\": \"Comedy\",\n \"path\": \"genres\"\n }\n },\n {\n \"text\": {\n \"query\": \"Drama\",\n \"path\": \"genres\"\n }\n }\n ]\n }\n }\n }\n }\n },\n {\n \"$set\": {\n \"score\": {\n \"$meta\": \"searchScore\"\n }\n }\n },\n {\n \"$project\": {\n \"plot_embedding\": 0\n }\n }\n ]\n}\nphrase{\n \"collection\": \"movies\",\n \"database\": \"sample_mflix\",\n \"dataSource\": \"Cluster0\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"vector_01\",\n \"knnBeta\": {\n \"vector\": {{queryEmbedding}},\n \"path\": \"plot_embedding\",\n \"k\": 5,\n \"filter\": {\n \"compound\": {\n \"must\": [\n {\n \"phrase\": {\n \"query\": \"Comedy\",\n \"path\": \"genres\"\n }\n },\n {\n \"phrase\": {\n \"query\": \"Drama\",\n \"path\": \"genres\"\n }\n }\n ]\n }\n }\n }\n }\n },\n {\n \"$set\": {\n \"score\": {\n \"$meta\": \"searchScore\"\n }\n }\n },\n {\n \"$project\": {\n \"plot_embedding\": 0\n }\n }\n ]\n}\nin",
"text": "It appears that text and phrase filters on the array return the expected results, but how can I filter using multiple values? Do I have to use compound filter with multiple must statements?This is a query for a single value (works fine):and compound filter with multiple values (it works):phrase filter works well too, and I suppose that’s a better choice:Is it the right approach, or should I use in operator for filtering an array?",
"username": "Zacheusz_Siedlecki"
},
{
"code": "must",
"text": "Hi @Zacheusz_Siedlecki and welcome to MongoDB community forums!!As mentioned by you in the above post, if using “phrase” solves the issue and returns the correct documents that you are looking for and does not result into performance issues, you can continue to use thatIf you do not wish to use compound filter with multiple must statements, you can use the $vectorSearch aggregation pipeline stage.\nThis stage allows you to query the indexed vector data in your Atlas cluster. You can also use comparison operator and aggregation pipeline operators to pre-filter the data that you perform the semantic search on.Please note that, the vector search is in public preview therefore, it is not recommended for production deployments and is subjected to change in the future.Please reach out in case of any questions.Warm regards\nAasawari",
"username": "Aasawari"
},
{
"code": "$vectorSearch",
"text": "@Aasawari, thank you. How can I use aggregation pipeline to pre filter the data before $vectorSearch? The documentation says that $vectorSearch must be the first stage of any pipeline where it appears.",
"username": "Zacheusz_Siedlecki"
}
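For reference, a minimal sketch of how the pre-filtering can be expressed inside the $vectorSearch stage itself rather than in an earlier pipeline stage. This assumes the newer vectorSearch-type Atlas index in which genres is declared as a filter field; the index name and variable names are illustrative:

    db.movies.aggregate([
      {
        $vectorSearch: {
          index: "vector_index",
          path: "plot_embedding",
          queryVector: queryEmbedding,
          numCandidates: 100,
          limit: 5,
          filter: { genres: { $in: ["Comedy", "Drama"] } }
        }
      },
      { $set: { score: { $meta: "vectorSearchScore" } } }
    ])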
] | knnBeta search with filter/must on array | 2023-10-08T01:44:43.222Z | knnBeta search with filter/must on array | 353 |
null | [
"data-modeling",
"python"
] | [
{
"code": "",
"text": "Hi,i have an ~10,000 .csv files in a folders in Windows [I can migrate it to Linux][1] They are all categorically arranged in a folder structure [like folder within folder within folder…5 levels of folder structure\n[2] There are EVER-RUNNING python scripts, which update these .csv files and may add new .csv files.\n[3] I don’t want to import .csv files into mongodb.\n[4] Is there any way Mongodb can source data from these .csv files [given the folder structure].\n[5] last requirement would be, can it AUTO sync to latest .csv files. given the number of .csv files may grow in each folder and also each file may grow in size [python updation].Thanks",
"username": "Calculant"
},
{
"code": "",
"text": "What do you want to get out of this? If you’re not loading them into Mongo then why does Mongo come into this? Do you want to be able to search and index them and are just thinking of being able to use the Mongo query language as a tool to accomplish this?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Hi,[1] i re-looked at my question, i meant the following on\npoint 3. “i dont want to import .csv files into mongodb\nMANUALLY”.\n[2] I should be able to use any MongoDB tool and it should be able to create\ncollections within collections based on the nesting of folders and importing .csv AUTO.Thanks",
"username": "Calculant"
},
{
"code": "",
"text": "I’m not aware of anything within Mongo or the db tools that could facilitate this automatically however it should be trivial to create something that monitors a folder and when a file is dropped in or updated runs a mongoimport to pull the data into a collection, you could then form the collection name based on the file / path or set a field on the import based on the file path.\nYou’d need to account for modified files, dropping the existing collection with the drop flag, but you may need to watch out for edge cases when the file is modified and then modified again while being processed.When I say trivial…it’ll take some work but the mechanisms for monitoring should be fairly easy.There was a similar question on reddit from a few years ago:\nhttps://www.reddit.com/r/mongodb/comments/a55tgg/automatic_import_csv_files/",
"username": "John_Sewell"
}
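A rough Node.js sketch of the watcher John describes; the root path, connection string and collection-naming scheme are assumptions, and the edge cases he mentions (files modified again while being imported) are not handled here:

    const fs = require("fs");
    const path = require("path");
    const { execFile } = require("child_process");

    const ROOT = "C:/data/csv-root";

    fs.watch(ROOT, { recursive: true }, (event, relPath) => {
      if (!relPath || !relPath.toLowerCase().endsWith(".csv")) return;
      // Derive the collection name from the nested folder structure.
      const collection = relPath.replace(/\.csv$/i, "").split(path.sep).join("_");
      execFile("mongoimport", [
        "--uri", "mongodb://localhost:27017/csvdata",
        "--collection", collection,
        "--type", "csv",
        "--headerline",
        "--drop",              // re-import modified files from scratch
        "--file", path.join(ROOT, relPath)
      ], (err) => { if (err) console.error(err); });
    });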
] | How to store csv's in a folder structure | 2023-10-21T02:57:11.026Z | How to store csv’s in a folder structure | 227 |
[] | [
{
"code": "",
"text": "I want to retrieve the user information without getting the password and salt. However when I try to make a projection it doesn’t work and still outputs the password hash. Does anyone know the issue with my code?\nimage1206×674 30.8 KBI am using nodejs with mongo module.",
"username": "F_E"
},
{
"code": " const options = {\n // Sort matched documents in descending order by rating\n sort: { \"imdb.rating\": -1 },\n // Include only the `title` and `imdb` fields in the returned document\n projection: { _id: 0, title: 1, imdb: 1 },\n };\n // Execute query\n const movie = await movies.findOne(query, options);\nawait db.collection('account').findOne({username:rusername}, {projection:{_id:0, password:0, token:0, salt:0}})\n",
"text": "Second param is the options which looks like it’s not correctly defined, see:Of note is the setting up of the options parameter:In your case you’re not passing the projection definition in wrapped in a projection field, so it could be:Also…why are you storing the salt with the password?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thank you, that worked!\nIs storing salt and hashed passwords together a bad practice? What would be an alternative?",
"username": "F_E"
},
{
"code": "",
"text": "I had typed out a long reply…but checking some other resources it seems people don’t seen that concerned about storing it together with the password any more. I’ve typically stored it as an environment variable.As it’s just to avoid the use of rainbow tables in case of a data breach, it can be stored with the password.It also seems that it’s recommended to have a salt unique to each password…learned something new today!",
"username": "John_Sewell"
}
] | Projection doesnt hide values | 2023-10-22T05:10:49.169Z | Projection doesnt hide values | 144 |
|
null | [
"next-js",
"react-js"
] | [
{
"code": "Module not found: Can't resolve 'mongodb-client-encryption' in 'C:\\...'\n\nImport trace for requested module:\n./node_modules/mongodb/lib/deps.js\n./node_modules/mongodb/lib/client-side-encryption/client_encryption.js\n./node_modules/mongodb/lib/index.js\n./app/lib/mongodb.js\n./app/components/ig-report/reportOverviewTable.js\n./app/reportdisplay/page.js\n",
"text": "In react (next JS) I can import and use mongodb fine in a server component. However, when I try to import the server component into a client component I get the following error:This is occurring for the following dependencies:mongodb-client-encryption\naws4\nsocks\nsnappy\ngcp-metadata\naws-sdk\nzstd\nkerberos",
"username": "Rupey_N_A"
},
{
"code": "Module not found: Can't resolve 'mongodb-client-encryption' in 'C:\\...'\nv13.0.2-canary.0",
"text": "Hey @Rupey_N_A,I believe this issue is related to a specific GitHub issue #42277. Based on the shared workaround, I recommend testing v13.0.2-canary.0 of Next.js. Please give it a try!In case the issue persists, I’d recommend opening a GitHub issue here.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | When I import server component using mongodb atlas on react, dependencies cant be resolved | 2023-10-22T12:50:11.094Z | When I import server component using mongodb atlas on react, dependencies cant be resolved | 175 |
[
"thailand-mug"
] | [
{
"code": "",
"text": "Mongoberfest2566×1268 291 KBMongoberfest x Pyfest is the part of Hacktoberfest event in Thailand.Contribute Period: 07-21 OCT 2023\nEvent date: Saturday, OCT 21\nPlace: WiseSightThis event has two activities.Event Type: Hybrid\nLocation: 123 Suntowers Building B, 33rd floor, Vibhavadi-Rangsit Rd., Chom Phon, Chatuchak, Bangkok 10900 Thailand\nVideo Conferencing URL",
"username": "Piti.Champeethong"
},
{
"code": "",
"text": "See you all there. We’ll launch the first live-session for how to contribute soon!",
"username": "Kanin_Kearpimy"
},
{
"code": "",
"text": "This is the video how to contribution (in Thai) > click here",
"username": "Piti.Champeethong"
},
{
"code": "",
"text": "Hi Everyone.There are some good feeling pictures at the event \nI hope to see all of you again next meetup. Thank you!!! ",
"username": "Piti.Champeethong"
}
] | Thailand MUG: Mongoberfest x Pyfest | 2023-10-05T01:10:27.535Z | Thailand MUG: Mongoberfest x Pyfest | 726 |
|
null | [
"aggregation",
"replication"
] | [
{
"code": "rs.status()health: 1{\"t\":{\"$date\":\"2023-10-22T14:47:19.001+00:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n",
"text": "Hi,I have a PSA architecture where the arbiter was added as the last node. rs.status() dumps health: 1 for all nodes but there’s a log entry that I see appending every 1 second in the arbiter node. This is what it says:Should this be ignored or something be fixed?",
"username": "rick123"
},
{
"code": "",
"text": "Hi @rick123From the documentation:https://www.mongodb.com/docs/manual/core/replica-set-arbiter/#:~:text=An%20arbiter%20participates%20in%20elections%20for%20primary%20but%20an%20arbiter%20does%20not%20have%20a%20copy%20of%20the%20data%20set%20and%20cannot%20become%20a%20primary.https://www.mongodb.com/docs/manual/core/replica-set-oplog/#:~:text=MongoDB%20applies%20database,of%20the%20database.So, if the arbiter node serves only for voting and is not a data-carrying node, logically, it will not be an impactful issue.Regards",
"username": "Fabio_Ramohitaj"
}
] | Arbiter Logging Error Every Second | 2023-10-22T14:49:12.952Z | Arbiter Logging Error Every Second | 245 |
null | [
"aggregation",
"data-modeling"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"6524ea72a85a4164cf29f849\"\n },\n \"form_id\": {\n \"$oid\": \"6523d74ecf7337be2640b59b\"\n },\n \"user_id\": 94584,\n \"resource_id\": 31258,\n \"resource_type\": 1,\n \"responses\": [\n {\n \"field_id\": {\n \"$oid\": \"6523d74ecf7337be2640b596\"\n },\n \"field_type\": 2,\n \"question\": \"What is your feedback of the session?\",\n \"options\": [\n {\n \"_id\": {\n \"$oid\": \"6523d74ecf7337be2640b597\"\n },\n \"label\": \"Very informative\"\n },\n {\n \"_id\": {\n \"$oid\": \"6523d74ecf7337be2640b598\"\n },\n \"label\": \"Content needs to improve\"\n },\n {\n \"_id\": {\n \"$oid\": \"6523d74ecf7337be2640b599\"\n },\n \"label\": \"Lecture was good\"\n }\n ],\n \"answer\": {\n \"$oid\": \"6523d74ecf7337be2640b597\"\n }\n },\n {\n \"field_id\": {\n \"$oid\": \"6523d74ecf7337be2640b59a\"\n },\n \"field_type\": 3,\n \"question\": \"How would you rate the session\",\n \"numbers\": 5,\n \"start_label\": \"Poor\",\n \"end_label\": \"Great\",\n \"answer\": 5\n },\n {\n \"field_id\": {\n \"$oid\": \"6524d92af40cb9f6de2966c8\"\n },\n \"field_type\": 4,\n \"question\": \"Long answer Type\",\n \"answer\": \"Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum\".\n }\n ],\n \"createdAt\": {\n \"$date\": \"2023-10-10T06:08:50.333Z\"\n },\n \"updatedAt\": {\n \"$date\": \"2023-10-10T06:08:50.333Z\"\n }\n}\n{\n $match:\n /**\n * query: The query in MQL.\n */\n {\n form_id: ObjectId(\n \"6523d74ecf7337be2640b59b\"\n ),\n \"responses.field_type\": {\n $in: [2, 3],\n },\n },\n },\n {\n $project:\n /**\n * specifications: The fields to\n * include or exclude.\n */\n {\n \"responses.field_id\": 1,\n \"responses.field_type\": 1,\n \"responses.answer\": 1,\n },\n },\n {\n $unwind:\n /**\n * path: Path to the array field.\n * includeArrayIndex: Optional name for index.\n * preserveNullAndEmptyArrays: Optional\n * toggle to unwind null and empty values.\n */\n {\n path: \"$responses\",\n },\n },\n {\n $match:\n /**\n * query: The query in MQL.\n */\n {\n \"responses.field_type\": {\n $in: [2, 3],\n },\n },\n },\n {\n $group:\n /**\n * _id: The id of the group.\n * fieldN: The first field name.\n */\n {\n _id: {\n field_id: \"$responses.field_id\",\n answer: \"$responses.answer\",\n },\n answer: {\n $first: \"$responses.answer\",\n },\n count: {\n $sum: 1,\n },\n },\n },\n {\n $group:\n /**\n * _id: The id of the group.\n * fieldN: The first field name.\n */\n {\n _id: \"$_id.field_id\",\n submissions: {\n $push: {\n k: {\n $toString: \"$_id.answer\",\n },\n v: \"$count\",\n },\n },\n },\n },\n {\n $project: {\n _id: 1,\n submissions: {\n $arrayToObject: \"$submissions\",\n },\n },\n },\n{\n \"_id\": {\n \"$oid\": \"6523d74ecf7337be2640b596\"\n },\n \"submissions\": {\n \"6523d74ecf7337be2640b597\": 20311,\n \"6523d74ecf7337be2640b599\": 19922,\n \"6523d74ecf7337be2640b598\": 19769\n }\n},\n{\n \"_id\": {\n \"$oid\": \"6523d74ecf7337be2640b59a\"\n },\n \"submissions\": {\n \"1\": 12068,\n \"2\": 12056,\n \"3\": 11919,\n \"4\": 12086,\n \"5\": 11873\n }\n}\n",
"text": "Hello,So to explain i have a collection of feedback submission which contains data in the following structure.so this is a example of one submission and there can be possibly more than 3 million submissions for each\nform_id .For stats (like google forms ) lets say only for form fields like : Single choice or multiple choice or ratings and similar i need to bring the count for all the options .and i have made this aggregate commandwhile checking it is working for 60k records ( submissions) in 500 ms . I doubt this is a slow query and is there any better way to write this .Indesxes are on\nform_id , responses.field_id\nThe above is giving response like this which is fine.Also for fields like short answer and long answer lets say i need to collect only latest 20 response grouped by responses.field_id ( yet to make aggregate command) , ( if anyone can help )",
"username": "Gouri_Sankar"
},
{
"code": "",
"text": "A few things.1 - Since you do $in on responses.field_type in your first $match, your index should include responses.field_type between form_id and responses.field_id2 - Since you do $project on responses.answer it could help to add it as the last field of the index.3 - You could $project _id:04 - Rather that $unwind, then $match on field_type, you could $filter in the preceding $project. This way you would $unwind only the appropriate elements, then the $match would be unnecessary.5 - In the first $group, using $first on responses.answer seems redundant since it is also part of the group _id.",
"username": "steevej"
},
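A sketch of the compound index suggested in points 1 and 2 above (field order as described; whether it actually helps should be verified with explain):

    db.feedback_submissions.createIndex({
      form_id: 1,
      "responses.field_type": 1,
      "responses.field_id": 1,
      "responses.answer": 1
    })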
{
"code": "\"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"634eb9d68eb58a3508562fac_mongo_vla.feedback_submissions\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"form_id\": {\n \"$eq\": \"6523d74ecf7337be2640b59b\"\n }\n },\n \"queryHash\": \"BEF482EC\",\n \"planCacheKey\": \"44FEDEFF\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"PROJECTION_DEFAULT\",\n \"transformBy\": {\n \"_id\": true,\n \"responses\": {\n \"$filter\": {\n \"input\": \"$responses\",\n \"as\": \"response\",\n \"cond\": {\n \"$in\": [\n \"$$response.field_type\",\n { \"$const\": [2, 3] }\n ]\n }\n }\n }\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": { \"form_id\": 1 },\n \"indexName\": \"form_id_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"form_id\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"form_id\": [\n \"[ObjectId('6523d74ecf7337be2640b59b'), ObjectId('6523d74ecf7337be2640b59b')]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 110007,\n \"executionTimeMillis\": 935,\n \"totalKeysExamined\": 110007,\n \"totalDocsExamined\": 110007,\n \"executionStages\": {\n \"stage\": \"PROJECTION_DEFAULT\",\n \"nReturned\": 110007,\n \"executionTimeMillisEstimate\": 370,\n \"works\": 110008,\n \"advanced\": 110007,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 157,\n \"restoreState\": 157,\n \"isEOF\": 1,\n \"transformBy\": {\n \"_id\": true,\n \"responses\": {\n \"$filter\": {\n \"input\": \"$responses\",\n \"as\": \"response\",\n \"cond\": {\n \"$in\": [\n \"$$response.field_type\",\n { \"$const\": [2, 3] }\n ]\n }\n }\n }\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 110007,\n \"executionTimeMillisEstimate\": 143,\n \"works\": 110008,\n \"advanced\": 110007,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 157,\n \"restoreState\": 157,\n \"isEOF\": 1,\n \"docsExamined\": 110007,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 110007,\n \"executionTimeMillisEstimate\": 46,\n \"works\": 110008,\n \"advanced\": 110007,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 157,\n \"restoreState\": 157,\n \"isEOF\": 1,\n \"keyPattern\": { \"form_id\": 1 },\n \"indexName\": \"form_id_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"form_id\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"form_id\": [\n \"[ObjectId('6523d74ecf7337be2640b59b'), ObjectId('6523d74ecf7337be2640b59b')]\"\n ]\n },\n \"keysExamined\": 110007,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 110007,\n \"executionTimeMillisEstimate\": 655\n },\n {\n \"$unwind\": { \"path\": \"$responses\" },\n \"nReturned\": 220004,\n \"executionTimeMillisEstimate\": 768\n },\n {\n \"$group\": {\n \"_id\": {\n \"field_id\": \"$responses.field_id\",\n \"answer\": \"$responses.answer\"\n },\n \"count\": { \"$sum\": { \"$const\": 1 } }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"count\": 640\n },\n \"totalOutputDataSizeBytes\": 3920,\n \"usedDisk\": false,\n \"spills\": 0,\n \"nReturned\": 8,\n 
\"executionTimeMillisEstimate\": 935\n },\n {\n \"$group\": {\n \"_id\": \"$_id.field_id\",\n \"submissions\": {\n \"$push\": {\n \"k\": {\n \"$convert\": {\n \"input\": \"$_id.answer\",\n \"to\": { \"$const\": \"string\" }\n }\n },\n \"v\": \"$count\"\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"submissions\": 2336\n },\n \"totalOutputDataSizeBytes\": 2778,\n \"usedDisk\": false,\n \"spills\": 0,\n \"nReturned\": 2,\n \"executionTimeMillisEstimate\": 935\n },\n {\n \"$project\": {\n \"_id\": true,\n \"submissions\": {\n \"$arrayToObject\": [\"$submissions\"]\n }\n },\n \"nReturned\": 2,\n \"executionTimeMillisEstimate\": 935\n }\n ],\n\"command\": {\n \"aggregate\": \"feedback_submissions\",\n \"pipeline\": [\n {\n \"$match\": {\n \"form_id\": \"6523d74ecf7337be2640b59b\"\n }\n },\n {\n \"$project\": {\n \"responses\": {\n \"$filter\": {\n \"input\": \"$responses\",\n \"as\": \"response\",\n \"cond\": {\n \"$in\": [\n \"$$response.field_type\",\n [2, 3]\n ]\n }\n }\n }\n }\n },\n { \"$unwind\": { \"path\": \"$responses\" } },\n {\n \"$group\": {\n \"_id\": {\n \"field_id\": \"$responses.field_id\",\n \"answer\": \"$responses.answer\"\n },\n \"count\": { \"$sum\": 1 }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$_id.field_id\",\n \"submissions\": {\n \"$push\": {\n \"k\": { \"$toString\": \"$_id.answer\" },\n \"v\": \"$count\"\n }\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 1,\n \"submissions\": {\n \"$arrayToObject\": \"$submissions\"\n }\n }\n }\n ],\n \"cursor\": {},\n \"maxTimeMS\": 60000,\n \"$db\": \"mongo_vla\"\n },\n",
"text": "Hey steevej ,Thank you for helping here , have updated the query based on the above mentioned data ,\ntechnically filter looks good rather than match but still i can see similar numberadding query and statsquery / CommandAlso as this is a testing cluster its free and shared ( will it affect time , if yes how much).As i am designing the collection we can change the collection architecture also if you have any idea which can make it faster",
"username": "Gouri_Sankar"
},
{
"code": "",
"text": "Hello , Can you check below comment",
"username": "Gouri_Sankar"
},
{
"code": "\"responses.field_type\": {\n $in: [2, 3],\n },\n",
"text": "Removingfrom the first $match stage was not something I mentioned? Why did you removed it?this is a testing cluster its free and shared ( will it affect time , if yes how much)Of course it will. It is free and shared. Shared means many other might connect on other free and shared clusters running on the same machine. So it is clear it will affect time and it is impossible to say how much.",
"username": "steevej"
},
{
"code": "",
"text": "So you mean I should not be removed from the first match right? If I keep it, it is 150ms slower but let’s say in the longer run obviously it will help as will pick only submissions having those field types which makes sense here .",
"username": "Gouri_Sankar"
},
{
"code": "",
"text": "Yes, in principle, you should keep it. Otherwise, you fetch, $filter and $unwind documents that are not needed.However, if all your top documents $match all the time, it is kind of useless and might be the reason why it is slower with it.But doing performance analysis on a shared cluster is kind of useless sinceShared means many other might connect on other free and shared clusters running on the same machine. So it is clear it will affect time and it is impossible to say how much.I have notice on the explain plan that you did not3 - You could $project _id:0Also the index used is form_id_1 so I suspect that the following was also ignored2 - Since you do $project on responses.answer it could help to add it as the last field of the index.",
"username": "steevej"
},
{
"code": "",
"text": "Yes it makes no much sense to have a specific caluclation on shared cluster but thats what will se what can be done and will test with indepndent also.I have notice on the explain plan that you did notYes because in response i need that _id at the last one if you are talking about but if about the _id of submission , than making_id : 0 didn’t really changes much it was a change if 10ms , i had done but wasn’t there in copied explain plan.Also the index used is form_id_1 so I suspect that the following was also ignoredI am unsure exactly what you are talking about but if adding index to response.answer i feel it will increase the index size.",
"username": "Gouri_Sankar"
},
{
"code": "",
"text": "Yes because in response i need that _id at the last one if you are talking about but if about the _id of submissionI am talking about the _id of the original documents, not the _id out of the $group stage. The _id of the original documents are not needed for the computation. So they can be projected out in the $project stage where you $filter.i feel it will increase the index sizeOh it will definitively make the index bigger. But, may be, just may be, the aggregation can be covered by the index, hence avoiding fetching the documents on disk.",
"username": "steevej"
}
] | Grouping and counting in mongodb | 2023-10-11T04:47:13.598Z | Grouping and counting in mongodb | 328 |
[
"cxx"
] | [
{
"code": "auto builder = bsoncxx::builder::stream::document{};\nbsoncxx::document::value doc_value = builder\n << \"name\" << \"MongoDB\"\n << \"type\" << \"database\"\n << \"count\" << 1\n << \"versions\" << bsoncxx::builder::stream::open_array\n << \"v3.2\" << \"v3.0\" << \"v2.6\"\n << bsoncxx::builder::stream::close_array\n << \"info\" << bsoncxx::builder::stream::open_document\n << \"x\" << 203\n << \"y\" << 102\n << bsoncxx::builder::stream::close_document\n << bsoncxx::builder::stream::finalize;\n",
"text": "Hello,\nI am trying to get a code example running that I found here.\nmongocxx::database db = client[“testdb”];\nmongocxx::collection coll = db[“testcollection”];When I run the code in VS 2019, I get the following error:\nWhat should I do?",
"username": "Simon_Reitbauer"
},
{
"code": "",
"text": "Hi @Simon_Reitbauer,It would be helpful if you could provide the program code example, so that other could help debug the issue better.Could you also provide:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "I’ve solved this issue, you can delete this topic because it doesn’t contain informations that might be helpful for others.",
"username": "Simon_Reitbauer"
},
{
"code": "",
"text": "Hi @Simon_Reitbauer,I’m glad that you’re able to solve the issue. If you could share the problem and how did you solve it, it may be helpful to others encountering a similar issue in the future.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "And how you managed with this? I have the same problem",
"username": "Anton_Frantsen"
}
] | Mongocxx-application: abort() has been called | 2020-05-18T17:22:35.460Z | Mongocxx-application: abort() has been called | 2,540 |
|
null | [
"swift"
] | [
{
"code": "@main\nstruct MyApp: SwiftUI.App {\n let app: RealmSwift.App? = RealmSwift.App(id: YOUR_APP_SERVICES_APP_ID_HERE)\n \n var body: some Scene {\n WindowGroup {\n\t\t\n Group {\n if let app = app, let user = app.currentUser, let config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in INITIAL_SUBSCRIPTION_WORK}, rerunOnOpen: true) {\n OpenSyncedRealmView()\n .environment(\\.realmConfiguration, config)\n } else {\n AuthView()\n }\n }\n }\n }\n}\n\nstruct OpenSyncedRealmView: View {\n @AsyncOpen(appId: my_app_id, timeout: 10000) var asyncOpen\n \n var body: some View {\n switch asyncOpen {\n case .error(let error):\n Text(error.localizedDescription) // This is \"ResolvedFailed\" when user is in subway (or on flight mode)\n case .connecting, .waitingForUser, .progress:\n LoadingView()\n case .open(let realm):\n MyAppMainView()\n .environment(\\.realm, realm)\n }\n }\n}\n",
"text": "How do I sync the realm again once internet connection is restored?Scenario:I know the user can hard-close the app and re-open, but it’s bad user experience. Is there a better way?Here’s a code sample:",
"username": "Itamar_Gil"
},
{
"code": "",
"text": "I have also come across this problem. Is there a solution to restart the sync?",
"username": "Thomas_Flad"
}
] | SwiftUI how to retry @AsyncOpen when internet is restored ResolvedFailed error when offline | 2023-04-17T18:40:36.845Z | SwiftUI how to retry @AsyncOpen when internet is restored ResolvedFailed error when offline | 670 |
null | [] | [
{
"code": "",
"text": "Is there any way to insert a document with server-side Date, or some workaround for it?",
"username": "Noam_Gershi"
},
{
"code": "_id_idupsert: truedb.collection.updateOne(\n {\n // pass query that does not find any document\n },\n {\n $currentDate: {\n // date property, modify to whatever you want to set\n dateProperty: true\n },\n $set: {\n // your insert object properties\n // ...\n }\n },\n { upsert: true }\n)\n$currentDateupsert: true$setNOWCLUSTER_TIMEdb.collection.updateOne(\n {\n // pass query that does not find any document\n },\n [{\n $set: {\n // date property, modify to whatever you want to set\n dateProperty: \"$$NOW\",\n // your insert object properties\n // ...\n }\n }],\n { upsert: true }\n)\n",
"text": "Hello @Noam_Gershi, Welcome to the MongoDB community forum,If you are using MongoDB’s default ObjectId in _id, that includes a timestamp component which you can use to infer the creation date for a document. you will find a method to get timestamps from the _id property in a specific language driver whatever you are using.If it is still needed then you can try the below options:There are other options to add timestamps in MongoDB server-side using update methods only,1). $currentDate: you can use this operator with upsert: true optionStarting in MongoDB 5.0, update operators process document fields with string-based names in lexicographic order. Fields with numeric names are processed in numeric order. See Update Operators Behavior for details.2). Aggregation Alternative to $currentDate: you can use “$$NOW” or “$$CLUSTER_TIME” with update with aggregation pipeline and upsert: true optionStarting in version 4.2, update methods can accept an aggregation pipeline. As such, the previous example can be rewritten as the following using the aggregation stage $set and the aggregation variables NOW (for the current datetime) and CLUSTER_TIME (for the current timestamp):",
"username": "turivishal"
},
{
"code": "",
"text": "Thanx for the reply!Since _id has seconds-precision, and not millis-precision (as with Date) - I wonder if it is safe to use instead MongoDB Timestamp type (which also has seconds-precision) . From MongoDB documentation:The BSON timestamp type is for internal MongoDB use. For most cases, in application development, you will want to use the BSON date type.",
"username": "Noam_Gershi"
},
{
"code": "ObjectId",
"text": "Hi @Noam_GershiI wonder if it is safe to use instead MongoDB Timestamp type (which also has seconds-precision)Note that the timestamp type is intended to be used for internal MongoDB processes. You can of course try to use it if it fits your use case, but bear in mind the reason why this type is created.Instead, it’s recommended to use the date type instead for most cases.In general, although the default ObjectId type does contain a date information, if you need the dates to do some processing (e.g. fetching between two dates), then it’s usually better to have a field dedicated to storing this date. This way, you can use many aggregation operations on dates, such as $dateAdd, $dateDiff, $dateSubtract, $dateToParts, $dateToString, and others. Those are not available for ObjectId-derived dates.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "_id_id_id",
"text": "Let’s say I want to add a new document with the lastUpdate field and I would also like to get the newly inserted _id, how can I do that? I know that I can just add the _id field for the insert, but is it possible to have two identical _id generated and cause one of the record being override?",
"username": "Chongju_Mai"
},
{
"code": "",
"text": "You may want to create a new thread as opposed to resurrecting an old one. You’re not going to get a duplicate _id if you let the server / driver create it for you. If using straight shell and certain APIs you’ll get the created ID returned to you anyway:You didn’t way what language you’re using so I can’t check any driver documentation for what you may want, but you can check yourself on the mongo driver docs pages:",
"username": "John_Sewell"
}
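For illustration, a small Node.js driver example of getting the generated _id back from an insert (collection and field names are placeholders); the server-side timestamp techniques are the ones shown earlier in this thread:

    const result = await db.collection("items").insertOne({
      name: "example",
      lastUpdate: new Date()   // or use the $$NOW upsert approach above for a server-side date
    });
    console.log(result.insertedId);   // the ObjectId generated for this document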
] | Saving server-side Date on insert | 2023-02-02T20:33:43.486Z | Saving server-side Date on insert | 1,278 |
null | [] | [
{
"code": "",
"text": "The M0 issue happens even when trying to create a fresh cluster.",
"username": "Sofiane_Chaieb"
},
{
"code": "",
"text": "Hi @Sofiane_Chaieb,Could you provide some further details regarding this post including:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": " 400 (request \"INVALID_ENUM_VALUE\") An invalid enumeration value M0 was specified'MongoDB::Atlas::Cluster'AtlasCluster:\n Type: 'MongoDB::Atlas::Cluster'\n Properties:\n ProjectId: !GetAtt \n - AtlasProject\n - Id\n Name: !Ref AtlasClusterName\n Profile: !Ref ProfileKey\n ClusterType: REPLICASET\n ReplicationSpecs:\n - NumShards: '1'\n AdvancedRegionConfigs:\n - ElectableSpecs:\n EbsVolumeType: STANDARD\n InstanceSize: M0\n NodeCount: '3'\n RegionName: US_EAST_1\n",
"text": "Hi @Jason_Tran,The issue is discussed here: MongoDB::Atlas::Cluster doesn't expose all ProviderSettings available in Atlas API · Issue #22 · mongodb/mongodbatlas-cloudformation-resources · GitHub",
"username": "Sofiane_Chaieb"
}
] | M0 creation issue | 2023-05-11T17:50:07.474Z | M0 creation issue | 575 |
[] | [
{
"code": "",
"text": "I am trying to install mongo-c-driver with this guide: Getting Started with MongoDB and C++ | MongoDB\nThe building process is going fine, but when I am trying to execute(install) it, I recieve MAAANY SYNTAX ERRORS:\nü2959×809 46 KBIdk what is wrong, I hope You will help me.",
"username": "Anton_Frantsen"
},
{
"code": "",
"text": "@Roberto_Sanchez I sawn you help on this forum a lot, please help, I use MongoDB for educational purposes.",
"username": "Anton_Frantsen"
}
] | Many syntax errors when executing mongo-c-driver build | 2023-10-21T18:13:34.672Z | Many syntax errors when executing mongo-c-driver build | 178 |
|
null | [
"aggregation",
"java",
"spark-connector"
] | [
{
"code": "import com.mongodb.client.model.{Aggregates, Filters}\nimport com.mongodb.spark.sql.connector.config.{MongoConfig, ReadConfig}\nimport com.mongodb.spark.sql.connector.read.partitioner.PaginateBySizePartitioner\nimport org.apache.spark.SparkConf\nimport org.apache.spark.sql.SparkSession\nimport org.bson.BsonValue\n\nobject WorkWithMongoTest {\n def main(args: Array[String]): Unit = {\n val conf = new SparkConf()\n conf.setAppName(\"Spark-MongoDB-Connector-Tests \" + new java.util.Date(System.currentTimeMillis()))\n val spark = SparkSession.builder().config(conf).getOrCreate()\n try {\n val docIds: Array[BsonValue] = Array.empty[BsonValue] // array of 300 document ids\n val filterPipeLine = Aggregates.`match`(Filters.in(\"_id\", docIds: _*))\n val configMap = Map(\n \"connection.uri\" ->\n \"mongodb://admin:admin@localhost:27017/heavy_db.heavy_data?readPreference=primaryPreferred&authSource=admin&authMechanism=SCRAM-SHA-1\",\n \"aggregation.pipeline\" -> filterPipeLine.toBsonDocument.toJson(),\n \"partitioner\" -> classOf[PaginateBySizePartitioner].getName,\n \"partitioner.options.partition.size\" -> \"64\"\n )\n spark.read.format(\"mongodb\")\n .options(configMap)\n .load()\n } catch {\n case e: Exception => e.printStackTrace()\n } finally {\n println(\"========================== PROGRAM END ==============================\")\n spark.stop()\n }\n }\n}\n",
"text": "Data Source Details:\nDatabase: Mongo DB 6.0\nCollection Size: 3.35 GB\nTotal Documents: 3000\nAvg. Document Size: ~1 MBTotal Available Cores: 3\nAllocated Executor Cores: 3\nTotal Available Memory: 3 GB\nTotal Allocated Memory: 3 GB\nAllocated Driver Memory: 1 GB\nNumber of Executors: 1Spark-Mongo-Connector Version: 10.1.1\nUsing default fraction values.The below program is crashing due to OOM: Java heap space. On analyzing the heap dump, this code is actually fetching and putting the data into an ArrayList internally causing OOM.My Spark Program:Q1. The method load() is not an action. Then why is it still fetching the data and loading it into the heap?Q2. Why am I possibly facing the Java Heap Space issue while reading 300 documents? Any suggestions?",
"username": "Basant_Gurung"
},
{
"code": "load()DataFrameReaderDataset<Row>Dataset<Row>DataFrameReader$sampleDatasetspark.read.format(\"mongodb\") .options(configMap).schema(<SCHEMA>).load()\n",
"text": "Hi @Basant_Gurung,Great question and thank you for including lots of information. I think I know the cause and have a solution for you.Q1. The method load() is not an action. Then why is it still fetching the data and loading it into the heap?This is slightly naunced as load() essentially takes a DataFrameReader and outputs a Dataset<Row>. The Dataset<Row> has a schema and there is no set schema on the DataFrameReader so there is an “action” the connector has to infer the schema.\nSee: https://www.mongodb.com/docs/spark-connector/current/read-from-mongodbQ2. Why am I possibly facing the Java Heap Space issue while reading 300 documents? Any suggestions?Schema inference uses the sampleSize configuration which in turn utilizes the $sample operator and then compares the documents to produce a schema. This is done on the Spark Driver machine and it appears that is what requires more memory.So to fix please either explicitly provide a Schema to the Dataset eg:Or increase the memory allocated to the Spark Driver.I hope that helps,Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Thanks for your response, Ross. That clarifies the root cause. In my case, the schema is unknown and it may or may not be the same for all the documents in the target collection.\nSince I have to process TBs of such kind of data in the future, I am trying to read the documents in batches so that it doesn’t run OOM. The physical memory is limited, If I allocate let’s say 4GB memory for the spark driver, it can still throw OOM at some point. A batch of 300 documents could be of 300 Kb or 3GB. Also, it’s not necessary that reading documents of 100 MB size will consume exactly 100MB of heap space. Creating small batches could lead to underutilization of the resources and large batches would lead to OOM.",
"username": "Basant_Gurung"
},
{
"code": "sampleSizesampleSize",
"text": "Hi @Ross_Lawley, I could see that setting the value for sampleSize in the spark conf to a lesser value like 10 does the trick. I would like to know how important schema inferring is, since in my case the documents may not have a similar structure always? what impact will it have if I set the sampleSize to 1?",
"username": "Basant_Gurung"
},
{
"code": "sampleSize",
"text": "Hi @Basant_Gurung,So sampleSize is important in that directly relates to the number of documents used to infer the schema. If you chose 1 - then only a single document’s schema would be used. If your data is mixed and many documents have different shapes, then a larger sample will be required. As the sample is randomly selected from the collection - you need a relatively large size to ensure a representative sample.When reading from the collection the documents are then shaped into the schema. So not having a schema that is representative of the data is problematic as data could be missed or type errors can occur converting into the corresponding Spark type.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Oh! I see. Points noted. Thanks for your time @Ross_Lawley.",
"username": "Basant_Gurung"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Facing Java Heap Space OOM issue when large data is read on limited memory | 2023-10-19T05:41:36.829Z | Facing Java Heap Space OOM issue when large data is read on limited memory | 308 |
null | [
"spark-connector"
] | [
{
"code": " val spark = SparkSession.builder()\n .appName(\"Spark-MongoDB-Connector-Tests-001\")\n .config(\"spark.mongodb.read.connection.uri\", \"mongodb://x:x@localhost:27017/\")\n .config(\"spark.mongodb.read.database\", \"mydb\")\n .config(\"spark.mongodb.read.collection\", \"data_1000_docs_1mb_each\")\n .config(\"spark.mongodb.read.sampleSize\", \"200\")\n .getOrCreate()\n\n spark.read.format(\"mongodb\")\n .load()\n .toJSON.count()\n",
"text": "I have a collection of 1000 documents of 1 MB avg doc size. I want to fetch 200 random docs. I am using the “sampleSize” property as follows. But it is fetching the entire collection. Please help! why is the “sampleSize” configuration not working? Is there any issue with the code?",
"username": "Basant_Gurung"
},
{
"code": "",
"text": "The sampleSize option is just the number of docs that are sampled for inferring the schema. otherwise you have to setup the schema of the source documents (from where you would be reading) explicitly.For fetching 200 random docs you will have to perform that logic in application layer. Let me know if that helped answer your question",
"username": "Prakul_Agarwal"
},
{
"code": "sampleSize$sample",
"text": "Thanks @Prakul_Agarwal for your response. Now, I understand the purpose of sampleSize. For fetching 200 random documents, I am now using $sample pipeline with the spark connector. It works like a charm! Thanks again for your time. ",
"username": "Basant_Gurung"
},
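For reference, the $sample stage mentioned in the previous reply looks like this on the database side; a small pymongo sketch (the database and collection names come from the Spark config earlier in the thread, the URI is a placeholder, and this only shows the stage itself, not the Spark connector wiring).

```python
# Fetch ~200 random documents with the $sample aggregation stage.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
coll = client["mydb"]["data_1000_docs_1mb_each"]

random_docs = list(coll.aggregate([{"$sample": {"size": 200}}]))
print(len(random_docs))
```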
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | spark.mongodb.read.sampleSize is not working in mongo spark connector r10.1.1 | 2023-10-09T05:00:46.004Z | spark.mongodb.read.sampleSize is not working in mongo spark connector r10.1.1 | 291 |
null | [
"replication"
] | [
{
"code": "maxStalenessSeconds",
"text": "HiI have 3 nodes in replica set. Lets say A is primary and B, C are secondary.\nI have configured write concern as “majority”\n{ w : “majority” }I have configured read preference as “Read from secondary”.\nNow lets say at time\nt1 - A , B and C are consitent\nt2 - write happens\nt3 - A, B are consistent and not C (due to write majority)Question 1t4- If we do read, from which secondary node (B or C) read is served ? what technique is used by default ? Is it from B as it has latest update ?\nIs it from C ? If this happens i might get stale data. Not wanted to happen.Question 2And also could you please explain maxStalenessSeconds when does maxStaleness comes into picture if lag is compared against primary node last write and secondary node last write ?\nTo comapre we have two entities primary time and secondary time ? what is the role of maxStaleness",
"username": "Manjunath_k_s"
},
{
"code": "",
"text": "First please go over all your previous thread and make sure you help the forum be useful by marking as the solution one of the many replies you got.You should find the answers in",
"username": "steevej"
},
{
"code": "",
"text": "If this happens i might get stale data. Not wanted to happenthen your only choice is to use lineariable read.",
"username": "Kobe_W"
}
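To make the two replies above concrete, here is a small pymongo sketch. The hostnames, the replica set name rs0 and the db/collection names are made-up values; the 90-second threshold is also the minimum the server accepts. maxStalenessSeconds tells the driver to avoid secondaries whose estimated lag behind the primary exceeds the threshold, while a linearizable read, which must be served by the primary, is the option for reads that can never be stale.

```python
# Sketch only: hosts, replica set name and db/collection names are placeholders.
from pymongo import MongoClient, ReadPreference
from pymongo.read_concern import ReadConcern

uri = (
    "mongodb://hostA:27017,hostB:27017,hostC:27017/"
    "?replicaSet=rs0&w=majority"
    "&readPreference=secondary&maxStalenessSeconds=90"
)
client = MongoClient(uri)

# Routed to a secondary, but only one whose estimated staleness is <= 90s.
doc = client.mydb.mycoll.find_one({"_id": 1})

# For reads that must reflect all majority-acknowledged writes,
# use a linearizable read concern on the primary instead.
strict_coll = client.mydb.get_collection(
    "mycoll",
    read_preference=ReadPreference.PRIMARY,
    read_concern=ReadConcern("linearizable"),
)
fresh_doc = strict_coll.find_one({"_id": 1})
```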
] | Read preference among secondary nodes? | 2023-10-21T11:11:09.679Z | Read preference among secondary nodes? | 182 |
[] | [
{
"code": "",
"text": "Hey everybody, I just wanted to mention that we published a paper about our experience and the step-by-step instructions on how to implement MongoDB in a laboratory environment for nontech people, based on the experience of our effort in COVID-19 surveillance.Maybe somebody will find it interesting.Abstract. With the rapidly growing amount of biological data, powerful but also flexible data management and visualization systems are of increasingly crucial i",
"username": "Mateusz_Jundzill_1"
},
{
"code": "",
"text": "Thanks for sharing.Interesting indeed.",
"username": "steevej"
}
] | We just write a paper about using MongoDB in the lab! | 2023-10-18T13:23:46.544Z | We just write a paper about using MongoDB in the lab! | 229 |
|
null | [
"aggregation",
"queries",
"node-js",
"data-modeling"
] | [
{
"code": "const ContactInfo = new mongoose.Schema({\n number: Number,\n email: String,\n facebook: String,\n snapchat: String,\n twitter: String,\n instagram: String,\n ...\n})\n\nconst UsersSchema = new mongoose.Schema({\n name: String,\n contactInfo: ContactInfo\n})\n$arrayToObject$set",
"text": "Hi,I’m working on a project where users need to be able to update and add contact information. Users can add as many means of contact or social media information as they want. I have a schema that looks like:I want to be able to send to the backend only the contactInfo data that the user provides, so I’m trying to do that using $arrayToObject. The problem that I’m having is that when I using this with $set its deleting/ ignoring the previously saved data. Here is an example of the issue: mongoplayground. I would really appreciate any help or advice on how to fix this. Thank you!",
"username": "p_p1"
},
{
"code": "db.collection.update({\n \"_id\": \"63492520b5ef3f0bf00282ca\"\n},\n{\n $set: {\n \"contactInfo.email\": \"instagram\",\n \"contactInfo.example_gmail_com\": \"@example\"\n }\n},\n{\n upsert: true\n})\n[\n {\n \"_id\": \"63492520b5ef3f0bf00282ca\",\n \"contactInfo\": {\n \"email\": \"instagram\",\n \"example_gmail_com\": \"@example\",\n \"number\": \"xxx-xxx-xxxx\"\n }\n }\n]\n",
"text": "Hey, welcome to the MongoDB community.See if this helps you =DOutput:",
"username": "Samuel_84194"
},
{
"code": "",
"text": "I think we are missing some facts about your use-case.Why do you use $arrayToObject? How is your array of k/v values built?I think that sharing the code around your update could lead to more specif recommendation. The reason why the other fields are removed from contactInfo is that you $set the top field contactInfo. Basically, you request to change the object contactInfo with the result of $arrayToObject. Using the dot notation as shown by Samuel_84194 is the way to set sub-fields inside an object without losing the other sub-fields.If you have no choice and you receive the array as you shared then you would need to use reduce on this array to convert it to the dot notation.",
"username": "steevej"
}
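In case it helps future readers, here is the reduce idea from the last reply sketched in Python/pymongo rather than in the aggregation pipeline. The _id value is the one from the earlier example; the incoming payload and connection details are made up. The point is to build dot-notation keys from whatever partial contactInfo data arrives, so $set only touches those sub-fields.

```python
# Only the contactInfo sub-fields present in the request are updated;
# the rest of the embedded document is left untouched.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
users = client["mydb"]["users"]                     # placeholder names

incoming = {"email": "new@example.com", "instagram": "@example"}  # made-up payload

update = {f"contactInfo.{key}": value for key, value in incoming.items()}

users.update_one(
    {"_id": "63492520b5ef3f0bf00282ca"},
    {"$set": update},
    upsert=True,
)
```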
] | How to not delete items in document when using $set with $arrayToObject? | 2023-10-20T23:27:49.079Z | How to not delete items in document when using $set with $arrayToObject? | 186 |
null | [
"node-js"
] | [
{
"code": "TypeError: Cannot read properties of undefined (reading 'startsWith')\n\n 7 |\n 8 | // Create a MongoClient with a MongoClientOptions object to set the Stable API version\n > 9 | const client = new MongoClient(uri as string, {\n | ^\n 10 | serverApi: {\n 11 | version: ServerApiVersion.v1,\n 12 | strict: false,\n\n at connectionStringHasValidScheme (node_modules/mongodb-connection-string-url/src/index.ts:13:22)\n at new ConnectionString (node_modules/mongodb-connection-string-url/src/index.ts:132:30)\n at parseOptions (node_modules/mongodb/src/connection_string.ts:244:15)\n at new MongoClient (node_modules/mongodb/src/mongo_client.ts:331:34)\n at Object.<anonymous> (src/data/mongodb.ts:9:16)\n at Object.<anonymous> (src/components/Sidebar/Sidebar.tsx:12:18)\n at Object.<anonymous> (src/components/Sidebar/__test__/Sidebar.test.tsx:8:57)\nconst uri = process.env.MONGODB_CONNECTION_STRING;\n\n// Create a MongoClient with a MongoClientOptions object to set the Stable API version\nconst client = new MongoClient(uri as string, {\n serverApi: {\n version: ServerApiVersion.v1,\n strict: false,\n deprecationErrors: true,\n },\n});\n",
"text": "Hi. I am trying to connect to Mongo Atlas using Nextjs (^13.5.3), mongodb (^6.1.0) and nodejs (20.5.1). My web app is running perfectly, retrieves and updates data as it should. However when I run my tests with Jest and React Tesing Library, the following error is logged:I read a same issue related to this: TypeError: Cannot read property 'startsWith' of undefined\nThe problem was solved by moving the .env files inside the source directory, but that also didn’t work for me.\nAt this point, I have no idea what could be the problem. My only thought is that the node server isn’t running when I run the tests so the enviroment variables can’t be accessed. But I need to include the Sidebar component so I could test it in the unit test.Here is the code that throws the error:Appreciate every help ",
"username": "Csanad_Tarjanyi"
},
{
"code": "export class MongoDB {\n private static URI = process.env.MONGODB_URI;\n private static client = new MongoClient(this.URI!);\nnpm i --save dotenv-clinpx dotenv-cli -e .env.local -- jest",
"text": "I was having the same issues with my code: server runs fine but my tests break.I isolated the problem and fixed it.\nI assume your ‘uri’ is getting its value from an environment variable, am I correct? So, mine code was like this:Problem is that URI is getting undefined when I run the tests, but not when I run my app. Why?\nWell, for some reason, Jest (Which I’m using) does not get your environment variables correctly, and you should inform jest which file to look.To resolve the issue, I installed dotenv-cli, which ables you to run applications on the cli using a dotenv file as base in case you have environment variables in your SUT.These are the steps:",
"username": "Joao_Textor"
}
] | Tests failed because of type error: TypeError: Cannot read properties of undefined (reading 'startsWith') | 2023-10-05T20:01:45.886Z | Tests failed because of type error: TypeError: Cannot read properties of undefined (reading ‘startsWith’) | 380 |
[
"queries",
"mongodb-shell"
] | [
{
"code": "db.restaurants.findOne(){\n location: {\n type: \"Point\",\n coordinates: [-73.856077, 40.848447]\n },\n name: \"Morris Park Bake Shop\"\n}\n{\"_id\":{\"$oid\":\"5eb3d668b31de5d588f4292e\"},\"address\":{\"building\":\"1007\",\"coord\":[{\"$numberDouble\":\"-73.856077\"},{\"$numberDouble\":\"40.848447\"}],\"street\":\"Morris Park Ave\",\"zipcode\":\"10462\"},\"borough\":\"Bronx\",\"cuisine\":\"Bakery\",\"grades\":[{\"date\":{\"$date\":{\"$numberLong\":\"1393804800000\"}},\"grade\":\"A\",\"score\":{\"$numberInt\":\"2\"}},{\"date\":{\"$date\":{\"$numberLong\":\"1378857600000\"}},\"grade\":\"A\",\"score\":{\"$numberInt\":\"6\"}},{\"date\":{\"$date\":{\"$numberLong\":\"1358985600000\"}},\"grade\":\"A\",\"score\":{\"$numberInt\":\"10\"}},{\"date\":{\"$date\":{\"$numberLong\":\"1322006400000\"}},\"grade\":\"A\",\"score\":{\"$numberInt\":\"9\"}},{\"date\":{\"$date\":{\"$numberLong\":\"1299715200000\"}},\"grade\":\"B\",\"score\":{\"$numberInt\":\"14\"}}],\"name\":\"Morris Park Bake Shop\",\"restaurant_id\":\"30075445\"}\ndb.restaurants.findOne()",
"text": "Hi, just trying to understand the basic geospatial data & queries. InExploring the DataI inspected an entry in the newly-created restaurants collection in\nmongosh, withdb.restaurants.findOne()which returns a document like the following:However, what I found in the MongoDB Atlas UI, the actual record looks like this:So is the db.restaurants.findOne() a function that is doing its own data massaging?As I want to understand what data format I can/should use for geospatial data in my DB, so that I can make use of the geospatial queries, yet, what the above query returned differs greatly with what was stored.Any clarification/explanation please? thx.",
"username": "MBee"
},
{
"code": "{\n location: {\n type: \"Point\",\n coordinates: [-73.856077, 40.848447]\n },\n name: \"Morris Park Bake Shop\"\n}\n",
"text": "Did you reallyI inspected an entry in the newly-created restaurants collectionand get simply the following as outputor you simply cut-n-paste what is in the documentation?I think you simply cut-n-paste what is in the documentation because without a projection you will get an _id. I think that in the documentation they only shown the fields important to the discussion.I think you are really looking at the same document. One is represented as EJSON (the one with $oid) and the other is only part of the whole document to make the documentation easier to read.So no, findOne() is not doing its own data massaging.",
"username": "steevej"
},
{
"code": "> db.restaurants.findOne()\n{\n _id: ObjectId(\"55cba2476c522cafdb053add\"),\n location: { coordinates: [ -73.856077, 40.848447 ], type: 'Point' },\n name: 'Morris Park Bake Shop'\n}\n\"address\":{\"building\":\"1007\",\"coord\":db.restaurants.findOne()findOne()mongoimport",
"text": "Thanks,Ah, indeed, there is an _id field. Here is the actual output from my side:What I’m asking is that, in the sample data, it was:\"address\":{\"building\":\"1007\",\"coord\":Yet the db.restaurants.findOne() is showing me totally different thing.So if findOne() is not doing its own data massaging, then where the differences are coming from?Or I am looking at the wrong collection?I was looking into the sample_restaurants collection in the MongoDB Atlas UI, where the mongoimport command in the tutorial didn’t specify a database…",
"username": "MBee"
},
{
"code": "use test\nshow collections\ndb.restaurants.countDocuments()\nuse sample_restaurants\nshow collections\ndb.restaurants.countDocuments()\n",
"text": "Or I am looking at the wrong collection?Most likely you are.I was looking into the sample_restaurants collection in the MongoDB Atlas UIFrom the screenshot you supplied (the part with show dbs), sample_restaurants is a database, not a collection.What I suspect is that your mongoimport used the test database. In the same Atlas UI, do the following commands and share the output:Also share the mongoimport command you used.",
"username": "steevej"
},
{
"code": "",
"text": "Ah, indeed:Problem solved. Thanks.",
"username": "MBee"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Understanding the basic geospatial data & queries | 2023-10-19T21:54:13.193Z | Understanding the basic geospatial data & queries | 224 |
|
null | [
"aggregation",
"change-streams"
] | [
{
"code": "app.get('/msg/:id', (req, res) => {\n\n const pipeline = [\n { $match: { 'fullDocument.parent': req.params.id } }\n ];\n const changeStream = db.collection('message').watch(pipeline);\n changeStream.on('change', (result) => {\n res.write('event: test\\n');\n res.write('data: ' + JSON.stringify([result.fullDocument]) + '\\n\\n');\n });\n});\n",
"text": "I’m using express / mongodb official driver to make APIs for a real time app.\nWhen users make a request to /msg/:id, will it spawn many changeStream listeners every time so that the performance will eventually drop?\nor is it OK to use like this?",
"username": "suren"
},
{
"code": "",
"text": "what’s the variable scope of this change stream object? can it be reused later by a new request?i’m a bit concerned with resource leak, yeah, as you said. Something is allocated but then ref lost and never gets released.",
"username": "Kobe_W"
},
{
"code": "const pipeline = [\n { $match: { 'fullDocument.parent': req.params.id } }\n]\nconst changeStream = db.collection('message').watch(pipeline)\n\napp.get('/msg/:id', (req, res) => {\n\n changeStream.on('change', (result) => {\n res.write('event: test\\n');\n res.write('data: ' + JSON.stringify([result.fullDocument]) + '\\n\\n');\n });\n});\n",
"text": "Oh yeah moving the changeStream to a global variable would be better.\nbut I’m still wondering about using changeStream.on() inside a server API is safe or not.",
"username": "surenk"
},
{
"code": "",
"text": "Have you tried your code?As far as I understand on() simply sets a listener and returns right away. So, I see1 - res will never get a value with res.write unless you are lucky and a message is updated just at the right moment\n2 - if you could set many listeners on the same stream, the you will be susceptible to DOSThat is why, have you tried your code?",
"username": "steevej"
},
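On point 2 and the resource-leak worry raised earlier in the thread, the key thing is that every request-scoped change stream gets closed when its client goes away; in the Node.js driver that would mean calling changeStream.close() from a 'close' handler on the request or response. Just to illustrate the idea of a per-consumer stream with guaranteed cleanup, here is a rough pymongo sketch for comparison (the collection name and $match shape come from the thread; the parent id, database name and connection string are placeholders).

```python
# One stream per consumer; the "with" block releases the server-side
# cursor as soon as we stop listening.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
messages = client["mydb"]["message"]                # db name is a placeholder

pipeline = [{"$match": {"fullDocument.parent": "some-parent-id"}}]  # placeholder id

with messages.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        print(change["fullDocument"])
        break  # stop after one event in this sketch
```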
{
"code": "app.get('/msg/:id', (req, res) => {\n res.writeHead(200, {\n \"Connection\": \"keep-alive\",\n \"Content-Type\": \"text/event-stream\",\n \"Cache-Control\": \"no-cache\",\n });\n\n const pipeline = [\n { $match: { 'fullDocument.parent': req.params.id } }\n ];\n const changeStream = db.collection('message').watch(pipeline);\n changeStream.on('change', (result) => {\n res.write('event: test\\n');\n res.write('data: ' + JSON.stringify([result.fullDocument]) + '\\n\\n');\n });\n});\n",
"text": "Yeah the res.write works fine because I used SSE and it keeps the connection alive.\nAbout number 2, I should think more about it, thanks.",
"username": "suren"
}
] | Is it OK to use change streams inside a server API? | 2023-10-21T01:05:20.447Z | Is it OK to use change streams inside a server API? | 176 |
null | [] | [
{
"code": "",
"text": "Hello All,How to get the buffer/cache pool hit ratio in MongoDB as we do it in DB2 and MySQL?\nWhat is the good ratio of buffer/cache hit ratio for best performance?Thank you in advance.",
"username": "Elango_Gopal"
},
{
"code": "serverStatusdb.serverStatus( { \"wiredTiger.cache\": 1 } )\nwiredTiger.cachebytes currently in the cachetracked dirty bytes in the cachebytes read into cachebytes written from cache",
"text": "Hey, welcome to the MongoDB community!In MongoDB, the buffer/cache hit rate refers to the efficiency with which the in-memory data cache (WiredTiger in the case of the default storage engine) is utilized. A high hit rate typically indicates that most read operations are being served from the in-memory cache, which is faster than reading from disk.To get statistics on cache performance in MongoDB when using the WiredTiger storage engine, you can use the serverStatus command:Within the wiredTiger.cache section, you’ll find various cache-related statistics. Statistics of interest include:The hit rate can be approximated from these statistics, but MongoDB does not provide a direct “cache hit rate” like some other databases.In general, a cache hit rate above 95% is considered good for most workloads. This means that 95% or more of the read operations are being served from the cache instead of going to disk. However, the exact value considered “good” may vary based on the nature of your workload and your system’s configuration.If the cache hit rate is consistently below 95%, it may be an indication that:In these cases, you might consider:",
"username": "Samuel_84194"
}
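To turn those statistics into a number, something like the following works; a pymongo sketch. The page-level statistic names used here are what recent WiredTiger versions report and are my assumption rather than something quoted in the thread, so check db.serverStatus().wiredTiger.cache on your own deployment.

```python
# Approximate WiredTiger cache hit ratio: page requests that did NOT
# require reading a page in from disk, divided by all page requests.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI

cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]
requested = cache["pages requested from the cache"]  # assumed stat name
read_in = cache["pages read into cache"]             # assumed stat name

hit_ratio = (requested - read_in) / requested if requested else 0.0
print(f"approximate cache hit ratio: {hit_ratio:.2%}")
```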
] | How to get Buffer cache Hit ratio / Recommended ratio | 2023-10-21T09:10:54.141Z | How to get Buffer cache Hit ratio / Recommended ratio | 147 |
null | [
"python"
] | [
{
"code": "",
"text": "ServerSelectionTimeoutError pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 6533a7fa644f86b3222a4e79, topology_type: Unknown, servers: [<ServerDescr",
"username": "Aakash_Rathor"
},
{
"code": "",
"text": "Is mongod running?Can you connect with Compass or mongosh?The errorConnection refusedoften means that no process is listening on the given IP and port?What is your OS?",
"username": "steevej"
}
] | Server slection timeout error how to solve | 2023-10-21T10:51:59.810Z | Server slection timeout error how to solve | 167 |
[
"aggregation",
"kigali-mug"
] | [
{
"code": "",
"text": "\nspeaker for MongoDB event960×540 110 KB\nThis event will be covering Aggregation with MongoDB.The agenda is as follows:Arrival - 8:45am to 9:00amIcebreaker(Games, Interaction) - 9:00am to 9:10amIntroduction - 9:10am to 9:30am.\n- Women in Tech Club leaders, faculty and staff, plus other club leaders.MongoDB Introduction + Benefits - 9:20am to 9:45am.Health Break - 9:45am - 9:55am.Speaker Session - 10:30am to 11:15am.Health Break - 11:15am to 11:30am.Q & A Session - 11:30am to 11:45am.Kahoot Game - 11:45am to 11:50am.Swag Issue + Photo Session - 12:00am to 12:15amSnacks and guest leave at their own pleasure.Event Type: In-Person\nLocation: Carnegie Mellon University Africa(CMU-Africa), Regional ICT Center of Excellence Bldg Plot No A8.The speaker is Romerik a CMU-Africa student pursuing a Master’s in Engineering Artificial Intelligence(MS EAI) Class of 2025. He is a Data Scientist and familiar with MongoDB Technologies.Organiser: delphine nyaboke, in collaboration with Women in Tech Club at CMU-Africa.Kindly note that you’re required to also register on the Google Form for CMU-Africa logistics purposes to confirm attendance. Also, carry a laptop for the hands-on session.",
"username": "delphine_nyaboke"
},
{
"code": "",
"text": "The session was very insightful and educative. Thanks to the organizers and volunteers of the event.\nI enjoyed the technical content because I got the understanding from a practical standpoint of how to perform aggregation in MongoDB atlas and I got a high-level understanding of how to start using it in my projects that are utilizing MongoDB App Services.I also enjoyed the networking and the Kahoot games which allowed me to win some cool swaggs with credits.I can’t wait to attend another one ",
"username": "Julius_Aries_Kanneh_Jr"
}
] | MongoDB for Data Aggregation | 2023-09-13T19:48:51.449Z | MongoDB for Data Aggregation | 1,952 |
|
null | [
"queries",
"sharding",
"indexes"
] | [
{
"code": "",
"text": "Suppose I have a large collection sharded by {_id: “hashed”}, with an index on {_id: “hashed”, foo: 1}I would like to build a set of queries that will process all documents that have a specific value for foo, in partitions that can be parallelized. Ideally, I would like each partition to be targeted to as few shards as possible.One way to do this is with the min/max/hint cursor methods. We can partition the range of a 64-bit long (which Mongo uses for hashed sharding and indexing) into chunks, or even use the bounds as reported by config.chunks.A typical query might look like this:db.coll.find({foo: “abc”}).min({_id: 398989839382, foo: “abc”}).max({_id: 9389898239898, foo: “abc”}).hint({_id: “hashed”, foo: 1})Here, the bounds for _id passed to min() and max() are hashes used for sharding, not ObjectIds.So far so good, and this query will return the correct documents. The issue I’m seeing is that the {foo: “abc”} condition is being applied in the FETCH, not the IXSCAN. This makes the query ineffecient, because there may be many other values for “foo” that we are not interested in.My questions:",
"username": "Evan_Goldenberg"
},
{
"code": "",
"text": "is there a way to make Mongo evaluate the {foo: “abc”} condition using the index, which does have all the necessary informationTry creating an index on {foo: 1}. Your query only specifies foo value, which is not a prefix of {_id:hashed, foo:1}Are there other recommended approaches for splitting this query into parallelizable partitions?i’m not sure why you want to do this manually? db.coll.find({foo: “abc”}) should transparently broadcast the query to all shards and then do aggregation across results.",
"username": "Kobe_W"
}
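A small pymongo sketch of the first suggestion, i.e. adding an index whose leading key is the filtered field so the {foo: "abc"} predicate can be evaluated inside the IXSCAN rather than at FETCH time. Collection and connection names are placeholders, and whether this plays well with the manual partitioning scheme described in the question is a separate matter.

```python
# Create the suggested index and run the targeted query.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
coll = client["mydb"]["coll"]                       # placeholder names

coll.create_index([("foo", 1)])

# `foo` is now an index prefix, so the equality predicate is applied
# during the index scan.
docs = coll.find({"foo": "abc"})
```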
] | Processing all matching documents in parallelizable partitions | 2023-10-20T22:54:14.134Z | Processing all matching documents in parallelizable partitions | 172 |
null | [
"queries",
"python"
] | [
{
"code": "notice_idmissing = 0\nfor notice_id in notice_ids:\n rec = collection.find_one( {\"noticeId\":notice_id } )\n if not rec:\n missing += 1\nprint(f\"There are {missing} missing of {len(notice_ids)} records\")\nThere are 366 missing of 486 records.find.find_oneThere are 0 missing of 486 records",
"text": "I am running some QA on a project I took over and I am trying to find out why when I check if there is any missing data based on a file of notice_id(s)I get There are 366 missing of 486 recordsHowever, when I update my script using the .find method instead of the .find_one the answer becomes There are 0 missing of 486 records",
"username": "Daniel_Donovan"
},
{
"code": "findcursor",
"text": "Nevermind, the find method returns a cursor…",
"username": "Daniel_Donovan"
},
{
"code": "missing = 0\nfor notice_id in notice_ids:\n for doc in collection.find({\"noticeId\":notice_id }):\n if not doc:\n missing+=1\nprint(f\"There are {missing} missing of {len(notice_ids)} records\")\nfind_one",
"text": "Well when i loop through the cursor like,I return 0; why is the find_one method not finding all the records?",
"username": "Daniel_Donovan"
},
{
"code": " for doc in collection.find({\"noticeId\":notice_id }):\n if not doc:\n missing+=1\n",
"text": "Both methods work. But your code is wrong. With the find codemissing will always be 0 because the find() returns no documents the if not doc: in your for loop is never executed but if not rec: is always executed.",
"username": "steevej"
}
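To spell that out, here is one corrected way to count the missing ids; a self-contained pymongo sketch. The field name noticeId comes from the thread, while the connection string, collection name and the sample notice_ids list are placeholders standing in for the file contents.

```python
# Count notice ids that have no matching document.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
collection = client["mydb"]["notices"]              # placeholder names
notice_ids = ["A-1", "A-2", "A-3"]                  # stand-in for the file contents

missing = sum(
    1 for notice_id in notice_ids
    if collection.find_one({"noticeId": notice_id}) is None
)
# Equivalent without fetching whole documents:
#   collection.count_documents({"noticeId": notice_id}, limit=1) == 0

print(f"There are {missing} missing of {len(notice_ids)} records")
```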
] | What are the differences between the find and find_one methods? | 2023-10-20T20:02:45.679Z | What are the differences between the find and find_one methods? | 175 |
[
"aggregation",
"queries",
"python"
] | [
{
"code": " start_of_month = datetime.datetime(year, month, 1, 0, 0, 0)\n last_day_number = calendar.monthrange(year, month)[1]\n last_of_month = datetime.datetime(year, month, last_day_number)\n\n for finding query => {\"date\": {\"$gte\": start_of_month, \"$lt\": end_of_month}}\n",
"text": "Screenshot from 2023-10-20 12-43-56729×105 4.97 KB\nHi,\nI have a datas like this in the database. I need to query the datas between start and end of the specific month.I have used python and write the code as below,This code is not working properly. could you help me out?",
"username": "Suganth_M"
},
{
"code": "",
"text": "Most likely you have a problem because your field date is stored as a string while you querying with a datetime value.",
"username": "steevej"
}
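To illustrate the point, assuming the dates are stored as ISO-like "YYYY-MM-DD HH:MM:SS" strings (I can only guess the exact format from the screenshot, so treat that as an assumption), you can compare against string bounds built the same way, since ISO-style strings sort chronologically. The cleaner long-term fix is to store real BSON dates so the original datetime query works as written.

```python
# Assumes `date` is stored as a string like "2023-10-05 12:30:00".
import calendar
import datetime

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
coll = client["mydb"]["mycoll"]                     # placeholder names

year, month = 2023, 10
start_of_month = datetime.datetime(year, month, 1)
last_day = calendar.monthrange(year, month)[1]
end_of_month = datetime.datetime(year, month, last_day, 23, 59, 59)

query = {
    "date": {
        "$gte": start_of_month.strftime("%Y-%m-%d %H:%M:%S"),
        "$lte": end_of_month.strftime("%Y-%m-%d %H:%M:%S"),
    }
}
docs = list(coll.find(query))
```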
] | Filtering using specific date range | 2023-10-20T07:16:52.429Z | Filtering using specific date range | 164 |
|
null | [
"queries"
] | [
{
"code": "",
"text": "I am trying to query a collection inside an App Service function.\nMy _id is a BSON.UUID.\nWhen I receive my data, I received an empty object for my _id.\nFor instance:\n{“_id”:{},“name”:“Anne”,“phone”:“21987654321”}\nThat also happens when I have a function triggered by insertions. I receive my document but the _id is an empty object.\nIt also happens with other “different” types, like decimal 128.\nCould someone help me?",
"username": "Mariana_Zagatti"
},
{
"code": "",
"text": "Is it really an empty object? Or your print routine is not able to print the value correctly? The fact thatIt also happens with other “different” types, like decimal 128.seems to indicate a bug in your print routine.",
"username": "steevej"
},
{
"code": "",
"text": "I am not sure what the problem is, but I can’t access what is inside _id.\nWhen I try do to something like\nnew BSON.UUID(obj._id)\nI get “TypeError: Value is not an object: undefined”\nWhen I log obj._id, I get [object Binary]\nWhen I log JSON.stringify(oldUsedId._id), I get {}",
"username": "Mariana_Zagatti"
},
{
"code": "",
"text": "Why is obj inWhen I log obj._id, I get [object Binary]and oldUsedId inWhen I log JSON.stringify(oldUsedId._id), I get {}What is the code that produce{“_id”:{},“name”:“Anne”,“phone”:“21987654321”}",
"username": "steevej"
},
{
"code": "",
"text": "Hello Mariana, I issued a similar problem. There is some difference when we use Binary data with MongoDB and try to manipulate it with JSON. Maybe you can try to use EJSON instead of JSON https://www.mongodb.com/docs/atlas/app-services/data-api/data-formats/#binary.It works for me hope it could works for you too.",
"username": "Ana_Carolyne_Matos_Nascimento"
},
{
"code": "",
"text": "@Mariana_Zagatti, could we have a follow-up on this? If a solution was provided here, please mark the post as the solution, otherwise please share the solution. This is the best way to keep the forum useful to everyone.",
"username": "steevej"
},
{
"code": "",
"text": "Too bad we do not have a follow up from @Mariana_Zagatti.",
"username": "steevej"
}
] | "find" returns empty object for UUID | 2023-10-11T21:57:10.249Z | “find” returns empty object for UUID | 331 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "/Users/olumayokunogunfowora/Desktop/complete-node-bootcamp-master/4-natours/starter/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:367\n return new error_1.MongoNetworkError(err);\n ^\n\nMongoNetworkError: 80203CF601000000:error:0A000438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1586:SSL alert number 80\n\n at connectionFailureError (/Users/olumayokunogunfowora/Desktop/complete-node-bootcamp-master/4-natours/starter/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:367:20)\n at TLSSocket.<anonymous> (/Users/olumayokunogunfowora/Desktop/complete-node-bootcamp-master/4-natours/starter/node_modules/mongoose/node_modules/mongodb/lib/cmap/connect.js:290:22)\n at Object.onceWrapper (node:events:628:26)\n at TLSSocket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: [Error: 80203CF601000000:error:0A000438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:../deps/openssl/openssl/ssl/record/rec_layer_s3.c:1586:SSL alert number 80\n ] {\n library: 'SSL routines',\n reason: 'tlsv1 alert internal error',\n code: 'ERR_SSL_TLSV1_ALERT_INTERNAL_ERROR'\n },\n connectionGeneration: 1,\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n}\n\nNode.js v18.16.1\n[nodemon] app crashed - waiting for file changes before starting...\n",
"text": "App is listening on port 5001",
"username": "olumayokun_ogunfowora"
},
{
"code": "",
"text": "Good afternoon, welcome!You can pass information about your connection, how you are creating, if you are using Atlas or your own servers. This will help people understand the problem.",
"username": "Samuel_84194"
},
{
"code": "",
"text": "One possible cause to this for Mongo Atlas in particular, is if your IP is not whitelisted. In that case, the SSL handshake fails. The error is a little misleading, but I was getting an “ERR_SSL_TLSV1_ALERT_INTERNAL_ERROR” until I whitelisted my IP address.",
"username": "Jeff_Reed"
},
{
"code": "",
"text": "Yeah Thanks,It really helped me",
"username": "Syeda_Aiman"
},
{
"code": "",
"text": "Hey!\ndid you figure out the olutio i am getting the same erro.",
"username": "Kabir_Khan"
},
{
"code": "",
"text": "Here“One possible cause to this for Mongo Atlas in particular, is if your IP is not whitelisted. In that case, the SSL handshake fails. The error is a little misleading, but I was getting an “ERR_SSL_TLSV1_ALERT_INTERNAL_ERROR” until I whitelisted my IP address.”",
"username": "Samuel_84194"
}
] | I keep getting error when i try connecting node to mongodb | 2023-09-08T08:41:35.184Z | I keep getting error when i try connecting node to mongodb | 1,599 |
null | [
"compass",
"indexes"
] | [
{
"code": "",
"text": "I’m using this collation option\n{\nlocale: ‘en’,\ncaseLevel: false,\nstrength: 2,\nalternate: ‘shifted’,\n}to create index with unique and collation",
"username": "VC_jhala"
},
{
"code": "",
"text": "pattern matching on strings in MongoDB 7.0",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "What to do with regex? I want to create index.",
"username": "VC_jhala"
},
{
"code": "",
"text": "Sorry, it gives error in collation as it considers same john-jacob-jafrrie and “john jacob”",
"username": "VC_jhala"
},
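One thing worth isolating here: with strength 2 the comparison is case-insensitive, and alternate: 'shifted' additionally makes whitespace and punctuation ignorable, so hyphenated and spaced variants of the same letters start colliding in the unique index. If the goal is only case-insensitive uniqueness ("John Jacob" equals "john jacob", but "John Jacob Jafrrie" stays distinct), an index without the alternate option may behave better; a pymongo sketch follows, with collection and connection details as placeholders.

```python
# Unique, case-insensitive index on `name` (strength 2 ignores case but
# keeps letters, spaces and punctuation significant).
from pymongo import MongoClient
from pymongo.collation import Collation

client = MongoClient("mongodb://localhost:27017/")  # placeholder URI
coll = client["mydb"]["people"]                     # placeholder names

coll.create_index(
    [("name", 1)],
    unique=True,
    collation=Collation(locale="en", strength=2),
)
```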
{
"code": "",
"text": "",
"username": "Jack_Woehr"
}
] | Mongodb Collation issue. I have a usecase that i need a field like name should be unique such that it should consider "John Jacob", "john jacob", "John Jacob" as same. but it should not consider "John Jacob Jafrrie" as same value | 2023-10-19T13:38:49.433Z | Mongodb Collation issue. I have a usecase that i need a field like name should be unique such that it should consider “John Jacob”, “john jacob”, “John Jacob” as same. but it should not consider “John Jacob Jafrrie” as same value | 178 |
null | [
"react-js"
] | [
{
"code": "'use client'\nimport React, {ChangeEvent, FormEvent, useState} from \"react\";\nimport axios from 'axios';\n\nconst Form : React.FC = () => {\n const [rangeValue, setRangeValue] = React.useState<number | string>(\"50\");\n const [name, setName] = useState('');\n const giveRangeValue = (event: React.ChangeEvent<HTMLInputElement>) => {\n const newRangeValue: string = event.target.value;\n setRangeValue(newRangeValue === \"0\" ? 0 : newRangeValue);\n }\n\n const handleSubmit = async (e: FormEvent<HTMLFormElement>)=> {\n e.preventDefault()\n console.log(\"Name:\", name);\n console.log(\"Range Value:\", rangeValue);\n\n const postData = {\n gebruikersNaam: name,\n risicoScore: rangeValue,\n };\n try {\n const response = await axios.post('/api/risk', postData);\n\n if (response.status === 200) {\n console.log(response.data);\n } else {\n console.error(\"Error:\", response.data);\n }\n } catch (error){\n console.error(error)\n }\n }\n\n return (\n <main className={\"flex flex-row\"}>\n <form onSubmit={handleSubmit}>\n <div className={\"form-div space-y-4\"}>\n <div className={\"alert flex flex-col items-start\"}>\n <label>\n Naam\n </label>\n <input\n className={'input'}\n id=\"email\"\n value={name}\n type=\"text\"\n name=\"gebruikersNaam\"\n onChange={(e) =>\n setName(e.target.value)}/>\n </div>\n <div className=\"alert flex flex-row\">\n <input\n id=\"range\"\n type={\"range\"}\n min={0} max={100}\n value={rangeValue}\n onChange={giveRangeValue}>\n </input>\n <span className={\"value\"}>{rangeValue}</span>\n </div>\n <div className={\"form-btns\"}>\n <button className={\"btn mr-5\"} type={\"submit\"}>Maak melding</button>\n <button className={\"btn btn-form ml-10\"} type={\"button\"}><a href={\"../\"}>Terug</a>\n </button>\n </div>\n </div>\n </form>\n </main>\n )\n}\n\nexport default Form\nimport {getCollection} from \"@util/mongodb\";\nimport {Riskscore} from \"@/types/db/Riskscore\";\nimport {NextApiRequest, NextApiResponse} from \"next\";\n\n\nexport async function POST(req: NextApiRequest, res: NextApiResponse) {\n if (req.method === 'POST') {\n try {\n const {gebruikersNaam, risicoScore} = req.body;\n const collection = await getCollection<Riskscore>('netherlands', 'risicoscores');\n\n const result = await collection.insertOne({\n overlast: \"E38\",\n gebruikersNaam: \"Jack\",\n risicoScore: 4,\n });\n\n if (result.insertedId) {\n res.status(200).json({message: 'Name submitted to MongoDB'});\n } else {\n res.status(500).json({error: 'Error inserting data into MongoDB'});\n }\n } catch (error) {\n res.status(500).json({error: 'Error submitting name to MongoDB'});\n }\n } else {\n res.status(405).end();\n }\n}\nimport {Document} from \"mongodb\";\n\nexport interface Riskscore extends Document {\n overlast: string\n gebruikersNaam: string\n risicoScore: number\n}\n",
"text": "The problem is that when I press the submit button, it sends the data to MongoDB, but I also receive a 500 error. Another issue is that when I try to input user data (inside the html), it sends null. Can someone help me?This is the from.tsxThis is the app/api/riskthis is the Riskscore.d.ts",
"username": "Batin_Simsek"
},
{
"code": "import {getCollection} from \"@util/mongodb\";\nimport {Riskscore} from \"@/types/db/Riskscore\";\n\nexport async function POST(request: Request) {\n try {\n const body = await request.json(); //This is how you access the body in 13\n const {gebruikersNaam, risicoScore} = body;\n console.log(\"Received Data: gebruikersNaam =\", gebruikersNaam, \"risicoScore =\", risicoScore);\n const collection = await getCollection<Riskscore>('netherlands', 'risicoscores');\n\n const result = await collection.insertOne({\n overlast: \"E38\",\n gebruikersNaam: gebruikersNaam,\n risicoScore: risicoScore,\n });\n\n if (result.insertedId) {\n body.status(200).json({message: 'Name submitted to MongoDB'});\n } else {\n body.status(500).json({error: 'Error inserting data into MongoDB'});\n }\n } catch (error) {\n console.log(error)\n }\n}\n",
"text": "NextJS 13 handles the responses differently from 12 so you will need to adjust your code to this.\nNow it work",
"username": "Batin_Simsek"
}
] | Getting 500 Error and Null Values When Submitting Data to MongoDB | 2023-10-20T18:54:11.639Z | Getting 500 Error and Null Values When Submitting Data to MongoDB | 157 |
[
"queries",
"next-js",
"api"
] | [
{
"code": "import clientPromise from \"../lib/mongodb\";\n\nexport default async (req, res) => {\n try {\n const client = await clientPromise;\n const db = client.db(\"Reports\");\n\n const collection = await db\n .collection(\"Reports\")\n \n const report = await db\n .collection(\"Reports\")\n .find({})\n .toArray()\n\n res.json(report);\n } catch (e) {\n console.error(e);\n }\n};\n\nTypeError: Cannot read properties of undefined (reading 'json')\n",
"text": "I am following this tutorial:Learn how to easily integrate MongoDB into your Next.js application with the official MongoDB package.My code is:In my code the report after .toArray() returns my data within an array. However, I get this error:It seems the data comes through to the array but then there is an issue returning it as json.",
"username": "Rupey_N_A"
},
{
"code": "import clientPromise from \"../lib/mongodb\";\n\nexport default async (req, res) => {\n try {\n const client = await clientPromise;\n const db = client.db(\"Reports\");\n\n const collection = await db\n .collection(\"Reports\")\n \n const report = await db\n .collection(\"Reports\")\n .find({})\n .toArray()\n\n res.json(report);\n } catch (e) {\n console.error(e);\n }\n};\nreturn new Response(JSON.stringify({report : report}), { status: 200 })import clientPromise from \"../lib/mongodb\";\n\nexport const GET = async (res, req) =>{\n try {\n const client = await clientPromise;\n const db = client.db(\"Reports\");\n\n const collection = await db\n .collection(\"Reports\")\n \n const report = await db\n .collection(\"Reports\")\n .find({})\n .toArray()\n\n return new Response(JSON.stringify({report : report}), { status: 200 })\n\n } catch (e) {\n console.error(e);\n }\n};\n",
"text": "NextJS made an update on this, you should return your response like so:return new Response(JSON.stringify({report : report}), { status: 200 })Update your code as below, it should work.Let me know if this is helpful.",
"username": "rasaq_adewuyi"
}
] | Error setting up API with NextJS and MongoDB | 2023-10-20T16:32:29.823Z | Error setting up API with NextJS and MongoDB | 155 |
|
null | [
"replication"
] | [
{
"code": "",
"text": "Mongodb has 4 nodes of replicaset withNode 1 - Priority 1- Secondary\nNode 2- Priority 1- Secondary\nNode 3- Priority 1 -Primary\nNode 4- Priority 0 - Secondary( used for Reading Data)Node 3 Primary server disk was filled and running out of space and services got stopped and node 4 db services was also down dueNow here Node 1 or Node 2 Any one Node needs to became primary. But it failed to became primaryNode1 and Node2 were only Secondary , After that in the node 3 server Space was cleared and services was restarted, Node 1 became the primary and node 2 and node 3 were secondary,My question why node 1 or node 2 didn;t became primary when the node 3 was services was down due to space constraints.Mongodb version 5.0.20OS\nNAME=“CentOS Linux”\nVERSION=“8”\nID=“centos”\nID_LIKE=“rhel fedora”\nVERSION_ID=“8”\nPLATFORM_ID=“platform:el8”\nPRETTY_NAME=“CentOS Linux 8”\nANSI_COLOR=“0;31”\nCPE_NAME=“cpe:/o:centos:centos:8”\nHOME_URL=“https://centos.org/”\nBUG_REPORT_URL=“https://bugs.centos.org/”\nCENTOS_MANTISBT_PROJECT=“CentOS-8”\nCENTOS_MANTISBT_PROJECT_VERSION=“8”Please do suggest what can be the issue any one node should have became the primary during space issue on node 3 (Primary) services was down",
"username": "Samrat_Mehta"
},
{
"code": "",
"text": "If that is the configuration then yes Node 1 or 2 should have become primary.Double check the replicaset configuration to be sure it is what you expect it to be. The logs will contain all the information about what occurred during the the period when node 3 and 4 were down and should indicate why a primary was not elected.",
"username": "chris"
},
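A quick way to do that configuration/state check programmatically; a pymongo sketch that prints each member's priority, votes and current state. The host is a placeholder, while the port and replica set name are taken from the logs posted later in the thread.

```python
# Inspect the live replica set configuration and member states.
from pymongo import MongoClient

client = MongoClient("mongodb://his-air-db2:45431/?replicaSet=arcusapp")  # placeholder host

config = client.admin.command("replSetGetConfig")["config"]
status = client.admin.command("replSetGetStatus")

for member in config["members"]:
    print(member["host"], "priority:", member["priority"], "votes:", member["votes"])

for member in status["members"]:
    print(member["name"], member["stateStr"])
```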
{
"code": "",
"text": "{“t”:{“$date”:“2023-10-04T03:57:36.344+05:30”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-4164”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“his-air-db4:45431”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}\n{“t”:{“$date”:“2023-10-04T03:57:36.972+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:true}}}\n{“t”:{“$date”:“2023-10-04T03:57:36.972+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db4:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:36.972+05:30”},“s”:“I”, “c”:“-”, “id”:4333218, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Rescheduling the next replica set monitoring request”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“delayMillis”:0}}\n{“t”:{“$date”:“2023-10-04T03:57:36.972+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“ip-10-20-110-22:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:36.972+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db3:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:36.972+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db2:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:36.973+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:57:36.973+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:false,“outcome”:{“host”:“his-air-db4:45431”,“success”:false,“errorMessage”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}}\n{“t”:{“$date”:“2023-10-04T03:57:37.205+05:30”},“s”:“I”, “c”:“CONNPOOL”, “id”:22576, “ctx”:“ReplNetwork”,“msg”:“Connecting”,“attr”:{“hostAndPort”:“his-air-db4:45431”}}\n{“t”:{“$date”:“2023-10-04T03:57:37.474+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection 
refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:57:37.474+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:true}}}\n{“t”:{“$date”:“2023-10-04T03:57:37.474+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db4:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:37.474+05:30”},“s”:“I”, “c”:“-”, “id”:4333218, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Rescheduling the next replica set monitoring request”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“delayMillis”:499}}\n{“t”:{“$date”:“2023-10-04T03:57:37.474+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“ip-10-20-110-22:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:37.474+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db3:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:37.474+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db2:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:37.974+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:57:37.974+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:false,“outcome”:{“host”:“his-air-db4:45431”,“success”:false,“errorMessage”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}}\n{“t”:{“$date”:“2023-10-04T03:57:38.205+05:30”},“s”:“I”, “c”:“CONNPOOL”, “id”:22576, “ctx”:“ReplNetwork”,“msg”:“Connecting”,“attr”:{“hostAndPort”:“his-air-db4:45431”}}\n{“t”:{“$date”:“2023-10-04T03:57:38.346+05:30”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-4145”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“his-air-db4:45431”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}\n{“t”:{“$date”:“2023-10-04T03:57:38.474+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) 
:: caused by :: Connection refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:57:38.474+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:true}}}\n{“t”:{“$date”:“2023-10-04T03:57:38.475+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db4:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:38.475+05:30”},“s”:“I”, “c”:“-”, “id”:4333218, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Rescheduling the next replica set monitoring request”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“delayMillis”:499}}\n{“t”:{“$date”:“2023-10-04T03:57:38.475+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“ip-10-20-110-22:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:38.475+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db3:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:38.475+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db2:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:38.975+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:57:38.975+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:false,“outcome”:{“host”:“his-air-db4:45431”,“success”:false,“errorMessage”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}}\n{“t”:{“$date”:“2023-10-04T03:57:39.205+05:30”},“s”:“I”, “c”:“CONNPOOL”, “id”:22576, “ctx”:“ReplNetwork”,“msg”:“Connecting”,“attr”:{“hostAndPort”:“his-air-db4:45431”}}\n{“t”:{“$date”:“2023-10-04T03:57:39.475+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:57:39.476+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica 
set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:true}}}\n{“t”:{“$date”:“2023-10-04T03:57:39.476+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db4:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:39.476+05:30”},“s”:“I”, “c”:“-”, “id”:4333218, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Rescheduling the next replica set monitoring request”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“delayMillis”:499}}\n{“t”:{“$date”:“2023-10-04T03:57:39.476+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“ip-10-20-110-22:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:39.476+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db3:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:39.476+05:30”},“s”:“I”, “c”:“-”, “id”:4333227, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM monitoring host in expedited mode until we detect a primary”,“attr”:{“host”:“his-air-db2:45431”,“replicaSet”:“arcusapp”}}\n{“t”:{“$date”:“2023-10-04T03:57:39.975+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:57:39.976+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:false,“outcome”:{“host”:“his-air-db4:45431”,“success”:false,“errorMessage”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}}\n{“t”:{“$date”:“2023-10-04T03:57:40.205+05:30”},“s”:“I”, “c”:“CONNPOOL”, “id”:22576, “ctx”:“ReplNetwork”,“msg”:“Connecting”,“attr”:{“hostAndPort”:“his-air-db4:45431”}}It was keep on looking for primary , but not elected as primary",
"username": "Samrat_Mehta"
},
{
"code": "",
"text": "{“t”:{“$date”:“2023-10-04T03:55:35.484+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:55:35.484+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:false,“outcome”:{“host”:“his-air-db4:45431”,“success”:false,“errorMessage”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}}\n{“t”:{“$date”:“2023-10-04T03:55:35.775+05:30”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-4160”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“his-air-db4:45431”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}\n{“t”:{“$date”:“2023-10-04T03:55:35.898+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4333213, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM Topology Change”,“attr”:{“replicaSet”:“arcusapp”,“newTopologyDescription”:“{ id: \"4406317e-2a95-45db-a0cb-31cb34c560dd\", topologyType: \"ReplicaSetNoPrimary\", servers: { his-air-db2:45431: { address: \"his-air-db2:45431\", topologyVersion: { processId: ObjectId(‘64fca3272b58dab17e865419’), counter: 51 }, roundTripTime: 472, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"his-air-db2:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371935898), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } }, his-air-db3:45431: { address: \"his-air-db3:45431\", topologyVersion: { processId: ObjectId(‘6505d5f2312accdf7dd80771’), counter: 48 }, roundTripTime: 668, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"his-air-db3:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371935292), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } }, his-air-db4:45431: { address: \"his-air-db4:45431\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, ip-10-20-110-22:45431: { address: \"ip-10-20-110-22:45431\", topologyVersion: { processId: ObjectId(‘64e0c5ace2bc1369ed0b76d6’), counter: 24 }, roundTripTime: 106955, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"ip-10-20-110-22:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371932691), logicalSessionTimeoutMinutes: 30, hosts: { 0: 
\"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } } }, logicalSessionTimeoutMinutes: 30, setName: \"arcusapp\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId(‘7fffffff00000000000000b5’), setVersion: 286895 } }”,“previousTopologyDescription”:“{ id: \"d7ad2052-5ba7-4084-9974-78550ebf6e48\", topologyType: \"ReplicaSetNoPrimary\", servers: { his-air-db2:45431: { address: \"his-air-db2:45431\", topologyVersion: { processId: ObjectId(‘64fca3272b58dab17e865419’), counter: 51 }, roundTripTime: 472, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"his-air-db2:45431\", setName: \"arcusapp\", setVersion: 286895, primary: \"his-air-db3:45431\", lastUpdateTime: new Date(1696371925888), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } }, his-air-db3:45431: { address: \"his-air-db3:45431\", topologyVersion: { processId: ObjectId(‘6505d5f2312accdf7dd80771’), counter: 48 }, roundTripTime: 668, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"his-air-db3:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371935292), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } }, his-air-db4:45431: { address: \"his-air-db4:45431\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, ip-10-20-110-22:45431: { address: \"ip-10-20-110-22:45431\", topologyVersion: { processId: ObjectId(‘64e0c5ace2bc1369ed0b76d6’), counter: 24 }, roundTripTime: 106955, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"ip-10-20-110-22:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371932691), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } } }, logicalSessionTimeoutMinutes: 30, setName: \"arcusapp\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId(‘7fffffff00000000000000b5’), setVersion: 286895 } }”}}\n{“t”:{“$date”:“2023-10-04T03:55:35.985+05:30”},“s”:“I”, “c”:“-”, “id”:4333222, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM received error response”,“attr”:{“host”:“his-air-db4:45431”,“error”:“HostUnreachable: Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”,“replicaSet”:“arcusapp”,“response”:“{}”}}\n{“t”:{“$date”:“2023-10-04T03:55:35.985+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4712102, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“Host failed in replica set”,“attr”:{“replicaSet”:“arcusapp”,“host”:“his-air-db4:45431”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“action”:{“dropConnections”:true,“requestImmediateCheck”:true}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.088+05:30”},“s”:“I”, “c”:“ELECTION”, 
“id”:4615652, “ctx”:“ReplCoord-4163”,“msg”:“Starting an election, since we’ve seen no PRIMARY in election timeout period”,“attr”:{“electionTimeoutPeriodMillis”:10000}}\n{“t”:{“$date”:“2023-10-04T03:55:36.088+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:21438, “ctx”:“ReplCoord-4163”,“msg”:“Conducting a dry run election to see if we could be elected”,“attr”:{“currentTerm”:181}}\n{“t”:{“$date”:“2023-10-04T03:55:36.088+05:30”},“s”:“I”, “c”:“REPL”, “id”:21752, “ctx”:“ReplCoord-4163”,“msg”:“Scheduling remote command request”,“attr”:{“context”:“vote request”,“request”:“RemoteCommand 116088808 – target:his-air-db4:45431 db:admin cmd:{ replSetRequestVotes: 1, setName: \"arcusapp\", dryRun: true, term: 181, candidateIndex: 0, configVersion: 286895, configTerm: 181, lastAppliedOpTime: { ts: Timestamp(1696371924, 2), t: 181 } }”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.088+05:30”},“s”:“I”, “c”:“REPL”, “id”:21752, “ctx”:“ReplCoord-4163”,“msg”:“Scheduling remote command request”,“attr”:{“context”:“vote request”,“request”:“RemoteCommand 116088809 – target:ip-10-20-110-22:45431 db:admin cmd:{ replSetRequestVotes: 1, setName: \"arcusapp\", dryRun: true, term: 181, candidateIndex: 0, configVersion: 286895, configTerm: 181, lastAppliedOpTime: { ts: Timestamp(1696371924, 2), t: 181 } }”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.088+05:30”},“s”:“I”, “c”:“REPL”, “id”:21752, “ctx”:“ReplCoord-4163”,“msg”:“Scheduling remote command request”,“attr”:{“context”:“vote request”,“request”:“RemoteCommand 116088810 – target:his-air-db3:45431 db:admin cmd:{ replSetRequestVotes: 1, setName: \"arcusapp\", dryRun: true, term: 181, candidateIndex: 0, configVersion: 286895, configTerm: 181, lastAppliedOpTime: { ts: Timestamp(1696371924, 2), t: 181 } }”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.088+05:30”},“s”:“I”, “c”:“CONNPOOL”, “id”:22576, “ctx”:“ReplNetwork”,“msg”:“Connecting”,“attr”:{“hostAndPort”:“ip-10-20-110-22:45431”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.089+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4159”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:181,“dryRun”:true,“failReason”:“failed to receive response”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“from”:“his-air-db4:45431”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.089+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4162”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:181,“dryRun”:true,“vote”:“yes”,“from”:“his-air-db3:45431”,“message”:{“term”:181,“voteGranted”:true,“reason”:“”,“ok”:1,“$clusterTime”:{“clusterTime”:{“$timestamp”:{“t”:1696371924,“i”:2}},“signature”:{“hash”:{“$binary”:{“base64”:“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”,“subType”:“0”}},“keyId”:0}},“operationTime”:{“$timestamp”:{“t”:1696371924,“i”:2}}}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.135+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4160”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:181,“dryRun”:true,“vote”:“yes”,“from”:“ip-10-20-110-22:45431”,“message”:{“term”:181,“voteGranted”:true,“reason”:“”,“ok”:1,“$clusterTime”:{“clusterTime”:{“$timestamp”:{“t”:1696371924,“i”:2}},“signature”:{“hash”:{“$binary”:{“base64”:“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”,“subType”:“0”}},“keyId”:0}},“operationTime”:{“$timestamp”:{“t”:1696371924,“i”:2}}}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.135+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:21444, “ctx”:“ReplCoord-4160”,“msg”:“Dry election run succeeded, running for 
election”,“attr”:{“newTerm”:182}}\n{“t”:{“$date”:“2023-10-04T03:55:36.135+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:6015300, “ctx”:“ReplCoord-4160”,“msg”:“Storing last vote document in local storage for my election”,“attr”:{“lastVote”:{“term”:182,“candidateIndex”:0}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.139+05:30”},“s”:“I”, “c”:“REPL”, “id”:21752, “ctx”:“ReplCoord-4160”,“msg”:“Scheduling remote command request”,“attr”:{“context”:“vote request”,“request”:“RemoteCommand 116088811 – target:his-air-db4:45431 db:admin cmd:{ replSetRequestVotes: 1, setName: \"arcusapp\", dryRun: false, term: 182, candidateIndex: 0, configVersion: 286895, configTerm: 181, lastAppliedOpTime: { ts: Timestamp(1696371924, 2), t: 181 } }”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.139+05:30”},“s”:“I”, “c”:“REPL”, “id”:21752, “ctx”:“ReplCoord-4160”,“msg”:“Scheduling remote command request”,“attr”:{“context”:“vote request”,“request”:“RemoteCommand 116088812 – target:ip-10-20-110-22:45431 db:admin cmd:{ replSetRequestVotes: 1, setName: \"arcusapp\", dryRun: false, term: 182, candidateIndex: 0, configVersion: 286895, configTerm: 181, lastAppliedOpTime: { ts: Timestamp(1696371924, 2), t: 181 } }”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.139+05:30”},“s”:“I”, “c”:“REPL”, “id”:21752, “ctx”:“ReplCoord-4160”,“msg”:“Scheduling remote command request”,“attr”:{“context”:“vote request”,“request”:“RemoteCommand 116088813 – target:his-air-db3:45431 db:admin cmd:{ replSetRequestVotes: 1, setName: \"arcusapp\", dryRun: false, term: 182, candidateIndex: 0, configVersion: 286895, configTerm: 181, lastAppliedOpTime: { ts: Timestamp(1696371924, 2), t: 181 } }”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.140+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4159”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:182,“dryRun”:false,“failReason”:“failed to receive response”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“from”:“his-air-db4:45431”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.147+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4162”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:182,“dryRun”:false,“vote”:“yes”,“from”:“his-air-db3:45431”,“message”:{“term”:182,“voteGranted”:true,“reason”:“”,“ok”:1,“$clusterTime”:{“clusterTime”:{“$timestamp”:{“t”:1696371924,“i”:2}},“signature”:{“hash”:{“$binary”:{“base64”:“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”,“subType”:“0”}},“keyId”:0}},“operationTime”:{“$timestamp”:{“t”:1696371924,“i”:2}}}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4145”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:182,“dryRun”:false,“vote”:“yes”,“from”:“ip-10-20-110-22:45431”,“message”:{“term”:182,“voteGranted”:true,“reason”:“”,“ok”:1,“$clusterTime”:{“clusterTime”:{“$timestamp”:{“t”:1696371924,“i”:2}},“signature”:{“hash”:{“$binary”:{“base64”:“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”,“subType”:“0”}},“keyId”:0}},“operationTime”:{“$timestamp”:{“t”:1696371924,“i”:2}}}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:21450, “ctx”:“ReplCoord-4145”,“msg”:“Election succeeded, assuming primary role”,“attr”:{“term”:182}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“REPL”, “id”:21358, “ctx”:“ReplCoord-4145”,“msg”:“Replica set state transition”,“attr”:{“newState”:“PRIMARY”,“oldState”:“SECONDARY”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“REPL”, “id”:21106, “ctx”:“ReplCoord-4145”,“msg”:“Resetting sync source 
to empty”,“attr”:{“previousSyncSource”:“:27017”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“REPL”, “id”:21359, “ctx”:“ReplCoord-4145”,“msg”:“Entering primary catch-up mode”}\n{“t”:{“$date”:“2023-10-04T03:55:36.179+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4333213, “ctx”:“ReplicaSetMonitor-TaskExecutor”,“msg”:“RSM Topology Change”,“attr”:{“replicaSet”:“arcusapp”,“newTopologyDescription”:“{ id: \"107740bc-dad1-4b26-84f8-484f9533e930\", topologyType: \"ReplicaSetNoPrimary\", servers: { his-air-db2:45431: { address: \"his-air-db2:45431\", topologyVersion: { processId: ObjectId(‘64fca3272b58dab17e865419’), counter: 52 }, roundTripTime: 472, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"his-air-db2:45431\", setName: \"arcusapp\", setVersion: 286895, electionId: ObjectId(‘7fffffff00000000000000b6’), primary: \"his-air-db2:45431\", lastUpdateTime: new Date(1696371936179), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } }, his-air-db3:45431: { address: \"his-air-db3:45431\", topologyVersion: { processId: ObjectId(‘6505d5f2312accdf7dd80771’), counter: 48 }, roundTripTime: 668, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"his-air-db3:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371935292), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } }, his-air-db4:45431: { address: \"his-air-db4:45431\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, ip-10-20-110-22:45431: { address: \"ip-10-20-110-22:45431\", topologyVersion: { processId: ObjectId(‘64e0c5ace2bc1369ed0b76d6’), counter: 24 }, roundTripTime: 106955, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"ip-10-20-110-22:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371932691), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } } }, logicalSessionTimeoutMinutes: 30, setName: \"arcusapp\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId(‘7fffffff00000000000000b5’), setVersion: 286895 } }”,“previousTopologyDescription”:“{ id: \"4406317e-2a95-45db-a0cb-31cb34c560dd\", topologyType: \"ReplicaSetNoPrimary\", servers: { his-air-db2:45431: { address: \"his-air-db2:45431\", topologyVersion: { processId: ObjectId(‘64fca3272b58dab17e865419’), counter: 51 }, roundTripTime: 472, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"his-air-db2:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371935898), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } }, his-air-db3:45431: { address: 
\"his-air-db3:45431\", topologyVersion: { processId: ObjectId(‘6505d5f2312accdf7dd80771’), counter: 48 }, roundTripTime: 668, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"his-air-db3:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371935292), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } }, his-air-db4:45431: { address: \"his-air-db4:45431\", type: \"Unknown\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, ip-10-20-110-22:45431: { address: \"ip-10-20-110-22:45431\", topologyVersion: { processId: ObjectId(‘64e0c5ace2bc1369ed0b76d6’), counter: 24 }, roundTripTime: 106955, lastWriteDate: new Date(1696371924000), opTime: { ts: Timestamp(1696371924, 2), t: 181 }, type: \"RSSecondary\", minWireVersion: 9, maxWireVersion: 13, me: \"ip-10-20-110-22:45431\", setName: \"arcusapp\", setVersion: 286895, lastUpdateTime: new Date(1696371932691), logicalSessionTimeoutMinutes: 30, hosts: { 0: \"his-air-db2:45431\", 1: \"his-air-db3:45431\", 2: \"his-air-db4:45431\" }, arbiters: {}, passives: { 0: \"ip-10-20-110-22:45431\" } } }, logicalSessionTimeoutMinutes: 30, setName: \"arcusapp\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId(‘7fffffff00000000000000b5’), setVersion: 286895 } }”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.181+05:30”},“s”:“I”, “c”:“REPL_HB”, “id”:23974, “ctx”:“ReplCoord-4159”,“msg”:“Heartbeat failed after max retries”,“attr”:{“target”:“his-air-db4:45431”,“maxHeartbeatRetries”:2,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.205+05:30”},“s”:“I”, “c”:“CONNPOOL”, “id”:22576, “ctx”:“ReplNetwork”,“msg”:“Connecting”,“attr”:{“hostAndPort”:“his-air-db4:45431”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.214+05:30”},“s”:“I”, “c”:“REPL”, “id”:21364, “ctx”:“ReplCoord-4163”,“msg”:“Caught up to the latest optime known via heartbeats after becoming primary”,“attr”:{“targetOpTime”:{“ts”:{“$timestamp”:{“t”:1696371924,“i”:2}},“t”:181},“myLastApplied”:{“ts”:{“$timestamp”:{“t”:1696371924,“i”:2}},“t”:181}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.214+05:30”},“s”:“I”, “c”:“REPL”, “id”:21363, “ctx”:“ReplCoord-4163”,“msg”:“Exited primary catch-up mode”}\n{“t”:{“$date”:“2023-10-04T03:55:36.214+05:30”},“s”:“I”, “c”:“REPL”, “id”:21107, “ctx”:“ReplCoord-4163”,“msg”:“Stopping replication producer”}\n{“t”:{“$date”:“2023-10-04T03:55:36.214+05:30”},“s”:“I”, “c”:“REPL”, “id”:21239, “ctx”:“ReplBatcher”,“msg”:“Oplog buffer has been drained”,“attr”:{“term”:182}}\n{“t”:{“$date”:“2023-10-04T03:55:36.217+05:30”},“s”:“I”, “c”:“REPL”, “id”:21343, “ctx”:“RstlKillOpThread”,“msg”:“Starting to kill user operations”}\n{“t”:{“$date”:“2023-10-04T03:55:36.217+05:30”},“s”:“I”, “c”:“REPL”, “id”:21344, “ctx”:“RstlKillOpThread”,“msg”:“Stopped killing user operations”}\n{“t”:{“$date”:“2023-10-04T03:55:36.217+05:30”},“s”:“I”, “c”:“REPL”, “id”:21340, “ctx”:“RstlKillOpThread”,“msg”:“State transition ops metrics”,“attr”:{“metrics”:{“lastStateTransition”:“stepUp”,“userOpsKilled”:0,“userOpsRunning”:30}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.217+05:30”},“s”:“I”, “c”:“REPL”, “id”:4508103, “ctx”:“OplogApplier-0”,“msg”:“Increment the config term via 
reconfig”}The logs show that no node was elected as primary; the set kept searching for a node to become primary, but it never succeeded",
"username": "Samrat_Mehta"
},
{
"code": "",
"text": "@chris Any update on this from the log it was trying to find but primary was not elected.",
"username": "Samrat_Mehta"
},
{
"code": "",
"text": "@chris from the logs also it was not stated that one of the secondary became the primary, There could be another reason?",
"username": "Samrat_Mehta"
},
{
"code": "",
"text": "Node 1 - Priority 1- Secondary\nNode 2- Priority 1- Secondary\nNode 3- Priority 1 -Primary\nNode 4- Priority 0 - Secondary( used for Reading Data)if all 4 nodes have 1 as vote number, then no primary can be elected after two nodes are down.",
"username": "Kobe_W"
},
{
"code": "{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:21450, “ctx”:“ReplCoord-4145”,“msg”:“Election succeeded, assuming primary role”,“attr”:{“term”:182}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“REPL”, “id”:21358, “ctx”:“ReplCoord-4145”,“msg”:“Replica set state transition”,“attr”:{“newState”:“PRIMARY”,“oldState”:“SECONDARY”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“REPL”, “id”:21106, “ctx”:“ReplCoord-4145”,“msg”:“Resetting sync source to empty”,“attr”:{“previousSyncSource”:“:27017”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“REPL”, “id”:21359, “ctx”:“ReplCoord-4145”,“msg”:“Entering primary catch-up mode”}\nhis-air-db3:45431ip-10-20-110-22:45431{“t”:{“$date”:“2023-10-04T03:55:36.140+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4159”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:182,“dryRun”:false,“failReason”:“failed to receive response”,“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Error connecting to his-air-db4:45431 (172.20.253.208:45431) :: caused by :: Connection refused”},“from”:“his-air-db4:45431”}}\n{“t”:{“$date”:“2023-10-04T03:55:36.147+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4162”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:182,“dryRun”:false,“vote”:“yes”,“from”:“his-air-db3:45431”,“message”:{“term”:182,“voteGranted”:true,“reason”:“”,“ok”:1,“$clusterTime”:{“clusterTime”:{“$timestamp”:{“t”:1696371924,“i”:2}},“signature”:{“hash”:{“$binary”:{“base64”:“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”,“subType”:“0”}},“keyId”:0}},“operationTime”:{“$timestamp”:{“t”:1696371924,“i”:2}}}}}\n{“t”:{“$date”:“2023-10-04T03:55:36.177+05:30”},“s”:“I”, “c”:“ELECTION”, “id”:51799, “ctx”:“ReplCoord-4145”,“msg”:“VoteRequester processResponse”,“attr”:{“term”:182,“dryRun”:false,“vote”:“yes”,“from”:“ip-10-20-110-22:45431”,“message”:{“term”:182,“voteGranted”:true,“reason”:“”,“ok”:1,“$clusterTime”:{“clusterTime”:{“$timestamp”:{“t”:1696371924,“i”:2}},“signature”:{“hash”:{“$binary”:{“base64”:“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”,“subType”:“0”}},“keyId”:0}},“operationTime”:{“$timestamp”:{“t”:1696371924,“i”:2}}}}}\n",
"text": "We see that a primary was elected.And yes, as @Kobe_W is saying, it looks like all members have 1 vote as this host is soliciting votes from 3 members, with his-air-db3:45431 and ip-10-20-110-22:45431 voting yes.If you lost another node then the replicaset would be without quorum and would be degraded to secondaries only.Updating your replicaset so the p:0 member also had votes:0 would have resulted in what you were expecting.Use an Odd number of voting members:",
"username": "chris"
},
{
"code": "",
"text": "Thanks Chis … after that it elecected as primaryIf there is even number of nodes do we need to set prority like 1, 0.5 and 0.5 ,0 to avoid not making primary to secondary when primary db goes down",
"username": "Samrat_Mehta"
},
{
"code": "",
"text": "Hi @Samrat_MehtaIts the votes you need to be concerned with, though a votes:0 member also has to be priority:0.",
"username": "chris"
},
{
"code": "",
"text": "Thanks Chris, But my question was if the RS is Even do we need to set the priority on that like nodes to make the nodes from secondary to primary when proriity of one goes down",
"username": "Samrat_Mehta"
},
{
"code": "",
"text": "I’m not understanding your question.",
"username": "chris"
}
] | Mongodb with 4 nodes -2 nodes were down and no node became primary | 2023-10-09T16:42:58.521Z | Mongodb with 4 nodes -2 nodes were down and no node became primary | 414 |
null | [] | [
{
"code": "",
"text": "Enterprise version 4.4.19 and above, setting fork=true cannot start normally. I tested that fork=true in version 4.4.14 can start normally",
"username": "1697383273"
},
{
"code": "",
"text": "Are there any solutions? I have found many solutions online, but none of them can work",
"username": "1697383273"
},
{
"code": "",
"text": "What would be very helpful in order to help you is to have the error message you get.The exact command line you are using and the configuration file would also be beneficial.The words it does not work is far from enough to make a diagnostic.",
"username": "steevej"
},
{
"code": "vim /etc/yum.repos.d/mongodb-enterprise-4.4.repo\n\n\n[mongodb-enterprise-4.4]\nname=MongoDB Enterprise Repository\nbaseurl=https://repo.mongodb.com/yum/redhat/$releasever/mongodb-enterprise/4.4/$basearch/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc\n\nsudo yum install -y mongodb-enterprise\n \n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n logRotate: reopen\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n directoryPerDB: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n \n\n\n\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.257+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"main\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.261+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.262+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.262+08:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":5945604, \"ctx\":\"main\",\"msg\":\"LDAP startup complete\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.357+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":23672,\"port\":27017,\"dbPath\":\"/var/lib/mongo\",\"architecture\":\"64-bit\",\"host\":\"rocketmq-nameserver2\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.357+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.19\",\"gitVersion\":\"9a996e0ad993148b9650dc402e6d3b1804ad3b8a\",\"openSSLVersion\":\"OpenSSL 1.0.1e-fips 11 Feb 2013\",\"modules\":[\"enterprise\"],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel70\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.357+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"CentOS Linux release 7.9.2009 (Core)\",\"version\":\"Kernel 3.10.0-1160.31.1.el7.x86_64\"}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.357+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command 
line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"0.0.0.0\",\"port\":27017},\"processManagement\":{\"fork\":true,\"pidFilePath\":\"/var/run/mongodb/mongod.pid\",\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"storage\":{\"dbPath\":\"/var/lib/mongo\",\"directoryPerDB\":true,\"journal\":{\"enabled\":true}},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"logRotate\":\"reopen\",\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.358+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=1373M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.481+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1678239420:481205][23672:0x7fb2bc595bc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.481+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1678239420:481259][23672:0x7fb2bc595bc0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.486+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":128}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.486+08:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.493+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.495+08:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.495+08:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22178, \"ctx\":\"initandlisten\",\"msg\":\"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.495+08:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22181, \"ctx\":\"initandlisten\",\"msg\":\"/sys/kernel/mm/transparent_hugepage/defrag is 'always'. 
We suggest setting it to 'never'\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.496+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"admin.system.version\",\"uuidDisposition\":\"provided\",\"uuid\":{\"uuid\":{\"$uuid\":\"31344e27-b717-4e94-ab3b-e8a68ceed2de\"}},\"options\":{\"uuid\":{\"$uuid\":\"31344e27-b717-4e94-ab3b-e8a68ceed2de\"}}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.503+08:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"admin.system.version\",\"index\":\"_id_\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.503+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":20459, \"ctx\":\"initandlisten\",\"msg\":\"Setting featureCompatibilityVersion\",\"attr\":{\"newVersion\":\"4.4\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.503+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.504+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.startup_log\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"e594a6a3-3cad-4f75-b665-f8e1a7774b24\"}},\"options\":{\"capped\":true,\"size\":10485760}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.513+08:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"local.startup_log\",\"index\":\"_id_\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.513+08:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/var/lib/mongo/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.513+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.514+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.514+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"0.0.0.0\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.514+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.514+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"config.system.sessions\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"0e7d4626-cbf1-43ba-8fc4-b8db3e2709b8\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.514+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20712, \"ctx\":\"LogicalSessionCacheReap\",\"msg\":\"Sessions collection is not set up; waiting until next sessions reap interval\",\"attr\":{\"error\":\"NamespaceNotFound: config.system.sessions does not 
exist\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.517+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23377, \"ctx\":\"SignalHandler\",\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.517+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23378, \"ctx\":\"SignalHandler\",\"msg\":\"Signal was sent by kill(2)\",\"attr\":{\"pid\":1,\"uid\":0}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.517+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23381, \"ctx\":\"SignalHandler\",\"msg\":\"will terminate after current cmd ends\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.517+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"SignalHandler\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.517+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.517+08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.517+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784903, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the LogicalSessionCache\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"config.system.sessions\",\"index\":\"_id_\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"namespace\":\"config.system.sessions\",\"index\":\"lsidTTLIndex\",\"commitTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"SignalHandler\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23017, \"ctx\":\"listener\",\"msg\":\"removing socket file\",\"attr\":{\"path\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784906, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"SignalHandler\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784908, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToAbortExpiredTransactions\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.527+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784934, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784909, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the 
ReplicationCoordinator\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784910, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ShardingInitializationMongoD\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784911, \"ctx\":\"SignalHandler\",\"msg\":\"Enqueuing the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784912, \"ctx\":\"SignalHandler\",\"msg\":\"Killing all operations for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"SignalHandler\",\"msg\":\"Interrupted all currently running operations\",\"attr\":{\"opsKilled\":3}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784913, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down all open transactions\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784914, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":4784915, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the IndexBuildsCoordinator\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784916, \"ctx\":\"SignalHandler\",\"msg\":\"Reacquiring the ReplicationStateTransitionLock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784917, \"ctx\":\"SignalHandler\",\"msg\":\"Attempting to mark clean shutdown\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"SignalHandler\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784930, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down the storage engine\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22320, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22321, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down journal flusher thread\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20282, \"ctx\":\"SignalHandler\",\"msg\":\"Deregistering all the collections\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22261, \"ctx\":\"SignalHandler\",\"msg\":\"Timestamp monitor shutting down\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", 
\"c\":\"STORAGE\", \"id\":22317, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTigerKVEngine shutting down\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22318, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22319, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down session sweeper thread\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22322, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22323, \"ctx\":\"SignalHandler\",\"msg\":\"Finished shutting down checkpoint thread\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.528+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795902, \"ctx\":\"SignalHandler\",\"msg\":\"Closing WiredTiger\",\"attr\":{\"closeConfig\":\"leak_memory=true,\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.529+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1678239420:529872][23672:0x7fb2b389e700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 35, snapshot max: 35 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1\"}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.537+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":8}}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.537+08:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.537+08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.537+08:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.537+08:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.537+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-03-08T09:37:00.537+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\n\n\n",
"text": "The installation steps are as followsconfiguration filesudo systemctl start mongod------ start log ------",
"username": "1697383273"
},
{
"code": "",
"text": "I sent you the steps. Please help me to have a look",
"username": "1697383273"
},
{
"code": "\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}",
"text": "For some reason, someone or some process is sending SIGTERM:\"msg\":\"Received signal\",\"attr\":{\"signal\":15,\"error\":\"Terminated\"}}Then mongod terminates as it should when receiving SIGTERM.What does systemctl status mongod give you?Did you had another version running before?",
"username": "steevej"
},
{
"code": "",
"text": "Hi @1697383273I would also mention that using the Enterprise Edition involves an Enterprise Advanced subscription, which gives you access to the Support Portal and the ability to open a support ticket.If you have an Enterprise Advanced subscription, please feel free to open a support ticket. If you’re in the process of evaluating MongoDB Enterprise and this issue is blocking you, please send me a DM and I’ll be able to connect you to the relevant people.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This is a new server with no status, but I have used version 4.4.14 without this problem",
"username": "1697383273"
},
{
"code": "",
"text": "Hello, I haven’t Advanced subscription",
"username": "1697383273"
},
{
"code": "",
"text": "I still have no clue.Please share the mongod service file.Are you running with SELinux enabled? If it is the case, try to disable it temporarily to see if it makes a difference. There is almost no time delay between the start and the signal. So either systemd itself or another automated process is sending the signal.",
"username": "steevej"
},
{
"code": "",
"text": "I have not configured SELinux. These are the default configurations\nIsn’t mongod service file binary? How do I upload it to you",
"username": "1697383273"
},
{
"code": "",
"text": "mongod.service must be a text file.\nDo a cat on the file and paste the contents here",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Service files are usually located in /lib/systemd/system.You may also see the content with systemctl cat mongod.What does systemctl status mongod give you?",
"username": "steevej"
  }
] | Enterprise version 4.4.19 and above, setting fork=true cannot start normally | 2023-03-07T15:30:53.203Z | Enterprise version 4.4.19 and above, setting fork=true cannot start normally | 1,360 |
null | [] | [
{
"code": "",
"text": "Hi,I am evaluating MongoDB Atlas for vector embedding use case for Gen AI application. I am using Langchain framework for this implementation. My problem statement is I have a PDF document that gets split into smaller chunks (using Langchain doc loader and splitter) to create smaller set of sub-documents and trying to store in Mongo. I cant store one big PDF document vector embeddings in one document in Mongo as that wont be efficient for language models.\nMy requirement is to use Mongo database trigger to call AWS EventBridge. Since the number of chunks (documents) for one big document are more and are being stored as separate documents in Mongo, attaching trigger will trigger it for each document though each document (chunk). Whats the best way organize the documents in smaller chunks (documents) in Mongo in order to use the Trigger efficiently?",
"username": "Vivek_Daramwal"
},
{
"code": "",
"text": "Hi @Vivek_Daramwal and welcome to MongoDB community forums!!My problem statement is I have a PDF document that gets split into smaller chunks (using Langchain doc loader and splitter) to create smaller set of sub-documents and trying to store in Mongo.Using multiple vector embedding in MongoDB is certainly not an issue unless the following parameters are considered:Whats the best way organize the documents in smaller chunks (documents) in Mongo in order to use the Trigger efficiently?The schema design in MongoDB plays an important role in performance of the query and hence the recommendation is to use modelling technique according to your use case.\nOne possible way would be to use the extended reference pattern to store the chunks in the collection.\nYou can also make use of Model Tree structures to store the chunks as well.Finally, could you confirm if you using the PDF load/split from LangChain PDF | 🦜️🔗 Langchain ?If you have further questions, please feel free to share additional information of your use case so that others can help you betterWarm regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Yes I am using PDF load/split from LangChain to generate smaller set of documents from out of a big PDF document and storing it in the MongoDB collection. Because there can be multiple such documents chunks (say 20), they gets stored as separate documents in Mongo. If I have to use Database triggers for insert, it will trigger 20 times for the same document. I want the trigger to happen just once. How can I make it possible?And I cant use multi-field vector in the document because I dont know how many document chunks it can create and it will also create problem for querying on all the documents.",
"username": "Vivek_Daramwal"
},
{
"code": "",
"text": "Hi Vivek,Thank you for the question! Ideally one PDF document would yield one trigger, which executes a function that splits the PDF into an arbitrary number of chunks that then get added individually to your collection.You can do this by setting up an Atlas Trigger for when a PDF is added which then calls AWS eventbridge, as you’ve described. This would then schedule an AWS Lambda function which can leverage Langchain (in either Javascript or Python) in the way you’ve used it to parse the document into subdocuments which can be added back to MongoDB. The reason we suggest using Lambda instead of triggers for calling Langchain is that there is greater support for javascript dependencies in Lambda at the moment.I found a few resources detailing how to do this, but this is definitely something that has come up before so we may put out our own dedicated tutorial on this.I hope this helps answer your question, but do reach out if you have any additional ones.Cheers,\nHenry",
"username": "Henry_Weller"
},
{
"code": "",
"text": "Hi Henry,We dont want to push PDF documents to MongoDB as a whole. We already have a repository of documents and want to build a workflow where we pick the PDF document from our store and split it into chunks via LangChain and push the smaller documents (generated via chunk split) to MongoDB. Now since multiple such documents are stored in Mongo collection, it wont be possible to trigger just once for all. Is there a better way to implement this?",
"username": "Vivek_Daramwal"
},
{
"code": "",
"text": "Hi Vivek,Apologies for using vague language, when I said “a PDF is added” I meant a PDF is updated in the source system where it lives, the metadata of which could be tracked in MongoDB (this is a common pattern).In this way you can watch for the “last_updated_time” to change, and trigger the workflow I mentioned to split a single PDF document into many MongoDB documents containing chunks and embeddings, along with any additional relevant metadata.",
"username": "Henry_Weller"
}
] | Whats the best way to store vector embeddings in chunks for one document in Mongo to use Triggers efficiently | 2023-10-15T13:22:03.917Z | Whats the best way to store vector embeddings in chunks for one document in Mongo to use Triggers efficiently | 368 |
null | [
"aggregation",
"python"
] | [
{
"code": "",
"text": "I’m just starting to learn Mongodb in conjunction with Python.\nI need to select documents for two dates 2023-10-20 and 2023-10-22 only. Tell me how to do this or where I can read about it\nTHX All!!!",
"username": "Raj_Polinovsky"
},
{
"code": "",
"text": "You may start with mongodb find in python - Google Search.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you \nI did just that, but for two weeks I haven’t found a solution, that’s why I contacted the community",
"username": "Raj_Polinovsky"
},
{
"code": "",
"text": "What have you tried?",
"username": "steevej"
}
] | Find a record by two dates | 2023-10-20T08:54:45.416Z | Find a record by two dates | 169 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hi,I am currently using Realm as my database for mobile app development with .net MAUI and have noted the encryption key property with this. I am wondering if there are any examples on how to implement this and what the best way to generate a key per device to make this as secure as possible?Many thanksChris",
"username": "Chris_Boot1"
},
{
"code": "",
"text": "It’s probably best to use Secure storage to store the encryption key for the file. As for how to generate it, you could use RandomNumberGenerator.GetBytes.",
"username": "nirinchev"
}
] | Example requested for utilising an encryption key with RealmDb | 2023-10-19T18:30:07.610Z | Example requested for utilising an encryption key with RealmDb | 205 |
null | [
"aggregation",
"indexes"
] | [
{
"code": "",
"text": "We are trying to fetch data out of 100 million records. The filter attributes are in different arrays. We created separate indexes on both arrays. We used multiple stages to filter the records. But when wee checking the execution plan, both match stages are combined and the index is chosen. Its taking more than 20 seconds to get the final result set(only 23 records). Below the execution plan\n‘’‘{‘queryPlanner’: {‘indexFilterSet’: False,\n‘namespace’: ‘dap-ods.bookings’,\n‘optimizedPipeline’: True,\n‘parsedQuery’: {’$and’: [{‘contacts.value’: {‘$eq’: ‘11133 3336-8878’}},\n{‘segments.departureAirportCode’: {‘$eq’: ‘SEA’}},\n{‘segments.departureDateTimeStnLocal’: {‘$lte’: datetime.datetime(2023, 10, 14, 0, 0)}},\n{‘segments.departureDateTimeStnLocal’: {‘$gte’: datetime.datetime(2023, 10, 13, 0, 0)}}]},\n‘planCacheKey’: ‘6820AF83’,\n‘plannerVersion’: 1,\n‘queryHash’: ‘D0CBB6D1’,\n‘rejectedPlans’: [{‘inputStage’: {‘filter’: {‘$and’: [{‘contacts.value’: {‘$eq’: ‘11133 3336-8878’}},\n{‘segments.departureAirportCode’: {‘$eq’: ‘SEA’}},\n{‘segments.departureDateTimeStnLocal’: {‘$gte’: datetime.datetime(2023, 10, 13, 0, 0)}}]},\n‘inputStage’: {‘direction’: ‘forward’,\n‘indexBounds’: {‘segments.departureDateTimeStnLocal’: [‘[new ’\n‘Date(-9223372036854775808), ’\n‘new ’\n‘Date(1697241600000)]’]},\n‘indexName’: ‘segments.departureDateTimeStnLocal_1’,\n‘indexVersion’: 2,\n‘isMultiKey’: True,\n‘isPartial’: False,\n‘isSparse’: False,\n‘isUnique’: False,\n‘keyPattern’: {‘segments.departureDateTimeStnLocal’: 1},\n‘multiKeyPaths’: {‘segments.departureDateTimeStnLocal’: [‘segments’]},\n‘stage’: ‘IXSCAN’},\n‘stage’: ‘FETCH’},\n‘stage’: ‘PROJECTION_SIMPLE’,\n‘transformBy’: {’_id’: True,\n‘businessKey’: True,\n‘passengersInfo’: True}},\n{‘inputStage’: {‘filter’: {’$and’: [{‘contacts.value’: {‘$eq’: ‘1111333336 8878’}},\n{‘segments.departureAirportCode’: {‘$eq’: ‘SEA’}},\n{‘segments.departureDateTimeStnLocal’: {‘$lte’: datetime.datetime(2023, 10, 14, 0, 0)}}]},\n‘inputStage’: {‘direction’: ‘forward’,\n‘indexBounds’: {‘segments.departureDateTimeStnLocal’: [‘[new ’\n‘Date(1697155200000), ’\n‘new ’\n‘Date(9223372036854775807)]’]},\n‘indexName’: ‘segments.departureDateTimeStnLocal_1’,\n‘indexVersion’: 2,\n‘isMultiKey’: True,\n‘isPartial’: False,\n‘isSparse’: False,\n‘isUnique’: False,\n‘keyPattern’: {‘segments.departureDateTimeStnLocal’: 1},\n‘multiKeyPaths’: {‘segments.departureDateTimeStnLocal’: [‘segments’]},\n‘stage’: ‘IXSCAN’},\n‘stage’: ‘FETCH’},\n‘stage’: ‘PROJECTION_SIMPLE’,\n‘transformBy’: {’_id’: True,\n‘businessKey’: True,\n‘passengersInfo’: True}},\n{‘inputStage’: {‘filter’: {’$and’: [{‘contacts.value’: {‘$eq’: ‘11133 3336-8878’}},\n{‘segments.departureAirportCode’: {‘$eq’: ‘SEA’}},\n{‘segments.departureDateTimeStnLocal’: {‘$gte’: datetime.datetime(2023, 10, 13, 0, 0)}}]},\n‘inputStage’: {‘direction’: ‘forward’,\n‘indexBounds’: {‘segments.departureAirportCode’: [‘[MinKey, ’\n‘MaxKey]’],\n‘segments.departureDateTimeStnLocal’: [’[new ’\n‘Date(-9223372036854775808), ’\n‘new ’\n‘Date(1697241600000)]’]},\n‘indexName’: ‘seg_departureDate_depAirport_1’,\n‘indexVersion’: 2,\n‘isMultiKey’: True,\n‘isPartial’: False,\n‘isSparse’: False,\n‘isUnique’: False,\n‘keyPattern’: {‘segments.departureAirportCode’: 1,\n‘segments.departureDateTimeStnLocal’: 1},\n‘multiKeyPaths’: {‘segments.departureAirportCode’: [‘segments’],\n‘segments.departureDateTimeStnLocal’: [‘segments’]},\n‘stage’: ‘IXSCAN’},\n‘stage’: ‘FETCH’},\n‘stage’: ‘PROJECTION_SIMPLE’,\n‘transformBy’: {’_id’: True,\n‘businessKey’: 
True,\n‘passengersInfo’: True}},\n{‘inputStage’: {‘filter’: {‘$and’: [{‘contacts.value’: {‘$eq’: ‘11133 3336-8878’}},\n{‘segments.departureAirportCode’: {‘$eq’: ‘SEA’}},\n{‘segments.departureDateTimeStnLocal’: {‘$lte’: datetime.datetime(2023, 10, 14, 0, 0)}}]},\n‘inputStage’: {‘direction’: ‘forward’,\n‘indexBounds’: {‘segments.departureAirportCode’: [‘[MinKey, ’\n‘MaxKey]’],\n‘segments.departureDateTimeStnLocal’: [’[new ’\n‘Date(1697155200000), ’\n‘new ’\n‘Date(9223372036854775807)]’]},\n‘indexName’: ‘seg_departureDate_depAirport_1’,\n‘indexVersion’: 2,\n‘isMultiKey’: True,\n‘isPartial’: False,\n‘isSparse’: False,\n‘isUnique’: False,\n‘keyPattern’: {‘segments.departureAirportCode’: 1,\n‘segments.departureDateTimeStnLocal’: 1},\n‘multiKeyPaths’: {‘segments.departureAirportCode’: [‘segments’],\n‘segments.departureDateTimeStnLocal’: [‘segments’]},\n‘stage’: ‘IXSCAN’},\n‘stage’: ‘FETCH’},\n‘stage’: ‘PROJECTION_SIMPLE’,\n‘transformBy’: {’_id’: True,\n‘businessKey’: True,\n‘passengersInfo’: True}}],\n‘winningPlan’: {‘inputStage’: {‘filter’: {‘$and’: [{‘segments.departureAirportCode’: {‘$eq’: ‘SEA’}},\n{‘segments.departureDateTimeStnLocal’: {‘$lte’: datetime.datetime(2023, 10, 14, 0, 0)}},\n{‘segments.departureDateTimeStnLocal’: {‘$gte’: datetime.datetime(2023, 10, 13, 0, 0)}}]},\n‘inputStage’: {‘direction’: ‘forward’,\n‘indexBounds’: {‘contacts.value’: [‘[“11133 3336-8878”, “11133 3336-8878”]’]},\n‘indexName’: ‘contacts_1’,\n‘indexVersion’: 2,\n‘isMultiKey’: True,\n‘isPartial’: False,\n‘isSparse’: False,\n‘isUnique’: False,\n‘keyPattern’: {‘contacts.value’: 1},\n‘multiKeyPaths’: {‘contacts.value’: [‘contacts’]},\n‘stage’: ‘IXSCAN’},\n‘stage’: ‘FETCH’},\n‘stage’: ‘PROJECTION_SIMPLE’,\n‘transformBy’: {‘_id’: True,\n‘businessKey’: True,\n‘passengersInfo’: True}}},‘’’",
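One thing worth checking, shown here as a hedged pymongo sketch (collection, database and field names are taken from the question, the connection details are assumptions): with multikey indexes, predicates on two fields of the same array only get compounded into tight index bounds when they are wrapped in $elemMatch, so splitting them across plain match conditions can leave the planner with only the contacts index, as in the winning plan above.

```python
from datetime import datetime
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["dap-ods"]  # connection is a placeholder

pipeline = [
    {"$match": {
        "contacts.value": "11133 3336-8878",
        # $elemMatch ties both predicates to the same segments element, which lets
        # the seg_departureDate_depAirport_1 compound multikey index use tight bounds.
        "segments": {"$elemMatch": {
            "departureAirportCode": "SEA",
            "departureDateTimeStnLocal": {
                "$gte": datetime(2023, 10, 13),
                "$lt": datetime(2023, 10, 14),
            },
        }},
    }},
    {"$project": {"businessKey": 1, "passengersInfo": 1}},
]
print(list(db["bookings"].aggregate(pipeline)))
```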
"username": "Habeeb_Raja"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and reformat accordingly. We cannot really read your explain plan.It would be nice to see the original query.",
"username": "steevej"
}
] | Array index - slow performance | 2023-10-20T06:29:22.614Z | Array index - slow performance | 208 |
null | [] | [
{
"code": "user_idcreatedAt",
"text": "We have a MongoDB collection with a size of 200 GB, and we need to delete records based on a specific condition using a query. The query we’re using looks like this:“$and”: [\n{\n“user_id”: {\n“$oid”: “64e87bfa2c7fb2a959e1ad4f”\n}\n},\n{\n“$or”: [\n{\n“createdAt”: {\n“$lt”: {\n“$date”: “2023-10-10T11:05:12.447Z”\n}\n}\n},\n{\n“createdAt”: {\n“$eq”: null\n}\n},\n{\n“createdAt”: {\n“$exists”: false\n}\n}\n]\n}\n]\nWe’ve also created a compound index on user_id and createdAt to improve the query’s performance. However, the deletion process is taking a significant amount of time, approximately 3 minutes to delete 50,000 records.Is there a more efficient way to delete these records, or are there any optimizations we can make to speed up the deletion process? We’d like to reduce the time it takes to delete records from our collection.Thanks in advance for any advice or suggestions!",
"username": "Akanksha_gogar"
},
{
"code": "",
"text": "approximately 3 minutes to delete 50,000 recordsThe fact that these are bad or good numbers depends on a lot of things.What is the mongod configuration? RAM, disks, …What are other indexes?What else do you have running on this system?Please share the explain plan.One way to reduce the load one the server but that could increase the running time is to do 3 deleteMany, one with createdAt:null, one with createdAt:$exist:false and one with the createAt:$lt. An $exist:false might involve a collection scan, which is probably very slow on a 200GB collection. Could you update your model to make sure createAt always exists at least with a null.If this is a non-frequent use-case, it might be better to reduce the CPU spike rather than making the use-case faster, aka throttling.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Remove Query taking around 3 mins to delete 50000 records | 2023-10-18T12:45:03.101Z | Remove Query taking around 3 mins to delete 50000 records | 143 |
null | [
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": "\tconst updatedDinner = await Dinner.findOneAndUpdate(\n\t\t\t{ 'cups.plates': { $elemMatch: { id: plateId } } },\n\t\t\t{\n\t\t\t\t$set: {\n\t\t\t\t\tupdatedBy: 'me',\n\t\t\t\t\t'cups.$[].plates.$[plate].status': 'COMPLETE',\n\t\t\t\t\t'cups.$[].plates.$[plate].outputs':\n\t\t\t\t\t\tfinishedOutputs.map(formatOutput),\n\t\t\t\t},\n\t\t\t},\n\t\t\t{\n\t\t\t\tarrayFilters: [{ 'plate.id': plateId }],\n\t\t\t\tnew: true,\n\t\t\t}\n\t\t).exec();\nconst dinnerSchema = new mongoose.Schema(\n\t{\n\t\tid: {\n\t\t\ttype: String,\n\t\t\tdefault: uuidV4,\n\t\t\tindex: true,\n\t\t},\n\t\tupdatedBy: String,\n\t\tcups: [\n\t\t\t{\n\t\t\t\tplates: [\n\t\t\t\t\t{\n\t\t\t\t\t\tid: {\n\t\t\t\t\t\t\ttype: Number,\n\t\t\t\t\t\t\tindex: true,\n\t\t\t\t\t\t},\n\t\t\t\t\t\tstatus: String,\n\t\t\t\t\t\toutputs: [],\n\t\t\t\t\t},\n\t\t\t\t],\n\t\t\t},\n\t\t],\n\t},\n\n\t{\n\t\ttimestamps: true,\n\t}\n);\n",
"text": "Hi there!I have an application which needs to do updates to some nested arrays. Following the mongoDB documentation, I have the following query:This works MOST of the time, but every tenth or so time I don’t see the changes reflected in the database. The logs all look normal as if the update has gone through, but I just can’t see the change reflected in the database. any guidance would be much appreciated! Many thanks.This is the data model:",
"username": "bosalie"
},
{
"code": "",
"text": "I do not really know mongoose so my comment might be off.The first thing that could help you is to handle errors and exceptions. From the little code snippet you share it seems like you are silently ignoring them.Next you could share sample documents and the values of your variables like plateId.",
"username": "steevej"
}
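A hedged sketch of what explicit error and result handling around the original findOneAndUpdate could look like, reusing the Dinner model and plateId variable from the question (the logging choices are assumptions, not the poster's code):

// Wrap the update so failures and non-matches are surfaced instead of being ignored.
async function markPlateComplete(plateId) {
  try {
    const updatedDinner = await Dinner.findOneAndUpdate(
      { 'cups.plates': { $elemMatch: { id: plateId } } },
      {
        $set: {
          updatedBy: 'me',
          'cups.$[].plates.$[plate].status': 'COMPLETE',
        },
      },
      { arrayFilters: [{ 'plate.id': plateId }], new: true }
    ).exec();

    // A null result means the filter matched no document - log it rather than moving on silently.
    if (!updatedDinner) {
      console.warn(`No dinner document matched plate id ${plateId}`);
    }
    return updatedDinner;
  } catch (err) {
    // Surface write/validation errors instead of swallowing them.
    console.error('findOneAndUpdate failed', err);
    throw err;
  }
}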
] | Problem with mongodb operation occasionally not updating with findOneAndUpdate nested arrays | 2023-10-20T12:47:50.505Z | Problem with mongodb operation occasionally not updating with findOneAndUpdate nested arrays | 135 |
null | [] | [
{
"code": "",
"text": "Hello! My startup Torem is deploying in AWS while using mongoDB Atlas.We have some AWS credits. Is there any way we could use those credits with mongoDB Atlas directly? It could really mean a lot to us.Thanks!!",
"username": "Torem_Software"
},
{
"code": "",
"text": "Hi @Torem_Software - Welcome to the community.We have some AWS credits. Is there any way we could use those credits with mongoDB Atlas directly? It could really mean a lot to us.I would clarify if you’re able to pay with AWS credits with the Atlas in-app chat support team. However, the following blog regarding the AWS marketplace Atlas (Pay as you go) listing may also provide more details regarding payment.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "You can’t directly use AWS credits with MongoDB Atlas, but you can indirectly save by using AWS services like EC2 instances for your app, optimizing resources, or covering data transfer costs with the credits. It won’t be a direct deduction, but it can help reduce your overall expenses.",
"username": "David_Sadler"
}
] | AWS activate and MongoDB Atlas | 2023-03-26T18:52:45.313Z | AWS activate and MongoDB Atlas | 1,081 |
null | [
"react-native",
"flexible-sync"
] | [
{
"code": "",
"text": "Hi Mongo, we are using Mongo Realm Flexible Sync as our solution for a offline first app.We recently encountered a BSONObjectTooLarge error log in Mongo Atlas which required us to restart flexible sync. While Flexible Sync was down all our clients were unable to sync data from Mongo Realm.As this caused an outage for our users we are wondering what is the best action plan to deal with this in the future and we’re hoping you could help us with some questions:How can we narrow down which document is responsible for the error?Why does one client cause the entire sync system to terminate and what mitigation steps are taken to prevent this event?Is it possible to control the change event size on the client so we can prevent a client exceeding the limit of the change events / documents size when syncing.When Flexible Sync fails across all clients how can we reenable flexible sync without critical data loss on our clients.Some of the errors we are seeing in our logs are:BadClientFileIdent Error\nMongoEncodingError Error\nDivergingHistories Error\nTranslatorFatalError Error\nEncountered BSONObjectTooLarge error. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning.",
"username": "Ben_Corbett"
},
{
"code": "The server has forgotten about the client-side file presented by the client. \nThis is likely due to using a synchronized realm after terminating and re-enabling sync. \nPlease wipe the file on the client to resume synchronization.\nMongoEncodingErrorTranslatorFatalError",
"text": "Hi, I apologize for this unfortunate situation, and I will do my best to answer your questions.If you can send me the entire error message it should have a long hex string in the error. This can determine the document that is causing the error. If you do not have this, if you can provide me with an application_id (the objectId in the URL of realm.mongodb.com) I can search for it in our logs.This error is actually caused by a write to MongoDB by an external client (shell, driver, compass, etc) that Device Sync is listening to. It is a longstanding issue in MongoDB in which Change Events (the structure that is used to listen for events) exceed 16MB and MongoDB does not allow us to continue consuming events (and thus we lose our pointer into the oplog, so it is unsafe to do anything other than fail loudly). We realize this is not ideal, and it is why the MongoDB server team released this feature in 7.0 (https://www.mongodb.com/docs/manual/reference/operator/aggregation/changeStreamSplitLargeEvent/#-changestreamsplitlargeevent--aggregation-). We are completing the work soon to begin using this new stage to avoid this issue altogether.As mentioned above, this is actually a write that originated from outside of Device Sync in which the PreImage of the document combined with the UpdateDescription (https://www.mongodb.com/docs/manual/reference/change-events/) exceeded 16MB. This generally means you have a document somewhere in the 10MB-16MB range causing the issue. While MongoDB does support documents up to 16MB, it is advised that you design your data model such that you don’t come too close to that limit.Do you mind explaining more about what kind of Data Loss you saw? Generally speaking, when terminating and re-enabling sync clients should perform a Client Reset. The new default is a mode called RecoverUnsynced that should perform a best-effort attempt at not losing any data. https://www.mongodb.com/docs/atlas/app-services/sync/error-handling/client-resets/Some of the errors we are seeing in our logs are:These errors are not unexpected. BadClientFileIdent and DivergingHistories are errors that occur when you terminate and re-enable sync. When that happens, clients need to be reset and those errors are the 2 possible ways we have of detecting older clients. They should be accompanied by the message:See here for more details about these errors: https://www.mongodb.com/docs/atlas/app-services/sync/error-handling/errors/MongoEncodingError is also normal and indicative of having MongoDB documents that do not match the schema you have configured in App Services. See here for a better description in our documentation: https://www.mongodb.com/docs/atlas/app-services/sync/error-handling/errors/#mongodb-translator-errorsTranslatorFatalError - This is what happens when the component in charge of translating changes between MongoDB and Device Sync has encountered a fatal error that requires user interaction.Let me know if you have any other questions and I would be happy to answer. We are excited that we will soon be able to bypass this error on clusters utilizing a newer version of MongoDB.Best,\nTyler",
"username": "Tyler_Kaye"
},
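For reference, a minimal mongosh sketch of the $changeStreamSplitLargeEvent stage Tyler mentions (requires MongoDB 7.0+; the database and collection names are placeholders):

// Open a change stream that splits any event larger than 16MB into fragments.
// $changeStreamSplitLargeEvent must be the final stage of the pipeline.
const watchCursor = db.getSiblingDB("mydb").getCollection("mycoll").watch(
  [ { $changeStreamSplitLargeEvent: {} } ],
  { fullDocument: "updateLookup" }
);

while (!watchCursor.isClosed()) {
  const event = watchCursor.tryNext();
  if (event !== null) {
    // Fragments of a split event carry a splitEvent field: { fragment: n, of: total }.
    printjson(event);
  }
}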
{
"code": "",
"text": "should perform a best-effort attempt at not losing anyHi Tyler, thanks for your time and such a comprehensive response this has been a huge help for the team in understanding how best to tackle the sync issue.We have located the document which has stored low-res thumbnails which somewhat explains the change event size being over 16 mg and flexible sync being terminated. We are taking steps to mitigate the problem which includes reducing the document size and modifying the writes to reduce the change event!As far as data loss we currently do not have any reported issues when terminating the sync, but we were just concerned about the possibility. We will investigate your client reset and error handling suggestions and report back if we have any issues.Thanks again for such a great answer, look forward to your updates to change event handling.\nBen",
"username": "Ben_Corbett"
},
{
"code": "",
"text": "@Tyler_Kaye, I find the design isn’t fault tolerant enough. We experienced the same issue last year. Lucky for us, it’s an enterprise app where we could contact the user to have him delete the app.Validation of object size on client only is error prone and against best practices. All requests must also be validated server side!One user trying to sync an object too large shouldn’t break sync for ALL users! Tracking this user down before sync can be restored isn’t justified.We raised our concerns in the support ticket but given this thread, it is still an issue.Best regard,\n// Mikael",
"username": "Mikael_Gurenius"
},
{
"code": "",
"text": "Hey @Mikael_GureniusI totally agree! For changes coming from Realm we can handle arbitrarily sized data (though we are limited by the 16MB document size). I will note that generally speaking if you find yourself bumping into the 16MB document size it is likely the case that the bigger issue is that your Data Models should be revisited (and you will have performance problems in MongoDB dealing with such large documents).Also, just to be clear, the issue above is what happens when a change is made to MongoDB that results in a document being larger than 16MB. So it doesn’t have to do with changes from a client, but rather changes from the shell / drivers / etc. This error has been fixed in MongoDB with a new feature and we are working on rolling it out to users of App Services. (see: https://www.mongodb.com/docs/manual/reference/operator/aggregation/changeStreamSplitLargeEvent/#-changestreamsplitlargeevent--aggregation-)Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "The split looks promising! I’ll have another look if I can reproduce our issue from last year.// Mikael",
"username": "Mikael_Gurenius"
}
] | Approaches to dealing with Flexible Sync outages relating to client change set errors | 2023-10-04T08:04:15.690Z | Approaches to dealing with Flexible Sync outages relating to client change set errors | 348 |
null | [
"java",
"python",
"spark-connector"
] | [
{
"code": "spark.app.id=local-1697738922184\n\nspark.app.name=MongoDB\n\nspark.app.startTime=1697738921889\n\nspark.driver.host=10.0.2.15\n\nspark.driver.port=43157\n\nspark.executor.id=driver\n\nspark.jars.packages=org.mongodb.spark:mongo-spark-connector_2.12-10.2.0\n\nspark.master=local[*]\n\nspark.mongodb.collection=tweets\n\nspark.mongodb.connection.uri=mongodb://localhost:27017/\n\nspark.mongodb.database=twitter_db\n\nspark.rdd.compress=True\n\nspark.serializer.objectStreamReset=100\n\nspark.sql.catalogImplementation=hive\n\nspark.sql.warehouse.dir=file:/home/hduser/Desktop/CA/spark-warehouse\n\nspark.submit.deployMode=client\n\nspark.submit.pyFiles=\n\nspark.ui.showConsoleProgress=true\nPy4JJavaError: An error occurred while calling o208.save.\nat com.mongodb.spark.sql.connector.config.ClassHelper.createInstance(ClassHelper.java:79)\n",
"text": "Hello,I’m working on an Ubuntu machine, I installed everything and I normally use Hadoop and pyspark. I’m trying to write a Spark dataframe to MongoDb but I keep get an error. I did all necessary steps but still no luck. I get an error on the mongo config DefaultMongoClientFactory, from the connector doc I can see this value is optional, I even tried to manually write the default value but no luck. Please find the steps with commands/output below:Blockquotemongo: v3.2.10connector: mongo-spark-connector_2.12-10.2.0.jarspark: 3.1.3dependecies:bson-3.2.0.jarmongodb-driver-3.2.0.jarmongodb-driver-core-3.2.0.jarprint(spark.sparkContext.getConf().toDebugString())data.write.format(“mongodb”).mode(“overwrite”).save(): com.mongodb.spark.sql.connector.exceptions.ConfigException: Invalid value com.mongodb.spark.sql.connector.connection.DefaultMongoClientFactory for configuration mongoClientFactory",
"username": "Val_Mrt"
},
{
"code": "NoSuchMethodError: org.apache.spark.sql.catalyst.encoders.RowEncoder$.\n",
"text": "bson-3.2.0.jarmongodb-driver-3.2.0.jarmongodb-driver-core-3.2.0.jarI have updated those jars + Spark and Scala version, and also added mongo-java-driver-3.9.1.jar but no luck. The error I get now isI then replaced spark-catalyst 2.12 with 2.13 but I couldn’t even initialize spark session so I revert it back",
"username": "Val_Mrt"
}
] | Write to mongo from pyspark | 2023-10-19T18:27:30.147Z | Write to mongo from pyspark | 220 |
null | [
"connector-for-bi"
] | [
{
"code": "",
"text": "Hello im triying the new Power BI connector. All my collections are showing, but some collections have missing information/columns when opened in the Power Query editor. What can i do to fix this?",
"username": "Santiago_De_la_Pena_Miranda"
},
{
"code": "",
"text": "Hello @Santiago_De_la_Pena_Miranda and welcome to the MongoDB Community! I am pretty sure that the underlying SQL schema just didn’t sample enough documents to represent some of these “missing” fields/columns. If you regenerate the SQL Schema, you can make sure these fields get represented. Here are some instructions:\n\nScreenshot 2023-08-25 at 9.15.00 AM1285×722 152 KB\nHere is the command that you would run, of course you would change the db instance name from “datalake” to th name of your virtual db name and you can also change the sample size.\ndb.runCommand({sqlGenerateSchema: 1, sampleNamespaces: [“datalake.*”], sampleSize: 1000, setSchemas: true})",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "Hello Alexi_Antonino - I successfully implemented the above steps as mentioned by you. Unfortunately, I still don’t get to see the entire columns on Power BI. It would be really helpful if you could help me resolve the issue as I’m struck on this issue from many days.\nAppreciate your help !Below is the actual tables on MongoDB - I have around 74 fields with few nested objects.\nScreenshot 2023-10-19 at 5.51.00 PM2248×1314 331 KBBelow is the list of columns that PowerBI has pulled ( around 15 columns )…rest is missingCapture451×645 39.6 KB",
"username": "Srinivas_Jayaram"
},
{
"code": "",
"text": "Hi there @Srinivas_Jayaram - a few questions. When you ran the sqlGenerateSchema command, did the output/results list all of the columns in your collection? Did you change the sample size to a larger number to make sure all of the fields were sampled?Here is what the output of the sqlGenerateSchema looks like (this is just a portion of my collection- but this shows an array field called items and the fields within items) - you should see all fields present in this schema:\nScreenshot 2023-08-25 at 9.26.18 AM761×744 16.1 KBIf you run the sqlGenerateSchema command and do not see all of the fields you expect, please run this again and increase the sample size (you can up this to 20K even).And my last question, did you enable Atlas SQL with the Quickstart? or did you manually create a federated database? If you manually created a federated database, I would want to make sure that you have all of the cluster collections separated into virtual collections. Sometimes users miss this:\nScreenshot 2023-10-19 at 9.39.48 AM1276×685 152 KBLet me know if this is helpful.",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: [“datalake.*”], sampleSize: 1000, setSchemas: true})Hi Alexi Antonino,Thank you so much for the reply. I’m now able to get the desired field on PowerBI.Step1 : I have increased the sample size to 10k\nStep 2 : I have enabled Atlas SQL with Quickstart.It worked for me !!! thank you so much for all your help. Appreciate it !!!Thanks,\nSrini",
"username": "Srinivas_Jayaram"
},
{
"code": "",
"text": "I’m so glad this works for you! Best to you and your Power BI adventures:)",
"username": "Alexi_Antonino"
}
] | Not all collumns showing in Power BI (Using Mongo DB Atlas beta connector) | 2023-09-22T21:27:36.622Z | Not all collumns showing in Power BI (Using Mongo DB Atlas beta connector) | 470 |
null | [
"spark-connector"
] | [
{
"code": "",
"text": "Hi:\nWhen using mongo db connector of spark 2.x or spark 3.x, for the same volume of data, it both works well for sample partition and splitvector partition.\nBut, after upgrade to mongodb spark connector v10.x, we met strange issues:\n1: There is log info “Partitioner Getting collection stats …”, and it took too long time for big data size\n2: we have 2 collections, it only read the first one without the second one.my questions:\nIs there any limitation for v10.x or what we can do to ignore the “Partitioner Getting collection stats” operation?\nThere is no Splitvector Partition for v10.x, is there any reason to remove it?",
"username": "jj_jj"
},
{
"code": "",
"text": "I have exactly the same problem as you. My jobs get stuck in the partition size calculation phase. After 40 minutes, there is no progress. I cannot migrate to connectors 10.2 because of this.",
"username": "Lukasz_75327"
}
] | Why does mongodb spark connector v10 log the info Partitioner Getting collection stats and it took too long time for big data size | 2022-11-22T04:08:25.161Z | Why does mongodb spark connector v10 log the info Partitioner Getting collection stats and it took too long time for big data size | 1,712 |
null | [
"connector-for-bi"
] | [
{
"code": "",
"text": "Hello,I am trying to connect power bi with mongo DB and get this error\nWer encounter an error while trying to connect.\nDetails: \"Data source error occurred.\nSQLSTATE: 01000\nNativeError: 444\nError message: ODBC: ERROR [01000] The driver returned invalid (or failed to return) SQL_DRIVER_ODBC_VER: 03.80I follow the guide in the website step by step.",
"username": "Sahar_Adi"
},
{
"code": "",
"text": "hello Sahar! Welcome to the community. This error usually indicates an issue with your user credentials. And because of caching on the MS Power BI side, if you just messed up the password on the first try, then put in the correct one, this error will still be present. Take a look at these instructions to help remove this error/blocker:\n\nScreenshot 2023-07-11 at 9.28.20 AM2218×1186 349 KB\n\n\nScreenshot 2023-07-11 at 9.28.31 AM2220×1244 415 KB\nGood Luck and let me know if this helped.\nBest,\nAlexi",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "I tried this but it is not working.\nsame issue.\nuser and password are correcti use power bi tool and want to change my Database to mongoDB, that is all",
"username": "Sahar_Adi"
},
{
"code": "",
"text": "Are you able to connect to your Atlas SQL Federated DB via Compass or MongoDB Shell (using this same user if and password)? This will help us validate the user credentials.",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "YES.\nI am new to this. Maybe you are available for a zoom meeting?\nIf yes tell me when and i send a link",
"username": "Sahar_Adi"
},
{
"code": "",
"text": "Hello Sahar - please use my calendly link to Calendly - Alexi Antonino schedule some time with me and I will send you a zoom link for the time slot. Looking forward to connecting with you.\nBest,\nAlexi",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "Hey Alexi\nI am so sorry for missing our meeting. i will reschedule a new one",
"username": "Sahar_Adi"
},
{
"code": "",
"text": "no worries - I am looking forward to meeting with you.",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "Do you guys had a meeting ? have you find any solutions?",
"username": "E_Mahe"
},
{
"code": "",
"text": "Great! Thanks! I could resolve my problem, so, now, when my data is loading power bi show this message: “This query does not have any columns with the supported data types. It will be disabled from being loaded to the model.” any advice?",
"username": "edgard_carrillo"
},
{
"code": "",
"text": "How did you solve the problem, cos i have the same issue. I would appreciate any help i can get.",
"username": "Joshua_Olaniyi"
},
{
"code": "",
"text": "I have the same issue, please share the steps you made in resolving this. thanks",
"username": "Gil_Sabado"
},
{
"code": "",
"text": "Hey All - if you are able to connect, but then get a message about no columns, it could be because of the SQL Schema. Take a look at these steps to get/generate the SQL Schema. And here are the online docs about SQL Schema Management.\n\nScreenshot 2023-07-03 at 9.28.06 AM2152×1194 466 KB\n",
"username": "Alexi_Antonino"
},
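For anyone who prefers the shell route over the screenshots, a hedged mongosh sketch of regenerating the SQL schema described above, based on the command shown earlier in this document; the virtual database name and sample size are assumptions:

// Run against the Atlas SQL federated database connection in mongosh.
// "myVirtualDb" is a placeholder for your virtual database name.
db.runCommand({
  sqlGenerateSchema: 1,
  sampleNamespaces: ["myVirtualDb.*"],
  sampleSize: 1000,
  setSchemas: true
});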
{
"code": "",
"text": "Hi @Alexi_Antonino how are you? Please I am with the same issue, I did the steps and the erro still there.Can we schedule some time with me? Can I meet into your calendy?I need advance with Power Bi the analytics into my company, we need help here.Best,Gabriel",
"username": "Gabriel_Nogueira"
},
{
"code": "",
"text": "I could be missing something but after getting this error, I relooked at the download instructions and realized that I hadn’t downloaded and moved the connector into the right place.Follow all of the instructions for both mongoDB connector and the powerbi connection, which ended up working for me to clear out this issue.",
"username": "Hudson_Lorfing1"
},
{
"code": "",
"text": "@Hudson_Lorfing1 thanks for the feedback. I am working to make our doc instructions more overt around the steps, especially for the download/install of the odbc driver. This might help others.\n@Gabriel_Nogueira If you are still stuck, please feel free to schedule some time with me or email me: Calendly - Alexi Antonino\[email protected]\nAlexi",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "Hello @Alexi_Antonino, I cannot load my MongoDB database on PowerBi even after trying all methods. I tried using the ODBC method as well but it couldn’t load. Can you please help me with the same?I keep getting the error. Could you help me with the same?Details: “Data source error occurred.\nSQLSTATE: 01000\nNativeError: 444\nError message: ODBC: ERROR [01000] The driver returned invalid (or failed to return) SQL_DRIVER_ODBC_VER: 03.80”Regards,\nRohan",
"username": "Rohan_Pisipati"
},
{
"code": "",
"text": "@Alexi_Antonino : Can you please post the solution for the benefit of everyone. I too am facing the same issue where I getDetails: “Data source error occurred.\nSQLSTATE: 01000\nNativeError: 444\nError message: ODBC: ERROR [01000] The driver returned invalid (or failed to return) SQL_DRIVER_ODBC_VER: 03.80”Regards,\nMahesh",
"username": "Mahesh_T_Venkatramani"
},
{
"code": "",
"text": "Here is what fixed that error for me…delete existing ODBC connection (file → datasource settings → delete)\nmake the ODBC connection again (datasource settings → new → default or custom tab → save)Not specifying any credentials / connection string properties is what made things work for me.",
"username": "Quinn_Phillips"
},
{
"code": "",
"text": "Hi All - Here is a possible way around this error (see image below):\nAlso, make sure you are using a MongoDB Database User/Login and not the credentials you use for Atlas. And one more thing to check is that your IP address is whitelisted.\n\" IP Access ListAtlas only allows client connections to the database deployment from entries in the project’s IP access list. To connect, you must add an entry to the IP access list.\"Screenshot 2023-07-11 at 9.28.20 AM2218×1186 349 KB\nScreenshot 2023-07-11 at 9.28.31 AM2220×1244 415 KBHope this helps!\nAlexi",
"username": "Alexi_Antonino"
}
] | Power BI connection problem to MongoDB | 2023-07-11T12:29:34.820Z | Power BI connection problem to MongoDB | 2,435 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi Team,Replication oplog window has gone below 72 hours frequently received alert , we did check in logs and oplog collections insert and update and recently changes not much affect very less changes but considered getmore and query going between 120 140 and more per second these commands appear Opcounters metric when we received the alertso what is factor any additional we need to check replication oplog window has gone below ?\noplog size : 38000MB oploggb/H: 700In oplog collection filter condition query taking long time how do we retrieve data from oplog collection with short time",
"username": "Srihari_Mamidala"
}
] | Replication oplog window has gone below | 2023-10-17T09:55:08.874Z | Replication oplog window has gone below | 220 |
null | [] | [
{
"code": "",
"text": "One of the collections in Mongodb atlas has started showing zero documents from last 2 days (since Oct 13). Looking at the status graphs, I see a server restart event (red line) and a different replica chosen as primary on Oct 13. I’m using M0 Sandbox. I also see that the version has been upgraded from 6.0.10 to 6.0.11Why is the collection suddenly empty? This has led to data loss which is a concern for us. Is there a way to recover this data? Thanks in advance.",
"username": "Karthik_Ramachandra"
},
{
"code": "",
"text": "Contact the in-app support from your cluster. Bottom right of the screen.",
"username": "chris"
}
] | Atlas Collection is empty after server restart | 2023-10-15T17:03:20.998Z | Atlas Collection is empty after server restart | 228 |
null | [
"connecting"
] | [
{
"code": "",
"text": "I can connect to mongo through my phone data, but I cant seem to be able to connect using my router\nI tried to change my dns to 8.8.8.8 but it still didnt work\nHow can I fix this",
"username": "Cirrus_N_A"
},
{
"code": "",
"text": "Hello @Cirrus_N_A ,Welcome to The MongoDB Community forums! It’s possible that your router is blocking the connection to the MongoDB server. Here are some steps you can take to troubleshoot the issue:Check if your router has any firewall settings that might be blocking the connection to the MongoDB server. Make sure that the ports required for MongoDB (typically port 27017) are open.Check if your router is using any parental controls or content filtering settings that might be blocking the connection to the MongoDB server.Finally, if none of the above steps resolve the issue, and as mentioned by you that you are able to connect using your mobile internet then you may need to contact your internet service provider or router manufacturer for further assistance.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "I am facing the same issue. Have you solved it? if so, can you share with me?",
"username": "Ye_Bhone_Myat_N_A"
}
] | Cant connect through router but can connect using phone data | 2023-04-10T00:59:39.478Z | Cant connect through router but can connect using phone data | 1,182 |
[] | [
{
"code": "",
"text": "Ever since checking my MongoDB Environment I’ve seen that the connection count to MongoDB is ever increasing:\nPrimary:\nimage936×376 22.4 KB\nI run my environment on 5 Rhel servers and the Connections correspond through all membersAny assistance would be appreciated\nKindest\nGareth",
"username": "Gareth_Furnell"
},
{
"code": "",
"text": "the screenshot only shows a few hours data. Maybe you have a higher traffic during that window?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi, thanks for your feedback - not at all, this would be an unusual circumstance and I’m afraid that I will run out of connections\nimage933×447 32.1 KB\nThis is an update, and have not been able to find the root cause just yet, as soon as I have, I will update but any assistance is appreciated",
"username": "Gareth_Furnell"
},
{
"code": "",
"text": "Update and Resolved:\nApplication Connection to MongoDB had one usecase that was opening files and connections and not closing them or using the connection pool that existed - Max open files were reached, all nodes went down, there was downtime and I had to increase the ulimit from 64k to 100k for max open files so that MongoDB could run for a bit until - restarted the usecase to remove all kept connections and pausing it until the developers take a look at it.",
"username": "Gareth_Furnell"
},
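A minimal Node.js sketch of the pattern Gareth describes fixing: reuse one MongoClient (and its built-in pool) instead of opening a new connection per use case. The URI, database name and pool size are assumptions:

const { MongoClient } = require('mongodb');

// Create the client once at application start-up and share it everywhere.
const client = new MongoClient('mongodb://localhost:27017', {
  maxPoolSize: 50,        // cap concurrent connections from this process
  maxIdleTimeMS: 60000,   // release pooled connections that sit idle
});

async function main() {
  await client.connect();
  const coll = client.db('appdb').collection('events');
  await coll.insertOne({ createdAt: new Date() });
  // Do NOT open/close a client per request; close once on shutdown instead.
}

process.on('SIGTERM', () => client.close());
main().catch(console.error);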
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Ever Increasing connections | 2023-10-17T12:13:31.362Z | Ever Increasing connections | 233 |
|
null | [
"aggregation",
"queries",
"crud"
] | [
{
"code": "const mods = db.modifications.insertMany([\n {\n title: 'Modification #1',\n image: 'img1.png',\n },\n {\n title: 'Modification #2',\n image: 'img2.png',\n },\n {\n title: 'Modification #3',\n image: 'img3.png',\n },\n])\n\ndb.products.insertOne({\n slug: 'product1',\n title: 'Product #1',\n variants: [\n {\n size: 20,\n price: 200,\n modifications: [\n { id: mods.insertedIds[0], price: 10 },\n { id: mods.insertedIds[1], price: 15 },\n ],\n },\n {\n size: 30,\n price: 250,\n modifications: [\n { id: mods.insertedIds[0], price: 15 },\n { id: mods.insertedIds[2], price: 20 },\n ],\n },\n ],\n})\ndb.products.aggregate([\n { $match: { slug: 'product1' } },\n // ?\n])\nconst result = {\n slug: 'product1',\n title: 'Product #1',\n variants: [\n {\n size: 20,\n price: 200,\n modifications: [\n { _id: '…', title: 'Modification #1', image: '…', price: 10 },\n { _id: '…', title: 'Modification #2', image: '…', price: 15 },\n ],\n },\n {\n size: 30,\n price: 250,\n modifications: [\n { _id: '…', title: 'Modification #2', image: '…', price: 15 },\n { _id: '…', title: 'Modification #3', image: '…', price: 20 },\n ],\n },\n ],\n}\n$unwind$lookupdb.products.aggregate([\n { $match: { slug: 'product1' } },\n { $unwind: '$variants' },\n { $unwind: '$variants.modifications' },\n {\n $lookup: {\n from: 'modifications',\n localField: 'variants.modifications.id',\n foreignField: '_id',\n let: { price: '$variants.modifications.price' },\n pipeline: [{ $addFields: { price: '$$price' } }],\n as: 'variants.modifications',\n },\n },\n])\n$groupmodificationsprice",
"text": "I have a data model where each Product has many Variants and each Variant has many Modifications. In database it looks like this:Mongo playground: a simple sandbox to test and share MongoDB queries onlineWhat I want is to doto get the result that looks like thisHow to accomplish this?I’ve tried to $unwind twice and then $lookupbut then I don’t know how to $group (?) that data back.Also, there’s a similar question with working solution. In my case though, the modifications array isn’t just array of ids, but has data within its elements (the price field) which I need to include in the result somehow.",
"username": "crabvk"
},
{
"code": "\nAtlas atlas-cihc7e-shard-0 [primary] test> db.products.aggregate([ { $match: { slug: \"product1\" } }, { $unwind: { path: \"$variants\" } }, { $unwind: { path: \"$variants.modifications\" } }, { $lookup: { from: \"modifications\", localField: \"variants.modifications.id\", foreignField: \"_id\", let: { price: \"$variants.modifications.price\" }, pipeline: [ { $addFields: { price: \"$$price\" } }], as: \"variants.modifications\" } }, { $group: { _id: { slug: \"$slug\", size: \"$variants.size\" }, title: { $first: \"$title\" }, totalPrice: { $sum: \"$variants.price\" }, modifications: { $addToSet: \"$variants.modifications\" } } }, { $project: { _id: 0, slug: \"$_id.slug\", size: \"$_id.size\", title: 1, totalPrice: 1, modifications: 1 } }] )\n[\n {\n title: 'Product #1',\n totalPrice: 400,\n modifications: [\n [\n {\n _id: ObjectId(\"6531197917a38d454218eede\"),\n title: 'Modification #1',\n image: 'img1.png',\n price: 10\n }\n ],\n [\n {\n _id: ObjectId(\"6531197917a38d454218eedf\"),\n title: 'Modification #2',\n image: 'img2.png',\n price: 15\n }\n ]\n ],\n slug: 'product1',\n size: 20\n },\n {\n title: 'Product #1',\n totalPrice: 500,\n modifications: [\n [\n {\n _id: ObjectId(\"6531197917a38d454218eee0\"),\n title: 'Modification #3',\n image: 'img3.png',\n price: 20\n }\n ],\n [\n {\n _id: ObjectId(\"6531197917a38d454218eede\"),\n title: 'Modification #1',\n image: 'img1.png',\n price: 15\n }\n ]\n ],\n slug: 'product1',\n size: 30\n }\n]\n",
"text": "Hi @crabvk and welcome to MongoDB community forums!!Thank you for sharing the detailed information on the posts.\nBased on my understandingFirstly, using the combination of two unwinds and group together may result in a suboptimal query and if this is a frequently used query in the application, might impact on the performance of the application.\nThe initial recommendation would be to re-design the schema to make it easier to achieve the result desired. Depending on your use case, an example would be to reduce the nested arrays.\nIf you have the related schema, the documentations for Best Practices for Data Modelling would be a good staring point for reference.Now based on your use case, I tried to replicate the query in my local environment and tried the following query. The result is not exactly as desired, but this is quite close and may be suitable for your use case.I tried the following query as:Does the above output helpful to you ?Warm Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thank you for the query and the link, definitely must read topic.\nRegarding my initial question, I’ve got the solution on stackoverflow join - MongoDB $lookup on doubly nested array of objects - Stack Overflow",
"username": "crabvk"
}
] | Join on doubly nested array of objects | 2023-10-18T16:33:49.399Z | Join on doubly nested array of objects | 224 |
null | [
"aggregation"
] | [
{
"code": "{\"a\": \"12-6-2022\", \"b\": 1},\n{\"a\": \"\", \"b\": 2}\na: struct<date: timestamp, string: string>\n",
"text": "i have a collection with look like below:when use $out to s3 with parquet format in trigger data federatoin it return schema i a group with 2 type :how to make data federation just create one schema type?",
"username": "Dat_Le"
},
{
"code": "",
"text": "Hi @Dat_Le and welcome to MongoDB community forums!!Thank you for sharing the above information. However, to understand deeper could you help me with some extra details like:As per my understanding, each document in the collection is created as one parquet file and hence ideally one datatype that the field would be the part of the parquet file.Warm Regards\nAasawari",
"username": "Aasawari"
}
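One possible approach (a sketch, not necessarily what the federation trigger does today): coerce the mixed field to a single type before the $out stage, so every document contributes the same Parquet schema. The collection name is an assumption:

// Coerce "a" to a single type before writing Parquet.
db.mycoll.aggregate([
  {
    $addFields: {
      a: {
        // Dates pass through unchanged; parseable strings are converted;
        // empty or unparseable strings become null instead of introducing a second type.
        $convert: { input: "$a", to: "date", onError: null, onNull: null }
      }
    }
  }
  // ...followed by your existing $out stage that writes Parquet to S3.
  // Non-ISO strings like "12-6-2022" may need $dateFromString with an explicit format instead.
]);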
] | How to convert field have 2 types to 1 type schema in data federation | 2023-10-17T17:24:51.358Z | How to convert field have 2 types to 1 type schema in data federation | 188 |
null | [
"python",
"database-tools",
"backup"
] | [
{
"code": "00 30 63 390c95F 69 64 00 0D 00 00 00 30 63 39 36 63 30 61 35 37 39 66 36 00 02 6B 65 79\n_id\n0c96c0a579f6\u0002key\n0x10db26c invalid length or type code b'R<\\x00\\x00\\x02_id\\x00\\r\\x00\\x00\\x00036'\n0x10db26c52 3C 00 00 02R<\u000274 69 74 6C 65 00 00 52 3C 00 00 02 5F 69 64\ntitleR<\u0002_id\ntitle_id",
"text": "For the past month or so, I have been working to recover a corrupted Mongodump archive that was provided to us when we switched cloud providers. Unfortunately, the original database no longer exists, so recovering this archive is the only way to get the data back. I originally posted in Mongorestore fails with error: `Failed: corruption found in archive; 858927154 is neither a valid bson length nor a archive terminator` - #18 by Southpaw_1496 , but the topic was closed due to inactivity. This is a summary of the progress that was made:We first found that the error was caused by log messages being streamed to stdin along with the dump data and therefore being recorded as part of the dump file. Removing those messages didn’t initially work, but it did change the error, and Chris (who was helping in the thread) found that there was corruption elsewhere in the file, not just the log messages. He created a shell script to try and eliminate these errors, as well as a python script that was designed to find corrupted parts of a dump file.Since the topic was closed due to inactivity, I tried running the shell script. Restoring once again fails with the “Corruption found in archive” error, but with a slightly different value. The 32-bit integer it complained about (962801664) is equivalent to the hex 00 30 63 39 which encodes to 0c9, part of an ID in the database near the end of the file. The surrounding data iswhich encodes toUnlike the first error I received where the error is clearly the result of log messages being added to the dump file, the ID and the bytes surrounding it seem to be valid data, so it’s not obvious where the corruption is.I also tried the python script, but I’m unsure how to interpret its output:The data at offset 0x10db26c is 52 3C 00 00 02, which encodes to R<\u0002. The surrounding data is:Which encodes to:From my limited understanding, the data appears to be valid: There’s 7 bytes between title and _id, just like in all the other (presumably valid) occurrences of the pattern in the file. Possibly the python script is telling me where exactly the corruption is, but I can’t understand what it’s trying to say.Does anyone have ideas for other things I could try?",
"username": "Southpaw_1496"
},
{
"code": "b'R<\\x00\\x00\\x02_id\\x00\\r\\x00\\x00\\x00036'52 3c 00 00 02 5f 69 64 00 0d 00 00 00 30 33 3652 3c 00 00025f 69 64 00_id000d 00 00 0030 33 36036",
"text": "I also tried the python script, but I’m unsure how to interpret its output:All the script is doing is attempting to decode a document. It prints out the location that it errored and the error along with 16B of the document.As I mentioned in that topic the corruption could be in the preceding document ( I think this would end up changing this document’s length) or in the document starting at this location(I think that is the case for this document).The first 16B of this document is:\nb'R<\\x00\\x00\\x02_id\\x00\\r\\x00\\x00\\x00036'\nor in hex:\n52 3c 00 00 02 5f 69 64 00 0d 00 00 00 30 33 3652 3c 00 00 is the int32 length of the document: 15442Bytes\n02 is specifying type string for the next document.\n5f 69 64 00 is the cstring for the e_name(field): _id terminated with 00\nBeing a string type the next 4B are the int32 length of the string\n0d 00 00 00 13Bytes\n30 33 36 is the start of the 13B string 036You’ll have to get familiar with the bson spec and the archive spec. I knew very little about them before I looked into the original post.",
"username": "chris"
},
{
"code": "",
"text": "I guess I’ll start there then.",
"username": "Southpaw_1496"
}
] | Recovering a corrupted MongoDump archive | 2023-10-14T11:37:41.634Z | Recovering a corrupted MongoDump archive | 303 |
null | [] | [
{
"code": "",
"text": "Hi there, i’m trying to get started using cloudformation to deploy and manage a cluster. I can successfully deploy a cluster using cloudformation, however any time i want to make any updates to the Atlas resources (Database User, Cluster Details etc…), using cloudformation i get the following error:Resource handler returned message: “Unable to complete request: runtime error: invalid memory address or nil pointer dereference” (RequestToken: 5e5de241-0d4e-aa67-67c0-c7287978271c, HandlerErrorCode: GeneralServiceException)I have installed all of the cloudformation resources using the get-setup shell script found here: GitHub - mongodb-developer/get-started-aws-cfn: Get started using MongoDB Atlas with AWS CloudFormation today!I’m operating in the us-east-1 region.What am i doing wrong? or is it not possible to update the Atlas resources via cloudformation after their initialisation?",
"username": "Lee_Wilkins"
},
{
"code": "",
"text": "Hi Lee, was this issue resolved for you? If so, can you please share the resolution.I’m facing the same issue.",
"username": "Darmadeja_Venkatakrishnan"
}
] | Updating Atlas resources using AWS Cloudformation | 2023-01-13T12:48:10.251Z | Updating Atlas resources using AWS Cloudformation | 764 |
null | [
"dot-net"
] | [
{
"code": "'MongoDB.Driver.Linq.ExpressionNotSupportedException' in MongoDB.Driver.dllCollection.CountDocumentsAsync(e => e.League.Id == leagueId)await Collection.CountDocumentsAsync(Builders<LeagueEvent>.Filter.Eq(e => e.League.Id, leagueId))",
"text": "I’ve started seeing 'MongoDB.Driver.Linq.ExpressionNotSupportedException' in MongoDB.Driver.dll appearing in my c# output. I use a lot of linq expressions to evaluate what I’m looking for. similar to the following\nCollection.CountDocumentsAsync(e => e.League.Id == leagueId)\nNow it appears the only way to get these exceptions to stop is to use a Builder to not show these exceptions like so\nawait Collection.CountDocumentsAsync(Builders<LeagueEvent>.Filter.Eq(e => e.League.Id, leagueId))\nHowever, I still have no problems using the same or similar expressions when performing a FindAsync or DeleteOneAsync, etc. Is there something wrong with my expression, or does CoundDocumentsAsync have different requirements?",
"username": "Jeremy_Regnerus"
},
{
"code": "var connectionString = \"mongodb://localhost\";\nvar clientSettings = MongoClientSettings.FromConnectionString(connectionString);\nclientSettings.LinqProvider = LinqProvider.V2;\nvar client = new MongoClient(clientSettings);\nusing System;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n\nvar client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<Player>(\"players\");\n\nvar leagueId = 42;\nvar count = await coll.CountDocumentsAsync(e => e.League.Id == leagueId);\nConsole.WriteLine($\"The count is {count}.\");\n\nrecord Player(ObjectId Id, League League);\nrecord League(int Id);\n",
"text": "Starting in 2.19.0, we upgraded the default LINQ provider from LINQ2 to LINQ3. LINQ3 is a more modern implementation with a lot more functionality. That said, users have encountered edge cases that are not supported. We encourage you to report any problems with the LINQ3 provider in our JIRA project.You can switch back to LINQ2 provider though we will be removing it in the upcoming 3.0 driver release:I attempted to reproduce the reported issue, but the following snippet works as expected with the 2.22.0 driver.Please create a CSHARP ticket in our JIRA project with a self-contained repro and we will be happy to investigate with you.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | CountDocumentsAsync throws exception when using expression | 2023-10-19T16:07:16.802Z | CountDocumentsAsync throws exception when using expression | 173 |
[
"node-js",
"transactions"
] | [
{
"code": "",
"text": "Hello everyone,\nI encounter this error every 3 or 4 days. My project runs on app engine and on average around 500 thousand queries and transactions are made daily and all of it works with Mongodb but in the future, this rate will reach 5 million requests per day, but this is not limited to 5 million, it will increase even more. My app engine settings are set to scale completely automatically. There is a serverless production in Mongodb Atlas.This is how my Nodejs project connects to mongodb atlas.\nScreenshot 2023-10-19 at 21.57.07823×233 24.1 KBThe only network access IP address in MongoDB Atlas: 0.0.0.0/0Nodejs Version: 18And I get this error “MongoNetworkError: read ECONNRESET” 1 or 2 times a month.How can I solve this problem?",
"username": "Mete_Oguzhan_Bayrampinar"
},
{
"code": "MongoNetworkError",
"text": "@Mete_Oguzhan_Bayrampinar, based on the details you’ve share I’m not entirely sure what you’re describing is actually an issue. Though a MongoNetworkError bubbling up to the application is indicative of an error, the MongoDB drivers have retryability features built in to account for these transient network issues (such as retryable reads and retryable writes).When you see this error in the logs is it associated with a failed operation by the application? Is data not being read/written?And I get this error “MongoNetworkError: read ECONNRESET” 1 or 2 times a month.That’s pretty infrequent given the volume of traffic you’re alluding to, but if you really want to go into the weeds node.js - How to debug a socket hang up error in NodeJS? - Stack Overflow has some details that might be interesting.",
"username": "alexbevi"
}
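A small Node.js sketch that makes the transient-error handling alexbevi mentions explicit (retryable reads/writes are already on by default in modern drivers; they are shown here only to make the behavior visible). The URI, database and collection names are placeholders:

const { MongoClient, MongoNetworkError } = require('mongodb');

const client = new MongoClient(process.env.MONGODB_URI, {
  retryWrites: true,   // default: retry a write once after a transient network error
  retryReads: true,    // default: retry a read once as well
});

async function saveEvent(doc) {
  try {
    await client.db('app').collection('events').insertOne(doc);
  } catch (err) {
    if (err instanceof MongoNetworkError) {
      // Only reached if the driver's built-in retry also failed;
      // log it and decide whether the application should retry again.
      console.error('Network error talking to Atlas:', err.message);
    }
    throw err;
  }
}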
] | "connection * to *:27017 closed" error | 2023-10-19T19:07:20.115Z | “connection * to *:27017 closed” error | 177 |
|
null | [] | [
{
"code": "rror parsing YAML config file: yaml-cpp: error at line 31, column 6: end of map not found\n",
"text": "Hi fellows,\nI have a problem I cpuldn’t solve nor find a solution for yet. I needed to add authorization “enabled” to the mongod.conf to create and update users. If I try I get the following error:I couldn’t find a solution how to fix that . My mongod.conf you find here: https://pastebin.com/azXpEqiDMany thanks in advance,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "security.authorizationsecurity# Security\nsecurity:\n authorization: enabled\n",
"text": "Hi\nYou need to add security.authorization part to your config file - in your config file, security seems to be hashedI suggest check official MongoDB documentation",
"username": "Arkadiusz_Borucki"
},
{
"code": " systemctl restart mongod\nroot@docker:/etc# systemctl status mongod\n× mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Wed 2022-07-13 14:36:40 CEST; 431ms ago\n Duration: 4.728s\n Docs: https://docs.mongodb.org/manual\n Process: 105115 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=14)\n Main PID: 105115 (code=exited, status=14)\n CPU: 791ms\n\nJul 13 14:36:35 docker systemd[1]: Started MongoDB Database Server.\nJul 13 14:36:40 docker systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nJul 13 14:36:40 docker systemd[1]: mongod.service: Failed with result 'exit-code'.\n",
"text": "Hi,Thanks but that wasn’t it all yet. I removed the # restarted mongod but now I getif I try to create a new user I still get this error:\ndb.createUser({\n… user: “m103-admin”,\n… pwd: “m103-pass”,\n… roles: [\n… {role: “root”, db: “admin”}\n… ]\n… })\nuncaught exception: Error: couldn’t add user: command createUser requires authentication :Regards,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "",
"text": "Hi,\nmongod starts now, but the error cerating a new user still exists . How can I fix that?Many thanks in advance,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "",
"text": "Hi,\nCan you send you the current config file and output from mongod log ?\nDid you add your first, admin user before you enabled authorization ?",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "you need to add a first admin user before you enable authorization",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Hi, with or without authorization I get > Error: couldn’t add user: command createUser requires authentication :_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1367:11\n@(shell):1:1\nHere is the mongod.log you asked for:\nhttps://pastebin.com/EuARNZmBI hope it will be helpfulThanks in advance\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "",
"text": "Hi,the mongod.log you can find here as a file: https://ukleemann.net/index.php/apps/files/?dir=/Documents/FILES&fileid=277Regards,Uli",
"username": "Ulrich_Kleemann1"
},
{
"code": "admin",
"text": "I assume you are using standalone mongod instance, at least I could not see a replica set in your config file.\nYou can add the first admin user to your database with disabled access control, try the following steps:procedure is available online\nIf you enable access control before creating any user, MongoDB provides a localhost exception which allows you to create a user administrator in the admin database.",
"username": "Arkadiusz_Borucki"
},
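For completeness, a rough sketch of the standard procedure from the MongoDB docs that Arkadiusz is pointing to, reusing the createUser call from earlier in this thread (mongosh commands; the password is a placeholder):

// 1. Start mongod with the security section commented out, then connect locally:
//      mongosh --port 27017
// 2. Create the first administrator in the admin database:
use admin
db.createUser({
  user: "m103-admin",
  pwd: "m103-pass",            // placeholder - use a real secret
  roles: [ { role: "root", db: "admin" } ]
})
// 3. Re-enable "security.authorization: enabled" in mongod.conf, restart mongod,
//    and reconnect with authentication:
//      mongosh --port 27017 -u m103-admin -p --authenticationDatabase admin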
{
"code": "",
"text": "Hi,I changed authorization from enable to disabled like this:security:\nauthorization: disabledthen I tried to add an admin user like that:use admin\ndb.createUser({then I got the know error againuncaught exception: Error: couldn’t add user: command createUser requires authentication :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createUser@src/mongo/shell/db.js:1367:11\n@(shell):1:1\nwhat did I do wrong?Regards,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "# security:\n# authorization: enabled\n",
"text": "you need to disable access control like this (add # before security and authorization: enabled)now restart mongod and add first user",
"username": "Arkadiusz_Borucki"
},
{
"code": "Jul 13 14:36:35 docker systemd[1]: Started MongoDB Database Server.\nJul 13 14:36:40 docker systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nJul 13 14:36:40 docker systemd[1]: mongod.service: Failed with result 'exit-code'.\nJul 13 14:36:40 docker systemd[1]: mongod.service: Failed with result 'exit-code'.ss -tlnp\nps -aef | grep [m]ongod\ndocker ps\n",
"text": "Your restart that indicates that it fails.And in the same post, you are able to connect and call db.createUser.This is inconsistent. If mongod does not start then you cannot connect. If you can connect then another instance is running or you are not connecting to the instance you think you are starting.FromJul 13 14:36:40 docker systemd[1]: mongod.service: Failed with result 'exit-code'.it looks like you are trying to start a docker instance. It is possible then when you connect you try to connect to a local instance, which is not using the configuration file you shared and is not running with authentication.To know more about your setup please share the output of the following commands:",
"username": "steevej"
},
{
"code": "",
"text": "mongo started (it is how I understand it), see",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Thanks, I saw that, but I have some doubts about the whole setup. So I am still interested to see the output of the commands.I would also like to see the command used to connect.",
"username": "steevej"
},
{
"code": "ss -ltnp \nss -ltnp\nState Recv-Q Send-Q Local Address:Port Peer Address:Port Process \nLISTEN 0 4096 127.0.0.1:8125 0.0.0.0:* users:((\"netdata\",pid=5347,fd=68)) \nLISTEN 0 4096 0.0.0.0:30783 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=254)) \nLISTEN 0 4096 127.0.0.1:19999 0.0.0.0:* users:((\"netdata\",pid=5347,fd=5)) \nLISTEN 0 4096 0.0.0.0:31808 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=43)) \nLISTEN 0 64 0.0.0.0:2049 0.0.0.0:* \nLISTEN 0 4096 0.0.0.0:10050 0.0.0.0:* users:((\"zabbix_agentd\",pid=1809,fd=4),(\"zabbix_agentd\",pid=1808,fd=4),(\"zabbix_agentd\",pid=1807,fd=4),(\"zabbix_agentd\",pid=1806,fd=4),(\"zabbix_agentd\",pid=1805,fd=4),(\"zabbix_agentd\",pid=1769,fd=4))\nLISTEN 0 4096 192.168.10.67:27011 0.0.0.0:* users:((\"mongod\",pid=129697,fd=14)) \nLISTEN 0 4096 127.0.0.1:27011 0.0.0.0:* users:((\"mongod\",pid=129697,fd=13)) \nLISTEN 0 4096 127.0.0.1:2947 0.0.0.0:* users:((\"systemd\",pid=1,fd=280)) \nLISTEN 0 4096 192.168.10.67:27012 0.0.0.0:* users:((\"mongod\",pid=44529,fd=14)) \nLISTEN 0 4096 127.0.0.1:27012 0.0.0.0:* users:((\"mongod\",pid=44529,fd=13)) \nLISTEN 0 4096 192.168.10.67:27013 0.0.0.0:* users:((\"mongod\",pid=44585,fd=14)) \nLISTEN 0 4096 127.0.0.1:27013 0.0.0.0:* users:((\"mongod\",pid=44585,fd=13)) \nLISTEN 0 4096 127.0.0.1:10248 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=279)) \nLISTEN 0 4096 127.0.0.1:27017 0.0.0.0:* users:((\"mongod\",pid=141575,fd=12)) \nLISTEN 0 4096 127.0.0.1:10249 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=248)) \nLISTEN 0 3 127.0.0.1:2601 0.0.0.0:* users:((\"zebra\",pid=1612,fd=25)) \nLISTEN 0 80 0.0.0.0:3306 0.0.0.0:* users:((\"mariadbd\",pid=1821,fd=31)) \nLISTEN 0 4096 0.0.0.0:59563 0.0.0.0:* users:((\"rpc.mountd\",pid=1990,fd=5)) \nLISTEN 0 511 127.0.0.1:6379 0.0.0.0:* users:((\"redis-server\",pid=1757,fd=6)) \nLISTEN 0 4096 127.0.0.1:6444 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=22)) \nLISTEN 0 4096 0.0.0.0:37261 0.0.0.0:* users:((\"rpc.statd\",pid=1989,fd=9)) \nLISTEN 0 10 127.0.0.1:5038 0.0.0.0:* users:((\"asterisk\",pid=7558,fd=7)) \nLISTEN 0 4096 0.0.0.0:47279 0.0.0.0:* users:((\"rpc.mountd\",pid=1990,fd=9)) \nLISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:((\"rpcbind\",pid=1204,fd=4),(\"systemd\",pid=1,fd=235)) \nLISTEN 0 4096 127.0.0.1:10256 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=257)) \nLISTEN 0 4096 127.0.0.1:10257 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=210)) \nLISTEN 0 4096 127.0.0.1:10258 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=201)) \nLISTEN 0 4096 127.0.0.1:10259 0.0.0.0:* users:((\"k3s-server\",pid=1958,fd=219)) \nLISTEN 0 4096 0.0.0.0:47219 0.0.0.0:* users:((\"rpc.mountd\",pid=1990,fd=13)) \nLISTEN 0 32 10.234.225.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=13324,fd=7)) \nLISTEN 0 32 192.168.12.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=3251,fd=6)) \nLISTEN 0 32 192.168.11.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=3216,fd=6)) \nLISTEN 0 32 192.168.100.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=3183,fd=6)) \nLISTEN 0 4096 127.0.2.1:53 0.0.0.0:* users:((\"dnscrypt-proxy\",pid=1749,fd=8),(\"systemd\",pid=1,fd=269)) \nLISTEN 0 128 127.0.0.1:8118 0.0.0.0:* users:((\"privoxy\",pid=2114,fd=4)) \nLISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:((\"sshd\",pid=1814,fd=3)) \nLISTEN 0 128 127.0.0.1:631 0.0.0.0:* users:((\"cupsd\",pid=1748,fd=8)) \nLISTEN 0 244 127.0.0.1:5432 0.0.0.0:* users:((\"postgres\",pid=1922,fd=4)) \nLISTEN 0 3 127.0.0.1:2616 0.0.0.0:* users:((\"staticd\",pid=1620,fd=12)) \nLISTEN 0 244 127.0.0.1:5433 0.0.0.0:* users:((\"postgres\",pid=1923,fd=6)) \nLISTEN 0 64 0.0.0.0:37849 0.0.0.0:* \nLISTEN 
0 4096 127.0.0.1:10010 0.0.0.0:* users:((\"containerd\",pid=8951,fd=18)) \nLISTEN 0 244 127.0.0.1:5434 0.0.0.0:* users:((\"postgres\",pid=1849,fd=6)) \nLISTEN 0 4096 127.0.0.1:9050 0.0.0.0:* users:((\"tor\",pid=1851,fd=6)) \nLISTEN 0 4096 [::1]:8125 [::]:* users:((\"netdata\",pid=5347,fd=67)) \nLISTEN 0 64 [::]:2049 [::]:* \nLISTEN 0 4096 [::]:10050 [::]:* users:((\"zabbix_agentd\",pid=1809,fd=5),(\"zabbix_agentd\",pid=1808,fd=5),(\"zabbix_agentd\",pid=1807,fd=5),(\"zabbix_agentd\",pid=1806,fd=5),(\"zabbix_agentd\",pid=1805,fd=5),(\"zabbix_agentd\",pid=1769,fd=5))\nLISTEN 0 4096 [::]:53539 [::]:* users:((\"rpc.mountd\",pid=1990,fd=15)) \nLISTEN 0 4096 [::1]:2947 [::]:* users:((\"systemd\",pid=1,fd=279)) \nLISTEN 0 4096 [::]:42597 [::]:* users:((\"rpc.statd\",pid=1989,fd=11)) \nLISTEN 0 4096 *:10250 *:* users:((\"k3s-server\",pid=1958,fd=278)) \nLISTEN 0 80 [::]:3306 [::]:* users:((\"mariadbd\",pid=1821,fd=32)) \nLISTEN 0 4096 *:10251 *:* users:((\"k3s-server\",pid=1958,fd=218)) \nLISTEN 0 4096 *:6443 *:* users:((\"k3s-server\",pid=1958,fd=14)) \nLISTEN 0 511 [::1]:6379 [::]:* users:((\"redis-server\",pid=1757,fd=7)) \nLISTEN 0 4096 [::]:49839 [::]:* users:((\"rpc.mountd\",pid=1990,fd=7)) \nLISTEN 0 4096 [::]:111 [::]:* users:((\"rpcbind\",pid=1204,fd=6),(\"systemd\",pid=1,fd=237)) \nLISTEN 0 511 *:80 *:* users:((\"apache2\",pid=11166,fd=4),(\"apache2\",pid=11165,fd=4),(\"apache2\",pid=11159,fd=4),(\"apache2\",pid=3098,fd=4),(\"apache2\",pid=3097,fd=4),(\"apache2\",pid=3096,fd=4),(\"apache2\",pid=3095,fd=4),(\"apache2\",pid=3094,fd=4),(\"apache2\",pid=3093,fd=4),(\"apache2\",pid=3053,fd=4),(\"apache2\",pid=3044,fd=4))\nLISTEN 0 64 [::]:43635 [::]:* \nLISTEN 0 128 [::1]:8118 [::]:* users:((\"privoxy\",pid=2114,fd=5)) \nLISTEN 0 128 [::]:22 [::]:* users:((\"sshd\",pid=1814,fd=4)) \nLISTEN 0 128 [::1]:631 [::]:* users:((\"cupsd\",pid=1748,fd=7)) \nLISTEN 0 244 [::1]:5432 [::]:* users:((\"postgres\",pid=1922,fd=3)) \nLISTEN 0 244 [::1]:5433 [::]:* users:((\"postgres\",pid=1923,fd=5)) \nLISTEN 0 244 [::1]:5434 [::]:* users:((\"postgres\",pid=1849,fd=5)) \nLISTEN 0 4096 [::]:44699 [::]:* users:((\"rpc.mountd\",pid=1990,fd=11)) \nLISTEN 0 4096 *:6556 *:* users:((\"systemd\",pid=1,fd=296)) \ndocker -ps \n\ndocker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n\nps -aef |grep [m]ongodb\n\nps -aef |grep [m]ongodb\nuli 127446 45830 0 15:58 pts/16 00:00:04 mongosh mongodb://local\nmongodb 141575 1 0 16:49 ? 00:00:06 /usr/bin/mongod --config /etc/mongod.conf\n",
"text": "Hi,following you advice uncommentig authorization with # I get the same error when I try to create the m103-admin userhere are the outputs ofHope this will help.Thanks,\nUli",
"username": "Ulrich_Kleemann1"
},
{
"code": "mongod.conf",
"text": "can you also show your current mongod.conf file (with disabled access control)",
"username": "Arkadiusz_Borucki"
},
{
"code": "ping local\nps -aef | grep [m]ongodroot@docker",
"text": "More weird stuff.Output ofThe ss -tlnp output shows at least 3 instances of mongod listening.LISTEN 0 4096 192.168.10.67:27012 0.0.0.0:* users:((“mongod”,pid=44529,fd=14))\nLISTEN 0 4096 192.168.10.67:27013 0.0.0.0:* users:((“mongod”,pid=44585,fd=14))\nLISTEN 0 4096 127.0.0.1:27017 0.0.0.0:* users:((“mongod”,pid=141575,fd=12))But your ps output only shows:mongodb 141575 1 0 16:49 ? 00:00:06 /usr/bin/mongod --config /etc/mongod.confMay it is because you didps -aef |grep [m]ongodbrather thanps -aef | grep [m]ongodThe trailing b you added might the others not show if started by another user. This, or the output is redacted.With which user are you runningdocker psCan you do it asroot@docker",
"username": "steevej"
},
{
"code": "",
"text": "hi steeve,ps -aef | grep [m]ongod gives meps -aef |grep [m]ongodb\nuli 127446 45830 0 15:58 pts/16 00:00:04 mongosh mongodb://local\nmongodb 141575 1 0 16:49 ? 00:00:06 /usr/bin/mongod --config /etc/mongod.confthe docker ps command I run as root@docker but docker is no docker container just a hostname therefore it shows no running docker containersRegards,Uli",
"username": "Ulrich_Kleemann1"
},
{
"code": "",
"text": "hi steeve,thanks for you help. the 3 instnaces I made to create a local replica set following this tutorial from mongo m103-courseCloud: MongoDB Cloudthats what I want to do so the ps command gives you 3 instances on 3 different ports 27011 27012 and 27013my mongod.conf with diabled commented security section looks like this# mongod.conf# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/# Where and how to store data.\nstorage:# where to write logging data.\nsystemLog:# network interfaces\nnet:# how the process runs\nprocessManagement:#security:\n# authorization: disabledRegards,Uli",
"username": "Ulrich_Kleemann1"
},
{
"code": "ps -aef | grep 44529\nps -aef | grep 44585\nping local\n",
"text": "If the processes are listening they should show up with ps.ShareOnce againOutput ofRemove the trailing d fromps -aef | grep [m]ongodand do it as root.",
"username": "steevej"
}
] | Cannot add authorization "enable" to mongod.conf yaml ccp-error | 2022-07-13T12:06:18.256Z | Cannot add authorization “enable” to mongod.conf yaml ccp-error | 8,451 |
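A note for readers of the thread above: the usual sequence is to create the administrative user first (while access control is still disabled) and only then enable authorization and restart mongod. A minimal sketch in mongosh, keeping the m103-admin name from the thread; the password and role are placeholders to adapt:

use admin
db.createUser({
  user: "m103-admin",
  pwd: "choose-a-password",                      // placeholder
  roles: [ { role: "root", db: "admin" } ]       // course-style superuser; narrow this in production
})

After the user exists, re-enable the security block in mongod.conf and restart. YAML is indentation sensitive (two spaces, no tabs), which is what the thread title is about:

security:
  authorization: enabled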
[
"queries",
"crud",
"sharding"
] | [
{
"code": "",
"text": "Let’s say you’ve set up a sharded cluster with data in two zones like in this example:And you have an application that also runs in two separate regions (one in us and one in Europe). How does the reads and writes from the application running in the European data center get routed to the European shard, and vice versa? Do the two application use different connection strings, so that the routing is handled at the DNS level? Or do both applications connect using the same connection string? If so, won’t that mean that one of the application instances will have to do a cross-region connection, even if subsequent connections are to the closest shard?Thanks",
"username": "John_Knoop"
},
{
"code": "",
"text": "Hi John,I am going to answer your question assuming you’re using MongoDB Atlas’s implementation of this concept, which Atlas calls “Global Clusters” (read more here: https://docs.atlas.mongodb.com/global-clusters/)What happens is that Atlas puts a query router node (called a mongos in backend nomenclature) in every zone: then the concise SRV connection string which is shared globally allows the application-tier drivers to poll those query routers (mongos): the driver automatically connects to the nearest query router (mongos). This means that if you have an app in Europe, that app will connect to the data shards behind the scenes via that query router co-located with the European zone.To explain in more depth: the SRV connection string essentially enables the driver to look up a comma-delineated list of mongos hostnames, which are going to be located globally wherever there is a zone. The driver uses this list to find the nearest one to start using for queries.Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hi Andrew and thanks for the quick response!So it’s the SRV connection string that does the magic? That’s some kind of DNS feature right? Is that the topic I should read up on if I want to understand how each app instance is guaranteed to connect to the nearest shard?",
"username": "John_Knoop"
},
{
"code": "",
"text": "Hi John,Honestly even if you used the legacy connection string (a large list of mongos hostnames comma delineated) it’s the same thing: all SRV does is make that list a single hostname so that it’s more concise – and SRV makes that long list accessible from DNS directly. But there’s no real magic in that in both cases the driver gets the full list and then finds the closest one.Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Oh, I see. Yeah, that makes sense.Do you know how the driver figures out which shard is the closest? Is that something that is implemented in the driver itself somehow, or is that handled automatically by some internet protocol?Thanks",
"username": "John_Knoop"
},
{
"code": "",
"text": "Also, what’s the differenct between Global Clusters and “Multi-Region Deployments” mentioned on this page?Thanks again!",
"username": "John_Knoop"
},
{
"code": "",
"text": "The driver just sees which of the mongos’s it gets a ping response from fastest (so no real magic here).Regarding your second-question, Multi-Region refers to replication, where there is a preferred region that writes go to (unless that region is lost and there’s a failover to another region) as opposed to Atlas Global Clusters which have different write regions in different parts of the world!I am happy you’re asking because these concepts are so nuanced and we need to figure out how to position them better.Cheers\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Ok, so if we want geographic sharding on Atlas, then Global Clusters is the way to go? Do you know if Global Clusters will eventually come as serverless?",
"username": "John_Knoop"
},
{
"code": "",
"text": "That’s a great question. Serverless is a long and major journey for us starting with the basics and building up, so not over the near-term but long-term yes. I also believe it’s important to maintain an open mind and continue to evaluate the best strategy and user experience for delivering a global in-region latency application experience. We like our model of embedding the location in the schema but I wonder if over time we’re going to need to invest in capabilities or see others democratize this further at the app tier since the database doesn’t live in isolation. Feedback appreciated",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hi Andrew,\nI have a shard that has a primary node in one region A and a secondary in region B, and an aplication in region B. The documentation seems to indicate that all traffic goes from the application to the primary Node in region B instead to go to the secondary node in the same region.\nThat is correct ?",
"username": "Enzo_Angel_Trevisan"
}
] | Routing in geographic sharding | 2021-09-27T16:32:36.948Z | Routing in geographic sharding | 3,214 |
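To make the driver behaviour above concrete: both application instances can share the same SRV connection string, and the read preference decides whether reads may be served by a nearby secondary (the last question in the thread) instead of always going to the primary. A minimal Node.js sketch; the URI, database and collection names are placeholders:

const { MongoClient } = require("mongodb");

// Without a readPreference all reads go to the primary; "nearest" lets the
// driver pick the lowest-latency eligible member (at the cost of possibly stale reads).
const uri = "mongodb+srv://user:pass@cluster0.example.mongodb.net/?readPreference=nearest";
const client = new MongoClient(uri);

async function run() {
  await client.connect();
  const doc = await client.db("app").collection("orders").findOne({});
  console.log(doc);
  await client.close();
}
run();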
|
null | [
"aggregation",
"queries",
"node-js",
"data-modeling"
] | [
{
"code": "",
"text": "Hi everyone,I’ve been using a location-based search feature in an application, and I’ve set a radius of 5 miles to find nearby places. However, I’m puzzled because I’m only getting one result, when I know there should be more businesses or locations within that radius.I’ve double-checked the coordinates and the radius parameter in my query, and they seem to be correct. Is there something else I should consider or check to ensure that I receive all the expected results within the specified radius? Any help or suggestions would be greatly appreciated!Thank you!HERE IS MY CODE\nHERE IS MY MONGODB DOCUMENT IN THE DATABASE\nHERE IS MY RESULT OF THE QUERYdon’t mind the sample data i replaced. i just want to symbolize that only one fulfillment location comes back although the first 2 are even almost next to each other!\nno matter which maxDistance I specify only one comes back. If i set the radius to 1 i get no result. the first one is about 23 km away. if i set 1000 km i get only the first one back (why ???) . If i do 10km i get none back (what is actually right). I am really getting desperate!PLEASE PING ME If you answer. Thanks a lot ",
"username": "Jamez_Fatout1"
},
{
"code": "",
"text": "Hi Jamez, have you gotten to the bottom of this? Is it possible that your use of $addFields with $first after the unwind stage is essentially replacing each array element with the same values such that you’re losing the others? I did see node.js - How can I do a $geoNear aggregation on embedded documents? - Stack Overflow separately but I don’t think that’s the issue here if you’re just trying to deal with all array elements in a document that has at least one array element that fits the geo query requirement",
"username": "Andrew_Davidson"
}
] | MONGODB Why Am I Getting Only One Result When There Should Be More in the Specified Radius? | 2023-10-06T10:03:11.309Z | MONGODB Why Am I Getting Only One Result When There Should Be More in the Specified Radius? | 377 |
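One thing worth checking in cases like the thread above is distance units: with GeoJSON points and a 2dsphere index, $geoNear and $near take distances in meters, while $geoWithin with $centerSphere takes radians, so passing miles or kilometres directly can silently shrink or inflate the search area. A minimal sketch of a meters-based query; the collection, field names and coordinates are placeholders rather than values from the post:

// assumes: db.stores.createIndex({ "fulfillment.location": "2dsphere" })
db.stores.aggregate([
  {
    $geoNear: {                                                  // must be the first stage
      near: { type: "Point", coordinates: [ -73.99, 40.73 ] },   // [longitude, latitude]
      distanceField: "distanceMeters",
      maxDistance: 5 * 1609.34,                                  // 5 miles expressed in meters
      key: "fulfillment.location",
      spherical: true
    }
  }
])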
[
"queries",
"node-js",
"crud",
"mongoose-odm"
] | [
{
"code": "Product.updateMany(\n { score: { $exists: true } },\n [{ $set: { score: { $round: [{ $divide: [\"$score\", 2] }] } } }],\n);\n",
"text": "Hey there,\nI’m using mongodb atlas in my web application. Recently I’ve added new batch operation that is executed weekly, but pretty soon noticed some unexpected problem with it. The idea is the following: I need to halve one field in every document in collection, so I perform the following operation (using the latest mongoose):the collection contains >3mln documents and I’m totally fine if it takes time, as I said it’s a background job that runs weekly.However, during this operation the database server becomes extremely slow, almost unresponsive + I see some crazy spikes in out (and a bit of in) network traffic which I can not understand (the update operation itself just returns update statistic).I can provide any information that might help to solve this problem, my setup is pretty standard - just a regular atlas cluster on M20 (General), MongoDB version 5.0.21. You can see the spike in the chart bellowScreenshot 2023-10-18 at 19.58.043348×1084 364 KB",
"username": "Sergey_Zelenov"
},
{
"code": "",
"text": "Are you using a replication set ?We usually use secondaries as read preference, so that primary instance can focus on write traffic.Did you see a cpu usage spike during that window?",
"username": "Kobe_W"
},
{
"code": "updateManyscoredb.products.createIndex(\n { score: 1 },\n { partialFilterExpression: { score: { $exists: true } } }\n)\nupdateManybatchStart = new Date();\n// only project the _id field as that's all we'll need for the batch operations\nfilter = db.products.find({ score: { $exists: true }, last_updated: { $lte: batchStart } }, { _id: 1 });\ncount = filter.countDocuments();\nwhile (count > 0) {\n operations = [];\n filter.limit(20000).forEach(function(d) {\n // setup an updateOne operation that filters on _id\n // add operation to \"operations\" array\n });\n db.products.bulkWrite(operations);\n count = filter.countDocuments();\n console.log(count + \" documents remaining\");\n sleep(500);\n}\nlast_updateddb.products.createIndex(\n { last_updated: 1, score: 1, _id: 1 },\n { partialFilterExpression: { score: { $exists: true } } }\n)\n",
"text": "Hey @Sergey_Zelenov,However, during this operation the database server becomes extremely slow, almost unresponsive + I see some crazy spikes in out (and a bit of in) network traffic which I can not understand (the update operation itself just returns update statistic).The spike in traffic out of the replica set primary in this case isn’t really surprising. Since this is a replica set, the changes to the documents the updateMany operation is performing are also being written to the replica set oplog, which is being tailed by the secondary members (over the network) to ensure data changes can be applied to the other members.If a number of documents are modified on the primary in a short period of time, those changes will be replicated to all secondary members over the network.The cluster becoming unresponsive during these times is more likely due to storage layer resource exhaustion. If you need to write a lot of data on an M20, the available disks typically don’t offer much in terms of IOPS (see Fix IOPS Issues), so if you don’t want to scale up to an M30 (where greater IOPS are availble) you can try the following:Ensure you have an index to satisfy the filter criteria of your update\nSince you’re filtering on score, having the following index would ensure you’re efficiently identifying the documents you want to update:The above is created as a partial index providing sparse index functionality to more efficiently identify only documents where the field exists.Logically batch the work and add artificial delays\nSince the updateMany is potentially targeting all documents in the collection, if you’re ok with the process taking a little longer you could batch the job so that it only updates a limited number of documents at once, dwells, then repeats.For example you could iteratively update 20000 documents at a time using the bulk write API (pseudocode):For this to work the update operations would also need to set the update documents last_updated field to a current timestamp. You’d also want to ensure the index recommended above was modified to be:Note that the above should be tested thoroughly before being applied to production data to ensure it’s producing the desired result ",
"username": "alexbevi"
},
{
"code": "",
"text": "thank you so much @alexbevi for the detailed answer, and especially - for getting into the problem and offering valuable solutions.\neverything is super clear to me, I think we won’t migrate at the moment (this task alone is not that important), so I would rather modify the snippet to batch updating as you suggested.",
"username": "Sergey_Zelenov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update operation makes server unresponsive | 2023-10-18T17:57:04.473Z | Update operation makes server unresponsive | 196 |
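A slightly more concrete version of the batching idea discussed above, as it might look in mongosh. It uses a hypothetical scoreHalvedAt marker field (instead of the last_updated field suggested earlier) so each document is halved exactly once; the batch size, sleep interval and collection name are placeholders to tune:

while (true) {
  // fetch the _ids of up to 20,000 documents that still need halving
  const ids = db.products
    .find({ score: { $exists: true }, scoreHalvedAt: { $exists: false } }, { _id: 1 })
    .limit(20000)
    .toArray()
    .map(d => d._id);
  if (ids.length === 0) break;

  db.products.bulkWrite(ids.map(_id => ({
    updateOne: {
      filter: { _id: _id },
      update: [{ $set: {
        score: { $round: [{ $divide: ["$score", 2] }] },
        scoreHalvedAt: "$$NOW"          // marker so the document is not picked up again
      } }]
    }
  })));

  print(ids.length + " documents updated, sleeping...");
  sleep(500);                           // mongosh helper; throttles writes to ease IOPS pressure
}
// optionally, once the whole job is done:
// db.products.updateMany({}, { $unset: { scoreHalvedAt: "" } })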
|
null | [
"aggregation",
"crud"
] | [
{
"code": "",
"text": "Hi all / MongoDB team!Are the following CRUD topics tested in the exam:My exam scheduled for this Friday, hoping to get a response before then Thanks everyone!",
"username": "Anthony_Barnes1"
},
{
"code": "",
"text": "Hi @Anthony_Barnes1\nWe suggest reviewing the Associate Developer Exam Study Guide. We have provided the exam objectives and topic level weighting for your reference.\nIf you have any further questions, please reach out to [email protected]\nGood luck on your exam!",
"username": "Heather_Davis"
}
] | CRUD Operations on the exam | 2023-10-18T14:00:38.873Z | CRUD Operations on the exam | 241 |
[
"atlas-cluster"
] | [
{
"code": "",
"text": "Hi,I’m trying to connect to MongoDB Atlas SQL ODBC, but I get the following error:image1026×522 26.4 KBIn field MongoDB set: mongodb+srv://fortaleza-production.xv7f8.mongodb.net",
"username": "Alcides_Tiriritan_Neto"
},
{
"code": "",
"text": "Hi There - I can tell right away that you are not using the correct MongoDB URI. Atlas SQL requires a Federated Database Instance, and then the SQL Endpoint/URI points to that. You can enable Atlas SQL with a Quickstart federated database or Advanced configuration - see here https://www.mongodb.com/docs/atlas/data-federation/query/sql/getting-started/And once you have enabled Atlas SQL, you then select the “connect” button from the federated database instance cards then select the Atlas SQL tile:\nScreenshot 2023-10-19 at 10.17.33 AM1537×468 43.3 KB\nScreenshot 2023-10-19 at 10.17.59 AM915×724 62.9 KBHope this helps.",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "Hi Alexi,Thanks for the quick responseI’ll test it with this solution you gave me.I saw in the image that the driver is selected for PowerBI. Can you tell me if I can make it work this same way in the Alteryx program?Thank you again",
"username": "Alcides_Tiriritan_Neto"
}
] | Error connect MongoDB Atlas SQL ODBC | 2023-10-19T14:00:22.445Z | Error connect MongoDB Atlas SQL ODBC | 154 |
|
[
"time-series",
"storage"
] | [
{
"code": "timeseriesIdleBucketExpiryMemoryUsageThresholdbucketMaxSpanSeconds\": 31536000# Type 1:\n{\"metadata\": {\"id_1\": \"string\", \"id_2\": \"string\"}, \n\t\"date\": \"Date\", \"d_0\": \"int\", \"d_i\": \"int\", \"d_2\": \"int\"}\n# 500_000 unique pairs of (\"id_1\", \"id_2\"), avg 15 records in month for id pair, 12 months\n\n\n# Type 2: # more measurement values, 20 instead of 2 - one record size is larger\n{\"metadata\": {\"id_1\": \"string\", \"id_2\": \"string\"}, \n\t\"date\": \"Date\", \"d_0\": \"int\", \"d_i\": \"int\", \"d_i\": \"int\", ... , \"d_N\": \"int\"} # N = 20\n# 25_000 unique pairs (\"id_1\", \"id_2\"), avg 15 records in month for id pair, 12 months\ncollstats{'timeseries.bucketCount': 25,000, 'avgNumMeasurementsPerCommit': 17}collstats{'timeseries.bucketCount': 420,000 # (slightly less than the number of elements in the chunk), 'avgNumMeasurementsPerCommit': 1, numBucketsClosedDueToCachePressure: 415,000}bucket.size + sizeToBeAdded > effectiveMaxSize# bucket_catalog_internal.cpp, determineRolloverAction function\nauto bucketMaxSize = getCacheDerivedBucketMaxSize(opCtx->getServiceContext()->getStorageEngine(), \n\tcatalog.numberOfActiveBuckets.load()); # bucketMaxSize = storageCacheSize / (2 * numberOfActiveBuckets)\n\nint32_t effectiveMaxSize = std::min(gTimeseriesBucketMaxSize, bucketMaxSize);\nbucketMaxSizebucketMaxSizedb.runCommand({ serverStatus: 1 }).bucketCatalognumberOfActiveBucketsnumberOfOpenedBucketsgTimeseriesBucketMinSize",
"text": "Brief description:When using timeseries-collections, there comes a point when new buckets are created non-optimally and contain only 1 document ( but if you initiate inserting of such collection on mongo without load ( e.g. just after start ), documents with same metadata will be grouped to one bucket ).Server parameters: 128 GB RAM, memory usage during the experiment does not exceed 80 GB.\nMongo settings: all default ( WiredTiger Cache : 50% RAM, timeseriesIdleBucketExpiryMemoryUsageThreshold set to 3 GB). Pymongo driver is used.Experiment Description:\nLoading is performed into several collections (50 in total).The general schema of collections provided below (collections are created with the parameter bucketMaxSpanSeconds\": 31536000, date is rounded to day ):Load testing: during the load testing 50 collections are created, after that data in them is inserted by chunks ( chunk contains all dates from specific month for set of id_pairs ), starting with one month, then proceeding to the entire following month, and so on. Operations across different collections are interleaved. Dates within each chunk are sorted. Chunks of specific collection also inserted in time-order.Issue:\nIf a new collection is created and a Type 2 collection chunk inserted at the beginning of the load testing ( or before it ), the collstats results are: {'timeseries.bucketCount': 25,000, 'avgNumMeasurementsPerCommit': 17}.However, if a new collection is created and a Type 2 collection chunk is loaded after loading has been performed in other collections ( during load testing ), the collstats results are vastly different: {'timeseries.bucketCount': 420,000 # (slightly less than the number of elements in the chunk), 'avgNumMeasurementsPerCommit': 1, numBucketsClosedDueToCachePressure: 415,000}. This means that records with the same id pair were not merged, resulting in buckets with a size of 1.As a result, buckets with a size of 1 are created, although without loading they were of size 17 (for a single chunk) and 200 (for all inserted months). Consequently, the collection begins to occupy significantly more space ( at least 3 times more ) compared to the same collection loaded without any system stress. So bucket effective size optimization seems to create even more load.Possible reasons:\nAs far as I understand, when adding a new document, there is a check for the ability to add it to an already opened bucket in the catalog ( check by time, the number of measurements, and so on ). One of the checks involves comparing the size after a potential addition of a new document to the bucket with the effective maximum size of the bucket: bucket.size + sizeToBeAdded > effectiveMaxSize.The effective size of the bucket, in turn, depends on bucketMaxSize. bucketMaxSize becomes smaller with more active buckets. With time it becomes comparable to the size of one record.The graphs showing the changes in the number of buckets and memory during load testing are presented below. Generated based on the statistics from db.runCommand({ serverStatus: 1 }).bucketCatalog.buckets_num1833×809 83.9 KBQuestions:",
"username": "ktverdov"
},
{
"code": "",
"text": "“New users can add only one media.”\nSo closed buckets graph:\nclosed_buckets1827×793 62.7 KB",
"username": "ktverdov"
},
{
"code": "",
"text": "bucketCatalog memory usage:\nbucket_catalog_memory1829×783 65.5 KB",
"username": "ktverdov"
}
] | Suboptimal bucket creation in timeseries collection due to cache pressure | 2023-10-19T14:13:02.846Z | Suboptimal bucket creation in timeseries collection due to cache pressure | 173 |
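For anyone reproducing the measurements above, a small mongosh sketch of creating a comparable time series collection and reading the statistics referenced in the post. The collection name is a placeholder; the custom bucketing parameters mirror the 31536000-second span from the post and, on server versions that accept them, generally need to be set together:

db.createCollection("metrics_type1", {
  timeseries: {
    timeField: "date",
    metaField: "metadata",
    bucketMaxSpanSeconds: 31536000,
    bucketRoundingSeconds: 31536000
  }
});

// per-collection bucket statistics (bucketCount, avgNumMeasurementsPerCommit, ...)
db.metrics_type1.stats().timeseries;

// catalog-wide counters shown in the graphs (numberOfActiveBuckets, memory usage, ...)
db.serverStatus().bucketCatalog;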
|
null | [] | [
{
"code": " Error: Could not find the Realm binary.expo init MyAwesomeRealmApp --template @realm/expo-templatenpx expo startError: Could not find the Realm binary. Please consult our troubleshooting guide: https://www.mongodb.com/docs/realm-sdks/js/latest/#md:troubleshooting-missing-binary, js engine: hermes",
"text": "I am creating a new app that will use Expo 49 and realm. However, I am getting the Error: Could not find the Realm binary. error when running the app on expo go. Here are the step to repo:When the app compile, Error: Could not find the Realm binary. Please consult our troubleshooting guide: https://www.mongodb.com/docs/realm-sdks/js/latest/#md:troubleshooting-missing-binary, js engine: hermes show up.",
"username": "Chongju_Mai"
},
{
"code": "",
"text": "Can someone hear here? I am stuck with same problem; on my own project I am facing with same problem. My related dependencies are below:\n“dependencies”: {\n“@realm/react”: “^0.6.1”,\n“expo”: “^49.0.13”,\n“react-native”: “0.72.6”,\n“realm”: “^12.2.1”;\n…\n}\nAfter running “expo-cli start --tunnel”, and scan qr code with my iphone, I get “Could not find the Realm binary.” error.",
"username": "Tuna_DAG"
},
{
"code": "exporealm",
"text": "@Chongju_Mai @Tuna_DAG Follow the link from the error. There is an expo section. Follow these steps to compile realm into Expo. You will not be able to use Expo Go for this, as we are not included in the base SDK for Expo.",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "Thank you for fast reply. For my situation, I understood that problem source is “expo-cli start --tunnel” command, as you stated. I tried with emulator, its working.",
"username": "Tuna_DAG"
}
] | Error: Could not find the Realm binary. with | 2023-10-10T21:58:10.657Z | Error: Could not find the Realm binary. with | 657 |
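To make the "not included in Expo Go" point actionable: the usual route for native modules like Realm is an Expo development build. A rough sketch of the commands, run from the project root; exact flags can differ between Expo SDK versions:

npx expo install expo-dev-client    # adds the development-build client
npx expo prebuild --clean           # generates the native ios/ and android/ projects
npx expo run:android                # or: npx expo run:ios (needs Android SDK / Xcode locally)
npx expo start --dev-client         # then connect from the dev build instead of Expo Go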
null | [
"aggregation",
"node-js"
] | [
{
"code": "{\n $lookup: {\n from: \"comments\",\n let: { refId: \"$_id\" },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\"$feed.refId\", \"$$refId\"]\n }\n }\n },\n {\n $project: {\n _id: 1,\n user: 1,\n content: 1,\n parent: 1,\n feed: 1,\n }\n },\n ],\n as: \"comments\",\n }\n},\nconst answer = await Question.aggregate([\n {\n $match: {\n questionLink: slug\n }\n },\n {\n $lookup: {\n from: \"answers\",\n as: \"answers\",\n let: { questionID: \"$_id\" },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\"$questionID\", \"$$questionID\"]\n }\n },\n },\n {\n $lookup: {\n from: \"sign-ups\",\n localField: \"profileID\",\n foreignField: \"_id\",\n as: \"profileID\",\n pipeline: [\n {\n $project: {\n first_name: 1,\n last_name: 1,\n profileImg: 1,\n email: 1,\n further: 1,\n points: 1,\n }\n }\n ]\n }\n },\n {\n $unwind: \"$profileID\"\n },\n {\n $lookup: {\n from: \"comments\",\n let: { refId: \"$_id\" },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\"$feed.refId\", \"$$refId\"]\n }\n }\n },\n {\n $project: {\n _id: 1,\n user: 1,\n content: 1,\n likes: 1,\n isEdited: 1,\n parent: 1,\n depth: 1,\n feed: 1,\n }\n },\n ],\n as: \"comments\",\n }\n },\n ],\n }\n },\n {\n $project: {\n answers: 1,\n }\n }\n ]);\n{\n \"_id\": {\n \"$oid\": \"652ebc783709d2ccaf9102b9\"\n },\n \"user\": {\n \"$oid\": \"617c08cba353ad33ea7ac2dd\"\n },\n \"feed\": {\n \"type\": \"answers\",\n \"refId\": {\n \"$oid\": \"65295a12d1e2dfea76ad4933\"\n }\n },\n \"parent\": null,\n}\n{\n \"_id\": {\n \"$oid\": \"62baeacf9bec5b79db41b189\"\n },\n \"userID\": \"62b58a02a44aca7c8a89a888\",\n \"questionID\": {\n \"$oid\": \"62bae9639bec5b79db41b174\"\n },\n \"answer\": \"xyz\"\n \"dateTime\": \"1656416975346\",\n \"profileID\": {\n \"$oid\": \"62b58a02a44aca7c8a89a888\"\n },\n}\n",
"text": "I’ve tried converting to string to objectId but its still not working, this is the lookup thats not working:its a part of full aggregation query, the $_id is of answer. now the comments are still returned empty array which means the match is not working any idea where could i be wrong ?\nfor reference heres the full query:for further ref this is data snap from DB:\nCOMMENT:ANSWER:",
"username": "Avelon_N_A"
},
{
"code": "",
"text": "Your sample documents from the answers and comments do not match your use-case. The _id from answers do not match feed.refId in *comments.When your $match inside a $lookup simply uses $eq on the fields, you might be better off using localField: and foreignField: just like you do with your $lookup from: sign-ups.",
"username": "steevej"
}
] | _id match not working in lookup | 2023-10-18T12:28:07.024Z | _id match not working in lookup | 189 |
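Spelling out the suggestion above: on MongoDB 5.0+ a $lookup can combine localField/foreignField with a pipeline, so the comments stage could be written roughly like this (collection and field names as in the question):

{
  $lookup: {
    from: "comments",
    localField: "_id",               // the answer's _id
    foreignField: "feed.refId",
    as: "comments",
    pipeline: [
      { $project: { _id: 1, user: 1, content: 1, likes: 1, isEdited: 1, parent: 1, depth: 1, feed: 1 } }
    ]
  }
}

Note that if feed.refId were stored as a string while _id is an ObjectId, the join would still come back empty, since the equality match is type-sensitive; the sample documents in the question do show both as ObjectIds.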
null | [
"app-services-data-access"
] | [
{
"code": "InternalServerError Error\nError:\nerror executing match expression: $and/$or/$nor needs an array\n\t\t{\n\t \"name\": \"system_group\",\n\t\t\t\"apply_when\": { \"%and\": [ { \"%%user.custom_data.group_owner_id\": { \"%exists\": true } }, \n\t\t\t\t\t\t{ \"%%root.owner_id\": \"%%user.custom_data.group_owner_id\" } ] },\n\t \"read\": true,\n\t \"write\": false,\n\t \"insert\": false,\n\t \"delete\": false,\n\t \"search\": true,\n\t\t\t\"additional_fields\": {}\n\t\t},\n",
"text": "Today I received a support request from a user of my web site saying they couldn’t log in. When I investigated I saw a sudden spike in errors in my Atlas App Services log file looking like this:And as far as I could see my web site was unusable. I traced the error to an entry in a rules.json file looking like this:This had been in the code for many months unchanged. However, we decided to remove these entries from the site (which disabled certain functionality) and the errors went away and the web site came back up. I raised a support request and it seems that there was a MongoDb release today that caused an issue. Has anyone else seen anything similar. I’m curious to the exact impact of this as we have other rules that were not impacted by the issue.",
"username": "ConstantSphere"
},
{
"code": "",
"text": "I see from MongoDB support that the issue is now fixed (several hours ago). We have now rolled back our workaround and can confirm the web site is up and running as normal. Thanks to all at MongoDb for such a rapid response.",
"username": "ConstantSphere"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Server errors in log due to rules.json | 2023-10-18T19:47:23.450Z | Server errors in log due to rules.json | 187 |
null | [
"replication"
] | [
{
"code": "> rs.add(\"mongo-vm-a-01:27017\")\n{\n\t\"topologyVersion\" : {\n\t\t\"processId\" : ObjectId(\"652cc54c89b2c853fc478079\"),\n\t\t\"counter\" : NumberLong(1)\n\t},\n\t\"ok\" : 0,\n\t\"errmsg\" : \"New config is rejected :: caused by :: replSetReconfig should only be run on a writable PRIMARY. Current state REMOVED;\",\n\t\"code\" : 10107,\n\t\"codeName\" : \"NotWritablePrimary\"\n}\n",
"text": "Hello! I have a question - there is a replicaset from which I need to move one large database to a new replicaset. What I did - added three new nodes to the existing replicaset, set priority and votes to 0 to bypass the limitation on the number of nodes in the cluster. I caught up the new nodes with a lag of 0 seconds, removed them from the old replicaset, and changed the replsetname in mongo.conf to the new one. On one of the new nodes, I tried to add the remaining two new nodes using rs.add, but I get an error:I understand why this error occurred - my three new nodes belong to the old replicaset, but I don’t understand how to fix this error. In some sources, I saw that you can try to delete system databases - local, admin, config, but I’m afraid to break something. I ask for your help; I couldn’t find similar topics in the forum.",
"username": "Arthur_Fedorov"
},
{
"code": "--replSetlocal--replSet",
"text": "Hi @Arthur_Fedorov, welcome to the forums & community.This is a variation of restore replica set from backup. You have the datafiles already though.If you’re feeling a little shy about the procedure run a test with a smaller dataset to get confidence with it.",
"username": "chris"
},
{
"code": "",
"text": "@chris , thank you, for your answer!",
"username": "Arthur_Fedorov"
},
{
"code": "rs.add()ctrl+crs.status()",
"text": "Have a last question - from this post Is it possible to split an existent ReplicaSet? - #8 by Prasad_Saya\nI followed all the instructions as described there. A new master of a new replica set started successfully. However, when I try to add another machine to the new replica set, which I previously removed from the other replica set (and, of course, I deleted the “local” database), the rs.add() command on the new master hangs, and nothing happens. I use ctrl+c to interrupt it, but rs.status() shows that the machine with the replica is in the “startup” state, but there is nothing on the replica itself.",
"username": "Arthur_Fedorov"
}
] | Create new replicaset from another replicaset | 2023-10-16T06:25:28.701Z | Create new replicaset from another replicaset | 307 |
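The linked procedure essentially comes down to clearing each moved member's replication metadata before it joins the new set. A rough outline, using the hostname that appears in the thread plus a hypothetical second host and a new set name of newRS; ports, paths and names are placeholders:

# on each node being moved, with mongod stopped:
mongod --dbpath /data/db --port 27017          # start temporarily WITHOUT --replSet
mongosh --port 27017 --eval 'db.getSiblingDB("local").dropDatabase()'
# stop mongod again, set replication.replSetName: newRS in mongod.conf, and start it back up

# in mongosh on the node chosen to seed the new set:
rs.initiate({ _id: "newRS", members: [ { _id: 0, host: "mongo-vm-a-01:27017" } ] })
# once it reports PRIMARY, add the remaining members (each cleaned the same way):
rs.add("mongo-vm-a-02:27017")                  # hypothetical second host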
null | [
"queries"
] | [
{
"code": "",
"text": "I am setting up prometheus mongodb exporter in standalone server. My mongodb is running on localhost:27017 and expose at localhost:9216 but the mongodb_up is still 0.\nERRO[0000] Cannot connect to server using url http://test:testing@localhost:27017/admin: no reachable servers source=“connection.go:86”\nERRO[0000] Can’t create mongo session to http://test:testing@localhost:27017/admin source=“mongodb_collector.go:202”\nFATA[0000] listen tcp localhost::9216: bind: address already in use source=“server.go:122”here is prometheus config\nscrape_configs:",
"username": "MIn_Htet_Khaing"
},
{
"code": "",
"text": "Hello, I have the same problem. Did you find the solution?",
"username": "Joyci_Pereira1"
},
{
"code": "",
"text": "I also tried with external mongodb uri string but it is still going and can’t find solution.",
"username": "MIn_Htet_Khaing"
}
] | No reachable servers source="connection.go:86" | 2023-10-05T03:58:13.823Z | No reachable servers source=“connection.go:86” | 396 |
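Two details stand out in the log above: the exporter is being handed an http:// URL where a mongodb:// URI is expected, and the FATA line means something is already listening on port 9216, most likely an earlier exporter instance. A hedged sketch of pointing the exporter at MongoDB; the exact flag or environment variable name varies between exporter versions, and the credentials are the placeholders from the post:

# stop whatever is already bound to :9216, then start the exporter with a mongodb:// URI
export MONGODB_URI="mongodb://test:testing@localhost:27017/admin"
./mongodb_exporter                  # some versions take --mongodb.uri=... instead of the env var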