image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
[
"connector-for-bi"
]
| [
{
"code": "mysql-connector-odbc-5.3.13mongodb-connector-odbc-1.2.0mongodb-bi-linux-x86_64-ubuntu1604-v2.12.0mongodrdl --host=*.*.*.*** --port=27017 --db=**** --out=outffile.drdl --username=username --password=pass --authenticationDatabase=authdb --authenticationMechanism=SCRAM-SHA-1mongosqld --addr *.*.*.***:3307 --mongo-uri=mongodb://*.*.*.***:27017 --sslMode=disabled --mongo-username=username --mongo-password \"pass\" --auth --mongo-authenticationSource=authdb --schema path_to_drdl_file -vvmongosqld mysql --ssl-mode REQUIRED --ssl-ca=/opt/certs/mdbca.crt --enable-cleartext-plugin --port 3307 -u <username> -pstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\nnet:\n port: 27017\n bindIp: 127.0.0.1\n ssl:\n mode: requireSSL\n PEMKeyFile: '/opt/certs/mdb.pem'\n CAFile: '/opt/certs/mdbca.crt'\n clusterFile: '/opt/certs/mdb.pem'\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n clusterAuthMode: x509\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\nsystemLog:\n logAppend: true\n path: \"/var/log/mongodb/mongosqld.log\"\n verbosity: 2\n\nsecurity:\n enabled: true\n\nmongodb:\n net:\n uri: \"mongodb://127.0.0.1:27017\"\n auth:\n username: \"user2\"\n password: \"user2\"\n ssl:\n enabled: true\n PEMKeyFile: '/opt/certs/mdb.pem'\n CAFile: '/opt/certs/mdbca.crt'\n\nnet:\n bindIp: 0.0.0.0\n port: 3307\n ssl:\n mode: \"requireSSL\"\n PEMKeyFile: \"/opt/certs/mdb.pem\"\n CAFile: \"/opt/certs/mdbca.crt\"\n\nschema:\n# path : \"/home/reizend100/tableau/mongo-odbc-driver/iqraa_analytics11.drdl\"\n sample:\n namespaces: \"iqraa_analytics.*\"\n\nprocessManagement:\n service:\n name: mongosqld\n displayName: mongosqld\n description: \"BI Connector SQL proxy server\"\n",
"text": "Hi,I have set up a Tableau Server (2019.3) for production in Linux (Ubuntu 18.04). My database is MongoDB Enterprise 4.0.19, and so I use MongoDB BI connector to connect to it.I have installed the MYSQL ODBC connector (mysql-connector-odbc-5.3.13) , MongoDB ODBC connector (mongodb-connector-odbc-1.2.0), MongoBI connector (mongodb-bi-linux-x86_64-ubuntu1604-v2.12.0)My MongoDB uses a normal authentication with a username & password without SSL.The following command created a ‘.drdl’ file for the schemamongodrdl --host=*.*.*.*** --port=27017 --db=**** --out=outffile.drdl --username=username --password=pass --authenticationDatabase=authdb --authenticationMechanism=SCRAM-SHA-1I started the MongoSQLd service using the following command:mongosqld --addr *.*.*.***:3307 --mongo-uri=mongodb://*.*.*.***:27017 --sslMode=disabled --mongo-username=username --mongo-password \"pass\" --auth --mongo-authenticationSource=authdb --schema path_to_drdl_file -vvThe connection works fine with Tableau Desktop application but fails when connecting from Tableau Server to MongoDB (for refreshing extracts).Screenshot attached from Tableau Server when I try to test connectionBelow screenshot is the log of MongoSQLd service\n\nrenditionDownload1027×68 4.26 KB\n\nNOTE: It works fine if I disable the MongoDB authentication. But authentication is a requirement.Then I configured SSL for BI Connector as given in MongoDB documentation[https://docs.mongodb.com/bi-connector/v2.12/tutorial/ssl-setup](http://mongo --ssl --host reizend100 --sslCAFile /opt/certs/mdbca.crt --sslPEMKeyFile /opt/certs/mdb.pem)\nand I have edited the mongosqld.conf and mongod.conf but when testing the connection with mongo shell\nit got connected successfullymongo --ssl --host mongodb.localhost --sslCAFile /opt/certs/mdbca.crt --sslPEMKeyFile /opt/certs/mdb.pem -u “admin” -p “abcd” --authenticationDatabase “admin”But when connected to mongosqld instance using the command:\n mysql --ssl-mode REQUIRED --ssl-ca=/opt/certs/mdbca.crt --enable-cleartext-plugin --port 3307 -u <username> -pIt didn’t show the DB in that.\nand the mongosqld log at this time:\nimage819×143 25 KB\nMongod.confMongosqld.congThe user2 has the privilege “find”, “listCollections”I have tried with another user which has “readAnyDatabase” but still not connected from the tableau serverWhat am I missing?This Issues occur when we have authentication in MongoDB and work fine when we disable the Mongodb authentication",
"username": "Nithin_Prasenan"
},
{
"code": "",
"text": "Having the same issue on my local setup.\nDid you find any answer to it?I have a Tableau Server and MongoDB running on Windows 10.My partial solution is to login into Windows admin user and manually make connections, because i access from Tablea to my Mongo using \\localhost ssl is not required to make manual extracts.\nBut for my Tableau Server\\Jobs\\Extract Refresh fails .\n",
"username": "Rodrigo_RRS"
}
]
| MongoSQLd connection not working from Tableau Server | 2020-10-20T09:26:17.859Z | MongoSQLd connection not working from Tableau Server | 4,827 |
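A quick way to separate mongosqld problems from credential problems in threads like the one above is to authenticate with the same MongoDB user directly against mongod and list the collections it can see. Below is a minimal PyMongo sketch of that check; the URI, user, certificate paths, and database name are placeholders loosely based on the posted configs, not confirmed values.

```python
# Sanity check: can the BI Connector's MongoDB user authenticate and list
# collections directly against mongod? (PyMongo; placeholder values.)
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user2:user2@127.0.0.1:27017/"
    "?authSource=authdb"
    "&tls=true"
    "&tlsCAFile=/opt/certs/mdbca.crt"
    "&tlsCertificateKeyFile=/opt/certs/mdb.pem"
)

db = client["iqraa_analytics"]
# Succeeds only if authentication works and the user has listCollections.
print(db.list_collection_names())
```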
|
null | [
"python",
"spark-connector"
]
| [
{
"code": "read_from_mongo = (\n spark.readStream.format(\"mongodb\")\n .option(\"connection.uri\", <mongodb-atlas-uri>)\n .option(\"database\", \"streaming\")\n .option(\"collection\", \"events\")\n .option(\"lookup.full.document\", \"updateLookup\")\n .load()\n .writeStream\n .trigger(continuous=\"10 seconds\")\n .format(\"memory\")\n .queryName(\"v10_stream\")\n .outputMode(\"append\")\n)\n\ny = read_from_mongo.start()\nimport pyspark.sql.functions as F\nimport pyspark.sql.types as T\n\nperiodic_data = (\n spark\n .readStream\n .format(\"rate\")\n .option(\"rowsPerSecond\", 1)\n .load()\n .withColumn(\n 'purpose',\n F.concat_ws(' ', F.lit('one row per second stream to memory'), (F.rand() * 100))\n )\n)\n\nwrite_to_mongo = (\n periodic_data\n .writeStream\n .format(\"mongodb\")\n .option(\"checkpointLocation\", \"/tmp/pyspark/periodic_data\")\n .option(\"forceDeleteTempCheckpointLocation\", \"true\")\n .option(\"connection.uri\", <mongodb-atlas-uri>)\n .option(\"database\", \"streaming\")\n .option(\"collection\", \"events\")\n .outputMode(\"append\")\n)\n\nx = write_to_mongo.start()\n------------------------------------\nBatch: 14\n------------------------------------\n+--------------------+-------+-----+\n| _id|purpose|value|\n+--------------------+-------+-----+\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n+--------------------+-------+-----+\nonly showing top 20 rows\n\n------------------------------------\nBatch: 15\n------------------------------------\n+--------------------+-------+-----+\n| _id|purpose|value|\n+--------------------+-------+-----+\n|{\"_data\": \"82627A...| null| null|\n|{\"_data\": \"82627A...| null| null|\n+--------------------+-------+-----+\n",
"text": "Hello,I’m trying to use the new MongoDB Connector for Spark (V10), mainly for the better support of Spark Structured Streaming.This is my reading stream, watching for changes on a MongoDB collection:And this is a writing stream on the same collection in order to generate inserts and change events.Problem is, the reading stream is only returning the token data and everything else is empty. It’s not even a change stream document, but an incomplete mix of a collection document with the _id replaced with the change event token value.Am I missing something or misconfigured the streams?\nThe relevant documentation: https://www.mongodb.com/docs/spark-connector/current/configuration/read/#change-streamsThanks. Best regards.",
"username": "root"
},
{
"code": ".option(\"change.stream.publish.full.document.only\", \"true\")\n\n.option(\"change.stream.lookup.full.document\", \"updateLookup\")\n",
"text": "Hi,I think you may want to do:Instead of:That way it will output the full document only and not the full change event.You may also want to explicitly set the schema on the readStream if the above change doesn’t work.Ross–\nEdit: Fixed option names.",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Also, another read example is here Streaming Data with Apache Spark and MongoDB | MongoDB",
"username": "Robert_Walters"
},
{
"code": ".option(\"spark.mongodb.change.stream.publish.full.document.only\", \"true\")\n",
"text": "Hello,Thanks for your replies.\nI had already tested the “publish.full.document.only” option without success. I retried it now just in case with the same result.So I copied and pasted the example you linked. Now, it seems that the only time it’s returning data as expected is when specifying this option with the complete prefix:And then I get the full document with data. But in reality what I would want is in fact the full change event to do an Extract-Load ingestion regardless of document structure (fields and types).I know it’s in fact called Structured Streaming for a reason and that it expects a fixed structure, but I would want to utilize the Change Event structure instead of the document collection one to be able to process schema updates on a collection on the fly, even when the streaming process is running.PS: I’m using the this connector “org.mongodb.spark:mongo-spark-connector:10.0.1”Thanks.\nBest regards.",
"username": "root"
},
{
"code": ".option(\"spark.mongodb.change.stream.lookup.full.document\", \"updateLookup\")\n# define a streaming query\nquery = (spark\n .readStream\n .format(\"mongodb\")\n .option(\"spark.mongodb.connection.uri\", <mongodb-connection-string>)\n .option('spark.mongodb.database', <database-name>)\n .option('spark.mongodb.collection', <collection-name>)\n .schema(readSchema)\n .load()\n # manipulate your streaming data\n .writeStream\n .format(\"csv\")\n .option(\"path\", \"/output/\")\n .trigger(continuous=\"1 second\")\n .outputMode(\"append\")\n)\n# run the query\nquery.start()\nPy4JJavaError: An error occurred while calling o1087.start.\n: java.lang.IllegalStateException: Unknown type of trigger: ContinuousTrigger(1000)\n\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution.<init>(MicroBatchExecution.scala:64)\n\tat org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:300)\n\tat org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:349)\n\tat org.apache.spark.sql.streaming.DataStreamWriter.startQuery(DataStreamWriter.scala:458)\n\tat org.apache.spark.sql.streaming.DataStreamWriter.startInternal(DataStreamWriter.scala:437)\n\tat org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:254)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\n\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)\n\tat py4j.Gateway.invoke(Gateway.java:295)\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\n\tat py4j.GatewayConnection.run(GatewayConnection.java:251)\n\tat java.lang.Thread.run(Thread.java:748)\n",
"text": "Hello,So no matter how I try, I still can’t make this option to work:In the meantime, now that “change.stream.publish.full.document.only” returns data, I’m trying to write and save the data back. But because the “mongodb” read stream only works as continuous processing it’s impossible to write in a file sink. Even this example from the documentation returns the following error:I tried the “csv”, “parquet” and “delta” output sinks.",
"username": "root"
},
{
"code": "",
"text": "Hey , did you find any solution? I’m getting the same error with writestream with diff formats .",
"username": "Gs_N_A"
},
{
"code": "",
"text": "The problem is you are trying to write to a stream to a CSV using a continuous trigger which isn’t supported.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "I read mongodb change stream and write to kafka. No matter how I change read.change.stream.publish.full.document.only it doesn’t work.I want to get metadata (operationType etc) but the result only show fullDocument.",
"username": "khang_pham"
},
{
"code": "pipeline=[{\"$match\": { \"$and\": [{\"operationType\": \"insert\"}, { \"fullDocument.eventId\": 321 }] } }, {\"$project\": { \"fullDocument._id\": 0, \"fullDocument.eventId\": 0 } } ]\n",
"text": "If you want to show the operation type but not the fullDocument, just configure the pipeline parameter to project out the operationType and the fields you wish to return.in this example it only shows insert events where the event_id = 321 and does not return the _id or event_id but will return anything else in the document.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "@Robert_Walters can you send me the instruction about setting pipeline in Spark-mongodb connector? I’m using scala btw.",
"username": "khang_pham"
},
{
"code": "",
"text": "Is it possible to get all change event (insert, delete etc) and output to Kafka sink? I want to get all the metadata (at least event type) and fullDocument.Thanks",
"username": "khang_pham"
},
{
"code": "",
"text": "You won’t get delete and its full document because delete, removes the document so it won’t exist when the event is created. to specify the pipeline see https://www.mongodb.com/docs/spark-connector/current/configuration/read/#read-configuration-options. It will be a SparkConf setting so “spark.mongodb.read.aggregation.pipeline”:“[{”$match\": {“operationType”: “insert”}]’ for example",
"username": "Robert_Walters"
},
{
"code": "",
"text": "Thanks Robert. It’s very helpful.",
"username": "khang_pham"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Connector for Spark V10 and Change Stream | 2022-05-10T14:26:56.174Z | MongoDB Connector for Spark V10 and Change Stream | 7,056 |
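Pulling the thread above together, a minimal PySpark read-stream sketch with the two settings that were discussed — publishing only the full document and filtering events with an aggregation pipeline — might look like the following. The URI, database, and collection names are placeholders, and the exact option prefixes can differ between SparkConf settings and per-stream options, so treat this as a starting point rather than a canonical configuration.

```python
# PySpark structured-streaming read from a MongoDB change stream
# (mongo-spark-connector v10). Assumes the connector package is on the
# classpath and <mongodb-atlas-uri> is replaced with a real URI.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mongo-change-stream").getOrCreate()

read_stream = (
    spark.readStream.format("mongodb")
    .option("spark.mongodb.connection.uri", "<mongodb-atlas-uri>")   # placeholder
    .option("spark.mongodb.database", "streaming")
    .option("spark.mongodb.collection", "events")
    # Emit just the changed documents instead of the whole change event:
    .option("spark.mongodb.change.stream.publish.full.document.only", "true")
    # Optional: filter the change stream server-side, e.g. inserts only.
    .option(
        "spark.mongodb.read.aggregation.pipeline",
        '[{"$match": {"operationType": "insert"}}]',
    )
    .load()
)

# Write the stream somewhere inspectable, e.g. an in-memory table, using a
# continuous trigger as in the thread's own example.
query = (
    read_stream.writeStream.format("memory")
    .queryName("events_stream")
    .trigger(continuous="10 seconds")
    .outputMode("append")
    .start()
)
```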
null | [
"compass",
"swift"
]
| [
{
"code": "",
"text": "I have been using realm for the last few months, and things have been going well. However last week I wanted to debug why some data wasn’t saving as expected, and I began using Realm Studio to debug. That’s when I noticed that my partition field is no longer being filled out for records on my device/in Realm Studio.When I check Atlas, however, the partition is properly set on all records.Any new objects I create on iOS and save to realm no longer have their partition value set when I check in Realm Studio (but again, it is properly set in the Atlas fields.).If I make a new record in Atlas manually using Compass, the partition field is properly set in Studio.I do not specify the field in my schema definitions in swift by default, as I thought I did not have to. However I have tried adding it and am still faced with the same problem.Is this something I should be worried about? It seems very odd that this would suddenly just start happening, and it makes me concerned that data is not properly being synced somehow.Thanks!",
"username": "Griffin_Meyer"
},
{
"code": "",
"text": "Hi @Griffin_Meyer,If you’ve a Support contract, I’d suggest you open a ticket, to go through a proper diagnosis of the issue.If you haven’t, we may try to do it here, but please note that we may require data that aren’t meant to be visible in a public forum…Can you please provide:",
"username": "Paolo_Manna"
}
]
| Partition value is correct in Atlas, but missing in Realm Studio/Realm Swift | 2022-07-07T22:32:06.811Z | Partition value is correct in Atlas, but missing in Realm Studio/Realm Swift | 1,384 |
null | [
"connecting",
"php"
]
| [
{
"code": "",
"text": "Hey guys, new guy here so please bear with me.\nI have a plain web page hosted on hostinger.\nIt supports php 8.0 and has a mongdb extension which I have enabled.\nI tried connecting to my atlas server with a simple test.php code. All I get is some sort of unable to load page, http 500 error. I tried different variations of the code. My impression is that this extension is not properly installed, or rather, I have to install it and maybe I have no clue how to. Any tips?\nShould I have signed up for a dedicated server and installed the mongodb php driver through a shell? I have the basic hosting plan I think, not sure I get a dedicated server. But it shouldn’t be hard to make a simple php code and connect to my atlas server right? Is a dedicated server for the php page a must? Seems like overkill to me.\nThanks in advance,\nMarcelo",
"username": "Marcelo_Apsan"
},
{
"code": "",
"text": "Update: I have managed to use the ssh to connect to my host. I have succesfully installed the package with the composer. Now the problem seems to be on the code itself I guess. Although I still get the very same http 500 error.\nHere’s the code nevertheless:Did I miss anything? I simply used the command composer require mongodb/mongodb and installed it through ssh.",
"username": "Marcelo_Apsan"
},
{
"code": "",
"text": "Hi Marcelo,Were you able to resolve this issue… I am facing the same…",
"username": "Sandeep_Kumar9"
},
{
"code": "",
"text": "any solution to this ?",
"username": "tylerrr"
},
{
"code": "",
"text": "goto it … instead of client we must use MongoDB\\Driver\\Manager\nsource: php - Class 'MongoDB\\Client' not found, mongodb extension installed - Stack Overflow",
"username": "tylerrr"
}
]
| Php driver on hostinger | 2021-12-01T13:53:44.614Z | Php driver on hostinger | 3,972 |
null | [
"aggregation",
"sharding",
"cxx"
]
| [
{
"code": "delete_many$currentOpdb.killOp()delete_manymongos> db.aggregate([ { $currentOp : { allUsers: true } }, { $match : {op: 'remove'} } ])$match\"remove\"$match$matchMongoClient client1{\"\"};\nstd::thread{cleanAccountFunc, &client1.conn, accountId}.detach();\n\nMongoClient client2{\"\"};\nmongocxx::database db = client2.conn[\"admin\"];\nmongocxx::pipeline p{};\np.current_op(make_document(kvp(\"allUsers\", true)));\np.match(make_document(kvp(\"op\", \"remove\")));\nauto cursor = db.aggregate(p, mongocxx::options::aggregate{});\nfor (auto doc : cursor) {\n\tstd::cout << bsoncxx::to_json(doc) << \"\\n\";\n}\n",
"text": "Hi!\nI’m trying to write a small async function in mongocxx driver that empties some collections using delete_many.\nI also want to have the option to abort the operation while its running.\nI came across $currentOp aggregation and db.killOp() and it seem to work pretty fine when I test it from the mongos shell, meaning I’m able to catch the delete_many operation using the following query:\nmongos> db.aggregate([ { $currentOp : { allUsers: true } }, { $match : {op: 'remove'} } ])On the other, when I try the same from the c++ program, I get an empty cursor when trying to $match only \"remove\" operations (when I remove $match then it does print some documents but not what I was looking for).These are the few lines of codes in c++ (they don’t print anything with the $match clause):I am using mongo 5.0.9 version and mongocxx 3.6.6 version.Am I using it wrong? didn’t find any example for current_op, but from the documentation it seems correct.",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "ended up making a python program to solve my issue (using pymongo instead), but this is still not clear for mongocxx",
"username": "Oded_Raiches"
}
]
| Using mongocxx aggregation with current_op | 2022-07-10T16:00:59.065Z | Using mongocxx aggregation with current_op | 2,265 |
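For reference, the PyMongo equivalent of the shell query in the thread above (the route the original poster eventually took) looks roughly like this; the connection string is a placeholder and the user needs permission to run $currentOp against the admin database.

```python
# List in-progress delete ("remove") operations with the $currentOp
# aggregation stage, mirroring the mongos shell query from the thread.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI

cursor = client.admin.aggregate([
    {"$currentOp": {"allUsers": True}},
    {"$match": {"op": "remove"}},
])

for op in cursor:
    print(op.get("opid"), op.get("ns"), op.get("microsecs_running"))
    # To abort a specific operation:
    # client.admin.command("killOp", op=op["opid"])
```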
null | [
"installation",
"upgrading"
]
| [
{
"code": "Illegal instruction (core dumped)sudo dmidecode -s system-manufacturer\nQEMU\ncat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 45\nmodel name : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz\nstepping : 7\nmicrocode : 0x1\ncpu MHz : 1999.993\ncache size : 16384 KB\nphysical id : 0\nsiblings : 1\ncore id : 0\ncpu cores : 1\napicid : 0\ninitial apicid : 0\nfpu : yes\nfpu_exception : yes\ncpuid level : 13\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave hypervisor lahf_lm cpuid_fault pti tpr_shadow vnmi flexpriority ept vpid tsc_adjust xsaveopt arat\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf\nbogomips : 3999.98\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 45\nmodel name : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz\nstepping : 7\nmicrocode : 0x1\ncpu MHz : 1999.993\ncache size : 16384 KB\nphysical id : 1\nsiblings : 1\ncore id : 0\ncpu cores : 1\napicid : 1\ninitial apicid : 1\nfpu : yes\nfpu_exception : yes\ncpuid level : 13\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave hypervisor lahf_lm cpuid_fault pti tpr_shadow vnmi flexpriority ept vpid tsc_adjust xsaveopt arat\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf\nbogomips : 3999.98\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n\nprocessor : 2\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 45\nmodel name : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz\nstepping : 7\nmicrocode : 0x1\ncpu MHz : 1999.993\ncache size : 16384 KB\nphysical id : 2\nsiblings : 1\ncore id : 0\ncpu cores : 1\napicid : 2\ninitial apicid : 2\nfpu : yes\nfpu_exception : yes\ncpuid level : 13\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave hypervisor lahf_lm cpuid_fault pti tpr_shadow vnmi flexpriority ept vpid tsc_adjust xsaveopt arat\nbugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf\nbogomips : 3999.98\nclflush size : 64\ncache_alignment : 64\naddress sizes : 40 bits physical, 48 bits virtual\npower management:\n",
"text": "Hi, I am currently using mongo 4.4 and would like to upgrade to mongo 6.0 in the future due to this upcoming feature: https://www.mongodb.com/docs/upcoming/release-notes/6.0/#change-streams-with-document-pre--and-post-imagesIn the meantime, I wanted to change the version to 5.0 just to see that I am comfortable with the process when 6.0 becomes official.\nUnfortunately, after the installation and when trying to run mongo I got the following error:\nIllegal instruction (core dumped)Going through some other threads I found this, stating that the VM architecture is too old.I wanted to know if my VM also has this issue and how can I resolve it to install 5.0 (and 6.0 in the future).Mine is as such:This is on a system that is not yet in production, so installing from scratch is also an option for me.Tell me if any other info can be of assistance.Thanks!",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "Can you use VMware virtual machine?",
"username": "steven_lam1"
},
{
"code": "",
"text": "Hi Steven, no I cannot use a different VM type",
"username": "Oded_Raiches"
},
{
"code": "Illegal instruction (core dumped)",
"text": "Unfortunately, after the installation and when trying to run mongo I got the following error:\nIllegal instruction (core dumped)Hi @Oded_Raiches,As far as I’m aware, QEMU does not have full AVX instruction support yet so will not be compatible with binaries compiled for newer microarchitectures (for example, MongoDB 5.0+). A relevant tracking issue for the feature request appears to be qemu x86 TCG doesn't support AVX insns (#164) · Issues · QEMU / QEMU · GitLab.Since you are intending to use a feature in MongoDB 6.0, I expect your options at the moment are:Build MongoDB from source using older architecture tags compatible with your Qemu environment.Upgrade your VM or hosting solution to support newer microarchitecture. AVX (Advanced Vector Extensions) are just over a decade old and should be widely available on modern servers.Although you have ruled out the option of changing VM type, it may still be worth considering this approach to avoid the overhead of building and testing your own binary packages.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X, thanks for the reply!\nLooking over all the old arch tags looks a bit much, is there any straight forward instructions on how to build from sources depending on my environment?Cheers,\nOded",
"username": "Oded_Raiches"
},
{
"code": "-march= nehalemx86-64v6.0",
"text": "Hi @Oded_Raiches,You’ll need to match the CPU flags supported in your QEMU environment, but it looks like -march= nehalem should work:Intel Nehalem CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2 and POPCNT instruction set support.You could also use the generic x86-64 microarchitecture:A generic CPU with 64-bit extensionsThe Build MongoDB from source link above is to the build instructions in the server repo. You’ll want to choose the right branch to match the release you are trying to build. For MongoDB 6.0 that would be the v6.0 branch: mongo/building.md at v6.0 · mongodb/mongo · GitHub.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "#0 0x000055eb8895489a in tcmalloc::SizeMap::Init() ()\n#1 0x000055eb8895d1b7 in tcmalloc::Static::InitStaticVars() ()\n#2 0x000055eb8895ec07 in tcmalloc::ThreadCache::InitModule() ()\n#3 0x000055eb8895ed9d in tcmalloc::ThreadCache::CreateCacheIfNecessary() ()\n#4 0x000055eb88a076b5 in tcmalloc::allocate_full_malloc_oom(unsigned long) ()\n#5 0x00007f39e4801ae9 in set_binding_values (codesetp=0x0, dirnamep=<synthetic pointer>, domainname=0x7f39e350ff97 \"gnutls\") at bindtextdom.c:202\n#6 __bindtextdomain (domainname=0x7f39e350ff97 \"gnutls\", dirname=0x7f39e350e1da \"/usr/share/locale\") at bindtextdom.c:320\n#7 0x00007f39e3441c5a in ?? () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30\n#8 0x00007f39e3420651 in ?? () from /usr/lib/x86_64-linux-gnu/libgnutls.so.30\n#9 0x00007f39e65bd8d3 in call_init (env=0x7ffdbe977478, argv=0x7ffdbe977468, argc=1, l=<optimized out>) at dl-init.c:72\n#10 _dl_init (main_map=0x7f39e67d8170, argc=1, argv=0x7ffdbe977468, env=0x7ffdbe977478) at dl-init.c:119\n#11 0x00007f39e65ae0ca in _dl_start_user () from /lib64/ld-linux-x86-64.so.2\n#12 0x0000000000000001 in ?? ()\n#13 0x00007ffdbe977720 in ?? ()\n#14 0x0000000000000000 in ?? ()\n(gdb) f 11\n#11 0x00007f39e65ae0ca in _dl_start_user () from /lib64/ld-linux-x86-64.so.2\n(gdb) info locals\nlibrary_path = 0x0\nversion_info = 0\nany_debug = 0\n_dl_rtld_libname = {name = 0x55eb8691f270 \"/lib64/ld-linux-x86-64.so.2\", next = 0x7f39e67d7fe0 <newname>, dont_free = 0}\nrelocate_time = 13887144\n_dl_rtld_libname2 = {name = 0x0, next = 0x0, dont_free = 0}\nstart_time = 634402465805098\ntls_init_tp_called = true\nload_time = 4311564\naudit_list = 0x0\npreloadlist = 0x0\n__GI__dl_argv = 0x7ffdbe977468\n_dl_argc = 1\naudit_list_string = 0x0\n_rtld_global = {_dl_ns = {{_ns_loaded = 0x7f39e67d8170, _ns_nloaded = 39, _ns_main_searchlist = 0x7f39e67d8428, _ns_global_scope_alloc = 0, _ns_unique_sym_table = {lock = {mutex = pthread_mutex_t = {Type = Recursive, \n Status = Not acquired, Robust = No, Shared = No, Protocol = None}}, entries = 0x0, size = 0, n_elements = 0, free = 0x0}, _ns_debug = {r_version = 0, r_map = 0x0, r_brk = 0, r_state = RT_CONSISTENT, \n r_ldbase = 0}}, {_ns_loaded = 0x0, _ns_nloaded = 0, _ns_main_searchlist = 0x0, _ns_global_scope_alloc = 0, _ns_unique_sym_table = {lock = {mutex = pthread_mutex_t = {Type = Normal, Status = Not acquired, \n Robust = No, Shared = No, Protocol = None}}, entries = 0x0, size = 0, n_elements = 0, free = 0x0}, _ns_debug = {r_version = 0, r_map = 0x0, r_brk = 0, r_state = RT_CONSISTENT, r_ldbase = 0}} <repeats 15 times>}, \n _dl_nns = 1, _dl_load_lock = {mutex = pthread_mutex_t = {Type = Recursive, Status = Not acquired, Robust = No, Shared = No, Protocol = None}}, _dl_load_write_lock = {mutex = pthread_mutex_t = {Type = Recursive, \n Status = Not acquired, Robust = No, Shared = No, Protocol = None}}, _dl_load_adds = 39, _dl_initfirst = 0x0, _dl_cpuclock_offset = 634402465829299, _dl_profile_map = 0x0, _dl_num_relocations = 5046, \n _dl_num_cache_relocations = 868, _dl_all_dirs = 0x7f39e67d8e20, _dl_rtld_map = {l_addr = 139886654574592, l_name = 0x55eb8691f270 \"/lib64/ld-linux-x86-64.so.2\", l_ld = 0x7f39e67d6e68, l_next = 0x7f39e67c58c0, \n l_prev = 0x7f39e67c53d0, l_real = 0x7f39e67d79f0 <_rtld_global+2448>, l_ns = 0, l_libname = 0x7f39e67d8030 <_dl_rtld_libname>, l_info = {0x0, 0x0, 0x7f39e67d6ee8, 0x7f39e67d6ed8, 0x7f39e67d6e78, 0x7f39e67d6e98, \n 0x7f39e67d6ea8, 0x7f39e67d6f18, 0x7f39e67d6f28, 0x7f39e67d6f38, 0x7f39e67d6eb8, 0x7f39e67d6ec8, 0x0, 0x0, 
0x7f39e67d6e68, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7f39e67d6ef8, 0x0, 0x0, 0x7f39e67d6f08, 0x0 <repeats 12 times>, \n 0x7f39e67d6f58, 0x7f39e67d6f48, 0x0, 0x0, 0x7f39e67d6f78, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7f39e67d6f68, 0x0 <repeats 25 times>, 0x7f39e67d6e88}, l_phdr = 0x7f39e65ad040, l_entry = 0, l_phnum = 7, \n l_ldnum = 0, l_searchlist = {r_list = 0x0, r_nlist = 0}, l_symbolic_searchlist = {r_list = 0x0, r_nlist = 0}, l_loader = 0x0, l_versions = 0x7f39e67b8300, l_nversions = 6, l_nbuckets = 17, l_gnu_bitmask_idxbits = 3, \n l_gnu_shift = 8, l_gnu_bitmask = 0x7f39e65ad2d8, {l_gnu_buckets = 0x7f39e65ad2f8, l_chain = 0x7f39e65ad2f8}, {l_gnu_chain_zero = 0x7f39e65ad338, l_buckets = 0x7f39e65ad338}, l_direct_opencount = 0, l_type = lt_library, \n l_relocated = 1, l_init_called = 1, l_global = 1, l_reserved = 0, l_phdr_allocated = 0, l_soname_added = 0, l_faked = 0, l_need_tls_init = 0, l_auditing = 0, l_audit_any_plt = 0, l_removed = 0, l_contiguous = 0, \n l_symbolic_in_local_scope = 0, l_free_initfini = 0, l_rpath_dirs = {dirs = 0x0, malloced = 0}, l_reloc_result = 0x0, l_versyms = 0x7f39e65ad914, l_origin = 0x0, l_map_start = 139886654574592, \n l_map_end = 139886656848240, l_text_end = 139886654711120, l_scope_mem = {0x0, 0x0, 0x0, 0x0}, l_scope_max = 0, l_scope = 0x0, l_local_scope = {0x0, 0x0}, l_file_id = {dev = 0, ino = 0}, l_runpath_dirs = {dirs = 0x0, \n malloced = 0}, l_initfini = 0x0, l_reldeps = 0x0, l_reldepsmax = 0, l_used = 1, l_feature_1 = 0, l_flags_1 = 0, l_flags = 0, l_idx = 0, l_mach = {plt = 0, gotplt = 0, tlsdesc_table = 0x0}, l_lookup_cache = {\n sym = 0x7f39e65ad480, type_class = 1, value = 0x7f39e67c53d0, ret = 0x7f39e47d6110}, l_tls_initimage = 0x0, l_tls_initimage_size = 0, l_tls_blocksize = 0, l_tls_align = 0, l_tls_firstbyte_offset = 0, l_tls_offset = 0, \n l_tls_modid = 0, l_tls_dtor_count = 0, l_relro_addr = 2266752, l_relro_size = 2432, l_serial = 0, l_audit = 0x7f39e67d7e60 <_rtld_global+3584>}, audit_data = {{cookie = 0, bindflags = 0} <repeats 16 times>}, \n _dl_rtld_lock_recursive = 0x7f39e4bcbfd0 <__GI___pthread_mutex_lock>, _dl_rtld_unlock_recursive = 0x7f39e4bcd810 <__GI___pthread_mutex_unlock>, _dl_make_stack_executable_hook = 0x7f39e4bc8740 <__make_stacks_executable>, \n _dl_stack_flags = 6, _dl_tls_dtv_gaps = false, _dl_tls_max_dtv_idx = 3, _dl_tls_dtv_slotinfo_list = 0x7f39e67b6000, _dl_tls_static_nelem = 3, _dl_tls_static_size = 4992, _dl_tls_static_used = 976, \n _dl_tls_static_align = 64, _dl_initial_dtv = 0x7f39e67b7810, _dl_tls_generation = 1, _dl_init_static_tls = 0x7f39e4bc9020 <__pthread_init_static_tls>, _dl_wait_lookup_done = 0x7f39e4bc9140 <__wait_lookup_done>, \n _dl_scope_free_list = 0x0}\n_rtld_global_ro = {_dl_debug_mask = 0, _dl_osversion = 265827, _dl_platform = 0x7ffdbe9776a9 \"x86_64\", _dl_platformlen = 6, _dl_pagesize = 4096, _dl_inhibit_cache = 0, _dl_initial_searchlist = {r_list = 0x7f39e67bab18, \n r_nlist = 38}, _dl_clktck = 100, _dl_verbose = 0, _dl_debug_fd = 2, _dl_lazy = 1, _dl_bind_not = 0, _dl_dynamic_weak = 0, _dl_fpu_control = 895, _dl_correct_cache_id = 771, _dl_hwcap = 2, _dl_auxv = 0x7ffdbe977550, \n _dl_x86_cpu_features = {kind = arch_kind_intel, max_cpuid = 13, cpuid = {{eax = 132823, ebx = 16779264, ecx = 2411340323, edx = 260832255}, {eax = 0, ebx = 2, ecx = 0, edx = 0}, {eax = 132823, ebx = 0, ecx = 1, \n edx = 739248128}}, family = 6, model = 45, xsave_state_size = 640, xsave_state_full_size = 640, feature = {1097728}, data_cache_size = 0, shared_cache_size = 0, non_temporal_threshold = 0}, 
_dl_x86_hwcap_flags = {\n \"sse2\\000\\000\\000\\000\", \"x86_64\\000\\000\", \"avx512_1\"}, _dl_x86_platforms = {\"i586\\000\\000\\000\\000\", \"i686\\000\\000\\000\\000\", \"haswell\\000\", \"xeon_phi\"}, _dl_inhibit_rpath = 0x0, _dl_origin_path = 0x0, \n _dl_use_load_bias = 0, _dl_profile = 0x0, _dl_profile_output = 0x7f39e65ce5dc \"/var/tmp\", _dl_trace_prelink = 0x0, _dl_trace_prelink_map = 0x0, _dl_init_all_dirs = 0x7f39e67d8e20, _dl_sysinfo_dso = 0x7ffdbe99c000, \n _dl_sysinfo_map = 0x7f39e67d8710, _dl_hwcap2 = 0, _dl_debug_printf = 0x7f39e65be6a0 <_dl_debug_printf>, _dl_mcount = 0x7f39e65bfa70 <__GI__dl_mcount>, _dl_lookup_symbol_x = 0x7f39e65b8260 <_dl_lookup_symbol_x>, \n _dl_check_caller = 0x7f39e65c0f30 <_dl_check_caller>, _dl_open = 0x7f39e65c18b0 <_dl_open>, _dl_close = 0x7f39e65c3bc0 <_dl_close>, _dl_tls_get_addr_soft = 0x7f39e65c0a90 <_dl_tls_get_addr_soft>, \n _dl_discover_osversion = 0x7f39e65c83f0 <_dl_discover_osversion>, _dl_audit = 0x0, _dl_naudit = 0}\n_dl_skip_args = 0\n__pointer_chk_guard_local = 2841440123743661675\n",
"text": "Hi @Stennie_X,\nStill not sure this is a problem with the VM type.\nCan you look again at this stack trace?Cheers,\nOded",
"username": "Oded_Raiches"
},
{
"code": "marchpython3 buildscripts/scons.py install-core MONGO_VERSION=5.3.0 --march=nehalem\n...\n...\nSCons Error: no such option: --march\n",
"text": "@Stennie_X @steven_lam1\nHi, sending this as a reminder as I didn’t see any reply for my last commentIn addition, how should I use the new march parameter for compiling from source? this did not work:Regards,\nOded",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "Preferred enabling AVX on my VMs, you can close the thread",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Installing mongodb 5.0 on a QEMU VM | 2022-05-26T13:23:35.241Z | Installing mongodb 5.0 on a QEMU VM | 9,779 |
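Related to the thread above: MongoDB 5.0+ prebuilt x86_64 packages target a newer microarchitecture that includes AVX, so a quick check of the CPU flags exposed inside the VM tells you whether the stock binaries can run at all. A small Linux-only Python sketch:

```python
# Check whether the (virtual) CPU advertises AVX, which the prebuilt
# MongoDB 5.0+ x86_64 binaries rely on. Linux only (reads /proc/cpuinfo).
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "avx" in flags:
    print("AVX present: prebuilt MongoDB 5.0+ binaries should run")
else:
    print("AVX missing: expect 'Illegal instruction' from prebuilt 5.0+ binaries")
```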
[
"atlas-device-sync",
"react-native"
]
| [
{
"code": "siteRealm.objects(\"context\")site=${preferedSiteID}",
"text": "I’ve been using Realm Sync with my React Native app for some time and it worked great. Recently I’ve tried to clone the Realm app associated with my react Native app to connect to a database on a new cluster. I’ve not changed my client code, appart from changing the Realm app ID to sync to.Two of my 3 partitions are syncing fine on this new setup, but one partition is not syncing (read or write). The writes are not synced to Atlas and the reads show an empty array. In my client code, the realm is opened without issues but siteRealm.objects(\"context\") returns an empty array.Judging by this screenshot of the initial sync write when first enabled, the documents should be syncable:\n\nScreenshot 2022-07-11 at 10.25.351242×613 68.9 KB\nHere is an example of a ‘context’ object in the site partition that is not syncing:\nI don’t think the issue is a schema or permission issue because there is no message in the logs about that. When I write a new document in the client app, it just stays locally but does not sync to Atlas, and there are no messages in the logs appart from:\n\nScreenshot 2022-07-11 at 10.37.571265×427 32.3 KB\nI’m not sure if that log message is useful to find out what is going on?What could be the reasons for a partition not syncing, without any error message in logs? How can I find out what is going on?My setup:Things I’ve tried:Many thanks in advance for your help!",
"username": "Laekipia"
},
{
"code": "",
"text": "I finally solved my issue after days of searching. I had forgotten to also setup the relationships in my schema… I really wished some error message could have pointed me in the right direction there.Hope this helps others!",
"username": "Laekipia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Realm Sync (now Device Sync) partition not syncing | 2022-07-11T09:47:45.666Z | Realm Sync (now Device Sync) partition not syncing | 2,554 |
|
null | [
"aggregation",
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "const UserSchema = new Schema({\n profile: {\n type: Schema.Types.ObjectId,\n ref: \"profiles\",\n },\n});\nconst ProfileSchema = new Schema({\n user: {\n type: Schema.Types.ObjectId,\n ref: \"users\",\n },\n education: [\n {\n institution: {\n type: Schema.Types.ObjectId,\n ref: \"institutions\",\n },\n major: {\n type: Schema.Types.ObjectId,\n ref: \"majors\",\n },\n },\n ],\n date: {\n type: Date,\n default: Date.now,\n },\n});\nuser.profile.educationarrayaggregateinstitutionmajor[\n // user 1\n {\n profile: {\n education: [\n { institution: \"institution_1_data\", major: \"major_1_data\" },\n { institution: \"institution_2_data\", major: \"major_2_data\" },\n { institution: \"institution_3_data\", major: \"major_3_data\" },\n ],\n },\n },\n // user 2\n {\n profile: {\n education: [\n { institution: \"institution_1_data\", major: \"major_1_data\" },\n { institution: \"institution_2_data\", major: \"major_2_data\" },\n ],\n },\n },\n];\nconst getUsersWithPopulatedMajorAndInstitution = async () => {\n const unwind_education_stage = {\n $unwind: \"$education\",\n };\n const populate_education_stage = {\n $lookup: {\n from: \"majors\",\n let: { major: \"$education.major\" },\n pipeline: [{ $match: { $expr: { $eq: [\"$_id\", \"$$major\"] } } }],\n as: \"education.major\",\n },\n $lookup: {\n from: \"institutions\",\n let: { institution: \"$education.institution\" },\n pipeline: [{ $match: { $expr: { $eq: [\"$_id\", \"$$institution\"] } } }],\n as: \"education.institution\",\n },\n };\n\n const populate_profile_stage = {\n $lookup: {\n from: \"profiles\",\n let: { profile_id: \"$profile\" },\n pipeline: [\n {\n $match: {\n $expr: { $eq: [\"$_id\", \"$$profile_id\"] },\n },\n },\n unwind_education_stage,\n populate_education_stage,\n {\n $project: {\n education: \"$education\",\n },\n },\n ],\n as: \"profile\",\n },\n };\n\n let users = await User.aggregate([populate_profile_stage]);\n\n return users;\n};\ninstitution$lookupmajor$lookupmajorinstitution$unwindeducationeducation[\n // user 1\n {\n profile: {\n education: [{ institution: \"institution_1_data\", major: \"major_1_data\" }],\n },\n },\n {\n profile: {\n education: [{ institution: \"institution_2_data\", major: \"major_2_data\" }],\n },\n },\n {\n profile: {\n education: [{ institution: \"institution_3_data\", major: \"major_3_data\" }],\n },\n },\n // user 2\n {\n profile: {\n education: [{ institution: \"institution_1_data\", major: \"major_1_data\" }],\n },\n },\n {\n profile: {\n education: [{ institution: \"institution_2_data\", major: \"major_2_data\" }],\n },\n },\n];\n",
"text": "I have these models:And this model:I’ve been trying to populate the user.profile.education array using aggregate.\nParticularly, the fields institution and major.\nSo the expected result is the array of education to have its education elements populated.\nSo the expected result should be something like this:This is the query that I wrote:There are two problems with this query.\nPROBLEM 1:\nIt only populates institution because the institution $lookup stage was added after the major $lookup stage.\nThis makes no sense to me, as I’ve been using aggregate for a while and would expect both major and institution to be populated.PROBLEM 2:\nUsing $unwind means education field would be unwinded.\nSo if the education array contains more than 1 education element (like the examples above), three “copies” of the user will be created and the end result is something like this:But, that’s not the expected result as I mentioned above.\nWhat should I change/add in the query?",
"username": "Ghrib_Ahmed"
},
{
"code": "const getUsersWithPopulatedMajorAndInstitution = async (\n user_name_surname_input_value\n) => {\n const unwind_education_stage = {\n $unwind: \"$education\",\n };\n\n const look_up_institution_stage = {\n $lookup: {\n from: \"institutions\",\n localField: \"education.institution\",\n foreignField: \"_id\",\n as: \"education.institution\",\n },\n };\n\n const look_up_major_stage = {\n $lookup: {\n from: \"majors\",\n localField: \"education.major\",\n foreignField: \"_id\",\n as: \"education.major\",\n },\n };\n\n const populate_stage = {\n $lookup: {\n from: \"profiles\",\n let: { profile_id: \"$profile\" },\n pipeline: [\n {\n $match: {\n $expr: { $eq: [\"$_id\", \"$$profile_id\"] },\n },\n },\n unwind_education_stage,\n look_up_major_stage,\n look_up_institution_stage,\n {\n $project: {\n education: \"$education\",\n },\n },\n ],\n as: \"profile\",\n },\n };\n\n let filtered_users = await User.aggregate([\n\n populate_stage,\n ]);\n\n return filtered_users;\n};\n[\n {\n \"_id\": \"60ded1353752602bf4b364ee\",\n \"profile\": [\n {\n \"education\": {\n \"_id\": \"62cc1b51423b0b2c02867a76\",\n \"major\": [\n {\n \"_id\": \"60c094d603202ea0a23f970f\",\n // The rest of major data\n }\n ],\n \"institution\": [\n {\n \"_id\": \"5faf77b2acb848347a5f1ab7\",\n // The rest of institution data\n }\n ],\n }\n },\n {\n \"_id\": \"60ded1363752602bf4b364ef\",\n \"education\": {\n \"major\": [\n {\n \"_id\": \"60c094d603202ea0a23f970f\",\n // The rest of major data\n\n }\n ],\n \"institution\": [\n {\n \"_id\": \"5faf77b2acb848347a5f1ab2\",\n // The rest of institution data\n }\n ],\n }\n }\n ],\n }\n]\narrayeducationuser.profile.education",
"text": "I managed to achieve something that isn’t too far from what I want:That results in this:It is not the exact structure I wanted. As I wanted a single array of education that contains all the populated education data.But, still this populates user.profile.education, and returns one object per user.\nSo I think it’s good enough.",
"username": "Ghrib_Ahmed"
}
]
| Mongoose: Problem populating nested array with aggregate | 2022-07-12T14:31:51.556Z | Mongoose: Problem populating nested array with aggregate | 10,136 |
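For comparison, here is the same population expressed as a plain aggregation pipeline in PyMongo, with one extra $group/$push stage so each profile keeps a single education array instead of one unwound copy per entry. This is only a sketch built from the schemas quoted in the thread; the connection string, collection names, and field names are assumptions and may need adjusting.

```python
# Plain-aggregation version of the Mongoose pipeline above (PyMongo).
# The final $group stage re-assembles one `education` array per profile
# after the $unwind + $lookup steps.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]   # placeholder URI / db name

pipeline = [
    {"$lookup": {
        "from": "profiles",
        "let": {"profile_id": "$profile"},
        "pipeline": [
            {"$match": {"$expr": {"$eq": ["$_id", "$$profile_id"]}}},
            {"$unwind": "$education"},
            {"$lookup": {
                "from": "majors",
                "localField": "education.major",
                "foreignField": "_id",
                "as": "education.major",
            }},
            {"$lookup": {
                "from": "institutions",
                "localField": "education.institution",
                "foreignField": "_id",
                "as": "education.institution",
            }},
            # Collapse the unwound documents back into a single array.
            {"$group": {"_id": "$_id", "education": {"$push": "$education"}}},
        ],
        "as": "profile",
    }},
]

for user in db.users.aggregate(pipeline):
    print(user)
```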
null | [
"containers"
]
| [
{
"code": "",
"text": "Hi,\nI am running mongodb using docker compose. Within the container the /data folder is owned by mongodb which is user number 999. When I look at this docker volume on my local ubuntu machine this user number 999 is interpreted as systemd-coredump .\nMy question is, is this an issue or is this normal as I had expected the docker volume on ubuntu to be owned by root as docker compose is being run as root.\nThanks",
"username": "Tim_Pynegar"
},
{
"code": "",
"text": "Hi @Tim_Pynegar and welcome to the community!!The file ownership issue seems to be a common theme with the Docker bind mount. However, are you seeing any specific issue with the file ownership, e.g. the container failing to start, etc.?Looking at this issue, I found the following article on File Ownership Inside Docker that may be able to help you.If Docker bind mount continues to be troublesome for you, may I suggest you examine the merits of Docker Volume as an alternative to bind mount.Also, please note that the official MongoDB Docker image is maintained by Docker and not by MongoDB, so if there are any issue, we have limited means to help if it involves Docker itself. Having said that, we’re happy to help if you’re having any MongoDB-specific issue.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Docker User Number | 2022-07-08T17:31:47.184Z | Docker User Number | 2,652 |
null | [
"aggregation",
"swift"
]
| [
{
"code": "",
"text": "I am working on local RealmDB. In my application there are two collectionDrugid drugName1 abc2 xyz3 pqrPillid pillName drugID1 qqq 12 www 23 eee 14 ttt 25 rrr 26 fff 3To get pills for any specific drug with some name… I need to write below codelet drugObject = realm.objects( Drug . self ).where{ $0.drugName == “ xyz ” }.firstlet pills = realm.objects( Pill . self ).where{ $0. drugID == drugObject.id }Instead of writing two queries here is there any way to combine into single query like join operation and get the result here?",
"username": "Basavaraj_KM1"
},
{
"code": "",
"text": "Welcome to the forums!Yes, there is a way (and maybe lots of ways) to do that but some clarity is needed in the question.Is the goal to retrieve all of the pills that contain a specific drug? Also, drug names change so using that as a key to look things up may not be the best idea - I can suggest some better options on that once we have clarity on the goal in the question.In the future, it’s a good idea to include your actual Realm objects in the question along with sample data to clarify what’s being asked. (and format your code!)",
"username": "Jay"
},
{
"code": "",
"text": "Yes the goal is to retrieve the pills of specific drugID.Here I am not using drugName as a key. First the user will perform search operation on Drug collection based on drugName and then tries to fetch the pill details for that drug.",
"username": "Basavaraj_KM1"
},
{
"code": "class PillClass: Object {\n @Persisted var drug_list = List<DrugClass>()\n}\n\nclass DrugClass: Object {\n @Persisted var drug _id = \"\"\n @Persisted(originProperty: \"drug_list\") var linkedPills: LinkingObjects<PillClass>\n}\nlet results = realm.objects(PillClass.self ).where { $0.drug_list.drug_id == \"some drug id\" }",
"text": "It appears that Pills can contain several drugs and a Drug can appear in a variety of pills. In that case we have a two-way relationship between Pills and Drugs.One possible setup would have two classesThe PillClass has list of Drugs that are in that pill and the DrugClass appears in different PillsTo then get the pills that contain a specific drugID the query would belet results = realm.objects(PillClass.self ).where { $0.drug_list.drug_id == \"some drug id\" }You could also then traverse the graph from Drugs back to pills to for example, detect drug interactions with certain pills.",
"username": "Jay"
},
{
"code": "",
"text": "Is there any way to write queries without LinkingObjects… Pill to drug is always one to one relationship not list.",
"username": "Basavaraj_KM1"
},
{
"code": "",
"text": "It Pill to Drug is always 1-1 then you should consider should use embedded objects as that simplifies the structures and queries.However that would mean that 1 pill only ever has one drug and 1 drug is only ever contained in one pill. Is that correct?",
"username": "Jay"
},
{
"code": "",
"text": "In this use case 1 drug can contain multiple pills. Yes embedding pill objects with drug is good in this situation.From learning perspective as mentioned in question, Instead of writing two queries is there any way to combine into single query like join operation and get the result without linking and embedding objects?",
"username": "Basavaraj_KM1"
},
{
"code": "",
"text": "Sure. Assuming there’s a Drug object with a List if EmbeddedObjects of Pills, you can filter for drug.pill == ‘some pill’ to return Drugs that contain that pill.",
"username": "Jay"
},
{
"code": "",
"text": "Thank you.I wanted to understand how we can write join query to get pill objects instead of the one I mentioned above … when drug and pill are two independent collectionsIs there any solution other than below one?let drugObject = realm.objects( Drug . self ).where{ $0.drugName == “ xyz ” }.firstlet pills = realm.objects( Pill . self ).where{ $0. drugID == drugObject.id }",
"username": "Basavaraj_KM1"
},
{
"code": "let results = realm.objects(PillClass.self ).where { $0.drug_list.drug_id == \"some drug id\" }",
"text": "Ok, first thing is that MongoDB Is a NoSQL Database - so there are not really any ‘join’ queries like what you would find in SQL. Realm puts a nice ‘face’ on that NoSQL data so we can run more human-readable queries and joins are done through relationships (forward through the object graph with Lists, and backward with LinkingObjects).Second thing is (when) you’re using embedded objects, only one object will be managed; the parent object that contains the embedded object. Embedded objects never stand alone.The last thing is that it’s not really clear what you’re trying to query for and why don’t want to use the solution presented above.It seems like you want to retrieve pills that contain a specific drug and that’s exact what my example above does, all in one query.let results = realm.objects(PillClass.self ).where { $0.drug_list.drug_id == \"some drug id\" }Is there a reason you don’t want to use that?",
"username": "Jay"
},
{
"code": "",
"text": "Yep will go with same solution as you suggested for Pill and Drugs collection. But I just wanted to understand is there any ways to go with single query with two independent collection to get records instead of two separate queries.Thanks for your support.",
"username": "Basavaraj_KM1"
},
{
"code": "",
"text": "My pleasure.To answer your question, yes! and the above solution does that - it’s a single query with two independent collections.Joining is (can be) done through a common column in SQL and joining is done through references in Realm.Keep in mind that in your original question, there’s really only one query - the first statement retrieves the objectId your after and the second line is the actual query.That would be akin to not knowing the column name you want to join by in SQL; If you know the column name though - you can SQL query join in one line.Likewise, if you know the ObjectID in Realm, you can get the results with a single query.",
"username": "Jay"
}
]
| Aggregating two queries into one on local RealmDB using swiftSDK | 2022-05-05T12:37:17.772Z | Aggregating two queries into one on local RealmDB using swiftSDK | 4,153 |
null | [
"replication"
]
| [
{
"code": "",
"text": "I am upgrading Mongo replicaset from 3.4 → 4.2. Last week I did the upgrade fro 3.6 → 4.0 and the next step is to upgrade from 4.0 → 4.2.\nDuring the upgrade fro 3.6 to 4.0, we faced some issues due to read concern majority.\nAfter we stepped down the primary, there was some issue in the old primary and took a lot of time for this instance to be back up and running. now since we have read concern majority and w:1 , writes were successful, but the reads were failing because the majority was not satisfied.\nAnd after sometime there was performance issue on the new primary and some writes also started timing out and the overall response time increased.In the next upgrade i.e fro 4.0 to 4.2, we might face this issue again. is there a way to do this upgrade without read and write failures?",
"username": "Ishrat_Jahan"
},
{
"code": "SECONDARYPRIMARYSECONDARYPRIMARYSECONDARY",
"text": "Hi @Ishrat_Jahan,After we stepped down the primary, there was some issue in the old primary and took a lot of time for this instance to be back up and running.The main issue here is that a PSA setup will always have issues with read and write majority when one of the data-bearing node goes offline. Some possibly being:And after sometime there was performance issue on the new primary and some writes also started timing out and the overall response time increasedCan you confirm when this issue happens whether or not the SECONDARY was up when the new PRIMARY was up? I.e. PSA.\nOr was it a case where the SECONDARY was offline and the new PRIMARY was up? I.e. PXA (Where X is an offline node, specifically the SECONDARY in this scenario).In the next upgrade i.e fro 4.0 to 4.2, we might face this issue again. is there a way to do this upgrade without read and write failures?Unfortunately due to the nature of a PSA set, you will encounter this issue if any data-bearing node is offline for an extended period of time. However, there are workaround for this, as shown in the following procedure is followed.Lastly, regarding the PSA set up as well, you may find information on Stennie’s response specifically regarding “Should you add an Arbiter?” on this topic useful.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks for your input. We have decided to add a secondary and remove the arbiter. Effectively converting the setup from PSA to PSS. SO we should not face such issues during the maintenance process.",
"username": "Ishrat_Jahan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Upgrade from 4.0 to 4.2 without downtime in PSA replicaset | 2022-07-03T13:05:22.900Z | Upgrade from 4.0 to 4.2 without downtime in PSA replicaset | 2,500 |
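For anyone following the same route (PSA → PSS), the reconfiguration can be driven from mongosh with rs.add()/rs.remove(), or scripted; below is a rough PyMongo sketch of a two-step version. The hostnames and connection string are placeholders, voting-membership changes are kept to one per reconfig, and the new member should finish its initial sync before the arbiter is removed.

```python
# Sketch: convert a PSA replica set to PSS by adding a data-bearing member,
# then (in a separate reconfig) removing the arbiter. Run against the primary
# with a user allowed to reconfigure the replica set. Placeholder hosts/URI.
from pymongo import MongoClient

client = MongoClient("mongodb://primary-host:27017")   # placeholder: current primary
admin = client.admin

# Step 1: add the new secondary.
cfg = admin.command("replSetGetConfig")["config"]
cfg["members"].append({
    "_id": max(m["_id"] for m in cfg["members"]) + 1,
    "host": "new-secondary:27017",          # placeholder host
})
cfg["version"] += 1
admin.command("replSetReconfig", cfg)

# Step 2: once the new member has completed its initial sync, drop the arbiter.
cfg = admin.command("replSetGetConfig")["config"]
cfg["members"] = [m for m in cfg["members"] if not m.get("arbiterOnly")]
cfg["version"] += 1
admin.command("replSetReconfig", cfg)
```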
[
"atlas-device-sync",
"flexible-sync"
]
| [
{
"code": "",
"text": "Hello friends. I am trying to sync into realm atlas and authenticate the created users with email/password method. I am setting up the connection with @realm/react method. It seems that everything is going according the plan. but I keep receiving the messages in the Gmail that connection process occurred but it failed and I don’t have an idea what causes it. From console.log I receive the generic message of [Error: Network request failed].Google msg pic:\n\nimage739×190 6.05 KB\nThank you for your time guys.",
"username": "Lukas_Vainikevicius"
},
{
"code": "",
"text": "Hi Lukas,What cluster tier is your app trying to sync with?\nIf it is a M0 or one of the shared tiers then it’s most likely a resourcing issue.See my explanation of a sync error that would be associated with this type of root cause.Regards",
"username": "Mansoor_Omar"
}
]
| Trying to sync into Mongo db Atlas with realm but is not working | 2022-07-11T17:00:54.528Z | Trying to sync into Mongo db Atlas with realm but is not working | 2,075 |
|
null | []
| [
{
"code": "",
"text": "Hi,When server is started, mongotop works. After a few hours/days, mongotop fails. The error shown is of the form:\n2022-07-01T01:38:14.747+0000\tFailed: BSONObj size: 59797193 (0x3906EC9) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds”If the mongod service is restarted, mongotop works again. But, again after a few hours/days, mongotop fails again with the same error above.Another case is, when a script is run to check document object size across all documents in all collections in all databases, script fails after a while. It is Never always the same database/collection/document.Please help in understanding why mongotop is failing randomly and make it work reliably without any failure.Thank you,\nMelvin",
"username": "Melvin_George"
},
{
"code": "BSONObjmongotopmongotopmongotopmongodmongotop",
"text": "Hi @Melvin_George,Welcome to the MongoDB Community forums again 2022-07-01T01:38:14.747+0000 Failed: BSONObj size: 59797193 (0x3906EC9) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds”This error is raising because it’s hitting more than the max size of BSONObj i.e., 16 MB and the mongotop command currently returns its output as a single BSON document so the result is limited to 16MB and here it’s 60MB.If the mongod service is restarted, mongotop works again. But, again after a few hours/days, mongotop fails again with the same error above.So, restarting the mongod clears the in-memory information and it works again!However, to better understand the issue could you please provide us a few details:Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thanks for replying, Kesav.To your questions, please find answers below:What version of MongoDB you are using?\nMongoDB 4.4.14 (same error happened with 4.4.6, so we upgraded to 4.4.14 and the error still exists)How many active collections do you have?\nAround 80 databases, with mostly 1300 collections each.What information you are looking for in mongotop?\nIdentify the database.collection having most activity during peak load.I suppose you are trying to understanding whether we are going past any limits configured by default. Appreciate your help.Regards,\nMelvin",
"username": "Melvin_George"
},
{
"code": "topmongotop",
"text": "Hi @Melvin_George,Thanks for sharing these details!Unfortunately, I think you’re hitting the issue SERVER-6627, where in a very large deployment, the output of the top command (which mongotop relies on) can get past the 16MB BSON size limitation. I would encourage you to comment & upvote on the ticket to help the development team prioritize this issue.In the meantime, I think you can use the output of db.currentOp() as a workaround.Please let us know if you have any follow-up questions!Kind Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thanks for your response, Kushagra.Have commented and upvoted the SERVER-6627 bug.Will check db.currentOp() and get back on this.Regards,\nMelvin",
"username": "Melvin_George"
},
{
"code": "use admin\ndb.currentOp()\n$currentOpuse admin\n\ndb.aggregate([{$currentOp: {allUsers: true, idleConnections: true}}, {$match: {$and: [{\"ns\": {$exists: true}}, {\"ns\": {$nin: [\"\", \"admin.$cmd\", \"admin.$cmd.aggregate\"]}}]}}, {$project: {ns: 1, microsecs_running: 1, op: 1}}]).toArray()\ndb.currentOp()mongotop",
"text": "Hi Kushagra,I looked at:Seems it also has a limit of 16MB on the final document size:But, thanks to your hint above, I came across $currentOp aggregation stage:It returns a cursor that can build a document with details found in db.currentOp(). The cursor goes over documents each of which should not be greater than 16MB. But, the aggregate will build a document that has No size restrictions. Can give all info in mongotop, plus more.Thanks, Kushagra!",
"username": "Melvin_George"
},
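As a rough illustration of the workaround above, this mongosh sketch groups the $currentOp output by namespace to get a mongotop-like view; the field names follow the pipeline already shown in this thread:

use admin
db.aggregate([
  { $currentOp: { allUsers: true, idleConnections: true } },
  { $match: { ns: { $exists: true, $nin: ["", "admin.$cmd", "admin.$cmd.aggregate"] } } },
  // Sum running time per namespace to approximate "most active collections".
  { $group: { _id: "$ns", totalMicros: { $sum: "$microsecs_running" }, ops: { $sum: 1 } } },
  { $sort: { totalMicros: -1 } }
])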
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongotop fails after sometime with error Failed: BSONObj size: 59797193 (0x3906EC9) is invalid. Size must be between 0 and 16793600(16MB) | 2022-07-01T03:51:37.504Z | Mongotop fails after sometime with error Failed: BSONObj size: 59797193 (0x3906EC9) is invalid. Size must be between 0 and 16793600(16MB) | 8,554 |
null | [
"security"
]
| [
{
"code": "db.createRole({role: \"mydbAdmin\", privileges: [], roles: [{role: \"userAdmin\", db: \"admin\"}, {role: \"dbAdmin\", db: \"admin\"}, {role: \"readWrite\", db: \"admin\"}, {role: \"dbAdmin\", db: \"mydb\"}, {role: \"readWrite\", db: \"mydb\"}]});db.createRole({role: \"mydbUser\", privileges: [{resource: {db: \"admin\", collection: \"\"}, actions: [\"changeOwnPassword\", \"changeOwnCustomData\"]} ], roles: [{role: \"readWrite\", db: \"mydb\"}]});",
"text": "Hi! I have a question on custom roles. I’ve set up my admin role as:\ndb.createRole({role: \"mydbAdmin\", privileges: [], roles: [{role: \"userAdmin\", db: \"admin\"}, {role: \"dbAdmin\", db: \"admin\"}, {role: \"readWrite\", db: \"admin\"}, {role: \"dbAdmin\", db: \"mydb\"}, {role: \"readWrite\", db: \"mydb\"}]});and generic user role as:\ndb.createRole({role: \"mydbUser\", privileges: [{resource: {db: \"admin\", collection: \"\"}, actions: [\"changeOwnPassword\", \"changeOwnCustomData\"]} ], roles: [{role: \"readWrite\", db: \"mydb\"}]});Although my custom admin role works on user creation, deletion, granting and revoking roles. I can’t seem to use the updateUser feature to replace user roles. Any attempt to do so results in:uncaught exception: Error: Updating user failed: not authorized on admin to execute commandstrangely, when I switch to an account with the *AnyDatabase roles, I have no problems executing the “updateUser” above. All users are created in the admin database. Any pointers on getting the right credentials to execute “updateUser” would be greatly appreciated!Thanks in advanced!Suresh",
"username": "Suresh_Kumar3"
},
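One way to narrow this down (a diagnostic sketch, not a confirmed fix) is to inspect the privileges that the custom role and the failing user actually resolve to; the user name below is a placeholder:

use admin
// Show the privileges the custom role resolves to.
db.getRole("mydbAdmin", { showPrivileges: true })

// Show the privileges granted to a specific user created in the admin database.
db.runCommand({ usersInfo: "someAdminUser", showPrivileges: true })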
{
"code": "admintestuserAdminuserAdminAnyDatabsegetUsers()dropUser()createUser()updateUser()getUsers()dropUser()createUser()updateUser()",
"text": "Hi @Suresh_Kumar3\nWelcome to the community!!Could you help by confirming the steps on reproducing the above issue observed?If this is not the exact steps of reproducing the issue, could you please provide a step by step reproduction of what you are observing.\nAlso could you also confirm the version of MongoDB you are using?Thanks\nAasawari",
"username": "Aasawari"
}
]
| Custom Roles for admin | 2022-07-02T12:51:14.796Z | Custom Roles for admin | 2,392 |
null | []
| [
{
"code": "",
"text": "Hi there. Last July 7 there was a webinar titled “Intro to Atlas Search”. I could’t attend to the webinar, but I’m really interested on this topic. Is there any way to access the recording of the webinar?\nThanks in advanced.",
"username": "Ruben_FS"
},
{
"code": "",
"text": "Hi @Ruben_FS,You can watch the recorded seminar of MongoDB Atlas Search eWorkshop from here: Resources | MongoDBAlso, you can access other webinar resources from here: Resources | MongoDBPlease let us know if you have any other questions!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Intro to Atlas Search webinar | 2022-07-12T17:39:11.550Z | Intro to Atlas Search webinar | 1,928 |
null | [
"storage"
]
| [
{
"code": "[2022-07-03T08:03:39.809+0000] [.info] [src/runtimestats/printer.go:mainLoop:58] <runtimestatsPrinter> [08:03:39.809] memory:rss=250593280\nalloc=70680792\ttotalalloc=44663666952\tmallocs=344157089\tfrees=343503736\theapinuse=76693504\theapobjects=653353\tgcpausetotalns=176823910\tgcpauselastns=78201\tgcnum=1328\tutimens=365716808000\tstimens=190250093000\tminflt=2464936\tmajflt=437\n[2022-07-03T08:03:41.410+0000] [.info] [src/config/config.go:ReadClusterConfig:439] [08:03:41.410] Retrieving cluster config from https://api-agents.mongodb.com/agents/api/automation/conf/v1/58adb0513b34b952980b36d2?av=12.0.5.7560&aos=linux&aa=x86_64&ab=64&ad=ubuntu1804&ah=parag-r1ew1-mongo01-p.comapi.internal&ahs=parag-r1ew1-mongo01-p.comapi.internal&at=1656832598695...\n[2022-07-03T08:03:41.510+0000] [.info] [main/components/agent.go:LoadClusterConfig:277] [08:03:41.510] clusterConfig unchanged\n[2022-07-03T08:03:55.683+0000] [.error] [src/mongoctl/processctl.go:runListCollections:2890] <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [src/mongoctl/processctl.go:func1:2842] <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [src/mongoctl/processctl.go:getCollectionsHelper:2870] <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error executing WithClientFor() for cp=mongo-server-docs.internal:27017 (local=false) connectMode=SingleConnect : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [src/mongoctl/processctl.go:OplogSize:3304] <server-rs_16> [08:03:55.683] Error getting collection information for db=local coll=oplog.rs : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error executing WithClientFor() for cp=mongo-server-docs.internal:27017 (local=false) connectMode=SingleConnect : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [src/mongoctl/replsetctl.go:AskAboutOplog:201] <server-rs_16> [08:03:55.683] Could not determine oplog size on mongo-server-docs.internal:27017 (local=false) : <server-rs_16> [08:03:55.683] Error getting collection information for db=local coll=oplog.rs : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error executing WithClientFor() for cp=mongo-server-docs.internal:27017 (local=false) connectMode=SingleConnect : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] 
Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [state/stateutil/fickleutil.go:currentOplogState:950] <server-rs_16> [08:03:55.683] Failed to ask about oplog size : <server-rs_16> [08:03:55.683] Could not determine oplog size on mongo-server-docs.internal:27017 (local=false) : <server-rs_16> [08:03:55.683] Error getting collection information for db=local coll=oplog.rs : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error executing WithClientFor() for cp=mongo-server-docs.internal:27017 (local=false) connectMode=SingleConnect : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [state/stateutil/stateutil.go:ComputeCurrentState:100] <server-rs_16> [08:03:55.683] Error getting fickle state for current state : <server-rs_16> [08:03:55.683] Failed to ask about oplog size : <server-rs_16> [08:03:55.683] Could not determine oplog size on mongo-server-docs.internal:27017 (local=false) : <server-rs_16> [08:03:55.683] Error getting collection information for db=local coll=oplog.rs : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error executing WithClientFor() for cp=mongo-server-docs.internal:27017 (local=false) connectMode=SingleConnect : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [src/director/director.go:updateCurrentState:739] <server-rs_16> [08:03:55.683] Error calling ComputeState : <server-rs_16> [08:03:55.683] Error getting fickle state for current state : <server-rs_16> [08:03:55.683] Failed to ask about oplog size : <server-rs_16> [08:03:55.683] Could not determine oplog size on mongo-server-docs.internal:27017 (local=false) : <server-rs_16> [08:03:55.683] Error getting collection information for db=local coll=oplog.rs : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error executing WithClientFor() for cp=mongo-server-docs.internal:27017 (local=false) connectMode=SingleConnect : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [src/director/director.go:planAndExecute:522] <server-rs_16> [08:03:55.683] Failed to compute states : <server-rs_16> [08:03:55.683] Error calling ComputeState : <server-rs_16> [08:03:55.683] Error getting fickle state for current state : <server-rs_16> [08:03:55.683] Failed to ask about oplog size : <server-rs_16> [08:03:55.683] Could not determine oplog size on mongo-server-docs.internal:27017 (local=false) : 
<server-rs_16> [08:03:55.683] Error getting collection information for db=local coll=oplog.rs : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error executing WithClientFor() for cp=mongo-server-docs.internal:27017 (local=false) connectMode=SingleConnect : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n[2022-07-03T08:03:55.683+0000] [.error] [src/director/director.go:mainLoop:395] <server-rs_16> [08:03:55.683] Failed to planAndExecute : <server-rs_16> [08:03:55.683] Failed to compute states : <server-rs_16> [08:03:55.683] Error calling ComputeState : <server-rs_16> [08:03:55.683] Error getting fickle state for current state : <server-rs_16> [08:03:55.683] Failed to ask about oplog size : <server-rs_16> [08:03:55.683] Could not determine oplog size on mongo-server-docs.internal:27017 (local=false) : <server-rs_16> [08:03:55.683] Error getting collection information for db=local coll=oplog.rs : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error executing WithClientFor() for cp=mongo-server-docs.internal:27017 (local=false) connectMode=SingleConnect : <server-rs_16> [08:03:55.683] Error getting collections for db local : <server-rs_16> [08:03:55.683] Error running ListCollections with filter map[name:oplog.rs] : connection(mongo-server-docs.internal:27017[-93]) incomplete read of message header: read tcp 10.10.10.10:52302->11.11.11.11:27017: i/o timeout\n",
"text": "Hi all,We are running MongoDB 4.4. Recently during an infrastructure outage, the number of open files (our limit is set to 500 000) was being reached and causing the consistency check of WiredTiger to get reset every 20 minutes. I increased the number of open files/ processes to 3 000 000 and restarted the services and the WiredTiger consistency check could continue to completion without being re-initialized.Shortly after the MongoDB automation agent brought up the replica into a healthy state, the entire replica became unresponsive and displayed a warning/health problem. I saw the following errors being logged by the automation agent:These blocks of errors continued until I restarted the physical Linux Node. I can see each message is displaying i/o timeout at the end. Has this got to do with Networking or the fact that even though I could see the Open File count was higher than 500 000, some other limit had been reached? Just to let you know, during this period, all other resources, memory, cpu and networking were completed under utilized.Thanks and Regards,\nAlex",
"username": "Alex_Meyer1"
},
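A small mongosh sketch that may help when checking whether a connection or file-descriptor limit is being approached; it only reads server status and does not change anything:

const s = db.serverStatus()
// Current vs. available incoming connections on this mongod.
printjson(s.connections)
// Basic network counters, useful when correlating with the agent's i/o timeouts.
printjson({ bytesIn: s.network.bytesIn, bytesOut: s.network.bytesOut, requests: s.network.numRequests })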
{
"code": "",
"text": "Hi @Alex_Meyer1It appears that your deployment is using Cloud Manager. Since your deployment is likely a very large one (due to needing that many open files) and undoubtedly a complex one, I believe troubleshooting the issue would also likely to require significant amount of specific knowledge about the deployment itself and other operational concerns. Since Cloud Manager is part of the MongoDB Enterprise Advanced subscription, you might want to contact MongoDB Support. Otherwise if you’re currently evaluating Cloud Manager, I’d be happy to link you with sales to discuss further options.Best regards\nKevin",
"username": "kevinadi"
}
]
| Replica completely unresponsive - Number of open files | 2022-07-08T11:39:27.779Z | Replica completely unresponsive - Number of open files | 2,100 |
null | []
| [
{
"code": "",
"text": "Will my insert be successful if I provide an _id field with a value that does not exist on the DB",
"username": "Olufemi_Bolaji"
},
{
"code": "",
"text": "yes it will.\ndid you tried and it failed?\ndo you have an issue?",
"username": "steevej"
},
{
"code": "",
"text": "no, just checking…\nthanks @steevej",
"username": "Olufemi_Bolaji"
},
{
"code": "_id",
"text": "you can insert anything into _id fieldbut if you don’t supply it then MongoDB will generate an ObjectId and insert that instead.",
"username": "Yilmaz_Durmaz"
},
{
"code": "_id_idE11000 duplicate key error",
"text": "Hello and welcome to the community @Olufemi_Bolaji !I think both @steevej and @Yilmaz_Durmaz have provided the answer here. Yes you can provide a custom _id field, and yes if you don’t supply one it will be generated for you.However I’d like to focus on one part of your question:Will my insert be successful if I provide an _id field with a value that does not exist on the DBActually you need to provide an _id that does not exist yet in the collection, as it’s supposed to be the document’s primary key. Otherwise the insert will fail with an E11000 duplicate key error message. For more information, see the _id field.Best regards\nKevin",
"username": "kevinadi"
}
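A quick mongosh illustration of the duplicate key behaviour described above, using a throwaway collection name:

db.idDemo.insertOne({ _id: "custom-key-1", qty: 1 })   // succeeds
db.idDemo.insertOne({ _id: "custom-key-2", qty: 2 })   // succeeds, different _id
db.idDemo.insertOne({ _id: "custom-key-1", qty: 3 })   // fails with an E11000 duplicate key error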
]
| The insert with _id value | 2022-07-06T07:42:40.059Z | The insert with _id value | 2,025 |
null | [
"node-js"
]
| [
{
"code": "",
"text": "I was following along with the How to use the MERN Stack: A Complete Guide and I have had issues troubleshooting the errors in Chrome Dev Tools after starting the server and the client. I installed typescript in both folders and enabled type checking so as to try fixing any underlying syntax or import issues. I updated all imports to ES6 and updated the index.js to use the React 18 version of importing React-DOM.The errors that appear in the console are:Invalid hook callUncaught TypeError: Cannot read properties of null (reading ‘useRef’)The above error occurred in the componenet: at BrowserRouter (http://localhost:3000/static/js/bundle.js:46422:5)I also have several errors in the server folder that are noted with TypeScript:app.use(require(recordRoutes));const client = new MongoClient(Db, {\nuseNewUrlParser: true,\nuseUnifiedTopology: true,\n});let db_connect = getDb(“CoffeeList”);Any help on clearing these errors would be greatly appreciated! Cheers",
"username": "Roqa_Deji"
},
{
"code": "",
"text": "Fixed the client connection issue by updating the code toconst client = MongoClient.connect(Db);",
"username": "Roqa_Deji"
},
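For reference, a minimal sketch of the same connection written in the async/await style of the Node.js driver; the URI is a placeholder and "CoffeeList" is the database name used in the code quoted above:

const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://127.0.0.1:27017"); // placeholder URI

async function getDb() {
  await client.connect();           // resolves once the client is connected
  return client.db("CoffeeList");   // database name from the snippet above
}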
{
"code": "",
"text": "Fixed recordRoutes by removing requireapp.use(recordRoutes());",
"username": "Roqa_Deji"
}
]
| Errors while following the MERN stack tutorial | 2022-07-12T21:32:02.598Z | Errors while following the MERN stack tutorial | 2,373 |
[
"queries",
"node-js"
]
| [
{
"code": " I have a use case where i want to filter the newly inserted document from insertOne.\n",
"text": "Hello,with this document i want to match multiple filters which contains multiple find queries like “$in”, “$or”, and more operators.\nexample:\nfor this i will need to use this “match” but in js directly, the function should return me true or false if matched. i’ve seen that in C language you have “Client Side Document Matching” http://mongoc.org/libmongoc/1.2.0/matcher.html but i can’t found this function for nodeJS can anyone help me with this.",
"username": "Nauman_matlob"
},
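As far as I know there is no built-in client-side matcher in the Node.js driver; a commonly used third-party option is the sift package, which evaluates MongoDB-style query objects against plain JavaScript documents. A hedged sketch, assuming sift is installed from npm:

// npm install sift  (depending on the version, the export is require("sift") or require("sift").default)
const sift = require("sift").default;

// Build a predicate from a MongoDB-style filter...
const matches = sift({ status: { $in: ["active", "trial"] }, $or: [{ qty: { $gt: 5 } }, { vip: true }] });

// ...and test the freshly inserted document on the client side.
const doc = { status: "active", qty: 10 };
console.log(matches(doc)); // true or false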
{
"code": "",
"text": "Documentation for mongodb",
"username": "Jack_Woehr"
}
]
| Is there matcher function like in C for nodeJS | 2022-07-12T13:01:21.116Z | Is there matcher function like in C for nodeJS | 1,089 |
|
null | [
"queries"
]
| [
{
"code": "",
"text": "I just want to know the timezone in metrics page(Opcounters, Connections, logical size and network charts) of our cluster is changable or not if not how do I calculate the bytes in and bytes out to my current timezone from UTC?",
"username": "Koushik_Romel"
},
{
"code": "",
"text": "Hi Koushik,Thank you for your question! The timezone in the Metrics page is changeable. However, the timezone is a project setting and will apply to all clusters in your project. To set your timezone, go to your Project Settings and select from the “Project Time Zone” dropdown.Thanks,\nFrank",
"username": "Frank_Sun"
},
{
"code": "",
"text": "Thanks, This will be very much helpful for me and I’m learning many things in MongoDB and I just want to thank all the developers who are helping beginners by solving their problems",
"username": "Koushik_Romel"
},
{
"code": "",
"text": "Glad we were able to help Koushik! And thank you for your kind words!",
"username": "Frank_Sun"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Changing Time zones | 2022-05-25T14:06:54.145Z | Changing Time zones | 3,357 |
null | [
"python",
"atlas-cluster"
]
| [
{
"code": "pymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-00.8e7bs.mongodb.net:27017: connection closed,cluster0-shard-00-02.8e7bs.mongodb.net:27017: connection closed,cluster0-shard-00-01.8e7bs.mongodb.net:27017: connection closed, Timeout: 30s, Topology Description: <TopologyDescription id: 62cdbe348945cca01dcb1910, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cluster0-shard-00-00.8e7bs.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-00.8e7bs.mongodb.net:27017: connection closed')>, <ServerDescription ('cluster0-shard-00-01.8e7bs.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-01.8e7bs.mongodb.net:27017: connection closed')>, <ServerDescription ('cluster0-shard-00-02.8e7bs.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-02.8e7bs.mongodb.net:27017: connection closed')>]>\ninsert_many",
"text": "I am using the pymongo library and recently started getting this error:This is happening on the insert_many on just 742 short documents. I am new to mongo and unsure how to debug this one. ANy ideas?",
"username": "Simon_Shapiro"
},
{
"code": "",
"text": "Solved! The ip address changed from the one allowed.",
"username": "Simon_Shapiro"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Surpise timeout using the Python library | 2022-07-12T18:39:07.549Z | Surpise timeout using the Python library | 1,563 |
null | [
"python"
]
| [
{
"code": ">>> data_frame = client.db.test.find_pandas_all({'qty': {'$gt': 5}}, schema=schema)\n>>> data_frame\n _id qty\n0 1 25.4\n1 2 16.9\n>>> arrow_table = client.db.test.find_arrow_all({'qty': {'$gt': 5}}, schema=schema)\n>>> arrow_table\npyarrow.Table\n_id: int64\nqty: double\n>>> ndarrays = client.db.test.find_numpy_all({'qty': {'$gt': 5}}, schema=schema)\n>>> ndarrays\n{'_id': array([1, 2, 3]), 'qty': array([25.4, 16.9, 2.3])}\n$ python -m pip install pymongoarrow\n",
"text": "We are proud to announce the initial release of PyMongoArrow - a companion library to PyMongo that makes it easy to move data between MongoDB and Python’s scientific & numerical computing libraries like Pandas, PyArrow and NumPy.PyMongoArrow extends PyMongo’s APIs and makes it possible to materialize query result sets as pandas DataFrames:Similar APIs facilitate loading result sets as PyArrow Tables:As well as NumPy ndarrays:Wheels are available on PyPI for macOS and Linux platforms on x86_64 architectures.",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "Are there plans to support Windows",
"username": "QianShan_i"
},
{
"code": "",
"text": "Yes, please follow PYTHON-2691 for updates.",
"username": "Shane"
},
{
"code": "",
"text": "@Prashant_Mital Interesting lib.\nIt’s working so far. But I couldn’t figure out how to put fields into my schema from nested objects.\nHow to do that?",
"username": "Chris_Haus"
},
{
"code": "{'_id': ObjectId('62cd854a73939396fff10edd'), 'a': {'b': 1, 'c': 2}}\n\nschema = Schema({'ab': int, 'ac': int})\ndf = coll.aggregate_pandas_all([{'$project':{'ab':'$a.b', 'ac':'$a.c'}}], schema = schema)\n",
"text": "Hi @Chris_Haus,You can use aggregation pipeline to export data of the nested fields out of MongoDB into any of the supported data formats.For example, let’s say we want to export MongoDB data into pandas dataframe. We can use Pymongoarrow’s aggregate_pandas_all() function to achieve this.Let’s say this is our sample document containing nested fields:Using $project, we can rename the nested field and use the new names to define the Schema. For example:We also have a ticket open (ARROW-9) for adding a direct support for this.If you have any other questions/feedback related to PyMongoArrow, please feel free to get back to us and we would be happy to chat more with you ~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
]
| PyMongoArrow 0.1.1 Released | 2021-04-28T00:51:22.367Z | PyMongoArrow 0.1.1 Released | 4,747 |
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "constantsactivitiesapplicationsconstantsactivitiesactivitiesapplicationsactivity_typesitemdataactivities$push$group{\n _id: id\n value : {\n categories: [\n {\n id: 001,\n title: \"test 1\"\n },\n {\n id: 002,\n title: \"test 2\"\n },\n {\n id: 003,\n title: \"test 3\"\n }\n ]\n }\n}\n{\n propert1: \"\",\n propert2: \"\",\n config: {\n agenda_item_category_ids: [ObjectId(001), ObjectId(002)]\n },\n activity_type_id: ObjectId(123)\n\n}\n{\n propert1: \"\",\n propert2: \"\",\n activity_type_id: ObjectId(456)\n config: {\n agenda_item_category_ids: [ObjectId(002)]\n }\n\n}\n{\n _id: ObjectId(123),\n prop1: \"\",\n prop2: \"\"\n}\n{\n _id: ObjectId(456),\n prop1: \"\",\n prop2: \"\"\n}\nconst results = await Constants.aggregate([\n {\n $match: query,\n },\n {\n $unwind: {\n path: '$value.categories',\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $lookup: {\n from: 'activity',\n localField: 'value.categories.id',\n foreignField: 'config.agenda_item_category_ids',\n as: 'data',\n },\n },\n {\n $lookup: {\n from: 'applications',\n localField: 'items.activity_type_id',\n foreignField: '_id',\n as: 'activity_type',\n },\n },\n {\n $project: {\n _id: 0,\n category_id: '$value.categories.id',\n title: '$value.categories.title',\n description: '$value.categories.description',\n icon_src: '$value.categories.icon_src',\n data: 1,\n activity_type: 1,\n },\n },\n]);\n[\n {\n data: [\n {item1},\n {item2}\n ],\n activity_type,\n title\n _id\n },\n {\n data: [\n {item1},\n {item2}\n ],\n activity_type,\n title\n _id\n }\n]\n[\n {\n data: [\n {\n item1,\n activity_type\n },\n {\n item2,\n activity_type\n }\n ],\n title\n _id\n },\n]\n{\n \"_id\": \"$_id\",\n \"activity_type\": {\n \"$push\": \"$activity_type\"\n }\n}\n",
"text": "I have 3 collection namely constants, activities, applications with mentioned properties.\nNow, Quering constants collection with activities and activities with applications with matching Id’s. I am getting correct results. But now activity_types are shown at per data level.But expecting the output should be at per item level inside data whichever is matching with item. Because activities are matching for Item and it should be shown in per item level not at data level. I tried with $push and $group but not getting expected results.ConstantsActivityapplicationsCurrent queryCurrent outputExpected outputTried method",
"username": "Dharmik_Soni1"
},
{
"code": "localField: 'items.activity_type_id'{\n propert1: '',\n propert2: '',\n config: {\n agenda_item_category_ids: [\n ObjectId(\"00000001a582a3f70ad77b7e\"),\n ObjectId(\"00000002a582a3f70ad77b7f\")\n ]\n },\n activity_type_id: ObjectId(\"0000007ba582a3f70ad77b80\")\n}\n\n{\n propert1: '',\n propert2: '',\n activity_type_id: ObjectId(\"000001c8a582a3f70ad77b81\"),\n config: {\n agenda_item_category_ids: [ ObjectId(\"00000002a582a3f70ad77b82\") ]\n }\n}\n",
"text": "It looks like you might have redacted your data or your query prior before posting your question. The query is inconsistent with your input documents and cannot possibly produce the current output you shared.You $lookup with localField: 'items.activity_type_id' and no input documents have a field named items and items is neither a projected field or the as: field of a $lookup.Your ObjectId(xxx) also seems to be redacted, in Constants collection you have id:001 but you seem to use them as ObjectId(xxx). Not that mongosh can use the expression ObjectId(002) but you do not get the same value for all invocation. Taking your 2 Activity documents as published, I get:You are also missing some field separator.Please verify your query and documents and post corrections that we can use.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB : Push or merge one lookup's result in other lookup's result | 2022-07-11T04:03:01.007Z | MongoDB : Push or merge one lookup’s result in other lookup’s result | 2,221 |
[
"replication"
]
| [
{
"code": "",
"text": "Currently I have set up a simple replica set with three nodes and readPreference value is “secondary”. Monitoring cpu usage between the two secondaries, I notice one is under 8% and one is almost 100%. My question is why such difference exists? Should it be 50-50 or something similar? If this is not normal then what is the possible reason for it, as well as the solution to fix our problem? Thank you.\nScreen Shot 2022-07-12 at 17.14.001802×278 43.9 KB\n",
"username": "Nam_Le"
},
{
"code": "rs.config()rs.status()",
"text": "Hi,\ncan you share your replica set config - rs.config() and a code where you specified a readPreference? you can have additional parameters or tags which can affect query routing, also check if both secondary replica set members are in a healthy state - rs.status()",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "most drivers and programs use connection pooling and they connect to one node after the initial connection procedures.that node can be different for many users, but a single user will have a pooling to a single node. so it won’t be 50-50 load unless you have many different programs connected to the server.",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongo = await new MongoClient(mongo_uri, {\n ignoreUndefined: true,\n readPreference: ReadPreference.SECONDARY_PREFERRED,\n}).connect()\n{\n\t\"set\" : \"replicaset-remote\",\n\t\"date\" : ISODate(\"2022-07-12T11:36:15.052Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(9),\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 2,\n\t\"writeMajorityCount\" : 2,\n\t\"votingMembersCount\" : 3,\n\t\"writableVotingMembersCount\" : 3,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1657625775, 17),\n\t\t\t\"t\" : NumberLong(9)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2022-07-12T11:36:15.046Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1657625775, 17),\n\t\t\t\"t\" : NumberLong(9)\n\t\t},\n\t\t\"readConcernMajorityWallTime\" : ISODate(\"2022-07-12T11:36:15.046Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1657625775, 17),\n\t\t\t\"t\" : NumberLong(9)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1657625775, 17),\n\t\t\t\"t\" : NumberLong(9)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2022-07-12T11:36:15.046Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2022-07-12T11:36:15.046Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1657625753, 4),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"stepUpRequestSkipDryRun\",\n\t\t\"lastElectionDate\" : ISODate(\"2021-12-26T10:07:29.763Z\"),\n\t\t\"electionTerm\" : NumberLong(9),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1640513247, 1),\n\t\t\t\"t\" : NumberLong(8)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1640513247, 1),\n\t\t\t\"t\" : NumberLong(8)\n\t\t},\n\t\t\"numVotesNeeded\" : 2,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"priorPrimaryMemberId\" : 0,\n\t\t\"numCatchUpOps\" : NumberLong(0),\n\t\t\"newTermStartDate\" : ISODate(\"2021-12-26T10:07:29.778Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2021-12-26T10:07:32.894Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 17113068,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1657625775, 17),\n\t\t\t\t\"t\" : NumberLong(9)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-07-12T11:36:15Z\"),\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1640513249, 1),\n\t\t\t\"electionDate\" : ISODate(\"2021-12-26T10:07:29Z\"),\n\t\t\t\"configVersion\" : 3,\n\t\t\t\"configTerm\" : 9,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 15477834,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1657625773, 151),\n\t\t\t\t\"t\" : NumberLong(9)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1657625773, 151),\n\t\t\t\t\"t\" : NumberLong(9)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-07-12T11:36:13Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2022-07-12T11:36:13Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2022-07-12T11:36:13.314Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2022-07-12T11:36:14.588Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : 
\"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : 0,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 3,\n\t\t\t\"configTerm\" : 9\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 15477759,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1657625774, 7),\n\t\t\t\t\"t\" : NumberLong(9)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1657625774, 7),\n\t\t\t\t\"t\" : NumberLong(9)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-07-12T11:36:14Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2022-07-12T11:36:14Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2022-07-12T11:36:14.093Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2022-07-12T11:36:14.830Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 3,\n\t\t\t\"configTerm\" : 9\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1657625775, 17),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"udjGeYIefRN886XFy4HW8chzJeY=\"),\n\t\t\t\"keyId\" : NumberLong(\"7086458178816180233\")\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1657625775, 17)\n}\n",
"text": "Hi, thank you for your response. Currently there’s another team managing this microservice. This is what I gave themmongo_uri=mongodb://user:pass@db1:27000,db2:27000,db3:27000/?replicaSet=replicaset-remote&authSource=admin&readPreference=secondary&maxStalenessSeconds=120but in their code, they are using these optionsI noticed there’s conflict in readPreference, but haven’t had the chance to fix it yet, due to role restriction. But do you think this may be the cause of the problem? Also this is the result from rs.status() if that helps. Thank you",
"username": "Nam_Le"
},
{
"code": "maxStalenessSeconds=120mongo_uri=mongodb://user:pass@db1:27000,db2:27000,db3:27000/?replicaSet=replicaset-remote&authSource=admin&readPreference=secondary&maxStalenessSeconds=120\nmaxStalenessSecondsrs.printSecondaryReplicationInfo()\n",
"text": "I see you are using maxStalenessSeconds=120The read preference maxStalenessSeconds option lets you specify a maximum replication lag, or “staleness”, for reads from secondaries. When a secondary’s estimated staleness exceeds maxStalenessSeconds , the client stops using it for read operations.can you also run:maybe one secondary is lagging behind a primary",
"username": "Arkadiusz_Borucki"
},
{
"code": "maxStalenessSeconds",
"text": "Thank you. If it’s true then it’s the first time I’ve seen the option maxStalenessSeconds in action and it’s very interesting.\nWhen I ran your command, I saw the 8% and 100% cpu-usage secondaries has 2 and 1 seconds lag behind the primary, respectively. So it’s not because of that option, right?",
"username": "Nam_Le"
},
{
"code": "net.maxIncomingConnections: 250mongod --versionReadPreference.SECONDARY_PREFERRED_PREFERRED",
"text": "Assumptions without actual experimenting are not good. so try these first.do you have access to your server and admin credentials?\nyou can try lowering connection limit, restart server and check the usage again with 3-5 clients connected.what is your server version? use mongod --versionReadPreference.SECONDARY_PREFERRED is not a conflict. you can tell them to drop _PREFERRED part. this preference will first try all secondary nodes for a connection. if they are too busy to respond (connections at limit), then it will ask to primary. otherwise, it will just give connection error. which one do you prefer?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I asked the teams using our database and this may be the reason. There’s currently only one very active microservice which is responsible for making 95% of the queries to our database. May be that’s why most of the queries only go to one member of secondaries?",
"username": "Nam_Le"
},
{
"code": "",
"text": "I would suggest you create your own scripted read workload to work over a minute and run a copy of it 3-5 times at least to benchmark your cluster. this way you should see a distributed reading from both secondaries. A nodejs or python script will be the fastest choice I guess.and as for the microservice, yes, they usually tend to create a pooling to a node when they start and use that pool until they crash or shut down. and you have only 1 of them running it seems. I haven’t thought about this before but this is clearly a bottleneck. I don’t have an answer to optimize this.",
"username": "Yilmaz_Durmaz"
}
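A rough Node.js sketch of the kind of scripted read workload suggested above: several clients, each with its own pool, reading with a secondary read preference. The URI, database and collection names are placeholders:

const { MongoClient } = require("mongodb");

const uri = "mongodb://db1:27000,db2:27000,db3:27000/?replicaSet=replicaset-remote&readPreference=secondary";

async function worker(id) {
  const client = new MongoClient(uri);                   // each worker gets its own pool
  await client.connect();
  const coll = client.db("test").collection("sample");   // placeholder names
  const end = Date.now() + 60_000;                        // run for one minute
  let reads = 0;
  while (Date.now() < end) {
    await coll.find({}).limit(10).toArray();
    reads++;
  }
  await client.close();
  console.log(`worker ${id}: ${reads} reads`);
}

// Run 5 workers in parallel and watch CPU on both secondaries.
Promise.all([1, 2, 3, 4, 5].map(worker)).catch(console.error);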
]
| readPreference=secondary routes all request to only one secondary | 2022-07-12T10:17:02.103Z | readPreference=secondary routes all request to only one secondary | 3,237 |
|
null | [
"node-js",
"atlas-cluster"
]
| [
{
"code": "mongodb+srv://DragonOsman:<password>@cluster0.asbjy.mongodb.net/?retryWrites=true&w=majority\n",
"text": "Hi.I noticed that there are some similar topics, but I don’t think they’re similar enough so I went ahead and made this.I used this URI to try to connect:I used a browser-suggested strong password, but it doesn’t seem to have any problematic characters so I don’t think it’s the reason for my issue.The error I have is copy-pasted here: mongodb-connect-error.txt (gist.github.com)",
"username": "Osman_Zakir"
},
{
"code": "",
"text": "I was able to solve the issue by setting the allowed IP address (that I could use to connect) to the wildcard 0.0.0.0/0. So it really wasn’t because of the password.",
"username": "Osman_Zakir"
},
{
"code": "0.0.0.0/00.0.0.0/0",
"text": "Hi @Osman_ZakirPrior to adding 0.0.0.0/0 to your Network Access List, were you able to connect at all? I.e. You were able to connect at one stage, then at some point no longer were able to connect before being able to connect only by adding 0.0.0.0/0 to your Network Access List.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "No, I wasn’t able to connect at all.",
"username": "Osman_Zakir"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Having Trouble Connecting to Database Cluster (Connection <monitor> Closed) | 2022-07-01T02:59:13.669Z | Having Trouble Connecting to Database Cluster (Connection <monitor> Closed) | 2,478 |
null | [
"replication",
"sharding",
"backup",
"ops-manager"
]
| [
{
"code": "",
"text": "Hi there!\nI have two questions about backup and restore sharded cluster:I have replicaset. Replicaset comprises primary, secondary and arbiter.\nI create a new sharded cluster and i want to move data from the replicaset to the sharded cluster.\nHow can i do it? Maybe there are best practices for this?What is the best way for backup and restore sharded cluster without using MongoDB Atlas, MongoDB Cloud Manager, MongoDB Ops Manager?Thank you for your answers",
"username": "JeffryGilza"
},
{
"code": "",
"text": "Hi,\nAre you going to move a ReplicaSet to a new Shared Cluster?\nYou can transfer your Replica set to a new cluster: Replica Set to Replica Set (backup/restore or extend your old RS cluster by nodes from a new cluster, sync the data, switch primary to the new cluster, remove old RS members ), at the end you can convert it to a sharded cluster.\nConvert a Replica Set to a Sharded Clusterregarding the best way to backup and restore sharded cluster without using MongoDB Atlas, MongoDB Cloud Manager, or Ops Manager - you can use LVM snapshots, please read:\nBackup and Restore Sharded Clusters",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "Thank you for your answer!\nI want to exactly move the data from the replicaset to the new sharded cluster\nexpected result:\ni want to have a replicaset and a sharded cluster with same data",
"username": "JeffryGilza"
},
{
"code": "",
"text": "In the beginning, I would create a new cluster as a Replica Set, move data from the old RS to the new RS (see my first post) and convert new RS to a shared cluster Convert a Replica Set to a Sharded Cluster",
"username": "Arkadiusz_Borucki"
},
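Once the new replica set holds the data, the conversion itself is driven from a mongos; a minimal mongosh sketch where the replica set name, host, database and shard key are all placeholders:

// Connected to a mongos of the new sharded cluster:
sh.addShard("newReplSet/host1.example.net:27017")                    // add the replica set as a shard
sh.enableSharding("mydb")                                            // enable sharding for the database
sh.shardCollection("mydb.mycollection", { customerId: "hashed" })    // choose a real shard key for your data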
{
"code": "",
"text": "Thank you a lot! i got you)",
"username": "JeffryGilza"
},
{
"code": "",
"text": "hi!\ni read this https://www.mongodb.com/docs/manual/tutorial/backup-sharded-cluster-with-filesystem-snapshots/\nand i found that:In MongoDB 4.2+, you cannot use file system snapshots for backups that involve transactions across shards because those backups do not maintain atomicity. Instead, use one of the following to perform the backups:if i uderstand i can’t use LVM snapshots",
"username": "JeffryGilza"
},
{
"code": "",
"text": "it is for backups that involve transactions across shards, are you using multi-document transactions across shards in your application?",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "oh, no we don’t need multi-document transactions, okay, thank you again",
"username": "JeffryGilza"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| What about the best practice for backup and restore sharded cluster | 2022-07-11T11:45:54.683Z | What about the best practice for backup and restore sharded cluster | 3,156 |
null | [
"sharding"
]
| [
{
"code": "",
"text": "Hi allAs we know currently from the MongoDB documentation, if you have sharded collection and you want a specific field to be a unique index AND also have a shard key, the constraint is that shard key has to be this unique index’s prefix.One of the workarounds around this when you need a unique requirement on a specific field in the collection is to set up a proxy collection like described in Enforce Unique Keys for Sharded Collections — MongoDB Manual 3.2This means that for each unique field you want in the collection you will need a separate proxy collection, which is costly.Are there any other workarounds?Thanks",
"username": "Mavericks2022"
},
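For reference, the proxy-collection workaround mentioned above boils down to something like this mongosh sketch; the collection and field names are placeholders:

// One proxy collection per field that must stay unique across the sharded collection.
db.email_proxy.createIndex({ email: 1 }, { unique: true })

// Reserve the unique value first; this throws an E11000 error if it is already taken...
db.email_proxy.insertOne({ email: "ada@example.com" })

// ...and only then write the real document into the sharded collection.
db.users.insertOne({ email: "ada@example.com", name: "Ada" })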
{
"code": "",
"text": "Hi @Mavericks2022\nWelcome to the community!!Unfortunately due to the way sharded collection works, the unique index must contain the full shard key as a prefix of the index. Using a proxy collection is the only supported solution at this moment.However, there is a feature request to allow the exact scenario in https://jira.mongodb.org/browse/SERVER-19860 and also a related feedback item here that may be of interest to your use case: Unique index in sharded cluster – MongoDB Feedback Engine.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thanks @Aasawari , is there an estimate when MongoDB will be able to add the feature to support multiple unique field constraints?e.g. upcoming ver6 vs a few years down the line vs etc",
"username": "Mavericks2022"
},
{
"code": "",
"text": "Hi @Mavericks2022Unfortunately, we cannot predict when the functionality will be ready. You might, however, keep an eye on the aforementioned server ticket for updates.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Any good suggestions on maintaining multiple unique fields within a sharded collection? | 2022-07-01T14:20:23.337Z | Any good suggestions on maintaining multiple unique fields within a sharded collection? | 2,091 |
null | [
"serverless",
"realm-web"
]
| [
{
"code": "",
"text": "I created a serverless atlas instance, and then tried to connect my real web app to it, but i found that the serverless instances were not listed… Can realm apps use atlas serverless instances?Also, when creating a global real-app what does this actually mean? Is the database copied to various (4 i think) edge locations around the world, hence shorter response time? Is this similar to 1 origin server and 4 sharded servers?",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Hi @Rishi_uttam ,Serverless instances cannot yet be supported as linked source for realm apps. Keep posted on realm realese page.When an application is deployed globally, the application components (functions, values , services) are placed in all available regions.The writes are eventually always routed through the write region and this one should be placed where the primary of your replica set cluster is located.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hey @Pavel_Duchovny. After World last week I set out to try this on some unreleased apps, especially since the marketing here says that Atlas Device Sync (Realm Sync) now works with Serverless.I’m still seeing it grayed out though. Is there a target date to actually do what the marketing is saying?Did marketing just jump the gun or is it significantly delayed? Thanks.–KurtFor reference: https://twitter.com/kurtlibby/status/1535815477938855941?s=20&t=19E6Qw9qtjmi1pwcYLZDYQ",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Hi @Kurt_Libby1 ,I believe it is released now as I can link App Services to a Serverless instance.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,It does seem like they’re getting closer, but it is still not usable.When selecting the cluster to sync on, the serverless option is still grayed out:\n\nScreen Shot 2022-06-27 at 7.51.59 AM1248×470 127 KB\nI can link it, but it will not work for sync.",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Oh I see what got you confused @Kurt_Libby1 ,The device sync service is serverless as it is part of app services on Atlas side. But the page doesn’t mention it works with serverless clusters (that’s a different product than serverless services and backend)So sync works with dedicated and shared clusters for now.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I need this feature as well!",
"username": "Mark_Vrahas"
},
{
"code": "",
"text": "Hey @Mark_Vrahas! Welcome to the forums.It would be super great, but I’m sure there are some technical difficulties here, otherwise they would have rolled it out at MDB World last month.",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "@Kurt_Libby1 and @Mark_Vrahas ,You can keep up with the up to date limitations:Thanks",
"username": "Pavel_Duchovny"
}
]
| Atlas Serverless with Realm web app? | 2021-09-19T16:14:05.961Z | Atlas Serverless with Realm web app? | 4,379 |
null | [
"kafka-connector"
]
| [
{
"code": "",
"text": "Please can you advise - which is the ‘mongodb CDC source connector’ ?",
"username": "Onesmus_Nyakotyo"
},
{
"code": "",
"text": "",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongodb CDC source connector | 2022-07-11T20:55:52.870Z | Mongodb CDC source connector | 1,869 |
null | [
"aggregation"
]
| [
{
"code": "db.test.update(\n { $project: {\n roles: {\n $cond: {\n \"if\": {\n $isArray: '$roles'\n },\n \"then\": {\n $set: {\n roles: {\n $map: {\n input: \"$roles\",\n in: {\n name: \"$$this\",\n \"startDate\": \"Jan 1 2000\",\n \"endDate\": \"Jan 1 2099\" \n }\n }\n }\n }\n },\n \"else\": { \n $set: {\n roles: [ {\n \"name\" : {$ifNull: [\"$roles\", null] },\n \"startDate\": \"Jan 1 2000\",\n \"endDate\": \"Jan 1 2099\" \n }\n ] }\n }\n }\n }\n }\n }\n)\n",
"text": "I am having issues adding fields to an existing field. It is either an Object with a value or an array with one or more values. I want to change all fields to an array with three fields and retain the existing values.\nMy aggregate attempt returns the Error: need an update object or pipeline.I want to go from this:\nroles: “automation_engineer”\nor\nroles:\n0:“mentor”\n1:“automation_engineer”to this:\nroles:\n0: name: “mentor”\nstartDate: “01-01-2000”\nendDate: “01-01-2099”This is my code that fails:Thanks,\nAustin",
"username": "Austin_Summers"
},
{
"code": "",
"text": "Because the syntax is wrong.When doing an update you need to specify 2 parameters. The first one is a query that specifies which documents to update. The second parameter is the update operation.You are missing the query part.",
"username": "steevej"
}
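Putting that together with the original attempt, a corrected version passes a filter first and uses an aggregation-pipeline update; a sketch, keeping the dates as the plain strings from the question:

db.test.updateMany(
  {},   // the missing query part: match every document (narrow this as needed)
  [ { $set: {
      roles: {
        $cond: {
          if: { $isArray: "$roles" },
          then: { $map: {
            input: "$roles",
            in: { name: "$$this", startDate: "Jan 1 2000", endDate: "Jan 1 2099" }
          } },
          else: [ { name: { $ifNull: ["$roles", null] }, startDate: "Jan 1 2000", endDate: "Jan 1 2099" } ]
        }
      }
  } } ]
)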
]
| Conditional update Object vs Array | 2022-07-07T14:24:45.925Z | Conditional update Object vs Array | 1,346 |
null | []
| [
{
"code": " {\n \"_id\": \"nraboy\",\n \"cards\": [\n {\n \"name\": \"Charizard\",\n \"set\": \"Base\",\n \"variant\": \"1st Edition\",\n \"value\": 500000\n },\n {\n \"name\": \"Pikachu\",\n \"set\": \"Base\",\n \"variant\": \"Red Cheeks\",\n \"value\": 350\n }\n ]\n },\n",
"text": "Hello, I was reading an official article on the MongoDB website where the following collection was used as a learning example:However, the article has left me with 1 question unanswered: is there a way to prevent inserting/ updating duplicated? For example, is there any way to not allow inserting another object that has a name “Pikachu” again?Thank you",
"username": "RENOVATIO"
},
{
"code": "",
"text": "Enjoy the fine writing",
"username": "steevej"
},
{
"code": " \"cards\": [\n {\n \"name\": \"Pikachu\",\n \"set\": \"Base\",\n \"variant\": \"Red Cheeks\",\n \"value\": 350\n },\n {\n \"name\": \"Pikachu\",\n \"set\": \"Base\",\n \"variant\": \"Yellow\",\n \"value\": 740\n },\n {\n \"name\": \"Pikachu\",\n \"set\": \"Base\",\n \"variant\": \"Red Cheeks\",\n \"value\": 600\n },\n {\n \"name\": \"Pikachu\",\n \"set\": \"Upgraded\",\n \"variant\": \"Red Cheeks\",\n \"value\": 350\n },\n ]\n",
"text": "Hello, thanks for the reply, however I could not find an answer to my question under your link. In fact, it appears I need exactly the opposite of what you posted.Rephrased:How does one disallow multiple Pikachus being created. Example:Suppose Pikachu does not exist yet in the collection, how do I make it unique so that this is disallowed:Furthermore, another Pikachu should be perfectly allowed in another document. In other words, the restraint encompasses the document’s array only.",
"username": "RENOVATIO"
},
{
"code": "\"cards.name\" : { \"$ne\" : \"Pikachu\" }\n",
"text": "The title of the post is A collection with a unique … so I assumed you wanted to ensure that no documents have the same value. This can be done with unique index which was the purpose of the link I shared.Making sure that an array cannot have duplicate value is a completely different requirement.This you can do by adding the following in your query part for your $pushThis $push will be only done if none element of the array cards have name:Pikachu.",
"username": "steevej"
}
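Put together, the whole update would look something like the sketch below; the collection name and card values are illustrative:

db.collection.updateOne(
  { _id: "nraboy", "cards.name": { $ne: "Pikachu" } },   // matches only if no card is already named Pikachu
  { $push: { cards: { name: "Pikachu", set: "Base", variant: "Red Cheeks", value: 350 } } }
)
// matchedCount is 0 when a Pikachu already exists in that document, so nothing is pushed;
// other documents can still receive their own Pikachu entry.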
]
| Advice please - A collection with a unique list of services | 2022-07-07T19:38:04.857Z | Advice please - A collection with a unique list of services | 1,504 |
[
"atlas-cluster"
]
| [
{
"code": "",
"text": "MongoServerSelectionError: connection to 13.235.142.61:27017 closed\nimage1920×1080 359 KB\nGetting Authentication failed error.\n‘mongodb+srv://Mahesh:[email protected]/meetups?retryWrites=true&w=majority’Thanks in advance.",
"username": "K_Mahesh"
},
{
"code": "getaddrinfo(\"cluster0.4fbnl.mongodb.net\") failed: No address associated with hostname\nError: couldn't initialize connection to host cluster0.4fbnl.mongodb.net, address is invalid\nmongomongoshCompass",
"text": "from a console, your connection string has this error; it is not an accesible address.other than that, your username/password might have a typo, you might be forgetting auth admin database, or tls/ssl flags.try your connection string in a terminal with mongo, mongosh, or use Compass. if you connect succesfuly with them, you can then use it in your app",
"username": "Yilmaz_Durmaz"
},
{
"code": "c:\\DevTools\\mongodbc:\\DevTools\\mongodbc:\\DevTools\\mongodb\\bin\\mongod.exe --dbpath c:\\DevTools\\mongodb\\dataMongoClient.connect(\"mongodb://127.0.0.1\")",
"text": "you seem you are still in beginner’s steps. if not, sorry for the impression.let me suggest you 2 things for you to work with MongoDB and Javascript.1- Run a local MongoDB server. This will be both faster during development and easy to setup.2- If you haven’t done so, check this course: M220JS: MongoDB for JavaScript Developers | MongoDB University.",
"username": "Yilmaz_Durmaz"
},
{
"code": "id 62723\nopcode QUERY\nrcode NOERROR\nflags QR RD RA\n;QUESTION\ncluster0.4fbnl.mongodb.net. IN TXT\n;ANSWER\ncluster0.4fbnl.mongodb.net. 44 IN TXT \"authSource=admin&replicaSet=atlas-x8f8fc-shard-0\"\n;AUTHORITY\n;ADDITIONAL\n",
"text": "Authentication failedMeans you have the wrong user name or the wrong password.your connection string has this error; it is not an accesible addressThere is no DNS address since this is a cluster address. There is TXT DNS entry and corresponding SRV record.",
"username": "steevej"
}
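To look those DNS records up yourself from Node.js, a small sketch; the cluster host name is the one quoted earlier in this thread:

const dns = require("dns").promises;

(async () => {
  // mongodb+srv:// hosts resolve through an SRV record prefixed with _mongodb._tcp.
  const srv = await dns.resolveSrv("_mongodb._tcp.cluster0.4fbnl.mongodb.net");
  const txt = await dns.resolveTxt("cluster0.4fbnl.mongodb.net");
  console.log(srv, txt);
})();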
]
| MongoServerSelectionError: connection <monitor> to 13.235.142.61:27017 closed | 2022-07-09T19:00:25.945Z | MongoServerSelectionError: connection <monitor> to 13.235.142.61:27017 closed | 15,786 |
|
null | [
"connecting"
]
| [
{
"code": "",
"text": "Able to use Debezium connector with atlas free tier?When I tried to use Debezium connector capture data from atlas but I got error as:connect | [2021-11-19 06:50:25,594] ERROR Error while attempting to read from oplog on ‘atlas-XXXX-shard-0/gls-shard-AAAA.mongodb.net:27017,gls-shard-BBBB.mongodb.net:27017,gls-shard-CCCC.mongodb.net:27017’:Query failed with error code 8000 and error message ‘noTimeout cursors are disallowed in this atlas tier’ on server gls-shard-AAA.mongodb.net:27017 (io.debezium.connector.mongodb.MongoDbStreamingChangeEventSource)connect | com.mongodb.MongoQueryException: Query failed with error code 8000 and error message ‘noTimeout cursors are disallowed in this atlas tier’ on server gls-shard-AAA.mongodb.net:27017",
"username": "may_rununrath"
},
{
"code": "",
"text": "The shared tiers, share an oplog so due to secure concerns you can’t access them. That said, use the MongoDB Connector for Apache Kafka as it uses Change Stream and not the OpLog. https://www.mongodb.com/docs/kafka-connector/current/",
"username": "Robert_Walters"
},
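For contrast with the oplog tailing that fails here, a change stream only needs standard driver calls; a minimal Node.js sketch with placeholder names:

const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net"); // placeholder URI
  await client.connect();
  const changeStream = client.db("mydb").collection("mycoll").watch();
  changeStream.on("change", (event) => console.log(event.operationType, event.documentKey));
  // The Kafka connector referenced above builds on this same change stream API.
}

main().catch(console.error);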
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Able to use Debezium connector with atlas free tier? | 2021-11-25T03:05:58.144Z | Able to use Debezium connector with atlas free tier? | 2,964 |
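Since the advice above is to rely on change streams rather than tailing the oplog, here is a minimal Node.js sketch of watching a collection; the URI, database, and collection names are placeholders, and the Kafka connector performs the equivalent internally.

const { MongoClient } = require("mongodb");

async function main() {
  // Placeholder URI; use your own Atlas connection string.
  const client = await MongoClient.connect("mongodb+srv://user:<password>@cluster0.example.mongodb.net");
  const changeStream = client.db("shop").collection("orders").watch();

  // Change streams avoid reading the oplog directly, which is exactly what the
  // shared-tier restriction in the error message blocks.
  changeStream.on("change", (change) => {
    console.log(change.operationType, change.documentKey);
  });
}

main().catch(console.error);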
null | [
"flutter"
]
| [
{
"code": "CMake Error at flutter/ephemeral/.plugin_symlinks/realm/linux/CMakeLists.txt:37execute_process(COMMAND \"${FLUTTER_ROOT}/bin/flutter\" \"pub\" \"run\" \"realm\" \"install\" \"--target-os-type\" \"linux\" \"--package-name\" \"realm\" #\"--debug\"\n OUTPUT_VARIABLE output\n RESULT_VARIABLE result\n COMMAND_ERROR_IS_FATAL ANY\n)\n",
"text": "I’m trying to compile a Flutter app using Realm Flutter SDK. I’ve set a schema and generated the code successful. Compiling the Flutter application on Linux (Ubuntu 22.04) gives this error message from CMake.\nCMake Error at flutter/ephemeral/.plugin_symlinks/realm/linux/CMakeLists.txt:37I’ve tried to compile the same app on Windows without any problem. The CMake code that gives the error message is this:Any help please.",
"username": "Tembo_Nyati"
},
{
"code": "flutter run -d linux\nLaunching lib/main.dart on Linux in debug mode...\npub finished with exit code 64\nCMake Error at flutter/ephemeral/.plugin_symlinks/realm/linux/CMakeLists.txt:37 (execute_process):\n execute_process failed command indexes:\n\n 1: \"Child return code: 64\"\n\n\nBuilding Linux application... \nException: Unable to generate build files\n",
"text": "The complete error message is:",
"username": "Tembo_Nyati"
},
{
"code": "flutter run -d linux --verboserealm install",
"text": "Hi,\nCould you check these versions and write back what they areAlso could you run flutter run -d linux --verbose\nand get the output of the Realm Install command from the flutter build output. Search for realm install or something related to the error in the build output.During the build process Realm tries to install the realm binaries so the project can compile correctly with the correct realm native binaries. It seems this install command fails on your env.",
"username": "Lyubomir_Blagoev"
},
{
"code": "",
"text": "@Lyubomir_Blagoev\nCMake version: 3.22.1,\nFlutter version: 3.0.3\nRealm package version: realm: ^0.2.1+alpha (from Flutter project’s pubspec.yaml).",
"username": "Tembo_Nyati"
},
{
"code": "realm install❯ flutter run -d linux --verbose\n[ +85 ms] executing: uname -m\n[ +43 ms] Exit code 0 from: uname -m\n[ ] x86_64\n[ +10 ms] executing: [/<hidden-path>/flutter/] git -c log.showSignature=false log -n 1 --pretty=format:%H\n[ +11 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H\n[ ] 676cefaaff197f27424942307668886253e1ec35\n[ ] executing: [/<hidden-path>/flutter/] git tag --points-at 676cefaaff197f27424942307668886253e1ec35\n[ +17 ms] Exit code 0 from: git tag --points-at 676cefaaff197f27424942307668886253e1ec35\n[ ] 3.0.3\n[ +12 ms] executing: [/<hidden-path>/flutter/] git rev-parse --abbrev-ref --symbolic @{u}\n[ +8 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}\n[ ] origin/stable\n[ ] executing: [/<hidden-path>/flutter/] git ls-remote --get-url origin\n[ +14 ms] Exit code 0 from: git ls-remote --get-url origin\n[ ] https://github.com/flutter/flutter.git\n[ +149 ms] executing: [/<hidden-path>/flutter/] git rev-parse --abbrev-ref HEAD\n[ +17 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD\n[ ] stable\n[ +97 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.\n[ +1 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.\n[ +1 ms] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.\n[ +4 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.\n[ +1 ms] Artifact Instance of 'WindowsUwpEngineArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.\n[ +87 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.\n[ +1 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.\n[ +3 ms] Artifact Instance of 'WindowsUwpEngineArtifacts' is not required, skipping update.\n[ +1 ms] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.\n[ +3 ms] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.\n[ +1 ms] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.\n[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.\n[ +107 ms] Skipping pub get: version match.\n[ +157 ms] Found plugin path_provider at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider-2.0.11/\n[ +10 ms] Found plugin path_provider_android at\n/<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_android-2.0.16/\n[ +5 ms] Found plugin path_provider_ios at 
/<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_ios-2.0.10/\n[ +20 ms] Found plugin path_provider_linux at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_linux-2.1.7/\n[ +7 ms] Found plugin path_provider_macos at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_macos-2.0.6/\n[ +9 ms] Found plugin path_provider_windows at\n/<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_windows-2.0.7/\n[ +9 ms] Found plugin realm at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/realm-0.2.1+alpha/\n[ +241 ms] Found plugin path_provider at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider-2.0.11/\n[ +2 ms] Found plugin path_provider_android at\n/<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_android-2.0.16/\n[ +1 ms] Found plugin path_provider_ios at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_ios-2.0.10/\n[ ] Found plugin path_provider_linux at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_linux-2.1.7/\n[ ] Found plugin path_provider_macos at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_macos-2.0.6/\n[ +1 ms] Found plugin path_provider_windows at\n/<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_windows-2.0.7/\n[ +7 ms] Found plugin realm at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/realm-0.2.1+alpha/\n[ +92 ms] Generating //dart/myproject/android/app/src/main/java/io/flutter/plugins/GeneratedPluginRegistrant.java\n[ +176 ms] Initializing file store\n[ +18 ms] Skipping target: gen_localizations\n[ +7 ms] gen_dart_plugin_registrant: Starting due to {InvalidatedReasonKind.inputChanged: The following inputs have updated contents:\n//dart/myproject/.dart_tool/package_config_subset,//dart/myproject/.dart_tool/flutter_build/dart_plugin_registrant.da\nrt}\n[ +58 ms] Found plugin path_provider at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider-2.0.11/\n[ +1 ms] Found plugin path_provider_android at\n/<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_android-2.0.16/\n[ +1 ms] Found plugin path_provider_ios at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_ios-2.0.10/\n[ ] Found plugin path_provider_linux at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_linux-2.1.7/\n[ ] Found plugin path_provider_macos at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_macos-2.0.6/\n[ +1 ms] Found plugin path_provider_windows at\n/<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/path_provider_windows-2.0.7/\n[ +3 ms] Found plugin realm at /<hidden-path>/flutter/.pub-cache/hosted/pub.dartlang.org/realm-0.2.1+alpha/\n[ +31 ms] gen_dart_plugin_registrant: Complete\n[ +1 ms] Skipping target: _composite\n[ +9 ms] complete\n[ +21 ms] Launching lib/main.dart on Linux in debug mode...\n[ +8 ms] /<hidden-path>/flutter/bin/cache/dart-sdk/bin/dart --disable-dart-dev\n/<hidden-path>/flutter/bin/cache/dart-sdk/bin/snapshots/frontend_server.dart.snapshot --sdk-root\n/<hidden-path>/flutter/bin/cache/artifacts/engine/common/flutter_patched_sdk/ --incremental --target=flutter --debugger-module-names\n--experimental-emit-debug-metadata -DFLUTTER_WEB_AUTO_DETECT=true --output-dill /tmp/flutter_tools.VDLUOD/flutter_tool.VSUVAL/app.dill\n--packages //dart/myproject/.dart_tool/package_config.json -Ddart.vm.profile=false -Ddart.vm.product=false 
--enable-asserts\n--track-widget-creation --filesystem-scheme org-dartlang-root --initialize-from-dill\nbuild/c075001b96339384a97db4862b8ab8db.cache.dill.track.dill --source\n//dart/myproject/.dart_tool/flutter_build/dart_plugin_registrant.dart --source package:flutter/src/dart_plugin_registrant.dart\n-Dflutter.dart_plugin_registrant=file:////dart/myproject/.dart_tool/flutter_build/dart_plugin_registrant.dart\n--enable-experiment=alternative-invalidation-strategy\n[ +36 ms] Building Linux application...\n[ +36 ms] <- compile package:myproject/main.dart\n[ +2 ms] executing: [build/linux/x64/debug/] cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -DFLUTTER_TARGET_PLATFORM=linux-x64\n//dart/myproject/linux\n[+4329 ms] pub finished with exit code 64\n[ +284 ms] CMake Error at flutter/ephemeral/.plugin_symlinks/realm/linux/CMakeLists.txt:37 (execute_process):\n[ +4 ms] execute_process failed command indexes:\n[ ] 1: \"Child return code: 64\"\n[ ] -- Configuring incomplete, errors occurred!\n[ ] See also \"//dart/myproject/build/linux/x64/debug/CMakeFiles/CMakeOutput.log\".\n[ +92 ms] Building Linux application... (completed in 4.7s)\n[+10285 ms] Exception: Unable to generate build files\n[ +17 ms] \"flutter run\" took 16,271ms.\n[ +6 ms] \n #0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)\n #1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:699:9)\n <asynchronous suspension>\n #2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1183:27)\n <asynchronous suspension>\n #3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:150:19)\n <asynchronous suspension>\n #4 CommandRunner.runCommand (package:args/command_runner.dart:209:13)\n <asynchronous suspension>\n #5 FlutterCommandRunner.runCommand.<anonymous closure>\n(package:flutter_tools/src/runner/flutter_command_runner.dart:281:9)\n <asynchronous suspension>\n #6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:150:19)\n <asynchronous suspension>\n #7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:229:5)\n <asynchronous suspension>\n #8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:62:9)\n <asynchronous suspension>\n #9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:150:19)\n <asynchronous suspension>\n #10 main (package:flutter_tools/executable.dart:94:3)\n <asynchronous suspension>\n \n \n[ +253 ms] ensureAnalyticsSent: 251ms\n[ +1 ms] Running shutdown hooks\n[ ] Shutdown hooks complete\n[ ] exiting with code 1\n",
"text": "@Lyubomir_Blagoev\nI could not find realm install so I’m posting the whole output here.",
"username": "Tembo_Nyati"
},
{
"code": "",
"text": "Hi,\nCould you upgrade realm to the latest version 0.3.1+beta and try again.",
"username": "Lyubomir_Blagoev"
},
{
"code": "",
"text": "@Lyubomir_Blagoev\nUpgrading to Realm 0.3.1+beta works. Thank you for your help.",
"username": "Tembo_Nyati"
}
]
| Realm Flutter SDK: CMake Error at flutter/ephemeral/.plugin_symlinks/realm/linux/CMakeLists.txt:37 | 2022-07-11T06:15:03.249Z | Realm Flutter SDK: CMake Error at flutter/ephemeral/.plugin_symlinks/realm/linux/CMakeLists.txt:37 | 3,394 |
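The fix that worked above was bumping the package version; in the project's pubspec.yaml that amounts to something like the following (version taken from the thread), followed by flutter pub get.

dependencies:
  realm: ^0.3.1+beta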
[
"node-js",
"connecting",
"serverless",
"next-js"
]
| [
{
"code": "",
"text": "I am trying to using mongo in a serverless environment (vercel’s now). But I am constantly deailing with my connection just dying.What does my code do?So a call to my /scan-blogs API endpoint will potentially scan and add hundreds of blog posts. All of this was working fine on mLab, but after switching to Atlas I’ve had to really put a limit on concurrency of these scans to dance around the connection limit.Here is my shared db utility that I use in all of my serverless functionScreen Shot 2021-02-19 at 9.48.44 AM829×1213 116 KBand here are the errors that I keep getting, about 1/3 of the requests fail.error1900×212 25.4 KB ",
"username": "Andrew_Lisowski"
},
{
"code": "",
"text": "Hey Andrew,Like from my Twitter message, if you can please try this utility instead https://github.com/vercel/next.js/blob/canary/examples/with-mongodb/util/mongodb.js and let me know if it helps. I think that would be a good first step to helping troubleshoot this.Thanks,\nAdo",
"username": "ado"
},
{
"code": "",
"text": "and my connections shoot way up to 500",
"username": "Andrew_Lisowski"
},
{
"code": "",
"text": "It didn’t workSame errorScreen Shot 2021-02-19 at 10.14.58 AM974×555 74.1 KB",
"username": "Andrew_Lisowski"
},
{
"code": "",
"text": "I think what’s happening here is that the Atlas M0 free sandbox allows only 500 concurrent connections and Vercel is spawning >500 concurrent Lambda functions each making a separate connection: Is there a way to reduce the concurrency a bit or re-use Lambda contexts?",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "image1114×658 26.5 KB\nsame error . Have you find the fix of this issue?",
"username": "h_b"
},
{
"code": "",
"text": "We are still trying to reproduce this but one thing you could try is to close the MongoClient via close() when your process exits. You do not want to do this after each operation since you want to re-use the MongoClient object while the lambda function is warm, but you should call .close() when your function goes cold.",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hey Andrew,In your next.config.js file, is your target set to “serverless” or “server”? I’ve been trying to get to the bottom of this for a little while, and have found that if your target is “server”, when your app is deployed to Vercel, it will create just two lambda functions that will bundle the /api/ routes and pages that use getServerSideProps(). If you have it set to “serverless” mode then each page will create a unique lambda function, thus potentially adding too many connections.In my testing, with 10 different API’s and multiple pages using getServerSideProps, I never go over 10 connections. The other thing to consider is shutting down and deleting prior builds.Hope this helps.",
"username": "ado"
},
{
"code": "",
"text": "Andrew_Lisowski - We reached out to you over MongoDB Atlas’ in-app chat app so that we can provide more tailored help with this connection limit issue. Could you kindly log into https://cloud.mongodb.com/ and look for the chat that we opened?",
"username": "Angela_Shulman"
},
{
"code": "",
"text": "h_b - It looks like one of our Atlas support agents already helped you solve the issue you were having connecting with Mongoose. Please open another chat if you still need help!",
"username": "Angela_Shulman"
},
{
"code": "",
"text": "Hi @Andrew_Davidson,\nI’m new to serverless and I’m also facing an issue with my database hitting a limit of 500 open connections. I have caching in my serverless functions as well. I just want to ask how do you often detect if a function goes “cold,” is that the job of Mongo or the serverless system (I’m using Netlify which uses AWS Lambda). Thank you",
"username": "Jessica_Dao"
},
{
"code": "",
"text": "Hi @Jessica_Dao,Sorry to hear that you are facing this issue.To answer your question, Atlas does not know that you’re using a serverless environment.We would not expect you to be hitting the limit if you’re caching the MongoClient (unless you actually peak up to 500 concurrent operations) and would like to help you debug your issue in the context of your Atlas cluster and your specific environment. Could you use the in-app chat icon in the lower-right corner? You can ask for me specifically.Kind regards,\nAngela",
"username": "Angela_Shulman"
},
{
"code": "",
"text": "Hi @Angela_Shulman ,Thanks for getting back. Yesterday I have reached out to Mongo support but they said this question was not in their scope. Is there any other way you could help me.Best regards,\nJessica",
"username": "Jessica_Dao"
},
{
"code": "",
"text": "Hi @Jessica_Dao,Sorry for the confusion - could you kindly look for an updated chat from us? Look forward to working with you.Angela",
"username": "Angela_Shulman"
},
{
"code": "",
"text": "Hi allI am also running into this 500 connection issues with just in the development stage. Should only have one connection for the app i would have thought. NextJS seems to inherently need insane connections, must be a way around this ?Daniel",
"username": "Daniel_Gadd"
}
]
| Atlas + Next.js + Now Connection Issues | 2021-02-19T17:56:16.580Z | Atlas + Next.js + Now Connection Issues | 7,222 |
|
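For readers landing here, the pattern in the linked Next.js utility boils down to caching the client promise in a module-scoped (or global) variable so that warm invocations reuse one connection pool instead of opening a new one. A simplified sketch, with the URI and database name as placeholders:

// db.js, shared by all API routes / serverless functions
const { MongoClient } = require("mongodb");

const uri = process.env.MONGODB_URI;   // placeholder environment variable
let clientPromise;

function getClient() {
  // Reuse the same pending or established connection across warm invocations.
  if (!clientPromise) {
    clientPromise = new MongoClient(uri, { maxPoolSize: 10 }).connect();
  }
  return clientPromise;
}

module.exports = async function getDb() {
  const client = await getClient();
  return client.db("blog");            // placeholder database name
};

The connection count discussed in the thread then depends mostly on how many lambda instances the platform keeps warm, not on how many requests are served.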
null | [
"aggregation",
"python"
]
| [
{
"code": "cursor = db.coll.aggregate([{....}])\nopid = cursor.get_opId() ?????\nfor record in cursor:\n ...\n",
"text": "I’m connected to a DB and I don’t have access to the admin db so I can’t run db.currentOp… Is there a way to get all the operations currently running under my username ?The operations are run from a python script. Is there a way to get the opId of a given cursor (usually aggregate) while running it ?",
"username": "RemiJ"
},
{
"code": "",
"text": "Hi @RemiJ and welcome to the CommunityIs there a way to get all the operations currently running under my usernameThe below command, would give you the desired operationsdb.currentOp( { “$ownOps”: true } )Please see the db.currentOp() documentation for more information.Also, using the above command, you would be able to see not all but only your own operations.Let us know if you have any further questions.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "mongos> db.currentOp({\"$ownOps\": true})\n{\n \"ok\" : 0,\n \"errmsg\" : \"not authorized on admin to execute command { currentOp: 1.0, $ownOps: true, lsid: { id: UUID(\\\"6e4e47ae-f332-4b7d-a833-5b7e553c2b92\\\") }, $clusterTime: { clusterTime: Timestamp(1657525626, 1), signature: { hash: BinData(0, 14CFA2458DE7ED3958CCB9C029CFF80B35C7DCEB), keyId: 7059081335716970499 } }, $db: \\\"admin\\\" }\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\",\n \"operationTime\" : Timestamp(1657525846, 3),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1657525846, 3),\n \"signature\" : {\n \"hash\" : BinData(0,\"ZXgJ23Ls2/uc3m9GA2tJM/XqhXA=\"),\n \"keyId\" : NumberLong(\"7059081335716970499\")\n }\n }\n}\n",
"text": "Thanks @Aasawari\nSorry for the delay.I’ve just tried this and still get an error :mongodb V4.2.20user have R+W access to the db where the data are, nothing on any other db.Documentation is about calling currentOp on mongod. I’m running a sharded db so I don’t know anything about mongod, only mongos. I need a consolidated view of the operations running under my username.",
"username": "RemiJ"
}
]
| Get the running opIds of current user | 2022-06-17T08:39:02.748Z | Get the running opIds of current user | 1,807 |
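One thing that may be worth trying when the db.currentOp() helper is refused on mongos is the $currentOp aggregation stage run against the admin database with allUsers: false, which limits the output to your own operations. Whether this is authorized for a user with only read/write on a data database depends on the deployment, so treat it as a hedged suggestion rather than a guaranteed fix:

// mongosh / legacy shell, connected to the mongos
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: false, localOps: true } },   // own operations, as seen by this mongos
  { $match: { active: true } }                           // optional extra filtering
]);

The opid field of each returned document is the operation id that could then be matched against a long-running aggregation started from the Python script.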
null | [
"java",
"spring-data-odm"
]
| [
{
"code": "",
"text": "Hi All experts,May u know is it possible to created atlas index from spring application like normal mongo db index?",
"username": "yichao_wang"
},
{
"code": "IndexOperationsMongoTemplate#indexOpsIndexOperations",
"text": "Hello @yichao_wang, welcome to the MongoDB Community forum!You can access IndexOperations that can be performed on a collection using the MongoTemplate#indexOps method. IndexOperations has methods to create, drop and query indexes.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "MongoTemplateHi Prasad,\nI am talking about atlas index except normal index from mongo like compound index. Are u sure mongotemplate can do that?",
"username": "yichao_wang"
}
]
| Create atlas index from spring data | 2022-07-10T07:14:53.902Z | Create atlas index from spring data | 2,296 |
null | [
"data-modeling",
"python",
"time-series"
]
| [
{
"code": "class SampleTimeSeries(Document):\n ts: datetime = Field(default_factory=datetime.now)\n meta: str\n\n class Settings:\n timeseries = TimeSeriesConfig(\n time_field=\"ts\", # Require\n )\nclass CollectionOfTimeSeries(Document):\n timeseries_a: SampleTimeSeries\n timeseries_b: SampleTimeSeries\n",
"text": "Hey everyone, big MongoDB fan here. Using Beanie object-mapper for Python.A lot of my collections are time series, therefore I use the time series settings when creating a collection:However, I would like to create a collection of these time series collection. Can I do this? If so, what is the easiest way? The example I give below assumes the “timeseries_a” field to be a single document when I try this way.Example:",
"username": "Daniel_Smyth1"
},
{
"code": "{\ntimestamp: ISODate(\"2021-05-18T00:00:00.000Z\",\ntempdata:{\n \"timestamp\": ISODate(\"2021-05-18T00:00:00.000Z\"),\n \"temp\": 12\n },\nhumiditydata:{\n \"timestamp\": ISODate(\"2021-05-18T00:00:00.000Z\"),\n \"humidity\": 50\n }\n}\n$lookup$facet",
"text": "“Time Series Collection” is just another collection, butyou are trying to embed them into another collection. then they are no more time series collections. they will become just documents holding some time data.but you can create a new time series collection, and use their timestamp fields and insert into this new one if they share exact timestamps, or make adjustments to their timestamps then insert.remember your data does not have to have all fields in mind in mongodb. so you can insert 2 different-field documents at any time, but you need that top level timestamp if you want to make use of good things about time series collections.if this is not what you have in mind, then you may try making “views” of your data by $lookup and $facet aggregations to combine the two time series collections, but the result won’t be a time series unless you insert the result into a new one with the above structure.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks Yilmaz.In case anyone sees this post in the future, I decided to combine both time series into a single time series. Like Yilmaz said, the data does not have to have all fields in mind. Therefore, I combined the two time series into a single time series with each document holding either the A values or the B values.",
"username": "Daniel_Smyth1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can I make a collection of collections? | 2022-07-08T22:32:04.019Z | Can I make a collection of collections? | 2,115 |
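To make the accepted approach concrete, here is a small mongosh sketch of a single time series collection receiving documents that carry different measurement fields; the collection and field names are illustrative only.

// One collection, one timeField; individual documents may carry different fields.
db.createCollection("readings", {
  timeseries: { timeField: "ts", metaField: "meta" }
});

db.readings.insertMany([
  { ts: new Date(), meta: { source: "a" }, temperature: 21.5 },  // the "A" values
  { ts: new Date(), meta: { source: "b" }, humidity: 48 }        // the "B" values
]);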
null | [
"replication"
]
| [
{
"code": "",
"text": "HiWe are trying to migrate from 4.0.27 to 4.2.20 version, using below path4.0.27 migrating replicaset data from MMapV1 to WT, as belowAll fine till above steps.Now we are trying to move from 4.0.27 to 4.2.x version, using below pathUpgrade one of the secondary to 4.2.x version, but the secondary is not coming up with below error message,“[rsSync-0]Fatal assertion 34437 InvalidOptions: unknown option to collMod: usePowerOf2Sizes at src/mongo/db/repl/sync_tail.cpp 851”\n“[rsSync-0] \\n\\n***aborting after fassert() failure \\n\\n”from mongo document collMod — MongoDB Manual, it is obvious that, these options are removed from 4.2.But couldn’t find any documentation reference, how to mitigate this issue before upgrade to 4.2.x?\nHow do we remove usePowerOf2Sizes option in the existing collection without removing and affecting existing data in the collections before upgrading to 4.2.x version?Could you please provide any pointer to this?Thanks,\nNavanee",
"username": "Navaneethakrishnan_91112"
},
{
"code": "",
"text": "Hi @Navaneethakrishnan_91112 and welcome back !The MMAPv1 deprecated storage engine since 3.2 is completely removed in MongoDB 4.2.Before you upgrade to 4.2, you need to upgrade your nodes to WiredTiger.I would do the follow to upgrade. Supposing you have a 3 nodes RS in 4.0.X:Always make sure to have 2 nodes running so they can elect a primary (or a strict majority if you have more than 3 nodes).Before you do all that though, make sure to read again the production notes about upgrade from version X to Y to make sure that you didn’t overlooked a step like remove pv0 for example or forget to set the feature compatibility version.Just for the record, Power of 2 is a notion that doesn’t exist anymore in WiredTiger so a WT node shouldn’t mention that.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Power of 2Hi Maxime,Thank you very much for your reply. Please see below reply to some your reply<<\nThe MMAPv1 deprecated storage engine since 3.2 is completely removed in MongoDB 4.2.Yes. We are aware of that, that’s the reason before moving to 4.2, we migrated the storage engine from MMapV1 to WT in 4.0.27 itself.<<\nI would do the follow to upgrade. Supposing you have a 3 nodes RS in 4.0.X:This is exactly we did. Protocol version already running in PV1 in 4.0 itself and appropriate FCV values are set before 4.2 upgrade.Surprise here was that, some reason 4.2 upgraded node, complaining about Power of 2 in the initial sync and didn’t come up, as reported in the initial query.Unfortunately, we couldn’t reproduce this issue in our setup again.So my question again is, in 4.0 itself (Power of 2 option set in 4.0), is there any way to remove this Power of 2 option before upgrading to 4.2.x.\n“options” : {\n“flags” : 1\n}, → this is power of 2 option set in the collection, how do we remove this from the collection, which is running in 4.0 with WT engine.This is just to make sure, non-supported flags are cleaned up in 4.0 itself, before 4.2 upgrade.Thanks,\nNavanee",
"username": "Navaneethakrishnan_91112"
},
{
"code": "",
"text": "Where / how do you see this power of 2 flag / option ?",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "“[rsSync-0]Fatal assertion 34437 InvalidOptions: unknown option to collMod: usePowerOf2Sizes at src/mongo/db/repl/sync_tail.cpp 851”\n“[rsSync-0] \\n\\n***aborting after fassert() failure \\n\\n”First saw power of 2 from 4.2 secondary log<<\n“[rsSync-0]Fatal assertion 34437 InvalidOptions: unknown option to collMod: usePowerOf2Sizes at src/mongo/db/repl/sync_tail.cpp 851”\n“[rsSync-0] \\n\\n***aborting after fassert() failure \\n\\n”Second I did a simple testing,\nIn my collection I did add the power of 2 using runCommand,before adding power of 2 in 4.0 with WT, db.getCollectionInfos() returns empty options“options” : {},After adding power of 2, db.getCollectionInfos() returns below“options” : {\n“flags” : 1\n},So I assume, flags:1 is power of 2.FYI, when I add additional flag, flags is getting changed.",
"username": "Navaneethakrishnan_91112"
},
{
"code": "",
"text": "In my collection I did add the power of 2 using runCommand,How did you do that? In 4.2 the option was removed.Can you send the steps to reproduce the problem? I can’t reproduce on my end.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "How did you do that? In 4.2 the option was removed.Sorry, if my reply was not clear earlier, I added this in 4.0-WT- Primary node with\n4.2 WT, as secondary node.Unfortunately, not able to repro it now, but give me sometime, I have identified a pattern, where in which, this could be repro-ed, will get back in with sample repro program.",
"username": "Navaneethakrishnan_91112"
},
{
"code": "",
"text": "Following is the repro stepsHave replica-set, for testing purpose, currently having 3 member setStep 1)\nCreate a “testCollection”Step 2)\ndb.runCommand( {collMod : “testCollection” , usePowerOf2Sizes : true } )\n{\n“usePowerOf2Sizes_old” : true,\n“usePowerOf2Sizes_new” : true,\n“ok” : 1,\n“operationTime” : Timestamp(1656473051, 1),\n“$clusterTime” : {\n“clusterTime” : Timestamp(1656473051, 1),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n}\n}Check whether the powerOf2 is added\ndb.getCollectionInfos()\n“name” : “testCollection”,\n“type” : “collection”,\n“options” : {\n“flags” : 1\n},Step 3)\nSimple document insertion in to the collectionStep 4)\nNow upgrade m3 to 4.2, for simplicity purpose changed, m3 data path to “m3_new_data”at this moment, following is the replica setm1 - running in 4.0 as PRIMARY\nm2 - running in 4.0 as SECONDARY\nm3 - running in 4.2 as SECONDARYStep 5)\nRepeat Step 2) in m1now the m3 running in 4.2 as secondary crashes.2022-06-29T09:23:52.615+0530 E REPL [repl-writer-worker-0] Failed command { collMod: “testCollection”, usePowerOf2Sizes: false } on test with status InvalidOptions: unknown option to collMod: usePowerOf2Sizes during oplog application\n2022-06-29T09:23:52.615+0530 F REPL [repl-writer-worker-0] Error applying operation ({ op: “c”, ns: “test.$cmd”, ui: UUID(“ed0b8b42-7fa5-49fa-8b67-2fc34c63cf65”), o: { collMod: “testCollection”, usePowerOf2Sizes: false }, o2: { collectionOptions_old: { uuid: UUID(“ed0b8b42-7fa5-49fa-8b67-2fc34c63cf65”), flags: 0 } }, ts: Timestamp(1656474832, 1), t: 4, h: 5926513413780485954, v: 2, wall: new Date(1656474832605) }): :: caused by :: InvalidOptions: unknown option to collMod: usePowerOf2Sizes\n2022-06-29T09:23:52.618+0530 F REPL [rsSync-0] Failed to apply batch of operations. Number of operations in batch: 1. First operation: { op: “c”, ns: “test.$cmd”, ui: UUID(“ed0b8b42-7fa5-49fa-8b67-2fc34c63cf65”), o: { collMod: “testCollection”, usePowerOf2Sizes: false }, o2: { collectionOptions_old: { uuid: UUID(“ed0b8b42-7fa5-49fa-8b67-2fc34c63cf65”), flags: 0 } }, ts: Timestamp(1656474832, 1), t: 4, h: 5926513413780485954, v: 2, wall: new Date(1656474832605) }. Last operation: { op: “c”, ns: “test.$cmd”, ui: UUID(“ed0b8b42-7fa5-49fa-8b67-2fc34c63cf65”), o: { collMod: “testCollection”, usePowerOf2Sizes: false }, o2: { collectionOptions_old: { uuid: UUID(“ed0b8b42-7fa5-49fa-8b67-2fc34c63cf65”), flags: 0 } }, ts: Timestamp(1656474832, 1), t: 4, h: 5926513413780485954, v: 2, wall: new Date(1656474832605) }. Oplog application failed in writer thread 0: InvalidOptions: unknown option to collMod: usePowerOf2Sizes\n2022-06-29T09:23:52.618+0530 F - [rsSync-0] Fatal assertion 34437 InvalidOptions: unknown option to collMod: usePowerOf2Sizes at src\\mongo\\db\\repl\\sync_tail.cpp 851\n2022-06-29T09:23:52.624+0530 F - [rsSync-0] \\n\\n***aborting after fassert() failure\\n\\nIt is happening consistently.\nIt looks like a bug to me, what do you think?NOTE: If we try the same step 2) in 4.2 member as PRIMARY, then it gracefully rejects it and doesn’t crash.\nCouldn’t the same be the behavior in the replication scenario as well??Thanks,\nNavanee",
"username": "Navaneethakrishnan_91112"
},
{
"code": "usePowerOf2SizestestCollectionusePowerOf2Sizestruefalsetest:PRIMARY> db.getCollectionInfos()\n[\n\t{\n\t\t\"name\" : \"testCollection\",\n\t\t\"type\" : \"collection\",\n\t\t\"options\" : {\n\t\t\t\"flags\" : 1\n\t\t},\n\t\t\"info\" : {\n\t\t\t\"readOnly\" : false,\n\t\t\t\"uuid\" : UUID(\"8a6af3c2-c998-4d25-91bf-78eb91e0b021\")\n\t\t},\n\t\t\"idIndex\" : {\n\t\t\t\"v\" : 2,\n\t\t\t\"key\" : {\n\t\t\t\t\"_id\" : 1\n\t\t\t},\n\t\t\t\"name\" : \"_id_\",\n\t\t\t\"ns\" : \"test.testCollection\"\n\t\t}\n\t}\n]\ntest:PRIMARY> db.getCollectionInfos()\n[\n\t{\n\t\t\"name\" : \"testCollection\",\n\t\t\"type\" : \"collection\",\n\t\t\"options\" : {\n\t\t\t\n\t\t},\n\t\t\"info\" : {\n\t\t\t\"readOnly\" : false,\n\t\t\t\"uuid\" : UUID(\"8a6af3c2-c998-4d25-91bf-78eb91e0b021\")\n\t\t},\n\t\t\"idIndex\" : {\n\t\t\t\"v\" : 2,\n\t\t\t\"key\" : {\n\t\t\t\t\"_id\" : 1\n\t\t\t},\n\t\t\t\"name\" : \"_id_\",\n\t\t\t\"ns\" : \"test.testCollection\"\n\t\t}\n\t}\n]\ndb.runCommand( {collMod : \"testCollection\" , usePowerOf2Sizes : false } )\n",
"text": "Hi @Navaneethakrishnan_91112,I tried to reproduce this error with a single node RS but I wasn’t able to get the error.I did some digging though in the upgrade replica set to 4.2 docs and in the Preparedness section, you get a link to all the Compatibility Changes in MongoDB 4.2.In this doc, the section “MMAPv1 Specific Options for Commands and Methods” mentions that the MMAPv1 usePowerOf2Sizes is removed. It’s also mentioned in the collMod doc in the “note” at the top.During my testing I tried to migrate a single node RS from 4.0.28 to 4.2.21 twice. First time the testCollection had a flag usePowerOf2Sizes set to true or false.Before each upgrade I have this:And after I have:So as you see above, the flag was removed automatically for me during the upgrade and I don’t have the error you mentioned in the logs.The difference between you and I could be that you are running 4.2 and 4.0 in the same RS at the same time. Question: Did you disable the option before the migration like it’s suggested in the doc or what is still on?I guess this should be enough to fix the problem during the migration process as the flag doesn’t exist in 4.2+ anyway.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi Maxime,Thank you for the reply again. I think, you have missed a step in repro. Issue is not happening during 4.2 upgrade. But 4.2 node as secondary after upgrade.As mentioned in Step 4) in the earlier reply, keep primary node in 4.0 and secondary node in 4.2.Step 4)\nNow upgrade m3 to 4.2, for simplicity purpose changed, m3 data path to “m3_new_data”at this moment, following is the replica setm1 - running in 4.0 as PRIMARY\nm2 - running in 4.0 as SECONDARY\nm3 - running in 4.2 as SECONDARYNow you run the “db.runCommand( {collMod : “testCollection” , usePowerOf2Sizes : true} )” in 4.0 Primary, which has 4.2 as secondary.You’ll see the crash for sure in 4.2 Secondary. Please let me know if still repro steps are not clear.Thanks,\nNavanee",
"username": "Navaneethakrishnan_91112"
},
{
"code": "false",
"text": "Yes I understand exactly what you mean and it’s clearly stated in the doc that this isn’t supported because the option is removed.Your 4.2 node is trying to replicate the operation that was done in 4.0 but can’t because it doesn’t exist anymore in 4.2. It makes sense.When you are upgrading your cluster from 4.0 to 4.2, it’s just a transitional state to allow the upgrade. It’s not a stable position that you want to keep running for hours. The goal is to migrate all the machines to 4.2 as soon as possible in a safe manner.As you are already in 4.0 with WiredTiger, this flag is completely useless (it’s a MMAPv1 flag). So just set it to false for all the collections in all the DBs and don’t touch it while migrating. The flags will disappear in your 4.2 RS and you won’t have the error.As a general guideline, avoid running “admin” operations when your are in the middle of an upgrade process.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thank you very much for support on this till now.<<\nYour 4.2 node is trying to replicate the operation that was done in 4.0\nbut can’t because it doesn’t exist anymore in 4.2. It makes sense.If you look at from my initial reply, we did look into all the possible document and proceed with the upgrade, based on the documentation.<<\nAs you are already in 4.0 with WiredTiger, this flag is completely useless (it’s a MMAPv1 flag). So just set it to false for all the collections in all the DBs and don’t touch it while migrating. The flags will disappear in your 4.2 RS and you won’t have the error.JFYI, even if this flag is set as “true” and we migrate, the flag will disappear, in the initial sync. The problem occurs, only if this option is added true/false during migration.For our use case, the application part where in which “powerOf2size” is added during migration is removed from our code, and it solves the problem.But in general,\n4.2 running as Primary works perfectly, if the application tries to add unsupported mmapv1 specific option, it just ignores as stated in document.\n“MongoDB ignores the MMAPv1 specific option async for fsync.”But the question remains same,\nShouldn’t the same behavior applies to mongo 4.2 running as secondary to mongo 4.0 Primary?In a large deployment, for high availability purpose, there could be use cases, where in which, in a replica set, only part of replica set members upgraded in a day, rest will be upgraded in subsequent days.Considering this kind of use case, Mongo 4.2 as secondary crashing over un-supported option, doesn’t look good to me.\nPlease check from this perspective. Either please do add a documentation section on this or try to keep 4.2 Primary node behavior here in the secondary as well i.e instead of CRASH, let Mongo 4.2 secondary node as well ignore that option.FYI, just for comparison perspective, did download and check the behavior in Enterprise edition also, the behavior looks same there, i.e Mongo 4.2 secondary crashes for an unsupported option from 4.0 Primary.Thanks,\nNavanee",
"username": "Navaneethakrishnan_91112"
},
{
"code": "",
"text": "Hi @Navaneethakrishnan_91112,Ok I totally get it now and I understand your point. I’m escalating this issue and I’ll circle back when I have some news.I agree that the node shouldn’t crash in these circonstances and should also ignore the command.Thanks a lot for all the explanations!\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @Navaneethakrishnan_91112,I got a feedback from the SERVER team. They were able to reproduce and confirm the problem but as it’s a deprecated command, the best course of action is to remove these commands from the code prior to the migration, as it’s already documented.So I’m not sure yet if they are going to fix the problem or just add extra documentation in the migration doc 4.0 => 4.2 to make sure there is a proper warning, but at least the message has been delivered to the right people now and they are taking actions.I’ll keep you updated here if I get more news.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Bug ticket has been open. You can track it here:https://jira.mongodb.org/browse/SERVER-67924",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "https://jira.mongodb.org/browse/SERVER-67924Thank you very much for your support!Thanks,\nNavanee",
"username": "Navaneethakrishnan_91112"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Procedure to remove usePowerOf2Sizes in existing collection before moving to 4.2.x | 2022-06-15T05:33:27.577Z | Procedure to remove usePowerOf2Sizes in existing collection before moving to 4.2.x | 2,082 |
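A sketch of the cleanup suggested in the thread: clear the MMAPv1-era flag on every user collection while the replica set is still entirely on 4.0, before any 4.2 node joins, so no 4.2 secondary ever has to replay a usePowerOf2Sizes collMod. Database names and the exclusion list are assumptions about a typical deployment; adapt them to yours.

// Run on the 4.0 primary, before starting the 4.2 upgrade.
db.adminCommand({ listDatabases: 1 }).databases
  .map(d => d.name)
  .filter(name => !["admin", "local", "config"].includes(name))
  .forEach(dbName => {
    const database = db.getSiblingDB(dbName);
    database.getCollectionNames().forEach(collName => {
      // Clears the legacy flag; it is a no-op for WiredTiger storage.
      database.runCommand({ collMod: collName, usePowerOf2Sizes: false });
    });
  });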
[
"node-js",
"compass",
"react-js"
]
| [
{
"code": "",
"text": "[Some context for my error] I am building a reservation system using FullCalendar, and l am at the part where l am trying to send a created reservation or event to be stored on the database. When l create the event it populates on the calendar fine, but it will not save to the mongodb. I am using mongoldb compass. The error I’m specifically getting in the console is :\nScreenshot 2022-06-25 at 17.31.422062×318 177 KB\nI have been googling for 2 days trying to solve this error. I have tried changing the proxy settings multiple ways. I have tried rewriting the server.js file. I have tried setting breakpoints to dive deeper into the issue to no avail. Any help would be much appreciated.Thanks",
"username": "Ronan_Morgan"
},
{
"code": "",
"text": "Hi,\nI think I have exactly the same problem and I can’t get it to work.Did you find a solution ?Thanks,\nPierre",
"username": "Pierre_Pepe"
},
{
"code": "",
"text": "I wasn’t able to find a solution. Rebuilt the database in SQLServer and was able to get it working then. If you do find anything, let me know because l would have preferred to use MongoDB.Thanks,\nRonan",
"username": "Ronan_Morgan"
},
{
"code": "",
"text": "Hi, Where you able to find a solution for this issue?",
"username": "Ronan_Morgan"
}
]
| Proxy Error: Could not proxy request /api/calendar/create-event from localhost:3000 to http://127.0.0.1:5000/. (ECONNRESET) | 2022-06-26T22:08:07.820Z | Proxy Error: Could not proxy request /api/calendar/create-event from localhost:3000 to http://127.0.0.1:5000/. (ECONNRESET) | 6,868 |
|
null | [
"flutter"
]
| [
{
"code": "flutter --version\nFlutter 3.1.0-0.0.pre.1533 • channel master • https://github.com/flutter/flutter.git\nFramework • revision 78e3b93664 (18 hours ago) • 2022-07-07 08:34:06 -0400\nEngine • revision 56faff459e\nTools • Dart 2.18.0 (build 2.18.0-261.0.dev) • DevTools 2.15.0\nflutter run\nMultiple devices found:\nmacOS (desktop) • macos • darwin-x64 • macOS 12.4 21F79 darwin-x64\nChrome (web) • chrome • web-javascript • Google Chrome 103.0.5060.114\n[1]: macOS (macos)\n[2]: Chrome (chrome)\nPlease choose one (To quit, press \"q/Q\"): 1\nLaunching lib/main.dart on macOS in debug mode...\n\nBuilding macOS application... \n[ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: RealmException: non-zero custom status code considered fatal\n\nSyncing files to device macOS... 635ms\n\nFlutter run key commands.\nr Hot reload. 🔥🔥🔥\nR Hot restart.\nh List all available interactive commands.\nd Detach (terminate \"flutter run\" but leave application running).\nc Clear the screen\nq Quit (terminate the application on the device).\n\n💪 Running with sound null safety 💪\n\nAn Observatory debugger and profiler on macOS is available at: http://127.0.0.1:60209/IoDFABWbRVg=/\nThe Flutter DevTools debugger and profiler on macOS is available at: http://127.0.0.1:9100?uri=http://127.0.0.1:60209/IoDFABWbRVg=/\n",
"text": "I’ve been looking at the examples for Realm and Flutter and I’ve come across an error that I don’t know how to work around.\nRealm Flutter documentation\ngithub: realm-dart-samplesI’ve bumped up my Flutter SDK toThe error is when I try to run the apps from the repository:",
"username": "Ilan_Toren"
},
{
"code": "",
"text": "Hi,\nThis error shows that realm native binary was correctly loaded into the app and most probably it happens cause there are no network entitlements on the app. You can read how to add them here Networking | Flutter\nand Building macOS apps with Flutter | FlutterYou can also try running this Flutter sample which does not have network requirements realm-dart-samples/provider_shopper at main · realm/realm-dart-samples · GitHubAlso bumping the Flutter version to Flutter Beta is not required. Consider using the stable version of Flutter.cheers",
"username": "Lyubomir_Blagoev"
},
{
"code": "",
"text": "@Ilan_Toren We have fixed our sample with the correct entitlements. Can you get the latest master and try running it again.Cheers.",
"username": "Lyubomir_Blagoev"
},
{
"code": "",
"text": "So far it looks good. And I have the dart_flexible_sync working. Thanks for your response, because now I can put together the Flutter UI app I want to get off the ground.",
"username": "Ilan_Toren"
}
]
| Flutter with Realm | 2022-07-08T06:16:38.740Z | Flutter with Realm | 3,536 |
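For readers hitting the same error before the sample was fixed: the network entitlement mentioned in the reply is usually added on the macOS side of a Flutter project, typically in macos/Runner/DebugProfile.entitlements and macos/Runner/Release.entitlements. The key below is the one described in the Flutter networking docs; the file paths assume a default Flutter project layout.

<dict>
    <!-- allow outgoing network connections from the sandboxed app -->
    <key>com.apple.security.network.client</key>
    <true/>
</dict>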
[]
| [
{
"code": "",
"text": "\nimage_2022-07-09_1552080681545×249 37.1 KB\n\nThese error occurs can plz help.\nThe solution like the giving the configuration 0.0.0.0 doesn’t work…\nI have also add the current ip address but it also doesn’t work",
"username": "Tista_Dutta"
},
{
"code": "",
"text": "Change your DNS provider to use Google’s 8.8.8.8.",
"username": "steevej"
}
]
| Error in connecting the mongodb with streamlit with another ip address | 2022-07-09T10:26:01.949Z | Error in connecting the mongodb with streamlit with another ip address | 1,322 |
|
null | [
"aggregation",
"views"
]
| [
{
"code": "",
"text": "l have 2 doc.\ndoc1:[ {a:1,b:10} {a:2:b:20}]\ndoc2:[ {a:1,c:100} {a:2:c:200}]I want move doc2.c to doc1 when doc.a is same.this is my command:\ndb.doc2.aggregate([ { $merge: { into: “doc1”, on: “a”, let: { c: “$c” }, whenMatched: [ { $addFields: { c: “$$c” } }], whenNotMatched: “discard” } }])in mongo4.2 it works\nin mongo4.4 is error: Cannot $merge to internal database: local.is my command wrong?",
"username": "zh_pc"
},
{
"code": "",
"text": "There are terminology issues with your post that makes it hard to understand.You write that you have 2 docs, but use $merge which is a collection operation.You also write that it works in 4.2.Assuming doc1 and doc2 are really collections, I suspect that you forgot to switch to the database that contains your collections with the use command and that you are using the local database which is reserved. It worked in 4.2 simply because you did the appropriate use TheCorrectDatabase before the $merge.",
"username": "steevej"
},
{
"code": "",
"text": "sorry for my bad english…\ni means :\ndoc1 and doc2 are collections. i want to copy doc2.d to doc1.d where doc1.a == doc2.a\nthis is my pic.in mongo4.4:\n4.41115×767 49.6 KB\n",
"username": "zh_pc"
},
{
"code": "",
"text": "in mongo4.2:\n4.21103×776 47.8 KB\n",
"username": "zh_pc"
},
{
"code": "mongoshlocaluse localmergingdbuse mergingdb",
"text": "it is nice thing you use mongosh because it shows you which database is currently active: local is your current one because at some point you have used use local command.that you are using the local database which is reservedas @steevej stated above, it is reserved for internal use, and you are supposed to NOT use it. check this link for more details The local Database — MongoDB ManualPlease use/create some other database name for your needs, such as mergingdb and switch to it with use mergingdb for your operations. rest assured your query works that way.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "oh oh, i got it.\nthank you all for help me !!!",
"username": "zh_pc"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Move doc1.field to doc2.field on same id. aggregate error in mongo4.4: Cannot $merge to internal database: local | 2022-07-02T04:04:11.629Z | Move doc1.field to doc2.field on same id. aggregate error in mongo4.4: Cannot $merge to internal database: local | 2,777 |
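Putting the resolution together, the same pipeline works once a non-reserved database is active; the names below are the ones used in the thread, and note that $merge also expects a unique index on the "on" field ("a") in the target collection.

// Switch to any user database first; "local", "admin" and "config" are internal.
use mergingdb

db.doc2.aggregate([
  {
    $merge: {
      into: "doc1",
      on: "a",
      let: { c: "$c" },
      whenMatched: [ { $addFields: { c: "$$c" } } ],
      whenNotMatched: "discard"
    }
  }
]);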
[
"mongodb-shell",
"atlas"
]
| [
{
"code": "",
"text": "\nimage1171×201 14.5 KB\n",
"username": "Stephan_Mingoes"
},
{
"code": "",
"text": "You are already connected to mongodb as the prompt shows.Please exit and try the connect string you were attempting from os prompt",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable to Connect to Atlas Cluster 2 | 2022-07-09T23:34:25.132Z | Unable to Connect to Atlas Cluster 2 | 1,422 |
|
null | []
| [
{
"code": "public static void main(String[] args) {\n\t\n\tConnectionString connectionString = new ConnectionString(\"mongodb+srv://demo:[email protected]/test?retryWrites=true&w=majority\");\n\tMongoClientSettings settings = MongoClientSettings.builder()\n\t .applyConnectionString(connectionString)\n\t .build();\n\tMongoClient mongoClient = MongoClients.create(settings);\n\tMongoDatabase database = mongoClient.getDatabase(\"test\");\n\t\n\tSystem.out.println(database.getName());\n}\nException in thread \"main\" com.mongodb.MUQrUTqDch7niLusZ4SxSTAcoawWJK91eT: Unable to look up TXT record for host cluster0.f02ax.mongodb.net\n\tat com.mongodb.internal.dns.DefaultDnsResolver.resolveAdditionalQueryParametersFromTxtRecords(DefaultDnsResolver.java:131)\n\tat com.mongodb.ConnectionString.<init>(ConnectionString.java:381)\n\tat com.example.demo.em.main(em.java:13)\nCaused by: javax.naming.CommunicationException: DNS error [Root exception is java.net.SocketTimeoutException: Receive timed out]; remaining name 'cluster0.f02ax.mongodb.net'\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsClient.query(DnsClient.java:316)\n\tat jdk.naming.dns/com.sun.jndi.dns.Resolver.query(Resolver.java:81)\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:434)\n\tat java.naming/com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:235)\n\tat java.naming/com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:141)\n\tat java.naming/com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:129)\n\tat java.naming/javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:171)\n\tat com.mongodb.internal.dns.DefaultDnsResolver.resolveAdditionalQueryParametersFromTxtRecords(DefaultDnsResolver.java:114)\n\t... 2 more\nCaused by: java.net.SocketTimeoutException: Receive timed out\n\tat java.base/sun.nio.ch.DatagramChannelImpl.trustedBlockingReceive(DatagramChannelImpl.java:700)\n\tat java.base/sun.nio.ch.DatagramChannelImpl.blockingReceive(DatagramChannelImpl.java:630)\n\tat java.base/sun.nio.ch.DatagramSocketAdaptor.receive(DatagramSocketAdaptor.java:239)\n\tat java.base/java.net.DatagramSocket.receive(DatagramSocket.java:569)\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsClient.doUdpQuery(DnsClient.java:426)\n\tat jdk.naming.dns/com.sun.jndi.dns.DnsClient.query(DnsClient.java:214)\n\t... 9 more\n",
"text": "following is the code.while running this it gives below error",
"username": "jikar_bhati"
},
{
"code": "",
"text": "The TXT DNS entry is fine. Try switching DNS provider by using 8.8.8.8.",
"username": "steevej"
},
{
"code": "",
"text": "How to change dns provider…can you elaborate?",
"username": "jikar_bhati"
},
{
"code": "",
"text": "I am also facing a similar issue. Could you please let me know how was the issue resolved for you?",
"username": "Sweta_Das1"
},
{
"code": "",
"text": "If it is exactly the same issue you should first try what was already proposed.The TXT DNS entry is fine. Try switching DNS provider by using 8.8.8.8.Personally, I am not assuming that this is the same issue, so please post a screenshot of what you are doing that shows the exact issue you are having. The important thing to see is your connection string. That is the only way we can tell you if the TXT DNS entry is right or not.",
"username": "steevej"
},
{
"code": "",
"text": "switching DNS providerI’m experiencing the exact same issue but can’t seem to find a resolution to it. I’ve tried changing my DNS provider to Google DNS (8.8.8.8 & 8.8.4.4) but I’m still failing to connect via my application to Mongo. Does anyone know what is going wrong and what the fix to this issue is?",
"username": "Joseph_Magara"
}
]
| Not able to connect mongodb atlas with java | 2021-06-26T17:31:00.462Z | Not able to connect mongodb atlas with java | 5,575 |
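A quick way to check whether the failure really is DNS, as the replies suggest, is to query the SRV and TXT records for the cluster host from the machine running the application; if these time out but work after switching the resolver to 8.8.8.8, the DNS provider is the culprit.

nslookup -type=SRV _mongodb._tcp.cluster0.f02ax.mongodb.net
nslookup -type=TXT cluster0.f02ax.mongodb.net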
null | [
"java"
]
| [
{
"code": "",
"text": "I am using Java Drivers, and want to have more control on pool size. Specifically I want to increase pool size exponentially whenever the requirement reaches the current pools size, i.e suppose first time it increased by 2, then again by 4, then by 8, so totals will be 2, 6, 12…",
"username": "Divyansh_N_A"
},
{
"code": "",
"text": "Hi @Divyansh_N_A ,There is no way currently to do that with any MongoDB driver. This functionality has not been requested by any other user (and I’ve not seen it in any other connection pool implementation), so I’m wondering why you think it’s needed.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "@Jeffrey_Yemin\nCan you just point me to where and how it is done in the source codes? Thanks",
"username": "Divyansh_N_A"
}
]
| How to have more control on mongoDB pool size | 2022-07-08T07:02:41.894Z | How to have more control on mongoDB pool size | 1,523 |
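While no driver grows the pool in the exponential pattern asked about, the bounds it grows between are configurable, for example through connection string options (the numbers here are arbitrary examples, not recommendations):

mongodb+srv://user:<password>@cluster0.example.mongodb.net/?minPoolSize=10&maxPoolSize=200&maxIdleTimeMS=60000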
null | [
"connector-for-bi"
]
| [
{
"code": "",
"text": "Hi All,I am trying to connect, Tableau to MongoDB Atlas using BI connector. I have enabled MongoDB BI connector in MongoDB Atlas (M10 cluster). I have installed the ODBC driver and went ahead with my System DSN setup.Post that I have established a data connectivity using ODBC data sources using the DSN created in my previous step.The connection failed with the below error message:“Bad Connection: Tableau could not connect to the data source.\nNote that you might need to make local configuration changes to resolve the error.Error Code: 4B810EA5MongoDB BI Connector versions older than 4.1.1 are not supported by this version of Tableau.”Could someone help and let me know the resolution to solve this issue.Thanks.",
"username": "Jay_Thattai"
},
{
"code": "",
"text": "MongoDB BI Connector versions older than 4.1.1 are not supported by this version of Tableau.”Hi @Jay_Thattai,What are your versions of:I believe the error message you are encountering should only be displayed by a very outdated version of the BI Connector. The 4.1.1 version in this context is referring to a minimum expected version of the MySQL wire protocol being used.If you are not already using the latest versions of the MongoDB ODBC Driver & BI Connector, please try again after updating software versions.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Error Code: 4B810EA5 - Tableau could not connect to the data source | 2022-01-25T17:30:55.319Z | Error Code: 4B810EA5 - Tableau could not connect to the data source | 4,665 |
null | [
"node-js"
]
| [
{
"code": "",
"text": "I have setup database cluster on MongoDB Atlas which consists of db instance from from EU, US, and Asia. When connecting MongoDB Atlas using node.js, how to get the information of locations of db instance that response to my request?",
"username": "Jack_Tan"
},
{
"code": "",
"text": "Hi Jack, Do you mean that you have a multi-region cluster with replicas in each region, or is it an Atlas “global cluster” with shard zones in each cluster? Either way, your driver will by default connect to the nearest node: what would you use the location detail of the replica or zone for?",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hi Andrew, I want to make sure my cross geo setup works properly, and the mongodb indeed connects to the nearest nodes with the setup - is there a way to get the location details?",
"username": "Jack_Tan"
}
]
| Get Location of DB instances the response from Node.JS | 2022-07-07T13:21:03.272Z | Get Location of DB instances the response from Node.JS | 1,250 |
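One hedged way to sanity-check the topology from Node.js is to ask the server directly: for a replica set (non-sharded) cluster, the hello command reports which member answered and the full host list, which can then be compared with the per-region node names shown in the Atlas UI. The URI is a placeholder, and older servers use the isMaster name for the same command.

const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);  // placeholder URI
  const info = await client.db("admin").command({ hello: 1 });
  console.log("answered by:", info.me);      // host:port of the member that served the command
  console.log("all hosts  :", info.hosts);   // members of the replica set
  await client.close();
}

main().catch(console.error);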
null | [
"java",
"field-encryption",
"spring-data-odm"
]
| [
{
"code": "",
"text": "Hi Everyone,I implemented mongo field level encryption in spring boot in one of my project. My data encryption keys were kind of rotating with each deployment. Now lately i noticed that for the same field value (say m) for which i was manually checking whether that is present in my db or not, I see multiple entries got created.To my understanding, i feel that this is due to the rotating key strategy that i used here and using single data-encryption key might solve the issue.But is there any better approach to this. I want to use rotating key for security purpose.Reference Material: https://www.mongodb.com/docs/manual/core/security-client-side-encryption/Any help is much appreciated. Thanks",
"username": "Prateek_Mittal"
},
{
"code": "",
"text": "Hello Prateek and thank you for posting!When you say that you were kind of rotating your data encryption keys, does that mean that you were specifying different keyIds to be used for the same field via explicit encryption at different time periods? And were you using deterministic or random encryption? From what I gather you were using deterministic encryption with different keys and lwere wanting the same cleartext value to always result in the same ciphertext. If that is what you were doing then you are correct that changing keys is the problem. A cleartext value will only result in the same ciphertext when the same key is used for encryption. If that is not what you are doing please provide some more detail and I’ll be happy to review.Cynthia",
"username": "Cynthia_Braund"
}
]
| Client-Side Field Level Encryption | Dedupe failing on encrypted field | 2022-05-06T07:24:51.067Z | Client-Side Field Level Encryption | Dedupe failing on encrypted field | 2,201 |
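To illustrate the point about key stability: with deterministic encryption, two encryptions of the same cleartext only produce the same ciphertext (and therefore only dedupe correctly) when the same data encryption key is used each time. Below is a minimal Node.js sketch of explicit encryption pinned to one keyId; the key vault namespace, KMS provider, and master key are placeholders, and the thread's actual project is Spring Boot, where the driver exposes an equivalent ClientEncryption API.

const { MongoClient } = require("mongodb");
const { ClientEncryption } = require("mongodb-client-encryption");

async function example() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);   // placeholder URI
  const encryption = new ClientEncryption(client, {
    keyVaultNamespace: "encryption.__keyVault",                        // placeholder namespace
    kmsProviders: { local: { key: Buffer.alloc(96) } },                // placeholder 96-byte local master key
  });

  // Create the data encryption key ONCE and keep reusing its id; rotating to a
  // new key makes the same plaintext encrypt to a different ciphertext, which is
  // why the duplicate entries described above appear.
  const keyId = await encryption.createDataKey("local");

  const ciphertext = await encryption.encrypt("m", {
    keyId,
    algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
  });
  console.log(ciphertext);
}

example().catch(console.error);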
null | []
| [
{
"code": "import pulumi\nimport pulumi_mongodbatlas as mongodbatlas\n\ntest = mongodbatlas.Project(\"test\",\n api_keys=[mongodbatlas.ProjectApiKeyArgs(\n api_key_id=\"61003b299dda8d54a9d7d10c\",\n role_names=[\"GROUP_READ_ONLY\"],\n )],\n org_id=\"<ORG_ID>\",\n project_owner_id=\"<OWNER_ACCOUNT_ID>\",\n teams=[\n mongodbatlas.ProjectTeamArgs(\n role_names=[\"GROUP_OWNER\"],\n team_id=\"5e0fa8c99ccf641c722fe645\",\n ),\n mongodbatlas.ProjectTeamArgs(\n role_names=[\n \"GROUP_READ_ONLY\",\n \"GROUP_DATA_ACCESS_READ_WRITE\",\n ],\n team_id=\"5e1dd7b4f2a30ba80a70cd4rw\",\n ),\n ])\n",
"text": "I need team “IDs” for automating creation of db clusters using Pulumi. I do not have “Organization Owner” privileges? Is it possible for me to obtain the Team IDs, or do I need to contact an “Organization Owner”? How would they obtain these IDs?I am following this Pulumi resource example from mongodbatlas.Project | Pulumi Registry",
"username": "Aaron_Cutchin"
},
{
"code": "atlas teams listproject_owner_idtest = mongodbatlas.Project(\"test\",\n api_keys=[mongodbatlas.ProjectApiKeyArgs(\n api_key_id=\"61003b299dda8d54a9d7d10c\",\n role_names=[\"GROUP_READ_ONLY\"],\n )],\n org_id=\"<ORG_ID>\"\n)\n\n",
"text": "Hi @Aaron_Cutchin ,There’s a few ways you can grab those team ids. The first is directly from the Atlas Admin API using curl. You can find a good example here that will get all team names and ids: https://www.mongodb.com/docs/atlas/reference/api/teams-get-all/You can also use the Atlas CLI (brew install mongodb-atlas-cli) and use atlas teams list after logging in.Also, I tested with an account where I only have Organization Project Creator and Organization Member and was able to get the team names and ids. So you don’t need “Organization Owner” for it.Finally, while the Pulumi example has teams and even project_owner_id you don’t have to have them to just create a Project as those are optional parameters. Note a Project is an organizational structure to allow you to have settings across 1 or more clusters and hence has to be crated before the cluster. Hence this should work just as well but be a bit simpler:Best,\nMelissa",
"username": "Melissa_Plunkett"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How obtain Team IDs? | 2022-07-07T23:42:31.748Z | How obtain Team IDs? | 1,467 |
null | [
"queries"
]
| [
{
"code": "{\n \"_id\": \"5b841bc3e7179a43f9ad53a8\",\n \"title\": \"Lunar\",\n \"brand\": \"Nike\",\n \"sizes\": [\n {\n \"sizeValue\": 36,\n \"count\": 8,\n \"_id\": \"5fa595b97cc8a20ca00493e2\"\n },\n {\n \"sizeValue\": 49,\n \"count\": 7,\n \"_id\": \"5fa595b97cc8a20ca00493e3\"\n },\n {\n \"sizeValue\": 44,\n \"count\": 0,\n \"_id\": \"620927644fe2a131b1477842\"\n }\n ],\n \"sex\": \"жіночі\"\n }\nfind({sizes: {$all: {$elemMatch: {count:{$eq: 0}}}};});\n[ \n {\n \"_id\": \"61dca3a72cdd6222592e8ba6\",\n \"title\": \"643 Ultra\",\n \"brand\": \"New Balance\",\n \"sizes\": [],\n \"sex\": \"чоловічі\"\n },\n {\n \"_id\": \"61fabd7d9e38f5770f4f52e5\",\n \"title\": \"568\",\n \"brand\": \"New Balance\",\n \"sizes\": [ \n {\n \"_id\": \"62c0b68987bfb4c96d096c76\",\n \"sizeValue\": 39,\n \"count\": 0\n }\n ],\n \"sex\": \"жіночі\"\n },\n {\n \"_id\": \"61ffdfc839180210ea77a407\",\n \"title\": \"Чорні в'єтнамки\",\n \"brand\": \"Adidas\",\n \"sizes\": [\n {\n \"_id\": \"62c0b68987bfb4c96d096c76\",\n \"sizeValue\": 40,\n \"count\": 0\n }\n ],\n \"sex\": \"чоловічі\"\n }\n]\n",
"text": "There is the model example.As you see, there is sizes array in which objects have the count field. Hence, I need to match all models where only all count field is equal to 0 or sizes array is empty.I tried using this query but this did not work.The estimated result must be as following:",
"username": "Ivan_Kravchenko"
},
{
"code": "{ \"sizes.count\" : 0 }\n",
"text": "The simple querywill matches all documents that have one element of the array sizes with the field count 0.Then to only get the elements of sizes with count 0, you need to do a projection with $filter.",
"username": "steevej"
},
{
"code": "find({\n $or:[\n {sizes: {\n $not: {\n $elemMatch: {\n count: {$ne: 0}}\n }}},\n {\n sizes: {\n $exists: true,\n $size: 0\n }\n }]\n});\n",
"text": "After few attempts I find out a optimal solution.",
"username": "Ivan_Kravchenko"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| I need to match all models where array has objects where the field is only equal to 0 | 2022-07-02T18:34:47.550Z | I need to match all models where array has objects where the field is only equal to 0 | 1,368 |
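For completeness, a sketch of the projection with $filter that steevej's reply alludes to; the collection name is an assumption, and the field names follow the sample documents in the question.

    db.models.aggregate([
      // documents that have at least one size element with count 0
      { $match: { "sizes.count": 0 } },
      { $project: {
          title: 1, brand: 1, sex: 1,
          // keep only the zero-count elements of the array
          sizes: { $filter: {
            input: "$sizes",
            as: "s",
            cond: { $eq: ["$$s.count", 0] }
          } }
      } }
    ])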
null | [
"swift"
]
| [
{
"code": "collection_nameCollectionName",
"text": "I’m trying to add pre-existing collections from MongoDB Atlas to my Realm schema in Swift. The problem I’m encountering is that the existing collection names are snake_cased, and Swift naming conventions are PascalCased, so Realm is creating new collections. How do I setup Realm to recognize the existing collection names (i.e. collection_name) to refer to the Realm object models in Swift (i.e. CollectionName)?",
"username": "Tom_J"
},
{
"code": "db.collection.renameCollection()\n",
"text": "I don’t use Swift so I’m not sure this answer will work. But if your swift schema is the only thing using these collections can you just rename the collections so they are in the correct format?",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thanks for the reply! Unfortunately the pre-existing collections are already in use by other server side processes, which I’d like to avoid modifying if possible.",
"username": "Tom_J"
},
{
"code": "",
"text": "Hi Tom,You should be able to keep the collection name unchanged and for it’s Sync Cloud Schema update the Title field to use a PascalCase format equivalent which is what the Swift class can refer to.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "class cool_task: Object {class Task_Classclass taskClassclass TaSkClAsS",
"text": "Perhaps I don’t fully understand the question but let me ask for clarificationThere is no general requirement in Swift for specific formats for var, class or structure names. You can use snake_case and PascalCases (sometimes called CamelCase but it’s slightly different). Camel case for vars please if you use upper/lower case.You can freely use whatever works for your use case - the important bit it to keep it consistent in whatever format you go with.So can’t you just name the Realm objects models the same? e.g. if the object is “cool_task” in MongoDB, can there be a Realm model class cool_task: Object { that matches?Realm is creating new collectionsCan you clarify what you mean? A Collection is a set of Realm objects; either Results, List or MutableSet etc, which refer to objects in CODE, after they are lazily loaded from disk.When using RealmSync objects on disk are stored by their partition key locally (for partition bases sync)So are you asking about Objects or the partition files stored on disk? Here’s an example task class that uses all kinds of var name formats, all of which are totally fine.class TaskClass: Object {\n@Persisted var MyNamePascal = “”\n@Persisted var my_name_snake = “”\n@Persisted var myNameCamel = “”\n}You could even call the classclass Task_Class or class taskClass or class TaSkClAsS. Regardless of how it’s named or the var names, on disk its stored according to the objects partition.",
"username": "Jay"
}
]
| Map existing collection names to Swift schema | 2022-06-22T18:06:46.881Z | Map existing collection names to Swift schema | 2,224 |
null | [
"replication",
"sharding"
]
| [
{
"code": "",
"text": "Hello,Other member in this community created question:I see that Atlas isn’t allowing me to select data size of more than 4TB per shard in MongoDB Atlas.Is that a hard limit?MongoDB team reply:MongoDB offers horizontal scale-out using sharding: While a single ‘Replica Set’ (aka a shard in a sharded cluster) cannot exceed 4TB of physical storage, you can use as many shards as you want in your MongoDB Atlas sharded cluster.For example, if you allocated 2TB per shard, a twenty shard cluster would have a total of 40TB of physical space (all would be redundant for high availability).My Question is:Is possible to have different data in this storage of 40TB or only 2TB (redundant in all twenty nodes)?",
"username": "Osvaldo_Bay_Machado1"
},
{
"code": "",
"text": "sharding is like a letter indexing in a dictionary. from A to Z, you will create a shard to hold data only for a portion of actual data; apples in one shard, corns in another according to one or more fields under the same name like “produce” or “farm”.your shard settings will normally broadcast a query to all shards. but you can speed up read/write by targeted operations with the correct shard key.But one thing to keep in mind is that the sharding is made on collections. if you shard one collection but keep the others unsharded, they will be kept in primary shard. so the answer is yes, it is possible.but keep in mind that your “other” data can fill your primary shard faster if your sharding logic is not good enough.check this official Sharding — MongoDB Manual about considerations on the logic.",
"username": "Yilmaz_Durmaz"
},
{
"code": "storageSize",
"text": "Welcome to the MongoDB Community @Osvaldo_Bay_Machado!Is possible to have different data in this storage of 40TB or only 2TB (redundant in all twenty nodes)?Data storage in Atlas clusters provides data redundancy in the sense that there are multiple copies of the data. The storage limit is separate from the data redundancy factor.For example, if you have 2TB of storage in a 3 member replica set, there will actually be 6TB of physical storage backing the cluster. Each replica set member will be provisioned in a different cloud provider availability zone with consistent instance specs (CPU, RAM, storage). The storage limit in a dedicated Atlas cluster is based on the storageSize (size on disk) of your data files, so a 2TB replica set will store more than 2TB of data and indexes depending on how compressible your data is.A sharded cluster is comprised of 2 or more shard replica sets which are presented (from the application of view) as a single logical cluster. A 20 x 2TB sharded cluster has a 40TB physical storage limit (not including the backend storage provisioned for data redundancy).Regards,\nStennie",
"username": "Stennie_X"
}
]
| Limits on data size? [2] | 2022-07-07T00:41:19.370Z | Limits on data size? [2] | 2,599 |
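A small sketch of the per-collection nature of sharding described in the replies above; the database, collection and key names are made up for illustration.

    // Run against a mongos. Only the sharded collection is distributed;
    // every unsharded collection in the database stays on the primary shard.
    sh.enableSharding("inventory")
    sh.shardCollection("inventory.produce", { farm: 1, item: 1 })

    // A query that includes the shard key can be routed to a single shard;
    // queries without it are broadcast to all shards.
    db.produce.find({ farm: "A-12", item: "apples" })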
null | [
"java",
"sharding"
]
| [
{
"code": "",
"text": "Reference: https://www.mongodb.com/docs/manual/release-notes/3.4-upgrade-standalone/\nMy mongodb systems are running under CentOS-7 and their only clients are Java apps which pull from v3.2.8 AND also v3.4.6 MongoDB databases. The MongoDB servers are AWS EC2s and are NOT clustered.Is it really as simple as this:Do I need to make any changes to the data or metadata for my data? Anything at all?\nShould I stop my EC2s, snapshot the drive where MongoDB has all its data, and then start the EC2s and start up my MongoDB databases as a first step? I will if there is ANY chance I’ll need to backout the upgrade because something “goes wrong”.\nThanks.",
"username": "Joseph_Estrada"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Joseph_Estrada !The 3.2 to 3.4 upgrade procedure is as per the documentation link you referenced.While this is a straightforward upgrade, I would always take a backup before any significant deployment changes.I also recommend upgrading to the latest patch release for your release series, which would be 3.4.24. Minor releases do not include any backward-breaking changes and 3.4.6 (July 2017) is lacking 2 1/2 years of bug fixes & stability improvements compared to 3.4.24 (Jan 2020).MongoDB 3.4 reached End of Life in Jan 2020 and no longer receives any security updates or maintenance support. I recommend planning to upgrade to a supported version (currently 4.2 or newer) if this is a production environment.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Upgrading my standalone non-sharded MongoDB systems, each running v3.2.8, to v3.4.6 | 2022-07-08T15:08:27.858Z | Upgrading my standalone non-sharded MongoDB systems, each running v3.2.8, to v3.4.6 | 1,279 |
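For reference, a sketch of the compatibility-version step that normally follows the 3.2 to 3.4 binary swap in the linked upgrade procedure (run from the mongo shell against the upgraded standalone):

    // Right after the binary upgrade the value still reports "3.2"
    db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

    // Enable 3.4 features only once you are confident you will not roll back
    db.adminCommand({ setFeatureCompatibilityVersion: "3.4" })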
null | []
| [
{
"code": "{\n country: 'USA',\n invoices: [\n { client: 'Cliet_1', docNum: '23123j' },\n { client: 'Cliet_2', docNum: '34123j' },\n { client: 'Cliet_1', docNum: '3453412df' }\n ]\n},\n{\n country: 'Canada',\n invoices: [\n { client: 'Cliet_3', docNum: '23123j' },\n { client: 'Cliet_4', docNum: '34123j' },\n { client: 'Cliet_4', docNum: '3453412df' }\n ]\n},\n{\n country: 'USA',\n invoices: [\n { client: 'Cliet_1', docNum: '23123j' },\n { client: 'Cliet_2', docNum: '34123j' },\n { client: 'Cliet_5', docNum: '3453412df' }\n ]\n}\n[\n { country: 'USA', clients: ['Cliet_1', 'Cliet_2', 'Cliet_5'] },\n { country: 'Canada', clients: ['Cliet_3', 'Cliet_4'] },\n]\n",
"text": "I have DB with this structure:I can’t figure out how to transform it into it:Seems pretty easy and I could do it by JS but it would be better to use Mongo",
"username": "Nick_Elovsky"
},
{
"code": "$group$reduce$setUniondb.collection.aggregate([\n {\n \"$group\": {\n \"_id\": \"$country\",\n \"invoices\": {\n \"$addToSet\": \"$invoices.client\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"country\": \"$_id\",\n \"clients\": {\n \"$reduce\": {\n \"input\": \"$invoices\",\n \"initialValue\": [],\n \"in\": {\n \"$setUnion\": [\n \"$$value\",\n \"$$this\"\n ]\n }\n }\n }\n }\n }\n])\n",
"text": "You can do it like this:Working example",
"username": "NeNaD"
},
{
"code": "",
"text": "Thank you so much!\nIt works as I expected. The only thing to do is to replace countries with real ids and take real entities by them.",
"username": "Nick_Elovsky"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to grouped data from arrays of documents and combine it by field? | 2022-07-07T16:38:53.187Z | How to grouped data from arrays of documents and combine it by field? | 2,669 |
null | [
"aggregation",
"java",
"spring-data-odm"
]
| [
{
"code": "",
"text": "Here is my situation. I have audit documents and over time it will be growing in size. Our design requirement was to query audit documents and update those results to another document called ticket which will be created before querying documents. The catch here was invoking js query happens from a spring boot application but after invocation the connection from java to mongo should be closed and the query execution should happen in mongo db as this query execution can take from few mins to hours.I was creating a function in mongo db but as per this link(Alternative of eval() command in Java) my requirement is only possible through mongo shell and can’t be fulfilled with java mongo driver.Any help will be greatly appreciated",
"username": "Harish_Bondalapati"
},
{
"code": " }\n",
"text": "function test() {\ndb = connect(“xxxx:xxxxxxx@yyyyyyyy:zzzzz/qqqqq”);\nconst res = db.getCollection(‘audittt’).find( {\n$and: [\n{“userId”: {$eq: “fdfdf”}},\n{“businessFunction”: {$eq: “sdff”}},\n{“applicationName”: {$eq: “HdfdfRS”}},\n{\"$expr\":{\n“$and”:[\n{\"$gte\":[{\"$convert\":{“input”:\"$_id\",“to”:“date”}}, ISODate(“2022-07-01T00:00:00.000Z”)]},\n{\"$lte\":[{\"$convert\":{“input”:\"$_id\",“to”:“date”}}, ISODate(“2022-07-06T11:59:59.999Z”)]}\n]}}]});\ndb.audit.update({ticketNumber: “4O7nv2CFAq”}, {$set : {result : res.toArray()}});I have a function like this and i would like to invoke this function from a spring boot application. can i do it and how can i do it. any help will be appreciated",
"username": "Harish_Bondalapati"
}
]
| How to create a stored js file and invoke it from spring boot application | 2022-06-30T12:02:45.548Z | How to create a stored js file and invoke it from spring boot application | 2,970 |
null | []
| [
{
"code": "",
"text": "I have documents of the form:\n{a: [ { b: 1, c: 2 }, { b: 3, c: 4 } ]}I need to find all documents that match a given b and c value in the same nested object. How can I do that? And what is the shape of the index?I can only figure out a find that automatically enumerates the array in a, and the result is incorrect:test> db.hx2a.findOne({\"$and\":[{“a.b”:{\"$eq\":1}},{“a.c”:{\"$eq\":4}}]})\n{\n_id: ObjectId(“62c45959c5da24a18acfa3e8”),\na: [ { b: 1, c: 2 }, { b: 3, c: 4 } ]\n}The values are in two different nested objects, not in the same one.\nThanks!",
"username": "Vincent_Lextrait"
},
{
"code": "",
"text": "Take a look at $elemMatch.",
"username": "steevej"
},
{
"code": "",
"text": "Ah fantastic, thanks, Steve!One more question, I am creating the index the following simple way:{“a.b”: 1, “a.c”: 1}My understanding is that it creates an index with a size which is quadratic as a function of the average array size (as the cartesian product is calculated). If true, this is an issue. I did not see anything equivalent to $elemMatch in the operators allowed in partialFilterExpression, is there something that could help, while ensuring that the find leverages the index properly and does not do a full scan?Thanks again!",
"username": "Vincent_Lextrait"
},
{
"code": "",
"text": "If index size with a.b:1,a.c:1 is an issue, I would investigate by using only a.b:1 or a.c:1. The size would be smaller. The field with the highest number of different values will probably provide better performance. For example, if a.b is boolean and a.c is date, a.c index will be more selective.",
"username": "steevej"
},
{
"code": "",
"text": "Understood, it makes sense.\nI am thinking also of concatenating b and c in a single field.\nThanks a lot!",
"username": "Vincent_Lextrait"
}
]
| Finding documents containing an array of objects | 2022-07-06T19:46:17.181Z | Finding documents containing an array of objects | 1,542 |
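A sketch of the $elemMatch query and index mentioned in the replies, using the sample collection from the question:

    // matches only when b and c are satisfied by the SAME array element
    db.hx2a.find({ a: { $elemMatch: { b: 1, c: 2 } } })

    // supporting compound index on the embedded fields
    db.hx2a.createIndex({ "a.b": 1, "a.c": 1 })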
null | []
| [
{
"code": "",
"text": "Hello my applications are hosted on GCP and AWS both and are trying to connect to MongoDB Atlas hosted on GCP.I understand we can successfully peer with GCP as my Atlas cluster is hosted in GCP.\nHowever any deployment model and strategy we can adopt to basically secure the channel between AWS(application) and GCP(MongoDB Atlas DB) ? So the calls do not travel via public internet and are always within the peered cloud network.Would appreciate any suggestions.",
"username": "Saurabh_Johri"
},
{
"code": "",
"text": "i also encountered the same issue. Have you resolve the way? @Saurabh_Johri",
"username": "roger.le"
},
{
"code": "",
"text": "It’s important to understand that to go from one cloud provided to another will require transit via public IP. However it’s also important to understand that all connections to an Atlas cluster require TLS network encryption over the wire. Remember that TLS network encryption (e.g. what also makes HTTPS possible) is the bedrock that makes the entire internet something we can use with built-in privacy as a general rule: peering can be understood as essentially a second level of encryption but as we enter a trustless security mindset it’s important to look at the building blocks for what they are",
"username": "Andrew_Davidson"
}
]
| MongoDB Atlas Peering with Cloud providers VPC | 2021-10-15T14:54:03.068Z | MongoDB Atlas Peering with Cloud providers VPC | 1,828 |
null | [
"node-js",
"atlas-triggers"
]
| [
{
"code": "posttitlepost.titletitlecatchconsole.log(JSON.stringify(err))\"[object Object]\"{\\\"message\\\":\\\"[object Object]\\\",\\\"name\\\":\\\"FunctionError\\\"}\"☎☎️⏱️",
"text": "We are extensively using Realm triggers to sync relevant data from MongoDB to Algolia for search purposes. But some of the documents fail to sync. From the logs, we noticed it for documents with strings which contain emoji characters in them. So suppose there is a collection called post which consists of title, then post.title would result in failure of the trigger if title has an emoji in the string.We have two clusters setup - prod and dev. The above behaviour is not reproducible for Realm app belonging to dev cluster but it is reproducible for prod cluster.We try logging the error in catch block like so - console.log(JSON.stringify(err)) but it just results in \"[object Object]\" string like so :-{\\\"message\\\":\\\"[object Object]\\\",\\\"name\\\":\\\"FunctionError\\\"}\"I don’t think this is an error at Algolia end since, we are able to sync all such fields with emoji content using external scripts. Not sure, why this is only happening for the prod cluster and ways to mitigate this.\nDid stumble on utf-validation bit but not sure how to disable it for trigger based function calls. Also not sure if that is the issue but seems like closest bet.Document with ☎ is allowed to sync but ☎️ isn’t. The second one has char length of two while first doesn’t. This doesn’t happen for dev cluster though. Also ⏱️ syncs without any issue even though it has char length of 2.",
"username": "Lakshya_Thakur"
},
{
"code": "",
"text": "P.S. - This doesn’t seem to be an issue earlier in prod (can see data with emojis synced successfully on 7th June). Old data with emojis got successfully synced. @Mansoor_Omar Your insights would be helpful here. It’s a weird issue that is stemming only for prod cluster and not for dev.",
"username": "Lakshya_Thakur"
},
{
"code": "",
"text": "There has been no reply on this issue for more than a week now. @Humayara_Karim - Would appreciate your help on this. Got your mail regarding App Services experience and this is currently a weird behaviour we are dealing with.",
"username": "Lakshya_Thakur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Unable to sync document with "emoji" characters in realm trigger functions | 2022-06-28T07:30:26.279Z | Unable to sync document with “emoji” characters in realm trigger functions | 2,776 |
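Unrelated to the emoji handling itself, but for the "[object Object]" logging problem described above, a sketch of how more of the error can usually be surfaced inside a trigger/App Services function (this assumes a plain Error-like object and an Algolia call wrapped in try/catch):

    try {
      // ... call that syncs the document to Algolia ...
    } catch (err) {
      // Error objects serialise poorly with a bare JSON.stringify,
      // so log the useful properties explicitly.
      console.log("sync failed:", err.name, err.message);
      console.log(JSON.stringify(err, Object.getOwnPropertyNames(err)));
    }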
null | [
"dot-net"
]
| [
{
"code": "",
"text": "HI\nI am using c# driver(NetCore) with CancellationToken\nWhen I am canceling a request the canceling take few second and not immediately (~10 sec)\nany idea ?",
"username": "Netanel_Ohayon"
},
{
"code": "public class UserRepository : IUserRepository {\n private readonly CancellationToken _cancellationToken; \n\n // constructor - dependency injection\n public UserRepository(IMongoClient client, IMongoDbSettings dbSettings, ITokenService tokenService) {\n var database = client.GetDatabase(dbSettings.DatabaseName);\n _collection = database.GetCollection<AppUser>(_collectionName);\n _tokenService = tokenService;\n _cancellationToken = new CancellationToken();\n }\n\n public async Task<UserDto?> GetUser(string userId) {\n var user = await _collection.Find<AppUser>(user => user.Id == userId).FirstOrDefaultAsync();\n return new UserDto {\n //some code\n };\n }\n}\n",
"text": "Hi,\nWould you please share how you call CancellationToken in your API?\nIf you could incorporate my code would be very helpful and appreciated.",
"username": "Reza_Taba"
}
]
| Cancelling a request using CancellationToken takes a few seconds | 2020-11-03T09:34:43.280Z | Cancelling a request using CancellationToken takes a few seconds | 2,886 |
null | [
"dot-net"
]
| [
{
"code": "",
"text": "Hi,I didn’t find a clear answer in the documentation or a different thread, so I’m asking here real quick.Are collections, specifically for IList, ordered? So everytime I read the collection from the database it will have the same order of elements without calling any order/sort method?",
"username": "Thorsten_Schmitz"
},
{
"code": "RealmList",
"text": "Hi Thorsten,Yes RealmLists are ordered.\nYou can find this information in the api documentation.Andrea",
"username": "Andrea_Catalini"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Order of Elements in a Realm Collection | 2022-07-07T19:23:12.535Z | Order of Elements in a Realm Collection | 1,593 |
[]
| [
{
"code": "",
"text": "Hi Community,I am having a concern about using Charts to visualize the data to different user. Currently , I am using user-specific filter to let different user to login and check their own charts.However, I just check the “network” tab and found that the logged in user can actually see all the other user’s email address … Just wondering how can I prevent that happening ?\nsignal-2022-07-08-101451_0011455×528 108 KB\nThanks !Cat",
"username": "Super_Chain"
},
{
"code": "",
"text": "Hi Cat, can you give a bit more info on the scenario? It looks like it’s an embedded dashboard? How are you doing the filtering? Are you embedding using unauthenticated or authenticated mode? Are there any charts on the dashboard that show all email addresses?Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi Tom !Thanks for your help again ! Yes , its a dashboard !I am using JWT token with authenticated embedded dashboard, to filter the user , I used user-specific filter Basically ,its based on email address to filter the user :// Return a filter based on token attributes, e.g:\nreturn { email: context.token.username };Are there any charts on the dashboard that show all email addresses?\nNo , None of the chart is showing “email address”",
"username": "Super_Chain"
},
{
"code": "",
"text": "Thanks for the extra information. It looks like the page is requesting information for the dashboard filters, even though the dashboard filters pane is not accessible for embedded dashboards. This is not expected so I’ll raise a bug for this. In the meantime to prevent this issue you should be able to delete the dashboard filter containing the email addresses.Let me know if this explanation feels correct.\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Hi Tom,You are right, if I deleted the dashboard filter, its fine !Originally, i only disabled the filter, then this will happen.Cheers!Cat",
"username": "Super_Chain"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Explode user personal email in Charts using user-specific filter | 2022-07-08T02:25:05.294Z | Explode user personal email in Charts using user-specific filter | 2,608 |
|
null | [
"sharding"
]
| [
{
"code": "",
"text": "Since 4.4 mongodb support compose hashed shard key, but I can not find any document that talk about role of additional fields of the shardkey, for example my shardkey is:{oemNumber: “hashed”, zipCode: 1, supplierId: 1}what zipCode and SupplierId fields will do and what do they help for sharding, thanks ! ",
"username": "_Jin"
},
{
"code": "",
"text": "Hi @_Jin and welcome to the community.The following document on Compound Sharded key is useful for understanding the ideas of Sharding on Compound Indexes.what zipCode and SupplierId fields will do and what do they help for shardingHaving multiple fields as part of the compound shard key would be good for a monotonically increasing value where a monotonically increasing shard key could be an issue, since it could mean that your inserts would be directed toward a “hot shard” where one shard is doing all the insertion works and all the other shards not participating in the workload, thus reducing the benefits of using a sharded cluster in the first placeThus to answer your question, combining a hashed shard key with other keys in a compound shard key would help with the shard key’s cardinality and also could potentially be used to help with range & sorting queriesPlease let us know if you have any other questions.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "{oemNumber: 'hashed', zipCode: 1, supplierId: 1}[\n {\n oemNumber: \"ABC\",\n zipCode: 12345,\n supplierId: 10\n },\n {\n oemNumber: \"ABC\", // same oemNumber\n zipCode: 67890,\n supplierId: 15\n }\n]\nhashFunc(\"ABC\") // ===> 123456789\nhashFunc(oemNumber, zipCode, supplierId)\n{oemNumber: 'hashed', zipCode: 'hashed', supplierId: 'hashed'}",
"text": "@Aasawari Thank you for your response but I still don’t clear much, reason is, for example I have a shardkey like this:{oemNumber: 'hashed', zipCode: 1, supplierId: 1}Sample data like:Base on my understand, for above key, the oemNumber field is hashed, like this:Two above records have same value ( “ABC” ), hash function will generate ‘ABC’ to ONE fixed number, ex: 123456789, and due to hashed value are SAME so 2 above records will be put into one chunk in one shard.In this case, monotonically increasing shard key is an issue, yes, but if we add more fields (zipcode, supplierId) then hashed function also just generate by value ‘ABC’Because we can just hash one field (oemNumber), so I don’t understand what role of “zipCode” and “supplierId” in hash function, OR are you mean that the hash function will accept ALL 3 parameters like this:If it accepts 3 parameters then why we just can define 1 hashed filed [ oemNumber: ‘hashed’ ] , why we don’t define shardkey like this:{oemNumber: 'hashed', zipCode: 'hashed', supplierId: 'hashed'}Please teach me, thank you !",
"username": "_Jin"
},
{
"code": "oemNumber",
"text": "Hi @_JinTo understand the complete concept of compound hashed shard key, you would need to understand the basic concepts regarding shard key, compound index and hashed shard key.2 above records will be put into one chunk in one shard.Yes, in the above mentioned case, it will put into one chunk and hence it is recommended to use a shard key with maximum cardinality value i.e which has more number of distinct values.If the oemNumber is not monotonically increasing and does not have a good cardinality, hashing the field would not be a right.{oemNumber: ‘hashed’, zipCode: ‘hashed’, supplierId: ‘hashed’}Also, hashing for multiple fields for a compound shard key is not possible as of today.If you wish to know more on concepts of sharding and shard keys, please visit our University course on MongoDB Courses and Trainings | MongoDB University.Let us know if you have any further questions.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "{oemNumber: ‘hashed’, zipCode: 1, supplierId: 1}\nzipCodesupplierId{oemNumber: 'hashed', zipCode: 1, supplierId: 1}\n{oemNumber: 'hashed'}\n",
"text": "Thank you for responding @Aasawariso that means that zipCode and supplierId fields in above shard key have no meaning for sharding right ? they are just used in compose index. Do I understand it correct ?Only for sharding feature, 2 bellow shardkeys are the same right?and",
"username": "_Jin"
},
{
"code": "zipCodesupplierId",
"text": "Hi @_Jinso that means that zipCode and supplierId fields in above shard key have no meaning for sharding right ? they are just used in compose index. Do I understand it correct ?No. As I previously mentioned, a hashed compound shard key is a combination of compound index, shard key, and hashed index. It combines all the concepts of those three things. Please refer to the documentations attached for the same.2 bellow shardkeys are the same right?No, they are not the same, the former defines the compound hashed shard key and later is hashed shard key.I would reiterate my earlier suggestion about enrolling to the M103 MongoDB University Course. I believe it would be very helpful in your MongoDB journey Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "many thanks @Aasawari\nI will read more",
"username": "_Jin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Question about relate fields of compose hash shard key | 2022-06-24T09:27:38.045Z | Question about relate fields of compose hash shard key | 2,739 |
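A sketch of how the compound hashed shard key discussed in this thread is declared (MongoDB 4.4+); the namespace is made up:

    // Only one field of the compound key may be hashed, and the hash is
    // computed on that single field; zipCode and supplierId keep their
    // normal range ordering within each hashed oemNumber portion.
    sh.shardCollection(
      "supplychain.parts",
      { oemNumber: "hashed", zipCode: 1, supplierId: 1 }
    )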
null | []
| [
{
"code": "wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -\nOK\necho \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list\nsudo apt-get update\nsudo apt-get install -y mongodb-org\n..\nE: Unable to locate package mongodb-org\necho \"deb http://repo.mongodb.org/apt/debian buster/mongodb-org/5.0 main\" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list\n\n",
"text": "Hi,\nThere are 2 mongodb packages, one is from Ubuntu and other is from mongodb-org. I want to install mongodb-org on a fresh RaspberryPi 4 (bullseye). I follow the instruction on the official site (https://www.mongodb.com/docs/manual/tut … on-ubuntu/) , but I always get this error:\nE: Unable to locate package mongodb-orgI followed those steps:I don’t get any error from other steps. Only from installation step.I also tried Debian package:Still no success.\nHow can I install it?",
"username": "Mubin_Icyer"
},
{
"code": "uname -a",
"text": "Are you running 64bit linux?What is uname -a showing?",
"username": "chris"
},
{
"code": "",
"text": "No \nI was not running 64-bit Linux.\nI installed 64-bit and now I was able to install mongodb.\nThanks.",
"username": "Mubin_Icyer"
},
{
"code": "",
"text": "How did you do the install?",
"username": "Berkay_Erarslan"
}
]
| Installing MongoDB-org on Raspberry Pi (bullseye) doesn't work | 2022-05-10T09:05:40.566Z | Installing MongoDB-org on Raspberry Pi (bullseye) doesn’t work | 6,967 |
null | [
"node-js",
"change-streams"
]
| [
{
"code": "",
"text": "Hi All!What is the best way for restarting a change stream service when the reading cursor is terminated/exhausted?\n(could be on a primary switch and other cases)I have tackled several times in situations when the service was doing nothing because of that, and I want to create a good way to inform k8s when it turns idle (health check?/readiness?).does someone have an advice for me?Thank you.",
"username": "Shay_I"
},
{
"code": "",
"text": "Hi @Shay_I,I’m not too familiar with k8s deployments but I have a few questions regarding the scenario you’ve detailed at the bottom of my reply.What is the best way for restarting a change stream service when the reading cursor is terminated/exhausted?\n(could be on a primary switch and other cases)If the cursor is terminated, you can resume change streams by specifying a resume token to either resumeAfter or startAfter when opening the cursor.Additionally, as per the change streams documentation:While the connection to the MongoDB deployment remains open, the cursor remains open until one of the following occurs:With regards to the following:I have tackled several times in situations when the service was doing nothing because of that, and I want to create a good way to inform k8s when it turns idle (health check?/readiness?).Can you provide the following information:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Change Stream Idle handling | 2022-06-30T06:35:49.298Z | Change Stream Idle handling | 2,070 |
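A sketch of the resume-token approach described in the reply above, using the Node.js driver; the namespace and how the token is persisted are assumptions:

    const { MongoClient } = require("mongodb");

    async function watchWithResume(uri, savedToken) {
      const client = new MongoClient(uri);
      await client.connect();
      const coll = client.db("app").collection("events");   // assumption

      // Reopen the stream after the last processed event when a token exists.
      const stream = coll.watch([], savedToken ? { resumeAfter: savedToken } : {});

      for await (const change of stream) {
        // Persist stream.resumeToken somewhere durable so the service can
        // resume from this point after a restart or cursor termination.
        console.log(change.operationType, stream.resumeToken);
      }
    }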
null | [
"transactions",
"field-encryption"
]
| [
{
"code": "1.AutoDecryptFieldsIfNecessary(CommandResponseMessage encryptedResponseMessage, CancellationToken cancellationToken) at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol1.Execute(IConnection connection, CancellationToken cancellationToken) at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocol[TResult](IWireProtocol1 commandPayloads, IElementNameValidator commandValidator, BsonDocument additionalOptions, Action1 resultSerializer, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.CommandOperationBase1.ExecuteAttempt(RetryableReadContext context, Int32 attempt, Nullable1 operation, RetryableReadContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.ReadCommandOperation1.Execute(RetryableReadContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.FindOperation1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1 operation, ReadPreference readPreference, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1 filter, FindOptions1.<>c__DisplayClass46_01.UsingImplicitSession[TResult](Func1.FindSync[TProjection](FilterDefinition2 options, CancellationToken cancellationToken) at MongoDB.Driver.FindFluent",
"text": "Hello everyone,I am dealing with an issue where I rotated my Azure Key that I was using to generate a KMS token in my server layer. My Client Side Field Level Encryption is now throwing an error when my encryption client tries to call a collection with encrypted data. The error is below. I am not sure I implemented it incorrectly in the past which would be reason for concern. I was under the impression that if I were to rotate my key, then my previously encrypted data would still be able to be decrypted using the old KMS key store in the mongo KeyVault.MongoDB.Driver.Encryption.MongoEncryptionException\nHResult=0x80131500\nMessage=Encryption related exception: Error in KMS response: 'The parameter is incorrect.\n'. HTTP status=400.\nSource=MongoDB.Driver\nStackTrace:\nat MongoDB.Driver.Encryption.AutoEncryptionLibMongoCryptController.DecryptFields(Byte[] encryptedDocumentBytes, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.WireProtocol.CommandMessageFieldDecryptor.DecryptFields(CommandResponseMessage encryptedResponseMessage, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol1.AutoDecryptFieldsIfNecessary(CommandResponseMessage encryptedResponseMessage, CancellationToken cancellationToken) at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol1.Execute(IConnection connection, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.WireProtocol.CommandWireProtocol1.Execute(IConnection connection, CancellationToken cancellationToken) at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocol[TResult](IWireProtocol1 protocol, ICoreSession session, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Servers.Server.ServerChannel.Command[TResult](ICoreSession session, ReadPreference readPreference, DatabaseNamespace databaseNamespace, BsonDocument command, IEnumerable1 commandPayloads, IElementNameValidator commandValidator, BsonDocument additionalOptions, Action1 postWriteAction, CommandResponseHandling responseHandling, IBsonSerializer1 resultSerializer, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.CommandOperationBase1.ExecuteProtocol(IChannelHandle channel, ICoreSessionHandle session, ReadPreference readPreference, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Operations.ReadCommandOperation1.ExecuteAttempt(RetryableReadContext context, Int32 attempt, Nullable1 transactionNumber, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Operations.RetryableReadOperationExecutor.Execute[TResult](IRetryableReadOperation1 operation, RetryableReadContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.ReadCommandOperation1.Execute(RetryableReadContext context, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Operations.FindOperation1.Execute(RetryableReadContext context, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.FindOperation1.Execute(IReadBinding binding, CancellationToken cancellationToken)\nat MongoDB.Driver.OperationExecutor.ExecuteReadOperation[TResult](IReadBinding binding, IReadOperation1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.ExecuteReadOperation[TResult](IClientSessionHandle session, IReadOperation1 operation, ReadPreference readPreference, CancellationToken cancellationToken) at 
MongoDB.Driver.MongoCollectionImpl1.ExecuteReadOperation[TResult](IClientSessionHandle session, IReadOperation1 operation, CancellationToken cancellationToken) at MongoDB.Driver.MongoCollectionImpl1.FindSync[TProjection](IClientSessionHandle session, FilterDefinition1 filter, FindOptions2 options, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoCollectionImpl1.<>c__DisplayClass46_01.b__0(IClientSessionHandle session)\nat MongoDB.Driver.MongoCollectionImpl1.UsingImplicitSession[TResult](Func2 func, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoCollectionImpl1.FindSync[TProjection](FilterDefinition1 filter, FindOptions2 options, CancellationToken cancellationToken) at MongoDB.Driver.FindFluent2.ToCursor(CancellationToken cancellationToken)\nat MongoDB.Driver.IAsyncCursorSourceExtensions.FirstOrDefault[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)Inner Exception 1:\nCryptException: Error in KMS response: 'The parameter is incorrect.\n'. HTTP status=400",
"username": "Anthony_LaMartina"
},
{
"code": "",
"text": "A little more clarity on this. I have CSFLE working for all of my server environments before today. It was specifically the act of rotating the Key in Azure Key Vault that resulted in this error. I am currently just using the same KMS base64 key (not generating a new KMS ever) when creating the auto-encryption client in the server. When I test by generating a new KMS, the server still throws the error. I don’t know a lot about encryption, but I am wondering if my current keyvault record in MongoDb that is used to decrypt my database fields isn’t getting validated against the azure key vault because of something to do with the current version key.Also my KMS is being generated from a 2048 RSA key",
"username": "Anthony_LaMartina"
},
{
"code": "",
"text": "Sorry, a little confusing on how to use this platform when submitting updates. I figured out the solution!My my mongoDB key vault KMS key stored in my database was missing the “keyVersion” parameter under the “masterKey” field. I added that parameter to be the correct Azure Key Vault version and it fixed everything.",
"username": "Anthony_LaMartina"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Azure Key Rotation Broke my CSFLE | 2022-07-07T20:43:27.730Z | Azure Key Rotation Broke my CSFLE | 1,661 |
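For anyone hitting the same thing, a sketch of the kind of key vault update the fix describes; the namespace, data key id and version string are placeholders, and the key vault collection should be backed up before touching it:

    // CSFLE key vault documents for Azure carry the key details under masterKey
    db.getSiblingDB("encryption").getCollection("__keyVault").updateOne(
      { _id: UUID("00000000-0000-0000-0000-000000000000") },        // placeholder data key id
      { $set: { "masterKey.keyVersion": "<azure-key-vault-key-version>" } }
    )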
null | [
"connecting"
]
| [
{
"code": "Error: connect ECONNREFUSED 52.202.136.90:27017Jul 02 15:24:25 myapp app/web.1 2022-07-02T20:24:25.105Z silly: OPTIONS: /api/posts/62c0a45ad56ea20004014196 \nJul 02 15:24:25 myapp app/web.1 2022-07-02T20:24:25.197Z silly: PATCH: /api/posts/62c0a45ad56ea20004014196 \nJul 02 15:24:25 myapp app/web.1 allowedOrigin https://www.mysite.com \nJul 02 15:24:25 myapp app/web.1 node:internal/process/promises:279 \nJul 02 15:24:25 myapp app/web.1 triggerUncaughtException(err, true /* fromPromise */); \nJul 02 15:24:25 myapp app/web.1 ^ \nJul 02 15:24:25 myapp app/web.1 Error: connect ECONNREFUSED 52.202.136.90:27017 \nJul 02 15:24:25 myapp app/web.1 at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1187:16) { \nJul 02 15:24:25 myapp app/web.1 name: 'MongoNetworkError' \nJul 02 15:24:25 myapp app/web.1 } \nJul 02 15:24:25 myapp heroku/web.1 State changed from up to crashed \nJul 02 15:24:25 myapp heroku/web.1 State changed from crashed to starting \nJul 02 15:24:25 myapp heroku/router at=error code=H13 desc=\"Connection closed without response\" method=PATCH path=\"/api/posts/62c0a45ad56ea20004014196\" host=api.mysite.com request_id=f2f05673-f516-49d1-b2c4-e07f33b5bc8a fwd=\"45.49.3.242\" dyno=web.1 connect=0ms service=38ms status=503 bytes=0 protocol=https\n",
"text": "I have a site running in production. The backend connects to mongo atlas cloud.Error: connect ECONNREFUSED 52.202.136.90:27017 caused an unhandled rejection. What does this error mean?I found reports of a similar error, but for those reports the error prevented connection to mongo entirely. On the other hand, my backend was already running in production and had been making connections to my mongo atlas cloud database before I got this error. I have a lot of activity on my site, yet this only happened with one request.Heroku uses a range of AWS IP addresses. So I have 0.0.0.0/0 (includes your current IP address) in my IP Access list for Network Access to account for their changing IP’s.I’m not sure what’s causing this which also means I’m not sure where to look to add some error handling or what this concerns.Backend logs",
"username": "Dashiell_Bark-Huss"
},
{
"code": "",
"text": "What do you get when trying to go at http://portquiz.net:27017?",
"username": "steevej"
},
{
"code": "",
"text": "This server listens on all TCP ports, allowing you to test any outbound TCP port.You have reached this page on port 27017 (from http host header).Your network allows you to use this port. (Assuming that your network is not doing advanced traffic filtering.)Network service: unknown\nYour outgoing IP: 7…But I’m confused why this matters. The error I got wouldn’t concern my local environment.",
"username": "Dashiell_Bark-Huss"
},
{
"code": "",
"text": "The error I got wouldn’t concern my local environment.You are totally right. You have to do it from the machine that has the connection issue. The machine that gets ECONNREFUSED.",
"username": "steevej"
},
{
"code": "",
"text": "That’s heroku’s servers",
"username": "Dashiell_Bark-Huss"
}
]
| Error: connect ECONNREFUSED in production for only one request caused unhandled rejection | 2022-07-03T18:02:40.095Z | Error: connect ECONNREFUSED in production for only one request caused unhandled rejection | 3,630 |
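Independent of the root cause, a sketch of how a single failing request can be kept from crashing the whole Node.js process; the route shape, collection handle and names are assumptions:

    const { ObjectId } = require("mongodb");

    // assumes `app` is an Express app and `posts` a connected collection handle
    app.patch("/api/posts/:id", async (req, res) => {
      try {
        const result = await posts.updateOne(
          { _id: new ObjectId(req.params.id) },
          { $set: req.body }
        );
        res.json(result);
      } catch (err) {
        // A transient MongoNetworkError becomes a 503 for this one request
        // instead of an unhandled rejection that takes the dyno down.
        console.error("update failed:", err.message);
        res.status(503).json({ error: "database temporarily unavailable" });
      }
    });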
null | [
"devops"
]
| [
{
"code": "mongodbatlas_project.project: Creating...\n╷\n│ Error: error creating Project: POST https://cloud.mongodb.com/api/atlas/v1.0/groups: 401 (request \"NOT_ORG_GROUP_CREATOR\") The currently logged in user does not have the group creator role in organization 6262820f755c87761f7751e8.\n│\n│ with mongodbatlas_project.project,\n│ on project.tf line 1, in resource \"mongodbatlas_project\" \"project\":\n│ 1: resource \"mongodbatlas_project\" \"project\" {\n",
"text": "Hi,\nI have tried to provision the cluster to be used later for integration tests with terraform found here (terraform-provider-mongodbatlas/examples/starter at master · mongodb/terraform-provider-mongodbatlas · GitHub) with the following statement (I made a small modification to accept cidr not an ip):terraform -chdir=“terraform-manifests” apply -state-out=“result.tfstate” -var=“cloud_provider=AZURE” -var=“cluster_name=Foo” -var=“database_name=Bar” -var=“dbuser=test” -var=“dbuser_password=test123$” -var=“cidr_block=0.0.0.0/0” -var=“mongodbversion=5.0” -var=“org_id=MY_ORG_ID” -var=“private_key=MY_PRIVATE_KEY” -var=“project_name=Project 0” -var=“public_key=MY_PUBLIC_KEY” -var=“region=westeurope”The plan looks ok, but when executing the above apply; I get the following error:Can anyone help with this. Who is the logged in user? I did not logged in anywhere, so I am not sure who is the exception referring to. When I am logged in as a user [email protected] onto a atlas portal, do I need to assign this user some special permissions, or is the problem somewhere else?",
"username": "Sebastijan_Pistotnik"
},
{
"code": "",
"text": "The logged-in user is the public API key. Make sure the API key has sufficient permissions for the operation.Also, M0 clusters are available only for Mongo 4.4 or lower.Finally, not sure if you are allowed to use the unlimited CIDR range 0.0.0.0/0.",
"username": "Edgar_Knapp"
},
{
"code": "",
"text": "Thank you Edgar. I was able to by pass the authentication issues. I had to set permission on the API Key to be “Organization Owner”.I tried with the following options (I also tried it with ip address as it is on the original github sample instead of cidr_block; but I do not believe this is an issue, since both should work and in cidr_block case I even received an email notification success warning about it; so that think worked and I also see it in the response details):First I tried with M0 Free cluster (with 4.4 as can be seen on terraform docs). So I have setup the following in terraform files:cloud_backup = false\nauto_scaling_disk_gb_enabled = false\nprovider_name = var.cloud_provider\nprovider_instance_size_name = “M0”-var=“mongodbversion=4.4” (since terraform docs says so)And I have also setup the Azure region to westeurope. In this case, I get:mongodbatlas_project.project: Creating…\nmongodbatlas_project.project: Creation complete after 5s [id=6262ec38815a200dfa1f42d6]\nmongodbatlas_project_ip_access_list.ip: Creating…\nmongodbatlas_database_user.user: Creating…\nmongodbatlas_cluster.cluster: Creating…\nmongodbatlas_database_user.user: Creation complete after 0s [id=YXV0aF9kYXRhYmFzZV9uYW1l:YWRtaW4=-cHJvamVjdF9pZA==:NjI2MmVjMzg4MTVhMjAwZGZhMWY0MmQ2-dXNlcm5hbWU=:dGVzdA==]\nmongodbatlas_project_ip_access_list.ip: Creation complete after 4s [id=ZW50cnk=:ODQuMTE1LjIxNi4yNTU=-cHJvamVjdF9pZA==:NjI2MmVjMzg4MTVhMjAwZGZhMWY0MmQ2]\n╷\n│ Error: error creating MongoDB Cluster: POST …cloud.mongodb.com/api/atlas/v1.0/groups/6262ec38815a200dfa1f42d6/clusters: 400 (request “INVALID_ENUM_VALUE”) An invalid enumeration value M0 was specified.\n│\n│ with mongodbatlas_cluster.cluster,\n│ on atlas_cluster.tf line 1, in resource “mongodbatlas_cluster” “cluster”:\n│ 1: resource “mongodbatlas_cluster” “cluster” {Then I tried with a different instance size, due to the enum error.cloud_backup = false\nauto_scaling_disk_gb_enabled = false\nprovider_name = var.cloud_provider\nprovider_instance_size_name = “M2”And I have also setup the Azure region to northeurope. 
(since I saw that M2 is available only in northeurope)terraform -chdir=“terraform-manifests” apply -state-out=“result.tfstate” -var=“cloud_provider=AZURE” -var=“cluster_name=Foo” -var=“database_name=Bar” -var=“dbuser=test” -var=“dbuser_password=test123$” -var=“cidr_block=0.0.0.0/0” -var=“mongodbversion=4.4” -var=“org_id=my id” -var=“private_key=my key” -var=“project_name=Project_11” -var=“public_key=my key” -var=“region=northeurope”In this case, I get again:mongodbatlas_project.project: Creating…\nmongodbatlas_project.project: Creation complete after 4s [id=6262f0e8622e084960f1524a]\nmongodbatlas_project_ip_access_list.ip: Creating…\nmongodbatlas_database_user.user: Creating…\nmongodbatlas_cluster.cluster: Creating…\nmongodbatlas_database_user.user: Creation complete after 1s [id=YXV0aF9kYXRhYmFzZV9uYW1l:YWRtaW4=-cHJvamVjdF9pZA==:NjI2MmYwZTg2MjJlMDg0OTYwZjE1MjRh-dXNlcm5hbWU=:dGVzdA==]\nmongodbatlas_project_ip_access_list.ip: Creation complete after 5s [id=ZW50cnk=:MC4wLjAuMC8w-cHJvamVjdF9pZA==:NjI2MmYwZTg2MjJlMDg0OTYwZjE1MjRh]\n╷\n│ Error: error creating MongoDB Cluster: POST …cloud.mongodb.com/api/atlas/v1.0/groups/6262f0e8622e084960f1524a/clusters: 400 (request “INVALID_ENUM_VALUE”) An invalid enumeration value M2 was specifiedcloud_backup = true\nauto_scaling_disk_gb_enabled = true\nprovider_name = var.cloud_provider\nprovider_instance_size_name = “M10”-var=“mongodbversion=5.0”mongodbatlas_project.project: Creating…\nmongodbatlas_project.project: Creation complete after 5s [id=6262f160be944a1535c0f66c]\nmongodbatlas_project_ip_access_list.ip: Creating…\nmongodbatlas_database_user.user: Creating…\nmongodbatlas_cluster.cluster: Creating…\nmongodbatlas_database_user.user: Creation complete after 1s [id=YXV0aF9kYXRhYmFzZV9uYW1l:YWRtaW4=-cHJvamVjdF9pZA==:NjI2MmYxNjBiZTk0NGExNTM1YzBmNjZj-dXNlcm5hbWU=:dGVzdA==]\nmongodbatlas_project_ip_access_list.ip: Creation complete after 6s [id=ZW50cnk=:MC4wLjAuMC8w-cHJvamVjdF9pZA==:NjI2MmYxNjBiZTk0NGExNTM1YzBmNjZj]\n╷\n│ Error: error creating MongoDB Cluster: POST …cloud.mongodb.com/api/atlas/v1.0/groups/6262f160be944a1535c0f66c/clusters: 500 (request “UNEXPECTED_ERROR”) Unexpected error.In the last sample, I even tried to turn on the verbose logging in terraform and I got a api call response details (on the one that failed and the rest that were success); but there is not much info there (which makes sense, since it is a good API practice not to disclose an internal exception; but I wish I could see them at least somehow on the portal in some kind of an audit log):\n{\n“detail”: “Unexpected error.”,\n“error”: 500,\n“errorCode”: “UNEXPECTED_ERROR”,\n“parameters”: [],\n“reason”: “Internal Server Error”\n}Any advice is welcome:)",
"username": "Sebastijan_Pistotnik"
},
{
"code": "Error: error creating MongoDB Cluster: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/626658bf2740e75cd6e68888/clusters: 402 (request \"NO_PAYMENT_INFORMATION_FOUND\") No payment information was found for group 626658bf2740e75cd6e68888.",
"text": "So it seems like you need to use “Atlas Region” EUROPE_NORTH and not Azure Regions for the region part of vars; I found the docs here:This gives me a more specific error now:Error: error creating MongoDB Cluster: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/626658bf2740e75cd6e68888/clusters: 402 (request \"NO_PAYMENT_INFORMATION_FOUND\") No payment information was found for group 626658bf2740e75cd6e68888.Unfortunately free M0 still does not work due to a enum issues.",
"username": "Sebastijan_Pistotnik"
},
{
"code": "",
"text": "626658bf2740e75cd6e68888Have you followed up with verifying your email yet? When you signed up for a MongoDB Atlas account you should have received an email to do so. If not, please do and then try again.",
"username": "Melissa_Plunkett"
},
{
"code": "400 (request \"INVALID_ENUM_VALUE\") An invalid enumeration value M0 was specified.",
"text": "I am having the same problem using terraform… 400 (request \"INVALID_ENUM_VALUE\") An invalid enumeration value M0 was specified. I have a credit card on file and a valid email.",
"username": "Alex_Raskin"
}
]
| Provisioning of Atlas Mongodb cluster with terraform for Integration Tests | 2022-04-22T10:56:28.458Z | Provisioning of Atlas Mongodb cluster with terraform for Integration Tests | 6,870 |
null | [
"java",
"ops-manager"
]
| [
{
"code": "2022-03-04T04:54:57.287+0000 [main] ERROR com.xgen.svc.common.migration.MigrationRunner [MigrationRunner.java.run:310] - Failed to apply migration(s)\n\ncom.xgen.svc.common.migration.exception.NoMigrationPathException: There is no migration path from the existing application version (7c47919fa03b89ec66bb769d7d6ead4246cfa231) to the current one (80727814dc078a9ff9beb24f914729602dc059d0). Exiting.\n\n at com.xgen.svc.common.migration.MigrationSvc.verifyMigrationPath(MigrationSvc.java:300)\n\n at com.xgen.svc.common.migration.MigrationSvc.areWePerformingASystemUpgrade(MigrationSvc.java:202)\n\n at com.xgen.svc.common.migration.MigrationRunner.ensureLiveMigrationsState(MigrationRunner.java:353)\n\n at com.xgen.svc.common.migration.MigrationRunner.run(MigrationRunner.java:305)\n\n at com.xgen.svc.common.migration.MigrationRunner.main(MigrationRunner.java:387)\n",
"text": "Hi,\nGood morning.Can I request your suggestions on MongoDB OpsManager upgrade error? I have current setup Ops Manager 4.2.24 version installed. Doing Ops Manager upgrade to 4.4.10 version. I used rpm upgrade process (rpm -Uvh). During upgrade process I hit following error and Ops Manager not upgraded due to this error.Error details:",
"username": "Venkata_Sivasankar"
},
{
"code": "",
"text": "Hi Venkata,This forum covers MongoDB Community products, while MongoDB Ops Manager is part of our MongoDB Enterprise offering.Please raise a case in the MongoDB Support Portal (http://support.mongodb.com/) using the account associated to your Company MongoDB Enterprise subscriptions to get the needed assistance on your question.Kind regards,\nEmilio",
"username": "Emilio_Scalise"
},
{
"code": "",
"text": "@Venkata_Sivasankar did your issue resolve? It will be helpful if you can share more about fix",
"username": "Dheeraj_G"
},
{
"code": "",
"text": "Hi, try to upgrade to the latest version of 4.2 (4.2.26) and then start upgrading to 4.4.x, this may work for you.",
"username": "Dheeraj_G"
}
]
| Ops Manager upgrade issue (from 4.2.24 to 4.4.10 ) | 2022-03-17T02:14:58.953Z | Ops Manager upgrade issue (from 4.2.24 to 4.4.10 ) | 4,435 |
null | [
"atlas-cluster"
]
| [
{
"code": "",
"text": "I have this error when i try to connect Airbyte to Mongo AtlasTimed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@24b52d3e. Client view of cluster state is {type=REPLICA_SET, servers=[{address:27017=clustertest-shard-00-02.dwqoi.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadException: Prematurely reached end of stream}}, {address:27017=clustertest-shard-00-01.dwqoi.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadException: Prematurely reached end of stream}}, {address:27017=clustertest-shard-00-00.dwqoi.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadException: Prematurely reached end of stream}}]",
"username": "tim_Ran"
},
{
"code": "",
"text": "Hi @tim_Ran,Is the IP address of the Airbyte client correctly set in the IP access list?Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| Mongo Connection | 2022-07-07T09:53:35.199Z | Mongo Connection | 1,791 |
[
"node-js",
"mongoose-odm",
"sharding"
]
| [
{
"code": "",
"text": "For a database, having User collection, I want to send a user or one document from collection present in NA Zone Cluster 1 → Shard 1 to EU Zone Cluster 2 → Shard N + 1. Is it possible to do this. Also, is it possible to do this in NodeJS project (App Server).",
"username": "Sasha_N_A"
},
{
"code": "",
"text": "I have to cover 3 zones in total and my plan is to deploy cluster in each zone having one shard each, in which shard will have a PSS architecture replica set . There is also a chance to migrate a user from one zone to another permanently then for that I would require above functionality.I have started learning about clusters and sharding recently and I would greatly appreciate any help / suggestions.",
"username": "Sasha_N_A"
},
{
"code": "",
"text": "Hi @Sasha_N_A and welcome in the MongoDB Community !I think this doc answers exactly your question:Let me know if you need additional clarifications.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @MaBeuLux88Thanks a lot, provided link is exactly what I was looking for I just have 2 more questions.It was mentioned “You must be on a mongos. Do not issue the operation directly on the shard.” in the reference link does this mean I cannot do this on my node js app server?.Let say my User A (mongo Id: 12345) made 50 comments in NA zone and each document in 50 comment has owner field linked with User A (owner: 12345). Now if I move User A to EU would the comments document get corrupted or would the reference would still work and it will fetch user from EU but comments from NA?",
"username": "Sasha_N_A"
},
{
"code": "userscommentscommentscommentsarticlesitems",
"text": "Of course you can. When you are using a sharded cluster, the drivers connect to the mongos. You only connect directly to the shards when you are doing admin operations, upgrades, etc. Check out the chapter 3 of M103 for a better understand of sharded clusters.I’m going to assume that both the users and the comments comments are sharded collections in your example. This $lookup isn’t possible with MongoDB 5.0.X as the “from” collection (comments collection in this case) can’t be shared as mentioned in the 5.0 $lookup doc. But you are in luck because since 5.1, the “from” collection can now be sharded. See the 6.0 doc. I’d still be very careful with the distribution (shard keys) to make sure the queries aren’t doing any “scatter gather” operations. I’d also start the debate whether or not it’s a good idea to have a separate comments collection. Maybe it could be directly embedded in the articles collection (or items, I don’t know what is being commented). Double check the data modeling to see if there isn’t a better alternative.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Move a document from one shard to another shard in other cluster | 2022-07-06T06:22:18.043Z | Move a document from one shard to another shard in other cluster | 2,203 |
|
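For reference, a hedged mongosh sketch of the zone-sharding approach referred to in the linked documentation above, assuming a single sharded cluster whose shards live in different regions and a shard key that starts with a region field; all shard, zone, database, and collection names here are illustrative, not taken from the thread.

```javascript
// Run in mongosh against a mongos (names are illustrative only).
sh.addShardToZone("shard-na", "NA");
sh.addShardToZone("shard-eu", "EU");

// Shard the users collection on a key whose leading field encodes the zone.
sh.shardCollection("mydb.users", { region: 1, _id: 1 });
sh.updateZoneKeyRange("mydb.users",
  { region: "NA", _id: MinKey() }, { region: "NA", _id: MaxKey() }, "NA");
sh.updateZoneKeyRange("mydb.users",
  { region: "EU", _id: MinKey() }, { region: "EU", _id: MaxKey() }, "EU");

// "Moving" a user then becomes a shard key update (MongoDB 4.2+). It must be sent to the
// mongos (which is what drivers connect to anyway) as a retryable write or inside a
// transaction; the balancer then migrates the chunk data to the EU-zoned shard.
db.users.updateOne(
  { region: "NA", _id: ObjectId("...") },   // "..." stands for the user's real _id
  { $set: { region: "EU" } }
);
```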
null | [
"aggregation",
"node-js"
]
| [
{
"code": "",
"text": "Hey friends, Please i need help, I have a table called ward wit wards details such as name, date of birth and so on. i want to get the data of all the wards that their birthday will be in the next 7days using aggregate. Kindly help. i am stocked",
"username": "Gbade_Francis"
},
{
"code": "",
"text": "To explain better, i want to get all wards data using where the date_of_birth column will come up in the next 7days.Thanks",
"username": "Gbade_Francis"
},
{
"code": "\"date_of_birth\": { \n \"$gte\": <current date>, \n \"$lt\": <current date + 7 days> \n}\n",
"text": "Hello @Gbade_Francis, Welcome to MongoDB Community Forum,You can use conditional operators with the and condition, here you have to input your current date for greater than and equal to condition and current date +7 days date in less than condition,",
"username": "turivishal"
},
{
"code": " [\n {\n '$match': {\n 'isDeleted': false,\n \"dateOfBirth\": {\n \"$gte\": \"$dateOfBirth\",\n \"$lt\": \"$dateOfBirth\"+7\n }\n\n\n }\n }, {\n '$project': {\n '_id': '$_id',\n 'dateOfBirth': {\n '$dateToString': {\n 'format': '%Y-%m-%d',\n 'date': '$dateOfBirth'\n }\n },\n 'firstName': '$firstName',\n 'middleName': '$middleName',\n 'lastName': '$lastName',\n 'sex': '$sex'\n }\n }\n ]\n ])\n{\n\n \"sex\": \"female\",\n\n \"dateJoined\": \"2022-02-13T18:58:37.083Z\",\n\n \"roleName\": null,\n\n \"isDeleted\": false,\n\n \"isRestricted\": false,\n\n \"levelHistory\": [],\n\n \"_id\": \"6209599e9ee1d60016315ce0\",\n\n \"admissionNumber\": \"Dia001\",\n\n \"firstName\": \"kehinde\",\n\n \"middleName\": \"\",\n\n \"lastName\": \"fatokun\",\n\n \"dateOfBirth\": \"2006-02-23T19:18:12.000Z\",\n\n \"guardianContact\": \"08034356783\",\n\n \"currentLevel\": \"620956889ee1d60016315bfc\",\n\n \"schoolId\": \"620956889ee1d60016315bf8\",\n\n \"__v\": 0\n\n},\n\n{\n\n \"sex\": \"male\",\n\n \"dateJoined\": \"2022-02-13T18:58:37.083Z\",\n\n \"roleName\": null,\n\n \"isDeleted\": false,\n\n \"isRestricted\": false,\n\n \"levelHistory\": [],\n\n \"_id\": \"620959f09ee1d60016315ced\",\n\n \"admissionNumber\": \"Dia002\",\n\n \"firstName\": \"funsho \",\n\n \"middleName\": \"\",\n\n \"lastName\": \"adeoye\",\n\n \"dateOfBirth\": \"2014-02-20T19:19:30.000Z\",\n\n \"guardianContact\": \"08035678893\",\n\n \"currentLevel\": \"620956889ee1d60016315bfa\",\n\n \"schoolId\": \"620956889ee1d60016315bf8\",\n\n \"__v\": 0\n\n},\n\n{\n\n \"sex\": \"female\",\n\n \"dateJoined\": \"2022-02-13T18:58:37.083Z\",\n\n \"roleName\": null,\n\n \"isDeleted\": false,\n\n \"isRestricted\": false,\n\n \"levelHistory\": [],\n\n \"_id\": \"62095b199ee1d60016315d0c\",\n\n \"admissionNumber\": \"dia003\",\n\n \"firstName\": \"tolani\",\n\n \"middleName\": \"\",\n\n \"lastName\": \"makinde\",\n\n \"dateOfBirth\": \"2015-02-11T19:24:34.000Z\",\n\n \"guardianContact\": \"08033567898\",\n\n \"currentLevel\": \"620956889ee1d60016315bfa\",\n\n \"schoolId\": \"620956889ee1d60016315bf8\",\n\n \"__v\": 0\n\n},\n\n{\n\n \"sex\": \"male\",\n\n \"dateJoined\": \"2022-02-13T18:58:37.083Z\",\n\n \"roleName\": null,\n\n \"isDeleted\": true,\n\n \"isRestricted\": false,\n\n \"levelHistory\": [],\n\n \"_id\": \"62095ba19ee1d60016315d25\",\n\n \"admissionNumber\": \"Dia004\",\n\n \"firstName\": \"kayode \",\n\n \"middleName\": \"\",\n\n \"lastName\": \"adeolu\",\n\n \"dateOfBirth\": \"2010-02-18T19:26:53.000Z\",\n\n \"guardianContact\": \"08033679866\",\n\n \"currentLevel\": \"620956889ee1d60016315bfa\",\n\n \"schoolId\": \"620956889ee1d60016315bf8\",\n\n \"__v\": 0\n\n},\n\n{\n\n \"sex\": \"female\",\n\n \"dateJoined\": \"2022-02-13T18:58:37.083Z\",\n\n \"roleName\": null,\n\n \"isDeleted\": false,\n\n \"isRestricted\": false,\n\n \"levelHistory\": [],\n\n \"_id\": \"62095bec9ee1d60016315d31\",\n\n \"admissionNumber\": \"dia005\",\n\n \"firstName\": \"tina\",\n\n \"middleName\": \"\",\n\n \"lastName\": \"phillips\",\n\n \"dateOfBirth\": \"2010-02-18T19:28:04.000Z\",\n\n \"guardianContact\": \"08035678764\",\n\n \"currentLevel\": \"620956889ee1d60016315bfa\",\n\n \"schoolId\": \"620956889ee1d60016315bf8\",\n\n \"__v\": 0\n\n},\n\n{\n\n \"sex\": \"female\",\n\n \"dateJoined\": \"2022-02-13T18:58:37.083Z\",\n\n \"roleName\": null,\n\n \"isDeleted\": false,\n\n \"isRestricted\": false,\n\n \"levelHistory\": [],\n\n \"_id\": \"62095cda9ee1d60016315d46\",\n\n \"admissionNumber\": \"dia006\",\n\n \"firstName\": \"kehinde\",\n\n \"middleName\": 
\"\",\n\n \"lastName\": \"faye\",\n\n \"dateOfBirth\": \"2006-02-16T00:00:00.000Z\",\n\n \"guardianContact\": \"08143469839\",\n\n \"currentLevel\": \"620956889ee1d60016315bfa\",\n\n \"schoolId\": \"620956889ee1d60016315bf8\",\n\n \"__v\": 0\n\n},\n\n{\n\n \"sex\": \"male\",\n\n \"dateJoined\": \"2022-02-14T07:13:58.502Z\",\n\n \"roleName\": null,\n\n \"isDeleted\": false,\n\n \"isRestricted\": true,\n\n \"levelHistory\": [],\n\n \"_id\": \"620a02689e6dfb00162eaa98\",\n\n \"admissionNumber\": \"DI001\",\n\n \"firstName\": \"gold\",\n\n \"middleName\": \"\",\n\n \"lastName\": \"silver\",\n\n \"dateOfBirth\": \"2022-02-01T07:18:45.000Z\",\n\n \"guardianContact\": \"07037617125\",\n\n \"currentLevel\": \"620956889ee1d60016315bfb\",\n\n \"schoolId\": \"620956889ee1d60016315bf8\",\n\n \"__v\": 0,\n\n \"imageUrl\": \"http://res.cloudinary.com/asm-web/image/upload/v1646499727/zhes72svpkdwylmzl638.png\"\n\n},\n",
"text": "Thanks for the response. but current date?? current date should be like recent date now? it did not work with the birthday date.let getbirthday= await model.aggregate([Please help me to check.This is the data from my ward table:[]Thanks",
"username": "Gbade_Francis"
},
{
"code": "let fromDate = new Date();\nlet toDate = new Date();\ntoDate.setDate(toDate.getDate() + 7);\ntoDate = new Date(toDate);\n{\n \"$match\": {\n \"dateOfBirth\": {\n \"$gte\": fromDate,\n \"$lt\": toDate\n }\n }\n}\n",
"text": "the date_of_birth column will come up in the next 7days.Means a specific date period, right?so current date means now date, for ex in JS,The query would be,",
"username": "turivishal"
},
{
"code": "\"dateOfBirth\": {\n \"$gte\": fromDate,\n \"$lt\": toDate\n }\n",
"text": "Boos, You are the best. It actually work, but i think there is something needed to be done again.because i run it, remember we have date of birth which is\n2009-10-13T00:00:00.000+00:00, Meaning that we have\nyear i was born is 2009\nMonth is 10\nand day is 13Now, the system should calculate in the sense that if it is 2009, 10, 13, how will the system understand that the birthday is comming up on 2022-10-13. because not untill i alter the birthday and change it to todays date before i can get the data. if not, its given empty array.\nI hope you understand boss",
"username": "Gbade_Francis"
},
{
"code": " let fromDate = new Date();\n console.log(fromDate)\n let toDate = new Date();\n console.log(toDate)\n toDate.setDate(toDate.getDate() + 7);\n toDate = new Date(toDate);\n console.log(toDate)\n let getbirthday= await model.aggregate([\n [\n {\n '$match': {\n 'isDeleted': false,\n \"dateOfBirth\": {\n \"$gte\": fromDate,\n \"$lt\": toDate\n }\n }\n",
"text": "]then i got this\n[ ]\nGET /api/v1/birthday/get-wardBirthday - - ms - -\n2022-07-07T11:16:34.972Z\n2022-07-07T11:16:34.973Z\n2022-07-14T11:16:34.973Z\n[ ]instead of it to pull the wards birthday that belongs to that range.remember my date of birth i inserted last year can be 1988-03-13\nand i want the system to remember my birthday this year. and it should remind me 7days before my birthday… (This is exactly what it should work boss).Thanks so much",
"username": "Gbade_Francis"
},
{
"code": "$expr$month$dayOfMonthlet fromDate = new Date();\nlet toDate = new Date();\ntoDate.setDate(toDate.getDate() + 7);\ntoDate = new Date(toDate);\n\nlet getbirthday= await model.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$and\": [\n {\n \"$gte\": [\n { \"$dayOfMonth\": \"$dateOfBirth\" },\n { \"$dayOfMonth\": fromDate }\n ]\n },\n {\n \"$gte\": [\n { \"$month\": \"$dateOfBirth\" },\n { \"$month\": fromDate }\n ]\n },\n {\n \"$lte\": [\n { \"$dayOfMonth\": \"$dateOfBirth\" },\n { \"$dayOfMonth\": toDate }\n ]\n },\n {\n \"$lte\": [\n { \"$month\": \"$dateOfBirth\" },\n { \"$month\": toDate }\n ]\n }\n ]\n }\n }\n }\n]);\nlet fromMonth = 7;\nlet toMonth = 7;\nlet fromDay = 7;\nlet toDay = 14;\n\nlet getbirthday= await model.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$and\": [\n {\n \"$gte\": [\n { \"$dayOfMonth\": \"$dateOfBirth\" },\n fromDay\n ]\n },\n {\n \"$gte\": [\n { \"$month\": \"$dateOfBirth\" },\n fromMonth\n ]\n },\n {\n \"$lte\": [\n { \"$dayOfMonth\": \"$dateOfBirth\" },\n toDay\n ]\n },\n {\n \"$lte\": [\n { \"$month\": \"$dateOfBirth\" },\n toMonth\n ]\n }\n ]\n }\n }\n }\n]);\n",
"text": "Okay, I misunderstood,Might be there are other solutions as well but, I know the way is $expr expression with aggregation operators, by using $month and $dayOfMonth operators,You can simplify more this by passing the month number and date number instead of the whole date,",
"username": "turivishal"
},
{
"code": "",
"text": "Wow!!!. I am grateful. it work exactly the way i want it. Thanks so much and thanks so much. CHeers.\nPlease i will like to be your friend, you can send me your whatsap number on +2348132185887 or in my email. [email protected]",
"username": "Gbade_Francis"
},
{
"code": "if fromMonth == toMonth\n return monthOfBirth == fromMonth && fromDay <= dayOfBirth && dayOfBirth <= toDay\nelse if monthOfBirth == fromMonth\n return fromDay <= dayOfBirth\nelse if monthOfBirth == toMonth\n return dayOfBirth <= toDay\n/* set stage to set monthOfBirth and dayOfBirth using $month and $dayOfMonth */\n\"$or\" : [\n { \"$and\" : [\n { \"$eq\" : [ fromMonth , toMonth ] } ,\n { \"$eq\" : [ fromMonth , \"$monthOfBirth\" ] } ,\n { \"$lte\" : [ fromDay , \"$dayOfBirth\" ] } ,\n { \"$lte\" : [ \"$dayOfBirth\" , toDay ] } ,\n ] } ,\n { \"$and\" : [\n { \"$eq\" : [ fromMonth , \"$monthOfBirth\" ] } ,\n { \"$lte\" : [ fromDay , \"$dayOfBirth\" ] }\n ] } ,\n { \"$and\" : [\n { \"$eq\" : [ toMonth , \"$monthOfBirth\" ] } ,\n { \"$lte\" : [ \"$dayOfBirth\" , toDay ] } ,\n ] }\n]\n",
"text": "I am not too sure that this will work for boundary cases.Assume you want birthdays from June 30th to July 6th.So your fromDay=30, fromMonth=6, toDay=6, toMonth=7.There is no way you could find a day that $gte:fromDay and $lte:toDay. The aggregation works because fromMonth == toMonth.You need something a little bit more complex.Something like:Which would look like:I am still unsure if the above handle the edge cases where fromMonth=12 and toMonth=1",
"username": "steevej"
},
{
"code": "$dateFromPartslet fixedYear = 2000;\n\nlet fromDate = new Date();\nfromDate.setFullYear(fixedYear);\nfromDate = new Date(fromDate);\n\nlet toDate = new Date();\ntoDate.setFullYear(fixedYear);\ntoDate.setDate(toDate.getDate() + 7);\ntoDate = new Date(toDate);\n\nlet getbirthday= await model.aggregate([\n {\n \"$match\": {\n \"$expr\": {\n \"$and\": [\n {\n \"$gte\": [\n {\n \"$dateFromParts\": {\n \"year\": fixedYear,\n \"month\": { \"$month\": \"$dateOfBirth\" },\n \"day\": { \"$dayOfMonth\": \"$dateOfBirth\" }\n }\n },\n fromDate\n ]\n },\n {\n \"$lt\": [\n {\n \"$dateFromParts\": {\n \"year\": fixedYear,\n \"month\": { \"$month\": \"$dateOfBirth\" },\n \"day\": { \"$dayOfMonth\": \"$dateOfBirth\" }\n }\n },\n toDate\n ]\n }\n ]\n }\n }\n }\n])\n",
"text": "You are right,There is another solution, I am not sure about the performance but, what if we fix the year and match the condition, and reconstruct the date by $dateFromParts operator,",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Using Agreegate to get the data of all wards that their birthday will be in the next 7 days | 2022-07-07T04:28:59.401Z | Using Agreegate to get the data of all wards that their birthday will be in the next 7 days | 3,532 |
null | [
"python"
]
| [
{
"code": "",
"text": "Hello. I’m deloping python app using mongodb cloud with motor 3.0.0 and faced with this error:\nServerSelectionTimeoutError: none:27017: [Errno 11001] getaddrinfo failed\nHow can solve this problem?",
"username": "Oleg_Butirsky"
},
{
"code": "",
"text": "Check your URI. It is not defined correctly. It fails because the host name none does not exist as indicated bygetaddrinfo failed",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| ServerSelectionTimeoutError: none:27017: [Errno 11001] | 2022-07-06T14:39:52.616Z | ServerSelectionTimeoutError: none:27017: [Errno 11001] | 3,459 |
[]
| [
{
"code": "",
"text": "I do have installed mongodb at my Server.\nI can connect via CLI straight from the server, and I can connect the SSH-Tunnel to the server (via NoSQLBooster).But what not work is to connect the MongoDB via SSH-Tunnel.\nHere are the output of the Connection:\n\nimage1063×591 95.6 KB\nMight it be an miss-configuration of SSH or the mongodb?",
"username": "Samuel_79093"
},
{
"code": "",
"text": "Got it, maybe it helps other ones in the future:You have to allow “AllowTcpForwarding” in your ssh-config",
"username": "Samuel_79093"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Connection via SSH-Tunnel don't work | 2022-07-07T11:02:08.545Z | Connection via SSH-Tunnel don’t work | 1,956 |
|
null | [
"aggregation",
"time-series"
]
| [
{
"code": "const cursor = await db.collection(test).aggregate([\n {\n $match: {\n \"meta.ti\": ti,\n \"meta.type\": \"P\",\n ts: {\n $lte: new Date(\"2022-05-27T16:40:29.000Z\"),\n $gte: new Date(\"2022-05-09T06:10:22.000Z\"),\n },\n },\n },\n {\n $group: {\n _id: {\n ti: \"$meta.ti\",\n trace: \"$trace\",\n ts: {\n $dateTrunc: {\n date: \"$ts\",\n unit: \"day\",\n startOfWeek: \"monday\",\n binSize: 1,\n },\n },\n },\n docs: {\n $topN: {\n output: \"$$ROOT\",\n sortBy: {\n _id: 1,\n },\n n: 1,\n },\n },\n },\n },\n {\n $sort: {\n \"docs.ts\": -1,\n },\n },\n ]);\nconst cursor = await db.collection(system.buckets.test).aggregate([\n {\n $match: {\n \"meta.ti\": ti,\n \"meta.type\": \"P\",\n },\n },\n { $limit: 2 },\n {\n $_internalUnpackBucket: {\n timeField: \"ts\",\n metaField: \"meta\",\n bucketMaxSpanSeconds: 3600,\n },\n },\n {\n $group: {\n _id: {\n ti: \"$meta.ti\",\n trace: \"$trace\",\n ts: {\n $dateTrunc: {\n date: \"$ts\",\n unit: \"day\",\n startOfWeek: \"monday\",\n binSize: 1,\n },\n },\n },\n docs: {\n $topN: {\n output: \"$$ROOT\",\n sortBy: {\n _id: 1,\n },\n n: 1,\n },\n },\n },\n },\n {\n $sort: { \"docs.ts\": -1 },\n },\n ]);\n",
"text": "Hi all,we use a time series collection and need to regularly collect and display the first 50, 100, 250 documents of a device. Individual entries must be unique and updatable, so duplicates must always be filtered out (using a group). I’m new to mongo and stuck at calculating the document count.I have two ideas:Aggregate the documents via the system.bucket and use its control.count field to determine the current document count. Then break as soon as the desired limit is reached. Yet, I don’t know how to do this dynamically and more important directly in the aggregation. In fact, I currently just guess the $limit.Use the time series collection (not the internal bucket collection) and limit documents using a time window. Yet, I don’t know how to dynamically increase the window in the aggregation.Here are two examples.\nExample aggregation using the time series collection:Example aggregation using the system.bucket collection:Devices provide data points with different insertion rates (i.e. some devices insert within seconds and others in days to weeks). So for some devices I will scan too little for others too much documents if I just guess the time span or the limit.Is it possible in both scenarios to avoid this under and over fetching? Is there a smarter way to approach this problem?Thanks,\nBen",
"username": "Benjamin_Behringer"
},
{
"code": "",
"text": "Hi @Benjamin_Behringer,Regarding your approach in 1., I would perhaps avoid this scenario as this is an implementation detail and it might change in future MongoDB versions as the feature is improved.To get a better idea of what you’re trying to achieve with the second approach, would you be able to provide the following information:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n \"ts\": ISODate('2022-05-27T12:29:37.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3383,\n \"_id\": ObjectId(\"62b790d8510c3407edfb7434\")\n},\n{\n \"ts\": ISODate('2022-04-20T11:24:48.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3382,\n \"_id\": ObjectId(\"62b790c8510c3407edfb092d\")\n},\n{\n \"ts\": ISODate('2022-03-25T14:56:28.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3381,\n \"_id\": ObjectId(\"62b790bc510c3407edfab8bd\")\n},\n{\n \"ts\": ISODate('2022-03-25T14:51:38.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3380,\n \"_id\": ObjectId(\"62b790bc510c3407edfab8bf\")\n},\n{\n \"ts\": ISODate('2022-03-25T14:51:14.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3380,\n \"_id\": ObjectId(\"62b790bc510c3407edfab8c3\")\n},\n{\n \"ts\": ISODate('2022-02-26T10:32:37.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"receipt\": 3379,\n \"_id\": ObjectId(\"62b790b1510c3407edfa720b\")\n}\n{\n \"ts\": ISODate('2022-05-27T12:29:37.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3383,\n \"_id\": ObjectId(\"62b790d8510c3407edfb7434\")\n},\n{\n \"ts\": ISODate('2022-04-20T11:24:48.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3382,\n \"_id\": ObjectId(\"62b790c8510c3407edfb092d\")\n}\n{\n \"ts\": ISODate('2022-05-27T12:29:37.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3383,\n \"_id\": ObjectId(\"62b790d8510c3407edfb7434\")\n},\n{\n \"ts\": ISODate('2022-04-20T11:24:48.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3382,\n \"_id\": ObjectId(\"62b790c8510c3407edfb092d\")\n},\n{\n \"ts\": ISODate('2022-03-25T14:56:28.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3381,\n \"_id\": ObjectId(\"62b790bc510c3407edfab8bd\")\n},\n{\n \"ts\": ISODate('2022-03-25T14:51:38.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"trace\": 3380,\n \"_id\": ObjectId(\"62b790bc510c3407edfab8bf\")\n},\n{\n \"ts\": ISODate('2022-02-26T10:32:37.000+00:00'),\n \"meta\": { \"ti\": 1 },\n \"receipt\": 3379,\n \"_id\": ObjectId(\"62b790b1510c3407edfa720b\")\n}\n$match: {\n \"meta.ti\": 1,\n ts: {\n $gte: ISODate(\"2022-04-20T11:24:48.000+00:00\"),\n },\n },\n",
"text": "Hi Jason,Thanks for your response. Here are the details:MongoDB Version 6.0.0-rc7 (Atlas)Here’s an example with infrequent data.I want to be able to say “give me the last 2 documents”, which gives me:Or the last 5, which gives me:Note that trace 3380 is duplicated in the collection and thus only the most recent document with this trace is valid and printed (cf. $group in the first post).Let’s use the example above again. If I want to collect the “last two” transactions, I need at least:Some devices (identified by meta.ti) have a higher insertion frequency, e.g., hundreds of transactions a day. So just using time in the $match to retrieve documents leads to over-fetching or under-fetching.In a sample of 50.000 documents 119 are duplicates. Since documents track financial transactions from multiple legacy systems, it would indeed affect results In particular, each transaction has a trace number. This trace can be duplicated indicating that the prior transaction with this trace failed (cf. example). Thus, only the transaction with the latest transaction time and trace pair is valid. Since we don’t care about failed transactions, we remove duplicates already while inserting, yet it is not guaranteed that this way all duplicated traces are identified. It would be great if we could change the status of a transaction, but updating fields seems to be not possible with time series currently.Hope this helps to understand the problem.Thanks\nBen",
"username": "Benjamin_Behringer"
},
{
"code": "tsdb>db.collection.aggregate(\n{$match:{\"meta.ti\":1}},\n{$group:{_id:{trace:\"$trace\"},ts:{$max:\"$ts\"},meta:{\"$max\":\"$meta\"}}},\n{$group:{_id:{ti:\"$meta.ti\"},data: { '$topN': { output: '$$ROOT', sortBy: { ts: -1 }, n: 2 } }}})\n[\n {\n _id: { ti: 1 },\n data: [\n {\n _id: { trace: 1962 },\n ts: ISODate(\"2022-12-30T13:42:10.017Z\"),\n meta: { ti: 1 }\n },\n {\n _id: { trace: 4725 },\n ts: ISODate(\"2022-12-28T10:40:41.485Z\"),\n meta: { ti: 1 }\n }\n ]\n }\n]\n{\"meta.ti\":1}{\"ts\":1}$group$grouptracetsmeta$max$groupntstracetsdb>db.collection.aggregate(\n{$match:{\"meta.ti\":1}},\n{$group:{_id:{trace:\"$trace\"},ts:{$max:\"$ts\"},meta:{\"$max\":\"$meta\"}}},\n{$group:{_id:{ti:\"$meta.ti\"},data: { '$topN': { output: '$$ROOT', sortBy: { ts: -1 }, n: 2 } }}})\n[\n {\n _id: { ti: 1 },\n data: [\n {\n _id: { trace: 3380 },\n ts: ISODate(\"2022-03-25T14:51:38.000Z\"),\n meta: { ti: 1 }\n }\n ]\n }\n]\n",
"text": "Hi @Benjamin_Behringer - Thank you for providing the detailed response I want to be able to say “give me the last 2 documents”, which gives me:I’ve come up with a possible working aggregation which appears to provide an output similar to your expected output:Please note that I have a the following indexes on my test environment:Note that trace 3380 is duplicated in the collection and thus only the most recent document with this trace is valid and printed (cf. $group in the first post).There are multiple $group stages mentioned above, the first $group stage is aimed to remove duplicate entries with the same trace value taking the latest ts value. As they are duplicates, I presume the meta field data would be the same so I have just used the $max accumulator on this field. However, correct me if I am wrong here.The second $group is similar to what you had advised in the initial post but with n set to the value of the amount of documents per device you want returned and sorted by ts descending.Since the operation is attempting to achieve a result that includes “clean” data whilst querying, there will be a hit to performance which I understand is not ideal. In saying so, if you need this query to be more performant, perhaps you could consider a separate process that can filter/remove the duplicates before querying the Top-N documents from it.Running the same aggregation using a test collection containing duplicate documents with the trace field value of 3380 (duplicate):If you are to use this aggregation, it is highly recommend to extensively test and verify it suits all your use case requirements in a test environment firstGoing back to the duplicate documents, how often does this occur? While a timeseries collection is great for ingesting the data type you have, have you considered using a regular collection with a unique index that can deal with the duplicate values as it is being inserted?but updating fields seems to be not possible with time series currently.With regards to the above and starting in MongoDB version 5.0.5, you can perform some delete and update operations but with the requirements listed in the Time Series Collections Limitations - Updates and Deletes documentation.In addition to the above, the Indexes documentation for timeseries collection may be of use to you.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,Wow, thanks for your great help! Really appreciate it I tested your approach. Unfortunately, performance depends on the maximum number of documents as internalUnpackBucket needs to unpack all documents and subsequent grouping is performed on all of them. So this wouldn’t scale well? Moreover, the sort operation might fail eventually? Below is an example with 12108 transactions to be unpacked and grouped. We retrieved the last 200 transactions.\nScreenshot 2022-06-29 at 11.34.041124×1088 122 KB\nUsing the approach in my first post is significantly faster, since we need to unpack the required documents only. In the given time frame the last 453 documents have been retrieved, grouped and sorted in 12 ms. Note that Compass used the index, but didn’t indicate it. However, we would under fetch or over fetch and would need to use limit to cap at 200 documents…\nScreenshot 2022-06-29 at 11.54.431086×1268 137 KB\nThis is why I thought just going through the system.bucket might help, but that doesn’t feel right and comes with the drawbacks you mentioned.Some more questions:Going back to the duplicate documents, how often does this occur?~0,23% of documents are duplicates in our sample (50k docs)have you considered using a regular collection with a unique index that can deal with the duplicate values as it is being inserted?Yes, but time series feel natural for this problem, since it takes the burden of creating buckets, efficiently storing the data, enabling nice queries, … off our shoulders.With regards to the above and starting in MongoDB version 5.0.5, you can perform some delete and update operationsUnfortunately, queries can be only performed on the meta fields, which doesn’t help in our scenario.Thanks,\nBen",
"username": "Benjamin_Behringer"
},
{
"code": "[\n {\n $match: {\n 'meta.ti': 1,\n 'meta.type': 'P'\n }\n }, {\n $sort: {\n ts: -1\n }\n }, {\n $limit: 50\n }, \n ...\n]\n[\n {\n $match: {\n 'meta.ti': 1,\n 'meta.type': 'P'\n \"ts\" : {$lt: ISODate('2022-07-07T09:59:42.743Z')}\n }\n }, {\n $sort: {\n ts: -1\n }\n }, {\n $limit: 50\n }, \n ...\n]\n",
"text": "Tested and solved for us with 6.0.0-rc13 Just need to $match, $sort and $limit before doing anything else. This way mongo limits the number of buckets to be unpacked and does not unpack all buckets anymore. For instance:And if you want to retrieve the next 50 you just use the timestamp of the last element:Using an index on the necessary meta fields makes this blazing fast ",
"username": "Benjamin_Behringer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Getting most recent documents in time series collection | 2022-06-26T22:20:41.752Z | Getting most recent documents in time series collection | 4,413 |
[
"swift",
"atlas-functions",
"documentation"
]
| [
{
"code": "",
"text": "I am building sample Task Tracker app from here:\nhttps://docs.mongodb.com/realm/tutorial/realm-app/#std-label-tutorial-task-tracker-create-realm-appFollowing the instructions, I created the back end using the CLI tools.I downloaded the sample iOS app and I am able to create accounts, log in, create data.When I try to add a team member using email to share data I get an error: “function not found”I’ve attached screenshots to shot the function is setup on the back end with the proper names.I suspect a permission issue, but I’m unable to resolve the issue.\nScreen Shot 2021-07-27 at 9.39.53 PM2988×1010 270 KB\n",
"username": "Xavier_De_Leon1"
},
{
"code": "",
"text": "I have set up the backend on more than half a dozen servers during the last week to test the response times from various server locations and did not come across this error. For the front-end I used React Native, Node.js, CLI and the iOS project using the final Github versions to prevent typing errors. The tutorials do have a few bugs and syntax errors, but the Backend always worked on every client I tried, if I correctly followed every step.One permissions issue I had, was due to my mistake in following the backend tutorial: make sure you select “Project Owner”\nimage792×533 45.8 KB\nYour issue may be elsewhere. Also check, if you see all the functions in your Realm App. If my suggestion does not work, just set it up again from scratch on a different MongoDB server.",
"username": "Christian_Wagner"
},
{
"code": "",
"text": "I’ve the same issue. what can I do?",
"username": "Andrea_Montagner"
}
]
| MongoDB Realm Task Tracker Tutorial: "Function not found" | 2021-07-28T04:40:46.355Z | MongoDB Realm Task Tracker Tutorial: “Function not found” | 4,539 |
|
null | [
"queries",
"node-js",
"data-modeling",
"atlas-device-sync",
"react-native"
]
| [
{
"code": "",
"text": "Hello , I am building a mobile react native app thru expo. I was looking thru all of the possible NoSQL db candidates and I got attracted by mongoDB. As I understand you can integrate mongoDB atlas directly inside of your app through realm SDK and use cloud server side functions to take care of the post,fetch and t.t data operations. So I don’t understand what is the point of creating the realm schemes for your local db and sync them later if you can connect your mongo db directly?Thank you dear friends for your help.",
"username": "Lukas_Vainikevicius"
},
{
"code": "",
"text": "Hi @Lukas_Vainikevicius ,what is the point of creating the realm schemes for your local db and sync them later if you can connect your mongo db directly?Realm is an offline-first database, to work with your data you don’t need to be connected to the network, all the changes you apply to your records are immediately available, and the synchronisation happens transparently for the user. This is ideal when a connection is not guaranteed, and apps are responsive at all times, and is a common use case in mobile apps. In fact, having a synchronised DB is a great feature, but isn’t a requirement at all, a lot of apps use Realm exclusively as a local persistent storage, as an alternative to, for example, raw SQLite or Core Data on iOS.That said, you’re free to ignore the local storage, and work exclusively with the cloud, either via Functions, GraphQL, Data API, or Custom HTTPS Endpoints: the whole point is to have choices, and that’s what Atlas and Realm provide.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "thank you very much for you answer",
"username": "Lukas_Vainikevicius"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Why should I create realm schemes for my project if I can integrate atlas DB through realm SDK directly and use cloud functions? | 2022-07-06T23:58:33.375Z | Why should I create realm schemes for my project if I can integrate atlas DB through realm SDK directly and use cloud functions? | 2,142 |
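To illustrate the "work exclusively with the cloud" option mentioned above, here is a hedged sketch using the Realm JS SDK from a React Native / Node client; the App ID, the function name, the linked service name ("mongodb-atlas" is the common default), and the database/collection names are all assumptions, not values from the thread.

```javascript
// Calling an Atlas App Services function and querying Atlas directly, with no local
// Realm database or schema involved (placeholder names throughout).
import Realm from "realm";

const app = new Realm.App({ id: "<your-app-id>" }); // placeholder App ID

async function run() {
  const user = await app.logIn(Realm.Credentials.anonymous());

  // "sum" is a hypothetical server-side function defined in App Services.
  const result = await user.functions.sum(2, 3);
  console.log("function result:", result);

  // Query Atlas through the MongoDB service client instead of a synced local Realm.
  const mongo = user.mongoClient("mongodb-atlas");
  const docs = await mongo.db("app").collection("tasks").find({});
  console.log("documents found:", docs.length);
}

run().catch(console.error);
```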
null | []
| [
{
"code": "",
"text": "HI, When we create a permission to a database administrator with full permission , is it required to grant permission like this below? just dbAdminAnyDatabase isnt enough?\ndb.createUser({ user: “mongodmin” , pwd: “password”, roles: [“userAdminAnyDatabase”, “dbAdminAnyDatabase”, “readWriteAnyDatabase”]})",
"username": "Rajitha_Hewabandula"
},
{
"code": "",
"text": "if your database administrator should have full privileges on all resources read about\nroot role",
"username": "Arkadiusz_Borucki"
}
]
| Database admin permission set | 2022-07-07T05:33:02.240Z | Database admin permission set | 1,053 |
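As a point of comparison (a sketch with placeholder credentials): dbAdminAnyDatabase on its own covers schema, index, and statistics tasks but not user administration or general reads and writes, which is why the three roles are often combined; the built-in root role bundles those plus cluster administration.

```javascript
// Run in mongosh (placeholder usernames and passwords).
const admin = db.getSiblingDB("admin");

// Option 1: the three "AnyDatabase" roles combined, as in the question.
admin.createUser({
  user: "mongoadmin",
  pwd: "strong-password",
  roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase"]
});

// Option 2: the built-in superuser role, which bundles the roles above plus
// clusterAdmin, backup and restore.
admin.createUser({
  user: "rootadmin",
  pwd: "strong-password",
  roles: [{ role: "root", db: "admin" }]
});
```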
null | [
"queries"
]
| [
{
"code": "router.get(\"/\", (req, res, next) => {\n console.log(\"BEGIN\")\n users.find(\n {\n _id: req.query._id\n },\n {\n perms: 1,\n lastName: 1\n },\n (error, data) => {\n if(error) {\n return next(error);\n } else {\n console.log(\"data: \"+data)\n res.json(data);\n }\n console.log(\"END\")\n }\n )\n})\nBEGIN\ndata: \nEND\nGET /userData/?_id=62c5ca2734ed51f3e82bbe56 200 50.509 ms - 2\n",
"text": "One of my mongoDB calls is not functioning, and I am unable to figure out why. All others work properly.The following code is the callThis is the output of the log:No data is sent. “perms” is an object and lastName is a string.",
"username": "Maxerature_N_A"
},
{
"code": "req.query._id",
"text": "Just a wild guess since you don’t show enough to make certain, but quite possibly req.query._id is null or nonsense.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "The ID is used elsewhere (login), and matches the format in the database",
"username": "Maxerature_N_A"
},
{
"code": "console.log()",
"text": "The ID is used elsewhere (login), and matches the format in the databaseQuite possibly so. To make sure, why not console.log() it right there with the BEGIN.",
"username": "Jack_Woehr"
},
{
"code": "_idimport { ObjectId } from \"bson\"\n....\nusers.find( { _id: ObjectId( req.query._id ) } )\n....\n",
"text": "when querying a proper _id field generated by MongoDB during inserts (or with proper id function), you need to use the same method to fetch it back.In javascript, you would also need to import the function.",
"username": "Yilmaz_Durmaz"
}
]
| Express script not working | 2022-07-06T18:00:16.472Z | Express script not working | 1,976 |
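A possible complete version of the handler with the suggested ObjectId conversion applied, assuming the official Node.js driver and the same router/collection setup as the original snippet (with the official driver, ObjectId can be imported from "mongodb" directly). If users is a Mongoose model instead, the string id is usually cast automatically, but an invalid string still raises a CastError.

```javascript
// Sketch of the corrected route handler (not the poster's exact setup).
const { ObjectId } = require("mongodb");

router.get("/", (req, res, next) => {
  let _id;
  try {
    _id = new ObjectId(req.query._id); // throws if the string is not a valid 24-hex id
  } catch (e) {
    return res.status(400).json({ error: "invalid _id" });
  }

  users
    .find({ _id }, { projection: { perms: 1, lastName: 1 } })
    .toArray()
    .then((data) => res.json(data))
    .catch(next);
});
```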
null | [
"queries",
"atlas-search",
"text-search"
]
| [
{
"code": "exports = async function(request){\n return await db.collection\n .find({ $text: { $search: \"Pink Panther\" } })\n .sort( { \"Published\": -1, \"_id\": 1 } )\n .limit(10)\n .toArray();\n}",
"text": "Greetings. I’m looking to text search for multiple phrases (name variations) using OR conditions. I’m using a Realm function in Atlas; the collection has and index of the two fields I want to search. This forum post appears to offer a solution, but I can’t get @Doug_Tarr code to work in my function. Below is my current function; what I’d like is for the search term “Pink Panther” to be an array of OR variations ([“Pink Panther”, “Pinkerton J. Panther”, “Pinky Pants”, etc.]). Thanks!exports = async function(request){\n return await db.collection\n .find({ $text: { $search: \"Pink Panther\" } })\n .sort( { \"Published\": -1, \"_id\": 1 } )\n .limit(10)\n .toArray();\n}",
"username": "Jet_Hin"
},
{
"code": "",
"text": "Can you share your index definition?It looks like in your sample code you’re using $text indexes (linked here), but in order to take advantage of Doug’s linked example, you would need to use Atlas Search and create an index on the fields you want to search on. (Then use compound to search the multiple fields)Also this blog outlines the differences between $text and Atlas Search. Sounds like Atlas Search is better suited.",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "Thanks Elle. I’m still learning MongoDB, but I mostly get what you’re saying. Does the screenshot below from Compass help explain the index I have in my collection? I’ll see if I can wrap my head around the compound examples. (I couldn’t get any of this forum thread’s code to work either.) Also, should the path variable contain the individual fields I want to search, or the name of the text index (which contains those same fields, “article+snippet” in my collection)?\nScreen Shot 2022-06-30 at 3.29.15 PM2332×438 46.8 KB\n",
"username": "Jet_Hin"
},
{
"code": "",
"text": "Hi Jet_Hin!I can totally relate to this issue, I experienced the same! I recommend creating an Atlas Search index using the JSON editor in the Atlas UI instead of utilizing Compass (it’s much easier). This can be done byClicking the Create Search Index button after creating your cluster\n\nScreen Shot 2022-07-01 at 10.22.11 AM1372×546 199 KB\nThen choosing your editor\n\nScreen Shot 2022-07-01 at 10.13.44 AM1920×921 77.5 KB\nHere is an example of how an index can look like\n\nScreen Shot 2022-07-01 at 10.15.54 AM1920×665 53.9 KB\nThis tutorial helped me a lot when it came to understanding how to create an index and gives an in depth walk through. After successfully creating the Search index, try following through the documents Elle shared and then trying Doug’s code.",
"username": "Humayara_Karim"
},
{
"code": ".aggregate([{\n\t$search: {\n\t\tphrase: {\n\t\t\tquery: [\"phrase one\", \"phrase two\"],\n\t\t\tpath: [\"title\", \"description\"]\n\t\t}\n\t}\n}])\n",
"text": "Thanks @Elle_Shwer and @Humayara_Karim. I was able to create an index (still not sure exactly what the indexing in Compass does) and I believe this simple aggregation pipeline (a confusing term IMO for search parameters) did the trick. Here it is for posterity:",
"username": "Jet_Hin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Multiple OR Phrases Text Search in Atlas / Realm | 2022-06-30T19:14:32.038Z | Multiple OR Phrases Text Search in Atlas / Realm | 4,013 |
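The same multi-phrase OR can also be written with the compound operator's should clauses (as suggested earlier in the thread), which is useful when each variation needs its own options or score boost; the index name "default" and the field paths below are assumptions, not values from the thread.

```javascript
// OR of several phrase variations across two fields, spelled out with compound.should.
db.collection.aggregate([
  {
    $search: {
      index: "default", // assumed index name
      compound: {
        should: [
          { phrase: { query: "Pink Panther", path: ["title", "description"] } },
          { phrase: { query: "Pinkerton J. Panther", path: ["title", "description"] } },
          { phrase: { query: "Pinky Pants", path: ["title", "description"] } }
        ],
        minimumShouldMatch: 1 // at least one variation must match
      }
    }
  },
  { $limit: 10 }
]);
```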
null | [
"crud",
"golang"
]
| [
{
"code": "//find by id\n\nvar response bson.M\n\terr := r.db.FindOne(ctx, bson.M{\"_id\": id}).Decode(&response)\n\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &response, nil\n//output (json response via http)\n{\n \"_id\": {\n \"Subtype\": 0,\n \"Data\": \"d1rWV3UZSNu/s6ii14EFsA==\"\n },\n \"email\": \"[email protected]\",\n \"name\": \"name2\",\n \"username\": \"username\"\n}\n//expected output\n{\n \"_id\": \"513775c1-e34b-4ead-a886-03157f650336\"\n \"email\": \"[email protected]\",\n \"name\": \"john\",\n \"username\": \"johndoe\"\n}\n\n",
"text": "I’m storing data with UUID as id format. It’s represented as binary in MongoDB.\nMy problem is, that while I decode a result from the DB, the id field is not decoding as UUID.\nI’m not using structs here, if I use them, there is no problem, it decodes well.In this scenario, I have generic repository(with interfaces, not with actual generics future)\nSo, we are decoding to bson objectsGo code;",
"username": "dante_karakaya"
},
{
"code": "type Binary struct {\n\tSubtype byte\n\tData []byte\n}\nprimitive.Binaryprimitive.Binaryprimitive.Binary\"_id\"uuid.UUIDresID := response[\"_id\"]\nb, ok := resID.(primitive.Binary)\nif !ok {\n\treturn nil,\n\t\tfmt.Errorf(\"expected response field _id to be type primitive.Binary, but is type %T\", id)\n}\nuid, err := uuid.FromBytes(b.Data)\nif err != nil {\n\treturn nil, err\n}\nstring",
"text": "@dante_karakaya that’s a great question! The reason the response format is different is that the UUID values are converted to the BSON “binary” type when inserted into MongoDB, then the Go Driver unmarshals those binary values into a primitive.Binary, which has the structure you’re seeing in the JSON output:While the BSON specification and MongoDB support a UUID-specific binary subtype, the BSON library in the Go Driver currently doesn’t have any special support for that UUID-specific binary subtype, so you just end up with a primitive.Binary value. If you want to use that primitive.Binary value as the original UUID type, you have to convert it.For example, assuming you’re using the github.com/google/uuid UUID library, you could convert the primitive.Binary value in the \"_id\" field to a uuid.UUID like this:A potentially simpler alternative would be to convert the UUID values to strings when you insert them, and do the same when querying for them.P.S. The Go Driver team is investigating improving usability of UUID values. Check out GODRIVER-2484 and leave any comments you have!",
"username": "Matt_Dale"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Decode UUID(binary format) to bson/interface{} | 2022-05-24T07:45:38.481Z | Decode UUID(binary format) to bson/interface{} | 6,640 |
null | []
| [
{
"code": "",
"text": "I’m trying to delete a project in Mongo Atlas. States can’t be deleted until all endpoint services are terminated. How do I do this as I do not see any endpoint services configured.",
"username": "James_Lawrence"
},
{
"code": "",
"text": "hI @James_Lawrence and welcome in the MongoDB Community !Do you have some applications running in the App Services tab? Sounds like it’s referring to some HTTPS endpoints or GraphQL API deployed.You’ll also have to terminate any running cluster in this project.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| Trying to delete Atlas Project | 2022-07-06T13:55:40.617Z | Trying to delete Atlas Project | 1,347 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n {\n \"location_1\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 10,\n 20\n ]\n },\n \"location_2\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 10,\n 30\n ]\n }\n }\n]\nlocation_1location_2",
"text": "If we know longitude and latitude of the geolocation points, how we can calculate the distance between them?For example, image this input document:How can we calculate the distance between location_1 and location_2?",
"username": "NeNaD"
},
{
"code": "distance = √ ((x2-x1)² + (y2-y1)²)> db.coll.findOne()\n{\n _id: ObjectId(\"62c43a4128c8b27fda436536\"),\n location_1: { type: 'Point', coordinates: [ 10, 20 ] },\n location_2: { type: 'Point', coordinates: [ 10, 30 ] }\n}\n[\n {\n '$addFields': {\n 'distance': {\n '$sqrt': {\n '$add': [\n {\n '$pow': [\n {\n '$subtract': [\n {\n '$arrayElemAt': [\n '$location_2.coordinates', 0\n ]\n }, {\n '$arrayElemAt': [\n '$location_1.coordinates', 0\n ]\n }\n ]\n }, 2\n ]\n }, {\n '$pow': [\n {\n '$subtract': [\n {\n '$arrayElemAt': [\n '$location_2.coordinates', 1\n ]\n }, {\n '$arrayElemAt': [\n '$location_1.coordinates', 1\n ]\n }\n ]\n }, 2\n ]\n }\n ]\n }\n }\n }\n }\n]\n[\n {\n _id: ObjectId(\"62c43a4128c8b27fda436536\"),\n location_1: { type: 'Point', coordinates: [ 10, 20 ] },\n location_2: { type: 'Point', coordinates: [ 10, 30 ] },\n distance: 10\n }\n]\n",
"text": "Hi @NeNaD,Would the distance calculated with distance = √ ((x2-x1)² + (y2-y1)²) work for you?Aggregation:Result:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi @MaBeuLux88,Thanks for the response! Actually, it should be calculated like in this SO answer here.I just though that maybe I can use some built-in operator for that. Can you tell me where I can submit a proposal for new operators?",
"username": "NeNaD"
},
{
"code": "",
"text": "I was actually afraid you would say something like that! Well this calculation can also be done as their is also $cos, $sin, etc in the trigonometry operators. But my formula is probably good enough if you have all your points within the same city for example.If this operator exists, I never heard of it.Feedback & improvements are in this direction though and I like the idea! I would have also liked to find the “$distance” one that I had to “implement” in here…https://feedback.mongodb.com/Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Yeah, it would be great to have both of these built-in with some operator. Let me submit both suggestions! ",
"username": "NeNaD"
},
{
"code": "",
"text": "Submitted. Here is the suggestion, so everyone that would love to see these operators built-in can go and upvote the suggestion.",
"username": "NeNaD"
},
{
"code": "",
"text": "Looks like it’s already in the backlog actually:https://jira.mongodb.org/browse/SERVER-2990I added my vote on it!",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to calculate distance between two geolocation points? | 2022-07-04T09:53:32.990Z | How to calculate distance between two geolocation points? | 8,057 |
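For completeness, a hedged sketch of the great-circle (haversine) variant referred to above, built from the trigonometric aggregation operators; it assumes GeoJSON [longitude, latitude] ordering, MongoDB 4.2+ for $degreesToRadians and $asin, and a mean Earth radius of 6371 km.

```javascript
// Haversine distance in kilometres between location_1 and location_2.
db.coll.aggregate([
  {
    $addFields: {
      distanceKm: {
        $let: {
          vars: {
            lon1: { $degreesToRadians: { $arrayElemAt: ["$location_1.coordinates", 0] } },
            lat1: { $degreesToRadians: { $arrayElemAt: ["$location_1.coordinates", 1] } },
            lon2: { $degreesToRadians: { $arrayElemAt: ["$location_2.coordinates", 0] } },
            lat2: { $degreesToRadians: { $arrayElemAt: ["$location_2.coordinates", 1] } }
          },
          in: {
            // d = 2R * asin( sqrt( sin^2(dLat/2) + cos(lat1)*cos(lat2)*sin^2(dLon/2) ) )
            $multiply: [
              6371, // mean Earth radius in km (assumption)
              2,
              {
                $asin: {
                  $sqrt: {
                    $add: [
                      { $pow: [{ $sin: { $divide: [{ $subtract: ["$$lat2", "$$lat1"] }, 2] } }, 2] },
                      {
                        $multiply: [
                          { $cos: "$$lat1" },
                          { $cos: "$$lat2" },
                          { $pow: [{ $sin: { $divide: [{ $subtract: ["$$lon2", "$$lon1"] }, 2] } }, 2] }
                        ]
                      }
                    ]
                  }
                }
              }
            ]
          }
        }
      }
    }
  }
]);
```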
null | [
"security"
]
| [
{
"code": "",
"text": "HI, how many data base administrators we can create? (once we install mongo and start with mongo, it coles to the command line, the type use admin. And the I found that we can create as many as admin users here)\nOnce you know the server root un and pw, you can create many db admins?",
"username": "Rajitha_Hewabandula"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Rajitha_Hewabandula !For self-hosted MongoDB deployments there is no limit on the number of users you can create, but typically you would not want to have 100s of admin users with direct access to a deployment. For a general reference for the MongoDB server please refer to MongoDB Limits and Thresholds.MongoDB Atlas has some Organisation and Project Limits including 100 database users and 500 Atlas users per project, but if you have additional requirements you could contact support. For related discussion, please see The 100 User Limit in Atlas - #5 by Stennie.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Many thanks for the swift response, Stennie.\nI cant see any info. about how many database admins we can create in a production server. ( we are planning to move our mysql dbs to mongo very soon.). whats the maximum limit?",
"username": "Rajitha_Hewabandula"
},
{
"code": "",
"text": "I cant see any info. about how many database admins we can create in a production server. ( we are planning to move our mysql dbs to mongo very soon.). whats the maximum limit?Hi @Rajitha_Hewabandula,The limitations are as per my earlier response: no limit for for self-hosted deployments, some documented limits for Atlas (which is a managed cloud service).The MongoDB Limits and Thresholds documentation I shared is a general reference for the MongoDB server, but there’s no mention around users because these are not limited by the server. I believe this is similar to self-hosted MySQL. Limits on number of admin or database users might be imposed by managed services.Out of curiosity, how many admin users do you anticipate needing?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "hi @Stennie_X more than the number of admin users, my concern is, if the server root password is compromised, can a hacker create admin accounts (dbAdminAnyDatabases) and access databases?",
"username": "Rajitha_Hewabandula"
},
{
"code": "",
"text": "If the server root password is compromised that user could create many user admins. There is no limit and any limit you could impose could be overridden by the root user.",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "if the server root password is compromised, can a hacker create admin accounts (dbAdminAnyDatabases) and access databases?Hi @Rajitha_Hewabandula,This question really isn’t specific to MongoDB: if an adversary is able to gain access to an account with escalated privileges they will have whatever access a trusted user with the same credentials has.For example, if you are referring to root access for a server instance hosting a database deployment someone could start/stop/reconfigure services. There are many security best practices and tools to help reduce your exposure and proactively detect intrusion attempts.For a list of security measures to consider for a MongoDB deployment, please review the MongoDB Security Checklist.For guidance on securing your own environment including cloud instances, O/S, and other aspects you would have to consult with the relevant documentation for your infrastructure and tech stack. There are many security-focused sites like OWASP (the Open Web Application Security Project) that provide helpful tools and resources such as Secure Design Principles to follow.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How many database admins can create? | 2022-07-05T08:45:03.037Z | How many database admins can create? | 4,059 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "Topic.aggregate([\n\n { $match: { _id: new ObjectId(req.params.id) } },\n\n {\n\n $lookup: {\n\n from: \"mcqs\",\n\n localField: \"_id\",\n\n foreignField: \"topicId\",\n\n as: \"topics_mcqs_info\"\n\n },\n\n },\n\n {\n\n $lookup: {\n\n from: \"true_falses\",\n\n localField: \"_id\",\n\n foreignField: \"topicId\",\n\n as: \"topics_trueFalse_info\"\n\n }\n\n },\n ])\n{\n \"_id\": \"629f451869b9778bdd7f4b16\",\n \"mcqs\": \"Hello mcqs\",\n \"option1\": \"a\",\n \"option2\": \"b\",\n \"option3\": \"c\",\n \"option4\": \"d\",\n \"answer\": \"option2\",\n \"sequence\": 1,\n \"topicId\": \"629f44f969b9778bdd7f4b10\"\n }\n",
"text": "0In my MongoDB, I have these documentsI am getting the result now I want to sort the questions by their sequence number.\nThis my question json object. How can I sort questions? As I am getting first all the mcqs. then true false.",
"username": "Naila_Nosheen"
},
{
"code": "topics_mcqs_infodb.orders.aggregate( [\n {\n $lookup: {\n from: \"mcqs\",\n localField: \"_id\",\n foreignField: \"topicId\",\n pipeline: [ {\n $sort: {\n sequenceNumber: 1\n }\n } ],\n as: \"topics_mcqs_info\"\n }\n }, \n { the other similar lookup stage here }\n] )\n",
"text": "Hi @Naila_Nosheen and welcome in the MongoDB Community !You have the wrong way to do it and the right way to do it. \nThe wrong way would be to use $unwind to break down the array (I think topics_mcqs_info here) and rebuild the question array with a $group using $push and with a $sort before that group stage to sort the docs in the order you want.This would work but add a lot of useless processing (=breaking down the array and rebuilding it).The right solution is to use the other format of $lookup, the one that is using a subpipeline, so you can actually sort the docs directly in there and build the array of questions already sorted.You pipeline will probably look like this:I think this should work. Please provide a few sample docs if you can’t figure it out so I can test it on my side.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "[\n {\n \"_id\": \"62a4769c989f1ace846eaf35\",\n \"topic\": \"Plants\",\n \"language\": \"English\",\n \"grade\": \"ELEMENTARY SCHOOL Grade 1\",\n \"noOfQuestions\": \"3\",\n \"__v\": 0,\n \"topics_mcqs_info\": [\n {\n \"_id\": \"62a476f7989f1ace846eaf44\",\n \"mcqs\": \"I am MCQs.\",\n \"option1\": \"option1\",\n \"option2\": \"option2\",\n \"option3\": \"option3\",\n \"option4\": \"option4\",\n \"answer\": \"option2\",\n \"sequence\": 3,\n \"topicId\": \"62a4769c989f1ace846eaf35\",\n \"__v\": 0\n }\n ],\n \"topics_trueFalse_info\": [\n {\n \"_id\": \"62a476b9989f1ace846eaf3b\",\n \"question\": \"I am true false.\",\n \"answer\": \"true\",\n \"sequence\": 1,\n \"topicId\": \"62a4769c989f1ace846eaf35\",\n \"__v\": 0\n }\n ],\n \"topics_openEnded_info\": [\n {\n \"_id\": \"62a476cb989f1ace846eaf3f\",\n \"question\": \"I am short question.\",\n \"sequence\": 2,\n \"topicId\": \"62a4769c989f1ace846eaf35\",\n \"__v\": 0\n }\n ]\n }\n]\nconst getQuestionsByTopicId = function (req, res) {\n Topic.aggregate([\n { $match: { _id: new ObjectId(req.params.id) } },\n {\n $lookup: {\n from: \"mcqs\",\n localField: \"_id\",\n foreignField: \"topicId\",\n pipeline: [ {\n $sort: {\n sequence: 1\n }\n } ],\n as: \"topics_mcqs_info\"\n },\n },\n {\n $lookup: {\n from: \"true_falses\",\n localField: \"_id\",\n foreignField: \"topicId\",\n pipeline: [ {\n $sort: {\n sequence: 1\n }\n } ],\n as: \"topics_trueFalse_info\"\n }\n },\n {\n $lookup: {\n from: \"open_endeds\",\n localField: \"_id\",\n foreignField: \"topicId\",\n pipeline: [ {\n $sort: {\n sequence: 1\n }\n } ],\n as: \"topics_openEnded_info\"\n }\n }\n ])\n .then((result) => {\n res.status(200).send(result)\n console.log(result);\n })\n .catch((error) => {\n console.log(error);\n });\n}\n",
"text": "Hello @MaBeuLux88 I have tried the pipeline way. But the questions are not sorted. this JSON object I am getting.As In my code the mcqs lookup is first so it is returning me mcqs first regardless of the sequence.This is my code:Can you please see this for me? As I am noticing it has sorted the array in lookups, but is there a way to sort it that way to get the output I want?",
"username": "Naila_Nosheen"
},
{
"code": "sequence",
"text": "Hi @Naila_Nosheen,I don’t understand what isn’t sorted in your output. Each lookup adds a new array of documents that come from a different collection each time and these arrays are sorted by sequence in each individual array. It’s not what you want?Do you want to mix all these arrays in a single array where all the questions are mixed and sorted by sequence?Also please use markdown code blocks when you send code to ease the readability of your posts.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Sorry for the late reply but thank you so much. Your Solution helped me a lot. but is it possible to mix all the questions in one array and then sort them by sequence? ",
"username": "Naila_Nosheen"
},
{
"code": "[\n {\n '$addFields': {\n 'all': {\n '$sortArray': {\n 'input': {\n '$concatArrays': [\n '$topics_mcqs_info', '$topics_trueFalse_info', '$topics_openEnded_info'\n ]\n }, \n 'sortBy': {\n 'sequence': 1\n }\n }\n }\n }\n }, {\n '$project': {\n 'topics_mcqs_info': 0, \n 'topics_trueFalse_info': 0, \n 'topics_openEnded_info': 0\n }\n }\n]\n",
"text": "Hi @Naila_Nosheen,Then I would remove the sorts from the lookups (as they would be redundant) and I would add the following at the end of your pipeline:Note that $sortArray is new in 5.2.If 5.2 isn’t yet possible for you, then you have an alternative but it’s a little bit unpleasant. Read this for more details.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thank you so much @MaBeuLux88. You saved my day.",
"username": "Naila_Nosheen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How can i sort data coming from multiple tables joined by lookup | 2022-06-07T14:57:56.035Z | How can i sort data coming from multiple tables joined by lookup | 4,040 |
null | []
| [
{
"code": "",
"text": "Hey MongoDB Team & Community,\nI am looking up for some info - “is mongo the best database for communications app?”, whether one wants to build instant chat app or email server or video meeting apps or even a TOTP app.Anybody has any authentic articles or use case on why one should use mongo than any other database (mainly comparisons between redis and oracle 19c or 21c with every data supported)",
"username": "Prnam"
},
{
"code": "",
"text": "MongoDB is used for many chat applications. The primary benefits associated with the document model enable you to seamlessly extend your application as you build functionality. I think it’s hard to say that one database or another is the BEST without understanding more about the application but rest assured that MongoDB has been the choice for many popular chat applications. I’m curious about what you’re building… and what other requirements you may have. The thing I like about MongoDB is that it’s a platform… the underlying database is easy to use in whatever language you’re coding… it’s idiomatic… i.e. you simply use JavaScript to interact with MongoDB using Objects when you’re coding in JavaScript, Pythonic expressions in Python, etc.I wrote an article describing how to leverage Realm (formerly called Stitch) to integrate directly with a very popular chat platform, Slack. This may help you in your architecture… leveraging an API to manage events occurring in your application is a great way to centralize logic. You may also want to consider leveraging Triggers as part of managing interactions.From zero to bot in ten minutes with MongoDB StitchHope this helps.",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB is best choice of DB for communication app? | 2021-07-20T18:40:26.428Z | MongoDB is best choice of DB for communication app? | 13,724 |
null | [
"api"
]
| [
{
"code": "",
"text": "Do you plan on adding the following capabilities to the API?I could do that within my application, like a one time “provisioning” script, but being able to use a consistent devops tool such as Terraform to do the provisioning would really be a big plus.",
"username": "Arthur_Rio"
},
{
"code": "",
"text": "I think it is all there. You just have to look at the documentation:For example, you can find, https://docs.mongodb.com/manual/reference/method/db.dropDatabase/ inAnd you can find https://docs.mongodb.com/manual/reference/method/db.collection.createIndex/ in",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for your reply but it’s not what I’m looking for. As I stated in my initial message, I don’t want to do it from the CLI, I would like to be able to define my databases, collections and indexes right from terraform, at the same time I’m provisioning the cluster. But for that to be possible, we would need the capabilities to be exposed via the mongodb atlas API (https://docs.atlas.mongodb.com/api/) and create new terraform resources to interact with it.",
"username": "Arthur_Rio"
},
{
"code": "",
"text": "Okay, now I see. Sorry for the diversion.",
"username": "steevej"
},
{
"code": "",
"text": "Solved or bypassed the impasse? If yes, how did you do it?",
"username": "Wilian_Oliveira"
}
]
| Database, collection and index provisioning | 2021-05-17T22:11:19.424Z | Database, collection and index provisioning | 4,607 |
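Editor's note on the thread above: until such resources exist in the Atlas Admin API / Terraform provider, the "one time provisioning script" the poster mentions is typically a short script run against the cluster with mongosh or a driver. A minimal mongosh sketch follows; the database, collection and index names are invented for illustration.

    // Hypothetical one-time provisioning script (run with mongosh against the cluster).
    const appDb = db.getSiblingDB("app");                     // example database name
    if (!appDb.getCollectionNames().includes("users")) {
      appDb.createCollection("users");                        // explicit create lets you attach options later
    }
    appDb.users.createIndex({ email: 1 }, { unique: true });  // createIndex is a no-op if an identical index exists
    appDb.orders.createIndex({ userId: 1, createdAt: -1 });   // creating an index also creates the collection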
null | []
| [
{
"code": "",
"text": "Hi,im working on a PyGame. My problem is, how to check if a user exists or not in my collection.\nI mean, how can i check if a variable is equal to my key:value.\nSomething like this:user: [\n{\n‘User’ : ‘Tim’\n‘Password’:‘password’\n}\n]if (‘user’:‘Tim’) is equal ‘Tim’:\nreturnthx",
"username": "Senel_Ekiz"
},
{
"code": "collection.find_oneresult = collection.find_one( { 'user': 'Tim' } )valueNoneresultfind_one",
"text": "Hello @Senel_Ekiz, welcome to the MongoDB Community forum!To access MongoDB database and the data stored in its collections, you can use the PyMongo driver software. The APIs of the driver allow you to connect to the database and query the data from your Python program.You can use the collection.find_one method for checking the value of a key. For example:result = collection.find_one( { 'user': 'Tim' } )The value of the result will be None in case no match is found. In case a match is found the value of the result will be the document.Here is the PyMongo tutorial for using the find_one:https://pymongo.readthedocs.io/en/stable/tutorial.html#getting-a-single-document-with-find-one",
"username": "Prasad_Saya"
},
{
"code": "def check_user():\n result = user.find_one({'User':'dsada'})\n if result == model.User.user_input:\n print(\"user exist\")\n else:\n print(\"user dont exist\")\n",
"text": "Thx for you answer.But, how can i check, if the only the value(‘Tim’) exists?My Code:My ‘user_input’ is the texfield in my Menue.",
"username": "Senel_Ekiz"
},
{
"code": "",
"text": "Additional:i want to iterate over every collection in my Database, to check if any ‘User’ Key have a value from my Input.",
"username": "Senel_Ekiz"
},
{
"code": "result = user.find_one({'User':'dsada'})# Assuming the field user_input has a value, for example, \"Tim\"\n# And the field User represents the name of the user\nresult = user.find_one( { 'User': user_input } )\n\nif result:\n print(\"user exists\")\nelse:\n print(\"user not found\")\n",
"text": "result = user.find_one({'User':'dsada'})I think you can use this:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Perfect. Its work. Thank you.",
"username": "Senel_Ekiz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Check the value of a key | 2022-07-05T23:30:29.945Z | Check the value of a key | 4,642 |
null | []
| [
{
"code": "",
"text": "i’m working with oplog to help retrieve changes that have been made to collections in my database. but i noticed the update operation doesnt return full document. and i cant find anything online to help with this problem . please help out",
"username": "Nelson_Ogbeide"
},
{
"code": "",
"text": "You are not supposed to use the oplog to do that. The internal format of the oplog may change.You are supposed to use change stream.",
"username": "steevej"
},
{
"code": "",
"text": "So bascially oplog cant return fullDocument ?",
"username": "Nelson_Ogbeide"
},
{
"code": "",
"text": "i was really looking at using oplog for my batch system. since change stream is more synonymous streaming pipeline .",
"username": "Nelson_Ogbeide"
}
]
| Oplog update not returning full document | 2022-07-05T13:18:44.032Z | Oplog update not returning full document | 1,504 |
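Editor's note on the thread above: the answer points to change streams; with the fullDocument: "updateLookup" option the server returns the current version of the document for update events, which is what the poster was missing in raw oplog entries. A minimal Node.js sketch, with a placeholder URI and namespace:

    const { MongoClient } = require("mongodb");

    async function watchChanges() {
      const client = await MongoClient.connect("mongodb://localhost:27017"); // placeholder URI
      const coll = client.db("test").collection("orders");                   // placeholder namespace
      const stream = coll.watch([], { fullDocument: "updateLookup" });
      for await (const change of stream) {
        // For updates, change.fullDocument holds the document as it looks after the update.
        console.log(change.operationType, change.fullDocument);
      }
    }
    watchChanges().catch(console.error);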
null | [
"aggregation",
"queries"
]
| [
{
"code": "[\n {\n \"device\": 1,\n \"date\": ISODate(\"2022-7-10\")\n },\n {\n \"device\": 2,\n \"date\": ISODate(\"2022-8-10\")\n },\n {\n \"device\": 1,\n \"date\": ISODate(\"2022-9-11\")\n },\n {\n \"device\": 3,\n \"date\": ISODate(\"2022-10-11\")\n }\n]\n[\n {\n \"_id\": {\n year: 2022,\n month: 7\n },\n count: 1 // device 1\n },\n {\n \"_id\": {\n year: 2022,\n month: 8\n },\n count: 2 // device 1 and 2\n },\n {\n \"_id\": {\n year: 2022,\n month: 9\n },\n count: 2 // device 1 and 2\n },\n {\n \"_id\": {\n year: 2022,\n month: 10\n },\n count: 3 // device 1, 2 and 3 \n }\n]\n",
"text": "As shown below, documents in the collection look like this.\nThe aggregation I need should count the number of devices monthly, the device should start being counted from the first month it appears, and from that month until the current month, independent of whether or not it appears again.The output would be:",
"username": "pseudo.charles"
},
{
"code": "start_month = \nend_month = \nstart_date = 1st day of start_month\nend_date = last day of last_month\nall_months = [ array of months from start to current month ]\npipeline = [\n { \"$match\" : {\n \"date\" : { \"$gte\" : start_date , \"$lte\" : end_date }\n } } ,\n { \"$sort\" : {\n \"date\" : 1\n } } ,\n { \"$group\" : {\n \"_id\" : \"$device\" ,\n \"first_occurrence\" : { \"$first\" : \"$date\" }\n } } ,\n { \"$set\" : {\n \"filtered_months\" : {\n \"$filter\" : {\n \"input\" : all_months ,\n \"cond\" : { $gte: [ \"$$this.first_occurence\", start_month ] }\n }\n } } ,\n { \"$unwind\" : \"$filtered_months\" } ,\n { \"$group\" : {\n \"_id\" : { \"$filtered_month\" } ,\n \"count\" : { \"$sum\" : 1 } \n } }\n]\n",
"text": "The difficult part with your use-case is that you want to count absent data.To be able to count absent data usually involve some combination of $range, $map, $filter and $reduce to generate the holes in the data.UNTESTED AND LOTS OF DETAILS LEFT OUTBut personally, I would stop the aggregation after the $group with _id:$devices and would complete the data in the application back plane.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Aggregation count by period considering the past documents | 2022-07-02T18:23:34.909Z | Aggregation count by period considering the past documents | 1,147 |
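Editor's note on the thread above: if, as the answer suggests, you stop the aggregation after the $group on device (keeping each device's first month), the cumulative per-month counts can be filled in by the application. A rough Node.js sketch, assuming the pipeline already returned documents shaped like { _id: <device>, first_occurrence: <Date> }:

    // Build cumulative device counts for every month in the range, including months
    // in which no new device appeared.
    function cumulativeCounts(devices, startYear, startMonth, endYear, endMonth) {
      const result = [];
      let running = 0;
      for (let y = startYear, m = startMonth; y < endYear || (y === endYear && m <= endMonth); ) {
        running += devices.filter(
          (d) => d.first_occurrence.getFullYear() === y && d.first_occurrence.getMonth() + 1 === m
        ).length;
        result.push({ _id: { year: y, month: m }, count: running });
        if (m === 12) { y++; m = 1; } else { m++; }
      }
      return result;
    }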
null | [
"aggregation",
"node-js"
]
| [
{
"code": "{\n \"name\": \"left-handed smoke shifter\",\n \"manufacturer\": \"Acme Corp\",\n \"catalog_number\": \"1234\",\n \"parts\": [\"ObjectID('AAAA')\", \"ObjectID('BBBB')\", \"ObjectID('CCCC')\"]\n}\n",
"text": "Hi guys, I’m quite new to MongoDB. I read a blog post about best practices in designing schema in MongoDB. One way to handle one-to-many relationship is to reference it and this is the example from the blog post.I checked out the $lookup documentation but I couldn’t get that to work with an array of references. Would really appreciate if someone could help me.",
"username": "Theodorus_Andi_Gunawan"
},
{
"code": "localField:\"parts\"foreignField:\"_id\"parts\"ObjectID('AAA')\"\"ObjectID('AAA')\"'AAA'\"_id\": \"ObjectId('AAA')\"\"_id\": ObjectID(\"AAAB\")",
"text": "you may have confused local and foreign fields.try localField:\"parts\" and foreignField:\"_id\" and see if it resolves correct.if not, from your given data, it is possible your parts array does not hold actual IDs but just some strings that look like some ID. \"ObjectID('AAA')\" is a string as is it surrounded by double-quotes.in this second case solution of this post may help. $lookup foreignField is ObjectId - Working with Data - MongoDB Developer Community Forums . Be careful though, if your IDs are stored as this quoted string \"ObjectID('AAA')\" you need to extract 'AAA' from them.PS: I believe the author of that blog post has typos for some reason. He uses \"_id\": \"ObjectId('AAA')\" for a while and then \"_id\": ObjectID(\"AAAB\"). note the double quotes around ObjectId function. The first one is just a string but the second is the representation of an Id value.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "If you like to watch video courses along with written ones, you may want to watch this MongoDB Schema Design Best Practices video from that very same author on that very same topic explained in the blog post.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Please share documents from bothe collections.Please share what you have try with $lookup. Since it should work we can help you better finding what is wrong if we see what you did.Also indicate how it fails to provide the desired results.",
"username": "steevej"
},
{
"code": " const parts = await productCollection.aggregate([\n {\n $match: {\n _id: ObjectId(productId),\n },\n $lookup: {\n from: \"parts\",\n localField: \"parts\",\n foreignField: \"_id\",\n as: \"parts\",\n },\n },\n ]);\n return parts.toArray();\n",
"text": ":\"partsThat did not workMongoServerError: A pipeline stage specification object must contain exactly one field.",
"username": "Theodorus_Andi_Gunawan"
},
{
"code": "",
"text": "How do I edit the post?",
"username": "Theodorus_Andi_Gunawan"
},
{
"code": "",
"text": "there is a pen image near the bottom of each post, along with like/link/reply, visible to the author of a post and replies.",
"username": "Yilmaz_Durmaz"
},
{
"code": "[]{}$lookup$match[ \n{ $match: {...} },\n{ $lookup: {...} }\n]\nObjectID('AAAA')\"ObjectID('AAAA')\"",
"text": "That did not workyou need to be careful with opening and closing of curly and square brackets. square brackets [] are for arrays, curlies {} are for objects. your $lookup is inside the brackets of $match hence the error “must contain one field”each pipeline is a separate object, so it should look like this:PS: do not forget ObjectID('AAAA') is an object ID, but \"ObjectID('AAAA')\" is not. the second is in double quotes making it a string.",
"username": "Yilmaz_Durmaz"
},
{
"code": "_id_id\"ObjectID('AAAA')\"$lookupas:\"parts\"",
"text": "I have made a mistake. I have forgotten a feature of mongodb “ObjectID(‘AAAA’)” is not normally an object id in the sense of an id mongodb uses; it is a value represented by 24 char hexadecimal value such as “5d505646cf6d4fe581014ab2”. but you can still overwrite _id field with something else like a number or a string.if take those 2 data from the blog post and put them into a test database, 1 for products and 1 for parts, you will see the product will get assigned a proper _id but the part will have \"ObjectID('AAAA')\" as its id.however, the $lookup will still hold true and fill in the details of the part into the product as it also stores that string in the parts array.also, beware if you use as:\"parts\", you will override the original parts array with this new filled-in details array.\nScreenshot (45)739×212 14.2 KB\n",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "thanks for your reply! its working now. To extend the question a little bit, lets say each part has its own parts, are those sub-parts usually nested?",
"username": "Theodorus_Andi_Gunawan"
},
{
"code": "",
"text": "To extend the question a little bit, lets say each part has its own parts, are those sub-parts usually nested?nesting levels depend on your database design mentioned in the blog post and video. You can continue relational or switch to embedded models.I suggest you take M320: Data Modeling in MongoDB University.All MDBU courses are free so feel free to check other courses to extend your knowledge.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| What is the best way to $lookup / "join" one-to-many collections with array of references in MongoDB? | 2022-07-02T15:23:37.868Z | What is the best way to $lookup / “join” one-to-many collections with array of references in MongoDB? | 5,294 |
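Editor's note on the thread above: for reference, the poster's pipeline with the bracket fix discussed in the replies (each stage as its own object inside the array) would look roughly like this; it assumes the parts array stores real ObjectIds rather than strings, as also discussed above.

    const parts = await productCollection.aggregate([
      { $match: { _id: ObjectId(productId) } },
      {
        $lookup: {
          from: "parts",
          localField: "parts",   // array of ObjectId references on the product
          foreignField: "_id",
          as: "parts",           // note: this overwrites the original array of ids
        },
      },
    ]).toArray();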
null | [
"aggregation"
]
| [
{
"code": "{\n \"type\": \"command\",\n \"ns\": \"smartpos-dev-env.product\",\n \"command\": {\n \"aggregate\": \"product\",\n \"pipeline\": [\n {\n \"$match\": {\n \"tenantId\": \"bbb60d4e-212f-445e-97a7-ddad13395931\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"category\",\n \"localField\": \"category\",\n \"foreignField\": \"_id\",\n \"as\": \"category\"\n }\n },\n {\n \"$unwind\": \"$category\"\n },\n {\n \"$project\": {\n \"legacyId\": \"$productCode\",\n \"id\": 1,\n \"codAlfa\": 1,\n \"description\": 1,\n \"saleValue\": 1,\n \"promotionalValue\": 1,\n \"promotionalDisplayTimer\": 1,\n \"promotionalExpirationDate\": 1,\n \"lastUpdate\": 1,\n \"detail\": 1,\n \"googleProductCategory\": 1,\n \"hasVariant\": 1,\n \"featuredPosition\": 1,\n \"categoryId\": \"$category._id\",\n \"categoryDescription\": \"$category.description\"\n }\n },\n {\n \"$sort\": {\n \"featuredPosition\": 1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 30\n }\n ],\n \"cursor\": {},\n \"allowDiskUse\": false,\n \"collation\": {\n \"locale\": \"pt\"\n },\n \"$db\": \"smartpos-dev-env\",\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1657059071,\n \"i\": 20\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"XKFGrZDq3g2cs9HUQaoigMg67z0=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 7116985341772300000\n }\n },\n \"lsid\": {\n \"id\": {\n \"$binary\": {\n \"base64\": \"56xHmGfTRCG9TAuLgbUhKg==\",\n \"subType\": \"04\"\n }\n }\n }\n },\n \"planSummary\": \"COLLSCAN\",\n \"keysExamined\": 7,\n \"docsExamined\": 455735,\n \"hasSortStage\": true,\n \"cursorExhausted\": true,\n \"numYields\": 455,\n \"nreturned\": 8,\n \"queryHash\": \"21B91BC1\",\n \"planCacheKey\": \"F49AA791\",\n \"reslen\": 2575,\n \"locks\": {\n \"Global\": {\n \"acquireCount\": {\n \"r\": 473\n }\n },\n \"Mutex\": {\n \"acquireCount\": {\n \"r\": 18\n }\n }\n },\n \"readConcern\": {\n \"level\": \"local\",\n \"provenance\": \"implicitDefault\"\n },\n \"writeConcern\": {\n \"w\": \"majority\",\n \"wtimeout\": 0,\n \"provenance\": \"implicitDefault\"\n },\n \"storage\": {},\n \"remote\": \"54.211.177.188:35965\",\n \"protocol\": \"op_msg\",\n \"durationMillis\": 301,\n \"v\": \"5.0.9\"\n}\n",
"text": "Hello there,I am developing an application that access a mongo database. We have created a collection with 455k documents for testing purposes. While developing, I realized that the performance was not that good (more than a second to perform the query). I know it sounds fast, but it doesn’t look good for future.So, I was using Mongo Atlas Profiler and there was where I saw a strange behaviour. The planSummary is COLLSCAN, and the docsExamined are 455735. I can believe with that info my entire collection is being scaned. Here comes the problem: the field that I am using inside match has an index and I really don’t know why it is not being used. I use an Atlas search in another field in this collection (I didn’t create it as the default).Below are the infos that Atlas profiler is showing to me. Sorry, I know it is big. Being more specific: the field tenantId in the first match in pipeline has an index. As I said, it is an aggregation, with some steps…",
"username": "Renan_Geraldo"
},
{
"code": "$match$searchcollation \"collation\": {\n \"locale\": \"pt\"\n },\ntenantId",
"text": "Hi @Renan_Geraldo,Actually, this behavior doesn’t seem to be due to the Atlas Search index configured on this collection. Since the first stage in the pipeline is a $match, and not a $search, the presence or absence of an Atlas Search index on this collection will make no difference.I suspect that the behavior that you are observing might be due to the collation being used in the query, which is:Does the index on tenantId have the same collation defined as the above? Note that an index with a collation cannot support an operation that performs string comparisons on the indexed fields if the operation specifies a different collation, as documented here. Also, the reverse is true - if the index does not have a collation defined then a query like the above that specifies a collation won’t be able to use that index, and will resort to a COLLSCAN.Please check if that’s the case here.Thanks,\nHarshad",
"username": "Harshad_Dhavale"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Why my aggregation is not using index? (Atlas Search configured) | 2022-07-05T22:28:01.322Z | Why my aggregation is not using index? (Atlas Search configured) | 2,476 |
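Editor's note on the thread above: following the explanation in the answer, one way to keep the { locale: "pt" } collation on the query and still avoid the collection scan is to create the index with the same collation. A minimal mongosh sketch using the collection and field from the thread (index options are illustrative):

    // Index whose collation matches the collation sent with the aggregation,
    // so the initial $match on tenantId can use it instead of a COLLSCAN.
    db.product.createIndex(
      { tenantId: 1 },
      { collation: { locale: "pt" } }
    )

The other direction also works: remove the collation from the aggregation so that an index built without that collation can be used.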
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": "// 1\n{\n \"_id\": ObjectId(\"62c238b03148c6cc6594dc9b\"),\n \"name\": \"首页\",\n \"icon_name\": \"HomeFilled\",\n \"router_name\": \"Home\",\n \"router_path\": \"/home\",\n \"role_group\": [\n \"admin\",\n \"common\"\n ],\n \"is_operate\": false,\n \"describe\": \"\",\n \"create_time\": ISODate(\"2022-07-04T00:47:44.977Z\"),\n \"children\": [ ],\n \"__v\": NumberInt(\"0\")\n}\n\n// 2\n{\n \"_id\": ObjectId(\"62c238b03148c6cc6594dc9c\"),\n \"name\": \"系统设置\",\n \"icon_name\": \"Setting\",\n \"router_name\": \"System\",\n \"router_path\": \"/system\",\n \"role_group\": [\n \"admin\"\n ],\n \"is_operate\": false,\n \"describe\": \"\",\n \"children\": [\n {\n \"name\": \"角色管理\",\n \"icon_name\": \"Postcard\",\n \"router_name\": \"Role\",\n \"router_path\": \"/system/role\",\n \"role_group\": [\n \"admin\"\n ],\n \"is_operate\": false,\n \"describe\": \"\",\n \"_id\": ObjectId(\"62c238b03148c6cc6594dc9d\"),\n \"create_time\": ISODate(\"2022-07-04T00:47:44.979Z\"),\n \"children\": [ ]\n },\n {\n \"name\": \"菜单管理\",\n \"icon_name\": \"Menu\",\n \"router_name\": \"Menu\",\n \"router_path\": \"/system/menu\",\n \"role_group\": [\n \"admin\"\n ],\n \"is_operate\": false,\n \"describe\": \"\",\n \"_id\": ObjectId(\"62c238b03148c6cc6594dc9e\"),\n \"create_time\": ISODate(\"2022-07-04T00:47:44.98Z\"),\n \"children\": [ ]\n },\n {\n \"name\": \"部门管理\",\n \"icon_name\": \"OfficeBuilding\",\n \"router_name\": \"Dept\",\n \"router_path\": \"/system/dept\",\n \"role_group\": [\n \"admin\"\n ],\n \"is_operate\": false,\n \"describe\": \"\",\n \"_id\": ObjectId(\"62c238b03148c6cc6594dc9f\"),\n \"create_time\": ISODate(\"2022-07-04T00:47:44.98Z\"),\n \"children\": [ ]\n },\n {\n \"name\": \"用户管理\",\n \"icon_name\": \"User\",\n \"router_name\": \"User\",\n \"router_path\": \"/system/user\",\n \"role_group\": [\n \"admin\"\n ],\n \"is_operate\": false,\n \"describe\": \"\",\n \"_id\": ObjectId(\"62c238b03148c6cc6594dca0\"),\n \"create_time\": ISODate(\"2022-07-04T00:47:44.98Z\"),\n \"children\": [ ]\n }\n ],\n \"create_time\": ISODate(\"2022-07-04T00:47:44.98Z\"),\n \"__v\": NumberInt(\"0\")\n}\n",
"text": "For example, if the ID “62c238b03148c6cc6594dc9e” is known, how to query the object data where the ID is located?The above is the data structure. Now it is two-level nesting. How to realize multi-level nested query?",
"username": "sy1215zuigao"
},
{
"code": "_iddb.coll.insertMany([{ \"_id\": ObjectId(\"62c238b03148c6cc6594dc9b\"), \"name\": \"首页\", \"icon_name\": \"HomeFilled\", \"router_name\": \"Home\", \"router_path\": \"/home\", \"role_group\": [ \"admin\", \"common\"], \"is_operate\": false, \"describe\": \"\", \"create_time\": ISODate(\"2022-07-04T00:47:44.977Z\"), \"children\": [], \"__v\": NumberInt(\"0\") }, { \"_id\": ObjectId(\"62c238b03148c6cc6594dc9c\"), \"name\": \"系统设置\", \"icon_name\": \"Setting\", \"router_name\": \"System\", \"router_path\": \"/system\", \"role_group\": [ \"admin\"], \"is_operate\": false, \"describe\": \"\", \"children\": [ { \"name\": \"角色管理\", \"icon_name\": \"Postcard\", \"router_name\": \"Role\", \"router_path\": \"/system/role\", \"role_group\": [ \"admin\"], \"is_operate\": false, \"describe\": \"\", \"_id\": ObjectId(\"62c238b03148c6cc6594dc9d\"), \"create_time\": ISODate(\"2022-07-04T00:47:44.979Z\"), \"children\": [] }, { \"name\": \"菜单管理\", \"icon_name\": \"Menu\", \"router_name\": \"Menu\", \"router_path\": \"/system/menu\", \"role_group\": [ \"admin\"], \"is_operate\": false, \"describe\": \"\", \"_id\": ObjectId(\"62c238b03148c6cc6594dc9e\"), \"create_time\": ISODate(\"2022-07-04T00:47:44.98Z\"), \"children\": [] }, { \"name\": \"部门管理\", \"icon_name\": \"OfficeBuilding\", \"router_name\": \"Dept\", \"router_path\": \"/system/dept\", \"role_group\": [ \"admin\"], \"is_operate\": false, \"describe\": \"\", \"_id\": ObjectId(\"62c238b03148c6cc6594dc9f\"), \"create_time\": ISODate(\"2022-07-04T00:47:44.98Z\"), \"children\": [] }, { \"name\": \"用户管理\", \"icon_name\": \"User\", \"router_name\": \"User\", \"router_path\": \"/system/user\", \"role_group\": [ \"admin\"], \"is_operate\": false, \"describe\": \"\", \"_id\": ObjectId(\"62c238b03148c6cc6594dca0\"), \"create_time\": ISODate(\"2022-07-04T00:47:44.98Z\"), \"children\": [] }], \"create_time\": ISODate(\"2022-07-04T00:47:44.98Z\"), \"__v\": NumberInt(\"0\") }])\n[\n {\n '$match': {\n 'children._id': new ObjectId('62c238b03148c6cc6594dc9e')\n }\n }, {\n '$project': {\n 'output': {\n '$filter': {\n 'input': '$children', \n 'as': 'item', \n 'cond': {\n '$eq': [\n '$$item._id', new ObjectId('62c238b03148c6cc6594dc9e')\n ]\n }\n }\n }\n }\n }\n]\n> db.coll.aggregate([ { '$match': { 'children._id': new ObjectId('62c238b03148c6cc6594dc9e') } }, { '$project': { 'output': { '$filter': { 'input': '$children', 'as': 'item', 'cond': { '$eq': [ '$$item._id', new ObjectId('62c238b03148c6cc6594dc9e')] } } } } }])\n[\n {\n _id: ObjectId(\"62c238b03148c6cc6594dc9c\"),\n output: [\n {\n name: '菜单管理',\n icon_name: 'Menu',\n router_name: 'Menu',\n router_path: '/system/menu',\n role_group: [ 'admin' ],\n is_operate: false,\n describe: '',\n _id: ObjectId(\"62c238b03148c6cc6594dc9e\"),\n create_time: ISODate(\"2022-07-04T00:47:44.980Z\"),\n children: []\n }\n ]\n }\n]\n",
"text": "Hi @sy1215zuigao and welcome in the MongoDB Community !I’m not sure what you are expecting in the output. If that’s not it, please provide the exact expected output you expect given the 2 docs in example. Also I’m not sure if the _id can appear in multiple docs or multiple times within the same doc. This would affect how you query as well depending what you want exactly.Here is my proposition:Insert docs:Pipeline:Result:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thank you for your reply. I have changed the structure of data and no longer use nested data. Thank you again for your reply!",
"username": "sy1215zuigao"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to query multi-level nested sub document objects when the sub document ID is known? | 2022-07-04T01:42:56.658Z | How to query multi-level nested sub document objects when the sub document ID is known? | 4,257 |
null | [
"queries",
"crud"
]
| [
{
"code": "",
"text": "I want to filter genetic variants that contain the ‘Amino Acid Change’: p.Ala161Thr’, and then $set a nested field for ‘Predicted cDNA’: ‘c.481G>A’ for arrays that contain ‘Variant (cDNA)’: ‘c.?’.I have tried multiple things with nothing working.db.LCFAOD.updateMany({“Variant.Amino Acid Change”:“p.Ala161Thr”},{$set: {“Variant.$[Predicted cDNA]”:“c.481G>A”}},{ arrayFilter: { “Variant.Variant (cDNA)”:“c.?” } })\nMongoServerError: No array filter found for identifier ‘Predicted cDNA’ in path ‘Variant.$[Predicted cDNA]’db.LCFAOD.updateMany({“Variant.Amino Acid Change”:“p.Ala161Thr”},{$set: {“Variant.Variant (cDNA”:“c.481G>A”}})\nMongoServerError: Cannot create field ‘Variant (cDNA’ in element {Variant: [ { Gene: (continues on)I am hoping to automate this once I figure out the correct way to call them, so I don’t want to manually use the Variant.0 method because it may not always be the first array. Was hoping to have queries tell it which array to update.Thank you!",
"username": "HRichbourg"
},
{
"code": "{ arrayFilters : [ { Filter_Identifier : Filter_Condition } ] } \ndb.LCFAOD.updateMany( { \"Variant.Amino Acid Change\" : \"p.Ala161Thr\" } ,\n { \"$set\" : {\n \"Variant.$[has_variant_c].Predicted cDNA\" : \"c.481G>A\"\n } } ,\n { arrayFilters : [\n {\n \"has_variant_c\" : { \"Variant (cDNA)\" : \"c.?\" }\n }\n ] }\n)\n",
"text": "When specifying arrayFilters: (not arrayFilter:{} like you wrote), you will see in the documentation that the syntax isThe Filter_Identifier is the name you have to use withing your $set.The following untested code should be bring you closer your goal.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you very much for your help and time. I do not quite understand the ‘has_variant_c’ component, but I see what you mean regarding the square brackets vs curly.When I try your suggestion, I get: [MongoServerError:] Error parsing array filter :: caused by :: The top-level field name must be an alphanumeric string beginning with a lowercase letter, found ‘has_variant_c’.I have gotten this error with other options I have tried. Could you possibly explain the has_variant_c so I can further trouble shoot using your suggestion?Thank you again!\nHeather",
"username": "HRichbourg"
},
{
"code": "db.LCFAOD.updateMany( { \"Variant.Amino Acid Change\" : \"p.Ala161Thr\" } ,\n { \"$set\" : {\n \"Variant.$[hasVariant].Predicted cDNA\" : \"c.481G>A\"\n } } ,\n { arrayFilters : [\n {\n \"hasVariant.Variant (cDNA)\" : \"c.?\"\n }\n ] }\n)\n",
"text": "If you look at the documentation I share you will see that has_variant_c is my filter <identifier>.You did not share any sample documents so it is very hard to supply a tested solution, so it is very hard to see what I missed.Try with this variant where I replaced has_variant_c to hasVariant (it looks like underscore is not liked). Also read my own document link to fix the filter specification.I still do not know if it does the right thing, from the lack of sample documents to test, but at least there is no error.",
"username": "steevej"
},
{
"code": "<identifier>",
"text": "And when I read my the documentation I shared I saw:The <identifier> must begin with a lowercase letter and contain only alphanumeric characters.Shame on me in trying to make it readable by using _.",
"username": "steevej"
},
{
"code": " \"Variant\": [\n {\n \"Gene\": \"ACADVL\",\n \"Genomic Coordinate GRCh37\": \"17:g.7124860?\",\n \"Genomic Coordinate GRCh38\": \"17:g.7221541?\",\n \"Transcript\": \"NM_000018.4\",\n \"Variant (cDNA)\": \"c.?\",\n \"Start Position\": \"481\",\n \"Variant Location\": \"Exon 7\",\n \"HGVS\": \"NM_000018.4:c.?\",\n \"Effect Type\": \"Missense\",\n \"Amino Acid Change\": \"p.Ala161Thr\",\n \"Predicted ACMG Call\": \"likely_pathogenic\",\n \"GNOMAD Frequency\": \"NA\"\n },\n {\n \"Gene\": \"ACADVL\",\n \"Genomic Coordinate GRCh37\": \"17:g.7125591?\",\n \"Genomic Coordinate GRCh38\": \"17:g.7222272?\",\n \"Transcript\": \"NM_000018.4\",\n \"Variant (cDNA)\": \"c.?\",\n \"Start Position\": \"848\",\n \"Variant Location\": \"Exon 9\",\n \"HGVS\": \"NM_000018.4:c.?\",\n \"Amino Acid Change\": \"p.Val283Ala\",\n \"Predicted ACMG Call\": \"likely_pathogenic\"\n }\n ]\ndb.LCFAOD.updateMany( {\n \"Variant.Amino Acid Change\": \"p.Ala161Thr\"\n},\n{\n \"$set\": {\n \"Variant.$[hasVariant].Predicted cDNA\": \"c.481G>A\"\n }\n},\n{ arrayFilters : [\n {\n \"hasVariant.Variant (cDNA)\": \"c.?\",\n \"hasVariant.Amino Acid Change\": \"p.Ala161Thr\"\n }\n }\n]\n})\ndb.LCFAOD.updateMany( {\n \"Variant.Amino Acid Change\": \"p.Ala161Thr\"\n},\n{\n \"$set\": {\n \"Variant.$[hasVariant].Predicted cDNA\": \"c.481G>A\"\n }\n},\n{ arrayFilters : [\n {\n \"hasVariant.Variant (cDNA)\": \"c.?\"\n },\n {\n \"hasVariant.Amino Acid Change\": \"p.Ala161Thr\"\n }\n ]\n}\n)\n",
"text": "Apologies; I did read the documentation but am not quite understanding it well enough to assume how to use the ‘has’ component.A sample document is below:Your suggestion worked, but unfortunately for me it adds the Predicted cDNA field to each array item (This is because there are two variants with c.? but different amino acid outcomes. I tried editing your script to include the additional filter so that it is only added to array items that include both cDNA \"c.?’ and amino acid change of “p.Ala161Thr”. However, it seems I cannot use hasVariant twice, and am getting a clone error when playing with the code. I still do not quite understand the different between hasVariant vs Variant.Variant (cDNA).Error: clone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:……)} could not be cloned.MongoServerError: Found multiple array filters with the same top-level field name hasVariantAgain, appreciate the help. I am quite new to Mongo",
"username": "HRichbourg"
},
{
"code": "{ arrayFilters : [ hasVariant : { $elemMatch : { \n \"Variant (cDNA)\" : \"c.?\",\n \"Amino Acid Change\" : \"p.Ala161Thr\"\n} } ] }\n",
"text": "Try",
"username": "steevej"
},
{
"code": "db.LCFAOD.updateMany( \n{ \"Variant.Amino Acid Change\" : \"p.Ala161Thr\" } , \n{ $set : { \"Variant.$[hasVariant].Predicted cDNA\" : \"c.481G>A\" } } ,\n{ arrayFilters : [ hasVariant : { $elemMatch : { \"Variant (cDNA)\" : \"c.?\", \"Amino Acid Change\" : \"p.Ala161Thr\"} } ] }\n} )\nncaught exception: SyntaxError: unexpected token: identifier :db.LCFAOD.updateMany(\n{ \"Variant.Amino Acid Change\" : \"p.Ala161Thr\"}, \n{ $set: { \"Variant.$[g].Predicted cDNA\" : \"c.481G>A\" } }, \n{ arrayFilters: [ { “g.Variant (cDNA)\" : \"c.?\", “g.Amino Acid Change\" : \"p.Ala161Thr\" } ] }\n)\nuncaught exception: SyntaxError: illegal character :",
"text": "Thank you for an additional suggestion. I moved to the terminal because mongoDB compass was giving me a weird line clone error.I updated the code to:That gives this error:\nncaught exception: SyntaxError: unexpected token: identifier :I was trying additional options and tried something similar to this help page, but get a similar error.And I get this error: uncaught exception: SyntaxError: illegal character :",
"username": "HRichbourg"
},
{
"code": "{ arrayFilters : [ hasVariant : { $elemMatch : { \"Variant (cDNA)\" : \"c.?\", \"Amino Acid Change\" : \"p.Ala161Thr\"} } ] }“g.Variant (cDNA)\" : \"c.?\", “",
"text": "In{ arrayFilters : [ hasVariant : { $elemMatch : { \"Variant (cDNA)\" : \"c.?\", \"Amino Acid Change\" : \"p.Ala161Thr\"} } ] }arrays are a list of simple values, arrays or objects. With hasVariant: you are defining an object so you are missing curly braces arround the field name hasVariant and its value.In“g.Variant (cDNA)\" : \"c.?\", “you have wrong quotes at the start and end.",
"username": "steevej"
},
{
"code": "db.LCFAOD.updateMany( \n {\"Variant.Amino Acid Change\": \"p.Ala161Thr\"},\n{\"$set\": {\"Variant.$[g].Predicted cDNA\": \"c.481G>A\"}\n},\n{ arrayFilters : [\n{\n \"g.Variant (cDNA)\": \"c.?\",\n \"g.Amino Acid Change\": \"p.Ala161Thr\"\n}\n]})\n",
"text": "Oh thank you so much. Wow, it is always the little things. It worked with this code:Again, thank you so much for the persistent help. Greatly appreciated.H",
"username": "HRichbourg"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Automating updates to specific array items within documents | 2022-06-23T21:12:11.995Z | Automating updates to specific array items within documents | 3,222 |
[]
| [
{
"code": "",
"text": "\nScreen Shot 2022-07-03 at 09.47.291692×955 194 KB\nHello, I am getting the above error after setting up the free tier Mongo Atlas cluster and trying to use the db for my node js app. I am really stuck. The app is complete and I am in the process of deploying to heroku. Any help is highly appreciated. Thanks",
"username": "Poon_D"
},
{
"code": "",
"text": "Have you setup the network security of your cluster to give access from your application server?",
"username": "steevej"
},
{
"code": "",
"text": "Hi, thank you for you’re reply. First I whitelisted my ip. Didn’t work. Then I allowed access from anywhere(0.0.0.0/0) still same error. Is this what you are referring to or is there something else I should be doing?",
"username": "Poon_D"
},
{
"code": "",
"text": "What do you get when trying to go at http://portquiz.net:27017 ?",
"username": "steevej"
},
{
"code": "",
"text": "Hey @steevej, thanks again. This was weird one. Really not sure what caused it. After googling for three days and getting nothing, I, out of nowhere, connected my laptop to my phone’s cellular data and tada… it worked. No changes to my code or atlas config. I managed to get it working, pushed everything to heroku and my app is actually live. If you don’t mind, check it out @ https://tranquil-grand-canyon-92474.herokuapp.com Thanks again. BTW if I go to the url in your comment, it just times out. Don’t know why.",
"username": "Poon_D"
},
{
"code": "",
"text": "BTW if I go to the url in your comment, it just times out. Don’t know why.Because you wired network does not allow out traffic on the given port by firewall or VPN. That’s the purpose of the link, to test if outgoing traffic is allowed or not. That is why using cell’s hotspot it worked, it is not restricted.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongooseServerSelectionError | 2022-07-03T18:01:57.379Z | MongooseServerSelectionError | 2,270 |
null | [
"aggregation",
"node-js",
"mongoose-odm"
]
| [
{
"code": " question = {\n _id: \"00000\",\n question: \"\",\n category: \"gaming\"\n }\n \n answer = {\n _id: \"xyz\",\n answer: \"some answer\",\n questionId: \"00000\",\n userID: \"11111\"\n }\n \n profile = {\n _id: \"11111\",\n name: \"alex\",\n }\n {\n _id: \"00000\",\n question: \"\",\n category: \"gaming\",\n usersAnswered: [\n { _id: \"xyz\",\n answer: \"some answer\",\n questionId: \"00000\",\n userID: {\n _id: \"11111\",\n name: \"alex\",\n } \n }\n ]\n }\nuserID question\n .aggregate([\n {\n $match: {\n category: \"gaming\",\n },\n },\n {\n $lookup: {\n from: \"answers\",\n localField: \"_id\",\n foreignField: \"questionID\",\n as: \"usersAnswered\",\n },\n },\n ])\nusersAnswered",
"text": "question, answer and profile modalswhat i want to achieve:i want to populate the userID field which we get from $lookupwhat should be done to further populate the userID field using $lookup i guess.Just a bit new to aggregation framework, been using mongoose to get the job done but then this situation came up and had to integrate a field from another collection in question model i.e. usersAnswered so i am learning aggregation framework it gets a bit confusing.",
"username": "Ali_Abyer"
},
{
"code": "$lookup: {\n from: \"profiles\",\n localField: \"usersAnswered.userID\",\n foreignField: \"_id\",\n as: \"profiles\",\n }\n{ _id: '00000',\n question: '',\n category: 'gaming',\n usersAnswered: \n [ { _id: 'xyz',\n answer: 'some answer',\n questionId: '00000',\n userID: '11111' },\n { _id: 0,\n answer: 'some answer',\n questionId: '00000',\n userID: '11111' },\n { _id: 1,\n answer: 'some answer',\n questionId: '00000',\n userID: 222 } ],\n profiles: [ { _id: 222, name: 'alex' }, { _id: '11111', name: 'alex' } ] }\n",
"text": "The first thing is that you had foreignField:questionID rather than foreignField:questionId which was producing an empty usersAnswered.The $lookup is smart enough to it over arrays so the next stage is simplyThis will not be in the exact format you want as the profile data will not be inside the usersAnswered array but will be at the root. Something like (using extra data to show features) :As you can see user _id:11111 answered the question twice (may not be possible in your use-case) but it’s profile was produced only once.To get your final result you may have a $set stage that $map usersAnswered using $filter on profiles. But personally, I prefer to do this kind of data cosmetic in the application layer to offload as much as possible the data access layer.",
"username": "steevej"
}
]
| How to populate a nested field after lookup using lookup mongodb? | 2022-07-01T07:21:34.764Z | How to populate a nested field after lookup using lookup mongodb? | 6,225 |
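Editor's note on the thread above: the $set stage the answer alludes to (mapping usersAnswered and filtering profiles to nest each profile under its answer) could look roughly like this; field names follow the thread, and $arrayElemAt is used to unwrap the single matching profile.

    {
      $set: {
        usersAnswered: {
          $map: {
            input: "$usersAnswered",
            as: "a",
            in: {
              $mergeObjects: [
                "$$a",
                { userID: { $arrayElemAt: [
                    { $filter: {
                        input: "$profiles",
                        as: "p",
                        cond: { $eq: [ "$$p._id", "$$a.userID" ] }
                    } }, 0 ] } }
              ]
            }
          }
        }
      }
    },
    { $unset: "profiles" }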
null | [
"connecting"
]
| [
{
"code": "version: \"3.9\"\nservices:\n database:\n image : 'mongo'\n container_name: 'deibadb-container'\n environment:\n - PUID=1000\n - PGID=1000\n volumes:\n - ./database:/data/db\n entrypoint: [ \"/usr/bin/mongod\", \"--bind_ip_all\", \"--replSet\", \"rs0\" ]\n ports:\n - 27017:27017\n restart: unless-stopped\n \n mongo-express:\n image: mongo-express:0.54\n ports:\n - \"8081:8081\"\n environment:\n ME_CONFIG_MONGODB_SERVER: database\n depends_on:\n - database\n\n typesense:\n image: typesense/typesense:0.21.0\n ports:\n - \"8108:8108\"\n environment:\n TYPESENSE_API_KEY: verbetes0key\n TYPESENSE_DATA_DIR: /data/typesense\n TYPESENSE_ENABLE_CORS: \"true\"\n volumes:\n - ./typesense:/data/typesense\n{\n\t\"set\" : \"rs0\",\n\t\"date\" : ISODate(\"2022-06-28T20:41:24.776Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(1),\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 1,\n\t\"writeMajorityCount\" : 1,\n\t\"votingMembersCount\" : 1,\n\t\"writableVotingMembersCount\" : 1,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1656448876, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2022-06-28T20:41:16.041Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1656448876, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1656448876, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1656448876, 1),\n\t\t\t\"t\" : NumberLong(1)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2022-06-28T20:41:16.041Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2022-06-28T20:41:16.041Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1656448828, 1),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"electionTimeout\",\n\t\t\"lastElectionDate\" : ISODate(\"2022-06-27T20:47:58.345Z\"),\n\t\t\"electionTerm\" : NumberLong(1),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1656362878, 1),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1656362878, 1),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"numVotesNeeded\" : 1,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"newTermStartDate\" : ISODate(\"2022-06-27T20:47:58.368Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2022-06-27T20:47:58.379Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"70b5125bd4e1:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 86088,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1656448876, 1),\n\t\t\t\t\"t\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-06-28T20:41:16Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2022-06-28T20:41:16.041Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2022-06-28T20:41:16.041Z\"),\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1656362878, 2),\n\t\t\t\"electionDate\" : ISODate(\"2022-06-27T20:47:58Z\"),\n\t\t\t\"configVersion\" : 3,\n\t\t\t\"configTerm\" : 1,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1656448876, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" 
: NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1656448876, 1)\n}\n",
"text": "Hi guys, I’m having a problem when trying to connect to replica set. I’m trying to connect with compass and with python code, but nothing happens.My docker-compose file:My rs.status():When I trying from compass, a loading page still to trying connect. When trying from python code, I get:pymongo.errors.ServerSelectionTimeoutError: Could not reach any servers in [(‘70b5125bd4e1’, 27017)]. Replica set is configured with internal hostnames or IPs?, Timeout: 30s, Topology Description: <TopologyDescription id: 62bc58af5e1aafde51de4af1, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (‘70b5125bd4e1’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘70b5125bd4e1:27017: [Errno -3] Temporary failure in name resolution’)>]>Could help me?",
"username": "Anderson_Mendes_de_Almeida"
},
{
"code": "",
"text": "I get a solution to my issue. I followed this guide:cfg = rs.conf()\ncfg.members[0].host = “mongodb0.example.net:27017”\nrs.reconfig(cfg)Available hereBasically was needed to update the host name of my replica set member.",
"username": "Anderson_Mendes_de_Almeida"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Having a problem when trying to connect to replica set with compass and with python code | 2022-06-29T13:59:22.412Z | Having a problem when trying to connect to replica set with compass and with python code | 3,745 |
null | [
"monitoring"
]
| [
{
"code": "",
"text": "Good Afternoon everybodyWe have a request to provide CPU metrics to the business users so they can glance at the database health. We are trying to do this by exporting metrics from Atlas and sending them to Grafana via Prometheus, We don’t want to give access to the end-users to Atlas monitoring because it does not uses AD access, so we will end maintaining local accounts.That said we have not been able to determine the formula for how the CPU utilization gets calculated, in other words despite our best effort the chart we see in Grafana is completely different from what we see in Altas monitoring.We asked Mongo support and their answer was very vague, we will appreciate any help, also if you know a query that could provide this information that will also help.Thank you in advanceRafael Orta",
"username": "Rafael_Orta"
},
{
"code": "",
"text": "Hi Rafael,Are you aware that you can federate your access to the MongoDB Atlas console/control plane with an identity provider via SAML for SSO? This would let you give the users console acccess.In any event I recommend giving the business users access to more database-specific metric in conjunction with hardware metrics: the latter alone is a bit like looking at shadows. Both should be available via the Atlas admin API and prom endpoints-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Thank you Andrew, if I could allow the users get to the MongoDB Atlas console using SSO that would be ideal, where could I find more information about how to do this?.",
"username": "Rafael_Orta"
},
{
"code": "",
"text": "Totally, check out https://www.mongodb.com/docs/atlas/security/federated-authentication/",
"username": "Andrew_Davidson"
}
]
| MongoDB Atlas need query or formula to calculate the CPU utilization | 2022-06-28T18:16:39.436Z | MongoDB Atlas need query or formula to calculate the CPU utilization | 3,008 |
null | [
"golang",
"containers"
]
| [
{
"code": "",
"text": "At first I would like to apologize with my bad English.\nSo I use go with docker , and mongo with server ubuntu.\nWhen on production , it’s look like work normal but when got a lot of traffic or something I got error like this “server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: IP:PORT, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp IP:PORT: i/o timeout }, ] }”and today I see in my mongo.log\n“msg”:“Error sending response to client. Ending connection from remote”,“attr”:{“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Connection reset by peer”},“remote”:”IP:50700”,“connectionId”:182673}}“c”:“COMMAND”, “id”:51803, “ctx”:“conn254054”,“msg”:“Slow query”,“attr”:{“type”:“command”,“ns”“admin.$cmd”,“command”:{“hello”:1,“helloOk”:true,“topologyVersion”:{“processId”:{\"$oid\":“62b84b3d8070fc21d0f35ab6”},“counter”:0},“maxAwaitTimeMS”:10000,\"$db\":“admin”},“numYields”:0,“ok”:0,“errMsg”:“operation was interrupted”,“errName”:Disconnect\",“errCode”:279,“reslen”:117,“locks”:{},“remote”:”IP:43521”,“protocol”:“op_msg”,“durationMillis”:114}}",
"username": "Re3v3s_N_A"
},
{
"code": "",
"text": "In my code it’s look like this\nI declare 1 function for connection\n1 function for call collection\nand If i have to insert i alway use this\ndb , err := functionConnection()\nCollection := functionCallCollection()after insert I declare like this\ndefer db.Disconnect(ctx1)",
"username": "Re3v3s_N_A"
},
{
"code": "limit",
"text": "“msg”:“Error sending response to client. Ending connection from remote”\n“errmsg”:“Connection reset by peer”}\n…\n“msg”:“Slow query”,\n“maxAwaitTimeMS”:10000,\n“errMsg”:“operation was interrupted”It is possible your query results in large data but it could not return that data in the required timeslot of 10 seconds and your client program terminated the connection because of this timeout.try that same query but use limit and see if it will still give the same error. then try the same by setting a longer timeout value.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thx Sir , but i think it’s wrong with my code, cuz I connect for multiple times",
"username": "Re3v3s_N_A"
},
{
"code": "deferasyncawaitawaitdb , err := await functionConnection()\ndefer db.Disconnect(ctx1)\nCollection := await functionCallCollection()\nawait Collection.insert(data)\nasync functionConnection(){ return await mongo.Connect()}\nasync functionCallCollection(){ return await db.getCollection()}\n",
"text": "It is then possible you have goroutine/waitGroup or async/await problem.Initially, I thought it would be the size of the data to be returned. Now I noticed you moved your db logic to functions and use defer to close the connection.I am not good at go, but most programming languages behave the same: if you run a function and it has an async feature in it, the function will still return without waiting for that operation to finish, but its return value will include a structure to indicate job is not finished yet.outside that function, you need to process this structure accordingly or the program just exits and your process will be cut short. you need to define your functions as async, and use await in the calling lines.the following is just a scratch to give an idea. I might have overused await so check the language and driver manuals.",
"username": "Yilmaz_Durmaz"
}
]
| I am using golang + mongodb and got error like this | 2022-07-02T16:56:00.098Z | I am using golang + mongodb and got error like this | 4,234 |
null | [
"replication"
]
| [
{
"code": "",
"text": "Hi, we have a customer whose internet connection might fall at any time. We’ve decided to do a local installation, although we would like make backups of their database on our own, so if they local server goes down, they could keep working with our replication on our server. But we would prioritize working on their local one.Expected behaviour :What options do we have to achieve this behaviour ?",
"username": "SuredaKuara"
},
{
"code": "n",
"text": "Welcome to the MongoDB Community @SuredaKuara !Replica sets require a strict majority of voting members (n/2+1) available in order to elect or sustain a primary, so you will need a minimum of 3 replica set members for a fault tolerant deployment.If two of those are local and have a higher priority than the remote member, the primary will always be local.The suggested configuration would look like:This differs from your expected behaviour in one regard: there is a second local member so failover will not be remote. There is still a remote replica set member for offsite data redundancy.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X, thanks for your response.But if I apply this configuration, would member3 (remote) fail if member1 and member2 went down? Maybe it is what you exposed in your last paragraph but I can not understand it at all.Regars,\nAndreu",
"username": "SuredaKuara"
},
{
"code": "",
"text": "I think your and customer’s systems are in two different geographic locations.Have you checked this page?\nReplica Sets Distributed Across Two or More Data Centers — MongoDB Manualyou have to use at least 3 members the way @Stennie_X shortly explained. but there is a problem with this 2+1 setup that if 2 of the local go down, your remote will be read-only.You can have a 3-center setup with extra hardware in use. Please check that link and see if you can make sense of your case.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz, that is what i meant.Thanks for your responses, we will discuss if it is a good option the 2+1 approach. Just one more question, this configuration would work with a Arbiter Node, wouldn’t it ?",
"username": "SuredaKuara"
},
{
"code": "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=myRS",
"text": "Arbiters do not hold data, they are there to make a majority in voting.In any case, you will need a 3rd member with its own IP address/port that both the other members can access, whether in a real or virtual machine in your or the customer’s location. that is why the connection string has addresses of all members: mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=myRSArbiter is recommended on the customer’s side in the manual. I haven’t tried it myself but I think if the arbiter survives it will vote for the remote member to become primary. but if both members are lost in local, remote will become read-only.also because the connection is possibly slow between centers, the writes on a single machine in the local might get lost forever. extra costs for another member might become less important compared to data loss. so keep this in mind in your next meeting.check this one (if you haven’t) on how to make an arbiter member and add to the replica set\nAdd an Arbiter to Replica Set — MongoDB Manual",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "@Yilmaz_Durmaz many thanks for your time, we will take a look.",
"username": "SuredaKuara"
},
{
"code": "",
"text": "Hi @SuredaKuara,I would avoid using arbiters if possible. For more elaboration on the downsides, see Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie.But if I apply this configuration, would member3 (remote) fail if member1 and member2 went down?You need at least two healthy voting members to elect or maintain a primary if there are three configured voting members in your replica set. If any two members of your replica set are down, a healthy third member will be a readonly secondary.This avoids data inconsistency scenarios where a network partition could otherwise result in more than one primary. For example: member1 is down, the connection to the remote network is down, but member2 and member3 are healthy. If member2 and member3 could both decide to be primaries and accept writes, the data in these replica set members would diverge. The strict majority requirement ensures that a primary will only be present in a partition with a majority of voting members.Since your original description had all requests originating via a local web server with unreliable internet I assumed you would prefer local availability, but you can plan your deployment to suit your failover and fault tolerance requirements.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Replica Set of Only 2 Nodes | 2022-07-05T11:25:48.365Z | Replica Set of Only 2 Nodes | 3,614 |
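Editor's note on the thread above: the arrangement described in the accepted answer (two local data-bearing members preferred over the remote one) is expressed through member priorities in the replica set configuration. A minimal mongosh sketch; the member indexes and priority values are examples only.

    cfg = rs.conf()
    cfg.members[0].priority = 2     // local member 1: preferred primary
    cfg.members[1].priority = 2     // local member 2
    cfg.members[2].priority = 0.5   // remote member: electable, but least preferred
    rs.reconfig(cfg)
    // Setting the remote member's priority to 0 instead would prevent it from ever
    // becoming primary while still letting it vote and hold a full copy of the data.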